From YouTube: Kubernetes SIG Service Catalog 2017-06-26
Description
- Beta criteria
- Testing / refactoring
- Walkthrough differentiation between 1.6 and 1.7
A: I think that other integrations, with other projects or with another API someone is interested in integrating, are something we should discuss after we're at beta level. My own personal, subjective opinion of what beta-level Open Service Broker API integration means is that we implement the use cases in the API correctly, with the exception of route services and volume services, which are really Cloud Foundry-specific and not straightforward at all to implement in Kubernetes. In addition to that, we should implement the error-checking semantics of the spec correctly. I've been talking with a lot of folks about the different errors to check and what the error handling should be, and, as many of you are aware, there's this orphans part of the Open Service Broker API spec that deals with error handling, but it's not especially clear. So, as part of getting catalog to beta, I think we need to get clarification on that part of the spec and then implement things correctly.
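The orphan-mitigation idea in the spec can be sketched in Go. This is an illustrative sketch, not the actual service-catalog controller: the function signatures, the status-code thresholds, and the `ErrTimeout` sentinel are assumptions standing in for the real broker client. The point is the control flow the spec implies: an unambiguous failure needs no cleanup, while a timeout or 5xx leaves the broker's state unknown and so must be followed by a deprovision.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrTimeout marks a provision call whose outcome is unknown.
var ErrTimeout = errors.New("request timed out")

// provisionWithMitigation provisions an instance and, when the outcome is
// ambiguous (a timeout or a 5xx), issues a deprovision so no orphaned
// instance is left behind at the broker.
func provisionWithMitigation(
	provision func(id string) (status int, err error),
	deprovision func(id string) error,
	instanceID string,
) error {
	status, err := provision(instanceID)
	if err == nil && status >= 200 && status < 300 {
		return nil // unambiguous success: the instance exists
	}
	ambiguous := errors.Is(err, ErrTimeout) || status >= 500
	if !ambiguous {
		// Unambiguous failure (e.g. 400 or 409): the broker created
		// nothing, so there is nothing to clean up.
		return fmt.Errorf("provision failed (status %d): %v", status, err)
	}
	// Ambiguous outcome: the broker may or may not hold the instance.
	if derr := deprovision(instanceID); derr != nil {
		return fmt.Errorf("orphan mitigation failed: %v (original error: %v)", derr, err)
	}
	return fmt.Errorf("provision outcome ambiguous, orphan mitigated (status %d): %v", status, err)
}

func main() {
	err := provisionWithMitigation(
		func(string) (int, error) { return 0, ErrTimeout },
		func(id string) error { fmt.Println("deprovisioning", id); return nil },
		"instance-1",
	)
	fmt.Println(err)
}
```

The closures stand in for a broker client so the sketch stays self-contained; the real controller would drive this from its reconciliation loop.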
A: In addition to all that, the catalog and controller need to be robust in the face of misbehaving brokers; a busted broker should not bust the catalog, I guess, is how I would articulate that. And then there are some other odds and ends that I think we'll uncover as we discuss this in GitHub. What I propose is that anybody who is interested in working to establish what the beta criteria are should audit the list of issues that are currently in the 0.1.0 bucket.
A: I will say in advance, because I had a couple of mismatches with people in the last week, that after working on Kubernetes for about three years, GitHub notifications are basically meaningless for me personally, even with heavy filtering. So if there is a particular thing that you would like me to look at, please feel free to email me at the email address on my GitHub account, which is just amore@redhat.com, and do not assume that I have seen something just because you have tagged me in GitHub.
A: I'll keep up to the best of my ability. One other thing: it would be really great to have a set of use-case documentation for what the catalog actually implements. We have some of this, but it's gotten a little bit stale. So if anybody is interested in attending to that stuff, I'd love to have volunteers for it. Now, I know that you were also interested in this topic, so I'll give you the floor for a second and you can say whether you think I've represented it correctly.
B: I think that everything as stated is correct. I just looked at some use cases in the source code and found some holes there: in some places we just assume that there is no Secret, for example, and just try to create one, or, if we keep it like this, we assume that it belongs to us, and things like that. So we need to be more careful, I guess, not just in dealing with the responses from the broker, but also with our own stuff.
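A minimal sketch of the defensive check being described, assuming a label-based ownership convention (the label key, the controller name, and the in-memory map standing in for the Kubernetes API are illustrative, not the real service-catalog code): before writing a binding's credentials Secret, check whether a Secret with that name already exists and refuse to overwrite one the controller does not own.

```go
package main

import "fmt"

const (
	ownerLabel     = "owned-by"        // illustrative label key, not the real annotation
	controllerName = "service-catalog" // illustrative owner value
)

// secret is a stand-in for a Kubernetes Secret; the map store below is a
// stand-in for the API server.
type secret struct {
	labels map[string]string
	data   map[string][]byte
}

// writeBindingSecret creates or updates a binding's credentials Secret, but
// never clobbers a pre-existing Secret that the controller does not own.
func writeBindingSecret(store map[string]*secret, name string, data map[string][]byte) error {
	if existing, found := store[name]; found {
		if existing.labels[ownerLabel] != controllerName {
			return fmt.Errorf("secret %q already exists and is not owned by %s", name, controllerName)
		}
	}
	store[name] = &secret{
		labels: map[string]string{ownerLabel: controllerName},
		data:   data,
	}
	return nil
}

func main() {
	store := map[string]*secret{
		"user-made": {labels: map[string]string{}},
	}
	fmt.Println(writeBindingSecret(store, "binding-creds", map[string][]byte{"password": []byte("s3cr3t")}))
	fmt.Println(writeBindingSecret(store, "user-made", nil))
}
```

The real controller would express ownership with an owner reference or annotation on the Secret; the shape of the check is the same.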
C: That will need to have everything represented in GitHub, yeah. Well, the only thing I want to mention is that I really wish people would take the opportunity to go through and look at what we're doing from a usability perspective, to make sure that the user interaction model we're designing here makes sense: that it's usable, it's easy, and it covers a lot of the use cases that we don't necessarily talk about often. For example, multiple bindings to the same application, or what happens when you go to upgrade your application.
C: What you do from the Kubernetes side can impact us from the service broker side. Does it cause a new binding to be created, and when do you even need one to be, stuff like that. I want to work through those scenarios; we should cover them, because some of them may expose design flaws, and once you go to beta it's going to be a lot harder to change those design flaws.
D: Right, a question. I don't know if it's related to the current topic or not, but the email that Aaron sent out about only requiring two LGTMs for the next four weeks is an interesting choice, because we presumably have a different policy normally, because we care about quality and everything else, and you're basically saying, hey, we can give on that just because somebody is on vacation or something. If two LGTMs is good enough for the next four weeks, why isn't it good enough in general?
D: Right, but I mean, this group had adopted a particular stance, and we either need to change it and say this is the new one, or not. Doing a temporary "hey, for a while you can put things in with less oversight" seems like a strange thing. Once the four weeks are gone, are we changing it back? I'm just making the point that you either change it or stick to it, but don't do this half-assed thing.
A: So, I'd say that these issues are generally ones that are strange attractors. I totally hear what you're saying, Martin, and I don't think that anybody intends to cut Google out in any way. If people would like to review pull requests, I suggest that everybody try to review pull requests on a daily basis; that's what I try to do. Any objections to moving on? I know we have a fair amount of other stuff on the agenda for today.
E: So, the follow-up to the first sub-bullet on focus for beta, which reads "correctly implement the OSB spec". The second is also important here, which, as a reminder, is covering non-success scenarios. I'm going to be focusing a lot on tests around those edge cases and tests around correctly implementing the OSB spec. It's something that you, Paul, have also focused on in the last week or maybe a little more as well. So there are some GitHub issues; I will be submitting some more, and you'll also see some PRs around this work.
E: The test refactoring work is intended to shorten the existing tests we have: to refactor out some of the checks we have and some of the setup logic we have, and make it generic enough to use across the suite of tests, or at least a sub-suite of tests. And then the second piece, of course, is to do things like Neil said: check to make sure that we can handle the case where the Secret already exists, or something similar. Those are what I have called edge-case testing.
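The kind of refactor being described might look like the sketch below: the shared check logic pulled into one helper, with the individual edge cases driven from a table. The `brokerResponse` shape and the `classify` rules are illustrative assumptions, not the real test fixtures.

```go
package main

import "fmt"

// brokerResponse is an illustrative stand-in for a fake broker's reply.
type brokerResponse struct {
	status int
	body   string
}

// classify is the shared check logic that each table entry exercises:
// 2xx is success, 5xx should be retried, anything else is terminal.
func classify(r brokerResponse) string {
	switch {
	case r.status >= 200 && r.status < 300:
		return "success"
	case r.status >= 500:
		return "retriable"
	default:
		return "terminal"
	}
}

func main() {
	// One generic driver replaces a copy of the setup in every test.
	cases := []struct {
		name string
		resp brokerResponse
		want string
	}{
		{"created", brokerResponse{201, "{}"}, "success"},
		{"broker down", brokerResponse{503, ""}, "retriable"},
		{"conflict", brokerResponse{409, ""}, "terminal"},
	}
	for _, tc := range cases {
		if got := classify(tc.resp); got != tc.want {
			fmt.Printf("FAIL %s: got %s, want %s\n", tc.name, got, tc.want)
		} else {
			fmt.Printf("ok   %s\n", tc.name)
		}
	}
}
```

In the real suite the driver would be a `testing.T` table test; the payoff is that adding a new edge case is one table row, not one more copied test function.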
E: So, as I said, I'll create issues for as many things as I can think of right now, with more issues to come, of course. If someone is looking to jump into the code base, this is where I will be focusing, and, as I said, this is where I would like a lot of the group's focus to be as well, so that we can push towards beta with as much quality, and as quickly, as we can.
E: So, if you've got questions, or you want to jump in and want to know where to start, ping me and I can talk to you about that. And then, of course, reviews: there will be pull requests coming in at a decent rate, so reviews are going to be more than welcome, and that goes for anybody. If you want to review pull requests, and you have the time or bandwidth to do so, please do. That is it for that point, so I'll hand it back to you, Paul.
A: Oops, my candle went out in the first foot, and now I'm just jumping hurdles in the dark. So, in order to make it possible to rebase on a regular basis, we probably need to work with the API machinery group to help move in that direction. I don't know that we can wait for that to exist before we move on to the next machinery, so I think it would be a good idea, probably once 1.7 is released, to finish the existing pull request.
A
That
rebase
is
on
to
what
I
prefer
to
be
like
on
to
the
final
version
of
the
API
machinery.
That's
in
one-seven-zero,
just
from
a
general
standpoint
of
that's
probably
going
to
make
it
easier
to
assess
what
the
differences
are
when
we
next
blow
the
machinery
in
but
I'm
totally
in
agreement
that
we
should
be
doing
regularly.
A: Does anybody disagree that, probably after... well, let me phrase it this way. I propose that we do the next rebase after 1.7 goes out, and we use the 1.7 set of machinery. I think the current pull request can stay as it is; there probably won't be a lot of rebasing beyond the rebase that has to be done to get that pull request in, and once that happens and we have a working pull, I'm happy to merge it. Does anybody disagree with that?
A: The insecure option is dropped in that version of the machinery. I think that may change the walkthrough a little bit, and it will definitely change the chart slightly, but I think that's okay, as long as we coordinate and ensure that there isn't something left in the walkthrough that you can't do after the rebase. That's probably the biggest thing. Can you think of anything else that is big like that?
A: Actually, there is one other fairly big thing, but I don't think it affects us, which is that, since the 1.7 machinery uses protobuf serialization and etcd3 by default, you cannot express a difference in protobuf between a field that's an empty array and a field that's an array that's not set. I think there were a couple of places where this had meaning in the core API; I think pod security policy was one of those.
A: I don't think it has any effect on our API, but it's something that is worth considering. I think the only array type that we actually have in the API is the conditions array, and I don't think that an empty array versus nil has any meaning for those fields, because they're derived from what is present in the array. So nil versus empty is not a big distinction there. Can anybody else think of anything?
A: I'm hearing a no, so, I'll tell you what, it's good to have that put to bed. I think we still have an issue open for determining the cadence that we do it on; I guess we didn't really talk about that. So, to me, ideally we would probably want to pick up every point release of the Kubernetes API machinery, so that we're picking up, say, each point release while we're in a development cycle.
A
That
should
be
more
manageable
than
sipping
from
master,
and
it
might
be
a
little
difficult
if
we're,
depending
on
API
machinery,
changes
to
go
in.
But
at
this
point
I,
don't
think
that
we
are
in
that
position.
So
hypothetically,
if
if
there
are
easy
enough
to
consume
release,
notes-
and
we
find
that
it
can
be
an
exercise
that
doesn't
take
a
tremendous
amount
of
time,
I
be
totally
fine
with
doing
it
on
every
point
release.
What
does
everybody
else
think
about
that?
I.
A: And I do agree that there is a need for awareness that the API machinery needs to be stable. I think that if we rebase based on point releases, we would naturally get that, because I would not expect large changes going into a point release, since they have to be cherry-picked. I think that might still leave some gaps.
A: It doesn't guard against when you go from 1.7.x to 1.8.0, but it might protect us in what I would think would be the normal rebase situation. When I say normal, I mean that if we were to rebase on point releases, we would normally mostly be doing those rebases, and then, hypothetically speaking, you would expect them not to be tremendously different from the last point release, like bug fixes only, yeah.
C: And I do agree with what you guys are saying in terms of tracking master being a pain in the butt, but at the same time, if someone does have the spare cycles or spare machinery to do a nightly build against master, then you are notified immediately when they break something, and we can notify them immediately to make sure it was an intentional breakage. That can only do goodness.
A: Yeah, that's a good point. I do want to say, for folks that may not have a lot of background on the context for this discussion, or that might be watching on the internet, just so that I'm fully representing how I feel about the subject:
A: I think that we should approach this situation with empathy, and with the knowledge that API machinery has a lot of their own challenges, and that we should be prepared to work with them to make the changes that we want to see. That actually reminds me of something I was thinking about over the weekend, which is that I think it would be really productive if we could have folks that are contributors to this repository, the incubator repository for service catalog,
A
If
we
could
have
a
couple
folks
that
that
regularly
attend
the
API
machinery
sig
any
off
sig
as
examples
I
think
those
are
two
SIG's
that
we
have
a
fairly
large
subject
matter,
overlap
with
for
one
reason
or
another,
I
think
the
API
machinery
one
is
obvious.
A
But
I'm,
just
one
person
and
I
am
very
far
from
perfect.
So
I
no
need
to
raise
your
hand
now.
But
if
someone
is
interested
in
doing
that,
I
would
love
it.
If
you'd
send
a
message
to
e
to
the
SIG's
Service
Catalog
mailing
list
and
we'll
try
to
work
that
into
future
meeting
agendas,
because
it's
just
if
there
are
way
too
many
things
going
on
in
cribben
eTI's
for
people
to
possibly
follow
them
all.
E: My understanding was that SIG PM is one of the avenues that exists for kind of what you suggested, Paul. To be more specific, it's an avenue for us to learn about the intentions of these groups that we care about, and also for us to provide feedback back to them related to our use cases. Am I correct there or not?
C: I don't think SIG PM is really the right level or the right place to have that kind of discussion, because they stay pretty high-level; it's more like, okay, in general, what features are you doing at a SIG level. If you want to have deeper conversations, like I think we're talking about here in terms of syncing up, I think it's better to talk directly to the other group, right, but...
E: It would be helpful for us, I think, not necessarily, like you said, to talk about technical details, but to raise the issue through that venue. For example: hey, we care about Secrets, and we want to give a brain dump on what our issues are and how the changes will affect us, and then use that as the avenue. That would be instead of having someone designated to just go to every meeting. So, really push on that. Paul? Yeah.
C: No, so, I still don't think PM is the right group for that, because the number of people that are there each week varies, and I really don't think they're going to get into that level of discussion. I think the best place for what you're talking about is actually the SIG Architecture group, which is in the process of being stood up; I think that might be a better place for it, and then work back directly to the group.
A: I don't think this is something that anybody in SIG PM would ever know about. I would love to be wrong, but that is just my read of that thing: those kinds of development guidance I wouldn't necessarily expect to flow through that channel. I think we can totally do that, and I don't see there being a reason not to, but at this point I expect that it will be most effective if we have folks that are actually attending those meetings. And I also know that, in the case of API machinery...
A: Let me put it this way: I think that it is difficult to get the right degree of communication on the right channels, correct? I know, well, it's my perception, that there might be some frustration within the API machinery group about certain things regarding how frequently, and in what channels, things were communicated, and I don't think there is a perfect solution to that problem. Like I said, I think that this is perhaps one of those areas where, as a community, we don't know the best way to message about those things.
F: And, I mean, to sort of add to Paul's earlier point from the SIG API machinery perspective: one of the things we feel like is going on is that everyone feels like they need to communicate, and communicate better, so everyone wants to communicate more, and it seems like a lot of messages are being drowned out. Hence the idea of having people where we need strong communication channels, as in: I'm from the API machinery SIG, and I'm here to try and help with that communication.
F: Sure, so my name is Walter Fender. I'm one of the engineers from Google on API machinery, and on Slack I'm cheftako, C-H-E-F-T-A-K-O. Please feel free to reach out if you have any questions about API machinery, want to know what's going on, why something isn't being done, or have suggestions for things we can do to help make your lives easier.
A: I'll tell you what: since it is stated as part of your day job to do that, I will look to you for future API machinery updates. So I'll give you a heads-up if we're going to put something on the agenda where I'll ask you, or feel free to just edit the agenda. If you're part of the Google Group for Service Catalog, you should have edit rights; just put something on the agenda whenever there's anything that you'd like to communicate.
E: We keep two walkthrough documents, one for 1.6 and prior and one for 1.7 and onward. I am suggesting this because there are, I believe, enough changes to the user experience after 1.7 (not necessarily technically, but enough changes to the user experience) that the documents should be different enough, and should be separated, for a new user coming in. I have a few examples, but before I talk about those I would like to hear some preliminary reactions.
B: I think it's not just the 1.6-to-1.7 release of API machinery; it's also tied to our own release, so it could be just one-point-whatever is the latest release of Service Catalog, I mean. Can we actually have two separate ones, the first being kept updated against the latest?
E: If we have changes that will require the user to do something different when they're running a Kubernetes 1.6 cluster, we should, and must, make changes to the 1.6 walkthrough. But I would like to expand on that for a second: if we do make such changes, we should also have that discussion in one of these meetings as well.
E: Yeah, that's fine. We can make that more generic decision later, but I'd even go further. Take, as an example, even the commands, the kubectl commands that someone must use: those will be different across 1.6 and 1.7, and as it stands, the walkthrough for 1.6 that we have in the repo now is pretty long. I'd rather not include even more that a user has to parse in a walkthrough, where they need to decide: this command for 1.6, this command for 1.7, and so on and so forth.
E: Right. Right now, insecure serving is turned on by default per the instructions in the walkthrough. What Neil said is that there will be no way to turn on insecure serving, or, otherwise said, to turn off secure serving. So the setup instructions for 1.6 will be different than they are right now after we upgrade our dependency on API machinery. But additionally, when using, excuse me, Service Catalog against a 1.7 cluster, so where the Kubernetes cluster version is 1.7...
C
I
understand
and
don't
refer,
this
is
pushing
back
on
having
to.
We
need
to
I'm
out
forward
back
as
I
do
with
you
having
conceptual.
If
statements
in
there
for
documentation
for
a
walkthrough,
we
just
ruin
the
user
experience
I'll
give
it
that
I
was
just
wondering
if
whether
it
makes
sense
in
this
particular
case
whether
we
make
sense
to
turn
on
security
for
1/6.
That
way,
you
can't
have
the
same
user
experience
well,.
E: I think we'll have to, but the user experience will still be different, right? Because talking to the aggregator, at least, doesn't require a user to set up a context. But also, Neal and Paul, maybe you can correct me if I'm wrong here: I don't believe you'll have to think about security, excuse me, about cert management, between kubectl and the API.
A: Yeah, I think then we'd need to differentiate 1.7 versus 1.6. I think that we should not differentiate at any finer-grained level than a major release, though, because if you add another dimension to this, you wind up with at least three walkthroughs, and at that point I would expect it to be impossible to keep them in sync.
A: All right, I'm hearing a no. Thanks for coming, everybody. Aaron, I'll be very interested to know how you liked that crazy fan I just noticed in the background, and that is about the extent of the ability I have to filter things between my brain and my mouth. All right, see you all next week, or sooner. Bye.