From YouTube: Kubernetes SIG Service Catalog 20170724
Description
Design Meeting:
- Update on PodPreset and service-catalog API server
- Background on TPR and CRD storage backends
- Kubernetes names of ServiceClass and Plan
- How to handle deprovision requests for instances with bindings
- Broker update semantics
- User provided services
A
Okay, so first up: we discussed in Architecture SIG this morning, or I guess early afternoon, the situation with PodPreset. The decision out of that is to move the settings API group wholesale to the Service Catalog API server, and that also implies that we will implement PodPreset with an initializer instead of an admission controller. For anybody that doesn't know already, there's long been a desire to have admission controllers that are not compiled into the Kubernetes core, and that's what an initializer is. Google has generously given us the resources to make that happen and to help get PodPreset up to a beta level on the Kubernetes 1.8 timeline. Well, that's the update on that. I do think that we should have, perhaps in tomorrow's design meeting, a discussion of what it means for the PodPreset integration and the catalog to go to beta, but we have a very full agenda today, so I think we can discuss that tomorrow or later in the week. Next up: Aaron, CRDs.
B
Yeah, can someone finish the notes for PodPreset? Then I will give it a shot. Okay, so I think most have seen issue 987, which has discussion on migrating from the TPR storage backend to the CRD storage backend. I'm going to share context here very briefly on why Deis built this integration.
B
So for those who don't know, the issue we had was customers who didn't want to hook up Service Catalog directly to an etcd, wherever that might have been, and at the time I think the options were basically the same as they are now: that is, either the etcd that the Kubernetes core talks to, or just throwing an etcd somewhere into the cluster. So we built TPRs as a storage backend.
B
Now, of course, TPRs are deprecated, and they're going to be, I believe, eliminated in Kubernetes 1.8. So this discussion started as an issue asking: should we migrate to CRDs? I think the obvious answer there is yes, we should migrate off of TPRs to something. The requirement we have, to be clear, is not "we must do CRDs" but "we would like to have something that doesn't require an etcd to be sprayed onto the cluster and used exclusively by Service Catalog."
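The requirement described here, persistence without a dedicated etcd sprayed onto the cluster, is essentially a pluggable-backend problem. A minimal Python sketch of the idea (all names are illustrative, not the real service-catalog interfaces): the API server codes against one small interface, so an etcd-backed and a CRD-backed implementation are interchangeable.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """The small interface the API server codes against."""

    @abstractmethod
    def put(self, key: str, obj: dict) -> None: ...

    @abstractmethod
    def get(self, key: str) -> dict: ...


class EtcdBackend(StorageBackend):
    """Stand-in for a dedicated etcd; a real one would use an etcd client."""

    def __init__(self):
        self._data = {}

    def put(self, key, obj):
        self._data[key] = obj

    def get(self, key):
        return self._data[key]


class CRDBackend(StorageBackend):
    """Stand-in for persisting objects as custom resources via the core API server."""

    def __init__(self):
        self._api_objects = {}

    def put(self, key, obj):
        # A real implementation would PUT a custom resource to the core API here.
        self._api_objects[key] = {"kind": "ServiceInstance", "spec": obj}

    def get(self, key):
        return self._api_objects[key]["spec"]


def store_instance(backend: StorageBackend, name: str, spec: dict) -> dict:
    """API-server logic stays backend-agnostic."""
    backend.put(name, spec)
    return backend.get(name)
```

The point of the sketch is only that the rest of the server never needs to know which backend is in play, which is why "TPR vs. CRD vs. etcd" can be a deployment choice.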
B
That's where the CRD discussion left off. I know that there's some interest from Google and from Atlassian; one or both of those groups may want to implement a CRD storage backend. I'm going to have a discussion with at least Michael. Michael, are you on the call now? I believe I did not see you.
B
Great. So I was going to have a discussion with at least him, Neil, but I think you were going to be there as well. That was going to be in the early evening my time, I think the morning your time, about how to proceed here with implementing CRD backend support. So with that I will leave it, and Neil, if you want to fill stuff in, please go for it.
C
Well, I think that both your use-case analysis and Google's are pretty much the same. The reasoning for using TPRs or CRDs is pretty much just not being willing to manage a separate etcd cluster, and instead using the same etcd that is being used by core Kubernetes, though probably not directly but through the API. So yeah, I don't see any extra requirements for using CRDs or any other resource. We don't care a lot about what it is; switching from TPRs to CRDs is fine for us.
C
A
The question is whether we can actually do this right now and wholly migrate onto CRDs, or whether we need to keep TPR support around. "Support" is being very generous, right? It's more like: it's there, and you can try it, but I wouldn't say that we offer support on it.
A
I wonder if the Microsoft, Google, and Atlassian folks on the call can say whether they feel we can ditch TPR, or whether we need to keep TPR support in at least until we have CRD support, or if we need some period where etcd, TPR, and CRD all technically have paths in the code associated with them. So.
B
What we can do, and what I think we should do, is leave TPR in the state it is in now, which is just declared in docs as alpha, and concurrently build CRD support in. Then, when 1.8 comes out, we can declare TPR support deprecated and only refer to it as alpha in the 1.6, sorry, 1.7 and previous documentation. That would then cover the use case for people running 1.6, 1.7, and before, and of course tell them again.
A
So we are going to have this problem again, because CRDs, while more stable than TPRs, are still not designed as a data store. When I've spoken to API machinery folks, and maybe, Walter, you might know more about this than I do, we had discussed basically a totally opaque blob resource that would be suitable as a data store for aggregated APIs. Obviously that doesn't exist yet, but I think the desire is in the community.
A
It
certainly
seems
to
be
strong
enough
that
it's
arguable
that
we
should
build
such
a
thing,
especially
if
the
API
machinery,
maintainer
czar,
not
I,
think
that
CR
D
is
unsuitable
as
a
long
term
place
to
store
data
for
aggregated
API
s.
So
it's
worth
considering
what
our
deprecation
strategy
is
going
to
be
and
I
think
this
is
perhaps
another
area
where
we're
out
in
front
of
kubernetes,
because
I
don't
think
anybody
else
has
any
concern
like
this
about
supporting
different
data
stores.
Oh
so.
B
That's great, plus one to that. But I do feel like we have enough on our plate in terms of stuff that we're taking on from the community; of course, PodPreset is the biggest example there. I'd rather just implement CRDs next to the existing TPR implementation and then carry that pattern forward if and when we get the opaque blob API, and that would be great, of course, yeah.
D
A
I will +1 implementing CRD next to TPR. I prefer not to carry TPR forever, but we can sort that out at a later date.
D
Actually, that's a really good question. There are two guys in the office, Emil and Sean, who are looking at this right now. I don't know the latest status, but I would say that you want to get moving on CRD, because TPR is supposed to be deprecated, and there are discussions about when it needs to be removed.
C
E
D
C
For example, right now, if you're using TPRs or CRDs, you can use kubectl, and all the objects are exposed; they're all API objects, and anyone can access the data beneath without going through the Service Catalog API server. So I think it would be useful to hide all these objects.
A
I think one of the major reasons the API machinery group feels that CRD is not good enough as a long-term location for arbitrary serialized data is that the versioning support in CRD is not strong enough to be the basis for arbitrary API data. Yeah.
D
The other thing I would add to that is that etcd is really not designed for large objects; I'm actually investigating a couple of large-object issues right now. So we would like to keep control of what's actually in etcd, to make sure that we can get good behavior out of Kubernetes. That's one very important piece.
B
So another idea that I've thrown around, much longer term, is providing an option in the Service Catalog API server to use external storage outside of Kubernetes. That's something that's kind of in the ideation phase in my head, but I wanted to throw it out there for someone else to think about as well, if you have ideas in that space.
E
Aaron, just to circle back around to the need Deis had for TPRs to begin with: if I remember correctly, I thought during one of the face-to-faces you guys had needed it because you wanted the Service Catalog stuff to run, I want to say, outside the Kubernetes cluster, but at least not right next to the Kubernetes core processes, or something like that, and there might be firewall issues or something in the way. Okay.
B
E
I understand the latter part, because you don't want to manage a separate etcd, but can you elaborate on why you don't want to use the same etcd? Because in this model the community is heading towards, with separate API servers and breaking up the core into separate little components, this seems like it just falls into yet another component. Why would Service Catalog be different in that picture?
E
B
So, Doug, let me just clarify real quick: we've never had the problem of a customer saying "we won't allow Service Catalog to store data in etcd." The problem has always been that they won't open up network access directly to etcd.
B
E
A
Okay, so it sounds like we have consensus to implement CRD and keep TPR around for the time being, and it sounds like that is good enough for somebody to go and begin working on it. I had been thinking, in connection with CRDs, that they were very labor-intensive for Aaron. I think it would be really great if we could get someone else, or some other parties, to make a contribution for CRD.
A
I think Aaron's already jumped a lot of hurdles in the dark, to the benefit of those who will implement such a thing, and hopefully CRD support will be a...
A
F
A
F
B
So if you look at my screen, I don't know this person's name, but the GitHub handle here said, somewhere above I think, that they would be willing to. And then, like I said, Michael also said he wanted to talk about it later this afternoon my time, and it looks like he said that he could do it.
C
E
B
A
B
A
E
No, I don't think we did this. This is the question of what to do with the fact that the ServiceClass name and the plan name might actually change from the broker when you do a refresh. Because those things can change, we can't use those names for the resource names inside Kubernetes, and so what the heck do we do about that? The reason I added this to the agenda was because I had a proposal; I'll get the link to it in a sec.
E
I just want to flag the other controversial thing, which is dynamically generating data on the fly in response to a GET. That's something else I think is either controversial or new for Kubernetes as well, but anyway, go off and think about that, because it's going to take a lot of thinking and discussion, I'm sure.
A
E
Okay, to get back on topic, though: you said this is going to break patch. Can you elaborate on how it breaks patch?
A
E
A
In general, though, forget about patch: my most pressing point of discomfort about this is that I would be totally fine with tooling that did this for you, but if we rely on magic in the API server, we still fundamentally have races, so you might unintentionally couple yourself to a class that is not what you thought it was. Okay.
A
Whether
porcelain
would
totally
solve
this
so
say
that
say
that
you
allow
users
to
post
the
the
human
readable
name
if
they
post
the
human,
readable
name
and
at
the
same
time
a
broker
is
being
updated
and
I
like
the
names
shuffle
around,
so
that
there
is
now
a
new
plan
or
in
like
another
plan
that
is
renamed
to
the
name
of
the
old
plan,
but
has
different
semantics.
A
But
that
said,
I
also
happen
to
remember
that
Cloud
Foundry
as
a
platform
does
not
have
an
automatic
realist
interval
like
we
do
so
they
perhaps
they
don't
do
any
prove
like
don't
take
any
preventative
measures
to
avoid
such
a
race,
because
they
consider
the
the
update
of
particular
brokers
catalog
to
be
manual
once
in
a
blue
moon
operation.
A
I'm not quite sure I understand the race condition. So the race condition is: I refer to service plan A, and while I'm posting that, the controller is also relisting the broker. The broker used to have plans A and B, and now it has plans A, B, and C, except, since the names are mutable, it changed the old plan A to be the new plan B and the old plan B to be the new plan A, and you got the new plan A when you wanted the old plan A. Okay.
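The race just described can be made concrete with a small sketch (hypothetical names, not the actual service-catalog code): if references are resolved through the mutable display name at use time, a broker relist that shuffles names silently changes what the reference means, whereas pinning the stable identifier at creation time does not.

```python
# A broker catalog maps each plan's stable ID to its mutable display name.
catalog = {"uuid-1": "small", "uuid-2": "large"}


def resolve(display_name: str) -> str:
    """Resolve a human-readable plan name to its stable ID."""
    for plan_id, name in catalog.items():
        if name == display_name:
            return plan_id
    raise KeyError(display_name)


# The user asks for "small"; we pin the stable ID at creation time.
pinned = resolve("small")  # "uuid-1"

# A broker relist swaps the display names around.
catalog["uuid-1"], catalog["uuid-2"] = "large", "small"

# Resolving by display name now silently yields a different plan...
assert resolve("small") == "uuid-2"
# ...while the pinned reference still denotes the plan the user chose.
assert pinned == "uuid-1"
```

This is why the discussion keeps coming back to using stable identifiers for the Kubernetes resource names and treating the human-readable names as porcelain resolved once, client-side.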
E
If, between the time you did the query and the time you actually try to use the human-readable name, that human-readable name comes to point to something else, then you're going to get that something else, and I'm not sure we can necessarily even fix that, because that's just a fact of life: if someone renamed something on you and reused that name, you're going to get the new thing, because you have no way of knowing whether they meant to point to the new thing or the old thing. So.
A
C
E
G
B
Yeah, so this one: we've gone over this three times now, and I think every time we've kind of punted. This is: what should we do if someone tries to delete an instance that has bindings that reference that instance? The OSB spec doesn't currently say what to do here. That's something I'll be bringing up with the group in the meeting tomorrow.
A
B
A
However, I do not think that we have any place in Kubernetes where we prevent the deletion of a resource. So I think that we have to accept the delete, and perhaps what we could do is accept the delete but then not actually do the deprovision until the bindings are all themselves deleted. Yeah.
B
A
B
So it'll go into finalization; it will still be in finalization, but there will be a condition written down that says "you can't delete this right now, because the following bindings still point to it," with the list of the binding names, and then says "you need to unbind by deleting those bindings." Once you do that, then this instance will be completely deleted. So.
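The accept-the-delete-but-hold-finalization behavior described above can be sketched roughly like this (the field names and the finalizer string are hypothetical illustrations, not the actual service-catalog schema):

```python
def reconcile_delete(instance: dict, bindings: list) -> dict:
    """
    Accept the delete, but hold the instance in finalization until every
    binding that references it is gone.
    """
    blocking = [b["name"] for b in bindings
                if b["instanceRef"] == instance["name"]]
    if blocking:
        # Deletion was accepted, but record why deprovision can't run yet.
        instance["conditions"] = [{
            "type": "Ready",
            "status": "False",
            "message": ("cannot deprovision yet; delete these bindings first: "
                        + ", ".join(blocking)),
        }]
        return instance
    # No bindings left: deprovision may proceed, so drop the finalizer
    # and let deletion complete.
    instance["finalizers"] = []
    return instance
```

A controller would call something like this on every reconcile of an instance with a deletion timestamp, so the instance lingers, with an explanatory condition, exactly as long as bindings still point at it.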
A
B
E
F
B
I would say it's worth bringing up, Brendan, but with respect specifically to this issue, I don't think we need it in order to make progress. Okay, does that answer your question? Yeah.
F
B
I would agree with what Paul says: just block new bindings for the instance. Since we don't have machinery to update bindings at all right now, because that's still in incubation, so to speak, in the OSB group, I think it's okay to just push forward on implementing "block new bindings," do the failure-on-finalization that we talked about, and then go talk to the OSB group, sort of in another thread, about the general case.
B
A
B
C
Actually, we don't, as far as I know. I'm double-checking now, but we certainly could use such a thing. Yeah, we do not; we only have one that checks to ensure that the Kubernetes namespace exists. So we could make it a distinct issue to have an admission controller that prevents a binding from... actually, you know what, we shouldn't have that, because you might create them out of order. Yes.
C
A
B
E
So there are two issues here. One: there are going to be cases where brokers just vanish, and we need to make sure we can delete brokers, instances, and bindings without ever making the call out to the broker, with some sort of force-type option; I'm not sure we support that right now. And two: we're going to have services and plans that vanish when you do a refresh from the broker's marketplace. How are we going to handle those?
E
A
E
B
A
I think it's probably more complicated than force delete, because we'll have to finalize the bindings first. For example, if we just force delete the binding, something still has to go and clean up, so it might be kind of tricky to reason through. Yeah, I think it's certainly more complicated than it seems on the surface, but I'll take a first pass at writing it up and we'll just see what falls out of that.
E
This one was mine as well. Right now, user-provided services are just like any other service, and we created a user-provided broker to deal with them. As I was going through the walkthrough and playing with the plugin stuff, I did not have the greatest user experience with it, and it may seem like a minor thing, but it kind of really bugged me the more I looked at it: first, the fact that I had to explicitly set up a broker to do it.
E
Second, I had to give a plan name, which makes no sense whatsoever for a user-provided service, because they don't have plans; you have to put in a plan name of "default," and it just felt really odd to me. Between those two things, I started wondering whether we should drop the Cloud Foundry pattern and just have a well-defined resource called UserProvidedServiceInstance, or something like that. That way, users don't have to feel like they're doing something hacky or specialized.
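As a rough illustration of this proposal (the type and field names are hypothetical, not an agreed design), a dedicated resource would drop the broker and plan fields entirely, rather than forcing a placeholder plan of "default":

```python
from dataclasses import dataclass, field


@dataclass
class ServiceInstance:
    """Broker-backed instance: class and plan are required."""
    name: str
    service_class: str
    plan: str


@dataclass
class UserProvidedServiceInstance:
    """Hypothetical dedicated resource: no broker, no plan, just credentials."""
    name: str
    credentials: dict = field(default_factory=dict)
```

Bindings against either shape could then surface credentials the same way, which is the property the later multi-tenant discussion relies on.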
E
A
E
I haven't written a formal proposal, though, and whatever time frame that might be in, I would like to have people think about this for a day or so. Maybe tomorrow people can come back and say whether, in general, they think a specialized resource is probably the right way to go, or whether the current approach is the right way to go. So we'll think about it for 24 hours? I'd prefer that.
B
A
E
C
D
H
E
I don't know if this is the right phrase to use for Go, but basically I was kind of envisioning a user-provided service as being not quite a subclass, because it doesn't have a plan, but it's similar in that sense. Or maybe, actually, it fits the other way around: maybe normal instances are more like a subclass of user-provided service, because they go one step further by adding a plan. But maybe I shouldn't phrase it that way, because it might confuse the matter.
B
E
B
E
You would not be asking for the catalog at that point. What you would be asking for is "give me the list of instances of services," and this would show up in that list the same way MongoDB would show up in that list. Gotcha, I'll read through your proposal.
E
C
B
There's kind of more to it than that too, right? I've seen this thing used a ton for things that an operator wanted to be, quote, multi-tenant: they spin up the database or whatever and they put the credentials into Cloud Foundry, and then they say "this is what to use." You bind to it as normal, but it's shared across every binding; the actual database table or Mongo collection or whatever is the thing that is shared, multi-tenantly, across all of the bindings. Isn't that correct?
E
That is a use case, but that's basically the same use case I described, because in that case the admin created the MongoDB through some mechanism, whether it's actually through Cloud Foundry or through some API directly or whatever; it doesn't matter. But then, when they go off and create the user-provided service, they're telling it what those credentials are. It's basically the same thing.
G
C
E
A
C
E
It's interesting you say that, because that actually might be the right solution. Yes, right: we may look at this and say, you know what, we don't even want to solve this use case via Service Broker concepts. It all depends on how we do things like the PodPreset binding, because that's almost outside the scope of our Service Catalog API anyway, right? Yeah.
F
That's
a
really
valid
point:
I
think
the
reason
that
it's
part
of
the
Cloud
Foundry
platform
on
which
we're
basing
this
is
because
Cloud
Foundry
is
also
a
pass,
and
so
they
need
this
mechanism
to
be
able
to
inject
credentials
into
a
running
application
in
the
past.
Whereas
we
don't
have
that
mechanism,
we
don't
have
a
pass,
and
so
I
I
think
that's
a
very
astute
observation
that,
if
all
we're
doing
is
making
this
whole
extra
object
thing
that
then
turns
around
and
puts
it
in
native
kubernetes
resources.