From YouTube: Kubernetes SIG Service Catalog 20170810
Description
- Should we have an F2F at Kubecon?
- Separating OSB and PodPreset facets of bindings
- Using generation instead of checksum to detect changes in object spec
- Sending bind_resource.app_guid on bind requests
- Orphan mitigation
B
Thanks, Paul. So, the regular announcements: the document with our agenda is both in Slack, in our channel, and I now added it a second time to the Zoom chat for those who just joined. Jorge Castro from the Kubernetes community team (he works at Heptio) has graciously joined us again today. Thanks, Jorge. Jorge, my bad, I'm very sorry, I always do that. Thanks again for joining us; for the second day he's going to take notes again, graciously. Be on your best behavior, please. Maybe he'll come back for a third day sometime.
B
Just as a recap: speaker queue. Put a plus-hand into the chat if you've got something you want to add to the conversation, just like that, Jorge. When you're ready, can you share your screen? And if you are an attendee today and you hear what I just said, that means you're here as an attendee: can you write your name, if it's not already there, under the attendees section under August 10th, 2017? That would be very helpful to all of us. So with that, Jorge, I'll wait for you to share your screen and then we can get started.
B
All righty. Sorry about that. No problem, it wasn't anything too top-secret, I hope. No, not really. Okay, cool, all right. So we've got the first item. Doug, it looks like you are going to talk about a face-to-face, so why don't you summarize that, and we'll open up the floor to the speaker queue when you're done. Go ahead.
D
So this issue was actually opened up by Paul based on previous discussions, where we agreed to pull out the PodPreset stuff from the binding, and we agreed to resolve exactly what kind of PodPreset type of thing we're going to create after beta, I believe. On this one I'm not looking to close the issue or anything like that; I'm just looking to get agreement that we want to move this particular issue out of the 0.1.0 milestone. And to be clear, this issue is all about creating a replacement for PodPresets.
D
You know, I think Paul actually suggested a PodPreset binding type resource, but we need to get into what the decision is in terms of what the resources look like. It's just, I think we made the decision that we're going to make that decision after beta. So I'm just looking for consensus that we want to move this to be a post-beta issue.
E
So can this functionality be added, but gated by a flag? That's, I guess, because as per the plan, what we are hoping is that we'll have the PodPreset implementation, as far as the API is concerned, also available in beta but guarded by a flag, so that people can try it out. And this goes hand in hand: having the integration with Service Catalog would also be important, and if we make progress on that, we should at least have some cycles from reviewers to review that during beta. So that's the comment.
A
I definitely think that if we run out of things that we need design consensus on, and finish the initial beta early, then at that time I'd be happy to talk more about PodPreset binding, and I actually have some fairly detailed mental notes about this thing. So I was considering sketching out a detailed design on it next week, while I'm going to be out, but I think for now we should.
E
Yeah, I agree, concentrating on items which are needed for beta. So we are focusing on getting PodPreset in, but then what would happen is that you can have the PodPreset APIs, but the integration will not be in there. So if you want to get early feedback, then create a time frame for the PodPreset stuff, which is probably fine, I think.
A
I think I may have misunderstood the suggestion. So it sounds like, Sanel, maybe some information is missing from your context. If I remember correctly, on the beta scope issue we put, as a nice-to-have, PodPreset being present in the Service Catalog API server, and so, if it's done, I have absolutely no problem.
A
It would be useful for people that want PodPreset but don't care at all about Service Catalog. As we know, there are some people that fall into that category. Yeah, so I think it has use, and I don't see any reason not to release it if it's in a state that we think is ready, especially if it's behind a flag. Okay.
D
So no, I think there might be a little bit of miscommunication here. We're not looking to remove PodPresets; PodPresets will still be there in 1.8. Whether it sits in the Kubernetes core or our Service Catalog repo is still a little bit up for decision, and it's probably going to remain where it is, but we're not looking to remove PodPreset.
D
The only thing we did is remove the pointer to a PodPreset from the binding object, and what we're really deferring here is the notion of creating other PodPreset types of resources to extend the functionality of PodPreset. So if your concern is about PodPresets not being there or not being available, that's not true. They will still be there. It's just we're not going to necessarily talk about them as a thing to use, as per the Service Catalog, for the short term.
C
Yeah, so I'd be curious about... so my understanding matches exactly what Dan was saying, which is PodPresets will be there, but there is no magic that Service Catalog provides for you. Is that true? Yes, right. So going from that, I would be curious how useful, and this is sort of, kind of, it would be interesting to get some thoughts from some customers and whatnot on how useful we think it is, because I think the fact that they just get put into secrets reduces the functionality quite a bit.
D
Okay, go ahead, take the floor. So I think we had a whole bunch of discussions on this while you were on vacation, and what I think might be useful to do, so that we don't spend too much time on this call, is: if you want, I can talk to you offline after this call to bring you up to speed on the conversations, and then, if you want to, in essence, reopen that discussion, which I think is a valid thing, because we want to make sure to get this right for beta.
A
Go ahead. And so I just think that we should retain focus on things that we've already agreed are in scope and we've already talked about.
D
I just pasted a link into the chat. So again, I think this issue was opened, if I remember it correctly, and I believe on the August 7th meeting we did sort of talk about switching from checksum to generation. My general sense was that we were all kind of in agreement that that mechanism is already there in core, so we might as well reuse it, but I don't think we actually formally agreed to that, because it was all discussed within the context of something else.
D
I think it was one of your issues; I don't remember exactly which one it was, it might have been relisting. So what I want to do is bring it back up to make sure that we're in agreement, as a group, on that general direction, and then if Paul does have a formal proposal, we could talk about it now; but if he doesn't, then I would at least want to get that general consensus going forward, and then in a future call we could talk about a concrete proposal if Paul has it. Cool. Paul, go ahead, take the floor.
A
So I have a write-up that shows exactly what we should do. There are some special cases; for broker, for example, I've written up broker. I haven't finished writing up instance, but I'd be happy to talk through it now. I think it would take maybe about 10 minutes to go through the general mechanics and at least say why it will work for broker and binding, with the additional bonus challenge that, if I can mentally write out the rest of the special things for instance, I can probably talk that out as well.
C
Yeah, me, I haven't seen the proposal yet, and if I'm the only one who hasn't read it, that's fine. I would rather read it, and then look at the different pointers to it first, so that then I can ask questions; or if it looks good, then yeah, I'll go back first and read it.
D
Doug, go ahead. Yeah, how about this, as sort of a compromise: why don't we go through the other agenda items that have concrete proposals sort of written down already, and if we get through all those and we basically have extra time, then we can go and do Paul's brain dump. Paul, go ahead.
B
Aaron, go ahead. This is Aaron speaking now. I'm going to propose, and I believe this was proposed by Doug in the chat, that we do a 10-minute Thunderdome, to be ended by 31 minutes after the hour, and whatever we can get through in ten minutes, great; then, Paul, you come back with a concrete proposal in a future design meeting for us to go over and continue adjusting. That's it for me. Any responses or reactions to that, please put them into the speaker queue.
A
You can see what I'm talking through, oops, and it's the proposal. Nice, I pasted it twice, so I'll just clean that up. Alright, so these are the proposed changes that I suggest, and I'll write it up, by the way. So for context: generation is a field on ObjectMeta. It is not user-settable, and it is basically maintained by the API server. The basic idea of generation is that as the spec of an object changes, the API server will bump the generation on an update.
A
Now, that doesn't include status, and it doesn't include annotations or labels; it's just the spec. And the pattern in which resources in Kubernetes use this field is when they have finished reconciling a particular status, or, I'm sorry, a particular version of the spec, a particular generation of the spec. The API server bumps the generation in the object meta when the spec is changed. Then: add a new field to broker, instance, and binding status called reconciledGeneration, that the controller will set to the value of generation currently on an object when it is going to set the resource's Ready condition to have status true. So, to talk through an example of this, say that for broker we add a new broker.
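Paul's reconciledGeneration pattern can be sketched roughly like this (a minimal illustration, not actual Service Catalog controller code; the `Broker` and `BrokerStatus` shapes below are simplified stand-ins for ObjectMeta and the status block):

```go
package main

import "fmt"

// BrokerStatus is a simplified stand-in for a Service Catalog status
// block; ReconciledGeneration records the last spec generation the
// controller finished processing.
type BrokerStatus struct {
	ReconciledGeneration int64
}

// Broker is a simplified resource. Generation stands in for
// metadata.generation: bumped by the API server on every spec change,
// never user-settable, as described above.
type Broker struct {
	Generation int64
	Status     BrokerStatus
}

// needsReconcile reports whether the controller has pending work:
// the spec has changed since the last fully reconciled generation.
func needsReconcile(b Broker) bool {
	return b.Status.ReconciledGeneration != b.Generation
}

// markReconciled is what the controller does when it sets the Ready
// condition to status true: it records the generation it just processed.
func markReconciled(b *Broker) {
	b.Status.ReconciledGeneration = b.Generation
}

func main() {
	b := Broker{Generation: 1}
	fmt.Println(needsReconcile(b)) // spec never reconciled yet: true
	markReconciled(&b)
	fmt.Println(needsReconcile(b)) // up to date: false
	b.Generation++                 // API server bumps generation on a spec update
	fmt.Println(needsReconcile(b)) // stale again: true
}
```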
A
That makes sense in the Open Service Broker API, so I don't think that you are wrong, but I don't think that we will have such a field. And let me call out a specific counterexample here: a reference to a secret. Okay, so I think what we're talking about is, you're thinking about a flow where we would have supported parameter updates, and there's a secret that holds the source of parameters.
A
This is the part that I did not get to right now. I think in the case of parameter updates, and we run a chance of getting into the weeds, but I think in the case of parameter updates there is a convention on deployments, or there's precedent on the Deployment resource, to have a field in the spec called paused, such that when you set that field, it means that the controller won't do any more work until the field is unset.
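The deployment.spec.paused precedent mentioned here combines with the generation check roughly as follows (a sketch only; the struct names are illustrative, not the actual Service Catalog types, and the `Paused` field mirrors the Deployment convention rather than anything agreed on this call):

```go
package main

import "fmt"

// Spec mirrors the shape of a resource whose controller honors a
// pause gate, like deployment.spec.paused in core Kubernetes.
type Spec struct {
	Paused bool
}

// Resource flattens metadata.generation, the spec, and the
// status.reconciledGeneration field for brevity.
type Resource struct {
	Generation           int64
	Spec                 Spec
	ReconciledGeneration int64
}

// shouldProcess: the controller skips all work while Paused is set,
// even if the spec generation has moved ahead of what was reconciled.
func shouldProcess(r Resource) bool {
	if r.Spec.Paused {
		return false
	}
	return r.ReconciledGeneration != r.Generation
}

func main() {
	r := Resource{Generation: 2, ReconciledGeneration: 1}
	fmt.Println(shouldProcess(r)) // pending work, not paused: true
	r.Spec.Paused = true
	fmt.Println(shouldProcess(r)) // paused, controller stays idle: false
}
```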
B
Aaron here, thank you very much. So, Paul, I want to say, since this isn't written in a design document, I've gotten some comments that it should be written down before we go on. I'm going to give you the rest of your two minutes, and I'll actually give you a couple of extra seconds since I'm talking, but then we've got to move on to the speaker queue.
D
Next in the queue, so go ahead, Doug. Well, I was assuming that Paul is the presenter and we get to respond, but okay. So the question is kind of based upon the conversation we had a couple of days ago around Aaron's issue, for those resources where the user may need to force a resync. Paul, are you envisioning some sort of resync type of URL, like we talked about on the previous issue, to be available for people to force the resync to happen?
A
I think that for broker, I will next put in what I've written up for broker, and it's very quick. Basically, the special considerations for broker are that we do need to support a manual resync, and so what I have spoken about with Jordan Liggitt, who's one of the folks that works on API machinery stuff at Red Hat, is that we could have a field on the broker's spec, and I've just updated the issue.
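One possible shape for such a spec field, sketched under the assumption of a user-bumpable counter (the names `RelistRequests` and `ObservedRelistRequests` are illustrative only, not what the group actually settled on):

```go
package main

import "fmt"

// BrokerSpec with a user-bumpable counter: incrementing it signals the
// controller to re-fetch the catalog even though nothing else in the
// spec changed.
type BrokerSpec struct {
	RelistRequests int64
}

// BrokerStatus records the last counter value the controller acted on.
type BrokerStatus struct {
	ObservedRelistRequests int64
}

// needsRelist reports whether a manual resync was requested since the
// controller last contacted the broker.
func needsRelist(spec BrokerSpec, status BrokerStatus) bool {
	return spec.RelistRequests != status.ObservedRelistRequests
}

func main() {
	spec := BrokerSpec{}
	status := BrokerStatus{}
	fmt.Println(needsRelist(spec, status)) // no request pending: false
	spec.RelistRequests++                  // user forces a resync
	fmt.Println(needsRelist(spec, status)) // controller should relist: true
}
```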
B
Alright, I'm going to put my hand up and ack it. So what I would like to see next, and this also echoes, I think, what you just said as well, Doug: I think the next step would be, Paul, write everything down in completion and we will put this on the schedule ASAP. I know you're going to be gone next week, but we're absolutely going to need to see this in full. That's it for me. Paul, you're up next, go ahead.
A
Yeah, I can live with that. I do just want to say, in the Kubernetes core, the fields that hold the last generation you did work for are in the status, so I don't know that we would have traction moving them into ObjectMeta, and I think that if we did, we would certainly face an uphill battle on it. I think, as far as what we should do for the initial beta, I think we should just add a field in the status, and I am totally fine with that.
C
Okay, Eli, go ahead. Oh sorry, I interrupted; I had a hand up. I was going to say, so Neal said that there is something in deployments already, called something else, observedGeneration. But if we think it's never, ever going to go into the object metadata, that seems goofy; every controller needs to solve this problem. Is this ad hoc, or how do people typically do this?
A
I think that the API review feedback would be: use generation. So I'm fine to spell it out in more detail, but I think that a vote against using generation is a vote against the existing conventions, one that will later require a migration when we go through API review. I guess that's where I stand.
B
So I've once again put a hand up and acked myself within the confines of the speaker queue; I hope that's okay. Paul, the only thing I want to ask is that, when you write up the complete proposal, you address these three questions in the chat: number one, how do other objects handle this; number two, how does Deployment do this; number three, why not push upstream, since everyone has to do this. Okay.
D
So I just pasted a link to not just the issue, but also the proposal that I put in there. So, as background: right now, when you do a bind request, there is an optional field called app_guid. That is supposed to be, obviously, a GUID; it's a pointer to the application that's trying to bind to the service. Right now we actually send the UID of the namespace in which the secret is going to be created, because we didn't know what else to put.
D
So we decided to put something there, and that seemed as good as anything else. There's been a whole discussion about whether we should change that or not to something else. Well, I was thinking about it today, and with the decision to remove the PodPreset stuff from the binding, we've basically converted the binding resource into what we sort of called, in the past, the instance credentials object, or something like that; basically we're reducing binding down to not be a binding option or action anymore.
D
We really don't have an application at that point in time in the workflow to work with, and so what I think we should probably keep doing going forward is exactly what we're doing now, which is just continue to send the UID of the namespace into which a secret is going to be created, because that at least gives the broker some sort of scoping mechanism, if it wants to use it, to make a determination about how to handle the binding requests, and I think that's the most accurate thing to send.
D
The only other option that I've heard mentioned is to send nothing at all, but I don't think that really helps the broker do its job if it actually does want to do some kind of logic based upon groupings or associations of things. So my basic proposal is to do nothing, close this issue without any action, and just continue to send the namespace UID as the app GUID. Okay.
D
So the broker can choose to use this field or not. What I've seen in the past is sometimes brokers will look at this value and determine that the user is trying to bind, for example, the same application to the same instance, and flag that. So sometimes it may return the exact same credentials as before, or it may say, yeah, that's fine, I'm going to return a different set of credentials. I've seen other cases where they say, no, I'm not going to let you bind the same application twice.
A
We already have rooms reserved for that day. So if we want space during KubeCon, I have somebody that can try to get some space for us. Why don't we do this: how about we have an action item to think about which day you want to have a meeting. I guess we could possibly do it after, although I'm not sure what level of interest in such a schedule there would be, or how much energy there would be to execute it.
B
Jorge, you had a hand up, go ahead. Yeah, I was going to say, usually at the project level they have a spreadsheet, because some people need to attend multiple SIGs, and you kind of pencil yourself in for the day beforehand, so that certain people can attend certain SIGs. So I don't know when that's happening.
B
Keep an eye out for the people who are organizing KubeCon to put out an announcement-style thing; just something to watch for. So how about this: I'm going to just suggest that I can put out a poll to the group right now, and give that as an action item for me. Sorry to take over the queue, I just wanted to throw that out. Paul, go ahead, you're in the queue, take the floor.
A
Good point, Jorge. I think this channel that I have might afford us the chance for larger space and longer times. In the past, if I remember correctly, there had been specific hours, and, you know, a couple of hours is not actually enough time to talk through very much in person, so we might be able to get, like, a four-to-six-hour stretch of time this way. Just throwing it out there; I'm happy to do whatever folks want.
E
Actually, I will then mention something which is more relevant to the audience here. So I did a work-in-progress PR for moving the APIs, and I ran into two issues, which we can probably put up for the next meeting. One is: should the PodPreset have TPR storage at all or not, or should it just be etcd? So the storage question came up in the PR. And second is the issue I ran into of what the group name should be.
E
It is being developed under the group "settings", and the group name is whatever the domain name is, which is settings.k8s.io. It will conflict with the implementation in core Kubernetes. So that is another point that I want to resolve: what would the group name be when we start implementing it in Service Catalog? Will it be settings.servicecatalog.k8s.io?
B
Okay, not seeing anything else here. All right, now we're on to design issues needing concrete proposals. I'll just call out that these look like all issues that someone, or some people, need to write proposals for. If you have a proposal in mind, please go pick one of those issues up and write that proposal. That would be great.
B
All right, with that and no other hands, it looks like we're going to be able to give eight minutes back. That looks like it; great, guys. Thank you, everybody, for coming today. We are, I believe, not going to have a design meeting tomorrow, so that would mean Monday is the next time we all get together. So thanks, everybody, we'll see you on Monday. I will send it.