From YouTube: Kubernetes SIG Service Catalog 20180212
A: I actually have a group of students at Boston University that I'm mentoring, who are working on a project that is exposing a — let's see, I think it's called a data lake — via the Service Catalog and a service broker. And then I was also doing a talk a couple of weeks ago and wanted to have all new content. So those three things put together kind of gave me the idea for this project.
A: So here we go. I'm on master, I do make deploy-helm, and since I'm going to maybe show some iteration, I'm just gonna have that TAG=latest thing on there. So this builds a binary, builds a container image, deploys a chart that runs the broker binary in a container, with a Service in front of it for stable DNS, and makes a ClusterServiceBroker. So when this is over, we'll have a new broker in the catalog and we'll see a new service pop up.
C: So, while we're waiting — I assume that when the user actually types up their business logic, for lack of a better term, they're basically just sort of filling in templates, or filling out interfaces, that type of stuff. Can you show us the template — the thing that the infrastructure surrounds — that they would just be filling in?
A: Yeah. What I have found is that initially the first request times out, because it's made before there's a pod there behind the service. So at some point in the future this will go through — BAM, there we go. All right, so the ClusterServiceBroker is ready. Now, svcat — oops.
A: Yeah, those are both goals — quick start, quick iteration, and then hopefully a base layer. So, by way of example, we have a couple of brokers that we wrote at Red Hat — and this is something that I did as some dude, not as a Red Hat employee — but we are now looking at abstracting out, or taking out, the reusable pieces of the brokers that we've already written, and this is maybe one of the pieces that falls out of it.
A: There is basically a minimally functional implementation of this interface in this project, under pkg/user. There's also a hook for adding your own CLI flags, and factory methods for — you know — the sausage-making of connecting up your custom code with the thing that calls it. It's possible that in the future, like in the short term, we may split this into a quickstart project and a library. But basically the idea, in the current incarnation, is that all the code that you need to write should be confined to this one package.

Cool.
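The "all your code in one package" shape described here can be sketched roughly as follows. This is an illustrative sketch only, not the actual osb-starter-pack API — the BusinessLogic interface and the Provision/Deprovision method names are assumptions made for the example:

```go
package main

import "fmt"

// BusinessLogic is a hypothetical stand-in for the interface the broker
// scaffolding would call into; the user fills in an implementation under
// their one package (pkg/user in the project discussed above).
type BusinessLogic interface {
	// Provision is invoked when the catalog asks the broker to create an instance.
	Provision(instanceID, planID string) (message string, err error)
	// Deprovision tears the instance back down.
	Deprovision(instanceID string) error
}

// userLogic is a minimally functional implementation, tracking instances in memory.
type userLogic struct {
	instances map[string]string // instanceID -> planID
}

func newUserLogic() *userLogic {
	return &userLogic{instances: map[string]string{}}
}

func (u *userLogic) Provision(instanceID, planID string) (string, error) {
	u.instances[instanceID] = planID
	return "provisioned " + instanceID, nil
}

func (u *userLogic) Deprovision(instanceID string) error {
	delete(u.instances, instanceID)
	return nil
}

func main() {
	// The scaffolding would hold the user code behind the interface.
	var logic BusinessLogic = newUserLogic()
	msg, _ := logic.Provision("abc-123", "default")
	fmt.Println(msg)
}
```

The point of the design, as described in the meeting, is that the REST plumbing lives in the scaffolding and only this interface implementation is user-owned.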
A: Yes, that is the kind of thing that I have in mind when I say a batteries-included experience. Another one, just as an example: when you're on the receiving end of the call from the catalog, you have to do some kind of check to see whether the entity that rings you up is actually allowed to talk to you. What we've done on the Red Hat side for that, in our brokers, is they do a subject access review on a special verb that basically checks whether the catalog itself — whether its token — is allowed to talk to the broker. And that's another thing that we want to push into this layer, so that you don't just get a skeleton that gets you 80% of the way there with the REST calls and stuff, but you actually get something that has the right building blocks to develop a production-ready broker.
C: All right, go ahead. Yeah — so, Paul, this is really cool. Just two things. One: obviously I haven't taken a look at your Makefile yet, but I recommend you make it so that people can build the stuff — build a Docker image — independent of Kubernetes. And related to that: I think you should show this off on the OSB working group calls, because I think they'd find it useful as well.
A: I've actually mentioned it to Matt and Alex, because I was hanging out with them in London before I gave the talk where I showed this; I was planning on showing it tomorrow. And you do not need kube to build the image at all — there's a make image target that just does a docker build. That's a dependency of the target that I ran.

Sounds good.
D: I just wanted to share my experience getting up and running with this, because it went really well and I wanted to call that out. I originally tried this with the Pivotal Cloud Foundry, you know, starter library that they have to kind of help you get up and running.
D: They had one for Go, and I spent like two days on that, just trying to get it the way I wanted and everything. And then Paul pinged me about this and I was able to switch over immediately — copy my code where it belonged — and like five minutes later it was running. So that was a really, really good experience, and I hope, once we figure out how to keep things in sync, that it stays that good. Because the very first thing I did was: I wanted to not have it be, you know, Paul's repo and osb-starter-pack — I wanted to call it whatever it was I was going to call it — and that, like, changed every single file in the repo. But I think, hopefully, when we split it — I don't want to speak for Paul — but hopefully when we split it, it would be a lot easier to keep up to date and take syncs. And until we get to that point, if you submit more stuff upstream, it's kind of hard for people to consume it.

Yeah.
E: So with connection timeouts and 500s, I think we do orphan mitigation; and for bad request, we don't do anything — we just give up. And the problem is that once you update the spec — so service catalog does accept the updated spec, but it doesn't do anything with it; it just basically checks that the status is a terminal error, and does nothing. So it was really confusing to me. So, first of all, I don't know why we accept this spec update at all.
E: If we see that this instance is stuck like that, we can't really do anything about it, right? So for me, from the user-experience side, it wasn't clear why nothing was happening. I kept wondering whether it was some bug in service catalog, and then I went through the code, and there was actually a comment saying that any non-successful HTTP status is treated as a terminal error. And it wasn't clear — I don't know.
E: Maybe we need to document that somewhere more clearly. And, ultimately, as I said in the issue, I would probably want to actually be able to retry provisioning without having to create a new instance every time. And there are some other issues where, for some reason, deletion gets stuck and we can't really do anything about it, other than —
E: I just recently learned that you can just delete the finalizer and force it through that way, but that's still probably not the best UX. So yeah, for us — because we're using the Smith project on top of service catalog, and it manages the ServiceInstance resource objects declaratively.
E: So there is no way to actually tell Smith that it needs to delete the old instance and create a new one, because it just sees the existing one and retries the update. So we have to go and manually delete the instance first, and only after that will Smith be able to create a new one. So it's really confusing from our point of view.
E: So only the success path currently works, more or less, for us; and if anything fails, we basically just have to manually clean up everything before proceeding. So it's a really, really annoying part for us, and I think that we should — I will try to describe these issues in more detail and take some of them to fix.
A: Yeah. So I'm trying to spool back into my head the context around why we made the decisions that we did. I think that you're correct that at least part of this is working as designed — I do recall having a discussion where we said that terminal errors should just be terminal, and you've got to clean them up and resubmit resources.
A: So we have some bugs to represent these same things, that we have upstream issues for in the incubator repo. And I sat down last week and was like, I'm gonna make force delete work — and at the end of that thread of trying to make force delete work, I can share some information. I actually created an issue for this that has the information, so I won't just be shouting into the wind or anything; I'll link the issue in the chat here.
A: There is code for pods, for example, that makes force deletion do what it does for pods when you use force deletion — or there's another switch, confusingly, that you can send to delete, called --now — but literally nothing is differentiated in any way about the request that actually goes to the API server. So I think, in the future, perhaps what we could do is put a command into svcat that would do the finalizer thing for you. However, I think we need to be super careful about that, and this is —
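Mechanically, "the finalizer thing" amounts to clearing the catalog's finalizer from the stuck object's metadata so the API server can actually remove it. A minimal sketch of that edit, with object metadata reduced to a plain string slice — the finalizer name used here is illustrative, not a guaranteed match for what service-catalog sets:

```go
package main

import "fmt"

// catalogFinalizer is an assumed name for the finalizer that service-catalog
// places on instances and bindings; treat it as a placeholder.
const catalogFinalizer = "kubernetes-incubator/service-catalog"

// removeFinalizer returns the finalizer list without the given entry.
// Once the list is empty, the API server is free to delete the object.
func removeFinalizer(finalizers []string, target string) []string {
	out := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if f != target {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	before := []string{catalogFinalizer, "other-controller"}
	fmt.Println(removeFinalizer(before, catalogFinalizer))
}
```

The caution voiced above is warranted: skipping the finalizer skips whatever cleanup the controller was waiting on, which is exactly why a hypothetical svcat force-delete command would need guard rails.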
B: Yeah, I'll just talk about the recursive part. So we talked about a specific case where we wanted to recursively delete — like, if you delete an instance, maybe recursively delete all the instance's child bindings — and then we got into the discussion of what happens if unbinding fails.
B: Then we have this, like, Frankenstein mix of recursive and force, and that seems like a really good candidate to kind of bundle in with this solution that you're talking about, Pollyanna. But, at the same time, I feel like maybe we should just bite off force delete for a single instance — probably just for bindings, because then we don't have to consider recursion — and then from there we've got a foundation to talk about what we do about recursive deletes.
E: First of all, about the UX: I think one of the things we can do better is to not just have a walkthrough, but also, in the status — apart from the flag — maybe have a status message or something, where we can have a more descriptive way of saying: hey, we're actually going to delete this binding.
E: Just wait a little bit. So currently it just says that the binding was successfully provisioned, and when you delete it, it just has a flag like "deletion required" or something like that, and it's not really clear that service catalog has picked up the deletion and will proceed soon. So I think that's a place where we can improve as well. And back to the initial issue I have been talking about — so, for connection timeouts and some other issues:
E: We do need orphan mitigation, according to the OSB spec. But after the orphan mitigation has succeeded, is it okay, for everyone, to reuse the ID to provision a new instance? As far as I understand, the reasoning behind not retrying was that if your initial provision has failed, it's basically: okay, this instance has never existed, and you want a completely new, independent one.
E: We don't have any interest in that; I guess we're okay with reusing the UUIDs, retrying again and again, if something breaks at initial provision. So, obviously, if you have deprovisioned a previously successfully provisioned instance, maybe reusing its ID is not a good idea; but if you have never had a successful operation, it should be fine, I guess.
B: Yeah, I was just gonna basically echo that. I think it's kind of like docs in layers: we've got a workflow — sorry, a walkthrough — doc; we can probably put it in, like, another reference doc somewhere; we can put it into a status; and we can probably also have svcat look for that status and have it actually spell out that the deletion is in progress.
F: Yeah, I wanted to speak to the idea of reusing that ID, with respect to the OSB spec. I think it's not clear in the OSB spec whether you can ever reuse that ID. And to the point of "okay, maybe it's all right to reuse that ID if an instance has never actually been successfully provisioned" — the whole point of orphan mitigation is that we don't know if it was ever successfully provisioned.
E: Yeah, you're right that we never know — the purpose of orphan mitigation is that we don't know whether it was actually a successful provision. But from the service catalog point of view, I guess we never really propagated the instance coordinates, or, like, never created bindings for the instance, right? So even if the instance has been present in the past, it basically hasn't ever been used for anything.
C: Go ahead. So I think on timeouts you're supposed to do orphan mitigation, per the spec — just thought I'd throw that out there. But the reason I got my hand up was: either Nail or Matthew — can one of you guys open up an issue in the Open Service Broker spec repo, so there can be some clarity on whether we can reuse IDs or not, just to force the issue? Because whether we can or not, I don't have an opinion on it.
F: Right. I think the timeout part, where we're actually just not able to make a successful HTTP connection, is one case that may be easily solved; but I think there are other use cases where we may want to do retries. For example, the user just didn't put the right information in the spec and wants to update it, rather than creating a whole new instance, to fix whatever kind of provisioning issue they may have had with that instance.
A: Default fields in status have their own complications — I'm trying to spool this stuff back into memory now. I wonder if this might be something that we would be well served to have, like, a call about, maybe on Wednesday, and maybe we can pick a couple of these things and drill into them, and just kind of tiger-team making some of the big rocks go away on the most common non-happy-path things.
B: It sounds like we've got timeouts, and then 400s and 500s. Timeouts are kind of unclear — that's what the action item is for. And then we know, for the 400s — oh yeah, Nail, you wrote it out: 400s don't need orphan mitigation, but 500s do. So I guess we probably should talk about, like, not only "are we doing it, and are we to spec" — which I think we are — but also what the message should be in the status when you're trying to do those things. And I'll stop now; go ahead.
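The rule just summarized — no response or a 5xx means orphan mitigation, a 4xx is a terminal rejection with nothing to clean up — can be condensed into a small decision function. This is a simplified sketch of the behavior discussed, not the actual service-catalog controller code:

```go
package main

import "fmt"

// action is the controller's follow-up after a failed provision call.
type action int

const (
	orphanMitigate  action = iota // send a deprovision to clean up a possible orphan
	terminalFailure               // record the failure and give up
)

// classify maps a provision outcome to an action. timedOut covers the case
// where no HTTP response was received at all, so we cannot know whether the
// broker created anything — hence orphan mitigation.
func classify(timedOut bool, statusCode int) action {
	if timedOut || statusCode >= 500 {
		return orphanMitigate
	}
	// A 4xx means the broker rejected the request; nothing was created.
	return terminalFailure
}

func main() {
	fmt.Println(classify(true, 0) == orphanMitigate)    // timeout: mitigate
	fmt.Println(classify(false, 500) == orphanMitigate) // 500: mitigate
	fmt.Println(classify(false, 400) == terminalFailure) // 400: terminal
}
```

The open question from the discussion is not this mapping itself, but what happens after it — whether the controller should then permit a retry, and what the status message should say.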
E: So I think that we can discuss, for a particular response status — like a connection timeout, for example — whether we need to do the orphan mitigation. But I think the bigger question is: once we have either finished orphan mitigation, or, in the case of a bad request, just reported that something is wrong with the spec — after that, currently, service catalog just gives up. My suggestion is that we should be able to retry. Like, in the case of a connection timeout —
E: We probably don't even need a resource version bump; we can just retry. And in the case of a 500 internal server error, probably as well — I'm not sure. And in the case of a bad request — for example, where the spec has to be changed — we need to track whether the resource version has been bumped, and if it has, we should be able to retry. That's, like, my naive point of view.
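The retry policy proposed here can be sketched as another small predicate: timeouts (and perhaps 500s) may be retried as-is, while a bad request is only worth retrying once the user has actually changed the spec, which Kubernetes surfaces as a bumped generation. The function and its parameters are an illustration of the proposal, not existing service-catalog behavior:

```go
package main

import "fmt"

// shouldRetry implements the suggested policy. observedGeneration is the
// spec generation the controller last acted on; currentGeneration is the
// object's generation now. A bump means the user edited the spec.
func shouldRetry(timedOut bool, statusCode int, observedGeneration, currentGeneration int64) bool {
	if timedOut || statusCode >= 500 {
		// Transient or broker-side failure: retrying with the same spec is sensible.
		return true
	}
	if statusCode >= 400 {
		// The broker rejected the spec; only retry after the user fixed it.
		return currentGeneration > observedGeneration
	}
	return false
}

func main() {
	fmt.Println(shouldRetry(true, 0, 1, 1))   // timeout: retry
	fmt.Println(shouldRetry(false, 400, 1, 1)) // 400, spec unchanged: don't
	fmt.Println(shouldRetry(false, 400, 1, 2)) // 400, spec edited: retry
}
```

This is also where the objection raised next in the meeting applies: whether such a retry may reuse the same instance ID depends on whether the broker treats provisioning idempotently.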
C: I would actually recommend — if I understand the complete scenario — that we probably don't want to do that. Because if the request actually did go all the way through, and the back end did actually create something with that ID, a subsequent request with a new ID is just going to leave orphans, which would be bad because it could cost money. But a new request with the exact same ID should find the same resource again and just say: oh, we're already finished.
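The idempotency argument here — same ID, same parameters should mean "already finished" rather than a duplicate — can be sketched with an in-memory stand-in for the broker's back end. The broker type and conflict rule below are illustrative, not service-catalog or OSB library code:

```go
package main

import (
	"errors"
	"fmt"
)

// broker simulates a back end that treats provision IDs idempotently.
type broker struct {
	instances map[string]string // instanceID -> plan
}

func newBroker() *broker {
	return &broker{instances: map[string]string{}}
}

// provision reports alreadyExists=true when the same ID arrives again with
// identical parameters — the "oh, we're already finished" case — and errors
// on the same ID with different parameters.
func (b *broker) provision(id, plan string) (alreadyExists bool, err error) {
	if existing, ok := b.instances[id]; ok {
		if existing == plan {
			return true, nil
		}
		return false, errors.New("conflict: same ID with different parameters")
	}
	b.instances[id] = plan
	return false, nil
}

func main() {
	b := newBroker()
	b.provision("abc-123", "small")
	again, _ := b.provision("abc-123", "small") // retry with the same ID
	fmt.Println(again)
}
```

Under this model, retrying with the original ID is safe, while retrying with a fresh ID risks exactly the billable orphan described above.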
E: So the problem with orphan mitigation is that it is implemented in such a way that service catalog just sends a delete request. So there is no difference between successfully provisioning something, not knowing about it, and deprovisioning it, versus the case where you have actually successfully provisioned it, created bindings, whatever, and then deprovisioned it. And I guess that's what Matt is trying to say: from the OSB point of view, we can't distinguish the case where we're trying to orphan-mitigate from the case where we actually deprovision something previously provisioned.
B: Yeah, I see both sides of this. So, just stepping back for a sec, I wanted to do a time check: we've got 15 minutes. I suspect that the final agenda item here is not going to take all 15 minutes, but I want to give it a fair shot. So I think it's probably best if someone kind of leads the organization of a smaller discussion — probably on Wednesday, somewhere around there — so we can drill in more and actually make a plan for what to do. Any volunteers?
A: I mostly wanted to remind folks that we did groom the 0.2.0 milestone in GitHub, and I'll put a link to that. And I would say that, if you are interested in contributing: the last time I looked at this — which is not right now; it hasn't loaded yet, but I'm about to look at it again — it looked like there are a fair number of issues there that are unaccounted for, that need somebody to drive them.
B: Cool. I'll say my piece here; it's pretty different, but related. So I think last month I promised to make up a list of sort of the big requirements for GA. I apologize greatly for letting that completely slip out of my mind. I want to check in, though, again, and see if something like that would actually be helpful for people to know where we stand — not necessarily for 0.2.0, but for the longer term: how do we get to 1.0?
B: Any other opinions? Okay, I will get that done this week, then. I'd like to keep it a living document, so I'm going to put it into a Google Doc and I'll link it from here, the Google Group, etc. I want to keep track of where we are, so we can get a sense of how many milestones we have to go between now and 1.0. So that's the, sort of, you know, transparency around my intentions here. So that's all for me. Any other comments, questions, etc.?