From YouTube: SIG Service Catalog 20170306
Okay, all right, now, yep, okay. So Jess was going to demo pod presets, which is what pod injection policy is currently called, but she is busy with some stuff for GCP Next, so she's going to do that for us next week. I am going to show... I'm going to show delete broker and unbind. I hit a couple hiccups with deprovision that I didn't have time to fix before the meeting, so I'll just go right into that. Actually, can you see my terminal window?
So if you delete a broker now, we accept the delete and set the deletion timestamp on the broker, and then the controller goes and cleans up the service classes associated with the broker. There's some additional follow-on work that we're going to need to do to make sure that, like, you can't go and make a new instance for a service class that is going to get cleaned up, stuff like that, but the mechanics of the finalization are in. So if I do delete broker... ups-broker... let's see. All right, it's already been finalized and deleted. No service classes. So what happened was: I deleted that broker, and the API server accepted it and set the deletion timestamp on it, but didn't actually delete it. The controller saw an update event where the deletion timestamp was set, went and deleted the service classes for that broker, then cleared the finalizer out of the object's metadata and did an update on it. When the API server saw that update, and the deletion timestamp was set and the finalizer list was empty, it went and deleted the broker.
So we've already... I just deleted the binding, and I went to go get it, and it had already been deleted. So if I go and I get the secret, the secret got deleted too. That's unbind. So this is implemented using a pattern that's basically the same as what we did for finalizing brokers. When you delete a binding, the API server accepts the deletion, sets the deletion timestamp on the object, and does an update instead of a delete. The controller sees the deletion timestamp set, goes and does the work it needs to do on unbind, then clears the finalizer list and does an update. The API server, when it sees an update for an object that has the deletion timestamp set and the finalizer list is empty, turns that into a delete and actually deletes the object.
Okay, well, we can give Ville a minute to get his laptop healthy again. I wish it a speedy recovery. Next on the agenda was a demo of the integration tests, but I have not had a chance to get that working on my laptop yet, so I moved that to next week in the agenda. Ville, are you back? Yeah.
Very good. So what I have done is I have installed the GCP broker already. I'm not going to show you the credentials, since this is recorded, but anyway, what I care about is the instance. So if I look at the instance YAML, there's a bunch of normal stuff here, but the thing that really matters for this demo is this parameters section.
This is basically the topic that I asked to be created for me, so that one basically shows you that the parameters went through both ways. I could find a way to go ahead and see where, somewhere in this massive blob, the editor role is encoded in there, but we are not going to look at that. So both for the service instance creation as well as the service binding creation, parameters work just fine and dandy.
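An instance with a parameters section like the one in the demo might look roughly like this; this is a sketch from memory of the v1alpha1 API, and the class name, plan name, and parameter key are hypothetical:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: gcp-pubsub-instance
spec:
  serviceClassName: cloud-pubsub   # hypothetical service class name
  planName: default                # hypothetical plan name
  parameters:
    topic: my-demo-topic           # the topic asked to be created, passed through to the broker
```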
As any sensible human would. Very good. Okay, awesome, thanks a lot, Ville. Thank you.
Okay, so next on the agenda is an overview of the current state, so I'm going to share my screen again, and we're going to go over what is in MVP one that is still open.
So the first thing is some cleanups that Morgan made an issue for. They're not done; I'm assuming that they're small things, I don't remember exactly what they are. So that's what that is. Do I have a volunteer to pick that one up, on the chance anybody's interested in doing that one?
Simon? All right, go in and comment on that one and say you're going to work on it. Okay, so the next one is: determine the proper validation for each type. I think that is right and good to do, and I think I will probably pick it up, so I'm just going to assign myself. Next up is the e2e test for unbind; I think that's handled by Kent's pull request. The next one is: add conditions to the binding resource on unbind; I think that's handled by Kent's pull request too. Let's see, I'm doing a copy of this list.
All right, this looks like potentially some bug or operator error on the chart; we'll get to the bottom of that one. It's already assigned to me. Here's another one that's a bug against the chart that Aaron is taking. The next one is 436, TPR-based API server cannot update status. I think, Aaron, you and I got that one working, so 442 will close momentarily. Okay, the next one is the integration test; I will go ahead and assign that to Jeff. How do you feel about doing this one?
Some of the fake operations in the fake broker API client: Phil, you've got that one too. Async provision and deprovision, also assigned to Phil, I think. Implement unbind is covered by Kent's pull request, and then these next two I understand to be a function of what Ville is currently doing. Is that accurate?
All right, sounds like this one bears a little bit more investigation; it might be obsolete if no one knows what it is. The next one is pretty self-explanatory: codify how async provision and deprovision responses are handled in the catalog controller.
We need to set it up so that later iterations of the sync loop, or, you know, when you do a relist, don't go into the logic for where you are and try to update based on that operation ID. The next one is to add a design section to the developer docs; I actually have something in progress for this right now.
The next one is to talk about what conditions we should have in the API. I think that we, and when I say we I mostly mean myself, because I think I remember being the one that did this, were a little premature in figuring out what conditions we needed to have in the API. So I was discussing a little bit with Jessica Forrester, who's the OpenShift UI lead, about what conditions we think we really need, to finalize that discussion.
I made this issue; that's what that one is. The next one is to add events to resources. One tool that we have beyond status for surfacing information to users in Kubernetes APIs is that we can create events that are associated with specific resources, and I think that we should do this. It'll help you tell the story of what's happened to a broker or an instance or a binding.
A
You
can
create
messages
like
events
with
messages
like
hey
I,
actually
did
the
provision
I'm
working
on
this
part
now
and
one
of
the
nice
things
about
events.
Is
they
have
event
compression?
So
if
you,
if
you
create
the
same
message
over
and
over
again,
you
don't
get
a
new
event.
You
get
like
a
counter
incremented
on
the
event
that
you
have
for
that
one
in
the
timestamp
updated.
So
you
can
see
how
long
ago,
a
certain
message
or
a
certain
event
was
created
again,
which
is
pretty
cool.
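A minimal sketch of how event compression behaves, using plain stand-in types rather than the real Kubernetes event recorder:

```go
package main

import (
	"fmt"
	"time"
)

// event is a stripped-down stand-in for a Kubernetes Event.
type event struct {
	Message       string
	Count         int
	LastTimestamp time.Time
}

// recorder compresses identical messages: a repeated message bumps the
// counter and refreshes the timestamp instead of creating a new event.
type recorder struct {
	events map[string]*event
}

func (r *recorder) record(msg string) *event {
	if e, ok := r.events[msg]; ok {
		e.Count++
		e.LastTimestamp = time.Now()
		return e
	}
	e := &event{Message: msg, Count: 1, LastTimestamp: time.Now()}
	r.events[msg] = e
	return e
}

func main() {
	r := &recorder{events: map[string]*event{}}
	r.record("provision succeeded")
	e := r.record("provision succeeded") // same message: compressed
	fmt.Println(len(r.events), e.Count)  // 1 2
}
```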
So the next one is: enable user impersonation for brokers that target Kubernetes. What this is: if your broker is targeting the Kubernetes cluster that it's deployed into, we need a way to get user information to the broker. There's a couple different ways that we could do it. Doug's pull request for adding a context field to the request, to avoid the need to use Cloud Foundry-specific fields with things like namespaces munged up into them, has landed upstream.
So we could use the context field of the request to handle sending the namespace, user name, and service account name in the request that way, or it could be done as parameters; context is probably more appropriate.
The next one is: enable working with the API aggregator. The aggregator is the thing that basically sits in front of a number of API servers and allows a client to discover the right endpoints to go to for particular APIs, if they're a smart client; or, if they're a dumb client, it can proxy to the APIs. It's basically something that you can use to work with any number of API servers that are aggregated together, that present different APIs, and this issue is to get it working with Service Catalog.
The next one is delegated authentication and authorization back to the core API server. What that is: when you set up Service Catalog in your Kubernetes cluster, you don't want to have to set up special auth for that thing, so we should be able to delegate back to the core API server.
There is code that already does this. It's commented out in master right now, but it is reputed to be quite easy to get working if you're running in the cluster, so Phil and I have that one to get sorted out. We know that there are a couple issues with that working in GKE, and Phil's taking point on getting those sorted out. We can probably start by just establishing that it works in a local cluster and create follow-ons as necessary for special configurations.
The next one is: add a configuration control plane for admins to set things like maximum number of retries, maximum timeout, etc. We actually have the foundation of this already; there's a componentconfig API object that is in our project. We aren't using it yet, but basically this issue is to make it usable and wire it up. The next one is support for the resource formerly known as PIP.
So next up is: host the OpenAPI docs somewhere. What this is: if you go to kubernetes.io and you click on Docs, you can get to the API documentation that shows you, you know, the fields of the API types, stuff like that. We already generate this information, but it's not listed anywhere, and I think it would be really awesome to have that before we have a big influx of folks into this project, which I expect to happen after it's shown at KubeCon.
The next one here is: finish use case documentation, and this is pretty much what it sounds like. I think everybody has agreed in the past that it's easier to ensure that your use case documentation stays up to date from the beginning, and, which might not seem like it, we're still at the beginning of this project. This is basically just to make sure that our use case docs match up with what we're calling MVP one. Next up is: we have a script that verifies that bash scripts use the errexit flag, which, for those of you that don't know bash, means that if a command returns an error, the script exits. That's one of the things that is considered a bash best practice.
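A quick illustration of what errexit does; both subshells run the same commands, but only the one without `set -e` gets past the failing command:

```shell
# With errexit, the subshell aborts at `false`, so only "step1" is printed.
with_errexit=$(bash -c 'set -e; echo step1; false; echo step2' || true)
# Without errexit, execution continues past the failure to "step2".
without_errexit=$(bash -c 'echo step1; false; echo step2')
echo "with errexit:    $with_errexit"
echo "without errexit: $without_errexit"
```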
The next one is: a make target or suggested routine for setting up kubectl when using our API server. What this is: right now you have to maintain either a separate kubeconfig file, or a separate cluster in your existing kubeconfig and switch between them, or otherwise munge around with your kubeconfig setup.
Currently, the image names that we produce are not knowable as being intimately related to Service Catalog from the name; they have really generic names like controller-manager and apiserver, so we should prefix them with the Service Catalog name. The next one is to fix the upstream code generation tools so they generate code that passes lint checks; I think someone from IBM is already working on those. The next one is to fix the Service Catalog link verification; I actually think this one is done, but we can...
We can go take a pass through that and make sure. The next one is an older issue, actually it's not that old: don't build and deploy everything by default. This is just proposed changes to the make targets and what happens when you run make. I'm sure we can sort through that. Here's another one that's for the makefile: evaluate the clean targets and what each one of them should clean. Next up is: the broker from the contrib examples walk-through doesn't load.
We can actually probably close this one, since that's using the old TPR stuff, which we're not going to release. I'll take that as a takeaway. Next up is to figure out the SSL strategy for deploying this stuff in Kubernetes. Kent and Aaron and I have taken a look at this, mostly Kent; there's some gaps that we need to close, and this issue is to do it.
Next up is further refactors to the base storage interface. I think these are cleanups from Aaron's refactoring; is that accurate, Aaron? Yep. Okay, and then the next one is that big, gnarly rebase onto the Kubernetes 1.6 candidate. I have assigned this to myself, and I wanted to do this this week; I'm starting to think, with the other things that are open, that it's going to be an early-next-week thing, which, I mean...
If someone wants to take this when the time comes, I can point you to where it is, and you can probably reuse that work in the repo and start consuming it from there. The next one is: add additional information to condition messages. If folks have looked at the controller currently, there are some pretty generic user-facing messages on conditions that we should probably add things like names and coordinates of involved objects to; that's what this issue is for. And next is: add utilities to give uniform representations of objects for logging.
So, what you will find in a project that has more than one person working on it, and sometimes even when there's just one person, is that the same object represented in logs will have like three or four different representations. It makes it hard to read the logs, and it makes it really hard to write anything that scrapes logs. We should have just a uniform way that you refer to objects; probably there's like one or two different conditions where you have a short way and then a long way.
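The kind of uniform helper the issue asks for might look like this; a sketch, where the function names and formats are hypothetical, not the project's actual utilities:

```go
package main

import "fmt"

// obj is a minimal stand-in for a namespaced Kubernetes object.
type obj struct {
	Kind      string
	Namespace string
	Name      string
}

// shortRef is the compact representation: "namespace/name".
func shortRef(o obj) string {
	return fmt.Sprintf("%s/%s", o.Namespace, o.Name)
}

// longRef adds the kind, for messages where the short form is ambiguous.
func longRef(o obj) string {
	return fmt.Sprintf("%s %s/%s", o.Kind, o.Namespace, o.Name)
}

func main() {
	b := obj{Kind: "Binding", Namespace: "test-ns", Name: "ups-binding"}
	fmt.Println(shortRef(b)) // test-ns/ups-binding
	fmt.Println(longRef(b))  // Binding test-ns/ups-binding
}
```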
The principle is the same for any object. Whatever, that's what that is.
So that's what we have for MVP. I urge you: if you're out there working on stuff and you find something that's like a cleanup, create an issue, and if you can't set a milestone, suggest that it go into MVP. MVP one is now overdue, so I'd like to not add any more issues to MVP one; let's just put them in MVP two as they're created. So that's what is on the agenda today.