From YouTube: Kubernetes SIG Service Catalog 2018-02-05
B: So, as a primer, I had been talking to Eric a little bit out of band, and it sounds like a lot of stuff is happening in the CRD space, and that CRDs eventually seem to be, in my opinion, the correct backing for the sort of resources that we are developing in Service Catalog. I don't think they're quite ready yet, though, and so I wanted Eric to come and actually talk about what the state of the art is for CRDs. Then I thought it would be useful if we could give feedback as to how we're using our API server today and what we would need before we considered migrating. Some of the things that we talked about out of band were validation, auth, and versioning. So I don't know if we want to start off by maybe, Eric, you could summarize your thoughts on those topics.
C: To summarize, what I see as the advantage of using CRDs over aggregation, which I just talked about, is less code that you need to fork and rebase, and it allows the project to add features for you without you having to do anything: when users upgrade to a new Kubernetes, they get the new stuff. The flip side, the advantage for you, if you can get it, is for some of you who have users that you can't get to upgrade their clusters, but you can get them to upgrade your aggregated API server.
C: Then I guess that does allow you to deliver features to them a little faster. That's a trade-off you guys ought to think about. You don't have to run a separate etcd, and there are lower net memory requirements, which some people seem to care about. CRDs can do a lot as of 1.9: there's actually a lot of validation you can do without needing to write any code. You can enforce different types of API date formats.
C
You
have
min
and
Max
values
for
items
and
maximum
links
for
lists
and
require
fields
and
do
pattern
matches
and
require
uniqueness.
So
there's
a
lot
of
stuff.
You
can
do
with
open
api
schemas
currently
in
1.9.
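These validation features are expressed directly on the CRD object. As a minimal sketch, assuming a hypothetical `CronTab` resource (the names here are illustrative, not Service Catalog's), a 1.9-era CRD with an OpenAPI v3 schema might look like:

```yaml
# Hypothetical CRD using the openAPIV3Schema validation added in Kubernetes 1.9.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required: ["cronSpec"]            # required fields
          properties:
            cronSpec:
              type: string
              pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'  # pattern match
            replicas:
              type: integer
              minimum: 1                    # min/max values
              maximum: 10
```

Requests violating the schema are then rejected by the API server at admission time, with no custom code involved.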
C: If those don't meet your needs, you can also do more complex validation using a thing called a webhook, which you install kind of the same way that you install an aggregated API server, but it's a much tighter piece of code. You still have to install a client cert so the API server can call out to it, but instead of doing all the things that an API server does, it just does the validation. You can also do what we call mutation, which allows you to default fields.
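Registering such a validation webhook looks roughly like the following sketch, assuming a hypothetical `foo-validator` Service; the `caBundle` is the certificate piece mentioned above, so the API server can trust the webhook it calls out to:

```yaml
# Hypothetical registration of a validating admission webhook
# (admissionregistration.k8s.io/v1beta1, available in Kubernetes 1.9).
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: validate-foos.example.com
webhooks:
- name: validate-foos.example.com
  rules:
  - apiGroups: ["stable.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["foos"]
  clientConfig:
    service:
      namespace: default
      name: foo-validator      # your webhook Service (hypothetical)
      path: /validate
    caBundle: <base64 CA cert> # CA that signed the webhook's serving cert
```

A `MutatingWebhookConfiguration` for defaulting fields has the same shape, only the kind differs.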
C: Hopefully for 1.10, and if not then I really believe it'll be done by 1.11, they'll have server-side support for multi-versioning, which means that instead of having a conversion routine compiled into an API server, you have a conversion routine compiled into a much simpler webhook, and the API server will just call out to you and say: hey, can you convert foo v1beta1 to foo v1 for me, and your webhook will reply. The status subresource is being worked on, and metadata.generation support I would expect to be ready soon, and then strategic merge patch is under development. So that's kind of what's coming down the pipeline.
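For reference, the conversion exchange described above ("convert foo v1beta1 to foo v1 for me") eventually shipped in later Kubernetes releases as a `ConversionReview` round trip; at the time of this meeting it was still a design doc. A sketch, with hypothetical names:

```yaml
# Sketch of the conversion-webhook exchange as it later shipped;
# the API server POSTs a ConversionReview request to the webhook:
apiVersion: apiextensions.k8s.io/v1
kind: ConversionReview
request:
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"
  desiredAPIVersion: stable.example.com/v1
  objects:
  - apiVersion: stable.example.com/v1beta1
    kind: Foo
    # ...object to convert...
---
# ...and the webhook replies with the converted objects:
apiVersion: apiextensions.k8s.io/v1
kind: ConversionReview
response:
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"
  result: {status: Success}
  convertedObjects:
  - apiVersion: stable.example.com/v1
    kind: Foo
    # ...converted object...
```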
C: Looking to the future, like six months from now, the main reason I would say people would still want to use API aggregation is if they are like a facade in front of some type of storage that's totally different from etcd, like the metrics API, which is a facade in front of something like a time-series database.
A
Okay,
my
hands
up
first,
so
this
is
interesting,
I'm
just
curious.
It
is
this
something
that
you
think
the
broader
crudités
community
like
we
were
to
go
to
sig
architecture
and
said:
hey
API,
aggregation
is,
is
cool
and
all,
but
we're
just
going
to
use
CRT.
So
everything
do
you
think
sig
architectures
say
yeah.
That
should
be
the
strategic
direction
for
people
and
that
apap
aggregation
is
really
only
limited
to
the
situation
you
mentioned
there.
We
had
a
different
type
of
data
store,
or
is
this
more
just
those.
C
Believe
cigar
position
is
that
they
want
more
things
to
be
built
out
of
core
so
whether
using
aggregation
or
CRTs
you're
building
your
business
logic
out
of
the
core,
so
I
think
they
would
agree
with.
They
would
be
indifferent
on
that
point
and
then
I
have
certainly
floated
this
view
by
and
gotten
agreement
from,
Brian
grant
who's,
one
of
the
members
of
Sega
architecture
and
I'm
not
aware
of
any
objections
from
other
members
in
architecture
to
this
view,
but
maybe
I
should
go
talk
to
more
of
them.
That's
a
fair
feedback.
Well,.
C: Another advantage from the architecture standpoint is that it's harder for people to build unproven kinds of APIs with a CRD, because CRDs have a narrower surface area, and it allows the platform to stay more cohesive. I don't know that Service Catalog is necessarily doing that, but as a general point, we'd rather give people that are extending the platform less leeway rather than more, but still meet their needs. Yeah.
E: So one of the major features missing from CRDs, last I checked, was a versioning API, so you can't actually transition from, say, v1alpha1 to v1beta1 to v1 and so on. And last I checked, at the KubeCon in December with the architecture guys (I can't remember exactly who), the answer was that, yes, this is a missing feature and we don't have any plans on fixing this anytime soon.
C: There's a design doc out for how to do versioning by one of my co-workers, and it's gotten good reviews. I mean, I think it's getting reviews from David Eads, sttts, and I think Nikhita, who all work on the CRD code. So I think that sentiment has probably changed since KubeCon. I can try to send a link to the design doc that's being discussed now, if you want to dive into it, or just say: I'm very optimistic we'll add validation soon; sorry, we'll add conversion soon.
C: I will chat a link to the design doc that's being circulated right now. And as far as my team at Google, we have people with headcount allocated to implementing it as soon as we reach consensus on the design. Obviously we want the whole community to participate, but in terms of having enough people to do it, I think that's not going to be a problem.
A: So my hand's up next. You had mentioned that you think we should consider CRDs, I think you said, in a three-to-six-month time frame. Just out of curiosity, if we were to consider switching over now, what features would be missing?
C
There's
a
bug
where
you
can't
set
metadata
regeneration
properly
and
so
until
that's
fixed,
which
might
come
in
a
1.9
patch
release,
but
probably
not
you
will
be
annoying
if
you're,
the
relying
on
metadata
our
generation
in
your
aggregate,
API,
server,
okay,
okay,
thank
you,
never
sure,
I
think
I
talked
to
both
you
Michael
and
V.
Lay
and
you're.
Not
doing
you
don't
have
multiple
versions
yet
or
you
actually
do
have
two
versions
alpha
and
beta.
So
if
you
really
wanted
to
keep
continuity
on
those,
you
couldn't
do
that
yet.
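For context, the multi-version support being discussed here later shipped (Kubernetes 1.11+) as a `versions` list on the CRD. A sketch, with hypothetical names, of serving alpha and beta side by side to keep that continuity:

```yaml
# Sketch of multi-version CRD support as it later shipped in Kubernetes 1.11;
# unavailable at the time of this meeting.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1alpha1
    served: true        # still served, for clients on the old version
    storage: false
  - name: v1beta1
    served: true
    storage: true       # the version objects are persisted as
```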
G
We're
seeing
some
issues
on
our
side
around
cleanup
of
namespaces,
failing
because
people
can't
delete
bindings
or
I
can't
believe
can't
leak
services.
Some
of
this
stuff
has
been
fixed
in
the
latest
work
we've
done
in
Service,
Catalog,
but
I
think
I
think
there's
still
some
holes
around
force
delete
and
in
the
general
you
know,
forged
clean
up
of
resources.
G
We've
got
one
issue
the
666th
has
been
around
for
a
while
it
was.
It
was
kind
of
focused
on
developer
experience,
org
developers
being
able
to
clean
up.
You
know
during
a
test
cycle,
but
I
think
it's
still
useful,
especially
in
our
early
early
days
with
Service,
Catalog
and
resources.
You
know
not
things,
maybe
not
working
out
properly
in
we're,
seeing
users
end
up
in
positions
where
they
can't
get
rid
of
catalog
resources
and
obviously
that's
a
bad
thing.
G
So
I
guess
I'm
just
surfacing
the
issue
one
and
if
any
folks,
any
other
people
are
seeing
issues
being
reported
in
the
field
and
something
I
think
we
probably
ought
to
try
to
get
some
work
on
prior
to
our
our
next
next
release.
Yeah
next
big
big
release,
so
I
guess
I'm
just
kind
of
raising
some
attention
on
it
and
seeing
if,
if
anyone's
available,
to
do
some
work
in
this
area,
okay
pulls
your
hands
up.
Yeah.
B: It's been a pain point for us as well. I'm not exactly sure what the correct approach would be. Right now, I think we set the graceful deletion period to zero, I believe, so it just automatically goes into the final stages of trying to clean up. So I guess: would we want to change that so that the graceful period is a longer interval, and then, if it reaches that point, our code just automatically pulls out the finalizer? I'm not sure what the right approach is here.
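For reference, the mechanism under discussion looks roughly like this on a resource stuck mid-deletion (names hypothetical): the object persists until `metadata.finalizers` is emptied, whether by the controller finishing its cleanup or by a forced path that strips the finalizer after the grace period.

```yaml
# Hypothetical ServiceBinding mid-deletion: deletionTimestamp is set, but the
# object remains until the finalizer is removed by the controller (or by a
# force-delete path).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-binding
  deletionTimestamp: "2018-02-05T17:00:00Z"
  deletionGracePeriodSeconds: 0            # today: immediate final-stage cleanup
  finalizers:
  - kubernetes-incubator/service-catalog   # removed only when unbinding succeeds
```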
H: Yeah, not too much to say about it. It's just the PR for adding PodPreset into the settings API group. This PR has passed through three people, including myself, so all I really did was make sure that CI was passing. There was a small problem with the previous PR that Paul did, so I think it's good to go now, and I would really like to get it in, so that I can start doing the initializer work on top of it. That's pretty much it.
G: Doug, while you're pasting that, I did have, I guess, one other issue that might be worthy of mention. Maybe Kibbles actually has a better answer on this one. As far as you know, the 1.5 Travis build, the v0.1.5 one, pushing the binaries, I think, is failing; it's taking too long. Kibbles, I think you said you were working on teasing apart the Travis job into multiple jobs?
E: I just posted a link to the new issue I created yesterday. We have noticed that, in the case of a connection timeout with the OSB request, Service Catalog currently does the orphan mitigation and then marks the instance or binding as having a terminal error and doesn't retry. Paul has already commented that he agrees that we should probably go to retry on this. What do other people think about it? Maybe there are some other cases where we also need to retry instead of just giving up.
A
Michael
I
seen
your
hands
up
for
something
else,
we'll
get
to
you
in
a
sec.
Okay,
your
hands
up
for
that
too.
Okay
go
ahead;
yeah.
B
I
think
that
I
I
have
a
different
issue
saying
we
should
probably
retry
alright.
Maybe
this,
maybe
it's
a
slightly
different
scope,
but
I
do
agree
with
this.
Is
that
we
probably
should
be
retrying
these
cases,
where,
even
like
five
hundreds
I
feel
like
we
should
give
a
best
effort
to
hope
that
the
server
can
get
its
you
know
stuff
together,
but
sorry
I,
agree.
E
Just
wanted
to
reply
to
Mike
that,
like
I,
think
that's
the
main
problem
with
the
talk
is
that,
because
there
is
no
item,
potency
guarantee
that
every
time
before
you
try,
we
need
to
clean
up
or
like
to
run
the
or
perform
it
occasion.
So,
yes,
that's
probably
the
one
which
complicates
the
case,
because
without
that
we
would
be
able
to
just
which
we
will
retry
until
success
or
something
but
yeah
with
poor
communication.
B: Yeah, so for the 0.1.5 release: I think the deployment's gotten all the way through all but one of the architectures; I can't remember exactly which one. So I guess, what do we want to do? Are we okay with just not having a 0.1.5 for that specific architecture, or do we want to have, like, a patch version? Because we're probably going to release another version in two days, right? So it's probably not too big of an issue.
B: The issue was that the deployment, so our entire build, test, and deploy steps, in the normal case for commits on master, fits within the 50-minute job timeout, but for the special version releases, where we have to deploy for all the architectures, it goes above the 50 minutes. So it's like right on the cusp, I found; and so for 0.1.4, I found that if I just ran it a bunch of times, eventually a build got under the 50 minutes.