From YouTube: Kubernetes SIG Service Catalog 2019-10-07
C: That's correct. So basically, the first one is just to inform you that on Friday I created a new patch release. There were a few enhancements and one bugfix, so a new release should be created, but, as you can see, I also created an issue that we are not able to release because of timeouts. When you click the first issue, I described in detail what's going on. Basically, right now we are releasing Service Catalog using Travis CI.
C: We have only one job, called deploy, and that job is responsible for building and pushing all our images. I also checked our history, and we were always really close to the job timeout, because on Travis you have only 50 minutes to build and finish your job. So right now we are not able to release because of the timeout. I already created a pull request, basically as a hotfix, so only a short-term solution: I just split our build by architecture.
C: So right now we have six builds, and for that we are using Travis stages. As a long-term solution, what we should do is use Prow. With our own infrastructure we can configure the timeouts, and we can also make the builds concurrent. Right now we can have only two concurrent jobs, and there are also some drawbacks, but basically it will save us for now, so as a hotfix it should work.
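The split described here can be sketched with Travis build stages, one job per architecture. This is only an illustrative fragment; the stage name, architecture list, and make targets are assumptions, not the actual repository configuration:

```yaml
# Hypothetical .travis.yml fragment: one short per-architecture job
# instead of a single long "deploy" job that hits the time limit.
jobs:
  include:
    - stage: deploy
      name: build-and-push-amd64
      script: make build-image push-image ARCH=amd64
    - stage: deploy
      name: build-and-push-arm64
      script: make build-image push-image ARCH=arm64
    - stage: deploy
      name: publish-svcat-cli
      script: make svcat-publish
```

All jobs in one stage run in parallel (subject to the account's concurrency limit), which is why each one stays well under the per-job timeout.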
C: You can also see that I tested that on our forked repository, so in a comment you will see that there is a link to our Travis. Yeah, at the bottom there should be a comment. Yeah, that one. So right now, as you can see, instead of one deploy job we have six of them. (Correct, two or six?) Yeah, six: one for each architecture and one for the svcat CLI. Thanks to that, we are not hitting the timeout. That's the first solution that I have in mind. (Sounds fine, okay.)
C: As I already mentioned, I think we should create one issue, some umbrella issue, to just discuss that thing. Basically, in the past there were, I think, two or three issues describing it, and right now we also have a feature request from our customers to add something like that. But we need to discuss the implementation, how we want to introduce that in Service Catalog. So basically I will create an issue and then ask someone on the Slack channel, or send it to the mailing list, to get some feedback about it. Maybe you already have some feedback.
C: Yeah, that's true. Basically, there is already a first implementation; I already created a pull request for that. In that case we just want to have a feature flag in Service Catalog. When someone has that feature flag enabled, then, when deleting a ServiceInstance, the underlying ServiceBindings will be deleted automatically, and in the future we would make it the default. So that's the question: is there a case where someone wants to somehow keep the current situation, where you are deleting the ServiceInstance...
C: It will be... and that's the thing to discuss, because in our case our PO is saying that he wants to just have the cascading deletion and doesn't care about deleting the bindings manually.
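The behavior being proposed, removing the bindings that belong to an instance when the instance itself is deleted, can be sketched roughly as follows. The types and the `cascadeDelete` helper are hypothetical simplifications, not the actual Service Catalog controller code:

```go
package main

import "fmt"

// Hypothetical, simplified model of a Service Catalog binding.
type ServiceBinding struct {
	Name        string
	InstanceRef string // name of the ServiceInstance this binding points at
}

type Catalog struct {
	Bindings []ServiceBinding
}

// cascadeDelete removes every binding that references the given instance,
// but only when the (hypothetical) feature flag is enabled.
// It returns the names of the bindings that were deleted.
func (c *Catalog) cascadeDelete(instance string, featureEnabled bool) []string {
	if !featureEnabled {
		return nil // current behavior: bindings must be removed manually
	}
	var deleted []string
	var kept []ServiceBinding
	for _, b := range c.Bindings {
		if b.InstanceRef == instance {
			deleted = append(deleted, b.Name)
		} else {
			kept = append(kept, b)
		}
	}
	c.Bindings = kept
	return deleted
}

func main() {
	c := &Catalog{Bindings: []ServiceBinding{
		{Name: "b1", InstanceRef: "db"},
		{Name: "b2", InstanceRef: "cache"},
		{Name: "b3", InstanceRef: "db"},
	}}
	fmt.Println(c.cascadeDelete("db", true)) // [b1 b3]
	fmt.Println(len(c.Bindings))             // 1
}
```

With the flag off, the call is a no-op and the user keeps today's behavior of deleting each binding by hand.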
C: It will not be reverted. The only thing you can do is delete the underlying ServiceBinding, nothing more. You cannot create new bindings for this ServiceInstance, you also cannot upgrade it; you cannot do anything with it, because it's already marked as deleted, and only the finalizer is keeping that instance in etcd storage. So after executing the delete action you can only describe that ServiceInstance, nothing more. And that was the question from my PO: if you cannot revert it, what is the point of having such a blockade, right?
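The state described here, an instance that is marked for deletion but still held in etcd by its finalizer, looks roughly like the manifest fragment below. The object name and timestamp are illustrative, and the finalizer string is my assumption of what Service Catalog sets:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-instance                       # illustrative name
  # set by the API server once the delete request is accepted;
  # from this point on the object is read-only for the user
  deletionTimestamp: "2019-10-07T10:00:00Z"
  # the object stays in etcd until the controller empties this list
  finalizers:
    - kubernetes-incubator/service-catalog
```

Once `deletionTimestamp` is set there is no supported way to clear it, which is exactly why the instance can only be described, never updated or re-bound.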
A: Without putting too fine a point on it, there wasn't really an agreement on a solution to it. We just kind of kept putting it off and putting it off and never really solved it. (Mm-hm.) The simplest ways to solve it are probably to add some very non-standard behavior, like intercepting all delete requests to our object types and replacing them with a flag on the object that says it is to be deleted, and then go do stuff on the back end before marking it as actually deleted. But that's a lot.
C: My proposal was to just create an umbrella issue grouping those issues regarding cascading deletion, implement the new behavior behind a feature flag, and then evaluate it and discuss it with the community; and if there are no objections, then just enable that behavior by default.
C: It would not exactly be breaking the contract, but the current problem is that even if you keep the current contract, right, and block the deletions, it still doesn't save you, because the object is already marked as deleted. So even if you have it, you cannot do anything with it. So it's still not good behavior, right?
C: But yeah, I think that we should somehow finally finish that story, maybe by just having that feature flag together with the umbrella issue and asking the community for feedback: what do you think about it, what are your use cases for it? Maybe then we can somehow go further with that, find some use cases, and then implement the correct behavior. Because right now our customers, our users, are saying it is really problematic: they need to go back to a different view, delete the underlying ServiceBindings, and...
C: It's not a problem; I'm just saying it can be like that at the beginning. But the point of feature flags is basically that after some time the feature should be enabled by default, right? Because a feature gate is only there for features like that one: you have it in alpha and beta, and then it's finally enabled by default, right, I guess. Yeah. So that was the kind of proposal for the evolution of that feature: at the beginning, have it in an alpha state.
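The alpha → beta → default progression mentioned here follows the usual Kubernetes feature-gate convention, where a gate's default flips on as the feature matures. A minimal sketch of that idea; the gate name and stages are illustrative, not Service Catalog's actual gates:

```go
package main

import "fmt"

// Maturity stages in the usual Kubernetes feature-gate convention.
type Stage int

const (
	Alpha Stage = iota // off by default, may still change or be dropped
	Beta               // on by default, interface mostly settled
	GA                 // always on, gate eventually removed
)

// FeatureGate pairs a feature with its maturity stage.
type FeatureGate struct {
	Name  string
	Stage Stage
}

// DefaultEnabled reports whether the feature is on when the
// operator does not set the gate explicitly.
func (f FeatureGate) DefaultEnabled() bool {
	return f.Stage >= Beta
}

func main() {
	// Hypothetical gate for the cascading-deletion behavior.
	g := FeatureGate{Name: "CascadingDeletion", Stage: Alpha}
	fmt.Println(g.DefaultEnabled()) // false while alpha

	g.Stage = Beta // promoted after community feedback
	fmt.Println(g.DefaultEnabled()) // true
}
```

Shipping the change in alpha first means the community can reject it without anyone having been exposed to it by default.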
C: Thanks to that, we can even discard the current approach; but if there are no problems from the community, then maybe in the future we can enable it by default. Okay. Just in case: probably [inaudible] from my team will try to drive this topic, because he also has some use cases, business cases, to share with others, and maybe he will discuss all those things with the community.
C: Not yet, but I think that this week I will have time, because right now only the last release task is on my plate. And do you have some idea about it? Basically, do you want to lead those presentations? Right now we have an intro and a deep dive; for example, do you want to present the intro and the deep dive together, or how do you see it, I think?