From YouTube: SIG Cluster Lifecycle 2021-08-10
A
The idea is to present the overall SIG picture again, like we do most of the time at KubeCon events, but this time we would also like to reserve most of the time for something like a birds-of-a-feather open discussion with the subprojects, and potentially with people who have questions about how we do technical organization, things like that.
A
So if you are interested in this, please join the session. It's going to be at 2:30 p.m. on October 15th, and I also encourage the subproject maintainers to join: Cluster API, minikube, kubespray, etc.
A
Apparently this is a really bad time for Europe. I calculated roughly that it's going to be around midnight for me on a Friday evening, so I'm not sure if I'm going to make it. Hopefully I can; I'm not sure about Tim.
B
Yeah, that's one topic that will definitely be better the more people are there in person, and I think some of the recent events have made that look less likely. So that is a little disappointing, but I guess we can cross our fingers and see how it goes. Yeah.
B
That would be cool, yes. I hope we hold there and it doesn't get more restricted, as it were.
A
Yeah,
so
this
is
the
the
main.
The
basic
talk
that
we
have.
I
also
saw
that
mini
cube.
Have
a
deep
dive
is
going
to
be
presented
by
media.
A
Let's
see
it's
on
the
october
15th
of
4
30
p.m.
It's
like
a
deep
dive
for
five
years
of
mini
cube
if
you're
interested
in
the
general
topic
of
maintaining
a
project
for
so
long
you,
you
should
definitely
check
this
out.
A
Okay, so this one is about the topic of how to do a Kubernetes distribution, I guess.
B
It's certainly something that we have been pushed towards, and I think that is how we are theming the update. You know, we're doing a lot of this combination and end-to-end testing, and it does feel like we're heading in that direction. I'm not one of the speakers for this, by the way, but this is where we are; that's the topic.
A
Cool, there are also four speakers. This is very interesting.

A
So, do we have any subproject sessions? Does anybody know if we submitted any sessions, for example for Cluster API? Anything?
C
Yeah, I was surprised as well. I was on PTO when the deadline was, and I don't know why no one submitted anything, but yeah, probably the email got ignored or something like that.
A
Yeah, I certainly emailed the mailing list; I guess nobody just took the initiative. It would be nice, not only for Cluster API but for all of our projects, to submit something, and not to consider it just a status update; moreover, gathering more contributors is always nice at these events.
A
All right, I think that's all for the KubeCon topic. Does anybody have any comments or questions about the event overall? I already registered, by the way; I'm not sure if anybody else has done that.
A
All right, moving to subproject updates. I added one for kubeadm: 1.22 was released and it went out pretty smoothly. I haven't seen any major issues yet, but you never know.
A
The
situation
is
pretty
stable
at
this
point.
One
consistent
problem
that
we're
seeing
is
that
users
are
still
confused
about
c
group
drivers
and
we
are
just
a
lot
of
tickets
pop
up.
We
made
a
change
in
cube
adm
to
no
longer
how
to
detect
the
driver
for
docker
to
give
her
history.
Some
history
here
see
the
c
group
drive
of
container
d
was
never
out
to
detect
it,
but
we
have
this
little
auto
detection
for
docker.
So
if
you
have
docker,
we
detect
that
you
have
docker.
A
But you know, we are closing tickets, explaining how to resolve the issue, linking to the troubleshooting guide, and that's what we can do for now. Until the CRI implementers coordinate with the kubelet on how to properly manage the drivers, it shouldn't be on the deployer or the user; it's just really bad UX.
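[Editor's note: the fix the speaker alludes to, in kubeadm's documented approach, is to set the kubelet's cgroup driver explicitly rather than rely on auto-detection. A rough illustrative fragment, not read out in the meeting; the version values are placeholders:]

```yaml
# Sketch of a kubeadm configuration that pins the kubelet cgroup driver
# explicitly instead of relying on runtime auto-detection.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match the cgroup driver the container runtime (e.g. containerd) uses.
cgroupDriver: systemd
```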
A
So that's all the updates I have for kubeadm. Does anybody have any questions?
B
Right, moving to kOps. I think the perpetual theme of kOps releases has been trying to catch up, and we are actually very close. We have the 1.22 release basically more or less ready to go; we just had one issue. I was actually struggling to look up what that issue was; I can't remember off the top of my head what it is.
B
She
was,
but
we
had
one
issue
which
gave
us
some
doubt,
and
then
we
we
decided
that
also
we
generally
were,
did
not
have
enough
confidence
in
generally
to
release
the
beta.
So
say
there
was
definitely
better
quality.
B
So,
oh,
yes,
the
issue
was
around
aws
load,
balancer
controllers,
a
sort
of
fairly
niche
add-on,
but
we
wanted
to
make
sure
that
was
squared
away
and
just
generally
check
on
all
the
other
add-ons
our
next
checkpoint
for
that
is
on
friday,
but
we
are
pretty
close
to
being
on
track
so
and
there's
a
bunch
of
contributors
sort
of
making
a
collective
decision.
So
I
think
that
that
release
process
is
in
in
fairly
good
shape.
I
would
say.
A
That's
nice
yeah,
you
know
oftentimes.
There
are
these
last-minute
problems
that
have
to
be
resolved,
but
it's
good
to
hear
that
cops
is
on
track
with
the
also
you
know
following
closely
the
kubernetes
releases,
so.
D
Yeah, there are etcdadm updates. There's an existing PR, which I think is up to date now and which I'm reviewing, to run etcd as a static pod, and that also adds support to etcdadm for pulling etcd images, as opposed to the binary releases from GitHub. That's something that will be useful in particular for the external etcd provider for Cluster API.
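[Editor's note: for readers unfamiliar with the approach, "running etcd as a static pod" means dropping a pod manifest into the kubelet's static-pod directory. A purely illustrative sketch; the paths, image tag, and flags are placeholders, not the actual etcdadm-generated manifest:]

```yaml
# /etc/kubernetes/manifests/etcd.yaml (illustrative only)
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    # Pulled as a container image rather than a GitHub binary release.
    image: k8s.gcr.io/etcd:3.5.0-0
    command:
    - etcd
    - --data-dir=/var/lib/etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
```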
D
So that's there, if anybody's curious, either to see how they might be able to use it or to review it. And then there's also a PR for a user-defined, I guess that's what I should say, bind address, as opposed to the one that etcdadm derives from, I think, the default route. So that's something useful for anyone with, you know, a slightly different setup.
A
Yeah, we have this in kubeadm, but the kubelet also has it; you can potentially pin the bind address in most of these Kubernetes components. One problem we've seen is that users don't usually have a good guide on how to set up the topology of the cluster, the network topology.
A
They
don't
know
how
to
set
up
the
topology
properly
so
that
the
default
address
derived
from
the
standard
road
is
one
that
suits
their
syrup,
because
if,
if
you
start
pinning
addresses,
you
now
enter
like
a
weird
scenario
where,
if
you
move
the
cluster,
you
have
to
add
new
addresses,
it's.
A
It
becomes
difficult
to
maintain
and
I
was
asking
actually
seek
network
one
of
the
sig
network
maintainers
to
help
with
the
little
guide,
to
explain
to
users
how
you
can
actually
set
up
your
network
in
a
way
that
conforms
kubernetes
components,
and
you
don't
have
to
pin
all
the
buying
addresses,
because
imagine
that
okay,
you
you
hcdm
is
just
one
component,
but
imagine
like
cube
api
server,
cube
control
manager.
Kubelet,
all
those
components
have
bind
addresses
if
you,
if
your
network
interfaces
are
strange,
you
have
to
potentially
update
all
the
components.
A
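[Editor's note: to make the pinning problem concrete, here is a hypothetical kubeadm fragment that pins addresses explicitly. 10.0.0.5 is a placeholder; every such field is one more thing to update if the cluster's network topology changes, which is the maintenance burden being described:]

```yaml
# Illustrative kubeadm configuration with pinned addresses.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  # API server advertise address pinned instead of derived from the default route.
  advertiseAddress: 10.0.0.5
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    extraArgs:
      # etcd client listen address also pinned to the same interface.
      listen-client-urls: https://10.0.0.5:2379
```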
D
Is there a link, by any chance? Something that I can just... I don't know if you have a Google Doc or something floating around.
A
Oh,
I
think,
there's
there
was
a
discussion
on
the
blog
post.
I
can
try
facebook
for
finishing
it
for
you
now.
Let
me
see
if
by
if
my
links
are
my
skills.
A
So this is the link. Basically, the discussion was with a contributor who wanted to add a blog post on how to set up Kubernetes with custom bind addresses for all the components, and we said that that's actually not a recommended thing to do with Kubernetes.
A
You can check the discussion. Basically, I said that maybe somebody should write a guide for that eventually, but yeah, we shall see.
C
Yeah, so I think what is interesting for this group is that in last week's meeting people were especially interested in working to define the roadmap to graduate our API to beta1.
C
As you know, Cluster API is already used by many companies, so it's kind of production grade, even if our API is still alpha. There are many reasons why we are conservative in graduating our API, mostly because we feel that some features are still missing. But one of the biggest missing features is being addressed, which is the work on ClusterClass, which is going to simplify the UX. So, let me say, the blockers for graduating are moving away, and it's time to restart this discussion.
C
If you are interested: I assume this discussion is going to restart. I don't know if it will happen tomorrow, because we are in August and some people are taking PTO, but I think that in one of the next meetings the discussion will start.
A
I was going to comment with a bit of a rant on that particular discussion, but I held back. Basically, I think Kubernetes is following the policy of infinite beta, never-ending beta, and so could Cluster API: all of these APIs, in core, in kubeadm, all of the APIs that we have, are beta, and potentially people can iterate faster. You could just call the thing v1, but that means that you have to maintain it for longer.
A
But
if
you
look
at
the
picture,
you're
already
maintaining
something
for
like
alpha
api
for
such
a
long
time
that
it's
so
already
like
production
great,
so
it
could
be
like
v1
already
but
yeah,
it's
a
it's
just
a
defense
for
the
maintainers.
I
can
see
that
just
it's
very,
very
weird
what
we
are
doing
covering
dishonesty.
C
Yeah, I hear your point. I think that in Cluster API we are kind of in the middle of this step already, because we are already providing conversions.
C
We ensure the guarantees, and we actually ensure upgradability as much as possible, stuff like that, so we are kind of already acting like we are beta. But at the same time, we know that the API surface, the types, need some improvement for ClusterClass, load balancers, stuff like that. So yeah, this is why; but I got your point, it is the same.
C
I'm
going
to
say
complain
that
that
we
get
from
some
users
that
they
ask
why
the
cluster
apis
has
a
huge
adoption,
but
it
is
still
still
the
api
is
still
alpha.
I
think
that
yeah,
it
is
time
to
move
this
to
move
there
or
the
originally
the
beginning
of
this
year.
The
plan
was
to
get
beta
1
for
before
kubercon,
but
then
for
a
reason,
this
was
not
possible.
C
So
we
have
to
reconvene
on
on
on
the
next
possible
date
and
the
next
viable
date.
A
Yes, it's also going to be strange, something that kubeadm is doing: the CLI of kubeadm is supposedly GA, so it definitely has a major version, but the API is beta. The same is going to happen for Cluster API: there is going to be a v1 tag in the repo, but it's going to be a beta API that the providers consume.
C
Yeah,
but,
but
that
that's
let
me
say,
a
kind
of
common
misunderstanding
is
that,
where
people
are
are
mixing
are
mixing
up
the
the
maturity
of
a
project
with
the
maturity
of
one
of
the
its
component
in
cluster
api,
we
we
have,
let
me
say
many
apis.
Now
we
try
to
keep
it
simple,
keep
all
demo
all
together,
but
at
the
end
we
have
many
apis.
C
So
we
with
the
project
going.
I
think
that
it
is
more
reasonable
and
make
a
lot
of
sense
to
have
many
to
declare
component.
The
component
feature
by
feature
like
we
do
with
feature
gate
or
like
we
do
with
api
groups
the
maturity
of
one
feature.
So
I
think
that
it
is
not
reasonable
for
a
complex
system
to
assume
that
everything
has
the
same
maturity
it.
C
But
I
understand
that
for
users
this
is
kind
of
confusing
confusing
for
cluster
api,
which
is
where
the
api
is
so
relevant
is
a
little
bit
even
more
confusing
but
yeah.
I
I
got
your
point.
I
think
that
the
the
end
state
is
not
everything
is,
is
not
have
everything
at
the
same
maturity
level,
but
to
have
as
much
as
possible
at
the
same
maturity
level
of
the
entire
project.
A
Yeah, obviously, as long as this is documented properly, people should be able to understand it. Maybe this is a topic for the Cluster API meeting. Also, about our prior discussions on subproject health: do you think that the Cluster API project should periodically ask all of these infrastructure providers if they are healthy?
C
And it does not dictate how they work, or, let me say, is not responsible for how they work. Also, we have to support in the same way providers which are hosted by the SIG and providers which are not hosted by the SIG; we, for instance, cut a release for all of them, no matter if they are SIG-hosted or not.
C
This is up for discussion, but I think that, given that we have a connection at least with the providers that show up in the meetings, we can help the SIG in this effort.
A
That is true. It's definitely the SIG that is responsible for hosting subprojects.
A
So if we're not doing this in this session, maybe it should be in the Cluster API session, but maybe there's not enough time to do it there. Yeah.
C
But I think that it is perfectly fine if the SIG goes on and tries to make this a small assessment of project health, like the steering committee is doing at the top level of Kubernetes.
C
I think it is perfectly fine that during the Cluster API meeting, let me say, we echo this initiative and we ask all the providers to do their part, because they are part of the SIG, at the end.
C
Okay, I will open an issue and try to find someone who has the rights to archive the project. I'm not sure I have the right to archive it, but yeah.
C
kubemark is also still... yeah, this one is an interesting one, because there is interest in working on something related to kubemark, but I don't have the bandwidth to maintain a separate provider. So I proposed to Michael McCune to get something moving inside CAPD and then eventually move it to a separate provider.
C
If
there
is
a
need,
because
yeah
adding
something
to
cup
d
is
much
easier,
you
don't
have
to
replicate
all
the
scaffolding
to
manage
a
different
project
and
stuff
like
that,
and
with
these
I
can
help
with
maintaining
us
as
some
separated
provider.
I
don't
have
a
bandwidth,
so
now
we
are
still
decided.
You
know
something
will
happen.
I
I
don't
know
if
it
will
happen
in
cup
of
tea
or
in
this
program
either.
A
OpenStack is active; Packet is active. I think Jason wanted to move this to a new repository with a rename, but I don't know what the status of that is; he hasn't commented on that topic recently.
A
Right, thanks all for joining, and we'll see you in a couple of weeks. Bye. Thank you.