From YouTube: SIG Cluster Lifecycle 2021-04-20
A: So this is... I wanted to highlight this KEP for changing the release cadence of Kubernetes. It has been there for a while; it received over 100, okay, maybe 200 comments at this point. A lot of people have commented, everybody's here pretty much. The TL;DR is, I extracted it in a message here: with 1.21, the release cadence is moving to a four-month release cycle, or three releases per year, with the caveat that SIG Release during this process will conduct some surveys to try to get feedback from the consumers of Kubernetes, to figure out if people are happy with the new cadence. Prior discussions in this particular SIG, especially with Tim, showed the preference that we want this change.
A: As somebody who maintains... helps with the maintenance of kubeadm, I would say that three months is kind of difficult for us. Even if it's a relatively small project, we just cannot get all of the things into a release, and the KEP process is getting more and more complicated, especially around production readiness review. A lot of people depend on getting the KEP merged first and then actually working on some of the PRs that they have, and then they are blocked on the reviewers and so on. With the three months, I think the shorter the cycle, the more difficult it is to get the process rolling. So yeah, I think we are generally plus one. Does anybody have comments on this topic?
A: Yeah, so a subtopic here: there used to be a working group, WG LTS, that was hosted under SIG Release. Basically, the result of this working group, before it was closed, was that instead of creating an LTS release of Kubernetes, we just decided to extend the support window. So nowadays we support a Kubernetes release for one year instead of nine months. That is the result of the LTS group, and it is mentioned in this particular KEP as well, but it remains the same: basically, it's a one-year support window; it's just the release cadence that changes.
B: Yeah, I've actually seen a lot of movement on that front, actually, so I think there is hope there as well.
A
There's
a
view
too
called
creole:
did
they
getting
this
tool
to
eventually
replace
anago,
but
still
if
we
move
to
a
distributed
model
where
every
component
lives
in
a
separate
repository
like
tomorrow,
we
cannot
release
kubernetes,
because
the
tooling
for
release
does
not
support
that,
there's,
no
way
to
pull
artifacts
from
multiple
places.
Currently,
everything
is
built
from
kk
and
the
way
alago
is
written.
It's
like
concurrent
bash.
Nobody
understands
how
it
works
so
yeah
they
are
writing
a
two
in
goal
and
hopefully
we
can
start
using
that
for
the
release.
A
All
right,
because
you
have
a
topic.
D: Yeah, just a quick question for this group, mostly just around process. In getting... in prototyping Cluster API for Bottlerocket, our container OS, we found that there are a few things it would be really nice to break out, in terms of the join command for the control plane and for the kubelet; specifically, just adding a few more, like, specific phases instead of bundling quite as much into every phase. There, kubeadm breaks it down into a bunch of different phases for the control plane, but for the kubelet it's kind of all wrapped into one. Because our Bottlerocket OS works slightly differently... we have slightly different file paths, and we do things a little bit differently in terms of just how we operate the OS. It'd be really nice to break those phases out into a few more sort of sub-phases. Is that something that we would, you know, need to open a KEP for? Would that be just a small enough change that we could open a PR for? I guess, even if we did a KEP, would that just be a lighter-weight one, just "hey..."?
A: Break it into a number of sub-phases... I think there was a problem in Cobra, where users would then have to call, say, `kubelet-start all`: we would add a sub-command called `all` so that we can execute all the sub-commands of the, yeah, the kubelet-start command. So I think there was a problem with that: we cannot preserve backwards compatibility. If we break this down into sub-phases today, we have to introduce this `all` sub-command. I'm not sure whether we basically overlooked something; maybe it's possible. But, you know, we had three people looking at it when we were designing the phases, and it was not possible. So, for example, if you do `control-plane-prepare` today, you have to pass `all` there, and that's the only way to execute all of its sub-phases; and another way is to..., which is not ideal.
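The invocation pattern being described can be sketched roughly as follows. This is a hedged example based on the `kubeadm join` phase layout around this release; the endpoint, token, and hash are placeholders, and sub-phase names can vary by version:

```shell
# Running every sub-phase of control-plane-prepare requires the explicit
# "all" sub-command:
kubeadm join phase control-plane-prepare all <api-server-endpoint> \
  --token <bootstrap-token> --discovery-token-ca-cert-hash <hash>

# Individual sub-phases can also be run one at a time:
kubeadm join phase control-plane-prepare download-certs <api-server-endpoint> ...
kubeadm join phase control-plane-prepare control-plane <api-server-endpoint> ...
```

If kubelet-start were split the same way, existing callers of `kubeadm join phase kubelet-start` would have to switch to calling `kubelet-start all`, which is the compatibility break being described.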
D: I can get the details, but I know there were a few steps that just required, like, some sleeps before... like, I think we had to write the static pods, then sleep, and then run bootstrap, so it was in how we passed bootstrap tokens, I think. So for, like, the control plane, it was: create the static pods, because we create them slightly differently, and then we had to just wait for the kubelet to bring those up. Then we had to do some other things.
D: What else was it? Oh, we had to put the kubelet in standalone mode and then set it to not standalone.
D: Oh, I think that's what it was, but that's kind of hacky. Why are we doing that? So, with our Bottlerocket OS, we remap some of the file locations; like, /etc isn't writable in the OS, and so we had to...
A: Okay, maybe you should log an issue with... yeah. I think so. Is this about securing the path permissions, by any chance?

D: No.

A: Yeah, because kubeadm has patches, so you can potentially patch the volume inside the static... the default etcd pod, to point to a custom path for etcd, which means that you can avoid breaking down the kubeadm phases as well, and running in standalone.
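The patches mechanism mentioned here can be sketched as follows. The custom path is hypothetical, but the shape follows kubeadm's static-pod patch support, where a file named after the target component in a `--patches` directory is applied as a strategic merge patch to the generated manifest:

```yaml
# patches/etcd.yaml, applied via "kubeadm init --patches ./patches":
# repoint the etcd data volume at a custom writable path.
spec:
  volumes:
  - name: etcd-data
    hostPath:
      path: /custom/writable/etcd
      type: DirectoryOrCreate
```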
D: ...exclusively to paths? I'll double-check, and I can detail it in an issue. The path was one of the things, but the others were like...
A
Okay,
I
can't
try
follow
it
up
on
that,
to
you
know,
provide
some.
A: Potentially. Basically, we consider the phases GA; that's like an unfortunate artifact of exposing implementation details to users.
A: Once you expose them, it's very difficult to make changes. For instance, given the whole, quote-unquote, "all" sub-command problem: if we decide to add, you know, a separate sub-phase here under control-plane, it's going to be fine; but if we decide to break this down, it's a breaking change, and I see that.
A
Yeah,
I
think
that
that
was
the
problem
and
if
we
can
manage
to
maybe
find
a
magical
solution
that
works
but,
like
I
said
we
have
three
people
looking
at
it
so
yeah
I
mean,
if,
once
you
lock
the
issue,
maybe
there's
a
workaround.
If
there's
no
worker
out,
we
can
investigate
this
cobra
problem
again,
let's
see
if
we
can
do
something.
B: I think it's better to try to make the paths writable in their sort of canonical locations, for example using bind mounts. The reason is daemon sets that do things: like, /opt/cni/bin is the canonical example, or /etc/cni, where, you know, these daemon sets have these expectations that these paths are writable in these sort of standard locations. And so I wish I had done more of that in my various interactions with Kubernetes, rather than, like, tweaking the pods, tweaking all the tooling to, like, find a writable location.
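A minimal sketch of that bind-mount approach (the writable source path is hypothetical; this requires root and would normally be made persistent via /etc/fstab or a systemd mount unit):

```shell
# Back the canonical CNI binary location with a writable directory.
mkdir -p /var/lib/cni-bin
mount --bind /var/lib/cni-bin /opt/cni/bin

# Equivalent persistent entry in /etc/fstab:
# /var/lib/cni-bin  /opt/cni/bin  none  bind  0 0
```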
A: I worry about the phase breakdown that people are doing, because it makes it very difficult for us to make changes to the phases, and the whole problem in Kubernetes where you have a breaking change every minor release... it's not great. So yeah, it's difficult.
A: Moving to subproject updates, I added one for kubeadm. In 1.22 we are adding v1beta3, which is a new API for kubeadm. It's a work in progress; below here you can find the KEP that contains a list of the changes that we're going to do. I can show them quickly.
A
We
may
not
have
time
to
complete
everything,
which
is
which
means
that
we
have
to
update
the
cap
to
you,
know
finalize
it
before
122,
but
here
we
have
a
small
list
of
things
that
we
want
to
do.
The
first
one
is
to
make
the
cuban
api
more
crd
friendly,
which
means
that
we
have
to
add
plus
optional
the
plus
option
attack
in
a
number
of
places
where
we
have
a
mid-empty.
A
Maybe
we
are
going
to
that
add
the
object
meta
to
some
of
the
structures
that
we
have.
This
is
mostly
for
customized.
We
are
still
debating
whether
whether
we
want
to
do
that
a
number
of
projects
that
used
to
use
customize
with
cube
adm
now
use
something
else,
or
maybe
they
don't
use
kubernetes.
A
Even
at
this
point,
so
we
may
have
to
ask
a
question
on
the
some
of
the
mailing
lists
to
see
if
people
are
still
doing
it
because
object
matter
was
really
a
problem
in
customized
modifications
to
some
of
these
structures
but
yeah.
This
is
a
bit
of
a
to
do.
A: Also, skipping phases with config: currently it's possible with a flag, but it's not possible with the config. A lot of requests for this one.
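As a sketch of the difference (the flag exists today; the config field is shown as proposed for v1beta3, and the exact field name may differ from what eventually lands):

```yaml
# Today, skipping phases works only via the CLI flag:
#   kubeadm init --skip-phases=addon/kube-proxy
#
# Proposed config equivalent in v1beta3:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- addon/kube-proxy
```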
A
This
is
like
the
p0
pretty
much
for
us
at
this
point
for
quest
api
from
individual
users
of
cuba
dm
this
is
we
something
we
are
definitely
going
to
do
the
application
structures,
custom
status
is
being
removed.
I
mean
the
dns
type
is
only
going
to
support,
coordinates
at
this
point.
Hypercube
is
being
removed
and
other
changes.
If
we
have
more
time
there's
a
list
in
in
the
in
this
particular
issue
that
they
like
there's
a
list
here.
A
If
you
you
can
look
at
all
the
changes
that
we
have
planned,
but
those
are
war
priority
and
by
introducing
v1
beta3
we're
also
removing
the
old
api
which
is
v1
beta1.
It's
a
v1
beta2
is
not
deprecated,
but
it
will
be
deprecated
in
a
future
release.
Maybe
123
or
later
does
everybody
have
questions
for
kubernetes.
C: Real quick question, and then we can end early... or maybe just a statement. I don't know when the return to normalcy will occur, if it will occur this year, or maybe it'll be next year, I don't know. But I think doing some preparatory work to plan for that would probably be a good thing, because I imagine that it's gonna be Roaring '20s all over again.
C: So what I think: maybe starting the conversation now, for when we have a return to normalcy and an actual KubeCon, might be a good plan.
A: ...you know, presentations and talks to present there, and to see how we can meet in person and things like that; I'm sure a lot of people miss that already. For this particular KubeCon, which is again virtual, I'm going to skip it completely because, unfortunately, it lands on Orthodox Easter in my country, so yeah, I'm going to have to skip it. But I will watch a number of the videos after that.
A
I
don't
know
we
have
a
question
api
presentation.
Does
anybody
else
know
what
we?
What
other
talks
we
have
for
this
one?
I
kind
of
forgot.
C: For the pure virtual one, to be honest, I kind of tuned out. I did the virtual one back in November, and "underwhelmed" is probably an understatement. So, you know, I had more interest in just going on Slack channels and watching YouTube videos than I did in the "engagement", so to speak, if you want to call it that, of a pure virtual conference. So I'm in shrug mode around the pure virtual KubeCon, or just virtual conferences in general.
C: The highlight for me, which actually was kind of fun: there was this virtual trivia night, and I think that was the most fun I had in a virtual conference, just doing the virtual trivia, where there was an orchestrator, and they split off the teams, and they came back together, and they were able to, like, do a full trivia over Zoom. It was actually engaging and a lot of fun; my kids loved it.
A: What was the topic of the trivia? Marvel? Oh wow, yeah, yeah. I would fail this trivia miserably; not that I am going to succeed at DC trivia, but I just don't know comics as much. That sounds wonderful. Maybe Star Wars as well; I think that is a popular trivia topic as well.