From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-01-20
B: No, no, nothing, nothing! I want so.
A: Given that we are not the same team as we used to be, I don't know, a couple of years ago. Nowadays kubeadm is very limited in terms of development power.
A: It's pretty much me, you, Fabrizio, and some developers from China that are very helpful with a lot of PRs that they send, but everyone else is gone, including, you know, folks from SUSE, from VMware. So it's really a struggling best effort. At this point I am not convinced that we should keep the old model with the priorities and things like that, because it's a best effort.
B: They are not in the 1.21 schedule, but they are for 1.22. So I think that kubeadm now is stable, and this is why people are not looking: they are using it. We still have users and we see feedback on the channels, but the development is stable.
A: I would say that at least 60% of those are things that are blocked on discussion, and without any maintainers or developers to talk to, there's not enough quorum to create consensus.
B
This
is
something
interesting.
Maybe
we
can
arrange
some
work
through
our
backlog
review
and
try
to
classify
these
these
taking
these
issues
into
into
some
something
that
goes
into
the
roadmap,
because
currently
in
in
the
robot,
we
have
some.
We
are
we.
We
have
a
list
of
titles
of
titles
which
are
ambitious
because
and
they
require
works,
but.
B: They are interesting: if you think, for instance, of the idea of the kubeadm library, it would be a big opportunity for fixing technical debt, because it would be a refactor, a huge refactor. So let's invest some time, maybe next week, to go through the backlog and try to triage these issues and see if they fit with what we have in mind in the roadmap; at least we'd have a better… we are…
B: An example could be: today we are not able to manage the kubelet serving cert, okay? This is an issue that pops up every now and then. If we consider that we are not able to do this due to how things are, we can call it technical debt. This issue could be cleared if we do the work for the operator, for instance, because if we want to have a kubelet serving cert, we need to be able to rotate it, and we need operators. So that means that we move…
A: Well, tech debt in general will always be present, regardless of the current roadmap, so it's something that will persist one way or the other.
A: So what is your idea about the roadmap in general? Are we going to start discussing in a Google doc, or how are we going to establish the priorities?
B
I
I
I
think
that
that
so
I
always
see
this
process.
First
of
all,
I
see
an
iterative
process,
an
iterative
process
that
initially
has
to
define
a
direction
okay
and
then
in
in
for
info
in
subsequent
iteration.
It
should
start
defining
priorities
and
and
going
down
to
the
tail.
So
when
I,
when
I
think
about
the
roadmap,
first
of
all,
I
I
would
like
to
to
settle
the
direction
so
how
what
will
be
kubert
mean
in
two
years,
three
years
or
whatever,
what
it
will
be
kubernetes
mean
to
2.0.
B
This
is
the
the
the
direction.
Okay,
then
there
is.
We
have
to
start
okay
to
get
there
once
we
agree
that
the
direction
so
where
we
want
to
go,
we
have
to
start
to
define
okay,
what
are
the
the
the
priority
things
to
to
to
get
there?
Let's
mean
that,
for
instance,
we
know
that
a
a
problem
that
that
really
is
really
blocking
most
of
the
next
step
for
kubat
mean
is
the
the
move
of
kubernetes
out
of
the
kk.
B: Possibly we have an idea of a solution, but we need people to make it happen. And as soon as we have this kind of clear story, this narrative, we can go to the community, we can ask ContribEx to help us organize some workshops, and we tell the community: kubeadm wants to go there, for the following reasons.
A
At
least
we
have
to
try
so
yeah
all
right.
So,
basically,
in
a
couple
of
weeks,
we
can
have
issue
triage
and
see
what
we
have
move
it
to
the
roadmap
start
planning
on
the
roadmap
and
after
we
see
what
we
have,
maybe
we
should
also
bring
some
people.
You
know
who
previously
worked
on
the
project
to
comment
on
some
of
the
the
bigger
ideas,
possibly
around
the
operator,
and
things
like
that,
so
that
we
are
not
the
only
people
that
make
the
decisions
on
some
of
the
bigger
changes.
B: I think that, on one side, I agree; on the other side, it's been a long time that we have been asking the SIG, in the various SIG Cluster Lifecycle meetings, for feedback on this topic. If these topics are not getting it, maybe people are not interested, or they have different priorities; most probably the second one. And so I think it is fine that we move forward even without having such broad consensus, because we have to kind of break the current cycle.
A: Yeah, I noticed that the moment I set up the Google doc with the rename of the master label, immediately a lot of people commented on it. So as long as we have a doc, at least we'll hopefully get people reviewing, even if they don't join the meetings.
A: So technically we can combine your comment about the cgroup driver stuff with, supposedly, my first item that I wanted to work on.
A: So, given we recommend users to set the systemd driver in the container runtime setup (it's literally in our installation docs for containerd and everywhere), we also say: okay, because this is not the default in the kubelet, you also have to default it to systemd, using either the cgroup driver flag or by passing a kubelet configuration with the cgroupDriver field.
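For reference, the recommendation being discussed looks roughly like this; a sketch based on the upstream container-runtime docs, so exact paths and fields may differ by version:

```yaml
# Sketch: KubeletConfiguration passed to kubeadm (e.g. via `kubeadm init --config`),
# setting the cgroup driver explicitly instead of relying on the kubelet default.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

On the containerd side, the matching setting is `SystemdCgroup = true` under the runc runtime options in `/etc/containerd/config.toml`, so that the runtime and the kubelet agree on a single cgroup manager.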
A: I started poking some maintainers and, you know, gathered some feedback. So kind really wants to switch to systemd all over the place, because apparently, when things start moving over to cgroups v2, to my understanding the cgroupfs driver is no longer going to work, or something like that, but don't quote me on that. They really want to switch anyway, because of the double hierarchy. minikube is already doing it, except for some corner case that is not covered, but they also want to switch. kubespray is pretty much ready.
A: I think GKE, and possibly OpenShift as well, are not using systemd to manage the kubelets, which means that the, sorry, the cgroupfs driver is fine in that case. But it's not fine for kubeadm.
B
Given
that
we
are
basically
recommending
to
the
in
our
packaging,
kubelet
is
deployed
as
a
system
the
unit
and
whatever
we
have
to
make
things
consistent.
That
means
that
applying
the
default
in
the
kubernetes
configuration
is
up
to
kubernetes
okay.
So
I
I'm
fine
with
these.
I'm
only
concerned
to
get
a
proper
story
to
get
to
be
sure
that
the
upgrade
the
changing
the
defaulting
kubert
mean
won't
break
existing
users.
A
This
is
my
my
comment
about
the
in-place
upgrades
and
users
that
you
know
they.
They
have
an
existing
bash
script
to
install
the
container
runtime
and
everything
for
those
users.
A
If
they
are
not
setting
system
d,
obviously
the
complete
will
fail
so
that
that's
the
action
required
for
users
that
are
creating
a
new
cluster
with
their
existing
container
runtime
setup.
A
Basically
kubernetes
in
a
way
is
now
forcing
the
systemd
driver,
but
they
they
can
still
pass
a
corporate
configuration
to
override
the
cube
adm
couplet
configuration
if
they
pass
the
secret.
A: …driver there, it will ultimately override the kubeadm default. So it's an action required; it's not a breaking change that they cannot get out of.
A: But yeah, that's for those users that create new clusters. The problem for the second case that you mentioned is immutable upgrades; here the story is more interesting, and I already joined the image-builder meeting and discussed it with them, so…
B: Sorry, I'm taking notes before moving to immutable upgrades. So, okay, the problem is to make it not disruptive, and you are saying that it requires an action; we should issue an action required.
B: I'm not 100% sure here, so I kindly ask you, or we have to kind of prove or test this, because at least from what I remember from Rosti's PR with regards to…
A: I see what you're saying, okay. So basically Rosti implemented a check, a checksum-based check, to verify if the user's kubelet configuration, for instance, is different from the default. If it's different from the default, the upgrade will not happen; if it's the same as the default, kubeadm will perform the upgrade on it, which means it will generate a new kubelet configuration.
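The gating logic just described can be sketched as follows; this is a hypothetical illustration of the idea, not the actual kubeadm code, and the function names are made up:

```python
import hashlib


def sha256_of(text: str) -> str:
    """Checksum used to compare configs byte-for-byte."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def should_regenerate(user_config: str, generated_default: str) -> bool:
    """Upgrade rewrites the kubelet config only when the user's file
    still matches the default kubeadm would have generated; a
    user-modified file is left alone so the upgrade doesn't clobber it."""
    return sha256_of(user_config) == sha256_of(generated_default)
```

So a pristine config gets regenerated on upgrade, while any edited one blocks the rewrite.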
A
So
in
the
case
of
an
existing
corporate
configuration
that
has
an
empty
value
for
the
c
group
driver.
If
we
want
to
apply
upgrade
on
the
node,
I
need
to
check
whether
the
basically
the
value
will
be
written
so
yeah.
I
think
it
might
be
a
problem
in
that
case,
we
what
we
should
do
if
we
continue
with
this
whole
proposal,
is
what
we
should
do
is
we
should
simply
always
up
patch
sorry,
not
not
patch
the
value
to
systemd.
B: Yeah, but I'm not really sure, because per the Kubernetes documentation for the container runtime implementations, basically, you should not change this on existing clusters.
B
That's
mean
that
if,
if
we
are
on
an
existing
cluster,
if
the
value
is
is
null
that
means
default,
which
is
a
c
group,
we
have
to
keep
a
c
group,
even
if
it
is
not
idea.
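The rule just described could be sketched as a small decision helper; a hypothetical illustration only, with names invented for this example:

```python
from typing import Optional


def effective_cgroup_driver(existing_cluster: bool, configured: Optional[str]) -> str:
    """Pick the cgroup driver kubeadm would write into the kubelet config.

    New clusters get the new default, systemd. On an existing cluster a
    null/unset value historically meant the old kubelet default, cgroupfs,
    and must stay cgroupfs: flipping the driver on a live node breaks it.
    """
    if configured:  # user set it explicitly: honor it
        return configured
    return "cgroupfs" if existing_cluster else "systemd"
```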
A: Yeah, it's doable, it's doable.
B: …configuration for immutable upgrades. I think that the problem is a coordination problem. I want to raise the point also in the Cluster API meeting later today, because basically the problem is that we have to synchronize different things: the change in kubeadm, the change in image-builder, and the creation of the new images, based on those changes, on all the infrastructure providers.
B
So
there
are
all
this
variable
at
plays
that
could
impact
on
the
resulting
on
the
resulting
basically
user
experience
for
upgrades.
A: Should I illustrate the problem with the immutable upgrades, so that, you know, people understand it later today?
B: At the meeting? Well, it's up to you: okay, okay, okay.
A: Okay, should I…
A: I wanted to ask you: the Cluster API upgrades nowadays, they work with the kubeadm bootstrap provider, right? Only the kubeadm bootstrap provider is supported, but we already have a bunch of stuff in there, like patching, coordination, things like that.
A
Because
nodes
that
are
joining
the
cluster
will
fetch.
A: Basically, I think the upgrade process is: you create a new kubelet configuration, that is, with the new version, and the new nodes will join and fetch this version. Correct? Exactly. But if you patch the systemd driver in there, it can work.
B: Let's imagine this scenario: today in AWS we are publishing images, and we have images per Kubernetes version, so we have an image for 1.20, we have an image for 1.19…
B
For
using
the
the
the
c,
the
system
dc
drivers,
okay,
then
we
what
we
do.
Maybe
we
start
building
new
images
with
the
set
with
the
proper
setting
for
container
d
and
how
we
know
what
is
the
starting
version
for
these
changes.
A: Well, currently image-builder should really only make the change for providers that are ready for the change. If, yeah, if providers use the same image, that's a problem.
B
For
instance,
let's
make
this
example
that
aws
implement
this
change
for
121
dot,
zero
and
vosphere
implemented.
These
changes
for
121.2.
A
Information
well,
okay.
So
if
image
builder
bakes
the
system
d
driver
to
my
understanding
image
builder
currently
supports
I.
This
is
what
I
asked
the
image
builder
people.
You
have
a
system
d,
sorry,
a
container
d
configuration
file
that
you
can
add
values
to
and
also
they
after
we
opened
this
discussion.
They
basically
said
that
okay,
we're
going
to
add
a
built
time
flag
that
you
can
use
to
configure
the
system.
The
you
know
the
driver
for
this,
but
the.
A
Going
to
be
systemd
after
for
us
for
version
121
of
kubernetes,
so
basically
providers
that
just
want
to
follow
the
flow
you
know
will
not
change
the
default.
But
if
some
provider
is
you
know
behind
this
effort,
they
can
pass
overwrite
and
not
you
know
not
change
the
driver
at
all.
This
is
how.
A: Yes. So basically, I created a little diagram for other people to see the problem; hopefully I'm going to explain it. So basically, what we have on the left side is an existing cluster that is 1.20. We have a kubelet configuration that is defaulted with cgroupfs, and you have a couple of nodes where both the container runtime and the kubelet configuration are set to, sorry, to cgroupfs. And then you try using the immutable upgrade process to join new nodes which are for version 1.21.
A
If
you
default
the
container
runtime
to
system
d
on
the
nodes,
this
means
that
the
cubed
m
join
is
going
to
download
a
complete
configuration,
that
is
for
c
group
fs,
and
the
mismatch
between
single
professional
system
d
is
going
to
break
this
node.
The
cooperate
is
going
to
fail
the
node
so
originally.
A: Yeah, we could discuss this more during the Cluster API meeting after that. Basically, I think that, first of all, image-builder should start versioning the images, because it's not currently doing that, yeah.
A: And supposedly, if we do this in kubeadm upgrade like I proposed, we should just, if the Kubernetes version is 1.21, just modify the kubelet configuration, and it will work for all providers that use kubeadm, right?
B: And the Cluster API community should not rely on the assumption that there is only image-builder. So we should basically raise the topic and get an agreement in the community; and in the community there are also people who are building images with their own internal process for building images.
A
You
know
we
started
processing
kubernetes,
give
action
required
to
people,
so
they
have
to
adapt.
I
even
if
this,
if
this
will
affect
a
lot
of
product
and
people,
something
that
it
has
to
go
in
one
way
or
the
other
yeah.
B
I
I
definitely
agree,
I'm
just
saying
that,
unfortunately,
I
can
is
not
as
up
to
us
for
deciding
if
you
think,
for
instance,
to
the
aks
provider
for
the
cursor
api.
I
don't
know
if
they
can
control
these
in
or
which
level
of
control
they
have
on
this
kind
of
flags,
so
maybe
they
are
impacted
or
not.
They
then
is
up
to
them
to.
I
will
explain
the
problem
and
and.
B
And
let's
see
what
what
is
the
the
feedback
from
the
community
and
how
we
can
make
this
happen
without
creating
problem
to
the
user,
because
it
require
coordination
between
kubernetes
image
builder.
A: The second thing I'm going to fix is a quick one. It's basically something that Andrew, sorry, I'm not sharing the screen, something that Andrew gave us. It's basically…
A: …a change that happened in core: we are no longer tracking the master-labeled nodes, and now we are introducing a change in behavior, which is that load balancers will try to consume resources on the control plane nodes. And Andrew is suggesting that we apply this change in 1.21.
A
Basically,
we
have
to
label
the
nodes
with
a
particular
label
and
plus
one
for
this
backwards
compatibility,
and
we
can
have
an
actual
required
for
users
to
opt
out
of
it
if
they
want
to.
A: Oh, this is not that drastic, at least, because, I mean, I don't know how it interacts with existing load balancers; but hopefully, if you recreate the load balancer, at least it will, you know, stop using or start using the control plane nodes. But the main point is that we should preserve backwards compatibility; that's what we care about. There are load balancer mechanics that kubeadm should not care about.
A: Oh yes, so this is, this is here.
A
About
the
docker
stuff,
currently
we
have
detection
in
the
in
cube
adm
for
the
docker
signal
driver.
I
basically
in
this
issue
I'm
proposing
to
signals
that
remove
this
in
the
couplet
and
in
fact
one
of
the
signal
chairs
already
mentioned
that
they
like
the
idea
in
a
ticket.
So
this
is
something
that
I'm
also
going
to
track.
A
This
is
something
that
we
discussed
with
andy
and
coaster:
api
people.
Basically,
currently
the
cube
adm
couplets
talk
to
the
load
balancer
endpoint.
Instead
of
to
the
api
servers.
B: Okay, and I think it makes sense, this change. I have only one question: when you say, move the detection to the side of the dockershim…
A: Okay, so the situation currently is that we have cgroup driver detection for Docker using docker info. Basically, you run docker info, you get a structure, you get the cgroup driver from there; both the kubelet and kubeadm have this, right.
A: This is very nice, right. To clarify, it's not going to work for containerd, because the kubelet doesn't handle this at all and there's no way to do it. Currently, you have to go and…
B
Files,
but
just
help
me
to
remember
please,
the
this
feature
in
kubernetes
is
only
for.
B: Yeah, and how will this behave when the dockershim goes away? The code will, we…
A
Will
remain
in
docker
stream,
it's
just
going
to
live
in
another
repository,
okay,
the
this
is
this
is
something
we
can
try
to
get
in
121.
That's
why
I
added
it
here.
B
So
I
I
I
had
a
few
few
lines.
I
call
this
wishlist
because
I
I
know
so
there
is
a
things
that
that
currently,
basically
we
have
when
we
we
do
init
the
the
resulting
node
node
is
a
little
bit
different
than
they
need
the
node
that
you
do
for
drawing
and
because-
and
mostly
this
is
because
the
the
cool,
the
cubelet
configuration
changes.
B: No, it is not the same. What is different is that on the first node, basically, kubeadm generates a kubeconfig and then it sticks to this configuration; for the other nodes, kubeadm generates a bootstrap kubeconfig, and then the node gets a kubeconfig which is also automatically rotated, and so on and so forth.
A: Okay, okay. The reason for the bootstrap config is that that's how the kubelet works: when it joins a new node, you must pass a bootstrap kubelet.conf. You might decide to not do that; kubeadm already supports that, by the way. If you don't want to, you can pass the kubeconfig directly.
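For context, the kubeadm deb/rpm packaging wires exactly these two files into the kubelet's systemd unit via a drop-in; this is an abridged sketch of the upstream `10-kubeadm.conf`, and paths may vary by distro:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (abridged sketch)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS
```

The kubelet uses the bootstrap kubeconfig only until TLS bootstrapping produces the real `kubelet.conf`; after that the bootstrap file is ignored.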
B
Yeah
but
sorry
maybe
it
was
not
clear.
I
don't
want
to
change
the
joining
node.
I
want
that
the
first
node
becomes
equal
to
the
joining
node.
A: Okay, I experimented with what you're talking about and it failed completely; it didn't create a Node object at all. So we went back to what we have currently. I think we had an issue about this already somewhere.
A: Thank you. Let's talk in the Cluster API meeting. Bye.