From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2022-02-16
A: So I prepared the list of topics that we were tackling in this release. Some of them are done, some of them are still work in progress, and some of them we are skipping. The first one I wanted to mention is the CRI socket path updates. TL;DR: kubeadm did not prefix the CRI socket paths with the appropriate URL scheme for a particular operating system. On Windows, for example, that's npipe; on Linux, that's the unix socket URL. This is already done, but there are some pending changes for 1.25, so we are keeping the issue open.
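A minimal sketch of the fix being described, in Python for illustration (kubeadm itself is Go, and the helper name here is ours, not kubeadm's):

```python
def canonicalize_cri_socket(path: str, goos: str = "linux") -> str:
    """Prefix a bare CRI socket path with the URL scheme for the OS.

    Per the discussion: Windows endpoints get "npipe://", everything
    else gets "unix://"; paths that already carry a scheme are kept.
    """
    if "://" in path:
        return path
    scheme = "npipe://" if goos == "windows" else "unix://"
    return scheme + path
```

So a bare `/run/containerd/containerd.sock` becomes `unix:///run/containerd/containerd.sock`, while an already-prefixed path passes through unchanged.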
A: And yes, it was a pretty simple change in terms of code organization, and the kubernetes.io docs will reflect that, because nowadays they pull the latest Kubernetes docs and generate some fancy pages around those. Another one is the Docker changes. This required multiple PRs, I think, in k/k and the website; the milestone has moved to 1.25. Let me see, actually; I'm starting to forget what we changed here.
A: Yes, so basically we changed what kubeadm considers the default socket. Nowadays it will become containerd; it used to be the dockershim socket. Another change is that we are treating the default Docker socket as being provided by the cri-dockerd project. If you google cri-dockerd, you will see a project maintained by Mirantis. That's basically the new owner of the Docker integration between Kubernetes, you know, the kubelet, and your host's operating system. Sorry, prior to this change, the kubelet had it built in.
B: Just a question, for my understanding: does this mean that we are still supporting the discovery mechanism, so that in case there is more than one, we pick one?
A
I
I
was
yeah,
I
didn't
mention
that,
but
we
we
also
updated
the
kubernetes
detection
mechanism
if
kubernetes
finds
multiple
sockets
on
the
host,
multiple
known
sockets
on
the
host.
One
of
them
is
the
the
new
cri
docker
d.
Another
one
is
the
container
d
socket
and
another
one
is
the
cryo
socket
kubernetes
will
complain
because
it
cannot
make
a
decision
for
you.
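A sketch of the detection behavior just described. The socket paths below are illustrative assumptions, and the function is ours; the real logic lives in kubeadm's Go code:

```python
# Known CRI endpoints mentioned above: cri-dockerd, containerd, CRI-O.
# The exact paths here are assumptions for illustration.
KNOWN_SOCKETS = (
    "unix:///var/run/cri-dockerd.sock",
    "unix:///run/containerd/containerd.sock",
    "unix:///var/run/crio/crio.sock",
)

def detect_cri_socket(sockets_on_host):
    """Return the single known CRI socket found on the host.

    Mirrors the behavior described: with zero, or more than one, known
    socket present, refuse to guess and ask the user to be explicit.
    """
    found = [s for s in KNOWN_SOCKETS if s in sockets_on_host]
    if len(found) != 1:
        raise RuntimeError(
            f"found {len(found)} known CRI sockets {found}; "
            "please specify --cri-socket explicitly"
        )
    return found[0]
```

With only containerd present this returns its socket; with both containerd and CRI-O present it raises, which is the "complain, cannot decide for you" behavior.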
A: Quite frankly, the logic before that was kind of weird, because, as you know, containerd nowadays is the backend for Docker, and kubeadm had some very sketchy logic to basically try to understand if you were using a solo containerd or a containerd that is powering Docker. Which was very weird, and we were able to get rid of this weirdness.
A
The
another
change
that
I
I
I'm
not
so
sure
about,
but
I
I'm
hoping
that
it
works
is
that
is
that
we
shifted
all
interactions
with
cri
sockets
in
cube
adm
to
be
managed
by
the
cri
curl
tool.
I
tested
the
latest
version
of
crypto
that
I
have
and
it
worked
fine
with
the
documentation
circuit.
So
for
me
that
was
sufficient
signal
that
we
can
remove
the
direct
calls
that
we
used
to
have
for
the
docker
cli
on
the
host.
A: That was a layer that is not needed, in my opinion. I'm hoping that I have not missed anything. I asked people to test it, but I didn't get feedback. In any case, if we see a problem in that particular interaction, nowadays it will become a problem of cri-dockerd, which is essentially the new layer.
A
So
if
assert
something
is
missing,
I
guess
it's
also
could
be
a
problem
with
the
cri
cuddle
tool.
So
we
are
pretty
much
delegating
responsibility
to
external
tools.
The
the
show
out
to
the
docker
coi
was,
I
think,
far
from
ideal.
A: And a Docker list call as well, for seeing which containers are currently running.
A: Okay, and I guess in this particular ticket there are some cleanup action items for 1.25, because of the kubelet skew: kubeadm supports the same version of the kubelet as the kubeadm binary, but also a kubelet one minor version older.
A: So that's the summary of the Docker changes. I also made the website updates, and it was quite a lot of review work, I guess, but basically the theory is that we managed to clean all direct mentions of Docker as the preferred container runtime in kubeadm. And yeah, I think we're pretty good on this change. Again, this will break a lot of people. I'm not convinced that we, as the Kubernetes project, are surfacing some of these changes in a sufficiently visible way to the wider audience.
A: We assume that people read the release notes, blog posts and Twitter. That's not the case, so someone in a certain company just gets broken because they didn't read any of it, and they assumed that they would be able to upgrade from Kubernetes 1.23 to 1.24 with kubeadm and continue using Docker. But the cluster is just going to fail, and they will have to go and install cri-dockerd, which, by the way, is currently lacking documentation.
A: Okay, something else that we completed: we removed the output.v1alpha1 API. It contained a certain annoying technical debt, where we had introduced a binding between work-in-progress APIs, and it is gone now. We introduced a new output.v1alpha2, which solves the problem, and yeah, this is pretty much closed at this point. If you want to read more about it, check the issue; it has sufficient detail.
A: Work in progress: this is the versioned kubelet ConfigMap change. It is potentially a breaking change for all users of kubeadm that try to interact with an implementation detail, that is, the kubelet-config ConfigMap.
A: The theory is that it used to contain an encoding of the Kubernetes version inside the name, such as kubelet-config-x.y, and this was discussed in the past as technical debt. The issue itself is from 2019, and I think nobody liked this. I honestly don't understand why it was done in kubeadm, because the kubelet configuration itself is versioned, so also versioning the name... I don't think it was a good idea. So it is a breaking change.
A
If
users
want
to
adapt
slowly,
they
will
have
to
use
the
feature
gate
to
turn
it
off
in
this
particular
release.
We're
switching
the
feature
gate
to
on
by
default
and
a
side
note
here
is
that
kubernetes
is
trying
to
push
for
switching
feature
gates,
keeping
feature
gates
to
off
in
beta.
A
Alpha
itself,
by
default,
you
introdu,
you
switch
you
graduate
to
beta,
but
you
continue
keeping
it
off
and
it
makes
sense
because
you
know
back
in
the
day
back
in
the
90s.
I
remember
people
were
not
using
alpha
beta
raw
in
production.
So
if
you
force
a
beater
feature
to
own
by
default
to
them,
that's
like
counter
productive
for
production
in
a
way,
but
I
don't
think
this
is
approved
for
feature
gates,
but
it's
definitely
happening
for
core
apis
at
the
moment.
A
So
in
this
particular
case
we
are
switching
the
feature
gate
incubation
to
enabled
for
beta.
I
guess
we
can
discuss
it
more
in
the
future,
whether
we
are
going
to
change
our
policy
this
this
policy.
We
have
been
doing
this
in
kubernetes
for
a
while.
A: Okay, yeah, I have to look at this again, but basically the new version of kubeadm, in 1.24, will only generate and read from a kubelet-config ConfigMap, instead of the one with the version.
A
Unless
you
are
explicit
about
the
value
of
the
feature
gate
and
we
have
end-to-end
tests,
some
endpoint
tests
broke,
I
had
to
update
them,
but
I
think
this
is
pretty
good.
Don't
have
any
questions
for
principles
to
find
about
this.
C: No, nothing; looks good to us. We were waiting a little bit, because I think we have to implement something on our side in Cluster API too, but we weren't sure if everything upstream was already merged. I think now we're ready to go. Interesting that our tests still work, which means we don't depend on that change, apparently. But yeah, we should align so that we do the same things.
B: Yeah, so we probably have to, but the tests are not failing, because we don't have a custom config that we rely on.
A: Now the other topic is probably one of the more breaking changes that have happened in Kubernetes history: the rename of the master label and taint.
A
I
already
saw
some
users
complaining
downstream
asking
questions,
so
I
you
know
I'm
trying
to
ping
slack
channels
remind
people
as
much
as
I
can,
but
inevitably,
in
this
release
we
are
going
to
see
breakages
this
one
to
give
a
toddler
the
changes
renaming
the
master
label
and
taint
to
something
to
be
more
inclusive.
As
per
the
some
of
the
definitions
that
the
cncf
established
for
language
inclusiveness,
we
are
changing
master
to
be
control
plane.
A
The
effort
started
in
120
and
the
next
phase,
which
is
this
release,
is
going
to.
A
Basically,
let
me
see
what
what's
happening.
Actually.
A
Okay,
second
stage
is
124
okay.
So
in
this
particular
stage
we
are
removing
the
master
label
on
the
nodes
entirely,
and
I
mean
what
can
you
do
it
it's
gone
and
after
that
node
selectors
break,
you
have
to
see
the
action
required
in
the
release.
Note
that's
the
best
we
can
do
if
you
use
the
master
label
to
track
your
deployments
with
a
load
selector.
Now
they
break
so
people
after
that.
A
That's
for
the
second
stage.
We
also
are
handling
paint.
That's
we
are
adding
the
no
schedule,
control
plain
things
on
the
loads,
so
in
124
we're
going
to
have
to
have
we're
going
to
have
two
things
on
the
master:
sorry,
the
control
plane,
nodes,
we're
going
to
have
the
master
tank
and
the
control
printing.
At
the
same
time,
this
requires
deployments
to
tolerate
both
things
or
tolerate
the
wild
card
if
they
prefer
and
then
at
this
stage
we're
going
to
remove
the
the
master
paint
and.
A: But yeah, in 1.24 I envision that we are probably going to break a lot of people. So how are you doing on this topic?
C: I think we're good; we already implemented it, essentially. We had one place where we had to add additional tolerations, which is our own deployments for our controllers, and the other part was that we had a selector which was trying to figure out which nodes are control-plane nodes, and now we're looking at both labels, essentially, because right now we have to support back down to 1.19. So, just looking at both labels, I think we should be good. CI is still green, so yeah.
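The dual-label check described here can be sketched like this (the label keys are the real ones; the helper itself is just an illustration of the Cluster API approach):

```python
LEGACY_LABEL = "node-role.kubernetes.io/master"        # removed in 1.24
CURRENT_LABEL = "node-role.kubernetes.io/control-plane"

def is_control_plane(node_labels: dict) -> bool:
    """True if the node carries either role label.

    Needed while supporting clusters back to 1.19, where only the
    legacy label exists, alongside 1.24+, where only the new one does.
    """
    return LEGACY_LABEL in node_labels or CURRENT_LABEL in node_labels
```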
A: But yeah, if you see reports like that from users of Cluster API, saying "hey, you broke me", you can just link them to this particular issue; it's 2200.
C: We have an issue on our side which links to the email, and I think to that issue too, so yeah, I think we're good, linked. But maybe, I'm not sure: once we have a Cluster API release where the headline is something like "hey, we're supporting 1.24", if we remember, we should probably include a note about that.
C: I can add a note on our issue, so that we remember it, yeah.
B: Somehow, in our version support matrix, whenever we can, we should echo what the change could mean for people, and that it's for good reasons.
B: Let me say, I would say that deploying stuff on the control-plane nodes should... I hope this is an exception, especially in enterprise contexts. And it is an exception in the sense that only admins deploy stuff on the control-plane nodes, and there is a much better chance that they are up to speed, compared to the application folks. But yeah, let's see. I do expect, as usual, that whenever you change something people complain; but then, as I said, we are changing for good reasons.
A: Yeah, I think I could imagine people deploying some critical services on control-plane machines only and, you know, skipping the worker nodes entirely. That's a good use case, maybe, yeah.
C: Yeah, in the past we deployed things like, I don't know, let's say cluster-central things, where you are also the admin of all those clusters: Prometheus, CoreDNS, cloud controller manager, things like that, essentially on the control-plane nodes, because, yeah, they were our nodes, compared to the workload nodes, the nodes for actual cluster users.
A: Yeah, something else that this change proves is that labels and taints are APIs, so you cannot really remove a label or a taint without breaking people. It just makes such an impact.
A: There's actually a related issue that I wanted to mention, but it's not clear it is ever going to happen. There's the system:masters supergroup inside Kubernetes which, if you create a certificate for that group, gives you a break-glass certificate; kubeadm uses it today. But apparently a lot of companies signed break-glass certificates for customers using this group, and it has to be removed eventually, according to the CNCF guideline, but the SIG Auth folks are really not looking forward to that.
A: Exactly, it skips the authorization layer there, yeah. Look, that's pretty much this change. If you want to know more about it, open the issue: it's 2200 in the repository, and the summary is detailed. If you have any questions, you can find us on the Kubernetes community Slack, in the kubeadm channel.
A: And the final topic I wanted to bring up today is the rootless control plane going to beta. The rootless control plane is a feature gate in kubeadm that is currently alpha.
A
We
discussed
whether
we
want
to
make
the
cubanium
control
plane,
secure
by
binding
specific
users
and
groups
inside
containers
that
we
deploy
for
the
control
plane
it.
It
works.
It's
still
alpha
why
it's
not
going
to
beat.
I
think
it's
not
going
to
be
because
we
kind
of
hold
around
discussion,
because
we
discovered
that
there's
a
new
feature
in
core
coverages
called
username
spaces.
A
You
know,
linux
has
username
spaces
and
kubernetes
might
add,
support
for
that,
which
means
that
we
can
completely
decouple
the
host
root
user
from
the
root
user
inside
the
container
of
a
controller
component,
and
this
will
allow
us
to
not
even
try
to
graduate
this
particular
kubernetes
feature
gate
to
beta.
A: I wrote a very detailed response here, with some questions for the original contributor, from Google, but he has not replied yet. But basically, because of this potential alternative solution (we discussed this with Fabrizio last time, I think), we might as well just put the proposal on hold and see how user namespaces go.
A: Yeah, I guess another thing, which is also a tip for the folks who watch the VOD: we always prefix the release notes for kubeadm with the "kubeadm:" colon prefix, so it's easy to, you know, search the release notes for kubeadm changes. If you look for kubelet changes, they don't have the prefix; it's just a random sentence for a particular change.
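Per the tip above, filtering a changelog down to kubeadm entries is a one-line prefix search. A sketch (the sample entries in the test are made up; real entries are markdown bullets in the CHANGELOG):

```python
def kubeadm_notes(lines):
    """Keep only release-note entries prefixed with 'kubeadm:'.

    Assumes entries are markdown bullets, so leading '-'/'*' and
    whitespace are stripped before the prefix check.
    """
    return [l for l in lines if l.lstrip("-* ").startswith("kubeadm:")]
```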
C: Yeah, that's very good. In the past I spent like half a day reading through all those release notes, and sometimes you just have a short note and have to click through to the issue and maybe read a bit more to actually figure out what the change is and which context it belongs to, and all that stuff.
A: Yeah, it also saves you, as a maintainer, some trouble later, if you have to respond to four or five users with the same answer to the same question. Some organizational improvements can be done in our areas of Kubernetes in that regard.
A
Okay,
I
guess,
if
you
don't
have
anything
else,
I
think
we
could
call
it.
So.
Thank
you
very
much
for
joining.
I
will
upload
the
vlog
later
see
you
bye,
see
you.