From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2022-01-05
A: We have 38 open tickets, so what I wanted to do is triage them here. I mean, of course, if anyone has objections we can potentially delay, but... no.
B: Cluster API, after last year's changes, basically now generates the kubeadm config for the target Kubernetes version. That means it generates v1beta3 if we want 1.23, and so on. So removing the old kubeadm API versions should not be a problem for Cluster API.
A: Yeah, but if you still try to use some very old API with a new Kubernetes version that doesn't have it, it's obviously not going to work.
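For reference, "generates the config for the target version" means the emitted kubeadm config's apiVersion tracks the release being installed. A minimal sketch of a v1beta3 ClusterConfiguration (field values are illustrative, not from the call):

```yaml
# Illustrative only: kubeadm's v1beta3 config API, as served by recent kubeadm.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0   # the target version the generator matches against
```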
A: For 1.24, I don't think we have any more changes for this; we just marked the documentation of the API as deprecated.
A: It started four releases ago, when we began renaming the master label, indeed. I have to look at the KEP, actually, but I think 1.24 is the next cycle we have to take action in.
B: Yeah, we did everything possible to make users aware that this is coming, but let's send a reminder in the Kubernetes channels that it is going to happen. So if someone did not take care of it in a previous cycle, this is the moment.
A: We are going to do a similar cleanup in the fourth stage, which is to remove the CoreDNS toleration, but this is not a breaking change, to my understanding.
A: This is a breaking change in this release because, like I said, if you don't have the toleration for this taint, you're no longer going to be able to schedule workloads on the control planes.
B: Yeah, and let me say it comes at a good time, because this Kubernetes release is already introducing a lot of big changes, like the dockershim removal and things like that.
A: Yeah, honestly, I worry a bit because there's so much going on in this release, and I'm hoping we don't make the users angry, because sometimes it's better to spread the breaking changes across releases.
B: Yeah, let's see if there is feedback on the email, and let's give it visibility. I don't know if we should also consider doing a post on the Kubernetes blog.
B: Maybe let's talk with Dims or someone about how to give proper visibility to this action.
A: Yes, I'm going to open the discussion on the mailing list first. If I think that's not sufficient, we can try other channels; I mean, I'm possibly also going to ping the SIG anyway, and, you know, I can also take it to some of the subprojects, like minikube, kind, and so on.
A: "Run the control plane as non-root." We started a bit of a discussion on this before the new year. The feature is currently alpha; the plan was to graduate it in 1.23. We delayed by one release because we weren't sure we wanted to move this feature to beta so early, without feedback. We haven't gotten any feedback, and I think we have a couple of users at least.
A: Linux supports user namespaces, and I have known this for a long time, but for some reason I never asked the question of why Kubernetes does not support user namespaces. When you run a container, what you get inside the container as the root user is potentially the root user of the host: the same ID, the same privileges, the same user. But user namespaces allow you to sandbox the root user inside the container to be a completely different user, so on the host you can have the container running as ID 1000.
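The remapping described here is just an offset into a reserved host uid range; a toy illustration of the arithmetic (the range start 100000 is an arbitrary example, like a typical /etc/subuid entry, not anything from the call):

```shell
# Toy model of user-namespace uid mapping: host_uid = range_start + container_uid.
# 100000 is an arbitrary example range start, as in a typical /etc/subuid entry.
map_uid() {
  container_uid=$1
  range_start=$2
  echo $((range_start + container_uid))
}

map_uid 0 100000     # container "root" appears on the host as uid 100000
map_uid 1000 100000  # container uid 1000 appears on the host as uid 101000
```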
A
So
it's
like
a
completely
decoupling
of
the
mapping
of
users
and
groups
and
that's
what
the
this
feature
in
the
kernel
has
and
if
you
think
about
kubernetes
spots,
the
question
here
was
like:
why
don't
kubernetes
spots
give
you
a
toggle
to
do
this,
like
I
want
to
enable
this
mapping
or
this
mapping,
apparently
just
a
missing
feature:
kubernetes
doesn't
have
it
the
details
in
the
cap.
I
have
read
it
already
and
I
I
know
the
vinayak,
the
author
of
the
cuba
dm
rootless
feature
knows
about
this
cable.
B: Right, yeah, this would be a much cleaner solution, if I got it right. Apart from this, which is a rightful consideration, let's stay in sync with the project direction.
B: Okay, but if we decide to go to beta, I would like to have a signal in Cluster API before that, showing that the feature actually works.
B
By
default
in
beta
yeah,
that's
the
point
before
enabling
it
I
would
like
to
test
it
in
with
copy.
A
I
see
yes,
yes,
well,
you
can
also
someone
could
also
test
it
locally.
I
guess
instead
of
setting
up
a
job
and
if
it
works
vocally,
we
can
yeah.
I
assumed
to
see
your
point
here.
Yeah
the
the.
B
The
problem
is
that
okay,
we
enabled
it
by
default
by
default,
it
will
go
on
all
the
cluster
api
and
customer
api.
We
have
we
use
image
which
are
built
by
image
builder
and
if
everything
does
not
fit
in
place,
basically,
we
as
a
sig,
we
don't
do,
we
we
don't
make
a.
Let
me
say
it
will
not
good
as
a
sig.
So
let's
try
to
anticipate
the
problem
that
that's
only
my
my
only
concern
sure.
A
I
see
what
you
mean,
so
video
told
me
that
this
feature
from
what
I
gathered.
I
think
he
also
wants
this
feature
to
land
problem
with
it
is
that
it's
going
to
take
a
while
to
graduate
and
to
get
it
properly
done,
because
from
understanding
there's
some
sort
of
problem
with
host
mounts
because
horse
puff
mounts.
A
So
from
other
settings,
vinayak
is
going
to
potentially
try
to
push
for
to
get
the
cube
idea.
Future
graduated
to
bitter
in
this
window
of
time.
But
my
concern
there
is
going
to
be
that
if
we
switch
this
feature
on
by
default
in
cube
adm,
we
have
to
maintain
it
for
longer
periods
of
time.
Obviously,
potentially
one
year
after
we
switch
it
and
also
it
means
that
we
might
actually
opt
in
users
that
don't
want
to
be
opt-in
and
potentially
break
on
some
operating
systems
that
have
weird
system.
First,.
A
Fiddling
with
files
yeah
the
user
space
files,
at
least
there's
some
some
standardization
there,
but
it's
not
guaranteed.
B
See
with
the
with
the
the
person
that
contributed
this
feature,
and
and
also
we
can
ask,
also
in
the
an
opinion
to
the
user
in
in
the
we,
with
an
email
or
or
something
or
in
the
channel,
we
have
this
option.
Graduate
is
one,
but
then
it
will
be.
B
We
know
that
we
will
work
it
or
just
wait
until
if
the,
if
waiting
is
reasonable,
if
we
are
talking
to
wait
one
or
two
cycle,
I
think
that
is
reasonable
if
we
are
talking
about
waiting
two
years.
No,
so
let's
try
to
put
the
piece
of
the
puzzle
together
and
enrich
to
a
decision,
but
given
that
there
is
a
better
alternative,
let
me
say
being
discussed:
let's
wait
at
least
to
to
see
if
this
cap
emerges
and
the
implementation
plan.
A
Okay,
not
swap
I'm
just
going
to
copy
here
that
we
might
enable
node
swap
in
kubernetes
by
default
in
this
release
and
also
remove
pre-flight
checks.
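For reference, what "enable node swap" touches on the kubelet side can be sketched as configuration; these are the fields from the NodeSwap work as I recall them, not something stated verbatim in the call:

```yaml
# Sketch: KubeletConfiguration fields involved in node swap support.
# NodeSwap is the kubelet feature gate; failSwapOn is the long-standing
# check that kubeadm's preflight mirrors.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap   # or UnlimitedSwap
```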
There is also some cleanup here: I think we have an action item in this release to remove the output/v1alpha1 API. I'm going to do that.
A
It's
part
of
the
plan.
I
don't
think
this
is
what
let
me
add
it
as
a
message:
okay,.
A
Improvement,
sweet
cd11
pro
lightness
probes.
I
know
that
indeed,
where
we
had
this
problem,
people
saw
it
in
the
company
within
this
internal
testing
problem
here
is
that,
as
discussion
pointed
out,
we
have
the
fix
in
a
new
lcd
release,
but
we
cannot
backport
it.
They
I
to
buy
the
staining
hcd
will
not
backport
it
to
other
hd
releases,
which
means
that
the
fix
will
land
in
the
future,
potentially
in
a
new
hcd
miner
release,
but,
like
I
said
here,
it
might
take
one
year
for
the
new
hd
release
to
land.
A
I
think
that
the
nadir
and
number
one
I
think
they
are
already
aware
of
this
problem,
but
if
the
ncd
team
agrees,
like
I
said,
if
somebody
wants
to
ask
the
city
people,
we
can
potentially
backport
the
changes
to
kubernetes,
but
there's
not
much
else.
We
can
do
here.
A
Machine
readable
outputs:
this
is
it's
pretty
much
help
wanted
at
this
point,
but
I'm
not
adding
the
label.
If
somebody
wants
to
continue
work
on
the
cap,
it's
welcome.
We
only
have
a
few
commands
that
support
it.
This
is
also
help
wanted
the
kubernetes
I'm
trying
to
get
them
in
turn
to
work
on
this.
A
But,
as
you
may
understand,
it's
it's
a
bit
difficult
for
new
contributors
to
get
on
board
on
this,
and
actually
I
was
feeling
a
fiddling
recently
with
functions
in
the
upgrade
code
and
it
just
it's
just
a
mess
between
front
end
and
back
end.
So
I
I'm
hoping.
B
Them
for
sure
yeah
the
code
is
needs
some
engineering
there.
B: The discussion was triggered by the Ignition PR, which is kind of conflicting. Basically it mixes a lot of Kubernetes bootstrap concerns and machine bootstrap concerns, because now we have Ignition and cloud-init inside the same provider, which is hard to maintain. So maybe in Cluster API we're also going to change something in the bootstrap, or in how we wrap kubeadm, but I'm not sure, because we still have to close our roadmap for 2022.
A
One
so
you're
going
to
have
feature
requests
for
kubernetes
or
I'm
not
understanding.
A
So
it's
going
to
be
more
easy
for
you
to
handle
similar
breaking
changes
like
the
extra
axle.
B
If
change,
if
we
can
find
a
conversion,
so
the
the
the
the
point
is
that
okay,
we
can
work
on
this
idea
and,
for
instance,
figure
it
out
how
we
can
do
conversions
or
how
we
can
make
this
not
breaking
or
how
we
can
make
these
nice
for
the
for
the
user
stuff
like
that.
So
that
that's
the
point
but.
A: I see, okay. I think I'm following a couple of issues in Cluster API related to the Ignition stuff, but yeah.
A: Kinder base images: at some point I might have to start pushing a new kinder base image, because the containerd version inside the base image is very outdated at this point. The distro in the image itself is also quite old; it's 18-point-something of Ubuntu.
A
Yeah
I'm
just
I
mean
I
may
want
to
start
using
the
the
image
promotion
process
that
kate
simply
have
to
push
potentially
a
new
directory
for
cuba,
game
images
or
something
like
that.
A
The
I'm
following
the
discussions
about
the
cri
dash
docker
d
provider.
Sorry,
it's
a
service
cri
docker,
this
service,
which
is
the
replacement
for
dockership,
and
I
can
discuss
it
later
in
the
we
have
an
issue
about
dockership.
Basically,
what
is
happening
there
that
we,
they
started
pushing
releases
and
we
might
want
to
bring
docker
testing
back
to
cuba
dm
eventually.
B
Yeah,
the
the
point
here
is
that
so
currently
in
kinder.
Basically,
we
have
two
code
parts,
one
for
docker
and
then
the
other
for
container
t
the
the
one
for
docker
over
time.
B
A: Yes. I'm going to watch all the channels, of course, but if we start getting users that use cri-dockerd and we don't have a signal for it, and if we have special paths in kubeadm that we want to actually test, then it's going to be worth adding the Docker signal back to the kinder jobs, which means we have to continue maintaining a Docker base image. It's a sequence of events that's going to happen, yeah.
A: This is a question for them; it's not clear if it's going to require any changes in kubeadm.
A
I
don't
have
time
for
this,
but
I'm
not
sure
where
this
this
is
going
at
this
point,
the
roster
dropped
the
work
and
I
think
it's
in
the
it
was
in
the
middle
of
the
process.
A
We
started
for
watching
for
sharp
changes,
checksums
things
like
that
tolerate
new
configurations.
There
were
to
be
done
here,
but
I
don't
remember
the
details.
B
Yeah,
it
was
an
entire
work,
getting
ready
for
the
components,
kubernetes
and
stuff
like
that,
to
change
their
component
config
release
without
providing
a
conversion
tool.
B
So
let
me
say:
yeah
until
we
we
have
noticed
that
they
want
to
bump
their
component
config.
This
is
not
required.
A
Yes,
okay,
graduate
patch
support,
ga.
A: The kubelet service packaging rework: this is a big change. It's also a breaking change to some user assumptions. SIG Release needs to help us with some package decoupling; we have issues in the way we push packages to the distribution repositories.
A: I think there still needs to be a bit more discussion about this, and I'm not convinced this change is doable. I mean, it's doable, but...
A: I need to add "help wanted" to this; someone needs to help me test it.
A: I tried it multiple times and it doesn't work, because of leader election, to my understanding. When you connect through the LB, you're going to get an active API server, which may not be the one you want, the leader. You can bootstrap against this API server, but if you get the local API server and it's not elected as the leader, it cannot fiddle with the CSR logic at all.
A: He had a design proposal, basically.
B
This
is
making
our
cooper
proxy
deployment
version
aware.
A: From discussions with Jason and Andy, I recall there was a missing feature in Kubernetes that prevented us from gracefully swapping these DaemonSets on the fly, but I may have forgotten the details there.
A: Admin users: one is the superuser, super-admin; the other one is the regular Kubernetes RBAC admin. Potentially kubeadm will generate a couple of different admin conf files in the future.
A: So, basically, if the CRI socket path does not have a unix:// scheme, or npipe:// for named pipes on Windows, we are now going to show a warning to the users, and our upgrade code is also going to try to automatically prefix the URL scheme everywhere we store the socket paths. So this is a change we are doing.
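The scheme-prefixing behavior described here can be sketched as a tiny shell helper; this is illustrative of the rule, not kubeadm's actual code:

```shell
# Sketch of the upgrade-time normalization: leave socket paths that already
# carry a scheme alone, otherwise assume a Linux unix:// socket.
prefix_cri_socket() {
  case "$1" in
    unix://*|npipe://*) echo "$1" ;;   # scheme already present
    *) echo "unix://$1" ;;             # bare path: default to unix://
  esac
}

prefix_cri_socket /run/containerd/containerd.sock   # -> unix:///run/containerd/containerd.sock
prefix_cri_socket unix:///var/run/crio/crio.sock    # unchanged
```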
A: This is cleanup, housekeeping. There's an action item for this in 1.24.
B
We
have
some
logic
that
that
update
the
kubrick
config
map.
B
I
have
to
check,
I
remember
for
sure
we
updated
kubernetes
config
map,
for
instance,
for
the
system,
the
defaulting
stuff,
like
that,
I
have
to
check.
A
I
mean
I
ideally
shouldn't
touch
the
complete
configmap
at
all.
B: Yeah, basically before every upgrade we create the new ConfigMap.
A
So
you
took
note
I'm
going
to
execute
on
this
bitter
stage
in
this
124
so,
which
is
flip
it
to
beta
enabled
by
default.
Users
can
opt
out
by
disabling
the
feature.
B
That
that's
a
good
question.
I
have
to
figure
it
out
today.
We
still
have.
Let
me
say
that
the
fallback
option
is
to
disable
the
feature
gate
explicitly.
A: I think there weren't a lot of upgrade tests, but that was a long time ago, at least for the Kubernetes master branch.
B
You
know
we
we
we
have
them,
we
are
not.
Now
we
have
a
couple
of
them,
one
for
each
branch
and
stuff
like
that,
but
I
don't
remember
if
and
but
we
have
them
from
the
kubernetes
upgrade.
So
we
upgrade
from
current
to.
B
To
master
or
to
or
to
main,
but
we
run
only
the
kubernetes
upgrade
plus
conformance.
We
don't
run
all
the
the
next
upgrade
or
stuff
like
that.
A
Well,
I'm
going
to
ping
you
when
I
work
on
the
the
flip
so
yeah.
Thank
you.
So
this
one
is
the
last
one
is
the
small
cleanup,
and
this
is
the
dockery
shim.
A: So, the cri-dockerd project maintains its own socket, and now we consider this a separate CRI implementation. If there are multiple CRIs on the host, we will throw an error; the user has to be explicit if they have multiple ones. Okay, you have, sorry, cri-dockerd, you have CRI-O, you have containerd: you have to be explicit about which one you want to use. Agreed.
A: So if you pass the legacy dockershim socket inside the kubeadm join or init configuration, you can actually continue using the old kubelet, the 1.23 kubelet. I'm adding this because it's very difficult for people to migrate some big clusters to 1.24 kubelets, and this option lets them stay within the supported skew of the kubelet, which is, you know, Kubernetes supports the kubelet one minor version behind, 1.23. Okay, so this is that.
A: Another change that is happening: I tested what happens if you use crictl with the dockershim socket, and it actually works. So the abstraction layer that Rosti added at some point, where we use the docker CLI to pull images and perform some actions...
A
Incubation
was
actually
not
needed
from
my
understanding
because
we
could
have
used
krakatoa
on
the
docker
regime
socket,
so
this
change
is
also
including
in
one
of
the
commits
we
are
also
including
I'm
also,
including
switching
everything
to
krykato,
because
now
we
are
going
to
have
this
external
socket
and.
B
Yes,
so
also
what
what?
If
we
have
a
user
that
is
in
kubernetes,
1
24,
spin
up
a
cluster
123
and
using
docker.
B: I have to jump to the Cluster API meeting, but thank you for running this triage session.
A: Sure.