From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-03-17
A: The docs are looking good. There's not much else to do in terms of features and bug fixes right now, so I just look in the docs and try to send fixes here and there when they're needed. Nothing else in terms of docs; if you're interested in these PRs, check them out.
A: The second item to give an update on is the Kubernetes status — sorry, the performance regression that we saw, thanks to the minikube maintainers. This is still pending, but I saw a comment from SIG Node that they're actually interested in getting this into 1.21.
A: If we're close to the release, I'm going to ping them on Slack to see if they can approve. Currently we're technically in code freeze — let me check the dates.

A: Let me see, I think it was one of these. You should have taken a look — okay, so this one is simple.
A: You can look at the docs change; it's not really required. If you don't have the time, just leave it be, because we're already reviewing this thoroughly.
B: Yeah, probably the one that should look at this is Antonio.
A: I don't know if this phase is supposed to completely block the code base at this point. So Test Freeze starts on the 24th — this is my understanding — and at that point we should not merge any changes in tests.
A: I think Jordan said that the performance regression is acceptable because we are fixing a more important problem. I agree with that, but this means that the kubeadm users — everybody that is consuming kubeadm — will start questioning why we are tolerating this and not submitting a fix.
A: The answer is that the SIG — the kubelet approvers — need to merge it; that's basically the TL;DR. There is a comment that this is a critical regression for 1.21, so supposedly we're going to have to merge it before Test Freeze. If somebody has questions, we can just link to this PR; that's all we can do for now.
A: The second item that I have is certainly interesting: I saw that some contributors started writing a KEP for running the kubeadm control plane components as non-root. Some context here: today we run the control plane components as the user UID 0.
A: So yeah, the biggest question is ownership of the kubeconfig files in terms of UID and GID, and the other big question is: should we allow configuring this via the ClusterConfiguration structs that we have for the components?
B: To be honest, my first reaction is that we should not allow changing it, but I'm pretty sure that someone will ask to make it changeable, and I also don't want to break anyone.
B: If we pick a UID, what if the user has already allocated it? So it is really a tricky one. Unfortunately, it came at a moment where it will probably require a configuration change, and that means it will kick in the work on v1beta3.
B: So I'm kind of torn, because I do expect people will ask for it, but this will trigger a lot of additional work on top of this PR.
B: Yeah, another option that I was considering is to feature gate this. While it is feature gated, it would be fine to just pick a value, because in case someone gets stuck, they could go back to the less secure manifests. So it is probably a good idea to feature gate it, since that makes this possible.
A: Even if it's an opt-in security feature, I do agree that some people might not be comfortable with the default UID/GID that we apply. Until we have the API change, we have to statically fix the UID/GID.
B: No, but what I was thinking is: the main problem that I see is users getting stuck because we are using, let me say, a user ID which is already assigned to something else. So if someone gets stuck on this, we can give them two options. One is to turn off the feature gate — okay, this is opt-in, so go back to the normal manifest option.
B: Now, for all the files that you are going to mount in — not for the kubeconfig files — oh yeah, this is tricky, because our pods can have different lists of volumes, and this is explained in the KEP, so you cannot do it beforehand. You have to do it basically on the fly when the container starts, and this is kind of tricky — not super hard, but it's not an easy customization, let me say.
A: Yeah, what I would like to do is just test the whole thing myself. Maybe the author of the KEP started with some PR. The scheduler is pretty simple; I'm interested in what's going to happen with the controller manager, which mounts a bunch of stuff. I would like to see what happens there, because if we have to change the permissions for certificates — I don't like that, in particular for kubeconfigs.
A: That's also a problem, but yeah — let's see how we progress in terms of proof-of-concept PRs, and I am going to test it locally. I agree with the feature gate; I'm going to add the feature gate recommendation in the notes here.
A: All right — and hopefully we can get more eyes from security people later on. I think we already had an issue for that in Kubernetes, but we were kind of blocked.
A: Actually, that was for kube-proxy. By the way, kube-proxy is completely blocked because of a problem with the granularity of the permissions the Linux kernel gives.
C: I mean, the KEP for the control plane is a long road.
A: It completely does not work on Windows — relevant if one day we have something like a Windows-based control plane.
A: Basically, we'd have to abstract the same command into a completely different command for Windows, because the Go standard library is broken there: it doesn't support changing ownership of Windows files. They said it's too early — we don't have a control plane for Windows. I said it's always important to abstract early and not regret it later, but yeah.
A: I envision that we'll also have to think about Windows in a distant future with respect to this KEP.
A: Yeah, okay, let's see how this goes. By the way, 1.22 is going to see something like the first proposal merges in kubeadm, so I guess we're going to have a more thorough review next cycle.
A: Somebody created another API change proposal, which is to support custom images for the control plane components. In prior discussions we explicitly said that we don't want to do this.
A: We only wanted to allow users to change the image repository; the tag is not customizable, which means the tag comes from the Kubernetes version in ClusterConfiguration, and the names of the images are also fixed. Basically, I can show what the user wants to do here in the API directly.
A: So it could be, you know, kube-scheduler version 1.20.0, but they want to append the company name there — for instance "azure", or the product name, or something like that. It's just part of their build process: they build from source, but they want to have a custom tag. And I said, okay, you can use the container runtime — you know, ctr for containerd, or the docker CLI.
A: You can use it to create a tag alias, so you can keep your original tag and apply an alias to comply with what kubeadm demands. They don't have an argument against this, but I've seen this request before; therefore I created this issue so that we can discuss it.
B: Yeah, so my answer is that I get the point, but to be honest I don't see a strong need, because if you are building your own custom version, then that is your Kubernetes version. So if you are using v1.19.7_mycompany...
A: One caveat that we have agreed on at the SIG level — at least with people like Justin in kops — is that if the user customizes the tag specifically, then we are not going to upgrade that component, which means we have to extend the logic in upgrade apply, when it fetches the cluster configuration, to potentially skip upgrades of the control plane components.
B: This is why, every time you add knobs like the others, you get this kind of side effect that you have to take care of, and some of them might be undesired — like the upgrade not being able to do upgrades anymore. We have to detect this beforehand, because otherwise we might do some upgrade steps and get blocked partway through, and that is not nice, because then we have to roll back the entire upgrade sequence.
B: It's complex to me, yeah, but in that case I'm a little bit more relaxed, because for etcd and — yes, we have this also for CoreDNS — our external components are not part of Kubernetes itself.
A: Yeah, okay, I guess this is a good API change. To be honest, we are getting to a scary volume of API changes for v1beta3. You know I've been going on about this since Rosty was here, and I think we're getting to a point where all the changes are small, but there are a lot of them, so executing them in a single cycle becomes very difficult, in my opinion.
B: Yeah, we have to go through them, because many of them are there, but we don't have users complaining or asking, so maybe some of them could be discarded. The most important thing for me is that next cycle, if we want, we can act on this API.
B: This work was unblocked in Cluster API.
B: On the mechanism, I'm generally okay, because — yeah, the operators... To be honest, we still have to find the time to discuss it and agree on the design. What I really would like is to have a better convergence between the component configs; in other words, I really would like to get rid of the kubeadm config ConfigMap in its current form.
A: That's like a long-term goal.
A: Should we — so, those kubebuilder-specific tags that you have on the API port in Cluster API. You know, the API fork: you fork the kubeadm API and you have those extra tags. I thought you'd share those — specifically, do you want to add them in v1beta3?
B: Most of them were required in order to get better serialization output, and we are still not there, because we probably have to make many of the structs pointers. Right now none of the structs are pointers, and that means an empty struct is rendered as the name of the field and brackets, which is not nice.
B
Yeah
but
but
the
the
point
here
is
that
now,
basically
in
cluster
api,
the
there
is
a
there
is
a
a
things
which
is
called
kubernetes
types,
but
they
are
owned
by
cluster
api
and
they
will
mirror
the
kubernetes
and
they
will
evolve
in
a
different
way.
B: So the nice point of the work that we did in Cluster API is that now kubeadm can evolve its own API without having dependencies on Cluster API, and the same goes for Cluster API — Cluster API can evolve. It makes things simpler.
B: That's a solution, but it is tricky, because — okay, think of the user experience: you give me a field which is the certificate key, and then I go and set it, and then you error out at me saying I don't have to set it.
A: I mean, I've seen this before, where one application is a subset of another — you know, a parent application can wrap something — and I said this before: it's not a pattern that is rejected.
B: ...is required to support a huge skew of Kubernetes versions. We don't want the Cluster API user to be, let me say, aware of the underlying Kubernetes format, so we can basically make a lot of things transparent to the Cluster API users. At least it is possible now; whether we do it is another topic.
B
If
you
are
going
how
how
we
are
going
to
push
on
these
on
this
option
or
not,
it
is
up
to
the
side,
but
yeah,
the
the
main
part
was
okay,
let's
unblock
could
have
kuben
mean
roadmap
done,
let's
make
cluster
api
user,
let
me
say
the
couples
from
from
the
kubernetes
roadmap
done
and
then
and
now
we
have
an
api.
We
can
decide
eventually
to
to
to
make
it,
let
me
say,
become
a
subset
of
what
kubernetes
offers
in
order
to
to
avoid
user
errors
or
not.
A: It's like they say: it's your soup — once you cook the soup, it's your soup to eat.
B
No,
no
thank
you.
We,
we
agreed
the
component,
the
the
comments
and
we
appreciate
it.
B
A: I wanted to ask a related question on the conversion topic: what is the state of API Machinery in terms of supporting a case where, from one API version to another, a certain configuration struct got split into two?
B: Yeah, so I'm not a super expert here, and I'm not aware of what is on the roadmap for API Machinery, but with the current API Machinery...
B
Basically,
when
you
generate
function,
if
the
types
match
you
get
the
function
auto
generated,
if
the
type
does
not
match,
basically,
you
get
a
place
order
and
an
error,
and
you
and
you
have
to
provide
the
conversion
function.
A: By the way, in this experiment that I did with the API, I completely removed the generators for everything, because it's very simple to write the stuff yourself. One argument that I know API Machinery has is that the generator for the defaults makes the code very efficient.
A
The
conversion
generator
is
also
very
efficient
because
in
my
implementation
here,
what
I
did
is
I
I
I
deep
copied
the
object
using
serialization,
but
also
there's
an
argument
here
that,
if
this
is
for
component
config
and
as
long
as
you
are
not
executing
this,
I
don't
know
every
50
milliseconds
in
a
controller,
you
should
be
fine
with
deep
copy
and
marshall
in
martial
to
that's
how
I
implemented
the
conversion
here.
B: Yeah, and it works because they're similar.
B: Yeah, because there's one serialization for us now. But I don't have a strong opinion; I think that we should do what API Machinery suggests, and otherwise we have to go to them and raise the questions there. What is really important for me is that when I look at the code in one project, I don't have to reinvent the wheel to understand a new approach to the same problem.
B: I kind of agree with you — probably all this API Machinery was designed and influenced by use cases much more complex than the ones we need — but it is a trade-off, and I prefer code consistency with the rest of the codebase.
B: If you avoid the internal type, you can use the external type in all your codebase directly. Okay, you don't even need the hub model, because you only need to do conversion from, let me say, the latest-minus-one to the latest, so it is point to point. You don't need a hub, because we don't need to manage many conversions — only one.
A
The
the
internal
type
model
we
still
it's
also
a
hub
model.
It's
just
everything
converts
like
the
you
know
what
your
product
is,
that
we
can
cover
an
external
version
as
the
hub
version
right.
A: Okay, I guess this works. I still don't like the machinery in general. And on the problem of converting from one type to two types in the next version — I think it's possible, and the way I designed this thing actually supports it, because I don't feed the Kind, I feed the specification to the converter. So I can tell the converter: hey, I have a set of objects, I want you to give me another set of objects. So it supports converting from n types to another n types.
A
You
know
in
the
cubed
air
case.
We
don't
care
much
about
this,
so
maybe
we
can
care
about
this.
If
we
split
the
cluster
configuration
into
instance,
specific
configuration
and
an
actual
question
configuration,
but
it's
just.
I
think
this
is
much
nicer
overall.
This
is
like
my,
but
I
agree.
This
is
reinventing
the
wheel.
A
It
also
something
else
that
I
want
to
showcase
again.
Is
that
actually
not
it's
not
here
I
created
a
map
between
versions,
so
you
can
you
can
get
this
and
you
can
figure
out
with
a
couple
of
utility
functions.
What
version
of
kubernetes
maps
to
what
api
version
which
we
like.
B: It is kind of clean. The main problem is that you have to explain the approach to the other contributors, who are used to the generator and things like that. So you have to advocate for this solution, while the other solution is more or less accepted, and it is consistent: if someone looks at the Cluster API conversion, they just take a look and they know where to go to fix it.
B: I got your point, but in the end I prefer consistency with the rest of Kubernetes.
A: Yeah, I mean, as long as we progress with the API, I don't object to which version — sorry, which mechanism — we use. We just have to start moving with the API, because we have been stalling for a while.
C: If we decide to go for it — v1beta1, do we want to remove it?
B
In
term,
in
terms
for
some
of
sanity
of
the
code
base,
he
I
think
that
we
should
remove
it.
So
if
we
add
a
new
one,
let's
remove
the
one
better
one,
yes
or
eventually,
let's
keep
it
only
around
for
another
cycle.
If
we
if
we
want,
but
we
have
to
move
on,
that's
it.
So
we
we
are
providing
conversion,
we
provided
conversion
and
we
gave
time
for
the
user
to
to
migrate,
and
we
exceeded
this
time
already
by
three
cycle.
If
I
remember
was
so
okay,
we.
A: Yes — I guess you're proposing that during the planning session we should reserve time specifically to establish the priority of API changes.
A: A doc where we document which fields are added.
A: Yes — the SIG Scalability folks, by the way, because they want to create a pretty big cluster with kubeadm; at least that was the idea. I think the whole proposal is blocked at this point, but the idea was to create a cluster with a certain kubelet configuration that applies to the control plane, and then the workers — a very large number of nodes — join, but they have to use a separate kubelet configuration, and currently kubeadm doesn't support that.
B: Then, okay, I'm giving you the possibility to be explicit, and if you want to opt in, you opt in.