From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-05-12
B: So the first item we have is a discussion around the KEP for an implementation of a rootless control plane in kubeadm. We had a discussion yesterday with Vinayak on Slack. The big question was: where should we put the logic for allocating these new users on a particular host, and how are we supposed to manage this?
A: One thing I want to mention is that I recommended it run after the pre-flight phase completes. From what I can see in the code, the pre-flight phase runs first, then I think there's a certs phase that runs, then a kubeconfig phase, then a kubelet-start phase, and then a control-plane phase, all as part of init itself.
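The ordering being described can be sketched as a simple list. This is an illustrative sketch only — the real phase names and sub-phases live in kubeadm's own phase runner, and this helper is not part of kubeadm:

```python
# Simplified sketch of the kubeadm init phase ordering discussed above.
# The real workflow has more phases and sub-phases; this captures only
# the ordering relevant to the rootless discussion.
INIT_PHASES = [
    "preflight",
    "certs",
    "kubeconfig",
    "kubelet-start",
    "control-plane",
    "etcd",
]

def runs_before(a: str, b: str) -> bool:
    """Return True if phase `a` runs before phase `b` during init."""
    return INIT_PHASES.index(a) < INIT_PHASES.index(b)

# By the time control-plane runs, the certs and kubeconfig files already
# exist, which is why the user-creation logic can live inside that phase.
assert runs_before("certs", "control-plane")
assert runs_before("kubeconfig", "control-plane")
```

This is why, later in the discussion, the control-plane phase is considered a natural home for the chown logic: everything it needs on disk has been produced by earlier phases.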
B: Yes, so.

A: Go ahead. So what I was recommending is that init is divided into these several phases, and initially I thought it would be a good idea to just add another phase there — but then I think you recommended that we do it as part of the control-plane phase inside init.
A: I think that is the right way to do it in this case, because all the files that we need have already been created at that point, and they're all available as part of the default arguments that we pass to the commands, right? So I think that would be a good place to put it, because all the files necessary to run the pods have already been created and all the folders are set up.
A: So all we need to do is create the users and groups, and then, as we place the files in the command — saying, "this is the CSR file" — also have some logic there that says: if the feature flag is enabled, change the ownership of the file to the specified user.
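That conditional ownership step could be sketched as a dry-run planner. Everything here is hypothetical illustration — the component names, feature-gate plumbing, and file paths are made up, not kubeadm's actual API:

```python
# Hypothetical sketch of the ownership logic described above: when the
# rootless feature gate is enabled, each file placed on a component's
# command line also gets its ownership reassigned to that component's
# dedicated user. Names and paths below are illustrative only.

def plan_ownership(files_by_component: dict, feature_gate_enabled: bool) -> dict:
    """Map each file path to the user that should own it.

    Returns an empty plan when the feature gate is off, matching the
    opt-in behavior discussed in the meeting.
    """
    if not feature_gate_enabled:
        return {}
    return {path: user
            for user, paths in files_by_component.items()
            for path in paths}

plan = plan_ownership(
    {"kubeadm-kas": ["/etc/kubernetes/pki/apiserver.crt"]},
    feature_gate_enabled=True,
)
# Applying the plan would then be a chown per entry, e.g.
# os.chown(path, pwd.getpwnam(user).pw_uid, -1)  — which requires root.
```

Separating "compute the plan" from "apply the chown" keeps the gate-off path a no-op, which mirrors the opt-in behavior described here.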
B: Yes, I was also thinking about this a little last night. The idea was also to make this part of reset, so users can execute reset and remove the same users and groups. If this is a phase in init, and a phase in reset, users could do it on demand, which is kind of nice. But if we hard-code it as part of the control-plane phases — that is, the control-plane and etcd phases, which are separate phases —
B
If
we
do
it,
then
the
creation
of
the
users
and
group
groups
will
be
implicit,
which
is
a
bit
of
a
limitation
from
what
was
what
I've
seen
from
users.
Users
always
want
the
full
control,
even
if
we
hardcore
it
a
bunch
of
stuff,
for
instance
in
cooperate
start.
We
have
some
well
not
not
in
this
couple,
but
the
one
for
join.
We
hardcore
at
a
bunch
of
stuff
in
there.
Our
users
are
not
happy
that
we
are
hardcoring,
so
many
things
in
us
in
the
same
phase.
B: They are not happy that these are not separate phases. So what do you think about the alternative solution, which is to have a separate phase after etcd — potentially like a mutating phase after this one?
C: So, I explained why the user has to opt in using a feature flag for this feature, and that means it is already explicit that he wants kubeadm to do so. So I think it makes sense to start by basically embedding what we need in the existing phases, for kubeadm join and reset, and eventually we can iterate and create separate phases afterwards.
B: Actually, that's a very good point, because when we add a phase it's almost a GA contract, unless it's experimental — which is a problem, because then we have to rename it, since we always prefix it with "experimental." Like this one, for instance: at some point it had to be removed and we had to add cert rotation as the phase name. So I think what Fabrizio is talking about is a good idea, and your proposal at the beginning, Vinayak, is to hard-code the logic as part of these — yeah.
A: That also kind of answers my second question, which was: could we merge the etcd and control-plane phases into one if etcd is local? But I think that would again be a very large GA contract change, which we probably wouldn't want. So yeah, then it's totally fine.
A: What I have right now is everything in the control plane running as non-root on my test setup, so it's not a hard or ugly change to implement just in the control plane. But I do want to run the idea by people who are way more experienced than me in kubeadm — its properties and the contract that it shares. So yeah, I think I agree with the others here that adding it into the control-plane phase is probably the good starting point.
C: Yeah, at least it is the starting point with less friction. Now, I have a question — sorry, I reviewed the KEP, but I'm reviewing too many KEPs at this time and don't remember the detail: are we going to create a different credential for each component — a different user?
A: Yes, each component will run with a unique user id, and for files that components with different user ids need to access, we'll create a common group. All of these users and groups are going to be system users and groups, and I'm already following best practices for creating them — don't give them a home directory, don't give them any ability to log in using that user.
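The practices just mentioned — system accounts, no home directory, no login shell — map onto standard `useradd` options. A minimal sketch, with hypothetical component account names (the real names kubeadm would use are not specified here):

```python
def useradd_args(name: str) -> list:
    """Build a `useradd` invocation for a component service account,
    following the practices mentioned above: a system account with no
    home directory and no interactive login."""
    return [
        "useradd",
        "--system",                       # allocate a uid from the system range
        "--no-create-home",               # no home directory
        "--shell", "/usr/sbin/nologin",   # no interactive login
        name,
    ]

# One unique user per control-plane component (names are hypothetical):
for component in ("kubeadm-kas", "kubeadm-kcm", "kubeadm-ks"):
    print(" ".join(useradd_args(component)))
```

Shared files would additionally get a common system group (`groupadd --system`), per the discussion above.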
C: Okay, great. So that means we can nicely break down the process by phases.
A: Yes — if we wanted to, we could do this. Looking at the code, I think the idea of phases is that you do something, and then the next phase doesn't have to — there is no state that lives across phases other than the cluster configuration, right?
A
So
we
could
introduce
a
phase
which
is
like
create
the
users
right
which
just
creates
the
users,
but
then
the
control
plane
phase,
we'll
just
assume
that
that
has
been
done,
but
then
you're
kind
of
coupling
those
two
phases
anyways.
So
you
might
as
well
just
move
the
code
into
one
phase.
A: Yeah, and that's exactly what's happening today — at least in the code changes that I have on my machine. Basically, I think there's a function that says "create static pods" or something like that, and you pass in the names of the components — you can pass a list of components there, and only the components passed in the list are created.
A: The good thing is that kube-apiserver will still be able to access everything, because we will change the file permissions based on kube-apiserver, and since the controller manager is running as root, it can access any file created by any user anyway. So it won't be a problem there either. I tested that scenario — I never ran kube-controller-manager as non-root, and it still works.
C: My only reminder is that we don't only have to take care of init — we also have to take care of join for control planes.
A: Yeah — so what's the concern in join specifically?
C: Exactly — that means your code should also kick in when you run join for a control plane. If you are doing the implementation like you just explained — basically implementing it where we create the static pods at the end — the code paths are the same, but we have to test it.
C: No, it is another machine — oh, okay, yeah — that joins the cluster as a control plane, not as a worker.
A: The only scenario that I see would break is if you run another control plane on the same machine, but I think that's not a valid scenario anyway, so yeah, okay. And one last question I had was about the patching. What I know from the code is that first we put in all the defaults, then we create the command, and then, right at the end — once we have the pod created, or rather fully filled in with all the fields we want to fill in —
A
After
that
we
call
like
patch
based
on
what
the
user
requests.
So
in
the
scenario
where
the
user
requests-
let's
say
they
want
to
like
override
the
location
where
the
search
are
stored.
For
some
reason,
then
they
are
gonna
have
to
manage
the
file
permissions
right
and
that's
like
an
expectation.
We
today
also
hold.
A: Yeah, okay, so that's what I wanted to be explicit. When we update the documentation, we should probably call out that if you're patching it, you are responsible for the file permissions when the feature flag is enabled — just so that it's more explicit, I guess.
B: No problem, thank you. I wanted to ask the generic question: what use do you have in mind for kubeadm? Maybe it's related to kind — what is the use case on your side for this?
A: Well, I have been working mostly in the security space, promoting people running containers as non-root, and when I joined this Kubernetes security space I found that, despite having a lot of tools to start a cluster — there's kube-up, there's kubeadm — even Kubernetes wasn't really enforcing its own best practices.
A
So
my
motivation
for
like
why
I
kind
of
did
it
for
cube.
Admin
was
like,
I
know,
like
a
lot
of
customers
use
it
like
customers
who
are
like,
and
a
lot
of
like
cloud
providers
also
use
it
and
if
we
add
an
easy
ability
for
a
cluster
to
start
with
non-root,
it
promotes
just
it
educates
other
developers
to
kind
of
hey
yeah.
We
should
run
our
containers,
let's
not
do
it
as
well.
The
other
thing
is
why
I
want
to
do
this.
A
I
also
want
to
call
out
is
in
in
the
security
community
like
look
there's,
there
are
challenges
to
running
controllers
as
containers
and
non-root,
and
we
should
kind
of
implement
some
of
the
things
that
we
kind
of
haven't
implemented
like
ambient
capabilities.
We
always
talk
about
like
if
we
do
ambient
capabilities
running
cube.
Api
server
is
non-root
and
binding
to
like
a
low
port,
which
is
a
completely
valid
scenario
for
qbps
server
right
would
become
way
easier
for
a
lot
of
customers
and
even
kubernetes
in
doing
it
for
cube
up.
A: We had to go through several hoops to do it for kube-up. So that's mainly my motivation. And in our company, we definitely use kubeadm for a few products, where we use it to bootstrap clusters, and we also use kind — and if you use kind, you're effectively using kubeadm, because the bootstrap there is done by kubeadm.
A: So we wanted to promote that, because I think we're moving towards using kind for a lot of cluster bootstraps rather than, you know, shell scripts — it's easier to use the libraries, write Go code, test it, and make sure you're testing your framework; it's very hard to test shell scripts. So that's something I'm promoting, but then I realized I would have to go
A: do a lot of custom work to bring in rootless changes, to run these things as rootless. So I wanted to build that into kubeadm so that it becomes easier and more people can use it too. I don't just want to do it for my company — I want to do it for everybody, so that it becomes very easy and the community realizes: "oh yeah, we're running all our components as non-root, so maybe we should go
A: try it as well." And it'll serve as an example too, because it's very complicated — the capabilities and everything in Kubernetes are kind of complicated, and I don't think a lot of people really understand it well. So if we use it and we create good examples, I think there'll be more adoption.
B: Well, thank you very much. This work for kubeadm, like for kube-up, is greatly appreciated, and I see that you're basically advocating for the whole Kubernetes project to apply some of these best practices. So that's great.
B: Right, thanks a lot. If you have any questions, just ping me on Slack, or we can have another meeting next week — sorry, in a couple of weeks — about this. I may have to drop in a few minutes. Fabrizio, I wanted to show you the status of the v1beta3 PRs.
B
I
am
going
to
continue
sending
prs.
We
are
facing
a
bit
of
a
problem
with
review.
I
know
that
you
are
very
busy.
Some
of
our
contributors
from
china
are
also
apparently
busy,
so
we
are
technically
without
reviewers.
At
this
point,.
C: Don't worry — if I'm not answering, tag me and I will get on it. I'm just struggling with the influx of notifications, but if you need a review, ping me, okay? I'm committed — at VMware we want v1beta3 to happen. So I'm super happy you are doing all the heavy lifting, but I am here to support you.
B: Okay, let me do a quick summary of the pending PRs, to give you some context. I'm not going to open the diff, but just to give you the state: I added the support for skipping phases in this PR. The diff is very small; it ended up being an easy change. It doesn't have unit tests — and I don't think we need them that much — but if you have a strong argument for unit tests, we'd have to do a much bigger refactor.
B
This
is
the
diff
is
easy
to
read
it's
a
simple
change
for
this
one
remove.
So
this
is
very
sketchy.
I
tried
multiple
times
and
I
you
know
we
had
a
meeting
about
this
on
private.
B
Basically,
I
ended
up
with
a
some
sort
of
a
state
where
we
have
to
keep
the
internal
dns
type,
but
it's
removed
in
v1,
beta,
3
and
kind
of
the
fuzzing.
Can
converters
are
a
bit
sketchy,
but
you
know
have
a
look.
If
you
don't
like
it,
I
can
try
again,
but
honestly,
I
think
I
spent
something
like
six
hours
from
this
trying
to
get
it
right,
but
it's
still
not.
B: Yeah, just have a look. Honestly, the way this is — we already have changes in the v1beta3 conversion function, because v1beta3 is missing a field, right? So I think the way this is — I mean, it's tolerable; it's not great, but it works. The biggest problem, again, was to make the fuzzers happy, and a feature request that I have for the fuzzing logic is to be able to determine the context of the fuzzing — what are we fuzzing? If I knew, I could default and pin some fields differently, and that's not possible today. But yeah, that's the other issue — have a look if you have the time. So, this is the one for the CRDs, to have the +optional marker on omitempty fields.
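For context, "omitempty" semantics mean that optional fields with empty values are simply dropped from the serialized form (in the Go API types this pairs a `// +optional` marker with a `json:",omitempty"` tag). A Python sketch of the behavior, with field names that are illustrative only:

```python
import json

# Sketch of omitempty semantics: optional fields whose value is empty
# are dropped from the serialized output. Field names are illustrative,
# not taken from the actual kubeadm API types.

def marshal_omitempty(obj: dict) -> str:
    """Serialize a mapping, dropping keys whose value is None or empty."""
    return json.dumps({k: v for k, v in obj.items()
                       if v not in (None, "", [], {})})

print(marshal_omitempty({"name": "cfg", "imageRepository": ""}))
# -> {"name": "cfg"}   (the empty optional field is omitted)
```

Keeping the marker and the tag consistent matters because tooling that generates OpenAPI schemas reads the `+optional` marker, while the JSON encoder honors only the tag.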
B: And you can have a look, but honestly, I think I should drop the second commit.
B: Yes, that's the only argument, and I don't think we should do it, because this is a Cluster API problem — it's not a kubeadm one. And also, the KEP for ObjectMeta never merged, so we are following the rules.
B: If you want ObjectMeta, you have to understand that maybe we should do a custom one, not the full ObjectMeta — and maybe we should not embed it; in terms of structural embedding, we should just —
B: Okay, status of this one: the API change itself was easy, but some of the logic — sorry.
B: Some of the logic that revolves around it — so we use annotations, but then we fall back to ClusterStatus; that was what we had, right? I made changes to that. I think my logic is correct, but somebody definitely has to take a look, because I'm changing actual code. The API change itself is pretty simple, and the converters were simple — the change in the root converter is simple. It just needs a review around the retries and fetch logic.
B: Well, it's a very nice cleanup, actually. Also, another problem here: we have phases that are called "update status."
B: So when you add a new control-plane node, you have to update the ClusterStatus — that was the case before, right? You had to update the ClusterStatus, and now this is no longer needed, because the static pod will become a mirror pod that will have the annotation. But we already have phases for this, so the phases are —
B: Yeah, and now I cannot close the issue with this PR — I have to keep them for three releases, at least. But yeah, that's one side effect of this change in particular. The other one is the retries and, finally, the API change, but you can have a look. That's the current state of the pending PRs; the rest are really low priority. But for this one I'm seeing a lot more — do you have a quick comment on this one, like for patches in config?
C: Yeah, but my point is the following: patches are a kind of advanced feature that should be the escape hatch when you have a problem. So now we are piling up two questions. One is that it is not easy in Cluster API to leverage this escape hatch, because Cluster API hides something which is exposed by kubeadm — and this is not a problem of Kubernetes.
C: Sorry — this is not a problem of kubeadm; it is a problem of API modeling in Cluster API. So maybe what is easy in kubeadm is already okay, but we are hiding it in Cluster API. Second is that there are discussions about putting the patches in our configuration, because our configuration now is basically global configuration. That means we would maybe be covering some use cases for the patches, but not all. I'll make an example.
C: We can discuss this — it could mean that the user basically doesn't have to create a file locally; we keep the files from the config, store them locally, and then for upgrades we do the upgrade using a well-known folder.
B: I mean, the clock is ticking for the remainder of the cycle. From these items, which one do you see as the highest priority for you?
C: What about, Lubomir, if we schedule a meeting? Because now we are basically at the enhancement phase for the cycle, right — yeah — and then there are three weeks for implementation.
C
But
but
please
feel
free
to
to
add
more
in
in
those
three
weeks,
so
we
are
sure
that
that
we
are
we
we
we
keep
things
moving
as
much
as
possible.
B
Okay,
I'm
going
to
do
some
of
probably
go
with
this
one.
Let's
I'm
going
to
work
on
some
of
these
pr's
but
half.
They
must
work
in
progress,
I'm
going
to
have
to
drop,
but
thank
you
for
joining
and
we
are
going
to
have
a
kubernetes
meeting
again
in
a
couple
of
weeks,
all
right,
bye-bye.