From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-02-03
A
A quick PSA: the release team contacted us about the state of Kubernetes KEPs for this cycle, and I said that we don't have anything in terms of KEP changes. But it's kind of difficult to categorize things. For instance, what if I want to just patch a KEP with a small change which is not really changing the state of the feature from, you know, alpha to beta, for example? Should we have the release team track this? I think probably not, so this is just something that we should discuss with, basically, the release lead, who is maintaining the whole release process. Some things we just shouldn't track, like small changes without any state changes.
B
But yeah, this is one question we'll have here. So I guess this isn't for new features; it's something that is pre-existing, and as long as we don't graduate it, it's just a simple change, right?
A
Yeah, this is it: the release team basically tracks new KEPs, which obviously are potentially new features, and they also track feature graduations, so from, you know, beta to GA. But obviously we shouldn't track small things like, you know, changing a sentence in a KEP or something like that, or maybe introducing a new command that supports machine-readable output. Should we have the release team track that? I think the whole thing is driven by a milestone.
A
This is more of a release team discussion than a Kubernetes discussion, pretty much. But we have a discussion about machine-readable output later, so if we want to change something there, likely we are just not going to have the release team track this work, and if they ask us why, we're just going to have to discuss this topic with them.
A
Yeah, so this is the release schedule. I'm just showing it, you know, for the recording: the 9th of February is the enhancements freeze, which is, like I said, for new features and feature changes; code freeze is on the 9th of March, so we have about a month after that to make any potential changes.

A
Okay, that's pretty much it. Any comments from the release team slash release schedule topic?
A
Okay, moving to the next one. This is something that Nadir has seen proposed downstream: a discussion about the state of machine-readable output in kubeadm. Can you summarize this for us?
B
Yeah, so on the cluster API side of things we're currently working on a node agent, a CLI that is going to run on the worker nodes to do the whole bootstrapping. The idea is to have something generic that wouldn't basically be a bash script running on top of cloud-init, but rather something composable that we can write and test pretty easily.
B
The idea is for this to also do some of the secure attestation that we need, which is going to be specific to each of the cloud providers. To do that, we will likely reuse kubeadm.
B
We will reuse kubeadm, and we will likely need to consume some of the outputs from kubeadm, or at least consume the output as machine-readable objects, because if it's not machine readable then we would have to parse things through regexes and such. So, ideally, what I'm looking for is all of the phases, and to see the status of each phase in terms of machine-readable output.

B
At least so that we narrow down, on the KEP side, what work we need to do in kubeadm for this to happen, if any.
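The consumption path described here could be sketched as follows. This is a minimal illustration: the JSON array of per-phase statuses and the `PhaseStatus` field names are hypothetical assumptions for the sake of the example, not a format kubeadm emits today.

```go
package main

// Sketch of consuming a machine-readable per-phase status instead of
// regex-scraping free-form logs. The PhaseStatus shape is a hypothetical
// assumption for illustration; kubeadm does not emit this exact format today.

import (
	"encoding/json"
	"fmt"
)

type PhaseStatus struct {
	Name   string `json:"name"`
	Result string `json:"result"` // e.g. "Success" or "Failure"
	Detail string `json:"detail,omitempty"`
}

// parsePhases decodes a JSON array of phase statuses.
func parsePhases(data []byte) ([]PhaseStatus, error) {
	var phases []PhaseStatus
	if err := json.Unmarshal(data, &phases); err != nil {
		return nil, err
	}
	return phases, nil
}

func main() {
	raw := []byte(`[{"name":"certs","result":"Success"},
		{"name":"etcd","result":"Failure","detail":"member unhealthy"}]`)
	phases, err := parsePhases(raw)
	if err != nil {
		panic(err)
	}
	for _, p := range phases {
		fmt.Printf("%s: %s\n", p.Name, p.Result)
	}
}
```

A caller such as the node agent could then branch on `Result` per phase rather than on a single process exit code.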
A
The output from all the phases, it's a lot, and it's completely unstructured; basically it's just print lines and klog messages. So I would like to understand what exactly we need for the, you know, node attestation problem. My assumption is obviously that we need maybe the bootstrap token and the certificate key, which should be machine-readable. Is there anything else, actually?
B
So, aside from the certs and everything, it's probably going to be any warning or message that we need to account for in the logic, because I assume it's not going to be that simple: if we exec stuff, it's not always going to be just zero or one, but we would need to account for some specific cases that kubeadm might communicate.
A
Okay, so is it an option if we... basically, this was an original proposal that we discussed with the, you know, the contributor who added the boilerplate for machine-readable output in kubeadm.
A
It
was
a
discussion
that,
since
we
have
so
so
much
stuff
printed
in
either
on
standard
standard
out
of
standard
error,
but
it's
it
really
doesn't
make
sense
to
some
of
that
to
be
machine
readable.
Our
idea
was
to
just
create
a
structure
that
looks
like
a
standard
standard
out
standard
error,
it's
a
complete
dump
and
what
it
actually
is
needed,
like
certificate
keys
and
tokens
and
stuff
like
that,
can
be
in
a
separate
structure
below
this
below
below
these
two
fields.
That's
obviously
going
to
be
quite
big.
B
Yeah, and I guess also anything related, for example, to etcd, the etcd phases and such. So what I'm saying is: anything that can be of interest, as you said, we're happy to get it and reuse that. But I agree with you: we don't need, like, the whole logs and such; it's mainly just the key information for the workflow to happen.
A
Yeah, and also, even if we do that, with the separate streams going to separate fields, you still have to parse them some way, which is not possible currently unless it's completely machine readable. For instance, I have a question: why do you need the output of etcd for node attestation?
B
So it's not only machine attestation; it's going to do the whole bootstrapping. The idea is, at some point, to not have the kubeadm bootstrapper.
A
As part of CAPI, at least as I see it. So the idea is to potentially remove kubeadm, or is it optional?
B
It's going to be a separate output structure then. So the idea is not to remove kubeadm; it's still going to be the default we ship with CAPI. But ideally we should have an interface that basically abstracts all of this, so anyone that has their own bootstrapper and wants to plug it into this node agent just has to implement this interface. So the idea is to still keep kubeadm within our defaults.
A
Yeah, for this feature request I would honestly like to see the spec for it, because it's not clear to me what particular messages we need and what not. So if there is a doc for that, I can see whether we can actually avoid the whole proposal from earlier to have fields for separate output streams. We can actually write the file with the data that is needed, but only as long as you have a spec for it.
B
Yeah, so we're working on that with Nadir. We do have some working sessions this week; we'll probably open it up, you know, by the end of next week for feedback.
C
So I have two comments here. First of all, given that this request basically targets init and join, my assumption is that the implementation won't be easy, because we have a lot of fmt.Printf spread through the code, and basically these kinds of things are not well engineered today, and it will be really difficult to shut them down in favor of machine output. So point number one is: don't underestimate the complexity of making this work in init and join.
C
My number two is a question for you. So in cluster API, basically, the range of supported Kubernetes versions goes from 1.16-something up to 1.21, if I remember well. And since this change won't be backported, because it is basically a new feature, it is not possible to backport it. So how do you plan, basically, to implement a unique cluster API node bootstrapper that can work with older versions of kubeadm, without the machine-readable output?
B
So, ideally, the plan would be to keep the bootstrapper, keep the current architecture, for v1alpha3, and as soon as you upgrade, then we would rely on the new node agent. That would basically be from where we would start our compatibility window. And then we would need to think about a skew policy between the controller and the CLI slash agent that would live and run on the machine, be it like two versions of CAPI between the CAPI controllers and the agent, or...
C
More
no
but
sorry
to
interrupt.
So
what
I'm
concerned
is
that,
okay,
let's
assume
that
I'm
running
a
v1
alpha
4
cluster,
that
has
the
node
agent
okay,
but
we
with
the
one
alpha
node
agent,
that
probably
will
be
the
management
cluster
will
be
120
or
119.
So
something
really
really
recent.
With
that
cluster,
I
can
create
a
kubernetes
workload
cluster
of
older
version,
so
this
queue
for
the
end
doesn't
mean
that
I'm
going
to
basically
spin
up
a
machines
that
that
has
the
the
node
agent
and
and
kubernetes
version.
B
So
that's
interesting
because
it's
going
to
depend
on
the
policy
that
we
have
for
cube.
Adm.
I-
and
I
guess,
like
the
latest
cube
adm-
supports
the
older
versions
of
of
that.
A
We don't have that many version branches inside the pod manifests, but we may see more of that once some of the flags from the control plane components start going away, and then we're going to have a big mess. And honestly, I really don't want to increase the skew officially before that.
B
That
so
I
mean
one
question
for
both
of
you
so
today
for
cappy,
I
guess
we're
still
shipping
this
worshipping
we're
shipping,
a
fixed
version
of
cubed
young
of
the,
depending
on
the
kubernetes
version
and
slash
image
that
we
want
correct.
C
Yeah
but
but
yeah,
I
agree
so,
first
of
all,
today
in
cluster
api,
the
image
builds
they're,
basically
packages
together
the
kubernetes
version
of
the
same
version
of
of
the
of
kubernetes
okay.
So
they
are
in
sync.
C
Machine,
readable,
output,
okay:
this
is
the
scenario
so
this
work,
if,
if
I
I
implement,
basically,
I
create
a
not
image
with
v121
and
kubern
miniv121
with
machine,
readable
output.
Okay,
fine!
Now
I
need
to
build
a
new
node
image
for
creating.
C
B
Yeah, it's going to be, I guess, on us to either not rely on machine-readable output and only rely on the set of features that exists today, or we would need to bring folks to the table to help increase the skew, because I assume that, with the current maintainers, I agree it's not something sustainable.
C
Yeah
and
that
it
was
also
something
difficult
to
do
in
a
retroactive
way,
so
I
think
that
we
can
agree
that
from
now
on
could
mean
will
every
cycle
add
something
to
its
skew
and
basically,
when
we
do
change,
we
keep
this
in
mind
my
view.
If
we
want
to
do
this
reconstructivity,
that
means
that
we
have
to
go
through
all
the
pr
in
the
of
the
little
cycle
and
understand
how
to
make
this
work
in
a
retroactive
way.
So
it
won't
be
an
easy
exercise.
A
Backports are for tests and no features; this is technically a feature, so for backporting's sake, I can't...
B
So what we care about is probably going to be the output of... so if we go and check the bootstrap script, basically, for cluster API, we will likely need to check some of the outputs here.
B
I
I
don't
know
if
I
don't
know
if
we're
gonna
re
like
we
use
some
of
the
certificates
stuff,
because
because
of
because
of
the
node
secure
attestation,
so
we
will
likely
have
more
kind
of
plugins
on
the
cube
config
to
generate
the
right
source.
The
way
like,
for
example,
it's
done
for
aws.
C
Yeah
so
to
comment
here
so
first
of
all,
this
is
the
experimental
try
script.
If
I'm,
if
I'm
not
wrong,.
A
C
C
There is already an issue that we should look at for getting rid of that script, because basically this script was created kind of as an emergency while delivering a cluster API release, and after that we made many changes, both in kubeadm but also in cluster API itself, basically to be more resilient to slow etcds and whatever. So most probably this script is not necessary, and you can call only kubeadm init or join instead of calling phase by phase. That's one comment.
C
Second
comment
is
that
me
might
be
that,
if
what
you
really
need
is
to
get
access
to
the
certificate,
the
easiest
way
is
just
to
check
for
the
certificates
in
the
file
system.
Instead
of
waiting
for
the
kubernetes
he
need
enjoy
to
output
the
certificate
itself,
which
is
also
something,
let
me
say,
not
not
secure
from
from
a
point
of
view.
I
I
I
won't
expect
that
in
the
kubernetes
in
a
log-
and
I
I
we
should
take
care
of
not
adding
sensitive
information
in
the
log.
B
Yeah
so
for
for
certs,
I'm
not
sure,
like
I'm
still
not
sure
if
we're
gonna
reuse
those
phases.
So
that's
why
I
was
like
we're
not
we're.
Probably
not
gonna,
use
the
whole
cube
adm
in
it
or
drawing,
because
in
the
middle
of
this
workflow
we
will
likely
need
to
add
some
bits
for
the
secure
attestation.
B
It's
it's
gonna,
be
probably
mainly
to
you
know,
have
more
insightful,
more
insightful
data
for
to
report
back
during
the
bootstrapping
process,
because
today
it's
just
basically
did
execute,
returns,
zero
or
not
returns
one.
So
we
we
don't
have
like
meaningful
data
to
show
the
you
to
show
the
user
during
the
progress.
It's
just
the
bootstrap
failed
or
succeeded,
and
I
guess
we
can
assume
the
status
quo
and
keep
the
same
thing
if
we
don't
find
the
way
to
do
it.
B
Yeah, if we go another route, we might not need that, because we still need to support three of the older releases of Kubernetes, which we can show in a picture.
A
I saw the proposal. So is the upstream cluster API proposal for the node agent the one where somebody can check for the latest updates about this problem, like how to check the state?
B
It's probably going to be shared with the community by the end of the week. Once we share it, we will probably tag the folks on the PR.
A
You know, I've been complaining about the phases, but every time we change something in a version of kubeadm, for instance 1.22, if we change the order of phases, it will become a mess for the node agent and whoever is managing the script.
B
Yeah,
so
that's
that's
another
issue
that
I
that
I
was
thinking
about.
It's
it's
regarding
the
backward
compatibility
and
stability
of
faces.
So
can
we
assume
that
the
same
phases
are
going
to
be
stable
from
a
kubernetes
version
to
another,
or
at
least
from
the
current
cube
adm
till
the
three
oldest
three
releases
back?
A
Yeah
the
phases
the
coi
itself
is
considered,
ga
if,
if
we
take
the
faces
as
part
of
the
cli
and
if
the
phase
is
not
explicitly
marked
as
experimental,
I
mean
that's
how
we
do
it.
We
can
say
that
the
phases
are
ga2
because
they
will,
and
potentially
they
will
break
people.
Whoever
you
know
is
breaking
the
process
into
phases.
A
I
think
I
was
thinking
of
one
way
for
us
to
be
able
to
make
changes
without
breaking
people
is
to
make
the
phases
re-entrant,
which
is
to
pretty
much
order
the
phases
but
execute
the
same
phase.
Another
time
like
in
later
stage.
A
But
if
it's
already
executed
know
about
that
and
skip
it
like
you,
we
can
play
around
methods
like
that,
but
you
know
it
gets
complicated
around
stuff
like
hcd,
you
like
you,
have
to
know
to
not
react
the
same
member,
so
the
re-entrance
is
going
to
be
a
bit
of
a
a
bit
of
a.
B
Discussion
topic
yeah,
and
I
guess
like
since,
since
we
each
time
bake
a
cube,
adm
version
with
within
image
builder.
I
can
assume
that
we
have
certain
stability
for
the
latest
releases,
because
otherwise
it
would
have
break
broken
up.
Cluster
api.
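The re-entrancy idea discussed above could be sketched roughly like this. It is purely illustrative (kubeadm has no such runner today), with completed phases tracked in memory where a real node would use a state file on disk.

```go
package main

// Rough sketch of re-entrant phases: completed phases are recorded (here in
// memory; a real node would use a state file on disk) and skipped on a second
// run. Purely illustrative; kubeadm does not implement this today.

import "fmt"

type Runner struct {
	done map[string]bool
}

func NewRunner() *Runner { return &Runner{done: map[string]bool{}} }

// Run executes fn for the named phase unless that phase already completed.
func (r *Runner) Run(phase string, fn func() error) error {
	if r.done[phase] {
		fmt.Printf("skipping already-executed phase %q\n", phase)
		return nil
	}
	if err := fn(); err != nil {
		return err
	}
	r.done[phase] = true
	return nil
}

func main() {
	r := NewRunner()
	certs := func() error { fmt.Println("generating certificates"); return nil }
	r.Run("certs", certs)
	r.Run("certs", certs) // second call is skipped, e.g. no re-adding an etcd member
}
```

As noted above, the hard part is phases with external side effects (like etcd membership), where "already done" has to be detected on the system rather than in a map.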
A
Yeah, we actually changed one of the phases recently. I believe it was for a need where, because of the Alpine users, we had to change the order of when the kubelet is started during kubeadm init. So we basically moved the phase from a position that was late in the process to something earlier, or something like that: we changed the phase order.
A
It would have been fine unless you are on an Alpine system, right; it works fine for the systemd case, for most Linux distros. So this particular move was not a breaking change when we shifted a phase, but if something else comes up, like a rename or things like that, everything will be breaking.
A
We
have
to
think
of
a
way
to,
like,
I
say,
introduce
a
new
face,
make
it
re-entrant
deprecate
the
old
one,
and
it's
going
to
be
very
tricky
for
us,
but
given
quasar
api
is
trying
to
use
phases,
I'm
not
talking
about
the
whole
alpha
feature
for
retry.
A
If
the
note
agent
is
trying
to
use
the
phases,
maybe
we
should
rethink
the
whole
idea
of
how
to
basically
write
the
proper
deprecation
policy
for
phases.
Something
else
we
can
think
about.
For
the
note
agent
is
the
phases
of
cube.
Adm
is
or
sorry
I'm
kind
of
confusing
the
topics.
Like
the
scripting
question.
A
A
B
I know, for example, for kubeadm, if you already have a certificate directory that exists with CA certs, kubeadm will probably use that, so I assume it's not going to execute the phase to generate the certs. So, all right, I assume we can do something like that for other phases, for example.
A
Yes, like again, I would like to read the spec; everybody is going to have to comment on that. We can bring Jason in, who will comment. So we have to think about: can we avoid the breaking down of phases? Here's how I see the phases of kubeadm.
A
You
can
use
them
as
a
standalone
command
to
perform
an
action,
but
if
you
are
to
my
understanding,
if
you
are
breaking
down
the
process
into
all
its
phases
and
maybe
skipping
one
of
one
two
of
them,
then
it
feels
like
to
me
this.
We
should
you
shouldn't
be
doing
that
like
it's,
it
doesn't
feel
right,
yeah.
If
you
only
want
to
execute
on
demand
a
couple
of
phases,
I
don't
know
a
cube
config
phase
to
generate
that
min.com.
Something
like
that.
A
I
can
see
this
as
a
use
case,
but
if
you're
breaking
the
whole
process,
I
I
really
don't
like
this,
the
basically
this
use
case
of
the
phases.
I
know
that
we
have
people
vmware
that
are
doing
that
also
but
yeah.
A
We
I
want
to
see
the
actual
use
cases
and
demands,
and
then
maybe
we
can
have
some
sort
of,
like
I
said,
like
a
state
file
on
disk
that
they
also
made
them
to
avoid
behaving
in
a
certain
way,
and
we
can
avoid
this
complication
about
phase
deprecation
the
risk
we
have.
It
has
yeah.
B
Like
I'm,
we're,
probably
within
the
dear
gonna,
do
some
pocs
to
see
to
what
extent
we
can
we
can
reuse
like
just
cubed
and
unit
and
cubed
engine.
B
It's
gonna
mainly
depend
on
how
flexible
it
is
to
we
use
what's
in
the
certificates
there,
because,
because
if
we
do
bring
your
own
ca
plus
have
desserts
with
the
with
some
custom
logic.
That
brings
and
verifies
that
that
we
provide
the
right
cn
for
that
kubernetes
node.
Then
that's
fine.
If
not,
we
would
have
to
reuse
the.
B
The
challenge
here
is
going
to
be
how
we
can
integrate
some
custom
logic
that
verifies
that
we
stick
the
the
right
common
name
into
the
certificates
of
for.
C
Basically, you don't have good visibility into where you are in the process, and the second part is that, basically, if there is a failure using this script, you are retrying only a phase, while with the other approach you have to retry everything.
C
It is, if the etcd member is still there, to make the join more robust. So yeah, there are pros and cons to this. This script was really useful at the time, but I really would like to basically solve the problem that this script is solving in kubeadm or the kubeadm control plane, so we have a more robust solution.
B
Yeah
great
I'll,
let
you
know
anyway
I'll,
let
you
know
the
results
of
our
testing
regarding
bringing
or
bringing
like
bringing
our
own
source,
for
example,.
A
Yeah,
I
think
we
like
last
year.
I
I
think
I
added
a
test
which
is
pretty
much
a
cubed
demand
twin
test,
which
is
pretty
much
doing
external
ca.
So
you
sign
everything
and
you
just
create
the
customer
like
we
already
have
a
test
for
that
like
we
in
you
know
upstream
kubernetes
cube
adm
the
basically
you
sign
all
the
certificates
that
you
need
externally
and
cubadium
just
bootstraps
with
those.
A
So
I
did.
I
think
nadir
has
to
look
at
that
as
well,
because
but
again
we
have
to
look
at
the
like
the
requirements
in
the
dock.
B
Yeah
yeah,
it's
gonna,
be
like
the
the,
as
I
said
like.
The
main
challenge
here
is
gonna,
be
at
least
for
the
node
attestation
is
to
have
something
that
that
assures
and
ensures
that
whatever
we
put
in
the
common
name
of
the
cube,
that's
cert
and
like
the
q,
config
is
going
to
be
the
right
common
name
and
not
just
some
random
binary
using
some,
for
example.
Random.
Note
me.
A
Sure
yeah,
currently
we
just
for
the
couplet
clancert,
we
just
put
the
node
name
in
the
standard
format.
Kubernetes
requires,
but
yeah.
A
It's verifiable, so yeah. I was curious: are you using phases in your, you know, implementations?
A
Yeah, this was basically the original idea we had with Fabrizio about this: you can skip the phases. But immediately we saw that users are breaking down the process and then reordering it. So yeah, okay. So you and... sorry, yes, and you said that Nadir is going to prepare this doc, like, next week?
B
A
bunch
of
review
at
the
team
next
week
and
once
this
review
is
done,
we're
gonna
we're
gonna
share
with
the
community
gently
like
we
wanted
to
at
least
get
aligned
between
us
to
have
the
same
thinking.
A
I think it would be sufficient to just show what we have in the 1.21 milestone. Let me share my screen, but I'm going to turn my camera off, because I don't trust my internet currently. Can you give me back host, please? You have to click on "more" on my name; you can click "more" and give co-host.
A
So,
let's
see
what
we
have
for
121.
A
So
for
pretty
a
quick
update
about
this,
I
was
experimenting
with
the
whole
system.
The
driver
change,
which
is
also
related
to
image
problems,
cost
api.
A
Basically,
I
experimented
and
I
saw
that
your
concern
is
confirmed
during
upgrades.
We
do
default
the
user
config
and
it's
going
to
break
an
existing
cluster.
If
you
do,
you
know,
kubernetes
upgrade
apply
on
it,
so
I
implemented
some
additional
logic.
To
basically
do
this.
Only
during
init
and
for
upgrades
the
whole
component
config
defaulting
is
not
going
to
happen
with
systemd.
A
I
mean
it's
a
proposal
at
this
state.
I
have,
I
have
a
branch
for
this
I
mean
I
can
send
it
before
the
deadline
for
cold
freeze
and
we
can
discuss
on
the
pr
but
yeah.
I
I
think
it's
something
we
should
be
doing.
C
We only have to make sure that this change basically lands together with the image builder change, and also there is a CAPI change for this. So we have to make sure that all three changes land to make this happen.
A
Well,
we
can
continue
this
topic
later
in
the
cycle.
I
think
it's
fine,
I'm
going
to
still
try
to
play
with
it
a
little
more
to
see
if
I
have
a
cleaner
pr
but
yeah,
it's
not
that
also.
I
think
we
should
do
a
docs
upgrade
sorry
potential.
Docs
addition
can
be
if
we
tell
the
users
how
to
drain
a
node
completely,
basically
swap
a
driver,
which
is,
I
don't
think,
it's
recommended
at
all.
It's
just
way
too
disruptive.
C
As
a
kubernete,
let
me
say
documentation,
we
should
basically
not
recommend
these,
which
is
that
that
is
consistent
with
the
runtime,
the
documentation,
the
container
and
time
documentation.
A
Yeah,
I
completely
agree.
The
cri
page
of
the
kubernetes
docsis
historically
has
been
maintained
by
sequester
life
cycle.
Pretty
much
signaled
has
not
contributed
to
documentation.
Now
I
see
people
starting
to
plan
some
contributions
related
to
the
whole
docker
scheme,
deprecation.
So
maybe,
with
this
you
know
new
generation
of
maintainers.
We
can
get
some
guides
in
there,
which
also
affect
our
work
yeah.
So
this
is
the
this
issue
this
one.
This
is
something
that
andrew
zakim
reported.
The
pr
is
ready.
A
It's
waiting
for
his
review,
I'm
going
to
ping
him
again
later
in
the
cycle.
If
he
you
know,
he
missed
the
the
notification
completely
this.
This
is
part
of
your
fabricio,
your
roadmap
discussion.
Maybe
we
should
just
have
on
the
next
meeting.
We
can
get
all
the
tickets
and
like
discuss
all
the
tickets
that
you
have
created
with
the
roadmap
prefix.
Maybe.
C
Yeah
in
the
meantime,
I
I
suggest
to
remove
the
milestone
from
everything
which
is
mercedes,
because,
by
definition,
we
are
still
not
yet
assigned
to
to
our
release.
A
Okay,
I'm
going
to
move
them
to
next
after
the
the
call
housekeeping
tax.
Let
me
check
like
what
what
is
this.
I
think
we're
pretty
good
from
that
I
mean
we
don't
have
anything
yet,
so
this
is
okay,
so
I
think
this
is
already
done.
I
have
to
update
this.
This
is
out
of
date.
A
This
is
a
topic
that,
basically
sorry,
let
me
try
to
hide
this
window
because
I'm
using
windows,
7
and
this
room
is
kind
of
buggy,
so
this
was
a
two-part
requested
change.
A
The
second
part
was
to
allow
cubadiem
to
tolerate
certificate,
bundles
that
have
an
intermediate
ca
certificate
inside
the
bundle
alongside,
for
example,
us
cube
api
server
server
certificate
that
this
change
was
done,
but
I
think
the
user
is
requesting
the
first
part
of
the
change
was
the
kubernetes
has
an
option
to
do
this,
like
I
don't
know
a
feature
gate
to
basically
bundle
all
the
certificates
it
has
in
a
similar
fashion.
A
I
think
we
can
get
back
to
this
eventually
and
I
would
like
to
get
nadir
to
also
discuss
it,
but
thus
far
this
has
been
the
only
the
only
request
for
that.
So
if
we
only
have
a
single
request,
I
don't
think
we
should
be
doing
that.
Then
you
can
already
sign
certificates
externally.
So
it's
still
a.
I
can
potentially
move
it
out
of
the
box
so
close
to
feature.
Freeze.
A
This
is
the
topic
of
q
proxy.
I
mean
I
started,
leaning
towards
a
minus
one
for
this
one
because
it
be
like
for
brito.
It
requires
that
we
start
maintaining
a
static
pot
for
q
proxy
on
all
the
nodes
which
it's
like
a
tight
bundling
with
the
add-on.
It's
not
it's
the
logo
adam.
It's
like
it,
becomes
a
core
component
pretty
much
so.
B
That
might
be
a
little
bit
challenging
because
some,
like
some,
you
or
security
oriented
users,
do
not
allow
static
bots
on
the
worker
nodes.
So
that
might
be
another
thing.
We
need
to
think
about.
A
Exactly
and
one
of
of
those
security
tools
out
there,
security
checking
tools
actually
complain
that
cuba
dm
does
not
disable
the
static
ports
on
worker
nodes.
So
I
assume
a
lot
of
users
just
disable
them.
B
Yeah-
and
I
and
I
guess
like
at
the
end
of
the
day,
we
can
always
say
that,
like
we,
we
can
defer
that
to
any
other
ass
solution
that
manages
add-ons.
A
Yeah,
I
think
jason
and
andy
had
a
solution
with
a
couple
of
demon
sets
to
try
to
comply
with
the
upgrade
requirement,
the
upgrade
requirements
for
proxy.
They
mentioned
that
it's
doable
with
a
couple
of
demonstrations,
but
there
was
a
particular
problem
with
with
the
way
we
do
rolling
upgrades
of
demonstrations.
I
don't
know,
but
I
think
we
shouldn't
do
this.
It's
just
it's
complicated,
but
we
could
get
back
to
this
topic
eventually.
A
Five
minutes:
this
is
something
that
sig
instrumentation
are
doing.
Api
types,
sorry,
api
fields
that
have
sensitive
data.
Basically,
the
policy
of
the
kubernetes
project
as
a
whole
is
to
nowadays
add
a
specific
tags
to
them,
like
you
know,
json
tags
to
the
fields
so
that
tools
like
kwak
can
potentially
skip
them.
A
Example,
basically,
it's
a
data
policy
pack.
Originally
this
pr
did
it
for
the
api
as
well,
but
I
said
I
don't
think
we
want
this
for
the
api.
We
should
just
wait
for
the
next
version.
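The tagging convention described here can be sketched as follows. The struct below is illustrative, not the actual kubeadm type, and the scanning helper stands in for whatever tooling consumes the tags.

```go
package main

// Sketch of the tagging convention described above: sensitive fields carry a
// "datapolicy" struct tag so tooling (e.g. log sanitizers) can recognize and
// redact them. The struct here is illustrative, not the actual kubeadm type.

import (
	"fmt"
	"reflect"
)

type BootstrapToken struct {
	ID     string
	Secret string `datapolicy:"token"`
}

// sensitiveFields returns the names of struct fields carrying a datapolicy tag.
func sensitiveFields(v interface{}) []string {
	t := reflect.TypeOf(v)
	var out []string
	for i := 0; i < t.NumField(); i++ {
		if _, ok := t.Field(i).Tag.Lookup("datapolicy"); ok {
			out = append(out, t.Field(i).Name)
		}
	}
	return out
}

func main() {
	fmt.Println(sensitiveFields(BootstrapToken{})) // [Secret]
}
```

Because the tag lives on the type, any tool that walks the struct via reflection can find the sensitive fields without a separate allowlist.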
A
Rework kubeadm config downloads and dynamic defaults: this is a complicated refactor, by the way. Fabrizio, can you please take a look at this PR? Actually, it already merged, okay. kubeadm is doing dynamic defaulting all over the place: every time you try to default, you know, a ClusterConfiguration object, it applies dynamic defaults.
A
It
is
doing
some
things
with
the
local
host
like
fetching
the
local
interfaces
and
things
like
that,
and
I
basically
said
the
pr
list
recently
to
stop
doing
that
at
least
during
unit
tests,
but
we
also
can
stop
doing
that
to
some
commands
that
don't
need
it,
but
yeah
these
items
are
still
like
pending
pretty
much
there's
some.
You
know
permissions.
A
Okay, two minutes. This is just a tracking issue for something that SIG Auth has to do. It's part of the whole "master" rename we are doing in the Kubernetes project. Basically, the main superuser group in Kubernetes is currently called system:masters, and if you have to rename that, you have to think about how you're going to maybe apply another group that also has superpowers. It's going to be very tricky, because currently you can only have one group that has superpowers.
A
This is part of the roadmap; there's a PR to fix this one. Basically, kubeadm upgrades have a bit of a problem: the user can specify a version, but we say that the version is, sorry, it was the preferred or available version, but actually this is not correct; it should be the version that the user actually requested. So there's a PR that should be merging today. If you folks have comments, you can check this one. It's basically changing words: instead of "available", we are now saying, okay, this is the target version you want to upgrade to. It's just this change.