From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-10-27
A: This is the agenda for today. So, from my side: we migrated the kubeadm... where we had a hidden webhook. Apparently some of the old administrators of this repository, I guess around 2017, had this webhook accidentally installed by an application. There are some details in my comment, but basically it's a good idea to always check the credibility of a certain GitHub app before authorizing it, because it can also do some damage to repositories. But yeah, this is pretty much done.
A: We removed this webhook and now the tests are also running. I checked; everything seems fine. If something fails, or if we have some other outdated links, we can update them, but yeah, that's pretty much done.
A: I noticed that for 1.23, code freeze is November 16th and test freeze is November 23rd. We are pretty much on track, because we only have a couple of bigger PRs, listed below, that we should try to get in for this release.
A: So test freeze is a separate stage of the release process nowadays, because sometimes the tests are so complicated that they require time. When submitting changes for k/k, for instance, it oftentimes requires additional effort to also write the tests, and contributors are basically given one extra week to write the tests.
A: Yeah, it's strange. Like, for instance, what happens if you merge a big feature but the tests somehow get delayed and you cannot merge them? I think the stick that the maintainers have in this case is that you should just revert the feature PRs if the tests are not working. Either the feature is not working, or maybe it's so complex that you have to spend a month writing the tests or something like that, which means that the feature is not ready yet. But yeah.
A: There's also something that I wanted to do for the docs, and I can try to do it before the end of November: potentially update our reference documentation.
A: Yeah, so these commands here are automatically generated. For instance, if I go to kubeadm config, we embed these blobs of generated Cobra output, and currently the navigation here is kind of complicated and manual: you have to manually update things when a kubeadm command changes. So what I will try to do in this release is basically automate the inclusion of the generated content into the same page, but apparently it's a bit complicated, so I'm not sure if I will be able to do it. But that's on my to-do list, basically.
A: Yeah, it's not a high priority. I would just try to find time for it in this release; if not, I will try for 1.24.
A: So that's docs. The release itself is on December the 7th, and after that we have retrospectives, which discuss potential problems in the release, try to fill the gaps, and improve the release process. That's pretty much it. The next release cycle is going to start pretty much before Christmas, I think, which is kind of odd, and I'm not convinced a lot of people are going to work in December, but we shall see.
A: Okay, so the first PR I wanted to highlight today is this one, about decoupling the output API and kubeadm. We discovered there is a coupling in the output v1alpha1 BootstrapToken object, which is pretty much a wrapper around the Kubernetes bootstrap tokens so that you can print them in machine-consumable output.
A: So this was the problem: basically the BootstrapToken inside the output API embedded the v1beta2 API, which we want to eventually deprecate and remove. So this API introduced an undesired binding.
A: So what I did in the previous release: I basically extracted the bootstrap token API to no longer be inside the kubeadm API, and I moved it to a separate v1 API group. And then the change in this particular PR is to start embedding the new bootstrap token API here.
A: So now you have a coupling with a v1 API, which is stable, and you can safely remove the v1beta2 kubeadm API without having to care about the output API. Just try to decouple work-in-progress APIs from stable APIs; put stable APIs in separate packages.
A: It's a wrapper around the v1 Secret object that has a type, because Secrets have a type, which is a string field, and there are some documented types; bootstrap token is one of them. What we basically do in the kubeadm code is take the Secret and wrap it in this Go structure, and that's what our API is doing. But SIG Auth said that there's no contract: this is your kubeadm project; this is not something that is part of the API server.
A
Is
the
the
controller
manager
doesn't
know
about
this?
It's
just
your
cubadiam
structure,
but
I
I
kinda
agree.
It's
it's
on
the
kuberian
side.
At
the
same
time,
it
feels
like
it
could
have
been
used
by
external
tools
to
consume
this
utility.
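For context, the Secret conventions being wrapped here are documented in core Kubernetes: a bootstrap token lives in kube-system as a Secret named `bootstrap-token-<id>` with type `bootstrap.kubernetes.io/token` and well-known data keys. The sketch below shows that mapping; the `BootstrapToken` struct and its helper are illustrative stand-ins, not kubeadm's actual types.

```go
package main

import "fmt"

// BootstrapToken is an illustrative wrapper around the fields stored in
// a bootstrap-token Secret, mirroring the idea discussed above.
type BootstrapToken struct {
	ID     string // the public token ID, e.g. "abcdef"
	Secret string // the private part, e.g. "0123456789abcdef"
	Usages []string
}

// ToSecretFields flattens the wrapper back into the documented Secret
// conventions: the name, the Secret type string, and the data keys.
func (t BootstrapToken) ToSecretFields() (name, secretType string, data map[string]string) {
	name = "bootstrap-token-" + t.ID
	secretType = "bootstrap.kubernetes.io/token"
	data = map[string]string{
		"token-id":     t.ID,
		"token-secret": t.Secret,
	}
	for _, u := range t.Usages {
		// usage flags become keys like "usage-bootstrap-authentication": "true"
		data["usage-bootstrap-"+u] = "true"
	}
	return name, secretType, data
}

func main() {
	tok := BootstrapToken{
		ID:     "abcdef",
		Secret: "0123456789abcdef",
		Usages: []string{"authentication", "signing"},
	}
	name, typ, _ := tok.ToSecretFields()
	fmt.Println(name, typ)
}
```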
A
But
yeah
it
is
what
it
is.
It
has
been
rejected
at
least
a
couple
of
times.
We
tried
to
convince
this
multiple
times,
but
yeah.
B: ...what prevents an attacker from using the bootstrap token to join another node, a malicious node? Or what prevents using a bootstrap token to join a control-plane node? So yeah.
A: That is true, I think. At some point I tried to research, using Google, who is actually using bootstrap tokens outside of kubeadm, and I think I found a couple of blog posts which were similar to what Kelsey Hightower has with his Kubernetes The Hard Way, but instead of certificates the tutorials were trying to bootstrap with bootstrap tokens. So I'm going to assume that there are users of bootstrap tokens outside of kubeadm.
A: After all, it is a feature that is supported in core Kubernetes. It was especially added because of kubeadm; I think Joe Beda historically was pushing for this, so that we can just have a better, easier UX around constructing clusters. But, like you say, there are security people complaining.
A: I mean, I also have arguments about, for instance, opening your GitHub account and exposing an authorization token. To be able to perform local commands with the GitHub API, you have to do it with a token; the GitHub website gives you a token. But then you can also accidentally share this token with someone, and unless you are watching the logs, someone else can perform GitHub actions for you and, you know, do administrative actions on a particular repository.
A: So it's just a UX thing. Tokens usually fall into the bucket of "this is an easy UX, but you have to be careful, because you can compromise security with this token." And yeah, I completely agree with the original design of making kubeadm easy, but also allowing the use of certificates to bootstrap nodes, which is obviously the more secure way.
A: Yeah, but yeah, I think tokens are used outside of kubeadm. There might have been a good reason to expose this, our v1 bootstrap token API, publicly in the Kubernetes APIs, but I guess the SIG Auth folks are not happy with that, and so for the time being we're going to keep it on the kubeadm side. Eventually, maybe, in the kubeadm library.
A: Yes, so this PR is pretty much cleanup: moving converter functions around, making fuzzers happy.
A: If you don't have the time... honestly, I'm just happy with somebody putting a real LGTM on that.
A: Yeah, thanks. You know, if you have the time, I will be happy to update it if you have any comments. So it's okay.
A: Even after we already moved the whole code base to v1beta3, we still have to import v1beta2 because of this convolution with the token, yeah.
A: The second PR is following the KEP I pushed forward for renaming the kubelet configuration ConfigMap and the RBAC rules to follow the new naming convention. This is the PR for it. I don't think we need any more PRs; this single PR contains the whole change. Let me open the KEP quickly to remind those who are watching the VOD what this is about.
A: It was discussed in SIG Cluster Lifecycle, sorry, before the enhancements freeze for this cycle; it was discussed a long time ago as well. Basically, the whole kubelet-config-x.y naming is not desired, because the version suffix just doesn't make sense. The kubelet has skew, and we only care about a single kubelet configuration currently; maybe in the future we can care about more, but the x.y is very confusing and it's also incorrect. Basically, the proposal explains what's happening here, and I started executing on the KEP in this PR.
A: But let me give a quick overview. So, yeah, I'll just introduce new constants alongside the old constants. We already have this practice in kubeadm, like adding TODOs everywhere so that we have tracking-issue links, so that we know what we have to change. But yeah, that's pretty much the first commit. What I wanted to talk to you about is certainly important; so, in the KEP we discussed this.
A: Always, when you're using kubeadm join, or at the beginning of kubeadm upgrade, or even kubeadm reset, kubeadm fetches the kubelet configuration. My colleague Rosti used to work on this; he refactored the whole "fetch init configuration from cluster" logic, where you construct an InitConfiguration in memory: you fetch the cluster configuration, you fetch the kubelet configuration and the kube-proxy configuration, and you construct this bit of a monstrosity of an object.
A: That is an InitConfiguration that you pass around kubeadm, which is, you know, something that we wanted to change in the future: not pass this big object with multiple structures around. But basically I realized that kubeadm upgrade is very convoluted: when you run kubeadm upgrade apply, it fetches all the things, and it becomes very difficult to determine which kubelet configuration to fetch, or how to control it more precisely.
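The "fetch everything and assemble" shape being described can be sketched as follows. The type names echo real kubeadm concepts, but the fields and the function signature here are simplified stand-ins, purely to show why a caller cannot granularly control what gets fetched:

```go
package main

import "fmt"

// Simplified stand-ins for the component configs kubeadm pulls from the
// cluster; the real types live in the kubeadm and component-config APIs.
type ClusterConfiguration struct{ KubernetesVersion string }
type KubeletConfiguration struct{ CgroupDriver string }
type KubeProxyConfiguration struct{ Mode string }

// InitConfiguration is the big aggregate object passed around kubeadm.
type InitConfiguration struct {
	Cluster   ClusterConfiguration
	Kubelet   KubeletConfiguration
	KubeProxy KubeProxyConfiguration
}

// fetchInitConfigurationFromCluster mimics the wrapper discussed above:
// it always fetches every component config and assembles the aggregate,
// so callers get all-or-nothing rather than fine-grained control.
func fetchInitConfigurationFromCluster(
	getCluster func() ClusterConfiguration,
	getKubelet func() KubeletConfiguration,
	getProxy func() KubeProxyConfiguration,
) InitConfiguration {
	return InitConfiguration{
		Cluster:   getCluster(),
		Kubelet:   getKubelet(),
		KubeProxy: getProxy(),
	}
}

func main() {
	// The getters stand in for API-server lookups of the ConfigMaps.
	cfg := fetchInitConfigurationFromCluster(
		func() ClusterConfiguration { return ClusterConfiguration{KubernetesVersion: "v1.23.0"} },
		func() KubeletConfiguration { return KubeletConfiguration{CgroupDriver: "systemd"} },
		func() KubeProxyConfiguration { return KubeProxyConfiguration{Mode: "iptables"} },
	)
	fmt.Println(cfg.Cluster.KubernetesVersion)
}
```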
A: There's a function in kubeadm upgrade that is called enforceRequirements. You probably know about this notorious function; I can show it. But basically, I have to execute on what is exactly described in the KEP: to always precisely fetch the right version, sorry, the right name of the kubelet config, whether the feature gate is on or off.
A: I think it's probably a little bit difficult to explain, but let me read this part: during the first kubeadm upgrade apply, when the feature gate goes to true by default and a preferred user value is missing in the cluster configuration, or for instance if the user has enabled it in the cluster configuration, kubeadm upgrade apply will try to fetch using the new format. But this ConfigMap will not exist yet, because there's no ConfigMap with the new name yet, which means that we potentially have to populate, like jump-start, a ConfigMap so that kubeadm upgrade can fetch it, even if it's just a copy of the old ConfigMap. So, for this particular case, kubeadm has to tolerate both the old and the new name of the kubelet ConfigMap.
A: So this is a drift from the KEP. Basically, any time we fetch from the cluster, we will try to fetch both: we try to fetch the new format, and if the new format doesn't exist, we fall back to the old format. The reason why I'm doing this is that I don't want to touch kubeadm upgrade.
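The fallback just described can be sketched like this. The ConfigMap names follow the KEP ("kubelet-config" versus the legacy "kubelet-config-X.Y"); the getter function is a stand-in for a real API-server lookup, and the function name is made up for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the API-server "not found" error.
var errNotFound = errors.New("configmap not found")

// fetchKubeletConfigMap tries the new unversioned name first; if that
// ConfigMap does not exist yet (e.g. during the first upgrade after the
// rename), it falls back to the old versioned name.
func fetchKubeletConfigMap(get func(name string) (string, error), clusterVersion string) (string, error) {
	if data, err := get("kubelet-config"); err == nil {
		return data, nil
	} else if !errors.Is(err, errNotFound) {
		return "", err // real errors must not be swallowed by the fallback
	}
	return get("kubelet-config-" + clusterVersion)
}

func main() {
	// Simulate a cluster that still only has the old versioned ConfigMap.
	store := map[string]string{"kubelet-config-1.22": "old-format-data"}
	get := func(name string) (string, error) {
		if d, ok := store[name]; ok {
			return d, nil
		}
		return "", errNotFound
	}
	data, _ := fetchKubeletConfigMap(get, "1.22")
	fmt.Println(data)
}
```

Note that, as the speaker says next, the old name needs the cluster's version suffix, which is itself stored in the ClusterConfiguration; that is why the fallback cannot be done without an extra fetch.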
A: I can do it, but it's going to be a much bigger PR. I wanted to show why exactly this is happening.
A: But basically the reason is this function. I can show it; this has been pending a refactor for a very long time, and nobody has had the time to do it. We call it right away in kubeadm upgrade apply; we call this function, and it loads the config either from the cluster or locally.
A
The
client
is
also
constructed
now.
What
I
can
do
here
instead
of
this
fetch
of
both
old
and
new.
At
the
same
time,
what
I
can
do
is
extract
the
client
here
above
enforce
requirements
and
using
this
client
I
can
copy
the
old
config
map
and
create
a
new
config
map.
A: But there's also a problem here, because at that point we don't know the version yet: the Kubernetes version of the cluster is inside the cluster configuration. So we actually have to also fetch the cluster configuration, and it's a mess. I can do it, but I guess it also surfaces a problem with our logic of fetching. Actually, this is another function completely.
A: Yes, from my tests, the problem is only during upgrade, and it makes sense, because the kubelet config doesn't exist yet under the new name. I mean, a problem here, I think, is that this whole thing is a very problematic function. It's "fetch init configuration from cluster", and inside this function we fetch a bunch of stuff.
A: Ideally, it should have been possible to more granularly extract only what we need, but the way Rosti, and, you know, historically it's an evolution, not Rosti only, but the way we designed it is that this function is like a wrapper of "fetch me all the stuff from the cluster": it constructs this InitConfiguration and defaults it.
A: So at this point I'm debating whether this is the better approach, or the new version. I was trying to think of ways in which this is problematic, whether doing this is a bad idea, and I honestly cannot think of any, unless the user for some reason decided to populate a ConfigMap that is called kubelet-config, and the name is already occupied. But I'm hoping that if they see this release now, they should adapt, because we've pretty much taken over the name.
A: Yeah, so that's the only problem area, I think, in this PR. The rest is unit tests, you know, constants, flags...
A: If, for a certain reason, you decide... if you think that this is a bad idea, or if you find something conflicting, I can try to refactor the upgrade code, which is a mess. But I can do it in a separate commit.
A: Yeah, it becomes a problem when it's beta and the feature flag is enabled; it's enabled by default inside the binary.
A: Yeah, I already tried what you're saying. I tried it, but it's still a problem, because the function we use to determine if the feature gate is enabled first checks the value in the cluster configuration; if it's missing, it falls back to the default inside the binary, which is hard-coded to true. Eventually, sorry, yeah, eventually when you go GA it's hard-coded to true.
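The lookup just described can be sketched like this; a minimal illustration, not kubeadm's actual feature-gate code, and the function name is made up (the gate under discussion appears to be UnversionedKubeletConfigMap). The point is that an older cluster configuration that never mentions the gate still reports "enabled" under a newer binary:

```go
package main

import "fmt"

// featureGateEnabled mirrors the behavior described above: an explicit
// value in the ClusterConfiguration wins; if the gate is absent, we fall
// back to the default compiled into the binary (hard-coded to true once
// the gate is on by default).
func featureGateEnabled(clusterGates map[string]bool, name string, binaryDefault bool) bool {
	if v, ok := clusterGates[name]; ok {
		return v
	}
	return binaryDefault
}

func main() {
	// A cluster config written by an older kubeadm has no entry for the
	// gate, so the new binary's default of true takes effect, and upgrade
	// goes looking for the new ConfigMap name that does not exist yet.
	oldClusterConfig := map[string]bool{}
	fmt.Println(featureGateEnabled(oldClusterConfig, "UnversionedKubeletConfigMap", true))
}
```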
A: It would find that this value is true, it will fail during fetching of the new one, and then you have to fall back to the... okay, I see what you're saying. Okay.
A: Yeah, if you do an upgrade, this will fail, yeah, and that's why I wanted to touch upgrade, but I realized it's... yeah. But if you find any better solutions or want to improve this, I'm happy to do it, no problem.
A: Yeah, this is this PR. I really don't have anything else for today; we are 40 minutes in. Do you have anything you want to chat about?