From YouTube: kubeadm office hours 2020-08-02
A: All right. This is basically... we had a bug, an issue around this: when the user passes a version explicitly to upgrade plan, kubeadm can make some wrong assumptions. Thankfully, you know, users usually don't pass a version to plan, because we don't even have this in the upgrade guide; they don't pass the argument.
A: It basically tries to configure components like kube-scheduler and the controller manager to point to the load balancer, to the control plane endpoint, instead of the local API server, which creates issues around upgrades, immutable upgrades.
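For illustration, a minimal Go sketch of the difference being discussed (the endpoints and file paths below are placeholders, not kubeadm's actual values): a scheduler or controller-manager kubeconfig is just a client-go config whose server field can point either at the load balancer or at the API server on the same node.

```go
package sketch

import (
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// kubeConfigFor builds a minimal kubeconfig whose cluster points at the given
// server URL. Passing the load balancer address reproduces the old behavior;
// passing https://127.0.0.1:6443 is the "local API server" variant.
func kubeConfigFor(server string) *clientcmdapi.Config {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kubernetes"] = &clientcmdapi.Cluster{
		Server:               server, // e.g. "https://lb.example.com:6443" vs "https://127.0.0.1:6443"
		CertificateAuthority: "/etc/kubernetes/pki/ca.crt", // placeholder path
	}
	cfg.AuthInfos["system:kube-scheduler"] = &clientcmdapi.AuthInfo{
		ClientCertificate: "/etc/kubernetes/pki/scheduler.crt", // placeholder paths
		ClientKey:         "/etc/kubernetes/pki/scheduler.key",
	}
	cfg.Contexts["default"] = &clientcmdapi.Context{
		Cluster:  "kubernetes",
		AuthInfo: "system:kube-scheduler",
	}
	cfg.CurrentContext = "default"
	return cfg
}
```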
A: This is the ticket; it has two parts. The first part is to make these control plane components point to the local API server. The PR, I think, is merging; I am going to set the backport after this meeting, to 1.18.
A: And my understanding there is that it's not falling back, because it's following the skew between the API server and the particular component, yeah. I'm surprised that you have this for the scheduler, because...
B: But probably before there was not such a policy formalized, and so basically people were over-conservative. Now there is a policy, so in case someone complains, basically, there is a justification.
A: Yeah, that could be the case here. This policy, by the way, is fairly new; Jordan formulated it maybe last year, the beginning of last year, and I don't know where it came from. In fact, we had some discussion around who is the owner of all these policies, and we agreed that it's SIG Architecture, because it covers everything, I think.
A: This failure, you know, if you leave this failure without fixing it, I don't think it's going to happen every time, because the issue here was that the 1.19 KCM took the lease, and I can't say why it did that. That's the mysterious part.
D: So, at least, a leadership change might happen during leader election, like during an upgrade, if the leader changes and you can't grab the lock. I assume that once it settles down, another KCM could grab it before the original one is able to.
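For context on what "taking the lease" means here, this is roughly how controller-manager style leader election looks with client-go (a minimal sketch; the timings are illustrative and the callback bodies are placeholders): whichever instance acquires and keeps renewing the Lease leads, so during an upgrade a 1.19 KCM can legitimately end up holding the lock while the 1.18 KCM sits idle.

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease blocks trying to acquire the controller-manager lease and keeps
// renewing it once acquired; losing the renewal race is how leadership moves.
func runWithLease(client kubernetes.Interface, id string, onLead func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "kube-controller-manager", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // illustrative timings, not the components' exact flags
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: onLead,
			OnStoppedLeading: func() { /* stop controllers, typically exit */ },
		},
	})
}
```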
A: Yes, it's still possible, so the fix is quite viable. Tim also said, correctly, that we should be able to work around this on the kubeadm side, to, you know, re-elect if we want to go back to the 1.18 KCM, but that's not a good hack. This is, you know, it's not good.
A: It is a breaking change for all those users that assumed a fallback to an API server somewhere in the HA control plane if the local API server fails for some reason. Because if one of the components is trying to make a request and the local API server fails temporarily, there's no other API server to send the request to; it's only the local API server in this case.
B: Sensitive to the possible skew that can happen during the upgrade. So basically, we have one control plane node which is self-consistent and does not have a skew, and when we create a new one, this new one is self-consistent. So we have two nodes, each of which is consistent, and we are not creating a mixed situation.
D: So, is there a... we could also legitimately get a complaint around that: we're basically deprecating or removing a behavior without conforming to the deprecation policy. So can't we say that we can have a flag that, you know, either enables the old behavior or gates the new one?
A: That's not a bad idea. I guess I'm trying to use this meeting to evaluate how breaking it's going to be; like, do we even have users that care about this scenario? Because it can be a temporary fallback if the local API server is not online.
A: It goes to another API server.
D: So I guess the issue is not during the upgrade but during day-to-day operation, meaning that whenever an API server breaks, users might see cascading failures instead of just the one API server that has failed.
A: So the old topology, which you know from this diagram, is not correct with respect to that, because technically these components point to the LB, but...
B: I'm kind of debating whether to consider this a documented behavior, because to me this is really kind of an internal detail of kubeadm.
B: And behind this I see a good reason for changing, because, yeah, we are trading a little bit of something that can possibly happen during day-to-day operation for something else, some other problem. And basically, after we created kubeadm, the version skew policy was created, and now we see that in Kubernetes they are making changes according to the version skew, so we are just making kubeadm compatible with the version skew, more resilient towards the new versions.
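A rough sketch of the rule being referenced (an assumed helper, not actual kubeadm code): under the upstream version skew policy, kube-controller-manager and kube-scheduler must not be newer than the kube-apiserver they talk to, which is the constraint that pointing components at the local API server preserves.

```go
package sketch

import (
	"k8s.io/apimachinery/pkg/util/version"
)

// skewOK returns true if the component version does not exceed the API server
// version, e.g. skewOK("1.19.0", "1.18.8") is true, skewOK("1.18.8", "1.19.0") is false.
func skewOK(apiServer, component string) (bool, error) {
	as, err := version.ParseSemantic(apiServer)
	if err != nil {
		return false, err
	}
	c, err := version.ParseSemantic(component)
	if err != nil {
		return false, err
	}
	// the component may be older than or equal to the API server, never newer
	return !as.LessThan(c), nil
}
```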
B: I'm not sure if this will be considered a behavior and, to be honest, maintaining a flag is a burden, and I don't know if it should be the case. I agree with Lubomir and how we implemented the PR, and yeah, in case someone has a problem there is an escape hatch, which is the patches, and if people do complain we can create the flag and manage this.
C: I too don't consider this a behavior, simply because, at least to my knowledge, we haven't documented anywhere in the documentation whether the controller manager or the scheduler actually connects to the local API server or to the load balancer icon on this graph. There is nothing that says that these two components connect to the API server either locally or through the load balancer. It only mentions...
B: ...the worker nodes, yeah. Also, in the past, every time we changed some flag in the static pod manifests, that was not considered like a breaking change.
A: You mean this is going to be in the release notes, or do you mean "action required"? Yes, so, nobody actually requires this... when do they really have to take an action? In this case, upgrade is going to renew the config files and the certificates in them, and of course also update where the API server in the kubeconfig file is pointing, where it is located, but...
A: Upgrade notes; so we have the kubeadm upgrade guides. I don't think we have upgrade notes in the release notes.
A: I mean, I don't think they have to take any action. And also, for the kubeconfig files we don't have patches, so they would have to patch manually.
A: But yeah, okay, I think we can risk this change. I don't think there's much of a potential that people are going to complain about this. It's just the question... the resilience sentence, I think, clarifies this.
B: Okay, I agree with this. I think that it was discussed, and also with Tim, and also in the chat I saw some plus-ones to these changes, so in my opinion the reactions to this change are good, so we should not expect...
A: I'm going to set the backport for this later. So the second topic is the kubelet kubeconfig file. This is part of the same discussion where we found the previous problems.
A: I also mentioned that the kubelet on control plane nodes is pointing to the load balancer instead of the local API server. In fact, at VMware we had people, you know, speaking with us about this particular topic as well.
A: So we cannot change this easily, because if we just modify the API server location in the kubeconfig files, the node is not going to be able to bootstrap, and I had some code examples for that.
A: So this is where we basically tell the kubelet to use the bootstrap config we feed to it, contact the API server, and then get signed certificates that it should use to join the cluster properly, and we have this operation here. But the problem is that we block, we block for the kubelet to complete this operation: "wait for TLS bootstrapped client", which is, you know, at this point we have the bootstrap client ready. If we fail at that, we throw an error, and the control plane certificates on control...
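The blocking step being described is roughly this shape (a minimal sketch; the path and timeout are placeholders, not the exact kubeadm implementation): after handing the kubelet the bootstrap kubeconfig, kubeadm polls until the kubelet has completed the TLS bootstrap and written its real kubeconfig, and fails the join if that never happens. The point made above is that this poll can only succeed if the kubelet can actually reach an API server, which today means going through the load balancer.

```go
package sketch

import (
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForTLSBootstrappedClient blocks until the kubelet has exchanged its
// bootstrap credentials for signed certificates and written kubelet.conf.
func waitForTLSBootstrappedClient(kubeletKubeConfig string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(kubeletKubeConfig); err != nil {
			return false, nil // not there yet, keep polling until the timeout
		}
		return true, nil
	})
}
```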
D: Looking here, if we do this, would it change things for the SANs that we're using? Because instead, we were adding the load balancer; now we will likely need to add localhost and the host name, maybe, or something like that.
A: So basically we have to swap this phase for this phase, and if we go for this change we have to do some experimentation. Of course, the biggest question is: what do we do with phase rearrangement? We still don't have a policy for what we do if we change the order of phases, remove phases, or move phases around.
A: For instance, Cluster API, to my knowledge (correct me if I'm wrong), has the order broken down into its own scripts.
D: So you end up in a situation where the SANs were containing an IP, but they no longer contain the actual one. So I guess, yeah, this one is beneficial, but we'll likely need to put the word out there that we're planning to do so.
A: Hopefully, users are going to be aware of this problem before that, by looking at the CLI, by looking at the release notes, but that's how it is: reordering phases is a breaking change.
A: So, what was the etcd question? So, going into granular details: if you add a new etcd member to the cluster and then, okay, they...
B: Contact, yeah, but basically, to do the TLS bootstrap we need a scheduler, a working scheduler, and the new API server will not be...
A: Oh, it's fine, it's fine, because... so first we have to create that, you know. If we look at these phases (I mean, the phases are not enumerated here for all the manifests), if you create the etcd manifest first, the kubelet is going to start this new instance of etcd. Then you can start the API server and tell it...
B: They basically wait for a controller manager that runs on another machine. So there is a skew there, but this will work, yeah.
B: Yeah, but in this case there will be this skew, and so we have to check, basically, if we are compliant. So in this case we will have a kubelet which is talking with a corresponding version of the API server, and this is fine. So we are fixing the problem between the kubelet and the API server that can happen today, but then there will be, there could be a...
A: Yes, I kind of agree that we should do a feature gate for this change: it's alpha, then beta, eventually GA. Actually, when it's beta we have to flip it on by default, so it's...
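A minimal sketch of what gating this would look like (the gate name "LocalKubeletBootstrap" is purely hypothetical, not an existing kubeadm feature gate): while the gate is alpha it defaults to off, and flipping the default is what the beta promotion would do.

```go
package sketch

// localBootstrapEnabled mirrors how a map[string]bool of feature gates, like the
// one kubeadm's ClusterConfiguration carries, would gate the new behavior.
func localBootstrapEnabled(featureGates map[string]bool, betaOrLater bool) bool {
	if enabled, ok := featureGates["LocalKubeletBootstrap"]; ok { // hypothetical gate name
		return enabled // an explicit user choice always wins
	}
	return betaOrLater // alpha: default off; beta and later: default on
}
```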
B: Yeah, and in this case I think that we have to find a name for this new bootstrap process. Let's call it "local bootstrap" or something.
B: Good, yeah. You know, it potentially also has many advantages, because basically you are bootstrapping without being dependent on the load balancer, on the network. So I see it as an advantage; the idea of local bootstrap might be really interesting.
A: Right, so after the experimentation I'm going to update the group again about what I got in terms of results, and we can then proceed to decide...
B: No... oh, also, yeah, if it's not feasible. And the second question that I have is for Justin, about the IPAM story. So could you kindly explain to me what the problem is?
D: That you see... so, say you're using DHCP for an HAProxy VM, that is, there is a load balancer for your control plane. If you lose your DHCP lease, your control plane endpoint might change. You might be able to regenerate the kubeconfigs, but then you'll have to manually edit, or issue patches to, the kubelet.conf to either update the IP or do something like pointing to the local API server.
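As a rough sketch of the manual patching being described (the file path and endpoint are placeholders): rewriting the server URL inside an existing kubelet.conf after the control plane endpoint changes is mechanically simple, but it has to be repeated on every node, which is the pain point raised here.

```go
package sketch

import (
	"k8s.io/client-go/tools/clientcmd"
)

// repointKubeConfig rewrites every cluster entry in a kubeconfig file, e.g.
// /etc/kubernetes/kubelet.conf, to a new API server URL such as
// https://127.0.0.1:6443 or a refreshed load balancer address.
func repointKubeConfig(path, newServer string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	for _, cluster := range cfg.Clusters {
		cluster.Server = newServer
	}
	return clientcmd.WriteToFile(*cfg, path)
}
```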
B: Yeah, I got your point, but I think that this is kind of against what we suggest to the user. We suggest to the user that the control plane endpoint should be a static IP address, something that you don't release, or a DNS name, so something that you can reconfigure without touching all the machines.
B: Basically, it is a pain in the ass; there are issues in the kubeadm repository about that (sorry for the language). If you look at the kubeadm repository, there are very, very long discussions about how to do this: you have to touch all the configs, you have to touch the cluster-info ConfigMap.
B: It's a complicated procedure. There are some users that managed to do it.
A: And now I've heard that it can be catastrophic, depending on how the components react to each other. So this is, I guess, the best reason to change that.
A: All right, I have a couple of... I've got to update it with some comments after that. So, we have our CoreDNS upgrade panic; I think this is the second one we have had in the past few releases.
A: Sorry. Basically, the problem is that, for some reason, the CoreDNS pods that we'd upgrade can be pending, so they are not running yet, and they may not have the containers enumerated under the pod object yet. We don't know why, but a couple of users already hit this, and basically the fix that we proposed is to wait.
A: Quite frankly, I don't even know at this point why we are so strict about the versions. Rusty, what do you think about the whole verification we have with SHAs in the containers, and did you see this PR?
C: Yeah, I saw it. I think that this library that the CoreDNS guys are vendoring in should actually be using the actual version from the tag, but we should be careful to extract the actual semver version instead of just some tag prefix or suffix or something like that.
C: The thing is that they are currently using the SHAs, which is kind of weird, because if somebody actually uses, say, a normal CoreDNS version but just adds an additional tool to that image, the SHA is going to be different, and this piece of code is not going to migrate its config, because it won't be able to find it in the predefined list of SHAs. So, yeah.
C: I think that using SHAs here is a little bit strange, and this is probably the root cause of the problem. If we actually migrate to using the actual tags, we'll be able to just use the image field from the container spec rather than the pod status.
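A minimal sketch of the alternative being suggested (not the actual migration library): read the CoreDNS version from the image tag in the container spec, stripping any digest, instead of matching the image against a pinned list of SHAs, so customized rebuilds of the same version still migrate.

```go
package sketch

import (
	"fmt"
	"strings"
)

// coreDNSVersionFromImage extracts the tag from an image reference such as
// "k8s.gcr.io/coredns:1.7.0" or "registry.example.com/coredns:v1.7.0@sha256:...".
func coreDNSVersionFromImage(image string) (string, error) {
	if at := strings.Index(image, "@"); at != -1 {
		image = image[:at] // drop a trailing digest; it differs for rebuilt images
	}
	slash := strings.LastIndex(image, "/")
	colon := strings.LastIndex(image, ":")
	if colon <= slash {
		return "", fmt.Errorf("image %q has no tag", image)
	}
	return strings.TrimPrefix(image[colon+1:], "v"), nil
}
```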
A: Yeah, I think this is a very strict validation. My only assumption here is that they really don't want to support bug reports from users if they see a message about an image that was customized.
A: I think this is the only argument here; otherwise this is way too strict, I don't know. But for this PR I guess we should keep it as is. Quite honestly, instead of refactoring this migration code, I think we should just go with the CoreDNS operator, like kops is doing, and then think about how we can do upgrades and so on.
C: Yeah, the thing here is that even if we put in a tiny little wait, it should not be that big; normally you should not have the pods in a pending state. You may find them in a creating state for some time, but pending usually means that you actually have some problem. The concrete user, actually... like the issue, if you find the issue, I think I asked the user...
C: ...why were his pods in a pending state, and he mentioned that he doesn't have a kubelet to schedule the pods on, which means that we are going to be in for a pretty long wait if we actually try to wait there; probably just fail the operation altogether.
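A minimal sketch of the short, guarded wait being weighed here (the label selector, namespace and timeout are assumptions): poll briefly for the CoreDNS pods to leave Pending and to report container statuses, and give up with an error instead of indexing into an empty container list.

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCoreDNSPods returns nil once every CoreDNS pod is past Pending and has
// container statuses populated; otherwise it fails after the (short) timeout.
func waitForCoreDNSPods(client kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			return false, nil // transient API error, retry until the timeout
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodPending || len(p.Status.ContainerStatuses) == 0 {
				return false, nil
			}
		}
		return true, nil
	})
}
```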
A: So you're recommending that we shouldn't, you know, increase this timeout to something ridiculous, but instead...
A: I would assume that we should; it's a good idea to remove this warning. I'm going to send the PR for that. My estimate is that it's not going to be that small of a change.
C: I think that for containerd there was a special case, because in recent Docker versions containerd is actually bundled with Docker, and the detection code is going to be able to find containerd and it's going to default to Docker, because it's the wrapping product of containerd in that case. But yeah, even if you find just containerd, it will also probe for Docker and use that one if it's there.
A: But I guess the idea is that if we find and start using the containerd socket with the kubelet, we should not be calling docker info.
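A rough sketch of the guard being proposed (the socket paths are assumptions, not the exact detection code): once containerd is selected as the CRI socket, skip the docker info probe entirely, even if Docker happens to be installed and bundles containerd.

```go
package sketch

import "os"

const (
	containerdSocket = "/run/containerd/containerd.sock" // assumed default paths
	dockershimSocket = "/var/run/dockershim.sock"
)

// shouldProbeDocker decides whether calling "docker info" makes sense at all:
// with containerd chosen as the CRI socket the answer is no.
func shouldProbeDocker(criSocket string) bool {
	if criSocket == containerdSocket {
		return false
	}
	_, err := os.Stat(dockershimSocket)
	return err == nil
}
```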
A: All right, I'm going to have a look at this, maybe next week; it's not a critical problem, it's just a warning, not a fatal error. All right, we have six minutes. Does anybody else have any more topics for today?
C: I have a quick question: should we do a cherry-pick for the PR that fixes the etcd manifest regeneration?
A: So I think users can execute the init phase for that, and it's going to give them the etcd pod manifest that they want. I guess if they pass the config with the version that they want, it's going to work, I think. So that's a workaround; it's not ideal, but I think it's a workaround, so we should probably have a backport because of that.
A: Okay. Any other topics?
A: I try to keep the number of tickets below 90, and we are above that. We should try to keep the numbers low. Hopefully I'm going to have more time to fix some low-hanging-fruit problems around kubeadm this cycle. We also have a bunch of deprecations.
A: I think we might remove the dynamic kubelet config in 1.20; it's at least scheduled for that. I should check the deprecation schedule.
A: That's a very good topic for the planning meeting. In fact, I think we should...
A: Let's... anything else? But let's leave room for the questions. Thank you, everybody, for joining; see you again in a couple of weeks. Bye.