From YouTube: kubeadm office hours 2019-11-27
A
Don't see anybody, so let's proceed to the PSAs. This is a quick announcement about the timeline of the 1.17.0 release. The cherry-pick deadline is on the 2nd of December, so that's next Monday, but we don't have anything critical. If we find something, I guess we'll have to work on it quickly during the weekend.
B
Yeah, I think the proposed idea of a reiteration later on is a good one. We can probably do a meeting on the 11th of December, so we can have Fabrizio joining us; we can do an overview of everything, and then we can focus on the smaller things later in the cycle, probably just after the winter holidays.
A
There are a number of items here that I'm planning: some problems with build semantics, tagging, synchronizing branches, problems with building the artifacts, everything. So, as you may know, we promised 1.18 to be the cycle when we move kubeadm out. So I think this is the highest priority in our planning right now; it remains the highest priority for us if we want to keep our promise.
A
Right, so this is a topic, Rafael. By the way, I moved this from the future topics to today's agenda, so we can just discuss it right away, I guess. This is the ticket right here, and also before that, I believe in a random PR somewhere, we started discussing the deployment of two replicas for CoreDNS.
A
In kubeadm this is problematic, and I wanted to create a little diagram for today, but I didn't have the time because I'm on PTO, technically. So basically the problem right now is the following: if you have a single control-plane node without workers, you can deploy, and CoreDNS is going to have two pods on this particular control-plane node, and you don't care; if the node dies you don't have a cluster anyway, because it's a single node. Now, if you have a single control plane with multiple workers...
A
Unless you delete them, you know, then it's going to happen. So something we have been discussing with Rafael and Jason is to potentially make CoreDNS a DaemonSet that targets control-plane nodes. This is an implementation detail for most users, most non-power users, because they don't care; they only need the DNS server to work. But if we do this change, we are going to break all those power users that are potentially patching the CoreDNS Deployment object.
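For readers following along, a minimal sketch of the DaemonSet idea being floated here, assuming the usual kube-system conventions; this manifest is illustrative, not kubeadm's actual output, and the image tag and the control-plane selector are assumptions:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: coredns
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          # Pin the pods to control-plane nodes, as discussed above.
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: coredns
            image: k8s.gcr.io/coredns:1.6.5  # illustrative tag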
C
Yeah, and not only that: there is also the DNS horizontal autoscaler that some people might use, and it relies on CoreDNS being a Deployment. This autoscaler is going to change the replicas based on different metrics, so a DaemonSet is going to break that use case.
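The autoscaler in question is the cluster-proportional-autoscaler (named later in the discussion); it patches the replica count of a named Deployment, which is why a DaemonSet breaks it. A rough sketch of its linear-mode parameters, with illustrative values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dns-autoscaler
      namespace: kube-system
    data:
      # The autoscaler scales its target (a Deployment such as
      # "Deployment/coredns") as the cluster grows; it cannot
      # scale a DaemonSet.
      linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":2,"preventSinglePointFailure":true}'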
C
So one thing that I propose here is that we keep the Deployment with two replicas, so we keep the door open for the people that want to use the autoscaler. And when we join, we have a new phase that is experimental; we can remove it in the future. I don't like this design, saying that out loud, but anyway. This phase will basically check whether all the CoreDNS pods are running on the same node, and if that's the case it will just delete one pod or, I don't know, half of them.
A
But this is going to, like I explained in the ticket, break not only the Kubernetes end-to-end test suite but also kinder, because of the logic there with the, what was the name of it, preferredDuringSchedulingIgnoredDuringExecution. This is the rule we want to use, but the problem with this rule is that I think the pod, the second replica, is still going to be pending.
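The rule being named is pod anti-affinity; a minimal sketch for CoreDNS's usual k8s-app: kube-dns label. The "preferred" form lets both replicas land on one node, while the "required" form spreads them but leaves the second replica Pending on a single-node cluster, which is the trade-off under discussion:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                k8s-app: kube-dns
            topologyKey: kubernetes.io/hostname
        # Swapping in requiredDuringSchedulingIgnoredDuringExecution instead
        # would keep the second replica Pending until a second node exists.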
C
Yeah, we can just kubectl delete it ourselves, I guess.
D
Fine, but if we use "required", what happens? I created the first node, and I am right, because one replica remained pending yesterday, but it is fine, because I'm not compromising the, let me say, real availability of the cluster, because I have only one control-plane node. As soon as the second control-plane node joins, the replicas get scheduled and are balanced, yes.
A
Yes, but I'm talking about the beginning: when you run the suite at the beginning, it expects all the pods to be running. "Can't you make the suite wait for all pending pods?" Yes, you can, I take it; you can customize it. But then we are entering this space where we have to customize the suite because of this pending pod. I don't want us to do this.
D
Okay, then I create the second one, and if I run the e2e tests now, yes, it's going to fail because of the pods that are pending. But in the kubeadm end-to-end tests we are joining a second control plane and a third control plane before running the e2e suite, and that means that before running the e2e I have all the CoreDNS pods scheduled.
C
Yeah, and I think we can expect many people to be doing exactly this: creating a single control-plane cluster, waiting for all the pods to be ready, and then running whatever they want. So I guess we are going to get some issues with that, like people reporting that they get only one out of two replicas.
F
He's gone? Sorry, I was saying that if it's not getting rescheduled, it's kind of expected, since there are parameters like the node monitor grace period and the pod eviction timeout. So I would expect the pods should not be rescheduled unless the time that we allowed has passed.
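The knobs being referred to are the controller manager's node monitor grace period (40s by default) and the pod eviction timeout, which with taint-based evictions is expressed on each pod as a toleration with a 300s default. A sketch of what that looks like on a pod spec:

    tolerations:
    # Injected into pods by default admission; the pod is only evicted from a
    # NotReady or unreachable node after tolerationSeconds have elapsed.
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300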
A
So there is also a way to reschedule them; I forget the exact mechanic, but if the node has the not-ready label... sorry, annotation, you can automatically reschedule the pods. I think this also works. But anyway, we are trying to work around the fact that we should, at this point, be deploying CoreDNS as a DaemonSet, in my opinion. I don't see why we are not deploying CoreDNS as a DaemonSet.
F
It would break the cluster-proportional-autoscaler. If we don't have users for that, I would be surprised; I guess some people might be using it. Plus, and I don't know if we can, but given the deprecation policy, I think that this counts as a behavior, so we would need to go through the deprecation cycle, at least for this.
C
Yeah, I agree. I mean, this was just a comment: there is this project that actually makes sense, to autoscale CoreDNS depending on different metrics, so maybe some people are using it. I don't know; I don't know anybody that is using it. But this was just a comment, right, that if we move this to a DaemonSet, we are breaking that use case, and they would have to redeploy CoreDNS as a Deployment on their own.
F
But, like, I would check with SIG Scalability. I know that they're doing some stuff in this space, so at least we can get an idea of their supported SLOs and the number of DNS, or at least control-plane, pods that they were using. Because, yeah, it seems like users might want that: you have a limited number of control-plane nodes, but you still scale the number of DNS pods if the number of queries gets too high.
E
Sorry, I think I have an answer for that. We usually run the 2,000-node and 5,000-node tests with SIG Scalability before every release. When they run this 5,000-node test using the autoscaler, the number of replicas scales up to 131. So those tests use kube-up with the CoreDNS Deployment, not specifying any replicas, but depending on the DNS horizontal autoscaler to ramp up the number of pods as required, and the maximum number seen is 131; it's 131 for 5,000 nodes.
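For reference, the scaling rule behind numbers like that is the autoscaler's linear mode. The parameters below are the common defaults, not necessarily what the 5,000-node job uses, so the arithmetic is purely illustrative:

    # replicas = max( ceil(cores / coresPerReplica),
    #                 ceil(nodes / nodesPerReplica) )
    # Example with coresPerReplica: 256 and nodesPerReplica: 16, on a
    # 100-node cluster of 4-core nodes (400 cores total):
    #   replicas = max( ceil(400/256), ceil(100/16) ) = max(2, 7) = 7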
D
My personal opinion is that we should not move to a DaemonSet today. Users are not required to patch; if they ever wanted to, they could delete a pod and let the default scheduler handle it as well. In my opinion, we have to seek a solution that keeps the Deployment and rebalances when a second control-plane node joins.
C
So, I mean, instead of touching the replica number, I think we could actually start with two and kill pods, like half of the pods that are ready, so they will get rescheduled via the anti-affinity rule. Because if there are other components touching the replicas, then we would be touching the replicas as well when a new control plane joins, and I prefer to just delete half of the pods and let them get rescheduled.
C
The thing is that if we are touching the replica number, and other components are doing that as well, like the DNS autoscaler, then we are going to have issues. So I propose that, instead of touching the replica number, we remove half of the ready CoreDNS pods. That way we won't have any impact on the service, and they will get rescheduled automatically.
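A minimal sketch of the manual rebalance being proposed here, assuming kubeadm's usual k8s-app: kube-dns label (commands shown as comments):

    # After the second control-plane node joins:
    #   kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    #   kubectl -n kube-system delete pod <one-coredns-pod-on-the-crowded-node>
    # The Deployment controller recreates the deleted pod, the preferred
    # anti-affinity rule steers the replacement to the new node, and the
    # other replica keeps serving DNS in the meantime.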
A
So if you have any ideas, let's comment on the issue: do we really even want to, you know, target these high-scale clusters, or what should we do? Alright, the next one on the agenda is the kubeadm token discovery optimization PR that I sent; I think it's in the comments today.
A
It changes a lot of stuff, but it's much better, because we are no longer retrying over the whole of the logic that we had previously; we are now retrying only on the API calls. If you're interested in this refactor, please take a look. It also has a lot more unit tests now, and I want to merge this in 1.18, ideally.
C
Really fast on that: basically, when we are upgrading the kubelet, we want kubeadm to upload the kubelet ConfigMap for the new version. We could do that ourselves; I mean, it's not really critical for us to do that through kubeadm, but it would be good to have. And when running the upload-config kubelet phase, it also tries to annotate the CRI socket, which also, you know...
C
It has some output about errors, because it couldn't; it has no config, so it doesn't know how to parse it. So, ideally, we could remove this CRI socket annotation logic from the upload-config kubelet phase. I see that you proposed, Lubomir, to maybe move that (this specifically is a very small function call) to a hidden phase, so maybe we can just call, you know, upload-config kubelet, and nothing will be attempted regarding this CRI socket annotation.
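For context, the annotation under discussion is the one kubeadm stores on each Node object to remember which CRI socket the node uses; a sketch, with node name and socket path illustrative:

    apiVersion: v1
    kind: Node
    metadata:
      name: control-plane-1
      annotations:
        # Written by kubeadm today from the upload-config kubelet phase;
        # the debate is about where this Node patching should live.
        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock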
A
But when we created this, I was really feeling that we should not put the patching of the Node stuff inside the upload-config phase; it felt wrong back then. I saw complaints about this from VMware people, and I saw complaints in the kubeadm issue tracker by Rafael or somebody, somebody else, I don't remember well. So it feels like we really shouldn't be doing this patching of the Node object. Should it be a visible phase, or should it be a hidden phase?
C
Yeah, I agree with that. As for the use case, really, really fast: I think this is something that more people are hitting. If you have a cluster created with kubeadm 1.15, for example, and you want to upgrade to 1.16, but you don't want the defaults; I mean, you want to tweak some of the defaults for the kubelet that come from the kubelet ConfigMap.
C
You want to change some flag during that upgrade, maybe because the kubelet got a new flag and the default is not something you want, or you want to change something. The use case is, you know, to be able to do that in a separate step: so I can upload the ConfigMap for the kubelet, and then I can run kubeadm upgrade apply.
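A sketch of the two-step flow being described, assuming the tweaked kubelet settings are fed back through kubeadm's config file; the phase and flag exist, but the file name and field values are illustrative:

    # Step 1: upload the tweaked kubelet ConfigMap for the target version
    #   kubeadm init phase upload-config kubelet --config kubeadm.yaml
    # Step 2: run the upgrade itself
    #   kubeadm upgrade apply v1.16.3
    #
    # kubeadm.yaml can carry a KubeletConfiguration document like:
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 150              # a non-default value you want to keep
    serializeImagePulls: false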
B
No; ideally, there should be a CRI socket field inside of the kubelet component config, and we should be using that instead of the annotation. But such a field is not present inside of the kubelet component config yet, and it might actually be delayed for some time, at least until the folks there actually get rid of the dockershim.
G
What we do see is that customers, during upgrade, make changes to the kubeadm configuration file, like maybe passing in things like the pod infra container image, and also want to modify the kubeadm environment file during the upgrade process. So in this case, if this feature doesn't exist, then such flags would be added into the kubeadm environment file, and the user would have to modify it manually. Also, with the upgrade apply command, in the case where the environment file exists and you make a change using the --config flag, the change doesn't get persisted, because it only gets written in the case where that file doesn't exist.
C
Just very, very fast, to answer that: removing the --config flag from apply, I don't think that would be good for us, because, for example, we are using this config to also pin the versions of CoreDNS and kube-proxy... for example, not kube-proxy, but CoreDNS and etcd. We are using this feature of kubeadm upgrade apply --config.
D
If you want, I can give a quick update from KubeCon, really quick. It surprised me, the interest that people still have in kubeadm, so that does mean that what we are doing is important for people. There was a lot of discussion around the open question around the operator, but also questions around making kubeadm vendorable and kubeadm as a library, which is an interesting topic raised by users.
D
So that does mean that, with the current status, the addons project cannot be a replacement for the add-ons in kubeadm, and the only thing the addons project can be is an alternative way of managing add-ons, which basically doubles the effort of managing add-ons in kubeadm; that is something that I'm not really happy to do.
G
I'm not sure if this is the right place to ask the question, but is there anything for, like, new kubeadm beginners? Like maybe watching long-term kubeadm members fixing a kubeadm issue; maybe sessions where you could join in and watch someone who's been actively working on kubeadm for a long time fix an issue, and just shadow that?
D
For sure, there is a recording of the code walkthrough that Lubomir created. We don't have, let me say, a pair-coding session prepared, but I'm planning to record a session where I explain how to set up a good environment for kubeadm, how to develop an environment for testing kubeadm. Unfortunately, I cannot do this this weekend or the next one, because I have other stuff to do, but I will ping you and try to manage to do this together.