From YouTube: kubeadm office hours 2020-04-08
A
So, PSA: yesterday, during the SIG Cluster Lifecycle session, we decided on some new rules for the Zoom meetings. Basically, when you host a meeting, what you should do is always add a co-host. This is possible once you claim the host: you can go to the list of participants, select someone, click on "More", and there is an option "Make co-host".
B
It could probably be moved up one level and made a subsection; I mean, it's just sequential. First of all, you decide what to use, either the keepalived + HAProxy version or the kube-vip one, and after that you do the bootstrap. It's just a little last section saying something like: now you have to run kubeadm init according to the docs, and make sure to pass the control-plane hostname and port on the kubeadm command line.
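For reference, the bootstrap step being described can be sketched as follows (the load balancer address and port are placeholders, not values from the meeting):

```shell
# Sketch of the init step described above, assuming an external load
# balancer in front of the control plane; LOAD_BALANCER_DNS and 6443 are
# placeholders for your own control-plane endpoint.
kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
  --upload-certs
```

`--control-plane-endpoint` is what bakes the load balancer's hostname and port into the generated certificates and kubeconfigs.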
A
Okay, I see. So, as a bit of editor feedback, what you can do: given this is the highest level (you know, if you click on "Overview", it has two hashes, here it is), this is the second level, the second item of the second level. Yes, but I think these should be level three at this point, so with three hashes.
A
I think this is going to be a huge improvement, because until now people are basically googling for blog posts and tutorials, and if we maintain this guide it's going to be better. Except that, me in particular, I'm not really an expert in load balancers by any means. So if somebody finds a bug here, the question is: should we ping you, Martin, to help fix this?
B
Obviously, since I've written it, I should be an owner. That's okay for me. I have been using those tools (at least keepalived, HAProxy, nginx, etc.) for these purposes for quite a while, so I know a little bit about it. But of course I can't give any guarantees; still, I shall be happy to try to help if there is anything, yeah.
A
It's open source, it's best-effort. You know, if you're not around to help with this, we'll have to figure out ourselves how to amend the doc. Oh yeah, thank you very much for this. I guess now the question is who we should assign as reviewers. Fabrizio, as you requested, do you want to have a look? And also Rosti, I guess.
D
Yeah, I just wanted to note that I followed up on the action item from last week for submitting the conformance results for kubeadm. The sonobuoy project added the ability to print out the dependent images, which allows us to make a pre-baked VM image for the conformance tests. This improves runtime from three and a half hours to less than an hour and a half, sometimes just one hour.
D
So the swapping and the thrashing of I/O for image pulls definitely slows down the conformance run, just FYI. I submitted for 1.17 and 1.18, and I may do this for kind as well, since it's not that hard to do in the background now that the tests are no longer so flaky. I did have one flake, but just rerunning it this week was fine. The following three items are about design, and about feeling out priority and direction from the last meeting.
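For context, a conformance submission run with sonobuoy looks roughly like this (a sketch using standard sonobuoy commands, not the exact invocation from the meeting):

```shell
# Run the certified-conformance suite against the current cluster and
# wait for completion.
sonobuoy run --mode=certified-conformance --wait

# Download the results tarball; "sonobuoy retrieve" prints its path.
results=$(sonobuoy retrieve)

# Summarize pass/fail counts from the tarball.
sonobuoy results "$results"

# The e2e.log and junit_01.xml inside the tarball are what get submitted
# to the cncf/k8s-conformance repository.
```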
A
Yeah, thanks. I wanted to raise a question here: do you think that in the future we should just run this with kind? Because by default the sonobuoy tests run only the conformance suite, but not the non-conformance one. Well, there's the major argument about the kernel overrides and things like that.
D
Yeah, I'm not educated on that. I know that the test suite that sonobuoy runs is rather truncated. I don't know what the review and requirements process for the conformance test suite really even is. I know that people check the logs and the JUnit output to make sure that versions match and that there are no failures, because...
A
Yeah, I think (because I know that kind is also running these) I don't think that we even have tests that are going to catch problems in a cluster that is sharing kernels versus a cluster that is using separate kernels for the nodes. I don't think we have the coverage for that, so I'd try to invalidate the argument that kind is not sufficient.
C
For what? Oh, sorry. kind, first of all, is not a SIG Cluster Lifecycle project, so we can help in submitting the conformance results (that is easy), but I don't get why we should care about the story about shared kernels, or how it impacts our test grid, or how kind is run; this should not be a problem for SIG Cluster Lifecycle.
D
Well, I mean, SIG Cluster Lifecycle is not the one submitting the conformance tests for kubeadm, but yeah, I think I'm just a little confused as to why we even need to discuss it. This test runner requires very little maintenance and it does the job, and I don't understand why we would need to do a regression.
C
If you think about it, you probably understand that all of them are validating kubeadm, but I'm not sure we can say that kubeadm itself is conformant. I don't know if it is validated there in direct conformance. So saying that kubeadm is conformant because we submitted kind's conformance results, I don't know if people understand this.
C
I think that, in terms of our relationship with the community (especially with people who are getting started with Kubernetes), having kubeadm listed as a conformant tool is something important. So, as soon as we can manage it, I'm plus one to submitting a separate conformance result.
D
It's not clear to me that up-to-date conformance is strictly required to be listed on the CNCF landscape. The k8s-conformance repository's master branch can be considered authoritative on whether or not the list is up to date for a particular version. So if you go to any of these directories, you can see who is up to date. Okay, so yeah, like Typhoon and Talos right here: you know, they've submitted results and are considered good, yeah.
D
Right here they define the base image, and then it says "trigger run", and then basically on "up" it packages this VM into an image, and then the following images end up using the base image. So yeah, I've never known how to do this in Vagrant before; it's not something that's documented or supported, really, but yeah.
D
You can do this, but I've always wondered how I can prepare a base image that's separate from the actual application. Normally people will have one provisioning script that sets up all of their dependencies and then all of their apps, and then, if something fails, you have to reprovision the base image with Vagrant every time, and this gets rid of that problem completely.
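The base-image flow being described can be sketched with standard Vagrant commands; the box name here is hypothetical:

```shell
# Provision a VM once with the heavy dependencies (slow step), then
# package it as a reusable base box.
vagrant up
vagrant package --output conformance-base.box
vagrant box add conformance-base ./conformance-base.box

# Later Vagrantfiles can then start from the pre-baked box instead of
# reprovisioning from scratch, e.g.:
#   config.vm.box = "conformance-base"
```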
D
Yeah, this is just regarding that add-on installer configuration, and I'm fine to call it something else, like add-on apply configuration or whatever; this is just the name right now. So I had a talk with Sedef from the Cluster API group, and we're going to POC the add-on installer API in the post-apply operator, so that'll be something that could be used at the Cluster API level.
D
So the main problem we're looking to solve there is decoupling the versions and the ownership of the CNI manifests from the version of kops, because it's a large maintenance burden for the kops team. We've got a smaller subset of that problem inside of kubeadm, which is just that we have the core add-ons for DNS and kube-proxy, the node proxier. So we just need to decide: okay, well, the way that we expose kube-proxy and CoreDNS to users is potentially inflexible and limiting.
D
The second item is actually just having an off switch for both of these add-ons, something that can be formally described in the API. I think, whatever we do, you should be able, using the kubeadm APIs (like the component config types), to declaratively instruct kubeadm to not do these things, and then also potentially do something else if you want. So the add-on installer configuration is one complete, minimal mechanism.
C
Allowing people to disable add-ons is something which has been bugging us for some time, so I would like to give a simple solution to the user. I'm fine with skipping phases for now, but there is another problem we have to address as soon as we skip phases: we have to make upgrade capable of detecting that an add-on is missing, and so basically to skip it, to avoid reinstalling add-ons.
C
The problem with adding flags, or booleans in the config, is that we have to cut another release, and that's something I would like to avoid, because in order to cut another release we have to group a lot of significant changes, and cutting a release also puts a burden on the user. So, in my opinion, we should go for a simpler release without changes in the API.
E
Actually, the way I'm imagining this is more like: if we enable the new add-on installer itself, kubeadm is probably then going to give up, like, ownership and management of the add-ons on its side. So the moment that we actually get the add-on installer enabled, we can, without much work, skip all of the CoreDNS and kube-proxy handling.
D
My thoughts on this are: if the add-on installer is enabled but the user does not provide a config, then it will use the defaulted config, which will include CoreDNS and kube-proxy, yeah. And then the kubeadm config print command can print the defaulted add-on config. So if somebody wants to change the add-on installer behavior, they can: for their Kubernetes version, using their kubeadm config, they can get a specific version of the defaulted add-on installer config.
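kubeadm already follows this print-the-defaults pattern for its existing component configs, e.g.:

```shell
# Print kubeadm's defaulted init configuration, including the component
# configs for the kubelet and kube-proxy; a future add-on installer
# config could be surfaced the same way.
kubeadm config print init-defaults \
  --component-configs KubeProxyConfiguration,KubeletConfiguration
```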
D
Yeah, I think (this is just a more general comment) there's the pattern where, when kubeadm doesn't receive input from somebody, like on upgrade, it goes to look in the cluster to see, you know, if there's anything it can do on a best-effort basis by finding a secondary source of authority.
D
For a user to override the behavior of the tool, and I know that doesn't change the fact that our stance is that modifying the cluster is not supported, right. But, you know, if you give the config flag on upgrade (you can even put it in all caps in the help text), not all mutations of config are supported, but there is definitely a large set of things that people would want to change that make total sense, and so, yeah.
D
Yeah, I think there are some firm ways to think about config upgrades. One of the more interesting points (it's often where mistakes are made) is reusing the same config map, with the same name, and changing it during the lifecycle of changing a deployment, which produces a new replica set. If you have two replica sets that, say, are using two different versions of CoreDNS and they need different configs, then the config maps that those replica sets reference should be different.
D
This is a constraint of a rolling-upgrade-strategy deployment. So there are some very attractive features, declarative and idempotent, built into tools like kustomize and Helm: they will hash your config map's content (the data, right) and then name the config map based off of the content of that data. So say, in the context of a CoreDNS upgrade, you're changing the version of the Corefile, right, and the schema has changed.
D
Then the config, and its overrides by the user, should be packaged with that version of CoreDNS. If you need something that's truly ornate, right, like supporting a live upgrade for a user despite the fact that you've declared it in the previous add-on version to be this content, and the user changed it or whatever...
D
Then you can still package those things and put a job, or an operator with the config upgrade mechanisms, into it, and do, you know, a config migration or something, and put it into a new config map with a new name, right, and then get the second replica set to reference that. All of that is very possible, you know, by just packaging the add-on properly. But there are more basic needs that are not being met with regard to upgrading these components, and those we can focus on with better packaging.
D
It would be the add-on's responsibility to do that within the constraints of the declarative package. I'm totally on board with you there. If Calico wants to instrument an upgrade from the etcd backend to the Kubernetes backend, then they should include the machinery and the RBAC and the code to do that using the package, just like a Debian package or anything else that would need to do a migration.
A
Then, when you do a join, we don't care about CoreDNS, but we do care about kube-proxy, because we want to have a kube-proxy instance there on the new node, and we fail. Well, currently we ignore the fact that the kube-proxy config doesn't exist: we say the user did not deploy kube-proxy on this cluster. So we already have logic for when kube-proxy is not deployed, so we don't care about this add-on; for CoreDNS on joining nodes we don't care either. Like I said, it's on upgrade.
A
I've been nagging about adding upgrade apply phases for a very long time. If we can skip upgrading of add-ons, we can potentially delegate add-ons completely and make sure nothing gets stuck. Quickly: I know Rosti approved my PR; we ignore the fact that a kube-proxy does not exist in the cluster, and we can say, okay, I'm not going to upgrade kube-proxy on this cluster. But for CoreDNS we still perform the upgrade, because we don't have the logic to skip it there.
A
In my view, what we should do in the future is deploy a CoreDNS operator and a kube-proxy operator as part of kubeadm by default, make it possible to skip this, and then, if the user skips this, they can delegate to an external add-on installer where they can decide to deploy these add-ons in a custom way. But it's not clear to me how we're going to pipe some control-plane configuration to the add-ons. Jason gave a very good example.
A
How do you define the pod CIDR for the controller manager when you want to deploy Calico? It needs a specific CIDR that you have to set in advance on the controller manager, and then you have to deploy the add-on with this specific CIDR defined. This is going to be a very difficult problem for us to solve, and I'm not sure how we're going to do it. But I want to hear more about my operator idea.
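As a concrete illustration of that coupling (values are examples; 192.168.0.0/16 is Calico's documented default pod CIDR):

```shell
# The pod CIDR handed to kubeadm (and thus to kube-controller-manager)
# must match the CIDR the CNI add-on is later deployed with.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
EOF

kubeadm init --config kubeadm-config.yaml
# The Calico manifest must then be applied with the same CIDR configured.
```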
C
To be honest, in the long term I would like kubeadm to get out of the business of managing the add-ons. So in the future, let me say, my dream is that kubeadm at a certain point, just using the add-on configuration or something similar, basically hands off to something else that installs the add-ons, and then, if the add-on has an operator...
C
It's totally transparent from a kubeadm point of view, and so the user can decide whether to install CNI, CSI, whatever they need, at the first run, and then they are responsible for managing the lifecycle of these objects by interacting with the operator, or with the API the operator offers. So, yeah.
C
So in the long term, I simply see kubeadm, let me say... I also believe that each tool should do one thing, and then we can have composable pieces of a solution. kubeadm is a bootstrapper, so it should only take care of bootstrapping the cluster and then, basically, for the start, just kick in some add-ons, and then we are out of this business.
A
This is, of course, what kubeadm should be; yes, it should be scoped to do this. The problem is we have a trade-off with UX, you know. Do we want to sacrifice one for the other? Like I said earlier: if kubeadm is no longer deploying the add-on with, you know, some of these flags specifically configured to match the components, then the users have to configure that add-on installer separately, and how are they going to do that?
C
I think that, in terms of proposing a solution for the user, let me say, just to get the point across: there is a viable solution to make kube-proxy and CoreDNS skippable during init, via the existing phases, and during upgrade. We already did this for kube-proxy; we can do something similar for CoreDNS as well, so that if the add-on is not there, it is just skipped on upgrade.
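The init-time half of this already exists via phase skipping, e.g.:

```shell
# Bootstrap a control plane without the kube-proxy and CoreDNS add-ons;
# addon/kube-proxy and addon/coredns are existing kubeadm phase names
# (see "kubeadm init phase addon --help").
kubeadm init --skip-phases=addon/kube-proxy,addon/coredns
```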
C
Changing the topic: we are saying, okay, but instead of doing this trick or workaround, can we work with the add-on configuration thing? And we got to this topic. I think that the point of the add-on configuration proposal is still valid, and I think that what is missing now, or at least what was missing last time I checked, is a prototype, or even a KEP, that defines the way the configuration the user passes to kubeadm flows down to the add-ons.
D
You know, the cluster CIDR, the cluster DNS zone, and values like that, which we want people to be able to work with because they're hard-coded things that often change: we could include those in a Go template context for inline patches, right? Like, that's totally fine with me. You just invoke the add-on library with the Go template context and it works. Those things could be coordinated, you know, using a config map inside of the cluster, and the kube-proxy configuration could be, you know, passed as a component config to kubeadm.
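Purely as a sketch of that idea (no such API exists in kubeadm; the group, version, kind, and field names below are all hypothetical), an inline patch using a Go template context might look like:

```shell
# Hypothetical add-on installer config: kubeadm would expose values such
# as the pod subnet through a Go template context that user-supplied
# inline patches can reference.
cat > addon-config.yaml <<'EOF'
apiVersion: addons.kubeadm.x-k8s.io/v1alpha1   # hypothetical group/version
kind: AddonInstallerConfiguration              # hypothetical kind
addons:
  - name: kube-proxy
    inlinePatch: |
      data:
        clusterCIDR: "{{ .Networking.PodSubnet }}"   # template context value
EOF
```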
D
I completely agree with you, and everything that I've intended to say is supposed to agree with what you're trying to express as well, so I think we're on the same page there. I'll also just point out that I actually don't have a problem with people specifying a value twice (I think it's fine if somebody wants to define the cluster zone in one place and then also in another place), but I am very happy to make accommodations so that people don't have to do that.
D
I see it more as just supporting a different UX overall, which is basically: when kubeadm constructs the default add-on config, it can include an inline patch for the kube-proxy add-on's defaults. Ultimately, if the user provides their own config, though, right, kubeadm has nothing to do, right?
D
I'll be clear that kubeadm will do none of the patching; it's just providing config. One of them happens to be a patch, but we don't have to support that UX at all. We can say: if you're using the add-on installer configuration, the kube-proxy config object is ignored, and then just print a warning. Okay.
D
Yeah, I mean, the idea was: let's provide a library that's very small, with an API that's small, that will provide general-purpose, you know, flexible usage of this API type across installer tools, so that we don't have fragmentation, people running bash scripts and stuff. We don't have a core solution for how to describe installing groups of resources.
A
There are multiple proposals for that. So how about we just stop caring about kube-proxy completely on the side of kubeadm and document how to install it using external means? But this requires users to know what flags and fields kubeadm needs on the side of the add-on that they have to install manually after kubeadm finishes. Now, this does complicate the UX, but it is really the solution for scoping kubeadm to basically doing cluster bootstrapping; it is, you know, more difficult for users, yeah.
D
I mean, I think kubeadm initially took on kube-proxy and CoreDNS because they are quite general bootstrap needs. It's just a question of what is an add-on and what isn't, and what is bootstrapping and what isn't. If the goal of kubeadm is to be able to provide people a mechanism to talk about creating working clusters, I think the add-on installer is a small thing that we can add.
D
It will provide a lot of return in that area, because then people can share different kubeadm configurations that add a cloud provider, add a CNI, change CoreDNS to node-local DNS, you know, so that you can create a working cluster on AWS, or things like that; or, you know, exclude kube-proxy so you can use Cilium's or kube-router's implementation, those kinds of things.
D
It is confusing if the default behavior for kubeadm continues to be: you know, run kubeadm init and then there's kube-proxy and CoreDNS, and no CNI and no cloud provider. We either need to deprecate those things, in my opinion, or we should provide something a little bit more extensible. Because, yeah, to your point: if you require special config, you have to change the flags of the actual kubeadm runtime, and then you get into the phases abstraction, and none of that is actually controllable.
D
That, like, is the messaging standard for how people can communicate: this is a multi-node cluster, this is a Raspberry Pi cluster, this is a cluster that works on bare metal, this is a cluster that functions in an AWS networking environment. If that is the goal (that's certainly been my vision and interpretation; you know, people might disagree), then I think that providing an extensible layer for how to install bundles of things, or packages, is important. It's 11 o'clock, sorry, so...