From YouTube: kubeadm office hours 2020-08-05
B: Maybe we do more announcements to point people to the survey, to the document for the survey.
B: Now let me give you a little bit of context. Mark, nice to see you back. So, we are planning to launch a new survey when KubeCon starts, and we are trying to make this survey not only for kubeadm but for the entire SIG Cluster Lifecycle. So basically there will be a general section; then, if you are using kubeadm, you get into a branch with kubeadm-specific questions; if you are using Cluster API, you get into a Cluster API specific section, and so on and so forth.
C: Cool, I'll look over it.
A: By the way, I'm going to update it. I think you should be able to have edit access to this actual form, so I'm going to make the updates, but if you see something that is not right in the form you can also edit it. Rosti, I think you also have access, but the rest of the people should see this.
A: Yeah, we're using the Google doc as the place for deciding the questions, and then a few people can do the edits, and that's it.
A: What the user also said is that it's due to a missing slash at the end of the path. So, when we iterate over /proc/mounts, if the kubelet directory is present as a mount, we are going to also unmount it, because we have a prefix check, and we basically end up unmounting the kubelet directory itself, which is not ideal. Rosti, I think I already see you on this PR.
A: We removed awk, which is a shell command here, and we introduced this — it was my suggestion, actually: we traverse the contents of the /proc/mounts file and we manually call a system call. Yeah, that is the unmount, and this is the breakage. At some point, I believe, Ed Bartosh explicitly added this trailing slash, because otherwise awk unmounts the kubelet directory as well.
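The trailing-slash detail can be sketched in Go. This is an illustrative helper, not kubeadm's actual cleanup code (function names here are assumptions): matching mount paths against the kubelet directory with a trailing slash only selects mounts nested below it, while matching without the slash would also select the kubelet directory itself — the regression being discussed.

```go
package main

import (
	"fmt"
	"strings"
)

// shouldUnmount reports whether a mount path found in /proc/mounts should be
// unmounted during cleanup. Comparing against the directory WITH a trailing
// slash only matches mounts nested below it; without the slash, the kubelet
// directory itself (and siblings like /var/lib/kubelet-extra) would match too.
func shouldUnmount(mountPath, kubeletDir string) bool {
	return strings.HasPrefix(mountPath, strings.TrimSuffix(kubeletDir, "/")+"/")
}

func main() {
	const dir = "/var/lib/kubelet"
	for _, m := range []string{
		"/var/lib/kubelet",                     // the directory itself: keep mounted
		"/var/lib/kubelet/pods/abc/volumes/v1", // nested mount: unmount
		"/var/lib/kubelet-extra",               // sibling directory: keep mounted
	} {
		fmt.Printf("%-40s unmount=%v\n", m, shouldUnmount(m, dir))
	}
}
```

In the real code, each matching entry would then be passed to a `syscall.Unmount` call, which is why the prefix check has to exclude the kubelet directory itself.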
B: Okay, makes sense. So it is a real bug and not something which is very specific.
A: Yeah, it used to be broken; we fixed it with Bartosh's PR; then, when we refactored, we regressed again, and now it's broken. So it is a regression, but it's not a critical bug of sorts.
B: By the way, I'm plus one to not backporting it. I don't see a common use case, and there is a workaround.
A: Okay, moving to the next topic — over to you.
B: Yeah, I was just wondering: given that the attendees of this meeting are more or less the same — it is this small group of people — I think that we can move this meeting to be bi-weekly, but it's also not a problem for me to keep it every week.
D: I think that we can possibly do it bi-weekly because, like you said, the change rate is slower — there are fewer issues, at least fewer critical issues to discuss, the agenda topics are fewer and fewer in number, and mostly we are doing issue triage, for which we can also book separate...
A: We saw that the main SIG Cluster Lifecycle meeting transitioned to bi-weekly. We used to use that meeting for discussing kubeadm, so basically we had a couple of meetings for discussing kubeadm every week and it was still not sufficient — but that was some time ago, and nowadays kubeadm is quite stable, which is great. I'm plus one for this, and I think that if we do it — given we have a meeting today — we can skip next week and alternate following this pattern.
D: Yeah, we should probably just leave a comment in the SIG Cluster Lifecycle chat.
A: Okay, by the way, today I clicked the merge button — you saw this PR. Basically, I had to click the merge button.
A: Actually, no, this one. I had to click the merge button because apparently the `latest-1.15` marker is gone, so we cannot use it in kinder — it's gone from the Kubernetes release as a whole — and I pinged people on Slack about it.
A: ...upgrade jobs anymore, but 1.16 is still in support, so they have to keep 1.15 for longer than that. Apparently it was removed like a couple of days ago — I don't know why. I've stated multiple times that we shouldn't remove these old markers that soon, so, yeah, that was the reason I had to click the merge, but this is an issue.
A: The jobs have not started failing yet, because they are very infrequent — I think we run them 12 hours apart — but the PR itself here failed because we have a...
A: So this is interesting. I actually pinged some people at VMware about it, like Naadir.
A: The user is suggesting that maybe we should start signing the serving certificates for the kube-scheduler, but — you should read the discussion — what I think here is that this is jumping over the boundary of kubeadm being the minimal viable cluster.
A: It just does not check the validity — that it trusts the server — it just...
A: Yes, exactly. So this is the part that does the checks here. The other part is what the user is suggesting: we should just sign the certificates — use the CA, the kubeadm CA, to sign the certificates.
A: You know, I brought up multiple arguments here, including: hey, we are not doing that for the kubelet; who is going to manage the equivalent — like, who is going to rotate this certificate? Obviously either kubeadm or some sort of controller has to rotate them. And then we also have to copy these certificates in the HA certificate-copy functionality as well, because they are control plane certificates. It is just a complication, so I said, let me try to quote myself here.
A: The benefits need to justify the maintenance complexity. I don't see that we gain that much with this feature, so I'm overall lukewarm on this — we should just rally discussions on the topic. And I said that if you really want this, you should write a KEP.
A: Basically, today kubeadm is not adding this flag for the kube-scheduler or the kube-controller-manager — kubeadm is not adding the `--tls-cert-file` flag, and also the other flag, `--tls-private-key-file`. So, like the docs here say, if you don't pass these flags, a self-signed certificate and key are generated for the public address and saved. But what the contributor — the user — said is that, instead, what happens is a CA is generated in memory that signs these.
A: It's not really self-signed certificates — they create an internal temporary CA and then sign using it. So basically the user is suggesting that kubeadm should start doing this, but, you know, there is another overall argument: why should we do this? Why should the general — you know, the 80 percent of kubeadm users, which is the basic use case — why should the 80 percent care about this? I don't really think it's something that we should do.
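The behavior described — an ephemeral CA created in memory that signs the serving certificate, rather than a literally self-signed certificate — can be sketched with Go's standard `crypto/x509` package. This is an illustrative reconstruction, not the component's actual source; all names are assumptions.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newInMemoryCASigned creates a throwaway CA in memory and uses it to sign a
// serving certificate for host. Nothing is persisted: on restart a new CA and
// a new serving cert are generated, so clients cannot pre-trust the issuer.
func newInMemoryCASigned(host string) (*x509.Certificate, error) {
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "ephemeral-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return nil, err
	}
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: host},
		DNSNames:     []string{host},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Signed by the in-memory CA, so issuer != subject: not truly self-signed.
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(srvDER)
}

func main() {
	cert, err := newInMemoryCASigned("kube-scheduler.example")
	if err != nil {
		panic(err)
	}
	fmt.Println("issuer:", cert.Issuer.CommonName, "subject:", cert.Subject.CommonName)
}
```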
B: Yeah, doing the certificate only for the probes seems kind of overkill, given that the probes are not checking certificates — not doing this verification.
B: I added it because I was curious to understand this in detail. I saw the issue, but I didn't have time to read through all the comments.
D: But if the user who reported the issue wants to discuss it, and possibly get into deep waters by creating a KEP, then I'm perfectly okay — as long as this is his contribution, and it doesn't involve breaking changes or too much strange-looking code in kubeadm, in order to just have proper certificates that are not...
A: Okay, by the way, we did break users a little, and it was in the context of the issue, I think, when we added this as a security fix for all the versions.
A: This is just a side PSA here: we did break a lot of people that were basically scraping the health checks of these components insecurely using the ComponentStatus API. So they do `kubectl get componentstatuses`, and it used to work, but once we did this, it stopped working, because the ComponentStatus API does not support secure serving of these components. I saw someone sent a PR to fix that, but — sorry — SIG Architecture...
A: Yes, I think the main problem there is that ComponentStatus is part of the v1 API. So if you deprecate it, at some point you should remove it, but removing something from a REST API means that you have to release a new version — which means that, potentially, because of ComponentStatus, we have to have a v2, and because Golang is a funny language, we have to make so many changes everywhere.
A: Yeah, in the kinder jobs we're using some of the same markers. We have some really interesting markers.
A: We have some very interesting markers, such as `latest-fast`. We have `stable-1`, `stable-2`, and `k8s-beta` as well. These are supposed to be... like, how is it CI and stable at the same time? It's not clear, but the idea is to get a version that is one older than the current CI. Maybe these are kind of confusing. We also have `k8s-beta`, which, for some reason, does not give me the 1.19 work in progress.
A: I proposed that we should just have a KEP for these at some point, you know, or a document where we basically decide which markers make sense. For kubeadm, we don't care that much — we are using those that, hopefully, are not going to be removed, ever.
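For context, a version marker is a small text file whose only content is a version string (the CI markers live at URLs like `https://dl.k8s.io/ci/latest.txt`). The following is a minimal sketch of how a consumer such as kinder might resolve one after fetching it — the function name is an assumption, not kinder's code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMarker extracts the major/minor version from the contents of a release
// marker file, e.g. "v1.19.0-beta.2.253+828b4b6c0bd5cc\n". Build metadata and
// pre-release tags after the minor component are ignored.
func parseMarker(contents string) (major, minor int, err error) {
	v := strings.TrimPrefix(strings.TrimSpace(contents), "v")
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("malformed marker %q", contents)
	}
	if major, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, fmt.Errorf("malformed major in %q: %v", contents, err)
	}
	if minor, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, fmt.Errorf("malformed minor in %q: %v", contents, err)
	}
	return major, minor, nil
}

func main() {
	maj, min, err := parseMarker("v1.19.0-beta.2.253+828b4b6c0bd5cc\n")
	fmt.Println(maj, min, err)
}
```

A job can then compare the resolved minor version against the ones it expects, which is how a removed marker surfaces as a hard failure rather than a silently wrong version.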
A: Yeah, I think we already triaged these. What we can do is quickly look at the remaining 1.20 tickets, I guess, just quickly, and we can end early unless somebody has topics.
A
Changes
you
want
pizza
3.
Is
it
easy
to
consume
so
I
saw
from
bristol
I
saw
discussions
by
the
courser
api
folks.
Andy
in
specific
adi
was
proposing
that
basically,
costa
api
starts
wrapping
the
cube,
adm
config
public
config
types
institute,
yeah.
B: In the KubeadmControlPlane object, we are configuring, in a single kubeadm config, both the bits for the initial control plane and for the joining control planes. So we have the InitConfiguration and the JoinConfiguration, and the user has to take care of making the two things consistent.
B
For
instance,
not
the
not
the
registration
option
should
be
consistent
and
also
it
is
weird
because
we
are
configuring,
a
con,
the
the
co,
the
option
for
a
comfort
control
plane,
and
we
have
to
take
care
of
in
the
kubernetes
config.
To
tell
again,
this
is
a
control
plane,
so
it
is
kind
of
of
weird,
because
there
are
some
concepts
which
which
we
want
to
explore
to
expose
the
lord,
the
low
level
and
and
some
concept
that
we
want
to
get
inherited
from
the
cluster
api
objects,
and
this
is
one
problem.
B
The
second
problem
is
that
in
cluster
api,
we
are
stuck
to
the
one
with
a
one
which
is
going
to
be
to
be
duplicated
and
we
have
to
plan
how
to
keep
up
with
kubernetes,
and
there
is
no,
no,
no,
no
simple
way
to
to
to
convert
the
kubernetes
config,
because
the
the
configuration
bits
are
not
public
and
kubernetes
cannot
be
rendered.
B
So,
while
discussing
this
that
there
was
some
ideas
and
put
on
the
table,
and
one
of
this
idea
was
to
basically
wrap
kubernetes
with
something.
There
was
two
idea
on
the
table.
One
is
to
use
kubernetes
v1
release
of
config,
but
this
is
not
not
yet
planned.
B
So
I
see
this
not
not
not
a
realistic
option
and
the
second
one
was
to
basically
define
a
an
adversary,
agnostic,
a
release
of
the
kubernetes
api
and
the
mask
to
the
user,
the
one
beta
one
with
one
beta,
two
conversion
and
whatever
it
is
still
something
in
discussion
but
yeah.
Basically,
we
in
cluster
api
we're
facing
the
problem
and
there
is
no
conversion
for
the
component
conflict.
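A version-agnostic wrapper of the kind described first has to detect which kubeadm API version a user-supplied YAML document declares, so a conversion can be routed. A minimal sketch of that step (helper name hypothetical; stdlib only, so the `apiVersion` line is scanned textually rather than with a YAML parser):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectKubeadmAPIVersion scans a YAML document for its apiVersion line and
// returns the kubeadm config version (e.g. "v1beta1", "v1beta2") when the
// group is kubeadm.k8s.io, so a wrapper can decide whether a conversion is
// needed before handing the document to kubeadm.
func detectKubeadmAPIVersion(doc string) (string, bool) {
	sc := bufio.NewScanner(strings.NewReader(doc))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "apiVersion:") {
			continue
		}
		gv := strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
		if strings.HasPrefix(gv, "kubeadm.k8s.io/") {
			return strings.TrimPrefix(gv, "kubeadm.k8s.io/"), true
		}
	}
	return "", false
}

func main() {
	doc := "apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n"
	v, ok := detectKubeadmAPIVersion(doc)
	fmt.Println(v, ok)
}
```

The hard part the speakers are pointing at comes after this step: actually converting the fields between versions, which today only kubeadm's internal (non-public, non-vendorable) conversion code can do.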
B: Yeah, and you cannot vendor this as a library, so this is the problem. And this, and also some other discussions, triggered some... I shared with you some ideas about the future of kubeadm, and the more I think about it, the more I believe kubeadm in the future should become basically three different things.
B: Yeah, I'm starting to, let me say, think around this idea. Probably, as soon as I have something that makes sense to be shared, I will involve more people in the discussion. Because probably this is not really different from what we discussed in the past — maybe it is expressed a little bit better — but basically we were stuck on the idea of moving kubeadm from where it is to the kubeadm repo, and I guess that we can, let me say, work around this problem by starting to move the library.
B
So
we
start.
Basically,
we
can
start
emptying
the
current
kubernetes
moving.
Only
I
don't
know
self
management
into
the
kubernetes
as
a
library
and
make
the
current
cli
use
the
library
for
this
part.
So
this
is
a
possible
way
to
to
start
moving
kubernetes,
but
of
course,
I'm
still
at
at
the
high
level.
I
think
that
we
want
this
should
be
stated
before
in
terms
of
principles
and
and
goals,
and
and
and
then
we
have
to
involve
more
people
in
the
area.
A: Wait, there again: if we expose kubeadm as a library with the idea to support conversion between types, aren't we breaking the rules of API machinery, where internal types should not be consumed by other components?
A: That is — without checking — what I've seen in other projects, say, outside Kubernetes.
A: Basically, they have public-to-public type conversions, so...
A
Yeah,
I
we
should
think
more
about
this,
but
I
think
they
said
something
fundamentally
wrong.
Even
if
quasar
api
imports
kubernetes
as
a
library.
B: Let me make an example: in Cluster API there is code for generating kubeconfigs, or code for generating certificates.
A: Okay, and then Cluster API — maybe clusterctl — will overwrite the user YAML on disk to convert it from one version of the kubeadm API to another.
B: There is no need for Cluster API to wrap the kubeadm API unless they want to add something for the user, but at least we give a clear path: okay, this is how you convert kubeadm configs, you can do it — and the same goes for all the other utilities and goodies that are included in kubeadm.
A: So, basically, when you create a Cluster API cluster using the Cluster API bootstrap provider for kubeadm, you have to pass a YAML file that may include user values with the kubeadm API. This is currently what is implemented.
A: I see — so we're still leaving this to the user: they have to manually convert this YAML if they want to create a new cluster with a new Kubernetes version. They have to manually convert.
B
But
yeah
really,
the
idea
is
that
now
there
is
no
option
for
doing
this
apart
to
implement
and
I
don't
like
to
implement
a
bit
of
kubernetes
inside
of
cluster
api,
and
this
is
why
I
was
thinking.
Is
there
a
way
to
get
a
seed
of
the
kuberne
library?
Somehow
and
then
I
started
thinking
and
then
I
I
know
I
throw
the
idea
on
the
table
without
much
contest,
but
yeah
yeah,
and
I
see
that
there
are
potential
use
cases
for
it.
A
Yeah,
I
think
this
is
a
good
idea
to
start
moving
parts
of
kubernetes.
I
know
one
problem
there
is
that
if
this
part
of
cuba
dm
is
importing
client
goal
or
something
that
is
already
in
kubernetes
staging,
we
cannot
do
it.
So
let
me
give
you
an
example,
so
imagine
that
we
want
to
move
the
utils
version.
For
instance,
where
is
version.
B: Yeah, there are many details that should be defined, and I agree with you.
A
Because
this
can
be
blocked
for
a
long
time.
Do
you
think
that
what
is
the
idea
of
using
an
operator
the
kubernetes
operator
to
do
the
conversions
like?
Can
we
can
we
use
that
which
may
be
the
faster
solution.
B: Yeah, but how are they doing it for, I don't know, kubectl? I guess kubectl uses this.
A
Yeah
cubecaro
is,
is
in
staging.
It
has
not
moved
the
same
way
like
we
have
a
separate
cube,
adm
repository
here.
A: The way it works is, there is a publishing bot — that's the name of it, the publishing-bot. If you want to make a change in the repository that is called kubernetes/kubectl... let me give you an...
A: All right, we are out of time. Does anybody else have, like, final comments for today?
A: That's no problem — I guess we should call it. All right, so take care everybody, see you again in a couple weeks. Bye.