From YouTube: 20180523 sig cluster lifecycle kubeadm office hours
A
B
So for folks to actually be able to rely on it, it needs to be restructured a bit, because, as Fabrizio, Tim, and others have pointed out, in the current API we don't distinguish what's what. So we have this blob, the master configuration, where everything that we might possibly need for kubeadm is in it, but there's a difference between that:
B
The information that is really used for initialization, like, for example, what is used for node registration, and what is actually something we need later when doing upgrades. So, a good example of this is: we let the user specify bootstrap tokens in the initialization process, but that shouldn't go into the cluster config, because that is just a one-off thing you can do at init. So we hence need to split the API from what we have: one struct for kubeadm init, one struct for kubeadm join, and then, inside of the init struct,
B
we have another one which is more like the cluster spec. Whether we can get that cluster spec converged with the Cluster API we'll see, but at least we'll do this split. So everything that is one-off goes in the init configuration and the join configuration, or, if we still call them that, the master and node configuration, we can debate on that to not break people, but what's needed after cluster init is then in the cluster spec, or whatever
B
we call it, and what's not, like bootstrap tokens, is inside of the init configuration but not the cluster configuration. So, to illustrate what I'm talking about, I'm gonna share this doc here and I'm gonna do some screen sharing, so, okay. So why is this important now? I just want to get some immediate initial feedback, so we know what we can do in this cycle to make the transition easier. So, for example, we have a lot of... let me turn on screen sharing.
B
Tim, can you see my screen? Cool. So let's say we have something that is called init configuration and/or master configuration. This is what's fed to kubeadm init. We have the normal TypeMeta and things like that, and then we have a new node registration struct, which is shared between the init configuration and the join configuration. So for this substruct I already have a PR up.
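A rough Go sketch of the shared substruct being described might look like the following. All type and field names here are illustrative guesses reconstructed from the discussion, not the merged kubeadm API:

```go
package main

import "fmt"

// NodeRegistrationOptions sketches the per-node registration substruct
// discussed above, shared between the init and join configurations.
type NodeRegistrationOptions struct {
	Name             string            // node name to register with
	CRISocket        string            // per-node CRI socket, used by both init and join
	Taints           []string          // nil means "use defaults", empty means "no taints"
	KubeletExtraArgs map[string]string // stopgap until kubelet ComponentConfig settles
}

// Both top-level structs embed the same registration options,
// so the fields are no longer duplicated.
type InitConfiguration struct {
	NodeRegistration NodeRegistrationOptions
}

type JoinConfiguration struct {
	NodeRegistration NodeRegistrationOptions
}

func main() {
	cfg := InitConfiguration{NodeRegistration: NodeRegistrationOptions{
		Name:      "master-0",
		CRISocket: "/var/run/dockershim.sock",
	}}
	fmt.Println(cfg.NodeRegistration.Name)
}
```

The point of the shared struct is deduplication: the same fields previously lived separately in the master and node configurations.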
B
A
B
Yeah, but so we can... I don't think so. So this init configuration, I don't see that having metadata at all, because it's not... this is like the one-off thing, so we just give it, you know, a registration options thing, and here, I mean, we could add, like... I guess you mean like the metav1 ObjectMeta. Yeah.
B
It would, but I don't know. Is there anything else we need for this? Because otherwise, I got the feedback from the last cycle, or like review cycle, that it might be unclear what to specify, because this is not, like, a normal API object. So anyway, we'll talk more about that later when we see the rest of the thing. So the CRI socket, like, it has been moved here, because that is really per-node and used during initialization, both for kubeadm init and join; labels are to be discussed.
B
It's not gonna be merged. I talked about it, like, today; it's probably not gonna be in for 1.11, so yeah. There was one with the taints; that is, like, pretty much clear. So in the meantime, these four elements, and we comment out labels here, these four elements and this node registration thing is gonna, I hope, make it in for 1.11. I think we should do that. We can do this for taints, because that is working better.
B
A
B
Yes, like someone with higher privileges, and kubeadm itself. So, by default this is gonna be nil, and if you leave it as nil, when you run kubeadm init it's gonna be defaulted to the master taint. But if you don't want the master taint, you can just set this to an empty list.
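The nil-versus-empty semantics just described could be sketched like this. This is a minimal illustration of the defaulting rule from the discussion, not the actual kubeadm code, and the taint key is an assumption:

```go
package main

import "fmt"

// Taint is a minimal stand-in for the Kubernetes core/v1 Taint type.
type Taint struct {
	Key    string
	Effect string
}

// defaultTaints models the rule discussed above: on `kubeadm init`, a nil
// Taints field defaults to the master taint, while an explicitly empty
// slice means "no taints". On `kubeadm join`, nil is left alone.
func defaultTaints(taints []Taint, isInit bool) []Taint {
	if taints == nil && isInit {
		return []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	}
	return taints
}

func main() {
	fmt.Println(len(defaultTaints(nil, true)))       // nil on init: defaulted to the master taint
	fmt.Println(len(defaultTaints([]Taint{}, true))) // empty slice on init: no taints at all
}
```

Distinguishing "unset" (nil) from "explicitly empty" is exactly why the field would be left as a slice rather than a single value.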
A
C
B
So it's a common struct, it's going to be a common struct between the master configuration that we have now and the join configuration. That looks something like this: just a discovery and node registration. Right now we call these master and node configuration; we might or might not keep that. But I already have a PR up for this to implement these four fields, and this would also then... so, from one...
B
A
B
Yeah, so if it's not specified, it's gonna be defaulted to the master taint when running kubeadm init; when you're running kubeadm join, it's not gonna be defaulted to anything. But you can override this. So, I mean, at least this is what it is for now; we can take the rest of the comments on the PR. I don't want...
D
B
A
B
Types. So right now we have this in the API; in order to dedupe this I'm putting it in a substruct, but then, when we're doing that, we can just as well put them in a slice. And this is, again, only for the first init process; we only need this once. And then the bootstrap token is going to look something like this: token, TTL, usages, groups, and you can specify many if you want.
B
Okay, so then, yeah, Fabrizio had very valid concerns and wrote up a great long piece on whether we should embed component configs, like the kubelet's and kube-proxy's, inside our own API. So far I thought we should embed them, but both Mike Taufen and Fabrizio brought up valid points, so I'm very open to discussing that. I don't think we're gonna get those moved out in 1.11; that's a huge, or not huge, but it's a change.
A
B
Based on, like, what DNS domain you have set; if you haven't set it, it's gonna do the default thing. So if you haven't set the DNS domain in the component config you give to kubeadm, it's going to default that and set what's in the rest of the cluster spec, okay. But so, exactly as it works today, just referencing a file instead of embedding. Because, for example, one point that was brought up as an argument is: well, what if the kubelet component configuration graduates to GA?
B
A
B
Yeah, I would have to make something like that. I guess it's gonna be a lot of discussion, I think, about how to do this. Anyway, we have these two options that I see, at least right now: either we embed it or we reference it. So, going forward, if we take a look at the node configuration, which I now, here in the proposal, call join configuration: the rename to join configuration and init configuration is more to say, this is specific to kubeadm join, and this should be given to kubeadm init, respectively, instead of having master and node, which is more generic, like something that might come later in the lifecycle or be treated more generally. These should be specifically for init and specifically for join, and then contain the major parts of the API. And so here we have the same object, and then discovery, which is basically all we've got. This discovery is divided into two parts: bootstrap token or file, which are substructs with options. That is the port again. This is like...
B
A
My only major concern is just the migration path for existing folks, and making sure that we double down on the testing apparatus for this, because it's pretty disruptive to make this shift. So long as the migration is seamless and we agree upon the end result, then I think that's probably fine as a constraint, yeah, the code, so.
A
B
Like ClusterHA, Cluster... it's kubeadm-something, right? Not that name, absolutely not, but something that could be transferred easily. So when you transition from "I have only one master" to "I have two or more", we could do something like this, and have that as a pointer as well that is mutually exclusive with the others. So if you do kubeadm join master, or whatever, it's gonna transition this and upgrade your config from being one local etcd to multiple etcds. So.
A
B
Or, let me show that. Well, this is referenced. So in our init configuration we specify a cluster; now I just call it Cluster, we could call it ClusterConfiguration or whatever, and we have this list. The Cluster object is serializable, as it has TypeMeta, so this can exist in a file on its own. It has metadata, probably ObjectMeta here; the cluster name is going to fit in there. Hence it's, like, valid to block the PR that I had there on this happening.
B
Then we have... I don't know if we need status, but, like, I want this to actually look like it could be a native object, not a component configuration. For the rest, so, most of the fields in the master configuration will be transitioned to the cluster spec. So here we have global fields like the Kubernetes version, which is essential to kubeadm init and upgrade. Here we have the etcd configuration. We have API server, controller manager, scheduler, networking; those are global. I don't... this is...
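Put together, the cluster-level spec being described might be sketched like this. Every name and field here is an illustrative assumption based on the fields just listed, not the final API:

```go
package main

import "fmt"

// Networking groups the cluster-wide network settings mentioned above.
type Networking struct {
	ServiceSubnet string
	PodSubnet     string
	DNSDomain     string
}

// ClusterSpec sketches the global, post-init configuration: the fields
// that outlive `kubeadm init` and are needed again at upgrade time.
type ClusterSpec struct {
	KubernetesVersion    string            // essential for both init and upgrade
	ControlPlaneEndpoint string            // still under discussion in the talk
	Etcd                 map[string]string // placeholder for the etcd config block
	APIServerExtraArgs   map[string]string
	Networking           Networking
	FeatureGates         map[string]bool // feature gates enabled for this cluster
}

func main() {
	spec := ClusterSpec{
		KubernetesVersion: "v1.11.0",
		Networking:        Networking{DNSDomain: "cluster.local"},
	}
	fmt.Println(spec.KubernetesVersion, spec.Networking.DNSDomain)
}
```

Note that one-off inputs like bootstrap tokens deliberately have no place in this struct; they stay in the init configuration.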
B
Whether we need it, I don't know. If we do, we might have feature gates that are enabled for this cluster. I haven't touched yet on the control plane endpoint stuff, and not on the certificates directories either. But the thing we could do here is, like, we declare a slice of masters, and where we put etcd, under the master or under the cluster, is to be decided, I guess. I see.
B
A
B
You can add extra volumes and extra arguments, and this is just a stopgap until we get component configuration to work, so we can, like, just pass the component configuration, which we then upload to the ConfigMap in the cluster. So, instead of having the extra args here, it's going to be in a different configuration.
B
A
B
A
B
So, something... at least the idea is that we have something that is called a Cluster, with spec and status and metadata. Then we have, somewhere, a version, or multiple versions for different components, whatever. Then we have a list of masters, and inside of the list, the master configuration, we can specify what is really unique for every master, which is, for example, the advertise address or extra cert SANs, or, like, peer cert SANs for etcd, stuff like that.
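The "list of masters" idea could look roughly like this: shared settings stay on the cluster, and each entry carries only what differs per master. Names are again illustrative assumptions:

```go
package main

import "fmt"

// Master sketches the per-master entry described above: only the fields
// that are genuinely unique to each control plane node.
type Master struct {
	AdvertiseAddress  string
	APIServerCertSANs []string
	EtcdPeerCertSANs  []string // relevant when etcd is co-located with this master
}

// ClusterSpec holds everything shared, plus the list of masters.
type ClusterSpec struct {
	KubernetesVersion string
	Masters           []Master
}

func main() {
	spec := ClusterSpec{
		KubernetesVersion: "v1.11.0",
		Masters: []Master{
			{AdvertiseAddress: "10.0.0.10"},
			{AdvertiseAddress: "10.0.0.11"},
		},
	}
	fmt.Println(len(spec.Masters))
}
```

Joining a new master would then mean appending one entry with its unique values, while everything else is read from the shared spec.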
B
E
A
B
Yeah, that was one of the good pieces of feedback. So, I mean, and again, this is, like, a thought process of weeks and months, for me at least, and now we've got great feedback from the SIG on the initial proposal. Now I think it starts to look like something we actually want to support, instead of, like, one mega-struct with all kinds of things. And as I look at this cluster spec, it actually starts looking like something the Cluster API maybe would want to do. So here again...
B
I'm thinking, like, and this is just an idea, but if we could put just the etcd peer certs in here, then we would have... so when we do create the... let's say we join a new master. Then we check what the general configuration says it should be, and then we say: oh, it should be local to every master, like, co-located.
B
What image should we run? What data dir should we use? What extra args should we have? And then these would be taken from what's actually different per master. But again, I'm not sure we even need to... so these serverCertSANs could be useful, but it would be great if we could make that etcd just listen on localhost if it's local. If it's local, yeah, yes, so, so this all, like, yeah, that's...
A
There's a trade-off there. I just need to... we need to be explicit, and I need to write up the details of what that actually means. Because when you use localhost for doing this, you're gonna not have some of the behavior that you would normally have if everything was going over a remote socket and if you didn't have those peer cert SANs and everything else specified. So.
B
A
B
Whatever is there right now is just, like, me removing a lot of fields from the v1alpha1 API, and the first thing, you've noticed it, like, when I send a PR. So I'm not envisioning it should be like that; it should really be this kind of iterative process, like, in this fashion, but the ones that are... so yeah. No, no, we basically have to decide what we think could possibly get into 1.11, so I think the stuff we're talking about, I'll...
B
A
We can try. I need to think through some of the details and implications of what it means. I don't... I think we can always put up the PRs and see if it lands. I'll have enough review bandwidth; I don't think that's a problem. I just want to make sure I think it through in detail, because sometimes... it always happens with these types of changes.
C
The only one that really worries me a little bit is the one about the etcd, because we hit so many corner cases in the etcd upgrade story and in the etcd TLS story in the last cycle that I'm a little bit scared about this one. The other two are just a shifting of fields, so I'm less worried, but this one has so many implications.
C
A
I agree with Fabrizio. I would like to make sure that when we do make modifications to etcd in the master, we've thought through the implications, that we set it up for success for HA, and that we do it, like, at the beginning of the 1.12 cycle, because there were so many pain points that occurred during upgrade, I can't even describe them all. I know that it might be, like, a trigger word for both Lee and Jason.
A
C
For sure, I'm less worried, at least for what we have in the roadmap, which is to support only external etcd. If we want to add also the local etcd, for me it is an additional feature, so it's something that we don't have now, and something that is a new feature is not designed yet, so it's not a problem now.
B
So, for me, I wasn't here in Q1 when you discussed what's in and what's out for the initial HA implementation, but I do think that setting up etcd in HA mode, whether co-located or in some way co-located with the masters, or how we talk to etcd from the API servers, doesn't matter in this case, but I do think kubeadm init slash kubeadm join...
B
Setting up HA etcd is going to be one of the most liked and appreciated, but also, for us, hard, changes or features to add. But for us to actually say we can support HA, that might be one of the most crucial ones. Getting people to, like, create a load balancer in front of the masters is probably a less troublesome feature or requirement for them, but setting up external etcd is going to be hard, and what I'm...
B
A
I think it's not a requirement for the... I think what the conversation piece was, is: it's not a requirement for the master join workflow, no, to get it unblocked and to get feedback on it. But in an ideal world, I definitely agree, and we will have cycles; like, hopefully we will fully resource getting local etcd up to snuff, hopefully in the 1.12 cycle.
B
Yeah, so, yes, exactly, now I maybe understand more what Fabrizio was saying. So for the initial kubeadm join master thing, we will not consider external etcd. But what I was saying was: I think we really have to act as if we supported HA etcd in our cluster spec when we're designing that, also when thinking about, like, can we get this code to converge with the Cluster API, which is definitely gonna need HA etcd represented in a config in some way, yeah.
A
C
It was designed to be totally flexible, so that means that behind it you can use, you know, whatever you want for bringing up the cluster. Now, in the main API we are forced to take some position, and this will be the major point of friction, I guess, with the people who designed the Cluster API. I think that we have to open a table as soon as possible to discuss this cluster spec and the other pieces, and I don't know... I just...
B
I think so. So, to address your concern, Fabrizio: I think that, even though we made the API versions the same, or the API structs the same, for the cluster spec and things, you don't have to use kubeadm to implement the Cluster API. But if you want to use kubeadm, you can just, like, use the exact config generated as a canonical source of truth; you can just feed it to kubeadm and it works. That is sort of
B
the thing we're looking at. So you can use it for whatever deployment; like, say you want to create a GKE or EKS cluster, or AKS. Then you feed the same cluster spec to them, and they have converters to, like, translate that into actual calls to the cloud provider, in the same way as you can feed it to kubeadm and it will create, on, like, whatever bare-metal node... it will just work, and they talk the same language.
B
D
B
How about another... yeah, I forgot that, so one other thing that could be handy... yeah, well, scratch that. So this is what it currently looks like. I hope to restructure these: remove this one from the master configuration, and we refactor, or put things like the node name in the same place. I've already removed the cloud provider, as you can use the API server extra args instead, and the kubelet extra args thing. And the kubelet configuration is still gonna be there until we settle on whether to pass a file or embed it. PrivilegedPods is gone.
B
The same thing... it was, like, weird, or it was a thing that was only needed for the OpenStack in-tree cloud provider, but now OpenStack has transitioned to using the out-of-tree cloud provider, so this is not needed anymore, so I removed that as well. The image pull policy is not needed now that we have Chuck's work on kubeadm config images pull, and also doing the other pre-flight things. I'm not gonna be able to remove this one quite yet, because it needs, like, being able to specify the images separately.
B
But that's for the next thing. And then, the etcd ones only need to be refactored: self-hosted etcd. This was added just as a port of self-hosting with etcd, with an idea we then scratched at KubeCon, so we never used these types internally. We just added them and had a follow-up to start implementing, but we never merged the implementation PR. So it's totally cool to kill them, yeah.
B
This is about to be merged, which is just splitting up the mega-struct into two. Whether we do change something in local etcd, or, like, to support the HA thing, is to be discussed later; but as just a field move it's okay by now. Discovery: we can do that if we have time, and if Tim, basically, and other folks can think of all the edge cases we might hit.
A
F
B
I can talk about that, as I didn't mention it, so yeah. Here we have... the question was, how do we do this? So, we have a file on disk. It might have, like, no TypeMeta set. It might have the wrong, faulty kube-proxy stuff that broke in 1.10: like, basically the feature gates, where before, in 1.9, it was a string, and now it was converted into a map of string to string, and we have to support all those cases; we have to support loading a faulty v1alpha1 one.
B
So this is what it looks like. We take a configuration file and the defaulted config. So in this case, in kubeadm init, the flags have been defaulting the config, setting some basic structure on it, but if we do specify a config, it's gonna override everything. That's why you can't specify both flags and a config file; mixing flags and config is mutually exclusive, as usual.
B
Typed. That means, then, when we have the map of string to interface, we can check, for example: is kind defined, or is apiVersion defined? If they aren't, we're gonna add that in and treat this as v1alpha1 ever after. Since we're enforcing TypeMeta being set, it's a good...
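That migration check could be sketched as follows. This is a minimal illustration of the idea, assuming the legacy group/version string and kind name; it is not the actual kubeadm migration code:

```go
package main

import "fmt"

// ensureTypeMeta models the step described above: after unmarshalling an
// old config file into a generic map, check for kind/apiVersion and, if
// they are absent, treat the document as the legacy v1alpha1 version.
func ensureTypeMeta(doc map[string]interface{}) map[string]interface{} {
	if _, ok := doc["kind"]; !ok {
		doc["kind"] = "MasterConfiguration"
	}
	if _, ok := doc["apiVersion"]; !ok {
		// No TypeMeta at all: assume the legacy version from before
		// TypeMeta was enforced.
		doc["apiVersion"] = "kubeadm.k8s.io/v1alpha1"
	}
	return doc
}

func main() {
	// An old on-disk config with no TypeMeta set.
	doc := ensureTypeMeta(map[string]interface{}{"kubernetesVersion": "v1.10.0"})
	fmt.Println(doc["apiVersion"])
}
```

Once every loaded document carries explicit TypeMeta, later versions can rely on it being set and convert from there.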