From YouTube: Kubernetes SIG Cluster Lifecycle 20180207 - kubeadm
Description
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.q2tayno78vgq
Highlights:
- etcd UX
- kubeadm HA upgrade instructions
- Punting on self-hosting?
B: So Tim and I had some conversations while we were there about what it would look like to provide a good user experience for people bootstrapping an etcd cluster and, eventually, seeding that cluster, and we have some interesting questions about whether this is in scope and what level of it we need to bring forward. I also talked a little bit about this, and as we were going over things we kind of realized that, with the work that I need to finish pushing through for generating TLS certificates...
A: Just to jump in between there: Lucas and several other folks, and Jamie, have had several docs on this topic, and I think, after the operator being pulled, that Lucas's last doc on how he proposed doing HA is probably the most accurate, but there's overlap with some of the work that Fabrizio is doing. I think what might make a lot of sense here is, I'm totally cool with POCs that we can demonstrate.
A: I think most of the major issues with Fabrizio's doc have been addressed. I want to take one more pass through it sometime this week, and possibly add the notion of gating it, so that way we can at least have some gated mechanism to allow the evolution of, you know, joining multiple masters together, and that will of course also involve a control plane. There's going to have to be this mishmash of these two bits together. Right now, the one thing that I think we cannot do with etcd is self-host it, right?
A: I think the lesson of the etcd operator is that it's too thorny, and we don't want to be pioneering that space if other people have treaded down there and have pulled the ejector handle. So I think the happy path that we currently have is static manifests, right, and I'm totally okay with that notion.
A: We'll have to answer the upgrade question along with this, and I'm totally okay with having, like, a DaemonSet or a manager utility or some other thing that can update those static manifests to pull in later versions and possibly upgrade the args too. I do know that there are other people who are trying to create tooling for managing etcd, and there's so much tooling.
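A rough sketch of what such a manifest-updating helper might do (the file path, image name, and target version are illustrative assumptions, not anything decided in the meeting):

```go
package main

import (
	"log"
	"os"
	"regexp"
)

// Illustrative only: bump the etcd image tag in a static pod manifest.
// The kubelet watches the manifest directory and restarts the static pod
// when the file changes.
func main() {
	const manifest = "/etc/kubernetes/manifests/etcd.yaml" // assumed path
	const newVersion = "3.1.11"                            // assumed target

	data, err := os.ReadFile(manifest)
	if err != nil {
		log.Fatalf("reading %s: %v", manifest, err)
	}

	// Replace whatever tag the etcd image is currently pinned to.
	re := regexp.MustCompile(`(image:\s*\S*/etcd):\S+`)
	updated := re.ReplaceAll(data, []byte("${1}:"+newVersion))

	if err := os.WriteFile(manifest, updated, 0600); err != nil {
		log.Fatalf("writing %s: %v", manifest, err)
	}
}
```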
A: I don't know where to begin there, but what I would like to do, ideally, is parcel anything that is etcd-related off into a library, because we kind of have this intermingling of code in phases and in other bits. I think we should push some of the management aspects of that code out; this is a logistical request, you know.
A: If we push some of those management aspects of etcd into its own library, it'll make life for everyone else a little bit easier, because at least all these other people who are doing these things will have a widget which we can agree to disagree upon. Then from there, how the actual management of the certs gets stood up and how that data gets passed, that can be open to debate. Does that seem reasonable?
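A minimal sketch of the kind of shared etcd-management library surface being proposed; every name here is hypothetical, not an existing kubeadm package:

```go
// Package etcdutil is a hypothetical sketch of the kind of library being
// discussed: etcd-management code parceled out of kubeadm's phases so that
// other tooling can reuse it (or agree to disagree with it).
package etcdutil

import "context"

// Member describes a single etcd member.
type Member struct {
	Name      string
	PeerURL   string
	ClientURL string
}

// Manager is the shared "widget". How the certs actually get stood up and
// how that data gets passed stays open to debate behind this interface.
type Manager interface {
	// EnsureCerts generates or loads the TLS material the member needs.
	EnsureCerts(ctx context.Context, certDir string) error
	// WriteStaticPodManifest lays down the static pod manifest for etcd.
	WriteStaticPodManifest(ctx context.Context, manifestDir string, m Member) error
	// AddMember joins a new member to an existing cluster.
	AddMember(ctx context.Context, m Member) error
	// Upgrade moves the local member to the given etcd version.
	Upgrade(ctx context.Context, version string) error
}
```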
B: Well, yeah. So I definitely hear you on the difficulties of self-hosting etcd. I've talked to Helia a little bit about this and all of those self-hosted components, you know, the state you then have to lead them through, and so on making that story a little bit more mature. I think it's definitely worth trying a POC to see what we can do with things like etcd, just using the control plane credentials to try and spin it up.
A: Therefore, on some of the problems that he may have bumped into: I don't necessarily know if you'll agree with all of the other constructs that he used to solve the problem, because in kubeadm we try to be minimalistic and not buy into other ideas or technologies that we'd have to subsume in order to make the facility more usable; we kind of punt on that stuff. We want to be as minimalistic as we can be.
A: I'm totally cool with, and supportive of, creating a POC and a doc that goes along with it. I do think we've spent a lot of time navel-gazing on documentation; we have done a lot of that. So I think, as long as you have a sane answer for how you want to do certs, that is the hardest problem, and the most sane answer that didn't cause everyone to lose it was to use CRDs as almost like an FTP transport; that was Lucas's proposal.
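A loose illustration of that idea: a custom resource used purely as a transport for certificate material between masters. The type and field names are hypothetical and are not taken from Lucas's actual proposal:

```go
// Package certtransport sketches the "CRD as an FTP-like transport" idea:
// the first master publishes certificate material as an API object and a
// joining master reads it back, instead of copying files out of band.
package certtransport

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterCerts is a hypothetical custom resource; the name and fields are
// illustrative only.
type ClusterCerts struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// PEM-encoded CA material. In practice the private key would need
	// protecting (for example, encryption or a Secret instead), which is
	// exactly the sort of detail left open to debate.
	CACert []byte `json:"caCert,omitempty"`
	CAKey  []byte `json:"caKey,omitempty"`
}
```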
D: I have a question about the etcd UX, because in kubeadm we typically bind the etcd member to the master node. My question is: do we want to keep maintaining this assumption, or do we also imagine a UX that allows us to create an etcd cluster with kubeadm separated from the master nodes?
A: That's actually the way most people deploy for production environments: the etcd nodes are separate, right. When I've recommended scale-out deployments for other people, we've done a lot of rigorous testing at scale and found that there are a bunch of conflating issues that occur when you have all the things on a single node, and having your back-end etcd storage facility in an isolated environment with dedicated resources, hey, that's a good idea.
C: Well, I don't have so much to say. It is basically something that we talked about a couple of weeks ago: upgrading an HA cluster built around something that was created using kubeadm. So I've done some research on that, documented my findings, and also implemented some Ansible code, and published those on Google Docs and GitHub respectively. I would like some feedback, if somebody has time to take a look at it and could just tell me what they think about it. And we talked about putting some of that stuff...
C: The Ansible is basically kind of a reference implementation of what I documented; I usually do that along with the stuff that I document. Actually, I'm not taking any look at upgrading etcd, because the etcd is running externally anyway, so that's beyond the scope of kubeadm anyway. So that's basically it.
A: That gets weird; that's kind of like the zeroth order of self-hosting, containerized kubelets, and we kind of willfully punt on that because it takes on a host of other problems, and we know a set of what those problems are. I don't think anyone besides Mirantis does some of this stuff, and they did it in Kubespray, and the reason a lot of people don't do this is just because of the problems that occur.

If you have a systemd unit file, it kind of makes it very clear that this is the process that is owning it and the parent tree that it belongs to. It becomes a little bit weird when you start root-mounting things into the other container, and then it becomes even more weird when you need to do volume mounting, right. So now you've root-mounted these things inside there and now you need to do volume mounts on the root mount, right.
A: It's kind of like the more layers you get, the weirder and weirder it gets, and most people have avoided that. The nodes are the hard part, because they have required the RPM or deb update, right; but if you view a node as an immutable image, then you just cordon and drain the node, and then you just spin up a new one with a brand-new image.
A: That's usually the best operating model that I've seen. The idea of live-upgrading a cluster sounds good, but it's a terrible idea in practice, because over time you get these bits that are hard to maintain and manage, versus having an immutable piece of your infrastructure that you cordon and drain and then spin up a new one to replace, right, and you do that in sort of a rolling fashion.
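A small sketch of the cordon step of that rolling pattern using client-go; the node name is a placeholder, and the drain and replace steps are left out:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// cordon marks a node unschedulable, the first step of the
// cordon -> drain -> replace rolling pattern described above.
func cordon(ctx context.Context, cs kubernetes.Interface, node string) error {
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	_, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical node name; draining and replacing it would follow.
	if err := cordon(context.Background(), cs, "node-a"); err != nil {
		log.Fatal(err)
	}
}
```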
A: We're talking about this in another SIG, where we want to have an infrastructure for spinning up a Docker-in-Docker rig as part of integration tests, for the broader set of people who use Kubernetes as a test bed, right. They don't care about the cluster itself, but they want to have an integration test framework that basically says: give me this cluster and give me back a handle to the clients, that's all I care about. And that's a highly useful atomic widget that other people could reuse.
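A sketch of the "give me this cluster and give me back a handle to the clients" widget described above; the function names are hypothetical, and the actual cluster bring-up (Docker-in-Docker or otherwise) is intentionally left abstract:

```go
package clustertest

import (
	"testing"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewTestCluster is the atomic widget discussed above: callers don't care
// how the cluster is brought up, they only want a client handle and a
// teardown function for their integration tests.
func NewTestCluster(t *testing.T) (kubernetes.Interface, func()) {
	t.Helper()

	// bringUpCluster is a stand-in for whatever provisions the cluster
	// (for example, a Docker-in-Docker rig) and returns a kubeconfig path.
	kubeconfig, teardown := bringUpCluster(t)

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		t.Fatalf("loading kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		t.Fatalf("building client: %v", err)
	}
	return cs, teardown
}

// bringUpCluster is deliberately left abstract; it is not a real API.
func bringUpCluster(t *testing.T) (kubeconfigPath string, teardown func()) {
	t.Skip("cluster provisioning is out of scope for this sketch")
	return "", func() {}
}
```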
A: You know, people are creating their own controllers, people are creating their own things; they just want to have an integration test bed. But the space of full self-hosting, with a containerized kubelet, that is something we have willfully avoided, because there's a whole host of issues that arise from it.
A: I did want to chat quickly about Fabrizio's doc, if folks have had a chance to take a look at it, and any PSAs that Fabrizio might have with regards to that doc. It's basically outlining the potential UX, as well as his POC, for just the master components and joining them.
A: I think the only contentious thing I had in that doc was whether or not we want to force the idea of self-hosted, and you kind of answered my question, Fabrizio: the way the world is today, we don't want to force people into that if they're not ready, and there's a lot of legacy. So if you want to be able to join an existing set of static manifests, yes.
A: So I'm kind of getting to the point now where the idea of having a DaemonSet upgrader almost seems cleaner to me. Other people can argue me out of it, but I think the promise of self-hosting was that you get these magical things for free, yet the logistics are painful, right. Whereas a DaemonSet upgrader or revisioner is actually something that other people do anyway today, and it's not a terrible pattern.
D: The difficulty is that kubeadm needs to be deployed on the machine; it cannot be only a controller, like the ones that call the Google Cloud API or whatever. So the logistics are much more complex: you need to be able to deploy it on each node and to have full root grants, basically.
D: And even if you do this, you still have to manage the initial bootstrap as a command line. So, on the story: I think that it is a sound solution, because it addresses many problems, from upgrades to the copying of things, whatever, but I'm worried about the complexity that the jump means, because you have to manage the bootstrap and then the DaemonSet.
D: So I am worried that, in the end, kubeadm becomes two things with the same bunch of code, one for the CLI and the other for the DaemonSet. And what most worries me is that kubeadm phases right now are not really an API.
A: This is the self-hosting that sounds good, as you mentioned. But I think, if we have the ability to basically lay down the control plane and make it easy to have a master join, as well as having the etcd cluster easily configured, then the management of the upgrades could be done as pods running on the system that have host volume mount privileges and just call the upgrade. I think that's...
A: You wouldn't have to... like, if you ran kubeadm upgrade, if I was on the command line and I did kubeadm upgrade, it could first list down who the masters are, right, and it could specifically target each one with a node selector, tolerating the master taint, run a job on that machine, literally grab the logs as it runs, and only go on to the next node once that one is done. It only does it one at a time, and then it could back off.
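A rough client-go sketch of that flow: list the masters, then run a Job pinned to one of them with a toleration for the master taint. The label and taint keys follow common conventions of that era, and the image and command are placeholders rather than real kubeadm behavior:

```go
package main

import (
	"context"
	"fmt"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Step 1: "list down who the masters are".
	masters, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "node-role.kubernetes.io/master",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: run the upgrade as a Job pinned to one master at a time.
	for _, node := range masters.Items {
		job := &batchv1.Job{
			ObjectMeta: metav1.ObjectMeta{
				GenerateName: "upgrade-" + node.Name + "-",
				Namespace:    "kube-system",
			},
			Spec: batchv1.JobSpec{
				Template: corev1.PodTemplateSpec{
					Spec: corev1.PodSpec{
						NodeName:      node.Name, // pin the Job to this master
						RestartPolicy: corev1.RestartPolicyNever,
						Tolerations: []corev1.Toleration{{
							Key:      "node-role.kubernetes.io/master",
							Operator: corev1.TolerationOpExists,
							Effect:   corev1.TaintEffectNoSchedule,
						}},
						Containers: []corev1.Container{{
							Name:    "upgrade",
							Image:   "example/upgrade-runner:latest",                  // placeholder image
							Command: []string{"kubeadm", "upgrade", "apply", "v1.x.y"}, // placeholder command
						}},
					},
				},
			},
		}
		created, err := cs.BatchV1().Jobs("kube-system").Create(ctx, job, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("started %s on %s; wait for completion before the next node\n", created.Name, node.Name)
		// Waiting for the Job and streaming its logs is sketched further below.
	}
}
```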
A: Because kubeadm is command-line driven, CLI driven, instead of me just blindly flashing the whole system, it could query the system and say: this is what we have, this is the configuration we have, and I will go through A, B, C one step at a time, nice and smooth, almost as if it were a very explicit rollout through Ansible, right. Yeah.
A: Because kubeadm knows itself, right. So if I build the command-line utility, it's the same thing as what we're doing on a separate project for Heptio called Sonobuoy, where you do sonobuoy run and Sonobuoy composes its own configuration that will then be deployed to the cluster, right. You could do that with upgrade.
A: You would have a client running locally, and it would literally stream back the logs of the state of everything, of each individual upgrade: upgrading node A, here's all the state of the kubeadm upgrade, and that's complete, node A done, on to node B, same pattern.
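A sketch of that local log streaming with client-go: follow the logs of one node's upgrade Job pod and copy them to the terminal. The namespace, label, and Job name are assumptions carried over from the previous sketch:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// streamUpgradeLogs follows the pod created for one node's upgrade Job and
// copies its logs to the local terminal, so the client sees the state of
// each individual node upgrade as it runs.
func streamUpgradeLogs(ctx context.Context, cs kubernetes.Interface, jobName string) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		// Jobs label their pods with job-name=<name>.
		LabelSelector: "job-name=" + jobName,
	})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		req := cs.CoreV1().Pods("kube-system").GetLogs(pod.Name, &corev1.PodLogOptions{Follow: true})
		rc, err := req.Stream(ctx)
		if err != nil {
			return err
		}
		_, err = io.Copy(os.Stdout, rc)
		rc.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "upgrade-node-a" is a hypothetical Job name from the previous sketch.
	if err := streamUpgradeLogs(context.Background(), cs, "upgrade-node-a"); err != nil {
		log.Fatal(err)
	}
}
```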
A: That could be built in, yeah. There's going to be a tricky part, like what happens if you're on node B and that upgrade succeeds, or that upgrade fails but node A did not, right. That's the tricky part, but, you know, for the most part, if we build in single-fault tolerance...
D: This is an interesting next step before we perform upgrades. I think that there is a spot in the upgrade story which should be clarified, which is that right now the upgrade uses the kubeadm-config ConfigMap, yes, and, if I'm not wrong, there is some information in this ConfigMap which is node-specific, because this ConfigMap was designed basically when there was only one master. And this is something that I'm curious to check in the document today: how was this solved? Because otherwise you risk losing the API server address, which is specific to the master node, and also the node name.
D: So when you init your first node, you write the API server advertise address and the node name of node one; then you init the second master and you override that information. When you then do an upgrade, you are in a tricky position, because you can do the upgrade on the second node but not on the first node; if you do the upgrade on the first node, you bring onto the first node the advertise address and node name of the second one.
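For context, a hedged sketch of the node-specific part of the configuration being described; the field names are illustrative of that era's kubeadm config, not an exact copy:

```go
// Package config sketches the node-specific part of the kubeadm
// configuration stored in the kubeadm-config ConfigMap. With several
// masters sharing one ConfigMap, whichever master wrote it last wins.
package config

// MasterConfiguration here is illustrative of the layout being discussed,
// not the real type.
type MasterConfiguration struct {
	API struct {
		// AdvertiseAddress belongs to one particular master; a second
		// master writing the ConfigMap overwrites the first one's value.
		AdvertiseAddress string `json:"advertiseAddress"`
		BindPort         int32  `json:"bindPort"`
	} `json:"api"`

	// NodeName is the name of the node that ran init, so it has the same
	// overwrite problem.
	NodeName string `json:"nodeName"`

	// The remaining fields are cluster-wide and are safe to share.
}
```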
D: Currently I am using a workaround for this problem for the advertise address, which is to use the same address for all the nodes, but there is still an open problem, which is the node name of the node, which can be overwritten and so on. It is another point that we have to look at.
A: Okay, I think what I might do, which I think I kind of have to do, is write down a doc that we can rally around for what it means to do upgrades for a highly available control plane in a non-self-hosted world, and a potential workflow there, and maybe we can use that as a discussion for next week during this call. And in the interim, I think, you know, folks should take a look at Fabrizio's doc, because that workflow is independent.
A: I like that, I like that a lot, because otherwise we just keep going in circles endlessly. And then, if you wanted to work on that other bit (you know, there are multiple pieces there), you could prototype it as well as lay down a doc of how you envision some of the flow working together.
A: I think this is a nice interim state. I like this, because we're dealing with just the masters and just that flow; we have just etcd and just that flow; we'll talk about just upgrades and just that flow. I like that a lot, and we're just going to wash our hands of self-hosting right now, just because, as you mentioned, as we add more pieces to the puzzle here, like the audit log, it brings more issues and more mounts to be managed in the self-hosting scenario.
D: If we have finished with HA and etcd, I want to share an idea that I'm starting to work on, and it is something related to the flags, the amount of flags, and the amount of UX that we have. Basically, what I'm concerned about is that now we have a huge amount of flags, and the same flags are duplicated between init and phases, with really little value added, I think.
D: So what I'm thinking about is to have kubeadm init take an argument that, if it is empty, means all the phases; otherwise you specify, say, kubeadm init certs or kubeadm init kubeconfig. So the idea I'm working on is to collapse phases into init, so the UX and the code will be shared as much as possible.
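A rough sketch of that CLI shape using cobra, which kubeadm's command line is built on; the phase names and wiring are illustrative, not the actual kubeadm implementation:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	// `kubeadm init` with no phase argument runs every phase.
	initCmd := &cobra.Command{
		Use:   "init [phase]",
		Short: "Run all init phases, or a single named phase",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("running all phases: certs, kubeconfig, controlplane, ...")
		},
	}

	// Each phase is also addressable on its own, e.g. `kubeadm init certs`,
	// so phases and init share one UX and one code path.
	for _, phase := range []string{"certs", "kubeconfig", "controlplane"} {
		p := phase
		initCmd.AddCommand(&cobra.Command{
			Use:   p,
			Short: "Run only the " + p + " phase",
			Run: func(cmd *cobra.Command, args []string) {
				fmt.Println("running single phase:", p)
			},
		})
	}

	root := &cobra.Command{Use: "kubeadm"}
	root.AddCommand(initCmd)
	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```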
D: I will write an enhancement proposal for this and do a little bit of prototyping. I don't know if I will have time by next week, but first of all I want to figure out how this can be done, because I studied a little bit how kubectl does this, and there are also many interesting things that we can reuse.
A: Yeah, basically everyone is asking for more flags, because they have these use cases of, like, we need to tweak the knobs here and we need to tweak the knobs there, and I think minimizing the duplication of flags is super helpful, because, as you saw with some other PRs, if a person wanted to add a flag to all the phases, because it was part of them, they'd have to duplicate the flag in all the tooling. Awesome.
A: Yeah, so I love the idea. I think logistically we need to figure it out, but I do want to add that as a blocking item: we should open an issue on the kubeadm repo, and I want to put it in as a blocking issue for GA, right. You know how we have a GA milestone now; I would like to put that in the GA milestone, because I think that's super important for the long-term maintenance of this tool.
A: No, this was a good meeting, lots of heavy stuff; I'm happy about this one. So, Martin, feel free to PR; I'll take a look at your doc too. And we have a backlog of things to go through, and if folks have a chance, I'd really appreciate feedback on Fabrizio's master join. I'm really, really hopeful that all these separate pieces can kind of jive and mesh together over time. So, if no one else has any other comments, I think we can call it a meeting.