From YouTube: Kubernetes SIG Cluster Lifecycle 20180117 - kubeadm
Description
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.k21262mvpkl8
Highlights:
- Updates from CoreOS about the stability of running self-hosted etcd
- Discussion on how to handle secrets for a self-hosted apiserver
A
B
C
These are sometimes operational issues, or other issues with the nuances of static pods, where they don't have quite well-defined behavior; sometimes there's undefined behavior when they reconnect to the API server, that kind of thing. So what we have seen through that is that we have all these issues.
C
I think we have some things that we want to go explore upstream and fix, particularly around API servers connecting to etcd, and ensuring that the API server will reconnect. But while we were doing that, we sort of put a lot of our self-hosted etcd efforts on hold. So I don't know what that implies for you.
B
That's fine. We just wanted to get your take, because I don't think it's worthwhile for Jamie to continue to rebase and pursue his efforts on the existing patch; that was his main question. And I concur, because we had multiple approaches we were willing to take and were pursuing all of them, and this just eliminates one of those approaches, which actually makes our life a little simpler. So I think we should probably just continue to execute and go forward with the other approach, which is simpler in some respects.
B
A
I was going to ask a sort of follow-up question for Erica or Ryan: if we're not recommending self-hosting etcd, what are the recommended etcd deployments? Do we think that static pods are still a good way to go? Do we think that having an external etcd cluster is sort of the gold standard? What do we recommend for customers in the ideal case, and what are the other options that we think are also good?
C
I can say that CoreOS definitely recommends dedicated etcd hosts, not running through static pods. I don't know if the shuffling was sort of the issue, or whether the interaction with the checkpointing was some of the concern. I don't have the exact notes on hand about the bugs that we hit, but I can follow up with them and send them to the list.
A
C
I actually don't know if there are specific motivations for why we don't do it. I think that, generally, our masters are in auto-scaling groups whereas our etcd instances are not; we don't do dynamic scaling of etcd that way. So for our architecture it's often simpler just to have the masters be on different VMs than the etcd nodes.
C
B
There aren't any takers, which is a little sad. I was hoping that we had treaded the path, which was the hard part, but we need people to see it through the last bit. Can you mention what that is, Tim, or put a link in the notes? Today, it's determining what we want to do with secrets for the API server. We determined that only the API server is the special case of standing back up during failure conditions, and we basically need to either figure out...
D
Oh, sure, yeah. So I mean, I guess I was just wondering whether we really have to do this at the Secret level, or maybe we could, you know, have some kind of secret-to-volume sync and use volumes, and then the volume back-end could be, let's say, I don't know, some kind of encrypted volume back-end, or whatever the user chooses.
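The secret-to-volume idea above can be sketched with the Secret volume type Kubernetes already has; the proposal is about what backs that copy. A minimal sketch, with hypothetical names:

```yaml
# Minimal sketch of consuming a Secret as a file volume (Secret and pod
# names are hypothetical). Today the projected copy lives in a
# kubelet-managed tmpfs; the idea discussed here is to sync it into a
# (possibly encrypted) persistent volume back-end instead.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.9.2
    volumeMounts:
    - name: certs
      mountPath: /etc/kubernetes/pki
      readOnly: true
  volumes:
  - name: certs
    secret:
      secretName: apiserver-certs   # hypothetical Secret name
```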
B
D
You would rotate this by updates to the Secret, but the Secret would be copied into a persistent volume of some sort, and perhaps in some cases that would be an encrypted persistent volume, depending on what sort of volume providers the user has. That's the kind of thing that crossed my mind.
A
Given the sort of lifecycle of volumes, I'm not sure how that volume would get mounted, though, because the kubelet used to be responsible for attaching and mounting volumes, but that functionality got moved into the controller manager to centralize it and make it more reliable, and the kubelet has to talk to the API server to know that it should be mounting volumes. Yeah.
B
The problem is bootstrapping the API server: every kubelet needs to talk to the API server to mount the volumes, so you need to literally strip down the pod spec to almost nil for the API server, because there's some auto-mounting magic that happens for things like service account tokens. So when you have a self-hosted API server, it needs to be the bare minimum, and you'd have to put a special thing in there with regards to secrets.
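As a rough illustration (not the actual kubeadm or bootkube manifest), a stripped-down self-hosted API server pod might look like this; `automountServiceAccountToken: false` is the knob that turns off the service-account auto-mounting magic mentioned above:

```yaml
# Hedged sketch of a bare-minimum kube-apiserver pod: no service-account
# token volume (which would need a round-trip to the API server this pod
# is itself providing), and certificates read from the host disk.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  automountServiceAccountToken: false   # disable the token auto-mount
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.9.2
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    volumeMounts:
    - name: pki
      mountPath: /etc/kubernetes/pki
      readOnly: true
  volumes:
  - name: pki
    hostPath:
      path: /etc/kubernetes/pki   # host disk, no API server required
```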
D
A
Interesting. And then we could check with the storage SIG to see if they have any other ideas about volumes.
B
I came across that conundrum when I was actually just doing pod checkpointing, because there was a bunch of extra volume mounts that were added by default by the API server, and you need to explicitly turn all that stuff off in order to actually bring the pod back up. Otherwise, it'll try to immediately contact the API server, when you are the API server. So, mm-hmm.
D
Yeah, I was just thinking that if the kubelet finds a volume it had mounted before, before it talks to the API server, it can just find the stuff there. But then, that's all right. I was also wondering: if we think of this as a general use case, like secret checkpointing as a general use case, then wouldn't we like to consider ConfigMap checkpointing as well? Yeah.
B
It's a rabbit hole. You get down into the rabbit hole, which is why I kind of prefer what other people had mentioned earlier: the idea of just using host volume mounts, and if you want to do cert rotation, you have a DaemonSet pod.
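The host-mount-plus-DaemonSet idea could look roughly like this; all names here are made up for illustration. The control-plane pods keep reading certs from a hostPath, and a privileged DaemonSet pinned to the masters rewrites those files when rotating:

```yaml
# Hypothetical cert-rotation DaemonSet sketch; nothing like this ships
# in kubeadm, it only illustrates the "host volume mounts + DaemonSet"
# approach discussed above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cert-rotator              # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cert-rotator
  template:
    metadata:
      labels:
        app: cert-rotator
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # masters only
      containers:
      - name: rotator
        image: example.com/cert-rotator:v0.1   # hypothetical image
        volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki
      volumes:
      - name: pki
        hostPath:
          path: /etc/kubernetes/pki   # same files the static pods mount
```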
C
B
There's nothing that prevents us from doing the sort of privileged-DaemonSet-for-upgrades mentality versus the self-hosted one. I think the option was that we wanted to use the facilities that Kubernetes provides for doing the actual rollout of the modified control plane. Yeah, I absolutely think we can self-host the controller manager and the scheduler today, probably, if you wanted to, but the API server is the hard part.
D
B
D
B
I'm not opposed to that, but I think what we need to do is have someone investigate the state space of what was there, and investigate the potential pitfalls as best as possible, coupling all of this together and kind of driving it home with the upgrade path. So we kind of need resolution on whether or not we're going to promote self-hosting to be a first-class thing, or really self-host certain bits of it and then do the upgrade with the control plane. But those, I think, are the loose ends here.
B
Well, you'd have to checkpoint the secrets, and you'd have to patch the code in the kubelet to make sure that it doesn't try to contact the API server as part of its secret detection. Right, because there's code in the kubelet that says: "oh, I see you have this construct; I need to go check with the API server and verify that it's there."
B
We had a pivot, right? Like, if you had kubeadm, you could have a single node. If we had stories and gradations, you could probably sell this in a relatively clean way: "I have a single API server; I use DaemonSets to upgrade those configurations," or "if I go to HA, then it's actually self-hosted; I don't care, because I should be able to use the self-hosted configuration in all of them as well." Okay.
D
You know, one master node, one dedicated master machine, but it has three API servers. No, I mean, well, I was thinking too, I guess I'm just going back to what I was doing in Kubernetes Anywhere initially, where I was using kubeadm to start with, and I was running the control plane and etcd on the overlay.
B
It becomes weird. I like the idea of phases and stages, as long as the upgrade path is relatively unified for the different deployment configurations. That's my personal opinion. I know everyone wants to have a pristine world where we always use Kubernetes to manage Kubernetes, but I think it's been proven that that world is fraught with peril, and there's a version of this which is halfway in between, that has well-defined configurations for A and B.
B
D
B
The DaemonSet idea, as a default upgrade possibility, makes things a lot easier for having a well-defined upgrade path, because we have like a coupled, staged approach currently, and I think if we were to wrap the kubeadm artifact in its own container, and additionally drive the upgrades from inside the cluster, this would alleviate the major concern, because the idea of self-hosting is that it makes upgrades easier. But if we can just make upgrades easier without having to do that, then win-win. Mm-hmm.
B
C
B
C
B
D
So yeah, and as was mentioned in the previous call, one of the downsides to that is that it doesn't seem like a general enough problem, right? But yeah, I do completely understand that right now it seems like abandoning that path would be kind of silly. So having a go at checkpointing secrets, and sort of understanding the depth of the problem, would be the right place to start, and then, you know, keep weighing the other options.
B
E
And I responded to some things. Sorry, I don't mean to cut you off, Tim; did you have something? So I think I was typing. Sorry. Oh, me? Yeah, I'm going to look at separating the certificates for etcd out into a nested directory of /etc/kubernetes/pki. I just don't know what I'm going to find when I go in there, because everything else is in a flat directory right now. So we will see if that refactors nicely, but hopefully it doesn't require a code change. And then, Tim...