From YouTube: 2017-06-16 18.04.21 SIG-cluster-lifecycle 166836624

C
And I started to put all the pieces together about the three main issues, or announcements, that we are talking about — self-hosting, HA, and upgrades — and it was difficult to understand what the final user experience of kubeadm would be at the end of this journey. So I tried to put together some notes that I'll share now. So, this:

C
Second. The second part is that we are already supporting different deployment options for components. For instance, for the control plane we are supporting deployment with static manifests and also self-hosted; then, for self-hosted, we have to add the control plane recovery checkpointing, which there was no discussion about yet. And the same goes for etcd, where for sure now we are supporting static manifests, but I am guessing that we are going to support the etcd operator as well. (Yeah, yeah.)

A
So I think that this presentation, as intended, is a bit ambitious in terms of what we have. My understanding is that we are not actually focusing on HA in 1.8 — it will probably be more like 1.9 — but I may be wrong, and it may be that the work proceeds faster than we expect. And my ambition, as suggested, was for self-hosted upgrades with a single master.

C
This was the original question. I don't know, because at the end, thinking about it, many points came up to me, and I don't know if it was worth walking through all of it in the meeting; and then we have Aaron here who can give us an update on the checkpointer. So my bet is that it's better if people comment on the document offline, and I just give you a brief overview of what the document contains. (It looks really good, yeah. Okay.)

C
Because maybe you go from 1.6.6 to 1.6.7, mm-hmm, or you move to the next major release — so you do an upgrade of a minor or a major version, and so on. So I was listing the areas that, like they said, we are supporting in kubeadm. That is okay, but are we supporting upgrades for all of this matrix, at least, and so on? It is something that I think is important to define. But another point is that we already support another kind of upgrade, which is what I call the pivot, or optional, upgrade.

C
So we move from static manifests to self-hosted. It is something that we can already do now at the head of the code. And the last type of upgrade is: okay, I move from a single node to a single master with more and more workers, and so on. So those were scenarios that, for me, are nearly there, and I think it is important to understand if we are going to support them officially.

C
So it's something that we have to plan to test, and whatever. And the last part, which is really a draft: I started to graph, use case by use case, the user experience. So, for instance, I could see where the apt-get and whatever, and kubeadm init, would fit in, and then I need to pivot the master and get the workers and the API server going.

C
So this is the master — the one-master case — and how I would basically do it, and then after that comes the join. The most interesting part here is what the experience for HA will be. At least there are two scenarios. Option one is where I do the init of the first master, this single master gets self-hosted — most probably I also need the etcd operator here — and then I add the secondary masters, and then join the other worker nodes. This is one option.

C
So you get up a working single master, without the problem of the etcd issue, which is annoying many, many people; and then you can join nodes as secondary masters — so, candidate masters — and only at this point, when you have three masters joined with the networking, do you upgrade the control plane, doing the self-hosted pivot and starting the etcd operator, knowing that the nodes where the etcd servers will be are solid. And after that you can join workers. So a third style is possible too.

A
So my feedback is just: this looks like a fantastic doc, and it's great that you're thinking about this. I think that the end state that you're describing here is most likely not what we will get to in 1.8, but I think that it's good to try and hash out what these options are and what the details of them are. And that's the last I'll say — I'm really interested in knowing what everyone else on the call thinks.

D
Right now I'm not sure there's any overlap between last Friday's call and the people who are here today, but I did watch last Friday's meeting, and there was a lot of discussion about whether we wanted to say: we definitely need to support upgrades of single-node masters too, and whether that is achieved through a self-hosted mechanism or not. So I think we need to figure out what single-master cluster upgrade looks like, and it's fine if it's static manifests and that kind of thing.

A
Yeah, I was kind of hoping that we'd be able to figure out how to upgrade single-master self-hosted as well, and I'd like to understand what the issues are, so that we can figure out how to solve them.

D
One issue from the session last week: we ended with running things like the scheduler and the controller manager as daemon sets, and if you're running a daemon set on a single-node master, there's really no good way to upgrade it. I think Aaron had pointed out during a meeting on Tuesday that that's the reason they use deployments for those: then you can end up with two copies of the same binary on the same machine during that upgrade process. So —

F
The purpose of daemon sets was primarily for higher availability. The notion of self-hosting without high availability, I think, is suspect at best, and we commented about that last week too. There's this weird thing of having a self-hosted non-HA control plane, which I think is kind of awkward.

F
The only benefit it gives you is one thing, which is easier-ish upgrades, and we kind of agreed that, you know, there's this circle of features where the crossover makes more sense when you have the entire HA-slash-self-hosted picture. But is there an upgrade issue with daemon sets in highly available environments?

F
It can, but it's an extra step in the puzzle, right? Because there are three checklists of features, and they all kind of interleave, and we don't have the upgrade checklist complete. So we have a checklist for HA, we have a checklist for self-hosting, and then the upgrade one is not all there. So we started — we're starting down the road of self-hosting.

F
First, right? And I know Lucas started with some of his PRs specifically with daemon sets in mind, but there are some other issues that we're going to talk about this week, mainly the manifest checkpointer, as I like to call it now, versus an actual checkpointer. So I don't think it's a strict layering — that's my problem, right? I think there's this weird intermingling that will exist for a while, and I wouldn't really recommend it.

E
I mean, I can at least speak to some of our reasoning and our process for this. So one reason was we wanted the same path: I'm going to spin up a single-node cluster, and now I'm going to expand it to a multi-master cluster, and that's the exact same installation path. So no: "Oh, I selected single master, and I was, you know, doing work against it, and now I don't have a path forwards." That was a big one.

E
That behavior may change anyway. I'm not opposed to daemon sets — they kind of match a lot of what we want, which is these components running on master nodes; you've got a whole bunch of master nodes, it might be inefficient, but yeah. So I mean, for us it was worth it to switch to using deployments so that we could support single-master upgrades and single-master expansion, essentially.

E
We just have to figure out the behavior from the kubeadm side. The issue right now is essentially that the networking isn't installed automatically; it's this follow-up step, so stuff is just sitting there. Maybe that's just something that needs to change — like, you have to pre-select the networking, and it needs to happen as part of the initial phase.

F
There are examples that people had shown, or ideas that people had iterated on, one of which was that they would have a configuration: they'd modify the configuration file so that it captured how the control plane would be established, and then you pass that around as part of your initialization for starting up. So if you have one node, you can pre-configure your network with your configuration, and you can also pre-configure what to expect.

F
I don't know if people had thoughts on that or not. I don't know what the issue is with the logistics of upgrading and daemon sets — I walked into the middle of the conversation, so I didn't hear what the problem was.

E
I mean, essentially it's that it's different from deployments, where you're kind of expanding the number of pods and then compacting it — so if you have one copy, you're creating two and then deleting the old one. With daemon sets, it's: remove that one copy and then create a replacement. So let's say that you're upgrading the controller manager, and you have one copy.

E
Let's say that you have a controller manager deployed as a daemon set. There's a single pod running, you have a single master, and you upgrade it. It's going to do a rolling update of that daemon set, and its first step is to delete that controller manager — and now there is nothing that actually creates the new controller manager.

E
Yeah, so that's what we do with deployments: we just say you must have, like, a minimum number available. To actually do that with daemon sets, you would have to run two daemon sets, which was discussed internally, and I'm not a big fan of that, because it's just a huge amount of overhead and complexity just to work around behavioral issues, yeah.

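A minimal sketch, in Go, of the two rollout semantics contrasted above, using the API types from k8s.io/api (the field values are illustrative, not something decided in the meeting): a Deployment can surge a second copy before deleting the old one, while a DaemonSet rolling update deletes first and creates second.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        zero := intstr.FromInt(0)
        one := intstr.FromInt(1)

        // Deployment: surge a new pod (maxSurge: 1) before removing the old
        // one (maxUnavailable: 0). Two copies briefly coexist, so a single
        // master keeps a running controller manager through the upgrade.
        deployStrategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &zero,
                MaxSurge:       &one,
            },
        }

        // DaemonSet: at most one pod per node, so the rolling update must
        // delete the old pod first (maxUnavailable: 1) and only then create
        // its successor. On a single self-hosted master that leaves a window
        // with no controller manager running -- and nothing left to create
        // the new one.
        dsStrategy := appsv1.DaemonSetUpdateStrategy{
            Type: appsv1.RollingUpdateDaemonSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDaemonSet{
                MaxUnavailable: &one,
            },
        }

        fmt.Printf("deployment strategy: %+v\n", deployStrategy)
        fmt.Printf("daemon set strategy: %+v\n", dsStrategy)
    }
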
F
I still cringe at the single-host — the single-master — self-hosted case. It boggles me in many ways, because it's a corner condition that causes a bunch of weird behavioral things that you have to take into account, versus just going with manifests. Now, I understand the upgrade constraint, because you want to have a unified way of upgrading from a single master, but you could abstract that away, right? I actually —

A
I wouldn't call it a corner case, and the reason that I say that is, as Aaron said, it can be part of the way that you spin up the multi-node cluster in the beginning — you go through that stage. But also, plenty of people — I speak to a lot of people who are using kubeadm — are using it with single masters. They might be happy with that, or they may not be.

A
And my concern is just that I don't want to have two paths. I want to have one which is the same whether you're running single-master or multi-master, and upgrades should be the same in both cases, ideally, I think. I'm just thinking about this from a user experience perspective — that's my view.

A
Please do, yeah — please make notes in the meeting doc as usual. And the other thing, sort of an overall, very take-a-step-back comment: I kind of just want to say that I feel like Aaron — Aaron and CoreOS — have already figured out how to do this, so maybe we should just listen to their way of doing it and do it their way.

E
Well, I mean, all of these discussions are ones we still have internally. So the daemon set, again, kind of makes sense in a way: you're saying these are the components that I always want to be running on, you know, the X number of masters that I have. So if we're in the —

E
— and we say there's only two running, and then you expand to five masters, something has to, you know, expand that to five, whereas a daemon set just naturally would. So there's part of that that makes sense in some cases. If you had 50 masters — I don't know why you would, but — you actually probably wouldn't want to be doing that. So this is open-ended.

E
We went this route right now because that's kind of what works today without any new changes, and, you know, there are some difficulties there, but essentially it works. The one thing I wanted to point out — because I've seen this discussion can be a little bit confusing — is that we don't actually checkpoint the scheduler or the controller manager. We only checkpoint the API server, and etcd if it's self-hosted.

E
The reason for this is the behavior of the checkpointer — which is something else I think we need to discuss, like what the scope is and what we want to do — is that it actually implements garbage collection. So it essentially says: if the parent pod that I was checkpointing is no longer scheduled to this, my local node, I should clean up all of those checkpoints. So it doesn't actually help us in the single-master case if we checkpointed the scheduler and controller manager in a daemon set, because if they're deleted — which is accurate, that pod has been removed — the checkpointer garbage-collects those local checkpoints. This is more in line, in my mind, with how the kubelet would actually behave: if there is no longer a parent that is supposed to be running, it goes away.

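A toy sketch of that garbage-collection rule — the types and names here are hypothetical, not the real checkpointer code; the point is only that a checkpoint lives exactly as long as its parent pod is scheduled to the local node:

    package main

    import "fmt"

    // Checkpoint is a locally persisted copy of a pod manifest.
    type Checkpoint struct {
        ParentPod string // the pod this checkpoint shadows, e.g. the API server
    }

    // gc drops every checkpoint whose parent pod is no longer scheduled to
    // this node, mirroring how the kubelet itself behaves: no parent that is
    // supposed to be running, no pod.
    func gc(checkpoints map[string]Checkpoint, scheduledHere map[string]bool) {
        for name, cp := range checkpoints {
            if !scheduledHere[cp.ParentPod] {
                fmt.Printf("removing %s: parent %s no longer scheduled here\n",
                    name, cp.ParentPod)
                delete(checkpoints, name) // safe while ranging in Go
            }
        }
    }

    func main() {
        cps := map[string]Checkpoint{
            "kube-apiserver.json": {ParentPod: "kube-apiserver-master0"},
        }
        gc(cps, map[string]bool{}) // parent gone -> checkpoint is collected
    }
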
D
Also, really, we're not locked into any implementation, right? You could imagine that when upgrading from 1.8.0 to 1.9.0 you switch whatever had chosen daemon sets over to deployments, or vice versa. So, you know, there may be some benefits to doing what is prudent and works today, and we're not locked into that forever. And it sounds like that's sort of what CoreOS did, right? They used the thing that worked today for their use cases, which included single-node self-hosted, but that doesn't prevent them from switching later.

D
You could get that differently — it's the order, right: do you create then delete, or delete then create? So I think, if that's something we want to ask for, we should ask for it now, before the SIGs and teams sort of lock in what they're working on. If that's a requirement for us, we really should ask for it, I mean.

F
If you didn't have it — I mean, the whole reason why you need checkpointing is if you have a single master that is self-hosted, right? That case forces checkpointing; you have to have it, because otherwise you reboot and you're toast. I'm not opposed to it, but I think what we need to do is come up with the order of how we want it to behave, and that will flush out the checklist of execution.

A
That seems reasonable, yeah. And I guess I'm coming at this very much from a user experience perspective — like, what commands do you type — like Fabrizio's diagram. Maybe we should spend some time writing up what we think the user would have to type in order to achieve things, and then evaluate it and say: is this good or not? Do we force the user to make a decision about single master versus multi-master early on? Are we okay with that? That sort of thing.

E
I don't think that it's necessarily — I don't know if a call can be made now, but one thing I do want to discuss before we get off: plans around checkpointing, if we can get to it at some point. (Yeah, please go for it.) So yeah, part of this is: what are the requirements from the kubeadm side? And then, is it possible that we all work on the same thing, or is it something where we kind of need to fork behaviors and work on different things — me and my colleague Shea — oh well, hi!

E
One
thing
we
were
thinking
about
was
just
moving
it
out
of
goo
coop
and
into
an
incubator
projects
just
to
kind
of
D
couple
its
life
cycles
or
conclude
itself,
if
it's
in
fact
generally
like
generally
useful,
but
there
may
be
I,
do
scope,
changes,
or
maybe
things
like
changing
the
annotations
from
Chora,
less
specific,
so
wanted
to
discuss
that
and
the
needs
of
Kubb
admin
and
maybe
Tim's
thoughts.
Now
that
you've
had
a
chance
to
dig
into
it
and
then
kind
of
rolling
further
into
that
area.
F
I saw it — I even have notes where I said that to myself. The one benefit of having it external to the kubelet is that you can version it independently, especially if it's somehow containerized again; that could be your single-manifest bootstrapping startup thing, right? The one thing that struck me was that you are also checkpointing secrets, and that's because of how you're storing certs, right — which goes back into a whole bunch of chicken-and-egg problems.

F
So
this
is
one
of
them
right
was:
where
exactly
are
we
going
to
start
store
the
search
for
a
self-hosted
cluster,
because
there
are
security,
questions
and
constraints
about
storing
them
in
the
food
system,
namespace
right
and
if
you
don't
have
those
certs
on
startup
for
a
self-hosted
control
plane.
This
is
one
of
the
constraints,
no
we'll
start
to
get
into
big
into
this.
If
you
don't
have
those
certs
available
as
a
checkpoint,
you're
you're
toast
right.
F
So
this
is
why
I
was
bristling
a
little
bit
earlier
right
because
there's
a
bunch
of
interesting
constraints
that
occur
right,
I,
don't
know
if
Eric
has
thought
about
this
a
bit
more,
whether
or
not
he
thought
about
the
CDR
case,
or
you
know,
if
you
have
a
double
secret
namespace
that
that
you
know
this
other
can
no
one
else
can
see.
But
like
this
one
thing,
I
have
there
been
any
more
talks
on
sig
off.
There's.
E
Not
been
any
specific
talks
around
this,
this
particular
you
state
but
yeah
I
think
that
the
main
concern
was
the
number
of
things
that
try
to
reach
some
food
system.
It
was
also
just
like
I
think
that
there's
also
so
in
terms
of
writing
the
disk,
the
couplet
and
get
approval
process
they'd
already
rights
if
own
certificates
that
this,
even
if
you
use
the
CSR
endpoint
I,
think
you
know
the
awkward
alternatives
like
using
system
namespace
or
the
coop
secret
namespace
would
be
fine.
F
A
part
of
this,
to
the
reason
why
I
think
it
would
be
difficult
to
get
the
check
pointer
into
the
couplets
with
its
current
way
it
does.
It
is
that
it
has
an
opinionated
model
with
secrets,
and
it
would
only
be
for
a
bootstrap
condition
right,
you'd,
be
putting
in
something
into
the
Kubla
code
that
exists
only
for
bootstrapping
in
a
very
particular
way
right.
So.
E
We
have
the
main
reason
that
we
that
I
personally
want
to
see
this
not
has
an
external
processes,
because
we
had
conflicts
with
Google
authorization,
and
so
when
the
our
checkpoint
uses
the
couplet
API
in
order
to
yes
to
get
the
missus
daya
and
the
couplet
tulips
can
do
just
indication
where
it
says.
If
you
have
credentials
and
I'll,
let
you
through
any
of
my
AP
is,
but
they
also
have
a
village
h2
authorization.
E
So you can say: Prometheus can read the health endpoint, but other processes can't, or they can't exec into a pod. That authorization requires an API server to be alive, and so there begins the bootstrapping problem: the kubelet will ask the API server for authorization decisions, but if there's no API server at boot, the checkpointer can't access that API. So the idea of baking checkpointing into the kubelet is something that would help us get around this.

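For concreteness, a sketch of the query behind that chicken-and-egg: kubelet webhook authorization is answered by posting a SubjectAccessReview to the API server, so with no API server up, no call to the kubelet API (including the checkpointer's) can be authorized. The types are from k8s.io/api; the user and attributes below are made up for illustration.

    package main

    import (
        "fmt"

        authzv1 "k8s.io/api/authorization/v1"
    )

    func main() {
        // The kubelet cannot answer this on its own; it has to send the
        // review to a live API server. At boot, with a self-hosted API
        // server still down, the request can never be authorized.
        review := authzv1.SubjectAccessReview{
            Spec: authzv1.SubjectAccessReviewSpec{
                User: "system:serviceaccount:kube-system:pod-checkpointer", // hypothetical caller
                ResourceAttributes: &authzv1.ResourceAttributes{
                    Verb:        "get",
                    Resource:    "nodes",
                    Subresource: "proxy", // roughly models kubelet API access
                },
            },
        }
        fmt.Printf("would ask the API server: %+v\n", review.Spec)
    }
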
F
Dawn had said — we had coerced her in a corner — that she would be willing to entertain this for the 1.8 cycle. But I do think we'd probably then need to do a formal proposal and have all our i's dotted and t's crossed for how the bootstrapping sequence would go, and this would basically mean there'd be a lot fewer opinions, right — they're going to be distilling how self-hosting and bootstrapping would occur, I mean.

F
— local state for self-hosted, because it can't — like, I was thinking about those two. You can't really recover, and you don't want to, for other workloads in the non-self-hosted case, right? If you actually had workloads on a standard kubelet that's not a master, you don't necessarily want to recover them, because of the time delay from when you rebooted or whatnot — you'd want to check in with the API server before you try to recover. You absolutely need to do that. I don't —

E
I don't know that that's actually — look, consider the case. Right now, today, the kubelet, if it is restarted itself, is going to recover as much state as it can from the container runtime. So it's going to look at Docker and be like: oh look, there's a bunch of these pods, and they're supposed to keep running, so I'm going to make sure that they do. Yep.

E
So
the
only
difference
is
we're
saying
that
that
local
state
it's
it's
essentially
its
check,
burning
right
now
is
docker
state,
but
during
a
reboot
that
state
is
now
gone,
and
so
it
has
nothing
in
that
it
can
recover.
So
in
my
mind,
it's
not
that
different
from
restarting
with
Google
to
restarting
the
server.
It's
just
that
the
state
that
we're
relying
on
isn't
docker
anymore,
the
stage
don't
work.
Why
not?
Is
it
here's.
F
Why
I
disagree
because
you
don't
know
if
it's
a
controlled,
reboot
or
not,
and
the
only
way
you
can
do
that
you
don't
know
if
you're
in
maintenance
mode
right,
so
the
only
way
to
do
that
is
to
check
back
in
with
the
API
server.
So
if
you
start
rerunning
workloads
intimated
that
may
have
already
recovered
from
the
failure
condition
right
and
there
you
could
potentially
have
like
there
can
be
only
one
conditions
right.
F
So
if
you,
if
you
recovered
and
started
working
and
brought
these
things
back
up
online
before
checking
into
the
API
server
to
say,
is
it
okay
for
me
to
do
that?
Get
a
what
type
of
scenario
am
I
in?
Am
I
in
a
maintenance
mode
and
I
need
to
recover
my
states,
you
know
or
am
I
in
this,
like
it
was
a
catastrophic
failure,
condition
in
which
case
something
else
recovered
on
my
system
and
now
the
I
can
there
can
be
only
one
pod
is
now
alive
somewhere
else
right.
D
I think the assumption might be that if Docker is still running things and the kubelet restarted, the time delta before it checks in with the API server is probably less. I could see: power off a machine, wait two days, turn it back on — and you'd presume that those workloads are now running somewhere else if your cluster is functioning properly. And the kubelet has no notion of how long it's been, right?

E
Hypothetically, if we were to try and move this into the kubelet, it could be something where we restrict the workloads that we do this for — so, something like an annotation on the workloads — and then additionally start having concepts of, you know: how long is this data actually valid for? Yes.

D
That's what I was going to suggest: why don't we just have an alpha annotation on the control plane manifests, and the kubelet code could say: if there's this alpha annotation, I will checkpoint the manifest, and for everybody else I won't. That basically keeps the system behavior the same in all cases, except for the control plane things that we know we want to checkpoint; everything else is left alone.

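A minimal sketch of that opt-in rule; the annotation key below is invented for illustration, since no key was agreed on in the meeting:

    package main

    import "fmt"

    // Hypothetical alpha annotation; not a real, defined key.
    const checkpointAnnotation = "alpha.kubelet.kubernetes.io/checkpoint"

    // shouldCheckpoint gates checkpointing on an explicit annotation, so the
    // kubelet's behavior stays unchanged for every pod except control-plane
    // components that opt in.
    func shouldCheckpoint(annotations map[string]string) bool {
        return annotations[checkpointAnnotation] == "true"
    }

    func main() {
        controlPlane := map[string]string{checkpointAnnotation: "true"}
        ordinary := map[string]string{}
        fmt.Println(shouldCheckpoint(controlPlane)) // true
        fmt.Println(shouldCheckpoint(ordinary))     // false
    }
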
E
So I would heavily prefer that this be implemented in the kubelet, because doing it externally just becomes very — I mean, you've seen the checkpointer, it's pretty gross. I just, you know, bounce around on this. Well, there's some complexity here around: is it okay if we checkpoint secrets or not? Or things like, you know, just throwing files on disk and having the kubelet read those and then try to persist them to the API — which, in prior discussions, people were very unhappy about.

E
That
kind
of
direction
is
for
systems.
There's
a
lot
of
stuff
that
we
have
to
work
out.
But
if
we
were
going
you're
like
if
cube
admin
is
going
to
go
down
the
route
of
rewriting
their
own
style
of
check
for
taking
it,
we
have
to
continue
supporting
a
different
style
check,
training
and
then
no
one's
working
on
the
Google.
It
I
feel
like
we're
all
going
to
lose
out
that
I'm
just
biting
the
bullet,
maybe
just
getting
a
proposal
in
place
at
least
I
know
in
something
we
want
or
why
I
think.
D
And we make it an alpha annotation; we turn it, you know, maybe off by default for a release cycle. We can turn it on for kubeadm then, so that we can start using it, but then it doesn't impact everybody across the board. And if it's something that's really great, we can promote it from an annotation to a field later, right — make it part of the official object.

D
I think one thing we didn't agree on — to step back a couple of bits — was Lucas's proposal for having a consistent experience of self-hosted on one node versus many. Maybe I lost my connection while I was trying to get power, but I didn't hear us come to a conclusion on whether we were aiming to support the same experience on one versus many master nodes. I think Tim was dissenting on that. Maybe we can alleviate those concerns by saying we are going to shoot for the moon for 1.8, maybe.

F
If we push it into the mainline, and we have this mechanism by which to opt in for the masters, I'm not opposed to it, because that way it's standard, right — everyone's going to agree upon the same practices, and I'm okay with that. But there's going to be some opinionated thing about tokens that we're going to have to deal with, right? Yeah.

D
I
think
at
a
high
level,
one
of
the
goals
of
cluster
life
cycle
and
and
and
what
we're
pushing
with
cube
admin
is
to
try
and
standardize
some
of
these
processes
so
that
everybody
isn't
reimplemented
at
their
own
way
that
the
core
mainline
code
has
the
features
we
need
for
bootstrapping.
I
think
this
is
one
of
those
features
that
people
have
always
sort
of
known
when
you
didn't
and
nobody's
really
sure
how
to
put
in
yet.
F
I think that's a reasonable thing to say for the cycle. I think there's going to be some pushback from the node team, and we're going to have to deal with the concerns that are there. But what I'd like to do is — Lucas has a checklist for self-hosted, and there's a parent issue for that. If we can modify that one and I —

F
— vacation, it's killing me — but if we can modify that checklist to include the portions of checkpointing that we want to have in place, I think, you know, we can execute against those pieces.

D
Yeah
I
think
one
name.
One
important
thing
to
come
out
of
discussion
is
what
we
need
to
ask.
Other
people
for
I've
heard
two
things
so
far.
One
is,
we
need
the
know,
team
to
agree
to
some
sort
of
check,
pointing
I
think
we
need
to
give
them
a
puzzle
soon.
Don
verbally
agreed
during
the
leadership
off-site
effectively
saying.
D
They need to agree that it is a reasonable thing to implement, and that after that we can put it on their roadmap and ask them nicely to do it, or ask them for review cycles, right? If we just do it but haven't got an agreement for at least review cycles, then there's a high risk it's not going to make it.

D
I'll
know
it's:
it's
Erik
tune.
Okay,
yeah
Mike
no
longer
runs
Damon
sets,
unfortunately,
that
would
make
it
a
lot
easier.
Erik
and
Janet
from
the
Google
side
and.
E
And then: does anyone have problems, conceptually, with checkpointing secrets? Because not being able to is kind of a non-starter for us. My only initial thought would be something like: maybe encrypt them with, like, kubelet key material or something — because it already exists, and the kubelet can retrieve it from the API anyway — if you're worried about them being at rest or something like that. But for us, we pretty much require that we be able to checkpoint them.

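A toy sketch of the encrypt-at-rest idea floated here — nothing like this exists in the kubelet; the random 32-byte key simply stands in for whatever kubelet key material would be used:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
        "io"
    )

    // sealSecret encrypts a secret payload with AES-GCM under a 32-byte key
    // before the checkpoint is written to disk.
    func sealSecret(key, plaintext []byte) ([]byte, error) {
        block, err := aes.NewCipher(key)
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            return nil, err
        }
        // Prepend the nonce so the checkpoint file is self-describing.
        return gcm.Seal(nonce, nonce, plaintext, nil), nil
    }

    func main() {
        key := make([]byte, 32) // stand-in for kubelet key material
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            panic(err)
        }
        sealed, err := sealSecret(key, []byte("kube-apiserver serving cert"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("sealed checkpoint: %d bytes\n", len(sealed))
    }
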
E
Unless you expect them to be on the host itself as part of your on-host configuration — which is what we're trying not to do — I don't see that big of a difference between checkpointing the data on the host or provisioning it there up front. I do see the difference in terms of retrieving them via the kubelet API, but one of the concerns around secrets in the API was exactly about putting them, writing them, on disk anyway. So I think that —

A
Okay, well, we're about out of time, I'd say.

F
I can work with you guys, because you have opinions on how — I mean, I think what we should probably do first is just have a single document as a template, and we can modify portions of it together, and then maybe work within the SIG before we promote it up and talk to Dawn, just so that we're on the same page. Does that seem reasonable?

D
That's
right:
some
of
our
proposal
wouldn't
like
actually
propose
it
was
to
create
the
proposal,
so
we
can
agree
before
we
propose
it
right.
We
need
to
agree
first
before
we
start
talking
the
other
six
cuz.
What
we
need
to
create
that
skeleton
document
and
we
need
to
have
some
one
sort
of
fill
it
in
with
default
information
before
we
all
jump
on
top
of
it.
So
if
someone
needs
to
create
the
doc
and
start
filling
it
in
and
that's
what
we
need,
we
need
that
then
a
pillow.
You
can
start
together,
I
couldn't.
A
Cool. Well, thank you everyone — this feels like it was a very constructive meeting.

A
I guess the only problem that I feel like we have is a small intersection of people between these weekly meetings, because it's hard to drive consensus forwards when you have a random subset of people in each meeting. But hopefully the people who were here today can try and come next week as well, and we can make sure we add the people who weren't, so we can all be part of the consensus and drive this forward. Anything else? Awesome — thanks so much everyone, have a great weekend!