Description
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.fmpkgj4c8u4h
The agenda for the meeting was a deep dive into the checkpointing proposal, with representatives present from sig-auth and sig-node in addition to sig-cluster-lifecycle.
A
B
C
Yeah, I'll go to my position. Right now, I know the freeze is coming up and we want to get some checkpointing in, just doing pods right now. It does give me a couple of concerns, though. My biggest concern is: if we're just blanketly leaving out the other objects, they end up completely undiscussed.
C
Does that mean that either the design won't support them, or it's just something that we're pretending we're never going to do? So if we leave it out of the proposal, I get it, like, to try and move the needle forward. I just have serious concerns, because this kind of pattern in the past has led to "we left it out of the proposal, and the way that we designed it you can't do this sort of scenario," and that would really hurt if that were the end result.
B
I don't think there's anything that precludes us from moving forwards. I do think there's some conflict around secrets, but I'm not going to partake in that argument; I'll let other people partake in it, because I don't have strong opinions on it. The config maps, I think, are non-controversial, and everyone agrees to that. So I think the only thing that is quote-unquote controversial in some respects is regarding secrets, and I think we have enough quorum here to potentially have that discussion. So that's...
C
Real quick, on networking: is anything network-related never going to be included in this, like network policy, resources that networking plugins need, services, endpoints? Is the assumption here that these checkpointed pods use host networking?

Actually, I think that's a pretty good point. I think one day, for something like kube-proxy, it might be nice for a node to come back up with the existing state of service routes. But, I mean, that's a big question.
C
What I'm getting at is: if the networking plugin doesn't come back up on the node, you're not going to get a pod IP unless you're using host networking. So I just want to be really clear that we're saying you must use host networking for this to be useful, unless you have a network provider that is completely independent of the nodes themselves, right. And, I mean, that really narrows the scope in which this is going to be useful.
C
Part of this is going to be incredibly small: host-networking pods that don't require mounts, that don't require secrets. So essentially that is the only use case that this is actually solving, and potentially they can't use flex volumes, can't use service accounts, can't use... I'm just trying to think; there was one more that came up in the discussion, a host-level resource I'm blanking on right now. But if that's the scope of this, then this is not a general-purpose solution, right. This is very, very targeted.
C
B
For now it is, but the mechanism that we're employing is general. The potential objects and resources that could be checkpointed are hidden behind the interface, and the kubelet itself only deals with checkpoints. So the mechanism is generic: you can hide anything underneath it, you know.
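The generic interface described here, where the kubelet only sees opaque checkpoints and the concrete objects are hidden behind it, can be sketched roughly as follows. This is a minimal illustration only; the class, method names, and on-disk layout are hypothetical, not the actual kubelet API:

```python
import json
import os
import tempfile


class CheckpointStore:
    """Hypothetical sketch: the kubelet-facing side sees only opaque
    (kind, name) keys and JSON blobs; what gets checkpointed is hidden
    behind this interface."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, kind, name):
        return os.path.join(self.root, f"{kind}-{name}.json")

    def save(self, kind, name, obj):
        # Write to a temp file, then rename atomically, so a crash
        # mid-write never leaves a torn checkpoint on disk.
        fd, tmp = tempfile.mkstemp(dir=self.root)
        with os.fdopen(fd, "w") as f:
            json.dump(obj, f)
        os.replace(tmp, self._path(kind, name))

    def load(self, kind, name):
        with open(self._path(kind, name)) as f:
            return json.load(f)

    def list(self, kind):
        # Recover the names of all checkpoints of a given kind.
        prefix, suffix = kind + "-", ".json"
        return sorted(
            n[len(prefix):-len(suffix)]
            for n in os.listdir(self.root)
            if n.startswith(prefix) and n.endswith(suffix)
        )
```

The point of the abstraction is exactly what's said above: pods today, but anything could sit underneath the same save/load/list surface later.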
C
I think our assumption is that we're removing everything out of the kubelet that we possibly can, CNI, etcetera, so, like, a CNI plugin, any container-native storage, and so on. Is a general-purpose mechanism going to be calling through the kubelet to get things from the master? Or do those not need it, and is this only checkpointing a couple of things?
B
C
I just wanted to be really clear that this is only for the control plane. It has nothing to do with application checkpointing; no one should use this who isn't specifically doing that, and we have no plan... Like, that's what I've asked: what's the transition plan? If we're going to say that this is just for this, let's really clearly say we do not have a...
B
C
...checkpointing of an application pod; this is checkpointing of control plane pods. I'd like us to be really clear about that, because this is, like, the first time we've kind of had this discussion about checkpointing and come to actually make the statement: we will not do checkpointing for real application workloads, just for this specific control plane self-hosting pivot.
A
Yeah, Clayton, I think what happened is we convinced ourselves that we could add checkpointing for this specific purpose without having to get into the details of designing a general-purpose checkpointing mechanism, which is basically what people have been pushing back on and putting off for a long time as very difficult. And so we need to solve a specific use case here, and we have a very targeted checkpointing implementation that solves that use case, which possibly could be extended to be more generalized.
A
C
With that, I would recommend that we try to not call it checkpointing in a generic sense, so that nobody else is ever confused. It is "control plane self-hosting pivot" or whatever, even in terms of how it's phrased, because I think for anybody outside who's familiar with the checkpointing discussion, it would be confusing.
B
C
B
C
I mean, I don't... and unfortunately I don't think Eric, or anyone else from sig-auth, is here, but one of his comments was that we probably have to bring this up in the sig-auth meeting if we want to get a good discussion around it. But my understanding is: because it's the user's decision to use secrets or not, I don't see how this is contentious at all, and ultimately your secret or CRD or anything is going to end up on disk, and I...
C
I just can't get past that in my mind. Like, I know you're putting it on disk, or there's this idea of it being put on disk, but the secret is already being put on disk. The contention is locking down the secret in the API, or the secret in transport; it's not once it's on the node. Whether it's stored on disk is the same no matter what we do; in any of these cases it's ending up like this. I just can't get past that, so I might not know where the actual contention is.
C
I think the question would be somewhat general. Sig-auth owns, at least to some degree, the general security posture of the cluster. It would be ensuring that, when that pod has moved off that node, those secrets go away and stay gone; because essentially, if those remain, you're basically just giving a bunch of nodes root access to the cluster, and so you're making the choice to not have a strong security posture in the first place, which should be an opt-in choice.
C
The moment you do that, though, making sure that you clean it up very strongly afterwards is, I think, the biggest part of the ask. Even if it's opt-in, you're still... in a document that we would write about how to run a secure cluster, we would probably have to say: don't use this mechanism unless you limit it to a set of nodes that don't run other workloads.

I don't feel that's fundamentally different from saying: bring up master nodes with these assets already on disk.
C
If that node is compromised, or you want to add masters, or you want to repurpose a node, it's the same story: you'd better clean up all of those assets that are on disk. I think it's actually easier for the kubelet to do garbage collection based on checkpointing than it is to document "hey, you'd better be sure to go wipe the disk of that master node if you want to repurpose it." I mean, in the sense of today, the security, like...
C
If you set up a zoned security model where you have masters and nodes, then you have a set of machines that can have that secret. It's about making sure that, if someone makes the choice to use this mechanism and they're using it on more than master nodes, it's really clear that they're doing so, and whether they're just using it on master nodes. This mechanism is usually positioned as something that's better than setting up master nodes and managing them through other config, so the parallels should be drawn out.
C
That, I think, addresses most of the general security posture concern. That would be the point: you're putting secrets on disk, right, but the secrets would be there already. As long as it's clear in the proposal how to prevent those secrets from being left behind, as well as controlling where they go...
C
...then that's probably the bulk of the challenge there.

Yeah, I mean, I would agree with that. So in our case, we're only actually deploying the checkpointer to master nodes, and the pod checkpointer is only checkpointing those secrets to those master nodes anyway; and it does garbage collection of the secrets if those pods, for whatever reason, are moved off of the node.
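The garbage-collection rule described here, where a checkpointed secret is wiped once no pod on the node references it anymore, can be sketched as a simple set computation. A minimal illustration under assumed data shapes (the dict/set structure below is hypothetical, not the checkpointer's real state):

```python
def garbage_collect(checkpointed_secrets, pods_on_node):
    """Sketch of the GC rule from the discussion: a checkpointed secret
    survives only while some pod still on this node references it;
    everything else should be deleted from local disk.

    checkpointed_secrets: dict mapping secret name -> set of referencing pod names
    pods_on_node: set of pod names the kubelet is currently running
    Returns (kept, removed) as sets of secret names.
    """
    kept, removed = set(), set()
    for secret, referencing_pods in checkpointed_secrets.items():
        if referencing_pods & pods_on_node:
            kept.add(secret)
        else:
            # The referencing pods moved off the node: wipe their secrets
            # so they don't linger on disk.
            removed.add(secret)
    return kept, removed
```

This is the "clean it up very strongly afterwards" requirement from earlier in the discussion, expressed as code rather than as operator documentation.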
C
D
There are other things, like that kind of zoned deployment where you are hosting the Kubernetes control plane inside the cluster but on a separate set of nodes; and that kind of zoning is important even if you're not self-hosted, because you still don't want arbitrary workloads running on the master nodes. Yeah.
C
That is probably going to be a much more formal requirement, or formal recommendation, going forward. It's kind of weak right now, because, like, we don't even have limits on what pods can do. So the argument is: as those become defaults, or we try to put those into default security postures for deployments, that'll become much more reasonable.
C
And to be fair, there are so many things that you can escalate to root through at this point, in most clusters that get deployed, that this is an odd one to say we're not going to thread the needle on. I get that we should, you know, do this across the board, but a private key existing on disk feels like a strange one to draw the line on.
E
C
But the other option here is just saying that you have to put them on disk anyway. I get the idea of locking them down in the API, and being concerned about them in transport, maybe even for config maps; but if you're putting them on disk anyway... No, that's the part that I'm having a hard time seeing: like, where the disagreement is, I guess.
A
I think maybe the difference is more in semantics, because you are writing certificates to disk right now, but you're not writing secrets to disk. They're not taking Kubernetes API objects and persisting them to disk, which sort of changes the contract people have with that API object. Like, yes, in practice they're the same thing (they're both a certificate), and it's just maybe a question of delivery mechanism.
E
The certificates that the kubelet is using for its own purposes: it has ownership of them, it knows the content of those and what they're for, and it is responsible for them. Arbitrary secrets used for arbitrary pods: the kubelet does not know what is in those, and should not assume that it can write them to disk.
D
E
I'm not sure about the checkpointing... how does a pod reference a CRD, I guess, is the question. If you're checkpointing all the resources required to start up a pod, and one of those is going to be a control plane thing, like, pods don't reference those today, and so you're kind of adding new semantics onto pods. I'm having a hard time adjusting to that.
D
F
F
But that's... I wouldn't say that's relevant.
C
Yeah, I was going to say, the nerdy detail is that it won't be encrypted at rest; but then I was like, well, you're not even really going to benefit from encryption at rest, because you're going to be spraying the decryption keys around the cluster anyway, or onto the nodes, technically. So that kind of got me thinking along this line, which is: the problem is we've never really defined what an actually secure Kubernetes deployment looks like, in concrete writing. My gut, just based on everything we've discussed and what I know of that, is:
C
We would never recommend bootstrapping if you want to build a truly secure cluster. Like, at the extreme, right, there's a left end of the spectrum and a right end of the spectrum. On the left-hand end, for security, you're going to put etcd on separate machines; you're going to give a credential to the masters; the credential on the masters allows them to access etcd; it has the keys to consume secrets or whatever; and you don't run a node on the masters, straight up, like, no chance of control plane confusion.
C
You then have a connection between the masters and the rest of the cluster, and you never correlate those, and how the masters are set up is really just a different system. So on that far-left example, nothing we're talking about here is anywhere near it; if you want that kind of security, you wouldn't run a cluster set up this way. If you step back from that, anything where you start running things on the same node as the master, today, you've straight up given up all that security...
C
...if you follow this mechanism. And so the threat model for this kind of cluster is that you're not actually worried about people gaining access to these root-level secrets. That feels kind of harsh, and I'm not trying to be harsh about it, but that's what I worry about when we go down all the bootstrapping and self-hosting flows: we're basically saying we're getting the benefits of running Kubernetes without putting in place all the guide rails that would let us actually make this secure. That's... okay, it's a weak...
C
C
...posture. What I'd want is that things are isolated in some way, and that they're only able to access the features that they need to access and nothing else. It's not crazy to think you could isolate things in those ways; right now we're just not doing a great job of it. Oh yeah, I think it's kind of the vagueness, which is: if somebody wants to make progress on the proposal, we're basically saying we don't know enough about how to go do that securely.
C
So we don't actually know whether this could be made secure, and I think that's, like, some of Jordan's comments and some of my comments: with all the putting secrets on disk, moving things around without the node isolation, whatever, we really don't know enough to be able to say. Like, we suspect; but we would be going down a path where we're not actually positive that this approach will end up with us being able to graduate to a secure mechanism. If that's a conscious choice, that's fine.
C
It just feels like it should be a conscious statement that we make: that we're saying we're willing to trade security, and a security profile, and the confidence that we can make it truly secure in the future, to get convenience now. And it probably needs to be sold that way too, right. Like, I'm not trying to be harsh about it; I just, again, think this is the tension of usability versus security, and we're turning the needle over to one side.
C
E
E
I'm still trying to understand kind of where the proposal falls between a static manifest, which only requires locally available resources, and kind of recreating everything the API server would have provided after a cold start. It seems like once you try to run pods that you got from the API server, after a cold start you're just kind of opening the door to a thousand bug reports of "oh, my pod depended on this resource," and, you know, PVs weren't checkpointed and PVCs weren't checkpointed and config maps weren't checkpointed, and, like, the network plug-in that is supposed to be running, that needs its information based on all the services in the cluster, that's not running. And I have a hard time seeing how doing it generically is going to be able to be done kind of piecemeal.

But if the specific goal is to let us start a cluster with static pods, and then bring up infrastructure-managed pods to kind of replicate that in an HA environment, and have the cluster be able to recover from that after a cold start, it seems like we could do things in those static pods... that could stand down the static pods as long as the infrastructure-managed ones were still available. And then, after a cold start, you know, the static pod has an init container that says: is the API server available? Cool, I don't need to run. If not, then I'm going to start back up my static API server, like I...
G
E
Once you kind of drop down to that level, you're back in the realm of, you know, the kubelet is managing its secrets and certificates, and it knows it's OK to persist them locally with root-only read permission. If the static pod it's running is the API server, it needs its config and its serving certs and things like that, and, you know, I think it has the freedom to persist those locally with root-only permission.

You know, because at that point you're in the application domain, and it knows what those secrets are and what the implications of persisting them are. Something like a static pod that could be cognizant of config changes and certificate changes, and keep those replicated locally so that it could recover after a cold start, seems interesting to me. I didn't know if that had been explored at all, I mean.
C
This is fundamentally what we do right now. We have, like, the external checkpointer, where it's essentially shuttling static pods around on disk: if the replacement pods are running the replacement API server, you don't need to run anymore, so remove that static pod. That stuff could be moved into the kubelet. It works today; it's just kind of gross, because you're trying to determine local state from the kubelet in not very reliable ways. We could open up new APIs in the kubelet to make it easier, but, I mean, it works. It's just...
C
E
I think my concern is that, with the checkpointing, the reasons, the motivations for it, are really different depending on the use case, right. Checkpointing in service of sort of recovering a self-hosted cluster looks really different than checkpointing to maintain existing running state after a kubelet restart. So, like, the self-hosting aspect, I think, is really useful.
B
B
B
I think the clarity is just to document it as such, because I think the confusion lies in the name, as Clayton pointed out; apart from anything else, the word checkpointing has many connotations, right, and when we say it, it can be ambiguous. But if we call it something very specific, there's no ambiguity about what we're trying to do here, right: it's only for this purpose at this time. You could extrapolate it for some other means, and it could eventually expand in scope, but we're not going to touch that right now, you know.
F
D
B
To move the ball forward here: I think the common theme we have is that there's still ambiguity with regards to secrets, right, and the name checkpointing has various connotations. So I'm happy to change it; I don't really care. I just want to move the ball forward, such that we can get a self-hosted control plane to cold start properly after it's been pivoted from a static pod. So, happy to change the name; I'm open to suggestions.
B
My plan is to put the proposal out there soon, along with a WIP, and talk with you all. Are there other major concerns that folks have? I mean, we want to provide a path forward that is not... I think there's a lot of hearsay going on; I think the sig-node folks have said they want to expand the scope of this, and I'll leave that to them to decide when and where. And I think that we will eventually have the capability to store, not checkpoints, we'll call them something else, semi-manifests, whatever you want to call them, to persist to disk to allow for the cold start bootstrapping scenario. Are there other major contentious points that folks wanted to talk about? Or if you think that what I'm saying is absurd, I'm happy to listen to that.
G
So we talked about how this is not application checkpointing; but have we considered potentially extending this to checkpoint container or kubelet state in the future? We said that this is not a goal for 1.8, and we don't want to checkpoint any internal state. I think it's also fair to say that it's possible, if we call this checkpointing and we imply that we are checkpointing some of the things that we get from the API server, that eventually we could use it to checkpoint our internal state as well, for example.
G
G
But we're just saying that if we do decide to do checkpointing, or something similar, it's possible to combine both, and then just store some of the pod objects on a disk and treat them similarly, as if they were from the API server. And I understand the concern about checkpointing those pods and trying to get their persistent state back as well, maybe while the API server is down; I think we'll try to avoid doing that explicitly.
G
It's just sort of a recovery phase where you can see the lost information and access it. But if the decision eventually is to limit the scope even further, so that what we write to disk are not checkpoints in any sense, but a special mechanism just for self-hosting those master components, then I think this will be a completely separate issue, and, yeah, we should just stop calling it checkpointing at all.
B
I don't think the mechanism that is there, or that I've started working on, would be different for scenario A versus B. It would be the semantics that people want to use, and making sure that it's been vetted properly by all parties. So the general mechanism is an abstraction layer, and I could call it cats and dogs and it wouldn't matter; what it's doing is the same thing, right? So, as long as we denote, you know, certain things that it does not do, I think we're okay and we can move forwards. How sig-node wants to use this going forwards and add more value to it for their purposes is, I think, another proposal on top of this, and that would be, like, how we are expanding this idea. That seems like a reasonable approach: just like phase one, and then phase two, and then phase three.
A
G
If I'm reading it correctly, in the actual proposal this is not really storing the contents of static pods; you already created those as pods in the API server, and you are trying to pick some of them to write down to a disk. So I'm a bit concerned about how general this mechanism is, when you just say: we can pick some of the pods and just write their manifests onto the disk.
A
A
...my pods too, like, "I should use this for my application," and go down a bad path. And so I think that's where we need documentation, and maybe the name of the annotation, to make it clear that this type of feature will work for checkpointing a control plane, because the control plane is, you know, using host networking and can be run as a static pod; and if you're using features that don't satisfy those constraints, you can put this annotation on all you want and it's not going to do what you think it will. Yeah.
B
E
F
E
Secrets were not ever allowed, and config maps are not allowed if you are running the node authorizer, which is on by default in 1.7 in GKE, GCE, and kubeadm. The intent was for secrets and config maps to behave identically, and I mean it makes sense, right: you only let a node get the config maps that are being used by its pods, and if you let it create pods and say "hey, I'm using this," then any node can get any config map it wants. So...
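The node-authorizer rule described here can be sketched as a single predicate. This is a simplified illustration of the stated rule, not the real authorizer implementation, and the data shapes below are assumptions:

```python
def node_can_get_configmap(node, configmap, pods):
    """Sketch of the rule from the discussion: a node may read a config
    map only if some pod scheduled to that node references it.

    pods: list of dicts like {"node": <node name>, "configmaps": <set of names>}
    """
    return any(
        pod["node"] == node and configmap in pod["configmaps"]
        for pod in pods
    )
```

The loophole mentioned right after is visible in this shape: if a node may create pod records itself, it can insert a pod claiming to use any config map, and the predicate then grants access.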
F
C
C
E
B
B
B
These are kubelets that are headless, so it's not self-hosted kubelets. The kubelets that are hosting the control plane are simply there to run static-manifest pods at the beginning, right, and then they pivot, right. During that pivot, the original static-manifest pods were the control plane itself; the pivot then allows them to be self-hosted, so that the API server is hosted by a kubelet, but that kubelet itself has no notion of a control plane, I think.
A
I think Jordan's asking about what happens during the cold start case. So you've got that set up, and then you turn your master machine off, so there's only the one, and you restart your master machine. How does the kubelet come back up, restart the control plane, and register itself, right? Because there's a race there between the kubelet trying to register itself, to see what it should run, and trying to get the API server back up and running. I don't...
B
E
E
We're about at time, so we can take this afterwards. I just... I've been in the kubelet startup flow enough to know that a change that, like, tries to add a third type of thing (you've got static things, you've got API things, and now you have checkpointed things), plus getting into the middle of the startup flow, is going to be interesting, and may require violence to the startup flow.
G
E
G
H
E
I mean, that's what I had in my mind: it might work either run-once or as kind of a cold standby, like block and then run if conditions hold: if the control plane isn't available, these static pods come to life, either by blocking on an init container, you know, so that you get that standby behavior or cold start behavior. That's what I had in my mind, but I didn't know; it seemed like it'd be cleaner to have, like, one kubelet whose job is to run this control plane.

The static pods have enough intelligence in them to not stomp on a control plane once it's actually running, right, and so that's all that thing does. And then, separately, you have a kubelet, this one a normal kubelet, that talks to an API and can do its normal startup flow, and then they each have their own responsibilities. On a cold start, the one whose job it is to establish the control plane, if it's not there, is working off those static pods; it can recover, but it also tolerates the control plane being available. I don't know; like I said, I'm concerned about entangling, like, three different things, in service of two really different use cases, in a single kubelet, in a single startup flow. But maybe I'm just overly skeptical, and it will work out.
A
We're about up against the clock, and there are no other agenda items, so I'm not sure there's anything to... Devon. I did have a question about what you just said, though, Jordan: you said that you were worried about us having two very different use cases. What do you see as those? I mean, in my mind I only see one use case here for why we want this. What do you feel like the other use cases are?
E
A
That's why, like, Tim's doc is targeted not at that use case, right. This is not intended to replace "run fluentd on all your nodes in a static pod" and have them disappear after they start; it's really targeted at cold start recovery of just the control plane, right. And that's why I say I feel like there's just one use case here, and that one use case is: say you have one master node and you reboot the node; how do you get your API server running again? And the downside of permanently maintaining static pods is sort of what we have today with upgrading the static pods: you have to then rewrite these manifests, which potentially becomes difficult across upgrades, especially if you want to keep the static pod manifests in sync with the sort of daemonset or deployment objects or resources you have in the API server. It's much simpler, like, on an initial deployment; you just scrape them the same. But then, during an upgrade flow...
E
E
...how specialized the use case is. I had hoped that we could embed all of that complexity and logic either in the static pods, or in a component alongside the static pods that the bootstrap process starts up. So once you have a bootstrapped cluster, you've got a manager running that is maintaining the static manifests, saying: "I'm at version X of my cluster, so I'm going to go make sure all of the nodes have version X of the static pod," and the static pods are going to start up and say, "oh, the cluster's up, I..."
E
E
A
C
I mean, the main things with it, like, we're trying to... we have to essentially determine local state from the kubelet, and the way that it actually updates its internal state is that, if there's no API server running, then it tries to reach the container runtime to determine the state, and tries that first before visiting the API. But then, when you hit the /pods API endpoint on the kubelet, that never actually gets updated; the API that should reflect pod updates locally never does.

So we're relying on the running-pods API endpoint to actually determine the true local state, and that's actually a debugging endpoint that we shouldn't be using. So that means, if we wanted to continue doing something like this, I'd even consider just directly trying to hit the CRI endpoints to determine the local state instead. But you need some more consistent API where we could get that true, like, what-is-actually-running state. So this is possible.

It's just that we've kind of learned how to work around those, because the end goal in our minds was that this would be folded into the kubelet, not an external process. If we wanted to go the external route, then we would need to do some work there, and shuffling files around on disk is just kind of gross.
C
G
Yeah, I agree: it's not reliable trying to determine the state of a kubelet, and even looking at running pods is just not really good. But I do partially agree that if we shove this whole thing into the kubelet, it's going to be very complicated; it's a very special use case. If we treat it as a separate path, then only certain pods can be used with this kind of mechanism, and it also complicates the handoff mechanisms and everything. I'll...
E
...acting like we have the API available when we don't, really. And you can either do that by taking those resources locally and writing them out as manifests, like you did, which I agree is kind of weird, but it's inspectable: like, you can look at it and say, "oh, what is this doing? These files are here; it's mounting these directories." Okay, that's... that is working with the system, and it's understandable. Or you put all of that inside the kubelet, and you say: well, when you start up, if you don't have the API available, but you have, like, a persisted list of config maps from this folder, then act like you got them from the API.

That's kind of gross. So, I don't know; before we kind of take all this internal to the kubelet, I just want to make sure that it's serving more use cases than just this narrow one, and that we feel like we can actually maintain that complexity. I'm not confident we can, but, like I said, maybe I'm in the minority.
A
All right, we're out of time; we're at the end of the hour here. I think if people want to continue the discussion, obviously there's another meeting next week, and in the meantime there are the docs and issues where we can ping people. So thanks everyone for coming. If you have anything to add to the notes, please do so, and we'll see you guys then.