From YouTube: Kubernetes SIG Cluster Lifecycle 20190116 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.t969yu41wiix
A: This is something I think we're going to try to do a better job of standardizing across all of SIG Cluster Lifecycle. We went through and cleared out the backlog by closing, duping, or moving issues out of the repo, and we sorted what we were left with into two milestones. The milestones are linked from the meeting notes. There's a milestone called v1alpha1, which is basically the things we are working on right now.
A: So what I would like to do going forward is spend a little bit of time each meeting looking at the v1alpha1 milestone, at least briefly looking at the numbers and seeing if we're trending in the right direction and closing things. As we spend time discussing issues and PRs, I'd like us, as a group, to try to prioritize the ones that are part of the milestones.
A: That way we can focus our attention on the things that we think are important to get done, so we actually get releases out the door and try to get this list of issues burned down.
If you look at the right-hand side, most of these have an assignee so far. The other thing I'll point out is that the approach the kubeadm folks take, which I think we should adopt, is that even if an issue has an assignee, it doesn't mean you can't work on it.
A: Oftentimes people will mark an issue as lifecycle/active to indicate they're actually writing code for it, or sometimes the assignee will be delegating pieces of the work to other people, or driving a process to completion. So just because things are assigned doesn't mean there aren't places people can contribute, and you can contribute in the short term to things that are really important. If there's something you want to work on, please reach out to the assignees; we need a lot of people.
A: So the list is, you know, 23 things long. I don't want to go through every single one here, since we did that last week, but I would encourage people, if you're interested, to go through and look at the split between alpha 1 and next, and make sure there's nothing in the next backlog that you think really should have been part of alpha 1.
A: I think that's a really interesting point. As I was trying to describe this, I said we're going to be doing v1alpha1 and next, right, but there's also something called next, which is, I think, a little bit confusing. Tim promised to write up a description of how kubeadm has been doing this, so we can socialize it across the SIG, and when he does that, it would be a great time to get him that feedback, because I'd love for us all to change it from next to backlog, or some other name that we think is better, together. We can rename ours, but then we're less consistent with the rest of the SIG, and it makes it harder for someone who's contributing to kubeadm to easily come over here and find the things they're looking for.
A: Cool, so there are 23 things. Maybe if we don't have much of an agenda, we can pull those back up and actually go through them during this meeting, but I see people are adding stuff to the meeting notes, so I think we should go through those first and see if there are other urgent things for people to discuss. Next on the list, I put something here: I got an email from the organizers of KubeCon Barcelona (KubeCon EU 2019) about setting up SIG sessions.
A: So in Seattle we had a SIG intro and two SIG deep dives, one of which was kubeadm and one of which was Cluster API. In China we had a SIG intro and a SIG deep dive, and at KubeCon the year before we had a SIG intro and a SIG deep dive. One thing is, when we were doing the deep dive for Cluster API in Seattle, we had a number of folks who were interested in helping co-present that session.
A: That applies to all the subprojects, like you're mentioning, Justin. Should we do a deep dive on kops instead of Cluster API, or should we do a deep dive on, you know, Bootkube, or pick some other SIG project? And how do we pick those? For our SIG it's difficult, because we can't really do a single deep dive.
E: True, I guess we could always ask. That's interesting, actually. So you have to request these, and then you get them for free, while the CFP ones are contingent on approval? Yes. So Justin has said that he's proposing a deep dive, and I know that Daniel, Jason, and myself have also suggested two different deep dives, only one of which is directly Cluster API related.
B: SIG Cloud Provider is running into the same issue as well, because we have a bunch of providers that are SIGs, which typically get the deep dives, but then some cloud providers don't get the deep dives, and we've been writing a proposal to the technical steering committee to figure out how to handle this. I can dig that up and share it with you all, in case we want to broaden the scope of it, so we can figure out, more broadly as a community, how SIGs that have multiple subprojects or working groups underneath them are handled at events like KubeCon. So I'll dig that up and post a link so you can see what we're working on. Okay.
A: So we'd like a reasonable upper bound, and then, yes, we can change the topic of those later. How many should we ask for to start with? Because we probably won't hear back about the CFP process until after February 8th, so we need to ask for the deep dives before we actually know if the CFP submissions were accepted.
A: Cool. So Chris put the link for the cloud provider topic he was talking about in chat; I also pasted it into the meeting notes so we can follow up on that. Again, this is not particularly urgent, other than the fact that I want to put it out there so people can start thinking about it. I guess it's also a reminder that the CFPs are due on Friday.
A: So if you do want to submit a CFP, you've got two more days to do that. And, you know, I'm hopefully going to propose we do a Cluster API deep dive. Maybe if we get enough CFPs accepted that becomes unnecessary, but I think we should at least try to hold a spot, and if people are interested in doing that, in some ways it would be good if the presenters for it were different people than the ones who had regular talks, so that we get more people up talking about it.
A: But if those are the only people that are going to go to the conference, I'd rather have those people do the talk as well. So, yeah, we certainly have some time to sort that out, and I know that last time we submitted the talk and then were able to shuffle around the specific people doing it even much later than February 8.
A: So there's plenty of time, but if you're planning on going to Barcelona and are interested in getting up on stage and talking, we should start figuring that part out. Okay, I think that's enough time on that one. Next: Vince isn't here, but we've been talking about updates to bootstrapping. The first thing is he sent a PR to do some flag refactoring.
A: He put in the PR comments that it's a breaking change. It's basically just a change to the flags we pass into clusterctl, so when this gets vendored into the provider repos, your clusterctl documentation should get updated as well. I guess my question for this group was: this also sets the stage for us to have more than one bootstrapper. Right now the bootstrapping code is either minikube or an existing cluster, and this allows us to plug in other bootstrappers.
D: ...environments which don't have Docker installed. I see, okay. There may be other requirements I'm not aware of; kind does something clever in terms of networking, but I think Docker is the only requirement. That is itself a problem in some CI environments, for example.
A: Yeah, in terms of environmental requirements, some of the folks from VMware mentioned, I think it was Luke, that minikube didn't work in their environment, which is why we're looking at other bootstrapping options. So I think kind might work in more environments than minikube: it only requires Docker and doesn't require virtualization, which is less of a hurdle to jump. But then there are places where maybe even the hurdle of Docker is too much, and we should think about other ways to do it too.
D: I think it makes a ton of sense to check with him. I do think that we are in a much more tightly controlled environment; we are running a handful of controllers that we basically know, rather than, say, using kind to replace minikube for the local development use case, which is a different, harder problem.
A: In addition, we're also not trying to use kind to run CI tests against a version of Kubernetes that we're compiling or that is unstable; we take a stable version of Kubernetes to run, and a specific thing to run inside it. So, yeah, I agree, it's a pretty constrained environment which, hopefully, we can make reliable.
A: Okay. I did like Justin's idea of not having a default bootstrapper and forcing an option. So you either say use an existing cluster, which would be one option, or you say use minikube, or use kind, or whatever other bootstrapper we might want to plug in in the future. That seems like a good answer for now, and if everybody starts complaining, like, "I don't want to have to keep passing minikube every single time, because that's all I use," then maybe we can make that the default.
A: Go ahead.

G: Okay, I'd also make the argument for not having a default, not only for now but in general. I'm also just wondering, in the case of minikube: is it possible that somebody accidentally executes the command while they have a local cluster set up, and that cluster gets messed up because they were not aware that the local environment was going to be used here?
A: So, just to clarify, Eric, you're saying that at some level the user should understand what the bootstrapper is doing, because they're going to have to install minikube or something to actually make it work. So asking them to specify it, to say "yes, I installed minikube, please use that," doesn't seem like a bad idea.
G: I think not defaulting seems like a good idea to me, because if you have a local cluster properly set up in your local environment and you accidentally execute this, especially as a newcomer who doesn't know that the local environment is going to be used for this purpose, things can go wrong. But I guess the consensus for now is that we keep it required and revisit it in the future.
F: I was just going to say, for the record, as I was reviewing some PRs in the repo, I looked at the code, and I think the flags need some renaming and refactoring in terms of the multiple bootstrap options that are going to be available, like kind and minikube, and the flags have to be made mutually exclusive for sure. That's what I wanted to mention.
A: The way that Vince was trying to do it was by naming each flag with a prefix of the bootstrapper, which would make them mutually exclusive both in the code and on the command line. Whatever flags you had for minikube, say how much memory, and for kind, how much memory, those would be two different flag names, and they'd be two different names in the code, so I wouldn't dedupe them.
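As a rough sketch of that prefixing idea (the flag names, defaults, and helper here are hypothetical, not the actual clusterctl flags), Go's standard flag package keeps the per-bootstrapper options distinct by construction:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// newFlagSet builds a hypothetical clusterctl-style flag set in which every
// bootstrapper-specific flag carries its bootstrapper's name as a prefix, so
// minikube and kind options can never collide on the CLI or in code.
func newFlagSet() (*flag.FlagSet, *string, *int, *int) {
	fs := flag.NewFlagSet("clusterctl-sketch", flag.ContinueOnError)
	bootstrapper := fs.String("bootstrapper", "", "required: minikube | kind | existing (no default)")
	minikubeMem := fs.Int("minikube-memory", 2048, "memory (MB) for the minikube bootstrap VM")
	kindMem := fs.Int("kind-memory", 2048, "memory (MB) for the kind bootstrap container")
	return fs, bootstrapper, minikubeMem, kindMem
}

func main() {
	fs, bootstrapper, minikubeMem, kindMem := newFlagSet()
	// Parse a sample command line; real code would pass os.Args[1:].
	if err := fs.Parse([]string{"--bootstrapper=kind", "--kind-memory=4096"}); err != nil {
		panic(err)
	}
	switch *bootstrapper {
	case "minikube":
		fmt.Printf("bootstrapping with minikube, memory=%dMB\n", *minikubeMem)
	case "kind":
		fmt.Printf("bootstrapping with kind, memory=%dMB\n", *kindMem)
	case "existing":
		fmt.Println("using an existing cluster")
	default:
		// No default bootstrapper: the user must pick one explicitly.
		fmt.Fprintln(os.Stderr, "error: --bootstrapper is required (minikube | kind | existing)")
	}
}
```

This also captures the "no default" decision from the discussion: leaving `--bootstrapper` empty is an error rather than silently picking minikube.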
A: It's 686; I linked it in the notes, so I can stick it in chat if that's easier for you. Okay, so, yeah, take a look at that. I think Vincent hit the same problem; he needed to prefix the flags, (a) so it's clear to users, and (b) so that we can have two flags that do the same thing for different bootstrappers.
E: One of the issues that I signed up to try to marshal through the process towards v1alpha1 is whether we should establish a stronger link between machines, machine sets, machine deployments, and the cluster. This has been something we've debated for over a year now, and I think it's important that we reach consensus, not just for this milestone but in order for higher-level tooling to be built; there need to be documentation and tests which enforce these relationships, or the lack thereof.
E: But since we've been talking about that for a year, I don't think we're going to resolve it in this meeting. There's a piece of low-hanging fruit, though: there's an open PR right now to allow the machine actuator to function without the cluster object. I think that PR should be merged; I think there's a warning that needs to be removed from it, but I wanted to get feedback on what others thought in terms of allowing the machine actuator to operate without a cluster.
E: The argument that I made in the PR, and the reason why I think this should be merged, is that currently, if you delete a cluster, it then becomes impossible to delete machines, and this results in a poor user experience. Either the user has to recreate a fake cluster in order to delete the machines, or they have to manually patch the finalizer; either way, they have to understand the internals of the Cluster API in a way that only a developer should have to.
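A minimal sketch of what tolerating a missing cluster in a machine actuator might look like (the types and function here are simplified stand-ins, not the real Cluster API actuator interface, which lives in sigs.k8s.io/cluster-api and carries much richer context):

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the Cluster API objects.
type Cluster struct{ Name string }
type Machine struct{ Name string }

// deleteMachine sketches a machine actuator Delete that does not require a
// cluster object: deletion proceeds even when cluster is nil, so machines
// can still be cleaned up after their cluster has been deleted.
func deleteMachine(cluster *Cluster, machine *Machine) error {
	if machine == nil {
		return errors.New("machine must not be nil")
	}
	if cluster == nil {
		// No cluster context: skip any cluster-scoped teardown but
		// still delete the machine's own infrastructure, so the user
		// never has to fake a cluster or hand-patch a finalizer.
		fmt.Printf("deleting machine %q without a cluster object\n", machine.Name)
		return nil
	}
	fmt.Printf("deleting machine %q in cluster %q\n", machine.Name, cluster.Name)
	return nil
}

func main() {
	_ = deleteMachine(&Cluster{Name: "test"}, &Machine{Name: "node-1"})
	_ = deleteMachine(nil, &Machine{Name: "orphan-1"}) // cluster already gone
}
```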
D: I've also hit that bug; it's incredibly frustrating. I share the feeling that delete is a special sub-case, but I think you're right that if we merge the PR and require machine actuators to effectively determine whether they're going to handle a missing cluster, then we can start to see whether people can really write meaningful controllers for the nil-cluster case. I think delete is a special case, and I hope everyone can agree that I should be able to delete machine deployments that I just created in the wrong namespace (not that I've ever done that). But we can also see whether people can actually do machine creation in their implementations without a cluster, and if they can, then I think it gives us evidence for whether or not we need a link between clusters and machines.
A: Yeah, it's a good point. David, you mentioned that there was a warning in the PR you thought should be removed. I'm looking at the PR and I don't see a warning; let me see if it was removed. Oh, I see, maybe it's this first one that Justin commented on: this message is going to be annoying for people, right.
A: Okay. So I think the last time we discussed this in the meeting, Siddharth was the main objector, and I don't think he's here today. I think most other people had agreed that this should be okay to go in, so I would suggest that maybe we ping him and give him a relatively short timeout to raise any further objections, let the author rebase, and then try to merge this in a couple of days if he doesn't have objections.
E: Right, so I think there's a debate on whether you need to have access to the cluster from the machine actuator, and there's a case to be made that if the machine actuator needs access to the cluster object, it can find it itself; maybe it can be opt-in functionality. And just to echo that, I think Justin made the point that by merging this PR we can gather evidence as to whether or not that's true, whether actuators require the cluster object.
E: The reason there's an open question in my mind on whether clusters should be required or not, and I'll just put this out there so people can think about it over the coming weeks, is: what does the cluster object provide right now? Well, it provides two things: some minimal networking information in the spec, and an API endpoint in the status. The API endpoint is useful in determining when the control plane has been provisioned.
E: We can move it back; that's the wrong place! Well, I don't know that it's the wrong place, but I don't think GCP is the only provider where this semantic information is useful. So the question to the community is: I think it is highly useful to have some way to determine when your control plane is provisioned, and one way might be to say that it's when the API endpoint is filled in on the cluster object.
E: If we say that, then we would require the cluster object in order to have that semantic meaning for higher-level tooling to build on. There are other proposals, which are probably not going to make it into v1alpha1, whereby we might split control plane provisioning out from the cluster object; this is something that was proposed recently.
E: If you did that, then the way you would determine whether the control plane was provisioned would not depend on the API endpoint field. So my goal is: I think the Cluster API should provide a way for higher-level tooling to determine when the control plane is ready, and right now the closest thing we have to that is this field, which is optional, which makes it not dependable, and therefore you can't build higher-level tooling on top of it.
A: In some ways I think that's a little bit orthogonal, because the question is whether the machine actuator needs to know that or not. The machine actuator probably needs to be told: here's the endpoint you should tell the machine to register to. But I'm not sure it needs the whole cluster object, certainly not for delete.
G: I think this brings up a very good design question that I've wanted to talk about for some time. When we have actuators, and we are already thinking about externalizing the cloud-provider-specific stuff, we need to think about what kind of responsibilities we actually want to give to that code, which is not maintained under the project itself, because we discussed once that there will be different cloud providers with different kinds of capabilities and reliability.
G: So I would expect that at some point we give the minimum responsibility to the actuator, where the job of the actuator is only to create the machine and delete the machine when instructed, and not really worry about how we populate the machine object itself, or how the cluster objects will be handled. This is also a kind of lesson that I take from the CSI model, from implementing CSI drivers.
G: One lesson from them is that it's intentional that they don't give kubeconfig access to the out-of-tree drivers, because if you give kubeconfig-level access to the drivers, then at some point a pattern starts evolving where the code inside starts modifying the machine status object, or the cluster status object, and so on, and then there is no homogeneous behavior across different cloud providers, which matters specifically for the multi-cloud folks.
G: So, looking at this issue, I wanted to scope down the requirements here. If we look at the machine health strategy: what kinds of health issues do we want to surface, and what kinds of failures do we actually want to solve? In my last comment I put mainly two objectives which I see as low-hanging fruit, where the signals are already available from the node conditions. The first one is the kubelet timeout: basically, when the kubelet stops responding for a certain period of time, we can consider the machine unhealthy and replace it; basically delete the machine object and let the machine set controller create a new machine object. The second was when there is disk pressure on a particular machine. If disk pressure is high, there should probably be a relatively longer timeout, like 30 minutes, so in that case the application also gets some chance to do some kind of cleanup if it can.
G: Otherwise, if disk pressure really happens, I have seen behaviors where the machine can actually become unusable: new pods may get scheduled, but then they don't get disk space on the host, and the machine never recovers. So the second check, the disk one, is just fine. For the first one, though, I saw a comment from Robert about the node auto-repair feature on GCP, which I was not aware of: if a machine is not healthy for a certain period of time, GKE replaces it.
G: But I was not aware of the GCE piece of this, where if the kubelet does not respond for four or five minutes, then it deletes the node object. So who deletes the node object, and is the machine also replaced? Or does it make sure that a new machine is created and the old one deleted?
A: So it doesn't do anything with the underlying infrastructure other than query it. I'm not sure I said this quite correctly: what it does is, if the kubelet stops heartbeating, it will check with GCE and ask, does that machine still exist? And if that machine is gone, then it will delete the node object. That's basically to capture the case where you had a machine...
A: ...you had something in your cluster and it went away, right; like you scaled it down by one and something disappeared. The way we notice that is that it stops heartbeating, and then we delete the node object later, assuming that you sort of stranded it. It's the failsafe.
A
If
that
fails,
and
so
I
think
in
that
environment,
that
I
think
the
timeout
default
for
that
is
about
five
minutes,
and
so
that
would
always
kick
in
before
your
proposed
time
out
of
ten
minutes,
for
if
we
stop
heart,
beating
right
and
that's
not
to
say
like
this-
would
be
sort
of
another
backstop
and
again
would
work
in
other
environments
that
don't
have
that
cloud
provider
integration.
It's
also
a
little
bit
different
in
the
sense
than
I.
Don't
think
I
thought
about
this
for
Notre
Dame,
I.
G: So, exactly: what I wanted was to come to consensus on whether these two objectives look good enough, or whether there is another idea worth including in the first cut itself that we can pick up. The next step I'm seeing is that in the future, when we actually deploy this, we will expect to get more node conditions from the machines and can take better actions, but that should be community-driven. So: are these two good enough? I could see both of them going in.
D: Can I ask (I haven't read the whole issue, I'm trying to catch up): what is the mechanism for doing this, and is it extensible? In other words, what if I have a particular strategy for my machines; say I'm going to run my GPUs super hot and then shut machines down when the GPU is about to melt?
G: Exactly. So for now, implementation-wise, the position is that we will only be looking at the node conditions, specifically kubelet Ready and disk pressure. So if the machine is really, really hot on the CPU, and those conditions are not affected, the machine controller at this level will not take any action.
G: But if you want the machine controller to take action, the mechanism is extensible, probably by an outside driver or external controllers; that should be possible. It's just that those signals should be piped into that same set of node conditions that we have. When a new condition pops up saying the CPU is hot, the controller can decide what action it should take for that condition.
A: I think the proposal here is to put this in the machine set controller code, which would make this behavior common and built in. If you wanted other ways of identifying unhealthy machines, you could always delete individual machines yourself out of band, and the machine set controller would replace them for you. Or, if and when we implement the scale-down-by-specific-machine request from the autoscaler folks, you could do it that way too and tell the machine set controller which machine to scale down.
A: You'd just delete the machine, right: you just say, this machine looks bad, delete; this machine looks bad, delete; and the machine set controller should replace them for you. So this would be the built-in behavior that does that automatically in the specific situations we think are common enough to encode in the core, basically. And I think what's being proposed here is a small set of those which we think are common.
D: It sounds like there is a mechanism there, and if we wanted some complicated behavior, like shutting a machine down after five minutes of the CPU being too hot, we could do that with conditions. So, given we are, I think, pre-alpha, it makes a ton of sense to merge in what we can get in and see how it goes.
I: Yeah, I'm trying to think out loud, to be honest. When you go to kubeadm, there's this HA thing, and you notice it's like the first thing to take care of, and I think what's being proposed at this moment is a good basis for where to go. Maybe there are some common things we could locate and look through.
A: The other option is you could make that HA control plane out of individual machines instead of a machine set, in which case this behavior would not apply. I think it really depends on the implementation of how you are stitching together that HA control plane with kubeadm, and if you're going to put it in a machine set, you need to make it resilient to the fact that any one of those machines can go down and come back up.
D: You raise a great point: if we are running masters and they are deleted when the disk fills up, and we are running on local disk and etcd grows, they're all going to be deleted at exactly the same time, which is very much not what you want. But I think the argument that you would then create three individual machines for an HA control plane is a very strong one.
A: Or whatever redundancy, that's it, right. And if, when you create a control plane, you run init on one and join on the others, with different scripts and flags, those are very specific; they're pets, and you shouldn't put them in a set because you won't be able to just replace them.
D: Right, I hope we can figure it out. I think there's some work going on there; the etcdadm project, which I'm working on, should hopefully eliminate the asymmetry. It can do the mounting of volumes, and it doesn't need you to run join and then init; you can just run the same thing on all three.
F: Sinclair suggested that for kubeadm we basically stop using fmt.Errorf, and we instead start using the pkg/errors package, which, for some reason, is not in the standard library; I do not know why. This is something that Tim recommended, and we pretty much already moved kubeadm to use it. It has a wrapper for errors, and it's arguably the cleaner way of doing it. So the first one is really a question for Robert.
D: It's not anything to do with the Go team; I think the issue is that this is a third-party dependency, which doesn't make it bad, it just means we should be aware of it. The other thing is that there is apparently work on this happening in Go 2; I think it's on their list of topics.
A: Cool, all right. With that we are just about at the top of the hour; thanks, everyone, for coming. We didn't have a chance to go through and look at the 23 issues in the v1alpha1 milestone today, so I would encourage people to go ahead and try to do that on their own over the next week if they care.
A: I think what we'll try to do in the next couple of meetings is either chunk them, like say we'll go through five or ten per week, or we'll pull out specific ones that we think are either blocked or need someone to work on them, so we can really start trying to burn down that list as we go forward. So again, please take a look, thanks for coming, and we'll see folks in a week. Take care.