From YouTube: Kubernetes SIG Cluster Lifecycle 20181128 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.yaovy3j8you4
Highlights:
- Scope of clusterctl: adding functionality vs. using kubectl or bespoke tooling
- HA support in clusterctl
- ProviderID in machine {spec|status}
- When can the API be considered stable?
- Machine phases PR
B: And since we're using the management cluster to create many clusters, the ability to list the clusters from clusterctl, instead of forcing the user to go to kubectl after they use clusterctl to create, would I think be a better user experience, and it wouldn't be that hard to add. So I wanted to see what people think about adding this functionality.
A: I think their intent is to be just sort of a list of clusters where you can build, like, a multi-cluster controller. I know they built a controller called multi-cluster ingress, which allows you to create an ingress controller that targets services in more than one cluster, and that controller can look at the cluster registry to see which clusters should be targeted by the ingress. So I think they...
A: They see themselves as sort of filling that role: if you have more than one cluster, let's put them all in one place so that we can build management tools on top of that. So I think the interesting question for me here is: if we are using a sort of management-cluster pattern, how does that overlap with what they're building, and does that management cluster become a natural place to also run the cluster registry CRDs, and maybe the controllers that use the cluster registry, for the clusters that it's managing? That's...
B: ...a very interesting question, yeah. I need to take a look at it; I don't know enough about the cluster registry to determine that.
B: You know, those of us who are developers understand that you can use kubectl to get the list of clusters, but we work with a lot of administrators, and they have a different mindset; it would just be a little bit of a mind shift for them. So just adding that simple feature would be very useful and user-friendly.
D: In kops-land we've had this idea, which predates the Cluster API, called kops-server, and the idea is that we would have API objects like the Cluster and the MachineDeployment — that would basically be the first step — and then the second step would be that we'd basically move the controllers onto the server. So I guess in that world we wouldn't really expect people to use the kops CLI tool very much, and I guess the analog here would be...
B: I know that some of you have described clusterctl as a bridge tool, but for us, we're still using clusterctl, and until we have a solution — until we decide to do something different — clusterctl is our intermediate tool too. So we would like to expand its functionality until we decide to switch.
D: I would say that the kops vision was that you would use kubectl or other native tooling, right — kubectl, the UI, the dashboard, whatever it is — but get people using the same Kubernetes API that the kubectl tooling uses, and push them in that direction, so they don't have to sort of jump from clusterctl to kubectl. It's all the same thing, whether you're managing a cluster or managing an application.
C: So I can see some value to having it, just for that simplification of the user experience, but I don't necessarily think that we want to force all commands into clusterctl to accommodate all of the different types of workflows you would do with the Cluster and Machine objects.
B: To bootstrap we have another mechanism. I guess the reason is just that we haven't had time to think about it, but we've still been using clusterctl because of all the steps in the creation workflow that we've been using it for — though we could actually replace it with just a shell script that does all the same things. So maybe that's the...
D: Okay. I'd also say that kubectl has plugins now — I don't know if they're first class, but they certainly aren't hidden under a plugin subcommand anymore. So if you have that objection to kubectl, you can write a plugin now; you can add a verb after kubectl. "kubectl namespace" is the example I like to give.
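For reference, kubectl discovers any executable on the PATH named kubectl-<verb>, so building the sketch below as kubectl-clusters would make "kubectl clusters" list Cluster API objects. This is only a minimal illustration: the API group/version (cluster.k8s.io/v1alpha1) matches the era of this discussion, and the plugin name and kubeconfig handling are assumptions, not an agreed-upon tool.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig kubectl itself would use.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Cluster API CRDs of this era live in cluster.k8s.io/v1alpha1.
	gvr := schema.GroupVersionResource{Group: "cluster.k8s.io", Version: "v1alpha1", Resource: "clusters"}
	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Items {
		fmt.Printf("%s/%s\n", c.GetNamespace(), c.GetName())
	}
}
```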
B: Well, the next question I had is very similar to that topic. The workflow for clusterctl delete doesn't work for us — the part of the workflow that tries to delete the API objects and then does the pivot. I don't know if anybody else is using clusterctl delete cluster, but it just hangs for us, and I don't really see the need for deleting the API objects first on the target cluster.
A: I can explain why it was implemented that way, because I was talking to Rob about this when he was trying to figure out how to implement delete. The thing he was concerned about was: if you have two clusters that both, at the same time, have a representation of the resources in that cluster — the Machines and the Cluster itself —
A: — then it's unclear who is managing the cluster, and they can fight over it. So the idea was that you remove things from the target cluster so that the controllers in that cluster stop managing them: once you've copied them out, you remove them from that cluster, and then you start controllers in sort of an external cluster — I guess it's not a bootstrap cluster, because it's for teardown, but in that case a bootstrap cluster — and it sort of takes over managing those resources. That allows you to delete them without the controllers inside the cluster fighting with you. So that's why they are removed from that cluster: so that the controllers in that cluster stop managing them.
A: That being said, I think the way we sketched out delete was very specific to pivoting out into this bootstrap cluster, and we've spent a lot of time talking about alternatives for creation where you maybe don't have an ephemeral bootstrapping cluster — you have this sort of more long-standing management cluster. So, in the same way we've rethought creation and Jason's broken it into phases, maybe we need to look at delete more closely and figure out a better way to factor it so that it works in both of those scenarios. I don't know; like I said, I've never tested it in the scenario you're describing. I have tested it in the pivoted-out scenario, and we made sure it worked in that scenario. So it's possible it just doesn't work correctly in your scenario today, and we need to fix that.
C: ...we have instances where we create the initial one using kubeadm init, and then we serially create the additional control plane instances using the new experimental control-plane join feature in kubeadm 1.13, and there's potential for lock contention or race conditions around how we access some of the objects in kubeadm right now.
D: I'm always happy to share how kops does it, but I would love to hear any cool ideas before we — you know, kubeadm has a way to do it, kops has a way to do it. I don't think the kops one is perfect, and if anyone has a great idea, I would love to discuss that before we focus on any particular approach. But yeah, you were hinting that there might be a better way, Jason?
C: Well, I just know that in the past we had talked about making the cluster actuator actually spin up either the control plane instances, or manage the control plane in some other way, versus the way that we're doing it now, where we have these predefined Machine objects for the control plane and we're hard-coding the workflow for standing those up as part of clusterctl.
A: I haven't quite figured out how to do this, but I would love for us not to have this distinction between machines that run a startup script to be a control plane and machines that run a startup script to be a node — to have every machine effectively just be a node, and be able to then, after the fact, schedule or run the control plane bits on top of them. And I think in a scenario like Loc's, with the management cluster, that's possible, because you have something else that can control things.
A: If we can assume every machine is just a node that can run things — right, because it has a kubelet — I've been trying to wrap my brain around exactly how we make that work in practice, because at some point you do have to run this kubeadm init, or kick off the control-plane join, and something has to trigger that. So maybe there's a way for, you know, the bootstrap cluster to run a job, or run, like, you know...
A: ...something like a one-shot thing to get that going — like, if you could schedule a pod that basically runs in privileged mode, drops into the local OS, runs kubeadm init, and can drop the files on there. Maybe that's a way to bootstrap it, or maybe there's another way to do that if you don't have this other cluster that's able to manage it for you.
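A rough sketch of that one-shot idea, heavily hedged: a privileged Pod, pinned to the machine being bootstrapped, that enters the host's namespaces and runs kubeadm init. The namespace, node name, and image are hypothetical; this is the shape of the idea, not an implementation anyone has agreed on.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	privileged := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kubeadm-init-oneshot", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			NodeName:      "machine-to-bootstrap", // pin to the machine being turned into a control plane
			HostPID:       true,
			HostNetwork:   true,
			RestartPolicy: corev1.RestartPolicyNever, // one-shot semantics
			Containers: []corev1.Container{{
				Name:  "kubeadm",
				Image: "example.com/kubeadm-runner:v1.13", // hypothetical image carrying kubeadm + nsenter
				// nsenter into the host mount namespace so kubeadm writes the
				// static pod manifests onto the node's own filesystem.
				Command:         []string{"nsenter", "--mount=/proc/1/ns/mnt", "--", "kubeadm", "init"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```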
A: We need a KEP, but I think even before a KEP it'd be useful to sit down and sketch out some ideas and brainstorm before we actually try to write out something official. I think once we sort of have an approach that we think might be reasonable, we should definitely write up a KEP, but I don't think we're at the point where anybody can say "this would be a great way to do it". Yes.
A: So Minikube goes away, right? Like, if Minikube is running on my laptop, it can't be trusted to keep things running on the target cluster. So even if our clusterctl workflow didn't delete Minikube at the end — which today it does — you know, I shut my laptop, or I disconnect from the network and get on an airplane or whatever else, and then it's all over.
A: So we either need to be able to create a management cluster, where we can then sort of apply the pattern that Loc was describing, or we need to have a way to pivot and stand up a cluster that's sort of self-maintainable going forward — which is what the code tries to do today, using static pods with kubeadm init.
A: Yeah, I mean, that's a great point. Minikube is convenient because it's portable; everybody — quote-unquote "everybody" — can run it, in pretty much any environment. Obviously you can't necessarily run it on a VM in a cloud, and some places have security requirements that don't allow you to run VMs, so there are lots of restrictions in that sense. But in general, if someone just wants to poke at the project and pick something up, it's something that we can sort of...
D: If you had a management cluster on Google, you would run it on GKE, right? And if you had one on AWS, it would be on EKS, and so on. So I'm just thinking about whether we could make the OSS Minikube into a management cluster, so we didn't have this divide between the two — and what are those dependencies between, like, when do...
A: Yeah, and I think that, again, Minikube is convenient because we know it's consistent across each of those environments. And I know the Gardener project has — I think it's called Kubify — something to do basically exactly that: you can use Terraform or something like it to spin up your first set of things that allow you to start that process. And so, yeah, I think that goes back to the discussion from many meetings ago.
A: So, to circle back to your actual question, Jason, about HA control planes: I don't know if the best way to start is to stand up a doc, or — KubeCon's in a couple of weeks — if we just wait until KubeCon and try to find a room and a whiteboard and sit down and hash out some ideas.
A: If we do that, we need to make sure that it's not just, like, me and you sitting in a room, but that we widely announce it to the people in this meeting, so that people can try to show up if they're at KubeCon — and also understand that people aren't all going to be at KubeCon. So the result of that should be: "here's a proposal; we know not everybody was there; you might have better ideas than we did; please let us know what you think." Yeah.
A: My reading of the issue is that most people agree that a providerID is useful — if not for the original reasons in the PR, then for different reasons — and the contention seems to be on whether it should be a spec or a status field. So, to summarize: the arguments for status are that in most cases it appears to be an observed field; it's not something that you tell the cloud provider, it's something the cloud provider tells you.
A: Therefore it's sort of a status field. And the arguments for a spec field are that, even though it's something the cloud provider tells you, there are cases where you want to tell that field to the controllers, because you know it a priori through some out-of-band means — and therefore it's something you want to provide, not something you want to get back. That would be the argument for spec.
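For concreteness, the two placements under debate look roughly like this — a sketch using illustrative field names, not the merged design:

```go
// Package sketch: stand-ins for the real cluster-api types.
package v1alpha1

// Spec placement: the user (or an out-of-band tool) asserts the ID up
// front, e.g. to make the actuator adopt a pre-existing instance.
type MachineSpec struct {
	// ProviderID identifies the backing cloud instance,
	// e.g. "aws:///us-west-2a/i-0123456789abcdef0".
	ProviderID *string `json:"providerID,omitempty"`
}

// Status placement: the actuator observes the instance it created and
// reports the ID back; note it is lost on a pivot that drops status.
type MachineStatus struct {
	ProviderID *string `json:"providerID,omitempty"`
}
```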
D: The way spec versus status was explained to me was: if it can be recovered, it can go into status, but if it can't be, it has to go into spec. In other words: if you lost the status, could we get it back? And here — could we get the providerID back? I guess the answer in the general case would be no.
A: I think the controllers are going to have to be a little bit flexible, because in most cases, where you're not adopting existing infrastructure, that field is not going to be there, right? So if I want the actuator to create a new machine, this field would be missing or blank, which tells the actuator "create something new for me"; and if I want the actuator to adopt a machine, I might fill it in in the spec, and the actuator says...
F: But adoption is largely sort of a new use case — and I know I kind of brought that up earlier; that could be one way of leveraging it. However, I'm just thinking of the simple use case where we create a bootstrap cluster and then we pivot off of it. That's, like, the simple workflow that we have today, and in that simple use case...
F: ...that field will be lost, and then the actuators will probably go on to execute a code path that assumes whatever the meaning or significance of that field not being there is. And that's why I'm saying it's probably more important to clearly articulate what exactly the use case is that you're trying to solve with that particular providerID; that can drive the discussion of where it is better suited.
G: You can just look on object A or object B, or you can uniformly put it on object A, and that way it doesn't matter where it comes from. The problem with putting a new field into an API is that you're expecting it to be uniformly applied across all the different providers. The beauty of an annotation is that you have kind of a dealer's choice between different providers, and you're not strictly saying that this field is required to be used in this particular way across providers.
C: Wasn't part of the impetus of this external tooling that's interacting with the Cluster API objects? That was kind of part of trying to standardize it across providers. If we go the annotation-based approach, I think it introduces the same issue that we have with having it in the provider config or provider status.
G: At this point it does — I'm not saying it will. I think the problem is that there is no one right answer for some of these things, and if people want to create auxiliary tools, they can do that for their scenario; but we can't necessarily put things into the core API if we can't come to consensus and agreement on where it should live. Yeah.
A: So the original main reason for this PR was to put it in the API so the autoscaler can use it, right — generic tooling that relies upon this field to do something. And I think people pushed back on that, saying, "well, we don't exactly know how the autoscaler is going to integrate with Cluster API; that's not a good reason to promote it." However, there are other reasons we would still want this field, and I...
A: ...think, for those other reasons, Tim, you might be right that an annotation suffices. And then, if we get to the point where the autoscaler does need it as a field that's consistently named and accessible, we can maybe move the annotation into a field at that point, as part of those discussions. So, you know, that's to Hardik or the other folks that are wanting to use this field now.
F: I think that's tying our Machine to the cloud resource, which I would definitely expect on a Machine object, because if you are relying on any other mechanism — which could be tags, or private IPs, and so on — there are still rare but real possibilities of confusion, right? So that was one reason. And then, if you also think it over, the providerID is the most reliable way of mapping the node to the machine, which can never go wrong.
F: If you get the right providerID on the Machine object, then it cannot happen that you're messing with the wrong machine on the cloud for any Machine object. As for the other ways I have heard — you could use the tags, you could use the IPs — we know that both are reusable things: you can have the same tags on two machines. I know no one will do that, but I'm just talking about the possibilities. And then the private IP is also reusable.
F: IPs get used here and there, and if you think about sudden destruction in a large cluster, with freed IPs being reused here and there, you could have a big confusion where some new machine gets the IP which was allocated to some other Machine object previously and since freed. The providerID is the only field which can very reliably solve the problem of mapping the node to the machine.
F: So now the question comes: at the time of the bootstrapping itself, how do you decide what the providerID will be when you create the machine? There are ways to derive it — different cloud providers have a very specific way of building it up, so we can actually check how it's built in the code. The format is very nicely defined; for example, for AWS the format is the region — well, the availability zone nested in the region — and then the instance ID.
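A small sketch of composing and parsing such an ID, following the aws:///<availability-zone>/<instance-id> convention used by the AWS cloud provider; treat the exact layout as illustrative, since each cloud defines its own scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// BuildAWSProviderID composes the ID an actuator could record.
func BuildAWSProviderID(zone, instanceID string) string {
	return fmt.Sprintf("aws:///%s/%s", zone, instanceID)
}

// ParseAWSProviderID recovers the zone and instance ID — which is what
// lets a controller match a Node (whose spec.providerID the cloud
// provider integration fills in) back to a Machine.
func ParseAWSProviderID(providerID string) (zone, instanceID string, err error) {
	const prefix = "aws:///"
	if !strings.HasPrefix(providerID, prefix) {
		return "", "", fmt.Errorf("not an AWS providerID: %q", providerID)
	}
	parts := strings.Split(strings.TrimPrefix(providerID, prefix), "/")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected providerID layout: %q", providerID)
	}
	return parts[0], parts[1], nil
}

func main() {
	id := BuildAWSProviderID("us-west-2a", "i-0123456789abcdef0")
	zone, instance, _ := ParseAWSProviderID(id)
	fmt.Println(id, zone, instance)
}
```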
F: ...so you know which is the right node that should be matched with the machine. And once you get the Node object, the providerID is readily available in the node itself, so that is one way you can very reliably solve the problem of mapping the node to the machine. That's one of the reasons I would like to see it in the Machine object somewhere, but I don't have very strong opinions on having it in either place.
F: If it was all spec — looking at the use cases, spec could be fine. But I have also seen the use cases where it was mentioned that machines are being deleted, and unless you're implementing the machine controller the other way, where the machine controller itself is re-creating the machine, you might want to update the providerID in between — and, again, that falls into the philosophy that if it's something you're observing, why don't you keep it in the machine status. So I don't have a strong opinion on where to keep it.
F: Just one quick thing. I agree that having that information present and available is good; I don't see any issues with that. As far as the annotation is concerned, I think it's a reasonable, kind of in-between place where you can at least put it. However — just a side note — what I've noticed is that the clusterctl implementation that exists today makes certain assumptions about the annotations that will be there on the machines. It is kind of a little naive, and that would probably need to change.
F: For example, if I remember correctly, what I found in the code is that when you create with clusterctl, it is waiting for a list of annotations to show up, and the moment an annotation shows up, it thinks that the machine is ready and tries to move on — which essentially broke things for, for example, the vSphere provider, because in the beginning, when we were doing the implementation of the actuator, we tried to use annotations to keep track of some temporary data...
F: ...like the ID of a task that's in flight, and that broke the clusterctl behavior: the moment it saw the annotation present, it assumed that the machine, or the underlying infrastructure, was ready. So I think it's okay to put it in an annotation, but I guess we'd probably have to go back into clusterctl and correct those assumptions as well, because otherwise we might see certain things not behave the way we want them to. Yeah.
A: I think it's a little bit different from what Hardik was saying, because what Hardik was saying was, like, for AWS — inside of, I guess it's in core right now, but in the cloud provider implementation which we're extracting from core — there is a specific way that AWS creates a providerID, and we'd want to match that when two different variants are both creating AWS VMs.
A: Nate's asking in chat if it's offensive to use both an annotation and a status field. You could use the annotation sort of in place of the field in spec — you'd use it to tell the actuator of the provider to adopt, or, you know, to take one that's being pivoted — and then you could use status to actually post back the thing that's being observed.
A: Which is definitely interesting. I think if we do use status as the end result, that might be a good way to work around Hardik's concern, because you can even put the annotation on and then have the machine controller remove that annotation when it moves it into the status.
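A minimal sketch of that "annotation in, status out" step, with stand-in types and a hypothetical annotation key — the real Machine types live in the cluster-api repo and this is not an agreed-upon API:

```go
package main

import "fmt"

// Hypothetical, minimal stand-ins for the Machine API types.
type MachineStatus struct{ ProviderID *string }
type Machine struct {
	Annotations map[string]string
	Status      MachineStatus
}

const providerIDAnnotation = "cluster.k8s.io/provider-id" // hypothetical key

// reconcileProviderID copies a user-supplied providerID annotation into
// status and strips the annotation, as suggested in the discussion, so
// status remains the single source of the observed value afterwards.
func reconcileProviderID(m *Machine) {
	if id, ok := m.Annotations[providerIDAnnotation]; ok {
		m.Status.ProviderID = &id
		delete(m.Annotations, providerIDAnnotation)
	}
}

func main() {
	m := &Machine{Annotations: map[string]string{providerIDAnnotation: "aws:///us-west-2a/i-0abc"}}
	reconcileProviderID(m)
	fmt.Println(*m.Status.ProviderID, m.Annotations)
}
```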
A: This has actually been really useful, and I think I'll try to go summarize this discussion in the PR and see if that can help move the conversation forward and help us reach consensus in the next week or two. Like the HA discussion, it might also be useful to sit down at KubeCon and have a high-bandwidth discussion that can last longer than the ten or fifteen minutes we get in this meeting every week. So, Jason, you were asking about the current status of config and whether it'll be beta soon?
C: This was more of an outstanding question. I know we have the alpha and the beta exit criteria set up, but that seems more for the actual Cluster API implementation itself, and less about the config objects that we're defining, and I know that there have been requests for graduating at least the machine spec and/or the machine config and all of that. So I wanted to see thoughts from the group on how close we think we are to going beta, for at least the config objects.
A: Okay — so I guess I wouldn't call those config objects; I found that confusing, and I wasn't sure what you were asking about. You're asking about the Machine API definition, the Cluster API definition, the MachineSet definition, and the MachineDeployment definition — right — and how close we think those are to being beta quality. Tim had broken that out into the next question, about the status of making those things alpha, because we do have a couple of different milestones defined.
A: We have a milestone for how good the implementation is, but we also have a milestone for — let me find it real quick — making the API alpha. I think we called it "stable API"; maybe people are interpreting "stable" as different things, but I was thinking of stable API — which has five open issues — as really the milestone for where we think the API is sort of good enough to start building on top of. And I would actually not call that beta, because the beta contract is pretty strong in terms of breaking compatibility.
A: I would actually call that the first alpha, use it for a little while, try to start building more things for real on top of it, and then sort of promote it to beta after we believe that it works and we've shaken out any issues with the API. So one of the things that has been on my list is to go back and triage open issues and figure out whether there are other things that should be in this milestone, which I hadn't gotten to yet.
A: But I would love for us to be able to look at this milestone — it is milestone/1 — and be able to pull it up at the beginning of this meeting every week and say: here are the things left before we think the API is settled. Can we knock those things out? Can we assign those issues to people?
A: Justin also mentioned this, I think as part of the issue — I think below that there's creating an alpha release, which Lucas was excited about last week, so that you get a link to it from the kubeadm GA blog post — and where we are there. And I think Justin had a great response to him there, which is: we shouldn't rush, or pretend that the Cluster API is in a state that it's not.
A: Sounds good, thank you. Somebody stuck the milestone link into the notes, so if you have issues that you think should be in that milestone — I don't know how many people actually have access to change issues or set milestones; I don't know if the bot commands can do that, because as a repo owner I can just set the labels directly — but I think if you type /milestone and put the milestone name, it should apply it.
B: Glad you brought up this milestone list, because I see on here "create integration test for machine controller", and I thought about asking that question this week — but maybe we'll talk about it next week; we don't have a lot of time. But we're thinking about that right now, yeah.
A: So there's one milestone that's basically: what changes may we need to make to the API definitions for us to believe that they're done. And the other milestone is: what do we have to do for us to be convinced that our implementation of that API is reasonable. For the implementation part, we definitely want to have, you know, tests that are actually running in an automated way. For the API definition itself, the things we have listed there are things like, yeah...
A: ...there's one here about integration with the cluster registry, which is a question you brought up today — what's our strategy for listing clusters? Is that something we want to build into our API? Is that something we defer to the cluster registry API? Figure that out. Some of these issues are pretty old, because we haven't actually looked at this milestone in a while; there's one about changing the type of instance status.
A: I have no idea what that's referring to, but it's basically things like — there were things on here about, you know, kicking roles out of the top level as a top-level field, because that's not something we want to support going forward. So it's more things like: how do we tweak the API definitions to the point where they're stable? As Jason was asking: when can we say that our API is stable, so we don't have to worry about it...
A: ...changing underneath us as we build tools. And then the other milestone, which is milestone/2, is about how we convince ourselves that our implementation of things like the MachineDeployment controller and the MachineSet controller is working today, and continues working as we submit PRs to make changes.
G: We can cut another alpha release — that's kind of the beauty of alpha. My biggest concern is just making sure that we have API conversion in place, so that if people do depend upon alpha1, they get support for alpha2 on the upgrade conversions. Right now it's totally YOLO: if you're kind of doing things, or if you've actually consumed from upstream into your provider, you have no conversion apparatus that will be supported at this point in time, unless you magically hit the right SHA.
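A minimal sketch of the kind of conversion shim being asked for, using two hypothetical versions and a single moved field; the real Cluster API types and scheme registration differ:

```go
package main

import "fmt"

// Hypothetical stand-ins for two versions of the Machine type.
type MachineV1alpha1 struct {
	Name       string
	ProviderID string // flat field in the old version
}

type MachineV1alpha2 struct {
	Name string
	Spec struct{ ProviderID *string } // moved under spec in the new version
}

// ConvertV1alpha1ToV1alpha2 shows the shape of shim each field move
// needs, so an object written as alpha1 round-trips into alpha2 instead
// of silently breaking consumers.
func ConvertV1alpha1ToV1alpha2(in MachineV1alpha1) MachineV1alpha2 {
	var out MachineV1alpha2
	out.Name = in.Name
	if in.ProviderID != "" {
		id := in.ProviderID
		out.Spec.ProviderID = &id
	}
	return out
}

func main() {
	old := MachineV1alpha1{Name: "worker-0", ProviderID: "aws:///us-west-2a/i-0abc"}
	fmt.Printf("%+v\n", ConvertV1alpha1ToV1alpha2(old))
}
```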
G: So I think that should also be a consideration. We don't need to wait forever to try to make sure we get all these issues assigned; we can draw a line where we say "good enough", and we can always cut an alpha2, alpha3, alpha4 — however many alphas it takes — until it gets to the point where we feel comfortable saying we're going to broadly support this, and push to a more stable version of the API.
A: That's probably a great thing to put on that list: API versioning support. And maybe that's the biggest blocker, because everything else we can change, right? Once we have enough conversions in place that we think the chances of us actually breaking somebody — where we can't forward-convert them — are pretty small, that might be enough for us to actually say that the API is ready for another...
A: So we have a couple of minutes if people have either last questions for Hardik, or concerns or objections to the PR; otherwise we'll merge it after this meeting. I think most of you have been here for the last couple of weeks — we've discussed this PR sort of weekly for quite a while now — so hopefully there aren't too many more questions at this point that haven't been covered.
A: Okay, while we wait, we'll give people a minute to think about it or to read it. In terms of scheduling: there's next week's meeting, which is on December 5th, and the following week is during KubeCon. I'm going to go ahead and suggest that we cancel that meeting, because I think most people here will probably be at KubeCon, and that we try to schedule some time for people to sit down and chat, get to know each other, and maybe whiteboard a little bit about HA and some other things like that.
A: If anybody wants to take the lead on trying to find a place and/or time for us to get together, that would be awesome; if not, we can chat about it again next week, if people haven't started thinking about a meeting time or place. I'll also remind people that we have a SIG Cluster Lifecycle intro talk that Tim and I are giving, which I believe is on Tuesday, and then a Cluster API deep dive, which I believe is on Thursday. The deep dive is probably not, like...
A: ...the presentation is probably not going to be of interest to the people here, because you all already know all the information that gets presented, but if people want to show up and meet people who aren't normally in this meeting, and come ask questions or help answer questions, that would be great. And it's also possible that right after that talk slot would be a good time for us to get together and meet, anyway. I know at the last KubeCon, which we did in Austin...
A: ...we planned the sort of SIG Cluster Lifecycle meeting time right after Lucas's talk, and that worked out pretty well, because a lot of people who were interested in meeting were at Lucas's talk, and we kind of corralled people out the door and said, "hey, if you're interested in learning more, just come with us; we're going to get together for a little bit right after this." So that's something people should start thinking about, because that's coming up pretty quick.
F: So at KubeCon, you guys are going to do some sort of a get-together where you hash out some proposals — is it possible for you to also have, like, a Zoom session there? I'm on the waitlist, and I'm not sure if I'm going to make it, but I would love to be part of it. So if there is a Zoom session, that would be — yes.
A: Sometimes there's not, like, a room where you can have it quiet enough, with the background noise, for people to be able to hear remotely. But if you know that you're not going to be there, and other people know they're not going to be there, and it would be helpful for us to live-stream it, then yes, we'll do what we can. I've certainly missed these in the past — there was a get-together at KubeCon Europe last time that I wasn't at.
A: That's a great point. All right, it's 11:00, and I haven't heard any objections in chat or verbally about the Machine phases PR, so I'm going to go ahead and approve that and let it merge. Please, everyone, go back and update issues if there are things you want to comment on, and if not, we will see everyone again next week. Thanks for coming.