From YouTube: Kubernetes SIG Cluster Lifecycle 20190123 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.1easfdkp14us
A
Hello and welcome to the Wednesday, January 23rd edition of the SIG Cluster Lifecycle sub-project meeting for the Cluster API. Today we're going to get started talking about the provider ID for machines. This is pull request 565, if you're following along with the meeting notes. We've had some discussions in the past about putting in a provider-specific ID, and it was initially proposed as part of the autoscaler work. I think people agreed it was too early to do it for that reason, but there were lots of other good reasons that we might want to have this field, and we're having sort of a lively discussion about whether it should be in the spec, in the status, in both places, etc. So Hardik pinged us, I think yesterday, saying let's try to get this resolved so we can move forward, so I stuck it first on the agenda, because I thought this might take a few minutes for people to weigh in and express their opinions.
A
Yeah, I was chatting with Sam about this, trying to pick his brain about sort of best practices for Kubernetes, and he gave me a recommendation but said that I should double-check with Justin first. I'm curious, Justin, what you think; I did give you a little two minutes' warning before the meeting started to think about it. I expect spec. It would be interesting if Tim said status, but Tim also suspects spec, so I think, if Eric's point is that we should pick one and go with it, I think spec makes sense.
A
We also went back and forth a little bit about whether it should be in spec and status, because then, if you specify it, you hope that spec and status actually match, and you expect the controller to reconcile if they don't; and if you don't specify it, then the controller just posts a status, which is sort of another interesting wrinkle. But I think for now we're fine just sticking it in spec and treating it sort of the same way as the external IP in Services.
A
Alright, so I think this particular PR probably needs to have make run on it, so that we actually update the CRD documentation, and that's why it's not passing all the presubmits at the moment; but it does stick it just in the spec. So I think the changes to the API are what we just discussed, and we just need to sort of flesh it out and then get it merged.
C
Missed about 30 seconds of audio... you did? The problem is on my end. Okay, I just wanted to point out that, as we have changes like this, it would be really good to have documentation. I don't know that it has to be a blocking thing, but, like, I mean, it needs to happen, and I think part of the reason it's good is that when you write down the intent of a field, it forces you to really think about what it means.
A
Yeah, that's a great point. So I guess one question here is, you know, this particular PR came out of "let's use it for the autoscaler." If that's, my guess is, the main reason to send it, he may not be interested in pushing it through all the documentation and so forth. So one option we can do is, Hardik, if you want to create like a new PR, you know, take the existing work and run make to get all of the other bits in shape.
A
A
And that nodeName field in a pod spec can be specified by users, but generally is not. You can use it to pin pods to machines, I guess, for people who don't know; but generally what happens is you don't specify it, the scheduler sees that it's not set, goes through, assigns it, and that's what actually assigns a pod to a machine. I think that might be sort of a great way to think about it. I guess the only difference is, in this case...
A
Sort of the same controller would be assigning the value and sort of using the value, instead of having that responsibility distributed across two different things. I think that's maybe another good example where the field is in the spec, but it's set programmatically by a controller. All right, yeah, that's maybe a better analogy to use than the one I've been using about service IPs. Okay.
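The scheduler analogy discussed here, a spec field that users may set but that is normally filled in by a controller, can be sketched roughly as follows. This is an illustrative shape only; the type and field names are assumptions, not the actual types from PR 565.

```go
package main

import "fmt"

// MachineSpec sketches the proposal: the provider-specific ID lives in the
// Machine spec, analogous to pod.spec.nodeName, which users *can* set but
// which the scheduler normally sets for them.
type MachineSpec struct {
	// ProviderID is nil until the provider's actuator assigns one.
	ProviderID *string
}

type Machine struct {
	Name string
	Spec MachineSpec
}

// reconcile mimics a provider controller filling in spec.providerID once the
// backing instance exists, and leaving any user-supplied value alone.
func reconcile(m *Machine, instanceID string) {
	if m.Spec.ProviderID == nil {
		id := "example-cloud://" + instanceID // hypothetical ID scheme
		m.Spec.ProviderID = &id
	}
}

func main() {
	m := &Machine{Name: "machine-0"}
	reconcile(m, "i-12345")
	fmt.Println(*m.Spec.ProviderID) // example-cloud://i-12345
}
```

The point of the sketch is only the division of responsibility: the value lives in spec, but the controller is the usual writer.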
A
Thank you. All right, I guess that's also sort of further evidence that it should be in spec, right, if we think about it that way, and I think in a lot of ways it's sort of the same use case: where, generally, you would expect users not to ever want to set this field, but automation may want to set this field, right, which is sort of similar again.
F
Okay, right. So if we look at the type, in the Machine spec there is a taints field that is used. So if I want... so I create a machine object and I want a specific set of taints to be applied to that machine by whatever, by the actuator, right, or by whatever I'm using to provision Kubernetes...
F
...that I want applied to the node. So today the last two are possible, but the first one, the absence of taints, is not possible, and that is because, if you provide an empty slice in the Machine struct, and then you marshal it to some configuration, or even if you don't marshal it to a configuration but, you know, use it internally, there's no way to distinguish between the absence of taints and just saying "I haven't provided any value, please set the default."
F
And so, yeah, that's the issue. There's also a similar issue in the kubeadm types, and I think one way to address this is to change the type from a slice to a pointer to a slice. So when it's nil, that means nothing has been specified; when it's an empty slice, that means the user has said no taints should be applied to this node; and then, of course, there can just be items in the slice.
F
Yeah, machine spec taints. So let me actually just paste this in the chat here, and then I'll add it to the meeting notes as well.
A
These labels I expect to be there, but if other ones are there, that's also OK. Then setting no taints or labels for the machine controller means that the machine won't be created with any taints or labels, but you can still add them later and we won't remove them, which I think makes a lot of sense.
A
So I think what you're asking for, Daniel, is, in that case, if we want to say "create a machine, and it shouldn't start off with any labels or taints," that should be sort of explicit through the API: it's explicitly empty via using a pointer, as opposed to it's an empty list. Right?
F
Yeah, and the reason that I brought this up here at all is that I'm using the Cluster API types and then, you know, taking the values there and then generating a kubeadm configuration. And I found this issue in the kubeadm configuration and realized: okay, even if I fix it there, I still won't be able to pass it, you know, to indicate an absence of taints, if I'm using the Cluster API types.
H
That's... yeah, so that's what I was gonna say: can we just make it not apply taints by default, and make you specify that, right? At least in ours. We definitely have a problem in API machinery, where we have been a little bit loosey-goosey about the difference between, you know, a nil pointer and an empty list, and this has come up again and again; we really have sort of accidentally gotten them confused, and I think the other side of that coin is...
A
This is also related to what fields we can default via a mutating webhook. I know there was an issue where, if we are not using pointers, we can't default fields via webhooks. Is this a case where, if we wanted to default taints and it's a slice instead of a pointer to a slice, we wouldn't be able to default it in a webhook? Because that would be another argument for making it a pointer.
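The defaulting concern raised here can be sketched as well: with a plain slice, a defaulting pass cannot tell "user sent an empty list" apart from "user sent nothing," so it would stomp an explicit empty value. With a pointer, you only default when the field is genuinely unset. Names below are illustrative, and this is a plain function, not a real webhook.

```go
package main

import "fmt"

// Spec is a simplified stand-in for a Machine spec with a pointer field.
type Spec struct {
	Taints *[]string
}

// defaultTaints is a hypothetical default set.
var defaultTaints = []string{"node-role.kubernetes.io/master:NoSchedule"}

// applyDefaults mimics a mutating-webhook defaulting pass.
func applyDefaults(s *Spec) {
	if s.Taints == nil { // unset: fill in the default
		d := append([]string{}, defaultTaints...)
		s.Taints = &d
	}
	// non-nil (even if empty) is an explicit user choice: leave it alone
}

func main() {
	unset := Spec{}
	empty := Spec{Taints: &[]string{}}
	applyDefaults(&unset)
	applyDefaults(&empty)
	fmt.Println(len(*unset.Taints)) // 1: default applied
	fmt.Println(len(*empty.Taints)) // 0: explicit empty preserved
}
```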
A
Yeah, so I think one thing is: with the slice we have today, if we leave it empty and say that means no taints, one way to fix this would be, if we fixed kubeadm, the translation logic could say "if there are no taints here, I tell kubeadm not to use taints." But I think your other points about using a pointer are valid for other reasons.
A
I
would
say
right,
so
I
think
there's
sort
of
two
different
two
different
issues:
you're
raising
one
is
it
with
cube
ATM
when
you
could
have
a
way
to
specify
today,
so
you
remove
those
taints
from
that
node
so
that
it's
scheduled
able
and
you
you
could
do
that
without
changing
the
machine
definition
of
how
to
change
your
specified.
But
we
should
probably
also
change
the
machine
definition
of
has
danger
specified
both
for
consistency
with
cube
ATM
and
so
that
we
can
define
it.
F
Yeah, and I think in general... I'll add a link to the API conventions doc; it does describe this issue, and there are actually a couple of quite old sort of umbrella issues. One is, you know, to address these kinds of changes, like changing to pointers, in a v2 API. So it looks like this is a problem that's, you know, been around, but here we have an alpha API, so maybe we have a chance to, you know, address it.
M
Yeah, it sounded like kubeadm is not handling pointers in taint lists right, but I'm not sure. I'm just kind of confused as to where the issue with kubeadm is coming from, because it appears, according to the comment, that we do handle the empty list versus nil correctly in kubeadm. But if there's something I'm missing, if it's like the additive property that we're talking about, that isn't handled correctly...
F
Yeah, it does. So the problem isn't that kubeadm doesn't handle the input correctly; it is that it is not possible to use the kubeadm types to marshal an empty slice. So if you generate the kubeadm config by hand, or, you know, dynamically, you can do this; but if you're using a Go program to generate the configuration, then you won't have that option, because of the omitempty and the way Go handles an empty slice. Oh, that's...
A
Okay. It would be great, for those of us that may not have a chance to follow along with your issue explicitly, if you sort of come back to this meeting, maybe next week, and sort of summarize the conclusions from there. We'll try to find the issue and follow it, but that may or may not end up actually happening.
A
All right, so next I put 670 in here. So two weeks ago we discussed a process for updating project maintainers, sort of spurred by Vince sending a PR saying "I think I'm ready, please add me, so we can increase productivity." Tim gave some background about how it works upstream, and we also got some background about how it works for kubeadm, and we sort of chose to follow the same process that kubeadm does, where we are sort of cycling people both in and out on approximately, maybe, a quarterly basis, and sort of see how that goes. And so I created this PR. It's been sitting for about two weeks; I think it's ready to go in. I haven't seen any objections, but I wanted to sort of do one last reminder during this meeting, in case people haven't been here for the last couple of weeks and wanted to raise a last-minute objection; otherwise, I think it's time to merge.
L
Let's bring this up again. So we haven't decided if we want to pretty much allow multiple clusters to live in the same namespace yet, which also means, like, we will need to have, like, a strong link between machines and the cluster object. I wanna raise this also because there are, like, some other efforts, like kind of decoupling the machine and cluster, so it would be great, like, if we can tie everything together and kind of make a decision on this, so we can bring it forward.
N
Just to add to that: there's also been a discussion to basically do the reverse thing, which is, like, losing the coupling completely of machines to clusters. There's an open pull request, I'll have to find it, on which there was discussion that it would make sense to, like, move the machines completely, Machine, MachineSet, MachineDeployment if you wanted to, to their own API group, yeah.
L
Two things kind of completely conflicting with each other. I'm not sure what, like, the best answer is, because we have so many use cases; in a way, like, we kind of want to probably make everybody happy. Not sure if that's possible; like, we'll see.
C
So the last time this came up was last week, when we merged a PR to allow a nil cluster to be passed to the machine actuator, and at that time we discussed the idea that maybe we would wait some period of time to see if there are any providers which depend on the cluster, and if there are providers which don't. And then I think the question of whether or not you can have more than one cluster in a namespace, and the question of whether or not we have different API groups, are related but distinct.
C
Right, so, as an example, I saw there was a PR for the AWS provider to make the cluster optional, and in that case I looked at it and I thought: is there a good reason to make it optional for the AWS provider? I didn't see a reason, so I was kind of waiting to find out from others if they thought that there was a good reason to have the link.
O
One use case... well, there's an easier use case when you have the cluster object: if you want to delete the whole cluster, or you want to upgrade all the machines in the cluster at once. If you allow using the cluster object to make the change, like for a new version or for deleting, it makes it easier for the API to do that off the cluster. That's the only case I can think of where you want to keep it; we want to make our API easier to use in those cases.
C
I think it's a really good point. So that's another area where I, like, I would like to see the cluster object fleshed out more, and have better documentation regarding its purpose, and one really good use case would be if the cluster was the de facto interface for the user when managing the creation, addition, and deletion of clusters. In the past that was... Alberto?
A
And decided to leave it, yeah. I think Vince brought up something similar when we chatted at KubeCon, which is, as he was sort of coming new to this project, it was really unclear, like, what the purpose of the cluster was, like how you should use it, how it related to machines. And I think maybe we're trying to solve, like, a number of different use cases here, and that's kind of muddling things. Clayton made a really good point, I think in this pull request, yeah, in PR 645: that API groups are sort of roughly analogous to life cycles for different components. I think everyone would probably agree that the life cycle of a machine should be different than the life cycle of a cluster, which to me argues that they should be in different API groups. And I think we've also, as we've sort of been iterating here, come to some conclusions, like they were saying: one, that machines might depend upon knowing an endpoint for a cluster.
A
They don't actually depend upon sort of everything else that we might want to put in a cluster. A cluster sort of logically might make sense to be a grouping of machines, right? So I could imagine a cluster could own, maybe, machines: if you created a cluster with some machines initially, like, it could have an owner ref for those, in which case it would make sense, if you deleted the cluster, that the machines be deleted; whereas if you created machines that were not underneath the cluster, it wouldn't make sense, right? So I think...
A
So, in my mind, like, as I've been sort of thinking about this since KubeCon, what we sort of want is: machines, which we can use either independently or inside of a cluster; we want a control plane, which we could use inside of a cluster; and we need the infrastructure for a cluster that ties everything together. And I think maybe structuring it that way would make more sense.
L
That said, like, the other thing is establishing the link; it's for this cycle, and so I'm wondering if, because it's pre-alpha right now, we should actually make all these massive breaking changes now. And I kind of volunteer myself here, I know, but I think this is kind of needed, and I agree that, like, splitting things up, the cluster will become pretty much like a central point of connection at that point, in between machines and control planes, and then it could have, like, an infrastructure tied to those other things, right, and the provider.
A
Yeah, I mean, my only concern about trying to put that in the alpha one is timelines, if we think we can get it done fast enough. I do also agree that now is a great time to make these sorts of large, sweeping, breaking changes, before we have cut an alpha one and told people to start using it. I think it would be much better if, when we cut the alpha one, we felt like we had a good organization of resources and a good separation into API groups.
A
It would also allow us to, say, maybe cut an alpha one of the machines API group sooner, if we can agree that that is in good shape, separately from having to cut an alpha of everything. I think a lot of people are interested in getting the machines-specific part done faster; even if the other part lags a little bit, that would obviously be great.
A
Vince, it sounded like you were sort of signing up to do that, so what I would suggest is maybe starting with, like, a Google Doc, sort of with a quick summary, like a design doc, saying "here's where we want to go." I think that'll be a lot easier for people to digest and agree to than trying to review code, as I assume that the code changes are getting kind of sweeping and hard to review, right?
A
So if we can sort of write it out, sort of in English, and explain to people "this is the intent, this is where we're headed," if people sign up on that, then everybody doesn't have to dive into all the code to understand the nuances of why we're doing it and how things will relate to each other. Sounds...
L
So last week we merged, like, two different PRs. One is, like, a breaking change for clusterctl: we removed all the minikube-specific flags and created, like, a bootstrapper package, and so all the flags right now, like, are pretty much agnostic to the bootstrap type, and now the default is "none." So, like, if you try to run clusterctl at this point without a bootstrapper, it probably won't work, and there are other, like, breaking changes; for example, the minikube VM driver flag, which was specific to minikube.
A
Excellent, thank you, Vince. So we've got about 20 minutes left, and one of the things we talked about last week... I think, Tim, you weren't here, but I went over sort of the different milestones and what we were using them for. We didn't go through all the different issues in the milestones, but I encouraged people to do that on their own time if they were interested, and we talked about trying to look at the milestones each week and trying to drive them towards zero.
A
So, if my memory serves me correctly, last week we had about 23, I think, open things in v1alpha1. We still have 23 open, but I just added one, so we've at least gotten one thing done in the last week, and I think there are a number of PRs in flight as well that will close some of the other ones too.
A
Hopefully next week we'll actually see that number start to drop, maybe get it below 20. I don't know, for kubeadm, what's sort of the process you guys use to go through this. Do you just share your screen and kind of walk through the milestones? Do you try to do this, like we did today? Do you try to pre-plan and say "these are the ones we think might get done in the next week, let's focus on a couple of them"? What did you find to be effective, typically?
I
We don't micro-organize subsections of things that have already been triaged for a milestone; we just kind of let them go, and then, over time, like, when we have a sync, we'll periodically go through the details and see what's left and remaining open. I think it becomes ambiguous, from an external observer's point of view, because we haven't had a release of Cluster API, what the status is.
I
I think that's the common question, common complaint, concern, that I get poked on: it's like, "when is the v1alpha1?" And from the outside world looking in, it hasn't been readily apparent, because it's always been "soon"; the question of when "soon" is done is the one that's been nebulous.
A
When we went through initially and triaged things into different milestones, we tried to assign everything in v1alpha1 to an owner, and I'm looking now and I see a couple of things that are not assigned. If you think it'd be useful to spend a couple of minutes highlighting those issues and seeing if somebody can sign up to do them... absolutely.
A
Okay, there are six unassigned issues in the alpha one milestone, so I'm gonna just read through those real quick. The first one is integration tests for the machine controller. So somebody last year suggested that we want to have an integration test where we test create, delete, update, scale up, scale down, and a bit of a stress test. I think we actually got a PR that did that sort of internally, like as a unit test. I guess the question is: do people think... well, does anyone want to sign up for this?
A
Do we need more testing than what we have in our unit tests for alpha? I guess that would be another question. I think, reading the title, we triaged this into alpha one, but as I look at it and think about the PRs that I've seen go in recently, I'm not actually sure how much more we need to do here to feel comfortable with the implementation.
C
I was gonna say, I think we could take this out of v1alpha1; I don't think we need to yet. Like, maybe it's too soon to make those decisions. Because this is a great thing, so having more tests, I think, would be better; I think it'd be appreciated, and it makes us feel more confident about the v1alpha1.
C
We might look at it and decide we have enough tests, but I don't know that anyone's looked yet, so I just don't know. I think there's value in keeping it on the list, because it highlights something that's important for v1alpha1; whether it's necessary or not, maybe we could revisit later. Okay.
A
Right, so the second one is support for scaling down a specific machine. A colleague of mine volunteered to push that from the Google side. I think there are a lot of interested parties on this one; luckily, in GitHub you can assign multiple people, so if other folks are also interested in signing up for this, please let me know, and I can also tag you on the issue. Yeah, apparently I need to add him to the org before that will actually work, so GitHub won't let me.
A
So right now it's possible to do this using kubelet flags, right? But then every provider has to re-implement the parsing of the fields and the plumbing through to kubelet flags for the kubelet to self-register with labels and taints, and the sig-auth group has said that they want to stop allowing kubelets to do that in the future, because it's a security risk. In which case, it seems like the right place to do this is in the machine controller, and have the machine controller set the labels. There's a little bit of a race condition there also...
A
Unless we somehow mark machines not ready, or something, until we have been able to label them. I think, talking to Mike Danese last year, he had a possible design for doing that that was shared with the community; maybe I should dig that up and add it to this issue. In any case, is this something that someone feels strongly about and/or is willing to take on in the v1alpha1 timeframe? There are a number of OpenShift folks that are interested in this issue.
E
Kind of... I mean, this is basically based on our discussion earlier around the same topic. I'm basically asking about whether we can use the kubeadm-specific flags; I mean, the current implementation has it stashed away, or... the question is, can a provider, with these fields, kind of handle this requirement via the kubeadm flags?
A
Yes, I think, to sort of reiterate what I said: right now a provider can do this themselves explicitly by plumbing things through to flags in the kubelet and kubeadm, but ideally we would have common code that does this, that works consistently across providers; like, we shouldn't have a degree of...
B
Yes, I think... I also feel the same. I think, if not very soon, we will anyway have to define a very generic way of dealing with the taints; actually, you can call it the propagation of things, labels and so on, from MachineDeployment to MachineSet and the Machines. Because when, in the rolling update... we have not yet seen it, but let's say we try to implement the feature.
B
Well, when you do a rolling update, what happens there, on the previous, the older machine set, the one the machines are being migrated from: you would probably want to put the taint of PreferNoSchedule, and so on. Otherwise, you will see that the pods are continuously getting rescheduled on the old machine set, and we will see that many pods are getting restarted for no reason, right? So this feature then will complement it and will be very useful, in one way.
B
But then, of course, the questions will come: what happens when some state is modified in between, and how do you know whether it already existed, and so on, right? So there, we can rather be additive in general: we prefer that I will not blindly re-add something that was removed, but I will try to make sure that what I have set is always maintained. Something like that.
B
We could, first of all... in there also, I guess, one new thing came up, and I'm not sure if it's a good idea yet or not, but we probably also might want to have kind of authoritative behavior for the specific kinds of things created by the machine controller, or rather the Machine API controllers, for example during the rollout. So this is kind of my example: well, while the rolling update is going on, you somehow want to always maintain the PreferNoSchedule on the old machines.
B
If someone goes and removes it from the machine, for some reason, while the rolling update is going on, or if the object is lost and a new copy of the object comes up, you want to still be the authority: you want to put the taint on it again, right? So you might want to think in that direction as well. There could be two different kinds of things, one coming from the user and one generated by the Machine API controllers.
A
I think maybe an elegant way to do that, perhaps, is to have the MachineDeployment controller, when you tell it to do an update, set the taint on the machine set, and then have the machine controller reconcile that. Because then you can put it in the documentation: if you're using MachineDeployments and you do a rolling update, this is the behavior that you're going to see, right? And it's not sort of hidden in the implementation of the machine controller or somewhere else; it becomes really obvious.
A
It's something you see on the machine set, it's something you see on the machines, in the API. And I think the other point that's really good also is the notion of being additive: we should be reconciling that additiveness, right? Like, it shouldn't be a fire-and-forget, like we set it once and then leave it alone; we should make sure that if you have specified that your desired state does have these labels or taints, we keep those there, but, in doing so, we don't remove other ones that users have set, right?
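The additive-but-reconciled behavior described here can be sketched roughly as follows. This is an illustrative sketch, not the actual machine-controller code: the controller re-asserts the labels it owns on every pass, restoring any that were removed, but never deletes labels it did not set.

```go
package main

import "fmt"

// reconcileLabels merges the controller's desired labels into the actual
// set on the node: owned labels are re-applied (so a removed one comes
// back), while labels users added out of band are left untouched.
func reconcileLabels(desired, actual map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range actual {
		out[k] = v // keep everything already present, user-set included
	}
	for k, v := range desired {
		out[k] = v // re-assert owned labels, restoring any that were removed
	}
	return out
}

func main() {
	desired := map[string]string{"machine-set": "old", "schedule": "prefer-no"}
	// A user removed "schedule" and added their own "team" label by hand.
	actual := map[string]string{"machine-set": "old", "team": "db"}
	got := reconcileLabels(desired, actual)
	fmt.Println(got["schedule"], got["team"]) // prefer-no db
}
```

Run on every reconcile loop, this gives the "not fire-and-forget" property: the desired labels are continuously enforced without stomping on user additions.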
C
Sure. So the basic problem is that, when you have machines... so when you have a distinction between management clusters and managed clusters, the Cluster API resources exist in the management cluster but not the managed cluster, and, as a result, the link between the Machine and the Node does not exist. That has consequences: it means things like node conditions are not copied to the machine object.
A
Given that we can assign multiple people in GitHub, I'm gonna sign you up for it, since you said that, and if other people also want to help work on this, please again assign yourself or coordinate with David to help contribute. And I agree, I think this is also really important, given sort of the way we've seen implementations evolve over time.
A
This may be not what we expected when we initially put that field in. All right, so the next one, which Alvaro created, is: optional struct-literal fields in an API don't allow defaulting. We referred to this earlier; I think there's agreement that we should fix this, and we just need someone to create the PRs and go ahead and fix it. Alvaro, you created a first PR here that's referenced, but I think we sort of need to do it more consistently across the API surface.
A
I'm gonna assign that one to you. And the last one: we labeled this one as "awaiting more evidence," which Tim had mentioned; it's something about using kubeadm. Somebody filed an issue, but we're waiting for them to tell us more about it before we bother trying to triage it further, so I think this one we can ignore for now.
A
So one last thing I'll do, as we have two minutes left: we have two issues that don't have a milestone, so we can look at those briefly and triage them. So, "cluster validation fails even though a cluster has been created": I believe there is a PR open to try and fix cluster validation, so I'm gonna optimistically throw that into this milestone; we can kick it out later. The second one is that the cluster deployer should support creation of clusters with an HA control plane. Vince, or...?
L
Okay. Oh, I think... I think this is because, like, we hard-code that, or, like, we throw an error when you have multiple control planes, which is not ideal. I mean, the AWS provider actually has support for HA now, in some way; Chuck may speak more to that, but I think we probably need to, like, support HA upstream for it.
M
It's not blocking. This is sort of a decision that I think we were talking about: if cluster... sorry, clusterctl... I think this is actually related to the cluster controller, is that right? The cluster controller, yeah. So the question is whether clusterctl is the tool we actually want to use in the future. It's not blocking right now, because of the phases work, so this doesn't really matter very much to the cluster API, and whether or not a cluster, like...
A
I will stick it in "next," and, yeah, maybe we can close it as obsolete based on the other one. Yeah, excellent. All right, so we're down to zero issues that need triage, which is excellent. We are also out of time, so we'll go ahead and wrap up the meeting. Thanks, everyone, for coming. If you have an issue in the alpha one milestone assigned to you, please try to, you know, make some progress on it over the next week.
A
At the end of the meeting: I haven't seen anybody object to merging the maintainers PR, so I'm gonna go ahead and do that now, and then, you know, hopefully that helps us keep the project velocity going. And again, thanks, Vince, for all your contributions; we're excited to have you as a maintainer. All right, thanks, everyone, I'll see you guys next week.