From YouTube: Kubernetes SIG Cluster Lifecycle 20181107 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.5c21ziyjlves
Highlights:
- Review the proposal for provisioning logic
- Tests for machine set and machine deployment controller deleted during CRD migration
- Switching the yaml parser to the new location
- Provider ID in machine status
- Renaming provider config to provider spec
- Requiring doc updates for API changes
- Adding context to actuator methods
- Using kubelet args to set labels and taints on nodes
A
Hello and welcome to the Wednesday, November 7th edition of the Cluster API subgroup meeting, the first part of SIG Cluster Lifecycle. Let's see, the first thing I stuck on the agenda: we got a notification from GitHub, or maybe just I got it, because I think it only goes to administrators of the repository, that we have a security vulnerability in our repo. I poked at this a little bit last night, and it looks like one of the dependencies that was introduced when our GitBook PR merged needs to be updated to a new version.
B
We went through the document today and left and asked for comments on what is left so far. We are going to leave it like that for three days or so, and then after that we are going to create a PR with a markdown file to have it in the repository. So this is a sort of final call, if somebody wants to go over it once again and check: is it okay?
A
Great, thank you for bringing that back to everyone's attention. I know I'd had it on my list of things to go back and look at and had not gotten to it yet, so I will definitely try to take a look in the next couple of days. Hopefully other people have a chance to as well. Alright, Alvaro: during the CRD move, most tests for the machine set and machine deployment controllers were deleted. Was this intentional?
C
No, that was not intentional. That must have been an oversight.
C
I can't tell, because the tests don't exist anymore, so maybe it still works as it did, maybe it doesn't. But one thing I noticed when looking into that is that apparently we don't have a fake clientset anymore, because kubebuilder assumes we are always going to use the test API server and etcd, but those tests that were removed actually relied on the fake clientset, like checking the exact actions that were taken and their arguments. So yeah, I think that could have been one reason why.
A
Okay. I know I fixed up some of the tests during the migration that were using the fake client to instead use an actual dynamic client, like you said, pointed at sort of a fake API server and etcd, or maybe it's a real one that gets spun up as part of testing. But you can actually use a real client to verify the behavior also, right? You can, in a sort of sandbox environment, create and modify and delete resources and verify that the side effects are what you expect.
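To make that concrete, here is a minimal sketch of that style of test, assuming controller-runtime's envtest package (the machinery kubebuilder-based projects use to spin up a real API server and etcd); the resource and names are purely illustrative:

```go
package machineset_test

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestCreateAndVerify(t *testing.T) {
	// Start a real kube-apiserver and etcd for the duration of the test.
	testEnv := &envtest.Environment{}
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("starting test environment: %v", err)
	}
	defer testEnv.Stop()

	// A real client pointed at the sandbox API server, not a fake clientset.
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		t.Fatalf("creating client: %v", err)
	}

	// Create a resource and verify the side effect is what we expect.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "test-ns"}}
	if err := c.Create(context.TODO(), ns); err != nil {
		t.Fatalf("creating namespace: %v", err)
	}
	got := &corev1.Namespace{}
	if err := c.Get(context.TODO(), client.ObjectKey{Name: "test-ns"}, got); err != nil {
		t.Fatalf("reading namespace back: %v", err)
	}
}
```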
A
I sent a couple of PRs in here next that are probably worth talking about. The first one is Jason's, which is to switch our ghodss/yaml dependency from Sam's repo to the kubernetes fork. I'm just curious if there was an update on that. I know that dims has been doing a lot of work on the main repo to try and switch over to the fork, and we were sort of waiting for that to land, and I haven't been paying attention, so I don't know where we are, if we're unblocked or not.
D
We should be completely unblocked now. There was a new release cut of the forked version that sits on sigs.k8s.io, and there was some issue with the test harness that we had; one of the images that we were using was using Go 1.9, so the test infra changes are in. It should be ready to merge. It's more of a let's-get-this-over-with in advance, so that when we update to a later version of kubebuilder, we can then officially drop the other yaml package from our dependency chain. Right.
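For reference, the switch under discussion is essentially an import-path change, since the fork keeps the same API surface; a minimal sketch, assuming the fork location discussed above:

```go
package main

import (
	"fmt"

	// Previously: "github.com/ghodss/yaml". The kubernetes fork exposes the
	// same Marshal/Unmarshal functions, so only the import path changes.
	"sigs.k8s.io/yaml"
)

func main() {
	out, err := yaml.Marshal(map[string]string{"hello": "world"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```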
A
On the other hand, multiple people said they still wanted to add this field, not because of the autoscaler, but in particular because it would allow us to directly track, through the machine controller or actuator, the individual identity of a cloud resource that we had provisioned, which makes bookkeeping easier. So I think with the, you know, the provider status, we could do it there, right?
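Roughly, the field being debated would look like this; a hypothetical sketch, not the merged API:

```go
// Illustrative shape of the proposal: an optional provider ID on the machine
// status, mirroring what the kubelet reports in node.spec.providerID, for
// example "aws:///us-east-1a/i-0abcd1234". Whether it belongs in status or
// spec is exactly what the group is debating here.
package v1alpha1

type MachineStatus struct {
	// ...existing status fields elided...

	// ProviderID is the unique, provider-assigned identifier of the cloud
	// resource backing this machine, populated by the machine actuator.
	ProviderID *string `json:"providerID,omitempty"`
}
```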
A
So I wanted to sort of reopen the discussion here. It sounded like, from what I was hearing, that we should not merge this PR for the autoscaler work, but that people were interested in merging it for other reasons. So let's continue that conversation and see: do people think we should merge this PR now, or should we be waiting?
E
Beyond that, it's more than machine status, though; it would percolate through your cluster configuration all the way down to potentially the other sub-elements, because it was partially for inbound on creation. If you had a multi-provider controller, the original conversation we had last week was that on inbound we were talking about annotation versus field, and whether the other controllers could ignore it; if they are all within a single control plane, they could ignore the other providers.
G
So one question on that: who would be responsible for populating this provider ID in this field? Is it going to be the actuator code that will discover the ID and then put it in there? Who does that?
A
The machine actuator; for example, an individual provider would populate that.
G
Yeah, I mean, just confirming, because you mentioned that this might be the provider ID that the kubelet might also report. So if that were the case, then there's probably a different mechanism as well by which one could detect it and then get it. So, okay, that's fine; it makes sense if the machine actuator is putting that provider ID in. In fact, just on a side note: currently, for example, in the vSphere provider implementation...
A
So
say
that
one
more
time,
so,
if
your
when
you
say
if
you
want
to
do
a
pivot,
that
it
wouldn't
carry
over
so
if
I
have
a
controller,
is
managing
a
machine-
and
it
knows
the
the
provider
ID
for
that
machine
and
I
basically
wanted
to
transfer
ownership
of
that
machine
from
the
controller
may
be
running
the
bootstrap
cluster
to
a
controller,
that's
running
inside
the
cluster,
wouldn't
the
provider
ID
for
that
machine
be
the
same
because
that
controller
is
basically
taking
over
ownership.
It.
G
If it really wants to use it, I mean. This is primarily because, you know, we did something similar within the purview of the provider-specific part in the status. But then we realized that this would be the problem, and we basically switched; as a solution, we switched to enriching the spec itself to have that reference, so that it can be portable, essentially.
G
So
what
we
did
essentially
is
named
flag
for
the
vSphere
pro
cloud
provider:
specific
implementation,
the
the
provider
spec.
We
basically
added
an
additional
field
like
a
like
a
machine
reference
machine
ref,
which
is
essentially
an
opaque
string
for
us,
the
provider
level
inside
it.
But
then
that
is
what
we
populate,
and
that
is
what
we
use.
For
example,
if
you
know
once
we
identified
the
underlying
infrastructure
and
there
and
the
unique
identification
of
that
VM,
then
that's
what
we
put
in
there
and
that
becomes
our
portable
blob.
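A hypothetical sketch of the shape being described; the field name and type are illustrative, not the actual vSphere provider code:

```go
package vsphere

// VsphereMachineProviderSpec sketches the approach described: the
// provider-specific spec carries an opaque reference that the actuator
// fills in once the VM is realized.
type VsphereMachineProviderSpec struct {
	// ...other provider-specific fields elided...

	// MachineRef is an opaque string (for example the vSphere VM UUID)
	// identifying the realized VM. Because it lives in the spec, it travels
	// with the object when ownership pivots from the bootstrap cluster to
	// the target cluster.
	MachineRef string `json:"machineRef,omitempty"`
}
```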
G
So essentially what happens, at least the way we have it in the vSphere cloud provider implementation, is: the Exists method, for example, looks for that reference. If it exists, then it actually goes and verifies whether that reference is actually valid in the underlying provider infrastructure, and if it is, then it returns yes, it exists, and it doesn't do anything more. So that way it avoids the re-creation of that resource again.
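And a sketch of that Exists flow, reusing the hypothetical MachineRef field from the previous sketch; the real actuator method takes the full Cluster and Machine objects and decodes the provider spec first:

```go
package vsphere

// vmClient abstracts the single infrastructure lookup this sketch needs.
type vmClient interface {
	VMExists(ref string) (bool, error)
}

// exists sketches the flow described above.
func exists(c vmClient, spec VsphereMachineProviderSpec) (bool, error) {
	if spec.MachineRef == "" {
		// Never realized: report "not found" so the controller calls Create.
		return false, nil
	}
	// Verify the stored reference is still valid in the real infrastructure;
	// if it is, report existence and do nothing more, avoiding re-creation.
	return c.VMExists(spec.MachineRef)
}
```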
A
It's a little bit of an edge case, but if somebody, if I manually set that field to a value and posted the machine, would you notice that that value was incorrect and basically clear it out and replace it, or would you say that's an invalid spec, that the machine you've created doesn't exist, and just basically throw it into the error state?
G
Actually, that's a specific use case that one of the products I'm working on kind of utilizes. So the idea is: you create the master. Let's say for us, for example, for vSphere, you just have an easy mechanism for deploying a VM; in our system you have an OVF, which is an artifact that you just deploy, right? So one of the things that we're trying to do is: you deploy this OVF and then you power it on.
G
Precisely. In fact, in a normal workflow your actuator is the one which actually populates that field again once it has actually realized that machine. So in theory that's the mechanism; very similar to, for example, how we had this mechanism earlier using annotations, we just moved it from an annotation into the spec. So it's, I would say, slightly better. It may not be perfect, but that's at least the way that we came up with, and it seems to be working for us.
A
And I know people say that the services API isn't great, but this does kind of map pretty well to when you're requesting an externalized service in kubernetes. If you don't specify what IP you want to be on, then the service controller will fill it in, and it puts it in the spec; and if you do put an IP that you want in the spec, then the service controller will try to honor that request, right?
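A sketch of that fill-in-or-honor pattern; Service.spec.loadBalancerIP is the real field, but the allocator callbacks are hypothetical stand-ins for whatever the controller actually does:

```go
package services

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// reconcileLBIP sketches the services-API pattern referenced above: an empty
// spec field is filled in by the controller; a user-supplied value is either
// honored or rejected.
func reconcileLBIP(svc *corev1.Service, allocate func() string, canHonor func(string) bool) error {
	if svc.Spec.LoadBalancerIP == "" {
		// User expressed no preference: the controller fills in the spec.
		svc.Spec.LoadBalancerIP = allocate()
		return nil
	}
	// User asked for a specific IP: try to honor it, error out otherwise.
	if !canHonor(svc.Spec.LoadBalancerIP) {
		return fmt.Errorf("requested IP %q cannot be honored", svc.Spec.LoadBalancerIP)
	}
	return nil
}
```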
A
I mean, ideally we'd be like Tim, right? Tim Hockin is the one who wrote the services API, and I've heard him mention verbally a couple of times that he's not particularly happy with it and would love to redo it, but it seems unlikely to happen, since it's such a core part of kubernetes and really hard to change. So that would be the ideal thing. I think one other thing is, as we sort of get to the state where we think that our API is in pretty good shape...
A
We
probably
should
take
it
I,
don't
know
if
the
architecture
is
the
right
place
or
if
there's
a
set
of
API,
reviewers
or
API
consultants
that
we
could
go
to
to
say.
This
is
what
we
think
the
API
should
look
like.
We
can
do
and
why
we
put
these
fields
in
these
particular
places.
But
you
know
please
give
us
some
feedback
on.
A
...how this fits into the kubernetes ecosystem, because we do want it to feel like the rest of the kubernetes APIs. And I know there are some burgeoning talks of creating a better API reviewer system from SIG Architecture, and we don't necessarily need a review in the sense that this doesn't go into core and doesn't have to be approved by that set of folks.
A
But I think it would be useful to get their feedback and make sure that they think we're headed in the right direction. And before this was published publicly, Jacob and I sat down with him and sort of walked through the API, and definitely got some feedback from him on making sure we were headed in the right direction.
G
So, just out of curiosity: the other cloud providers, for example, say GCE or OpenStack or any other one, when they create a VM or an instance on the underlying infrastructure, there must be some way they're enriching the object on the cluster API side to reflect the fact that this is the unique ID that has been generated. Maybe folks who know more about the other providers can chime in: how is that happening?
F
Just to confirm what you're saying: we also use an annotation, and I think we should at some point (this is pointed out in the GitBook) decide if we think it's acceptable and codify that as an expectation for consumers of the cluster API, or replace it with a different solution. But right now that's the de facto standard.
A
Yeah, I think this is one place where, if everybody's using an annotation and that becomes a de facto standard, it does kind of make sense to make that an actual field rather than an annotation. And maybe this is again like solving that problem, where we wouldn't need that annotation.
D
We do that by applying some tags on the AWS instance, so that when we go back in to create, it will attempt to query for those tags that we specify, and we use the cluster name and the machine name to help limit the instances that we return. And then that will populate or repopulate the instance ID in the status. Okay.
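A sketch of that tag-based lookup using aws-sdk-go; the tag keys here are illustrative, not necessarily the ones the AWS provider actually applies:

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// findInstance recovers the instance backing a machine by querying the tags
// applied at creation time, limited by cluster name and machine name.
func findInstance(client ec2iface.EC2API, clusterName, machineName string) (*ec2.Instance, error) {
	out, err := client.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag:cluster"), Values: aws.StringSlice([]string{clusterName})},
			{Name: aws.String("tag:machine"), Values: aws.StringSlice([]string{machineName})},
		},
	})
	if err != nil {
		return nil, err
	}
	for _, r := range out.Reservations {
		for _, i := range r.Instances {
			return i, nil // first match; real code would also filter on instance state
		}
	}
	return nil, nil // not found: safe to create
}
```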
A
So basically the problems that were mentioned, you guys have also solved, but in a slightly different way. Yes? Cool, excellent, that's super helpful. I think it's really good to also see how different people are approaching the same problem in different ways, so I think that's great.
A
All
right,
so
it
sounds
like
there's
a
sort
of
counter
proposal
for
this
PR,
which
is
instead
of
putting
a
provider
ID
in
the
machine
status.
We
should
potentially
look
at
putting
in
a
machine
spec,
so
I
think
we
should
again
not
merge
the
PR.
Quite
yet
I
don't
know
if,
if
someone
wants
to
write
up
a
issue,
a
short
doc
talking
about
the
true
dreamer
approaches
and
why
we
wouldn't
want
to
pick
one
versus
the
other
so
that
we
can
try
to
to
move
this
forward
because
I
do
I
do
think.
A
If people are comfortable merging it before he has a chance to test it, then that's fine with me. I do think it's a good exercise to try and figure out, when you make this sort of change, how do we verify that it's not going to break everybody before we put it in? But I think the change is relatively innocuous.
F
Without him here, I don't know how we will go down the path of resolving this. I just had a comment on the back of that: I do think explicit fields are better than implicit checks for consumers, and that's one axis on which we could evaluate this. And then I was trying to think: we have this problem with a number of open PRs and issues, similar to the annotation finding earlier for the provider ID, and I was thinking about what different things we might require in order to determine whether something should be merged.
A
Yeah, I think I agree with what you're saying, and now that we also have a place to put that documentation, I think that would be sort of a good requirement to put on top of people. I think it's fair to let people create their PR without the documentation to start with, so that we can have a discussion at the PR and agree that we should move forward with it; and once it passes the "yes, we think this is a good API change," then we should write documentation before we merge.
G
So one question on this thing: the node ref that we've been talking about, that is only available for the local cluster. I mean, for the remote cluster case as well, since we have the kubeconfig for the remote cluster that is being utilized by the cluster API controllers anyway, would it be that bad of an idea to say, you know what...
G
Let's just capture the node ref coming from that remote cluster, let's put it in the same place for tracking, and whenever the corresponding controller or actuator is looking into that node or that machine, from the machine they can see which cluster it's a part of, and from the cluster they can see whether it's a remote cluster or not, or some sort of way, and then they can appropriately use the right kubeconfig to actually resolve that node ref into the actual object.
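A sketch of that resolution path; the signatures follow current client-go, so details may differ slightly from the code base at the time:

```go
package noderef

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// resolveRemoteNode resolves a stored node ref into the actual Node object,
// using the kubeconfig the cluster API controllers already hold for the
// remote cluster.
func resolveRemoteNode(kubeconfig []byte, nodeName string) (*corev1.Node, error) {
	cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	return cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
}
```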
G
If they want to do something more with it. I mean, just my thought. There are probably some holes there that I've not thought about, but it's something that bothered my mind: maybe it isn't that difficult, or maybe there's some downside to it that makes people think it's a completely bad idea.
G
I think it has a type, I think it does have a node UID, I think it does have the ID and the type Node, but it definitely does not have the information about which cluster it is coming from. But what I'm saying is: from the machine object, which this thing is part of, we can always go and say which cluster this machine object belongs to.
A
Okay, I see some plus-ones in various formats in chat. I think, David, it might be a really good thing to put some of those expectations into the GitBook, and then we can just link PRs and say, hey, you need to add documentation, and put a link to where the GitBook describes what that documentation should look like. This is a really good strategy people use for code reviews, when they can say, like...
A
...oh, this is a common comment for Go code reviews, and you can just point to this standard set of normal comments, right? So if we can put a short comment on a PR and say, hey, you need docs, and then we put a link into our GitBook, I think that's a good way to do it, instead of trying to re-describe to everybody what we expect on every PR.
H
So, while we are talking about breaking changes: I've been playing around with the actuator, and I noticed that currently there's no way to actually signal to the providers that they need to cancel what they're doing. For example, on the cloud provider side, in the cloud controller manager interface, I think all the methods pass a context already. So this is a really good practice that we probably want to adopt too.
A
Yes,
I
definitely
agree
it's
interesting
as
I'm
going
through
the
some
of
the
changes
for
CRTs,
and
you
look
at
the
DQ
builder
code.
All
the
client
calls
take
a
context
and
basically,
all
of
them.
We
just
pass
context
top
background,
because
there's
no
context
plumbed
through
the
stack,
which
is
really
unfortunate.
His
context
that
background
isn't
particularly
useful.
Like
you
said
you
can't
you
can't
get
tracing
through
your
code,
you
can't
cancel
it
or
anything.
A
I
think
would
be
awesome
if
we
put
context
and
the
actuator
methods-
and
you
know
it
won't
be
immediate,
but
we
can
start
like
sort
of
plumbing
that
down
to
the
actuators.
So
when
they're
actually
making
calls,
it
would
be
good
to
actually
start
plumbing
it
up,
also
so
that
the
rest
of
our
code
actually
has
contacts
that
are
consistent
through
call
chains.
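A sketch of the proposed shape, not the interface as it stood at the time; the import path is an assumption based on the repo layout of that era:

```go
package machine

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// Actuator shows the same machine actuator methods with a context.Context
// threaded through, so callers can plumb cancellation and tracing down into
// provider calls instead of passing context.Background() everywhere.
type Actuator interface {
	Create(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Delete(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Update(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Exists(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) (bool, error)
}
```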
A
I know all of the auto-generated stuff from kubebuilder uses context.Background(), which is very surprising to me. So yeah, I think that's a prime PR opportunity. I agree, yes, context.TODO() is the right way, because that's the same thing as Background, except that it lets you know that you should actually be changing it. Thanks, Justin. Siddharth, you've got the next thing, which is the last thing on the agenda. We have about 30 minutes left, so if people want to try to squeeze something in at the end, please add it now.
G
So
I
opened
this
issue
a
while
back.
The
basic
idea
is
that
when
we
define
the
machine
spec
or
the
machine
definition
in
that
we
can
sneak
in,
for
example,
things
like
what
kind
of
things
do
you
want?
What
kind
of
Labor's
do
you
want?
That's
part
of
actually
the
Machine
object
itself,
not
even
this
back
now.
The
point
is,
even
though
you
may
specify
that
those
are
not
realized
in
the
real
world
at
the
moment.
G
So, for example, even though you might say, I want this taint or this label on this machine when it gets created, that doesn't really happen. So as I was trying to think about some different ways, one thing that I kind of stumbled upon was: okay, maybe we can use, how about using...
G
...let's say, the kubelet args themselves to provide that. It's not the perfect way, because again, it doesn't solve the problem of what if, after the fact, after the provisioning has happened, I want to modify those and then get that reflected; it doesn't solve that problem. But at the bare minimum, when you're bootstrapping and bringing up brand-new nodes, I think at that point in time at least they will be reflective of the intentions that the user wants, and we will be able to do that properly.
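A sketch of that stopgap: rendering the machine's desired labels and taints into kubelet flags baked into a provider's generated bootstrap arguments. The flags --node-labels and --register-with-taints are real kubelet flags; the helper itself is hypothetical:

```go
package bootstrap

import (
	"fmt"
	"sort"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// kubeletArgs renders desired node labels and taints as kubelet flags that a
// provider could splice into its node bootstrap script. Applied only at
// registration, so later edits to the machine are not reconciled.
func kubeletArgs(labels map[string]string, taints []corev1.Taint) []string {
	var ls []string
	for k, v := range labels {
		ls = append(ls, fmt.Sprintf("%s=%s", k, v))
	}
	sort.Strings(ls) // deterministic output for the generated script

	var ts []string
	for _, t := range taints {
		ts = append(ts, fmt.Sprintf("%s=%s:%s", t.Key, t.Value, t.Effect))
	}

	var args []string
	if len(ls) > 0 {
		args = append(args, "--node-labels="+strings.Join(ls, ","))
	}
	if len(ts) > 0 {
		args = append(args, "--register-with-taints="+strings.Join(ts, ","))
	}
	return args
}
```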
D
One concern I have with adding additional kubelet args is, as part of the AWS work, we want to expose kind of the kubeadm config through our provider config, and if you can specify the kubelet args on the top-level object and then also in the provider config through kubeadm, that would require us to actually reconcile those two, which would cause some complications there.
C
Yeah, another point is that, if I remember correctly, right now they are documented as intended to be applied once; whenever a user or controller or whatever changes the labels or taints, the controller should not reconcile them to what's on the machine. And if you would put them as an arg on the kubelet, whenever the kubelet gets restarted, the taints would be there again.
E
It's a very difficult thing to do once you already have a kubelet established, and percolating it to the top-level API is an administrative operation. I don't know how you change it on the node itself in a programmatic fashion without actually having administrative ACLs to do it, right? You're going to have to give privileges to the controllers in order to do that, which means that your controllers could own everything, which they probably could right now, but...
A
I think the point being, if we don't use the kubelet args to set taints and labels, and something out-of-band like the machine controller is responsible for setting those things on the node, then the node doesn't even have to have the permission to be able to set its own taints and labels at all, right?
I
I think there is an opportunity there to fix this problem, as you say, Robby. I think one thing we definitely want to avoid is a node that we intend to be tainted coming up as untainted, and then there being a window in which, say, user workloads land on a super-privileged, secured node type of thing. So we need something like that today.
G
So, essentially, okay, so what Justin said, right? We don't want, for example, especially a node that comes up without a taint that it should have come up with. At the moment, the best way I know of doing that would probably be the kubelet args, because that's pretty much what the node comes up with, so there is no window of opportunity there for it to miss it.
G
Now, from the implementation point of view: anyway, right now, as things stand today, all the scripts that bootstrap these nodes are provider-specific. Literally every provider has to generate this shell script that runs inside these nodes, that brings up the kubelet and everything and sets up the networking.
G
Let's do that eventually, but maybe in the shorter term, would it be a reasonable thing to say that, for providers that do care about these taints and labels, one reasonable solution would be, within that provider code where they're creating those taints, they can actually just make it part of the kubelet args for the time being, until we get to maybe a better solution down the road?
G
I agree. Yes, if you were to use the kubelet args, it's just a first-time, one-time deal; it's not something that will hold valid for the future lifetime of that object, and if somebody mutates it, then that's not going to get reflected. But if that is the case, maybe by documentation, or maybe by making new fields, we could actually signal that this is a one-time thing that gets applied on each bootstrap, like bootstrap labels and bootstrap taints.
G
I reckon we should talk to SIG Node and find out their timeline, especially because we actually have an opportunity to offer them a solution here, right? It is perfectly reasonable for us to write a node admission controller which adds taints, and, I guess, an admission controller to prevent mutations of machine deployments.
A
Yeah, I think Daniel also had some questions about what these labels and taints meant for nodes, because if we were trying to actively reconcile these things and people were trying to change them, it would be frustrating if other automation was trying to add labels and we were overriding those; that would be bad. And I think you proposed sort of a merge strategy. So one question is: are the fields in the machine API...
A
After
the
fact,
I
think
it
does
make
sense
to
have
them
in
when
I
describe
what
a
machine
is
and
and
how
that
node
should
appear
in
the
system
to
put
the
labels
and
paints
there,
because
that's
like
a
nice
declarative
place
to
put
them
initially
right,
and
so
the
question
is:
is
that
is
that
actively
reconciled
or
not
like
art?
Should
those
fields
be
immutable
and
should
they
be
only
applied
once
that's
a
real
question,
a.
A
So, Siddharth, I know you brought this up. Are you interested in trying to keep driving this forward toward the broader conversation with the SIGs and potentially writing a KEP? Or are you looking for an answer to: is it okay if the vSphere provider today just plumbs this through kubelet args, even if nobody else wants to do that, while we wait for a better solution? I'm curious what you're trying to get out of the question here.
G
Now, as far as the broader conversation about solving this in a different way: I do like the idea of the admission controller; I think that definitely has quite a lot of value. I'm not sure if I know the relevant people in the different SIGs at this point in time, because I am not really participating in the different SIGs altogether, to really reach out. So maybe somebody else who has a little better connection can kick off things.
A
Okay,
I
mean
I
think
to
answer
your
your
first
part
I,
think
using
cubelet
args
in
you
know
your
provider
for
now
to
solve
this
problem
sounds
reasonable
to
me.
I,
don't
know
that
it's
it's
a
pattern
that
we
want
to
sort
of
force
upon
everybody
and
to
codify
and
to
actually
resolve
the
linked
issue.
I
think
for
that
we
do
want
to
have
the
larger
discussion,
so
I'd
say
if
blocks
you
for
now
that
sounds
reasonable
and,
and
we
should
also
in
parallel
start
the
discussion
with
Stig
off.