From YouTube: Kubernetes SIG Cluster Lifecycle 20180613 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xil9madvokmo
Highlights:
- First release of clusterctl
- Status of clusterctl documentation
- Roles in the Machine object
- Specifying per Machine kubelet configuration
- Adding some networking configuration to machine status
- Integrate better with the IaaS control planes (e.g. GKE, AKS, EKS)?
- How does someone new try out the Cluster API?
A: Hello, everyone, and welcome to the Wednesday, June 13th edition of the Cluster API working group breakout meeting from SIG Cluster Lifecycle. The first thing I put on the agenda today is just a brief recap. Last week we talked about cutting the first release of clusterctl, and we did that last Thursday, so right around the day after this meeting. We found a couple of bugs, and I cut another release on Monday of this week, so we have two releases.
A: You can see a link to where those are on our GitHub repo, and you can see the release notes that I put together for those. If you have any suggestions about how to make better release notes, or if you want to follow up on the release process in general, there is an issue that was linked last week about the release process, and we should probably come back to that when Kris Nova is on the line.
B: I noticed that in the cluster API repo, the other two of the primary binaries that are generated — the controller and the API server — do not have READMEs. So I wondered if there is a different, authoritative source for documenting clusterctl other than the relatively terse documentation in the README right now. And then, secondly, if that README is the authoritative source, do we want to put documentation for the containerized version there? How do we envision recommending the primary way to run this?
C: Do I set up the machine that the docker image runs on in a particular way for it to work well — can I SSH there, etcetera? Those sorts of pieces of information, I think, would be helpful for folks who want to use the docker image, so they know what the requirements and restrictions are, how to use it, etc. I anticipate people will want to use clusterctl directly, and I also anticipate some people might want to use it in a docker image.
B: So let me restate that to make sure: documentation for the container version of clusterctl should go in the same README in the command directory, and we want to maintain parallel documentation in that same README for how to run it as a non-containerized binary?
C: Yeah, in terms of where the documentation for the container goes, I don't actually mind whether it's part of the README for the clusterctl binary itself or whether it's in a separate page that links in one direction or the other — it doesn't matter to me. So feel free to put it as part of the official clusterctl README, and feel free to put it in a separate README that sits where the docker image is. I have no preference.
A: Do either of you have an opinion on what we think would be the recommended way to run clusterctl? Do we think we'll get to a point where we recommend people run it in a docker container, because it removes dependencies on how the host is configured? At some point in the future, I mean — it's great to have the flexibility to run it in lots of different ways, but there's also value in saying, "here's the way you should run it."
C: In general, if the docker container contains things set up in a good way for clusterctl, it would be good for people to use it, because then you have consistent environments running clusterctl. The reason I don't say that we should all use that docker container is that the image is probably tied to something like Linux or whatnot, and I feel like there may be people who want to run clusterctl in environments other than Linux.
B: Even still, I wonder what we should promote as the official or recommended way, because that's the documentation we have to put first and make sure we maintain. And I'm not even sure I really see a strong need to document how a developer might not use the container. If you wanted to run this in an environment that wasn't Linux, I would say that's a feature we should add — we need a non-Linux container that we can support.
A: I guess that's kind of what I was trying to get to: even if it works in multiple ways, it'd be great if we had one way that we told people to do it. There may be other ways you could do it, but we don't spend a ton of effort making sure those continue working indefinitely, and we have documentation saying, "this is how you should do it."
A: And yes, you could do it differently. Right now the documentation says "run the binary," and you could run it in the container if you want to figure out how to do that, but we aren't necessarily going to spend a lot of time on that. We could flip it and say we think you should run it in the container, and if you want to run a bare binary, especially as a developer — maybe we have a README that's sort of deeper in the tree, that's less end-user facing.
C: It looks like all that's happening here is the clusterctl binary is in a docker image. If it had more of the prerequisites set up for clusterctl, I think we should definitely use the container as our standard and our first choice. But if it's just a binary in an image, then I'm a little bit less compelled to make that our first, default choice for how to run clusterctl, because then you have to kind of go into the container to run it.
B: I tend to agree — that's why I didn't want to spend too much time on this. Maybe the action item is that I should come back with more detailed documentation, which would have benefit independent of the choice, and then we can revisit. If that documentation makes it valuable enough, then we can make that the recommended way. But right now I agree that the patch doesn't do much — I mean, it literally was a cut and paste or something.
A: I think documentation would be great, because that will sort of elucidate the end-user experience of what we think this should look like, and then we can make the patch actually drive toward that end-user experience — as opposed to the other way around, where we're creating a change but aren't really sure what the value of that change is, right? Yeah.
C: Yeah — maybe I should take a step back and ask: is the goal of the docker image to be clusterctl plus all the dependencies pre-installed, or something like that, so that the only requirements on the operating system are that you're running Linux and you have docker, and therefore you can run clusterctl? Or is this only supposed to be a thin layer with just the clusterctl binary in it, for the future?
A: Okay, so I think the action there is to maybe write the docs first, in terms of "here's how we think this should work," and once we agree on that experience, then we'll make the code match. And Jessica, you're involved closely here — you signed up to be the reviewer of that experience, if that's okay with you? Sounds good.
E: We had some discussions here, with some other folks, and there were some doubts about even running the aggregator, because kubeadm is very good — I mean, the plain API machinery is very good for standing it up. So there are some talks about even this thing. And my question is pretty much: do we actually need roles, or is it only for kubeadm? Because there are other solutions right now that don't run kubeadm.
A: We've talked a lot in the past about introducing new fields by putting them in the provider config, and if we see consistency, we can elevate those into the API. I almost wonder if we should just take roles and demote them into the provider config for the places we need them, until we can decide whether we actually need them in the long term, and remove them from the top-level API.
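To make the "demotion" concrete, here is a minimal sketch of the idea just described — the field names (`roles`, `providerConfig.value`, `machineType`) are illustrative, not the actual cluster-api schema:

```python
# Hypothetical sketch: moving "roles" out of the top-level Machine spec
# and into the opaque providerConfig blob, so only providers that need
# the hint (e.g. for upgrade handling) interpret it.

# Before: roles as a first-class, top-level field.
machine_with_top_level_roles = {
    "apiVersion": "cluster.k8s.io/v1alpha1",
    "kind": "Machine",
    "metadata": {"name": "controlplane-0"},
    "spec": {
        "roles": ["Master"],  # every provider must honor this
        "providerConfig": {"value": {"machineType": "n1-standard-2"}},
    },
}

# After: the role hint lives inside providerConfig.
machine_with_demoted_roles = {
    "apiVersion": "cluster.k8s.io/v1alpha1",
    "kind": "Machine",
    "metadata": {"name": "controlplane-0"},
    "spec": {
        "providerConfig": {
            "value": {"machineType": "n1-standard-2", "roles": ["Master"]},
        },
    },
}

def is_master(machine):
    """Provider-side check once roles are provider-specific."""
    value = machine["spec"]["providerConfig"].get("value", {})
    return "Master" in value.get("roles", [])

print(is_master(machine_with_demoted_roles))             # True
print("roles" in machine_with_top_level_roles["spec"])   # True
print("roles" in machine_with_demoted_roles["spec"])     # False
```

The trade-off discussed later in the call applies here: the top-level API gets simpler, but generic tooling can no longer see the role without understanding each provider's config.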
E: Yeah — I mean, if you look at the use case of, like, just running etcd, in that case you don't even need the cluster object, because there you only specify things like the pod CIDRs and the service CIDRs, and those things are really for kubeadm. In this case there's also nothing needed from the cluster object — only the version.
E: I mean, it's not full Kubernetes — it's only the kubelet plus the aggregator. That's it. You can run it as a binary, but you can use kubeadm if you want to, to have the complete experience. So I'm not sure if this thing is actually covered by this API, or should be — correct, in fact, yes.
E: I mean, because in some cases — from what I've heard, they are actually interested, for example, in running just etcd and the aggregator, because they want the authentication and authorization from the API server, and probably a stripped-down version of the APIs — only, like, the services and the secrets APIs enabled — and probably the controller manager with only, like, the service controller and the service account controller. Okay.
A: They want parts of the controller manager, they want parts of the API server, right. So at that point it's starting to look more like the way Gardener runs a Kubernetes cluster — but instead of "deploy a Kubernetes cluster" you'd say "deploy this whatever-you-want-to-call-it" that's like a bare-bones, Kubernetes-ish control plane.
A: To tie that back to what we were talking about — I think we agreed, but I want to get feedback from everybody else on the call: do people have any sort of strong feelings one way or the other about whether roles should be part of the Machines API, or is that something people are comfortable taking out of the API?
B: Cool — anybody else? So another question, actually: what about when you think about implementing something like an upgrader on top of the cluster API?
A: Yeah, there are a couple places in the GCP code right now, I know, where we're supposedly checking that role and doing different actions. So we were talking about deleting clusters, and one thing the GCP code does right now is: if this is a master, don't delete it, because we don't want to accidentally delete the master. I think what you're bringing up is a similar use case, which is that during upgrades you may want to treat that master machine differently than you would treat a worker node, is what you're saying.
B: Instead, we have to say: does it have this taint, and does it have this annotation but not this other annotation — and we found that there were a lot of properties of the nodes that overlapped multiple roles, and so it made the checks more complicated. I think that's fine — we ended up hiding those checks in some sort of function, but there is...
B: ...things that appear at the same time. I don't know — I'm trying to avoid using the word "role," because that's what we're talking about — but the point is that nodes have certain functionality that may not map directly to a single property, unless you invent a property.
A: Yeah, that was actually one of the problems we saw with roles also. Right now we have a role for master and a role for node, but Martin mentioned other use cases for things that aren't masters or nodes — like you might want to run the aggregator plus etcd. We've also talked about: what if you wanted an etcd cluster that didn't run the full stack of a master as kubeadm defines it — you just want to run some etcd nodes, and then, when you spin up, you say "I have an external etcd cluster" and talk to it. How do you represent those here?

Also, with the current two roles we have, there's a table that says you can specify more than one: if you specify this one, you get this behavior; if you specify both, you get this behavior. That right now is a small, four-cell table, but it explodes exponentially as you start adding more roles, once you can have a list of roles and all of their overlapping combinations. And I think it devolves into the same thing you were talking about with the implicit roles you had with annotations and taints, where you're going to have to check each combination.

Does that make sense? Yeah — so I wonder if we need to attack that problem differently. Roles, annotations, and taints are all similar ways of solving the problem; it's really a question of where you represent it in the API. Maybe we should step back and figure out what the problem is we're trying to solve, and whether there's a better way to represent it. I don't know the answer to that yet.
C: And maybe — this is an idea I'm throwing out there — maybe we just need to be more explicit: this machine is running the API server, this machine is running the aggregator or the scheduler. Then, instead of having "master" represent a collection of roles — a collection of things running on the machine — we have it explicitly: this is running here, this is running there. And then logic above that can be like...
A: So, I guess, the first question: does anyone object to demoting roles into the provider config? And then, if we start to see similarity, or we have new solutions like what Jessica mentioned, we can talk about whether we want to promote that back up. In the meantime it would simplify the top-level API, and it sounds like for the use case Martin was talking about, the roles don't really make any sense and he'd in practice be ignoring them anyway.
A: So they're used for identifying where to install the software, but I think, as Frank mentioned, the GCP code right now switches on roles for how it does upgrades — because the master is a singleton and we don't want to delete it, it does a different style of upgrade. Maybe that's something we should look at changing as well, but right now that's the way that layer was implemented.
A: The way it's implemented right now for GCP is that we're not actually putting those IAM permissions on the VM itself; we're taking a service account and shoving it into the containers that should have those permissions. So instead of having that role scoped to the entire node — where you'd be sort of scared that any workload that runs on the node inherits those permissions — Jessica, correct me if I'm wrong, I believe we actually take a service account and push it into where it's actually needed.
A: Yeah. The only danger of putting it into the provider config is that the fewer things you have at the top level, the harder it is for general tools to be able to use the top-level API to make smart decisions, right? I think that's... do you want to — sorry, is it David?
H: I definitely missed a lot of the discussion from earlier on — is labeling, or annotations, something that could be used on machines to kind of identify some of the intent of where to place particular software? The whole IAM thing sounds to me fairly provider-config specific, I guess — it would just go into the generic section of whatever the configuration is. But instead of potentially specifying roles as the top-level thing, can we not rely on something more, I guess...
H: Maybe it's more conventional, I guess, where we just say there are some things that are labeled, and then over time, if we go from one component to twenty-four components — whatever they may be — all the while, the way they'd interact is controllers just looking at the labels of the Machine resources.
A: You could certainly use labels. I think the tricky bit there would be making standard labels that everybody would have to adhere to, if automation was going to rely on those labels to do the right thing. So it's maybe both labels and, potentially, annotations, right? David mentioned that they had used all of those things at Samsung, but...
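A quick sketch of the label-driven placement idea under discussion — the label keys below are hypothetical conventions, not anything defined by the cluster-api project:

```python
# Instead of a "master" role, label each Machine with the components it
# should run, and let controllers select machines by those labels.

machines = [
    {"metadata": {"name": "cp-0",
                  "labels": {"node.example.io/apiserver": "true",
                             "node.example.io/etcd": "true"}}},
    {"metadata": {"name": "etcd-0",
                  "labels": {"node.example.io/etcd": "true"}}},
    {"metadata": {"name": "worker-0", "labels": {}}},
]

def machines_running(component, machines):
    """Select machines whose labels request a given component."""
    key = "node.example.io/" + component
    return [m["metadata"]["name"] for m in machines
            if m["metadata"]["labels"].get(key) == "true"]

print(machines_running("etcd", machines))       # ['cp-0', 'etcd-0']
print(machines_running("apiserver", machines))  # ['cp-0']
```

This makes A's concern visible: automation only works if everyone agrees on the label keys, which is effectively standardizing a role by another name.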
A: That's true, and we could certainly say that for clusterctl installs, these are the labels we expect to be on different types of machines. I guess the question then is: if we're standardizing that way, how different is it from actually exposing a notion of a role, or, as Jessica mentioned, the specific things we expect to be running on those machines? Is that just an indirect way of doing it instead of a direct way?
A: OK — and I don't want to rat-hole on one topic for all time; I think the other topics are probably interesting too. I would say it sounds like we have sort of tentative agreement to demote roles. We probably won't actually do that before next week's meeting, so maybe we'll give people a week to think about that.
E: So right now it's only possible to specify, for example, where the dynamic kubelet configuration ConfigMap is available, and it's not actually possible to specify, for this kubelet that is running on some GPU node, that I want to run these experimental APIs or get this flag enabled, etc. And it's not only limited to kubelet configuration — you might have use cases in which you want to configure other components, like the API server, as well, and currently I don't see this in the API.
A: There's a link to the dynamic kubelet config on a machine, right? So if you have two sets of machines — one of which is running your special GPUs, where you want different flags, and one of which is running just standard CPU/memory machines — you can take each of those MachineSets and point them at a different dynamic kubelet config, which specifies different properties for the kubelet on those machines. Is that kind of what you're looking for, or...?
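The pattern just described can be sketched like this — the `kubeletConfigMapRef` field name is hypothetical (the real schema may reference the ConfigMap differently), but the shape of the indirection is the point:

```python
# Two MachineSets whose machine templates reference different ConfigMaps
# carrying dynamic kubelet configuration: one for GPU nodes, one for
# standard nodes.

kubelet_configs = {
    "kubelet-gpu": {"featureGates": {"DevicePlugins": True}},
    "kubelet-standard": {},
}

machine_sets = [
    {"metadata": {"name": "gpu-nodes"},
     "spec": {"template": {"spec": {
         "kubeletConfigMapRef": "kubelet-gpu"}}}},
    {"metadata": {"name": "standard-nodes"},
     "spec": {"template": {"spec": {
         "kubeletConfigMapRef": "kubelet-standard"}}}},
]

def resolve_kubelet_config(machine_set):
    """Follow the reference from a MachineSet template to its kubelet config."""
    ref = machine_set["spec"]["template"]["spec"]["kubeletConfigMapRef"]
    return kubelet_configs[ref]

print(resolve_kubelet_config(machine_sets[0]))  # {'featureGates': {'DevicePlugins': True}}
print(resolve_kubelet_config(machine_sets[1]))  # {}
```

The reference keeps the Machines API out of the business of mirroring the (hundred-plus-field) kubelet configuration, which is the trade-off debated in the rest of this discussion.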
E: It's not provider-specific — I mean, it's not so much about the different cloud providers; it's more about the different components that we run on that machine. And this pretty much ties to the previous discussion about the roles — for example, in some cases you might want to upgrade the API server and then specify some flags or so.
A: Yes — I think we should probably separate the conversation about components running on a machine, like an API server, which sort of ties back to roles, from the kubelet. I think the kubelet is something we should definitely make clear and possible to configure in the Machine API, because the kubelet is tightly tied to the machine. And then we want to talk about the other components we want to run to make a control plane — which, formerly, was what happened when you specified the master role.
A: That's probably a separate discussion, so I kind of want to tease those apart. Let's start with the kubelet and say: we want kubelets to have different flags or different behavior on different nodes in the system. How do we do that? Is the API expressive enough today? I think that's the first thing to talk about.
E: My goal was pretty much to describe, for example: I want this machine to have all these flags on the kubelet. And currently, with the Machine API, I cannot do it directly — I have to first create the kubelet configuration as a ConfigMap and then just reference it. I'm not sure about that for this use case, but...
A: That's what Mike Taufen has been working on for a while — it's finally in beta now in 1.11 — and the Machines API would basically stitch together the description of the physical machine with the config for that machine by referencing between the two. So, if you're writing a system that uses the Machines API, you have the choice of saying "I want to create this ConfigMap and then a machine that uses it," and then you could build interesting tools that know how to create different ConfigMaps for different types of machines.
A: If you wanted, say, a GPU machine — I don't know how we would represent that in the Machines API if we were doing it more directly. Or are you saying that we should take the fields we care about in the kubelet and make them, like, kubelet config fields in the Machines API, and then have the controller create the kubelet config from those and link that to the node object? I mean...
E: If you look at the version, etcetera — because, I know, there will be some coordination between both of those things in the end. So I'm not sure where the correct place is to put this coordination, and who should actually care about those things. Because, for example, when you upgrade the configuration, some fields might change or become inaccessible. So...
A: We could take the approach that the fields that are important we specify directly on the API, and the thing that links them together is the Machine controller — like, the Machine controller could be responsible for creating a dynamic kubelet config and stitching it into the kubelet's configuration. That would be interesting. I know from talking to Mike that the KubeletConfiguration has something over a hundred different fields in it, and I don't know how many of those would be important enough to put as direct fields into our API.
A: With the kubelet you do have a strong link: there's exactly one kubelet per machine, and we have an API that describes a machine, so configuring that kubelet on that machine — there is sort of a tight link there. When you say "I declaratively want this machine to be part of my cluster," you describe the configuration of that machine in terms of its physical properties, and then you also want to describe its configuration in terms of its kubelet flags as well.
A: I don't know if anybody's actually used that part of the API yet — the dynamic kubelet config part. It would be interesting to try to get that working and see which pieces of dynamic kubelet config are actually valuable to set. Like, if you're creating machines with GPUs, how much kubelet config is there that you have to change versus a non-GPU machine?
E: Right now, for example, we're using that as a way to bootstrap nodes. Currently we don't specify those flags explicitly — we have, like, one generic config for all — but hopefully in the future we should be able to, and it would be good to have one way to express those things for different sets of... yeah.
A: That comes back to which things you're trying to express, right? Because there are, like, over a hundred knobs for the kubelet that we can specify — which ones are you finding in practice that you actually need to change? It sounds like, so far, you have a consistent set and you think there's going to be some divergence, but do we know what the divergence is yet?
A: Yeah, I think there could be an argument made that there should be some explicit kubelet configuration in the Machines API — because we are saying "here's the version of the kubelet that should be running on the machine," maybe we should also be saying "here's how to configure that kubelet." And, like I said, right now...
A: We have that via the reference to the dynamic kubelet config thing. So if that's sufficient, then maybe we leave it; and if it's not sufficient, maybe we replace it with more inline configuration. But I think we need to try to use what we have now and see if it's too hard to use, and we also need to see which things would be important enough to promote to the API for configuring the kubelet, because I don't want to have to copy every field...
A: ...that's in dynamic kubelet config over, right? That was the reason I put in a reference: with a reference, you say the node team — sig-node — is responsible for maintaining all the flags; you can use whatever flags you want, as long as they have them defined, and we don't have to mirror those or stay up to date, which would be a pretty big burden for us, I think.
A: That's a good point — yeah, there are a couple other things like that as well that are hard to do programmatically without copying all the Kubernetes code, which is a pain. That's a good point. I'll poke Mike about that and see if they have any thoughts. Speaking of client tooling...
E: Actually, at KubeCon I had a talk with the API machinery guys, and they were pretty much OK with it — I mean, I talked to them about getting all those different configurations, and hopefully in the future, for example for the API server, having them be like component configuration, as an actual resource. And they said, "oh yeah, that would be good, but someone has to do it." So, yeah.
A: Yeah, that's always the hard part — people can agree on good ideas; it's actually building them that's hard. All right, we have about 15 minutes left, so I think we should move on to — well, if you want to choose: we talked about configuring other components, or your last agenda item, which is container runtime. We may not have time for both. Let's...
E: So, for example, with CRI-O and some other tools you can specify multiple runtimes that you can use — so you can say Kata Containers, and then you can use gVisor for different workloads — and I've been wondering: should we make, for example, the runtime field a slice, for multiple runtimes?
E: And, for example, where do we put the configurations for this? Because in some cases you might want to have different configurations for the different runtimes — because, for example, we'll probably move to other runtimes, hopefully in the next year, and remove the docker dependency.
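The suggestion above — a list of runtimes, each with its own configuration — might look something like this; every field name here (`containerRuntimes`, `defaultRuntime`, etc.) is invented for illustration and not part of the actual API:

```python
# Hypothetical machine spec modeling the container runtime as a list of
# (name, version, config) entries instead of a single value, so that
# e.g. containerd and gVisor can coexist on one machine.

machine_spec = {
    "versions": {"kubelet": "1.11.0"},
    "containerRuntimes": [
        {"name": "containerd", "version": "1.1.0",
         "config": {"defaultRuntime": True}},
        {"name": "gvisor", "version": "0.1.0",
         "config": {"platform": "ptrace"}},
    ],
}

def default_runtime(spec):
    """Pick the runtime flagged as default, falling back to the first entry."""
    runtimes = spec["containerRuntimes"]
    for rt in runtimes:
        if rt["config"].get("defaultRuntime"):
            return rt["name"]
    return runtimes[0]["name"]

print(default_runtime(machine_spec))           # containerd
print(len(machine_spec["containerRuntimes"]))  # 2
```

A list keeps the single-runtime case simple (one entry) while leaving room for per-runtime configuration, which is the question E raises.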
A: All right, so we have about ten minutes left. Martin, I, and a few others have been having the majority of the conversation — we've been monopolizing — and I want to make sure the other people on the call have a chance to bring up any other issues they might have that weren't on the agenda. If there are other things you've been thinking about while listening to the conversation, or other points you want to bring up, I want to open the floor to make sure we hear some other voices.
G: A couple of weeks ago we talked about possibly adding some network information to machine status, so I put up that issue. There wasn't any feedback, and I was just wondering if it would be worth throwing together a patch for that — just basic IP and DNS fields on the top-level machine status. I can't remember, was it status or was it spec? I raised status; that was the one I was after. Someone on the call suggested spec as well — that's not something I strictly need, and unless somebody does, I would propose just status.
A: Yeah, that seems reasonable to me. I think there are, like you were describing, environments where you're creating a machine and you know what its IP is going to be before the kubelet comes back and tells you what the IP is, right? Yeah — that would be for the spec; and the other case is for the status, right, where you say "give me a machine..." Oh yes.
G: My thought was actually to just do what we do on the node, which is kind of nonspecific — as far as I know, I'm not sure exactly how they calculate which IP to put in there, but you can put in as many as you want, of various types. So it would kind of be up to the provider to interpret it however you need to, if you're going to use that information.
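What "do what we do on the node" looks like in practice is the corev1 NodeAddress shape — a list of typed `{"type": ..., "address": ...}` entries. A minimal sketch, assuming the Machine status would simply mirror that list:

```python
# Machine status carrying node-style addresses; providers publish as
# many typed entries as they have, consumers filter by type.

machine_status = {
    "addresses": [
        {"type": "InternalIP", "address": "10.0.0.12"},
        {"type": "ExternalIP", "address": "203.0.113.7"},
        {"type": "InternalDNS", "address": "node-1.internal.example"},
    ],
}

def addresses_of(status, addr_type):
    """Return all addresses of a given type."""
    return [a["address"] for a in status["addresses"]
            if a["type"] == addr_type]

print(addresses_of(machine_status, "InternalIP"))  # ['10.0.0.12']
print(addresses_of(machine_status, "ExternalIP"))  # ['203.0.113.7']
```

Reusing the node's address types sidesteps the "which IP is canonical" question G mentions: nothing is canonical, and each provider and consumer agrees on the types it cares about.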
A: If you're using this — that's right. So I think that would work as long as the hosted service allows you to register an extension API server: you could register an extension API server, and then it could use API aggregation in the hosted control plane to get to the resources that are in your extension API server. So you'd run your own extension API server, you'd run your own machine controller that would act upon those resources, and then it should work there.
H: The thing I'd find interesting there is, you know, some of the tooling — from a CLI perspective, for example, how would that integrate with a setup like this, right? If that tooling always assumes that it is responsible for, say, upgrading the cluster, there's potentially less flexibility there and support in...
A: Yeah — so I think in that case you'd probably end up with the Machines API making sense for that cluster, but not the sort of cluster API that declaratively specifies a control plane. You'd still have to talk to the cloud provider to upgrade your control plane version.