From YouTube: Kubernetes SIG Cluster Lifecycle 20171115 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.n38b7dggmouq
Highlights:
- Presentation about https://github.com/kube-node followed by a demo and a long discussion
B: Hi everyone. I'm one of the founders of Loodse; I'm not sure if everyone is aware of it. We are a German-based startup, mainly focusing on Kubernetes, and we built a product called Kubermatic Container Engine. What we build is a product we ship to our customers so that they can run Kubernetes clusters everywhere, similar to Google Container Engine, so that you can easily manage a lot of Kubernetes clusters. That targets service providers who want a product similar to Google Container Engine, so they can easily build on it and offer it as a white-label solution, and also bigger enterprises who want to build hybrid setups. And of course, what we figured out is that it's hard to deploy Kubernetes. There are so many different solutions out there; we looked at the approaches, and none of them really felt native. Our complete tool stack is Kubernetes only, so that is how we deploy.
B: What we built is a Kubernetes operator to deploy Kubernetes clusters on top of it. So we were looking into how we can manage nodes and have something similar to a ReplicaSet in Kubernetes, doing it completely with Kubernetes and not any other tooling. So we looked at things like Ingress and PersistentVolumeClaim as examples of different types of objects or resources.
B: So we first came up with the idea: okay, we want to have something like a node controller. But we saw that there are now different cloud provider operators coming, and we were thinking, okay, that doesn't feel right, because every cloud provider operator works differently, and we want to have a similar setup for all cloud providers.
B: So you can build your own node controller for Google or for AWS, or you can create a node controller for Terraform, which then talks to Terraform and does the rest, or you can even create a node controller that works by, for example, talking to Ansible. The node controller is really watching the node resource and then provisions the machines — deploying the machine and deleting the machine. Our first intention is that we don't update a machine.
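The control loop just described can be sketched in a few lines. This is a minimal illustration, not kube-node's actual code; the `reconcile`, `create_machine`, and `delete_machine` names are made up for the example. It captures the key policy from above: machines are created or deleted, never mutated in place.

```python
# Sketch of the node controller described above: it watches desired node
# resources and reconciles by creating or deleting machines -- never
# updating one in place. All names are illustrative, not from kube-node.

def reconcile(desired_nodes, actual_machines, cloud):
    """Bring the set of actual machines in line with the desired nodes."""
    desired = set(desired_nodes)
    actual = set(actual_machines)
    for name in sorted(desired - actual):
        cloud.create_machine(name)   # provision a missing machine
    for name in sorted(actual - desired):
        cloud.delete_machine(name)   # remove a machine no longer wanted

class FakeCloud:
    """Stand-in for a provider backend (DigitalOcean, AWS, ...)."""
    def __init__(self):
        self.machines = set()
    def create_machine(self, name):
        self.machines.add(name)
    def delete_machine(self, name):
        self.machines.discard(name)

cloud = FakeCloud()
cloud.machines = {"node-a", "node-old"}
reconcile(["node-a", "node-b"], cloud.machines.copy(), cloud)
print(sorted(cloud.machines))  # ['node-a', 'node-b']
```

A real controller would drive this from a watch on the custom resource rather than a one-shot call, but the create/delete-only contract is the same.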
B: So when there should be an update of a machine, similar to how pods work, we roll out — create a new node and delete the old node. I will show you the structure; it should be quite easy to configure and set up. Additionally, so that we don't have all the cloud provider information in the node resource, we introduced a custom resource definition, NodeClass, where you specify all your cloud-provider-specific and machine-specific details, like: what are your cloud provider credentials?
B: What are your machine types? What do you want to provision — systemd unit files, SSH keys, SSH commands? What kubelet do you want to run, perhaps also your Docker engine. So it's really similar to StorageClass on the storage side: the admin can configure and create this NodeClass, and then the developer can easily reuse it and reference it in the node.
B: OK, you are allowed to create two types of nodes in this region, say, and then the developer can easily reuse this. And on top of this — because managing one node is not enough — we want to have something like a NodeSet: a group of similar nodes, similar to what cloud providers have, like instance groups on Google. So you have a NodeSet, which has a reference to the NodeClass and which has a replica count.
B: How many nodes do you want to run? It also has a reference to the node controller, and it can easily integrate with a node autoscaler. There is a Kubernetes autoscaler, and the next step, or our idea, would be to use it and, instead of talking to Google or AWS, have it talk to the NodeSet and adjust the replica count up and down automatically — similar to what you do with the pod autoscaler. And then, of course, you have the NodeSet controller, which takes care of checking this.
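The NodeSet idea above — a set referencing a NodeClass, holding a replica count that an autoscaler can adjust — can be sketched as follows. The field names (`name`, `nodeClass`, `replicas`) are invented for illustration and are not the actual kube-node schema.

```python
# Illustrative sketch of the NodeSet described above: a controller compares
# the replica count against existing nodes and decides what to create or
# delete. Field names are made up for the example.

def nodeset_actions(nodeset, existing_nodes):
    """Return (nodes_to_create, nodes_to_delete) to satisfy replicas."""
    want, have = nodeset["replicas"], len(existing_nodes)
    if have < want:
        new = [f'{nodeset["name"]}-{i}' for i in range(have, want)]
        return new, []
    # scale down: drop the surplus nodes (an arbitrary policy choice)
    return [], existing_nodes[want:]

ns = {"name": "workers", "nodeClass": "digitalocean-small", "replicas": 3}
create, delete = nodeset_actions(ns, ["workers-0"])
print(create, delete)  # ['workers-1', 'workers-2'] []
```

An autoscaler would then only mutate `replicas` on the NodeSet, exactly as the horizontal pod autoscaler mutates a ReplicaSet's replica count, and this reconciliation would do the rest.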
B: We have a NodeClass called digitalocean — this example is currently running on DigitalOcean. Here you can specify — well, it depends on your node controller specification. In our case we have docker-machine flags: what is the access token, what type of image you want to run, the region, the size and so on; what is your provider, like digitalocean; and what you want to provision — for example, which SSH commands you want to execute.
B: What files you want to put down — like putting in a kubeconfig for the kubelet, with the API server URL and also with a token. Then we use kubeadm to generate the certificates and get them to the kubelet, and we configure it, plus some other content we can provide. So it's completely flexible, depending on the user and what you want to do: you can provide files, you can provide commands — whatever it needs to do to spin up the node.
B: We already see there is a new node object created, and now, looking at the node controller, it's spinning up the new node: it's calling DigitalOcean, spinning up the machine, and copying over the files. Later this machine will join the cluster, and then we have a running machine.
B: So, for example, we currently store the reference so that we know it; we store the base64-encoded NodeClass content in it. We also see a generation, and we add some annotations to store some information. Now the machine is booting up and joining the cluster. That's mainly what we have done so far. We're currently working on updates and also on integrating this with the autoscaler. The intention is really to focus on how to manage the nodes, because of what we also see with our customers.
B: They mostly already have a solution for networking and all the stuff that is created at the beginning. A lot of things you create once at the beginning, but the nodes especially are flexible — you want to scale them up and down — and for this we wanted to have a solution. Probably later we want to integrate with Kubicorn, to do some more of the heavy lifting at the beginning as well, so that we can create things like networks and so on.
B: Here is the NodeSet. You can find a proposal, and you can also find our first implementation with kube-machine. It has also been done for the Archon NodeSet, so we also have a POC, a small implementation, so that the NodeSet could be integrated with, for example, a Google instance group.
B: And the idea is that we want to get close to you guys, or align with you guys, so that we get a standard way to do this and avoid every cloud provider somehow doing it in a different way — so that developers really can do the same as with containers: okay, I need nodes, I want to scale this, how do I do it? I don't think there should be different ways.
A: Yeah, that's great! It's interesting watching your slide deck — it's almost exactly the same as the slide deck that we put together as we started the Cluster API effort. The Cluster API was always sort of considered to be split into two pieces: one is this node or machine management; the other half is the control plane management. You guys are sort of simultaneously tackling the machine management part.
A: We've done a couple of demos in the past couple of weeks, basically showing almost exactly what you just showed, where you can declaratively create machine definitions and have machines pop up as a result. We've taken a slightly different approach: instead of using annotations and trying to reuse the existing node objects that are in Kubernetes, we're adding another layer of objects called machines. The current node object, as you mentioned, is kind of a weird object in Kubernetes — most Kubernetes objects have a spec and a status.
A: The node object is really only a status, right? If you think about it, there's no way to declaratively say something about a node and have that take effect; it's really just a function of what the kubelet has identified. So the approach that we took was to create a separate object, called a machine, which is effectively the spec, and then leave the node as the status, and tie the two together with cross-references. And then you can do things like `kubectl get machines`.
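The spec/status split just described — Machine as intent, Node as observation, tied by cross-references — can be sketched like this. The field names (`nodeRef`, `kubeletVersion`, etc.) are illustrative, not the actual proposal's schema.

```python
# Sketch of the Machine/Node split described above: the Machine carries the
# spec (intent), the Node remains the status (what the kubelet reports), and
# the two are tied together by cross-references. Fields are illustrative.

machines = [
    {"name": "machine-1", "spec": {"kubeletVersion": "1.8.3"}, "nodeRef": "node-1"},
]
nodes = [
    {"name": "node-1", "status": {"kubeletVersion": "1.7.9"}, "machineRef": "machine-1"},
]

def needs_reconcile(machine, node_index):
    """True if the observed node has not converged on the machine's spec."""
    node = node_index[machine["nodeRef"]]
    return node["status"]["kubeletVersion"] != machine["spec"]["kubeletVersion"]

node_index = {n["name"]: n for n in nodes}
print([m["name"] for m in machines if needs_reconcile(m, node_index)])  # ['machine-1']
```

The point of the cross-reference is that a controller can follow `nodeRef` from intent to observation and decide what work remains, without the node object itself ever carrying desired state.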
A: You can edit these machines, with the support for dynamic types in kubectl, and sort of see your machines there and manage them that way. But in general, from a philosophical point of view, I think it's almost exactly aligned with what you guys are doing — just a couple of implementation differences.
A: We've been coding against Google's cloud platform; it sounds like you guys have been doing it against DigitalOcean, which is cool, because then we might have two implementations very quickly. We have an API — there's one that's checked in, and there's also a PR that's outstanding, open so that people can leave comments on it more easily in the change conversation — and we basically have an implementation of the API that runs against Google, where you can create a cluster and you can scale nodes up and down.
A: We took the approach that the first cut of the API was going to be more human-focused, to make it easier for people to specify the machines that they want, because when you start adding all of the extra things that the autoscaler needs to make the automation work, it gets kind of complicated for people to use. One of the things that we've heard from feedback is that when you look at things like persistent volume claims, there's a pretty large mental jump.
B: Great. I mean, what's the best way to align? Of course we don't want to do a lot of duplicate work, and if we somehow can align and we could spend some of the work on your side, that would be best, so that we can reuse a lot of things. Probably we would only adapt kube-machine, because currently our kube-machine is not working only with DigitalOcean — we're using it already for DigitalOcean, and others use it for AWS, OpenStack, VMware.
B: That is exactly why we chose the library from docker-machine: to easily add new cloud providers in exactly the same way. The only thing we need is to spin up the machines. Our focus currently — that's how we run our setup — is that the master runs on our stack, on a Kubernetes cluster, so our focus is really spinning up only worker nodes. We haven't dug into what would be required to run master components.
B: That is additional to what you can provision with it, but for worker nodes it already works; it's quite easy to integrate a new cloud provider. Currently it takes us, in the best case — when there is a docker-machine library available — one or two days to integrate one. When there's nothing available, it would take us maybe ten days to write the docker-machine driver. And even for that, we only need create, delete, and some status information — not the whole implementation that a full docker-machine driver and the library would use.
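The provider contract just described is deliberately tiny: create, delete, and a status lookup. A minimal sketch of such an interface, with hypothetical names (this is not the actual kube-machine or docker-machine driver API):

```python
# The small driver contract described above: a new cloud backend only has to
# implement create, delete, and status. This protocol is a sketch, not the
# real kube-machine/docker-machine interface.

from abc import ABC, abstractmethod

class MachineDriver(ABC):
    @abstractmethod
    def create(self, name: str) -> None: ...
    @abstractmethod
    def delete(self, name: str) -> None: ...
    @abstractmethod
    def status(self, name: str) -> str: ...

class InMemoryDriver(MachineDriver):
    """Toy backend standing in for a real cloud driver."""
    def __init__(self):
        self._vms = {}
    def create(self, name):
        self._vms[name] = "running"
    def delete(self, name):
        self._vms.pop(name, None)
    def status(self, name):
        return self._vms.get(name, "absent")

d = InMemoryDriver()
d.create("worker-1")
print(d.status("worker-1"))  # running
```

Keeping the surface this small is what makes a one-or-two-day provider integration plausible: most of the work is in the backend calls, not in the contract.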
B: This is exactly the idea: you can have cloud-provider-specific implementations. You guys at Google can build your own, because you want to have many more things managed. Our goal was really that we needed an easy and fast way to integrate this, and one of these implementations is kube-machine, so that we can easily add new cloud providers. Probably in the future there could also be a Terraform machine, or a Terraform controller doing this. Yeah, that's exactly the intention.
A: The other question I have to ask: one of the reasons we're interested in standardizing the Cluster API is to start building higher-level tools on top of it that we can make consistent across multiple environments. It sounds like you guys also have that goal, and I'm curious what sorts of tools you envision being able to provide your customers once you have a consistent API that they can be built on top of.
B: Showing our solution: what we have is something similar to Google Container Engine, to easily manage all this, so that you have an easy way to add or remove nodes — so that it's really easy for the customer. Our intention is to build a solution for when the customer is not on Google or AWS or Azure, where a managed service is available. We are currently working quite a lot with smaller internet service providers who say: hey...
B: ...we want a complete tool stack and a complete suite, so that we can easily spin up different cloud providers. And also bigger enterprises who say: okay, we do a hybrid cloud approach — I have my own datacenter, I want to have Google, I have a local cloud in Germany, I have Alibaba in China, and I want a unified control plane on top of it. We want to provide them a solution where we can say: here is something that works 95 or 99 percent the same way everywhere.
B: Of course there are cloud-provider-specific things, like storage and load balancing, but it's not like each cloud provider has a completely different implementation that you must learn every time. In our case you can also manage the complete master: these clusters are running on a Kubernetes cluster itself, so we spin up the components as containers on a Kubernetes cluster, with a small Kubernetes operator, and then access them from outside.
B: I mean, why we chose this: our tool stack is also completely written in Go, and we were looking around for implementations for a lot of different cloud providers written in Go. There are mostly two big resources: one is Terraform and the other is docker-machine — and we use more or less nothing of docker-machine itself.
B: What we use is really only the interface — how they call and create the VMs. We use only the functions, because they did all the implementation for all the different cloud providers, and it's also easy to create a new cloud provider for docker-machine. That was the reason to choose it — not because it's docker-machine per se, or that we use much more of it.
C: I was actually asking more about kubeadm, along those lines. If you use kubeadm, on the master node you install kubeadm, you install Docker, you install the kubelet, and then on the regular nodes — the worker nodes — you go and do the same, and `kubeadm join` actually joins them. So I'm asking: what is doing that part in your system, mainly?
B: Exactly, so you could have different node classes for the different versions that you want. In our case we want to control this, so you would have a node class for Kubernetes 1.7 and a different node class for 1.8, and then reference it in the NodeSet. Our idea is also that you can change the node class, and then the NodeSet controller will take care of updating all these machines, so that you can do this.
A: Yeah, GKE does that too. GKE also replaces VMs during upgrades, and basically, if you're using local volumes, that stuff disappears during upgrades. I think that's relatively common in Kubernetes, because we as a community are trying to get people to treat VMs as cattle instead of pets.
F: Chris, you finish, and then I'll talk. No? Okay. As I probably mentioned earlier, I'm speaking quietly because I'm still recovering my voice. I have a thousand questions and I won't take up this meeting; I'm hoping to reach out to you afterwards, Sebastian — if you could put some contact info in the meeting notes. One of the big things that I would like to discuss, though, that was kind of alluded to: right now in our alternative machines proposal, the API is very — I won't say simplistic, but more conceptual.
F: We hide the details of what will happen on the actual machine itself and instead say, in a declarative way: I would like to be running this version of the kubelet — whatever controller is responsible for this, please make that happen. I'm not particularly picky about where things go or how it starts up; just make that be. And you guys have tackled, as Robbie said, philosophically the same exact problem with a very different approach, which gives you so much control.
F: You update that one field in the spec, and then the rest just happens kind of magically — as opposed to: I need to know exactly where the binaries are put on disk, I need to edit my bootstrap script to make sure it uses the new flags, all that stuff. So I'm wondering what your guiding tenets are for designing the API: what level of flexibility you're going for, and how much of that you think is going to naturally evolve to simplify over time. Good question, yeah.
B: It's not so much a problem between cloud providers; it's more like portability between operating systems. If you want to deploy CoreOS, you need a completely different approach compared to CentOS or Ubuntu. This is why we introduced more flexibility and said: okay, probably kubelet plus container runtime is not enough; we need more flexibility, or otherwise we would need to have this hard-coded somewhere...
B: ...in the node controller, or some additional resource to manage it. I mean, the good thing is, as I said, our focus is really deploying the worker nodes, so we don't need much: we need the kubelet, we need a container runtime, and we need a systemd unit file, more or less, with the API server's address and a token to join — and this works, more or less, on any cloud provider, with perhaps only a few changes.
B: It also depends on what is included in the image, of course. For some cloud providers, some binaries that are required are probably not installed by default, so we must do something. So we wanted to have most of the flexibility and provide it to the admins. For beginners it's probably still too complex; we could have a layer on top to reduce it and say: hey, here are default setups.
B: You only have to change things like the kubelet and the container runtime. But what we also see in our case is that we quite often want — or need — such flexibility; otherwise we somehow need a different resource, or we must put it into the codebase, and then it's hard-coded, or somehow not easy to manage, which doesn't make sense.
F: I'd just have to run a different controller that's aware of how to set things up under the covers so that they fulfill the contract of the API — say, in the move to a Debian environment or CentOS. In that case, I think the things you need to expose in the API directly are just: what's the version of the kubelet, and what kind of configuration do you have for the kubelets — and the rest can be fulfilled by the controller that starts up the VM under the covers.
G: From somebody who's written a few of these controllers — not with the new official API, but just in the past — we started out with the very granular approach where you can define arbitrary commands and basically do command injection, and later evolved into a higher-level abstraction. And then we had a whole library of controllers. We didn't separate them based on the operating system like you suggested, but in retrospect we probably should have. But I very much wanted this — a really quick +1.
A: I'm actually really curious, because on GKE we've pushed people away from being able to run startup scripts and just said: no, use a DaemonSet. There's a little bit of a race condition, because you don't know exactly when it's going to run, but for the most part we haven't gotten a lot of complaints with that decision — and maybe our user base is a little bit different. Also, I do want to echo one thing you said earlier, Chris, which is that on GKE...
A: We do the delete-and-replace for nodes, and that's great because we're on a cloud, but I do also strongly believe that we're going to need to support in-place upgrades for bare metal. It's part of what we thought about when designing the machines API: with the machines API right now you can patch a machine, and the contract there is that that means "do something in place"; and you can delete and create machines, and that means replacing actual machines.
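The contract just described — patch means in-place change, delete-and-create means replacement — implies a policy about which spec changes a controller can honor in place. A minimal sketch of that decision, where the set of in-place-updatable fields is entirely hypothetical:

```python
# Sketch of the update contract described above: PATCHing a machine means
# "change it in place", delete+create means "replace the actual machine".
# Which fields can change in place is a policy choice; this set is made up.

IN_PLACE_FIELDS = {"kubeletVersion", "labels"}   # hypothetical policy

def plan_update(old_spec, new_spec):
    """Return 'noop', 'in-place', or 'replace' for a spec change."""
    changed = {k for k in set(old_spec) | set(new_spec)
               if old_spec.get(k) != new_spec.get(k)}
    if not changed:
        return "noop"
    return "in-place" if changed <= IN_PLACE_FIELDS else "replace"

print(plan_update({"kubeletVersion": "1.7"}, {"kubeletVersion": "1.8"}))  # in-place
print(plan_update({"instanceType": "n1-standard-1"},
                  {"instanceType": "n1-standard-2"}))                     # replace
```

On a cloud, a controller could map "replace" to delete-and-create; on bare metal, the same spec change might instead have to be honored in place — which is exactly the tension raised in the discussion.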
G: The whole point of encapsulating how we're solving these problems in the software is so that we can build in cool things — like maybe we want to pull the commands we want to execute from some GitHub repo, and that's the responsibility of the controller. Because it's software, we can make it do whatever we want, right? That's the point of having the controller.
D: But are we looking at having a relationship — and I'll make a note of this, because I think this is a different topic — are we looking at having a relationship where we have the capability of the controller getting the data it needs, if you're looking at it running on the node? Because if we're looking at a software-defined model, we're going to have a relationship there, and we can't just — you know, I'm not saying to have it, but have a placeholder, have it somewhere, even where it's fuzzy.
B: The question was about some comments from outside. What we are talking about: we are talking to a lot of enterprise companies who are running in their own data centers, and I think in the best case we would easily have a way to say: this is the kubelet, and this is the container runtime. What we also see is them coming up with — probably it's not best practice to do this, but they still have requirements and say: hey, this is how we do things.
B: We must deploy these additional files and artifacts on top of the machine. So, to provide some of that flexibility — instead of saying "no, sorry, then you can't use our solution" — we said: okay, we need as much flexibility as possible, because otherwise we run into a lot of projects stopping exactly at that point, with the customer asking: can I do this, can I install this specific binary, and so on. Probably it's different if you are a big cloud provider, because then you can say: hey, this is the best practice, this is what we do.
B: That is the only way we support it. Our case is a little bit different, because we are the smaller ones. It's a negotiation process: we try to convince them, but at the end we say: yeah, we'll let you know it's not best practice, but to get it running you can do this easily. And there's mostly a reason for it every time — it's not that they don't want to do it right; they have legacy systems and so on. So we say: hey, yeah, it's possible, why not.
C: ...for different other cloud providers — and are all of those going to sit under something? What is the goal, actually? The part I'm not sure of — because I saw it very quickly, like you, when it started implementing stuff, which is great — is: is the idea that GKE will internally start using this, the code that is out there in the open, or something else?
A: The intent of the Cluster API is to have an open-source API that we can use to do consistent cluster operations across different environments. We anticipate having multiple controllers backing that API to actually enact those changes across environments — so, what Chris was saying earlier about having different controllers for different clouds, or GKE.
G: That's a huge bit of the success criteria for this whole effort. In theory — the way I've always imagined this, and please disagree with me — you should, in my eyes, be able to define the same resource and have any controller implement it however they want. And if we're changing the resource based on the implementation, that feels a little backwards.
C: OK, another question that I actually asked in one of the PRs in that repo: when you create a new machine, how do you sync between the machine and the node? And also — as you just mentioned, if you replace the machine on the backend side, it changes; and if you use an instance group on Google Cloud, the machine names are automatically assigned, right? So how do you see those being synced?
F: So right now in our prototype, the names happen to be the same: when you create a machine, you give it a name — or if you use the API machinery's generate-name, it gives it a name — and we create a VM with exactly that name. I personally don't think that should be part of the contract; I think that's an implementation detail behind the scenes.
F: You can look at the machine spec, then traverse from the machine status to the node that corresponds to it, and then check the fields to see if whatever you care about has been reconciled. But under the covers — let's say you want to do a very quick replacement of a machine: you're in a cloud environment and you want to change the instance type or something — instead of having to completely delete the VM so that you're able to reuse the name and bring up the new VM with the same name...
F: ...you could just create a new VM, giving it the same machine identifier as part of its startup — it can have a completely arbitrary name; you don't care about the name of the machine or the name of the VM — and delete the old VM simultaneously. When the new one comes up, its status will point to the new node instead of the old node. You can have those happen in parallel, and the name of the VM you don't particularly care about.
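The name-independent replacement flow just described can be sketched as follows. The dictionary shapes and the `machine_id` field are invented for the example; the point is only that the VM name is arbitrary while the machine identifier is what ties the new VM back to its machine object.

```python
# Sketch of the replacement described above: the VM name is arbitrary; the
# new VM carries the same machine identifier, so when it comes up, the
# machine's status flips to the new node. Names and fields are illustrative.

import itertools

_counter = itertools.count()

def replace_vm(cloud_vms, machine_id, new_instance_type):
    """Create the replacement first, then delete the old VM for machine_id."""
    old = [name for name, vm in cloud_vms.items()
           if vm["machine_id"] == machine_id]
    new_name = f"vm-{next(_counter)}"   # arbitrary, never reused
    cloud_vms[new_name] = {"machine_id": machine_id, "type": new_instance_type}
    for name in old:
        del cloud_vms[name]
    return new_name

vms = {"vm-old": {"machine_id": "machine-1", "type": "small"}}
new = replace_vm(vms, "machine-1", "large")
print(vms[new]["machine_id"], len(vms))  # machine-1 1
```

Because matching is by machine identifier rather than by name, the create and delete can overlap in time, which is what makes the quick in-parallel swap possible.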
F: So if you're in a setup where you want to take advantage of a managed instance group, you don't actually care about the name of the VM necessarily. We haven't completely tackled how this would work — or even whether we want to take advantage of managed instance groups; I don't think they provide a lot of value when you're using declarative machines like this, and we could have the concept of a machine set.
A: All right, it's been 50 minutes on this topic. We had a couple of other things on the agenda, and I want to make sure we get to at least the first of those two. So, if there are any closing thoughts people want to spend the next 30 seconds on — otherwise I propose that we continue this discussion both offline and probably next week as well. There are some meta topics we should continue next week, and I want to make sure that Jacob gets connected with Sebastian offline to see how we can merge these API efforts together.
G: Quickly, in the name of merging the API efforts together: can we, if we don't already, come up with some sort of documentation, a README or something, that just says where it lives and that we all kind of agree on? Because right now it sits in kube-deploy — I just want to make sure that that is the official spot and that it's documented somewhere. Yeah.
D: That ties really nicely into the thoughts I had. Sebastian, it's kind of funny — looking at the machine API, and help me out here to understand this a bit better, because I always go back to base types. In deployments, one of my favorite base types, we've got the Deployment, we've got a ReplicaSet, then we've got a Pod. With the machine, are we trying to think about the pod analog at this time, or do we have any pattern like that we're doing? Yeah.
A: So we're trying to start with just the basic one, and then we can layer the other things on top. Right now we have a client-side machine-set implementation that kind of fakes out grouping, client-side, based on labels, and we anticipate making that a first-class API object in the future, once we feel like we've really nailed machines themselves.
A: [clears throat] Yes, later — but hopefully not too far later. With the lessons from pods and deployments and replica sets, we don't think it's going to take us nearly as long as it took the folks doing those to move up the stack, but we do want to make sure that we all agree on what the machine is first, because the stuff on top of it, I think, is much less complicated.
D: The challenge that you run into is that you don't have any traceability for machines unless you have the higher-up object, which models a replica set, right? So that's the thing we're going to run into immediately. And just from a raw implementation standpoint, I know that having two upper-level objects may be problematic, but from my perspective — unless you're creating the node yourself — I'm not sure how much usability this would have.
A: I think machines are a little bit more nuanced than that. People would love to treat them as cattle, but they are in some ways a little bit more like stateful sets, because they do have local disks, you can have local storage, they are all a little bit different and a little bit unique, and you can individually address them.
D: It's also going to impact things like — for example, we have the kubelet version on the machine, but within a group of nodes I'd be running the same kubelet version. Or here's the analogy: I have a group of pods that are all the same — they all have the same NVIDIA driver on them, right? I wouldn't have that on the machine; I would have that on the replica set, or the instance group — you know, the machine set, whatever we call it, right?
D: Is it worth opening the conversation now to help us define that? Because I almost feel like, if we don't deal with those questions now, we're going to miss some stuff in the machine. And don't get me wrong — this is a great start; I'm not trying to criticize here, and I know I'm kind of late in the game.
A: I mean, to that point, we had a machine set proposal and we just kind of deleted it temporarily, so we can put it back out there if people think it's going to help. I'm a little worried it's going to fragment the discussion, and we're going to make slower progress overall than if we focus on one piece at a time.
D: Last question — a hard implementation detail. I'm not sure if the GKE guys are on the call: are you looking at having an instance template manager within this model? Because the comment has been: either the node is created by a controller — well, that's cloudy — or the node already exists, on bare metal.
A: Sort of, I mean, yeah. Bare metal is interesting, because when people talk about on-prem, there are multiple types of on-prem: there's the private-cloud on-prem, where you've got vSphere, OpenStack or something else — where you do have an IaaS and you have APIs to provision new things that look like machines — and then you have the true bare metal, where you actually have machines that you're PXE-booting and so forth.
A: But even for those, a lot of people who have larger systems have built some automation around being able to PXE-boot machines — there are different tools you can use for that. And so I think that's where, when you say "create a new machine," a controller either goes and boots a machine for you, or it files a ticket against your admin. It's not a very quick fulfillment of that request, but at some point the new machine will show up that matches your request, right?
A: Yeah, we call them MIGs, or IGMs, either way. With the Cluster API we're trying to figure out how that fits. With node sets it makes sense, because they're sort of analogous in some respects, but for an individual machine it doesn't necessarily make sense, because you're going to effectively create instance templates for every machine that looks different and have to try to figure out which ones match. If you want to attach one but not the other — how does that work?
A: But one thing we have learned with GKE is that node pools are awesome — they've been copied sort of all over the place — but they're also not the most flexible thing in the world, because you have to create an instance template and an instance group even if you want to have one thing that looks different. So one of the explicit goals with the Cluster API was to be able to add single machines that aren't part of a set, because we think that's really important.
B: I mean, this is similar for us: a node class doesn't have to be assigned to a node set. It can also be assigned to a node itself, and then only the node controller picks it up — the node set controller is out of scope. So exactly for the case where you only want to manage single nodes, you can easily do this, instead of forcing a one-to-one relationship through the node set.
A: Great, so we're just about out of time. I kind of wish we were having another meeting this week, over another hour right now, because this has been really awesome. Hopefully people can show up again next week — I know at least in the U.S. it's the day before Thanksgiving, so people may be on vacation.
B: One last question from my side: are there any plans to meet before or around the conference? Henrik and I will definitely attend, so we can also have the discussion in person there — to get aligned on which parts could be contributed, and also to give you some insights from us, because we are already running some parts in production. So we can also tell you what works well and where we already see improvements. Yeah.