From YouTube: Kubernetes SIG Cluster Lifecycle 20180110 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.evqj64lfwx9m
Highlights:
- Presentation of a similar effort to Cluster API: https://github.com/gardener/node-controller-manager
- Q&A and discussion around implementation strategy with the node-controller-manager project
- Discussion around single machines vs. scale groups
A
First, on the [inaudible] side, we are just wrapping up after [inaudible], so we are stuck in some code refactoring, and we are also stuck on [inaudible]. We talked about moving from the custom resource definitions to the aggregated API server for integration, so we did start that effort.
C
Are you able to see the screen? Right. So my name is Hajeck, and this is Michonne. We are joining the meeting for the first time. We got to know about the Cluster API some time back, and at that time we were actually already in the process of doing something very similar, but with slightly different terminology than what is already used there.
C
It must be okay. So, the terminology. First, a few things that all of us know already, but just to confirm: in Kubernetes, everything starts with the pod, and what we obviously get is a pod controller (a pod informer and controller) which basically manages the pod objects. Then the second core object is the replica set, and similarly we have the replica set controller, which basically manages the replica sets. But the interesting part is that the replica set only manages its child component, the pods.
C
Similarly, we have the deployment controller and the deployment object, and the interesting part of all these different objects is that the deployment only talks to the replica set; it doesn't really care about the pods. The same goes for the others, like the horizontal pod autoscaler. All these controllers put in one bundle, we call it the kube-controller-manager. We have adopted the same terminology, and in our case the equivalent of a pod is a machine.
C
We call the base object a machine object, and we have a machine controller which manages the machines. Then we have the machine set controller, which basically manages only the machine objects, and then we have the machine deployment controller. This deployment controller only talks to the machine sets; it never deals with the real machines.
C
Similarly, we don't yet have the cluster object or the autoscaler, but we plan to; we still have to think about the design and so on. But the point here is that there is a clear separation of responsibilities: each controller is written independently, and you put all these things together and it works.
C
We have defined the AWSMachineClass as one of the CRDs, and that machine class basically contains all the details of what the machine, the node machine, should be created from. One more thing is that we have the account details, which are a little sensitive; for example, for AWS, provider details like the access key ID and secret access key. And the main part is the cloud config, the cloud-init script that will boot up the actual machine.
C
So we put those in a separate secret, and we have a pointer from the machine class towards the secret. As already mentioned, each component is responsible only for managing its child component, which means the machine set only manages the machines, and the machine deployment only manages the machine sets, and so on. They are in a chain relationship, and we have implemented it in a similar way to how the replica set currently works, so it's more of a label-selector way.
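The label-selector chain described here can be sketched roughly as follows. This is a minimal illustration of the adoption idea, not the project's actual code; all names and object shapes are invented for the example.

```python
# Sketch of label-selector adoption: a machine set "owns" every machine
# whose labels match its selector, mirroring how a ReplicaSet adopts pods.
# Object shapes and field names here are illustrative only.

def matches(selector: dict, labels: dict) -> bool:
    """A machine matches when every selector key/value appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def adopt(machine_set: dict, machines: list) -> list:
    """Return the machines this machine set controls, stamping an owner reference."""
    owned = []
    for m in machines:
        if matches(machine_set["selector"], m["labels"]):
            m["ownerReferences"] = [machine_set["name"]]
            owned.append(m)
    return owned

machines = [
    {"name": "machine-1", "labels": {"pool": "worker"}},
    {"name": "machine-2", "labels": {"pool": "worker"}},
    {"name": "machine-3", "labels": {"pool": "other"}},
]
worker_set = {"name": "worker-set", "selector": {"pool": "worker"}}
owned = adopt(worker_set, machines)
```

Removing a label from a machine would make it stop matching, so the set would no longer count it, which is the same mechanism mentioned later for quarantining a machine.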
C
So each machine which has got an XYZ or otherwise specific label is basically adopted by the machine set which has a selector with the same labels, and the same goes with the machine deployment. You also get the owner references: each machine has got an owner reference which points to the machine set, and the machine set has an owner reference which points to the machine deployment. So that is how the controllers more or less work. Now, going quickly one by one through what each controller really does:
C
The creation flow is pretty simple. We basically create the machine CRD object; it gets picked up by the machine controller, and the machine controller basically immediately talks to the cloud provider and puts up a request for the creation of a machine. We have identified two different states in between: when the machine controller tells the cloud provider to create a machine, there is a state until the machine gets registered, where the VM is known but not yet Ready.
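The intermediate states in this creation flow could be modeled like this. It is a rough sketch of the transitions as described in the talk (Pending, then registered but not Ready, then Running, with any timeout flipping the machine to Failed); the class, phase names, and timeout handling are invented for illustration.

```python
# Rough model of the creation states described above:
# Pending -> (VM registered) Available -> (node Ready) Running,
# with a timeout in any intermediate state marking the machine Failed.
import time

class Machine:
    def __init__(self, timeout_s: float = 300.0):
        self.phase = "Pending"
        self.deadline = time.monotonic() + timeout_s

    def observe(self, registered: bool, ready: bool) -> str:
        if time.monotonic() > self.deadline and self.phase != "Running":
            self.phase = "Failed"        # timeout in an intermediate state
        elif ready:
            self.phase = "Running"
        elif registered:
            self.phase = "Available"     # VM exists but node is not yet Ready
        return self.phase

m = Machine(timeout_s=300.0)
m.observe(registered=False, ready=False)
m.observe(registered=True, ready=False)
m.observe(registered=True, ready=True)
```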
C
So we wait for that, and then we wait till the machine gets to the Ready state. In between, in any state, if anything goes wrong or a timeout happens, then that machine is declared as failed. So basically, the machine controller changes the status of the machine to Failed, and the basic reason is that the other controllers will then be looking at that state, and they'll be taking their actions based on the status of the machine. The deletion flow is similar: we basically do a kubectl delete.
C
We do a delete on it; the machine controller gets to know, basically via the informers, that the machine object has been deleted, and again there are two flows. If the deletion goes well, the cloud provider deletes the machine; we basically wait till the cloud provider really deletes the machine and confirms, and only then do we delete the machine object. If a timeout happens, or the cloud provider for any reason cannot really delete the machine...
C
The machine controller again changes the status of the machine to Failed, and we also take the error message coming from the cloud provider and put it on the machine object. The same goes for the health monitoring: as already said, the machine controller has an eye on the node conditions of each node. As we have the node conditions from each of the nodes inside the machine object, the machine controller basically checks whether the kubelet is ready or not, whether disk pressure is happening on the node, and so on.
C
If, let's say, for example, the kubelet is not ready, then the controller changes the status of the machine to an Unknown state, and then we wait for a configurable amount of time; we have chosen five minutes as the default. We wait for that set amount of time to see if it gets back to Ready, if it again becomes healthy. If it doesn't, the machine controller again changes the status of the machine to Failed. So then comes the machine set.
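The health-monitoring rule just described (unready node goes Unknown first, and only becomes Failed after a configurable grace period, five minutes in the talk) might look something like this pure-logic sketch. The function name and phase strings are assumptions, not the real controller API.

```python
# Sketch of the health-monitoring decision described above: a node that
# stops reporting Ready is first marked Unknown, and only after the
# configurable grace period (default five minutes) is it marked Failed.

def next_phase(node_ready: bool, unready_seconds: float,
               grace_seconds: float = 300.0) -> str:
    if node_ready:
        return "Running"                 # node is (or came back) healthy
    if unready_seconds < grace_seconds:
        return "Unknown"                 # give the node time to recover
    return "Failed"                      # grace period exhausted
```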
C
So until here, it's clear that the machine controller doesn't really try to recreate the machine; the machine controller doesn't really care that it has to maintain a machine. It just creates or deletes machines and monitors them. Then comes the machine set controller. The machine set controller is separately concerned, based on the informers of course, with the occasions when it actually gets triggered.
C
One is, of course, when the machine set object is created, deleted, or updated; via the informers we get to know, and based on that, actions will be taken. But also when a machine object is updated: when any of the machine objects is updated, this machine set controller gets to know, and it checks whether that machine belongs to that machine set or not.
C
So now the dot is connected to the previous part: as I said, when the machine controller changes the status of the machine to Failed, at that time the machine object is actually updated, the parent machine set gets a reconcile, and it takes action: this machine is failed, so it has to be deleted. It follows the principles of the replica set, of course, once you put the machine set into the picture.
C
The flow for the machine set controller is exactly the way the replica set more or less works: there is a resync period, and after every resync period it basically checks the available set of machine objects. Out of the available set of machine objects, if the number of healthy replicas equals the desired number of replicas, it's good. If it's less than the desired number of replicas, we simply create a machine object. So here the catch is that the machine set never creates a real machine; it just creates the machine CR.
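The resync comparison just described can be sketched as a small reconcile function: compare healthy replicas against the desired count and create or delete machine objects only, leaving the actual VMs to the machine controller. Everything here (function name, phases, action dict) is invented for illustration.

```python
# Sketch of the machine set resync described above: the controller only
# creates/deletes machine *objects*; the machine controller owns the VMs.

def reconcile(desired: int, machines: list) -> dict:
    failed = [m["name"] for m in machines if m["phase"] == "Failed"]
    healthy = [m for m in machines if m["phase"] == "Running"]
    actions = {"create": max(0, desired - len(healthy)), "delete": list(failed)}
    extra = len(healthy) - desired
    if extra > 0:
        # scale down: delete surplus healthy machine objects as well
        actions["delete"] += [m["name"] for m in healthy[:extra]]
    return actions
```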
C
It just creates additional machine objects, and when there are too many healthy replicas, it just deletes the machine CRD and gives the responsibility of creating or deleting the actual machine to the machine controller. So we keep the machine set behaving just like the replica set inside Kubernetes. Then we also have the machine deployment controller. The machine deployment controller works in a similar way: we have got strategies, one being the rolling update, and we created it the same way we have it in the deployment controller in Kubernetes. So, obviously, to do a rolling update:
C
You basically create a new machine set object, and you do it one by one: you basically kill a machine from one machine set and create it in the new machine set, one by one, or basically based on the parameters you provide as part of maxSurge. Then on the same deployment object you can also pause the rollout, and interestingly, if the deployment fails in between, it automatically gets [halted] and ends up in that situation.
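A machine deployment of this shape, modeled on the built-in Deployment API, might look roughly like the manifest below. The apiVersion and exact field names are invented for illustration; the real CRD schema may differ.

```yaml
# Hypothetical MachineDeployment manifest, modeled on the Kubernetes
# Deployment API; group/version and field names are illustrative only.
apiVersion: machine.example.io/v1alpha1
kind: MachineDeployment
metadata:
  name: test-machine-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      pool: worker
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # extra machines allowed during the update
      maxUnavailable: 1    # machines allowed to be down during the update
  # paused: true           # commented-out fields used for pause / rollback
  # rollbackTo:
  #   revision: 0
  template:
    metadata:
      labels:
        pool: worker
    spec:
      class:
        kind: AWSMachineClass
        name: test-class-v1
```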
D
Yeah. Oh man, is my screen visible? Yeah, I guess it is. So, to show what a machine class looks like: we have this AWSMachineClass, the CRD we were talking about earlier, and we give a name to it. The spec right now contains all the fields related to AWS, so our implementation right now supports AWS, but it's easily extensible.
D
So we have this machine image, availability zone, and the different fields: machine type, the network interfaces, block devices, and so on. The interesting field here is the secret reference, where you point it to one of the Kubernetes secrets. So here I have the secret called secret-1.8.0.
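Based on the fields enumerated here (machine image, availability zone, machine type, network interfaces, block devices, secret reference), the class might look roughly like this. The schema and all values are invented for illustration; only the list of fields comes from the talk.

```yaml
# Hypothetical AWSMachineClass, reconstructed from the fields mentioned
# in the demo; the exact schema and values are illustrative only.
apiVersion: machine.example.io/v1alpha1
kind: AWSMachineClass
metadata:
  name: test-class-v1
spec:
  ami: ami-0123456789abcdef0         # machine image
  region: eu-west-1
  availabilityZone: eu-west-1a
  machineType: m4.large
  networkInterfaces:
    - subnetID: subnet-0abc
      securityGroupIDs: [sg-0abc]
  blockDevices:
    - ebs:
        volumeSize: 50
  secretRef:                          # points to a Kubernetes Secret holding
    name: secret-1.8.0                # credentials and the cloud-config
    namespace: default
```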
D
What this secret would contain: basically, it would contain a userData field, which is basically a cloud-config file which already has the kubelet version mentioned inside, as well as the provider access keys and provider secret keys required for AWS. This, then, is used by the machine deployment. So here we give the machine deployment name; these two fields that I have commented out are used for pause and rollback.
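The referenced Secret, as described (a cloud-config userData blob with the kubelet version baked in, plus the AWS credentials), might be shaped like this. Key names are assumptions; real values would of course not be placeholders.

```yaml
# Hypothetical Secret backing the machine class: userData cloud-config
# plus AWS credentials. Key names and values are illustrative only.
apiVersion: v1
kind: Secret
metadata:
  name: secret-1.8.0
type: Opaque
stringData:
  userData: |
    #cloud-config
    # ...bootstrap script that installs and starts kubelet v1.8.0...
  providerAccessKeyId: AKIAEXAMPLE
  providerSecretAccessKey: EXAMPLESECRET
```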
D
I just have a watch on this test machine deployment that I'm going to create, and right now let me quickly show you the cluster state. The way our cluster is set up is that the master nodes run separately and the worker nodes run separately. Right now I don't have any worker nodes, so the cluster shows no worker nodes when I do a kubectl get. Let me show you, in the interest of time.
D
I have already deployed the kube AWS machine class; let me call it class v1, and there is also a class v2 that I'll be using later while doing a rolling update. Similarly, the secrets that are used to store the cloud config and provider-specific secrets: I have secret-1.8.0 and secret-1.8.2, which have the kubelet versions 1.8.0 and 1.8.2. And let me do a get machine.
D
So right now, yeah, this is a completely empty cluster. Let me deploy this machine deployment that I just showed you. As soon as I apply this, there's an object created, and the controller picks it up, sees that the needed number of replicas is three, and starts creating three replicas. If you look at the machine object, you can see the name and the things you would expect, but what's interesting: here we have the status field, where we have the conditions. Right now it says the deployment doesn't have the minimum availability.
D
All this is borrowed from deployments in Kubernetes itself: the number of replicas, the number of unavailable replicas, and when it's ready, it would show the number of ready replicas as well. Let me quickly show you. So this is the machine deployment; similarly, we have the machine set backing this deployment.
D
And the machine object would look something like this. How we map machines to the nodes is we use labels: each machine has this label containing the node, that is, your Kubernetes node, and because we are using AWS, it's the IP of it. And there's the owner reference, again pointing to the machine set, and the class. And in the status, we are borrowing the node conditions; the current status is still Pending, since it's being created. Once it's created, this will change to Running.
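Putting those pieces together, the machine object on screen would have roughly this shape. Names, label keys, and values are invented; only the structure (node label, owner reference, class reference, mirrored node conditions, Pending phase) comes from the talk.

```yaml
# Hypothetical Machine object as shown in the demo; all names and values
# are illustrative only.
apiVersion: machine.example.io/v1alpha1
kind: Machine
metadata:
  name: test-machine-deployment-abcde-12345
  labels:
    node: ip-10-0-1-23.eu-west-1.compute.internal   # maps machine -> node
  ownerReferences:
    - kind: MachineSet
      name: test-machine-deployment-abcde
spec:
  class:
    kind: AWSMachineClass
    name: test-class-v1
status:
  currentStatus:
    phase: Pending        # changes to Running once the node becomes Ready
  conditions: []          # node conditions copied from the node object
```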
D
So yeah, this is nearly ready... yeah, this is ready. Well, what I can do is quickly increase the number of replicas. So I change that to five and do an apply. The controller picks it up, again realizes there are two too few and that it needs to create two nodes, and it starts creating them. It takes a few seconds.
D
Even if the machine at the provider stops responding, it would be recreated as well. And this would take maybe a minute or so for these nodes to come up. So you can look at even the ready replicas over here right now, in the machine deployment.
D
So, once... and this is the very last part: let me quickly show you a rolling update, where I try to update the cluster from these machines, which are pointing to the kubelet version 1.8.0, to 1.8.2. So till these nodes are getting ready, let me quickly show you what that looks like. Earlier, the machine class version I showed you was v1.
D
So, as you can see here right now, these nodes are at 1.8.0. Once we are done with the update, they'll all change to 1.8.2. The update that we do is a replacement, not an in-place update; our implementation works with a replacement strategy as of now. Now... yeah, now.
D
When I apply this, the controller immediately picks it up, starts deleting the extra VMs, and starts creating new VMs in parallel. So at this point, you will have two machine classes and two machine sets, one pointing to the old one and one pointing towards the new one. And if we have a look at the AWS console at this point, you will find a mixed set of machines.
D
Quickly, let's do a get nodes, and we would see eventually new machines joining the cluster with version 1.8.2. So you see here there are new machines joining with 1.8.2, and since this is a rolling update, it happens in parallel: it waits for a few machines to come in, then waits for the others, depending on the strategy, just like how deployments are meant to work.
E
You chose, for simple things... like, you use the word "machine", but then we already have nodes, which are kind of machines, and you have "deployment" and "machine set". And, like, GKE has node pools, and I heard some folks have been working on node pools too. Say you begin to sort of figure out what is a good terminology you might want to use; this just seems to be a problem that a bunch of people are solving in their own place.
D
The machines are moving back, so you can see the newer instance types are being shut down and the m4.large, which was the original VM type, are coming back up. We could also do a quick pause on the update, a pause on the rollout, and what this will do is just pause whatever the controller is doing. This way, you know, we can have a hybrid version of the cluster at some point, where you have a mix of both different kinds of nodes. And this will take a while for it to come up.
C
From the usability point of view, if you see from the portability point of view, and also in the future: if you don't have a separate CRD like the machine class, then it becomes really... I'm not really sure how the deployment would then basically propagate the provider config down to the machine, and if you want to do a rolling update at that point, then how do we update the provider config itself, so that the machine deployment knows, from each provider config, which machines are to be updated.
H
...that notion. And so I think a lot of people find that to be, from the developer side, a pretty logical way to think about this. I'm a little bit worried, on the end-user side, that it's too complicated for end-users to think about machines that way, which is why we've moved away from it. But maybe, maybe we should embrace the classes.
C
There is also a reason that we explicitly went for an AWSMachineClass and not a generic machine class concept. Again, if you see, there is very little overlap between the different providers. Say, for example, for Google they are called preemptible machines, and in other places they are called spot instances, right? So if we have two classes, then the user who is creating or filling up the details in the class is only aware of one provider.
C
Let's say it's AWS: then he will pick that up, and it's comfortable for you to pick up the word "spot instances", which is the standard AWS one. But the other way, if we try to define one common MachineClass CRD and then put a kind, AWS or Azure or GCP, inside, then you have to come up with a more generic name which applies to all of the cloud providers, right? So those were the reasons: since there is not much overlap, we decided to have different names for the machine classes.
H
The other question I have about that: one problem we've had in the past is that as GCP or AWS change the set of parameters and options you can specify on individual machines, it can often become very difficult to sort of plumb those new fields through. So, like, when GCE added preemptible VMs, adding that to this class... if that requires revving your API version and pushing out a new software release...
H
The turnaround time between when the underlying cloud provider adds a feature and when we can take advantage of it becomes really long, and it becomes a very slow and complicated process. Whereas if we can have more of a spec that is sort of maybe just blindly passed to the cloud provider, maybe with some basic validation on top of it, that becomes a lot more flexible, because then you can just let end-users specify in their YAML, "here's this new field", and the controller itself doesn't need to know the meaning of that field.
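The "blindly passed spec" idea being discussed could look something like an opaque provider-config blob. This is purely illustrative; the group/version, field names, and values are all invented.

```yaml
# Illustration of the idea discussed here: the machine spec carries an
# opaque providerConfig that the generic controller does not interpret,
# so a new provider feature needs no API revision. Shape is invented.
apiVersion: machine.example.io/v1alpha1
kind: Machine
metadata:
  name: example
spec:
  providerConfig:
    # passed through to the provider-specific implementation as-is;
    # only that implementation understands these keys
    machineType: n1-standard-2
    preemptible: true    # a newly added provider feature, usable immediately
```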
C
Like, yes, so that's, I think, already a very good argument for that. But from our experience so far, what we have learned is that if a user mistakenly gives a wrong... I mean, ideally he should not, but if he gives a wrong provider config, then the validation should actually be so solid that we can tell the user back that...
E
It's sort of... it's the question of who is the user, right? If this kind of thing is exposed to an organization where, you know, there are only very few classes and occasionally they can change, a sort of more manual validation could be sufficient. But if you provide it to external users, who are allowed to do anything and should be able to specify just about anything that the underlying provider supports, then that's a different problem.
I
Quick question about the expected behavior with edits. So let's say you create one of the AWS machine classes and then decide to change one of the fields. I forget what the fields were, but I think they included some things about the details of the VMs. Does that trigger a deployment that changes all the machines to the new, you know, size? Or does it ignore that?
C
That's actually a very nice design question, and again, actually, that's a question which only arises when we go with a machine class, right. There are actually three options, three different ways we can tackle it. One is, as you mentioned, the controller can start picking up the machines and start modifying them based on the new class. The second, which we have adopted for now, taking the analogy from the storage class, is that we don't do anything.
H
Yeah, and I think we can... this is a pretty large forum to do an intricate design review, so it might make sense to do a sort of separate breakout, a higher-bandwidth, lower-participant design review, and then kind of come back to this meeting, hopefully by next week, with sort of the result of that, and present that to the wider audience. I don't think everybody here needs to be involved in that; I think it'd be easier if it was sort of a smaller group.
K
Whereas on what we had been working on, the project we're working on at Red Hat, we had gotten some guidance from our Operations Group: they were very focused on the use of scale groups and instance groups. So what I was doing at the time, I was just asking for specifics on where folks thought that the approach of having the kube controllers fully responsible for creation and deletion of every individual instance... what benefits people had in mind for that, or where it's gone wrong in the past.
K
Although I should note, I've since been talking with our Operations Group; we got that guidance simply because they didn't have any other option. They're not actually opposed to controllers that manage every single node. So we did get more information there. But I know Robert mentioned he might have some thoughts on that; we just would love to hear any info from anybody who's seen that go right or wrong. Yeah.
C
We wanted it to be as cloud-agnostic as possible. So if you see, the only dependency we expect from the cloud providers is to create the machine and delete the machine, nothing else. Implementation-wise, if you expect only these two methods, then we believe that we can include more cloud providers. But there's always the option of also going with the node pools on Google and the like, to have it work.
E
Just to add to that: I would probably do the same thing, but from a slightly different perspective, because ASGs come with a whole bag of stuff, right. So with an ASG, for example, if you wanted to get the individual IPs of the machines, the instances that the ASG creates for you, you have to actually go and look at each of them, right.
E
So you have to, like, list them and go there and get it. It's a slightly indirect way of controlling things, because now the ASG is kind of in charge of the things you have to create. There are a variety of limitations that are really... annoyances, in other words. One example:
L
A machine definition that was created by a machine set: change its labels so that the machine set doesn't control it anymore. That machine set will go create another machine, because it sees that its count went down for the matching label set, and then I can just kind of quarantine that machine for debugging if I have an issue with it. So there are all sorts of ancillary benefits, and that would also be, like, cloud-agnostic; even if you did it on-prem, it would be on-prem-agnostic.