From YouTube: Kubernetes SIG Cluster Ops 20171116
A
Hello, everybody, welcome to the November 16th Cluster Ops SIG meeting. This is our one more before KubeCon. Robert has asked for some time in this meeting, as our primary agenda topic, to discuss and get input on the cluster API work that he's been doing. So with that preface, let's roll it — let's roll. Great.
B
Thanks, Rob. So I'm going to go through a couple of slides to give background and then do a quick demo. The cluster API — what we started looking at, from our perspective at Google, is that the state of the world for cluster management has turned into a lot of fragmentation over the last couple of years. We started off where, you know, most people were using kube-up, and then a lot of people decided that they didn't like kube-up and started rewriting their own solutions. So we've got sort of a wide variety of things.
B
You know, from open source like kops or Kubo or kubespray, to commercial entities like Tectonic or GKE — there's just been sort of a flourishing of different options here. Along with that, each person chose the particular way they wanted to configure certain components. We have a lack of consistency in things like admission controllers, or how people have configured authentication and authorization. People implement upgrades in different ways, so you've got some people doing in-place upgrades.
B
Those reconcilers could run inside or outside of the cluster. This is a little bit different from normal Kubernetes reconcilers, which pretty much always run inside the cluster, because we have interesting bootstrapping problems when you're talking about the lifecycle of the cluster itself. And then what we have is sort of an interface layer between this cluster API and the different underlying cloud or bare-metal providers, where we want higher-level tools to speak to the cluster API, and an intermediary controller interface that translates that down to the underlying infrastructure.
B
So the higher-level tools don't need to know anything about the underlying infrastructure, and we can port existing tools that target underlying infrastructure, and that are built for Kubernetes, to instead target the cluster API. This is sort of what's been built for GKE, for kops — there are a lot of systems that implicitly or explicitly define a cluster API, but there's no consistency across the community, and what we'd like to do is standardize it.
B
What we envision this looking like is: you still have, you know, a multitude of deployment tools. People are going to have different ways they like to deploy their cluster — if they really like Ansible, or if they like kops, they can still use those. But when the cluster gets created, the cluster will come with an API, and you have a consistent set of tools or automation — like the cluster autoscaler, node auto-provisioner, cluster upgrade, cluster repair.
B
They can run against any cluster that is conformant to the cluster API, and those tools can do things like add and remove nodes from the cluster, upgrade the cluster, and apply different upgrade strategies. Underneath the cluster API you have the underlying cloud providers, and then sort of a glue layer in between, which is your controllers that reconcile the desired state that's expressed through the cluster API with the actual state of the world. And this won't be, you know, sort of one controller to rule them all.
B
There will actually be multiple controllers running in a single cluster — multiple controllers written for different environments. So you could imagine a Terraform controller that works against Google Cloud, and also a sort of native Google Cloud controller that runs against Google Cloud as well. So, some example features that we expect to come out of this — which is what I was hoping to discuss, mostly, with this group — you know, you can do things like specify policies for cluster upgrades.
B
And today, you know, if you have three clusters on-prem and three on GCE, you likely installed them using two different tools, and you likely do upgrades using two different tools. What we'd like is to be able to write a single tool that knows how to upgrade all of your clusters regardless of where they live. And once you have that single tool, you can then start to express policies like doing gradual rollouts of your clusters, even when the clusters span different environments. You can also — once you have a declarative definition of clusters —
B
You can diff clusters and you can try to keep them in sync. So you can do fun things like make sure that, if you're running, you know, CI-type clusters for testing new changes, those have all of the same configuration flags for the API server and controller manager that you're using for your production clusters, so that your test environment matches your production environment as closely as possible, even if you have a different number of nodes in your cluster. You can also diff —
B
You know, modify machines — and since machines are expressed using Kubernetes API semantics, you can actually just use kubectl to do this. You don't have to have any custom tools. You can use all the tools that you're used to for modifying your apps to also modify the infrastructure underneath those apps. So I can dive a little bit into what the machines API looks like. This is one of the, I would say, harder problems that we're trying to solve.
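[A minimal sketch of what "just use kubectl" could look like here, assuming the prototype's Machine resource is installed in the cluster; the machine name is hypothetical.]

    # list machines, inspect one, and edit its desired state in place
    kubectl get machines
    kubectl describe machine node-1   # hypothetical machine name
    kubectl edit machine node-1       # the controller reconciles the change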
B
So, a couple of tenets we're trying to follow in this API. One is that the API should be layered on top of Kubernetes: it's not part of core Kubernetes, and we shouldn't have to change any of the core types to make it work. That has a couple of ramifications you'll see in a minute. We'd also like to make a very clean separation between the pieces that are specific to where you're running — your environment or your provider — and everything else.
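[A sketch of how that "on top of, not in, the core" tenet could be realized: registering the Machine type as a CustomResourceDefinition. The group, version, names, and scope are assumptions about the prototype, not confirmed in the talk.]

    cat <<EOF | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: machines.cluster.k8s.io
    spec:
      group: cluster.k8s.io     # assumed API group
      version: v1alpha1
      scope: Namespaced
      names:
        kind: Machine
        plural: machines
        singular: machine
    EOF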
B
As a result of that definition, you should be able to delete specific nodes — and this is something that's needed by the cluster autoscaler. When it wants to scale down the size of your cluster, it figures out which node it thinks should be deleted and wants to delete that specific node, as opposed to the alternative, which is "I have a set of nodes, just pick one at random to delete." If you think about this as analogous to pods and ReplicaSets, ReplicaSets have this third bullet, which is: you scale down by one.
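[As a rough illustration of that distinction — deleting one specific machine by name, which is what the autoscaler wants, rather than scaling a set down by one; the machine name is hypothetical.]

    # delete one specific, named machine rather than letting the system pick
    kubectl delete machine node-4f9x2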
B
A little bit lower priority is being able to update the container runtime. So we'd like to be able to declaratively say "we should upgrade from Docker 1.12 to Docker 1.13 on this node" and have that be enacted by the controller (see the sketch after this paragraph). And much lower priority — something we're thinking about, but we haven't really delved into it too much or figured out what the API would look like — is specifying sort of arbitrary packages that you'd like to have on your machines, things like what kernel version you want to have.
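[A sketch of what that declarative runtime upgrade might look like; the field path under spec is an assumption about the prototype's schema, and the machine name is hypothetical.]

    # request Docker 1.12 -> 1.13 on one machine; the controller enacts it
    kubectl patch machine node-4f9x2 --type merge \
      -p '{"spec":{"versions":{"containerRuntime":{"name":"docker","version":"1.13.0"}}}}'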
B
What do you want to have — socat installed, or OpenSSL, or other sorts of system libraries. We have heard that there are a lot of people, especially those running on-prem in more customized enterprise environments, that have very specific ways they want to set up their base images, so I think there is some demand for this. But I don't think we know enough about what the requirements are to design an API that we think will actually work for everyone. And finally, we'd like to build support for autoscaling.
B
So one of the goals here is to rebase existing tools that work against cloud provider APIs directly on top of the cluster API, and one of those tools is the autoscaler. We have an autoscaler in the Kubernetes ecosystem that works on GCE, and I think it's also been ported to AWS, and at least at one time it worked for Azure — although I think the support for Azure fell out of maintenance and was removed.
B
But if we had the autoscaler pointed at the cluster API, then any environment where you implemented the cluster API would just sort of get autoscaling for free, which I think would be a really big win for the whole ecosystem. So what does this look like? Normally in Kubernetes, we have a type that includes sort of two top-level things.
B
If you go back to our initial tenets — which were to not touch the existing core Kubernetes API — you'll notice that for nodes we actually have a Node type in the API today, and it has a node spec and it has a node status. But the existing Node type is really just a status, right? It reflects the current status of a node, and it really has no declarative fields where you can ask, you know, the kubelet to change the state of that node itself.
B
So what we've done is we've taken a step up from there and created a concept called a machine, and a machine is basically the spec for a node. So you can think of sort of a virtualized supertype — which is what a node in Kubernetes actually is — that contains a spec, which is the new type we've created, and a status, which is the existing type that's in the core. You put these two things together, and that is really what a node in Kubernetes is.
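[One way to picture that pairing, assuming both objects share a name; purely illustrative.]

    # the "virtual node" is this pair of objects for the same underlying machine
    kubectl get machine node-1 -o yaml   # desired state: the spec half
    kubectl get node    node-1 -o yaml   # observed state: the status half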
B
You know, if we refactor Kubernetes, and if we decide that the API for the cluster should be part of the core — which is still TBD — we could make a Node type in the future that actually has the spec and the status in the same object. Now let me do a quick demo. So I ran the first command already, which was to create a cluster — you can see it here. It took about two and a half minutes. Now —
B
I have a cluster. I'm running "get nodes" in a loop here, so you can see that I have two nodes in my cluster. I'm also running "get machines". So one of the nice things about the way we're doing this is that, again, you can just use your existing Kubernetes tools to look at resources — so you can get machines, you can describe a single machine.
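[Roughly the watch loops being shown in the demo; the refresh interval is an assumption.]

    watch -n2 kubectl get nodes      # the node view of the cluster
    watch -n2 kubectl get machines   # the machine view, via the cluster API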
B
Oops — I need to actually say what type it is, not just the name. And you get the YAML out of that, which tells you that this machine is running 1.7 — it should be running 1.7.4, right? This is the declarative half. And if we look at "get nodes", we see that in fact it is running version 1.7.4.
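[The two halves of that check, approximately; the machine name and the spec field path are assumptions.]

    kubectl get machine node-1 -o yaml   # declarative half: spec says kubelet 1.7.4
    kubectl get nodes                    # observed half: VERSION column shows v1.7.4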
The next thing we're going to do is scale. So, as I showed before, we have a client-side tool —
B
That knows how to add new machines to your cluster. So we're going to say: you know, I like this node that I've got, I want to have more of those, so I'm going to scale the number of machines. Oh — let me redo our watch here. Scale the machines in my cluster where the type equals node up to five. And so here, before, we just had the first two, right — they've been around for half an hour — and we scale to five.
B
We get four new machines added. So this is us expressing our intent that we would like new machines to be added to our cluster. Down here I'm watching the instances in Google Compute Engine, and you can see that, as a result of my declaring that I would like new machines to exist, we are provisioning new VMs in GCE. And then, if we wait a minute or two, those will start to show up as nodes inside of our cluster. And, you know, you can scale up and down to your heart's content.
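[The demo's client-side scaling tool isn't shown on screen; a plain-kubectl approximation would be to stamp out more Machine objects from a template. The template file and naming scheme here are hypothetical.]

    # create machines node-3..node-5 from a template manifest
    for i in 3 4 5; do
      sed "s/MACHINE_NAME/node-$i/" machine-template.yaml | kubectl apply -f -
    done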
B
Well, let that show up, and then the last thing we're going to do is show how upgrades work. And upgrades are fun, because with upgrades, what we do is just change, on each machine, what version we would like to be running, and then the controller underneath actually goes and enacts that change for us and modifies the underlying infrastructure to give us nodes running a different version.
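[A sketch of that upgrade flow over every machine; the target version and the field path are assumptions, not the demo's literal commands.]

    # declare a new kubelet version on each machine; the controller rolls
    # the underlying VMs to match
    kubectl get machines -o name | while read m; do
      kubectl patch "$m" --type merge -p '{"spec":{"versions":{"kubelet":"1.8.0"}}}'
    done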
C
And your example's great — if you want to do things like cordoning your node and draining the pods off it, and then doing the actual replace, and then waiting for it to come up before you uncordon it, right? This provides a way that, I think, is easier than — or at least more native than — Terraform for all of that kind of —
A
I mean, from a kubeadm perspective, some of what we're describing are primitives that you would wrap to take those actions. Are you envisioning this as a standing service that you're interacting with via an API, or is this a Go binary that would exercise the existing APIs of different cloud providers and could control them? Yeah.
B
So, think about the layering with kubeadm. kubeadm assumes infrastructure exists, right — it's built to be run on top of machines that have already been provisioned. And this is the next layer we're trying to standardize on, which is provisioning machines: how can I declaratively say, "create these machines for my cluster". In our implementation we're actually using "kubeadm init" on the master and "kubeadm join" on the nodes to do the clustering part of this.
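[The layering just described, schematically — the cluster API provisions the machines, then kubeadm clusters them; tokens and addresses are elided placeholders.]

    kubeadm init                                      # run on the master machine
    kubeadm join --token <token> <master-ip>:<port>   # run on each node machine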
B
And since it's a meta-API for the cluster, there are interesting questions around disaster recovery — you know, what happens if etcd gets corrupted, how do you recover from that? And that's one of the reasons we may want to migrate from CRDs to API extension servers: we can put our cluster definition outside of the same etcd that's being used for the cluster itself. So if you create a job that causes that etcd to blow up, you can have the API server still be hitting the API extension server —
B
That's hosting your machine definitions, and that's reading from a different etcd, which is not down, right? And that could be stored outside of the cluster as well. So I think this is where, in the beginning of the slides, I was saying that in general in Kubernetes we run reconcilers inside the cluster that they're operating on — and since this reconciler is operating on the cluster itself, I think there are cases where people might want to run this reconciler outside of the cluster.
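[A sketch of that migration target: serving the machine types from an aggregated API extension server instead of CRDs, so its storage can live in a separate etcd. All names here are assumptions.]

    cat <<EOF | kubectl apply -f -
    apiVersion: apiregistration.k8s.io/v1beta1
    kind: APIService
    metadata:
      name: v1alpha1.cluster.k8s.io
    spec:
      group: cluster.k8s.io
      version: v1alpha1
      groupPriorityMinimum: 1000
      versionPriority: 10
      insecureSkipTLSVerify: true
      service:
        name: clusterapi-server   # hypothetical service backed by its own etcd
        namespace: default
    EOF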
A
That's what was getting me a little bit confused, so now I see why you had the inside/outside distinction, right. From the way you've built up the slides, it wasn't clear to me if you were talking about a Kubernetes extension or an additional service. Just as feedback: it would have been clearer to me if it had been very distinctly set up as an API extension to the cluster, because some of what you were describing, like reconciling multiple clusters, is not something that needs your reconciler.
B
Like I said, there are two things I was trying to describe. One is how we define what a cluster looks like and how we might implement that — so we have an implementation right now, which I'm showing you, that works on GCE, along with our proposed API for what a cluster should look like. And the other thing was: once you have that definition, what can you do with it? And I guess that's what I really wanted to discuss with cluster operators — what are the common things that people do on their clusters?
A
It strikes me that the pattern of destroying a node to upgrade it makes a lot of sense for cloud users; it's not as easy in physical ops, but our opinion has emerged — and this is what we hear when we talk to other operators — that moving to the destroy-and-recreate pattern is a better pattern for managing a cluster like this than trying to figure out all the patch permutations. And there's a part of me that says: be opinionated. Don't try to patch machines; just say it's a —
B
So that's interesting. That's sort of coming in line with the push for immutable infrastructure, right — where you create it, and it's there, and if you want something different, you delete it and create a new one. I think what we're trying to achieve with the cluster API is not to be that opinionated; I think we actually want to support both types of upgrades, depending on what your controller does. So you change — you know, "kubectl edit" your node — and you give it a new version. What does that mean? I think we —
B
We still have yet to say exactly what it means. One proposal is: that means you do an in-place upgrade, and if you want to do sort of the immutable-infrastructure thing, you should just delete the machine and create a new machine with the version you want — and that will express your intent of delete-and-create, because you wanted immutable infrastructure. The other proposed alternative is: you kubectl edit a machine —
B
You change the desired kubelet version, and it's up to the controller to decide whether it implements that as an in-place or a replacement upgrade. And that's actually what we have today: right now you can kubectl edit a node, and it will just delete the VM and create a new VM. We were discussing yesterday whether that's the right semantics or not, but I think the intent either way is for the API to allow you to have both implementations. So you can imagine somebody who's running on bare metal —
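[The two upgrade styles just contrasted, as commands; names, manifests, and field paths are hypothetical.]

    # immutable style: express the upgrade as delete-and-create
    kubectl delete machine node-1
    kubectl apply -f node-1-new-version.yaml   # same machine, new kubelet version

    # in-place (or controller's-choice) style: edit the desired version
    kubectl edit machine node-1                # change spec.versions.kubelet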
A
In the past I might have said to be flexible, but I think at this point, with my experience, the right answer is to be opinionated, because I think you're going to get lost in that edge case, and I think the edge case has very little value. I do think that, semver-wise, a dot-dot kubelet patch should be — that's a patch, I guess that's fine. But anything else, I would say: look, this is the pattern, you should be doing this.
A
I think you get into really serious issues, especially with a high-level API like this, if you say, "oh, we're going to try and enable you to do a kernel patch of, you know, Ubuntu, because there's a new kernel out, and I don't want to touch the rest of my install — I'm just going to start doing kernel patches." That, to me, is not a good pattern for a generalized API, right?
A
You want to say: look, if you're not just patching a kubelet, then that's really outside of the scope of cluster management. We expect you to either do that completely outside of this, or we'll give you a reset button, and you can implement the reset however you want. If your reset is "just patch the kernel" — well, you're going to interrupt the flow anyway; while you're patching the kernel you're going to have to reboot, that's a minimum, right? Okay.
B
You even mentioned that we probably would want to do in-place updates for patch releases of kubelets, right? So right there, you've got a use case already where you'd think we should do in-place upgrades. So I think what we're trying to do with the API is to not close the doors to the different types of upgrade scenarios people might want to build, rather than mandating "this is how you have to do it." Yeah.
A
You know, we quickly dropped the Go binaries. Whether it was logical or not — and it made a ton of sense for operators to put the binaries in place themselves — it was a variant that people didn't want. We did it, and it just wasn't helping, even if it was smart. So my suggestion is: the cloud pattern is the pattern you're going to see the most of anyway.
A
We need to bring along the physical-ops people — this is practical, considering where I used to think on this. I've just been watching people who are moving into more immutable infrastructure patterns, and it fits better with the Kubernetes model. You might leave the day-to-day as it is — that's fine, right? Yeah.
A
No — sorry, yeah, I don't want to get distracted on that. I like the concept; I think it's important. The other thing that jumps out at me is that you're going to end up having to implement node create/destroy interfaces within this API — or, you know, how do you not get into the million ways to create nodes, depending on your infrastructure problem?
B
I mean, well — the API is declaratively expressing what the node should look like, not actually how it's created. And so the controller that watches that API, and actually interfaces with the IaaS, can be opinionated about how that gets created, right? It can say, "I'm only going to create nodes using Terraform."
B
If you don't give me a Terraform blob, I'm not going to do anything. Or it could say, "I'm going to use Docker Machine to create nodes." And both of those systems support a really wide variety of cloud platforms, so you can have sort of a common-denominator controller that's pretty easy to plug in and gets pretty broad support without too much work.
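[A sketch of how such a provider-specific blob might ride along on a Machine object; the group/version and the providerConfig field are assumptions about the prototype, and the blob contents are illustrative.]

    cat <<EOF | kubectl apply -f -
    apiVersion: cluster.k8s.io/v1alpha1   # assumed group/version
    kind: Machine
    metadata:
      name: node-1
    spec:
      # opaque, provider-specific blob; only a controller that understands
      # this format (here, a Terraform-flavored one) will act on it
      providerConfig: |
        machine_type = "n1-standard-2"
        zone         = "us-central1-f"
    EOF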
B
The Loodse guys who came to our meeting yesterday — they chose Docker Machine for that reason when they were implementing node sets, because they could add new cloud providers in about two days, right? So it gives a really broad reach. On the other hand, for the larger clouds — like, I could imagine Google is going to be interested in this, and maybe Amazon and Microsoft as well —
B
We might have, you know, more customized controllers that can eke a little bit more performance or features out of a mature cloud platform. So you could install the Docker Machine controller and use that to provision Google nodes, or you could install the native GCE controller and use that to provision Google nodes, and from the API semantics it looks exactly the same to you — but the underlying implementation might be, you know, significantly faster or better in some way. Right, right. So —
B
Sure, right — and that's the other thing Chris and I were talking about: you could have the controller generate the Terraform, right? You don't have to have user-specified Terraform; the user could specify, "I want a node, and I want it to have this machine type," and as long as the controller understands that, it could then generate the Terraform. You can still use Terraform behind the scenes to actually apply those changes to your cloud, right? So there are lots of different implementation patterns behind the API.
A
I actually think that would undermine the — I think you have too much scope right now. I would be more willing to make compromises, to do things like say: use the Terraform plugins — not Terraform itself, the plugins — and drive them via an API; only support a node create/delete model, just to get things flowing through. Call it version one.