From YouTube: Kubernetes SIG Cluster Lifecycle 20171220 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.cmfhryt158zg
Highlights:
- Brainstormed about the 2018 Roadmap for Cluster API
- Update on MachineSets
- Update on AWS implementation
A: Hello and welcome to the December 20th meeting of the Cluster API working group, part of SIG Cluster Lifecycle. Today we are going to get started by talking about the 2018 roadmap doc, linked at the top of the meeting notes, that Lucas started putting together for the overall SIG Cluster Lifecycle roadmap for 2018. He has a section there for Cluster API, where he took a stab at what he thought we were working on, even though he doesn't come to this breakout meeting.
A
So
I
can
paste
his
thing
into
these
meeting
notes
if
people
don't
want
to
click
through
and
I'm
curious
what
people
think
about
this
as
sort
of
a
goal
for
next
year?
If
we
think
it's
it's
too
ambitious,
not
ambitious
enough
or
if
it's
sort
of
the
right
direction
do
we
have
it
in
it's
sort
of
that
time
of
year,
what
we're
trying
to
figure
out
what
we're?
What
we're
up
to
for
the
next
large
chunk
of
time.
A: We can release as many times as we want; well, not quite infinite, I don't think we can release many more than four times. I think Lucas might have put beta here as a conservative estimate. I do think it would be great for it to hit GA by the end of the year, with beta maybe about halfway through the year being ideal.
A: I was going to try to pull up the doc where we actually describe each of the API levels, alpha, beta, and GA, and what they mean. The biggest difference between alpha and beta is that once you hit beta, you can't make breaking changes that you can't upgrade across. There can be changes as long as you can automatically upgrade from the old version to the new version, whereas in alpha you can make any breaking changes you want.
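To make that alpha-versus-beta rule concrete, here is a minimal, hypothetical sketch; the type and field names are invented and are not the actual Cluster API types. The point is that after beta, a rename like this is acceptable only because an automatic, lossless conversion exists.

```go
// Illustrative sketch only; these are not the real Cluster API types.
package sketch

// MachineV1Alpha1 uses the old field name.
type MachineV1Alpha1 struct {
	ProviderConfig string
}

// MachineV1Beta1 renames the field.
type MachineV1Beta1 struct {
	ProviderSpec string
}

// convertAlphaToBeta is the kind of automatic upgrade that makes such a rename
// acceptable once the API is beta; in alpha the old field could simply be dropped.
func convertAlphaToBeta(in MachineV1Alpha1) MachineV1Beta1 {
	return MachineV1Beta1{ProviderSpec: in.ProviderConfig}
}
```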
A: So other than the specification hopefully getting to GA, and definitely to beta, in 2018, Lucas had written down some implementations of the API that he'd like to see: GCE, Terraform, and I think I added Docker Machine with a question mark, since that's what the Loodse guys have been using for their product with nodes. Then he put kops, Kubespray, and Kubicorn support, all with question marks at the end, and also using the Cluster API for our end-to-end tests.
A
So
this
is
basically
an
intermediate
step
of
replacing
kubernetes
anywhere
and
using
that
instead
using
a
cluster
API
annotation
instead
I
think
everybody
had
some
had
high
hopes
for
for
kubernetes
anywhere
and
it's
it's
gotten
parametrized
to
the
point
where
it
looks
sort
of
like
the
cube
of
scripts,
which
is
unfortunate
and
people
that
aren't
sorry
very
deeply
involved
in
it
have
trouble
figure
out
what
it's
doing
so,
hopefully
don't
fall
into
that
same
trap.
A
third
time,
I
think
Justin.
You
guys
have
been
able
to
avoid
that
on
cops.
B
Chris
nova
has
a
great.
She
has
very
good
talk
about
this,
where
we
didn't
avoid
that
we
actually
made
the
same
mistake
where
we
started
off
the
templating
and
we
ended
up
moving
everything
to
go
code
instead
of
using
templating
because
inevitably,
like
templating
devolves
into
exactly
this,
everything
is
templated
and
you're,
effectively
writing
code
and
you
move
more
logic
than
the
back
end,
and
it's
just
you
might
have
straightening
go
or
do
works.
D
Create
an
API,
atop,
terraform
yeah,
it's
just
not
written
to
be
a
library,
I'm,
actually
kind
of
skeptical,
of
attempts
to
abstract
cluster
infrastructure.
The
way
care
for
nose,
I
think
that
ap
eyes
are
more
accurate
and
true
or
representation,
and
especially
right
hand,
go
code.
It
makes
it
easier
to
programmatically
do
things
you
can't
do
with
the
reform
as
it
is,
I
mean
you
get
this
semi
structured
text
back
and
even
if
humans
really
are
machine,
can't
do
much
better.
A
Yeah
I
think
the
appeal
of
having
terraform
as
an
implementation
is
that
you
could
take
sort
of
generic
generic
description
of
what
an
don't
you
look
like
and
then
dumped
in
a
terraform
blog.
A
lot
of
people
are
using
terraform
today
and
that
provides
both
a
migration
path
to
the
cluster
api,
but
also
it
provides
a
sort
of
very
broad
platform
support
with
no
extra
work
on
our
end,
because
terraform
itself
has
broad
platform
support
right.
So
you
can
imagine
if
we
don't
have
like
say
an
azure
specific
controller
for
machines.
A
You
could
still
use
cluster
ap
on
Azure
just
by
using
careful
right
and
it
might
not
be
quite
as
clean
as
an
AWS
or
GCP
implementation.
If
we
have
specific
controllers
for
those,
but
it
would
still
work
yeah.
So
I
would
imagine
that
you
know
the
the
bigger
use.
Cases
would
have
specific
implementations
and
then
terraform
to
be
sort
of
like
the
lowest
common
denominator
fall
back.
I.
E
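A rough sketch of what that "specific controllers plus a Terraform fallback" layering could look like; the interface and type names below are assumptions for illustration, not the project's actual code.

```go
// Hypothetical sketch: each Machine is reconciled by an actuator, which can be
// cloud-specific or a generic Terraform-backed fallback with broad platform
// coverage but a less native integration.
package sketch

// Machine stands in for the Cluster API Machine object.
type Machine struct {
	Name     string
	Provider string // e.g. "gce", "aws", "azure"
}

// Actuator is the per-provider reconciliation interface.
type Actuator interface {
	Create(m Machine) error
	Delete(m Machine) error
}

// GCEActuator would call the GCE APIs directly for a native integration.
type GCEActuator struct{}

func (GCEActuator) Create(m Machine) error { return nil } // e.g. instances.insert
func (GCEActuator) Delete(m Machine) error { return nil } // e.g. instances.delete

// TerraformActuator would render a Terraform config for the target platform and
// shell out to terraform apply/destroy, acting as the lowest common denominator.
type TerraformActuator struct{}

func (TerraformActuator) Create(m Machine) error { return nil }
func (TerraformActuator) Delete(m Machine) error { return nil }
```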
B: I'm happy to answer as to what the immediate plans are for kops adoption. I think I have a pretty clear idea of what we should do in the short term. kops currently has this notion of instance groups, which is like MachineSets, and we currently map those to auto-scaling groups on AWS and to MIGs on GCE.
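For context, a sketch of the mapping being described, with invented field names: a kops-style instance group (backed by an AWS auto-scaling group or a GCE managed instance group) lines up fairly directly with a MachineSet-style object.

```go
// Hypothetical sketch of translating a kops-style instance group into a
// MachineSet-like object; the real kops and Cluster API types differ.
package sketch

type InstanceGroup struct {
	Name        string
	MinSize     int
	MaxSize     int
	MachineType string
	Image       string
}

type MachineTemplate struct {
	MachineType string
	Image       string
}

type MachineSet struct {
	Name     string
	Replicas int
	Template MachineTemplate
}

func instanceGroupToMachineSet(ig InstanceGroup) MachineSet {
	return MachineSet{
		Name:     ig.Name,
		Replicas: ig.MinSize, // assumption: autoscaling bounds need their own mapping
		Template: MachineTemplate{MachineType: ig.MachineType, Image: ig.Image},
	}
}
```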
A: Yeah, I guess the way I see this going is: kops has an API today that defines what a cluster should look like, and Kubicorn has an API today (I think it's already starting to migrate to the Cluster API), but it has its own sort of API for how you define a cluster. What I would hope to see is that, as we define these common pieces (we're talking about the machines API right now, but there are also the cluster control-plane-ish parts that we have to define as well), those tools would start to use them as their native API. kops might have some extra fields that it still needs to provide independently of those, and Kubicorn might as well.

A: But the common pieces are common, such that you can then write tooling like an upgrader: whether you installed a cluster with kops or installed a cluster with Kubicorn, it doesn't matter; you can point your upgrader at either of those clusters and it can use the machines API to upgrade all the nodes.
A: That's on my wish list: to have GKE also support the Cluster API, because I don't want to have a separate implementation or a separate set of tools that only work for GKE. I talked to the CoreOS folks at KubeCon about this as well, and I think they're in a similar boat with Tectonic. Like GKE, they have a lot of custom code that is largely redundant with other cluster provisioning and cluster management projects out there, and the more of that code and the more of the APIs we can share, the better. Each vendor is going to have a little bit of secret sauce or special stuff they want to add, but I'm hoping to reduce that as much as possible.
C: Has there been any collaboration or discussion with the folks from the InfraKit project, people like David Chung or any of the Docker core team? I'm sure they would also have a platform interest, considering that their platform needs to support Kubernetes. I'm kind of hearing a similar architecture, and it makes me think of something like a CRI, but instead of containers it's for nodes, or machines.
A: Yeah, I don't think anybody has talked to those folks yet. I think we've been trying to find as many people in the Kubernetes ecosystem as we can who have been walking down this road, and trying to rope them in and get people on the same page. We have not yet looked as much outside of that ecosystem, so any contacts there would be great.
F
Had
like
three
months
or
four
months
ago,
first
initial
contact
regarding
the
node
set,
probably
I,
can
contact
them
again
and
say:
hey
that's
new
idea,
so
they
were,
we
were
open
to
it,
and
but
for
us
it
was
currently
not
so
important
to
to
integrate
with
a
packet.
But
it's
now
getting
big
I.
Think
probably
they
I
think
there
would
be
also
interesting
into
it.
How
to
integrate
and
yeah.
C: Yeah, I mean, I could see a pretty clean translation shim for a lot of the InfraKit objects directly into the Cluster API schema, where InfraKit potentially would still be managing the creation and lifecycle of the nodes, but then the Cluster API controller could translate those objects into something that's stored in etcd.
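A tiny sketch of that translation-shim idea, with invented type names on the InfraKit side: InfraKit would keep managing instance lifecycle, while a thin layer maps its instance specs into Cluster API Machine objects stored through the API server.

```go
// Hypothetical sketch only; neither type matches the real InfraKit or Cluster
// API schemas.
package sketch

type InfraKitInstanceSpec struct {
	ID         string
	Properties map[string]string // provider-specific properties
}

type Machine struct {
	Name           string
	ProviderConfig map[string]string
}

func infraKitToMachine(spec InfraKitInstanceSpec) Machine {
	return Machine{Name: spec.ID, ProviderConfig: spec.Properties}
}
```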
A: Do you have any idea, Lee, how interested the Docker folks might be in collaborating on that? I mean, I think what we're designing right now, at least, is pretty Kubernetes-specific, and maybe it makes sense to step back and think about it a little bit more generically. But maybe it doesn't, right? At some point, if it's too generic, it's not actually going to solve our problem in a reasonable amount of time.
C: Yeah, InfraKit has its own provider shim, and the providers tend to be implemented specifically for InfraKit; some of the code gets forked from Docker Machine. Also, similar to what we were just talking about, there's a shim, or a provider, for Terraform, which is maybe the less ideal way to do it, but also a really interesting kind of beast in its own right.
A: So to me that sounds pretty similar to kops. I was talking to Chris Love; he said kops was not as interested in the singleton machines and was more interested in the MachineSets, and that's what they'd like to adopt, which I think is in line with what Justin said. I think that's also what Eric Paris was saying last week, that Red Hat was more interested in that concept as well.
A: So it looks like someone added Kraken support with a question mark here. I think Lucas had just thrown out a couple of projects that he knew were entertaining the idea of potentially adopting the Cluster API: kops, Kubespray, Kubicorn. I would love to see broader tooling support, so I don't know who added Kraken, or if they work on Kraken, but it would be great to bring those folks in as well.
D: That was three years ago, and the landscape is much different even today. The biggest difference we have with most other tools has to do with the etcd configuration. I'm of two minds on this. Half the team says etcd absolutely has to be external to Kubernetes and isolated; it just has to be, as a design principle of not having any recursive dependencies.
D: The other part of the team says etcd is so hard to manage that you're crazy not to self-host and use the operator. And then I think, well, if you wanted to meet both goals, maybe you would have, say, self-hosted etcd number one running in one cluster and acting as the etcd for a different Kubernetes cluster, and that cluster in turn exposes the etcd for the next one. So basically you can still self-host, but you're not self-hosting for yourself. That's the biggest difference.
A
Just
because
the
the
other
tools
are
opinionated,
saying
that
city
lives
inside
the
cluster
and
don't
give
me
like
out
to
move
it
out
so
I
know
that
cube
admin
when
you're
installing
a
cube
admin
cluster.
If
you
go
to
that
level
of
tooling,
it
allows
you
to
specify
an
external
at
CD.
Is
that
except
the
cluster?
It
doesn't
set
that
up
for
you
right.
So
just
crackin
set
up
that
CD.
That's
then,
outside
of
the
cluster
as
part
of
its
provisioning.
A: The CoreOS folks let us know at KubeCon that self-hosted HA etcd had some unfortunate edge cases; there are some bugs right now, so I would recommend not doing that yet until they figure out how to make it work. They've been spending a lot of time trying to make that work for Tectonic, and found some places where it doesn't quite work correctly and it's difficult to recover from the failures.
A: That was not my understanding at all. I believe Tectonic runs HA etcd and it's fine; I know we're running HA etcd in GKE now with regional clusters, and I think kops supports HA etcd as well and has for quite a while. So I don't think running multiple etcds in a cluster has problems; I think it's when you couple that with self-hosting that you can get cases where it fails.
B
I'll
try
one
more
time
and
then
I'll
be
quiet
if
it
doesn't
work.
I
I've
been
working
on
a
net
CD
manager
which,
instead
of
dignities
API
for
self
hosting,
so
it's
not
self
hosted,
but
it
is
it's
like
a
bootstrap
thing
that
may
be
more
reliable.
It's
on
my
github
all
placed
a
link
in
a
minute
when
I
find
it,
but
if
anyone
is
interested
in
collaborating
or
playing
with
that
or
whatever,
then
let
me
know,
but
it
it
may
be.
D: You could just not run workloads on the cluster that's running etcd; you don't have to do that. The main problem I see with etcd is maintenance: upgrades and basic operations are difficult enough that when I try to describe them to customers, I don't feel comfortable walking away. So I feel like the operator is huge, and honestly I trust a CI'd procedure.
F: One comment regarding the implementations: should we also have something like an example implementation in Python? I think all of these are written in Go, so it would make clear that you can easily write integrations with Python or other tools too, because Python especially is quite common in the ops area.
A: Yes, that brings up an interesting question of what code should live there, or wherever our official repo ends up being. If it's not kube-deploy, we might try to get a top-level one that's called, you know, cluster-api. I'm of two minds. On the one hand, I don't want all of the cloud-specific code to be in a single repo; we found with the main Kubernetes repo that that doesn't scale very well, especially with Go and Go deps and dependency management. It gets really ugly really fast.
A: We have the GCE code in there right now because it was the fastest way to prototype, but I kind of want to take it out and just have the API definitions and reusable libraries in there, and have the cloud-specific code somewhere else. Now, what we could do is create top-level directories in kube-deploy, where one top-level directory is all the shared stuff.
A
And
then
we
have
other
top
level
directories
that
are
sort
of
like
separate
repos
in
the
sense
that
they
would
build
separately
and
we
could
vendor
things
separately
in
those
repos.
If
we
wanted
to
have
a
shared
code,
location
but
I
don't
want
to
get
sort
of
the
tangled
dependency
mess
that
we
have
and
we're
trying
to
get
out
of
with
the
cloud
providers
in
the
main
repo
right
now.
A
I
mean
right
now
we
have
a
couple
of
top
level
directories
in
in
cube,
deployed
Justin's
got
one
for
the
image.
Filter,
I
deleted
a
couple
other
ones,
and
then
we
have
cluster
API.
So
we
could
sort
of
pull
all
the
GCE
code
out
of
API
and
crea
directory.
It's
like
GCE
machine
controller
or
something
and
then
have
just
clan,
have
cluster
API
you've
entered
into
that
sort
of
other
directory,
even
though
it's
in
the
same
repository
to
get
sort
of
a
clean
separation
there.
A
A
E: I've got another question. The current resources in the Cluster API are cluster-scoped. For one of these other projects, like Kubicorn or kops, is there any thought of making them non-cluster-scoped so that they can be used directly? Otherwise there would have to be some way to translate, right?
A: So right now we have sort of two resources. There's the machine, which I don't know makes sense to put in a namespace; nodes themselves aren't in namespaces. And then the cluster resource really only has about three properties in it, right? It's got the network configuration for your cluster and maybe one or two other small things. I guess I'm wondering what the advantage would be of being able to put that inside a namespace.
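For reference, a rough sketch of the two resources as described (not the actual type definitions): the Cluster object mostly carries network configuration plus a couple of small things, and each Machine describes one node-to-be.

```go
// Hypothetical field names for illustration only.
package sketch

type ClusterSpec struct {
	// Network configuration for the cluster.
	ServiceCIDR string
	PodCIDR     string
	DNSDomain   string
}

type MachineSpec struct {
	Roles          []string // e.g. "master", "node"
	KubeletVersion string
	ProviderConfig string // opaque, provider-specific blob
}
```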
E: If these projects are going to inherit the Cluster API in some way, they would need to. I don't know; either we say it's not compatible, or we say that once kops, say, creates a cluster, it will create the necessary resources for the Cluster API to manage that created cluster, but kops itself would not necessarily use the Cluster API when defining it.
B: I guess the question is what you want to do with this, right? If the use case is "I want to update all my clusters at the same time," something like that, maybe it will be that the multi-cluster registry will have a mode in kubectl where you can apply operations spanning clusters, and things like that. So what is the sort of thing you're thinking of doing here?
B
C
The
intention
or
existing
coursers,
if
you
have
just
the
single
cluster
scope
for
a
cluster
API,
if
the
cluster
is
standing,
he
would
be
able
to
switch
context.
But
just
with
cook
kettle,
but
yeah
I
mean
I,
don't
think
it's
like
it's
not
that
there
isn't
a
plan,
but
I
think
that
a
previous
conversation
and
decided
that
the
scope
for
cluster
API
would
be
smaller
and
that
that
would
be
a
responsibility
that
might
be
better
suited
for
the
work
that
sig
multi
clusters
is
putting
together
on
this.
A
Yeah
I
think
Justin's.
Point
is
key,
though,
like
what
is
what
is
the
use
case,
we're
trying
to
solve
and
is
putting
multiple
cluster
definitions
inside
of
a
single
cluster
via
the
cluster
API,
the
right
way
to
solve
that
or
since
we
have
the
notion
of
a
cluster
registry
being
defined,
which
is
basically
the
same
thing.
Is
that
a
better
place
to
put
it
and
I?
Don't
know
that
where
the
path
or
one
down
is
the
correct
path,
but
we
we
had
to
pick
until
we
picked
that
one
for
now.
C
Okay,
there
is
there
considerations
that
we
need
to
make
for
our
cluster
object.
That
would
maybe
facilitate
a
consumer
of
the
API
that
we
can
kind
of
talk
about
right
now,
like
things
like
names
and
ID's,
or
extra
properties
that
you
might
be
able
to
filter
on
I'm,
not
sure
I,
quite
understand
what
you're
asking
just
I
mean
like
with
the
use
case
that,
like
something's
gonna,
need
to
aggregate
these
objects.
You
know
we
can
we
think
of
anything
that
we
might
be
missing.
That
would
make
that
an
easier
task.
Yeah.
A: So we get a name as part of the Kubernetes metadata for a cluster, but there's no guarantee those are unique. I know SIG Service Catalog has the need for a UID, and it's something we've talked about for a while: how do you assign a UID to a cluster? I think that is a field that is needed by the cluster registry, and we may want to figure out how to work that into the Cluster API as well.
B: There is a UID in the metadata, so I think it will in practice be unique. I think the one thing from the experience of kops is that it's nice to have the DNS name, or whatever it is, to know how to reach it. But I don't know that the IP address of the API server, the API endpoint, necessarily belongs in the cluster object, because in order to get the cluster object you must have reached it anyway.
B: Although, even when you're inside the cluster you might not know the external ingress point. But I don't know that it necessarily belongs in the Cluster API, and we can certainly add it later if we need it; I can't think why we'd necessarily require it for anything we're doing here, although I often come across places where I would want it.
A: I think having a UID is pretty important. We found pretty early on in GKE that you want a unique way to address a cluster, especially since a user can reuse the same name over and over, and you want to know that this is a different cluster than the one created last week with the same name. So you should assign UIDs to each cluster so that we know, when we're searching through logs or something, which cluster we're talking about, and I think that holds even within the Cluster API.
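A tiny sketch of the name-versus-UID point; the names and the uuid library call are just illustrative. A name alone cannot distinguish a cluster that was deleted and recreated under the same name, so tooling keys off a UID assigned at creation time.

```go
// Hypothetical sketch: identify clusters by (name, UID) so log searches and
// tooling can tell apart two clusters that reused the same name.
package sketch

import "github.com/google/uuid"

type ClusterRef struct {
	Name string
	UID  string
}

func newClusterRef(name string) ClusterRef {
	return ClusterRef{Name: name, UID: uuid.New().String()}
}
```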
A: OK, I want to move forward since we only have 15 minutes left. Sebastian, since you're on the call, I added an agenda item to get an update from you about MachineSets. Last week you said you guys would try to send a PR for MachineSets; I know that may slip, but I was curious what the status of that was.
F
We
have
some
first
implementation
or
we're
working
on.
It's
currently
not
ready
to
push
I.
Guess
we
get
it
next
week.
Probably
we
don't
have
the
Colo
know
next
week
is
Christmas.
So
after
the
holidays
and
yeah,
because
I
also
forgot
Henrik
is
already
on
vacation.
So
en
time
this
week,
so
I
guess
after
the
vacation
we
can
do
the
first
commit.
B: I think our initial plan is to just do the minimal bridge, as it were. So we'll look at what kube-deploy has; we have a lot of similar code already, and it will probably be harder to adopt in kops, but if anyone wants to help, PRs are always welcome, and otherwise, you know, we'll see how it goes.
A
Are
you
guys
going
to
sort
of
offer
that
as
an
additional
way
to
define
machines
in
addition
to
what
you
have
currently,
for
instance,
groups
and
translate
between
the
two,
knowing
that
sort
of
like
your
internal
representation
can
remain
stable
and
you
can
just
change
the
mapping
if
we
break
our
API
or
do
you
have
some
some
other
way
you're
thinking
about
doing
it?
Will.
A: Yeah, I certainly don't want to break kops users, but it would be great to start getting more mileage on the API to prove it out and see if we need to change it. One of the problems in Kubernetes in general is that people define alpha APIs, they're disabled by default, a lot of people don't turn them on, and they don't get a lot of mileage; then they graduate to beta and people find problems with them, but at that point they're beta and we are promising compatibility.
A: And I think Kubicorn has the advantage that they don't have a stable release yet, so they can just break all of their code at this point. Kris was very excited about that; she's like, I don't have to promise users any support. So I think it'll be easier for them to rejigger their code to adopt it, probably a little bit faster than you guys.