From YouTube: Kubernetes SIG Cluster Lifecycle 20171101 - Cluster API
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.1xxkmbrspdw0
Summary:
- Reminder about presenting the cluster API to sig-cluster-ops tomorrow
- Discussion about the machines API
- Discussion about the control plane API
A: Hello, and welcome to the November 1st, 2017 edition of the Cluster API breakout meeting for SIG Cluster Lifecycle. We're going to start today with just a quick PSA: tomorrow at 1 p.m. Pacific I'll be presenting the current status and direction of the Cluster API to sig-cluster-ops. Rob, the lead of sig-cluster-ops, has put out a couple of announcements about this in the community. Last Thursday I know he sent out an email, at least to Rob's group and maybe to the wider community's dev group, to help sort of collect operators, as we're looking at that as a good forum to get feedback in terms of what sort of requirements we should be looking for for the Cluster API.

I think I mentioned last week that we have a set of requirements that we, as Googlers, have garnered from our experience operating Kubernetes clusters for GKE, but we feel like there's obviously stuff that we're missing from the way other people run their clusters, and we're hoping to collect as many use cases as we can from the community to help drive this effort forward.

So, just a quick announcement there. Next I'm going to toss it over to Jacob to give a quick update on where we are on the machines API. Last week we had a lively discussion about the current PR for the API; I think we've made some progress since then, so Jacob can give us an update on where we are.

B: Sure. So the PR is still open; it's received more feedback that I've addressed, but nothing's actually changed in the API so far. The types have been merged into the kube-deploy code base so that we can start prototyping on them. They've been integrated with API machinery, and we've generated deep-copy functions and all that, so we actually have prototype machine controllers that work against it.

We actually have one that's pretty fully functioning for GCE right now, and we have a GCE installer, which hasn't really been bridged all the way to meet the machines controller, but we're bringing that together now. So we're calling it v1alpha1; it's not actually set in stone, even the alpha1. We still welcome more feedback on the PR, and there are actually a few issues that have come up on the PR that I was hoping we could talk through.

If there are interested parties here, that's kind of the status right now. I don't think anyone was prepared to do a demo like we did last week, so I wasn't really prepared, but we have a more functional GCE machines controller. We're using it to figure out any edge cases that we missed, or anything that wasn't well supported in the API, and so far we've figured out kind of the nuances of what the expectations of different components would be, but nothing's actually broken the API.

So maybe one of the things that we could talk about this meeting, or our next meeting, is that internally, the Googlers that have been working on the GCE prototype have kind of figured out: here are the responsibilities of the installer; it should only do these things, and by the time the installer finishes, all of these things should be met.

And whatever controller handles the machines has these responsibilities; it's sort of who's in charge of what, kind of the high-level architecture. But everything with the API as stated has been working so far. Although we haven't really talked about the cluster-level, control plane API (we've mainly talked about the machines API), one of the things that's come up in the PR, and actually came up on an earlier draft of the doc, was how to differentiate different kinds of machines. Originally, way back when, my very, very first draft had just a single string.

This quarter we should be able to represent these other configurations. Some of the things I was rattling off: what if you want a master that is unschedulable, or what if you want to differentiate a master that should have, you know, worker workloads on it? Those sorts of things. So I'll stop talking.

D: The other thing that we've run into a lot is that, depending on which distro you're deploying with, you might have to partition your drives in different ways so you can use a different Docker or volume back-end. It can get really, really messy quickly, so it would help to have some kind of abstraction in there, where you can tell it at a really high level "I want this kind of thing", and then have some kind of provisioners that us operators can tweak as needed.

B: Sure, so the way that we've gone after this so far is: at the top level of the machines API, we have common fields that we think you would want to be able to actuate over time, independent of the exact cluster configuration and environment and cloud, and whether you're on bare metal or anything like that. So the things that are common in the spec are: what is the version of the kubelet (really coarse-grained), and what is your container runtime; things that we think, regardless of the environment you're in, you might want to write tooling against, to say "oh, I want to upgrade this node". But we also have this completely opaque, provider-specific blob that you're allowed to put whatever you want in, per provider. In this case we're prototyping for GCE, so we have this other API type that is just for GCE: a GCE provider config that has all the information that we need to create a GCE node, and anything else we want to shove in there. That lets us fiddle with all of the little bits to get things just perfect, but the common layer is completely agnostic to what that blob means. I still get the level of flexibility that we want out of our specific installation, but then we can still write tooling on top of the Cluster API that is completely cloud-agnostic and can still do things like upgrade a node, upgrade the control plane, those sorts of things. Does that kind of address your concerns?

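For illustration, here is a rough Go sketch of the shape Jacob describes: coarse-grained common fields plus an opaque provider-specific blob. The field names are assumptions for this sketch, not the exact types in the kube-deploy PR:

```go
// Sketch only: the real types live in the kube-deploy PR and may differ.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// Machine declares a single node the cluster should have.
type Machine struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MachineSpec `json:"spec"`
}

// MachineSpec pairs cloud-agnostic fields that generic tooling can actuate
// on with an opaque blob that only the matching provider controller reads.
type MachineSpec struct {
	// Common, coarse-grained fields usable regardless of provider.
	KubeletVersion   string `json:"kubeletVersion"`   // e.g. "1.8.2"
	ContainerRuntime string `json:"containerRuntime"` // e.g. "docker-1.12"

	// ProviderConfig holds a serialized provider-specific type (with its
	// own kind), such as a GCE provider config.
	ProviderConfig runtime.RawExtension `json:"providerConfig,omitempty"`
}
```
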
D: I think that mostly does. Would there be kind of multi-scheduler support, like there is in pods? So, like, you could potentially run a GCE provisioner and a bare-metal provisioner in the same Kubernetes cluster, and you tag your node types as "I want this to be provisioned by this kind of tooling"?

B: Yes. So maybe this is wrong, but instead of calling out a specific "provisioner equals this" field, we've relied on the fact that the information you put into this blob, the provider config, should be a serialized API type with a proper kind. So if you set the kind to be GCENodeConfig, or something like that, then only the GCE machines controller will pay attention to those machines, and then you'd also have your config be a BareMetalConfig or something like that.

You can have two controllers that are just watching two different kinds, and then that kind of forbids you from saying "I want to use the same type and have it actuated on by two different controllers". So maybe we do really call out the provisioner explicitly, but that's how we've been thinking about it so far: every machine that's housed in the cluster should be operated on by that cluster, but you can run different specialized controllers to handle different kinds.

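A minimal sketch of that kind-based dispatch, assuming hypothetical type and field names (GCENodeConfig, zone, machineType); each controller peeks only at the blob's header before deciding whether it owns the machine:

```go
// Minimal sketch (assumed names throughout) of kind-based dispatch.
package main

import (
	"encoding/json"
	"fmt"
)

// TypeMeta mirrors the kind/apiVersion header carried by serialized API types.
type TypeMeta struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
}

// GCENodeConfig is a hypothetical provider-specific config type.
type GCENodeConfig struct {
	TypeMeta
	Zone        string `json:"zone"`
	MachineType string `json:"machineType"`
}

// reconcile checks the blob's kind; a bare-metal controller would do the
// same with its own kind, so two controllers never fight over one machine.
func reconcile(providerConfig []byte) error {
	var tm TypeMeta
	if err := json.Unmarshal(providerConfig, &tm); err != nil {
		return err
	}
	if tm.Kind != "GCENodeConfig" {
		return nil // not ours; ignore this machine entirely
	}
	var cfg GCENodeConfig
	if err := json.Unmarshal(providerConfig, &cfg); err != nil {
		return err
	}
	fmt.Printf("creating GCE instance: zone=%s type=%s\n", cfg.Zone, cfg.MachineType)
	return nil
}

func main() {
	blob := []byte(`{"kind":"GCENodeConfig","apiVersion":"gceproviderconfig/v1alpha1","zone":"us-central1-f","machineType":"n1-standard-2"}`)
	if err := reconcile(blob); err != nil {
		panic(err)
	}
}
```
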
B: Um, completely agree. And in fact, in some of the strawman examples I was thinking of, even within GCE you might want a generic node controller for generic workloads, and then, if you're doing machine learning, a completely different, highly specialized controller that understands GPU nodes and how to scale those, yeah.

A: Do you have a use case where you actually want a cluster to have two different controllers because you have bare-metal nodes and GCP nodes in the same cluster, or are you just worried about a situation where you might need a different controller for different types of things, even when they're sort of co-located in space?

E: I don't know that we have to hard-code it. What I'm thinking is that the Machine API wouldn't really care; the machine controller would obviously understand those and would have the knowledge of whether it's bare metal or GCE or GKE. In other words, that's where acting on them happens, but the autoscaler wouldn't necessarily need to know either; it would just see, in the brave new world where we separate out etcd...

A: The autoscaler needs to know that if I provision a VM, it exposes these labels and taints. If the master specifies a role as a label but doesn't also specify the taints, that's not going to be enough for the autoscaler to understand; it might create a new one. The controller knows that that label implies all these other things that, you know, will also be added to the node.

D: So you kind of need one level of indirection, maybe, where the autoscaler asks for a node of type, you know, GPU node, and the provisioner takes that and says: oh, he wants a GPU node; it needs to have these taints and these tolerations and, you know, these attributes and whatever. And then it goes to the cloud that it's going to need to be on and launches it.

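A sketch of the indirection being discussed: the autoscaler asks for a named node type, and the provisioner expands it into the full set of labels and taints. All names here are illustrative assumptions, not a real autoscaler or provisioner API:

```go
// Sketch of a provisioner catalog that expands coarse node types.
package main

import "fmt"

// nodeProfile is everything the provisioner knows a node type implies.
type nodeProfile struct {
	Labels []string
	Taints []string
}

// catalog maps coarse node types to their full profiles.
var catalog = map[string]nodeProfile{
	"generic": {Labels: []string{"role=worker"}},
	"gpu": {
		Labels: []string{"accelerator=nvidia-tesla-k80"},
		Taints: []string{"nvidia.com/gpu=present:NoSchedule"},
	},
}

// provision is all the autoscaler needs to call: a type name, no details.
func provision(nodeType string) error {
	p, ok := catalog[nodeType]
	if !ok {
		return fmt.Errorf("unknown node type %q", nodeType)
	}
	fmt.Printf("launching %q node: labels=%v taints=%v\n", nodeType, p.Labels, p.Taints)
	return nil
}

func main() {
	if err := provision("gpu"); err != nil {
		panic(err)
	}
}
```
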
A: That is true; we do not want two answers for that. But in terms of things like the indirection between labels and taints, and how the autoscaler automatically knows that those will show up on a node: maybe that's a more complex set of objects than we want to express just for a user. We've heard feedback that the more objects we create, the harder it is for you to use the system.

So in the initial proposal we had templates and classes and machines, and these relationships between them, and that was sort of designed for the autoscaler. For a person, that becomes very complicated, because the person now needs to understand how to construct three different objects with all of the right cross-references to actually tell us to provision a node, whereas if a person just needs to say "give me a machine; it should look like X", then that's much easier for a person to do, but maybe isn't sufficient for the autoscaler. And so for the autoscaler you might need to introduce a couple of extra things to make the autoscaler work, but those aren't actually necessary for the system to work for a person.

D: Yeah. For public clouds, the API is probably so static that the user doesn't necessarily ever have to template that stuff out. So they could just, you know, helm install a provisioner or something, and it would just work, whereas bare-metal folks are almost always going to have to tweak those things, I think.

A: Yeah, but if you can create a template where you sort of tweak all these things and say "here's the template of what my bare-metal machines should look like", and you have a corresponding controller that can take that template and provision bare-metal machines... The question was: is that enough, or do you need more levels of indirection, I think.

D: Okay, so here's a question. You know, Kubernetes doesn't exactly have a concept of multi-tenancy yet, but it may. Is this something that non-cluster-admin users are going to want to do? And if so, would an indirection like a flavor, where admins create flavors and users consume them, be helpful?

A: Possibly.

D: And/or that gives Kubernetes operators the chance of tagging things onto the flavor that users then don't have to bother with anymore. So you can say "I want a big compute versus a small compute", and then users just choose from those things and don't have to worry about, you know, how many gigs of RAM it has, and so on.

B: In an earlier draft of the proposal, as Robert alluded to, there were these notions of machine classes and machine templates, and I still think that in the future there might be a need for them, to do what you said: to say "I just want a machine", but instead of providing this enormous blob that fully specifies exactly how to create that machine with all of my specific bits, I just want to say "class equals large" or "class equals GPU", something like that. Yeah.

B: So if we just want to at least be able to specify that, we could start with that base and then later add on the concept of, like, a registry of templates, which we were calling a machine class, that one could then take the provider config from. You could have as many of those objects as you want, with different names, and then you just specify a name on the machine. It would still be backwards compatible, and you could still maybe override certain fields from it.

So we would just merge: here's what I got from the template, and here's what you additionally specified in the individual machine, and we just kind of put it on top. We can start with just the simple machine, get that API right, and then always add more complexity in the future.

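A hedged sketch of the merge semantics being described: class defaults with per-machine overrides layered on top. The names (machineClass, ClassName) and the single overridable field are illustrative assumptions, not the proposed API:

```go
// Sketch of "machine class as template registry" with field overrides.
package main

import "fmt"

// machineClass is a named, reusable machine template.
type machineClass struct {
	Name           string
	KubeletVersion string
	MachineType    string
}

// machineSpec references a class by name and may override individual fields.
type machineSpec struct {
	ClassName      string
	KubeletVersion string // optional override; empty means "use the class value"
}

// resolve merges the class defaults with per-machine overrides on top.
func resolve(classes map[string]machineClass, spec machineSpec) (machineClass, error) {
	cls, ok := classes[spec.ClassName]
	if !ok {
		return machineClass{}, fmt.Errorf("unknown machine class %q", spec.ClassName)
	}
	if spec.KubeletVersion != "" {
		cls.KubeletVersion = spec.KubeletVersion
	}
	return cls, nil
}

func main() {
	classes := map[string]machineClass{
		"gpu": {Name: "gpu", KubeletVersion: "1.7.0", MachineType: "n1-standard-8"},
	}
	resolved, err := resolve(classes, machineSpec{ClassName: "gpu", KubeletVersion: "1.8.0"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", resolved) // the override wins: KubeletVersion 1.8.0
}
```
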
B: The takeaway shouldn't be that we're never going to go in that direction; I actually think that was a really strong direction to go in. It's just that we really want to start as absolutely minimalistic as possible, while still delivering some value and showing the vision, and then hopefully incrementally execute on that vision.

A: Your analogy to volumes is also apt, because we didn't start with persistent volume claims and dynamic volume provisioning. We started with the simpler "I want to mount this volume", and that was it, right, and then graduated towards the more complex scenarios. Whereas if we started our API by looking at the more complex scenarios, saying "here's a cool way to do it", I think that extra complexity from the start is maybe not necessary or warranted.

B: I want to throw out a maybe controversial idea. So I was brainstorming with Kris, who's on the call, last week; he's been really focused on the control-plane-level definition of the cluster API, whereas I've been looking at the machines level. And one of the things that he brought up was: what if you want to enable tooling to be written that allows you to operate a cluster?

B: If you completely separate out "here's the definition of my control plane and the versions of the components that I'm running" from the machines, where some of those machines are masters, then flipping one bit on the control plane doesn't give you a lot of control over, like, a slow rollout of the masters. So another potential way of representing masters is to have a separate master machine, or something like that, that looks almost identical to the normal machines, except that it allows for additional pointers and configuration, either an object reference or something, to say "here is the version of the control plane configuration that I want". Then someone writing tooling on top of this could actually say: upgrade just this master; upgrade the API server on that master, wait for it to be successful, then go to the next master and upgrade that API server. Whereas right now, if they're completely separated, you just upgrade the entire control plane all at once.

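A minimal sketch of the optional pointer being floated here, assuming illustrative names (Roles, ControlPlaneRef) that are not part of the actual proposal:

```go
// Sketch of a master's Machine spec carrying an optional reference to a
// versioned control plane configuration, so tooling can roll masters one
// at a time. All names are illustrative assumptions.
package v1alpha1

// MachineSpec fragment: Roles marks masters; ControlPlaneRef is the optional
// pointer under discussion (absent on plain worker machines).
type MachineSpec struct {
	Roles           []string         `json:"roles,omitempty"` // e.g. ["Master"]
	KubeletVersion  string           `json:"kubeletVersion"`
	ControlPlaneRef *ControlPlaneRef `json:"controlPlaneRef,omitempty"`
}

// ControlPlaneRef names a cluster-scoped control plane config and the version
// of it this particular master should run, enabling per-master rollouts.
type ControlPlaneRef struct {
	Name    string `json:"name"`    // which control plane object
	Version string `json:"version"` // e.g. "1.8.0"; bump one master at a time
}
```
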
D: Let me throw in the other crazy wrinkle. Looking at what kubeadm is doing, and bootkube and friends: they're using Kubernetes to manage Kubernetes control planes just like any other workload. Is there really much need to have an actual controller node type versus a non-controller node type when you're fully self-hosting everything?

B: Once you've gotten past the bootstrap stage, I don't think so, although you'd probably still have different taints and labels for the master. But for the bootstrapping, it's somewhat critical to say: I need to bring up at least one machine where I manually install the control plane, or whatever the startup script knows to do ("kubeadm init" versus "kubeadm join", or something like that). For bootstrapping, it's at least critical to know: do I need to install the control plane, or am I just joining an existing control plane?

E: I just want to ask: you're talking about basically versioning the version of other objects that we're referring to. I guess, if that's the case, I don't understand why this is specific to the master nodes; to me it applies to every node, and really to all of Kubernetes, right?

B: Sure. So right now the Machine object has in it the configuration for all the things that we think are specific to that machine, like the version of the kubelet, the container runtime to use, and the version of that, but it has no concept currently of the control plane. And we're saying there's this separate object, the cluster object, which has all of the configuration of all the control plane components that we care to represent; but right now there's actually no link between the two. They're just separate.

B: What I'm putting forward is: do we only need to support tooling that upgrades the entire control plane all at once, or do we want tooling built on top of the cluster API to have the fine-grained control of creating individual components on individual master hosts? And how we achieve that technically is, yeah, open.

C: I think the bootkube and kubeadm people have the ability to be very opinionated about how you deploy, but say your provider did not want to do self-hosted; they wanted to do another installation or hosting method. I think we should be able to allow a provider to do that if they want.

A: So in this self-hosted environment, we use Kubernetes to drive upgrades of the Kubernetes control plane, yes, but nobody has yet used that to drive upgrades of the kubelets, although the CoreOS folks are working towards that. And also there's the underlying operating system. So yes, even if you're using Kubernetes self-hosting to drive the control plane and the kubelet, somehow, somewhere, you have to upgrade CoreOS underneath those, or, you know, Ubuntu underneath those, yeah.

B: I don't really need this API for that, but I would still like to programmatically upgrade my control plane and all the configuration there. So maybe, if we do keep them completely separate, then, as Kris mentioned, maybe an optional pointer, if it makes sense. Since this might be an appropriate segue, and we're already talking about it: let's dive into the control plane's level of configuration, which I think was next on the agenda anyway. Yeah.

C: If you only have one control plane, one master, in the cluster... A lot of these needed to be split out into per-master parameters, and we were already discussing having a master role on machines, and the idea is that I need to rework this to move some of those parameters, in some way, to the machines API. I was hoping to do it by adding an optional field for master configuration to the machine.

B: Yeah, that's kind of in line with, I think it's called dynamic kubelet config, where the node objects have a pointer to a config map. You could just have one global config map that all of the nodes are pointing to, and as you introduce another one, slowly roll each node over to point at the new config map, and now everyone's pointing at the new one. Or you can maintain as many different ones as you find useful as an operator.

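A tiny sketch of that rolling-pointer pattern, with assumed config names ("kubelet-v1", "kubelet-v2"); real tooling would flip a reference on each node object and verify health between steps:

```go
// Sketch of rolling a shared config pointer across nodes, one at a time.
package main

import "fmt"

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}

	// Each node points at a named config; everyone starts on the old one.
	configRef := map[string]string{}
	for _, n := range nodes {
		configRef[n] = "kubelet-v1"
	}

	// Introduce the new config, then flip each node individually; real
	// tooling would verify node health between flips before continuing.
	for _, n := range nodes {
		configRef[n] = "kubelet-v2"
		fmt.Printf("%s now points at kubelet-v2\n", n)
	}
}
```
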
A: I guess the other thing to think about with masters is (I was chatting about this) that the official way you're supposed to upgrade HA masters is not knocking down one machine at a time. It's actually upgrading all the API servers first, across your machines, then upgrading all the controller managers across your machines, right? So you're supposed to upgrade each of your control plane pieces in turn, not upgrade each machine to a new version. This is further complicated by the node OS, right; somewhere in there we have to do the OS upgrade too. So at some point you do have to knock the machine over, but you should maybe be upgrading the apps at a separate time from when you knock the machine over, which complicates things too, because you have to qualify your apps on two different base images, right?

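A sketch of the component-major ordering just described: upgrade one control plane component across all masters before moving to the next, rather than fully upgrading one machine at a time. Master names and the component list are illustrative:

```go
// Sketch of component-major HA upgrade ordering (the machine-major nesting,
// with the loops swapped, is the ordering the speaker says to avoid).
package main

import "fmt"

func main() {
	masters := []string{"master-0", "master-1", "master-2"}
	components := []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"}

	// Outer loop: component; inner loop: every master.
	for _, component := range components {
		for _, m := range masters {
			fmt.Printf("upgrade %s on %s, then wait for it to be healthy\n", component, m)
		}
	}
}
```
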
B: Not necessarily. Like, as you said, if we have just "the right way" of doing it, and any deviation from that is the wrong way of doing it, then maybe we don't want to expose that control where you can accidentally shoot yourself in the foot. So in thinking about all the levels of this API, I've been thinking about what amount of power we want to give to the tooling that sits above it, but also: what is a really good user experience? And I think a really powerful user experience is: I look at my control plane, which is 1.7, and I say I want you to be 1.8, and the controllers that actuate underneath that just do it flawlessly; they know the exact, perfect way to upgrade everything. That's really powerful from a UX perspective. So the question is: do we then want the power to break that flow for things above the control plane?

D: Yeah, I mean, what you're talking about is, in a way, almost what the operator concept is. Like, if you look at Rook, they have an option in there, when you spawn your cluster, to say "I want a Jewel Ceph cluster", and then you can tweak that document and say "I want Luminous now", and it will look at the cluster and what you asked for, and it will do the rolling upgrade as appropriate for those particular versions.

D: Even the procedure might be different depending on which version you're on and what version you're going to, but they baked that logic into the operator, outside of the core: they're driving the Kubernetes API to update deployments, even tweaking config maps and doing, you know, whatever is appropriate to run the workflow. But it's a separate controller from, you know, how do you land a pod on a node?

And we've already talked about two different ways of doing this: the kubeadm self-hosted way of launching your control plane and doing upgrades through tweaking the image in your deployments, versus having node provisioners contain the logic to decide how to provision it, you know, if it's doing it non-containerized, or with Puppet or Chef or whatever.

E: I think I made a joke about the way it evolved: we find problems, and that to me suggests that we should bake that logic into code rather than expect someone to make the correct sequence of calls. But there are two places we can bake it into code. One is that the kops provisioner, the GKE provisioner, and all the machine or node provisioners would implement it themselves, and they'll all get it right. And the other one is...

E: Yeah, and definitely, unless we find a magic project which we can all implement, which has yet to happen (that we've all collaborated on one project), yes, then we have to accept that however each project breaks it up, there will be an implementation per project. So do we want each one of them to implement the logic themselves, right...

A: ...or somehow have that upgrade logic be shared, yeah. Yeah, I mean, ideally there's a shared upgrade flow, which is also what gets tested, and what we all throw our weight behind in saying: this is how you safely upgrade your Kubernetes clusters. I think one problem we have today is that everybody implements their own upgrades, and we test some of them, some of them on different versions, and then we tell people they're supported, right, yeah, which is dangerous, because the upgrades don't always work in every different ordering and configuration, yeah.

A: Yeah, I was reading the post-mortem from, I think it was Monzo, which was somewhat related to upgrade ordering, and so, you know, if we had a canonical way to do upgrades, then maybe they wouldn't have gotten into the case where everything broke. They were running some extra custom software, so maybe that would still have broken, but at least the rest of their system would still have kept moving.

A: We've also talked a lot with kubeadm about having a built-in versus external etcd. I think it's common in bare-metal scenarios to have etcd running outside of the cluster, as a separate entity that you manage, and maybe in more cloud scenarios, or in the sort of self-hosted scenarios, you bake etcd into your cluster, right: you run etcd on the masters versus pointing to an external one. And so there's also some drift there in terms of a framework.

C: This blob, for me... I'm punting on CIDRs for now, and just delegating to the provider to assign the CIDRs for the machines that need them and distribute them how you need, but none of that is being stored in the API as of now. That may seem like a reasonable approach, but do we want to let people specify CIDR assignment?

A: Right, we have six minutes left, and the conversation has been dominated by a small number of people, so I'm wondering if there are any other folks on the line that have opinions to express about, you know, these two topics, the machine API or the control plane API, or other requirements that they think need to be included.

A: Please don't be shy; you're here for a reason. Some people might just be lurking and listening, but we are here to get, you know, feedback from the broadest set of people that we can, even if they tend to be shy. So please speak up, or type something in chat or into the meeting notes. If you don't want to say something, that's fine too.

A: Okay, so nobody's speaking up except Justin, who typed something in chat asking what's next. I think on both of these fronts we're sort of continuing to push forward. As Jacob mentioned earlier, we've started writing prototype code against the machines API to actually try and get stuff stood up, and as Kris's PRs and types start to get merged, we'll start writing code against the control plane to start standing up a master.

A lot of this is really not to try and write production-grade code yet; it's really to try and write code to prove out that the API makes sense, is usable, and is demo-able, right? So that's sort of the initial goal of what's next. Obviously we're going to try to keep collecting feedback; I hope to get some feedback tomorrow in terms of requirements and use cases from sig-cluster-ops, and we'll sync up again in a week and hopefully have maybe some demo-able stuff.

A: All right, I'm hearing a lot of silence, so I think we're going to call it at this point. I thank everyone for coming, and we will see you again in a week, and, like I said, hopefully we'll have something much closer to a demo at that point. You can follow along in, or star, the kube-deploy repo to see the progress of the code and PRs in the meantime, or reach out to us on Slack. Bye, everyone.