From YouTube: Kubernetes SIG Cluster Lifecycle 20171213 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.lsmwz6edy5vg
Highlights:
- Moving the meeting time an hour earlier starting next week!
- Feedback on the cluster api from Kubecon
- Discussion about whether we should use cloud provider machine groups (ASG, MIG) for MachineSets
- API aggregation vs. CRDs
A: Hello and welcome to the December 13th edition of the Cluster API breakout meeting for SIG Cluster Lifecycle. Today is the first meeting we've had after KubeCon, where there was lots of interest in the cluster API, so I'm hoping we have some new folks on the line. If you are new, please add your name to the agenda doc (there's a link in the chat), along with any topics that you'd like to discuss. While you're doing so, I think Jessica had some thoughts about the status of providers and things? Maybe we can use that to kick off today's discussion.

A: We talked a little bit with the Loodse folks, actually, about the sort of proliferation of installers and what we might want to do to reduce that. You know, I think there's a tension between having sort of the one true way to install Kubernetes and then allowing everybody to be opinionated. I know that when I talked to Justin Santa Barbara, he said that he would prefer if there was sort of one installer, right?

A: If there was one community-supported installer, and we could make it flexible enough that it would support these cases, that would be sort of easier for us as a SIG and a community to support and get behind. I think we've also seen over the last couple of years that I'm not sure we can agree on one mechanism for installation, and people have just sort of built lots and lots of different installers.

A: It's possible that's just because there wasn't one sort of blessed installer, and it's also possible because, you know, if someone really likes Ansible, or really likes Puppet, or really likes Chef, they're going to follow Kelsey's doc, or look through the getting-started-from-scratch docs, and figure out how to install the cluster.

A: So I think we should, as a community, sort of aim to support a small number of installers and try to consolidate our support behind a couple that work well. I think it's a little bit TBD what those might be and how soon we could get there. I think in the meantime there are still going to be a large number of installers, and we're hoping to move them towards using common tools like kubeadm and the cluster API.

A: Rodrigo, we got some feedback on the cluster API at KubeCon. One sort of minor thing that was mentioned by the networking team is that our description of cluster address ranges, the cluster network stuff, is not flexible enough: in the future, SIG Network is planning on allowing multiple non-contiguous CIDR ranges for things like pods and services, so we need to make sure we restructure the network config to take that into account. I think I saw you on the line.

A: So there were a couple of things that came out of the conversation we had with the Loodse folks. One is that, from their point of view, they would like to use the machines API to manage nodes, but not masters, which in some ways is similar to the GKE use case, where nodes are visible to users and under users' control, and masters are hidden. I think that's perfectly achievable with the cluster API as it's written today.

A: If you try to create a machine definition and mark it with a role of master, then the machine controller that's running in that particular cluster would just reject that with an error message saying that's not okay. The other interesting sort of side conversation we had was about multiple controllers. You could imagine a cluster where you have one controller that is just a generic Terraform controller that's able to spin up Terraform machines, and another controller that's maybe a specific AWS controller that's able to spin up machines using an AWS-specific definition, and you put those in the same cluster. Using a similar mechanism to the multiple-scheduler support in Kubernetes, you'd use either annotations or the explicit provider config fields to determine which machine controller picks up the machine you're defining, and sort of mix and match controllers.

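To make those two points concrete, here is a minimal Go sketch (hypothetical type and field names, not the actual cluster API as checked in) of a provider-specific machine controller that rejects master-role machines and ignores Machine objects whose provider config belongs to some other controller:

```go
package main

import (
	"fmt"
	"strings"
)

// Machine is a hypothetical, stripped-down stand-in for the real API type.
type Machine struct {
	Name           string
	Roles          []string // e.g. "node", "master"
	ProviderConfig string   // opaque, provider-specific blob
}

// reconcile sketches what a single AWS-specific controller might do per Machine.
func reconcile(m Machine) error {
	// Multiple controllers can coexist in one cluster; each only picks up
	// machines whose provider config (or an annotation) it recognizes,
	// much like the multiple-scheduler support in Kubernetes.
	if !strings.Contains(m.ProviderConfig, `"provider":"aws"`) {
		return nil // not ours; e.g. a generic Terraform controller owns it
	}
	// A cluster that hides its masters can simply refuse master-role machines.
	for _, role := range m.Roles {
		if role == "master" {
			return fmt.Errorf("machine %q: master machines are not managed in this cluster", m.Name)
		}
	}
	fmt.Printf("creating AWS instance for machine %q\n", m.Name)
	return nil
}

func main() {
	_ = reconcile(Machine{Name: "node-1", Roles: []string{"node"}, ProviderConfig: `{"provider":"aws"}`})
	if err := reconcile(Machine{Name: "master-1", Roles: []string{"master"}, ProviderConfig: `{"provider":"aws"}`}); err != nil {
		fmt.Println(err)
	}
}
```
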
A: We also had a couple of conversations with people about API aggregation and what that might look like architecturally. I know internally at Google, when we've talked about the cluster API, there have been some concerns about storing cluster state inside the cluster. It gives you a failure scenario where, if the cluster itself is unreachable, you can't figure out what the desired state of the cluster should be in order to try and fix it, which is reasonable, right?

E: I have not hit any roadblocks with it yet. I think, like, Service Catalog is doing it and it's pretty solid for them, and I've borrowed a lot of their knowledge and experience with it to bring over to the project we're working on.

A: Okay, interesting, yeah, because I've been told that you can use an aggregated API server to point to an API server outside the cluster, and we could use that for resiliency, where, if the API server hosting your jobs goes down because someone hosed your cluster by creating a job that's crash-looping, you'd still be able to know what the cluster should look like. Yeah.

A: All right. One of the things I talked about with Chris Love at KubeCon was machine sets. I think what we've presented so far has been the machine definition, and I believe we've presented it as sort of the roadmap, expecting to build machine sets and machine deployments on top of machines.

E: I'm here from Red Hat, and what we're looking to do marries up with that as well, where we want machine sets, or what we're calling node groups at this point, to mirror auto-scaling groups on AWS, the same type of things that are provided in the cloud. I thought there...

A: Yeah, I think this is part of the machine controller that we'd write, right. But I do think there is some advantage in having some consistent patterns, where we say, you know, if you have an AWS machine controller and that's using ASGs, and you build a GCP machine controller, it would be nice from sort of a user-consistency point of view if that one used MIGs, so that users that are transitioning between clouds have sort of a similar experience, right.

F: Azure today doesn't support VM scale sets, due to some limitations in the VM scale set APIs, and so all of their clusters are deployed with loose VMs that are not grouped together in any way, other than being homogeneous within their own availability sets. They are switching to VM scale sets as they expand the APIs as needed, and so sometime early next year they're going to start running clusters with VM scale sets instead. So if they were going to hop on this, it depends on when they did.

A: Yeah, I mean, I think we'll always want, I guess at least it's my desire, to always still be able to support individual machines that aren't part of a set. They would still be those sort of loose VMs, because that's one thing that we can't do in GKE, and I've always found that to be frustrating. Like, in GKE you're forced to have groups, and if you just want one of something you're stuck making a group of size one, which seems a little bit silly. Yeah.

H: Can you elaborate a little bit on update cycles?

A: So, just to set the context: if you think about the analogy between pod, replica set, and deployment versus machine, machine set, and machine deployment, we've been thinking the machine deployment would actually be, like, the third level, the API object where you would express sort of sophisticated rollout policies.

A: The max-in-flight, the max-surge, those types of parameters, yeah. And so if you think about a machine deployment, a machine deployment might say, okay, when you're updating the machine set that's owned by the machine deployment, this is how many machines you update in parallel, and this is how many extra machines I should add during the update to make sure I have enough capacity as I'm dropping machines, so that pods can be rescheduled.

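As a rough sketch of that three-level analogy in Go, with field names that are illustrative assumptions rather than the API that was actually proposed:

```go
package main

import "fmt"

// Hypothetical, simplified shapes mirroring the pod / replica set / deployment
// analogy: a MachineDeployment owns a MachineSet, which owns Machines.
type MachineSpec struct {
	KubeletVersion string
}

type MachineSetSpec struct {
	Replicas int
	Template MachineSpec
}

type MachineDeploymentSpec struct {
	Replicas int
	Template MachineSpec
	// Rollout policy applied when the deployment rolls the MachineSet it owns.
	MaxUnavailable int // how many machines may be updated (taken down) in parallel
	MaxSurge       int // extra machines added during the update so displaced pods can reschedule
}

func main() {
	d := MachineDeploymentSpec{
		Replicas:       10,
		Template:       MachineSpec{KubeletVersion: "v1.9.1"},
		MaxUnavailable: 1,
		MaxSurge:       2,
	}
	fmt.Printf("rollout: at most %d down and %d extra at a time, across %d machines\n",
		d.MaxUnavailable, d.MaxSurge, d.Replicas)
}
```
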
H: Right, and that's actually interesting, because that goes back to my previous question. If we decided not to use those cloud grouping APIs, right, if we said we're not going to use ASGs, we're just going to create bare VMs on Amazon and tie them together because they all run the kubelet pointed at the same API server, you could actually implement exactly the same update strategies in a machine deployment across whole cloud environments, because...

A: Right, with bare VMs there's no grouping API in the way. Whereas, I think, if the machine set level seems to make more sense to people in terms of mapping those machine sets onto clouds, then that makes the machine deployments trickier, because then, like you said, how do you express that you want to do a surge update? Maybe ASGs support that, but MIGs don't, right? How do we put that into the API, or implement that? I think the machine controllers can generally probably be smart enough to do it.

J: That, of course, would be a little bit easier to understand, okay, this is mapping one to one, but from the technical side, because, like, Kubernetes should know best when to scale it up and scale it down, I don't see much benefit in that mapping. In my opinion it only creates additional complexity, this mapping to, like, a MIG or ASG. So I'd rather map individual machines to VMs, instead of mapping machine sets to instance groups or scale sets.

J: Yeah, I mean, on the other side, you could still build a machine set controller which then does that mapping, if you have specific requirements. But in general I think it makes more sense to say, hey, we're mainly looking into how we map machines, with our node controller, to the cloud provider.

H: Right. I'm fairly new to this group; have you guys already discussed what kind of injection of metadata you would have to do into the VM, I don't know, to prove identity or something like that?

A: We haven't talked about that specifically; I think that's a broader problem than just the cluster API. There are a couple of proposals, I think, floating around SIG Auth for how to do better per-node identity and restrict node permissions within the cluster, and right now I'm not sure there's anything we could inject that would allow us to do a better job if the core of Kubernetes doesn't support the facilities to leverage it. I think we're sort of stuck at this point waiting for them.

A: Yes, I'm wondering what... I mean, generally for most of the VMs you're going to say something like, you know, make sure Docker is installed, start a kubelet, join this cluster using TLS bootstrapping, here's a bootstrap token, or maybe you can use like a VM attestation to prove that you should be allowed to join the cluster. I'm not sure how it would be different specifying that in an ASG versus on the bare VM, though.

A: So right now the bootstrap token is the same for everyone, and the VMs get per-VM identity through the TLS bootstrapping process. There has been some talk about using the VM attestation APIs on cloud providers to do individual identity on that initial bootstrapping part as well, but I don't think that's been implemented for any clouds yet. And if you did that, I think it would actually be okay.

A: Even if you're in an ASG, because the code you ran to get that attestation would be exactly the same on all of the VMs in the ASG; what's different is what you get back from the cloud provider's metadata server, which would be unique per VM. So I think even in that case you can have the same template for all the VMs you're stamping out, and they would just all prove their own unique identities.

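A small illustrative sketch of that point, assuming a hypothetical renderStartupScript helper: the user-data and shared bootstrap token are identical for every VM stamped out by the group, and per-node identity only shows up later through TLS bootstrapping plus whatever the cloud's per-VM metadata or attestation endpoint returns:

```go
package main

import "fmt"

// renderStartupScript builds the user-data handed to every VM in an ASG/MIG.
// The apiServer endpoint and bootstrapToken are illustrative parameters; the
// exact kubeadm flags are an assumption, not taken from the meeting.
func renderStartupScript(apiServer, bootstrapToken string) string {
	return fmt.Sprintf(`#!/bin/bash
# Identical on every VM in the group: install the runtime, start the kubelet,
# and join via TLS bootstrapping with the shared token.
apt-get install -y docker.io kubelet kubeadm
kubeadm join %s --token %s --discovery-token-unsafe-skip-ca-verification
`, apiServer, bootstrapToken)
}

func main() {
	fmt.Println(renderStartupScript("10.0.0.10:6443", "abcdef.0123456789abcdef"))
}
```
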
A: Chris has been working on getting a prototype built on AWS as part of Kubicorn, and we've been trying to use that to drive some of the API discussion. So I think, in my opinion, what's next is to try to get some consensus on at least the initial alpha versions of the API and get those sort of finished, if you will. I know we have versions checked in right now that we're building code against, but the two PRs for those are still outstanding.

A: So I would love to see those get merged within the next couple of weeks, before the end of the year, and then we can have subsequent PRs that go out to mutate the API, but at least get the initial versions checked in. I think the rate of comments on those has gone down pretty drastically over the last couple of weeks.

A: Well, I imagine we'll stay in alpha for a little bit longer, and I would like to see machine sets show up while we're still in that phase, because that will help us sort of prove out the underlying API, and then it makes sense. Both Chris Love and Justin were really interested in starting to build some prototype support for machine sets in kops, as they said that, you know, kops would love to adopt the machines API, but sort of not without machine sets; that was kind of where they needed it to be before they could adopt it, and so they were both very interested in getting that part implemented. Okay.

J: I mean, do you want to have a look at our node set? Because there's already, I mean, it creates nodes, but probably we can use this as a starting point. We already have a machine, or node set, controller which then generates nodes out of it. I think it would not be a big task to change this to machines, and then we'd have, probably in a week or so, a running prototype where we can start discussing what to enhance or what must be changed.

A: I mean, I think what sort of makes sense is to figure out how we can combine efforts, right. I feel like I ran into, like, six different people at KubeCon that it seemed like were doing the same thing in this space, and trying to sort of pull that together under a common umbrella and make sure that we're driving in the same direction is really important at this point.

A: I guess that reminds me of the other thing we talked about at KubeCon, which is that this time for the meeting is pretty terrible for you guys in Europe in general, I think. And, you know, I know you've managed to make it today, but maybe we should be doing this meeting at an earlier time during the day, yeah.

A: I'm hearing enough people that I think we should probably plan on having this meeting next week; we'll tentatively plan to have that one hour earlier, at 10:00 a.m. Pacific, and I guess we'll likely want to cancel the meeting the following weeks. I'm not sure if anybody's going to want to be around that week, but we can discuss that next week, since we'll have a meeting next week.

A: That's a great question. So, as I mentioned earlier, I think if we can get some review from people who care on the two PRs that are outstanding, and get those merged, that would be really great. It'd be nice if we could have those merged by next week, and also have a PR from Henrik on the machine sets by next week, and if people could take a look at that. So I think that coming to an agreement on the API definition is important to unblock people from working in parallel on the controllers.

A: I will poke Chris and check on status on the Kubicorn side; I know she's normally on vacation for the rest of the calendar year, so we may not see a lot of progress there, but I want to see if someone else is picking that up. And then I think, from the Google side, what we're looking at doing next is working on what it would look like to move from using CRDs to using API aggregation for the implementation of the objects.

G: A question: what are the benefits of API aggregation versus the CRD approach?

A: Proper versioning, so you can convert between v1alpha1, v1beta1, and an actual GA version. You have conversion built into the type system, you have defaulting, you have validation, you can do special handling in the handlers in the API server if you want. It brings all sorts of benefits.

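To illustrate what that buys you, here is a hand-written Go sketch with hypothetical Cluster types (using the single-CIDR versus multiple-CIDR example from earlier in the meeting); an aggregated API server gives you hooks for exactly this kind of conversion, defaulting, and validation in the request path, which, at the time of this discussion, CRDs did not:

```go
package main

import (
	"errors"
	"fmt"
)

// Two hypothetical versions of the same resource.
type ClusterV1Alpha1 struct {
	ServiceCIDR string // older shape: a single range
}

type ClusterInternal struct {
	ServiceCIDRs []string // newer shape: multiple non-contiguous ranges
}

// Conversion between versions.
func convertV1Alpha1ToInternal(in ClusterV1Alpha1) ClusterInternal {
	return ClusterInternal{ServiceCIDRs: []string{in.ServiceCIDR}}
}

// Defaulting.
func defaultInternal(c *ClusterInternal) {
	if len(c.ServiceCIDRs) == 0 || c.ServiceCIDRs[0] == "" {
		c.ServiceCIDRs = []string{"10.96.0.0/12"}
	}
}

// Validation.
func validateInternal(c ClusterInternal) error {
	if len(c.ServiceCIDRs) == 0 {
		return errors.New("at least one service CIDR is required")
	}
	return nil
}

func main() {
	c := convertV1Alpha1ToInternal(ClusterV1Alpha1{ServiceCIDR: "10.0.0.0/16"})
	defaultInternal(&c)
	fmt.Println(validateInternal(c), c.ServiceCIDRs)
}
```
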
A: I think the other big advantage, especially for the cluster API, is that with API aggregation I believe you can put the API server for your resources outside of the cluster. This is one sort of unique thing we've talked about with the cluster API: we might want to have the controllers, and maybe even the backing API server, not running in the same cluster that's being managed, for resiliency, and you definitely can't do that with CRDs, since they live in the main API server.

A: So I don't see very many downsides of using API aggregation versus CRDs, except for the fact that you have to run your own API server, which, you know, has been done already by the Istio folks and the Service Catalog folks, and so I don't think, other than the usage, that there are too many risks of doing it that way.

L: So, right now, in the current implementation of the replica sets and the deployments that's in core, I mean, how much of the logic there can we apply to what we're doing? I think it's too early, but are there any plans to make the logic more generic, so it can be used by machine deployments and then machine sets, and so on? Because I think there is a lot of common logic there, especially rolling updates, how it syncs, etc.

A: So I don't think we've written any logic yet for things like rolling updates, because I do imagine that would be one of the higher-level controllers, right. So if you tell a machine to update, it should update right away. If you tell all your machines to update at exactly the same time, they should all update right away, in fact at exactly the same time, because that's what you've told them to do. And if you want to do a rolling update, then the higher-level controller should tell each machine to update at the right time.

A: The updater that we pulled together for the demo does the former, where it just tells every machine at exactly the same time to update, and that was to make the demo go fast. But it actually also shows a potentially valid use case, where maybe you do want to actually update everything at sort of max parallelism, right.

A: So we don't actually have any rolling updates yet. I could imagine that the client-side updater that we built would be pretty easy to mutate to, instead of updating everything as fast as possible, update one, wait for it to show up, update the next, wait for it to show up, and that's basically what would be implemented for the machine set upgrade as well, except that would be server-side. Just like in kubectl: we had client-side rolling update, and then we moved that into the server, although in that case it was for the higher-level objects, right; rolling update of replica sets became server-side in deployments. So we again have to decide what sort of update support we put into machine sets versus putting it at a higher level than sets and keeping it client-side until we have that higher level.

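A sketch of the "update one, wait for it, then the next" loop described above, against a hypothetical machine client interface; the actual demo updater was client-side and updated everything at once, so treat this as an assumed shape rather than the real tool:

```go
package main

import (
	"fmt"
	"time"
)

// machineClient is a hypothetical interface; a real updater would talk to the
// cluster API server instead.
type machineClient interface {
	SetKubeletVersion(name, version string) error
	IsAtVersion(name, version string) (bool, error)
}

// rollingUpdate updates one machine at a time, waiting for each to report the
// new version before moving on (the demo skipped the wait and went fully parallel).
func rollingUpdate(c machineClient, machines []string, version string) error {
	for _, name := range machines {
		if err := c.SetKubeletVersion(name, version); err != nil {
			return err
		}
		for {
			ok, err := c.IsAtVersion(name, version)
			if err != nil {
				return err
			}
			if ok {
				break
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Printf("machine %s is now at %s\n", name, version)
	}
	return nil
}

// fakeClient lets the sketch run without a cluster.
type fakeClient struct{ versions map[string]string }

func (f *fakeClient) SetKubeletVersion(name, v string) error   { f.versions[name] = v; return nil }
func (f *fakeClient) IsAtVersion(name, v string) (bool, error) { return f.versions[name] == v, nil }

func main() {
	c := &fakeClient{versions: map[string]string{}}
	_ = rollingUpdate(c, []string{"node-1", "node-2", "node-3"}, "v1.9.1")
}
```
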
L: What I was actually referring to was the fact that the logic in the way the deployments work, let's put it that way, is that the deployments in almost all cases are expected to behave in a similar way. So when you're doing a deployment of pods and then of machines, they should more or less behave exactly the same way, and my idea was eventually to influence the core API to more or less externalize the core logic.

K: I can only speak to what we... we're very new at this, but what we were planning was that the concept of the machine set would create the scale group or whatever in AWS or GCP, and then whether or not that actually resulted in a Machine record or not was still kind of open on our end. We weren't sure if we're going to need that, or if we can just treat them completely just as a group that may be thrown away or replaced at some point.

P: We're very strongly taking the opinion that machines, your nodes, are not special snowflakes that you need to hang on to and love and care for and individually update and treat as, you know, the most amazing thing in the world. So our thinking, which, as you guys can tell, we're just starting to show up here and be a pain, is very strongly based on the idea of a set or a group, or things like that.

F: I don't disagree with anything you said. I think people probably take a similar view towards deployments in Kubernetes, though, and yet most of the time I never touch the pod, but it's interesting and sometimes useful that it's there, you know, if I wanted to go tweak a Docker version on a machine or something like that for a quick test case. Anyway, it's not necessarily that I'm advocating for this, just a thought, yeah.

P: It is also an interesting analogy, and something that I think I've heard discussed here, which would be: you can't change a pod, right. You can't go into a pod after it's running and change the image, and, you know, although maybe the vertical pod autoscaler one day will allow such a thing, today you can't really go in and do anything meaningful with a pod. And so some of the thoughts that I feel like I've seen, trying to go back and read the notes of discussions in this group, lead me to believe that we think machines are going to be these mutable pieces of the infrastructure, which is obviously a very traditional way to look at machines, but is not necessarily the path that I'm excited about for trying to deal with things at scale. Yep, yeah.

A: I think what we've expressed in the past is that machines can be mutable, but there are also a lot of people that are really excited about immutable infrastructure and saying, I don't want to mutate my machines, I want to just throw them away and create replacements. But there are use cases, especially if you think about on-prem, where that's maybe not an option, and you want to be able to say, I have a physical machine; unlike a VM, I can't just throw it away and create a replacement.

F: I think it's reasonable. I think even if machine sets never, you know, act like replica sets, and don't instantiate Machine objects, that's fine. I think for people that want to mutate individual machine objects, they can write their own tooling, you know, write a bash script that spits out 20 identical machine objects, and then, if they want to mutate them, they have the flexibility to do so.

G: I'm going to go against the grain here. I had the vision that machine sets just stamp out individual machines, and that way any provider that had a machine controller that could create a machine from a machine definition could have a generic machine set controller that just stamped out these individual machines, and it would work, and there would always just be the individual machine, whether that machine is, like, immutable or not.

M: But I would also agree with Chris in that, like, on AWS where you have auto-scaling groups, I still see value, for nodes, for non-masters, in not using an auto-scaling group necessarily, but having a machine set at least optionally be able to generate machines and then call AWS RunInstances rather than creating an auto-scaling group. The reason being that then you have more control; you know, you don't have to have the same instance type in each one.

P: If you wanted a snowflake, you could have a scale group of size one, and if you wanted some large instances and some small instances, you're going to end up with, in my mind, two different scale groups in the same cluster. I'm not saying that you only get one scale group for the whole cluster and your whole cluster shall be homogeneous. But the idea that nodes as an individual thing should be the core of our thinking, I'm concerned by.

G: I don't need snowflakes; it's just that I need to be able to take the same tools everywhere I go, and if I start having machine sets target individual cloud scalers, then maybe my own autoscaler that targets machine sets doesn't quite work anymore because of the way it interacts. Whereas if the machine set just pounded out machines from machine templates and the machine controller just created those VMs, I mean, they're not snowflakes, they're still part of a group.

D: So that's a question that I had as someone, you know, new to the discussion: if you're using all those kind of magical auto-healing properties, or scaling, all those kinds of behaviors in MIGs, or maybe even in ASGs, whether all of those would be kind of at odds with what the cluster is actually doing, you know, with things like the node autoscaler and other things that could be acting on them as well.

A: The replica set will create a replacement for you but leave that one running (if you relabel a pod so it no longer matches the selector), and you can use this to quarantine pods without killing them, which is a really cool feature of Kubernetes. It would be nice if you could do that with machines too, but that's probably going to require changes to MIGs and ASGs, at Google and Amazon respectively, for that to work.

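For reference, a sketch of that pod-quarantine trick using current client-go conventions (an assumption on my part; nothing like this was shown in the meeting): relabeling the pod so it falls out of the ReplicaSet's selector makes the ReplicaSet create a replacement while the original keeps running for inspection.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// quarantinePod overwrites the "app" label (assumed here to be what the
// ReplicaSet selects on) so the controller orphans this pod and creates a
// replacement to satisfy its replica count.
func quarantinePod(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"labels":{"app":"quarantine"}}}`)
	_, err := client.CoreV1().Pods(ns).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := quarantinePod(context.Background(), client, "default", "my-pod"); err != nil {
		panic(err)
	}
	fmt.Println("pod quarantined; its ReplicaSet will create a replacement")
}
```
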
J: But the question is, if you really want to integrate with the autoscaler. I mean, when we only map, like, machines to the VMs on the cloud provider, then we can easily do this, and you can change this; the machine is still existing and all of the logic is inside of the Kubernetes cluster.

M: I don't think they're exclusive, though. I think there's a debate here that isn't necessarily a debate. I think we should have a generic machine set that stamps out machines, which can be used basically anywhere you can run a machine, and then we should have an AWS-specific one that maps onto a scaling group, maybe a spot fleet.

A: I think that goes back to the earlier part of the discussion about multiple machine controllers in a cluster. You could imagine you have a cluster that has two machine controllers: one stamps out individual VMs and ignores MIGs, and the other uses MIGs and sort of glues those to machines, and you can try both, right.

Q: I would rather say I can see a clear advantage in being able to use, like, the Kubernetes-agnostic or cloud-agnostic machine set in, say, GKE, but then also to have a machine set that's run with a MIG, and then be able to tie that MIG to, like, the Google Cloud HTTP load balancer. I thought that would be a really good integration or use case for what that suggestion implies.

A: Okay, so to sort of summarize that long, rambling discussion that we had: we do not yet have consensus on whether we think it is the right thing to do to build on top of the cloud provider abstractions for groups of machines, and we don't want to commit either way at this point. We should build a generic machine set controller that stamps out machine objects and bypasses what the clouds do, because we will definitely want to use that everywhere, and potentially, at the same time, we can build ones that do run on top of MIGs and ASGs and scale sets and see if that makes more sense. I mean, I think with some of the Kubernetes features we have in API machinery, you can do things like say this machine came from this machine set, and so I think the controller can know how to tie together Machine and MachineSet objects, even if we're using ASGs, and make all of the links work correctly.

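A minimal sketch of that linking, with stand-in types rather than the real API machinery ones: whichever flavor of machine set controller is in use (raw VMs or ASG/MIG-backed), it can record which MachineSet each Machine came from with an owner-reference-style field.

```go
package main

import "fmt"

// Stand-in types; a real implementation would use ObjectMeta owner references.
type OwnerReference struct {
	Kind string
	Name string
}

type Machine struct {
	Name   string
	Owners []OwnerReference
}

// stampMachines is what a generic machine set controller might do: create N
// Machine objects, each pointing back at the MachineSet that owns it.
func stampMachines(setName string, replicas int) []Machine {
	machines := make([]Machine, 0, replicas)
	for i := 0; i < replicas; i++ {
		machines = append(machines, Machine{
			Name:   fmt.Sprintf("%s-%d", setName, i),
			Owners: []OwnerReference{{Kind: "MachineSet", Name: setName}},
		})
	}
	return machines
}

func main() {
	for _, m := range stampMachines("workers", 3) {
		fmt.Printf("%s owned by %s/%s\n", m.Name, m.Owners[0].Kind, m.Owners[0].Name)
	}
}
```
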
A: I mean, Tim asked earlier, sort of tactically, how people can get involved. The other thing people can do is, if they're interested in trying this stuff out, they can sort of run through the code we have today and see how it works, see what the rough edges are on GCP. You can start prototyping and hacking together other implementations of the API that run elsewhere and see how that feels, how the API looks. I think that's been our best avenue so far for looking at how we might tweak the API: to try to actually write code against it and see how it works, both on the implementation side and the client side, right. So that was kind of the point of the demo: implement the API and see if we can get it to work, and write some client-side tools that use the API and see what we can get them to do.

A: Yeah, so the bootstrapping that we built for the demo was not to create the whole cluster; it just creates the master. So you create a single VM, that's all, and you run kubeadm on it, and then you install the CRDs and you launch the machine controller, and from there you can instantiate the rest of your cluster that way. Okay.

A: And I think that's what the kops folks are planning on doing: they're not going to change their cluster creation flow, but once they create a cluster, they can install the CRDs and use the machines API, you know, if you like. That's my understanding. Justin's giving me a thumbs up, so I think I'm right, yeah.

A: So the sessions were really short, 35 minutes, which is not very long, so in both the SIG Cluster Lifecycle, like, SIG update, we kind of ran over time with questions, and in the cluster API talk we ran over time with questions also. There were a lot of hallway conversations afterwards, but, you know, all the people that were at the session didn't get to hear everything that was discussed there, which is a little unfortunate.

A: That would have been a really useful forum to just get people together and discuss sort of plans for our SIG, because there was not a great forum, either during the contributor summit or during KubeCon itself, to have the SIG members get together and talk about what we're working on and where we're headed, you know, whiteboarding, design-discussion-type stuff. A lot of that just sort of happened in the hallway, pairwise.

A: All right, with that, we're at time, so I'm going to go ahead and call it, and I'll see everybody next week. Remember, if you joined late: we're going to move the meeting an hour earlier, so it's going to be at 10:00 a.m. Pacific time instead of 11:00. I will also send a note out on the mailing list about that, and I'll see everybody soon.
