From YouTube: Kubernetes SIG Cluster Lifecycle 20180905 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7tfb6scheyxs
Highlights:
- Moving from API aggregation to CRDs (with help from the authors of kubebuilder)
- Creating a new release of clusterctl
- Discussion on MachineClass
- Scale down strategy for MachineSet (and cluster autoscaler integration)
- Splitting the Machine API from the Cluster API
A: The first thing I stuck on the agenda: I noticed somebody had put a big bold heading in our notes for a meeting that happened yesterday as part of the AWS implementation working group. I removed that heading and just stuck a link at the very top of the doc, so we have a static link at the top of the doc to the meeting notes for that working group. I think there are probably meeting notes for the provider implementers working groups that are running as well.
A: We should stick those at the top of this doc as well. Dan, you probably have those handy and can put them up there at the top. I wasn't sure why somebody stuck the link in here, whether it was because there were things discussed yesterday that people thought were pertinent to this group, but as a general rule let's just keep those links at the top.
A: Alright, you guys are all very quiet today, so I will keep going. The next thing I stuck on here is the migration to CRDs. This is something we've discussed a couple of times in the past, and I know that Jason started tinkering with kubebuilder, but he was sort of doing it part-time as a side project and we were looking for help. I ran into a couple of folks last week who made kubebuilder and had also previously made, what did we call it before...
A: The apiserver-builder, I think, is what we used before. And Phil said, oh, I think I can switch your project over in just a couple of hours. So he started working on it last week and figured out it was a little bit more complicated than just a couple of hours, because we've done things like modify the generated code. If you recall, Chris Rousey had an issue a while back about the way we plumb through actuators; we had actually just changed some of the generated code.
A: So we'd been living with that patch. Phil sent me a link to what he and Sunil had been working on over in his fork of the Cluster API; some of the stuff hasn't been pushed yet, but you can look at the cluster controller and see the kinds of changes they're making. The things they're doing as part of the migration are essentially redoing the controllers to morph them over.
A: We'll end up with one controller per type, whereas before the controllers were sort of mixed together. We're going to get rid of interacting with informers and queues directly and rely on the abstractions provided by the kubebuilder libraries to deal with those, and we're going to switch to a single dynamic client rather than the generated clients, and use that client to do deep copies. So I think the takeaway is that our Makefile should get significantly cleaner.
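For anyone who hasn't used the kubebuilder libraries, here is a minimal sketch of what that one-controller-per-type shape looks like with controller-runtime, which owns the informers, caches and work queue for you. A recent controller-runtime is assumed, the group/version/kind is the one the project used at the time, and the reconcile body is only a placeholder:

    package main

    import (
        "context"
        "os"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/log/zap"
    )

    // The group/version/kind the project used at the time; adjust as needed.
    var machineGVK = schema.GroupVersionKind{Group: "cluster.k8s.io", Version: "v1alpha1", Kind: "Machine"}

    // MachineReconciler handles exactly one type; controller-runtime owns the
    // informers, caches and work queue that used to be wired up by hand.
    type MachineReconciler struct {
        client.Client
    }

    func (r *MachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        obj := &unstructured.Unstructured{}
        obj.SetGroupVersionKind(machineGVK)
        if err := r.Get(ctx, req.NamespacedName, obj); err != nil {
            // Deleted between enqueue and now: nothing to do.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // Actuator calls (create/update/delete the backing instance) would go here.
        return ctrl.Result{}, nil
    }

    func main() {
        ctrl.SetLogger(zap.New())
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            os.Exit(1)
        }
        watched := &unstructured.Unstructured{}
        watched.SetGroupVersionKind(machineGVK)
        if err := ctrl.NewControllerManagedBy(mgr).For(watched).Complete(&MachineReconciler{Client: mgr.GetClient()}); err != nil {
            os.Exit(1)
        }
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }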
A: We hadn't been running go vet manually ourselves, so that's also kind of a nice side benefit, because every kubebuilder project sort of gets go vet by default. Phil pointed out that if anyone from the community is interested in helping with this exercise, that would be great, and to sync with him. He said he might show up this morning to answer questions, but I don't see him on the call yet, so people can ask any questions they have.
A: The last ETA I got was the end of this week; they'd have a PR ready for us to look at. I don't know if that's still on track. I think he sent that out to me at the end of last week and said he thought it would be about a week to get this done. It sounded like they were making pretty good progress, but I don't know what the current status is; my expectation was about the end of this week.
A: We'd have a PR, we'd pull it in, and yes, I think everyone who vendors this into their vendor directory is then going to see the ripple effects of those changes. I don't know exactly what that's going to look like, because right now our vendored directories are using some of the same patterns as the main directory, and we might be able to treat those as vendored extension APIs, if you will, and also use kubebuilder to manage those and clean up the way those are managed as well.
A: Yeah, I'm guessing there have been. I know Justin mentioned that when he tried to just run our controllers on top of CRDs there were a couple of things missing: there were some nil pointers, etc. Phil said he was having to rewrite parts of the controllers, so I'm guessing that's taken care of some of that, or he'll leave some TODOs in the code and say you guys should fix your defaulting. It's going to be one of the two, but I think his goal is to have them working.
A: And the note he sent me yesterday was that they had more than that, they just hadn't pushed it yet. So I think they're even further along than what you can see up there, and if people want to help, we can nudge them to push more, so we can actually see how far they are and where they actually need help. Because if you try to help based on what's been pushed, you may be duplicating work that's already been done.
A: Alright, so next I was going to ask: I think Cindy pinged me on Slack a little while back asking about cutting a new release of clusterctl, so I want to bring that up here. In the past we cut a couple of early releases just to see how it would go, and really there's no reason to cut or not cut a release going forward unless somebody actually wants one. So I'm happy to have myself or someone else do it.
A: So unless somebody thinks it's a really bad idea, there's really no overhead, except for putting a tag in git and pushing a binary to GitHub, so I think we'll try to do that. There are maybe a couple of outstanding PRs; I will try to get those in and cut a release before the kubebuilder cutover.
E: Yes, so as discussed in the last call, I picked up the work which was done previously by Robert on MachineClass. I tried to get to the same point and made some changes based on the comments which were made previously on the old repository. The PR has been there for a couple of days and is open for review. If somebody wants to check out what's happening there, it would be nice if it could get some eyes.
A: First, some context for people who may not know the history. The idea of machine classes is that we'd like to have a way, instead of having to embed the provider config in every single machine, machine set, or machine deployment, to create a provider config effectively once and then reference it from machines and machine deployments. So we have two different use cases in mind here. The first one is the way it works today.
A: It makes it really easy for people to create a machine or a machine set without having to understand complex interdependencies between multiple Kubernetes resources. You say, I want a machine, here's everything that's nice to know about the machine, and then you get a machine as a result. Machine classes are really for more of an advanced use case where you say: great, I understand how to create and delete individual machines or machine sets, but I don't want to have to put the same information in every single time.
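As a rough illustration of that referencing idea, here is a sketch with simplified, hypothetical shapes (not the real cluster-api types): the provider config lives once in a MachineClass, and a machine either embeds a config or points at a class by name.

    package main

    import "fmt"

    // MachineClass holds the provider-specific config once, instead of it being
    // embedded in every Machine, MachineSet and MachineDeployment.
    type MachineClass struct {
        Name           string
        ProviderConfig string
    }

    // MachineSpec can either embed the provider config (today's simple path) or
    // reference a MachineClass by name (the template path).
    type MachineSpec struct {
        ProviderConfig string // embedded, as it works today
        ClassRef       string // or: name of a MachineClass to stamp from
    }

    // resolveProviderConfig is what a controller would do when a machine points
    // at a class instead of carrying the config itself.
    func resolveProviderConfig(spec MachineSpec, classes map[string]MachineClass) (string, error) {
        if spec.ClassRef != "" {
            c, ok := classes[spec.ClassRef]
            if !ok {
                return "", fmt.Errorf("machine class %q not found", spec.ClassRef)
            }
            return c.ProviderConfig, nil
        }
        return spec.ProviderConfig, nil
    }

    func main() {
        classes := map[string]MachineClass{
            "small-node": {Name: "small-node", ProviderConfig: "machineType: n1-standard-2"},
        }
        cfg, _ := resolveProviderConfig(MachineSpec{ClassRef: "small-node"}, classes)
        fmt.Println(cfg)
    }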
A: What I want is a template that I can stamp these things out from, and the machine class serves as that template: you create it once and then you can reference it over and over, potentially with a little bit of parameterization. We're sort of going back and forth on that; my initial proposal had some fields for parameterization, and it's not clear whether that's a good idea or not.
A: We should discuss that, and I think it's probably safest to leave it out to start with and then add it later if we think we need it. So that was the idea behind machine classes, and then Hardik picked up that work and is driving it forward, which is awesome. Devon has a question in chat about whether changes to the template trickle out to machine sets and machines. I think, ideally, what we discussed before is that machine classes would be immutable, and we would try to enforce that.
C: I think I can take a stab at that question with the AWS parallel. Launch configurations are immutable, so in order to replace a launch configuration you create a new one and point the auto scaling group at it. It doesn't auto-roll by default, at least, but when instances are naturally or unnaturally replaced, they will pick it up.
A: I think so. I mean, the thing is, we should try to make it immutable, and I think that would be one way to do it. Ideally there'd just be something in a CRD schema that says updates are not allowed; that would be even easier because we wouldn't have to deal with webhooks. But if webhooks are the right way to enforce that constraint, then we should do that.
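A minimal sketch of the webhook route, assuming the v1beta1 admission API of the time: a validating webhook that simply denies every UPDATE, which makes the object effectively immutable. TLS setup and the ValidatingWebhookConfiguration that points at this endpoint are omitted.

    package main

    import (
        "encoding/json"
        "net/http"

        admissionv1beta1 "k8s.io/api/admission/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // rejectUpdates denies every UPDATE it sees, which makes the targeted CRD
    // effectively immutable: users have to delete and recreate instead.
    func rejectUpdates(w http.ResponseWriter, r *http.Request) {
        var review admissionv1beta1.AdmissionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
            http.Error(w, "could not decode AdmissionReview", http.StatusBadRequest)
            return
        }
        resp := &admissionv1beta1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
        if review.Request.Operation == admissionv1beta1.Update {
            resp.Allowed = false
            resp.Result = &metav1.Status{Message: "machine classes are immutable; delete and recreate instead"}
        }
        review.Response = resp
        _ = json.NewEncoder(w).Encode(review)
    }

    func main() {
        http.HandleFunc("/validate-machineclass", rejectUpdates)
        // A real webhook must serve TLS and be registered with the API server;
        // both are omitted from this sketch.
        _ = http.ListenAndServe(":8443", nil)
    }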
A: I don't think storage classes as a whole are immutable, but there are definitely fields in storage classes that are immutable. Storage classes are a built-in API object, so they have the luxury of being able to write their validation code in the main Kubernetes API server. When I talked to Saad and looked at some of that code, they have explicit checks that say: if you try to change this field, we reject you; if you try to change that field, we reject you. So, for all intents and purposes, you can't mutate a storage class.
A: In the same way, you really have to delete it and create a new one if you want different parameters to be set. So I think it's probably a better user experience to just tell people they can't change anything than to make people figure out which fields are mutable and which fields are immutable.
A: So the call to action here is: if people are interested in what this API is going to look like, please go review the PR. I'd love to see this get in soon, because I think we want to start trying to build on top of it. I haven't looked at it other than the fact that it was based on my previous proposal, so I will probably +1 it, but I will try to look at it in the next day or so and give it a more thorough review.
E: Yes, so this is about how to scale down machine sets. First of all, the context is the cluster autoscaler. Right now the cluster autoscaler has an interface where one of the interface methods is about scaling down the machine set and, at the same time, specifying which machines should be deleted when you do the scale down; so specifically deleting the machines as well as doing the scaling down. This kind of feature is not available at the moment in the pod Deployment and ReplicaSet model.
E: So we will have to think about what the right strategy for us is here, and as the Machine API itself is growing and there is a clear need for an autoscaler, I thought maybe we could talk about a low-hanging-fruit kind of approach for how this could be achieved. I have already described it in the issue; I'll try to give a brief overview of the approach here.
E: Let me describe what I'm thinking. One possibility is that when a machine deployment creates the set of machines, we default the priority for each machine; for example, say the priority is 3 and all machines get the same default priority. Then, when the cluster autoscaler decides that it needs to scale down a certain set of machines, it can make two calls, not in parallel and not as a single call, but two sequential calls.
E: In the first call it can reduce the priority of those specific machines which need to be deleted, and in the second call it can do the scale down of the machine deployment. Now, the catch here is that we will have to configure the reconciliation of the MachineSet and MachineDeployment so that it sorts the machines while scaling down. One obvious sorting, which is already there for pods as well, is that we should ideally choose unhealthy machines first and only then the healthy machines, and so on.
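A sketch of the two sequential calls being described, with a made-up client so the shape is visible; the real calls would be patches against the Machine objects and the MachineDeployment scale.

    package main

    import "fmt"

    // machineClient is a stand-in for a real Cluster API client; the method
    // names are invented for this sketch.
    type machineClient struct{}

    func (c *machineClient) SetMachinePriority(name string, priority int) error {
        fmt.Printf("patch machine %s: priority=%d\n", name, priority)
        return nil
    }

    func (c *machineClient) ScaleMachineDeployment(name string, replicas int) error {
        fmt.Printf("scale machinedeployment %s to %d replicas\n", name, replicas)
        return nil
    }

    // scaleDown is what the autoscaler would do: two sequential calls, never one.
    // Call 1 lowers the priority of the machines it wants removed; call 2 shrinks
    // the deployment, and the machine-set deletion order does the rest.
    func scaleDown(c *machineClient, deployment string, victims []string, newReplicas int) error {
        for _, m := range victims {
            if err := c.SetMachinePriority(m, 0); err != nil {
                return err
            }
        }
        return c.ScaleMachineDeployment(deployment, newReplicas)
    }

    func main() {
        c := &machineClient{}
        _ = scaleDown(c, "worker-md", []string{"worker-md-abc12"}, 2)
    }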
E: But on top of that we could add one more filtering mechanism where we first choose the low-priority machines and only then the high-priority machines, and hence the purpose would be served. This approach looks pretty clean at the moment and, as I mentioned, it's kind of low-hanging fruit; it could be implemented pretty quickly in the MachineSet. In the long term there are still better things we can think of; if you look, priority classes and so on already exist for pods.
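And a sketch of the deletion ordering just described, assuming the priority is carried on the machine somewhere (field names here are illustrative): unhealthy machines first, then lower priority before higher.

    package main

    import (
        "fmt"
        "sort"
    )

    // machine is a simplified stand-in; where the priority actually lives
    // (field, annotation, ...) is part of the open design question.
    type machine struct {
        Name     string
        Healthy  bool
        Priority int // defaulted to the same value at creation; lowered by the autoscaler
    }

    // sortForScaleDown orders machines so that the first entries are the ones a
    // MachineSet controller would delete when scaling down: unhealthy machines
    // first, then lower-priority ones, then everything else.
    func sortForScaleDown(machines []machine) {
        sort.SliceStable(machines, func(i, j int) bool {
            if machines[i].Healthy != machines[j].Healthy {
                return !machines[i].Healthy
            }
            return machines[i].Priority < machines[j].Priority
        })
    }

    func main() {
        ms := []machine{{"m1", true, 3}, {"m2", false, 3}, {"m3", true, 0}}
        sortForScaleDown(ms)
        fmt.Println(ms) // m2 (unhealthy), then m3 (lowered priority), then m1
    }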
C: My gut feeling is that the autoscaler would have cordoned the node if it knew which one it was going to scale down. But I guess my question is: to what extent is it the case that there are a couple of candidates and the machine controller makes that decision? For example, Amazon used to bill by the hour, so you would want to shut down the one that was, you know, closest to the hour or whatever.
C: Whatever your strategy was. That's not the case anymore, but to what extent does the autoscaler really know which one it wants to shut down, with the machine controller just obeying that, and to what extent is the machine controller actually making a smart decision?
E: This input for scaling down has to come from the core logic of the autoscaler. The autoscaler itself is written right now in a way that, while doing the scale down, it also passes as a parameter the set of machines which it really wants removed. So the interface gives us a set of machines and expects that exactly this set of machines will be deleted; if we delete a different set of machines, then the behavior of the autoscaler becomes weird.
A: I'm wondering if Justin was more asking this: that is how the autoscaler is written today because it was directly talking to the underlying cloud environments, which don't know anything about clusters or Kubernetes or how to smartly pick what to remove. In a world where we have a Machines API, would it make sense to invert the logic and have the autoscaler just say it should scale down, and let the machine set decide which one to remove? I think that's what Justin was asking, I guess.
A: The autoscaler already has the logic for figuring out what's going to happen if you add or remove machines, and I would prefer not to put all of that scheduling-simulation code into our machine set controller to figure out which machine to remove and how things are going to get rescheduled. We have that logic already, it's needed for scaling up anyway, and having it as a separate microservice makes a lot of sense. So I think having the autoscaler be able to pull the strings in a way that's deterministic makes sense.
E: The autoscaler at the moment force-drains the node; it cordons it, drains it, and then decides to scale down, and that's just one part of the autoscaler's behavior. But then, on the machine set side, I would also recommend that we consider the possibility that whenever we do a scale down, by default we cordon and drain the machines and then delete them. So just to answer your question specifically: yes, the autoscaler drains and cordons today, and then does the deletion.
E: So it would then happen at both steps. The autoscaler would be using the machine controller out of the box, so it has this feature. I guess we need to check whether there is a knob where we can tell the autoscaler not to worry about it and delegate the responsibility to the machine controller, but the machine controller will have to support this kind of thing anyway.
A: Yeah, one thing I'll say is that this discussion about draining and where that logic should be is similar to what Justin brought up about choosing the machine to remove, but I think the answer is the opposite. Right now the cluster autoscaler drains because it is talking to the underlying environment directly.
A: Once the cluster autoscaler can talk to the machine controllers, it should let the machine controllers drain, because the machine controllers are going to drain anyway. So I think that's a place where, unlike choosing which machine to scale down, which makes sense to leave in the autoscaler, actually doing the draining as part of the scale down makes sense to move out of the autoscaler and into the machine controller.
C: The cordon, though, could be separate, because the autoscaler's decision to scale down depends on the current state of the cluster. Technically, the minute the scheduler runs, that decision is no longer valid, but it's the closest approximation we have. It can at least say: the autoscaler is going to cordon this machine off and stop anything else from landing on it, and then off you go, machine controller, shut it down. That seems to me like a nice way to communicate.
E: So if I understand it right, the proposal is to have the autoscaler taint the machine rather than tying into that priority part. If that's the case, then for the machine set, if the priority part is not being used at all, wouldn't the machine set end up taking the wrong decisions, where the user might not want those tainted machines deleted first and might instead want the really unhealthy machines to be deleted first, and so on?
C: That's a valid point. I think it's not unreasonable to say: if you want to guide it, you cordon the node, and we make sure cordoned nodes are deleted first, or prioritized first along with unhealthy nodes. It's just an alternative to the priority classes you're suggesting, but I think it's perfectly reasonable to ask the user, if they want a particular node killed when they scale down their machines, to cordon it first. So taints can give you that.
E: So you mean that the taint we would use could be very specific to the autoscaler. Then we would expect that only the autoscaler handles that taint, and the user would use his own set of taints if he wants to guide it by tainting. Yeah, that could be one of the possibilities, and that also makes sense, actually.
E: Well, it rather chooses a set of machines first and then decides which ones to delete out of them. I'm just trying to think the other way around, whether there is a possibility where a taint could backfire and be interpreted in other ways, because when we put a taint on, we also have to make sure that somebody removes the taint. There are multiple kinds of taint effects: NoSchedule, NoExecute and so on.
E: So if NoSchedule is set by the autoscaler mistakenly, and later on we decide that this machine no longer needs to be deleted, then somebody has to go and remove the taint. So the autoscaler would have to think carefully about this interface and maintain, add, or remove the taints. In the case of the priorities, the machine would just behave the normal way it would have otherwise, so pods would still be able to get scheduled on that machine.
E: Just thinking from that perspective: right now there is logic where the autoscaler first puts some kind of annotation on the node object to indicate that this machine might get deleted soon, and then it may later decide to actually leave that machine alone, after some fairly complicated analysis of the machine. So wouldn't the autoscaler then have more responsibility for maintaining the taints, adding and removing them from the machine? Yeah.
A: One other issue I can think of with taints: right now taints are part of our declarative API for machines and machine sets. So if the autoscaler adds a taint directly to a node, then the machine controller, when it's reconciling the desired taints on the machine versus the actual taints on the node, may just go and remove that taint.
A: And if the autoscaler modifies the machine to have a new desired taint for that node, then the machine set controller may go and say: that's not what machines in my machine set are supposed to have for taints, I'm going to revert it. So we may end up with the autoscaler's attempts to apply taints fighting with the controller loops that come from our declarative definition of what taints should be on a machine.
G: I think that's a bug, not a feature. We should probably use initializers to set the taints once on the node object and not try to keep doing that over and over again, because users will probably want to change the taints on the node; for example, if a node is compromised, I don't want to schedule anything on it, and I would rather keep the node around to find out exactly what happened.
E: You can put those thoughts on that issue, and let's think it over and try to get consensus on the issue itself. The overall idea was to start thinking about this aspect, that sooner or later we will have the autoscaler coming into the picture, and to understand it in parallel.
A: Thanks. Oh yeah, I like that in the meeting notes. Matson, can you take an action item to write up, in the GitHub issue that Hardik linked at the top here, number 75, what it would look like to use taints instead of priorities and how that would play with the autoscaler? So basically, instead of the autoscaler setting a priority, the autoscaler would add a taint and then the scaling down would respect that taint. Sure, and I think also.
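A sketch of what the taint-based variant could look like; the taint key is invented for illustration, and whether it lands on the node or on the machine spec is exactly the open question discussed above.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // Hypothetical taint key; nothing with this name exists in the project today.
    const scaleDownTaintKey = "cluster.k8s.io/scale-down-candidate"

    // markForScaleDown is what the autoscaler would do instead of setting a
    // priority: add a well-known NoSchedule taint so nothing new lands there.
    func markForScaleDown(taints []corev1.Taint) []corev1.Taint {
        for _, t := range taints {
            if t.Key == scaleDownTaintKey {
                return taints // already marked
            }
        }
        return append(taints, corev1.Taint{
            Key:    scaleDownTaintKey,
            Effect: corev1.TaintEffectNoSchedule,
        })
    }

    // isScaleDownCandidate is the check a MachineSet controller would use to
    // prefer these machines when choosing deletion candidates.
    func isScaleDownCandidate(taints []corev1.Taint) bool {
        for _, t := range taints {
            if t.Key == scaleDownTaintKey {
                return true
            }
        }
        return false
    }

    func main() {
        taints := markForScaleDown(nil)
        fmt.Println(isScaleDownCandidate(taints)) // true
    }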
A: All right, so next is Alberto. I saw this issue this morning when it got filed and thought it would be a great topic for us to discuss today, which is: should we have a separate API group for machines, and should machines be considered a separate, independently evolving API from the Cluster API? Would that allow us to add the Machines API to existing clusters more easily, and what do people think about having those things split?
B: Because right now, with CRDs, it would be trivial to make that separate grouping. Honestly, the transition would be pretty easy to make; I think the hardest part would be folks building the conversion apparatus around it. So if you want to maintain compatibility across that change, then the conversion would be the hardest thing we maintain.
H: No, I just mainly wanted to bring the discussion up here and hear everyone's thoughts. We're mainly using the part of the Cluster API that relates to the machines, and we assume that there is a cluster already in place with predefined infrastructure. We then leverage the Cluster API to provision the machines that join that cluster. So we're just hoping to hear everyone's thoughts about the coupling of both APIs and what it would take to get the Machine API to beta status, separate from the Cluster API as a whole.
A: Does that make sense, or maybe your implementation doesn't as tightly couple the machine controller with the cluster resource definition? In some of the implementations I've seen, the first thing the machine controller does is go and find the cluster the machine belongs to, so it can get some cluster-wide properties to apply to that machine. For example, on GCP it might find the project it's running in, or the zone it's running in, etc., from the cluster, so that all that information isn't duplicated in every machine.
H: Yeah, so one of the things it would take to split the two APIs would be to remove that coupling. Right now all the actuators are expecting to receive a cluster object and a machine object. So somehow we would need to move from that cluster object to something more generic and allow the actuator to get the information it needs from that context object and from the machine object itself, so we need to discuss what that context object should actually contain.
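A simplified sketch of that direction, with stand-in types rather than the actual cluster-api interfaces: the actuator keeps its machine parameter but takes a narrower context instead of the full Cluster object.

    package main

    import "fmt"

    // Machine is a stand-in for the real Machine type.
    type Machine struct{ Name string }

    // MachineContext is the hypothetical replacement for the full Cluster
    // parameter: only the cluster-wide values a provider actually needs
    // (project, zone, network, ...), however they were obtained.
    type MachineContext struct {
        Values map[string]string
    }

    // Actuator mirrors the create/delete/update/exists shape the project uses
    // today, minus the hard dependency on a Cluster type.
    type Actuator interface {
        Create(ctx MachineContext, m *Machine) error
        Delete(ctx MachineContext, m *Machine) error
        Update(ctx MachineContext, m *Machine) error
        Exists(ctx MachineContext, m *Machine) (bool, error)
    }

    // loggingActuator is a do-nothing implementation to show the call shape.
    type loggingActuator struct{}

    func (loggingActuator) Create(ctx MachineContext, m *Machine) error {
        fmt.Printf("create %s in project %s\n", m.Name, ctx.Values["project"])
        return nil
    }
    func (loggingActuator) Delete(ctx MachineContext, m *Machine) error          { return nil }
    func (loggingActuator) Update(ctx MachineContext, m *Machine) error          { return nil }
    func (loggingActuator) Exists(ctx MachineContext, m *Machine) (bool, error)  { return false, nil }

    func main() {
        var a Actuator = loggingActuator{}
        _ = a.Create(MachineContext{Values: map[string]string{"project": "my-project"}}, &Machine{Name: "worker-0"})
    }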
C: I'd also say my experience implementing this in kops mirrored that: we have a clear need for machines, we don't have a clear need for clusters. So from that point of view, not requiring the cluster, whether that means cleanly splitting into two separate API groups or not, I don't know, but not requiring it might help a lot with adoption.
E: One of the use cases I can also see, and I think it's already mentioned in the issue as well, is that both of the parts can then independently improve and grow. For example, at the moment the Machine API is more or less very near complete, and we will very soon see most of the features in place, but the Cluster API might still be less mature. So both of them could improve independently in parallel and not have to depend on each other for further versions.
A: So it sounds like both Alberto and Hardik are saying that, in contrast to what Tim said before, maybe it's not too soon to split them, because the machines part of the API is closer to what we think its final state will be, while the cluster part of the Cluster API is a little bit earlier. And also, along the lines of what Justin was saying, it's easier to retrofit the Machines API into existing systems and existing deployment tools than the entire API.
B: We could always have an alpha2 for the API versioning and schema validation, and a slow migration is not a bad thing, because this is such a federated project, unlike other projects; since we are crossing so many different repo boundaries, that way we can have a lockstep sync with a reasonable amount of change rather than one massive disruptive change. I don't know if you were ever a consumer of client-go in the very beginning; that was always fun, every single update was a massive disruption of your code.
D: I also wonder how much this is actually going to decouple the machine and cluster implementations if we still need to federate information somehow from the cluster to the machine actuator. Are we removing the coupling at the API layer just to create more coupling within the actual controller implementation instead?
A: We would need to figure out how to separate those, and I think maybe in something like kops it's easier to do that, because kops already has a representation of a cluster and it would be easier to just add machines into that existing representation, whereas if you're trying to use clusterctl and the whole stack, those controllers don't have an existing data store of the common bits that they need to pull from.
C: I would say long term we should not optimize for kops; we should optimize for the Cluster API itself. But short term and medium term, adoption is also important, so I don't know exactly how we balance that. I don't want to say we should do this because of kops, but long term we should optimize for the full stack, as you say; I do think that adoption matters, though.
A: There have been proposals to literally add a reference in the API between machines and clusters, as part of being able to put multiple clusters into the same namespace, and if we do it at the API level, then that does have ramifications of much more tightly coupling the API objects than what we have today. What we have today is a loose coupling from the API point of view; the implementation couples them together, but there's nothing to say that all implementations have to couple them together.
A: So, my personal take: in the interest of expediency, I think having them stay together makes sense for velocity right now. If we can make an argument that splitting them helps increase adoption while hopefully not decreasing velocity too much, that could be a good argument, but I'm not sure I'm seeing that argument being made quite yet.
A: That's kind of what Alberto is saying: right now the actuator gets passed the literal cluster, and if instead it just got passed enough of the cluster-type information that it needs to do its work, and you abstracted that away, you could pass it a literal cluster, or the subset of it that it cared about, or, if you didn't want to create that cluster at all, just the context pieces that it needed to do its job.
I: It sounds a little odd. The machine controller would be the one that translates, let's say, the cluster object into the context object, and sometimes a provider may want different information in that context than another provider. But then you no longer have access to all the information, because some of it is being removed by the machine controller, which is the common piece.
C: I guess what I was thinking is: how we structure a controller has sort of no bearing on the API group discussion. I haven't heard a concrete reason to split the cluster object out of the API group, even though from my point of view it makes sense to implement the whole API group in kops and say I'm only going to run this bit. That makes sense to me.
A: Then you couldn't use one without the other, because if you're going to use the machine resource, you have to have a cluster resource too, and that negates the benefit of splitting them and allowing you to just use the machine resources, which I think was part of the root impetus for this question: I want to just use machines, but not clusters. If you have a hard reference that's required, you can't do that.
A: Alright, we are getting a little bit short on time and I think there are two agenda items left. The issue is just open, so if people have thoughts, please go ahead and add them to that issue, and maybe we can keep chatting about this, and its relationship to harder links between machines and clusters, next week. Tim, you had a question about how we name sub-things within providers?
B: So we are getting a lot of feedback from folks wanting to do a different flavor or variant, and I'd just like to have a common term to use later, so that as we start to design code around this it actually carries the name we agree upon. I know Chris did the straw poll; do we have results? Are we going to call the poll closed?
J: Yeah, I think flavors won; if we want to go with what folks voted on, that's the clear winner. Although I'm still a big fan of the word variant; I think it's a little less promiscuous, it makes sense, and I think folks instantly understand what we mean by using that word. So I would say that our second-place finisher, variant, should win, and if nobody has any strong feelings otherwise, I would write that down and say that's the official term moving forward.
C: I think this is great. I think something we've learned from past naming debacles is that certain words have certain meanings to certain groups. I don't know if we have enough of a representative sample, or whether, if we think variant is a good name, we need to run it past, I don't even know how we would do this, maybe the community meeting, to see if anyone is going to throw up their hands in outrage.
J: Maybe we send an email out. I don't think we have to get the whole Kubernetes community involved, but maybe just send one saying we're going to go ahead with variant unless anybody has any reasons why not, and then use that as our paper trail.
A: The other way, just to get a little more input, maybe not the most diverse representation, but we could run it by SIG Architecture as well and just make sure nobody there has any objections. So I think if you do SIG Architecture and SIG Cluster Lifecycle, maybe that's sufficient to cover our bases.
A: Alright, in the last two minutes, I did want to mention that Fang, who is a Googler who has previously been working on the Cluster API project, has submitted a talk for KubeCon Shanghai, which is happening in November, and it was accepted; it's about the Cluster API. So if anyone has any topics or calls to action that he should bring to the community in China, he is a native speaker, which allows him to communicate with them more easily, so please let him know.
A: If we have updates on the status of the project, those are the sorts of things we want to make sure he picks up, so we should sync with him before his talk in November. He couldn't make it today, but he wanted to have me let folks know that this was happening and to sync with him.
A
No
all
right
thanks
everyone
for
coming
and
we'll
see
you
guys
all
again
soon,
please,
if
you
have
action,
items
go,
take
care
of
those
and
there
were
a
couple
for
everyone
on
a
couple
of
issues
in
PRS.
If
you
are
interested
or
have
thoughts
to,
please
go,
add
them
to
those
those
issues
and
PRS,
because
we're
gonna
keep
trying
to
move
things
forward.
So
if
you,
if
you
have
thoughts
fit
on
there
and
if
you
don't
expect
things
to
keep
moving
great
and
we'll
see
everyone
next
time
take
care.