From YouTube: Kubernetes SIG Cluster Lifecycle 20180321 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.dqa4ycwj7avf
Highlights:
- Plans for the new repository (owners, migrating code, migrating issues)
- MachineClasses
- Breaking the machine controller's dependency on Google Actuator
- MachineDeployment status
A: Hello and welcome to the March 21st edition of the Cluster API working group meeting, which is part of SIG Cluster Lifecycle. The first couple of items today are from Chris Nova, who I believe is not quite here yet, but I think we can go through them in her absence. So the first one is owners for the new repository.
A
So
those
referring
to
the
new
github
repository
that's
been
set
up
in
the
kubernetes
savings
organization,
which
is
new
and,
as
last
I
checked,
only
had
like
three
organization
numbers
which
is
frustrating
because
it
means
if
you're
on
the
organization.
You
can't
do
things
like
use
the
bots
to
assign
or
close
issues
or
add
labels,
as
I
found
out
to
my
chagrin
and
yeah.
So
we
need
to
figure
out
who
is
going
to
be
owners
of
that
repository.
A: So, how do we get added to the org? It looked like she got added to the org, because she was able to close an issue, but it's not clear to me how other people are supposed to get added to that org. I haven't seen a new-members document the way there is for the normal kubernetes org. Yeah, right now it's just Brian Grant, Michelle Noorali, and Aaron Crickenberger; they are the only people in the kubernetes-sigs org at all.
A: Are you sure? Because I've definitely seen pull requests coming in where it says someone from the org has to write 'ok to test' before we'll run tests on their PR. Yes, once it's 'ok to test', you're allowed to rerun the tests. Okay.
That means that right now, if any of us sent a pull request, no tests would be able to be run against it either.
A: Okay, since Chris isn't here, I'm gonna assign an action item to her to follow up on this. In terms of owners for the new repository: does anyone have a suggestion or recommendation on how we pick people to be owners of the repository? The only thing that I've seen so far was when Aaron was going through and setting up OWNERS files for all the existing repositories: he set up OWNERS aliases so that sig leads would own all of the sig repos.
A
So,
like
you
know,
you
know,
Lucas
and
Luke
are
owners
of
the
the
current
cube
deploy
repository,
even
though
they're
not
actively
working
on
cube,
deploy
as
sort
of
an
escape
hatch.
You
know
to
help
manage
manage
that
if
there's
turnover,
we
need
to
add
other
people,
so
that
was
the
only
thing
that
I
knew
that
we
should
definitely
see
that
presumably
people
that
are
active
contributors,
we
should
also
be
feasting
on
their
Styles.
B: There used to be a ladder, I think, with fairly strict definitions. I think it's been very much relaxed, and now we can do whatever we want. It does make it awkward if someone, you know, sort of self-submits a PR; not having guidelines makes it harder, in my opinion. But we should probably make everyone that has gotten a PR merged a reviewer; that's a starting point, and then for the approvers, I don't know how you want to do that.
A: Okay, so the next thing was a migration plan from the current kube-deploy repository, for the pieces we want to pull out that are cloud- or environment-independent, to put them into the new repository. Following the discussion a week or two ago, Chris opened an issue to talk about how we should do this, and there's been a little bit of discussion on that issue.
A
Sort
of
coming
up
to
my
suggestion
that
you
know
we
can
keep
debating
this,
but
we
should
probably
just
move
forward
with
what
we
think
is
reasonable
now
and
then
refine
later
to
make
forward
progress
and
Chris
asks.
Should
we
just
open
a
PR
or
write
a
proposal
and
I
put
a
comment
in
the
dock
yesterday
and
I
was
reading
the
notes
for
this
meeting.
Basically
saying:
if
it's
simple,
would
you
just
write
a
PR?
C: Yeah, I think it'll be slightly more complicated. Part of it depends, I guess, on one of the later agenda items, the change I have out, and whether we think that's the right way to go forward, because until something like that happens you'll be pulling in cloud-specific code as well, or else it won't build. And then there's also some testing for the stuff that turns up a cluster, but that's gonna take some work.
A: Yes, I think the arguments for splitting it apart today were that people wanted to be able to build other controllers outside of the main code, and I think people that are doing that are likely to want to be able to pull in some of the other code in addition to just the API, so they can have reusable libraries for doing watches and, you know, have the machine controller and so forth.
A
I
think
that
my
argument
for
having
the
API
separate
was
more
that
client-side
tool
and
it
wants
to
depend
upon
the
API
and
not
implement
the
API.
They
don't
care
about.
Having
the
controller
implementations,
they
probably
maybe
want
client
implementations
like
they
might
want
to
client
go
above
once,
but
they
don't
want
the
machine
controller
write
code,
we've
entered
into
their
repo
and
I.
Think
that's
where
we
might
see
some
demand
for
splitting
them
up,
but
again
we're
not
we're
not
there
yet.
So
we
shouldn't
complicate
our
lives
until
we
need
to.
A: So Chris's next agenda item related to the repository migration is about issues, and whether we should migrate existing issues; she was wondering if she should just groom issues in the new issue tracker. I pinged Rodrigo about this yesterday, since he's also been doing a lot of issue backlogging and grooming, and his take, which I tend to agree with, is that any issues that are specific to the API we should put in the repo with the API, and issues that are specific to the code should stay with the code.
A: The main one, like I said, one of the main goals for doing this, is to make sure that it will work with autoscaling, because the way the code is written right now, the autoscaler doesn't really have any hooks to plug into to figure out: if I scale this machine set, how much capacity am I going to get by adding one node or adding five nodes? How much is going to be reserved by, you know, node allocatable?
F: I think when I joined there was a discussion happening about at which layer we should add the machine class, right. So in this situation, the consensus, I think, at that time was that it should be at all the layers; there should be flexibility to reference it at all the layers. I'm just verifying what the answer was. Yes.
A
That's
a
great
question,
so
I
went
back.
I
was
actually
not
at
that
meeting,
but
I
went
back
from
a
does.
Me
knows:
I
think
that
seemed
like
a
reasonable
answer
and
so
the
the
way
that
the
provider
can
figure
out
broken
out.
It's
broken
out
at
the
lowest
level
at
the
Machine
level,
so
the
Machine
level
you'll
be
able
to
either
reference
a
class
or
inline
provider.
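The reference-a-class-or-inline idea can be sketched like this (a minimal sketch; the dict shapes, field names, and class names here are invented for illustration, not the actual Cluster API types):

```python
# Hypothetical registry of MachineClass objects, keyed by name.
machine_classes = {
    "n1-standard-2": {"machineType": "n1-standard-2", "zone": "us-central1-a"},
}

def resolve_provider_config(machine_spec):
    """Return the effective provider config for one machine.

    An inline providerConfig wins if present; otherwise the referenced
    class is looked up in the registry.
    """
    if "providerConfig" in machine_spec:
        return machine_spec["providerConfig"]
    ref = machine_spec.get("classRef")
    if ref is None:
        raise ValueError("machine has neither providerConfig nor classRef")
    return machine_classes[ref]

# A machine that references a class:
by_class = {"classRef": "n1-standard-2"}
# A machine that inlines its provider config instead:
inline = {"providerConfig": {"machineType": "custom-4-8192", "zone": "europe-west1-b"}}

print(resolve_provider_config(by_class)["machineType"])  # n1-standard-2
print(resolve_provider_config(inline)["machineType"])    # custom-4-8192
```

Because resolution happens at the lowest level (the Machine), higher layers like MachineSet and MachineDeployment inherit the same flexibility through their machine templates.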
F: That's great. And from the autoscaling perspective, I'm not very sure, so correct me: we are actually currently trying to integrate the cluster autoscaler with the machine controller manager, which was previously known under a different name before we renamed it. What we have understood is that the cluster autoscaler exposes a certain interface, they call it the cloud provider interface, and there are certain methods that we need to implement for different providers, and I was checking out how AWS is doing it, so more or less...
F: What I could understand is that the cluster autoscaler currently just manipulates, in the case of AWS, let's say, the target size of the ASG; it directly scales the auto scaling groups, right. So in a way, what we are doing is just plugging in our machine deployment at that point, so the autoscaler will just play around with the replicas field of the machine deployment and modify it whenever it needs to. That's how it should work with the machine deployment, yeah.
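The "just play around with the replicas field" idea can be sketched as follows (an illustrative sketch with invented names, not the real autoscaler or MachineDeployment code):

```python
class MachineDeployment:
    """Stand-in for a MachineDeployment-like object with a replicas field."""
    def __init__(self, name, replicas):
        self.name = name
        self.replicas = replicas

def scale(md, delta):
    """What the autoscaler does: bump the desired replica count.

    The machine controller, not the autoscaler, is then responsible for
    converging the actual set of machines toward this number.
    """
    md.replicas = max(0, md.replicas + delta)
    return md.replicas

md = MachineDeployment("workers", replicas=3)
scale(md, +2)       # scale-up decision: pending pods need capacity
scale(md, -1)       # scale-down decision: a node is underutilized
print(md.replicas)  # 4
```

The key design point is that the autoscaler never touches an ASG or MIG directly here; it only edits declarative desired state.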
A: So let me go into that a little bit. We talked with Marcin, who's one of the tech leads for SIG Autoscaling, and he was telling us: they have this cloud provider interface, but it's not particularly flexible, because it means that basically any new code that wants to be built into the autoscaler has to be put in-tree, and one of the pushes within kubernetes is to make things more composable and to not have to put all of the code in the same location.
A
But
the
thing
that
they
they
do
with
with
Ming
Zhan,
GCP
and
aSG's
on
AWS
is
that
they
actually
call
out
to
the
final
provider
and
say
you
know
what
size
machine
is
this?
How
many
cores
does
that
have
how
much
memory
does
it
have
they
at
least
on
GCP?
They
read
from
the
metadata
about
what
labels
and
tanks
are
gonna,
be
on
the
machine
and
how
much
capacity
we've
reserved
through
node
allocatable.
A
They
can
depend
upon,
or
we
expose
this
data
in
a
standard
way
across
different
implementations,
and
that
means
that
you
don't
actually
have
to
go
change.
The
autoscaler
code,
if
you
want
to
add
support
for
digitalocean,
all
you
have
to
do
is
add
the
Machine
controller
for
digitalocean
set
up
some
machine
classes
that
expose.
You
know
that
the
values
of
the
autoscaler
needs
and
the
autoscaler
should
just
work
and
that
allows
autoscaler
to
basically
rebase
instead
of
having
cloud
provider
code.
A
They
should
eventually
be
able
to
delete
all
their
cloud
provider
code,
just
talk
to
the
the
machines,
API
and
work
anywhere
that
the
machines
API
works,
and
so
that's
we're
talking
about
them
on
how
they
can
get
to
that
eventualities.
So,
instead
of
us
plugging
into
the
cloud
provider
interface
that
they
have
right
now
and
saying
we're
gonna
add
a
cluster
API
hook
into
your
cloud
provider.
A: Yeah, so the main goal of the machine classes, from my point of view, is to write the prototype code, verify that it looks like it serves some of the use cases we've talked about in this meeting, and then go back to SIG Autoscaling and say: this is what we've got, we think it will work for you, can you verify that it's good enough? Right, and that's sort of the bar for it passing muster: we know we want it to drive autoscaling.
A
And
so
Marcin
has
been
been
pestering
me
for
the
last
couple
of
months.
Saying:
when
are
you
gonna
have
a
stable
API?
When
can
I
start
writing
some
code?
Like
you
know,
when's,
it's
gonna
stop
changing
right,
so
they
are
they're
very
excited
to
start
trying
to
rebase
their
code
against
the
machines.
A: We're definitely not gonna do a knife switch and drop all the old support immediately, but I think it gives them a clean path forward to saying: this is great, we no longer have to talk to any infrastructure, we don't have to worry about vendoring cloud libraries, and all we have to do is talk to the Kubernetes API, you know, top to bottom, and it will work in any Kubernetes environment. So I think that's pretty appealing to them, since they can get themselves out of the business of talking to underlying infrastructure.
A: So I think the tricky thing is how we express that in a generic enough way that the autoscaler can be driven off of it. If you say: I'm trying to schedule this pod, it requires an accelerator; how does the autoscaler know from the machine class that creating a machine of this type will give me an accelerator? And I think the way that I'm looking at doing the classes...
A
There
is,
let's
see
if
I
can
find
it,
there's
something
called
a
resource
list
in
the
core,
v1
api's,
and
so
you
can
basically
say
what
are
the
resources
that
I
need?
I
think
we
might
be
able
to
leverage
that
as
part
of
the
capacity
of
your
machine
class,
and
so
the
machine
class
will
basically
say
like
here.
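A ResourceList-style capacity on a machine class could look roughly like this (a sketch only; the class name, capacity values, and resource keys are invented, though the map-of-resource-name-to-quantity shape mirrors core/v1's ResourceList):

```python
# Hypothetical MachineClass advertising capacity as a resource map,
# including an extended resource for an accelerator.
machine_class = {
    "name": "gpu-node",
    "capacity": {"cpu": 8, "memory_gib": 30, "nvidia.com/gpu": 1},
}

def class_satisfies(pod_requests, mclass):
    """Would one machine of this class cover every requested resource?

    This is the question the autoscaler needs answered before deciding
    whether scaling this class up can help a pending pod.
    """
    cap = mclass["capacity"]
    return all(cap.get(res, 0) >= qty for res, qty in pod_requests.items())

gpu_pod = {"cpu": 2, "nvidia.com/gpu": 1}
plain_pod = {"cpu": 16}

print(class_satisfies(gpu_pod, machine_class))    # True
print(class_satisfies(plain_pod, machine_class))  # False: not enough cpu
```

Resources the class does not list at all default to zero, so an accelerator request simply fails to match classes that don't advertise one.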
C: I think for special resources like this, you're gonna have pods that want to use them, and so the scheduler has to be informed about how to match up that pod with the right machine. So we should really just take a cue from however the scheduler is doing that constraint matching and represent it the same way; whether it be a resource list, or a special one-off label or something like that, I think we should just match however the scheduler represents that constraint. Yeah.
A: That's a good point, because the cluster autoscaler literally vendors in the scheduler code and runs the scheduling algorithm. So what it does is it says: I'm gonna pretend to add a new node, I'm gonna try to model that node as best I can based on what I know about the platform, and then I'm gonna run the scheduler against it. So, as Chris was saying, if the scheduling algorithm doesn't take these things into account, then the autoscaler is also not gonna.
A: They might even take taints into account to make sure that they're not wasting those machines on workloads that aren't taking advantage of the GPUs, and so, based on the labels and taints, which are the primary workload-steering mechanisms, I think that is often enough to inform the scheduler and the autoscaler to do the right thing.
A: My understanding, or my hope, maybe, is that the resource list is sufficiently expressive that it allows new resources to be added in the future that will sort of automatically be able to be used in the machines API as they get added to the core of kubernetes, and that that is what would be driving the scheduling decisions and the cluster autoscaling decisions, and then you'd be able to represent that in the machines API. So that's definitely an assumption I need to validate.
I: Since we're talking about resources: was it discussed at some point to map, or somehow have some piece of code map, the resources to what the provider config would look like, automatically, maybe as part of this API to the provider itself? That way the user would just generically specify CPU, RAM, you know, whatever disk sizing and whatnot, and the providers would know what to provision, versus having to figure out manually how to map, you know, AWS instance types to sizes and things like that. Yeah.
A: That's something that we talked about pretty early on, at least internally; I'm not sure if it was something we talked about in this meeting publicly. It was basically being able to say: I want a machine with these resources, and then allowing the underlying cloud provider to sort of do a best-fit match. So, say you asked for three and a half CPUs and a gig of memory.
A
You
know
that
might
not
be
possible
to
do
on
all
platforms,
but
you
might
round
that
up
to
four
CPUs-
and
you
might
say
you
know
AWS
if
I
have
four
CPUs
only
supports
eight
gigs
in
memory
because
they
want
to
have
the
right
ratio.
So
if
you
ask
for
three
and
a
half
CPUs
and
Giga
memory,
I'll
get
before
CPUs
and
eight
gigs
of
memory,
that's
the
best
fit
for
your
request
and
I'll
provide
that
machine
for
you.
A: It's not something that's currently modeled in the machines API, and it was not sort of the primary goal for the machine classes, but it might be something we want to talk about: how we would model a set of requested resources, and then allow the underlying implementations to deliver sort of their best fit for those resources.
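The best-fit rounding described above can be sketched like this (the instance catalog is purely illustrative, not real AWS shapes, and the matching rule is just "smallest offered shape that covers the request"):

```python
# Hypothetical catalog of offered shapes, sorted smallest-first:
# (name, cpus, memory_gib)
INSTANCE_TYPES = [
    ("small", 2, 4),
    ("medium", 4, 8),
    ("large", 8, 16),
]

def best_fit(cpus, memory_gib):
    """Return the smallest offered shape covering the request, or None."""
    for name, c, m in INSTANCE_TYPES:
        if c >= cpus and m >= memory_gib:
            return name
    return None

# Asking for 3.5 CPUs and 1 GiB rounds up to the 4-CPU / 8-GiB shape:
print(best_fit(3.5, 1))   # medium
print(best_fit(16, 64))   # None: no offered shape is big enough
```

A real provider would also weigh price, family, and availability, but the shape of the problem, mapping a generic request onto a discrete menu, is the same.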
I: Yep. One thing, I guess: the way we've implemented this is that we still kept the API to, you know, accept the provider-specific config, but we also added an API for the providers to eventually be able to convert the generic input into something more specific to them. So it's actually keeping the basic API the same, but also kind of extending it, as an additional feature, for this kind of conversion.
A
Otherwise,
it's
not
going
to
be
the
right
size
for
your
cluster
anymore
right,
but
but
maybe
something
like
auto
scaling
fixes
that,
where
you
say
like
you
know,
I
have
this
template
I
want
a
bunch
of
small
machines,
sure
exactly
what
they
look
like,
because
the
autoscaler
will
kick
in
and
give
you
the
right
number
based
on
my
workload.
So
we
do
have
that
that
sort
of
thing
the.
A
That's
a
good
question
so
the
way
that
it
works
for
storage
classes
is
the
provider
concede
some
storage
classes
where
they
say
you
know
we're
gonna
set
the
default
storage
class
on
TCP,
which
uses
standard,
p
d--'s,
but
then
off.
Then
cluster
operators
can
go,
add
more
storage
classes
and
they
can
reference
those
and
they
can
even
change
the
default.
So
if
machine
classes,
what
I
was
was
sort
of
picturing
in
my
head
was
you
know
when
you
you
launch
your?
You
know,
Google
machine
controller.
A
It
might
automatically
create
some
number
of
machine
classes
that
it
thinks
are
sort
of
common
sizes
of
machines
that
people
would
want
to
use
and
registers
those
automatically.
But
end-users
can
then
always
create
their
own
machine
classes
and
rep
rep
in
a
map.
Those
two
arbitrary
other.
You
know
underlying
resources,
but
those
those
machine
classes
would
only
exist
in
clusters
where
they
have,
they
have
created
them,
so
their
workflow
for
spinning
up
a
new
cluster
might
be
create
a
new
cluster.
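The StorageClass-like seeding pattern might look like this (a sketch; the registry shape, class names, and the idea of a "default" pointer are all invented for illustration):

```python
def seed_default_classes(registry):
    """What a hypothetical GCP machine controller might register at startup."""
    registry["classes"] = {
        "standard-2": {"machineType": "n1-standard-2"},
        "standard-4": {"machineType": "n1-standard-4"},
    }
    registry["default"] = "standard-2"

registry = {}
seed_default_classes(registry)

# A cluster operator later adds their own class and repoints the default,
# just as operators can add StorageClasses and change the default one.
registry["classes"]["preemptible-8"] = {
    "machineType": "n1-standard-8",
    "preemptible": True,
}
registry["default"] = "preemptible-8"

print(registry["default"])             # preemptible-8
print(sorted(registry["classes"]))     # provider-seeded plus user-defined
```

The important property is that seeded classes are just ordinary objects: nothing distinguishes them from operator-created ones once they exist in the cluster.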
C: Sorry, I was zoning out. What's mine? Oh yes, the PR! Yes, so I've gotten some feedback that there have been some hesitations about modifying the generated code, but it does break the dependency, and I was wondering what the overall consensus on this is. There's some more work to be done, like building the new docker images and changing at least the GCP deployer to deploy properly with the split-apart controller, but I was just wondering: it modifies two generated files that are only specific to the controller.
C: It doesn't change anything about any of the generated API code or client code, which I think is the high-churn stuff for generated code; like, you add a field or you change a field or something like that, that will need to be regenerated a lot. So I was wondering what everyone's feedback was, if they got a chance to look at it, and whether we should move forward with that.
G: I had a look at it. So one thought: I mean, right now, look at the approach we have with the container runtime interface (not the container network interface), where you interact with the container runtime through an interface, and I was thinking maybe an approach like this...
G: This might be the best way to go. So you can have, you know, for example, in one pod the machine controller manager that encompasses, like, the deployment, the set, and the machine controllers, and in another container you can have only a very small piece of code that implements, like, this machine runtime interface or whatever, so you're breaking the dependencies.
C: Everyone has to implement their own main for a controller. I'm trying to navigate to my change right now, but if you look at my controller main, it's basically 67 lines of code that mainly reuses libraries; it mostly just makes sure to use the machine controller library with the Google interface plugged into it. So this interface has to be written for whatever provider you're on, whether it's AWS or Azure or anything like that.
C: So there's no getting around writing that, but the glue code to actually plug them together is not that much more. It also allows you to have flags per machine controller, so we don't have things colliding; like, we want to add specific config options to the GCE machine controller that I'm sure AWS or Azure would not care about. So this gives us an entry point to also do more customization for specific provider controllers as well.
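The glue-code shape being described, a generic controller library with a provider-written actuator plugged in, can be sketched like this (an illustrative sketch; the interface and class names are invented, not the actual cluster-api code, which is in Go):

```python
class GoogleActuator:
    """Provider-specific half: knows how to talk to one cloud."""
    def create(self, machine):
        return f"created {machine} on GCE"

class MachineController:
    """Generic half: the watch/reconcile machinery, reused by every provider."""
    def __init__(self, actuator):
        self.actuator = actuator

    def reconcile(self, machine):
        # A real controller would diff desired vs. actual state first;
        # here we just delegate the cloud-specific action to the actuator.
        return self.actuator.create(machine)

def main():
    # The entire provider-specific "main" is essentially this wiring,
    # which is also where provider-specific flags would be registered.
    controller = MachineController(GoogleActuator())
    return controller.reconcile("worker-0")

print(main())  # created worker-0 on GCE
```

Swapping clouds means swapping the actuator passed into the constructor; the controller library itself never imports provider code.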
C: You're right, most people just want to create a machine, and this change should not affect that in any way. Instead of deploying an API server, etcd, and controller-manager pod, they have an API server, etcd, controller manager, and machine controller pod; it just adds another pod that hopefully someone else writes for them and maintains, and the API interface, as far as the cluster API goes, should be exactly the same.
G: Okay, well, I'll try to summarize my thought process into, like, a comment, and then I'll just send it.
G: I mean, yeah; after writing several controllers, there is a lot of boilerplate that you have to write in order to get, like, a fully functional controller. You have to write, for example, leader election, signal handling, and all those kinds of things, and if those things are already implemented, like in the controller manager already, then you only have to care about simply creating the machines, without attending to any reconciliation hooks or creating informers and all those kinds of things. So it's easier.
C: I think leader election and stuff like that could be built into the controller library if you want. I don't think it's there now, but it's something that we should add to the generic controller library; I don't think that should be broken out into the glue code part of it. The only thing the controller library doesn't do as part of the controller right now...
A: And Chris, nobody explicitly said it, but we're not going to have any Google code in the new repo, and as it is today, if we moved the code without the Google hook into the new repo, as we mentioned earlier, that code would not compile at all, and so it would be really hard for someone else to build a controller, because they'd be vendoring code that doesn't compile.
C: The reason I had to modify the generated code was that I had to modify the machine controller implementation, the struct that it stamped out, and its initialization now takes another parameter that the generated code does not pass; it tries to call it with one fewer parameter, so the generated code would no longer build.
C: That relates more or less to the actuator. One of the biggest flags is just gonna be a config flag; it's gonna be a path to a config, and we're gonna back it by a config map. Right now we're thinking about having, like, supported bundles: if they request, say, kubernetes 1.10 on Ubuntu, this is the actual GCE image and this is the actual installation setup script for that, and with that we could support, like, a 1.9 setup method and a 1.10 setup method.
C: One thing that we might want to do is, like, diverge; like, we want to have a custom certificate signer and not have the machine controller have access to the cluster's private key. We want to maybe have an API endpoint to defer the signing to some other trusted source, and an interface to back that with Google's key store or something like that. But...
C: I think it would happen in the actuator. Anything we feel is common enough for everyone we would probably want to promote upstream, and there may be some things like that that we want to experiment with first on Google. Once we get the use cases down and see, okay, we made a mistake here but we can change it, and then get a decent use case, we'd probably bring it to the community and say: would this be useful in the generic library?
A: Yeah, the big gap I see with that is that CNI, CRI, and CSI are all going through very long, convoluted standardization processes, which I don't think we want to do right now. So I think we do it in-process now, and if we find we need to break that out later, that seems like a task we can take on later, which is what we did with the container runtime, right; like, initially the kubelet had built-in direct support for docker, and then they decided they wanted to break it apart.
C: And at least this in-process binding isn't forced upon you; you get to choose what's bound in. Does that answer the question? Yeah, great. So, there were concerns about modifying the generated code; one person maybe volunteered to update the apiserver-builder, and I'll respond that this may be something we want to go through if they wish to upstream the ability to customize this.
C: So it's not an issue, but from what I'm gathering this sounds like a thumbs up. Martin, do you want to put your feedback on there before we move forward with it? I still have a lot of things to do: I'm basically writing make files and changing the GCP deployer to have a different deployment that accounts for the extra pod, but other than that...
A: So, I don't know if anyone here can provide an update on machine deployments. There have been some initial design discussions around it, mostly from our side, and then obviously, you know, the SAP folks have the machine deployer in their machine controller manager as well, as sort of a base that we can build on top of. I don't know if there are any concrete updates that people can provide on getting that upstream.
F: So, sorry, I have not been able to check the architecture for the machine deployment, but one hint on that, which we recently found while integrating the autoscaler, on what we will require from the machine deployment: basically, the autoscaler expects that, among the machines which are under a certain machine deployment, we should be able to delete a specific machine and, at the same point in time, decrease the number of replicas of the machine deployment as well, right. So these two things should happen in parallel.
F
So
if
we
are
having
a
standard
approach
of
just
scaling
down
blindly,
then
internally
we
will
we
may
end
up.
Initially
we
end
up
doing
the
things
like
the
machines
with
running
pending
and
terminated.
Make
you
high
priority
to
the
terminated
machines,
two
scales
in
town
first,
so
that's
where
new
logic
would
require
what
we
need.
F: What we did: we have a new annotation for priority, so you basically assign priorities to the machine objects, and we expect either the autoscaler or certain other components to actually put a priority annotation on the machine object; the machine deployment then takes that priority number as an input and tries to delete those machine objects first, before the other machine objects. So that's just one hint that we found while integrating the autoscaler.
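The priority-annotation idea can be sketched as follows (the annotation key, default value, and machine shapes are invented for illustration, not the actual machine-controller-manager implementation):

```python
# Hypothetical annotation an autoscaler (or another component) sets on
# machines it wants removed first during scale-down.
PRIORITY_ANNOTATION = "machine-priority"

machines = [
    {"name": "m1", "annotations": {PRIORITY_ANNOTATION: "1"}},  # delete first
    {"name": "m2", "annotations": {}},                          # no opinion
    {"name": "m3", "annotations": {PRIORITY_ANNOTATION: "2"}},
]

def scale_down(machines, count):
    """Pick `count` machines to delete, lowest priority number first.

    Unannotated machines get a high default so they are only chosen
    after every explicitly-marked machine.
    """
    def priority(m):
        return int(m["annotations"].get(PRIORITY_ANNOTATION, "100"))
    return [m["name"] for m in sorted(machines, key=priority)[:count]]

print(scale_down(machines, 2))  # ['m1', 'm3']
```

This lets "delete this specific machine" and "decrease replicas by one" happen as a single coordinated operation instead of racing.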
A: That's a great point, and that's actually something that was mentioned to us as one of the requirements: I want to be able to shrink the set by a particular single machine, or a particular set of machines. And funnily enough, one of the GKE customers actually asked for that feature on deployments inside of kubernetes as well: they said, if I have a kubernetes deployment or replica set, I want to be able to shrink that by a particular one. They asked for it as a feature request.
A
I
want
to
be
able
to
delete
that
specific
replica
and
you
know
not
having
not
have
a
race
condition
where
I'm
trying
to
delete
it
and
shrink
it
and
having
and
fighting
with
controller
and
and
see
gaps
does
not
get
an
implement
of
that.
So
we
might
be,
might
be
threading
some
new
ground
there,
but
think
that
that
sort
of
feature
requests
has
also
been
asked
for
in
kubernetes
itself
managing
pauses.
So
it
seemed
unreasonable
to
just
build
that
directly
in.
A
So
I
think
I,
don't
I,
don't
think
anybody
is
actively
working
on
trying
to
put
a
PR
together
for
the
types
for
machine
to
Planet.
Yet
okay,
they
are,
they
should
let
us
know
now
so
we're
not
duplicating
effort,
but
I,
don't
I,
don't
know
if
anybody
has
assigned
to
do
that
quite
yet,
and
B
I,
don't
think
anybody's,
actually
I
think
it's
sort
of
in
everybody's
head,
as
as
for
the
next
step
after
machine
sets,
but
I
haven't
seen
anything
pop
up.
That
lets
me
think
something's
doing
for
yet
more.
A
For
the
most
part,
any
cool,
as
with
machine
sets,
will
need
to
be
careful
to
make
sure
that
all
of
the
fields
apply
in
the
same
way
which,
in
in
machines
in
this
case,
I
believe
that
they
did
and
then
like.
As
you
mentioned,
sort
of
making
sure
that
these
these
other
use
cases
that
aren't
yet
implemented
for
deployments
or
replica
sets
that
we
have
a
way
to
express
those
in
our
API,
which
is
hopefully
consistent
with
where
the
gaps
will
go
on
in
correlator.
F
It
could
just
recall
one
moon
point
there
so
recently,
then
we
also
implemented
the
green
functionality
so
before
deleting
a
machine,
so
you
could
actually.
This
could
actually
be
a
part
of
the
Machine
set
and
I'm,
not
sure
whether
it
is
already
there
in
the
existing
implementation
incapacity.
Okay,
so
we
can
actually
leverage
nice
feature
from
the
Cuban
is
itself
so
long
back.
F
I
think
there
was
a
conversation
that,
when
the
rolling
update
is
happening
from
the
Machine
deployment,
we
also
want
to
make
sure
that
the
application
is
turning
on
top
or
somehow
we
ensure
that
the
applications
have
actually
respecting
the
application
SLS.
So
the
PD
B's
are
being
respected
right
so
to
solve
that
problem,
rather
than
implementing
it
by
ourselves.
What
we
did
instead
before
deleting
a
machine,
we
initialized
the
parallel
train
functions
and
you
could
see
the
tube
Bundys
itself,
while
the
parallel
drain
is
opening
respects
the
PDP's
of
different
applications.
F: Before deleting any machine, so by default in the machine controller manager, before deleting a machine, we execute the same drain function that we have in kubectl. We basically copied the kubectl drain function and put it in before the delete logic, and what happens is that when I have a machine deployment with ten replicas and I want to make it five, basically five machines will be drained in parallel. Good.
F: Five machines will basically be going down in parallel fashion, right. So these five drains are happening in parallel, and because the kubernetes drain function is implemented in a way that the application PDBs will be respected by default, that's how we can ensure that the applications are also respected. So back then the conversation was that it could be very complicated to implement this ourselves, but reusing kubectl's drain was much, much easier. Yeah.
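The PDB-respecting behavior that drain provides can be sketched at its core like this (a heavily simplified sketch with invented names; real eviction goes through the API server's eviction subresource and tracks ready pods per PDB selector):

```python
def try_evict(pod, app_ready_counts, pdb_min_available):
    """Evict one pod unless doing so would violate its app's PDB.

    Returns False when the eviction is refused; a real drain loop would
    back off and retry until the budget allows it.
    """
    app = pod["app"]
    if app_ready_counts[app] - 1 < pdb_min_available.get(app, 0):
        return False  # blocked by the PodDisruptionBudget
    app_ready_counts[app] -= 1
    return True

ready = {"web": 3}
pdbs = {"web": 2}   # at least 2 "web" pods must stay available

print(try_evict({"app": "web"}, ready, pdbs))  # True  (3 -> 2 is allowed)
print(try_evict({"app": "web"}, ready, pdbs))  # False (2 -> 1 would violate)
```

Because every parallel drain goes through the same budget check, draining five machines at once cannot take an application below its configured minimum availability.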
C: Probably SIG Apps for workloads, since that's who owns rescheduling apps and workloads off of a node and respecting their SLAs.
A
Yeah
I
mean
maybe
it's
maybe
part
of
the
reason
it's
I'm
making
progress
is
because
the
ownership
isn't
clear,
because,
right
now,
it's
probably
owned
by
sig
CLI
rights
as
part
of
cue
cuddle,
but
when
it
gets
pushed
to
the
server,
it's
I
think
it's
a
little
bit
unclear
whether
its
API
machinery
or
workloads
or
scheduling,
but
it
has
to
do
with
scheduling,
also
and
sort
of
who
the
owner
of
that
becomes.
So
we
can.
J: One last question: so currently no one is putting effort into creating a first PR for the type definitions? Should we, should I, or should we take care of this: creating a first type definition and then creating a PR, and we can discuss on that what it should look like? Sure, that'd be great, if you want to take that as an action item. Yeah.
A: All right, thanks. I'll put that action item in the notes for you. It's a little bit past 11:00 o'clock, so we're gonna go ahead and call this meeting over. Thank you, everyone; I think we've had a very productive and useful meeting today, with a lot of big discussions. See everyone again next week. Take care.