From YouTube: Kubernetes SIG Cluster Lifecycle 20181031 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.klks7imc1cg0
Highlights:
- PR to add machine phases
- Support for static IPs
- Adding gitbook documentation
- Dependence on NodeRefs: managed vs. unmanaged clusters
- PR to add initial support for phases (alpha command)
- Adding a provider Id to machine status
A: Hello, and welcome to the Halloween, October 31st edition of the Cluster API subproject meeting, part of SIG Cluster Lifecycle. We've got a nice agenda going today, so let's go ahead and dive right in. First up is the machine phases PR; we'd like to look at issue 519 here, so go ahead and pull that up. A little background: I don't know if everybody's up to speed on what the issue is. Jason, it looks like you've looked at it; anyway, I guess the main question first is, does anybody have any concerns? Again, the API is in alpha; we can always merge things and undo them later if we figure out there's a different way we'd like to do them. But unless we have sort of a better proposal, I don't see a reason not to move this one forward for now.
C: Yeah, I think Alvaro from Loodse had vetted it pretty heavily, and I think there is some justification; David commented on it. I think it's definitely worth it to have something that represents the specific lifecycle of the machine, not just having the NodeRef, especially for UX purposes, so I would love to see it coming.
A: Okay. I think what I'll do is ping the PR again today, and we'll maybe hold it for 24 hours just to make sure nobody objects, and then we'll go ahead and merge it tomorrow, assuming nobody objects on the PR. Mechanically it's just API changes, so there isn't complicated code to review; it's more a question of whether we want those API changes. I think the answer is yes for now, and if there's something better we'll change it later.
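For readers following along, here is a minimal sketch of the kind of API change being discussed: a phase field on the machine status that expresses the machine's lifecycle independently of the NodeRef. The field name and phase values below are illustrative assumptions, not the contents of the actual PR.

```go
// Illustrative only: a lifecycle phase surfaced on MachineStatus, so UX and
// higher-level tooling do not have to infer machine state from NodeRef alone.
package v1alpha1

// MachinePhase is a coarse-grained lifecycle state for a Machine.
type MachinePhase string

const (
	MachinePhasePending      MachinePhase = "Pending"      // created, not yet reconciled
	MachinePhaseProvisioning MachinePhase = "Provisioning" // infrastructure being created
	MachinePhaseRunning      MachinePhase = "Running"      // instance exists and the node has joined
	MachinePhaseDeleting     MachinePhase = "Deleting"     // deletion requested
	MachinePhaseFailed       MachinePhase = "Failed"       // actuator reported an unrecoverable problem
)

// MachineStatus shows only the field relevant to this sketch.
type MachineStatus struct {
	// Phase reports where the machine is in its lifecycle.
	Phase MachinePhase `json:"phase,omitempty"`
}
```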
D: Thank you. So the larger thing I wanted to talk about is support for static IPs for the machines. This use case, at least from the vSphere perspective, is quite relevant, primarily because not all vSphere environments will have DHCP running; therefore there is a need for allocating static IPs to the machines. In the vSphere provider implementation, what we have done is we've already enhanced our providerSpec for the machines to actually include a provision for a static IP.
D: I posted this question on Slack, and one interesting use case that came along was that bare metal support later on might also need something like a static IP. Most of the cloud providers right now, like AWS, GKE, or OpenStack, all handle that IP allocation automatically; it doesn't matter whether there's a DHCP server or not, they handle it underneath. So I just want to get an opinion from folks: do you see static IPs as something that...
D: ...Cluster API as a whole should support, number one; and if you think it's a good idea, then, number two, any ideas around how we want to implement the static IPs, like IP pools? I was talking to one other person (sorry if I mispronounce your name), and one suggestion was that maybe we can implement some sort of mutating admission webhook for the Machine objects, and implement some sort of controller there that could do this one unique job of tying...
D: ...the IPs from the IP pool and allocating them to the Machine object itself, right at the time the machines are created. Then, from the provider actuator code, the machine actuator code, they just get the Machine object already filled in with the IP, so they don't have to do any custom logic; they just operate as they otherwise would. That was essentially the very high-level proposal and the questions; I'm open to whatever thoughts others have on this.
E: In our current use cases, the static IPs are part of the provider spec, and since we're dealing with bare metal, we actually don't have an immediate use case for the MachineSet controller. But we have looked at extending our use case to arbitrary VMs, in which case it might be possible to use the MachineSet controller, I think. So the first part is: I think there is a use case for allowing the MachineSet controller to work with machines with static IPs. And then there's the question of implementation.
E: The mutating webhook is interesting. Another idea that we had thought about is to have, in the Cluster object, as part of your providerSpec, the IP address information or IP address ranges. Right now the MachineSet controller creates machines but they don't have IP addresses, and that's fine as long as your machine controller understands how to assign an IP address from the IPs which are stored in the cluster providerSpec.
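A minimal sketch of the idea just described, using hypothetical provider types: the cluster-level providerSpec carries the address ranges, and the machine-level spec carries the single address assigned from them. None of these type or field names come from an existing provider.

```go
// Hypothetical provider types illustrating "IP ranges on the Cluster,
// assignment on the Machine"; not an existing API.
package v1alpha1

// ClusterProviderSpec carries the pool machines may draw from when DHCP is
// not available.
type ClusterProviderSpec struct {
	// StaticIPPool lists CIDR ranges reserved for machines in this cluster,
	// e.g. "10.0.10.0/28". Empty means the platform allocates addresses.
	StaticIPPool []string `json:"staticIPPool,omitempty"`
	// Gateway and Nameservers complete the static network configuration.
	Gateway     string   `json:"gateway,omitempty"`
	Nameservers []string `json:"nameservers,omitempty"`
}

// MachineProviderSpec is where an allocated address would land. The machine
// controller (or an admission webhook) fills StaticIP before the VM is
// created, so actuator code can treat it like any other pre-set field.
type MachineProviderSpec struct {
	StaticIP string `json:"staticIP,omitempty"`
}
```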
D: That's actually, in fact, exactly the approach: we would enhance the Cluster object, or the cluster definition itself, to incorporate that IP pool information and then utilize it. That's roughly the way I was thinking about it internally, and I was trying to do a POC around that for the vSphere provider, to basically enhance the Cluster object with some sort of IP pool information and then do pretty much what you just said. So yeah, the only thing, though, is that now...
D: ...what happens is that the machine actuator has to do the job of pulling out an IP and allocating it at the same time, and the only complication is that if someone is creating a ton of machines in parallel, there's a little bit of a synchronization issue: you have to make sure we don't run into allocating the same IP to two machines, given that multiple of them are being created at once. So that's where the initial suggestion came from:
D: maybe if we can implement the logic in the mutating admission webhook, then that's probably a slightly better place to solve that problem. It doesn't really solve the uniqueness problem by itself, and you still have to do the synchronization part, but it's probably a slightly better location to solve it. But yeah.
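To make the synchronization concern concrete, here is a small self-contained sketch of an allocator that hands out each address at most once, even when machines are reconciled in parallel. It is illustrative only; a real webhook or controller would also persist allocations (for example in a CRD status) so they survive restarts, and all names here are made up.

```go
// Minimal in-memory allocator guarding against the "same IP handed to two
// machines" race mentioned in the discussion.
package ipam

import (
	"fmt"
	"sync"
)

// Allocator hands out addresses from a fixed pool, each at most once.
type Allocator struct {
	mu    sync.Mutex
	free  []string          // addresses not yet handed out
	inUse map[string]string // address -> machine name
}

func NewAllocator(pool []string) *Allocator {
	return &Allocator{free: append([]string(nil), pool...), inUse: map[string]string{}}
}

// Allocate reserves one address for the named machine. The mutex makes
// concurrent calls safe; re-reconciling the same machine returns its
// existing address instead of consuming a new one.
func (a *Allocator) Allocate(machine string) (string, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if ip, ok := a.findExisting(machine); ok {
		return ip, nil
	}
	if len(a.free) == 0 {
		return "", fmt.Errorf("ip pool exhausted")
	}
	ip := a.free[0]
	a.free = a.free[1:]
	a.inUse[ip] = machine
	return ip, nil
}

// Release returns the machine's address to the pool when it is deleted.
func (a *Allocator) Release(machine string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for ip, m := range a.inUse {
		if m == machine {
			delete(a.inUse, ip)
			a.free = append(a.free, ip)
			return
		}
	}
}

func (a *Allocator) findExisting(machine string) (string, bool) {
	for ip, m := range a.inUse {
		if m == machine {
			return ip, true
		}
	}
	return "", false
}
```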
A: So, to throw out a couple of things. One is, you mentioned at the beginning that this isn't really needed on public clouds because they do IP address allocation automatically, and while that's true by default, I know at least on GCE that when you create machines you can create them with specific IPs; I think you have to provision...
A: ...the IPs ahead of time, but you could, if you really wanted to, provision a bunch of IPs and then, when you create machines, put those things together and assign them. So I think you could shoehorn this into at least some cloud environments if you really wanted to control which IPs go with which machines. The other thing I'll say is that it sounds like we have at least two providers that would sort of be implementing the same thing here in parallel in their provider...
A: ...so is it something that we'd want to standardize? Even if it doesn't apply to all environments, these can be optional fields; we can just leave them blank and let the clouds do their default thing. But especially if we can show there are cases where even in the clouds you might want to do this, then it becomes something general that we could apply across all environments, optionally. It might be required for some bare metal environments, and even, it sounds like, for the vSphere environments you're talking about.
D: That's absolutely correct. I can actually make a case for even the OpenStack provider, because in OpenStack you can allocate a Neutron port ahead of time, and that's pretty much the same thing as what you mentioned earlier. So yeah, that use case is probably applicable to more than just vSphere, though I think it's not as widely used.
C: One of the ways that we've looked at doing this, because having it directly in the machine controller might be a little bit difficult and doesn't necessarily make a lot of sense (it tends to be fairly stateless), was implementing a separate controller specifically that dealt with IPAM, with domain-specific CRDs that could represent things like IP pools, private pools, public pools, etc. So that's something that we are looking at implementing as well; maybe we should pool together on something like that.
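A sketch of what such a domain-specific resource could look like: an IPPool CRD that a dedicated IPAM controller reconciles, with allocations tracked in status so they survive restarts. The resource name and fields are hypothetical, not an API the meeting agreed on.

```go
// Hypothetical IPPool CRD for a standalone IPAM controller.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// IPPool describes a block of addresses the IPAM controller may allocate from.
type IPPool struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   IPPoolSpec   `json:"spec,omitempty"`
	Status IPPoolStatus `json:"status,omitempty"`
}

type IPPoolSpec struct {
	// CIDR is the range this pool allocates from, e.g. "192.168.10.0/24".
	CIDR string `json:"cidr"`
	// Public distinguishes public from private pools, per the discussion.
	Public bool `json:"public,omitempty"`
}

type IPPoolStatus struct {
	// Allocations records which address was handed to which Machine, so
	// allocations survive controller restarts.
	Allocations map[string]string `json:"allocations,omitempty"`
}
```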
D: Okay, so it seems like there is a common interest in this particular area. So, if it's okay, what I can try and do is start a Google Doc and share it with everyone. I can at least spec out the things that I've thought about and what we've just heard from a few other folks, and maybe we can collaborate on that one document and proposal, refine the ideas, and see which direction to go for the implementation.
A: And one more thing: I don't know if you're reading the chat, but it's being pointed out there that this is essentially also useful on GCP, where we use static IPs for masters as well. So even on clouds there are cases where you want to bind specific IPs to specific machines, so definitely we should cover that too.
E: I agree. I filed an umbrella ticket this morning with some of the things that I think are necessary, and then some of the things that I'd really like to see, hopefully before GA, but that aren't contingent on it. There's certain guidance where I think we don't have enough specificity, things like how you determine whether a node is provisioned or not, and these are questions that have been ongoing for months, so they're not going to be resolved soon.
E: It's also possible that we could use GitHub Pages, and to me the decision point there is, (a) do we value the Firebase analytics and the advantages of using Firebase, or do we think GitHub Pages, which is free, is sufficient; and then (b) to what extent is it like a GitBook. One of the things I did for this is I aimed to make the GitBook reference the code: I added docs and doc markers to our API types.
E: So do we want to come to a decision on how this is served before we merge? I feel like it's less useful if it's not served. On the other hand, it's primarily a developer tool, and there are instructions for how a developer can build it locally, so we could decide to merge it without determining how we serve it.
E: There's a Makefile target which would change branches, build the changes, commit the changes, push them, and then change back. That's sort of fragile, because if anything fails in that target you end up with a messed-up repo and have to clean it up. That said, you have a similar issue when you use Firebase, because it also generates a bunch of files which may or may not be correct, and you then have to untangle what's been done. So I'm not sure which is easier, but neither of them is great.
F: I know Gardener does something like that, but I think it's way too nascent of a project; we haven't even cut alpha 1 in full respects, right? So I think once we actually do that and get much broader adoption in the ecosystem, we will see a hundred flowers bloom, and then you'll probably see the common patterns emerge, I think.
A: I think David's more asking, for the people on the call that are trying to implement this, are they trying to implement it inside of a cluster, or in a separate cluster managing a cluster? So I agree that once people are actually out in the wild we'll see more clearly how they're using it, but those people will likely be using the tools that are built by the folks that are sort of here now, building up the patterns that people end up using.
E: On the machines side, what I mean is that the machine controller copies the node conditions, and we can only do that if there's a NodeRef; that's it, right? And for the SSH provider that we have, there are a bunch of cases where we have this problem, and what we end up doing there is something we would like to fix.
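As a concrete illustration of the dependency being described (the function and variable names below are assumptions, not code from the project): copying node conditions onto a machine only works once the machine's NodeRef points at a Node the controller can actually read.

```go
// Sketch: fetch the conditions of the node a machine's NodeRef points at.
// Without a NodeRef there is nothing to copy, which is the gap described for
// setups like the SSH provider.
package controller

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeConditions(ctx context.Context, c kubernetes.Interface, nodeRef *corev1.ObjectReference) ([]corev1.NodeCondition, error) {
	if nodeRef == nil {
		// No node has been linked to this machine yet.
		return nil, fmt.Errorf("machine has no nodeRef; cannot copy node conditions")
	}
	node, err := c.CoreV1().Nodes().Get(ctx, nodeRef.Name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	return node.Status.Conditions, nil
}
```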
B: Okay, so just one comment. For the first part, how common the pattern is: yes, in Gardener we pretty much work that way, so we basically have a manager cluster, if I get the terminology right. We basically have one cluster which controls not just one but many clusters, so the control planes of people's worker clusters are hosted on this one big cluster, and on that one big cluster you would see the whole Cluster API stack, the Machine API stack, loaded. And for the second part, the NodeRef part:
B: what we tried to do in the design is that the MachineSet only needs to worry about the Machine objects and doesn't really go to the Node object; it has the NodeRef in between, although at the moment it doesn't have a direct link. We did keep this in mind, that we wanted to be able to run the machine controller in a completely different environment, and that's why it would be useful to manage machines that might not have direct access to the remote cluster in general.
B: One of the biggest reasons I can give is that many times you want to run your worker machines in an isolated network, and when we do that, if we want to fetch the node status and things like that, we have to go through a bastion machine in between, and that becomes pretty messy. So we mostly tried to figure out a way using cloud-init scripts, so you basically have, in the cloud-init, what you want to run on the machine:
B: it knows how to bootstrap the kubelet, how to configure it, and whatever token it needs. So from the machine controller side itself, you can always feed this cloud-init into the machine, basically templating certain parameters here and there. That's the overall idea: the node then self-registers into the cluster for you.
E: So the machine controller basically has access to the target cluster, right?
B: The machine controller, and mind you, we call that one the shoot cluster, or you can call it the target cluster: the machine controller has access to the kubeconfig, or however you want to think of the target cluster. You can basically fetch the node objects and then copy the corresponding node conditions into the machine conditions, and so on.
B: We have a slightly different setup: in our case we don't call it an actuator, we call it a driver, and the driver only has methods like create VM, delete VM, and so on. The rest of the entire code base is considered, or assumed to be, shared code, so it's completely shared, and only the drivers have the provider-specific information, the cloud-init and so on. So to answer the question of who does the node-condition-copying part: that is being done by the shared machine controller.
B: And I just put a link in a comment in the chat, and I also pointed to something there, which is about how pluggable you want this to be. Because we had this kind of design, what we are now experimenting with is taking our internal driver interface, which has only create and delete methods, and making it a gRPC interface, almost the way the CSI plugin interfaces work. We do have a prototype for that, but I have certain other designs in mind.
C: And feedback to you, David: I know the Loodse folks also use the manager cluster pattern, and we are planning on doing that as well, I believe, and I know it's a relatively common pattern. Loodse has been using something like that for almost three years now, and the way that we handled it then, and I don't know if this was ever actually deployed before it became Cluster API, was that we would use the kubeconfig, very similar to what Gardener does.
B: So, just to add on to that: if this is a common pattern, I'll be more than happy to create a proposal on how the manager clusters could look overall, in general, what the pros and cons would be, and how it would be different from the way that we are doing things now. I can at least create a proposal if we find some agreement on that.
G: So this is just continuing on from the discussion we had last week. I updated the initial PR, which just adds an alpha subcommand and, underneath that, a phases subcommand, and right now it just implements create bootstrap cluster, just as a starting point for phases. Based on the feedback last week, I added the alpha subcommand to make sure that users realize there isn't any guarantee around compatibility when using it.
G: Yeah, when I started prototyping it out I initially broke out more of the phases, and it's not until you get to the pivot phase that it starts getting a little complicated, because we currently pivot in two different ways depending on whether we're pivoting to the target cluster or from the target cluster. I think as part of the phases support it'll make sense to unify those approaches, because they're mostly identical, except that moving from the cluster is a little bit more comprehensive.
A: This is in a similar category to Hardik's earlier PR, where people should take a look over the next 24 hours. I think you had an LGTM, but it got removed when you pushed the change; I think the change was pretty small, so we'll maybe try to merge this in about a day as well and let people have one last look. I think that's item three; let's see.
D: At any point in time later, from the lifecycle management perspective, if you ever want to do some sort of pivot, then having a very well-defined, well-tested way of doing the pivot, using, let's say, clusterctl, where you can point it at a kubeconfig and choose whether you want to clean up the resources on the other side, would really come in handy, I think. I don't know.
B: We do support multiple providers for the same, what we call, seed cluster: a single seed cluster can create clusters on two different, or many different, providers. So we do have that kind of architecture, but generally we avoid that scenario for latency reasons; we try to make sure that the control plane and the actual workers are in the same region and so on, because in any case you want, first of all, to prevent long latencies. So across two different providers we do support it, but we don't do it that often.
E: Just to clarify, I don't mean heterogeneous clusters that span clouds or anything like that; I mean I wanted to run the GCP provider and the AWS provider, I guess. So, okay, I see, my question doesn't make sense right now, because those are providers. Okay, it would only make sense in the case where you have managed clusters in manager clusters, right.
B: Okay, so what I meant was that for manager clusters it's preferable to have the manager cluster in the same cloud provider, and even better in the same region, because your manager cluster holds the control plane of your worker cluster and it's better to keep them as near as possible rather than far apart, and we try to follow that as much as possible. But sorry, yes, we do support having one manager cluster which can create clusters on other cloud providers; we just mostly avoid it.
G: I was going to say, with the current design right now, what you would basically have, if you had two different controllers for two different providers, is one that would consistently error when it tries to decode the provider config for the other one. So it would work, but it would be ugly, because you would constantly see errors for the objects that belong to, or are managed by, a different provider right now.
D: So maybe it would be helpful if we, for example, added some sort of type field, or some sort of annotation or label, any sort of identifier essentially, to the Cluster objects and the relevant objects, that would say: this Cluster object, for example, belongs to, let's say, the AWS provider, or the GCE one, or the vSphere provider. That way, pretty much the first thing the controllers check is: is this the type that I'm interested in?
D: If not, then I don't even process it, and I only handle it if it's a type that I want to serve. So it's a small enhancement, but I think it will probably help support the case where you have a centrally managed cluster and you want to run multiple providers, the kind of thing the user is after, because I think it's a valid use case. Incidentally, I've been thinking about the same use case in the last couple of days.
D: I agree. I mean, I think we should probably make it just a field, a provider type field in the Cluster object itself and in any related object, to say what the type of the provider is, and that could be literally just an enum kind of value which could map to all the possible known providers.
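A minimal sketch of the kind of identifier being proposed, with hypothetical values; the field name, the constants, and the empty-means-default behaviour are assumptions for illustration, not a decision from the meeting.

```go
// Hypothetical provider identifier a controller could check before trying to
// decode the providerSpec at all.
package v1alpha1

// ProviderType names the provider responsible for an object.
type ProviderType string

const (
	ProviderAWS       ProviderType = "aws"
	ProviderGCE       ProviderType = "gce"
	ProviderOpenStack ProviderType = "openstack"
	ProviderVSphere   ProviderType = "vsphere"
)

// ownedByMe is the first check a controller would run. Treating an empty
// value as "anyone may try" keeps the single-controller case working without
// ever setting the field.
func ownedByMe(objectProvider, myProvider ProviderType) bool {
	return objectProvider == "" || objectProvider == myProvider
}
```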
A: Are you envisioning that string being mandatory? Or, if it's blank and there's only one controller, does that controller just sort of take it? Is there a way to make, like, a default controller, or let the controller know that it should take ones that aren't explicitly tagged? I think the common case is going to be running a single controller per cluster, not multiple, and it would be great not to have to set this field all the time.
A: Yeah, so I think what's interesting about that is that right now you'd have two providers, they'd both see a machine show up, they can both try to parse the provider config, and one of them would probably fail and not handle that machine, and the other would succeed and say: oh, I understand this provider config, I should be managing that machine. So we sort of have that implicitly already, and the string would basically just be a shortcut to say: don't even try to parse the provider config. But you can sort of already get that signal implicitly.
D: In addition to that, maybe, when a controller is able to parse a particular, let's say, machine definition, and that type field is missing, one thing the actuator could do is go back and retroactively mark that cluster, or all the objects, with the type that it has now determined. That way the failure will happen only at the very beginning, and then the moment one of the providers understands that spec...
A: Yeah, I guess I'm wondering how far we can get with the provider configs, because these things are already structured data types, and the controller should understand how to handle specific types of those, right? So you could imagine having two machine controllers, one understanding one flavor of provider config and the other understanding another, right? If one sees the wrong kind, it doesn't parse it, or it says: I don't understand that, I'm just going to skip handling that machine.
A: Now, if you create a machine without a provider config, the machine controller will error out, right? It'll call the actuator, and the actuator won't be able to create a machine. It might be useful to have the machine controller itself just say that it doesn't know what to do with it. I mean, you could also imagine having an actuator that doesn't need a provider config and just has a whole bunch of default settings passed on the command line, and it just does its thing, right?
D: One other additional question that I have: I know in the past you have talked about variants within a provider. I just want to bring that up here as well, in this context: if and when we start supporting different variants within a provider, how do we want to handle that? Because, technically, I'm still not very sure how multiple variants would work.
D: Is it, like, a single controller that is running that is capable of serving all the variants on the fly? If that's the case, then for the purposes of this provider identification it's probably okay. But if you were to explicitly run different controllers for the same provider to serve different variants, then for identification purposes...
D: ...you would probably want to identify both the base provider plus the variant that it's serving. I mean, again, I don't know: have we made some more concrete decisions around how we plan to support variants? It goes in a slightly different direction of questioning, but it's something I don't know the answer to; if anybody does, please.
G: So my initial thought with the AWS provider was to basically try to decode into the separate provider configs for each variant, and if I could decode one of them, then that would determine which kind of variant to proceed with from there. But having fields that I can key off of would be a much cleaner approach there as well.
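A small sketch of the decode-based dispatch just described, under the assumption of two made-up variant config types; strict decoding is what makes the attempt fail for the wrong variant instead of half-parsing it.

```go
// Illustrative dispatch between two hypothetical provider-config variants by
// attempting a strict decode of each.
package provider

import (
	"bytes"
	"encoding/json"
)

type variantAConfig struct {
	InstanceType string `json:"instanceType"`
}

type variantBConfig struct {
	LaunchTemplate string `json:"launchTemplate"`
}

// detectVariant reports which variant a raw providerConfig belongs to, or ""
// if neither parses, in which case the controller would skip the object.
func detectVariant(raw []byte) string {
	if strictDecode(raw, &variantAConfig{}) == nil {
		return "variantA"
	}
	if strictDecode(raw, &variantBConfig{}) == nil {
		return "variantB"
	}
	return ""
}

func strictDecode(raw []byte, into interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	return dec.Decode(into)
}
```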
A: We have five minutes left and there's one more thing on the agenda, so I'm going to have to table the rest of this discussion to make sure we get to the last thing, which is that there's a PR about adding a provider ID, and it looks like it's related to the cluster autoscaler. I was hoping Vikas is on the line, because I wanted to hear some more background about exactly what this would be used for and how, because I'm very curious and I didn't quite get enough out of your poll question.
H: Hello, everyone. We have been working on a POC with the cluster autoscaler, with Cluster API as an out-of-tree provider in the autoscaler. There is logic in the autoscaler where it gets the provider's view of the machines, gets the nodes from the cluster, and then compares them to find out which machines have not registered within a period of time, and then the autoscaler triggers the deletion of those machines.
B: A recommendation, or a second, from me, because I have in the past worked on the autoscaler integration with the Machine API. First of all, the vanilla approach would work anyway: if you're not making the core logic of the autoscaler aware of the Machine API stack, it can just look at the node objects, and that would still work. But I will very much support adding the provider ID to the machine object, mainly for the reason that, without it, what you can look at is pretty limiting.
B: In my opinion, you see, for instance, it tells you which region and zone, or the instance ID; on Azure it will tell you which subscription you are in. It gives you complete detail: it uniquely identifies a VM on your cloud provider. And if you're on OpenStack, it will tell you the same kind of details for OpenStack. So we had added it even without thinking about the stronger use cases.
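For context, the provider ID being discussed is the same opaque string a cloud provider sets in a Node's spec.providerID, so a machine carrying it can be matched to its node or to the underlying instance. The exact format is provider-specific; the values below are typical shapes shown purely for illustration, not real resources.

```go
// Example provider ID shapes (illustrative values only).
package example

const (
	// AWS-style IDs carry the availability zone and instance ID.
	exampleAWSProviderID = "aws:///us-east-1a/i-0123456789abcdef0"
	// GCE-style IDs carry the project, zone, and instance name.
	exampleGCEProviderID = "gce://my-project/us-central1-a/my-instance"
)
```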
C: Maybe I'm missing something, but isn't it the responsibility of the provider to do all the actuation with the cloud provider itself for creating and destroying nodes? Why does a third party need some kind of unique identifier for the machine beyond the Machine CRD itself? I mean, the Machine distinctly represents a machine.
H: Yeah, because the machine has to link up with a Kubernetes node object; a machine has to ultimately become the equivalent of a node. So if a machine is not able to become the equivalent of a node after a certain amount of time, that means there is some problem with that machine, and a higher-level entity like the cluster autoscaler needs to take some corrective action at that time, for example by default after 15 minutes.
H: If the provider is saying that the machine is there, but that machine has not appeared in the Kubernetes nodes, that means there is some problem on the machine, so then it takes the action to delete that machine. That's why this link is required: to have the provider's view of the machines, to have the Kubernetes view of the nodes, and then to compare them.
G: Right, but this is a problem that we have generically right now, because if you're using machine deployments or machine sets, you would want an unknown node that never actually came online, or a node that becomes unhealthy, to eventually be removed and brought back online as well. So it seems like a generic problem that we have to solve for the MachineSets.
C: Yes, but we already have that with the NodeRef. We know that if the NodeRef is set and the node says it is ready, the node has joined the cluster and is ready to accept workloads. If the NodeRef is set and the node is not ready, we know that there's a problem; we can identify that through the node and discover why it's not ready.
H: The problem here is that we don't want to break the existing in-tree providers in the autoscaler. If we want to use NodeRef to determine whether a machine has become a Kubernetes node or not, that same logic is not applied by the other in-tree providers in the autoscaler, like the AWS provider or the GCE provider.
Also-
and
there
is
an
interface
which
has
to
be
implemented
by
clustering,
peg
provider
also,
so
if
we
we
have,
if
we
don't
want
to
break
the
existing
providers,
which
are
there
in
we'll
have
to,
we
have
will
have
to
implement
in
the
same
way.
Otherwise,
if
we,
if
we
try
to
leverage
note
ref,
then
we
will
end
up
breaking
the
existing
providers.
I
think
that's
what
we
do.
We
don't
want
like.
We
want
to
have
cluster
API,
also
there
and
entry
providers
also
working
at
the
same
em
and
then
eventually
remove
the
entry
providers.
A: Thanks for joining; sorry it took so long to get to you, and we're a few minutes over. Let's continue the discussion on the PR, and if we haven't reached resolution, maybe we can continue next Wednesday as well. I think this is a very interesting topic: I understand your point about the provider ID being super useful, but Jason also has a really good point that if you expose that sort of information, then higher-level controllers can and will start to depend upon it, and is that really something we want?