From YouTube: Kubernetes SIG Cluster Lifecycle 20180912 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.9tb0lzmd3t0e
Highlights:
- New Zoom Link
- Update on CRD migration for cluster-api repo
- Update on CRD migration for cluster-api-provider-gcp repo
- MachineClass
- Reconciling Machines with Nodes
- API guarantees when running external controllers
- Machine Phases / States
A: Hello, and welcome to the Wednesday, September 12th edition of the Cluster API breakout meeting, part of SIG Cluster Lifecycle. Hopefully the people that are here know that. At the top of the agenda I put that we have a new Zoom URL, as part of adhering to the community standards: Paris has been pestering the SIG chairs of the various SIGs to follow the new procedures for how we use Zoom, which give us improved moderation capabilities. What this means is that each meeting is going to have a different URL, and that we should not be sharing those URLs on public websites or social media — they should be kept within the community. So the URL for the meeting you're trying to join should always be correct in the calendar invites and at the top of the meeting notes.
A: So if you're watching this video on YouTube and you're trying to join a meeting in the future, the best place to look is to find the meeting notes for the meeting you want to go to, go to the top of that doc, and join from there. All of those documents should be kept up to date, because it's possible that we're going to have to cycle through different URLs over time going forward.
A: The URL I created for this meeting, I believe, lasts for something like a year and a half — Zoom wouldn't let me schedule it out farther than that. So I think it's got a clock on it and we're going to have to rotate, and we might rotate sooner if the situation warrants it. So, folks, hopefully you were able to find the right place — thank you for showing up today. The first thing I put on the agenda is to talk about the migration to CRDs.
A: We talked about this last week, but Phil wasn't able to make it, partly because I forgot to tell him when the meeting was. Phil was nice enough to join us today, so he can give an overview of where we're at on the CRD migration, and we'll open it up to questions — I know people had some questions last week that I wasn't able to answer. All right, great.
B: So yeah, I've been working with Robbie and Sunil on migrating Cluster API to Kubebuilder, from API server builder. There are a lot of benefits to this that I can go into afterwards if you want, but the status update is that we got the code moved over. I did a code review, and it looks correct from a review perspective.
B
However,
we
know
there's
probably
bits
and
pieces
that
are
missing.
We
haven't
even
looked
at
stuff
like
migrating
tests
over
or
make
files
or
various
other
pieces,
updating,
Docs
or
the
sort
of
stuff
we've
really
just
focused
on
the
controller
code
and
the
types
code,
the
type
code
was
pretty
straightforward.
Not
many
changes
were
required.
B: The biggest change there is that the validation and the defaulting are disabled right now. The code is kept in, but we need to set up webhooks to enable all that stuff. The controller code changes are more invasive. It's mostly deletions — mostly collapsing a bunch of either scaffolded code or generated code into a single library call. For instance, currently there's a bunch of code around setting up queues and setting up informers and these sorts of things, and all of that is now just hidden behind an abstraction layer.
B: Things like traversing and finding parents, and then re-enqueueing an object on changes — that sort of stuff is also hidden behind an abstraction layer; you don't need to do any of that manually anymore. The other big change is the client: it shifted from a generated client to a client that is dynamic based on the type passed in. That is automatically done for you. We do need to take a close look at the code to make sure that, in places where we shifted from reading live by default to reading from cache by default, that is something we actually want to do. If we don't want to read from the cache, the client can also read live — you just need to pick a different client.
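The cached-versus-live split being described can be sketched in isolation. This is a toy illustration of the pattern only — the type and method names below are ours, not the actual controller-runtime API:

```go
package main

import "fmt"

// object is a stand-in for a Kubernetes resource.
type object struct{ Name, Spec string }

// client answers Gets from a local cache by default (kept fresh by
// informers in the real system), with an explicit escape hatch to read
// "live" from the source of truth.
type client struct {
	cache map[string]object                 // may lag behind the server
	live  func(name string) (object, bool) // direct read from the server
}

// Get serves from the cache; results can be slightly stale.
func (c *client) Get(name string) (object, bool) {
	o, ok := c.cache[name]
	return o, ok
}

// GetLive bypasses the cache — analogous to picking the non-cached client.
func (c *client) GetLive(name string) (object, bool) {
	return c.live(name)
}

func main() {
	server := map[string]object{"m1": {Name: "m1", Spec: "v2"}}
	c := &client{
		cache: map[string]object{"m1": {Name: "m1", Spec: "v1"}}, // stale copy
		live:  func(n string) (object, bool) { o, ok := server[n]; return o, ok },
	}
	cached, _ := c.Get("m1")
	fresh, _ := c.GetLive("m1")
	fmt.Println(cached.Spec, fresh.Spec)
}
```

The review question raised above is exactly this: for each call site, is the possibly-stale `Get` acceptable, or does correctness require the live read?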
A: Before we dive into specific questions: you mentioned there are lots of benefits, and I assume you've covered at least some of those, in terms of cleaning up the client code and so forth. Maybe it would be useful to people to give a couple of the other, higher-level benefits.
The reason from our side that we were looking at doing this was that we were using the API server builder code, and I think it was generating for compatibility with Kubernetes 1.9, and people were asking: well, when can we get the newer versions of Kubernetes? So from our point of view, one of the big benefits is that it allows us to stay up to date much more easily. But I'm assuming, from your perspective, having worked on both of the libraries we've been looking at, there are some other big things we should know about on the plus side — and maybe we can also talk about things we might be losing as well. Yeah?
B: So, from a support standpoint: Kubebuilder is something that came out of API server builder, and Kubebuilder as it is today is at a 1.0 version, where we've built on the experience of early versions — I'd say it's at its third iteration. So from the perspective of building the infrastructure that you're using, it's a much more stable, well-thought-out, extensible platform. We learned from certain things, like API server builder breaking version compatibility.
B: API server builder also re-queues with a delay every time you write to an object, instead of only delaying when there's an error. One user, for instance, found that when they updated objects intentionally, they were getting this exponential back-off, which was causing their system not to function as they wanted. So there are known bugs inside the API server builder controller code that you probably won't hit while you're in alpha, maybe not even in beta — but then when you hit GA, you're going to go: oh.
B: We've also tried to simplify how we set up the project structure quite a bit. There's not this extra tool that you need that runs code generators, for instance — all the logic is built right into the project and you can just do `go generate`. If you look at the Makefile, it's just `go generate`, `go build`, these sorts of things. The only code generator you need is the deep-copy generator, which is actually vendored in, so you don't need to download an extra binary.
A: I think that's really great. When I think about the potential downsides of not running our own API server, the things that come to mind are the things you mentioned that were removed for now, which are validation and defaulting — we'll probably discuss this a little more in the next section. Right now we can do that validation and defaulting in our client libraries for the providerConfig raw extensions, and I think that's something we lose using Kubebuilder: that's all done on the server side, with webhooks. Yeah.
A: I'm trying to figure out what the ripple effects are of using the version that's switched to Kubebuilder and using CRDs, so I started working on that. I've got it compiling, but I'm not convinced it's actually running and working successfully yet. Part of the reason for that exercise is that, before we make this change in the upstream repo, we want to understand what the downstream implications are — what things have to change in the provider-specific repos.
A: So: get this working, get this all cleaned up, make sure we have a good understanding of how we want to do the multiple pieces, and then come back to the group after that. If people want to help during that process, that'd be great too. And we'll lay out, you know, here's what it's going to look like to migrate, and verify that everybody is still on board. I think part of the reason we started this process is because everybody was theoretically on board.
A: We think this is a good idea and a good strategy. And I think that's part of the reason I also want to talk about the benefits: we are going to see an awful lot of churn in the main repo, and an awful lot of churn in the provider repos, during this migration process, and I want to be clear on why we're doing it. There is a light at the end of the tunnel — we are going to get some really good benefits as a result of it.
A: Okay, if there aren't any other questions, I'll start talking about the next thing, which is the GCP-specific repo. The first thing that I tried to do was just vendor Phil's branch directly into the existing code, and that really didn't work. It got me into a state where go dep, at one point, was in what I think was an infinite loop — I stopped it after it had run for upwards of 30 minutes, doing the same thing over and over.
A: It's spitting out some errors that I haven't had a chance to debug, but, again, like the change in the main repo, it's a pretty big delta, because it is switching the way the repo is structured — from the structure that we built around API server builder to the structure we build around Kubebuilder — and those look quite a bit different.
A
As
a
rocks
tension
inside
of
the
machine
object,
I
mean
that
we
lose
some
of
the
benefits
of
them
being
CR
DS
like
we
lose
the
benefits
of
how
the
server
be
able
to
do
like
the
web,
validation
and
web
book
defaulting.
We
lose
some
benefits
of
having
automatic
conversion
between
different
versions
and
having
really
an
easy
story
for
clients
and
and
server
and
controller
version.
Skew
so
I
can
imagine
a
scenario
where
we
could
have.
You
know
you
have
your
your
doc
about
getting
started
and
it
tells
you
to
embed.
A: Once versioning is implemented — which it's not yet — the client would post the v1alpha1 version, and the server side would actually convert that via webhook to v1alpha2, because the controller asks for a specific version of that resource. Things would continue to work across versions, and that would give us a way to easily move things forward. I'm a little bit concerned that with embedding we are not going to have easy facilities to move things forward.
A: So this was, again, something I hadn't yet talked to Phil about, and I was wondering if you had any thoughts about it. If you don't right now, I don't want to put you on the spot — we can chat about it more — but it's something that Chris, Riley, and I were talking about, and we're certainly a little bit concerned with the way we're doing embedding. Yeah.
A
You,
okay,
maybe
be
useful
for
me
to
write
up
a
quick
summary.
This
is
still
sort
of
coming
together
in
my
head
as
to
sort
of
what
the
problems
might
be
down
the
road
and
what
we
might
be
able
to
do
about
it,
but
I
did
want
to
mention
it
see
if
anybody
else's
has
run
into
this
edge
case
in
their
thinking
or
maybe
in
practice
using
crts
elsewhere.
I
might
have
had
some
experience
we
could
draw
upon.
E: Yes — if I understood correctly, this is specifically about the provider config. When we move it out to the machine classes, maybe in that case we will have much more freedom, right? At the moment, if it's inline, then this defaulting could be a problem, because we have multiple versions, of course, for the machine and the provider config.
A: And in that case, I also worry about a future where, if you're allowed to both inline it and put it in the machine class, you might have examples that work if you put it in machine classes — where we do get the defaulting and conversion behavior — but that break if you try to inline it. Which, I think, is also a user experience we don't want to have, where:
A: If you use the system, quote, "the right way," things work; and if you don't use it the right way, we don't provide any graceful degradation or guardrails to prevent you from shooting yourself in the foot. The whole point of inlining it is that it's supposed to be a better user experience — and it doesn't end up being a nice user experience if half the time it works and somehow the other half it doesn't. That is a strictly worse user experience.
E: We have intentionally not added a lot of fields that will probably be required for the cluster autoscaler or other out-of-the-box consumers, right? So the first cut is more or less equal to the provider config, and then in subsequent PRs we can try to include more fields — which will probably require much more discussion, like allocatable and capacity. At the moment, it's only about externalizing the provider config.
A: Okay, so I think maybe the path forward there is: we should resolve these existing comments and give people maybe another day to add more comments. I'll put a hold on it, and if people want to take a look, they can LGTM; I'm going to approve it, and then we'll remove the hold in about 24 hours — and then, Phil, that'll be one more done.
F: So, basically, some controller should be able to reconcile the node, linking it to the machine as soon as it comes up. One possible option for this would be to use something like the IP of the machine — that is, match the nodes to the machines using the IP. So I think I'd like to have two discussions: one is whether we all agree that this should happen dynamically, and the second one is about the IP — is that the best way to do this?
E: I would say the problem is essentially about mapping the node and the machine, right. The first problem is how to uniquely map a node to the machine, and I guess the second problem is whether we really want to allow the node to be replaced while the machine object remains the same. The second question we do need to think about; but for the first part, I guess each cloud provider anyway provides some kind of unique identifier for the machine.
E
For
example,
if
you
do
a
create
column,
it
appears
you
can
take
society
back,
that
part's,
saying
Society
from
the
machine
for
GCP
or
share
it
and
so
on.
Whenever
you
try
to
create,
you
already
have
a
control
on
the
name
of
the
machine,
which
is
also
the
unique
identifier
for
the
machines
name
on
the
cloud
croydon,
so
that
would
not
be
complete
there.
E: So what we could do is have one layer of mapping. For instance, let's imagine that on the machine object we have a reference to the instance ID of the actual node created — we just refer to it. What I'm actually saying is that we have the provider ID: if you look at a node object, you will find a field called providerID, right.
E: The providerID is a field that contains quite good detail: which cloud it's on, which zone the instance runs in, and the exact instance ID. So we take that providerID out of the node object and put it on the machine object, and that becomes one way of cleanly mapping that this machine is for this particular node — and that pretty much suffices. On top of it, you could have one field — which I guess we already have in the machine status anyway — which is basically the node reference, right.
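The providerID being described typically has the form `<provider>://<provider-specific path>` — on AWS, something like `aws:///us-west-2a/i-0123456789abcdef0`. A small illustrative sketch of pulling the pieces apart (the layout after the scheme is provider-defined; this assumes the AWS-style zone/instance form):

```go
package main

import (
	"fmt"
	"strings"
)

// splitProviderID breaks a node .spec.providerID such as
// "aws:///us-west-2a/i-0123456789abcdef0" into the cloud name and the
// provider-specific parts (here: zone and instance ID).
func splitProviderID(providerID string) (cloud, zone, instance string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 {
		return "", "", "", fmt.Errorf("no scheme in %q", providerID)
	}
	cloud = parts[0]
	// Drop leading/trailing slashes, then expect "<zone>/<instance>".
	rest := strings.Split(strings.Trim(parts[1], "/"), "/")
	if len(rest) != 2 {
		return "", "", "", fmt.Errorf("unexpected path in %q", providerID)
	}
	return cloud, rest[0], rest[1], nil
}

func main() {
	cloud, zone, id, _ := splitProviderID("aws:///us-west-2a/i-0123456789abcdef0")
	fmt.Println(cloud, zone, id)
}
```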
E: So the node reference has the node name, which the Kubernetes control plane understands. For many providers this might not be an issue, but for AWS you will see that the node object's name is basically the private IP followed by something and so on, while the actual machine has an instance ID — and the real name of the machine could be different, right. So that's how, with the two layers, we can solve it. But the second question remains: do we really want that behavior when I create a machine object?
E: Well, if my node object is missing, I should get rid of the machine object and expect a new machine object to be created that maps to the new node object that is supposed to be there. So I would prefer the latter, just to avoid the confusion at the controller level about which node this machine object belongs to. So basically the mapping should happen only at creation time, when you create it.
A: Coming back to nodes: when the kubelet posts the node status, is that instance ID present — for example on AWS — such that you can map it over? I can see how your actuator says, you know, create this VM, gets the instance ID back, and sticks that on the machine, so now you're storing it. But when nodes start showing up and joining the cluster, how do you then know which one is using that instance ID? Is that reported back by the kubelet?
H: There is a field — or two fields: there's internal ID and external ID, I think — on the node today. Those are populated by the kubelet, but I think that may change going forward, and I'm not sure how that matching will happen securely in the future. But there is that field, which I think exists on all platforms.
E: So basically, the population of the information should happen only when the create call is done, right. If create is successful, the machine is created, you get the instance ID and the entire providerID string — which has all the information inside it — and you put it on the machine object. That's the only time you would want to modify that information on a particular machine object. Now, say the machine goes down, it reboots, there's a network disconnection —
E
Whatever
happens,
the
same
machine
would
again
respond
in
bed
and
the
node
object.
If
the
node
object
is
registered
or
the
if
it's
gone,
it's
a
node
object
is
gone.
Then
there
is
a
different
problem
than
we
have
to
think
about
the
orphan
vm
cases.
We
would
expect
that
the
node
object
would
be
register
either
ready
or
not
ready.
So
based
on
the
node
object,
we
don't
need
to
really
think
about
the
cloud
provider.
Api
isn't
only
the
orphan
vm
case.
E
A: The point on the connection I'm not understanding is: you get that instance ID when you create the VM, and you put it on the machine. At some later point, a node shows up and registers. How do you know — say you create ten in parallel — which node that shows up you link to which machine?
D: Yeah, so one additional thing I can think of: since we're using kubeadm to join these nodes, one thing we can do is add an additional config option in kubeadm itself to specify a label to tag the node with, when it's joining, containing the unique ID from that cloud provider — and that could be part of the init script of the node itself.
D: So again, this would happen more as the node comes up, and since the provider-specific code is responsible for creating whatever script joins that node, it's under the provider's control — the provider can inject whatever unique identification is specific to that provider.
D: It can add that as part of the node-joining mechanism itself, so that it becomes a little bit easier to tie things together later. But yes — this is just on the point of how we potentially identify and link the two; that's number one. I had one more additional point to make, on a side note around the same topic, if it's okay that I make that point.

A: Yes.
D: Okay, so one of the things I noticed is that in the code today, where we're doing the node linking, there's an inherent assumption that the nodes you're looking at are in the same cluster — that the Cluster API is running on the target cluster itself. That means the pivot has happened and now you're all in the same ecosystem of the same Kubernetes cluster, which is okay.
D
However,
now
in
the
case
which,
for
example,
page
you
don't
want,
and
you
want
to
add
a
kind
of
remote
management
right,
so
you
have
one
single
cluster
where
you're
running
clustered
a
PA
and
you're
just
spinning
off
n
number
of
target
clusters.
Now,
in
that
case,
that
piece
was
not
I
mean
that
assumption
will
break
and,
and
that
doesn't
work,
I
mean
essentially
so
any
thoughts
around
I
mean.
Can
we
do
something
about
it
too,
or
is
that
even
not
even
maybe
a
valid
use
case
or
maybe
want
to
handle
it
differently?.
A: If we stuck a reference from a machine to a cluster explicitly, it would sort of become impossible to do this remote-management scenario that you're describing. Or maybe it's not — maybe it's just about where you run the controller, and you'd still put those two objects in the same cluster and it would be fine. I think we need to understand what would work in that deployment scenario, and how it would work.
D
I,
remember
correctly,
there
was
one
proposal
already
some
discussion
that
I
remember
from
one
of
the
previous
evenings,
where
the
point
was.
Should
we
want
to,
for
example,
says
replicate
some
of
stands
for
remote
management
case?
Specifically,
it's
going
to
be
difficult
to
say.
You
know
from
one
cable
from
your
source,
kubernetes
cluster,
which
communities
cluster,
can
I,
constantly
try
and
manage
and
monitor,
for
example,
the
nodes
object
on
the
remote
thrusters
and
then
try
to
kind
of
keep
that
Sinkin
and
the
I
think
the
proposal
or
some
the
idea
was.
Can
we
copy
over?
D
D
D
A: I'm just trying to read through your issue, 497, and it looks like Derek put a comment there about a specific use case, which I think is really valuable — because if we just say we don't want to have this link, it's hard to say why or why not. So I just want to read Derek's use case here, that he stuck on there this morning, which is: you create a machine, and that creates a node.
F: Yeah, so that's what I wanted to validate with you guys. The main thing is to make sure that we're in alignment that we want to do it the dynamic way. If that's the case, I guess the path forward would be to keep investigating the best way to go at it, and maybe I can come up with a PR so we can discuss it. I don't know — does that make sense?
G: Well, I suppose in some ways IPs are unique within a given cluster, so you could still match on IP, I guess. IPs seem like an implementation detail that I don't know that all providers want to expose. On the other hand, since we're talking about the link between a machine and a node, and every node has an IP — so maybe I don't object.
A: Well, I guess if it's a gut feeling, that's fine too — if you say, "my gut is telling me this is maybe not the best thing to use," I think that's a good thing for us to consider, right? I was trying to figure out if it was more of a gut feeling — "I feel like this is not great design" — or more "I have a specific use case where this definitely won't work," because I think we should consider those differently.
D: So one potential issue with the IP matching that I see is: if you have a use case where your worker nodes, for example, need multiple NICs — and that's something you'd probably need for some use cases — then essentially, at that moment, you introduce multiple links.
D: An additional confusion around that: for example, think about a system like OpenStack, where you can have a machine instance that you create with an internal IP, and then you can dynamically add a floating IP to it — and the floating IP does not really show up on the node itself in any shape or form. However, that floating IP might actually be what you need for SSH, for example — especially consider the case where you have, say, an Ansible host and playbook.
D: Your entry point is maybe just one machine with a floating IP, and then you're reaching all the others through it. Maybe that's not exactly the specific use case here, but that's the other thing — IPs sometimes don't add up the way you would expect. I think my two cents here would be:
D
Maybe
we
should
look
at
like
a
seared
approach
to
say
you
know,
try
this
option
if
this
doesn't
math
work,
then
go
to
the
next
possible
best
option
and
kind
of
do
it
that
way
to
maybe
come
up
with
a
little
bit
more
period
operation
and
see
how
we
can
best
match
in
a
reasonable
way,
the
Machine
and
a
node,
and
that
and
that
flow
could
involve
certain
steps.
For
example,
a
cloud
provider
implementation
could
do
so.
D
That
would
completely
short-circuit
data
matching
things,
and
that
could
be
one
way
or
individuals
cloud
providers
to
run
and
for
those
who
don't
want
to
add
the
additional
metadata,
they
could
fall
back
to
little
bit
more
primitive
ways.
Ips
could
be
one
of
them,
but
that's
maybe
something
that
comes
to
my
mind.
So
I.
G: Agreed. Since we don't have much more time — I wanted to eventually talk about MachineSets; I'll defer that to next week — but along the lines of what you're saying, I think we do need to have that ability. I think currently the reality is: some providers operate remote clusters and some operate in-cluster, and sometimes the machine objects exist within the cluster they're operating. I think that interface needs to be made a little more clear, in particular for generic controllers.
A: I think this is one of the reasons that Hardik proposed copying the node conditions over onto the machines: if we put all of the bridging logic, if you will, in one place — in the machine controller — then all the generic controllers above that are not required to do anything except look one level below. So machine deployments look at machine sets, and machine sets look at machines; the original implementation of machine sets looked two levels down, to machines and nodes, right.
A: So it was following the reference from machines to nodes, which meant that if those nodes were in a different cluster, it wouldn't be able to find them, right? By centralizing that logic in the machine — making the machine the only thing that needs to know whether nodes are local or remote — the generic pieces above it don't have to care anymore. So I think that was one of the reasons for moving the conditions over.
The other reason for moving the conditions over is that it can give us health checking at the machine-set level without the machine set needing to know, or care, where the nodes live. So then the question is: the machine controller is the part that needs to be able to know those things — is that actuator code, or is that the generic, quote-unquote, machine controller code that we want to reuse? And how aware does it have to be about where that node exists? But I do think that, you know —
A: I mean, also, right now we have a reference in the machine status to the node object, and that was put there assuming it was in the same cluster. I think there is a reference type in Kubernetes that might allow you to reference something in a different cluster — I'm not sure; I know there's an ObjectReference, and just a LocalObjectReference.
E: It would actually support the idea of some kind of short-circuiting for the cloud-provider-specific thing, because the node object itself has something very unique. There is the field called providerID, which very uniquely identifies the machine on the cloud provider — it tells you everything that needs to be known to identify the exact machine. We might just read that field out and put it on the machine object.
E: So — I just put one or two lines on the agenda — I want to re-initiate the discussion on the machine phases and states. It was discussed, I guess, some time back when we were in the previous repository, and there was quite a good discussion there. I will just pick up the work from there on the machine states and phases, and I wanted to confirm whether anyone else is on it — we can maybe collaborate on this one. Otherwise, I want to convey that I'll be starting on it.
A: Okay — for anyone who doesn't remember, a quick overview of what those are: machine phases and states are effectively a way to indicate some bits about the lifecycle of the machine — where it is in the overall lifecycle of provisioning, running, you know, updating, etc. — split across two new API fields. There is some overlap with the document that Phillip and I shared around at the beginning of this year about the managed-machine lifecycle; I think they're somewhat close, but there's a little bit of divergence.
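A phase field of the kind being discussed usually looks something like the following. The phase names and transitions here are hypothetical placeholders for illustration, not whatever the eventual proposal settles on:

```go
package main

import "fmt"

// MachinePhase is a coarse, one-word summary of where a Machine is in
// its lifecycle; a separate states/conditions field would carry detail.
// These values are illustrative only.
type MachinePhase string

const (
	PhasePending      MachinePhase = "Pending"      // object created, actuator not started
	PhaseProvisioning MachinePhase = "Provisioning" // cloud resources being created
	PhaseRunning      MachinePhase = "Running"      // backing node registered and ready
	PhaseDeleting     MachinePhase = "Deleting"     // teardown in progress
	PhaseFailed       MachinePhase = "Failed"       // unrecoverable error
)

// nextPhase sketches the happy-path transition order; Running, Deleting,
// and Failed are left as-is here.
func nextPhase(p MachinePhase) MachinePhase {
	switch p {
	case PhasePending:
		return PhaseProvisioning
	case PhaseProvisioning:
		return PhaseRunning
	default:
		return p
	}
}

func main() {
	p := PhasePending
	for i := 0; i < 3; i++ {
		fmt.Println(p)
		p = nextPhase(p)
	}
}
```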