From YouTube: Kubernetes SIG Cluster Lifecycle 20180815 - Cluster API
A: Hello, and welcome to the Wednesday, August 15th edition of the Cluster API breakout meeting for SIG Cluster Lifecycle. Folks, go ahead and add your name to the attendees list if you haven't done so already, and now is a good time to add agenda items if you would like to go over anything later on the call today. I will be adding some of my own agenda items here in a second, based on the office hours, but I will do that while other folks are talking.
C: Yes — so, first of all, if at any point it becomes hard for you guys to hear me, I'm willing to stop talking and pass on my opportunity here; just a heads-up, so that you guys aren't straining to hear me. The point I was trying to make with this agenda item was that the cluster API currently deploys the cluster.
C: That is day one — what about day two? Day two is making sure the cluster is operational and usable. There may be things like monitoring services that you may want to deploy; you may want to have some custom services to handle credentials to the cluster, and things like that. So maybe we should think about a hook that we can offer, so that users can customize how their cluster is shaped once it's deployed.
B: The file that we have is called addons.yaml: clusterctl will effectively kubectl apply -f the things in that file to the cluster that has been created. So if you wanted to apply a specific CNI provider, or a fluentd daemonset, or Heapster, or other such solutions on top of your cluster, you can put them in that YAML file and clusterctl will put those in your cluster when it's provisioned. Okay, so there are those sorts of hooks in place.
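A minimal sketch, in Go, of what that add-ons step amounts to per the description above — a single `kubectl apply -f` against the new cluster. The file name and helper function are illustrative; clusterctl's actual implementation may differ:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons applies every object in addonsPath (e.g. a CNI daemonset,
// fluentd, Heapster) to the cluster reachable via kubeconfigPath by
// shelling out to `kubectl apply -f`, as the add-ons hook effectively does.
func applyAddons(kubeconfigPath, addonsPath string) error {
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfigPath, "apply", "-f", addonsPath)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyAddons("kubeconfig", "addons.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "applying add-ons:", err)
		os.Exit(1)
	}
}
```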
B: I don't think we have standardized on what sorts of things we might want to add onto clusters. Also note that I gave a little hint in Slack the other day to Chris that there are a couple of people at Google who are working on a better story for add-on management, and they've told me they're planning on coming and presenting that to SIG Cluster Lifecycle sometime soon — the timing keeps shifting a little bit, as they want to make sure it's a cohesive story — but that's just a bit of a heads-up there.
B: One issue is: I get my cluster up and running with some add-ons, but then I get a version of fluentd installed — how do I get a new version of fluentd when I want to upgrade my cluster? Those are a couple, right, and I think that's the part that's missing right now.
B: I think the current functionality in the cluster API is equivalent to what's in kubeadm. Like I said, it's really during the creation of a cluster: it makes it easy to single-shot apply things on top of your cluster, right. So if you're creating a cluster and you want fluentd, you can stick a daemonset in the add-ons YAML and you'll get fluentd in your cluster. Okay.
B: Well, the current mechanism is there because it solved some initial pain points — in particular, things like adding the storage class to your cluster, because those aren't built in — and if you want to put, you know, ingress in your cluster, or fluentd. I think those are valid use cases for people as they're prototyping and trying to test out different things with the cluster API; you want to have those things there right now.
B: The cluster API is early enough in development that we don't expect people to be creating production clusters, or clusters that may live for a long period of time, and so, without that longer sense of operational responsibility, it's okay to forego that right now. We'll just solve that problem in the future.
B: Sorry to dive in on that — just because people at Google are working on it doesn't mean other people shouldn't also think about the problem, and maybe approach it from other directions as well; you know, maybe there are good ideas that we haven't thought of. So if you have thoughts about how you think we should do add-ons, feel free to — you know, you can talk to Justin, or, you know, be bold yourself and author your own proposal as well. Right.
B: It's not like we're licking the cookie, saying: we're working on this, nobody else think about it, nobody else touch it, we're gonna drop a golden solution on your head. It's really — you know, this has been a gap, and we're trying to close that gap. But it's been a gap across the entire community, and we haven't stepped up as a community and solved it, and so it's become enough of a pain point for us that we are now trying to solve it.
E: It's open source, so if you want to try your own approach, then by all means do so; if you want to collaborate, then please reach out. It's more just a heads-up that if you are feeling that same pain point and want to tackle it: we are also working on this, and we may come to the same conclusions. So your work may not be — well, I shouldn't say wasted — but, you know, other people are also working on this as well. So, question?
A
And
and
I
feel,
like
we've,
been
having
this
conversation
of
scope
of
kubernetes
cluster
deployment
tools
for
like
two
years
now,
Justin
when
this
first
came
up
in
the
cops
issue
tracker
forever
ago,
and
for
what
it's
worth
my
opinion
is
add-ons
are
something
that
will
always
be
a
part
of
deployment.
Cooper
dice
cluster,
but
I
didn't
want
to
voice
the
other
side
of
the
fence
here,
just
for
folks,
so
that
they
feel
like
they
can
speak
on
it.
They
need
to
which
is
bringing
up
the
question
of
is
applying
anything
to
the
cluster.
A: ...in general — having any sort of mutation to the cluster after we bring up the control plane, or wherever the line in the sand is that defines a Kubernetes cluster, once that's up and running — is this out of scope for the project: installing things like Heapster, Prometheus, or CNI after the fact?
E: I think — and this is my personal view — it's nice to have a hook so that your tooling can be richer and your cluster can come up with more than just bare bones. That can just be a simple hook, like what we have today, right — an apply-this-manifest type of hook. The other complexity is that there are some add-ons that may require coordination with your infrastructure, or with things that are managed by the cluster API.
A: I guess my concern would be more along the lines of: if this thing got too out of hand, and we started to see people putting, you know, business logic and user application data into this add-ons YAML implementation, or the new one that folks are working on — do we want to try to bound this or scope this in any way, or do we just trust that people are gonna do what they're gonna do?
B: I think people would certainly abuse the hooks that you put in, but I think we can also create a notion of best practices: here are the things that we think should be managed through this mechanism, and here are the things that shouldn't be, right. So, for instance, I don't think it's a good idea to tightly couple...
B: ...your application deployment to your cluster deployment, or to your cluster upgrades — those things should be managed with independent lifecycles. But something like the network provider makes a lot more sense to couple with, say, upgrading your machines, right. So I think we can tell people: here are ways to do things, and here's how we think you should use them. But maybe someone finds a really interesting way to use it that we haven't thought of, and then we change what our best practices are, right.
F: The workflow for getting a cluster up — the most important part is, you know, up to the pivot point; it doesn't really matter that much how you do it, and you could use an existing cluster or use minikube. We have ideas — because most of our customers are on-premises customers, we have other ideas on how we can do that — and we were discussing this internally, and one way we could do it is...
F
We
can
have
all
in
separate
tool
tools
to
set
up
the
external
cluster
and
then
and
then
defer
to
cluster
API
and
at
the
pivot
point
when
it
pivots.
So
look
at
the
internal
cluster,
and
you
know
there
was
a
question
about
well
shouldn't.
We
try
to
generalize
that
and
then
put
that
into
the
cluster
API
main
repo.
F: Is this a cluster running, and does it satisfy all the requirements that I need in order for me to do the pivot? One way to do this is exactly what I just described: generalize it and put it into the cluster API. The other way is for vendors to create their own tools to create the external cluster and then call the cluster API. And, you know, I don't think that there's any consensus.
I: Okay, by the way: we don't have well-defined phases in clusterctl, but there are certain synchronization points. So, for instance, after the masters are started but before they're done being created, we wait, and then similarly we wait for the nodes to be created. So there's essentially before masters, after masters / before nodes, and after nodes — those are the three implicit phases.
I
One
thing:
that's
nice
about
the
annotation
interface.
This
slide
is
that
the
provider
implementation
can
determine
what
criteria
it
wants
to
use
to
set
that
annotation,
and
so,
if
you
only
need
three
phases
like
before
master
after
master
and
then
after
node,
you
can
get
that
behavior.
Just
by
changing
when
the
provider
sets
the
annotation
I,
don't
know
that
that
those
three
phases
are
sufficient,
but
maybe
they
are
okay.
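A minimal sketch of that annotation handshake. The annotation key here is an assumption, not a real API; the point, per the discussion, is that the provider decides when to set it, which implicitly defines the phases:

```go
package main

import "fmt"

const readyAnnotation = "cluster.k8s.io/machine-ready" // assumed key, for illustration only

// machineIsReady is the check a waiter (e.g. clusterctl) would poll on.
func machineIsReady(annotations map[string]string) bool {
	return annotations[readyAnnotation] == "true"
}

func main() {
	annotations := map[string]string{}
	fmt.Println("ready?", machineIsReady(annotations)) // false: provider hasn't set it yet

	// The provider flips the annotation once its own criteria are met —
	// e.g. right after the master is created, or only after nodes register.
	annotations[readyAnnotation] = "true"
	fmt.Println("ready?", machineIsReady(annotations)) // true: the waiter proceeds
}
```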
F: That is what we were hoping the community would agree on: that we would like to generalize this. We would like to generalize it and put it into clusterctl also, but we weren't sure if the community was set on the existing clusterctl approach. With this, neither of those will quite work for [inaudible].
B: I shouldn't say — I need to look. Okay, yeah, I guess I haven't looked at it too closely recently, but I thought it was like: create the master, or create a minikube, and let me know when it's done — and we just sort of assumed that, because it's minikube, it's an entire, fully functional cluster. I think maybe that's also what Luke is getting at here: if you aren't using minikube, you need to break that into multiple steps too.
B: I think he's dropped off, because his connection was pretty terrible on the bus. But from the issue it looks like he's proposed three different approaches and was maybe trying to get some consensus on which one people liked; it's probably been discussed a number of times in the past as well.
A
Neither
did
I
I'm
also
reading
them.
It
looks
like
the
TLDR,
though,
is
we
want
to
be
able
to
run
cluster
and
machine
in
multiple
namespaces
and
history?
Approaches
are
sorry
I'm
reading.
I: So the third approach is just to fix clusterctl so that it has an option to specify the namespace. In another PR — evidently not linked to this one; I'll fix that after the meeting — I documented the two places, I think the only two places, that need updating: clusterctl, or the cluster deployer more specifically, needs to accept the namespace, and once that's done, you could have things in a different namespace. You can still only have one cluster, because there's a separate issue related to how...
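A sketch of the shape of that fix, assuming the cluster-api type paths of this era (sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1); the helper is illustrative, not the actual deployer code:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// newCluster builds a Cluster object in a caller-chosen namespace,
// instead of the effectively hard-coded "default".
func newCluster(name, namespace string) *clusterv1.Cluster {
	return &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace, // the option the third approach would add
		},
	}
}

func main() {
	c := newCluster("test-1", "team-a")
	fmt.Printf("%s/%s\n", c.Namespace, c.Name)
}
```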
F: You know, my preference is to keep as little in the CLI as possible. I previously worked on a product where we had a CLI that had so many flags that customers found it extremely daunting. I know that clusterctl doesn't have many right now, but it can get out of hand very easily, yeah.
B: It's just a trade-off, I guess, yeah. One thing we've done on the Google ones is we have manifest templates, and you already have to run a shell script right now to actually fill in the specific parameters for things like which project you put them in — so one alternative would be to add a flag to that shell script...
B
That
says,
like
also
you
know,
said,
replace
the
default
namespace
with
the
different
namespace
and
I
would
keep
them
in
the
ml
files
and
would
basically
still
add
a
flag,
but
in
sort
of
a
different
part
of
the
workflow
I,
don't
know
how
you're
thinking
you'd
be
generating
these
EML
files.
Normally,
presumably
a
person
isn't
writing
every
single
mo
file.
There's
gonna
be
some
tooling
around
generate
them
in
in
one.
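A sketch of that manifest-template idea using Go's text/template rather than a shell script; the template fields (Namespace, Project) are examples, not a fixed schema:

```go
package main

import (
	"os"
	"text/template"
)

// A machine manifest with explicit substitution points, instead of
// sed-replacing literal values like "default" after the fact.
const machineTemplate = `apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-machine
  namespace: {{.Namespace}}
  labels:
    gcp-project: {{.Project}}
`

func main() {
	tmpl := template.Must(template.New("machine").Parse(machineTemplate))
	params := struct{ Namespace, Project string }{"team-a", "my-gcp-project"}
	// Render the filled-in manifest to stdout; real tooling would write files.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```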
E
It
would
set
that
full
disclosure
we
haven't,
found
a
great
library
to
do
it
so
currently
we're
manually
parsing
them
for
what
we
think.
Those
keys
should
look
like
we
sort
of
hard
code
each
one,
but
it
at
least
avoids
the
flag
explosion,
while
at
least
in
theory
having
the
option
to
make
everything
customizable
for
from
a
flag
in
the
future
of
someone.
Someone
wants
it
and
hopefully,
in
future,
we'll
find
a
nice.
A: Okay — uh-oh, looks like somebody put something in chat: "no, Viper can do that". Yeah, Viper's another good one; that's spf13's thing — he's the guy who did Cobra, and Viper is like a complementary configuration library to that as well. We can add a link in the doc to Viper as well. Anyway.
A: Next one — health check for machines, issue number 47.
G
Yes,
slightly
question:
I
was
recently
going
to
the
Machine
types
and
more
or
less
migrate
or
MCM
types
there
and
I
found
that
we
don't
have
amateur
food
at
the
moment
where
we
can
actually
check
the
health
of
the
machine.
So
we
have
a
provider
status
which
we'll
be
talking
about
the
provider
specific
state,
it's
coming
from
any
provider,
but
what
about
learning
from
also
the
node
condition?
G: It's more about adding, in parallel to the provider status, some other status which comes from the node conditions — maybe putting it inside the provider status won't make sense, because it's not specific to the provider. So, in parallel to provider status, we would fill that particular field — let's call it node conditions — from the status conditions of the node object.
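A trimmed-down sketch of that proposal; MachineStatus here is a stand-in for the real cluster-api type, and the NodeConditions field is the suggestion under discussion, not a merged API:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// MachineStatus is a simplified stand-in for the cluster-api machine status.
type MachineStatus struct {
	// ProviderStatus holds provider-specific state (already exists today).
	ProviderStatus string
	// NodeConditions would be copied from node.Status.Conditions, so
	// higher-level controllers (machine set, machine deployment) could
	// judge health without fetching the node from the target cluster.
	NodeConditions []corev1.NodeCondition
}

func main() {
	status := MachineStatus{
		NodeConditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
		},
	}
	fmt.Println("conditions mirrored onto machine:", len(status.NodeConditions))
}
```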
A: I mean, I usually err on the side of not replicating data, and using pointers if we absolutely need a reference to it. So I guess my question would be: what are we getting from putting it in our object as well? It's inert data that we could otherwise just get from the node object.
G
What
do
we
could
work
so
just
that
machine
set
would
have
then
to
make
a
call
and
get
the
node
object
inside
the
cache.
Well,
while
making
sure
whether
this
machine
needs
to
be
replaced
or
not.
But
it's
just
that
small
blob
is
already
there
on
the
machine
status.
Then
it
can
continuously
sync,
it
can
directly
get
the
hem
from
the
machine
object
itself
more
or
less
coming
from
the
replicating.
Yes,
that's
correct
concerned,
but
couple
of
couple
of
condition
feel
steps,
so
was
just
need
to
know.
B
Yeah
I
was
gonna,
ask
a
similar
question
to
Chris,
which
was
I'm
looking
at
the
there's,
a
PRL
from
Kenny.
That's
linked
the
last
comment
of
the
issue
you
linked
to.
Where,
in
the
machine
set
controller,
it
looks
like
it
does,
go
grab
the
node
conditions
from
the
node
linked
from
the
machine.
If
I'm
reading
the
code
correctly,
as
we
talked
during
a
meeting
I'm
wondering
what
the
advantage
is
of
copying
it
over
to
the
machine
versus
following
that
reference,
which
is
looks
like
it's.
What
we're
doing
today.
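A sketch of the follow-the-reference alternative: resolve the machine's node reference and read the conditions live. The getNode indirection stands in for a client call against the target cluster's API server:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeConditions follows a machine's node reference (if any) and returns
// the live conditions; nil means no node is linked to the machine yet.
func nodeConditions(nodeRef *corev1.ObjectReference, getNode func(name string) (*corev1.Node, error)) ([]corev1.NodeCondition, error) {
	if nodeRef == nil {
		return nil, nil // machine not linked to a node yet
	}
	node, err := getNode(nodeRef.Name)
	if err != nil {
		return nil, err // e.g. node deleted or target cluster unreachable
	}
	return node.Status.Conditions, nil
}

func main() {
	// Fake lookup with one Ready node, standing in for a real client.
	getNode := func(name string) (*corev1.Node, error) {
		return &corev1.Node{Status: corev1.NodeStatus{
			Conditions: []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionTrue}},
		}}, nil
	}
	conds, _ := nodeConditions(&corev1.ObjectReference{Name: "node-1"}, getNode)
	fmt.Println("live conditions:", len(conds))
}
```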
L: At least I see the value where, if you are copying it — because then you're kind of going across: you're physically fetching the information from a different cluster, the external or internal cluster, the one that you spun off, into, you know, sort of your management cluster. And in that scenario — I mean, without copying, that information truly does not live inside that management cluster, if you will. In that case, at least, I can see it might be of some use.
B: To restate that, it sounds like you're saying: if the machines and nodes are in two different clusters, then the machine controller already has to talk to the two different clusters regardless, but the machine set controller would only have to talk to the single cluster endpoint that's holding the machines, and not, additionally, the API server that's also holding the nodes.
G: The machine deployment would also have to go to the node object, yes. One more thing I can say is that in a couple of cases we also see the node object being, let's say, mistakenly deleted, or disappearing, and so on, right. So in those kinds of cases the machine status can have a clear description that the node is not available; otherwise, just because it's not available on the machine, the higher-level controllers would be missing that.
L: I guess one way to see it is that it's also kind of capturing the intent of the person who applied that to the external cluster: say I intended to have three nodes, and this is the last state when it created them — let's say the status said I had the three nodes — and if for some reason something happened to that cluster where one node just disappeared, it needs to have something to reconcile against. There could be a benefit to that.
B
And
I
guess:
if,
then,
if
the
notice
has
disappeared
as
the
larger
reachable
shouldn't
the
machines,
that
control
would
be
replacing
it,
regardless
of
what
the
last
conditions
were
from
the
node?
Is
there
any
benefit
to
knowing
it's
gone
and
the
last
time
I
heard
form
it
was
healthy?
Aren't
you
still
gonna
say
it
needs
to
be
replaced
because
it's
gone.
L
Well,
I
mean
I,
guess
gone,
might
not
be
the
perfect
condition.
I
mean
I
was
just
trying
to
find
out,
like
you
know,
in
terms
of
the
intent
and
what
you
have
it's
kind
of
capturing,
a
snapshot
of
what
it
knows
was.
The
current
you
know
was
the
most
recent
thing
that
it
knew
about
it,
and
then
you
know
in
cases
where
you
know,
for
whatever
reason,
let's
say
the
manage.
You
know
the
pressure
that
you
created.
Let's
say
well
to
completely.
You
know
on
some
sort
of
a
different
side
that
got
disconnected
and
reconnected.
L
There
could
be
different
situations
again,
I'm
not
trying
to
get
to
like
one
specific
particular
case,
but
I
mean
in
a
generic
sense,
I'm
trying
to
say
you
know
it's
like
the
the
truth
about
the
system
there
at
least
now
you
get
in
case
of
you,
are
having
a
one,
centralized
management
which
is
managing
all
these.
There
is
at
least
something
to
kind
of
compare
and
take
some
corrective
action
against
if.
G
Possibility
is
very
same
machine,
reboot
happens
and
so
on.
So
the
node
object
might
not
be
available
for
some
time
and
then
it
might
come
back
our
situations
like
that's
in
public
cloud
providers.
It
might
not
happen
because
controller
manager
takes
care
of
it
chip
control
a
little
bit
in
other.
It
exists,
it's
possible
that
if
the
boot
is
happening,
then
it
might
be
registered
and
from
the
same
machine
comes
up
gaynelle.
We
might
not
have
to
really
particular
than
time
machine.
I: I think it's premature. Actually, I was just going to note that I'm going to create a draft of a provider implementation guide — this came out of yesterday's wider implementers meeting. The idea is just to identify the interfaces between the user and clusterctl, and between clusterctl and the operators, which most providers are going to have to implement.
B: I'd say the additional part of that that I've seen work pretty well is that people start off in Google Docs, because it's very easy to have comment threads and converse back and forth, and then, as things start to stabilize, it's often useful to turn that into markdown and actually check it into the repository. Because, you know, people that are new to the project are not gonna find Google Docs that were shared with the SIG six months ago, right — whereas if you have a markdown file, it gets indexed by Google.
B
It's
like
you
really
easily
easily
linkable
from
the
page
it's,
but
it's
really
hard
to
have
the
conversation
about
what
should
be
in
that
file
initially
in
a
pull
request
for
markdown
right.
So
if
we
start
off
on
a
Google
Doc,
we
sort
of
stabilize
the
what
we
want
to
be
in
the
file
and
then
we
translate
that
to
mark
down
to
the
pull
request.
Get
it
checked
in
I've
seen
that
flow
work
really
well
on
kubernetes.
I: A second question — I apologize, Jason, this is your question for later: the words "internal" and "external" confuse everybody, and I vaguely recall us talking about changing that; I just couldn't find anything in the meeting notes. We can hash it out during PR review; I just wondered briefly if anyone had any suggestions — like copying the Gardener solution, with seeds and shoots, or...
D: It gets confusing because "internal" is currently used in two different places, and this came up with the no-pivot option in a PR that I have out there. I started referencing "target cluster", with that being kind of selectable, because it's basically whatever cluster is hosting the cluster API at that point; "internal" only means that we've pivoted the cluster to, you know, the cluster that was spun up. So I like the target-cluster terminology there, but I'm open to pretty much anything.
D: Currently the workflow is limited — and this is what the discussion earlier came to — in that we basically hard-code the namespace with clusterctl right now to the default namespace. So once that's unlocked, you can potentially create multiple ones; it's just a matter of whether we would support that as part of the workflow through clusterctl, or whether we would only support kind of the single-cluster use case.
B: Yeah, I think if you think about the use case of using an existing cluster — one that's not like a minikube cluster that you spin up as an ephemeral bootstrap cluster that goes away — if you use an existing cluster to be that bootstrapper for your target cluster, it really makes sense to use that same bootstrap cluster the next time you want to create a target cluster. It doesn't make sense to have to create another bootstrap cluster, right? So I think that's going to be the pattern that makes sense to most people.
B
If
you
need
something,
that's
really
ephemeral
and
you're
comfortable
using
mini
cube.
It's
a
it's
an
easy
way
to
get
a
bootstrap
cluster
that
you
can
throw
away.
If
you
already
have
a
cluster,
you
can
use
that
as
your
food
truck
cluster
and
then
as
long
proposing
our
cases
where
neither
of
those
exists,
and
we
need
to
have
that
sort
of
a
third
solution.
So
I'm
looking
forward
to
hearing
what
that
is,.
A: Another note on naming things in general: we had a bit of a discussion on the cluster — oh my gosh, let me say this right — the cluster API AWS implementation, which I'll give an update on a little bit later in the call. We were trying to come up with a name for the different variants of implementations; relevant to the AWS example, there would be private and public topology, like you see in kops, and we were using the word "flavors" to describe that.
B: I recall there were two things: I think Tim proposed either "flavor" — and there was one other term, but it's slipping my memory what the other one was. It'd be nice to just make a list with a couple of different terms and create a voting issue, like you did when we created the different provider repositories, and people can use a little GitHub emoji voting to settle on one. Okay.
A: Okay — Alejandro, review of issue 158?
J: So I read the item that you wanted me to look at — this was issue 158 — and here it's proposed to set the responsibility for updating the cluster API endpoints on the cluster controller, right? Yes? Okay. So I read through that and thought about it a little more, and I think that the logic for updating those endpoints should probably live in the machine actuator code itself, and probably one way you could do that is to open up an interface called, say, update...
J: ...endpoints: once a master control plane is up and running, you can go ahead and update those fields of the cluster object — so basically it reports: hey, I have endpoints for you. That's my two cents on the topic, because we have varying ideas of how that should work. I'm also gonna...
B: We create maybe three VMs, create a load balancer in front of them, and then the cluster controller would take that load balancer IP and stick it in the cluster status. If we're creating a singleton master, that's maybe a single VM; then the cluster controller can say: I know there's a single one, I'll take that machine's IP — which, again, the cluster controller should be able to read off of the machine status — and it knows it's the master.
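A sketch of that hand-off under the names floated here — an "update endpoints"-style interface on the actuator side, with the cluster controller writing the result into the cluster status. The interface name and types are assumptions drawn from the discussion, not existing code:

```go
package main

import "fmt"

// APIEndpoint mirrors the idea of a reachable control-plane endpoint
// (a load-balancer IP for HA, or the single master's IP otherwise).
type APIEndpoint struct {
	Host string
	Port int
}

// EndpointReporter is the hypothetical interface a machine actuator could
// implement: "once the master is up, here are its endpoints."
type EndpointReporter interface {
	GetAPIEndpoints() ([]APIEndpoint, error)
}

// reconcileEndpoints is what the cluster controller would do with the
// report: stick the endpoints into the cluster status.
func reconcileEndpoints(r EndpointReporter, status *[]APIEndpoint) error {
	eps, err := r.GetAPIEndpoints()
	if err != nil {
		return err
	}
	*status = eps
	return nil
}

// fakeActuator stands in for a real provider's machine actuator.
type fakeActuator struct{}

func (fakeActuator) GetAPIEndpoints() ([]APIEndpoint, error) {
	return []APIEndpoint{{Host: "203.0.113.10", Port: 443}}, nil
}

func main() {
	var status []APIEndpoint
	if err := reconcileEndpoints(fakeActuator{}, &status); err != nil {
		panic(err)
	}
	fmt.Println("cluster status endpoints:", status)
}
```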
I: Sorry — go ahead. Okay, I was going to say: there's a related issue that I brought up earlier about annotations, when waiting to determine if a machine is ready. One idea for replacing those would be another field that more precisely represents what you are waiting for. I think waiting for an endpoint is a useful thing to wait for; it's more precise than waiting for "ready" or some other defined state, right.
B: Yeah, I'm trying to think what would be responsible for updating an endpoint on a machine, you know, when the control plane is ready. In my mind, the control plane being ready is more a function of the cluster controller — the cluster controller should be responsible for making sure that the cluster control plane is functioning. I think right now it's a little bit intertwined in the machine actuator, because we make control-plane readiness be a startup script.
K: Or is the cluster controller what's going to be responsible for managing the lifecycle of masters? I know that a few months ago there was some brief discussion about having, you know, a specific kind of machine definition for masters, and that would — let's say, for an HA cluster...
A: Really quick, folks — we are at our time for the call today, and I know somebody else is using this room. Would it be okay if we continue this conversation offline in Slack, and start off with our remaining two issue items on next Wednesday's call? Okay, I'm seeing thumbs up from people. Sorry to cut everyone off; thanks for joining. We'll post the recording as soon as we can, and we'll see everybody next Wednesday.