From YouTube: Kubernetes SIG Cluster Lifecycle 20180606 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7u36zfdwfnqk
Highlights:
- Deleting "deployer" directories in favor of standardizing on clusterctl
- Building the first release of clusterctl
- SSH provider for already provisioned machines
- Continued discussion about how to run multiple clusters in a single namespace
B
The provider code is not being moved at all — sorry, the provider code is not being moved; the only thing that will change is the tool. Instead of each provider having its own tool, your provider will use the common clusterctl. I think the question of where the provider code goes is a little bit of a separate question.
B
On to the next one: since clusterctl is in and it can create clusters, we feel like it's ready to get some real-world usage. I was just exploring ideas for how we can build release candidates — probably something like a 0.0.1-alpha.0 or alpha.1 — just so we can get the ball rolling. I don't really know what the process for building tools in Kubernetes, or SIG assets, is. What I think we should—
A
Chris, hey, do you want to give any background about how kops does releases? I think this project is in a similar state to where kops was: decoupled from the main Kubernetes release train. I know there's a lot of machinery built around the main Kubernetes release train, and kubeadm sort of jumped on that boat, but I think it'd be nice for us to stay decoupled. Maybe a little background about what kops does and the things that are good or bad about that process. Totally.
D
So yeah, I was last super involved with kops some months ago, and the way our release worked is we did a best effort to follow on the coattails of the Kubernetes release, and we tried to follow the same semantic versioning as closely as possible. So where Kubernetes would release, say, 1.5.4, we would shortly after — two or three weeks later — release a corresponding kops version as well, one that would support the new Kubernetes codebase with any changes as needed.
D
One downside to that is it wasn't always as quick as we thought it was going to be: in some cases it was almost no work, and in some cases it was a very large amount of work, so it was hard to find a stable pattern there. At a high level I would just want to bring up: why do we think it's a good idea to stay decoupled from the release cycle here?
D
I know when we were first talking about this, Justin and a few other folks were really pushing towards making this a more mainstream thing, and I think getting into the main release cycle would be helpful here — but let's not over-engineer anything.
B
Right, yeah, I think we can tie in with the Kubernetes versions eventually, although I don't know if we should start with that right now for the clusterctl version going forward, because what I'm thinking is maybe clusterctl could be completely independent and start out by itself at 0.0.1.
B
I'm not really sure how kops was implemented and how tied in it is with Kubernetes versions, but this tool itself does not really have any tie-in with specific Kubernetes versions. That's all part of the specs, the YAMLs, and the config maps, and so following the Kubernetes train would mean we end up doing either too few or too many releases.
B
Then, you know, once clusterctl is in a stable state where it can create arbitrary clusters on any supported platform, to support a new Kubernetes version we pretty much just need to update a config map and the startup scripts, test it, and then release the YAMLs, rather than release clusterctl again. That's it.
D
A cadence that allows us to release quickly is going to be important here, and at least that gives folks an easy way to know whether they're running the most recent version of our code or not, without having to go through a mental exercise to make sure they're running the most updated version. One thing we could try here — and we talked about doing this in kops — was doing a best effort to release.
A
I
think
there's
a
great
discussion
and
we
probably
probably
won't
be
able
to
wrap
it
up
completely
today.
I
think
we
should
maybe
branch
into
like
what
should
we
do
in
the
short
term.
If
we
think
we
have
something,
we
want
people
to
start
testing
where
they
can
test
like
something
that
was
built
rather
than
building
it
said
themselves
from
head
and
then
also
the
long-term
source
retreat.
The
question
of
how
we
want
to
do
releases
in
general
and
I.
Think
Cube.
A
Admin
right
now
is
part
of
a
kubernetes
release
cycle,
and
that
definitely
has
some
downsides
too,
because
if
you
don't
quite
squeeze
everything
in
before
crew,
naze
cuts
its
release,
then
you
have
bugs
and
you're
trying
to
convince
kubernetes
to
cut
a
new
release
just
to
fix
one
of
your
bugs,
which
is
really
frustrating
and
is
becoming
more
difficult
over
time,
like
I
think
there
was
like,
like
a
1.8
dot.
Oh
let's
cut!
Thank
you
too
bad
one
was
broken
and
we
had
to
cut
it.
One
nodded
out
one
very
quickly.
Afterwards,
yeah.
A
And the other thing — I feel like you mentioned two things that were a little bit at odds. One is tying into the primary release process, and the other was releasing with a fast cadence, and in my mind releasing every three months is not a particularly fast cadence, especially during the early part of the development process, like right now. If we had to wait three months to cut our first one, that seems kind of crazy.
A
That's kind of what we do with things like the ingress controller and other sorts of add-ons, right? We have that add-ons manifest directory in the cluster directory in kubernetes, which is deprecated and everybody wants to get rid of, but that's effectively what that directory does: it basically has external references to things that were built elsewhere and says, this is the version of that thing you should use with this Kubernetes release. In some ways I feel like we might have the opposite problem, where what we want to say is, this version of clusterctl supports the following versions of Kubernetes — not, this version of Kubernetes requires this specific build of clusterctl — because I feel like we want to be a little bit more permissive in terms of being able to support multiple versions and upgrades and downgrades and so forth. Okay, I don't know.
A
I do feel like this is a really interesting conversation that we probably want to keep having. I do also want to answer the earlier question about the short term: tactically, are people okay with us tagging the repo, building a binary — probably through some manual process — and pushing that binary up to the GitHub releases page, so that people can start playing with something that's been pre-built, as opposed to having to build it themselves? Yeah.
B
Yes — so right now, actually, that's a good point: right now in GCP we support the machine setup config map, which means you can have arbitrary static scripts and support arbitrary versions. Other providers don't yet, and so I think this is where my point of not tying in with Kubernetes just yet comes in, because clusterctl should be independent of the Kubernetes version; it does depend on the provider.
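To make that point concrete, here is a minimal sketch of what such a machine-setup config map could look like — supporting a new Kubernetes version would then mean adding an entry to the data rather than cutting a new clusterctl release. The key format, script contents, and namespace are illustrative assumptions, not the GCP provider's actual schema.

```python
# Illustrative only: a machine-setup ConfigMap keyed by Kubernetes version and role,
# so new versions are supported by editing data instead of re-releasing clusterctl.
# Keys, names, and namespaces here are hypothetical.
import json

machine_setup_config = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "machine-setup", "namespace": "kube-system"},
    "data": {
        "1.9.4-master": "#!/bin/bash\n# ...install a 1.9.4 control plane...",
        "1.9.4-node": "#!/bin/bash\n# ...join a 1.9.4 node...",
        "1.10.2-master": "#!/bin/bash\n# ...install a 1.10.2 control plane...",
    },
}

print(json.dumps(machine_setup_config, indent=2))
```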
A
I'd love to also bring Phil Wittrock into the conversation, because he has been talking to me for quite a while about wanting to divorce kubectl from the Kubernetes release process, and for kubectl to be built and released at a much faster cadence than every three months. I think he mentioned weekly or bi-weekly releases of kubectl, so that the client tooling could move way faster — so it would be really interesting to get his perspective.
E
Robert asked a great question on the issue with regards to how this would use MachineSet and MachineDeployment. The provider supports already-provisioned machines, so the machine spec — the Machine objects — need some information that's host specific, and there isn't really a way to pass that kind of information through a MachineDeployment and have it create the Machine objects. So we're still thinking about that. If you haven't taken a look at the proposal, please take a look. Thank you for your feedback, and yeah.
E
It's going to be, initially, maybe under the Platform9 GitHub org. That's the plan, because as far as I understood from the issue, once there are maintainers across a couple of companies for this provider — which hopefully will be the case soon — then we can move it into a repo under maybe the kubernetes org or a community org or something.
A
Okay, yeah. That's why I was asking: I think you'll be one of the first people to say, we have our separate repo with our machine controller code, our provider-specific code, and we don't have a great process yet for figuring out where those go or how ownership works for them. So it'll be really useful, I think, to start treading that new ground.
D
Yeah, so there's a whole thing if it's going to be an official SIG-sponsored project. If that's the direction we want to take this in, there is a whole write-up of what you need: you have to have a CLA in place — the CNCF CLA — and you need an Apache 2 license. There's a checklist of things you can do, and then we can just adopt that or whatever.
A
Thanks, that is super useful, because I think this is going to become a more common question. I know Joe Beda has been of the opinion that we should not have to do all of our development in the kubernetes GitHub repos right now. I know that Heptio has been building their own stuff, and if that stuff wants to get contributed later, it's really useful to be able to take all this tooling and apply it — and Brennan talked about this, working out something like the associated repos and everything. Yeah, sweet, excellent.
A
Once you put multiple clusters in the same namespace, it becomes unclear in the current API design which machines in that namespace belong to which cluster in that namespace, and so the follow-on part of this conversation is how to more tightly link machines and clusters together if they're all in the same namespace. Yes.
A
So you could, in your garden cluster, have five different clusters in the same namespace, but all the machines for those clusters wouldn't be there — it wouldn't be confusing which one was which, because the machines would live in the shoot cluster. So to figure out which machines go with which cluster, from the top level you'd go down to the shoot cluster and just look at the machines, and from the shoot cluster you'd go up to the garden cluster and find the cluster that's associated with it. Am I describing that right? Yeah.
H
So our approach is: we have a specification for the machines, but it's like a template — more or less one very large specification for the cluster — and from that specification we create the machines. But the users don't have access to those machines, because we want to manage and operate them, and because of that we are putting our machine controller there for now, and hopefully the cluster APIs as well, next to the API server. But end users don't have access to it yet.
A
So what I put in the chat — and what I'm wondering here — is that it seems like there are at least two different proposals for how to deal with multiple clusters in the same namespace. One is to have both the clusters and the machines in the same namespace in the same cluster, and the other is to have the clusters in the same namespace but the machines differentiated because they are in a different cluster, if I'm understanding the Gardener architecture correctly.
A
Right, but in your architecture right now they're different. So I guess what I'm worried about is, if we put an ownerRef, say, on Machines pointing to the Cluster, then that wouldn't work for the way the current Gardener architecture is, because if they're in different Kubernetes API servers there's no way to strongly link them with an ownerRef — and if that's a required field on Machines, then it would be impossible to create Machines if you didn't have that Cluster definition in that same Kubernetes instance. Yeah.
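For context, a minimal sketch of the strong link being debated: an ownerReference on a Machine pointing at its Cluster. OwnerReferences resolve by UID within a single API server and namespace, which is why a split like Gardener's — cluster definition in one API server, machines in another — could not satisfy a required field of this kind. The API group/version, names, and UID below are illustrative assumptions.

```python
# Illustrative only: a Machine manifest carrying an ownerReference to its Cluster.
# This link is meaningful only inside one API server, in the same namespace as the owner.
machine = {
    "apiVersion": "cluster.k8s.io/v1alpha1",  # assumed group/version for the sketch
    "kind": "Machine",
    "metadata": {
        "name": "worker-0",
        "namespace": "team-a",
        "ownerReferences": [
            {
                "apiVersion": "cluster.k8s.io/v1alpha1",
                "kind": "Cluster",
                "name": "cluster-1",
                "uid": "00000000-0000-0000-0000-000000000000",  # UID of the Cluster in this API server
                "controller": True,
                "blockOwnerDeletion": True,
            }
        ],
    },
    "spec": {"providerConfig": {}},
}
```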
I
Let me try to put the architecture in a slightly different way. There are actually three different layers of clusters, right? The first layer is called the garden cluster, the second layer is called a seed cluster, and the third layer is the shoot clusters. In the garden cluster, you can see it in a way that there is one namespace where we run—
I
—something called the Gardener controller manager. The Gardener controller manager is something that, in our case, would be a controller for the cluster API — the highest level of controller, right? And in that garden cluster we have a namespace where, in that single namespace, we register different clusters. So it could be cluster-1.yaml, cluster-2.yaml, and so on, and all of them would be registered in a single namespace inside the garden cluster.
I
But what really happens is that the controller manager — the Gardener controller manager running against the garden cluster — basically reads cluster-1.yaml, and then inside the seed cluster it dedicates exactly one namespace for that one cluster. So basically, when cluster-1.yaml is introduced, one namespace will be created inside the seed cluster.
I
Let's call it cluster-1 inside the seed cluster, and inside that namespace the control plane of that cluster-1 will be installed, and then the actual worker machines would be running as what we call the shoot cluster. So there are three different layers of cluster.
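As a rough, purely illustrative picture of the three layers just described (all names below are made up): the garden cluster registers the cluster definitions in one namespace, each definition gets a dedicated namespace in a seed cluster that hosts that cluster's control plane, and the worker machines themselves form the shoot cluster.

```python
# Illustrative only: the garden / seed / shoot layering described above.
gardener_layout = {
    "garden-cluster": {
        # one namespace registering several cluster definitions
        "ns/team-a": ["cluster-1.yaml", "cluster-2.yaml"],
    },
    "seed-cluster": {
        # one dedicated namespace per registered cluster, hosting its control plane
        "ns/cluster-1": ["kube-apiserver", "controller-manager", "etcd", "machine-controller"],
        "ns/cluster-2": ["kube-apiserver", "controller-manager", "etcd", "machine-controller"],
    },
    "shoot-clusters": {
        # the worker machines form the end-user (shoot) clusters
        "cluster-1": ["worker-0", "worker-1"],
        "cluster-2": ["worker-0"],
    },
}
```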
I
So in the seed cluster, one namespace actually maps to one shoot cluster, and the shoot cluster is the end user's — the customer's — cluster. So inside the seed cluster, one namespace maps to one cluster, and that namespace basically holds the control plane of the shoot cluster. So basically, if I understand correctly, the conversation we had earlier was about whether we wanted to put two different control planes inside a single namespace of the seed cluster, right?
I
So that's something that we have tried to avoid, because then it will end up in the same set of questions that you have: how to identify which machine belongs to which cluster in the namespace, and so on. But I'm really curious what the use case would be here, if I want to have a single namespace for multiple different clusters — I'm just curious.
G
The main use case at Red Hat is that we are managing many clusters, and there may be a user that just wants to have one set of credentials that they store in secrets. Rather than having to copy those credentials to every namespace for every cluster that they want to create, they would have one namespace that has one set of credentials, and each cluster would share that secret.
J
Well, at least for pods, there's no way to specify which namespace to pull a secret from, and presumably to do that you'd still have to specify which namespace you're pulling the secret from. I think the other broader use case, though, is RBAC — just general RBAC.
H
Because of our use case, and because we have some quotas, we have a secret that can be shared between multiple clusters. So, for example, if a manager gives access to different teams under him, we create the credential — the secret — and then we have another object, which is more or less a secret binding, which basically specifies a kind of simplified RBAC, because we don't want to have—
H
—because if you reference a secret in your controller, then you have to be sure that the user who is actually creating those, or trying to reference the secret, has the RBAC rule for it. So we have a secret binding, and our controller reads the secret and then copies it into the namespace of the control plane that we create, pretty much. So that's our logic.
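A minimal sketch of the shape being described — a binding object that points a cluster definition at a shared credentials secret, which the controller then copies into that cluster's control-plane namespace. The kinds, group/version, and field names below are illustrative assumptions, not Gardener's actual schema.

```python
# Illustrative only: a shared credentials secret, a "secret binding" that grants use of it,
# and a cluster definition referencing the binding. A controller that can read both would
# copy the secret into the per-cluster control-plane namespace in the seed cluster.
secret_binding = {
    "apiVersion": "example.gardener/v1alpha1",  # hypothetical group/version
    "kind": "SecretBinding",
    "metadata": {"name": "team-a-cloud-creds", "namespace": "garden-team-a"},
    "secretRef": {"name": "cloud-credentials", "namespace": "garden-team-a"},
}

cluster_definition = {
    "apiVersion": "example.gardener/v1alpha1",
    "kind": "Shoot",
    "metadata": {"name": "cluster-1", "namespace": "garden-team-a"},
    "spec": {"secretBindingName": "team-a-cloud-creds"},  # many clusters can share one binding
}
```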
I
Yes — so essentially, yes, in the seed cluster the secrets will be replicated across different namespaces. But then it's an interesting question, because we anyway have to make sure that access to the seed cluster — or the root cluster or anything — is handled differently in those cases, right? So we anyway have to make sure that no one gets access at this layer.
I
Right — so I'm just trying to think out loud: is it really a concern to replicate the secret across multiple namespaces, given that we anyway have to make sure that even for one namespace you somehow do not leak access? In Gardener we put this responsibility on the higher-level cluster, that is, the garden cluster. So in the garden cluster the user gives the secret only once, and then you can have something like a project.
I
If you have created, let's say, project-1, then the admin of that project can say, I want to add ten different developers. If ten different developers are added, then only that set of people will have access — they will only be able to see the ID of the secret, basically — and then for the shoot cluster the actual secret will be installed in the seed cluster's namespace. So this is an interesting use case, yeah.
A
One question I would have about that is secret rotation. If you have one secret that's shared, it's relatively easy to rotate that secret, because you change the one secret and there's no propagation. Do you do secret rotation for Gardener, or have you thought about how that would work?
H
If you go back to the original why we do one control plane per namespace: it's much easier to delete it, and you are sure that you only delete resources that belong to this cluster — and this is a very big deal. Otherwise it's very nasty, because if I try to delete it manually, there will certainly be some leftover resources that you have to know about.
A
So I guess my point in poking this discussion was that I think it would be nice if we had a single, or a small number of, designs that we think are maybe the best way to do this, that we could promote, and have the API sort of suggest: maybe this is the way we think is the right way to do it. And there are at least sort of two solutions on the table here.
A
So I wanted to explore the rationales behind these two different options and see if there's a way we can merge them towards each other and have some agreement on what would make sense. I'm not sure we're going to get that answer right now, but I think it's definitely worth exploring. I guess the one big thing would be: I think the relationship between machines and clusters is fine, but I do worry that having a strong reference, like with an ownerRef, is going to break things.
G
So I still haven't heard a use case that would break because of having the strong reference. I understand that it's not needed, and it's just an additional thing that would have to be added to the spec and filled out, but I'm not hearing a "my use case won't work if I have to fill it out."
A
So if I understand that architecture correctly, basically the cluster spec would be in the garden cluster and the machine specs would be in the shoot cluster, and since those are in two different Kubernetes API servers, there's no way to make an owner reference between them, I believe.
G
Yeah, there was a side topic last week where we talked about whether the MachineSet controller should be able to create the machines prior to the cluster existing, and whether subsequently, when the cluster is deleted, the machines that were created by the MachineSet should be deleted as well — but I think that's for a later date.
K
For the dependency — I think it's mostly the controlling information, like the network or the API service, that kind of information; it's the template data required in the GCE controller setup. So that's why, in our interpretation, the machine was trying to reference the cluster.
I
Only the MachineSet should create or delete Machines, to avoid race conditions, right? In a similar way, a MachineDeployment should only talk to the MachineSet; it should not really mess with the actual Machine objects, and that gives us a kind of relief from the race conditions — otherwise, if somebody else tries to delete a Machine while the MachineSet is updating, it gets confused and so on.
I
So in a similar way, if we have one layer above — the cluster — I was expecting that the cluster controller, or whatever sits there, should basically consume the MachineDeployment from the higher level and expect that the lower-level stuff will be taken care of. So if you have a MachineDeployment, then eventually you should delete the MachineDeployment, and the MachineDeployment will eventually take care of deleting the MachineSet and the actual Machines, from the higher level down.
I
But those updates will basically come from the MachineSet, and the MachineSet should be the only controller which actually tries to delete or create the Machine objects — so there's only one controller which deals with Machines. In a similar way, one more layer down, the machine controller is the only one which talks to the cloud.
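A minimal sketch of the layering just described, where each controller only manipulates the objects one level below it. The kinds match the Cluster API object names, but the group/version, names, and replica counts are illustrative.

```python
# Illustrative only: MachineDeployment -> MachineSet -> Machine layering.
# Only the MachineSet controller creates or deletes Machine objects, and only the
# machine controller talks to the cloud, which avoids the race conditions mentioned above.
machine_deployment = {
    "apiVersion": "cluster.k8s.io/v1alpha1",  # assumed group/version for the sketch
    "kind": "MachineDeployment",
    "metadata": {"name": "workers", "namespace": "team-a"},
    "spec": {
        "replicas": 3,
        "template": {"spec": {"versions": {"kubelet": "1.10.2"}}},  # Machine template
    },
}

# The MachineDeployment controller manages MachineSets...
machine_set = {"kind": "MachineSet", "metadata": {"name": "workers-7d9f"}, "spec": {"replicas": 3}}

# ...and the MachineSet controller manages the individual Machines.
machines = [{"kind": "Machine", "metadata": {"name": f"workers-7d9f-{i}"}} for i in range(3)]
```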
A
So I think, in particular — you, or Martin, said that you didn't think a local reference would cause problems in your existing architecture. I would like it if we could confirm that before we merge the API change, because, as I said before, it's much easier to add API changes later than to take them out, and I wouldn't want to put something in that would prevent you from converging with the cluster API.
A
Maybe you could look at the PR and see — I'm sorry to put you off another week, but as I was looking through it and trying to reconcile it in my head with the Gardener architecture, it occurred to me that it might be incompatible, and I think maybe that's okay. I just think we need to be very careful about making sure we aren't closing doors we don't want to close, I guess.
A
And then Chris asked in chat about some reading we could do if this was confusing to people. I linked three issues there that I'd had open in my Chrome tabs, that I've been trying to read to catch up, and there's also the PR that's linked in the meeting notes — so I'll go ahead and link all of those issues into the meeting notes as well.
A
Yeah, so if people are trying to figure out where this conversation is coming from, the three issues are linked in the meeting notes, and then hopefully — I think I might have said this last week — we can come back in a week and have a little bit more consensus on how this would work.
A
All right, and then Ron's got a couple of follow-up action items from his two points at the beginning, about deleting some code and seeing how it goes trying to build a release. It should be interesting. So I guess that's it for today. Everyone, please follow up on issues if they're things you care about or are interested in, and we will reconvene next week at the same time. Thanks so much for coming. Okay, bye, everybody.