From YouTube: Kubernetes SIG Cluster Lifecycle 20180808 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.aux3ws54yyjb
Highlights:
- AWS Cluster API Implementation kickoff meeting is today
- Prototype MIG implementation under development
- Plans for extensibility of machine-controllers for out-of-tree provider support
- Should the apiserver port be part of the cluster spec?
- Scope of sig-cluster-lifecycle and how it relates to configuration of networking & storage
- etcd lifecycle management
- Cluster objects in multiple namespaces
- Using cluster api in eksctl, where AWS manages the control plane
A
The first thing I have on the agenda today (I put it in a minute ago since I saw Tim was here) is just an FYI that Tim has scheduled an AWS Cluster API implementation kickoff meeting today; it's scheduled a half hour after this meeting. Tim, I don't know if you have any extra context or background about that, or if you just want to advertise it, or if there are specific people you are hoping to corner into coming.
B
Just a general PSA. If you have a listing of requirements or things that you're looking for (because I know that different parties will have different requirements), please come to the table with any requirements doc you might have, and we'll try to rally around breaking down some of the work items and try to solidify an actual spec over the next couple weeks, so that we can all execute and then federate some of the work.
A
So next, I saw from the notes from last week that there was some more discussion about using auto scaling groups on Amazon, in particular rate limits, and I wanted to mention that there's someone at Google who has started doing a prototype implementation mapping machine sets and machine deployments to managed instance groups, which is the Google version of auto scaling groups, and is just sort of starting to explore what that would look like. We'll have some more to share in a couple of weeks.
A
I don't know if anybody's been working on a similar mapping for ASGs, but I think it would be useful to see how close the APIs are, and how close the user experience is between using individual machines with VMs and using the grouping concepts on provider environments that support those. So we're sort of using this as a way to test out what the difference looks like on GCP, where we do have the existing implementation using machine sets and a machine controller.
A
So I just want to share that that is happening. I don't think we have anything more to share yet; I haven't had a chance to look at the code yet myself, but someone is working on that. So if people are interested, please let me know, and if anybody's interested in doing that for other environments, that would be great.
C
Yeah, just wanted to follow up about the provider implementers office hours, just wondering about the calendar invites today. Maybe I need to sync up with you, Robert, offline to figure those out. And then I wanted to see if there was anyone who wanted to volunteer to host the Monday 9:30 a.m. Pacific Standard Time slot.
C
Yesterday there was some sort of a Zoom issue. I'm using the cluster-ops account to host the meeting, and it looks like, when I started hosting the meeting and people tried to join, they were getting this error from their Zoom client saying another meeting is in progress. I don't know if anyone is familiar with that error, or if I'm doing something wrong. I ended up, I think, closing it.
A
I believe that by putting these on the official SIG Cluster Lifecycle calendar, they should also show up on the public Kubernetes calendar, the overall calendar that shows when all of the SIGs have meetings. I don't know how many people go look at that, but it might also help build awareness too.
E
Hi, pardon me, it looks like I've got some internet connection issues. Hopefully it's properly audible. Right, okay, cool. So this is more of a generic question. I was recently going through the repo and I was wondering about what we're doing right now. We are basically separating out the machine controllers, or the provider-specific code, right? And the machine controller has its own boilerplate code that watches the machines, converts them, and so on. So I was just wondering about a discussion we had some time back.
E
Why not have a driver-based implementation, where you can still, I guess, share the machine controller code centrally, maybe in the controller manager, and yet expose the driver in a way that, let's say, you can make a gRPC driver separately, where you can just execute the create, delete, and whichever interface methods we want to execute, and any driver could basically execute that and it could still work.
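To make the proposal concrete, here is a minimal sketch of what such a driver surface could look like. This is purely illustrative, not an agreed design: the names (MachineDriver, CreateMachineRequest, and so on) are hypothetical, and in practice the message types would be generated from a protobuf definition and served over gRPC.

```go
package driver

import "context"

// CreateMachineRequest and DeleteMachineRequest stand in for what would
// be protobuf-generated message types in a real gRPC driver.
type CreateMachineRequest struct {
	ClusterName string
	MachineName string
	// Provider-specific settings would travel as an opaque blob.
	ProviderConfig []byte
}

type DeleteMachineRequest struct {
	ClusterName string
	MachineName string
}

// MachineDriver is the narrow surface E describes: the shared machine
// controller stays in the core controller manager and calls out to an
// external, provider-specific driver process for create/delete.
type MachineDriver interface {
	CreateMachine(ctx context.Context, req *CreateMachineRequest) error
	DeleteMachine(ctx context.Context, req *DeleteMachineRequest) error
}
```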
E
I was just wondering what's the view on it? It's just that right now there's too much freedom given to the machine controller, which can actually kill the homogeneity of the machine controllers. But part of the code could still be shared in the controller manager, and we could instead expose the actuator outside via gRPC calls. So, so.
A
So right now we have the machine controller generic code, and there's an interface for the actuator, right? I think it sounds like what you're talking about is basically making that actuator interface something that we're actually going to adhere to and call out over gRPC, and putting the common code maybe in a binary that everybody has to run.
A
I think right now you have the flexibility of saying: if I want to use the common code, I can start there and implement the actuator interface and use that as a machine controller, or I can say I want to just write my own machine controller from scratch. So I guess I'm wondering what you think the pluses and minuses are of switching to a model where we force you to use that actuator interface, versus giving you the option to use it if you want to have reusable code.
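For reference, the actuator interface being discussed had roughly this shape in the cluster-api repo at the time; the types are simplified here as local stand-ins, so check the repo for the exact signatures.

```go
package machine

// Cluster and Machine stand in for the cluster-api API types
// (clusterv1.Cluster and clusterv1.Machine), trimmed for brevity.
type Cluster struct{ Name string }
type Machine struct{ Name string }

// Actuator is the provider-facing seam: the generic machine controller
// handles watches and queueing, and delegates the actual cloud
// operations to a provider-supplied implementation of this interface.
type Actuator interface {
	Create(cluster *Cluster, machine *Machine) error
	Delete(cluster *Cluster, machine *Machine) error
	Update(cluster *Cluster, machine *Machine) error
	Exists(cluster *Cluster, machine *Machine) (bool, error)
}
```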
E
One of the things I could think of is that we know that the machine controller is still one of the most important building blocks in the entire stack, right? And whatever we would have under, let's say, kubernetes-sigs/cluster-api would, I believe, be monitored by a much larger audience.
E
Everybody would try to make that code as good as possible, whereas the provider-specific code, which is outside, would have the attention of only a very small or specific set of people. There are possibilities that, if they miss out on certain design-level decisions or miss out on some utilities, then it can affect the entire experience of the cluster API.
E
But if we ensure that the real machine controller, being part of the core controller, is still shared, and it is just, let's say, the implementation of the create, delete, and certain methods of the machine which is controlled by the provider-specific code residing externally, then we somehow reduce the risk there. And one more thing could be that if you want to add support for a new cloud provider, then with this kind of approach it becomes really straightforward.
E
Somebody just has to take the template from the other providers, implement the create and delete, and that's all about it. So from the maintainability perspective it would be nicer, and we can still keep a better hold on the complete experience of the cluster API, because I see the possibility that tomorrow XYZ cloud comes in and writes the code for their provider, and if it's not properly maintained, it affects the entire experience. So just wondering about that, yeah.
A
It's
him
suggested
in
chat,
alternative
title
implementation
to
G
RPC
would
be
a
simple
set
of
web
hooks,
which
I
know
is
something
that
both
of
those
are
are
solutions
that
kubernetes
community
has
embraced
right.
There
are
lots
of
places
in
the
core
or
use
web
hooks
and
they're
also
places
where
we
are
starting
to
use,
G
RPC,
so
I
think
yeah
either
either
one
of
those
would
be
fine,
I,
I
think
so.
A
Right
now
we
have
the
provider
specific
repositories,
and
presumably
this
would
it
might
slightly
reduce
the
amount
of
code
we'd
have
in
those
repositories.
Right
I
mean
if
I
look
at
the
GCP
one
right
now
we
vendor
in
the
cluster
API
code,
which
vendors
and
the
common
parts
of
the
Machine
controller,
and
then
we
basically
writes
the
machine
actuator.
You
know
go
file
and
implement
the
actuator
interface
and
then
compile
those
things
together
and
I.
A
I think that would be pretty similar in this model, either with webhooks or gRPC: you'd vendor in the main repo to get the definitions for that gRPC interface or for the webhook, and then you would still implement that. But you'd probably build your own binary at that point and you'd have to glue them together, right? So I think the biggest difference is that the deployment model would be slightly different.
E
The deployment model wouldn't really be that different, because this could also still be seen as machine controller drivers or something like that, running separately. The real reconciliation is still taken care of by the core controller manager; that's still in our control, and it can take the right decision if needed, if something's going really wrong left and right. The machine controller is still shared, still in the original repo.
B
I think the reason why it was proposed by folks is that it allows people to do POC-grade environments really, really easily without having to subscribe to the entire API. That way you could basically have stubs that do nothing in the default controller, and then you can register webhooks dynamically, similar to the way the API server registers dynamic webhooks. And if you can do that, then you can basically create your actuator in an incremental fashion for the different lifecycle events and deal with it independently.
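As a rough illustration of that point, and assuming hypothetical names throughout: an actuator whose lifecycle hooks default to no-ops, with real webhooks registered incrementally per event, might look like this.

```go
package webhook

import (
	"bytes"
	"fmt"
	"net/http"
)

// WebhookActuator maps lifecycle events ("create", "delete", ...) to
// externally registered webhook URLs. Unregistered events fall through
// to a stub that does nothing, so a POC can start with an empty map
// and add hooks one event at a time.
type WebhookActuator struct {
	hooks map[string]string // event -> webhook URL
}

func (a *WebhookActuator) Register(event, url string) {
	if a.hooks == nil {
		a.hooks = map[string]string{}
	}
	a.hooks[event] = url
}

// Invoke posts the (already serialized) machine object to the webhook
// registered for the event, or is a no-op stub if none is registered.
func (a *WebhookActuator) Invoke(event string, payload []byte) error {
	url, ok := a.hooks[event]
	if !ok {
		return nil // stub: nothing registered for this event
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("webhook for %q returned %s", event, resp.Status)
	}
	return nil
}
```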
E
Very good example; those are actually very good points. I would try to connect it with what we have done. We had the same thing: we made it driver-based, but we still have it in tree. Because it's driver-based, there's a certain set of methods which have to be implemented, let's say create and delete of VMs. We put this implementation in a client library, let's call it managedVMs, and if the external provider ensures that managedVMs.Create(namespace, name) for the VM works...
E
...well then, it indirectly ensures that it will work also with the original set of controllers, because this client is only about either the webhooks or the actuator call, so you don't have to worry about the rest of the stack of the cluster API. So that will really increase adoption; it's just really quick.
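A sketch of the client-library shape being described; managedVMs is E's name for it, and everything else here is illustrative. The controller codes against one small client, and whether the calls fan out to an in-tree actuator, a webhook, or a gRPC driver is hidden behind it.

```go
package managedvms

import "context"

// Backend is whatever transport actually performs the operation: an
// in-tree actuator, a gRPC driver, or a webhook. The controller only
// ever sees the Client.
type Backend interface {
	Create(ctx context.Context, namespace, name string) error
	Delete(ctx context.Context, namespace, name string) error
}

// Client is the single seam E describes: if an external provider makes
// Create(namespace, name) work, the rest of the cluster-api stack
// works unchanged.
type Client struct{ backend Backend }

func New(b Backend) *Client { return &Client{backend: b} }

func (c *Client) Create(ctx context.Context, namespace, name string) error {
	return c.backend.Create(ctx, namespace, name)
}

func (c *Client) Delete(ctx context.Context, namespace, name string) error {
	return c.backend.Delete(ctx, namespace, name)
}
```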
B
We've internally chatted about this as a potential push that we might want to do. I think maybe log an issue, and then if you want to CC me or other folks from Heptio on it, I think we'll happily chime in. But I think that making the standard controller as generically pluggable as possible is something that, from a prototype perspective, is super useful, and we've chatted about it internally.
A
Yeah
I
think
pretty
and
a--shoe
sounds
like
an
awesome
next
step.
I
think
conceptually
this.
This
sounds
pretty
similar
to
the
model
that
we
have
today
and
I
think
that
it's
I
think
part
of
the
things
you're
pointing
out
are
that
in
some
of
the
details
and
in
terms
of
maintainability
and
sort
of
for
a
future
ease
of
use,
this
might
be
a
better
model
to
be
using
than
what
we
have
today.
F
So this issue is something that came up when I was trying to provision a cluster. I got the master control plane up just fine, but the nodes couldn't join, and that's because we were trying to connect to the apiserver on the port kubeadm uses by default, which is 6443 since, I think, 1.6.3, and I saw that we had it statically set to 443.
F
So the way to fix that, if such a setup is used, is to tell kubeadm to use bind port 443. But I think I would like that to be more configurable, and this PR is an attempt to do that. Now, I'm open to suggestions on how to do that exactly; in the PR I put it in the cluster spec. I have a suggestion there.
C
I was wondering, is it correct to interpret from this change that the API endpoint, like, let's say, the master IP (I'm looking at the PR here, at the commit) would be known at the time that the cluster object is created?
F
Yeah, that will be known; the IP address itself will be known. What isn't known is the port you're using. Since kubeadm allows it to be configurable, why not allow it here? That way we can define that port at the top level, right at the cluster YAML file level, and that will be pushed into kubeadm, so you don't have to define it in kubeadm separately. And internally the controller will know what port to use and communicate that to the nodes, so that the nodes can go ahead and hook up to the masters.
F
The IP address isn't the problem; the problem is that the port itself is statically defined, and so if someone provisions a machine with kubeadm using the defaults, they'll be shocked and surprised that they can't add nodes to the cluster. So this is an attempt to fix that possibility. I mean, the alternative is to have your provisioning scripts actually set that value to 443, but I can see cases where someone may not want 443 and needs some other port.
C
If,
if
what,
if
what's
needed
to
be
configured,
is
the
port
and-
and
it
seems
like
it's
reasonable,
at
least
in
this
change-
it
seems
like
that
port
would
apply.
Well,
there's
just
one
master
right,
but
that
that
port
looks
like
it
would
apply
across.
You
know
any
any
control
plane
running
in
cluster.
C
Does
it?
Would
it
actually
be
a
you
know,
a
reasonable
compromise
to
say
you
know,
have
the
the
control
plane?
Let's
say
they,
the
API
server
port
in
the
spec,
and
that
can
be
defined,
that
you
know
the
time
that
the
cluster
object
is
created
and
then
the
that
you
know
that
would
be
consumed
by
you
know
the
actuator,
etc.
When
configuring,
whatever
bootstrapping
different
of
the
control
plane.
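A hypothetical sketch of what C is suggesting; the field name and placement are illustrative, not what the PR actually does.

```go
package v1alpha1

// ClusterSpec, heavily trimmed, with a hypothetical APIServerPort
// field: set by the user at cluster-creation time, consumed by the
// provider's bootstrap/actuator code, and defaulted (e.g. to 443 or
// 6443) when omitted.
type ClusterSpec struct {
	// APIServerPort is the port the control plane's apiserver binds to
	// and that nodes use to join. Optional; provider defaults apply.
	APIServerPort int32 `json:"apiServerPort,omitempty"`
}
```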
A
And so, if you had something that was responsible for creating the control plane, and it knew to pass this port through, then it could report that port in status, right? And right now that link is basically that we've hard-coded 443 in two different places, which is not great; I agree with that. But I wonder if that should be hidden from the user, so we don't put the burden on the user to know what that port is and have to enter it as part of the spec.
A
So I guess I'm wondering what your use case is. Are you creating a cluster sort of using kubeadm by hand and then adding the cluster API stuff on top of it? Or are you trying to use clusterctl to create the cluster and having that run kubeadm for you? The latter? And in your case it's using kubeadm init, but it's binding to a different port, like it's binding at 6443; it's just in the...
A
So there's another open issue that's somewhat related, which is that we'd like for a controller to be able to populate the endpoint of a cluster, so that we don't have to implement the common interface there. I can't remember what the issue is, but there's basically two functions we're trying to get rid of so that we can make clusterctl actually be generic and not have to have providers compiled in, and one of those is a way for providers to report the endpoint of a cluster.
A
Let me link, on your PR, a link to that issue, and maybe take a look at it, because that is sort of a standing issue that we want to solve regardless. Maybe we can poke at it and see if there's a convenient way: either, if that issue is solved, it would fix your problem, or maybe there's a way to solve that issue and cut it off the list of things we want to fix.
A
Okay, Alejandro, I would say: please put this on the agenda for next week, and don't just let this drop, because I think this is something we want to fix. If we can't find a different way to fix it, then I think we should circle back to whether it makes sense to put it in the spec. But I think we should first see if there's a more elegant way to fix it, where we don't have to put it as part of the spec.
D
Yeah, so this is kind of a new question, because I just recently joined this effort, but is there any sense in which the SIG Cluster Lifecycle charter includes setting up and managing the control plane? Are there any plans for the cluster API to also have APIs for setting up and maintaining it?
B
There exist parameters for other tools to be able to specify all the potential options that you would need for different configurations of the control plane. With regards to how you set up given CNI providers, there's two parts to that: there's picking your CIDR address range, which you have to do as part of your setup for your control plane, and then there's actually picking the CNI that you want to use, and those are two distinct steps.
B
But
if
the
other
options
for
that
already
exist
inside
of
a
API
for
qadian
proper
that
it
has
for
its
configuration
file
that
you
could
specify
through
no
for
other
different
deployment
tools.
I,
don't
know
know
whether
or
not
they
have
a
well
version
semantics
around
it's
their
configurations.
But
that's
that's
outside
the
scope
of
Sigma
stroll
outside.
B
There's
a
million
there's
a
million
deployment
tools
that
exist.
We
capture
ones
that
week
and
pieces
that
are
come
that
we
maintain
and
control
so
like
I
can
I
can
speak
to
details
of
how
kuba
TM
is
versions.
I
cannot
speak
the
details
of
how
other
deployment
tools
do
their
versioning
first
configuration
for
their
control
plan
components,
okay,.
A
Another way to think about this is that to actually create a functional cluster, you certainly do have networking and you probably want to have storage, and if you look at what we're doing today, setting up the networking is provider-specific, right? So the networking that we set up on GCP is gonna look a little bit different than on AWS, which is gonna look a little bit different than on vSphere or wherever, etc., and so that's part of the provider-specific cluster controller.
A
When
you
create
that
cluster
and
I
would
expect
there
to
be
options
in
each
of
those
provider
provider
specific
blobs
for
how
to
configure
the
network
so
like
on
GCP,
for
instance,
sign
those
API
is
pretty
well.
You
would
probably
say:
I
want
my
cluster
to
be
in
this
network
in
this
sub
Network
in
this
GCP
project.
Alright,
so
those
are
all
sort
of
networking
fields.
Now,
as
a
provider
implementer,
we
may
or
may
not
want
to
say
you
get
to
pick
calico
versus
we
versus
flannel
versus
whatever
other
scan
dye
provider.
A
We may decide we're gonna choose one, because we only want to have to support one, right? And if you want to run using a different network provider, then you could create a different implementation of your own on GCP, you could send pull requests to change the existing implementation, or, since a lot of network providers are installed sort of as cluster add-ons...
A
You
may
actually
be
able
to
create
a
cluster
and
then
sort
of
remove
the
existing
network
provider
and
create
a
new
network
provider,
but
I
suspect,
okay,
that
sort
of
networking
configuration
to
be
provider-specific,
okay
and
storage
is
similar.
I.
Think
the
interesting
thing
with
storage
is
that
we're
pretty
much
always
going
to
want
to
have
a
default
storage
class.
A
So
if
you
look
at
the
flags
to
cluster
there's
a
flag
to
pass
I
think
I
called
it
like
the
addons
file
or
something
like
that,
and
in
like
the
GCP
provider
examples
it
creates
the
proper
default
storage
class
for
you,
which
is
equivalent
to
what
we
get
with
something
like
Cuba
and
I.
Think
with
cops
for
other
providers.
A
We
can't
create
a
generic
default
storage
class
because
it
depends
a
little
more
on
the
environment
like
if
I
know
in
the
vSphere,
one
I
add
a
storage
class,
but
it's
like
a
half
filled
out
the
ML
file,
because
it
depends
on
your
particular
parameters
of
your
vSphere
installation
and
so
that
one
you
have
to
like
tweak
a
little
bit
by
hand
and
then
you'd
apply
that
storage
class.
But
again
in
the
internet
end
you
get
a
default
storage
class
I.
A
I think in terms of things like CSI providers, that's something that the SIG Storage team would be involved in supporting, and from our point of view as a provider of the cluster API, we might ask SIG Storage and they might say you should use this one, and we would install that. At the moment the extent of that is the default storage class.
J
We talked about the CNI implementation; for the provider, user code may want to choose the CNI. Thoughts on, for example, having in the spec itself a well-defined parameter which could specify, for the given provider, what CNI you want to use? And of course it's up to the provider; a provider could choose to implement, you know, just one CNI, or any number of CNIs that are available, for whoever is using the cluster API to deploy a cluster.
B
It might change over time, so by willfully not making those choices in the beginning, it allows more options to the user and it allows us to not have to configure these things; it's a post-processing step, and CSI is planning to move that way too. So that way we can lay down the control plane and then you can apply things on top, and how a given implementation for a provider chooses to do their business within their provider is totally up to them.
K
I
I
would
I
would
argue
that
that
we
do
need
some
kind
of
representation
there,
because
I
mean
one
one
thing
that
it's
like
right
now:
there's
no
any
representation
of
what
look
at
what
the
desease
provide
network
provider
on
in
the
cluster
and,
if
it
does,
does
it
have
setting
pods
associated
with
it
right
and
what
sort
of
network
labs
would
actually
provide,
and
such
things
there
is
no
like,
even
just
just
like
sort
of
finding
out.
Is
this
cluster
running
deep
or
plumb?
A
So we do have that: there is a provider-specific raw extension in both the cluster spec and in the machine spec. That's what I was saying before: that provider-specific part of the cluster spec is probably where you'd want to put this, and I think if we start to see some common patterns emerge across different providers, then it would make sense to promote that to the top-level spec and say this is something we want to be consistent across providers.
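The raw extension A mentions had roughly this shape, simplified from the v1alpha1 types (check the repo for the exact fields): an opaque, provider-defined blob embedded in the portable spec.

```go
package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

// ProviderConfig embeds an opaque provider-specific payload in the
// cluster (and machine) spec. The core API doesn't interpret Value;
// each provider unmarshals it into its own config type.
type ProviderConfig struct {
	Value *runtime.RawExtension `json:"value,omitempty"`
}

// ClusterSpec, trimmed to the relevant field.
type ClusterSpec struct {
	ProviderConfig ProviderConfig `json:"providerConfig"`
}
```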
A
It's not clear to me yet whether all providers are gonna want to allow choice of CNIs or whether they're going to pick one, because again the test matrix starts to kind of explode if you say we're gonna support five different CNIs and five different storage providers and five different container runtimes, right?
A
Right, so I think we should start out with it being in the provider-specific pieces, and so, if there's a demand to run two or three different CNIs declaratively on, say, AWS, then we build that into the AWS provider-specific piece, and if that looks like something that we should make common across providers, then we can think about promoting it up. I'm much more in favor of bubbling things up that way, actually seeing real use of them in a single provider.
K
I
believe
that
part
you
just
sort
of
like
I,
think
there's
a
just
generally.
There
is
a
currently
a
lack
of
any
representation
of
the
network
within
the
cluster,
while
actually
the
network
provides
some
critical
functionality
and
it's
often
important,
for
example,
right
right
now,
I'm
working
with
ETS
a
lot
and
I
work
on
the
key
s
fiddle,
which
I
want
to
talk
about
a
little
bit,
but
there
there's
like
a
CNI
provider
that
happens
to
be
a
demon's
set
in
huge
system.
K
Main
space
was
a
certain
name
and
I'd
like
to
replace
it
with
we've
met,
but
I
basically
have
to
you
know:
I
have
to
sort
of
define
various
tech
object
to
remove
that
that
Givens
even
set
and
putting
a
Pete
net
and
then,
like
so
cycle,
all
the
pods
and
such
things.
It's
kind
of
like
it's
very
a
dark,
I,
think
I,
think
network
deserves
having
some
kind
of
high
level
representation.
A
I think that comes back a little bit to what Tim was saying earlier: if we can establish here's a control plane, then you can effectively kubectl apply your CNI on top of that in most cases. And so if the cluster controller implementation defaults to kubectl applying, you know, Calico, and you want to kubectl apply Weave, then what you can do is put a flag in the provider config and send a pull request.
A
Add
a
flag
and
capella
apply
weave
instead
of
calico
and
now
you've
got
your
choice:
declaratively
for
users,
right
and
so
I
think
the
question
is:
is
there?
Is
there
demand
from
users
to
have
that
choice,
and
and
how
do
we
specify
it
in
the
spec
right?
Is
it
a
name?
Is
it
a
name
in
a
version
like
I
think
there's
a
little
bit
of
a
rabbit
hole
to
go
down
there
and
I
want
that
to
be
explored
sort
of
in
the
sort
of
safer
space?
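A toy sketch of the flag-driven choice being described; the names and manifest paths are made up for illustration.

```go
package cni

import "fmt"

// manifestFor maps a --cni flag value to the addons manifest the
// cluster controller would "kubectl apply" after the control plane is
// up. The paths here are placeholders, not real manifest locations.
func manifestFor(cni string) (string, error) {
	manifests := map[string]string{
		"calico": "addons/calico.yaml",
		"weave":  "addons/weave.yaml",
	}
	m, ok := manifests[cni]
	if !ok {
		return "", fmt.Errorf("unsupported CNI %q", cni)
	}
	return m, nil
}
```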
A
So, in a specific provider, you might actually be able to specify it as actual types, like here's the Weave config or the Flannel config or the Calico config. But if you want to make that generic so it applies to all environments, it's gonna have to, you know, probably just be strings, and then you have a lot less type safety and you can't put anything provider-specific in.
F
Point
the
way
we
are
doing
right
now
is
actually
via
new
strap
process.
If
you
just
give
them
self
define
to
see
a
knife,
but
then
you
have
this
issue
where
you
have
some
of
it
configured
on
the
cluster
Emmitsburg
versus
where
you
have
their
recently,
and
you
might
have
this
conflict
where
the
best
adders
may
not
work
for
that
particular
mine
and
you
have
a
suitable
one.
A
Yeah
I
think
the
more
about
you
can
sort
of
hide
from
the
end
user.
The
better.
If
the
end
user
says
this
is
a
cider
that
I
once
and
you
you
can't
use
that
cider.
You
can
give
them
back
an
error
and
say
that
cider
doesn't
work.
You
know
it's
already
in
use
or
you
know
it's
gonna
have
a
conflict.
You
know
I,
think
most
users,
probably
don't
care
too
much
about
the
specific
network
implementation.
As
long
as
it
works,
maybe
I'm
wrong
right,
I
think
you
know
I
think
Ellie.
A
Cool, but I think we should definitely explore what it would look like to make the CNI and the networking configuration part of the spec and how that ripples down. And maybe, since you guys are working on EKS, it would make sense to do that in the AWS provider to start and see what that looks like; and then, for the community, that would be great.
C
We don't really talk about etcd, but it's essential for running production clusters, for running HA clusters. Of course, one answer could be: just bring your own managed, out-of-band etcd cluster, put that in, let's say, your provider spec, pass that on to kubeadm, and you're done; the cluster API won't touch it. On the other hand, maybe that etcd is something that is in...
C
You
know
the
scope
of
the
like
the
top-level
cluster
API,
and
so
my
question
is
around
that
you
know:
do
we
do
we
think
that
will
have
a
separate
controller
for
for
managing
STD,
since
it
really
does
have
very
different
semantics,
it's
more
like
you
know,
stateful
set
rather
than
you
know,
replica
set
semantics.
So
do
we
have
do
we
think
that
we'll
build
a
separate
controller?
Do
we
think
that
will
extend
the
Machine
controller?
I,
don't
know
exact
or
will
we
leave
this
to
providers?
C
And
so
one
thing
to
consider
is
that
you
know
managing
as
he
Diaz
is
not
easy,
and
you
know
if
we
leave
it
to
providers,
it
might
be,
might
be
a
high
ask
and
it
might
actually
impede
sort
of
the
adoption
of
of
the
cluster
API.
You
know
for
for
production
use
cases,
so
just
we
just
want
to
throw
that
out.
There
I
have
strong
feelings.
A
I do want to kind of put a hold on this discussion, because, you know, etcd could definitely eat up the rest of this meeting, and as Tim mentioned, I know that Justin has a lot of thoughts about etcd and actually has a project for etcd management that I'd love to have him talk about, and he's out of the office this week.
L
Jessica pointed out that when this issue was created in the first place, the idea was to have a namespace per cluster, and I kind of felt it would be more useful to have all clusters in one namespace and somehow tie machines to a cluster based on some tag or some key to connect the two. So I wanted to bring it up to see what other folks think about that approach.
A
I use namespaces, and one of the reasons I do it is that if you have multiple clusters, they'll often be using different credentials, and separating credentials into different namespaces is a really good idea from an isolation point of view: if someone has access into a namespace, they have access to those credentials, but they don't necessarily have access to credentials in other namespaces. So you might have three different clusters, and developers that want to have their own control over some credentials that shouldn't be shared.
K
I guess my question was really: if I create an object in this management cluster, an object that results in another cluster being created, does the namespace where the object lives map to anything to do with the child cluster? Like, is it used as a name prefix, or is it not used at all? Do I still have to ensure my cluster names are somehow unique, depending on what the provider exposes there?
A
That
cluster
names
are
gonna
have
to
be
unique,
to
add
them
to
kubernetes
right
when
you
create
a
cluster,
it's
gonna
have
to
be
unique
name
within
its
scope,
so
either
you
know
if
they
were
non
namespace,
it
would
have
to
be
unique
it
completely
if
their
name
stays.
They're
gonna
have
to
be
unique
names
within
that
namespace
right.
So
another
advantage
of
having
them
be
names
based
is
that
you
could
have
two
clusters
at
the
same
name.
You
could
put
them
in
different
namespaces
yeah.
B
They're usually tightly coupled to identity management systems; namespaces, in most integration points, are typically integrated with identity management of some kind. That way you could have groups who administer cluster A and other groups that administer cluster B, and there's a million identity management systems, so the integration is glue outside of that, typically, from what I've seen. That's my perspective.
K
Well
I
was
I
was
trying
to
to
ask
this
very
specific
question
where
I
don't
think
it's
got
that
we
understood
so,
let's
say:
let's
imaginatively
is
provided
right.
We
have
AWS
provider
and
we
created
that
provider
and
it
goes
and
creates
resources
in
AWS
that
are
not
prefixes
anything
they're.
Just
like.
Let's
say
we
create
a
cluster
called
foo
in
through
cluster
API,
and
we
and
the
controller
the
provider
goes
and
creates
a
bunch
of
resources
that
are
prefix
this
food.
K
Now
somebody
creates
foo
in
another
name,
space
controller
can't
do
anything,
should
the
controller.
Actually,
you
know
take
the
names
place
into
your
account
and
prefix
it
with
like
I,
don't
know,
let's
say
the
first,
who
wasn't
default,
namespace
default
food
and
create
all
the
resources
with
those
names.
So
I
think
this
is.
This
is
radio
that
I
was
thinking
about
that
was
reading.
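A minimal sketch of the naming scheme K is describing; purely hypothetical, not anything a provider actually does.

```go
package naming

import "fmt"

// resourcePrefix folds the namespace into the cloud-resource prefix so
// that a cluster "foo" in the "default" namespace ("default-foo") can
// never collide with a cluster "foo" in another namespace.
func resourcePrefix(namespace, clusterName string) string {
	return fmt.Sprintf("%s-%s", namespace, clusterName)
}
```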
K
I mean, this was very hypothetical, but I think this is kind of important for implementers to understand. Actually, in AWS, now that I think of it, in certain cases this wouldn't really matter; it would matter for things like CloudFormation stacks, potentially, but not for other things where you don't actually have to supply names.
K
Know
very
briefly:
I
just
I
just
wanted
to
say
that
you
can
tell
is
a
project
that
we
worked
on
and
and
eat.
You
know,
depending
whether
it's
a
gator
be
a
sponsored
or
not
we're
looking
to
proposals
there
and
what
I'm,
what
I'm?
What
I
really
wanted
to
say
here
is
that
we
have
a
plan
to
implement
cluster
API
support
in
ETS,
cool
and
yeah
and
I.
L
That's not super important; we can push it out to next week, but I'm not sure where we landed on the multiple namespaces thing. Do we still want to keep a namespace per cluster, or how do we want to do it? I know Hardik mentioned access control, or multiple people mentioned access control, but wouldn't access control be something that you would implement or enforce in the internal cluster rather than in the external cluster? And also, how would pivoting work if we create a namespace per cluster?
M
Okay,
it
dictate
because
the
agreement
that
we
had
gotten
to
before
was
that
it
could
be
an
optional
reference
for,
in
some
cases
when
you
do
want
to
have
the
the
reference
from
the
cluster
from
the
machine
to
the
cluster
or
not.
But
it's
probably
needs
a
little
bit
more
time
to
discuss.
Then
one
minute
quite
well.
A
Just on that: before, I pointed you to issue number 41; I linked it to your issue, and I think if you search back through the meeting notes you'll see some notes from when we had discussed this before. I think Cesar is correct, and the agreement we came to before was that we do want to have the option for making them more tightly linked via a reference, but that we don't necessarily want to force that reference to be there.
A
All right, we are a minute over, so we're gonna call the meeting here. Thank you, everyone, for coming, and again, one more plug for the meeting in half an hour about the AWS implementation. I think a lot of people are probably interested in getting this running on AWS, and we definitely want to make sure people are rowing in the same direction.