A
Good, thanks. So welcome to the kickoff meeting of the control plane management work stream, which contributes to lifecycle management. First of all, I would say that across all the work streams, if you look at this work stream, it is a much broader topic compared to the other topics, and I guess it first of all requires narrowing down at some point: what is in scope, what is out of scope, and what do we want to do with the control plane components and such, right? So in the agenda I have first added a point about how to, first of all, define what comes, I would say, as part of this work stream. I guess this is very long term; we should probably take care of how we create, how we upgrade, how we scale if we want to, and how we really delete the control planes as such, for Cluster API.
A
So, if you look at the Cluster API project right now, we have one standard way of managing the control plane, where we go ahead and create the master machines and say that these are the dedicated machines which hold your control plane — they are the brain of your cluster. Now there are also conversations happening — I have heard this a lot of times in the Kubernetes community — that we need to think of ways where we do not really denote the master role, the node roles, and so on.
A
All the nodes should probably be considered the same. And with that there are also other approaches. When this was proposed it also came up that we could have a control plane cluster — or let's call it a management cluster — where the management cluster can hold the control planes of multiple workload clusters as well. This model is quite well known; there have been talks at KubeCon and so on.
B
I know one of the things we were considering when we were talking about this within VMware is that there are use cases where both of the two different approaches really make sense. So when we were initially trying to break down how we could see this happening, we envisioned basically being able to abstract away the control plane enough that it could facilitate both approaches.
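[Editor's note: a minimal Go sketch, for illustration only, of the kind of abstraction B describes — a control plane interface that could be backed by dedicated machines, by pods in a management cluster, or by a managed service. All package, type, and method names here are hypothetical, not part of any actual Cluster API proposal.]

```go
// Hypothetical sketch: a provider-agnostic control plane abstraction.
package controlplane

import "context"

// ControlPlaneSpec captures the provider-independent intent.
type ControlPlaneSpec struct {
	Version  string // desired Kubernetes version, e.g. "v1.16.2"
	Replicas *int32 // nil means "implementation decides" (e.g. managed services)
}

// ControlPlaneStatus is what every implementation reports back.
type ControlPlaneStatus struct {
	Ready    bool
	Endpoint string // API server endpoint published to the user
}

// ControlPlaneProvider abstracts the backing implementation: machine-based,
// pod-based in a management cluster, or an external managed service.
type ControlPlaneProvider interface {
	Create(ctx context.Context, spec ControlPlaneSpec) error
	Upgrade(ctx context.Context, spec ControlPlaneSpec) error
	Scale(ctx context.Context, replicas int32) error
	Delete(ctx context.Context) error
	Status(ctx context.Context) (ControlPlaneStatus, error)
}
```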
A
I think I would also completely agree on this. Both of the approaches have their own trade-offs, and I guess both of them make a lot of sense. But then the question that comes to us is: how do we move ahead? For the second approach — the management cluster approach, which has been discussed a lot of times in the community calls — I guess we could never actually produce a document which states how it would eventually look, right?
A
So that's one action item, or something that I want us to kind of decide on this call: if you agree that this is really important for that approach, then I'll probably try to create one proposal doc on this approach — what the model would look like, or what the migration from the existing way of doing clusters to the management way of doing clusters would look like. That's one thing that we have to do. But then, what about the existing approach?
A
Well, where we have dedicated master machines, I think there are still a few open questions. We still require a good document which states the nuances of this approach. For example, if there is only a single master machine, or if there are multiple master machines — and if there are multiple master machines, then how do you want to manage the control plane across all the master machines? So, for example, how do you scale up your etcds if you want to scale your number of master machines, and so on?
A
And I also copied the diagram that we had in the use case document, for the folks who are probably not aware of what these two different main approaches are — the last two blocks of this diagram are what show the difference. If you still have questions or something, I guess this is a good platform to actually talk about it and its internal details, because otherwise, you know, in the doc it will just fall through the cracks.
A
So,
okay,
then
this
was
the
one
aspect
which
I
want
to
go
discuss
and
we
shall
proceed
with
this.
With
respect
to
this
expect,
we
will
probably
proceed
with
the
dogs,
with
both
of
the
approaches,
any
studies
and
more
react
routes.
If
someone
moves
and
their
way
of
doing
things,
and
probably
a
good
time
to
edit
here
and
from
that
purposeful,
so.
B
Does it mean that a Cluster API controller should be able to bootstrap a GKE cluster, AKS clusters, and so on?

A
Yes, oh yes — that also gives an interesting use case. So all the managed Kubernetes offerings could also be spawned via the same controller. So a user goes to the Cluster API and should be able to say: I want a cluster which is conforming to the Cluster API. That's one way; the other way…
A
Cool, thanks. Anything else anyone has in mind which might be different from the existing approaches which we have? Any slightly different variations, or anything that falls over for use cases which might be different from this? Probably, David — I think you sent something; you guys have been looking into the management clusters and such. So if you have any input, or if you think there could be any other, better way of doing this, it would be good to include that as well.
B
So I think one of the challenges that we're gonna have as we get deeper into the design is that we do overlap with at least two of the other work streams: one of those being the data model work stream, and the other one being the extension mechanism work stream. So, to avoid rabbit-holing into those topics here, what we should probably do is bubble up the requirements that we have — as far as, like, the data model.
B
So, instead of talking about exactly what the data model looks like, we can talk about what the requirements are that we have for the data model: what attributes do we need to be able to get from the user, what attributes do we need to be able to publish back to the user, things like that. And then for the extension mechanism, we should probably just black-box that for now and just call out that, you know, this needs to happen through an extension mechanism, and not get too deep into the design around it.
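[Editor's note: a minimal illustrative sketch, with hypothetical field names, of how those requirements could be written down — attributes collected from the user versus attributes published back — without committing to the data model work stream's eventual shape.]

```go
// Hypothetical sketch of control plane data-model requirements.
// Spec = attributes we need to get FROM the user;
// Status = attributes we need to publish BACK to the user.
package controlplane

// ControlPlaneRequirementsSpec lists user-supplied intent.
type ControlPlaneRequirementsSpec struct {
	Version        string            // desired Kubernetes version
	FailureDomains []string          // e.g. zones to spread across
	ProviderRef    string            // opaque handle into the extension mechanism
	Labels         map[string]string // passthrough metadata
}

// ControlPlaneRequirementsStatus lists what we publish back.
type ControlPlaneRequirementsStatus struct {
	Endpoint        string // API server address the user connects to
	ObservedVersion string
	Ready           bool
}
```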
A
Okay, then, moving ahead. Also, in the use case document it was discussed what would be a good way of doing control plane healing. I think the day-one task of deploying the control plane and handing over the Kubernetes cluster is fine — that's the easy task. The hard task is to maintain the cluster: when the user starts using it and you see load coming in, the control plane that we have already deployed may start suffering.
A
So then the question comes: how do we want to make sure this control plane always stays healthy? If we refer to it in general as control plane healing, then the question arises whether we want to scale up a couple of components. And when I say scale up, it's not a generic term; I think each component has its own very different characteristics, and hence the scaling part has to be taken care of differently.
A
For example, for etcd, horizontal scaling doesn't make sense — I mean, unless we have a very smart way of horizontal auto-scaling. It's not an obvious case for horizontal auto-scaling; if you have a single node or three instances, vertical scaling could probably make more sense there, you know.
A
Similarly, if you look at the API server, then for the API server both horizontal and vertical scaling make sense, but then we have to decide which one takes precedence, and we need to come up with some good algorithm for when to do what. So this is also a much broader topic, and in this group I wanted to ask how we see it: whether this should fall under this work stream, or whether there should probably be a completely separate effort for it.
B
The other concern that I see is that we've talked about three different kinds of mechanisms for activating the control plane, and this could potentially be quite different based on the implementation backing it as well. Like in the case of the managed service control plane, you may not even be able to scale it — on some of them you can only specify that at instantiation time, if at all.
A
Yes, I think that's a very good point. In terms of control plane healing, the control we have over the managed services lies somewhere in between zero and everything. In the case of, say, EKS — and obviously also AKS — you can get the control plane, but you don't get access to the different components. That's right, and…
E
I think it's important to consider what aspects we consider in scope and out of scope. If we take the simplest case of the API server, it can be very easily horizontally scaled because it's stateless, and if we consider a managed service that will kind of maintain the high availability for us, then the API could potentially be as simple as a number of replicas, or "automatic" as a string. I think we should really try to keep it as simple as possible.
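[Editor's note: a minimal Go sketch of the "number of replicas or automatic" idea E mentions — a field that either pins an explicit replica count or delegates the decision to the implementation, e.g. a managed service. Names are hypothetical.]

```go
// Hypothetical sketch: explicit replica count, or "automatic".
package controlplane

import "fmt"

// ReplicaPolicy holds either an explicit count or "automatic".
type ReplicaPolicy struct {
	Automatic bool  // true: implementation decides / managed service handles HA
	Count     int32 // used only when Automatic is false
}

func (r ReplicaPolicy) String() string {
	if r.Automatic {
		return "automatic"
	}
	return fmt.Sprintf("%d", r.Count)
}
```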
A
Yes, I guess it is, but there are also some complications — this also comes from real experience with the API server. As we know, what the API server really does is try to mirror the full cache, right? And what becomes very noticeable is that if you have a very large cluster, then there is some base memory that is always going to be used in all of the instances of the API server.
A
Which basically means that two pods with 1 GB of RAM each are probably not equivalent to one pod with 2 GB, right? Because in the first case you end up basically replicating the cache in both instances (say the cache baseline is 800 MB: two 1 GB pods each have only about 200 MB of headroom left, while one 2 GB pod has 1.2 GB). So the point is taken that there should be a simpler mechanism to do it, but I guess this specific aspect also requires hints or inputs from the folks who have some real experience with running the control plane.
A
So, no, but I would say let's keep it in scope for now. Let's try to define and document what the low-hanging fruit is. If there are really easy ways to do this — for example, at the early stages — we can probably enable it as a first step, and then see where the real complications are. Which is also what I was trying to ask in terms of keeping it in scope.
A
Yes — in the proposal docs there should be a separate section which explicitly outlines how the auto-scaling part, the scaling of the control plane, would be handled in each of the different approaches. I think that makes sense, and this could also be a differentiator: how well you can manage the control plane in the different approaches.
A
Cool. So then I was also trying to understand: how do Kubernetes version upgrades affect the control plane? When we talk about the lifecycle of the control plane, it definitely includes the phases where you want to upgrade from one version to another version — and there are minor versions and major versions — and with all the different kinds of implementations this can be tackled in completely different ways, right? So here also, I guess, we have a good chance to decide that.
A
Do we want to take on all of the upgrading of the control plane components here, or is it something that we take in a more generic way on the Cluster API side in general — because it also includes the worker machines as such? It might not even make sense that we only upgrade the control plane components and not the worker machines, and it's more or less a spillover across different areas: probably on the node side, in the node lifecycle, this will also be considered, right?
B
So I see it as being in scope, and I think there are two components to this. One is how we surface up — how you define — an upgrade, and that could be kind of common despite the implementations. And then there's the other component that's going to be very implementation-specific, because how you would manage a control plane upgrade for, say, a machine-based deployment versus a pod-based deployment would look considerably different.
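[Editor's note: a minimal Go sketch of the split B describes — a common upgrade definition plus an implementation-specific rollout. All names are hypothetical.]

```go
// Hypothetical sketch: the upgrade *definition* is common across
// implementations, while the rollout strategy is implementation-specific.
package controlplane

import "context"

// UpgradeSpec is the implementation-agnostic part: what the user asks for.
type UpgradeSpec struct {
	FromVersion string
	ToVersion   string
}

// UpgradeStrategy is the implementation-specific part.
type UpgradeStrategy interface {
	// Rollout performs the upgrade; a machine-based implementation might
	// replace master machines one by one, while a pod-based one might do a
	// rolling update of control plane pods in the management cluster.
	Rollout(ctx context.Context, spec UpgradeSpec) error
}
```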
A
Also, I could see that we would require a separate doc for this, because, as I said at the start, this is a much broader topic as such, and I think we might not want to dive into all of the aspects immediately; later on we can pick up the specifics in more documents. Right, yeah.
E
It seems like we, as the control plane work stream, have an easier task than the nodes work stream, because with the Kubernetes upgrade flow you upgrade the master services before you upgrade the nodes. So if we want to focus on upgrading the master components — I'm thinking about the complication of needing to cordon the nodes and drain the nodes — I don't think that's in scope for this. Obviously, no.
A
Okay, fine! If anyone has any thoughts, any positions, please feel free to dive in, guys. These meetings have been ending pretty quickly so far, and the whole reason for these work stream meetings was that we can discuss these small things and align on the approaches, which we were not able to do before, because in the general calls we cannot have meetings like this. So this is a good platform; even if you just want to think out loud, this is a great forum for that.
B
So I do want to say that one of the things that we should keep in mind is that not everybody that's interested in this topic is able to attend the synchronous meetings. So wherever possible, I think we should try to collaborate through asynchronous means. The meetings are great for working through some things, but we need to make sure to stay inclusive as well, I think.
A
Take the existing cluster controller, let's say — a controller which is dedicated to managing the cluster objects, more or less. We know that, if not now, then probably in the near future, we will start seeing a pattern where there are a lot of responsibilities, small things, which need to be integrated into this one controller.
A
We have seen with our experience in Gardener that small things become small pieces which create a lot of complications, and eventually you have one big controller just looking over probably 100 or 200 different clusters, and then you start seeing the problem where the 150th cluster has to wait for some time until the other clusters' installation is over, and so on. So that's where the question of concurrency kicks in, but we want to think in terms of extensibility.
A
So, with that: should there be one big, mammoth controller which does everything, or should we be able to provide extensions which are dedicated to smaller responsibilities? For example, an infrastructure controller — a controller which is dedicated not to all kinds of interactions with the cloud provider, but to the infrastructure resources which are essential for the Cluster API project. This controller's job should probably be only to make sure that, given a specific infrastructure resource — say we create a CRD which describes a specific resource —
A
— this controller's job is to make sure that this resource is available, healthy, and instantiated on the cloud. The central controller then doesn't need to really worry about it; it doesn't have to have any kind of code paths to look there and do anything about it.
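[Editor's note: a minimal Go sketch of the dedicated infrastructure controller just described — a small reconciler whose only job is to keep one kind of declared infrastructure resource present and healthy, so the central controller carries no cloud-specific code paths. All names are hypothetical.]

```go
// Hypothetical sketch of a dedicated infrastructure controller.
package infra

import "context"

// InfraResourceSpec is the hypothetical CRD payload.
type InfraResourceSpec struct {
	Kind       string // e.g. "dns-record", "load-balancer"
	Name       string
	Parameters map[string]string
}

// CloudClient wraps whatever provider-specific API the controller talks to.
type CloudClient interface {
	Exists(ctx context.Context, spec InfraResourceSpec) (bool, error)
	Create(ctx context.Context, spec InfraResourceSpec) error
}

// Reconcile ensures the declared resource is present; health checks and
// deletion would follow the same pattern.
func Reconcile(ctx context.Context, c CloudClient, spec InfraResourceSpec) error {
	ok, err := c.Exists(ctx, spec)
	if err != nil {
		return err
	}
	if !ok {
		return c.Create(ctx, spec)
	}
	return nil
}
```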
A
And there could be other examples, like DNS management — sometimes DNS gets tricky. What happens is that the initial requirements for the DNS change later on: now I want some other alias, or you probably want to change certain DNS entries, or you want more entries to be supported for the existing load balancers, and so on. So there could be an extension mechanism for the DNS, and so on. I also wanted to know your views: how do you look at this concept, and should we think in this direction, now or in the future?
E
So I think we should avoid thinking about concurrency as a problem of having one binary versus many binaries, or one controller versus many controllers, because right now the core cluster controller blocks in certain cases where it doesn't need to block — because now we have status, and we're using status in a more appropriate way. So I think, just for the sake of reducing complexity, we could have kind of a core controller that, via the extension mechanism — whatever that turns out to be — delegates certain tasks to an infrastructure-specific binary or even a Go library, whatever it turns out to be, but that we use states to implement a highly concurrent controller, right?
B
Yeah, and I think this is a tricky topic anyway, because it's partially going into some of the concerns in the data model — like whether you're dealing with one object or multiple objects — and it's also dealing with the extensibility mechanism as well. That said, if we do want to support more than one type of implementation, I think that pretty much mandates that we would have to leverage some type of extension mechanism to do so.
A
This also pretty heavily overlaps with the data model. I could not attend the data model discussion today, but I could see that there were already some instances of this kind of extension management and so on. So I don't have a clear path ahead, but I would say let's keep it here for now, and whenever we see a need for it we move in this direction — keep it out of scope at least for now. Does that sound fine to you folks? Right, I think that sounds reasonable to me as of now.
E
One thing I want to point out, though, is that for this work stream, I think one of the core functions of control plane lifecycle is signing the certs — like generating a CA, signing all the certificates — and this is a problem I'm currently facing in my implementation: if the core controller blocks while it's calling out to OpenSSL or whatever it's doing, that's an issue. I think we should set up more states, like pre-cert-creation, then have the creation be asynchronous, and then post-cert-creation, yeah.
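[Editor's note: a minimal Go sketch of E's suggestion — modeling cert generation as explicit states recorded in status, so the reconcile loop never blocks on the slow signing work. Phase names and function signatures are hypothetical.]

```go
// Hypothetical sketch: cert generation as a non-blocking state machine.
package certs

type CertPhase string

const (
	PhasePreCertCreation  CertPhase = "PreCertCreation"
	PhaseCertsCreating    CertPhase = "CertsCreating"
	PhasePostCertCreation CertPhase = "PostCertCreation"
)

// Reconcile is called repeatedly; it only advances the state machine.
// generateAsync kicks off CA/cert signing in the background and is assumed
// to make done() return true when the signing work has finished.
func Reconcile(phase CertPhase, done func() bool, generateAsync func()) CertPhase {
	switch phase {
	case PhasePreCertCreation:
		generateAsync() // non-blocking: schedule the signing work
		return PhaseCertsCreating
	case PhaseCertsCreating:
		if done() {
			return PhasePostCertCreation
		}
		return PhaseCertsCreating // requeue; signing still in progress
	default:
		return PhasePostCertCreation
	}
}
```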
A
There is some kind of project already available which can efficiently do the backup and restore — I have not looked into it, but it probably leads that way — and then there would also be the question of how we exactly make sure that, if you lose the control plane, you can really reproduce the entire cluster, okay? And if there are more such extensions — not specifically extensions, but controllers which are not core Kubernetes, but which would probably fall under the control plane from the Cluster API point of view.
A
They would probably fall in the control plane region because, for example, such a component has to run along with the control plane components themselves — it has to run side by side with etcd. So how do we tackle such different controllers: do we try to include them in this work stream, or do we consider them as add-ons, and if they are add-ons, then we take them up in other iterations? Any comments?
A
For example, one is the etcd backup-and-restore one, and something else could probably be the vertical pod autoscaler, if you decide to do vertical scaling of any of the control plane components. And there are custom-written controllers: for us, in Gardener, we have deployed a dedicated load-balancer controller.
A
It is a dedicated controller which makes sure that the load-balancer configuration for a given cluster is always up to date and healthy; if a user goes ahead and messes with the infrastructure, this controller basically makes sure we know that something is really wrong there, and alerts, or something like that. So there could be ideas, requirements, that come on top of the core Kubernetes controllers — then do we take care of them, or do we consider them as something different and treat them separately?
B
So, to me — I'd prefer to try to keep some of that out of scope for now. We may want to keep in mind during the design not to preclude the use of those things, but I don't necessarily know that that should be in scope as we start to define out what control plane management looks like.
A
So it basically means — I think I also believe that we should only be focusing on the core Kubernetes control plane components, and not really on other probable control plane components which are more like add-ons; probably there should be a dedicated work stream as well for this specific issue. Because even for the control plane components, I think there will be a few things — a few add-ons, a controlled way of, I would say, deploying certain components. Those kinds of facilities will be required very soon, and we would require some better knowledge there.
B
As far as implementation — so far, I don't know if anybody's implemented machine-based control planes outside of kubeadm today. So we would potentially have complications around: okay, do we force users to deploy more machines to support etcd hosted completely separately, or do we still try to do co-located etcd management? And then how do we merge kind of etcdadm management with kubeadm management on the same host?
C
I think this can have two different primary audiences. One is sort of higher-level tooling, which may want fine-grained control over, like, how etcd is configured, where the kube-apiservers are, etcetera. But then I think about the normal user, and honestly, I don't even think we necessarily want to tell them the difference between one master or three. I think they probably want to know, like: does it support a single failure or two failures? And maybe that's even too much, I don't know. But they want capabilities, not specifics.
E
So I think one of our responsibilities is to define where the extension points are. With the idea of etcd being on a machine versus in a pod, right, we define that as an extension point. And I think, for this to be really useful to a lot of people, maybe there's a default implementation that spins up etcd on a machine using an etcdadm config, or that same etcdadm config could be an input to a delegate.
C
I do think there are still some questions — this is maybe outside of this, but when you think of an extension mechanism, I wonder to what extent we should flesh out the extension points. Should we say, like, here's your extension point for etcd, and here's your extension point for the kube-apiservers? Because then we have this multitude of different concepts, but in the typical case where kubeadm is used there's, like, no difference — I mean, assuming you use kubeadm with stacked masters — so that fine-grained breakout of the extension points may not be necessary.
C
It's a similar thing from the data model discussion. There's the proposal of having an infrastructure controller, and in a similar way to what I just said, I wonder if that's yet useful, because a particular infrastructure provider may or may not decide to have, like, different infrastructure controllers, right? Like, how they want to break up their infrastructure into small, well-defined controllers may be outside of the scope — versus finding a stitching mechanism, yeah.
E
So I think it's an opportunity for some kind of manifest for an extension provider where, let's say, this is JSON or gRPC, and there's some external thing — like, you don't even have to call it a controller, you can just call it an RPC server or something — and you define which functions should be delegated to your RPC server.
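[Editor's note: a minimal Go sketch of the manifest E describes — a provider declaring which control plane functions it wants delegated to its RPC server, with everything else falling back to a default implementation. The field names and delegate strings are hypothetical.]

```go
// Hypothetical sketch of an extension-provider manifest.
package extension

// Manifest is what a provider would register (e.g. shipped as JSON).
type Manifest struct {
	Name     string `json:"name"`     // e.g. "my-etcd-provider"
	Endpoint string `json:"endpoint"` // gRPC address of the RPC server
	// Delegates lists the extension points this provider implements,
	// e.g. ["etcd.create", "etcd.addMember", "certs.sign"].
	Delegates []string `json:"delegates"`
}

// Delegated reports whether a given function should be routed to the
// provider's RPC server instead of the default implementation.
func (m Manifest) Delegated(fn string) bool {
	for _, d := range m.Delegates {
		if d == fn {
			return true
		}
	}
	return false
}
```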
B
Yeah — like, one of the ways that I'd sketched this out with some folks in the past was: you could define all of these different multitudes of extensions, and then you can have basically, you know, just like a mapping that could map all of those together. So you can have basically individual provider implementations for everything, but then you would have one config.
B
That config would say: this is kind of like the roadmap for how I want to be able to stamp out clusters — with this provider, say, with this type of bootstrapping config — and wire it up that way. So it would basically be like a template to be able to stamp out clusters, and it would dictate which of all of the extension points we use, yeah.
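[Editor's note: a minimal Go sketch of B's "one config" idea — a cluster template that binds the chosen provider implementations to the extension points used when stamping out clusters. All names and binding keys are illustrative.]

```go
// Hypothetical sketch: one config wiring extension points to providers.
package extension

// ClusterTemplate maps extension points to provider implementations.
type ClusterTemplate struct {
	Name string
	// Bindings maps an extension point to the provider that implements it,
	// e.g. "etcd" -> "etcdadm-machines", "infrastructure" -> "aws".
	Bindings map[string]string
	// BootstrapConfig names the bootstrapping flavor, e.g. "kubeadm".
	BootstrapConfig string
}
```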
E
I think it could even be part of the CRD, but that seems to get a little complex, I think. Maybe when you install a provider, it tells you which extension points it will implement.
A
To me it seems there are basically two different categories. There are components which are essential for Cluster API to run — these could be extensions as well, for example the infra extension that you are talking about; it could be an infra controller or an RPC server or whatnot — which are essential for a Cluster-API-managed cluster to run properly.
A
But then there are other extensions which are more or less add-ons — for example, the etcd backup one, something which is going to offer backup-and-restore kinds of features for that etcd, or even manage the etcd cluster on top. That kind of feature is considered an add-on: if I don't have it, I will still be able to run my cluster.
E
I absolutely agree. Things like — let's think of an add-on that's definitely out of scope: the dashboard, which is deprecated, right? What I'm getting at here is not that you have an extension mechanism at the very end which is like "deploy whatever manifests you want". I'm talking about, like, at the point where etcd needs to come up, you either have a default implementation that, based on being able to provision machines, is able to provision etcd machines, or it provides enough information to the provider that it provisions etcd in some different way.
B
I think that it might actually be — it could potentially be just a completely different implementation of the control plane as well. You know, because we've talked about the machine-backed control plane, we've talked about the pod-based one in the management cluster, and we've talked about the external managed service. Etcd and the control plane being separate kinds of managed resources could potentially be just another type of control plane implementation.
A
Yes, yes, agreed, more or less. So I guess we need both approaches; it's just that the common denominator should be there. But then, if you have multiple implementations, there could be a pattern where something is available in one implementation but might not be available in the other implementations, and that should be fine, as long as we know that the common denominator is the dominating one and is enough in both of the implementations. There is always the possibility of small differences; that should be okay.
A
Sure then — if you want, you can also collaborate on the third one, which is the managed service; I would also like to know the details there and think it through. So okay, good, we go ahead with these three proposals initially. The topics which come after these three proposals are about auto-scaling and version upgrades; I would say let's focus first of all on deciding the control plane approach.
B
So self-managed isn't really pictured here — it's basically the first diagram, and then there would be a pivot to move the components in there. There really isn't much of a difference with respect to the control plane management in that usage compared to a management cluster that, you know, is machine-based, really.
D
Yeah, sorry, I joined late — for some reason I didn't get... I see the invite in my email, but I didn't get an invite on my calendar, so I missed the majority of the meeting. It looks like maybe you talked about the control plane, or etcd specifically, today. Did I miss — was there a long discussion on that?
D
So I have a question around that. I know that, for example, kubeadm today will bring up an etcd cluster, and it does that, I think, with the control plane join, and the way that kubeadm does this is by passing around data that is needed each time that you add a new etcd member — it passes it around through the Kubernetes cluster, I think, as it deploys the first control plane.
B
Correct me if I'm wrong — there's two bits of configuration that are added. There's one where it's just the common kind of control plane config, which will include the endpoints and all of that stuff, the general kubeadm config. Then there's a second piece that's basically time-bombed.
D
Yeah, but I mean, I wasn't thinking so much about, you know, the security aspect of it, but more of: if I'm adding a second etcd member to an existing cluster, or a third — at any point that I'm adding a member, I need to provide that member with details like who all the current members in the cluster are. That information comes from somewhere. I'm hoping that, at least, there's going to be a way to pass this — like a standard way to pass this kind of necessary data around — so that if I want to, you know, work on some etcd controller, or some version of the control plane controller that does etcd, that it is, you know, through etcdadm, that it could…