From YouTube: Kubernetes SIG Cluster Lifecycle 20190329 - Cluster API Goals & Requirements Discussion
B: So what I wanted to do at the very beginning of this conversation... I don't want to necessarily run this meeting, but I want to set up some ground rules for engagement, because last time we tried to talk about this, we kind of sprawled all over the place and had deep discussions about what the belly button should look like. So what I want to do in this discussion is keep it focused. Andy and I talked about it beforehand, and he's created two separate documents.
C: So again, this is meant to be about goals, non-goals, and requirements. We can talk about use cases in a separate document, as Tim was saying. So what we put down for some of the guidelines here is really about: as we look at these goals and we look at these requirements, do you think anything is missing? And as you look at all of these, is there anything listed in here that would prevent you or prohibit you from achieving your use cases?
C: So we have a glossary up at the top, which we should fill in with additional things as needed. I'm not going to... well, I can read them. So, "provider": I know Clayton had a question, in that in one of the goals we talked about provider-agnostic logic, and so there was a question around what "provider" is. I think it's probably "infrastructure provider" here, in terms of being able to provision machines and related aspects, but I'll fill that in later; or, if anybody has suggestions, please feel free.
C: We also want to have the concept of a control plane that is provisioned by Cluster API, where the control plane is represented either by machines or by pods, but not something like a GKE or AKS control plane. And then the alternative to self-provisioned is an external control plane, like GKE or AKS, and we have been brainstorming about potentially using Cluster API as a singular API to manage both external control planes and self-provisioned ones.
C: Okay, I'm going to read off the goals as we have them written here. Number one: to declaratively create, manage, and destroy Kubernetes-conformant clusters. "Kubernetes-conformant" just means that it can pass the conformance tests, so it does not shut out vendors or other distributions. So Kubernetes, OpenShift, Rancher, or whatever can pass the conformance tests is what we're looking at managing. Should we add that to the glossary up above, as it gets more mature? Yeah, I will fill that in later.
C: The third one here, which Clayton and I were having a comment discussion about, is to consolidate common provider-agnostic logic. We can probably get some better wording here, but the idea is that we want to make it so that the infrastructure providers that are provisioning machines don't have to write the code that generates the control logic for running kubeadm init and kubeadm join, or anything else that you're using to do provisioning.
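The "consolidate common provider-agnostic logic" idea above can be sketched as a shared library that renders the node-join bootstrap script, so each infrastructure provider supplies only its own details. This is a minimal sketch under stated assumptions; the function name and parameters are hypothetical, not actual Cluster API code.

```python
# Sketch: one shared, provider-agnostic renderer for the kubeadm join step.
# Providers pass in infrastructure facts; they never template kubeadm logic
# themselves. All names here are hypothetical illustrations.

def render_join_script(api_endpoint, token, ca_cert_hash):
    """Render a node-join script any infrastructure provider can reuse."""
    return (
        "#!/bin/sh\n"
        f"kubeadm join {api_endpoint} "
        f"--token {token} "
        f"--discovery-token-ca-cert-hash sha256:{ca_cert_hash}\n"
    )

script = render_join_script("10.0.0.1:6443", "abcdef.0123456789abcdef", "deadbeef")
```

A provider for AWS or GCP would call this with the endpoint and credentials it provisioned, rather than carrying its own copy of the kubeadm invocation.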
B: Somewhere in there... because we're really doing lifecycle-specific management; we are not doing other forms of Kubernetes management, so we need to... I do think wordsmithing is important on some of this stuff, so I can appreciate some of the people who are coming in wanting to understand the words. So somehow putting in wording like "lifecycle management" probably would help.
E: So, possibly, also hibernate clusters. So, given a cluster, you should be able to... so for development clusters, at least, we try to do something where, when you want it shut down, you bring down a few nodes and some of the control-plane components for the time being; we just call that hibernate. And then, when the user wants to start developing again, they spin it up, and this wakes up the cluster, and it gets the same state straight back. It's very easy to do, given the way the machines work and so on, and that turns out to be very nice.
E: So that's just one way, but then there would be, I think, more controllable main components. I'm just thinking out loud that, on top of the nodes you bring down, there are the control-plane components: if you decide to deploy more than the core components, one example being the machine controller, and if there are more deployments, like tooling around a specific controller, that starts coming in, then we would probably want to reduce the cost of those controllers running too, if you can hibernate them.
E: I think it can be seen in a way that, if I want to hibernate, I would basically shut down all the machines; and, as we'll see, if there are cases where we want to maintain the control plane, I would also probably want to turn down the scheduler, the controller manager, and all those control-plane components which are not being used. And when I want to wake up, I can again start all those components, as long as the etcd data that was running is maintained so that I can get it restored.
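The hibernate and wake behavior described above can be sketched as recording replica counts, scaling everything but etcd to zero, and restoring on wake. This is a rough illustration only; the Cluster shape and component names are hypothetical, not Cluster API types.

```python
# Sketch of hibernate/wake: scale non-essential control-plane components to
# zero while keeping etcd state, then restore recorded replica counts.
# The Cluster class and component names are hypothetical.

class Cluster:
    def __init__(self, components):
        self.components = components      # component name -> replica count
        self.saved = {}                   # counts recorded at hibernate time

    def hibernate(self, keep=("etcd",)):
        for name, replicas in self.components.items():
            if name not in keep:
                self.saved[name] = replicas
                self.components[name] = 0

    def wake(self):
        self.components.update(self.saved)
        self.saved = {}

c = Cluster({"etcd": 3, "kube-scheduler": 1, "kube-controller-manager": 1})
c.hibernate()
hibernated = dict(c.components)           # scheduler and manager now at 0
c.wake()                                  # original counts restored
```

Because etcd is kept, waking restores the cluster to the same state, which is the property the discussion calls out.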
C: I'd recommend that Hardik take some time, when you have a moment, and document hibernation as a use case in the use cases document. I think that's probably the best way to go right now. And in case you all haven't seen it, as I mentioned, we do have this use cases document. Everybody has edit rights, so you can just come in and add your use cases anywhere.
C: Yeah, I mean, if you want to back your way into somehow creating the... I don't know... Cluster, Machine, whatever the objects are, to represent the GKE cluster that you created on your own: if you can make that work, great, but it's not a goal to have Cluster API automatically support that. That would be on GKE, or whoever. Yeah, but it's not a goal that Cluster API does that work. Pablo?
J: Sorry, I only meant that it works for me: at some point I could probably delete it and recreate it, but what would it take me, if I wanted to do that myself as a provider? What would the Cluster API expect me to have for this? Like, reversing the process of creation. You just said you can provide a representation of your machines as Machine objects, but what else, I think?
B: I think, Pablo, what would probably help this discussion a lot would be telling that story in the separate document where the use cases are; start outlining your use case. For that, we can go back and forth, and after we finish talking about the requirements we can try to see if your use case is met by the current list of requirements.
L: I have another verb that we might want to add here, which is "maintain", because we do still have this idea of a MachineSet, or whatever that becomes, such that if a node goes away for whatever reason, the VM is deleted or the hardware is gone, then maybe the Cluster API tries to figure out how to bring a new node back into the cluster.
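The "maintain" verb above amounts to a MachineSet-style reconcile that replaces machines whose backing infrastructure has disappeared. A minimal sketch, with hypothetical shapes (the real MachineSet controller is not this simple):

```python
# Sketch of "maintain": drop machines whose VM no longer exists and create
# replacements up to the desired count. Dict shapes are hypothetical.

import itertools

_ids = itertools.count(1)

def reconcile(desired, machines):
    """Return a machine list with gone machines replaced, at desired size."""
    alive = [m for m in machines if m["vm_exists"]]
    while len(alive) < desired:
        alive.append({"name": f"machine-{next(_ids)}", "vm_exists": True})
    return alive

fleet = [{"name": "a", "vm_exists": True},
         {"name": "b", "vm_exists": False}]   # b's VM was deleted out-of-band
fleet = reconcile(2, fleet)
```

Run on a loop, this keeps the cluster at its declared size even when VMs or hardware vanish out from under it.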
F: Yeah, does that also include things like, you know, let's say the kubelet crashes, or something at the file-system level? That's not the infrastructure layer, but it has to do with the node itself, or applications, or storage. Does that also... I mean, maybe I'm getting into the realm of node health and the node-problem-detector space, but I'm just curious here.
C: Like, there's kind of, you know, around the non-goal here about forcing lifecycle products to support or use these APIs: I think there was some discussion in the KEP from months ago about, you know...
A: ...components. So I think there's a line to walk here, too. So, for example, it would be great to integrate with the cluster autoscaler, but there is some behavior in the cluster autoscaler, specifically determining which node to remove from the system, that it makes sense to either adopt or import into Cluster API. Well, I mean...
B: ...wordsmith it a bit more, but I think there is a fine line of minutiae here, about: okay, we want to, as you said, not be an integrator where possible, but we don't want to preclude ourselves from doing what we need to do to get our job done. So there might be a portion where we might define scale behavior that is not the current autoscaler's. Mm-hmm.
C: So I think your mention of "core" is a good next topic. This non-goal is a paraphrase of what's in the KEP, and it's: to add these APIs to Kubernetes core. And I know there was a decent amount of discussion over here with Clayton about what this actually means, and whether it involves API reviews, and Clayton makes...
B: ...an excellent point, in that we have a bunch of CRDs that live outside of Kubernetes proper, but they use the k8s.io name. We should probably have a distinction for things that live outside the core, but we actually haven't given taxonomy to that. So I think it's totally legit, his statement is legit, in that it should not be... We should define a namespace that is outside of the core but lives underneath this larger umbrella that we would love.
A: I don't think, at this stage of where we are with Cluster API, it makes sense to block development waiting on API reviews for all of the API changes that we're looking to do now. When we talk about going from alpha to beta, that may be a different story; we may want to actually seek a formal API review then. But for iterating on alpha, it seems a bit heavy-handed on process.
B: API reviews right now, today, are totally bottlenecked on only a couple of people, so the notion of things that are external to the core being blocked by those reviewers... it would grind the project to a halt, right? Because the sheer volume of APIs that exist outside of the core of Kubernetes is an order of magnitude larger than the core of Kubernetes itself proper, as it exists today. And so I just want to make sure that... there's minutiae there, and we should have some distinction, and we should refine our current review process. Yeah.
B: I think the APIs should have that model as well, because we do that within the whole structure of GitHub repositories, so having SIGs delegate down a little makes perfect sense to me. But I think that's another discussion, and we should totally take this up; I'll take this up in architecture next week. Okay, thanks, Tim.
C: Okay, the next non-goal is: to manage the lifecycle of infrastructure unrelated to the running of Kubernetes-conformant clusters. So we don't want to become a generic Terraform, creating anything you can in any cloud provider. I thought this was fairly straightforward, although there was some discussion back and forth between Clayton and me, but I think this wording hopefully will work for everybody; if not, feel free to add some more comments.
A: So, when we get into the requirements, it could be such that, you know, the creation of a machine doesn't require... you know, the specific... because, right now, when we reconcile a machine, we pass in the full Cluster object, and we introspect data from that Cluster object to get shared configuration. It could be that, you know, we force ourselves to design a more explicit interface around the Machine that doesn't require that kind of external object.
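The narrower interface floated above, passing the machine reconcile only the fields it needs instead of the whole Cluster object, can be sketched like this. The field names are hypothetical illustrations, not the actual Cluster API schema.

```python
# Sketch of an explicit machine-reconcile contract: the Cluster object is
# introspected once at the boundary, and providers receive only the fields
# they need. Field names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class MachineBootstrapInput:
    """Explicit inputs a machine reconcile would receive."""
    cluster_name: str
    api_endpoint: str
    service_cidr: str

def from_cluster(cluster):
    """Today's introspection, done once here instead of in every provider."""
    return MachineBootstrapInput(
        cluster_name=cluster["metadata"]["name"],
        api_endpoint=cluster["status"]["apiEndpoint"],
        service_cidr=cluster["spec"]["serviceCIDR"],
    )

inp = from_cluster({
    "metadata": {"name": "dev"},
    "status": {"apiEndpoint": "10.0.0.1:6443"},
    "spec": {"serviceCIDR": "10.96.0.0/12"},
})
```

A frozen dataclass also makes the contract explicit and immutable, which is the point of the suggestion: providers can no longer reach into arbitrary Cluster fields.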
C: Yeah, I think the intent, I'm guessing here, is: given a release of Cluster API and a provider, if you install and deploy them, you must be able to do a standard set of things, which we're calling conformance, which is probably: create a cluster; on that cluster I can at least run a pod; and then I can delete the cluster.
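The "standard set of things" just described can be expressed as a tiny harness: create a cluster, run a pod on it, delete it. The Provider interface below is hypothetical, a sketch of what such a per-provider conformance check could look like.

```python
# Sketch of a minimal provider-conformance harness: the lifecycle every
# installed provider release would have to pass. Provider shape hypothetical.

def run_provider_conformance(provider):
    """Run the minimal create / run-pod / delete lifecycle."""
    results = []
    cluster = provider.create_cluster("conformance-test")
    results.append("create")
    if provider.run_pod(cluster, image="busybox"):
        results.append("run-pod")
    provider.delete_cluster(cluster)
    results.append("delete")
    return results

class FakeProvider:                        # stand-in for a real implementation
    def create_cluster(self, name):
        return {"name": name}
    def run_pod(self, cluster, image):
        return True
    def delete_cluster(self, cluster):
        pass

steps = run_provider_conformance(FakeProvider())
```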
M: I think those are two different things. I think the conformance of the resulting cluster is already covered earlier; it says somewhere "the conformant cluster". Maybe, when we say it should deploy a cluster, "conformant" should be in there. But as far as validating that a given provider implements things properly, well, I guess there's also validation in the other direction, and that probably doesn't need to be in here.
B: There... you know, I want to make sure that everyone realizes they should add suggestions; wordsmithing is a group activity. So if you can think of a clever way of stating the same thing... my wife is an expert at deciphering Tim-isms, so I'd recommend maybe taking a stab, and if you have some clever ideas to put below, maybe we can do, like, a bullet-point list below of potential ways to say this. So if you put suggestions down below, instead of us going on ad nauseam here, we could have folks do it asynchronously.
O: Sorry, this is a request I want to make regarding the conformance testing. I think some of the discussions that we have recently been having are in part because there is a lot of ambiguity today. So it would be great if we can crystallize some of the goals and non-goals in the form of tests eventually. This would help to qualify the goals. I don't know if that makes sense to you.
O: What I'm trying to say is that, eventually, it would be great if we managed to materialize and represent some of these goals in the form of end-to-end testing. This way we can remove some of the ambiguity that we have today in the project, because this would basically codify it, right? Okay.
C: Okay, so, yes, next up: Cluster API must not have any in-tree provider code. I hope that's not controversial. And: Cluster API must be able to create clusters on multiple providers through a single deployment. So I think this is: if I have deployed Cluster API in one place, then I can create my Machines and Clusters, and whatever other CRDs we end up with, in that one place, and have them run on AWS and GCP and all over the place, without having to have separate management clusters. I guess the...
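The "multiple providers through a single deployment" requirement can be sketched as one management process that routes each Machine to the actuator registered for its provider. The registry and Machine shapes below are hypothetical, not the real Cluster API controller wiring.

```python
# Sketch of a single management deployment serving several providers: each
# Machine names its provider, and a registry dispatches to the matching
# actuator. Shapes are hypothetical.

actuators = {}                            # provider name -> create function

def register(provider, create):
    actuators[provider] = create

def reconcile_machine(machine):
    provider = machine["spec"]["provider"]
    if provider not in actuators:
        raise KeyError(f"no actuator for provider {provider!r}")
    return actuators[provider](machine)

register("aws", lambda m: f"ec2 instance for {m['name']}")
register("gcp", lambda m: f"gce instance for {m['name']}")

a = reconcile_machine({"name": "m1", "spec": {"provider": "aws"}})
g = reconcile_machine({"name": "m2", "spec": {"provider": "gcp"}})
```

One deployment, one set of CRDs, with machines landing on AWS and GCP side by side, which is the property the requirement asks for.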
G: Yeah, and when we're talking about... I think now we've reached this point where we're talking about the case where we're not pivoting, right, where we have the management cluster. And Marc from Weaveworks used the term, at the implementers' office hours, "workload cluster" for the other one, and that seems to be a pretty good term. I don't know, maybe we could adopt that when we're referring to the cluster that is being managed.
C: So I guess we had this one here about managing clusters on multiple providers, and then we got this new one about creating clusters on multiple providers. I don't know that we need two requirements for this; maybe we combine them, as in our lifecycle-management discussion that we had earlier.
C: Okay, so...

M: So I'm not sure that that's the same thing, though. Using an identical deployment implies... an identical specification implies some abstraction that I don't know that we want. A single Cluster API deployment could maybe create a cluster on some different providers, but not necessarily using an identical resource definition.
A: So I think the reason why we added "must" there was that, for using kind of vanilla upstream Cluster API, this would be a required feature; but for vendors and other tooling that is looking to leverage Cluster API, say, like, kops, kubicorn, OpenShift, or some other type of productized version...
M: It seems like we're talking about two different things at the same time; that's part of the confusion. We're talking about the specification of the API, and then we're talking about provider implementations of that specification, and we're saying the specification must include this API call, but people may or may not implement it; but we want vanilla, the reference implementation, to implement it. Something like that. It seems like maybe we're mixing the requirements of two or three different things here, really.
A: So the problem is that we really have multiple use cases where Cluster API sits. One is the pure upstream: I want to be able to leverage Cluster API to be able to stamp out kind of upstream, vanilla Kubernetes clusters. And then we have all the various different types of clusters where it's an implementation detail of use, whether that's, like, a managed service provider that's providing Kubernetes as a service to users, or it's...
B: I need to open a book here and go back to it, but, like, I think... let's... that's not... let's do the wordsmithing and keep on moving. Still, I think we have the right idea and the shared context in this conversation, but the language is escaping us, because we're engineers. There are books out there that can actually help us present this accurately.
K: Or stating... I think, related to that, this one is different from the others, I think, in that, if we can come up with a great API, no one is going to object to us doing this. This can be a goal for us, but no one is going to... as far as I know, no one is likely to say you can't do that. That's, like, our turf.
C: The thinking here is most likely decoupling machines from the configuration of what's on them, and, you know, maybe we have some config API; I don't know what it would look like. But we do want to have the ability to bootstrap a control plane and allow people like OpenShift, or whomever, to have their own implementation. I would explicitly...
J: Okay, no, no, sorry. It's just... okay, you say, yeah: if we are still in control of the processes, at some point we'd have a hook, or some mechanism, that we can use to interface with external elements to configure the control plane. Is that it? I don't really get the idea of this bootstrapping, configuring the control plane.
C: What it basically means is: at some point we will have a machine, whether it's a VM or bare metal or whatever, that is created, and it needs to be configured such that it can run kubeadm init, or whatever you do with other distributions to get them initialized. So it's about making sure that, when you have a bare operating system, you can configure it so that it runs Kubernetes.
C: I mean, it's probably a good idea... so, there are going to be people joining, most likely in a few minutes, who obviously were not able to attend this first hour. I'm tempted to start back at the beginning for those people who are new, but obviously anybody who's on the call for the two full hours would be hearing the same stuff over. So, what do you all think?
B: The recording is there. Anyone who's really good at wordsmithing and spec writing, like, let's solicit the group here. Like, I do it, I do it progressively, but I also do it in a slightly different way, and I don't particularly want to own this. I've purposely tried to, like, make sure I help guide people, but I don't want to own the language. So, is there anyone on the call who's really good with writing specs?
C: Okay, or, actually, Jason, do you want to do the intro here for the recording? Well, I just paused it, so this is the same recording. Oh, great. Okay, so, for those of you who are just joining: we have been going through the Cluster API goals and requirements document. This has a lot of the content from the giant "What is Cluster API" document that was previously shared with you. And, just in terms of guidelines for the discussion: we want to have a separation between the goals and requirements and the use cases, so we have a separate document.
C: Everyone has edit permission on this use cases document, and we encourage you to add your use cases. There are not necessarily any wrong answers here, so it's definitely worth putting them in, so that we can see what people are interested in doing and we can talk about them. But for this discussion we are trying to stay at a high level with goals and requirements, and we're trying to see if there's anything missing from the goals and requirements, and if anything that's in this document would prevent you or prohibit you from achieving what your use cases are.
C: So, we've been working on a taxonomy that is in development, in terms of the terminology that we have in this document, so that hopefully we can all have a single common language. And we have been going through the goals and non-goals and requirements, and I would say, basically, there are places where we have "wordsmith" written in front of something, and that's where we've discussed it: we understand the intent, but the wording might not be great.
C: I'll take that as a yes. So the next requirement we were going to look at is: Cluster API must be able to provide a consistent way to prepare, configure, and join nodes to a cluster across providers, with the ability to opt out of or change the implementation. So the intent here is that we have common logic for basically calling kubeadm join, or the equivalent, and that, if you're not using kubeadm but you have another way to do it, we have an extension mechanism so that you can supply your own.
B: Oh yeah, what we've been doing is, if you have questions or comments, to add some bullet elements as suggestions about how we rephrase this, or for other issues that might come along the way, because we recognize we have a deficiency in taxonomy. And part of this process, especially with people coming online that are relatively new to this space, is that we realize there are overloaded terms that people are using, and we need to be more explicit with our taxonomy.
B: Although some of us old-timers, like Justin Santa Barbara, you know, Jason, and myself, can just look at each other and know what we're talking about, that doesn't mean that it makes sense to everybody else. So it would be helpful for the broader group to try to document, in the little sub-items, what your thoughts are, and we can try to refine that.
C: Alright, shall we move on? Next up: Cluster API must be able to scale up and down a self-provisioned Kubernetes control plane, with a swappable implementation. "Self-provisioned" meaning that Cluster API was responsible for creating the control plane; and this is to call out a difference from managed services like GKE, where we likely won't have any control over the size of the control plane. So...
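The self-provisioned versus managed distinction above can be sketched as a scale operation that only applies when Cluster API created the control plane, and is rejected for an external one. The shapes and type names are hypothetical.

```python
# Sketch: scaling is valid only for a self-provisioned control plane; an
# externally managed one (GKE-style) rejects the operation. Hypothetical shapes.

def scale_control_plane(cluster, replicas):
    if cluster["controlPlane"]["type"] != "self-provisioned":
        raise ValueError("control plane is externally managed; cannot scale")
    cluster["controlPlane"]["replicas"] = replicas
    return cluster

own = {"controlPlane": {"type": "self-provisioned", "replicas": 1}}
own = scale_control_plane(own, 3)          # dev cluster grown toward HA

managed = {"controlPlane": {"type": "external"}}
try:
    scale_control_plane(managed, 3)
    scaled_external = True
except ValueError:
    scaled_external = False                # managed service refused, as expected
```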
B: Like, you started out with a one-node environment, which started out as your dev environment, and you want to grow it to something more production-ready, maybe because of staging, not necessarily prod, and then you evolve it over time. That's really common for people: you know, they start small, they work on it, and then they have more requirements over time.
B: Well, that's part of this discussion. So, like, you know, people that have been working on it for a long time would just use "scale", because it could mean any of those. So, you know, part of this... if you have better taxonomy, by all means, please give a suggestion, because there's a section at the top. But, you know, trying to get people who are new and the old-timers onto the same page is going to take a little bit.
S: Yeah, I was actually thinking, like, in the case of an actual self-provisioned control plane: a provider implementer may choose that a single-node control plane is done in such a way that it's non-scalable, and they only allow you to scale if you started with an HA control plane, and you could just go from, say, three to five.
R: I'm sorry, and on this "must provide high-level orchestration for Kubernetes upgrades": is the orchestration at the level of desired state, or of mechanism?
C: So I think this is going to be, like many of the items that are in here, something we will be breaking up into groups to talk about implementation details. I would expect there to be a reference implementation for upstream Kubernetes, and other providers could do different implementations, either for upstream Kubernetes, if they want to do it differently, or for vendor distributions as well.
C: We had a discussion in the first hour about, when you're managing a cluster, whether there is a definition of a configuration that has to be there for the cluster to be considered minimally functional, and whether, if there are add-ons, we could figure that out later. I know that your questions around add-ons... I know you have a lot of questions about them, and I don't know that we have all the answers, I think.
B: So those are considered core add-ons; that's part of bootstrapping, so we lump it into bootstrapping. So everything that kubeadm discusses as bootstrapping is that piece; everything after that point is considered add-ons, and they're non-essential. So if you see "bootstrap", that means things that are core add-ons, which includes, you know, things like CoreDNS and kube-proxy. If you see anything else, like Heapster or whatnot, that would be non-essential; that would be after the bootstrap process itself. You know, we don't go into that space at all. And does...
B: It does. That's one add-on that's a little bit fuzzy, so it's post-bootstrap that you need to do one more thing. We've debated this several times, whether or not CNI should be a separate step, but you don't get something functional without it. It's that weird boundary line where we want to get users into a good space. So the CNI is usually kind of predetermined, or somewhat baked in, by the provider-specific configuration, with options or knobs to specify. Right, yeah.
S: Yeah, yeah, I wonder, I wonder whether that, you know, potentially makes CNI more and more important and critical, and whether we should kind of consider it being less of an optional thing, because now it's, like, sort of in-between, right? I mean, maybe, maybe that makes the case for it to stop being such a thing and become more first-class. I don't know what exactly that means, though. Yeah.
R: When we describe add-ons, do we describe them in terms of the function they provide in the Cluster API, or in terms of the actual implementation choice? So, for example, if the Cluster API wants to manage DNS, does it make the choice of which DNS solution is used? Or is it just more: I would like a way of saying, I just want DNS?
B: So we don't explicitly make that a Cluster API knob; it's not really in the core Cluster API itself. We've kind of escaped this by making that provider-specific, so long as it creates a Kubernetes-conformant cluster. Now, we do realize that we need to kind of work through this universe; that's kind of why we're having parts of these conversations, to figure out what belongs in Cluster API versus what things get federated out to the providers, and how we make this common scenario clean. And I...
B: ...don't think we have a specific answer for that, other than we should allow the customization for things like CNI; we know that to be true. And I think we're lacking the vernacular to make it clear to everybody who is kind of new. So I don't know how to distill that into a requirement, other than we're trying to deduce things; but we should probably be explicit on what things are optional for the Cluster API as we start to get into this level of detail.
R: Yeah, and I guess my preference, and I don't want to speak for others, but I would prefer we describe things in terms of their characteristics rather than their implementation approach. And so, saying "I would desire a cluster that has DNS", I think that's different from saying "I desire a cluster that has CoreDNS". So...
A: To be clear, as we've discussed, like, bootstrapping the cluster and the control plane and the nodes and all of those things, you know, we've left it so that, you know, we want those to be explicit extension points that can be either opted out of or overridden. So just because something exists in the kind of standard default configuration doesn't mean that that's the only way to consume it. Okay.
B: CRI, we don't really do that; we don't leave the option for CRI in. That's baked in by the image for the provider. So we do a lot of optional specifications, so, like, if they want to do that, they can; there are custom mechanisms to allow them to do image provisioning or machine provisioning, however you want to call that, and that's part of that process, and not necessarily in the API proper.
S: Yeah, okay, so, no, no, no, no, I see what you're saying here. Yeah, okay, so, yeah, generally that seems plausible, because, like, sometimes people need to run this one specific command, or something like that. You know, I had a use case recently: on EKS, all they asked for was a way to disable hyperthreading, and that has to be done before the kubelet starts, and it's just, like, an echo into sysfs.
R: No, no, okay.

C: Next, we have: Cluster API must be able to provide sufficient information for a consumer to access a provisioned cluster's API server. The thinking here is: if Cluster API provisions a cluster, presumably the person who asked for said cluster will need a kubeconfig, or a similar mechanism, to be able to talk to the new cluster. And again, nothing here is implementation details, so how it provides that information, and what it looks like, is left to further discussion.
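The "sufficient information to access the API server" requirement boils down to the handful of fields a returned kubeconfig needs. A minimal sketch, with placeholder values; how Cluster API would actually surface this is, as noted, left open.

```python
# Sketch of the minimal access information a consumer needs after a cluster
# is provisioned, assembled into a kubeconfig-shaped mapping. Values are
# placeholders.

def build_kubeconfig(cluster_name, server, ca_data, token):
    """Assemble a minimal kubeconfig mapping for the new cluster."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": cluster_name,
                      "cluster": {"server": server,
                                  "certificate-authority-data": ca_data}}],
        "users": [{"name": "admin", "user": {"token": token}}],
        "contexts": [{"name": cluster_name,
                      "context": {"cluster": cluster_name, "user": "admin"}}],
        "current-context": cluster_name,
    }

cfg = build_kubeconfig("dev", "https://10.0.0.1:6443", "LS0tLS1CRUdJTg==", "s3cret")
```

Whatever mechanism is chosen, these are roughly the fields it must convey: endpoint, trust material, and a credential.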
C: Alright: Cluster API must be able to display the Kubernetes version of a node it manages. So this is: if we are provisioning machines or pods that represent Kubernetes nodes, we want to make sure that, using the Cluster API types, we can figure out what version of Kubernetes, or of other distributions, is running on there.
C: Okay, next up: Cluster API should be able to apply configuration changes to managed components. This could be things like changing the log level of the kubelet, or something else on the system. This is intentionally vague, because I think we need to talk more through use cases, but I know that I've talked with Derek about being able to do configuration changes that happen after the machine is provisioned and configured. So I would again ask that you all add use cases to the other document for this sort of thing. Why is this explicitly...
R: With respect to that, if I've got time to clarify, since Andy noted that I had raised this with him in a prior discussion that Andy and I had: I was trying to understand what ongoing administrative tasks I might do through the Cluster API, or behind the Cluster API. And so some examples of those tasks would be: a user is trying to analyze what is wrong with a machine, like, the node keeps going NotReady every ten or fifteen minutes, and the natural questions that come up then are: can I get the container runtime logs for my runtime?
S: Yeah, so I was wondering, like, there's this slightly different concern. Like, let's say, let's say I have, what do we call it, a MachineSet, right? And in a MachineSet I'd like to, I don't know, change... like, add a label to each of the machines that is in the set. Should that be our concern? Because you could, you could technically call, like, the Node API and do a loop around that; but, because it's a MachineSet, it would make sense to be able to sort of have a group operation. Yes.
B: I think that's totally reasonable, but I think that's kind of a separate condition from mutating the actual images. So that's a separate thing: where we're dealing with the components as part of the API, that is totally within our purview; but I think mutating the images, that can be done out-of-band by another tool, or another process, or another system. I don't think we should take ownership of that stuff. So...
B: If you're doing it in a custom way, then, yeah, yeah, I think that's still out of scope. There's nothing that prevents us, eventually, over time... we might want to distill the common best practice, but it would not be... I struggled with this one in particular. CA rotation is a little rough, because a lot of people have very opinionated ways that they do it, so prescribing a mechanism gets really weird. So...
C: I'd like to see us divide "provider" into multiple concepts, where there's an infrastructure provider that purely does infrastructure; and if we want to have an API, or a provider, for, say, certificate management, I think that's appropriate, and we can have a reference implementation and people can switch to different ones. But I really want to decouple this idea that the AWS or the Azure provider is responsible for certificate management and cluster bootstrapping, because there's nothing provider-specific about it, unless you're leveraging...
K
I think the reason this particular example is particularly hard is because we don't have a clear Kubernetes API for CA rotation yet. Node labels, for example, felt easier to describe: we were able to say we should define our API such that you can do the different management scopes based on how you're managing the nodes.
K
If we think it is important enough, some group should create that API for the CA (CA is just the acronym for certificate authority), and then cluster API should work well with that, I would say. But this is a particular example where there is no such API right now, and I don't think we necessarily want to define that API in cluster API. So I...
A
...the cluster API machine that is associated with that EC2 instance, so there needs to be corrective action to either delete the machine and create a new one, or keep the machine and spin up a new EC2 instance, or whatever's appropriate. And this applies to cluster infrastructure broadly, so that providers can deal with things like fixing firewall rules if they get messed up, for example.
A
Don't
think
this
is
getting
into
the
implementation
of
how
the
infrastructure
is
defined
it
with
the
main
point
of
this.
This
one
is
so
that
if
we
are
managing
infrastructure
for
a
user
we've
created
it,
they
expect
us
to
manage
it
over
time.
Then,
if
it
goes
into
a
degraded
state
and
we
should
repair
it,
but.
C
Well, so we have a load balancer, if there's a load balancer that's sitting in front of the API server. And so cluster API could have these concepts generically, and it could periodically call validate on every single cluster, and for a cluster it would say: go validate your network, go validate your firewall rules, go validate your load balancer. And then that would be an extension, a webhook or gRPC call out to a provider-specific implementation, to go check the VPCs and the NAT gateways and everything. But...
R
Speaking for a lot of users in the world, I'd assume something as basic as how you manage a VPC is widely variable, where many enterprises, for example, would say you can create a VPC, and many would say no, that is strictly managed by my enterprise rules. Yes, firewalls, those things are a little worrisome, because this is the major... it's one thing to have a provider that says, "I don't know anything about your infrastructure, just tell me it's okay," and it's another thing to say...
C
So
there's
a
concept
of
managed
stuff,
that's
managed
by
cluster
API
and
stuff.
That
is
freeing
your
own.
So
if
you
allow
cluster
API
to
manage
your
networking
and
your
firewall
rules
in
your
load,
balancer,
then
I
think
it's
appropriate
for
cluster
API
to
be
able
to
to
you,
know,
validate
and
reconcile
those
things.
If
you
bring
your
own
VPC,
then
we're
not
going
to
touch
it.
Yeah.
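The managed-versus-bring-your-own distinction being described could be sketched as a simple gate in the reconcile logic. The field and function names here are hypothetical, purely for illustration:

```python
# Sketch: only reconcile infrastructure pieces that cluster API itself
# created ("managed"); leave bring-your-own (BYO) pieces untouched.
# Field and function names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class NetworkSpec:
    vpc_id: str
    managed: bool  # True if cluster API created this VPC; False if BYO

def reconcile_network(spec: NetworkSpec, log: list) -> None:
    if not spec.managed:
        # Bring-your-own: validate nothing, touch nothing.
        log.append(f"skipping {spec.vpc_id}: user-managed")
        return
    log.append(f"reconciling {spec.vpc_id}: cluster-API-managed")

events = []
reconcile_network(NetworkSpec("vpc-byo", managed=False), events)
reconcile_network(NetworkSpec("vpc-capi", managed=True), events)
print(events)
```

The gate captures the rule stated above: if you bring your own VPC, reconciliation never touches it.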
R
Then you run into issues about how you least-scope permissions to those tools that can do those actions, how you mint credentials for those tools, and how you manage the lifecycle of those credentials. This is an area that I think is very dangerous to try to take on in a prescriptive manner, versus just saying the provider gives me infrastructure and I don't try to program it afterwards. So...
C
Derrick, this is... there is going to be a set of infrastructure components and pieces that cluster API providers will create. They will create EC2 instances, they will create VPCs, they will create security groups, whatever else you do in AWS. And assuming you are allowing cluster API to make that stuff for you, I don't see why you wouldn't allow cluster API to reconcile it.
B
By a hook. I think what's missing here is the logistics behind it: when it says "take remediative action", it's basically calling out to a hook for the provider-specific implementation to do all the work. So it's the provider that actually provisions the details; it's not cluster API proper that's doing the work, it's the provider. I think that's...
R
...a detail, which is, I do not believe, when we talked about whether cluster API should say "I have a single-node control plane or an HA control plane or a regional control plane"... The moment you start also saying cluster API is prescriptive about how I construct my regional control plane in a data center environment, versus it being hidden entirely behind a particular provider, these are tension points where you'll struggle to say how do I reuse a particular provider broadly or not.
P
I don't think we're saying that we need to be prescriptive about how you create your network. I think the point being made sounds like we're trying to make this very high level and very generic, and, as Tim pointed out, this will be a hook that we will call, and providers can provide their own implementation for it, and cluster API doesn't have to know, or want to know, how that's implemented. At the end of the day, the...
R
I guess what I would wonder here is: rather than saying "must", which implies that the cluster itself is providing directed guidance on how to remediate, maybe it would be "cluster providers should ensure that they have a stable and resilient infrastructure". But I really get wary when we're looking to program how that stable and reliable infrastructure is enabled. So...
M
We're also missing differentiation between the APIs that will be part of cluster API, the functionality that will be part of the reference implementation, and the functionality that's the responsibility of the provider. I think that can be part of the picture, though, right? Each of these requirements actually applies differently to those different things; some of these requirements are for one of those things, some for a different one of those things, and it's a little bit fuzzy now.
J
Something that I'm missing in this conversation is, I understand the idea of this reconciliation, but I think what should be doing this is the provider-specific description of the cluster, and we just pass it out to the providers, saying: just make sure this is right, and tell me what the actual state of this is. There...
A
If we are specifically talking about firewall rules, then there is some input that's going to come in from cluster API directly, because the configuration that you're asking for is going to impact what configuration is needed. So I don't think it's just as clear-cut as "this is always going to be...". But...
D
I think there's a distinction between whether it's coming from the cluster API and whether or not it's opaque to the cluster API, because that could be completely defined in the provider-specific config section, and the cluster API itself doesn't need to necessarily understand or interpret any of that. Is there a particular way to...
C
If we can get to a place where we have very loose, generic APIs for cluster networks, cluster load balancers and whatnot, then it's broader than just nodes. And I do want to reiterate that this is not saying that cluster API is informed in any way about what firewall rules should exist or what networks should exist; it's more the generic concept of networking and firewalls. And I have a little pseudocode here that, basically, is like...
C
Let's go look at all the clusters, and for each cluster and its provider, let's validate the network, and if there are errors, then we're going to ask the provider to reconcile the network. And that assumes that the provider is in charge of creating the network. If it's not, this won't happen; if it is, then the provider, whether it's AWS or some other one, can go do its own thing to make sure that stuff looks good. This is not about...
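The pseudocode being paraphrased might look something like the following sketch. The provider object here is a hypothetical stand-in for the provider-specific hook under discussion, not the real extension interface:

```python
# Sketch of the loop described above: for every cluster, ask its provider
# to validate the network, and only when there are errors ask it to
# reconcile. The provider is a hypothetical hook, not a real API.
class FakeProvider:
    def __init__(self):
        self.reconciled = []

    def validate_network(self, cluster: str) -> list:
        # Return a list of validation errors (empty means healthy).
        return ["missing NAT gateway"] if cluster == "cluster-b" else []

    def reconcile_network(self, cluster: str) -> None:
        # Provider-specific fix-up; cluster API proper never does this work.
        self.reconciled.append(cluster)

def reconcile_all(clusters, provider) -> None:
    for cluster in clusters:
        errors = provider.validate_network(cluster)
        if errors:
            # Hand the actual work off to the provider-specific hook.
            provider.reconcile_network(cluster)

provider = FakeProvider()
reconcile_all(["cluster-a", "cluster-b"], provider)
print(provider.reconciled)
```

Note how the loop itself knows nothing about VPCs or NAT gateways; all infrastructure knowledge lives behind the provider hook, matching the point made in the surrounding discussion.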
D
But here we're talking, at a high level, about the cluster API maybe calling a hook and some implementation, and that implies that the cluster API itself knows that there's a high-level networking config section that it itself needs to reconcile and then hand the actual work off to a third party, and I don't think that's really optimal. There should be no high-level network config.
D
That should all be provider-specific. I should be able to say: I want a Kubernetes cluster, version 1, and I want three nodes, and the provider knows exactly what it needs to do for that. And if there's a provider-specific network section that I need to tell it about, that would be opaque provider details that the provider reconciles, and we can cover that.
J
We can cover that in the provider details; we don't need to define it here. I find it really, really confusing why, in this particular case, we need to know the specifics. I can buy the idea even if I don't know it fully; I don't know why we need to call the provider to reconcile, but I'm fine with that as a goal. So, to rephrase: you tell me, when I created the cluster, will you provide a cluster to me?
P
One thing that might help here is that we have seen replication, of course, across providers in the logic that determines the network and firewall rules and load balancers, as we were saying, and as Jason was saying. One other thing that I think might help understanding: first of all, we're not saying that every provider has to do this.
P
But we have seen that it has been useful, for example, in the AWS provider, and I've seen some repetition also in the GCP provider and vSphere, to kind of generalize the network configuration that will be used for the cluster. And that's all we're trying to do, which is, at the end of the day, just trying to reduce duplication of effort between providers by providing a bit more generic code upstream. I also...
S
I mean, implementations may not have a networking daemon whatsoever; at least as far as the user is concerned, they may not need one. Like in GKE, I don't have CNI, and it was just the name of a network that I use. So sometimes, you know, you could build a provider on top of that. You could imagine you could build a provider on top of GKE that just assumes some default network, and that's the only choice that you have, and we have had this discussion now.
C
I think this is worth some additional discussion, but let's pause on it for right now and have some time for individual reflection. I'm going to jump to the last one, because I hope this one has no discussion: cluster API must support all operating systems in scope for Kubernetes conformance tests, so Windows, Linux, and so on, as stuff gets added. And...
A
Yeah, yes. So the goal behind this one was to basically say that we don't want to own a higher-level health status, outside of a bare-minimum status and reconciliation of that status. Instead, we should be able to accept input from the node-problem-detector or other external sources in order to determine node health and how to take action on those nodes.
R
So, from my perspective, this is where the details of how this gets implemented are important. If the definition of a compute layer is distinct from the definition of an instantiation of a node, I think that the choice that's made here may impact what types of common tools would be built outside of this project, in Kubernetes generally, and I would like to be able to build an add-on to handle node repair in Kube independent of...
A
So we're not trying to build this tool; we're trying to be able to leverage existing tools in this space. So we need to know, in the case of talking about machine sets and machine deployments today, if we are going to, you know, maintain those over time: ideally, if a node becomes unhealthy, we would replace it instead of just leaving an unhealthy node kind of sitting around. So we need to be able to accept that kind of node health signal for that use case.
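Accepting an external health signal (for example, from something like node-problem-detector) and replacing unhealthy machines might look roughly like this. All names are illustrative; this is not the actual machine-health mechanism:

```python
# Sketch: take node-health input from an external source and replace any
# machine whose node is reported unhealthy, rather than leaving the
# unhealthy node sitting around. Names are illustrative only.
import itertools

_replacement_ids = itertools.count(1)

def remediate(machines: list, health: dict) -> list:
    """Return the new machine list: machines reported unhealthy are deleted
    and replaced with freshly named ones; healthy machines are kept as-is.
    Machines with no health report are assumed healthy."""
    result = []
    for machine in machines:
        if health.get(machine, True):
            result.append(machine)
        else:
            result.append(f"machine-replacement-{next(_replacement_ids)}")
    return result

machines = ["machine-a", "machine-b", "machine-c"]
health = {"machine-a": True, "machine-b": False, "machine-c": True}
new_machines = remediate(machines, health)
print(new_machines)
```

The key design point from the discussion is that the health judgment itself comes from outside; this logic only consumes the signal and takes the replace action.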
R
Does that mean annotating the node or not? I think it matters what we view the scope of the cloud provider interface in Kubernetes as meaning. Today we have call-outs that let you ensure a load balancer exists, or delete a load balancer, in the context of your cluster. If we had a call-out in the cloud provider interface that allowed you to create a piece of compute or delete that piece of compute, we could build this tool agnostically.
R
I know that we're taking the cloud controller managers out of core. I'm suggesting you could build this component in the cloud controller manager, and I'm suggesting this tool has overlap with the node lifecycle controller: if you build an API that allows you to provision and deprovision compute in some location, then you'll want to integrate with the node lifecycle controller, so that when it asks "does this machine exist or not", so that I can detect a zone or region failure, the right thing happens. But...
C
So I want to thank everybody for participating in one or both of the sessions today. I know it's been close to two and a half hours, so I think we should stop now. Please review the document and comments, as well as add your use cases to the other document, and I will send out a Doodle for a time to have a follow-up discussion next week. Thank you, everyone. Thank you. Thank you.