From YouTube: 20180703 sig cluster lifecycle
A: Hello, today is July 3rd, 2018. This is the SIG Cluster Lifecycle standing meeting. We have a couple of agenda items we can start to go through, and if folks have other things they want to discuss, feel free to bring them up. The first thing I added to the list for today: I want to have a conversation on thoughts on promoting dynamic kubelet configuration to be defaulted or not, and under what scenarios.
C: If it's faulty, nothing's gonna happen; there's just gonna be a condition that says: well, there is a faulty configuration that I'm supposed to run, but I can't, so I'm using the last-known-good version. But if I actually have a valid configuration, everyone is going to start using it. So I think that it's probably out of scope for kubeadm to do the orchestration of: well, now we migrated one kubelet to the new version, then repoint the other kubelets, and so on, etcetera, etcetera. So.
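For context, dynamic kubelet configuration works by pointing a Node object at a ConfigMap holding a KubeletConfiguration; the kubelet reports the assigned, active, and last-known-good config in its status. A minimal sketch, assuming the v1.11-era API shape and a hypothetical ConfigMap name:

```yaml
# A Node opted in to dynamic kubelet configuration: the kubelet watches the
# referenced ConfigMap and falls back to its last-known-good config if the
# new one fails validation (the "condition" described above).
apiVersion: v1
kind: Node
metadata:
  name: node-1
spec:
  configSource:
    configMap:
      namespace: kube-system
      name: kubelet-config-1.11    # hypothetical name
      kubeletConfigKey: kubelet
```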
A: It's kind of a dangerous feature. It's like a foot gun, and if you opt into foot guns, well, buyer beware, right? Like, I don't own a gun in my house, so the chances of me accidentally shooting something are slim to none, but if I owned a gun, the probability would be much higher. So it's kind of the same analogy, where: if you opt into doing this thing, we still provide you the means by which to do it, but it can potentially break a cluster really easily.
D: I think that there are two problems. One problem is enabling this feature on an existing cluster: yeah, we can do it automatically, but it is risky, and I don't like that this feature would get applied without the administrator of the cluster getting full control of this operation. And the second is how we ever change the configuration.
D: The dynamic config being in place is: how do we manage the change of the dynamic configuration, and should we keep this change linked to the upgrade, or is it a separate operation? With regard to the last point, I think that the people from SIG Node did a good job in recommending keeping the ConfigMap immutable, and, to change it, creating a new one instead. And I think that we have to keep these best practices and, if not enforce them in kubeadm, at least recommend them in the documentation.
A: So I'm leaning towards just having it as an optional feature that you can buy into. I like the current apparatus and mechanism we have today, because it's very explicit, right? I'm a big fan of explicit versus implicit, because the operator knows what they're doing; they'd have to explicitly opt in to this feature, and there's less potential for catastrophic events to occur. In this scenario we can always document it, and give people the tools to do what they need to do, but make the common case really well-defined and easy to understand.
C: So yeah, I mean, I'm happy with the current situation. Okay, I don't see exactly what dynamic kubelet configuration really gives us at this point, because we still do the version branching of the ConfigMaps. Even when we used dynamic kubelet config, it was in order to, like, segregate the ConfigMaps, so that if we introduced, say, a new security feature in 1.12, it's not going to affect 1.11 kubelets running in the cluster.
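kubeadm implements this version branching by publishing one kubelet ConfigMap per minor version; roughly, assuming the naming kubeadm used at the time:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config-1.11    # one ConfigMap per kubelet minor version
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # 1.11-specific settings; on upgrade a new kubelet-config-1.12
    # ConfigMap is published rather than mutating this one
```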
C: And we already do this. And if we consider it with dynamic kubelet configuration enabled, and we upgrade a kubelet to 1.12, we still have to repoint the kubelet to the 1.12 configuration. That is an operation, and it doesn't really matter if we download the 1.12 configuration to a file on disk or if we patch the Node object via the API; we need to do one operation anyway that has to be executed on the node side for the user.
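The "repoint" operation in the dynamic case is a patch of the Node object; a sketch of the patch body (applied with kubectl patch; the ConfigMap name is hypothetical):

```yaml
# Patch body repointing one node's kubelet at the 1.12 config,
# e.g. via: kubectl patch node node-1 -p '<this document as JSON>'
spec:
  configSource:
    configMap:
      namespace: kube-system
      name: kubelet-config-1.12
      kubeletConfigKey: kubelet
```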
A: There's a way we can generalize how we approach these types of features. A general way is: we could make a blanket statement that kubeadm will always err on the side of immutability, right? Like, we do an operation, and it's set up for success in perpetuity. We will not err on the side of mutability, which is basically: we provide this way of auto-changing the environment, we'll give you the tools to allow you to do that, but that's not the default that we'd provide for you.
C: Yeah, yeah, yeah; you mean explicitly for the cluster itself, yeah. And if we go for that, we also kind of opt out of self-hosting automatically, because that is basically, like, mutating stuff. Like, we have three masters, we update the Deployment, Kubernetes updates itself, so it's gonna roll out the new API server image on all three nodes.
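A sketch of that self-hosted pattern, where the control plane is itself a Kubernetes workload and a single image bump rolls all masters (names and tag illustrative, loosely following kubeadm's old self-hosting layout):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: self-hosted-kube-apiserver   # runs on every master via the node selector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: self-hosted-kube-apiserver
  template:
    metadata:
      labels:
        k8s-app: self-hosted-kube-apiserver
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.11.0   # bumping this rolls all masters
```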
A: But again, this doesn't prevent people from... we want to be able to allow people to do their thing, but that's not the default. We just say we default on the side of immutability, and we provide the knobs for people to do what they're gonna do, because I always want to get out of the way of our operators: if they want to do something, give them the tools.
A: The next topic I had was just general naming; this one can be really short. There was a general conversation had in SIG Architecture and in Steering that we wanted to eliminate the name "master" across the code base where possible and use the term "control plane" where we can. And, this being more of a request: we should probably change the name of the master join scenario to be "control plane join". Questions, comments there?
G: I was just... we still use the word "minion" in, like, all our bring-ups, in kube-up for example, and it is a difficult thing to expunge. I am certainly in favor of, like, renaming the UI things; it's the system-level things that will likely persist for longer. Like, I was doing a Ctrl-F in kubeadm, and there's a MasterConfiguration API type, or kind.
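That kind, as it looked in kubeadm's v1alpha2 API at the time (values illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration    # the "master"-named kind in question
api:
  advertiseAddress: 192.168.0.10
kubernetesVersion: v1.11.0
```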
G: Great, but it will probably be many years, I suspect. There is another thing we could... "leader" is another word, I think. So I guess the question is: what do we dislike about "master"? There are two things that people sometimes dislike about "master": one of them is the concept that there's a dedicated node, and the other one is the sort of correctness...
G: ...the master/slave type connotation. If it's about the connotation, then obviously we should just, like, switch to other words; but if it's about the idea of a designated node, "leader" is often a more correct term. If, for example, we're talking about joining: you join a leader, right? If we have to specify it, you don't really join the control plane if you're part of it, I don't know, anyway. Sometimes the right word might be "leader"; it's fine, depending on what we're trying to achieve, or why we're trying to achieve it.
A: Yeah, we can bikeshed on this more, but I don't have strong opinions. I just know that "control plane" was the term designated within SIG Architecture and Steering as the common denominator for how we converse about these things. It's brought up in documentation, as well as communicated widely across different things. Yeah.
C: I am in favor of, like, just saying "control plane"; but yes, it really depends on whether people think of the master as a dedicated node. Like, in kubeadm, as you know, we have a taint, and you can remove that taint; or if we do as GKE, which, I think, doesn't register the master node at all.
C: There we go, okay. So, I've been a bit frustrated that the component config effort isn't proceeding as quickly as we wanted; we don't have much unity between the component configs, or next to nothing. Now, mtaufen has worked a lot on the kubelet side, and it's really improved and proved to be working exactly as it should; and in the end there are differences, but there's a lot of similar stuff that we use from the rest of the Kubernetes API machinery.
C: So it's not that different from the other APIs we have. But basically, the next step here is: we now have all our types inside of the Kubernetes core, and in our kubeadm configuration right now we embed them. You could think of them as private; everything that is inside of k8s.io/kubernetes is kind of private from a vendoring perspective, things that are in staging are, like, public, and the component configs right now are private, as they are inside of core.
A: I want to... the fragmentation of consumption of API objects across these multiple repos, from an external consumer perspective, as someone who both works on the core as well as consumes things outside the core, is mind-bogglingly complicated. Over-complicated. Like, most other things have one SDK, right? Like, if I'm going to program against Amazon, right, I'm gonna do something against Amazon, they have a single SDK that I can reference, and it's well-defined, it's well-documented.
A: We are a never-ending, shifting sea of API groups and types and versions, to the point where it is obtuse even to people who work on the core. Right? Like, these things should be super easy, and I'm just... I question why we'd want to break it into separate things versus trying to rally around: this is the thing, everything you need as a consumer, everything you need to vendor, it's all in one location. I don't want to vendor 12 things; I want to vendor fewer things.
C: I think that issue is more, if not a Steering Committee issue, an API Machinery thing in general. But I think the flux we're seeing here is because we have so much code inside of the core mono-repo that it's hard, and it's taking a long time, to move stuff out into logical places, as we're trying to do. When we have all the things in different composable staging repos, we can create, like, an SDK.
C: That is the risk: it's because we have to make stuff composable. Yes, right now we have to vendor all of Kubernetes or nothing. But if we're only making an aggregated API server, we only need the apiserver parts, so then we only want that; if we're using client-go, we can use only that. Who wants a mega-SDK with all of the components? Do we want to make component configs, do we want to, like, talk to the recommender API...
C: Do we want to create our own API server with an operator toolkit, and do we want to reuse the controller framework, etc, etc? Then we need everything, yes, and then we need, like, one mega-SDK. But it's composable for a reason, so that apimachinery can be used for API machinery; it can be used for anything, it doesn't have to be Kubernetes-specific stuff. client-go has its, like, specific use case; the apiserver repo is for creating whatever API server you want, etcetera. So I'm not...
C: Yes. And so, for this thing: the kubelet is fine, it has all the stuff needed. Currently, both the internal and external API versions are in the package pkg/kubelet/apis/kubeletconfig; the same thing goes for kube-proxy. So this is, like, the current best way of doing it: under the component's own package, its own APIs. But, as we all know, the kube-proxy version is still alpha.
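The two groups being contrasted, as of that release:

```yaml
# Kubelet component config: graduated to beta.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
# kube-proxy component config: still alpha.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
```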
C: The API group embeds ClientConnectionConfiguration, which is wrong, because it should be shared among components; and it doesn't do flag precedence, as we know, and the config file loading is not standardized. And then we have the monolithic componentconfig API group, which was, like, just created a long time ago, like near the epoch, and nobody has done anything to it; which is something we want to change now. It also embeds the shared structs, and we should move stuff out of there. The controller manager is next, then the scheduler.
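The embedding being criticized, as it surfaces in the kube-proxy config file (field values illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:    # a copy of ClientConnectionConfiguration, embedded per component
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
  burst: 10
```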
C: The kube-apiserver doesn't have component configuration at all, so we can't do anything there. The cloud controller manager is also in the package pkg/apis/componentconfig. So the goals of this proposal are to find a home for the component config API types, hosted as a staging repo, in order to make these types consumable from projects outside of core Kubernetes, and also from different parts of Kubernetes itself; for example, whenever we break out kubeadm...
C: ...we need this. And then, also, to do as k8s.io/api does and split internal types from versioned types. So the versioned types are going to be in the public k8s.io component config repo, and the internal types are going to be left in k8s.io/kubernetes, in the package kubelet/apis/kubeletconfig for example. So there, only the internal APIs; and in the new repo, the external ones. And then we can remove the monolithic componentconfig group, that was never, yeah...
C: It was never used properly. And with this proposal we're not saying we should graduate, for example, the kube-proxy component configuration; that is a follow-up, and that is something that, in the end, yes, the SIG Network approvers have to do. This doesn't mandate any structure for the components themselves; it just says: create a generic location for shared stuff, for example. And this proposal won't change the components themselves either, other than import renames.
C: There is a different goal about this, too: not just moving the API, but eventually you want to have every... so, mtaufen has documents here on why we need versioned config files, and this is, like, what we want to do. We want to take a component, like the API server or controller manager or scheduler or whatever, give it a --config flag, and give it versioned configuration, right?
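What "--config plus a versioned file" looks like for a component that already supports it (file path and values hypothetical):

```yaml
# Invoked as: kube-proxy --config=/etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables
clusterCIDR: 10.244.0.0/16
```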
C: Now, the proposal is to have this kind of structure: move stuff out, have the internal types inside of the core repo, create a shared types package, with stuff like ClientConnectionConfiguration and LeaderElectionConfiguration that is shared by multiple components, in a generic package, and remove the monolithic componentconfig group. I also have more in-depth notes on what should be done for every component. I have someone from this SIG helping out with the scheduler, and another guy, stewart-yu, is doing the controller manager.
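The shared structs named above surface as identical blocks in several components' configs, which is the argument for one generic package; e.g., in the scheduler's config of that era (values illustrative):

```yaml
apiVersion: componentconfig/v1alpha1   # the monolithic group slated for removal
kind: KubeSchedulerConfiguration
clientConnection:        # ClientConnectionConfiguration, shared
  kubeconfig: /etc/kubernetes/scheduler.conf
leaderElection:          # LeaderElectionConfiguration, shared
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
```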
A: Sometimes people secretly sneak in things here and there, but whatever. I think a long-term possible objective might be to eventually deprecate flags as a goal; for a chunk of the flags, some of them, you still want to have the overrides, and that makes perfect sense, but there are too many flags in the components. Yes.
C: Yes, so that's definitely a long-term goal. And also, one goal is to minimize the amount of duplication. So if we go look at the kube-proxy configuration, we have stuff like bindAddress, healthzBindAddress, metricsBindAddress, enableProfiling; this is kind of, like, generic to any server.
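The generic, server-ish fields in question, from the kube-proxy config (values illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256     # generic to any serving component
metricsBindAddress: 127.0.0.1:10249   # generic
enableProfiling: false                # generic
```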
C: Exactly. So that is the generic stuff; and also, whenever the API server gets its component config, there is going to be this generic API server that has this kind of stuff. And, I mean, there are a lot of different parts of all our APIs that we now have as flags, so it's duplicated all around; we're gonna unify that eventually, but before that's actually even possible, we need a kind of well-defined structure for all the component config stuff. Alright.
C: So maybe then I'll have time to review kubeadm PRs. I'm gonna be in the OWNERS, at least in an OWNERS file, of this thing. But it's definitely, like... I wrote this now, while I still had time, so we can get some consistency between the people working on it, because there has been a lot of confusion. If we have the canonical description of "this is the plan", it's way easier to actually execute on.
A: This needs to be, I mean, just being honest: I think this needs to be borne by a SIG or a working group that can cut across. It makes sense that the working group or SIG would be part of both API Machinery as well as Cluster Lifecycle, but to make sure that ownership propagates over time. In that way, for owning these things, we should do the aliasing from the start, though. Yeah, eventually I really don't see this being owned by SIG API Machinery at all, once it's finally said and done, yeah.
C: We had a conversation, like more than a year ago I think, in this SIG, where we also had people like Brian Grant, and we discussed where, and who, should be owning component configuration. I think at that time we said: let's do it in Cluster Lifecycle, at least for the time being, to unify stuff, and because we were the ones responsible for configuring the components; kind of, that is part of our mission. So then we also kind of should be responsible for enforcing consistency.
C: The next step here is how to split the code out in logical ways, to actually make it reusable; and then, after this, we might do stuff like... there are non-goals here that we want to execute later, like graduating the APIs, e.g. the kube-proxy component config, but that again is SIG Network specific; it's not our thing to graduate those structs. I don't know, do you think that, like, that responsibility sharing makes sense?
A: I said we can figure it out; every time, I think, we'll have to. I think this kind of lives, when I was thinking about it, in sub-project land. It's like a piece that is kind of a thing unto its own, right; and usually stuff that's a thing unto its own is considered, from the larger organizational structure, a sub-project.
C: So, we haven't resolved all the conversations yet; I announced this yesterday. And we also might do stuff like move the config really close to the components: so then we'd start moving components out to their own repos or whatever, and then keep the config close there, and just ditch the mono-repo, or like the one repo with all the component configs, and instead do multiple repos for every component, where both the component and its configuration live. So that is not fully resolved.
C: I'm not very afraid of the size of the PRs; I don't think it's gonna be that much. I mean, there's the types.go file; most of the API machinery stuff looks the same for all API groups, so if it just matches the standard, okay. The types should be one-to-one, there shouldn't be any changes; and then there are a couple, but not that many, imports of the component configuration, because it's not used widely.
H: Right, basically my question was, like: is there a tool that basically covers doing Kubernetes upgrades in a rolling-update fashion? Basically, we deploy Kubernetes using Puppet modules, and it's basically a pain to do Kubernetes updates, for a lot of reasons, and we are basically exploring what other new tools are available now, since we last explored this. Is there a tool that allows you to bring components of Kubernetes, across minions and masters, up in a rolling-update manner? Is there one that basically handles HA and things like that?
G: I mean, I can give you some background, which is: there were a number of tools, separately, that did this. Like, I obviously work on kops; GKE has one; you know, the commercial vendors will have one; I think there are others as well. I think there is work to unify those in the Cluster API, which is a sub-project of this SIG. Alright.
G: I think that's where the focus is, because once we get that into a sort of API-addressable form, the hope is that a rolling update would be standardized across the various clouds, or bare metal, or whatever it is. So, for example, today kops has code specific to AWS to do, like, the instance teardown and bring-up as part of a rolling update, and the hope is that in the Cluster API there will be a machine controller that will have, you know, a specific binding for AWS or GCE.
G: But then the rolling update basically operates exactly the same way, regardless of the machine implementation. So the Cluster API is, as I understand it, where it's at; and my understanding is kubeadm, because it operates at the node level, wouldn't really be addressing this. The coordination of a rolling update will take place at the Cluster API level. Anyone wants...
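For reference, the alpha Cluster API Machine object of that period carries the per-machine desired state such a controller reconciles; a sketch (group and fields per the then-current sigs.k8s.io/cluster-api alpha, values illustrative):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: master-0
spec:
  versions:
    kubelet: 1.11.0
    controlPlane: 1.11.0    # bumping these versions drives the upgrade
  providerConfig:           # small provider-specific binding (AWS, GCE, ...)
    value:
      machineType: n1-standard-2
```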
G: We're converging on a single API, and then there will be... I believe there will always be a different machine controller for AWS, and there would be one for bare metal. But the hope is that the piece which does that one binding to your provider of machines will be very small, and the piece which does the rolling update itself will be reusable, and will be the same controller across AWS and, well, at least GCE; bare metal is obviously a little weirder for the scenario.
H: I think, for the Cluster API, you can say that it would support rolling upgrades, but I think you mean that it can basically tear down whole VMs, right? Like, is there a way to do rolling upgrades of individual components, like just the kubelets, or just the controller manager, or just the API server? Or does the SIG think that there is no need for it? I'm...
H: ...just, like, opening open questions here now; I don't have a solid use case. But when we roll out Kubernetes, it is using Puppet, and Puppet has... I mean, we basically run Puppet agents that basically have a staggered four-hour window or whatever. So we want a more controlled way of rolling out Kubernetes updates, for both kubelets, kube-proxies and all the components in our clusters, and that's where my questions come from.
G: I don't know of anyone that's actually implemented the full rolling update of a cluster in the Cluster API, but yes, it would incorporate that. I think the other piece that is also in there is OS upgrades: when you have a new kernel version, there's a problem, right? So that also has to come in, and that's why I think they were talking about it, currently, in terms of replacing VMs, and sort of the immutable-infrastructure idea. But there is certainly also a need to, like, for the API server...
C: I mean, again: inside of the provider-specific config of the Cluster API, you can just put, like, an "in-place" or whatever parameter, and then kubeadm, if you installed with kubeadm, can go and upgrade, like, the individual masters in place, without tearing down any VMs, because it's assumed that your Cluster API controller will just be able to SSH, or whatever, into the masters and execute that.
C: So it's really up to you, but the Cluster API spec is gonna make it possible to actually specify this in the desired state and reconcile however you want. The default, just as we treat Pods, ReplicaSets and Deployments, is to, like, kill Pods and make them come back up; and if we apply that to machines, it means tearing down and bringing up new ones. But we can still, like, create a mode, or you can create a mode, to instead go to the parts in place and fix them.
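A sketch of how such a mode could be expressed in the desired state; the in-place knob here is hypothetical and would live in the provider-specific config, as described:

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: master-0
spec:
  versions:
    controlPlane: 1.11.1        # desired-state bump for the controller to reconcile
  providerConfig:
    value:
      upgradeStrategy: InPlace  # hypothetical: SSH in and run 'kubeadm upgrade'
                                # instead of replacing the VM
```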
H: So, one of the things I was thinking is that self-hosting was probably one of the ways that we could do it, because then you could use Deployment updates to actually do rolling updates of components, doing it by using Kubernetes itself. Is that something that is in line with what the SIG might be thinking? Or is the Cluster API the way to go, and self-hosting is just a path we didn't take any further?
G: I don't know. I don't know whether... there were certainly efforts at doing full self-hosting of things, like etcd, which I believe we are backing away from at the moment, at least, because of anxieties around the failure modes around that. Yeah, I don't know if anyone is looking at other forms of self-hosting that are less complete, as it were; less, like, circular. But that could certainly be a mode of operation in the Cluster API. The Cluster API as it stands today, as Lukas mentioned, with the provider config... it has an extension mechanism that is open-ended enough that it leaves those sorts of things undecided, as it were. So there will be some people, or some implementations, I hope, that try some form of self-hosting, and some implementations that don't, sort of like the first thing, what we do today.
C: I would say that, when you ask about self-hosting: kubeadm is moving away from it, because of the design decision we mentioned earlier this meeting, that we are very explicit about where we're executing. We can change stuff where we run, and we don't make kubeadm change stuff on other nodes, for example. So...
H: Sounds good. So I think it sounds like the Cluster API is the tool I should look at in the long run; I guess I've seen that it's, like, in a very alpha stage right now, it's still being designed. But right now we don't have a tool that allows you to control rolling updates of individual components of the control plane, right? Is that...