From YouTube: SIG Cluster Lifecycle 2020-06-16
A
B
Yeah, the work is ongoing. I got to almost all the issues that she raised, and I was just happy to show up here and present why we were doing that, at least, because I had raised that in the Cluster API repo, but it was clear that this is the right place where the conversation should happen. So, you know, the issue is there; if you have anything to add, just do it there.
B
I will try to address all the questions, but in the last couple of weeks we revamped our v1alpha implementation and now it's fully compliant with v1alpha3, and as a company we are kind of happy to be where the community's support is. So that's why we decided to submit the migration.
A
After that, I guess you also have to comply with Nikhita's request; what we have to do as well is send a separate PR which adds it here as a subproject. The PR has to be against this repository. The TLDR of this big YAML file is that we enumerate the list of subprojects for SIG Cluster Lifecycle, and the existing Cluster API providers are already in this list; for instance, you have Cluster API Provider OpenStack. Actually, we have Cluster API Provider IBM Cloud; I think you would make a very decent Cluster API provider inside SIG Cluster Lifecycle. Yeah, so basically, this is the code owners part; or rather, you should try to look at my example here in this link. Okay, just update this PR that you have already and ping the same people that were pinged here. Okay, thanks.
B
A
C
This is a KEP that I filed. I originally took it to SIG Node, and some time ago they referred me to this SIG at their meeting. I think I've documented some of that in the comments here on the KEP, but in any case, I wanted to present this here because it needs a sponsoring SIG, and the consensus seems to be that it should be this SIG. Brief overview: there are many different actors in a particular Kubernetes cluster that can disrupt a node, and there should be a central place of coordination amongst these different components.
C
Cluster API components, or third-party components that exist or, you know, will be developed in the future. The core concept here is to create essentially a locking mechanism that well-behaved actors can respect. It was suggested to me by Clayton Coleman that maybe we should use a Lease object, because that's kind of the whole design around the Lease object, so that seems somewhat sensible to me. Other alternatives could be an annotation or any other number of things that you could imagine, but primarily, for every node:
C
You acquire this lease, and other components that may want to do a disruptive thing to a node, or care about the health of a node, can then check for the existence of this lease, and that basically informs them that, hey, I don't need to worry about this thing, or I can't proceed with my action, because somebody else has this lease and they're doing something disruptive to this node.
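To make the acquire-and-check flow concrete, here is a minimal Go sketch using client-go and the coordination.k8s.io/v1 Lease API. The "node-maintenance" namespace, the lease-per-node naming, the 600-second duration, and the holder identity are illustrative assumptions on my part, not names or values the KEP has settled on.

```go
package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Assumed namespace for illustration only; the KEP may choose a different one.
const maintenanceNamespace = "node-maintenance"

func strPtr(s string) *string { return &s }
func int32Ptr(i int32) *int32 { return &i }

// acquireMaintenanceLease tries to take the per-node lease before doing
// something disruptive; it backs off if another actor already holds it.
func acquireMaintenanceLease(ctx context.Context, cs kubernetes.Interface, node, holder string) error {
	leases := cs.CoordinationV1().Leases(maintenanceNamespace)
	existing, err := leases.Get(ctx, node, metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// Nobody holds it: create a Lease named after the node. A racing
		// creator will get an AlreadyExists error and should back off too.
		lease := &coordinationv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: maintenanceNamespace},
			Spec: coordinationv1.LeaseSpec{
				HolderIdentity:       strPtr(holder),
				LeaseDurationSeconds: int32Ptr(600),
				AcquireTime:          &metav1.MicroTime{Time: time.Now()},
				RenewTime:            &metav1.MicroTime{Time: time.Now()},
			},
		}
		_, err = leases.Create(ctx, lease, metav1.CreateOptions{})
		return err
	case err != nil:
		return err
	case existing.Spec.HolderIdentity != nil && *existing.Spec.HolderIdentity != holder:
		// A well-behaved actor does not proceed with its disruptive action.
		return fmt.Errorf("node %q is held by %q, not proceeding", node, *existing.Spec.HolderIdentity)
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := acquireMaintenanceLease(context.Background(), cs, "worker-0", "machine-config-operator"); err != nil {
		fmt.Println("cannot disrupt node:", err)
	}
}
```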
C
Another use case is preventing disruption to a node. So let's say that I'm an admin and I need to get onto the node to investigate something. For some reason, I might manually create this lease, or have some other higher-level component that creates this lease on my behalf, and that informs other things: hey, don't touch this node, because I'm doing something; you're not allowed to disrupt it. So it's kind of twofold: you acquire the lease in order to do something disruptive to a node.
C
You need to acquire this lease, but acquiring this lease doesn't necessarily mean you're going to do something disruptive; it could also just mean you're preventing others from doing something disruptive. And so that's pretty much the high level of it, and I'm really looking for feedback and input. Now, one thing that's come up a number of times is that there's another KEP outstanding, the node shutdown KEP. Well, what that KEP is meant to tackle is: sometimes nodes get terminated in the cloud.
C
It's a thing for nodes that are considered failed. The node maintenance lease, as the name implies, is for things that haven't failed, so while there may be some overlap, semantically, about the state of the actual VM or node, the reason why it's in such a state is pretty much different. So in the case of the node maintenance lease, if you're intending to stop a node in the cloud, ideally a well-behaved actor would drain and properly follow a set of steps to stop the instance properly.
C
There's not really a need for this node shutdown KEP to really do much of anything as far as applying taints or whatever, and in fact that KEP could actually also coordinate with the maintenance lease, saying: hey, I don't need to apply these taints, because I see somebody else has that lease, and so therefore this disruption was expected, so I don't need to do anything. So there's some background and my thoughts on those two; I'd love to hear any questions or feedback.
A
Yeah, so, by the way, I've seen both KEPs at this point, but, you know, I only briefly looked at them. I know that you are the author of this one. Something that I noticed immediately in your KEP is that you have only SIG Cluster Lifecycle listed. Is this a recommendation by SIG Node? Is that correct?
C
A
My understanding of this is that it's not SIG Cluster Lifecycle that is the actual owner, because the actual owner is SIG Node; they own the code. This is my understanding, and I would argue, with the SIG Node folks that participate in SIG Cluster Lifecycle, that the owning SIG is SIG Node, but you know, that's my opinion.
C
Well, that's an interesting point. I think, actually, the kubelet would be totally unaware of any of this, so we don't really expect the kubelet to do much of anything. There is a question of creating the initial namespace and creating the initial objects, if we want to, as the original enhancement proposes: just creating a new namespace and creating a lease object for each node that already exists.
C
The reason for the new namespace is to have a one-to-one mapping with node names for the lease object, similar to what they do with the lease that nodes already have, like the health check lease or liveness lease or whatever they call it. So obviously, we can't really reuse that namespace, and we could put it in an existing system namespace, but then people have RBAC concerns and stuff like that. So really, on any of these points, I'm pretty flexible.
C
So it's not really impacting the code for the kubelet. You could, I guess, make the argument that wherever we create this code, wherever that code lives today, maybe that's who should own it. But I think, as far as a point of coordination and who owns the functionality, I think this potentially is a good SIG, but yeah, I'm just kind of following people's direction at this point.
F
I guess, to ask quite directly: something I think would be super valuable is to explicitly call out what the one or two killer use cases are that this really addresses, like what are the things that really go badly wrong. That could really help us figure out where we need to add code, because it sounds like it will mostly be in controllers that I think are part of, or related to, this SIG.
C
Absolutely. Well, speaking as far as OpenShift, our primary use case that I could tell you exists today is: we have another component that handles upgrades, called the machine config operator, so it handles on-disk file manipulation and then reboots a host. We also have another component that's upstream now in Cluster API, the machine health check. And so basically, when we reboot a host for upgrades, like on bare metal, that might take a number of minutes.
C
We don't want the machine health check to then mark that machine for deletion because it didn't come up as fast as we had hoped. So that's one use case.
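As a hedged sketch of the consumer side described here, a remediation controller such as a machine health check could skip a node whose maintenance lease is currently held. This reuses the maintenanceNamespace constant and imports from the earlier sketch; treating an expired lease as released is my assumption, not something the KEP specifies.

```go
// nodeUnderMaintenance reports whether somebody currently holds the
// maintenance lease for the given node, so remediation can be skipped.
func nodeUnderMaintenance(ctx context.Context, cs kubernetes.Interface, node string) (bool, error) {
	lease, err := cs.CoordinationV1().Leases(maintenanceNamespace).Get(ctx, node, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // no lease, nothing planned for this node
	}
	if err != nil {
		return false, err
	}
	if lease.Spec.HolderIdentity == nil {
		return false, nil // lease object exists but was released
	}
	// Treat an expired lease as released so a crashed holder does not
	// block remediation forever (an assumption, not KEP-specified).
	if lease.Spec.RenewTime != nil && lease.Spec.LeaseDurationSeconds != nil {
		expiry := lease.Spec.RenewTime.Add(time.Duration(*lease.Spec.LeaseDurationSeconds) * time.Second)
		if time.Now().After(expiry) {
			return false, nil
		}
	}
	return true, nil
}
```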
Another use case is: we have a lot of alerting and alarming and stuff like that saying, hey, I'm supposed to have this number of replicas for this component that's running on my control plane host, and I don't have that many replicas. There's no real, clear way to say: hey, it's okay, this node that this thing was running on,
C
it's going to be coming back really soon; it's just under maintenance right now. So all these other things don't need to set off alarms and reporting. And I added some other user stories there. One thing that we want to do, or believe we want to do, in the Cluster API project is add power state management.
C
F
Yeah, I think it does make sense. The thing I'm interested in and wondering about is: so there would be, for the case where it's powered off, there would be a field or an annotation or label set on some other object, and the controller would essentially try to hold the lease? Yeah, is that right, or how do we imagine that working? Okay.
C
They have what they call their maintenance operator, where they can set some kind of scheduled job; I'm not really sure on the underlying implementation details. They basically want to shut down the node at a given day and time, and to do that they could acquire the node maintenance lease, you know, at some time prior to that window. That node maintenance lease is an object; there's one per node. If nobody else currently has that lease, then they're able to acquire it, and once they've acquired it for that node, then they know.
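For a scheduled window like this, the holder would also need to keep the lease fresh for the whole duration; a rough sketch of that renewal loop is below, again reusing the names from the earlier sketches. The 30-second interval is an arbitrary illustrative value.

```go
// renewMaintenanceLease keeps an already-acquired lease fresh until ctx is
// cancelled, so other actors keep treating the node as under planned maintenance.
func renewMaintenanceLease(ctx context.Context, cs kubernetes.Interface, node string) error {
	leases := cs.CoordinationV1().Leases(maintenanceNamespace)
	ticker := time.NewTicker(30 * time.Second) // assumed renewal interval
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			lease, err := leases.Get(ctx, node, metav1.GetOptions{})
			if err != nil {
				return err
			}
			lease.Spec.RenewTime = &metav1.MicroTime{Time: time.Now()}
			if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
				return err
			}
		}
	}
}
```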
E
C
F
Yeah, it does make sense. I'm just thinking through what happens if that process crashes, because it will lose its lease, and what goes wrong at that point, but yeah, it makes sense. Yeah, I'm seeing how you're getting sort of bounced around the various suggestions, like "it should be a CRD", "it should be a lease". So I think my feedback would be putting in some of these use cases and drawing them out or sketching them out.
F
Alright, I think it'd be good to get it into core, so it doesn't get removed or deprecated, but that's gonna be, it is gonna be, harder, and I don't know what the right tactic is in terms of whether we should try to build it externally and then get it into core, or we should get it into core and then build it elsewhere. I don't know.
C
So then, if everybody wants to use this, now we have to inject this other dependency to make sure this add-on is there, and that seems kind of messy when there's already a core object type that seems very suited to this purpose. So, I mean, if it was truly up to me, I would just make this an annotation, or you could just document it: use this annotation, and we could all move on with our lives, but people really don't like annotations, for some reason.
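For comparison, the annotation alternative mentioned here could look roughly like the sketch below: a best-effort marker written straight onto the Node object, with none of the Lease's holder and duration semantics. The annotation key is hypothetical, and the snippet additionally needs the k8s.io/apimachinery/pkg/types import on top of the earlier sketch.

```go
// Hypothetical annotation key; not an agreed-upon convention.
const maintenanceAnnotation = "example.com/maintenance-holder"

// markNodeUnderMaintenance records a best-effort maintenance marker as a node
// annotation. Unlike a Lease, there is no expiry, so a crashed holder must be
// cleaned up manually.
func markNodeUnderMaintenance(ctx context.Context, cs kubernetes.Interface, node, holder string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{%q:%q}}}`, maintenanceAnnotation, holder))
	_, err := cs.CoreV1().Nodes().Patch(ctx, node, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```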
F
Yeah, the reason why, and this is where I was thinking about a CRD, was just around the failure semantics of what happens when you lose your lease and what goes wrong. But it is like a best-effort coordination thing, so maybe it doesn't much matter, but yeah. Sorry, I don't want to dominate. Did you have more things you want to talk about?
A
You know, I'm trying to consume the KEP while you guys were talking. So basically, in terms of the conflation with the other KEP: usually what we do is we get the folks from the conflicting KEPs together, maybe you can even invite Clayton about this into a meeting, and the folks should discuss, when we have conflicting KEPs, whether maybe the node shutdown KEP should be dropped in favor of this particular KEP.
C
I think that's great feedback. I don't necessarily think that this KEP really overrides or supersedes the node shutdown KEP; I think they're actually just two completely separate concerns, and it would just be an additive enhancement to the node shutdown KEP to potentially respect this lease. I think that's a minor detail if theirs is merged before ours. I guess my feedback was: can some other people look at it and kind of really digest it?
C
I guess what would be helpful is if I had some extra buy-in, potentially from people in this SIG, if this is going to be the owning SIG, to say that at least the skeleton of the idea is good and we want to move it forward in some form. As I said, I'm not necessarily tied to any one particular implementation detail. It's just, you know, you've got to put a KEP together and then debate.
A
So, as I mentioned at the beginning, we should have this under SIG Cluster Lifecycle if we are the proposed owning SIG. Like I said, moving it is going to break all the comments. Also, if it's under the SIG Cluster Lifecycle folder, I guess this should go under the generic one; we now have a sub-folder there. Oh, sorry.
D
A
C
D
A
C
A
It's good to have it, because people like to bikeshed names; it's good to have the name settled before merging the KEP, yeah. This again goes back to the owning SIG: if we modify the node controller or the kube-controller-manager to instantiate this namespace, then the question is who the stakeholder is. If it's the controller manager, it's SIG API Machinery; if it's the node controller, it's SIG Node, yeah. So they are the owners of the code.
C
Yeah, I will defer to your recommendation there. I think, if we can get some critical mass behind an idea, then we can kind of ferret out where this document lives. I think that's kind of in the realm of implementation detail; I think we already know what the implementation will basically look like, but I don't see, like, it's definitely a gray area.
C
A
I'm going to add my comments to the KEP later. I also had something else, but I forgot it.
A
Yeah, okay. Just to say, you can also add your comments, I guess, if you want to. Basically, I'm a little bit worried about the name, because the node maintenance lease can be used for other things. So I was wondering whether we should generalize it a bit more, like: can we lock the node for something else, other than maintenance?
C
I think I've jotted this down; I meant to add it to this KEP. Say you have a really critical workload that you don't want to be disrupted for any reason. Obviously, there are probably better ways to do that in Kubernetes, but this could be one additional way of doing it if, for whatever reason, you needed to: I just need this thing to continue running and it cannot be disrupted.
C
That would be a non-maintenance use of the maintenance lease, but it's kind of like you're saying: I have the maintenance lease, therefore I'm the only one allowed to do maintenance or some disruptive action, so nobody else can do maintenance. But yeah, I don't care what we call it: maintenance lease, disruption lease, do-not-touch lease.
A
Okay, so, understood. I think you basically should ping, you can ping, the SIG Node folks on the PR, saying that we plan to merge it under SIG Node first to preserve the comments, but then we should move it to SIG Cluster Lifecycle, potentially. And also, I encourage you to gather feedback in the SIG Architecture meeting, because this is touching multiple areas of Kubernetes, so I'm sure that people there are going to have opinions on the name of the lease and also the name of the namespace and so on, and on the general idea.
G
C
A
C
F
You could put them in an appendix and say you were asked to add them or something, but, like, I hate that sort of thing, right. Just because if we see that it's Cluster API things, then that might help, it might be positive; like, if we see they're all, and I'm not saying they are, but if we see there will be kubelet things that might care about it, right, that can help figure out what the changes are and where they will live.
C
Yeah, I originally thought this was a SIG Node thing, because it seems very node-centric; like, yes, other people will touch this thing, but it's ultimately about the node. But they played total hot potato and tossed it this way, so here I am, and that was some time ago, so I just finally got it on my plate to bring it here. We got some more internal discussion. So could I say this has the semblance of something that seems like a reasonable idea to this group when I go to the SIG Architecture meeting? Yeah.
A
I had something else, but, sorry, I forget it, because there are too many things on my mind at the moment. But yeah, we can get back to this proposal. We support it; just try to get feedback from them, and sorry for the bureaucracy rabbit hole, but we just need more people to look at the wider proposals.
C
Yeah, perfect, yeah, I totally understand. But yeah, I felt like I needed some kind of support from some group, and it sounds like this group is at least tentatively supportive of the idea. So then it's a little bit more helpful to go to SIG Architecture with that; you know, if it's just me in there, like, "hey, I have this idea," then they're going to kick me back to you and all that kind of stuff. So I've got to.
C
Yeah, I think I agree. I thought the user stories were kind of already pretty clear, but I can see how maybe they're not, especially when you're trying to balance the user stories with the rest of the document; it's kind of hard to see what you're actually trying to do. But that's good feedback.
D
H
A
D
A
I've got a small PSA, actually not that much of a small PSA, but the Kubernetes release cycle is changing again, and if you have a subproject, you might be affected by this again. So the TLDR is that we have a code freeze on July the 9th, which is a bit of a change.
A
The previous code freeze was this month, and also the release has again shifted forward in time. I believe, I don't remember what the last date was, but now 1.19 is going to be released on August 25th, which is quite the delay. Actually, we have the numbers here, so it's plus two weeks, yeah. Any comments on this change?
A
All right, an update from us at kubeadm, quickly: we merged quite a few PRs this cycle, mostly stability changes. So the first batch of patches was related to failures when you are executing joins, and we also backported some of these to the supported skew, because we saw users, especially in Cluster API, seeing these problems in older versions of Kubernetes and kubeadm. The second change I wanted to mention is that we are deprecating kustomize usage in kubeadm, replacing it with raw patches, and the PR is in flight to get merged.
A
This is a mouthful of a feature for 1.19. And finally, we have a couple more PRs that are touching the sensitive topic of component config: how we manage CoreDNS, kube-proxy, and the kubelet, I guess, and how we're managing the configuration for each component that we depend on. These are the ones I have highlighted; have a look at the linked PRs.
I
Yeah, we had a meeting last week and talked about getting some of the etcd-manager code into the etcdadm repo. It's something that has been on the roadmap, this idea of a layer above etcdadm, the CLI, I mean, that helps; you know, like an orchestration layer that doesn't require Kubernetes, a Kubernetes control plane. So that's the small update.
I
F
That sort of thing will be like the second step, so the first step is just to get them all in the same repo, and then we can actually start the convergence. It's been a sort of barrier, I guess; it's actually making a lot of progress on that, and it has sort of hampered my ability, my personal ability, to work on the etcdadm CLI, because I've been sort of pulled into this other project. So hopefully it will be one project.
F
A
I think this is a very good idea. So, just to state more about it: etcdadm is going to remain the only CLI tool in the repository?
F
etcd-manager actually has a CLI, so when we merge the code in, it will also get that CLI; it lets you do things like listing backups, for example. The intent is that that CLI tool goes away, and etcdadm remains the main CLI. etcd-manager does have a CLI that will come along, but we're not going to promote it as, like, another CLI or anything like that, yeah.
J
F
K
F
An AMI that we built ourselves, derived from Debian, so it's not a massive change, but we've definitely lost some things by doing this. We've gained security updates, for example, or more timely security updates, but we have lost things like the preloading of Docker or containerd, for example, so the bring-up is a little slower. We preloaded a bunch of other packages that Kubernetes needs or encourages; like, I think conntrack is not installed by default but is so important, that sort of thing.
F
L
F
It's not getting deprecated; it does predate Cluster API. We are actually working, we want to start adopting Cluster API in kOps. There's a nice architectural diagram with, like, the different layers, and I think, if I were to try to paraphrase it, the idea is that kOps is sort of like a user-facing tool, and we are also trying to build, in SIG Cluster Lifecycle, a bunch of building blocks that can be combined to create, to make it easy to create, consistent user-facing tooling.
F
I think, actually, Cluster API itself may end up being more, may end up producing, one of those tools, which is great. And the intent is that however you assemble those tools, whether you get one that is assembled by the community like kOps, or you use clusterctl, which is going to be the name of the Cluster API one, whether you get one that you've assembled yourself, or whether you get one that is assembled commercially for you, the overall experience will be the same; it'll be a good experience.
F
So I believe, at the moment, kOps is still considered more of a recommendation for people that want to just use it, and if you want to get into Cluster API, it's great, but it's still not GA technically, and so it's a little early; but it has great promise, and you'll be able to consume that through kOps, eventually.
H
I don't, I'm not super familiar with kOps, but I don't think clusterctl is as mature as kOps is for doing full deployments. Like, you know, you can use clusterctl to initialize the cluster that could host the CAPI components, and then you could, you know, migrate things for the cluster, but I don't think it does installations the same way that kOps does.
F
Okay. kubeadm is another one of those standardized building blocks that we are trying to produce, as are a bunch of these subprojects like etcdadm. And if you sort of combine kubeadm, which is node initialization; etcdadm, which is etcd management; Cluster API, which is sort of infrastructure, I guess we call it management now; and the cluster add-ons subproject, which is the sort of Kubernetes components that go onto a cluster.
F
H
A
J
Yeah, all right. So it's been like three weeks now since we released minikube 1.11. The main focus there, other than general integration testability, was specifically working towards making multi-node clusters a GA feature; we're not quite there yet, but we made some good strides in the last release. And the other big thing is that we're now pushing the Docker driver; we're going to try to push the Docker driver to be, like, the default driver for every platform. We're not quite there yet either, but we're working towards that.
J
A
You know, core Kubernetes no longer supports this version, and I spoke to him and told him, like, you should start dropping these old supported versions, because it's a complication for you. And I saw in the kubeadm Slack channel one of the, sorry, the minikube maintainers, I forgot his name, he was asking about support for 1.13. So I guess my question is: what is the cutoff? How far back do you go with support?
J
A
J
A
H
Yeah, this is just a really quick update: last week we merged a PR that allows the CAPI provider for the autoscaler to run in, like, a separate management cluster mode. So previously you could only run the autoscaler joined to the cluster it was scaling, but now we have a workflow that people can use to run autoscaling with a management cluster and several workload clusters. That's a minor update, but I think it's pretty big for the project.