From YouTube: 20200429 Cluster API Office Hours
A
Alright, hello everyone, and welcome to the Wednesday, 29th of April Cluster API meeting. Cluster API is a project of SIG Cluster Lifecycle, and we have the agenda for this meeting. If you want to speak, use the raise hand feature of Zoom; you can find it under the participant list. If you have any topics, please add them to the agenda.
A
We have a few PSAs and demos today, some RFC discussions for proposals, and a discussion topic, and then at the end of the meeting we usually go into triaging new issues that came up. Before we start, let's give you a little bit of time if you want to say hi, and if this is your first time, introduce yourself. I'm gonna mute, and feel free to unmute yourself.
B
Well, I'll say hi: my name is Matt Boersma. I work at Microsoft with Cecile and David Justice, and I'm trying to get in the loop with them on Cluster API. So I'm here, welcome.
C
A
Alright, if we don't have anybody else, the video notes are in the agenda, and if you want, add your name to the attendee list. Let's go on to PSAs. So yes, we released 0.3.4; you can find the release notes here. There's currently a bug in clusterctl in it. We will have a fix up soon, and that's probably gonna go in 0.3.5. For 0.3.5 we're also actually getting Kubernetes 1.18 support; there's a PR out that I have linked here.
A
A
E
A
You could use the experiments for that too, for example introducing new controllers or introducing new CRDs. They live under this API group, and they have a lifecycle as well. And if you're here, would you be able to post the link to your PR in the chat, so that we can ask for feedback on that too? Does that make sense?
F
A
PR 2974, posted in chat, has updated guidelines for experiments, with more details on how we would expect an experiment to go in. So for your question: we would expect an experiment to have at least a proposal in a provisional state. Then we can get them in very quickly, by looking at the code and keeping the API and implementation simple, even though that feature wouldn't be complete yet.
A
And then we want to promote these things: we want to go from an alpha experiment to a beta experiment, and ultimately we want to either put these things outside of Cluster API, so promote them into maybe a new package, or bring them inside Cluster API, from the experimental API group into the Cluster API types. And the main motivation to do this was to try to iterate on those very quickly, as you mentioned.
I
Quick PSA: there is a new periodic job, which is called the full Cluster API end-to-end job. This job is testing clusterctl, testing the pivot to self-hosted, and also testing KCP and MachineDeployment upgrades. These end-to-end tests are based on the e2e framework, and recently we added a lot of new helpers to the framework. So providers can now use these new helpers for building their own end-to-end tests, but they can also reuse the entire spec to test.
I
If, for instance, they want something that works with a specific provider: the new specs are designed to be reusable, and the infrastructure provider is basically plugged in via a config file. And the last point is that we are going to sunset the existing Cluster API CAPD end-to-end job, because it is superseded by the new job.
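For context, the e2e framework the speaker refers to is driven by a configuration file that describes the providers under test. A rough sketch of what such a config might look like is below; the field names and values are illustrative assumptions, not the exact schema of the framework.

```yaml
# Illustrative e2e config sketch; field names and values are assumptions,
# check the cluster-api test framework for the real schema.
managementClusterName: capi-e2e
images:
  - name: gcr.io/example/cluster-api-controller:dev   # preloaded into the kind management cluster
    loadBehavior: tryLoad
providers:
  - name: cluster-api
    type: CoreProvider
  - name: docker                 # infrastructure provider plugged in here
    type: InfrastructureProvider
variables:
  KUBERNETES_VERSION: v1.18.2
intervals:
  default/wait-cluster: ["5m", "10s"]
```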
A
J
I
I can give you my opinion on this. The downside is deciding who is the final owner for the test, whether it is the core project or the provider; this was an experiment to test if the job is really portable. We can decide to close this PR or to merge this config, so eventually a Cluster API developer can decide to test first on CAPD and then test on their provider.
I
E
I
L
K
Okay, so what I'm going to show today is an integration of the autoscaler using the Cluster API provider, basically using it with the docker provider. So this is the nothing-up-my-sleeves moment here. I'll just show you, so you can see I've got one cluster here called work.
K
I've already merged the management plane into my work plane as well, so you can see, basically, this is a single cluster at this point, and I've got one machine deployment currently in there. Just kind of showing everything in here, you can see the number of machines: I've got a control plane machine and I've got this single machine that's in the first machine set.
K
So what I want to do at this point is I'm gonna switch to a different terminal window here, and I've got a small script that I've put together. I've built the autoscaler locally and I'm just gonna run it from the host, communicating with the docker cluster, essentially. And I'm doing this because part of the reason for this is I want to create a debug workflow where we can look at the autoscaler without needing to invoke some of the more heavyweight cluster providers.
K
So I'm gonna start this and hopefully it'll work; we'll see a bunch of spam. So what we can see now from the logs here is that it's running and it's communicated with the cluster, so our autoscaler is in place at this point. So the next thing to do, and you can see a couple messages here: it's got nothing to scale down, it doesn't really know about anything. It's seen our different kinds of machines here, but it's not doing anything with them.
K
So back over here, I've got another machine deployment, and you can see these are the annotations that are specific to the autoscaler. I've added these so that it'll do something once I put it in place. So what I'm going to do now is just have it create this, and I'm gonna switch over to the logs here, because what we should see pretty quickly is that it's already picked up this group, it sees what it's supposed to do here, and it should start scaling up.
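The annotations mentioned above are the ones that opt a MachineDeployment into autoscaling. A minimal sketch is below; the exact annotation keys depend on the cluster-autoscaler version, so treat them as an example rather than a reference.

```yaml
# Sketch of a MachineDeployment opted into the cluster-autoscaler. The
# annotation keys are the ones used by the autoscaler's clusterapi provider
# at the time, as far as I recall; verify against your autoscaler version.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: work-md-1
  annotations:
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  clusterName: work
  replicas: 1
  # selector and template omitted for brevity
```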
K
So far this is just kind of operating on the Cluster API objects: if we had made this machine deployment with a single replica, I would have expected Cluster API to do the same thing. So let's look first; you can see that it's making a new machine for us now. If we look at the machine deployments, you can see we've got a second machine deployment that's scaling up, and if I get the nodes, we should see this new node; it's already been created.
K
You know, it's gonna take a couple seconds for this to come up, and I can just kind of follow that. While we wait for it to get ready: a big part of why I wanted to do this is that we've been trying to solve an issue on the autoscaler side with creating automated tests for the autoscaler. It's a little too heavyweight to run the regular providers, so this docker provider gives us a great way to kind of interact. So, okay, at this point I've got everything ready.
K
K
Hopefully. I'm noticing that it didn't want to actually create new nodes yet, but if we look at the autoscaler now, we should start to see a demand for it to create new nodes. Now, I've got the autoscaler set to re-examine things once every 10 seconds or something, so it's looking pretty closely, but before long we should start to see it finding unschedulable pods, and then it will create new nodes to go along with those.
K
So, unfortunately, this is kind of a slow part of this demo, and I know we have kind of a packed schedule today, but hopefully it'll scale up and we can start to see this. Let me just double check to make sure it's actually... alright, well, I guess I need to scale it up more, because it actually was able to run all of this. So I'm just scaling up a bunch.
K
Okay, now we see a couple pending pods in here, and so at this point you can see that the scaler has now discovered that it has pods that can't fit, and so it's going to start triggering a scale-up. So if we go back here and look at the machine deployments, we can see that it actually wants two replicas now, but it only has one. So this would be the point where I start to look at the nodes.
K
You know, we can see that it's bringing another node up now, and what I'm going to do is, when this node gets ready, I'm not actually gonna let this job complete. What I'll do is I'll kill the job, or I'll kill the deployment, and we should see it scale back down. I'm just trying to stay a little sensitive to the time here, because I don't want this to go on too long, but again, the big point of why we want to do this.
K
I think it'll really help us accelerate testing on the autoscaler side of things, and although it doesn't solve all of our problems with respect to testing the individual providers, I think it gives us a really great window into looking at just the autoscaler mechanics and figuring out how we can shake out more bugs from the autoscaler.
K
K
K
K
Just to look over here quickly, just to see what's going on: you can see it's scaled back down at this point. So just look at the nodes quickly, make sure it's gone; so it's actually gone, yeah. So that's about it. I look forward to any comments or anything. It's kind of tricky to get this working currently because of some bugs and some other things going on, but yeah, I'm hopeful that we'll get this kind of as an end-to-end test on the autoscaler side of things. So yep, that's it.
M
K
M
K
The docker provider is creating the nodes, so I'm essentially running the autoscaler and letting it talk to Cluster API. So it's using the generic Cluster API interface, and then Cluster API I've told to use the docker provider instead of, you know, AWS or Azure or whatever. So Cluster API is actually handling creating those nodes and whatnot. And what I've done...
K
M
A
All right, I think we can move on. So there is some space on the meeting notes for just generic questions; you can add them under here. If you are also implementing a provider and you need some help, this also would be a good time to ask any questions you have. So before I move on to the RFCs: does anyone have any questions under the generic questions? They could be even very minor.
D
D
N
N
In this approach the infrastructure provider can give the impression to Cluster API that this Cluster API machine is being replaced, but underneath it could be the same machine with the same operating system, so that lifecycle can continue. From Cluster API's perspective it looks like a machine has been deleted: a Cluster API Machine, capital M, has been deleted and created. So that's one option.
N
G
D
Just want to bring up my use case: I'm implementing the bare metal scenario, so I have a limited set of machines. So probably the in-place approach is the best choice for us, but thinking about the workload on the existing cluster, if we're gonna remove one of the existing machines from the cluster, I have some concerns, like draining the node, or the workloads on the cluster, etc.
N
Yeah, well, one thing to keep in mind if you're trying to avoid the drain: as far as I know, and I can share this maybe in the slack, upgrading without draining is not supported. I mean, it may or may not work, but I think there's an outstanding issue, like there's a decision to be made, and I think so far, officially, the assumption is that you drain before you upgrade. So that's something else to consider.
N
A
O
O
The general policy for Cluster API has been to not take on the problem space of mutation, because that's a space where many bodies are buried. But that doesn't mean that there aren't patterns you can follow here, such as the kubeadm operator, which is in spec but not complete, which is basically an operator-style pattern to do in-place upgrades asynchronously, and I know that several other distributions do a very similar model as well. That allows you to circumvent the idea of the actual provisioning of machines themselves, because you're not actually doing provisioning.
E
F
Yeah, I just wanted to comment on the idea of pretending to delete machines but not actually deleting them. I kind of went down that same line of thought as a possibility, because we have the bare metal stuff that we're trying to support downstream, and my takeaway from that was that it's probably not a great idea, just for all the things that you would need to do to make that workflow work; you're really gonna get down into the weeds, in my opinion. Now, don't let me stop you. The other thing I wanted to mention:
F
Previously, there was the understanding that we were going to not focus on in-place upgrades for now, because we're trying to get something that works for, let's say, the 80% use case, but we weren't shutting the door on in-place upgrades, and we were kind of waiting on things to get to some point of maturity before...
G
Sorry, did you say my name? Yes, okay, thanks. I just wanted to add that if you are using something like the kubeadm control plane implementation that we recently added for v1alpha3, it does assume, and currently requires, that it's going to create new machines to scale up and delete old machines to replace.
G
So I don't imagine that that would necessarily work in a non-virtual-machine setup right now, where you want to retain machines, but if you can find ways to make things work, that'd be good. And then, I know, Michael, you mentioned eventually maybe trying to support more than just the 80%, versus what Tim was saying about being always immutable.
G
I think we can reevaluate as time goes on, but, you know, as things are implemented right now, there's some wiggle room for different infrastructure providers to potentially do things that don't involve delete and recreate, but there are definitely portions of the project, like MachineSets and KCP, that expect and require delete and recreate right now.
A
F
Sorry, I would just say that we say the 80% use case, and I totally agree with that, but I've been keeping kind of a running tally in my head of all the people that want in-place upgrades, and a lot of those people are not represented all the time, because Cluster API doesn't do in-place upgrades, and so by virtue of not having it, we don't have those users providing that feedback. So I just want to make sure that we're mindful of that; this has been a recurring theme.
F
F
You might want to do it on bare metal because, you know, there's a lot of cloud stuff out there already. So I think, yes, 80% of users using kubernetes are probably using VMs, but what about the 80% of users that are actually going through and managing their own deployments and that kind of thing, like actual large-scale, large-footprint, the big stakeholder users? Are we optimizing for a lot of small users?
F
A
P
Yeah, I mean, generally I agree with supporting as many use cases as we can find. I think the mismatch here is that the majority of the people who have found these kinds of gaps have gotten them implemented by actually doing the experiments, opening the PRs and stuff like that. I feel like we've talked enough about getting in-place upgrades to work, but somebody who really needs it should have done it by now.
P
You know, that's kind of where I land on that: on our side we had plenty of issues with the way KCP was managing rolling through the upgrades, but we did a bunch of testing, we made a bunch of fixes, and we changed one or two of the behaviors, and that's the way that we got it to change.
P
I don't think the way that those kinds of big decisions are going to be made is by saying, hey, there are 900,000 people out there who want in-place upgrades, please do it for them. We have a very small team over at VMware and a few other folks who do most of the implementation for these kinds of things, and this is a huge lift to get in-place upgrades to work the way that people seem to need them to work.
F
I agree, I think that's a great point, but I also would say this kind of goes back to what was said about the project versus the community: let's try to make sure that we're saying, hey, this door is open, create something cool with these building blocks. If you don't like the kubeadm control plane thing, if you don't like machine deployments, show us the cool thing you want to do, and hopefully it's really broadly applicable and then we could use a lot of that stuff.
F
O
I think we've had this ask a number of times. I think, if people want to take a point on this, writing down the user stories and starting to draft and coalesce on a proposal seems like a good place to go; we could probably take a little discussion from this point. So if you are interested, my recommendation is to sync up on the slack channel, start to write down your thoughts into a concrete set of ideas and user stories and proposals, and then we can reconvene with a more informed discussion.
M
Yes, I think it's time that we do get into a discussion on immutability as such, because in-place upgrade is one part of it, and so I think... What is the suggestion, like, how do we go about it? At this time we are at v0.3.4; when will we open it up, will it be 0.3.5, 0.3.6? And before that we need some specs, right, or at least some kind of a thought as to what we need to do to enable that, and yeah.
A
A
O
If you are interested in this user story, please feel free to reach out to me on slack. I will try to coalesce and sort of guide you on the existing ideas and literature that we have on this stuff already. We've gone around the circle on this topic for quite some time, so I think right now we should probably go async and we can discuss, and hopefully we can get a solution in place that ideally answers it, not necessarily from this subproject itself, but in SIG Cluster Lifecycle generally.
A
E
A
Alright, so moving on to the RFC discussions: we have seven of them. Just to clarify what this means, each is timeboxed to five minutes. This is just to give space to everybody to introduce the proposals that they're writing, with a link to a Google Doc or a PR or whatever else.
E
A
F
This stemmed off an issue that I filed some time ago; there's a little bit of back and forth in it, and I'll link to it in a comment at the top. But basically, an individual machine has a certain lifecycle between create and delete, and some people want to inject custom actions, so that maybe a third-party or external controller can do some kind of thing in their environment, whether it's making a call to some external API that does something, maybe controlling the behavior of an application, or doing something in the infrastructure provider.
F
F
Say you delete a machine in a machine set because it's unhealthy and you just want to get a different one. What's gonna happen is that machine is marked for deletion, the MachineSet sees that, and boom, you get another machine, while that other one is still in the process of going away; at least that's how our code works downstream, I haven't really looked closely at upstream, but in any case. So now I've got two machines going on at once: one that's going away and one that's coming up. But in the case of control plane machines...
B
F
That way I'm not deleting all my actual bits for that instance, in case some kind of disaster happens and I need to, for whatever reason, keep that machine around, because it's got a copy of the data that I really need. And it also ensures that, hey, the actual other machine is eventually gonna come back, is actually going to come up, right? I don't want to be down a master, a control plane machine, for some protracted period of time when the replacement never comes alive.
F
So it's nice to have that thing sitting there, ready in case I need to fall back to it in some kind of emergency scenario. That's the use case that I came up with; it's one of the problems I've seen when we're trying to make control plane machines more replaceable for what we're doing downstream, and so I just noticed that as a gap, and then through the discussions that kind of expanded into, well...
F
Wouldn't it be cool if we could define all these other places, because, as you might see in some of the other discussion topics that follow along with this one, there needs to be some kind of granularity, like remediation and stuff like that; they want to hook into some of these areas. So I'm proposing a series of simple annotations that will basically pause the machine controller's reconciliation of a machine at different points and allow other things to happen. Then, when those annotations are removed, the machine controller continues.
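To make the idea concrete, a hypothetical sketch of what such a pause annotation could look like on a Machine is below; the annotation key and value are made up for illustration, and the real names would come from the proposal.

```yaml
# Hypothetical example only: an external controller adds an annotation to
# pause the machine controller at a lifecycle point (here, before drain),
# does its work, then removes the annotation so reconciliation continues.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: controlplane-0
  annotations:
    pre-drain.hook.example.io/etcd-backup: "backup-controller"   # made-up key
```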
A
A
So I'll try to summarize: this is a use case that's coming from Metal3. They need a way to use MachineHealthCheck, but with an external remediation process. So there's this proposal that wants to add support for a MachineHealthCheck external remediation, like as a strategy, and the other proposal is to add support for a machine to be reloaded.
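For reference, a MachineHealthCheck today looks roughly like the sketch below; the external remediation strategy being discussed would be an addition on top of this, so it is not shown as a field here.

```yaml
# Sketch of an existing v1alpha3 MachineHealthCheck; the proposed external
# remediation strategy is not part of this object yet.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: workers-unhealthy-5m
spec:
  clusterName: work
  selector:
    matchLabels:
      nodepool: workers
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: Unknown
      timeout: 300s
```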
A
I
Also, conditions is a proposal that aims to add a condition type under the status of each object, and the proposal has been scoped down to a first iteration that aims to define the condition type, so we can start implementing a first set of conditions and get some user feedback on what we are doing. What is important to notice is that we are trying to define a single condition type for all the objects, and there is a similar KEP in upstream Kubernetes.
I
That KEP is proposing a really similar approach, in that they are trying to plan for a unification of the standard condition field on all the types, which nowadays are different. We are trying to keep up with the Kubernetes proposal, but that is ongoing, so it is not easy. The main difference between the Cluster API condition and the Kubernetes condition is that in Cluster API I am proposing to introduce a field, which is severity, that is designed basically to give actionable feedback on what is happening, for instance during longer-running operations.
I
The goal is: if my cluster infrastructure is provisioning, I would like to understand whether, at a certain point, the cluster infrastructure is simply not ready yet, or there is some condition where the user should take action, or we have hit a real problem, or simply the action is taking a long time due to the underlying infrastructure provider. So this is the main difference, and the last point that I would like to call out is that we are trying to keep this v1alpha3-compatible.
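A sketch of what a condition with the proposed severity field might look like in an object's status is below; field names follow the proposal as described here and may change as it evolves.

```yaml
# Illustrative status fragment showing the proposed condition shape.
status:
  conditions:
    - type: InfrastructureReady
      status: "False"
      severity: Info          # proposed field: Info, Warning, or Error
      reason: WaitingForInfrastructure
      message: "cluster infrastructure is still being provisioned"
      lastTransitionTime: "2020-04-29T17:00:00Z"
```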
I
Q
Yeah, so I mentioned this last week, but I started working on the proposal document to define extensible template processing for clusterctl. Feel free to make comments and suggest revisions on the document, but the idea is essentially just creating a set of interfaces that allow clusterctl to be extended to support other templating mechanisms. It will still default to the current mechanism of variable substitution. I know it seems like there are some PRs, I think in the Azure provider, about using kustomize to manage the flavors.
Q
So if you're interested, or if you had concerns like, okay, there are all these permutations of flavors that we need to support, how would I work with kustomize, or if you're thinking of other tools like cue or ytt or whatever, please feel free to provide your comments or suggestions in the document; that will kind of help in making the interface more generic. Yeah, that's it, and I'll be working on it today, so there's definitely a lot more work that needs to be done.
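For context, the current default mechanism is simple variable substitution in the cluster templates, roughly as sketched below; the proposal would let an alternative processor (kustomize, ytt, cue, and so on) produce the same objects instead.

```yaml
# Fragment of a clusterctl cluster template using the default
# variable-substitution mechanism.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${NAMESPACE}
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
```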
A
R
So what we are trying to do with post-apply is to have a mechanism to apply an initial set of default resources after a cluster first comes up. The most important thing we want to have in a fresh cluster is network connectivity, so this is a way to apply a CNI right away after the cluster becomes ready, but it is not limited to that.
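A hypothetical sketch of the shape this could take is below: default resources (for example a CNI manifest stored in a ConfigMap) bound to matching clusters so they are applied once the cluster is ready. The kind and field names are illustrative, not an agreed API.

```yaml
# Hypothetical post-apply sketch; resource kind and fields are illustrative.
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: default-cni
spec:
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
    - kind: ConfigMap
      name: calico-manifests   # ConfigMap holding the CNI YAML to apply
```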
R
F
I think I brought this up some time ago, but this is an upstream Kubernetes KEP for the same thing. I'm not too tied to the implementation here; actually Clayton Coleman is the one that suggested using the Lease object for this, and so that's why I wrote it up this way. But basically, we've got a lot of different actors that want to do things to nodes or machines that are disruptive.
F
So, for instance, some people want to have a node or machine maintenance CRD that shuts down the nodes so they can perform operations on bare metal, and other things like that. Some of the discussions that we had, internally and externally, were very much revolving around the Machine API, but I'd like to take that a step higher. We need the ability to coordinate disruptive actions to nodes, and the reason we should coordinate on a node versus a machine is that it allows us to integrate with components that aren't machine-aware.
F
So you might have just some other thing that is operating on a node in some way. In particular, you might be running something like some critical job and you don't want that to be disrupted, and so then you might acquire this maintenance lease that says: hey, I don't want anybody to do any disruptive action to this node. Meanwhile...
F
Somebody else wants to shut down the node and do an in-place upgrade. We do in-place upgrades; we have this thing called the machine config operator, and so it'd be nice that we could have those two things coordinated: hey, you want to shut this thing down, but this thing really needs it to run, and then once the lease is free, it can be acquired by a component and it can do its disruptive action.
F
S
Or somebody took it down because they're doing some kind of administrative activity, maybe they're replacing hardware, and they keep their lease on that machine while they stop the cloud instance or take a snapshot, whatever. But this serves as a centralized point for all these different operators that want to do things that are disruptive, or in turn prevent disruptive actions from taking place on there. So that's what this is about. I've typed up some initial suggestions of what I think it could look like. I talked to the SIG Node folks; they said this probably belongs to SIG Cluster Lifecycle.
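A sketch of the idea using the standard coordination.k8s.io Lease object is below; the namespace and per-node naming convention are assumptions for the example, not part of any agreed design.

```yaml
# Sketch: one Lease per node acting as a maintenance lock. Whichever
# component holds the lease may perform (or block) disruptive actions.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-maintenance-worker-1     # assumed naming convention
  namespace: node-maintenance         # assumed namespace
spec:
  holderIdentity: machine-config-operator   # current holder of the lock
  leaseDurationSeconds: 600
```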
S
A
A
M
M
F
M
I
A
That's a good question. In general, the Google Doc should be ready in terms of contents: do you have implementation details, do you have risks and mitigations, upgrade plans, and all those things. I guess for experiments we haven't decided what those criteria are, but the proposal should get into a provisional state. So usually it takes one or two weeks of waiting time and comments to go from a Google Doc to a PR, and then another week to go from PR...