From YouTube: SIG Cluster Lifecycle - Cluster API 22-03-02
A
Hey everyone, welcome to the Cluster API office hours meeting. Today is March 2nd, 2022. I'm moderating today, and before we start, does anybody want to say hi or introduce themselves?
A
And... once, twice, three times — all right then. Just as a reminder, we do have a meeting etiquette: if you'd like to speak up, please use the raise-hand feature you'll find under the reactions in your Zoom window. I'll post a link to the agenda in chat, so if you want to add any topics, feel free to do so before we go into the open proposal readout section.
B
Hi everyone. I'm basically echoing a message that is being broadcast in the Kubernetes channels: take care. It is really a troubling time, with a lot of things going on on top of two years of pandemic. So let's try to be kind, and let's try to reduce the noise if possible. As a community we can do something about this, and if you have any idea — something we can do to ease the pressure — feel free to speak up.
A
Thanks. In times like these, something often overlooked is mental health, so please take care of yourself, and let's take care of each other as well. I know a lot of members of this community are 100% impacted, and it's really sad to see all around, but we need to do our best to make sure that we're all here for each other.
A
All right, with that, let's go to the open proposal readout. Oh, why is this a link? Let's go here instead. Rather than reading through it — does anybody want to touch base on anything here before we go to the agenda items?
A
All right, I don't see any hands, so let's keep going. Go ahead — sorry if I mispronounce your name.
C
Yes, yeah, that's correct. Hey folks, I was chatting with Fabrizio and Stefan on Slack yesterday or the day before regarding the release timelines in CAPI.
C
So it would be great to have a high-level overview — a roadmap kind of thing — for the near future: which points, maybe the possible release timelines in CAPI, with dates if possible, or an overall timeline, so that users or providers would have more visibility on how to prepare, and things like that. And also we discussed that this needs to be—
C
As I heard, this is in discussion, and I was told that it's better to take this up in this meeting. So I'd like to hear your opinions on that.
A
Sure, thanks for bringing this up. We haven't updated the roadmap for a while — it's in the book — and yeah, this is actually pretty outdated; this whole page should be redone. What I would like to propose is — you know, we do have a discussion on GitHub, too.
A
I haven't seen any updates on that lately, but we should try to make sure this gets updated, probably by the end of the month — so let's give it at least a month, because we have a lot of proposals out as well. In terms of timeline, I don't necessarily wish to have exact dates, but we could definitely provide quarterly dates.
A
So, just estimates — especially with what we talked about being a little bit more flexible in terms of when things should be shipped. But definitely, we do have in the contributing guide a lot of guidelines on when we cut releases and our support guarantees as of today, which probably should be updated for 1.1.
D
I think in general I agree with you, Vince. Keeping the dates flexible to quarters is definitely fine, and I think the page that we have is valuable, because at least it gives an overview of what is being done for an API version. An addition, potentially, is what Kubernetes is doing regarding patch releases, for example: for each release line that they maintain, there's a schedule with rough dates that gives you an idea.
B
Yeah — for patch releases, we are kind of settling on cutting a patch release every month, if there are PRs. And yeah, I was planning to do this sometime next week for the minor. I agree that the roadmap is important.
E
Yeah, I would like to maybe push us — I've said this before — to consider cutting smaller releases more often, especially for minor releases which aren't a new API version. Even if we're releasing new features and such, it's easier for providers to follow and easier for users to upgrade if we're releasing—
E
—you know, every couple of months instead of every quarter or every six months like we have in the past, because the longer we wait, the more changes accumulate and the harder it is to upgrade and keep up to date. But I think we should focus the roadmap on the longer term — what we're trying to accomplish in the upcoming year or semester — not necessarily on one single minor release, and try to keep the releases frequent and small.
A
Yeah, echoing that — I think the 1.1 release was actually good in terms of payload; the set of changes providers needed for upgrading was kind of limited. But yeah, I totally agree: let's get the roadmap in place in terms of quarters, and everybody can assess for themselves what they want to propose and when they think they can get it done. Then, whenever that release comes, it will include those changes.
A
That's when it's going to get released — and this, I think, goes back to the point about easing the pressure in these challenging times, so that we can all ease off a little bit more. Generally that sounds good. I think the biggest blocker today with releases is that there are a couple of things left to automate — there's still a lot of churn in creating a new minor release, specifically, which we should probably think about and work on.
A
Specifically, the test jobs are one of them. The book is another one, with the whole branches problem we were talking about. And cutting the release itself — it seems, you know, it's gotten much easier thanks to the automation.
A
All right, Alberto, go ahead — you're next.
F
Thanks, Vince. I just wanted to let everyone know that I picked up the work for sorting out the machine deployment conditions. So I put up two different PRs that are awaiting review. One is for refactoring the way we calculate the status in machine deployments and introducing conditions to signal scaling operations, the same way we are doing for machine sets. The other one is to bubble up permanent failures in infra machines; we just started a discussion there and Fabrizio dropped some feedback.
F
What I'm proposing in that PR at the moment is to keep the failure message in the infra machines for permanent failures, and then create a new condition in machine sets and machine deployments where we bubble up those permanent failures from the infra machines. I know Fabrizio has a somewhat different view, so hopefully everyone can share feedback there and we can agree on the approach.
D
Yeah, I think in general having conditions for terminal failures is definitely a plus, especially when we think about MHC, because then we can opt in to whether we want to remediate terminal conditions or not — some of them might actually need human intervention before allowing remediation. As for keeping failure reason and failure message, that's likely going to depend on our deprecation policy, so we'll likely need to deprecate them and remove them in the next API version.
B
Yeah, I have two considerations. One: I agree that we should deprecate the fields. With regards to how to model this, I'm a little bit concerned about creating a specific condition for machine failure — I'm worried about the UX. Let me make an example: today in CAPA we have a condition that is, I don't know, VPC Ready, about VPC provisioning, and if there is a terminal failure during the VPC provisioning—
B
—I do expect this condition to report the terminal failure, not another one. So basically we use the existing condition to give the user a better sense of where the terminal failure is, without adding generic conditions that are hard to understand. What I was considering, and discussing with Alberto in the PR, is that eventually we can add a new severity level to our conditions.
B
So we can have conditions that are basically false and tell you this is a fatal condition — a fatal error or whatever — and we can treat this condition as immutable, but, let me say, without adding noise or a different signal that can be confusing for the users.
D
Yeah — about the form or structure of conditions: I agree that in general, for providers, there might be use cases where you want to report the same condition but the severity is likely to be different. Say terminal, for example — if your credentials aren't valid, or if it's a case where you cannot actually do anything until there's human intervention.
A
Just to chime in a little: one consideration I would like to make is that the failure message and reason are, you know, there for historical purposes, but they are used in the chain of operations as well.
A
Today, if we do something different — the only alert, I guess, that comes to my mind, the red flag, is that now we're treating errors and failures within the same condition a little bit differently, which is a subtle change. Is it an error or a failure? It's up to the developer to decide, I guess. But is it up to us to actually tell the user that this is a failure? At the end of the day, what can you do about a failure?
A
You have to delete the machine — or what else, right? All of this to say: should we revisit the whole concept of failure altogether, instead of bubbling up these failures as well? And what are the failures that today we do bubble up but that could just go somewhere else, instead of using the failure message and failure reason? Fabrizio, and then Mike.
B
Yeah, I was answering Alberto: if we add a new severity, then we can basically update our logic that bubbles up errors to give priority to the fatal errors when rolling them up. So this is something we can do, but yeah — this is an idea; we have to spec out the details.
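A minimal sketch of the idea being discussed — assuming a hypothetical `Fatal` severity level, which does not exist in Cluster API today (its conditions only have Error/Warning/Info severities), and using simplified stand-in types rather than the real `clusterv1.Condition`:

```go
package main

import "fmt"

// Severity mirrors the spirit of Cluster API's condition severity,
// plus the hypothetical Fatal level discussed in this meeting.
type Severity int

const (
	SeverityInfo Severity = iota
	SeverityWarning
	SeverityError
	SeverityFatal // hypothetical: terminal, not automatically remediable
)

// Condition is a simplified stand-in for a status condition.
type Condition struct {
	Type     string
	Status   bool // true means the condition holds (healthy)
	Severity Severity
	Message  string
}

// summarize rolls child conditions up into one "Ready" condition,
// preferring the false condition with the highest severity, so a
// fatal error always wins over ordinary errors, warnings, and info.
func summarize(conds []Condition) Condition {
	out := Condition{Type: "Ready", Status: true}
	for _, c := range conds {
		if c.Status {
			continue // healthy conditions don't affect the summary
		}
		if out.Status || c.Severity > out.Severity {
			out = Condition{Type: "Ready", Status: false, Severity: c.Severity, Message: c.Message}
		}
	}
	return out
}

func main() {
	got := summarize([]Condition{
		{Type: "VPCReady", Status: false, Severity: SeverityError, Message: "retrying"},
		{Type: "InstanceReady", Status: false, Severity: SeverityFatal, Message: "quota exceeded"},
	})
	fmt.Println(got.Message) // prints "quota exceeded"
}
```

The point of the sketch is the ordering rule: once a fatal-severity condition exists, the roll-up surfaces it regardless of how many lower-severity errors are present.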
G
So I don't necessarily have a strong opinion around the condition stuff — I mean, the discussion that's going on here sounds good. I just wanted to share a little wisdom, maybe, from what we see at Red Hat. In our implementation of what we call the machine API, we use a failed state for some of our machines, and we've gone back—
G
—to that kind of condition or state so that our users can see what has happened, because we don't want to make an assumption about why this thing failed. If we could fix it, we'd fix it; but once it's gotten into a failed situation, we try not to make an assumption about what the user wants to do with that machine — do they want to delete it, do they not want to delete it — but give them, you know, the ability to kind of decide.
D
Yeah, I think that even if we introduce a severity, at the end of the day it's up to the user — because if we introduce severity, then we can actually make a few changes to MHC to remediate when that condition is present; and when a machine actually isn't targeted by an MHC, then we can leave the machine alone.
D
So ultimately it would be up to the users to decide what they want to do about it. As for whether users would want to do that or not, I think it's mainly going to depend on the semantics of the conditions we report. The severity is one thing, but ultimately it's going to come down to the quality of the conditions we report — such that the users are able to say:
D
"Okay, this condition means that there's no need to remediate until I do something about it — say, fix the permissions or change the credentials — so there's no need to trigger MHC, because it's just going to repeat the same thing over and over again."
F
Right, so that makes sense. Getting back to Fabrizio's example — or maybe a different example — say you scale up a machine set and suddenly 100 machines fail to be created because you have no quota on your cloud provider.
F
So, putting aside how we signal all that in the particular infra machines: how would we signal that? As a user, I want to go to my machine set and quickly see that that happened. So how would we signal it? The way that I think of it—
F
—I envisioned we would have a single condition in the machine set containing all that information, saying: hey, your 50 machines just failed because you have no quota. And I would expect to be able to realize that without looking at the specific infra machine resources. So the way I see to solve that is by having one single condition in machine sets for that semantic — but maybe Fabrizio or the others are seeing something different, like expecting to see more conditions at the machine set level as well.
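For illustration only — the condition type, reason, and message below are hypothetical, not part of the current API; this is just a sketch of what such a single roll-up condition on a MachineSet's status might look like:

```yaml
# Hypothetical roll-up condition surfacing infra-machine failures
# on the MachineSet itself, without inspecting each InfraMachine.
status:
  conditions:
  - type: MachinesCreated        # hypothetical condition type
    status: "False"
    severity: Error
    reason: QuotaExceeded        # hypothetical reason
    message: "50 of 100 machines failed to be created: cloud provider quota exceeded"
```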
A
I think we do have some summarization capabilities already. I think the suggestion was to reuse those: if we do add this critical condition, to prefer it over other kinds of errors that might be a little more minor.
A
At least that's my understanding from a couple of folks: you can already summarize conditions today, and if there is an error, it gets preferred over info — and I would assume that if it's critical, it would get preferred over an error message.
A
The one thing we need to think about, though, is when you have multiple critical conditions at the same time on the same object: which one wins, and which one is more important? We're getting into this space where — is failure defined by multiple critical conditions, or only one? There's a bunch of things we'll probably discuss, but we can continue this discussion on the issue or the PR itself.
D
Yeah, one last thing — Alberto, do you want us to take the discussion to the PR or the issue? Because there's one for terminal failures and messages.
A
Thanks, folks. Richard, you have the next topic.
H
Yeah, it's just a quick one. Yesterday there was a discussion on what CAPI looks like for managed Kubernetes, driven by an issue we're seeing in CAPA, specifically with ClusterClass. The meeting notes and the recording are in this doc just below this meeting, so if anyone's interested and couldn't make it yesterday, feel free to read those or watch the video. At a high level, three things came out of it. First, a new agenda item will be added to this meeting for provider implementers to give feedback, and Fabrizio has added that.
H
There's also going to be a proposal written about, ideally, what managed Kubernetes should look like in CAPI. Hopefully that'll be a collaboration between the — I guess — three main cloud providers that have this, with a focus on consistency across the providers and a decent UX. There are some intricacies, especially in CAPA: differences between an unmanaged and a managed cluster, and specifically the issue with ClusterClass.
F
Yeah, just a quick note — I'm not sure if I mentioned this yesterday. Apart from the main providers, I'd be interested as well in the use case for managed Kubernetes in a cloud-agnostic scenario. What we do in particular is we have a provider for the control plane that gives you hosted control planes as a service, basically, and that can be for, you know, AWS or Azure or any environment. It'd be good to take that use case into consideration as well in the discussions, if that makes sense.
F
So we have a custom implementation — we don't use the kubeadm reference implementation. What this particular implementation does is give you a control plane that's running as just pods: it's not creating infrastructure behind that. It assumes that there is some infrastructure, and that the pods for, you know, the API server and everything else are running there. That gives you an operational API server, and then we use the whole CAPI workflow to hook into that and still reuse the existing machine management controllers for AWS or Azure or any other provider.
A
Okay, yeah, that makes more sense. It does have some overlap with CAPN, the nested provider that folks have been working on, so we could probably reach out and see what the state of that project is.
A
Kevin?
I
Hey folks, so this should hopefully also be quite a short one. This is really just a pitch for a small PR that adds templating to CRS objects as they're being created in the remote cluster.
I
I actually wrote this back in October and we didn't progress it at that point, but someone came up with the same use case again this morning: we would like to be able to push remote objects into the cluster, but maybe applying small changes to them — changing the network address or something like that in a CNI object, say.
So this PR is essentially a trivial change that exposes objects through Go templating: you can stick labels on, and then pull them off and apply them into the object.
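Roughly what that looks like — a sketch, not the actual PR's code: a ClusterResourceSet-style manifest is run through Go's `text/template` with values (for example, pulled from labels) substituted before the object is applied to the workload cluster. The function and field names here are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render runs a manifest through Go templating, substituting values
// (e.g. taken from cluster labels) before the object is applied to
// the remote cluster.
func render(manifest string, values map[string]string) (string, error) {
	t, err := template.New("crs").Parse(manifest)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, values); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// A tiny CNI-ish snippet with one templated field.
	manifest := "podCIDR: {{ .podCIDR }}"
	out, _ := render(manifest, map[string]string{"podCIDR": "192.168.0.0/16"})
	fmt.Println(out) // prints "podCIDR: 192.168.0.0/16"
}
```

The appeal of the approach is that the change is small: the object is still stored and applied as before, with one templating pass inserted in between.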
I
If that's what you want to do. I know there's a lot of discussion and some ongoing work on CRS just now, so we'd like to try and see if we can land a small change like this, which would help us be able to push resources around.
J
Since CRS changes are being blocked, is this also one of those that's going to get blocked? I mean, I know we're forking so that we can ship our product — should the entirety of CRS just be forked and become its own project?
A
I wouldn't use the word "blocked" — it's still an experiment, and there is a proposal for the add-ons.
A
Where is it... here. You know, it's bringing up some other discussions, with regards to the Runtime SDK work as well, and the managed topologies.
A
There's a generic stance from the SIG — and we'll probably talk about it at the SIG level at next week's meeting as well — because one of the project goals is also to not recreate things that are already out there. I don't necessarily want to block anything; I'm just stating something that is already in here, in the goals and non-goals of the overall project. In terms of forking—
A
—you know, I cannot stop anybody from forking, but we need to understand the lifecycle of these features. Do any other maintainers have opinions here?
E
Yeah, thanks. Just to clarify — I guess, according to what Vincent and Fabrizio said, the PR and changes to CRS are not blocked; they're just temporarily on hold while we discuss and evaluate the proposal. I checked after our conversation last week: as far as I can tell, there wasn't a proposal that was merged and approved into the project to add this functionality to CRS. So I think it's still on the table.
E
We
just
I
want
to
you,
know,
think
before
we
act,
but
I
think
the
right
outcome
for
this.
I
I
don't
think
we
should.
You
know,
block
changes
to
crs.
E
If
we
don't
have
a
clear
like
path
forward,
I
think
it
should
either
be
like
we
decide
that
we
don't
want
to
move
stars
forward,
and
then
we
give
a
clear
message
to
the
community
by
deprecating
it
or
we
decide
that
we
want
to
pursue
the
add-on
story,
but
we
still
want
to
allow
changes
to
cluster
resource
set
to
evolve
the
current
solution,
while
we,
you
know,
propose
an
alternative
change.
E
Yeah, I just wanted to answer that. I think no one's debating the value of CRS — everyone agrees that we need something like it to do what it does, and we use it, you know, like in CAPZ, to apply resources. I think the question is more—
E
Do
we
want
to
take
this
on
as
part
of
the
cluster
api
project
and
increase
the
scope
of
crs,
because
we
think
there's
overlap
with
other
solutions
in
the
community
and
we
don't
want
to,
like
you
know,
reinvent
the
wheel
if
there's
something
out
there
already,
that
can
solve
our
solution
and
when
crs
was
first
introduced,
it
was
always
supposed
to
be
as
a
stop
gap
solution.
While
we
figured
out
our
add-on
story
and
it
was
supposed
to
be
a
very
limited
scope,
but
now
we're
proposing
to
expand
that
scope
by
adding
other
functionality.
E
That's
not
part
of
the
initial
scope,
so
I
guess
it's
more
like
we're
trying
to
decide
whether
this
is
an
acceptable
scope
increase.
If
it's
you
know
still
like
temporary,
we
might
eventually
replace
it
with
something
else
or
if
this
is
kind
of
the
tipping
point.
Where
we
need
to
say
hey
like
we
can't,
you
know,
keep
increasing
the
scope
in
cluster
api
because
we
want
to
invest
in
these
other
projects.
That
can
do
this,
for
us
hope
that
clarifies.
A
And to add to that: we do understand there is urgency on your folks' side to use this and, you know, to expand its scope. My proposal for a path forward — I guess we're also going into the next topic — would be that CRS gets its own proposal.
A
Let's discuss those changes before we make them, and then we can decide which way to go forward. As we discussed, the changes to CRS specifically and the add-on story can go alongside each other and, you know, make everybody happy: that way we have a short-term solution, we make sure we can address the concerns that have been brought up, with feature parity with what you're looking for, and we also get it on the record within Cluster API.
K
Yeah, I just wanted to add: if we are contemplating the possibility of expanding CRS in the meantime, while we work on the add-ons story, I think it would be interesting to define how much we are willing to expand CRS. Because at some point, if you expand CRS too much, it starts overlapping with the add-ons story. We need to find a trade-off there before we start adding more and more proposals to expand CRS.
K
What
do
we
need
from
crs
that
we
don't
have
yet
while
we
work
on
add-ons
right
and
once
we
once,
we
define
the
scope
of
what
we
need
from
crs
that
we
don't
have,
then
we
can
work
on
those
proposals
and
see
what
we
want
to
add,
instead
of
just
like
adding
a
million
proposals
that,
at
the
end,
are
gonna,
be
just
expanding
crs
into
what
we're
trying
the
island's
story
to
be.
K
That's
totally
fair,
but
I
think
we
we
it's
a
trade-off
right.
I
understand
what
you
mean
and
it's
definitely
experimental
and
experimenting
it's
about
learning,
100
agree,
but
any
piece
of
code
that
we
put
in
the
code
base
needs
needs
to
be
maintained
and
tested.
The
longer
we
take
to
include
the
add-ons
story,
implementation
and
deprecate
crs
and
the
more
we
expand
crs,
the
more
overhead
we
are
adding
to
the
project
for
something
that
we
already
know.
We
want
to
deprecate,
so
I
100
agree
you,
but
there
needs
to
be.
A
I think I hear both sides of the story. One qualification I would like to make is that the add-on story is not trying to tackle the same things CRS is trying to tackle right now. You can see the add-on story as a hub — the proposal does a better job of describing it than I'm doing — but it should be a hub where you can orchestrate these add-ons; it doesn't actually, and correct me if I'm wrong, install anything, it just calls something that then installs it. So the reason I was proposing an amendment to the proposal is that potentially CRS could become one of the ways to install these add-ons.
A
But
if
the
scope
needed
from
from
this
community
it
gets
wider
and
wider,
crs
should
become
its
own
project
like
that
that
we
can
all
like
agree
on,
because
that's
in
the
outside
of
the
scope
of
cluster
api,
that
said,
kubernetes
itself
is
like
removing
things
from
entry
and
pushing
that
out
a
tree
and
by
pushing
them
out
of
tree.
It's
pushing
into
cluster
api
to
manage
and
that's
why
add-ons
like
it's
becoming
a
story
that
we
need
to
tackle.
A
So
given
these
things
are
still
in
flight,
like
again,
we
could
put
it
to
vote
or
the
discussion
on
the
sig
list,
but
if
we
do
have
an
amendment
to
the
proposal
that
either
expands
the
goals
and
non-goals
section
and
also
expands
the
features
that
folks
want
to
see,
and
we
discuss
it
on
the
proposal
itself,
while
we
work
on
the
hub,
I
don't
necessarily
think
like
the
work
has
to
be
thrown
away,
if,
like
folks
from
weworks,
want
to
continue
to
work
on
crs,
but
then
we
can
discuss.
L
So I have a question. We often face such issues — I mean, this is about separation of responsibility, right? It would be good if we had a clear definition of what a CRS should be and what Cluster API is supposed to do, because one of the recurring things is: how much should we pile on at the end of cluster creation and install, and what is meant by a ready cluster?
L
So
for
some
of
our
use
cases
ready
cluster
is
when
the
cni
is
installed
and
the
cloud
provider
is
installed
beyond
which
everything
else
is
more
of
an
application
on
the
cluster
which
is
running,
and
it
is
not
in
crss
scope
really.
So
it
will
be
good
if
we
can
understand
what
are
the
things.
If
I
can
understand
what
else
will
fall
into
the
cr
scope,
because
why
should
be
so?
A
—at all, yeah. So this is outlined in this proposal as it stands today — and this is why we put the PR on hold last week: we don't today support ongoing reconciliation of resources, and it's stated in here. We want our proposals to be updated along with the code, so that they also serve as documentation at the end of the day. That's why the ask — we can expand on the scope, we can decide.
A
All
sorts
of
things
in
terms
of
like
oneness
needs
to
be
installed
in
a
cluster.
Unfortunately,
there
is
not
a
finite
set
of
like
things
that
you
know
defined
as
a
cni
or
cpi.
These
can
be
ever
increasing
and
we
cannot
add
specific
support
for
all
of
them.
So
that's
why,
like
the
add-on
management
story,
has
to
be
a
little
bit
more
generic,
but
a
csi
cni
cbi,
become
out
of
tree.
A
—the biggest blocker is going to be upgrades, because when you want to upgrade the cluster itself, you want to upgrade the CNI, CPI and CSI as well. Now, CNI — yeah, let's set it aside for a second, but one hundred percent you want to upgrade CSI and CPI. And there are Helm charts today, that I've seen, that the external CPI implementations publish, and they have clear compatibility requirements with Kubernetes, right?
L
And so, apart from this, I'm just looking for something — apart from a CNI, CPI and CSI — which would also make sense for a ClusterResourceSet. I mean, isn't it a finite number, as in a fixed set of resources, or is it going to be an ever-increasing set of resources?
A
Just to capture an action item: folks from Weaveworks, are you okay to proceed with a proposal amendment covering what the new goals and non-goals of CRS should be, and the implementation details as well of the features you would like to add? Then we could discuss those within the proposal first.
K
Does this mean that we are then open to the possibility of expanding CRS? Because it seems that's the decision we're making — we want to then decide case by case using proposals. But as of last week, I think we hadn't decided that yet, and it seems like... I just want to make sure we clarify that we are deciding it is okay to expand CRS.
A
I think the decision will come when we see what comes out of the proposal — and folks that want to maintain it as well. With this proposal and its enlarged scope will come great responsibility, which is to maintain the CRS experiment today. And then, if we disagree on the scope creep we're signing up for, we can definitely decide to move CRS out of tree in the future.
A
Well, I mean, the only difference is that we do have what CRS should be today in the repo, and we should update it with what we think it should have, and then discuss on the proposal. I don't think we can make a decision right now if we don't see what kind of scope we're signing up for.
J
Oh — you said you want to know what we're taking on. I was just wondering, as I was curious, what your definition of "we" is in that statement. "We" as the core maintainers? "We" as the community? I'm not sure.
A
Yeah, it's always the community. I mean, we as a community — no, it's definitely the community and the project that are taking on this expanded scope. With expanded scope, usually we ask for sub-project maintainers to, you know, step up and manage the experiment specifically.
A
Fabrizio, Alberto and Cecile — are you all okay with the next steps from here, or do you have anything to add? Go ahead, Sue.
E
Yeah, the only thing I want to clarify — or, I guess, remind everyone — is that meetings are really great for discussing ideas and pros and cons, but we shouldn't form a consensus in this setting, just to try to be inclusive of people who can't be in the office hours because of the time difference.
E
So
it's
better
to
make
decisions
like
these
npr's,
like
over
acing
discussion,
which
is
why
we're
asking
for
a
pr
but
we're
you
know
open
to
discussing
it
and
seeing
if
the
scope
increases
appropriate
for
the
project
for
the
time
being
or
not.
So
this
is
not
a
decision.
This
is
just
let's
move
this
forward
and
discuss
in
the
pr.
A
Perfect, thank you. Let's move on — in the interest of time — Christian: cluster API state metrics.
M
Yeah, hey — so we had a discussion this last week about the possibility of maybe donating the cluster API state metrics project to the SIG.
M
For anyone who isn't aware, I think about two weeks ago I did a short demo in the office hours. So from our side, we are willing to contribute it — that was one of the reasons why we even published it to GitHub, if it fits the need here.
B
Fabrizio — right. So, first of all, your list of items seems okay to me at first sight. I just commented on Vince's suggestion: if this is fine for you, since you are donating the project, we can also consider incubating the project inside the CAPI repo, as an experiment or whatever, or have it as a separate controller.
B
The main benefit I see from this is that, doing it this way, we can probably iterate faster at the beginning, while we figure out what we want for v1beta1 and start settling on the first set of conditions.
M
Yeah, so I think we're both fine with both possibilities. So your suggestion would be to have a subdirectory in the core cluster API repository where the code would live, and the contribution would be a pull request in this case? Or what would the next steps be — to have a small proposal or something? Yeah, or here into the experimental folder, as you showed just now, yeah.
A
Yeah, a small proposal that outlines what the features are, and the goals and non-goals for the metrics, would be great — and then, yeah, an experimental folder for the metrics themselves. I do agree that this would be the quickest way to do it.
A
It also provides immediate value for folks who want to test it out, because at the end of the day this is just a flag you have to set to enable the state metrics, and then you can just use it, which is really great.
A
Okay — once, twice, three times. Okay, let's put it on record in the issue itself, and then we can move on from there.
A
All right, Sedef — the next topic.
N
Yeah, so I was doing CSI migration tests, and I have some initial findings that I want to share with you all.
N
I'm using a CRS. The control plane is upgraded without an issue, but I noticed that the worker nodes are failing due to draining issues, because volumes are attached to them, so machine deployment upgrades are stuck. There may be an issue in the communication between the 1.22 kubelet and the 1.23 API server. So I tried it the other way: I upgraded the machine deployments first and then the control plane. I know that this is not following the Kubernetes version skew policy, but that worked.
D
Yeah — I think if this is confirmed, then this is problematic. I know that there was a flag that was needed.
N
So I don't fall back to the in-tree providers — I am using the external one as CCM in my test. Oh.
D
Yeah — I mean, for the in-tree storage one, there was a flag that was introduced a while ago — a few releases ago, sorry — to actually fall back to the in-tree storage driver.
A
Absolutely — and this is a good find. If there is anything that comes out of this, it's good documentation on how to actually do these upgrades, and potentially we can check whether we can put some checks in place inside of Cluster API before you migrate from in-tree to out-of-tree. I don't know if it's doable, but, you know, something to think about — but definitely good work here.
A
All right, we're at time. Thanks for the great discussions, and I hope you all have a great week. Bye, folks!