Description
Meeting notes: https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit
A
All right, welcome everyone. Today is August 31st, 2022, and this is the Cluster API community meeting. Cluster API is a Kubernetes SIG project, and as such we follow the Kubernetes SIGs community guidelines, which essentially means: please raise your hand using the chat functionality or the participants panel if you'd like to talk, and please be kind to each other. So with that, we will welcome any new attendees to this meeting.
A
So if you haven't come to this meeting before, or maybe you haven't spoken to the group before, and you would like to introduce yourself and say hi, we would like to take this opportunity to let you do so. So please feel free to unmute and introduce yourself.
C
If B wants to go ahead of me? I could maybe briefly introduce myself, yeah. Please do; yeah, so I work at SUSE, which not everybody will be familiar with. We are a provider of open source solutions, and we're probably most known for our Linux products, but we also have Kubernetes and edge solutions. I work in product management for our edge business unit, and I focus on the telco industry, and I have been playing around with Cluster API recently.
A
There are a couple of Alexes around, but not unlikely.
B
Yeah, thank you. This is Abhijit. I'm right now working at VMware, in a team called WCP, where we provide Kubernetes as a service on vSphere. I'm new to Kubernetes, and I'm trying to learn and grasp as much as possible. Thank you for giving me this opportunity to interact with the team and with everyone here.
B
I can go next, maybe. Hi everyone, this is Giuseppe. I work at NVIDIA, and this is the first time I have attended this meeting; thanks a lot for hosting. We are working on the Cluster APIs, and we would like to know a little bit more, and this is a great opportunity for us to learn and gather additional information. Thanks.
A
All right, I am not seeing any more hands or unmuting, so in three, two, one. All right, let's move on to the open proposal readouts. Is there anyone who would like to speak on any of these proposals? Please raise your hand and I'll call on you.
A
All right, I am not seeing any hands go up, so we will move on to the discussion topics. Stefan, I think you've got a couple to go through here, so I will hand the mic over to you.
C
Yeah, I'll try to make it quick. The first one: I just noticed that we didn't set an end-of-life date for 1.1. We document the policy, which is basically that as soon as 1.2.0 is released, 1.1.x is out of support. Essentially I just want to bring up the question: should we make one more 1.1 release for some reason, or should we just declare it out of support right now? That's it.
C
We have two cherry-picks on release-1.1 which are not yet released. One is the ClusterCacheTracker fix, and the other one is just another small fix somewhere, which is probably not super important as far as I know. So, yeah, I just want your opinions in either direction. Personally I don't have a strong opinion: if you want to go with the policy, there won't be another release, but I also don't have any problem with making one more just to ship those fixes.
A
All right, so I see Fabrizio in chat saying plus one for another patch, and Vince saying plus one for another patch and then EOL.
C
Okay, sounds good. I'll take that one, and yeah, I'll update the documentation and so on. I think the next monthly release for 1.2 should be roughly in two weeks; maybe we just do the 1.1.2 then and we're done.
C
Okay, good. Then the next one, yeah, Kubernetes support. Essentially, I would propose that we support 1.25 in 1.2 as well. We always have the problem that we release Cluster API at some point, then Kubernetes does its release, and if we don't cherry-pick any kind of support, users would have to wait for the next Cluster API release to get 1.25 support. 1.25 was released last week; Cluster API was released a month ago, which probably means either:
C
We cherry-pick the 1.25 support into 1.2, or we have to wait, I guess, two, three, four months; I don't know the exact release date, but it will be relatively long. It looks like the cherry-pick is mostly just adjusting the test jobs to verify that it actually works, and it looks like it should already work. So the question here is: any opinions about supporting 1.25 in 1.2, pro or con?
C
Let me know. The last time, we cherry-picked 1.24 support back into 1.1, so we have kind of a precedent; that's how we do it.
D
I think that, given that we are basically talking about cherry-picking a change to the CI configuration, I'm definitely plus one. That means that 1.2 is already tested with 1.25 in CI, so it should work. What we have to do is to move the test from this CI to the stable release. Is that right?
C
So, definitely plus one. I mean, I can't guarantee that nothing breaks, but we're actually already testing a newer commit on the Kubernetes release branch than the one which was tagged 1.25.0, so it should be fine. I mean, the alternative is that everyone has to wait until 1.3 to get 1.25 support, official 1.25 support.
A
Yeah, I mean, I think it's nice to be able to provide this where we can, you know. So I guess I'm plus one as well.
A
All right, any other comments on this, or should we move to the next item?
C
Next one, okay. So can you maybe open that link to the book? The second one.
C
We were talking about this internally and recently noticed that we currently don't have end-to-end, or let's say no easy end-to-end, test coverage to verify that that webhook in an infra provider is implemented correctly. So what I linked in the meeting notes, which is the second link that you opened, is a PR to extend our current e2e test. We already have an e2e test which does something very similar, and that PR extends it to also, well, it's a detail.
C
But essentially we are also testing that we are able to rotate infrastructure machine templates, and that's what the test is now testing. So why am I mentioning it? One, just a reminder for whoever is bumping to Cluster API 1.2:
C
If ClusterClass should work, please remember to make that webhook adjustment. And second, I would also propose to cherry-pick that PR back to release-1.2, which essentially makes it easier for infra providers which are picking up 1.2 to actually verify that 1.2 works correctly. That's also something that is not strictly in our policy as we have it documented today, but we have a history of, let's say, cherry-picking changes in the test framework to make it easier for other providers to pick up test improvements.
A
Cool, thank you Fabrizio, and thank you Stefan for bringing all those topics. So I have the next one here. This is kind of a public service announcement to all of the providers who are creating controllers for Cluster API. There is a feature in the cluster autoscaler called balance similar node groups.
A
So you could imagine a situation where you have multiple machine deployments in different zones, maybe, and you would like the autoscaler to expand them in an even manner; so instead of choosing one, or choosing at random, it would try to balance nodes between the multiple sets that it has. Now, I'm kind of making a call to the providers out there, because there is a piece of code in the autoscaler that allows us to basically ignore a number of well-known labels, and I've been trying to add these.
A
You know, IBM Cloud has a label that they use with their cloud controller managers, and they also have a different label that they use with their CSI drivers. So it's very common for providers to create their own labels to describe persistent-volume and node zone awareness.
A
This is really common in the CSI drivers. So what I'd like to ask is this: if any providers out there are using labels that describe things like zone awareness, or worker IDs, or similar things like VPC IDs, these labels can cause problems for the cluster autoscaler, because it then thinks the groups are not the same. So, for example, you could imagine that you have a machine deployment in zone A on AWS, and those nodes will get the topology label for the CSI drivers that are there.
A
That
are
there
and
if
I
then
created
a
second
machine
deployment
in
a
different
zone,
it
would
also
have
that
label,
but
it
would
have
it
for
zone
b.
Perhaps
so,
when
the
cluster
auto
scaler
looks
at
these
machine
deployments,
it
thinks
they're
not
the
same
because
they're
deployed
in
different
zones-
and
this
is
not
always
the
behavior
that
we
want,
especially
when
balancing
because
the
zones-
it's
not
a
topological
difference
in
terms
of
the
capacity
of
the
machines,
it's
just
a
difference
in
where
they're
located.
A
So
I
would
like
to
ask
you
know
everybody
to
kind
of
take
a
look
at
this.
If
you
know
that
your
provider
uses,
you
know,
labels
that
that
could
cause
problems
here.
I
have
also
created
a
pr
to
update
this,
so
you
could
see
what
it
you
know
what
it
looks
like
I'm
as
I
come
across
these
I've
been
trying
to
add
more
so
you
know
we
found
one
for
gke
and
I
think
we
found
another
one
on
ibm.
A
So
basically,
you
know
just
if
you,
if
you
come
across
this
or
you
think
about
it,
while
you're
working
on
your
provider,
please
think
about
making
a
patch
to
the
cluster
auto
scaler
or
reaching
out
to
me.
If
you
know
you
know
what
the
label
is,
I
can
make
a
patch
pretty
quickly,
but
this
will
make
sure
that
you
know
people
who
are
using
the
cluster
auto
scaler
with
cluster
api
kind
of
have
the
best
experience
when
it
comes
to
using
these
features.
E
Yeah, this seems like a really interesting spike. Just looking at that set of labels that we want to identify, to me, is an architectural smell that this is not going to be a sustainable approach, because this list will continue to grow and we'll always miss something.
E
At this point, I mean, I haven't been involved up to this point, so it's a little bit unfair for me to insert my two cents, but I'm just curious if there's an alternate way of accomplishing the same thing where we don't have to anticipate and know about these specific labels. I mean, I'm sure we've all observed that these things can get introduced very rapidly in different providers for different sorts of project spikes, and they can be really hard to anticipate, and suddenly a cluster that's working...
A
Right, and maybe this is better saved for the deep dive session, but there is a flag that people can use with the cluster autoscaler to give these labels when they launch it, so they don't need to be in the code. I was kind of bringing the public service announcement here just to kind of say, hey:
A
This
is
a
way
we
can
make
it
better
if
people
know
that
they
have
if
they
have
these
labels
that
aren't
going
away
or
or
or
whatever
but
yeah.
I
can
certainly
follow
up
by
doing
like
a
30
minute
deep
dive
on
this
yeah.
A
So, let's see. I see Stefan saying: so essentially the multi-AZ rebalancing that we're missing in machine deployments. Yeah, I mean, you could do this today by manipulating the command-line parameters to the autoscaler, so you can make it work today.
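To make the command-line approach concrete, this is a minimal sketch of the autoscaler flags being discussed, assuming an autoscaler version that supports them; the ignored label values shown are illustrative examples of provider-specific labels, not an exhaustive or authoritative list:

```shell
# Hypothetical cluster-autoscaler invocation for a Cluster API management cluster.
# --balance-similar-node-groups enables even scaling across equivalent MachineDeployments;
# each --balancing-ignore-label tells the comparator to disregard a provider-specific
# label when deciding whether two node groups are "similar".
cluster-autoscaler \
  --cloud-provider=clusterapi \
  --balance-similar-node-groups=true \
  --balancing-ignore-label=ibm-cloud.kubernetes.io/worker-id \
  --balancing-ignore-label=example.provider.io/zone-hint
```

Check your own provider's node labels before relying on a list like this; the point of the PSA is that each provider knows its own labels best.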
A
But
my
hope
here
is
to
kind
of
make
the
experience
a
little
more
out
of
the
box
for
people,
so
they
don't
have
to
configure
quite
as
much
and
and
cluster
api
is
kind
of
a
unique
provider
in
this
respect
in
the
upstream
or
in
the
auto
scalar
repo,
because
the
other
providers
usually
worry
about
like
one
label
or
something
like
for
aws.
This
is
like
this
is
very
simple.
A
So maybe another way to look at this, too, is to just plumb through the other providers' node groups and see what they're doing. Chris has a question in chat: is topology.kubernetes.io/zone already being ignored? Yes, it is, and if you look at the code piece that I linked in the first link here, you can see that this function, which creates the ignored labels, has these basic ignored labels.
A
Those
are
where
all
the
well-known
labels
are
are
put,
so
the
ones
that
come
from
the
core
of
kubernetes
are
all
kind
of
in
the
basic
ignored
labels.
These
you
know
this
gets
into
a
discussion
of
kind
of
like
how
the
csi
drivers
were
created
and
the
the
timing
that
people
started
to
create
their
own
labels
and
whatnot.
So
you
know
some
of
this
has
to
do
with
just
kind
of
organic
growth
and
other
parts
of
it.
People
did
agree
on
using
the
well-known
label,
so
we
do
cover
those.
A
This
is
more
kind
of
like
looking
at
specific
behavior
on
each
infrastructure
platform.
A
All
righty,
I'm
not
seeing
any
hands
go
up.
So
let's
move
along
jonathan's
got
a
psa
about
the
cluster
api
visualizer.
I'm
very
curious
about
this.
Take
it
away
jonathan.
F
Hey, yeah, I just wanted to share that I made a v1 release of the Cluster API visualizer app. The main change is that I've added cluster resource sets and templates to the visualization graph, and I'm working on adding it to the CAPI Tilt setup as well; an older version is already on there, but I'm working to get it updated too. So...
A
Yeah, sure, let me see if I can give you the screen quickly.
A
Does anyone know where that is? Do I have to make him a host, or how does that work?
F
All right, yeah. So on the main screen of the app, for anyone who hasn't seen this before, it shows all the workload clusters you have managed from your management cluster. You can see the state, and if you click into here, you can see all the different resources; this is just a web UI wrapper for the clusterctl describe command, so you can click on all the resources to open them up.
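Since the visualizer is described as a wrapper around clusterctl, the equivalent CLI view can be reproduced directly. A minimal sketch, assuming a management cluster in the current kubeconfig context; the cluster name and namespace are placeholders:

```shell
# Print the object tree for a workload cluster, as the visualizer renders it.
clusterctl describe cluster my-cluster --namespace default

# Include condition details, similar to inspecting a node in the UI.
clusterctl describe cluster my-cluster --namespace default --show-conditions all
```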
F
Now you can also search for fields, and for things that aren't finished loading or aren't ready, you can also hover to see the reason on the chip.
F
Oh-
and
I
also
forgot
to
say
I
added
machine
pools
as
well
so
before
you
can.
I
think
you
can
only
see
the
machine
pool,
but
now
you
can
see
in
this
case
we
have
a
azure
machine
pool
as
well.
F
But
yeah
this
is
so
what
I
added
for
the
v1
release.
Hopefully
it's
helpful
for
people
getting
into
cluster
api
and
existing
developers.
A
All right, cool. So let's see here if I can just go back to sharing my screen quickly. I think we're just about done here, but we've got a few provider updates. First provider, CAPOCI; someone take it away.
B
Yep, I just wanted to announce 0.5.0 is out. It's got a few small fixes in it. One is basically the tagging thing that I brought up last week, where it would never update if you auto-tag your compartments.
B
I just had a real quick question, and maybe I've missed this: machine pools are still experimental. Is there any idea of when that might move past experimental?
D
I think that, first, we have to complete the MachinePool Machine implementation, which is pending. Then it is really up to the community to basically declare when a feature is mature enough to move on with graduation.
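For reference, the experimental status mentioned here is controlled by a feature gate, so machine pools have to be enabled explicitly when initializing a management cluster. A minimal sketch, assuming clusterctl; the docker provider is just an example choice:

```shell
# MachinePool support sits behind the EXP_MACHINE_POOL feature gate while experimental.
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure docker
```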
A
Prakash, please go ahead. Yeah.
G
A
Does anyone remember that conversation, or have an update that might help Prakash? Let's look and see what happened in the last meeting here.
A
Do you remember when that was, Prakash?
G
See, the reason was the kubeadm folks: they were trying to see if they can do something declarative so that it's usable by Cluster API, and the answer there was, okay, we don't have any definite say on Cluster API on the mutability issue. So until we have that, we really can't move to a declarative way of allowing kubeadm to be used by external users.
A
Yeah
that
does
not
sound
familiar
to
me
does.
Does
anyone
else
have
a
comment,
or
does
this
sound
familiar
to
them?.
B
Was
this
discussed
in
this
meeting
or
in
the
cluster
api
coupe
admin
meeting?
Because
as
far
as
I
understood
it,
he
said
it
was
discussed
in
the
good
admin
meeting.
Not
this
one.
A
Perhaps
it
would
be,
perhaps
it
would
be
best
to
kind
of
get
the
information
from
the
cube
baby
dm
meeting,
and
then
we
can
link
it
here
and
discuss
it.
Would
you
be
willing
to
make
a
an
item
for
next
week's
meeting
prakash
where
you
could
prepare
some
like
link
to
their
agenda,
and
so
we
could
study
it
a
little
bit
and
then
have
a.
A
Okay.
Does
anyone
else
have
an
ad
hoc
topic
or
something
they'd
like
to
bring
out
yeah
prakash,
go
ahead.
G
A
Okay, I am not seeing any hands go up, so thanks everyone for coming out, and I guess we'll see you next time.