A
Okay, welcome everybody. Today is Wednesday, the 14th of July 2021, and this is the Cluster API Kubernetes SIG project meeting. We are following the Kubernetes SIG community meeting standards here, so that generally means be kind to your fellows and treat others as you would like to be treated, and please raise your hand if you would like to talk.
A
So I guess we'll go right into the PSAs. Cecil, looks like you've got the only one.
A
All right, sounds good. I see Fabrizio is quickly adding another one. Do you want to go ahead, Fabrizio? Hey, you're sharing your calendar, Mike. Oh wait, am I sharing the wrong screen or something?
C
Yeah, I just want to give a PSA that castle cutter by copper store just merged an amazing PR from John, and also...
A
It looks like there are no other PSAs at the moment, so I guess we'll go into release-blocking issues.
A
Update: so we renamed failure domains to topologies. Does anyone want to talk about this? You seen, I don't know, are you here?
D
Yes, yeah, so I think we're just queuing this up for the next release of Cluster API. So it's not for v0.4.x; it's just to more closely align with the changes in Kubernetes around topology, and we forgot to do it before the last release went out, so we're queuing it up as release-blocking for the next one.
B
Yeah, I'm just wondering if we should really be marking issues as release-blocking when they are breaking changes for the next API version, since we're not going to be tracking that anytime soon, and whether instead we should reserve that for changes that need to go into the current release, like, for example, critical bugs and things like that.
A
Cool, thanks, Susie. All righty. So let's see if we have any open proposals that need to be read out here.
A
So, ClusterClass and managed topologies: that one merged. The spot instance proposal update with the termination handler, I guess, just needs reviews. Is anybody here who'd like to talk about that?
A
All right, I'm not seeing any hands, but I guess if folks are interested, maybe they could give some reviews on this and just add some feedback. I see this is Alex's, so yeah, if anybody could give a review, that'd be great.
A
The next one is me: the opt-in autoscaling. I have an item about this later; I'm just looking for some reviews on this one. I've updated the enhancement with the latest guidance, or, I guess, the responses that were given inside the PR.
A
So mainly the big changes are: I changed the structure for the way the code looks, based on some suggestions that Fabrizio gave, kind of making it look more like the resource requests that are done in other parts of Kubernetes; I tried to remove the language about updating MachineSets and MachineDeployments; and a few other minor changes, I think just cleaning things up. So yeah, any more reviews here would be great. I'd like to get a PR for this soon, so if anyone could give a review, that'd be awesome.
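For readers following along, the "resource requests"-style shape being discussed might look roughly like the sketch below: node capacity declared up front so the autoscaler can size a node group that currently has zero machines. The annotation keys and struct fields here are illustrative assumptions, not the proposal's final API.

```go
package main

import "fmt"

// NodeCapacity is a hypothetical stand-in for the capacity a machine
// template could declare, mirroring how container resource requests are
// written elsewhere in Kubernetes. Field names are illustrative only.
type NodeCapacity struct {
	CPU    string // e.g. "4"
	Memory string // e.g. "16G"
}

// capacityFromAnnotations extracts the declared capacity from a machine
// template's annotations; empty strings mean the user has not opted in.
// The annotation keys are an assumption, not confirmed API.
func capacityFromAnnotations(ann map[string]string) NodeCapacity {
	return NodeCapacity{
		CPU:    ann["capacity.cluster-autoscaler.kubernetes.io/cpu"],
		Memory: ann["capacity.cluster-autoscaler.kubernetes.io/memory"],
	}
}

func main() {
	ann := map[string]string{
		"capacity.cluster-autoscaler.kubernetes.io/cpu":    "4",
		"capacity.cluster-autoscaler.kubernetes.io/memory": "16G",
	}
	c := capacityFromAnnotations(ann)
	fmt.Println(c.CPU, c.Memory)
}
```

The point of the annotation-based, opt-in shape is that providers that never scale to zero pay no API cost; only node groups that declare capacity participate.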
A
Let's see what's next. So, the load balancer provider: I guess Joel and Nadir, David, anyone want to talk about this one?
A
Okay, I guess we got a new one here: IPAM integration. I don't know, Jacob, are you here?
D
I don't think there's anything to... I don't think there's been anything since last week; I haven't seen any comments in the document. So do take a look, I think.
F
Yeah, so I had received some review comments and I addressed them, so it's ready for another round of reviews. I mean, the review wasn't completed in the first round either, so yeah, I'm just looking for more reviews on this.
A
Alrighty, so back to the regularly scheduled agenda: discussion topics. The first one was me, but I pretty much said what I needed to; I'm just looking for some more reviews on the autoscaler scale-from-zero. And, tangential to that, if there's anyone here who would like to become more involved with the cluster autoscaler, I think we're going to need probably another maintainer or two on the autoscaler side.
A
So if you're curious about that kind of work, please reach out to me, and we can see about getting you set up as a reviewer there, or whatever it takes. The next one is from Max Reninc (sorry if I'm mispronouncing that): volume detachment and CAPI lifecycle. Do you want to take it away, Max?
G
Yeah, sure. So we basically have hit an issue on a regular basis: Cluster API itself doesn't wait for volumes to unbind when it actually rolls nodes, which causes issues with certain CSI providers, in our case Pure PSO, NetApp Trident, and the vSphere CSI provider, because they all expect the unbind to happen.
G
The question is how to handle that in CAPI, because I think that's not provider-specific; it will happen to everyone. It is CSI-provider-specific, though: some expect it and some don't, and mostly the on-prem providers expect it.
A
All right, Nadir, go ahead.
D
It's more of a question, because I don't fully understand the lifecycle: is unbinding separate from pod draining? Is it handled right in this? Yes?
G
So, basically, after the pod evicts, the CSI controller will try to clean that up, but CAPI is usually faster than that.
D
Okay, go ahead. Yeah, my next question: is there a way of looking at what volumes are bound for a particular...
G
And I think I linked to a patch that folks from Spectro Cloud have made to mitigate that issue. So, the link from June: it just checks whether volumes are still bound on the node and waits for that count to be zero.
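The check described here could be sketched roughly as below: inspect the node's reported attached volumes and hold off on machine deletion until the list is empty. The types are simplified stand-ins for the corev1 Node API (whose `Status.VolumesAttached` list a real controller would consult via client-go), not the actual Kubernetes structs.

```go
package main

import "fmt"

// AttachedVolume and Node mirror the shape of the corev1 types a real
// Machine controller would read; they are simplified stand-ins here.
type AttachedVolume struct {
	Name string // e.g. "kubernetes.io/csi/<driver>^<volume-id>"
}

type NodeStatus struct {
	VolumesAttached []AttachedVolume
}

type Node struct {
	Name   string
	Status NodeStatus
}

// nodeHasAttachedVolumes reports whether the node still lists any attached
// volumes. A pre-delete check could requeue the Machine until this returns
// false, giving the CSI controller time to finish detaching.
func nodeHasAttachedVolumes(n *Node) bool {
	return len(n.Status.VolumesAttached) > 0
}

func main() {
	n := &Node{
		Name: "worker-0",
		Status: NodeStatus{
			VolumesAttached: []AttachedVolume{{Name: "kubernetes.io/csi/example^vol-1"}},
		},
	}
	fmt.Println(nodeHasAttachedVolumes(n)) // volume still attached: wait

	n.Status.VolumesAttached = nil
	fmt.Println(nodeHasAttachedVolumes(n)) // detached: safe to delete the machine
}
```

In a real controller this would be a requeue-until-false loop rather than a one-shot check, typically bounded by a timeout so a stuck CSI driver cannot block deletion forever.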
A
Max, go ahead, yeah.
G
Yeah, I mean, there are a few possible ways we could solve this: we could have the machine lifecycle hooks solve it, or we could directly tackle it in CAPI. The question is: what is the preferred way to solve that?
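For context, the machine lifecycle hooks option works by letting an external controller register hook annotations on a Machine that pause the machine controller at a given deletion phase. A minimal sketch of such a check follows; the annotation prefix is written from memory of the Cluster API machine-deletion-phase-hooks proposal, so treat it as an assumption.

```go
package main

import (
	"fmt"
	"strings"
)

// preTerminateHookPrefix is the annotation prefix an owning controller uses
// to register a pre-terminate deletion hook on a Machine (assumed spelling).
const preTerminateHookPrefix = "pre-terminate.delete.hook.machine.cluster.x-k8s.io/"

// hasPreTerminateHooks reports whether any hook is still registered on the
// machine's annotations; while one is present, the machine controller waits
// before releasing the underlying infrastructure.
func hasPreTerminateHooks(annotations map[string]string) bool {
	for k := range annotations {
		if strings.HasPrefix(k, preTerminateHookPrefix) {
			return true
		}
	}
	return false
}

func main() {
	ann := map[string]string{
		preTerminateHookPrefix + "wait-for-volume-detach": "storage-controller",
	}
	fmt.Println(hasPreTerminateHooks(ann)) // hook present: deletion paused
	fmt.Println(hasPreTerminateHooks(nil)) // no hooks: deletion proceeds
}
```

Under this model, a volume-detach controller would add its hook annotation when a Machine enters deletion and remove it once the node reports no attached volumes, without any change to CAPI core.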
I
Does it make sense? So, basically, if you have a large database and you have a lot of things to be written and so on, if you want to flush the cache, that can sometimes take a few minutes too. Do you want to wait for the whole lot, or do you want to have a grace period? What is the thought here?
G
Okay, then people can voice their comments about the implementation.
A
Okay, let's see what's next on the agenda. I guess that was it. Are there any other ad hoc topics that people would like to discuss? We also usually give some time for new folks in the community to introduce themselves; we'll put that on hold for a second, though. Fabrizio, go ahead.
A
So I guess, yeah, we can move on to introductions. If there's anyone who's new to the Cluster API community and would like to introduce yourself, and maybe tell a little bit about what introduced you to the project and whatnot, please feel free to unmute yourself and share.
J
Hi, this is Jack Foy in Seattle. I work at a company called Haya, and we've been running Kubernetes in production since the early days, so going on five years now. I've been really interested in the Cluster API project for a while, but I haven't been able to clear time to get involved, and it's not clear if that's actually going to change, but I'm trying to get more information about it.
J
I did just request access to this meeting document.
H
Yeah, so I'm new too; I just joined VMware this week, so I'm working with Fabrizio and Vince and a few others on Cluster API. I'm just kind of ramping up at the moment, and yeah, excited to get started.
A
All right, well, I guess we have no more topics on the agenda and no more people who would like to say hello, so I guess everyone can take about 40 minutes back, and we'll see you next week.