A: Hello, everyone. Today is Wednesday, June 16th. This is the Cluster API office hours; thanks for joining us. As always, please follow the meeting etiquette: use the raised-hand feature of Zoom if you'd like to speak and I'll make sure to call on you, and if you want to bring up any new topics, edit the agenda to add your item to this document. If you don't have access, make sure you join the SIG Cluster Lifecycle mailing list; Cluster API is a subproject of SIG Cluster Lifecycle.

A: We also make sure to follow the CNCF code of conduct. So, yeah, let's get started. Oh, also, if you can, please add your name to the attendee list right here.
A: So, let's start with... I don't think we have any PSAs, so let's start with the release check-in. I don't think there are any really blocking issues, but we should have the etcd update in here. Nadia or Vince, do you want to talk a bit about that and where we're at?
B: Yeah, I'll talk about it. So, in Kubernetes 1.22 they've updated the etcd client to version 3.5. It brings in mostly a lot of gRPC updates. We did briefly run it on Tilt; it failed on the upgrade test, upgrading to the main branch of Kubernetes. So we just need to investigate that and find out why it's doing that: is it an API reason, is it a Kubernetes version thing, is there something we need to do in the etcd management? So, yeah. The worst-case scenario is that we don't...
C: I think it makes sense to wait and test this, or we can just do another couple of betas if that's better, but I'd rather bump this definitely before 0.4.0 is cut, you know, because it's a pretty big one, the gRPC dependency as well.
C: This test is naturally not using envtest; these are end-to-end tests. The envtest tests, those we're actually passing. So yeah, not sure, we'll have to check that as well, I guess.
A: Okay. And then in terms of beta releases, do you think it makes sense to cut another beta right now, just since there are 55 commits to master since the last one, so we can get providers a bit more testing time before cutting the actual release, if the etcd one is the last one we're waiting for?
C: I think so, yeah. Let's cut a beta 1, and then we can merge the etcd change, and then we can cut a beta 2, and then an RC, and then...
A: Okay, cool. Any questions from anyone, any concerns from any providers, or anything about the etcd update or the release in general?
A: Okay, if not, let's move on to our only discussion topic. Dain, go for it.
D: Thank you. I just wanted to bump this issue, get it in front of some eyes, and get some people talking about it. We talked about it a while back, and there are a few different opinions or approaches that could be taken here. We have since had reason to revisit it.
D: I had some resistance early on to creating Machine resources with nodes associated with the MachinePool, and there are still some challenges around that: either, you know, that bridge, or polling the API, or something along those lines, to try to get that information backfilled. But there are definitely some operational benefits there, especially with regard to deleting a bad node or making them work with things like MachineHealthChecks; those are the main use cases that I've seen come up more recently.
D: There are other aspects that were raised in this issue, things like allowing it to roll out similar to a MachineSet, and I don't think that aligns well with the principle of the MachinePool. But I do now believe that the Machine resources are definitely useful here.
D: So I just kind of wanted to bring this back to the surface, see where people's thinking is, and how we might start actually turning this into an issue or a design that we could actually start developing toward.
E: Hi, thank you for bringing this up. I don't have the PR at quick grasp, but we just introduced AzureMachinePoolMachines, which is kind of redundant and repetitive, but yeah, this is kind of the next step for us, or at least in our thinking: kind of reflecting that as a Machine. We ended up going through the process of writing cordon and drain in AzureMachinePool, which is just not super effective, right, and it would be so much cooler to bring those features upward.
E: I am more than happy... I would like to start working on a proposal for something for Machines, I think this next week; I think we're about ready. So I'd love to work with you or anybody else on this.
F: Yeah, I just wanted to note that we would probably need this if we ever want to build an abstraction so that machine pools can plug into the cluster autoscaler, because the cluster autoscaler wants to be able to delete specific machines associated with, you know, specific nodes or whatever.
D: Yeah, I think so. You know, currently we're using the cluster autoscaler with just the cloud provider, so it's cordoning and draining and then calling the terminate-instance call directly. So we're not suffering any pain on that front; that works really well. But the aspect that we are suffering from is the lack of MachineHealthChecks, because that is not necessarily visible to the provider.
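For context, MachineHealthCheck is an existing Cluster API resource that remediates Machines whose Nodes go unhealthy; it only matches Machine objects, which is why MachinePool-backed nodes miss out today. A rough sketch of one (the names, labels, and timeouts are illustrative, and the API version shown is the v1alpha4 one current around this meeting):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4     # adjust to your Cluster API release
kind: MachineHealthCheck
metadata:
  name: my-cluster-node-unhealthy-5m      # illustrative name
  namespace: default
spec:
  clusterName: my-cluster                 # illustrative cluster name
  selector:
    matchLabels:
      nodepool: nodepool-0                # illustrative label; selects Machines, which MachinePools don't create today
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s                       # remediate if the Node is not Ready for 5 minutes
    - type: Ready
      status: Unknown
      timeout: 300s
```

Surfacing Machine objects for MachinePool instances, as discussed above, would let a selector like this match them as well.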
D: It can't see that a node has gone NotReady, especially if the VM itself, as far as the cloud provider is concerned, appears healthy. The other one being manual operator intervention: for whatever reason, occasionally operators decide to delete machines, and that's actually a lot harder in a machine pool, because now they have to write a little script that cordons and drains and calls the cloud provider, instead of the nice, consistent delete-machine flow like we used to have. So, happy to work with you; I think it sounds...
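The manual workaround described here might look roughly like the following; the node name, instance ID, and cloud CLI call are all hypothetical, and the terminate step varies by provider:

```
# Today, to remove one bad VM from a MachinePool, an operator has to
# cordon and drain the node, then terminate the backing instance directly:
kubectl cordon worker-node-3
kubectl drain worker-node-3 --ignore-daemonsets --delete-emptydir-data
aws ec2 terminate-instances --instance-ids i-0abc123def456  # hypothetical instance ID

# With Machine objects backing the pool, this would collapse to the
# consistent flow that MachineDeployments already have:
kubectl delete machine my-machine-name                      # hypothetical Machine name
```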
A: Yeah, I guess that was a short one. Just before we conclude: we usually give a bit of time at the beginning or at the end for anyone who's new to the project or new to the meeting and wants to say hi. So if that's your case, and you've joined us for the first time, and you just want to say hello and tell us who you are and what brought you to Cluster API, I think this would be a great time. So I'll just mute myself, and feel free to unmute and say hi.
A: Awkward silence. Okay, I guess this is it for today; no one's new. But thanks, all, for joining, great to see you all. We'll talk to you on Slack and see you next week.