From YouTube: 20200617 - Cluster API Office Hours
A
Hello and welcome to the Wednesday, June 17th edition of the Cluster API office hours, a sub-project of SIG Cluster Lifecycle. Just a reminder that this meeting is recorded and will be posted up to YouTube afterwards, and we do abide by the Kubernetes community code of conduct. So in general, please be excellent to one another.
A
If there's anything that you want to bring up, please go ahead and add it to the agenda under the discussion topics, and also add yourself to the attending list on the agenda and notes doc as well. Going back to the PSAs: we do have three pull requests out right now that are updating some of our project documentation. There's a PR out for updating our contributing file, one adding a reviewing file, and also one for updates to the project roadmap.
E
This is Jason. I hope this is the right place to ask about this, but yeah, so for workload cluster upgrades, are there any plans? I saw that there was a bit of tooling for v1alpha2 to help with workload cluster upgrades, but I didn't know if there was anything on the roadmap, or any kind of vision around how that would work in the future.
C
Sure, and it even went on for v1alpha3. So there are two kinds of areas for upgrades: the control plane and the worker nodes. For the worker nodes, we kind of already have that supported through a MachineDeployment. From v1alpha2 to v1alpha3 we added KCP, which stands for KubeadmControlPlane, and which uses kubeadm under the hood to spin up the control plane, but also to operate that control plane. So with a version change, your control plane nodes will get rolled and upgraded.
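As a rough sketch of what this looks like in practice (resource and template names here are hypothetical, and field names follow the v1alpha3 API), bumping `spec.version` on a KubeadmControlPlane is what triggers the control plane nodes to be rolled:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane      # hypothetical name
spec:
  replicas: 3                         # 1 or 3 control plane nodes
  version: v1.18.3                    # bumping this rolls and upgrades the control plane nodes
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate          # provider-specific; AWS used as an example
    name: my-cluster-control-plane
```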
C
You can also change other things, like instance size and things like that. Right now I think we support both one control plane node or three, for example if you wanted an HA scenario and you have multiple AZs. I'm not sure which provider you use, but where we have multiple AZs.
E
Kind of, yeah. So we're just testing out some upgrades, workload cluster Kubernetes version upgrades, and yeah, I mean, we were able to create new machine templates and then update the references to the machine templates, and it worked really well. I didn't know if there were any plans in Cluster API to try to help out with those upgrades, especially for the common things like Kubernetes versions.
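The flow described here, creating a new (immutable) machine template and repointing the MachineDeployment at it, looks roughly like this; all names and versions are hypothetical, and the bootstrap config reference is omitted for brevity:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0               # hypothetical name
spec:
  clusterName: my-cluster
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      version: v1.18.3                # new Kubernetes version for the workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: my-cluster-md-0-v1-18-3 # the newly created template; changing
                                      # this reference triggers a rolling upgrade
```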
A
But I don't think it's been much of a priority for us, since the main problems that we had were that there were a lot of external actions needed to facilitate the control plane actions. So I think right now, the only thing that would be missing from us adding, you know, kind of a more unified upgrade path would be just somebody taking the time to propose how it could be done, and then getting agreement on kind of a path forward for how we implement it.
C
Yeah, if you have ideas... I know we discussed this at, like, the last get-together, I think at KubeCon last year, or two years ago. Yeah, last year, it was last year. And some folks kind of raised some issues, I believe, with coordination, because you kind of want the control plane to go first and then the machines, and we also don't want to roll out all the machines at the same time. So there are some needs around that, yeah, but definitely, feel free to open an issue and we can discuss more.
C
Yeah, I think over time we will definitely improve the user experience. We're trying to go one little step at a time, if that makes sense. Now we have all the foundations working, and then we can build upon this foundation and have an even better user experience than we have today.
A
Great, go ahead and refresh. And it looks like we have one: prioritizing machines without a node reference when scaling down machine deployments. This one was mostly around somebody who was having issues with machines not coming up properly, and when they went to scale down the machine deployment, it deleted functioning machines rather than the ones that were failing to come up. And, as has been said, it does appear to be a relatively benign improvement.
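One related knob that already exists, assuming I am recalling the annotation name correctly from the Cluster API docs, is marking specific machines for priority deletion, so that a failing machine goes first on the next scale-down:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-cluster-md-0-broken        # hypothetical name of the failing machine
  annotations:
    # Machines carrying this annotation are preferred for deletion when the
    # owning MachineSet scales down.
    cluster.x-k8s.io/delete-machine: "yes"
```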