From YouTube: 20200729 Cluster API Office Hours
A
All right, welcome everyone. Today is Wednesday, July 29th, and this is the Cluster API office hours. Cluster API is a subproject of SIG Cluster Lifecycle. Please adhere to the CNCF code of conduct, raise your hand on Zoom if you'd like to speak, and be courteous and respectful to each other. Okay, so to get started today.
A
I just wanted to briefly mention: you might have noticed a bit of a template or format difference in the agenda. We wanted to try out a new format to see if it works better. We removed everything that was the issue triage, because a lot of that happens async during the week anyway, and there are usually only a few issues left by the time we get to the meeting. So it doesn't really make sense.
A
We
thought
to
like
triage
three
issues,
but
not
the
other
15
that
get
open
during
that
week,
and
this
is
getting
this
group
is
getting
bigger
and
bigger,
and
it's
a
rather
big
audience.
So
we
don't
want
to
you,
know
waste
people's
time
by
doing
triage
in
this
huge
group,
but
yeah,
so
the
office
hours
should
really
be
about
the
users
and
the
contributors.
So
any
issues
that
anyone's
encountering
anything
like
that.
So
we'll
try
this
out.
A
Please add any topics that you have to the list, and feel free to bring up any issues. This is really a time for you to ask questions, even if you're new, even if you think the answer might be obvious; that's okay, that's what we're here for, so feel free to add those questions in there. On that note, if anyone wants to moderate at some point, let me or Vince know; we're always happy to have other people helping. Okay.
A
That being said, we have one PSA about the release. Vince?
B
Yeah, I just wanted to call out 0.3.8. This is a very small milestone; there are around 25 issues and PRs going into this release. I'll post the link in here. There are a lot of bug fixes coming in, and we're targeting either tomorrow or Friday at the latest for a release.
B
There are still three open PRs that we're waiting on to be merged first. Yeah, that's it.
A
Thanks. Any questions for Vince about the new release, or any questions about the new agenda format, or anything like that?
A
Okay, is there anyone here who's new to this meeting who'd like to give a quick intro and tell us why you're here and what you're interested in? I'll go ahead and mute; if you do want to introduce yourself, go ahead and unmute.
A
Okay, I will keep going then. So let's move to discussion. All right, I actually added the first one, so I just wanted to briefly ask around. For context: right now, KubeadmControlPlane machines get created serially, one after the other, and I wasn't involved at the time that design decision was made.
A
I was wondering if there were any technical blockers to having machines provisioned in parallel. I'm guessing it's more difficult with the first one, because there's a difference between init and join, right? The first one has to init the cluster before the others can join. But I was wondering if there are any blockers to having the first control plane come up and then all the subsequent control planes come up in parallel.
C
But the other reason was that we wanted to ensure a little bit of consistency around the handling of etcd quorum, especially when going from two to three members, and around the process of kubeadm joining the etcd cluster prior to all of the services coming up. There's this situation where you expand the cluster and then you're in a weird state, and we wanted to limit that as much as possible.
D
Oh, thanks. Yeah, I was just wondering: what's the motivation, or what's the use case, for creating the control plane machines in parallel? I take it that the control plane machines are a relatively stable set; there's not a lot of churn. I'd expect control plane scale operations to be infrequent, and there aren't very many control plane machines. So I'm interested in what the motivation is.
A
Yeah, that's a good question. I think it was mostly the initial cluster-creation time: trying to see where we can shave off some time, and any saving we can take, we'll take. Right now you do get control back as soon as the first control plane is up, and the worker nodes join the cluster as soon as the first control plane is up as well.
A
So
that's
you
know
not
as
bad
as
having
like
everything
come
serially,
but
still
having
like
things
provisioning
after
you
know,
sometimes
10
15
minutes
can
be
cumbersome
depending
on
what
you
want
to
do
after
the
cluster
is
created
so
yeah,
and
so
the
reasons
that
you
brought
up
jason
the
one
about
the
bug,
I'm
assuming
that's
kind
of
solvable
or
is
it
something
that
has
it's
it's
something
that
was
there
at
the
time,
but
we
don't
expect
to
run
into
in
the
future.
C
Yeah, that one, as you've seen, I linked to both issues that are related to it. It's obviously a solvable issue. The question is, because of the way that we support multiple versions, in the past it would be a case-by-case basis where we would have to potentially disallow it versus allow it. The quorum, though, I think is the bigger potential issue.
C
I know there are some plans within kubeadm to support learner mode for new members, prior to the kubelet coming up and everything being configured. Once that's fully supported, we'll be in a much better spot to do things like parallel join. But before then, at least at the scale we're generally talking about for control plane members, say one to seven, scaling up multiple members at once is not something we would necessarily want to do, because it gets into that situation where you lose quorum for a brief period, and that could result in further disruption for people in the cluster. Right now we only have that going from one to two replicas, and with parallel adds we could potentially have it in more cases.
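As an aside, the quorum arithmetic behind that caution is easy to sketch. This is just back-of-the-envelope math in Python for illustration, not Cluster API or etcd code: an n-member etcd cluster needs floor(n/2)+1 votes, so the moment a second member is registered, fault tolerance drops to zero until it finishes syncing.

```python
def quorum(n: int) -> int:
    """Votes an n-member etcd cluster needs to commit writes."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """How many members can be down while the cluster keeps quorum."""
    return n - quorum(n)

# Scaling 1 -> 2: once the second member is registered, quorum becomes 2,
# so the cluster tolerates NO failure until the new member is up -- the
# window the discussion above is about.
for n in (1, 2, 3, 5, 7):
    print(n, quorum(n), fault_tolerance(n))
```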
D
Yeah, sorry, I forgot to put it down last night. So, I agree with everything that Jason said. My understanding, however, is that etcd will prevent this: there are sort of two phases to an etcd member joining. The first is when the member is added and all the existing members agree that, okay, there's going to be this new member that's about to join. Then it joins, catches up with the data, and only after that point will etcd allow you to add another member. So etcd won't actually allow you to concurrently add members. But you could still save time, because you could be bootstrapping the nodes in parallel and then, with retry logic, let them sort themselves out: the winner gets added and joins first, and then the next member joins. That does add some complexity, though, especially if one of the nodes isn't able to join for whatever reason; some node may join out of order, and handling failures might be a little more complex. Although, again, that might be less important when you're just creating the cluster, since there are really no workloads on it; it could matter more when you're scaling the control plane up as part of maintenance on an existing cluster. And then, thinking out loud here: Cecile, you mentioned the time that it takes to bring up the initial cluster. Do we have a good understanding of how long the different phases take, and how long it takes to bring up the worker nodes as opposed to the control plane nodes? And is it possible to eagerly bring up nodes if you have all of the information, especially, say, the control plane endpoint, ahead of time?
A
Yeah, for sure. I think the biggest time sink is definitely bringing up the infrastructure itself, not the joining-the-cluster part. But right now, since kubeadm, or rather the KCP itself, serializes the creation of the Machines, we can't take any actions on creating VMs until those Machines are created. So it'd be great if we could have something like: create the infrastructure but hold off on joining, or try to join with some retry logic, knowing the join will fail until it can go through properly.
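The "bring up infrastructure in parallel, retry the join" idea floated here can be sketched as a toy simulation. This is hypothetical illustration code, not anything KCP or kubeadm implements; FakeEtcd and parallel_join are invented names. It mimics etcd's rule that a new member-add is refused while a previously added member is still catching up, so joiners simply retry until they win a round.

```python
import itertools

class FakeEtcd:
    """Toy stand-in for an etcd cluster that, like real etcd, refuses a
    new member-add while an earlier added member is still catching up."""
    def __init__(self):
        self.members = ["cp-0"]   # the initial control plane member
        self.pending = None       # member added but not yet caught up

    def member_add(self, name):
        if self.pending is not None:
            return False          # previous add in flight: caller retries
        self.pending = name
        return True

    def catch_up(self):
        """The pending member finishes syncing and becomes a full voter."""
        if self.pending is not None:
            self.members.append(self.pending)
            self.pending = None

def parallel_join(cluster, names, max_rounds=50):
    """All nodes are 'booted' up front; each retries member_add until it
    wins a round. Returns the number of retry rounds needed."""
    waiting = list(names)
    for rounds in itertools.count(1):
        for name in list(waiting):
            if cluster.member_add(name):
                waiting.remove(name)
        cluster.catch_up()        # the round's winner syncs before the next add
        if not waiting:
            return rounds
        if rounds >= max_rounds:
            raise RuntimeError("members failed to join")
```

Membership changes still land one at a time, but the slow part, booting the machines, happens concurrently, which is where most of the wall-clock time was said to go.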
E
Hey everyone. I wanted to add to what someone asked earlier; I think the angle was: what's the problem that we're really trying to solve here? I just wanted to throw out there that, from my observations, when the control plane and nodes are undergoing some kind of dynamic behavior, say you're adding a node, we'll see etcd leader election changes impact the availability of the API server overall.
E
So
maybe
that's
a
solvable
symptom
that
we
can
do
by
improving
the
way
that
new
nodes
are
added
at
the
fcd
surface
area.
But
the
the
important
outcome
of
that,
I
think,
is
that
during
cluster
creation
events,
you
really
want
to.
E
If
you
assume
that
those
ncd
leader
election
changes
are
going
to
affect
the
availability
of
your
api
server,
you
really
want
to
take
your
sort
of
scheduling
operations
offline
until
all
the
control
planes
have
stabilized
so
for
a
cluster,
create
if
you're
doing
things
in
ci
like
let's
say,
you're,
building
a
thousand
node
cluster
and
you
want
nine
control
planes
and
then
you're
gonna
run
a
bunch
of
stuff
in
the
middle
of
the
night
that
can
potentially
add
20
minutes
to
your
overall
job
completion
time,
as
as
you
serialize,
those
control
plane
nodes
coming
online,
because
you
want
to
wait
until
they
come
online
before
you
schedule
all
your
workloads,
because
you
don't
want
them
to
have
api
server.
C
Yeah, I have a couple of questions based on that, Jack. Are you seeing the leader election happening for scale events beyond just one to two? I would expect it in that case, because that's the scenario where we currently lose quorum, but I wouldn't necessarily expect it for any additional adds after that.
E
Yeah, that makes sense. I can't confirm that I've seen it at any other time, because I'm a dev, so I'm usually waiting for the cluster to come online and then trying to do stuff right after. So it would stand to reason that I'm hitting that mathematical edge case, so to speak.
C
Yeah, and that was one of the reasons we wanted to add in that bit of caution of not scaling more than one member at a time: to limit that scenario to the one known case of one to two, because we figured, when somebody's spinning up a cluster, they're either going to want a multi-replica control plane or they're...
C
The
other
question
that
I
had
is
you
mentioned
potentially
having
a
whole
lot
of
control,
plane,
replicas,
and
I
was
wondering,
if
there's
a
specific
use
case
in
mind
for
that,
because
with
the
stacked
cd
model
that
we
have,
you
don't
necessarily
want
to
have
a
large
number
of
replicas
for
larger
clusters,
because
you
actually
reduce
your
right
speed.
By
doing
so.
E
Yeah, that's a great question. The permutations of different cluster operations are so vast, but we've seen in our experience that it's going to be different across cloud providers and on bare metal, just because of the way the network I/O and the local disk behave.
E
So
it's
really
hard
to
like
generalize,
but
I've
seen
folks
run
into
situations
that
are
solved
by
adding
more,
you
know,
increasing
the
and
on
their
control
plane,
and
it's
also
possible
that
they
can
just
totally
re
rejigger
how
they're
building
their
control
plane
and
build
sort
of
like
vertically
scale
the
vms,
as
opposed
to
horizontally
scaling.
E
So
I
would,
I
would
agree
with
you
that
that's
an
uncommon
use
case
and,
to
the
extent
that
we
can
identify,
people
are
doing
that
frequently.
We
should
probably
dive
in
and
try
to
prevent
that,
like
maybe
there's
something
wrong
with
something
that
would
suggest
that
that's
a
solution,
because
that
definitely
increases
complexity
and.
C
Even seven replicas is really on the edge of being a concern. But if there are use cases where we need this, we could potentially do something where we determine how many operations we can do in parallel without really affecting quorum, and optimize for that.
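That last idea, computing how many members could be registered at once without endangering quorum, can be modeled roughly. A hedged sketch follows; max_parallel_adds is a made-up helper, and the model assumes the k new members count toward quorum from the moment they're registered but contribute no votes until they're up.

```python
def quorum(n: int) -> int:
    """Votes an n-member etcd cluster needs to commit writes."""
    return n // 2 + 1

def max_parallel_adds(running: int) -> int:
    """Largest k such that the currently running members alone still
    satisfy the enlarged cluster's quorum while k new members sync."""
    k = 0
    while running >= quorum(running + k + 1):
        k += 1
    return k

# From one member, no add is quorum-safe under this model (the known
# 1 -> 2 gap); from three, two members could be registered at once.
```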
A
Awesome, thank you. I'm going to move on to the next topic, but thanks, Alfred, that was really great information. Okay, so Warren, you have the next one.
F
Yeah, I'm just more curious. Right now we have a proposal to switch to using envtest for our unit tests, which is fine; I've been working on a couple of PRs to sort of hash out how that would work. More from the perspective of just wanting to learn: does anybody here know of other community subprojects that are following this path of using envtest for unit testing, or do you happen to know of projects that are heavily invested in envtest?
F
If so, you can reach out on Slack; I'm happy to do some digging.
A
Thanks,
that's
a
good
question.
I
actually
do
not
know
if
there
are
projects
that
use
m
tests
out
there.
If
anyone
has
any
examples-
and
you
don't
want
to
speak
out
now,
can
you
please
maybe
paste
them
on
the
dock
or
on
slack
but
yeah
any
comments
or
questions
on
this
topic.
F
Just
to
point
out
the
reason
I'm
asking
is
because
I'm
running
into
some
issues
where,
when
we
like,
if
we
have
an
m-test
environment
spun
up
and
we're
trying
to
like,
do
an
update
on
some
objects,
I
get
a
lot
of
flaky
behavior
where
sometimes
I'll
run
the
test.
Everything'll
be
fine
and
then
I'll
get
an
error
saying
that
the
update
wouldn't
work.
F
I've
seen
this
also
sometimes
when
I
try
to
delete
an
object
and
since
there's
no
garbage
collection
for
m
test
yeah,
it's
like
you
have
to
delete,
or
you
know,
handle
every
object.
So
I'm
worried
that
there
could
be
a
possibility
of
a
lot
of
flakes
coming
in.
If
you
use
emphasis,
that's
why
I
just
want
to
learn
to
see
how
other
people
have
done
it.
Maybe
I'm
maybe
I'm
just
doing
something
dumb
yeah.
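For what it's worth, flaky updates like these are often optimistic-concurrency conflicts: the object's resourceVersion changed between the read and the write, and the API server rejects the stale update. In the Go world the usual cure is client-go's retry.RetryOnConflict; the pattern itself is small enough to sketch generically. This is illustration-only code with a made-up ConflictError, not envtest or client-go itself:

```python
import time

class ConflictError(Exception):
    """Stand-in for the HTTP 409 Conflict a Kubernetes API server
    returns when an update carries a stale resourceVersion."""

def retry_on_conflict(update, retries=5, backoff=0.01):
    """Re-run an update closure when it hits a conflict, with simple
    exponential backoff; the closure should re-read the object first."""
    for attempt in range(retries):
        try:
            return update()
        except ConflictError:
            time.sleep(backoff * (2 ** attempt))
    raise ConflictError("update still conflicting after %d tries" % retries)

# Simulate a test whose update loses two races before succeeding.
attempts = {"n": 0}

def flaky_update():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConflictError()
    return "updated"
```

Wrapping test updates this way, together with explicitly deleting every object a test creates (since, as noted, envtest runs no garbage collector), tends to remove the most common sources of this kind of flake.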
A
Yeah, thanks for raising this, Warren. As was said, let's follow up async on Slack, because we should definitely look into this. Good call.
A
Okay, I don't see any hands raised, so let's keep going: a reminder about v1alpha4.
G
Yes. As I mentioned last week, we are getting ready to start going over all the open issues that have been around for a while, and we're also asking anybody who's interested and has ideas to please file new feature requests. We want to start planning for v1alpha4. I don't have any timelines at this point for when we'll be done with planning and when we'll have code freezes and whatnot; we have no dates right now. But if you are interested in participating in the v1alpha4 planning, please make sure that any issues you're interested in seeing are filed. And Vince, I'll let you talk about the grooming session.
B
Yeah, we're planning a backlog grooming session this Friday; I'm planning to post the link at around 8 AM Pacific time, so it should also be good for folks in Europe. What we'd like to do as part of this first backlog grooming session is go over the whole 0.3.x milestone, which is pretty large, and see what we can punt off to 0.4.
B
Instead,
given
like
we're
now
shifting
to
alpha
4,
a
lot
of
the
features
that
we
proposed
to
4,
03
can
probably
move,
but
if
there
is
bug
fixes
that
like,
for
example,
need
to
go
into
zero,
three
nine,
we
can
trash
them
and
like
put
them
into
the
mouth,
soon
yeah.
B
So
that
was
that's
the
goal
for
the
for
our
first
backlog
grooming
session
and
then
we'll
also
start
opening
new
issues
as-
and
you
mentioned,
to
kind
of
like
see
what
we
want
to
do
in
terms
of
feature
in
alpha
4.
G
And
just
one
other
thing:
we
do
want
to
stop
maintaining
the
zero
3x
branch
for
new
features
in
fairly
short
order,
so
that
we
can
focus
on
using
the
main
branch
for
breaking
changes
for
v1
alpha
4.,
we'll
certainly
continue
to
backboard
bug
fixes,
while
we're
still
prepping
the
work
for
alpha
4.
But
at
some
point
like
vince,
was
saying
the
the
features
that
are
currently
scheduled
for
zero
three
x.
A
All right, unless anyone has any other discussion topics, that's the end of our list. Last call: anything else that you want to discuss, any issues or PRs that need attention?
B
I guess one other thing I wanted to mention: some folks from our team are going to be out in August for a week or so, so if we're not responding, that's why. Just one little PSA for that, at least on the VMware side; and I think other maintainers, like Cecile, are going out as well, right?
A
Cool
all
right,
well,
thanks
and
if
you
think
of
anything,
you
can
always
message
on
slack
and
yeah
thanks
everyone
and
see
you
all
next
week.