From YouTube: SIG Cluster Lifecycle - Cluster API 21-03-24
B
Hello everyone, and welcome to the Wednesday, March 24th Cluster API office hours meeting. I see there's a lot of new names in the participant list. Does anybody want to say hi before we start?
D
Hi, I'm Rajishri and I work for AWS. This is my first time attending the Cluster API meeting, so I'm really excited.
B
Going once, twice, three times — all right. So, just before we start the meeting: we do have a meeting etiquette. If you'd like to speak up, feel free to add your name to the attendee list and the discussion topics, and use the raise hand feature in Zoom.
B
I think it's under Reactions now — use it if you want to speak up and respond to questions, or share your thoughts or whatever else. Let's start with a few PSAs. It's almost the end of the month, and we're due for the 0.3.16 release; there are quite a few things in the milestone.
B
So I wanted to check if we want to schedule a release for this week or next, depending on how things go. Let's look at the milestone really quick. Oh, there are three things. So, the KCP rollout strategy: I think this is almost ready to go — I don't know. If John is here... I see you're here.
B
We
should
definitely
merge
the
tolerations
one
with
any
anything
else
needed
here
and
then
fabrica.
You
have
the
closer
cut
off
that
you
should
not
upgrade
to
off
before,
which
makes
sense.
B
Do
we
want
to
target
by
the
end
of
the
week
or
early
next
week?
Anybody
has
any
preferences.
E
I
think
that
we
should
wait
for
next
week
because
I
add
a
new
comment
on
the
pr
for
the
scale
in
proposal
and
and
probably
we
have
to
address
these
in
in
master
before
and
then
back
part
of
the
the
change
to
the
pr.
E
We
were
discussing
this
on
the
pr
with
cecile
before.
F
B
Yeah, I think so, Fabrizio. Why don't we capture this? Yeah — let's do it on the main branch first and then backport. Usually we strive for clean backports; it's hard sometimes, because of reasons, but if we can do that, that would be better.
E
Okay, I will sync up with John offline.
B
Okay, so let's try to do that then.
I
Yes, hi — this is actually very related to what we were just talking about. In order to make it maybe a little bit easier to do some milestone grooming for 0.4.0, and to see how far we are from an actual release—
I
—we did a little bit of reorganization of the milestones. There used to be a 0.4.x and a 0.4.0; we merged those both into v0.4, and the alpha 3 milestone is v0.3. The idea is that we won't apply the milestone to every PR manually anymore — we have the milestone applier plugin enabled for Cluster API.
I
So
as
soon
as
the
pr
merges
in
the
main
branch,
it
will
go
into
the
it
will
be
added
to
the
0.4
milestone
and
if
it
merges
into
the
release
0.3
branch,
it
will
be
added
to
the
0.3
milestone,
so
that
will
simplify
a
little
bit
like
and
reduce
some
of
the
noise.
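The branch-to-milestone behavior described here can be sketched roughly as follows. This is a minimal illustration of the mapping, not the actual Prow milestoneapplier plugin code; the branch and milestone names are the ones from the discussion:

```go
package main

import "fmt"

// milestoneForBranch sketches the milestone-applier behavior described
// above: PRs merged into the main branch land in the v0.4 milestone,
// while PRs merged into the release-0.3 branch land in v0.3.
func milestoneForBranch(branch string) string {
	switch branch {
	case "main":
		return "v0.4"
	case "release-0.3":
		return "v0.3"
	default:
		// No milestone is applied automatically for other branches.
		return ""
	}
}

func main() {
	fmt.Println(milestoneForBranch("main"))
	fmt.Println(milestoneForBranch("release-0.3"))
}
```

In the real plugin this mapping lives in the Prow configuration, keyed by repository and branch, rather than in code.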
I
The
other
thing
that
we
want
to
do
is
make
it
a
little
bit
easier
to
look
at
the
milestone
for
the
upcoming
release
and
know
like
what
in
the
melson
actually
like,
is
blocking
and
needs
to
go
in
before
we
release
versus
what
is
like
a
nice
to
have
or
stretch,
and
so
we
added
a
release
blocking
label
having
some
technical
issues
at
the
moment,
because
we
can't
apply
it
as
is
so
we're
gonna
change
it
to
probably
a
kind
label,
so
that's
coming
like
today,
but
the
idea
is
that
we'll
try
to
track
all
issues
and
pr's
that
are
considered
blocking
for
the
next
release
with
that
label,
and
then
that
gives
us
like
a
better
view
of
like
how
far
we
are
from
a
release
and
try
to
you
know
close
up
on
0.4,
because
it's
it
otherwise,
it's
a
never-ending
cycle.
I
We
keep
adding
thanks
to
the
milestone
and
we're
never
gonna
get
there.
So
that's
the
hope
if
there's
any
feedback
or
comments
or
concerns
about
that.
Let
me
know:
oh
the
other
thing
is
that
release
label
or
release
blocking
will
be.
I
think
we're
gonna
make
it
so
that
only
approvers
can
apply
it.
But
if
you
think
you
have
a
pr
or
an
issue
that
needs
to
be
released,
blocking
please
like
tag
approvers
and
then
flag
it
as
release
blocking
and
then
we
can
add
the.
E
Yeah, a quick reminder that tomorrow there will be the CAPD code walkthrough. We are going to use the same Zoom meeting that we are using now for this meeting, and the time will be 12.
B
All
right,
let's
move
on
to
discussion
topics.
Some
sort
of
group
topic
like
this
has
been
coming
for
a
while
cluster
pib
one.
What
does
that
mean?
So
what
we've
been
talking
about
like
hey?
We
should
make
a
roadmap
for
beta1
and
I
feel
like
a
cubecon
us
last,
one
that
was
in
person
so
like
two
years
ago,
we're
trying
to
tackle
like
a
lot
of
things.
At
the
same
time
like
including
ux,
you
know
our
ux
has
like
some
issues
like
in
terms
of
like
simplicity.
B
You
have
to
create
a
lot
of
objects
to
create
a
cluster,
but
at
the
same
time
like
we
also
have
solid
foundations
so
to
build
on
top
of
that
and
like
also
after
receiving
like
and
looking
at
all
the
feedback
that
came
from
internally
at
vmware,
but
also
like
from
external,
like
contributors
and
other
companies
like
one
idea
was
like
well,
we
do
have
solid
foundations
so
like
and
we're
focusing
on
stability
for
the
p1
alpha
4
cycle.
B
There
is
also
like
some
features
that
we're
adding
in,
but
what,
if
we
take
the
solid
foundation
and
make
it
like,
you
know
that
leave
a
faith
to
actually
say
like
hey
these
are
beta
and
they
will
be
supported
for
a
while
to
provide
like
a
little
bit
more
context.
Here
is
also
that,
like
we
are
operating
at
that
level
already,
our
alpha
apis
like
even
though
they're
alpha
and
we
we
see
sodas-
are
the
right
to
make
breaking
changes.
B
We
do
have
conversion
web
books,
which
is
you
know
it's.
It's
really
nice
that,
like
I,
have
like
like
an
easy
migration
path
from,
for
example,
alpha
3
to
alpha
4
or
alpha
2,
12
3,
and
that's
also
like
a
lot
of
work,
and
we
have
been
supporting
like
alpha
2
and
alpha
3
for
at
least
one
year.
So
that's
this
is
beta
level
kind
of
like
support
for
these
apis.
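The conversion webhooks being discussed follow a hub-and-spoke model: each older API version converts to and from a single "hub" version, so any version can reach any other. The sketch below shows the idea using plain structs and stdlib only — the type names, fields, and method signatures are illustrative stand-ins, not the real Cluster API types or the controller-runtime `conversion.Hub`/`conversion.Convertible` interfaces:

```go
package main

import "fmt"

// ClusterV1Alpha4 plays the role of the hub version.
type ClusterV1Alpha4 struct {
	Name     string
	Replicas int
}

// ClusterV1Alpha3 is a spoke that knows how to convert to/from the hub.
// The pointer field is an illustrative schema difference between versions.
type ClusterV1Alpha3 struct {
	Name     string
	Replicas *int32
}

// ConvertTo converts the spoke version into the hub version.
func (src *ClusterV1Alpha3) ConvertTo(dst *ClusterV1Alpha4) {
	dst.Name = src.Name
	if src.Replicas != nil {
		dst.Replicas = int(*src.Replicas)
	}
}

// ConvertFrom fills the spoke version from the hub version.
func (dst *ClusterV1Alpha3) ConvertFrom(src *ClusterV1Alpha4) {
	dst.Name = src.Name
	r := int32(src.Replicas)
	dst.Replicas = &r
}

func main() {
	r := int32(3)
	old := &ClusterV1Alpha3{Name: "test", Replicas: &r}
	var hub ClusterV1Alpha4
	old.ConvertTo(&hub)
	fmt.Println(hub.Name, hub.Replicas)
}
```

With this shape, supporting N versions only requires N conversions to the hub, rather than N×N pairwise conversions — which is what keeps supporting alpha 2 and alpha 3 simultaneously tractable.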
B
So
my
proposal
is
like
hey:
let's
make
it
official
we'll
start
with
cluster
api
first,
because
the
foundation
industry
api
and
like
they're
like
a
little
bit
more
solid
and
not
all
the
types
have
to
be
kind
of
like
beta
like
right
away.
It's
like
we
can
decide
which
ones
we
want
to
make
a
beta
in
which
which
api
group
and
the
target
data
would
be
kind
of.
Probably,
I
would
say
october,
hopefully
we'll
be
able
to
see
each
other
at
kubecon
us
we'll
see
but
yeah.
So
questions
comments.
J
Yeah,
so
aside,
like
of
the
side
of
this,
which
is
look,
I'm
fully
supportive
of
this
part
of
this
one
I
feel
like
folks
are
conflating
here.
Two
things
there
are.
There
are
basically
two
aspects
to
the
project
in
my
opinion,
which
are
api
stability
and
api
versions
and
the
a
and
the
project's
overall
state.
J
So,
given
the
fact
that
we
have
web
hooks
and
we're,
we
do
provide
the
upgrade
paths
from
a
version
to
another
that
doesn't
feel
like
an
alpha
project,
at
least
to
me
because
of
those
guarantees,
so
the
I
feel
like
we
should
distinguish
between
api
versions
and
the
overall
projects,
stability.
B
Okay,
lots
of
questions
there
so
from
I
want
to
answer
like
first
like
jason's
questions
like
well
more
like
statements
like
yeah,
there's,
a
feeling
that,
like
you
know
like
around
there,
it's
like
well,
if
we're
alpha
like
we're
using
this
in
production
like
folks,
are
like
not
they
don't
easily
like
buy
in,
and
I
feel
like
personally
like.
B
I
would
feel
the
same
if
this
was
a
new
project
that,
like
I
would
stumble
upon
right,
like
I
see
like
well,
you
said
alpha
like
I
can't
I
can
bet
all
my
cows
on
you
that
kind
of
thing
right,
but
given
that
we
are
operating
at
that
level,
like
I
don't
see
why
not-
and
that
was
like
one
driving
factor.
The
other
driving
factor,
though,
which
was
the
most
important
one,
is
that
we're
shipping
sometimes
breaking
changes.
B
We
also
have
been
in
a
mode
where,
like
we
cannot
upgrade,
for
example,
the
controller
runtime
version
until
the
next
six
months
or
like
a
year
after
the
release,
and
the
next
release
happens,
which
usually
it's
tied
to
the
api
release
tied
all
these
things
together
makes
it
really
hard
to
keep
support
up
right
so
like
in
this
scenario
like
if
we
move
to
to
to
v1
what
we
unlock
is
the
ability
to
use
a
minor
version
and
do
those
breaking
changes
in
those
minor
versions.
Not
api
changes
code
changes.
B
If we want to do a big breaking change, we have to wait for beta 2, provide conversion and an upgrade path, and also keep supporting beta 1 for longer. And I feel like we're already doing all those things, so it makes sense to me.
K
Yeah,
I
think
those
are
great
answers.
Vince
thanks
for
clarifying,
especially
around
the
api
stuff.
Also,
I
wanted
to
highlight
this
thing
that
craig
peters
had
said
in
chat,
because
that's
a
good
question
too,
you
know,
is
there
a
way
to
call
the
project
production
ready,
while
each
api
maintains
alpha
beta,
etc?.
B
So
production
ready,
like
production,
really
means
a
lot
of
things
in
different
companies
right
like
so
for
us,
like,
I
could
say,
like
we
have
extensive,
really
really
extensive
end-to-end
tests.
We
have
to
improve
like
on
a
lot
of
factors.
For
example
like
we
had
end-to-end
tests
failed
for
10
days
before
we
actually
noticed
cecile
and
nader
noticed
yesterday
that
our
end-to-end
were
failing
on
a
machine
health
check
thanksgiving
for
working
on
nccl
for
open
the
issue.
B
So
we
need
to
improve
that
right,
like
we
need
to
respond
to
these
things
like
earlier
than
10
days,
maybe
in
a
day
or
two
we
can
saving,
but
the
production
ready
bit
is
what
I
would
like
to
signify
with
the
1.0.
That's
what
kubernetes
is
doing
today
in
a
lot
of
ways
right.
B
It's
like
like
raises
like
1.21
these
days,
and
the
1.0
should
signify
that
now
it's
not
like
we're
gonna
do
1.0
and
we're
never
gonna
work
on
user
experience
anymore,
but,
like
I
would
love
for
that
to
go
on
the
side
a
little
bit,
because
we
need
to
work
on
the
foundation
that
we
have
rather
than
keep
changing
the
world
all
the
time
which
we
have
been
doing
for
a
while,
which
you
know
it's
like
growing
pain
in
a
lot
of
ways.
I
Yeah
one
thing
that
I'd
like
to
see
us
do
more
of
as
we're
thinking
about
you
know,
being
a
mature
project
is
maybe
have
a
redefine
our
release
policy
and
how
we
do
back
ports
and,
like
feature
releases.
So
up
until
now.
We've
mostly
you
know,
had
like
we
had
the
minor
release,
which
was
like
api
version
dependent
and
so
they're
like
a
few
months,
if
not
like
more
than
a
quarter
in
between
like
each
minor
release.
I
So
it's
been
like
a
long
time
since
we
really
0.3.0
and
we're
still
not
at
zero
four
zero,
and
so
the
only
way
to
like
not
block
progress
is
to
back
port
a
bunch
of
features
to
the
patch
releases
and
I
feel
like
that's
not
very
good
practice
for
if
people
are
going
to
depend
on
cluster
api
in
production
and
just
in
general,
like
we
want
to
be
able
to
provide
a
stable
like
patch
channel
and
be
able
to
like
only
backport,
like
bug,
fixes
and
like
critical
issues
to
the
patches,
while
also
keeping
like
a
minor
release
channel
for
those
bigger
features
and
things
like
that,
without
necessarily
tying
those
two
api
version
changes.
B
Yeah,
absolutely
yes,
that
that's
that's
what
like
the
other
thinking
about
this
is
like
my
the
minor
version
power
that
comes
with
it
like
that,
should
be
really
great
for
doing
these
things,
and
I
would
rather
like
upgrade
more
things
in
a
separate
like
like
a
minor
release
like
the
go
version,
the
controller
runtime
dependencies
k
log,
whatever
right
like.
Instead
of
bundling
these
things
in,
like
these
big
minor
releases,
we
would
spread
them
out
and
yeah.
J
So
so,
like
the,
I
feel,
like
the
sweet
spot,
once
you
get
to
a
certain
when
you
get
to
a
certain
state,
is
what
is
kubernetes
is
already
doing,
which
is
basically
ensuring
that
we're
not
cherry-picking
any
new
features,
but
rather
just
specific
bug
fixes.
At
least
this
way.
We
are
ensuring
and
forcing
ourselves
to
work
on
the
next
mining
release
without
having
a
never-ending
like
minor,
that
we
work
on.
L
So the other thing, for us to qualify — or graduate — from v1alpha to beta: what would, again, give people confidence in using Cluster API is having set release cadences. People should know when they can expect the next release, with some tolerance, and they should be able to get that next release. Releasing at a predictable cadence is another thing that I would suggest would get us from alpha to beta.
B
We
do
already
have
a
cadence
set
in
the
contributing
guide
today,
we'll
probably
have
to
tweak
that
a
little
bit
like
as
cecile
mentioned
before,
but
yeah.
That
awesome
sounds
good
to
me.
I
think.
B
But
one
one
comment
that,
like
on
the
when
people
can
expect
releases,
I'm
not
okay
setting
dates,
I'm
okay,
sending
like
way
really
wavy
and
like
kind
of
like
around
this
time
kind
of
thing,
especially
because,
like
we
still
need
reviewers
approvers
maintainers
in
a
lot
of
parts
of
the
code
base,
not
just
like
the
core
copy,
but
like
captive,
for
example
like
right,
the
ships
would
copy
or
kcp,
and
things
like
that,
like
those
need.
B
A
lot
of
help
still
so
saying
that,
like
world
release
at
this
point
in
time,
like
actually
puts
us
in
a
weird
position
where
like.
If
we
want
to
block
like
release
like,
we
should
be
able
to
have
joy,
a
scene
in
a
deer.
M
So
this
may
be
a
bit
of
a
random
one,
but
like
one
of
the
things
I
was
thinking
about
recently,
is
you
know,
as
the
api
goes
towards
a
more
stable
beta,
it's
going
to
be
around
for
a
lot
longer
than
the
alpha
apis
I'd
expect.
B
So the answer is yes, we have thought about it. We're not necessarily committed to doing it. Actually, Go itself, in a future release, will support lazy Go modules — I think it's called lazy.
B
It's
called
lazy
module,
but
the
dldr
of
that
feature
that
I
read
was
that
if
you
import
only
a
package,
it
will
only
import
and
require
you
to
import
like
a
dependency
of
that
package,
which
the
only
blocker
there
would
be
the
web
books
that
are
right
now
like
in
the
same
package,
even
if
we
move
those
out
like
we
still
like.
B
I
need
those
web
books
to
live
in
there,
but
because
of
like
how
control
run
time
like
it
does
like
the
the
whole
web
book
thing,
so
we'll
need
to
figure
out
a
solution,
although,
like
I
said
like,
if
we
do
release
more
often
with
more
versions
of
controller
runtime
and
go
etc,
that
problem
will
probably
be
you
know,
coming
up
way,
less.
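For context, the Go feature referred to here shipped later as "module graph pruning" (with lazy module loading) in Go 1.17: when a module's `go` directive is 1.17 or later, resolving a build only requires the transitive dependencies of the packages you actually import, rather than the full module graph. A minimal go.mod sketch — the module path and dependency version below are illustrative, not a real consumer:

```text
// go.mod (illustrative module path and version)
module example.com/capi-consumer

// With "go 1.17" or later, the module graph is pruned and loaded
// lazily: only dependencies of the packages actually imported are
// needed to resolve the build.
go 1.17

require sigs.k8s.io/cluster-api v0.4.0 // illustrative version
```

This is why, as noted above, importing just the API types package would no longer drag in the full controller-runtime dependency tree — provided the webhooks are not in the same package as the types.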
J
So, going back to specific dates versus just a certain time period: I agree that it's fair that, if we don't have enough maintainers, we shouldn't be putting the burden on the ones that are already in place to commit to specific dates.
J
What
I
would
hope
is
like
by
beta
we
are
able
to
get
more
people
in
for
reviewing
pr,
so
that
at
least
like
having
a
trained
model
should
be
something
we
ain't
aim
for,
because
at
the
end
of
the
day,
this
gives
confidence
to
anyone
consuming
cluster
api
that
they
can
plan
against
some
specific
dates
for
them.
And
if
something
misses
the
train,
then
it's
gonna
just
catch
up.
The
next.
A
Yeah, just a thought I had over the last few days. There are a lot of open PRs in the CAPI repo, and thinking about release cuts: is there a case to be made to start breaking up the repo? As in, let's move CAPD to a separate repository, KCP to a separate repository, and then we can start to have different release cadences for those, and it'd be a bit easier — you know, people with subject-matter—
A
—expertise can just go to a set of PRs, and it makes things a bit easier. I don't know — maybe having a big uniform release is not ideal.
I
Yeah, I was just going to say: I agree with Nate here. I think we need better ownership of different areas, and I think we need to expand the reviewers and maintainers circles overall. But I think maybe a good step towards that would be having OWNERS files for subparts of the project, because I think adding a separate repo for each area adds a lot of overhead in coordinating different releases. And for users—
I
It
makes
it
more
complicated
too,
because
now
they
need
to
understand
like
five
dependencies
instead
of
understanding
the
one
or
the
two
with
infrastructure.
So
that's
maybe
something
we
need
to
look
into.
B
Yes — I'm 100% behind what you said. We are already adopting the multiple-OWNERS approach: for example, for the Ignition stuff that's coming in, I asked to get an OWNERS file in there for folks who can review those things, because I'm not an expert in Ignition whatsoever, so I'm not comfortable reviewing those PRs. But I'm also a fan of batteries-included, rather than splitting things apart so that you have to install 300 different things to get there. Does that make sense? I see Fabrizio has his hand raised — go ahead.
E
Yeah,
I
I'm
definitely
plus
one
to
add
more
owner's
files
and
we
can
start
following
the
call,
the
current
code
organization
that
that
that
will
be
make
easier
to
set
a
reviewer
for
kcp
cup
bk
test
framework,
so
everything
or
eventually
experiments
everything
which
is
under
a
well-defined
folder.
E
But
my
the
most
important
comment
is
that,
even
though
we
don't
have
a
reviewer
file
is,
if
someone
is
interesting
is
the
upper
as
a
reviewer.
Everyone
can
review
any
npr.
So
if
you
want
to,
if
you
are
interested
in
doing
this,
let's
start
acting
at
the
next
level.
This
will
really
help
our
current
reviewer
maintainers
and
we'll
automatically
qualify
everyone
for
claiming
a
spot
as
a
reviewer.
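For reference, the per-directory ownership being discussed uses the standard Kubernetes OWNERS file format: dropping an OWNERS file into a subdirectory — say, the KCP or bootstrap provider folder — scopes review and approval suggestions to that part of the tree. A minimal sketch; the usernames below are placeholders, not real maintainers:

```yaml
# OWNERS file placed in a subdirectory, e.g. controlplane/kubeadm/
# (GitHub usernames below are placeholders)
reviewers:
  - kcp-reviewer-a
  - kcp-reviewer-b
approvers:
  - kcp-approver-a
```

Prow then routes `/lgtm` and `/approve` permissions for files under that directory to the listed people, which is what makes per-area ownership work without splitting the repository.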
N
I was very confused — I changed my name — oh, okay. Just really quick on the reviewer comment: in general, I agree that everyone can review. I think it would be beneficial if we assigned folks, like you said with the Ignition part, because as someone that's joining the project more actively, I think a lot of people will run into the problem that they don't feel qualified to put in a review, because they don't have much experience with the project. And saying "well, everyone can review" — well, that's technically true—
N
I
don't
think
a
lot
of
people
are
going
to
do
it
simply
because
they
don't
feel
personally
qualified
to
do
it
because
they
don't
have
work
with
the
code
base
much
and
I
think,
that's
kind
of
like
a
chicken
egg
problem.
Well,
I
agree,
but
I
think
we
need
to
find
ways
to,
as
you
said,
to
introduce
new
honor
files
to
look
for
people
differently,
other
than
saying
hey,
you
can
just
do
it,
but
I
don't
think
that
works
very
well.
Just
from
personal
experience.
B
I
just
wanted
to
respond
to
this.
Like
I
completely
agree,
the
the
only
thing
that
I
say
is
like
there
is
a
ladder
that
and
like
we
want
to
be
fair
to
everybody
right
like
we
don't
want
to
just
like
put
people
in
the
reviewer's
file
and
like
that,
don't
have
that
experience
so
like
if
we
bet,
if,
like
at
least
like
you,
would
go
in
and
like
start
reviewing
some
things,
even
if
you
don't
have
an
experience,
you're
also
good
to
ask
questions
like.
Why
is
this
this
way
to
start
learning?
B
I
agree,
though,
it's
like
a
subject
in
the
background
right
like
and
the
community
sometimes
can
be
scary,
but
I
want
to
do
my
best
to
say,
like
everybody's
welcome,
I
don't
know
what
else
to
do
to
to
you
know
to
do
that
like
we're
here
and
want
to
support
new
contributors.
Good
first
review.
Yes,
for
pizza,
go
ahead.
E
Everyone
should
not
feel
intimidated
by
reviewing.
Let's
go
try
to
add
your
comments.
Every
comment
is
valuable
and
and
welcome.
M
Yeah
also
just
going
to
add
to
that
like
when
I
first
started
reviewing,
I
quite
often
would
like
put
in
some
comments,
be
like.
I
don't
really
have
that
much
context
around
this,
but
you
know
I
think
it
looks
okay
and
I
could
put
the
look
good
to
me
later
on,
but
what
I
sometimes
did
was
like
yeah
look
at
me,
but
hold
for
like
a
more
experienced
reviewer.
M
So
I
don't
know
if
that's
also
something
people
want
to
do
like
if
they've
got
time
to
do
the
sort
of
basic
review,
but
don't
have
the
context
like,
I
don't
think
anyone's
going
to
judge
you
for
saying
I
don't
have
the
context.
I've
done
a
review.
It
looks
okay
to
me,
but,
like
someone
who
knows,
it's
better
should
should
also
have
a
look
before
it.
B
So
we're
almost
at
40
minutes
and
there
are
any
questions
on
cost
review
one
or
could
we
move
on
to
the
next
topic?
Thank
you
all
for
engaging
in
this
like
this
was
really
helpful
and
discussion
and
yeah.
It's
great
that
you
know
we're
gonna
get
there.
D
Yeah,
so
just
once
again
quickly
hi
everyone-
and
this
is
my
first
time-
attending
the
cluster
api
meeting
and
also
like
sharing
an
enhancement,
a
proposal
or
idea
or
anything
in
any
of
these
things
meetings.
So
if,
if
I
start
discussing
something
that
is
not
within
the
scope
of
this
meeting,
please
feel
free
to,
let
me
know
and
yeah.
D
So
the
main
idea,
like
the
main
idea
for
this
proposal,
is
to
add
a
bootstrap
provider
within
cluster
api
that
will
provision
and
manage
an
hcd
cluster
and
the
reason
being,
I
know
that
cluster
api
allows
users
to
either
use
the
local
hcd
configuration
option
with
which
lcd
and
control
plane
components
are
co-located
or
users
can
bring
in
their
own
external
hcd
cluster
and
provide
the
end
points
in
the
cluster
configuration
section,
but
from
what
I've
seen.
D
Cluster
api
currently
does
not
support
like
provisioning
and
managing
that
external
hcd
cluster
itself,
and
I
also
came
across
this
enhancement
proposal
that
I
have
linked
there,
which
shows
that
hcd
manager,
which
is
used
by
k,
ops,
is
being
rebased
onto
xcd
adm.
D
So
like
after
I
went
through
the
cap,
I
saw
that
the
reason
behind
this
rebase
is
that
they
want
to
use
the
e
like
they
want
to
keep
the
ease
of
use
of
xcdm
commands
and
combine
it
with
the
administrative
features
that
etsy
manager
offers,
such
as
automated
frequent
backups
and
restores,
and
the
end
goal
is
to
have
a
consistent
lcd
solution
that
can
be
used
across
all
kubernetes
projects
that
require
hcd
and
one
of
them
being
clustered
api.
D
That's
why
I
thought
that
we
could
have
a
bootstrap
provider
that
uses
hcd
adm
to
spin
up
an
xcd
cluster,
and
then
users
can
use
that
cluster
as
their
external
lcd
cluster.
So
right
now
this
this
proposal,
dock
is
still
a
draft
and
it's
incomplete.
E
Good — thank you for the idea and for the proposal. Your assumptions are right: right now, Cluster API does not manage an external etcd, and this is something that I guess some people will be interested in, for sure. Danielle, who is managing etcd by driving etcdadm, will be super happy. I have only one comment—
E
But
I
definitely
support
the
dia,
and
maybe
we
can
find
a
an
alternative
way
if
boost
trouble
is
not
the
right
one.
D
Okay,
sure
yeah,
so
the
reason
I
mentioned
the
bootstrap
provider
is
like
again
correct
me
if
I'm
wrong,
but
I
saw
similarities
in
the
way
it's
cd
adm
will
spin
up
the
lcd
cluster
and
cube
adm
spins
up
a
kubernetes
cluster
like
the
commands
and
the
third
generation
logic
where
they,
like.
Both
providers
will
generate
the
ca
sort
for
one
node
and
the
same
c,
so
it
gets
used
across
all
nodes
and
the
init
and
join
commands
also
look
similar.
D
So
I
yeah
that's
why
I
mentioned
bootstrap
provider,
but,
like
sure,
I
would
like
to
know
what
other
type
of
provider
or
controller
is
more
suited
for
this.
H
Can
you
thanks
yeah,
it's
it's
it's
great
see
the
adm
is
is
getting
some
like
that
that
looks
it
looks
useful
and
yeah
the
it
this
is.
This
is
exactly
sort
of
the
the
use
case.
I
guess
where,
where
that
was
designed
for.
H
I
do
agree
with
with
resale,
though,
that
the
bootstrap
controller
may
not
be
the
the
the
right
fit
and-
and
I
I
think
I
actually
it's
not-
it's
not
clear
that
this,
like
that,
this
work
needs
to
immediately
live
as
a
let's
say,
a
six,
a
sig
sub
project.
But
I
think
this
is
exactly
the
right
place
like
this
is
the
audience
there
are
people
here
that
may
be.
You
know,
motivated
to
to
join
in
on
this
work
and
and
make
it
happen.
H
I
I
think
that,
right
today
we
have
a
a
control,
plane,
controller
and,
and
it
is
tasked
with
managing
the
the
membership
of
a
with
what
we
call
a
stacked,
scd
cluster
right,
where
we
have
api
servers
co-located
with
with
fcd
peers
or
members,
and
I
think,
like
this
effort
to
me,
it
sounds
like
we
want
like
the
the
problem
here.
Right
is
okay.
H
How
how
can
we
deploy
fcd
using
the
infrastructure
right
using
using
the
like
the
the
infrastructure
providers-
oh,
that
that
interface,
that
it
that
it
provides
right
to
be
able
to
deploy,
let's
say
machines
right
and
then
tie
that
in
with
control
plane
replicas
that
don't
have
xcd
co-located
is
that
is
that
am
I
describing
the
problem
correctly
like
is
that
is
that
sort
of
the
end
the
end
goal?
Yes
and
okay,
so
yeah?
H
So
then
you
know,
I
think
I
think
it
would
be
great
to
you
know
whoever.
H
We
could
we
could
meet,
and
you
know
sort
of
figure
out
where
you
know
how
this
how
this
would
fit
in
and
yeah.
I
think
that
would
be
a
great
starting
point.
I
I'm
very
very
thankful
that
you
that
you
wrote
the
the
proposal
it
gives.
It
gives
a
right.
This
is
like
this
is.
D
Okay,
yeah
thanks.
That
will
be
great.
I
will
reach
out
to
you
and
anyone
who's
interested
like
either
on
the
cluster
api
slack
channel,
so
to
see
if
anyone's
interested-
and
I
can
set
up
a
zoom
call
to
discuss
this
further
yeah.
B
Perfect
one
one
quick
follow
up
on
here
is
like
from
early
conversation
of
cluster
api.
Hcd
management
was
one
of
the
goal
of
the
pro
external
entity.
Management
was
one
of
the
goal
of
the
project.
That
would
be
the
only
exception
that
we
would
make
for
machines
to
work.
So
what
I'm
saying
is
like
right
now,
like
all
machines,
have
to
become
kubernetes
nodes.
B
This wouldn't be true for etcd — unless we want to install the kubelet there, but that would be really weird. When we were discussing this, this would be the only exception that could be made for, you know, a bootstrap or infrastructure provider. I see Joel — no, sorry, Lubomir — was also mentioning load balancing for etcd as well, which is also an interesting point: we would need to collaborate with the load balancer proposal, maybe — I don't know if Jason is here; I think he should be — to see how this fits in as well. It's a lot of things to fit in, but I'm happy to explore this; this has been one of the most requested features. The other thing that we could do is also make sure that KCP supports external etcd, if we go down this path.
J
Yeah, one quick thing: if we want to enable external etcd use cases, then we might want to have some sort of readiness gates, because we probably need to bootstrap the etcd cluster before the infrastructure turns to a ready state. So there might be some coordination we need to think about here.
L
Yeah,
so
the
way
I
I'm
looking
at
this
is
an
external
entity
would
be
managing
the
lcd
cluster
and
we
would
just
be
using
the
configuration
for
the
external
cluster
in
provisioning,
the
in
cluster
api
provisioning
clusters.
L
So
I'm
I'm
kind
of
trying
to
wrap
my
head
around
how
cluster
api
would
manage
the
hcd
cluster
like.
Where
would
that
component
run
because,
with
the
bootstrap
cluster
and
the
workload
cluster,
we
already
had
like
a
inception
kind
of
a
situation
now?
How
are
we
going
to
handle
like
hcd
cluster
like
would
cluster
api?
Is
the
goal
for
cluster
api
to
provision
the
infrastructure?
D
So what I was thinking is — it's just the way Cluster API uses kubeadm: kubeadm will bootstrap the node to install the Kubernetes components, and infrastructure providers will actually provision the infrastructure, the VMs. So an etcdadm provider — whether bootstrap or anything else, I'm not sure at this point — could be used in the same way: etcdadm is the component that is managing the etcd.
D
I
mean
it's
easy
fcd
adm
will
actually
manage
that
siri
components
and
infrastructure
providers
can
like
provision
the
machines.
Actually,
I'm
not
sure
if
I
understand
your
question
right,
so
why
would
hcd
adm
be
like?
Why
would
these
be
nodes
considered
as
an
external
entity
to
cluster.
L
I guess, like — we have the control plane controller; we would then have an etcd controller, which would call into etcdadm, I think, if we are wanting the Cluster API controllers also to spin up the infrastructure that becomes the etcd cluster.
L
The
reason
I
say
I
bring
this
up
is
because
it
it
will
become
like
a
chicken
and
neck
problem,
because
you
need
that
cd
cluster
to
provision
the
kubernetes
cluster,
and
now
you
will
the
bootstrap
cluster,
that
we
have
will
not
only
bootstrap
the
workload
cluster,
but
it
will
also
bootstrap
a
external
lcd
cluster.
B
So
folks,
we
have
only
eight
minutes
and
there's
like
like
a
four
or
three
other
discussion
topics.
So
is
it
okay?
If
we
move
this
to
slack
and
sorry,
it's
hard
to
interrupt,
yeah,
sure
yeah,
that's.
L
Oh
yeah,
so
for
I
was
trying
to
run
the
pr
block
blocking
e2e
tests
on
I
first
I
started
trying
to
run
all
the
tests
on
my
dev
machine
and
I
was
not
getting
any
of
them
to
pass,
and
then
I
just
started
the
started
with
the
pr
blocking
e2e
tests,
and
I
saw
some
failures
and
then,
after
restarting
docker
from
time
to
time,
they
I
got
them
to
pass.
L
So
I'm
trying
to
see
if
there
is
some
weird
docker
setup
that
I
have
on
my
machine
or
is
it
something
that
everybody
else
is
seeing
and
on?
I
I
also
was
talking
to
sadef
about
it
and
she
was
also
running
into
similar
issues.
The
other
question
on
the
same
lines
that
I
have
is
I'm
assuming
these
e2e
tests
are
run
periodically
somewhere.
Is
there
a
way
that
I
or
anybody
else
can
go?
Look
at
those
runs.
E
We
can
chat
offline
about
this,
but
the
the
the
quick
answer
is
the
currently
the
our
end-to-end
test
fit
is
so
big
that
it
is
almost
impossible
to
run
everything
in
one
run
locally,
because
it
is
to
resource
figure.
Second,
is
is
that
unfortunately,
docker
uses
a
lot
of
these
car
and
to
a
desk
test
strates,
and
it
happens
that
you
have
to
clean
up
the
docker
locally
for
with
regard
to
the
end-to-end
test.
E
You
you
test
are
in
in
the
test
grid
you
can,
there
are
all
the
logs
or
the
artifacts,
I
I
can
show
you
this
as
well,
and
but
you
can
also
run
all
the
end-to-end
tests
on
a
pr.
There
is
a
test
target
that
that
you
can
use.
Everything
is
documented
in
the
in
the
cluster
api
book
under
testing,
but
please
feel
free
to
reach
out
to
me,
and
I
can
give
you
all
the
pointers.
N
Yeah,
I'm
also
going
to
be
super
fast,
so
we
at
giant
swarm
or
I
have
been
a
lot
working
on
this.
We
have
implemented
a
full
poc
for
running
multiple
sets
of
copy
controllers
in
a
single
management
cluster,
so
including
web
hooks
conversion
web
hook
or
all
the
jam
completely
like
separated
with
version
labels
or
like
with
labeling,
and
I
would
I
can.
N
I
would
be
interested
in
showing
this
in
a
demo
if
anyone's
interested
in
seeing
that
please
ping
me
or
talk
to
me
if
you
are
interested
in
how
we
currently
do
it.
Obviously
they're
going
to
be
like
little
bit
things
that
a
little
bit
hacky,
because
the
poc
right,
it's
not
going
to
be
like
the
the
bee's
knees,
but
it
generally
works,
and
so
far
I'm
still
working
on
like
small
stuff,
but
the
general
concept
seems
to
work
so
yeah.
Thank
you
reach
out.
B
I definitely want to see a demo — and probably a follow-up proposal for the operator, if, you know, you're interested in pushing it.
J
Yeah, so, real quick, for the next item: we're deferring the node bootstrapper for post-v1alpha4. We decided, at least at first, to narrow down the scope of the bootstrapping space to just solving the security aspect, which is the node attestation proposal, and this is—
J
Us
to
use
like
proper
attestation
and
opt
out
of
cube
adm
tokens
nadir
did
the
first
pass
to
address
the
first
wave.
The
second
wave
of
comments
yeah
feel
free
to
add
any
other
comments
in
there
and
we
can
follow
up.
I
think.
B
Sounds
good,
thank
you.
Actually,
this
brings
a
good
point
like
given
that
we're
probably
gonna
stop
with
the
alpha
four,
as,
like
the
last
alpha
release:
zero
five
zero.
Six,
all
the
zero
x
releases,
like
the
minor
ones
like
be
based
on
alpha
four
until
we
get
to
v1
and
possibly
by
october.
B
Final thoughts? All right — I created the agenda for next week. Jason also has the Tinkerbell CAPI integration demo, which is really exciting. So, Jason, if you can add that one in there, that would be awesome. And that's all.