From YouTube: SIG Cluster Lifecycle - Cluster API 21-04-28
A: Hello, everyone, and welcome to the April 28th, 2021 Cluster API office hours meeting. Just as a reminder, we have a meeting etiquette: please use the raise-hand feature, which you can find under Reactions at the bottom of your Zoom panel, and if you have any topics, feel free to add them to the agenda.
A: If, for any reason, you can't access the agenda, please join this mailing list. Access might not be immediate, but within a couple of hours you should get it. If you have any topics that you can't add, please put them in chat and I'll add them for you. All right.
Let's get started. We don't have any PSAs for today. Let's see, is there anything we want to call out? Fabrizio, folks here? I think everything is on track, more or less.
A: Yeah, I see most of them merged. The kubelet authentication plugin should be almost there; I was looking at it this morning. If folks have some time to also review it, please do. Yassine?
C: Yeah, so we did a pass together, Fabrizio and me. The proposal should be ready to merge.
A: Okay, I'll take a look, maybe Friday; I probably won't be able to do it today, but if other folks also have some time to take a final look... yeah, I thought it was almost ready to be merged, last time I checked. And then, for ClusterClass: I thought we had a topic to discuss from some folks here, so let's do that later, but the proposal is in progress. Sagar is actually working on it, and we should be able to publish it maybe next week.
A: We'll see it later; if there are no breaking changes, it could maybe be merged in 0.4, but it seems like there's no progress on that right now. Opt-in autoscaling from zero: Mike, are you here?
A: Okay, any other comments on the open proposals that we have?
A: For release blocking: we have 14 open issues marked release-blocking; these should move.
A: Okay, actually, why don't we just look at these real quick. A different name for MachineHealthCheck maxUnhealthy?
A: Daniel, I know you opened this issue. Do we have any way to do this before 0.4? It would be great to have someone assigned to it.
A: I don't know if we're going to tackle this by then. These should actually be safe to change, Fabrizio, given that we are doing our own kubeadm types. Is it something that we can tackle?
A: Okay, for KCP specifically this should be a non-issue, because we don't compare the raw JSON anymore like we used to; we're comparing each field or set of fields. But I want to make sure, because if a field is a string, and then it becomes a pointer and is nil, that could cause a control plane rollout, and we probably don't want that. I'm also fine with just keeping this the way it is, to be honest, because it's not a huge issue.
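A minimal sketch of the comparison concern just described, with a hypothetical helper (not the actual KCP code): if a nil pointer is treated as equal to the empty string, a field changing type from string to *string does not, by itself, read as a spec change and trigger a rollout.

```go
package main

import "fmt"

// equalOptionalString compares an old plain-string value against a new
// optional pointer value, treating nil as equivalent to "" so that a
// string -> *string type change alone does not look like a spec change.
func equalOptionalString(oldVal string, newVal *string) bool {
	if newVal == nil {
		return oldVal == ""
	}
	return oldVal == *newVal
}

func main() {
	fmt.Println(equalOptionalString("", nil))   // true: no rollout needed
	fmt.Println(equalOptionalString("v1", nil)) // false: a real change
}
```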
A: This is in progress as well; we triaged it yesterday, right, for the clusterctl move. I think Naadir has a PR for the multi-tenancy expectations.
A: Okay, refactor the config subcommand. I thought there was already a PR open for this.
A: Okay, and then, oh yeah: David, should we pull the plug on this? For folks that don't know: MachinePool right now is in a different, experimental API group, and unfortunately we did this before realizing that we shouldn't have, pretty much. MachinePool is meant to be in cluster.x-k8s.io, which is our main API group. It is still an experiment, but the API group should remain the same, because moving from one to the other is an insane amount of effort. Thoughts?
D: My opinion was that, since this is experimental and there are no guarantees on this API, we should not do the extra work to make it backwards compatible. But there was a little bit of pushback on that; that's why this is kind of stagnating. I think we should just pull the plug and rip off the band-aid; the earlier the better. And Jason is asking: can we not just provide a migration tool? We could; that's just work, and we don't have anyone who has stepped forward to do it.
C: So I guess it depends on what we want to do. If we don't have anyone working on it and we're not stuck doing it ourselves, we can have some documentation that says, basically, what you need to apply; or we can provide a command that dumps all of the MachinePools and reapplies them through the new API group; or, if we have time, we could automate this. But again, yeah, I agree that this is dependent on that.
A: Yeah, so MachinePool users are AWS, Azure, and maybe someone else that's doing this out of band that we don't know about. Was there any other provider that I'm not thinking about?
A: Okay, so if we have mostly plus-ones from the infrastructure providers, I'd feel a little bit better about just doing it. And yes, you mentioned the same thing: you could either just provide a simple bash script with some sed in it that replaces the group and say: hey, if you do have this, you know it's an experiment, and this is a quick way to recreate your stuff.
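A hedged sketch of that dump-and-reapply idea, written in Go with the dynamic client rather than bash and sed. The group/version names (exp.cluster.x-k8s.io/v1alpha3 to cluster.x-k8s.io/v1alpha4) are assumptions for illustration, and status is deliberately not carried over, which is exactly the caveat raised next.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed group/version names; adjust to the real ones.
	oldGVR := schema.GroupVersionResource{Group: "exp.cluster.x-k8s.io", Version: "v1alpha3", Resource: "machinepools"}
	newGVR := schema.GroupVersionResource{Group: "cluster.x-k8s.io", Version: "v1alpha4", Resource: "machinepools"}

	list, err := dyn.Resource(oldGVR).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, mp := range list.Items {
		obj := &unstructured.Unstructured{Object: map[string]interface{}{
			"apiVersion": newGVR.Group + "/" + newGVR.Version,
			"kind":       "MachinePool",
			"metadata": map[string]interface{}{
				"name":      mp.GetName(),
				"namespace": mp.GetNamespace(),
			},
			// status is intentionally NOT copied: it cannot simply be
			// applied, the caveat raised in the meeting.
			"spec": mp.Object["spec"],
		}}
		obj.SetLabels(mp.GetLabels())
		if _, err := dyn.Resource(newGVR).Namespace(mp.GetNamespace()).Create(context.TODO(), obj, metav1.CreateOptions{}); err != nil {
			fmt.Println("create failed:", mp.GetName(), err)
		}
	}
}
```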
A: But that also has issues, because, I don't know, kubectl cannot recreate status, which is going to be a problem, and then I don't know if the implementations of MachinePool are going to re-adopt all the machines that have already been created; it depends. So I say: let's just do it. Marcel?
H: So, really quickly, for my understanding: if we don't give a transition path, anyone upgrading from, basically, 0.3 to 0.4 would be affected? Or from 0.4 to 0.5?
A: 0.3 to 0.4. So, MachinePool is an experiment, so it does not have any compatibility promises, and it's an alpha experiment too, so it could be breaking in any kind of way. What would happen when you upgrade to 0.4 is that your MachinePools wouldn't be reconciled anymore.
C: Yeah, one last thing: can we also make sure to send an email to the mailing list, to see whether anyone in the world is using this or not?
A: Jack?

E: Hey, yeah. So, going forward, is it possible we can document this? I know, I guess it's assumed that anything that's an experiment has no forwards-compatibility guarantee or upgrade path, but the fact that there was a debate about whether we should have that path means it sounds like a gray area. So I'm just wondering if we can define what this should be for future experiments.
A: I think we just don't want to break people; that's what we try to do best. In any other case, we usually try really hard to make sure that there is either a conversion webhook or a migration path. But for experiments: we actually introduced experiments for this exact reason, so that if we do have to make a breaking change, it happens in an experiment.
A: It's also behind a feature gate, so you really have to go in and enable that feature gate yourself if you want to use MachinePool today.
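For illustration, a minimal sketch of that feature-gate pattern using k8s.io/component-base/featuregate; the gate name and defaults below are assumptions, not Cluster API's actual feature package.

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// MachinePool is the (assumed) gate guarding the experiment.
const MachinePool featuregate.Feature = "MachinePool"

var gates featuregate.MutableFeatureGate = featuregate.NewFeatureGate()

func init() {
	_ = gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		// Alpha and off by default: users must opt in explicitly.
		MachinePool: {Default: false, PreRelease: featuregate.Alpha},
	})
}

func main() {
	// A controller would skip reconciling MachinePool objects entirely
	// unless the gate has been enabled.
	fmt.Println("MachinePool enabled:", gates.Enabled(MachinePool))
}
```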
A: The link to the compatibility notice was posted; it literally just says that. But yeah, I think we should keep this in place for other experiments as well, because one other thing the experiment status gives us, for example: let's say someone adds a feature, but the feature never gets promoted. We want to make sure we can drop that feature at some point if it never gets updated; there are no guarantees.
A: What else have we got? Add conditions to MachineDeployment objects. Why is this release blocking?
D: Yeah, so actually, I had opened that one, and then we had agreed that it would be part of the load balancer proposal, so I kind of dropped it and associated it with that. But now that the load balancer proposal is not happening, I think we should consider making this change. It's a smaller change, still a breaking change, but I think not doing anything is worse. Yeah, Jason, go ahead.
I: Yeah, so I think the main challenge here is that, in some cases, it can be used for storing data that is expected to be persistent, and we just need to make sure that, if we do go ahead and move it to status, any providers that are doing that would be able to recreate that control plane endpoint in the case that status is lost.
C: Okay, yeah. And I guess something that we need to add for this is that any backup/restore tools for Kubernetes resources might have to deal differently with Cluster API resources, meaning that if we move this to status, they need to add either annotations or something else to persist the data before restoring.
A: Okay, is it okay moving this away from the milestone for now?
A: This one is in progress, and yeah, the PR should be almost there.
So, should we drop support for kubeadm v1beta1 types? Plus one on this one; it seems like we mostly have agreement, so we'll probably do it.
A: And then, the KubeadmControlPlane rename: I have a PR open for this. It's mostly for clarity: rolling out doesn't necessarily mean that we're upgrading. And then hyperkube image support: I saw that we already have someone assigned to it, so that should be good. Okay.
This is looking good; let's keep checking it every once in a while. All right, so we're halfway through, so let's go on to the demo. We have a demo; let me stop sharing.
F: No, I can only see your browser. Okay, I was trying to share the entire screen, but...
F: Okay, cool. Yeah, so I just wanted to demo this PoC that I've been working on. What I have added so far is an etcdadm bootstrap provider, which mimics the kubeadm bootstrap provider.
F: Then there's this etcdadm controller, which will manage the lifecycle of this etcd cluster, and again it very closely mimics the kubeadm control plane controller. So I'll just get started. I have linked all the code changes that I had to do, including some of the changes that I made in Cluster API itself; I've linked them all in the Google doc.
F: Now, this etcdadm controller will only manage the etcd cluster, and, if I just switch windows... okay, I don't know if this second window is getting shared or not, so I won't switch to it right now. But the etcdadm bootstrap provider will manage the EtcdadmConfig resource. So, coming back to the EtcdadmCluster: it has replicas, it will create the config resources, and it also needs a reference to an infrastructure template. I'm using the Docker machine... sorry, the Docker infra provider.
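Roughly the shape of the spec being walked through, as a hedged Go sketch; the field and type names below are inferred from the narration and are not the final API.

```go
package v1alpha4 // hypothetical API package name

import (
	corev1 "k8s.io/api/core/v1"
)

// EtcdadmConfigSpec stands in for the bootstrap provider's config, which
// renders the cloud-init that runs `etcdadm init` / `etcdadm join`.
type EtcdadmConfigSpec struct{}

// EtcdadmClusterSpec drives the etcdadm controller, mirroring how the
// KubeadmControlPlane spec drives KCP.
type EtcdadmClusterSpec struct {
	// Replicas is the desired number of etcd members.
	Replicas *int32 `json:"replicas,omitempty"`

	// InfrastructureTemplate references the infra machine template used
	// to create one machine per member (a DockerMachineTemplate here).
	InfrastructureTemplate corev1.ObjectReference `json:"infrastructureTemplate"`

	// EtcdadmConfigSpec is the per-member bootstrap configuration.
	EtcdadmConfigSpec EtcdadmConfigSpec `json:"etcdadmConfigSpec,omitempty"`
}
```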
F: So this is the DockerMachineTemplate for it. Then, for the Cluster resource, all the fields are the same; none of the existing fields are changed. But, as we discussed in an earlier meeting, we need to add a field, first of all, to specify that we are using an external etcd cluster that will be provisioned by CAPI, and also to add a new field on the Cluster that the infra providers will know about.
F: I mean, basically, to establish a contract. So this new field would be managedExternalEtcdRef, and it will point to the EtcdadmCluster resource that's defined at the top of this YAML file. Then comes the KubeadmControlPlane; again, over here, everything is the same. In the KubeadmConfigSpec, we had decided that we can try using the existing external etcd endpoints field.
F: So that's what I'm doing: under etcd I've selected the external option, and, to begin with, endpoints is nothing, it's an empty slice, because the CAPI cluster controller is supposed to populate it. Now, I know that the validating webhook right now does not allow you to update this field.
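A sketch of the two pieces of the contract just described. The managedExternalEtcdRef name comes from the demo (not final); the external-etcd block mirrors the kubeadm v1beta1 layout, with Endpoints starting empty for the controller to populate.

```go
package v1alpha4 // hypothetical API package name

import (
	corev1 "k8s.io/api/core/v1"
)

type ClusterSpec struct {
	// ...existing Cluster fields unchanged...

	// ManagedExternalEtcdRef points at the EtcdadmCluster that CAPI
	// provisions and manages for this cluster.
	ManagedExternalEtcdRef *corev1.ObjectReference `json:"managedExternalEtcdRef,omitempty"`
}

// ExternalEtcd mirrors the kubeadm v1beta1 external-etcd block referenced
// in the demo; Endpoints begins as an empty slice and is filled in by the
// cluster controller once the etcd members are up.
type ExternalEtcd struct {
	Endpoints []string `json:"endpoints"`
	CAFile    string   `json:"caFile"`
	CertFile  string   `json:"certFile"`
	KeyFile   string   `json:"keyFile"`
}
```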
F: So one of the changes I had to make is in the validating webhook as well, to allow an update to this field, and I understand that this would be a breaking change, so we need to discuss whether it should be allowed or not, or whether we should add a new field; I will bring it up again at the end of this demo. The rest of the manifest is the same as what you would usually use for any CAPI cluster, and this window is just to check logs.
F: So, okay, I just wanted to say that I am not a big fan of live demos, and especially of PoC code, so if something goes wrong, please bear with me.
F: Also, running the Docker infra provider really slows down my laptop, so if something is slow, again, I apologize. As you can see, it's creating the first etcd machine, just like the kubeadm control plane would.
F: It first runs initialization, which will create just one machine, and that machine will run the etcdadm init command given in the cloud-init script. Once that machine is ready, it scales up the etcd cluster by creating the remaining machines. In the meanwhile, we don't want the kubeadm control plane provisioning to start, so for that there's one change that I made on the control plane side.
F: I think it will take one or two minutes; that's how long it took when I was trying this out, so I'm hoping it doesn't take longer than that.
F: The etcd cluster's first member seems to be up.
F: Oh, so this etcd cluster: basically, at the end of every reconcile loop it makes a call to update status, and in that it keeps checking. It gets all the machines that were created for this etcd cluster, and for each machine it makes a health check by calling the /health endpoint, on which the etcd member returns a response indicating whether it's ready or not.
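A minimal sketch of that per-member health check, assuming etcd's /health endpoint returning a JSON body with a health field; endpoint construction and TLS client setup are omitted here.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type healthResponse struct {
	Health string `json:"health"`
}

// memberHealthy calls <endpoint>/health and reports whether the member
// says it is healthy.
func memberHealthy(ctx context.Context, client *http.Client, endpoint string) (bool, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint+"/health", nil)
	if err != nil {
		return false, err
	}
	resp, err := client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var h healthResponse
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		return false, err
	}
	return h.Health == "true", nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Hypothetical member client URL.
	ok, err := memberHealthy(ctx, http.DefaultClient, "https://10.0.0.10:2379")
	fmt.Println(ok, err)
}
```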
F: Okay, so again, I hope you can see the browser. I don't know why it ran the health check so many times, but the idea is that it should make a call to the /health endpoint, and because it did, I'm going to check the EtcdadmCluster resource again. It populates the status: on the status there's a field called endpoint, and it just updates this. That will then get picked up by the cluster controller in CAPI.
F: Okay, yeah... I'm not sure; maybe it would be simplified if we added it there. I just thought that in the last meeting we decided that, for the cluster controller to establish a contract with all infra providers, it might be better to add that field directly on the Cluster spec. But if that's not the case, then I'm completely fine with trying out adding it on the KubeadmControlPlane resource.
F: I just want to run one final test, to show that this control plane cluster is using the etcd cluster that just came up; for that, I'm just going to create a namespace in this workload cluster.
F: Okay, so, as you can see, this command is getting all the resources that contain "etcd-demo": it has the namespace that was created for it, the namespace that I created, and the secret, service account, and config map that were created for it. And, let's see, this should also be ready right now... okay, yeah. Okay, so that is it for the demo: the kubeadm control plane cluster...
F: Sorry, the workload cluster is using the etcd cluster that just got provisioned. And yeah, right now this is just a PoC, so it only does creation; it doesn't handle upgrades or scaling down or any of that. But I just wanted to show this so that we can see what changes might need to take place, and, of course, there might be better ways of doing those things, so we can discuss that as well.
F: Does anyone have any questions, or...?
F: I also wanted to ask: right now I'm just working with the IP addresses of the etcd members, and I know we had discussed that we should have a static endpoint, whether that's a load balancer or configured DNS records. But because, in the last meeting, we decided that it makes sense to use the load balancer provider for this, that provider could also have an implementation for DNS records.
F: Can the first version of this use only IP addresses, and then later on, when the load balancer provider becomes available, we switch to using a static endpoint?
A: I think so. We'll have to see how we get around the slice-versus-one-endpoint question, because what if there's only one, and we assume that's a static endpoint? It seems kind of backwards. David, then Fabrizio, and then Jason. Even with DNS: are you going to have one endpoint, or are you going to have multiple DNS entries?
F: Actually, after discussing with a few people, I was thinking multiple DNS entries, one record per etcd member. So, if I just talk about, let's say, EC2: there would be one private hosted zone. Something like this is what I had in mind: the user will create a private hosted zone and give it to the etcd cluster, and then whichever DNS controller we use will create A records in that hosted zone, and we pass those in as extra SANs.
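A hedged sketch of that naming scheme: one A record per member in a user-provided private hosted zone, with the same names also added as SANs on the member's serving certificate. The name pattern is hypothetical.

```go
package main

import (
	"crypto/x509"
	"fmt"
)

// memberDNSNames builds one DNS name per etcd member inside the
// user-provided hosted zone (pattern is an assumption for illustration).
func memberDNSNames(clusterName, zone string, replicas int) []string {
	names := make([]string, 0, replicas)
	for i := 0; i < replicas; i++ {
		// e.g. etcd-demo-0.internal.example.com
		names = append(names, fmt.Sprintf("%s-%d.%s", clusterName, i, zone))
	}
	return names
}

func main() {
	sans := memberDNSNames("etcd-demo", "internal.example.com", 3)
	// The extra SANs would end up on the serving cert template.
	tmpl := &x509.Certificate{DNSNames: sans}
	fmt.Println(tmpl.DNSNames)
}
```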
B: Yeah, first of all, thank you for the great demo; it's just great to see things moving so fast here. I think that the area we now have to focus on is the contract between this external etcd provider and the rest of Cluster API, more or less.
B: Also following what Jason said: because now I see that there are many assumptions that are KCP-specific, and maybe someone is using another control plane provider instead. So this is the area where we have to work together and help in shaping the proposal. But it is just great. With regards to the topic of using one DNS entry or many...
B: I think one topic that we have to keep in mind is what happens when the list of DNS entries gets updated, and what happens on the control plane. If changing the list of DNS entries triggers a control plane rollout, that is not good. So we have to figure out, basically, the contract between the control plane manager and the etcd manager, to make sure they don't trigger a rollout when it is not necessary.
I: Yeah, I think, following up on what Fabrizio said, the big challenge is going from a list of endpoints to a static endpoint. You know, one: how do we roll out that change to the control plane?
I: The other concern that I would have is: how do we roll out that change to the actual etcd provider as well? Because it's going to impact the certificates, and what's accepted on the client side for the certificates, and that sort of thing.
F: Yeah, I hadn't considered that. So, just to be sure, this is the case where, right now, we start with IP addresses and later on we add support for static endpoints; what will happen at that time, if we are changing the endpoints in an existing cluster? Will that trigger the control plane cluster to reconcile? Is that the concern?
F: Okay, yeah, I think so. I was thinking that we will also need a follow-up meeting to discuss this concern about the certs and switching to static endpoints, and also the contract: like whether this managedExternalEtcdRef field should be on the KubeadmControlPlane spec, which I think you and Fabrizio are both saying. And another thing I wanted to ask: is utilizing the current external etcd endpoints field correct, given that it involves a change in the validating webhook?
F: I have updated the proposal doc with all of these things, so please review it, and I was thinking it would be great if we could have another call dedicated to this topic, like the one we had scheduled. I can schedule one; I can send out an invite.
G: Yeah, hi. So there's an issue linked that came up in conversation today in Slack, in the vSphere channel. I think Jacob is about to speak to the requirement, but essentially one wants to be able to place an annotation on the infra machine, in this case particularly the VSphereMachine, and currently MachineDeployment will copy labels, but not annotations.
G: If infrastructure providers had included ObjectMeta in their templates, then that would have been copied anyway, except I checked CAPA, CAPZ, and CAPV, and none of them did it; none of them have done that.
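The pattern being referenced, sketched with a hypothetical provider type: embedding ObjectMeta in the template's resource section is what would let labels and annotations set there be carried onto generated machines.

```go
package v1alpha4 // hypothetical provider API package

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FooMachineSpec stands in for a provider's machine spec.
type FooMachineSpec struct{}

// FooMachineTemplateResource describes the machines created from a
// template. Labels and annotations set in ObjectMeta here could be
// propagated to each generated FooMachine, had providers included it.
type FooMachineTemplateResource struct {
	ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec       FooMachineSpec    `json:"spec"`
}
```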
A: Yeah, we were talking on Slack about this particular issue. I was tracking down the MachineDeployment code; they all use the template generation, and right now we are very much merging the labels, passing them down, but the same does not happen for annotations, though we do force some annotations, like the cloned-from-template annotations.
A: I think it would be okay, in my opinion, to add annotations here as well, so that we can pass them down, given that you can specify those annotations in the MachineSet template for machines. And yeah, we could just do that. This is probably going to be just a behavioral change for 0.4, and it will only be propagated once a new machine is rolled out.
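A sketch of the propagation change being agreed on: when generating an infra machine from a template, merge annotations the same way labels already are merged. The helper and example annotation values are illustrative, not the actual CAPI code.

```go
package main

import "fmt"

// mergeMap returns a copy of dst with src merged in, src winning on
// key conflicts.
func mergeMap(dst, src map[string]string) map[string]string {
	out := make(map[string]string, len(dst)+len(src))
	for k, v := range dst {
		out[k] = v
	}
	for k, v := range src {
		out[k] = v
	}
	return out
}

func main() {
	// Annotations CAPI already forces on cloned objects.
	forced := map[string]string{"cluster.x-k8s.io/cloned-from-name": "my-template"}
	// User-set annotations from the MachineSet's machine template.
	fromTemplate := map[string]string{"example.com/placement": "rack-1"}
	fmt.Println(mergeMap(forced, fromTemplate))
}
```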
I: Yeah, I think the only concern that I might have here is: would we potentially be copying too many annotations, or annotations that wouldn't make sense on the infrastructure resource?
A: Cool. In this case, I'm going to make this release blocking; given that it's a behavioral change, we should probably have it in 0.4.0.
A: Cool. We have only seven minutes left, and we have a couple more discussions: Jacob, Max, and Jai about the proposed ClusterClass.
J: On a first look, the ClusterClasses are leaving some things to be desired for providers that are not integrating IPAM and so forth, for making them work, or making them work nicely, with the metal providers and CAPV.
J: For that matter, it might be useful to basically make the classes a bit broader and pass a lot more metadata down from the actual templates defined there to the instance templates, or to have more ability to overwrite things, because otherwise you basically end up having to have one unique set of templates per cluster anyway, and you can't reuse anything by using ClusterClasses.
J: Every cluster has its own network, basically, and there is no automatic plugging-in, and you need to be aware of the layer-2 networking or the layer-3 networking, depending on what vSphere environment you run. So that always changes the machine template.
A: Yeah, so in the currently proposed implementation of ClusterClass, what you're describing is kind of like one of the ten steps after we make this a little bit more stable, and the reason for that is that overriding, and having a chain of layers that override each other, is usually very confusing for users, especially when we're introducing a new feature.
A: But I totally hear you that, to make this more useful, we probably want to make sure that those templates can be reused in a number of places. Right now, the only things that, if I remember correctly, folks are thinking of adding as overrides are labels, and maybe node labels later on as well.
C: Yeah, so for CAPV, and vSphere in general, with the move to implementing failure domains, a lot of this is going to move into our controllers.
A: Does that make sense, Max? I think this is something we can definitely do over time, and we'd definitely welcome it with open arms; we would like to talk more about how we make this much better. But for the initial implementation, we're trying to trim the use cases down as much as possible, to the minimum set, to get this started.
J: Okay, we'll make sure that we write up what we have and include that, so that we can make sure that the initial implementation isn't blocking in that regard and doesn't need to be thrown out. But otherwise: yes, Yassine, I will talk to you tomorrow about the failure domains. Sounds good.
A: Well, on to the only other topic we have today.
I: Yeah, Jason here; this should be pretty quick. The cloud provider formerly known as Packet is now known as Equinix Metal. As a result, we're trying to finish up the rebranding around that, and one of the things we're looking to do this upcoming quarter is a rename for cluster-api-provider-packet.
I: Since we already talked about migration issues, and changing API groups and type names is not a simple thing, we want to go ahead and request a net-new repository, so that we can have a clean-cut migration and eventually archive the existing cluster-api-provider-packet. I basically wanted to give folks a heads-up that we're going to be putting in that request and trying to start that process.
I: There are a couple of different reasons why we want to start over. One is that we want to be able to have both running side by side quite easily; so, for anybody that's currently accessing things, we don't have to worry about whether, if we rename the repository, all the GitHub aliases will continue to work or not.
I: The other issue is that we wanted to make it clear to folks that may just be pulling the manifests and applying them that this is a non-trivial change, and it's going to require an external migration in addition to switching. So that's kind of the impetus for it, and while we're there, we can also clean up some of the tech debt that we have, with the net-new repository.
I: Yeah, exactly, and we'll also continue supporting the legacy stuff as long as folks still need it. We have some customers that are running really large clusters and maxing out capacity for specific instance sizes in facilities, and things like that.
K: Yeah, I think I'm fine with that. You know, one minor concern is whether we'd keep the... sorry, the Packet provider, for a very long period of time. If we can migrate users fast enough, we can archive the repository.
I: I would expect that we should be able to get everybody migrated over by the end of the year, yeah, I think.