From YouTube: SIG Cluster Lifecycle - Cluster API 22-06-08
C: I could also say hi, hello. My name is Leonard Chang; I'm working mainly on the Metal3 provider and also on the OpenStack provider. I'm basically here today because of a discussion around mixing providers, having hybrid clusters where maybe the control plane is from one provider and the workers are from another. But I saw that we may not have that demo today, so, well, I'll listen in anyway.
A: Going once, twice, three times. All right, so for the open proposal readout: is there actually anything we should talk about? The only thing I wanted to note is that the managed external etcd provider proposal has been stale for a while. I've tried to ping Rajashree on it, but I haven't gotten any updates. If there are no updates, we should probably close it for now, unless somebody else wants to take it. Go ahead.
D: Hello, can you hear me? I don't know if my mic is acting weird.

A: I can hear you now.

D: Yes, okay. Yeah, I was going to add: I'm in contact with Rajashree, and we're going to be working on this together. She's a little bit busy right now because she switched jobs, but I'm going to be supporting her on this. I think she's going to update the proposal with some notes she got recently from a talk with Fabrizio.
D: So I hope there will be updates around this soon, and if not, if she doesn't have time, I will probably take over.
A: Sounds great, thanks for the update. So this one is yours, right? Yep.
A: Three times, all right. My only status-tracker item would be the node label sync. Oh, there we go. Okay, perfect. It seems all the points have been addressed. So, do we have all maintainers on the call? If Alberto, Cecile, Fabrizio, Stefan, and whoever else wants to chime in could give their latest lgtm on this, we can start lazy consensus from today, so that would be the 15th.
A: Okay, so, Cecile, I want to send this one to you; that's in your backlog, with the consensus expiring on 06-15.
F: Yeah, I think we had it open for a while, and we were at the point where we essentially said we had two open issues. I didn't take a look after the last update, but I think we solved both; we had consensus on both. So it's just a matter of whether it was written into the document correctly. I kind of trusted that, but I'll take another look, of course. I assume that everything should be good now.
A: Okay, I think we need to pick one of these two options and also say which one we're picking; that would be kind of the last comment I'll make on this proposal. Or, actually, this is in the alternatives section.
A: Right, awesome. So let's go ahead with the discussion topics. Christian, you have the first one; go ahead.
G: Yeah, I just want to give a short update on the metrics work. Last week, on Friday, a new kube-state-metrics release was announced, and it has a fun new feature which allows creating a configuration file to define metrics for your custom resource definitions.
G: So we have to take a look at that feature, and I don't think it makes sense to keep maintaining the custom binary in the Cluster API repository anymore. I think it's much easier for everyone to just have a config and deploy upstream kube-state-metrics.
G: Then there are features that are incomplete, in the sense of supporting all the kinds of metrics we currently want, but I will open issues there, or an issue at the kube-state-metrics project, to start a discussion on that, and then we'll need to take a look and see how to proceed.
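The configuration-file feature mentioned here is kube-state-metrics' custom-resource state metrics. As a rough sketch only (the metric name and label path below are made up for illustration, and the exact schema is defined by the kube-state-metrics documentation), a config for a Cluster API Cluster could look something like:

```yaml
# Illustrative kube-state-metrics custom-resource config (not an agreed design).
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: cluster.x-k8s.io
        version: v1beta1
        kind: Cluster
      metrics:
        # Hypothetical metric name; emits one info series per Cluster object.
        - name: capi_cluster_info
          help: "Information about a Cluster API cluster"
          each:
            type: Info
            info:
              labelsFromPath:
                name: [metadata, name]
```

This would be mounted into an upstream kube-state-metrics deployment instead of building a Cluster API specific binary.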
A: Going once, twice, three times. All right, thanks for that, and Jakob, I think you're next.
H: Yes, so this is again about replacing corev1.ObjectReference. There are several different decisions we have to make regarding that, but one of them is whether we want to implement custom types, or whether we want to rely, or try to rely, solely on core v1 types.
H: There are arguments in both directions. I'm in favor of creating custom types, but I thought maybe it makes sense to discuss this here a little bit, especially what the issues with custom types and their maintenance are, because I don't really understand the harsh, or not harsh but strict, opinion to not implement custom types and to only use, or prefer, the corev1 types for this as far as possible.
I: Yeah, ideally, if we're implementing something custom or very specific, I'd be worried about that, and at the least I would want someone from API Machinery to take a look at it, because, from where it stands, I think that using the already existing type would fulfill our use case.
A: So, to build on that a little bit: the issue is that the core types have a lot more fields than we actually use in a lot of places. For example, we have a namespace in the reference when you actually cannot cross namespace boundaries at all, so that field becomes redundant. It also means a little more validation and defaulting is needed in the webhooks as well.
A: That said, I think the main reason not to create new types is to not diverge from the Kubernetes ecosystem in general: if there are types that we can use upstream, we should, because this project is a Kubernetes project. At the end of the day, what I do have a hard opinion about is creating one new reference type for each kind of reference.
A: So, as an example: you have an infrastructure reference, or you have a bootstrap reference, and they would have different Go types, but the JSON representation would be exactly the same. I think that's a little confusing, and it also proliferates types that we'll have to maintain in general. So those are kind of my two thoughts; I hope that was clear in my comment.
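A minimal sketch of the single shared reference type being argued for here. The type name and exact field set are assumptions for illustration, not an agreed design; the point is one Go type whose JSON keys match the corev1.ObjectReference fields Cluster API actually uses:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Reference is a hypothetical single, shared reference type for Cluster API.
// It drops fields the project never uses -- Namespace (references cannot
// cross namespace boundaries), UID, ResourceVersion, FieldPath -- but keeps
// the same JSON keys as the corresponding corev1.ObjectReference fields.
type Reference struct {
	APIVersion string `json:"apiVersion,omitempty"`
	Kind       string `json:"kind,omitempty"`
	Name       string `json:"name,omitempty"`
}

// MarshalRef renders a Reference to its JSON wire form.
func MarshalRef(r Reference) string {
	b, _ := json.Marshal(r)
	return string(b)
}

func main() {
	// The same Go type can serve as the infrastructure reference, the
	// bootstrap reference, and so on, since their JSON shape is identical.
	fmt.Println(MarshalRef(Reference{
		APIVersion: "infrastructure.cluster.x-k8s.io/v1beta1",
		Kind:       "DockerCluster",
		Name:       "my-cluster",
	}))
}
```

Because the JSON keys are unchanged, existing manifests would not need to change.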
A: I don't know if it was, but, in here, personally I would also try to reach out to the API Machinery group to see if they are interested in either improving the core v1 types or adding ones along the lines of these. If we explain the use case, they might be open to additions, I guess.
H: Yes, I guess. I mean, there is also some explanation about how to do custom types, and, I'm not 100% sure, but I think the comment on ObjectReference also says that you should actually create your own local types. One thing you mentioned that wasn't clear to me, I think, is this: if we create custom types, they would always try to be as close to the core v1 types as they can, so from a YAML or JSON perspective
H
They
there
wouldn't
be
any
difference.
Apart
from
a
few
fields
missing,
I
wouldn't
try
to
to
create
something
completely
new,
even
the
if
we
drop.
If
we
decide
to
drop,
that's
another
discussion
if
we
decide
to
drop
the
version
from
the
reference
there
already
is
a
v1
type
that
only
uses
group
and
no
no
api
version.
H
So
those
new
types,
I
think,
are
also
quite
similar
to
what's
there
in
v1
or
kobi
one.
It's
just
well
custom
implementations
of
them
which
allow
us
to
add
methods,
for
example,
which
makes
them
easier
to
use,
etc,
and,
as
I
think
there
are
some
valid
points
from
joel
about
why
we
should
do
it
also
to
have
more
control
over
them
and
so
on,
even
though
upstream
types
never
change,
basically,
especially
those
important
ones.
I'm
not
sure
if
it's
worth.
H
Well,
if
everybody
thinks
that
it's
not
worth
going
through
api
machinery
to
get
new
tribes,
then
that's
probably
the
real.
The
reason
we
don't
have
any
so
maybe
it's
it's
a
good
idea
to
actually
ask
them
what
they
think
about
it
and
whether
they
even
want
to
add
new
types
there.
But
I
from
my
perspective,
it's
not
those
whether
it's
custom
types
or
not
is
not
that
important.
H
As
long
as
json
compatibility,
as
you
said,
is
maintained,
I
think,
from
a
go
coding
perspective
with
today's
tooling.
It
doesn't
really
matter
whether
it's
a
different
type
or
not,
because
it's
easy
enough
to
figure
out
which
one
to
use.
A
Yeah,
at
least
I
think
it's
worth
worth
asking
then
we
you
know
we
can
definitely
make
our
own
decision,
but
we
also
need
to
be.
A
You
know
careful
how
we
introduce
this
change
and
I
think
the
last
thing
is
like
we
cannot
drop
version
in
a
number
of
places
as
well.
So
we
do
need
the
version
to
still
be
there
for
the
api
type,
because
we
deal
with
unstructured
objects
and
trying
to
get
the
crd
every
time
we
have
to
get
a
version.
It's
not
ideal
mike.
J: Yeah, I just wanted to add that if we move away from the core types, it also starts to complicate things like the autoscaler implementation, because we're starting to dig into the object references from that side of things. So if we start to make custom types, I think it's going to complicate some of the controllers and tooling that we're building on top of Cluster API. So maybe just something to be aware of, as another data point here.
A
Yeah,
I
think
the
idea
is
that,
like
you,
would
still
be
able
to
unmarshal
these
things
into
a
core
v1
object.
Reference,
like
you
know,
as
the
json
compa
compatibility
has
to
be
there,
so
the
existing
tooling.
You
know
a
lot
of
these
objects
are
just
like
on
unstructured.
It's
like,
we
don't
know
anything
about
it
and
I
think
the
auto
scaler
does
a
bunch
of
the
same
right.
It.
A
So
it
makes
sense
that,
like
you
know,
when
you
have
a
reference,
you
need
to
unmarshal
into
a
core
v1
reference
unless
again,
like
you,
create
a
different
type,
but
which
kind
of
defeats
the
whole
purpose
of
it.
That's
what
I
was
saying
like
maybe
reaching
out
to
apm
machinery
and
explaining
the
problem
and
say
like
what
do
you
all
think
about
you
know
about
this,
and
we
don't
have
to
change
the
existing
object,
but
we
could
introduce
a
new
one
and
then
migrate
over
time.
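The compatibility argument above can be sketched like this, with a locally defined stand-in for corev1.ObjectReference so the example is dependency-free (both type names and the helper function are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// objectReferenceV1 mirrors the JSON fields of corev1.ObjectReference that
// matter here; it is a local stand-in so this sketch has no k8s.io imports.
type objectReferenceV1 struct {
	Kind       string `json:"kind,omitempty"`
	Namespace  string `json:"namespace,omitempty"`
	Name       string `json:"name,omitempty"`
	APIVersion string `json:"apiVersion,omitempty"`
}

// slimRef is a hypothetical smaller Cluster API reference type.
type slimRef struct {
	APIVersion string `json:"apiVersion,omitempty"`
	Kind       string `json:"kind,omitempty"`
	Name       string `json:"name,omitempty"`
}

// AsCoreV1 round-trips a slimRef through JSON into the corev1 shape, the way
// external tooling (the autoscaler, anything working with unstructured
// objects) would consume it.
func AsCoreV1(r slimRef) objectReferenceV1 {
	b, _ := json.Marshal(r)
	var out objectReferenceV1
	_ = json.Unmarshal(b, &out)
	return out
}

func main() {
	got := AsCoreV1(slimRef{
		APIVersion: "bootstrap.cluster.x-k8s.io/v1beta1",
		Kind:       "KubeadmConfig",
		Name:       "cp-0",
	})
	fmt.Println(got.Kind, got.Name)
}
```

Because the wire format is identical, tooling that decodes into the corev1 shape keeps working; only fields the slim type omits come back empty.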
A
That's
like
a
smaller
local
object
reference
with
a
version.
I
guess
that's
all
we
need
so
more.
We
literally
need
this
and
I
don't
even
know
if
we
actually
use
the
one
with
uids
anywhere.
A
I'm
also
not
sure,
like
I
said
I
don't
know
if
we
actually
need
to
use
that,
and
you
know
that's,
I
don't
think
that
the
uid
is
ever
used
anywhere.
It's
actually
cleaned
up
in
a
bunch
of
places
when,
for
example,
cluster
caro
moo
does
its
thing,
at
least
for
owner
references
and
things
like
that.
A
Oh
okay,
so,
okay,
so
next
steps,
next
steps
discussed.
H
A
bunch
of
us
sure.
A
I
guess
also
side
note
like
something
like
this
would
have
to
go
into
another
revision
of
the
apis,
which
we
have
not
yet
planned
whatsoever,
so
that
also
needs
to
go
hand
in
hand
with
some
planning
on
either
beta2
or
something
like
that.
F
Yep,
that's
more
like
an
fyi
for
yeah,
so
we
started
using
or
we
started
introducing
1.25
tests.
I
guess
so.
What
I'm
aware
of
is
that
in
core
capping,
even
tests
which
upgrades
from
124
to
125.
F
I
know
that
we
see
some
issues
in
cap
c
entwined
test.
I
think
that's
from
the
cloud
provider
azure
side,
because
that's
testing
against,
I
think
kubernetes
master
or
something
and
couple's
program
tool.
So
essentially,
what
was
changed
with
125
is
that
qubit
m
change
the
default
registry
of
kubernetes,
so
the
old
one
is
kate
she's
rio,
the
new
one
is
stretch
street.kxio,
so
starting
with
125
cube,
adm
we'll
choose
another
one
in
general.
That
is
fine.
F
We
just
have
a
few
places
where
we,
let's
say
kind
of
depend
on
baked
in
images
or
we
have
hard
coded
old
registry.
So
I
wouldn't
go
into
too
much
detail.
Essentially
we
have
those
one
issue:
one
pre-request
there,
I'm
just
that.
I
guess
everyone
knows.
If
you
encounter
similar
issues,
just
join
us
on
those
issues.
F: So the action item is probably something for KCP. What they did upstream is essentially this: they changed the default value in kubeadm 1.25, and in a kubeadm upgrade from 1.24 to 1.25 they're changing the registry. I think we have to do something similar in KCP: if the user doesn't set the registry, and we are using the old registry internally, then we have to change the old registry to the new registry when we upgrade from 1.24 to 1.25. That should essentially mean that if the registry is not set in KCP, then we have to update the kubeadm-config ConfigMap and so on.
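The KCP behavior described here could be sketched roughly as follows; the function name is hypothetical, and a real implementation would use a proper semver library rather than hand-parsing versions:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const (
	oldRegistry = "k8s.gcr.io"      // kubeadm default before 1.25
	newRegistry = "registry.k8s.io" // kubeadm default from 1.25 on
)

// minorVersion extracts the minor version from a "v1.25.0"-style string.
// (Hypothetical helper; real code would use a semver library.)
func minorVersion(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

// imageRepositoryFor sketches the idea discussed: if the user did not set an
// image repository, and KCP would otherwise keep defaulting to the old
// registry, switch to the new one when moving to v1.25 or newer.
func imageRepositoryFor(userSet, current, targetVersion string) string {
	if userSet != "" {
		return userSet // always respect an explicit user choice
	}
	if minorVersion(targetVersion) >= 25 && (current == "" || current == oldRegistry) {
		return newRegistry
	}
	if current != "" {
		return current
	}
	return oldRegistry
}

func main() {
	fmt.Println(imageRepositoryFor("", oldRegistry, "v1.25.0")) // registry.k8s.io
	fmt.Println(imageRepositoryFor("", oldRegistry, "v1.24.2")) // k8s.gcr.io
}
```

The computed value would then be written into the kubeadm-config ConfigMap as part of the upgrade.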
F
Current
idea
how
to
solve
that
one
and
the
other
one
is
it's
a
it's
a
different
issue.
It's
just
the
way
we
run
end-to-end
tests
in
couple
kappa
kept
c.
As
far
as
I
know,
because
we
sometimes
download
images
from
somewhere-
and
I
would
say
that's
that's
a
different
problem.
I
would
keep
it
for
now,
but
we
we
can
discuss
it
on
the
pr.
A
Okay
sounds
good,
so
I'll
bring
this
up
again,
but
we
do
need
to
start
like
blocking
upgrades
to
versions
that
are
not
supported
or
never
tested.
A
We
can
start
by
doing
that,
for
example,
in
kcp,
first
and
and
also
in
the
topology
controller
right
away
and
potentially
have
a
way
to
disable
that
check
with
an
annotation.
If
you
need
to.
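The check plus annotation escape hatch suggested here could look roughly like the sketch below; the annotation name and function are hypothetical, and the real design would come out of the issue being proposed:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// skipVersionCheckAnnotation is a hypothetical annotation name; the real one
// would be decided in the proposal/issue discussed above.
const skipVersionCheckAnnotation = "controlplane.cluster.x-k8s.io/skip-version-check"

// minor extracts the minor version from a "v1.25.0"-style string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

// validateUpgrade sketches the check discussed: reject upgrades that skip a
// minor version (an untested, unsupported path), unless the object carries
// an explicit opt-out annotation.
func validateUpgrade(from, to string, annotations map[string]string) error {
	if _, ok := annotations[skipVersionCheckAnnotation]; ok {
		return nil // explicit opt-out, e.g. for our own testing
	}
	if minor(to)-minor(from) > 1 {
		return fmt.Errorf("upgrading from %s to %s skips a minor version; this path is untested", from, to)
	}
	return nil
}

func main() {
	if err := validateUpgrade("v1.21.2", "v1.25.0", nil); err != nil {
		fmt.Println("blocked:", err)
	}
}
```

A webhook in KCP or the topology controller would run this validation on the requested version change.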
A: Yeah, absolutely. We need to be able to disable it for our own testing, but for a user: people could be on a very, very old version and, of course, try to update to 1.25, and they wouldn't know what happened or how. Given that, we should have validation in place there. Do we have an issue about that already?
I
Should
you
see
yeah,
this
is
gonna,
be
especially
more
important
as
we're
moving
towards
the
days
where,
like
we're,
removing
cloud
providers
and
like
moving
to
out
of
three
cpi
so
upgrading,
I
think
I
don't
remember
if
it
was
126
or
something.
I
need
to
check
my
notes,
but
yeah
on
that
version
like
if
we
let
users
upgrade
out
of
the
band,
their
clusters
are
just
gonna
break.
A
Yep
I'll
take
the
the
action
item
to
create
the
issue
not
somebody
else
wants
to,
but
yeah.
We
should
definitely
try
to
put
that
check
in
place.
F
I
would
have
one
additional
thing
that
I
really
should
mention
about
that
registry
change.
So
what
I
changed
now
is:
essentially
they
change
default
value
in
cubed
m
and
when
we're
looking
at
the
endpoints,
let's
say
so,
the
old
in
the
new
one.
Currently,
the
new
registry
url
is
redirecting
to
the
old
one.
F
What
they
will
do
over
time
is
reverse
that
so
that
the
that
the
registry
is
hosted
on
a
new
endpoint
and
the
olds
are
returning
to
a
new
one,
but
once
that
is
done,
they
want
to
deprecate
it
and
remove
the
old
registry
and
lubemire
was
saying
before
that's
like,
maybe
one
year
or
so,
where
both
will
be
available
and
then
kate's
gcrio
is
just
gone.
So
I
think
we
have
some
more
issues
and
discussions
about
that.
F
But
at
some
point
we
probably
have
to
think
about
what
happens
if
that
old
registry
isn't
there
anymore?
How
do
we
want
to
notify
our
users?
What
do
we
do
with
our
old
ci
chops?
I
guess
we
can
always
just
pin
to
the
new
registry
and
everything
works
again,
but
out
of
the
box
once
that
old
registry
is
gone,
cube,
adm,
1.24
and
below
just
won't
work
without
pinning
to
the
new
registry.
It
will
just
break,
but
it's
I
guess,
one
or
two
years
away.
K: Hey, yeah. So we're officially in the CAPI clusterctl release, which is awesome, and we're working towards getting the repo donated to the Kubernetes SIG. Why I bring those two up is because I now have a question for you all: what does the upgrade path for that look like? As we donate the repo, the URL is obviously going to change, and so the old URL doesn't work for our provider at that point. So, is there...?
L: So, could you implement some redirect for whenever your current location goes away? Or, I don't know what we can do from our side; the code is already out there and archived.
I: Yeah, I think redirects might be the preferred option. And if that doesn't pan out, there's also the option to provide config files; and if that's not ideal, then I guess we can restrict it, or at least have some documentation in CAPI that says where clusterctl is going to point for a given version.
I
Next
to
probably
a
copy
or
like
a
read
archive
of
the
current
repo
with
the
release
and
yeah,
I
guess
that
can
be
also
an
option.
So
do
you
plan
to
move
like
the
whole
repo
and
not
like,
have
something
in
their
oracle?
Or
do
you
plan
to
keep
something
as
archived
and
read
only
under
oracle.
K
Yeah,
that's
a
great
question.
I
the
plan
right
now
is
to
just
donate
the
repo
and
transfer
it,
but
that
it
may
be
better
to
keep
an
archive
than
create
a
new
one
or
move
it
or
something
along
those
lines.
I
don't
know
I
literally
just
thought
of
this,
as
I
was
typing
this
out,
so
I
didn't
think
that
that
through,
but
I
think
I
think
we
have
a
couple
options
there
redirect
and
or
something
with
the
repos.
F
Something
for
myself
one,
I'm
not
sure.
If
classic
color
supports
redirects,
we
had
a
similar
situation
with
cert
manager.
Certain
management
was
moved
to
another
organization.
There
was
actually
a
redirect
on
server
side,
but
class
cuddle
didn't
support
it.
I
don't
know
if
it's
different
for
provider
humans
than
for
certain
manchester
heralds,
but
you
definitely
have
to
take
a
look
on
the
client
side
as
well,
even
if
you
get
it
fixed
on
the
server
side,
yeah
and
if
you
have
to
implement
an
entire
set
and
immediately
a
new
release
anyway.
F: Maybe it's easier to just make a hard cut, or copy the repository, so that the old releases work with the old clusterctl and the new releases with the new clusterctl, or something.