From YouTube: SIG Cluster Lifecycle - Cluster API 22-01-26
A
Hello, everyone, and welcome to the Cluster API office hours meeting. Today is January 26, 2022. As a reminder, we have an agenda doc; to gain edit access you have to join the SIG Cluster Lifecycle mailing list, and there's a link in the doc that you can follow. Another reminder: we have meeting etiquette. If you'd like to speak up or respond to questions, comments, or concerns, please use the raise hand feature in Zoom; you can find it under Reactions. Let's get started with the open proposal readout. Jacob, you have the first one with the IPAM proposal.
B
Yeah, I just wanted to briefly announce that it's now on GitHub instead of Google Docs, because I think it's close enough to being done that it's fine to have it on GitHub now. And sorry, it's not a link; also, it's the six-thousandth pull request/issue, which is pretty nice.
A
Another milestone; congrats everyone on that. I see we also passed two thousand stars on GitHub, I think a few days ago. That's great, steaming ahead. Cool, yeah, I'll take a look at this; it seems interesting from a CAPI perspective.
B
There is maybe a small point we could briefly discuss here. So the idea now is to add two new types to Cluster API itself, which would be similar to the PersistentVolumeClaim logic.
B
An IPAddress object and an IPAddressClaim object, so that they exist as a general API contract for everything. It wouldn't really make sense, in my opinion, to put them into a separate project, because it would literally just be two custom resources and maybe some validating webhooks. Neither of them needs controllers; they just need to exist as API objects, and they would then get created.
B
So the IPAddressClaim would get created by infrastructure providers, and the IPAddress object, as well as the claim, would then get reconciled by IPAM providers.
B
So it would be a pretty small addition to CAPI itself, and the rest could then live in separate projects. But it's all in the proposal; that's just the one thing, because I think it was at some point discussed that it should be a separate component to CAPI, but since that aspect of it is so small, my idea was to put it into CAPI directly.
B
And
the
I,
the
pools
would
then
be
provided
by
the
ipam
providers,
so,
for
example,
in
this
case,
what
you're
just
showing
that
would
be
an
example
for
an
infoblox
provider
for
and
a
separate
in-cluster
ip
pool
for
an
in-cluster
provider.
That
would
then
be
similar
to
the
current
one.
That's
part
of
metal,
three
I've
also
started
working
on
a
reference
or
an
example.
Implementation
for
that
in
cluster
ip.
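The two proposed types could look roughly like this. This is a hedged sketch, since the proposal was still under review at the time; the API group, version, and field names here are illustrative assumptions, not the final API:

```yaml
# Illustrative only: group, kinds, and fields are assumptions
# based on the discussion, not the final proposal.
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddressClaim            # created by an infrastructure provider
metadata:
  name: my-machine-0
spec:
  poolRef:                      # points at a pool owned by an IPAM provider
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: example-pool
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddress                 # created by the IPAM provider to fulfill the claim
metadata:
  name: my-machine-0
spec:
  claimRef:
    name: my-machine-0
  address: 10.0.0.10
  prefix: 24
  gateway: 10.0.0.1
```

As described in the meeting, neither type would need a controller in CAPI itself; infrastructure providers create claims and IPAM providers reconcile them.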
A
Got it, okay. I mean, my first reaction is that this could be okay to put into CAPI, but probably start it in the experimental folder and then move it out as things make progress.
C
Yeah, so my point was specifically on that, so plus one to a separate API group. And the second thing was regarding the experimental folder. A meta point here is that whenever someone is proposing an API, there's always this question of: does this belong in CAPI? Where should it go? Should it go into the experimental folder or not?
C
Do
we
wanna
like
do
we
wanna
have
something
that
is
clarifying
that
says
what
are
the
criterias,
in
which
case
you
need
to
go
in
the
experimental
folder
or
not,
or
do
you
want
to
say
that,
like
everything
from
now
on
goes
to
experimental
folder?
I
think
that,
like
I
don't
know,
if
there's
already
some
some
prior
art
there
but
like
it,
might
be
worth
clarifying.
A
Awesome, thanks folks. Please take a look if you're interested in this; it's PR 6000.
A
Perfect, moving on. Does anybody want to say hi before we start? Introduce yourself.
E
Hi everyone, Luca here; I'm joining this team because we are starting to use Cluster API for one of our clients, and I would like to understand the trajectory of the project and how it is going to evolve over time. So I will try to join more and more calls, and possibly, if the clients allow, try to contribute back.
F
Thank you. Hi, I'm Bridget Crema, and I am working with Cecile and other folks over at Azure. I'm pretty excited to be focusing a little bit more on Cluster API and helping out where I can. Thanks.
A
Going once, twice, three times; there's a lot of comments in chat. Yeah, the Italians are taking over, sure. We were always here, by the way, so you're just there for it.
G
Yep, for the first one, maybe you can open the HackMD for a moment. Just a short PSA: we are currently collecting ideas for some code walkthrough and knowledge sharing sessions and other stuff.
G
I will do some sessions myself, but sooner or later I'm looking for someone else who can do some of those. We'll see how much we can do, but I think at least a few should work out, and that should help everyone. If we don't forget it, we will record everything, put it on YouTube, and link it in the book afterwards. So hopefully that will be useful for new contributors in the future. That would be it from me on that.
A
Awesome, perfect. Cecile, go ahead.
H
I guess I wanted to share something sort of related: we're piloting something in CAPZ right now, trying this concept of after hours, which we stole from SIG Windows. It's basically just a free-form chat and pairing session for contributors; it's completely ad hoc, unrecorded, and less structured than a formal office hours meeting. The idea is just to share ideas, help each other out if people are stuck on a problem or an issue, and try to pair on some code and debug together.
A
That's perfect; it sounds like a great idea. I would love to take the big picture one.
A
Maybe there is also some refinement that we should do as a community. Most of our book still follows, I guess, the trajectory we set years ago, but if it would help, especially for new folks that are joining, we could actually do an introduction to the project, maybe next week, and we can also celebrate the 1.1 release next week at the same time. Yeah, so feel free to just invite myself and others to the after-hours office hours.
A
Sounds great; kind of a Friday thing in my head, but cool. Does anybody have any questions, comments, or concerns on this doc before we move on?
G
Me again. I'm not sure who saw it, but there was a mail, I guess yesterday or the day before, saying that there is an issue in the first few Kubernetes 1.23 patch releases. Can you please open the linked CAPI issue?
G
So
the
tldr
is,
if
you're
using
cluster
class
with
patches
and
are
using
one
of
those
three
versions
of
coordinators.
Then
there
is
an
edge
case
where
you
will
lose
data
and
the
example
is
more
or
less
in
that
yaml.
So
if
you
more
or
less
yeah,
if
you
just
apply
the
something
with
the
first
yama,
so
you
have
a
default
value
where
you
have
an
array
of
objects
and
then
the
api
server
will
give
you
the
second
yam.
So
it
will
essentially
just
delete
everything
in
those
objects
in
the
array
yeah.
G
So
essentially,
if
you
want
to
use
glasgow's
patches
probably
use
a
new
version.
That
would
be
good.
If
you
really
want
to
use
one
of
those
versions,
you
should
probably
not
use
variables
which
are
arrays
of
objects.
Yeah.
We
will
put
a
note
in
the
documentation
and
try
to
upgrade
all
our
ci
to
use
a
new
version
to
just
avoid
the
issue,
including
the
kickstart
yep.
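The failure mode being described could look roughly like this. This is an illustrative sketch, not the exact YAML shown in the meeting; the variable name and schema are assumptions:

```yaml
# Hypothetical ClusterClass variable whose default value is an array of
# objects. On the affected Kubernetes 1.23 patch releases, the API server
# could drop the contents of the defaulted objects, so what gets persisted
# ends up as an array of empty objects instead of the values below.
variables:
- name: extraMounts
  required: false
  schema:
    openAPIV3Schema:
      type: array
      items:
        type: object
        properties:
          hostPath:
            type: string
          containerPath:
            type: string
      default:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
```

Per the advice above, the workaround on an affected version is to avoid array-of-object variables, or to move to a patched Kubernetes release.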
A
Thanks
steven,
are
there
any
questions
comments
on
this
particular
issue?
I
I've
tried
like
it
only
affects
cluster
class,
so
we
don't
have
any
other
points
that
were
affected
but
yeah
if
you're
using
glass
called
speed.
Take
a
look
at
the
issue.
A
Just a side note: grep.app is a great tool to search a bunch of things, so you can just type something like that and then filter it out. It will search across a bunch of repositories, and given that Cluster API has its own prefix, it's easy to search for it like that.
I
Hi everyone, two things on behalf of the SIG. First of all, our annual report: we are starting to collect feedback from each subproject, and each subproject has to nominate a person responsible for filling out a short online form. The deadline is the 15th of February, and currently, at least last time I checked, we have only five of the 13 SIG subprojects.
I
Each subproject can submit only one talk. The deadline is the 14th of February. One small note: the submission should be done by, or assigned to, a SIG lead, so please sync up with one of our SIG leads before submitting, because they will be the referent person for the submission.
G
Yeah
sorry,
okay,
first
topic
yeah,
as
you
already
mentioned
before
we
are
playing
through
this
next
week,
so
we
started
work
on
yeah,
getting
main
ready
for
1.2
and
adding
all
the
ci
signal
for
these
1.1
branch.
I
also
link
the
issue
here,
just
if
someone
is
interested,
what
kind
of
things
you're
doing
or
yeah
whatever
and
what
came
up
is
that
we
should
probably
do
some
kind
of
try
session
or
sessions
and
not
sure
how
we
want
to
schedule
them
but
yeah
here
it
is.
A
Yeah, for scheduling I can send out an invite. Usually these sessions are at least two hours long, and usually we do them on Fridays. So maybe after the 1.1 release we can schedule a couple, and we can go through the backlog triage.
I
Just a note: later today we are going to cut RC1 for 1.1.
G
You can catch up with your notifications. Okay, good, next topic. I'll do a short demo about machine deployment variable overrides, just some YAML. Can you give me the rights to share my screen?
G
So the current situation is the following: when you have a ClusterClass, you can specify your machine deployment here, and then you have your KubeadmConfigTemplate and your AWSMachineTemplate. You can specify some variables, and you can specify some patches to modify them.
G
To give one example: if you want to be able to customize the instance type of the AWSMachineTemplate, you can specify the worker machine type variable here, give it a default value, and then configure a patch which takes the worker machine type and overwrites the instance type in the AWSMachineTemplate. And as a user, on the cluster, I can just write, I don't know, p3.large here, and then that value is taken and used for my machine deployment, without overrides.
G
Before this, you would then copy-paste the variable, copy-paste the patch, and then you could have a second machine deployment class with another size. As you can imagine, the more things you want to customize in your machine deployment, or in the templates used for the machine deployment, the more copy-paste you have to do, and if you want to combine those variants it just gets really bad.
G
So
what
we
introduced
is
essentially
a
very
small
feature
to
make
it
more
flexible
and
what
you
can
do
now
and
in
the
new
release
you
can
just
say
variables
overrides
and
then
you
can
just
overwrite
that
value
for
a
specific
machine
deployment,
and
if
you
have
multiple,
you
can
set
different
values
of
course.
So
that's
that's
the
short
all
there
is
so
we
just
have
another
field
here
where
you
can
overwrite
specific
variables
for
a
specific
machine
deployment.
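On a Cluster that uses a ClusterClass, the overrides field described above sits under the individual machine deployment. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example
spec:
  topology:
    class: example-class
    version: v1.23.0
    variables:                  # cluster-wide value for all machine deployments
    - name: workerMachineType   # hypothetical variable name
      value: t3.large
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0              # no overrides: uses the cluster-wide value
      - class: default-worker
        name: md-1
        variables:
          overrides:            # replaces the value for this deployment only
          - name: workerMachineType
            value: p3.2xlarge
```

As discussed later in the meeting, an override replaces the whole variable value rather than merging it with the cluster-wide one.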
G
I
also
have
a
live
example
here.
It's
not
super
fancy,
so
I
implemented
something
with
with
capti.
I
don't
have
a
lot
of
fields
here,
so
the
example
is
not
super,
so
useful,
but
anyway
what
I
did
is
I
created
a
worker
kunis
version
variable
here
without
a
default
value.
I'm
not
sure
if
that
was
a
good
idea
and
I
created
a
corresponding
patch
and
the
patch
just
takes
the
variable,
a
patch
patches,
the
custom
image
field
in
the
docker
machine,
template
and
yeah.
G
That's
it.
When
I
now
look
at
the
docker
cluster,
I
can
just
set
the
variable
here
on
the
global
level,
and
then
I
can
overwrite
it
if
I
want
or
not
for
specific
machine
deployment.
So
the
cluster
here
is
the
same.
That
doesn't
make
sense.
So
when
I
now
look
at
the
docker
machines
of
my
different
machine
deployments,
I
just
see
okay
here
we're
using
1.22
2,
which
is
the
classified
value,
and
in
that
case
we
have
an
override
which
is
1.23
so
yeah.
That's
it.
I
hope
it's
useful.
G
We
had
some
downstream.
We
would
have
a
lot
of
downstream
issues
if
you
wouldn't
have
that
features,
because
yeah
copy
pasting
machine
deployment
classes
is
not
really
fun
yep.
That's
it
any
questions.
H
What's the point of having a default value if the variable is required?
G
I
would
say
it's
the
same
as
in
crds.
You
can
also
say
there
that
you
make
something
required
and
have
a
default
value.
I
guess
it
depends
on
on
what
you've
what
the
semantic
of
require
us.
G
Of
course,
if
you
have
a
default
value,
it's
not
really
required
for
the
user,
it's
more
like
for
the
api
server,
it's
required
at
the
end
after
the
webhook,
and
it's
just
the
same
here,
so
you
could
also
make
that
option
it
when
you,
when
you
look
at
the
implementation,
it's
more
or
less
just
two
independent
features
we're
just
applying
the
defaults
first
and
then
validating
required
afterwards,
and
if
you
want
to
have
that
validation,
that
the
field
is
there
in
the
end,
of
course,
some
combinations
of
setting
defaults
or
not
and
making
it
required
or
not,
are
not
that
useful,
but
you
can
still
do
it.
B
Just a quick one: why is it explicitly called "overrides"? Is that so it's clear that variables get overwritten?
G
Yep, the idea is to make it absolutely clear that you just take the entire overridden variable value and use that instead of the top-level one, because otherwise you might think that they are combined or merged or something. We also thought that "overrides" is enough for now, but maybe not for the future; so if we want to do some kind of merging or whatever, then we have one level in the API where we can also do other things.
I
Just adding on to this: I think that "override" seems kind of strange if you think of simple variables, simple types, but when it comes to variables which are complex types, so nested structures, you start guessing: are we overriding or merging, and how do they behave? So this makes it clear, especially for complex variables.
G
Yeah, the current plan is to only cherry-pick bug fixes if we need some, but we're currently not tracking any. I mean, we have one improvement for 1.1 which might make it, but apart from that, no new features. Perfect.
E
Yeah, sorry, just picking up on this topic; I'm completely new to this, so this may be a stupid question, or more a question for understanding. The overwrite feature is something that gets implemented in the controller, correct? So basically what you're saying is that you can change the end representation of the resource by applying this override at the controller side. Did I get that right?
E
Roughly, yeah; probably not completely, but I agree with you. So my question is: is there a reason to apply that in the controller, after the resource is already stored, and why wasn't it an option to do it ahead of time, maybe in Helm or something like that? It's like this is adding more complex logic to a static definition.
G
Yep, I'm not sure if I have the right answer for it. Essentially, I wouldn't know where to put that configuration in a Helm chart or something. When we're looking at ClusterClass, the only things we have are the ClusterClass and our Cluster with some topology configuration, and we don't have anything else. The machine deployment, where that field is actually used, is just something that our controller generates, so that wouldn't be something that the user creates.
G
I mean, that's the wrong phrasing; you can still use a MachineDeployment standalone if you want, if you just use, let's say, the basic Cluster API resources. But if you want to use ClusterClass, then you get our automation, let's say, on top, and in that scenario you wouldn't be expected to write machine deployments yourself.
A
To add to that: we actually were doing that in the past, using kustomize and asking folks to export environment variables to set these variables, but it just turned out to be very cumbersome, and that's why we arrived here: the lifecycle of these objects is really complicated, such that you can't put it in a template. Jacob, I think you had your hand raised.
B
Yeah, I also just wanted to add on that again, because this also came up during the discussion about ClusterClass in general, whether it's even necessary to have it, and I think this goes in the same direction. You can also do it with templating, but it's very inconvenient because of some of the inner workings of Cluster API. That's why the templating was basically built into Cluster API as ClusterClass, and that's why we have so many templating-like features in there.
I
There's also clusterctl alpha topology plan, and this command is kind of nice because you can pass a ClusterClass and a Cluster, and it will basically show you what the cluster that will be generated looks like. So you can basically dry-run what the topology controller will do, and it is a pretty powerful feature.
I
So you can not only test the creation of a cluster using a ClusterClass, but also test what happens if I change my ClusterClass, or what happens if I change a field in the topology, or what happens if I do a rebase. So it is pretty powerful.
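An invocation of the command above might look roughly like this; the file names are illustrative, and the exact flags may vary by version, so check clusterctl alpha topology plan --help:

```shell
# Dry-run the topology reconciliation without touching a real cluster:
# feed in the ClusterClass, its templates, and the Cluster definition,
# then inspect the computed objects (MachineDeployments etc.) in ./out.
clusterctl alpha topology plan \
  -f cluster-class.yaml \
  -f cluster.yaml \
  -o out/
```

This matches the use case described: previewing what the topology controller would generate before applying a change.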
I
I asked Yuvaraj to make a demo of this feature soon, and yeah, we hope this will really help people to get started with ClusterClass.
E
I just wanted to say thanks a lot, because that's very useful, and I appreciate that this functionality exists, because that was actually my next question: how can I know what's going to happen when I change something? If I don't have visibility into the end result of this, I can already see how that can be a problem in complex workflows, when you change something and you think you're writing the right variable.
E
But
if
you
don't
know
the
behavior
of
the
controller,
maybe
the
override
is
the
wrong
example,
because
it
may
be
pretty
pretty
explicit
right,
but
I
may
put
there
in
the
wrong
path
and
then
overwrite
whatever
what
something
that
I
didn't
want
to
overwrite.
So
it's
good
that
there
is
that
function.
I
think
so
for
great
for
sharing
that.
A
Awesome, thanks folks. I think we've covered the last item, which was the ClusterClass docs. I linked the PRs in chat as well, and thanks so much for breaking that work up and working through those. I took a quick look and they look great; it's a great start. Any other last-minute topics before we're done for the day, for the week?
A
All
right,
thanks
folks,
have
a
great
week,
congrats
again
on
the
1.1
release
to
to
everybody
soon.
I
guess
it
will
cut
rc
today,
please,
post
feedback
into
issues
and
yeah
feel
free
to
reach
out
in
slack.