From YouTube: SIG Cluster Lifecycle - Cluster API 22-07-27
A: Hello everyone, good day. It is July 27th, 2022, and this is the Cluster API office hours meeting; Cluster API is a subproject of SIG Cluster Lifecycle. We have meeting etiquette, which boils down to being kind to each other and using the raise-hand feature in Zoom if you would like to speak up. I'll post the link to the agenda in chat; feel free to add your name to the participant list, or add any topic that you would like to discuss today. All right.
A: So before we start, we usually welcome new folks. If you would like to introduce yourself, feel free to unmute.
B: Hello, Prakash here. I used to be part of Cluster API for a long time; I'm returning after almost, I don't know, six months to a year, and rejoining in the same connection, where I wanted to look at the possibility of IP address management inclusion, or something of that sort. If I remember, a couple of weeks back I tried that. So thanks for letting me in.
A: All right, let's go to the open proposals. Is there anything we would like to talk about, or any updates?
C: Yeah, so the managed Kubernetes in CAPI proposal: it's been open for, I think, about two months, and we actually moved the proposal to a PR today, so please take a look. And for information, in CAPA we are actually just starting the discussion on how to evolve our EKS API to support ClusterClass. So yeah, things are moving.
A: I see this is past lazy consensus. Was it merged? Not yet; there is still a bunch of comments.
A: Anything else for the proposals section that you would like to talk about?
E: Yeah, what I was gonna say is: basically, if you want, Prakash, I can send you a link to the proposal that got merged, which has the plan and the actual implementation that is going to happen.

B: Okay, thank you.
A: Thanks for sharing the proposals in chat. All right, let's move on to discussion topics. Mike, you have the first one.
F: Yeah, so this is something that came up during the scale-from-zero review. Alberto made a comment that we could use some sort of smoke testing here, and I totally agree with him. It's been very slow to make any progress getting the autoscaler core to change the way we've been doing end-to-end testing there, but it got me thinking.
F: Would there be a way for us to do Cluster API testing of the autoscaler when we put up commits to the autoscaler repository, when we open PRs? I don't know if Prow would let us slice things like that, like if we could just have tests running when we see things in a certain directory being PR'd. But if we could, I would love to talk with the people who are running the CAPI testing infrastructure to see if there'd be a way we could figure out how to do this.
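The directory-scoped triggering Mike is asking about does exist in Prow: a presubmit job can set `run_if_changed` so it only fires when a PR touches matching paths. A minimal sketch of what such a job could look like; the job name, image, and make target here are hypothetical, not an existing test-infra job:

```yaml
presubmits:
  kubernetes/autoscaler:
  - name: pull-cluster-autoscaler-e2e-capi   # hypothetical job name
    # Only trigger when the PR touches the Cluster API cloud provider code.
    run_if_changed: '^cluster-autoscaler/cloudprovider/clusterapi/'
    decorate: true
    spec:
      containers:
      - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest  # placeholder image
        command: ["runner.sh"]
        args: ["make", "test-e2e"]
```

With `run_if_changed`, PRs that don't modify the Cluster API provider directory skip the job entirely, which keeps the extra e2e cost scoped to the changes that can actually break the integration.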
D: We have to figure out if we want to test CAPI on a stable release of the autoscaler, or to test CAPI on top of main, or whatever; we have to figure out the details. I think it is doable, but yeah. My suggestion is to take this offline, or to Slack, and eventually arrange a meeting to dive into some ideas and then make it happen.
F: Yeah, I think that sounds great, Fabrizio. And it makes me wonder: we've done the deep dives for other portions of Cluster API, and I can't remember, did we do a deep-dive video for the testing infrastructure? Because if we didn't, that might be a great place to start some of these discussions. For me, it would be helpful to learn more about how the testing infrastructure for Cluster API is set up.
A: Okay, just from my perspective, it would be a great thing to have. I think in the past we have seen some breaking behaviors between the two systems that were caught only in a production system, so it would be good to catch this ahead of time. I think Christian read my mind: a kubemark signal would be a good one, or, you know, we could do something with CAPD as well.
F: Right, just to make sure we're not breaking things. And we don't need to run workloads; we just want to make sure it works, which is basically what I do at home anyways. All right, cool, I'm good to move on. Yeah, what Christian is saying in chat about maybe a periodic job in CAPI, or something like that, I guess that would also be another way to approach this, but I'm good to move on.
F: Okay, cool. Christian and Fabrizio, I've got your names down in the log here, and I'll follow up with some actions in Slack or whatever.
D: Okay, thank you. So, some time ago I opened an issue and a PR proposing a change to how we do issue triage and milestone management. The idea is mostly to start using triage labels, so we have a clear signal that an issue has been triaged, is a duplicate, we are waiting for information, etc., and an issue goes into the milestone only if someone commits to complete the work. So yeah.
D: There was a lot of feedback on the PR, and it has all been addressed; I think we are good to go. So my question for the audience is: can we set a lazy-consensus deadline on this PR, or do people want more time to look at it?
A: Okay, so I guess I'll go first, and then we can rotate to the raised hands.
A: From my perspective, this has been open for a while, so I think we should probably just merge it. One action item that I would take from here is to schedule a few grooming sessions, maybe Friday afternoon like we've done in the past, or Friday morning to be a little more Europe-friendly, so that we don't just remove the milestone, but also re-prioritize and try to accept everything. And if we don't want to accept something, we just close it and say: hey, you can reopen it again.
A: I think that's what I would like to do, and maybe we can do it over the course of the next couple of months. We don't have to do it super often, but I feel like it would probably be a good way to go from here.
A: Awesome, all right. So I'll approve it, and feel free to unblock once you're ready to merge it.
D: It's not a problem; let's wait two more days, and if no one chimes in, we move on as you suggested.
A
Cool
sounds
great,
and
if
we
want
to
do
a
first
backlog
room
in
on
friday,
this
friday
yeah
we
can.
I
can
schedule
something
and
put
it
on
the
camera.
A: Awesome. Fabrizio, you have the next one as well, so go ahead.
D: A short one. So during the last release cycle, we started working on improving how we do logging in Cluster API. Most specifically, we are relying on what is being done in Kubernetes: that means structured logging, consistent key/value pairs, and use of contextual logging, all the great work that the Kubernetes community is doing and basically offering us in klog or in controller-runtime.
A: Awesome. I read the doc, so I have a plus-one on it. Cecile, Alberto, and other folks that are interested in logging, please take a look as well. I think it just solidifies what we have been doing, and what we should do better.
A: Awesome. Cecile, you have the release v-team process; go ahead.
G: Yeah, so this is just following up from the last office hours, last week, where we talked about starting a brainstorm about having a v-team for releases in CAPI, and there was lots of interest in Slack. So thank you to everyone who reached out saying they were interested. I have a feeling most people are interested in actually getting to work on the release and helping out, and that's great; I think we'll definitely need volunteers.
G: You know, in future cycles as we implement this. For now, we're mostly trying to brainstorm how to actually do this and structure it in a way that's lightweight enough to be sustainable for CAPI, which is a relatively smaller project than Kubernetes itself, but also to improve our communication a little and spread the load, so it's not always the same people who have to do the release tasks and chores.
G: So if anyone's interested in this topic, please take a look. For now we're mostly doing this async, just back and forth. I think at some point we'll probably have to schedule a call to talk about the results, or maybe we'll just continue doing this async, because it's easier time-zone-wise. So if you're interested, take a look; this is a work in progress. Just wanted to bring that to everyone's attention.
A: And you know, we've been trying this in a bunch of different ways first, before maybe committing. But yeah, thanks, Laurie Flores, who is on the call, for starting this and for the follow-up.
A
Any
questions
comments,
concerns
about
the
release
process,
release
the
information.
This
goes
hand-in-hand
with
the
release,
cadence
talks
that
we
have
had
for
the
past
few
weeks.
It's
the
theme
that
will
be
in
charge
for
those
releases.
G
Yeah,
I
guess
just
to
reiterate
about
the
release
cadence,
where
we
discussed
last
time,
which
I
think
we
should
try
to
do,
is
before
thinking
of
changing
anything
with
the
cadence.
We
should
make
sure
our
release
process
is
sustainable
for
people
who
do
it.
So
I
think
what
we
want
to
try
is
to
not
change
the
cadence
right
now
and
just
change
the
way
we
do
releases
and
how
many
people
are
involved
and
how
we
communicate
the
dates
of
when
we're
going
to
release.
A: That covers the concerns that I raised; we talked about it ahead of time. But yeah, this is awesome. Thanks for driving this.
D: This is a periodic task that, in my opinion, should fall to the release team, because it's kind of making the release ready, and it can encompass a lot: creating test jobs, etc. But it's kind of different from what the Kubernetes release team does. So I would like to understand if there are people that have opinions around this; otherwise we can take this offline in the document.
G
I
was
waiting
to
see
if
anyone
had
anything
to
say.
I
don't
want
to
speak
too
much,
but
I
I
guess
personally,
I
think
that
it
makes
sense
to
have
the
release
team
be
in
charge
of
everything.
That's
around
like
release
process
like
adding
ci
jobs,
making
sure
that
the
ci
jobs
are
passing.
G
All
of
that,
I
think
adding
support
for
kubernetes
version.
It
depends
if
we
see
that
as
really
a
blocker
for
release
like
we
never
release
until
the
new
kubernetes
version
is
supported.
But
historically
I
don't
think
that's
really
been
the
case
and,
if
not,
I
think
it's
mostly
like
a
feature
of
the
release
right,
like
any
other
feature
that
we
have
in
the
milestone
that
we
want
the
release
to
support.
G: So I think it might be strategic to focus the release team on the chores around actually cutting the release, prepping the CI, and basically everything that's in the giant task list that's been opened as an issue for the past few releases. I don't know if Kubernetes version support really falls into that; I guess that's a gray area that we need to figure out.
E
And
I
think
like
this
will
reduce
inevitably
the
pool
of
people
that
we
can
get
for
the
release
process,
just
because
of
like
so
the
investment
that
need
that
is
needed
to
actually
get
cap
into
to
support
a
kubernetes
version.
So
I
think
what
we
can
do,
potentially,
if
you
want
to
have,
if
you
want
to
pull
this
effort,
is
probably
I
don't
know
if
that's
really
the
case,
but
like
have
some
automation
that
opens
like
an
issue
to
support
a
kubernetes
version
and
label.
D
Yeah,
thank
you
for
feedback,
so
I
kind
of
agree.
This
is
a
gray
area.
I
raised
the
point
for
two
reasons:
just
to
give
a
little
bit
context.
So
first,
first
of
all
is
that
this
is
one
of
the
tasks
that
we
do
periodically
like
release
and
that
insists
on
on
the
yeah.
You
know
on
always
on
the
same
set
of
forks,
and
so
this
is
an
area
where
we
can
use
help.
D
Second,
is
this
because
we
are
managing
these
like
releases,
so
we
have
basically
another
issue
with
a
set
of
tasks
that
that
we
have
to
do
for
every
release.
Usually
these
tasks
are
simple.
It
is,
it
basically
implies
create
a
copy
past,
a
set
of
ci
jobs,
and-
and
they
are
repetitive,
what
will
be
the
gap
or
the
barrier
will
be
okay.
What
happened
if
we
discovered
that
kubernetes
at
least
does
not
not
does
not
work
well
with
copy.
D
In
that
case,
I
agree
that
someone
else
outside
the
team
should
jump
in
and
fix
whatever
is
has
to
be
fixed
in
copy,
but
let
me
say
getting
all
the
paratus
ready
for
testing
a
new
cover,
necessarily
in
my
opinion
it
is
a
good
candidate
for
that
team,
but
yeah
we
can
discuss
this.
I
appreciate
the
feedback.
Thank
you.
Will
you
see.
F: Yeah, so I guarantee we did not plan this with the Oracle folks, but we just happened to be releasing the same version they are. So Friday we released 0.4.0. The big feature for that is that you can now specify an external cluster to run the kubemark pods in. This is nice for people who want to have extra resources, or just a completely separate Kubernetes cluster where they can run the hollow pods. And then we also made some improvements to the release packaging.
F: We had a user who was trying to follow the normal Cluster API installation methodology, you know, using clusterctl and whatnot, and we had a few bugs in the way our release artifacts were put together. As far as I'm aware, those have all been smoothed out now, and we've changed the image location, so everything should be pullable from public registries. So if you're interested or have a chance, check it out. That's it for me.
A: Awesome, thanks Mike. Joe, you have OCI.
H: Yeah, so we released 0.4.0 on Monday, so we are trailing. We have experimental support for machine pools in this release, as well as, kind of the big one, upgrading the Cluster API dependency to 1.2.0. We have office hours this coming Tuesday, August 2nd. And then there are other things in the release, such as fixes and things like that, but those are the two major things we released.
G
Yeah,
that's
super
exciting
that
you
added
machine
pool
support,
just
wondering
if
anyone
is
aware
on
the
oci
contributors
or
maintainers,
that
there
is
a
work
in
progress
to
add
machine
pool
machines
and
that's
coming
the
proposal
merged
and
the
implementation
in
cappy's
underway.
So
just
something
to
keep
on
the
radar.
A: Awesome, thanks folks. We're at the end of our agenda today. Are there any questions, comments, concerns? Fabrizio.
D: Yeah, one last point: we are working with the CNCF to get a new character added to the Phippy and Friends family for Cluster API. We talked about this some time ago; it seems that things are progressing, and we still have a chance to get this ready for the next KubeCon, which would be great.
D: For this Cluster API character, I don't know how we can sort this out; we can open a poll, and I don't know if we can do polling in Slack or whatever.