From YouTube: SIG Cluster Lifecycle - Office Hours - 20211130
A
Okay, hello everyone. Today is November 30, and this is the SIG Cluster Lifecycle office hours meeting. There is a document agenda; let me put the link in the chat here. If you don't have access to this document, you have to join the SIG Cluster Lifecycle mailing list. Before starting: this meeting goes under the Kubernetes code of conduct, the CNCF code of conduct rules, so please be nice with each other. Let's go to today's agenda.
D
Yeah, I think this is Alice from Red Hat, but I mean, the TL;DR here is that we just need a new rebase for that operator work.
A
I have two topics; we discussed them briefly last week. The first one is that we have to start thinking about SIG project talks for the next KubeCon, to make sure that this time we have a SIG talk and eventually a project talk, possibly more than one. The second topic is that in the last SIG chairs meeting it was announced that the collection of feedback from the SIGs is starting, in order to compile the usual annual status report. Last week we agreed that we will send the questionnaire that I proposed in this meeting a few weeks ago to every subproject. So, if there are no objections, I will send an email asking each subproject to pick one person responsible for answering the questionnaire for the entire subproject, and let's see how much feedback we get.
A
Okay, so if there are no other questions, I think we can move to project updates. Does someone want to give us a project update?
B
I can quickly chime in. I don't know if it counts as a project update, but on the KubeVirt provider side we had our first initial meeting with the community. We haven't set up bi-weekly meetings for the KubeVirt provider yet, but we had a kickoff this morning, and it was discussed that initially we'll go with a weekly meeting for it; but yeah, in the longer run, I guess maybe we're looking for some guidelines.
B
As far as setting up a bi-weekly meeting: do we need to get buy-in from the community and this group, or is it just up to us to set it up and add it to the community agenda? The other issue that came up is that, now that we are hooking up the CI infrastructure for the KubeVirt provider, we are looking for infrastructure to actually run tests on. There are some ideas being bounced around, but in the long run I think we are looking either for funding or for credits in order to keep the CI infrastructure running. We have some workarounds in mind for the short term, but I think we'll need to figure out the long-term solution for our CI. Yeah, just some quick updates on the KubeVirt provider side.
C
We have to have some bare metal instances to make use of, because the current CI that Red Hat, that the KubeVirt community, uses is really based on bare metal: they have three or five bare metal instances that host their CI systems. Based on our initial discussion with them, because this is the bootstrap stage for the project, we can share their infrastructure.
C
However, we should be working together on the long term: either they can get more credits or more funding for their infrastructure to add more nodes, or, for the KubeVirt provider project, we should have our own dedicated infrastructure for CI. So that's something we are looking at, while looking for help from the community.
E
That makes sense. I mean, I believe AWS does have bare metal instances, so you could do that. I would be wary of suggesting that you actually do that at any scale, because I think they're pretty expensive, and I imagine it would burn through the credits in the AWS pool, as it were, so that might be a bit upsetting for people. Yeah.
C
So, for Red Hat: they are currently using IBM Cloud; the bare metal instances are in IBM Cloud. However, we want to have some guidance on who we should talk to for this funding or these credits, and whether it's even feasible for us to ask for that.
E
It is possible to do these things. I think the core data point that they will want is how much it is going to cost. Like, what would the spend be if it was on AWS, or what would the spend be if it was on IBM Cloud?
E
And then I guess the other question I would ask is: if you were just using whatever you can get on a normal instance, you know, very low performance, or much lower performance but potentially smaller instances, is that doable? How much would it cost? Would it be more expensive, or would it be less expensive? But I think...
E
You know, if you're only talking about running a couple of tests, like a couple of tests an hour or a couple of tests a day, it might be that the money just doesn't matter that much, as long as we're paying by the minute or by the second. Although it's expensive by the hour, it doesn't actually add up to that much.
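E's point about per-minute billing can be sketched with some back-of-envelope arithmetic. The hourly price and run counts below are hypothetical placeholders for illustration, not figures quoted in the meeting:

```python
# Back-of-envelope CI cost sketch: pay-per-use vs. an always-on node.
# All prices and durations are hypothetical placeholders.

def monthly_ci_cost(price_per_hour: float,
                    minutes_per_run: int,
                    runs_per_day: int,
                    days_per_month: int = 30) -> float:
    """Cost of per-minute-billed instances that only run during tests."""
    hours_per_month = minutes_per_run * runs_per_day * days_per_month / 60
    return price_per_hour * hours_per_month

# e.g. a bare metal instance at a hypothetical $4/hour,
# two 45-minute test runs per day:
on_demand = monthly_ci_cost(price_per_hour=4.0, minutes_per_run=45, runs_per_day=2)

# versus keeping a dedicated node up 24/7 at the same hourly rate:
always_on = 4.0 * 24 * 30

print(f"on-demand: ${on_demand:.0f}/month, always-on: ${always_on:.0f}/month")
```

With these made-up numbers the on-demand bill is an order of magnitude below the always-on one, which is the intuition behind "expensive by the hour, but it doesn't add up" at a few runs per day.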
C
Right, I think we can probably get some of those reference data points from the KubeVirt community. This morning we were discussing probably a 10-hours-a-week cutoff rate, and not triggering the CI automatically for every PR change.
C
We'd need the maintainers to explicitly kick off CI on the PRs, to save some cost. From my understanding, KubeVirt really maintains three dedicated bare metal instances, and I don't know, maybe, if we're not using a separate AWS instance...
C
Maybe we can just give them some of the credits and share their infrastructure, because I don't think they run their three bare metal nodes at full capacity, so buying more bare metal nodes is kind of wasted. And also, on the engineering front, we would need effort to maintain those three bare metal instances, which is out of the scope of the project, right, so that would dilute our focus.
C
That's something that we want to bring up for Cluster API Provider KubeVirt, or maybe we should have more discussion in tomorrow's Cluster API-specific meeting, I guess. Thanks.
A
I mean, I think that for the infrastructure question, maybe SIG Cluster Lifecycle is not the proper venue, because basically we don't manage the funding and we don't know in detail how this works. I remember seeing some funding requests, but for individual contributors; I don't remember where in the Kubernetes org, I would have to search for the issue, but basically people were asking for funding for special needs.
A
They have to provide a justification and a rough estimation of the budget that they require, and then the discussion starts. But maybe in this case it would be better to ask in ContribEx, or to ask in test-infra, to find out, basically, the right person to talk with.
A
With regards to the meeting, the procedure is not so simple, because there is basically a SIG calendar that only a few people have write access to. I'm one of those people, so you have to figure out when you want this meeting and then ping me, and I will set up the recurring meeting, both in Zoom and on the calendar, the Kubernetes calendar.
E
I'll just mention, it's not ideal, but if we can make it fit into the AWS account, then we sort of skip a lot of this; we just need to make sure that the dollar cost is acceptable, because we do have an AWS test account and AWS puts credits in there. We just need to make sure that they're willing to put up a couple more if the need arose.
E
So if we can, say, use a bare metal instance on AWS, or if there is comparable infrastructure existing for IBM Cloud, then, if you wrote a script that could spin up one or however many instances or bare metal systems you need, install the stuff, do all that, then we could basically just run that from Prow, more or less with what we have today, and we just make sure that the cost is okay. So that's why I keep going back to that.
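The spin-up-from-a-script idea E describes could look roughly like the sketch below. This is a minimal illustration, not anything agreed in the meeting: the instance type (`c5.metal` is a real bare metal EC2 type), AMI id, and job name are hypothetical placeholders, and the actual boto3 call is shown only in a comment since it needs real credentials:

```python
# Sketch of a Prow-runnable script that provisions a bare metal instance
# for a test run and tears it down afterwards. The AMI id, instance type,
# and tag values here are hypothetical placeholders.

def launch_params(ami_id: str, instance_type: str = "c5.metal",
                  job_name: str = "capk-e2e") -> dict:
    """Build the EC2 RunInstances parameters for a single CI node."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # Tag the instance so leaked nodes are easy to find and reap.
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "prow-job", "Value": job_name}],
        }],
    }

params = launch_params("ami-0123456789abcdef0")

# With credentials configured, the rest would be roughly:
#   import boto3
#   ec2 = boto3.client("ec2")
#   reservation = ec2.run_instances(**params)
#   ... wait for the node, run the e2e suite, then
#   ec2.terminate_instances(...) in a cleanup step so billing stays per-minute.
print(params["InstanceType"])
```

Keeping provisioning in a self-contained script like this is what lets an existing Prow job run it "more or less with what we have today": Prow only needs to execute the script and the cleanup step.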
A
Okay, so it seems that we have a plan. First of all, we need a rough estimation of what the need is. Then we have to figure out which option is best: using the AWS account, using the IBM account, or asking the SIG for alternatives.
E
An update, yes. There's actually a ton happening in kOps, but I think the thing that I thought was most interesting is: we seem to be actually making good progress on getting IPv6 actually working on a cloud, on AWS.
E
There are a large number of what I would call paper-cut type things in the ecosystem, where we're finding that, you know, the cloud provider AWS doesn't really support IPv6 for the nodes in some small way, and Cilium gets confused with IPv6 in some small way, or whatever it is, and the DNS controller gets confused. But essentially, broadly, it is more or less working, and there's some excellent work by kOps contributors that is addressing those small shortcomings.
E
So it looks like we will, I don't know, actually have IPv6-only clusters passing some tests relatively soon. I hope so. I thought that was an exciting update to share: IPv6 is actually more real and more testable, even on clouds that historically have not supported it. I have not yet tried on GCP, but GCP also launched comparable, early IPv6 support, so once we get it working on AWS, maybe I will try to get it working on GCP as well.
A
So if there are no questions, I can give a super quick update on Cluster API. On Cluster API we are trying to close out the first block of things on ClusterClass: patches merged, rebase merged.
A
We are implementing a lot of validation; conditions are being implemented, and events. So the work is getting, let's say, to a first-completion, first-milestone state, and we are looking forward to people testing it, breaking it, and providing feedback.