A
Hello everyone, this is the SIG Cluster Lifecycle, Cluster API weekly office hours on the 20th of July 2022. This meeting abides by the CNCF Code of Conduct, so that basically means: be awesome to each other. To talk at any point during the meeting, please use the raise hand feature in Zoom; it's under Reactions. And while we're waiting for others to join, please add your name to the attendees list.
A
Let me paste the link to the doc in the chat. If you want edit access to this particular document, please join the SIG Cluster Lifecycle mailing list; it's linked at the top of the document.
A
Generally, it's tradition around here to let the new folks in the meeting introduce themselves. Is there anyone who would want to introduce themselves here?
A
I don't see any hands, so let's move on. Let's move on to the open proposal readouts: does anyone have any updates on any of the open proposals that they would want to share?
B
Here I am, hello everyone. So I want to give a shout out for the v1.2.0 release that was cut on Monday. I posted the link to the Slack message that Stefan created, and also the link to the release notes.
B
Yeah, what is important: I took some time to look at the release, and it is really important to give a shout out to the community. I think we measured something more than 300 PRs in the latest release, which is a lot, and 700 commits. Many different companies contributed to it, and yeah, it's really amazing work by the entire community.
B
I
I'm
pretty
sure
that
that
it
will
take
some
time
to
for
people
to
to
catch
up
and
fully
grasp
the
possibility
that
run
time
extension
add
to
the
to
the
tool,
but
I'm
I
I'm
yeah,
I'm
really
excited
because
it
is,
it
is
really
powerful
and
flexible,
and
I'm
looking
forward
for
feedback
from
the
community
and
yeah.
That's
it.
Thank
you.
Everyone
for
helping
in
getting
these.
Ladies
out.
A
I don't see any hands raised, so we can move on to the next one: Ginkgo v2 timing.
C
The question is whether we should wait to merge these changes. They're not that significant: it's mostly changing imports, and then there's a few code constructs that need to change to catch up with Ginkgo v2. And then there's some more idiomatic stuff that we could do in the future, like spec labels and all that, but that's not necessary. But obviously the proximate problem is, once we merge that PR (I should have linked to it in here; I'll put a link in in a bit) — once we merge that PR, all the providers are using our e2e framework in CAPI, and that will break them if they're consuming the master or main branch. So that'll happen at some point. I don't know if there's any way to be more deliberate about that, but I guess I'm just asking: is there anything else that needs to happen before that PR could maybe merge? We also talked about...
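For context, the migration described here is mostly mechanical; below is a minimal sketch of the kind of import and construct changes involved (illustrative only — the suite and spec names are hypothetical, not from the actual PR):

```go
package e2e_test

import (
	"testing"

	// Ginkgo v1 was imported as "github.com/onsi/ginkgo";
	// v2 moves the module under /v2.
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	// v2 replaces RunSpecsWithDefaultAndCustomReporters with plain RunSpecs.
	RunSpecs(t, "e2e suite")
}

// Spec labels are one of the optional v2 idioms mentioned above.
var _ = Describe("cluster provisioning", Label("e2e"), func() {
	It("creates a workload cluster", func() {
		Expect(true).To(BeTrue()) // placeholder assertion
	})
})
```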
D
Yeah, thanks Matt. In my opinion, we should just rip off the band-aid and get this in; the sooner the better. I think if we do this at the beginning of the release cycle, it gives more time for providers to adapt before the release comes out, rather than being, you know, on a clock to get it updated to v2 in their provider, and the same in terms of controller-runtime.
B
My only concern was that — when discussing this with Stefan — we don't have a full picture of what providers are doing: whether they are using main, whether they are using a tag, or whatever. So we really did not want to create a disruption for them. But now that we expect they are catching up, I kind of agree that we should do it soon.
B
Maybe we give a three- or four-week, let me say, grace period to catch up with everyone, without putting anyone in a difficult spot, and then we can do it.
B
The point is to not create disruption — that's the concern — for the providers, but beyond that, the sooner the better.
A
I don't see any hands. Let's move on to the next item.
B
Maybe we can do this: what about if Matt or someone sends a message to the mailing list, telling them that we are going to do this in CAPI in — I don't know — three weeks, four weeks; we define a date. If we do the mailing list, I think that is the best option that we have for contacting providers.
A
Thanks, Matt. Yeah, let's move to the next item. Mike, you have an item?
F
I'll just go — yeah, this should be pretty quick. We are getting very close to merging the scale-from-zero support in the autoscaler, and, you know, I'd appreciate any extra reviews if people want to give them. I just wanted to let folks here know, in case you're curious about this feature or want to know more about how it works or whatever. Please take a look; happy for any reviews. I'm hoping we'll be able to merge this in the next...
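For anyone curious how the feature is wired up: in the scale-from-zero work, node-group bounds and capacity hints for an empty MachineDeployment or MachineSet are conveyed through annotations, roughly like the sketch below (based on the proposal; check the actual PR for the final annotation names and values):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workers
  annotations:
    # Autoscaler node-group bounds; a min size of "0" enables scale from zero.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    # Capacity hints so the autoscaler can simulate a node that doesn't exist yet.
    capacity.cluster-autoscaler.kubernetes.io/cpu: "4"
    capacity.cluster-autoscaler.kubernetes.io/memory: "16G"
spec: {} # cluster name, selector, and machine template omitted for brevity
```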
A
Does anyone have any questions, comments, concerns?

A quick question — I couldn't find the raise hand button, I apologize. Will this support scaling from zero with CSI drivers? I know, like, the AWS cluster autoscaler historically has had issues with scaling from zero with some of the CSI drivers.
F
Right, so there is no specific, like, extra support for CSI drivers in there; whatever the main autoscaler works with is what it would do. There can be issues with CSI drivers, especially if you're trying to do things like use the balance-similar-node-groups option and whatnot. I think as long as you keep your machine sets or machine deployments kind of limited in the regions that they're being deployed to, then the CSI driver should continue to work the way you expect it to. The only times we see kind of irregularities are when we have nodes that are sitting in different availability zones, and then people try to use, like, balance-similar-node-groups or something like that. That can cause some problems, but otherwise I wouldn't expect any issues, aside from just what the limitations of the autoscaler are in general.
F
There's nothing special about CSI drivers in the Cluster API implementation, so we just default to the basic behavior that the autoscaler uses. So in those cases, normally, if the scheduler is able to place pods in places where it thinks a new node coming up would still have the same kind of persistent volume claim or whatever, then it will try to satisfy that requirement.
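As an aside, the usual way to let the scheduler drive that volume-placement decision is a topology-aware StorageClass with delayed binding; a generic sketch (the provisioner and zone values are placeholders for whatever your CSI driver uses):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: ebs.csi.aws.com # placeholder; use your CSI driver's provisioner
# Delay volume binding until a pod is scheduled, so volume placement follows
# the scheduler's node/zone choice instead of constraining it up front.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
```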
A
I don't see any hands. Thanks, Mike. Jonathan, you have the next one.
G
Let's see — yeah, all right, cool. So in the observability section we have this visualizer app, and when you click on this, it takes you to a link showing off your clusters, and you can see the provisioning state. So if I go and spin up a new cluster right now — it might take a minute, so we can circle back, but it will show up here. Actually, that was a lot faster than I thought. If we go and click into a cluster, we can see the different resources here.
G
So this tree is built off of the clusterctl describe command, and you can see the resources based on the different parts of the cluster. So you have Cluster API in blue, the bootstrap provider in yellow, the control plane provider in purple, and the infra provider in green. We don't have add-ons in clusterctl describe; that'll come in the 1.2 release, when I rebase off of that. But for all the resources with a status, you can see the status in the little badge icon.

G
So red means it's failed, yellow means it's in an info or warning state, and green means it's ready. If you click on any resource, you can see the different conditions here, and you can also go into the spec.
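For reference, the command the tree is built from can also be run directly against a management cluster; something like this (the cluster name is a placeholder):

```
clusterctl describe cluster my-cluster --show-conditions all
```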
G
We should be able to refresh soon and see that the new changes are showing up. Some other things I added: you can zoom in and out, since the tree can get a little big, and you can also toggle the link style.
B
I want to say this is awesome, and I want also to thank you, because, basically, while developing this you put a lot of effort into improving clusterctl describe. So for people that want to use this: the library behind it is, let me say, a library that everyone could use. So this is a real asset for the community, and thank you very much for doing this work.
G
Yeah, absolutely. Thanks to everyone who helped test this out as well. And I'm not sure if Stefan is here, but he made a PR that helped me get this working, so thanks for that as well.
E
Yeah, so our team is working on adding dual-stack support for our distribution of Kubernetes, and I guess one issue we've run into — or one issue we're trying to work on — is the upgrade scenario from a single-stack cluster to a dual-stack one.
B
Yeah, so I think that there are two sides to it. One is to make this possible in the API, which is the simplest part — and just to give you a little bit of background:
B
Basically, we initially blocked almost all changes, and now, over time, we are making them possible, relaxing the webhook whenever we are sure that a change can happen in a proper way, so that it is not disruptive for the cluster. That said, the second part of the story is: is there a way to change the networking settings in a way that actually works? Because I don't think it is only a matter of changing them in the spec; I do expect that something has to change in the infrastructure, probably. And this basically goes down into a discussion that is a little bit broader: how does this behave in different providers, etc., etc.
B
So,
from
my
point
of
view,
if
I
look
at
this
from
a
core
copy,
it
is
an
easy
change.
It's
just
just
a
relaxed
web
book.
However,
my
feeling
is
that
we
need
to
figure
it
out
to
the
full
story,
so
audi's
working
to
the
provider,
so
we
can
reason
about
it
but
happy
to
discuss
this.
We
can
start
the
document
whatever.
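For concreteness, the API side of the change is the cluster's network settings; a dual-stack Cluster declares both IPv4 and IPv6 CIDRs, roughly like this sketch (the CIDR values are just examples):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:          # adding a second (IPv6) block here is the kind of
        - 192.168.0.0/16   # mutation the validating webhook currently blocks
        - fd00:100:96::/48
    services:
      cidrBlocks:
        - 10.128.0.0/12
        - fd00:100:64::/108
```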
D
Thanks, yeah. My question is, I guess: since this is a pretty disruptive change, and requires, like, restarting the infrastructure and everything, what's the advantage of doing that versus building a new cluster that's dual-stack and migrating workloads? Like, what's your use case here?
E
I guess the desire is: people with existing workloads on single-stack clusters might not want to, you know, completely stop everything to upgrade to dual-stack.
D
Yeah, in my opinion — I don't know — I think that's a pretty weak argument, just because you can migrate workloads without it being disruptive, and if you're going to have to restart your nodes, the workloads are going to be migrated to new nodes anyway. So, you know, clusters are not pets, they're cattle, right? You can change clusters pretty easily; there are tools out there that help you do that. So I think that's one route worth considering too.
A
Thank you. Cecile, you have the next one.
D
Yeah, so I wanted to bring this back up now that we've just released 1.2; I think it's a good time to start talking about it again. I don't know if Vincent is here — probably not — but just, I guess, to kick off the conversation:
D
We talked in the past about having a release cadence for CAPI, and we talked about doing three releases a year to start, maybe four. I was wondering if we want to start talking about a deadline, or like a target date, for 1.3, since we just released 1.2, and keep that in mind while we're doing backlog triage and milestone grooming. And then also, if anyone is interested in starting a release team, we can, you know, start doing more...
D
...you know, thinking about how to improve our release process and that sort of stuff. I would personally be interested in being part of it, but if anyone else, you know, wants to be part of this, I guess just maybe reach out, and we can get something started.
B
Yeah, just to give an update from my side: I'm discussing this idea internally, basically with my management line. Personally, I consider it valid, but yeah, of course I have to do all the due diligence.

I'm happy to help in this discussion. What is super important for me is that, if we go down this path, there is a commitment from, let me say, a set of folks, because a release cadence basically entails some work. Lately this work has not been well distributed, and I think that it would be beneficial for the entire community if we had more folks generally.
D
Yeah, I think, if we're trying to be realistic in terms of, you know, ambitions, and not make it too much more work right away — like, just not disrupt things too much — I don't think we necessarily need to start releasing more often than we're releasing now, just to not add work to start. But I think it would be really great if we could provide clarity to users, just to set expectations of when the next release is supposed to happen.
D
So
let's
say
we're
gonna
release
exactly
the
same
as
we
did
for
1.2
to
1.2,
for
example.
What
would
be
the
date
for
1.3?
Like
that's
the
kind
of
thing
I'm
you
know
trying
to
get
us
to
start
doing
and
then
once
we're
good
at
communicating,
then
maybe
we
can
start
thinking
about
adjusting
like
how
often,
but
I
don't
think
we
should
really
do
everything
at
the
same
time,
and
then
I
guess
the
release
team
is
just
like
for
now
just
getting
people
who
are
interested
in
this
who
just
form
a
working
group.
D
It doesn't necessarily have to be, like, a long-term commitment, but for now we can just start, you know, having a group of people who are interested in helping improve automation, improve the process, work on the communication — that sort of stuff.
B
Yeah, this matches what I have in mind. We should get there, let me say, incrementally. Having a release calendar, for me, is beneficial for the entire community and for every company using Cluster API. So this is why I'm trying to push this internally as well, but yeah, you know, there are many things going on, and we are trying to align them together.
A
Thank
you.
I
don't
see
any
other
items
on
the
agenda
of
considering
your
hands.