From YouTube: Kubernetes SIG Storage Meeting 2023-05-18
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 18 May 2023
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.ysglv6ob2p59
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A
All right, today is May 18, 2023. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. So we are at the beginning of a new Kubernetes release cycle, version 1.28. 1.27 was released a few weeks ago, and in the last meeting Xing actually got us started and created a new tab in our planning spreadsheet for the 1.28 planning. For the most part we copied over the remaining uncompleted items from the 1.27 side.

A
I will go over that today. Some timelines to be aware of: the cycle has already begun for 1.28, and June 8th is going to be the production readiness freeze. That date means you must have any new features declared and officially approved, by having your PRR approved and so on. Then the 16th is enhancements freeze, and then — sorry, the 19th of July is code freeze.

A
The 26th of July is test freeze, then docs must be ready by the 8th of August, and we are planning to release on the 15th of August.

A
If you have any items that you think should be in 1.28 in terms of storage features, now is a good time to bring them up, and we can go ahead and add them to the list and start tracking them as part of the work for the SIG. Beyond planning, there are a couple of items I already see here: PRs that need attention and designs that need to be reviewed.

A
If you have anything that you would like to discuss, please feel free to add it to the agenda at any point and we'll get to it after planning. You can find the link to the agenda in your calendar invite. So with that, we will go ahead and switch over to the planning tab and get started. I'm going to create a new column here for today, and we'll start getting status updates on where things are for the beginning of this new cycle.

A
Okay, so we have "recovering from resize failures" — get the API changes merged in 1.28; still needs to support quality of service. Hemant, are you on the line by any chance?
B
Yeah, I'm here. So we are still — we missed the cut last time by a couple of days, so we have most of the code done. I need to update it and work on one more bit, which is recovery all the way back to the original size. That's something I had not factored in originally, so that's something I'll be working on this release, but yeah, this feature should be on track.

A
Cool, thank you for that update. So I will keep this as started — and is Alpha still the correct designation here?
B
Since we are going to change the — if we have to — I'm working on a design that will allow recovery to the original size. For that we have to do some minor KEP changes.

A
Cool, good question, Xing. Thank you. Next is "issues related to assuming volumes or mount points." It looks like we might have Jing today. Jing, are you on the call?
D
I don't think there's been much work recently on that, but I plan to look through the recent issues and the backport fixes to see what's next for this, yeah.

D
Okay, yeah, let me search it up, and I will ping you if I cannot find it, yeah.

A
Oh, sounds good. Thank you, Jing. Thank you, Michelle. Next is the volume group API — so I guess it's Alpha this cycle, is that right, Xing?
A
Thank you. Then we have "provision volumes from cross-namespace snapshot/PVC" — continue Alpha work, volume populator work in progress. Anything new there? You know — Stefan, Michelle, Xing, Ben — anyone have an update?

C
I see Takafumi; do you want to give an update?

E
Oh, I think — so the design of this also depends on the ReferenceGrant API that SIG Network had, but I think we're looking at trying to bring that API into core Kubernetes instead of the Gateway APIs, where it currently is. I think the networking folks reached out to us and they're looking for help with that.

E
The design — I mean, the API has been in the networking Gateway API group for a while now, and I think it might even be GA, or GA soon. But when they were proposing to port it over to core Kubernetes, I think some things in the API had to change a little bit, and so that discussion is still happening right now.

E
The design discussion is close to being finalized — it's probably 90% of the way there, and it just needs a little push to get it over the line.

E
Because the API that they have is very specific to the networking use case — to their use case — and so when we started discussing how to bring it over to a core API, we had to, you know, make some changes around some of the semantics to make it more generalizable.

E
So that's why they're asking us for help, right? Because I think from their perspective they have their API, and I don't think they plan to switch, so now the push to generalize it is mainly coming from us, because we want to use the API for our purposes.
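For context on what the cross-namespace data source flow looks like today, while ReferenceGrant still lives in the Gateway API group, here is a minimal sketch — assuming the alpha CrossNamespaceVolumeDataSource feature gate and the gateway.networking.k8s.io/v1beta1 ReferenceGrant; all names and namespaces below are illustrative:

```yaml
# ReferenceGrant in the namespace that owns the snapshot, allowing PVCs
# in the "app" namespace to reference VolumeSnapshots here.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-app-pvcs
  namespace: snapshots
spec:
  from:
  - group: ""
    kind: PersistentVolumeClaim
    namespace: app
  to:
  - group: snapshot.storage.k8s.io
    kind: VolumeSnapshot
---
# PVC in the "app" namespace provisioned from the cross-namespace snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
  namespace: app
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: nightly-backup
    namespace: snapshots   # cross-namespace reference, gated by the ReferenceGrant above
```

Moving ReferenceGrant into core Kubernetes, as discussed, would change the apiVersion above but not the overall shape of the flow.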
A
Cool, thank you for the background on that, Michelle. And if folks are interested in helping here, it looks like there is an opportunity to go potentially beyond SIG Storage work and help with moving some of this logic from SIG Networking into core. So if you're interested, please reach out to one of the tech leads or Michelle, and we'll point you in the right direction.
A
All right, the next item is CSI volume health — additional metrics and/or events — staying in Alpha, still needs end-to-end tests. Any update on this one?

C
A new contributor is going to help me with this. Since she's new, I'm not sure if she would be able to finish it in this release, so we'll be working on it.

A
Got it.
C
So we — yeah, we reviewed this again in yesterday's data protection meeting. And if you want, chime in as well. Right now we are in good shape, but I think the person who is doing the POC was not in the meeting yesterday, so we're just waiting for them to decide whether they want to go ahead and pursue it or not.

I
Yeah, the feeling in the data protection working group meeting was that the design is as good as it's going to get on paper, and we need to prototype it, build it, try it out, and poke at it.

A
Got it — that's a pretty good place to be.

A
It's a super complicated design, so it's definitely taking a while to get here, but it sounds like it's coming together.

A
I see, yeah. Okay, we should probably discuss that at the next CSI meeting. Okay, all right, cool! Thank you for the update on that one. This new ReadWriteOncePod access mode looks like it's already done. Do we still need to track this, or can I remove it?
C
We normally wait, right? Every time we talk about a feature that just moved to beta, we've said we need to wait for one more release before we move it to GA. So I think this feature should be treated the same way, right?

A
I guess — is there an opinion on this specific feature? Do we want to move it right away, or do we want to hold off?

C
I think it's good to just, you know — if we have been saying this for every feature, then I think we should just treat it the same way, because otherwise we'll have to look at each feature and explain why this particular feature does not have to wait.

E
Yeah, I mean — I'm not saying this for this particular feature specifically, but in general I don't think that's a policy we have to require for every single feature.

C
Okay, in that case I think we can in fact just go over all the other features as well, and decide for each.

A
Yeah, okay, sounds good. So I think we'll evaluate on a case-by-case basis whether to wait one cycle or two before moving from beta to GA. For this specific one, it sounds like there's consensus to hold off for another quarter. Anyone object?
A
Okay, I'm going to go ahead and delete this one then, and we can reintroduce it when we decide to move it to GA. Then we have runtime-assisted mounting. This is a design that Deep was helping drive.

A
It would allow, kind of, the runtime to handle mounting. Do we have Deep, or anybody else on the call who might be able to speak to this?

A
So I'm going to mark this as looking for a new owner, since we haven't heard from Deep for a while. If anyone is interested in helping drive this feature, please either speak up on this call or reach out offline, and we're happy to point you in the right direction.

A
The feature here is really allowing the mounting to be handled by the runtime instead of the host OS, so if you're interested in this feature, please feel free to reach out. Otherwise, if we don't get an owner for a while, we'll just go ahead and drop it.

A
The next feature is enabling privileged containers for Windows, to replace CSI proxy for Windows.
H
Yeah, so hi, this is Manu. We discussed internally and we are interested in contributing to this. Just a few minutes ago I reached out to Mauricio over Slack and asked if we could set up some time with him to talk in detail about what the work is and how we can best help out in this respect.

A
All right, cool. This is going to be all for this cycle; let's move on to the next one. So, node expansion secret was moved to beta. It's the same question here: do we want to move it immediately to GA, or do we want to hold off? Any opinions?
A
Okay, so then I'll go ahead and remove this, and we'll wait another cycle before reintroducing it. Okay, so next up is SELinux relabeling using mount options — CSI driver API change required, beta, on by default. The last update was that there was a bug; it's disabled in the master branch and may need to do another beta in 1.28.

G
So I'm fixing the bug. I have a PR open, and I have end-to-end tests open for it, and if I can manage to enable it in 1.27 — I don't know, .3 or .4 — then it will be awesome. But yeah, I need to sync with Jordan on whether we can enable it in a patch release.
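For reference, the CSI driver API change mentioned for this item is the seLinuxMount field on the CSIDriver object. A minimal sketch, assuming the SELinuxMountReadWriteOncePod feature gate the feature sits behind; the driver name is illustrative:

```yaml
# A CSI driver opts in to mount-option-based SELinux relabeling via the
# seLinuxMount field on its CSIDriver object.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # illustrative driver name
spec:
  seLinuxMount: true   # kubelet may mount volumes with "-o context=<pod SELinux label>"
                       # instead of recursively relabeling every file after mount
```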
A
Sounds
good,
it
sounds
like
this
is
work
in
progress
going
to
remain
in
beta,
a
bug
is
being
fixed
and
maybe
Cherry
Picked
back
to
127
Branch
if
possible.
Thank
you.
John
next
is
CSI
migration
remove
entry
GCE.
A
Are
we
committed
to
this?
On
the
GCE
side,
mat.
L
We
it
we
we'd,
be
happy
to
have
help
for
doing
it.
We
we
haven't
committed
resources
to
it
yet,
but
if
somebody
wants
to
do
it,
we'll
definitely
work
with
them
as.
L
Cool,
if
you
ping
me
their
name
I'll,
be
happy
to
reach
out
and
encourage.
E
L
A
Okay,
so
this
may
already
be
underway
and
then
Matt
and
team
can
help
review
so
we'll
keep
an
eye
on
this
I'll
get
an
update
next
time.
This
one
is
for
1
30,
so
we're
holding
off
on
that
next
one
is
EBS
was
already
done.
A
Then
Azure
files
we're
saving
it
for
130
and
then
Azure
disk.
Anyone
have
an
update
on
that.
One
looks
like
it's
done.
Any
follow-ups
required
here.
C
J
C
Not
so
I'm
not
sure
if
we
can
do
that
on
before
Azure
file
is
removed
right,
a
real
file
is
1.30.
A
So
maybe
we
just
remove
this
all
together,
then
it
doesn't
sound
like
there's
anything
actionable
for
128.
J
C
So
humble
has
some
sender
email
to
the
main
list
asking
for
deprecation,
because
there
are
no
users
for
this
entry.
Plugin.
C
Yes,
this
is
what
we're
talking
about
the
the
entry
one
right.
Does
it
make
sense
to
move
CSM
migration
for
RBD
to
GA,
but
since
there
are
no
users,
then
why
do
we
want
to
do
that?
Because
then
we
have
to
make
sure
that
that
always
works.
No
one
is
using
even
using
it
then.
D
Yeah
yeah,
like
I'm,
saying
because
well
like
their
users,
I
I
can
later
confirm.
C
Email,
if
you
know
there
are
any
other
entry,
RBD
plugin
users
yep.
You
can
also
reply
to
that.
Okay,
yeah
yeah.
A
That's a good call-out, Xing. So for anybody on the call: if you know somebody who's using Ceph RBD — the in-tree version — they should reply to Humble's email on the SIG Storage list and let us know, because the assumption is that nobody's using it, and instead of doing a clean CSI migration, as we did for a lot of these other plugins, if no one's using it, the plan would just be to deprecate the in-tree one without a proper migration story.

C
For CephFS it's more clear, right — it didn't even have an alpha version, and Humble sent that email out earlier, and we didn't get any reply saying anyone's using it. And then for Ceph RBD, it's just that we already made a lot of effort moving that to beta, but he told me there are no users, and I think that was also confirmed for the OpenShift side: there are no users using it there.

A
We'll keep them both here and see what happens. For folks on the call: if you, or someone you know, is using Ceph RBD or CephFS, please reach out and let us know; otherwise they will get deprecated.

C
I asked Grant, and he said there definitely are users, so that part we know, but he said they need some thorough testing before they want to move that to beta on by default. Right now it's off by default, so they say not 1.28, probably 1.29.

A
Okay, so I'll go ahead and drop it for 1.28 — we can reintroduce it... actually, I'll leave it here for tracking, I guess, for 1.29. Okay, sounds good. Thank you, Xing, for the update on that one.
D
So just wondering: for some other in-tree ones, like NFS, there's no migration path?

D
And we'll keep those in-tree, like, forever, correct? Yeah, okay — or maybe it's just the community that maintains them, right? Yes.

A
This one — so we'll check with Deepak. And end-to-end tests: anyone have an update on this one?

A
Cool, so if anyone's interested in jumping in, this might be a kind of low barrier to entry. If you are interested, feel free to reach out to Xing or any of the tech leads on the call, and we can help you get started.
H
Yeah, I think from the AWS side we may be interested in making some upstream contributions to either this or some of the other items here. We'll need to understand what the work is, but as and when we have bandwidth, we will try to contribute on this.

A
Awesome. Cool, thanks, Manu — I'll note that here, and then if you decide, just let us know.
A
Next up is "prevent volume mode conversion between source and target PVC" — so again, same story: it's already beta. Moving it to GA — do we want to do it this cycle or wait? I think we decided to wait before moving to GA. Anybody object to that?

C
I think we should wait, because we actually want the backup vendors to make a change in their logic if they are relying on this feature; otherwise they can be broken, because we are also going to flip the flag in the external-provisioner and snapshotter to true — the feature flag for this feature. So once we enable that, if they don't change their logic, then it could be broken.
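For concreteness, the flag being discussed is the prevent-volume-mode-conversion option on the external-provisioner and external-snapshotter sidecars. A minimal sketch of what flipping it to true looks like on a provisioner deployment — the deployment, container, and image tag below are illustrative, and a real driver would carry more sidecar arguments:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-provisioner-example        # illustrative; normally part of a driver's controller Deployment/StatefulSet
spec:
  replicas: 1
  selector:
    matchLabels: {app: csi-provisioner-example}
  template:
    metadata:
      labels: {app: csi-provisioner-example}
    spec:
      containers:
      - name: csi-provisioner
        image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
        args:
        - "--csi-address=/csi/csi.sock"
        - "--prevent-volume-mode-conversion=true"  # the flag whose default is being flipped to true
```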
A
Next up is retroactive default StorageClass — GA in 1.28. So I guess this work should start now.

G
Yeah, I checked before the meeting; we want to do GA in this release.

A
Okay, so it sounds like, Jan, you've confirmed we'll target GA in 1.28. Yes? Cool, all right, we'll keep it here. And has work started, or not yet? Not really. Okay, so we'll keep that as not started and keep tracking it for 1.28. Thanks, Jan. Next is quality of service for volumes. I know Sunny's been working on this; I assume she'll continue to drive this to Alpha this cycle.
H
We have been getting a lot of interest from customers on this, and in order to unblock our customers we have gone ahead and implemented a temporary custom-annotations-based solution for this capability. But we are very, very interested in seeing the standardization effort being completed and agreed upon, so I would love to be part of that discussion.

H
Unfortunately, I do have a hard stop right after this, but I did want to say that there are a couple of things we would like to clarify about this, and it would be great if we could have a discussion around that — if not today, then at some point in the very near future.

A
So Sunny, maybe we could set up a follow-up meeting to discuss this, where Manu and folks can join as well.

L
Oh — how about we discuss the timing of that on Slack or something? That may be a better use of the time.

L
Yeah, and also, just to add — Manu, thank you for the comments. Google will also probably be doing a custom-annotation type thing while the KEP matures, so maybe we should also discuss what exactly you're doing, so that we don't end up totally inconsistent between Google and AWS.

H
In terms of the interim solution as well — yeah, that was exactly what I wanted to cover as part of a design meeting. The solution that we have in place, while it's proprietary for now, I think there is some room to have other storage providers utilize it as well in the short term, until the final thing becomes available. So if you folks are interested in utilizing that, we are more than happy to talk about how we can make that possible.
A
Foreign,
thank
you
both
for
the
updates.
There
sounds
like
a
design
is
moving
forward.
Full
speed,
yeah.
A
H
A
A
Jonathan
is
this
something
that
you
will
continue
to
drive.
G
Here,
I'm
pretty
sure
he
didn't
start
anything
and
I
don't
know.
Okay,
so
can
we
keep
it
as
it
is.
A
Foreign,
so
it
sounds
like
this
is
effectively
being
dropped
and
I
think
that's
fine
I'll
go
ahead
and
remove
it,
and
if
we
decide
to
pick
it
up
again
or
we
find
a
new
volunteer,
we
can
reintroduce
it.
A
Next
up
is
PV
last
phase
transition
time.
This
is
Alpha
for
this
cycle.
G
Very
good
yeah,
that
is
a
API
pr1
review
assigned
to
Michelle.
A
Next
up
we
have
address
issues.
Pvc
created
by
staple
sets
will
not
be
Auto
removed.
A
A
And
actually,
let
me
go
back
here
and
mark
this
as
done.
A
Okay,
next
is
volume.
Expansion
for
stateful
sets
like
design
is
planned
for
128.
Anything
new
here.
B
No
yeah
I
think
design
is
still
planned.
We
have
had
some.
B
C
Yeah
so
I
think
Ashley
says
he's
going
to
get
back
to
me
on
Monday.
He
said
most
likely.
He
can
start
this,
but
he
want
to
check
something
you
forget
about
me.
A
A
Okay,
I'm
going
to
go
ahead
and
drop
it
if
there's
no
objections
and
then
same
thing
for
the
next
one
looks
like
there
is
no
owner
and
no
one
has
committed
to
pick
this
up.
So
I
will
go
ahead
and
drop.
It.
A
Okay,
anyone
have
an
item
that
should
be
on
this
list.
That
is
not
yet
on
this
list.
A
Okay
with
that
we'll
go
ahead
and
switch
back
to
our
agenda
doc
and
go
through
the
rest
of
the
agenda
here.
So
first
up
we
have
PRS
to
discuss.
Rion
write,
end-to-end
test
for
storage,
V1
CSI
driver
endpoints,
plus
three
endpoints.
Are
you
on
the
call?
Can
you
talk
about
this
yep.
K
Yes,
I
am,
can
you
hear
me
properly?
Yes,
all
right
lost
weight
loss
meeting
we
were
here
to
say:
October
announced
that
we're
gonna
start
looking
at
all
endpoints
for
storage.
That
does
not
have
conformance
tests
and
we
started
to
work
on
that.
We
picked
up
three
three
quick
wins
the
delete,
storage
collection
and
the
patch
and
replace
for
the
same.
There
was
test
in
place
for
create,
read
list
and
delete.
K
So
what
we
did
is
we
looked
at
the
pattern
of
that
test
corrected
for
because
there
was
a
lot
of
changes
recently
in
the
way.
E2
exist
is
written
for
conformance,
so
we
updated
it
to
have
the
right
stylish
arrangements
for
conformance
test,
and
then
we
added
the
three
new
endpoints
so
now
that
new
tests
cover
all
seven
endpoints
for
that
specific
resource.
K
So
the
idea
is
to
have
the
stage
reviewed.
We
already
have
had
a
quick
review
with
only
one
small
knit,
so
I'll
appreciate
some
more
reviews.
It
is
quarter
to
five
in
the
morning
in
New
Zealand,
where
our
people
are
also
shortly
within
the
next
few
hours.
We
will
quickly
fix
the
net
and
I
appreciate
this
on
the
test
grid
as
soon
as
possible.
Then
the
idea,
once
we
got
this
on
the
tester,
it
has
to
run
its
two
weeks
for
like
free
once
it's
past
the
flight
3.
K
We
will
promote
it
to
conformance,
and
then
we
plan
to
remove
the
current
test
to
reduce
the
dish
load.
So
basically,
then
have
just
one
single
test:
testing
all
seven
endpoints,
but
we'll
only
do
that
once
we
monitored
at
least
for
a
couple
of
weeks
to
make
sure
that
the
promoted
test
does
not
flake
on
us
and
we
don't
remove
a
good
test,
replacing
it
with
a
birthday.
So
we
do
new
monitor
afterwards.
A
Awesome,
thank
you
so
much
for
the
update
and
thank
you
for
giving
an
update
so
early
in
the
morning.
Your
time
I
appreciate
it.
It
sounds
like
a
follow-up
step.
Here
is,
let's
just
make
sure
we're
getting
folks
to
help
review
this
so
folks,
on
the
call
can
help
review
and
get
this
moved
along.
That
would
be
awesome
and
then,
once
it
gets
merged,
looks
like
it'll
go
into
a
two-week
quarantine
to
ensure
it's
not
flaky,
and
then
it
should
be
done.
A
K
Thank
you
very
much.
We
really
appreciate
some
reviews
and
it
is
Friday
in
New
Zealand.
So
if
we
we'll
try
and
get
to
the
reviews
today,
otherwise
early
next
week
on
your
Sunday
we'll
deal
with
it,
get
it
in
as
soon
as
possible
and
then
the
monitoring
for
the
two
weeks
flight
free.
We
at
the
moment,
you
can
only
see
two
days
on
postgrad,
but
we
have
a
trick
that
we
basically
screenshot
the
the
test
grid,
monitor
it.
So
we'll
come
back
in
two
weeks
time
we'll
be
peers
reporting
on
the
flake
freeness
invertise.
A
All
right
next
up
is
design
review,
so
Sunny
for
the
CSI
spec
modify
volume.
It
sounds
like
you're
gonna
set
up
a
follow-up
meeting
to
discuss
with
folks.
So
if
anybody's
interested
please
reach
out
to
Sunny
offline
on
slack
and
she
can
help
get
you
added
to
the
meeting.
But
if
there's
anything
you
want
to
talk
about
in
this
meeting,
Sunny
feel
free.
F
Yeah,
so
just
bring
everybody
up
to
speed
that
the
design
decision,
considering
all
the
storage
prior
there
is
that
we
will
use
the
volume
qos
class
as
a
medium
for
a
cluster.
It
means
to
manage
the
configurations
and
then
the
end
user
will
apply
the
volume
qos
class
for
a
different
performance
parameters
setting,
and
there
are
some
discussions
or
questions
coming
up
from
the
design
that
that's
new
to
the
group.
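To make the model being described concrete, here is a rough, hypothetical sketch of the shapes under discussion. The kind and field names are illustrative only, since the KEP was still being designed at this point: an admin-managed class object carrying the performance parameters, and a PVC that references it by name.

```yaml
# Hypothetical admin-managed QoS class (names and fields are illustrative,
# not the final API): the cluster admin owns the performance parameters.
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeQoSClass
metadata:
  name: gold
driverName: example.csi.vendor.io
parameters:
  iops: "16000"
  throughput: "600Mi"
---
# The end user picks a class by name on the PVC; the referenced class is
# the "medium" through which the admin-defined settings are applied.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
  volumeQoSClassName: gold   # hypothetical field name for the class reference
```

The two "paths" debated below are: the user switching a PVC from one class to another, versus the admin editing the parameters inside a class and having that fan out to every volume using it.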
C
Right — so for us it's more that, in our case, it's more admin-driven. That's why I actually brought up the volume QoS class case where the admin goes and applies, makes changes to the volume QoS class, and then we can apply that to all the volumes, rather than directly changing it on a particular volume. So it's more that, when you're changing this, it's affecting everything that is using that QoS class.

C
That's why I thought the proposal actually covered that — okay, right — because you have these two paths. One path is you go modify the QoS class parameters, and then it runs through your controller: the resizer controller will be modifying all the volumes that have that QoS class, right? So that model would work for us, but...

C
...you'd actually want a class that is compliant, that you can apply, right — you cannot apply it if it does not work. So, it's just — you are changing, along the same path, from, say, silver to gold.

C
Well, so in this case, when someone is changing the parameters in the volume QoS class, that's still an admin who understands what it is going to change. It's not like a regular user goes and changes that, right? So the admin would have some knowledge and understand that he's changing something that is compliant, from, let's say, A to B.
L
So, just to pile on to that: one of the motivating examples we have for this KEP is exactly the case where the application dev, who is not the Kubernetes cluster admin, is tuning a workload — in exactly the same way that the application dev, who is not a cluster admin, chooses the size of their disks, they are also tuning the IO.

C
Right — so, I know that if you look at the diagram that Sunny has there, that covers one case, right? Yeah, that one is still there; I'm not saying that one is not a valid case, but I'm just saying that's not something that works for us.

C
No, that's not what I'm saying. Can you share that diagram? I think that diagram is actually pretty clear. I'm not saying just one way — that's definitely not what I'm saying. I'm just saying that for us to be able to support something like this, we need that second case. That's all I'm saying, whereas I think this diagram actually captures everything.

L
Okay — I thought you had said that modifying the QoS class name in the PVC was not acceptable. Oh.

E
I guess what I'm wondering, though, is if, say, in the vSphere implementation we can kind of hide that restriction and still create a second QoS class, and then go through the first method, of the PVC being able to control when it changes. Because I feel like, in general, the flow of an admin changing the QoS class and then having that just roll out to all the volumes is kind of dangerous, or kind of risky.
L
Yeah, it could be, because if it's actually the case that IO speeds are controlled for the whole volume by the system — I mean, that's just a reality of the implementation.

I
But I did read it during this meeting, and the thing that sticks out to me is that the design calls for the system to apply the QoS class at creation time, but also for the volume to remain bound to the QoS class throughout the lifetime of the PVC, which is different than storage classes, right? Storage classes can be deleted, they can be changed, and Kubernetes won't do anything. So if we're going to go with this design — I mean, first of all, I think that's weird.

I
You've got to figure out how to explain to users why one class only applies at creation time and this other class applies all the time. But if we aren't going to do it that way, then yeah, changing the class should be the action that causes Kubernetes to do something.

I
I mean, changing the PVC's class from, like, silver to gold, for example, should cause Kubernetes — the reconciler — to reconcile something. If what silver means just changes, I wouldn't expect every PVC that is currently silver to go get updated. That's complicated and hard.
C
I guess what I'm saying is: I don't like the idea of the volume remaining bound to its QoS class throughout its life, and maybe one way to fix that is to say it's not — but if you change it, then it goes and reads whatever you changed it to and updates to that. So, like, if you define silver and then you create a volume, it's going to get the silver parameters.

I
It might be weird, but it would achieve the desired effect of giving you the ability to mutate the volumes without creating a gigantic scaling problem of, like, what do you do when the definition of silver changes and you have 8,000 silver volumes, you know?

F
Oh, so you're saying we will implement the left side of this diagram, but not the right side, because of scaling issues.

I
Yeah — I mean, just imagine: where do you even store the reconciler state to tell you that I've gotten through 379 of the, you know, 8,000 volumes? How do you even keep track of where you are in the process of updating things? It's really hard. It's easier just to say, okay, every volume — or every PVC — has a spec volume QoS class and a status QoS class, and if they don't match we'll reconcile it, and if they match we won't do anything.
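A rough sketch of the spec/status shape being suggested here — the field names are hypothetical, simply mirroring how other Kubernetes objects pair desired and observed state:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
  volumeQoSClassName: gold            # desired class (hypothetical field, user-editable)
status:
  currentVolumeQoSClassName: silver   # last class actually applied (hypothetical field);
                                      # a controller reconciles whenever spec and status differ
```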
L
So
I
I
have
a
question
to
in
in
your
implementation.
L
Do
you
actually
do
things
per
volume
or
things
just
done
at
the
sort
of
storage,
con
control,
plane
level
because,
like
I'm
thinking
the
conflict
here,
is
that
if
there
is,
if
our
API
is
actually
a
modify,
a
volume,
that
kind
of
presupposes
that
the
storage
provider
has
the
ability
to
do
per
volume,
modifications.
C
C
Right
but
it's
by
you,
but
if
you
apply
that
API
it
is
applying
that
on
every
volume
level
there
is
a
volume
level
API
for
you
to
change
that
like
apply,
apply
that
policy
again
and.
E
And
currently
it's
it
doesn't
support,
say
changing
the
policy
of
a
volume.
Yeah
versioning
of
the
policy
and
updating
to
a
new
version
of
the
policy
is
that
right.
C
Right,
it's
like
you,
change
the
you
change
the
policy
itself
and
then
the
contents
of
the
policy
or
the
parameters
of
the
policy.
Then
you
reapply
it.
A
So
folks,
time
check
we're
we're
at
the
end
of
the
hour.
Let's
do
a
follow-up
meeting
I
think
there's
a
lot
of
interesting
kind
of
topics
that
we
need
to
follow
up
on
here,
so
Sunny.
If
you
can
help
schedule
the
next
meeting
on
this,
we
can
continue
the
discussion.
L
Yeah
we've
actually
begun
a
thread
on
the
Sig
storage
slack.
So
please,
everyone
pile
on.