From YouTube: Kubernetes SIG Storage Meeting 2022-03-10
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 10 March 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.88ry3bnhqkwc
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A
Okay, today is March 10, 2022. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. So today we're going to go through the agenda. As usual, we're going to go through the 1.24 planning spreadsheet to figure out what items the SIG is working on, what the status of each of those items is, and what upcoming deadlines there are to be aware of.
A
That requires you to have declared that you're going to do that by the 23rd, which is coming up soon. The next big date to be aware of is the 30th, just about three weeks from now, which will be the code freeze for 1.24. And if you have anything else that you want to chat about, feel free to add it to the agenda and we'll get to it after the status updates on the planning spreadsheet.
A
Okay, I think we are good to go. I don't know if the owner is on the line for the first item: CSI online/offline resizing.
B
Yeah, so I opened a PR yesterday for cleaning up some of the volume resizing GA work, removing some deprecated behavior. We used to call NodeExpand between NodeStage and NodePublish; we deprecated that, because we are going to call NodeExpand always after NodePublish. That was causing confusion, so I opened a PR for that. I'm also working on ReadWriteMany, and actually ReadOnlyMany, volumes, where we didn't use to call NodeExpand on every node.
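The call ordering B describes, with NodeExpandVolume now always coming after NodePublishVolume and the old between-stage-and-publish path deprecated, can be sketched roughly like this; all of the helpers below are hypothetical stand-ins, not actual kubelet code:

```python
# Illustrative sketch of CSI node-side call ordering; hypothetical helpers,
# not real kubelet code.
calls = []

def node_stage(vol):
    calls.append("NodeStageVolume")    # make the volume available on the node

def node_publish(vol):
    calls.append("NodePublishVolume")  # bind-mount into the pod's target path

def node_expand(vol):
    calls.append("NodeExpandVolume")   # grow the filesystem after mount

def mount_and_maybe_expand(vol, needs_expand):
    node_stage(vol)
    node_publish(vol)
    # The deprecated flow called NodeExpand between stage and publish;
    # the cleaned-up flow always expands after publish.
    if needs_expand:
        node_expand(vol)

mount_and_maybe_expand("pvc-1", needs_expand=True)
```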
B
I had a PR for that a long time back, like a year ago. I'm just reworking that PR; I'm almost done, but I have to fix it up for the recently changed recover-from-resize feature that merged recently. So I'm just updating that, and I should be opening that second PR maybe today or tomorrow. Then, after these two, I think I will open another PR to flip the feature gate to GA.
A
Awesome, thank you, good progress. Next up we have recovering from resize failures; any update on that one?
B
I have to work on a slight design item for that one. Previously we only allowed recovering to a value higher than the previous value, so it had to be at least one byte higher than the previous actual volume size. Michelle and I discussed that it should be possible to recover all the way back to the original size, so for that I have to tweak the design a little bit.
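The rule change under discussion, from "strictly larger than the last actual size" to "anything down to the original size", might be sketched like this (a hypothetical function with sizes in bytes, not the actual Kubernetes validation code):

```python
# Hypothetical sketch of the recovery validation rule; sizes are byte counts.
def recovery_allowed(requested, last_actual, original, relaxed=False):
    if relaxed:
        # Proposed rule: recovering all the way back to the original
        # requested size is fine.
        return requested >= original
    # Old rule: the new request had to be at least one byte above the
    # last actual volume size.
    return requested > last_actual

GiB = 1024 ** 3
# A user who asked for 10 GiB, mistakenly expanded to 50 GiB, and wants
# to go back to 10 GiB:
old_ok = recovery_allowed(10 * GiB, 50 * GiB, 10 * GiB)                # False
new_ok = recovery_allowed(10 * GiB, 50 * GiB, 10 * GiB, relaxed=True)  # True
```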
A
Sounds good, thank you. Next up: issues related to detecting mounted volumes or mount points, from Jing. Is Jing on the line by any chance?
A
Let me go ahead and mark that as started.
C
The original author hasn't posted anything on the next steps yet. I am thinking of pinging him to see whether he plans to work on that; otherwise we can find some help, or I can pick it up. Basically, I think we need to import the new version of the mountinfo utility, and then we can utilize the mount-check function in Kubernetes.
D
Hey, Patrick here. So this wasn't something that we initially planned to work on for this release cycle, but it was pointed out that it is a new object in a beta group and we have kept it in beta too long, so we tried to pull together whether it's ready or not.
D
The open item was still around integration with the autoscaler. There was a PR open for that, but it hadn't been reviewed for a while. Given the choice between promoting it to GA and doing another beta, we instead opted to try the GA promotion, because it is working: when you don't use the autoscaler, it does what it's supposed to do. You get the ability to fully schedule pods that use all the storage capacity in the cluster.
D
Without this feature, in this scenario, you end up with a situation where the pod can't be scheduled, because the scheduler is always picking the wrong node. So I updated the KEP. Well, we merged the KEP, and we then had folks from SIG Scheduling questioning the decision a little bit, and they started to look at what would be needed to make autoscaler integration work better. That's where we are at the moment.
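The failure mode D describes, where without capacity information the scheduler keeps picking a node whose local storage cannot actually fit the claim, is roughly the following; this is a deliberately simplified illustration, since the real feature uses CSIStorageCapacity objects per topology segment rather than a plain dictionary:

```python
# Simplified illustration of capacity-aware node filtering; not the real
# scheduler plugin.
def feasible_nodes(nodes, claim_bytes, capacity_by_node=None):
    if capacity_by_node is None:
        # Without capacity data every node looks feasible, so the scheduler
        # can keep choosing a node where volume provisioning then fails.
        return list(nodes)
    # With capacity tracking, nodes that cannot fit the claim are filtered out.
    return [n for n in nodes if capacity_by_node.get(n, 0) >= claim_bytes]

GiB = 1024 ** 3
caps = {"node-a": 5 * GiB, "node-b": 100 * GiB}
# Only node-b can actually fit a 10 GiB local volume:
fits = feasible_nodes(["node-a", "node-b"], 10 * GiB, caps)
```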
D
I think I've convinced the people who were a bit skeptical about going GA that we are okay with doing it. There's a KEP update pending with further instructions, documenting the limitations, and there is a code PR pending that does the API change and removes the feature gate check. So from that perspective, it's fully implemented.
D
It just all needs to be merged to be completed for 1.24, and then we will know where future work might be needed, if it really affects users. Perhaps the biggest stumbling block for me at this point: I don't know who is going to deploy local storage in a cluster with the autoscaler, how they are doing it, or how they want it to work. If I don't have user feedback, it's fairly low priority compared with the other things that I could be working on, so that's currently where I am.
D
No, not anymore. It looked like they would for a while, but I've been working with Aldo in particular intensively over the last one or two weeks, running scale tests again to show him that it works and how it works, and discussing the open autoscaler PR again and how it could be improved. So I think they are now accepting that we move ahead with this. They weren't too happy, because they would have liked a much different solution.
D
But I'm not sure whether that other solution they have in mind is practical, or what other drawbacks it would have. That's definitely an area that would need some more investigation, and my feeling is that we shouldn't block the current state of the feature on that, because it's useful as it is.
A
Okay, next item is CSI ephemeral volumes, existing API.
E
Yeah, that's me. So I'm working on a bug fix right now; someone raised this in Slack. Taoshen mentioned that fsGroup does not get applied on CSI inline volumes, so I'm working on addressing that. I think I'll have a fix out for review shortly; I'm just testing it today. There's still an open bug on volume reconstruction that I need to look into, and I still have some work to do to make sure the tests are all covered.
A
Next item is volume group API, but I believe that item is blocked. What's the status of this one?
A
And then we have CSI out-of-tree: move the iSCSI driver, fit and finish. Looks like Humble left us a comment: progressing towards the 1.0 release.
G
You know, I think I tried to ping Andy and didn't manage to connect. Let me see if I can try harder on that.
A
Next item is the PVC/volume snapshot namespace transfer design. Anyone have an update on this one?
F
Right, so it's still review in progress right now, just waiting for someone from SIG Node to review it. It already got approval from the other side.
A
Next item is the volume populator data source: add metric support and testing. This is out-of-tree work. Ben, any updates on this one?
J
Okay, good. The metric support in the library is almost done, and then, after that, we'll be able to do the beta release there. There is an in-tree component of this, which is updating the feature gate to beta. I'm going to go ahead and push that PR this week, even before we have the other release out, just so people can review it and make sure there are no problems.
A
Anyone on the call interested in helping Ben with writing e2e tests for this feature?
F
So I believe the KEP is being reviewed. Let's see... Michelle just added some more review comments yesterday.
A
Thank you, Shane, for that update. Next item is change block tracking; any update on that one?
A
Awesome, sounds like we have progress, so that is good. Thank you, Shane. Next item is runtime-assisted mounting.
L
Hey, so for this one, yeah, I went through some more back and forth with SIG Node. One of their initial pushbacks was that they do not want a RuntimeClass to have any capabilities around storage.
L
So basically now I'm reworking that. I found an alternative: we can potentially rework it through a field in the pod spec on how PVCs get mounted, and specify the runtime-assisted, like the direct, mount as a field there. So I'm reworking the KEP with this approach.
A
Got it. So overall you found a way to move forward effectively within SIG Storage here, right?
H
So I am working on the volume reconstruction right now. I have a PR out for review, and I am testing it with more volumes than just two, to see if there are any performance impacts.
A
Matt, would you happen to know anything about this? No?
G
I don't, sorry.
A
No worries, I'll mark that as no update for now. Next up we have the different in-tree drivers that need CSI migration, so first up is vSphere.
B
Sorry, I was saying that if you want to enable it by default in 1.24... but I think it looks like that's going to slip, right? We're not going to do that.
F
I don't know, so we'll have to see, because it's confusing: I see there are different documents in different places, and we have different versions, so I'm just trying to sort that out. If it's still the same as what we declared before, which is 6.7u3, then we don't have to do anything and can continue.
A
Got it, okay, thank you for the update, Jing. Next up we have Azure Disk and Azure File. The last status here was: KEP merged, PR out, moving to GA.
A
Docs: so this one was merged and the docs needed an update. Any further update there?
A
Okay, I'm gonna mark those as no update and keep moving. GCE: any update on that one? Beta, on by default.
G
No, it is on by default. We're sticking with the plan of moving to GA in 1.25, so in terms of the 1.24 stuff, we're done.
G
The next updates will come, hopefully, for the next release.
A
And so the goal this cycle was to be beta and on by default, which is complete. Cool. Next item is AWS Windows support, beta on by default as well; any update? This one looks like it'll be delayed to 1.25. Any change in that?
A
I'll take it out, since we don't need to track this.
F
So Dim submitted a PR, the doc PR; that's been reviewed. I think that's the last thing left for this.
F
Yes, I think he has a go-ahead, yeah.
H
Yeah, all right, I reviewed a couple of PRs; they look okay. I think the only thing that's really missing is the e2e test, and that should be added easily.
A
Cool, thank you, Young. Next item is control of volume mode conversion between source and target PVC. The last update here was: work in progress, PR out. Any further updates here?
I
Hey, well, the PR is still out; I'm hoping it'll be merged soon, and then I have some follow-up PRs that are sort of dependent on that, so we'll put those out soon.
A
Got it. Okay, cool, thank you for the update. Next item is secret protection: prevent deletion while in use. It depends on the in-use protection KEP below, so let me see here: any update on this or the subsequent item, Hamasaki?
A
The remaining items are co-owned between SIG Storage and other SIGs, so let's go over those. First up, with SIG Auth: the user ID ownership in ConfigMaps and Secrets, preserving the default file mode bits set in atomic writer volumes.
A
Okay, sounds like we're still looking for an owner. So if anybody is sitting on the sidelines looking for an item, this might be a good one to jump into. We may not be able to get it done for 1.24, but you can start investigating and figuring out what needs to be done, and then we can pick it up for 1.25.
A
Next item is co-owned with SIG Node: non-graceful node shutdown. Last status update: work in progress, PR should be out soon. Any updates for this one?
A
Awesome, thank you, Shane. Next up we have SIG Apps: address the issue where PVCs created by a StatefulSet will not be auto-removed. Any updates on that one?
G
Yeah, so this is staying in alpha for 1.24. I think our goal will be to get it to beta in 1.25, but until we start that cycle, I don't know if there are going to be more updates here. The gap here is that I think we didn't actually end up having a great plan for how to validate this in alpha, which I will work on, although if any of the old-timers have any suggestions around that, I'd love to talk with them.
M
Hey, so I did not get much time to work on this, but I'm still looking into some of the review comments on the KEP.
A
Okay, cool, thank you for the update. And then the last item we have is execution hook; any update on this?
F
Yeah, this one is still in design. Okay, we can probably just remove it from this release, and we'll continue once it's transferred. Okay, cool.
A
Done. And now let's go ahead and switch back to our agenda. Looks like we have no PRs to discuss and no design reviews. Miscellaneous items: first up, Xing, volume snapshot GA.
F
All right, so we talked about this in the data protection working group yesterday as well. We have several phases for the volume snapshot API. In Kubernetes 1.20, the first phase, we added v1 support; phase two was in Kubernetes 1.21, where we changed the storage version from v1beta1 to v1, and then we also deprecated the v1beta1 API.
F
So now it's been almost a year, and our plan is to remove the v1beta1 API in the 1.24 release.
F
So I just want to see if there are any concerns. I brought it up yesterday and didn't hear any objections; we actually also talked about this one a year ago in the data protection group.
G
I guess that plan for the ratcheting webhook seems to have worked quite well; I think that was a good strategy for this. We only had one cluster where someone had a v1beta1 snapshot resource that would be invalid in v1, so I think that's good news, and yeah, we should probably proceed with the deprecation before anyone has a chance to start adding more beta resources.
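The ratcheting webhook strategy G mentions, validating new or changed objects strictly while tolerating pre-existing invalid ones, can be sketched as follows; `is_valid` here is a hypothetical stand-in rule, not the actual snapshot webhook logic:

```python
# Rough sketch of ratcheting admission; is_valid is a hypothetical stand-in.
def is_valid(obj):
    return obj.get("source") is not None  # stand-in for the real v1 rules

def admit(new_obj, old_obj=None):
    if old_obj is not None and not is_valid(old_obj):
        # Updating a pre-existing invalid object: allow it as long as it is
        # either fixed or left unchanged (the "ratchet"), so existing
        # clusters are not broken outright.
        return is_valid(new_obj) or new_obj == old_obj
    # Creates, and updates of already-valid objects, must be fully valid.
    return is_valid(new_obj)
```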
G
No, sure, I was making a small joke. I think we're actually in a good state for proceeding to v1, and any extra time we wait will just give opportunity for things to go wrong.
F
All right, okay then, yeah. Because right now, I think SIG Release is going to have a blog just for things that are going to be deprecated, so I think this would be a good one to add there.
F
So yeah, I sent this out and I got feedback from Humble. He actually wrote pretty detailed notes here, so it looks like they are still using that, so that's fine; I think we can keep those two last projects. That means, for the remaining ones, I think there are five remaining projects that we will submit for archive. So if there are any concerns, please speak up. Or should we wait a little bit longer? We can wait a little bit.
F
I can wait until the next meeting; maybe you can check again, to give people more time to respond.
F
So, okay, so it's actually four. And it's mainly just two, because the other one, the CSI API one, we know about, because that's just no longer used. So basically it's just the CSI driver image populator and csi-lib-fc. For those two, it looks like we have not gotten any response saying people need them.
A
Sounds good. Anyone on the call have any objections? If not, we'll give it another two weeks and then go from there.
A
Okay, any other items from anybody on the call, anything they hope to discuss? We have a little bit of time.
J
Ah, there was a discussion in a code review about the NFS subdirectory provisioner, the CSI NFS subdirectory provisioner. I was in a code review and people were saying this repo is not maintained anymore; maybe we should get rid of it. But I know...
F
I think when I checked the commits, this one still had commits. Can you send me the link? Because I believe...
J
It's under kubernetes-csi, but someone was updating the dependencies from Kubernetes 1.18, because it hadn't been updated since then, so it was just languishing.
F
This is the driver... oh, this is the subdir provisioner.
J
Yeah, sorry, I didn't put a link in the agenda; I didn't have it handy. I just wanted to mention on this call that I know people use the driver, and it could use some maintenance.
F
Okay, so yeah, if there is more maintenance needed, of course. So basically you want to ask if anyone else is interested in helping; is that it?
N
No, we've added provisioner support to this.
A
Cool, sounds good. And if anybody on the call is still using the external provisioner, the NFS subdir one, speak up and let us know, and let us know if there are any gaps between that and the CSI variant, and hopefully we can archive this at some point.