From YouTube: Kubernetes SIG Storage Meeting 2022-05-05
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 5 May 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.mme8bqjo4arv
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: If you have anything that you'd like to discuss, please feel free to add it to the agenda and we'll get to it after the initial items. You can find the link to the agenda doc in your invite.
A: So first up today, KubeCon EU is coming up. Some folks are going to be attending; specifically, Sheng is going to be participating in the contributor summit, and there will be a SIG Storage meet-and-greet as well. So if you are planning to attend any of these events, feel free to throw your name in the doc, and you can sync offline if you'd like. With that, we're going to move over to today's agenda.
A: So the group has been working on the 1.24 release, which had been delayed to May 3rd of this week, and it finally shipped, so 1.24 is now out. Today we're just going to go over the planning spreadsheet for 1.24 and wrap things up: get a final status on what made it into the release and what needs to be moved to 1.25. In our next meeting we will do a 1.25 planning session.
A: You can find the timeline for the 1.25 release here. An important date to be aware of for 1.25: June 16 is going to be the enhancement freeze.
A: If there's anything that you think the SIG should be working on, the next meeting is an important one to attend. That's where we're going to try to lock down what we're going to be working on, so please, please attend and bring your ideas. And if you're interested in helping, that's a good meeting for that as well, because that's where we assign folks to work on items for the next release.
A: So the first item here was already completed, so we're good. Next item: recovering from resize failures. Looks like we're moving this to 1.25.
A: Okay, I will move this to 1.25, and then we can see if Jing is still interested in working on it. The next item is determining mount points without relying on /proc/mounts, and there were PRs. This is moved to 1.25. Any other updates?
A: Okay, I'll plan to move that over. Next is storage capacity tracking for pod scheduling. Taking a look at this one: conformance test PR merged, blog being reviewed, external-provisioner PR is needed. Any updates on this one? Is Patrick on the line?
A: Next up we have CSI ephemeral volumes, and the GA was delayed here. Go ahead, John.

A: Thanks, Jonathan. Next we have the volume group API, and I believe the design is being worked on, updating the KEP. So we'll go ahead and get this moved over to 1.25.
A: I have not seen an update from Humble on this, and we're going to go ahead and move this to 1.25. I assume CIFS and Samba are the same. Michelle?
C: I think... I saw some effort around this area. I think this might actually be resolved.
C: I think it was mostly that there was some work trying to get it to build in our repos, and I think that's been done. I'll double-check.
A: Okay, if that's the case, then we can remove it from 1.25, but for now I'll plan to add it.
A: Okay, the next item is PVC and volume snapshot namespace transfer. This was a design for the quarter, targeting 1.25, so I'll go ahead and get that moved over. Any other updates here?
E: Actually, I opened another KEP based on the populator approach, and I've added it in the next line.
A: And that's number 15, Masaki.
E: This one. But yeah, we need to agree whether the transfer use case is covered by the KEP.
A: Got it, okay. Do we want to keep both of these open for 1.25?
A: Next is runtime-assisted mounting. This will also be trying for an alpha in 1.25. Yep.
A: And so, what's the state of the design right now?
A: Okay, next up is node expansion secret. This was alpha for this cycle and is being moved to 1.25. Anyone else have a comment on this?
A: And I believe we're going to need to move this to 1.25.
A: Okay, we're going to skip over Azure and GCE; those were complete. We're likely going to move those to GA next cycle, and OpenStack is already GA this cycle. Ceph: let's take a look at those. So yeah, it looks like we're moving those to 1.25.
A: Okay, then we have always honor reclaim policy.
A: Okay, I'll get that moved over. Next is controlling volume mode conversion between source and target PVC. The design was approved here, and it was an alpha. Implementation status: PRs merged and a couple left for review, docs PR to be merged, blog being reviewed. Any new updates on this?
A: Okay, thank you. Then, non-graceful node shutdown was complete: docs PR merged, blog in review. I'm going to go ahead and mark that as done. Anyone have any comments on that one?
A: Awesome, that is good news. And that is Jing; Jing, we have a couple of items I'll come back to since you're here.
A: Any updates on that one?
B: It requires further design. I don't know if she's there on the call, but me and Jan and some other folks talked about it. It's fairly complicated, actually, and has to be designed. I don't know if that can happen in this quarter; I think it will be an achievement if you can design it. It looks simple on the surface, but it's actually very complicated.
B: Yeah, if Charlie wants to pursue this, and if she's there, she might have to set up regular meetings and design it continuously, because there are all sorts of issues. Like: what happens if you're expanding four PVCs and the expansion of two PVCs succeeds, but the third one fails because you ran out of capacity? What should the recovery be? And many such things, actually.
A: Yeah, that sounds like a fun set of challenges to think through. So if anyone on the call is interested, this might be a fun design problem to tackle for 1.25: volume expansion for StatefulSets.
A: Okay, with that, let me go back to some of the items that we were hoping to get status updates on from Jing.
A: So I guess the first one here is issues related to assuming volumes are mount points. We said move to 1.25; any other updates there, Jane?
I: I do have a small PR pending, but I haven't finished the unit tests. There is actually another PR, opened recently, that is also doing a similar thing; I will check whether they're doing similar things and then coordinate about that change.

A: Yeah, got it.
A: Cool, thank you, Jane. And I think the second item we had here was determining mount points without relying on /proc/mounts. Any new updates on that one?
J: So I sent this one to the chat. Originally, Matthew Wong had opened this PR to optimize cleaning up mount points. This is causing some pain for a handful of customers for us (I'm from EKS), and Matthew basically just didn't have the cycles to take this across the finish line, so I've taken up implementing the option that everyone seemed to coalesce around in the original PR.
J: It basically detects a certain behavior of the umount implementation on Linux installations, and avoids doing these kinds of expensive mount-point checks if it can rely on that umount behavior.
So I don't think I've gotten any eyes from SIG Storage on this, but I've got all of the testing done, and I think it's ready for consideration.
A: And, Carter, do you want to jog everyone's memory on which approach it is that you went and implemented?
J: Okay, so in this comment Matthew gives the rundown, and we went with option one here. Basically, the primary (or most popular) umount implementation will fail with a non-zero exit code and reply "not mounted" when you try to unmount something that isn't actually a mount point. So we can basically skip our own checks and just rely on umount to do the check for us.
J: So, when the mounter is created, it will essentially detect this behavior by creating a temp directory and trying to unmount it, and then it can later skip the mount checks because it knows umount behaves in this certain way.
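[Editor's note: the probe described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code; the function names `isNotMountedOutput` and `detectUmountBehavior` are made up, and the exact wording and exit code of umount's error vary by implementation.]

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// isNotMountedOutput reports whether a failed umount invocation indicated
// "target is not a mount point": a non-zero exit code plus a "not mounted"
// message. This is the behavior the fast path relies on.
func isNotMountedOutput(exitCode int, combinedOutput string) bool {
	return exitCode != 0 && strings.Contains(combinedOutput, "not mounted")
}

// detectUmountBehavior probes the local umount implementation by trying to
// unmount a fresh temp directory, which is never a mount point. If umount
// fails with a recognizable "not mounted" error, a caller could later skip
// its own expensive mount-point checks and rely on umount itself.
func detectUmountBehavior() (bool, error) {
	dir, err := os.MkdirTemp("", "umount-probe-")
	if err != nil {
		return false, err
	}
	defer os.Remove(dir)

	out, err := exec.Command("umount", dir).CombinedOutput()
	if err == nil {
		// Unmounting a non-mount-point "succeeded"; can't rely on the behavior.
		return false, nil
	}
	exitErr, ok := err.(*exec.ExitError)
	if !ok {
		// umount binary missing or not runnable; behavior unknown.
		return false, err
	}
	return isNotMountedOutput(exitErr.ExitCode(), string(out)), nil
}

func main() {
	// util-linux umount typically exits 32 with "... not mounted." here.
	fmt.Println(isNotMountedOutput(32, "umount: /tmp/x: not mounted."))
	ok, _ := detectUmountBehavior()
	fmt.Println("umount reports 'not mounted' for non-mount-points:", ok)
}
```

The probe runs once when the mounter is created; if it returns false, the code would fall back to the original mount-point checks.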
I: Okay, so there is a way to verify that this umount behavior is the one in effect, and otherwise it can fall back to the original checks.
A: Anyone else have a comment on this? I know, Ben, you had some thoughts on this the last time it was discussed.
A: Cool, all right, thank you for that discussion. With that, there are no design reviews and no miscellaneous items, so I'll open it up to the room. Anyone have an item they want to bring up?
A
Or
I
think
if
it
collides
with
kubecon,
generally
speaking,
we
tend
to
cancel
the
kubecon
weeks
and
then
and
then
do
it
after
do
it
the
next
week,
so
my
proposal
would
be:
let's
cancel
the
next
meeting
and
then
do
planning
in
a
month
that
would
put
us
at
june
5th,
which
would
still
be
in
time
for
our
enhancement
freeze
on
june
16.,
it's
okay.
C: Yeah, we could do that, like next week or something.
A: No problem. Any other comments, concerns, or anything anybody wants to bring up for discussion today?