From YouTube: Kubernetes SIG Storage Meeting 2022-07-28
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 28 July 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.3z3l564d4dz1
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A
Okay, let's go ahead and get started. Today is July 28, 2022. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. On the agenda today, we're going to go over the 1.25 planning session. A reminder here: the upcoming code freeze is August 2, so it's coming up very quickly.
A
If you have features that you are developing, please keep that date in mind. If you have any PRs that need attention, or designs that need attention, feel free to add them to the agenda; you can find the agenda link in your calendar invite. We already have some topics for discussion that we'll get to. First, let's go ahead and jump into the 1.25 planning spreadsheet and get status updates on the features that folks are working on, starting with delegating fsGroup to the CSI driver instead of kubelet, including updating end-to-end tests.
A
Any CSI driver updates? Is the owner on the line?
B
I don't have any update here. I think nothing has been done in the past two weeks.
A
Okay, we'll mark that one as no update. Then we have issues related to volumes and mount points. Jing, are you on the call?
C
Yes. So I think, for this one... can you check the next one? Sorry, I forgot it.
C
So there are two related mount point PRs, and both are finally merged. The bigger one here, I think, still needs some work, but the PRs that merged are related to item five, more or less. One is to use a faster approach, a faster way to check mount points using the newer kernel.

C
If the kernel supports that. And the other is that it will skip the mount point check during mount cleanup if it's not needed. So both of them will definitely help a lot with checking the mount point, where before there was some performance issue.
C
Yes, I think there's probably still a little bit of an issue we want to address, but with those two PRs merged, I definitely think the performance issue related to checking mount points will get improved.
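For context, the classic way to detect a mount point, which the faster kernel-assisted check in those PRs improves on, is to compare device IDs between a path and its parent. A minimal sketch in Python; this illustrates the general idea, not the exact approach in the merged PRs:

```python
import os

def is_mount_point(path: str) -> bool:
    """Classic mount point check: a directory is a mount root if it
    sits on a different device than its parent, or if it *is* its own
    parent (the filesystem root). Bind mounts on the same device can
    be missed, which is one reason newer kernel APIs are preferred."""
    st = os.lstat(path)
    parent = os.lstat(os.path.join(path, ".."))
    return st.st_dev != parent.st_dev or st.st_ino == parent.st_ino

print(is_mount_point("/"))  # prints True: the root is always a mount point
```

Python's standard library exposes essentially the same check as `os.path.ismount`.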
A
Okay, sounds good. So I'll keep that one open, and then we'll go ahead and close that, number five. Number six has already been marked as done. We'll move to number seven, CSI ephemeral volumes, existing API.
D
I have the feature gate PR open. I know Michelle and Jordan already started taking a look, so it's in review; hopefully that'll be ready in time for the feature freeze. And I have the placeholder PR open.
D
The last thing I need to do soon is to update the issue description. I don't have permission to do that, so my question is: is there some way for you to grant me permission to update the issue description, or should I just paste it over to you to update?
A
Yeah, you can go ahead and send a message in the SIG Storage Slack to me, Yan, Michelle, and Xing, and one of us will go ahead and update it.
A
All right, cool. Thanks, Jonathan, we'll follow up offline. Next is local ephemeral storage resource management.
C
Yes, so we want to promote it to GA, and I just opened the PR for the change. It's a little bit more complicated than before, because we realized there are some systems, especially rootless systems, that cannot support this, since they cannot correctly check the root file system. And in order to, you know, avoid breaking them: right now those systems can just disable the feature, since it's gated, but once we remove the feature gate it will break their testing. So in order to support those systems, I talked to SIG Node a few times, and they agreed.
C
I added a kubelet config to still allow those systems to disable this feature. So my PR adds a kubelet config called localStorageCapacityIsolation, and by default it is true; the rootless systems can use that to, you know, disable the feature. So yeah, please review that change.
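A sketch of what that opt-out could look like in a KubeletConfiguration file; the field name is as discussed above, and the exact shape should be treated as illustrative:

```yaml
# KubeletConfiguration fragment (illustrative). Rootless or other
# systems that cannot correctly check the root filesystem can turn
# off local storage capacity isolation; the default is true.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
localStorageCapacityIsolation: false
```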
A
Cool, thank you, Jing. Seems like a nice improvement. Next up we have the volume group API: snapshot consistency groups and spreading. This is a design for this cycle. Anything new here, Xing?
E
So I did find a few implementations of this, so I'm just looking at those. Some of them use the two-object model, so it's very similar to what we have in the current KEP, but others just use one. So I think I'm going to summarize those in a document.
A
Cool, thank you, Xing. The next two are complete, so we'll skip those. The next two we dropped from the 1.25 cycle, so we'll skip those as well. Moving on to CSI volume health: additional metrics and/or events, and end-to-end tests, staying in alpha for this cycle. Anyone have an update on this one?
E
So I think it is in progress. Tim has been working on updating the APIs and, you know, bringing it up to date, and also updating the CSI pipeline. I see there are some people working on this one actually on the call; if you want to speak up, please go ahead.
A
Sounds good. So it sounds like the APIs are being updated to address KEP comments and changes; hopefully we'll get that in before the code freeze. All right, next up we have...
E
So I think this one, since it's out of tree, right, it's not strictly following that schedule; this one is kind of tracked out of tree.
E
I'm talking about COSI, because this has been tracked out of tree, we moved it out, but we were hoping to get this one, to get the code, to get everything done by the end of this month. I'm not sure if we are there yet, because there's also, you know, the image-building part, and I'm not sure if that will take longer than that. Yeah, but I think work will continue even after the code freeze.
A
Makes sense. All right, next up is the node expansion secret. Any update on this one? I see a comment.
B
So I got a preliminary API review from Michelle, and I have an implementation of most of the functionality. I'm adding unit tests right now, but I'm pretty sure I will not finish the volume reconstruction before the feature freeze. So the feature will work, we can test it, but after you restart kubelet, all the state in kubelet will be lost, and it's not currently able to...
B
...reconstruct it from the mounts on the machine in 1.25, because I'm just reworking reconstruction, and I can't add SELinux to reconstruction in two or three days. So the question is what to do: shall we merge it half-implemented and continue next release, or...?
A
Yep, plus one to Michelle; I think that sounds good to me. Since it's an alpha feature under a feature flag and disabled by default, it's okay to have it partially complete and then finish it in the next cycle.
E
So, right, we have that PR: the PR to turn this on by default got merged, and then there's another PR to address review comments, basically just to say we dropped support for vSphere versions lower than 7.0u2. That one is also merged. Yeah, I think code-wise this is done.
E
So I do have a question, because there's a thread, he was the one who brought it up, regarding Windows. There was a question regarding Windows support, because right now that is alpha. You're asking: when we move this to GA, what is the plan for Windows support? So right now the problem for Windows support, the blocker, is that bug we opened in...
E
...the CSI proxy. That's the performance issue, because of PowerShell, and, you know, because of that dependency issue we do not have a way to solve it. So that's the blocking one, the thing blocking us from GA-ing the Windows feature. I remember when I brought this up last time, I think the suggestion was that we compare this with how it is if we use the in-tree Windows support. But the problem is the in-tree Windows one does not use...
E
...PowerShell; it doesn't use the CSI proxy, right? So it doesn't have this particular problem. Even then, for the in-tree Windows feature, we don't even know who is using it anyway; we don't really have any customers asking about that.
F
I think, Xing, the main question I had was that in the bug, the performance results were comparing against a Linux run, and I think it's better that we compare Windows to Windows, because it is known that Windows consumes more resources than Linux, so...
E
But okay, I will bring this back, just to ask them to still, you know, try to do this and see if we can compare. Okay, good point, yeah.
F
Yeah, I just imagine, because I do think in general Windows will not scale as much as Linux. So I think...
F
I guess I don't know. Jing, do you know, do we run Windows in scalability tests at all?
F
Okay, well, maybe that's something we can follow up on with SIG Windows, to see if they have scalability tests or, if not...
E
I do remember someone said it's definitely much, much slower, so yeah, I have seen some numbers, but I don't know. Okay.
A
Next up we have AWS CSI migration. Any word from Matt on this one?
A
Cool, thank you for the update; looks like that's on track. Next up we have CSI...
E
Microphone... there we go. I have a question, sorry, about OpenStack. I think Tim has this question: he's asking when we can remove the Cinder in-tree plugin. I think originally we said it's 1.26, but now, because the core CSI migration, I believe, is GA...
A
Cool, thank you. Next up we have Ceph RBD and CephFS. I think Humble left comments on these: existing in-tree tests are failing heavily on the master branch, even without migration. So we are trying to correct those issues, but various issues are encountered along the way, due to the fact that there are tests that have been broken for a long time. Still trying, and it's the current blocker for the beta state change PR to get merged. So it looks like this is in progress.
A
Next up is CSI migration for Portworx. Anyone have an update on that?
E
I think the last I heard, the PR's merged; the code PR is merged. So it's just docs. I don't know if the doc is out yet; oh, we need to check.
A
Okay, looks like it's pretty close, though, so we'll leave that open. Next up was GlusterFS. So there's a big discussion around this today. After some heavy discussion on this topic, a deprecation mail has been sent to k-dev and SIG Storage. The current plan is to mark it for deprecation, with code removal in 1.26...
A
...due to an unmaintained dependency issue; more details are in the email threads. So yeah, folks who are interested, look at the email thread in SIG Storage that Dims started. The plan here is going to be to deprecate.
E
No, no tests yet.
A
Okay, we'll just copy over the same status; this was a design for this cycle. Next up is non-graceful node shutdown and e2e tests. Yeah?
A
Okay, cool, good progress. Thank you, Xing. Next is "address issues: PVC created by StatefulSet will not be auto-removed." End-to-end tests ready was the last status; anything new?
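For reference, the KEP being tracked here adds a retention policy to StatefulSets so the PVCs they create can be removed automatically. A sketch of the relevant field, assuming the StatefulSetAutoDeletePVC feature gate is enabled; the surrounding object is illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  # Without this policy, PVCs created from volumeClaimTemplates are
  # left behind when the StatefulSet is deleted -- the issue tracked here.
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # delete the PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep the PVCs when scaling down
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.k8s.io/pause:3.9
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```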
A
Okay, so we'll mark that as no update for now. Volume expansion for StatefulSets?
E
There is someone; I think I added his name there, yeah, the first name.

E
Yeah, this person has a KEP PR out waiting for community review, basically.
A
Thank you, Xing. Next up we have better default storage class.
B
The first version of the PR was posted, and I am reviewing it. It looks like we can make it into the release if I can get API review. Actually, there's no new idea; we just need to change the validation, and that's considered to be an API change too. So yeah, there is, I don't know, a two- or three-line change in the validation.
A
Got it, cool. Thank you, Yan; looks like it's pretty close, and hopefully it'll make it in. Then we have handling volume CSI driver capabilities. This was a design for this cycle. Anything new here?
G
So yeah, we have mostly an agreement about the design. The sticking point for this one was how we would handle scheduling of pods with WaitForFirstConsumer PVCs, because the solution involves, basically, Kubernetes knowing about the volume subtype, and the scheduler can't know that subtype until the PVC is provisioned. So you have a funny dependency problem in trying to schedule pods with PVCs that don't exist yet.
G
But I have a proposed solution for that problem. I need to talk to Michelle about it. More importantly, though, I am switching jobs, and I won't personally be carrying this one forward.
G
I'm going to try to find someone to carry it forward in my stead tomorrow, but I can't promise that we'll definitely be doing this.
G
So, you know, I'll certainly update the doc with where we leave things, and if someone wants to carry it forward, the design will all be there.
A
Cool, sounds good. Thanks a lot, Ben, for getting it this far. And yeah, folks on the call, if you're interested in helping drive this forward, please reach out to the SIG leads or to Ben, and we can get that moving. Congrats, Ben; excited to hear where you're going. All right.
A
Next up we have, I think, a comment from Humble on a PR.
A
So yeah, the primary point of discussion today: there was a GlusterFS dependency that is no longer maintained, and so Dims was reaching out and suggesting immediate deprecation of that dependency at the very least, and ideally removal of it from the code base. He's suggesting either deprecation in 1.25 and removal in 1.26, or jumping immediately to removal in this release, in 1.25.
A
So what's the plan for actually deprecating GlusterFS? There is a new thread that has been started about this topic, and I think, after lots of back and forth, the conclusion appears to be that we will deprecate.
A
So I guess, amongst this group: any objections? Anyone have different ideas, any concerns or comments?
A
Okay, so I guess that's going to be our plan moving forward: we're going to put deprecation notices in 1.25 for GlusterFS, and then remove the code in 1.26, and we'll try to communicate this as widely as possible, so that there are few surprises.