From YouTube: Kubernetes SIG Storage Meeting 2022-09-22
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 22 September 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.or9zweu4t7dj
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A
Okay, today is September 22, 2022. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. So if you have anything that you want to discuss today, feel free to add it to the agenda; you can find the link in your meeting invite.
A
So I'm going to go over the 1.26 features today, and then we can switch over to any miscellaneous topics that folks have for discussion. For 1.26 there is one change in planning: specifically, instead of using a spreadsheet for tracking which features are going to go into 1.26, the release team is instead using labels in GitHub.
A
Use the lead-opted-in label on the enhancement issue; this will ensure that the KEP is properly tracked by the release team. So please do this if you have any feature going into 1.26 that has an enhancement issue or a KEP. With that, we'll go ahead and switch over to the spreadsheet and get status updates.
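As a rough illustration of the new workflow: opting a KEP into the release means applying the label to its enhancement issue. A minimal sketch using the GitHub CLI (the issue number here is a hypothetical placeholder, not one mentioned in the meeting):

```shell
# Apply the release team's tracking label to your KEP's enhancement issue.
# Replace 1234 with the actual enhancement issue number.
gh issue edit 1234 --repo kubernetes/enhancements --add-label lead-opted-in
```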
A
And any progress in…
A
Sounds good, thanks Fabio. Next up we have recovering from resize failure, end-to-end tests. Hemant, are you on the line?
A
And mark that as no update. Let's move on to CSI ephemeral volumes, wrapping up conformance test work. Jonathan, are you on the line?
A
Okay, mark that one as no update. Then we have local ephemeral storage management. I'm going to assume this one is no update either, unless someone else is able to provide an update here.
D
Right, so there is a PR that is being reviewed. Saad, if you can help review that.
D
Okay, thank you. And the KEP also is updated: Michelle has reviewed it and added comments, and those were addressed, so it's ready for review again.
D
So I think right now it's a different person, Takafumi. I don't know if he or she is working on this; I just got a ping saying that he or she couldn't attend the meeting.
D
But, well, there is a Slack thread.
A
Got it, yeah. Can you throw in the name?
D
Yeah, I mean, can you check that link, see if that has the third name, the…
D
Did I add that? No, no. Okay, okay, let me add that. Okay, cool.
A
Okay, next up is provisioning volumes from a cross-namespace snapshot PVC. I guess this one's also on the same group of people.
A
All right, next up is COSI, staying in alpha this cycle. Any new updates here?
D
It's just continuing to try to get more vendors to write drivers for COSI, and documentation, and…
A
Thank you, Shane. Next up is change block tracking; looks like last status was "KEP updated". Anything new here?
A
Next is the new RWO access mode, moving that to beta. I believe Chris committed to working on this for this cycle. Chris, are you on the line?
A
Thanks, John. Next is runtime-assisted mounting, issue 2857. Deep?
A
Got it, thanks Deep. Then we've got CSI Proxy for Windows transitioning to privileged containers; no need for a KEP. I guess this is bug-level or design-level work. Anything new here?
D
I don't know if there was an owner for this one yet; it's just an issue that we need to be aware of, because we have been talking about it when we talk about vSphere CSI migration, because Windows is not GA yet. This is one of the biggest issues, the performance issue.
E
Yeah, the last conversation I had around this with Divyen was that he was exploring spinning up CSI Proxy within a DaemonSet, because privileged support is enabled in Windows now. Basically, his thought was: if it's within a pod, then its resource requirements can be configured so it does not, you know, end up taking a lot of memory and CPU, which was at least one of the perf issues when trying to stress it quite hard.
E
But then I didn't hear any updates on where that went, like whether that approach was tried for vSphere to see if it makes a difference.
D
I think we tried something, but I don't know if we're going to continue with that path, because I thought, like, Azure actually made some changes in their driver to leverage the privileged container.
D
But I don't know if that actually solved this issue or not; that problem is not quite clear.
E
Yeah, there were two problems. One was that the memory and CPU usage was quite high, and that can potentially be solved by putting it within a pod, right. And I think the Azure folks tried out the privileged DaemonSet approach of launching CSI Proxy, and that works for them, okay. But whether this approach still satisfies the benchmark that's there for vSphere, that's still an open question; I think that needs to be measured.
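A minimal sketch of the approach being discussed: running CSI Proxy as a privileged (HostProcess) Windows DaemonSet so its memory and CPU can be capped with pod resource limits. The image tag, user name, and limit values here are illustrative assumptions, not values from the meeting:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-proxy
  template:
    metadata:
      labels:
        app: csi-proxy
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      securityContext:
        windowsOptions:
          hostProcess: true                      # Windows "privileged" (HostProcess) container
          runAsUserName: "NT AUTHORITY\\SYSTEM"
      hostNetwork: true                          # required for HostProcess pods
      containers:
        - name: csi-proxy
          image: example.registry/csi-proxy:v1   # hypothetical image reference
          resources:
            limits:                              # bounding the perf issue raised above
              memory: 256Mi
              cpu: 500m
```

The point of the design, as raised in the discussion, is that once CSI Proxy lives in a pod rather than as a host service, ordinary resource limits and eviction apply to it.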
A
Okay, so I'm going to leave this as "issue needs an owner" for now. Or do we want to put Divyen as the owner?
D
Divyen? I think he would. For our side, I just chatted with him, and basically he was saying that, from our side, we are not going to treat this performance issue as a blocking issue for Windows to go GA anymore; we'll just document whatever we have tested. So if it works at a certain scalability, we're just documenting that. Yeah, there are other issues, you know; like, there was a resize-related bug that we need to fix.
A
So let's leave this open for a bit and see if we can find an owner; if anybody on the call is interested, this might be a fun debugging problem. And if we can't find an owner, we'll close it for this cycle and then see if we can pick up an owner for the next cycle.
A
Thanks, John. Next we have CSI migration. So, CSI migration core: Matt, do you know if there's anything on the core that needs to be updated, or…?
A
So it sounds like that's pretty much the only outstanding work for the core. I'm gonna go…
D
I think we just talked about that earlier. The thing is, I think there was some concern regarding Windows support, right, because that's not GA yet. So, performance: we don't treat that as blocking anymore, but there is a resize-related bug, so we're still trying to get that one fixed.
C
But the Windows, the CSI driver, sorry, the vSphere driver, never supported resize; the in-tree one, yeah.
D
Yeah, so that's right. But the problem is that this is the combination of Windows and CSI resize, right. So if we claim Windows support, then we have to say Windows also supports all the other features that are already GA in the CSI driver. So that's what we are trying to fix right now.
D
And then, yeah, I think Divyen actually submitted a bug fix in the CSI Proxy repo, but I think that one is not merged yet, so I think there's still some…
D
Okay, so yeah. I think Divyen is saying we're just looking at how to make a fix inside our driver; basically, I think he's exploring. So once we get that fixed, I think this Windows feature should be ready for GA.
D
I think Divyen is trying to fix that in a generic way, but I think there are some concerns, so please review that.
C
I think Deep linked that PR on chat; okay, we can follow up on this offline. I personally don't see why Windows GA is supposed to be blocking on this thing, but we can talk about it offline.
A
Okay, thank you for the discussion on that. CSI migration for Azure: any updates on that? Do we have Andy on the line, by any chance?
D
So it's basically just a minor update; let's just update the milestone. Both the RBD and the CephFS KEPs merged.
D
So Oksana says that they will need to do some more testing, so they're not going to move it to the next stage; it's staying at off-by-default beta.
D
Yeah, that's already removed; the PRs are merged.
A
And then we've got secret protection, prevention of deletion while in use. This is dependent on the in-use protection KEP below, so both of these are tightly related. We're waiting for a confirmation from Masaki. Have we heard back?
D
No, I have not heard anything from him. I don't know if he's still working on Kubernetes; there has been no response.
D
Maybe we can just cross out these two for 1.26.
D
Yeah, so the test merged, but we still need to add it to the testgrid. So right now there is a PR for that, which is being reviewed.
A
Cool, thank you, Shane. Next up is enabling user namespaces in kubelet, so your UIDs get shifted, rootless mode. Anything new here?
A
Then let's keep moving. We've got the issue that PVCs created by a StatefulSet will not be auto-removed.
G
Yeah, so this has been added under SIG Apps; sorry, not into the tracking sheet, with the tracking label. And the metrics stuff we have figured out, in case people are interested: kube-state-metrics is sort of gaining more traction and is useful for this. So I got the PRR approved based on that, so hopefully we're still on track going to beta.
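For context, the feature being tracked here (KEP-1847) adds a retention policy to the StatefulSet spec controlling whether its PVCs are auto-removed. A minimal sketch, with illustrative names and sizes (the `persistentVolumeClaimRetentionPolicy` field is the actual API surface):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete    # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain     # keep PVCs for replicas removed by scale-down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.k8s.io/nginx-slim:0.8
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```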
G
Yeah, sure. So the context here is just kind of asking for advice before we sort of throw out a design proposal. The context is: we are developing an ephemeral CSI driver that's going to use non-trivial resources on the local disk, and so, ideally, this usage would be accounted for with the pod that…
G
We are kind of exploring a few ways of where the right place in the kubelet pod directory to put our local storage is going to be. But if anyone has any context or advice or thoughts around this issue, we'd appreciate hearing that before we get too far down, you know, a bad path.
F
Yeah, I guess my general thought is: just because we didn't include it as part of the ephemeral CSI feature doesn't mean we can't add it. At the time we didn't have a use case, but it sounds like, you know, now there are some valid use cases, right. So I think we should explore that option and see how we can maybe somehow get the CSI driver to report "hey, I am using local storage, so please count my usage as part of the local ephemeral storage."
F
Because I think the main thing is, we have two features, right. We have local ephemeral storage tracking, and then we have CSI ephemeral volumes and generic ephemeral volumes. The main problem right now is that the local ephemeral storage tracking specifically only looks at emptyDir volumes; it doesn't look at other volume types. So I think the question here is: now we're saying, well, actually, there are some CSI drivers that will heavily use local storage, and so we want some way to detect that, and to have our existing local ephemeral storage tracking track those CSI volume types too.
G
Yeah, because, in particular, if the amount of usage here is getting comparable to the amount of ephemeral allocatable you have on the node, you want to take that into account in scheduling decisions, so that you don't overcommit a node.
F
Yeah, I think right now what would basically happen without this is that the local storage usage would count as system resources, and I'm not quite clear what happens in kubelet when system resources greatly exceed what we reserved for them. I don't know if that just causes kubelet to start evicting, like, all of the pods.
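To make the gap concrete: ephemeral-storage requests and limits today cover emptyDir usage (plus container logs and the writable layer), but not data an inline CSI ephemeral volume writes to the local disk. A hypothetical pod sketch (the driver name and sizes are illustrative, not from the meeting):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:
          ephemeral-storage: 1Gi
        limits:
          ephemeral-storage: 2Gi   # enforced for emptyDir, logs, writable layer
      volumeMounts:
        - name: scratch
          mountPath: /scratch
        - name: csi-scratch
          mountPath: /csi-scratch
  volumes:
    - name: scratch
      emptyDir: {}                               # tracked by local ephemeral storage accounting
    - name: csi-scratch
      csi:
        driver: example.com/ephemeral-driver     # hypothetical driver; its local disk usage
                                                 # is NOT counted, which is the gap discussed
```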
G
Okay, cool. Well, in any case, we will work on a design here; it doesn't sound like there's something super obvious we are missing. Please reach out to me on Slack, etc.
A
Sounds like we just need to make sure the local ephemeral storage usage tracking feature, which is now GA, is extended to support CSI. So that seems like a good, natural extension.