From YouTube: Kubernetes SIG Storage Meeting 2021-09-09
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 09 September 2021
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.a53r4gm2rm4p
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, let's go ahead and get started. Today is September 9, 2021. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube.

A: The agenda for the meeting is posted in the meeting invite; there's a link there. Feel free to add any items as we go through. We have a section at the bottom for any PRs that need attention, design reviews, or anything miscellaneous that you may want to talk about. The main topic of discussion: we're going to go over the 1.23 planning spreadsheet and the items that we've committed to for 1.23.

A: Please be aware of the timeline for the 1.23 release. Today is September 9th, which is the enhancement freeze. This means that any new features that go into 1.23 must be declared with the release team. You must have your KEP approved and signed off on, and it must be tracked in the release team spreadsheet.
A: All right, so the first item that we have here is delegating fsGroup to the CSI driver instead of the kubelet. This is moving from alpha to beta. The dev lead here was Chang, and the API reviewer Hemant. Anyone on the call able to give a status update here?

B: Yeah, so the PRR for the KEP has merged, so that's the only element that was needed.

A: Nice.

C: Sorry, this is Hemant here. Actually, the KEP was updated to target GA, and we just had to kind of flip the feature gate. That's all, mostly.
C: So the KEP for recovery from resize failure got merged, and the future of the scoping, moving the allowVolumeExpansion field from the StorageClass to the PV, is uncertain, because we tried to seek feedback from API reviewers. It basically changes the API, and we are kind of blocked on that, so I'm not sure if we will even be able to do that. Other than that, the main thing blocking volume expansion GA is the recovery from resize failure, for which the KEP was merged.

E: So it did not make enhancement freeze; it can come back next quarter.
A: Okay. And then for the next item, you said volume expansion GA is blocked, but you have a design.

A: Yeah, the next item is PVC inline ephemeral volumes that work with any CSI driver.

A: And then we're gonna move over to the volume group API. Any updates on that?
A: Thank you, Xing. The next item is CSI out-of-tree: move the iSCSI driver, fit and finish, image building, testing, CI/CD, documentation, and this is assigned right now. Humble, are you there? Can you...

I: All right, yeah. So the csi-lib-iscsi library used by the iSCSI CSI driver looks to be broken in various code paths, so many of the issues have been fixed locally at my end, and it's progressing to get the driver ready in the 1.23 time frame. I also noticed a PR in the library repo which also addressed a few of these issues, so just to coordinate and get the library in good shape, I will schedule a meeting with the PR owner tomorrow. Based on the discussion, I will file the PRs and move things forward.
A: Thank you, Humble. The next item is the Samba CSI driver, Andy and Julie.

A: The next item is to send out the deprecation notice for flexVolume. The last update here: Xing will talk to Michelle and Jan. Any update on this?

G: Not yet; I need to talk to them.

A: No rush. Okay, the next item is PVC and volume snapshot namespace transfer. Mustafa?

G: I think he is off for some personal reasons. He said he's still interested in working on this when he comes back.

A: Got it, okay. Sounds good.
A: Let's go ahead and mark it as no update for now. Okay, the next item is CSI volume health: additional metrics and/or events, followed by programmatic response. Do you want to start with additional metrics, Xing?

K: The KEP was merged and approved this morning, yeah. Thanks.

A: So I think... sorry, go ahead, Xing.
G: Oh, I just... I see Tim has been reviewing that, so there are a lot of comments, which is good, but it looks like he wants the API to be simplified. He wants some of it flattened, and also, I think, he's asking us to talk to SIG Auth.

A: Yeah, I think he had concerns around what the integration with identity was going to look like, workload identity, so that makes sense. Okay: KEP under API review, requesting simplification, requesting a sync with SIG Auth.
A: Okay, so this one is at risk; we'll keep an eye on it.

G: Right, so since this is kind of out of tree, we'll see. Yeah, we still have a chance of making it.

A: Any idea what the state of the KEP is in terms of review?

G: No... yeah, so I think we are going to discuss it in the next meeting. I will send an update after that.
F: Yeah, so I had a meeting on this with Jan, and he had a lot of things to look at based on the draft KEP so far. The next steps are to elaborate on them in the KEP as caveats or limitations, and then have a meeting with all the leads from storage and Patrick to decide whether this is the direction we want to go.

F: We are working on sort of a proposal for the next steps, maybe how to restructure the API a little bit so CSI plugins can easily migrate from the proxy to the...
A: Cool, okay. The next set of items is for CSI migration, starting with officially deprecating the cloud provider plugins, sending out the emails about the deprecations, and so on. Jiawei, any updates on that?

L: Yeah, I think I'm planning to do that. I think I'm just waiting for some more plugins to turn on by default.

A: Got it. Okay, the next part of it is the core bugs and issues. Anything new there?
A: Sounds good, thank you. And then we can go over the cloud-provider-specific CSI migrations and get updates on that. First up is vSphere, and that's remaining in beta versus turning on by default. Divyen, any updates?

G: He's probably not here; I think it's still the same. So basically, I think they're trying to get the Windows support into 2.4.

A: Got it. Okay, so it sounds like there are dependencies that are currently in progress, but otherwise it's on track for 1.23.
L: I just wanted to confirm: turning it on by default doesn't actually need Windows, or does Windows have to be ready?

G: Okay, yeah. Maybe you can actually clarify that in that document, because I was reading that and there was a note. Okay, yeah.

G: You have a doc for each cloud provider, and I think there are some notes about the vSphere driver. I thought Windows is a blocker.
A: Yeah, I think that makes sense. We should probably not flip it on until we know that we have feature parity.

N: So with this, we have a plan with the GCP cloud provider to move the tests in k/k, which sort of accidentally depend on GCE, over to cloud-provider-gcp, so that we can flip it on by default in 1.23.
N: So, just to be clear, what's going to happen is that once it's on by default, GCE PD can't be used as a default storage class in any of the k/k tests, even the ones that happen to run on GCE, although I guess they kind of need to run all the same stuff. And all of the GCE-specific tests will be in cloud-provider-gcp, which is where PD CSI will be switched on by default.
M: Got it. We have a meeting with SIG Cloud Provider today to sort of discuss the e2e tests, but I think part of it is going to be that any test that depends on a default storage class will probably be disabled by default; like, we'll probably have to put a feature tag on it.

N: Oh, I see what you mean. Right, so one thing we were going to look into is whether any of these tests which need a default storage class could work with a hostpath driver instead, which would be, I think, platform independent, but we'll probably...

A: And for folks who are interested in following along, it sounds like the best option would be to attend the SIG Cloud Provider discussion.
A: And next up is CephFS and Ceph RBD. Humble?

I: Yeah, it's progressing in a good place, Saad, but I should mention it's not straightforward, as there are some incompatibilities between the in-tree and CSI driver storage class parameters, and some resources like secrets being in a different format. Regardless, I'm making good progress on the testing and have completed a few scenarios successfully with migration on. I'm also in touch with Yana, figuring out how to address these gaps. So, in short, this is on track and in progress.

A: Okay, sounds good.

G: The KEP has not merged yet, right? It's still...
M: Sorry, I was trying to turn off my phone. I think we'll need to file an exception for the two KEPs, but basically there was a lot of confusion about the way we were trying to do the KEPs for the various CSI migration cloud providers, and so we'll need to get an exception for that.

A: Got it, okay. And Humble, are you gonna file the exception?

M: Yeah, I think we can send an email for both, because it impacts both the Ceph and Portworx ones. I can file that.

A: Sounds good, okay. Any other concerns about the KEP other than getting that exception? Are there any changes needed, reviews needed, anything that's currently blocked?
A: All right, so next up was Portworx. It looks like it's in the same position. Anything else to add there?

A: Next up: control volume mode conversion between source and target PVC.

A: This is in design, so that should be fine, no problem. The next item is secret protection, preventing deletion while in use; it depends on the in-use protection KEP below.
Q: Yeah, the KEP is not yet merged for the secret and configmap protection itself. The discussion went back to whether this feature needs to be implemented at all, so it will be a bit blocked. As for the generic in-use protection that this feature plans to rely on, that KEP is mostly agreed on in SIG API Machinery.

A: Okay, thank you, Masaki, for the update. I'm going to go ahead and stop tracking it for 1.23. Is that okay, or do you want to continue to track it as we make progress?
A: We'll keep it here for now, then. Thank you for the update. Next is a set of items that are co-owned by SIG Storage and other SIGs. The first one is with SIG Auth: user ID ownership in configmaps and secrets, preserving the default file mode bits set in atomic writer volumes. The last status update here was that there was a meeting and it would turn into a KEP. Did the KEP get created yet? Any updates on that?

A: Okay, I'm gonna mark that as no update. The next item is with SIG Node: ungraceful node shutdown, moving from prototype to alpha. Xing?
G: Yeah, so we're actually not going to move that to alpha. We actually had a meeting with SIG Node trying to understand how graceful node shutdown is supposed to work, and it turns out that some shutdown commands will be recognized, but others are not. And also there's a regression.

G: If the shutdown signal is detected, it actually looks like it's behaving correctly, but then the pod status handling changed: in 1.21 they basically marked the pod as failed, and in 1.22 they don't do that anymore, so it's actually not working properly. So we will be working with SIG Node, trying to figure out the issues with graceful node shutdown first, before the ungraceful one.
A: Okay, we'll sort this one out first; that makes sense. So basically, this is on hold until we fix the regressions. Do we want to continue to track it for this cycle?

C: No, I don't have any update. The feature was targeting beta or something this quarter, but I haven't...

A: The next item is with SIG Apps: PVCs created by a StatefulSet will not be auto-removed, and fixing that behavior. The last status update here was that this should be on track. Any updates, Matt?
N: Yeah, thanks to Michelle for keeping me honest about getting the KEP updated. I was able to get that done, so we're all set in terms of the KEP for doing alpha in 1.23.

A: Awesome, so it's on track; we're just waiting for approval. Is that right?

A: All right, thank you, Matt. The next item is also co-owned with SIG Apps: volume expansion for StatefulSets.
A: All right, sounds good. Thank you, Hemant; thank you, Xing. The next items are co-owned by SIG Storage, SIG Apps, and SIG Node, which makes it even harder to get them moving along: execution hooks for application snapshots. Xing?

G: Yeah, I don't think this is going to make it. We did get SIG Node folks to review it, and they have a lot of concerns. I think the blocking thing is mainly some comments from Clayton on the API review side; that's the one that we couldn't get resolved. And also we've got some questions on why we are not doing a CRD, so we're going back...
A: All right, thank you for the update, Xing. The next item is with SIG Scheduling: prioritization on volume capacity, moving from alpha to beta.

A: All right, the last item we have is co-owned with SIG API Machinery: in-use protection. Masaki, I think this is a duplicate of the item above. Is that right, or is this different?

Q: Oh, it is the dependency; the item above depends on it, for me.

A: Sounds good. One thing I'm gonna do is just move this up, so it's next to the other item.
A: All right, thank you for the update, Masaki. So that was the last item. We can go ahead and switch back to the agenda doc and see if there are any other items that need attention. Anybody on the call want to raise anything: PRs to discuss, design reviews? If not, we can talk about this last item here.

A: Okay, so I think, Xing, maybe you were still planning to do some sort of meet and greet, right, for folks who might end up there?

G: Is that still going to be just like a general one? Right, so I'll see. I think the contributor summit is organizing something, so I'll just see; they probably have an area for people to meet, so we don't need a separate one for SIG Storage.
A: Okay, yeah. I actually really miss the KubeCons. I hope, once things get a little bit better and we get better attendance, we can do another big face-to-face meeting. I miss those. So that's all I had for the meeting today. Any other items to discuss?

Q: As I mentioned, whether it should be implemented or not is a big blocker for the KEP.
A: Yeah, this one. So I guess for background, for folks who are not familiar with this proposal: Masaki, do you want to explain the proposal, and then we can talk about Jordan's comment here?

Q: I think I would appreciate it if you could explain it.
A: Sure. So, as I understand it, one of the challenges that some CSI drivers face is in cleanup, because they depend on a secret, and if the user accidentally deleted that secret for some reason, then it prevents Kubernetes from being able to clean up the volume. The proposal here is trying to implement logic that would prevent either a configmap or a secret that's currently in use from being deleted, and the comment from Jordan Liggitt is really about the complexity and performance trade-off.
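[Editor's sketch] The check being described could look roughly like the following Python pseudocode. This is an illustrative simplification, not the actual KEP design; the function names and data shapes are invented, though the secret reference field names mirror the ones that exist on a PersistentVolume's CSI source.

```python
# Hypothetical sketch of "in-use" protection for secrets referenced by
# volumes. Names and structures are illustrative, not from the actual KEP.

# Secret reference fields that can appear on a PV's CSI volume source.
SECRET_REF_KEYS = (
    "nodeStageSecretRef",
    "nodePublishSecretRef",
    "controllerPublishSecretRef",
    "controllerExpandSecretRef",
)

def secret_in_use(secret_ns, secret_name, persistent_volumes):
    """Return True if any PV still references this secret for CSI operations."""
    for pv in persistent_volumes:
        csi = pv.get("csi", {})
        for key in SECRET_REF_KEYS:
            ref = csi.get(key)
            if ref and (ref["namespace"], ref["name"]) == (secret_ns, secret_name):
                return True
    return False

def admit_secret_delete(secret_ns, secret_name, persistent_volumes):
    """Admission-style decision: deny deletion while the secret is in use."""
    if secret_in_use(secret_ns, secret_name, persistent_volumes):
        return (False, "secret is referenced by a PersistentVolume; "
                       "deleting it would block volume cleanup")
    return (True, "")
```

Jordan's objection, in these terms, is that evaluating this kind of check on every secret deletion scans volume state, which has a complexity and performance cost across the whole cluster.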
A: So he says: "I'm pretty skeptical about the complexity and performance cost of this feature relative to its benefit. The primary use case seems to be around preventing deletion of secrets used to administer volumes backing PVCs, but I think those could be placed in alternative namespaces to avoid issues, and limiting what is placed in the PVC namespace helps preserve the PVC abstraction and avoid exposing unnecessary details about volume implementation." So I was taking a look at this.

A: I think, Masaki, you're pointing out that while that's true for the controller side of things, node operations do accept secrets too. So even if you can unmount the volume from the kubelet and complete cleanup there, the cleanup of the volume itself could still be blocked, and I think that's where we are.
O: That's a good point. I'm coming in blind to this, but if the user has a secret that they use for the PVC they created, it would be hard to put that secret in a different namespace, because from their point of view they are working in that namespace and the PVC is in that namespace, so they put the secret and things like that in it.
A: On the controller side, I think we initially tried very hard not to have a secret on delete or detach, but we got a big push back from the community saying: hey, you know, we have backends that have secrets that are specific per volume, and if you don't give us a secret, it simply wouldn't work. So it was a compromise to say: okay, fine, we'll stick it in there.
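[Editor's sketch] The "per-volume secrets" pattern described here can be illustrated with the templated secret parameters that the CSI external-provisioner supports on a StorageClass; each PVC then resolves to its own secret in the PVC's namespace. The driver name and secret naming convention below are made-up placeholders.

```shell
# Sketch of per-volume CSI secrets via templated StorageClass parameters.
# The provisioner name is hypothetical; the parameter keys are the ones
# the external-provisioner sidecar understands. The heredoc is quoted so
# ${pvc.name} is written literally for the provisioner to expand.
cat > per-volume-secret-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: per-volume-secret-example
provisioner: example.vendor.csi.io
parameters:
  # Resolved per PVC; if the resolved secret is deleted before the
  # volume is, DeleteVolume can no longer authenticate to the backend.
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
reclaimPolicy: Delete
EOF
echo "StorageClass manifest written"
```

This is exactly the shape that makes Jordan's "move the secrets to another namespace" suggestion hard: the per-volume secret naturally lives next to the PVC.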
O: So for me, I like this KEP; I think it's good. What can I do to help out?

A: What is the likelihood of that getting approved?

Q: Yeah, I think the KEP is mostly agreed on; the blocker is just the implementation.

A: Got it. That seems like a decent workaround: for the cases where users really care about this, they could leverage that generic in-use functionality once it's available.
M: Yes, because I guess one possible implementation for a controller could be similar to the storage object in use protection one, where if we see pods that are using it, we won't allow deletion. But that doesn't necessarily solve the case where the user just accidentally deleted everything; even though there might not be pods using it at that moment, they may still want to retain it.

M: So I think that's a use case that having a controller wouldn't really support.
A: So I want to make sure I capture your comment, Michelle. You said basically that one of the use cases is that the user may want to decouple the lifecycle of the secret from the volume, and manual protection would actually allow that, whereas automation would not.
M: Yeah, I think it's basically that the proposed controller would see whether there are any pods using it, but it might still be that, even if there are no pods running right now at this instant, the user might still want to have protection on it. It just might be that all the pods were restarted for some reason.
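[Editor's sketch] The distinction Michelle draws can be reduced to a toy model: automatic, controller-based protection holds only while something is observably using the secret, whereas a manual, finalizer-style hold persists until explicitly released. All names and semantics below are invented for illustration and are not taken from either KEP.

```python
# Toy contrast between the two protection styles discussed above.

def blocked_by_controller(secret_name, running_pods):
    """Automatic protection: blocks deletion only while a pod references it."""
    return any(secret_name in pod["secret_refs"] for pod in running_pods)

def blocked_by_hold(secret_name, holds):
    """Manual, finalizer-style hold: blocks until explicitly released."""
    return secret_name in holds

# With no pods running (e.g. everything restarted or scaled to zero),
# the controller no longer protects the secret, but a manual hold does.
pods = []
holds = {"db-credentials"}
assert not blocked_by_controller("db-credentials", pods)
assert blocked_by_hold("db-credentials", holds)
```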
A: Got it, okay. And then, Xing, sorry, can you reiterate your comment?

G: Oh, I was just going to say, based on what Michelle was saying, it looks like if the in-use KEP is merged, the one that this one depends on, then we already have that, right? So users can just use it.

G: Well, not finalizers; those... I think there are some examples of that in this KEP, actually. Even with this KEP, users still need to add those, I think, right? Is that right? You're more familiar with your own KEP.
Q: Yeah, yes.

Q: So the other KEP introduces a new concept. It is similar to a finalizer, but a bit different. And this KEP utilizes that feature, and so...

G: So yeah, I think it's more important to get the other one in first. Then maybe we can start trying it out to see how that one works.
A: Yeah, that does seem like the best option: focus on the generic in-use KEP, get that merged, encourage users to use that, and then we can come back and revisit the necessity of this one.

A: Okay, any other comments on this, any concerns?

A: All right, thank you, Masaki, for the discussion.

A: Okay, any other areas to discuss? We have a couple minutes left.