From YouTube: Kubernetes SIG Storage 20201105
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 05 November 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.ayuygin3orio
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, today is November 5, 2020. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. On the agenda, we're going to go over the planning spreadsheet and get status updates for the items that folks are working on. Important dates coming up: November 12th, next week, is code freeze. So if there's an item that you're working on for this 1.20 release, please be aware of that deadline.
B: Saad, you said the code freeze, we know that applies to the kubernetes/kubernetes repository. What other repos does that apply to?
A: It technically only applies to the kubernetes/kubernetes repo. For the other repos, where we have control over merging, it's kind of a softer deadline. But ideally we would try to align with the rest of the project and make sure we have enough stabilization period and all of that.
C: I think it would definitely be nice to apply it to the sidecars as well. It's not something we've really been doing in the past, but I think being able to get that soak time, and being able to release the sidecars soon after Kubernetes actually releases, is a good goal to achieve.
A: All right, so jumping into the planning spreadsheet, let's start getting status updates here. The first item, from Hemant, is online/offline resizing, addressing issues. Let me create a new column here.
A: Okay, anyone able to give a status update on Hemant's behalf? Or we can come back to it if Hemant joins later.
A: Oh, here we go. Hey, Hemant, we were talking about online/offline volume resizing. Just wanted to get a status update on that.
D: Yeah, so this quarter I'm just trying to get the enhancements out for the allowable expansion field and the secret thing, and I'm following up with the pod resizing discussion that is going on. But we are not going to write any code for this in this quarter; we just have to get the designs finalized. And I'm working on the ReadWriteMany update, and then Keshav.
A: And for these last two issues that you mentioned, the ReadWriteMany updates?

D: [inaudible]

A: Okay, right. Thank you, Hemant. The next item is volume snapshot to GA. Xing?
F: Yeah, so a lot has been going on for this one. On e2e tests, I think the finalized e2e test is getting approved, so it's on the way to get merged. Then there's a test on secrets; I think there are some comments on that, so we need to move that one to the mock driver. Right now it's for the hostpath and GCE PD drivers, which don't really use secrets. And then there are stress tests.
F: The stress tests are review in progress, and the metrics work is also review in progress. We did get an approval from Tim on the KEP PR to remove the feature gate, but I think we still need to look at all the other aspects before that can go in. On the external-snapshotter side, we now have a breakdown of that PR; initially it had everything together, the API changes and the controller changes.
F: So now we broke it up, and there is one PR only for the API changes. This way we can get that one merged first, because then we need to update the external-provisioner, we need to update release-tools, and there's this chicken-and-egg problem: if we submit everything together, we can't pass CI. So we're trying to basically move them separately. And then I've done some manual tests, many upgrade tests for backward compatibility.
F: I'm also getting someone to help me run more tests to make sure the backward compatibility completely works. So yeah, we're trying to get this API PR merged first, because everything else depends on it.
A: Okay, that makes sense. I can help review these as well; just feel free to shoot me an email with any of the PRs that need review.
A: Cool, thank you for the update, Xing. The next item is non-recursive volume ownership, fsGroup, currently owned by Matt Carey and reviewed by Hemant. Either of you online?
D: I'm here. So there was a metrics PR that is in progress, and I have an almost-working PR for the API, with the change going beta, but I am seeking some guidance from Jordan and the team about the defaulting logic. When the feature was alpha, although the documentation says the default is Always, we are not doing that in the pod spec or security context: the default is nil. If you don't specify it, the default is null, and the reasoning is as follows.
D: If we did change the default to Always when this feature moved to beta, then all the existing pod specs would have to be updated, and that would cause unnecessary churn, because DaemonSets and a bunch of other controllers hash the pod spec. So it's undesirable to put a change in the pod spec and cause a rollout of all these objects. There was a discussion with Tim and Jordan last release, when it was in alpha, about how to do this.
A: Having trouble hearing you; you sound kind of robotic, garbled.
D: Okay, I think it might be a client thing.
G: So I just submitted a PR for the e2e tests for the pod fsGroupChangePolicy yesterday, so please take a look.
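For context, the fsGroupChangePolicy being tested here is set in the pod's security context. A minimal sketch (the pod name, image, and PVC name are placeholders; the field is beta-track in this timeframe):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo            # placeholder name
spec:
  securityContext:
    fsGroup: 2000
    # "OnRootMismatch" skips the recursive chown/chmod when the volume
    # root already has the expected ownership; the default ("Always")
    # recursively changes permissions on every mount.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app                   # placeholder container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc    # placeholder PVC
```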
A: All right, cool. Thank you both for that update. The next item is CSI read-only handling. Humble actually gave me a to-do, and I need to go back and take another look.
C: Yeah, so for one of the PRs the author hasn't been responsive, so that may slip again. And then for the PR that Andy is working on, Jan is helping with that, so I think that one should be able to merge in 1.20.
C: Yeah, so with the three items that Patrick is looking at, I think they're going to remain in their current phases, but Patrick has been looking at making improvements to them.
C: It's to facilitate plugins like local volumes that have limited capacity per node. Basically, CSI plugins can report how much capacity they have per topology, and then the scheduler will look at that when it decides how to provision the volumes.
C: Oh yeah, for the two ephemeral volumes items, Patrick has sent out a meeting invite to discuss the API direction that we want to go with, in order to be able to move these features to the next phase. He sent it out to the SIG Storage list; the meeting is next Monday.
A: So if anyone's interested in any of these topics, please try and attend that meeting. The invitation should be in the sig-storage Kubernetes group, so if you are subscribed to that group you should have an invite. If you are still unsure, I think it should be on our calendar as well; if not, just send Patrick an email and he should be able to get you the invite. All right, thank you, Michelle, for those updates.
A: The next set of items is from Xing: spreading over failure domains, and volume groups, both designs.
F: Yeah, for spreading over failure domains there's still no update, because we are still working on the following one. And for this one I did an update on the KEP; I probably should schedule a review meeting again.
A: The next set of items is regarding CSI: moving the iSCSI driver, fit and finish, and so on.
C: I have not gotten a chance to deprecate these, but I think after code freeze next week I should be able to take a look at this.

A: Okay, sounds good. Moving the GlusterFS provisioner from the external-storage repo: I would like to mark this as done. One small thing is pending, but I don't think we need to track it via the planner, so it's effectively done.
A: Our next items are to move out the NFS provisioner and move out the NFS client provisioner. Karen's been helping with that. Karen, are you on the call?
A: Okay, I think the last status update here was CI pending; hopefully we can get Karen's update at some point. Moving on to the next item, we have volume snapshot namespace transfer. There's been a lot of good progress here.
A: No? Okay, I'm going to mark that as no update for now, and when we get Mike we can get a status update on that.
A: All right, the next item is CSI volume health.
B: Yeah, so we've been having weekly meetings on Tuesday mornings. We've made a lot of progress; we've discussed several things and basically had a lot of agreement about the direction we want to take this. That's the good news. The bad news is I've created more work for myself than I had anticipated, and certainly not all of it is going to be done in this release.
B: So it's now a question of how much we can get into 1.20 and how much is going to have to wait, but the design meetings have been going well, and it's just a matter of really working on more implementation now.
B: I think it's still within the realm of possibility to get the controller piece and the CRD for data populators, or volume populators, in, and then to postpone all of the work around the actual library implementation and sample populator implementation for later. So I think it's still possible that we'll get the important piece into 1.20.
A: All right, thanks, Ben, for that update. The next item is the object storage API, COSI: Jeff, Sid, Sreeni. Anyone want to give an update?
J: One item is about the credential mounting, and we got a very good idea on that. Then we have another issue about topology. I don't think it will fit into the current work, but we want to understand all the issues around topology and affinity. So we had that discussion, and we'll continue it today in the COSI meeting after this meeting.
J: On the development side we are going well: all the components are in place, and we are shooting for a demo before the holidays. But the CI work and the release-tools work are still pending; that's kind of a bottleneck right now. That's all.
K: Yeah, the last update was that we had two different PRs open: one to relax validation and move to beta, the second to include the end-to-end test. Those two have been consolidated into one. An initial round of reviews has been completed and the feedback incorporated, so now it's just waiting on a second round of reviews and approval.
A: No worries at all; that is completely understandable. Okay, so let's scratch this for the quarter, since we've only got a week left.
A: And then we'll reconvene on this next quarter. In the meantime, maybe I'll sync up with Matt and see if there's anything else we can do here.
A: Thank you for that update. The next item, Azure Disk, we're skipping for the quarter. Azure File?
C: Let's see here. Andy has a number of PRs, and he's actively working on this.
A: We can mostly understand you, but it still sounds very odd, robotic.
A: No worries, okay. One thing you might want to try, Matt, is calling in by phone; I think that's one of the options. Okay, the next item is AWS CSI migration.
C: Yeah, so I had an offline discussion with Matt Wong on this, and he said he'll be actively looking into it, not for 1.20, but in the next couple of months. So I think 1.21.
A: Okay, the next one is OpenStack Cinder. We needed an owner for this.
A: I couldn't progress on this much last week due to other commitments. I feel there's no rush on it; rather than squeezing it into this release, it would be better completed in the next release, with complete end-to-end testing. Can we please move this to the next release?
A: Okay, I'll mark that as no update. The next one is also KK's: volume expansion for StatefulSets. Hemant, do you have any updates on KK's behalf?
D: No, I haven't heard anything from him again, actually. Yeah, I'll...
A: Sounds good, thank you, Hemant. The next one is the sig-apps execution hook, the container notifier for application snapshots. Xing?
F: So yeah, right now this week we don't have any updates. We need to schedule a meeting to resolve the open issues with the API design. I think right now we're both basically focused on the snapshot work, so we have not scheduled anything yet.
A: Makes sense. Cool, thank you, Xing. Next up is the mount repo.
J: Yeah, this is Sreeni again. Basically, we are dependent on cAdvisor version 0.38, and their PR is blocking our PR right now. They said they'll make a PR this week, so there is a likelihood that we may not make it into this release. That's for the last PR I submitted, to remove the final dependencies on the mount details in kubernetes/kubernetes. I also saw another PR for the containerd update, which updates cAdvisor to a SHA, which would also work for us.
J: If that goes through, then I might be able to squeeze this in. At this point I don't have clear visibility into how it pans out; it may spill over.
A: Got it, thank you for that update, Sreeni. I think we're doing the best that we can here; let's see where it lands. Thanks.
A: The last item is co-owned with sig-scheduling: prioritization of volume capacity. Michelle, any updates on this?
A: Okay, cool, we'll get a status update on that next time. Thank you, Michelle. So with that, we are done with the SIG planning section of the meeting. Switching back to the agenda doc, it looks like we only have one miscellaneous item today.
E: Yeah, hi. I think a few people already have some context on this, but let me give some background for everyone. Basically, with external-provisioner 2.0 we added a new command-line argument called default-fstype, which is set to empty.
E: Previously, the external-provisioner would send in the default fsType as ext4, and the CSI driver had the ability to override that. The behavior has changed now to just pass an empty value, and from a CSI driver perspective you can still override, or assign, or format the volume however you would like to.
E: Well, the problem that we found is that if my CSI driver formats the volume with, say, ext4, that information is not communicated back to the external-provisioner. A side effect of that is that if a pod is created with a particular security context and you specify the fsGroup, when kubelet tries to mount that volume, the only information it has is that the fsType was empty, and therefore it won't try to apply that fsGroup on the pod. So basically kubelet ignores the fsGroup that was specified in the pod spec, even though the volume was actually formatted as ext4. The gap that we found is that the formatting information is not communicated back to the external-provisioner, and when kubelet tries to mount that volume it doesn't know that it was actually formatted, and therefore it ignores the fsGroup.
E: So kubelet, I think, uses a combination of fsType and access mode to decide whether to apply an fsGroup or not.

A: I see.
B: In particular, it's just looking at whether the string is empty or not. If the fsType is set to something, it will try to chown all the files, but if it's empty it skips that. And the problem is that if you leave it blank in the storage class, then it will be empty if you also leave it blank in the external-provisioner sidecar.
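A rough sketch (not kubelet's actual source) of the heuristic being described here: under the default policy, kubelet only applies the pod's fsGroup when the fsType string is non-empty and the volume is ReadWriteOnce, which is why an empty default fsType from the 2.0 sidecar silently disables the ownership change:

```python
def should_apply_fsgroup(fs_group_policy, fs_type, access_modes):
    """Approximation of kubelet's fsGroup decision for CSI volumes.

    fs_group_policy mirrors the CSIDriver fsGroupPolicy values:
    "None", "File", or the default "ReadWriteOnceWithFSType".
    """
    if fs_group_policy == "None":
        return False  # driver opted out; kubelet never changes ownership
    if fs_group_policy == "File":
        return True   # always apply, regardless of fsType
    # Default heuristic: only when a filesystem type is known
    # and the volume is exclusively attached.
    return fs_type != "" and access_modes == ["ReadWriteOnce"]

# The surprising case from the discussion: the driver formatted the
# volume as ext4, but the PV's fsType is empty, so no fsGroup is applied.
print(should_apply_fsgroup("ReadWriteOnceWithFSType", "", ["ReadWriteOnce"]))  # False
```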
D: We are actually moving away from this heuristic. In 1.20, Christian is working on a feature where the CSI driver can just explicitly opt in. It's beta, so you can do this: if I say that I support an fsGroupPolicy of File, then even if there is no fsType on the volume, it will still apply the fsGroup.
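The opt-in mentioned here lives on the CSIDriver object. A sketch with a hypothetical driver name (fsGroupPolicy is beta in 1.20):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com   # hypothetical driver name
spec:
  # "File": always apply the pod's fsGroup, even when the PV's fsType
  # is empty. Other values: "None" (never apply) and
  # "ReadWriteOnceWithFSType" (the default heuristic).
  fsGroupPolicy: File
```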
B: The problem with that workaround is that there's already a workaround for this: the change that went into the 2.0 sidecar was supposed to solve this problem by allowing you to set the default in the sidecar, so that even if the user didn't put anything in the storage class, you would get a file system type in your PV.
B: The problem is with those provisioners that sometimes want to set a file system and sometimes don't, or that have two different file systems that they may or may not set. Then there's nothing you can put in the sidecar command-line parameters that will always give you the right answer, and I think you're going to run into the same problem with the CSIDriver object's fsGroupPolicy.
D: But in the case of Windows, it doesn't apply an fsGroup.
B: Yes. A combination of that change and a change to the CSIDriver object that says "don't apply fsGroups for me" would allow these more complex CSI drivers to opt out of kubelet's current behavior and say "I will do it myself." As long as all of the same information is pushed down, the CSI plugin can just do the right thing, whatever that is.
D: Okay, I think that's a separate concern, maybe, but one I think we should track. As far as I remember, the work to pass the GID down at NodeStage is not finalized; I had a conflict and could not attend the meeting, but Jan told me that there was no major objection. It's just that James, who was opposed to adding the GID as a mount option, was not there in the meeting.
A: So, does this summary accurately capture what we discussed? Short-term mitigation: for drivers that support multiple storage backends, set the fsType in the storage class; for drivers that support a single storage backend, set the fsType in the CSIDriver object. Or sorry, that should be in the sidecar configuration.
A: And then, longer term, for drivers that support a single storage backend, drivers need to be able to return...
B: No, no, that's the workaround that doesn't work very well. The better one is to have the fsGroup stuff applied at NodeStage time. Okay, I guess I wasn't paying close enough attention when we discussed that, but I'm much more interested in it now that I understand what the point of it is.
A: Okay, so effectively what we're saying is that Kubernetes doesn't care what the fsType is, because we give drivers the option to handle the only reason kubelet would care, which is the fsGroup. And if that's no longer kubelet's responsibility, then we don't really care what the fsType is on the Kubernetes side; it just becomes an interesting detail we show in the UI. So that's the second part, which is, for the nicer UI, ensure the PV object records it.
D: Yeah, so one of the items that we are trying to get in, and I think this will be tricky, is that we decided to put an on-mount policy in the thing that Christian is working on, the fsGroupPolicy, so that only when it is the mount policy will the GID be supplied to the driver. And now I think we are tilting towards passing it always.
A: Okay, so basically Christian's work right now is to change when the fsGroup is applied. What you're talking about is changing where it is applied: basically, in the driver or in kubelet.
B: Yeah, we may need to rethink the design around the fsGroupPolicy to make sure that it's going to be compatible with a model where we're passing it down to the CSI plugins.
C: There's another fsGroup-related change going on right now to let pods opt into whether fsGroup is recursively applied on every mount, or only when the permissions change. If we delegate fsGroup to the CSI driver, I think that information would also have to be passed down.
D: If we are going to pass the GID, then I think we have to pass it in both places, because drivers that mount the volume during NodeStage have to have the GID access in NodeStage.
A: Okay, anything else on this? If not, we're a minute over time and we'll call it.
C: I guess, while we're talking about volume permissions, I think another similar problem that we'll want to start investigating for 1.21 is user permissions on files, because this has been a long-standing ask since the beginning of Kubernetes.
A: Okay, we're over time, and this room is going to be used for another meeting right after this, so I am going to go ahead and end this meeting. We can continue this discussion next time if folks want. Thank you all for your time, and take care.