From YouTube: Kubernetes SIG Storage 20201203
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 03 December 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.fry0hwxnun10
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: The latest milestone just happened, which was the code freeze and docs freeze. The release is upcoming on the 8th. So what we're going to do is go ahead and get an end-of-quarter update for each one of these items and see where they are — what made it, what didn't make it.
A: So with that, let's jump into the spreadsheet and start getting status updates. Excuse me. First item is from Hemant: CSI online/offline volume resizing. Any updates on this?
B: We don't have any new update — the last update still holds true. In the next release, which is, I think, 1.21, we will work on two items: at least moving allowVolumeExpansion to the PV (copying it onto the PV), and then the node secrets issue that still has to be discussed in the CSI community, maybe. And then the third item, recovering from resize failure — we did not get to that.
C: Yeah, so we have been making good progress. The metrics support PR for the snapshot controller just got merged last night, and the e2e test for the metrics has been submitted on top of that one — we have to wait until the branch is open to merge it. The PR that handles certificate rotation for the webhook is also merged. The next one we're trying to get in is a PR to update the controller based on the v1 snapshot APIs; that's been reviewed, we just need to get it merged. And then we also need to update the client-go module to v4, I think. After that we should be good to cut the release, so we're getting close.
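For reference, a minimal sketch of a snapshot under the v1 API group that the controller update targets — the object names here are illustrative, not taken from the meeting:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot          # illustrative name
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: example-pvc
```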
A: Sounds good. Let's keep this as started for now, and then next cycle we can finish it off and mark it as done, hopefully.

A: Cool. Next item is non-recursive volume ownership (fsGroup).
E: I think one PR was merged and we had to re— oh, so, okay, let's see: Andy's PR was merged and that's fine. There was one more PR that got merged, but we had to revert it because it actually caught a bug in the volume manager.
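As context for this line item (my gloss, not from the meeting): the non-recursive behavior is driven by the pod-level fsGroupChangePolicy field. A minimal sketch with illustrative names:

```yaml
# With OnRootMismatch, ownership/permissions are only changed
# recursively when the volume root doesn't already match fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: OnRootMismatch
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
```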
F: Yeah, no changes were done for 1.20 in this release. We got clarity on — oh well, that's the other line item, so no changes there. I would like to add, perhaps, that I started working a little bit this quarter on running the external provisioner alongside each node, which wasn't originally part of the spreadsheet here, but it does touch on this capacity tracking a little bit, because there was one part in the KEP about that particular way of running it, and it looks like we'll have it implemented for—
F: —alongside the CSI driver on each node, so there is no central component anymore. That's useful for local volumes, because then the CSI driver really remains fairly simple: it just needs to manage volumes on the local node it runs on, and the rest is handled by the external provisioner.
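A sketch of the per-node deployment model Patrick describes — the provisioner sidecar sits next to the CSI driver in a DaemonSet instead of a central Deployment. The flag and image names below are illustrative assumptions, not taken from the meeting:

```yaml
# One external-provisioner per node, next to the CSI driver container.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-local-driver
spec:
  selector:
    matchLabels: {app: csi-local-driver}
  template:
    metadata:
      labels: {app: csi-local-driver}
    spec:
      containers:
        - name: csi-driver
          image: example.com/local-csi-driver:latest   # placeholder
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v2.1.0
          args:
            - --csi-address=/csi/csi.sock
            - --node-deployment=true   # provision from the node, not centrally
```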
A: Got it. Okay, cool — thank you for that update; we'll go ahead and get that moved to 1.20. Next item is PVC inline ephemeral volumes, and then CSI ephemeral volumes. Patrick, do you want to give an update on both of those?
F: Yeah — this quarter we basically froze, or rather didn't do any further development. But we had one important meeting on the API questions, and the conclusion was that for the generic ephemeral inline volumes we continue as planned with the same API, so that is something I intend to work on for 1.21. In the same meeting we discussed how to proceed with CSI ephemeral inline volumes and agreed that the API needs to be revised.
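For reference, the "same API" being kept for generic ephemeral inline volumes is a volumeClaimTemplate embedded in the pod's volume list; a minimal sketch (names and sizes illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:   # a PVC is created and deleted with the pod
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: example-sc
            resources:
              requests:
                storage: 1Gi
```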
F: Specifically, we want to move the top-level entry in the volume source structure from "csi" — which is also a bad name for it — into something that is a bit more obvious about what it is. I don't remember what we agreed on, but it's all in the tracking issue in kubernetes/enhancements; there's one enhancement issue for CSI ephemeral volumes, and I summarized the next step in that issue.
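The top-level csi entry in question is the inline pod volume source shown below; the agreed revision would move or rename it to something more self-describing. Driver name and attributes here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-inline-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      volumeMounts:
        - name: secrets
          mountPath: /secrets
  volumes:
    - name: secrets
      csi:                           # the top-level entry Patrick mentions
        driver: inline.example.com   # illustrative driver name
        volumeAttributes:
          foo: bar
```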
F: I need to point out that this particular line item needs another owner — I won't have bandwidth for both the other two items and this one. So in 1.21, if nothing happens, or no one else steps up, it will remain in beta with the current API and no further progress will be made.
A: I think Matt Carey may be able to pick this up; once we do the planning for Q1 we can sort that out. All right, thank you so much for those updates, Patrick. Next item is spreading over failure domains. Xing?
C: Yeah, so this might still slip, depending on the next one — the volume group one. I haven't taken a chance to update that yet, so I will get back to you next week.
A: Okay, next up is moving out the GlusterFS provisioner. This was completed; I don't think there's any further work that needs to be done here. Moving out the NFS—

A: That is a good question. Let's ask.
G: So I got some help from Michelle on how to proceed with the image building on this one; I plan to work on it this month. At the same time, there's actually a community PR that came in for the NFS subdir project for image building via GitHub Actions. I wanted to check here if that's something we want to proceed with, or should we go with the—
G: Okay, so I think I'll then go ahead and split that PR — take the multi-arch pieces, but actually set it up with the cloud build configuration. I'll work with the contributor on that one.
A: Sounds good, thank you. And that was for — for both NFS—? Oh yes, I think so — perfect. Thank you, Karen. Right, copy that, and we'll get another status update on that at the next meeting in two weeks. Next item is volume snapshot namespace transfer. Ben?
A: Oh yeah — Mike, my bad; I have you in my head for everything related to namespaces.

A: It doesn't look like he's on. Anyone have an update for namespace transfer? I think the conclusion here is: no update.
H: So I am curious, just because I don't think this is that big of a deal — has much progress been made on this? Is there a proposed approach? Have we gone anywhere there?
C: There was a new KEP that he submitted, but after that I don't see anything.
A: No worries. So we'll go ahead and get that moved to 1.21; hopefully more people can take a look at it. The security concerns are legitimate, so that's probably the biggest angle to look at this from. Next up is the CSI volume health initiative. Xing, any updates on that?
C: So they're still working on that — I think there are some CI issues they're trying to get resolved, so we're going—
H: In October we started up the series of meetings to design this, and those have been going well, and this week I have been making a lot of progress on it, so we're aiming at 1.21 for this. In the course of working on it I managed to find a bug in the provisioner sidecar, and I have a new PR already up for the controller that will handle populator CRDs. I'm working on the new sample populator implementation. So yeah.
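For context (my gloss, not from the meeting): data populators hang off the PVC's dataSource field, which the AnyVolumeDataSource feature gate opens up to arbitrary objects. The API group and kind below are illustrative:

```yaml
# Sketch of a PVC populated from a populator-owned custom resource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: populator.example.com   # illustrative
    kind: SamplePopulator             # illustrative
    name: my-source
```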
A: Awesome — good progress. All right, cool, we'll make sure to get that moved over to 1.21. This is going to be super exciting once it comes together. Next item is COSI. Srini, Sid, Jeff — anyone want to give an update?
A: Okay, I can fill in for them — they're pretty much the same status as before. Good progress in coding, and they have their weekly design discussions. There was an active discussion last week and they made some good progress there, and this will continue into 1.21.
H: I'll add to that that a lot of the recent discussion on this one has been around the downward-facing interface — the application-facing part of COSI. That's where we're struggling the most, I think; all of the, you know, provisioning and lifecycle management for buckets—
A: All right, next item is fsGroup support in CSI. Christian Huffman, are you on the line? Yes — this was merged. Awesome; it's on track for 1.20. And so can we mark this complete? Yes. Great, thank you so much — perfect.
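For reference (as I understand this line item): the CSI fsGroup support is expressed on the CSIDriver object. A minimal sketch with an illustrative driver name:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com   # illustrative
spec:
  # File: Kubernetes applies fsGroup ownership/permission changes to the
  # volume; other values are ReadWriteOnceWithFSType and None.
  fsGroupPolicy: File
```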
A
Looks
like
the
last
status
update,
pretty
much
captured
it
shing.
Is
there
anything
else
you
want
to
add?
I
think
that's
still
the
same.
That
makes
sense
so
we'll
go
ahead
and
get
this
move
to
121.
E: Azure File — there was a major bug found at the last minute, so we missed the code freeze deadline. Andy is continuing to debug the issue, and we'll have to target this for 1.21.
A: OpenStack Cinder — we were looking for an owner here. Hemant, I think you mentioned you were going to try to find an owner for this. Any luck?
B: Yeah, so this is what we were talking about in the last update: if the Cinder/OpenStack team at Red Hat doesn't come up with someone, then this will be owned by us, actually — Miyan and Fabio — so we'll have an owner for this.
A: And then CephFS and Ceph RBD — I need to follow up with Humble on the status of this. My guess is this will move to 1.21.
A: Okay, the next set of items are co-owned between SIG Storage and other SIGs. The first few are co-owned with SIG Apps, the first of which is an issue with PVCs: StatefulSets create PVCs, but when you delete the StatefulSet, the PVCs are not deleted. It would be nice if they were auto-removed.
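To make the issue concrete (illustrative manifest, not from the meeting): each replica gets a PVC stamped out of volumeClaimTemplates, and those PVCs outlive the StatefulSet itself:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:   # creates PVCs data-web-0, data-web-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
# Deleting this StatefulSet leaves data-web-0 and data-web-1 behind;
# auto-removal is what the SIG Apps discussion is about.
```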
A: This was a design that KK was helping us follow up on with SIG Apps. KK, are you on the line by any chance? Or does anyone else have an update on this item?
A: Doesn't look like he is. The last few status updates were — I think KK was planning to get back on this and provide an update. We'll go ahead and move this to 1.21; I don't think we made it into 1.20.
A: Next item is volume expansion for StatefulSets. Hemant, correct me if I'm wrong, but same thing here — move to 1.21. Okay, and then we have ExecutionHook. Xing?
A: Sounds good, so we'll go ahead and get this moved to the 1.21 release. Thank you, Xing, for the update. Next item is sizing memory-backed volumes — size the emptyDir memory-backed volume as the minimum of pod allocatable memory on the host. I don't recall seeing this one before; looks like Derek is working on it and Hemant's helping.
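For context (my gloss): a memory-backed emptyDir is a tmpfs, and this item is about bounding its size by what the pod can actually use rather than the whole node's memory. A sketch of the fields involved:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      resources:
        limits:
          memory: 1Gi   # with the feature, bounds the tmpfs sizing
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory    # tmpfs-backed
        sizeLimit: 512Mi  # optional explicit cap
```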
A: Cool. Next item is an ask from SIG Architecture to move our mount library out of kubernetes/utils into a standalone repo, and have that repo be replicated in the staging directory. Srini was helping with this. The last status update was that he ran into some problems and needed to revert the cAdvisor change; it will continue after 1.20, so we'll go ahead and move this to 1.21.
A: And the final item was with SIG Scheduling — prioritization on volume capacity. Michelle, any updates on this?
A: Perfect. So I'll go ahead and mark that as started as well, and we'll get that moved over to the 1.21 spreadsheet. It looks like Xing has already created a new tab here, so we can go ahead and get started on populating that. Moving back to the agenda before we continue: I had a question for everyone. The next meeting is in two weeks, which is December 17th.
A: Do folks want to do the Q1 planning session for the 1.21 release on the 17th of December, or should we wait until we come back? On the 31st I'm going to be out — which is the subsequent meeting — and it is a U.S. holiday, New Year's Eve, so I imagine folks may want to cancel the 31st. Yeah. The—
H: The two weeks after the next meeting, probably people will be out due to the holidays, right? So it would be three weeks after that one before we could realistically get back together.
A: Yeah. So if we skip the 31st, the next available opportunity would be the 14th, which would land us almost halfway into the first month of the next cycle. So I was thinking, if we do our planning session on the 17th, it'll set us up well for Q1, and then when we reconvene on the 7th we can come back, reassess, and re-add items as needed.
A: Okay, if that's the case, I'll go ahead and cancel the 31st meeting; we'll convene on the 17th as usual and do 1.21 planning in the next meeting. Okay — with that I'll hand it off to Ben to talk about a new idea for PVC annotation handling. Ben?
H: So there has always been a desire on the part of multiple vendors to allow users to specify some per-PVC option to override some default behavior on a per-PVC basis, and the CSI spec never provided a way to do that — and for good reason.
H: And so it's created this tension where, if you want to develop some sort of new feature that requires the user to specify some per-PVC option, you end up having to do something really awkward — like stick it in a PVC annotation and then have the CSI plugin go—
H: —read the PVC annotations, you know, to sort of prototype some new feature. And we had talked about the potential of just taking the PVC annotations, merging them with the storage class parameters, and passing them all down every time you do a CreateVolume. But that creates real problems in terms of the possibility for conflicts, the possibility of the user overriding the administrator's intent, and the fact that administrators may not like that idea entirely. And so I've been — so—
H: I basically gave up on this a while ago, but I had a new idea a couple of months back when I was looking at the extra-metadata option on the external provisioner. The idea is this: I would like to add a new command-line parameter to the external-provisioner sidecar which lets you specify a prefix, and then only PVC annotations that have that—
H: —prefix would be packaged up and sent along to CreateVolume, alongside the storage class parameters, when you do a CreateVolume. And I just wrote up a PR in the last 30 minutes and pushed it, if you want to look at it — it's very small. It implements this.
H: Other prefixes, where you have some sort of, you know, name.name.name/annotation-name and then a value, where that name would be some sort of vendor domain name, presumably. And it also addresses the issue of administrators wanting to disable this, because the administrator can just modify the sidecar container definition to not specify this prefix, and then the sidecar won't pass anything down. So it puts control in the hands of both vendors and controllers—
H: —to turn this feature on — sorry, vendors and administrators — to turn this feature on if they want it. And it addresses this need for people to prototype things in PVC annotations without many of the problems we've had in the past. So I just wanted to throw this out there to the whole group and see if people hate it, or if people think this has any promise at all as a way to let you optionally pass down some per-PVC options when you create your volumes, in a way that they get all the way to the CSI plugin so it can act on them.
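A minimal sketch of the proposal as described — the flag name and annotation prefix below are invented for illustration; the real names are whatever Ben's PR uses:

```yaml
# Hypothetical sidecar flag: only annotations matching the prefix are
# forwarded to CreateVolume alongside the StorageClass parameters.
#   - name: csi-provisioner
#     args:
#       - --csi-address=/csi/csi.sock
#       - --pvc-annotation-prefix=vendor.example.com   # hypothetical flag
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tuned-pvc
  annotations:
    vendor.example.com/qos: gold     # forwarded (matches prefix)
    notes.internal/owner: team-a     # ignored (no matching prefix)
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```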
A: I think my opinion on this remains the same, which is that this is a pattern we should discourage because of the portability concerns, and—
H: Yeah, yeah — so to be clear, it does sort of cut against the portability concern. But the idea is there are a huge number of users for whom the portability concern is not something—
H: —they're specifically worried about, because they're both the administrator and the user of their particular Kubernetes cluster. They know which CSI plugin they have and they want to enable some sort of functionality. And I'll tell you, our CSI plugin already does the thing I mentioned earlier today, where we just go read the PVC annotations ourselves, and it's just kind of gross. So it's not that people can't do this today, and it's not that it's a good thing to do, because, as Saad mentioned, this is an anti-pattern.
H: But there is very strong demand to let users specify per-PVC options that the CSI plugin can act on, and I can't think of a less bad way to enable this.
A: You know, it might sound evil, but I feel like the kind of icky things that you have to do to make this work are a good thing, because it forces the driver vendor to realize: should I be doing this? Because it's not a standard, supported thing. But I worry that if we do provide a standard way of doing this, like a flag, then it becomes "oh well, it's a supported feature."
A: "Of course I'm going to start using it." And I like the way that you put it, which was that this should be used for prototyping and trying new things. But I worry that what it's going to end up being is that we'll end up with a set of CSI drivers that require the use of this flag — they won't work without it — and we will end up with a generation of CSI drivers that are effectively not portable.
A: Agreed. I think the difference is kind of — it's a psychological one, right? It's "hey, I'm doing something that I probably shouldn't be, because I have to do all this icky, weird, custom stuff to make this happen" versus "oh, this thing is supported out of the box, so it must obviously be okay."
H: Yeah, yeah — I mean, that's the portability guarantee, which is really valuable. It's just that there are some users that don't care; they want to, you know, get at vendor goodies, and the pressure to provide those goodies is immense. And so, yeah, we do the gross thing and read the PVC annotations.
H: That option seems to be in the same vein, where somebody needed to do something outside the CSI spec, and so we added a hack that defaults to off. But it's there to enable some presumably useful behavior.
A: —this, to, you know, do things that are outside of — or that are going to violate — portability. But I think the argument that was made at the time was that we're not passing in the actual PVC, and if anybody were to abuse this, they would have to go out of their way and actually write code to fetch these objects. So I think the primary use case here was just using these to decorate the volume objects that were created, to say—
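For reference (my gloss): the extra-metadata option referred to here is the external-provisioner's --extra-create-metadata flag, which injects identifying keys into the CreateVolume parameters rather than handing the driver the PVC object itself. Roughly what the driver sees:

```yaml
# CreateVolume request parameters when --extra-create-metadata is set
# (values shown are illustrative):
parameters:
  csi.storage.k8s.io/pvc/name: example-pvc
  csi.storage.k8s.io/pvc/namespace: default
  csi.storage.k8s.io/pv/name: pvc-5c6a...   # truncated for illustration
```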
I: If I may — as a driver author, I use the metadata exactly for what you described, for logging. The only variation on that is that annoying StatefulSet problem with, you know, PVs being deleted — I kind of guess the StatefulSet name from the persistent volume.

A: Got it.
H
No,
I
mean
when
you're
down
in
the
trenches
you
frequently
end
up
with
these
weird
sticky
problems
where
it's
like.
Oh,
if
only
I
just
had
you
know
some
extra
information,
I
could
solve
this
better
right
so
yeah
I
come
across
those
not
that
specific
problem,
but
you
know
that
kind
of
stuff
right,
so
so
so
what
I'm
hearing
side
is.
That
is
that
you
would
prefer
to
just
make
this
hard
simply
to
discourage
it.
A: Yeah, I hate to be that person, but yes — it's an anti-pattern, and I'd like people to work harder to make it happen rather than make it easier.
C: That is why I'm kind of concerned when I see that PR to add this extra metadata to the external-snapshotter, just to follow what is there in, for example, the provisioner — because we do see a lot of requests for adding this type of metadata, per snapshot or per PVC. But what do we want to accept, and what do we want to say no to? I don't know what the criteria is. Yeah.
H: You know, this is just something specific to our implementation that we do, and we want to give people a knob to turn it on and off — and we would never propose that as a portable option. So yeah, I'll think about which of these maybe could be made portable.