From YouTube: Kubernetes SIG Storage 20191219
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 19 December 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.1jh4s0fu9aul
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
N/A
A: All right, today is December 19, 2019. This is the meeting of the Kubernetes Storage Special Interest Group. Today we're going to start planning for the next quarter, for the next release of Kubernetes, which is version 1.18, and figure out what this SIG wants to work on, with folks committed to working on it, reviewing it, and so on.
A: So let's review these items and confirm they make sense, make sure their priorities are correct, their assignees are correct, and so on. If you have anything you think we should be working on, feel free to add it to the bottom and we'll review it when we get there. The first item here is CSI online/offline volume resizing.
A: Right, cool. Everything else here looks good, and P1 seems like the right priority. The next item is addressing snapshot issues. Snapshots were moved to beta in 1.17; a lot of refactoring and work went into that. It sounds like we want to keep it in beta for one more quarter before moving to GA, and this quarter we want to focus on bug fixes. Xing, do you want to talk about this? Sure, yeah.
B: I don't know if Jing is on the call. We discussed this issue at KubeCon, and we also discussed it with some kernel folks and with folks who work on CRI container runtimes, for example using shiftfs, and whether the kernel could fix it. What we have learned so far is that a real fix in the Linux kernel will take a really long time, more than one year, and it could possibly never happen.
B: So there could be a "smart" version that checks the permissions of the topmost directory and files, and if it already has the right write permission and group, it doesn't do the recursive chown. It could be "always", which is the current behavior: it always recursively changes the permissions. And users could also opt in to "never". That's the design we have so far, and I think there has been no major opposition to it.
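The opt-in described above might look roughly like this in a pod's security context. This is only a sketch of the design under discussion; the field name `fsGroupChangePolicy` and the policy values are illustrative, not a final API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 2000
    # Illustrative policy field for the design discussed above:
    #   "Always"         - current behavior, recursive chown on every mount
    #   "OnRootMismatch" - the "smart" option: skip the recursive chown when
    #                      the top-level directory already has the expected
    #                      group and write permission
    #   "Never"          - opt out of ownership changes entirely
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: example.com/app:latest   # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim
```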
A: I think that seems reasonable, and you and Jen are still the right folks to work on this? Yeah. Okay, do you know, between the two of you, who would be doing the coding work?
A: Okay, so we're going to strike this from the Q1 planning and punt it to Q2. For Q1 we'll focus on the low-hanging fruit in terms of volume permission issues, then we can come back, try to address volume permissions more holistically, and revisit. UID/GID handling is part of the same, so I'm going to delete that.
F: I feel like it could be similar, though: if you're specifying "hey, don't make this change" in the pod security policy, or sorry, in the pod security context, you can also set the security context to use, right? So it still falls into a similar category. If someone has a specific need for this in my environment, I might just say: hey, we're going to give you permission to use a specific context, and then you can set it to not change.
A: Okay, cool, thanks for that clarification, and thank you, month, for clarifying that as well. The next item is volume expansion for stateful sets. This would be a cross-SIG collaboration where SIG Apps is actually working on it, and we're very interested in it. Are you still interested in shepherding this with SIG Apps?
A: There's a GitHub issue where a set of issues were highlighted, along with what we wanted to do about them. Some of that information may be out of date, so the task here would be to go through that, review it, and come up with a new plan for what still needs to be done, if anything; and if there is anything, go ahead and start working on it.
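For context, per-PVC expansion today is done by raising the claim's storage request (assuming the StorageClass opts in with `allowVolumeExpansion: true`); the StatefulSet work discussed above would extend this to the claims created from `volumeClaimTemplates`. A minimal sketch, with an illustrative provisioner name:

```yaml
# The StorageClass must opt in to expansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: example.com/driver   # hypothetical driver name
allowVolumeExpansion: true
---
# Growing a volume is then a matter of raising the request on the PVC;
# the StatefulSet enhancement would propagate such a change from the
# volumeClaimTemplates to each existing claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-db-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable
  resources:
    requests:
      storage: 20Gi   # was 10Gi; edit upward to trigger expansion
```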
A: Very cool. The next item is the CSIDriver API object, which is used to enable a number of features, including the ability for a CSI driver to automatically have attach skipped if it doesn't support attach, and to be able to request pod information from Kubernetes on the mount call. We want to move this API to GA. It was almost moved to GA last quarter; we'll get it moving along this quarter. Christian has been signed up for this, is that correct?
A: Cool, so nothing more here; we're going to move it to GA. There should be a KEP for this already, yeah.
A: The next item is improving CSI metrics. I'm already working on it this quarter. I don't know if we need to continue tracking this next quarter, but we can leave the item here and drop it if we figure out there's no further work to be done. I'll keep it assigned to myself.
A: Issues related to volumes that are mount points, issue #72347. This is an issue we've been looking for someone to pick up and help fix, but we haven't been able to find an owner. It is important for CSI ephemeral volumes. Anyone interested in helping debug and fix this issue?
A: We want to provide some mechanism by which both volumes and snapshots can be transferred from one namespace to another. This is useful in a number of different user scenarios. John had started a proposal earlier, but I'm not sure if he's able to commit to it. Is anyone interested in picking this up and helping drive the design?
A: Thank you. Thank you, Shane. The next item is the volume group API. This is a big, big item, and I think it's going to unlock a lot of interesting functionality. There has been a question about whether volume spreading should be part of this API or not, and there's a related item: there are volume groups, and then there are storage pools. Do we have storage pools?
D: That one, Patrick... no, he has been updating his KEP, so I'm pretty sure he wants to bring that to alpha. So it's probably okay; I think he's pretty eager to get that in. It's just very complicated, and we also need to consolidate the two KEPs. I think there was also a certain issue that he raised; I didn't get to follow it, I didn't get to read the latest update.
A: Okay, so we've got both of these. One of them is just going through design; the second one is going to try to move to alpha. This one needs an enhancement issue; this one needs a KEP. Okay, that looks good. And we talked about... no, we haven't talked about the generic data populator yet. The generic data populator: if you noticed, on the PVC object there is now a data source field, and that data source field supports a handful of types today.
A: Those types include a snapshot as a data source. You can also have a PVC as a data source, which enables volume cloning. We want to open this field up for generic use, so that the thing that provisions the data on the volume can be different from the thing that provisions the volume itself.
A: You can imagine, for example, a volume that gets provisioned by some storage system and then an application that fills it with the content of a GitHub repo before it is made available to a pod to use, or any other combination where you have a generic data populator. So it's a powerful API.
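Concretely, the data source field looks like this today, with the generic-populator proposal amounting to allowing arbitrary kinds in the same field. The second claim's API group and kind are purely hypothetical, to illustrate the idea:

```yaml
# Restore a PVC from a snapshot (one of the types supported today).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
---
# The generic-populator idea: point dataSource at some custom resource
# and let a populator controller fill the volume before pods can use it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: from-repo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: populators.example.com   # hypothetical
    kind: GitRepoContents              # hypothetical
    name: my-repo
```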
A: The tricky bit is figuring out how we make sure the volume is not made available for general pods to use until it has been populated, while making it available to populator pods to fill it with data. That's a pretty tricky bit of design work that needs to be done. Is anyone interested in owning that?
D: So I think this is definitely something that the Data Protection working group would be interested in, but I just don't know if I can commit as an owner yet. Probably we should talk to Yang and Yu and see, yeah.
A: That's fine; we don't have to assign owners for all of these. I think last quarter we overcommitted on the number of things we wanted to do, so this quarter I'm okay not committing to many things, having a few things that we commit to and trying to get those completed, rather than spreading ourselves too thin.
A: Okay, moving on, the next set of items are CSI drivers. We have four drivers that are owned by this SIG. Most CSI drivers are owned by the vendor who builds the storage system, but in this case there is a common set of drivers with no vendor ownership; they are generic, and so this SIG owns them. Those drivers are the NFS driver, the iSCSI driver, the Fibre Channel driver, and a Flex volume adapter driver.
A: All of these need a significant amount of work to get them into working condition. For the NFS and iSCSI drivers we need image building, testing, CI/CD, and documentation. Fibre Channel still hasn't even been moved from its original location, and there's a ton of work that needs to be done; the same is true for the Flex driver.
A: My only concern is that if we ever want to migrate the in-tree versions of these drivers to CSI, this will become an issue; but if that happens, we can consider reviving these. In lieu of having owners, I think that makes sense. Should we instead have items to track the retiring of these repos? Sure, okay. So let's do that instead, yeah.
A: The external-storage repo is part of kubernetes-incubator, and the Kubernetes repo admin team wants to retire the entire kubernetes-incubator organization. That means everything inside it, including external-storage. The primary project that used to live under external-storage was the external provisioner library, and that has since been moved to kubernetes-sigs; that is its new location.
A: So I wanted to bring this up in the SIG to say: hey, we're going to retire this repo. If there's anything in here that you depend on, please let us know, and we can create a new repo under kubernetes-sigs for you and have you move that content there to continue working on it. Otherwise, we will deprecate this pretty soon. I've sent out an email on the kubernetes-sig mailing list about this; I just wanted to give everyone a heads up.
A: All right, so before we jump into the SIG Apps items, let's finish off the items we have for our SIG. A big item here is moving raw block to GA. I would put that as a P1 instead of a P2. There are two components to this: raw block for CSI and raw block for in-tree. Based on my conversations with Jan, raw block for in-tree is in a much better state than raw block for CSI.
A: The plan would be to try to move both of them to GA this quarter, so that we can make one big announcement about raw block and not confuse users. But at the same time, we have delayed the GA of raw block in-tree for a very long time, so if this quarter it looks like CSI is going to slip for some reason, because new issues were discovered, we'll have to decouple the two and move in-tree.
A: All right, that sounds good to me. The last two items we have here are items that are ideally owned by SIG Apps but that we are interested in, so we would need someone in this SIG to help shepherd them. The first item is to address an issue with PVCs that are created by a StatefulSet: today, if you delete the StatefulSet, the PVCs are not deleted. The reason for this is that as a StatefulSet is scaled up and down, it preserves the PVCs when it scales down.
A: They preserve the PVCs in case somebody wants to scale back up, and they want that operation to happen quickly. But if the StatefulSet is completely deleted, the same behavior applies, and that doesn't really make a lot of sense, because it ends up leaking a bunch of PVC objects that the user has to go in and manually clean up. So the ideal behavior we want is... yeah.
A: Same, I agree, and I think that in order to make it backwards compatible, it might be something like a flag or an API option to say: hey, I want this to be deleted when the StatefulSet is deleted. Technically, StatefulSets fall within the purview of SIG Apps. Dave, who was on the call last time from SIG Apps, said he would be interested in helping drive this. Does anybody know if that is still the case? Xing, maybe? Yeah.
D: So aDNA has already submitted a PR, just to get it started. He said he will continue working on it for a while and then hand it over to another person, so I will follow up with them.
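A backwards-compatible opt-in like the one discussed could look roughly like this on the StatefulSet. The field name and values here are purely illustrative; the actual API is still being designed in the PR mentioned above.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  # Hypothetical field sketching the behavior discussed above:
  # keep PVCs on scale-down (today's behavior), but delete them
  # when the StatefulSet itself is deleted.
  persistentVolumeClaimRetentionPolicy:
    whenScaled: Retain
    whenDeleted: Delete
  serviceName: db
  replicas: 3
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: db
        image: example.com/db:latest   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```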
A: Okay, so if there's anything else that comes up, feel free to add it to the bottom of the task list and we can review it at our next meeting. Going back to the agenda, it looks like there are no PRs that need attention. In terms of design reviews: provisioning. Jeff, do you want to walk us through it? Yeah.
K: It's just a reminder that, as a result of the SIG Storage face-to-face at KubeCon, we have started a KEP for an API for provisioning. Some Red Hatters have chimed in, we got some great feedback from Andrew at Google, and we'd love to get input from others as this KEP goes through the process; we just want as much input as we can get on it. So it's really just a heads-up that it is out there. Awesome.
K: Yes, I mean, that would be awesome. It just depends on the feedback we get and how many changes come out of it; Andrew's suggestions and comments are fairly significant, so it may be that we just get a solid KEP, a solid design, by that time. It just depends how it goes.