From YouTube: Kubernetes SIG Storage 20190314
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 14 March 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.d9k1eyhevswq
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:12:31 From Srinivas Brahmaroutu : https://github.com/kubernetes/community/pull/3361/
10:01:12 From Hayley Swimelar : I do, but my mic isn't working
Then, if you have any PRs that need attention or designs that need review, please feel free to add them to the agenda; and if there's anything else that you'd like to discuss, feel free to add it to the agenda under miscellaneous as well. So, let's get started with the SIG Storage planning spreadsheet and go over that very quickly. First, we have the migration to CSI. Are you able to give an update on this?
There are PRs outstanding, but I don't expect any of them going into 1.14. There has been a kind of test-suite effort over the last two weeks, I think, and we'll have to work using that. We opened a PR in kubernetes/community, #3361 — I pasted it into the chat — that is for promoting the validation suite. If that PR gets merged, that will give us the ability to start working on the validation suite.
Yeah, I'm not sure that's enough, because there are a lot of things missing if we really want to find out whether the volume is deleted or not. It seems like there are a lot of things we need to find out, so I don't know — forgive me — whether to just add one interface, or whether you can add multiple things. I'm not quite sure yet. Yeah.
There's a cohesive story kind of intended for that — one that Jing has for snapshots — and I think in one of the next few meetings, either the meeting on Monday or the meeting after that, she'll discuss it more at length. Okay, so, yeah — good progress there. The next item is the CSI library: moving the mount library to an external common repo that can be consumed by multiple places. There was a large PR that got split up, and the first PR merged. Any other progress on this, Travis?
All right, thank you, Travis — this is good progress. The next item is provisioning capacity reporting for generic topology, which is also required for local volume dynamic provisioning. We had no owner for this last time. It sounds like we're unlikely to have found one right before code freeze — is that true, Michelle? Yeah.
That makes sense. The next item is CSI out-of-tree: moving the NFS driver to a separate repo. As of the last update, the initial PR was merged, and there was outstanding work to add updates to the CSI deployment scripts, add CI/CD, and a bunch of other work. Mithu, or anybody else who's been working on this — any updates for the NFS side of things?
Okay, in that case, what we can do is Michelle or I can help review it — just kind of do a sanity check — and go ahead and get that initial merge in there and say, you know, it's a prototype, and if anybody wants to use it and they find issues with it, hopefully they can revise and fix things. We're gonna have to merge it without having any experts review it, then. All right, thanks, Mithu. The next item is moving online resizing to beta. This was planned for this quarter.
Okay, and I hope people in this SIG understand the implications here. If anybody is familiar with a storage system for which this will not work, now is the time to speak up, before this becomes the default. If you don't understand the implications of it, please reach out to Hemant or follow up on one of the related issues. Hemant, maybe you can post a link in the chat for folks who want more context. Yeah.
Yeah, the update is that we had to solve, or at least propose, a solution for offline resizing — there are volume types that only support offline resizing, and the current capability does not address that. So that's what it is blocked on. We'll work to get it unblocked and move it in 1.15. That's the plan.
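For context, volume expansion is opted into per storage class before a PVC may be resized at all. A minimal illustrative sketch (the provisioner and class names here are hypothetical, not from the meeting):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable            # hypothetical class name
provisioner: csi.example.com # hypothetical driver name
# Must be set before PVCs using this class can be expanded.
allowVolumeExpansion: true
```

A resize is then requested by editing `spec.resources.requests.storage` on the PVC itself; the online-resize work discussed here is about doing that while the volume is mounted and in use.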
You know, I know there were CFP submissions for storage-related topics for the overall KubeCon. I don't know how many actually got accepted — I think notices went out last week or something — but for those people that didn't get accepted but are still planning on attending KubeCon, and especially the contributor summit: I was looking at doing some SIG Storage presentations. So if people have ideas, or already plan on being there, please send me an abstract.
In the past, Brad, the contributor summit has been kind of all the Kubernetes contributors in a single room on the first day — like on the Monday before KubeCon — and then folks will go up and give presentations. So it sounds like you're organizing the SIG Storage portion, correct? Okay, cool! So that's the context for folks: this is usually a meeting that's held right before — the day before — KubeCon.
And when you think about these topics, think about how they would be more widely applicable beyond just SIG Storage, to the wider Kubernetes community. What are the things that we want everybody else who's working on Kubernetes — who's helping develop Kubernetes — to be aware of? I think there are a lot of good topics. So thank you for organizing this, Brad.
All right, and then the next announcement was that the next meeting for this SIG is gonna be March 28th, and we're going to use it to plan for 1.15, so be prepared for that. I'll try and send out an email before then to start generating a list of topics. I'll create a new sheet on this spreadsheet, and you can start adding items there for things that we should be working on for 1.15, and we'll help review those at our next meeting. So prepare for that. The next item is PRs to discuss that need attention.
Actually, I think Hemant mentioned yesterday the online volume resize — that's in alpha and stable, so I assume that we can push it forward to beta in 1.15. And the second issue is a separate one that essentially sums up both of these things — something we have customers interested in, and that we are using; Aakash, who I believe is also on, as well: PVC annotations that are used today to set certain parameters for creation.
What I want to underscore here is that the PVC object is supposed to be all about portability — portability across clusters, across implementations of different types of storage. So what we don't want to do is have the PVC object decorated with fields, annotations, or labels that are specific to any given storage provider. The PVC is a generic way to ask for storage.
The things that are specific to a storage system should be on the PV object, or they should be on a storage class. That said, I do understand and realize that there are cases where a storage class's opaque parameters are insufficient. So what I've been asking folks to do is, instead of poking a hole in the API and saying, hey, let's just expose annotations directly to the storage system from the PVC: let's talk about the individual use cases that you have. And folks have highlighted a couple of useful use cases to me.
One was around storage system topology. That was a good use case; Michelle and I have talked about a design, and we'll get something out there. Another interesting use case was around the ability to tag volumes, and I've got a couple of different ideas for that. One: if you want to just tag it with PVC information, the name field on the CreateVolume call already includes the PVC UID, so that should be sufficient if that's the only information you want. But maybe the type of tagging that you want is more advanced.
So, for example, you want the ability to tag your volume with workload information, or you want to be able to tag it with, for example, information about billing, or something like that. My recommendation would be to create a small, simple controller that looks at your provisioned PVs and kind of rectifies and applies whatever needs to be done. The benefit there is that you're gonna get the customization that you want, as well as the fact that it'll actually be continuously updated, rather than applied once and potentially getting stale over time.
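The small reconciling controller described above could be sketched as follows — a pure helper that derives the tags a provisioned PV should carry, which a watch loop would then apply to the storage backend. This is an illustrative sketch, not an existing component: the label keys and the `apply_backend_tags` hook are hypothetical stand-ins for vendor-specific pieces.

```python
# Sketch of a tag-reconciling controller for provisioned PVs.
# Label keys and the backend hook are hypothetical examples.

def desired_tags(pv: dict) -> dict:
    """Derive the tags a PV should carry on the storage backend."""
    labels = pv.get("metadata", {}).get("labels", {})
    claim = pv.get("spec", {}).get("claimRef", {}) or {}
    tags = {
        # Always record which claim the volume backs.
        "pvc-name": claim.get("name", ""),
        "pvc-namespace": claim.get("namespace", ""),
    }
    # Copy through billing/workload labels the user set (hypothetical keys).
    for key in ("example.com/billing-id", "example.com/workload"):
        if key in labels:
            tags[key] = labels[key]
    return tags

def reconcile(pvs, apply_backend_tags):
    """One reconcile pass: compute and apply tags for every PV.

    `apply_backend_tags(volume_handle, tags)` stands in for a
    vendor-specific storage API call. Running this repeatedly (or from
    a watch) keeps tags continuously up to date, rather than applied
    once at provision time and left to go stale.
    """
    for pv in pvs:
        handle = pv["spec"]["csi"]["volumeHandle"]
        apply_backend_tags(handle, desired_tags(pv))
```

In a real deployment the PV list would come from a client-go or kubernetes-client watch on PersistentVolumes; the point of the design is that the customization lives outside the portable PVC API.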
Hi — so I wanted to highlight one of the use cases I can bring up from our customers. They would usually provide the Kubernetes zones and regions from the PVC: they decide on their own where they want to provision the PVC, and today they send it via the PVC — the zone is sent by the customer. So that is one of the use cases we had where we needed this to be passed on to the driver. Sure.
So I think you can do that today with the volume scheduling feature. Basically, you turn on late binding in the storage class, and then we pass the scheduler's decision for pods through to the volume creation. That way, all the information is taken from all the constraints that the pod has specified.
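The late-binding setup described here is a single field on the storage class; a minimal sketch (driver name hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware       # hypothetical class name
provisioner: csi.example.com # hypothetical driver name
# Delay volume creation until a pod using the PVC is scheduled, so the
# scheduler's zone/region decision flows through to CreateVolume.
volumeBindingMode: WaitForFirstConsumer
```

With this in place, the pod's own constraints (node selectors, affinity) drive where the volume is provisioned, instead of the user encoding zones on the PVC.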
Our storage product has, you know, a concept of tenancy, and it aligns really well with the Kubernetes concept of namespaces, and we'd like to be able to do our provisioning within it — essentially align these two things, Kubernetes namespaces and tenancy. But with the current CSI model, the namespace is only passed in if you turn on this particular CRD object, and it's only passed in with, like, two calls — I think the node calls. We need it in order for us to support the tenancy model that Kubernetes has.
I think that's a legitimate use case. Could you open an issue to track that, and we can brainstorm some ideas? My initial reactions are, one: can we make this more generic? That is one way to do tenancy — are there other ways? Let's see if we can think more generically. If not, honestly, passing through namespace information is not a huge deal, because you can't use that alone to violate portability.
I see a problem, though, in the future, if we ever had a way to move a PVC across namespaces — because PVs have no namespace, right, and you can delete a PVC and then create a new PVC in a different namespace that's bound to the exact same PV. Kubernetes will allow this.
I believe the generic way to solve that would be: if, say, you didn't have a way of moving between tenancy models in your storage back-end, you would essentially do a slow transfer — a host-assisted transfer — to the next one. You would create the volume in the new tenant and then transfer between them.
Actually, I think we just discussed this — we ironed out some more details on it yesterday. Right now we support secret templating in the storage class, but only for all the operations except provisioning, and we just worked out some details yesterday on how to support templating for provisioning, and I think it can work. There's an issue open in the external-provisioner; I would take a look at it and at our proposal and see if that works for you.
You can actually have templates that you specify as the value of that name or namespace, and then at provision time, or at a later time, Kubernetes will automatically replace that template with, for example, the name of the PVC that is being used to provision. What that allows you to do is have a single storage class with multiple different secrets for it, based on the volume that's being provisioned — and the secret could live in the namespace of the user, or it could live in another namespace.
This is very flexible currently for, basically, the attach and mount calls — the secrets that are required on CSI for attach and mount. On the create- and delete-volume side, it's not very flexible right now: it only allows templating of the PV name, and we're looking into making that more flexible. So that's the issue that Michelle just mentioned.
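For reference, the external-provisioner's templated secret parameters in a storage class look roughly like this (driver and namespace names are hypothetical); as the discussion notes, at the time of this meeting the provision/delete side only supported templating on the PV name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: per-volume-secrets   # hypothetical class name
provisioner: csi.example.com # hypothetical driver name
parameters:
  # Provision/delete side: only ${pv.name} templating at this point.
  csi.storage.k8s.io/provisioner-secret-name: ${pv.name}-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi-system
  # Node-publish (mount) side: per-PVC templating is already flexible.
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
```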
So I looked at this templating — I just took a look at it for the create and delete operations. The thing is, it is taking the PV name. As a template, I'm referring more to the dynamic provisioning case, where the user just creates the PVC and is not aware of the PV name — it's dynamically assigned, right? So the secret name to create changes with every PVC; it's automatically assigned.
Another issue is that storage classes are typically only administered by the cluster administrator, whereas we'd like users to be able to specify how many replicas they want, whether they want encryption, and things like that. So, you know, the storage class does work, but we just end up with loads of them. We found that by passing the labels through to the storage system, we're able to avoid that.
Yeah, I understand the attraction of being able to use annotations on a PVC to reduce that combinatorial explosion of storage classes that happens. I don't think that's a good pattern to move towards in general: when we're violating the portability of a portable Kubernetes API object, we need to have a very, very high justification. So I do recommend looking at it on a case-by-case basis — what are the things that are resulting in that combinatorial explosion, and can we come up with a way to reduce that?
Are there first-class fields that we can add? Is there first-class functionality that we can add to make Kubernetes more aware and able to allow automatic selection? Beyond all of that, there was an option that Michelle and I started talking about which would allow users — potentially application developers — to override storage class parameter defaults. We've kind of tabled that after discussing with Jordan Liggitt: it becomes very, very complicated, very quickly.
It also moves the problem of portability to a different place. What we were proposing was having a config map that a user can define, that would allow you to override the parameters from a storage class, with the user referencing that config map in their PVC. But then, you know, we need to define edge cases: is it required? What happens if it's not specified, or what happens if it's specified and those parameters don't match up with what the storage class provides?
How do you specify which parameters within the storage class can be overridden? I think Jordan's biggest questions were around security. The storage class exposes parameters that are very powerful, and not all of them should be exposed to application developers or end-users. And at that point — once we're saying this is going to have a default value, this is not going to have a default value, this can be overridden, this can't be overridden — we're essentially defining a very complicated schema within the storage class.
There was a mistake in the API when it was built, very early on: honestly, file system should not have been a per-volume field; it should have been a PVC-level field.
Unfortunately, that wasn't the way it was implemented — block didn't exist at that time, and for a number of reasons it wasn't implemented that way — so we have to kind of live with that. But I guess what you're asking for is a way for users to be able to specify the file system, rather than having it set once per storage class? Yes.
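For reference, the once-per-class arrangement being contrasted here looks roughly like this with a CSI driver — the `csi.storage.k8s.io/fstype` parameter is interpreted by the provisioning sidecar, and the driver/class names below are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xfs-volumes          # hypothetical class name
provisioner: csi.example.com # hypothetical driver name
parameters:
  # File system type chosen once for every volume of this class,
  # rather than per PVC.
  csi.storage.k8s.io/fstype: xfs
```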
If I may make a suggestion — this is Jose Rivera. What about simply telling vendors to implement a controller and a CRD to abstract away the storage class combinatorial explosion? So you'd effectively not be working with raw, straight-up PVCs; you'd define a CRD for your volume type that is then handed off to a controller, which takes care of finding the appropriate storage class based on the parameters you give it and then creates the PVCs. So it's a level of abstraction on top of PVCs. Yeah.
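A minimal sketch of the kind of object this suggestion implies — every name here is hypothetical; a vendor controller would watch these and create the matching PVC against the right storage class:

```yaml
# Hypothetical vendor-defined volume request. A controller watches
# these, picks the appropriate StorageClass from the parameters,
# and creates the real PVC on the user's behalf.
apiVersion: storage.example.com/v1alpha1
kind: VolumeRequest
metadata:
  name: db-data
  namespace: prod
spec:
  size: 100Gi
  replicas: 3      # vendor-specific knobs live here, not on the PVC
  encrypted: true
```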
I mean, also, if we could have a PVC bind — you know, a request — to a pool or to a storage class based on labels, then you would have an easy way to dynamically select a storage class, and then having, you know, a dozen or a couple dozen storage classes wouldn't be so bad.
Yeah, and I mean, in general, it's not such a horrible pattern in Kubernetes. In Kubernetes, what we try to do is expose the lower-level primitives that enable a wide variety of use cases, and then have folks extend and build on top of that. To deploy any sort of very complicated workload in Kubernetes, you don't actually create, you know, stateful sets and replica sets and pods and PVCs manually, in general.
You have exactly what Jose described, which is CRDs and a controller — an operator — to deploy a specific type of complicated application. So, presumably, end-users are not working on this stuff manually or directly; they have some sort of nicer abstraction in between that minimizes the complexity a little bit, even if it exists at that lower layer.
If you're putting every property out to your user and having this combinatorial explosion, I think you're kind of doing your users a disservice, right? An application developer should care about maybe the backups, but not necessarily every little parameter — and by, you know, specifying all of that, you lose the portability.
But I want to reiterate Brad's point as well: really think about what the parameters are that you really, really want to expose to your end users, the application developers. Kubernetes kind of wants to get to a point where application developers have a very small set of choices available to them. They shouldn't really have to think too much about storage beyond, like, "I want slow, medium, or fast" — something like that.
Okay, good topic. Feel free to move the items that we didn't get to to the next meeting, and we'll talk about them there. Thanks! All right — thank you, everyone, for the great discussion, and if you have any questions or concerns, feel free to follow up offline on the SIG Storage mailing list, and we'll see you in two weeks. Thank you very much.