From YouTube: Kubernetes SIG Storage 20171214
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 14 December 2017
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.9t204n6zsoe4
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
N/A
All right, recording started: this is the meeting of the Kubernetes Storage SIG. Today is December 14, 2017. As a reminder, this meeting is recorded and published on YouTube; the link is on the agenda. Today we're going to go over the end-of-quarter review for the items that we've been working on this quarter. I was planning on kicking off the planning for next quarter in this session, but I think we should delay that until the beginning of January.
One thing I wanted to discuss was a suggestion by Michael Rubin. He attended QCon last week, and one of the topics he ran into a lot was: how do you onboard new people into the Storage SIG? The folks who are already active participants in this SIG tend to know how to contribute, what to do, and how to participate in planning, things like that.

But for someone who's new to the SIG, it's a little bit overwhelming and not clear how things work, so they just tend to lurk or disappear, and we want to try to improve that process. So one suggestion he had was to have a one-off meeting for SIG Storage where we would discuss some of these topics: things like how to contribute, what the SIG helps with, how to add a volume plugin, where it should go, and what in-tree versus Flex versus CSI versus external provisioner means.
I think it's worthwhile. In terms of a follow-up idea for those who want to do it, maybe we could even add to this some kind of buddy system, where we could solicit volunteers who'd want to link up with these people and answer one-on-one questions they might have after going through that training track. That is already listed there.
Yeah, and I think it would be good to do a storage 101 kind of thing, and then maybe have a Slack channel available, like an ask-us-anything, even for stupid questions, because I think people get intimidated and don't want to contribute because they don't want to ask sometimes. It would just be: how do I do this? How do I get started? Yeah.
And on that list of suggested topics, it's almost like a training track, and it sounds like it would be presentations. Some people prefer, I think, to watch YouTube videos rather than read, so we could perhaps even record it. I think I could volunteer to get some resources for getting a recording made.

Okay. So you know what, let's do this.
For these topics that we have here, put your name next to one if you want to give a presentation on it, and then what I'll do is put together a meeting. It'll be a video call, so it's relatively low cost. We can record it, and people will just present. Folks can tune in live, we'll post it to YouTube afterwards, and then we'll follow up with the mentorship idea.
Yeah, that makes sense, and we could probably bundle a couple of these questions into a single presentation. So I've opened up slots here for presentations. For the veterans in this group, or for folks who are familiar with this SIG and want to help others: throw your name in there and set up a presentation. For now you could put down your name, and later we can sort out who's going to present.
Okay, so anybody who wants to volunteer to help out with this, throw your name on there. I'll send out an email to folks and we'll get something set up. I was thinking maybe the second half of January, but we can sort out the dates, and then we'll send out the details to the SIG Storage mailing list and, of course, we'll mention it on subsequent calls.
I don't know, I mean, on the learning side of things, I would like to be a guinea pig, to learn from what we will be trying to do here on documentation, because again, the goal for me will be to provide vendor plugins, or research how to do it. So I'm a volunteer to be a guinea-pig student.

That's perfect!
All right, so the next topic on the agenda is when the next meeting should be. Officially it's scheduled for December 28th, which is dead in the middle of the holidays, and I imagine a lot of folks are going to be out. I would like to suggest skipping that meeting, giving folks time off, and then reconvening the first week of January, January 4th, and we could kick off planning for Q1 during that week.
So basically, I just want to discuss the CSI timeline, because in the cloud providers working group we're starting to think about writing adapters for volume plugins, and we are really interested to know when CSI will go beta, since we'll write adapters for the cloud providers and maintain them as the API changes. So the first thing we're interested in is the timeline that CSI will follow.
Sure. So for CSI, we hit alpha this quarter. Next steps are to itemize all the requirements that we have to meet to get to beta. The folks working on this project, including Vlad, Luis, Brad, Jan, Chakri, and I, are going to be going through and creating that list. We have a planning session on Friday to do that, and then, based on that, we're going to see how much of it we can bite off for Q1.
Where will all the out-of-tree plugins live? Because on the cloud provider side, I think Jago was about to talk with some folks from the CNCF to see where we will host the cloud controller managers that will be written by cloud providers. So for volume plugins, will the volume code go with the cloud controller managers that will be written under some CNCF umbrella, or do you plan to move all the out-of-tree plugins into a new organization?
We had a lot of discussion about this in the CSI group, and one mistake that we really did not want to make again was to end up with a single repository, owned by this group, where a bunch of volume plugins are checked in. The problem with that approach is that this group, which doesn't have the expertise or the means to test these volume plugins, ends up having to maintain, test, and revise them.
So instead, what we'd like to see is a model emerge where the storage vendors who are creating these volume plugins put them in their own repositories; that way they can independently test, rev, and maintain them. For cloud providers, what this would mean is that the cloud provider should host their own volume plugin in their own repository. If there are volume plugins that are being created by another third party, the third party should decide where to host the volume plugin.
That's currently the plan of action, but where this gets fuzzy is volume plugins that don't have a clear owner or author, things like iSCSI and NFS. For those we haven't settled on a solution yet. Brad Childs was proposing having a group of volume vendors agree on a repository, maintain it independent of Kubernetes, and just put those drivers there. I'm okay with that proposal, but we can have that discussion here and decide as we begin to approach GA.
So yeah, I had a discussion with Walter Fender, and he was entertaining an idea last time we talked: potentially pulling the existing controller manager code off into separate cloud provider controller managers, essentially. Yeah.
Thanks, Saad. So I just wanted to make sure: I've been browsing around, and I noticed a lot of people are updating plugins for block, and I wanted to make sure that we're coordinating that effort and there isn't a lot of duplicate work going on. So maybe we can capture that in the planning spreadsheet: if people are editing in-tree plugins, especially to have them recognize the volume mode, they should record it in the 1.10 planning, so we're not duplicating work.
I ask because I actually work for Rook, and we use a Flex plugin, due to CSI not being at a stage where it's on by default and requiring the user to enable it. So I'd like to hear what the timeline for CSI to reach beta is going to be. I think the most aggressive would be second quarter, is that what you'd say? If that's the case, I think we should be okay waiting for that. But if you say the schedule will slip longer than that, then we'd like to get block into Flex.
I think that's fair, and we can continue to reassess as we go through this coming year; if it turns out that we're not making the level of progress we want on CSI, we can consider introducing changes to Flex. The other thing to realize is that these features, like block volume support, are themselves alpha, and part of the feature being declared GA is going to be having the feature available in something like CSI or Flex. Otherwise I don't think it's fair to declare the feature GA.
So if you're writing a volume driver in the next couple of quarters, you're in this gray area where we're in the process of deprecating the old thing but the new thing is not quite ready yet, which is confusing and a terrible place to be. You have to look at what your deadlines are and how willing you are to have your things work with alpha features.
So if you're okay with alpha-level functionality, I'd say start investing in CSI now. The volume plugin driver framework is already there, and we're going to continue to revise it in this coming year, so by the time it hits GA you'll have a CSI driver that's been well tested and works. But if you need something immediately that's production-ready, meaning within the next one or two quarters, and you're not willing to wait, then I would suggest getting a Flex volume driver out there, and you can think about migrating in the future. It's possible that the Flex volume driver will be sufficient for your needs: if all you need is attaching and mounting, then Flex volume is completely sufficient, and we're going to continue to maintain it. So it just depends on your needs.

Okay, I'll ask some questions later, but you got it.
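To make the Flex option above concrete: a FlexVolume driver is just an executable that the kubelet invokes with a subcommand and JSON options, and that replies with a JSON status on stdout. Below is a minimal, hedged sketch in Python; the `device` option and all messages are invented for illustration, and a real driver would actually format and mount a backing device rather than echo strings.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a FlexVolume driver: the kubelet calls the
executable with a subcommand (init/mount/unmount) plus JSON options,
and the driver prints a JSON result. The option names are hypothetical."""
import json
import sys

def init():
    # Advertise capabilities; attach=False tells the kubelet to skip
    # the attach/detach calls entirely.
    return {"status": "Success", "capabilities": {"attach": False}}

def mount(mount_dir, options):
    # A real driver would mount the backing device here; 'device' is a
    # hypothetical option passed through from the volume spec.
    device = options.get("device", "/dev/example")
    return {"status": "Success", "message": f"mounted {device} at {mount_dir}"}

def unmount(mount_dir):
    return {"status": "Success", "message": f"unmounted {mount_dir}"}

def main(argv):
    cmd = argv[1] if len(argv) > 1 else ""
    if cmd == "init":
        result = init()
    elif cmd == "mount":
        result = mount(argv[2], json.loads(argv[3]))
    elif cmd == "unmount":
        result = unmount(argv[2])
    else:
        # Unknown calls must not fail hard; report "Not supported".
        result = {"status": "Not supported"}
    print(json.dumps(result))
    return result

if __name__ == "__main__":
    main(sys.argv)
```

The "attach: false" capability is the same mechanism the Kata discussion later in this meeting relies on: the kubelet skips the attach phase and the driver decides what to do with the device.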
We have recommendations for how you should deploy it. We provide sidecar containers that will make your life a lot easier: if you have a containerized CSI volume plugin you can use those, but you're free to do your own thing if you want. I would suggest taking a look at the CSI design doc.
Sorry to keep going on this Flex-versus-CSI story, but if we have a short deadline here and we can't use all the features, and one of our requirements is that the plugin be containerized, what can we do? I know we have mount propagation already there, but it's still alpha, right? So is there a timeline for mount propagation to be beta or even GA?
Yeah, I realize that we are in this position in this SIG right now where we don't have good answers for how to extend Kubernetes with new volume plugins: Flex has its sharp edges, CSI is not quite ready yet, and we don't want any more in-tree volume plugins. So it's not a good place to be, but we're going to have to suffer for a couple of quarters and hopefully things will get better; we're going to drive very hard to make that happen.
So we'll have a planning session in this meeting on January 4th, and the CSI team can report back there, and at that point anybody who is interested in helping out with the project can join; we can take on new volunteers as well.

Okay, I'll be there! Thank you.

All right, let's keep moving along. The next topic of discussion is by Harry Zhang.
Yeah, so what I want to talk about is how CSI can support the Kata runtimes, I mean the hypervisor-based container runtimes, in CSI. I once talked with a few folks about this, and now the Kata runtime is becoming kind of a standard for hypervisor-based container runtimes. So yeah, we actually did some work to make
it use persistent volumes in Kubernetes, and you can check the doc I posted on the agenda. What we're using now is a Flex volume, which basically skips the attach phase and instead returns some useful information back to the kubelet, and then we mount the block device directly into the hypervisor-based pod. So this is basically how the Kata container runtimes work in Kubernetes with persistent volumes.
So in production we actually use the Flex volume to mount the block device directly to the hypervisor-based pod, and that also makes it very hard for users of runtimes like CRI-O to use the Kata runtimes, because you have to maintain the original Flex volume. That's how we do it today: we actually have a built-in Flex volume in the CRI layer to make it work.
So we are very interested to see if it is possible in CSI to add some support for this kind of runtime, and we actually have a prototype, which is basically a small change in CSI: adding a flag into the CSI API, so that if you are running Kata runtimes it will not do the attach part; instead it will return a group of key/value information back to the kubelet for the next step.
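The proposed flag can be sketched loosely as follows. Everything here is invented for illustration (the function name, the `hypervisor_based` flag, and the returned keys); the actual proposal would be a change to the CSI spec itself, not Python code.

```python
"""Loose sketch of the prototype idea: when the runtime is
hypervisor-based, the attach step is skipped and key/value publish
information is returned for the kubelet to hand to the runtime.
All names here are hypothetical."""

def controller_publish(volume_id, node_id, hypervisor_based):
    if hypervisor_based:
        # Skip the host-side attach; return device info so the runtime
        # can pass the block device straight into the guest VM.
        return {"attached": False,
                "publish_info": {"volume_id": volume_id,
                                 "device_type": "block"}}
    # Normal path: attach the volume to the node and mount on the host.
    return {"attached": True, "publish_info": {}}

kata = controller_publish("vol-1", "node-a", hypervisor_based=True)
runc = controller_publish("vol-1", "node-a", hypervisor_based=False)
```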
I have a question around that too, actually: if we pass it directly to the VM, then we can't pass in just any block device as a pass-through. It has to be a driver that something like a VM understands today. There's support for some iSCSI, but if you want to support something like NVMe, then you cannot do a device pass-through.
So they all need to use this kind of block device and want it to be mounted directly on the pod. In order to do that, the CSI driver needs to know some information about whether it should do the normal process or not. That's why the change we hope to make is to add a flag to CSI. I don't know if it's possible, or I would like to hear advice from you guys.
So extending the CSI spec is a conversation that we should take to the CSI community, independent of this SIG. I don't have enough context on this particular problem to say anything more about whether this is a good idea or not, but what I'd suggest, and it sounds like the suggestion, is purely updating the CSI spec. Correct? Right.
Right now we are not going in that direction. There's the RBD external provisioner for Ceph users; that exists because the RBD provisioner uses the rbd command, and not every Kubernetes setup packages that command. For people who don't have the rbd command in the controller manager, we provide the external provisioner, but it's not going into the core.
The thing about in-tree volume plugins is that they have an API; they are part of the Kubernetes API. Every single volume plugin adds to the Kubernetes API, and anything that's in the Kubernetes API has a deprecation policy. The deprecation policy is that you can't deprecate anything inside the API until something like Kubernetes 2.0, or, I believe, with a notice of one-plus years. For all intents and purposes, it's not going to suddenly disappear or become deprecated. Okay.
We do plan to migrate these in-tree volume plugins to CSI, but that would happen implicitly and invisibly, by keeping the API as-is so that the existing workloads don't notice it. Instead of having the business logic handled in-tree inside the Kubernetes binary, the Kubernetes binary would deploy a CSI volume plugin and proxy to that CSI volume plugin. That way, as far as the end user is concerned, nothing has changed, but the internal logic fulfilling the request, instead of being the in-tree volume plugin, will be CSI.
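The proxying idea can be sketched as a small adapter pattern. This is a loose illustration only, not the actual Kubernetes migration code: the class names, `set_up`, and `node_publish_volume` are invented stand-ins for the in-tree entry point and the CSI call.

```python
"""Loose sketch of in-tree-to-CSI migration: the old in-tree API
surface stays stable, but calls are forwarded to a CSI-style backend.
All names here are hypothetical."""

class CSIBackend:
    """Stand-in for a CSI driver: NodePublishVolume mounts a volume."""
    def node_publish_volume(self, volume_id, target_path):
        return f"CSI mounted {volume_id} at {target_path}"

class InTreePluginProxy:
    """Keeps the old in-tree SetUp-style signature, proxies to CSI."""
    def __init__(self, backend):
        self.backend = backend

    def set_up(self, volume_id, target_path):
        # Workloads still call the in-tree entry point; the business
        # logic is now fulfilled by the CSI backend instead.
        return self.backend.node_publish_volume(volume_id, target_path)

plugin = InTreePluginProxy(CSIBackend())
result = plugin.set_up("vol-123", "/var/lib/kubelet/pods/x/volumes/y")
```

The point of the design is that the caller-visible signature (`set_up` here) never changes, so existing workloads keep working while the implementation behind it is swapped out.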
So, at least from what I have seen, looking at that with the provisioner, I think it's difficult to unit test these kinds of things which depend on external binaries. Does that change with CSI? I know CSI is an interface, so you could write mocks and all, but is there a plan for RBD-style provisioners to do more unit testing?
Absolutely. First, I completely agree that testing is extremely important, so what we are doing as part of the CSI team is creating different models of testing. First, we have already provided a mock driver for the COs, or the clients, to be able to test against. Second, we are working on a kind of sanity test for drivers to use, to test their driver and make sure it satisfies the CSI spec. That's the first two things.
The third thing is that there's actually an e2e test as part of Kubernetes; once a driver is complete, they can add their own section to it and run through the same pattern to make sure it actually works with Kubernetes. So there will be three models of testing for CSI drivers.
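The mock-driver and sanity-test models can be illustrated with a small sketch. This is not the real CSI test tooling (which is Go-based); the driver interface, method names, and checks below are invented to show the pattern: the same contract checks run against a mock during unit testing and could run against a real driver later.

```python
"""Illustrative sanity-test pattern: run one set of contract checks
against any driver implementation, mock or real. The interface here
(create_volume/delete_volume) is hypothetical."""

class MockDriver:
    """Mock driver modelling basic create/delete volume behaviour."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        if size_gb <= 0:
            raise ValueError("size must be positive")
        self.volumes[name] = size_gb
        return name

    def delete_volume(self, name):
        self.volumes.pop(name, None)

def sanity_check(driver):
    """Contract checks any conforming driver must satisfy."""
    vol = driver.create_volume("sanity-vol", 10)
    assert vol in driver.volumes            # created volume is tracked
    driver.delete_volume(vol)
    assert vol not in driver.volumes        # delete really removes it
    try:
        driver.create_volume("bad", 0)
    except ValueError:
        pass                                # invalid requests rejected
    else:
        raise AssertionError("expected ValueError for zero size")
    return True

ok = sanity_check(MockDriver())
```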
So is this understanding correct? Let's say CSI is implemented, and I'm not very familiar with CSI, so let's say CSI is implemented by RBD and, say, Gluster. Both of them will go and implement a mock driver that closely models their behavior and then use that for unit testing?
There are different types of mocks here. Again, those are specific to CSI, and those are specific to client testing; but you're talking about driver testing, and for driver testing you need a kind of client to execute the API calls to CSI. Those we're working on right now, actually, to be able to do that. Go ahead.

Yeah, could you post the links?