From YouTube: Kubernetes SIG Storage 20170316
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 16 March 2017
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.4wqz8vvnk99f
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:04:43 From erinboyd : Please mute if you aren’t speaking
09:40:31 From hayley : can I get a link for the CSI doc?
09:40:53 From Jan Šafránek : https://docs.google.com/document/d/1JMNVNP-ZHz8cGlnqckOnpJmHF-DNY7IYP-Di7iuVhQI/edit#
09:41:46 From hayley : awesome! thank you!
09:47:02 From stephenwatt : DockerCon is April 17-20
Transcript:

A: Alright, so let's get started. This is the bi-weekly meeting of the Kubernetes Storage Special Interest Group. Today is March 16, and as a reminder, this meeting is public and recorded. Let's get started. First on the agenda is status updates; I'm going to go ahead and open up the planning doc and we can review each item.
E: So basically, the issue that I'm dealing with is that we are running a Kubernetes cluster on DigitalOcean, Linode, those kinds of simple VPS providers, and most of them only have a single primary partition with no option to add any secondary disk. So today, if you want to install something like a database server, Postgres or MySQL, as a StatefulSet...
E: The problem is that you cannot really use dynamic provisioning there, and the only option you have is something like hostPath. The issue with hostPath is that there is no support for dynamic provisioning, and I actually think the scheduler doesn't have enough information to ensure that, if the pod restarts on another node, it goes back to the same node where the hostPath data lives. My point is: what do people do on these clouds, or what options do we have?
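[Editor's note] For context, a minimal sketch of the kind of StatefulSet being described here; all names and sizes are illustrative. Its volumeClaimTemplates expect a dynamic provisioner or pre-created PersistentVolumes to satisfy each claim, which is exactly what a single-partition VPS cannot offer:

```yaml
apiVersion: apps/v1beta1        # StatefulSet API group in the 1.6 timeframe
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data                # each replica gets a claim named data-mysql-<n>
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi         # nothing on these providers can provision this
```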
A: HostPath was never intended to be used for production systems; it's more for testing than anything else. It basically breaks the Kubernetes paradigm: if you start writing things to a local disk, Kubernetes doesn't do anything to ensure that your pod, when it gets rescheduled, comes back to the same machine. This problem space is what we're calling local storage in general, and there could be persistent local storage or ephemeral local storage. Michelle from our side has been driving an effort to make local storage work.
F: Hi, yeah. I haven't taken a look at your proposal, so I'm not sure exactly what it's doing, but Vish and Kamal have been discussing this issue on our PR, so we can continue the discussion there if we want. Basically, Vish is suggesting that, as part of our proposal, we add a new local PD object that we...
E: I mean, the way we are looking at this is: if people are using something like DigitalOcean or providers like that, then without Kubernetes they can just go and install, say, MySQL on a VM and not worry about their application. But the moment they want to do the same thing with Kubernetes, it's just not possible.
B: It is possible; I just want to frame it a bit. So what you're saying is there's a limitation in the persistent volume framework? Okay, there is, but there's no limitation on the volume plugin front. So basically, to summarize: you can only claim network disks or network-shared storage; you can't claim local disks. That's Michelle's proposal, to be able to claim local disks that are advertised as persistent volumes.
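[Editor's note] As a rough sketch of what "claiming" looks like today, with all names illustrative: you can hand-create a hostPath-backed PersistentVolume and bind a claim to it, but nothing in these objects records which node the data actually lives on, which is the gap the local storage proposal is meant to close:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1          # admin-created; data actually sits on one node
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/disks/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim              # binds to the PV, but carries no node affinity
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```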
B: What you can do as a workaround today, and it is a little bit awkward because it makes your Kubernetes nodes pets rather than cattle, but what we do when running storage platforms on Kubernetes and things like that is very similar: you give your pod a node selector, and you label the node that has the host storage.
B: And then you give your pod that node selector, and it always lands on the server that has the disk. So for a single-instance relational database like MySQL, you set that up, maybe as a replica set of one, so it's always running, and it uses the node selector to always land on a particular host. For distributed systems like...
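[Editor's note] A minimal sketch of that workaround, with the label and paths purely illustrative: label the node that holds the data, for example with `kubectl label nodes node-1 disk=mysql-data`, then pin a single-replica workload to it with a nodeSelector plus a hostPath volume:

```yaml
apiVersion: extensions/v1beta1   # Deployment API group in the 1.6 timeframe
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1                    # single instance, restarted in place
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        disk: mysql-data         # matches the label applied to the node
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        hostPath:
          path: /var/lib/mysql-data   # local directory on the labeled node
```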
E: Right, so we've actually tried all of that, and we kind of want to move away from it, because it makes it very difficult to do something that works on, say, Google Cloud but then doesn't work somewhere else. Well, I guess it's not really a proposal from my side, more that we just wanted to bring it up.
I: If it's not, then I would ask her to make the documents publicly accessible, and I would also like to suggest, for people who are interested in the feature and in the proposal, that we meet more often, more frequently, once or twice a week. Send out a Doodle poll and we could discuss and make sure that the proposal is really moving towards merging. That's all; I was hoping she would be on the call, but...
A: Moving on, we have PRs to discuss that need attention. Jan added storage classes v1 in 1.6. Jan, it looks like this was a change that you got in, but it was reverted because of some GKE issues, and now you'd like to push it in again?

Jan: Exactly, and I think it's a big change, so we might get pushback from the release folks.
F: I have one possible suggestion: would it be possible just to have the tests call the v1 APIs? I think if we can argue that this change is only changing tests and improving test coverage, then they might accept that. Mm-hmm.
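[Editor's note] For reference, a minimal example of the object being promoted; in 1.6 StorageClass moves from storage.k8s.io/v1beta1 to storage.k8s.io/v1. The provisioner and parameters shown are illustrative (GCE PD as an example):

```yaml
apiVersion: storage.k8s.io/v1    # promoted from storage.k8s.io/v1beta1 in 1.6
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```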
L: So there has been work on a PR that does state recovery on restart of the attach/detach controller. Right now, if it restarts, it loses that state, which leads to awkward problems. We were trying to get this in for 1.6, but it was too big of a PR with too many issues to actually get merged in time. We've talked about doing this for a 1.6.x patch release instead. Is this something we want to consider, and if so, what are the next steps to do that?
A: I remember taking a look at this PR. I think it's definitely something that we want; it's too late for 1.6.0, and it's definitely not a bug fix, or at least it's too large to be considered a bug fix, so let's target it for 1.6.1. Jing and Michelle have started taking a look at the PR; anybody else who's interested in reviewing it should take a look as well, and we'll target 1.6.1 for it.
A: After 1.6.0 is cut, the point releases, 1.6.1, 1.6.2 and so on, or I guess they're called patch releases, are cut on no fixed schedule; in general it's anywhere from a two to four week cadence. So it's kind of a rolling train: grab whichever one works best for you. We're going to try for 1.6.1; if we miss that, we can always do 1.6.2, no big deal.
A: Right now master is in code freeze because of 1.6.0. The freeze is going to be lifted at some point between now and Wednesday, when the release is supposed to happen. Once it's lifted, you absolutely should be able to get this merged to master once it's code reviewed, and then you can cherry-pick it to the 1.6 branch after 1.6.0 is cut. Yeah.
A: All right, I'm going to switch up a couple of these items. Next up I would like to talk about the Container Storage Interface. I posted a link to a preliminary doc in the agenda. The idea here is that a few weeks ago, a bunch of folks from Kubernetes, Docker, Mesos, and Cloud Foundry got together to try to define what we all think a common, standard container storage interface would look like.
A: The idea behind this, if you look at our number one goal, is that a storage plug-in author should be able to write one plug-in, preferably a container, that just works across all container orchestration systems. That's a pretty grand goal, but one of the major pieces of feedback we've gotten from a lot of storage vendors is that they have to go in and write custom plugins for every one of these systems; sometimes it's in-tree...
A: ...sometimes it's out-of-tree, and it's a pain, and really they should be able to do the work once and have it work everywhere. So to try to achieve this vision, we first got all the cluster orchestration systems together to ask whether this is something we want to do, and folks decided that this is something everybody wants to do; it's beneficial for storage vendors, for cluster orchestrators, and for users. Then it's a question of how we're going to make that happen, and that's this design.
A: So the idea is to try to encompass everything that a cluster orchestration system would need from a storage provider. That does include dynamically provisioning and deprovisioning volumes, it does include attach and detach, and it does include mount and unmount, which we're renaming to publish and unpublish, because mount/unmount is very overloaded and those steps could encompass a lot more than just mounting and unmounting.
N: I'll note that just in the last couple of days Mesos published their own version of a proposal as well; it should be essentially the same doc, since we've been working together. Okay, maybe it isn't exactly the same, but maybe it's at a different level, because they're even talking about how it ties into some of the higher-level functions.
N: What I want to do is kind of an audit of whether there are any differences or gaps, and even compare it to, say, ScaleIO and EMC's libStorage and what's going on there. But yeah, I'd like to have a few days to look it over, and then I definitely want to cover it in detail at the face-to-face.
A: It's an open question at this point; we've been thinking about whether to enforce gRPC as the standard interface that the containers must expose. The idea here is, on the one hand, if you want this ideal where a storage provider just gives you a single container and it just works, then the interface it exposes, whether gRPC or a REST API, should be consistent, so that you don't have to have four or five different flavors in the same container.
A: At this point, we want the interface to cover everything that we currently have, but we definitely need to consider expanding the API in the future, and the exact details of how to do API versioning haven't been defined yet. But the idea would be that this is a versioned API and that you could expand it in the future to encompass new functionality like replication and snapshots and things like that. Okay.
E: A quick question: just looking at the publish and unpublish operations that happen on the node, it sounds like we're considering a RESTful service or something like that. So how do we deal with operations that have to run on the node and cannot be centralized? Yeah.
A: Vendors obviously all have a vested interest in promoting their own solution, and what we wanted to do first was not have those voices involved and focus just on what would be best for end users and what the cluster orchestration systems agree on. Once we have that, the idea is to open it up to storage vendors, because obviously storage vendors have a lot more experience in this space, and let them help shape what this is going to look like, and we're getting to that point now.
M: Absolutely, and thanks for clarifying that. In fact, I don't want to sound too optimistic here, but more than trying to push the perfect solution, we want to be able to work with the orchestrators to get information that we generally can't get, like being application-aware and getting hints about placement and all that. So I'd rather not see this degrade into a lowest-common-denominator create, describe, attach, take-snapshots kind of interface; then it just gets boring.
A: Yeah, this is an incomplete list. It's the existing volume plugins that exist today, and we want to make sure that they continue to work through this interface.
B: One other thing, just to add: it's a fairly common pattern in a large open source community, just for expediency, for a small group, like three or four people who are cooking up an idea for a proposal, to get together and hash it out amongst themselves until they've got something they think is worthwhile, and then open it up for public discussion. So I would say this is still in proposal.
N: Just for the folks who might have been offended at being left out: I'm with Dell EMC, and I also only saw this document a couple of minutes ago, so I'm not complaining. The ScaleIO and libStorage work that was mentioned, I didn't participate in this either. So I think it's still a level playing field here, from what I can see. Absolutely.
A: So, as you know, flex is currently being iterated on for 1.6 to include the attach/detach interface that we have internally for Kubernetes. That's going to be a breaking change for existing flex drivers; they're going to need to be updated for 1.6. Moving forward for out-of-tree, what we're thinking is that this is going to be our direction. Once the interface is agreed upon, it's up to the cluster orchestrators to decide how they're going to implement it.
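[Editor's note] For anyone unfamiliar with flex, a minimal sketch of how a pod consumes a FlexVolume driver today; the driver name and options are illustrative. The driver itself is a vendor-supplied executable installed on each node, and with the 1.6 changes it also needs to implement the new attach and detach calls:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flex-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: example.com/lvm    # illustrative vendor/driver name
      fsType: ext4
      options:
        volumeID: vol-0001       # passed through to the driver executable
```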
A: So for us in Kubernetes, we're going to need to decide whether this is going to be an iteration of flex, whether it's going to be a new plugin, whether we should call it flex 2 or something else, or whether it's something that we're going to bake into a lower layer of the storage stack, where it's not a volume plugin but something that the Kubernetes storage stack natively understands. That is an open discussion; we haven't gotten to the point of the Kubernetes implementation.
Q: Is there a release targeted for this?

A: It's going to be ready when it's ready. I think all the cluster orchestration systems have different release schedules and cadences, so we're just going to try and see if we can converge on this as soon as possible, to something that we agree on, and get storage vendor input once we have something.
A: Within the next couple of weeks would be great, ideally within the next week. But I think we're going to open it up to a wider audience, or start incorporating storage vendors in meetings, probably within two to four weeks, and at that point that will probably be a better avenue to get your voice heard if you're interested. Hopefully that's helpful for you, Brad, and you can start commenting on the doc immediately. Okay.
A: Next item is KubeCon EU; it's coming up in two weeks. There is a list in the agenda of who's attending. It looks like from the Google side it'll be me and Michael Rubin; Jan is going from Red Hat; Sarge will also be there. If anybody else is attending, please let us know and we'll add you to the agenda doc.
N: It's in a roundtable format appropriate for discussion that will match our group size. We've also tentatively allowed for it to extend into a second day. There's a spot on the document in the link to put proposed agenda items, and we can call it as we see it. I personally am traveling in, and since I thought we'd have a dinner that evening, I'll be staying overnight anyway. So maybe we can resolve within the next week whether we think we need a second day.
N: The only other thing I'd say is the out-of-tree discussion is probably going to be more than just listening anyway, and this would almost be an intro. It's also kind of early, right? I mean, if this is just being opened up to storage providers around then, it would even make sense to build in an introductory talk about it.
N: Well, I think with the timing, given that we're discussing the CSI draft we just saw, and somebody pointed out that at this stage it's still being discussed amongst the orchestrators and may even change a bit, this face-to-face is going to land pretty much aligned with the first exposure of this to the storage providers. So that being said, I think it'll give us a chance to get an intro presentation on what's there and what the thought process was, rather than a really thorough review.
N: If we have flexibility during that week, I could probably move it within a week, but I think, Brad, if I remember correctly, you were going on vacation kind of that whole slot anyway. Yeah, and then once it gets pushed to the next window, at least for us, as I said, DockerCon comes up in mid-April, and I suspect a lot of the group's schedules get complicated.
G: I think I'd feel more comfortable with this if, and I guess, yeah, maybe we can do it that way: we have an intro to the out-of-tree provisioning stuff, Chakri can talk about flex, and then I guess I can present Saad's work on CSI, and then we make sure that we schedule a follow-up shortly after DockerCon to talk about it in depth.