Description
Meeting of the Kubernetes Storage Special Interest Group (SIG) Workgroup for Container Storage Interface (CSI) Implementation - 25 April 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-WmRYvqw1FREcD1jmZAOjC0jX6Gop8FMOzsdqXHFoT4/edit#heading=h.64ib7ng9sj3p
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
B
Yeah, I mean, it's a lot of things, because we've had so much time between meetings. The first one is documentation: how to backport commits. Sergei had some questions about how to do that, and we had some more documentation that we needed on how we do release management. So Jan started that document, and it was pretty good, so I only added the second part, which is how we do the releases.
B
No, see, that's exactly what I'm trying to prevent! You don't need that. It's actually simpler to just go right to the gRPC stuff, and that caused a lot of confusion for people who write drivers, who think that it's required, and it's not. So what the CSI community says is: you can use it if you want to, but you don't have to. I actually find it much easier to go right to the gRPC interface.
A
I mean, kubernetes-csi is a Kubernetes repo, so I'm perfectly happy leaving it under the kubernetes-csi org. I like this idea that Luis has of having separate repos inside this organization for multiple drivers, so driver-host-path, yeah, that makes sense under kubernetes-csi. The flex adapter driver, I would be happy with having that here as well. That's a great direction.
A
There are a number of packages that will be packaged and shipped here, right: external-attacher, external-provisioner. In terms of drivers, I think having Kubernetes-specific drivers like host path, flex, and potentially others in the future under kubernetes-csi makes perfect sense to me. It's the ones where the ownership is not clearly attributed to Kubernetes that become problematic, so iSCSI, NFS, those we should find a better home for. Cinder, of course, has already found a good home, so we're happy there.
F
I mean, that's not really what the steering committee is directing. It sounds like they want stuff in kubernetes-sigs, it seems, and then they want anything that quote-unquote ships with Kubernetes in the kubernetes tree. They didn't have direction for everything else, yeah.
A
I mean, if they want to do that, we can consider doing that. I don't know if that's a good idea. I like having a separate organization for the kubernetes-csi stuff, because it's a lot easier, considering how many components we already have. If we were to just shove these into a broader Kubernetes storage SIG repo, they would be more likely to just get lost in there.
A
I had that discussion late last year. That's where they came back and basically came in and did all the things that make this a legitimate repository, things like including the code of conduct and having the CNCF CLA bot running on here. So as far as they're concerned, the immediate legal ramifications are addressed. But I think what Brad is getting at is not just the legal stuff but more about where the CNCF kind of wants to see these projects incubated. They've sent guidance to say that there is.
A
Yeah, but completely fair points, Brad. I say let's leave it where it is for now, and if we get a lot of pushback, then we can address it.
D
The reason why, since I've been out for a couple of weeks: are we taking out csi-common because no one is supporting it, or because it's not working? This is Brad.
B
Okay, so look, after writing three drivers or so: csi-common was supposed to be a common area for the sample drivers to check their incoming gRPC requests. So just the if statements: if you don't have the volume ID, I'm going to return this; if you don't have this one, and so on. What happened over time was that it was supposed to be a simple thing, but it kind of grew from that into more, because it was being used by multiple drivers.
B
You need to register, you need to release things, but that mechanism is really not necessary when you write a single driver, because in a single driver you're just implementing your one single Go function, you put your if statements in it, and then you're done. So what I'm trying to say is that csi-common made sense when we were writing a lot of sample drivers, right? But now we're saying, look, everybody has, you know, we have drivers that are more realistic.
B
One thing is, in both sig-storage and, you know, on Slack, I have "CSI" pinging me every few seconds, so every time anyone types CSI anywhere, it pings me. And the question all the time is, you know, "Do I need to use csi-common, and why do I need to use it?" And I just keep saying: you don't need to use it, just go right to the interface. And just like Jing said, she thought it was necessary, right?
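The approach described here, a single driver implementing the CSI gRPC interface directly and validating incoming requests with plain if statements rather than going through csi-common, can be sketched roughly as below. This is an illustrative simplification: the struct and field names are stand-ins, not the real generated CSI message types from the spec's Go bindings.

```go
package main

import (
	"errors"
	"fmt"
)

// CreateVolumeRequest is a simplified stand-in for the CSI CreateVolume RPC
// request message (the real type comes from the spec's generated Go code).
type CreateVolumeRequest struct {
	Name               string
	VolumeCapabilities []string
}

// validateCreateVolume shows the kind of per-RPC argument checking that
// csi-common centralized for the sample drivers: plain if statements that
// reject malformed requests before the handler does any real work.
func validateCreateVolume(req *CreateVolumeRequest) error {
	if req.Name == "" {
		return errors.New("InvalidArgument: volume name missing in request")
	}
	if len(req.VolumeCapabilities) == 0 {
		return errors.New("InvalidArgument: volume capabilities missing in request")
	}
	return nil
}

func main() {
	bad := &CreateVolumeRequest{}
	fmt.Println(validateCreateVolume(bad)) // InvalidArgument: volume name missing in request

	good := &CreateVolumeRequest{Name: "vol-1", VolumeCapabilities: []string{"mount"}}
	fmt.Println(validateCreateVolume(good)) // <nil>
}
```

In a real single driver, these checks would sit at the top of the handler method that implements the generated gRPC service interface, which is the "just put your if statements in your one Go function" point being made above.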
A
Okay, let's go through PRs that need attention. I just...
A
Well, it's not okay! Let me... it needs... I can't.
A
The only thing: I think I'm going to echo Jing's concern about breaking things. As much as we're still pre-1.0, let's make sure that we don't break stuff. But yes, ultimately csi-sanity should have its own repo as well, I agree, because it should have separate, independent releases.
A
So go ahead, mark that green. Well, the CSI end-to-end test is occasionally flaky, but thankfully Luis has updated 0.2.1 with new bug fixes and pointed the end-to-end test at that tag. Hopefully that gets rid of the flakes. We'll keep this around and take a look at it next time. Jan had an issue last time about attacher timeouts.
A
Talking to David about what the set of volume plugins is that we should migrate for the initial set: it should be the cloud provider volumes, because there's a big drive to push cloud provider code out of tree, and that's kind of the most immediate need. So anything that has any cloud provider dependency should be our first priority to push out. Second priority would be any type of remote volume plugin. This could be Cinder, iSCSI, NFS, Portworx, whatever. And then, after that, we can consider ephemeral volumes and local storage. I think ephemeral volumes are probably the set that should remain in-tree, mostly because of the performance benefits, and, you know, things like: every single pod has a secret volume, so making that a CSI driver is probably not the best idea. And considering it's Kubernetes-specific, I would be perfectly okay leaving that in-tree, host path as well. So the Kubernetes-specific volumes, I'm okay leaving in-tree. Local storage, kind of, I can see it both ways, but I'm willing to punt that and, you know, kick the can down the road and figure that out later. It's not a priority.
A
It depends on what the timeline for snapshots is going to be. It would be worth talking to David and Jan and figuring out what the timeline for this process is going to be. If it turns out that it'll be at least maybe three or four quarters before this happens, then it might be worth getting snapshots updated, actually.
A
All right, next up is: how do credentials get passed on to subsequent calls? Status: this got merged by Jordan Liggitt. I think it got merged; thank you to Jordan for all the hard work on this. The only thing remaining is updating the docs. Jordan was asking where the docs are; I pointed him to the kubernetes-csi docs and the dynamic provisioning page.
C
What I was going to say: right now, a big problem, there was a GitHub issue. Some folks couldn't view the PR anymore; they just see this big unicorn. So James was saying he could not view it, and he was suggesting to use something like Reviewable. Oh, but I thought somebody was talking about that.
A
I don't think we have Reviewable enabled on the CSI repos; I think it's only enabled on kubernetes/kubernetes.
A
So, to catch everybody up on the issue: Jing has a very nice large PR out for snapshots, adding snapshots to CSI, in the container-storage-interface organization under the spec repo. The problem is it's so large that GitHub is dying opening it up, especially if you are a maintainer of the container-storage-interface project; you tend to hit it much more. So the folks who are actually responsible for merging this thing basically can't see it, and that's been a problem. Reviewable might be a way to get around that.
A
All right, cool, thanks for helping drive that; hopefully that gets resolved soon. Dynamic maximum volume count: I'm not sure if he's on the call, but he's working very actively to get this into the CSI spec. We just had an hour-long discussion about the PR, and it looks like he's made some good progress there.
A
Hopefully that'll get resolved soon as well. Move csi-sanity into the spec repo: this is what Luis was mentioning earlier. I think we're okay with that, so we have consensus. The rest of these are P3s, and then we have a couple of P2s, or P1s, to find an owner for: the flex volume and the iSCSI drivers.
A
It's just one of those nice-to-have things where we had it, it was working with CSI 0.1, and then it just kind of never got updated to 0.2. So it's fallen out of compatibility, and, yeah, it would be nice to have this working and have someone on it, so that folks who are using flex can have an easy way to play around with CSI as well.