Description
Meeting of the Kubernetes Storage Special Interest Group (SIG) Workgroup for Container Storage Interface (CSI) Implementation, 28 February 2018
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
B: But the idea behind this is that right now the mock driver and the CSI tests are separate, and the proposal, this is just a proposal, is to bring the mock driver, a simplified, very little version of it, into the csi-test repo, so that the sanity tests can run against it and can use this implementation that already exists. Now, the goal is not to replace the mock driver in gocsi. We don't want to do that.
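As a rough illustration of that goal, a sanity run against the vendored-in mock driver could look like the test below. This is a sketch only: the sanity.Test entry point, the Config fields, and the socket paths are assumptions about the csi-test API of the time, so check the repo for the actual signature.

```go
package sanity_test

import (
	"testing"

	"github.com/kubernetes-csi/csi-test/pkg/sanity"
)

// TestMockDriverSanity runs the sanity suite against a mock driver that is
// assumed to already be listening on the socket below.
func TestMockDriverSanity(t *testing.T) {
	config := &sanity.Config{
		Address:    "unix:///tmp/mock-csi.sock", // hypothetical mock endpoint
		TargetPath: "/tmp/csi-mount-target",     // scratch dir for node tests
	}
	sanity.Test(t, config)
}
```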
B: The goal is to have a sanity test for all CSI drivers; that's the whole goal. And to do that, we would propose keeping this in the same repo, so that when we make changes they stay in sync. One goal on that, too, is that when this moves into container-storage-interface, the scenario is that it pulls in this very simplified implementation.
C: There's one more reason I would like to add: with this mock driver we can really clean up everything, every single return code, and really validate, in the end-to-end tests, the compliance of the driver with the spec. So if we have a set of e2e tests that passes with this driver, we can expect the same results when we're testing against, for example, a hostpath plugin or anything else. It's doable with gocsi.
C: Yeah, so one of the reasons I wanted to add it is that we can really clean up this mock driver to be a hundred percent compliant with the CSI spec, and then the tests we have based on this mock driver can be used to test other drivers, like the hostpath plugin, or people can use these tests to validate their own implementation of a driver. That gives them a sort of comfort that their driver implementation follows the CSI spec.
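For instance, the kind of per-return-code check being described might look like the sketch below. The socket path is made up and the CSI 0.2 Go bindings import path is assumed; the spec's required-field rule is what makes CreateVolume with an empty name return InvalidArgument.

```go
package compliance_test

import (
	"context"
	"testing"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	csi "github.com/container-storage-interface/spec/lib/go/csi/v0"
)

// TestCreateVolumeEmptyName checks one spec rule: a CreateVolume call that
// omits the required name must fail with InvalidArgument. The same assertion
// should then hold for any driver, e.g. the hostpath plugin.
func TestCreateVolumeEmptyName(t *testing.T) {
	conn, err := grpc.Dial("unix:///tmp/csi.sock", grpc.WithInsecure())
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	_, err = csi.NewControllerClient(conn).CreateVolume(
		context.Background(),
		&csi.CreateVolumeRequest{Name: ""}, // invalid on purpose
	)
	if status.Code(err) != codes.InvalidArgument {
		t.Fatalf("expected InvalidArgument, got: %v", err)
	}
}
```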
D: I guess if we're moving this mock part into this repo, that basically stops us from contributing back upstream to the original repo, right? Whatever changes we make here would, in theory, be beneficial to the upstream repo. I don't see why this would need to diverge, basically.
E: He seems to think that he's going to continue to work on it, but again, you know, gocsi is no longer owned by, it's not a Dell thing, it's open. So it's open to submitting PRs if there's a roadblock, and my take is: let's treat it as we would treat any other open source dependency, and push PRs to it up until the point where there's no response. That would be my suggestion.
B: I'm just voicing this now: I think gocsi right now is way over-engineered. That's one of my huge concerns; it provides a lot of answers for a few questions. So I just think that sometimes we need to first find the questions and then say, this is how you answer them. That's my only concern, essentially: it provides an endless amount of possibilities, and I just want to keep it clear, concise, and simple. That's just my style, just my opinion.
A: ...of the project, I think the most important thing right now is that we can move quickly. I understand that the concern both Sergey and Lewis raised is that gocsi does a lot of things and tends to be a little bit opinionated, and they want, for example, just a mock CSI driver without any of the extras, and that's what this PR apparently is trying to do. So, how about...
A: Okay, sounds good. What I also want to do is make sure we don't get blocked by this for the rest of our end-to-end testing. If you guys think you can move faster by getting this merged, then coming back to do refactoring later and consolidating with gocsi, I would be OK with that. Right now I would prefer speed over sitting around making sure we have everything in the right place; we can consolidate later. So it's your call. Let me know if you're blocked.
A: There was a bug that Sergey raised last time in the external provisioner. Apparently, the way the external provisioner currently functions, it creates an identifier, an ID that is unique per run, and if it creates some volumes and then the plugin restarts, its ID is different and those original volumes get orphaned.
C: There were a couple of discussions I had with Jan, and basically he posed a very valid question: why would we need it in the first place? And basically we couldn't come up with any particular reason, because the ownership of the volumes is determined by the name of the provisioner mentioned in the storage class, right?
C: So as long as this provisioner belongs to, or is associated with, that storage class, the ID shouldn't matter at all. So I'm proposing to completely remove the ID check; in that case, basically, whenever an older provisioner instance created volumes, they should be able to be deleted by the new provisioner, as long as they belong to the same storage class. Yep.
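A minimal sketch of the proposed check, assuming the standard provisioned-by annotation that Kubernetes sets on dynamically provisioned PVs; the function name and surrounding wiring are hypothetical:

```go
package provisioner

import v1 "k8s.io/api/core/v1"

// annProvisionedBy is the annotation set on dynamically provisioned PVs.
const annProvisionedBy = "pv.kubernetes.io/provisioned-by"

// shouldDelete is a hypothetical helper: ownership is decided purely by the
// provisioner name (the one referenced by the StorageClass), which is stable
// across restarts, instead of a per-process ID that changes on every run.
func shouldDelete(pv *v1.PersistentVolume, provisionerName string) bool {
	return pv.Annotations[annProvisionedBy] == provisionerName
}
```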
C: It's already implemented; I need reviews, and basically that's pretty much it. There is a list of PRs a bit below for each external component that I submitted.
A: Okay, Sergey, could you update the doc and just post the PRs underneath the bugs area? It might be helpful. Yep, okay. So, code freeze slipped from Monday to Tuesday; I think we got most of what we wanted in. We can go over the tasks individually, but before that, a proposal: move real drivers from the drivers repo to their own repos.
B: Anyone? Okay, so I'm just bringing this up; I don't want to take too much time on it. Just have people think about it, and we'll discuss it again another time. It just seems that the drivers repo was supposed to be a sample repo, it was supposed to be for samples, and it's actually becoming more realistic. So what I'm trying to say is: take the real drivers out into their own repos, and then rename the drivers repo, or reduce it to simple directories for whatever samples remain, like hostpath.
A: We could do that. Okay, so one of the big remaining items in the Kubernetes core is moving from 0.1 to 0.2. A lot of work went in here to get these changes in before code freeze. The only thing that's remaining is that once 0.2 is tagged, we need to update our dependency to point at the 0.2 tag instead of at head, which is the current state. The CSI 0.2 release candidate was cut on Monday afternoon, and we will give it two hours before we cut the official 0.2.
A: The other two items are green. End-to-end tests are something we should start focusing on now; I think for the next few weeks this is all we should be doing, just making sure we have added as many tests as possible. This means adding new tests as well as making sure that the existing tests we have are enabled and running. Lewis, can you be the lead on this and recruit people to help?
B: That's fine. The only thing I'd note is that we don't really test other drivers, right? The only driver that we can actually test with is the hostpath driver, and that's a single-node driver, so our tests are very limited. They just test to make sure that we do things, and the e2e tests that we have right now do all of that. So we don't really have a suite of tests that test negatively, or force things to fail and verify that we handle it, right?
A: One area where I would say: yeah, if you want to add an AWS driver or any other driver, please, let's do it. I would rather us test those things and catch failures than not, because the fact is that not all storage systems are going to be able to have end-to-end tests running in Kubernetes. You know, if you're running Fibre Channel or something, we just can't.
C: Right, so these are okay: the provisioner needs review; the driver registrar is already merged; the external attacher is merged; and, well, hostpath, I mean, it's working, and the common part is done. But I'm having a hard time with the dependency thing: for some reason, when it's tested again in Travis, the dependency check fails, even though the end-to-end test completes. So I'll probably reach out to some more experienced folks about the dependency for some guidance on what to do with this. Okay.
A: Volume attributes: basically, all the changes that we need for 0.2 are kind of bundled under this one issue, now our one task item. Sergey is creating a bunch of PRs for it. If you're responsible for one of these components, please look at the PRs that he sent out. And, let's see, the identity check on deletion, is that a separate PR, Sergey?
B: So what we need is to create a project there and have people come in and start adding unit tests. Okay, and the unit tests can then, just like the driver registrar and the external attacher do, use the csi-test mock driver, which is the one meant just for unit tests, the automatically generated one. I don't have time to do that, but this is an area where I have already highlighted the areas that need to be tested.
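To illustrate the shape of such a unit test (this is not the actual csi-test mock API, just a hedged, self-contained stand-in): start an in-process gRPC server with a stubbed CSI Identity service and point the component under test at its socket.

```go
package unittest

import (
	"context"
	"net"
	"testing"

	"google.golang.org/grpc"

	csi "github.com/container-storage-interface/spec/lib/go/csi/v0"
)

// stubIdentity is a hand-written stand-in for a mock driver's Identity
// service; every method returns a canned, healthy answer.
type stubIdentity struct{}

func (stubIdentity) GetPluginInfo(ctx context.Context, _ *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{Name: "stub.csi.example.com", VendorVersion: "0.2.0"}, nil
}

func (stubIdentity) GetPluginCapabilities(ctx context.Context, _ *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (stubIdentity) Probe(ctx context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

// startStubDriver serves the stub on a throwaway socket so a sidecar's
// client code can be exercised against it in a plain `go test`.
func startStubDriver(t *testing.T) string {
	socket := t.TempDir() + "/csi.sock"
	lis, err := net.Listen("unix", socket)
	if err != nil {
		t.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, stubIdentity{})
	go srv.Serve(lis)
	t.Cleanup(srv.Stop)
	return "unix://" + socket
}
```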
A: I don't think this is as critical right now, so, Philippe, if you're still interested in working on this, it's your call. It would require modifying some of the existing drivers, the drivers that exist, modifying their deployment scripts to add a liveness probe, and the challenge here is going to be figuring out how to actually expose just one RPC to Kubernetes, right, to the Kubernetes liveness probe system.
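One plausible shape for that bridge (a sketch under assumptions: the socket path, the CSI 0.2 Go bindings import path, and Probe living in the Identity service) is a tiny exec binary that kubelet invokes as the liveness probe:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	csi "github.com/container-storage-interface/spec/lib/go/csi/v0"
)

// This binary is meant to be run by kubelet as an exec liveness probe: it
// makes the single Probe RPC against the driver's socket and exits non-zero
// if the driver does not answer, so kubelet restarts the container.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///csi/csi.sock", // deployment-specific path
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("cannot connect to CSI driver: %v", err)
	}
	defer conn.Close()

	if _, err := csi.NewIdentityClient(conn).Probe(ctx, &csi.ProbeRequest{}); err != nil {
		log.Fatalf("driver unhealthy: %v", err)
	}
}
```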
A: I think by the end of this month would be nice, or rather end of March, sorry. And if we don't get it in, it's not the end of the world, because the plugins will still work. The only purpose of this is that if a plugin gets into an unhealthy state, the Kubernetes liveness probe will automatically restart the container when the probe call is not responding. It's just a nice-to-have, I mean.
A: Okay, so we'll drop that. For this quarter we have docs to write for new drivers; that'll be a to-do for now, I suppose. Also, revisit the CSI docs on kubernetes.io and make them user-facing; I think we should just handle all the doc stuff at once, together. And that's pretty much it. So right now I'm tracking three big issues: one is making all the external components 0.2-compatible.
A: So we have until the end of March to make sure that we get all of this stuff done to align with the Kubernetes release, and we will declare beta, with the Kubernetes release, for the Kubernetes implementation of CSI. Then folks who are implementing these volume plugins can implement a 0.2-compatible plugin and know that it's going to work with Kubernetes. And that's it. Then for next quarter we're going to have to decide whether we want to go GA, or wait a quarter and leave it in beta...
A: ...and then go GA the subsequent quarter. I think that'll be based on the amount of work that we have queued up for next quarter. So one of the things that I'll ask of you for the next month is to come up with a list of items that you think we need to do before GA; then we can consolidate that list and come up with something concrete for the next quarter.
A: So that's a good point, and the plan is that we should be in alignment: essentially, right around when the Kubernetes support goes GA is when CSI should also go 1.0. And based on the discussions around the 0.3 release of CSI, it sounds like we want to do that sooner rather than later, because having the spec constantly change in breaking ways is very difficult for folks to adopt. And so post-0.2...