From YouTube: Kubernetes SIG Storage 20180913
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 13 September 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.1km578fc4sqm
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
None
A: Welcome, everyone. Today is September 13, 2018. This is the meeting of the Storage Special Interest Group. Today on the agenda, we're going to go over the status of items we've been working on for 1.12. Code freeze was last week, so we'll go over and see what made it and what didn't make it, and then we can discuss any PRs that need attention, and any status updates.
D: Yeah, that effort is underway. It's at an early stage right now; we're looking to have something this coming quarter.
A: I think this actually might be important for the effort to move the cloud provider code out of tree as well, because a lot of the cloud providers depend on this code. So if it's abstracted away into a separate repository, they can consume it as well. So, sounds good. Next up is the kubelet device plugin registration mechanism that Michelle was talking about. That also got promoted to beta this quarter, which was our goal, so thank you to Vlad for pushing that through.
A: On preparing CSI for GA in Q4: that was the goal. I think of the eight items that we were tracking, seven of them got done, so we're looking pretty good for GA in Q4, and we're going to push for GA in Q4. We're going to try to get the CSI spec itself to 1.0 by the Thanksgiving time frame, so that Kubernetes can pick it up in the 1.13 release.
J: A GitHub repo with an initial PR has been up for about a month now. We've got some feedback, but I'm looking for some people to look at it. And then also, the question of whether we wanted to combine all the connectors or not came up in one of the meetings, and we should probably decide what we want to do there. Oh yeah.
J: So currently, the way that's set up, the idea was that you could have the Fibre Channel connector and an iSCSI connector both in that same package, and just individually import the ones that you wanted, and actually both of them have implementations. Now, there was some discussion in one of the meetings that some folks didn't like that idea, and I think one person in particular was concerned that that wasn't a good way to go.
K: That's my memory, in any case. I have started looking at the code, but I would like to actually integrate it into a working driver and try it out, you know, before I give it the thumbs up, and I have not done that yet. But, oh...
E: I guess the other thing is, in terms of packaging: if you want one and not the other, is it still possible to pull in only one? Yeah.
J: The way that's set up right now is there's a package directory, or actually there's not one package, because it's all packages, but there's an iSCSI directory and there's a Fibre Channel directory, and you just import the one that you need and use it. That's the model, so you don't have to import the entire library; you just import the subpackage that you want. Okay.
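The layout being described, independent connector implementations behind one shared interface where a driver imports only the subpackage it needs, can be sketched roughly like this. All names here (Connector, ISCSIConnector, FCConnector, the fields) are illustrative, not the actual library's API, and both connectors are collapsed into one file only for brevity; in the repo being discussed they live in separate directories.

```go
package main

import "fmt"

// Connector is the shared interface both transport implementations expose,
// even though they are otherwise independent packages.
type Connector interface {
	Connect() (string, error)
}

// ISCSIConnector stands in for the iscsi subpackage.
type ISCSIConnector struct{ Target string }

func (c ISCSIConnector) Connect() (string, error) {
	return "iscsi:" + c.Target, nil
}

// FCConnector stands in for the fibre channel subpackage.
type FCConnector struct{ WWN string }

func (c FCConnector) Connect() (string, error) {
	return "fc:" + c.WWN, nil
}

func main() {
	// A driver would import only the subpackage it needs; nothing forces it
	// to pull in the other transport.
	var c Connector = ISCSIConnector{Target: "iqn.2018-09.example:vol1"}
	path, err := c.Connect()
	if err != nil {
		panic(err)
	}
	fmt.Println(path)
}
```

Since Go only links the packages a program actually imports, keeping the connectors in sibling directories of one repo still gives drivers the "take one, leave the other" property discussed here.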
J: Right, yeah. Oh, I see what you're talking about. Yeah, not right now. The way it's set up is intentionally dead simple, so that you won't have things like that. They are pretty much completely independent: even though they're in the same repo, and they share the same interface for the most part, they are independent packages.
A: Unfortunately, we weren't able to get this in. Vlad worked very hard on it and it was almost there, but ultimately it just fell short of the code freeze, so we decided to punt this to next quarter. Next up is passing workload information to CSI. This was completed as alpha, so this is now possible using the new CSI cluster registry, and the docs need an update; he responded to the comment that I had, so we can get that in. Next up: shared workload info in the CSI spec.
A: I think she should just be able to get approval from, mm-hmm, SIG AWS for creating a new repo, and then the process is: you go to, I think it's kubernetes/org, open an issue there, and say, you know, "SIG AWS wants a new repo under kubernetes-sigs; we want it to be named such-and-such," and they'll go ahead and do that. Sounds good. Oh, sorry, go ahead.
A: Looks like he's not here. So, the last outstanding item here was the inline volume mapping. Inline volumes didn't go in for CSI this quarter, so we're going to have to reassess and figure out what the plan here is. I know Anisha and her team are very interested in working on this as well, so I put her in touch with David, and they're going to work together to try to figure out what the next steps are and make sure that what we do is in alignment.
K: For the NFS common library, there's no work to be done; we're pretty confident about that. There are mount-options changes that are needed for NFS in general that don't need to be a separate thing. I have PRs for those that aren't merged, because I have to write some unit tests. Regarding updating the NFS driver to use the 0.3 changes: the NFS side is going to be really easy, but I'm not 100% sure what the other half is supposed to look like.
K: Yeah, but then we need a whole separate driver that doesn't do any creation or deletion; it just attaches. But then how does it get the information? Because it's not going to be a CSI volume in Kubernetes; it's going to be an iSCSI or FC volume in Kubernetes, so it's not going to have all the same persistent data that a CSI volume would have. So something somewhere is doing the translation, and I guess I have that bit.
A: I think the task here was to update this driver, which actually it looks like somebody already might have done; it looks like Matt did that already, so it might be that there's no work here. Let's just go in and double-check. I think there were two pieces of work: one was updating it to align with 0.3, which looks like it was done, and the second piece was to pick up the common library if one existed, and in this case, no.
A: Yeah, just to get them all under kubernetes-csi. For the drivers, just prefix all of them with "driver", so it'll become kubernetes-csi/driver-nfs, driver-iscsi, and so on. And then the second set will be the library ones; maybe prefix all the library ones with "lib", so lib-iscsi, lib-nfs. That way it's kind of consistent when we look at the giant list of repos that we have. And then the third thing is, we also wanted to get the external-storage repo split out.
A: The repo split work has started, and it's becoming especially important now, because we have a new CSI API under kubernetes/csi-api that consumes the apimachinery package, and the external-storage external-provisioner code also directly consumes apimachinery, and both of those packages are consumed by the external provisioner. So this is a problem that Chang is running up against: you have two different dependencies that both depend on the same dependency, but on different versions of that same dependency.
A: Next up is Flex volume resizing support. This PR was under review, but did not make it before code freeze, so we're punting it to the next release. I know there was an exception filed for this, and I blocked that exception, because exceptions are for extraordinary cases.
A: In this case, this was a P2 for SIG Storage, and we need to be well behaved within the larger community and not file exceptions for every feature that doesn't make it, so we'll pick this up again next quarter and hopefully get it in. Beyond this feature, I'd really, really like to stop developing Flex volumes further. We need to incentivize CSI to be used and reduce the burden in developer time that we have by splitting effort across multiple APIs.
G: On those lines, there is one issue that did come up around Flex support: relabeling, that is, recursively relabeling for SELinux permissions, and then the recursive chown for the group IDs. It's something that in-tree plugins don't have a problem with, because they can skip it if they want, and it's not a call-out. For this, we talked about, instead of adding a function call or something, just doing it as a capability, part of the capability call, mm-hmm, but I was going to push that as a bug.
A: The problem, I think what it boils down to, was the fact that when a node becomes unhealthy for a long period of time, the pods there are not actually deleted, and that is the signal that the attach/detach controller uses to actually trigger the detach. And so initially we were thinking of just handling this ourselves.
A: And then it's the responsibility of node folks or other components, like the node problem detector, to figure out when to mark a node as unhealthy and delete the pods. So currently the problem is pushed to those teams to figure out a better way to handle these cases: when a node is marked as unhealthy, or when it's shut down and needs to be drained. So that's...
A: We have the smart recovery and smart detach; it's just that the things that we trigger it off of have always been, you know, the creation or deletion of a pod, and we discovered that that is not sufficient in all cases. And, you know, our initial reaction was, "yeah, well, we can fix that, we'll just make it smarter," and then we got pushback from the other SIGs saying, "actually, you keep doing what you're doing; we should be doing a better job so that we give you a better signal."
G: So the point I'm trying to make is, we need to set the expectation in the SIG that volume attach/detach isn't going to be fully automatic anymore. There are too many problems to try to make it fully automatic; there's going to be some recovery in different parts, but it's probably not going to be, I don't know, as simple as we've been trying to make it in the past. We should get rid of the expectation that the system's going to recover everything that it can.
A: I don't know if I agree with that. I think the system tries its best to recover where possible. There are still cases that we're not handling correctly, and we're going to try to fix those. We're not going to throw our hands up and say, "sorry, the user has to do it." We need to fix it, but whether it's, you know, code within the attach/detach controller that's going to handle that, or some other component...
A: And, I mean, that's going to go to the CSI drivers eventually, when we do the recovery in CSI, but I think it's all, you know, various layers working together to achieve a desired end-user behavior, right? So it's not like one component alone is going to be able to do this. Our end goal, from the end user's perspective, is that it's fully automated and they don't need to do anything manual.
A: The reality is that there are edge cases where manual intervention is required today, and that shouldn't be the case. We need to work to resolve that, whether that happens at the driver layer, at the attach/detach controller layer, or at the bigger, you know, node-problem-detector or other layers. We need to figure that out and work on it.
A: Okay, we can discuss that more offline if needed. Moving on, next up is mount namespace propagation GA; that got merged as well, thank you, Fabio. There were some questions there about feature gates and what that should look like, and that got resolved. And then finally, checkpointing: I don't believe there was any progress there.
A: That's all that we have in terms of feature reviews. Overall, I think I'm pretty happy with the amount of work that we got completed; it was a lot of work for a very short amount of time. For the next release, let me take a look at the schedule, but I believe we want to do planning during the next meeting, so I'll send out an email, and I'll create a new tab under here for 1.13. You can start populating it with items that you think are important, and then on the 27th...
K: So this is possibly me not understanding how to use the dep tool, but if you click on the PR and look at the size of it... Running "dep ensure -update" was not enough to move the external-storage repo forward to the point where the bits that I needed for doing mount options were there, so I went ahead and made a small change to the Gopkg.toml file and ran it again.
A: The recommendation here is to split out the commits, so that it's very easy to look at just the commit with the change versus the "dep ensure" one, which it looks like you've done; and then, obviously, minimize the dependency change as much as possible, which you've also done. In this case, we haven't picked up apimachinery for a while, so it looks like that needs to be updated, and you're the one that'll...
A: Okay, yeah, if you can do that to unblock yourselves quicker, that's great. Go ahead and update all the versions to pick up the same version of apimachinery, including external-storage and all the external sidecar containers. At the same time, Brad can continue to do this work, and we can start doing proper releases on external-storage and external-provisioner. Yeah.
H: You all already kind of described the issue. Okay, so I linked to the repo creation request, and all it needs is approval, and then hopefully, you know, we can get it done. Since it's been like a year, and, you know, I'm aware that it's very painful for the owners, just so you know, for whatever that's worth, we can get it done pretty soon. Cool.
A: ...repo, but what we decided now is we're going to do a separate repo for every project. I mean, external-storage was kind of that mono repo that we put everything under, and we realized that everything was painful. So now what we're doing is a single repo per project, and that makes it much easier to track issues and releases and all that kind of stuff. So what we do is just prefix it with...
A: Cherry-picks: whenever they stop doing the automatic batch cherry-picks, and I'm not sure what the deadline is, whether it's RC or closer to the release date, then we'll have to do cherry-picks ourselves. But for now, just make sure that your PRs are marked with 1.12, and if you have any questions, just ping the release team on the release channel on Slack, and they should be able to give you the latest update. And if there's any issue, just ping me and I can help resolve that.
A: All right, great. Thank you very much for all your hard work this quarter. I think we got a lot done, and we're in a good position to push to GA for CSI next quarter; that's going to be the culmination of a lot of work. So, next steps: at our next meeting on the 27th, we'll do planning for the 1.13 Q4 release. So if you're interested at all in what's going on, or what SIG Storage is going to work on, or in volunteering for any work, anything like that...