From YouTube: Kubernetes SIG Storage 20181108
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 08 November 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.trrf65a1ive9
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
None
A: All right, today is November 8, 2018. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. For the agenda today we're going to go over the items that we're working on as a SIG for Q4 2018, for the 1.13 release. The code freeze for this release is coming up very, very quickly; it's going to be at the end of next week. And then, if you have anything else that you want to discuss, please feel free to add it to the agenda.
A: Let's move on to the planning spreadsheet and get status updates from folks. First on the agenda is moving block volume support to beta. This is currently on my plate to review; I see the PR that's already out. This is for CSI. Actually, sorry, I was thinking about the core feature itself. Do you want to give an update on the CSI portion of this?
B: Absolutely. So last week, or maybe a couple of weeks ago, we had a meeting discussing some of the gaps that still exist in the CSI implementation for blocks. So we've decided to keep this feature in alpha for now, given the other stuff that we have to do. Excuse me; so it'll stay alpha, and hopefully by next quarter we'll have more implementations on board, which will validate some of the decisions that were made regarding the spec. Hey, Vlad.
B: I think one of the things we ran into is the language in the spec itself, which I think recently got clarified, and that update in the spec language is going to make it into 1.0. The other thing that we ran into is that a couple of people were actually building test beds, and they had issues understanding what clarifications were needed as to how the CO would handle certain scenarios for blocks. So that's why we sort of decided to keep it alpha for now.
B: I forgot the first name. Yeah, so that'd be... it's out; you can review it. I've extensively reviewed it, but certainly another set of eyes would be welcome, and it's ready to be... and this is an internal one. This PR has to do with internal changes that happen in the CSI block support. Okay.
G: Along with other people working on that, I was working on refactoring the end-to-end framework so that it can be used outside of Kubernetes. That's done, and I've started using that in my own projects, and some colleagues have done the same, so that basically works. And Michelle was working on refactoring the volume tests. That is also... well, it's in master currently in Kubernetes, but using that outside of Kubernetes needs a bit more work.
A: No worries, thank you for the update. Next one is capacity reporting for generic topology, required for local dynamic provisioning. This is just a design for this quarter. Last we left off, there was some brainstorming going on on different ideas. Is either Michelle or this person on the line? Yeah.
A: So this is, let's say, design for this quarter then, and yeah.
A: I would just call it done. Oh, sounds good to me. Please take a look at this if you are writing any sort of iSCSI driver and see if it's usable for you, and I guess the next step here is using this library for the common driver, and that looks like it has started as well.
A: Next one is adding support for volume expansion. So I met with James yesterday. He said that he didn't have any major outstanding issues, but he wanted to take a look at the PR holistically, because there had been a lot of minor changes, and then once he did that he would be okay with it. So I think that's the next step: it's waiting on Yin. Okay.
J: With David and Deep on this feature: I was working on the testing part of David's change to the attach/detach controller, and I think Deep's change, the provisioner part, is kind of slow going. I'm also picking up changes on the node side, which is the mount and unmount, and just because overall time is a little tight, I think it might not be able to make it in the alpha.
A: Thanks, Michelle, so that sounds like it is on track. We're gonna skip over the driver for Fibre Channel and go to the Fibre Channel library. Goosen was working on this; he was gonna sync up with John to figure out what kind of work needed to be done. John, did you guys get a chance to sync up and figure out the next steps here for the Fibre Channel library?
A: Who are you waiting on for further review? Is it them or somebody else?
I: Yeah, maybe. We talked about it yesterday, and there are a couple of things that we want to discuss; maybe after this call we can discuss them. Essentially this PR will change how the resizing is to work, like you just don't have to delete the pod. Related to that, I was just thinking if we want to make it opt-in, because, you know, maybe not everybody wants to do online resizing all the time. My main concern was... I spoke with Matthew yesterday.
G: No, that's really something that we need to discuss in the entire team. I have some thoughts, and with all these changes and improvements that we are making in the end-to-end testing, I think we are getting closer to the point where SIG Storage or the CSI working group could run their own CI system, using the end-to-end tests to verify that our CSI containers work against different Kubernetes versions, not just the latest one. But we haven't really started that discussion, to be honest. Okay.
A: Ideally, we want all the open PRs with changes for 1.0 to be merged by Friday end of day, and then cut a CSI 1.0 RC on Monday, and then have Kubernetes pick it up on Thursday when the official release is cut. When the RC is cut, we can begin the work to start doing the Kubernetes changes, and as soon as the official 1.0 is cut on Thursday, we pick it up right before the code freeze.
A: We had a one-off community meeting yesterday to discuss things, but if you're interested at all in what's going on with CSI, please take a look, because things are going to get locked down very soon, and after 1.0 we don't want to make any breaking changes. If there's anything you've been meaning or hoping would change in CSI, now is the time to get your word in; it's down to the last minute, so please do so.
A: It's an add-on that exists in cluster/addons that the deployer needs to specifically opt into using. There is a kube-up script that is part of the Kubernetes deployment scripts that's already been updated, but that's just one mechanism of deploying. There are probably 10 or 15 different ways to deploy Kubernetes, and whichever deployment mechanism you have needs to opt into using that add-on.
A: I think it depends on how you're doing the deployment and what type of customization you have on that. Okay, if you don't want to use the add-on, it would be as simple as doing a kubectl create -f manually, but we do want to have the deployers just ship the add-on so that the user doesn't have to do it manually. But yes, that's possible. All right.
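[Editor's note: as a sketch, the manual path mentioned above is just applying the add-on manifests directly; the directory name below is a placeholder for whichever add-on your deployment uses, not a specific path from the meeting.]

```shell
# Apply the add-on manifests by hand instead of relying on a
# deployment tool's opt-in mechanism (directory is illustrative):
kubectl create -f cluster/addons/<addon-dir>/

# Confirm the objects were created:
kubectl get -f cluster/addons/<addon-dir>/
```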
G: I think we are almost done with that. There's one open doc pull request for the kubernetes-csi docs, where the example gets updated to use the external RBAC rules, and that's the last thing that needs to be done. And it's actually kind of urgent, because people are figuring out that what we have in the documentation at the moment is broken.
H: I went ahead and started working on that this week as well. So the thing that's kind of interesting about that is the changes that were merged for data source into Kube already, and the API group; I think we can just leverage those and use them. So I've got some code working right now, and the only changes were to the external-provisioner and some slight tweaks in the Kubernetes code to allow the different API groups. Okay.
H: The proposal, from my viewpoint, is to modify that and allow not only PVCs (well, those would work anyway, because they're native objects) but to allow external CRDs, and we would just call those populators or whatever. So the only thing that's kind of outstanding on that is I need to work with the snapshot folks; I talked with Xing about this yesterday, the readiness gates stuff and things like that.
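[Editor's note: a sketch of what such a claim could look like; the apiGroup and kind below are made-up illustrations of an external populator CRD, not a merged API.]

```yaml
# Hypothetical PVC whose dataSource references an external CRD;
# an out-of-tree "populator" controller would watch for claims like
# this and fill the newly provisioned volume with the referenced data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: populators.example.com   # assumed non-core API group
    kind: DataPopulator                # assumed external CRD kind
    name: my-source-data
```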
A: Let's keep the design doc in the kubernetes/community repo, okay? That way it's kind of a holistic view of Kubernetes end to end, and then, if there's any work that's external-specific, we just do the work there. Okay, let's track it like a Kubernetes feature. Cool, okay. Next up is the ability to set a UID on a volume in addition to a GID. Status: this was keeping track of an existing thread; the namespace feature may be impacted. The link was this. Any updates?
C: So briefly, you know, NetApp has customers that use our existing dynamic provisioner, and we want to move to CSI. We're gonna have a situation where some customers will have some CSI PVs and some older iSCSI and NFS PVs, and we'd like to be able to non-disruptively get them all to CSI. You know, because underneath the covers we're not going to move any data around; it's just changing the pointer in Kubernetes.
C: So because of the fact that you can't rebind a PVC to a different PV, this gets kind of ugly. You know that you can do it by deleting the PVC, creating another PVC with exactly the same name, force-binding that PVC to a new PV that you create, deleting the old PV, and playing a bunch of tricks. But that's not clean, right?
C: That's nasty, and so I think it would be nice if there was a way to solve this problem. And my naive thought is: if you could just rebind the PVC to another PV, if we found a way to add that feature, it would address this. But I didn't know if anyone else had any other ideas.
C: Right, so something that will work... So if we have, like, an existing NFS volume and the NFS code goes away, then the NFS CSI driver will take up the job of doing the attach/detach. But what we're saying is we don't even want that. Our CSI driver knows how to do the attach/detach for that volume, so we just want to change it from an NFS PV to a CSI PV with the NetApp type, or the Trident type.
C: In our case, right, it'll point to exactly the same thing, but we want to use our CSI driver rather than the default one, which requires changing the PV. It'll point to the same data; the users won't know the difference, but it'll be cleaner, because there will just be the one CSI plugin involved.
H: So maybe I'm not following. Michelle, wasn't the migration stuff literally just: you flip the switch and you're on CSI, and then it knows how to manage anything that was created before, because the only thing we care about is the actual provisioning of things?
A: I wonder, based on the work that David is now doing on the node info object for migration, if we could have something along the lines of a map that says "this is the CSI driver that I want you to use for migration," and you could opt in on a cluster-by-cluster basis. So it would be the same migration code that's being implemented, except you could potentially include a larger subset of in-tree volume plugins where you can say: please reroute manifests that use this plugin to this external driver.
C: So we have a POC today. What it does is it goes through and marks all the PVs as "don't actually delete this," then it deletes all your PVCs, and then it recreates new PVCs with exactly the same names and options, but pre-bound to new PVs of the CSI type with the Trident provisioner. And that works, but the problem is a user would see all their PVCs getting deleted, and they'd say, "What the heck? This is weird."
C: That's the proof-of-concept way of doing this, but I don't think we could ship that, right? Because users might have automation around doing something automatically when a PVC gets deleted, and if we're doing it just to do a migration, the automation might trigger unintentionally. So we really want to not delete the PVC and create a new one. Additionally, things like the UID change when you do that. Yeah, actually.
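[Editor's note: the proof-of-concept rebind trick described above amounts to roughly the following; the resource names are illustrative, and, as discussed in the meeting, this is not considered shippable because the claim is briefly deleted.]

```shell
# 1. Protect the data: ensure deleting the claim won't delete the volume.
kubectl patch pv old-nfs-pv \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Delete the old claim (the user-visible, problematic step).
kubectl delete pvc my-claim

# 3. Create a CSI-type PV pointing at the same backing data, then
#    recreate the claim with the same name, pre-bound via spec.volumeName.
kubectl create -f new-csi-pv.yaml
kubectl create -f my-claim.yaml   # contains spec.volumeName: new-csi-pv
```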
K: And I think everybody's migration strategy is gonna be a little bit different here. You know, for example, with the CSI EBS driver: you know, we did a lot of work to make sure the in-tree EBS stuff works well. I don't see us just switching over to the CSI EBS driver just because it's available, and I think, as far as our customers are concerned, we'll probably say: hey, if you want to go and experiment with the CSI driver for a while...
I: So Michelle discovered a bug the other day, where while the mkfs is going on, the volume is getting detached, and a similar thing could happen during the resizing. Basically it could result in a problem, and online resizing could exaggerate that problem. So that's the main concern that I have, I mean.
A: We should probably document that, along with the ability to turn it on. If we actually genuinely believe that there could be issues, then having some way to be able to do resizing with this feature disabled, if something comes up, seems reasonable. You could also have it on by default, but have a way to be able to control it and disable it if you wanted to, so it could be opt-out. Either of those options seems reasonable to me.
A: All right, that's all that I have today. If there's nothing else, then we'll reconvene in two weeks; that will be post code freeze. If you have any features that are going in, a reminder: we are very, very close to code freeze, and I believe today is the deadline for having... if you have a feature, you need to have an empty docs PR created as a placeholder, for the docs folks to know that you are going to write documentation.
A: Let's take a look. Is it next week? Yeah, code freeze is next week, and then this meeting overlaps with American Thanksgiving in two weeks, so I'll just delete that and we'll meet in four weeks, then, on the sixth. Unless we want to offset: any preference between offsetting by one week or just meeting in four weeks?
A: Okay, so let's just leave it as is. I'm gonna cancel the meeting over Thanksgiving. If there's anything that you want to discuss in the meantime, feel free to send out a message on Slack or over email, and we can always set up one-off meetings if there's anything that needs discussion. Otherwise, we'll reconvene in four weeks, which happens to be right before KubeCon Seattle, that is.