From YouTube: Kubernetes SIG Storage 20181011
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 11 October 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.2wtp3vku7nzb
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:12:37 From Sig Release : I am going to drop off now. Thanks all
09:26:56 From Deep Debroy : This is the feature for CSI migration: https://github.com/kubernetes/features/issues/625
A
All right, today is October 11, 2018. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, and it is recorded and published on YouTube. This is our agenda doc; it's linked in the calendar invite. Feel free to add any items to it, specifically any PRs that need attention, design reviews, or anything else that you'd like to discuss today. First up we have... am I pronouncing that right? That's...
B
So basically, I just wanted to drop by and introduce myself for folks to whom I haven't been introduced before. I'm the 1.13 release lead, and we are visiting the different SIGs to talk a little bit about the timeline of the release itself. We are currently in the enhancement collection phase, so I wanted to discuss a little bit about what each SIG is planning for the feature load for 1.13 itself.
B
So 1.13 is a super short release. By now I'm hoping most of you will have seen the schedule itself: code freeze is on 11/15, and we are planning to release by 12/3, before the Seattle KubeCon, which basically just leaves us about six weeks or so for development itself, that is, code, testing, and docs, to wrangle all of them in. So we started the enhancement collection on Monday, and currently we are seeing close to 11 enhancements for storage.
A
That's good feedback. I think our biggest goal this quarter is to try and get CSI to 1.0, and then there are a number of other features that have been pending that we want to move towards beta or GA. What we're going to do is reassess, as we get closer to the 23rd, how many of these we're actually going to be able to land in time, prioritize, cut the ones that we can't, and focus on the items that are absolutely critical to get us to CSI 1.0.
B
That'd be great. And along the way, if you could... I know Kendrick, who is our enhancements lead, has left a comment on most of those issues asking what is pending if the feature is graduating: is it tests or code or docs? If the feature owners could leave that information there, that'll be super useful for us, yeah.
A
Thank you so much. Next up is our Q4 1.13 update, so we're going to go over the planning spreadsheet for the items that we have tracked. Again, this is a much shorter quarter than usual, so if you feel like you're not going to be able to finish an item in time, it's okay to get it punted to the next quarter. So let's do a quick status review and get any updates from folks. First up, for CSI, we have block volume support moving to beta. Vlad, are you on track for this?
C
There's a PR pending. Somebody, I can't remember the name, is doing some refactoring, which is based on the port of RBD support for block, so I'm using that as the basis for anything that needs to be fixed. I've been working on that with him or her, I'm not even sure, and that will be the basis. And then I also have Chris testing some stuff as well, to make sure that everything looks good.
A
Thanks. So we're going to flip over to the CSI drivers and get a status update on those at the end of the quarter. I spoke with some folks from the VMware team, and it looks like they're targeting the December timeframe. CSI library, moving the mount library from core kubernetes/kubernetes to an external common repo: Travis, any updates on this?
D
Yeah, I've gotten some feedback on the open PR that I need to incorporate right now. So it is definitely started, and I have much more bandwidth to work on this now. I have been pretty slow on it, so apologies for that, but yeah, that's my intention, and I'm heading towards the next steps.

A
Awesome.
I
It will work, I think. To get it to the state where a lot of drivers will want to use it, we'll need to get a lot of features in, and testing. My concern with this was that without a good regime for testing it, I'm kind of scared to just start throwing PRs at it to try to add the features that I think are missing. But as a baseline it's a good starting point; it covers almost all of what kubelet currently does.
J
Well, we spent quite some time and tried to look around this, and yeah, in the end we use nsenter to get to the host namespace and execute iscsiadm there. So first we require the user to install iscsiadm on the host. Instead of installing it in the container, as one could do, we use nsenter to switch namespaces back and use that binary on the host. So then, with multiple containers, they all share that binary.
J
And also, we cannot install iscsid inside the container, because basically the kernel netlink channel used by iscsid to talk to the kernel doesn't support namespaces yet on the kernel side, so you can only run one instance of iscsid on one node. So we prefer that one to be on the host rather than running in a container.
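A minimal sketch of the nsenter approach just described, under stated assumptions: the plugin container runs with access to the host's PID namespace, and a shell function re-targets every iscsiadm call at the host's own binary, so only the single host iscsid instance is ever involved. The wrapper and its flags are illustrative, not the actual plugin code.

```shell
# Hypothetical wrapper used inside a plugin container (illustrative).
# nsenter targets PID 1 (the host's init) and enters its mount and
# network namespaces, so the iscsiadm binary, its configuration, and
# the netlink channel to the host's single iscsid are all the host's.
iscsiadm() {
  nsenter --target 1 --mount --net -- iscsiadm "$@"
}

# Example calls (require a real node and an iSCSI portal):
#   iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
#   iscsiadm -m node -T iqn.2018-10.example:target0 --login
```

Because iscsid's kernel interface is not namespaced, a second iscsid inside the container would conflict with the host's; delegating to the host binary sidesteps that, at the cost of requiring iscsiadm to be installed on every node.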
J
It is a known problem. I have referred to one GitHub issue in my comments on that PR, and I have referred to one link back to another one. A Google Group has been discussing this issue, and I think it's on the kernel side, because somebody confirmed that, and I also tried to look into the code to check; I think it's in the kernel code.
L
The daemon would have to be running in its own container, separate from any of the plugins, and then the plugins would be running their own versions of iscsiadm, or at least accessing the host's iscsiadm, depending on how you want to do that. But all of those iscsiadm instances, or tooling, then access the singular daemon.
N
I opened a PR for the max capacity for CSI, and I have updated the PR for the volume expansion. Currently, the CSI volume expansion is stuck on, basically, a wording question: where to call it. Like, is online expansion only for controller expansion, or does it also cover volumes that are attached to the node via some sort of NodePublish or NodeStage, like iSCSI? So it's basically a wording thing that I need to sort out for volume expansion, okay.
O
It's been started. We have identified about three or four well-scoped-out items. They include things like making sure that StatefulSet deployments are well spread out through everything, as it's done in-tree, as well as a couple of things around using informers for the CSINodeInfo objects, and things like that. So this has been scoped out; I'm working on the first one and should have a PR up soon, hopefully. Cool.
A
Cool, so I'll leave both of those there. Next up is how to test these different libraries, or drivers, that we have: fibre channel, iSCSI, and the libraries for NFS, iSCSI, and fibre channel. I think this goes back to what Ben was mentioning: we have these libraries, so how do we actually test them? I think that'll be an important next question. Any updates on this from either Brad or Ben?
I
...them, but yeah, please invite me to any meetings you have on this subject. But I think once we have tests you can run externally, then it'll just be a matter of bringing up some ordinary Kubernetes and testing using a driver that includes these libraries, yeah.
S
Sure, so we have PRs on the deletion policy and the admission webhook for the default snapshot class. Those are in review, and we also have a CSI spec PR on topology for snapshots; that is also in review. And the finalizers are work in progress. There are three finalizers: the first two are on VolumeSnapshot and VolumeSnapshotContent, so those will be added to the central controller, and then the third one we want to add on the volume, so probably the PVC, when we delete the PVC.
S
We want to check whether there is a snapshot being created from it. I'm wondering whether we should add that to the external-storage controller, because it seems like the controller under external-storage is the right place to add it, but that means external-storage will be dependent...
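For readers following along, a finalizer is just an entry in metadata.finalizers that keeps the API server from fully deleting an object until a controller removes it. A rough sketch of two of the objects involved; the finalizer strings and alpha API fields here are assumptions for illustration, not the final design:

```yaml
# Illustrative only; finalizer names and alpha fields are hypothetical.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
  finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-protection  # cleared by the central controller
spec:
  source:
    kind: PersistentVolumeClaim
    name: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  finalizers:
    # The third finalizer: while a snapshot is being cut from this PVC,
    # a delete only marks the PVC for deletion; it goes away once the
    # controller confirms no snapshot is in flight and clears the entry.
    - snapshot.storage.kubernetes.io/pvc-as-source-protection
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

The corresponding VolumeSnapshotContent object would carry the second finalizer in the same way.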
A
Yeah, you can go ahead and check it, yeah. Beyond that, nothing. Next up is the kubelet device plugin registration mechanism. So, Vlad, I think we need to get the conversation going here with the device plugin folks and make sure this is on their agenda for Q4. Let's get this going ASAP, because it kind of depends on folks outside of this group, and we want to make sure it's on their roadmap.
A
We have a number of new CRDs that the core is going to depend on: the CSIDriver object and the CSINodeInfo object, alpha releases from last quarter. These objects, the CRDs for them, need to be manually installed, and that's inconvenient; we want them to be automatically installed. The feedback from API machinery was: do not use a controller to install these. So this quarter, to unblock ourselves, what we're going to do is use the add-on manager to pre-install the CRDs.
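Concretely, the add-on manager approach means shipping the CRD manifests in the master's add-ons directory so they are applied at cluster bring-up and reconciled afterwards. A sketch of what such a manifest might look like for the CSIDriver CRD; the schema details are assumptions for illustration:

```yaml
# Illustrative CRD manifest, e.g. dropped under /etc/kubernetes/addons/.
# The addonmanager label asks the add-on manager to recreate the object
# if it is ever deleted, which also mitigates the "CRD deleted while
# in use" concern.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csidrivers.csi.storage.k8s.io
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  group: csi.storage.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    plural: csidrivers
    kind: CSIDriver
```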
A
In the meantime, I'm going to pursue installation with the add-on manager, and that should be sufficient to unblock us; whatever SIG Cluster Lifecycle decides to do can be the long-term solution. There are also a number of issues that come up with using a CRD in the core, including things like race conditions if the CRD is not installed, things like the informer, what happens if the CRD is deleted while it's in use, things like that. And the recommendation from API machinery was, if anything is not behaving as intended...
A
We're going to follow up with Jordan to ensure that we're okay deprecating the in-tree code, and if so, we're going to go ahead and move the role definitions to the external repos. Next up is moving the CSIDriver and CSINodeInfo objects from alpha to beta. This is blocked on the CRD installation mechanism, so no update there.
T
We've been working on that and testing things out for the transfer. There's a possibility that we can just update the claimRef to move it to a new namespace, and put the PVC in the other namespace. So John was going to submit that, since I broke my GitHub account and had to open a new one, but we will get that at least on the calendar so everyone can start reviewing it. That'd...
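The claimRef idea can be pictured like this (a sketch of the mechanism, not the actual PR): a bound PersistentVolume records its claim in spec.claimRef, so rewriting that reference to point at a claim in the target namespace, and creating a matching PVC there, effectively transfers the volume.

```yaml
# Illustrative: a PV whose claimRef has been repointed at a claim in a
# different namespace. The PV needs the Retain reclaim policy so the
# underlying volume survives deletion of the old PVC, and the stale
# claimRef uid/resourceVersion must be cleared before the new PVC binds.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: team-b   # target namespace
    name: my-pvc        # matching PVC created in team-b
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  nfs:                  # volume source unchanged; NFS chosen arbitrarily
    server: 192.0.2.20
    path: /exports/my-pv
```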
A
It is a really, really bad piece of code. I think there have been multiple bugs open to clean that up for a long time, and I don't think we've gotten to it. If anyone's interested in working on that, let me know, but considering it's a short quarter, it might not be the wisest use of time for this.
I
This is work I did back in 1.12, and then I got stuck on the dependency stuff, so it didn't make the deadline, but I just need to write the unit tests. This is essential for NFS: you can't really do an NFS volume without mount options. So, okay, mount options into the sidecar, and mount option support into the Kubernetes CSI driver. There are actually two PRs here. I'll take it, the other one, okay.
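For context on where mount options live: in-tree volumes already accept them on the PV or StorageClass, and the two PRs discussed plumb that same field through the external-provisioner sidecar and the CSI driver down to the actual mount call. A sketch with illustrative values; the provisioner name is hypothetical:

```yaml
# Illustrative: NFS mount options on a StorageClass. The provisioner
# sidecar copies mountOptions onto the PVs it creates, and the driver
# passes them to the mount call; without this, an NFS volume cannot
# set essentials like the NFS version.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-csi
provisioner: nfs.csi.example.com   # hypothetical CSI driver name
mountOptions:
  - nfsvers=4.1
  - hard
  - noatime
```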
A
Alright, so we have a lot that we're doing in a very short amount of time. Code freeze is a little over a month away. The 23rd, as Ash mentioned, is the date that we need to officially finalize, on the feature repo, what we're committing to for this quarter. So when we have our next meeting, let's reassess what we think we can make this quarter and what we can't. It's okay to say that the quarter is too short and that we don't have time for a specific feature.