From YouTube: Kubernetes SIG Storage 20180802
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 02 August 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.bb6wxuunl8d2
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:57:32 From Conor Landry : 👋 how can i be added to the invite list for sig-storage?
A: As a reminder, the feature freeze date was yesterday for features that need to get into 1.12. That's the mechanism by which you declare that new features will be going into 1.12, and then there's a code freeze deadline for the code itself. Code freeze for these features is going to be September 4th, so we have just about a month. Alright, so taking a look at the SIG Storage planning spreadsheet, let's just get status updates here. First item is the snapshot and restore controller that Jiang and Shang are working on. Do either of you want to give a status update?
A: Cool, sounds good. So the code is in progress and the proposal is in its final stages of getting reviewed. That looks good.
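The snapshot controller under discussion is built around new CRDs; below is a minimal sketch of what a snapshot request could look like, with the caveat that the API group, version, and field names are illustrative, since the proposal was still in review at the time:

```yaml
# Illustrative snapshot request: asks the external snapshot controller
# to snapshot the volume bound to an existing PVC.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  source:
    kind: PersistentVolumeClaim
    name: data-claim   # hypothetical existing PVC to snapshot
```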
Next up is block volume support to beta. I actually dropped the priority of this to P2; I don't think this is a blocking feature for getting CSI to GA, and it can move independently of the core CSI APIs. Vlad, do you want to talk about this at all?
A: If this feature is important to you, please test it out with your driver and help us find issues and file bugs, and we can take a look at those and try to get them fixed. As for the VMware vSphere driver, I'm gonna skip over the drivers, mostly because they're not part of the core; we'll just get a status update on the drivers at the end of the quarter. For CSI, we were discussing a cluster-level plugin registration mechanism, and we decided we're going to pursue this as a CRD.
A: So, for instance, if you have a driver that does not implement attach: when it registers itself with this CRD object, it has a field that says "I don't implement attach". Today, when the Kubernetes attach/detach controller comes along and decides whether it needs to do an attach or detach, it calls it for all CSI drivers. In the future, it would look for this object, and if it existed and indicated that the driver doesn't require attach, we can skip that operation and speed up execution for drivers that don't require attach/detach.
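The registration object described above might look roughly like this; the kind, group, and field names are illustrative, since the CRD design was still being finalized at the time of this meeting:

```yaml
# Hypothetical cluster-level registration object for a CSI driver.
# A driver with no attach/detach (no ControllerPublishVolume) declares
# that here, so the attach/detach controller can skip the operation.
apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # illustrative driver name
spec:
  # false: Kubernetes skips attach handling for this driver,
  # and no external-attacher sidecar is needed.
  attachRequired: false
```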
A: The challenge today is that those drivers that don't have an attach/detach, and possibly don't even have a controller, still have to ship with a CSI external-attacher that just no-ops. For all CSI drivers today, the Kubernetes system creates a VolumeAttachment object and expects this external-attacher to basically no-op it if attach doesn't exist. That adds a little bit of performance delay for volume plugins that don't implement an attach, because you essentially have extra code running, and the mounting steps have to wait for it to complete. Okay.
A: So basically all they need is a mount and an unmount, NodePublish and NodeUnpublish. They have no use for provisioning, they have no use for attaching, and they don't actually implement a controller. Think of something like the secret volumes or config map volumes that we have in-tree, but implemented using CSI.
A: Yeah, so there's a lot of volume plugins that could benefit from this. Moving on: skipping external attach/detach for non-attachable volumes. This was one of the goals for this quarter, and it turns out number five actually solves it in a very nice, clean way, so we're pursuing that; once number five is available, Yan can implement number six. For number seven, replacing volume reconstruction with checkpointing: I'm not sure if you had a chance to look at this, Vlad; it's pretty low priority.
"Yeah, I've not, but..."
A: The default behavior is that when a volume driver is used and mount is called, we construct the Unix domain socket path and expect the Unix domain socket to exist there; if it doesn't, it will fail. Once this feature is enabled, any Unix domain socket created in the specific directory results in a registration mechanism getting kicked off. That registration mechanism identifies the type of the plugin, for example CSI or a GPU device plugin, and then allows the kubelet CSI code to do further registration.
A: So that's the reason we want to align with this work. We've been talking to the folks who built the alpha version of this feature last quarter about driving it to beta this quarter. There's some question about whether they're going to be able to do so or not, based on the outstanding items they have remaining. Sergey, Vlad, I think you guys might have more context on this, yeah.
F: So we discussed the higher-level ideas of the proposal last week, and it sounds like we have consensus in the SIG on the plan. So now we can move forward and start looking at what we need to do to actually execute on this. I think Brad has some people on his team that can help with this, and we'll have to sync up with Patrick to coordinate efforts.
H: Yeah, so I got that started. I kind of ended up tying it to the driver that we're skipping, so it's not going as quickly as I'd like, but hopefully in another week I should have something ready for Ben to start looking at, or anybody else that's interested in looking. Good.
A: So if anybody is writing a CSI driver that does iSCSI mounting, you should really take a look at the work that John is doing here. He's coming up with a standard library that can be used by any arbitrary CSI driver that does iSCSI mounting. Ideally, we want this to be usable by multiple different drivers, so please take a look at that, provide feedback, and see how we can make it usable by multiple different implementations. I know a lot of you have iSCSI-based CSI drivers.
A: Currently CSI only supports access via PV/PVC, which is fine for remote attachable volumes, but for local ephemeral volumes (think of something like a secret volume) it doesn't make sense to access them through a PV/PVC; you should be able to reference those directly inline in your pod. The challenge is that we don't want to allow attachable remote volumes to be referenced directly inline in a pod, for multiple reasons: one, because it breaks portability, and two, because it makes things like topology-aware scheduling impossible.
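As a rough sketch of the inline usage being proposed (the exact pod-spec syntax was still under design at the time, so the field names here are illustrative):

```yaml
# Hypothetical pod referencing a CSI driver inline, with no PV/PVC pair.
apiVersion: v1
kind: Pod
metadata:
  name: inline-demo
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:
        driver: ephemeral.csi.example.com   # must be registered as non-attachable
        volumeAttributes:
          size: 1Gi   # illustrative driver-specific attribute
```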
A: Now that we are planning this CSI registry, we can actually use it for this logic: if a driver wants to be able to do inline, it would have to register itself with the registry and say that it doesn't do attaching. If that's the case, then when someone references the driver inline, we can check that and allow it to happen. This is work that Yan is driving. Yan, is there anything else you want to add?
K: Yeah, there's a sidecar container running on every node, the CSI driver-registrar, that registers the CSI driver with the kubelet. Currently it passes just the path to the Unix domain socket of the driver, and it would also pass some sort of configuration for how the driver handles inline volumes; for example, whether, for this particular driver, we should also pass the pod name and namespace to the driver.
G: So I've been working on reading the code for CSI, and I'm working on updating the proposal as well. There's a plan for the CSI bits; I think it depends on one key thing, which is that the driver name and the provisioner name need to be the same. And I tried to talk to SIG Node about how these limits would be overridable for in-tree volumes, whether that is something they will allow, so yeah.
A: Right, and the other question that we had was whether Azure File needed any zone awareness. It doesn't appear to be the case, so we can probably cross that off; we'll stop tracking it. Then we'll mark this as started, and we'll hopefully get an update on TCP next time. And then for the GCE PD work, updating Regional PD to support the new topology framework, I guess that's gonna happen hand in hand, yeah.
A: Looks like David's not on the call, but I had a design review with him this week to review the proposal. The current state of the proposal looks good to me. I think the only outstanding item left is how in-tree inline volumes are going to map to CSI, since CSI currently doesn't have inline volumes; so that's going to be dependent on the CSI inline design.
A: Skipping over the CSI driver, we have the fibre channel common library. Brad, any status updates on this?
E: We can be done with this item, but there is another related bug that I'm working on, so I'm gonna have another PR. Actually, I have the code written; I just haven't tested it, and I'm waiting to push it up until I test it. But there remains a race condition in the multipath code, where you can sometimes not get multipath when you're supposed to.
E: So, as I mentioned two weeks ago, I'm fairly certain that nothing is needed for NFS, because it's just a mount. I can say at this point, having been working on an NFS CSI driver of my own, that I'm 95% certain we won't need any common library at all, because it's just gonna be a mount call at the end of the day, yeah.
E: That would be a better outcome here, because actually solving this issue would require a considerable amount of new code to scan all of the subpath volumes. Right now, subpath sort of piggybacks on other reconcilers to get reconciled, and there's nothing that just goes and looks for subpaths that aren't attached to anything. So there would be a substantial amount of new code to reconcile orphaned subpaths.
I: Sorry, thanks; I got tied up and just rejoined. So yes, we do have the code completed and we should have a PR shortly. As we discussed offline, yeah, I can set up a quick call with you to go over the couple of comments that you had, to understand them better. You added a few comments around the admission controllers and a few other things. And Sri has also been helping me on the end-to-end tests; that's what we are now concentrating on.
A: Cool, awesome.
A: The change basically requires ensuring that documentation has been updated, updating the feature flag to promote this from beta to stable, and then ensuring there are end-to-end tests. It seems like a great opportunity for anybody who's been sitting on the sidelines and wants to get involved with a core Kubernetes feature.
N: Sure, so this is a pretty straightforward PR. The way the admission controller is written right now, it allows PVC resize only for volume plugins that support resize or expansion. As a result, for plugins like NFS and iSCSI, and I think this is the issue with the Flex driver too, you cannot do resize with them. So there are two main motivations for this change. One is that the admission controller doesn't necessarily know which CSI plugins support resize.
N: So what this does is basically delegate the volume plugin check to the expand controller, as opposed to the admission controller. It also allows external provisioners or external controllers to do resize for plugins like NFS or iSCSI, where it's not possible to have resize capability in the in-tree plugins. So that's the gist of it.
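For context, the user-facing operation being gated here is simply an edit that grows a PVC's requested size; a minimal illustrative example (the storage class name and sizes are made up):

```yaml
# Illustrative PVC on a storage class that allows expansion. Resize is
# requested by editing spec.resources.requests.storage upward, and the
# admission controller discussed here decides whether to allow it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: resizable-sc   # hypothetical, with allowVolumeExpansion: true
  resources:
    requests:
      storage: 10Gi   # later edited upward, e.g. to 20Gi, to request expansion
```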
G: I've looked at the PR; I think mostly it's what we want to do. I agree with the general idea of it, and I think we discussed it in a previous call as well and agreed with the general idea. The main things are the event name (I don't know, "external expanding"; I know we used a similar name in this thing) and some of the implementation bits where I have left some comments. I think we can get it in, I mean, from my perspective.
M: There's another design review tomorrow for cloning; we've been having them on Fridays... Thursdays. We had a really productive design review last week, splitting out the cloning into limiting it to a single namespace, with the transfer of volumes and populators as secondary design considerations. So if people are able to attend, I would absolutely love the feedback, and everyone should already have an invite; same time as this meeting, but tomorrow.
M: Host cloning versus host-assisted, yeah. So cloning initially is meant to leverage the technology of the storage itself to do the clone on the backend and provide an instant copy, right? But there may be cases where we want to clone, and snapshots are a perfect example, maybe from one storage type to the next, where we want to have a best-practices design around having the host assist in creating that volume. So that's where host-assisted cloning comes in.
M: You may have created the volume using one technology and you want to create another volume using a different one, but you want a copy of the exact same data. So this could be a matter of, you know, what is allowed for a storage class within a namespace, where you allow some types of storage in one and not in the other. Or you want to pre-populate the volume for testing, maybe a database, something like that. So that's where it comes from. Does that make sense? Yeah.
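The cloning under design would surface to users as a new PVC whose data source is an existing PVC; a hedged sketch of that shape (the API was not finalized at the time of this meeting):

```yaml
# Illustrative clone request: a new PVC asking the backend to start
# from the contents of an existing PVC in the same namespace.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-sc       # hypothetical
  dataSource:
    kind: PersistentVolumeClaim
    name: source-claim            # existing PVC to clone from
  resources:
    requests:
      storage: 10Gi
```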