From YouTube: Kubernetes SIG Storage 20190328
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 28 March 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.9pnf02hpaxv9
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:46:26 From cheng pan : Since we mentioned Windows in several places, do we have a dedicated story for “CSI windows support”?
09:47:23 From Deep Debroy : Hey not yet. It’s one of the items in the agenda and if we don’t have time, will revisit and populate entries here.
09:48:29 From cheng pan : ah, saw it
We want to just go through very quickly and get an end-of-quarter status of what was completed and what was not completed. Then we're gonna do Q2 planning for version 1.15 of Kubernetes and figure out what we are going to commit to. I've already added a tab for 1.15; feel free to add additional items there if there's something you think we should be working on that we're not. So let's go ahead and get started.
There's one bug outstanding. I think Shing was able to address it, so we will backport — we'll try to backport that fix into a patch release for 1.14. And then on the sidecar side:
Next item is local PV GA. This I believe was completed — right, Michelle? Yep. Cool, great work on that. Next item is cleanup: storage feature gate handling — that was completed. Non-recursive volume ownership: this was started; the last status update was that there was a KEP pending. Any updates on that? Has a design been agreed on for this yet?
Sorry — we had, we have to work on... I think we didn't get the KEP merged. Michelle has some comments, so I think in this quarter we'll have to convert this to design-only, because we don't have a KEP that is agreed on yet, so it'll take time — I think maybe this entire quarter — to convert it to design. That will solve this, and maybe the next item as well: revisit UID/GID handling of volumes. Okay.
Some of the stuff was completed; I think the main things left over are the read-only stuff and then the cleanup stuff. I think it was partially completed.
Next item is per-PVC provisioning secrets. This is in the CSI provisioner — the external provisioner. There is logic that allows you to do templating, basically, for secrets: instead of hard-coding a specific secret to be used for every single volume, you can template such that the PV name, or the PVC name or namespace, or an annotation from the PVC, is used as part of the name of the secret. What this allows you to do is have a different secret for each one of your volumes.
For each volume that's provisioned, there are provision secrets, there's attach/detach secrets, there's mount/unmount secrets. The attach and mount/unmount secrets currently allow you to template based off the PVC name, which is pretty useful, but the provision one does not allow that. There were a number of reasons when Jordan introduced this feature that he decided not to allow this on the provisioning secret, but we've discussed it with him since, and it seems like it would be very useful.
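As a rough sketch of what's being discussed — the driver name and secret naming here are illustrative, based on the external-provisioner's StorageClass secret parameters — per-volume secret templating looks roughly like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: per-volume-secrets          # hypothetical name
provisioner: example.csi.driver.io  # hypothetical CSI driver
parameters:
  # Attach (ControllerPublish) secrets can already be templated per PVC:
  csi.storage.k8s.io/controller-publish-secret-name: ${pvc.name}
  csi.storage.k8s.io/controller-publish-secret-namespace: ${pvc.namespace}
  # The proposal is to allow the same PVC-based templating here,
  # instead of a single hard-coded provisioning secret:
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
```

With this, each PVC resolves to its own Secret at provision time rather than every volume sharing one.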
So for this we already have a KEP; it just needs a change in the way it is implemented, so that would be welcome. Okay, so.
Sounds good, thank you for the update. So the feature here is for volume finalizers, and the next item is snapshot execution hook — oh, those are alpha, assigned to Shang, so it looks good there. Priority-wise, P2 seems reasonable to me. Next item is CSI out-of-tree: automating the sidecar release process and adding end-to-end tests.
What I also want to do is move up this populate-data-source item and kind of group all the things that John's working on together. So we have volume cloning — that one makes sense. The second item is volume snapshot namespace transfer. The idea here is: hey, I create a volume from a snapshot, or I clone a volume, but I want it in a different namespace.
So the idea here is to come up with an implementation that'll allow you to safely move things across namespaces. Ideally this would be a two-way handshake; the details on how exactly it'll work are being worked out. The third item here is populate data source. We do have a dataSource field on the PVC object when you provision. Currently it supports a snapshot as source, and with 1.15 volume cloning it's going to support a PVC
as a data source. But in the future we want to open this data source up to any arbitrary CRD. So you could imagine, for example, GitHub as a populator: if I have a volume and I always want to pre-populate it with a GitHub repo or a Docker image, you could write a populator — a standalone external populator — that can pre-populate volumes.
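For reference, here is roughly what the dataSource field being described looks like on a PVC — all object names are hypothetical — with a VolumeSnapshot source as supported today, and a PVC source once cloning lands:

```yaml
# Provision a volume restored from an existing snapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc        # hypothetical
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot  # hypothetical
---
# With volume cloning, an existing PVC becomes a valid data source
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc          # hypothetical
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc        # hypothetical
```

The arbitrary-CRD populator idea discussed above would extend this same field to other kinds beyond VolumeSnapshot and PersistentVolumeClaim.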
This design is fairly complicated, because you have to have your populator pod be able to basically attach and mount the volume before it's ready, while making sure that no other regular pods are able to attach and mount it — and we want to do this in a backwards-compatible way, so the orchestration around this gets pretty tricky. For that reason this is going to be design-only this quarter. We just want to make sure that we get this right, but once we do, I think it'll be a pretty powerful interface.
It should be a separate item, because currently we do it only for the one case; if you have to do it for the other, you have to rethink the whole thing a little bit. But the implementation itself will be separate from the current one. Basically, at least I would like them to be two separate items.
So let me leave that unassigned for now.
So this was blocked on how we are going to resize the volume: it only supported offline resize and didn't handle it at all, and we just have to get that sorted out. I can reach out to him and see if he is still interested in working on it. If he's not, we might have to push it to next quarter or something, but I'll reach out to him first.
This one was just pending. In terms of consistency groups in general, I was talking to Jing about this yesterday, and I think my opinion on this has changed a little bit recently. I'm not convinced that this needs to be exposed directly in the Kubernetes API, especially if, say, the apps side picks up execution hooks and helps with doing application-level quiesce and unquiesce. But we can have that discussion in this, yeah.
And this is something that Michelle and I talked about. We have a customer that uses StatefulSets, and in the use cases they have, these volumes, if they're in a StatefulSet, need to be placed on different nodes in the actual storage cluster — which is separate from accessibility.
I think it's still a P2. I think most of the places where this issue comes up... oh, actually, you know, maybe a P1. I think there's currently one CSI test that's failing because of that, and I think we basically leave orphaned pod mounts when the pod gets deleted. So that might be a more important issue. Okay.
And the last item I had was for the CSI spec. Now that the spec is 1.0, it's very important for us to have some mechanism by which we can have kind of alpha/beta/GA designations. This is a conversation I've already started with the CSI community, and there are a couple of proposals out there. What we did before 1.0 is we would introduce a feature and then we would kind of refine it, implement it, revise it, and make breaking changes. But with 1.0 there's a guarantee.
The guarantee is that we're not going to make breaking changes to the spec moving forward. But we also want to be able to add new functionality, and the reality is that any time you add something to an interface, you're probably not going to get it right the first time. It's gonna take a few iterations to do so, and you may want to make changes to that implementation.
It was either delay resizing some more and figure out how to do this alpha/beta/GA, or get it in and kind of take a gamble and say: I think this is the way we want resizing to look. Even snapshots, to a certain extent, were added shortly before 1.0. But moving forward, for any subsequent features, before they go into the CSI spec I want to make sure that we have some way to have them go in as alpha. So that's something I'll commit to working on with the CSI community.
Do you want to open up that doc? So, essentially, with 1.14, Windows support kind of became stable, so basically we'll start seeing Windows nodes appearing in mixed clusters with Linux nodes. So I started off with this quick document laying out an overview of what the options are for Linux workloads today, and then in the next section
I go over the present state of things with Windows, which is mainly support in a handful of in-tree plugins — for example GCE PD, Azure Disk, Azure File — and then there are also these FlexVolume plugins that are not maintained in any official Kubernetes repo. It's kind of external Microsoft people who've been making these FlexVolume drivers.
So in terms of the future — like, you know, if we want to really support persistent workloads, which I think we do for Windows workloads — then we probably need to handle a few different scenarios. So I call out — I break them out into potentially looking into — I know this would probably be controversial — potentially looking into an SMB plugin in-tree. I understand that in-tree... we do not want to add stuff to in-tree, but just like we have NFS and iSCSI today.
The first proposal is: can we bring in an SMB one, and enhance the iSCSI plugin to be able to support Windows through the PowerShell cmdlets instead of the iscsiadm tool that today supports Linux workloads. So that's sort of the first proposal. The second one is: what are some of the potential ways we can support CSI?
So obviously the easiest one is — say, you know — so just for some background: one of the big constraining factors in Windows is the lack of privileged support — support for privileged containers.
So containers in Windows cannot really write to the host namespace using the default bind-mount mechanisms; they can only read from there. But one of the workarounds available for this is that we can potentially consider a privileged proxy process. This would be an unmanaged binary — sort of like a single utility binary — that multiple CSI node plugins can use on a Windows node in order to get their privileged operations done.
So I've discussed this a little bit with you, Zhu, and Jing, and they had some feedback. One of the biggest concerns here is what we do around security, because, you know, you have this sort of privileged named pipe. There are some thoughts around potentially using a security policy to tighten this up. And on the other side, there are advantages to this approach.
The main advantage is that the node plugins can be packaged as containers, and they can be distributed in a way very similar to how Linux CSI plugins are done today. They can also continue to use the node registrar as a sidecar with this mechanism, and the overall management of the node plugins becomes very easy through standard Kubernetes mechanisms such as a DaemonSet. So there are several advantages, but security is one of the biggest concerns that I need to figure out.
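For context, this is roughly the Linux-style node-plugin packaging being referenced — the driver and image names are illustrative, not a real deployment:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node              # hypothetical driver
spec:
  selector:
    matchLabels: {app: example-csi-node}
  template:
    metadata:
      labels: {app: example-csi-node}
    spec:
      containers:
      # Sidecar that registers the driver's socket with kubelet
      - name: node-driver-registrar
        image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
        args:
        - --csi-address=/csi/csi.sock
        - --kubelet-registration-path=/var/lib/kubelet/plugins/example.csi.driver.io/csi.sock
        volumeMounts:
        - {name: plugin-dir, mountPath: /csi}
        - {name: registration-dir, mountPath: /registration}
      # The driver itself; on Linux it runs privileged to mount on the host,
      # which is exactly what Windows containers cannot do today
      - name: csi-driver
        image: example.io/example-csi-driver:v0.1.0   # hypothetical
        securityContext: {privileged: true}
        args: ["--endpoint=unix:///csi/csi.sock"]
        volumeMounts:
        - {name: plugin-dir, mountPath: /csi}
      volumes:
      - name: plugin-dir
        hostPath:
          path: /var/lib/kubelet/plugins/example.csi.driver.io
          type: DirectoryOrCreate
      - name: registration-dir
        hostPath:
          path: /var/lib/kubelet/plugins_registry
          type: Directory
```

The proxy idea keeps this packaging on Windows by moving only the privileged step out to a host-side binary.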
That's one of the kind of high-level things I was looking for. And just for completeness, I also call out that, you know, just like today the external sidecars are not necessary for a CSI plugin to be used, similarly this is not something that we will enforce on people, and obviously we cannot cover any arbitrary, advanced privileged operation that someone might need on a Windows node.
And then — so that's the initial, like, registration. Beyond the initial registration, you're saying that instead of using a privileged container to do the mounts — or the equivalent of the mounts — inside the container, you'll have the container call out to this proxy sitting on the underlying host, and somehow we're gonna secure that, and this proxy can then issue the right commands to make the volume available. Exactly.
I think this sounds very promising. I know for our initial look at this, we were just considering putting CSI drivers on Windows not containerized, which is extremely painful. So this looks pretty cool to me, but definitely the security implications here need to be thought through. A KEP would be a good next step. I am willing to add this too.
Hey guys — yes, hey, I just want to talk about the proposal; by the way, thanks for putting that together. I work for Microsoft on the container team, and we reviewed the proposal, and just wanted to give a heads-up that we are starting on a spec for privileged containers. Nice. So, you know, obviously that can help there, right? But it's not going to be an immediate-term project — I would say it's four to six months out, right?
We'll pull together a spec for this, and it'll be out in the next semi-annual channel release, as we call it for Windows. It will make its way into the long-term channel, and there might even be a possibility to backport it into RS5. Yeah.
So let's — let's chat offline. That makes sense, but we can also spin up a workstream in the open and see what the requirements are and how we can best address those. But I just wanted to give a heads-up to this community that this is something really cool.
We've been taking a very, very hard line on in-tree plugins, and we're pushing as many in-tree plugins out to CSI as possible. I understand the kind of special case here — Windows doesn't have a viable CSI story yet — so I could buy the argument for making an exception. But if we're gonna do it, let's run it by SIG Architecture and put together a case. Does anybody on this call have a strong objection to this?
CSI has a pretty decent user experience, because the user can deploy the plugin simply through kubectl and the Kubernetes API, and in-tree of course requires no extra work. So ideally, if we had a viable CSI solution, I think there would be no need for this; but if we do not have a viable CSI solution for the foreseeable future, I think I can be convinced that Flex is too cumbersome to be the go-to option.
I would say let's get a better understanding of this storage proxy, and let's get a prototype of that. If it looks like that is not a viable way forward, and privileged containers are not looking like they're coming down the line any time soon, then we can consider this the subsequent quarter. Okay.
Plugins are going to be moved first because we want to get cloud provider code out of the core as quickly as possible. Then, for the long tail of these other volume plugins that we have, I think the third-party-owned ones will be the first ones we want to move, and the last ones will be, like, iSCSI, NFS, and Fibre Channel. But that is the ultimate goal. Okay.