From YouTube: Kubernetes SIG Storage 20190628
Description
Kubernetes Storage Special Interest Group (SIG) Meeting - 28 June 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.kojc2u4b0czl
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
None
A
All right, today is June 28, 2019. This is the meeting of the Kubernetes Storage Special Interest Group. Normally this meeting would have been held next week, on Thursday, July 4th, but that is a U.S. holiday, so we decided to pull this meeting in by a few days to make sure that we have it. Last meeting we did planning for 1.16, the next release of Kubernetes. We didn't get through everything, so I wanted to make sure that we got a chance to do so instead of skipping the meeting.
A
So the main item on the agenda today is 1.16 planning. We're gonna review the remaining items that have been added at the bottom of the spreadsheet, and then we'll go back and look at the items that we've agreed on. And then it looks like there is already a PR John has that he wants to talk about. If you have any PRs, designs, or anything else that you'd like to discuss, please feel free to add items to the agenda and we'll talk about them after the planning. So let's go ahead and jump into the planning session.
A
B
Is required to make that happen. So I think the only thing that we need is to get a few more plugins to implement the cloning feature and get some mileage on it. The actual changes are pretty minimal in terms of the Kubernetes API, so it shouldn't be a big deal. It's just a matter of getting some mileage on it.
B
A
A
It was the opposite: you were holding up cloning until that was resolved, but we decided we don't need to do that. The generic data populators topic is a longer, more controversial one, and we're gonna move that more slowly. Cloning, in the meantime, doesn't really need to be held up because of it. We continue to add individual kinds of data sources, but when we open it up to generic ones, that becomes a lot more interesting. So.
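For context, the cloning feature being discussed is driven entirely through the existing PVC API: a new claim names an existing claim as its dataSource. A minimal sketch, with hypothetical claim and storage class names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc                # hypothetical name for the new clone
spec:
  storageClassName: csi-sc        # must match the source PVC's storage class
  dataSource:
    kind: PersistentVolumeClaim   # clone an existing claim in the same namespace
    name: source-pvc              # hypothetical source claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi               # must be at least the source claim's size
```

This is why the API change is described above as minimal: the addition is accepting PersistentVolumeClaim as a dataSource kind, while the actual copy is left to the CSI plugin.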
A
D
A
D
B
A
A
A
Is anybody on the call interested in helping with this? This is the CSIDriver API object, which is responsible for helping drivers customize their behavior. So when a new CSI driver is created, it can say, "hey, I don't support volume attachment, just skip volume attach when you call me," things like that. If you're interested in helping drive that feature to GA, this would be a good opportunity to get your hands dirty, and this is core Kubernetes API, so you get to touch that, which is becoming more and more rare.
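To make the "skip attach" example concrete, here is a minimal sketch of a CSIDriver object as it looked in the beta API of that era; the driver name is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1beta1  # beta API at the time of this meeting
kind: CSIDriver
metadata:
  name: example.csi.vendor.com      # hypothetical driver name
spec:
  attachRequired: false             # tells Kubernetes to skip the attach/detach step
  podInfoOnMount: true              # pass pod metadata to the driver on mount
```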
A
D
A
F
So we should dice this feature up a little bit more. On the validation for the in-tree drivers: for the volume types, at least the cloud provider types, we've had one bug around it, the iSCSI one that we've been discussing. So we're pretty confident with the cloud provider types going, you know, going GA. I think the iSCSI stuff...
F
C
A
F
Proposal is that we dice out the iSCSI piece of the raw-block-to-GA work for the in-tree stuff, in the in-tree drivers, because that's the only one that we know there are issues with, and that's in some of the iSCSI-specific code. The rest of the in-tree raw block storage has no issues so far.
F
We haven't landed on issues, and we've looked at the code enough, you know, and tested it. I would say not in production, but, you know, just by hand, enough that we feel confident in it. The iSCSI stuff we've tested and found bugs, so... we haven't found those bugs in the others. I would...
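For reference, the raw block feature under discussion is exposed through volumeMode on the claim and volumeDevices on the pod. A minimal sketch, with hypothetical names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc             # hypothetical name
spec:
  volumeMode: Block               # request the raw device, no filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.31
      command: ["sleep", "3600"]
      volumeDevices:              # devices, not volumeMounts, for Block mode
        - name: data
          devicePath: /dev/xvda   # where the device node appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```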
A
F
A
I think it also avoids confusion in terms of the feature. I think one of the problems we have is that we start breaking features into smaller components, and then, when we officially launch, people are like, wait, what's GA and what's not GA? So moving forward, I think what I would like to see happen is that features, especially things like resize, where you have online, offline, CSI, and in-tree, move all together as one piece of functionality, from alpha to beta to GA, so we don't have to guess or try to explain it to users.
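As background on the resize feature mentioned here: expansion is opt-in per storage class, and a resize is requested by editing the claim's storage request. A minimal sketch, with hypothetical class and provisioner names:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc             # hypothetical class name
provisioner: example.csi.vendor.com
allowVolumeExpansion: true        # required before PVCs of this class can grow
```

With that set, increasing spec.resources.requests.storage on an existing PVC requests the expansion; whether it happens online (while a pod is using the volume) or offline is the split mentioned above.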
B
A
Yeah, I mean, the argument, I think, is that if you get it to GA, you get more testing on it. But I think that's kind of the point of beta: it's on by default, which means folks get the opportunity to play around with it, but we are not, you know, offering a guarantee to say, yeah, we are certain that this is gonna be working at production level. Which is true, because we haven't gotten the mileage on it. That's...
F
A
A
I think I'm okay, like, if we want to put out some sort of statement to that end, to say: hey, here are the known problems with block, here's what works and here's what doesn't. You know, we can put that on the SIG Storage mailing list, or we could put it somewhere more official. But I don't think moving the feature to GA, or moving part of it to GA, is the way to address that.
F
C
C
A
No, and the reason I think it's worth being cautious on this one is that it is very, very core functionality. This is how you access your data, and we don't want to screw that up. Other features are kind of, if they don't work, they don't prevent you from running your workload, right? In this case, if it doesn't work, you know, we're gonna say, hey, this thing is GA, and then people find issues with it, their workloads don't work, and that's a pretty bad place to be. That's...
F
A
Okay, are we okay removing this item, then?
A
E
A
A
One was saying that if a stateful set is deleted, the PVCs that it created should have some option to also be deleted automatically, rather than requiring human intervention. The second part was asking for deleting PVCs when scale-in happens. So if you scale down your stateful set today, the extra PVC sticks around. The argument for having that is that if you do a scale-up, the scale-up is very quick, because you don't have to create a new volume. I am less supportive of that.
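For context on the behavior being debated: a stateful set's PVCs come from its volumeClaimTemplates, one claim per replica, and today neither deleting nor scaling down the set removes them. A minimal sketch, with a hypothetical workload name and size:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                          # hypothetical workload name
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:              # one PVC per replica: data-web-0, data-web-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Scaling replicas from 3 down to 1 leaves data-web-1 and data-web-2 in place, which is exactly the retained-PVC behavior described above.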
A
I think we need to be very cautious when we delete data, and in this case there is a legitimate use case for keeping it around, so that second use case I'm not so certain about. Regardless, overall, StatefulSet is owned by SIG Apps. While I think we will need to sign off on the final PR, it should be something that is driven by SIG Apps. So I'm okay adding it here, as long as, whoever the owner is, it goes to SIG Apps and gets the item tracked and approved there.
A
No, so, isn't it that when you scale down your stateful set, presumably the PVC sticks around? The PVC is independent of whatever that pod was, and then, when you scale back up, that PVC should still exist. Is that not the right assumption? I'm trying to remember whether it stays bound to the persistent volume.
A
I
A
I
A
Yeah, we can take it offline, but I believe the process is: if there are no pods consuming a particular PVC, it's going to detach the volume from any given nodes. Then it's just kind of floating around, waiting to be used, and then, when you scale back up, it'll find a node to place the new pod on and then reattach that existing volume.
A
A
A
A
A
A
J
J
A
A
D
D
B
D
J
A
Um, the last uncommitted item we have is the CSIDriver API, moving that to GA. I think this is the highest-priority uncommitted item; if anybody's interested in working on anything, this is probably the one to jump on. And then, of course, any of the other ones: if you're interested, please let us know. All right, moving on to PRs that need attention. First up is from John.
B
B
K
I have an issue right now: I opened an issue in the CSI spec repo. I don't know whether right now is the proper place to discuss it. (Sure, go for it.) The problem I'm seeing is that normally, for example, the calls for create volume or snapshots have a secrets argument, so that you can pass the right credentials to the driver. But the problem is that the read operations, like list, don't have any credentials argument. So I think it poses some problem for some driver implementations, for example, CSS.
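For reference, this is how credentials reach the mutating CSI calls today: secret references in the StorageClass parameters, which the external-provisioner resolves and passes on CreateVolume and DeleteVolume. A minimal sketch, with hypothetical class, provisioner, and secret names; the point raised above is that read-only calls such as ListVolumes have no equivalent:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-with-secrets            # hypothetical class name
provisioner: example.csi.vendor.com # hypothetical driver
parameters:
  # Resolved by the external-provisioner and passed as the secrets
  # field of CreateVolume/DeleteVolume requests.
  csi.storage.k8s.io/provisioner-secret-name: vendor-creds
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
```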
A
I think that was the early intention. We wanted to start off with the core calls that we thought absolutely required it; we weren't sure if there were other calls that might require it. We can certainly look into these other calls. I think that seems like a legitimate ask. The one big ask that we tend to get, that gets a lot of pushback, is, I believe, the unmount, which is NodeUnpublishVolume.
D
B
Alright, anything else? Yeah, one other thing: I sent out an email to the group last week regarding the data source and populator and things like that. I haven't seen any response on that, but I'd like to see how folks would like to proceed with that in terms of discussions and things like that, whether we should have a, you know, a supplemental meeting specifically for that topic.
A
I think it might be worth setting up a one-off call just to talk through the design and what the options are. Okay, I think that tends to be the highest-bandwidth way to get these kinds of designs moving. Going back and forth over a mailing list or PRs, I've found, tends to take a lot longer. Yeah.
A
B
A
Storage vendors to deploy those, and that kind of thing. I think that was one big item, but I think designs like the data populator, and maybe some of these other things that are under design, like volume features and so on, will be meaty enough that they could benefit from having a face-to-face meeting. And if nothing else, at least just for the sake of community, it's good to have these once in a while as well.