From YouTube: Kubernetes SIG Storage 20170803
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 03 August 2017
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.rpli7d6261d
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
N/A
A: All right, good morning, everyone. Today is August 3. This is the bi-weekly meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. We're going to switch over to the agenda; if you have anything to discuss, please add it to the agenda and we can discuss it during the meeting. The first item on the agenda today is going over status updates for the features that are planned for this quarter.

I'm just going to go over the items very quickly, and if you have a status update, please shout out. CSI design is currently in progress. As a reminder to everyone, feature freeze was this past Tuesday. That means that every feature that is supposed to be part of 1.8 should already have a feature bug open in the features repository. If there's a new feature that did not make this deadline, it's going to be difficult to get it into 1.8. It looks like all the features that we were tracking that needed a feature bug have one.
C: Yep, so I have the design proposal out there. It's had several comments, but I didn't have the other things marked properly. I'm waiting to hear inputs from the node and API reviewers, among others, and once that's approved we're going to break it out amongst the three of us and get started on it.
A: The hope is also to have a prototype next week, everything as you see it, just using the volume as a raw device. So that's where we stand. Cool.
F: Fibre channel, right: the fibre channel change is merged. CephFS passed, and for RBD we have to reconsider whether it requires a dependency on the RBD utility on the controllers, so we'll probably just put that on the shelf for a while until we have a better resolution. But otherwise we have fibre channel ready, and that's coming next. Okay, what else? Maybe I'll ask you for review, but okay.
I: So yeah, we just synced up earlier this week and discussed a few items related to the design, like the API, and I also have a controller proposal, because the current prototype might not cover all the cases; we plan to discuss that and hopefully make changes on the controller side. And then we also have a few plugins, like AWS and Cinder plugins, so we can start testing them on real workloads.
A: Cool. Next up, exposing storage class parameters to end users. Yan organized a meeting about this last week to try to see where we are, whether everyone agrees on the design, and that kind of thing. What we had consensus on was that everyone kind of agreed this would be a cool feature to have, but we couldn't get anybody to pick it up and volunteer to drive it to a design, or drive the design, at least not for this quarter.
F: On that one, a new API object has been proposed, because of the background that when we converted previous versions we ran into trouble keeping things compatible. The proposal is to make the object consistent across the different versions: to introduce a new secret reference object that would give us the same API compatibility. That idea has to go to API review, and we haven't got a confirmation yet. I think it's a good idea, but the change required is big.
A: Okay, so the current status is work in progress. Is any of the PR out, or is it just a design so far? [inaudible]
A: It would be worthwhile, I was thinking, to maybe set up a separate meeting, maybe early next week, to just go over this design. I think that's helped a lot of other designs move forward: set aside an hour, invite everybody from the storage SIG, discuss the design, and see where it goes from there.
M: Sure, yeah, if we have some time we can definitely do that.

A: Okay, so next up are the Rook and related volume plugins. Both of these are targeting flex volumes now, which means that they're out of tree and don't really need to be tracked as part of this milestone. I spoke with the owner offline and he's okay with closing the feature bug for this milestone. For Rook, speaking with Bassam, we are going to leave it as is.
A: Next, all the rest of these items are lower priority. Preventing the deletion of a PVC that is referenced by an active pod: this was a bug that we wanted to fix, or not necessarily a bug; it was behavior that we wanted to change, and I think someone from Salesforce started taking a look at this design. There were three or four potential ways to implement this, and there was some debate about the best way to move it forward.
G: I'm on here, and I've continued to think about this. There were a few references added after we discussed the different designs. It looks like the finalizer approach is the appropriate design, based on, you know, the community and the different SIGs involved. But how we get that in, how the finalizer learns about the status of a particular PVC and its PV, I think that's still an open item.
G: So there are some parts of the design that I think we have agreed upon and parts that we haven't, and there were some open items that we were going to resolve, but it looks like those were closed; like, Matt had one open about giving read/write access to PV node affinity. So I think it's right to just keep this in the backlog for the time being, and I'm keeping my eye out for areas we could leverage while looking at the finalizer approach.
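(Since the finalizer approach keeps coming up, here is a minimal sketch of the idea under discussion, assuming a hypothetical finalizer name and helper functions; it is an illustration, not the agreed design:)

```go
// Sketch of the finalizer approach: a controller keeps a finalizer on each
// PVC and only removes it once no active pod references the claim, so a
// deleted PVC lingers (deletionTimestamp set) instead of disappearing out
// from under a running pod. The finalizer name here is hypothetical.
package sketch

import v1 "k8s.io/api/core/v1"

const pvcFinalizer = "example.k8s.io/pvc-in-use" // hypothetical name

// removeFinalizerIfUnused strips the finalizer once no pod mounts the
// claim, letting deletion complete; the caller would then Update() the PVC.
func removeFinalizerIfUnused(pvc *v1.PersistentVolumeClaim, pods []v1.Pod) {
	if pvc.DeletionTimestamp == nil {
		return // not being deleted, nothing to do
	}
	for i := range pods {
		if podUsesPVC(&pods[i], pvc.Name) {
			return // still in use, keep blocking deletion
		}
	}
	kept := pvc.Finalizers[:0]
	for _, f := range pvc.Finalizers {
		if f != pvcFinalizer {
			kept = append(kept, f)
		}
	}
	pvc.Finalizers = kept
}

// podUsesPVC reports whether the pod mounts the named claim.
func podUsesPVC(pod *v1.Pod, claimName string) bool {
	for _, vol := range pod.Spec.Volumes {
		if src := vol.PersistentVolumeClaim; src != nil && src.ClaimName == claimName {
			return true
		}
	}
	return false
}
```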
I: So basically, right now a lot of users complain that they don't have a good way of monitoring how much storage is used for their PVs or PVCs. So we are trying to expose the storage metrics to users in different ways: some users, for example, scrape the kubelet metrics endpoint, and some users use Prometheus to retrieve their monitoring data. So we want to expose these kinds of volume status metrics there as well, also to Prometheus; I don't know how to pronounce it correctly.

So the problem right now is that we collect the volume usage information through the kubelet Summary API, but in the Summary API we don't have any information related to the PV or PVC. So the plan is to add PV/PVC reference information into the volume stats. Currently, if you go through the Summary API, you can see the pod stats, and inside the pod stats you can see the volume stats that show the usage of each volume, but they only contain a name, what we call the outer volume spec name; that is the name the user specified in the pod spec. So we want to change the volume manager data structure to be able to capture the PV/PVC information and put it in the volume stats, and from the Summary API we can expose those metrics to other endpoints.
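(A rough sketch of the shape being proposed, as described above. The type and field names are illustrative assumptions, not the final API:)

```go
// Sketch of the proposal: today the kubelet Summary API's per-pod volume
// stats carry only the name from the pod spec (the "outer volume spec
// name"). The idea is to additionally tag each entry with a reference to
// the backing claim so monitoring systems can join usage back to PVCs.
// All names below are illustrative assumptions.
package sketch

// FsStats mirrors the filesystem usage numbers the Summary API reports.
type FsStats struct {
	CapacityBytes  uint64
	UsedBytes      uint64
	AvailableBytes uint64
}

// PVCReference identifies the claim backing a volume.
type PVCReference struct {
	Name      string
	Namespace string
}

// VolumeStats is one entry in a pod's volume list in the Summary API.
type VolumeStats struct {
	FsStats
	// Name is the outer volume spec name from the pod, as today.
	Name string
	// PVCRef is the proposed addition: nil for volumes not backed by a
	// claim, populated by the volume manager otherwise.
	PVCRef *PVCReference
}
```

On the query side, the Summary API can be read per node, for example with `kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"`; under this proposal, PVC references would appear in that output alongside the usage numbers.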
A: I think the big idea makes sense. I'd like to look at this in detail to better understand exactly what the proposal is and whether any alternatives are being considered, particularly around the fact that a change is required to the volume manager's in-memory state. I wonder if there's any way around that, if we could reference the information directly from the API server, maybe. Other than that, I don't think there's anything else; those are the two major concerns. All right.
C: So what exactly are the stats? Is it just the name, or is there usage? I mean, what does this give you beyond what's already bubbled up, like in a describe, for instance?
A: Here's a question: do we really need to expose the PVC as a first-class item in this metrics API, considering that the kubelet is responsible for generating these metrics, and if a volume is not currently mounted you're basically not going to get any updates for it? So if that's the case, if the updates only happen at the pod level for individual volumes inside a pod, would that not be sufficient?
I: I'm thinking about that. If no pod uses the volume currently, we don't have a good way of keeping track of the usage, so with what's currently available, collecting that information from the pod is the easiest way. Otherwise we'd probably have to have another controller watching the PVC or the PV and then use a different way to collect that data. Okay.
A: Right. So, given that the collection mechanism is basically going to be very closely tied to the pod (the volume is being referenced, the pod is scheduled, and then information is going to be collected about the volume), do we really need to do what's being proposed in this proposal? Meaning, do we really need to expose PVCs in the metrics API? Why not just leave it as is and add information about the volumes back inside the pod, or the outer volume name, sorry.
E: So the proposal is: what you're saying is already being done; we expose pod metrics, and in the pod metrics we also expose, as part of that, the volume metrics for everything that's attached. The proposal is to additionally tag those volume metrics with the PVC information, so that on the other side, when we query, we have enough information; we can go from a PVC and then figure out which volumes it maps to.
A: I think that would be sufficient. At least we'd have some information being surfaced to the end user. They'd have to do a little bit more work to tie it back together, exactly which PVC maybe, but for this quarter we can say we're surfacing the information, and in a subsequent quarter we can go ahead and tackle the problem of figuring out how to give a good reference back.
I: The usage is showing in there, but there's no information related to PV or PVC at all. It does come through the Summary API, but it's just the outer volume spec name there, and if a user wants to monitor the usage of a list of PVs or PVCs, so they can have some event trigger when it's out of disk or close to capacity, right now they don't have a way; even with the Summary API you cannot get that information.
I: Possibly. And also, most users don't use kubectl that way; they directly retrieve data from the metrics endpoints, like a Prometheus endpoint. So it's not kubectl, the interactive way for users to check the status, but something for their monitoring agent to act on. But in either use case, we have to add this PVC information into the volume manager so we are able to carry it.
A: If that's the case, then I'd like to say let's start with that, and if that's not sufficient, then we can add this in. But I think, based on what you guys are saying (and I haven't been looking very closely at this design), it may be the case that that's not sufficient. So why don't you guys hold a longer meeting for this; you could do an in-depth design review and we can take it from there, without me. Okay?

J: Yes.
A: Okay. I think whoever reviews it from this SIG is probably going to do little more than a sanity check and just sign off on it. Ultimately, it is going to be your responsibility to ensure that this code works and that it doesn't break anything. This is part of the whole "let's get volume plugins out of tree" idea, so that the SIG is not responsible for reviewing them; even today it's exactly like Aaron said, and what Brad was alluding to.
A: We can review it in terms of just looking at the code and making sure things kind of make sense, but ultimately, whether it works or not, we don't have the resources to test and verify. So we're going to have to leave that to you and your buddy Jeff. Cool, okay, so one of us will take a look at it; I'll take a note to follow up on it.
A: You want to backport this to 1.7? Yes; so, backporting features to previous releases has a relatively high bar in terms of what is allowed in or not. Technically, you're only allowed to backport bug fixes. This, considering it is a major refactor, would be harder to justify, unless you can give a list of bugs that it's fixing and show that you can't backport those bug fixes alone, if that makes any sense.
A: I mean, that's kind of the problem here, right: it's checked in-tree, so the policies that apply to in-tree code apply to this code. And arguably, yes, this code only touches vSphere, and if it breaks vSphere it's only going to break vSphere, and we're okay with that. But if you really strongly feel that there are bugs being fixed here and it's absolutely required, you're the owner of this code and you can push for that, and then it will be up to the branch manager to accept it.
N: Yes, so the PR supports the WWID as a new way to identify the volume, the same as identifying it with WWNs. I already proposed the code and it has gone through review, so now I'm awaiting the approval. Can anyone from the SIG with approval rights approve this PR?
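(To make the distinction concrete, a minimal sketch of a fibre channel volume identified by target WWNs plus LUN versus by WWID; the identifier strings are placeholders:)

```go
// Sketch: two ways to identify a fibre channel volume in a pod spec,
// traditional target WWNs plus LUN versus the WWID-based identification
// discussed above. Identifier values are placeholders, not real devices.
package sketch

import v1 "k8s.io/api/core/v1"

func fcVolumes() (byWWN, byWWID v1.Volume) {
	lun := int32(0)
	// Traditional: target the volume by world wide name(s) plus LUN.
	byWWN = v1.Volume{
		Name: "fc-by-wwn",
		VolumeSource: v1.VolumeSource{
			FC: &v1.FCVolumeSource{
				TargetWWNs: []string{"500a0982991b8dc5"}, // placeholder
				Lun:        &lun,
				FSType:     "ext4",
			},
		},
	}
	// New option: identify the volume by its WWID alone.
	byWWID = v1.Volume{
		Name: "fc-by-wwid",
		VolumeSource: v1.VolumeSource{
			FC: &v1.FCVolumeSource{
				WWIDs:  []string{"3600508b400105e210000900000490000"}, // placeholder
				FSType: "ext4",
			},
		},
	}
	return byWWN, byWWID
}
```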
F: Sounds good, all right. [inaudible]
M: So this is about handling overlay and scratch requests with regard to resource quota. The problem here is that there is some complexity that comes from the fact that we can have different setups for a node: either we have one partition or two partitions. If there's only one partition, then everything requested by the user, whether it's container overlay or emptyDir, is charged against the storage scratch space.

If there are two partitions, then the container overlay comes from storage overlay and emptyDir comes from storage scratch. So these are kind of the two ways we could go, but they are totally different user experiences for handling storage quota. So we want to make a decision on how to handle this. I think there is an issue on there; if you scroll down, there is an issue from Jing.
I: Yeah, I didn't introduce myself. So yes, this proposal has two options. One is to just have one quota type for local storage; the other is to support two different quotas for local storage, one mainly for emptyDir, that is, scratch, and the other for overlay. Both have their pros and cons. So we want to get some feedback, and we will discuss which to go with. So if anyone here might use this feature, let us know which would be the better way before we start setting it up.
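(A sketch of how the two options might surface as container resource requests; the resource names follow the alpha-era local storage names as I understand the proposal, and should be treated as illustrative:)

```go
// Sketch of the two local storage quota shapes under discussion, expressed
// as container resource requests. Resource names are illustrative of the
// alpha-era proposal, not a final API.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Option 1: one quota type; container overlay and emptyDir scratch are
// both charged against a single local storage resource.
func singleQuota() v1.ResourceList {
	return v1.ResourceList{
		"storage.kubernetes.io/scratch": resource.MustParse("2Gi"),
	}
}

// Option 2: two quota types, splitting container overlay writes from
// emptyDir scratch space, matching nodes that have two partitions.
func splitQuota() v1.ResourceList {
	return v1.ResourceList{
		"storage.kubernetes.io/overlay": resource.MustParse("1Gi"),
		"storage.kubernetes.io/scratch": resource.MustParse("1Gi"),
	}
}
```

The trade-off mirrors the partition layouts described above: one resource is simpler for users, while two resources let the system account separately for nodes where overlay and scratch live on different partitions.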
A: I think, again, on an issue like this, where you have multiple options and you're not sure which one to proceed with: instead of throwing an email out and hoping that people will take a look at it and give some response, just set up an hour-long meeting with the storage SIG, and when folks show up, introduce them to what the problem is and what alternatives you're considering, and hopefully that will give people the context to provide input, and then you can go from there. Would that be okay?

I: Yes. So after today I'll see whether we need to set up a meeting, maybe as early as next week, I think.
H: So item 33 talks about the improvements on flex volume in order to support external plugins like Rook. One thing that Rook does differently is that it creates custom resources to talk to the actual Rook controller back end. Now, if we do this with flex volume, we need a consistent way (I couldn't find a consistent way) to generate a Kubernetes client in order to create those CRD instances, the CRs.
J: You could have the flex volume make a call to your server, and your server actually kind of proxies to the Kubernetes API server. So all the calls will actually route to your server, and your server actually needs to talk to the Kubernetes API service; then your service can actually run as a pod, and you will have access to the service accounts. One thing is, even if you slap something in to hack it, even though the IP is routable, the secret can be recycled. So in that case, again, it looks like, yeah.
O: There's the original PR, which has a bunch of context on this that people might like to read. But the idea is that with the flex volume, or an in-tree volume taking the same approach, they would both create an instance of a CRD, and that's how our API, our operator, picks it up.
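(A minimal sketch of that flow: a volume plugin records its request as a custom resource instance that the out-of-tree operator watches. The group, kind, and spec fields are hypothetical:)

```go
// Sketch: instead of the flex volume driver calling the back end directly,
// it creates a custom resource instance; the out-of-tree operator watches
// for these and drives the actual storage operations. The GVR, kind, and
// spec fields below are hypothetical.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

var attachGVR = schema.GroupVersionResource{
	Group:    "example.rook.io", // hypothetical group
	Version:  "v1alpha1",
	Resource: "volumeattachments", // hypothetical resource
}

// requestAttach records an attach request as a CR for the operator to act on.
func requestAttach(ctx context.Context, c dynamic.Interface, ns, volume, node string) error {
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "example.rook.io/v1alpha1",
		"kind":       "VolumeAttachment", // hypothetical kind
		"metadata":   map[string]interface{}{"generateName": "attach-"},
		"spec": map[string]interface{}{
			"volume": volume,
			"node":   node,
		},
	}}
	_, err := c.Resource(attachGVR).Namespace(ns).Create(ctx, cr, metav1.CreateOptions{})
	return err
}
```

The sticking point raised on the call is exactly how the flex volume binary, running outside the normal pod machinery, consistently obtains credentials and the API server endpoint to build such a client.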
A: I think the problem here always comes down to discovery. The approach you guys are trying to take is to make it as simple as possible and do the discovery automatically, and when you start going out of tree you're going to need to discover something, whether that's an intermediate proxy pod like what Chakri suggested (if you do that, you still need to be able to discover the service that that pod exposes) versus discovering the API server directly. I'm not sure what a good answer for that is yet, yeah.
C: I think it's at a point where it just needs the PR review; I don't think we need to review it on this call, given we had those extra calls outside of this meeting. I would appreciate feedback; it's PR number 805, if people want to take the time. But otherwise, if I don't get any more feedback on it, I'm going to move forward with approval and implementation. Okay.