From YouTube: Kubernetes SIG Storage 20170413
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 13 April 2017
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.z3amevmhmuuc
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:05:26 From Saad Ali : Meeting minutes: https://docs.google.com/document/d/1tvrh7RZzwynPE5fVxln6GEPaDMYmgVgYZHvi8jyUCTo/edit
09:52:00 From IanC : Very cool. Nice demo Michelle
A
Alright, good morning. This is the bi-weekly meeting of the Storage Special Interest Group; today is April 13, 2017. As a reminder, this meeting is public and recorded. Let's kick it off. Today on the agenda: the face-to-face meeting was held earlier this week, yesterday and the day before, and it was pretty well attended. Props to Dell EMC for hosting. The notes from the meeting are in the agenda doc. Does somebody want to take the time to summarize some of the things that were discussed?
B
Yes, we had a pretty extensive talk on our snapshot design, where we pretty much came to the conclusion that we need a namespaced and a non-namespaced object. There's still a little bit of a question whether we can use a sub-object, or whether it needs to be a snapshot object and a snapshot claim/request as separate things, and we're working through that. But generally the overall design was pretty much approved, and we've left it at that for now.
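As a purely illustrative aside, the namespaced/non-namespaced split being debated mirrors the existing PVC/PV pattern. A minimal, hypothetical Go sketch of what such a pair of API objects could look like; the type and field names here are invented for illustration and are not the design discussed at the face-to-face:

```go
// Hypothetical sketch only: a cluster-scoped snapshot object paired with a
// namespaced claim/request that references it, analogous to PV and PVC.
package snapshots

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// VolumeSnapshotData would be the non-namespaced (cluster-scoped) object,
// representing the actual snapshot on the storage backend.
type VolumeSnapshotData struct {
	metav1.TypeMeta
	metav1.ObjectMeta // no namespace; cluster-scoped
	Spec struct {
		PersistentVolumeName string // source PV the snapshot was taken from
		SnapshotHandle       string // backend-specific snapshot identifier
	}
}

// VolumeSnapshot would be the namespaced object a user creates to request
// a snapshot, bound to a VolumeSnapshotData once one exists.
type VolumeSnapshot struct {
	metav1.TypeMeta
	metav1.ObjectMeta // lives in the user's namespace
	Spec struct {
		ClaimName        string // PVC to snapshot
		SnapshotDataName string // bound VolumeSnapshotData, filled in on bind
	}
}
```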
B
Then we had a pretty long talk on the CSI initiative. We went through the different API calls and gave kind of an overview of how everything worked. Then we talked about the timeline and how it would coordinate with Flex, the idea being that this was pretty much one-to-one with what the 2.0 model would be.
B
Local storage: we had a long discussion on local storage. Again, there's another great design document out there with full details if anybody wants to review it. We walked through pretty much every aspect of it; there were various concerns that were brought up, but I think it's on track and it's going in the right direction.
B
It is targeted for 1.7; okay, so not everything is in for 1.7, but we do plan to have an alpha. As part of both CSI and local storage, the question of block storage came up, and we're also looking at coming up with a design for that, probably for the 1.7 timeframe as well. We also talked a little bit about resource management and isolation, so there was some discussion about logs and the other local node resource constraints; there are more detailed notes in the doc. And then that was pretty much it. I think the big takeaways that came out of it were snapshotting and local storage. Oh, and we had some discussion around resize, and whether we could use modification of the fields in the PVC as trigger points for resizes or other imperative operations. I think we decided, sort of as a proof of concept, that it could be possible, but we didn't really have next steps.
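To make the resize idea concrete: the trigger being discussed is a user editing the PVC's requested storage, which a controller could then notice and act on. A minimal sketch, assuming current client-go packages (which postdate this meeting) and a hypothetical namespace and claim name:

```go
// Sketch: bump the storage request on an existing PVC so that a (future)
// resize controller could treat the spec change as the trigger to expand
// the underlying volume. Names ("default", "data-claim") are placeholders.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pvcs := cs.CoreV1().PersistentVolumeClaims("default")
	pvc, err := pvcs.Get(context.TODO(), "data-claim", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The imperative "resize" request is expressed declaratively: raise the
	// requested capacity on the claim and let a controller reconcile it.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("20Gi")
	if _, err := pvcs.Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("requested capacity updated; a resize controller would take it from here")
}
```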
B
After that, it was more just about the design path we want to go down, and I think we're still putting together what our next steps are. So that's it at a high level. If you guys want to look at the details, there's a great document out there with all the notes, and then there are links to all the other design documents from that.
A
I think a couple of folks that we met at the storage face-to-face were also very interested in helping out with the AWS work, but based on my discussions with Jing, it sounds like there are workarounds in place that have mostly stabilized it. There are probably larger refactors that need to be done, and they're going to help lead those. Next up: out-of-tree volume plugins, CSI. The early design draft of this was presented at the face-to-face.
A
Like Brad said, the meeting minutes capture some of the details. We're going to continue to revise and improve on this design. I think one of the big pieces of feedback was around the List call and how to handle pre-created volumes; we're going to chew on that for a bit and see if we can come up with something better.
A
So that's work in progress. Next up, local ephemeral storage capacity isolation. I think Jing has primarily been working on this. She showed the design at the face-to-face, and she also has a working prototype, so I think we're in good shape for alpha in 1.7. Local persistent storage: Michelle, are you on the line? Yeah.
D
So I think, basically, yeah, we're kind of in the same boat. We have a working prototype going, and I think the design is making progress so far. The only thing remaining is the scheduler part, which is the hardest part, so we're trying to see if we can at least come up with a design for the scheduler in 1.7. I'm not sure if we can actually get scheduler code in for 1.7, but I think the rest of the components are more straightforward, so we can definitely still get those in.
E
Okay, no real updates, but thanks for asking. Related to the wedged-kubelet work, we got a test merged last night, which is one test example of trying to make sure kubelets don't get wedged when there are problems with the NFS server they're talking to. But otherwise we're just plugging along; we've gotten a lot of good review feedback on our recent PRs.
A
There was also a question about how to get new folks more involved in the community: where do they start contributing? We're going to have a bigger discussion about that at the next Storage SIG meeting, but my quick thought is that a lot of the work we have left for testing is a really great spot to start. We have a big spreadsheet that Aaron helps maintain for tests.
A
Thanks, Jeff. So next up is cloud provider storage metrics. As far as I know, this was checked in. There are some issues with the way that this was coded; I believe Bowei on our side is taking another look at it. He's implementing metrics for networking, so he's probably going to do a refactor of this code, and he'll probably include that in his code review as well. Next, the metrics for the volume controller: I'm not sure what the status of this is. Are you on the line?
G
On replication: slow progress, because the replication work became dependent on the dynamic changes to the PVC, in order to control operations related to replication and to be able to express enable/disable of it. As far as replication proper, nothing has been done, but there is a proposal for dynamic changes to the PVCs and PVs that was discussed at the face-to-face, and I'm going to be working on that next.
B
Let me give a little context on this. We have a couple of PRs out right now, and they are around some changes and some problems with attach/detach that are fixed by this PR. I know that we've gone through a couple of iterations of test and retest as we found different issues, and some of the other problems that we found are addressed right here in this one.
A
So it looks like it's just a matter of getting the code reviewed and merged, and letting it bake. Yep, cool. Next up is stabilizing Azure support. There were a set of bugs here; I think a couple of them are linked. Somebody is owning this, but I don't think he's on the line.
A
Next up is stateful applications via Federation. We don't have an owner for this; I think it depends mostly on the Federation side. We kept an item just in case, to track things for the milestone. We may end up dropping it, or just pinging a couple of Federation folks to see if there are any plans for this milestone. Next, PV capacity usage stats.
D
Because I think, well, actually I don't know if we have... we don't have any... We do have zone spreading logic, indeed, yeah.
A
OK, so the status of this: we discussed it at the face-to-face meeting. Yan has a proposal out to try and improve the containerization of mounts; there's already kind of a hacky version of that in the existing code. His proposal looks pretty solid, and it looks like it aligns with a lot of the work that we will eventually need to do for CSI anyway, so it sounds like it would be a worthwhile pursuit, and Yan is targeting alpha for 1.7.
H
Mine? Okay, I should explain. So, in order to have containerized mounts, we need to fix mount propagation: the way things run in the VM right now, everything, every mount, is private there, so we cannot propagate anything out of containers. So there is a proposal, or a PR, to make the mounts shared during boot, and in my opinion that is quite a dangerous thing to do, so it needs some consensus. I sent email to sig-node about this, and the only person who responded was Vish.
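For readers unfamiliar with the mount propagation issue being described: by default every mount on the host is private, so a mount performed inside a mounter container never becomes visible to the host or to other containers. The proposal under discussion is to flip the relevant mount point to (recursively) shared at boot. A minimal Go sketch of that operation, equivalent to `mount --make-rshared /` on Linux; the choice of `/` is illustrative, and the actual proposal may target a narrower path:

```go
//go:build linux

// Sketch: remount a path as recursively shared so that mounts created in a
// container's mount namespace can propagate back to the host. This mirrors
// `mount --make-rshared <path>` and is shown only to illustrate the kind of
// boot-time change the proposal would require.
package main

import (
	"log"
	"syscall"
)

func makeRShared(path string) error {
	// source, fstype and data are ignored for propagation-only changes.
	return syscall.Mount("", path, "", syscall.MS_SHARED|syscall.MS_REC, "")
}

func main() {
	if err := makeRShared("/"); err != nil {
		log.Fatalf("failed to make / rshared: %v", err)
	}
	log.Println("/ is now rshared; container mounts can propagate to the host")
}
```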
B
And then I think... well, I would guess that Google would want to do a security audit on the GKE side of things, to make sure that they're confident it's secure. We can't really vet every systemd setup or every other OS; that sort of becomes each party's audit to do, whether this will work and not cause problems. I think there's individual responsibility to do the security audits of this, but we don't need to take the whole burden on ourselves.

B
We need it to work, which I don't think is necessarily a security problem in itself. We probably want to note that there is potential security surface area here that, like was said, needs to be audited on an OS-by-OS basis, and then probably GKE will want to do some kind of security audit on their end.
J
Yeah, so there's an issue open which is about how pods are reported by the pod informer and the node informer, and this PR kind of fixes it tangentially, indirectly. I think Jing has reviewed it already, and I just want to discuss a couple of things about how we want to fix this problem, because this PR is one approach. What it does is...
J
It uses the desired state of world populator, which runs every minute, to update the pods that are running now. The concern that Andy has expressed on the PR is that there are two approaches: either use this polling, or we can introduce the worker queues that other controllers use, to ensure that we don't miss events when the node is not yet known and a pod add event arrives that we would otherwise drop.
J
Currently we discard that pod event if the node doesn't exist in the actual state of world, right? The desired state of world doesn't really get consulted. So we just need to decide the approach that we want to take for fixing this, and I just wanted to see what everyone's thoughts are on that. So the bug, basically the problem, is this: if the controller doesn't know about a node, and a pod event arrives saying that a pod has been added, then that pod is discarded and the controller doesn't know anything about that pod.
A
This isn't hitting the cloud provider at all; it's internal. It's basically hitting the internal controller cache. I am not sure what the performance impact of 100k or 250k pods could be, but since this is mostly in memory, I'm not sure how horrible it would be. There are no network calls going on here. You've got a pod lister List, which will end up hitting the informer, and the informer will basically give you a humongous list of pods. I imagine you could have memory issues depending on how that's implemented; it may be worth testing.
A
That handles one version of the problem, right, but there are other reasons we could miss an event, basically, and we don't want to. The idea with these populators is that we go back and we double-check our work, right? There is this escape hatch that says: I think everything is okay, but let me go ahead and verify. Currently what the populator does is it goes out and says, for all the pods that I am tracking that no longer exist, go ahead and trigger the detach process for them. And you're implementing the flip side of that, which is: maybe there are some pods that I'm not aware of that I should process. I think the logic there makes sense; it adds a lot of robustness to the existing code. The concern about a large number of pods could be valid.
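A minimal sketch of the reconciliation pattern being described, purely illustrative and not the actual attach/detach controller code: a periodic populator that double-checks the controller's in-memory desired state against the informer's full pod list, dropping tracked pods that are gone and, the flip side discussed here, adding pods that were missed because their node wasn't known when the event arrived. The interface and type names are hypothetical:

```go
// Illustrative sketch of a "populator" that periodically reconciles an
// in-memory desired state of world against the full pod list, so that a
// missed add/delete event is eventually corrected. Not the real controller.
package populator

import "time"

// Pod is a stand-in for the API pod object (hypothetical, trimmed down).
type Pod struct {
	UID      string
	NodeName string
}

// DesiredStateOfWorld is a stand-in for the controller's in-memory cache.
type DesiredStateOfWorld interface {
	TrackedPods() map[string]Pod    // pods the controller believes exist
	AddPod(p Pod)                   // start tracking (leads to attach)
	RemovePod(uid string)           // stop tracking (leads to detach)
	NodeExists(nodeName string) bool
}

// PodLister is a stand-in for the informer-backed lister.
type PodLister interface {
	ListAll() []Pod
}

func RunPopulator(dsw DesiredStateOfWorld, pods PodLister, period time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			actual := map[string]Pod{}
			for _, p := range pods.ListAll() {
				actual[p.UID] = p
			}
			// Existing behaviour: forget pods that no longer exist,
			// which triggers detach of their volumes.
			for uid := range dsw.TrackedPods() {
				if _, ok := actual[uid]; !ok {
					dsw.RemovePod(uid)
				}
			}
			// The "flip side": pick up pods whose add event was dropped
			// (e.g. their node wasn't known yet), so they get attached.
			for uid, p := range actual {
				if _, tracked := dsw.TrackedPods()[uid]; !tracked && dsw.NodeExists(p.NodeName) {
					dsw.AddPod(p)
				}
			}
		}
	}
}
```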
A
All right, nobody put anything down for design reviews this week, so we're going to skip that. Our last item on the agenda today is a demo by Michelle about persistent local storage. Michelle, take it away.
D
Great, so here I basically have a cluster of five nodes, as you can see. I pre-deployed them with some local SSDs, and on the GCE environment they all automatically get partitioned and formatted, so they're already all set up and ready to use.
D
If you see right now, I have... I have none. Oh, I do, oops, that was a mistake. Okay, I've already messed them up. Let me undo that.
D
All the provisioner daemon set pods, if we take a look, they're all running, and they created persistent volumes for all the SSDs in the system. The capacity currently is just hard-coded, so ignore that value, but basically I have two SSDs on every node, and you can see it has created two persistent volumes per node, one for each of the two SSDs there. Okay, great, I have persistent volumes; now I can have an application use them. So I have this stateful set application here.
D
This is just a persistent volume claim. I reference the storage class, and I reference how much storage I want. This is not the API that's in the spec; this was one of the older API proposals, so this will actually be unnecessary in the actual implementation. All you have to do is specify the correct storage class, and from the storage class we can figure out whether it's a local volume or not.
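For context, the claim being described boils down to a normal PVC that names a storage class and a capacity. A minimal sketch using current client-go types, which postdate this meeting; the class name "local-fast", the object names, and the size are placeholders:

```go
// Sketch: a PVC that selects local volumes purely via its storage class,
// as described in the demo. Object and class names are placeholders.
package claims

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localClaim builds a claim that only names a storage class and a size;
// which local PV it lands on is left to the binder/scheduler.
func localClaim() *v1.PersistentVolumeClaim {
	className := "local-fast" // hypothetical class backed by local PVs
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data-claim", Namespace: "default"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		},
	}
	pvc.Spec.Resources.Requests = v1.ResourceList{
		v1.ResourceStorage: resource.MustParse("100Gi"),
	}
	return pvc
}
```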
D
Alright, it's already created two of the three replicas in the stateful set; I have three replicas specified. Let's look at the PVCs that got created. There are three PVCs that got created, one for each replica of the stateful set, and it looks like they got bound to some of the PVs. We can take a look at all the PVs here; we'll see three out of the ten are now bound to these PVCs. All right.
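The per-replica PVCs come from the stateful set's volume claim templates: each replica gets its own claim stamped out from the template. A minimal sketch with current client-go types (apps/v1 also postdates this meeting); the names, image, and class "local-fast" are placeholders matching the claim sketch above:

```go
// Sketch: a StatefulSet whose volumeClaimTemplates cause one PVC per
// replica to be created, which is what the demo shows binding to local PVs.
package claims

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func demoStatefulSet() *appsv1.StatefulSet {
	replicas := int32(3)
	labels := map[string]string{"app": "local-demo"} // placeholder labels
	className := "local-fast"                        // same hypothetical class as above

	// The claim template; each replica gets its own PVC stamped from it,
	// named data-local-demo-0, data-local-demo-1, data-local-demo-2.
	claim := v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		},
	}
	claim.Spec.Resources.Requests = v1.ResourceList{
		v1.ResourceStorage: resource.MustParse("100Gi"),
	}

	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "local-demo", Namespace: "default"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "local-demo",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:         "writer",
						Image:        "busybox", // stand-in for the reader/writer app
						VolumeMounts: []v1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
			VolumeClaimTemplates: []v1.PersistentVolumeClaim{claim},
		},
	}
}
```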
D
If we come here, we'll see the content. This is the local disk, and it's logging. So we see that the stateful set container is logging every 10 seconds; it's logging its name, which it writes to the disk, and then this count is basically how many times that pod has been restarted. I just brought it up, so it's at one. So let's actually just kill the pod.
D
It's not the scheduler code that we're going to take in the end, but it was one of the iterations we were making of the scheduler design. So it's creating... all right, and we saw that it also got assigned, or scheduled, to the same node again. If we look at our reader pod — that's my reader pod that's waiting — the contents here, we'll see the count got increased to two, because I killed it and it came back.
D
That's funny; let's look at the logs. So here now we see we're attached to the local test volume, and here we see the one whose count carries on, because I haven't killed it. So that's just a sample reader/writer application that I wrote. I can demonstrate deleting the PVCs now, which actually might take a while.
D
You can actually see the scheduler changes, if you really want to; we're probably going to wipe all of it. The scheduler changes are all contained in a local PV predicate under the predicates folder; you can look at that. It's not really that interesting; it's basically just the PV controller logic copied over. Okay, the stateful set is deleted.
D
So I'm going to delete this, and then, if you look at the PVs, you'll see they've been released now. This is where the provisioner daemon set, which is watching for this released state, comes in: once we see that, we'll go and clean up the volume, delete it, and then add it back. So it looks like it's there, ready for use again. All right, so that is all I have to demo. Any questions?
I
Conceivably, if you have a large cluster with a bunch of disks, you can have a lot of those lying around. So is the end product that the admin will be responsible for keeping track of all these daemon sets, or will there be some way to aggregate? I mean, how would you be able to keep track of the remaining volumes in a reasonable manner if you have a lot?
D
So I mean, if you want to specifically assign certain PVs to some specific pods, you can modify the daemon set, when it's creating the PVs, to add some label to them, depending on whatever your policies are.
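To illustrate the labeling approach being suggested: the provisioner could stamp labels onto the PVs it creates, and a claim can then use a label selector to restrict which PVs it is allowed to bind to. A minimal sketch with current client-go types; the label key and value, the class name, and the hostPath-style source are placeholders, not the actual prototype's objects:

```go
// Sketch: a labeled PV (as a modified provisioner daemon set might create
// it) and a PVC whose selector restricts binding to PVs with that label.
package claims

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func labeledLocalPV() *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{
			Name: "local-pv-node1-ssd0",
			// Label added by the (modified) provisioner daemon set.
			Labels: map[string]string{"disk-tier": "nvme"},
		},
		Spec: v1.PersistentVolumeSpec{
			StorageClassName: "local-fast",
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("375Gi"),
			},
			// Placeholder source; the real local-volume API differs.
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/mnt/disks/ssd0"},
			},
		},
	}
}

func claimForLabeledPV() *v1.PersistentVolumeClaim {
	className := "local-fast"
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "nvme-claim", Namespace: "default"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			// Only PVs labeled disk-tier=nvme are candidates for binding.
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"disk-tier": "nvme"},
			},
		},
	}
	pvc.Spec.Resources.Requests = v1.ResourceList{
		v1.ResourceStorage: resource.MustParse("100Gi"),
	}
	return pvc
}
```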