From YouTube: Kubernetes SIG Storage 20180607
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 07 June 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.a4js2dkhhapg
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
N/A
A: All right, this is the meeting of the Kubernetes Storage Special Interest Group; today is June 7, 2018. As a reminder, this meeting is public and recorded and published on YouTube. The agenda today looks pretty light. We're going to do a status review of all the items that this SIG has been working on for this quarter, and then, if there are any PRs or design reviews that you have, that you'd like the SIG to take a look at.
A: So what we're going to do is go over the task list, get status updates, and see what the end-of-release status for these items is: what made it in and what did not. The first thing on the status list is: make volume dynamic provisioning topology aware. Michelle, can you give an update on this?
B: Yeah. So we got the scheduler part in. The part that's missing is a volume plugin that actually takes advantage of the scheduler changes, so I think we'll be concentrating on the volume plugins in 1.12. There were also a few design items that we need to continue working on for 1.12 as well.
F: Yes. We presented the restore flow at the SIG face-to-face meeting, but we're still trying to revise it and extend it to a more generic API, covering different kinds of volume operations. I'm working on the design, and we will send the design proposal out so people can review it. If needed, I can also present it at the SIG Storage meeting in more detail next time.
A: Sounds good. I'll go ahead and change this to design for this quarter, and then we'll say implementation next quarter. Basically, this quarter what we're doing is seeking approval, and that is currently still ongoing. Hopefully, in the next couple of weeks we'll have a thumbs-up from SIG Architecture to do this work and move it in-tree.
D: Basically, I also have a design spec out in the community design proposal repo. There are a few questions that Jing and I would like to discuss with the group, maybe after the project status tracking. I also have a patch out for the in-tree snapshot API and the in-tree controller part, and would like people to review them. I do have one question about the PRs themselves, because I separated them into two PRs, one dependent on the other.
A: Sorry, I'm a little bit confused here. My understanding was that last week snapshots was presented to SIG Architecture to get either a thumbs-up or a thumbs-down on whether we can move in-tree or not. No decision was made at that meeting, so the current status is that we would present to SIG Architecture. Unfortunately, we weren't able to do so this week, so the plan is to try and do so at the next meeting, which will be next week.
A: Okay, so let's assume that we're going to just make sure that we have, you know, crossed our t's and dotted our i's, and let's go back to SIG Architecture next week and make sure we've got 100% approval. And it sounds like, preemptively, Jing, you're getting these PRs ready to go.

D: Yes.
A: In general, it depends. The reason folks like to have the API changes along with the logic implementing them is that you could get the API changes in and then never implement the logic, and we'd end up shipping without the logic, which would put us in a bad state. So what some people will ask for is multiple commits, where you have a set of commits for the API and then...
A: The current state is very much alpha. I'm convinced that there are probably some major bugs in here, because we don't actually have any volume plugins that are implementing block to test it with. Vlad is very actively looking through this code and writing some mock drivers to help test it, so there are probably going to be some bug fixes that'll come out around this. But the feature itself is alpha-gated and opt-in. If you do opt in to test it out, you know, it's very much alpha right now.
I: No, it's just the CSI driver. Is this what we need? That's the next step, basically: to have CSI drivers that implement block, and make sure that all this actually works as intended. But I will also look at the end; maybe there's something I can use for testing as well.
A: Okay, so then we have an item called prepare CSI for GA in Q3. At this point, I don't think we're going to go GA in Q3, most likely Q4, but the CSI Kubernetes implementation group has been working on it. Fortunately, we've hit the big items that we wanted to hit: Sergei and Vlad and the others in that group have been working very hard to get those features in. Specifically, other than block, there were two other features we wanted to get in.
A: Another aspect of CSI is figuring out how to begin to migrate the in-tree volume plugins to CSI. We want to do this because, instead of having to maintain two code bases for these volume plugins, we'll have one; and the Kubernetes project in general has a very big push to move cloud-provider-specific code out of the core, and this will help enable that. David has been very actively working on a design. David, do you want to give an update here?
A: Next, replacing volume reconstruction with checkpointing logic, because the volume reconstruction code is some of the buggiest code that remains in kubelet. We didn't get to this, so we're hopefully going to pick it up next quarter. Moving the GCE cloud provider disk API to auto-generated code is something that Chang's been working on this quarter.
A
Previously,
all
the
wrapper
api's
around
the
GC
p
cloud
provider
were
handwritten
and
Bowie
who's.
The
networking
TL
basically
created
a
tool
that
will
auto-generate
all
of
the
wrapper
code,
and
the
work
here
was
for
the
the
the
Jesus
GCE
PD
volume
plug-in
code
to
be
modified
to
use
that
new,
auto
generated
code,
and
we
were
able
to
get
that
done
this
quarter
as
well,
thanks
to
Chang
for
that
next
up
is
cubelet
plot
device,
plug-in
registration,
Sergei
was
working
on
this
and
he
was
able
to
complete
it
in
time.
So
we're
looking
good
there.
A
It
is
an
alpha
feature
plan
to
break
up
the
external
storage
repo,
so
Brad
had
a
meeting
or
a
session
during
the
face-to-face
meeting
where
we
established
what
the,
where
all
the
different
components
that
currently
live
inside
the
external
storage
repo
are
going
to
end
up
Brad.
Is
there
anything
you
want
to
add
to
that.
A: So this is the fact that kubelet is sometimes run in a containerized mode, and there are lots of issues that arise when you run in that mode. We haven't, in the past, had very much testing around this, and have had lots of regressions around it. So the ask here was to identify an owner for this area, to help add more tests and fix bugs that do arise. I think the ask was that Brad was going to find an owner for it. Brad, do you have an update here?
J: So I thought I had reproduced this two weeks ago, and I talked to Michelle. It turns out I had reproduced a different bug that we didn't know about, related to volume reconstruction, so I put that one on the back burner. I have now actually reproduced this bug and I can see it happen in the debugger. And, funny story, running kubelet in a debugger was kind of hard to do, but I managed to do it. But I don't have a fix for this one.
J: So, as we talked about at the last meeting, there was an issue with races between attach and detach in the iSCSI code paths, and we agreed we wanted actual locking to avoid those races. So I've added that; it passes the tests, it's passed my tests. There are a few more tests I could do, but I'm pretty happy with this now, so as far as I'm concerned it's ready to merge.
A: You could theoretically have a volume that you can share between multiple pods, and then you can have subdirectories that automatically get pod-specific paths inside that volume. So a pretty cool feature; glad we were able to get it in. Next up is the storage object in use protection feature to GA. I saw the PRs merge for this. Does anybody want to give an update?
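[Editor's note] The pod-specific subdirectory idea described here can be sketched roughly as follows. This is a hypothetical illustration only, not actual kubelet code; the template variable syntax and function names are assumptions:

```python
# Sketch: expand a subpath template into a pod-specific directory
# inside a shared volume. Hypothetical illustration of the idea.
def pod_subpath(volume_root, template, pod_name, namespace):
    # Substitute pod metadata into the subpath template.
    expanded = template.replace("$(POD_NAME)", pod_name)
    expanded = expanded.replace("$(POD_NAMESPACE)", namespace)
    return volume_root.rstrip("/") + "/" + expanded

# Two pods sharing one volume land in distinct subdirectories.
a = pod_subpath("/mnt/shared", "$(POD_NAMESPACE)/$(POD_NAME)", "web-0", "prod")
b = pod_subpath("/mnt/shared", "$(POD_NAMESPACE)/$(POD_NAME)", "web-1", "prod")
```

Because each pod's name differs, the expanded paths never collide even though both pods mount the same volume.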
D: Yeah, so this is basically about moving the snapshot APIs in-tree. We have maybe two questions about this one. So when we did the CSI design, we added parameters for a create snapshot request. In order to pass in the parameters, we need the storage class. So now the question is whether we should add a storage class to the VolumeSnapshot spec. If you search for storage class name... right, so, yeah, the VolumeSnapshot spec, right.
D: So we can add the storage class name here as well. The question is how. One way is to pass this through the storage class from the PVC. That's one way, right? When you take a snapshot, you know which PVC it is, so you can get the storage class there and use the parameters in that. But then I think the concern is whether you want to pass the same parameters for the snapshot as the ones for the volume. So do you want to use different parameters, I mean.
A: I was talking with Jing about this yesterday, I think, or a couple of days ago, I forget. Taking a step back, where this requirement came from was the fact that, when you provision a snapshot, similar to when you provision a new volume, there are going to be a bunch of knobs that are specific to a particular storage back-end that you may want to manipulate.
A: What we do not want to do is put those knobs into the object that is exposed to application developers, the folks who are actually deploying workloads on Kubernetes. So the mechanism by which you request a snapshot, the VolumeSnapshot object, should not have anything that's not portable. You should be able to move that object across clusters and have it still work. So that means we can't put it on the VolumeSnapshot object; but where do we put it? This was the same exact problem that we had with the PVC object.
A
You
know,
storage
systems
have
parameters,
you
can
be
modified,
but
if
we
put
it
on
the
PVC
object,
the
object
is
no
longer
portable.
Where
do
we
put
it?
We
invented
the
storage
class
object
to
be
a
home
for
the
opaque
parameters
that
are
specific
to
a
volume,
plug-in
and
storage
class
objects
are
created
and
modified
by
the
cluster
administrator.
So
we're
okay
with
those
objects
being
unique
per
cluster.
A
So
now
the
question
here
is
well:
how
do
we
handle
it?
In
the
snapshot
scenario,
do
we
introduce
a
new
storage
class
like
object,
or
do
we
reuse
the
existing
object?
Jing
was
suggesting
you
know.
For
the
most
part,
the
new
object
would
look
pretty
much
like
the
storage
class
object.
So
why
not
just
reuse
the
existing
storage
class
object?
She
mentioned
that
you
know
the
name
is
generic
enough
storage
class.
It's
can
be
used
for
for
from
snapshots
and
the
the
provisioner
that
specified
inside
it
could
just
refer
to
a
snapshot.
Provisioner
I'm.
J: That's what I would like to do. I would like to decouple the original storage class from the storage class of the snapshot. The way that I see it, the original storage class holds parameters that were required to provision that volume, and the new snapshot class holds parameters required to provision the snapshot; they may be completely separate. The set of parameters that apply to the creation of a volume may not be the same as the set of parameters that apply to the creation of a snapshot.
J: The risk I see is the possibility for conflicts, right? If you have two different storage classes that basically end up on different storage systems, and then you have corresponding snapshot storage classes for those storage systems, and you attempt to mix and match them in a way that's not going to work, the end user is going to end up with a snapshot that never gets taken, and they'll be upset about that.
A: If a snapshot class is not specified, what should the behavior be? My gut is that you should not try to be clever: you know, if somebody specifies it, you pass it in; if they don't specify it, don't go and, you know, try to be smart and pull it off the PVC or something like that.
A: What about if we allow the same storage class to be used, but have a different field for snapshot parameters? So you have the existing set of parameters, which are to be used for provisioning volumes, and we introduce a new field that would be for snapshots. That way you could still reuse the same storage class, but it's very explicit which set of parameters is for provisioning and which set of parameters is for snapshotting.
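[Editor's note] A minimal sketch of that proposal: the same class carries one parameter map for volume provisioning and a separate one for snapshotting, so the two can never be confused. The object shape and field names here are hypothetical, not the actual Kubernetes API:

```python
# Hypothetical storage class with separate parameter maps, sketching
# the "second field for snapshot parameters" idea from the discussion.
storage_class = {
    "provisioner": "example.com/fast-disks",            # assumed provisioner name
    "parameters": {"diskType": "ssd"},                  # used when provisioning volumes
    "snapshotParameters": {"uploadTarget": "offsite"},  # used when snapshotting
}

def params_for(obj, operation):
    # Pick the parameter map that matches the requested operation.
    key = "snapshotParameters" if operation == "snapshot" else "parameters"
    return obj.get(key, {})
```

The explicit split means a controller asking for snapshot knobs can never accidentally receive provisioning knobs, which is the confusion raised later in the discussion.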
A: Yes. So after the snapshot is created: these objects that are shown here are for creating the snapshot. Once the snapshot is created, generating a new volume and pre-populating it with the contents of the snapshot is done through a PVC object, and in the PVC object you can always specify your storage class, which can be different from the storage class of the original PVC. So you could.
A: I think we need to optimize for the default case, and I like this idea. Okay: if you don't specify a storage class, which would be, you know, the default case, we pull the storage class off the PVC object, and then there is a second field on the storage class exclusively for snapshot parameters. My concern with sharing the snapshot object is that if you, you know, confuse the parameters between provisioning and snapshotting, it gets very, very confusing.
D: Okay, and the second question is about the conditions. If you move down a little bit, it's this, not just status. When we had the design discussions on the CSI side, we had three statuses: basically uploading, ready, and error uploading. But that is at the volume plugin level, because that's, you know, a synchronous call.
D: At the API layer, the API is actually asynchronous, so we're thinking that we should add a creating condition. Basically, in addition to uploading, ready, and error, we should have another one for before the snapshot is cut. So, basically, we've started the creating process but it's not done yet; that would be creating.
K: I did want to just go back to the snapshot parameters real quick. So I definitely think that's a good idea, because I was talking with Jing on Slack and on GitHub over the past day or two. With the standard big three cloud providers, when you go to take a snapshot of a volume, it just gets snapshotted wherever, based on the implementation of the cloud provider. But with things like Portworx or other storage that's, you know, running in-cluster...
K: It tends to be snapshotted locally, and you may want to specify, when you take a snapshot of whatever type of storage you have, that you want that snapshot to be transmitted externally, to S3 or wherever. And I think, if there's a snapshot storage class or snapshot parameters, that would be a good use case for that.
J: So the one thing about this that has given me heartburn: I like the idea of being able to restore a snapshot to a different kind of storage, from an end user's perspective. But from an implementation perspective, you know, NetApp will implement a NetApp snapshot, obviously, and...
J: ...you'll be able to create a new NetApp volume from the NetApp snapshot, because the NetApp provisioner will know how to do that. Now, if the user wants to create a different, non-NetApp volume from a NetApp snapshot, there's no way for that to happen unless they put NetApp-specific logic in the other provisioners.
J: ...a system that's inside of a box. But if you restore it to a different machine with a different file system, you're going to lose at least a little bit of metadata. You know, just like converting from an ext filesystem to an XFS filesystem, it's not going to be exactly the same; when we restore that thing, something is going to be slightly different.
A: The failure is going to happen eventually, because eventually you're going to hit the provisioner and it's going to say, I don't know what that is, I'm not going to be able to do that. The question is whether there is any fast validation that we can do at the API server level to immediately reject the request.
A
I
would
say:
let's
leave
it
more
flexible
and
actually
in
general,
we
want
to
start
off
more
restrictive
and
then
loosen,
rather
than
the
other
way
around,
because
it
becomes
more
difficult.
So
maybe
we
start
off
with
a
API
validation
that
says
the
provisioners
must
match
and
then
in
the
future.
If
somebody
comes
to
us
with
a
concrete
use
case
and
says
actually,
I
am
able
to
provision
from
somebody
else's
system,
then
remove
that
restriction,
because
it's
always
easier
to
remove
the
restriction
than
to
go
in
and
apply
it.
Okay.
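[Editor's note] The initial restriction proposed here could be sketched as a simple check. The object shapes, field names, and provisioner strings are hypothetical, not actual admission code:

```python
# Sketch: reject a restore early when the snapshot's provisioner does
# not match the target storage class's provisioner. Hypothetical.
def validate_restore(snapshot_class, target_class):
    src = snapshot_class["provisioner"]
    dst = target_class["provisioner"]
    if src != dst:
        raise ValueError(f"provisioner mismatch: {src} vs {dst}")

netapp = {"provisioner": "example.com/netapp"}   # assumed names
other = {"provisioner": "example.com/other"}
```

Starting with this strict check and later deleting it, if a cross-provisioner use case appears, matches the "restrict first, loosen later" direction stated above.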