From YouTube: Intro: Storage SIG - Saad Ali, Google
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Intro: Storage SIG - Saad Ali, Google
Join Kubernetes SIG Storage to learn about the areas of our focus, what we are working on currently, and how you can get involved. Veteran SIG Storage members will be on hand to help answer questions.
To Learn More: https://sched.co/GrbP
This is the SIG Storage intro. I'm going to be talking about what SIG Storage does, who we are, and how you get involved, so if there's anything you want to find out about SIG Storage, this is the place to be. At the end, we're going to do a panel with some of the folks who've been involved with SIG Storage for a while. To get to know you folks, by a raise of hands: how many of you know what SIG Storage is? All right, that's good!
How many of you have participated in a SIG Storage meeting, or contributed code, or anything like that? Okay, all right, that's about what I was expecting. So let's get started. First up: who is SIG Storage?
SIG Storage is a group of Kubernetes contributors. If you're not familiar with what a SIG is, a SIG is a Special Interest Group. Kubernetes is a very large project, so we divided it up into smaller groups, and these groups are responsible for sub-areas of Kubernetes.
The Kubernetes Special Interest Group for Storage specifically is responsible for the block and volume layer: ensuring that file and block storage is available in a portable manner to your containers, wherever they're scheduled. So the scheduler is responsible for scheduling a pod, the kubelet is responsible for starting a pod, and our team is responsible for ensuring that any persistent or ephemeral, local or remote storage that that pod depends on is available to that pod.
So we started off with everything just existing within the core Kubernetes codebase. This was when the project first started, but as we continued to grow, what we realized is that we were adding more and more third-party code into the core of Kubernetes, and that was becoming unsustainable for a number of reasons, including the fact that we couldn't actually test this external code.
We really had no way of testing it, but we were responsible for maintaining it, and since that was becoming unsustainable, we created interfaces to allow external development of these volume plugins. An initial attempt at this interface was something called Flex volumes, and the more recent one, the thing we're going to be moving forward with, is called the Container Storage Interface (CSI). Both of those are also under the purview of this group. The Container Storage Interface just achieved its 1.0 milestone last quarter, and we just pushed it to GA in Kubernetes 1.13.
So some of the components in the Kubernetes API that this team is responsible for include PersistentVolumeClaims and PersistentVolumes, StorageClasses, which are responsible for dynamic provisioning, and, of course, the interfaces that I just mentioned, including CSI, Flex, and pretty much all the volume plugins that you can find within Kubernetes.
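To make those API objects concrete, here is a minimal sketch of a StorageClass and a claim against it; the class name, provisioner, and parameters below are made-up placeholders for illustration, not anything shipped by the SIG:

```yaml
# A minimal StorageClass; the provisioner and parameters are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: example.com/my-csi-driver
parameters:
  type: ssd          # opaque, driver-specific parameter
---
# A PersistentVolumeClaim that dynamically provisions from that class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
```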
That's our team page; if you're interested in getting involved, that is where you're going to find more information about the team.
The Storage SIG hosts meetings every two weeks, and we have folks showing up from a very diverse set of backgrounds: that includes storage vendors, that includes cluster orchestrators, and everything in between, so you're welcome to join. And if you do join, what would you be working on? Well, the Storage SIG works on, number one, coding new features, writing tests, and fixing bugs, all related to the volume and storage subsystem. That could be as part of the core.
You know, the controllers that operate to bind volumes, or to attach a volume, or to make it available to a particular container, or external to Kubernetes. Now, with the Container Storage Interface, a lot of our code is no longer actually part of Kubernetes. We have a lot of sidecar containers that we are responsible for maintaining, which pair with external CSI drivers.
So if you're a third-party vendor who's writing a storage driver, you could do it completely on your own, using the interfaces that exist, but to make it easier, this team actually provides a set of sidecar containers that will do things like interact with the Kubernetes API and issue CSI calls against your driver, so it minimizes the overhead of writing new drivers. There's a lot of work going on with those sidecar containers, and then, of course, end-to-end testing all of these things to make sure they work the way that we expect them to.
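As a rough illustration of that sidecar pattern, here is a heavily simplified sketch of a CSI controller Deployment; the driver image, socket path, and sidecar version are assumptions for illustration only, not an official manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-csi-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-csi-controller
  template:
    metadata:
      labels:
        app: my-csi-controller
    spec:
      containers:
        # Community-maintained sidecar: watches PVCs and issues CSI CreateVolume calls.
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0   # example tag
          args: ["--csi-address=/csi/csi.sock"]
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        # The vendor's driver, speaking CSI over a shared Unix socket.
        - name: my-driver
          image: example.com/my-csi-driver:v1.0.0                     # placeholder image
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          emptyDir: {}
```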
Kubernetes, of course, is becoming a fundamental base infrastructure layer for a lot of people across the world, and it has to be stable. Above all, it has to be reliable. We can't have bugs in it, and so a lot of the focus moving forward is going to be on reliability and just improving what we have, rather than continuing to add more code.
We do host these virtual meetings every two weeks on Zoom, so if you're interested in attending, please look at the team page. We also host face-to-face meetings every now and then; in fact, we're hosting a small face-to-face meeting tomorrow here at KubeCon, and I'll tell you a little bit more about that later. The community is also pretty active on both Slack and the Google Group, so if you're interested in reaching out to folks, that's a good way to do so.
Now, what's interesting is that, as we become more decoupled, where you have core Kubernetes code as well as external CSI code, functionality gets added to the core and then it flows into CSI. So while the core functionality for raw block was moved to beta this quarter, the CSI implementation of raw block is still alpha, and we're hopefully going to move that to beta next quarter; those are kind of on a staggered timeline.
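For context, raw block support is requested through the volumeMode field on a claim and consumed via volumeDevices in the pod; a minimal sketch, with made-up claim and device names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-claim
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block          # ask for a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:          # block devices use volumeDevices, not volumeMounts
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-claim
```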
The next feature here is topology-aware volume scheduling. This is functionality that allows the Kubernetes scheduler to be aware of what the structure of the storage system looks like in terms of availability. In some storage systems, a volume may not be available to all the machines equally within a cluster. You can imagine, in a cloud environment, for example, that a given volume may only be available in a specific zone, and so it's important for the scheduler to be aware of this so that it can make appropriate scheduling decisions.
You don't want a pod landing on a VM that exists in a zone where it can't access that volume. So what we wanted to do here was come up with a mechanism that doesn't hard-code the concepts of zones, regions, and other topology primitives. Instead, we wanted to create something generic that can apply not just to cloud providers, but to any type of on-prem topology that might exist.
You can imagine racks, or even nodes. That work has moved to stable (GA) in the Kubernetes core; on the CSI side, the implementation is still alpha and is going to be moved to beta, hopefully next quarter.
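As a rough sketch of how topology-aware scheduling surfaces to users, a StorageClass can delay binding until a pod is scheduled and restrict allowed topology; the driver name, topology key, and zone values below are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: example.com/my-csi-driver      # placeholder driver name
volumeBindingMode: WaitForFirstConsumer     # wait for pod scheduling before provisioning
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone    # example topology key
        values:
          - zone-a
          - zone-b
```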
Moving forward, looking into 2019, some of the features that we're going to be working on: one is the migration of in-tree volume plugins to CSI.
So we created this awesome new interface to write volume plugins, but we have a set of old volume drivers that are built into the core of Kubernetes. Each one of these old drivers exposes an interface, a Kubernetes API interface, and so the Kubernetes deprecation policy applies. The Kubernetes deprecation policy is very strict: basically, you can't really deprecate an API object until the next major release of Kubernetes, which is Kubernetes 2.0, which is basically forever, so we're stuck with the API that we have.
So we have this set of volume plugins that expose, for example, GCE PD and AWS EBS in the Kubernetes API, and they have code in the core of Kubernetes that implements those drivers. We now have an external mechanism, with CSI, to implement drivers, and we're asking folks to start implementing CSI drivers. What we'd like to do is start proxying the Kubernetes API to use CSI drivers as the implementation, rather than having that code live within the core. That project is very large, and we need to do it very carefully, because the success metric is a little bit strange.
We just need to make sure that nobody notices that we swapped it out underneath them, because if they do, we have failed. The plan is to take that to alpha in Q1, beta in Q2, and GA in Q3. It is a massive amount of effort, and we're still trying to decide if we're going to 100% pursue this path, so there's a small possibility that we might decide to do something else.
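A change like this is typically staged behind feature gates so it can be swapped in and out per plugin. Purely as an illustrative sketch (the gate names below reflect what later Kubernetes releases shipped, not a commitment made in this talk), a kubelet configuration could enable such a switch like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true        # master switch for routing in-tree volume calls to CSI
  CSIMigrationGCE: true     # per-plugin gate, e.g. GCE PD -> its CSI driver
```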
The other things that we're going to be pursuing in the next year are bringing CSI up to feature parity with the core so that we can do the migration. This includes raw block volumes, topology, resizing, and inline volumes. Beyond that, we are also looking into the next generation of primitives to introduce into the Kubernetes API, for things like data management operations such as snapshotting and cloning. We already have an implementation within CSI for snapshotting.
That is currently alpha, but it is very basic, in that it allows you to create a snapshot and restore it to a new volume.
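To make that concrete, here is a minimal sketch of creating a snapshot from a claim; the class and claim names are placeholders, and the exact API version has changed between the alpha described here and later releases:

```yaml
# Create a point-in-time snapshot of an existing claim.
apiVersion: snapshot.storage.k8s.io/v1   # group as shipped later; the alpha used an earlier version string
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: example-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: my-data         # the claim to snapshot
```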
Some of the functionality that we'd like to add is the ability to, for example, quiesce your workload before the snapshot. So we would send a signal to the pod to say: hey, we're about to take a snapshot, please pause; take the snapshot; and then resume the workload, so that we can get at least a pod-level consistent snapshot, and then take that to the next level.
We have a spreadsheet that we track all the work in, and at the beginning of every quarter, in one of these meetings, we'll do a planning session where we decide what we're going to commit to for the next quarter and who's going to be working on what. Then, in each subsequent meeting throughout the quarter, we'll do status updates to figure out, for all the things that we've committed to, how far along each is and what's remaining.
Where do you begin? A good place to begin is bugs, and we have a lot of them. I checked yesterday: there were 275 open bugs in core Kubernetes related to SIG Storage. Of course, some of these might be duplicates and they need to be triaged, but there is a considerable amount of work just figuring out what the status of all of these bugs is and what has been fixed.
A
What
hasn't
been
fixed
and,
of
course
there
are
a
ton
of
external
components
that
we
now
own,
which
are
gonna,
have
their
own
set
of
bugs
and
issues
as
well
help
us
write
tests.
That
is,
you
know
the
movie,
the
biggest
focus
next
quarter.
I
would
like
to
just
focus
on
stability.
So
if
you
can
help
us
improve
testability
of
our
infrastructure,
that
would
be
awesome.
But there's still a lot of work left to be done there, and Michelle will come up in a little bit, so maybe we can ask her more about that. And then, of course, you can also help us write features. We do have a lot of feature work that's going to happen as well, probably closer to the second half of next year, mostly around snapshots, data management, and things like that.
If it's something that we want to work on for the quarter, we're going to put the sig/storage label on it and track it as part of the work that we're doing for the quarter. You're here at KubeCon, and there is a strong presence here from SIG Storage. Yesterday we had an all-day cloud native storage day; there were a number of talks related to storage, and the event was oversubscribed.
I think there was space for a hundred, and 300 were on the wait list. Then earlier this morning, unfortunately, you already missed this one, but Michelle gave an excellent talk, Michelle and Jana here, on some of the security issues that we encountered with the storage layer and how security is handled overall in the Kubernetes project. That talk is going to be posted online soon, so you can keep an eye out for that.
And then, moving forward, there is another talk that I failed to add here, which is by David Zhu. It is titled, David, can you give us the title? It's something around how to trust the spec, I believe, something like that; but basically it's about the CSI specification and how specifications allow for better collaboration and extensibility, and that's going to be tomorrow. And finally, there is a face-to-face meeting tomorrow morning between 9:00 a.m. and noon. It's a mini face-to-face meeting; we're going to get all the folks in SIG Storage together.
Alright, so if I could ask you folks to step up, Yan, Shang, Michelle, come on up. These are some of the veteran members of SIG Storage, and they can help me answer questions. Maybe I'll ask Michelle the first question, which is... let me see if it's turned on. Well, I'll let you figure that out; you might need to turn it on.
So, in the past, the way that we tested volume plugins in Kubernetes is that, in our test case, we create the pod and we actually specify the exact volume type. So if you want to test GCE PD, you create a test with your pod that creates a GCE PD volume, and at least for us, we were pretty good at adding our test cases.
So GCE PD is really well tested, but all the other volume types that don't run on GCE are not. A big effort that we have been doing over the last two quarters is to try to refactor the way that we run our tests so that we can separate the test case itself from the underlying volume. With this, you can basically write one test case, and the test case is testing things from the user's perspective.
So the test case is something like: can I mount a volume, write to it, and, when the pod restarts, read the data again? The test case says nothing about what the underlying storage is; that part is abstracted away and can get plugged in. Now what our framework can do is iterate through all our supported volume plugins and just run the same test case against every single volume, and that has uncovered some funny issues, like some test cases actually work on PD and then end up failing on NFS for some strange reason.
Maybe there's a bug. So that's been really good; we probably went from about 10 test cases to a hundred tests in the last two quarters, and we're pushing the limits of the CI with all our test cases now. But I guess we're not quite there yet: the eventual end goal is to be able to provide a library with a bunch of test cases so that you can import those into your CSI drivers and run our standard library of test cases against any CSI driver. Awesome.
We try to expose that through opaque parameters in the storage class, so any given driver can expose any number of opaque parameters, tuning any knob for that storage system. You can imagine one of those knobs would be: I want to enable encryption and use this encryption key. In fact, for GCE persistent disks, we're going to be doing exactly that, and it'll be exposed as an opaque parameter in the storage class in the future.
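A minimal sketch of what such an opaque knob could look like in a StorageClass; the driver name, parameter names, and key reference below are made up for illustration, not the parameters any real driver shipped:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted
provisioner: example.com/my-csi-driver          # placeholder driver
parameters:
  # Opaque, driver-specific parameters; Kubernetes passes them through untouched.
  encryption: "true"
  encryption-key: projects/example/locations/global/keyRings/kr/cryptoKeys/key
```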
If we find that that is not sufficient, and that the orchestration system for some reason needs to be aware of this special property, we might consider extracting that out into a common field in the Kubernetes API and into CSI, but so far, at least for encryption, I don't think we've seen a big need to do that.
Anyone want to take it? So, yes, there is a central place: we have a kubernetes-csi GitHub page where you can find documentation related to Kubernetes and CSI. On one of the sub-pages there is a drivers page; inclusion on that page, of course, is optional, but so far a lot of folks have listed the drivers that are currently available as alpha or beta, and their status.
Good question. So the question is: you're moving the in-tree drivers out of tree; what about the basics like NFS, iSCSI, and those? I classify it into four sections. One is the cloud provider volume plugins; those need to go first, because we're trying to get the cloud provider code out of core as fast as possible. The second set are third-party remote persistent drivers.
This would be things like, I think, Portworx and a handful of others that are in there; they're remote volume plugins managed by third parties, so we're going to hand those off to those companies, they'll write a CSI driver, and we'll proxy those out. The third set are the remote persistent volumes that you mentioned, which are NFS, iSCSI, and Fibre Channel. We would like to move those to CSI drivers, and in fact there are already CSI implementations of them.
Someone reached out to me today to remind me that the NFS one doesn't work very well, but the idea is that it'll be community-owned and it will be external. I'm not sure if we're going to automatically migrate those or just leave it optional to the end user to decide which one they want to use; we need to figure that out. And then finally, the last set is local ephemeral volumes. This includes emptyDir, Secrets, ConfigMaps.
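For readers less familiar with those, a minimal sketch of a pod using an ephemeral emptyDir volume (the names are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /scratch/out && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}     # ephemeral; deleted when the pod goes away
```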
Basically, in Kubernetes we have support for local persistent volumes, which lets you expose local disks on your hosts as a persistent volume, and the question relates to the fact that right now we don't support dynamic provisioning. So the question is: what's the status on supporting dynamic provisioning? It's definitely something on our roadmap, and we're still kind of in the process of designing it. We've hit a few design hurdles, but we're trying to get unblocked and make forward progress.
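For reference, this is roughly what a statically created local PersistentVolume looks like today; the disk path, node name, and class are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes: [ReadWriteOnce]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage      # placeholder class name
  local:
    path: /mnt/disks/ssd1              # placeholder disk path on the host
  nodeAffinity:                        # pins the volume (and its pods) to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: [node-1]
```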
Yes, you can. Actually, I think someone from Rancher has written a host-path dynamic provisioner, but the main challenge, the hurdle that we're trying to overcome right now, is how to do capacity isolation and capacity reporting of how much local storage you actually have available in your local storage pool. That mechanism does not exist today in Kubernetes, so we would need to figure out how to add that.
The question is: what about replication, any news about that? So replication, I think, is a feature that I kind of treat similarly to encryption, where right now it is an opaque parameter that a cluster admin can set on a storage class. It's a property of a storage system that gets exposed through a storage class; you can say, I want to enable replication and I want it replicated to this other region or zone or whatever external place, and then the storage system, after it's provisioned, is responsible for doing that. Now, the next steps are:
are there things that we can do within Kubernetes to make Kubernetes aware that there is a replicated version of this volume that exists somewhere else, and make it operate more intelligently on that? The work that Michelle and the rest of the SIG did with topology enables that to a certain extent. Basically, if you implement topology correctly with your CSI driver, you can identify a volume as belonging to multiple zones, for example, and then the Kubernetes scheduler will automatically figure out: oh, I can use either of these zones to land the pod.
So you need to handle that at the application layer: you need to deploy two pods and make sure they're multi-attach capable, and then write your own code to be able to switch between them, or have some sort of service that will send traffic to either of them. I think there's work that we would like to do there longer term, but we haven't thought through the details of what that would look like in the short term. I don't know if anybody else has anything they want to add to that.
Cloning is not quite replication, but if you're interested, you should check out Aaron and John's proposal about cloning. It's not going to give you active-active PVC use, but we do want more comments and reviews of the cloning proposal to take it forward as something we plan to address in the next year.
One possibility is that you can use snapshots: once you take a snapshot from one backend, then on the other backend you can just create a PVC from that. That's one possibility. Also, the topology feature that we are looking at adding to snapshot support could help there, so that's another possibility, among other things. We actually had a proposal, the volume operations proposal, that was drafted earlier and included migration, but we didn't proceed with that, so maybe we could go back and look at it.
What I would like to see in the long-term future is building on top of what we did with snapshots. If you look at the way that snapshots are implemented today, the way that you use a snapshot is: on your PVC, your PersistentVolumeClaim, there's a new field called a data source, where you can specify a snapshot as the data source. You can imagine that being extended to create general external populators,
where, for example, you could write something that knows: I am trying to import a snapshot of type EMC, and put it into some other type of storage. So we could extend that data source to be more generic and allow for CRDs, and then you could write your own importer and exporter. The exporter would be triggered off of the snapshot, and then we have snapshot classes which you could use to say:
this is the target that I want to store this at. And then you could have a reverse import operation by creating a new PVC with a data source that points to a CRD. So that's kind of where I'm leaning with this, but again, these are very early plans, and if you're interested in that at all and in working on it, join us and help us shake that out.
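Today the dataSource field is already used to restore a claim from a snapshot; a minimal sketch, reusing the placeholder snapshot and class names from the earlier sketches:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: fast-ssd          # placeholder class from the earlier sketch
  dataSource:                         # populate the new volume from a snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
  resources:
    requests:
      storage: 10Gi
```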
You know, of course, nothing is final, but in general we try to stay out of the data path as much as possible: being in the data path reduces our ability to scale and increases the number of bugs and security issues that we have. But great question; join the community and we can talk about that more. And I think you had a good question.
The question is about storage access modes: as I understand it, should the PV expose support for just ReadWriteMany, or should it also support ReadWriteOnce on the same PV? The answer is yes: in the PV, you should expose all the access modes that the PV is capable of, including ReadWriteOnce or ReadOnlyMany, for example, and the PV binder matches the request from the PVC against whatever the PV supports and gives it to the pod.
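A minimal sketch of a PV advertising more than one access mode; the NFS server address and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:                  # advertise every mode the backing storage can support
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  nfs:
    server: nfs.example.com     # placeholder server
    path: /exports/data         # placeholder export path
```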
In Kubernetes currently, only for attachable volumes do we enforce that if the volume's mode supports only ReadWriteOnce, it can be attached to only one node; for shared storage, for other volume types that don't support the attachable interface, we don't actually enforce it. And even within the attach controller, the mechanism that enforces this, that a ReadWriteOnce volume will be attached to only one node, is kind of weak, actually. If the control plane controller crashed and restarted on some other node, there's a race condition that could allow a volume to be attached, or at least we'd try to attach it, to two nodes at once. So this is something we don't strongly enforce at the moment.
So this is the way that Kubernetes kind of behaves today: it doesn't actually enforce these access modes. The access modes are used for binding, and it's up to the user to decide how they're going to use it. You could imagine a ReadWriteOnce volume that's bound, and if it doesn't actually implement attach, Kubernetes doesn't prevent you from using it multiple times. You can go and use it multiple times; the storage system will fail, and only the first pod is going to be able to get started.
The question was: if you use local persistent volumes and the pod goes to another node, doesn't the storage move with it too? The answer is no. Actually, once you bind your pod to a PVC that uses a local volume, your pod always gets scheduled to the same node. Even if that node dies and disappears, your pod will try to be scheduled to that node, and if your node's not there, then your pod can't get scheduled.
In Kubernetes today, the scheduler schedules for CPU, memory, possibly storage capacity, and some of the attributes of storage like topology. We don't really schedule based on IO or things like that, and we don't have any immediate plans to make Kubernetes more aware of that. Michelle, maybe you can add to that.
I mean, immediate plans are to get better metrics around scalability. In the coming quarter we want to establish those and start exporting them as part of the scalability tests that we run. Kubernetes, I think, runs multi-thousand-node clusters at the moment, but those are all stateless pods being tested in that cluster. So we want to start introducing stateful pods into those tests, collect the metrics, figure out where the bottlenecks are, and then add profiling to figure out...