From YouTube: Containerized Storage Systems on Kubernetes
Yeah, so I worked at EMC for about 12 years, then at NetApp for about three years, and at Red Hat Storage after that. I also worked at CoreOS, and now I work at Portworx doing containerized storage systems.
One of the things I want to talk about is that containers and storage still seem new to a lot of people, so I wanted to talk about it a little bit and go through the differences, especially compared to virtual machines.
I have a little list of questions here, and if you don't have any questions I'll ask them myself, you know, I'll be like, hey, what do you think about this? And then I'll answer. So one of the first questions is: how is storage for containers different than for virtual machines? Anybody know? All right, okay, here we go. Yeah, absolutely, there's a huge difference.
So let's go first with what a virtual machine is, and then what a container is. Okay, so first: a virtual machine runs on a hypervisor on a host, and the hypervisor is the one that creates the virtual machine, meaning that to the guest running inside that virtual machine, it looks like it's running on real hardware, but it's all emulated.
The memory is emulated, the networking, the storage, everything. So when the virtual machine talks to storage, what it is actually talking to is the hypervisor. The hypervisor can then go over the network, for example over Fibre Channel or iSCSI or any other protocol you would like, and talk to a storage system.
Okay, now the difference is that a container is not a virtualized environment. A container, and this may surprise you, is just an application whose resources have been restricted. It's just an application like anything else running on the host; it's not emulated in any way, it's only that its resources have been reduced.
So what that means is that when the container talks to storage, it's not talking to some other layer that can then go over the network and get to your storage. No! It's talking to the kernel on your host, and because of that there's a huge difference between how virtual machines access storage and how containers access storage.
The biggest thing is that when containers access storage, they're accessing it through the host. That means the host must have all the drivers needed to access your storage, because it's accessing it through the kernel. Has anybody here used OpenStack before? Yeah? All right, good.
So if you've used OpenStack before, you most likely used a virtual machine run by KVM or QEMU. What happens when you do that, taking Ceph as the storage system for example, is that you run your virtual machine, the virtual machine talks to QEMU, the hypervisor, and QEMU then goes over the network and talks to Ceph.
It wouldn't even hit the kernel on your host, right? But now, as a container running on a host, if you want to talk to that same storage system, you need to make sure the host has the drivers to talk to it. Okay, so that's a big, big difference. All right.
So, does anybody here know what block, file, and object storage are? All right, a few of you, all right, cool. So did you know that in virtual machines, when you access storage, most of them are looking for block?
Why is that? Because when the virtual machine asks for block and the volume gets attached to the virtual machine, it formats the file system inside of it. So if you have a Windows virtual machine, for example, and it's talking to a storage system, it wants block, because the kernel inside that virtual machine is talking to that virtual disk as if it were a real disk, and disks are block.
Okay, but containers, like I said before, are talking to the host. So what that means is that the container can't talk to a block device directly; it needs a file system.
So when a containerized application like MySQL, or anything else, just wants to write a file, just touch a file, it goes into the kernel of the host and it's talking to a file system. You may then say: okay, well, in Kubernetes I know that I can attach block devices and things like that into my container. Well, what's actually happening is that it attaches the device to your host and then formats it. That's the key: it formats it first on the host and then mounts it on the host.
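To make that step concrete, here is a minimal sketch, in Go and assuming Linux, of what the host side does with a raw block device handed to a pod: it formats it and mounts it on the host first, and the container only ever sees the resulting directory. The device path and mount path are hypothetical, and real kubelet/CSI code does far more checking than this.

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
)

func main() {
	// Hypothetical example paths; a real system discovers these dynamically,
	// and the mount target directory is assumed to already exist.
	device := "/dev/xvdf"
	hostMountPath := "/var/lib/kubelet/plugins/example/mount"

	// 1. Format the block device on the HOST. This is the key point from the
	//    talk: the filesystem is created by the host, not by the container.
	if out, err := exec.Command("mkfs.ext4", "-F", device).CombinedOutput(); err != nil {
		log.Fatalf("mkfs failed: %v: %s", err, out)
	}

	// 2. Mount it on the HOST. The container will only ever see a directory.
	if err := syscall.Mount(device, hostMountPath, "ext4", 0, ""); err != nil {
		log.Fatalf("mount failed: %v", err)
	}
	log.Printf("%s formatted and mounted at %s on the host", device, hostMountPath)
}
```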
Now there's also a big difference, and probably some misinformation, about how the container actually writes to that storage.
Most people think that the container itself is the one writing to the storage, meaning that the mounting of that volume happened in the container, and that's not true. The mounting happens at the host, like I said, because it is the host that's doing everything. So what that means is that the host mounts it at a certain location, some long path under /var/lib/kubelet.
A
Then
you
say:
okay!
Well,
that's
not
the
path
that
I
see
inside
my
container
right.
You
see
a
nice
path
that
you
asked
it
to.
What
happens
is
when
kubernetes
or
docker
or
any
tile
container
orchestration
system.
They
want
to
bring
that
volume
into
your
application.
So
you
can
write.
What
happens?
Is
that
the
they
take
that
mount
point
and
they
do
what's
called
a
bind
mount
okay?
Okay, so when you say docker -v with some host path, then a colon, then your path, that second path is the bind mount target it creates. Okay, does that make sense? All right, cool. Any questions before I go on to the next item on my list?
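As a rough sketch of the bind mount itself, this is approximately what a `docker -v /host/path:/container/path` mapping turns into at the kernel level on Linux; the paths below are made-up examples, and a real runtime performs this inside the container's mount namespace.

```go
package main

import (
	"log"
	"syscall"
)

func main() {
	// Hypothetical paths: where the host mounted the volume, and where the
	// application inside the container expects to see it.
	hostPath := "/var/lib/kubelet/pods/1234/volumes/example-vol"
	containerPath := "/var/lib/mysql"

	// MS_BIND makes the same directory visible at a second location; nothing
	// is copied or reformatted, and the kernel simply tracks both mount points.
	if err := syscall.Mount(hostPath, containerPath, "", syscall.MS_BIND, ""); err != nil {
		log.Fatalf("bind mount failed: %v", err)
	}
	log.Printf("%s is now also visible at %s", hostPath, containerPath)
}
```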
Redundancy when volumes are mounted, so if, let's say, I have two containers and those two containers are trying to mount the same spot, so bind mounts? I don't know exactly how Kubernetes or the container orchestration system would do it, but it would definitely be mounted by the host, and then you can bind mount it to two locations. You can bind mount it many times over. That's just tracked by the kernel, because it's just being mounted into a new namespace. But good question, yeah. You're a core contributor on the open source project CSI? Yes, yeah, yeah. Okay, great question, great question, and I'll give you some money for that one.
All right, so let's talk about what CSI is. Does anybody know what CSI is? Oh, all right, cool. It's kind of like CNI, but much cooler. Anyway.
So what happened was that we have different container orchestration systems, and there were many ways of connecting storage systems into those container orchestration systems. Docker had what's called the Docker volume plugin to be able to communicate with those systems, and then we have Kubernetes, which had what are called the internal, in-tree drivers: as a storage vendor, I would have to write code inside Kubernetes and then be bound to the Kubernetes release cycle for my storage system to be able to be used, right?
So when Kubernetes says "I want a new volume" and it wants to talk to some storage device, that command, "I want a new volume," needs to go somewhere, and that code needs to be able to communicate with that system. So the community got together, the community being the folks from Mesos, Docker, and Google, and created this thing called CSI. CSI stands for Container Storage Interface, and what it is is a new API, a standard. It's not a service, and I'll actually talk a little bit about that: it's not comparable to Cinder. Cinder is a service in OpenStack; it has services and a database and everything like that.
CSI is just a specification, and what that specification means is that all the container orchestration systems can now code to it as clients, and the storage developers can code to it as drivers. What that really means is that the storage vendors can benefit by providing out-of-band drivers to already-running systems.
So, for example, Portworx, where I work, can then deploy newer versions of our driver with newer features, not fixes, because we never write any bugs, but features, absolutely, like snapshots and things like that. That's what's coming in now: CSI right now is at version 0.3 and snapshots just went into it, but it is something that is maturing, and we hope to have it in Kubernetes as GA hopefully by the end of the year.
I also participate in the Kubernetes implementation team, where we take the CSI spec and change Kubernetes to be able to consume storage systems that program against that API. Okay, any questions?
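To give a feel for the shape of the spec, here is a trimmed-down Go sketch that mirrors a few of the CSI RPC names (CreateVolume, DeleteVolume, NodePublishVolume). The real specification is a gRPC/protobuf definition with many more calls and fields, so treat these types as illustrative stand-ins rather than the actual API.

```go
package csisketch

import "context"

// Sketch only: simplified stand-ins for the request/response messages that
// the real CSI protobuf spec defines in much more detail.
type CreateVolumeRequest struct {
	Name          string
	CapacityBytes int64
	Parameters    map[string]string // e.g. values passed down from a StorageClass
}

type Volume struct {
	VolumeID      string
	CapacityBytes int64
}

// Controller mirrors part of the CSI controller service: cluster-wide
// operations such as creating and deleting volumes.
type Controller interface {
	CreateVolume(ctx context.Context, req *CreateVolumeRequest) (*Volume, error)
	DeleteVolume(ctx context.Context, volumeID string) error
}

// Node mirrors part of the CSI node service: making a volume available on a
// specific host, which is where the format, mount, and bind mount steps
// described earlier actually happen.
type Node interface {
	NodePublishVolume(ctx context.Context, volumeID, targetPath string) error
	NodeUnpublishVolume(ctx context.Context, volumeID, targetPath string) error
}
```

The point is that an orchestrator only ever speaks calls like these, so any storage vendor implementing them can be plugged in out of band, independent of the Kubernetes release cycle.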
What's the impact? Great question. Okay, so let me just reword what you just said.
If you deploy two applications on two different Kubernetes clusters, how do you know which storage to pick? The way it works in Kubernetes is that dynamic provisioning works through something called a storage class, and the storage class defines where the storage is going to come from when you make a request for a volume, and what the parameters for that volume are, maybe replica 3, replica 4, sorry, replica 2, and so on.
Now, if the administrator of that Kubernetes system has set a default and a name, all that the application developer needs to do when they create a new application is make a request. A request is made by creating what's called a PVC, a persistent volume claim, and all they need to put in it is the name of the storage class.
So if the name is the same on both clusters, then the application will run without any issues, even though the back ends of the two systems may be completely different storage systems: one may be in the cloud, like EBS, and the other may be on-prem with some attached volume. Does that answer your question?
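Here's a minimal sketch of that decoupling, using made-up Go structs rather than the real Kubernetes API types: the claim names a storage class, and only the class knows which provisioner and parameters sit behind that name, which is why the same application manifest can run unchanged on two clusters with different back ends.

```go
package main

import (
	"fmt"
	"log"
)

// Simplified stand-ins for the real StorageClass and PersistentVolumeClaim objects.
type StorageClass struct {
	Name        string
	Provisioner string            // which storage system fulfils the request
	Parameters  map[string]string // e.g. {"repl": "3"}
}

type PersistentVolumeClaim struct {
	Name             string
	StorageClassName string // the only thing the app developer has to know
	Size             string
}

// provision finds the class a claim points at, purely by name.
func provision(classes []StorageClass, claim PersistentVolumeClaim) (*StorageClass, error) {
	for i := range classes {
		if classes[i].Name == claim.StorageClassName {
			return &classes[i], nil
		}
	}
	return nil, fmt.Errorf("no storage class named %q", claim.StorageClassName)
}

func main() {
	// Cluster 1 might back "fast" with a cloud volume, cluster 2 with an
	// on-prem system; the claim below works unchanged on both.
	classes := []StorageClass{
		{Name: "fast", Provisioner: "example.com/cloud-driver", Parameters: map[string]string{"repl": "2"}},
	}
	claim := PersistentVolumeClaim{Name: "mysql-data", StorageClassName: "fast", Size: "10Gi"}

	sc, err := provision(classes, claim)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("claim %s will be provisioned by %s\n", claim.Name, sc.Provisioner)
}
```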
Okay, so let's go back, and we'll see how much time I have, because this one may take some time to explain. Okay, I've got 50 seconds, okay, here we go. If you go back through the history of storage systems, it has always been compute and VMs on one side and the storage cluster on the other, with some type of interconnect in between. But now with Kubernetes we're realizing that Kubernetes is the operating system of the data center, and as such I want to manage my applications, my network, and my storage, a software-defined storage system, all through the same control plane, all through the same API.
Now, to do that, I need a storage system that can act as an application, just so that Kubernetes is able to deploy it. So, for example, at Portworx what we do is we create our storage system.
We containerize it, so now you can deploy Portworx as an application into Kubernetes, and it can be deployed using a DaemonSet. It goes onto every one of the nodes, takes over your local block storage on the host, and then talks back to Kubernetes and says: hey, I'm Portworx, I'm here, I can provide dynamic volumes for you. So now, when your applications come in with a PVC, storage can be provided by that new storage system that's been deployed in Kubernetes.
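For a rough picture of that deployment model, here is a hedged sketch using the standard Kubernetes Go API types: a DaemonSet runs one copy of a storage container on every node and hands it the host's devices. The image name, labels, and privileged setting are illustrative assumptions; an actual Portworx install configures much more than this.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// storageDaemonSet builds a DaemonSet that runs one storage container per node.
func storageDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "example-storage"} // illustrative label
	privileged := true

	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example-storage", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "storage",
						Image: "example.com/containerized-storage:latest", // hypothetical image
						// Privileged so the container can manage the node's block devices.
						SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
						VolumeMounts:    []corev1.VolumeMount{{Name: "dev", MountPath: "/dev"}},
					}},
					// Expose the host's /dev so the storage system can take over local disks.
					Volumes: []corev1.Volume{{
						Name:         "dev",
						VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/dev"}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println("would create DaemonSet:", storageDaemonSet().Name)
}
```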
But why would you want to do this? If you're in the cloud, you're probably thinking: I can just use EBS volumes, I can just use the volumes from the cloud vendor. Well, there's a catch with some of those volumes from the cloud vendor. For example, when Kubernetes deploys your nodes, they may go across different zones, right? And those volumes are only allowed to be used in the same zone.
So you have to be very careful, because your app can go from one zone to another and be stuck pending forever, since your EBS volume is only allowed in one zone. But when you deploy a containerized storage system, it goes across all your nodes, so your app can go from one zone to the next and still attach to the storage it was using without any issues.
So let me just make sure I understand your question. It doesn't matter whether it's on-prem or not; what you're asking is, if a pod goes down and comes back up in another location, how does the system know how to keep the volume going, so that when the application comes back up it has the storage it needs to continue? Okay.
Okay, let me reword that, then, because that's a good question and I kind of need to reword it. The storage system takes over the block volumes that are local, like on-prem volumes, which cannot be moved and reattached. What it does is create a large pool of them across the whole system. Imagine it as an intelligent layer on top, where when you ask for a volume, it provides either local access or network access to it.
So when a pod is on some node A and it asks for storage, storage is provided to it from three different locations, for replica 3, for example. If it's a thousand nodes, the pod could come up on any of the thousand nodes. It would either just talk over the network to the storage system, or it could be on the same node the storage system is on, and then it would be talking directly through the kernel.
Over a CLI? No, it's still through the kernel. So imagine the pod: it's still talking to the kernel, right? It always is, it doesn't know what's going on; it always goes to the kernel, and in the kernel there's a driver that says: oh, your volume is not here, it's on that node. So it just goes over the network, gets the data that it was looking for, and returns it. And again, that's only when the pod is running on a node where it acts as a client.
Okay, another question, hold on a second.
The app needs to be resilient enough to know that the connection has died and to wait some time until the database comes up on a different node, because the original node no longer exists. Then, from the storage system's side, it will get reattached to the same storage, so the app, or the database, will still come up with all the data.