From YouTube: OpenShift Commons Briefing: Simplifying OpenShift Storage at Multi-Petabyte Scale, Gregory Touretsky (Infinidat)
Description
OpenShift Commons Briefing
Simplifying OpenShift Storage at Multi-Petabyte Scale
Gregory Touretsky (Infinidat)
The focus of Infinidat really goes along those lines: we are offering something that meets the needs of customers who are interested in very large-scale deployments. Multiple petabytes, very high performance, talking about millions of operations per second and tens of gigabytes per second of throughput from the system, while offering capacity at a much lower cost than our competition. I'm not going to talk about the technical details of the storage system itself.
There are challenges customers face as part of the Kubernetes journey. First of all, persistence is not something that was defined at the very beginning of Kubernetes. Most of the workloads in the Kubernetes space, as you probably know, were stateless, and everything that required persistence was usually deployed outside of Kubernetes. However, this is changing. We see more and more customers starting to run stateful applications in Kubernetes, and it's not always simple for them to do this with highly reliable enterprise storage. Another challenge comes up once you start using it at scale.
Security is another concern mentioned by many of our customers, as well as the ability to really move data between multiple clouds. One of the promises of Kubernetes is the ability to move workloads between on-premises and public clouds, and Kubernetes can do this in a great way, but the limiting factor is really the ability to migrate the data together with the workload; that is a challenge. And in general, the number of solutions in this ecosystem is exploding. This also confuses many of our customers as they try to find the best solution for their needs. This is aligned with what we see in CNCF surveys.
They interview customers, and roughly 30 percent of those customers say that storage is a challenge when they start their Kubernetes adoption. We believe this number could be even bigger if those customers were further down the road of migration into a Kubernetes containerized environment. This is exactly why we are starting to offer solutions for containers as well. But before I go into the details, let me share a customer example.
This customer has around eight and a half petabytes of storage capacity on Infinidat storage across data centers, and what you can see here is the kind of workloads they run today. In general, InfiniBox storage is a target for consolidation, so this customer is running a mix of VMware, AIX, Linux and Windows, plus some backup and other applications, on their seven InfiniBox systems across those three data centers.
Now they are starting to introduce Kubernetes as part of the mix of workloads they host, and they started to use our CSI driver; they were one of the beta customers for it. One of the architects of this company came back with the feedback that they really like the driver and the integration, and that it extends their usage of Infinidat storage into the containerized workloads as well.
So why do customers use Infinidat storage? We offer solutions at a very large scale. We can consolidate multiple workloads onto the same system, which simplifies the infrastructure for customers. We provide standard enterprise features at great scale: things like replication, both synchronous and asynchronous, and snapshots that can be taken instantly for the data without any degradation or impact on performance. We support encryption of the data.
We offer quality of service. We gather telemetry data from all the systems in the field and then expose insights on performance, and recommendations, to customers through InfiniVerse. We have different purchasing models for the storage systems. All those things together really result in very high adoption, which has brought us to over six exabytes deployed today.
Now, about the CSI driver itself: it basically provides great integration for customers. It is free of charge for Infinidat customers, available with the source code on GitHub. We have container images for the driver on Docker Hub and in the Red Hat container catalog. You can see screenshots here of both the driver itself and the operator for the deployments, and these are the features that we support with the CSI driver. Customers using Infinidat storage can manage multiple InfiniBox systems from the same OpenShift or Kubernetes cluster.
They can do dynamic provisioning and de-provisioning of persistent volumes, and those volumes can offer both file system and raw block access, so a customer running an application such as an Oracle database in Kubernetes may consume a persistent volume using a raw block interface. We support instant cloning of persistent volumes: a customer may provision a new persistent volume claim using an existing one, which will instantly create a new PV without duplicating capacity, so only the changes the customer makes will consume storage on the InfiniBox. We support resizing of the volumes. We support snapshots; again, snapshots are instant, and customers may restore data from a snapshot by creating a new PVC. We'll see all those things as part of the demo.

Customers may also import existing data sets. If there was a static allocation before, or customers are migrating from some legacy storage onto the InfiniBox array, we can take an existing persistent volume, import it into the CSI driver, and manage it within the CSI driver from that point in time. InfiniBox is a unified storage system, as I mentioned earlier, so we support all those protocols for Kubernetes as well.
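As a minimal sketch of the raw block access mentioned above (names are illustrative, and the storage class is assumed to exist and point at the InfiniBox CSI provisioner), a claim requesting a raw block device looks like this:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: oracle-data-pvc                 # hypothetical name
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block                     # raw block device instead of a filesystem
      resources:
        requests:
          storage: 100Gi
      storageClassName: ibox-storageclass-demo   # assumed storage class

The consuming pod would then reference the claim under volumeDevices (with a devicePath) rather than volumeMounts.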
Customers may choose which protocol works best for their environment, whether it's Fibre Channel, iSCSI or NFS. We also have a special flavor of NFS that we call NFS TreeQ, where we allocate a subset of a file system, a quota-limited subdirectory of the file system, as a persistent volume. This flavor is intended for customers who need a really large number of persistent volumes; we're talking about hundreds of thousands of persistent volumes per InfiniBox storage array using NFS TreeQ.
Here is how a deployment looks. If a customer has his Kubernetes cluster, with the master nodes and the worker nodes, he will also have the InfiniBox storage array, or multiple arrays. The InfiniBox provides separate endpoints for management access and for the data path. When the CSI driver gets deployed, there are a few entities that we create within the cluster. We create a secret that holds the credentials for the InfiniBox, which allows the driver later on to manage storage there.
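As a rough sketch of that step (the actual key names expected by the infinidat-csi-driver may differ; treat these values as placeholders), such a secret could be created like this:

    # Hypothetical example: management endpoint and credentials for the InfiniBox
    kubectl create secret generic infinibox-creds -n ibox \
      --from-literal=hostname=ibox-mgmt.example.com \
      --from-literal=username=admin \
      --from-literal=password=changeme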
So whenever a persistent volume claim comes in, the CSI controller talks to the management interface on the InfiniBox and provisions the persistent volume as requested through the PVC; it makes all the required configuration settings, sizes, and obviously some other things related to the type of provisioning and so on. When an application pod gets scheduled on one of the worker nodes, the kubelet will communicate with the CSI node instance on that worker node, which may contact the management interface on the InfiniBox, for example, to map a volume to the worker.
If this is block access, for example over iSCSI, it maps the volume; or it exports this persistent volume if this is NFS. It may also format the persistent volume, if this is required, with a file system such as XFS or ext4, and then, when the actual pod gets scheduled on this worker, it may consume storage using this persistent volume.
Now, talking a little bit about the CSI constructs: it is important, as we lead into the demo, to understand how things work. There are parts that are handled by the storage provider, the InfiniBox in our case, or maybe other storage systems, and there are things that are defined within the Kubernetes cluster itself. The usual way customers request storage in Kubernetes is that the developer defines a persistent volume claim that specifies some details about the type of access to the volume, the size, and the storage class.
There is a similar set of constructs that was added later in Kubernetes for snapshot management. Very much like the PVC, storage class and PV, there are the concepts of VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotContent, which are used to provision a snapshot of a persistent volume within the storage provider, and we implement those constructs for the InfiniBox storage.
Let me now zoom out a little bit from a single pod and talk more about the multi-cloud scenarios. I mentioned that some of our customers are interested in multi-cloud deployments and in the ability to really share the data and the workload between on-premises and public clouds. In addition to the on-prem InfiniBox deployments, at Infinidat we also offer a fully managed service that we call Neutrix Cloud. This is a service that we manage, deployed in data centers adjacent to major public cloud regions.
Customers can consume storage from Neutrix Cloud and pay for consumption without dealing with the actual physical infrastructure. So let's assume we have a customer running a Kubernetes cluster on-premises with an InfiniBox system at the back end. We can replicate those persistent volumes into Neutrix Cloud, and we can expose those volumes to applications running in Amazon or Google and other public clouds. So the customer is running an EKS cluster, or say his own OpenShift cluster, in EC2.
He may consume persistent volumes also in this cloud, whether it's a replica from the on-premises environment or a new persistent volume allocated directly from Neutrix, and he can access persistent volumes also from Azure or Google Cloud or IBM Cloud; those are the clouds that we support with the Neutrix environment. We also offer an option for multi-cloud access to the same persistent volume: if this is file-based storage like NFS, it can be accessed by pods running both in Amazon and in Google.
So, I was mentioning Kubernetes all the time. It definitely applies to OpenShift as well, which is probably the most popular commercial distribution of Kubernetes that we see with our customers. We provide a certified operator, available on OperatorHub, for deployment of the CSI driver. Customers may deploy the CSI driver for InfiniBox through OperatorHub and use it. The CSI driver supports any Kubernetes version starting from 1.14, or OpenShift 4.2.
We also have an earlier solution that we released a couple of years ago, a dynamic provisioner for Kubernetes that works since Kubernetes 1.6. It is not CSI-based, so customers who are on older versions may use that instead. Another note that I wanted to make: CSI is an evolving standard, so there are new features being exposed to customers with every Kubernetes release, and some of the features that I'm going to cover here may not be available in all versions of Kubernetes.
Let's start with the demo. We'll begin by just checking the cluster that we have. I run my kubectl get nodes command, and we see that the Kubernetes cluster I have here has three nodes, running version 1.18 right now. Let's check that we have our driver deployed, and we do; I'm running a kubectl get pods command for that.
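For reference, those checks are plain kubectl commands, roughly:

    kubectl get nodes               # three nodes, version 1.18
    kubectl get pods -n ibox        # driver pods (namespace name as used in this demo)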
We recommend deploying the driver in a specific namespace; in this example I'm using a namespace called ibox for the InfiniBox driver deployment. As I said before in the presentation, we have a single instance of the controller component per cluster, and we have one instance of the node component per worker node. This is exactly what you can see here: with this three-node cluster, we have one instance of the controller and three instances of the node component.
Each one is running on a separate worker node. The next step for us is to create a storage class for the InfiniBox. I'll use NFS transport as the example here; as I mentioned before, we can also do iSCSI and Fibre Channel, and we can also do NFS TreeQ for customers who want to use hundreds of thousands of persistent volumes. So this is a standard definition of an NFS storage class for the InfiniBox. The name of the storage class will be ibox-storageclass-demo.
It refers to the provisioner, which is basically our CSI driver. We want the reclaim policy for the persistent volumes to be Delete. We specify that we support the volume expansion feature, so customers can use our driver to resize a persistent volume after it's been created. Then we provide some parameters that are relevant for the InfiniBox storage. A customer can define multiple pools within the InfiniBox that can be used for different applications, or just to separate the allocation into different chunks.
We require a different pool for every storage class that you define in Kubernetes, and here we specify the pool name on the InfiniBox. For the NFS provisioning we define a network space. This is another construct of the InfiniBox storage: it's basically a set of endpoints that will be used to access data on the InfiniBox. We allocate several IPs, in this example for NFS access; those are basically your NFS server IPs, and the CSI driver will randomly choose one of those IPs every time a mount is done for a persistent volume. There are some other parameters: we want thin provisioning for the storage, and we use NFS as the access protocol. We may specify mount options for the volumes, which will be used when the worker nodes mount the volume, and we can specify export permissions for the persistent volumes as well.
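Putting those parameters together, a sketch of the storage class might look like the following; the parameter keys are illustrative and should be checked against the infinidat-csi-driver documentation:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibox-storageclass-demo
    provisioner: infinibox-csi-driver    # assumed provisioner name
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    parameters:
      pool_name: k8s_pool                # InfiniBox pool (hypothetical value)
      network_space: nfs_ns              # set of IPs used as NFS endpoints (hypothetical value)
      storage_protocol: nfs              # could also be iscsi or fc
      provision_type: THIN               # thin provisioning
    mountOptions:
      - hard
      - nfsvers=3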
Next I create the persistent volume claim and check that the persistent volume is OK. My demo gods are with me, I guess; we'll check it in a second. Ah, I basically had to change the name, which is why it was showing me the wrong result. So the persistent volume has been created. This is the name of the actual file system on the InfiniBox that was created following the PVC creation request. You can see that it is one gigabyte, ReadWriteMany, and it's bound to the claim.
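The claim behind this step would look roughly like this, matching what's shown (one gigabyte, ReadWriteMany, against the demo storage class):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ibox-pvc-demo                # name as used in the demo
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ibox-storageclass-demo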
Once we have the persistent volume there, I can go ahead and create my snapshot class, because I want to start doing snapshots. The snapshot class defines, again, which CSI driver should be used to manage snapshots, and I refer to our InfiniBox CSI driver. I can create the snapshot class and then check that it exists; we just created this ibox-snapshotclass-demo. Now I can go and create a snapshot. The snapshot is another Kubernetes construct, much like a PV.
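A sketch of such a snapshot class, assuming the v1beta1 snapshot API that shipped around Kubernetes 1.18 and an assumed driver name:

    apiVersion: snapshot.storage.k8s.io/v1beta1
    kind: VolumeSnapshotClass
    metadata:
      name: ibox-snapshotclass-demo
    driver: infinibox-csi-driver         # assumed driver name
    deletionPolicy: Delete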
I define a YAML file, which refers to the VolumeSnapshot kind. I name the snapshot ibox-pvc-snapshot-demo; again, this is a namespace-scoped construct. I will use the ibox-snapshotclass-demo snapshot class to do this, and the source for the snapshot will be the PVC that we created in the previous step.
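That YAML file would look roughly like this:

    apiVersion: snapshot.storage.k8s.io/v1beta1
    kind: VolumeSnapshot
    metadata:
      name: ibox-pvc-snapshot-demo
    spec:
      volumeSnapshotClassName: ibox-snapshotclass-demo
      source:
        persistentVolumeClaimName: ibox-pvc-demo   # the PVC created earlier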
So I run this command and create my snapshot. I can check the status of the snapshot, and this is what I see: my snapshot has existed for five seconds. Now I want to check the volume snapshot content name for the snapshot; this is the internal name of the snapshot content that has been created. I can check it also from the VolumeSnapshotContent side and see that the volume snapshot content has been available for 27 seconds. Behind the scenes, this corresponds to an instant snapshot on the InfiniBox.
So, if I now create this new persistent volume claim as a restore, I can see that this PVC has been created. What happens behind the scenes: we take the snapshot that was created before and make a clone of it, which is a writable copy accessible to applications. So basically we instantly created a copy of the original snapshot and made it writable and available for the customer.
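A sketch of the restore claim, using the snapshot as the dataSource (the new PVC's name is hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ibox-pvc-restore-demo        # hypothetical name
    spec:
      dataSource:
        name: ibox-pvc-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ibox-storageclass-demo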
We also allow creation of instant clones without going through the snapshot stage. A customer may define a new PVC that creates a clone directly from the source PVC without creating a snapshot. Again, I define a PVC, and I specify the data source to be the existing demo PVC, as opposed to the snapshot as in the previous step. If I run this kubectl create command, I see that my clone has been created and it's available for applications as well.
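The clone differs only in its dataSource, which points at the source PVC itself (again, the clone's name is hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ibox-pvc-clone-demo          # hypothetical name
    spec:
      dataSource:
        name: ibox-pvc-demo              # clone directly from the source PVC
        kind: PersistentVolumeClaim
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ibox-storageclass-demo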
So let's see how it works with an application. I have my application pod definition here, which launches a busybox image and mounts our ibox-pvc-demo PVC at /tmp/data. Now I schedule this pod. If you remember the diagram I showed of how the CSI driver works: the kubelet now calls the CSI node component on the relevant worker node.
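A minimal sketch of that pod definition (the pod name and command are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: ibox-demo-pod                # hypothetical name
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]     # keep the pod alive for inspection
          volumeMounts:
            - name: data
              mountPath: /tmp/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ibox-pvc-demo

Checking the mount from inside would then be something like: kubectl exec ibox-demo-pod -- df -h /tmp/data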
It does the magic and exposes the PV to the worker node for the pod, and then, when the pod starts on the worker node, you can see that it is running. I can connect to the pod using a kubectl exec command and check that my /tmp/data is really pointing to the NFS mount on the InfiniBox, as I would expect, and we can see that it was really done successfully. So the CSI driver took one of the IPs from the network space on the InfiniBox.
It used the export path for the file system, and it is available under /tmp/data for the pod to consume the data. So this is kind of the conclusion of the demo.
One other thing that I wanted to mention is how this can be done in a more programmatic way. This is an example of provisioning a MySQL database as a pod, using a Helm chart for MySQL, where the customer wants to provision the MySQL database using a persistent volume from the InfiniBox.
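A sketch of that, assuming the classic stable/mysql chart and its persistence values:

    helm install mysql-demo stable/mysql \
      --set persistence.enabled=true \
      --set persistence.storageClass=ibox-storageclass-demo \
      --set persistence.size=8Gi

The chart then creates its own PVC against the InfiniBox storage class, so the database lands on the array without any extra YAML.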
Stateful workloads are becoming more and more important for our customers, and we expect this will be even more critical for them in the future. One thing we are doing now to really help them address the storage aspects of Kubernetes adoption and OpenShift adoption is our CSI driver. What you can see here at the bottom of the slide relates to the scalability aspect of our solution, the NFS TreeQ I mentioned earlier.
With it we can go to hundreds of thousands of persistent volumes, and this is a screenshot from one of our deployments where we have over a hundred thousand persistent volumes on literally a single InfiniBox storage array behind the scenes. I'm available for further questions. If there are questions now, I will be glad to take them; if there are questions later on, I'm available at this email. Good. Let's talk about other Infinidat solutions, and about the CSI driver and OpenShift integration in particular.