Description
OpenShift Commons Gathering
Milan, Italy 2019
Title: OpenShift Container Storage
Speakers: Christopher Blum (Red Hat), Carlos Torres (Red Hat)
Christopher Blum: And he even speaks Italian, so that's helpful. So we'll start with a little bit of an overview of what storage is, then Carlos will talk about what we had with OCS 3, which worked for OCP 3, and then I'll give you a little bit of an outlook on what's coming with OCS 4.2, which will be made available with OpenShift 4. Right, so, with OpenShift and Kubernetes.
If you're looking at Kubernetes persistent storage, there are two things you can do. Very early in the Kubernetes days, people were still doing static provisioning of PVs: you, as an admin, create PVs, and then once a user creates a claim, a PVC, that gets matched with what's out there. Nowadays we do want to do dynamic provisioning: once a user makes his claim, a PVC, we actually create the thing in the backend, and when the user frees it up, it's automatically reclaimed and deleted in the backend storage.
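As a rough sketch of the dynamic-provisioning flow described here (illustrative, not part of the talk), the claim a user creates might look like this with the Kubernetes Python client; the storage class name "standard" is an assumption, substitute whatever your cluster offers:

```python
# Minimal sketch of dynamic provisioning: creating a PVC whose storage
# class lets the provisioner create the backing volume on demand.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",  # assumed class name; use your cluster's
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# Deleting the claim later lets the reclaim policy clean up the backend volume:
# core.delete_namespaced_persistent_volume_claim(name="demo-claim", namespace="default")
```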
So most people that talk to us do want to keep the registry on a persistent layer, because that way they can distribute it over all nodes, and that not only helps with retaining the registry when a node fails, but it also allows you to distribute it to other locations. And then obviously you want file storage for the containers to store anything, including databases. But what's now new with 4.2: you now also get block storage for containers. You can access block devices directly.
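What "block storage for containers" means in practice is the standard Kubernetes raw block volume mode; a hedged sketch, again with an assumed storage class name:

```python
# Sketch: a PVC requesting a raw block device instead of a filesystem.
# The consuming pod attaches it via spec.containers[].volumeDevices.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

block_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="raw-block-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        volume_mode="Block",  # "Filesystem" is the default mode
        storage_class_name="standard",  # assumed class name, as before
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=block_pvc)
```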
What we consider storage, we divide into three categories. You have the traditional storage arrays and appliances, which usually have a vendor lock-in; you usually have these in your data centers, and you can attach them from the outside, obviously, to your OpenShift environment. You've got your point solutions, things that are not necessarily Kubernetes-aware. But the most important thing for us: these are usually limited to one environment, so either it's inside of your own data centers or it's inside of a single cloud environment. And here is what we target with OCS.
We want to not only run in public clouds, but also inside of your own data center, and make that a homogeneous experience for you, so that your data can move wherever you need it to be. And that, for us, looks like this: you have your bare metal, virtual machines, containers, the private cloud, the public cloud and your legacy storage needs, and everything is supported by the same storage environment. So that should be enough for a quick overview, and I'll hand it over to Carlos.
Carlos Torres: Thanks, Chris. Well, we have a value proposition in our portfolio. We have a story for the present, and that means OpenShift 3.11, and the story that I'm going to tell you is the 3.11 product, based on the Gluster engine. Then Chris will tell the story about the future, which will be 4.2, and it is based on Ceph. So, regarding 3.11 and regarding the overall story of container storage: as Chris mentioned before, we need storage for the infra part,
and Red Hat provides the storage for the infra and the storage for the apps. Infra means registry, metrics and logging. That's very important because, you know, you are running your container images on the registry, and for your metrics and logging, you are probably under an auditing process, so you need to keep your metrics and logging information safe. And then you have the storage for applications, stateful applications. So in the previous presentation there was a session about Kafka; there are multiple applications, like 3scale, that require state and therefore require persistent storage. Based on that, what is the proposition of Red Hat?
How can you deploy your storage inside or outside of OpenShift? On the left side, we see a deployment model, standalone storage for containers, and it means that you are running your platform independently from your storage. So you have your storage based on dedicated VMs, where you run the binaries of our storage product; then you connect the storage to OpenShift through APIs, but both parts, OpenShift and OCS, OpenShift Container Storage, are independent. Then you have, on the right,
the flavor that is called storage in containers, and it means your storage becomes an application and is delivered inside of OpenShift. It means that you have binaries, but the binaries are in container pod format and are completely managed by OpenShift. What's the difference between both? Because from a regulation point of view, and for internal processes, you probably want to maintain your independence between infra and application stuff, right? So there are dedicated storage teams and there are dedicated developer teams, so with the left side, kept independent, you address this request.
OK, so what about the architecture? So this is the containerized flavor architecture: the storage is running as an application inside OpenShift. So you have a pod, and then you have the data plane. The data plane of the current version, 3.11, is based on Gluster, so you have the so-called bricks in Gluster: file systems on nodes that are federated together and provide you the Gluster storage part. Then you have the control plane.
The control plane is the API that is integrated with OpenShift and enables the dynamic provisioning features and all the features that regard persistent volume claims.
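To make the control-plane role concrete, here is a hedged sketch of the kind of StorageClass that drove GlusterFS dynamic provisioning in the OCS 3 era, with the heketi REST endpoint acting as the control plane; the URL and secret names are illustrative, not values from the talk:

```python
# Sketch: StorageClass for in-tree GlusterFS dynamic provisioning (OCS 3 era).
# "resturl" points at the heketi control-plane API; values are examples.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="glusterfs-storage"),
    provisioner="kubernetes.io/glusterfs",
    parameters={
        "resturl": "http://heketi-storage.example.com:8080",  # hypothetical endpoint
        "restuser": "admin",
        "secretNamespace": "app-storage",  # example names
        "secretName": "heketi-secret",
    },
)
storage.create_storage_class(body=sc)
```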
So, basically, with the 3.x releases, version after version, we delivered new features. This is a table of the last three versions, from 3.9 to 3.11, and the features that we delivered in these three versions, and you can see that version after version we delivered new features.
So, regarding Kubernetes integration, we support block, file and object storage; then the ReadWriteMany, ReadWriteOnce and ReadOnlyMany access modes; dynamic provisioning; PV resize. Then, from an OpenShift point of view, in the last version you can manage your storage directly from the web console, and you can install the storage from the playbook, the same playbook that you use to install OpenShift. Then the storage services that we provide, as I said before: multi-protocol storage, snapshots, geo-replication for DR. In terms of infrastructure,
the solution is agnostic, so it runs everywhere OpenShift runs. In terms of support, the OCS 3 version, 3.11, is aligned in terms of support with OpenShift. It means that, based on the lifecycle, OCS will be supported until the date that you can see there, and there is the link for the public reference about the support lifecycle. And regarding the new OCP 4: OCP 4 has several requirements.
The main requirements are related to operators. You have probably heard my colleagues talk about operators, a very interesting session, because they simplify the lifecycle of the applications inside OpenShift, and the storage, again, must be aligned with this new way of lifecycle management through operators. Then, in OpenShift version 4, you have to deal with the Kubernetes standard that is called CSI.
CSI, the Container Storage Interface, is an agreement between storage vendors: finally, all the most important vendors agreed to follow and to create standard APIs to manage the storage. It includes how to deliver storage classes, the ability to encrypt, how to create multiple CSI drivers, how to manage the CSI drivers per cluster. You can see in those tables, more or less, the APIs and API calls that are related to the CSI driver.
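As an illustration of the cluster-side result of that standardization, a StorageClass simply names a CSI driver as its provisioner; the driver name and parameters below are assumptions (the Ceph RBD CSI driver), not values given in the talk:

```python
# Sketch: a StorageClass delegating provisioning to a CSI driver.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

csi_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="csi-rbd"),
    provisioner="rbd.csi.ceph.com",  # CSI driver name (assumed example)
    parameters={"clusterID": "rook-ceph", "pool": "replicapool"},  # example values
    reclaim_policy="Delete",
    allow_volume_expansion=True,  # enables PVC resize through the driver
)
storage.create_storage_class(body=csi_sc)
```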
So this is a new challenge for the storage industry, to align to those APIs, and this is good because, you know, in the past everyone built their own driver, right? So, different technologies, very difficult to integrate. Now we have a standard, CSI, and you need to be aligned with those standards. So we are planning to deliver, respecting those standards, in OCP 4.2. So what is the plan? Is OCS 3.11 supported there? Unfortunately, no, but we have a new product version.
The version is OCS 4.2, which Chris will talk about now. But regarding the use case where you are already an OCS 3.11 customer: you need to migrate your workloads, right? We have a solution to migrate workloads from OCS 3.11 to OCS 4.2. The solution is a migration tool that is integrated into the next OpenShift version.

Christopher Blum: Thanks, Carlos. And I do see that some people have actually woken up and started listening, so that's wonderful. So, you've heard a lot about operators today, and I just want to quickly go over the framework again. The goal of an operator is not only to install something, but to actually help you in day-2 operations, so updates, backup and recovery, failover and restore; you shouldn't have to be worried about all these things anymore.
I think most of you probably understood that by now, but what's also important is that it's a native application for Kubernetes; we're not reinventing anything special here. And because OCP 4 wants us all to run everything as an operator, obviously OCS also runs as an operator. So what has changed now? We changed OpenShift from three to four and then, consequently, we also changed OCS from three to four dot two, and to spice it up,
we completely changed the backend for OCS. As I told you this morning already, OCS 3 was Gluster-based, and now we base it on Rook for Ceph, and we base it on NooBaa for the Red Hat Multi-Cloud Gateway, which will allow you to do cool things between clouds for your object storage. And, as Carlos already said, you cannot use OCS 3 on OpenShift 4.
A
That's
unfortunate,
but
I
can
ensure
you
that
the
wait
is
worth
it
because
with
4.2
openshift
4.2
you
will
be
able
to
use
OCS
and
if
you
were
to
use
open
ship
3
already.
There
is
a
migration
tool,
and
this
is
the
default
migration
tool
that
you
would
use
any
ways
to
port
over
your
parts
and
that
will
also
be
able
to
port
over
your
persistent
storage
from
the
cluster
based
OCS
to
the
Zephyr
new
based
osseous.
So
the
new
technology
stat
looks
like
this.
We have Rook, we have Ceph and NooBaa, and Ceph with Rook already has an operator, and we are basically putting an operator on top of that operator, as you heard earlier, that will manage all the storage underneath. So why did we move to Ceph from Gluster? That's a question I get a lot, but it does make sense. Ceph has already been supported from the very beginning of Kubernetes as a community effort, and we heard from customers that they also wanted to have an S3 endpoint inside of their OpenShift clusters, and now, with Ceph and NooBaa,
we can actually deliver on this demand very well. So I'm not sure how many people know Rook already, but it's also a project that is community-driven, so it didn't start inside of Red Hat, but we're now an active contributor in this community. Its main purpose is to bootstrap a Ceph cluster and make it available for OpenShift and Kubernetes, do the dynamic provisioning that I talked about earlier, and also support lifecycle changes.
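To give a feel for how Rook bootstraps Ceph declaratively, here is a hedged sketch of creating a minimal CephCluster custom resource, which the Rook operator reconciles into running daemons; the image tag and settings are illustrative examples, not the talk's configuration:

```python
# Sketch: asking the Rook operator to bootstrap Ceph by creating a
# CephCluster custom resource. All field values are example settings.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

ceph_cluster = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephCluster",
    "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
    "spec": {
        "cephVersion": {"image": "ceph/ceph:v14"},  # example image tag
        "dataDirHostPath": "/var/lib/rook",
        "mon": {"count": 3},  # three monitors for quorum
        "storage": {"useAllNodes": True, "useAllDevices": True},
    },
}
crd_api.create_namespaced_custom_object(
    group="ceph.rook.io", version="v1",
    namespace="rook-ceph", plural="cephclusters", body=ceph_cluster,
)
```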
For now there's the OpenShift console with the UI, and you can just request a new storage pool, and that will translate into a Ceph pool, for example; or you can request file storage or object storage or block storage as well, and then that will communicate, on the right, with the Rook operator. The Ceph daemons will do the actual work in the backend, and then, either using the Flex driver or the CSI driver, you can actually attach and mount this storage onto your pods.
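A "new storage pool" request like the one just described could correspond to a CephBlockPool resource that the Rook operator turns into an actual Ceph pool; a hedged sketch with example values:

```python
# Sketch: a pool request expressed as a Rook CephBlockPool custom resource.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "replicapool", "namespace": "rook-ceph"},
    "spec": {"replicated": {"size": 3}},  # keep three replicas of each object
}
crd_api.create_namespaced_custom_object(
    group="ceph.rook.io", version="v1",
    namespace="rook-ceph", plural="cephblockpools", body=pool,
)
```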
So that's just an overview. Down here we have everything that we need anyway for Ceph: the OSDs that store the data; the monitors that contain the cluster information; the managers that allow monitoring and communication with the cluster from the outside; and the MDS is for the distributed file system. And using the Rook layer on top, the blue one, we can now export those volumes to the pods and make them available.
Now, we've talked more than enough about Ceph, so I'm talking about NooBaa, because a lot of people don't know it already. It is our answer to provide a very enterprise-ready S3. So Ceph already implements an S3 endpoint that a lot of people use and that has been proven to work at very high scale already, but NooBaa adds very nice enterprise features, especially when we want to work with multiple clouds in parallel.
So one of these Multi-Cloud Gateway functionalities that is very nice is that we can have active-active read/write access between clouds. So you have one endpoint that your application uses; in the backend, you can use multiple clouds that your data is distributed over, and you can define how it's distributed. So we will call NooBaa, the product and the company, RH OCS Multi-Cloud Gateway, or, for short, just Multi-Cloud Gateway, and it will be included in the regular OCS package. And this is just an overview slide.
So you have your apps in OpenShift up there, they're using S3, and they can use different buckets, and every bucket can have a different configuration: one endpoint, multiple endpoints, whatever.
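On the application side, talking to the gateway is plain S3; a hedged sketch with boto3, where the endpoint URL and credentials are placeholders (in OpenShift they would typically come from a secret bound to the bucket):

```python
# Sketch: an application consuming the multi-cloud gateway's S3 endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-gateway.example.com",  # hypothetical gateway URL
    aws_access_key_id="ACCESS_KEY",  # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from OpenShift")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```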
So, summing this up: you will have OCS in the OperatorHub soon, and then you can just install it from there. You click on storage, and either you select that you specifically want to see storage from Red Hat, and then you only see that, or you will see the other storage offerings as well. Once installed,
you have access to the monitoring and management, so you can do everything from the UI. You probably remember this slide; we saw it this morning already: that's a healthy cluster and an unhealthy cluster where nodes failed, and you will have access to all of that, and it's included in the whole OpenShift metric system. It's hooked up to Prometheus, you can get alerts, so the whole thing that you need for day-2 operations as well. So, OCS 4.2: file, block and object support, Prometheus, and it's FIPS-compliant.
If you need that, we do support VMware and AWS right from the start, and we will add Azure and Google Cloud in later versions. And to sum this up again, this is a very similar slide to one we've seen earlier: if you use OCS, you can deploy your storage onto anything that you deploy anywhere, be it bare metal or VMs or inside of containers, and you can not only have the ReadWriteOnce but also the ReadWriteMany persistent volume claims, and obviously also use S3 as the gateway.
Carlos Torres: This part is very hard, like, because I'm in presales, so I have to tell you something about the SKUs. So, yeah, basically the SKUs will not change, but let's say, since this is a new architecture and we are offering, you know, the S3 multi-cloud gateway, probably you need to scale your workload. So I recommend you keep in contact with your Red Hat representative in order to evaluate, case by case, what you want to achieve. So if you want to stay with the same workload, basically it will be the same in terms of SKUs.
If you want to scale, of course, we need to make some architecture considerations. So, to summarize the facts, and thanks to Chris for coming: basically, when you are looking for storage to integrate with OpenShift, you need to evaluate, is my storage CSI-ready, so is it compliant with the CSI standard technology? We offer, with OCS 4.2, this standardization.
Then, if you are moving to cloud-native workloads, you want to target S3, because S3 is now the most adopted in the industry and is now replacing even file sharing, because it's the new and flexible way to share, you know: buckets are a very simple way to share data and ingest data through S3. So again, we offer NooBaa in the solution.
So it's a real multi-protocol storage solution, with file, block and object. And basically the SKU, if you are already an OCS customer, the SKU will remain the same, so you can get access to both products: if you stay on OpenShift 3.11, then you can get access to the containers of OCS 3.11; otherwise, if you move to OCP 4.2, you can get access to the new product version. With that, thanks, thanks for your time.