From YouTube: State of OpenShift Container Storage | Duncan Hardie Eran Tamir Red Hat | OpenShift Commons Briefing
Description
State of Container Storage
Duncan Hardie Eran Tamir Red Hat
April 2020
OpenShift Commons Briefing
Duncan: Excellent, thanks, Eran. So today we're going to talk to you about the state of OpenShift Container Storage. While that's going to be your main meal today, the meat of our sandwich, I thought I'd start with a nice little amuse-bouche, a wee aperitif, and just give you all an update on OpenShift storage itself.
Just a quick view of what we're up to: the main things that have happened in the product recently and where we're going to go. I thought I'd start with a quick refresher. I know Diane's really good at getting excellent, varied audiences here, and some of you are very familiar with OpenShift already and some maybe not so much.
Let me start by giving you an idea of where we're going with OpenShift storage. There are four themes. These have been pretty standard through my time with OpenShift storage and we haven't moved from them; I guess what's changed is where we're focused and which areas we're playing in particularly. So let me go into each one as we read through the slide itself. The first area is feature expansion. There's a little thing called the Container Storage Interface that has come to Kubernetes. I'm going to talk a lot more about that in a moment, but it has been an area we've looked at on the feature side, and we want to work upstream with the community to make sure that we have a complete spec in CSI and enable all the required features that we need.
You know, it's pretty comprehensive already, but certain things like resize or clone are still in beta, so we want to take that forward, make sure it keeps moving on and meets the expectations of an enterprise product like OpenShift. Second, flexibility. Obviously we want you to be able to use your storage flexibly, and particularly in the switchover that's coming, which we'll learn about, from in-tree drivers to CSI drivers, we want to make sure that we minimize any outages you experience and any lengthy operations, so that it is just a simple, clean process.
So that's another area we're looking at. The third theme for today is about enablement and choice. OpenShift and its ecosystem are excellent, and we want to continue that on the storage side. We've got partner certification programs going for CSI, and we want to continue to grow the options that are available in OpenShift. And, of course, there's what we're here to learn about today.
We also want to make sure that you have the best experience possible and can go straight to using OCS as your storage of choice. And the final one of our themes is around observability. We've had some time now to look at what storage metrics there are, what storage telemetry matters and what you want out of your operators, and that's an area, as a group, that we're looking at and aligning with. You know, if you'd talked to me three months ago, we'd have been very much in the first quadrant.
So, I mentioned CSI, the Container Storage Interface. There will be some of you listening in today who could teach me a lot more about it, I know, and some of you may have been using it for a while. So, a quick lesson: what is CSI, and why did we do it? If you go back to Kubernetes before CSI, you had the idea of in-tree drivers for storage. Those were storage drivers that were part of the Kubernetes core code.
A
They
were
embedded
into
there
and
while
there
was
kind
of
some
volume
plug-in
systems,
because
these
things
were
entry-
and
this
meant
that
you
know
vendors
wanted
to
add
support
for
their
storage
systems
or
even
just
fix
a
bug.
They
were
forced
align
with
the
kubernetes
and
hence
the
open
ship
release.
In
addition
to
that,
some
of
the
third-party
storage
code
that
went
into
that
core
code-
these
binaries,
could
be
problematic.
There might be security issues there, or maybe reliability issues, and it was really hard for the maintainers not only to maintain it but also to test it; they might not have access to the storage they'd need. So the Storage SIG thought long and hard, and the result was CSI. That's essentially a standard for exposing arbitrary block and file storage systems to Kubernetes. Well, it's a little bit more than just Kubernetes: the CSI goal is to work with any container orchestration system, but we're here to talk about OpenShift.
So let's focus on our Kubernetes. Today, with CSI, you have a truly extensible volume layer, and third-party storage providers can write their plug-ins and update them when they want; they get out of being part of the community's release system. And on the plus side for Kubernetes core, it takes some of that risk out and makes Kubernetes core much more secure and reliable.
So what does that specifically mean for OpenShift? We've been doing CSI since the 4.2 release, back in August last year, so it's been around for a while. The API has been in there, and our focus up until now, as it said on the previous slide, has been around enabling partners. We've been working closely with the OCS team, who have their CSI driver already available, and we've been working with our partners to educate them and enable them to do that. And we decided to do it that way.
Rather than move our current in-tree drivers across to CSI first: the mandatory switchover from in-tree drivers to CSI is still a few releases out, so we had some time to get the base-level architecture right. We also wanted to make sure that we were able to do this smoothly, so the migration piece comes before we enforce it. So what does that mean? Let's have a look at dynamic provisioning with the CSI driver itself. This diagram is pretty self-explanatory, but let me go through it anyway.
Just to give you an illustration: you've got your CSI driver written, you've installed it, everything's good to go. What actually happens behind the scenes? Your user will create a PVC, a persistent volume claim, on the API server. A thing called the external provisioner will get an event that this PVC has been created, and what that external provisioner then does is initiate a CreateVolume call into the CSI driver itself. Then the nice bit happens: the CSI driver talks to the storage back end and the volume is created. Once that's happened, the CSI driver returns the volume to the external provisioner, which then passes the PV, the persistent volume, back up to the API server. That's bound, and you're all done. That's how this new CSI API in Kubernetes now allows you to plug in and do dynamic provisioning, as it says there. So it's a really nice solution.
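The flow just described starts from two objects the user side sees. This is a minimal sketch: the StorageClass name and the CSI driver name (`example.csi.vendor.com`) are hypothetical placeholders, not any specific vendor's driver.

```python
# Minimal sketch of the objects involved in CSI dynamic provisioning.
# The driver and StorageClass names are hypothetical placeholders.

# A StorageClass tells the external provisioner which CSI driver to call.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast-csi"},
    "provisioner": "example.csi.vendor.com",  # matches the CSI driver's name
}

# The user only creates a PVC; the external provisioner sees the event,
# issues CreateVolume to the driver, and binds the returned PV.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "my-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "fast-csi",
    },
}

# The claim selects the driver indirectly, via the StorageClass name.
assert pvc["spec"]["storageClassName"] == storage_class["metadata"]["name"]
```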
We've seen some success with it already. So now you've seen how it works; how do you get it in OpenShift, or how do you work with a CSI driver in OpenShift? There are three options here. You can go straight upstream and have a look: there are quite a lot of CSI drivers up there, definitely 30-plus last time I looked, from various different vendors. You can go and download those just like you would any other upstream project and layer them on top of OpenShift. The only downside there is:
You know, this is something you've taken from upstream, so it's between you and the upstream maintainer to look after it, maintain it and take responsibility for that piece. The nice thing about OpenShift now is that we've re-engineered it to be based on operators. A discussion of operators itself would take up all of Diane's session, and she would glare at me, and I'm very scared of Diane's glares, so it's safe to say that operators are a fantastic revolution.
Operators are how we move forward as far as OpenShift is concerned, and we're leveraging them with CSI drivers. Any CSI driver that wants to install on OpenShift more officially, we're mandating that it go inside an operator, and there's a great OperatorHub community out there where you can raise a request and submit your operator, and it has great benefits. I always badly describe operators as groupings of containers with operational intelligence built in.
So you can do that with your CSI driver, and it's a really nice way; as you can see on this diagram, our customers can just search for the storage piece, see the CSI drivers, and it makes provisioning really easy. There is a third option, which is to certify your CSI driver; that's a pilot program we're just coming out of. In fact, we've just had HPE as our first partner vendor to certify their CSI driver with us. Essentially this takes the operator certification that we have today, with its security checks and image scans, and adds in a simple CSI test based on the upstream one that's already available; we just check that the API is working. And what does it mean to get a certified operator? Well, it means Red Hat will fully support it.
You know that we've tested it, and it will also appear on what we sometimes call our embedded OperatorHub that you get with OpenShift itself. So I'll stop going on about CSI now and move on to OpenShift 4.4, which, depending on when you're listening, is either just around the corner or has just come out. What have we done in storage? You can see on the right-hand side; I think we've got excellent coverage now of all the main storage offerings that we need to have.
This has been bolstered massively by OCS coming along with its storage offerings; again, you're going to hear a lot more about that in a moment. On our side, between 4.3 and 4.4 we've been looking at bringing in snapshot, restore and clone as tech preview, so you can have a play with that.
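As a sketch of what that tech-preview snapshot and restore flow looks like: a snapshot object points at a PVC, and the restore is a new PVC whose data source points at the snapshot. Names here are illustrative, and the snapshot API group was still in beta at the time (`snapshot.storage.k8s.io/v1beta1`).

```python
# Illustrative snapshot-and-restore objects; names are placeholders and
# the snapshot API group shown was still in beta at the time.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "data-snap"},
    "spec": {"source": {"persistentVolumeClaimName": "my-claim"}},
}

# Restore is a new PVC whose dataSource points at the snapshot; a clone
# would instead point dataSource at another PVC.
restored_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "my-claim-restored"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "apiGroup": "snapshot.storage.k8s.io",
            "kind": "VolumeSnapshot",
            "name": "data-snap",
        },
    },
}
```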
We also did a sidecar rebase. For those of you not aware, when you develop a CSI driver there are a whole bunch of sidecars that make some of the more common tasks much easier to do, which you can just reuse, so we rebased against what was upstream. And the CSI test suite that I alluded to is now included in 4.4.
So if you're developing a driver and you want to use it, it's just there and easy to use. As always with Red Hat, we've continued our focus on upstream work, and the team has certainly done a lot more in this space. And then, finally from my side, before we get on to the main course: what about a roadmap? Well, we've talked about 4.4 already, but in the medium term it's CSI, CSI, CSI, really, to be honest. There are a few things that didn't move with us as we moved into GA.
So we'll pick those up: resize, cloning and raw block. We're starting our own work now on the CSI drivers, so the AWS EBS driver will have a tech preview, and we're going to have ephemeral, or inline, volumes as a tech preview as well. For those of you not used to ephemeral volumes, these are volumes that don't persist after the pod ceases to exist, so they're useful for storing configuration information or as scratch space for applications.
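A CSI inline ephemeral volume is declared directly in the pod spec rather than through a PVC, which is why it shares the pod's lifetime. A minimal sketch, with a placeholder driver name and image:

```python
# Sketch of a CSI inline (ephemeral) volume: declared directly in the
# pod spec, it lives and dies with the pod. Names are placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "scratch-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "registry.example.com/app:latest",
            "volumeMounts": [{"name": "scratch", "mountPath": "/scratch"}],
        }],
        "volumes": [{
            "name": "scratch",
            # No PVC involved: the volume is provisioned inline and
            # removed when the pod ceases to exist.
            "csi": {"driver": "inline.csi.example.com"},
        }],
    },
}
```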
There's a good community upstream, so by all means go and have a look at it. Then there's a couple of enhancements that we're going to do. Local storage discovery was something that came from the OCS team, actually; that's to look at a node and see what storage is available on it. And then recursive permissions: that's around the recursive permission changes for fsGroup and SELinux.
Those have taken a long time in the past, so we're looking at how we can speed that up. Longer term, and the further out we go the more this can change, so please take this with a grain of salt, we're looking at migration, that clean switchover from in-tree to CSI. I already talked about snapshot and restore; everyone's looking for that.
For that we'll work upstream with Kubernetes, because both the CSI API and the Kubernetes API objects themselves are still in beta, so we're working to move them forward. More cloud providers, ephemeral storage, and working more with third-party vendors. The good thing is we've had a bit of a head start in getting CSI going and working in a good state.
That's given us the chance now to look at more RFEs. And that's it from the OpenShift side. What I want to do now is hand you over to Eran, who will take you through the OCS pieces. Over to you, Eran.
Eran: So just to recap, for those who don't know what OpenShift Container Storage is: OpenShift Container Storage is another product that Red Hat releases, and what we are doing is providing highly scalable, production-grade persistent storage. That means we want to be there to support your applications in OpenShift, provide them a scalable solution, and also provide you a very easy way to deploy and maintain it. Both onboarding and day-to-day maintenance are made very easy.
With OpenShift Container Storage we integrated the product into the OpenShift dashboards, and I will show you in a second all the benefits of having a well-integrated product in terms of alignment with the releases, and leveraging features like Duncan just mentioned, features like snapshots, cloning and so on.
These are capabilities that OpenShift provides for the application, and we provide them for the data. We want to make sure that a customer can run on any infrastructure that is available for OpenShift, to ensure that there is no vendor lock-in. You have the freedom to choose: you can start on-prem, have your dev and test running in very simple environments, either in the cloud or on-prem, and then decide where you're actually going to run production, and again OpenShift Container Storage provides the same experience.
And again, it will all work exactly the same regardless of where you're working. The third part, which is very important, is this capability: whenever you have a new project, you know where you start, but you don't know if it will succeed or not, so you always want to start small and then have the opportunity to scale. That's also linked to the idea of where do I run: maybe I will start on-prem, start very small, and then the project is successful.
The way that we look at it: wherever you have OpenShift, regardless of the environment, we will provide the block, file and object services across those environments. Let's take AWS, for example: OpenShift Container Storage will actually use EBS, for example, or any other storage; if it's local drives, we can use those as well.
So we actually provide the ability to divide between the concerns of the OpenShift cluster manager and the developer or DevOps point of view. The same goes for file and object: regardless of what your current data layer is, whether AWS S3 or Azure, from the development point of view the API is always AWS S3 compatible. That means that regardless of where you're running your application, it will behave the same: you don't need to change your code, and you will get the same experience.
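One way to picture the "no code change" point: if the application resolves only an endpoint and credentials from its environment, moving between an on-prem S3 endpoint and AWS S3 is a configuration change, not a code change. A stdlib-only sketch; the variable names and default values are illustrative, not a documented OCS interface.

```python
import os

# Sketch: the app reads its S3 endpoint and credentials from the
# environment, so the same code can target the Multicloud Object
# Gateway on-prem or AWS S3 in the cloud. Defaults are illustrative.
def s3_config():
    return {
        # On-prem this would point at the object gateway's route; on
        # AWS it would be the regional S3 endpoint instead.
        "endpoint_url": os.environ.get("S3_ENDPOINT", "https://s3.example.local"),
        "access_key": os.environ.get("S3_ACCESS_KEY", "dev-key"),
        "secret_key": os.environ.get("S3_SECRET_KEY", "dev-secret"),
    }

cfg = s3_config()
# An S3 client (e.g. boto3) would be built from exactly these three
# values; nothing else in the application changes between environments.
```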
I talked about simplicity and ease of use. As Duncan mentioned, the idea of having operators and OperatorHub really makes everything much easier. You can install OpenShift Container Storage directly from OperatorHub: simply click and it will get installed. Once you install it, and I'm linking this to the integrated behavior I mentioned before, you get all the information you need from the storage layer directly in the OpenShift dashboard. So in this case you can see the persistent storage dashboard; everything is well integrated and you can see the status.
It's also very easy to scale: simply decide what increment you want to scale with, and it's easy. I want to elaborate a bit on the S3 service, which introduces new capabilities in terms of hybrid deployments and data management in general. This service is provided by a component we call the Multicloud Object Gateway, and the idea again is that it's standalone: a very lightweight pod that you have whenever you deploy OCS. You don't need to change anything; you don't need to configure anything.
B
It's
you
get
a
service
out-of-the-box
and
you
can
scale
locally
by
using
local
storage
that
you
have
on
Prem
or
in
the
cloud
and
later
on.
You
can
start
with
a
mirroring
data
to
enjoy
better
portability
for
the
applications.
You
can
use
it
for
backup
and
you
can
use
it,
of
course,
for
high
availability
and,
in
short,
that's
the
way
that
it
looks
like.
On
the
left
side.
We
have
the
application,
it's
consuming,
s3
compatible
API
provided
by
openshift
container
storage
on
the
right
side,
as
I
mentioned
before,
for
the
block
and
fine.
We also have the separation here, so the admin can decide where the actual data is. The data can be on-prem, or the data can be in cloud-native storage, wherever you are working, to save on egress cost. Once you have this configuration you can of course change it and add more layers to it: add tiering, add mirroring between these locations, and have hybrid buckets for part of the data.
In short, that's the way the Multicloud Object Gateway provides, or digests, the data, and it's important to talk about it because it also introduces a security advantage. Whenever an application is writing data, there is a fragmentation process in the application, plus compression and encryption; at the end we have multiple encrypted chunks that are spread across the underlying storage.
This is just to explain how we look at the various services that OpenShift Container Storage provides. We have block: at a high level, everything which is transactional, databases, messaging and so on, goes with a persistent volume based on the block offering. If we have more than one application consuming the same data set, we'll use the shared file system; again, from the application point of view it's a simple PV claim from OpenShift Container Storage, but a different type of storage behind it.
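The two PV claim flavors just described differ only in storage class and access mode. The class names below are the defaults an OCS install typically creates (`ocs-storagecluster-ceph-rbd` for block, `ocs-storagecluster-cephfs` for the shared file system); treat them as illustrative if your cluster names them differently.

```python
def ocs_pvc(name, storage_class, access_mode, size="20Gi"):
    """Build a PVC dict; only the class and access mode vary by workload."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": [access_mode],
            "resources": {"requests": {"storage": size}},
            "storageClassName": storage_class,
        },
    }

# Transactional workload (database, messaging): block-backed, one writer.
db_pvc = ocs_pvc("db-data", "ocs-storagecluster-ceph-rbd", "ReadWriteOnce")

# Several pods sharing one data set: shared filesystem, many writers.
shared_pvc = ocs_pvc("shared-data", "ocs-storagecluster-cephfs", "ReadWriteMany")
```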
How different is it from OpenShift's internal storage? The short answer is that it's the same: the same capabilities, but a very opinionated deployment, meaning that we wanted to keep it very, very simple for customers. So we took a very opinionated approach and provided all the benefits in a very accessible and consumable way inside the OpenShift product. The way we did it is by using Rook, which is another open source project that helps us do exactly that.
B
I'm
taking
the
Duncans
word
about
what
is
operator
and
the
fact
that
it's
a
bunch
of
intelligent
containers,
that's
exactly
that!
That's
what
brings
today
to
the
game
and
it
helps
us
to
magically
deploy
self-monitor
it
correctly
and
make
sure
that
we
we
keep
on
track
in
terms
of
their
cell
filling
and
other
features
that
we
have
intact.
The third component is the NooBaa part. The Red Hat acquisition happened at the end of 2018, and it's open source as well; it provides the abstraction layer for all the data services. I want to jump to this diagram, not specifically to deep-dive on what we have here in terms of the components, but mainly to look at the components we have at the top.
So the idea is very similar to a volume claim, but for objects, meaning that by adding several lines to the application, the developer can easily get an endpoint, an access key and a secret key for the application, read them dynamically and start using them dynamically. That means we wrap all the old interaction between the application and the object service: we took the connectivity part and provided it in a dynamic way to the application. That's the way it works: the application makes a bucket claim to the OCS operator.
The data is dedicated only to this application, and we bring this information back to the application; from that point the application can start writing to and reading from the bucket underneath. That means you can decide what type of bucket to use: do we use the default backing store, which is the local storage, or do we use something more complex, to ensure mirroring and so on.
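The bucket-claim flow can be sketched like a PVC, but for objects. The `ObjectBucketClaim` kind and the `openshift-storage.noobaa.io` storage class are the Multicloud Object Gateway defaults; the application then reads the endpoint and keys from the ConfigMap and Secret the operator creates. The secret value below is an invented illustration of that decode step.

```python
import base64

# An ObjectBucketClaim is the object-storage analogue of a PVC.
obc = {
    "apiVersion": "objectbucket.io/v1alpha1",
    "kind": "ObjectBucketClaim",
    "metadata": {"name": "my-bucket-claim"},
    "spec": {
        "generateBucketName": "my-app",
        # Default MCG class; a different class could select a mirrored
        # or cloud-backed backing store instead of local storage.
        "storageClassName": "openshift-storage.noobaa.io",
    },
}

# The operator answers with a Secret holding the credentials; Kubernetes
# Secret data is base64-encoded, so the app decodes it, as shown here
# with an invented sample value.
secret_data = {"AWS_ACCESS_KEY_ID": base64.b64encode(b"example-key").decode()}
access_key = base64.b64decode(secret_data["AWS_ACCESS_KEY_ID"]).decode()
```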
So that's another solution that we are working on. The last one is the Metro DR disaster recovery solution, which is mainly targeting, or leveraging I would say, availability zones. That's something we already have as part of our support for availability zones today; here we actually want to also support smaller capacities and smaller data centers, and we want to improve the cost model.
So we have new features around that. Just to quickly recap what we need in order to have this, when we're looking at this challenge: we have data mirroring, which is what we need to support from the storage point of view; we want to be able to synchronously mirror data across locations. We need snapshots, which are coming soon from OpenShift and Kubernetes. Of course we have backup, the component that takes advantage of the snapshots, and we have data replication.
Next, we have multiple areas of, I would say, interest that we want to tackle. The first one is platforms; it's a long-running effort. We want to make sure that, in addition to our on-prem, AWS and VMware support, we can also support OpenStack, Azure, Google, IBM Cloud, Alibaba, RHV and so on. We have a dedicated team in order to make this delivery faster.
B
We
want
to
improve
our
security
solution
for
customers.
So
currently
we
have
encryption
at
rest
for
AWS
customers
and
cloud
customers
in
general,
but
we
want
to
make
sure
that
we
can
provide
it
on
bare
metal
and
VMware,
and
so
we
want
to
add
encryption
interest
in
transit,
which
is
unrelated
to
the
underlying
infrastructure.
B
Once
the
kubernetes
story,
once
OpenShift
provides
a
kms,
integration
will
also
integrate
it
into
the
storage
to
leverage
and
centralize.
The
key
management
systems
on
the
multi
cloud
object,
gateway
aspect.
We
want
to
move
to
the
next
phase,
which
is
multi
cluster.
As
you
can
see,
multicast
is
a
repeating
term
for
the
next
versions
that
we
have
both
in
data
protection
and
in
multi
cloud
object
gateway.
So
we
want
to
have
a
good
solution
for
that
as
well.
For the Multicloud Object Gateway, we are going to introduce a caching mode, mainly for AI/ML with a huge data set in a centralized location, and the ability to be close to the ingestion area, to the compute area, with the relevant data. Another step is the integration with the multi-cluster dashboards that we have, and also improving something we call namespaces, which is a proxy for multiple cloud providers; again, we'll elaborate on that in the next session. Scalability is another step: we want to introduce external mode soon. External mode means that we have a huge, centralized storage cluster which is scalable and serves multiple OpenShift clusters. It's managed a bit differently, and it is very powerful in terms of all the knobs that you can have, but it allows OpenShift...