From YouTube: London OpenShift Commons Gathering 2019 Lightning Talk Portworx Container Data Orchestration
Hi everyone, my name is Joe Gardner. I'm the technical lead for Portworx in EMEA. So what is Portworx? We are a cloud-native container data orchestration platform for OpenShift. What that means is that we integrate natively with the OpenShift platform, driven through all of the existing storage primitives in that platform: storage classes and PVCs. There are two things that we do within this data management platform that are unique, and the fact that we can do both together in the one platform is what makes Portworx special.
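As a sketch of what that native integration looks like, a Portworx-backed volume is requested through an ordinary StorageClass and PVC. The provisioner name and the `repl` parameter below follow Portworx's public documentation, but treat them as assumptions and check them against the current docs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db
provisioner: kubernetes.io/portworx-volume   # assumed Portworx in-tree provisioner name
parameters:
  repl: "3"   # keep three replicas of each volume across the cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-db
  resources:
    requests:
      storage: 10Gi
```

Nothing here is Portworx-specific from the application's point of view: the workload just binds a PVC, which is the point of driving everything through the platform's existing storage primitives.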
So the first thing is that we have a set of capabilities around data operations, which is really how you manage the lifecycle of a container volume. The second is this idea of data workflow: how can we move data and applications between multiple OpenShift clusters, potentially running in separate regions or data centers?
So why is this important? Well, increasingly we're seeing stateful workloads being containerized and run on OpenShift. Some of the applications we see our customers using are, obviously, databases, but increasingly also big data workloads. We see lots of use cases around dev tooling, things like Jenkins, and what you might call legacy databases, assuming no one from IBM is in the room; I suppose you're mostly Red Hatters, so that's a less sensitive subject. These are the kinds of stateful workloads that we validate for running on Portworx volumes. Cool.
So let me go back to the two points, the two things we do: data operations and data workflow. Data operations is really addressing the question of how I can operate stateful applications, but do that with container-volume granularity. What I mean by that is, you could potentially have a Cassandra ring running across your OpenShift cluster, with volumes on different underlying block devices.
How do you then take a snapshot with consistency across that whole cluster? You need to be able to do that at exactly the same point in time for every member of that ring. It's those kinds of operations, where you need to be able to operate at the container-volume level, where Portworx comes into play. And it's not just snapshots.
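As one hedged illustration of that cluster-consistent case, Portworx's Stork component documents a GroupVolumeSnapshot resource that snapshots a set of PVCs together. The API group and field names below follow that documentation, but treat them as assumptions:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-group-snap
spec:
  # Snapshot every PVC carrying this label at the same point in time,
  # so all members of the Cassandra ring stay consistent with one another.
  pvcSelector:
    matchLabels:
      app: cassandra
```

Selecting by label rather than naming individual volumes is what lets the snapshot cover the whole ring, however many members it has.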
So there's a whole set of capabilities that are sort of SAN-like, all of them, as I said, at the container-volume level. We can do replication across the cluster for rapid failover of your stateful workloads. We have a fully automated snapshot and restore process, which can also be pushed off-site into object storage. Shared volumes allow you to have that read-write-many configuration, which is great for config sharing, Jenkins, and content management systems. Per-volume encryption with separate keys is very popular with financial services and government, as you can probably imagine. And there are a couple of other capabilities around operating the platform, live volume resizing, for example.

So then, on the data workflow piece.
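A couple of those per-volume capabilities are typically switched on through StorageClass parameters. This is a minimal sketch; the parameter names follow Portworx's documentation but should be verified rather than taken as definitive:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-shared-encrypted
provisioner: kubernetes.io/portworx-volume   # assumed provisioner name
parameters:
  repl: "2"       # two replicas, for rapid failover
  shared: "true"  # read-write-many shared volume (Jenkins, CMS, config sharing)
  secure: "true"  # encrypt the volume; keys can be managed per volume
```

Because the options live on the StorageClass, the platform team decides the policy once and developers simply pick the class in their PVCs.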
So what we're seeing increasingly is that, without doubt, we're productionizing our OpenShift platforms, and with that we now have this kind of standard development target in our businesses.
We're now entering this multi-cloud, multi-cluster world, and with that comes a whole new set of challenges around how I can migrate my applications and my data to another OpenShift cluster. So we have the ability to snapshot an entire group of container volumes, potentially an application stack or a clustered database. We can also extract the deployment manifests from the Kubernetes API, ship the whole thing across to a second cluster, and then bring it up. So, essentially driven through YAML, you can say: I want to move my Postgres database to a secondary cluster, maybe running in a cloud. Apply the YAML spec, and a few minutes later it's up and running in the secondary cloud. Okay, cool.
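The "driven through YAML" step can be sketched with Stork's Migration resource, which is how Portworx documents this workflow; the API group, field names, and the `remote-openshift` ClusterPair name below are assumptions for illustration:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: postgres-migration
spec:
  clusterPair: remote-openshift  # a pre-created ClusterPair pointing at the target cluster
  namespaces:
    - postgres                   # migrate everything in this namespace
  includeResources: true         # ship the deployment manifests along with the volumes
  startApplications: true        # bring the application up on the secondary cluster
```

Applying this one spec covers both halves described above: the group snapshot of the data and the extraction and replay of the Kubernetes resources on the target.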
So where does this fit into a business? Well, pretty much everyone I meet with, whether it's a customer or someone testing Portworx, is talking about building a platform in their business: building that common deployment target where the developers can self-service and the platform team can just focus on their container platform. So we fit into that model.
Providing a platform with those two kinds of capabilities has driven a lot of adoption. These are some of our customers, all in production, and as you can see, these are not organizations who take data lightly. They will not accept data loss, and the fact that Portworx is in production at these organizations says a lot about the solution.