From YouTube: OpenShift Commons Briefing: Running Databases in Production on OpenShift - Michael Ferranti Portworx
Description
Recorded live on Dec 5th 2018
OpenShift Commons Briefing
Running Databases in Production on OpenShift - Michael Ferranti Portworx
A: Well, hello everybody, and welcome again to another OpenShift Commons briefing. This time we're really pleased to have Michael Ferranti from Portworx, who's going to be talking today about running databases in production on Red Hat OpenShift. This is one of the hot topics, so hopefully we'll get a great overview from Michael today and find out what's coming from Portworx in the future. So I'm going to let Michael introduce himself and take it away, and there will be live Q&A at the end.
B: Great, well thanks, Diane, I really appreciate it. I'm excited to be here again. As Diane introduced me, I'm the VP of product marketing at Portworx, and I'm really excited about all that the community is doing to enable stateful applications to run on Kubernetes and OpenShift. I think Diane is right that this is a really hot topic right now, and we'll talk about some of the reasons for that and some of the industry trends that underscore it. But I just wanted to start by saying:
B: I think the fact that we're even here talking about this topic is an indication of a lot of work that the community is doing around projects like CSI, the Container Storage Interface, being embedded into Kubernetes and making it a lot easier for users to use different storage options in a plug-and-play way. I just wanted to say a big thank you to all of those in the community who are involved in making those projects a reality.
B: We're on the day after that announcement, so I was going to take this slide out because it's maybe not relevant, but I thought, you know what, let me just leave it in and make one point, which is to say that Portworx is a platform that can run anywhere. That means we have customers who are running in IBM's cloud, in IBM Cloud Private.
B: We have customers that are running on OpenShift, and from our perspective, customers make a scheduler decision based on the unique requirements of their app, and Portworx provides a storage layer that works with that scheduler environment. So in a time of heavy mergers and acquisitions, sometimes users are concerned about what this means for the future. Should I make a choice now? And the point that I make here on this slide is simply this:
B: Portworx works very, very closely with both IBM and Red Hat. So as a user, you can expect that we will continue to work with both of those companies, and then with the single organization post-merger. As changes are made to application platforms, based on what the team decides (I'm sure they haven't decided what they want to do yet), whatever eventually happens, we will support that environment as well.
B: Okay. So the title of this presentation is running production databases on OpenShift, and I'm going to talk about how Portworx enables that. But let's back up a second and just talk about picking the right storage for OpenShift to begin with, because one of the things I want to shed some light on is how Portworx fits into the OpenShift ecosystem. There are many different workloads, and different workloads require different types of solutions.
B: We always want the customer to pick the right solution for the job, so I have two examples to underscore this. The first says: okay, what if we have a file-sharing and shared-configuration use case on OpenShift? Either traditional file storage, or maybe I have a Jenkins environment where I need to share some configuration between my different Jenkins worker nodes.
B: The requirements for that type of use case are typically what we call a shared multi-writer volume, meaning a single data volume that can be accessed from multiple pods, where they all have read access but also write access to that shared volume. This is just a textbook case of a file storage use case, and on OpenShift the options for this type of use case are really rich.
B: OpenShift has something called OpenShift Container Storage, which is based on GlusterFS. You could use something like NFS, or if you're running OpenShift in the cloud, as many of our customers are, you could use Amazon EFS, or Azure has a file storage service. There are lots of options on OpenShift for file-based use cases.
B: Let's look at another use case, which is databases, and we could extend that to database-like services: queues, key-value stores, analytics, and streaming. The requirements here are not shared multi-writer, but rather single writer, high I/O, and throughput. Just to use an analogy: if I asked you to run Postgres on Amazon, what are you going to do?
B: You're going to spin up an EC2 instance, which is your VM, and you're probably going to attach a block device, an EBS volume, to it. What that means is that the storage type for databases, where we need single writer and high I/O and throughput, is block: not file, but block. And so the question is, what are the options on OpenShift? Can I use, for instance, OpenShift Container Storage for these block use cases, specifically for databases?
B: I think the vision there is yes, you can, today. The GlusterFS team is doing a lot of innovative work around making Gluster work well for block use cases, and there are some primitives in Gluster now called gluster-block, which is really promising. But today Red Hat is saying that the gluster-block interface should only be used for OpenShift logging and OpenShift metrics. I just wanted to make sure that was still accurate, so I checked this morning, and as of 3.11 that's still the official stance of Red Hat. So without even getting into how Portworx complements or provides features and functionality that might be different from OpenShift Container Storage, you're in a situation as a user where, if you're following the best-practices recommendations from Red Hat, GlusterFS and OpenShift Container Storage are really for those file use cases, not the block use cases. And so this is where Portworx comes in, complementing the other tools within the OpenShift ecosystem with a solution specifically designed for that block storage use case. Portworx is really proud to be Red Hat certified on OpenShift, we're really proud to be in the Red Hat Container Catalog, and to partner with Red Hat to provide a single platform that can run all types of applications on OpenShift, including those mission-critical databases. And again, this isn't about better or worse or anything like that; it's about the particular use case.
B: Okay, so getting into a little bit more about Portworx: here's a slide of some of our customers. The reason I show this slide is because the breadth and depth of use cases for containers just seems like it's exploding by the day; there are just so, so many. And the reason I put this slide up is to say that we have customers across all verticals and industries.
B: We have customers that run on-prem and in the cloud, and within the cloud, that run in all the different clouds, that run in hybrid environments and multi-cloud environments. Some customers run mission-critical SaaS applications in containers running on Portworx; others are building internal platforms-as-a-service. We've seen all of those use cases, so no matter what angle you come at OpenShift and stateful services from, Portworx can help, and these customers are an indication of all of the different variations that we've seen across the market. Okay, so I want to talk a little bit about some of the technical problems that Portworx solves, but before doing that, I want to talk a little bit about the business value. Why should I even care? Why should I keep listening to this presentation? The first point I would make is that Portworx helps you accelerate time to market.
B: For container projects, if you have a microservice or a use case or a platform that requires developers to use databases, queues, key-value stores, again that list of stateful services that need block storage, Portworx is simply the fastest way to get up and running. Not only for what we call day one, which is deployment and configuration, but day two, which is what happens when infrastructure needs to be upgraded, or servers fail, or networks are partitioned, and that app needs to stay up and running. Because Portworx was built from the ground up for containers, as opposed to some other storage systems that came of age during the time of virtual machines, which don't tend to be as dynamic and certainly were not orchestrated by Kubernetes, a lot of the complexity is removed: Portworx was built from the ground up to be managed and deployed via those systems.
B: So we've seen customers who have spent a year or more in research and development trying to get various block storage solutions or other storage solutions to work for containers, and then they discover Portworx and are able to get up and running in a matter of weeks. Another benefit would be avoiding vendor lock-in, and I think here Portworx and Red Hat have the exact same approach to how to enable applications to run in any environment in a consistent way, with choice and without lock-in.
B: Whether in the cloud or in my own data center, we support a model where you can deploy Portworx in any environment, and you can move data between any of those environments, so you're not locked into a single physical operating environment and you're not locked into a single provider. And I would extend that, very transparently, to the fact that we are heavily invested in the CSI project I mentioned at the top of the call; CSI is all about having pluggable storage interfaces.
B: So I would even extend this to Portworx itself. You're not even locked into Portworx, because of those open interfaces that we support, CSI prime among them: if you pick up Portworx and you decide that it's not the right tool for the job, well, then you can move to another solution and not be locked in, even at the Portworx level.
B: The last point that I'll make is that, in addition to the agility benefits and in addition to the flexibility benefits, our customers see dramatic cost savings from an infrastructure perspective. These cost savings are conservatively on the order of 30 to 60 percent, and they get those savings from both compute and storage cost reductions. So, just one quick example of how that works: we have many customers who run Kafka in production. Kafka is a messaging service.
B: It's very popular with IoT use cases, streaming data, and analytics apps. For a given level of reliability, based on the SLAs of that application, we consistently see customers, instead of having to run five Kafka brokers, which requires five containers and the CPU and memory to run those containers, being able to run only three Kafka brokers: a 40% compute saving. And we have customers who are extending that across all of their applications for a platform-as-a-service.
B: Rather than running a three-node MongoDB cluster, for the same level of reliability they can run a single MongoDB pod and again get the same level of HA with a third of the compute. That compute savings is really important. It's a little bit counterintuitive, because I'm talking about Portworx as a storage solution.
B: How can you possibly help me save on compute cost? Well, I'm going to show you in a minute how you can run fewer pods for the same level of reliability, thanks to the way that Portworx replicates data across operating environments. And then, from a storage perspective, we help you save money by optimizing the types of storage that you run your applications on. Sometimes this is called tiering: you have your primary production workloads running on your fastest, typically most expensive, storage; you have test environments running on less expensive storage; and then you can offload backups and things like that to object storage. We make all of that very, very easy to do. Okay, so that's a little bit about the why. Now let's get into the nuts and bolts of the Portworx solution itself. I mentioned this at the top.
B: What makes us unique is our deep scheduler integration: the fact that you can use and control Portworx, and manage all of your stateful services running on Portworx, directly via that scheduler, in this case OpenShift, but also native Kubernetes. And you can use Portworx to run and manage any stateful service that requires block storage. I have four good examples here, but we could add Cassandra, we could add MySQL, we could add Kafka, we could add TensorFlow or HDFS: any stateful service that runs on Linux that needs block storage, you can run and manage via OpenShift with Portworx. You can do it on any cloud or on-premise data center, and, very important in this next wave of cloud and container adoption, across all of these environments. And finally, you can do it at scales that are consistent with Kubernetes scales, meaning up to a thousand nodes per cluster. Portworx scales with your applications.
B: So let's look at a use case to see how some of these key features and capabilities work within an OpenShift environment. Let's say we want to run a highly performant and resilient MySQL database on OpenShift. How would we do that? Well, the first thing that we would need to do is install OpenShift.
B: Excuse me, install Portworx on OpenShift. I mentioned before that Portworx was built from the ground up for containers. What this means is that Portworx itself can be installed, configured, and managed via OpenShift. Just like I'm going to manage MySQL via OpenShift, I'm going to manage Portworx via OpenShift as well, which means installing it is `oc apply -f` with a YAML file that we help our customers create at a website called install.portworx.com.
B: You can visit that, now if you're into multitasking, or after the webinar, and you'll see that you basically put in some configuration options: the IP address of your etcd cluster, or you can specify that you want Portworx to provide a key-value store for you, so you don't have to manage it yourself; what type of storage media you're going to be running on; and what scheduler you're using (we have some optimizations that we make for OpenShift).
B: Basically, you click a bunch of buttons, type in a couple of configuration variables, and you get this YAML file, and when you do `oc apply -f` with that YAML, Portworx is now installed on your OpenShift cluster. It's kind of anticlimactic, it's so easy.
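To make that concrete, here is a hedged sketch of what the generated spec can look like. The real manifest produced by the spec generator is much longer, and every value below (image tag, etcd endpoint, device name) is illustrative rather than taken from the talk:

```yaml
# Generated at install.portworx.com, then applied with: oc apply -f px-spec.yaml
# Illustrative fragment only.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: portworx
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: portworx
  template:
    metadata:
      labels:
        name: portworx
    spec:
      containers:
        - name: portworx
          image: portworx/px-enterprise:2.0.0           # hypothetical tag
          args:
            - "-k"
            - "etcd:http://etcd.internal.example:2379"  # your etcd cluster
            - "-s"
            - "/dev/sdb"                                # storage device Portworx manages
            - "-x"
            - "kubernetes"                              # scheduler integration
```

Because it is a DaemonSet, Portworx runs on every worker node without any per-node bootstrapping.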
You don't have to do anything on the masters. You don't have to bootstrap the cluster. An individual developer can install Portworx as easily as they would install any other application on OpenShift. From there, we're going to deploy our stateful app with OpenShift.
B: So in this case we're using MySQL, and the first step of deploying MySQL is going to be creating what is called, in Kubernetes, a storage class. Using `oc create`, we're going to create a YAML file that has some configuration; in this case I'm highlighting that this storage class has a replication factor. Let me back up for a second: a storage class within Kubernetes defines the type of physical storage you actually want your application to consume.
B: When your pod is deployed, you can think of a storage class as a type of storage, and you could deploy applications with different types of storage. The storage class is a way for you to define what type of storage you want; you could imagine that a production application would require a different type of storage than, say, a test application. This storage class is very, very flexible, and here I'm highlighting that I want Portworx to create two replicas.
B: Excuse me, to maintain two replicas, or two copies, of my MySQL volume at all times. This number is configurable, and it's really important for HA, as we'll see in a minute. I could also specify an I/O priority, meaning: when I deploy my pod, do I want it to land on a host that has an SSD with provisioned IOPS, do I want it to land on a local disk, or do I want it to land on a block device that's using HDDs for lower cost?
B: I can specify that, and importantly, I can also specify things like snapshot policies as well as encryption policies. So if we look at the division of labor between architects and administrators and developers: architects and platform operators can specify storage classes that developers should use, and those will automatically apply things like snapshot policies (say, back up this app once an hour) or encryption policies, so that my developers don't have to think about that kind of security and business continuity, but I can as a platform architect, and it's still really easy for them to use Kubernetes to deploy their applications.
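As a hedged sketch, such a storage class might look like the following; the parameter names follow Portworx's documented storage-class parameters, but the specific values are illustrative, not from the talk:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-mysql-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"             # maintain two copies of each volume at all times
  priority_io: "high"   # prefer fast media (e.g. SSD) for this volume
  snap_interval: "60"   # snapshot policy: take a snapshot every 60 minutes
  secure: "true"        # encrypt volumes created from this class
```

Created with `oc create -f storageclass.yaml`, after which developers only reference the class by name.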
B: All of that's done through the storage class. Once I have that storage class, I'm going to create a PVC that references that storage class, I'm going to create a deployment within Kubernetes, and then all of a sudden I have deployed MySQL using OpenShift, with Portworx providing a persistent volume to my MySQL database.
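A hedged sketch of those two objects; all names are illustrative, and the storage-class name is assumed from the earlier step:

```yaml
# PVC that asks Portworx for a volume via the storage class defined above.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: px-mysql-sc   # the Portworx storage class
  accessModes:
    - ReadWriteOnce               # single-writer block volume
  resources:
    requests:
      storage: 10Gi
---
# Deployment that mounts the claimed volume into MySQL's data directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example-only     # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data
```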
From here we start to see the features and capabilities that enable Portworx to make that database highly available, highly resilient, and easy to manage, even when it has mission-critical data, and now we're going to look at some of those capabilities. The first: remember back to that storage class, we had set a replication factor; we had said we wanted two replicas of our data, two copies of our data. So here, once that pod is deployed with that storage class, Portworx does that in what we call a topology-aware manner, meaning that if you're deploying OpenShift on Amazon, we understand the difference between availability zones. So when we see that your Kubernetes cluster spans availability zones, we're going to place your replicas across those availability zones as well, so that if I lose even an entire availability zone, Kubernetes and OpenShift can automatically fail over those pods to the other availability zone, and we're maximizing the HA of our application thanks to our architecture. Let's look at what that looks like.
B: If we lose node three, OpenShift is automatically going to reschedule that pod onto another host in our cluster. We can see we have node one, and Portworx tells Kubernetes where to schedule that pod based on where the replicas are. This is the idea of running a pod on the host that actually has a local copy of the data.
B: That's called hyperconvergence. Many modern databases, Cassandra and Kafka being great examples of it, but also Postgres and MySQL, tend to perform better when they're running close to their data and when they don't have that network overhead. So one of the key things that Portworx does is instruct Kubernetes where data volumes live, such that when it reschedules a pod to another host in the cluster, the pod lands where the data is, versus somewhere where the data does not live, where you would then be forced to go over the network. So we ensure that hyperconvergence holds even in the case of failover.
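Hyperconverged placement is requested by running the pod under Portworx's scheduler extender, STORK (which also comes up in the Q&A at the end). A hedged sketch, showing only the fragment of the MySQL pod template that changes:

```yaml
# Fragment of the Deployment's pod template; the only addition is schedulerName.
spec:
  template:
    spec:
      schedulerName: stork   # STORK extends the Kubernetes scheduler so the pod
                             # is placed on a node holding a replica of its volume
      containers:
        - name: mysql
          image: mysql:5.7
```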
B: Other examples of day-two operations that Portworx facilitates would be CloudSnap, which is a feature of our platform, PX-Enterprise, for off-site backups. Day-one operations are deployment and configuration; we saw that. For day-two operations, we start to have to think about failover and high availability, and we also have to think about backup and recovery. We could call it DR, I would call it backup and recovery, we can call it business continuity.
B: Whatever you want to call it, you need to have an up-to-date copy of your application running in some other environment. Portworx makes that easy with something we call CloudSnap, where you can basically snapshot either a single volume, a namespace, or any arbitrary group of volumes, move that data to an object store (either running on-premises, maybe you're running Minio, or in the cloud), and then restore it to another region and redeploy your pods, and now your app is up and running again: a very easy-to-use, intuitive, simple snapshot-move-restore capability. I'm going to get into some new capabilities in a minute that we've launched to make this even simpler, but this is a good starting point.
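A hedged sketch of what a CloudSnap request can look like; this follows the volume-snapshot CRD documented for Portworx, but the names are illustrative, and an object-store credential is assumed to be configured already:

```yaml
# A VolumeSnapshot annotated as a cloud snapshot: Portworx ships the snapshot
# to the configured object store (S3, Minio, ...) instead of keeping it local.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-cloudsnap
  annotations:
    portworx/snapshot-type: cloud
spec:
  persistentVolumeClaimName: mysql-data   # the volume to back up
```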
We also support bring-your-own-key encryption. Security has never been more important than it is now, and it's really important that we enable our customers to follow security best practices. So one of the things that we enable is bring-your-own-key encryption, meaning that not even Portworx can decrypt your data.
B: Only you have those keys, and those keys can be at container-level granularity, meaning you don't have to encrypt your entire Kubernetes cluster with a single key. You don't even have to encrypt a namespace with the same key. You can encrypt each of your individual data volumes with its own key, and even when you back up, even when you migrate those applications, they're going to maintain that key. And we support all of the major key management systems.
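A hedged sketch of per-volume encryption; the annotation names follow Portworx's documented secure-volume annotations, the secret name is illustrative, and the key itself lives in your own key management system, not in Portworx:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data-secure
  annotations:
    px/secure: "true"                  # encrypt this one volume
    px/secret-name: mysql-volume-key   # which of *your* keys to encrypt it with
spec:
  storageClassName: px-mysql-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```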
B: So, sorry, I alluded to some new features related to data migration on the last slide; I want to wrap up by talking a little bit about some of that. Just yesterday we released Portworx Enterprise 2.0. PX-Enterprise 1.0 was really the foundation of what I just shared with you: the ability to run and manage mission-critical databases in containers managed by Kubernetes, in ways that individual developers know and love and that IT ops can manage effectively and fit into their existing business practices. Working with the customers I showed on the previous slide, and many others, during the few years that PX-Enterprise 1.x was in the market, we learned a lot about where customers are going next. Clearly, as a community, we think that container adoption is only going to increase. With the consolidation, we could call it, of the market, with IBM acquiring Red Hat, with VMware acquiring Heptio, and others, we see that containers are here to stay in the enterprise, and container platforms themselves are going to be used not just to speed up the development process, but to really bring hybrid cloud and multi-cloud operations into the mainstream.
B: My personal opinion on why IBM purchased Red Hat is that OpenShift gives it a great way to compete in the cloud market. The public cloud market is a really tough business. IBM launched a public cloud, and it wasn't as successful as Amazon's and Microsoft's; the market numbers are categorical on that point. But IBM knows how to run applications, knows how to manage data, and is trusted in the enterprise.
B: So if IBM knows that enterprises want to move to the cloud, how can it have a play there? I think its play is OpenShift, and commoditizing the cloud providers as simply a place to run OpenShift. I run OpenShift on Azure, I run OpenShift on Amazon, but really the value provided to the enterprise is at the OpenShift layer, which means you need to be able to solve the data mobility problem.
B: If you're really going to do hybrid cloud, if you're really going to do multi-cloud, you have to solve the data mobility problem, and this headline from The New Stack just yesterday, I think, really underscores why we launched PX-Enterprise 2.0. We do believe that hybrid cloud is probably the biggest trend in enterprise IT right now. I think it's only a matter of time before all enterprises are operating in their own data center as well as a variety of public clouds; we're already seeing it.
B: Well, the way that we do that is by solving data mobility, and the feature that solves this problem is called PX-Motion. If you've been in and around VMware for a number of years, this might sound familiar: VMware has a concept of vMotion, and the fact that we call this PX-Motion is not by chance.
B
It's
also,
you
know,
don't
take
that
analogy
too
far
and
I'll
explain
exactly
what
it
is,
but
we
wanted
something
that
kind
of
you
know
had
the
some
of
the
same
ideas
that
V
motion
brought
to
the
market
from
VMware
and
the
ability
to
move
bm's
and
data
between
environments.
We
wanted
to
bring
that
and
update
it
and
modernize
it
for
containers.
So
we
have
px
motion.
What
does
px
motion
do?
B
You'll
in
a
nutshell,
px
motion
allows
you
with
a
single
command
again
that
that
ubiquitous
coupe,
cuddle,
apply
or
OC
apply
to
migrate
data,
yes
between
environments
and
clouds,
but
also
pods
controllers
objects,
volumes,
H,
a
levels
and
policies
you
our
entire.
What
I
would
call
application
stack
between
environments
with
a
single
command
managed
by
kubernetes
and
open
ship?
That's
what
we
wanted
to
do,
and
so
what
port
works
did?
Is
we
leveraged
some
of
the
capabilities
within
kubernetes
in
specifically
its
its
concept
of
scheduler
extenders?
B: We can already migrate data; we saw that earlier. We can very easily snapshot and move data between environments; we can do it at an application-consistent level, and we can do it for groups of volumes. But that still requires you to spin up your pods in another location and point them to the right data volumes to get traffic up and running. We asked: how can we make that a one-click process? That's what PX-Motion has done.
B: You do `kubectl apply` and you describe your migration: your source cluster, your destination cluster, which objects to move. Kubernetes moves your pods, your controllers, and your objects; Portworx moves your volumes, your HA levels, and your policies. And so with a single command, you're able to migrate between environments.
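A hedged sketch of such a migration description, using the STORK migration CRD fields; the cluster-pair and namespace names are illustrative, and the destination cluster is assumed to have been paired beforehand:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: mysql-migration
spec:
  clusterPair: destination-cluster   # pre-created pairing with the target cluster
  namespaces:
    - mysql-prod                     # move everything in this namespace
  includeResources: true             # pods, controllers, config objects...
  includeVolumes: true               # ...plus the Portworx volumes behind them
  startApplications: true            # start the pods on the destination cluster
```

Applied with `oc apply -f migration.yaml` (or `kubectl apply -f`).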
Now, what does that enable? One of my last slides gives a few use cases for PX-Motion. What is augmentation? It's simply the idea that, as an application architect building a platform, sometimes what that means is taking an application and moving it to a cluster with more capacity, or maybe we want to keep our high-performing application on the same cluster and move the lower-performing application to another cluster. There are multiple ways to free up compute capacity that Portworx enables: you can basically select applications, and that could be an entire Kubernetes cluster.
B: Unfortunately, yesterday there was a critical CVE for Kubernetes, where basically everybody had to upgrade to a patched version of Kubernetes. Kubernetes makes upgrading Kubernetes easy, but when you're running an app in production, you need to qualify that new release for your applications.
B: If you don't want unexpected errors to occur, a very popular way of rolling out new versions of software within DevOps is blue-green deployments. I won't get into the best practices for blue-green; suffice it to say that Portworx, with PX-Motion, allows you to take your application running on Kubernetes, create a second copy of it, including pods, including configuration, including data, and move it to some other cluster, where you can then run your tests and decide whether you want to flip your load balancer to that new green cluster.
B: We make that very, very easy. So the timing yesterday was unfortunate; any time you have a critical security vulnerability it's unfortunate, but it really underscores why, once you start running databases in production on Kubernetes, you need to be able to very quickly qualify those apps on new clusters, and PX-Motion helps you with that. The final example would be simply better testing. A lot of bugs that happen are a result not just of code, but of the interplay between code and data, so Portworx helps you debug.
B: Better, by, for instance, taking a snapshot of your production data and moving it into a test environment where you can then run all of your tests. You can debug, you can see whether or not it was the combination of your code and your data that led to that regression, and then you can fix it. These are some examples of the utility of PX-Motion, which, again, is the ability to move pods, configuration, and data between environments.
B: Okay, we're getting to the end, and I'm happy to answer questions now. Just a couple of resources for you; we'll be sharing these slides, so you'll be able to have access to this. These are some hyperlinks.
B: We have a lot of content on our site about running databases on OpenShift. Here I have a blog on MySQL; here's one about MongoDB. We have a demo of running databases: the first part of the presentation is similar to what we did today, and the second part is actually a live demo, via the OpenShift command line, of running MySQL on OpenShift, so you can take a look at that.
B: That's a demo video. And then, if you're interested in learning more about PX-Enterprise 2.0, we have a blog post that we did about how to run blue-green deployments, and some of the overview, the why, and the how of PX-Enterprise. Then, of course, if you want to learn more about Portworx, please visit our website.
B: You can go to portworx.com/request-a-demo, and we're happy to talk to you specifically about the apps that you're running and about your operating environment, and to see if we can help you do more with your OpenShift on our platform. And with that, I'll be happy to answer any questions you have.

A: Awesome.
B: Yeah, it's a really good question, and it's kind of a multi-faceted answer. The first thing is that, when it comes to snapshots, Portworx has the ability to create application-aware snapshots, meaning if I have, say, a multi-node Cassandra cluster, Portworx can actually snapshot all of those pods running on different hosts at the same time, by quiescing Cassandra in a Cassandra-specific way. It's different from how you would do it on Postgres, and how you do it on MySQL differs from how you do it for Cassandra, so we actually have application-specific logic built into Portworx, and it's extendable even to custom applications that you write and maintain yourself.
B
There are basically pre-hooks and post-hooks that you can configure to run the right app-level commands before those snapshots happen, and so there you're always going to get application consistency. The question here is from a transaction perspective: we don't migrate memory state, so depending on the type of application and what you're trying to do, there may be application-level work that you need to do in addition to the app-level snapshots that Portworx can handle. Okay.
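As a rough illustration of the pre-hook mechanism described above, a quiesce command can be declared as a rule that runs inside the application's pods before the snapshot is taken. This is only a minimal sketch based on the STORK Rule object; the exact apiVersion, field names, and selector labels here are assumptions and may differ between releases:

```yaml
# Hypothetical sketch of a pre-snapshot rule for Cassandra.
# Field names are assumptions based on the stork.libopenstorage.org
# API group and may not match your installed version exactly.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: cassandra-presnap-rule
rules:
  - podSelector:
      app: cassandra            # pods this rule applies to (assumed label)
    actions:
      - type: command
        value: nodetool flush   # quiesce: flush in-memory data to disk
```

The idea is that the command runs in a Cassandra-specific way (here, `nodetool flush`) so the on-disk state is application-consistent at the moment the volume snapshot fires; a MySQL or Postgres rule would run that database's own flush command instead.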
C
That's okay, that's interesting. So I'm trying to compare that with what we have on the infrastructure side. We are application-agnostic: basically, we don't care what the application is, and the application takes the responsibility. So I'm comparing what additional capabilities you guys can offer, and so forth. When you're snapshotting a pod, do you also snapshot the pod's information, including the data that is not in the PV?
C
You know, when you're running a pod, you will have some data for your application that you want to write to the persistent volume, and there's data that might be just in the pod. Are you saying that the product itself has the capability to snapshot everything, or just the information in the PV?
C
We're talking about transactions, right? Because when we run a database, there are a lot of transactions happening at different times. So when we're doing PX-Motion or a snapshot, we have to consider the transactions in flight at a given time and whether we will lose the transactions we were doing. Yeah, yeah.
B
Yeah, it's a great question. So the way that Portworx solves this is using what's called a scheduler extender within Kubernetes. We have a scheduler extender called STORK, and STORK is where this application-specific logic lives. So I know that, for instance, if I want to snapshot my Postgres database, I want to make sure that any in-flight transactions are written and acknowledged before I take that snapshot. Otherwise, you're right, I'm going to lose that transaction if the snapshot was taken right before that order came in, or, you know, I'm going to have...
B
...told the user that their purchase was successful but not docked it in my inventory system, or something like that. There are, you know, some ways in which a lost transaction would affect it. So Portworx understands, for Postgres, how to flush and acknowledge writes through that scheduler extender, and again, this is extensible to any database, using basically pre-hooks and post-hooks. Different database tools have different ways to issue those commands.
B
Whether it's the pg tools for Postgres or, you know, nodetool for Cassandra, PX-Enterprise understands those application-level semantics.
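To illustrate how such database-specific pre- and post-hooks might be attached to a snapshot request, the snapshot object can reference named rules, for example via annotations. This is a hedged sketch: the apiVersion, annotation keys, rule names, and PVC name below are all assumptions for illustration, not the confirmed API:

```yaml
# Hypothetical sketch: a snapshot that asks the scheduler extender
# to run app-level pre/post rules around the snapshot operation.
# Annotation keys and names are assumptions and may differ by release.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cassandra-snapshot
  annotations:
    stork.rule/pre-snapshot: cassandra-presnap-rule    # quiesce before
    stork.rule/post-snapshot: cassandra-postsnap-rule  # resume after
spec:
  persistentVolumeClaimName: cassandra-data-cassandra-0
```

The pre rule would issue the database's flush command (nodetool for Cassandra, the pg tools for Postgres) so in-flight writes are acknowledged before the volume is captured, which is the transaction-safety property discussed above.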
A
I'll add that in for those of you listening to the posting of this recording on YouTube and in the blog post on OpenShift.com, so that you can read it as well. It's a very robust solution for a lot of these things that I wasn't quite aware of, even what you were doing with CloudSnap. So I'm very pleased that you took the time today to do this.
A
So we hope, Michael, that you and your colleagues will come back and do more talks on this topic, because I think it's quite interesting as the landscape keeps changing, how we have to deal with all these data storage and data mobility questions. You know, I know at the gathering one of the talks is about sustainable data centers with Vattenfall, and they have been asking a lot of questions about specifically this topic.
A
So I hope we can connect you with them; that would be very interesting, to see if we can get the two of you talking next week. Maybe I'll do the intros there. And, you know, as always with the briefings, sometimes it leaves us with more questions. But if you have questions, please do reach out to Michael and the team at Portworx.
B
Just thanks again. I mean, I think, you know, we're big fans of Red Hat and really like the open ecosystem that you create and the choice that you give your customers. So, you know, when it comes to their databases, we would love for them to give us a look, and we're happy to answer any questions. And, you know, we're also really transparent: if it's not a fit and there are better solutions, we would much...
B
...rather the customer be successful, and then for the next project, where they do need something like Portworx, work with us there. So, you know, if they have questions about architectures and things like that, we're always happy to help them figure out the best way to architect it. And yeah, we'll see, hopefully, a bunch of people next week. Well.