Description
Get your espresso ready for the OpenShift Coffee Break as we welcome Daniel Paul, Cloud Architect for Portworx at Pure Storage, to show us how Portworx and OpenShift work together to operate, scale and secure applications and databases across multiple clouds.
B
Welcome back to the OpenShift TV Coffee Break! Today I'm really happy to have my super co-host Andrea back.
A
How are you? Hello, I think there is a little bit of a delay. I don't know for how long, but I think it's a couple of seconds for some reason.
B
Oh okay, okay, then we'll figure it out. But in the meanwhile, welcome back to the OpenShift TV Coffee Break. For those of you not aware of the show: this is a web TV show here in EMEA where we talk about OpenShift, Kubernetes and cloud-native architecture, like standing at a coffee machine and talking about tech. And this morning we're really happy to have our special guest, Daniel Paul from Portworx. Daniel, how are you?
B
Great to have you here. And Daniel, let me also introduce the team a bit. My name is Natale Vinto, I'm an executive producer for OpenShift TV and the EMEA Coffee Break, together with my super co-host Andrea, who is coming back after a long time. Andrea, hey!
A
Two months, yeah. Unfortunately, I had an injury and a number of operations, but I'm back to... I cannot hear you anymore. You can't hear me? Oh yes, oh.
A
I think I'll disconnect for a second and connect again, because the network seems to be acting up.
B
Okay, no problem, no problem. In the meanwhile Andrea will fix his network. Daniel, we're really happy to have you here, because there are hot topics and I'm sure our audience is really excited about them. Today we're gonna talk about disaster recovery and business continuity. Can you introduce a bit what Portworx is? What are we doing today? Yeah.
C
So Portworx is basically your production survival kit for Kubernetes. We care about all the things when it comes to stateful workloads on Kubernetes. So whenever you want to run a database with really important data on Kubernetes, and we see more and more customers going this way, then Portworx is the best solution to work with.
B
That sounds amazing, and Portworx is available in the OpenShift Marketplace, in the OperatorHub. So if you have an OpenShift instance, you can install Portworx from OperatorHub, and you have this integration today. Daniel, if I recall correctly, we're going to show this integration. We're going to show a real-world example with a live demo.
B
Daniel, there is a famous book from Tanenbaum about networking. Tanenbaum, you know, is the inventor of Minix, a very popular professor, and he said: always trust the cables, don't trust the wireless. We need to tell this to Andrea, because maybe he should connect by cable and not wireless.
B
Well, Daniel, if you have any slides, I think we can start presenting and we can go through the flow.
C
Let me just quickly start. I don't want to show you all the marketing stuff of our slide deck, because you all might know Kubernetes is growing. We see more and more workloads going into this field, but when we talk to customers we see that there's room to improve, and it's not only these things about networking we were already talking about. It's about storage and data management.
C
So we see more and more customers, when it comes to growth and scale in the Kubernetes field, running into issues with storage and data management. And the question is why. More and more customers migrate their workloads into a containerized environment, and the ideal world would be:
C
You have all your applications migrated onto this platform over a linear amount of time, but the reality often shows another curve. They start very well, but then this curve flattens out when it comes to the migration of these applications, and the reason is they mostly start with stateless applications. Stateless means we don't store any data within the Kubernetes cluster, and this is no problem, because they don't need to care about storage. But then they also start migrating non-critical stateful applications, meaning dev or testing applications which are not business critical, and it shows that there are challenges in this field. And when it comes to critical stateful applications, the really business-critical applications coming to these environments, then this curve really flattens out, and yeah.
C
It starts getting tough here, and we see this in many customer journeys, where we come into play when the customer moves critical applications to Kubernetes. The reason mostly lies in the approach Kubernetes takes to storage. I guess you're all familiar with the CSI interface, an interface created by the Kubernetes project to connect existing storage arrays and storage solutions to a Kubernetes cluster. Each and every major storage vendor has its own implementation of this interface, but when it comes to scale we see more and more issues in customer environments. For example, they directly connect hardware arrays to Kubernetes clusters, and this results in a one-to-one container-volume-to-storage-volume mapping, which can just be too much for a classical storage array. So just think about the test.
C
You run a test on your containerized environment which creates a thousand volumes. This test fails, you just throw everything away, and after 10 minutes you have a software change, you retry this test, and you create another thousand volumes. This can run some hardware arrays into problems.
C
It's also an issue that, when you're dependent on this CSI interface and you have multiple Kubernetes clusters, maybe on multiple platforms, storage always looks different. Each and every Kubernetes cluster has different capabilities, another way of managing storage, and this can be a problem in this scenario.
C
So often customers ask for a unique, single, simplified way to manage storage, and also access to storage capabilities in these different clusters and different environments, just thinking about on-premise and public cloud environments. We also see that the features of storage are limited by the capabilities of the CSI implementation. But the last point here on the left side of the slide is also very important from a process side: the question is who's responsible for managing this storage, and what are the processes behind it?
C
Because most customers today run Kubernetes in a virtualized environment, and this usually means the environment consumes storage from the underlying virtualization layer, which in most environments is vSphere. Now, if you implement a CSI interface accessing a storage array directly, you break the virtualization layer, because now you want to access your storage directly from the virtualized workloads: out of the virtual machine which runs the Kubernetes worker nodes, you now need access to a storage array.
C
You need to open network ports, you need to create network connectivity to your storage array, and this is mostly not only the data path. You also need to open communication on the management path to the storage array, and I've seen many customers having trouble with this, because it breaks what they have been doing for more than 10 years now. And it's also a question of responsibility, because who is managing this thing?
C
Usually the storage teams don't know about Kubernetes and the Kubernetes teams don't know about storage arrays, but now they put a piece of software delivered by the storage vendor into their Kubernetes clusters. Who's responsible for this piece of software? It's delivered by the storage team, but it runs in the Kubernetes cluster, so it can be hard to figure out who's responsible and who's troubleshooting. All these problems we usually see in customers when it comes to scale and to hybrid environments. So don't get me wrong: CSI is working very fine.
C
Let's say it this way. So, in just a few words, how it works: Portworx is a software built on cloud-native technologies which runs as a scale-out application within a containerized environment. Today we mainly talk about Kubernetes and OpenShift, but in general Portworx also works on single Docker hosts. So it's just a containerized piece of software abstracting and delivering storage to the workloads, and when it comes to scale, we don't have any issues.
C
Portworx can scale, proven, to thousands of operations a day. And a very important thing when it comes to scale: we ensure consistent access to storage, from a consumer view but also from an operator view. We are not dependent on the underlying infrastructure, and storage always looks the same, no matter if you run in an on-premise environment on OpenShift, in a public cloud environment, or if you use public cloud managed Kubernetes offerings. Storage from a consumer perspective always looks the same.
C
It's always managed the same, and it's managed as a native part of Kubernetes, and this is a very important piece when it comes to responsibilities. Portworx is software running inside the Kubernetes cluster; it's managed like each and every other part of Kubernetes, and this very much improves the operational efficiency of our customers.
A
Nothing from the chat, but there is a question from me, if possible, on the diagram to the right. I see what I imagine are two Kubernetes clusters, each containing native storage, but I see that there is a line between the two. Does this mean that I could potentially provide access to a second cluster from the storage that I'm running in my first cluster?
C
Well, we don't provide access to other clusters' storage, but we leverage this connection to enable DR and migration capabilities. Okay, I will focus on this on a later slide. Excellent, and this is one of the best use cases for Portworx, so great question. Just to give you an overview: when talking about Portworx today, I mainly talk about the Portworx storage product, which is our base product which sits inside the containerized environment and provides a unified storage layer, and part of this product is Portworx Backup.
C
Portworx Backup comes into play when customers want to back up and restore their containerized environments. And the third product we have here is Portworx Data Services, which I will also show in the live demo, which is all about managing databases on Kubernetes or on containerized workloads. Because when we talk to our existing customers, and we have some very large customers, and we ask them what they are doing with the capabilities we offer in Portworx, with this stable storage layer?
C
About 80% of the answers are: we are deploying databases, and these databases are more the modern cloud-native databases. We also figured out that these customers struggle when it comes to managing these databases, but I will cover this on a later slide. So basically we have three products: Portworx storage, which is the storage core; Portworx Backup, which helps backing up and restoring workloads on containerized environments; and Portworx Data Services, which manages the lifecycle of databases on top of those environments.
C
All these products have one thing in common: we are agnostic of the underlying infrastructure, so we work in any cloud. Be it AWS, Azure, Google, IBM or on-premise, we run on each and every infrastructure, so we don't care about the underlying infrastructure, and we also support most of the common Kubernetes distributions. OpenShift is our largest base.
C
So many of our customers are running OpenShift, but we also run on Amazon EKS, GKE, AKS, Tanzu or even upstream, vanilla Kubernetes. As I said, many of our customers run database workloads on top of Portworx; they run analytics and streaming stuff on top of Portworx. So this is just the summary of what we do. In the next slide I will dig deeper into the architecture of Portworx. If there are no questions... let me quickly check. Okay, no questions so far. Okay, great, so yeah: the Portworx architecture.
C
How does Portworx actually work? First of all, Portworx is basically deployed per Kubernetes cluster. So for each of your Kubernetes clusters you would by default have a single Portworx instance inside this cluster. Portworx is deployed using an operator, I will show this later in the live demo, and Portworx creates a custom resource definition called StorageCluster, and on top of the StorageCluster we provide our storage virtualization services.
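As a rough illustration of what that custom resource could look like (a minimal sketch; the API group, image tag and field names here are assumptions on my part, not taken from the show):

```yaml
# Illustrative minimal StorageCluster spec, as managed by the Portworx
# operator; field names and values are assumptions, not a reference.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  image: portworx/oci-monitor:2.13.0   # hypothetical example version
  storage:
    useAll: true                       # pool all unused block devices on the nodes
```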
C
All the things you can do with Portworx sit on top of this StorageCluster, and if you want to consume storage within your Kubernetes cluster, you leverage the common way of creating a PVC referencing a storage class. Portworx provides storage classes, and you just reference these storage classes from your PVCs. The question now often is how storage actually comes into this, because we need to store the data somewhere, and that's the basic design principle of Portworx:
C
We take block storage from the worker nodes and put this block storage into a pool. The pool is our abstraction layer, and out of this pool we serve the volumes, the persistent volume claims, and the volumes which are created can be read-write-once or read-write-many volumes. On top of these volumes we can do all the magic with storage, which I will cover later, but that's the basic design principle.
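The consumption path described above, a PVC referencing a Portworx-backed storage class, could be sketched like this (the provisioner name is an assumption for illustration, not confirmed in the talk):

```yaml
# Illustrative: a Portworx-backed storage class and a PVC referencing it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db
provisioner: pxd.portworx.com      # assumed Portworx CSI provisioner name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]   # could also be ReadWriteMany, as noted above
  storageClassName: px-db
  resources:
    requests:
      storage: 50Gi
```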
C
Now, the term "unused block storage on worker nodes" can be translated into the language of whatever platform you run on. If you're talking about a vSphere environment, for example, unused block storage on worker nodes basically means a VMDK file attached to a VMware VM. If we are talking about Amazon, the AWS cloud, unused block storage on a worker node means an EBS volume attached to an EC2 instance. And in those environments we can connect to the infrastructure.
C
So we have a second layer within Portworx, which is the orchestration layer, and this orchestration layer can be connected to your infrastructure provider, if there is one. You can tell Portworx at the initial installation that you want each worker node to contribute, for example, 100 gigabytes to the pool, and then, when you install Portworx, it would connect to the infrastructure and provision 100 gigabytes on each of the worker nodes of the cluster.
C
So your Portworx cluster would just start and be ready to run. The orchestration layer connects into the infrastructure, whether it's AWS, Azure, Google or vSphere, and would just attach those platform volumes to the Portworx worker nodes; the worker nodes bring them into the pool, and then Portworx is up and ready. I will show you this in the YAMLs later on the OCP 4 platform.
B
Not yet, but I'm interested. I don't know if you will touch on it: I'm interested in the storage type for one of the use cases you mentioned, which is AI/ML. So I'm interested in an example of Portworx usage on OpenShift with AI/ML, if you have any.
C
Good idea, because for AI/ML you mostly need very high-performance read storage, so I can focus on this. This slide basically explains how we do this stuff, and I would now like to answer the question why you should do this, and maybe this also goes into the direction of AI and ML. So, what you can actually do with this platform we introduced here: first of all, we can provide multiple storage classes. I think that's not the exciting thing!
C
Everyone can do this. We can provide block and file storage to the worker nodes, and we can provide different levels of performance to our workloads. This is done as follows: as soon as a worker node contributes storage to a pool, we benchmark this storage from the worker node and classify it into, let's say, slow, medium and high-performance storage. Is it magnetic, is it SSD? And later you can reference this quality of storage in a storage class.
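As an illustrative sketch of requesting one of those benchmarked tiers from a storage class (the `io_priority` parameter name and its values are assumptions on my part, not stated in the talk):

```yaml
# Illustrative: request the high-performance tier of the benchmarked pool.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-fast
provisioner: pxd.portworx.com    # assumed provisioner name
parameters:
  io_priority: "high"            # assumed tier parameter: high | medium | low
```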
C
So we can replicate data within the Kubernetes cluster, and I will focus on this at a later point, going deeper into how we do this. When we do stuff on top of the storage, we are always application-consistent and application-granular. This means, if you want to do a snapshot of an application and all its stateful data, you need to take care of that:
C
This application can run in multiple replicas within your namespace, and if you want to have a really useful snapshot of it, you need to bring the application into a consistent state, and you need to snapshot all the volumes which belong to this application, using label selectors and things like that. We can take care of this: it's just a single YAML spec you create within Portworx to do such an application-consistent and granular snapshot. So this is all basic stuff within Portworx.
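A hedged sketch of what such a single YAML spec could look like (the API group, kind and field names follow Stork-style CRDs and are assumptions, not confirmed in the talk):

```yaml
# Illustrative: one group snapshot of every PVC labeled app=postgres,
# taken together so the application stays consistent across its volumes.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: postgres-group-snap
spec:
  pvcSelector:
    matchLabels:
      app: postgres               # label selector picking the app's volumes
  preExecRule: postgres-quiesce   # hypothetical rule to reach a consistent state first
```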
C
You have a unique encryption key for each of your volumes, and we can also control access to these volumes within the cluster, meaning we have an RBAC mechanism behind it, so we can ensure that only users holding a valid token can access volumes within this cluster. And we can also do replication between Portworx clusters, and this is where our disaster recovery and migration capability comes into play. You might remember the picture and the question regarding the line between two Portworx clusters: this is about disaster recovery.
C
So, with Portworx on top of your clusters, you can now enable synchronous or asynchronous replication between different Kubernetes clusters, and this means we will replicate the Kubernetes objects and the volumes of the application between different Portworx and Kubernetes clusters to implement a disaster recovery strategy. I have a slide which shows this in more detail.
C
When we do replication within the same cluster, we're always topology-aware, so we are aware that Kubernetes clusters can have multiple availability zones. When we do replication within the cluster and you create replicas, we take care that they are placed in different availability zones; it would also take care about regions and all this stuff. And, as I already said, we are not dependent on the underlying infrastructure.
C
You can just snapshot a namespace and replicate this namespace to another cluster, where a developer can do all the troubleshooting he needs. So we leverage this technology to enable customers to migrate their applications between different clouds, between different Kubernetes distributions, and even between different managed or self-hosted Kubernetes environments.
B
So, Daniel, to this point: if I have, for instance, two or three OpenShift clusters, I can do a disaster recovery strategy with Portworx. And maybe later, in the demo, you'll show us better, but I'm wondering how it works: what is the strategy to start this disaster recovery strategy with Portworx on OpenShift? Yeah.
C
I have a slide for this, so I can show you how we do this. Thank you. In general it doesn't depend on OpenShift; it works the same way on each and every Kubernetes distribution, but let me focus on this in a later slide. So yeah, the last marketing slide: we have lots of large customers running Portworx and all its functions.
C
So when it comes to production and scale, this is the best choice to protect, store and run your persistent data. Let's go a little bit deeper into the actual architecture. When talking about a single Kubernetes cluster, a single OpenShift cluster, we often get the question: okay, how do you actually store this data within a single cluster, and how do you enable high availability within this cluster? First of all, you start installing Portworx using an operator.
C
I will show later how this looks in the OperatorHub of OpenShift. Portworx is deployed on each and every worker node of the Kubernetes cluster, and if a workload now gets scheduled on your worker nodes, we set filters on the Kubernetes scheduler to prefer nodes which have the storage capabilities demanded by this workload.
C
So here in this example, we have a Postgres being placed on worker node one of our cluster, which claims a 50 gig volume, and this 50 gig volume will now be created on the worker node where this actual pod is running. So all the data here is stored locally. But now you want to protect your workload from a worker node outage, because think of this: these could be bare-metal worker nodes just running on hardware.
C
Without any hypervisor in between, these SSDs are just SSDs plugged into this worker node. Then you can set a replication factor, and this is set on the storage class. So you can have storage classes which offer a replication factor of two or three.
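A sketch of such a storage class (the `repl` parameter name is an assumption for illustration, not confirmed in the talk):

```yaml
# Illustrative: every volume from this class keeps 3 copies of the data
# on different worker nodes of the cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl3
provisioner: pxd.portworx.com   # assumed provisioner name
parameters:
  repl: "3"                     # assumed replication-factor parameter
```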
C
Using this replication factor of three, we would create a synchronous copy of the data on other worker nodes: two additional copies of the volume. As we replicate this data over the network, there is always a cost, the cost of latency and bandwidth that comes with network replication, so we try to optimize this. First of all, Portworx tries to find out what the actual workload running here is.
C
A
Daniel, I do have a question here. You mentioned the on-premise case with, you know, physical disks attached to nodes. What is the recommendation here in terms of the number of nodes that have storage attached within a cluster? I know that you can put the replication factor to three or whatever, but what's the minimum number of worker nodes that you need?
C
So normally the minimum number of worker nodes you need for a Portworx cluster is three. You need at least three worker nodes contributing storage to the Portworx cluster. On this picture here, I intentionally have one worker node which is not contributing storage to the cluster, so you can run a combination; you don't need a cluster where each and every worker node contributes storage.
C
You can have dedicated nodes for storage: you can select which nodes should contribute storage by attaching storage to those nodes or not, or by labeling those nodes. The idea behind this is: if you go into an auto-scaling cluster, you don't want to downscale, let's say, storage nodes, because here in this example we have a volume which sits on three worker nodes. What happens if an auto-scaler scales down these three worker nodes? That might not be a good idea, because then you don't have your storage available. So we have this aggregated idea: you can have worker nodes which do not contribute storage but which can consume storage, and these can be scaled up and down as needed. This is more for the auto-scaling stuff, but for running this cluster you need at least three worker nodes contributing storage. This is the minimum you need for Portworx. Okay, thank you!
C
So here we have our application running with a replication factor of three. But what happens if you lose worker node one? This is the most common case: you lose a worker node. Now, if you lose the full worker node, Kubernetes would need around, you know, five minutes to figure out that the worker node is dead and would reschedule the pod.
C
We can optimize these parameters: we can tell the scheduler to reschedule pods which consume our volumes at an earlier point, because if you run a database, and here in this example it's just a single instance, you want this to be rescheduled much earlier.
C
We can optimize this behavior from the Portworx layer. This pod would now just be rescheduled to another worker node, and we again set filters on the scheduler, or on the nodes, to prefer worker nodes having a copy of this volume. Meaning, preferably we would now schedule this pod on worker node two or three. But maybe this is not possible, because there is not enough memory, not enough other resources.
C
Even then we always try to optimize the behavior: the best way would be to schedule the pod on worker node two or three, but if that's not possible, it can be scheduled on each and every other worker node, and this happens within seconds. There is no manual intervention needed; we just ensure access to the volumes, the pod is recreated and runs. That's it. Now the question is: what happens with this lost worker node?
C
So by default we have a timeout, and if this timeout hits, we would just create the third copy, the one from the lost worker node, on another worker node. If the worker node comes back within this time period, it would just be resynced with all the data. But this also depends on your SLA: what are you doing with a lost worker node? Will you try to revive it, or will you just throw it away and create another one?
C
Disaster recovery, yes, you already asked about it. For disaster recovery you start with a single cluster: you have one OpenShift cluster running, let's say, in your on-premise environment. You run very important workloads on this cluster; they are protected within the cluster with Portworx, you're running high-performance storage, you leverage all the functionality Portworx offers. But you need a disaster recovery strategy, and this strategy could be: you need this data in another OCP cluster running in public cloud, just in case you lose your on-premise data center.
C
We also have customers demanding these DR capabilities between different cloud regions or even between different cloud vendors. I recently did a demonstration of this capability between an Amazon EKS and a Google GKE cluster. We don't care about the underlying infrastructure; we just enable this. So you have the running production cluster, and now you can create a second cluster. As I said, this can be in public cloud.
C
This can be another region, it doesn't matter; it needs Portworx to be installed. And then you create a cluster pair on the Kubernetes and Portworx layers: you create a YAML spec which just describes that cluster one is now the source and cluster two is the destination, and you can create multiple of those cluster pairs. It's just a CRD within your Kubernetes cluster, and it's just a YAML spec which needs to be created.
C
So now you set up a replication between clusters one and two, and this just describes that cluster one is the source and cluster two is the destination. Now you can create, on top of your cluster, a migration schedule, and this migration schedule is just like a cron tab: you describe how often something should happen. Then you select your namespaces, and you can say on a namespace level: I want the namespace postgres to be replicated using the cluster pair cluster one to cluster two, and then Portworx goes ahead.
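The pieces just described (a schedule policy plus a migration schedule referencing a pre-created cluster pair) could be sketched like this; the API group, kinds and field names follow Stork-style CRDs and are assumptions on my part, not a reference:

```yaml
# Illustrative: replicate the postgres namespace to the paired cluster
# on a cron-like interval.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: every-15-min
policy:
  interval:
    intervalMinutes: 15
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: postgres-dr
  namespace: postgres
spec:
  schedulePolicyName: every-15-min
  template:
    spec:
      clusterPair: cluster1-to-cluster2   # hypothetical name of the pre-created pair
      namespaces:
        - postgres
      includeResources: true    # also replicate the Kubernetes objects
      startApplications: false  # keep replicas scaled to 0 on the destination
```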
C
It would take all the Kubernetes objects of this namespace and then asynchronously replicate them, using an object store in between, to the destination cluster. On the destination cluster the volumes would appear, the PVCs would appear, and all the Kubernetes objects would appear. By default we scale the replicas down to zero, because you don't want your application to actually run in cluster two; you just want to have it there. So it looks like a shadow copy of the application.
C
Depending on your architecture, there may need to be some more changes, because we don't take care of Ingress here. I had this example in my demo: I was running on Amazon and on Google, and Ingress differs between those two environments, so you need to change the Ingress to make cluster two accessible from the outside. The application is running, it's there, but you might need to change other things, and we have multiple ways of addressing this.
C
So, first of all, we have some customers running a GitOps approach, meaning they don't need to replicate the Kubernetes objects between those two clusters, because the Kubernetes objects come out of a git repo. In their example, they would just deselect the replication of Kubernetes objects, replicate only the volumes, and roll out another deployment into cluster two with all the relevant, right-fitting Ingress definitions.
C
We can also do a rewrite of Kubernetes objects; we introduced this in one of the last versions of Portworx. You can define what should happen with a Kubernetes object when it gets replicated to the second cluster, and you can apply a rewrite rule, so you can change settings just to make it a better fit for cluster two. So yeah, this is one of the most commonly used features of Portworx, and it's called async DR.
C
We are leveraging the same technology for migration. Migration just means it does not follow a schedule; it's a one-time job. You create a YAML spec saying: I want to migrate the postgres namespace to cluster two, and it would do a one-time job, snapshotting the volumes and migrating them to the other cluster, and, if wanted, you can also automatically start the pods. So this is used for migrating applications from A to B.
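A hedged sketch of such a one-time migration spec (API group, kind and field names are assumptions based on Stork-style CRDs, not confirmed in the talk):

```yaml
# Illustrative: migrate the postgres namespace once, and start the
# application on the destination cluster when done.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: postgres-move
  namespace: postgres
spec:
  clusterPair: cluster1-to-cluster2   # hypothetical pre-created pair
  namespaces:
    - postgres
  includeResources: true    # also move the Kubernetes objects
  startApplications: true   # start the pods on the target after migration
```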
A
I do have a few questions, and maybe, you know, we don't have enough time, so one question would be, and then I have a second one: can I have a situation with two clusters where I deploy some applications on one and some on the other, and they replicate to each other? So that cluster one has applications running, plus some with replicas at zero that were replicated from cluster two, and vice versa, so that essentially they both are in hot standby for the applications that they're not running?
C
So in this case you would just create two cluster pairs. The first one would be: source is cluster one and destination is cluster two; the second cluster pair would be: source is cluster two and destination is cluster one. Then you could just migrate some namespaces from one to two and other namespaces from two to one. This is possible.
A
The second question would be: do you also replicate registries, so that the image is already there? So that is something that you have to do yourself.
A
Okay, third question: is there the need for any external component in a separate data center for this to work, or not?
C
Yes, you need an object store in between. This is not shown on this picture, but we use and leverage an object store, so in Amazon an S3 or S3-compatible store, to sync this data. The sync does not happen directly between these two instances; it always runs through an S3 bucket or an object store in between.
A
It's a very interesting topic, and obviously there is a lot of interest in DR among our users, and, as you know, there are a lot of different requirements around it, so that's quite interesting. Thank you.
C
Could be interesting? Yes, that excites me. So, but let's quickly finish the last slide on DR. These are the technical details — I can share this with you. It's just what I explained: the cluster pair, the object store, the schedule policy, the migration schedule.
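Those async-DR pieces fit together as a `SchedulePolicy` referenced by a `MigrationSchedule` — a hedged sketch, with placeholder names and an illustrative five-minute interval:

```yaml
# Run the migration on an interval instead of once.
# Names and the interval are placeholders.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: every-5-min
policy:
  interval:
    intervalMinutes: 5
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: postgres-dr
  namespace: kube-system
spec:
  schedulePolicyName: every-5-min
  template:
    spec:
      clusterPair: to-cluster-2
      namespaces:
        - postgres
      includeResources: true
      startApplications: false   # keep the DR copy scaled down until failover
```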
We also have the capability of doing synchronous replication.
C
This technically means we can stretch a Portworx cluster between two Kubernetes clusters, but we need to take care about latency and other things. We have some customers who demand an RPO of zero, which means two Kubernetes clusters with a synchronous copy of the volumes between them. I will not dig deeper into this — well, if it's interesting, we could do another session on it.
C
I think I will skip this piece here because we are running out of time, and I promised to do a demo. So let's just jump into the live demo.
C
I would guess — but that's just my guess — that "keeping up" could mean: if you define a five-minute schedule to migrate from A to B, and you change a lot of data on your source and are not able to sync those deltas within the five minutes, what would happen? I hope that's the question; that's just my guessing. In that case we would just skip the in-between sync. So if you start a sync at, let's say, minute zero, and we have a schedule to do this
C
every five minutes, and that first sync takes more than five minutes — where the second sync would kick in — we just skip the second sync. The first sync needs to finish, and then the next sync comes after this one is ready. That is mostly what customers ask about when it comes to, yeah, keeping up. We also optimized this behavior: the first sync needs to be a full sync, but the following syncs on this async DR are deltas only, so we only sync the deltas later. This speeds up the replication. I hope this answers the question; if not, please come back to me directly.
C
You will find it here: Portworx Enterprise. If you install the Portworx operator, it basically creates the StorageCluster CRD, and here you can see it's now online. If you want to dig deeper into this storage cluster, there's a YAML definition for it, and if you want to create your own Portworx storage cluster, the best way to start is to make an account on central.portworx.com — we have a wizard there where you can create your own YAML spec.
C
It will ask you some questions: what platform are you running on? Most on-premise customers run, for example, on vSphere, and then, as I explained, we can integrate with vSphere to automatically deploy storage. It will ask you about the vCenter endpoint, and you just enter some data. It will also ask about the datastore.
C
Yes, sorry, I forgot — now you should see it. So yeah, it's on central.portworx.com. Again, it asks you some important questions, and there are many more options to modify, but usually, if you're just doing a trial, you don't need to change them. And as soon as you've done this, you also need to create a secret within your cluster, in this namespace, which just references the credentials for the vCenter. We documented which permissions are needed.
C
So it's least-privilege. Then you can download the YAML spec, agree to our license agreement, and that's basically it. You deploy the operator in OpenShift, and then it just asks you for a YAML spec — this is basically the YAML spec you need to deploy, and it contains the information you provided in the wizard. So you say you want to use cloud storage — we always talk about cloud storage or cloud drives when it comes to those automated deployments — and here it's lazy-zeroed thick.
C
That's a vSphere option, with a size of 150 — so we want each worker node to deploy 150 GB. And here are also the vSphere pieces: the secret, the vCenter IP, the datastore. Everything is now in this YAML spec, and you would just deploy this YAML spec as the storage cluster.
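The generated spec is a `StorageCluster` resource. A trimmed sketch of what the wizard produces for a vSphere deployment might look like the following — the environment variable names follow Portworx's public examples, and all values (vCenter host, datastore prefix, disk type and size) are placeholders:

```yaml
# Trimmed StorageCluster sketch for a vSphere "cloud drives" deployment.
# All values are placeholders; generate the real spec on central.portworx.com.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  cloudStorage:
    deviceSpecs:
      - type=lazyzeroedthick,size=150   # one 150 GB disk per worker node
  env:
    - name: VSPHERE_VCENTER
      value: "vcenter.example.com"      # placeholder vCenter endpoint
    - name: VSPHERE_DATASTORE_PREFIX
      value: "px-datastore"             # placeholder datastore prefix
    - name: VSPHERE_USER
      valueFrom:
        secretKeyRef:                   # the secret created earlier
          name: px-vsphere-secret
          key: VSPHERE_USER
    - name: VSPHERE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_PASSWORD
```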
C
The operator asks you for this YAML spec when you deploy it, and you just provide it. In my example this is running on Amazon EC2, so here I'm using gp2 with a size of 149.
C
By default, we would just use every unused, unpartitioned block device we can find on the node. You can specify this in more detail: you can create groups of nodes and say, on this group I want to use /dev/sdd, and on the other group /dev/sdc. So there's a lot more you can change, but this is the default behavior.
C
Yes, thank you. So now, if we check one of our worker nodes and go to the storage tab, you can see my worker nodes have two 149 GB disks attached to these running instances. So these nodes each just took 149 GB of EBS volumes to put into the Portworx pool, and now Portworx is just running on this cluster.
C
So we see here the storage cluster is online, and if I SSH into one of those nodes — this isn't actually the master node, it's just a jump host of mine — I can see all the nodes of this cluster, and I can also see the storage cluster
C
has been deployed. So this is my storage cluster, and by default Portworx also creates some storage classes with several profiles. You can modify these or create your own storage classes — take, for example, px-csi-db.
C
You can see here we have set a parameter on this storage class which says it's a database with remote replication, and we want a replication factor of three. So if you now deploy a workload — a volume, a PVC — on top of this storage class, it is automatically replicated in three instances, and it is optimized for database workloads.
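A storage class with those parameters, and a PVC consuming it, would look roughly like this — the provisioner and parameter names follow Portworx's documented storage-class parameters, while the class name, claim name, and size are placeholders:

```yaml
# StorageClass optimized for databases, replicated three times.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db-rep3
provisioner: pxd.portworx.com   # Portworx CSI provisioner
parameters:
  repl: "3"                     # keep three replicas of each volume
  io_profile: "db_remote"       # database I/O profile with remote replication
---
# PVC in the mysql namespace; any pod using it gets a replicated volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-db-rep3
  resources:
    requests:
      storage: 10Gi             # placeholder size
```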
C
If you don't set this parameter, we try to guess what's behind the volume. So now you could just create a namespace — let's say mysql.
C
You can see there is now a bound PVC which is backed by Portworx. So this is just how you can consume storage on Portworx. We're now going more to a high level, because, yeah, I promised to demo this. So here I manually provisioned the database on the cluster, but, as I said, most of our customers run databases on Portworx because it's reliable.
C
We also have functionality to back it up. But customers also ask us to find a way to further automate and standardize this database deployment, and this is where Portworx Data Services comes into play. With Portworx Data Services we offer a control plane — let's say it's like an operator for database operators. So you can integrate your clusters into Portworx Data Services, and, as I said, OpenShift is one of our first-class citizens.
C
So I can now register this cluster with our control plane, meaning we roll out our operator, and as soon as this operator is installed, the cluster shows up here as a deployment target in Portworx Data Services. And while we are waiting: in Portworx Data Services you can define storage options for your databases — you might remember the replication factor I mentioned for Portworx; that can be set here as a template. We also offer the data services, meaning
C
currently we support these databases on top of the platform, and you can configure settings for them. You can set templates for the size — for example for your Postgres database — so you give your users t-shirt sizes for the database pods. And you can also set application configurations, which go more into detail: runtime variables for these databases. Everything we support here is documented, but here you can create templates for heap sizes for the databases and all that.
C
You can manage your users, and you can also manage your backups. Our backups are always done using a snapshot of the volume plus the Kubernetes objects, placed in an S3 bucket or another object store. So here you can define your backup targets — your object stores or buckets — and you can also create backup policies. That's all the, let's say, admin work you need to do. And here's my cluster — it just appeared with the UUID of the kube-system namespace, so I'll give it a better name
C
and give this namespace a label saying it's available to Portworx Data Services. It should appear here — oops — very soon, and as soon as this namespace appears in the deployment target, I'm able to deploy databases from the control plane into this cluster. And the most important thing for our customers: the databases actually run on your clusters. This is not a database-as-a-service offering — the databases are the standard database images, and they run in your cluster.
C
Oh, I named it mysql, but I know I will deploy Postgres — well, okay, doesn't matter. I can select the t-shirt sizes I already referenced: how big should these pods be, how many nodes should run. You can select storage options, more details about the application configuration, and the backup schedule you want to run. So let's say I want a backup every five minutes, placed in an S3 bucket. I cannot select it because of the — oh, here, here.
C
Let's open this in a smaller window. It deploys the database into this namespace on this cluster, so the actual database runs on the customer's cluster, and this here just offers a high-level view of all the deployed databases. It also helps when it comes to, yeah, responsibility. We often get the question: who is responsible for running these databases inside the clusters? Is it the developer, who has chosen the database?
C
Is it the database team, which usually only cares about the Oracle databases? Then there's a lack of responsibility for those products, and we can address this with Portworx Data Services and its underlying layers, Portworx storage and Portworx Backup. So we leverage what we have in the platform just to ease the deployment of databases. This is all accessible via an API, so you can integrate it into any automation workflow. And while we are waiting, let's just have a look at the other deployments.
C
You can see here all the database deployments on these clusters. You can also see when updates are available, and you can just roll them out — run the update. You can get high-level metrics out of the deployment and drill into details; we also have many more Prometheus metrics for these databases. And you can also connect to those databases in your clusters.
C
That's if you want to dig deeper — this is just the high level — and, as I said, you can schedule backups and manage all of this from up here. So now my database should be close to ready. Oh no, it's not ready.
B
It's super interesting. No, we don't have additional questions in the chat — I'm sure people are just watching the live demo. And I also tweeted: only in live demos do you see this kind of disaster recovery. So thank you, Daniel, for showing what's possible with Portworx — it's amazing. And maybe, you know, Andrea, maybe this deserves a separate session only on disaster recovery.
A
I think it's also about disaster recovery and backup — talking a little bit about the two together, because obviously you can't have one and not the other in most cases: what the strategies are, what policies you can set up, and what the options are. Enterprise customers usually want to address those two together, and not all clusters will need disaster recovery, but all of them will get backed up, for example. So I think we actually should.
B
Let's schedule it, because it's super interesting. Thank you for this intro into this important topic, and thank you for the live demo — we're excited about it. And hey folks, the recording of this session is at the same link you are watching from, on YouTube and Twitch. And if you have additional questions — Daniel, do you want to share your Twitter handle? We can share the Twitter handle so people can reach you.
B
And also, Daniel, I shared in the chat the web page about Red Hat OpenShift on Portworx, where the customer use cases are mentioned. Among those logos I recognize the Royal Bank of Canada, and there's a webinar linked from that page where you can see how this customer implemented their strategy with Portworx and OpenShift. So I'm going to put the link in the chat again, so you can have a look. That's it!
B
Thank you, Daniel, for being with us today — as I said, it's been really great. Daniel — sorry, Andrea — our next appointment is the next session, which is going to be after a little while, because we have a kind of a break, and then we come back with another session talking about Wasm and Podman. This is going to be on April the 5th.
A
But first — okay, so did you say Podman?
B
C
Well, I mean, I think the license needs to be reassigned to the new cluster. So if you already have Portworx licenses on your cluster, just get in touch with your Portworx account team and they will assist you. There's no direct license migration — you apply the license file, or maybe you would need a new one.
B
So reach out to the account team — they will definitely help you. And Daniel, we'll wait for you for the next session, looking forward to it. Thank you very much, and see you all next time — Andrea, as we said, April the 5th, with this other interesting session. Thank you.