From YouTube: Database Workloads on OpenShift Container Platform Babak Mozaffari (Red Hat) OpenShift Commons 2022
Description
Database Workloads on OpenShift Container Platform
Babak Mozaffari (Red Hat)
OpenShift Commons Gathering on Databases held on 02/23/2022
Slides: https://bit.ly/35nr1ZD
Join OpenShift Commons: https://commons.openshift.org/index.html#join
Full Agenda here:
https://commons.openshift.org/gatherings/OpenShift_Commons_Gathering_on_Databases.html
So as a result of that, the engineering aspects of the RHODA work that Mike talked about in the keynote fall within my team, and we have a team that's been working on that. We've also had a couple of other efforts. I'm going to share a couple of slides. All right, so outside of RHODA we've done some other work, and our focus has been from the point of view of the application developer who needs a database.
The other story is: what if you actually want to run that database on OpenShift? That's where there are quite a few scenarios. One scenario is that, from a somewhat naive Kubernetes perspective, you come in and say: well, this is Kubernetes, right, so I'm just going to have one node running, and if that node goes down for any reason at all, then the pod is just going to come back up on a different machine.
I only need one pod and nothing more than that. One of the challenges we have is that there's a long-standing Kubernetes defect that prevents that from happening, because there's a ReadWriteOnce access mode that is typically used for storage (not necessarily, but most of the time), and that causes some problems. We've looked at what those problems are and at the solutions available with Red Hat to solve them. So that's high availability with a single node, and the solution to that problem, really, is node remediation.
So what happens is: if you have one pod and that pod goes down, then because of the access mode being ReadWriteOnce, Kubernetes can't really bring up that pod on a different machine. It has to verify first that the pod actually went down, because the "Once" at the end of ReadWriteOnce means only one machine can mount that claim to that storage for that pod.
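As an illustration of the access mode being discussed, here is a minimal sketch of a PersistentVolumeClaim that requests ReadWriteOnce storage, written as a Python dict; the claim name and storage size are hypothetical placeholders, not from the talk.

```python
# Hypothetical PVC manifest, expressed as a Python dict for illustration.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},  # hypothetical claim name
    "spec": {
        # ReadWriteOnce: the volume can be mounted read-write by a single
        # node only. This is the access mode the talk says blocks automatic
        # pod rescheduling until the old node is confirmed down.
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},  # hypothetical size
    },
}

assert pvc["spec"]["accessModes"] == ["ReadWriteOnce"]
```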
So there are a couple of solutions there. There's a Machine Health Check operator and a Poison Pill operator within OpenShift that work together, essentially, to identify the scenario and to do the recovery (in the case of the Poison Pill operator, by rebooting the machine and taking it out of commission, therefore letting Kubernetes know that it's okay to redeploy). And there is the Medik8s project, which extends that support, essentially within a community effort, to some scenarios, like outside of IPI OpenShift, where that's not supported out of the box but which in reality cover most of the people who are using this in production.
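The fencing flow described above can be sketched conceptually. This is not the operators' actual code, just an illustration of the decision order they implement: fence (reboot) the unhealthy node first, and only reschedule its pods afterwards, so ReadWriteOnce storage is safe to remount elsewhere.

```python
# Conceptual sketch of node-remediation fencing logic (not real operator code).
def remediate(node_healthy: bool, node_fenced: bool) -> str:
    """Decide the next remediation action for a node."""
    if node_healthy:
        return "no-op"
    if not node_fenced:
        # The node failed its health check: reboot / power it off first,
        # so it provably cannot still be writing to the RWO volume.
        return "fence-node"
    # Node is confirmed fenced: Kubernetes may now safely reschedule
    # the pod and remount the ReadWriteOnce volume on another machine.
    return "reschedule-pods"

assert remediate(True, False) == "no-op"
assert remediate(False, False) == "fence-node"
assert remediate(False, True) == "reschedule-pods"
```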
So if somebody's using MongoDB, they're going to have a replica set, and if they're using, let's say, CockroachDB or Crunchy or anything else, really, they're going to use, for example, the PostgreSQL replication that's available. What that means is that their starting scenario is typically going to be multiple pods, running typically on different OpenShift nodes, and those pods are going to talk to each other to replicate the data among themselves. But even in those scenarios, what you're going to see most of the time is that only a single one of those pods is a primary that's writable at any given time, and if that fails, one of the other pods takes over. And that's great, seemingly problem solved, except: how many failures can you support?
So if you have a typical cluster of three nodes and one of them fails, that's fine. If two of them fail, you're in trouble, because all of a sudden you don't have quorum: the cluster doesn't really know which one has failed and which one hasn't, and you don't want to risk your data consistency or end up with data corruption.
What you need to do is make sure you have a quorum, with more than half of your nodes up. So if you have three nodes, you need two of them to be up. So while this scenario gives you HA without downtime (you have multiple pods running and everything is great), if one of them fails, you need it to recover at some point so that you can sustain another failure. And again, that's what doesn't happen out of the box, and that's why, again, you would be looking to some of those solutions.
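The quorum arithmetic above can be written down directly; a small sketch:

```python
def quorum(n_members: int) -> int:
    """Smallest number of live members that still forms a majority."""
    return n_members // 2 + 1

def failures_tolerated(n_members: int) -> int:
    """How many members can fail before quorum (and writability) is lost."""
    return n_members - quorum(n_members)

# A 3-node cluster needs 2 members up and tolerates only 1 failure;
# a 5-node cluster needs 3 up and tolerates 2. This is why a failed
# member must eventually be recovered: until it is, the cluster's
# remaining failure budget is reduced.
assert quorum(3) == 2 and failures_tolerated(3) == 1
assert quorum(5) == 3 and failures_tolerated(5) == 2
```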
Another thing we've looked at is disaster recovery. Again, this is all work in progress, but we've been looking at disaster recovery with databases, and specifically one of the scenarios we've been working on is CockroachDB, which has the ability to set up a cross-region mesh of CockroachDB databases, essentially. So that's part of the work we've done.
You know, a lot of the positive things that Kubernetes gives you come from bringing the advantages of the cloud environment either to your own data center or, if you're using it in a public cloud, from giving you a lot of things out of the box, like having clustering capability and so on.
A
But
sometimes,
if
you
look
at
that
and
you
compare
it
to
let's
say
a
simple
vm
base,
you
have
additional
infrastructure,
you
have
additional
latency
and
you
want
to
make
up
for
it
somehow
and
one
of
the
ways
you
make
up
for
it
is
by
having
horizontal
scaling
and
horizontal
scaling.
One
of
the
scenarios
we've
been
looking
at
in
this
case
is
with
mongodb
and
while
mongodb
has
replica
sets
that
gives
you
multiple
pods.