From YouTube: Deploying Databases using OpenShift Virtualization - Cem Omurtak (Sahibinden) OpenShift Commons 2022
Description
Sahibinden's Cloud Transformation: Deploying MySQL, Cassandra, MongoDB using OpenShift Virtualization
Erkan Ercan (Red Hat)
Cem Omurtak (Sahibinden)
Jehlum Vitasta Pandit (Red Hat)
OpenShift Commons Gathering on Databases held on 02/23/2022
Slides: https://bit.ly/3HnII8P
Join OpenShift Commons: https://commons.openshift.org/index.html#join
Full Agenda here:
https://commons.openshift.org/gatherings/OpenShift_Commons_Gathering_on_Databases.html
A: So we have another very interesting customer session next up, with Sahibinden. Sahibinden is Turkey's biggest online marketplace. They have a very unique use case: they're deploying several workloads, including databases such as MySQL, MongoDB, Cassandra, ClickHouse and Apache Kafka, on OpenShift, and this supports their whole website and handles all the traffic they receive.
A: We have Cem Omurtak, who is a software and data architecture manager at Sahibinden, and Erkan Ercan, who is a senior solutions architect at Red Hat and helps our customers in their app development journeys, and they're here to shed more light on the Sahibinden story. So, hi, welcome to the Commons session, and thank you so much for being here.
A: You can see their profiles up here, so we can just dive right in; I won't waste any more time. So can you tell me a bit about Sahibinden? What does Sahibinden actually do?
B: Sahibinden is a classifieds platform from Turkey. A diverse spectrum of new and second-hand goods is sold on the platform, like real estate, cars, cranes, clothing, food. With close to 60 million active users and 30 million page views per month, Sahibinden is the fourth largest online classifieds platform in the world, trailing only Craigslist in the USA, Avito in Russia and Leboncoin in France.
B: It's also one of the top sites in Turkey according to Alexa rankings. Sahibinden has close to two thousand employees, 250 of which work in the R&D department. The site is currently running on more than 4,000 VMs, on more than 150 hosts, distributed between two on-prem sites and a public cloud site on GCP.
B: That's a quick overview. So, thank you.
B: Although public cloud providers check all these boxes, we are subject to strict governance regulations in the form of a local version of GDPR.
B: That means at some point in the future we may have to suddenly abandon our public cloud extensions, so we are kind of stuck with on-prem data centers. Before 2020 we had built our virtualization and automation platform around Debian Linux, combined with some homegrown Python-based infrastructure-as-code tooling we named the Sahibinden Cloud System. When we built that kind of infrastructure, the available modeling tools were not very mature, actually.
B: This tooling, although very customizable because it's written by us, has a very steep learning curve and does not really attract people who have already learned and used open source tools for that purpose.
B: So, in fact, we wished to minimize the footprint of self-provisioned and self-operated security and governance technology components and focus our engineering effort on our application and data layer. But we also envisioned that the actual containerization of all our applications would be a multi-year project, so we decided not to make a big bang. Instead, we decided to go with a lift-and-shift approach, where we keep running our workloads on virtual machines and focus first on replacing the underlying platform.
B: With these ideas and restrictions, in 2020 we started a two-phase project. In the first phase we moved part of our workload to the Google public cloud, where we diverted 20 percent of our traffic to keep it alive, and we actually liked our experience there, because we are not bound by any kind of hardware restrictions.
B: In the second phase of the project, we tried to extend the benefits of cloudification of our application services to our on-premise data centers by employing a private cloud solution that mimics the public cloud experience as much as possible, I mean within the limits of technological and financial feasibility. We therefore released an RFP to private cloud vendors, and many private cloud providers, including of course Red Hat, responded to the RFP.
Putting Sahibinden-specific details aside, I think the gist of the RFP can be summarized as follows.

B: Firstly, we needed a mature platform which regards both virtualization and containerization as first-class citizens. By that I mean we should be able to run our workloads flawlessly on both the virtualization side and the containerization side, and connectivity between them should also be seamless.
B
This
kind
of
technology
will
always
allow
us
to
shift
our
applications
from
traditional
vms
to
containers
in
our
own
pace,
because
there's
a
big
transformation
project-
and
we
can't
you
know
close
the
shop-
and
we
are
doing
this-
probably
can't
do
that.
So
we
need
time
for
that
and
without
a
big
bank,
such
a
platform
will
will
enable
us
this
transformation
be
as
flow
as
graceful
as
possible.
B: Developers should be able to control production concerns like the number and composition of pods, autoscaling policies, and the selection of databases and other subsystems; test with these by selecting them from an application catalog; and then push those developments and changes to production with minimal infrastructure team involvement.
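On OpenShift/Kubernetes, one such developer-owned production concern, the autoscaling policy, is just a small manifest. This is a generic sketch: the service name and thresholds are illustrative, not Sahibinden's.

```yaml
# Illustrative HorizontalPodAutoscaler: the developer, not the infrastructure
# team, decides the replica range and the CPU target for their own service.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: listing-api        # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: listing-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because it is a plain manifest, a developer can keep it in their application repository and push it through the same pipeline as the application itself, with no infrastructure-team handoff.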
B
We
are
also
trying
to
avoid
van
der
lucking,
while
we
can
keep
the
symmetry
between
our
own
from
data
centers
and
cloud
data
centers.
This
is
also
very
important
for
us
also.
Lastly,
to
our
experience,
operation
of
large,
open
source.
Kubernetes
installations
are
not
really
three
will
work,
especially
the
upgrade
yeah.
You
can
get
it
work,
but
the
upgraded
really
upgrade
part
is
really
painful
by
upgrading
different
parts,
you
can
easily
end
up
with
the
configuration
that
has
never
been
tested
anywhere
as
before
you,
so
you
may
be.
C: Yeah, actually, when we got the RFP and analyzed the requirements, we thought an OpenShift-on-OpenStack solution, supported with Ceph storage, could be a good fit, because most of their workloads were running on VMs, so we built a solution based on that. But after we presented it, the Sahibinden CTO stated that they were actually looking for a more hyperconverged, unified solution to run containers and virtual machines together seamlessly.
C: They didn't want to invest in two separate stacks, with the integration, migration and maintenance becoming an additional effort for them. That's why we changed our proposal to an OpenShift Virtualization based solution: we offered them bare-metal OpenShift, supported with OpenShift Data Foundation, to run both virtual machines and containers together with the support of OpenShift Virtualization.
C: When we proposed that, the Sahibinden team liked it, and even though OpenShift Virtualization was a fairly new technology at the time, they decided to go with this solution.
A: Okay, and so why did Sahibinden choose OpenShift and OpenShift Virtualization to get this massive goal over the finish line?
C: So I think OpenShift was a very good fit for that: we are a leader in enterprise Kubernetes, and it is very popular in Turkey as well. And as Cem also mentioned, they were trying to improve their developer productivity and change the culture, so that developers are able to try and test new applications very easily, like in a cloud environment. I believe these were the, you know, pillars behind Sahibinden's choice.
A: Okay, that's very interesting. So what's the current architecture? Can you talk us through that? How is the traffic being distributed? How are you delivering this project? Because it's such a big one.
C: Yeah, okay, let me start with how we delivered this project, very quickly. We started the project in February 2021, when we made the bare-metal OpenShift deployment in Sahibinden's Ankara data center. The Sahibinden Cloud System, you know, was what they were using to provision virtual machines and configure other infrastructure components, so they integrated the Sahibinden Cloud System with OpenShift using the OpenShift Virtualization APIs.
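The integration described here is possible because OpenShift Virtualization exposes VMs as KubeVirt `VirtualMachine` custom resources, so a provisioning tool only needs to build and submit a manifest. The Python sketch below constructs such a manifest following the standard `kubevirt.io/v1` schema; the VM name, sizing and container disk image are illustrative assumptions, not details from the talk.

```python
# Build a KubeVirt VirtualMachine custom resource as a plain dict.
# The structure follows the standard kubevirt.io/v1 API; concrete
# values (name, namespace, sizing, image) are illustrative only.

def build_vm_manifest(name: str, namespace: str, cores: int, memory: str) -> dict:
    """Return a VirtualMachine manifest that an IaC tool could submit."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "running": True,  # start the VM as soon as it is created
            "template": {
                "spec": {
                    "domain": {
                        "cpu": {"cores": cores},
                        "resources": {"requests": {"memory": memory}},
                        "devices": {
                            "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}],
                        },
                    },
                    "volumes": [
                        {
                            "name": "rootdisk",
                            "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                        }
                    ],
                }
            },
        },
    }

if __name__ == "__main__":
    vm = build_vm_manifest("mysql-vm-01", "databases", cores=4, memory="16Gi")
    print(vm["kind"], vm["spec"]["template"]["spec"]["domain"]["cpu"]["cores"])
    # prints: VirtualMachine 4
```

Submitting the manifest to a cluster could then be done with `oc apply` on the YAML equivalent, or programmatically with the Kubernetes Python client's `CustomObjectsApi.create_namespaced_custom_object("kubevirt.io", "v1", namespace, "virtualmachines", body)`.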
C: They did it very quickly, actually, and after that they migrated more than 1,500 virtual machines. These workloads include some Java applications, API gateways, even Active Directory, and many stateful applications. That was the topic of today's session, so I think Cem will say much more, but we onboarded MySQL servers, MongoDB servers, and Cassandra and Kafka clusters. In April this went live, and after that these clusters started to receive data synchronization traffic from the other data centers.
C
After
getting
that
cybin
done,
team
actually
rooted
the
start
to
root
some
image
and
content
traffic,
which
is
the
majority
of
the
traffic
happening
inside
bindan.
You
know
when
the
people
upload
their
pic
photos
of
their
houses
cars.
So
these
are
kept
in
a
kind
of
a
cdn
in
private
cdn
network.
So
these
these
kind
of
things
were
started
to
serve
on
openshift
but
and
on
july
2025
cybinder
rooted
80
percent
of
their
whole
site
traffic
to
openshift.
C
Now
so
now
we
are
working
on
the
new
other
data
center
in
istanbul,
so
openshift
is
set
up
there
and
cybernine
team
is
migrating
that
one
as
well
as
currently
we,
as
you
can
see
side
is.
Eighty
percent
of
cybernet
traffic
is
route
to
openshift.
The
clusters
in
ankara
and
20
percent
is
going
to
do
google
cloud.
They
are
currently
istanbul
data
center
legacy,
istanbul
data
center
is
decommissioned,
and
so
we
are
now
modernizing
the
the
other
data
center.
Now.
C
Yeah,
I
think
in
in
a
short
time
this
the
cybin
team
will
complete
the
migration
of
other
applications
in
to
to
the
istanbul
data
center
as
well,
and
after
that,
each
openshift
cluster
will
be
handling
around
40
of
the
whole
site
traffic.
So,
in
the
case
of
an
emergency,
they
can
able
to
switch
traffic
very
easily
between
sites
and
after
that
we
will
be
starting
on
actually
modernizing
the
applications
and
onboarding
containerized
applications
on
openshift.
B: As I said, we didn't want to tackle two problems at the same time, so we just did a lift-and-shift operation where we moved the VMs our workloads were already running on onto OpenShift Virtualization. For middleware, we mainly use Memcached, and Hazelcast as our application cache. We also have an internal CDN which mainly runs Varnish.
B: It serves and caches images, each close to one megabyte. We also have to use Elasticsearch, because we have to search listings on our classifieds site; at least 100 VMs run our different Elasticsearch clusters. And we again heavily use Kafka for event streaming and inter-process messaging.
B
As
far
as
databases,
we
we
we
use
mysql,
it's
the
one
with
absolutely
we
treat
as
absolute
source
of
truth,
because
we
also
have
because
it's
transactional,
we
all
say
longer
by
the
time
this
decision
was
made,
doesn't
have
any
transaction
support
so
more
like
ephemeral
data
like
login
information
or
the
the
kind
of
data.
B
If
we
lose,
we
don't
cry
that
kind
of
data
lives
in,
but
it's
reliable.
Actually
in
click
house,
we
contain
user
activities,
it's
kind
of
analytical
database,
but
it
can
handle
very
big
queries
on
really
fast
and
you
can
actually
connect
your
front-end
application
to
and
it's
much
faster
that
we
got
much
faster
than
when
sql
in
those
type
of
situations,
and
we
also
have
used
cassandra.
B: Applications run on their own VMs on OpenShift Virtualization at the moment, and now it's time to modernize the applications themselves. We recently started a microservices project where we break all the monoliths into modules, you know, separate them into smaller parts, so we can run them on containers instead of on virtualization.
B: Our infrastructure-as-code tooling actually allowed us to do that easily.
B: Provisioning on OpenShift Virtualization was like provisioning at another provider; it was as easy as our old way. So this was our approach. As far as running databases on OpenShift Virtualization is concerned, we didn't feel any difference. I mean, it's not getting any worse; it's better, actually, more performant compared to before. Other than that, we haven't yet used the existing databases from the application catalog, the ones we wouldn't have to operate ourselves, but with the microservices transformation that will change.
B: We expect the developers to include their own databases from the application catalog; then they can develop whatever they want and push it to production, where they themselves decide on the size of the database, how many replicas it has, its topology. So we are hopeful about the future.