Description
As part of the “All Things Data” series of briefings, Red Hat’s Jakub Scholz and Karan Singh will be giving an overview and discussing workload characterizations of AMQ Streams with OpenShift Container Storage.
The AMQ Streams component is a massively scalable, distributed, and high-performance data streaming platform based on the Apache Kafka project. It offers a distributed backbone that allows microservices and other applications to share data with high throughput and low latency.
OpenShift Container Storage (OCS) is software-defined storage for containers that provides you with every type of storage you need, from a simple, single source.
Visit openshift.com/storage for more information.
A
Thank you, everybody, for joining us for another All Things Data briefing on the OpenShift Commons briefing channel. This is a new briefing series and we're excited for everybody to join us, and if you have any ideas or things that you would like to see in the future, please let me know. Today we have Jakub Scholz from Red Hat, as well as Karan Singh from Red Hat, and thank you both for joining us. They're here to talk about AMQ Streams using OpenShift Container Storage, so please take it away.
B
Okay, let me share my screen. In this presentation it will be me presenting together with Karan. I am a principal software engineer in the Red Hat middleware engineering team, and Karan is a senior architect in the Red Hat storage unit. Today we will give you some introduction to AMQ Streams and to the storage requirements which AMQ Streams has, and then we will talk a bit more about the importance of OpenShift Container Storage for AMQ Streams.
B
AMQ Streams has two versions. One of them is for Red Hat Enterprise Linux, so that's suitable if you want to run Apache Kafka on VMs or bare metal directly, without any containers and OpenShift or Kubernetes orchestration. The second version is for OCP, or OpenShift Container Platform, which obviously, for this talk, is the relevant one. On OCP we are using the power of operators from the Strimzi project. Strimzi is a CNCF project which makes sure that Kafka runs natively on Kubernetes and OpenShift environments, and with the operators, AMQ Streams can run basically anywhere OCP is running: on VMs, on bare metal, on private clouds or public clouds. Wherever OpenShift is supported, you can run AMQ Streams as well. With AMQ Streams on OCP we support OCP 3.11 and higher, so if you haven't jumped on the OpenShift 4 train yet, we have you covered as well, but OpenShift 4 is, of course, great, with a lot of improvements.
B
One definition is that Apache Kafka basically provides capabilities for storing, delivering, and processing data, which makes it a kind of ideal platform for streaming data, event-driven architectures, stream processing, and so on. But it is also a publish/subscribe messaging system, so it can be used as an alternative to, let's say, the traditional JMS-style messaging systems, and can be used for the traditional integration use cases. And then the third definition I have here is a distributed, horizontally scalable, fault-tolerant commit log.
B
I love this one, because it's full of very nice buzzwords and it sounds very sophisticated, but don't get misled by that. It's not really just a random collection of some fancy words; it's actually a pretty accurate definition of how Kafka works, because if you look at the Kafka broker in detail, at how it is implemented, it really is a distributed, horizontally scalable, fault-tolerant commit log. So it's pretty accurate. Kafka was originally created at LinkedIn, but later it was open sourced, and now, as the name suggests, it's part of the Apache Software Foundation. It was from the beginning designed to be fast, scalable, durable, and available, and it achieves that by being distributed by nature: basically everything it does is based around data partitioning, or sharding.
B
So all the data are spread into smaller shards which are handled individually, and that's basically how Kafka is able to achieve high throughput with fairly low latency when delivering the data, and that's what gives it the ability to handle a huge number of consumers. What I think is always important to mention is that, while in this talk we will mostly focus on the Kafka broker, which is where the storage comes into play, Kafka itself is an ecosystem rather than a single project or a single component.
B
So apart from the broker, it has its own stream processing library in Java, it has its own clients for different languages, and it has its own integration framework, called Kafka Connect. Even beyond the Apache Kafka project itself, there are integrations with many different third-party tools: all the big data tooling and stream processing frameworks, such as Spark, Storm, and so on, have great support for Kafka. So I always like to point out that it's not just the messaging broker itself, but really an ecosystem of many different components.
B
So I already said that Kafka was designed to be distributed, and it does so by partitioning the data. When sending or receiving messages, we always send to or receive from a topic, but every topic is always split into one or more partitions. It can be that a topic is just a single partition, but in most cases it's many partitions for each topic, and these partitions act as the shards.
B
So most of the actual work is always done on the partition level, and the topic is really more of a virtual object which you send messages to or receive them from, but which internally gets pointed to the right partition, and each message is always written only to a single partition for a given topic. Kafka follows this idea of smart clients and dumb brokers: the broker is really just the commit log, as I mentioned before, and it's the clients which have a lot of the logic of distributing the messages into the partitions.
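The "commit log" at the heart of the broker is conceptually just an append-only sequence of records addressed by offset. As a rough illustration of that idea (a toy model, not Strimzi or Kafka code; in the real broker the log lives in segment files on disk), a single partition's log can be sketched like this:

```python
class PartitionLog:
    """Toy model of a Kafka partition: an append-only log addressed by offset."""

    def __init__(self):
        self._records = []  # in the real broker, segment files on disk

    def append(self, message: bytes) -> int:
        """Append a record and return the offset it was assigned."""
        self._records.append(message)
        return len(self._records) - 1

    def read(self, offset: int, max_records: int = 10) -> list:
        """Consumers read sequentially from an offset; records are never mutated."""
        return self._records[offset:offset + max_records]


log = PartitionLog()
for payload in (b"a", b"b", b"c"):
    log.append(payload)

print(log.read(1))  # [b'b', b'c']
```

Because consumers just track an offset into an immutable sequence, many independent consumers can read the same partition without coordinating with each other, which is part of why Kafka handles huge numbers of consumers well.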
B
So when you try to send some message, the client decides into which partition it should be sent, based on the message key, or based on some round-robin distribution if there is no message key. The consumers similarly decide which consumer should consume from which partition, and so on. That's what allows it to scale so nicely, because there's no single point through which all the messages need to flow in the brokers or anything like that.
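The key-based partition selection just described can be sketched roughly as follows. This is a simplified illustration, not the real client code: the Kafka Java client hashes keys with murmur2, while this sketch uses CRC32 for determinism, and the round-robin counter here is a plain module-level integer rather than the client's internal state.

```python
import zlib
from itertools import count

_round_robin = count()  # stand-in for the client's internal round-robin state

def choose_partition(key, num_partitions):
    """Pick a partition the way the talk describes: hash the key if there is
    one, otherwise fall back to round-robin distribution."""
    if key is not None:
        # Deterministic: the same key always lands on the same partition,
        # which is what preserves per-key ordering in Kafka.
        return zlib.crc32(key) % num_partitions
    # No key: spread messages across partitions evenly.
    return next(_round_robin) % num_partitions

# The same key always maps to the same partition:
p1 = choose_partition(b"order-42", 3)
p2 = choose_partition(b"order-42", 3)
print(p1 == p2)  # True
```

The important property is the first branch: by making partition choice a pure function of the key, ordering is guaranteed per key without any broker-side coordination.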
B
And of course, the roles can be changed during the Kafka cluster lifecycle. In this picture, you can see that we have three brokers with a single topic, which has three partitions with three replicas each, and you can see the dark blue partitions, or replicas: they are the leaders, and you can see that they are feeding the data to the followers so that the followers can keep a copy of the data. And now, of course, what can happen?
B
For example, broker C goes down because of some hardware failure or whatever, and basically immediately one of the follower replicas on broker B is able to take over, becomes the new leader, and the clients can just reconnect there and keep working without any significant disruptions.
B
Now, replication is really important for Kafka and cannot be fully replaced by mirroring on the storage level, because having multiple active replicas available with the same data means that availability is really great, and they can very quickly take over if the leader fails. But it's also important for reliability that the data are written to multiple different disks on multiple different brokers, when they get put into the disk buffers and then later written to the disks. So the replication is not something you can simply replace on the storage level.
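In Strimzi and AMQ Streams, this per-topic replication is configured on the topic itself via the Topic Operator's `KafkaTopic` custom resource. A sketch is below; the topic and cluster names are made-up examples, and the exact `apiVersion` depends on your Strimzi release:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic                     # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to a Kafka cluster CR
spec:
  partitions: 3    # three shards, as in the picture described above
  replicas: 3      # each partition kept on three different brokers
  config:
    min.insync.replicas: 2   # producers using acks=all need two in-sync copies
```

Setting `replicas: 3` with `min.insync.replicas: 2` is a common pattern: writes survive one broker failure while a second failure blocks producers rather than silently losing data.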
B
Because we want to talk about how OpenShift Container Storage fits with AMQ Streams, let's look a bit more at what kind of storage AMQ Streams supports and what requirements we have for storage. On OpenShift, we currently support three types of storage. The first one is ephemeral storage, which is using the emptyDir volumes. If you don't know what an emptyDir volume is in Kubernetes or OpenShift, you can imagine it as a temporary directory somewhere on the host.
B
So it works quite well for things like development or CI pipelines, but it doesn't really provide any data reliability, so it's not supported for production. The second storage type which we support is persistent volume claim storage. What we do here is use the storage classes and persistent volume claims as a Kubernetes-based mechanism to dynamically provision the storage, and that's really great, because it means we do not need to write separate support for each kind of storage type which Kubernetes or OpenShift supports.
B
Once the support exists in OpenShift or Kubernetes, we can automatically take it and use it. And then the third type is something called JBOD storage, just a bunch of disks, which really allows you to specify that the broker should use multiple volumes of one of the two types I talked about before, and that's useful to increase the capacity and performance of the brokers.
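These three storage types map to the `storage` section of the Strimzi/AMQ Streams `Kafka` custom resource. The fragments below are a sketch; the sizes are made-up examples, and the storage class name is an assumed OCS block class that may differ in your cluster:

```yaml
# Ephemeral (emptyDir) -- fine for dev/CI, no data reliability:
storage:
  type: ephemeral

# Persistent volume claims, dynamically provisioned via a storage class:
storage:
  type: persistent-claim
  size: 100Gi                            # example size
  class: ocs-storagecluster-ceph-rbd     # example: OCS block storage class
  deleteClaim: false

# JBOD -- several volumes per broker, for extra capacity and throughput:
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
```

The three `storage:` blocks are alternatives, not one document; you pick one per Kafka cluster (and a separate one for ZooKeeper).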
B
Now, Kafka was basically written to not require any sophisticated storage features or appliances. It doesn't really need storage mirroring; it doesn't really need RAID arrays to make a bigger disk, since it can use multiple separate disks instead; and it doesn't need read-write-many disks, so you can easily use even local storage with it. The actual requirement which we have for the storage is that it needs to be block storage, because file storage, things such as NFS, does not really work well with Kafka and is not supported.
B
The fact that sophisticated storage features are not the minimum requirement for Kafka doesn't mean that they are not useful, and in the second part of this presentation, Karan will talk a bit more about where and when OCS can add value over some other storage types. I'm really from engineering, not really from the field, but I do get involved in talking with our field teams.
C
Thank you, Jakub. Jakub has given you a pretty nice introduction to Kafka and has also painted what storage requirements Kafka imposes on you. So in the second half of the presentation we are going to present how OpenShift Container Storage adds value to your AMQ Streams, or Kafka, deployments.
If you're running AMQ Streams on top of OpenShift, then introducing OCS for your persistent storage for Kafka gives you additional performance and additional resiliency, and we will now talk in detail about these two parameters.
C
As Jakub mentioned, Kafka is natively designed with fault tolerance; it's inherently distributed and is pretty good at that. But OCS can add additional resiliency features and can provide you improved service availability during infrastructure failures, like when a node goes away or a storage volume gets destroyed. These are the benefits that you can get by using OCS. So let's quickly cover performance as the first thing.
C
In our lab we have tested various different combinations of storage backends. We have tested this on AWS: a standard OpenShift Container Platform deployment with three masters and three workers for Kafka. We have tested different versions of OCS, just to make sure whatever we're testing is consistent and to have various different data points. So we have tested OCS 4.2, the current release, and we have also tested the future release, OCS 4.3, which is coming in a few months.
C
The architecture looks like this: everything is running inside OpenShift, and storage is provisioned using OpenShift Container Storage. Each of the Kafka brokers has its own independent block volume, because Kafka works nicely on block, and the supporting component, which is ZooKeeper, has also got a tiny bit of block storage so that it can store its metadata information.
C
So let's look at the performance. There is a lot of information on this graph; this is a performance characterization across different storage options. Before we go into the numbers, let's talk about the config real quick: three Kafka broker pods, and three storage nodes for OpenShift Container Storage, because the storage has to run on a node, which is why we have to use a physical or a virtual instance on AWS for OCS; a standard small message size; and one PV for every broker pod.
C
On the y-axis we have the messages produced per second, the metric that we were looking at; in the next slide we have messages consumed per second. On the column side here, we have tested OCS consuming EBS for its storage needs, which is the current design: for OCS 4.2 you need to use EBS for the storage layer.
C
We've also tested the upcoming new feature in OCS which makes use of local instance storage, just to compare how the performance looks, and then the third one is using AMQ on top of EBS without using any OCS there. The last one is ephemeral storage, which is not something you want to do in production, so I'll just keep it aside right now; for most people it's for dev and QA testing purposes.
C
We are running a distributed storage, OCS, on top of EBS, which is itself again replicated, which is why we have to pay a performance tax; you can compare that between the gold and the blue columns. So, in summary, OCS on i3 instances delivers the best performance that you are looking for, and it's the same story for the second metric, messages consumed per second: again, OCS on top of i3 instances delivers the best performance.
C
One thing we don't have in this graph is Kafka on direct i3 instances, which we have not tested; as Jakub mentioned, people usually deploy Kafka on physical machines and just use JBODs on local storage, so we don't have that metric. This one was standard performance on a steady-state Kafka cluster; next, we thought, let's introduce some failure into the Kafka system.
C
We have destroyed a pod and tried to inject some artificial failure scenarios into the situation, and tried to measure how long it takes for Kafka running on OCS versus Kafka running on EBS to recover. You can see this here: the red line is Kafka on OCS and the blue one is Kafka on EBS, and the smaller this line is, the better it performs.
C
You can see the red line quickly goes back, and the recovery back to a full Kafka cluster is pretty fast on OCS compared to EBS, because on EBS Kafka has to do some regeneration of data: EBS does not provide the storage-level redundancy which Kafka does not strictly require, but would enjoy if the storage provided it by default. So, yes, OCS helps in faster recovery of the AMQ cluster in case of an infrastructure failure, like a node going down or storage going down.
C
These are the performance metrics from a different cluster that we have tested, because we have been getting some questions from the field like: I need both performance and resiliency, which is multi-AZ HA (I'm going to talk about this in the next few slides), so which storage should I use for Kafka? Should I choose OCS, or should I use the more expensive storage on Amazon, which is EBS provisioned IOPS? So we did a comparison between these two storage classes.
C
The first one is AMQ on OCS and the other one is AMQ on EBS provisioned IOPS. Eventually, at the end, both these storage classes are basically using NVMe underneath, but we found that OCS performed slightly better compared to EBS provisioned IOPS, which is kind of nice, for both messages produced per second as well as messages consumed per second.
C
Again, I'm going to talk about this in a few slides, but the second question that we have got from the field is: hey, I'd love to run AMQ Streams on OpenShift Container Storage, but you have two storage classes there. You have block storage, and you also have a file system storage, CephFS. So which one should I use for my Kafka clusters? Jakub has already pointed out the issue with NFS, or read-write-many volumes: Kafka works on that, and you could run Kafka clusters using it, but in the longer run there are stability problems if you use file systems underneath a Kafka cluster, because of the locking and all these things which are there in file systems. Hence, if you are designing this in a Kubernetes-native environment, you should be using the block storage provided by OCS, and even in general you should use block storage for the Kafka needs, because file systems cause stability and performance issues. So this is what we have on the performance side.
The second part of the equation is resiliency: how does OCS help with improved resiliency for Kafka clusters? For that, we first of all need to understand the problem. What is the resiliency problem that we have? Let's suppose I'm running my Kubernetes and Kafka environment on a public cloud; this holds true for most of the public clouds.
C
Your Kafka cluster is consuming PVCs which are being backed by EBS volumes. All good, standard stuff: Kafka takes care of the data replication at the application level. But let's suppose any of the EC2 instances in any of your AZs goes down, or the EBS of that AZ goes down, or the entire availability zone goes down; it's all silicon, things can go wrong, right?
C
When Kafka spawns a replacement pod, it gets new block storage; however, this new block storage would be empty, because it's a fresh PV which the EBS storage class has given it, so Kafka has to reconstruct all the data onto the fresh EBS volumes, and it could take minutes or even days, depending on the size of the data that Kafka needs to rebalance. During this time you are degrading the resiliency of the system, so if another mishap happens, in the same AZ or a different AZ, you are in a problem: you are losing the availability of the service. So that's the problem.
Let's now understand how this problem could be solved using OpenShift Container Storage. Exact same architecture, but this time we have an OpenShift Container Storage layer running on top of EBS: with OpenShift Container Storage 4.2 you can create an RBD storage class out of the EBS storage class, and this time your storage would be spread across availability zones. So again, standard data replication is taken care of by Kafka, everything is good, and storage-level replication is taken care of by OCS, so all the data is triple-replicated across the availability zones. In this case, let's suppose the same thing happens.
C
The AZ goes down, the EBS goes down, the entire node goes down; Kafka will spawn up another pod and it will request storage, but since it is backed by OpenShift Container Storage, OCS will give Kafka the same PV, or PVC, which was originally assigned to your first pod, and Kafka just needs to sync up the changes. It doesn't need to replicate or regenerate all the failed data, because the data is still there: the replication was taken care of by OCS.
C
So this is the additional resiliency you can get out of OCS in your clusters: it can handle AZ failures in the case of public clouds. But wait a minute; you can definitely debate here: hey, OK, so now we are going through the performance tax problem which I explained in my performance testing, since we are running distributed storage on top of another replicated storage layer, so am I going to lose some performance?
C
The answer is yes, you are going to lose some performance, because it's storage on top of storage. So how can I solve this while keeping the same resiliency? You can deploy OCS on top of AWS, or VMware, or any public cloud, using i3-type instances on AWS or their equivalents on Azure and Google. These local instances, these physical instances, provide NVMe volumes exposed directly to the OS, and OCS can use the local storage operator, which is a new feature.
C
It's going to come in upcoming releases, which means there is no performance-tax layer of EBS; we have completely isolated, or removed, the storage layer provided by the public cloud, and you can simply have OCS, with Kafka requesting storage directly from OCS. And if anything goes down, anything goes bad in your environment, the data is taken care of: Kafka doesn't need to regenerate all the data, because it is already there, and it's fast because it's backed by all-flash storage.
C
So this is how OCS adds performance and resiliency benefits to the already-resilient architecture of Kafka. Oh, real quick: we're not going to go through the deployment, as it's a standard YAML file which you could just use. You can go and take a look at strimzi.io; it has fantastic documentation, which Jakub and his team are maintaining, and it will give you enough guidance on how to deploy your first few clusters on top of Kubernetes or OpenShift.
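To give a flavor of that standard YAML file, here is a minimal sketch of a Strimzi/AMQ Streams `Kafka` custom resource using persistent storage. The cluster name and sizes are made-up examples, the storage class name is an assumed OCS block class that may differ in your cluster, and the `apiVersion` and listener schema depend on your Strimzi release:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster            # hypothetical cluster name
spec:
  kafka:
    replicas: 3               # three brokers, as in the test setup above
    listeners:
      plain: {}
      tls: {}
    storage:
      type: persistent-claim
      size: 100Gi
      class: ocs-storagecluster-ceph-rbd   # example OCS block storage class
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      class: ocs-storagecluster-ceph-rbd
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Applying a resource like this (for example with `oc apply -f`) is all the cluster operator needs; it creates the broker and ZooKeeper pods and requests the PVCs from the named storage class.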
C
If you want to give it a try, you can use this URL, which I have created with step-by-step instructions to deploy AMQ Streams backed by OpenShift Container Storage on top of OpenShift. So this is all we have for you today. If you have any questions, you can ask them here, or drop them into the chat, and we can answer or advise. So back to you, Rena.
A
Thank you so much, Karan and Jakub; that was excellent. Thank you, everybody, for joining us today for All Things Data, and our special guests Jakub Scholz and Karan Singh from Red Hat. Join us next time, same time, next Tuesday at 8:00 a.m. Pacific, 9:00 a.m. Mountain, and we will see you soon. Thank you again for joining us.