From YouTube: Workload Demo: Twitter streaming and sentiment analysis with OpenShift Container Storage 4
Description
Karan Singh, architect, demonstrates a Twitter sentiment analysis application leveraging AMQ Streams (Kafka) and MongoDB backed by OpenShift Container Storage 4.
Learn more: openshift.com/storage
The ingredients for the demo: OpenShift, of course, which is the base platform we're going to use. Storage is provided by OpenShift Container Storage. The app stack looks like: AMQ Streams (Kafka) for stream ingestion, Python for the backend API, a JavaScript front-end app, and MongoDB for NoSQL data collections.
Under the covers, the app looks like this. Through our front end, a user comes up: "Hey, I want to do sentiment analysis of my Twitter keywords. Here are my keywords." We ingest those keywords into the front-end app, which triggers the backend API, which contacts Twitter and starts filtering those tweets, as per the user's request, directly into AMQ Streams, which is backed by OpenShift Container Storage. AMQ Streams will persist all the tweets in real time.
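The ingest path just described can be sketched in Python. This is a minimal stand-in, not the demo's actual backend: a plain list plays the role of the AMQ Streams (Kafka) topic, and the Twitter stream is a hard-coded list.

```python
# Sketch of the ingest path. In the real app, 'topic' would be a Kafka
# producer sending to AMQ Streams, and 'stream' would come from the
# Twitter streaming API; both are in-memory stand-ins here.

def matches_keywords(tweet_text, keywords):
    """Return True if the tweet mentions any of the user's keywords."""
    text = tweet_text.lower()
    return any(kw.lower() in text for kw in keywords)

def ingest(tweets, keywords, topic):
    """Filter the incoming stream and persist matches to the 'topic'."""
    for tweet in tweets:
        if matches_keywords(tweet, keywords):
            topic.append(tweet)  # kafka_producer.send(...) in the real app
    return topic

stream = [
    "OpenShift 4 makes storage easy",
    "nothing to see here",
    "Kafka on OpenShift Container Storage",
]
topic = ingest(stream, keywords=["openshift"], topic=[])
print(len(topic))  # 2
```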
The third action involves rendering some charts (we're going to see this in a few minutes), and the fourth action from the backend API involves connecting to an external text-analysis service for some text processing and a few visualizations. By the way, this whole thing is a demo you can run at your convenience. At the end of this presentation I'm going to share the GitHub URL that you can use to demonstrate this for customers, for your friends, or at community events.
Under the covers, let's talk about how OCS is providing persistent storage across this stack, starting with the Kafka cluster. We have a three-node Kafka cluster, and Kafka is backed by RWO PVs from OCS. Kafka needs ZooKeeper, and ZooKeeper needs storage too, so we are also providing PVs to the ZooKeeper cluster through OCS. The third element of the distributed messaging service is monitoring, because monitoring is a key part: Prometheus and Grafana both require some sort of storage, so we have provided PVs to Grafana and Prometheus.
Finally, the database service, which is MongoDB. It also requires persistent storage to be fault tolerant, so we are using another PV for the database service. So this is how, in a real-world kind of application, we are relying on OCS to provision persistent storage.
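Each of those services claims its storage the same way: a PersistentVolumeClaim against an OCS storage class. A minimal sketch, assuming the OCS 4 default Ceph RBD class name and an illustrative size:

```yaml
# Minimal PVC sketch; the storage class name is the OCS 4 default
# (ocs-storagecluster-ceph-rbd) and the size is illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as used for Kafka/ZooKeeper/MongoDB here
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```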
The deployment steps look like this. We'll first run some prerequisite checks, like: is my OCS healthy or not? Do I have OCS or not? We then start by deploying the Kafka service on top of OCS, and we'll then move on to deploying the database service, MongoDB, on OCS. The next step is to deploy the backend API service, which is in Python, on OpenShift Container Platform, and then we will deploy the front-end element that talks to the backend, written in HTML and JS.
I'll switch to my dashboard. This is my OCS dashboard, and step number one is to verify the storage: is it doing good? From here, the cluster is healthy and the storage cluster is okay, so I'll real quick switch to my CLI. Okay, so let's go to the project and run just one or two commands to make sure my cluster is doing good.
This should take a moment, and yes, I do have an OCS storage cluster set up, and RBD has multiple storage classes. Good, so the check is completed. So let's move on to the next section: deploying the distributed messaging service, Kafka. I'm going to start by creating a new project, as any developer would do.
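The prerequisite checks and project creation might look like the following `oc` commands. This is a hedged sketch: the project name is made up, and the exact commands used in the demo may differ.

```shell
# Is the OCS storage cluster present and healthy?
oc get storagecluster -n openshift-storage

# Are the Ceph RBD storage classes available?
oc get sc | grep ceph

# Fresh project for the demo (name is illustrative)
oc new-project twitter-sentiment
```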
I'm going to pause this video just to save time... we're back, so my Kafka and ZooKeeper clusters are up and running. The next step is to deploy the Prometheus and Grafana dashboards, so we'll apply Prometheus, and once it is done we can apply Grafana. These will finally provide us the monitoring capabilities: fetch the metrics from Kafka and do some visualizations.
So now we have our Prometheus and Grafana services up and running, and as we speak we have a Kafka cluster, we have a ZooKeeper cluster, we have Grafana and Prometheus. So let's verify how many OCS PVs we have spawned so far. Switching to my terminal: as you can see, in this project we have the storage class, which is Ceph RBD, and we have three PVs for Kafka, three for ZooKeeper, and two for Grafana and Prometheus.
So these are PVCs. If I check for PVs in this particular project, it's the same output, you know, but this one is the PV and the other one is the PVC. So I think we are all set here. The next step is to link Prometheus to Grafana as the data source and add a few dashboards to it. So I'm going to kick off a script which will do that for us... setting up the data source... the data source is done.
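Linking Prometheus into Grafana as a data source can be done through Grafana's HTTP API, and the script mentioned above presumably does something along these lines. Host names and credentials here are placeholders:

```shell
# Register Prometheus as a Grafana data source via the HTTP API.
# URL, user, and password are placeholders for your own instance.
curl -s -u admin:admin -H "Content-Type: application/json" \
  -X POST http://grafana.example.com/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```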
So this should take a few minutes... we're back, so the Prometheus and Grafana linking has been completed. We will grab the route here to the Grafana instance, and once we browse that in the browser we should be able to see the Kafka and ZooKeeper dashboards in here. So this is my ZooKeeper dashboard, and the previous one was the Kafka dashboard, with live metrics coming from the clusters.
Do we have the template? Yes, the template is there. We're going to deploy a new app with a few parameters, which will launch our MongoDB service. `oc new-app` is the command to do it, and I've used it with my database user and password. So the app is there. We need to expose this app so that... oh no, we don't need to expose it.
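One way to launch MongoDB with `oc new-app` is the stock `mongodb-persistent` template. The demo's exact template and credentials are not shown, so the values below are placeholders:

```shell
# Deploy MongoDB from the persistent template; parameter values are
# placeholders, not the demo's actual credentials.
oc new-app mongodb-persistent \
  -p MONGODB_USER=demo \
  -p MONGODB_PASSWORD=demo \
  -p MONGODB_DATABASE=tweets \
  -p MONGODB_ADMIN_PASSWORD=admin
```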
And we should see our MongoDB deployment proceed. After that, you should see the MongoDB pod itself, so the MongoDB pod is now coming up, which will provision an OCS PV. Once the MongoDB service is up, we should see a PVC claim from MongoDB, which is bound against my OCS install. So, yes, MongoDB is now using OCS. Next, we will exec into the MongoDB pod and try to connect to the database and just add some records, just to verify things are doing good.
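The exec-and-verify step might look like this; the pod selector, user, password, database, and collection names are all placeholders:

```shell
# Find the MongoDB pod and run a read/write smoke test inside it.
POD=$(oc get pods -l name=mongodb -o jsonpath='{.items[0].metadata.name}')
oc exec "$POD" -- mongo tweets -u demo -p demo \
  --eval 'db.smoke.insert({key: "value"}); printjson(db.smoke.findOne())'
```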
So I'm connected to my MongoDB with my username and password. The next step is to write a simple key-value record to MongoDB. The record is written; let's try to fetch the record just to verify things. Okay, yes, so I can read and write to my MongoDB install. Good. Let's move to the next step: we will now deploy our Python backend API service.
To do that, we first verify the YAML file, which is a simple OpenShift YAML file that pulls my container image from Docker and takes a lot of parameters: for Twitter, for the text-analysis service, and for the MongoDB instance, like where to connect for MongoDB and where to connect for Kafka. This is all provided here. So this is basically the file that, if you are following this demo or doing this demo yourself, you need to edit.
Alright, so we're back. The service is up and running; these are the logs for the backend service, and it's listening on port 8080, and so is the container: the backend container is running. So I'm all set from the backend side.
So at this point we have completed Kafka, we have completed the database service, and we have completed the backend service. The last step is to deploy the front-end server so that we can start interacting with the app, so I'm going to deploy our front-end app.
So let's grab the route to our front end, which is this one; I'll open a new tab and browse it. So this is the landing page of our front-end app running on top of OpenShift. The way it goes: we need to provide a few keywords separated by commas, so I'm going to put something just to test it here: Amazon, Google, Microsoft. Before we hit continue, let's switch to our backend API; it is not doing anything because we have not instructed it to yet. Okay, so let's continue with these three tags.
So here, this is the control center. We will go over these things step by step. First of all, as per the graph here, as per the plan, we'll first start streaming from Twitter into Kafka, and from there we move the data to MongoDB, and then from MongoDB we will use the text-analysis service to do some sentiment analysis. So these are the five buttons for it. First, we'll enable the Twitter crawl for those three keywords.
Okay, now, good: you can see the spikes, right? So we are fetching, not much, but around 10 messages per second, and if I set the view to the last 5 minutes you can see clearly that we have started fetching the data from Twitter into our system. It's not too much, because it's a small cluster that we are playing against, but anyway, you get the idea, right? So we are fetching the data. The next step is to move the data from Kafka to MongoDB, so this button kicks off the consumer to the DB.
It should start writing the data. As you can see, it's fast, because we are moving from a local service to a local service. So it's writing data to the database. We will do some chart rendering by clicking on this button, so you can see this: we are getting some tweets. At the moment it is redundant; I mean, we are re-reading the queue, so it is redundant, but anyway, you get the idea. We are fetching the tweets in real time from the internet and moving the data over.
So don't worry about this message; it's just complaining that the execution is still going on, so forget about that. But anyway, we are capturing it and we are reading it. So there is very little traction on Microsoft; Amazon and Google are going to be much higher.
So this is the service we are hitting, and yeah, let's now do some sentiment analysis charting for these tweets. Okay, so the analysis is still going on, but this is the sentiment analysis of the apps that we have, three of these, and it's, you know, positive, negative, and neutral. It's not bulletproof right now, but you get the idea, right?
So we are basically done with the demo at this point. What we have done is: in real time, we have captured tweets from Twitter with our favorite keywords into Kafka; we have then moved the data from the Kafka topic to MongoDB; and then, from MongoDB, we have used some external services for the analysis. We can definitely replace those and build something of our own, but that's for another demo.