From YouTube: Data Services Office Hour: Smart Cities
Description
Join data experts Chris Blum, Michelle DiPalma, and data novice Chris Short every other week for a hands-on Office Hour about Red Hat OpenShift Data Science. Be ready with your questions and to learn a few things along the way.
A: Good morning, good afternoon, good evening, and welcome to a special edition of the Data Services Office Hour. I am Chris Short, executive producer of OpenShift TV, and I am here with the one and only Karan Singh. He is going to be demonstrating smart cities, or green cities, however you want to phrase it; call it what you want. We have a lot of that going on here in Detroit. Detroit, Michigan is where I'm from, and there's a lot of activity in the autonomous corridor space between downtown Detroit and downtown Ann Arbor, where the University of Michigan is. So this topic is near and dear to my heart. I will hand it over to Karan to introduce himself. Please take it away, Karan.
B: Hey, thanks Chris, and thanks for having me here. Hey guys, I'm Karan Singh, and I work at Red Hat. Same as Chris, there's a lot going on in my city: I'm joining from Bhopal, in India, right now. We're going to talk about smart cities and green cities today, and we'll show you how you could build a pattern using open source technology and adopt it in a use case.
The use case that we have chosen for the day is the live capture of images from vehicles and the detection of number plates in real time, plus the entire data chain from that point. That should be pretty interesting, and I hope you guys will find it useful. The whole idea is that this is just a demo of a fictional scenario that we have built up.
Since this is all running on OpenShift, leveraging OpenShift as the core engine, we are using lots of open source technologies on OpenShift and building patterns, so that you can learn from these patterns and try to adopt them into your apps, your production apps, your use cases. That is the whole intention of this mini demo, I would say.
Cool. To get things set up here, I'll share the screen and give you a glimpse of what we're going to talk about and where our story is going to revolve today. Chris, just verify that you can see this. Okay, fantastic. I promise I'll not bore you with lots and lots of slides and presentation screens; there are so many here that I need to pick which one to show. This one is real quick.
B
You
will,
by
the
way,
I'll
give
you
the
the
url
of
the
github
repository
that
we
are
using
to
store
all
the
code
bases.
All
the
openshift
eml
files
sample
data
sets
automation,
including
the
documentation
that
you
guys
can
see
and
peek
around
and.
I think I should switch to this one. So yeah: we do inference at the edge. We have OpenShift deployed on the edge, and we have our model, which is trained to detect cars and extract the number plates; at the same time it extracts the characters of the number plate. Once that model has detected a number plate on a car, it sends that onto a Kafka topic running on the edge. So right now we are at the edge.
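For readers following along, here is a minimal sketch of that edge-side step, using the kafka-python client. The topic name, broker address, and event fields are assumptions for illustration; the demo's actual code lives in the repository's source directory.

```python
# Hypothetical sketch: publish one license-plate detection event onto a
# Kafka topic running on the edge cluster (names are assumptions).
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="edge-kafka-bootstrap:9092",  # AMQ Streams bootstrap service
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_detection(plate: str, station_id: str) -> None:
    """Send a detection event, as produced by the LPR model, to the edge topic."""
    event = {
        "timestamp": int(time.time() * 1000),
        "license_plate": plate,
        "station_id": station_id,
        "detection": "successful",
    }
    producer.send("lpr", value=event)
    producer.flush()

publish_detection("LR33TDD", "A13")
```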
B
So
this
is
all
happening
on
the
edge.
We
need
to
make
sure
the
data
should
also
come
to
the
core,
because
this
is
hybrid
right.
We
need
to
build
a
net
kind
of
a
situation
and
scenario,
so
the
data
will
go
from
kafka
topics
and
using
using
a
using
mirror
maker.
So
this
amq
streams
has
has
a
nice
feature
of
mirror
maker.
A
cap
has
a
nice
feature
for
mirror
maker.
Where
it
can
it
can
you
know
you?
B
Can
you
can
move
messages
from
from
a
distinct
cluster
onto
another
toxic
cluster?
So
you
know
your
take
should
be
on
this.
Is
that
okay?
This
is
one
pattern.
You'll
see
a
pattern
here.
B
This
is
a
this
is
a
event
driven
loosely
coupled
scenario
where
you
have
edge
and
core,
and
you
want
to
move
data
from
the
edge
to
the
core,
so
this
would
be
the
actual
or
the
the
main
pattern
that
you
would
be
deploying
in
your
in
your
apps
right
kafka,
mirror
maker
will
make
this
happen
and
you
know
messages
will
move
from
from
from
the
from
the
edge
on
to
the
core
right
and
once
you
have
message
on
the
core,
which
means
the
message
would
be
like
a
json
json
string
off
of
the
car
at
this.
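MirrorMaker itself is configured declaratively through AMQ Streams rather than written by hand, but conceptually it is a consume-and-republish loop between two clusters. A toy sketch of that idea, with assumed bootstrap addresses and topic name:

```python
# Illustration only: what MirrorMaker does under the hood, reduced to a loop.
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "lpr",
    bootstrap_servers="edge-kafka-bootstrap:9092",  # source: edge cluster
    group_id="mirror-sketch",
    auto_offset_reset="earliest",
)
producer = KafkaProducer(bootstrap_servers="core-kafka-bootstrap:9092")  # target: core

for record in consumer:
    # Republish each edge message, byte for byte, onto the core cluster.
    producer.send(record.topic, value=record.value, key=record.key)
```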
At this time we don't know much about this car, because we have only detected the number; we do the rest of the processing in the core data center. There we process it: we have multiple pieces of business logic that work on that detected number plate, like, okay, tell me who the owner of this car is. In real time we can go and talk to a database running on OpenShift, or a MongoDB. We don't have MongoDB here, but it's just a database that is running, and it will tell you that this car belongs to, let's say, Chris. Then we store that this is the car Chris was driving at this time, and we put a timestamp on that event, so the event gets enriched with some more data on the core. So we are enriching data that was generated on the edge and moved to the core, and this enriched data can then be utilized further.
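A sketch of that enrichment step, assuming kafka-python, psycopg2, and made-up table and column names; the real business logic lives in the repo:

```python
# Hypothetical sketch: consume detection events on the core, look up the
# owner in a SQL database, and enrich the event with owner and timestamp.
import json
from datetime import datetime, timezone

import psycopg2
from kafka import KafkaConsumer

db = psycopg2.connect(host="database", dbname="smartcity", user="demo", password="demo")
consumer = KafkaConsumer(
    "lpr",
    bootstrap_servers="core-kafka-bootstrap:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    event = record.value
    with db.cursor() as cur:
        cur.execute(
            "SELECT owner FROM vehicles WHERE license_plate = %s",
            (event["license_plate"],),
        )
        row = cur.fetchone()
    event["owner"] = row[0] if row else "unknown"
    event["enriched_at"] = datetime.now(timezone.utc).isoformat()
    # ...store the enriched event, or publish it to a downstream topic
```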
For example, you'll see I have another service here, an alert service, which says: okay, I was looking for this car; this car was lost or stolen from somewhere. If this number plate matches a string on the Kafka topic, we immediately send out alerts to the relevant organization.
The third one: you would be storing this for the longer term, on a low-cost object storage solution, so that it can be retrieved and used for data analysis and real-time dashboarding. S3 is the de facto way to store data on object storage, so we use OpenShift Data Foundation, backed by Ceph.
That is the underlying technology that stores this. And once the data is there, you can use tools like Trino. Trino is the open source project, and the downstream product is Starburst's Presto.
Trino is a pretty interesting tool. It is a distributed SQL engine, and the way it works is that from the Trino interface (I'll say Trino or Presto; don't get confused, it's the same tool upstream and downstream, and Trino is the open source version of it) you write SQL statements, and in the back end the Trino engine can run distributed SQL queries.
So if the data is living on, let's say, an RDBMS, or a MongoDB, or an S3 object storage system like ODF, then from a single interface you can query that data in standard SQL. This is very powerful, right? You don't need three different tools to pull data out of different database engines and object storage; Trino can do it for you. And Ceph object storage is pretty fantastic for this.
It is S3 compliant, so there is absolutely no problem for Trino to go and talk to ODF's S3 and pull the data out. And once we have the SQL engine ready, we can query it by CLI, if you're a fan of that, but most people prefer to put a dashboarding or reporting system in front. It could be Tableau, or it could be open source, like Apache Superset.
With Apache Superset you can do ad hoc queries, and I'll show you this; it's pretty interesting. You write a query in Superset, it goes to Presto, and Presto goes and fetches the data from S3. That is pretty fantastic. We also have an operations dashboard, so that people can see what's going on. So this is what we're going to talk about; again, I would emphasize the patterns here.
There are a lot of patterns built into this demo that we are showing you, like inferencing at the edge, because this is really getting attention: I want to do machine learning and AI inferencing at the edge, help me do it. So this is one way you can have it.
And once you have done the inferencing, you need to move the data to the core, either for long-term preservation or for an MLOps scenario where you are continuously getting new images: if the model is not able to detect an image, you want to keep the image that was not detected, train the model in the back end later, and then push the new model back to the edge location.
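One way to sketch the capture side of that loop (bucket name, threshold, and endpoint are assumptions): when the model's confidence is too low, park the raw frame on S3 so it can be labeled and used for retraining later.

```python
# Hypothetical sketch: divert low-confidence frames into a retraining bucket.
import uuid

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.openshift-storage.svc",  # assumed ODF S3 endpoint
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

def handle_frame(image_bytes: bytes, confidence: float) -> None:
    """Keep images the model could not confidently detect, for later training."""
    if confidence < 0.5:  # assumed threshold
        s3.put_object(
            Bucket="retraining-images",
            Key=f"undetected/{uuid.uuid4()}.jpg",
            Body=image_bytes,
        )
```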
This all runs in containers, and OpenShift, and Kubernetes for that matter, is, we believe, the right way to do edge and core and have these MLOps and data engineering pieces tied together.
So Chris, does this make sense? Is there anything you wanted to ask me at this point, or should I explain any of these sections in more detail?
A: All right, Chris is having internet problems right now; I'm his intern.
B: Okay, don't worry. I hope the streaming is still going on; I hope so. Yes, okay, fantastic. So while Chris is busy fixing his cable internet connection, I'll take that moment and walk you through my GitHub repository, because that is the core piece here. My colleagues Guillaume and Kyle are on this call.
I guess they are listening in or tuned in to this. They helped; the three of us spent a lot of time on this and built this demo. The way to use this Git repository is that it will hold multiple patterns. This is the pattern that we worked on, called the smart city pattern.
You go into the smart-city directory, and you will find full-blown documentation that you can use to deploy each and every component of this demo. It's pretty comprehensive documentation that shows you how to deploy this entire smart city, green city project on your OpenShift environments.
This could also be deployed on Kubernetes, but you would need to adapt it and make some changes on your side; that should not be too bad to do, and it should work equally well on Kubernetes. But this has been tested many times, several dozen times I guess, on OpenShift, so we are pretty sure it works seamlessly on OpenShift.
There's a directory called deploy. The deploy directory has all the YAML files you would need to deploy each and every component of this demo, and those YAML files include creating a secret, a config map, a deployment, a service, or a route: the standard Kubernetes and OpenShift things. You deploy these components together, and in tandem they work once you have them.
A: I think, I hope, yeah. Comcast is the best internet provider out there, let me tell you. It's the second time.
B: Yeah, so I was just going through this deploy directory, which has all the YAML files. The crux and the core is inside the source directory: there you will see all the Python code that we have built to write those logics. How this goes and connects to a PostgreSQL database, how you store data into it, how you initialize the Kafka listener, how you move data onto the Kafka bus, things like that. These are all the Kafka consumer and producer configurations. All the code you can find in this repository, so feel free to try it out.
Let me know if you have any issues; just create new issues on this repo if you're stuck somewhere, and I'm happy to help. Now I will show you how this looks. Even before that, I'll show you my OpenShift deployments. This is a running environment, right now, in a namespace called smart-city, and we have all the components deployed: the database, the LPR (license plate recognition) service, and the image server and generator.
The generator exists because, obviously, we don't have real cameras at the moment, so it just generates random images of cars. That generated data is fed into the inference engine, which detects the plate number; this is the LPR service, the license plate recognition service, which does the magic. So yeah, everything is deployed. We also have object storage buckets in here; I'll go and look at the object bucket claims.
We have two buckets; I'm using two buckets to do it. And we are using one interesting pattern here, which is very important for the listeners to see, so I'll come back here for a minute: moving data from Kafka onto object storage. For that we are using an open source tool called Secor. Secor originally came from Pinterest engineering; it's a Pinterest project, and we thought this would be a very good use of it.
I mean, that's why they built it: to move data off a Kafka topic for long-term retention. Kafka is just a buffer, right? A temporary buffer; I would say it's a log. You will not store ten years of data on Kafka; that doesn't make any sense.
Right, and this data is very critical, for lots of reasons, so there has to be a way. Another dimension we considered was: why not move the data from Kafka to a SQL database, an RDBMS?
But when we talked that through in detail: that's not really what an RDBMS is for, right? Traditionally you will not dump lots of archival data into an RDBMS. You would need a data warehousing solution for that kind of thing if you want to store petabytes of data for long-term retention. So object storage is the natural choice for us, because the pattern really is to dump your data somewhere cheap.
So this would be a data lake, powered by an S3 interface; Ceph object storage is the data lake. And what we're doing with Secor: it's a nifty tool which, based on my filters, will move data from a Kafka topic (it's just a listener) and dump it in a format like ORC or Parquet, different kinds of serialization formats.
Secor formats the data and then dumps it as an object into an object storage bucket. And once the data is in the bucket: number one, it is readily available to you; two, it is cheaper, because object storage is typically not very expensive; and three, lots of tools know how to access object storage these days.
This is a pretty good pattern that people are using in production, and that's why we also used it in a real-life scenario: how would you use a combination of Kafka, Secor, and S3, and how can you dump data through it?
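Secor does this batching and conversion for you; the sketch below hand-rolls the same Kafka-to-Parquet-to-S3 pattern just to make the data flow concrete. Topic, bucket, endpoint, and batch size are assumptions.

```python
# Illustration of the Secor pattern: batch Kafka messages, serialize the
# batch to Parquet, and upload it as an object to an S3 bucket.
import json

import boto3
import pyarrow as pa
import pyarrow.parquet as pq
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "lpr",
    bootstrap_servers="core-kafka-bootstrap:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3", endpoint_url="http://s3.openshift-storage.svc",
                  aws_access_key_id="...", aws_secret_access_key="...")

batch, part = [], 0
for record in consumer:
    batch.append(record.value)
    if len(batch) >= 1000:  # flush every 1000 events
        sink = pa.BufferOutputStream()
        pq.write_table(pa.Table.from_pylist(batch), sink)
        s3.put_object(
            Bucket="lpr-archive",
            Key=f"events/part-{part:05d}.parquet",
            Body=sink.getvalue().to_pybytes(),
        )
        batch, part = [], part + 1
```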
Right, so I'll come back to my OpenShift. These are my OBCs, the object bucket claims, with buckets created on ODF object storage. So yeah, that's pretty much it; we have everything here. Now I'll show you what the end result looks like. We have built this dashboard on Superset, and I'll explain it a little. This is a dashboard for reporting, real-time reporting, like, okay, how much have we collected; I should refresh this.
It's a busy day in London. Look at this: almost 10,000 vehicles have passed through these stations, and from those we have collected 43,000, dollars or pounds, whatever, of toll fees. These are all made-up numbers, but you get the idea. And look at this one, the pollution fee: the vehicles which are very old, which are emitting lots of carbon into the environment, we are charging them extra.
So it's kind of a nice dashboard. And as I told you, we have multiple stations, so we can sort by station ID. We have built these panels in Superset, and you can do that too.
It's pretty amazing, and simple to do. So, for example: station number 5201 is witnessing 22 percent of the total traffic. These kinds of metrics you can get in real time using this kind of pattern-based solution, which means you can decide, I don't know, whether you need more traffic policing there this year, because...
B: Yeah, right, these are the kinds of things one can do. Like, who are the top 20 customers? So Mr. Gemma North: he has to pay about 4,315 pounds this month, so his monthly bill would be that much. At the same time you can see what he's driving, and this is the address. I mean, these are all fake numbers and fake addresses.
B: Yeah, these are open source license plate car images that we have chosen, because obviously we don't have that many of our own.
If anyone from London smart city, or any city, is listening to this: contact me, and I can help you build this dashboard for real in your state, wherever. And this is the pollution fee, this one. Oh okay, scrolling issues; going back.
Okay, one more time, come down. Okay, so Susan King: this is the license plate number, and the vehicle is from 2009, which means the engine is emitting lots of carbon, so there's an extra pollution fee of 2,800 pounds to pay. So you get the idea. And this is a distribution of vehicle types.
As per the data set we're using, the Toyota Land Cruiser is the most popular vehicle coming through, and the BMW 6 Series makes up a small percentage of the population. So yeah, this is just to show you the kinds of things you can do once you have enough data captured from these places; using these patterns, you can build this kind of solution. And this is just a smart city example.
You could put this into an industrial environment: you're a manufacturing hub, a car manufacturer, and you can apply this kind of use-case-driven pattern, using open source technologies on OpenShift, pretty neatly. So this was Superset. Next I'm going to show you another cool demo, a dashboard which is actually showing you live data.
I was going for more cost savings, so I powered off all my instances. This demo was working fine, but I powered everything off because, obviously, it's a cost, right?

This morning I powered my OpenShift environment back on, just once, and OpenShift brought up all the components, all the services, including my data, everything, without a glitch.
So what you see here is the same London zone, and these are all the places where we have cameras installed and vehicles coming through the stations. This is the last detected vehicle number, and this is the real image from the vehicle, and we are detecting the number plate of that car in real time. You see this changes, and you will also notice that it is not always accurate, because this model is not accurate, right?

Look at this one; it's not very accurate. But this happens in production.
B: You will not have built a model which is accurate on day one. This is a continuous learning process. You will get new data sets; you have to train on them; you have to build an MLOps pipeline so that you can train, and once you have the trained model, you need to deploy it back, ship it back, to the edge location, so that your edge inferencing engine gets updated with the new model.
So you need this kind of tooling at your disposal, so that you can do all these things without the complexity of things going down, or whatever. That's the beauty of OpenShift: it gives us all the tools to do this, magically.
A: Beautiful. And I think what's amazing is just the fact that you have it set up so the data starts flowing in whenever you start your services, and everything just picks up and goes. That is very handy, right? If I'm a researcher and somebody says, oh yeah, go train this model, I need to have that infrastructure and the same services and the same data and the same everything available to me, and you can do that with this.
Why is there an increase in traffic in this area, this time of year, this time of day, whatever? You can go and investigate that with actual real-life solutions: you can drill down into the data in real time and find whatever you might be looking for. It could be increased foot traffic; it could be anything, down to a car broke down in a center lane. Really, that happens, right? Everything happens: humans, machinery, cities, it's a total wild card.
B
Literally
correct,
yes
and
and
yeah
yeah,
that's
why
chris
and
you
know-
and
I
mean
the
way-
the
thing
that
fascinates
me
here
is
that,
because
yeah
edge
is,
is
really
people
are
actually
deploying
these
kind
of
things
at
the
edge
and
added
edges,
building
popularity,
people
across
the
across
the
industry
like
financial
or,
let's
say,
industry
or
automobile
right
or
e-commerce,
or,
like
the
retail,
all
all
the
industrial
vertical
need.
B
They
have
some
somewhat
different
form
of
edge
and
they
want
to
do
they
want
to
deploy
technology
at
the
edge
right
so
and-
and
you
need
you
need
tools
so
that
you
can
seamlessly
ship
your
your
app
your
your
code,
basically
at
the
edge
so
that
you
can
get
these
kind
of
you
know,
insights
from
data
and
collecting
it
so
yeah
focus
focus
on
on
the
right,
tooling
and
using
the
right
tool
for
your
job
is
kind
of
very
important,
yeah,
so
yeah.
This
is
kind
of
just
to
show
you.
This is a chart of vehicles per second and how it varies over time, kind of a heatmap chart. For another cool graph and chart here I should shout out to my colleagues Guillaume and Kyle, who were hacking around, and I was watching them thinking, what the hell are these guys doing, but then they came up with this.
B: Look at this one, the smart city inference API. This is the API where we have our model deployed. So in real time, this camera, or a simulation of a camera, is sending images to my inference API (the red dot here), and data is going back to my Kafka; then from Kafka we are using MirrorMaker to send the data to my core.
B
It
is
going
to
second-
and
second
is
storing
that
to
that
to
my
myself,
the
the
middle
part,
we
don't
have
arrows
and
dots
here
and
then
we
have
supersite,
which
will
go
and
query
goes
to
the
sql
database
and
self-object
storage
audio
object
storage
to
build
that
build
that
nice
dashboard.
So
this
is
the
whole
journey
from
the
detection.
B
Until
you
see
a
report-
and
this
is
all
happening-
using
open
source
phones
and
on
openshift-
that's
awesome
another
another
variant
of
this
is
okay.
You
know,
super
cpu
is
good.
Let's,
let's
build.
Let's
just
have
fun:
let's
build
memory,
so
I'll
show
you
that
as
well.
It's
kind
of
the
same
this
numbers
are
numbers
are
different,
that's
it.
So
this
is
my
memory
consumption.
B
I
look
at
this
mirror
maker.
Like
you
know,
260
megabytes
of
memory,
it's
consuming
just
to
you
know,
move
more
messages
from
from
edge
to
core,
which
is
very
important
right,
so
yeah
this
is.
This
is
my
dashboard
I'll
come
back
to.
I
also
have
another
interesting
tool
that
we
have
used
open
source
again,
so.
B
Yeah
I
mean
when
people
were
deploying
this
like
we
were
always
you
know
going
to
the
cli
connecting
to
the
to
you
know
the
topic
and
running
commands.
We
love
that.
But
you
know
once
you
are
in
hurry,
you
want
okay,
I
need
just
one
thing
that
will
give
me
everything,
because
I
need
to
forget
the
writing
code
right.
So
this
is
this:
is
cap
drop
you
can
get
it
from
github
and
this
in
real
time.
This
will
show
you
the
messages
coming
on
your
topic.
B
So
I'm
connected
to
my
my
code
after
topic.
This
is
the
url
from
openshift,
and
this
is
my
lpl
is
my
topic
and
you
can
see
this
is
my
my
event,
which
is
detected
from
the
the
edge.
So
look
at
this
one
time
stamp
and
vehicle
number
plate
and
detections
were
successful,
and
this
has
been
detected
on
station
a13.
Another thing that I will show you, which is very interesting, is the capability of Trino (sorry, not Superset; Trino, or Starburst downstream) to write queries in SQL such that it can go and talk to two different destinations where data is stored. Remember, in this slide I talked about how you have data stored on your RDBMS database and you also have data stored on S3, ODF's S3.
B
So,
okay,
my
cpu,
is
it's
really
hard
right
now,
70
something
is
eating
up
my
cpu
anyways,
so
so
yeah
this
part,
so
so
presto
can
go
and
talk
to
our
database
using
distributed,
queries
and
audio.
So
I'm
going
to
show
you
one
like
how
does
that
look
like?
So
this
is
a
simple
query.
This is a simple distributed query that I'm going to run: okay, please go and give me the timestamp, the license plate, the vehicle model, and the owner. And it is written in such a way that it will go and pick up data from my Hive.
Hive is a big data component; it's a metastore which holds information about the tables and the partition definitions: where the actual data of a table is stored, and in which S3 bucket. It's a big data construct, a tool. So using this Hive metastore, the query will go against my ODF, OpenShift Data Foundation, which is the S3 component that we're using here.
B
This
query
will
will
go
and
talk
to
and
do
a
live,
join
on
on
the
database,
which
is
postgres
so
so
again,
just
take
a
moment
understand
this
from
a
single
sql
query:
I
am
I'm
I'm
investigating
or
I'm
just
looking
for
records
which
are
stored
on
two
different
places
and
getting
them
on
one
query.
This
is
very
powerful
because
in
real
world,
I'm
very
sure
you
will
have
multiple
different
data,
so
data
destinations
it
could
be
postgresql,
it
could
be
mongodb,
it
could
be
object,
storage
or
it
could
be.
B
You
know,
I
don't
know
whatever
you
might
have
a
third
form
as
well,
so
you
need
a
single
mechanism
and-
and
sql
is
like
you
know,
sql
is
there
since
last
40
years,
so
people
know
how
to
write
sequels,
so
I'm
gonna,
I'm
gonna
run
this.
This
will
take
a
few
seconds
running
the
statement
and
boom.
You
have
the
data
here,
so
this
is
a
real
time
requiring
this
in
real
time
and
getting
data
from
from
this.
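The same kind of federated query can be issued through Trino's Python client instead of Superset. Catalog, schema, table, and column names below are assumptions; the point is that one SQL statement joins Parquet data on S3 with a live Postgres table.

```python
# Hypothetical sketch of a federated query against Trino.
import trino

conn = trino.dbapi.connect(host="trino-service", port=8080, user="demo")
cur = conn.cursor()
cur.execute("""
    SELECT e.event_time, e.license_plate, v.model, v.owner
    FROM hive.smartcity.events AS e            -- Parquet objects on ODF S3
    JOIN postgresql.public.vehicles AS v       -- live join against the RDBMS
      ON e.license_plate = v.license_plate
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```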
That just happened because we ran it from Superset. Superset gives instructions to Trino, and Trino goes and crawls, gets the data from, the RDBMS and object storage. So this is Trino, and we can see the live plan.
This shows you how Presto works, because Presto is an engine, an engine for big data queries, so it has its own mechanism for building plans and finding the most optimal plan to grab the data, things like that.
So yeah, that is mostly it. I'll come back to the story where we started. You could say: okay, Karan, this is not something you have deployed in the real world, or for a customer, so how is this relevant to me? The takeaway for all the listeners, or watchers, of this live stream is that we want you to think about how you can use these patterns in your use cases, in your apps.
A: Patterns, right. This is a model of models, for lack of a better term. I see this, and that video stream could be anything coming in, right? It could be test results in a hospital; it could be a vaccination site, tracking what inventory comes in and out and how many people you vaccinated, the whole nine yards. There are a lot of applications for this; I mean, factory floors...
A: I remember learning Kafka and messaging queues way back whenever they came out, two thousand something, and just envisioning the possibilities: oh, I could actually just throw it on a bus and have something else pick it up.
A
That's
amazing
and
to
be
able
to
just
like
hey
we're
going
to
have
this
small
deployment
out
there
collecting
data
wherever
it
may
be,
whatever
kind
of
data
it's
going
to
have
kafka
running
on
it
streaming
the
data
back
into
the
core
so
that
we
can
analyze
it
and
process
it
better
right,
like
there's
so
many
possibilities
with
that,
and
then
you
put
openshift
with
kafka.
It's
like
okay.
B: Yeah, I'm very excited working on this project, because this is just a small thing that we have built up, but we can always plug in more, like OpenShift Serverless.
B: All of these services calculating fees in real time, right: you could make them serverless, because late at night there will be less traffic flowing around, so you can just scale those containers down and have them instantly wake up once there is a new message on the Kafka topics. That's another pattern, a serverless pattern, that you could pick up to reduce cost. We don't have it here, but it's one you could use.
One interesting dialogue that we have been having with other people within the SA team is about persistence.
How can I persist data? Here we are actually using two forms of data persistence: block storage and object storage. Typically, as a developer, you need some kind of persistence layer in your OpenShift environment, and without a doubt OpenShift Data Foundation is the ultimate choice. I love the technology it is built upon, Ceph; it's a ten-year-old technology and it just works.
B: Persistence matters a lot, right? I powered off my entire OpenShift environment, and if there were no persistence here, I would have been figuring out how to recover my data. So we've been using block storage, through PVCs, persistent volume claims, from OpenShift Data Foundation, and then buckets. And this is very, very interesting to us; I'm pretty sure it has been discussed previously on your channel, but through the OpenShift console...
B
You
can
request
for
object,
buckets
like
as
a
developer.
You
can
request
for
object
bucket
and
you
will
get
a
bucket
and
you
will
get
credentials
to
your
bucket.
So
that's
what
we
are
doing
here.
So
we
have
created
two
two
buckets.
The
first
bucket
is
the
data
set.
The
data
set
bucket
where
we
are
storing
or
the
data
set,
and
the
second
bucket
that
we
have
is
for
secure
for
secure,
is
dumping
data
onto
this
bucket.
So
again,
this
is
also
a
powerful
construct.
B
As
a
developer,
you
don't
care,
you
just
need
to
write.
You
know
a
code
in
which
you
will
pack
s3.2,
yeah,
right
and
and
you'll
say:
hey,
please,
give
me
one
bucket
or
whatever,
and
this
open
shift
using
object,
bucket
claim,
which
is
a
native.
You
know,
constructing
openshift.
It
will
give
you
it'll
provision
an
object,
bucket
claim
on
openshift
data
foundation
and
give
you
a
bucket
and
then
just
use
that
bucket.
It's
your
persistent
storage
for
for
all
for
everything.
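From the application side, consuming an object bucket claim typically looks like this: the claim produces a config map (bucket host and name) and a secret (access keys) that get injected into the pod, commonly as environment variables. A minimal sketch, assuming those standard variable names:

```python
# Hypothetical sketch: use OBC-provided credentials from the environment.
import os

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=f"http://{os.environ['BUCKET_HOST']}",  # may also need BUCKET_PORT
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

bucket = os.environ["BUCKET_NAME"]  # the claim generates the bucket name
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"persisted on ODF")
```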
A
I
mean
it's
yeah
open
data
foundations
is
an
amazing
amazing
kind
of
platform
of
tools
right
and,
if
you're,
if
you're,
looking
for
the
like
fast
easy
way.
Right
now,
we
have
in
tech
preview
some
tech
preview.
So
keep
that
in
mind
when
you're
using
this,
you
can
use
what's
called
an
assisted
installer.
Basically it gives you an ISO, but that ISO can be configured with ODF and virtualization and everything, and all you have to do is give it hardware and boot the ISO, and it'll install a full cluster with all the bells and whistles: everything needed for this demo, other than some bits and pieces here and there, and the data itself. Oh yeah, did you know it could do that? The Assisted Installer.
I was poking around in it yesterday, rebuilding my cluster, as I do, because I break it all the time (this channel helps with that), and I saw it and thought, let me try this.
There was ODF, there was virtualization, there was everything. I was like, dang, this is really powerful. So it's in tech preview; check it out. Go to cloud.redhat.com, or try.redhat.com, or openshift.com/try (I dropped the link in the chat). You can hit that, look for the Assisted Installer, check a couple of boxes, get an ISO, and you'll be up and running in less than an hour, basically; it just depends on how fast your infrastructure is.
A: Just, you know, rich with content, exactly.
B: And talking about new things, I also have one more cool collection of things to show the audience. Here you see these three things: Superset, Grafana, and Starburst.

These come with ODH. ODH is Open Data Hub, which is a collection of tools that help you do data engineering, or analytics, or machine learning, things like that, all running on top of OpenShift. So it is very, very flexible and helps you.
There are so many components to this operator. Okay, I'll go to my installed operators; I have it installed. Open Data Hub is the operator that I've installed here, and if you go to Open Data Hub and then the KfDef, here you can define which components you need. You might not need all the components that ship with Open Data Hub; you can choose which components you need in this YAML file.
B
So
I'll
say:
okay,
you
know
I
need
I
need
super
set
and
I
need
grafana,
and
I
also
need-
and
this
is
this
is
the
magic
thing
here
chris
I
and
I
want.
I
want
to
use
premium,
which
is
my
sql
engine,
and
this
is
where
this
is,
how
simple
it
is
to
use
object.
Storage
between
you,
okay,
trinio,
please
go
to
this
end
point,
and
this
would
and
get
your
secrets
and
credentials
from
these
two
secrets,
and
this
is
the
bucket
name.
This
is
a
storage
class.
So
look
at
this
one.
This is how simple it is to configure Trino to use S3 object storage in your app. And once you have this KfDef applied, the Open Data Hub installer will deploy the relevant components in your namespace; it will deploy everything for you, and it also gives you a nice dashboard of its own, from which you can jump to the right tool. That's the ODH dashboard; ODH, again, is Open Data Hub.
It's an open source project, and we have a downstream product called OpenShift Data Science, which is a collection of the more matured tools in here. Right now I'm just using two components from Open Data Hub; you can go through it and see everything it contains, but yeah, this is Superset and this is Grafana, and there are so many components in here. Just choose which ones you want to deploy on OpenShift. It's simple; that's beautiful.
So these toolings will help you build the right foundation for your MLOps journey, for your data engineering journey. You need the right tools; that's what we do at Red Hat: we provide the right tools to all the developers and all the people, so that they can build amazing things, like what I've shown here.
B: If you're running this on Kubernetes, you need to do some work and change the YAMLs. The code base should work; the Python files should just work fine. You need to deploy Rook Ceph on Kubernetes, because Rook Ceph provides you block storage and object storage, and then you need to modify some YAMLs, but I'm pretty sure this will also work on Kubernetes.
B
You
need
to
just
do
some
changes,
but
yeah
hit
up
this
hit
up
this
url
and
there's
a
nice
documentation
here,
which
we
are
also
in
process
to
streamline
before
it
is
streamlined,
but
because
this
is
more
more,
you
know
more
kind
of
comprehensive
so
that
we
intentionally
wanted
to
be
comprehensive
so
that
you
can
learn
how
we
are
building.
It's
not
like.
You
know
it's
not
a
not
a
secret
sauce
like
right
or
not
or
a
black
box
right.
I
want.
We
want
you
to
work
on
this.
B
Follow
this
and
I'm
pretty
sure,
by
the
end
of
this
deployment,
you
would
get
to
know
a
lot
of
patents
that
you
can
use
in
your
apps
and
you
know,
make
make
us
feel
proud.
A: That's awesome. Thank you so much for this demo, and I apologize for my technical difficulties; I feel like I missed a little bit.
Yeah, that's true, and we'll definitely check out this repo more. I shared it out for everybody to see.
It was an awesome demo, and kind of a mind-opening thing for me. I hope the audience has had a similar experience despite the technical difficulties. This is amazing, Karan; this is awesome.
B: So yeah, I look forward to seeing how you pick up these patterns and build your own apps. Again, this is just a demo to showcase the power of OpenShift and all the other open source components we have used here, and how we have deployed them. Initially we thought, okay, we'll just have a single OpenShift cluster at the core, boom, we are done.
The demo would have been complete, but that was not very realistic, and we wanted something people can relate to, because edge is really the hot topic right now. People want to do it, and they want to know how they can do it. Like, Chris, we were discussing this a few minutes back.
B: Okay, and in case you want it, I'll put up my LinkedIn, because LinkedIn is something that I follow. I need to find the right window; please hold on.
B
So
that
the
linking
that
was
filling
now,
I
got
it
now.
Okay,
so
so
yeah
I'll
look
forward.
If
you
guys
have
any
anything
for
me,
if
you
want
to
learn
more
or
you
need
any
kind
of
help
from
this
so
reach
out
to
me-
and
let
us
know
let
us
know
in
the
comments
section
of
of
whatever
platform
you're
visiting
like
if
you
have
any
questions
or
maybe
there
could
be
another
interesting
thing
or
a
pattern
that
we
would
be
missing
here.
You
know
we're.
B
Right
we're
all
developers,
so
we
can.
We
can
see
things
from
a
different
point
of
view,
so
we
might
be
having
something
interesting
that
I
would
learn
from
you
right
and
maybe
incorporate
that
in
my
next
level,
because
that's
that's
what
we
got
to
do.
We
want
to
build
more
demos,
more
more
use
cases
that
we
that
should
stuck
that
should
stay
in
your
mind
and
then
you
can
use
these
things
while
building
your
next
great
app.
A: Coming up later today we have DevNation: The Show with my friend Sebastian Blanc, and then, not immediately after, but later this afternoon on the channel, we have GitOps Guide to the Galaxy; we'll be talking about Helm and GitOps workflows.
A: So, yes, when in doubt, check out the calendar; it'll link off to wherever you need to go to watch the content. We'll see you next time. Thank you again, Karan; this was great. I look forward to seeing more later.