Description
Guest Speakers: Alessandro Arrichiello, Luca Bigotta and Luca Gabella (Red Hat)
Containers can be the perfect medium for IoT edge deployments, and in this scenario OpenShift is the right platform for helping developers build edge applications. We'll see, in a real use case scenario, how developers can leverage OpenShift features to enable hybrid deployments onto standalone Red Hat Enterprise Linux. In the demonstration we'll also use OpenShift's Ansible Service Broker to automate the external deployment, looking forward to using Ansible Tower when large-scale deployments are needed.
B: Thank you, Diane. Good morning, good afternoon, good evening, everyone. I am Luca Gabella, responsible for developing the Internet of Things activities in EMEA for Red Hat, and together with my colleagues Luca Bigotta and Alessandro Arrichiello, solution architects for Red Hat, we will run this webinar today. We would like to introduce you to a use case leveraging OpenShift's distributed container capabilities within an Internet of Things context. The webinar will be divided into two parts: in the first one we will present the solution and go through a recorded demo to illustrate the use case; the second part will be dedicated to a Q&A session.

So, starting with the Internet of Things. The Internet of Things is a complex topic, and we won't deep-dive into it in this webinar because we won't have enough time, but in a nutshell an Internet of Things architecture is typically described in tiers. The first tier is made of the devices and sensors in the field, where the data is generated.
B: The second tier is made of concentrators, or gateways, that gather the data from the field, from the first tier. They typically sit at the edge, as close as possible to where the data is generated, but more and more these tiers are required to become more intelligent, running complex data elaborations and applications in real time. Then we have the last tier, the data centers, where the Internet of Things solution is orchestrated through automation procedures and where the data is elaborated, captured, and injected into the different enterprise processes.
B: Coming back to the second tier, the edge: it typically faces varied environments, sometimes very tight in terms of physical constraints and sometimes very demanding in terms of computational power and the applications to run. The use case we will demo in this webinar targets the type of IoT scenario where the edge is requested to do very complex processing activities; that is, typically, when it has to run very complex applications and manage multiple types of data elaboration. This use case was inspired by several customer discussions we have had in the last months, where typically the customer asked to be able to develop applications centrally and then deploy them easily out to the edge, update them, and eventually remove them from the edge. These are typical requests we face from the customer side, and in this slide we try to summarize three typical cases where the type of use case we will demo today can be applied. The first one is the typical manufacturing industry situation: a manufacturer with plants distributed all over the world who needs to connect those plants and deploy applications that have been developed centrally. Something similar applies to other sectors.
B: Another one we are seeing is fleets of boats to be managed, typically cruise boats and so on. And the last one is players who have deployed very complex machinery, with CPUs embedded inside, and who need to develop and deploy specific applications depending on the context where the machinery is used. Having said this, I will now hand over to Luca, who will describe in detail the use case that motivated this webinar. Thank you.
C: The customer has on-premise a server farm where the development teams, both internal people and third-party people from a system integrator, work together on building and testing applications. They also started adopting a microservice architecture based on Linux containers in the central server farm, to be replicated at the different plants: the Docker container format seemed to be a simple format in which to package an application and its dependencies as a portable workload to be moved from the center to the edge.
C: We were dealing with a manufacturing company with a main IT data center at headquarters and many plants. In every plant we find a small local server farm acting as a hub, to which we have to connect edge elements like microcomputers attached to the production machines. The plants are spread around the world, so we also had to consider the complexity coming from geographic distribution.
C: In a comprehensive IoT scenario, the deployment solution we built is focused on moving business applications to the edge, and it can be combined and integrated with technical solutions more dedicated to the typical IoT domain aspects, like data and event gathering from sensors, field connectivity to machines and sensors, device management, and so on. Now Alessandro will give you the details of the solution.
D: The edge machines could be Red Hat Enterprise Linux, running containers in an engine like Docker, and we can handle the deployment with the Ansible automation suite. But how do we handle it? As we said, we can easily install and use OpenShift both on the central data center and at the factory. Then we can leverage the OpenShift CI/CD features, with Jenkins for example, to handle all the development needs. Once built, the containers can be moved between the environments, for example from development to test and from test to prod, and also to the factory, arriving at the IoT hub and finally at the edge machines running standalone Red Hat Enterprise Linux. Both the OpenShift on the data center and the one at the factory may also share the same OpenShift container registry, and the edge deployment is handled through Ansible. So, that is a recap of the technology involved.
D: Before going deep into the recorded demonstration, I would like to better explain what we will see and how we achieved it. First of all, we reused a demo that was developed for a Red Hat Summit a few years ago and, as you can see from the slide, it is made of different components: a sensor application sends temperature data using MQTT to a Red Hat AMQ broker, and these messages are forwarded to a business rules service that finally triggers an action when the sensor value reaches a threshold.
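The flow just described, simulated readings feeding a threshold rule that triggers an action, can be sketched in a few lines. This is a minimal, illustrative sketch only: in the actual demo the messages travel over MQTT through AMQ and the rule is evaluated in Decision Manager, and the threshold value and function names below are assumptions.

```python
import random

# Hypothetical alert threshold; in the demo the rule lives in Decision Manager.
THRESHOLD_C = 70.0

def read_sensor(rng, base=60.0, jitter=15.0):
    """Simulate one temperature reading, like the demo's software sensor."""
    return base + rng.uniform(-jitter, jitter)

def rule_triggers(temperature_c, threshold=THRESHOLD_C):
    """Business rule: trigger the action when the reading reaches the threshold."""
    return temperature_c >= threshold

def alerts(readings, threshold=THRESHOLD_C):
    """Return the readings that would trigger the alert action."""
    return [t for t in readings if rule_triggers(t, threshold)]

if __name__ == "__main__":
    rng = random.Random(42)
    samples = [read_sensor(rng) for _ in range(10)]
    print(f"{len(alerts(samples))} of {len(samples)} readings crossed the threshold")
```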
D: As you can see, the demo uses Red Hat products throughout, precisely JBoss Fuse, Decision Manager, and AMQ, and it represents an example of an intelligent gateway on the edge. Talking about the demonstration: we first containerized the demo, creating a GitHub project for every single component plus a placeholder project describing the whole demo. We then set up the deployment configs, builds, and services for running the containers on OpenShift, as you will see in a few moments in the recorded demonstration.
D: We also integrated, for every single project and container, a Jenkins pipeline that builds, deploys, and promotes the containers between the different environments.
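A build-deploy-promote pipeline of the kind just described could look roughly like the following declarative Jenkinsfile. This is a sketch under assumptions: the project names (`iot-dev`, `iot-test`) and the component name (`software-sensor`) are invented for illustration, the agent is assumed to have an authenticated `oc` client, and the demo's actual pipelines may differ.

```groovy
// Illustrative Jenkinsfile: build in dev, deploy there, then promote to test.
// All project and component names are hypothetical.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Run the OpenShift build for this component in the dev project
                sh 'oc start-build software-sensor -n iot-dev --follow --wait'
            }
        }
        stage('Deploy to dev') {
            steps {
                sh 'oc rollout latest dc/software-sensor -n iot-dev'
            }
        }
        stage('Promote to test') {
            steps {
                // Tagging the dev image into the test project promotes it;
                // an image change trigger can then redeploy it there.
                sh 'oc tag iot-dev/software-sensor:latest iot-test/software-sensor:latest'
            }
        }
    }
}
```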
Going deeper into every component: we have the software sensor, an application that produces simulated temperatures and sends them to an MQTT topic on an AMQ broker. Then we have the Fuse routing service component, which takes the software sensor messages and moves them onto a queue monitored by the business rules, powered by Decision Manager.
D: So, in the demo we will see one OpenShift Container Platform simulating both the data center and the hub at the factory, using three projects: the development project, the testing project, and the hub. Finally, we also have an external standalone Red Hat Enterprise Linux running the containers on top. As you can see, we have four containers running in the development environment and in the testing environment, where the containers are built through Jenkins pipelines and tested.
D: We also have, on the same OpenShift platform, a project simulating the factory's hub; in this project we run only the AMQ broker container, for receiving the data from the edge. As we will see in a few moments, the final edge deployment onto the remote Red Hat Enterprise Linux is handled through the Ansible Service Broker feature of OpenShift.
D: So, just before going into the demonstration: as we said, we created, as part of the demo, an Ansible Playbook Bundle, published in the OpenShift service catalog, that starts the edge deployment thanks to the Ansible Service Broker. The Ansible Playbook Bundle contains all the credentials needed for connecting to the remote Red Hat Enterprise Linux, and the playbook for deploying the containers. Of course, this is just an example: in a real, massive use case scenario this technique would evolve into using Ansible Tower, which offers an enterprise management console for Ansible workloads. In the next slides we will watch a demonstration we recorded of the example running on a real OpenShift platform, and we will comment on it live.
D: First of all, this is the GitHub page of the original demonstration that was presented at a Red Hat Summit a few years ago. As you can see, it is built around Red Hat Enterprise Linux, using JBoss Fuse, Decision Manager, and AMQ, and it was an example of building an intelligent IoT gateway in a few easy steps.
D: Then we created a project on GitHub as a placeholder for all the instructions, prerequisites, and links for all the other projects involved, which you can reuse for deploying the same demo on your own OpenShift environment. Moving to the OpenShift Container Platform: as you can see, this is a platform built on Azure, already running, with the container catalog and three projects, the development, the testing, and the hub, as we said before. We will start by going deep into the development environment.
D: As you can see here, there is an AMQ container handling the queues for messaging; then a business rules container handling the rules; a Jenkins container, which we will see in a few moments, that handles builds and pipelines; a routing service Fuse container; and then a software sensor that produces simulated temperatures. The whole environment is up and running: the software sensor is already producing temperatures and sending them to the AMQ broker. We can also inspect and take a look at the builds we created for producing the final container that will be deployed at the edge, and at the pipelines created and used by Jenkins for handling the build, then the test, and finally the deploy of the container in the development environment. At the end of the deployment there is also a tag stage, where we tag the current development image into the testing environment, promoting that image. For demo purposes we can also start the pipeline, which will produce a brand-new image. Talking of images, we have only three placeholders, because we use the AMQ broker image from the Red Hat container registry; and then we have the same three placeholders for the IoT testing environment, which will be populated by the Jenkins pipeline.
D: Moving to the IoT testing environment, we can see that it has the same architecture and structure as the development environment, the same elements and containers, but no builds or pipelines are created here. This is because we fill OpenShift's Docker image streams with the development images tagged by the Jenkins pipeline. Moving to the IoT hub project, we will see that only the AMQ container is running, because we use this broker for receiving messages from the edge.
D: Then we spawn and deploy images to the edge thanks to the testing image placeholders. Moving to the container catalog and choosing among the services: as we said, we created an Ansible Playbook Bundle named "IoT remote container deployer" that requests the IP address of the remote RHEL host to connect to, and starts the container deployment.
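For reference, the descriptor of such an Ansible Playbook Bundle is a small apb.yml; the sketch below shows the general shape, with a plan parameter for the remote host's IP address. The field values and the parameter name are illustrative assumptions, not the demo's actual descriptor.

```yaml
# Hypothetical apb.yml for an APB like the "IoT remote container deployer".
# Names, descriptions, and the parameter are illustrative.
version: 1.0
name: iot-remote-container-deployer
description: Deploys the IoT demo containers to a standalone RHEL host
bindable: false
async: optional
plans:
  - name: default
    description: Deploy the demo containers to the edge
    free: true
    parameters:
      - name: edge_host_ip
        title: IP address of the remote RHEL host
        type: string
        required: true
```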
So, before provisioning the service, we can connect to the Cockpit interface of the remote Red Hat Enterprise Linux server.
D: Cockpit is a web-based interface that lets you manage a Red Hat Enterprise Linux machine remotely and, as you can see from the overview page, this is just a virtual machine running on top of Azure. It is just RHEL, not an OpenShift node; just RHEL plus Docker. In the Cockpit interface we enabled the Containers tab, for watching the containers as well, and, as you can see, there are no running containers, only some images left in the cache from previously created deployments.
D
The
the
containers
will
be
provisional
thanks
to
playbook,
we
already
wrote
and
integrating
in
the
catalog
and
as
you
can
see,
this
playbook
will
use
the
docker
image
module
of
ansible
for
pooling
the
images
from
the
external
exposit
OpenShift
registry,
and
we
will
download
the
iot
testing
project
images
that
will
it
will
kill
any
previously
running
containers
and
start
one
by
one
all
the
needed
container
at
the
edge.
The
last
one
will
be
the
software
sensor.
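The provisioning step described above could be sketched as an Ansible playbook along these lines. This is an assumption-laden illustration: the registry host, image, and container names are placeholders, the docker_image and docker_container option names vary between Ansible versions, and the demo's real playbook may be structured differently.

```yaml
# Illustrative edge-deployment playbook: pull from the exposed OpenShift
# registry, remove any old container, start the new one. All names are placeholders.
- hosts: edge
  become: true
  tasks:
    - name: Pull the component image from the external OpenShift registry
      docker_image:
        name: registry.apps.example.com/iot-test/software-sensor
        tag: latest
        source: pull

    - name: Kill any previously running container
      docker_container:
        name: software-sensor
        state: absent

    - name: Start the container on the standalone RHEL host
      docker_container:
        name: software-sensor
        image: registry.apps.example.com/iot-test/software-sensor:latest
        state: started
        restart_policy: always
```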
D: To check that everything is working properly, we will see in a few seconds that the software sensor, which has just started, is already producing simulated temperatures. These temperatures are sent through the chain directly to the AMQ broker in the OpenShift Container Platform, in the hub, and we can see that this broker is indeed receiving messages. Looking at the AMQ console, as you can see, the number of messages is increasing, so the whole stack is working properly.
D: Moving back to the OpenShift console: just as we provisioned and created new containers at the edge, we can also deprovision them by deleting the created service. As you can see, the service shows as deployed; we can click delete, accept the deletion, and then, moving to the standalone Red Hat Enterprise Linux, we will see the running containers being killed in a few moments as the previous deployment is cleaned up.
D
As
you
can
see,
the
images
still
stay
in
the
in
the
cache
for
for
future
deployment
of
speeding
up
the
deployment
so
his
and
the
actually
demonstration,
but
just
for
recapping
what
we
saw
we
saw
how
could
be
easily
to
use
OpenShift
as
a
development
environment
for
testing
building
application
that
could
be
movement
and
then
run
in
the
edge
as
a
container.
So
the
the
image
is
only
build
in
the
in
the
development
environment
then
promoted
in
the
values
project
environment
that
we
will
create.
A: Excuse me, that was a very complete and wonderful demonstration, so thank you very much for the presentation. I'm curious: this was a demo, or a version of it, that I did indeed see on stage at Red Hat Summit. I'm wondering if you have examples of people who are now using this at scale in production out there yet, or if this is still something that's mostly just in the architecture phase.
B: So the thing is, we do not have a live case right now. We are working with a couple of customers, in the use cases and the examples of targeted industries we mentioned at the beginning of the presentation, and with other types of players we are discussing with. Deployments of this kind at real scale take a lot of time, because an Internet of Things deployment is not only the digital part that is difficult.
B: It is also the deployment of the physical part, the re-engineering of the processes, and so on. This brings a significant change in the way they used to run applications; really, sometimes they did not make any deployment of applications at the edge at all, because the technology was not available yet. But it is very interesting that the people we are talking with are designing architectures with the intention to deploy containerized applications as close as possible to where the data is generated.
A: No, I don't expect you to name names or anything. It's just that this is the same, or a similar, conversation that we've had in the automotive SIG and in other places as well, because, you know, it's not just factories but automobiles and other moving-vehicle type things that need to get to those edge networks as well.
A: So there's some overlap, I think, in the conversations we're having in different market spaces, even amongst the Red Hatters, so it'd be interesting to bring everybody together and see if some of this can help other folks. I actually think some of this has probably been used in production, or at least at the POC stage, in automotive; and with telcos as well there have been a lot of conversations around the edge networking stuff.
A
Definitely
it's
one
of
the
new
hot
areas
for
us
to
continue
to
help
build
reference
architectures
and
get
the
information
out
there
about
how
to
do
this
and
the
interesting
to
see
as
as
you're
the
people
you're
working
with
go
public
with
their
pocs
and
give
their
lessons
learned.
And
that's
best
practices
to
get
them
to
come
back
and
and
tell
us.
You
know
what
well
what
they
did
and
where
they,
where
they
can
share
their
phone
content.
D: The main issue at the moment is that in Kubernetes, and in OpenShift as well, since Kubernetes is a container orchestrator, it is hard to manage an isolated node, and so Ansible and Ansible Tower can help with deploying, and with automating the deployment. OpenShift is really powerful for developing new technologies and creating new projects, but of course you then need to add some automation layer for handling the deployment to the edge, and that's why we thought about this solution.
A: I think it's a great use of Ansible Tower and Ansible Playbooks for the automation. Yeah, and one thing I'm sure of is that OpenShift doesn't solve everything; we have to use a lot of other tools in our toolbox to make this happen, and it's really good to see.
I don't see any questions from the audience in the chat; there are a lot of folks listening and being very quiet. So if anyone has a question, please put it into the chat. And do you have a resources page in your slide deck?
A: Yeah, so I think that's something we should probably do. I offer up OpenShift Commons, because we have a number of topic-specific special interest groups; maybe, if people are interested, we can create a landing page on OpenShift Commons for these discussions, and also post some GitHub repo links and things like that as well. And Luca just posted the GitHub repo for this project and for this demo in the chat. Well, is there anything else?
B: No, no, not at this stage. I think this is something that is coming from real demand; we are in discussions, so we hope to have more things to add in the near future. But I mean, this is a real need. Not everyone on the market today, I have to say, is mature enough to understand this new potential, so obviously there is some work to do in introducing it to the market as well.
A: The first step is to get the video out; we'll create a SIG and get people talking. And I think we'll pull in some of the folks in automotive; some of the folks in Japan have done an edge networking talk previously, probably about a year ago, at an OpenShift Commons gathering. So what I'm thinking is:
A: if now is the time, let's pull everybody together, get some of the requirements and feedback from outside of Red Hat, and start building a community around this content. So I look forward to doing that with you all. Thank you very much for coming today. We will post the recording on YouTube and, if everybody sends me their slides, I will post them on blog.openshift.com, or openshift.com/blog actually, and we'll get that up there for you all. But thank you very much.