Description
The manufacturing industry is no stranger to using technology to optimize operations, accelerate product development, and fuel innovation. In this session, you'll hear how new cloud-native software development, in conjunction with edge computing and AI/ML intelligent applications, can help proactively discover and solve potential manufacturing problems before they occur by gathering and processing sensor data at the assembly line across hundreds of manufacturing plants, enabling manufacturers to reduce downtime through predictive maintenance and improve product quality.
A: Hi everyone, welcome to this OpenShift Commons briefing session. My name is Rosa Guntrip and I am part of Red Hat's marketing team, within the cloud platforms business unit, where I look after both OpenShift and OpenStack from an edge computing perspective.
B: Yeah, perfect. So for today's topic, I really want to discuss what the manufacturing industry actually is and what edge computing means for it. Then I want to spend roughly half of the time we have on a demonstrator we built that shows how we can build, deploy and manage software and artificial intelligence applications in a factory edge environment, and hopefully we'll have a couple of minutes for Q&A at the end.
B: So let's dive right in: what is this manufacturing industry? It's really an umbrella term that we use at Red Hat to cover different industry segments. We're talking about anything that is somehow produced in a plant, so that could be high-value assets like cars.
B: That's basically our lighthouse vertical, where we do a lot of work in the automotive space, but also, for example, shipbuilding, trains, and aerospace and defense. It could also be consumer goods or industrial parts like electronics, home appliances or power tools. What's also included are chemical companies, rubber, pharmaceuticals, or anything that is not manufactured in a discrete fashion but more as an industrial process. All of them have in common that they have some form of plants that need to be managed.
B: The way those manufacturing companies are structured, and we took the automotive example and extended it to other industries as well, is that they have a value chain in four steps. You have R&D, where you get from an idea to an offer; then you usually have a sales and marketing area, where you go from an offer to an order; production and logistics, from the order to the actual delivery; and once it's delivered, you have an after-sales or services division that covers everything from delivery to customer care. That might change a little bit depending on the industry.
B: And the interesting thing is that in basically all of these value chain areas there are opportunities for edge. In R&D, for example, automotive manufacturers are now doing huge amounts of work around autonomous driving, and of course each one of these research vehicles is an edge computer which gathers petabytes of data that needs to be managed, ingested and stored somehow in a central environment. In sales and marketing, you very often have flagship stores and retail outlets that need to be managed somehow.
B: So there you also have edge. And in after-sales and service, a connected product like a connected car, or even a connected home appliance, is some form of edge device. But today we're really focusing on the production and logistics piece, where we're looking at what actually happens in the factory space.
B: We showed a slide similar to this in yesterday's presentation by Nick, where he talked about Red Hat's overall vision for edge and how that is reflected in our product landscape. Broken down to factory edge, we're really talking about the end-user premises and device edge areas, so the far edge, right before the actual sensor and the device, where we want to be in the factory and do integration scenarios like manufacturing execution systems integration, predictive maintenance, quality assurance, augmented reality, or even controlling autonomous transport systems. That is usually not that easy, because you have different stakeholders there: we suddenly talk about OT, operational technology, versus IT, information technology.
B: What are we talking about? What we usually see in our customers, especially global manufacturing customers with a number of plants, is that they have some form of hierarchy. They have, of course, a headquarters, or potentially multiple headquarters, and headquarters data centers. Then, usually in each of the major geographies, they have a regional data center, where they can aggregate and control on a regional level.
B: But then each plant also has its own data center, and I put "data center" a bit in quotation marks, because you shouldn't expect a cloud-level data center there. It could just be a rack in a room; don't expect three fire zones, and be glad if you have two separate rooms and things like that. But each plant usually has its own IT control center.
B: The plant data center usually doesn't do the actual production control. Each production line usually has some form of line server, a ruggedized PC where certain connections take place and where you can also do process monitoring, etc. The actual production control is done by specific devices, programmable logic controllers, which are basically fail-safe, safety-critical, and actually control the production process.
B: Going from right to left, we often see increasing scale, an increasing order of magnitude in the number of devices. It's very common for large manufacturers to have, for example, hundreds of plants with thousands of line servers and then tens or hundreds of thousands of devices that actually sense or control things. And somewhere between the line data server and the plant data center there is this convergence between operational technology and information technology, where those things somehow have to work together.
B: Industrial manufacturing is a bit different from telco edge, with different trade-offs you have to make. If, for example, a 5G cell within a telco network dies, very often it's basically only affecting a small subset of the customer base, and other cells can potentially take over by increasing their radio output, etc. In manufacturing edge, production must never go down. So you have different requirements with regards to uptime, but, for example, it can usually operate in a disconnected fashion much better than a telco environment,
B: where being disconnected means calls get lost. You have very long-lived investments there, so we always have to think about brownfield environments, where there is existing investment in certain machinery out there that just needs to be connected. And as I mentioned before, everything is separated behind firewalls, and it's usually a policy that you cannot connect, for example, to a line server from the regional data center,
B: just because the line server is something that needs to be protected, so it's hidden behind firewalls. The discussions we usually have with customers go like this: "We're now using modern technologies in our corporate data centers, and we really love these cloud environments where we have APIs for everything and can easily deploy new applications and new software." And they ask us: how can we do the same thing toward our manufacturing plants? How can we achieve real-time transparency as a foundation for successful optimization, planning and control of production?
B: How can we do that without taking a USB stick and going to each one of these line servers individually? How can we orchestrate the rollout of configuration across hundreds of manufacturing plants? How can we use machine learning in production control, for example, to improve quality or to do visual inspection?
B: Well, to do that we need multiple things. We need to somehow accelerate software-driven production optimization: our customers want to develop new features targeting manufacturing applications as fast as they're doing it with their other business applications.
B: We want real-time transparency, so we need real-time insights and actionable data, both at the edge as well as in the central data center. We need a way to process, analyze and visualize that data, and for that we need to be able to use standard messaging protocols based on open industry standards.
B: That would allow us to do these agile and controlled configuration and software rollouts, but it would also give us a single point where we can enforce, for example, auditing, because we then only need to audit who changed the desired state. That gives us insight into who actually made a modification in the end, and we can center our approval processes around it.
B: And we see that a declarative model also scales to a hundred production lines, just because we know that such a declarative model scales to thousands of cloud nodes today. And to leverage big data and machine learning technology, we need to collect, normalize and visualize the data, make it accessible to data scientists, and be able to develop and deploy machine learning models not only in the central data center but also back out to the production line.
B: We want to use declarative config management, and for that we are using a GitOps-based approach. Most of what we saw with GitOps was targeting a single cluster, so we were looking at how we can extend that, from a conceptual point of view, to cover multi-cluster environments. For data processing from sensors to analytics, we're using an open source middleware and AI/ML stack; more specifically, we're using the Open Data Hub.
B: That was also mentioned yesterday. It brings up all the machine learning tools that we need, based on Kubernetes and OpenShift, and we want to do model training and deployment to production there. The Open Data Hub actually helps us because it can be integrated into our continuous integration / continuous delivery story.
B: So what we've built in our demonstrator is really a small sample application. It consists of something that simulates data acquired from sensors, plus an application made up of a messaging broker, with Kafka as the event bus; a small web application that depicts the current state of the sensor data being acquired; and some form of analytics which helps us detect outliers and alerts the machine operator in case something happens. And the way we envision it, there are three different environments of relevance.
B: One of them is the line data server, where the actual sensor applications reside. We have a second execution environment, which is the factory data center, where the rest of the application sits. The developers and the OT operations folks don't sit in either the factory data center or the line data server; they sit in the central data center, and from there they build and deploy the application that is depicted here.
B: One thing to notice is that we assume there's a strong firewall between the line data server and the factory data center, so the line data server is protected. In order to allow the sensors, or rather the sensor simulators, to actually connect to the AMQ broker, ports need to be opened there, and for that we've built an operator that allows us to enable and disable firewall rules, again based on the same declarative approach.
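As a rough illustration, a custom resource consumed by such a firewall operator might look like the following sketch. The API group, kind, and field names here are hypothetical stand-ins, not the actual resource from the demo repository:

```yaml
# Hypothetical custom resource for the firewall operator described above.
apiVersion: network.example.com/v1alpha1   # assumed API group, not the demo's real one
kind: FirewallRule
metadata:
  name: allow-sensor-to-broker
spec:
  description: Let machine sensors reach the broker behind the line firewall
  source: line-data-server          # network zone the sensors live in
  destination: factory-data-center  # zone hosting the messaging broker
  port: 443
  protocol: tcp
  action: allow                     # the operator removes the rule when the CR is deleted
```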
B: The next slide shows a somewhat more complex view of what we actually built. The demo is rather extensive, and running through each one of these demo modules in detail would take us well over the time we've allotted for this briefing; you could probably spend a whole day on this whole demo scenario. But basically, what one can show based on this factory edge demo is, for example, first of all a basic application deployment: how we can declaratively deploy an application that spans the line data server, the factory data center, and aspects of the central data center. And the second thing that we usually show is a development roundtrip.
B: We go into CodeReady Workspaces, bring up a browser-based IDE, do a code change, and then run our continuous integration / continuous delivery Tekton pipelines to create new container images. Those are then meshed with the GitOps approach, so they're automatically staged to the test environment and tested there, and once the test concludes they're sent for approval, so that the production environment can also be updated with the latest versions. For those customers who are really interested in the deep, dark, core technical stuff, we can go into discussing how to actually build an operator which manages infrastructure components, for example firewalls, or other things that might be required in such a scenario, and we have a demo module around that.
B: The seventh demo scenario is really: how can we actually bring the data that we collect in the factory to the central data center, for example for machine learning training? But also, if a developer or support person needs to understand what's actually happening in the factory, we somehow need insight into what is happening there.
B: So how do we actually transport that data? We're using Kafka and MirrorMaker there to forward the data from the factory data center to the central data center, where it is made accessible both in Kafka as well as in S3 storage. And based on the data acquired in S3, as our last demo module, we can show how machine learning can detect outliers in the data and push that back as an updated model into the factory, where we can then update the outlier detection. So that's basically the end-to-end demo, and I would dive right in and try to cover one or two of those demo scenarios in the next couple of minutes.
B: The way we've done it, I'll start with what has been pre-built. I'm now looking at our GitOps repository, and the interesting thing is that you can see multiple directories here; basically, in our config instances we have different sets of applications that we can deploy.
B: Sorry, and the one that is interesting to us is really this manuela-stormshift instance, which consists of the three parts I mentioned before: we have the line dashboard, we have the machine sensors, and we have the messaging piece, and those are configured as five different components that we can deploy.
B: We have the individual components here, the messaging application, the machine sensor, the line dashboard, and we have our custom resources which define firewall rules that allow us to open ports. And why did we make it that way, why did we put everything into a single directory? Because we want to be able to understand and reason about what an application looks like in its whole context. All the components that make up this application are here in a single directory, and we can look at all the contents, all the Kubernetes artifacts that make up this application, and understand how that application operates and how it works. But that is not how the application is actually deployed, because parts of it go to one cluster and parts of it go to another cluster.
B: So let's take a quick look at what our actual infrastructure looks like; I need to step out of the presentation here. In our demo scenario we have two different OpenShift environments: we have one over here and a second one over here, and I am assuming you cannot read this yet.
B: But if I zoom in a little bit, you can see there's a number of things already deployed; the actual application, though, anything with manuela-stormshift, is not there yet. And the way we want to do the deployment declaratively is through GitOps. So we've deployed Argo CD, and Argo CD has a configuration which basically says: anything that is committed to GitOps in the factory data center directory will be instantiated automatically in the cluster.
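For orientation, an Argo CD Application that watches one such directory would look roughly like the following sketch; the repository URL, paths, and cluster address are placeholders, not the demo's actual values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: manuela-factory-datacenter
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manuela-gitops.git  # placeholder repo URL
    targetRevision: main
    path: deployment/execution-environments/factory-datacenter
  destination:
    server: https://api.factory-cluster.example.com:6443    # placeholder cluster API
    namespace: manuela
  syncPolicy:
    automated:      # anything committed under the watched path is applied automatically
      prune: true
      selfHeal: true
```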
B: So that's what I want to do now: basically take the configurations that are there and deploy them to the different clusters that are out there. To this end, I will switch over to my console.
B: I'm in the manuela-gitops directory here, so let's change into the deployment directory. Here you can see I have different execution environments, so I'll switch into, for example, the line data server execution environment, and here I will simply symlink the configuration that is out there, the manuela-stormshift one. Here I want the sensor application, so I just symlink this one.
B: And as a third thing, I have a separate directory where I can store all the things that make up the network path between the line data server and the factory data center, and here I'll symlink the HTTP rules. So you can assume that each one of these three directories represents a different target cluster: we'd have a target cluster where the applications running on the line data server run,
B: we have a next cluster that represents the factory data center, and we have a third cluster that represents some form of management cluster hosting the firewall configurations, for example. And here are the firewall rules: we have HTTP and we have HTTPS.
B: I do an initial commit to deploy the MANUela app, and before I git push, I just want to check the state of what is currently out there. So I'm switching back to my project; you can see there are no manuela-stormshift components deployed here, same over there, there is no manuela-stormshift deployed over here, and if we look at our firewall, we'll probably have to log in again; let me do that.
B: The push will trigger an update of the operational git repository, and the way Argo CD usually works is that it basically polls the git repository every, I don't know, five or six minutes. I can certainly speed that up just by telling Argo CD to sync, which is what I'll be doing now, just in order to speed up the demo a little bit. And you can see it already picked it up.
B: So it now sees that there are new applications out there, so it already synced against the git repository, and what you will see now is new applications being deployed here. You can see the manuela-stormshift dashboard and, probably in the next couple of seconds, also the messaging layer.
B: The rollout will take a couple of seconds, and you can actually monitor it, for example by opening the application, so you can see what's happening. And Argo CD is quite intelligent in the way that it does this, because it allows us, for example, to do it in stages.
B: While the rollout is happening, Argo CD does it for us automatically; we don't have to do anything beyond actually updating the git repository. Let's take a look at the application. I just need to check what's up with the other one that's being rolled out; that's this one over here, yeah.
B: That's a good one: sync! So let's see what went wrong; I probably didn't do the preparation right. Okay.
B: Assuming the messaging layer and the sensors are already there, and the sensors are running on the second cluster over here, we can see that we now have three different data sources configured. We have pump number one and pump number two; they send vibration and temperature data, but pump number one is only sending vibration data and no temperature data. So this is certainly something that we want to change, as our second use case.
B: So we've now proved that we can deploy an application that consists of multiple components over different clusters. We have cluster number one and cluster number two here: the sensors are deployed to one cluster, the rest of the application is deployed to the second cluster, all of it from the central data center. And now we want to do a configuration management change.
B: In our manuela-stormshift machine sensor configuration we have this config map, and we can see that temperature is set to enabled: false. We simply set it to true and store it again. I'm not doing it through the console this time; let's do it through the UI. I add the change, with the commit message "enable temperature sensor", and push it again to the central repo, and once Argo CD picks up the change, I would expect... let me quickly trigger that here.
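For reference, the kind of change being committed here is a one-line edit to a ConfigMap along these lines; the exact resource and key names in the demo repository may differ:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: machine-sensor-1          # assumed name for the pump's sensor config
data:
  SENSOR_TEMPERATURE_ENABLED: "true"   # was "false"; Argo CD applies the new desired state
```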
B: What we can also check in the meantime is: did our firewall rules actually change? And yes, you can see here we have two new firewall rules, here in the middle, which have been generated by our operator, because with the application rollout we also declaratively said we want the firewall between the sensor and the application to be opened, and this is what happened here.
B: On the sensor side, if that doesn't work out... it's still starting, that's why; the new one is not ready yet. And once it's there, the fourth data source is also arriving. Usually with GitOps things happen asynchronously in the background, so I'm a little bit eager and always expecting data to arrive much earlier than it actually does. But here's the fourth data source arriving now, so we've now shown we can deploy an application.
B: We can configure infrastructure like firewalls, and we can do config changes, all from the central data center to the edge. I want to skip over the application development and the more complex use cases here. Basically, what we would show there is how we can, for example, run Tekton pipelines. We have our continuous integration environment here, where we've deployed our Tekton pipelines, and we have a number of pipelines built that we can then use, for example, to do a build and staging.
B: For example, in our pipeline here, we clone our development and operational environments, update a build version, then run the source-to-image build. At the same time, in parallel, we modify the operational repository to point to the results of the new build. Once that is complete, we tag the development environment with the build number, commit the operational repository, and push it so that it points to the newly tagged image. We then again trigger Argo CD to sync.
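Sketched as a Tekton manifest, the pipeline shape just described could look roughly like this; the Task names are illustrative stand-ins, not the demo's actual Task definitions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-stage
spec:
  params:
    - name: component                 # which application component to build
  tasks:
    - name: bump-build-version
      taskRef:
        name: update-version          # illustrative Task name
    - name: s2i-build                 # source-to-image container build
      runAfter: [bump-build-version]
      taskRef:
        name: s2i-build-task          # illustrative Task name
    - name: update-gitops-repo        # point the operational repo at the new image tag
      runAfter: [s2i-build]
      taskRef:
        name: update-image-ref        # illustrative Task name
    - name: argocd-sync               # trigger Argo CD so staging picks up the change
      runAfter: [update-gitops-repo]
      taskRef:
        name: argocd-sync-task        # illustrative Task name
```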
B: What I've also already deployed here is our Kafka and S3 integration. Basically, what happens in our OpenShift environment is that we pick up the data and forward it through MirrorMaker to the central data center, and there we store it in S3. That's also what's happening here: we have OpenShift Container Storage deployed, which provides us with S3 buckets, and there we have our MANUela data lake, which already contains a large number of objects.
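As a sketch, with AMQ Streams (Strimzi) the MirrorMaker piece is itself just another declarative resource, along these lines; the bootstrap addresses and topic pattern are assumptions, and field names vary slightly between API versions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: factory-to-central
spec:
  replicas: 1
  consumer:
    bootstrapServers: factory-kafka-bootstrap:9092   # assumed factory cluster address
    groupId: factory-mirror
  producer:
    bootstrapServers: central-kafka-bootstrap:9092   # assumed central cluster address
  include: "manuela-.*"   # assumed topic pattern to mirror ('whitelist' in older API versions)
```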
B: Scrolled too far. So we have our temperature data and the sensor data that was gathered through MQTT, copied to Kafka, and sent over to the central data center, and we now have all the sensor data available in S3 in our central data center. And this is how I want to end the short demo: this is where we would usually switch over to, for example, JupyterHub to pick up and analyze the data.
B: We have created a Jupyter notebook where we pick up the data gathered from the sensors and try to understand where certain anomalies come from and how to detect them. In order to do that, we first of all need to pick up and cleanse the sensor data. We then have the raw data over time, and we need to label the outliers.
B: So, is a decision tree better, or do we take support vector machines, or use a Gaussian naive Bayes filter? We basically compare and contrast these approaches with regards to their accuracy and resource consumption, and in this example we determine that the decision tree is actually the best approach. This then becomes a small application that we build and prepare for Seldon serving, and those Seldon containers are then built again through our standard pipeline and sent back, together with the trained model, through GitOps, in order to be picked up again at the edge, and then you can see the outlier detection model in action. So that is sort of a one-day demo day in, let's say, 20 minutes.
B: So where are we today? We've achieved most of that; the demo itself is all public in our git repo. In the future, we want to focus more on this end-to-end messaging story. We also want to look more at cluster lifecycle management, to really be able to deploy the infrastructure at the edge as well.
B: At the moment, edge infrastructure just falls from the sky for us, and we're really keen to also show how we can roll out and manage the lifecycle of the infrastructure at the edge. We also want to be able to do distributed monitoring and logging: for example, if somebody complains that the application is not working, it's probably not feasible to take the developer and drive them out to the factory.
B: If you're looking, for example, at autonomous transport systems, they might have some lidar or camera sensors running on top of them, but the whole heavy lifting is done somewhere in an edge computing environment. That could be an interesting use case. You can also do the very low-latency telemetry types of use cases there, where you have tons of sensors just connecting wirelessly, without them having to be connected to specific field buses, etc. So those open new challenges, but also great new opportunities, in factory environments.
B: Another interesting thing is OT infrastructure consolidation. What we see is that, in the past, every type of OT use case usually brought its own bespoke hardware configuration with it. So you have tons of hardware sitting next to each other, each for a specific use case, and to date, the approach of actually virtualizing and consolidating that onto a single hardware environment hasn't really been done yet, and at the moment it's really hard to keep this partly aging hardware up and running.
A: We do, we do, we've got a couple of questions. And thank you, by the way, Wolfram, great presentation and demo. First question is: is the upgrade schedule slower for manufacturing, to ensure more stability and less downtime?
B: What we usually have is some form of maintenance windows that you need to comply with, which, by the way, is just a configuration item in Argo CD; you can just tell Argo CD: don't do any deployments between these times, or only do deployments in these hours, and so on. So yes, it's definitely slower, just because you don't want to interrupt the ongoing production flows, and you usually have to comply with these maintenance windows, as has traditionally been the case.
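Concretely, Argo CD models such maintenance windows as sync windows on an AppProject; a minimal sketch, with an assumed schedule and application pattern:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: factory
  namespace: argocd
spec:
  syncWindows:
    - kind: allow                # only sync inside this window
      schedule: "0 2 * * 6"      # assumed maintenance window: Saturdays at 02:00
      duration: 4h
      applications:
        - "manuela-*"            # assumed application name pattern
      manualSync: true           # still allow an operator-triggered sync if needed
```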
A: Okay. Second question: are we doing anything with computer vision on a factory floor?
B: That is an interesting one, because the role that Red Hat usually plays in that space is pretty much that of the platform and infrastructure provider. So I'm pretty sure that we have customers, though I can't really talk about a specific use case here, that use this platform, with all its capabilities, including the GPU enablement, to do accelerated computer vision types of scenarios.
B: But again, this is not where Red Hat has offerings per se that directly target this; it's more an infrastructure storyline for us.
A: Okay, yeah, and there is a follow-up from Diane on that, just to see if there was someone we were partnering with, but I think you've addressed that question.
B: Yes, I only did some hand-waving around that. That is certainly a true observation, and basically the way that we do it is: we take the model library and the Python code, run our continuous integration build, bake everything into a container image, push it to Quay, use Quay's replication features to make it available at the factory, and from there we just do a Seldon serving deployment.
B: Basically, we have the Seldon operator installed at the edge, and if we had more than just 45 minutes, I'd be happy to show that.
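For a feel of what that looks like, a minimal SeldonDeployment with the model baked into the image could be sketched like this; the image reference and names are placeholders:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: anomaly-detection
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        type: MODEL
      componentSpecs:
        - spec:
            containers:
              - name: classifier   # must match the graph node name above
                image: quay.io/example/anomaly-model:1.0.0  # placeholder image with baked-in model
```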
B: At this point, we haven't really looked at partner solutions, because we first want to understand better what our own Red Hat Advanced Cluster Management has to offer here, because it certainly allows us a certain degree of insight across clusters. Once the requirements of our customers go beyond that, I'm pretty sure our customers will usually already have decided on an observability partner here that we would then look to integrate.
B: The demo scenario itself is pretty much standalone, and just for the reason that we cannot also bring up an SAP application environment in our demo environment; otherwise it would explode. But in practice it certainly needs to integrate. For example, if you do predictive maintenance, the plant maintenance functions usually live in SAP, so any work order needs to actually be dispatched to SAP, and in actual production control you then have the parts lists and the production orders, etc.
B: So, of course, in a real scenario we would certainly have to integrate with SAP, and the good news is that we have a suite of components in our Red Hat Integration portfolio that allow us to do OData through the NetWeaver Gateway, or do classic R/3-type RFC connectivity, or do IDocs or BAPI calls, whatever is needed. So there's a range of capabilities that we can bring to the table in our Red Hat Integration stack to connect to SAP.
A: Awesome. And then we've got a few more from David. Going back to the AI side, he's asking whether you're baking the model into the container; that's the question.
B: Yes, and I assume there's an underlying thought about this: won't the container images get too big if we do that? For our demo scenario, no, they don't. Once we're really getting into the multiple hundreds of megabytes to gigabytes range, we can certainly discuss whether baking the models into the container images is the right approach, but for the scenario that we had in mind, it certainly was.
B: That's certainly something we also want to add. OPC UA doesn't stand for anything anymore; it used to stand for OLE for Process Control Unified Architecture, which is basically a web-services-based approach to control processes, or to connect to this type of equipment, let's put it this way. And the key difficulty is that we haven't yet been able to find a Linux-based OPC UA server, because that's much easier for us to replicate than a Windows-based one.
B: We need a server, in this case, that is on an open source license, and everything we found is either proprietary or Windows-based, which makes it a bit harder for us to integrate into our demo environment. But we'd love to do that, because we think that MQTT is, compared to OPC UA, rather a niche protocol, and most factory deployments will support some form of OPC UA.
B: Maybe Nick will also step in here. We don't use it in our demo, but I think we're certainly monitoring what EdgeX Foundry is doing.
A: Awesome, awesome. Okay, so I don't see any further questions; we'll give it a couple of seconds to see if any more pop in. Thank you, glad you joined, David. All right, so with that, I think we're good. Thank you, Wolfram, great presentation, great demo. Thank you, everybody, for joining us, and I'm looking forward to further conversations on edge computing with Red Hat. Thanks.