Description
From manufacturing to energy to telecommunications, industries everywhere are looking at how to benefit from edge computing to lower costs, scale their infrastructure, and create new business opportunities. In this session you'll hear Red Hat's strategy for edge computing and the importance of its linkage to open hybrid cloud and open source. We'll also cover our architectural approach to addressing the needs of different edge tiers, including 3-node Kubernetes clusters, and how automation and management play a crucial role in deploying and life-cycle managing an edge architecture.
A
Problems will vary based on the disconnected state of the site.
A
How do we deploy to thousands or hundreds of thousands of sites without requiring a person behind each one of these deployments? That wouldn't scale, that wouldn't work, that wouldn't be economically viable. So we need to be able to scale without a huge increase in our administrative functions.
A
And finally, we need consistency because, as we'll see, the edges are multiple, also in the sense of what they're going to be delivering to people. But from the developer's perspective, we shouldn't force them to re-engineer their deployment based on where it is going to be deployed. In fact, what we would like is that you manage a workload running in the cloud exactly the same way as you would manage it running at the edge.
A
So obviously, the very first question that an organization will face when dealing with edge is: should I go with something that enables interoperability between multiple systems, or should I go with something that is completely integrated from top to bottom, so that I don't need to think about it?
A
However, if I take the vertically integrated solution, it might be super simple to get from point A to point B, but nobody knows yet what point C is about, and I may severely limit my ability to adapt the day I know what point C is. So our job, we think, is better served by delivering an environment where you can continuously benefit from open source innovation, from the extensions that the communities are building and that we are integrating back into our products. We really want to be the best platform to offer this homogeneity in diversity.
A
So,
if
you've
seen
some
of
our
publication,
you
may
certainly
have
seen
paul's
citation
if
edge
computing
is
going
to
be
a
realistic
future
for
enterprise
I.t.
It
needs
the
hybrid
cloud
and
open
source
to
thrive.
A
So our approach is not very different from what we've been doing forever from a go-to-market perspective. We have platforms and a very vast portfolio, which are our building blocks for our edge offering. We have a pretty good grasp on how to deal with open source communities; I think we don't need to demonstrate that anymore, and we just need to ensure that the innovation happening in these communities is helped along and brought back into our products.
A
But the key element in this strategy is that, as we do so, we need to enable our partners to jointly deliver on this innovation with us, because we don't run on software alone: we need hardware to run on. And we don't just run infrastructure: we need workloads to be running on top of our infrastructure.
A
So what we are trying to build is an edge platform that meets the multiple needs of the multiple edges that customers may have; one that remains a horizontal platform addressing, in part or in combination, each one of the use cases we can encounter. This is why we decided to offer something that is modular. In essence, we already have what we need to deliver at the public or private cloud level.
A
We need to consolidate on what we've been doing for bare metal to address the new needs of bare metal at the edge, because there you have fewer people to press buttons when you're deploying in the very unfriendly locations that the edge can bring you to. You also need to address smaller and smaller footprints, because you may have to install a gateway, let's say on an oil rig that is battered by bad weather half of the year.
A
You
can
imagine
that
that
gateway
will
not
have
exactly
the
same
specification
as
a
cozy
server
hosted
in
the
data
center
and
being
on
top
of
a
roof
in
a
city
won't
be
any
different
either.
So
we
need
to
provide
modularity
in
our
platform
so
that
we
can
address
all
these
types
of
footprint
meeting
the
requirements
of
the
various
use
cases.
A
So
how
are
we
going
to
do
that
in
terms
of
architecture?
Well,
we
are
very
much
advanced
along
the
way
of
this
architecture,
but
in
order
to
represent
what
the
various
layers
of
the
edge
can
be,
we've
been
using
this
slide
that
we
nicknamed
the
onion
internally.
We
call
it
the
onion,
because
every
time
we
present
it
to
anyone,
we
will
either
have
the
reaction
of
of
oh,
but
we
have
more
layers
than
this
or
oh,
we
have
less
layer
than
this.
A
So it's like an onion: the more you peel it, the more layers you discover. What we try to distinguish here is three big groups of layers. At the center you can see what we've been handling for a long time, using OpenStack or OpenShift: the core and the regional data centers, which we know how to handle pretty well. The number of these sites is not going to grow as we move to the edge.
A
So, in our analysis, what we need is to provide a way to deploy the infrastructure as a very small cluster, and the smallest we could consider a valid edge cluster is three nodes. In order to make it as small as possible, it's three nodes where we combine both the supervisors and the workers on the same nodes. This is something that has actually already shipped with OpenShift 4.5, which was released a couple of weeks ago: you have this in the box, and you are already able to do three-node edge clusters.
A
Then
we
need
a
first
model
of
handling
a
large
number
of
nodes.
In
the
case,
you
have
pretty
good
connectivity,
which
we
call
remote
worker
nodes.
So
basically
you
deploy
a
set
of
supervisor
at
a
central
location,
and
you
have
a
bunch
of
remote
location
where
you
can
deploy
one
or
more
worker
nodes.
A
The first step we took to cover edge requirements was what we called OpenStack Distributed Compute Nodes: OpenStack is deployed in a central data center, and you have the ability to put your Nova compute nodes in various faraway locations. And while this is a great solution, most of the edge use cases that we are seeing today are no longer VM-based but container-based.
A
Then we have regional data centers, which can either, or both actually, host supervisors (the new name for masters) that control a bunch of remote worker nodes, or be other types of regional data centers deploying combined masters and workers.
A
For example, if you have a request to deploy a specific workload on all your edge sites in cities of more than, I don't know, 5,000 inhabitants, you can just label the sites that match your criteria and apply a policy that will have the workload deployed everywhere. ACM also helps you identify what is working and what is not working, and has a lot more functionality in this sense.
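The label-and-policy flow just described can be sketched in plain Python. This is only an illustration of the selection logic; the site data, label names, and population threshold here are invented for the example, and real ACM placements match clusters through Kubernetes label selectors rather than code like this.

```python
# Illustrative sketch of label-based placement: label sites that match a
# criterion, then select deployment targets by label. All names and values
# are hypothetical, not the ACM API.

def label_sites(sites, threshold=5000):
    """Attach a 'city-size: large' label to every site above the threshold."""
    for site in sites:
        if site["population"] > threshold:
            site["labels"]["city-size"] = "large"
    return sites

def placement_matches(site, selector):
    """Return True if the site's labels satisfy the placement selector."""
    return all(site["labels"].get(k) == v for k, v in selector.items())

sites = [
    {"name": "site-a", "population": 12000, "labels": {}},
    {"name": "site-b", "population": 800, "labels": {}},
]
label_sites(sites)
selector = {"city-size": "large"}
targets = [s["name"] for s in sites if placement_matches(s, selector)]
print(targets)  # only the labeled sites receive the workload
```

The point of the pattern is that the policy never enumerates sites by name: adding a new 12,000-inhabitant site later means only labeling it, and the same policy picks it up.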
A
So, as I just told you, the ability to do three-node clusters and the ability to deploy OpenShift in data centers in general have all been here since OCP 4.5.
A
So with this we are already able to solve some real problems, and I'll talk a little bit more about that in a moment.
A
Arriving in 4.6 is the notion of remote worker nodes, so we're talking three months from now before we'll be able to deploy that. There are a few things you may want to know about what we can and cannot do with remote worker nodes, and if you want, I'll be able to answer that in the Q&A. The code is already here, so we now know pretty well how these behave. And finally, sometime in 2021, we'll be delivering the single-node edge server.
A
So
here
we
are
hard
at
building
this,
and
I
have
a
lot
less
details
to
deliver
on
this.
I
can
tell
you
what
we
are
asking
it
to
be.
I
cannot
tell
you
in
details
what
it
is
going
to
be.
A
There
were
the
people
in
the
cto's
office
that
started
a
center
of
excellence
for
ai
and
they've,
been
you
know,
building
a
pretty
impressive
stack,
which
is
done
completely
open
source,
which
we
nicknamed
odh
for
open
data
hub,
which
is
a
completely
open
community.
A
So
we've
been
doing
that
and
the
result
has
been
showing
in
the
first
blueprint
that
we're
going
to
be
delivering,
which
is
the
manufact
manufacturing
use
case
that
wolfram
has
been
working
on.
In
fact,
what
happened
is
that
wolfram
and
his
team
have
been
discussing
with
quite
a
few
large
customers
in
germany
very
involved
in
manufacturing
processes?
A
Sorry,
I
can
give
name,
but
I'd
love.
I
could
wolfram
will
tell
me
when
I
can,
I
hope,
and
in
order
to
make
openshift
more
palatable
to
this
customer,
they
put
together
a
demo
which
they
called
manuela,
which
is
basically
all
the
software
needed
to
accumulate
data
from
hundreds
of
sensors
in
a
factory
and
deal
with
this
data,
and
when
we
discovered
that
they
were
struggling
with
how
the
footprint
to
deploy
that
we
need
at
least
five
nodes,
and
then
we
need
additional
load
for
storage.
A
Even
there
is
something
we
can
do
to
improve
that
and
that's
where
the
this
three
node
cluster
became
something
that
really
simplified
the
hardware
footprint
on
that
use
case.
And
then
we
realized
that,
since
we
had
all
the
ai
ml
work
that
had
been
happening
in
parallel,
maybe
we
could
do
more
than
just
accumulate
the
data
and
represent
it.
Maybe
we
could
add
functionality
such
as,
for
example,
predictive
maintenance,
on
top
of
it
to
on-site,
immediately
inform
the
maintenance
crews
about
the
failure
that
we're
going
to
be
happening
so
stay
tuned.
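To make the predictive-maintenance idea concrete, here is a deliberately minimal sketch: flag a sensor reading when it drifts far from the trailing average. The window size, threshold, and sample data are made up for illustration; the actual Manuela demo would rely on trained AI/ML models, not this heuristic.

```python
# Hypothetical sketch of anomaly flagging on a sensor stream: a reading is
# anomalous when it deviates from the mean of the last few readings by more
# than a fixed threshold. Parameters are illustrative only.
from collections import deque

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return (index, value) pairs far from the trailing window mean."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > threshold:
                anomalies.append((i, value))
        recent.append(value)
    return anomalies

# A vibration trace with one obvious spike at index 6.
vibration = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 9.5, 1.0]
print(detect_anomalies(vibration))
```

Running a model like this on the three-node cluster at the factory, rather than in a central cloud, is what lets the maintenance crew be alerted on-site without a round trip over a possibly disconnected link.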
A
And if you join again tomorrow, Wolfram will tell us a lot more about the history of Manuela. But that, for me, is a great example of how at Red Hat we are trying not only to collect information and cooperate in upstream communities, but also to use that notion of community to collect feedback from our field and our customers to build new solutions. And we hope that this pattern of developing a reusable blueprint based on field engagement, so that we can improve and iterate together on these blueprints, is something that we are going to be repeating successfully much more often.
A
I'm
going
to
get
it
right
at
some
point
and
on
this
I'm
going
to
stop,
stop
reminding
you
once
again
that
you
can
register
for
a
wall
from
presentation
tomorrow
to
get
a
lot
more
details
about
this
manufacturing
use
case
and
wolfram.
I
think
you're
going
to
be
doing
a
demo
right.
B
C
A
Excellent, so a very exciting session coming up. But in the meantime we still have quite a few minutes to talk together, if you have questions or want to tell me that I forgot something in that plan.
D
We do have a question that came into the chat through our YouTube channel. The question is: do we have any edge computing API standards?
A
So
there
is
no
such
thing
as
edge
computing
api
standards.
There
are
vertical
standards
so,
for
example,
in
the
manufacturing
environment,
sensors
like
to
speak
scada
with
the
virus
environment
and
they
like
to
use
a
protocol
called
cam
to
do
their
time
management
where,
in
the
telco
industry
they
love
about
using
tsn
and
they
are
using
acronyms
such
as
cu
and
gus.
Well,
I'm
not
going
to
bore
you.
D
Okay,
I
don't
see
any
other
links.
Oh
here's,
the
question
remote
worker
nodes.
I
think
the
question
is:
when
will
those
be
available?
He
jointly.
A
Well, if you, you know, wake up tomorrow or in one hour with a question, I'm available on Twitter; my handle is nijaba, so feel free to ask me a question or two online.
D
Fantastic. It looks like there's nothing else in the chat, so I think we're good. Again, hopefully everyone can join the call tomorrow for a lot more detail. The session is being recorded, and I've put the link to Wolfram's session in the chat again, so you can click on that. I just wanted to say thank you to everybody for attending, and hopefully we will see you tomorrow.