Description
recorded as part of OpenShift Commons Telco SIG Nov 30 2018
Containers at the Edge
Azhar Sayeed (Red Hat)
Are you able to see my slides? Absolutely, looks good. And is the audio okay? It's perfect. All right, sounds good. So thank you very much, Diane and team. What we will do in this session, over the next 20 minutes or so, is go through what edge means and what containers mean in the context of edge.
So, first of all, some definitions. We've seen various definitions of edge. Edge is certainly the next infrastructure paradigm for delivering applications and services closer to the subscriber, and edge allows efficient processing and delivery of time-sensitive data. People have used other definitions as well: some geographic distribution of compute nodes, distributed computing paradigms largely performed in distributed devices, etcetera. But edge is independent, edge is elastic, and edge is massive scale.
It must be automated to meet that scale characteristic, and the other requirements are resilience and being built using public, private, or hybrid cloud footprints. Now, there's a lot to be said on just one slide in terms of what edge is and what edge means. One way to look at it is that it's about building ultra-reliable experiences for people and objects, when it matters and where it matters.
Many times people struggle with what the edge is: the customer edge, the provider edge, or somewhere in between. It can be anything that sits between the subscriber and the provider core or provider data center.
And if you take that broad definition of edge, then automatically you'll start to look at: how do you solve different application challenges? How do you solve different deployment challenges? How do you solve different network challenges in the context of edge? Because the footprints will vary, the sizes will vary, and the deployment models and context will vary.
So let's start there. Now, we know that in the telco space a provider network can typically be classified, or sub-classified, into many different tiers.
Not all of these tiers apply to every single provider in the world, because geographically some of these tiers are merged in smaller countries, while in countries like the US, Canada, and Australia you might actually have a much more enhanced set of tiers. But certainly there is the access network, the access aggregation network, and the termination infrastructure.
C
There
is,
you
know
the
last
mile,
copper
fixed
wireless
in
terms
of
the
wireless
side
of
the
business
or
or
the
copper
cable
cheap
on
optical
stuff,
and
then
there
is
the
customer
CPE,
whether
it's
a
residential
CPE
or
a
business
CPE,
and
then
the
actual
customer
network
itself.
If
you
look
at
that
tearing
or
that
categorization
model
I
mean
this
is
a
work
that
was
done
by
between
me
and
posi
and
accredit
to
full
credit
to
posi
for
actually
coming
up
with
this
kind
of
a
tiering
model.
It then allows us to segment what is applicable in each one of these tiers. And once you understand what is applicable in each one of these tiers, then you can understand the type of applications, the type of network functions, and the type of capabilities that are required at those tiers, and then try to solve the problem of what is required from an infrastructure perspective, what is required from a networking perspective, what is required from a containers perspective, and so on and so forth.
So you can keep double-clicking into each level and then define that level of detail in the set of requirements. Everybody talks about edge. Here's an example of a slide presented by China Mobile at the keynote in Amsterdam. If you look at this in terms of the deployment model, from the national and district-level data centers down to the base station (and, because this is China Mobile, it's talking about radio infrastructure), the number of servers that they need to deploy this whole virtualized RAN with edge
compute is about 1.25 million servers. Take the number of sites, the number of locations, and multiply each location by the total number that they need by about 2025; these are their estimates. The slide also catalogs the distances at which these reside from the base station or from the user, and the latency requirements for all of those.
Again, credit to my friend Fozzie here for taking pains to catalogue all of these details. The important point, what I really want to show through this slide, is not necessarily to dig into the why or the what, but to tell you that, regardless of which way you cut it, it's going to be large. If you take this number and apply it to the number of cell sites, the number of locations, or the number of points for AT&T or Verizon here, that is still a pretty large number; that is still two hundred thousand, three hundred thousand, whatever that number comes to.
It turned out to be more than 1.25 million in terms of the total number of servers that are required. But what's important in that context of scale is that when you have those locations deployed with compute infrastructure, then functionality from the core data center and from the regional data center starts to migrate towards those locations.
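To put that scale point in concrete terms, here is a back-of-envelope sketch; the tier names and per-site server counts below are invented for illustration, not China Mobile's actual figures, but the arithmetic shows how a nationwide tiering multiplies out to a seven-figure fleet.

```python
# Back-of-envelope edge scale estimate.
# Tier names and per-site counts are hypothetical, for illustration only.
tiers = {
    "regional_dc":   {"sites": 30,     "servers_per_site": 2000},
    "district_dc":   {"sites": 300,    "servers_per_site": 200},
    "access_office": {"sites": 3000,   "servers_per_site": 20},
    "cell_site":     {"sites": 400000, "servers_per_site": 3},
}

# Total fleet = sum over tiers of (sites x servers per site)
total = sum(t["sites"] * t["servers_per_site"] for t in tiers.values())
print(f"estimated servers: {total:,}")  # estimated servers: 1,380,000
```

Even with made-up inputs, the cell-site tier dominates: a few servers at hundreds of thousands of locations is what pushes the total past the million mark.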
That functionality starts to migrate because you are providing better experiences; remember, we started with the definition that edge is about providing better experiences. When that functionality starts to migrate towards the edge, then you need at that edge infrastructure, at that edge location, all types of capabilities: the ability to manage virtual machines, the ability to manage containers, the ability to provide an assurance framework. You need a view of where those workloads are placed and how they are placed.
What's the assurance framework? What's the overlay network and control, or underlay network and control, for all of those? Now, you just saw a presentation from Intel a moment ago about some of the networking challenges specifically, and about how Red Hat and Intel are working together to solve some of those networking challenges. But there is more than just networking challenges; we also need things like visibility into the nodes, in terms of how do you place functionality and how do you distribute functionality?
Now, when functionality starts to migrate, whether that functionality was in the core data center running in Kubernetes, or running in OpenStack on virtual machines, it now has to be able to run at that same edge location and be managed with a single pane of glass across both of those locations. So I'm just putting that out there in terms of the scale and size, and in terms of the complexity of how you manage it. That brings me to the next question:
what are the use cases being deployed at the edge? What are people considering? If you take a pure telco view, a telco delivers enterprise, residential, or mobile types of services. Today those services are terminated on a central office. We now have a central office blueprint based on OpenStack; that central office blueprint now needs to be converted to a container blueprint as well, instead of OpenStack or maybe both, because you'll now have some of those network functions
moving into containers. Then, when you deploy pure edge compute for other traditional applications, IT applications co-resident with these network functions, those are running as microservices in containers, and so you need to be able to operate those. In the mobile case, for example, you have this vRAN, the virtualized RAN; now that virtualized RAN is becoming a containerized RAN.
Then there are other interesting areas, such as video optimization, that need to happen locally in that edge compute location. Then you have specialized deployment models such as stadiums or arenas or big public venues, and the technology itself is moving from an NFV perspective to containers; we'll actually look at some of that in a little bit of detail. You can also take things in the IoT space.
There, the round-trip delay times plus the complete processing time at a centralized data center are probably not sufficient in terms of how quickly you need to take action in those types of situations.
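As a rough illustration of that latency argument (the numbers below are hypothetical, not from the talk), compare a control-loop deadline against a far-away data center versus a nearby edge site:

```python
# Why centralized processing can miss a control deadline.
# All numbers are illustrative assumptions.
def total_delay_ms(one_way_ms: float, processing_ms: float) -> float:
    """Round-trip network delay plus processing time for one control action."""
    return 2 * one_way_ms + processing_ms

deadline_ms = 20  # e.g. a hypothetical industrial-control actuation budget

central = total_delay_ms(one_way_ms=40, processing_ms=10)  # distant core DC
edge    = total_delay_ms(one_way_ms=2,  processing_ms=10)  # nearby edge site

print(central, central <= deadline_ms)  # 90 False -> misses the deadline
print(edge, edge <= deadline_ms)        # 14 True  -> meets the deadline
```

The processing time is identical in both cases; it is purely the round trip that breaks the budget, which is the argument for placing the function at the edge.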
Then there are traditional telco services, which are your specific CPE, managed CPE, VNF-as-a-service, and so on. Today some of these are delivered using dedicated devices, the managed CPE stuff; tomorrow, or pretty quickly, they are
moving to a virtualized model, and pretty quickly we'll move to a containerized model as well. What you will see, again, is the extensive thinking being put together by telcos to say: how should I go about transforming my existing set of services into this new model, and how do I go about deploying those? We can talk about each one of these line items for a very long time, but what I want to convey through this particular list is a mix.
The mix being deployed at the edge compute layer is both a set of network functions and a set of traditional applications, the IT applications that will run at the edge. Which means that whatever was typically running in a centralized data center now needs to be running at the edge, whether that applies to the virtual machine context, and it also applies to the container context.
Now we'll just take a quick drill-down into 5G, because somebody brought up 5G here on the topic, so I think it's relevant. 5G is seen as an enabler technology for new use cases. It's also seen as leveling the playing field between incumbents and new entrants into the market.
You wouldn't believe it, but there are a lot of providers, people around the world, speculating and jumping into the 5G spectrum auctions and getting into that field, because it seems attractive to them. Why?
Because it resolves some of the last-mile access problems. It provides instant subscriber access: if you drop a high-speed modem into a residence or into an enterprise customer, you don't need to worry about whether that user has DSL or fiber or cable or any other connection. You can potentially get a high-speed connection to the user by just dropping a modem into their house or their premises. So it resolves some of those last-mile challenges, and that's why fixed mobile broadband,
that's the term people use, is a very attractive service option based on 5G for a lot of providers. Then, of course, because of the low-latency capability, and because of massive-scale machine-type communications, new services are possible through these enhanced use cases for 5G. Some of the deployment models also lend themselves to deploying better edge compute capability with 5G, which again means better user experiences.
Overall, however, one thing that's true is that it does require huge investment in terms of the infrastructure in the RAN space, and from a mass-scale perspective we are probably still 18 to 24 months away. Phones, or user equipment, are beginning to be announced,
with availability sometime later next year, so then you can see some spotty deployments coming up. How long did it take to transition from 3G to LTE? Think about that. How long will it take us to transition from LTE to 5G? We can all speculate in terms of how long it takes and how much investment they'll make. Again, just because this is use-case centric, I wanted to point out that you can take these specialized use cases,
applications, database requirements, all of those things, and categorize them into these three broad categories of 5G: enhanced mobile broadband, massive machine-type communication, or ultra-reliable low-latency communication. And then you will see how each service or each application falls into one of these three buckets and can then be deployed from a telco perspective.
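A toy sketch of that categorization might look like the following; the thresholds here are invented for illustration and are not 3GPP's definitions.

```python
# Illustrative triage of a service into the three 5G buckets mentioned
# above: eMBB, mMTC, URLLC. Thresholds are hypothetical assumptions.
def bucket(latency_ms: float, bandwidth_mbps: float, devices_per_km2: int) -> str:
    if latency_ms <= 10:
        return "URLLC"   # ultra-reliable low-latency communication
    if devices_per_km2 >= 100_000:
        return "mMTC"    # massive machine-type communication
    if bandwidth_mbps >= 100:
        return "eMBB"    # enhanced mobile broadband
    return "best-effort"

print(bucket(5, 1, 100))             # URLLC (e.g. factory robot control)
print(bucket(500, 0.01, 1_000_000))  # mMTC  (e.g. metering sensors)
print(bucket(50, 400, 100))          # eMBB  (e.g. 4K video streaming)
```

The point of the exercise is that once a service lands in a bucket, the bucket largely dictates which tier of the network it must be deployed at.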
So if you take the 5G architecture, you're splitting the functionality that was in the baseband unit, probably running as a virtualized baseband unit, into what we call centralized and distributed units. These centralized and distributed units are certainly candidates where you can run containers. You can run this code in containers today; in the trials and the conversations we've been having internally, they currently use virtual machines.
We showed that at the Open Networking Summit in Amsterdam as a demo POC, but there is a lot of talk and a lot of conversation around how these implementations can move to containers pretty fast. Not only that: with these infrastructure components moving completely to containers, there are standard compute nodes being deployed right next to those locations. What we highlight in this picture as edge compute, right here inside these red circles, is your generic compute infrastructure running your traditional applications.
It could be a caching node. It could be your traditional localized database. It could be your traditional IT application, something that does data analytics with the data that's coming back, on this particular infrastructure right there locally rather than at a core data center. Especially, that edge compute node now allows you to shunt traffic locally with 5G, where you place the user plane function over there, and then you can do a whole lot of things with different kinds of services and experiment with that.
So these are microservices that are already available as part of a commercial catalog. These microservices are common, so then all they need to worry about is how to leverage these microservices and how to build the core functionality that's required by the 5G components, like the access management function or the session management function, the AMF/SMF stuff. Those are the service-specific microservices, and then they can have a set of common networking and routing services and load-balancing microservices as well.
That leverages all of this. So now, what was previously packaged all together as a single virtual machine, or a set of virtual machines as part of a VNF, is now completely refactored and broken into all of these microservices, and then these microservices need to be managed as a cluster, as a group, and then deployed in the various different thousands of locations that we need. That brings me to my last few slides, which is: containerizing
these things puts a lot of requirements on the infrastructure. Kubernetes, as a container management capability, needs to be aware of certain things. Then, how do you bring about the other functionality, through custom resources, through CRDs, through custom controllers, through service mesh and network mesh, through whatever other things we need to be able to add, to bring about the similar functionality that you have become a little bit used to in the VM world, the virtual machine world, and bring that now to the container world?
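The custom-controller idea mentioned here can be sketched as a reconcile loop. This toy version is plain Python, not the real Kubernetes client API, and the workload names are made up; it just diffs desired state against observed state, which is the core of the controller pattern.

```python
# Toy illustration of the custom-controller (reconcile) pattern:
# compute the actions needed to drive observed state toward desired state.
def reconcile(desired: dict, observed: dict) -> list:
    """Return (verb, name, spec) actions to make observed match desired."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Hypothetical workloads at one edge site.
desired  = {"upf": {"replicas": 3}, "amf": {"replicas": 2}}
observed = {"upf": {"replicas": 1}, "old-vnf": {"replicas": 1}}
for action in reconcile(desired, observed):
    print(action)
```

A real controller runs this diff continuously against the cluster's API server; the key design property is that it is level-triggered, so replaying it after a crash converges to the same result.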
So multi-site is incredibly important with respect to edge: provisioning of those multiple sites, provisioning of those remote nodes, the zero-touch models, the single-pane-of-glass orchestrator model in terms of how you place that functionality and where. So the stuff that was discussed earlier by Intel in terms of node feature discovery, what Frank just answered,
these types of features become incredibly important. Identity and security management, federation capability, because now you have thousands of these sites and they need to work together as a single cluster. You need to have a common identity across all of those clusters, a common way of orchestrating and managing those, and HA models.
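Node feature discovery feeds exactly this kind of placement decision. Here is a minimal sketch of filtering sites by required hardware features; the feature labels and site names are hypothetical, and real schedulers consume these as node labels rather than Python sets.

```python
# Sketch of feature-gated placement: only schedule a workload onto nodes
# whose discovered feature set covers the workload's requirements.
# Feature names and node names are invented for illustration.
def eligible_nodes(nodes: dict, required: set) -> list:
    """Nodes whose feature labels include everything the workload needs."""
    return sorted(n for n, feats in nodes.items() if required <= feats)

nodes = {
    "edge-site-1": {"sriov", "dpdk", "realtime-kernel"},
    "edge-site-2": {"sriov"},
    "core-dc-1":   {"dpdk", "gpu"},
}
print(eligible_nodes(nodes, {"sriov", "dpdk"}))  # ['edge-site-1']
```

With thousands of heterogeneous edge sites, automating this discovery is what makes zero-touch placement possible at all.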
Do we need a Kubernetes master node on each one of the sites? Do we need some sort of headless operation model on those sites? And when you restart these containers, can you restart a single container?
If you remember, in my previous slide a single VNF got refactored into all of these different microservices; maybe you need to restart them differently, in terms of bundles and dependency trees. How do you create those dependency trees and worry about the restartability of those application bundles?
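One way to sketch those dependency trees is a topological sort over microservice restart dependencies; the service names and edges below are made up for illustration.

```python
# Restart ordering for a refactored VNF: model the microservices as a
# dependency graph and restart in topological order, so every service
# comes up after the services it depends on.
from graphlib import TopologicalSorter

# Map of service -> services it depends on ("restart these first").
# Names and edges are hypothetical.
deps = {
    "subscriber-db": set(),
    "routing":       {"subscriber-db"},
    "session-mgmt":  {"subscriber-db", "routing"},
    "load-balancer": {"session-mgmt"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)
```

The same structure answers the speaker's question in reverse: to restart only one service, walk the graph the other way and bounce just that service and its dependents, not the whole bundle.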
Multi-tenancy is incredibly important for telco, and often we ignore multi-tenancy because we are typically looking at deploying containers inside a single enterprise.
In the telco space, you're now talking about taking a service that needs to be offered to thousands of subscribers and thousands of businesses, and you cannot have an application that shares data across those different subscribers. So you may have copies of applications running in different tenants, and then those tenants need to get their own view of what's going on, from their application perspective and from their infrastructure perspective.
Networking requirements, I think, have already been talked about enough. There's more in the context of not just Multus, but also things like native v6, native v6 everywhere, no NATing, no v4-to-v6 conversions, and that becomes really, really important. Then data-plane acceleration; compute in terms of real-time capabilities, thread-pinning and CPU-pinning models; I think Intel did go through some of those things that are coming down. How do you patch operating systems?
Do you deal with this as a black-box approach, versus an incremental patching model? If you have 1 million nodes, how can you do incremental patching across those 1 million nodes? And on and on and on; I can keep going with respect to this set of requirements. Some of these things are being worked on; some of these things are yet to be worked on, I think, through this community.
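The incremental-patching question can be sketched as canary-style waves: patch a small batch, gate on health, then grow the batch. The exponential batch-sizing policy below is a hypothetical choice, not a prescribed one.

```python
# Sketch of incremental patching across a very large fleet:
# exponentially growing waves with a health gate between them.
# The sizing policy (start at 10, grow 10x) is an assumption.
def patch_waves(total_nodes: int, first_batch: int = 10, factor: int = 10):
    """Yield wave sizes until the whole fleet is covered."""
    done, batch = 0, first_batch
    while done < total_nodes:
        size = min(batch, total_nodes - done)
        yield size  # patch this wave, then verify health before the next
        done += size
        batch *= factor

waves = list(patch_waves(1_000_000))
print(waves)  # [10, 100, 1000, 10000, 100000, 888890]
```

The appeal is that a bad patch is caught while only tens or hundreds of nodes are affected, yet the whole million-node fleet is still covered in a handful of waves.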
What I'm interested in is collaborating, working, defining, and refining this set of things, so that the different groups of people involved in this get the right set of things to go build and develop, both at the community level and at the Red Hat level, and participate in that. So that's kind of a quick summary of where things are with respect to edge and containers.
Actually, both: people are looking for dual stack, and people are looking for just native v6 across the board, no dual stack. If you go to Japan and you talk to any of the telcos in Japan, they'll say: give me everything native v6 today. There are no more v4 addresses being allocated by IANA; they're done.
Okay, because we've seen one of the customers asking for both in the same use case, so both IPv4 and IPv6. That's what I was thinking when you say native IPv6: I want to just understand whether all these customers, looking forward, will say they want both IPv4 and IPv6, right? So that's what I wanted to properly understand, where the industry is heading towards.