From YouTube: OpenShift Commons Telco SIG Nov 30 2018 Full
Description
Telco on OpenShift SIG Meeting Nov 30 2018
Agenda:
Full Meeting Recording
Co-chairs: Diane Mueller and Paul Lancaster (Red Hat)
SIG Charter/Mission – Diane Mueller (Red Hat)
Introductions
What is new in Kubernetes Networking? Kural Ramakrishnan (Intel)
Containers at the Edge Azhar Sayeed (Red Hat)
Next Meeting: 4th Friday of each month at 8:00 am Pacific - Jan 25th
https://commons.openshift.org/events.html#event|telco-sig-2|907
B
Morning. I'm just to let you know, Diane, I did promote this widely internally on our NFV boards and things like that. I also asked folks to promote it with partners and customers, so I'm hoping that we get quite a few folks. I do see some very key Red Hatters attending, in terms of both our CEO's office as well as our telco architecture team. So that's great stuff.
C
So in the chat is a link to a HackMD doc, which is what I use for meeting notes. If you can add your name to the list of attendees, I'll try and do it while we're chatting; you don't have to, I should be able to get there. Well, we'll get started in a few minutes. We'll give everybody, let's say, two or three more minutes just to join up, but I'm really pleased that Paul is the instigator for doing this.
C
One of the things: I'm just going to get right into the SIG charter and mission, because I'd like to clarify the distinction between a Kubernetes technical working SIG and an OpenShift Commons SIG.
C
In a Kubernetes SIG, it's about how to add something to Kubernetes, or how to integrate some new technology into Kubernetes, or things along that line, and there are tons of Kubernetes SIGs. These Commons SIGs really are about sharing lessons learned, best practices, your use cases, and getting that fire hose of information about what's coming down the pike and what people are passionate about in new technology. So I think that's a pretty clear distinction.
C
So if you're looking to contribute code to some new project inside of Kubernetes, or around Kubernetes or around OpenShift, we can find you those places too, but this is really kind of a conversational thing. So I just wanted to make that clear; it's really about connecting peers to peers. Though we have a lot of Red Hatters on here, sometimes even at Red Hat, with 12,000 employees, we need to connect to each other.
C
So that said, I'm just going to introduce myself. I am the director of Community Development for Red Hat's cloud platform, which means all kinds of different things, and I wear many hats. So please do ask me if you are interested in operators, or machine learning on OpenShift.
C
Any of the interesting little things on the side. I do a lot of the cross-community collaboration with the upstream project, so I'm a good touch point for those things. Also, what I'll probably do is invite everybody who's on here to join a Slack channel for telco, so that you can have telco conversations. I run the Slack for OpenShift Commons, so that you can continue these conversations in the interim. And with that I'm gonna...
F
Brad Appleman here. I work in a research lab at Booz Allen; we're telco focused. Historically I'm a telecommunications engineer, somewhat new to the containerization environment, but we're currently running an open EPC on an OpenStack platform. We're a partner with Red Hat, and we're looking at possibly moving our open EPC solution from OpenStack into a containerized environment. But I'm more of a use case engineer, not a software developer. So...
B
Because one of the things that we're looking at is that there's some open source software in the community, like, for instance, the Clearwater IMS software, that a lot of folks are working on containerizing right now, and it's been containerized to some extent. But it's just always interesting to see what open source software folks could potentially use and, frankly, run on OpenShift for telco solutions.
N
So my name is Kural Ramakrishnan, and I'm working in the cloud native orchestration team at Intel. We are basically from the Data Center Group within Intel, and we basically work along with the customers, and most of our use cases go to market. We are working with Feng and team from the CTO office, so we usually work closely and we are like one close team together. So we will talk about this one in my presentation here. Perfect.
C
We'll put you on the agenda for the next meeting, so that would be great. So I want to get started, because we have two talks that I want to do, and we are recording everything. So the whole video of the entire meeting will be posted on a YouTube channel, and I'll send it to the mailing list and put it on the calendar for this meeting.
C
So you can find it. So without any further ado: I'm sure there's probably a few other people who will join us as we go, so I will add them to the agenda. Kural, how about if you share your screen? I'll stop sharing mine. Once we set it up, start your "What is new in Kubernetes networking and why it matters to telco people."
N
So hi guys, I am working in the DCG group within Intel, and we are working in close partnership with the team from the CTO office. I want to say this whole presentation is from the contributions of both the Intel and Red Hat teams, and that's why I put input from both teams, including Billy. So, let's see what is new in Kubernetes networking. Before starting into the topics, as is traditional, everyone starts with what cloud native is, so I am also going to start with: what is cloud native?
N
My simple approach to what cloud native is: cloud native is the approach of building and running applications that fully exploit the cloud computing delivery model. Cloud native is about how you create your application, deploy it, and maintain it; it's not about where you place the application. The application could be placed in the edge cloud or in the data center cloud, and the cloud delivery model is appropriate for both public cloud and private cloud segments. And then there are the key attributes.
N
Some of the key attributes are that it should be stateless, resilient, and self-healing. So this is the basic information I want to begin with on cloud native. Next, I want to talk a little bit about the network transformation from Intel's perspective. From Intel's perspective, you can see the broader network transformation toward modernized networks, and within Intel we are virtualizing to either VMs or containers. So we don't go into...
N
It's only containers in some cases; in other cases it's kind of a hybrid model where they want to have both VMs and containers, basically for the brownfield deployments, and not all the customers are willing to transfer to cloud native. That's why I want to reiterate that we have a mixture of customers who want to work in both cloud native and hybrid modes. So the interesting point I want to point out here is that we are in the cloud-ready stage.
N
So let's jump into the actual technical stuff. We will talk here about what container networking is and what the various deployment models are. This is a slide which is kind of picked from the OPNFV architecture. You can see there are various VNF applications. We are working closely with customers who have containerized their VNF applications, and we are working with various technologies like OVS virtual switches, FD.io, and DPDK.
N
In the community we have the CNI community, and we are also working with NFV orchestration projects like MANO and OSM, covering both infrastructure orchestration and service orchestration as well. So what are the things customers are really concerned about when we talk with them? The deployment models that come up whenever a customer talks are both bare metal models and unified infrastructure models.
N
When you look into the bare metal deployment models, the MANO or OSM acts as the service orchestration, Kubernetes is the kind of de facto orchestration engine, and it fits with Docker as the container runtime. The thing is that Kubernetes has to manage all of compute, network, and storage. And then we talk with a few customers who want the brownfield model: they have big investments already in VM-based infrastructure.
N
They would like to have a model which is like a hybrid, or unified, infrastructure model. Our team is most of the time working with customers on bare metal, and we are also working with customers on unified and hybrid models. But today's topic will be covering mostly the topics related to bare metal, actually. So the first thing: as I said in the previous slide, we took Kubernetes as the de facto orchestration engine at the time.
N
It's a very famous orchestration engine now, but we started this project around three years ago, when Kubernetes was just getting started. There were multiple key challenges which we saw in Kubernetes. Starting with the comms service provider segment customers: the first thing they told us is that when they want to run their VNF application, it has to support multiple network interfaces, and currently Kubernetes doesn't support that. And there are a lot of use cases that come up related to performance, high availability of networking, and how you are going to do CPU pinning and NUMA alignment; there are a lot of issues. So we did a lot of brainstorming, talking with the customers and also with the open source community, and we came up with the idea of grouping them. So we grouped them: the multi-networking goes to the Kubernetes networking group.
N
For how we are going to accelerate the packet, we developed data plane acceleration plugins, and we put all the resource allocation and management things under the EPA feature. Whoever is familiar with OpenStack might have heard the term EPA: it's called Enhanced Platform Awareness, so that you can manage your resources via service orchestration or through the resource orchestration. And the last one is telemetry, which is also an important topic to discuss for service assurance.
N
So then we developed multiple plugins in collaboration with open source partners and Red Hat, basically. We developed Multus, we developed multiple data plane accelerations, we developed the userspace CNI, the SR-IOV CNI, and the Bond CNI. Today we will be discussing basically the networking side, so I will be covering the topics of Multus, the userspace CNI, the SR-IOV CNI, and also the SR-IOV device plugin. If you guys have any questions, you can interrupt me and ask, actually. So, yes.
N
What we provide is kind of a pluggable model. So in this case, if you want to use userspace networking, say you want to use VPP or OVS switches: currently Kubernetes doesn't support userspace networking, so in this case you have to use Multus, and along with Multus you have to use the userspace CNI. It's kind of a pluggable model, actually, for most of the models which we discuss here. Does that answer your question?
N
So let's jump into the multiple network interface part. As I said in the previous slide, the problem statement is that Kubernetes doesn't support multiple interfaces; it supports only one single interface. In the case of VNF use cases, you should have more than one at a minimum. Even if you ask a VNF vendor to develop a VNF application, they will say that at a minimum they have to develop the VNF application with two interfaces: one for the data plane and one for the control plane.
N
So they told us, look, we need this feature in Kubernetes; otherwise it's very difficult for us to develop a VNF application and run it in Kubernetes. So the first project which we developed within Intel was Multus, and the use case, as I said, is the functional separation of control plane and data plane, and it also provides different network service level agreements. Basically, one of the key requirements we heard when we talked with the customers: they said they don't want both the control plane and the data plane on the same interface, for various security reasons. This project was started in 2016, and Red Hat has been a close partner along with us on this journey for around one and a half years, pushing this in the Kubernetes community. It started with developing native Kubernetes support for multi-networking, and then, because of a lot of effort from Red Hat, basically from Dan Williams and Feng's team...
N
We were able to create a working group, the Kubernetes networking working group, which is basically focused on creating a de facto standard for multiple networks in Kubernetes. This working group does not focus closely on the implementation part; it's basically about how we can create a standard in Kubernetes so that we can support multiple networks.
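The multi-network pattern described above can be sketched concretely. This is an illustrative sketch, not the working group's normative spec: a Multus-style NetworkAttachmentDefinition names a secondary network (with the delegate CNI config embedded as JSON), and a pod opts in via an annotation while keeping the default cluster network for the control plane. The subnet and names here are invented for illustration.

```python
# Sketch: a secondary network attachment and the pod annotation that
# requests it. The CRD group and annotation key follow the Multus
# convention; the concrete config values are illustrative assumptions.
import json

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "dataplane-net"},
    "spec": {
        # The CNI configuration for the delegate plugin, stored as JSON.
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "sriov",  # delegate plugin handling the data plane
            "ipam": {"type": "host-local", "subnet": "10.10.0.0/24"},
        })
    },
}

# The pod asks for the extra interface by name; its first interface is
# still provided by the cluster's default network (the control plane).
pod_annotation = {"k8s.v1.cni.cncf.io/networks": "dataplane-net"}

config = json.loads(net_attach_def["spec"]["config"])
print(config["type"], "->", pod_annotation["k8s.v1.cni.cncf.io/networks"])
```

The point of the split is visible in the structure: the cluster network stays untouched, and each additional interface is just another named attachment the meta-plugin delegates to.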
N
The next one is the userspace CNI. As I said in my previous talks, Kubernetes doesn't support userspace networking at all; it basically works on the kernel networking. For userspace networking, the major use case is performance; it's basically software acceleration. In this case, we developed a plugin which is called the userspace CNI, with Red Hat; basically Billy is one of the leading engineers.
N
Billy is leading this particular userspace CNI, along with other developers from Nokia and Intel, and we developed this userspace CNI. The project is still in the development phase; we are working along with various partners to make sure it is going to be a stable release, and we're also planning to make it a kind of pluggable model. We will discuss in an upcoming slide what the device plugin is and how it could help our networking, actually.
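To make the userspace idea concrete, here is a hypothetical sketch of what a network configuration for a userspace data path looks like. The field names are illustrative (not the plugin's exact schema): instead of a kernel veth pair, the pod is handed a vhost-user socket wired into a software switch such as OVS-DPDK or VPP, so packets never traverse the kernel stack.

```python
# Illustrative CNI network configuration for a userspace data path.
# Host side and container side each name a switch engine and an
# interface type; "vhostuser" means the app in the pod opens the
# shared-memory socket directly (e.g. via DPDK's virtio-user PMD).
cni_conf = {
    "cniVersion": "0.3.1",
    "name": "userspace-net",
    "type": "userspace",  # the userspace CNI binary to invoke
    "host": {"engine": "ovs-dpdk", "iftype": "vhostuser"},
    "container": {"engine": "ovs-dpdk", "iftype": "vhostuser"},
}

# A kernel-path config would instead use a veth/bridge type here;
# the contrast is the whole point of the userspace plugin.
print(cni_conf["type"], cni_conf["container"]["iftype"])
```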
N
The next one, which we are working on right now, is the SR-IOV CNI plugin. The problem statement in this case is that Kubernetes doesn't support an SR-IOV networking interface. SR-IOV networking is basically a kind of hardware acceleration. In Kubernetes there is no physical platform resource isolation and no guarantee of network I/O performance. For those familiar with SR-IOV: there is a standard, the PCI spec, where you can divide your PF, the physical function, into multiple VFs, and each VF will have a guaranteed bandwidth allocation.
N
So basically, this SR-IOV CNI, if you look at the figure, pulls the VF from the host network namespace and moves it into the container network namespace. It's a very simple concept, and we have elaborated this concept to make it work with DPDK. DPDK is one of the technologies developed at Intel, called the Data Plane Development Kit; basically it bypasses the kernel so that the performance is much higher.
N
So, as I said, about the device plugin: what is the device plugin? Most of the people in this meeting may be familiar with it. You might have heard from Nvidia that they're developing a GPU device plugin, and there is an FPGA device plugin. But how could a device plugin help us with networking, since the two are different concepts? Before jumping in, let's understand what a device plugin means.
N
Within Intel we have various devices, like QAT cards, NICs, and storage cards as well. These are device-specific cards; if you want to write support for all of these things into Kubernetes natively, it is nearly impossible. So the resource management working group came up with a wonderful concept: the device plugin concept.
N
With it you can manage these devices: you can advertise these devices to Kubernetes, and Kubernetes is able to keep the bookkeeping of these resources, and it also helps you with the scheduling and selecting of those devices. But there is a caveat in this design: it doesn't do the allocation part for networking by itself. If you are handling networking resources, you have to allocate these resources when the pod gets scheduled, and you have to do the deallocation as well. So we came up with an idea.
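The advertise-then-allocate bookkeeping described above can be modeled in a few lines. This is a sketch of the idea only, not the kubelet's gRPC device plugin API: a plugin reports a pool of opaque device IDs under an extended resource name, the node's allocatable count is just the pool size, and allocation hands out a concrete ID when a pod lands. The resource name and PCI addresses are invented for illustration.

```python
# Minimal model of device-plugin bookkeeping: advertise a counted pool
# of devices, then hand out concrete device IDs on allocation.
class DevicePool:
    def __init__(self, resource_name, device_ids):
        self.resource_name = resource_name
        self.free = set(device_ids)
        self.used = {}  # pod name -> device id

    def advertise(self):
        # What the node would report as allocatable extended resources.
        return {self.resource_name: len(self.free)}

    def allocate(self, pod):
        # Called when a pod that requested this resource is scheduled.
        if not self.free:
            raise RuntimeError("no devices left on this node")
        dev = self.free.pop()
        self.used[pod] = dev
        return dev

pool = DevicePool("intel.com/sriov_vf", ["0000:03:02.0", "0000:03:02.1"])
print(pool.advertise())          # {'intel.com/sriov_vf': 2}
vf = pool.allocate("vnf-pod-1")  # a concrete VF PCI address
print(pool.advertise())          # {'intel.com/sriov_vf': 1}
```

The caveat from the talk shows up here too: for networking devices, handing back the ID is not enough; something (the CNI) still has to plumb that VF into the pod's network namespace, and return it to the pool on teardown.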
N
Okay, we have the SR-IOV CNI, but the trouble with the SR-IOV CNI alone is that there is no mechanism to expose how many VFs we have out there. The Kubernetes scheduler has no idea whether you have an SR-IOV NIC in your server or not. If you have multiple clusters and you have to support, say, a thousand nodes, it's very difficult to have a solution on the basis of the SR-IOV CNI alone. And one more thing is the NUMA awareness.
N
Early on, some people wanted to do networking in the device plugin, and a few people wanted to do networking in the CNI, but we thought that both of these solutions should work together, because the Kubernetes design, at the end of the day, supplies pluggable models, and they should work together. So in this case we developed the SR-IOV device plugin to work along with the SR-IOV CNI. This particular device plugin acts like the intelligence which manages, or knows, the SR-IOV resources inside the server; it advertises those resources to Kubernetes and it allocates those resources.
N
So the SR-IOV device plugin is kind of the intelligent guy who knows what to do and when to do it, and the CNI is the guy who just gets the PCI information from the device plugin. In this case we are using Multus, actually, to get the information of the PCI address from the kubelet, and it passes this information to the CNI. So the CNI is the guy who just does the plumbing work; it doesn't have any intelligence at all.
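How the two halves meet is easiest to see in a pod spec. This is an illustrative sketch (resource and network names are assumptions, not a fixed convention): the device plugin's extended resource name appears under the container's resource requests and limits, so the scheduler and kubelet do the bookkeeping, while the Multus annotation asks the SR-IOV CNI to plumb whichever VF was allocated.

```python
# Sketch of a pod that consumes one SR-IOV VF: the extended resource
# drives scheduling/allocation; the network annotation drives plumbing.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "vnf-dataplane",
        # Secondary network handled by the SR-IOV CNI via Multus.
        "annotations": {"k8s.v1.cni.cncf.io/networks": "sriov-net"},
    },
    "spec": {
        "containers": [{
            "name": "vnf",
            "image": "example/vnf:latest",  # hypothetical image
            "resources": {
                # Extended resource advertised by the device plugin.
                "requests": {"intel.com/sriov_vf": "1"},
                "limits": {"intel.com/sriov_vf": "1"},
            },
        }],
    },
}
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

If either half is missing, the split of responsibilities breaks: without the resource request the scheduler may place the pod on a node with no free VF; without the annotation the allocated VF never gets moved into the pod's namespace.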
N
All these projects are in close collaboration with Red Hat, and we have a technical lead for each project: we have leads for the Multus project; for the userspace CNI we have Billy, who is the technical lead; and for the SR-IOV device plugin we have an engineer who is working closely with another engineer called Abdul. So all these projects are a close collaboration between Intel and Red Hat.
N
We are also working with Red Hat on a smart NIC device plugin; it's kind of work on doing offloading with OVS switches. It's still in the development phase, so once we open source the design proposal, you will have more information on how it's working. It's basically a version of the SR-IOV device plugin, and it works along with the userspace CNI, actually, for the offloading of VFs with virtual switches. But we are currently working on that.
P
So again, I think there should be some level of discovery of what the device capabilities are with respect to the offloads, and obviously the scheduler should be able to understand what devices are on the node and what their capabilities are, and then I would expect that the device has to be properly configured.
P
While it might have the ability to do multiple different offloads, I think it will be configured for whatever specific offloads you would like to do, no more and no less, and that will be part of the configuration of the nodes, and plugins will have to comprehend that those are the available offloads for it.
D
We are working on node feature discovery, which does what you described, right? We should be able to go to a host, a particular hardware host, do feature discovery, label the nodes appropriately, and run different device plugins as needed. But that's kind of the long-term vision that we have as well. Yeah.
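The long-term flow just described can be sketched as a tiny matching exercise: feature discovery labels each node with its hardware capabilities, and scheduling a capability-hungry pod reduces to matching its node selector against those labels. The label keys and node names below are invented for illustration, not node feature discovery's actual output format.

```python
# Sketch: hardware-feature labels per node, and selector matching.
nodes = {
    "worker-1": {"feature/sriov": "true", "feature/dpdk": "true"},
    "worker-2": {"feature/sriov": "false"},
}

def feasible_nodes(node_selector, nodes):
    # A node is feasible only if every selector key/value matches.
    return [name for name, labels in nodes.items()
            if all(labels.get(k) == v for k, v in node_selector.items())]

print(feasible_nodes({"feature/sriov": "true"}, nodes))  # ['worker-1']
```

Running the right device plugin per node then becomes the same problem: a DaemonSet with a node selector over these labels lands the SR-IOV plugin only where SR-IOV hardware was discovered.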
N
Thank you. So this slide is just to show the transition from the CNI model to the CNI plus device plugin model. In this case you can see that the CNI alone has no mechanism to schedule consumables, like SR-IOV VFs, or to handle NUMA alignment, so our team is also working on the NUMA manager. In Kubernetes we are planning this for the next release, so we will have a NUMA manager.
N
The task for the starting point of this project will be a showcase that the NUMA manager can work along with CMK (CMK means the CPU Manager for Kubernetes) and along with the SR-IOV device plugin. We showcased that particular piece to both the resource management working group and also to the Kubernetes plumbing working group. The development is still ongoing, so by the next release we are more confident that it will land along with Kubernetes.
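The alignment problem a NUMA manager solves, working with CPU pinning and the SR-IOV device plugin, can be sketched simply: pick CPUs and a VF that live on the same NUMA node, so data-plane packets never cross the inter-socket interconnect. The topology below is invented for illustration; a real manager would read it from the host.

```python
# Sketch: per-NUMA-node resources and an aligned allocation.
topology = {
    0: {"cpus": [0, 1, 2, 3], "vfs": ["0000:03:02.0"]},
    1: {"cpus": [4, 5, 6, 7], "vfs": ["0000:81:02.0"]},
}

def aligned_allocation(n_cpus, topology):
    # Return (numa_node, pinned_cpus, vf) all from one NUMA node,
    # or None if no single node can satisfy the request.
    for node, res in topology.items():
        if len(res["cpus"]) >= n_cpus and res["vfs"]:
            return node, res["cpus"][:n_cpus], res["vfs"][0]
    return None

node, cpus, vf = aligned_allocation(2, topology)
print(node, cpus, vf)  # 0 [0, 1] 0000:03:02.0
```

Without this coordination, the CPU manager and the device plugin each make a locally valid choice that can land on different sockets, which is exactly the performance trap described above.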
H
Are you able to see my slides? Absolutely, looks good. And is the audio okay? It's perfect. All right, sounds good. So thank you very much, Diane and team. What we'll do in this session, the next 20 minutes or so, is go through what edge means and what containers mean within the context of edge. So first of all, some definitions. We've seen various definitions of edge. Edge is certainly the next infrastructure paradigm for delivering applications and services closer to the subscriber, and edge allows efficient processing and delivery of time-sensitive data. People have used different other definitions as well: some geographic distribution of compute nodes, distributed computing paradigms largely performed in the distributed devices, etcetera. But edge is independent, edge is elastic, edge is massive scale. It must be automated to meet that scale characteristic, and the requirements are resilience, and being built using public, private, or hybrid footprints with respect to cloud. Now, there's a lot to be said on just one slide in terms of what edge is. What does edge mean? One way to look at it is that it's about building ultra-reliable experiences for people and objects, when it matters and where it matters.
Then, many times people struggle with edge: is it the customer edge, the provider edge, or somewhere in between? It can be anything that sits between the subscriber and the provider core or provider data center. And if you take that as our definition of edge, then automatically you'll start to look at: how do you solve different application challenges? How do you solve different deployment challenges?
H
How do you solve different network challenges in the context of edge? Because the footprints will vary, the sizes will vary, and the deployment matters and context will vary. So let's start there. Now, we know that in the telco space a provider network can typically be sub-classified into many different tiers.
H
Not all these tiers apply to every single provider in the world, because geographically some of these tiers are merged in smaller countries, and in countries like the US, Canada, and Australia you might actually have a much more enhanced set of tiers. But certainly there is the access network, an aggregation network, and a termination infrastructure.
H
There is the last mile: copper or fixed wireless, in terms of the wireless side of the business, or the copper, cable, GPON optical stuff. And then there is the customer CPE, whether it's a residential CPE or a business CPE, and then the actual customer network itself. If you look at that tiering, or that categorization model: this is work that was done between me and Fozzie, and full credit to Fozzie for actually coming up with this kind of a tiering model.
H
It then allows us to segment what is applicable in each one of these tiers, and once you understand what is applicable in each one of these tiers, then you can understand the type of applications, the type of network functions, and the type of capabilities that are required at those tiers, and then try to solve the problem of what is required from an infrastructure perspective, what is required from a networking perspective, what is required from a containers perspective, and so on and so forth.
H
You can keep double-clicking into each level and then define the detailed set of requirements at each level. Everybody talks about edge. Here's an example of a slide that was done by China Mobile at a keynote presentation in Amsterdam. If you look at this in terms of the deployment model, from the national and district-level data centers down to the base station (and because this is China Mobile, it's talking about radio infrastructure), the number of servers that they need to deploy this whole virtualized RAN with edge compute is about 1.25 million servers, in terms of the number of sites and the number of locations, multiplying each location by the total number that they need, by about 2020-25; these are their estimates. And then the distances that these reside from the base station or from the user, and the latency requirements in all of those. We ended up, actually (again, credit to my friend Fozzie here), taking pains to catalogue all of these details. The important point, what I really want to show through this slide, is not necessarily to dig into the why or the what, but to tell you that, regardless of which way you cut it, it's going to be large. If you take this number and apply it to the number of cell sites, the number of locations, or the number of points for AT&T or Verizon here, that still is a pretty large number.
H
That still is two hundred thousand, three hundred thousand, whatever that number is; here it turned out to be more than 1.25 million in terms of the total number of servers that are required. But what's important in that context of scale is that when you have those locations deployed with compute infrastructure, then functionality from the core data center, from the regional data center, starts to migrate towards those locations.
H
When that functionality starts to migrate, because you are providing better experiences (remember, we started with the definition that edge is about providing better experiences), then you need at that edge infrastructure, that edge location, all types of capabilities: the ability to manage virtual machines, the ability to manage containers, the ability to provide an assurance framework. You need a view of where those worker nodes are placed and how they are placed.
H
What's your assurance framework? What's the overlay network control, or the underlay networking control, for all of those? Now, you just saw a presentation from Intel a moment ago about some of the networking challenges specifically, and how Red Hat and Intel are working together to solve some of those networking challenges. There is more than that: beyond the networking challenges, we also need things like visibility into the nodes, in terms of how you place functionality and how you distribute functionality.
H
Now, when functionality starts to migrate, the functionality that was in the core data center, running in Kubernetes, or running in OpenStack in virtual machines, now has to be able to run at that same edge location and be able to be managed with a single pane of glass across both of those locations. So I'm just putting that out there in terms of the scale and size, and in terms of the complexity of how it's managed. That brings me to: okay, so what are those use cases that are being deployed at the edge?
H
What are people considering? If you take a pure telco view, a telco delivers enterprise, residential, or mobile types of services, and those services are being terminated today on a central office. We now have a central office blueprint based on OpenStack; that central office blueprint now needs to be converted to a container blueprint as well, either instead of OpenStack or maybe both, because you'll now have some of those network functions moving into containers. Then, when you deploy pure edge compute for other traditional IT applications, adjacent to these network functions, those are running as microservices in containers, and so you need to be able to operate those. In the mobile case, for example, you have this virtualized RAN; now that virtualized RAN is becoming containerized RAN.
H
Then there are other interesting areas, such as video optimization, that need to happen locally in that edge compute location. Then you have specialized deployment models such as stadiums or big public venues, and the technology itself is moving, from an NFV perspective, to containers, and we'll actually look at some of that in a little bit of detail. You can also take things like the IoT space.
H
Analytics is a thing that keeps coming up every time we talk about IoT, and edge compute deployment is almost necessary to provide timely feedback to those sensors and devices that are autonomous, that are operating independently. Otherwise the round-trip delay times, plus the complete processing time at a centralized data center, are not sufficient in terms of how quickly you need to take action in those types of situations. Then there are traditional telco services, which are your specific CPE, managed CPE, VNF-as-a-service, and so on.
H
Today some of these are delivered using dedicated devices; tomorrow, or pretty quickly, they are moving to a virtualized model, and pretty quickly they will move to a containerized model as well. So what you will see, again, around the world, is this extensive thinking being put together by telcos to say: how should I go about transforming my existing set of services into this new model, and how do I go about deploying those?
H
We could talk about each one of these line items for a very long time, but what I want to give you through this particular list is a mix. The mix that's being deployed at each compute layer is both a set of network functions and a set of traditional applications, the IT applications that will run on the edge. Which means that whatever was typically running in a centralized data center now needs to be running at the edge, whether that applies to the virtual machine context or it also applies to the container context.
H
Now we'll just take a quick drill down into 5G, because somebody brought up 5G here on the topic, and I think it's relevant. 5G is seen as an enabler technology for new use cases. It's also seen as leveling the playing field between incumbents and new entrants into the market. You wouldn't believe it, but there are a lot of providers and people around the world speculating and jumping into the 5G spectrum auctions and getting into that field, because it seems attractive to them. Why?
Why?
H
Because
it
resolves
some
of
the
life
last
mile
access,
it
provides
an
instant
you
know,
subscriber
access.
If
you
drop
a
high-speed
modem
into
a
user
into
a
residence
or
into
an
enterprise
customer,
you
don't
need
to
worry
about
whether
that
user
has
a
DSL
or
a
fiber
or
a
cable
or
or
any
other
connection.
You
can
get
potentially
a
high-speed
connection
into
the
user
by
just
dropping
in
a
modem
and
in
their
house
or
in
their
premises.
So
it
results
some
of
those
last
mile
challenges
and
that's
why
fixed
mobile
broadband?
H
that's the term that people use, is a very attractive service option based on 5G for a lot of providers. Then, of course, because of the low-latency capability, and because of, you know, massive-scale machine-type communications, new services are possible through these enhanced use cases for 5G. And then some of the deployment models lend themselves to deploying better compute capability with 5G, which means, again, better user experiences.
H
Overall, however, one thing that's true, though, is that it does require huge investment in terms of the infrastructure and the spectrum, and from a mass-scale perspective we're probably still, you know, 18 to 24 months away. Phones are beginning to be announced, or user equipment is beginning to be announced,
H
with availability sometime later next year, so then you can see some spotty deployments coming up. How long did it take for the transition from 3G to LTE? Think about that. How long will it take us to transition from LTE to 5G? We can all speculate in terms of how long it takes and how much investment they'll do, again.
H
Just because this is use-case centric, I wanted to kind of point out that you can take these specialized use cases, applications, database requirements, all of those things, and categorize them into these three broad categories of 5G: enhanced mobile broadband, machine-type communication, or low-latency communication. And then you will see how, you know, each application or service falls into one of these three buckets and can then be deployed from a telco perspective.
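A minimal sketch of that bucketing logic, with illustrative, made-up thresholds rather than 3GPP-defined values:

```python
# Hypothetical sketch: sorting services into the three broad 5G
# categories mentioned in the talk (eMBB, mMTC, URLLC).
# Thresholds and service names here are illustrative assumptions.

def categorize(latency_ms: float, throughput_mbps: float, devices_per_km2: int) -> str:
    """Place a service requirement into one of the three 5G buckets."""
    if latency_ms <= 10:                 # hard real-time need: URLLC
        return "URLLC"
    if devices_per_km2 >= 100_000:       # massive sensor density: mMTC
        return "mMTC"
    return "eMBB"                        # otherwise bandwidth-driven: eMBB

services = {
    "fixed wireless broadband": (50, 300, 100),
    "industrial robot control": (1, 10, 1_000),
    "city-wide metering sensors": (1000, 0.1, 500_000),
}
for name, reqs in services.items():
    print(name, "->", categorize(*reqs))
```

The point is only that each service's requirements map it into exactly one bucket, which then drives how a telco deploys it.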
H
Going forward, if you take the 5G architecture, you're splitting the functionality that was in the baseband unit, probably running as a virtualized baseband unit, into what we call centralized and distributed nodes. These centralized and distributed nodes are certainly candidates where you can run these containers. You can run this code in containers today, versus the virtual machines they currently use, in terms of the trials and the conversations we've been having internally.
internally.
H
We showed that at the Open Networking Summit in Amsterdam as a demo POC, but you know there is a lot of talk and a lot of conversation around how these implementations can move to containers pretty fast. Not only that: with these infrastructure components moving completely to containers, there are standard compute nodes being deployed right next to those locations. What we highlight in this picture as edge compute, right here inside these red circles: that edge compute is your generic, you know, compute infrastructure running your traditional applications. It could be,
H
you know, a caching node. It could be your traditional, you know, localized database. It could be your traditional IT application, which would be something that does data analytics with the data that's coming back, on this particular infrastructure right there, locally, rather than at a core data center. Especially, that edge compute node allows you to now shunt traffic locally with 5G, where you place the user plane function over there, and then you can do a whole lot of things with different kinds of services and experiment with that.
H
So these are microservices that are already available as part of a commercial catalog. These microservices are common, too; then all they need to worry about is how to leverage these microservices and how to build the core functionality that's required by, you know, the 5G components, like the access management function or the session management function, which is the AMF and SMF, as the service-specific microservices. And then they can have a set of common networking, routing, protocol, or load-balancing microservices as well that leverage this.
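As a toy illustration of that catalog idea, assuming hypothetical microservice names, the service-specific pieces (AMF, SMF) can be composed on top of a shared common set:

```python
# Hypothetical sketch of building 5G network functions from a shared
# catalog of common microservices plus service-specific ones, as the
# talk describes. All names are illustrative, not a real catalog.

COMMON = {"logging", "config", "metrics", "load-balancing", "routing"}

NETWORK_FUNCTIONS = {
    "AMF": COMMON | {"access-management"},     # access management function
    "SMF": COMMON | {"session-management"},    # session management function
}

def shared_services(functions: dict) -> set:
    """Microservices that every network function reuses from the catalog."""
    return set.intersection(*functions.values())

print(sorted(shared_services(NETWORK_FUNCTIONS)))
```

The design point is that only the service-specific pieces differ per function; everything else comes from the shared catalog.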
H
So now, what was previously packaged all together as a single virtual machine, or a set of virtual machines, as part of a VNF, is now completely refactored and broken into all of these microservices. And then these microservices need to be managed as a cluster, as a group, and then deployed in these various different thousands of locations that we need to. So that brings me to my, you know, last few slides, which is about containerizing.
H
So multi-site is incredibly important with respect to, you know, edge. So: provisioning of those multi-sites, provisioning of those remote nodes, the zero-touch models, the single-pane-of-glass orchestrator model, in terms of how do you place that functionality and where. So, stuff that was discussed earlier on the call by Intel in terms of node feature discovery, what Frank just answered: these types of features become incredibly important. Identity and security management, federation capability, because now you have thousands of these sites.
H
They need to work together as a single cluster; you need to have a common identity across all of those clusters, and a common way of orchestrating and managing those HA models. Do we need a Kubernetes master node on each one of those sites? Do we need some sort of a headless operation model on those sites? At least, when you restart these containers, can you restart a single container? If you remember my previous slide, in terms of how a single VNF got refactored into all of these different microservices,
H
maybe you need to restart differently, in terms of bundles and dependency trees. How do you create those dependency trees and worry about the restartability of those application bundles?
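That restart-ordering question can be sketched as a topological sort over the bundle's dependency tree: restart each service only after everything it depends on is back. Service names and edges here are made up:

```python
# Sketch of restart ordering for a bundle of microservices with a
# dependency tree: dependencies restart before their dependents.
# Service names and edges are hypothetical illustrations.
from graphlib import TopologicalSorter

# mapping: service -> the services it depends on
deps = {
    "session-mgmt": {"config", "database"},
    "load-balancer": {"session-mgmt"},
    "database": {"config"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # 'config' first, 'load-balancer' last
```

`graphlib.TopologicalSorter` (Python 3.9+) also raises `CycleError` if the dependency tree accidentally contains a cycle, which is exactly the kind of thing you want to catch before a rollout.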
Multi-tenancy is incredibly important for telco, and often we ignore multi-tenancy, because we are typically looking at deploying containers inside a single enterprise.
H
In the telco space, now you're talking about taking a service that needs to be offered to thousands of subscribers, thousands of businesses, and you cannot have an application that shares data across those different subscribers. So you may have copies of applications running, you know, in different tenants. Then those tenants need to get their own view of what's going on, from their application perspective and from their infrastructure perspective.
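A minimal sketch of that per-tenant isolation, with hypothetical tenant names: each tenant gets its own application copy and can only see its own state.

```python
# Sketch of per-tenant copies of an application: state is held per
# tenant, and each tenant's view exposes only that tenant's data.
# Tenant names and the API are purely illustrative.

class TenantApp:
    def __init__(self, tenant: str):
        self.tenant = tenant
        self._data = {}                  # isolated, per-tenant state

    def put(self, key, value):
        self._data[key] = value

    def view(self):
        return dict(self._data)          # tenant-scoped view only

apps = {t: TenantApp(t) for t in ("acme", "globex")}
apps["acme"].put("subscribers", 1200)
print(apps["globex"].view())  # {} : no leakage across tenants
```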
H
Networking requirements, I think, have already been talked about enough; there's more in the context of not just Multus, but also things like native v6, native v6 everywhere, you know, NATing, you know, v4-to-v6 conversions, and that becomes really, really important.
Then data plane acceleration; compute in terms of real-time capabilities, thread pinning, CPU pinning models; I think Intel did go through some of those things that are coming down. How do you patch OSes?
H
Do you actually worry about dealing with this as a black-box approach versus an incremental patching model? If you have 1 million nodes, how can you do an incremental patch across those 1 million nodes? And on and on and on, right? I mean, I can keep going on with respect to this set of requirements. Some of these things are being worked on; some of these things are yet to be worked on, I think, through this community.
H
What I'm interested in is collaborating, working, defining, and refining this set of things so that, when, you know, different groups of people are involved with this, we get the right set of things to go build and develop at the community level and at the Red Hat level, and participate in that. So that's kind of a quick summary of where things are with respect to edge and containers.
H
Actually, both: people are looking for dual stack, as well as just native v6 across the board, no dual stack. So if you go to Japan and you want to talk to any of the telcos in Japan, they'll say give me everything native v6; today there are no more v4 addresses being allocated by IANA, so.
C
Are there any other questions? You know, we've run over time a little bit, but I really wanted to thank you for giving the presentation; it really helped set the stage for all of the other topics that we're going to need to talk about over the next year or so as things develop. I have proposed the next meeting date to be the 25th of January, aka the fourth Friday of each month, at 8 a.m., and I haven't heard any objections, so I'm going to set that up.
C
I know some of you are in APAC time zones, so that might not be perfect for you, so we will try and determine that in the future. And there have been a few other folks chatting in the background too, so hopefully we can get maybe a Contrail talk. You know, Paul Lancaster was working with another, I think it was my OC or something or other, another telco service provider, to get them on board
C
for the next meeting to talk. And I'm hoping that this structure works for us; I tend to like to have at least two short talks to spur conversation, but if we need to do only one talk so we can have more conversation, we can even shorten that up in the future. So if anyone has any feedback, I've put my email in the chat, so you can send feedback that way. And also, if anyone is coming to KubeCon,
C
please ping me, because I will give you all free passes to the Gathering, which is on December 10th. And, as I mentioned as well in the chat, there's a very large exhibit room slash lunch room next to the room where all the presentations are going on, and it would be a great place for everybody to maybe have lunch together or sit together during one of the coffee breaks or in the reception,
C
to just connect face-to-face, and then have a sort of informal face-to-face SIG meeting on December 10th. Is there anything that anyone would like to add to this conversation today? I know we've run over time, so some folks had to drop off, but it laid a good introduction, I think, to both the people in the space at Red Hat and a lot of the partners here and a few of the customers too.