From YouTube: Emerging Use of Containers and OpenShift in Telco Suresh Krishnan @Kaloom OpenShift Commons Briefing
Description
Emerging Use of Containers and OpenShift in Telcos
OpenShift Commons Telco SIG Briefing
Suresh Krishnan (Kaloom)
Richard Gee (Red Hat)
June 28, 2019
Next meeting: 4th Friday of each month at 8:00 am Pacific
https://commons.openshift.org/events.html
A (host): We have with us today Richard Gee and Suresh Krishnan. Richard Gee is from Red Hat, Suresh Krishnan is the CTO of Kaloom, and they're going to walk us through the emerging use of containers and OpenShift in telco networks. I'll let them introduce themselves, and then we'll open up the call afterwards for a conversation about this topic or anything else you'd be interested in. So Suresh and Richard, why don't you kick us off?
C (Richard Gee, Red Hat): Great, thanks. Hi, this is Rich Gee. I'm in Red Hat's global service provider group. I've been focused on global telco and OPNFV partnerships over the years, and most recently, in the past year and a half, on our container and OpenShift strategies in the telecommunications market. Go ahead, Suresh.
B (Suresh Krishnan, Kaloom): So I'll tell you what we did: how our system looks, what kinds of issues we faced, and how we made changes to Kubernetes to make sure the right things get done. That's the basic idea of the talk. Rich is going to describe the overall situation and put us in perspective, I'll talk a little bit about Kaloom, and then at the end we'll open up for questions.
C: I'll briefly touch on the type of collaboration we've done with Kaloom. I actually started working with Kaloom almost three years ago, when they were still in stealth mode, so we'll identify why we started that, then kick off into covering some of their solutions, how they use containers and Kubernetes, and in particular the use case we're seeing in 5G, for example. Then there'll be a lot of room for Q&A, so I'll rush right through these slides.
Before we get to future trends, I do want to do a bit of a sanity check on the evolution of NFV here. Probably most of you have seen the CNCF and Linux Foundation slides that show the future state, actually without OpenStack, where Kubernetes on bare metal is the single underlying substrate or undercloud, and it orchestrates both VNFs (with KubeVirt) and cloud-native network functions. Whether you call them container network functions, containerized network functions, or cloud-native network functions,
C
You
know
in
general,
we're
talking
about
CNS
here,
removed
from
being
apps
to
seeing
apps
here
now
what
you
know
we
have
product
managers
that
you
know
we'd
be
happy
to
have
set
up
set
you
up
with
that
discuss.
You
know
how
we
plan
to
get
to
that
state
that
future
state
that
the
CNF,
as
has
been
out
there
or
program
kind
of
presenting
as
a
future
vision,
and
that's
certainly
a
critical
path
to
the
future,
especially
for
for
edge
I
used
as
like
we're
seeing.
C
But,
but
it's
probably
worth
noting
that
you
know
the
reality
in
the
market
today
is
that
there's
been
a
lot
of
time
span
already
in
their
network
function,
virtualization
the
infrastructure
for
that,
and
so
it
will
take.
You
know
quite
some
time
for
for
the
move
from
being
FCC
announced
and
mass
deployment
of
CN
FS
and
the
installed
base
of
of
OpenStack
for
network
function.
Virtualization
and
vnfs
will
actually
even
be
a
place
for
you
have
a
longer
time
right.
C
So
so
we
do
believe
that
to
really
address
the
market
needs,
we
need
to
provide
choice
of
both
the
mature
OpenStack
platform
and
a
next-generation
platform
based
on
open
ship
right
for
our
customers
and
partners.
So
just
kind
of
watch.
You
know
kind
of
level
step
that
a
little
bit
go
ahead.
It's
Russia
next
slide,
please.
So
you
know
where,
where
are
we
today
in
this
evolution
right
in
terms
of
containers
and
kubernetes
in
the
network?
C
You
know
the
as
I
mentioned
before
the
the
prevalent
use
of
OpenStack
and
the
maturity
of
infrastructure,
and
our
support,
not
only
in
OpenStack
Monroe,
has
meant
that
most
of
the
use
cases
do,
you
know,
will
be
rely
on
the
open
second
cloud
support
right
now.
In
the
past
six
to
nine
months,
we
have
definitely
seen
more
specific,
open
ship
interests
from
both
customers
and
partners
to
use
containers
of
Cuban
Indies
in
the
network.
C
We've
noticed
this
in
terms
of
distinct
up
cake
across
the
board
on
PLC
discussions
on
various
Network
functions.
We
had
been
singing
it
in
the
core,
but
starting
to
see
a
lot
of
interest
in
edge
use
cases.
We've
even
seen
more
RFI
requests
right
from
the
telco
customers
putting
out
in
the
market
request
for
for
kind
of
strategies
and
plans
to
move
the
containers,
and
so
there's
a
sense
that
you
know
they
are
also
gearing
up
or
say,
moving
from
pilots
PLC's
to
pilots
say
for
next
calendar
year.
Right
now.
C
At
the
same
time,
we've
been
pushing
the
building
of
cuba's
operators
for
day
2
operations
to
increase
automations
portability
and
scale
with
telco
ecosystem,
so
that
that
this
helps
prepare
to
leverage
container
adoption
and
deployments
right,
because
we
know
that
Intel
called
that.
You
know
these
things
are
critical
in
terms
of
the
actually.
You
know
moving
to
the
next
step
of
realizing
the
use
of
some
of
these
emerging
technologies.
So
and,
as
I
mentioned,
the
the
push
from
the
newer
use
cases
of
edge
and
5g.
C
You
know
those
tend
to
bring
more
of
a
focus
on
running
open
ships,
a
on
bare
metal,
so
you'll
start
seeing
from
us
better
support
for
running
open
shift
with
OpenStack
it
kind
of
in
the
second
half
of
this
year,
right
as
opposed
to
on
top
of
stack
more.
You
know
along
side
of
it.
As
you
see
here,
we
you
know.
We
also
know
that
it's
not
just
about
running
an
open,
chiffon
bare
metal
right.
That's
not
the
only
answer.
C
You
know
we
learned
this
from
our
ecosystem
work
in
OpenStack
and
nfe
that
there
were
these
critical
and
kind
of
implementation
gaps
that
we
required
in
commodities
to
enable
and
then
we're
functions
to
move
over
a
to
and
leverage
contain
technologies
and
platforms.
So
we'll
talk
a
bit
about
that
here
and
also
about
how
you
can
have
also
you
know,
participate
in
this
process
to
move
vehicles.
So
this
is
a
special
interest
group
here
in
open
ship.
C
So
you
know
no
surprise
here
the
way
that
we
do
our
work
in
terms
of
moving
the
technologies
and
the
platform's
forward.
Is
we
work
in
the
special
interest
groups
and
work
groups
within
kubernetes
right,
and
you
know
that
way.
We
can
kind
of
iterate
the
ecosystem,
partners
and
customers
in
each
of
these.
The
areas
and
they're
not
be
bound
by
say,
perhaps
a
process
or
release
or
other
mechanism.
That's
part
of
the
larger
communities
environment
right.
C
So
this
was
quite
effective
in
other
communities,
and
this
is
kind
of
how
we're
doing
here
and
just
to
dive
down
a
little
bit
to
see
how
we've
done
this
or
made
progress
specifically
and
networking
in
containers.
Well,
look
at
the
networking
workgroup,
for
example,
and
how
we've
kind
of
gotten
through
that
next
slide.
There's
fresh!
C: We do need to update this slide for the 4.1 release, the most recent release this past month, but you can see the Multus-based multi-network solution we were looking to get done, and that was done in the Network Plumbing Working Group, for example. That was critical to getting the Kubernetes networking support we needed, and the functionality for things like multi-network support for CNFs.
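As a minimal sketch of what that enables, assuming a cluster with Multus installed, a pod requests a secondary interface by naming a NetworkAttachmentDefinition in the annotation standardized by the Network Plumbing Working Group. The network name `datapath-net` and the image here are hypothetical placeholders; a cluster admin would create the attachment definition separately.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod with the default cluster network plus one secondary
	// attachment that Multus wires up from the named
	// NetworkAttachmentDefinition ("datapath-net" is hypothetical).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "cnf-example",
			Annotations: map[string]string{
				// Annotation key from the NPWG multi-network spec.
				"k8s.v1.cni.cncf.io/networks": "datapath-net",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "cnf", Image: "registry.example.com/cnf:latest"},
			},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // YAML ready to apply to a cluster
}
```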
C: Additionally, we were already in tech preview for the SR-IOV CNI and the SR-IOV device plugin. That is adjunct work that's also relevant for network functions, and we expect to have it fully supported later this year. Now combine that with all the progress we've made on other enhancements that were required in terms of resource management and enhanced platform awareness features.
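In the same hedged spirit, once an SR-IOV device plugin is deployed, a virtual function is consumed as an extended resource that the container requests much like CPU or memory. The resource name `intel.com/sriov` below is a placeholder for whatever name the deployed plugin actually registers, and the image is hypothetical.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Request exactly one SR-IOV virtual function; the device plugin
	// allocates a VF and, together with the SR-IOV CNI, attaches it
	// to the container's network namespace.
	vf := resource.MustParse("1")
	c := corev1.Container{
		Name:  "dataplane",
		Image: "registry.example.com/upf:latest", // hypothetical image
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{"intel.com/sriov": vf},
			Limits:   corev1.ResourceList{"intel.com/sriov": vf},
		},
	}
	fmt.Printf("limits: %v\n", c.Resources.Limits)
}
```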
C: So, in a nutshell, the way we do this is in the SIGs and upstream; obviously folks like Intel, ourselves, Cisco, IBM, and others are there, but getting other partners and telcos to participate is important, and not only for contributing code: it's just as important to get real telco use cases and influence into the community and to help set that direction. So I think the net-net is this,
a future vision. What is the longer-term future state? No surprise, we're going to deliver it through OpenShift. We'll be evolving the native infrastructure support for compute, networking, and storage resources so that we can orchestrate both virtualized and containerized application workloads on the platform, and we'll continue to drive Operators for the applications and infrastructure to automate operations and lifecycle management as we move more onto container-based technology.
We feel our expertise and the things we can bring to the table, in terms of Container Linux and the RHEL host for the infrastructure piece, are going to be critical elements in helping this move forward in a supportable manner. And we have our distribution of KubeVirt, which is called Container Native Virtualization, for supporting VM-based, virtualized applications, and that work tracks along with the work on Multus and SR-IOV, for example; I believe that's actually also in tech preview.
But again, we can set up more details on that with the product managers in future sessions. What I would note there, and you'll notice this, is that for many of these technologies, a lot of the functionality and use cases are initially driven by the larger enterprise use-case community. So, going back to the point I made earlier, it's really important for us to get customers and partners in the telco market into these special interest groups to bring in the actual telco use cases,
C
So
we
can
iterate
on
these
on
these
efforts
to
to
address
more
of
those
type
of
AIDS
patients.
I
think
if
you
can
move
to
my
last
slide.
Ok
and
before
we
move
on
to
you,
know
kaloumes
deeper
dive
here,
I
wanted
to
kind
of
highlight.
You
know
why
we
started
engaging
with
Kalume
a
few
years
back
and
our
joint
partnership.
You
know
a
lot
of
it
started
with
when
we
were
talking
to
Kalume
a
few
years
back.
C
Then
the
emerging
technologies
in
Cuba
chef
right,
and
they
were
also
very
interested
only
in
working
with
us
upstream
on
these
various
things,
but
also
collaborating
spending
these
special
interest
groups
alright.
So
that
was
really
a
good
kind
of
joint
collaboration
strategy.
We
had,
they
were
looking
at
pushing
the
envelope
in
terms
of
using
communities
in
Rio
and
networking
use
cases
which
we
were
very
interested
in
been
doing.
They
were
looking
at
in
their
products,
actually
leverage
our
platforms
in
very
unique
ways,
and
we've
seen
that
kind
of
play
out
here.
C: We participate in a lot of projects, but there are a lot of other projects we don't participate in directly. For example, we were very interested in what Kaloom was looking at doing with the emerging open-source efforts around the open programming language P4, and this was a way for us to participate in that without being directly involved in those projects: working through partners in the ecosystem to really explore the evolution of these open platforms.
B: So what are we doing? What we started off building is really a software-defined fabric. By that I mean a networking fabric that can be changed after it is deployed; that's a key feature. We wanted something programmable and open, and we had a few key features in mind when we started. The first was that it's completely autonomous: the basic fabric itself is fully self-discovering and self-forming.
B
So,
if
you
put
in
like
the
component
switches
together
and
just
power,
it
on
everything,
self
discovers
itself
and
I
can
like
talk
about
it
in
quite
a
bit
of
detail,
but
it's
very
complicated
like
when,
if
you
want
to
configure
like
OpenShift,
which
is
what
we
use
so
everything
we
do
on
the
control
flame
of
the
fabric
is
fully
containerized
and
runs
an
open
shift.
So
if
you
don't
have
the
basic
underlying
network
connectivity,
it
becomes
very
difficult
for
us
to
boot.
B
The
system
right
like
because
OpenShift
to
configure
itself
like
I'm
a
requires
network
connectivity
and
we
are
actually
bootstrapping
there
into
a
connectivity.
So
it
was
like
an
interesting
problem
to
solve
and
that's
like
an
our
stock.
Pretty
much
like
you
know
how
we
did
it
like
how
we
started
off
with
like
the
Kouros
in
the
bottom
and
then
like
built
up
the
whole
system.
B
But
the
basic
idea
is
like
we
didn't
want
any
kind
of
configuration
just
to
have
a
system
up
and
running,
and
you
we
use
ipv6,
like
Auto
configuration
for
all
the
nodes
to
find
themselves
like
assign
them
self
addresses
from
the
connectivity,
l3,
connectivity
and
neat
and
from
the
mesh.
So
that
is
like
something
we
wanted
to
do
fully
in
an
automated
fashion.
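As a rough illustration of that self-addressing (a sketch of standard SLAAC mechanics per RFC 4291, not Kaloom's actual bootstrap code), a node can derive its own IPv6 link-local address from its MAC using modified EUI-64:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives the fe80::/64 link-local address a node
// would assign itself via SLAAC with modified EUI-64 (RFC 4291):
// flip the universal/local bit of the first MAC byte and splice
// ff:fe into the middle of the 6-byte MAC.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the U/L bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("52:54:00:12:34:56")
	fmt.Println(linkLocalFromMAC(mac)) // fe80::5054:ff:fe12:3456
}
```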
B: The second thing we wanted was for it to be fully virtualizable, meaning you can slice any subset of ports sitting anywhere in the fabric and manage them independently. This meant we had to impose strong isolation requirements on the Kubernetes side as well as on the hardware side, all the switches, and it also meant we can have independent fabrics that are managed differently.
B
So
if
you
look
at
Rich's
picture
right,
so
there's
like
a
OpenStack
side
and
an
open,
shipped
side,
which
means
that
there's
some
subset
of
resources
that
are
controlled
by
OpenStack
and
some
subset,
that's
controlled
by
open,
shipped
and
and
we
kind
of
wanted
these
to
be
independent
from
each
other.
So
you
could
have
what
we
currently
call
a
tenant
right
like
running
OpenStack
and
so
someone
running
open
share
and
we
really
didn't
them
to
be
stepping
on
top
of
each
other's
toes.
B
So
like
we
had
like
some
serious
isolation
requirements
where
any
kind
of
traffic
on
one
slice
doesn't
affect
any
traffic
on
the
other
slice
and-
and
we
had
some
very
weird
requirements
from
some
of
Hotel
Co
customers
and
also
some
data
center
customers,
which
said
like
hey.
We
want
the
complete
entire
namespace
for
each
of
the
slices.
So
if
you
have
like
VLAN
ID
33
in
one
slice,
it
should
be
a
different
network
than
VLAN
ID
33,
another
slice,
even
if
it's
on
the
same
switch.
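A toy model of that namespace requirement (an assumed illustration, not Kaloom's data plane): key every lookup on the pair of slice and VLAN ID rather than on the VLAN ID alone, so the same VLAN ID resolves to different networks in different slices.

```go
package main

import "fmt"

// sliceVLAN scopes a VLAN ID to a slice, so VLAN 33 in one slice and
// VLAN 33 in another map to distinct virtual networks even when they
// share a physical switch.
type sliceVLAN struct {
	Slice string
	VLAN  uint16
}

func main() {
	networks := map[sliceVLAN]string{
		{"tenant-a", 33}: "net-a", // VLAN 33 in slice A
		{"tenant-b", 33}: "net-b", // same VLAN ID, different network
	}
	fmt.Println(networks[sliceVLAN{"tenant-a", 33}]) // net-a
	fmt.Println(networks[sliceVLAN{"tenant-b", 33}]) // net-b
}
```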
B
So
so
those
kind
of
things
like
I
enough
gave
us
like
quite
a
bit
of
interesting
thought
on
how
we
do
the
slicing
and
and
at
the
end,
like
we've
done
them,
was
where
we
have
part
of
the
system
running
OpenStack
for
the
system
running
open
ship,
that's
kind
of
what
we
demoed
here
as
well.
So
that's
kind
of
like
a
design
requirement
that
we
recognize,
like
just
like
rich
said
that,
even
though
containers
are
the
future,
the
VM
based
lots
are
not
going
away
anywhere
soon.
B
So
there's
gonna
be
a
long
tail
of
like
VM
based
loads,
and
so
we
won't
be
able
to
support
everything
independently.
And
the
third
thing
is
like
when
Rich
talked
about
fee
for
the
idea
is
like
we
can
program
the
fabric
in
the
field.
Okay,
so
we
want.
We
move
to
a
DevOps
kind
of
model,
so,
like
you
know
in
in
the
kubernetes
community,
in
even
and
openshift
community,
which
is
not
really
telco
side
like
on
the
enterprise
side
and
and
pretty
much
like
most
of
the
consumers
there.
B
The
devops
model
is
very,
very
prevalent,
but
the
telco
model
is
more
based
on.
Like
you
know,
having
stable
releases
like
at
long
periods
and
probably
not
even
updating
very
much
when
things
are
working
and
the
idea
was,
we
wanted
to
bring
in
the
devops
kind
of
model
into
networking-
and
this
is
not
like
don't
think
of
this
as,
like
you
know,
we
kind
of
put
in
like
feature
updates
all
the
time,
but
we
need
to
keep
things
up
to
date,
so
I
mean
like
there's.
Some
protocol
fixes,
there's
like
new
options
coming
in.
B
We
didn't
want,
like
redo
the
hardware
yeah,
but
instead
of
doing
like
you
know,
hardware,
like
folklore
upgrades,
we
want
to
have
a
software
patch
that'll
adil,
the
new
functionality
or
new
protocols,
and
the
fourth
thing
we
want
to
do
is
like
accelerate
the
data
plane.
So
this
is
again
like
when
we
started
working
with
the
red
I'd.
Like
you
know,
we
were
looking
at,
like
you
know,
virtual
switches,
so
we
looked
at
OBS
and
we
said:
okay,
like
you
know,
OVS
has
some
limitations.
B
It
consumes
like
a
lot
of
CPU
on
the
on
the
computer
so
like
what
can
we
do
to
optimize
it?
So
we
have
our
own
virtual
switch
that
we,
we
are
benchmarking
along
with
red,
add
where
we
try
to
offload
most
of
the
processing
onto
the
leaf
switch
that's
attached.
So
this
gives
us
like
quite
a
bit
of
CPU,
optimization
right,
like
the
we
can
offload.
B
Let's
say
instead
of
eight
cores
like
doing
VP
DK
stuff,
we
can
cut
it
down
to
like
one
core
and
do
all
the
processing
on
the
fabric
itself,
and
it
also
gives
us
like
significant
performance
and
latency
benefits
in
there,
and
we
also
have.
Since
we
have
a
programmable
fabric,
we
can
have
additional
network
functions
that
are
getting
hosted.
So
first
things
we
started
off.
We
started
putting
you
like
the
virtual
switch
in
there.
B
We
started
putting
a
virtual
router,
so
we
have
BGP
and
OSPF
running
in
the
fabric
itself,
the
data
plane,
and
we
also
have
a
VX
LAN
gateway
like
so.
If
you
wanted
to
have
l3
network
virtualization
be
able,
we
were
able
to
support
it
directly
in
the
fabric
hopper
itself
and
and
while
going
down
this
direction.
B
That's
when
we
started
like
building
the
user
plain
function,
which
is
the
EPF
for
Phi
G
I'll
talk
about
it
a
little
bit
more
detail
because
that's
kind
of
the
focus
of
the
presentation,
but
that's
kind
of
where
we
start
off,
like
what
can
we
have
as
part
of
the
fabric,
like
the
fabrics,
have
to
come
into
the
let's
say
the
21st
century
and
start
doing
things
that
are
useful
for
telcos,
for
doing
network
functions
and
and
one
of
the
key
features
is
like.
We
never
wanted
to
do
any
hardware.
B
So
we
have
tie
ups
with
like
top
white
box
networking
vendors
and
you
want
to
be
able
to
run
on
like
pretty
much
any
of
them.
So
act
on
is
like
our
closest
partner,
like
most
of
our
shipments,
like
we've
been
doing,
is
like
the
tacked
on,
but
we
also
have
certified
stuff
on
Delta,
Foxconn
and
Inman
tech
as
well
from
white
boxer.
B
So
the
who
are
the
customer
is
able
to
pick
who's
who's,
the
odium
vendor
they
want
to
buy
from,
and
we
just
if
I,
on
top
of
the
hardware,
so
starting
off
with
the
programmable
data
plane.
As
I
talked
about
a
little
bit,
so
you
want
to
add
new
features
and
services
at
runtime
without
affecting
the
traffic,
and
we
want
to
have
like
innovation
in
the
network
like
to
develop
like
new
code
and
new
protocols
and
allow
like
new
kind
of
business
models.
B: Just to set the stage for the telco space, the left side of the slide is how things look today. We have a bunch of network functions sitting on multiple servers, and each network function pretty much has its own control plane and data plane in a monolithic fashion. A packet comes into one of the servers, one piece of the function is applied, it ships back into the fabric, and so on, and the fabric is a fixed-function base fabric.
B
So
this
is
pretty
much
like
anything
you
can
see
in
the
market
today,
so
based
on,
like
a
tomahawk
kind
of
thing
or
a
Titan
kind
of
chipset
and
like
everything,
is
done
pretty
much
on
the
x86
and
if
you
see
there's
like
a
few
issues
right.
So
the
packet
comes
in
in
an
art
so
which
means
that
you're
using
a
lot
of
back-to-back
bandwidth
on
the
fabric,
so
it
kind
of
reduces
the
performance
and
every
time
a
packet
gets
in
and
out
of
a
server
like
I
say
like
greater
than
20
microsecond.
B
But
it's
more
about
like
50
to
80
microsecond
of
latency
introduced
every
time,
a
packet
in
and
out,
and
if
you
look
at
like
you
know,
a
lot
of
applications
in
Fiji
that
claim
to
need
1,
millisecond
latency
if
you
go
through
five
functions,
you're,
probably
at
like
300
400
micro
seconds
of
latency
already
consumed
in
the
network
processing
chain,
so
that
doesn't
leave
much
for
radio
and
for
any
other
application
processing
stuff.
So
we
wanted
to
reduce
this
quite
significantly.
B
So
what
we
did
is
we
separated
the
control
plane
and
and
the
data
plane
of
the
network
functions
and
and
hosted
them
directly
in
the
fabric.
So
that's
really.
The
goal
is
to
make
sure
that
in
the
packet
comes
in
into
the
fabric,
it
stays
in
the
fabric.
In
most
of
the
cases
they
use
the
plane
itself,
the
control
plane.
We
didn't
want
to
move
it
because,
like
it
had
quite
a
few,
not
bond
interfaces.
Every
operator
has
like
they're
already
their
workflow
engines.
B
There
are
crustacean
systems,
so
it
becomes
very
difficult
to
disentangle
this
piece.
So
we
wanted
to
keep
all
the
control
functions
and
the
not
born
interfaces
identical
while
offloading
the
data
plane
processing
directly
into
the
fabric.
Okay.
So
this
gives
us
like
quite
a
few
benefits
right,
so
we
have
better
performance.
We
can
push
packets
out
at
a
higher
throughput.
We
have
much
lower,
latency
and-
and
the
third
thing,
which
is
like
very
interesting,
is
that
let
me
save
quite
a
bit
of
CPU
utilization
in
Indi
in
all
of
the
servers.
B
So
this
is
like
a
depiction
of
like
you
know
how
we
split
up
the
fabrics
and
there's
like
just
one
thing:
I
want
to
do.
I
don't
want
to
do
this
in
too
much
detail,
so
every
physical
fabric,
we
can
split
it
into
multiple
virtual
fabrics.
That's
like
a
common
use
case.
Like
you
know,
people
say:
hey
like
I,
want
to
split
up
and
and
virtualize
and
provide
the
stuff
to
multiple
tenants.
B
So
that's
like
a
common
thing
today,
but
like
the
other
thing
that
we
have
done,
is
like
taking
a
virtual
fabric
and
span
it
across
multiple
physical
data
centers.
So
so
this
like
came
about
because,
like
we
had
like
a
lot
of
operator,
customers
who
are
like
building
out
quite
a
bit
of
H
data
centers
and
they
want
to
manage
them
a
unified
fashion.
B
So
if
you
look
at
this
picture
here,
if
you
have
a
centralized
data
center
called
fabric,
detached
fabric
1
and
a
distributed
data
center,
which
is
like
H
data
center,
which
is
like
far
out
and
there's
like
thousands
of
them
so
a
fabric,
2
is
an
example.
You
could
have
a
fabric
that
spans
virtual
fabric
that
spans
across
both
the
central
data
center
and
a
remote
data
center,
which
means
that
you
can
allocate
compute
or
networking
functions
depending
on
where
you
want
them
to
be
so,
you
could
have.
B
If
you
want
like
low
latency,
you
could
put
some
server
resources
on
a
remote
data
center
and
you
can
move
them
back
to
a
centralized
data
center
if
you
want
more
throughput
or
if
you
want
become
like
better
utilization.
So
this
gives
you
like
unprecedented
scalability
as
one
and
the
flexibility
on
where
you
put
those
things.
So
that's
something
like
that's
why
we
wanted
to
separate
the
physical
manifestation
of
the
fabric
from
the
actual
virtual
one
which
actually
people
used?
Ok,
so
each
of
the
V
fabrics
can
be
controlled
by
a
different
Orchestrator.
B
In
our
case,
we
are
like
supporting
Red
Hat,
open,
sac
and
open
chef,
pretty
much
to
sit
on
top
of
these.
So
that's
kind
of
odd
Rich's
talking
bot,
so
we
are
like
already
ml
to
certified,
with
OpenStack
13
and
also
the
our
CNI
is
also
like.
You
know,
fully
tested
with
open
share
and
so
like.
One
of
the
examples
like
we
were
using
is
like.
D (audience): [inaudible]

B: So the idea is that each of the vFabrics can be a 5G slice, and you can have a slice that spans multiple locations. If you look at this picture, you have access nodes, edge data centers, and centralized data centers, and you could have a slice that goes across multiple physical locations. For instance, we are seeing some cases in Asia where there's network sharing.
B
Ok,
so
like
you're
talking
to
somebody
Singapore
and
they
were
like
actually
trying
to
spare
the
spectrum
and
have
like
some
kind
of
natural
chatting,
and
we
have
the
same
thing
in
Canada,
where,
like
Bell
and
Telus,
right
like
they
have
some
kind
of
network
sharing
arrangements.
So
if
we
can
implement
those
kind
of
things
by
having
a
virtual
fabric,
that's
totally
isolated
and
say:
like
hey
like
each
operator,
it
gets
like
a
piece
of
like
the
resources
and
so
think
of
it
as
a
name.
B
You
know,
but
like
really
all
the
way
across
the
network
right
and
you
could
take-
let's
say
another
slice
and
give
it
to
somebody
like
Tesla
right,
like
or
some
car
manufacturer,
and
they
could
do
like
their
own
end-to-end
slides
over
the
network
and
they
can
have
full
control
of
it
and
what
runs
in
the
data
plane
all
those
things.
So
that's
kind
of
the
idea
is
like
you
have
complete
slice
into
N
in
the
Phi
G
network
and
everybody
gets
independent
ownership
of
the
resources.
B: We wanted it to be highly scalable, with very, very low latency: we can get a packet in and out, with the full mobile processing, in about 500 nanoseconds. It's very, very quick, probably a couple of orders of magnitude faster than running on x86. And on the single node we showed here,
B
We
we
were
doing
like
about
2.7
tera
bit
of
throughput,
while
compare
it
to
an
x86
the
fastest
one
we've
seen
is
about
200
gigabytes,
so
it's
about
again
15
fold,
increase
in
throughput
and
and
also
like,
rather
than
latency
and
throughput.
The
power
consumption
is
also
like
way
lower.
So
pretty
much
for
three
tera
bit
of
traffic
we
consume
about
the
same
power
is
like
what
is
consumed
for
a
bit
of
traffic
right
and
we
can
scale
this
up
pretty
high
millions
of
sessions
on
the
user
side
and
we
are
targeting
multiple
use
cases.
B
So,
since
we
are
like
scalable
pretty
much
linearly
by
adding
you
like
more
UPF
nodes,
we
are
targeting
the
large
regional
data
centers,
where,
like
everything
is
concentrated,
but
we're
also
like
the
the
main
kind
of
use
cases
we
are
seeing
right
now
is
like
people
are
trying
to
use
something
like
this
for
an
edge
data
center.
Where,
like
this,
like,
you
know,
probably
tens
of
base
stations
or,
like
you
know,
hundreds
of
base
stations
that
are
getting
like
aggregated
really
right.
B
So
every
five
base
station
is
like
pushing
out
about
twenty
gate
of
throughput
and
so
like.
If
you
do
like
you
know,
15
or
20
of
them
like
you
know,
it
becomes
like
400
gig
right.
So
it
becomes
like
pretty
difficult
to
like
to
handle
this
all
in
the
x86
and
also
it's
quite
a
bit
of
since
you
have
the
edge
data
center,
there's
quite
a
bit
of
limitations
on
the
real
estate
and
power.
B
So
people
want
to
minimize
how
much
real
I
say
they
use
in
there
if
they
have
login
only
to
racks,
they
want
to
be
use
it
for
applications
like
that
can
be
charged
for
rather
than
their
own
network
functions,
and
so
this
is
like
kind
of
like
showing
this
in
the
in
the
network.
Pretty
much
at
the
UPF
sits
between
the
base
station,
which,
like
I,
call
the
ran
here
and
and
the
the
internet
side
right
like
so.
B: Since it's a piece of P4 code, we can use it as part of the fabric, or we can have the UPF as an appliance, which is pretty much a release that we just ship, and it can be connected to an existing fabric. All our control plane runs in a cluster of x86 servers, and everything,
B
All
of
these
things
are
running
OpenShift,
ok,
so
we
we
had
pretty
much
like
read
at
atomic
little
atomic
running
on
all
of
these
switches
and
we
are
moving
everything
to
core
OS
and
running
everything
in
an
open,
shipped
cluster.
So
the
main
idea
this
is
like
the
the
control
plane
can
be
scaled
based
on
how
much
messaging
we
need.
So,
if
you
have
a
lot
of
messaging
needs,
we
have
a
lot
of
control,
plane
functions.
B
You
can
scale
this
and
we
wanted
something,
that's
reliable
and
modern,
and
that's
why
we
decided
to
do
everything
as
container.
So,
if
you
look
at
all
the
virtual
tour
stuff
or
bgp
functions,
everything
is
running
as
containers
so
like
they
can
be
run
on
the
switches.
They
can
be
run
on
the
controller
servers,
ok
and
not
of
the
latency
sensitive
functions.
They
are
running
on
the
switches
itself.
B
The
switches
have
an
x86,
that's
running
braille,
core
OS
and
that's
where
we
run
the
containers
themselves
and
the
control
plane
functions
of
the
Phi
G
network
itself.
So
that's
the
AMF
SMF.
They
run
on
application
servers
that
are
connected
in
the
front
ports
of
the
fabric.
So
why
did
we
pick
like
open
shift
and
and
core
OS
right?
The
idea
is
like
we
wanted
uniformity,
so
we
wanted
some
customers
to
have
the
same
kind
of
operating
system.
B
They
can
run
on
the
compute,
the
storage
and
the
networking,
which
means
that,
like
that
rules
are
any
kind
of
like
our
nas,
pretty
much
that
we
write
ourselves
and
we
want
our
like
good
security
features.
We
wanted
like
easy
upgrades
and
rollbacks
and
everything.
So
if
you
wanted
an
update,
let's
say
for
meltdown
aspect,
we
should
be
able
to
deploy
it
in
a
matter
of
minutes
rather
than
matter
of
years,
and
we
wanted
to
have
a
light
operating
system
which
is
made
for
container
platforms
pretty
much.
So
that's
why
we
went
with
core
OS.
B
We
didn't
do
the
classic
rail
and
so
and
that's
pretty
much
it
and
a
lot
of
people
have
gone
the
other
way.
On
the
networking
side,
everybody
has
their
own
networking
operating
system.
So
if
you
look
at
it,
there's
like
quite
a
few
examples
of
those
and
the
idea
is
like
you
know
they
don't
have
like
a
large
footprint
and
they
don't
have
like
a
lot
of
security
features
and
and
pretty
much
like
the
updating
is
like
pretty
slow.
B
So
if
you
look
at
a
lot
of
the
network
Linux,
so
it's
like
they're
gonna
be
updated,
probably
like
six
months
or
a
year
later,
then
like
gets
an
update
right.
So
if
you
look
at
meltdown
respect
or
something
so,
that's
kind
of
the
important
thing
and
also
like,
if
you
look
at
the
run,
see
vulnerability
that
came
out
a
few
months
ago,
like
open
chip
and
the
only
thing
that
was
not
affected
right
by
that.
B
So
like
we're
like
that's
why
we
decided
that,
like
you
know,
we
didn't
want
to
spend
our
time
and
effort
on
maintaining
the
operating
system
and
the
updates,
and
we
rather
focus
on
the
networking
piece
itself.
Okay-
and
this,
this
might
like
you
know,
be
very
different
approach,
but
I
think
it's
a
right
approach
to
go
to
do
the
stuff.
So
if
you,
if
you
think
of
it
in
a
different
way,
the
the
the
switches
that
we
have
in
our
system,
they
act
like
any
other
server
with
like
a
really
really
large
NIC.
B
That's
how
you
can
think
about
it
another
way.
So
that's
once
we
started
that
kind
of
thinking,
then
everything
just
fell
into
place
and
we
got
the
stuff
working
so
I
mean
we
did
everything
with
containers
right.
We
realized
that
quite
a
few
I
would
call
it.
How
should
I
call
it
like
a
shot
Cummings
on
the
kubernetes
side,
for
the
networking,
so
like
a
kubernetes
was
made
for
a
lot
of
the
l7
workloads
that
are
very
common
in
a
lot
of
the
big
application
providers.
B
But
once
we
start
going
into
like
in
a
multiple
networking,
support
and
stuff
like
that,
which
is
very
common
in
telco
environments,
like
you
know,
we
had,
we
found
like
quite
a
few
things
were
missing.
We
started
working
on
with,
like
you
know,
the
the
mother's
folks
and
like
a
lot
of
the
folks
after
that,
to
make
sure
that
we
started
improving
the
cni
and
started
pushing
things
back
and,
as
we
talked
about
there's
like
a
network
plumbing
working
group
under
Signet
work
and
that's
where
we
started
doing
quite
a
bit
of
the
work.
B
And
so
we
we
put
this
into
our
sea
and
I.
Then
our
scene
is
called
cactus.
So
a
bunch
of
the
stuff,
like
you
know,
we
kind
of
started
putting
in
things
into
the
specifications
really
into
the
network
plumbing.
But
we
also
release
all
the
source
code
pretty
much
into
open
source,
and
our
goal
is
to
make
sure
that
pretty
much
any
change
we
make
goes
upstream
into
maltose
and
then
it's
going
to
get
pulled
in
into
OpenShift
in
the
future.
Okay
and
and
the
biggest
differences
I
think
we
make
is
like
you
know.
B
We
really
want
to
be
able
to
have
multiple
networking
support
in
in
communities
and
with
dynamic
life
cycles.
So
if
you
have
a
routing
container-
and
you
want
to
bring
up
an
interface
right
now,
kubernetes
the
only
way
to
do
it
is
like
to
kill
and
restart
a
pod
and
that's
not
really
very
useful.
In
a
telco
environment,
they're
24/7
availability
is
required.
So
we
you
are
thinking
like
you
need
to
do
something
dynamic
now
how
we
actually
discover
new
connectivity
and
provision
it
so
like
we
are.
B
We
have
something
called
a
pod
agent
that
we
develop.
That
looks
out
for
changes
in
annotations
and
it
can
hook
up
and
remove
network
connectivity
on
the
fly.
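The general shape of such an agent, sketched here with client-go as an assumed illustration (this is not Kaloom's pod agent), is to watch pods for changes to the Multus networks annotation and plumb or unplumb interfaces in response, instead of restarting the pod:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

const netsKey = "k8s.v1.cni.cncf.io/networks"

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldPod, newPod := oldObj.(*corev1.Pod), newObj.(*corev1.Pod)
			if oldPod.Annotations[netsKey] != newPod.Annotations[netsKey] {
				// A real agent would attach or detach interfaces in
				// the pod's network namespace here, with no restart.
				fmt.Printf("pod %s/%s networks: %q -> %q\n",
					newPod.Namespace, newPod.Name,
					oldPod.Annotations[netsKey],
					newPod.Annotations[netsKey])
			}
		},
	})
	stop := make(chan struct{})
	factory.Start(stop)
	<-stop // block; handlers run on the informer's goroutines
}
```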
B: You can download the slides at some point, Diane, right? I have the URL there, so you can go take a look at our source code and contribute stuff back, or if you have any questions, please send us a note. We're kind of running out of time,
so I just wanted to make sure I tell you about some of the work we're doing with the Linux Foundation as well. The Linux Foundation has an effort called Virtual Central Office, and there's going to be a Virtual Central Office 3.0 based pretty much on CNFs; it's going to be testing
containerized network functions, as opposed to the earlier versions of VCO, which were more VM-based. We are putting together a couple of labs, one in Montreal and one in Shanghai. That's something we showed this week, and the idea is that we keep the Kaloom fabrics in there, but we are really open to taking in any other kind of workload.
B
So,
if
you're
like
either
a
telco
vendor
or
an
operator
who
wants
to
try
things
out-
and
this
will
be
like
a
really
really
good
place
to
test
it
out-
so
bring
bring
in
your
application,
you
can
put
it
in
the
lab.
So
we
have
like
a
whole
bunch
of
servers
that
are
pretty
much
donated
by
Lenovo
and
a
bunch
of
switches
from
act
on.
They
are
all
running
the
column
software
right
now,
and
this
is
something
that's
open
for
pretty
much.
B
Anybody
like
at-at-at
partner
or
linux
foundation,
remember
to
come
and
participate
so
just
to
put
in
your
Lord
see
how
things
work
out,
see
how
we
interoperate
with
everybody.
So
that's
pretty
much
like
one
of
the
things
I
want
to
call
out
so
we're
putting
two
of
these
together,
so
one
in
a
pack
and
one
a
North,
America
and
there's
also
a
third
one.
C: Actually, since we don't have any audience questions yet, perhaps I'll throw one out for Suresh, and maybe put him on the spot a little bit too. I think one of the interesting things we've found over the last few years here is that, even though we went into it knowing certain things we wanted to accomplish with containers, networking innovations, what have you,
C
You
really
don't
know
what
you
don't
know
right
until
you
get
into
it
and
that's
interesting
that
I
think
we
were
able
to
to
apply
some
of
these
things
in
areas
we
never
would
have.
Perhaps
expected
you
know
up
front
yeah.
You
know
containers
for
high-performance
networking,
a
low
latency
like
like
how
kalume's
working
at
there
is
one
area
that
I
think
she
rushes
on
being
a
actually
I,
think
you're.
Still
the
triple
I
Triple
E
a
chair
for
ipv6,
you
rush.
C
Do
you
see
that
I
know
that's
one
of
the
things
we're
working
on
longer
term?
Do
you
see
that
being
an
issue
right
now
for
people
to
start
looking
at
this
stuff?
How
do
you
see
that
playing
act
that
comes
up
as
a
question
every
so
often
with
the
with
our
operators
right
and
from
surviving
support,
yep.
B
So
when
we
start
off,
like
kubernetes,
had
like
very
poor
ipv6
support
and
then
after
we
started
working
on
it,
kubernetes
got
pretty
good
ipv6
support,
but
the
big
issue
is
that
right,
like
you,
cannot
do
v4
and
v6
okay,
so
you
had
to
pick
one
so
like
kubernetes,
like
candle
v4
only
or
v6.
Only
so
that's
like
one
thing,
that's
kind
of
missing
is
the
dual
stack
support
for
kubernetes
and
that's
coming
pretty
soon
so,
like
I
saw
quite
a
bit
of
like
you
know,
PRS
coming
in
for
the
dual
stack
support.
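For reference, the dual-stack work that landed shortly after this talk (alpha in Kubernetes 1.16) reports one pod address per IP family in `status.podIPs`. A small sketch that lists them, assuming it runs inside a cluster with RBAC permission to list pods:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("default").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// With dual-stack enabled, PodIPs holds one entry per family
		// (IPv4 and IPv6); single-stack pods report a single entry.
		for _, ip := range p.Status.PodIPs {
			fmt.Println(p.Name, ip.IP)
		}
	}
}
```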
C: [inaudible]

B: Right. And it's entirely v6-only inside the fabric; we didn't see a lot of issues there. But for somebody who's going to use it as an end user, as an operator, they will still have v4-only workloads, legacy stuff that's v4-only. And I think the dual-stack support for Kubernetes should be there for sure, probably in the next couple of months.

C: Great.
D: [question inaudible]

B: The network is pretty much standard BGP and OSPF. What we have done is spread the routing tables out through the system, so we have a distributed table, which allows us to scale: we're not replicating the routing tables on all the switches. We have a distributed routing table and an algorithm by which we figure out where the routes are stored, and the control plane itself runs as a container under OpenShift.
B: So, going to Rich's point: we've seen quite a bit of interest. When we started out talking to operators, I'd say about a year and a half ago, before we had a product, everybody was still on the VM track. There wasn't much uptake for containers, and most of the people
B
We
are
talking
to
we're
running
like
no
like
Rho
suite
ten
or
something
like
that
right
like
so,
they
were
like
prettier
than
and
we
decided
like
pretty
early.
So
we
had
like
some
nice
talks
with
like
rich
and,
like
you
know,
like
more
crascell
and
like
lot
of
the
red
people,
and
we
decided
at
that
point
to
go,
do
like
OpenStack
13
and
to
do
the
certification,
so
our
ml
2,
we
decided
to
go
on
the
ml
on
the
OpenStack
13
platform
and
that's
something
you've
seen
quite
a
bit
like
you
know.
B
A
lot
of
people
are
moving
or
from
10,
which
is
the
last
long-term
release
to
13,
which
is
the
latest
one.
So
that's
one
thing,
but
we've
seen
quite
a
bit
of
interest
on
the
container
side,
so
we've
seen
like
a
lot
of
customers,
picking
up
open
shift,
and
so
like
I,
like
you
like,
I,
think
that
they're
the
bail
deal
was
announced
already
in
in
that
summit.
So
I
can
talk
about
it.
B
So
so
that
is
like
a
big
operator
in
Canada
and
they're,
like
God,
going
like
quite
a
bit
of
open,
so
like
they're,
really
interested
in,
like
all
the
ESC
and
I
staff,
and
we
have
like
a
trial
on
going
but
I'm
like,
which
is
pretty
much
testing,
are
a
lot
of
the
things
I'm
talking
about
right
now,
so
they
have
the
educator
sent
a
use
case.
They
have
containers
and
they
want
to
see
our
CNI
improvements
and
our
latency
improvements.
B: They see containers as the way forward, doing microservices and things like that, so that's where we're going next: how we do the service mesh kind of thing, how we do the microservices, that would be the next step. And Vijay, I know you've all done the Skydive stuff; that's the kind of thing where we can actually look at the visibility of the network.
D: [question inaudible]

B: It's different, because Barefoot only supports everything at the hardware level, and we have to do analytics at the vFabric level, so we had to do something different from scratch. We use the capabilities of the Barefoot chip itself, but everything else is ours; it's not based on Barefoot Deep Insight.
D: [inaudible]

A: That would be great. And then, more around where Kubernetes is going in the future, the features and functions that we need to help support in Kubernetes from the telco point of view and from Kaloom's point of view, would be a nice second follow-on talk topic as well. So we are at the top of the hour. Thank you.
We meet on the fourth Friday of each month at 8:00 am Pacific, and we're definitely looking for other speakers. Suresh, you mentioned Bell and Telus and a whole number of other folks, and, myself being in Canada, I recognize both of those names. Maybe we can get some of them to come on and tell us how they're using Kaloom, and some other use cases for the edge as well. So thanks again, everybody, for coming. Richard, we'll let you go to sleep now, and Suresh, somewhere in China with an overnight red-eye ahead of you, safe travels to everybody. Okay.