From YouTube: Intro: Cloud Native Network Functions BoF - Dan Kohn, Cloud Native Computing Foundation
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Intro: Cloud Native Network Functions BoF - Dan Kohn, Cloud Native Computing Foundation
This birds-of-a-feather (BOF) session will discuss how telcos are evolving their Virtual Network Functions (VNFs) into Cloud-native Network Functions (CNFs) running on Kubernetes.
To learn more: https://sched.co/JCLS
A: Hi folks, my name is Dan Kohn. I'm the executive director of the Cloud Native Computing Foundation, which is putting on this conference this week, and I really appreciate all of you attending. I'm thrilled to get to talk for a little bit about one of my favorite topics, which is CNFs, or cloud native network functions. This is a little bit of a directional talk, more than a "you absolutely need to go adopt this today" kind of thing. But I would say that if you look at CNCF, which is the organization that hosts Kubernetes and KubeCon and these other projects, this is going to be one of our major focuses in 2019: telling the story of how Kubernetes makes sense in the telecoms world. So hopefully you've heard this already, but CNCF is part of the Linux Foundation. We just celebrated our 3-year birthday.
We talked about this slide, that CNCF is focused on trying to help these technologies mature, and also, from a signaling standpoint, to tell enterprises which ones they should be adopting. 2018 was the year that Kubernetes crossed the chasm into the early majority. We would definitely say that it has not yet reached the late majority or the laggards, but some of the areas I'm talking about here, of trying to apply Kubernetes to a new venue, to telcos, are different and much less mature.

I think most folks are familiar with the fact that CNCF is part of the Linux Foundation. The Linux Foundation today is far more than just Linux: Let's Encrypt gives out more of the world's security certificates than anyone else; LF Networking has several projects, including ONAP, OPNFV, FD.io and others, that are taking the lead in VNFs and software-defined networking; CNCF covers the cloud native space; Automotive Grade Linux is shipping in all new Toyotas and several other manufacturers' cars; Hyperledger is one of the leading options in blockchain; and Node.js is one of the most popular application frameworks on the web.
So the router, the switches, the firewall: it kind of looked like this, but lots of racks of it, with Cat 6 (it might have even been Cat 5 at the time) cabling connecting it all together. And over the last decade there's been this major trend in telecoms of saying that each of those boxes moved from being a separate physical component to being a virtual machine, called a virtual network function, and those generally run on top of VMware or OpenStack. And ideally it looks like this: you have a very clean data center.
All the servers can be identical, and you can move the workloads around a little bit based on demand. What we're talking about today is what we think is going to be the next big trend in how telcos use software, which is that they can take that networking code and, instead of having it packaged in virtual machines on OpenStack, can package it as containers, run it on Kubernetes, and can then deploy it, as we've been talking about the whole conference, on public, private or hybrid clouds. And it will look like this, which is exactly the other picture except that the software on it is different.

So, I sort of feel like in enterprise computing and software development the pull request is the coin of the realm, and the networking people really like block diagrams. So yes, this is a block diagram to talk about that evolution.
It's something we put together in partnership with another Linux Foundation project, ONAP, which came out of AT&T and China Mobile and has gotten a lot of adoption. Amsterdam, the original version of it, ran on OpenStack or VMware on bare metal, and on Azure or Rackspace in the public cloud; the more recent version of it, Casablanca, which just shipped, can now run on top of Kubernetes, and so you can use that on any public cloud or on bare metal.
But then your virtual network functions still run on top of OpenStack on bare metal, and now you can, at least in principle, support cloud native network functions, CNFs, running on top of Kubernetes. The story that we're talking about is the evolution that's possible, and that we think is going to occur: the vast majority of that networking code can evolve to become CNFs, to become containers that are managed just like any other Kubernetes workload.
Kubernetes becomes that universal substrate sitting on top of bare metal or any public cloud, exactly the way it represents that for enterprise computing today. And I will just give a quick shout-out to a project, KubeVirt, being led particularly by Red Hat right now, but which I think has a lot of potential for certain workloads that are not easy to containerize, or that are more challenging, in that it allows you to run certain VMs on top of Kubernetes. But the idea is that all those pieces can move to be on Kubernetes, okay?
But that's actually the smallest reason. By far the biggest reason is number two, which is development philosophy, and that's exactly the same story as for enterprise computing, where all of these companies have found that, by moving from a monolith to microservices, they're able to move faster; the speaker this morning from Airbnb was a great example of that. And then the third one is resiliency: this actually should be much more reliable than things have been in the past. Okay, so that's the basic idea.
Now the good news is that it is totally feasible to get the performance you need just using the permissible interfaces of Linux and of Kubernetes, but it's meaningful work to do that. So that needs to occur, and it needs to occur essentially separately for each VNF that's out there. I'm going to hand this off to my colleague Taylor in just a second, but I do have some pretty useful links in here, and this whole presentation will be going up and will be linked from Sched, so take a look at it.
So there are still a lot of architectural approaches being debated. Multus is a technology out of Intel, with this idea of having multi-interface pods; there's also a separate approach called Network Service Mesh, talking about different ways of essentially going around the Kubernetes networking and getting higher performance. Another area I do want to address, one in particular, is that there's been, I guess, a suggestion that you need some kind of virtual machines in order to get adequate separation. The two concerns that have come up, and I'd say they specifically came out of the Kata Containers community, are: oh, you need to use something like Kata in order to avoid the noisy neighbor problem, and for security. And I would really push back aggressively on both of those things.
If you just look at Kubernetes in production with thousands of enterprises today, it provides adequate protection from the noisy neighbor problem, for memory, for CPU, for network; the whole way that containers work in the Linux operating system is effective. And on security, when you look at the telco use case, the key question is: can the code be trusted or not? Was it written by that telco itself, or did they pay a vendor a lot of money for that virtual network function, with a contractual relationship?
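As a concrete illustration of the resource isolation referred to here: in Kubernetes, per-container CPU and memory requests and limits, enforced through Linux cgroups, are the standard guard against noisy neighbors. A minimal sketch, with hypothetical names, image, and sizes:

```python
# Hypothetical pod spec showing CPU/memory requests and limits; Kubernetes
# enforces these via Linux cgroups, which is the noisy-neighbor protection
# discussed above. All names, images, and sizes are placeholders.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-cnf"},
    "spec": {
        "containers": [{
            "name": "dataplane",
            "image": "example.com/vpp-cnf:latest",
            "resources": {
                "requests": {"cpu": "2", "memory": "4Gi"},
                "limits": {"cpu": "2", "memory": "4Gi"},  # hard ceiling
            },
        }],
    },
}
```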
So the heads-up that I want to give, before I hand it off to Taylor, is that CNCF is actively working on a project — we need to come up with a better name for it, but something like the CNF Testbed. The idea is to take identical networking code (we're using code from ONAP, because it's open source and a sister project), package it in a virtual machine and run it on OpenStack, and take that same code, package it as a container and run it on top of Kubernetes, both running on top of the same bare metal hardware.
Just before I hand it off, I really do want to talk about the discipline, from a continuous integration standpoint, of building up an entire system, running the performance test, and then tearing it down again. That seems like the absolute standard thing in the Kubernetes world, and in most of the software development world that I'm used to, and yet it seems radically just not used in the networking world, where what we keep seeing is that companies set up their own testbed, their own test lab, and then they say:
"Oh, if you have some new code to run, you can send it to us and we'll run it in our lab." But when we go back and ask, okay, exactly what configuration, exactly what environment do you have in that lab, because we're trying to replicate it — it's often not even available in source control, or in a way that we can exactly replicate, or it's referencing all these packages and systems that have gone out of date. And that's caused us a huge amount of hassle and delay and cost in creating this.
So our aspiration is to create an open-source testbed that anyone can replicate, and I want to call out the bare metal hosting company Packet, who has been a fantastic partner for us on this, with these bare metal servers. In principle, at least, we want it to be so that all you need is an API key.
Then you can hit run and deploy both of these environments, and then either run the network code that we have as a default or, ideally, get to a spot where you could run your own networking code in a VNF or CNF and see the difference in performance between them. So we're actively underway here; this is out of date.
B: So we mentioned Packet, and also the FD.io CSIT lab, which is a Linux Foundation lab; we want to try to compare the code, how it works and runs, in different environments. We're also looking at commodity hardware: in the NFV world you are going to have hardware that may be very specialized, which raises the barrier for other people to rebuild this and test.
We think it's important to start with commodity hardware, and then you can build from there. And then the software: everything that we're doing is 100% open source, trying to follow standard practices and best practices as well. So we're testing the different platforms — Kubernetes, Helm, OpenStack — all vanilla versions. You may have very specialized versions of OpenStack, and you may have people that are using KVM with the network functions; we're starting with the vanilla version, so anyone can download it and have access to it.

For the data plane networking we use VPP, also an FD.io project. And then, as Dan mentioned, apples-to-apples tests: we try to get as close as possible — the different platforms aren't going to allow it to be exactly apples to apples, so we make sure to note the differences — and then we try to use optimized test cases, so we say: here's how you would do it for KVM, here's how you do it for Kubernetes or OpenStack. And then a community; so we talked a little bit about the contributors.
Here are some of the different groups that have been involved, multiple vendors: Packet's been directly involved; we've also had Intel, since we care about network cards, and there have been engineers on that side; and Mellanox, whose network cards are available at Packet and which we've been testing. We're working towards very visible decision-making and process, inspired by the CNCF charter.
So once we have that neutral test environment, we can start building up the infrastructure on it, and the platforms like OpenStack and Kubernetes: how are those going to be set up so that anyone can reproduce them, and then the test cases themselves. So at Packet, here's an example machine: it's based on Dell hardware and comes with those Mellanox cards by default.
We also tested with some Intel cards that are available at CSIT, the Linux Foundation lab, and we can provision all of that with Terraform, which lets us offer something that anybody in the community may already be familiar with; you can use it on other platforms.
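That provisioning step can be driven from a single script. A minimal sketch, assuming a Terraform configuration for the bare metal servers lives in a hypothetical ./testbed directory and that PACKET_AUTH_TOKEN is already exported for the provider:

```python
import subprocess

# Initialize and apply the (hypothetical) Terraform config describing the
# bare metal testbed machines. Assumes PACKET_AUTH_TOKEN is exported so
# the Packet provider can authenticate.
for cmd in (["terraform", "init"],
            ["terraform", "apply", "-auto-approve"]):
    subprocess.run(cmd, cwd="./testbed", check=True)
```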
And now we get to some scarier stuff, maybe, with Kubernetes: the Layer 2 configuration, dealing with all of the things that Kubernetes makes easier on the app side but that we have to care about for network functions. So we're looking at: how is this going to work?
What are the capabilities of the switches and everything there? So we're trying to work that out at Packet — they provide an API, so we're trying to look at that and make it configurable — and then the host networking itself. This is some of what you need in addition to what you'd have in Kubernetes. And for the apples to apples, what can we use? What software are we going to be running on those worker machines, or on the compute nodes for OpenStack?
So for OpenStack we are using the OpenStack Chef cookbooks for deployments, which are maintained by the community, so if we're using open-source versions, people are going to know what we're doing there. Everything's on bare metal again, all the services, and then we're using VPP for the host networking, the v-switch. This is what connects to the network functions, and the network functions to the actual hardware, the NICs themselves; and then the high-speed VNFs are running VPP.
For Kubernetes, this is using a mixture of Terraform to provision machines and then cloud-init to set up vanilla Kubernetes, and then Ansible again is used to set up some things that are outside of that, including the Layer 2 networking, because we provide additional interfaces. By default you get your Layer 3 interface on the Kubernetes container, so we add additional interfaces with Ansible. This is prior to Network Service Mesh being fully available; plugging that in is kind of the goal we're working towards.
Once we have those, then we can look at what's called, in the NFV world, the service topology — how these different network functions are laid out on the nodes — and then the density on the machines themselves, because there can be different types. Before we get into that, here's the high-level view: whether it's KVM or OpenStack, you have your VNFs running on a machine, and you're going to have some type of v-switch. You may be familiar, in the OpenStack world, with OVS or Linux bridge; we replace that, for the high-speed case, with VPP.
The way it works is very similar, though. In Kubernetes we also plug in a v-switch that sits side by side with the Linux networking, so you can still use that for your Layer 3, and then, when you need the high performance, you can use the v-switch. For the test comparison that we're looking at here, we're picking one network function. As Dan said, you're going to have to look at migrating each one: how do they work, what is their functionality? A VM may have multiple services, and you may want to break that down into smaller microservices.
Once you have a network function, then you can create what's called a service function chain: a set of services that are linked together to provide some larger functionality. You could have firewalls and other things providing these; for us, we're looking at the router. Then you can talk about density: how many of these chains and network functions are you running?
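To make the chain idea concrete, here is a toy model — the functions are hypothetical stand-ins for the VPP-based network functions the testbed actually uses — where traffic is handed to each network function in order and any function can drop it:

```python
# Toy service function chain: each network function is a callable that
# transforms a packet (a dict here) or drops it by returning None.
def firewall(pkt):
    # hypothetical rule: drop traffic from a blocked prefix
    return None if pkt["src"].startswith("10.") else pkt

def router(pkt):
    pkt["next_hop"] = "192.0.2.1"  # placeholder next-hop address
    return pkt

def traverse(pkt, chain):
    for nf in chain:
        pkt = nf(pkt)
        if pkt is None:
            return None  # dropped mid-chain
    return pkt

print(traverse({"src": "198.51.100.7"}, [firewall, router]))
```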
It's also, if you're familiar with hops on a network connecting systems, similar to app servers: if you had, say, proxies in the path, you'd know that can affect the performance. So it's very, very important, even on a single physical node. For the apples to apples we're looking at a snake topology with multiple chains. So what does this mean? It's going to be the same thing for KVM, for OpenStack, for Kubernetes.
The traffic goes from the v-switch and then loops through each of the network functions for the whole chain, whatever the length, until it's ready to leave the node, and it's the same thing if you have multiple chains. So if we look at a test, it's something like this: you have a traffic generator; it sends the traffic, which goes through the switch into the machine, all the way through all the chains, and then it loops back around and we collect the results.
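In the same toy spirit as the chain model above, a sketch of that measurement loop — generate packets, push them through a chain, count what comes out, report packets per second. Purely illustrative; the real tests use dedicated traffic generators, not Python:

```python
import time

def nf_noop(pkt):
    # hypothetical pass-through network function
    return pkt

def run(chain, n=200_000):
    t0 = time.perf_counter()
    delivered = 0
    for i in range(n):
        pkt = {"seq": i}
        for nf in chain:  # the "snake": every NF in order
            pkt = nf(pkt)
            if pkt is None:
                break
        delivered += pkt is not None
    return delivered / (time.perf_counter() - t0)  # packets per second

print(f"{run([nf_noop] * 3) / 1e6:.3f} Mpps (toy)")
```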
So here are some results in a side-by-side comparison; this one is actually KVM and Kubernetes. On the right, with the CNFs, we have Kubernetes, and on the left we have KVM for the VNFs, and this is in millions of packets per second. In the two-chain, three-network-function case — that's three deep, where it had to loop through — we saw about half a million packets per second for the VNFs, and then we saw nearly an 8x increase for the CNFs. This is looping through, and again, this is the same v-switch, running VPP for both of them.
And if we go over to what some people may be more familiar with, where the network functions are not as deep — three chains with one network function — we saw 4.5 million for the VNFs, and we still saw nearly a 2x increase, 8.9 million, for the CNFs. So whether you're looking at depth or not, we're seeing an increase with CNFs.
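A quick check of those ratios as stated (the figures are approximations read off the talk's charts, so treat them as illustrative):

```python
# Sanity-checking the quoted snake-topology results (approximate figures):
# 2 chains x 3 NFs: ~0.5 Mpps (VNF) vs "nearly 8x" that for the CNF case;
# 3 chains x 1 NF:  ~4.5 Mpps (VNF) vs ~8.9 Mpps (CNF), "nearly 2x".
vnf_deep, cnf_deep = 0.5e6, 0.5e6 * 8
vnf_wide, cnf_wide = 4.5e6, 8.9e6
print(f"deep: {cnf_deep / vnf_deep:.2f}x")  # 8.00x
print(f"wide: {cnf_wide / vnf_wide:.2f}x")  # 1.98x
```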
Let's look at another case; this one is about the connections between the network functions.
We just looked at the snake case, where they all loop together. So what is the optimal layout for VNFs versus CNFs? For KVM and OpenStack, that's the snake topology. For Kubernetes, or containers — on Docker you can do something called a pipeline, where you directly connect the network functions to each other: the traffic goes from the v-switch into the first network function and then directly from network function to network function, for whatever the length, before it leaves. Same thing for multiple chains.
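One way to see why the pipeline can help — an interpretation, not a measurement from the talk — is to count v-switch crossings: the snake returns to the v-switch after every network function, while the pipeline only crosses it on entry and exit.

```python
# Toy hop model: v-switch crossings for a chain of n network functions.
def vswitch_crossings(n: int, layout: str) -> int:
    return n + 1 if layout == "snake" else 2  # pipeline: enter and exit only

for n in (1, 3, 5):
    print(n, vswitch_crossings(n, "snake"), vswitch_crossings(n, "pipeline"))
```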
It looks a little like this, so it's very similar. Here's another side-by-side comparison: we have the VNFs, that's KVM, on the left, and then we have Kubernetes on the right. So: half a million packets per second at three deep, and then we had, I think, around 17x — a little bit more than 17x — for the CNFs, so a pretty incredible increase with depth. Even if we look at the less-deep case, we can see the increase across the board, and we have some other tests that go further.
You will see that with the pipeline it doesn't matter how many you add; it's still very high. So what this means is, if you had microservices and you wanted to share resources on a machine, you could keep adding those and spread them out. This helps with failure handling, with resiliency, and other things like that as well.
So if you want to verify the results: we have everything up on GitHub, and it's open source, and, as Dan was saying, we would like for other people to be able to reproduce it. You can reproduce the KVM, Docker and Kubernetes cases right now; OpenStack is very close — that one's a little bit hairier, as he was saying — but we've really been pushing hard on that and we're very close.
We're on the last part for the VPP with Neutron, trying to use stuff that people are going to understand. Get a Packet account; if you have an API key, you can download it and follow the steps. If you have any problems, please open an issue, because we're trying to make sure that someone can just walk through and recreate these. There are several things here we're going to keep working on with folks: we're working towards adding Network Service Mesh support, working with them directly.
We have that kind of pluggable right now, where we're manually creating all that Layer 2 configuration with Ansible; Network Service Mesh would hopefully be able to plug in right there at some point. And we would love more test cases. So if you know of a way to make VNFs look great, we'd like to know what that is: tell us the details so that we can create those, contrast the results, and see what it means.
Maybe it's because it's very difficult — we'd love to know that too. And supporting other environments: look at Amazon bare metal; we've got to figure out which ones would support these types of test cases, but we'll be looking at that. I think some of the events were mentioned; here we are, and here's how you can connect: besides GitHub, there is a CNF channel on the cloud native Slack. And that's it. I had a Q&A — do you want me to do that, or do you want to step back in, Dan?
C: Well, thank you guys for working on this — I have been trying to solve it myself. I'm actually from a telco provider; we have been working with Intel, developing an open-source 4G/5G network where we have a CUPS architecture, where control and user plane are separated. So one of the challenges we have is that, essentially, to move to Kubernetes we need Layer 3 — specifically, we need multiple interfaces to separate the SGi and S1-U interfaces.
One of the issues I have with VPP is that it uses dedicated cores and DPDK, so it does not expose, for instance, the SR-IOV poll mode drivers. So I'm just trying to understand, for a simple flow rule set, how is that better than OVS-DPDK? And also, what is your vision for essentially creating L3, specifically multiple network interfaces, and for fixing the DPDK problem? Right now what we are facing is that you have to have a specific OVS version.
A: Before Taylor tries to answer that, I do just want to say we're really eager to get your engagement. We are looking to start two calls a month, and we'd love to have a community of folks discussing this — so go ahead, but I don't think we're going to get a comprehensive answer for that here.
B: Yeah, no worries. We have several people here: Maciek, the lead on the FD.io CSIT project, and we have Ed Warnicke from Network Service Mesh — I may bring them in. We've definitely done a lot of tests with different packet sizes, including IMIX. All of the results that we've tested in this particular project are pushed up to GitHub, and the CSIT project and VPP also publish tons of results, so that'd be helpful. Get involved, as Dan is saying.
We've handled a lot of how to configure the machines, partly in Ansible, part of it outside. Network Service Mesh is trying to handle Layer 2 as a service within Kubernetes; you still have the infrastructure side that's outside of that. In fact, we want to talk more about that, because we're trying to make that part easy too. I don't have a specific answer for you, but we're trying to solve those different problems. Did you want to say something quickly?
D: I'll comment on the poll mode point, and I can maybe let Ed comment about the multiple interfaces into Kubernetes that NSM is working on. So VPP today supports poll mode — as was rightly said, using DPDK. VPP also supports a number of native drivers, but we prefer to use DPDK, of course, which gives us hardware abstraction. It also supports interrupt mode, and it now supports something in between, called adaptive mode, where depending on the load it switches between the two, in addition to DPDK.
E: I've got one for you and one for you too, and it'll be quick. For you: what are you doing around standards for CNFs? Because if I see one more block diagram from ETSI, I'm going to jump off a building.
And you mentioned the packet core — I work with a packet core network, with virtual CMTS, and every vendor is doing it drastically differently. And then for you: when are you going to do real-world use cases? I know direct memory copies are fast, and I know that VPP can process packets, but there's no complex work being done on those packets — there's no encryption and decryption, there's no encap/decap. When do I get those numbers?
B: ...the vCPE use case, and trying to move that to work with cloud native functions — it seems difficult to say how one part works unless you make it composable and build up, and that's why we're trying to do that. We'd like more complex use cases, especially if you can define one. So if you all have something where you can say, "here is something that's real world and we'd like to see the performance if we move it" — if you can define it, then we can build it, and that's what we've been trying to do.
F: A little bit of a follow-up to that question, and it's more helping to answer it as well. Between the FD.io and CSIT teams there's also some benchmarking and performance work going on in OPNFV — I think, specifically, NFVbench and CSIT have been interacting. There's also performance work where one of the things being built up around it is using traffic generators like Ixia to drive more real-world-style traffic through things.
So there are some more layers of benchmarking that we've been building up in the VNF world that need to transition. And I know that some of you work on the VNF side, or on NFVbench — sorry. So I think that, as we start taking pieces from all these projects that I'm working on — you know, the virtualization 1.0 — with this project we can start building up that library of more real-world-type things.
We've got things like a sample VNF project that tries to mimic the sort of performance capabilities of real-world use cases as well. So I think there's a lot of cross-pollination that's started — but basically not enough, and there's a lot more opportunity there. I'm kind of spread amongst the several projects, so...
A: I think we have to stop there, because there are people coming after us; we'll be outside to chat. I just want to shout out both Mobile World Congress and also the Open Networking Summit in San Jose in April: we're going to have a whole track on these topics and would love to engage with folks more there.