From YouTube: Using VPP as Envoy's Network Stack - Florin Coras
Description
Vector Packet Processing (VPP), part of fd.io, is a high-performance, layer 2-7, scalable, multi-platform user-space networking stack. Typical VPP use cases include, amongst others, deployments as a vSwitch/router, firewall, load balancer, and TCP proxy. This talk will discuss how some of the recent socket layer API changes can be leveraged to cleanly integrate Envoy with VPP's socket layer, the VPP Comms Library (VCL), and some of the potential benefits thereof.
Hi everyone, my name is Florin Coras. I'm a Cisco technical lead and an fd.io VPP project committer, and in today's talk I'd like to give you a high-level overview of the benefits of using VPP as Envoy's network stack. My background is in networking; in particular, I'm one of the co-creators of VPP's host stack, so I typically talk about transport protocols and socket layer implementations.
However, today I'll mainly focus on how Envoy can leverage user-space networking and some of the benefits thereof. Now, before we dive in, and in the interest of those of you who are not familiar with VPP, a very quick introduction.
VPP supports L2 switching and bridging, IP forwarding, and virtual routing and forwarding (VRF), so it has the right constructs for IP-layer multi-tenancy. But in addition to these basic L2 and L3 functions, it also supports a multitude of additional features; just to name a few: a very efficient IPsec implementation, ACLs, NAT, MPLS, segment routing, and several flavors of tunneling protocols, things like VXLAN and LISP.

On top of the networking stack, VPP also implements a custom host stack, built and optimized in a similar fashion, as one might expect. It supports commonly used transports like TCP and UDP, but also TLS and QUIC. The session, or socket, layer provides a number of features, but perhaps the most important in the context of this talk is the shared memory infra that can be used to exchange I/O and control events with external applications, using per-worker message queues. And finally, to simplify interoperability with applications, VPP provides a comms library, or VCL, which exposes POSIX-like APIs.
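To give a feel for what "POSIX-like" means here, the following is a minimal sketch of a blocking TCP client written directly against VCL's vppcom API. The call names follow vcl/vppcom.h, but treat this as illustrative only: exact signatures and the vppcom_endpt_t layout vary across VPP versions.

```cpp
// Minimal VCL TCP client sketch; assumes vcl/vppcom.h from a recent VPP.
// The vppcom_endpt_t field names are from memory and may differ by version.
#include <arpa/inet.h>
#include <cstdint>
#include <cstdio>
#include <vcl/vppcom.h>

int main() {
  // Attach this process to VPP's session layer.
  if (vppcom_app_create("vcl-demo") != VPPCOM_OK)
    return 1;

  // Sessions play the role of sockets; 0 requests blocking mode.
  int session = vppcom_session_create(VPPCOM_PROTO_TCP, 0 /* blocking */);

  // Endpoint setup, analogous to filling in a sockaddr_in.
  struct in_addr ip4;
  inet_pton(AF_INET, "10.0.0.1", &ip4);
  vppcom_endpt_t server = {};
  server.is_ip4 = 1;
  server.ip = reinterpret_cast<uint8_t*>(&ip4);
  server.port = htons(80);

  // connect/write/read/close mirror their POSIX counterparts.
  if (vppcom_session_connect(session, &server) == VPPCOM_OK) {
    char req[] = "GET / HTTP/1.0\r\n\r\n";
    vppcom_session_write(session, req, sizeof(req) - 1);
    char buf[4096];
    int n = vppcom_session_read(session, buf, sizeof(buf));
    if (n > 0)
      fwrite(buf, 1, n, stdout);
  }
  vppcom_session_close(session);
  vppcom_app_destroy();
  return 0;
}
```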
So I guess that by this point some of you may be asking the inescapable question: why yet another host stack? And you'd be right to ask that, because from a functional perspective, Linux is obviously the one stack to use. However, because Linux's networking stack was designed around a single-pass, run-to-completion model, per-packet performance is limited. This is especially noticeable when hardware acceleration cannot be leveraged. VPP, as its name suggests, instead processes packets in vectors, which amortizes those per-packet costs.
So how exactly does Envoy integrate with VCL, and what sort of changes were needed? Well, rather intuitively, the first step was to make sure that Envoy components do not make any assumptions with respect to the underlying socket layer and consequently always use generic socket interfaces, such that they can potentially interoperate with custom socket layer implementations once they're available.
Obviously, this is not exactly glamorous work, as the changes are not so much features as they are focused on API refactoring. Still, out of the set of changes that have gone in, perhaps the most notable is that, as a core rule, we now avoid using raw file descriptors anywhere in the code. IoHandles still expose the fds, but last time I checked, we had managed to clean them up to the point where they were only used in, I believe, a couple of places.
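The shape of that abstraction looks roughly like the sketch below. To be clear, this is a simplified stand-in, not Envoy's actual interface (the real one is Network::IoHandle and is considerably richer); the point is simply that components program against readv/writev-style virtual methods, and only the default, kernel-backed implementation ever touches a file descriptor.

```cpp
#include <sys/uio.h>  // iovec
#include <unistd.h>   // readv/writev/close

// Simplified stand-in for Envoy's Network::IoHandle: components hold one of
// these instead of a raw fd, so a user-space stack can supply its own flavor.
class IoHandle {
public:
  virtual ~IoHandle() = default;
  virtual ssize_t readv(const iovec* slices, int num_slices) = 0;
  virtual ssize_t writev(const iovec* slices, int num_slices) = 0;
  virtual void close() = 0;
};

// Default implementation, backed by a kernel socket fd.
class IoSocketHandle : public IoHandle {
public:
  explicit IoSocketHandle(int fd) : fd_(fd) {}
  ssize_t readv(const iovec* s, int n) override { return ::readv(fd_, s, n); }
  ssize_t writev(const iovec* s, int n) override { return ::writev(fd_, s, n); }
  void close() override { ::close(fd_); fd_ = -1; }

private:
  int fd_;
};

// A VCL-backed implementation would instead call vppcom_session_read/write
// on a VCL session handle; callers cannot tell the difference.
```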
Another interesting consequence of the first point is that file event creation is now delegated to IoHandle implementations, so, as a desired side effect, the socket layer that provides the IoHandle is now the one that decides how events are created; or, in other words, socket events are no longer tightly coupled with libevent.
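Here is a sketch of that delegation, reusing the simplified IoHandle idea from above; again, the names are illustrative (Envoy's real API is Network::IoHandle::createFileEvent()), not the actual interface.

```cpp
#include <event2/event.h>
#include <functional>
#include <memory>

// Sketch: the IoHandle, not the dispatcher, decides how readiness events are
// produced. A kernel-backed handle maps them onto libevent; a VCL-backed one
// could arm a VCL epoll session instead, with libevent none the wiser.
using FileReadyCb = std::function<void(short events)>;

struct FileEvent {
  virtual ~FileEvent() = default;
};

struct IoHandle {
  virtual ~IoHandle() = default;
  virtual std::unique_ptr<FileEvent> createFileEvent(event_base* base,
                                                     FileReadyCb cb) = 0;
};

// Kernel-socket flavor: file events map straight onto libevent.
struct LibeventFileEvent : FileEvent {
  LibeventFileEvent(event_base* base, int fd, FileReadyCb cb)
      : cb_(std::move(cb)) {
    ev_ = event_new(base, fd, EV_READ | EV_PERSIST,
                    [](evutil_socket_t, short what, void* arg) {
                      static_cast<LibeventFileEvent*>(arg)->cb_(what);
                    },
                    this);
    event_add(ev_, nullptr);
  }
  ~LibeventFileEvent() override { event_free(ev_); }
  FileReadyCb cb_;
  event* ev_;
};

struct IoSocketHandle : IoHandle {
  explicit IoSocketHandle(int fd) : fd_(fd) {}
  std::unique_ptr<FileEvent> createFileEvent(event_base* base,
                                             FileReadyCb cb) override {
    return std::make_unique<LibeventFileEvent>(base, fd_, std::move(cb));
  }
  int fd_;
};
```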
And finally, an interesting scenario that might serve as an example going forward was TLS, which, mainly for convenience reasons, relied on BIOs that needed explicit access to the fd.
It eventually turned out that writing a custom BIO that uses the IoHandle, as opposed to the fd, is relatively straightforward, so we actually switched to that.
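For reference, the skeleton of such a BIO, using the standard BIO_meth_* hooks that both OpenSSL and BoringSSL provide, could look like this. The IoHandle is the simplified one from the earlier sketches, and the io_handle_read/io_handle_write helpers are assumed, so this is a sketch of the technique rather than Envoy's actual code.

```cpp
#include <openssl/bio.h>
#include <sys/types.h>

class IoHandle;  // the simplified handle from the earlier sketches

// Assumed helpers that adapt the IoHandle's readv/writev to flat buffers.
ssize_t io_handle_read(IoHandle* io, char* out, int outl);
ssize_t io_handle_write(IoHandle* io, const char* in, int inl);

// BIO callbacks that delegate to the IoHandle stashed in the BIO's data slot.
static int io_handle_bio_read(BIO* b, char* out, int outl) {
  auto* io = static_cast<IoHandle*>(BIO_get_data(b));
  BIO_clear_retry_flags(b);
  ssize_t n = io_handle_read(io, out, outl);
  if (n < 0) BIO_set_retry_read(b);  // treat as would-block so TLS retries
  return static_cast<int>(n);
}

static int io_handle_bio_write(BIO* b, const char* in, int inl) {
  auto* io = static_cast<IoHandle*>(BIO_get_data(b));
  BIO_clear_retry_flags(b);
  ssize_t n = io_handle_write(io, in, inl);
  if (n < 0) BIO_set_retry_write(b);
  return static_cast<int>(n);
}

static long io_handle_bio_ctrl(BIO*, int cmd, long, void*) {
  return cmd == BIO_CTRL_FLUSH ? 1 : 0;  // TLS needs flush to "succeed"
}

// Build the BIO method once and wrap an IoHandle in a BIO instance.
BIO* bio_new_io_handle(IoHandle* io) {
  static BIO_METHOD* method = [] {
    BIO_METHOD* m = BIO_meth_new(BIO_TYPE_SOCKET, "io_handle");
    BIO_meth_set_read(m, io_handle_bio_read);
    BIO_meth_set_write(m, io_handle_bio_write);
    BIO_meth_set_ctrl(m, io_handle_bio_ctrl);
    return m;
  }();
  BIO* b = BIO_new(method);
  BIO_set_data(b, io);  // no fd anywhere: the handle is the transport
  BIO_set_init(b, 1);
  return b;
}
```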
Now, all of these changes are enough to allow the implementation of a VCL-specific socket interface, but they still leave one more problem to be solved: namely, both libevent and VCL want to handle the async polling and the dispatching of the I/O handles, but only one of them can be the main dispatcher.
So the solution to this problem is to leave control to libevent and to register the eventfd associated with a VCL worker's message queue with libevent. If you recall, the MQs are used by VPP to convey I/O and control events to VCL, and the eventfd is used to signal queue transitions from the empty to the non-empty state.
This ultimately means that VPP-generated events force libevent to hand over control to the VCL interface, which, for each Envoy worker, uses a locally maintained epoll fd to poll events from VCL and subsequently dispatch them.
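In terms of plumbing, that arrangement boils down to something like the following sketch. The vcl_worker_mq_eventfd() accessor is made up for illustration (VCL manages this fd internally), while vppcom_epoll_create/vppcom_epoll_wait are real vppcom calls, though their exact signatures may differ by version.

```cpp
#include <cstdint>
#include <event2/event.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <vcl/vppcom.h>

// Hypothetical accessor for the eventfd that VPP signals when this worker's
// message queue goes from empty to non-empty; the name is invented.
extern int vcl_worker_mq_eventfd();

static int g_vep;  // VCL epoll session handle for this worker

// libevent callback: VPP signaled the mq eventfd, so hand control to VCL,
// drain whatever sessions are ready, and dispatch them.
static void on_mq_event(evutil_socket_t fd, short, void*) {
  uint64_t v;
  (void)read(fd, &v, sizeof(v));  // rearm the eventfd
  struct epoll_event events[64];
  int n = vppcom_epoll_wait(g_vep, events, 64, 0 /* don't block */);
  for (int i = 0; i < n; i++) {
    // events[i] identifies a ready VCL session; dispatch its read/write here.
  }
}

int main() {
  vppcom_app_create("envoy-worker-sketch");
  g_vep = vppcom_epoll_create();

  // libevent stays the main dispatcher; the mq eventfd is just another fd.
  event_base* base = event_base_new();
  event* mq_ev = event_new(base, vcl_worker_mq_eventfd(),
                           EV_READ | EV_PERSIST, on_mq_event, nullptr);
  event_add(mq_ev, nullptr);
  event_base_dispatch(base);
  return 0;
}
```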
Now, these are just the stepping stones for the Envoy-VCL integration, and as first next steps the plan is to further optimize performance. The lowest hanging fruit here are the read operations, as VCL could pass pointers to the socket data, in the shape of buffer fragments, instead of doing a full copy. The groundwork for this is already done; what's left is the actual integration.
And speaking about performance: to evaluate the potential benefits of this integration, I built the following topology, wherein wrk connects to a VCL-enabled Envoy, which performs HTTP routing to a back-end nginx. Now, this type of scenario might not be relevant in practice, and in fact I'd be delighted to learn if that's the case, and also what types of scenarios would be interesting for those who actively deploy Envoy. Nonetheless, for the purpose of this experiment it is ideal, because it gives us an idea of how many VPP workers are needed to load Envoy, and an upper bound on performance.
However, after a certain point, about four to five workers, performance does not scale linearly, and it behaves somewhat worse for larger payloads, albeit it should be noted that TSO for VPP was not enabled in this scenario. So the results are really encouraging, but there are still some things that need further investigation for a better understanding.
So with that, should you be interested in further exploring the Envoy-VPP integration, please give the code a try. For more in-depth conversations, you should be able to grab me on one of Envoy's Slack channels. And before I conclude, I'd like to quickly say thank you to Matt and the whole community, Lizan, Antonio, Dragan, just to name a few, for the constant support and openness towards the refactoring effort. And with that, thank you very much for your attention, and I look forward to your questions.