From YouTube: CNCF Networking WG Meeting 2017-11-07
D: That tends to be the case. It hasn't been done much anymore, except when somebody just does it as experimental or as an individual contributor, or whatever else. You don't want to be on the standards track. I mean, the days of proving interoperability are unfortunately long gone; it would have made things so much easier to just put everyone in the sin bin, but there are too many vendors.
B: Other interesting things are starting to come up in the IETF. The hackathons there are getting to be quite fun lately, so a lot of what we've been seeing comes out of the hackathon at every IETF, and at every IETF we wind up with some protocol or other that gets implemented as a plugin to VPP. ILA won Best in Show at one point, for example, and so you wind up with really cool stuff that actually makes it into code and into the real world pretty fast.
D: There is a service provider, which shall remain nameless, from when I was at Alcatel in New Zealand. What they actually decided to do was number their back-end ops network by area code, by regional dialling code. So they used net three for stuff in Auckland, net four for stuff in Wellington, net seven for stuff in Christchurch, and so on, for example.
B: And some of the things that are possible with it — there's a lot of stuff that can be done that you literally can't do any other way. So, really quickly: FD.io. Oh, we pronounce it "Fido" because it's cutesy; it gives us an excuse to do things like — I'm told we actually have approval now to have a real dog at our booth. So come by and see the puppy.
B: Well, basically it's a project of the Linux Foundation. It's open source, it's multi-party, meaning there are many people involved, and it's multi-project. All of this is the familiar structure people are used to, much the same as other such projects. It does a software data plane, basically: high throughput, low latency, lots of features, very resource efficient, and it works equally well on bare metal, VMs or containers, because it's a pure user-space data plane.
B: So if you need to stick it in a container, sure, why not. And it's multi-platform: it runs on Intel, Arm and PowerPC. At some point in the historic past it ran on MIPS, but I don't know that anyone actually cares about MIPS anymore. And then we have a little bit of a rubric that we use to discuss the scope of things that live within FD.io, so we usually talk about three layers.
B: The first is device IO, which is how you get a packet from a NIC or vNIC to a thread on a core. You could think of something like DPDK, which does that brilliantly, and we actually use DPDK for that purpose a lot of the time when we're dealing with physical hardware. Then packet processing is the next layer: how do you classify, transform, prioritize, forward and possibly terminate packets? And then, of course, now that you've got this spiffy packet processing going on, how do you manage it?
B: That was the slide I was just talking to — thank you for bringing it up; it would have made the rest much harder to follow without it on screen. The graphics really do help in places; there are some fun animations of how the technology works. So, moving right along: can you all see FD.io in the overall stack here? Yes? Yep, right. So you've got things like Kubernetes that handle orchestration, and these sorts of data plane services sitting underneath.
B: There's pretty broad membership in FD.io: we've got a bunch of service providers who are involved, and there are network owners, chip vendors and various integrators as well. There's a lot of interest, because this solves a lot of problems for people doing data planes in software. And then we've got even broader contribution coming in, so code is coming from a lot of different directions, and if you look at the code activity, that's one of the things you'll notice from a commit point of view.
B: I mentioned it's multi-project, so we end up having a lot of different projects that go into it. I've sort of classified them here by what's going on. Most of the projects in packet processing tend to be either VPP, which is the core technology, or things that are providing plugins or libraries to control it. Of particular interest to this crew is probably going to be the GoVPP project, which provides Go bindings to VPP.
B: Honeycomb is one particular data plane management agent. VPP itself is agnostic, so you can build whatever you'd like; Honeycomb is one that service providers tend to like, because it gives NETCONF/YANG interfaces to the functionality in VPP, and service providers love NETCONF/YANG. If that makes you happy, then it'll be a great data plane management agent for you, and if it doesn't, then pick another one.
B: On top of the GoVPP library there's a framework project called Ligato, which is building out a framework for building data plane management agents — because if you look at it in terms of wanting to build not just infrastructure stuff for things like Kubernetes, but also VNFs that you may deploy, you're going to want that in the VNF as well. So Ligato is using GoVPP, and then there's some effort right now — and you'll see this in later slides — that uses the Ligato framework to basically build a vSwitch that literally works for Kubernetes as just another microservice.
B: So at the core of the FD.io project is this really cool technology called Vector Packet Processing, VPP. Basically, it lives at the packet processing layer. It's incredibly high performance, it's pure user space — you can run it in Linux user space — and, as I mentioned, it runs on Intel, Arm and PowerPC; it actually does some very sophisticated optimizations in order to run really well on them. It's a very mature technology; it's something that has shipped in volume in both server and embedded products since, I believe, about 2004.
B: So it's a very mature technology, with the kinds of things that you want in a mature technology: really, really good traceability, hundreds and hundreds and hundreds of statistics on everything being collected without actually impacting performance, that kind of stuff.
B: The interesting question is how it works. Basically, it decomposes packet processing into a directed graph of nodes, so you can almost think of each of these nodes as a sort of nano or micro network function: each node does a very small amount of the work involved in processing the packet and then hands it off to the next node of the processing graph. Now, packet processing graphs are not particularly new.
B: What is new is that VPP processes this graph a vector at a time. It takes a vector of packets — as many as it can take off of the receive queue — and it will take that entire vector and process it through, say, the ethernet-input node, and then it will move on to the next node from there. This ends up doing some pretty incredible things for performance, because a graph node is optimized to fit inside the instruction cache.
B: You can sort of visualize what happens: the packets move as an entire vector, node by node, through the graph, and so you get all those nice cache-warming behaviors that I mentioned. In addition, you get the advantages of prefetching memory. There's a lot of stuff being done with instruction parallelization — I think on the new Skylake server CPUs the theoretical maximum number of instructions per cycle is 5, and we're running something like 4.97, if memory serves. So hyper, hyper efficient.
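To make the vector-at-a-time idea concrete, here is a minimal conceptual sketch in Go. It is not VPP's actual API (real VPP graph nodes are C functions registered with the graph dispatcher); the node names and types here are invented purely to illustrate why running a whole vector through one node at a time keeps that node's instructions hot in the instruction cache.

```go
package main

import "fmt"

// Packet is a stand-in for a real packet buffer.
type Packet struct{ ID int }

// Node is one stage of an illustrative packet-processing graph.
// It consumes a whole vector of packets and returns the vector to
// hand to the next node, mirroring VPP's vector-at-a-time style.
type Node struct {
	Name    string
	Process func(vec []*Packet) []*Packet
}

func main() {
	// A tiny linear "graph": ethernet-input -> ip4-lookup -> interface-output.
	graph := []Node{
		{"ethernet-input", func(v []*Packet) []*Packet { return v }},
		{"ip4-lookup", func(v []*Packet) []*Packet { return v }},
		{"interface-output", func(v []*Packet) []*Packet {
			for _, p := range v {
				fmt.Println("tx packet", p.ID)
			}
			return v
		}},
	}

	// Pull a whole vector off the (pretend) receive queue...
	vector := []*Packet{{0}, {1}, {2}, {3}}

	// ...and run the entire vector through each node in turn, so each
	// node's instructions stay warm while it handles every packet in
	// the vector, instead of re-entering every node per packet.
	for _, n := range graph {
		vector = n.Process(vector)
	}
}
```

The contrast is with packet-at-a-time processing, where every packet would walk the whole graph and keep evicting the previous node's instructions from the cache.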
B: That means that the number of packets in the next vector is a little bit larger, and since you have a bunch of these very expensive fixed costs, like hitting memory, being averaged across the entire number of packets, if N goes up the average number of CPU cycles per packet goes down, and so you catch up. In other words, if you get into a situation where things go a little bit more slowly for a couple of microseconds, they then go faster.
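A rough way to write down that catch-up effect (my framing, not a formula from the talk): if a vector of N packets carries a roughly fixed per-vector overhead F, for things like instruction-cache refills and memory stalls, plus a per-packet cost c, then

    cycles per packet ≈ c + F / N

so when the system falls behind and the next vector comes off the receive queue larger, N grows, the F / N term shrinks, and the amortized cost per packet drops — which is exactly the catch-up behavior being described.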
B: Nope? Cool. So VPP also has a plug-in architecture that's incredibly rich — you can do pretty much anything with a plug-in. You can add graph nodes to the packet processing graph, you can rearrange the graph, and you can build your plugins independent of the VPP source tree and just drop them as .so files in the plugin directory. What this means is anyone can extend VPP with new features, however they would like to do it.
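Purely to illustrate that "extend without touching the core tree" idea, here is a conceptual Go sketch of a node registry that out-of-tree code could add to. Real VPP plugins are C shared objects discovered in the plugin directory, so everything here — package, type and function names — is invented for illustration, not VPP's plugin API.

```go
package main

import "fmt"

// Conceptual sketch only: the "core" keeps a registry of graph
// nodes by name, and a plugin registers new nodes at load time
// without the core ever being rebuilt to know about them.

type Packet struct{ Data []byte }

type NodeFunc func(vec []*Packet) []*Packet

var nodes = map[string]NodeFunc{}

// RegisterNode is what a "plugin" calls to add a new node
// (say, a handler for a brand-new protocol).
func RegisterNode(name string, fn NodeFunc) { nodes[name] = fn }

func main() {
	// A hypothetical out-of-tree plugin adds its own node...
	RegisterNode("my-shiny-protocol-input", func(v []*Packet) []*Packet {
		fmt.Println("handled", len(v), "packets in a plugin node")
		return v
	})

	// ...and the dispatcher can now send vectors through it.
	nodes["my-shiny-protocol-input"]([]*Packet{{Data: []byte{0x01}}})
}
```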
B: You don't have to block on everyone catching up, and likewise you can use the same mechanism to provide hardware acceleration. So if I have a piece of hardware that handles some of the things that the graph nodes would otherwise handle in software, I can simply let it do that work and then skip to the place in the graph where software has to process. And what this means is that if you have accelerating hardware in your boxes, you can simply provide plug-ins for it, and then, if the hardware is present, things go faster.
B: If the hardware is absent, then things still work. Or if you have hardware from different vendors and different hardware nodes in your system, you can simply stack up the plugins and say: OK, I'm not going to worry about differential deployment across different kinds of hardware; I just deploy one collection of plugins everywhere and take advantage of whatever hardware acceleration is present. Make sense so far? Cool. Programmability — I'll go through this really quickly. For programmability, VPP uses a shared memory message queue. It's incredibly high performance; it's been clocked at 900,000 requests per second.
B: Don't ask me why you would need to do that, but people who do high-performance coding are kind of obsessive about making things fast. You get async response messages, of course, and we've got bindings for C, Java, Python, Lua and Go that are automatically generated every time we build. Honeycomb, as I mentioned before, is just one example that takes this API and exposes NETCONF and RESTCONF northbound. It turns out that anyone can build an agent, and so you can have any control plane that makes sense.
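To picture the shape of that binary API — a request goes onto a shared-memory queue and an asynchronous reply comes back — here is a small self-contained Go sketch. Go channels stand in for the shared-memory queues, and the message name is made up; the real messages and generated bindings come from the GoVPP project, so treat this as the pattern only, not the API.

```go
package main

import "fmt"

// Request mimics a message posted onto the data plane's queue.
type Request struct {
	ID   int
	Name string     // e.g. a hypothetical "show_version"
	Resp chan Reply // where the data plane posts the async reply
}

// Reply mimics the asynchronous response message.
type Reply struct {
	ID      int
	Payload string
}

// dataPlane drains the request queue and posts replies
// asynchronously, the way a management agent sees VPP behave.
func dataPlane(requests <-chan Request) {
	for req := range requests {
		req.Resp <- Reply{ID: req.ID, Payload: "reply to " + req.Name}
	}
}

func main() {
	requests := make(chan Request, 64)
	go dataPlane(requests)

	// An "agent" sends a request and waits for the async reply.
	resp := make(chan Reply, 1)
	requests <- Request{ID: 1, Name: "show_version", Resp: resp}
	fmt.Println((<-resp).Payload)
	close(requests)
}
```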
B: So VPP has been clocked at a terabit per second on commodity servers, with no hardware acceleration, with millions and millions of routes in the routing table. I don't know of anything else that even comes close.
B: Because, again, it will run whether or not you have magic hardware present, so it's very friendly to public cloud. And it's even friendly if you have some funky networking thing that you need to do inside a container: with VPP you don't have to go get the appropriate version of the kernel with the appropriate magic feature; you can just run it purely in user space, which makes it very friendly not only to public cloud but to containers as well.
B: What kind of applications are we talking about? If you're a service provider and you're wanting to deploy a VNF, where the VNF is doing network processing — it's processing packets — then you would probably want to build an application that has VPP inside it to do your packet processing. If you are just a plain old vanilla-flavored application running on a Kubernetes node, then, as you'll see here in some later slides, you can simply do the traditional dance of, you know, kubectl apply with the YAML, and that will make it work.
B: But what will be happening under the covers is literally a VNF: your vSwitch — a container for your vSwitch, a pod for your vSwitch — being deployed as a DaemonSet, so that your networking vSwitch is running purely in user space, and then the containers, or rather the pods, that you're deploying on that node are being plugged into it. Okay? I've got some pretty pictures around that, because it is an interesting thing. But so, again, getting back to this: the VPP guys actually do count cycles per packet.
B: So, you know, doing this in comparison, you were looking at about 160 cycles per packet, which is damn fast. And I apologize, I misquoted: we're at 3.28 out of 5, not 4.97, for instructions per cycle — it was the line below it that got stuck in my head. So I mean, we're literally —
B: So if you look at this picture of splitting the vector: in that case one packet is broken off. Packets 0, 1 and 3 through N process just as fast as normal, but you've got to go back and load the ARP input node into the instruction cache to process packet 2, so the totality of that vector will process more slowly.
B: That means the next vector is like N plus M — it's a little bit bigger in size — so your average cost per packet goes down for the next one. Mostly, vectors traverse a single path through the graph, but you do get outliers, and then you can catch up. Make sense? Yeah.
B: Yeah, so you end up with extremely high performance. One other thing to mention here: we are literally finding that the limitation is the number of PCIe lanes. We could actually do better than these numbers with VPP if we could get more PCIe lanes from Intel, and the way we know this is that back on the previous generation of Xeons, when we were getting 560 gigabits per second, the telemetry was telling us —
B: — it was the PCIe lanes that were limiting us, and we're seeing that same telemetric signature, looking at the telemetry, now that we're basically pushing a terabit through. So we're pretty sure that as the generations of processors get better and we get more PCIe lanes, we'll continue to see performance improvements, because these two numbers are exactly the same binary — it's just different hardware with a different number of PCIe lanes. So this guy is really quick, and this guy is pretty much at the limit. In terms of features —
B: — this slide is kind of an eyesore. Basically, what it comes down to is that in VPP you've pretty much got everything you could imagine in terms of networking features if you were doing an industrial-grade router or switch. So routing, switching, advanced features like segment routing, NAT features, all the kinds of proxies you'd want for DHCP, lots of in-band telemetry, lots of counters, even support for things like MPLS.
B: Dave Barach is sort of the genius behind all of this, and I was having a conversation with him at one point where he was saying, you know, you really want encaps, not tunnel interfaces, because interfaces are expensive — they cost performance — but you've got to use them if you're going to do a virtual bridge domain, right; if you're going to do a bridge domain for L2 stuff, you've got to use interfaces, and that has costs. And this kind of concerned me, and so I said: okay, Dave —
D: I just have a couple of other questions, and I'm just looking at this. So multipath was one; the other was IP-in-IP. Do you have that as an encap?
B: I guess we do IP-in-IP, but quite honestly encaps are the simplest sort of problem we have; if somebody needs one, we can add it whenever.
D: That's what we sort of do today, and in BIRD we basically send it to the tun0 interface, which handles the encap stub, right. So I guess the only other ones that come to mind — there were two others. One was: what is the size of the multipath? And have you guys looked at doing —
B: The one thing I will plead with regard to this slide is that there's a lot of stuff that didn't get put on it, simply because the number of features we support has made it impossible for this slide to actually be kept up to date. You know, this is the third attempt at it, and the last time I did this slide it was like, I simply can't fit anything else in here.
D: The reason I'm asking about XLAT — and I think this is larger for the network working group — is that I'm starting now to run into folks who are going directly from IPv4-only to wanting to run IPv6-only infrastructure, because they're just out of addresses, and Kubernetes makes it much, much worse. And then there is the concept of v4 as a service across the infrastructure, and you could do something like a stateless 464XLAT to offer v4 as a service without requiring consistent v4 endpoints.
B: I believe we do have available a standard v4-over-v6 XLAT, but one thing I will draw your attention to is that we also have features like MAP and lightweight 4over6, which give you — probably more complicated than you want, honestly — the ability to tunnel v4 over v6. I think in your case what you probably want is more like a static XLAT, because those other ones are more for "I'm running a complicated service provider network", yeah.
D: Actually, it's interesting — you don't want it static, because then I've got to offer the v4 static addresses to the client. What I want is dynamic on the pod, on the payload: I want to lie to the payload and tell it it's got a v4 address, but actually have that exist as a v6 address and then get mapped to a v4 at the edge if I have to handle inbound v4 traffic, right.
D: Yeah, we can have a long conversation on that, but I think this is one that we should probably start talking about in the network working group. It's sort of the "v4 considered harmful" RFC: we've gone very quickly from "we don't support v4" to "should we be saying you should be considering v6, at least considering v6-only infrastructures, if you're going to be at scale" — and therefore, what do you do with all the v4 endpoints that are required in that environment?
B: One thing I can offer you guys as potentially something we could do: I'm sure it'll come as a great surprise to you that I have an incredibly deep v6 expert in my back pocket — I mean, you would never expect that — but I can literally go persuade Marc Townsley, who is literally the Fellow at Cisco who's been driving the v6 boulder up the hill, yeah.
B: The good news is, I think you'll find the ones that we have don't suck, and if somebody decides they need more than what we have, they're tunable, meaning with a little more elbow grease folks can make them go faster. And because the release cycle in FD.io is about three months, and because it's running in user space as a microservice, you don't have to wait however many years for the kernel to catch up.
B: Right. And then you've got the standard sorts of things you would expect from a relatively well-run project. When a patch is submitted, it goes through automated verification, where we run hundreds of unit tests, hundreds more system and functional tests, and then, depending on how concerned people are about the patch, a fair number of performance tests. Those get run on bare metal so that we can make sure we're not seeing performance regressions in the entire system, and that happens before we even get to code review.
B: Got it. And then, once we do merge, for usability we publish a bunch of artifacts, so you get apt-installable Debian packages, yum-installable RPM packages, and auto-generated documentation that gets regenerated merge by merge, so you can always go out and install the latest packages. And then per release we also auto-generate test reports, we've got Puppet modules, there's training and tutorials, hands-on use cases, blah blah blah.
B: All right, yep. So then this is trying to draw the picture of what I keep saying about networking as a microservice. You can literally run on a node a VPP vSwitch microservice pod; it bypasses the kernel to get to the NIC, and then it ends up providing networking for the individual pods that you're running as workloads on that node. And this gives you a lot of advantages: basically, it's pure user space.
B: You can talk directly to the NIC using DPDK, which ends up being quite a bit faster than what you get from the kernel. It's more scalable, it has a bunch of additional features, and it evolves more quickly because you've got releases every three months. And since it's essentially just another DaemonSet that you run to provide your networking as a pod, upgrading becomes much simpler: you don't have to upgrade the underlying kernel on the box, and it also becomes just a few microseconds of restart.
B: So you're basically asking what happens if I'm running this in a VM instead of on bare metal, right? Yes — if you're running it in a VM, you have something that, as far as the VM is concerned, looks like a PCI NIC, and there is a uio_pci_generic driver that will work in that case. So even if you don't really have hardware there, you should be able to run this inside a VM in a public cloud.
B: Let's go to this next picture, because it talks about how you can communicate with pods from VPP, and there are sort of two options available to you. One is you can use veth pairs, and the problem with veth pairs is they're comparatively slow and comparatively non-scalable. So that's certainly an option — it's an option that will always work, and you will still get advantages from using VPP in that case.
B: The other is that we have a VPP user-space host stack that implements TCP and that kind of stuff, and that host stack has an LD_PRELOAD shim available, so that with no code changes to the pod — for non-statically-linked workloads — you can slide that LD_PRELOAD shim underneath and then take advantage of the VPP user-space host stack. Now, this host stack scales to 10,000,000 concurrent connections and 200,000 new connections per second on two cores.
B: It's also been clocked pushing north of a hundred gigabits per second between two pods running on the same server over a single TCP connection, because one of the things that we do is, if we detect that two pods are running on the same host and you're using the VPP host stack, rather than pushing that traffic through a full TCP stack we simply treat it as a FIFO queue, and so you get really, really sped-up performance there. So that's one set of options that will work with existing workloads out of the box.
D: Understood, it's the same isolation; the question is just what changes. If we're using CNI, a pod network namespace is going to get set up for the thing, and I'm assuming the vSwitch microservice pod would be in the host namespace. So as long as the VPP host stack can span those network namespaces — I know the namespace isn't necessary for keeping, you know, routing tables and so on, and I don't rely on any of that; it's just the way things get done.
B: So basically, yes. There's one other aspect of this. If you are dealing with a workload that is sufficiently performance sensitive — here's a fact about the world that anyone who's tried to write a high-performance TCP stack has eventually hit, which is that the BSD socket API itself is a bottleneck. The API itself limits performance when you get to the far-out edges. And so, if you have an application where you are finding that the BSD API itself is creating a bottleneck for you —
B: We actually do have examples of this working right now that I can point you to. Good — that's fine, you can send me a pointer, that would be great, yeah. Absolutely. So that's sort of the generic "how you communicate with pods": you've got the two options there. Now, in terms of how you communicate between pods, if you use the native user-space host stack, there's an interesting other thing that is possible here, which is the way VPP sets up the communications.
B: Say you've got one pod listening, another pod connects to it, and they happen to be on the same host. VPP essentially just acts as the broker here: it sets up a shared memory segment between them, and once you've got that direct shared memory connection established, VPP gets entirely out of the way. All the things that are done around authorizing the connection and setting up the connection go through VPP, to control issues of policy and whatnot.
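As a sketch of that same-host cut-through (conceptual only — Go channels stand in for the shared-memory FIFO, and none of this is the real VPP host-stack API): the broker authorizes the connection once, hands both endpoints the same FIFO, and then stays off the data path.

```go
package main

import "fmt"

// fifo stands in for a shared-memory FIFO between two pods.
type fifo chan []byte

// broker plays VPP's role: it checks policy (elided here) and
// returns one shared FIFO that both endpoints use directly
// from then on, with no broker involvement per message.
func broker(listener, dialer string) fifo {
	fmt.Printf("broker: authorizing %s -> %s\n", dialer, listener)
	return make(fifo, 1024)
}

func main() {
	q := broker("pod-a", "pod-b")

	// pod-b writes directly into the shared FIFO...
	go func() { q <- []byte("hello over shared memory") }()

	// ...and pod-a reads from it, with no per-packet TCP-stack
	// work on the data path.
	fmt.Println(string(<-q))
}
```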
D: So here's an interesting problem. If the config in VPP changes — i.e. somebody has added another policy, and you've got pods talking over that policy, and somebody changes the policy while those pods are communicating with one another using it in flight — this would bypass any policy changes. Somebody changes the Kubernetes network policy, for example, and that change should affect in-flight traffic.
B: That's an interesting corner case to go take a look at and make sure that we get right. Yes, okay, cool. All right. So there's a big pickup both in terms of the performance you can get between pods that happen to live on the same node, and also in terms of the cost of networking between them, so you're going to get an increase in scale and density as well.
B: Okay. And then we've been talking a little bit to Matt Klein about possibilities here with Envoy, because that's sort of an example of people who are very interested in the performance of things. In the Istio/Envoy case, were you to use the VPP host stack, it's the same LD_PRELOAD shim game involved; it's just that, once the connection is established, you're talking within the same pod over that direct shared memory instead of between pods, right.
D: But you wouldn't be doing that between pods in Envoy anyway, if you load Envoy in — yes, but you would be doing it between containers that are in the same network namespace, I guess. Yes, but you could be talking to localhost, right? So I'm not sure — I guess it depends on how you're doing localhost within the pod. I'm also confused here: you've got the VPP host stack connected to both containers. The VPP host stack, I think, would be connected into the pod's network namespace, not into every container's space — the namespace is a kernel thing.
B: Okay: here is the collection of IPs that are associated with this namespace, which the TCP stacks — the host stacks that are talking here — can talk to, and by the way, one of them is localhost, which should be treated the way localhost is treated; and it may say, oh, by the way, this one over here is the proxy and reverse proxy, because that's what Envoy is doing, right, because all —
B: — I'm very much in favor of that, but basically my point is that yes, you're one pod, but there's the flexibility to do more than one IP per pod — there's the flexibility to do more if you want to do interesting things like direct server return. Yeah, yeah, okay. And so then, Kubernetes integration: there's some example Kubernetes integration going on in the Contiv-VPP area.
B: That's partially why the GoVPP library is separate from the Ligato framework, which is separate from the Contiv-VPP agent. If all you want to be able to do is write a Go agent that talks to VPP, or integrate VPP into your existing Go agent, you'd probably pick up the GoVPP library. If you want to do something that has some other niceties, because you're starting from scratch, the Ligato framework may prove to be useful to you. Okay, a lot of thought went into —
B: — how do we make it as easy as possible for everyone to play. Cool. And then, just as the teaser: IPv6 is a lot more than just addresses, right. Everyone sort of winds up there because of more addresses, but it also enables a bunch of other things that could be done, implementation-wise, to provide some of the existing APIs that we have in Kubernetes.
B: For example, being able to use anycast for smart load balancing, where you can actively measure not only the network latency to who you're talking to but the responsiveness of the client, in order to make your selections for who you're actually load-balancing to; the instrumentation I mentioned with IOAM; if you've followed the segment routing network programmability RFCs, there are a lot of interesting things available there; and, of course, moving away from overlays and NATs to provide things like load balancing for services, and so forth.
B: All of these are available implementation-level options that don't necessarily require any change in the contract, which needs to remain simple for the actual consumers of the Kubernetes API, and of course all of these are supported today in VPP at high performance and scale. So you have a bigger palette to play with when making implementation decisions, I guess is the underlying point, and then you can weigh your trade-offs and make your calls. And then finally, the ubiquitous "get involved" slide. Yeah, FD.io is an open community. We have literally days of tutorials.
B: If you want to code on it, we've got binary packages you can install, mailing lists, IRC channels. The good news about the wiki is that anything you could possibly want to know is in the wiki; the bad news about the wiki is that anything you could possibly want to know is in the wiki. Yeah — and are you going to be sharing these slides? I'd be delighted to share these slides wherever it makes sense to share them; it's just not clear to me where the right place is — the network working group GitHub?