From YouTube: Kubernetes SIG Network 20171116
Kubernetes SIG Network meeting from November 16th, 2017
B
I'm completely fine with that, as long as you cut me off to make sure I'm not taking too long. I've got some slides, though, because visuals tend to help a lot in understanding some of what's going on with VPP. We can jump around them depending on what folks' interests are, because there are lots of interesting things that could be explained here, and I pushed the "how it works" magic to the back. Although that's fascinating, it may not be the thing that fascinates you guys.
C
B
Right, so let me start from the top, and please ask questions as we go, because that will often lead to interesting conversations and give me a better idea of what aspects you guys are interested in. The whole point here is just to provide some information for you guys. Let me give the context first, starting with what FD.io really is. FD.io, "fd dot io", is typically pronounced "Fido" because it's easier to say. It's an open source project of the Linux Foundation, just like.
B
So basically FD.io is an open source project, similar to the CNCF, at the Linux Foundation. The point of it is basically to produce a very high throughput, low latency, feature-rich data plane that runs across multiple hardware architectures and runs equally well on bare metal, in VMs, and in containers. It helps a little bit to understand the scope of FD.io in terms of the layers of the network space.
B
So we'll talk about layers. Network I/O is how you get a packet from a NIC to a thread on a core. DPDK, for example, does network I/O very well: it produces really good drivers, it will pull things into user space for us, and we actually use DPDK for that for physical NICs where appropriate. Next up is packet processing.
B
There's a broad set of membership: various service providers and network vendors, and there's a lot of interest from various hardware vendors who sell chips, mostly because FD.io provides an extremely flexible method for integrating with different kinds of hardware acceleration. There was a question in the chat about a link to the deck; I'm happy to drop the deck any place that you guys find convenient.
B
Again, the contributions come from a broad array of folks as well, so it's a pretty active community from a code-activity point of view. This chart shows cumulative commits from when FD.io was open sourced in 2016. Going by commits, FD.io is currently the most active project that I'm aware of in the data plane space, with many more commits than both OVS and DPDK.
B
So the core technology at FD.io is something called Vector Packet Processing, VPP. It's one of the many projects that we have at FD.io, and it's essentially an optimized software network platform. It gives you pure user space networking at extremely high performance, and I'll show you in just a second how high: pure Linux user space. This is very nice as things move more and more towards cloud native, because you can actually treat and manage your networking entirely as a microservice, just like everything else.
B
It's been shipping at volume, in both server and embedded products, since 2004, so it has all sorts of telemetry: instrumentation, traces, hundreds and hundreds of counters that don't impact performance. You can throw an enormous amount of telemetry off the system, probably more than you want. So, I had mentioned performance. This is not something I
think is directly applicable in the Kubernetes space yet, but it gives you some idea of what I mean by high performance. FD.io on modern Skylake processors is currently able to route a terabit of traffic, with millions of routes in the routing table, on a commodity server without any sort of magical hardware to assist it. To be able to do that you're literally sweating every CPU cycle, and that's actually on a minority of the cores on the box: only 24 cores to push those numbers. And there's a fascinating thing.
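As a back-of-the-envelope check on those figures, the per-core load implied by a terabit on 24 cores works out as below. The 1 Tb/s and 24-core numbers are the ones from the talk; the rest is simple arithmetic.

```python
# Per-core throughput implied by routing 1 Tb/s on 24 cores.
total_bps = 1e12  # one terabit per second (figure from the talk)
cores = 24        # cores actually used (figure from the talk)

per_core_gbps = total_bps / cores / 1e9
print(f"{per_core_gbps:.1f} Gb/s per core")  # ~41.7 Gb/s per core
```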
B
On the previous generation of Intel Xeon we were getting half a terabit per second, and at the time the telemetry coming off the system was indicating to us that the limitation was not that we were CPU-bound; we were actually being bound by the number of PCIe lanes per socket. And so when the new Skylakes came out with more PCIe lanes per socket, with exactly the same binaries we saw nearly a doubling of performance just from adding the additional PCIe lanes, and the telemetry that's coming off
the system tends to indicate that we are still being blocked by insufficient PCIe lanes to really exploit what we can do with the system. So there's even more performance to be squeezed out as the hardware improves in terms of being able to deliver packets from the NIC to the core. Any questions so far?
B
All right, cool. I also mentioned feature-rich. This slide shows a small minority of the features we offer, but it's the things people typically end up being interested in. I've mentioned hardware platforms: not only DPDK for interfaces but also AF_PACKET, all of them supporting things like multi-queue, reconnect, and jumbo frame support. We've got language bindings for C, Java, Python, and actually now Go, that are being automatically generated. So everything in the system is fully programmable.
B
It's programmable all the way down in terms of granularity, using an extremely high performance programming mechanism, and you've got the language bindings you need to build whatever kind of agent you want to control it. There's the full suite of tunnels and encaps: GRE, VXLAN, and so on. We've got hardware support for IPsec, so if you have accelerating hardware, it will do the crypto for you. I don't think anyone here cares about MPLS, so I'm going to skip that. For routing:
B
We support v4 and v6, hierarchical FIBs, millions and millions of routes, thousands and thousands of VRFs, and multipath, both ECMP and unequal cost. I think I had some questions when I talked to the CNCF networking group; they were asking how many paths, and I inquired, and the response was 64K, which is ludicrous, but that's just kind of how the people who make this kind of software think. Generally, we support all the sorts of things you'd want for switching, including bridging with programmable bridge table entries and proxy ARP.
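As a rough illustration of how equal-cost multipath spreads flows, here is a toy sketch of the usual flow-hashing idea. This is not VPP code, and the addresses and hash choice are purely illustrative: hash the 5-tuple, pick one of N equal-cost next hops, and every packet of a given flow takes the same path while distinct flows spread out.

```python
import hashlib

def ecmp_next_hop(five_tuple, next_hops):
    """Pick a next hop for a flow by hashing its 5-tuple.

    All packets of one flow map to the same path, while distinct
    flows spread across the equal-cost paths.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

paths = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
flow = ("192.0.2.10", 40000, "198.51.100.7", 443, "tcp")
# The same flow always hashes to the same next hop:
assert ecmp_next_hop(flow, paths) == ecmp_next_hop(flow, paths)
```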
B
They allow you to actually get in-band telemetry for your network traffic, so you can actually get timestamps, not only when you hit the things you do control, but where you hit intermediate routers and switches, which will timestamp that for you in the network. And this is not everything, so if there's a feature you don't see here, please ask; we probably have it. Do folks have any questions on the features available? Oh, you guys are awfully quiet.
E
C
E
B
E
B
So the answer is yes, and I get a little bit into that in literally two slides, where I start talking about how this would fit into a Kubernetes context, with networking as a microservice. I didn't expect this group to be so interested in "I want to run a container-based virtual network function", like running a router in a container for purposes other than serving Kubernetes. We could certainly talk about that if there is interest, because that is a use case, but I digress.
E
B
I could go on for another hour about that, so we'll not do that on this occasion, but do know that that is actually one of the key use cases this can be used for. And then, just really quickly: we tend to run a rapid release cadence, about every three months, and because it's running purely in user space, it's a simple upgrade path as you pick up the features that keep turning up every three months.
B
So this is where I get a little bit more into where this would fit into a Kubernetes context. What you could think of is this: because VPP runs purely in user space, you can now take the vSwitch function that you need to serve Kubernetes, when you plug in via CNI, and you can actually make that a microservice of its own.
B
You can run it in its own pod as a DaemonSet, and it can then service the other pods running on the node. In this case VPP is doing the I/O to the NIC itself, so you're not going through the kernel network stack at all, which is crucial for getting maximum performance. Things that are pure user space tend to be faster and more scalable. To give you an idea of what we mean by faster at scale:
B
So these are stateful ACLs; apologies, I should probably have put that on the slide. Effectively what it means is that if I've got an ingress ACL and somebody opens an outgoing TCP connection, I make sure they actually get the response to it: you let the connection coming back from the other side come through. Got it? Thank you.
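The stateful behavior described here can be sketched as a toy connection tracker in a few lines of Python. This is purely illustrative of the concept, not how VPP's ACL plugin is implemented: an ingress filter that would otherwise drop a packet lets it through when it matches a connection that was opened from the inside.

```python
# Toy stateful ingress filter: allow return traffic only for
# connections that were initiated from the inside.
# Illustration of the idea; VPP's ACL plugin is far more involved.

class StatefulIngressACL:
    def __init__(self):
        self.sessions = set()  # expected return 5-tuples

    def outbound(self, src, sport, dst, dport, proto):
        # Record the session when a host inside opens a connection;
        # store the 5-tuple as the return traffic will present it.
        self.sessions.add((dst, dport, src, sport, proto))

    def inbound_allowed(self, src, sport, dst, dport, proto):
        # Inbound packets pass only if they belong to a session
        # that was opened outbound.
        return (src, sport, dst, dport, proto) in self.sessions

acl = StatefulIngressACL()
acl.outbound("10.0.0.5", 41000, "93.184.216.34", 443, "tcp")
assert acl.inbound_allowed("93.184.216.34", 443, "10.0.0.5", 41000, "tcp")
assert not acl.inbound_allowed("203.0.113.9", 443, "10.0.0.5", 41000, "tcp")
```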
B
Thank you for asking. So with 10k of them we're seeing about 5.7 million packets per second, and that's the non-drop rate, meaning we lose not a single packet, and that's on a single core. This tends to scale a tiny bit less than linearly per core. So effectively you can scale up to ludicrous amounts of traffic with very, very large numbers of ACLs. So as the complexity of network policies grows, and the number of pods per node grows, you have a lot of headroom here.
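Since that 5.7 Mpps non-drop rate is per core and scaling is described as just under linear, a quick back-of-the-envelope for aggregate capacity looks like this. The 5.7 Mpps figure is from the talk; the 95% scaling efficiency is an assumed, purely illustrative number standing in for "a tiny bit less than linear".

```python
# Rough aggregate non-drop rate if per-core NDR scales slightly
# sub-linearly. 5.7 Mpps/core is the figure from the talk; the
# 0.95 efficiency factor is an assumption for illustration.
per_core_ndr_mpps = 5.7
scaling_efficiency = 0.95

def aggregate_ndr(cores):
    return per_core_ndr_mpps * cores * scaling_efficiency

print(f"{aggregate_ndr(8):.1f} Mpps on 8 cores")  # ~43.3 Mpps
```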
E
B
E
B
G
B
G
B
F
G
B
Okay, I actually don't have numbers for that, but that's a really valuable thing; my performance guys are actually looking for new interesting things to measure. So what you're basically asking is: if I've got 10k ACLs and I'm driving 5.7 million packets per second through, what additional latency do I pick up in the establishment of connections? Yeah.
G
B
A
B
Thank you very much for holding the line on the time. We're actually close to the end of my slides; I think I've got maybe four, maybe five more slides. So for NAT with 60k sessions, we're clocking it, and this is IMIX traffic, at about 6.2 million packets per second, about 18 gigabits per second, on a single core, and that also scales almost linearly with the number of cores. And in this case it is actually IMIX traffic.
B
So look at the IETF standards for how you measure traffic: when you're measuring things like millions of packets per second or throughput, it actually makes a huge difference what packet sizes you're running through, because if you turn down the packet size you can crank up the packets per second, and if you turn up the packet size you can crank up the throughput. So IMIX is an algorithm for producing a mixture of different packet sizes; it attempts to mimic what one would realistically see flowing through a network.
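One commonly used "simple IMIX" mix is 7 parts 64-byte, 4 parts 594-byte, and 1 part 1518-byte packets; that particular weighting is an assumption here, since IMIX definitions vary. Notably, its average packet size is consistent with the 6.2 Mpps / roughly 18 Gb/s NAT figure quoted above:

```python
# Average packet size of a simple IMIX mix, and the throughput it
# implies at a given packet rate. The 7/4/1 weighting is one common
# IMIX definition (an assumption here; definitions vary).
imix = [(64, 7), (594, 4), (1518, 1)]  # (size in bytes, parts)

total_pkts = sum(parts for _, parts in imix)
avg_bytes = sum(size * parts for size, parts in imix) / total_pkts
print(f"average packet size: {avg_bytes:.1f} bytes")  # ~361.8 bytes

pps = 6.2e6  # packets/second (figure from the talk)
gbps = pps * avg_bytes * 8 / 1e9
print(f"{gbps:.1f} Gb/s at {pps / 1e6:.1f} Mpps")  # ~17.9 Gb/s
```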
B
Thank
you
cool,
excellent
question.
Thank
you.
These
were
sort
of
two
things
I
wanted
to
highlight.
We
have
tons
and
tons
of
other
performance
data,
but
these
two
I
struck
me
as
being
things
that
would
probably
be
of
interest
to
you
guys
because,
as
the
number
of
pods
per
as
the
number
of
pods
per
noting
pieces
and
as
the
complexity
of
things
that
people
specify
with
the
kubernetes
network
policy
and
services,
api
increase,
I
expect
that
scale
becomes
more
important.
You
can
tell
me
if
I'm
wrong,
no.
E
B
Yeah, I know. And one other thing I'll mention in passing: while we can certainly serve Services via NAT, we have a lot of other very sophisticated tools that could give you the same behavior you get from Services, but with fun things like direct server return. But that's a future discussion. So, pictures help: communication with pods.
B
So we actually have two ways that you could have pods connecting to the vSwitch with VPP. One is you could just use a veth pair, as you guys are very comfortable with right now; that loops through the kernel, but it puts the kernel network stack in the middle. The other option is that we actually do have a complete user space host stack in VPP that can be used to connect to the pods.
B
That host stack is clocking in right now at about 10 million simultaneous connections, with 200,000 new connections per second, on two cores, so it scales really, really well. It also does cute things: if you happen to have two pods on the same host, rather than going down the TCP stack and back up the TCP stack, it can cut straight across.
B
But there are people who have really high performance workloads that they care a lot about tuning, and if you have such a workload: one of the things that everyone who does a high performance TCP stack eventually discovers is that the BSD socket API itself is a bottleneck, and so we do have a higher performance native API.
B
And then this was for communication between pods, as I had mentioned. One of the things that's very nice here is that if you have two pods that are communicating using the VPP host stack, VPP is only acting essentially to assist in the setup process. When you're actually passing bits between them, once the connection is established, they're basically going over a direct shared memory pipe, so you get insanely high performance talking between two pods that are sitting on the same server.
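The "set up once, then get out of the data path" pattern described here can be illustrated with named shared memory. This is a loose analogy only, not VPP's actual machinery: a broker creates a segment and hands both peers its name, and after that the peers exchange bytes directly without the broker touching the data.

```python
from multiprocessing import shared_memory

# "Control plane": a broker creates the segment and tells both
# peers its name. After this, the broker is out of the data path.
segment = shared_memory.SharedMemory(create=True, size=64)
name = segment.name

# "Data plane": each peer attaches to the same segment by name
# and reads/writes it directly, with no intermediary copies.
writer = shared_memory.SharedMemory(name=name)
reader = shared_memory.SharedMemory(name=name)

writer.buf[:5] = b"hello"
received = bytes(reader.buf[:5])
print(received)  # b'hello'

for peer in (writer, reader):
    peer.close()
segment.close()
segment.unlink()
```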
B
So there is reality to this stuff when it comes to integration. Contiv is doing an integration right now with VPP, but one thing I did want to point out is that it's done in layers, so literally anyone else could integrate with VPP in their Kubernetes CNI plugin. We do have a straight-up Go library for VPP that anybody could use to build an agent, if you have an existing agent you just want to drive VPP from. And if you're building an agent, we actually have an agent framework that could be used.
B
So, for example, you were talking about people wanting to run routers in containers for NFV; they could use that to build it. And then that sort of steps up to the Contiv VPP agent, but again, anyone else, plain old Calico, any of the rest of those guys, could also use VPP. We just happen to be doing it in Contiv.
D
H
D
So is your vision for this that you're trying to get the other projects, like Calico, Flannel, Weave, to integrate this as their underlying switch or router? The reason I ask is that Contiv is one thing, but if more and more projects are doing this, that kind of changes a lot of things in Kubernetes in general; kube-proxy, for example, would be one that wouldn't work in user space currently. So what is kind of the vision for what you would like to see happen with it? Yeah.
B
Let me see; honestly, I'd like to see it adopted as broadly as possible, because I think it solves problems in terms of performance and scalability, and in terms of rate of innovation, that I think would be good overall for the community. And I don't tend to be one who's going to let religion for one particular CNI plugin or another block forward progress, so I would love to see it integrated more broadly.
B
You know, one of the conversations I actually want to have subsequently with you guys is this: I know you currently support at least two, maybe three, modes in kube-proxy, in terms of having kube-proxy manage how Services are handled. I would love to talk about what the rules of the road are for contributing additional integrations, because I'd love to get a mode where kube-proxy would translate down to a user space VPP vSwitch to handle Services, if that's actually available and that's how the networking is being handled.
B
And so my closing slide is pretty straightforward: how can we help, right? The whole point of the conversation is to see if there are problems that we could help you guys in solving. We've got a lot of fun tools, and the question is: do they fit, and is this something that would be useful for you?
A
B
You guys made it very, very easy; the way Kubernetes networking is designed makes it quite easy. The only point where, for Services, you have to come in and make sure that this piece falls into place right now is kube-proxy. And so I would love to talk about either getting some patches in there that would allow it to work with VPP, or whether it makes sense to have an API mechanism, analogous to what CNI does, so that things can simply plug into kube-proxy to provide that functionality.
E
B
So you can actually then also lifecycle it, and handle upgrades and things of that nature, in a similar way. Right now, if I'm using a kernel-based mechanism for networking and I want to upgrade my networking stack, I've got to go and upgrade my kernel, which means upgrading... like, sure.
E
Okay, right, so let's just talk about it. The original proselytizers for microservices were Netflix, right, with their ecology of a bunch of microservices, and usually these are big heavyweight things: they have a load balancer, they have a bunch of servers, and it's a higher-level concept, I think. Right here you're talking about a user space switch; not just a switch, but packet
B
Processing, okay. No, that makes a lot of sense. I can tell you part of why I made that word choice.
C
A
B
A
B
Things that are probably going to be useful: number one, I do hang out on a Slack channel. Let me actually make it clear on this Slack channel who I am, so feel free to hit me up on Slack; I'll drop my email there as well. And if somebody can point me to where to put the slides, I'm happy to put them wherever the local convention is. So I'd be delighted. I don't...
B
A
B
D
A
J
Here, yes. We have received many issues in the Kubernetes repo; it appears to be a Kubernetes bug, mostly about traffic that crosses nodes, and the root cause is the way kube-proxy masquerades with NAT. I have discussed this on the Google mailing list, and I already sent up a PR and have received a really good review from Daniel; he has reviewed that PR.
I
A
So I've got this up. I admit I'm only vaguely familiar with this dashboard, but from what I can tell it looks like we have five suites in the sig-network GCE sections that have failures, and it's probably worth us just going through those, understanding whether or not they matter, what the priority is, and whether somebody is owning them. Yeah.
J
A
J
A
K
I did take a look at it, and it basically failed all the way back; it's never succeeded. They were basically things like panics, or something in the stack traces raising a lot of errors. I already opened an issue about it, so I think they are aware, and I think they are looking at it.
A
All right. I myself have not been too closely involved with the history of these tests. I know Chris Luciano and Eric have been looking at these, and I think it was a kind of infrastructure problem with the cluster creation scripts. I'm not sure if that PR has been merged; it appears like it probably hasn't, if they're still failing. Yeah.
A
So the last thing on the agenda: I wanted to raise the schedule over the next month and a half or so, because I know holidays are coming up and we've got KubeCon. I just wanted to get a feel for what we think we should set our schedule to be. As written down, the next meetings would be November 30th, December 14th, December 28th, and January 11th. I'm assuming that at least the December 28th one is a no-go.
A
E
A
L
Yeah, hi. I proposed a topic last time: I am working on a tool to display the iptables rules in the kube cluster, so that it's easy to debug the networking between the pods. I know there is very little time; otherwise I could have shown whatever progress I have and gotten feedback on what other people think, and whether it might be useful. Oh, there are still five minutes; maybe I can show it, if that's fine, and get some feedback so that I can work on it further. Yeah.