From YouTube: CNCF Networking WG 7/5/2017
Description
https://github.com/cncf/wg-networking
The CNCF Networking WG meets bi-monthly to explore cloud native networking technology and concepts around the Container Network Interface (CNI). This video is the WG's July 5th meeting.
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
Paul, do you have an agenda, an order that you want stuff presented in?
So Bryan and I are going to do this together, and you guys may not know me. I worked with Alexis before on various things, and he's asked me to help with this project of donating Weave Net to the CNCF. But Bryan is the main Weave Net guy, so he'll generally be doing the hard part of the presentation; I'm just giving a kind of high-level view. Alexis sends his apologies; he can't join, as he's currently traveling.
So, just a quick overview of Weave Net. It started back in 2014. It's an overlay network for containers in cloud-native environments, and the mantra of the project is that networking should just work, especially for an application-developer audience: it should be kind of invisible infrastructure until you need to know something about it, and then it lets you know about it. And it's had some traction.
We recently launched Weave Net 2.0, with a bunch of new capabilities, in the last two weeks, and there are quite a number of external contributors, not just Weaveworks. Some of those contributors have been made committers, so people from outside the company have full commit rights. So it's already kind of on its way to being a community project; obviously, one of the aims of pushing it to the CNCF is to strengthen that, and we'll get to that.
I think... so, Bryan Boreham here; I'm also the lead engineer on the Weave Net project at Weaveworks. So, getting into the "it just works": networking in the large has this kind of reputation for having all these controls, knobs and levers that, you know, a priesthood has to control, and we try to do away with that, because we're not actually networking people, we're application-developer people. So the thing is kind of self-configuring: you just download it and run it. It has cross-cloud capability; it can use multiple different data paths to do things like get across NAT boundaries, and there's some clever stuff in there. And there's something that isn't really an overlay at all: we demonstrated how we could use the same control plane with a completely different data plane that's specific to Amazon VPC, where we use their API to control their routing table. That's an option in the code as well.
It runs on this thing, which I'll cover in a little more detail: Weave Mesh is an eventual-consistency library that we use to gossip information about topology, peer to peer, without having a central database or a central store that we need to reach to find out what's going on. That means it's partition-tolerant: you can disconnect your laptop from the net, you know, get on a plane, and not have to worry about it when reconnected; you can have partitions between data centers, etc., etc.
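The gossip-and-merge behaviour described here can be sketched with a toy CRDT. This is a minimal illustration in Go (Weave Net's implementation language), not Weave Mesh's actual data structure or API; the key property is that merging peer state is commutative and idempotent, so peers converge regardless of message order or temporary partitions.

```go
package main

import "fmt"

// entry is a value tagged with a logical timestamp; the higher clock wins.
type entry struct {
	Value string
	Clock uint64
}

// lwwMap is a last-writer-wins map, a simple CRDT: gossip exchanges full
// state and merges it, and merging in any order yields the same result.
type lwwMap map[string]entry

// Merge folds another peer's state into ours, keeping the newer entry
// for each key. Merge is commutative and idempotent.
func (m lwwMap) Merge(other lwwMap) {
	for k, e := range other {
		if cur, ok := m[k]; !ok || e.Clock > cur.Clock {
			m[k] = e
		}
	}
}

func main() {
	a := lwwMap{"peer1": {"10.32.0.1", 1}}
	b := lwwMap{"peer1": {"10.32.0.2", 2}, "peer2": {"10.32.1.1", 1}}

	// Gossip in both directions; both sides converge to the same view.
	a.Merge(b)
	b.Merge(a)
	fmt.Println(a["peer1"].Value, a["peer2"].Value)
}
```

Because merges commute, it does not matter which peer hears about an update first, which is what makes the scheme tolerant of disconnection and reconnection.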
So those are some of the technical features. Moving on: we also use that gossip mechanism to do distributed IP address allocation, which is all built in, and we have a DNS service which again uses that Weave Mesh library to gossip around the network. And in the data plane, we support encryption.
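As a rough illustration of consensus-free IP allocation, here is a sketch that splits a cluster's address range among peers so each can hand out addresses from its own share without a central server. Weave Net's real allocator manages a ring of transferable ranges; this fixed even split, and the names in it, are simplifications for illustration only.

```go
package main

import "fmt"

// splitRange divides the addresses [base, base+size) evenly among n
// peers, giving any remainder to the earlier peers. Each peer can then
// allocate from its own slice with no coordination.
func splitRange(base, size, n uint32) [][2]uint32 {
	out := make([][2]uint32, 0, n)
	per, rem := size/n, size%n
	start := base
	for i := uint32(0); i < n; i++ {
		cnt := per
		if i < rem {
			cnt++
		}
		out = append(out, [2]uint32{start, start + cnt})
		start += cnt
	}
	return out
}

// ipString renders a uint32 address in dotted-quad form.
func ipString(a uint32) string {
	return fmt.Sprintf("%d.%d.%d.%d", a>>24, a>>16&0xff, a>>8&0xff, a&0xff)
}

func main() {
	// 10.32.0.0/12 is Weave Net's default allocation range.
	base := uint32(10)<<24 | uint32(32)<<16
	for i, r := range splitRange(base, 1<<20, 3) {
		fmt.Printf("peer%d: %s - %s\n", i, ipString(r[0]), ipString(r[1]-1))
	}
}
```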
I won't get into the details of it; basically, we're using either exactly the same technology as IPsec does inside the kernel, or the Google NaCl library in userspace. We support multicast, which is pretty rare for a container network; that kind of comes with it being a layer-2 implementation.
Those are kind of the two things that really connect: the Kubernetes network policy, implemented at layer 3 in one program, and the network, implemented in another program. We distribute them together, they install together, and people can use that. I kind of rattled through this just to get through the points; if anyone has questions, you can interrupt me, or you can leave them till the end.
There are two ways you could want that: you could want an IPv4 overlay running in an IPv6 data center, or you could want an IPv6 overlay running on an IPv4 data center, or conceivably IPv6 on IPv6, though that's probably a niche concern. Anyway, those are effectively two different projects that require different handling. It's fundamentally the same; it just requires going through the code, tweaking all the places where it assumes it's dealing with IPv4 addresses, and breaking that assumption. So there's nothing fundamentally hard about it.
We have a kind of check-in in the program, to see if there's a later version, and from that we can determine how many people are actually running it on a given day. So the 28-day active-instance count is about 90,000 at the moment. We don't collect any personal information; we don't know who these people are, or what they're doing with it, but we know that, for whatever reason, there are ninety thousand of these things running out there.
These are some of the ideas that are kind of on our roadmap. We don't have a lot of detailed plans around them, but the whole area of ingress and egress is very popular; we get asked about it all the time: how do I connect my existing network, my bare-metal network, my F5 load balancer? There are lots of different things that people want to connect to a container network, and only so many hours in the day.
D
Hypervisor
yeah,
this
VX
LAN.
We
use
the
DX
laminate
encapsulation,
which
is
in
the
Linux
kernel
and
if
you
run
on
top
of
a
VM
environment,
that
also
uses
the
X
line
and
you've
kind
of
got
the
Exile
inside
the
X
line,
and
we
will
ought
to
talk
to
the
hypervisor
to
burst
through.
You
know,
just
just
on
one
of
those
layers
and.
We can only really do automatically the one topology, which is a full mesh: everything connects to everything. And past about 100 nodes you don't want to do that, because of the N-by-N connections. So there are more intelligent topologies, which you can script at the moment from the outside; building that into the product would be cool, and obviously so would anything else that anyone in the community wants to come along and contribute or try to do.
Those are the ideas about the future. The next slide has our block diagram; sometimes people find it easier to visualize things. We have the Weave Mesh layer along the bottom, which is, as I mentioned, gossiping, and that is an eventually consistent CRDT data structure, so it is resilient to network partition: you can take your network away and it will carry on as before. And then on top of that we build the network, DNS, the IP address management, the network policy controller, and the Amazon VPC network.
On the policy: basically, the policies are based on labels, and you can attach any label to anything. So you can say: I'm going to label my front end here, my middle tier, my data tier, and I'm going to write a rule so that the only things that can connect into my middle tier are things that are labeled front-end. And I can make that rule a little bit more specific and talk about ports.
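A label-based rule like the one just described can be expressed as a Kubernetes NetworkPolicy; this example is illustrative only (the names, labels, and port are invented):

```yaml
# Illustrative only: allow traffic into pods labeled tier=middle
# solely from pods labeled tier=frontend, and only on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-middle
spec:
  podSelector:
    matchLabels:
      tier: middle
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
```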
So I guess it's worth saying, you know, the whole of this project is doing nothing more than using Linux network technologies; the heavy lifting is done inside Linux. And the same is true of the network policy controller: it is setting up iptables rules, so the Linux kernel is actually running those rules, and we're kind of doing the administrative bookkeeping to make sure those rules are correct.
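That bookkeeping idea can be sketched as follows: the controller only computes the rules, and the kernel enforces them. The chain name, ipset names, and rule shape below are invented for illustration and are not Weave Net's actual rules.

```go
package main

import "fmt"

// rule describes one desired policy entry: traffic from members of the
// SrcSet ipset to members of the DstSet ipset on a TCP port is allowed.
type rule struct {
	SrcSet, DstSet string
	Port           int
}

// renderRules turns the desired policy into iptables argument strings.
// A real controller would diff these against the kernel's current state
// and apply only the changes; here we just produce the target state.
func renderRules(chain string, rules []rule) []string {
	out := []string{fmt.Sprintf("-N %s", chain)}
	for _, r := range rules {
		out = append(out, fmt.Sprintf(
			"-A %s -m set --match-set %s src -m set --match-set %s dst -p tcp --dport %d -j ACCEPT",
			chain, r.SrcSet, r.DstSet, r.Port))
	}
	// Anything not explicitly allowed is dropped.
	out = append(out, fmt.Sprintf("-A %s -j DROP", chain))
	return out
}

func main() {
	for _, line := range renderRules("POLICY-MIDDLE", []rule{{"frontend-pods", "middle-pods", 8080}}) {
		fmt.Println("iptables " + line)
	}
}
```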
I mean, you know, I think the alignment with the cloud-native charter is almost obvious. This is a library and a toolkit that's really designed to help with containers: dynamically managed systems need networking, and they need networking that can adjust itself, and this is a library and a toolkit that helps do that. And it's microservices-based, again... I don't really feel this section needs much explanation; I think there's a clear, strong alignment with the CNCF charter.
I think the more interesting question is, you know, why Weave Net in the CNCF. I think there's a desire to provide an independent, respected home for Weave Net. There is a strong alignment with cloud native, and it clearly would be a good project, especially with CNI, plus the desire to encourage a wider community and engagement. There are a number of people already contributing, but I think it's felt that seeing this as part of the CNCF would really widen that appeal.
We similarly feel that this would strengthen CNI. You know, having multiple CNI implementations in the CNCF will definitely be a good thing, and it would similarly strengthen the network policy move inside Kubernetes by having another implementation inside the CNCF. It would obviously help those projects that are already using this, encourage wider uptake of it, and make it easier for other CNCF projects to feel this is a project that they can rely on or reuse.
Going here, I'll talk about the... yeah. Well, on one of the slides: the International Securities Exchange is one; they were very interested in the multicast capability. There's one which is like an IoT automated-home company. And Ocado, which is, ah, I shouldn't claim them as a production case; Ocado is an online supermarket in the UK, and they're not yet in production. And there are a couple more that I guess have been put in there by other people in the organization, which I can't speak to personally, but I would say it's kind of interesting.
You mentioned scalability in terms of, I think it was, the number of nodes being currently limited to order 100, and it sounds like you have some aspirations to increase that. And then I've also heard performance concerns about the data plane. I was just wondering if you have any thoughts on, you know, what the aspirations are; is it fundamentally limited to small local clusters?
Not at all. Well, I'd like to say we have the International Securities Exchange, which is a New York trading market; they run their environment on Weave Net and they were perfectly happy with the performance. The figure of 100 nodes is just that, beyond that, you have to turn off the automatic formation of new connections, because it will try to make a full mesh, and that's an N-squared number of connections; you want something like a hypercube or more clever topologies.
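The scaling point here is just arithmetic: with n peers, a full mesh needs n(n-1)/2 connections, which grows quadratically, while a hypercube-style topology needs only about n*log2(n)/2 links. These are standard formulas; the quick comparison below is illustrative.

```go
package main

import "fmt"

// fullMesh returns the number of links when every peer connects to
// every other peer: n*(n-1)/2.
func fullMesh(n int) int { return n * (n - 1) / 2 }

// hypercube returns the number of links in a hypercube of n nodes
// (n a power of two): each node has log2(n) neighbours, and each link
// is shared by two nodes, giving n*log2(n)/2.
func hypercube(n int) int {
	d := 0
	for m := n; m > 1; m >>= 1 {
		d++
	}
	return n * d / 2
}

func main() {
	for _, n := range []int{16, 128, 1024} {
		fmt.Printf("n=%4d  full mesh=%7d  hypercube=%5d\n", n, fullMesh(n), hypercube(n))
	}
}
```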
If you are someone who is utterly, passionately concerned about the last nanosecond, then yes, you're going to regard it as having a performance problem, but then, yeah, don't use an overlay. That's one of the reasons why we put the no-overlay option in, which is only implemented for one kind of router at the moment, but that could be extended.
The only other thing I had on the agenda for today was to talk about IPv6. There was an email thread suggesting that one of the topics this workgroup might want to pick up is around IPv6 and the role it has in, you know, cloud-native patterns. We have kind of a small group there, so, probably, just to let you know, we're going to have that discussion at some point in the not-too-distant future and try to determine how we address that.