From YouTube: Kubernetes SIG Network 2018-05-31
Description
SIG Network meeting from May 31st, 2018
A: And we are recording. This is the Kubernetes SIG Network meeting, Thursday, May 31st, 2018. Let's get right into it, starting off with test review. It looks like last time a few people were handed, or volunteered for, some failing tests to look into. Let's do the honors, go around those, and see what the status is.
B: I looked into the one that was flaking. It is a kubectl CLI test with a deployment in GCE, so it looks like it's not linked to networking at all, whatever the network is. And no, it's not failing, it is flaking. At first I added to an existing issue on 1.8, but finally one person over there told me, no, create another issue. So I created an issue, but there's no reply so far. [inaudible]
C: I'm still trying to dig into that one. I mean, it looks like it's the kube-proxy upgrade and downgrade test. It looks like some nodes just don't run kube-proxy, and there's a lack of log messages out of kubelet that would elucidate that. So I am continuing to look into it; we're trying to figure out exactly why kubelet just simply doesn't seem to be starting those pods. It does get the add notification, but then it never actually begins to run them, whereas some nodes do. So, still looking into it.
A: From a couple weeks ago: I said I was going to look into the failing Calico tests. We ended up garbage collecting those, because they'd never been passing and nobody wanted to own them. You know, here at Tigera we've got a ton of tests that we run, so it didn't seem like that suite was adding a lot of value.
E: I just want to raise an issue, because currently it looks like the command-line arguments are being deprecated in favor of going full config, and we're looking at how we can possibly manage things like hostname overrides for kube-proxy from kubeadm, because we're deploying it as a DaemonSet. So we'd really like to be able to keep host-specific command-line arguments, and not have those deprecated in favor of just the config.
D: I think the real answer is nobody's been really running with it. I know Mike Taufen, who's doing all the kubelet work, has been giving me grief about not pushing that fast enough, hard enough, and the fact of the matter is there's just not that many people who are interested in driving it forward, and I haven't had the time personally to do so. So, while they may be deprecated, I wouldn't be worried about them going anywhere for at least a few releases, I mean, at the very minimum.
D: We would have to give them six months of overlap, and I don't know when we deprecated them, so I wouldn't be too worried about it. Yes, I think we're gonna keep the host-specific flags, just like kubelet kept the host-specific flags, but there are order four or five of them, not order twenty of them.
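For context, a minimal sketch of the config side of this discussion: kube-proxy's ComponentConfig (the kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration of this era) carries cluster-wide settings well, but a per-host field such as hostnameOverride has nowhere sensible to live when one ConfigMap is mounted into every pod of a DaemonSet, which is the pain point raised above. The field values below are illustrative assumptions, not taken from the meeting.

    # Hedged sketch: a shared kube-proxy ComponentConfig. Everything here is
    # cluster-wide except hostnameOverride, which would need to differ per node.
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: iptables
    clusterCIDR: 10.244.0.0/16
    hostnameOverride: node-1.example.com   # host-specific; awkward in a shared ConfigMap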
C: The question there was how long each of the things was going to take, and my feeling was that for network service mesh there's been a lot of discussion in the past, and so maybe there'd be a lot of discussion on it today. If we don't think that's the case, then maybe we can just go with the current schedule. But I'm also noticing there's a bunch of stuff down below too, so I think my question turns into more of a general one: how should we organize that?
F: My general sense, personally, as the one presenting the network service mesh stuff: I have rather a lot to present, I'd expect about 30 minutes, and there's a certain probability of running over, so we could get the other things with small blocks out of the way first. I would actually really prefer for them to be done first. I mean, I specifically have a very strong preference for the actual business of the SIG getting done first. So, for example, I see things like the long march towards CoreDNS approval and...
D: Well, that's so much better, whoever muted. Thank you. Okay, so I have the action item to repost that image, and can you just Slack me the PR number that needs the milestone, and I will push it to the top of my queue. Okay, it is put in the chat here. I might lose that after the meeting, so Slack or email, either one is fine.
B: So now we are working on the documentation, because in kubeadm it is the default. So everywhere there is currently kube-dns, there would be a path for CoreDNS, and there is documentation for a security patch, but that is not linked to CoreDNS, so we have time. It's in progress, and...
J: For example, you'd have NAT64 and DNS64 for connections out to external servers, and dual-stack ingress, where you'd have dual stack on the edge and that would load balance to v6 endpoints. So that approach would be quicker and would involve, you know, probably no API changes and such, but it would be a little bit of a distraction. So I was wondering if I could just get a feel from you.
J: Why I was asking: you know, we're targeting 1.12 for dual-stack, but that's really aggressive, I think. So I don't know if, instead of waiting, there are customers that will want to do this alternative approach; I don't have a feel for what the demand is.
G: Okay, so the motivation behind doing this: we know that CNI works very well for a lot of networking use cases, but there are some kinds of edge cases for which it is not suitable, or it doesn't have enough hooks to do what we need. So the first case is limited network availability, and our main use case for this is SR-IOV.
G: So if you have your NIC, which only has a limited number of VFs, CNI has no mechanism to advertise this availability, and when the VFs have been exhausted, a pod can still be scheduled on the node and it'll fail due to the limited number of VFs. So with the device plugin mechanism, we can advertise that number and make sure the Kubernetes scheduler is aware of that information.
G: The next case is NUMA alignment: some users have this for their workloads as a requirement, so we want to coordinate this with the NIC that the traffic is coming through. At the moment there is no planned integration or alignment of NUMA with CNI, whereas there is planned integration between the device manager and the CPU manager to make NUMA-aware resource decisions. And then, finally, we have no mechanism to manage device...
G: ...cgroups with CNI. So in some SR-IOV cases we have to use a privileged pod, which has access to all of the devices in the cgroups, whereas we would like to limit this to only the device that they've been allocated. So that's kind of the main motivation behind creating this SR-IOV network device...
G: ...plugin. I know there have been a couple of different proposals on this in the community (a proposal from Red Hat, and Fabian and Peter also proposed documents), so we're not proposing another, different mechanism; we're kind of building on and adding to the existing work, and our main goal is to create a unified story for network devices. So our SR-IOV network device plugin is hopefully a reference implementation of a good way to unify your story for networking with devices.
G: So stop me if there are any questions at any point, or I can take them at the end as well. There are four components to this project. Firstly, the actual device plugin itself, which implements the device plugin API. It's responsible for discovering the SR-IOV-capable NICs on the nodes, discovering the VFs they have configured, and advertising those back to the kubelet so they can be advertised as extended resources.
G: It's also responsible, at pod allocation time, for storing a mapping of pod information to VF information, which can then be used by the CNI program. So we're proposing an extension to the device plugin API to pass additional information to the device plugins. The information we'd pass is the pod UID and the container name, so that way we can create a unique mapping from pod to VF.
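To make that mapping concrete, here is a hedged illustration of what the plugin might store at allocation time, so the CNI side can later look up which VF a given pod and container were assigned. The keys and values are assumptions for illustration, not the actual data structure.

    # Hypothetical allocation store, keyed by pod UID and container name.
    allocations:
    - podUID: "d49f6bcd-example"      # hypothetical pod UID
      containerName: app
      vf:
        pfName: ens802f0              # parent physical function
        vfIndex: 3
        pciAddress: "0000:05:02.3"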
G: So then we have the CNI shim. The CNI shim is responsible for communicating between the CNI plugin and the device plugin: it communicates with the device plugin via gRPC, and it passes the CNI args (pod name and pod namespace) to the device plugin, so the device plugin can retrieve the pod UID from the Kubernetes API and pass back the relevant VF information to the CNI shim.
G: We also utilize a meta plugin, in this case Multus, to provide multiple interfaces in our pod, and then finally we use the SR-IOV CNI to do the network plumbing. We've also aligned this proposal with the network object work that's going on in the Network Plumbing Working Group, so we've used the standard CRDs that have been created. Then, finally, I have just a block diagram of the different components and their interactions.
G: So first the SR-IOV device plugin discovers devices: it discovers the PF NICs on the node, then discovers their VFs, and, as I said, it registers with kubelet, and this advertises that number of VFs as resources. Then, on Allocate calls, the device plugin stores that information. Then, when the CNI is called, Multus first gets the network CRD...
K: I think this one is an implementation detail, actually, but there are provisions: we can connect the device plugin to a particular pool of SR-IOV NICs, so that it can pick all the SR-IOV VFs from this particular pool. Or the second approach is, within this box, if you have a platform like an Intel platform, we have NICs which are specific to the particular platform, so that we know that within this box we have, say, the Fortville NICs or the Niantic NICs, so we...
G: With extended resources you can use a generic name; in our case we're using intel.com/sriov for SR-IOV. So it doesn't really matter to the user which SR-IOV NIC is on a node; they just request that they require SR-IOV. That way we can kind of support a heterogeneous cluster, in that the user doesn't have to specify exactly which device; they just need to specify that it...
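As a sketch of what that request looks like from the user's side (not shown in the meeting; the resource name simply follows the intel.com/sriov convention just mentioned), a pod asks for a VF the same way it asks for CPU or memory, and the scheduler then only places it on nodes still advertising free VFs.

    # Hedged sketch: a pod requesting one SR-IOV VF via the extended resource.
    apiVersion: v1
    kind: Pod
    metadata:
      name: sriov-test-pod
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        resources:
          limits:
            intel.com/sriov: "1"   # one VF; the pod stays Pending if none are free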
L: So I'll just quickly go over the YAML files that we have. In this setup we're using Multus as the way of implementing additional network devices, including SR-IOV, and flannel is our default Kubernetes network. So this is the CNI configuration, the configuration for Multus, and we are getting the additional network using the CRD. So this is the new spec for the CRD network specification, and then we have this...
L: Then we have the test pods here. So here we are using the new CRD spec: we just need to define an additional network, and this will be on top of the default network the pod is running on. So we are using here an additional interface on a VLAN 1000 network; and this pod, and pod 3 also, are adding to a VLAN 2000 network, and this one also a VLAN to other networks. So I'll just go over now to the demo.
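A hedged sketch of roughly what such a CRD-defined additional network looked like in this period; the kind, API group, and fields varied across Network Plumbing Working Group drafts, so everything below is an assumption rather than the exact spec from the demo.

    # Hypothetical CRD-defined additional network delegating to the SR-IOV CNI.
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: Network
    metadata:
      name: sriov-vlan1000
    plugin: sriov
    args: '[ { "name": "sriov-vlan1000", "type": "sriov", "vlan": 1000 } ]'

A pod then opts in by referencing the network by name in an annotation, on top of its default flannel interface (the exact annotation key also varied across drafts).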
L: So we have three VFs, so three pods are running. When you try to launch another pod in there, it shouldn't be able to run, because this resource is not available anymore, so we can see this one pending here. It cannot allocate any more, because there are only three VFs available in there. So if you look at the interfaces in the pod...
L: So I think that shows the demo. This is just to show that we are actually able to schedule pods based on the available resources we have for the SR-IOV VFs, and just the basic mechanism of how the PoC works. I think there's a lot of work we can do, a lot of improvement we can achieve; I'm just here to get some feedback from the community.
D: I'm concerned about the sort of back-and-forth needed between the different components, and I feel like there's got to be a better way to do it, but maybe not within the confines of what we've already built. So I'm still sort of chewing on it, trying to figure out if there's a way to make these things understand each other more directly; having an RPC interface between plugin classes feels like maybe not the right way to do it.
D: This, I think, goes back to the idea of moving the CNI plugins into gRPC endpoints; they could literally be the same process. I think that would be a step in the right direction. I'm also not sure that I completely understand the cardinality issues here: what happens if I have more than one SR-IOV network? What happens if I have different devices? I appreciate the answer; I don't think I fully understand it.
M: Really quickly, I think that is one of the limitations of the implementation today, right? There is no connectivity information being conveyed, from either the device plugin or the CNI, for the hardware. So you basically need something, somewhere, that says: I have two physical interfaces that are otherwise identical but are connected to different networks; how are they actually different? That information isn't there today; we're assuming everything is the same. Yes.
D: That is the thing I'm really wrestling with: how much support do we need there? We can go arbitrarily complicated here, or how little can we get away with? And how do we represent that, and who's responsible for configuring it? Is it a discovery system? Those parts are just not clear to me yet. Yeah.
F: By the way, if anyone wants the link while I'm talking, there's a QR code on the bottom that'll take you straight to the slides. So I've been talking to a lot of folks about the network service mesh stuff for a while now, and one of the things I've discovered is that people are all over the place in terms of their preferred level of concreteness and abstraction, so it seemed like a really good thing to start with use cases, because, you know, there was very much the question asked:
F: You know, what are you actually trying to solve? And the answer is quite a few of them, right? We've got all these problems wandering around in networking around Kubernetes that people would like to get solved. Injection of physical interfaces into pods: that's something we just saw a demo on. You've got people who want to set up virtual bridge domains: MACVLAN, bridge, VLAN domains. There are general problems that people have about getting VPN gateways from wherever the Kubernetes cluster is back to something, whether it's a corporate intranet or some other thing.
F: You've got problems around Direct Connect and/or direct interconnect, where people may have a preferred way that they would like to get out of their cloud to some network provider of some sort. You've got a problem connecting apps to pods for things like distributed virtual bridge domains, and you've got a bazillion NFV and service function chaining use cases going on. And, quite honestly, this is not an exhaustive list; it's just an example list.
F: So if you look at, for example, hardware interfaces, you can visualize this in a pretty straightforward way. You've got a bunch of hardware interfaces; you connect them to a vSwitch; you've got a pod that connects to the vSwitch as well; and maybe you've got a hardware interface that you would like to directly inject into that pod. All right, that was very much the sort of use case we were dealing with in the previous demo.
F: You've also got cases like the VPN gateway case, where I have a pod and I'd like to be able to have it phone home to some VPN concentrator for my corporate intranet, or some other specialized resource, probably outside of the cluster, that I'd like to talk to. That's definitely one that people are very interested in. And you've got an almost identical problem when dealing with Direct Connect or interconnect.
F: You've got the distributed bridge domain problem, where you've got a bunch of pods which of course start out with the normal Kubernetes networking, but you'd also like to selectively connect them to various distributed bridge domains. This is often done with VXLAN for tunneling, but frankly there are a lot of ways you can do this, mechanically speaking. And you've got an analogous thing for MACVLAN bridge domains.
F: These pictures are gonna look really similar, guys, but here you just happen to be getting these interfaces to actual VLANs on actual switches; same general conceptual thing, though. And then the NFV folks: I won't go deep into this, but they've got a bunch of things they call, traditionally, virtualized network functions, or network functions if you move to the cloud-native version running in containers. They take L2 and L3 packets as inputs and outputs, and they do something to those packets, and the number of somethings is really large.
F: One way is to be implementation-focused, and the other is to be developer-focused. An implementation-focused abstraction would sort of say: okay, what does the previous implementation of these use cases have in common? Let's figure out an abstract use case to make the current environment look like whatever that lowest common denominator of the previous implementation was. The developer-focused approach just asks the question: what problem is the developer really trying to solve? How do we...
F: How do we help the developer solve that problem while making their life as easy and pleasant as possible? Interestingly enough, this kind of approach typically involves allowing the developer to ignore those implementation details that you might otherwise deal with in an implementation-focused approach to abstracting the use case, and then, of course, you want to take whatever that easy path is and make that the abstract use case that you work on.
F: Good examples here of implementation-focused versus developer-focused would be all the stuff that happened with cloud and the VMs, where we just slapped a 'v' in front of everything, versus, to a large degree, core Kubernetes, which has done a really nice job of being developer-focused: what is the developer really trying to do, and how do we make it easy and pleasant for them to do that?
F: Well, so if we look at these problems, the problems I brought up as use cases, there are sort of two ways you can approach them as well. You can look from an implementation viewpoint and say: okay, the previous implementation was interfaces, subnets, et cetera, so let's add virtual interfaces and subnets et cetera to Kubernetes so that you can go and connect to these virtual systems. Whereas, I would maintain, the developer-focused approach asks: okay, what does the developer really want? And what the developer really wants...
F: ...in all of these cases is: they have a pod; they need a connection that carries either L2 or L3 payloads to something that, functionally, does the thing that's needed when the packets are sent. And that something could be really varied; it's sometimes the network, but most of the time it's actually not. So I'll run through some examples here. They may want connectivity to isolated resources that are outside of the cluster...
F: ...various mechanisms of protecting them from threats; allowing pods to talk to a particular isolated network, like the radio network; load balancing to things that are completely divorced from their Kubernetes implementation; connection to container network functions; connectivity to the corporate intranet; guaranteed latency and bandwidth. These are all functional things the developers want, and not a single one of them directly has to do with a subnet or an interface.
F: And if you look at it, it gets even worse, because you actually have a Venn diagram of these things. What the developer actually wants is not a connection to a particular network; there is some Venn diagram of these functional things that they want as a service when they want to go send L2 and L3 packets for these non-Kubernetes cases. So thinking about this as a network sort of confuses implementation details with the underlying network service that people really want.
F: So the network service mesh approach basically says: okay, let's look at three main abstractions we can use to try and get there. The first abstraction is to say: let's have a network service. We'll treat it as that logical something that does the thing needed when you send packets: it gives you the protection you want, the connectivity to isolated resources outside the cluster that you want, et cetera. And it's important to note that all networks provide a network service, but not all network services are well described as networks.
F: This is actually a really crucial point, and you sort of saw it on the previous slide with the Venn diagram. The second abstract concept here is network service endpoints. These are the concrete instances of the thing you connect to to get your network service. This could be a pod; it could be something external to the cluster; there are a lot of ways this could work out. In this presentation...
F: ...I'll be talking as if they're pods. And then you've got the L2/L3 connection, which is literally just the tube where I send and receive whatever payload it is, whether it's Ethernet frames, IP packets, MPLS frames, you know, InfiniBand, whatever. The way the pod sees this tube, this connection, may be a kernel interface, or it may be some other mechanism, plus the underlying transport that eventually gets me to my network service.
F: Well, so let's build out a concrete example. Let's say I want secure internet connectivity, right? So I've got a pod and I'd like it to have secure internet connectivity. So I could define a network service for this: you give it a name, provide a spec, and you have a selector against, you know, some pod labels.
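A hypothetical sketch of what such a definition could look like; the concrete Network Service Mesh API came later, so the kind, API group, and label keys below are purely illustrative assumptions to make the name-plus-selector idea concrete.

    # Hypothetical network service definition: a name, plus a selector
    # matching the pods that provide the service.
    apiVersion: networkservicemesh.io/v1alpha1
    kind: NetworkService
    metadata:
      name: secure-internet-connectivity
    spec:
      selector:
        app: secure-internet-gateway   # pods with this label provide the service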
F: So this has a lot of advantages over a multi-interface approach. It talks semantically about what the developers want. It's independent of the implementation details. It allows you to shift to thinking of this dynamically, rather than being pegged to the various kinds of orchestration complexity that you'd have to do in the background to get the service you want. And it's payload-agnostic, so any L2 or L3 payload works: IP, Ethernet, MPLS, and whatever is invented next year.
F: So if you look at a simple implementation of this, you might have a pod that provides it, with the secure-internet-gateway label on it, so you'd end up with an L2/L3 connection to that from your pod, and that pod is connecting to your VPN concentrator in your corporate intranet. That's a really simple way it might be implemented. But life doesn't stay simple, and so you can imagine people wanting to do some aggregation to make the provided service richer, because, bringing back our Venn diagram...
F: ...whatever it is your VPN concentrator speaks, and then you connect to your VPN concentrator. Now, the truth of the matter is, from the pod's point of view this is still the same network service, secure internet connectivity, and so the pod literally (good, was there a question? cool), the pod literally should not be able to tell the difference between this and the simple case. Thinking about it this way, the network service mesh approach has a number of advantages over a multi-interface approach. Number one: everything looks the same to the consuming pod.
F: ...all the time, without requiring complex, inferential, manual orchestration of all the things it takes to do it. Three: you can support L2 and L3 connections at any time, not just at startup but at any time during the pod's existence. So if I have a pod that's app=firewall, and I want any connection coming into it to carry L2 and L3 packets, I don't have to figure all that out at startup time.
F: So you can imagine this gets even more disaggregated: you've got your gateway, firewall, IDS; someone decides they want to do L7 load balancing, because they want to go to a number of different, potentially, proxy pods; those proxy pods may make different decisions in terms of where they would like to send your L7 traffic, which may result in going to different VPN gateways, to different VPN concentrators. And again, this looks like a single network service to the pod that's consuming it, but who really does the work varies.
F: So, getting there: how would the network service mesh approach go about solving this problem? We're going to build this up progressively, starting with the node-local case. For the node-local case, we just presume that whatever network service endpoint we are trying to reach for a network service is running as a pod on the same node.
F: Now, if you think about this as an analogue to what we do with TCP connections: we literally have nothing in existence for establishing L2 and L3 connections, something analogous to TCP, if you will, and so we need to introduce something that does that work for us, that does the control plane behavior for that particular case. In this case, we call that the network service manager (NSM); you just run it as a DaemonSet on your nodes. So your network service endpoint indicates that it's exposing a channel...
F: In other words, it's open for connections of a particular type to provide a network service. Then a pod decides that it wants to request a connection to that channel. It sends a gRPC request over a UNIX file socket to the NSM, which then forwards that to the network service endpoint. Once it's accepted, the NSM goes and talks to whatever your data plane is, be it the kernel or the vSwitch, and creates an interface for this purpose, or some other kind of thing like a vhost-user or memif interface; it doesn't really matter.
F: So let's talk a little bit about what's being communicated as part of this negotiation between the network service endpoint and the network service manager. Because we're doing this with gRPC, and we're doing it connection by connection, we have tremendous flexibility, so we can attach metadata to the channel: labels that basically allow us to distinguish the things being offered.
F: You obviously have to provide the name of the network service you're providing. You can give a preferred, ordered list of mechanisms for how you would like to get your end of the tube or connection: kernel interfaces, vhost-user, memif, or others. And then, when the NSM comes back to whatever pod is providing the network service, it can provide any sort of necessary information there before it injects it. And this is another interesting and important point.
F: The network service can optionally, on the accept of the connection, provide parameters to be passed on to the requesting pod, for things like addressing, routes, and all the other kinds of things that you might otherwise want to handle manually. So, for example, in our VPN instance: if I have a VPN concentrator and I need to add prefixes, things that should be routed to the intranet, I don't have to go and deal with redoing all of my sort of fragile manual orchestration; when that happens, the VPN concentrator tells the VPN gateway pod.
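A hedged sketch of the information flowing in that request/accept exchange, rendered as YAML for readability; the real exchange is gRPC messages, and all of these field names are assumptions rather than the actual NSM schema.

    # Hypothetical request/accept exchange between a pod's NSM and an endpoint.
    request:
      networkService: secure-internet-connectivity
      preferredMechanisms: [kernel-interface, vhost-user, memif]
    accept:
      selectedMechanism: kernel-interface
      parameters:
        ipAddress: 10.10.1.2/30
        routes:
        - 192.168.0.0/16    # prefix to be routed to the intranet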
F: Actually, it's the simplest way to be able to tell different L2 and L3 traffic apart. It turns out you have to go through massive machinations, if you only have a single tube, to figure out what to do when, say, three different MAC addresses arrive, or sixteen different IPs arrive, and figuring out how to demultiplex those back out to get to the originating pods for the reverse traffic can be a really, really fiendishly complicated game if you do not have some way of segregating them. This is just a convenient way of segregating.
C: In what cases were you thinking that a pod would want to request a network service endpoint after it has started? I'm just thinking that, in the past, as far as I can remember, we've sort of considered pod networking static from the start of the pod. Are we discussing changing that at some point? Since that is kind of an update to the, you know, pods are small and start quickly, and if something goes wrong, they just get recreated.
F: Let me see if I was insufficiently clear about something: none of this actually touches CNI or the existing Kubernetes networking. That stuff works; we don't want to break it, all right? But what we have here are situations where events occur after pod startup that may require the pod to be able to support different connections. The simplest one to imagine: say I have a network service endpoint, and this network service endpoint is my VPN gateway. Clients are not likely to be started at the time that the VPN gateway comes up.
F: So, we've gone through, if you're looking at the flow, the negotiation between a network service endpoint and an NSM; we've talked about that. If you've got a pod that's trying to connect to a network service, when there's a request connection it can say things like: this is my preferred list of mechanisms I would rather use; maybe I prefer kernel interfaces, but I could also do vhost-user.
F: It may express things like affinity preferences ("please, if possible, connect me to something on the same node"), and it may also communicate device claims, like "I would actually really like to get connected to this network service via a physical interface, please", and there's a whole separate deck on how that might interact with the device plugin. And then on the response, again with the accept connection, optionally the NSM can pass through the parameters from the network service to the pod, about things like addressing and routing and other kinds of things that are relevant.
F: So the next step up would be doing this between nodes. This looks very, very similar to begin with: you get the exposed channel, but in this case you advertise the network service endpoint to the API server, just like, when an endpoint comes up, you may have an endpoint for services in Kubernetes.
F: Then, when someone requests a connection on a different node, their NSM can go look up network service endpoints, make a selection decision, and then communicate NSM-to-NSM to get the connection set up. And again, this looks very, very similar: you get the injections of interfaces. The difference here is that we get a creation of the tunnel on the receiving end, and then, when the connection is accepted on the originating end, you get the injection of the interface and you also get the creation of the other end of that tunnel.
F: In general, you would need to know things like: how do I reach an NSM that I can go talk to about getting a connection here? Because, keep in mind, NSM one is doing the lookup for the endpoint; it has to figure out who NSM two, the one it's talking to, is. So that's the sort of information that you would end up storing in the network service endpoint, just like endpoints currently store some reachability information.
F: Mechanically speaking, it absolutely could, and it could actually let you do different flavors of existing Kubernetes networking, so there's an interesting space of possibility there to explore, but it's explicitly trying to be in a position of being orthogonal, if that's what the community decides.
F: That's going to depend very much on how your data plane is handling that, you know. Data planes: there's a huge variety in what people do at a data plane level. It might be helpful to go to the next slide to get a sense of how that might end up. Yes, this slide. You are, in a sense, negotiating back and forth.
F: The one requesting a connection will communicate some information about its preferred list of mechanisms, because there are a bunch of them and the number keeps growing, depending on what your payloads are. And then, when you get an accept of the connection, there's some indication of the selected mechanism and whatever those mechanism parameters are. So that sort of ends up being this; you could actually include, as I said, the pass-through parameters for addressing and routes and whatnot for the pod.
F: The nice thing here, and this sort of gets to your question, is that because we are writing gRPC to do this, we can provide the necessary information to do smart things. We probably don't want to over-specify it, but we can put the necessary information there. And because the NSM is a control plane, and you want to be agnostic as to the data plane, the real question is: how smart is your data plane? Not: how smart is the network service mesh? Okay.
F: Yes, it's a very good question, and this is another argument I wanted to make really clearly, because I've heard this comment made a lot. The network services really do form a mesh. When I think about a mesh, what I'm thinking of is a collection of systems that can connect directly, dynamically, non-hierarchically, and cooperatively to accomplish a task, and that is sort of dynamically self-organizing and self-configuring. So this is just pulling forward from the slide I gave earlier.
F: What you really do have is a collection of these systems being pulled together, and if you imagine each one of these pods exposing the network service that's being requested by the other, what you effectively (I'm missing a couple of connections here, apologies), what you effectively wind up with is a peer-to-peer negotiation between these, as they consume each other, in order to provide the overall network service that's perceived by the pod. So, the question: does that satisfy any of the mesh detractors, that I'm actually talking about a mesh?
D: Today, with the service mesh sort of being hot and sexy, it's difficult to talk about this without people sort of saying: well, but what about Istio, what does it mean for this? It's neither here nor there. So, I'm the one noting we're over time, so we should wrap. On the one hand, it feels like it's very niche and it can sort of do its thing and not bother anybody, which is great; on the other hand, it feels like an opportunity to also possibly converge some concepts.
F: I have other decks where I have endeavored to actually lay out how you might go about doing this; obviously there's no time for that here. I do sort of want to close, as I often prefer to, with some appreciation for folks. So, you know, Frederick and Kyle have been furiously typing code for this; Prem Shankar and John McDowell have been working on use cases; John has also been doing similarly esoteric code that may eventually come home.
F: Mike has been great about insisting on precise expression of ideas, and this is infinitely more comprehensible as a result. Dan and Chris have been extremely encouraging and supportive, and have provided good introductions. Tinh and Matt were really, really patient: listening, asking questions, providing feedback and sanity. And then, just generally, we have meetings going on right now, weekly, and the broad community for network service mesh has been great about feedback, support, and advocacy. We would love to see you guys at that meeting; they're at 8 AM Pacific time on Fridays, so we've got one tomorrow.
F: So the desired outcome immediately here is to get feedback from the community, some of which you guys have provided, and to get broader engagement, so we can try and solve all of these problems in a way that's converged and sane and doesn't just look like an accumulation of band-aids. And then, effectively, I think it behooves us to stay in connection with SIG Network and see how we can collaborate together going forward, while we are absolutely capable of being orthogonal.
F: In this deck there is a link to the repo; if someone could drop it, or I'll stick the link in there in the meeting minutes. There's a QR code if you want to haul out a camera and just take a picture; everything takes you to the same place, and that's where we've got the code, the meetings, the use case docs, and a bunch of the slides.
C: I think the thing I like about this, and I know it's something that Tim has talked a lot about in the past, is that it does focus more on the developer, and that's something we always kind of wanted to get to with the multi-network approach. So thanks for thinking more about that and trying to keep pushing that forward.

F: Thank you. I'm glad that... and I'm...