From YouTube: Network Plumbing WG Meeting 2018-10-11
Description
Kubernetes Network Plumbing Working Group meeting for 2018-10-11
B
This particular issue is based on a number of requests we got from users. They were reporting that they weren't able to change the interface type. For example, if they're using flannel, they want to change it to macvlan for some pods, and for certain other pods to a veth interface. The reason, they said, is that they want to change the interface in order to consume certain services on certain interfaces, because there is a limitation that you can't expose a service on anything other than eth0.
B
So, for example, if you take flannel: flannel always uses the veth pattern, so the default interface, eth0, will be a veth pair. You consume certain services using that interface. And they have another use case in which they want to consume certain services on flannel, but with an ipvlan or macvlan or a different kind of interface.
C
But isn't that flannel's problem?
D
Let me share... can you see my desktop?
C
I can see it.
D
Yes. So previously, Multus used the delegates field in its JSON config file. This JSON file has, just a sec, the name, the CNI version, this kind of stuff, and then we create the delegates field, and then each entry creates a child attachment. In the JSON file you put something like type: flannel, and then a name, and so on. That's the background.
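The delegates-style config being described can be sketched roughly as follows. This is a minimal illustration, not a real cluster config; the field values and network names are assumptions made up for the example.

```python
import json

# Sketch of an old-style Multus CNI configuration using the "delegates"
# list, where each entry names a delegated plugin such as flannel.
multus_conf = {
    "cniVersion": "0.3.1",
    "name": "multus-demo",
    "type": "multus",
    "delegates": [
        # The first delegate is treated as the default cluster network.
        {"type": "flannel", "name": "flannel-net"},
        # Additional delegates become extra interfaces in the pod.
        {"type": "macvlan", "name": "macvlan-net", "master": "eth1"},
    ],
}

print(json.dumps(multus_conf, indent=2))
```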
A
I think another way to describe this, Mike, would be that we've had requests upstream in CNI to change the way that a particular CNI network attaches a pod based on whether it's a VM or a container. We're still trying to figure out in the CNI maintainers group how that should actually happen. But the argument here, I think, is that it's the same logical network, but you can think of different kinds of things being attached to it.
A
And that was kind of what was proposed upstream for some of the CNI plugins. We talked about that as the maintainers group and thought it was a little bit too specific, and that there were other use cases for some of that functionality. So we were trying to come up with a more generic way of allowing this to happen. But just using that as an example, I think, is a more specific use case of what Tomo and Kural are trying to talk about here, which we're still talking about.
D
I see. So currently Multus uses the delegates config, but in this proposal we are likely to add a clusterNetwork and a defaultNetworks configuration instead of delegates. So now I'm thinking that once we introduce these, we keep delegates in Multus, but as kind of a transitional phase or something.
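The newer-style config being proposed can be sketched like this. The field names follow the clusterNetwork/defaultNetworks direction just mentioned, but the concrete values (network names, kubeconfig path) are illustrative assumptions, not taken from any real deployment.

```python
import json

# Sketch: instead of inline "delegates", the Multus config points at
# named NetworkAttachmentDefinition objects.
multus_conf_v2 = {
    "cniVersion": "0.3.1",
    "name": "multus-demo",
    "type": "multus",
    # NetworkAttachmentDefinition providing eth0 and the pod IP.
    "clusterNetwork": "flannel-net",
    # Attachments added to every pod in addition to the cluster network.
    "defaultNetworks": ["macvlan-net"],
    "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
}

print(json.dumps(multus_conf_v2, indent=2))
```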
C
Sorry, can you back up again? In this Multus config... so I understand Multus as an implementation of this multi-network thing, and its job is to read a pod's annotations and call out to the appropriate network attachment methods. So what's delegates doing in the Multus config?
D
The delegates field in the config is used as the default. So if a pod is created without any network annotation, the delegates are used. Currently, if flannel and macvlan, for example, are added as entries, the first item, flannel, is recognized as eth0, the pod's own interface on the default cluster network. Kubernetes uses this network for the pod IP and for things like liveness probes. So that's the current implementation.
A
I'm just trying to think how we can bring this up one level and kind of brainstorm how it might fit into the CRD spec. And then, if we can get a proposal for that, we can bring it to the wider group, see if it works for other use cases too, and see what everybody thinks about it.
A
You know, one thing we need to be careful about, if we do put any of this into the spec, is to put some parameters around the networks that actually get attached, because Kubernetes requires certain things of all pods. At least at this point, two pods on the same cluster... basically, all pods should be able to talk with each other, because they're all hooked up to the cluster-wide default network. Well, not exactly, because there are some tenancy issues and all that, but yeah.
D
So I also confirmed that in some cases the kubelet or API server may be bound to a certain IP address. So here, if the default network used by the pod is flannel, maybe 10.1.1.1 is the only IP address that is bound at that time, even though we are adding some macvlan network whose address is not bound at that time.
A
Yep, exactly. So basically we are allowing something to happen that currently is not allowed to happen, which is why, if we do this, we should make it very clear that you should make sure it's the same logical network, and that's kind of where we leave it.
C
Exactly right. That could be done either in admission control or in the implementation of the multi-net, or, well, of whatever interprets this; it doesn't even have to be the multi-networking. So now we're getting into an interesting thing here. We have this definition of NetworkAttachmentDefinitions, and they can be used either to make additional attachments or as alternate ways of making the primary attachment. And so maybe this becomes something that the kubelet itself... actually, let me think about this.
C
Yes, it's the kubelet. The kubelet needs to look at this annotation; this has actually changed, so this is relevant to the kubelet. So the verification could be done either by the kubelet, or in admission control, or in the strategy. The strategy has a validation, so the pod strategy could check this too. Although the problem there is that it's checked at one point in time; it might be more appropriate to check it at kubelet time, where it could be very different.
A
So it looks like we've got at least some idea of where to go with this, and maybe Kural and Tomo, you guys could develop a more concrete proposal for the spec and then send it to the list, or bring it up on a further call, not next week's call but the call in two weeks or something like that. Does that sound reasonable?
G
You know, we are trying to do something. We have an agent which is sitting there and observing whether somebody is adding annotations into the system, and it tries to do the hookups afterwards. So that's kind of the prerequisite for why we wanted this. And so, if you don't have a need for dynamically adding things in there, I don't think it's relevant straight away. So if the client specifies it, you need to make sure that that thing is unique somewhere.
F
Like Suresh says, we generate a name, and we sort of hold the context. The names are generated, right, so our function will find out if we have a collision and then generate a different name. That, like you say, forces uniqueness, and once you have that, you sort of use that name.
C
I'm still lost as to the problem. I understand that the names need to be unique, but that's not exactly what you're talking about. Some part of the system needs to be able to predict a name that's created by some other part, and this is the part that's getting confusing.
G
So that's the part I was talking about first, Mike. We have something called a pod agent that's listening to changes in the annotations, and that's pretty much the part: they are doing something dynamic, and that's something I want to demonstrate so that we get people into the context. That's something we plan to do at the next meeting. So maybe this week we can table this until we can get into the context, and then we talk about why we do this.
C
But the point I'm making, and I think maybe I'm starting to understand the problem, is that in our annotation syntax we do have the option to name each attachment. I guess your problem is you want to write a pod agent that works in both cases: the cases where the attachments are named and the cases where they're not named.
G
Yep.
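The two annotation forms being contrasted can be sketched as follows. This is a simplified illustration of the NPWG `k8s.v1.cni.cncf.io/networks` annotation (real parsing in Multus also handles namespaces and interface suffixes in the string form); the network names are made up.

```python
import json

# A pod agent that works with both annotation forms must normalize
# them into one shape, roughly like this sketch.
ANNOTATION_KEY = "k8s.v1.cni.cncf.io/networks"

def parse_networks(annotation: str):
    """Return a list of attachment dicts from either annotation form."""
    annotation = annotation.strip()
    if annotation.startswith("["):
        # JSON form: entries may carry explicit "name" / "interface".
        return json.loads(annotation)
    # String form: comma-separated NetworkAttachmentDefinition names.
    return [{"name": n.strip()} for n in annotation.split(",")]

# Both forms yield the same normalized shape:
print(parse_networks("macvlan-conf, bridge-conf"))
print(parse_networks('[{"name": "macvlan-conf", "interface": "net1"}]'))
```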
G
Yeah, good point, Mike. I'll just run with that, it's true, and see if we get stuck somewhere, and then get back; so maybe that is an easier solution. Okay, I can't immediately think why not, but just give me some time, I'll think it over and then get back to you. Sure.
F
The demo basically shows what the inside of, I guess you can say, connecting this base router looks like, and how we can handle as many networks as we would want. In our network solution we will be running up to 16,000 per switch, or four different networks, and it looks fairly fun when you look at it from inside Kubernetes.
B
Okay, cool. So last time I was boring you guys with a lot of text, so this time what I did is I just put up an example of how it looks overall. For the proposal's assumptions, let's say that we have two networks in the cluster, network A and network B. Network A is an overlay network, for example it could be flannel or Calico, and this is the network which will be attached to the eth0 interface inside a pod.
B
Network B is a kind of private network, or not exactly private; it's also a cluster-wide network, but it's maintained separately. I mean, it's their own network; it's not flannel or Calico. It could be any network, using private networks. Both network A and network B are spread across the cluster, so it means that pods on them can communicate with each other.
B
So in this scenario, what I'm trying to propose is this: okay, you have the NetworkAttachmentDefinition, and I'm trying to introduce a field called cluster IP address. This cluster IP address is similar to the service one; maybe the name is misleading, so it could be a service CIDR range, which you give to the API server. So this part is clear, right? Why am I introducing this service CIDR?
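The proposed field can be sketched like this. Note the speaker says the field name itself is still an open question, so `serviceCIDR` here is a placeholder assumption, as are the object name and range; only the NetworkAttachmentDefinition kind and apiVersion follow the existing NPWG CRD.

```python
import ipaddress

# Sketch of a NetworkAttachmentDefinition carrying a per-network
# service CIDR for a multi-network service controller to allocate from.
net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "bridge-conf"},
    "spec": {
        "config": "{}",                   # CNI config JSON (elided here)
        "serviceCIDR": "10.96.100.0/24",  # proposed field, hypothetical name
    },
}

# The controller would hand out cluster IPs for this network's
# services from that range:
cidr = ipaddress.ip_network(net_attach_def["spec"]["serviceCIDR"])
print(cidr.num_addresses)  # 256
```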
B
So that's why I'm exposing this NetworkAttachmentDefinition name in the service. Then, when I try to create this service, it will invoke my multi-network service controller, and it will get the IP address from this particular range. It maintains this range because it's created here; it maintains this range and gives you a cluster IP address, so you can create a service with a cluster IP address in Kubernetes.
B
So in this case it doesn't have any endpoints yet, because you are not creating any pods in this case. Then you go back and create your deployment, the same example, the hostname deployment. In this case you mention the network annotation, and here you're saying it's bridge-config-one.
B
You have to redirect the traffic to net0, so this particular routing information will be injected into the pod network namespace. After the creation of this one, my multi-network service controller will be observing this pod spec, and once the network status has been populated, it will get the IP address of this network, bridge-config-one, and then it will populate or create endpoints for you.
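The controller behavior just described can be sketched roughly as below: watch pods, read the network-status annotation populated after attachment, pick out the IPs for the named network, and build the Endpoints addresses from them. The annotation key and status shape follow the NPWG spec; the selection logic and all names/addresses are simplified assumptions.

```python
import json

STATUS_KEY = "k8s.v1.cni.cncf.io/network-status"

def endpoint_ips(pods, network_name):
    """Collect pod IPs on a given secondary network for Endpoints."""
    ips = []
    for pod in pods:
        status = json.loads(pod["metadata"]["annotations"][STATUS_KEY])
        for attachment in status:
            if attachment["name"] == network_name:
                ips.extend(attachment.get("ips", []))
    return ips

# Two replicas, each with an attachment on the bridge network:
pods = [
    {"metadata": {"annotations": {STATUS_KEY: json.dumps(
        [{"name": "default/bridge-conf", "interface": "net1",
          "ips": ["10.10.0.%d" % i]}])}}}
    for i in (5, 6)
]
print(endpoint_ips(pods, "default/bridge-conf"))  # two replica IPs
```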
B
So in this case it will create the endpoints with those pod IPs. In this case I created two pods, two replicas, so basically it will get two IP addresses from the two pods and it will create the endpoints for you. And by doing this, it's very simple: you don't need to change anything in the load balancing or in kube-proxy.
C
Let's just talk about routing, just plain routing without any services. Let's just talk about... we do have this multi-network attachment business; forget services. Suppose I've got a pod with multiple attachments. It seems to me that if two pods are going to communicate directly, no VIPs, just directly with their extended addresses, that already has to be getting set up correctly.
B
Yeah, yep.
C
Right, so they may or may not be reachable. So the point is that, even regardless of services, in our baseline proposal (and if we didn't write this, it was a mistake) we must say: we've got the NetworkAttachmentDefinition, and the CNI plugin that gets run needs to set up the routing so that IP packets addressed to an address in the secondary network get routed through that secondary network interface.
B
So you can do it, actually; there are certain mechanisms by which we are obtaining that. In our case, what happens is that all the pods will have a management plane and a control/data plane. So in this case we are completely isolating the networks; the networks can't talk to each other. That's how we are not facing this issue, because each pod will have two networks and it's fine.
B
It's working fine, and we got a customer use case in which they said, look guys, this won't be able to run, because we can't expose the services on the net0 or net1 interface. Because of that, we have to maintain a large amount of IPAM information so that all the pods can communicate with each other. So I understand.
B
In this case, what is happening, back to the question about why I'm introducing the cluster IP address: this is an open question I put out, about the mechanism for how you can route. If I want to consume this service from inside a pod, then I have to make sure that the request will go to the net1 interface instead of eth0. That's why I introduced the cluster IP address.
C
Yeah, there's trouble both ways. But it seems to me the trouble, if we're going to say that there are virtual IP addresses, is that you're immediately faced with this problem: kube-proxy does its job by putting stuff in the host's main network namespace, which might not necessarily handle the packets involved.
A
I mean, that sounds very abstract.
C
Yeah, a little more concretely, the way I understand it; actually, I haven't looked at the IPVS thing, but in the iptables mode and the userspace mode of kube-proxy, a critical part of what they do, both of them, is that they inject iptables rules in the main network namespace.
C
Okay, if you've got a CNI plugin that does its work by creating a network interface in the network namespace of the pod, the namespace you're dealing with, which in general is not the host's; it's some other network namespace. You've got a network interface in there, and if the packet goes kind of directly into some kind of encapsulation or something, then the host's network namespace never sees the packet and doesn't do the IP handling of the packet.
A
This kind of, like, management port then hits kube-proxy, gets sent over the node's network connection to the other node, where the endpoint that kube-proxy has selected lives, and then goes back through some kind of management interface into the overlay. I mean, that's obviously one way to do it.
C
Right, right. An operator who is deploying such a CNI plugin could also deploy an alternate or additional solution. Sorry, let me go back and answer the most basic question here: are we supporting overlapping IP addresses here?
A
Here's the thing: you can't use kube-proxy as is. You could write one, which, I guess... you know, the goal of all the stuff we're trying to do here is to not modify Kubernetes. Yeah, I mean, you can imagine a kube-proxy that was multi-network aware and then just directed the traffic to a particular endpoint through a given interface. Obviously that doesn't help us much. Yeah.
A
On the 25th, it looks like we've got two demos, Suresh's and then the Intel one, so I guess maybe everybody should plan on 20 minutes tops, because we probably have a couple of other things to discuss. And if it turns out we don't have an additional agenda, then maybe we can do like 25 minutes or something like that, including questions. Okay, hopefully that sounds good. Yep. All right, thanks everybody, see you in two weeks.