From YouTube: Network Plumbing Working Group Meeting 2017-12-21
Description
2017-12-21 meeting of the Network Plumbing Working Group
A: So I'll start. I'm Dan Williams, I work for Red Hat, and I'm also a co-lead of SIG Network and have an interest in this particular topic. So I thought I would try to organize this working group, get us to focus a little bit more on this, and push things forward. So whoever's next, I guess Peter is next.
D: Okay, I'm Mike Spreitzer, I'm from IBM. I'm working on implementing traditional cloud technology, virtual machines and virtual networks, leveraging as much of Kubernetes as I can. We currently have running code based on a fork of virtlet that we did this spring. It does run virtual machines with a dynamic set of network attachments on each VM, and each VM, since it's virtlet, appears to be in a pod. We also have a dynamic set of storage attachments. All of this is currently totally out-of-tree code. I'm interested in, you know...
A: All right, sounds good. I think that's pretty much everybody, and some people also introduced themselves in chat, which is fine too. If I couldn't quite get all the details into the agenda doc, it would be great if you added yourself there to correct any mistakes I've made in the introduction section.
A: So, yes, thanks Kellan. To continue, let's basically just do a quick charter review. I think I sent most of this out to the SIG Network list originally, but just to review it really quickly: my suggestions for the charter of this group are to develop specifications and proofs of concept, in addition to extensions to Kubernetes, in support of expanded networking capabilities for pods.
A: If possible. So if that sounds good to everybody, we'll just go ahead with that, and again we can obviously discuss anything on the mailing list as well if people have issues or want to add things. For short-term goals, which I also reiterated on the mailing list and at KubeCon and in some of the face-to-face meetings a number of us had together, I thought we should use something like Multus CNI or CNI-Genie or some of the other multi-CNI plugin implementations that are already there, standardize CRDs and annotations for those so that anybody writing these kinds of plugins can use them, and then perhaps write a reference CNI plugin as well. That would be a short-term goal for the group, hopefully over the next couple of months.
A: And then explore how we can get these ideas upstream into Kubernetes in a way that doesn't significantly impact the API. I should have put a link in this document to the discussion we had back in, I think it was August or September, the previous multi-network meeting held in the Kubernetes SIG Network meetings, which actually enumerated quite a few of the points. If you are unaware of that meeting, it is in the SIG Network agenda, and there were some direct notes taken from it, which I think I had taken.
A: Those should be fairly detailed and should bring up some of the concerns that Tim, Hakan, and others had expressed around changes to the API and the scope of what those might have entailed. And for medium to long term, we want to make sure that we develop something we can potentially contribute back to Kubernetes, whether that's actual API improvements or, as Kubernetes evolves into the future with more extension points, perhaps even just something like standards, or even developing...
D: As I said, I'm not interested in doing anything that's not dynamic, so I might, you know, kibitz or participate in an offhand manner if what we're doing here isn't dynamic, but that's what I want to see. In my opinion, if we're aiming for dynamic, then starting with something that is inherently unable to do dynamic is just a mistake.
A: I would agree with that. I think we have to keep that in mind, especially given the conversations at KubeCon in the deep dive. I think you brought up that question at the deep dive, and we talked about it with Tim, and I recall he wasn't...
D: My problem is I need to support customers who are very brownfield, and they need a traditional IaaS cloud, and I'm trying to figure out how to satisfy those customers. They need traditional sorts of VMs and virtual networks; you know, Istio at layer seven isn't going to solve the problem for them, even though it may be what we think everybody will want in the future. So there's a real, sharp question here.
H: We had the same problem. I mean, we're really working with layer 3; we want to be able to route these networks and so on, and so we have routing protocols actually running in the pods, and hot-plug without stopping BIRD is something we absolutely need. It really meant we had to make time to put it out there, which has been a little bit stressful since we got back, but it's not that hard to do; that's sort of what we noticed while I was implementing it.
H: And I think that's where the boundary is. What we said before is that the cluster network is immutable, but the other networks should be allowed to be mutable, because they're not really part of the spec as it is today; they're only annotations consumed by different sorts of plugins. As it is released today, I would say the behavior is not defined, whether you can change those annotations, and in the worst case I guess we can leave it like that, but I would prefer to have it more defined.
A: All right, I'll take that as a yes. Okay, so I think we have representatives from most of the current proofs of concept and implementations here, so maybe we go through the list really quickly, and some of those representatives can just say a couple of words. I don't think we need to say too much about each of them. I tried to put some notes in here about at least Multus and CNI-Genie, based on some of the code review and feature review I'd done; if I got any of those wrong, please correct me. But let's just start it off with Multus. If each person could give just a really short description of what it is, what the goal is with that particular plugin, and then a very high-level view of the things you think it does differently from the other implementations, that would be great, and let's try to keep it fairly short if we can. So, Kural, if you want to do Multus.
C: I think previously we did some demos of Multus working in the SIG Network community. I have about four points to make about Multus. The first point is that it's very simple: Multus reuses the concept of delegate plugins, which is similar to Flannel, and the code is very straightforward and easy to understand.
C: There is no hard-coding and there is no need to add additional plugins, because it's very generic; it just does multi-networking. One additional thing it does is store the information about the networks during creation and deletion, because it has to store that information to make sure that, if the network object or the config file is changed, it can still retrieve that information back.
C: The second point is that whether it's a CRD or a TPR doesn't matter, because Multus uses the self-link approach to find the network object. The third is that it also supports a chaining mechanism, so Multus can call Multus; that is one of the use cases. And the fourth point is default networking, so default...
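The delegation model described above, where one meta-plugin does no networking itself and instead hands the work to a list of delegate plugins, can be sketched roughly as follows. The config shape and field names here are illustrative, not Multus's actual schema:

```python
import json

# Hypothetical meta-plugin config: a cluster network (flannel) plus one
# extra attachment (macvlan). Field names are illustrative only.
RAW = """
{
  "name": "multi",
  "type": "meta",
  "delegates": [
    {"type": "flannel", "masterplugin": true},
    {"type": "macvlan", "master": "eth1"}
  ]
}
"""

def delegate_types(raw):
    conf = json.loads(raw)
    # A real meta-plugin would exec each delegate's CNI binary here;
    # we just report the chain it would run, in order.
    return [d["type"] for d in conf["delegates"]]

print(delegate_types(RAW))  # → ['flannel', 'macvlan']
```

The point of the design is that the meta-plugin stays tiny: everything network-specific lives in the delegates, which is why Multus needs no hard-coded per-plugin logic.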
C: We have one customer who is already using it in a field trial, where the major use cases are NFV. Actually, we have three customers which are already using Multus in production, basically for NFV use cases. We created Multus basically to divide the control plane and the data plane, and it just does that particular job.
C: Yeah, so I'll speak for CNI-Genie, I guess. I know Kevin; Kevin is the developer for Genie. Basically they have some features related to Flannel and Romana, because they have something like network telemetry: they get the information about the interfaces, and after getting that information they automatically select a network. So they have this particular telemetry feature, and that's why the code has a little bit of hard-coded handling for how each CNI plugin differs. Other than that, it also does multi-networking.
B: Ours is still in development right now, so it's probably not so useful yet. I'm also a user of CNI-Genie, and I can tell you that it's quite simple and generic to use. It also supports chaining multiple CNI plugins, but in the current version it returns the CNI result from the last plugin in the chain.
O: Okay, yeah, the Knitter plugin is being used in our internal products now, and it works well so far. As the meeting notes said, it includes three components. The monitor runs on a single host; it can be co-located with the master or on another host, that would be okay, and it interacts with the network infrastructure as a manager.
O: It manages the networks for the users and can modify the networks dynamically. Say some user says, "I want two more networks for my application"; the manager says, "Okay, I will add them for you," and when the manager is ready, the user can configure their application to use the new networks or the existing networks. And the monitor runs as an observer to the active agent running on each node.
O: It will maintain the network objects, which lets the operator use the CRD very easily, and the networks required by a pod are completely allocated and available when the pod starts. Knitter can also invoke other third-party plugins rather than doing everything itself. So that's why it's simple, and I think that's all for me for now. Okay.
I: So this was just designed for PoC or some demos, not as a product, and it has a node agent for managing the IP addresses. This is because this CNI plugin creates the interfaces without the accompanying IPAM management, and it also has a way to invoke the existing IPAM plugin workflows.
D: Sorry, I didn't actually understand that very much. Do you mind if I ask a couple of questions? Go ahead.
D: So a network is not represented by an object in the basic desired-and-reported-states style of Kubernetes. So you must have something more like an imperative API, where someone can request creation of a network and get back some kind of response, and maybe do things like list networks and read networks and stuff like that.
D: I'm sorry, maybe I wasn't clear. I'm trying to understand the nature of this network manager and what its interface looks like. Common in the world are two different styles of interfaces. The older one is an imperative style where, for example, you have operations to create networks, delete networks, list networks, read networks, update networks. And then there's the Kubernetes style, in which the interface semantics are focused on an object that has both a spec and a status, or, as one might describe it in more generic English, desired and reported state.
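The two styles Mike contrasts can be illustrated with a toy object. All of the names below are hypothetical, not an API that existed at the time of this meeting: the user writes only the desired state (spec), and a controller fills in the reported state (status).

```python
# Kubernetes-style object: spec is what the user wants, status is what
# a controller reports back. Group/version and fields are placeholders.
network = {
    "apiVersion": "example.org/v1",
    "kind": "Network",
    "metadata": {"name": "net-a"},
    "spec": {"cniConfig": {"type": "macvlan", "master": "eth1"}},  # desired
    "status": {},                                                  # reported
}

def reconcile(obj):
    """Controller: observe the spec, act, then report what happened."""
    obj["status"] = {
        "phase": "Ready",
        "observedType": obj["spec"]["cniConfig"]["type"],
    }
    return obj

reconcile(network)
print(network["status"])  # → {'phase': 'Ready', 'observedType': 'macvlan'}
```

In the imperative style, by contrast, the create/list/read/update/delete calls themselves carry the semantics and there is no standing object for a controller to reconcile against.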
O: It's something like the Network object that was discussed in the former multiple-network proposal. In fact, we have a filter for users to create, delete, or update networks, and the network manager will receive the user's request and update the network object; the update is made by the network manager.
A: All right. With respect to hot-plugging, is there anything more there to discuss for a minute or two, Mike, or should we put that at the end? Something I was just thinking of during this conversation was that if we want to try to support hot-plugging, that might affect how we would implement any particular reference plugin, so that it would be able to monitor events coming out of Kubernetes and perhaps dynamically attach networks as they come and go.
H: So we used annotations; we have more networks in the spec. We have what's called a live object that is listening to changes on the API server, and when it sees a pod's networks change, it calls Multus to add and remove those interfaces as they are added to or removed from the spec. That happens as soon as you change the spec, to get the networks where they should be.
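The watch-and-reconcile mechanism described above boils down to diffing the networks a pod now requests against the ones it currently has, then hot-plugging the difference. A minimal sketch, with illustrative names:

```python
# A watcher sees the pod's requested networks change (e.g. an edited
# annotation) and computes what to plug in and what to unplug; a real
# implementation would then call the plugin (Multus, in the description
# above) once per difference.
def diff_networks(current, desired):
    to_add = [n for n in desired if n not in current]
    to_remove = [n for n in current if n not in desired]
    return to_add, to_remove

add, remove = diff_networks(current=["cluster", "net-a"],
                            desired=["cluster", "net-b"])
print(add, remove)  # → ['net-b'] ['net-a']
```

Note that the cluster network appears in both lists and is never touched, which matches the earlier point that the cluster network stays immutable while the extra networks may change.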
D: What I'm interested in is, while a pod is running, being able to add and remove network attachments. The central abstraction I'm thinking about is network attachments, and that is neither an IP address nor a network. In a world with dual stack, a network attachment may bring you a couple of IP addresses, or maybe more, because IPv6 likes to hand out lots of addresses. But anyway, it's a dynamic set of network attachments.
F: Sounds good to me. I think it's the same in our case as well: the idea is that the container itself might need more network attachment points. Like Mike said, for example, if you have a container running some kind of L2 or L3 functionality, we should be able to dynamically bring up a network without restarting the pod. That's what hot-plugging means to us, at least. Okay.
D: To be utterly clear, and I think we should be very explicit about this, I think we should really clearly separate two issues here, or two subject matters. One is a multiplicity of networks; the other is a multiplicity of network attachments on a pod. I've been focusing on the attachments, because that's the most new and challenging thing, since it gets into the business of the kubelet. A multiplicity of networks is entirely apart from Kubernetes and is easy to add with CRDs or aggregated API servers or whatever you want.
D: So I'm trying to say, let's be very clear and careful: a multiplicity of networks is a separate matter from a multiplicity of attachments on a pod. Of course, the latter can only happen if you've got the former, but defining and managing the networks is a clearly separable problem. So, what was your question about, just...?
D: So I do want to recognize that an attachment can bring multiple addresses. Even with IPv6, you'll typically get three addresses out of an attachment: there's a link-local address, a permanent address, and a shifting, time-obscured one that changes regularly, for the privacy of the system. Okay, so do not equate an attachment with an IP address.
D: So yes, an attachment is more like a conduit. Look, to be clear, we're talking about the Linux kernel, the interface between the Linux kernel and applications. We know what that is: it's network interfaces. So a network attachment appears to the inside of a pod as a network interface with however many IP addresses it's got. Now, I know there is some thinking that no, we should be more abstract, and I...
D: You know, like Tim, I don't want to say no, but at least my needs are actually pretty concrete. But I do also want to agree with the people who are interested in high performance. I want to run QEMU inside a pod, and ultimately I don't want any more plumbing in there than necessary. I want to be able to get QEMU doing user-level I/O directly down to the underlying device, so that's even less abstract rather than more abstract. But okay.
A: Does that sound like everybody else's understanding as well? And would it be useful for this group to specify de facto standard CRDs that all of these plugins, like Multus, Knitter, CNI-Genie, and others, could use, such that the user experience for a Kubernetes application user or cluster administrator would be consistent between most of these network plugins?
D: A couple of things. Sure. Tim was complaining in the September 27th meeting, which unfortunately I missed, but his big concern was portability, and to that extent we do want to pay as much attention to portability as we can. Another concern he brought up was that he didn't want to introduce an unnecessary extra layer of virtualization.
D: He seemed to be assuming that a multiplicity of networks would be implemented by some kind of generic layer that does VXLAN or something like that to create the multiple networks. At least from my point of view, I'm interested not in adding any extra layers of virtualization; I want to be able to plumb through to the existing underlying virtualization, yeah.
A: To clarify that point quickly: my understanding was that this was tied intimately to portability, because if you're talking about plumbing into certain underlying networks, then taking an app that requires that and moving it to, say, AWS or Google Cloud would require them to implement the same functionality as you have in your deployment, which I'm assuming is a private cloud or whatnot. Google or AWS would then need to create an overlay underneath to provide that functionality, which would obviously be less performant, right?
D: I don't want to do that, right. I think a better approach would be to say, I mean, if I recall correctly, AWS and Azure and Google do have concepts of virtual networks, right? The concern was maybe that we might mandate more functionality than their networks have. So I would rather say something like, and this reminds me a lot of the Ingress problems that we've discussed, right?
D: Maybe we establish some kind of a common core, which is kind of a lowest common denominator, unfortunately, such that all the cloud providers can provide virtual networks with this much functionality, and we have some kind of a network class concept. Okay? So if you want to write a portable application, you use the portable network class to describe what you want from your virtual network, and all the cloud providers can provide it.
A: You know, a fairly simple Network object as a v1, that would essentially just be a CNI configuration, or a pointer to a CNI configuration on disk, that this reference plugin, or any of the plugins people currently have, would call. It's essentially a simplification: extracting the common pieces of what CNI-Genie and Multus already do and attempting to standardize something along that direction.
A: Again, tossing it out to the wider group: does this seem like something that would be useful if you currently write a plugin? I'm thinking of Knitter and CoCoNet and Multus and CNI-Genie. If we ended up doing a kind of de facto standard for CRDs, is that something you would be interested in adding support for to your plugin?
A: So then, in that case, for next steps on this, I just posted some proposals into the agenda doc. I will split these things out into a separate document. I know we already have the multi-network document from Joji, but I feel like we should start a little bit fresh, since this is kind of a refocus, and then we can discuss there and also discuss on the mailing list. Again, I was trying to keep things fairly simple and take a superset of what the plugins do.
A: So I will take that as an action item, and we can potentially move on. I did notice that CNI-Genie has a way to write the results back to the Kubernetes API for each pod, which I understood as containing interface details and IP addresses. I assumed that was a workaround for the fact that Kubernetes doesn't care about any of that right now, but there's got to be a way to get some of that back.
A
So
a
question
I
wanted
to
propose
to
the
group
was:
would
it
be
also
useful
to
standardize
on
an
annotation
for
pods?
That
would
contain
the
result
that
comes
out
of
CNI,
whether
that's
the
combined
result
or
just
the
single
or
so
I
mean
essentially
anything
that
comes
back
from
the
Terminator.
Excuse
me,
the
CNI
ad
call
would
somehow
be
represented
in
the
cube
API
as
an
annotation,
so
it
could
be
used
elsewhere.
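A rough sketch of that proposal, assuming a hypothetical annotation key and a simplified result shape (no standard key existed at the time of this meeting):

```python
import json

# Whatever comes back from the CNI ADD call gets serialized into a pod
# annotation so other components can read it. Key and shape are
# illustrative only.
cni_result = {
    "interfaces": [{"name": "eth0"}, {"name": "net1"}],
    "ips": [
        {"address": "10.1.0.5/24", "interface": 0},
        {"address": "192.168.2.9/24", "interface": 1},
    ],
}

pod = {"metadata": {"name": "demo", "annotations": {}}}
pod["metadata"]["annotations"]["example.org/cni-result"] = json.dumps(cni_result)

# Any consumer (DNS integration, status reporting, ...) can recover it:
recovered = json.loads(pod["metadata"]["annotations"]["example.org/cni-result"])
print([ip["address"] for ip in recovered["ips"]])
```

Storing the full result, rather than just one IP, is exactly what would let consumers see the second attachment's address here.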
A: The other thing I'd point out is that if we do end up working on pod multi-IP addresses through this working group, and I know the IPv6 people are very interested in that, as are a number of people here, then as that effort spins up, a number of us will try to be involved. Perhaps we can find a way to combine those two efforts and use something there for this purpose as well.
A: ...all implementations. I remember that coming up a number of times in the Kubernetes multi-IP address discussions as well: perhaps there could be a special address for health checks, or that address could be tagged differently if a plugin returns multiple ones. That clearly is something we need to address, because Kubernetes right now either de facto defines or requires that the kubelet be able to health-check a pod. That's part of the contract, essentially, with Kubernetes.
H: That also goes hand in hand with the sort of address-space separation, so at least that address needs to be part of the Kubernetes cluster networking. We may also need to look into how sidecars are managed among the networks. I guess my preference would be to say that they're not, that sidecars only live on the cluster network side and not on the other networks.
A: All right. I had listed some proposed requirements for that reference implementation, and then also some nice-to-haves at the bottom. We can figure out the details of how this gets started, where it lives, and who's going to contribute on the path forward; we can punt that to the mailing list over the next couple of weeks. Anyway, some of the proposed requirements I had are around bringing the plugins up to modern standards. I know there's the PR, for example, for CNI-Genie...
A: ...that does result chaining. I think that's something very important to have, along with support for config lists, and CNI versioning is going to be important, because if we have multiple plugins, we want to make sure that they're somehow compatible with each other. If you have a plugin at the 0.1 version and you also want to call a 0.3 plugin, we've got to figure out how to actually handle that; we may even want to specify, as a de facto standard, how that's supposed to work.
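The version-compatibility problem described here can be sketched as picking the newest spec version both sides support; the version lists below are illustrative:

```python
# Pick the newest CNI spec version supported by both the caller and the
# delegate plugin; None means they are incompatible and the caller must
# refuse or fall back to some default.
def negotiate(caller_supports, plugin_supports):
    common = set(caller_supports) & set(plugin_supports)
    if not common:
        return None
    # Compare numerically so "0.10.0" would sort above "0.9.0".
    return max(common, key=lambda v: tuple(int(x) for x in v.split(".")))

print(negotiate(["0.1.0", "0.2.0", "0.3.0"], ["0.2.0", "0.3.0"]))  # → 0.3.0
print(negotiate(["0.1.0"], ["0.3.0"]))  # → None
```

A meta-plugin calling several delegates would run this negotiation per delegate, which is why the speaker suggests the behavior itself may need to be standardized.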
A: We also want to make sure that plugins support, or that this de facto plugin supports, capabilities, which are necessary for things like host ports. For some of the thinner plugins, if you have a plugin with a long-lived process that is actually watching the Kubernetes API, it's not as important, but it could be used for other things in the future as well, so you probably want to support that.
A: I am one of the CNI maintainers, so I'm looking at this with that in mind, and some of the things I've added here are features that we've added to CNI over the past year or so that I've seen some of these example plugins don't support. So what I'd like to do, for whatever the reference plugin is, or even if people want to implement these features in their own plugin...
A: ...is to see these kinds of things in those plugins: config lists, proper results, capabilities, and correct version handling. The other thing I'd like to see eventually is Kubernetes actually consuming the results from these plugins as well, and I think that's going to be a lot more important when we add multiple-IP support to Kubernetes. Right now, what happens, at least with the current scheme...
A: ...is that kubelet enters the container's network namespace and just pulls the IP address out, but it only pulls one out. It would be much more useful, I think, to consume the results from the CNI ADD operation, because then we can also get things like: hey, what if the network wants you to use specific DNS servers for this pod?
A: So I think there are a lot more options there that Kubernetes could have in the future, but of course that requires that plugins actually do the right thing in the first place before they can return the result to Kubernetes. And given the makeup of this group and some of the other people interested in multi-network who are also interested in CNI stuff, I don't think we'd necessarily have a problem with communication between CNI and this group or SIG Network. Does that answer the question, Bowie?
A: We also need to make sure that whatever future solutions we have that go upstream into Kubernetes take network policy into account, but at least for the next month or two, for this reference plugin, I wasn't necessarily thinking it would do that, because this plugin might call other plugins that already implement network policy on their end, if that makes sense. But I will certainly add that to the nice-to-haves.
A: Apparently it was not utterly clear, so let's clarify that. Does that sound okay, Pratik? No? Thanks, okay. And we're about at time. We can continue for a few more minutes if people have more things to discuss; otherwise we can take things to the mailing list. It seems like we've got some general agreement on at least a couple of things. First, that yes, it seems useful to specify CRDs, so we'll take that up; I will create that document to kick that off. I'll say...