From YouTube: Kubernetes SIG Network 20170323
Description
Kubernetes SIG Network meeting 2017-03-23
C
So, for those who missed the last couple of meetings: I couldn't get enough bandwidth to review it in time, and so it missed the window for GA for 1.6. I think all that's left now is to dot the i's and cross the t's. One thing that came to light was that it looks like most of the implementations don't actually support named ports yet, and so we need to get some feedback as to whether that was just an oversight or whether there's a reason that's not supported. Like, should it be stricken from the API, or should it be supported? Named ports are supported in most places throughout the API, so we should figure out whether we need to fix that, or what the fix for it should be.
C
I mean, the truth is, I think it's very seldom used, and part of me wishes we had just stuck to our guns and removed it, because we actually tried to remove it at some point and we got pushback from people who had some active use cases at the time. I think at the time the use case was that etcd was renumbering its ports, and people were doing live updates using named ports. It was sort of a cool trick that we could have a service stay alive across that.
C
Yeah, so the question is: would it be compatible later? Could we add named ports back? You know, the idea that made it interesting in the first place was probably also a mistake, but it's a mistake that has been carried through in a couple of different places. So, v2 — one day we'll do a v2, right. I'm fine with striking it if we can get an answer as to whether it would be forward compatible later.
C
Let's cross that bridge when we get there. Okay, I'm okay with potentially striking it entirely. You know, I'm still of half a mind to try to remove named ports throughout the whole system. It is complicated to implement — I asked someone who had to implement the service controller for it, and it is complicated; it's just sort of cute. Okay, so Daniel will figure that out, and then we'll figure out whether we want to abandon it as part of that same PR. Yep, okay. Next: allowing all TCP, but not UDP, ports.
C
Right, because there are marshalers and whatever — we're all broken. So, a slight update on this: the marshalers are still mostly broken, but this topic has come up in other facets of the API, and some people are insisting that it is fixable and that they want to try to fix it, because for them the distinction matters. I think we were just being lazy, mostly. Right. Well —
C
So I would suggest that we proceed as we are, with erasing the distinction, and then, if they actually do fix it before, say, two-thirds of the way through the next cycle, we could consider adopting the more regular semantics. That way we're ready for GA either way: if we get the sort of pure way, then we can do it the pure way, and if not, we're fine.
G
If we're going to plan right now to keep the change, we should change the document — we should fix the recommendation to describe the way things are working now.
C
Oh yes — so we currently spec it as a pod selector or a namespace selector. It has come up in the taint and toleration API that they have different semantics that they chose, and I suggested to David Oppenheimer that we reconcile if we can. Both of us are pre-GA, but we're a little further along than they are. Well, actually, I —
D
During demos and talks about the code, when trying to provide sample network policies, it really confuses people that one of the pod selectors affects traffic from all namespaces, but the other one only refers to addresses within a single namespace, because of how this magic ends up working. Like, the destination pod selector affects incoming traffic from other namespaces, but the source pod selector only selects within the current namespace. And maybe I'm not explaining that well.
G
So let me see if I've got it right. The idea is: the one that selects the subjects of the network policy selects the subject and applies the whole policy, even if that policy is about traffic from outside the namespace, whereas the one that is selecting peers will select peers in the same namespace. Right. And then I —
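The selector semantics described above can be sketched as follows. This is a simplified, hypothetical model — the pod and policy dicts are stand-ins, not the real Kubernetes API objects — but it captures the asymmetry being discussed: the top-level `podSelector` picks subject pods in the policy's own namespace (and the policy then governs even cross-namespace traffic to them), a peer `podSelector` only matches pods in that same namespace, and a `namespaceSelector` admits traffic from other namespaces.

```python
def matches(selector, labels):
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(policy, dst_pod, src_pod):
    """Decide whether src_pod may send traffic to dst_pod under one policy.

    policy:  {"namespace": ..., "podSelector": {...},
              "from": [{"podSelector": {...}} | {"namespaceSelector": {...}}]}
    pods:    {"namespace": ..., "labels": {...},
              "namespaceLabels": {...}}  # labels of the pod's namespace
    """
    # The top-level podSelector picks the *subjects*: pods in the
    # policy's own namespace.  The whole policy applies to them,
    # even for traffic arriving from other namespaces.
    if dst_pod["namespace"] != policy["namespace"]:
        return False
    if not matches(policy["podSelector"], dst_pod["labels"]):
        return False
    for peer in policy["from"]:
        if "podSelector" in peer:
            # A peer podSelector only ever selects pods in the
            # policy's own namespace.
            if (src_pod["namespace"] == policy["namespace"]
                    and matches(peer["podSelector"], src_pod["labels"])):
                return True
        if "namespaceSelector" in peer:
            # A namespaceSelector admits traffic from pods in any
            # namespace whose labels match.
            if matches(peer["namespaceSelector"],
                       src_pod.get("namespaceLabels", {})):
                return True
    return False
```

So a peer `podSelector` that matches a pod's labels still denies that pod if it lives in a different namespace — which is exactly the confusing behavior described above.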
J
Hi, this is Omro; I'm at Cisco. We've been looking at adding IPv6 support, and we had a few questions. The first question is just how we handle the controller manager putting out the cluster CIDR. Right now there's just a cluster-cidr option where you can put in your CIDR; would we have to add another option, like a v6 CIDR?
C
Yeah — the node CIDR allocation will chop that up. So I should say: one of our guys in Poland, Wojtek, who's doing all our scalability work, is also looking at breaking the need for there to be a single contiguous cluster CIDR, because it doesn't fit well with ever-growing clusters. So maybe in the 1.7 cycle that will be changed — maybe, if we're lucky. So the —
B
The other thing that struck me about this is that, depending on the network plugin, the plugin may not need that kind of allocation scheme. It seems allocate-node-cidrs is good for some of the cloud providers, so we should keep that, but a lot of our plugins may not even, you know, allocate addresses to a node like this — especially for v6. Yeah.
E
For now, there is another value — which some people end up setting to the same thing — that is passed to kube-proxy to masquerade everything going in and out of the cluster, and I always find it very hard to explain to people why there are two values supplied to two different components. It's not stored anywhere, it's not accessible through any API; it's just a thing that you'd better get right.
C
Agreed. So, keeping in my vein of things people are working on: there's this dynamic config proposal that started with kubelet configs but will eventually expand to cluster config, which would be the obvious place to stick such a variable — except I don't want there to be such a variable. I want to find the places that assume it is one contiguous block and break them. Well —
G
Yeah, I was just saying that, as far as I can tell — when I looked at this a few months ago, maybe more than a few — the service CIDR wasn't really necessarily used for anything. At least in the case I was looking at at the time, where the client would always supply the service IP address, the service CIDR was just a check that was going to get in my way.
C
Regarding v6: I think that, at least for the medium term, it has to become a plural. Well, it'll have to be a new flag that is pluralized, and we may want to think about whether it needs to be a little bit more structured than simply a list. — Well, I think of it as just a list: it can have up to two CIDRs, one v6 and one v4. — Or maybe we just add a second flag for the v6 CIDR, like the original suggestion. Again, just to confirm.
B
Like, you know, for example: with OpenShift we don't use any of the allocate-node-cidrs stuff, because we have our own mechanism for doing allocation for the nodes — both the pod and node addresses. So it's not really useful to us, and I can imagine it's probably not useful to other plugins either. But, yes, it is useful to the cloud providers that happen to use kubenet, and —
C
Well, I'm also hoping that kubenet dies a violent death. But the question remains: if somebody were doing v6, even with kubenet, is it the done thing with v6 to just randomly allocate blocks per machine, or is there some auto-calculation that is done there? I seem to recall some standard stuff that was different than v4 because of the wide address space.
B
I mean, usually when you assign an IPv6 address to an interface you always get a /64. You would typically have something larger than a /64 allocated to each node, and then it would allocate addresses out of that larger block. But —
B
I mean, for example, you could allocate, say, a /56 to every node, and then that node assigns addresses to the pods, and those pods each get, like, a /64 or something. Or, I mean, if you don't want to go that route — if you actually do routing between the pods and are not relying on the link-local type stuff — then you could assign —
B
So, moving on from that: once you actually have /64s or whatever delegated to the nodes, we then need to make sure IPv6 gets through all the rest of the kube API. I added some other notes down below — specifically, multiple pod IP addresses, and some of the CNI-related items as well.
B
I mean, I guess, just going over some of those: currently pods can only handle one IP address at a time in the API, so the pod status only has space for one. I think there are various proposals floating around out there for that, and actually I think the multi-network proposal that Georgie put together — which we've all been commenting on — touches on this as well, because his proposal proposes to add some fields to the config data structure.
N
I think we have to support at least link-local and one other IPv6 address, but you probably wouldn't be using the link-local for anything. And then the next question is: would you want the API to be able to display that link-local, or do you just leave it kind of hidden and just use the address that's assigned?
O
We have a way to bring up a link-local v6 address on our pod, and we have at least one user using that to reach out to a dual-stack endpoint outside of the cluster. We basically do a NAT off the host. Maybe I'm not describing it perfectly here, because I'm not so familiar with it, but I just wanted to bring it up.
B
Okay, so: various CNI stuff. Are any of these PRs active? Anybody on the call working on the CNI PRs for IPv6? I know some of those were put on hold for the multi-interface changes in 0.3.0, but I think that, now that those have gone live, we can probably restart some of these PRs, ideally get them rebased on top of current master for CNI, and then keep pushing forward. Yeah.
N
One thing we wanted to make sure is well understood is this /64 boundary that we're kind of assuming in CNI: that there will be 64 bits available for the interface identifier. And that means the subnets can be no smaller than a /64 — or, conversely, that the prefix length is no more than 64 bits.
B
I guess I was thinking more along the lines of: yes, if it was doing SLAAC and had some kind of router advertisements or whatever, then yes, it would need the 64 bits. But if you follow the general current model — like with the link-local again — where it basically generates and allocates those in kind of a round-robin fashion —
G
Well, I think it really ties back to a broader discussion of multi-tenancy — how to make multi-tenant DNS — and we've discussed this several times. I'm seeing basically two approaches. One is to make a DNS server per namespace, and the other is to make shared DNS servers that discriminate based on the client address. The latter has the possibility to be more efficient, but it's a little fragile, because the trouble is you can't necessarily rely on the client address. I don't know — I mean, that's the basic trade-off here; I'm not sure.
C
All right — I was on mute again. I have complete sympathy with the idea; I don't know in my heart of hearts what the right implementation is yet. I thought I saw Bowie on here, and he has been driving a lot of the DNS lately. Yeah, I'm going to sort of delegate the decision in the end to Bowie, really, you know, from our side. But yeah, it's a tricky one.
F
I think, Dom — this is Bowie — we are concerned about the resource usage, because, as we discussed last time, you could conceivably have a ton of namespaces. So maybe it is possible to share the server for the caching layer in some sense. But I guess one thing that would be good to know is how much of it would be baked in — whether it would, you know, force you to have multiple servers, one for each namespace, or not.
C
You know, with respect to the multi-network stuff: if we're allowing people to sort of go into the multi-network space, we've got to make sure that DNS is resolvable and reachable from, you know, every reasonable network setup — well, from every pod, right. From every pod, which might be on disjoint networks, we've got to make sure that the DNS appropriate for that pod is reachable by that pod.
B
It depends — you're in control of the containerized application. If you decide to replace your DNS implementation with your own inside the container, then you can do whatever you want, with as many DNS servers as you want, as long as you can reach those servers from the pod. It's just that the glibc implementation and the resolv.conf format inside the pod are limited to three.
B
It's the image that your container is built on, so that's the application programmer's choice. Correct — if you happen to build your application with glibc, which a lot of containers are built with — glibc-based ones are just built off, you know, a Linux-style filesystem and programs — then you only get three. Right.
B
But I mean, what I mean is that if you have five networks, and they all have DNS servers, and you have an application built using a DNS resolver that can deal with all five servers, then you're great, that's fine. But if you have five networks attached to a pod, and that pod has an image built from, say, a glibc base, then you have to make a tough decision: how many do you actually get? Do you get three of the five? Well, I was not —
B
So, I mean, there's a potential conflict with the cluster policy there, because I think currently you can define a cluster DNS server, and it's not specific to each pod, or per pod given its network. But maybe that's one place to start: allowing the network plugin to provide DNS information, and to actually somehow consume that inside the pod.
G
Yeah, regarding that cluster config for the DNS: I think the correct interpretation of it is really that this is a choice the application programmer makes, and when he says nothing, it defaults to the cluster config. And I think "when he says nothing" really means "whatever the cluster operator wants" — and if the cluster operator wants to actually pull the information from the CNI plugin, then that's a perfectly legitimate thing to happen, I think.
G
It's partway there, yes. So that's a way by which the cluster operator could choose either of two different DNS implementations — or, potentially, you know, let's just do it that way: the cluster operator could choose whether he wants DNS servers per namespace or per cluster, and arrange that the CNI plugin returns the appropriate DNS client configuration.
G
Well, let's start with where we are today, which is: you've got one CNI plugin that gets invoked, all right. You know, the cluster operator could arrange that that CNI plugin returns the one DNS client config to inject into the pod. And in the future case, where we go to multiple networks, each one of the CNI invocations can optionally return some DNS client configuration, and the kubelet needs to merge those together — hopefully not ending up with a resolver list larger than three — and that's what it puts in the pod. No?
I
One of the things that bothers me is that it's possible for developers who are unfamiliar with this glibc limitation to run into this problem and not know why it's breaking on them. Do you think it would be possible at all, if we detect that they're using glibc and injecting more than three DNS servers, to stick a warning in Kubernetes saying "more than three — you should work out some different solution"? Yeah.
B
I mean, the way most people would encounter this is if they have a VPN, and that VPN has a different set of information than their upstream name server from their ISP, and so you do some kind of match on the name and direct a certain set of queries to the VPN server and the other set to the ISP.
C
That's the mechanism that we use. And we're running out of time — we have run out of time for Georgie's topic — but, you know, there will be some description of networks, and maybe it fits there, or maybe it fits in CNI. But yeah, I mean, if we're looking at this as a per-tenant thing or a per-network thing, then clearly there's a coupling.
B
For multiple networks — should we try to continue the call for a little bit, or should we take the discussion back to the mailing list? I feel like we've had some good discussion on the mailing list in the past week, so maybe that's acceptable. Otherwise, I'm happy to stay on for a little bit and talk. I have —
L
Okay, so I think the only reservation I have about your proposal is that it's a little bit more complicated than I would have liked — you know, having to explicitly call out some of these things through CNI. It almost assumes that CNI never supported multiple networks, so it almost sounds like we are adding that to CNI anyway.
L
From what I can see, now we are talking about explicitly calling out, you know, multiple-network capability in the CNI config. That sounds a little, you know, incorrect to me. So I was thinking we treat the future, you know, as the clean scenario, and for backward compatibility we do the simplest approach — that's what I was thinking.
B
I can't think of why the spec in any way would deny that interpretation, so that is an interesting way. The only reservation I had with auto-generating the configuration for a plugin was that it precludes somebody from adding additional information to the configuration JSON that gets sent to the plugin. But things have changed somewhat since then in the proposal.
B
Okay, I'm sorry, I haven't caught up with the latest reply. So I guess, you know, kind of reviewing what I had said in the mail: I feel like, in my mind, there are two different cases here — thin plugins and thick plugins is what I call them. You can think of thin plugins as the existing CNI plugins, like bridge, ipvlan, macvlan; they're basically one-shot.
B
A thick plugin, I imagine, would probably talk to some controller — whether that's Neutron or something else, or, you know, whatever. And so what I think Georgie is proposing is that the kubelet would call the plugin and somehow pass the network name to the plugin, and then the plugin would take that network name and figure out all the other configuration information — subnet, IPAM details, tenant ID, that kind of thing. That makes sense, actually.
B
That's the way that thin plugins traditionally worked for CNI, and the way that a lot of the node-local stuff — like kubenet and the cloud providers that currently use it — has worked. And you can think of, you know — I mean, for example, maybe a cluster admin wants to do both a Linux bridge for one network and a cluster-wide network for the other one — for, you know, maybe a local data plane of some kind — and then the cluster-wide one could be —
B
— you know, a control-plane network, or some kind of management network. So I'm trying to keep some of the options open, yeah, to accommodate both types of plugins, because I feel like there's a pull, and I can think of some use cases, particularly around NFV perhaps, where that might be useful. All right, so that's kind of where I'm coming from: I'm trying to keep the options viable for both of those, because they're useful. Okay.
B
And I think, you know — I mean, we don't have to do anything special to accommodate the thin-plugin case versus the thick-plugin case, and one of my goals here is to try to make it, I guess, as streamlined as possible. Some of the proposals don't necessarily do that, but I'm a little bit concerned that if the kubelet is generating a bunch of configuration and trying to exec the plugin itself from that directory, that's not really — I guess it's sort of just using, you know, the CNI interface.
L
That's the goal, yeah, that I'm trying to meet — because, especially with multiple networks... With a single network, I think we could probably get away with having a file on every node, but with multiple networks it could get complicated very quickly if you were to do that instead, in certain scenarios, which is why, you know, I want to avoid that. But then I'm not saying necessarily that we have to auto-generate the config; we'll try to avoid all that.
K
Hi — I was looking at the use case regarding the thin plugin and regarding the thick plugin. My question is: are you guys planning for the network configuration to come from a different source? For example, if you have an SDN, you have network configuration parameters — like the network name, or those details, or IPAM information. Can't we send that information as part of annotations? If you pass that information as part of annotations, it's much easier, right?
B
That didn't quite seem the right thing to do. But then the second thing is that that would basically be an opaque blob of data, because not all network plugins would care about that configuration, and there wouldn't be anything interesting that Kubernetes could really do with it, because only certain plugins will even use the CNI configuration for IPAM or any of the other attributes. Especially in the thick-plugin case, most of those plugins handle IPAM themselves, and that does not go through Kubernetes at all.
L
Yeah, the other problem with having the configuration sent from the orchestrator is that different use cases and different plugins will require different configuration, right? So it's kind of hard to standardize on a given configuration. That is the other issue that we always run into whenever we try to push config down from the top for everything.
B
I mean, that does bring up an interesting question: if you do decide — and I think maybe this is partly the Intel use case — if you do decide to use what I've been calling thin plugins, and you want to dynamically compose networks using thin plugins, then how do you do that without installing config files on every node whenever a new network is created?
B
I don't think we have. I think it's not about how the information gets stored; it's whether the information is appropriate to be stored at all — at least, that's where I'm coming at it from. So if we decide that it is appropriate to store it inside the kube APIs and config, that might be the way to do it, but I think we have to answer that larger question first.
S
This is David from Intel. We're for that phase-one approach because, as you can see, there's a lot of scope here, so it might be best just to start with the thin plugins and then think about maybe the thick plugins later on. Otherwise you're just making a lot of hassle up front and not really getting anywhere with it. So maybe that's a suggestion.
B
And, you know, I think if we move forward with the existing networks proposal, that doesn't preclude some other approaches and additions later. What I think we've tried to do is make the multi-network proposal work with what exists. If you guys don't have a link to that document, we can certainly send one around again.
B
I think there's a link in the agenda notes as well. But it tries to be as simple as possible, and as common a base as possible, so that every plugin can work with it — and so we can try to get things into Kubernetes rather than continuously, like, bikeshedding about them. Maybe we just start there, and then we keep discussing how to enhance it going forward. Sure, okay. And I guess, David —
S
So we actually have three phases of this. What we describe as phase one is really just getting the multiple networks there so that it works. Most of the use cases for our customers are actually in phase two, which is the physical network — all these things like SR-IOV or DPDK. Now, we didn't share phase two with you because it's just going to create too much hassle at this point.
S
So, rather than criticize the staged approach, let's just start with this, and we can build on top of it from there on. So I don't think we're introducing anything new — it goes along with phase one and what we put up in the other documents — and, you know, we just want to work with you to, you know, get to a better place. Phase one: we have some basic functionality in place.