From YouTube: Network Plumbing WG Meeting 2018-06-21
Description
Network Plumbing Working Group meeting for June 21, 2018
A: So hopefully that addresses the issue. If anybody has thoughts or comments on that addition, let me know, either in the meeting or on the mailing list or whatnot. It's pretty straightforward, I would think, and it's optional; of course you don't have to include it. But the idea there was that, I think it was Huawei who had talked about that last time, maybe they could use that information to further prototype some of their service stuff on those other networks, and we can try to move forward the prototyping of multiple data planes and services, that stuff. So, assuming that's not very controversial: the next item was that Dan Winship had suggested, in the comments on the spec, about the "ip-request" and "mac-request" keywords in the selection annotation, his suggestion was to just shorten those to "ip" and "mac". So I just wanted to throw that out to the group. I mean, that sounds, I guess, more or less okay.
A: I guess the other question there was also that all of the keys there were "requests". Let me go through that really quickly. Okay, so we had an IP request, a MAC request, and an interface request. I think all of those are pretty much demands and not requests, so we might as well just change all of those as well.
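For concreteness, the keys being discussed live in the pod's network selection annotation. A minimal sketch, assuming the shortened spellings under discussion (the exact key names were still being decided at this point, so treat these as placeholders):

```json
[
  {
    "name": "macvlan-conf",
    "ips": ["10.1.1.11/24"],
    "mac": "c2:b0:57:49:47:f1",
    "interface": "net1"
  }
]
```

Here the short keys replace the longer "ip-request" / "mac-request" / "interface-request" spellings, and per the discussion they would be treated as demands rather than best-effort requests.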
D: He seems to think, and I tend to agree, that there's some verbiage there that makes it sound very meta-plugin-ish. What he highlighted was "the implementation then determines which additional CNI plugins to call". Anyway, I think that just seemed limiting from the thick-plugin point of view, so maybe we could take a shot at making that a little bit more generic there. Yeah.
B: I took a look at that just before the meeting also, and I noticed it's actually worse at the very beginning, right? The first sentence in the definition of the term "implementation" basically says it's a meta plugin, and that's not what we intended. The intent, I think, the reason we want to talk about an "implementation" rather than a "meta plugin", is to not bind ourselves to using a meta plugin, right? But in the presence of restriction 1.1.4 there has to be a plugin. I think the point of using "implementation" here is to have a spec that is not dependent on 1.1.4, so we should define the implementation. I suggested some wording to just be "whatever makes the consequences of this specification happen", right? And note that, as long as 1.1.4 applies, that's going to be a plugin, and go that way.
A: I sort of read it the same way, and that's probably my fault from when I did the huge rewrite. The other thing that occurred to me, though, when I was reading that and thinking about the implications, is that there are a lot of other CNI-related things kind of sprinkled throughout the spec, since we were sort of focusing on that in some ways. So it's not just this part; the spec is utterly and clearly, very explicitly, about programming CNI-like calls. Yeah, so there's a lot of stuff going on there, and if we were to take Poong's suggestion to its logical conclusion, there would theoretically be a spec for CNI-based meta plugins, and sort of a generic spec for plugin things, plugins that weren't necessarily CNI-based. I guess what I mean by that is, you know, for example, the config string in the Network spec specifically talks about, you know, CNI configuration.
B: No, this is something, yeah, I completely agree with you on that suggestion, right? We discussed this a long time ago, and we discussed: yeah, we could try for something more general, but we decided we would start with something that's just about driving CNI, I think, right? I think we should admit that and be clear about it. So yeah, the implementation pretty much has to map onto calls to other CNI plugins, because that's all we're trying to do right now. Yeah.
A: That was one of the reasons why I did not do the consolidation of the plugin part that we had talked about last time, which was another open issue at the top of the spec document. I was going to do that, but then, when I read Poong's comments, I decided to hold off on it a little bit, to try to get some thoughts from the group here about where we should move forward with that.
D: Yeah, this was an item for later in the agenda, but you know, Poong is interested in doing another reference implementation using Kuryr. Kuryr, for those who don't know, is essentially a way to connect your OpenStack deployment, with Neutron, to Kubernetes; or that's one way that you can use it. So anyway, yeah, I think that in his particular case, he's not interested in having a meta plugin that calls out to other CNI plugins. So I think that he might have been worried that he was going against the spec by doing it that way, and that brings up a point that it might.
B: I am familiar with Kuryr, at least I was a couple of years ago, when I was using it for exactly this: I had a CNI plugin that invoked, basically, Kuryr, to use a Neutron virtual network to supply Kubernetes network connectivity. So I'm a little confused; I don't see why he would have a problem with writing a CNI plugin and letting it be called by the meta plugin that needs to be today's implementation.
E: ...at the same time, because they're trying to implement multiple types of interfaces as well, like a userspace interface as well as a macvlan interface, and they wanted to plumb both of those types right into the pod in the Kuryr deployment. So I think they had a lot of trouble getting Multus to work, so they decided it's better to have the Kuryr CNI support both, to support multiple networks, or multiple interfaces.
A: And it occurred to me that it should be possible to support that use case, because, you know, if we clean up the thick-plugin stuff, then you would essentially have Multus in between, and then you would have the Kuryr plugin with a CNI config; I mean, it would basically just be the binary on disk, and then Multus would call the Kuryr plugin with the right network name. So it feels like it should be possible to support that case.
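As a sketch of that delegation (the file name and the plugin "type" string here are assumptions for illustration, not taken from the spec): Multus would select a network by name and then exec the Kuryr CNI binary found on disk with a delegated configuration, something like:

```json
{
  "cniVersion": "0.3.1",
  "name": "kuryr-net",
  "type": "kuryr-cni"
}
```

The meta plugin only needs the binary named by "type" to exist in the CNI bin directory; everything Kuryr-specific stays inside the delegated plugin.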
E: I don't know that that's correct. So I think, well, it doesn't work when you try to do multiple types; I think that's the conclusion. I think it works okay if you only use, like, macvlan, like they use today, but I think now that they're trying to do DPDK and others, that makes it work less well. I'm not sure.
E: Multiple types concurrently, correct, and that's what they're adding now. So today it's being worked on; I think Intel is working on that. They're adding basically a second type of interface in Kuryr, and that's to support fast data paths, right, so it's DPDK, that kind of stuff. We can in fact get someone from Intel to come to this meeting; well, it's not a great time for them either, but potentially at one point we can get them to talk about their findings. Yeah.
E: No, I think that works okay, yeah. It's when you call them at the same time that it doesn't work. Oh well.
C: That wouldn't surprise me very much; it's like you just need to serialize somewhere. I mean, you just can't have two things running over the same data unless you're, I'll call it, thread-safe, or whatever you want to call it, but I mean, you need to protect your resources. It wouldn't surprise me that people have such problems today, because the whole thing wasn't clear; it depends on the situation as it is.
B: That seems a little weak; I think we cannot say they may be run in parallel until we are confident that CNI plugins allow concurrency.
A: You mean the CNI spec, or the plugins themselves?
B: Well, I suppose at the end of the day it's the implementations that matter, but we usually rely on the spec too. So in some sense it's the CNI community, right? I mean, if they all believe that... oh yes, of course, they all...
A: All right, I have tried to capture that in the notes. All right, and then also, Poong, maybe for the next meeting we can try to get somebody from Kuryr to talk a little bit more about that. Okay, basically we'll probably end up punting some of the issues that Poong raised until the next meeting. Okay, and...
A: Then it won't write its file into the other place until the default cluster-wide network writes its file to /etc/cni/net.d. That's one option. Another option would be modifying all third-party network plugins to specify a different directory for their config files; then the implementation of this spec would write its config file to /etc/cni/net.d and wait for the other stuff in the other directory. But that seems less plausible, because trying to get all the network plugins to update where they write their configs is... yeah.

A: The third option is to modify the Kubernetes CNI driver to accept a network name, which would then be the name of the file that the implementation writes to /etc/cni/net.d, so that Kubernetes would wait until a given named CNI config shows up, as opposed to just picking the first one that happens to exist. So there are a couple of different options.
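A rough sketch of how the three options differ on disk (the /etc/cni/net.d path comes from the discussion; the file names, the alternate directory, and the named-config mechanism are illustrative, since the third option would require a Kubernetes change that did not exist at the time):

```text
Option 1: delay writing the meta plugin's config
  /etc/cni/net.d/00-meta.conf   <- written only after the default
                                   network's config has appeared

Option 2: third-party plugins use an alternate directory
  /etc/cni/net.d/00-meta.conf   <- written immediately
  /etc/cni/alt.d/flannel.conf   <- implementation watches this directory

Option 3: kubelet waits for a specific named config
  /etc/cni/net.d/my-net.conf    <- kubelet told to wait for "my-net"
                                   instead of taking the first file found
```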
D: ...the implementation itself to determine readiness, but that would require kube to consult it. So in this particular case, what would happen is that kube would start scheduling workloads on the node that it has determined is ready, as it knows by the presence of the config in /etc/cni/net.d, but then the implementation would just basically wait until it got whatever semaphore it needed.
D: That's right, and that will get called on pod creation, right? So that's the race there. So what I'm kind of saying, or what I would anticipate doing, is: I would have the config for the meta plugin be, you know, alphabetically first, and always live there, and be, you know, whatever, number-one dot conf. Okay...
B: I think that's a plausible approach. Perhaps I'm a little nervous about pausing for who knows how long a time, but I think we're debating something that's not a spec matter. I think this, and the technique of using an alternate config directory, are both allowed by kube and can be allowed by the spec, and it's really an implementation detail that should be left open.
A: To some degree, though. When I did the rewrite of the spec, readiness is one of the things that we had identified as underspecified, and so I specified it, and perhaps overspecified it; that's section 6.1 in the spec, and it goes into all of the detail there. Including, right: the implementation should not indicate to Kubernetes that it is ready, by writing or placing its CNI JSON configuration file in the kube CNI config directory, until the cluster-wide default network's CNI JSON configuration file is present. I think what Doug is probably requesting is: should we loosen that up to allow a situation like he suggests, where the implementation actually says "hey, I'm ready" immediately, but then blocks waiting for the default cluster-wide network to become ready?
D: ...that we've covered it sufficiently, so yeah. Hopefully we have; and it sounds like it has already brought up a few good issues. So anyway, yeah, I'm just glad that someone else is going to make a reference implementation, because we'll get more feedback on the specification, so I think it's good news for us. So yeah, hopefully we'll hear more from those guys as time goes on. That's all I've got on it.
B: I actually have a question about that. This is pushing another one of my buttons, my hot buttons, in the networking group right now, which is the pod-ready++ proposal. I don't see how someone who's implementing networks based on Kuryr can implement the network policy contribution to pod-ready++ that we have defined.
B: Sure. The concern is this: my understanding of what we're saying is that the way network policy should contribute to pod-ready++ is basically by indicating when all the filtering relevant to a new pod is in place, because pod-ready++ is about getting a new pod ready. And the problem is that the filtering relevant to a new pod appears in two places: both adjacent to that pod, and adjacent to all the remote correspondents that the pod might try to communicate with. So, to implement...
B: First off, I expect that to implement network policy in a Kuryr-based approach, you would be mapping it onto Neutron security groups. Neutron security groups provide no feedback on when they are fully implemented, so a network policy implementation that delegates the filtering to security groups cannot tell when that delegated filtering is all in place.
D: This is because there's no channel of communication between those?
B: And again, the hard part is not the filtering that goes where you apply the security group, on the VIF or port that you put it on, but on the other side: when a security group has a rule that refers to the remote peers as the members of some other security group, the problem is that the filtering for the first security group depends on the membership of the second security group, and there's particularly no feedback on when all the consequences of that have been put in place.
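To make the two-sided filtering concrete, here is a hedged sketch of a NetworkPolicy whose peer is selected by label (the names are invented for illustration). Mapped onto Neutron, the allow rule on the server pods' security group refers to the membership of a second group holding the client pods, and it is that remote-membership propagation which yields no completion feedback:

```json
{
  "apiVersion": "networking.k8s.io/v1",
  "kind": "NetworkPolicy",
  "metadata": { "name": "allow-from-clients" },
  "spec": {
    "podSelector": { "matchLabels": { "app": "server" } },
    "ingress": [
      { "from": [ { "podSelector": { "matchLabels": { "app": "client" } } } ] }
    ]
  }
}
```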
D: I will certainly give Poong a heads-up about this particular time mark in the meeting, because I think this will certainly interest him; that's a, you know, pretty interesting interaction there, for sure. I am not educated enough about pod-ready++, nor Neutron, but it sounds like an interesting scenario for sure.
C: And the question also: when you start talking about Kuryr, you start talking about, I mean, in Neutron you have, I want to say, the LBaaS interface, and there are the egress services in Kubernetes as well. How would that be able to interact with the LBaaS services in Neutron and stuff like that?
C: I mean the ingress; and we have the proxy, right, and they're doing load-balancing things. I don't know if there is anything about how that would be... ah, a little off topic, sorry, but I guess not. So I had that whole question of how networking on Kuryr really would work, and whether it would be able to sort of serve both masters and integrate in a good way. It's something I'm not scratching my head over right now, but I will; in a couple of months it's going to sit in my lap.
A: Yep, yeah, all comments appreciated. I guess the last thing, then, is bikeshedding on names. Doug and Mike have voted. Another suggestion I just thought of was shortening up "network attachment template"; maybe that's better, maybe that's not. But again, if people have thoughts on what we should rename Network to, put your +1 at the top of the spec next to the one that you like, and maybe we'll try to narrow down at least some of the choices.
B: Yeah, so, you know, shortening it would be okay with me. I think we generally have a practice of not doing that, but you know, at some point the names get so long... I'm willing to shorten them. So, since I only have one +1 here, I'm not quite sure how I should indicate that that would be okay with me.
A: I mean, I don't know; maybe just, I don't know, say that if either one of these, or one of them, works for you, put another +1 on the one you like, since it's sort of underneath the one you already voted for. Okay, we'll see if anybody else likes those better. Okay, so plain "Network" is not an option?
A: I mean, I guess it could be an option, but I think Mike's point, and I at least sort of agree with Mike, is that "network" itself is not an accurate description of this object, because this object is a description of how you attach a pod to a network, as opposed to actually being the network itself. That network might live, you know, elsewhere, like out in Neutron, or...
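For illustration, the object being named might look roughly like this; the kind and apiVersion are placeholders (exactly the names being bikeshedded), and the embedded config string is the CNI-specific part discussed earlier:

```json
{
  "apiVersion": "k8s.cni.cncf.io/v1",
  "kind": "NetworkAttachment",
  "metadata": { "name": "macvlan-conf" },
  "spec": {
    "config": "{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\" }"
  }
}
```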