From YouTube: IETF95-NVO3-20160404-1400
Description
NVO3 meeting session at IETF95
2016/04/04 1400
So we're going to get started in a minute. I just want to see if anybody here — I know John has agreed to take notes; thank you very much, John. If anyone else wants to take notes and collaborate with John, I would appreciate that. If you want to stick your hand up, I'll give you credit for it; if you want to just do it and sync up later, that's fine too!
So, first of all, please Note Well: this is an IETF meeting, and therefore all of the IETF rules apply. In particular, there are some IPR rules and things like that that you should know about and care about. If you have questions, I would be glad to answer them later, but the gist of it is that anything you do here — anything that you present or say — is a contribution to the IETF.
So please keep that in mind. As I said, the blue sheets are going around. John is taking notes. I am monitoring Jabber and Meetecho, so anyone who wants to contribute to any of the things I just said — taking notes, looking at Jabber, etc. — please do so. Of course, you know how to join the mailing list, or you probably wouldn't be here. As I said, we're monitoring Meetecho.
We've talked with Alia; we're trying to figure out how to get the work done and get the working group to move on. So we'll probably go through and update the dates on some of these milestones to be actual real dates instead of last year. We haven't worked out the details of that yet, but it will be happening. The intent has not changed.
We want to close out the working group's work, probably this year, so keep that in mind — and keep it in mind in the next couple of slides, because we have some work still to do. Broadly speaking, we tried to look at all the stuff that's going on and break it down into a couple of categories. We've made a fair amount of progress on data planes; if you look at quantity as progress, we have three of them.
What I'm saying is we don't have a lot of discussion on the control plane. I think there are a couple of drafts out there and there have been presentations, but we need more discussion on that. If there's an interest for the working group to do some control plane work, we need to do it quickly; if not, we'll move on. Either is an okay answer, but it's up to the working group to decide.
We do have some drafts, and you'll see a presentation today on a VXLAN YANG model. Frankly, we need to talk about YANG models in a broader context — around the NVE, abstractly multi-protocol, and accommodating OAM functions and all of this — so there's some work to be done there. It should be work that we can achieve; we've got a good start and we want to start to organize that now. Then there is the last point on here.
We have a co-editor for the draft in 802.1 for the VDP extension to support this, but we need participation, and we're going to try to handle it so we allow for as much participation as possible without being physically present. So we'll be having an interim in May, and at the interim we're trying to arrange to have online participation for that part of our meeting. There's a presentation up with a proposal for TLV formats, and there's a Doodle poll that was sent out — I believe to the NVO3 working group list as well as to the 802.1 working group list — for people to select times to be available for biweekly or semi-weekly conference calls on this. So please respond to the poll; we will be available after the meeting.
So, if you recall, a couple of slides ago I talked about the fact that our milestones need to be updated with dates that are in the future, as opposed to the past. One of the things that Matthew and I have been talking about doing is putting together an interoperability thread of meetings. We still have to figure out a lot of the logistics around this, but at the end of this thread of meetings we anticipate having an interop demo somewhere.
That hopefully gets some good exposure for those of you that have implementations of this. We're in conversations with the Linux Foundation, who is now organizing the Open Networking Summit; we may be able to do it there in March 2017. That seems to be, I think, a nice carrot for people to work together in the meantime to demonstrate some interoperability in things like the data plane, control plane, and OAM — and whatever else you think we need to demonstrate. So we're considering putting this thread of meetings together.
But we need feedback and, frankly, volunteers: people who want to be involved on the administrative side of setting up and organizing these meetings, and also — more importantly, frankly — people with implementations who actually want to bring them to these meetings. I know there are some implementations that should be easy, i.e. software implementations, and some which might be, let's say, a little more expensive, like big boxes that have hardware implementations and so on. So we may need to figure out how to balance all of that out over time.
Anybody who has opinions or willingness to help, please let me know today, or send an email to Matthew and me — whatever is convenient. We want your help. Just to bring this back to the milestones: the goal here would be that in basically the remainder of this year we will get through these.
So with that, I'm going to jump quickly into the agenda for today. Fortunately it's just a handful of items, and we'll work through them. We do have a little extra time, so any kind of questions and conversation, within reason, is welcome. And with that, I'm going to pass the mic to our first presenter, Erik — let me figure out how to get your slides up.
Okay. The design team member who has been leading this put together some slides that I edited a bit to give an overview of where we're at. There are some questions for the working group — in particular for the different encapsulations — at the end of these slides. But if you have questions or concerns around OAM in general, we can talk about that as well, since we have some extra time.
One thing to keep in mind is that there's actually another design team operating in parallel — a routing area design team that Alia chartered — and there are a few of us who are on both of them, so sometimes I'm very confused about which meeting I'm in and exactly what we're covering. But some of these things, marking in particular, have more discussion elsewhere; I think there's actually a presentation about the marking in the routing area.
So, if you look at this sort of picture of the different interconnectivity — where NVO3 applies and also where the OAM applies — you can think of many different cases. One of them is within one of these data center bubbles; nothing particularly new there. The other thing that we of course get is this north-south communication going out to the internet.
So in that case, NVO3 terminates in some gateway of some form towards the internet. Well, to what extent can you actually do OAM — sort of IP OAM, e.g. BFD, ping, traceroute-type things — that spans across the data center network as well as the rest of the internet? That's one of the questions. The other interesting one is that there are two ways these different bubbles are interconnected.
One is this thing called a DCI gateway, and there are qualitatively different things you can think about. One is: if I have these two data centers and I own some set of connectivity, which I control, with which I interconnect them — that's potentially different from the black arrow on top, where we're just saying, well, I interconnect them over the internet, and I don't necessarily control the resources.
I don't necessarily know about the topology and resources, etc., when following that black arrow going through the internet core; but when I interconnect them using the DCI gateway, I do. The considerations for OAM might actually be different as a result. In one case you can have stronger knowledge about the topology and resources for that interconnection — which matters if you want to be able to do things like explore all ECMP paths. Well, if it's the internet, how do you know how many ECMP paths you have? You don't; it might vary.
In other cases, you're running on top of something that you don't know about: to what extent do you know anything about how many nested tunnels you actually get in the core, about the different administrative boundaries in these topologies, and so on? And one of the key things for the working group — I'll talk more about this at the end — is basically getting semantically equivalent support from the different encapsulations. The encoding does not have to be the same, but the semantics should be, and you might have subsets of functionality.
I don't think we have the mandate that all encapsulations should support all of this OAM, but we don't necessarily want different flavors of the same thing, because that just gets confusing for the users and makes it harder to interwork things. So these pieces can be optional, but having common definitions of them may make people's lives easier down the road, particularly in terms of teaching operational people how they should use this to operate the network.
So one piece that came up — and I'm not going to go into the motivation in any detail; I don't have slides on it — but the discussion we had back in Honolulu was around having marking as some way of synchronizing counters, so you can actually try to measure packet drops by using a marking in the data plane that effectively causes some synchronization — some switching between different banks of counters. The details can be done slightly differently.
But the notion of having such a bit seems to be something that people think is interesting. If we're going to do this, I think it's important to get it right and not have different definitions of it, because people will need to build this stuff into hardware for it to be useful in hardware NVEs. Right — question at the back, Greg.
Greg: I can refer to the draft in the BIER working group on passive measurement with a marking method, which will be discussed in the BIER working group this week. There are two bits: one bit is to create batches of packets, and the second bit is to create marked packets that can be used for more accurate latency and delay variation measurement. Because with one bit you can effectively do good loss measurement, but time-based metrics measurement would not be that accurate.
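To make the two-bit scheme Greg describes concrete, here is a minimal sketch — not taken from any draft, names are illustrative. Bit A (the "color") alternates once per batch, so per-color sender and receiver counters can be compared for loss; bit B marks the first packet of each batch so its timestamps can be matched for latency and jitter.

```python
# Hedged sketch of two-bit alternate marking: bit A = batch color,
# bit B = one sampled packet per batch for timing measurement.

class AlternateMarkingSender:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.sent_in_batch = 0
        self.color = 0                  # bit A: current batch color
        self.tx_counts = {0: 0, 1: 0}   # per-color transmit counters

    def mark(self):
        """Return (color_bit, sample_bit) to stamp on the next packet."""
        color = self.color
        sample = 1 if self.sent_in_batch == 0 else 0  # bit B: 1st of batch
        self.tx_counts[color] += 1
        self.sent_in_batch += 1
        if self.sent_in_batch == self.batch_size:     # flip color: new batch
            self.color ^= 1
            self.sent_in_batch = 0
        return color, sample


sender = AlternateMarkingSender(batch_size=4)
marks = [sender.mark() for _ in range(8)]
# two batches of four: colors 0,0,0,0,1,1,1,1; first of each is sampled
assert [c for c, _ in marks] == [0, 0, 0, 0, 1, 1, 1, 1]
assert [s for _, s in marks] == [1, 0, 0, 0, 1, 0, 0, 0]
```

A receiver keeping the same per-color counters can, once the color flips, subtract its count for the old color from the sender's to get an exact loss figure for that batch.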
My suggestion is that people who are interested in this stuff follow this other work — in BIER and IPPM and elsewhere — to see whether we as a working group can follow that. I don't think we need to define our own here, and I sort of recommend that people get involved in this if they care. I think one of the things we need to make sure of is that this is okay.
Greg Mirsky, Ericsson: Yes, you can create batches with one bit, but the second bit creates a marked packet in a batch that you can track to do packet latency and inter-packet delay variation calculation. So with one bit you can probably do latency, but you cannot really reliably calculate jitter.
Sounds like a good idea. I think that would at least require a bit, and the thing that we talked about in the design team is: we have three different header formats here; we should make them the same — we should define the same capabilities — and then whether people implement them or not is a separate matter, but define it once somehow.
Well, here we have an IP network on top of another IP network. What does it mean to trace throughout all the possible paths across the two layers of ECMP and find failures? I think there are things here that are different from what people have done before, and it's not always the case that people want to expose all of that underlay information to the tenants running in the overlay. But if they do, how do these things actually get exposed?
One thing that might be useful, for people that have NVE implementations: if they have already done some thinking on what they're logging or what they're tracing, it would be useful to figure out how to fold that in, because I think what we've mostly thought about in the design team so far is the sort of more classical OAM pieces. But implementations might already have logging of unusual events or whatever, and we can try to capture that and see whether we have commonality.
There are some questions, particularly around fragmentation, where existing deployed protocols like VXLAN tend to skip fragmentation, because you don't want to do that in hardware, etc. I think we're carrying that forward for some of the existing proposals in NVO3 as well. If that's the case, you at least need to be able to discover when something is messed up in terms of your MTU, because packets would otherwise just get dropped on the floor somewhere.
So I actually put this question on the last slide, but there is a question about what you want to have. I think the way people have been thinking about this in NVO3 is having this OAM bit, which means that you can make the initial part of the packet look like a regular inner payload and then carry some additional OAM information in the packet somewhere — sort of following the pattern that's there in TRILL.
But I don't know if people have thought of it that way — and then you need a different bit. But if it's really saying, well, I'm just going to send this packet and the payload itself is an OAM payload, because it's a trace message, then it would be sufficient to have a separate payload type, as long as that doesn't affect the ECMP behavior — the hashing behavior — going through the network.
I think that's a discussion we should have, but it ties in with these assumptions about what we think underlay devices, in particular, will actually look at for their hashing. If they're going to look at — you know, with UDP you have a UDP header with IP addresses and a source port, and if it's v6 you might have a flow label — you have a bunch of things you can use for hashing.
Do they always do that? Okay — now, I think it's a very good question; I think it's a key question for the working group: are we going to say that if you want to be able to send OAM frames on the same path as regular data frames, then our work is going to say you must not do your ECMP hashing too deep into the packet?
Tom Herbert: So my comment number one is related to that. We have both the UDP source port and also the IPv6 flow label. What I would kind of like to see is some guidelines, basically — especially with regard to the IPv6 flow label: if you're doing ECMP, just look at the IP header, and then we can consistently route any packet, regardless of protocol encapsulation or anything like that — still routing based on the flow.
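The guideline being proposed — hash only on outer-header fields, never on the inner payload — can be sketched as follows. This is a hedged illustration, not any vendor's actual hash: the encapsulator puts flow entropy in the IPv6 flow label (or UDP source port), so an OAM probe that copies the same entropy value follows the same ECMP path as the data flow it is probing.

```python
# Hedged sketch: an underlay ECMP decision computed purely from outer
# IP header fields (addresses + flow label), so OAM and data packets
# carrying the same entropy value always take the same path.
import zlib

def ecmp_next_hop(outer_src, outer_dst, flow_label, num_paths):
    """Pick an equal-cost path index from outer-header fields only."""
    key = f"{outer_src}|{outer_dst}|{flow_label}".encode()
    return zlib.crc32(key) % num_paths

# A data packet and the OAM probe for its flow share entropy -> same path.
data_path = ecmp_next_hop("2001:db8::1", "2001:db8::2", 0x5ACE, 8)
oam_path = ecmp_next_hop("2001:db8::1", "2001:db8::2", 0x5ACE, 8)
assert data_path == oam_path
```

Because the hash never inspects anything past the outer IP header, it gives the "consistent routing regardless of encapsulation" property Tom asks for.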
Comment number two, though: I'm wondering about this restriction, or idea, that we're only allowed to use one or two bits out of the encapsulation header. If I remember correctly, the three encapsulation protocols this working group has taken up are actually extensible, which means we can define OAM fields or OAM information. So, for instance, if I wanted to get one-way latency, I could do that.
Greg: So you will get round trip with this method, and because the forward path will be over your overlay network and the return will be over an out-of-band network, I think dividing by two would not give you a real characteristic of your overlay network. So you need to have a genuinely one-way measurement, because dividing by two doesn't work.
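Greg's objection is simple arithmetic, illustrated below with assumed example values (the millisecond figures are made up for illustration): when the forward and return paths are different networks, RTT/2 tells you nothing about either one-way delay.

```python
# Hedged illustration: asymmetric forward (overlay) and return
# (out-of-band) paths make RTT/2 a wrong one-way latency estimate.

forward_ms = 9.0   # overlay path under test (assumed value)
return_ms = 1.0    # out-of-band return path (assumed value)

rtt = forward_ms + return_ms
assert rtt / 2 == 5.0        # RTT/2 claims 5 ms...
assert forward_ms == 9.0     # ...but the overlay one-way delay is 9 ms
```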
Well, right — I mean, that's what we have now, and that's what everything is based on. If my user says they have a latency problem, they don't mean one-way latency; they mean: when they send a request and get a response, what is that latency? Really, to me, that's what's relevant.
Well, if you're measuring in the same network layer, that's probably justified; but if your latency is affected by two different layers of the network, then okay, you can say that's an approximation — as good as you can do — but I think we have methods to do better. I'm not saying this method is invalid; it's probably good for some scenarios. But I don't think we can say that it's good enough.
Sounds like there is a fair bit of discussion around latency, and this is something where we should figure out how to have a high-bandwidth conversation on the side and get together, because there might be different tools here. I know that, yeah, you can do this; I haven't looked at the details of what they're proposing in BIER and IPPM, but —
Just one note on latency: I was using it as an example, and my point was that the amount of information you get out of two bits limits you to that amount. So in the future, if we want more interesting things — like if you wanted to do record route or something more advanced — how would you do that with two bits?
So the font on this slide is a bit small, but I think the high-order question is, you know: is it reasonable for the working group to say that we're going to define OAM that's common for the three encaps? I'm sure the pieces will be optional, but we're not going to have different variants of the same functionality that behave slightly differently for the different encapsulations.
H
I'm
sure
the
actual
where
things
are
placed
in
packets
might
differ
right,
but
but
not
actually
have
independent,
slightly
different
definitions
of
what
the
semantics
are
of
an
obit
or
whatever,
or
whether
that
it's
that
our
payload
type
I
am
payload
type
but
they're
not
not
different,
and
then
I
think
Greg
brought
up
this
other
question,
which
is:
do
we
need
a
payload
type
just
a
bit?
Both
there
was
a
question
that
the
design
team
came
up
with
where
it
wasn't
clear
from
the
document
and
zaki.
H
There isn't much text in the various encaps about this. I think the most text on exactly this is in the Geneve document. But take the C bit in GUE: it says it's a control message — well, is that just OAM, or could it be other things? It's not clear from reading the document. I think the GPE document has even less detail, because it just says there's the OAM bit, and that's it.
There's a question about whether intermediate underlay nodes can look at the OAM packets traveling in the overlay. Can they participate in that by sending reports? The wording in the current documents doesn't allow this, and it's not clear what the right thing to do here is — it's really about the relationship between the overlay and the underlay in general. Are we assuming that all the underlay does is send ICMP errors, which we somehow use, or are we assuming that it can participate by sending some new OAM report messages or whatever?
Can we set the OAM bit on a normal payload for in-band OAM, or — again, what's the semantic? The GUE spec says control is control; it doesn't actually carry a regular payload. Is my understanding right that packets with the OAM bit set are always dropped by the destination NVE, or can they actually be forwarded towards the end system?
— the NVE writing some OAM message back instead of just decapsulating it and forwarding it to the end system, while otherwise the packet looks like it has a TCP header in the inner packet or whatever. This was something that was required when OAM was defined for TRILL, because TRILL didn't have an entropy field anywhere; the only way to make OAM follow the data path was to make the OAM packet present the same inner fields as the data.
So next up for the design team, I think, is to work more closely with the routing area overlay OAM team. We've tried to look at BFD and how it fits in here, but we haven't actually accomplished anything yet; and then, you know, figure out whether we should put together some document at some point in time that covers this stuff and puts down some more concrete things. I think that for many of these things there's a fair bit of overlap with the routing area overlay OAM design team.
Jesse Gross: So I think everyone is definitely in favor of having more commonality for OAM — no objection there — but I wonder to what extent it's feasible without ending up at the lowest common denominator. I mean, certainly we've seen, in a couple of discussions, issues around being somewhat parsimonious — one bit versus two bits — and also whether you have a payload versus options that would be carried at the same time.
But I think — well, there's a question about how many bits you would need to reserve, and there might be limitations in some encapsulations, among other things. I think what's most important is to agree on the semantics. So if one encapsulation says "I can do this with an OAM bit" and another one says "I'll do this with a payload type," but the semantics of them are identical —
You know, I don't think you need any bits in any headers if you just make the internal packet addressed to the endpoint — the VTEP itself. So if you originate an IP packet, and I know you are the destination VTEP, I'm just going to send a packet to you: I'm going to encapsulate it in whatever outer header is there, you're going to decapsulate it, and you're going to deliver it to yourself — which is going to be the control point. Then you can put anything you want in it.
Maybe this is the one piece missing from the puzzle, which I didn't talk about — what we're trying to solve. One of the things people have pointed out is the ability to piggyback OAM on existing packets that flow through, so that if I have data traffic for this VPN or this tenant, I can actually perform some form of measurement on it without having to inject additional measurement packets. That's where measuring drop and delay comes from.
Greg: Both are correct — one method doesn't give you the complete ability to do OAM, so you need a combination of passive and active. Passive means that you do not modify the packet in a way that changes how it is treated by the network. For example, there's a proposal that you can add an extra header in IPv6; that would not really be a passive method, because you change the length of the packet, so the treatment by the network changes and the performance metrics will change.
So that's why there is the proposal, as we formulated it in BIER, that you have two bits that must not be used for the forwarding decision. You can still use them: you monitor changes in their values as a marking — one combination signifying the end of a batch, and the bits marking the packets you select for measurement. But at the same time, yes, you're absolutely right: if you want, for example, to monitor standby paths, you need active methods, because you don't have traffic there. If you have insufficient load on a network, you need to create some active traffic — test probes. And when you are turning a service on, you can use active performance measurement to do service activation testing. So I think — and that's what we discussed in our sessions — the combination of passive and active OAM methods gives you the most powerful tools.
I got a comment from Stewart Bryant on Jabber — I asked him for some clarification, which might be forthcoming — but he said: "True, but that is what I have asked; you need to give the lower layer a hint to do real-time OAM. Also, to get loss you really need to instrument the packets."
Yeah — doing active measurements to measure loss might be quite expensive, because you might need to send a million packets to find out that you have 10^-5 packet loss. That's a bit expensive; but if those packets are already flowing, you sort of get it for free, quote unquote.
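The "million packets" remark is just a back-of-envelope calculation, sketched below: detecting a loss rate of one in N with active probes takes on the order of N packets to expect even a single drop, and several times that for a meaningful estimate.

```python
# Back-of-envelope for active loss probing at a 10^-5 loss rate.

loss_rate_denominator = 10**5    # loss rate of 10^-5, i.e. 1 in 100,000

probes_for_one_expected_drop = loss_rate_denominator
probes_for_ten_expected_drops = 10 * loss_rate_denominator

assert probes_for_one_expected_drop == 100_000
# seeing ~10 drops (a crude statistical floor) costs a million probes:
assert probes_for_ten_expected_drops == 1_000_000
```

Passive marking on the already-flowing data traffic gets the same sample size without adding any probe load.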
So I think the access type is the most important thing to notice here, and I'll describe it in detail. We define several types of VXLAN access type: VLAN 1:1, VLAN N:1, the L2 interface, the L3 interface, and one based on the MAC address. For VLAN N:1, several VLANs map together — the number should be greater than or equal to 2.
For the VLAN 1:1 type, each VLAN ID on a port maps to its own VNI: for example, VLAN 100 on a port maps to one VNI and VLAN 200 maps to a different one, and the same VLAN ID can map to different VNIs on different ports. For the VLAN N:1 type, several VLAN IDs — for example VLAN 100, 200, 300, and 400 — are all mapped to the same VNI. So with 1:1 the mapping is unique per VLAN, and with N:1 multiple VLANs share one VNI.
Next is the question of what the mapping should be for matching: for example, VLAN 100 maps to VNI 1010, and VLAN 200 would be mapped to another VNI. So this is the VLAN L2 interface, and next is the L3 interface.
The L3 interface is mapped to a VNI — in this example, VNI 1010. Next is the MAC type, which is also very simple: a given MAC address, like the one shown on the slide, is mapped to VNI 1010. So what is the difference between VLAN 1:1 and VLAN N:1? With 1:1, the VNI maps to exactly one VLAN; the question is whether the remote side's mapping can be different.
The remote VTEP behavior must be the same for the two; the second difference is in how each one processes the inner tag — the inner tag handling mode. For VLAN 1:1, the inner tag handling mode can be "discard the inner tag," as discussed in the VLAN model; but for VLAN N:1, the inner tag must not be discarded, because if we configure it as discard, then when the data is transferred to the remote VTEP, that VTEP doesn't know how to forward the packet on to its destination.
That is the VLAN N:1 case, and those are the differences between VLAN 1:1 and N:1. For the L2 interface, the VLAN ID can be mapped per port: for example, a given port carrying VLAN 100 has that VLAN 100 mapped to a VNI. So for the L2 interface, the mapping is tied to the port.
So this is the tree for the access types on this node. Next is the VXLAN access type configuration: we have a VXLAN instance, and under it the VXLAN ID — the VNI — which is used to identify the VXLAN instance; then the VXLAN access types, which we just talked about. We also define the control plane: we have three different control plane options for VXLAN.
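To pin down the 1:1 versus N:1 distinction from the presentation, here is a minimal sketch. The names and example numbers are illustrative only, not taken from the draft: with a 1:1 access type each VLAN gets its own VNI, while with N:1 several VLANs share one.

```python
# Hedged sketch of the two VLAN-to-VNI access types: 1:1 (unique VNI
# per VLAN) versus N:1 (several VLANs share one VNI). Example values
# are made up for illustration.

one_to_one = {100: 1010, 200: 1020}   # vlan-id -> vni, unique per VLAN
n_to_one = {100: 5000, 200: 5000}     # several VLANs -> the same VNI

def vni_for(mapping, vlan_id):
    """Look up the VNI an access VLAN is bound to."""
    return mapping[vlan_id]

assert vni_for(one_to_one, 100) != vni_for(one_to_one, 200)
assert vni_for(n_to_one, 100) == vni_for(n_to_one, 200)
```

This also shows why N:1 needs the inner tag preserved: once two VLANs share VNI 5000, only the inner 802.1Q tag tells the remote VTEP which VLAN a frame belongs to.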
Question: Hi — I thought you were going to describe a YANG model; did you just change the VXLAN paradigm? Where is this L2 and L3 interface in VXLAN — is that your own invention? Sorry — you talked about VLAN 1:1 and N:1; okay, that's fine. But where did you get this L2 interface and L3 interface?
Yeah, it's quite common for the VLAN mapping potentially to be set up fairly dynamically. You know, if you're using the VNID, it may be very local to a given port, because there aren't very many VLAN IDs.
So I think that — imagine VLAN 100 — the VNI mapping can be assigned by the controller. So if a VM moves to another port, the VLAN access type stays the same on the other machine. So I think this has no relation to how the VNI is distributed.
Well — the point is that if we configure VXLAN, we have to configure the VXLAN instance, and if we create a VXLAN instance, we should configure which VXLAN access type it uses.
So I don't know if this focus on defining a set of access types is useful, because — in addition to the previous comment — you seem to have access types that aren't actually implemented or defined anywhere; they might be useful, but they're not standardized anywhere. And I think you're missing some things that are in use as well, because existing devices can do a VLAN mapping before they do a VXLAN encap, for instance.
So I don't know if there's a way of separating this out — basically saying that we have some demarcation point between whatever VLAN behavior you have on a device (for which there are existing implementations, and whatever standards there might be for some of these things) and then a handoff to VXLAN. I think that picture is a lot simpler, because at that handoff there are only two behaviors that exist: either I preserve the inner dot1q tag, or I strip it.
H
I think that those are the only things that differ there, and then the different boxes at the two ends can have whatever VLAN-to-port behavior mappings they want, right? And then you don't have to figure out whether I have 27 or however many access types; you just end up with that one distinction.
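The two handoff behaviors described above, preserve the inner 802.1Q tag or strip it, can be sketched in a few lines of C. This is only an illustrative model of a frame at the demarcation point (the function name and frame handling are assumptions, not anything presented in the session); the 0x8100 TPID and the 4-byte tag layout are the standard 802.1Q format.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the two handoff behaviors: at the VXLAN encapsulation
 * point the inner 802.1Q tag is either preserved as-is or stripped.
 * Returns the (possibly reduced) frame length. */
static size_t handoff_frame(uint8_t *frame, size_t len, int strip_tag)
{
    /* The TPID sits right after the two 6-byte MAC addresses. */
    if (!strip_tag || len < 18 || frame[12] != 0x81 || frame[13] != 0x00)
        return len; /* untagged, or preserving: leave the frame alone */

    /* Remove the 4-byte 802.1Q tag occupying bytes 12..15. */
    memmove(frame + 12, frame + 16, len - 16);
    return len - 4;
}
```

Stripping a tagged frame removes bytes 12..15, so the original EtherType slides into the position the TPID occupied.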
K
We have a VXLAN mapping instance in the latest YANG container. So, first of all, we will create a VXLAN instance, and the VLAN access side is configured under the VNI. We also create a VXLAN mapping instance, and below the VXLAN instance we configure it.
H
So I think it might be sufficient to just have this inner-tag handling mode and not worry about the access types in this document, because that allows you to specify the behavior from the VXLAN perspective, and somebody else would worry somewhere else about how to define various VLAN behaviors.
A
Okay, this is Diego from Nokia, and yeah, I agree with some of the comments that it probably makes sense to focus on the attachment of a service to the VXLAN binding, because on the access-side attachment you're defining certain models. The port-significant VLAN can be implemented, and is implemented, by some vendors; the VLAN-based approach is implemented by other vendors; but you also don't define whether there's qualified learning or unqualified learning there.
There are a lot of open points which maybe belong to the Layer 2 VPN working group or other parties, and I agree with Arik that we should maybe be focusing on the attachment of a service, however we define a service, into the VXLAN tunnels. But the access part is probably something that needs to be defined elsewhere, or in a lot more detail.
D
There's been some good feedback, and I think we want to come to a consensus around a working group model for this stuff, the VXLAN VTEPs and, more broadly, NVEs. We're out of time to keep talking about this, but anyway, thank you to everyone who spoke at the mic; please carry on this conversation afterwards. OK.
L
Okay, my name is Tom Herbert from Facebook, and today I wanted to talk a little bit about checksum offload, especially with regard to UDP encapsulation. There's actually been quite a bit of work, at least in the Linux networking stack, to adapt to this world of encapsulation, and in particular what we're trying to do is preserve, with encapsulation, all the benefits of the offloads, particularly in performance, that we have in the normal non-encapsulated path; clearly encapsulation adds some interesting things. Today I'm going to talk about one aspect of this, which is really how we deal with the checksum, and in particular we think of the checksum not necessarily as just an outer UDP checksum, but really as the question of how we efficiently process checksums in general.
So checksum offload is a well-known technique that many NICs, probably most NICs, provide, where they can perform the checksum calculation on behalf of the host for TCP, UDP, or ICMP: basically any Internet one's complement checksum. NICs have implemented this in two different ways, what we call protocol-specific and protocol-agnostic, and I'll get a little bit more into the implications of that.
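For reference, the checksum being offloaded here is the RFC 1071 Internet checksum: a 16-bit one's complement sum over the covered bytes, with the checksum field carrying the complement of that sum. A minimal host-side sketch (the function names are illustrative, not taken from any stack mentioned in the talk):

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* Accumulate 16-bit big-endian words; an odd trailing byte is padded. */
static uint32_t csum_partial(const uint8_t *data, size_t len, uint32_t sum)
{
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return sum;
}

/* The value written into a checksum field: complement of the sum. */
static uint16_t csum_finish(uint32_t sum)
{
    return (uint16_t)~csum_fold(sum);
}
```

On the byte sequence used in RFC 1071's worked example, this produces the folded sum 0xddf2 and the checksum field value 0x220d, matching the RFC.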
Encapsulation is interesting because not only does it move the checksum deeper inside the packet (for example, a TCP checksum becomes encapsulated); we also now have the possibility of more than one checksum per packet, and if you think about it, really an unlimited number: we could have ten levels of encapsulation with ten different checksums. We have no reason to artificially prohibit this in an implementation, so what we're looking for is a generic solution that can deal with even that case in a nice way.
One of the goals we do have, though, is no full-packet checksum computation on the host, meaning we don't want the host CPU to do a checksum calculation over the whole packet. This is actually a pretty expensive operation. Checksums are obviously well known and simple, but the fact is, if we have to go through a packet to compute a checksum, we have to pull all the bytes into the cache and run through at least an add-with-carry instruction for each word, so it's not the cheapest thing that we can do.
So on transmit, as I mentioned, there's a protocol-agnostic one and a protocol-specific one. The protocol-agnostic one is called hardware checksum. Basically, this is where we tell the device: here's the starting point for the checksum, and here's where to write the value; go ahead, do the calculation, and write the value. The device doesn't need to know what the protocol is; the software will set up the pseudo-header checksum in the field, and it's taken care of.
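That protocol-agnostic contract (what the Linux stack calls CHECKSUM_PARTIAL on transmit) can be sketched as follows; helper names are illustrative. The stack seeds the checksum field with the pseudo-header sum, and the device needs only a coverage start and a field offset:

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* 16-bit one's complement sum over a byte range (big-endian words). */
static uint16_t csum_range(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return csum_fold(sum);
}

/* The device's half of the contract: sum everything from csum_start to
 * the end of the packet (the field already holds the pseudo-header
 * sum), complement, and store at csum_start + csum_offset. No protocol
 * parsing is required. */
static void hw_csum_offload(uint8_t *pkt, size_t len,
                            size_t csum_start, size_t csum_offset)
{
    uint16_t out = (uint16_t)~csum_range(pkt + csum_start,
                                         len - csum_start);

    pkt[csum_start + csum_offset] = out >> 8;
    pkt[csum_start + csum_offset + 1] = out & 0xff;
}
```

Because the field was pre-seeded with the pseudo-header sum, the device's single pass over the region yields a checksum that validates against that pseudo-header.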
So this is a way to do a very generic offload of one checksum in a packet. Historically, though, most NIC vendors have provided what we call IP checksum offload, and this is where the NIC actually parses the packet, figures out it's a TCP/IP packet, looks at the pseudo-header and calculates that, calculates the checksum of the TCP header and the data, and finally writes the answer into the TCP checksum field. So it's quite a bit of logic, and it's not generic.
Similarly, on the receive side there's this thing we call checksum complete, which is where a device simply provides the checksum calculated over the packet, usually from the start of the IP header, and the idea is that the host stack can use that information: it can subtract off the checksums over various layers to produce checksums of sub-portions of the packet.
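The subtraction mentioned here is ordinary one's complement arithmetic: subtracting a partial sum is the same as adding its complement. A sketch with illustrative helpers (it assumes the inner region starts at an even byte offset, since the 16-bit word pairing is position-dependent):

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* 16-bit one's complement sum over a byte range (big-endian words). */
static uint16_t csum_range(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return csum_fold(sum);
}

/* Checksum-complete trick: given the device's sum over the whole packet
 * and the sum over the leading (outer) bytes, derive the sum over the
 * remaining inner portion by adding the complement of the leading sum. */
static uint16_t csum_sub(uint16_t whole, uint16_t leading)
{
    return csum_fold((uint32_t)whole + (uint16_t)~leading);
}
```

With this, one device-provided sum lets the stack peel off outer headers layer by layer and check each inner checksum without touching the payload again.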
So, with one checksum calculation done by the device, we can basically validate any number of checksums inside a packet, whether encapsulated or not. Conversely to the transmit side, there's also a protocol-specific version, which is referred to as checksum unnecessary, and this is where the device actually looks into the packet, figures out what the protocol is, performs the checksum calculation, and decides whether the checksum is valid, that is, whether it sums to zero.
If it is, it tells the host in the receive descriptor: I validated a checksum. If it's not, it usually doesn't say anything. So again, this requires a lot of logic in the hardware to parse protocols. We do have some vendors who have started to be able to parse into encapsulations in order to validate checksums, which means they now need to parse VXLAN, for instance. Our preference from the software side is really for the protocol-agnostic approach; if nothing else, this means that we can apply something like checksum complete to any protocol we could invent.
So in order to make this a reality, we apply various tricks, all but one of which do not involve a protocol change, which is kind of nice. Checksum-unnecessary conversion sounds like what it is: basically, if we get a checksum-unnecessary indication that an outer UDP checksum has been validated, we can pretty easily convert that into the checksum calculated over the UDP payload, and then use that as though it were a checksum-complete value that the device gave us. So again, that doesn't require any protocol change, and it's actually fairly simple.
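That conversion is a small piece of arithmetic. A validated outer UDP checksum means the outer pseudo-header, the UDP header, and the payload together sum to 0xffff (one's complement zero), so the payload's own sum, a synthesized checksum-complete value for everything above UDP, is the complement of the parts we can cheaply re-sum. A sketch with illustrative helper names (the caller is assumed to supply the outer pseudo-header sum):

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* 16-bit one's complement sum over a byte range (big-endian words). */
static uint16_t csum_range(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return csum_fold(sum);
}

/* Checksum-unnecessary conversion: turn "the outer UDP checksum was
 * valid" into the one's complement sum over the UDP payload, as though
 * the device had reported checksum complete for the layers above. */
static uint16_t csum_unnecessary_to_complete(uint16_t outer_pseudo,
                                             const uint8_t *udp_hdr)
{
    /* valid checksum: pseudo + UDP header + payload == 0xffff,
     * hence payload sum == ~(pseudo + UDP header sum) */
    return (uint16_t)~csum_fold((uint32_t)outer_pseudo +
                                csum_range(udp_hdr, 8));
}
```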
One of the other interesting non-protocol mechanisms that somebody observed recently is that if we have a device that does have the capability to offload an inner checksum, say an inner TCP checksum inside UDP, and the UDP encapsulation has the outer checksum enabled, you can actually deduce the value to set in the outer UDP checksum field. The basis is that once the inner checksum is calculated, everything from the TCP header onward, together with the pseudo-header, sums to zero, because the device will produce a correct checksum over it.
So we can use that to our advantage and figure out that the outer UDP checksum is really just based on the UDP header and anything before the point where the inner TCP checksum coverage starts. This is great because it means that with one single offloaded checksum, the inner checksum, we're able to deduce the value of any outer checksums without calculating a checksum over the whole packet. So again, that saves us from needing to do that.
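A sketch of that deduction (this is what the Linux stack calls local checksum offload; the helper names here are illustrative). Once the inner checksum field is written as the complement of pseudo-header plus data, the whole inner region sums to the complement of the pseudo-header sum, so the outer checksum's payload contribution needs only the bytes in front of the inner region:

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* 16-bit one's complement sum over a byte range (big-endian words). */
static uint16_t csum_range(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return csum_fold(sum);
}

/* Local checksum offload sketch: the UDP payload's contribution to the
 * outer checksum is the sum of the bytes before the inner checksum
 * coverage plus ~inner_pseudo, because the correctly checksummed inner
 * region sums to the complement of its own pseudo-header sum. The
 * (possibly large) inner payload is never touched. */
static uint16_t lco_payload_sum(const uint8_t *payload, size_t lead_len,
                                uint16_t inner_pseudo)
{
    return csum_fold((uint32_t)csum_range(payload, lead_len) +
                     (uint16_t)~inner_pseudo);
}
```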
The one thing that we have done which is a protocol change is something called remote checksum offload, and this is really for those devices that on transmit can only calculate the outer UDP checksum, so they don't have the capability to either parse into the inner headers or do the hardware checksum that I described earlier. There are a lot of such devices: they can do UDP or TCP checksum offload on transmit for plain packets with no extension headers and no IP options; it's really just a TCP or UDP packet over IP.
So what we can do, with a few extra bits in the encapsulation header, is provide information about the inner checksum to the encapsulation layer, and the information we need is basically the starting point for the inner checksum coverage and the offset where the checksum field is. It's essentially the same information that we give the NIC for hardware checksum offload. So with that information, we set this in the encapsulation header.
We do a normal checksum offload on the outer UDP and send the packet, so the hardware does UDP checksum offload on transmit, and presumably the receiver can offload the receive-side checksum. Then this gets to the encapsulation layer. Now we know that the whole payload has been validated via the outer UDP checksum, so in order to calculate what the inner checksum should be, we just need to do some math: basically subtract the checksum from the UDP header to the TCP header, for instance, and then adjust the inner TCP checksum.
Once we do that, we let the packet proceed up through decapsulation and into TCP. Now we have both the updated inner TCP checksum and the checksum value that we got from the NIC, which was computed over the packet, so we can validate the TCP checksum without any additional calculation. This really is a good way to save cycles on the transmitter, since it didn't have to go through a full-packet checksum computation for the TCP checksum.
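The receive-side math can be sketched with the same illustrative helpers: given a checksum-complete style sum over the whole UDP payload plus the start and offset metadata carried in the encapsulation header, the correct inner field value is the complement of the sum over the covered region, derived by subtraction rather than another pass over the packet:

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator down to a 16-bit one's complement sum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* 16-bit one's complement sum over a byte range (big-endian words). */
static uint16_t csum_range(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    return csum_fold(sum);
}

/* Remote checksum offload, receiver side: the sender left the inner
 * checksum field holding its pseudo-header seed and computed only the
 * outer UDP checksum. Patch the inner field so that it validates, using
 * only a sum over the short leading bytes plus the already-known sum
 * over the whole payload (one's complement subtraction). */
static void rco_patch(uint8_t *payload, size_t csum_start,
                      size_t csum_offset, uint16_t payload_sum)
{
    uint16_t lead = csum_range(payload, csum_start);
    uint16_t region = csum_fold((uint32_t)payload_sum + (uint16_t)~lead);
    uint16_t field = (uint16_t)~region;

    payload[csum_start + csum_offset] = field >> 8;
    payload[csum_start + csum_offset + 1] = field & 0xff;
}
```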
So the question concerning this is more of an extensibility question, and I posed this on the list about a month ago but haven't heard any response. For remote checksum offload, we implemented it in GUE, which was easy enough, and there is an implementation for VXLAN and a draft. What we did in the implementation was a little bit on the conservative side, because this was not a standard part of the protocol.
How do we make this official? Is it enough to have a draft? We really would like to take away the conservative configuration option in the Linux stack and just make this set by default all the time on the receive path. And then I have the same questions for VXLAN-GPE. As I understand it, there is an extensibility model in VXLAN-GPE, which has more to do with the NSH.
I tend to think that might be overkill, but again, I'm looking for input on this. By the way, we actually do have a VXLAN-GPE implementation in Linux; that happened, I think, about two weeks ago. So it's a good time to start thinking about how to make common pieces, whether or not these should be common flags, or whether these paths are completely divergent. Right now they're actually sharing some common code. So if we do something for VXLAN, should we do the same thing for VXLAN-GPE, for instance?
N
There have been a ton of other proposals to use existing bits, so I'm not sure how to handle that. So there's GPE and there's GBP, right? Yeah, I mean, that's also somewhat divergent, but I was actually referring to the ton of drafts just using bits in the pure VXLAN header as well.
L
I'm just wondering, and I really wanted the input; I mean, it's more of a meta question, granted. So we know VXLAN is out there, we know it's deployed, we want to extend it in additive ways, and we have a limited number of bits. So it's a perfect use case of a real protocol, and the interesting thing is, obviously it's not a working group item, but yet this working group talks about it a lot. So maybe it's just a matter of one at a time, and whichever ones stick.
I know there were two bits for the OAM being allocated, and as you said, I don't know if there's been a ton, but we definitely had several proposals, and we only have what, 16 bits, or maybe a few more; I think there's a pretty large reserved field, but at some point you run out of bits. It seems to me it's a little ad hoc just to be randomly picking up bits. I guess my hope is that at least that part could be clarified, maybe by the VXLAN guys.
H
Each draft picked a bit as well, right? Right, yeah. So it sounds like it would be useful to figure out a way of doing a registry, right, where you can get someone to manage those bits, and that's a bit tricky, because it's not an IETF protocol and it's not clear what allocation policies you would have for those bits. Would you require an IETF standard for an extension to a protocol that isn't even ours? Yeah.
L
So my theory was that the low-order bits next to the virtual network identifier are less likely to be proactively allocated, because they don't look like they're part of the rest of the flags. I know it's kind of interesting; I'm just hoping, well, I do think we need clarity on this. And then VXLAN-GPE is also interesting, because some of the extensions I just referred to won't work in VXLAN-GPE: the group policy field, whatever that is, actually overlaps the protocol field that they put in VXLAN-GPE.