From YouTube: IETF109-BESS-20201119-0900
Description
BESS meeting session at IETF109
2020/11/19 0900
https://datatracker.ietf.org/meeting/109/proceedings/
B
Okay, right. Well, you have to... he's displaying us at the moment. So Stephane, if you can, move to the next slide when I need it. So welcome to the BESS working group. Here is the Note Well; if you just look at that for a second, this basically says that anything you say here is a contribution to the IETF.
B
Okay, so you're in Meetecho, which is a great start. If you would like to speak, please join the queue; the chairs will manage the queue. Wait for your name to be called and then unmute yourself, please.
B
Okay, so RFC 7432bis. This is an open action that we've had for about a year now; there's a presentation on it on the agenda at this IETF. The chairs have discussed this and we intend to publish it directly as a working group document, if the working group has no objection to that, seeing as this update to an existing RFC is on our work to-do list. I'll just give it a second: if anybody has any concerns about that, please jump in the queue now.
B
I don't see anyone in the queue, so thank you. Stephane, next slide, please. Status update: RFCs published since the last IETF; we haven't published any. We have four documents in the RFC Editor's queue: EVPN Overlay, DCI EVPN Overlay, EVPN Prefix Advertisement, the NSH BGP control plane, and the revision of RFC 5549.
B
Then the NA flags draft, inter-subnet forwarding, optimized IR, proxy ARP/ND, and open discovery. There are also the procedure updates, the OAM requirements framework for EVPN, and then the DC gateway draft as well.
B
Next slide, please. A number of documents are currently in our queue to be reviewed by the shepherds. Mankamana is reviewing the IRB multicast draft, awaiting comments. Do you want to say anything about that?
A
This is Mankamana. Yeah, so I have finished reading most of this document; I'll be sending comments to the authors soon, and I'll come up with the write-up as well.
B
Okay. The unequal load balancing draft: we need another short working group last call for that. The EVPN virtual Ethernet segment document: the write-up is done; there's some follow-up going on between Luc and the authors. The EVPN preference DF draft: Stephane is following up on that. And the IGMP/MLD proxy.
B
Okay, did you want to say something? Okay. The IGMP/MLD proxy: we're expecting a new version of that to be submitted as soon as the gates are open.
B
Stephane, do you want to take over? It's gone again. Ongoing polls: there was a poll for EVPN LSP ping. We had some comments, from Laura I think it was, regarding the IANA section, so we think this needs to be updated before we can close the poll, and we'll check with Laura as well on the updated section.
B
We also need a poll on that to address the question of implementation. I don't think we had any replies about whether anybody had actually implemented this, so we're going to need an explicit implementation poll, or an explicit poll to check that the working group is okay with it, given our policy of only publishing standards-track documents with at least one implementation. So we will check that the working group is okay with that. We have an ongoing working group last call on the IP-VPN interworking draft.
B
Stephane, next slide, please. Okay, ready for last call: SRv6 services. These are in our queue; we will likely start a working group last call for them shortly. So: SRv6 services, the EVPN aggregation label, and the IRB extended mobility draft, which has a couple of dependencies on other documents that might hold it up. Next slide.
B
Our working group documents. The DF election draft, the multicast DF election draft: we've got a new update that is likely to be posted.
B
We expect this to be ready for working group last call by the next IETF, and it is planned to be presented at this IETF to get more comments on the draft, so you should see it on the agenda. The fast DF recovery draft: there's no update to that, and we're following up with the authors.
C
Hi, Luc here. Just on the fast DF recovery: we did do an update sometime last spring, and I actually went and got some IANA code points allocated. So maybe I can present that one at the next IETF, or even at this one if there's a slot.
B
Next slide, Stephane. So we haven't had any updates for this list of drafts.
B
Next slide. And we have some new working group documents that we've recently adopted: the EVPN multi-homing split horizon draft was adopted in October.
B
Okay, so here is the agenda. I just realized we didn't do an agenda bash at the start, so: any comments on the agenda?
D
Hi, can you hear me? Yep? Okay, all right. So this draft is IPv4 NLRI over an IPv6 next hop; it utilizes the RFC 5549 IPv6 next-hop encoding scheme. The authors are myself, Mankamana Mishra with Cisco, and Jeff Tantsura with Apstra, and we have a new member on the draft from Juniper who has recently joined, Lili Wang.
D
With this draft overall, we've made some good progress as far as intermediate interoperability: support of the RFC 5549 next-hop encoding related to eBGP peering sessions. Really, the goal of this draft is to test interoperability, and first to determine whether the feature and functionality are supported and fully functional with some of the bigger vendors. So far, in the group of vendors that we have, we have Cisco and Juniper.
D
We have Arista as well as Nokia, and we're also trying to get Huawei on board. That would be the top five vendors that we're looking at for the interoperability testing. Next slide.
D
Thanks. So, an overview of this draft. What this draft does is utilize the IPv4-NLRI-over-IPv6-next-hop encoding. It's very similar to what's been around for decades, where you have a v4 edge or v6 edge over a v4 core, and your PE-to-RR iBGP peering is where you encode your v4 or v6 NLRI over a v4 next hop.
D
Similarly to that, with RFC 5549 we're doing the reverse: we're taking a v4 edge and encoding it over a v6 next hop. So what this does is utilize that same peering that's been done...
D
...for a long time, with a six edge over a four core. Now we're doing a four edge over a six core, but utilizing the same concept for your PE-CE peering, your edge peering. And the reason behind that is:
D
It really allows us to eliminate all of the dual stacking that we have at the edge, and that's really the massive gain here. We're able to treat it just as we stack SAFIs on our PE-to-RR peering, where you have your VPNv4, VPNv6 and many other SAFIs stacked on one session. Similarly, we're using that v6 peering, that single peer, for transport.
D
So now we're able to transport the v4 NLRI over the v6 next hop, thereby eliminating that extra peering. As far as the savings go: from an IPv4 address depletion perspective, that's a really huge gain, and for IPv4 address planning as well. The other big gain is opex savings in terms of monitoring peers, from an NMS perspective and that kind of YANG perspective, now that you have...
D
Basically, you cut your peers in half, so you don't have all the dual stacking on the edge peering. That applies across the board, really anywhere: core, data center, any type of infrastructure. Wherever you have dual-stack peering, you can keep the concentration of dual stacking, where it exists, consolidated toward the customer edge; but the peering points from the customer to the core can be purely v6.
D
So here you can see you have separate peers: your v4 peer is carrying your v4 NLRI, and your v6 peer is carrying your v6 NLRI. Next slide.
D
With this encoding scheme, what ends up happening is that instead of having two separate peers, you have a single peer, a single v6 peer, and you can see at the top in red the next hop. You're carrying the NLRI, let's say a 192.1.x.x/24; the next hop is a v6 next hop. So you have that single v6 peer and it's carrying both the v4 NLRI and the v6 NLRI.
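As an aside for readers following along, the encoding being described can be sketched roughly as below: a minimal illustration of an MP_REACH_NLRI body carrying an IPv4 prefix with an IPv6 next hop, per the RFC 5549 / RFC 8950 scheme. The prefix and next-hop values are made up, and a real implementation also handles the 32-octet next-hop case (global plus link-local) and full BGP UPDATE framing.

```python
import socket
import struct

def mp_reach_ipv4_over_ipv6(prefix: str, plen: int, v6_next_hop: str) -> bytes:
    """Sketch of an MP_REACH_NLRI body: IPv4 prefix, IPv6 next hop."""
    afi, safi = 1, 1                                     # AFI 1 = IPv4, SAFI 1 = unicast
    nh = socket.inet_pton(socket.AF_INET6, v6_next_hop)  # 16-byte v6 next hop
    body = struct.pack("!HBB", afi, safi, len(nh)) + nh
    body += b"\x00"                                      # reserved octet
    # NLRI: prefix length in bits, then the minimal number of prefix octets
    octets = (plen + 7) // 8
    body += bytes([plen]) + socket.inet_pton(socket.AF_INET, prefix)[:octets]
    return body

# AFI/SAFI/len (4) + next hop (16) + reserved (1) + NLRI (1 + 3) = 25 octets
attr = mp_reach_ipv4_over_ipv6("192.0.2.0", 24, "2001:db8::1")
```

The point of the encoding is visible in the bytes: the address family says IPv4, while the next-hop field holds a 16-octet IPv6 address.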
D
As far as operational considerations, they should be minimal. The one thing that does exist: you have a single physical interface, so, because it's not dual stacked anymore, you would not have two separate BGP, sorry, BFD sessions; you would have a single BFD session. That should not be an issue, as it is a single physical...
D
...link, let's say a fiber. If a fiber cut happened or the link went down, you'd lose both peers anyway. So from the layer-one perspective, whether you have two peers or one peer doesn't really matter. From, I guess, a learning-curve or operations-support perspective:
D
What does change is that instead of having two peers you now have one peer. With that peer, the configuration from a routing perspective really doesn't change. The policy is applied at the address-family layer. So the routing policy that existed separately, say on separate v4 and v6 peers, now sits on one v6 peer, but you still have your same separate v4 routing policy.
D
This depicts the typical core that has existed for many years. It shows a dual-stacked edge going over a v4 core with PE-to-RR peering, which is what I was describing earlier: you have a v6 edge, so you're doing v6 NLRI over a v4 next hop, a v4 next-hop encoding, and in this case you're carrying v6 NLRI over a v4 next hop.
D
This is global table routing; you're doing 6PE, so BGP labeled unicast, 6PE. Next slide.
D
In this scenario, sorry, there you go: this is a service provider core where you're dual stacked at the edge, but you're doing a similar thing. This has existed for decades as well: as I show in red there, you have v6 over a v4 next hop. Your PE-to-RR peering is a v4 peering. So this has existed for years.
D
This scenario shows global-table routing. Here the core is a v6 core, so it could be LDPv6, SR-MPLS over v6 or SRv6, and, from the softwire mesh framework perspective of RFC 5565, you have a four edge over a six core, and here is your PE-to-RR peering.
D
You're running 4PE, so to speak: you have the IPv4 NLRI over an IPv6 next hop, so you're doing BGP labeled unicast, 4PE, similar to 6PE, but carrying your v4 labeled unicast over a v6 next hop. And then, as you can see, on the PE-CE connection it's a single peering.
D
So here your AFI/SAFI 2/1 peering is carrying your green 1/1: your v4 address family is basically stacked on top of your v6 transport, carrying both address formats. Next slide.
D
This is the service provider core perspective, where you have the VPN overlay. Similar: a v6 core, so MPLS with LDPv6, SRv6 or SR-MPLS over v6, and then on your PE-to-RR peering you have VPNv4 carried over an IPv6 next hop; and then you have your PE-CE peering. And this is where the major gain is: now we can eliminate all of our dual stack.
Next slide. So, as I mentioned at the very beginning, we have made some good progress. Among the vendors we have contacted so far: we have Mankamana Mishra with Cisco, and then with Juniper...
D
...we have Lili Wang; and with Nokia and Arista, I've contacted them and we're working on getting contact information for the interoperability testing. But we have made some really good headway: three of the five vendors we're looking at do support the eBGP PE-CE peering with a single peer, and they do see the major advantage in doing this. So I think we're probably getting close to working group adoption, but I want to get some feedback.
D
...from folks on BESS, to see what everyone's thoughts are. So I'd like to open it up if anyone has questions or comments on this draft. Thank you.
E
Okay. It'd be good, now that we've got 5549bis, to indicate compliance with that as well in your testing.
B
Okay, thanks. Thank you very much. If folks can review the draft and send some comments on it, that'll be good. Thank you. Okay, so next is Luc André.
C
...After that, yeah. So this draft also proposes to extend the EVPN Layer-2 Attributes extended community with two new fields.
C
Version 02 of this draft addresses field-alignment issues in the definition of the extension to that extended community, and we also updated the IANA section to actually request allocation of the two new fields.
C
Next slide. So, as a second item, there's also a clarification added in 02 for the two fields, the mode bits and the VID normalization bits, on how the two proposed fields operate.
C
The section addresses how a mismatch needs to be handled: prevent tunnel instantiation, not just raise an alarm. The operational mode, the M flags, or fields, sorry, is kept optional and error-notification only; that's the rationale for that field. As for next steps for this draft: it is an implemented draft and shows good adoption, so we would like to see some more review and comments on it.
A
This is just a problem-statement recap, because it has been quite a long time since we spoke about this particular draft. This draft defines a new DF election mechanism, and explains why exactly it is needed. Right now we have a couple of DF election procedures. One is the default DF election procedure defined in RFC 7432, which is essentially per-(EVI, ESI) DF election; the other DF election mechanism uses the HRW algorithm to distribute the DF role among multiple PEs in a multihoming scenario.
A
But there are a couple of customers who want to use EVPN multihoming and run all multicast services on a single VLAN. In such a scenario, there is always only one PE that is going to be the DF, which means the whole multicast load is carried, or forwarded, by just one of the PEs.
A
In this scheme, what happens is: by default, when the EVPN service is configured, we have a default DF election, which sets up the state so that one PE is the DF and the other is non-DF. For example, here PE1 is in the DF state and PE2 is in the non-DF state. The reason we still run the default DF election is for broadcast and unknown unicast traffic. Then, when we get a first join:
A
When a join reaches PE2, we have IGMP proxy procedures to synchronize this join to the other PE, and at this point, for this particular flow, a DF election is performed, and that DF provides the forwarding state for this flow. In this case PE1 becomes non-DF and PE2 becomes DF. When the next join comes, it again goes through the exact same DF election procedure and gets the appropriate DF.
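The per-flow behavior being described can be sketched as below: each multicast (S, G) is hashed to one of the multihomed PEs, so different flows land on different DFs instead of one PE carrying all the multicast traffic. The hash and key here are assumptions for illustration, not the draft's exact algorithm.

```python
import hashlib

def per_flow_df(pes, esi, vlan, source, group):
    """Illustrative per-(S, G) DF choice among multihomed PEs."""
    cand = sorted(pes)                      # identical candidate order on every PE
    key = f"{esi}:{vlan}:{source}:{group}".encode()
    idx = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(cand)
    return cand[idx]

pes = ["PE1", "PE2"]
df_a = per_flow_df(pes, "esi-1", 100, "10.0.0.1", "232.1.1.1")
df_b = per_flow_df(pes, "esi-1", 100, "10.0.0.1", "232.1.1.2")
```

Because the choice is a deterministic function of the flow, both PEs compute the same DF per flow while the aggregate load spreads across them.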
A
Next slide. Okay, so this draft has already been implemented and deployed; we would like to get more comments from the working group, and we plan to have a last call before the next IETF.
A
I think we can go to the next one here. AC-aware bundling service interface: this was a new service interface defined for EVPN, and we presented it two years back at IETF Bangkok. The reason for presenting it again is that we are going to spend time and move this draft forward before the next IETF, so I just wanted to go over it. There have been some comments received from Harvey and Luc André, which will be addressed in the next revision. So, for this IETF:
A
We are going over the same slide deck to make sure the whole working group is aware, and if there are any comments we can take them. The requirement is very simple: there are certain deployments where we want multiple subnets within a single bridge domain, and each subnet is distinguished by its VLAN within the bridge domain.
A
RFC 7432 defines three types of service interfaces, and each of them addresses a different requirement, but none of them provides a mechanism for having multiple subnets in a single bridge domain. With VLAN-based service interfaces, we have a one-to-one mapping between bridge domain and subnet.
A
With the VLAN bundle service interface, each EVPN instance corresponds to multiple VLANs, but there is a single bridge table; when we do the MAC address lookup, the MAC address is the key, and the packet remains encapsulated in the originating VLAN ID. The VLAN-aware bundle service interface does provide a mechanism where multiple VLANs map to multiple bridge tables.
A
We need a way to identify which subnet a MAC belongs to. Today, if we use any of the service interfaces defined in RFC 7432, the routes do not carry any information that tells PE2 exactly which subnet or VLAN host H1 belongs to. So when a packet has to be sent out of PE2 toward host H1, PE2 has no clue which VLAN tag should be put on data destined to H1's MAC. Next slide.
A
The AC-aware bundling service interface addresses this requirement. To do so, we carry an attachment circuit ID along with the routes, whether MAC routes or multicast routes. The reason this applies even to multicast routes is that, when we are syncing IGMP joins...
A
...we need to make sure we carry which AC the join came from, so that when the forwarding state for multicast is created on the multihoming peer, the appropriate AC can be picked. This attachment circuit information is useful only for the multihomed peers, and it must be ignored by any non-multihomed peer or remote side.
A
When this MAC route is received by a non-multihoming peer, it must process the MAC route as it does today, as defined in RFC 7432, but it ignores the attachment circuit extended community. If it is a multihomed peer, then it must learn which AC, which VLAN, this particular MAC route belongs to. Likewise for the multicast routes: again, this is only for the multihoming PEs, and the route type 7 is supposed to be processed by the multihomed peer itself, which gets the appropriate AC and subnet.
A
So there is no data-plane change for the non-multihomed peer; it is exactly the same as RFC 7432. Yeah, we can go to the next slide.
A
Right now, when data is received from the CE, it follows the same procedure as RFC 7432. The only new change is on forwarding: the MAC address lookup now provides the port plus the VLAN it has to go out on, so that the appropriate VLAN tag can be pushed; multicast packets are forwarded based on whatever multicast forwarding state has been built. Next slide.
A
There is already an implementation from Cisco, and we are working across other vendors as well; since the last presentation we have added Nokia and Juniper to this draft. We will look for any other comments the working group may have, and there will be a request for an adoption call before the next IETF.
B
Okay, there's one comment in the queue. Could you make your comment, please? Hello?
G
...But if you have inter-subnet traffic, in the asymmetric IRB case, which VLAN is it going to put on? I think this scheme, the new service type introduced, requires you to put the VLAN in the data packet once you have inter-subnet routing with asymmetric IRB.
I
Okay, all right. So this is a short presentation on the areas that we are addressing for RFC 7432, the areas of improvement. This RFC has been around for almost 10 years; as a result...
I
...almost all of the changes are in the form of clarifications, and there have also been some other comments, with respect to clarification and additional explanation, that we are trying to incorporate into this bis draft. Next slide, please.
I
The areas of clarification and improvement: I've listed the main areas here. First, the errata that have been logged for this RFC. Some of the errata revolve around the MPLS label field description across multiple EVPN routes, basically clarifying that the encoding is in the high-order bits, plus some editorial fixes.
I
Another area is the applicability of the Layer-2 Attributes extended community to the EVPN service, to explicitly signal the control word, MTU size, and primary and backup bits for a given Ethernet segment; we added a section to clarify this.
I
Then another section is on the priority handling of EVPN routes; we have categorized them into three priority groups, which I'll explain later in these slides. There is also a clarification on best-path selection when we advertise a MAC/IP route with a default gateway extended community.
Next slide. Okay, so, with respect to the relationship between MAC-VRF, bridge table, EVPN instance, bridge domain, VLAN and VIDs, this PE model should help. It describes that the EVPN instance gets instantiated with the corresponding MAC-VRF on a PE, and the MAC-VRF consists of one or more bridge tables, depending on the Ethernet tag.
I
You
know
across
multiple
pes
that
for
that,
given
pe
maps
to
a
one
of
these
to
a
one
bridge
table
and
that
vlan
can
be
associated
with
one
or
multiple
vehicles,
so
this
table,
this
figure
and
associated
text
tries
to
capture
the
relationship
among
them,
and
hopefully,
you
know,
makes
the
relationship
among
these
clear
nexus.
Why.
I
The next slide is the text that goes with that PE model. I'm not going to get into the details, but it basically says, if it is VLAN-based mode, what the association between the EVI, the VLAN, the bridge table and the bridge domain should be, and likewise for VLAN bundle and VLAN-aware bundle and so forth; it makes sure that these associations and relationships are clear for the different interface modes.
I
Then we added a section on the Layer-2 Attributes extended community. This extended community is already defined in our EVPN VPWS RFC and is used there; that RFC explains how the primary and backup bits work with respect to single-homing and single-active multi-homing and so on. We extended that here to the EVPN service, and we've also described how the flow label and control word can be signaled, along with the MTU.
I
We advertise this extended community along with two routes: one is the IMET route and the other is the Ethernet A-D per-EVI route. In the case of the IMET route, we advertise it with the MTU, control word and flow label fields; for the Ethernet A-D per EVI, we advertise it with the P and B bits. So, basically, different bits of this attribute are conveyed with these two routes.
I
With respect to control-word signaling: RFC 7432 basically assumes the control word is there; if we are doing deep packet inspection, or if it is entropy label, then you don't use the control word.
B
Ali, can you go a bit faster, please?
I
Sure. EVPN route priority handling: we divided the EVPN routes into three groups for priority handling.
I
The first group is of the lowest scale, with the most convergence-affecting routes; those are first priority. Then we have three routes at the second priority and two other routes at the third priority. So basically we prioritize the handling for better convergence. Next slide.
I
Right. Best-path selection for the default gateway is another clarification. For a default gateway on an IRB interface, you advertise the MAC and IP address of the IRB interface with the default gateway extended community, and there can be mishaps and misconfigurations where you configure the same thing on a host. We added text on how to handle and take care of it if this case arises.
I
If there is such duplication, we basically give priority to the MAC and IP address on the IRB interface, as opposed to the host. So that's that section.
I
I have to conclude now; I think I'm up to the last slide. Next slide is the last one. So, a few other additional things we're going to do for rev 01. Let's go to the next one.
I
Next steps: we're going to publish this after this IETF meeting, and we want to solicit input. Basically, there is no rush for the working group last call here; we want to make sure that all the material we added is fully baked and give everybody plenty of chance to read it. So we expect this to take several IETFs.
K
And there... I'll see if I can fit in under my time. Donald Eastlake, with Futurewei, and this is an update on the status of the EVPN BFD draft. Next slide.
K
So, basically, it provides the network OAM layer, as described in the framework draft. Next slide.
K
This is a diagram out of the framework draft, so you can see the different levels of OAM: the service network OAM, which is between PEs and is what this draft is mostly concerned with, and the transport and the link OAM on the various links that connect the boxes. Next slide.
K
Talking briefly about the other layers: the link OAM is whatever the link technology provides. For example, for Ethernet, a popular link technology, you can use IEEE 802.3 Clause 57, whose name is Operations, Administration, Maintenance (OAM). Transport OAM depends on the transport; you can use BFD, or ICMP-based ping and the like.
K
Next slide. The service OAM is essentially mostly end to end. The PEs need to be aware of it and typically use Connectivity Fault Management (CFM), which is defined by the IEEE. The PEs must support the maintenance intermediate point functions, and they should support the maintenance endpoint functions; you can run it between CEs, or from the CE to the PE. Next slide.
K
The BFD discriminators are distributed using the BFD Discriminator attribute, which is defined in the MVPN fast failover draft; they are conveyed with the appropriate routes, depending on which type of traffic you're looking at. That's a fairly simple attribute, which basically provides the BFD discriminator and has some flags; no flags are currently defined for that attribute. Next slide.
K
A lot of the text in the current draft, and the diagrams, relate to specifying the encapsulations used in the different cases listed here, but some additional information may need to be provided there concerning the granularity you want to run the OAM at, since it is defined to be operable at different levels of granularity. Next slide. The framework draft talks about the granularity.
K
So, specifically, the changes from the previous version: there is an initial specification of the routes with which you would use this BFD Discriminator attribute, and of how to handle withdrawals of those routes as far as the BFD sessions go and the like. I added a reference to draft-mirsky-mpls-p2mp-bfd, which provides information on cases where you use head notification without polling mode for point-to-multipoint.
K
There are several different modes for point-to-multipoint defined in the BFD multipoint and multipoint active-tails documents, and the current draft that I'm talking about specifies that you would use a version which includes head notification.
K
There's only one mode that doesn't, and three other modes that do include head notification, and the draft now referenced provides additional information on one of those modes. There were various IP addresses specified for the encapsulations; those have been adjusted based on IESG feedback on other drafts, in the hope of avoiding future IESG problems. And there are miscellaneous editorial improvements.
K
A possible future addition is information covering PBB-EVPN and IRB.
K
I think it may be useful to have an appendix that actually shows traceability from the requirements in the framework draft to the provisions in this draft, to show that those requirements are covered; and one could conceivably add some information on other encapsulations. Next slide.
K
So I'm basically requesting comments and suggestions, and I hope to progress this draft to a point where it could be working group last called, not too far into the future.
L
Okay, this is Jeffrey from Juniper. One minor comment: instead of saying MP2P, maybe just say ingress replication, because you certainly can have P2P for ingress replication.
K
Okay. Well, it uses the phrase ingress replication in the draft as well, but MP2P is slightly shorter for slides, maybe.
H
Can you hear me? Yes? Yeah. About the fallback: is it the intent to advertise the BFD discriminator with every single MAC/IP route in the broadcast domain, or only...?
K
I don't think you want to test between PEs if there are no routes between them, but as long as there's one, you might want to just test one path. That would be the coarsest granularity.
K
I believe, according to the framework draft, you should be able to test at several levels of granularity, including much finer ones; but if you're doing more fine-grained testing, then you need to be able to distinguish the BFD sessions that are testing those different fine-grained paths.
K
So if you have any suggestions on improvements to the draft along those lines, they'll be welcome.
H
Yeah, I'll send some comments. The other request is to also include IP multicast: you know, the advertisement of the discriminator also with S-PMSI A-D routes for EVPN.
B
Thanks. Next up is Linda.
F
Oh, can people hear me? Okay. So here we want to show how BGP is used to control SD-WAN. The purpose of this draft is to demonstrate to other organizations...
F
For
example,
mef
has
two
sd1
related
projects
and
they
do
have
bgp
how
to
use
bgp
to
control
sd1,
but
I
think
that
bgp
is
better
for
ietf
to
have
the
draft
showing
how
we
use
them
and
so
just
give
them
a
guideline.
So
the
the
draft
itself
is
pretty
simple
straightforward.
F
We have described the services and requirements, and we listed several scenarios. Those scenarios are from the SD-WAN project in MEF, and then we show how BGP is used for each of those scenarios, and then we show the traffic walkthrough. So this was adopted in July as a working group draft. We still would like to hear more feedback and comments. Next slide, please.
F
So, the major changes: basically, we reflect the latest change in the tunnel encap draft, version 20, and we addressed the suggestion from the BESS mailing list to use recursive lookup. The recursive lookup is described in section 8 of the tunnel encap draft, and so here I'll just show some examples of how we use that. Next slide, please.
F
So this is from the old version, the -00 version, showing the BGP update that carries the attributes, especially the IPsec attributes. You have client routes, and you have detailed IPsec-related sub-TLVs, which are described in secure EVPN. Next slide, please.
F
So this is just to show: if we use a recursive lookup, there will be two updates. The first update will have the client routes, with the next hop set to the loopback address, and we tag the client route with the encapsulation extended community carrying the tunnel type, which can be IPsec or can be some existing supported tunnel, and we use the color extended community. And then the second update describes the detailed attributes.
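The two-update recursive lookup described here can be sketched as a toy model. This is not a real BGP implementation; the dictionary fields below are simplified stand-ins for the client route, the encapsulation and color extended communities, and the tunnel attributes carried in the second update.

```python
# Illustrative model of the two-update recursive lookup (not real BGP code).

# Update 1: client route, next hop = loopback of the egress node, tagged
# with an encapsulation extended community (tunnel type) and a color.
client_update = {
    "prefix": "10.1.1.0/24",
    "next_hop": "192.0.2.1",          # loopback address
    "encap_ext_community": "ipsec",   # tunnel type
    "color": 100,
}

# Update 2: keyed by (next hop, color); carries the detailed tunnel
# attributes (e.g. IPsec SA parameters) for that endpoint.
tunnel_updates = {
    ("192.0.2.1", 100): {"tunnel_type": "ipsec", "sa_params": {"spi": 0x1234}},
}

def resolve(route, tunnels):
    """Recursive lookup: resolve a client route to its tunnel attributes."""
    key = (route["next_hop"], route["color"])
    return tunnels.get(key)

tunnel = resolve(client_update, tunnel_updates)
print(tunnel["tunnel_type"])  # ipsec
```

The point of the split is visible in the model: many client routes can share one (next hop, color) key, so the detailed tunnel attributes are advertised once rather than per client route.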
F
This is also from the previous version. In the SD-WAN project in MEF, they talk about different topologies: some traffic can go through, say, the blue topology, and some traffic can only go through the red topology. And with this update, you can reflect routes onto different topologies. They call it segmentation, SD-WAN segmentation. Next one, please.
F
This is just another illustration of using recursive lookup. So for the blue topology, we have two updates. The first update is the client routes, with the extended community specifying the tunnel type, and then there's a color, which is associated with the second update that specifies the detailed attributes.
F
So here is the hybrid SD-WAN case, meaning that the underlay network can be a mix of different underlay networks. You could have the underlay being the internet, provided by different ISPs, and you could have the underlay being an L3VPN. So, similarly, using a recursive lookup, the first update is very simple: we have the client routes. We actually need a new tunnel type.
F
We call it SD-WAN hybrid, and then we associate it with the color, and in the second update we'll have very detailed attributes describing these hybrid tunnels. Very importantly, for SD-WAN, for IPsec, for example, the IPsec security association is always pairwise.
F
So in this hybrid tunnel we could include pre-configured IPsec SA identifiers, so you don't have to attach those attributes, and you can also include hybrid underlay tunnels: for example, the client route 10.1.1.x can be carried by both the L3VPN and the internet. And just a side note: there's a draft in IDR describing a more optimized encoding for SD-WAN hybrid, which we call the SD-WAN NLRI, and which will be discussed tomorrow at the IDR session. Next slide, please.
F
B
H
All right. This draft is about PBB-EVPN ISID-based C-MAC flush. This is the list of my co-authors.
H
This is a working group document, and the idea is that it's ready for last call, and we should make sure that we get there in good time. Agenda: a short refresher, some history, and some conclusions at the end. Next slide, please.
H
So what this is about is ISID-based C-MAC flush for PBB-EVPN. In PBB-EVPN we still learn the C-MACs in the data plane, and they are subject to aging timers.
H
So whenever you have multi-homing and you have physical or logical failures, you need to send some C-MAC flush notifications so that the remote PEs can actually flush the C-MACs and relearn them again. RFC 7623, the PBB-EVPN RFC, defines some C-MAC flush mechanisms, but only for single-active multi-homed ethernet segments, and not for other use cases like virtual ethernet segments, or when the multi-homing is managed by the CEs, as in the cases in the draft where you have G.8032 access ring networks or active/standby pseudowires managed from the CE.
H
So what this draft introduces is a generic ISID-based C-MAC flush that works with virtual ethernet segments and regular ethernet segments, and it uses what we call B-MAC/ISID updates with incremental sequence numbers. This is nothing else but a route type 2 with a B-MAC, and the ISID for which we want to flush the C-MACs encoded in the ethernet tag ID field, and this is backwards compatible with RFC 7623.
H
So we started with this draft in 2016. It was developed over the next few years, we had multiple revisions, and in the meantime it was implemented and deployed in large PBB-EVPN networks.
H
Back in 2019 we had a discussion in the working group, because there was an alternate ISID-based C-MAC flush procedure in the virtual ethernet segment draft, but the outcome of the discussion was that we would standardize the C-MAC flush procedure in this document and remove the procedure from the virtual ethernet segment draft, and as a result of that we adopted this document in October 2019. Next slide.
H
Please. So what is new in this revision? Some terminology work: we added a new section just for terminology, some typo fixing, and a general review with minor clarifications, just to make it ready for last call. Next slide, please.
H
All right, so the next one is about multicast source redundancy in EVPN networks. This is, I think, the second or third time that we present this draft. This is the list of my co-authors. Next slide.
H
So we'll do a very short refresher, since we presented this draft already in the past. I'll focus on what we are adding in revision 02, and we'll end with some conclusions. Next slide, please.
H
So the role of this document is to define a solution for multicast source redundancy in EVPN networks and to avoid any packet duplication that may happen as a result of that redundancy.
H
It addresses the redundancy for multi-homed sources, redundant single-homed sources, and also redundant multi-homed sources, and it works for sources within the same broadcast domain, in different broadcast domains, or a mix of both.
H
So we want to cover all the potential cases. The packet duplication may happen because the assumption here is that the redundant sources are sourcing traffic for the same multicast flow. We define a new concept here, the single flow group (SFG), which refers to a multicast group that represents traffic containing only a single flow. Multiple sources may be transmitting the same SFG, and as a result, when, like in this diagram, you have a receiver 21 that is requesting to join the (*, G1) group, we don't want to have duplicate packets on this receiver, so we need a way to avoid that. Next slide, please.
H
So the draft defines two redundancy solutions: the warm standby solution (WS) and the hot standby solution (HS).
H
The first one is interesting because it only requires that you upgrade the upstream PEs that are connected to the redundant sources; everything else is completely transparent to the downstream PEs.
H
What this requires is pretty much the upgrade and configuration of the upstream PEs. So in this example that would be PE1 and PE2. You need to configure what the SFG is and where the redundant sources for the SFG are attached. Then, once you do that, we signal the location of the redundant sources. We use S-PMSI A-D routes in EVPN for that, and we include the SF election extended community and a special flag in the multicast flags.
H
So with this SF election extended community, we run what we call an SF election, a single forwarder election, between the upstream PEs connected to the same sources, and between them they will select only one single forwarder.
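The transcript does not specify the election algorithm, so the sketch below assumes a deterministic modulo election over the ordered candidate list, in the spirit of the EVPN DF election; the function name and hash are illustrative only. The key property it demonstrates is that every PE computes the same winner independently.

```python
# Minimal sketch of a single forwarder (SF) election among upstream PEs.
# The modulo-over-sorted-candidates algorithm is an assumption for
# illustration; the draft may specify a different procedure.

def sf_election(candidate_pes, group):
    """Pick one SF among the upstream PEs for a given multicast group.

    candidate_pes: router IDs (strings) of PEs that advertised the SF
                   election extended community for the same SFG.
    group:         multicast group address (string), used as the hash key.
    """
    ordered = sorted(candidate_pes)  # identical ordering on every PE
    idx = sum(int(octet) for octet in group.split(".")) % len(ordered)
    return ordered[idx]

pes = ["192.0.2.1", "192.0.2.2"]
sf = sf_election(pes, "239.1.1.1")
# All PEs compute the same result, so exactly one becomes the SF.
assert sf in pes
```

Because the input set and ordering are identical on every PE, no extra coordination protocol is needed beyond the routes that advertise candidacy.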
H
So, as a result of this election, the upstream PEs will program some RPF checks for the SFG, so that the non-SF PEs will discard any packets received on a local AC for the SFG, and the SF PEs will accept them, but only from one local AC.
H
There was a need also for another solution, which is hot standby, and the reason why we needed this is that we want to have faster convergence.
H
The drawback of this solution, though, is that it requires all the PEs to be upgraded, so not only the upstream PEs but also the downstream PEs. But other than that, basically, the procedure is as described here: we need to configure the SFG and the BDs that the redundant sources are going to be attached to.
H
We introduced some control plane extensions, because on the downstream PEs we need to detect, or determine, whether a multicast flow is coming from a given source or another.
H
We are including ESI labels to make that distinction. Those ESI labels are signaled in S-PMSI A-D routes, and the other extension is that the ESI labels must be domain-wide common block labels, because we need to make sure that the same ESI label is used by the two PEs here for the same source, and those ESI labels are different on a per-redundant-source basis as well.
H
That's why they must be DCB-allocated labels. So, based on that, we advertise those labels also in the A-D per-ES routes, and we program RPF checks on the downstream PEs, so that upon receiving the duplicate flows from the two sources, the downstream PEs actually discard the packets coming from one source and forward only the ones from the primary source. For failure detection, and for changing the programming of the RPF check,
H
we rely on the advertisement and withdrawal of the A-D routes, but BFD is possible too. So, next slide, please.
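The downstream RPF check just described can be sketched as follows. The class, label values, and method names are hypothetical; a real implementation programs this in the forwarding plane, keyed on the ESI label carried with the traffic.

```python
# Minimal sketch of the hot-standby RPF check on a downstream PE:
# packets are accepted only when they carry the ESI label of the
# currently primary redundant source. All names and values are
# illustrative, not from the draft.

class RpfCheck:
    def __init__(self, primary_esi_label):
        self.primary = primary_esi_label

    def accept(self, esi_label):
        """Forward the packet only if it came from the primary source."""
        return esi_label == self.primary

    def switchover(self, new_primary):
        """On A-D route withdrawal (or BFD down), reprogram the check."""
        self.primary = new_primary

rpf = RpfCheck(primary_esi_label=1001)
assert rpf.accept(1001) is True    # primary source: forwarded
assert rpf.accept(1002) is False   # redundant source: discarded
rpf.switchover(1002)               # primary failed; flip the check
assert rpf.accept(1002) is True
```

This is why the ESI labels must come from a domain-wide common block: both upstream PEs attached to the same source must mark its traffic with the same label, or the downstream check could not distinguish sources reliably.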
H
So what did we do in revision 02? Some general cleanup with typo fixing; we also improved the terminology section; and we added an optional use of BFD, with a reference to the EVPN BFD draft that was presented earlier.
H
So that's why I requested, or I will actually formally request, that the authors of the BFD draft include the signaling of the BFD discriminator along with the S-PMSI A-D routes, which is needed in this draft. And we also added some security considerations to the document. Next slide, please.
H
So what we are requesting is working group adoption. We believe, you know, we've worked quite a bit on this draft, we've presented it a couple of times, and it's ready for working group adoption.
J
Okay, good, yeah. This draft, as the name is showing here, is about E-LAN services.
J
So it's the old E-LAN technology that we are all aware of, but here it's talking about how we can provide the E-LAN service in an optimized way, benefiting from the new segment routing transport. So, given whatever segment routing transport is offering, how can we benefit from that to provide the overlay E-LAN service, in light of the new capabilities offered by segment routing?
J
These are the authors that have contributed to and co-authored the draft, as you see here. So let's move to the next slide, please. So the motivation here is: how can we provide the E-LAN service, which historically, of course, started with pseudowires?
J
If we are talking about E-LAN across the service provider network, we started with VPLS, and then later on EVPN came along as well, and that's how the history developed for the E-LAN service. And now, with the benefit of SR, how can we achieve some optimization for the offering? So, in general, the motivation here is more: how can we simplify the control plane and the data plane, and address as well
J
maybe some of the issues that historically were present with the pseudowire technology, or the VPLS technology, which is widely deployed even today in service provider networks. So if we go back to the history of pseudowires: a pseudowire historically provided a context that identified both the service and the endpoint, and because the pseudowire had that context, that identifiable endpoint and service, it did suffer from scale issues. So here is an example.
J
If we are talking about 10k services deployed on 100 nodes, even with only partial meshes between those services, not across all hundred nodes, we are talking about needing to signal more than a hundred thousand pseudowires in the control plane for those different services.
J
So of course that scale was a concern that VPLS was suffering from. As well, pseudowires in general followed the layer 2 semantic, which didn't have active-active redundancy, and that semantic of not having active-active redundancy was a primary motivation for EVPN, for example, to come and offer active-active redundancy, as well as solve the scale problem, by providing a multipoint-to-point LSP for the EVPN service.
J
Of course, when EVPN provided that multipoint-to-point service, it removed the source information from the context for the EVPN instance, and that mandated that EVPN has to advertise the MAC addresses in the control plane.
J
So what we are proposing here, with this new idea of SR-optimized E-LAN, benefiting again from segment routing, is that we can really improve the scale by simply splitting the pseudowire context. Instead of having it represented by only one context, or one label, as we have for the pseudowire, we split that and have the pseudowire represented by the service label plus another context to identify the source. So now the pseudowire context can be thought of
J
as identifying the node that's sending the traffic associated with that layer 2 service. So that split allows us to scale, and now, instead of needing hundreds of thousands of pseudowire contexts, the service can be represented by only ten thousand service instance contexts. We still, with SR-optimized E-LAN here, maintain the pseudowire semantic, the point-to-point semantic, between two endpoints,
J
by representing, as mentioned here, the pseudowire context with two SIDs in the SID stack: the service SID and the source SID. And as well, what we are saying is that we can still offer active-active redundancy by simply benefiting from the anycast SID capability offered by segment routing today.
J
So here, if we go back to the motivation and the goal: we are saying we want, of course, to benefit from SR, or segment routing, but at the same time we want to optimize, or simplify, the control plane as well. So what we are saying is that if you see here a network with six nodes, as shown, with CEs attached, those CEs are multi-homed or single-homed to the different PEs on that network.
J
We are saying, you know, the discovery can be greatly simplified by simply having each node advertise the services as a bit mask of all the services that are provisioned on a given node. So imagine some nodes here are, for example, configured with a thousand EVPN services.
J
Each node can advertise a bit mask of all the service SIDs, starting with a start service SID. I think, yeah, the assumption is that we are going to have a global service SID block, so we are going to have an SRGB for the services. And how are we advertising the service SIDs? It will simply be, you know, a start service SID plus a bit mask, where the bit for each service in that block is set
J
if the service is configured on that node. And this is how all nodes in the network can quickly discover what services are configured on their peer nodes, and that can help, of course, to discover the membership for ingress replication, for example, if you are going to be flooding layer 2 packets, or to build point-to-multipoint trees, for example, if that's needed as well. So the discovery of many, many services can happen using a single control plane route.
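The bit-mask advertisement just described can be sketched in a few lines. The encoding below (start SID plus an integer bit mask) is an illustration of the idea, not the draft's wire format.

```python
# Minimal sketch of decoding a bit-mask service advertisement: one route
# carries a start service SID plus a bit mask, and each set bit means
# "the service with SID start+bit_index is configured on this node".

def decode_services(start_sid, bitmask, num_bits):
    """Return the set of service SIDs configured on the advertising node."""
    return {
        start_sid + i
        for i in range(num_bits)
        if bitmask & (1 << i)
    }

# Node advertises: start SID 16000, with services at offsets 0, 2 and 5.
sids = decode_services(16000, 0b100101, 16)
print(sorted(sids))  # [16000, 16002, 16005]
```

A single such route replaces thousands of per-service advertisements, which is the scaling argument being made for discovery here.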
J
That route can be, of course, flooded using the BGP control plane to the rest of the network, and along with that bit mask we can advertise per node, especially for ingress replication, a per-node broadcast SID. That SID is still a node SID, but it allows the node to receive broadcast traffic. So if the top SID is that broadcast node SID, then the node knows that this traffic is meant to be broadcast, or meant to be flooded, to the attached sites.
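The forwarding decision described above, keyed on the top SID, can be sketched as follows. The SID values and function are hypothetical stand-ins.

```python
# Minimal sketch of the top-SID dispatch just described: if the top SID
# of an incoming packet is the node's broadcast SID, the payload is
# flooded to the attached sites; otherwise it is handled as unicast.
# SID values are illustrative.

BROADCAST_SID = 17001  # per-node broadcast SID advertised with the bit mask
NODE_SID = 16001       # regular node SID

def dispatch(top_sid):
    if top_sid == BROADCAST_SID:
        return "flood"    # BUM traffic: replicate to local attachment circuits
    return "unicast"      # known traffic toward a learned MAC

assert dispatch(17001) == "flood"
assert dispatch(16001) == "unicast"
```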
J
So here we are describing how you can achieve active-active redundancy with this new idea, as shown. So we are talking about,
J
as mentioned again, the stack: in the stack we are going to have the service SID and the source node SID representing the pseudowire context. So what we are proposing here is that for each ethernet segment that's multi-homed,
J
we are going to have an anycast SID associated with it. That anycast SID is going to be advertised, or flooded, into the network by IGP, or BGP for that matter, but it's representing an ethernet segment.
J
So when that anycast SID is flooded to the network, the nodes attached to the multi-homed site can discover that they are configured with the same anycast SID associated with the ethernet segment, and that can help them do their DF election. And at the same time, when we do data plane MAC learning here, we are going to be learning the MAC that's sourced from a packet coming from a multi-homed site against the anycast SID.
J
So the MAC learning, yeah, maybe I should touch on that a little bit: the MAC learning here for a given service is going to be associated with the source node SID, and that source node SID could be,
J
you know, a unicast node SID if that traffic is coming from a single-homed site, or it could be an anycast node SID if the packet was coming from a multi-homed site. So the learning still happens against the source node SID, whether it's anycast or a regular node SID associated with the node that receives the traffic from a single-homed site. Having said that, let's go to the next slide.
J
Yeah, I think I still have five minutes, right? Yeah, okay. So, for the SR-optimized services data plane MAC learning here, as mentioned, a bit of repetition on the flooding: of course, with data plane MAC learning everybody is aware that we do flood-and-learn here, and we are learning again.
J
Okay, yeah, one key feature as well: let's talk about ARP suppression and how we can achieve ARP suppression with this idea. So one approach here that can be deployed: of course, we can learn the IP/MAC binding by gleaning ARP.
J
However, you know, the ARP replies are typically unicast, and the IP/MAC binding may not be learned except by the nodes that are involved with that unicast packet, and here there is a proposal of flooding ARP replies to allow the IP/MAC binding to be learned by all nodes as well, to be able to provide ARP suppression. Yeah, let's move to the next slide.
J
Yeah, actually, here we are saying, you know, the mechanisms that are required to provide overlay convergence are not needed here, and that's honestly very key to this approach. So, as we said, we are simplifying the control plane mechanism by advertising a very limited number of routes to discover the services that are configured and to learn about them in the control plane, with a much simplified control plane offering here, but as well in terms of convergence:
J
given that the ethernet segment is represented by an anycast SID in the underlay network, we can use the underlay convergence to converge the overlay as well. So there is no need really for any overlay convergence here, because a node can simply withdraw its anycast SID from the underlay network; then it will stop receiving multi-homed traffic, or traffic that's destined to its multi-homed site. And we are saying here, even for faster convergence,
J
a node can redirect traffic to another node associated with the multi-homed site, if for example its attachment circuit failed, until the withdrawal has been propagated to the rest of the network.
J
So let's move to the next slide, please. Yeah, we are saying, of course, because we are using anycast SIDs we can do ECMP multi-pathing. This could be ECMP or even UCMP, because the anycast SID can be advertised with different weights, and if that happens we can send traffic with different weights to the nodes attached to the multi-homed site. Or, as well,
J
we could have active/standby by having one node advertise the anycast SID with a high metric. Next slide, please.
J
I think, yes, this is the last slide, so just to recap quickly here... yeah, yeah, I'm concluding. So, just to recap quickly: we are saying, okay, we are maintaining data plane MAC learning, with all the benefits of data plane MAC learning, with this approach, to achieve, of course, fast convergence. And, you know, data plane MAC learning has been optimized greatly in hardware today; much hardware can do MAC learning at line speed, handle MAC moves,
J
you know, maintain conversational learning, and scale very well. We bring the benefit of active-active and multi-pathing through benefiting from SR anycast, and, as mentioned as well, ARP suppression; a much simpler control plane (imagine one route that can teach you about all the services configured on your box, or on your router); the leveraging of segment routing; and eliminating any need for overlay convergence.
J
I
So there are quite a few claims here that are inaccurate, and the idea of data plane learning, MAC learning in the data plane, and breaking the pseudowire context into a source field, a service ID field, and the destination, which is a multicast field, has been done for over 17 years. First, in 2003-2004, I wrote a draft, VPLS multicast, that describes how to do that; then, seven years later, in 2010, there was VXLAN, and the native VXLAN draft talks about data plane learning; and between the two we had PBB-VPLS, which does the same thing.
J
I
I am getting to the point. I am getting to the point that there is nothing new here. Claims are being made here which are inaccurate, and all the issues that we have with data plane learning still exist, except that either you don't know about them or you're not mentioning them. Okay, so.
J
I
J
Actually, you know what, that idea... you know, even with other vendors I worked for, I have been talking about it 20 years ago. So what is the point? I'm not getting it. The point is the same issues that we had with data plane learning. How come you're overlooking... why are you thinking that data plane learning is evil and we shouldn't...
I
I was going to mention there is a flooding issue as a result of the all-active multi-homing, where the CE does the hashing to one PE, let's say to PE1, and PE2 doesn't receive it, and then, when the remote end sends traffic to PE2, which hasn't learned the MAC address, the multi-homed PE2 is going to flood.
J
N
Yeah, I'm really sorry, guys, I'm really sorry, we are really out of time. So could you try to discuss this on the list, and stay calm? Okay, so.
O
Sammy, I have one particular question. In the draft, section 4.1, it mentions that extension in another draft. It seems to me that this is either, you know, something that's reinventing EVPN, or is...
O
Well, that's even more concerning. But also, I mean, this is the BGP Enabled Services working group; would this draft not be better discussed in SPRING at this point?
J
We can discuss the draft in SPRING, but the idea here is that this is an E-LAN service, and we didn't define yet the extensions for BGP to support this. So this is why we are presenting the draft here in BESS, right, so...
J
In the discussion there is a BGP angle to it that is going to be added to the draft shortly, you know, the part about distributing in the control plane the set of E-LAN services associated with a given node.
J
Why are we... okay, the main motivation here, which we mentioned at the beginning of the presentation, is that we are saying we want to have a much simpler control plane that scales better, in terms of the control plane, in distributing the services. Actually, if you are talking about EVPN, if we are going to go the EVPN route and signal, for example, on a hundred nodes, ten thousand services with their MAC entries, you are talking about millions and millions of routes that you have to distribute.
J
So we are saying, you know, instead of that approach, why can't we... you know, if you talk about EVPN and you have 10,000 services, you are talking about distributing ten thousand ethernet...
J
B
Okay, we're over time. Next up is... so please take any of your questions on this draft to the list.
N
L
Okay, so I just want to say that this draft is actually independent of... okay, all right. So, oh, my comment was about Sammy's draft, but now let me get down to my presentation: this IP-independent generic fragmentation.
L
So, the fragmentation issue: when it comes to pseudowires or VPLS, there was a solution, RFC 4623, using the control word, but that is not applicable to EVPN. There are two reasons why: EVPN either does not use the control word, or, when it does use the control word, the sequence number is zero; it does not carry a sequence number in it. Also, the application we are...
L
This is unrelated. So the second observation is that IP fragmentation can actually be viewed as independent of IP, as long as the context for the identification in the fragmentation header is available. Next slide.
L
Now, the egress PE sees that label and knows that the GFH follows, so it reassembles the packet and then hands the packet to MPLS for further processing. Next slide, please. So, the fragmentation header: the first one shown is the IPv6 fragmentation header, and the next one is the generic fragmentation header. The red fields are the differences from the IPv6 one. The first one is the first nibble at the very beginning; that is for the ECMP issue.
L
So there have been some discussions on the MPLS, PALS, and BESS mailing lists, and this ECMP issue was brought up by Stewart, so this is a solution for that ECMP issue. It is just the same as the first nibble of zeros in the pseudowire control word. I want to say that it's not in the draft yet, but I'm presenting it here because of the discussion.
L
The identification field is variable, because there are cases where we need to embed the source information in this identification field, and when that is the case, there is a flag bit, the S bit, set to indicate that the identification field itself includes the context information, so the context from the outer header will not be used. And because of that variable identification field, we need a header length in this GFH. Next slide, please.
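A header along the lines just described can be sketched with a packing helper. The exact field widths are not given in the transcript, so an IPv6-fragment-style layout is assumed here for illustration: next header and header length bytes, a 13-bit fragment offset sharing a 16-bit word with the S and M flag bits, and a 32-bit identification.

```python
# Minimal sketch of packing a hypothetical GFH (fixed-size identification
# case). Field widths and positions are assumptions, not the draft's
# actual encoding.
import struct

def pack_gfh(next_header, frag_offset, s_flag, m_flag, identification):
    """Pack an 8-byte GFH: next header, header length, offset+flags, ID."""
    hdr_len = 8
    # Offset in 8-byte units occupies the top 13 bits of a 16-bit word;
    # the S bit (ID carries context) and M bit (more fragments) follow.
    off_and_flags = (frag_offset << 3) | (s_flag << 1) | m_flag
    return struct.pack("!BBHI", next_header, hdr_len, off_and_flags,
                       identification)

gfh = pack_gfh(next_header=137, frag_offset=185, s_flag=0, m_flag=1,
               identification=0xABCD)
print(len(gfh))  # 8
```

When the S bit is set, a real header would grow past 8 bytes to hold the source context inside the identification field, which is exactly why the header length field is needed.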
L
Next slide, please. So, coming back to the motivations: first, it solves the EVPN fragmentation issue without requiring IP transport or incurring IP overheads.
L
In addition to fragmentation, the other functions provided by IP can be supported at this IP-independent shim layer as well; ESP is one example. And in fact this fragmentation, or the ESP functions...
L
Next slide, please. Okay, so we are basically seeking comments. There have been presentations and discussions in various working groups about the use case, and if this is a solution that we do want to work on, then we need to find it a home. Right now the draft is targeted at the TSV working group, but it can be any group that we deem appropriate. Next slide, please. Oh, actually, no more, so I'm done with the presentation.
P
Stewart. Okay, so, you know, thank you for improving the draft a little bit. I'm still worried about how you're going to prevent reassembly lock-up, because basically you've got nothing to clear the resources
P
if subsequent packets fail to arrive to complete the packet. And I do wonder whether you should have looked also at using the pseudowire header design, just adding an identifier to it. But I'm still also worried about this design in that I'm not quite sure what problem you're fixing, because if it's IPv6 you're running, then PMTU discovery will fix the problem; it will automatically back down for you.
L
So let me answer the third question first. There are cases where you cannot ask the traffic source to do the fragmentation.
L
For example, you are an EVPN provider and your MTU is limited, and yet you are supposed to provide transparent services, and you cannot ask your customer to tone down their MTU. So that's the use case. And then your first question... what was the first question again?
L
That's a generic problem even if you are not using this one. Let's say you are using IPv6 encapsulation and you fragment using IPv6, or even in the pseudowire case: when you do have fragmentation and reassembly based on the sequence numbers, an egress PE could have many of those. It's the same problem.
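The reassembly lock-up concern raised here is usually handled with a per-identification timer: fragments whose siblings never arrive are evicted instead of pinning buffers forever. The sketch below assumes a timeout value and data structure of its own; nothing here is specified by the draft.

```python
# Minimal sketch of timer-based cleanup for a reassembly buffer, the
# standard answer to the lock-up concern. The timeout value and class
# design are assumptions for illustration.
import time

REASSEMBLY_TIMEOUT = 60.0  # seconds; an assumed operational value

class ReassemblyBuffer:
    def __init__(self):
        self.pending = {}  # identification -> (first_seen, [fragments])

    def add(self, ident, fragment, now=None):
        now = time.monotonic() if now is None else now
        first_seen, frags = self.pending.setdefault(ident, (now, []))
        frags.append(fragment)

    def expire(self, now=None):
        """Drop incomplete reassemblies older than the timeout."""
        now = time.monotonic() if now is None else now
        stale = [i for i, (t, _) in self.pending.items()
                 if now - t > REASSEMBLY_TIMEOUT]
        for i in stale:
            del self.pending[i]
        return stale

buf = ReassemblyBuffer()
buf.add(0x1, b"frag0", now=0.0)
assert buf.expire(now=61.0) == [0x1]  # lock-up avoided: buffer reclaimed
```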
P
L
Well, actually, here it's somewhat related to what Sammy was saying. Sammy was basically using the source SID to introduce this pseudowire context into this EVPN concept. So here it's the same thing: the context, either embedded in the identification field or in the outer header, will give you that pseudowire context. So we can discuss that more, but I think it's the same thing. And then your second question or comment... okay, what was that again?
P
I can't remember what it was now. The key ones were PMTUD, which will happen naturally, right? It's just, you don't have to do anything, that automatically happens. And the lock-up problem.
E
Yeah, I have to admit, I only read this during the spirited discussion of the previous draft. From a procedural standpoint, you might have to split up the signaling from the transport aspects; that might be better, since those are different areas. That's just procedural. And there are also some details in there that need to be changed, but that can be done. I have one question. Like I said, I didn't really... I only just scan-read through this draft quickly.
E
L
E
L
Okay, the reason why they are needed is that MPLS does not have a means, a way, to indicate what the next header is, what follows after the bottom of the label stack. So the GFH label will serve the purpose of saying that, okay, the next header is a GFH header. And then, additionally, the label can optionally identify, say, which source, which node, sent this packet, in addition to the GFH semantics. And that is, I...
L
It would be like the CE, right? The CE, right? No, the PE, I mean. Or the PE, okay, yeah. And that is actually very similar to the source SID in Sammy's presentation. It so happens that the two presentations are somewhat related, even though I only just realized that at that point.
P
A comment, you know, so that I remember where the question was. You kind of alluded to it, but you haven't really answered it, which is: I'm not sure why you need type information here. The way MPLS works is that the payload type is implicit in the label itself.
P
So I'm struggling to understand what the point of the next header is, other than to sneak in an MPLS type field without further discussion.
L
You mean the next header value field in the GFH header itself? Yeah, I mean...
L
No, the GFH label indicates that the GFH follows. But how do we know what follows after the GFH?
L
So that's why we need the next header in the GFH itself, because it could be that after the GFH header it's no longer MPLS; it could be IP, or it could be anything else.
P
So presumably you were expecting this to be an ethernet frame, or some other kind of frame, when you set the pseudowire up, and all you've done... all you need to do is to provision this particular pseudowire such that you expect to find the fragmentation header there, and it's probably simpler to always send the fragmentation header.
L
No, no, not necessarily. When there is no need to do the fragmentation, then you don't need that GFH. And in the EVPN MPLS case, definitely, after the GFH it's MPLS.
L
Another MPLS label stack, basically, which is the service label. But there could be other use cases where what follows the GFH is no longer pure MPLS payload. So I...
L
Definitely we... I also have a presentation scheduled for the MPLS working group, and we...
L
We do want to have more discussions on the mailing lists as well. We still have... thank you. Thank you.
B
Okay, I think we probably should wrap it up now, with us 10 minutes over, unless you have a 10-second comment, AC. Are you gone? Okay, so thanks very much, everybody, and see you next time.