From YouTube: IETF102-BIER-20180718-0930
Description: BIER meeting session at IETF 102, 2018/07/18 09:30
https://datatracker.ietf.org/meeting/102/proceedings/
B: Well noted! Thank you. Here's our agenda today. I shifted things around a little bit, based on availability of folks and discussion, kind of grouping things around content, and then I've added some stuff that just got announced. I guess today I get slides from Toerless about a design-team role, and an interop test that a vendor, excuse me, a provider, is trying to put together for BIER vendors, so I'll show that in a bit. Everyone okay with the agenda? Anything missing?
B: All right. Anyone I insulted, give me a chance. All right, yeah, we'll get to this, okay. I'm going to be complaining about this whole part, sorry. So, first up: multicast in HTTP. I sent an email out; yeah, I shifted things around, I sent the mail out, right. Good, you're here, yeah. Thank you. Honestly, I wanted people to have fresh coffee to have this conversation, I'm so excited.
J: Thank you. So this draft is, basically, if I want to recap it, based on a previous draft, "Multicast HTTP using BIER", which describes a use case in the use-case document that is listed there. In that earlier draft we described some of the required functional elements, such as the PCE and the service router, and there were also some suggested interactions described in that earlier draft.
J: If we go to the next slide, please. Now, in this draft we basically took what was described there and tried to go into a little more detail. So in this draft we have started off with a reference architecture to realize this use case over BIER. It is more like an overlay layer on top of BIER, and, as this diagram shows, there are certain elements.
J: This overlay is formed by the ingress routers and the egress routers, the BFIRs and BFERs, and also some additional functions, such as the service handler and the path computation element. The interesting thing is that the BFIRs and BFERs, the ingress and egress routers, are combined with this additional functionality, which we call the service handler; these are marked as the red boxes. And also the PCE, the path computation element, is jointly shown as the green box together with BIER-TE, which basically sends, or helps configure, some of these red boxes, which we call NAPs: client-side NAPs and server-side NAPs. Oh, a question? Sure, Greg.
B
From
Cisco
you
show
beer
te
here
in
the
PCE
is
T.
They
expected
requirement
or
you
just
through
then.
Is
it
yeah?
It's
just
a
possibility.
J: If you go down here, yeah, the next slide. So in this draft we also try to compare: if we want to realize this use case over IP multicast, how can it be realized, and what are some of the operational details and problems? So in this draft we describe the realization of this use case over IP multicast, and we basically show the operations in terms of the support that this overlay will require to create these multicast groups.
J
Maintaining
the
group
States
and
also
the
signaling
IGMP
signaling,
to
join
to
join
this
multicast
groups
are
described
now.
One
of
the
issue
with
this
realization
of
this
use
case
over
IP
IP
MC,
is
that
in
certain
scenarios,
where
there
are
few
number
of
receivers,
what
we
basically
can
sees
is
that
that,
because,
when
that
video
is
encoded
in
multiple
rates,
there
may
be
some
of
the
bitrate
switch,
may
not
be
required
and
drop
unnecessarily.
J: Yeah, so what we anticipate, or assume, is that when there are fewer receivers, so, for example, say a particular receiver is subscribing to one particular bitrate, generally what will happen at the NAP level is that, from the main server, it will subscribe to all the different bitrates. So there is an assumption that a multicast group will be formed for each encoded bitrate. Now, on the client side, say a client at a particular moment in time...
J: ...is basically requesting a particular bitrate, so it subscribes to, say, one of the groups. Now, when there are only a few receivers, you basically subscribe to only one of the groups, and the other groups may not be joined; they don't want to join them. Now, as the users keep requesting other groups, these will keep changing, and the client has to constantly send join requests to each of these groups.
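The behavior described above, one multicast group per encoded bitrate with clients re-joining as they switch rates, can be sketched as a toy model. The class and the group addresses here are purely illustrative and do not come from the draft:

```python
# Toy model of DASH-style bitrate switching over per-bitrate multicast groups.
# Each encoded bitrate gets its own group; a client that switches bitrates must
# leave one group and join another, which is the signaling churn discussed above.

class Client:
    def __init__(self):
        self.group = None          # currently joined group, if any
        self.join_messages = 0     # IGMP-style join/leave signaling sent

    def switch_bitrate(self, groups, bitrate):
        target = groups[bitrate]
        if self.group != target:
            self.join_messages += 1   # leave the old group, join the new one
            self.group = target

# One group address per encoded bitrate (illustrative addresses).
groups = {500: "232.1.1.1", 1000: "232.1.1.2", 2000: "232.1.1.3"}

c = Client()
for rate in [500, 1000, 2000, 1000, 500]:   # client adapts over time
    c.switch_bitrate(groups, rate)

print(c.join_messages)  # prints 5: every bitrate change costs a join
```

This is the per-group state churn the speaker contrasts with the BIER approach, where the group signaling is eliminated.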
J: So if we move on to the next slide: after comparing with the realization over IP multicast, we say, okay, if instead we have to realize this use case using BIER, we go into certain details of the functions we described earlier. For example, one of the functions was this service handler, which was co-located with the ingress and egress routers. So what does the service router do?
J: It terminates application-level protocols, and it tries to extract the URI to determine the exact path ID, and at that point it takes the help of the PCE. The PCE keeps track of all these service execution points and knows how to reach them, and, as we mentioned before, it can be part of BIER-TE. The PCE basically determines the path ID. Now, what we explained is that the path ID is basically the origination point and the server...
J: So, basically, the originating point and the termination point, combined, are called the path ID, and the PCE generates this path ID. There are certain interface functions by which the BFIRs are made aware of how the path ID is mapped to the BIER header. In terms of how the multicast is achieved, we followed exactly the steps described previously, now described in terms of the sNAP and cNAP; so here, on the server-side NAP...
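As a rough illustration of the mapping just described, a path ID that names an origination point and a set of termination points, which the ingress resolves into a BIER bitstring, here is a minimal sketch. The table structure and helper name are assumptions for illustration, not definitions from the draft:

```python
# Hypothetical sketch: a PCE-style table maps a path ID to the BFR-ids of the
# egress routers (BFERs), and the BFIR turns those BFR-ids into the bitstring
# carried in the BIER header.

def bitstring_for(path_table, path_id):
    """Return the BIER bitstring (as an int) for a given path ID."""
    egress_bfr_ids = path_table[path_id]        # BFR-ids of the BFERs
    bits = 0
    for bfr_id in egress_bfr_ids:
        bits |= 1 << (bfr_id - 1)               # BFR-id N sets bit N-1
    return bits

# Illustrative path table kept by the PCE: path 1 reaches BFERs 1, 3 and 4.
path_table = {1: {1, 3, 4}}
print(bin(bitstring_for(path_table, 1)))  # prints 0b1101
```

The point is only that, once the PCE has resolved the path ID, replication is fully determined by bits in the header rather than by per-group state in the core.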
J: If we move on to the next slide, I think. So here we try to identify, or describe, some of the advantages. Basically, it eliminates the dynamic multicast signaling, when you compare with the IP multicast scenario; also, it avoids sending any unnecessary data blocks, unlike the IP multicast solution; and another thing is that on the sNAP side we can now control things: we can wait a certain amount of time to collect all these requests.
J: So that's the end of the presentation. As the next step, we are basically suggesting to include maybe an additional applicability statement, documenting how BIER can be applied to aggregate the HTTP responses, and we would also like to elaborate on this solution to support that applicability statement as our next step. So I think that's the end of the presentation.
D: Tony here. Two observations. One is that, even if you coalesce these requests, a BIER domain gives you no penalty if you decide to go one-on-one with the receivers before you actually cut them over, because there is no state; you can talk to the guy just using one bit, one-for-one, before you coalesce the thing, and then cut him over onto the group send. Just an idea, because BIER is unusual, right. And the second one is...
D: The draft still doesn't describe in gory detail how HTTP has been bent to actually deal with this multicast, but because you actually built the whole thing over it, I understand that all the overlay details have pretty much already been figured out. No, I don't think it needs to be described; it's just an observation. Having read it, the draft is much, much clearer, but how the magic of the HTTP is done is still not described, which I don't think is actually in scope for BIER anyway.
L: So, basically, the architecture is similar to the one you just described here. At the edge they have devices to convert unicast into multicast, and on the server side there is also a unicast-to-multicast function. So that's basically it, I think. I think in that use case they are actually doing it in the application layer; they're using the current multicast capability existing in the network, like PIM, right.
L: They are doing something in the application layer, for example, to define what the packet format will look like in the multicast streams, and they also define the control plane, or signaling protocols, or interfaces like this. So I think this work is quite similar to that one, and from the requirements you just described here, I don't think BIER should need to make any forwarding change or anything in the data plane; but maybe, for BIER to be used in that case...
L: Actually, I think one chapter on how BIER can be used in that case would be useful, like how the functions you described here would be used. And I strongly suggest that you be consistent with the framework and the use cases, or something like that, across the different organizations, in a consistent way, so we don't have to repeat work. Yeah.
B: This is Greg again. So, in the block diagram in your discussion, you show that the service handler and the BFIRs and BFERs are in the same device. From a deployment perspective, if we're talking about hardware, that's potentially a challenge. Getting BIER in boxes is step one, right; getting some service-handler function into that router is probably beyond the scope of many routers, or of router vendors' interest, as well.
D: I mean, I've already had discussions about something like virtual BIER overlays, because people build their own replication networks, right, which are really ingress routers with replication points, and you need a control protocol, even if you don't necessarily run BIER forwarding, to synchronize all that stuff. So having this kind of document and engaging on it is welcome.
B: Who's read the draft? Who feels it's ready for last call? We got some good support, though, okay, so we'll take that to the list. Thank you very much. For me, I would say it was about eight to six: eight read, six said yes. I saw two hands go down; let's do that count again. Who's read the draft? Five, six, seven, eight, nine. Who thinks it should go to last call, who agrees? Four, five, six, seven. Okay, seven.
D: And nine. Let me ask a side question. So, first, thanks, it's much more readable, okay, this version. The first draft I felt was completely beside the point; now I understand what the use case is, and I understand, slowly, how that stuff would work. No, honestly. So, who thinks that pushing the authors to make drafts even more readable, improving the quality, would be worth doing? Right, because I was stunned, I mean, at the quality of this version versus the last version.
T: Okay. So the draft says that the multicast addresses to use for queries should be configurable, which makes sense, but it's kind of tricky to deploy this if you need to go to all the routers and configure the exact same multicast addresses on every single router. So the question is: do you think it would be useful to go to IANA and ask for either a range of addresses, or just one address, that can be used for this purpose?
T: So, in most cases, if people have a single BIER domain, you only need a couple of addresses, one for queries, one for reports, and that's it. And, yeah, maybe potentially it could be nice to have some well-known addresses for that. If you have multiple subdomains, then maybe you'd want multiple instances with different group addresses, but that won't be a common case, I would think. So, anyway, I would like some input on whether you think well-known addresses would be useful. It could maybe be easier if we used well-known addresses.
T: The only drawback I could think of is that if there's a misconfiguration or something, maybe some message could go somewhere it wasn't expected to be, and those other routers also use the same well-known address, so they will just process the message just fine. But, yeah, I think it could make it a lot easier for people to deploy if the addresses were well-known.
B: I guess the concern is the fact that currently these are link-local multicast addresses, and we've got this domain-wide BIER domain here. The challenge really only being, as you mentioned, if something leaks, and a leak would really be a configuration problem, right. So even if you had multiple sets or multiple areas, the mask itself is going to prevent that from going outside. So the BFIRs and BFERs that are part of a given set or area will only get that message, so that multicast address is really hidden within the BIER domain. Yeah.
T: So, yeah, what we can say in the draft, and I think it's already there, is that you may use the link-local addresses, so implementations could have that as a default.

B: Yes, but it also says it must be configurable.

T: Oh, seriously? I think so; I should double-check, but I believe it says it must be configurable, because not everyone wants to use those addresses, I would figure. We want to allow multiple subdomains, so multiple instances.
B
Normally
I
would
I
would
say
that
once
this
progresses
we'll
get
feedback
and
larger
IETF
community
yeah
from
from
a
architecture
board.
However,
this
is
multicast
and
none
of
them
pay
attention,
so
the
people
who
will
be
paying
attention,
or
mostly
in
this
room
so
think
about
this
I
mean
this
is
something
that
we
are
responsibilities.
B
I'm
saying
the
room
in
general,
not
just
to
the
authors
and
stick,
but
this
is
part
of
our
role
when
this
stuff
goes
forward
IETF
at
large
it
we
are
kind
of
that
bastard
stepchild
people
put
it
off
scope
oftentimes.
If
I
talked
to
several
working
groups
already
this
week,
where
they've
taken
some
some
action
to
say,
multicast
is
out
of
scope
and
I
understand.
They
shouldn't
try
to
boil
the
ocean,
but
that
doesn't
mean
it's
off
the
table
and
often
that's
what
happens.
T: I guess I'm starting to wonder now about the PIM document. When you send that PIM join, you normally send it to all PIM neighbors, which is a link-local address as well, right. So if that is used in the PIM document, and I'm not sure if that's the case, then it makes sense to do the similar thing in our IGMP and say a link-local address is okay there as well. Okay, yeah.
T: So, yeah, actually, that could be one possibility. I mean, the draft doesn't say how we do it; maybe that's needed. But basically, when you send, say, an IGMP query out over BIER, if you have those instances, you have to decide, okay, which instance are you in, and which group address do you want to use for the query?
T: All right, so I have this document about MTU discovery, and I could just talk about what that draft says, but I feel like people are a little bit, I don't know, either confused, or in disagreement, or whatever, about what this MTU discovery is and whether it's needed and such, and I want to just quickly say something about MTU in general and MTU discovery. Next slide, please.
T: Yes, you know, we have this working group draft already that talks about probe-based path MTU discovery, and I'm proposing a different way, which is finding sort of a subdomain-wide MTU instead. But I want to talk about the general issue before going into the drafts. Next slide, please. So, obviously, for an IP packet to reach its destination, it must be small enough to traverse all the links towards the destination, which is not always that easy; in the old days, and still for IPv4.
T: What was done is that the host that originates the packets needs to make sure the datagrams are fragmented, so that each of the fragments is small enough. The question, then, is: how do you know what is small enough? You could use a safe value; 1280 is supposed to be safe for IPv6, though it's not always the case, and 500-something, I guess, used to be the safe MTU for IPv4.
T: But ideally you want to use something larger, so that you can minimize the number of packets that are sent; you don't want to fragment the datagrams into smaller pieces than you need to. So some kind of MTU discovery is useful, so that the host can maximize the size of the fragments. Okay, next slide.
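The trade-off just described, that a conservative MTU guess means more fragments and more packets on the wire, is pure arithmetic and can be shown in a few lines. The fixed per-fragment header size used here is an illustrative value, not taken from any spec:

```python
import math

# How many fragments does a datagram need for a given assumed path MTU?
# header_len models per-fragment IP header overhead (illustrative value).

def fragment_count(datagram_len, path_mtu, header_len=40):
    payload_per_fragment = path_mtu - header_len
    return math.ceil(datagram_len / payload_per_fragment)

# An 8960-byte datagram: a conservative MTU guess costs more packets.
print(fragment_count(8960, 1280))  # prints 8 with the safe IPv6 value
print(fragment_count(8960, 9000))  # prints 1 if the real path MTU is known
```

This is why discovering the actual path MTU, rather than always assuming the safe minimum, is worth the trouble.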
T: So, when you do MTU discovery today for IP, what happens is, well, first of all, the idea is that applications, or say TCP, can know the size, so they can use the optimal size. Also with BIER in particular, if you use an overlay, then the edge BIER routers are actually originating packets, like a join/prune or an IGMP report, or something like that, so it's good for the egress and ingress routers to know the MTU for that purpose as well.
T: So what happens with IP path MTU discovery, generally, is that if a router gets a packet that is too big, it will send an ICMP report back to the source, saying it's too big and what the available MTU is, and that way the host can, over time, learn and cache what the MTU is for different destinations.
T: This is always done for IPv6, even for IPv6 multicast; that is what the specs say. And it's sometimes done for IPv4. So for BIER in particular, it would be really bad to do the in-flight fragmentation I talked about earlier. Imagine a packet travels through BIER and, in the middle of the BIER domain, the packet is suddenly too big. You would have to decapsulate the packet, do IP fragmentation, and re-encapsulate the fragments, which just won't work.
T: So what could we do instead? You could do some kind of BIER fragmentation, which I don't think we want to do; basically, have some fragment ID in the BIER header and such. Or we could do MTU discovery with BIER and just avoid fragmentation in BIER completely, and there are a few ways you could do that. One would be similar to IP path MTU discovery: perhaps a BIER router, if it gets a BIER packet that is too big, could send some new BIER message.
T: So, whatever, back to the BFIR, saying the packet is too big, and the BFIR can then do the fragmentation as needed when it encapsulates packets. That's just a possibility; no one has proposed that. Or we could do some MTU discovery by using either the probe-based path MTU discovery draft or what I'm proposing in this draft.
T: Okay, next slide. So, yeah, I guess I've already moved into this piece here. So, basically, on, say, a BFIR, you would like to know the MTU so that you can report back to the IP sources, saying the packet is too big, when you attempt to encapsulate; but we also want to know, for the overlays, what the MTU is. We could use a safe value, just say maybe 1280, or 500, or whatever will always work with BIER, but it's not optimal.
T: So, if we want to use something else: you could imagine that an administrator configures a BIER MTU statically on each router, and that might work fine, maybe. Or we could discover the MTU using one of these drafts. The current working group document uses probes, which might work fine; my only concern is that if the topology changes, you won't really know the new MTU until the next probe. So what do you do in the meantime: do you just drop packets, or should you probe really frequently?
T: Oh, no, and the one I'm suggesting is the subdomain MTU, which is basically using the MTU of the weakest link in a domain. That is not very optimal; if you have a single link with a very small MTU, then you're in trouble, you know. But a good thing about it is that if there is some routing change, or in general if a link flaps, normally the MTU would not change, so there's no signaling needed and you wouldn't drop packets for a short time just because the MTU got smaller.
T: That's not entirely true, though: if the weakest link goes down, for instance, then the MTU could go up. The draft is suggesting having some delay there, so that, you know, there's no harm in keeping the small MTU for a while, especially if it's just a link flap. Of course, if someone brings up a new link that becomes the new weakest link, then you would have to reduce the MTU in the domain.
B: Greg again. Or would you have to reduce the MTU? I would. So then, let me boil this down from my perspective. We have a BIER domain, which should be a single administrative domain. If there's an MTU mismatch, it's a configuration problem; it's not necessarily a topology change. It's not like I had a joiner in another part of the world who wasn't part of my network, and the links down there changed the MTU.
B
So
going
back
to
your
use
case
of
the
overlay,
that's
solid,
the
overlay
can
you
know,
pin
messages
they
can
pack
to
MTU.
We
need
to
know
what
the
network
is
and
as
an
operator
I
would
want
feedback
to
tell
me
if
something
went:
went,
gunny
bag.
So,
let's
say
topology
change
because
of
a
new
deployment
within
my
beer
domain
they
followed
my
procedures,
they
have
all
the
deployment
done.
They
sent
bit
assignments,
but
someone
messed
up
a
link.
Configuration
I
wouldn't
want
my
content
to
fail
everywhere
else
because
of
a
configuration
problem
somewhere.
B
I
would
prefer
to
have
some
feedback
that
says
we'll
keep
for
in
the
packets
of
the
world,
but
I'm
not
forwarding
here
anymore
because
exceeded
MTU
on
this
link
and
notify
some
alert
that
you
have
a
configuration
somewhere
in
your
network.
Let
the
operator
know,
there's
a
problem
and
not
all
the
customers
know
there's
a
problem.
Yeah.
B
If
it's
my
domain
and
there's
a
router
that
doesn't
mat
match
my
requirements,
that's
a
configuration
problem,
it's
a
single
domain!
So
it's
a
single
entity
making
a
decision.
I
can't
I
mean
maybe
I'm
wrong
and
wave
your
hands.
But
is
there
an
operator
case
where
I
have
a
network
where
a
handful,
the
links
will
be
a
small
into
you,
yeah.
B: Admittedly, I'm thinking of the provider case, where we've got a large overlay network, I'm running multiple services on top of it, and BIER is now part of it; in that environment, that would be a configuration wrong. So I'm admitting there are deployments coming where this may be an issue. If there's an enterprise, or if it was a data center, maybe; but a DC should again be configuration. It's only really the enterprise case, or when we're crossing administrative domains, where I can see an MTU issue being a problem, and BIER shouldn't be crossing administrative boundaries today. So.
W: Very interesting conversation. I mean, I agree that this is a configuration thing most of the time, but it also seems, and now I have to admit I haven't read the draft, as you probably figured, that you could have essentially the equivalent of a configuration variable which is the minimum MTU that is acceptable to you, and that would provide a floor, so that if you have a link come in that's misconfigured, everything else doesn't have to handle it. So, over to you.
T: If it tried to detect what this MTU should be... So the idea in the draft, just to clarify, is that each router looks at all its local BIER interfaces and determines the maximum packet it can send out to all its neighbors, the common minimum of all the MTUs it has locally, and then you tell everyone, via the IGP, "this is my local MTU", you know. And then each BIER router, like an ingress router...
T: Any router will then look at all those announced MTUs, and the minimum of those announced MTUs is the safe MTU to use in the entire domain. The question that was raised, though, is: do you announce the maximum size before or after the BIER encap? The draft seems a little bit ambiguous, but what the draft tries to say is that you should announce the maximum before encap; that is, basically, what you can natively send.
T
Fourteen
hundred
and
eighty
and
and
the
b1
cap
takes
up,
maybe
say
twenty
five
pounds.
Remember
then
you
announce
1460
as
your
SDM
to
you,
it's
in
maximum
pale
or
like
the
IP
packet
that
can
go
inside
inside
the
beer,
header
canal
and
the
problem
with
that,
though,
is
that
you
need
to
anticipate
what
the
end
caps
might
be.
So
I
was
thinking.
You
know
what
end
caps
you
might
use
for
your
neighbors
or
in
this
domain,
but
maybe
the
ingress
router
has
a
choice.
T: A choice of what encap to use, that is. And then, just quickly, the other option is that you announce the MTU available after encap. So if it was fourteen-eighty with, say, an MPLS label or whatever, then that's what you announce; but then the ingress router will have to look at what encap it is going to use and subtract that to find the actual IP MTU, and that works just fine, unless, I don't know.
T: I know IS-IS already has a sub-TLV, or a TLV, perhaps, for finding an MTU for a domain, so you could maybe do something like that. I think it's useful to make it BIER-specific, though, because BIER might be deployed in just parts of your network, and you want to know what you actually can send through with BIER. But, yeah.
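The scheme just described, where each router floods its local minimum BIER-interface MTU via the IGP, everyone takes the minimum over all announcements, and the ingress accounts for the encapsulation, can be sketched as follows. Both announcement variants from the discussion are modeled; the function names and the 20-byte encap figure are illustrative assumptions:

```python
# Sketch of the subdomain-MTU idea discussed above.  Each router announces one
# number via the IGP; every router takes the minimum over all announcements.

def subdomain_mtu(announced_mtus):
    """Safe MTU for the whole subdomain: the smallest announced value."""
    return min(announced_mtus)

def max_payload(announced_mtus, encap_overhead, announced_before_encap=True):
    """Largest payload the ingress may encapsulate.

    announced_before_encap=True models routers announcing their raw link MTU,
    so the ingress subtracts its own BIER encap overhead; False models routers
    announcing an MTU already reduced by the expected encap.
    """
    mtu = subdomain_mtu(announced_mtus)
    return mtu - encap_overhead if announced_before_encap else mtu

# Three routers announce their local minimums; BIER encap assumed 20 bytes.
announcements = [9000, 1500, 4000]
print(max_payload(announcements, 20))        # prints 1480
print(max_payload([1480, 8980], 20, False))  # prints 1480
```

Either variant yields the same usable payload; the difference is only whether the ingress or the announcing router does the subtraction, which is exactly the ambiguity the speaker notes in the draft.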
T: Yeah, I suppose. Yeah, the only problem, though, is: let's say in your IGP domain you have some links that are pretty small and that are not part of the actual BIER domain. Then they will impact what you think you can transfer via BIER, even though they are never going to be used with the BIER encap. But this can be configured.
D: For example, we understand what payload is inside the BIER frames, right, and we could look at, say, an IPv4 frame, look at the DF bit, and if people flip on the DF bit and we blow the MTU, we could basically send an ICMP back saying, like, "on this receiver set you are blowing the MTU", because we have the BIER ICMP and ping stuff all defined, right. So we could send it out-of-path or in-path, and we could indicate on which receiver set...
D
You
blew
the
MTU,
so
we
could
kind
of
elegantly
punked
the
stuff,
in
the
sense
that
if
you
really
worried
about
the
stuff
put
it
on
the
payload,
that
is
within
the
beer
frame,
which
you
understand
and
get
your
router
vendors
to
look
at
something
like
a
DF
bit,
which
indicates
strongly
that
I
care.
When
you
start
to
draw,
because
today,
most
people
that
I
talk
to
they
Soviet
application
level
by
configuration.
And
then
they
assume
the
network
will
be
okay
with
it.
So
they
don't
even
probe
in
today's
multicast
deployments.
T: But I would say, normally today, when you transfer, say, IPv6 multicast, the specs really say you always have to do path MTU discovery if it's bigger than 1280, for various tunnel types and so on, and the encaps; that's what routers do, they handle this. So I really want BIER to handle this as well, and all we need is for the BIER ingress router, when doing the encap, to know something about the MTU to expect.
T: Fine, if you always use 1500-byte packets; the only problem is that, if you're doing, like, 4K streaming or something, it could be great to, you know, do something larger. So I think, you know, not everyone maybe needs it, but I think it's very useful to have MTU discovery. We already have a working group document, though, about MTU discovery, so I guess what I want to ask is...
X: Maybe, yeah: what do you think? And I'm a co-author, so, you know, take that in that context.
R: So this is extending the PIM signaling to mLDP. Let me take a step back and actually explain the problem we're trying to solve here. So, going forward, some of the Tier 1 operators out there are trying to build the next-generation converged core. Basically, what this means is that they want to have a lean core from a protocol point of view, just using IGP, BGP, SDN-type protocols, or segment-routing-type protocols to build this core. That's it.
R
This
core
will
support
many
other
type
of
access
services
like
wireless
5g,
Business,
Services,
verticals,
etc,
etc.
So
what
that
means
is
that
they
like
to
choose
beer
for
the
multicast
of
this
core
because
of
the
simplicity
of
beer,
I,
think
you
know
this
group
did
a
great
job,
making
beer
extension
of
our
GP
and
that
simplifies
multicast
much,
and
this
is
the
attractive
point
of
beer
using
in
Sdn
or
segmented
routing
type
of
core.
R: There are these talks of Tree SID, etc., in segment routing, stuff that we need to solve. So, basically, we need to have a solution that can stitch legacy, or future SDN-type, multicast technology to a BIER core. So this is one step, going further into the MPLS domain. One thing I want to make very clear, right off the bat, is that we are not trying to propose to have mLDP neighboring through the BIER; we're not trying to send mLDP signaling through the BIER core.
R: No, we left the diagrams in just for eye candy, so I'll just go to the text, that's all right. So the proposal here is very similar to the PIM signaling. Basically, the biggest point here is that we need to have a label in the BIER domain that represents a PMSI, or a point-to-multipoint LSP; call it the BIER domain tree label, the BDTL, if you want. That label has to be actually...
R
What's
assigned
has
to
be
assigned
by
the
ebbr
and
the
ebbr
at
the
same
concept
as
the
pin
signaling,
it's
it's
the
router
beer
router
that
is
closest
to
the
source.
So
when
the
ebbr
assigns
that
label
for
a
point-to-point,
a
multi
point
LSP
or
for
a
pimsy,
then
it
need
to
advertise
it
to
the
IB
BRS
routers
B
routers
closest
to
the
leaf
to
actually
say
that
a
stitch,
your
point-to-multipoint
nld
to
this
label.
So
it
can
be
unique
within
the
beer
domain
when
that
label
is
assigned
in
the
beer
domain.
R: ...that label needs to be signaled to the IBBRs in some way. What we chose here is BGP, MP-BGP. We could come up with a new attribute type and NLRI type for this specific purpose, and signal this specific label to all the IBBRs that need it. By the same token, the EBBR, just like in the PIM signaling, needs to track all the IBBRs that are advertising for that unique label.
R: Thank you. So, from the MPLS point of view, everything is happening on the boundary routers: the IBBR and EBBR are swapping, or stitching, whatever stitching means to the operator, whether it's a swap or a pop-and-push, between the mLDP label and the assigned BIER domain label. So that's all that is happening from the MPLS point of view. Now we need to somehow put that new MPLS packet on top of the BIER domain, so if we could go one slide forward, thank you.
R: So, from the data-path point of view, what will happen is that, from the source, the MPLS packets will get forwarded to the BFIR, or EBBR, for that matter. The BFIR will swap the label with the BIER tree label and, since the BFIR has tracked all the BFERs, it will push the BIER header on top, just like the PIM signaling, and forward it to the BFERs. The BFERs will pop the BIER header.
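A highly simplified model of the forwarding behavior just walked through: the ingress boundary router swaps the incoming mLDP label for the BIER-domain tree label and pushes a BIER header covering the tracked egresses, and each egress pops the BIER header. Label values, dictionary structures, and function names here are purely illustrative, not from the draft:

```python
# Toy data-path model of mLDP-to-BIER stitching as described above.

def ingress_stitch(packet, swap_table, tracked_bfers):
    """BFIR/IBBR: swap mLDP label -> tree label, push a BIER header."""
    tree_label = swap_table[packet["label"]]
    bitstring = 0
    for bfr_id in tracked_bfers:          # BFERs learned via the BGP signaling
        bitstring |= 1 << (bfr_id - 1)
    return {"bier_bitstring": bitstring, "label": tree_label,
            "payload": packet["payload"]}

def egress_pop(bier_packet):
    """BFER/EBBR: pop the BIER header, exposing the tree label + payload."""
    return {"label": bier_packet["label"], "payload": bier_packet["payload"]}

pkt = {"label": 100, "payload": "mcast-data"}   # incoming mLDP packet
swap_table = {100: 900}                          # mLDP label 100 -> tree label 900
out = ingress_stitch(pkt, swap_table, tracked_bfers={2, 5})
print(out["label"], bin(out["bier_bitstring"]))  # prints: 900 0b10010
print(egress_pop(out))                           # tree label and payload survive
```

The key property the sketch shows is that the core carries no per-tree state: the domain-wide tree label plus the bitstring in the BIER header are enough for every egress to continue the mLDP LSP.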
R: That's it.

B: Greg, just to be pedantic a bit: why "BIER domain tree label"?

R: Just a name we came up with; we are open to changing it.

B: Okay, I'll add that as an exercise for you, then. I'm trying to excise the term "tree" from people's discussion about BIER, okay, mostly because, mentally, we're always trying to map these (S,G) states into BIER, and that really pulls us off course. So if there's something specifically about a flow you're trying to label, fine; but if you're just saying this is domain-wide, then I'd say avoid "tree".
R
N
Okay, so there have been some discussions on the mailing list and offline. I just want to say again that mLDP over a core that does not itself run mLDP has already been specified in an RFC, where the border routers of the core run targeted LDP sessions to exchange labels for mLDP purposes. The same thing could be done here.
N
The only difference here is that, instead of, say, RSVP-TE P2MP in that core, we are using BIER, quote unquote. Here, just to recap some discussions we had yesterday: the concern you guys have with that solution is that the provider may not want to run an mLDP session over their core. But my comment is that the targeted session between the IBBR and EBBR is not through the core, it's over the core.
N
It is a targeted session between the IBBR and EBBR that you already run mLDP on, managed by the group that is responsible for mLDP. So to me I don't think that's a concern; maybe to some operators it is a concern, but I just want to point out that there's really not much difference between this and the existing solution. Yep.
R
Thank you. With regard to the label assignment you're absolutely right: in both ideas the upstream router will assign the label. The biggest difference here is how we signal it, right? You want to signal with the old technology, tLDP, that we are trying to deprecate in these cores.
N
J
Q
(Inaudible) from Nokia. So, as we talked about before: it doesn't have to be a problem, but it may be a problem. It all depends on how you're running your LDP sessions and how you divide debugging your network, and things like this. So if you want to get rid of various MPLS problems that you have in that core, and you don't want to deal with them, you don't want to debug them, then introducing tLDP sessions is not a solution for you. If you don't care, then tLDP is an alternative.
Q
So, I don't think we're arguing that this is the alternative; both are alternatives. To be honest, to me both are viable solutions, and depending on the actual operator and how they want to, in quotes, optimize their deployment, they may prefer this solution in one case and a more tLDP-based solution in another case.
Q
D
Dispassionate chair here, so I'm not leaning either way. I mean, I agree; I think it boils down to linkage, right: what does the operator want to link? When you link the signaling to the data plane, which is what you're doing here, in line, we don't have any provisions for, you know, precedence, control precedence. So that may be a good point to start scratching your head: what will happen when you pump a lot of data over this domain and at the same time you try to signal?
D
Yeah, it may not be a consideration, but you know that's something that comes up at high volumes when you start to signal; it may deserve a sentence saying, like, "we do not think that is a problem." And I don't see an obvious way, you know, how we could do that with BIER so people can reorder. I mean, you could always go over a different subdomain, use a signaling subdomain, and give it high preference on scheduling, right.
Q
Q
Q
If you don't do anything, you're potentially exposing, you know, control messages tunneled through to discards at the same level as data. But if you're implementing it like this, okay; you know what you have and how you use it. Yeah.
D
I mean, I also don't think there is any potential for major trouble that we cannot resolve here; I'm actually just laying out the landscape, right. But I think the biggest point would be tooling, right: what tooling do the operators have, and what are their preferred ways? Do I just look at my BIER BIFT and understand what's going on, or do I have all my tooling built, you know, across targeted LDP already and that's what I've got? Okay.
R
So, just on top of this: if you read the draft, there is another proposal with SDN too. It's the same idea, but now, instead of BGP advertising the labels to the IBBRs, the SDN controller will actually pull the label from the EBBR and then will download it via BGP SR-TE or something like that, right. So the thinking of the operators is going toward BGP because of the future SDN solutions, et cetera, and this is exactly why we kind of leaned toward BGP for the signaling.
R
That's number one. Number two is: no matter how we signal this, whether it's tLDP or whether it's BGP, we still need tracking of the IBBRs. RFC 7060 really doesn't explain how we need to track the IBBRs: just because there is a session, that doesn't mean there is a FEC for the mLDP that is advertised, and we need that to build the BIER header. The BIER header is really going to be built based on the FEC and the opaque value that is advertised from the IBBR to the EBBR.
R
So the majority of this draft is really about tracking that part of the equation: what is the point-to-multipoint LSP, what is the FEC and the opaque value, and from which IBBR it is coming. On the signaling point of view, again, I think you're right: there are multiple ways of skinning the cat, and that could be argued one way or another.
N
Tracking of the IBBRs by the EBBR is exactly similar to PIM signaling or to the in-band signaling, and we talked about how they're correlated with a FEC and other things. That's where this advantage of the existing solution comes in: it's really mLDP signaling; you already have everything.
Q
Just, it's a little bit, and I don't want to make it a religious argument, because this is where it's going, but it's a general philosophy question. Are you in a mode of introducing BIER into a network to create a domain and to simplify that part of the network? In which case, you know, this may be a step; this is almost an end game, and then you've simplified part of it and you're done. Or are you, and again, on a path where MPLS as a control
Q
plane, and I'm going to make a very controversial statement just for illustration purposes, please don't shoot: with MPLS, all I'm thinking about is removing it, not introducing it. If you're introducing something new, and with that I'm re-introducing what I'm trying to remove, that's not necessarily an attractive proposition if you do not run it today. So that, and again, different operators and different people will run different solutions.
N
So I am definitely for simplification and definitely for moving forward, and I just want to clarify that you are still using MPLS over that core: you have that tree label. It's really just that one label for that tunnel, and it's just the signaling side. You say that running that tLDP session over the core is a concern, but I really don't see it as a concern; I agree that it will be subject to the view of the particular operator.
R
B
R
Q
A
M
N
N
We know how to deal with P routers that are not BIER-capable, but we have always assumed that, you know, in a deployment where you want to run BIER, the edge routers will be BIER-capable. But what if some of the PEs in this deployment are not BIER-capable, at least during the transition phase, or a prolonged transition phase?
N
So let's say that the PE that is not BIER-capable is an egress PE. Now, an ingress PE that is capable of doing BIER would have to send traffic in two ways: one is via BIER to the PEs that are capable of BIER, and in the meantime via our traditional tunnels to those incapable PEs. That is complicated, and in some cases it's not efficient. But what if a BIER-incapable egress PE pretends that it supports BIER, so that we can just use BIER to it?
N
N
So how do we signal that? If a BIER-incapable router pretends that it supports BIER, it would just signal that with a BIER sub-TLV: all other things are the same as for a normal BIER router, but additionally it will tell the others, "when you send me BIER packets, please pop off the BIER header; I just want the payload directly." And doing so for MPLS encapsulation is simple.
N
N
N
There is a label in the payload that identifies which VPN it is for. Typically it is an upstream-assigned label, but when you pop the BIER header you no longer have the context for that upstream-assigned label. But we do have another proposal, independent of this, where that VPN label can come from a global block; we call it the domain-wide common block. It's very similar to the SRGB: every router will use the same label from the common block to identify the VPN on the PE.
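A minimal sketch of the domain-wide common block idea as described, analogous to an SRGB: every router derives the same label for a given VPN from a shared base plus a per-VPN index, so the label keeps its meaning even after the BIER header is popped. The base value, block size, and function names below are illustrative assumptions, not values from any draft.

```python
# Illustrative sketch of a domain-wide common block (DCB), SRGB-style:
# all routers share (base, size), so label = base + vpn_index is the same
# everywhere and stays meaningful even without upstream-assigned context.
# The numbers below are made-up examples, not values from any draft.

DCB_BASE = 100000   # assumed common block start
DCB_SIZE = 4096     # assumed common block size

def vpn_label(index):
    if not 0 <= index < DCB_SIZE:
        raise ValueError("index outside the common block")
    return DCB_BASE + index

def vpn_index(label):
    if not DCB_BASE <= label < DCB_BASE + DCB_SIZE:
        raise ValueError("label not from the common block")
    return label - DCB_BASE

print(vpn_label(17))        # 100017, the same on every router
print(vpn_index(100017))    # 17
```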
N
So then, if we use that kind of label, it will work. And if it's not a VPN case, if it's just the global table, then it also works: basically, the IP payload needs to be in the address space of the routing underneath. If the payload is VXLAN or NVGRE for a VPN purpose, it should also be fine. But then I realized that in the BIER-for-EVPN drafts, in the case of VXLAN and NVGRE there is actually no IP header; it's just the NVGRE or VXLAN header directly.
N
N
N
Basically, assisted replication: in the case of EVPN overlay, or for MVPN, there is this concept of virtual hub-and-spoke, and in EVPN there is also a hub-and-spoke concept. Basically you have hubs and spokes; the spokes send traffic to the hub, and the hub relays it to the other spokes. So if you make the hub a BIER-capable PE, then the incapable PE will send traffic to the hubs, or the assisted replicators, and they will relay.
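The hub-and-spoke relay just described can be sketched in a few lines: a BIER-incapable spoke unicasts to a BIER-capable hub (the assisted replicator), which replicates to every other spoke. The functions and PE names are invented for illustration; no draft defines this API.

```python
# Hedged sketch of the hub-and-spoke relay described above: an incapable
# spoke only unicasts to its hub; the hub replicates to every other spoke.
# Names here are illustrative, not from any draft.

def spoke_send(frame, hub):
    # the BIER-incapable spoke only knows how to unicast to its hub
    return [(hub, frame)]

def hub_relay(frame, source_spoke, all_spokes):
    # the hub (assisted replicator) relays to every spoke except the source
    return [(s, frame) for s in all_spokes if s != source_spoke]

deliveries = spoke_send("bum-frame", "hub1")
deliveries += hub_relay("bum-frame", "pe-spoke1",
                        ["pe-spoke1", "pe-spoke2", "pe-spoke3"])
print([d[0] for d in deliveries])   # ['hub1', 'pe-spoke2', 'pe-spoke3']
```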
N
K
D
Okay, so I published this stuff because there's a good amount of discussion now with people interested in deploying BIER, and most of the discussions are very much brownfield-deployment centered, right. One theme is: how do I mix my core with BIER and non-BIER routers? The other theme is: how do I upgrade my PEs? With big providers, PEs come in huge volume; there are tons of those. So what I put together is pretty much a migration framework that people can use to actually build a hybrid core. There are a couple of approaches; there's no judgment passed.
D
Okay, all of them have certain properties, and they depend, you know, on what the customers are comfortable with in terms of migration or rolling out BIER. I'm just laying that stuff out, and I think it is a helpful framework for discussions, right, when you structure a discussion with customers: how can I get BIER into my core without forklifting the whole thing?
D
Right, so greenfield deployments, you know, especially on a larger core, are really only wishful thinking. Yeah, so I think I already said why I brought this draft. Okay, next one please. Now, what I'm showing are three or four possibilities; that's pretty much an exhaustive list after having talked, you know, with people. And there was a lot of discussion as we were moving towards the RFC, lots of it about, you know, flexibility to actually allow brownfield, to roll in BIER, and the main split is:
D
D
What it boils down to is that you either have to go through something which cannot do the BIER forwarding operation, or you have to go around it, and, you know, that's your choice, what you're comfortable with. I do not talk about the BIER overlays, the overlays on top; those are all the things we have now, this MVPN and PIM signaling and so on. So I'm not talking about any of that stuff; the overlays do their own thing. Next one, please. Okay, so what
D
What do we have, really? Yeah, I should have made the font bigger; well, at least I got colors. So the green thing is: we have a controller-based solution, which is kind of its own axis, so I'll talk very quickly about that. Then we have three possibilities to play in the IGP. So in the strictest sense of the word, in the strictest sense of, you know, how we built this technology, we don't even need anything, because we have multi-
D
topology, right. Basically, if you're willing to deploy multi-topology, the tools are all there in a sense; RFC 8279 has a section that talks about this with certain properties. And we now have pretty much this BAR degree of freedom, which allows us to run, you know, a BIER-specific computation, taking into account all possible BIER metrics, and frankly we can signal more of them if we need them and use those. So those are pretty much, like, four different things you can talk about when you brownfield BIER.
D
Okay, next one please. So the first one is, and you shouldn't forget it, that the tools are here. If you are willing to deploy multi-topology, and some people are, then you really don't need to do anything extra to have a brownfield deployment, because you can confine the BIER-capable routers in their own multi-topology, and that has certain properties. So yes, multi-topology has been around and has been used for a couple of purposes. It is not, you know, like the hottest thing; a lot of people perceive it as complex.
D
They don't want to deploy it, but if you're willing to go there, it gives you certain properties which are desirable. So if you deploy multi-topology and your topology is not contiguous, you have to stitch these pieces together, and multi-topology will do it using, you know, some kind of tunneling which shows up as basically L3 entities. That allows the unicast and multicast on your network to diverge.
D
So you can contain your multicast in a different part of your network, or force it off certain links where you only want to have unicast. And that was one of the original drivers for building multi-topology, because people wanted a different RPF for multicast; they didn't want to be confined by unicast.
D
With what you'd call the attribute, you can assign certain interfaces on the router as being BIER-capable, and that can be desirable, right, because right now BIER is basically a router property: either the router does BIER forwarding or it doesn't. With multi-topology you can go towards a deployment where a line card is BIER-capable, or a certain set of interfaces is and the others are not, which again depends really on what the customer is building, what they're comfortable with, what kind of product mix they have.
D
What falls out is that in a multi-topology you run pretty much standard IGP computation, so you get all the protection, confined within the multi-topology; that all just falls out. So you get all the IGP computation benefits. And what is somewhat interesting is that if you run multiple subdomains, you can run them over different multi-topologies; don't forget, we have all that stuff very well laid out in our architecture.
D
You can share interfaces between multi-topologies, and those can have different metrics, so you can divert the traffic of two subdomains into two different parts of your network. But if one subdomain loses enough links, it may use the links of the other subdomain as protection. So there is now a multi-dimensional game that allows you a lot of flexibility, for the price of deploying the technology.
D
D
Tunneling technology, and, you know, SR is an example; LDP could be another. You don't need configuration, right: your network is magically tunneled from everywhere to everywhere you need it. The tunnels don't have to show up in the IGP; that's another advantage, right, so your IGP is not loaded by advertising all these L3 forwarding adjacencies, which you have with other approaches. Which, again, can be an advantage depending, you know, on how sensitive the customer is.
D
If he is deploying this kind of dynamic tunneling technology on his network, he may perceive it as, you know, the shortest path to the goodies: you get immediate full protection coverage on your network, right. So if you have nodes failing, the dynamic tunneling goes around that. Like I said, you don't have the forwarding adjacencies in the IGP; that can be an advantage depending on scale. At a certain point in time, just the amount of tunnels showing up in the IGP may be a disadvantage.
D
What you have, however, is that when your topology is changing, right, that influences how these dynamic tunnels are moving, so that may lead to your traffic moving; you have to recompute all the forwarding tables and re-download them. So the dynamicity is higher than when you have static tunnels, because these dynamic tunnels mostly just move with the network; they don't come with something like
D
OAM. OAM is something that doesn't necessarily come with the technology, so that's something to consider. If customers are comfortable with static tunnels, where they have, you know, kind of a lot of control, they can do 1+1 protection on tunnels and so on, and they have that set of requirements, then this dynamic tunneling may change the dynamics of what they're used to. And this approach also forces you to forward the BIER traffic along the unicast path, which may or may not be a consideration; it depends on the capacity planning.
D
Are they comfortable with the multicast showing up anywhere they were having the unicast load? But that's, again, a certain property. Okay, next one. Then the BAR is kind of, you know, the most flexible solution in a sense, because you can define any kind of algorithm and take either unicast properties or additional BIER properties into account, and you basically compute your specific BIER tree, right. And that removes the precondition to manually configure tunnels if you want to stitch the pieces of topology together. And you can play games like:
D
Look, for example, at capacity, your replication capacity, and look at things like the fan-out degree that you're comfortable with on a network; you could, you know, pin replication points. You can pretty much do anything you want: you can diverge the unicast and multicast paths where the traffic is going on your network, you can compute protections, right. So that's the option where you can go completely BIER-specific and be arbitrarily flexible, you know, at the price of dealing with specifying this stuff and deploying the technology. Next one please. And ultimately we also have this set:
D
These controllers, right. The only important thing, though, is "quis custodiet ipsos custodes," if you get the joke, right. But when you have the controller, and that's something that I was already trying to influence in the YANG model, you can push the BIRTs and you can push the BIFTs, and there is actually a degree of flexibility when you're thinking about that stuff.
D
You can have a solution where you just push the BIRTs and the router computes the BIFTs out of them, or you can push the whole BIFTs out of the controller, and there is a subtlety here that I talked about. So a controller, in terms of computation, can do even more, right, because the distributed computation still doesn't have, for example, synchronized clocks.
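For context, the BIRT-to-BIFT step the router performs in the first option can be sketched as below, following the generic BIER idea of OR-ing, per next-hop neighbor, the bits of all BFERs reached through that neighbor to get the forwarding bit mask. The dictionary layout is an illustrative assumption, not a YANG model.

```python
# Sketch of deriving a BIFT from a controller-pushed BIRT, per the generic
# BIER idea: for each next-hop neighbor, OR together the bits of all BFERs
# reachable through it to form the forwarding bit mask (F-BM).
# The data layout is illustrative; a real YANG model would differ.

def birt_to_bift(birt):
    """birt: {bfr_id: neighbor}; returns {bfr_id: (f_bm, neighbor)}."""
    fbm_per_nbr = {}
    for bfr_id, nbr in birt.items():
        fbm_per_nbr[nbr] = fbm_per_nbr.get(nbr, 0) | (1 << (bfr_id - 1))
    return {bfr_id: (fbm_per_nbr[nbr], nbr) for bfr_id, nbr in birt.items()}

birt = {1: "nbrA", 2: "nbrA", 3: "nbrB"}
bift = birt_to_bift(birt)
print(bift)   # {1: (3, 'nbrA'), 2: (3, 'nbrA'), 3: (4, 'nbrB')}
```

Pushing whole BIFTs from the controller instead would replace this local computation with a direct table download, which is the trade-off discussed next.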
D
If you have something, and those things are being done, where your multicast forwarding is dependent on the time of day, a controller can do that for you, right. So, if you can predict your multicast load very well and you have a lot of it, you may actually change your multicast distribution on your network depending on the time of day, and the controller solution can basically download the tables.
D
So pretty much anything can be taken into account in the computation. But because the controller is, you know, farther away from the nodes, it's not really a distributed solution, you have a couple of things that you need to consider. One is that your failover time is, you know, not that dynamic, so you may want to pre-compute protection. When you download the tables, you run into the problem that you need some kind of synchronization, with everybody flipping over to the new table right at the same time.
D
Instead of listening to the controller... yes, I mean, the protection can be computed on the controller and pushed down as well, which you probably want to do, because on link failures and such things you don't have this immediate distributed computation kicking in. Next one. But I think that's it, right. So I think that's helpful as a framework to guide a brownfield approach, you know, when customers are thinking "how do I roll out BIER," and, you know, they pick and choose what they are comfortable with.
B
Let me just add to this: some of this work came from discussions out of London. We saw a lot of ideas coming out; the intent was to have some guidance from the working group so that, as stuff comes up, people go to one location and see how we get there, because oftentimes these solutions stand out as solutions and not as a path, you know, a migration path. And someone...
Q
I'm (inaudible), by the way. So I applaud you for this; this is exactly what we need, and the customers that we talk to need this kind of help. Yes, so thank you. I didn't even know this existed; I haven't read it; I will do so this week. At the outset you mentioned that overlays, like MVPNs, are out of scope of this document, and that's fine. I'm just thinking ahead, because there are some documents that Jingrong and others, and I can count myself, have been working on.
D
Absolutely. I mean, we are rolling out a transformational technology, and we start to produce too many drafts for the architects to bother with; but generally the IETF is swamped with drafts, like 8000-something: read 3000 drafts and you understand what we're talking about. That is not helpful in transforming those networks. We're dealing with people who are decision makers, who have to, relatively quickly, you know, cut and run, pass judgment. And we need, for example for the overlay: well, I want my sections; I'm running these services on my network; okay, how do I transform this stuff?
D
What are my options, right? And I want it in a nice sound bite, and then please give me all the gory details of how you cook the sausage, and the two options, from tLDP on, right. But I want to understand: how do I transform my service from here to there? What is the technological cost? What does it buy me? So I think something like that for the overlay could be superbly helpful. We have to produce marketing material now; yeah, sorry, you know, the techie bits are cool.
D
B
D
From my experience talking to customers, I would encourage separating the two; those are two different crowds you talk to: you talk to the PE crowd running the services, and you talk to the guys running the core, and those are mostly separate sets of discussions. I'm not religious; we can bundle it all...
D
That becomes a problem in itself, right: you throw some 170 pages at them, you'd better have a structure where they can really pick and choose very quickly, which can be done if we have the discipline for it; but if it just starts to ramble and cross-reference everything, it may not be very helpful. Okay, right. And again, this is very non-judgmental, right: we give people these things to pick and choose from, and they will judge by themselves and find what is acceptable; they write the checks.
D
The correct set of products will be built, right. Religious wars, like "mine's better than yours," are not helpful. Now we have the technology out there; it offers certain properties, and we have to make the path for people to move to the technology non-judgmental and simple, easily consumable. Yeah, it's very important at this junction, I think.
I
B
D
Observe that I was very careful not to talk about advantages or disadvantages. Those are just properties, always: what you get when you pick it up. In terms of requirements, the properties you get out of it may be important or not important to you, and the complexity that it brings with it may be acceptable or unacceptable for you, which is all fine. People will write checks, and that will, at the end, define, you know, a small set of products, without...
D
Ultimately it will be done, but at this point in time, going on religious preference, on what we think is, like, the coolest technology, will not help us at all. All the technology has to work; the new-and-shiny is gone. Now we have to make it a value proposition. Okay.
I
D
B
Just to apologize up front: were you in London when these issues came up? Because, okay, you might not have been. What happened was, as fallout from the room, a bunch of us got together, cooked this up together on a call, and this draft came out of it, right. So if you're going to do the same thing for an overlay, I encourage you to reach out to the people who have documents that are, you know, in the same ballpark, so we can all kind of work together on what the documents would look like.
B
D
B
Y
Y
Y
Y
Y
Y
N
Y
Y
Y
N
That SR labels take... the existing BIER solution does not have a problem with that; I don't think the existing BIER solution has a problem with that. Yes, we may need more offline discussion on this; I don't know if we have enough time if we try to figure this out here, so maybe I'll talk to you offline. And so far I don't see why you need that, but let's leave it there. I can.
B
I
I
Currently there's an option within BIER for having entropy available to be used for load balancing. It's used in a variety of other protocols for load balancing, for more deterministic load balancing, and you're able, in an ECMP environment, to take certain fields of a packet, create an entropy label, and do load balancing on that. We talked a little bit about data center Clos networks; I don't need to go into too much detail on that.
I
There are hash-function inefficiencies, and you could have flow collisions where flows get placed on one path over the other. Isolating faults can be difficult in existing environments, and it's non-trivial. So BIER can be deployed in multi-tiered data center networks; and again, when I mentioned this in MBONED I assumed that today there was no BIER in data center networks, but Greg corrected me: he thinks that there are deployments of BIER in data center networks, so it would be good to get some more information about that.
I
I
I
So, just real quickly again, this kind of gives you an overview; you can look at this later if you want. But there are multiple stages: there are three-stage and five-stage Clos-type networks, there are northbound stages and southbound stages, and there's rich ECMP, and there are problems, as I mentioned, with elephant flows skewing the placement of flows, and path divisions when you've got different set identifiers. Next slide.
I
So yeah, as I mentioned, you can use different bits of the 20-bit BIER entropy field to represent different paths, and it's similar to what is happening in other drafts, including the Spring entropy label for multistage ECMP; we're doing it by breaking up the 20-bit entropy BIER field. So, as far as we can tell, there's nothing that we're adding to any of the BIER specs; we're just utilizing them. It's an application of what's already been specified. Next slide, for local convergence; again, this is very similar.
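The bit-splitting idea just described can be sketched roughly as follows: carve the 20-bit entropy field into per-stage subfields so each Clos stage reads its own bits to pick a next hop deterministically. The 5-bits-per-stage split and the function names are illustrative assumptions, not the draft's layout.

```python
# Rough sketch of splitting the 20-bit BIER entropy field into per-stage
# subfields for deterministic multistage ECMP path selection, as described.
# The 5-bits-per-stage layout is an assumed example, not the draft's layout.

STAGE_BITS = 5                      # assumed bits reserved per Clos stage

def stage_value(entropy, stage):
    """Extract the subfield this stage uses from the 20-bit entropy value."""
    assert 0 <= entropy < (1 << 20), "entropy field is 20 bits"
    return (entropy >> (stage * STAGE_BITS)) & ((1 << STAGE_BITS) - 1)

def pick_next_hop(entropy, stage, next_hops):
    # deterministic: the same entropy value always selects the same path
    return next_hops[stage_value(entropy, stage) % len(next_hops)]

entropy = 0b10001_00010_00011_00100              # four 5-bit stage subfields
print(stage_value(entropy, 0))                   # 4 (lowest 5 bits)
print(pick_next_hop(entropy, 1, ["up1", "up2", "up3"]))   # up1
```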
I
This just explains what happens when you have convergence and down links. Next slide: this is the forwarding procedure. We do feel that it's easier and more deterministic than the existing ECMP hashing function. So with this, we're hopeful that this is a useful document on its own; maybe it could be part of a larger document, I'm not sure, but because it's more of a deployment document, we think that this is where it should be, and it could be something that can be implemented; many of you are already there. Any questions about that?
N
W
Yeah, so I'm certainly sympathetic to wanting to have deterministic hashing, but one of the things that's critical in how hashing functions are used is putting in salt, so that basically all flows coming in an interface don't end up deterministically hashing to the same path, right. You get diversity of the traffic across the different ECMP next hops, not just at this layer, but at the next one, and the next one.
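The salting concern can be illustrated with a toy sketch: without a per-router salt, every stage hashes the same flow key to the same index, which is the polarization being warned about; mixing in a per-router salt de-correlates the per-stage choices. The hash construction below is deliberately simplistic, only meant to show the effect.

```python
# Toy illustration of the salting point: without a per-router salt, every
# stage hashes the same flow key to the same index, so traffic polarizes;
# a per-router salt de-correlates the per-stage choices.
# This is a deliberately simple demonstration, not a production hash.

import hashlib

def pick(flow_key, n_paths, salt=""):
    digest = hashlib.sha256((salt + flow_key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

flow = "10.0.0.1->10.0.0.2:443"
unsalted = [pick(flow, 4) for _ in ("r1", "r2", "r3")]
salted = [pick(flow, 4, salt=r) for r in ("r1", "r2", "r3")]
print(unsalted)   # the same index at every stage: polarization
print(salted)     # per-router salts break the correlation
```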
W
If you don't include salt in your hashing, eventually what you end up with, if, for instance, you took this as described: I know it's a three-stage, you know, five-stage Clos, that's nice, but still, at the first hop you'd say, oh, here, all this traffic is going across next hop one at the next stage.
W
W
Look at what the model and the system behavior for the flowing traffic is, consider how you're applying salt to it, and think about the problems. There's a reason that it's hard to pull the information about how traffic is hashed out of routers. And this is... I totally get the motivation, totally understand; really, it would be lovely to make it easier to have the determinism, right, but you're going to get strange system effects if you don't think about it really carefully. But, and I think there's...
I
I
H
I
D
I
B
Thank you for asking; that was the question I was going to bring up next, actually. Unfortunately we've got a chair here, so I remember this conversation and what came up there, which I think we should consider in the room, and I'm not going to push the issue: if this only affects layer 4 of a BIER packet, it probably shouldn't take place here; but hashing the entropy field does point to a forwarding table, so I think the work should be here. That's what I feel, but let's see what the room thinks about this.
F
I
B
So there's the question: who's read the draft, and who thinks this is work, this draft, that we should pick up in the working group? All the hands pretty much stayed up; I thought I saw one go down. Okay. Let me comment on those noises: I would prefer the conversation to be taking place in a structure like this. Whether or not it moves forward, that's still how the working group works. If the conversation is not taking place in a document, with people, then things get... well, yeah. Okay.
F
B
F
B
F
I
I'll be very brief. So this is what I was referring to when Tony was talking about that framework document. I haven't talked to the authors, but I'm thinking that maybe this could be a candidate there. So we just wanted to take two minutes: Jingrong has presented this a few times; he's talked to many of you in great detail about this document and got a lot of good comments.
I
D
From my side, I would discourage trying to fix, like, hybrid deployments by using overlay techniques; the layers just start to bleed into one another, and we end up with hacks piling on top of hacks. Okay, we have two overlays, and for one of them the big selling point is that it's just another tunnel and your overlay signaling doesn't change; that's a big selling point. And in the underlay we have enough techniques to deal with this stuff; otherwise we'll have crossed layers.
D
I
I
B
R
R
A
B
Y
Y
Y
BIER can run on various links, and it can run over unicast SRH paths. So the problem can be divided into two parts. First, the basic problem is: where do we put the BIER header? We have two options. Option one is to invent a new IPv6 extension header, but this is not recommended in IPv6. Option two is to reuse an existing IPv6 extension header. And the next question is how BIER can run over SRH, so we'll look at the IPv6 options.
Y
Why the IPv6 option? The logic is that SRH is one type of routing header, and the question is whether the destination options header is the right place to put the BIER header. The destination options header is examined only by the final destination, not by every destination along the SRH path. And another consideration is the upper-layer header.
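To make the placement question concrete, here is a stdlib-only sketch that packs a BIER header as an option TLV inside an IPv6 Destination Options extension header, following the RFC 8200 format. The option type value is invented for illustration — no such code point is allocated — and the padding follows the standard Pad1/PadN rules.

```python
import struct

HYPOTHETICAL_BIER_OPT_TYPE = 0x3E  # illustrative only; not an allocated code point

def dest_opts_with_bier(next_header: int, bier_header: bytes) -> bytes:
    """Build an IPv6 Destination Options extension header (RFC 8200)
    whose single option carries a BIER header as opaque TLV data."""
    tlv = struct.pack("!BB", HYPOTHETICAL_BIER_OPT_TYPE, len(bier_header)) + bier_header
    # The whole extension header must be a multiple of 8 octets; pad
    # with Pad1 (a lone zero byte) or PadN (type 1, length N-2, zeros).
    body_len = 2 + len(tlv)              # 2 fixed bytes: Next Header, Hdr Ext Len
    pad = (-body_len) % 8
    if pad == 1:
        tlv += b"\x00"
    elif pad > 1:
        tlv += struct.pack("!BB", 1, pad - 2) + b"\x00" * (pad - 2)
    hdr_ext_len = (2 + len(tlv)) // 8 - 1  # in 8-octet units, excluding the first 8
    return struct.pack("!BB", next_header, hdr_ext_len) + tlv
```

The speaker's objection survives the sketch: a header built this way is examined only at the final destination, which sits awkwardly with hop-by-hop BIER replication.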
N
Okay, so I think you were saying that when you use this encapsulation, a transit node cannot tunnel the BIER packet — did I read that wrong or not? Yes? This is the part of this that I don't understand: why you cannot tunnel your BIER packets there. Okay.
D
We carry L2 frames over a lot of different tunnels. With BIER you really have two problems if you want to tunnel it: you have an L2 frame, and there are tons of tunneling techniques for L2 frames. The only problem is that the tunnel has to indicate the payload type, and that's not in the scope of this group. Okay.
W
Alia Atlas. So, ethertype is very commonly used in a number of overlay tunneling protocols in order to describe the payload that is included; that is part of why we decided to use ethertype here. If you take a look, for instance, at the Geneve encapsulation that is being standardized by NVO3, it includes a protocol type, and that protocol type is an ethertype field. This exists in a number of different tunneling headers specifically to have some consistency and to make it easy to include an arbitrary payload inside a packet, inside a tunnel. The fact that it's called "ethertype" is a fascinating detail of history, and it means that we have a very collegial relationship with the IEEE to get allocations of them, but it has more general use. You should take a look at where ethertype is used in the different RFCs and the different tunneling protocols you care about, to see how you can play with it. Okay.
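As a concrete illustration of the ethertype-as-payload-selector pattern, a small sketch of demultiplexing on a tunnel header's protocol-type field. The handler table is mine; the code points are well-known ethertypes (0xAB37 is the value RFC 8296 records for BIER).

```python
# Demux a tunnel payload on an ethertype-style protocol field, as in
# Geneve's Protocol Type. Handler names are illustrative stand-ins.
ETHERTYPE_HANDLERS = {
    0x0800: "ipv4",
    0x86DD: "ipv6",
    0xAB37: "bier",   # ethertype recorded for BIER in RFC 8296
}

def demux(pkt: bytes, proto_offset: int) -> str:
    """Read the 2-byte protocol-type field and dispatch on it."""
    ethertype = int.from_bytes(pkt[proto_offset:proto_offset + 2], "big")
    return ETHERTYPE_HANDLERS.get(ethertype, "drop")
```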
Y
Okay. If we don't use this way — encapsulating the BIER header in the IPv6 destination options header — you could say we use the ethertype scheme with IPv6. But we also use a multicast address as the destination address, and a multicast destination address together with a destination options header would violate the BIER hop-by-hop replication.
Y
Okay, this is a suggestion to update the current BIER MVPN I-D. What is the problem? The current BIER MVPN I-D only supports explicit tracking via Leaf A-D routes; the problem has been stated in that I-D as well, but I want to add another point that I think is very important: the join latency.

When Leaf A-D based explicit tracking is used, it is very inefficient, because we first need to send the C-multicast joins upstream one by one; then the egress PE sends the C-multicast join to the ingress, then the ingress PE advertises the S-PMSI A-D route, then the egress PE responds with the Leaf A-D route, and only then can the ingress PE complete the explicit tracking. So the join latency is very bad. But BIER already provides a very good basis for RPF-based explicit tracking.

We can use this RPF-based explicit record to reduce the join latency greatly. What it takes is just forwarding based on a unicast IP lookup instead of the BIER BIFT lookup described in the current BIER MVPN I-D. So the benefit is significant and the change is little; we don't require any IANA code point.
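The multi-exchange flow described above (C-multicast join, S-PMSI A-D advertisement, Leaf A-D response) can be caricatured as a sum of sequential control-plane round trips. A toy model — the message names come from the discussion, the delays are made up:

```python
def join_latency_ms(exchanges, rtt_ms=50):
    """Each sequential control-plane exchange costs one round trip."""
    return len(exchanges) * rtt_ms

# Leaf A-D style explicit tracking needs several dependent exchanges...
mvpn_style = ["C-multicast join", "S-PMSI A-D advertisement", "Leaf A-D response"]
# ...while an RPF-directed explicit join is a single upstream message.
single_message = ["RPF-directed explicit join"]
```

With these invented numbers, three dependent exchanges at 50 ms each triple the join latency of a single upstream message — which is the speaker's qualitative point.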
N
Comments on that. First: resorting to a unicast IP lookup makes the problem worse, because you will have more state. You were saving all that state, and by introducing the IP state in every router you make it worse. The second comment is that this is not specific to BIER, and this is not specific to explicit tracking: even if you're just using ingress replication, you have the same problem. So this is not really specific to BIER.
N
Remember — I even agree that if you don't want to do per-flow load balancing, then you can use an IP lookup; I acknowledge that. But realize that by doing so you are making the problem worse. Before, you were using a per-flow label in the MPLS table; now, in each router, if you have per-(S,G) state, it's a scaling problem in the control plane, and in the forwarding path it's even worse, but...
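The scaling objection can be put in back-of-envelope terms: per-(S,G) state grows with the number of flows at every transit router, while a BIFT holds one entry per BFR regardless of flow count. All figures below are invented for illustration.

```python
def per_sg_state(num_flows: int, num_transit_routers: int) -> int:
    """Total (S,G) forwarding entries across transit routers."""
    return num_flows * num_transit_routers

def bift_state(num_bfrs: int, num_transit_routers: int) -> int:
    """Total BIFT entries: one per BFR at each transit router."""
    return num_bfrs * num_transit_routers
```

With, say, 10,000 flows over 100 transit routers, per-flow state is a million entries, versus 25,600 BIFT entries for a 256-BFR domain on the same routers.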