From YouTube: IETF105-LSR-20190722-1330
Description
LSR meeting session at IETF105
2019/07/22 1330
https://datatracker.ietf.org/meeting/105/proceedings/
Next slide. We haven't had any RFCs. I'm gonna go kind of quick, to leave room for the presentations we've got. A note from the RFC editor: all the segment routing drafts that were waiting on MISSREF references are going to be published very soon. I was hoping it would be before this meeting, but it's not, so... Well, it'll be good to get these all published.
C
Okay, and in AD review we still have this H-bit support. Anyway, we're trying to get the authors, the primary authors, to finish the response; there was some confusion about Alvaro's comments. Pending: the YANG data model for the OSPF protocol. We finished the AD comments; we've had an IETF-wide last call. There have been a few more comments on the list and a few more Directorate reviews. We were...
C
We have completed working group last call on these. We have to write up the Shepherd's reports, the two of them, and, yeah — let me say one more thing about these. So on these documents, you know, we spent a lot of time in past years, and so we've got to capture that discussion, in summary, in the Shepherd's reports and send it to the ADs.
A
Yeah, so we had a process fubar, snafu, whatever, on the YANG model for IS-IS, I think. If you remember, a meeting or two ago we decided to do, like, OSPF first, and then get the comments, capture the comments, and make adjustments to the IS-IS one. But then the signals sort of got crossed on that. I don't know where exactly those signals got crossed; it might have been at AD level, might have been at working group level.
A
So the ADs were thinking that we had done a working group last call on both and submitted the IS-IS one, but, process-wise, we have not actually working-group-last-called the IS-IS YANG model. So we are currently receiving AD reviews on the IS-IS YANG model and, for process's sake, we will be working-group-last-calling this document as well, starting, like, today. So this is first notice of the working group last call on the IS-IS YANG model; we'll send a note to the list.
C
Okay, we've got both of these entropy label capability drafts. We revised these to advertise per-prefix capability, and thanks to Peter... I did the OSPF one; I don't know if he did the IS-IS one too, but they both reflect it — or if Stefan did one of them, I don't remember. Yes, definitely one!
A
Peter did one, and now we're ready to working group last call them. The one thing we do need are BGP-LS code points, rather than having a separate IDR draft.
C
The flex algorithm is going to be covered today; same with these YANG models. And these ones are... let's see, everything's covered today. There hasn't been any activity on the IS-IS routing for spine-leaf topology; I'll check on that with the author, see where they're going. Not too much on... we need more discussion on the prefix originator, and there's been some discussion on the PCE security capability; I don't think it's ready for working group last call yet, though. And that's it.
I
Okay, we also had an interim meeting. We had many discussions during that meeting. The outcomes of that are pretty straightforward: we're going to advertise which links are in the flooding topology. There are going to be bits in the link attributes sub-TLV, and the link attributes TLV for OSPF, and this is equivalent to the FT bit that came from the CC LSR flooding draft. Okay, small bug fix in the flooding request TLV: we listed a field as the circuit-types field; this caused some implementation confusion.
I
Okay! Next slide, please. Let's see: we did a small change to the area router ID TLV, to try to compress it a little bit; that was a small improvement in density. We had a little language clarification: if the area leader advertises a flooding topology, then dynamic flooding is active; if the area leader does not advertise a flooding topology, then dynamic flooding is disabled and legacy flooding should be used.
I
Okay, that's probably obvious, but we just wanted to be explicit and clarify that. One addition: if you have an area, you may find that you want to have a backup area leader. This is not explicitly denoted by anything; it's obvious from the area leader election who the second node would be, and we explicitly allow that backup area leader to also advertise a flooding topology.
I
This helps against the case where there is a failure with the area leader; it simplifies and smooths out the transition between the two flooding topologies. And, let's see, we welcome Huaimo Chen as a co-author to the draft: the FT bit and some of the other stuff that we took from his draft is now incorporated, so we've added him as a co-author.
I
One thing I forgot to put on the slides: we did do some work on a YANG model for this. We did not get anything ready to present in time for this IETF, our apologies. We are looking for more volunteers who want to work on YANG; please see me if you are interested. We think that this draft is now pretty well cooked.
K
So, for a flooding topology: the number of connections of a node is the degree of that node. For example, for this flooding topology, R1 has two connections on the flooding topology, so R1's degree is two, while R0 has three connections on the flooding topology, so the degree of R0 is three. So for each node we have a degree on the flooding topology, and the degree of the flooding topology itself is the maximum degree of all of its nodes.
K
So the basic idea of the flooding topology computation is that we start from the node which has the smallest node ID — I denote it as the root — and then we build a tree using this root with a breadth-first strategy, and then we connect every node whose degree is one to another node on the flooding topology, so that we have a flooding topology in which every node connects at least twice.
K
So here we just take one simple example of the algorithm computation, and we go through the detailed steps just to get some idea about the details of the algorithm. So we consider five nodes, shown in black, and then, based on this topology, we use our algorithm to compute a flooding topology. Initially we start with the candidate queue, CQ for short; the CQ contains one node, the root R0.
K
R0 starts with degree zero and no previous hops on the flooding topology, and then we have a maximum degree, given as a constraint; it is three. So the first step is that we just remove the root R0 from the candidate queue and then put R0 into the flooding topology, and then, for each node connected with R0...
K
...we add those nodes into the CQ. So in this topology we add R1, R2, R3 and R4 into the candidate queue. In step two we remove the first node, which is R1. R1's previous hop is R0, and R0's degree is zero, so we can add the link R1–R0 into the flooding topology. After we add the link R1–R0 into the flooding topology, we update the candidate list for each node connected to R1.
K
So in this case we update the previous hops of R3 and R2 in the CQ, and then the last one, in step three, is that we can add the R3–R0 link into the flooding topology, because right now R0's degree is two, which is still less than the maximum degree we were given, which is three. So we add the R3–R0 link into the topology.
K
So after this one we also update the candidate queue, and then we come to step four. At step four we remove R4 from the CQ. Right now R4 has previous hops R0, R1, R2 and so on. Because R0's degree is already three, we cannot add the link R4–R0; but we have previous hop R1, whose degree is one, so we can add the link between R4 and R1 into the flooding topology. So we add this link into the flooding topology.
K
So in this phase we just connect every node whose degree is one to another node, and then we are at step five. The first node we see with degree one is R2, so we will find a link between R2 and another node. So right now, what happens is that the link R2–R3 will be added to the flooding topology, because R3 has the minimum degree.
K
So we add the link R2–R3 into the flooding topology. After this step five, we only care about R4; R4 still has degree one. So in this case we will find the link R4–R2, because R2 has the minimum degree and also the minimum node ID. After this step we have a flooding topology whose degree is three, and every node is connected to at least two nodes on the flooding topology.
K
So, next page. Previously we just went through an example, to get some idea of the details of the algorithm; here is a somewhat more formal description of the algorithm. The algorithm starts from a node R0 as the root, and this root will have the minimum, the smallest, node ID. We are given a maximum degree, and then we have a candidate...
K
...queue. Initially the candidate queue contains only R0, as the root, and we start with the empty flooding topology. Initially the degree of R0 is 0, and the previous hops of R0 are empty. So, next page.
K
So from that point we just go through several steps in this algorithm. Basically, step one is that we find and remove an element from the candidate queue that satisfies some conditions, such as: it should have one of its previous hops whose degree is less than the maximum degree. So we find that element and then remove it from the candidate list.
K
Then we add that one into the flooding topology, and then we extend the candidate queue, as in step one. Step two gives more details about how we update the candidate queue: the node we removed from the candidate queue is no longer in the candidate queue, and then we add its neighbor nodes into the candidate queue, if they are not already in the candidate queue, and we also update their previous hops.
K
So that's the way we update the candidate queue. When we have added all the nodes to the flooding topology, we go to the last step, which is to connect every node whose degree is one to another node, so that we have a flooding topology in which every node has at least two connections on the flooding topology. And then we are done. So, next page.
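The walkthrough above (root selection by smallest node ID, breadth-first growth under a maximum-degree constraint, then a final pass connecting degree-1 nodes to a second node) can be sketched in code. This is my own illustrative reconstruction of the algorithm as described in the session, not the draft's normative pseudocode; all function and variable names here are mine.

```python
from collections import deque

def compute_flooding_topology(graph, max_degree=3):
    """graph: dict node_id -> set of neighbor node_ids. Returns a set of links."""
    root = min(graph)                       # smallest node ID is the root
    degree = {n: 0 for n in graph}
    prev_hops = {n: [] for n in graph}      # candidate attachment points
    ft = set()                              # links on the flooding topology
    in_ft = {root}
    cq = deque()                            # candidate queue, seeded from root
    for nbr in sorted(graph[root]):
        prev_hops[nbr].append(root)
        cq.append(nbr)

    stalls = 0
    while cq and stalls <= len(cq):
        node = cq.popleft()
        # Step 1: attach via a previous hop whose degree is under the limit.
        hop = next((p for p in prev_hops[node] if degree[p] < max_degree), None)
        if hop is None:                     # all attachment points saturated
            cq.append(node); stalls += 1; continue
        stalls = 0
        ft.add(frozenset((node, hop)))
        degree[node] += 1; degree[hop] += 1
        in_ft.add(node)
        # Step 2: extend the candidate queue with this node's neighbors.
        for nbr in sorted(graph[node]):
            if nbr not in in_ft:
                prev_hops[nbr].append(node)
                if nbr not in cq:
                    cq.append(nbr)

    # Last step: connect every degree-1 node to a second node, preferring
    # the neighbor with minimum degree, ties broken by smallest node ID.
    for node in sorted(graph):
        if degree[node] == 1:
            cands = [n for n in graph[node]
                     if frozenset((node, n)) not in ft and degree[n] < max_degree]
            if cands:
                best = min(cands, key=lambda n: (degree[n], n))
                ft.add(frozenset((node, best)))
                degree[node] += 1; degree[best] += 1
    return ft
```

On a small dense topology this yields a flooding topology where every node has at least two connections and no node exceeds the degree bound, matching the properties claimed in the presentation.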
C
I think there might be IPR on this, you know — at some point, from way back. A lot of people have said that we should do it first based on, you know, the degree, and now this actually formalizes, with the algorithm, doing it based on degree. So I'm wondering if anybody knows of IPR on this. Yes?
C
The other question is: I read it, and I think you need to describe the usage of previous/next hop better in the draft. Maybe even have some of these pictures, if you could. I mean, I know it's hard with ASCII art, but it's hard to understand it — and maybe I didn't put enough time into it, but I understood it better this time, with the pictures, and I didn't when I first read it. That's my comment.
K
I
Tony Li, Arista. When we first established the dynamic flooding draft, we didn't issue an IPR statement. I'm not a lawyer; I can't tell you whether it covers this or not, okay. But there is now a statement in play; now everybody's got IPR on dynamic flooding, so I'm not trying to block anything — I'm just suggesting you have your lawyers go look at the patent application.
I
Okay. Well, thank you very much for the presentation. Myself and my colleagues looked at your draft very carefully and scratched our heads, and we were very disappointed that you are not here to discuss it with us. We really want to understand this, and we're not quite there yet. Is the intent here really for the primary crux of the algorithm to be breadth-first search?
A
I love ASCII art too, though. So, like, if you can't manage to do the topology, or it's not clear and it would be much clearer with graphics, then, yeah, you do have the ability to use SVG in RFCs. I don't know if that'll be required or not, but the takeaway, I think, is: yeah, it would be great if you could put some of the things you have in this presentation, and maybe even some more pictures — they help a lot. I think then we would call for adoption.
A
Let's do another round on this. I don't think anybody has any objections to this — you know, it's looking good — but there are some open questions. Let's work on that and make sure that we address the questions, like the diameter and stuff like that, and, as long as nothing pops out that looks strange, I don't see why the working group wouldn't want to adopt something like this. Okay, so we're going to move to the next presentation.
A
So we have a proposal about flooding speed advertisement that was put forward by Bruno in a draft, and somehow we got a rebuttal scheduled for it. This is sort of interesting to me, because it happened to me the first time that I presented in IS-IS, about twenty years ago or something: I got a surprise presentation that came up right after me — I think Tony Li had Henk Smit come up and tell everyone why the thing I just presented was useless.
A
So I just realized that this was a little payback, I guess — but it was totally not; I didn't realize that there hadn't been anything discussed between the two groups before this presentation was put on there. Anyway, nothing is being decided here, right? This is just a presentation. There will be a presentation following Bruno's that is sort of addressing and disagreeing with some of the points you're about to see.
A
But again, nothing is being decided here. And Bruno sent us something — he's not had a chance to really go over that presentation, so he felt a little bit like, "hey, I would have liked to have seen this before I presented, so I could have maybe addressed these points." I'm sympathetic to that; like I said, twenty years ago the same thing happened to me and I didn't like it.
O
Okay, so hello, I'm Bruno, from Orange. I'm going to talk about a draft to improve flooding speed. So, just to clarify: it's just about flooding speed between two neighbors; it's not competing with proposals reducing the size of the flooding topology or graph. So, next one, please. As an introduction — it's probably well known to everyone, but still — flooding is very important for link-state IGPs.
O
You want flooding to be fast, to have your databases in sync, especially in case of a single node failure — which is just one failure, but for the link-state IGP it translates into multiple messages; it could be N, could be 10 or 20, depending on your topology. So you need to be able to flood those 10 or 20 LSPDU messages quite fast, just for a single node failure.
O
Also, you need to not overload your neighbor, because flooding is hop by hop: you're just sending messages to your neighbor, so it's between two adjacent neighbors. Next slide, please. So what is the status in IS-IS? We don't have any signaling to control the flooding speed; that's the current status in terms of implementation.
O
So, next one, please. The proposition is very simple: it's just one TLV, to be able to advertise your own capability to receive LSPDU messages to your upstream neighbors. So I advertise: "I am capable of receiving IS-IS LSPDUs up to a certain speed," and I advertise that capability to my upstream routing neighbor. Why, again? Because we are just discussing between adjacent neighbors.
O
So we have two parameters. Next, please. The first parameter is called minimum interface LSP transmission interval. The name is long, but it's only the delay, in milliseconds, between two consecutive LSPDUs. So it's nothing new: it's a parameter which is available in all existing implementations. Sometimes it's called lsp-interval, sometimes LSP pacing interval, sometimes lsp-tx-interval or something. So nothing new in terms of flooding, and that parameter is akin to a CPU or processing performance.
O
So basically you commit that you're capable of, on average, processing some number of LSPDUs per second. So, next one, please. The second parameter is the ability to send a burst of consecutive LSPDUs. So you say: "okay, I agree that you can send me a burst of LSPDUs back-to-back, if you want, up to N LSPDUs." It's less popular in terms of implementation, but there are implementations.
O
So that's all — two parameters in one TLV. There is a small refinement: actually, for fast flooding, we say it's the number of unacknowledged LSPDUs, not only the number of LSPDUs that you can send in a burst. So, next one. With that precision we can go even further, toward flow control, because IS-IS does have a mechanism to acknowledge the reception of LSPDUs...
O
...basically using a PSNP on point-to-point interfaces. So IS-IS can already acknowledge the reception of each LSPDU. With that, we have a dynamic flow control, if you want, with two parameters: one which is the size of the burst — so the memory that you can have, which is a static transmission window (remember "static transmission window"; we will come back to it in a few minutes) — and then we have the CSNP or PSNP, which serve as the dynamic acknowledgements.
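As a rough illustration of how an upstream sender might honor the two advertised receiver parameters (a minimum inter-LSPDU interval and a maximum window of unacknowledged LSPDUs, with PSNPs re-opening the window), here is a sketch. The class and field names are mine, not the draft's, and treating the pacing interval as a token-bucket rate is one possible interpretation, not the normative behavior.

```python
class NeighborFloodingState:
    """Sender-side view of one neighbor's advertised receive capability."""
    def __init__(self, min_interval_ms, max_unacked):
        self.rate = 1000.0 / min_interval_ms   # average LSPDUs/sec accepted
        self.max_unacked = max_unacked         # burst: unacked LSPDUs allowed
        self.tokens = float(max_unacked)       # full burst available at start
        self.unacked = set()                   # LSP IDs awaiting a PSNP ack
        self.last_refill = 0.0

    def _refill(self, now):
        # Replenish send credit at the advertised average rate, capped at
        # the burst size so back-to-back sends stay bounded.
        self.tokens = min(self.max_unacked,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def can_send(self, now):
        self._refill(now)
        return self.tokens >= 1.0 and len(self.unacked) < self.max_unacked

    def on_send(self, lsp_id, now):
        self.tokens -= 1.0
        self.unacked.add(lsp_id)

    def on_psnp_ack(self, lsp_id):
        # A PSNP acknowledgment opens the unacked window again.
        self.unacked.discard(lsp_id)
```

With this model a burst can go out back-to-back up to the window, after which either the pacing rate or the outstanding-acknowledgment count throttles the sender, which is the "static window plus dynamic acknowledgements" idea described above.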
O
So, in summary: we want to improve IS-IS communication between neighbors regarding flooding speed. We are advertising static parameters from the downstream receiving neighbor up to the upstream flooding neighbor, on point-to-point links. These two parameters match existing implementations, at least for the first one, and partially for the second one. Next one.
O
So with that single TLV, the upstream can adopt different kinds of behaviors; it's up to it. The first kind of behavior is to coordinate on LSP pacing. That one is very easy to do, because it's already available on all implementations, but with the extension you agree with your neighbor on how fast it can process LSPDUs. Even that single improvement can be significant in real life, because I think everyone will agree that flooding speed today is very, very conservative.
O
So it's good to be able to advertise quickly when one node has been removed from the topology. And there is also a possibility opened by that new TLV: you can have dynamic flow control, which would be a significant improvement compared to the static LSP pacing that we have today. But any of those improvements is local to the upstream flooding node.
P
Should you maybe do it in 100 microseconds or something else? Because milliseconds means you can only do one per millisecond, which means a thousand per second — depending on your burst size, but it still limits — and you just said that BGP does 3000 per second. So maybe, you know, let's not stick to milliseconds; let's use a number that makes sense, I think.
O
Thank you for your comments; we had already received them by email. It was a bit too late to change the draft, but I agree that milliseconds could be limiting in the future — even though I would already be happy if we could get down to 1 millisecond. But you're right, absolutely. Thank you.
O
We really have implementations which have LSP pacing per interface, so it's nothing really new; it's already there — I don't know about all implementations.
R
So this is the minimum interval in which I would like to receive LSPs; the minimum... well... the minimum burst that I would like to receive. And this can potentially be adjusted dynamically, although I believe there's language in the draft that says it's not a good idea to do this too often. Okay, next slide.
R
So here's what we agree on. We definitely agree that flooding — at least as it's been defined historically, and as it defaults in most implementations — is much too slow for modern networks. So we agree that we want to allow for the capability to flood much faster than we do by default today. But we also agree that doing so entails some risk, and therefore we want to have some means of flow control. Next slide.
R
Here's what we disagree on. The essence of the draft is to send parameters that control the state per interface — I think this is related to the point that Jeff was asking about. The value that you want to use for flooding is not a per-interface value, and I'll go into a little more detail here. The second point is: the draft specifies that flow control should be managed by the receiver, and we believe it should be managed by the transmitter. Next slide.
R
So flooding is part of what allows you to achieve network-wide convergence, just as we do with SPF intervals. If you configure vastly different SPF intervals on different nodes, the convergence behavior of the network is going to suffer, because some portions of the network are going to converge significantly faster than other portions of the network. The same thing is true with flooding, because if I run my SPF, but the information I'm running my SPF on doesn't match...
R
...that of my neighbors, then I'm going to come to a different conclusion. I'm going to temporarily install different routes in the forwarding plane, and we are going to be vulnerable to loops and black holes and so forth. So we strongly believe that the value that you want to use for pacing your LSP flooding needs to be a network-wide parameter, not a per-interface parameter. Next slide.
R
I did some archaeological research here, back in ISO 10589. There's actually some language in 10589 that reinforces this point. The language here — this was talking about the retransmit timer — specifies that the retransmit timer is per LSP, but it's not per LSP per interface; it's simply per LSP. So if I need to retransmit an LSP, then I want to do so at the same time, on all of the interfaces on which I have yet to receive an acknowledgment. I don't want to tune this to a per-interface value.
R
Receiver-driven flow control: the ability of a node to decide, "gee, you know, I'm getting things too fast" — for whatever reason: my CPU is overloaded, my punt queue from the data plane is overloaded, whatever the reason may be. Many implementations implement an LSP input queue that is simply a FIFO — not a per-interface FIFO, but just: no matter what interface I got this in on, I'm going to stick it into a queue, and when I get around to processing that queue, I'm going to process them first in, first out.
R
So if that queue, which oftentimes is bounded — if that queue gets overloaded, okay, it's an indication that I'm getting swamped. But who's swamping me? Unless I actually go through and process the LSPs in the input queue, I can't figure out, you know, which of my neighbors might be the one who's sending stuff to me too fast. And in addition, you know, based on the points we made in the previous slides, I don't really want to have a per-interface...
R
...you know, a slower rate on one interface and a faster rate on another interface. So the rationale for saying "I want to advertise a slower rate to this one neighbor, but allow the faster rate to some other neighbor" is both a problem from an implementation standpoint and not the behavior we want in order to optimize network convergence.
A
I mean, just to make clear: you're making a point based on assumptions about implementation. You're saying, "I don't know who sends it to me, because my implementation uses a single FIFO LSP queue." So, yes: for an implementation that uses a single global input queue for LSPs, this is not a good solution. But that doesn't mean that, for all implementations of IS-IS, it is hard to identify who's sending you the LSPs.
R
I'm not necessarily going to disagree with you. There are a number of points being made here, and I think one of the points is: there are implementations that do this — more than one, from my personal experience. That doesn't mean all implementations do this, and there are some good reasons for an implementation to implement it this way.
R
You know, I could counter by saying: if I do this per interface, now I've got to come up with a fairness algorithm anyway. But let's not debate that. The second point I want to make about receiver-driven flow control — well, actually, sorry, can you go back one slide? I may have missed something here... yeah, yeah. So the sending of the updated receive interval is essentially an out-of-band signaling mechanism.
R
Okay, in other words, for this to be effective, I can't wait 10 seconds until my next hello interval — you know, that's going to let the condition persist for much longer than I would like. So I've got to introduce extra processing, both for myself and for my neighbors, precisely at the time when everybody's the most busy, which to me is a disadvantage. Next slide.
R
This obviously depends on some protocol extensions, so it's not going to be effective if I send this but my neighbor doesn't support it. So I have to wait for everybody to upgrade to get the maximum benefit out of this; if you do this in a brownfield, you're not going to get the full benefit. Next slide. So what should we do?
R
Okay, IS-IS 101: what do we do when we have an LSP to send? We mark it with a Send Routing Message bit, the SRM bit — and I'm talking here strictly about point-to-point; we know broadcast interfaces work a little bit differently, but for the purposes of this discussion I think we can stick to point-to-point. The SRM bit stays set until we actually get an acknowledgment on that particular interface. The retransmit timer will fire, and we will resend any LSP that has been unacknowledged — from the transmit side.
R
I know how many LSPs I have sent on a particular interface that I have not received acknowledgment for; I have all of that information. Okay, and when I have that information, I don't particularly care why the LSP was not acknowledged. Did my neighbor not have enough CPU time? Is its receive queue too small? Did it have problems in the punt path? I don't care. All I know is that I need to be retransmitting, and I've got all the information.
R
But basically, you set an upper bound for your unacknowledged LSPs on a particular interface. When you hit that limit, then you slow down the rate at which you send LSPs to that neighbor, and you persist in doing this until the number of unacknowledged LSPs goes below some safe limit. Then you go back to your default setting.
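That transmit-side scheme — an upper bound on unacknowledged LSPs per interface, throttling when it is hit, recovering once the backlog drains below a safe threshold — can be sketched as follows. The class name, field names, and the specific interval and watermark values are illustrative assumptions of mine, not values from any draft.

```python
class TxFlowControl:
    """Per-interface transmit-side flow control driven by unacked LSPs."""
    def __init__(self, fast_interval_ms=5, slow_interval_ms=33,
                 high_water=30, low_water=10):
        self.fast = fast_interval_ms
        self.slow = slow_interval_ms
        self.high = high_water      # unacked LSPs: start throttling here
        self.low = low_water        # unacked LSPs: resume full speed here
        self.unacked = set()        # LSP IDs sent but not yet acknowledged
        self.throttled = False

    def current_interval_ms(self):
        # Pacing interval the sender should use right now on this interface.
        return self.slow if self.throttled else self.fast

    def on_send(self, lsp_id):
        self.unacked.add(lsp_id)
        if len(self.unacked) >= self.high and not self.throttled:
            self.throttled = True   # neighbor not keeping up; also log this

    def on_ack(self, lsp_id):
        self.unacked.discard(lsp_id)
        if self.throttled and len(self.unacked) <= self.low:
            self.throttled = False  # backlog drained; back to default rate
```

The hysteresis between the high and low watermarks avoids oscillating between the two rates, and the transition into the throttled state is the natural place to emit the operator log message discussed next.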
R
I think this condition should be logged, because, as we discussed before, we want to have a consistent flooding rate on all the interfaces in the network, so that we can get the optimal convergence. If you have a node that is consistently not able to keep up, then the convergence in your network is going to suffer, and you need to let the operator know, so the operator can decide what he wants to do about this. You know: does he have an underpowered node? Does he have a misconfiguration that's causing this?
R
He's got many choices, and we need to let him know that this condition is occurring. Next slide. So there's a bunch of things in this space that have actually been done and are out there. Fast flooding has been around for close to 20 years — as part of fast flooding, or fast convergence I should say; it's part of that — and we have some kind of fast-flooding knob that says: before you actually start your SPF, send a bunch of LSPs to your neighbors, so that we can speed up the propagation of LSPs through the network.
R
It was just the point on the slide: there are a number of things that have been done. People are actually flooding faster than 33 milliseconds; they've been doing so successfully. I haven't heard about any problems with that, so it isn't like we have just been able to turn this on and it doesn't work — it does work. Next slide. So what should we do? We should encourage vendors to increase the flooding rate. We need to emphasize the point that this is a network-wide parameter, not a per-interface parameter, and use TX-based flow control.
A
I think a lot of good work can be done on this. I don't think that anybody has the right answer yet. I would also like to comment quickly: whoever heard of receiver-based flow control — what a crazy idea, right? Or out-of-band signaling — RTS/CTS? I think it's not crazy to want to do receiver-side flow control. But, that said, I mean, I think you have some valid points that need to be discussed, like the different... you know, if you're flooding at different rates on different interfaces.
A
So if it's not going fast on this interface, maybe it's going fast on another interface, so I'm not sure if that's a big deal; whatever, we can discuss that on the list as well. But it's good to have, I think, two points here: you've got a good point about "hey, I've got retransmits; I can do transmission-side flow control that way," and I think the other authors have a good point about maybe we need to look at the receiver side. I don't know.
T
The ability of different rates to cause a transient on the network is very pronounced in sparse networks; the denser the networks get, the less that is an issue — you actually flip it to the other side. So the per-node behavior is largely an artifact of a very, very old spec that has been a little bit too specific; now we can implement these things right. So if you have dense networks, with modern techniques, the per-interface is doable — so it depends which part of the design space you're in.
T
A constant rate indicated per second is definitely not enough; things change very, very quickly, and that doesn't give you a large part of the solution. You should basically look, on the sender, more at the losses — that is a much simpler ticket — and don't ramp up too aggressively. I mean, you have it slightly hidden, right: like, don't be too optimistic when things go well, but you also need to know where you don't have enough throughput, when you want to really bump it up.
T
It's also very good to keep that, so I agree a BCP is probably all we need; a lot of these techniques are already around, and roughly what was suggested last is practically very doable. I would only disagree on the per-interface part — it can operate differently, especially on the dense networks, because there the slowest receiver starts to actually choke you: if you have a lot of interfaces, the slowest guys can slow down your flooding speed. Okay.
A
I think that there's a reason for a draft no matter what our resolution is, because, while you said that, yes, people might be violating the 33 milliseconds, it is a violation of a standard. And so you can get into these realms where you get operators who run a test and say "I've used, you know, the standard," right? And if you say, "look, you're sending it faster than 33," then you have to deal with the justification — whereas if you have a document we can point at saying the 20...
R
No, no, I'm saying a TCP connection is between node A and node B. However many hops they're separated by in the network, you're controlling the flow between these two endpoints only. That's not what we're trying to do with flooding: we're trying to get the flooding out to the whole network as fast as we can, as consistently as we can.
I
From point A to point B, and, you know, neighbor to neighbor, and use the bandwidth that we've got and use the CPU power we've got. So it seems like the way to do that is to have a local control loop running as quickly as possible. Now I'm gonna again go out on a limb and say I disagree with my co-authors: I think that this TLV should also be in the PSNP packet. Okay.
A
Thank you. So again, this is very interesting. I want to point out, I know we're out of time, but you raising the point about point-to-point not having anything said about it in the ISO standard is, I think, why we're here. Because they were literally relegating this to the L2 and they were saying: send it as fast as you want, right, and L2 will take care of you. And it isn't, for us, right. And now we're here trying, you know, from both sides, on the transmit or the receive, we're saying.
M
Very quick, I'm coming in quite cold to this, oh, I'm no OSPF expert, but I understand both sides' points. I think you have different ends of the same stick here; you want to solve the same problem. And I think, to give that a thought: either the sender could monitor acknowledgments or flow control, or the receiver can provide feedback, or perhaps both, I think. One thing that gets missed: you're conflating two points.
M
One is: how do we control the point-to-point rate. And the other point, which you keep coming to quite elegantly, is that in a link state routing protocol the critical thing is the consistent view of the link state database across the entire network. So if you want to speed up the propagation of that link state across the entire network, that's almost a different parameter and needs a different method of speeding up than just rate-controlling your point-to-point link. They're both important, and 33 milliseconds is too old a number.
C
Okay, we're gonna go next. We do have a plan for catching back up. Part of it is Tony had originally asked for five minutes for these and we gave ten, so any time you can give back on each of them, any help you can give us. The other thing is we can push Peter's last presentation to the next session, because we only have 45 minutes allocated in the next session, so.
I
All right, I've got two updates for you: topology transparent zones and area abstraction, and hierarchical IS-IS. Next slide, please. Okay, first, topology transparent zones and area abstraction. As previously noted, these are largely trying to accomplish the same thing; we have completely different ways of doing things. We have at least met privately and have agreed we're going to collaborate. We have started a document offline. We do not have anything ready to present, sorry. Okay, next, hierarchical IS-IS. So we have some changes here. We added support for flooding scopes. There was a bug fix.
I
We needed more PDU types for LAN IIHs; that was just silly. We added an explicit area identifier, because apparently nobody in the discussion was willing to actually deal with hierarchical NSAP allocation. Does anybody here remember what NSAPs are? Oh, good. Okay, that's probably the right thing to do. We welcome Les and Paul Wells as co-authors, and we're again asking for working group adoption.
A
So we had an offer last IETF, on Tony's agenda, to present all the reasons we would never want to actually deploy hierarchical link state routing, and then I went and asked him for this: you know, would you please present that? And he said no. So that's pretty, pretty bad; I wish he would have.
A
I mean, so the reason I'm sort of holding back, right, is because I don't think people are paying attention, right. And some of the people who are paying attention are saying this is a bad idea, but then they're not willing to get up and show us why. So I just don't want to jump the gun here and adopt something, and then later we get somebody coming in and saying: what the hell are you working on this for, right? This is a crazy idea.
I
The main use case is called scale. We've got this thing called the Internet. We've got domains; they're bursting out of their seams. For many, many years people have been doing strange things: they've got an IGP running in North America, they've got an IGP running in Europe, and they paste things together at the edges, or they run a distance vector between the two. You know, all sorts of strange things happen.
R
Ok, next slide, please. This will be pretty short. What did we change? ... Not that short.
R
I'm trying, I'm trying. Ok, just quickly: this has been presented several times before. What motivated this draft: we actually had some real-world interoperability problems in how people decided whether to reject an LSP in its entirety, and we needed to clarify what you do. Hey, if you got an unknown TLV, or a TLV that's malformed, you can't use the TLV, obviously, but you cannot throw away the LSP. If you throw away the LSP, then we get inconsistent LSP databases in the network, and we're never going to work.
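The rule being clarified, skip an unusable TLV but keep the rest of the LSP, can be sketched as a parser loop. The framing below is just the generic IS-IS type/length/value layout; the function itself is an illustration of the handling policy, not code from the draft.

```python
def parse_tlvs(body: bytes, known_types: set) -> list:
    """Return (type, value) pairs for usable TLVs, skipping unknown or
    malformed ones instead of rejecting the whole LSP."""
    tlvs, i = [], 0
    while i + 2 <= len(body):
        tlv_type, length = body[i], body[i + 1]
        value = body[i + 2 : i + 2 + length]
        if len(value) < length:
            # Truncated, malformed TLV: stop parsing, but keep what we have.
            break
        if tlv_type in known_types:
            tlvs.append((tlv_type, value))
        # Unknown type: silently skip it; do NOT throw away the LSP.
        i += 2 + length
    return tlvs
```

Everything recoverable from the LSP stays usable, so all routers converge on the same database even when one of them does not understand a particular TLV.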
R
There were some issues with bridges anyway. These are the reasons that we wrote the draft. Next slide. We got some comments from Bruno, thank you, largely editorial, but they improved the quality of the document; we put them in. There was a last call started on June 12th, and there was a considerable amount of support and no objections voiced, but it was never declared finished.
J
So one of the things that we clarified in the draft is that we explicitly said... so what the draft says is that the algo zero locator should be advertised as a prefix. The reason is that we want things to work in a network where there are routers which don't support SRv6, which is obvious. But it never really said: what do we do if the algorithm is other than zero? How do we advertise it?
J
We basically put a sentence in saying we should not advertise these as prefixes, but only as locators. The reason is that if you have an algo X in the network, you really only want the routers which are participating in that algo, and supporting the algo, to actually see that prefix. So what we said is they should not be advertised as a prefix, but only as locators. So that's a small clarification. Next slide.
J
But after discussing with some people, we thought maybe we should really move the A-flag to the prefix reachability attribute flags, so it is also used outside of SRv6, as it seems to be a good idea to know that a prefix is an anycast prefix, and we do support the attribute flags sub-TLV under the locator TLV. So unless anyone has implemented this, we would propose to move it to the attribute flags sub-TLV.
J
But still, if something is anycast, you may want to be sure whether a prefix is anycast, and not including the N bit doesn't tell you that. Having the N bit means it is not anycast, but not having the N bit doesn't mean it is anycast. So we want to explicitly know whether a prefix is anycast.
J
Here, in the locator, we want to know, and we don't want to use the SID from the locator if it is anycast, for TI-LFA, for example. Right, and in the prefix: what if you don't have any prefix with the A-flag at hand, right? I mean, look, I'm not saying this is a must. We just felt this A-flag would probably deserve to go to the more generic sub-TLV, and the sub-TLV is available under the locator. So.
J
Okay, so, yeah, I'm covering both of the drafts. So what we did is: originally, the entropy label capability was advertised as a node attribute in the Router Capability TLV. We moved it out of the Router Capability TLV and we put it under the prefix advertisement. The reason is that routers may not know the identity of the prefix originator in the remote area or domain, and even if they know it, they may not know the capabilities of that originator.
J
If the router has multiple line cards, the router must not announce the ELC for any prefix that is locally attached if it is not capable of processing entropy labels on all of its line cards. The leaked prefixes should preserve the ELC signaling, so we should do that during the redistribution.
J
So this way we get, in a way, you know, the best way we can really do the ELC support between the areas and the domains, and that was the purpose of the move from the capability sub-TLV, or Router Capability TLV, to the prefix advertisement. So it's not perfect, but this is the best we can do. Okay, next slide.
J
So the ERLD advertisement didn't really change; it stays as a new MSD type in the Node MSD sub-TLV. And basically, if a router has multiple line cards with different capabilities of reading the maximum label stack, it must be advertised as the smallest one. This is the rule that has been there even in a previous version of the draft. I can go to the next slide. So, similarly, we did the same thing in OSPF: we removed the ELC signaling from the Router Information LSA and moved it to the prefix advertisement.
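The two multi-line-card rules described, announce ELC only if every card can process entropy labels, and advertise the smallest readable label depth, amount to an `all()` and a `min()`. A minimal sketch, with invented function names:

```python
def advertise_elc(cards_elc_capable):
    # ELC may be announced for locally attached prefixes only if every
    # line card can process entropy labels.
    return all(cards_elc_capable)

def advertised_erld(cards_erld):
    # With heterogeneous line cards, the advertised readable label depth
    # must be the smallest depth any card can read.
    return min(cards_erld)
```

Both rules are conservative: the router only claims what it can honor on its weakest forwarding path.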
W
So I completely agree with the new proposal, doing it as a prefix attribute, but I wanted to ask you to think again, or question, about removing it as a node attribute. There are some applications, traffic engineering and all of that, which would benefit from learning the ELC capability from the node. Otherwise, now I have to, you know, I have the node, but I have to look for some specific prefix in there, look at that attribute, and then figure it out.
J
So originally, when we wrote the draft, it was written in a way that we specified how the Flex Algo forwarding works in the SR MPLS data plane, and then for any other data planes we said it's up to, you know, that data plane or that application how it wants to use the outcome of the calculation.
J
We still keep the same thing, but we added the SRv6 data plane; there's a new section which describes how this works in an SRv6 environment. To make it very, very short here, and you can read the draft, but basically, in SRv6 the locator itself is attached to a topology and an algorithm, so the locator itself has a notion of the algorithm, and that's how the forwarding is being done.
J
Okay, so the initial draft said that we can do Flex Algo optimization inside an area, and then between the areas, or between the domains, it's up to the controller to figure out what is the best path overall. Because, as we do the calculation, which is area-bound, we can only find Flex Algo based paths inside the area. So just giving an example.
J
If you have these four routers, I want to go from R1 to R4, and I have the IGP metric of ten everywhere, and the green numbers are, let's say, delay. When I leak the prefix from right to left, R1 is going to see that it has a delay of 100 to get to the prefix, with the metric of ten which is being advertised from R2, so its metric is 110.
J
In the same way, the other path would be 210, going through R3, so it's going to pick R2. But overall, end to end, the delay is better going the bottom path. But because we didn't have a way to signal the Flex Algo metric in the prefix, we couldn't really optimize this, and we said, well, you can do this with a controller. But it looks like people would like to use this without the controller, so we are trying to solve this problem here. Next slide.
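The example above can be reproduced with a small shortest-path computation over a hypothetical four-router topology. The link numbers are invented to match the spirit of the slide: IGP metric 10 everywhere, delay values that make the bottom path better end to end. Optimizing on the leaked IGP metric and optimizing on delay then pick different paths:

```python
import heapq

def shortest_path(graph, src, dst, metric):
    """Plain Dijkstra; graph maps node -> list of (neighbor, attrs)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, attrs in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + attrs[metric], nbr, path + [nbr]))
    return None

# Hypothetical topology: the delay beyond the area border is invisible to
# the leaked IGP metric, which is the problem being described.
graph = {
    "r1": [("r2", {"igp": 10, "delay": 100}), ("r3", {"igp": 10, "delay": 200})],
    "r2": [("r4", {"igp": 10, "delay": 500})],
    "r3": [("r4", {"igp": 10, "delay": 10})],
}
```

By IGP metric both paths cost 20, so R1 may well pick R2; by delay the R3 path wins at 210 versus 600, which is why the end-to-end metric needs to survive leaking.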
J
Please. So obviously, the way we can solve it is that we introduce the per-Flex-Algo metric. We don't think we need this to be advertised with the prefix inside the area; this is mostly useful when we leak or redistribute the Flex Algo prefixes, and, you know, the labels or locators, but we really want to keep the metric, which has to be the optimization, in place, so we can calculate end to end now.
J
Obviously, we cannot just put the metric in the prefix advertisement and start to use it, because we need to be sure that everybody who is doing the calculation uses the same thing. So, while introducing the metric, we also need to introduce something in the Flex Algo definition which would say: we want to use this metric. And that is the next slide.
J
So we introduce a new sub-TLV of the Flex Algo Definition TLV, which is the definition flags. We introduce one flag, which is the M-flag, and it says that when the prefix algo-specific metric is advertised in the prefix, we must use it; if it is not there, we revert back to the standard IGP metric. So that's the definition, and everybody needs to basically agree to this definition and understand it before we can get consistent calculation results.
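The M-flag semantics just described reduce to a simple selection rule at computation time. A sketch of that rule, with invented function and argument names:

```python
def metric_for_calculation(fad_m_flag, algo_metric, igp_metric):
    """If the Flex Algo definition sets the M-flag and the prefix carries
    an algo-specific metric, that metric must be used; otherwise revert
    to the standard IGP metric."""
    if fad_m_flag and algo_metric is not None:
        return algo_metric
    return igp_metric
```

Because the flag lives in the shared Flex Algo definition, every participating router applies the same rule, which is what makes the calculation results consistent.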
J
So it's similarly defined for OSPF, and there is a registry for those bits which is IGP-agnostic; this is like a common registry defined at IANA. Ok, next slide, please. And this is the Flex Algo metric, basically advertised as a sub-TLV of the prefix reachability TLVs, and it's just a 32-bit metric value, and similarly defined in OSPF.
W
OSPF runs over layer 3 LAG interfaces, and in certain deployments there are use cases for performing either some kind of OAM verification of the underlying member links, or doing some kind of traffic engineering steering over specific member links instead of doing the hashing. And in order to enable these use cases, there is a requirement that member links actually get described and exported as part of the OSPF topology advertisements.
W
Next one. So what's proposed in the draft is that when this use case is required, or when this feature is enabled, the OSPF router actually describes the layer 2 members of that bundle interface. So it's basically the description of the link and some specific attributes of that link, and this applies to both OSPFv2 and v3. Now, one thing to note is that this does not change the OSPF route computation or SPF computation. So it's, you know, some information which is getting advertised via OSPF.
W
So how is this done? There is a new L2 Bundle Member Attributes sub-TLV that we have introduced. In OSPFv2 this would be a sub-TLV of the Extended Link TLV in the Extended Link Opaque LSA, and in OSPFv3 it's part of the new Extended LSAs, so the Extended Router LSA, and this is a sub-TLV of the Router-Link TLV. In both cases they actually describe the layer 3 link.
W
So that's clearly the proposal, and mainly it's the descriptor: that descriptor is a link local identifier, you know, something like an ifIndex, perhaps, and then we have sub-TLVs which describe the attributes of this specific member link. So if there are multiple member links, you would see, you know, more than one instance of this sub-TLV under the link. Next one. So there are no new link attribute sub-TLVs defined for these particular layer 2 member types; the idea is that we would reuse.
W
The existing TLVs that we have for layer 3, just that they would be included under this one, and they would be associated with the L2 bundle member. The draft talks about the ones which are applicable for layer 2, like the Adjacency SID or the maximum link bandwidth, and then it lists others, like, you know, addresses and identifiers, which are not really applicable for a layer 2 interface. Next one. So this is mainly, you know.
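The nesting being described, a bundle-member descriptor (a link local identifier such as an ifIndex) followed by reused per-link sub-TLVs, can be sketched with a toy encoder. The type codes below are placeholders, not the IANA-assigned values, and the one-octet type/length framing is simplified for illustration.

```python
import struct

def tlv(tlv_type: int, value: bytes) -> bytes:
    # Simplified one-octet type / one-octet length framing.
    return struct.pack("!BB", tlv_type, len(value)) + value

# Placeholder type codes, NOT the real codepoints.
T_MAX_LINK_BW, T_ADJ_SID, T_L2_MEMBER = 9, 31, 99

def l2_bundle_member(link_local_id: int, sub_tlvs: list) -> bytes:
    """One member link: its local identifier, then reused link sub-TLVs."""
    body = struct.pack("!I", link_local_id) + b"".join(sub_tlvs)
    return tlv(T_L2_MEMBER, body)

member = l2_bundle_member(
    0x101,  # e.g. the member link's ifIndex
    [tlv(T_MAX_LINK_BW, struct.pack("!f", 1e10)),
     tlv(T_ADJ_SID, struct.pack("!I", 24001))],
)
```

Several such member blocks would then appear under one layer 3 link when the bundle has several members.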
W
Yeah, thanks. So this is a follow-on update; this was presented in Prague, about OSPF working with strict-mode BFD. Go to the next one. So, a quick recap on what the draft does. It's a mechanism... we're trying to standardize something which many implementations do already, which is: you want to run BFD to monitor liveness and, at the same time, you do not want to bring up the OSPF adjacency until the BFD session is up. There is an IS-IS RFC which does something similar for IS-IS. Go to the next one.
W
So just a very quick reminder how it is done: we introduced a new flag, the B-bit flag, in the LLS Extended Options and Flags. This is part of the hello messages, and this is how a router can inform its neighbor that it wants to, you know, operate in this strict mode, and the way this is requested, right, when the new neighbor is detected. The draft talks about the FSM changes. It's really that the neighbor FSM is held in the Init state until the BFD session is up.
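The FSM change amounts to one gated transition: with strict mode negotiated on the link, the neighbor stays in Init until BFD reports up. A minimal sketch of just that transition (the state names follow OSPF; everything else is invented for the example):

```python
from enum import Enum

class NbrState(Enum):
    DOWN = "Down"
    INIT = "Init"
    TWO_WAY = "2-Way"

def on_hello(state, seen_ourselves, strict_mode, bfd_up):
    """Advance the neighbor FSM on a received Hello."""
    if state == NbrState.DOWN:
        state = NbrState.INIT
    if state == NbrState.INIT and seen_ourselves:
        # Strict mode: hold the neighbor in Init until the BFD session is up.
        if strict_mode and not bfd_up:
            return NbrState.INIT
        return NbrState.TWO_WAY
    return state
```

Without strict mode the transition to 2-Way happens as usual, so the behavior only changes when both sides have signaled the B-bit.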
W
First thing is, we changed the title from 'OSPF BFD strict mode' to 'OSPF strict mode for BFD'. We got a lot of feedback which correctly said that this is not really a BFD mechanism, it's an OSPF mechanism. And then there was some feedback on the need to account for graceful restart scenarios: because we have this additional BFD establishment step, we need to, you know, indicate that, so the timers need to allow for this, especially in a scaled scenario. Perhaps, Tony, you had a question.
T
If the new side comes up, why do you even need this signal? The problem is absolutely valid, right: you want a knob of config saying, until I get the BFD up, no OSPF. But if you don't send any hellos until your BFD session is up, the other side cannot come up, right. It does matter what both sides have configured.
W
Okay, and the last bit, I think we can go to the next slide. So we have OSPFv3 multi address family, so we can have IPv4 instance routing supported by OSPFv3. Now, in this mechanism, what happens is all the hellos are done with link-local IPv6 addresses, but because it's for the IPv4 address family, the routes need the IPv4 neighbor address as the next hop, and we don't know this when we are operating OSPFv3. So, next slide.
W
So there is a new TLV being introduced for LLS, sorry, a new LLS TLV being introduced for OSPFv3, where we propose that the local interface IPv4 address is exchanged in the hello itself. This way... normally, in the OSPFv3 IPv4 address family operation, we would wait for the Link LSA to learn the neighbor's IP address that we could install as a next hop in the routing. Here, with this TLV.