From YouTube: IETF111-LSR-20210730-2300
Description: LSR meeting session at IETF 111, 2021/07/30 23:00
https://datatracker.ietf.org/meeting/111/proceedings/
A: Okay, well, let's see — do you want to get started?
D: Certainly. This is LSR, in the last meeting of IETF 111, online. Next slide.

D: Okay, I'm sure you've all seen the note. Well, basically, the crux of it is that if you know about IPR and you're participating in any manner, you're compelled to disclose it. You can read the details here.
D: Okay, one RFC since 110. Next slide. Okay, we have these three sitting here in cluster 340. I'm not sure what they're waiting on, because there are no dependencies on any of these — you know, there are no MISSREFs or anything — but these are going to come out shortly as well. They already have numbers; I just didn't put the numbers in the status. So we have three more that are going to come fast. Next slide.
D: I saw that Tom Petch nudged John, and he talked to the RFC editor, so maybe now that this BFD YANG reference to the TE model is resolved — is removed, not resolved — maybe we can get all these protocol YANG documents published. That would be good, just to have them out as RFCs.

D: Let's see here... yeah, we finally got the IS-IS SRv6 extensions. It should go to telechat — I mean, it's under IESG evaluation. We have some maintenance drafts. Flex Algorithm: we did the second last call, and the discussion resulted in errata to 8920 and 8919. Hopefully that one will get through. You haven't looked at that one yet, John, right?
B: Right — I have not. I've got some backlog.
D: This one is in working group last call. I saw another IPR just recently, but I realized it was just a refresh of the old one; I don't see anything different. I'm going to look at that closer next week, but I think it's just the same IPR that was issued at the onset of the document. Next slide.

D: Okay, we're going to cover the flood reflection today; this is an experimental draft, and Tony's going to talk about it. We started a list discussion on strict-mode BFD; we should be able to do that one when there are actually implementations. And OSPFv3 for SRv6 — that's a rather complex draft. We want to make sure we get it right, but we'd like to get that one as soon as possible while ensuring the quality, because there's a normative reference to it in Flex Algorithm. Next slide.

D: We have people who have asked for adoption, and these are all on the agenda today, so I won't say anything more. Okay, I think we're done. I tried to go faster than I had in the past, so I didn't go through every draft; I just looked for the ones that I thought I needed to make note of to the working group.
F: All right, so let's get started, so we can leave a good amount for the open discussion. Next slide, please. So, the changes that we made since the last significant revision: we had an example algorithm, and we removed it for a number of reasons. One, we've actually worked on an implementation now, and our implementation was not identical to the example algorithm.
F: The algorithm is, in any case, a local matter. There are no interoperability issues, so people could implement things in different ways and it would not cause any problems. So we decided to replace the example algorithm with a few broad guidelines that anybody who implements the algorithm should follow. Next slide, please.
F: Algorithms should be more aggressive in slowing down when they detect a reason to do so, and less aggressive when speeding up, and we want to work with both enhanced nodes and legacy nodes. So, you know, there's been some work to demonstrate, for example, that sending acks more quickly is an aid to faster flooding.
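The guideline described here — slow down aggressively on trouble, speed up gently otherwise — is essentially additive-increase/multiplicative-decrease. A minimal sketch of that shape; the class, method names, and constants are illustrative assumptions, not taken from the draft:

```python
# Hypothetical sketch of the guideline: multiplicative decrease when a
# congestion signal is detected, small additive increase otherwise.

class LspTxRate:
    def __init__(self, rate=100.0, max_rate=1000.0, min_rate=33.0):
        self.rate = rate            # current LSPTxRate (LSPs/second)
        self.max_rate = max_rate    # LSPTxMax, the target ceiling
        self.min_rate = min_rate

    def on_congestion_signal(self):
        # Slow down aggressively: halve the rate.
        self.rate = max(self.min_rate, self.rate / 2.0)

    def on_ack_progress(self):
        # Speed up cautiously: small additive step toward the ceiling.
        self.rate = min(self.max_rate, self.rate + 10.0)

r = LspTxRate(rate=300.0)
r.on_congestion_signal()   # 300 -> 150
r.on_congestion_signal()   # 150 -> 75
r.on_ack_progress()        # 75 -> 85
```

The asymmetry is the point: recovering from overload is cheap, while overloading a receiver costs retransmissions, so the decrease step is much larger than the increase step.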
F: They may do parallel link suppression; they may implement the dynamic flooding draft. We want to be able to work regardless of whether these enhancements are being used or not. And we've also had discussions on how implementing packet priority — for example, processing SNPs at a higher priority than LSPs, so you process the acks more quickly — is a help.

F: So we incorporate that into the algorithm. We are agnostic to the reason for the delay: packets could be lost on transmission, they could be dropped on ingress to the receiver node or when being punted from the data plane to the control plane, there could be CPU issues — IS-IS may not get to run as much as it would like to. It doesn't matter to the algorithm.
A: Yeah, so unless it's an immediate, like, just clarifying question, let's hold the questions until the end, because we have dedicated over a half hour to discussion.
F: Okay, okay. And just to note, the current flooding rate is referred to as LSPTxRate, as opposed to the target maximum, which is LSPTxMax. I'm not sure that we incorporated that name on all the slides — I apologize for that — but just to clarify the distinction. Next slide, please. Here's the basic test topology. We have an emulation of a large network, so that we can get... most of the tests were done with 2000 LSPs.

F: The only thing that we have done on the receiver is put a hook in to artificially manipulate the receive rate that the receiver will support, so that we can test when the receiver is able to, for example, handle 100 LSPs per second, or 300 LSPs per second, and so forth. But other than that, it's base code. Both the sender and the receiver are real hardware platforms, and the basic test procedure is to reset the receiver.
F: Just a little explanation: you'll see a column related to the LSP transmission strategy. When you're sending at a fairly slow rate, as historically we have done, it's reasonable to say, for example, if I'm doing 33 LSPs per second, I can expect that I will wake up 33 times a second and send one LSP each time. But at higher transmission rates it's unrealistic to expect that, for example, we're going to wake up a thousand times a second and be spot on each millisecond.
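A common way to handle that — sketched here under the assumption of a fixed wake-up tick; the function and the 10 ms tick are illustrative, not from the draft — is to wake at a coarser interval and send a batch sized to hit the target rate:

```python
# At high rates a sender cannot realistically wake once per LSP, so
# instead wake at a fixed coarse tick and send a batch per wakeup.

def batch_schedule(rate_lsps_per_sec, tick_ms=10):
    """Return (wakeups_per_sec, lsps_per_wakeup) for a target rate."""
    wakeups = 1000 // tick_ms                      # e.g. 100 wakeups/s
    per_wakeup = max(1, round(rate_lsps_per_sec / wakeups))
    return wakeups, per_wakeup

print(batch_schedule(33))    # low rate: one LSP on some of the ticks
print(batch_schedule(1000))  # high rate: 10 LSPs per 10 ms tick
```

So a target of 1000 LSPs/s becomes 100 wakeups of 10 LSPs each, which a real-time-ish scheduler can actually deliver.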
F
Please
so
here's
just
a
baseline
there's,
no
operation
of
the
actual
algorithm
itself
other
than
the
optimized
transmission
strategy
that
I
was
discussing
on
the
previous
slide,
and
this
just
simply
shows
how
long
it
takes
to
to
transmit
2000
lsps
at
the
given
rates.
Like
there's,
nothing
very
surprising
here
next
slide.
Please.
F
All
right,
so
here's
an
example
where
we're
slowing
down
again
we
have
2000
lsps
to
flood
the
we
we're
starting
out
with
a
lsptx
rate
of
300
per
second,
and
the
receiver
is
actually
only
able
to
handle
100
lsps
per
second
and
the
transmitter
has
to
slow
down
so
you'll
see
with
the
two
baselines,
where
the
the
algorithm's
not
in
use.
F
F
We
then
have
two
cases
where
the
algorithm
is
in
use,
so
we're
our
starting
lsptx
rate
is
300
lsps
per
second
we're
going
to
have
to
adjust
that
based
upon
how
quickly
we're
getting
x
back
from
the
receiver
to
adjust
to
the
receiver's
rate,
which
is
100
lsps
per
second.
F
F
F
F
F
And
there's
no
adjustment
required
if
we
go
to
the
next
slide.
This
is
just
to
demonstrate
that
we're
not
oscillating
up
and
down
we're
just
staying
at
the
last
known
rate
that
we
knew
was
good,
and
so
we
have
no
re-transmissions
here
next
slide.
Please
all
right.
F
It
actually
took
us
multiple
bursts
before
we
actually
reached
the
the
maximum
of
one
thousand.
F
So
if
we
go
to
the
next
slide,
we'll
see
that
in
graphical
form,
so
again
the
left
hand
side
we're
adjusting
up
to
from
100
to
300,
and
you
can
see
that
we
hit
the
whoops.
We
hit
the
maximum
rate
in
about
11
seconds,
the
graph
on
the
right,
we're
trying
to
ramp
up
to
from
100
to
1000,
and
you
can
see
that
it
took
two
bursts.
The
first
burst
got
us
up
to
about
500
lsps
per
second,
and
the
second
burst
got
us
up
from
500
to
1000.
A: Tony P, in the chat, has mentioned a couple of times that he thinks that that should be Tx.
F: There we go, okay. So you can see the total elapsed time in milliseconds: when we go from 100 to 300 it's a little over 11 seconds; in the case of going to a thousand there are actually two bursts — the first burst took us about 10 seconds, and the second burst took us another three seconds. Does that help clarify it for you, Acee?

F: All right. Again, if you go one slide ahead, you'll see there were two bursts involved to get us all the way up to 1000, if you look at the right-hand graph. We sent two thousand LSPs; we got up to about 500, then we waited a little while, and then we sent another burst, and that got us up to a thousand.
J: Okay, hello, can you hear me? Yes, okay. So hello, everyone, I'm Guillaume Solignac. I'm here to present the work we did with Bruno on flow and congestion control for IS-IS, and especially on the basis of the draft. Next slide, please.
J: So, the first point we want to clarify about the problem is that there are actually two problems that we are trying to solve. The first one takes place in the control plane, namely in the IS-IS process and the per-circuit buffer, or the input queue before the IS-IS process; and the other part of the problem is the path between the ingress and this input buffer. Next slide, please.

J: The draft focuses more on point-to-point interfaces, and for this there is a classical algorithm, which is to use a receive window, which we call RWIN in the next slides. Basically, the sender will never send more than RWIN unacknowledged LSPs.
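That rule — never more than RWIN LSPs in flight — is ordinary sliding-window flow control. A minimal sketch of the sender side; all names are illustrative, not from the draft:

```python
# Sender keeps at most RWIN LSPs unacknowledged; a PSNP acknowledging
# some of them opens the window and lets more LSPs go out.

from collections import deque

class RwinSender:
    def __init__(self, rwin):
        self.rwin = rwin
        self.in_flight = set()      # LSP IDs sent but not yet acked
        self.queue = deque()        # LSPs waiting to be flooded

    def try_send(self):
        sent = []
        while self.queue and len(self.in_flight) < self.rwin:
            lsp = self.queue.popleft()
            self.in_flight.add(lsp)
            sent.append(lsp)        # hand the LSP to the link here
        return sent

    def on_psnp(self, acked_ids):
        # A PSNP acknowledges a set of LSPs, shrinking what's in flight.
        self.in_flight -= set(acked_ids)

s = RwinSender(rwin=10)
s.queue.extend(range(25))
first = s.try_send()        # sends 10, then blocks on the window
s.on_psnp(first[:5])        # a PSNP acks 5 of them
more = s.try_send()         # window opens: 5 more go out
```

The receiver never has to drop for lack of buffer space as long as RWIN is no larger than what its input queue can absorb.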
J: There have been some questions on how to choose RWIN. Well, we assumed that there was one circuit per neighbor, so in that case you can use the circuit buffer size divided by two, so that there is space for other PDUs.

J: If you don't have access to this information — though I think that should be the case in every implementation — you can use the TCP value, which also uses a receive buffer, and in the worst case you can still use a conservative value of 10, since, well, a popular implementation already considers that bursts of 10 LSPs are doable for a receiver.
J: So, when to send the PSNP: we reuse the mechanism that was presented at the previous IETF. The receiver chooses a parameter LPP, for LSPs per PSNP, and once it has LPP LSPs to acknowledge, it will send a PSNP right away.
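The LPP rule batches acknowledgments on the receiver side. A sketch of that behavior — the class and the LPP value are illustrative, and a real receiver would also run a timer to flush a partial batch, which is omitted here:

```python
# Receiver-side ack batching: collect received LSP IDs and emit a
# PSNP as soon as LPP of them are pending.

class PsnpBatcher:
    def __init__(self, lpp=5):
        self.lpp = lpp
        self.pending = []

    def on_lsp_received(self, lsp_id):
        self.pending.append(lsp_id)
        if len(self.pending) >= self.lpp:
            psnp, self.pending = self.pending, []
            return psnp             # send this PSNP right away
        return None

b = PsnpBatcher(lpp=5)
acks = [b.on_lsp_received(i) for i in range(7)]
# acks[4] is a PSNP covering LSPs 0..4; the other entries are None
```

A smaller LPP gives the sender faster feedback (tighter RTT), at the cost of more PSNP packets.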
J
Please
so
the
algorithm
or
win
relies
on
one
static
information,
the
size
of
the
circuit
buffer.
There
are
multiple
identifier
identified
parameters
that
influence
its
behavior.
There
is,
of
course,
the
value
of
arwen
advertised.
J
So
if
we
want
to
look
at
the
theoretical
rate
we
can
achieve,
it
is
our
win
over
rtt,
because
in
the
best
case
you
send
our
in
lsps,
you
get
the
unenlargement
right
away,
so
you
can
send
all
in
lsps
every
rtt
which
gives
you
a
rate
of
hour
in
our
rtt
next
slide.
Please.
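A quick worked example of that ceiling, with illustrative numbers (the 20 ms RTT matches the figure discussed below, but RWIN = 10 is just an assumption):

```python
# Theoretical flooding ceiling of a window protocol: at most RWIN
# unacknowledged LSPs, acks arriving one RTT later.

def max_rate(rwin, rtt_seconds):
    return rwin / rtt_seconds

# e.g. RWIN = 10 LSPs and RTT = 20 ms:
rate = max_rate(10, 0.020)
print(rate)   # 500.0 LSPs per second
```

This is the same bandwidth-delay-product reasoning as in TCP: if the measured RTT grows (for example, because acks are batched and delayed), the achievable rate drops proportionally.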
J: So if we want to look at why there is a difference between the theoretical rate and the two actual experiments, we can look at the time an LSP takes to be acknowledged, and this is the graph you can see on the right — on top for LPP equals 5, on the bottom for LPP equals 15 — and you can see that the actual measured RTT is higher than the 20 milliseconds.

J: ...the time the receiver takes to acknowledge with the PSNP, since LSPs get buffered for a longer time. Again, two good points: there are no LSP losses, again, and the sender paces well to the receiver. So it's 2000 total, and it's around 200 LSPs per second per VLAN, which gives around a 7x speedup compared to the default rate.
J: Okay, next slide, please. Now we want to study what happens when there is an IO bottleneck — so, when there is no information available at the sender side to deal with an IO bottleneck. Next slide, please. The experimental setup is the same. Next slide, please.

J: The senders managed to fill the bandwidth of the link, and there is no LSP loss. This is no surprise, because the sum of the RWINs of all the senders is actually smaller than the bottleneck buffer, which prevents losses from happening. Now we want to see what happens when the buffer is smaller and losses happen due to IO congestion.
J: So now we reduce the buffer size to 64 packets, and there are, unsurprisingly, losses; you can see them on the bottom graph. They occur every five seconds, and these five seconds correspond to the retransmit timer, where a sender considers that an LSP was indeed lost, in that case, and it comes back to the queue for retransmission.
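The five-second spikes fall directly out of the retransmit-timer rule just described: an LSP not acknowledged within the timer is declared lost and re-queued. A minimal sketch, assuming the classic 5-second IS-IS retransmit interval (the helper and data are illustrative):

```python
# LSPs sent but unacknowledged for at least the retransmit interval
# are treated as lost and go back on the transmit queue, which is why
# the loss/retransmission spikes in the graph recur every five seconds.

RETRANSMIT_TIMER = 5.0  # seconds

def expired(sent_at, now, acked):
    """Return LSPs whose retransmit timer has fired."""
    return [lsp for lsp, t in sent_at.items()
            if lsp not in acked and now - t >= RETRANSMIT_TIMER]

sent_at = {"lsp-1": 0.0, "lsp-2": 2.0, "lsp-3": 4.5}
print(expired(sent_at, now=6.0, acked={"lsp-2"}))   # ['lsp-1']
```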
J
The
other
interesting
point
is
that
we
completely
fill
the
boundaries
during
almost
the
duration
of
the
experiment,
the
so
there
are
around
10
percent
losses
on
the
total
of
lsps
transmitted,
and
the
losses
correspond
to
the
difference
between
the
sum
of
the
receive
window
with
the
buffer,
the
bottleneck
buffer
next
slide.
Please.
J
So
yeah,
if
we
want
a
bit
to
sum
up
what
we
have
seen
so
far,
there
are
three
sending
rates
possible
in
the
third
experiment.
It
was
the
rwin
limit
in
the
second
experiment.
It
was
the
cpu
limit
and
in
the
third
one
it
was
the
io
limit.
In
every
case,
the
algorithm
managed
to
completely
fill
the
bottleneck.
J
J
J
The
gain
is
that
there
are
no
speed
losses
due
to
cpu
contention.
The
speed
is
spaced
by
the
receiver
axe
and
the
dropped
lsp
artificially
fills
the
receive
window
on
the
center
view,
which
means
that
internal
congestion,
still
it
still
has
some
good
properties,
even
in
case
of
internal
congestion.
Next
slide,
please.
J: A small recap on how we do congestion control — I think I will skip this part, since it's not very relevant to the discussion; we can go back to it afterwards if you're interested. Next slide, please. And so here you can see the results of our algorithm with the same benchmark as before, and the congestion control algorithm actually helps a lot in avoiding the spurious retransmissions that we saw before: we go from around 10 percent to 1 percent of lost LSPs.

J: There is a large overshoot at the beginning, because we are aggressive in the starting phase, and this aggressive phase, called slow start, could be removed, like we saw before, but it helps in scaling to larger links, because being more aggressive allows reaching larger sending rates. Next slide, please.

J: Why don't we do only congestion control, like proposed just before? It is very likely that the process is CPU-bound at some point in the life of the IS-IS process, for many reasons: it can be stalled, another process can be scheduled, there could be expensive SPF computations — and in that case our algorithm deals with it perfectly, since all the sent LSPs can be buffered before being processed.
J: The other problem is that congestion control — sorry — if not bounded by a flow control algorithm, will consider that CPU slowness is actually congestion, while it is not: packets could be buffered, since we could know the space available to store them. And the other point is that they can work together.
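The combination argued for here can be sketched as taking the minimum of two windows: a congestion window that reacts to delay or loss, capped by the advertised RWIN (the receiver's input-buffer bound). All names and constants below are illustrative, not from either proposal:

```python
# Effective window = min(cwnd, RWIN): CPU slowness is absorbed by the
# receiver's buffer (flow control), while IO congestion still makes the
# congestion window back off.

class CombinedController:
    def __init__(self, rwin):
        self.rwin = rwin        # flow-control bound from the receiver
        self.cwnd = 2.0         # congestion window, starts small

    def window(self):
        return min(int(self.cwnd), self.rwin)

    def on_ack(self):
        self.cwnd += 1.0        # grow while things go well...

    def on_delay_spike(self):
        self.cwnd = max(2.0, self.cwnd / 2.0)   # ...back off on trouble

c = CombinedController(rwin=10)
for _ in range(20):
    c.on_ack()
assert c.window() == 10     # capped by RWIN despite a large cwnd
```

With the cap in place, a CPU stall on the receiver cannot drive the sender past what the receiver can buffer, so the congestion machinery only has to handle genuine path congestion.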
J: So here is a recap of all the results, and we can see that in the CPU-bound case our algorithm deals with the problem perfectly. Next slide.

J: It does not deal with IO congestion perfectly, but we still have some guarantees; and the congestion control algorithm only has a partial effect on CPU contention — even though it's not perfect — and deals very, very well with the IO bottleneck. Well, thank you for listening, and yes, I think we can have the discussion now, if the chairs allow it. Great.
A: I have... I've been collecting some questions, and there's been a lot of talk in the chat as well. I don't know if we want to... we probably want to bring some of those back in. Let me pull up the questions on your presentation first.

A: One of the questions I had overall was — maybe we can go back through the slides — but how much faster was yours, with RWIN, than with the congestion control in Les's proposal? Do we have a comparison?
J: Well, I think it's hard to be fair in the comparison — like, comparing a flow control algorithm and a congestion control algorithm. I think... I'm not sure the comparison is very relevant, since they...
A: No, but what I'm saying is, well, it has to be relevant — that's the point, right? What I'm curious about is, and maybe we can do this right now and figure it out, but how do we compare, as an audience, when we're looking at these results? How much faster is it going to go, right? That's the goal, right — that we flood faster — and we have results. There were some results we can go back to in Les's, where it showed...

A: ...you know, like 17 seconds, I think, or, you know, these different things. Is there some number that we could cite — and you guys looked at each other's stuff, right? I mean, you know, does RWIN get everything done in three seconds for a thousand, or, you know, whatever? So, I mean...
J: Yeah, yeah, I think I get your question. The RWIN algorithm alone — I think we don't want to use it alone, even though that's what we showed here. I think we want to see the receive window as an additional guarantee on top of a congestion control algorithm, and this way the congestion control algorithm cannot lose packets due to CPU contention. Because as soon as the algorithm hits some IO bottleneck, it will lose packets, and potentially perform way worse than a simple congestion control algorithm would have done.
A: So one of the things I wondered is: is RWIN the equivalent of making Les's transmit max dynamic? Is that...
J: Yeah, indirectly, yes, because by bounding the number of buffered packets you will bound the transmit rate, and you don't have to take an arbitrary value.
E: So it's a maximum that can be enforced by the receiver. If they receive a pause — so, for example, for 300 milliseconds — the sender will pause for 300 milliseconds; again, the same number. For any change in the behavior of the receiver, the sender will adapt quickly, in one RTT, and with no loss of LSPs. So it's important to compare both how fast you adapt and how many LSPs you lose in the process.
A: Yeah, Tony P in the chat is bringing up that I shouldn't have said max — I think I should have said the target. I don't know, actually; maybe Tony can come and say that at the mic. Okay, yeah. So, and again, I think I'm taking from that — unless I'm wrong — that there's a fixed target in Les's; maybe you can... is that the case? There's a fixed target in your case, and in the RWIN case it's a target set by the receiver.
F: So, first of all, I want to reinforce Bruno's point: you can't compare raw numbers here, because clearly we ran this with, you know, different hardware and, you know, different implementations of the protocol itself.

F: It's chosen at startup, and that's the number you get. So, to me, this implies that you have to pick a number that's conservative, because you want it to be able to work even when the receiver is very, very busy — perhaps not with IS-IS, but with other things. So that...
A: Right, and... but it's also something that could be easily adapted, right? I mean, when we talk about the R window and the value, it totally reminds me of credits, right? You know, so we could just change it to a credit scheme, where the receiver says: okay, your window's this big now — and you can adapt that value based on the receiver.
J: Yeah, so there are really two things: the RWIN and the congestion control algorithm, and the RWIN is really the upper bound on what can be stored before IS-IS processing.

J: So we really think that, all in all, you will need both a congestion control algorithm and a receive window to get the best conditions you can have. So, in our case, the signal we use to deal with congestion control is not the same as Les's, which monitors the acknowledgment rate. In our case, we monitor whether an LSP gets delayed for too long compared to the usual rate.
A: Okay, and I want to go to, I think, Tony next — unless, Bruno... let's let Tony go, maybe, and then we can come back to you, Bruno.
K: But the point is that if we have feedback — if we can give information, and it probably would be best if it's real-time information, back to the transmitter — we can do better, right? If we take a look at Les's slides, it's very clear that there is some time for the transmitter to react, and maybe we can improve that time.
A: I have to say, by the way, this is all for me as a... but I agree with you, Tony. I think that it's a good starting point. I'm not allergic at all to adding information, you know. I think that some people are interested in not adding things because — you know, I don't know why — but I don't have that aversion myself, and I think that so many other congestion, you know, rate-limiting and/or improving schemes include feedback, right?

A: So I don't think we should be so averse to it. I agree. So, Bruno, do you want to go next?
E: The window is static, but the window does not express the rate that we're going to achieve. The rate is determined by how fast the receiver will process and acknowledge. So I'll try to give an example: if I say, well, please state your sentences one by one and give me some time to process...

E: The rate is dynamic, based on how fast the receiver can process the LSPs — or the sentences.
A: So another question that I had was: how related is the TxMax value in Les's proposal to the R window? Those are both static values, right? Are they also serving similar goals?
F: I think the answer to your question is probably a qualified yes, but, you know, the Tx-based algorithm is built to adapt.

F: As I understand the RWIN-based algorithm, you pick a number that is never going to change, and it's not going to adapt. So the behavior that you get from RWIN is because the receiver goes: okay, I can send 10 LSPs, or whatever the magic number that has been advertised is, and then I have to pause.

F: We did simulate it, in a simplified way, and, you know, that's to demonstrate that the algorithm does adapt. It obviously doesn't adapt — you know, in the case of slowdown it doesn't adapt with zero retransmissions — but it does adapt, and I think this is an important aspect of whatever solution we pick. Because, you know, obviously, in the real world, we're doing a lot more than just IS-IS processing.
J: So, actually, I was reading Tony's point, but I agree that you need congestion control, and the value you advertise is really not magic at all: it is the space you have before your LSP gets actually processed, and I don't think it's too much to ask to have this value — you know, the buffer you have just before, where your LSP gets stored before processing. So it's really not magic.

J: So, actually, it's very likely that if you have a buffer large enough, you will never reach this bottleneck, so you actually need a congestion control algorithm — I very much agree with this, again. And, yeah, sorry, I was trying, but...
K: In the current stuff that was just presented, RWIN is static, but let's remember that's not set in stone, right? We're out with a proposal that does what we want, and we don't have to pick and choose A or B right now.
E: I agree with Les that we want the sender to be dynamic, to adapt to whatever happens — so, maybe a RIB update, a BGP computation — but, again, that's exactly what RWIN provides. Again, if the receiver stops for 200 milliseconds, for whatever reason, then the sender will stop for 200 milliseconds exactly, after a period of one RTT, but without losing any LSPs.

E: And that's actually a clear difference between both proposals.

E: We can see that we can adapt to different threads, to different numbers of neighbors, without losing any LSPs, in the two cases where it is CPU-bound; whereas if you do just the congestion proposal, we can see in Les's slides that it adapts very well, but after a loss of LSPs, because it reacts to the bad news.
L: Yeah, so at least what I kind of learned, or arrived at, is that in practice the RWIN window is more or less like the max Tx rate which is in the other proposal — more or less. And the point is: if RWIN is static, or constant, then I really don't see it as anything other than a max rate configured, let's say, on a, you know, per-link basis kind of thing, on the receiver — on the sender side.

L: Let's assume... I do not know how that can be determined, and maybe there were some assumptions on there being, you know, a socket per neighbor or interface, and all of those; there were some other assumptions — maybe take the value from TCP, and, you know, things like that. So...

L: Yeah, I think that's probably what I'm saying: that's the one which would really make a difference. And the next part — the last thing — is that there are these assumptions, these additional requirements, that are needed to implement this, and it would all, you know, be helpful if they are documented in both the proposals, right? What is the backward compatibility? Do we need it on both sides? What kind of implementation assumptions need to be factored in? If they are captured, then that will help in a better analysis.
F: Two points. One: I agree with Tony Li that if we could adapt, you know, RWIN dynamically, that would be a good thing. I don't know how to do that; I think it's a very difficult problem, and I don't think there's anything that has been presented thus far that gives us a hint as to how this could be practically done. So I think that's a significant issue.

F: We don't live in a perfect world; we're going to have to come up with a solution that's practical. The second quick point I want to make — and it was just kind of hinted at — is that, for me, it's important to be able to work with legacy nodes, or nodes that are not optimized, and, as I understand the Rx-based proposal, it's heavily dependent upon the fact that people have optimized their PSNP response time. And I'm not arguing against optimizing PSNP response time.
A: ...word in a minute, but I do want to throw out there, because no one's mentioned it: I agree with you that people haven't talked about how, or where, to derive the information from. I personally don't believe it's so hard. The first thing that pops into my mind, every time we come back to this point, is the line card — the line card queue depth to the RP, right?
D: We're at 8:09, because I took less time for the status, but anyway, I was just going to say: I agree that that feedback is good, and, in this case, they're just using different feedback in the two cases. You know, in one case it's an R window, that's explicit; in the other...

D: ...the transmitter is taking note of the actual behavior of the receiver, as indicated by the acks that are sent, and I think that's the main difference. I think I also agree that, you know, there's a lot of differences in, you know, how you implement this. You can do things a lot better — whether you process every packet to completion, or you pull it off a queue and schedule it, so you do all the important stuff first. I mean, that's a key one.
A: So, just to recap: I think this has been very fruitful, and I think an interim would be a great idea, if people are willing to go to it and to do work for it — if there's time that people...
D: ...have, right. One thing we should take to the list is whether or not we want to do that — you know, what we should do about experimentation. We'll talk about it. Yeah.
A: Specifically about that one: you know, going back to the old days of the interop labs over in, you know, UNH or wherever, it would be great if we could get to an apples-to-apples comparison, right — or get close to one — you know, where we joined up the teams. And I guess, to that point also — I'm going over time, I shouldn't — there's room to combine these two things, but we'll have to take this discussion to the list, I guess.
D: ...Bruno and Guillaume, and Tony P, and everybody who participated. I have to admit, I'm working less on IGPs now — we're actually working on other stuff — so, even though I'm on the draft, I was less involved than the others.
M: All right, I'll keep it really crisp — so, just an update on where we stand with the flood reflection. Next one, please.

M: So, I think the last time we were showing something was IETF 106, so it's been a while. The draft has been adopted; there were one or two revs since, I think. The only... so: yes, it's implemented; deployment experience: no; interops: not aware of any other implementation — otherwise, you know, holler. The one change in the draft was that we've seen that the flood reflector should not advertise the attach bit, because we don't want, you know, this thing to attract L1 traffic towards L2.
M
We
got
temporary
code
points.
Let's
unify
that
on
a
nice
value,
that's
about
to
expire,
so
heads
up,
I
think
the
chairs
have
to
request
the
extension.
If
we
don't
get
that
stuff
out
of
the
document
as
oh
that's,
the
old
version
we're
showing
here.
Actually,
I
I
sent
an
updated
version
of
of
this
geograph
now
anyway.
So
as
far
as
we
see
it
could
be
last
called
there.
There
is
enough
information
need
to
implement.
As
far
as
we
see-
and
you
know,
the
discussion
of
the
list
didn't
bring
many
questions.
M: Some comments from Acee show that we may consider a better tunnel and no-tunnel section, operationally split — looking for opinions here. Yeah, and that's pretty much it. So, if people think that we need another rev, you know, with some better operational split for the tunnel/no-tunnel option, and maybe some more detail on the no-tunnel stuff, I'll buy it — pretty much everything is in the draft, as far as I see. Then we can rev another version, and that's about it.
D: I'll... I just didn't get out of the queue from the last one, so I could say something. Just to the working group: I'd like to get more people discussing this draft, since, you know, it's an experimental draft.

D: You know, along the same lines is TTZ. We have an implementation and everything, and I'd like to get some more discussion on it. I think we have to renew the code points, because I think even if we were to working-group-last-call it today, we wouldn't get it done before the code points expired.
A: Yeah, and I guess my comment — I can't remember if I actually made it on the list or not, to be honest, I'm so sorry — but the tunneling stuff was one of my sticking points too, with the hand-waving done at it: like, either pull it out, or do more than hand-wave.
M: The rationale is, you know, fairly complex; it comes from deployment experience. It will work without, but you just don't want to do it operationally, to be on the safer side. But, okay, so I'll split it up into two sections — operational tunnels and no tunnels — and, you know, give it a little bit more explanation: polish it, pull it together from the different places in the draft where it's mentioned. Okay.
N: Yeah, so I will present the updates to the flexible algorithms bandwidth, delay, metrics and constraints draft. Next slide, please.

N: So, the changes from the last revision are: we added a section on applicability to the Flex-Algo multi-area case. So, basically, nothing new — Flex-Algo has this Flex-Algo prefix metric.
N
So
we
just
clarified
that
when
flexalgo
is
using
a
generic
metric,
what
goes
in
fapm
is
a
is
a
metric
that
the
flexalgo
is
using
so
and
that
metric
could
be
a
generic
metric,
basically
no
change
to
the
protocol
procedures.
Just
a
clarification
second
point
is
changes
related
to
max
link
bandwidth
in
ospf,
so
to
compute
the
metric.
N
We are using maximum link bandwidth in flex algo, and this maximum link bandwidth in OSPF is advertised as an application-independent attribute; it doesn't get advertised in ASLA. So the draft was incorrectly pointing to ASLA, and that change was made. This change also needs clarification in the flex algo draft, because the flex algo draft says all the attributes must be used from ASLA, whereas in OSPF this particular attribute is not advertised in ASLA at all. So the flex algo draft needs clarification; I have sent some proposed text for that and request the working group to take a look at it.
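The draft under discussion derives a metric from the maximum link bandwidth. How exactly the metric is computed is not stated here; as a purely illustrative assumption, this sketch uses the classic reference-bandwidth formula (reference divided by link bandwidth, floored at 1):

```python
# Illustrative only: the flex algo bandwidth metric computation is not
# specified in this discussion; this uses the familiar reference-bandwidth
# style formula as a stand-in.
def bandwidth_metric(max_link_bw_bps, reference_bw_bps=100_000_000_000):
    """Higher-bandwidth links get a lower additive metric; minimum 1."""
    return max(1, reference_bw_bps // max_link_bw_bps)
```

With a 100 Gbps reference, a 10 Gbps link would get metric 10 and anything at or above the reference gets the floor of 1.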
N
It's just a recap of what the generic metric is. It has a concept of metric type and value: instead of defining a delay metric, a TE metric, a bandwidth metric, and a series of other metrics, this is a generalization where the TLV carries a metric type and a value. There are two parts: metric types can be standardized, or they can be user defined. The user-defined ones are available for an operator to configure and use as desired, and don't need an IANA code point allocation.
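As a rough sketch of the type-plus-value idea just described: one sub-TLV carries any metric as a (metric type, value) pair. The code point, field widths, and user-defined range below are illustrative assumptions, not the draft's actual encoding:

```python
import struct

# Hypothetical wire layout (an assumption, not the draft's exact encoding):
# 1-octet sub-TLV type, 1-octet length, 1-octet metric type, 4-octet value.
GENERIC_METRIC_SUBTLV = 200           # placeholder, not an IANA code point
USER_DEFINED_RANGE = range(128, 256)  # assumed user-defined metric types

def encode_generic_metric(metric_type, value):
    """Encode one generic metric sub-TLV: a (type, value) pair instead of
    a separate fixed-purpose TLV per metric (delay, TE, bandwidth, ...)."""
    if not 0 <= metric_type <= 255:
        raise ValueError("metric type is one octet")
    body = struct.pack("!BI", metric_type, value)
    return struct.pack("!BB", GENERIC_METRIC_SUBTLV, len(body)) + body

def decode_generic_metric(blob):
    """Inverse of encode_generic_metric; returns (metric_type, value)."""
    _t, length = struct.unpack("!BB", blob[:2])
    return struct.unpack("!BI", blob[2:2 + length])
```

The point of the generalization is visible here: adding a new metric means choosing a new metric-type value, not defining a new TLV.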
N
So the concept of the generic metric is that any application, such as flex algo, SR-TE, RSVP or LFA, can use it; it can also get carried in BGP and accumulated across multiple domains, and it has an inbuilt ability to advertise different metrics for different applications. Right now the draft has this generic metric sub-TLV in TLV 22 and the OSPF extended link TLV, and we have a discussion going on on the mailing list on whether it is to be advertised under ASLA.
N
Next slide, please. Yeah, so right now the generic metric has code points only for the link metric; that is, the generic metric is advertised as a link metric. We have not taken any code points for prefixes, and we wanted to see if there are any use cases that would require a generic metric to be advertised for prefixes as well. For flex algo this is not required, because flex algo has already defined a per-flex-algo prefix metric, and that gets used in the prefix computation for flex algo; even for LFA computations on a per-flex-algo basis we would use the FAPM prefix metric, so that is covered. The author group is thinking about whether we would need the generic metric type for prefixes, and one of the use cases could be, let's say in SR-TE: say we have red LSPs using metric type 128 and blue LSPs using metric type 129, and we have two remote PEs, PE1 and PE2, and you want to be able to say that PE1 is the primary for traffic going on red LSPs and PE2 is the primary for traffic going on blue LSPs. You could probably do that by advertising the loopback prefix metric for metric type 128 as a lower metric on PE1 and a higher one on PE2, but this can probably be achieved in other ways also.
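The red/blue example above boils down to per-metric-type endpoint selection. A minimal sketch, with illustrative PE names and metric values chosen so PE1 wins for red and PE2 for blue:

```python
RED, BLUE = 128, 129  # the user-defined metric types from the example

# Hypothetical per-prefix metrics advertised by each PE's loopback:
# lower metric for the "color" that PE should be primary for.
prefix_metrics = {
    "PE1": {RED: 10, BLUE: 100},  # PE1 preferred for red LSP traffic
    "PE2": {RED: 100, BLUE: 10},  # PE2 preferred for blue LSP traffic
}

def primary_endpoint(metric_type):
    """Pick the endpoint with the lowest metric for the given metric type."""
    return min(prefix_metrics, key=lambda pe: prefix_metrics[pe][metric_type])
```

As the speaker notes, the same steering could be achieved by other means; this only shows what a prefix-level generic metric would buy.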
N
Yeah, we request review and comments, and once we close on these two aspects that are currently under discussion, we would probably also need an early code point allocation. That's all.
A
Okay, we do have a little time for discussion. There has been a lot of list discussion as well, which is good. I will say that to be ready for this meeting I went back and reread all of the email, and I've talked with Acee; we're not making a rough consensus call now, but it is definitely looking like that.
A
Also, this thing is going the slow way, maybe just in the future, because there has been some public airing; like you mentioned, the author group is in discussions.
L
My comment was regarding the need for a generic metric at the prefix level, especially for the SR-TE use case that you mentioned.
L
Normally the SR-TE paths would be set up to the node, right, not to a specific prefix, and that is achieved by the generic metric on the link. So I'm not really sure we need it for that, and the same applies to RSVP-TE as well. I'm not really sure we need a generic metric at the prefix level for those two applications. I would suggest that when, or if, we do have an application in the future that needs it, it can always be added at that point in time.
N
So I probably wasn't very clear about what I meant by the SR-TE LSPs to the remote endpoint. I can explain that on the list.
A
Okay, we still have a couple of minutes. Did you have any questions for the chairs?
D
I think it's kind of an end run, and if we can't get off the dime on this, maybe it should be taken out and we go to a bandwidth metric or something for these bandwidth constraints, done in a separate proposal. Because it's generic; I saw that for OSPF you said you could put it in the extended link attributes, or you could put it back in the TE LSA.
N
I don't know what your concerns are; I'm not understanding what your concerns are. For OSPF, are you saying that we should never have it in TE LSAs, or what's the concern?
N
So there is no proposal to use it from the TE LSA for flex algo.
K
Hi, so there's no end run going on here. In an earlier draft we had it as a bandwidth metric, and we had some definitions in there about how the bandwidth should be defined, but in looking at it we quickly came to the conclusion that it's a purely local definition anyway, and there's not a whole lot of reason to mandate that a particular network operator use a particular algorithm for computing what the metric is; they can do whatever they want there.
O
So, questioning the words "end run": the early statement about this was that it was a violation of RFC 8919, and we pretty quickly agreed that it didn't violate 8919 as the words are written on the page. Maybe it conflicts with the intent of the authors, but not with the words as written on the page.
A
Yeah, to that point, I didn't find that very compelling, but you're right, if we need an erratum or...
B
If you don't mind a quick break-in: if you send an erratum up, basically what I've got to do to confirm it is go and look at the mailing list traffic and what the documented consensus of the working group was. If it's written down one way and it looks like it just got written down wrong, then it's an erratum that gets confirmed; if it's not written down that way and it's just people's opinions, then the erratum doesn't get confirmed.
P
Perfect. So, a little bit of background: typically, what we do in the link-state protocols is distribute state.
P
So the initial use case that we had in mind when we started this work was basically the case where summarization is being used in the network to address scale, in terms of the number of prefixes advertised between areas or domains. The result of the summarization is that some of the other protocols that rely on the most specific routes for their convergence basically lose that. We typically use, for example, BGP PIC, where the loopback of the remote PE, if it becomes unreachable, triggers BGP PIC edge; but with summarization we don't have that visibility.
P
So we would like to get some type of notification outside of the local area or domain when some of these components of the summarized routes go away, and we want to do it in a way that doesn't leave any persistent state in the link state database.
P
We need reliability, so there is some retransmission, but the retransmissions are limited to a very short period of time; we don't want the retransmission to actually extend the life of a pulse. And this information is never sent as part of the database sync, whether that is an adjacency bring-up or a graceful restart or whatever; these are only sent over the existing adjacencies which are in up state.
P
So we define a couple of new PDUs, which we call the flooding scoped pulse LSP and the flooding scoped pulse PSNP. These are based on RFC 7356, which defines the scoped LSPs. We support all the flooding scopes that are currently defined. There is no need for a CSNP because, as I said, we never exchange this as part of the database exchange. And obviously these are new PDUs, so this is not backward compatible; this is completely something new.
P
So here's just a comparison of what the pulse LSP would look like. It's similar in nature, but we removed a bunch of the fields, and in particular we don't need the remaining lifetime, because a pulse doesn't really have a lifetime; the lifetime is, by definition, use and then discard.
P
These are flooded only on the circuits that are participating in a given scope. The pulse LSPs are not retained beyond the time needed to flood them and eventually use them, if you want to use them. We are limiting their retransmissions to a certain number of retries; we propose three by default, to have some reliability, but after that we basically don't continue.
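The bounded-retry behavior just described can be sketched as follows; the `send` and `acked` callables stand in for transmitting a pulse LSP and checking for a pulse PSNP, and are illustrative, not from the draft:

```python
DEFAULT_MAX_RETRIES = 3  # the proposed default: some reliability, then give up

def transmit_pulse(send, acked, max_retries=DEFAULT_MAX_RETRIES):
    """Send a pulse LSP until acknowledged or the retry budget is spent.
    Unlike a regular LSP, the pulse is never refreshed or re-flooded after
    this: retransmission must not extend the life of a pulse."""
    for attempt in range(1, max_retries + 1):
        send()
        if acked():
            return attempt  # acknowledged on this attempt
    return None  # best effort: give up and discard the state

# Toy usage: the acknowledgement arrives after the second transmission.
tries = {"n": 0}
result = transmit_pulse(send=lambda: tries.__setitem__("n", tries["n"] + 1),
                        acked=lambda: tries["n"] >= 2)
```

After the retry budget is exhausted the pulse is simply dropped, matching the use-and-discard semantics described for pulse LSPs.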
P
Okay, next slide, please. In terms of generating these: the originator of the pulse LSP should, or must, remember the last sequence number it used for a given pulse LSP, because we are still using the sequence number to compare LSPs in terms of which one is newer. We also propose that the originator use the next LSP ID each time new pulse information is advertised.
P
This is basically to prevent new pulses from refreshing the state of older pulses which have been sent previously. We don't want to refresh them; we want to keep them separate and let them die, independent of any other new pulses that need to be sent after that. So, next slide, please. In terms of acknowledging the pulse LSPs:
P
If you receive a newer or the same version of the pulse LSP, it will be acknowledged using the pulse PSNPs. If you receive something that is older, we basically don't do anything; we just drop it, because it is going to time out anyway.
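A minimal sketch of the receive rule just stated, comparing by sequence number (function and return values are illustrative labels, not draft terminology):

```python
def on_pulse_lsp(rx_seq, have_seq):
    """Receiver logic sketch: acknowledge newer-or-equal pulses with a
    pulse PSNP; silently drop older ones, since they will simply time
    out at the originator anyway."""
    if have_seq is None or rx_seq >= have_seq:
        return "ack"   # send a pulse PSNP (and flood onward in scope)
    return "drop"      # stale pulse: no acknowledgement, no action
```

Note the asymmetry with regular LSP handling: there is no CSNP-driven resynchronization here, only this per-pulse comparison.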
P
Okay, next slide, please. Basically, this is the TLV that we are defining to use in the pulse LSPs; it's what we call the summary component reachability loss TLV. Basically, when you do summarization, whether on an L1/L2 router or on an IS-IS ASBR, when a reachable component of the summary prefix is lost in the area from which the summarization is being done, we generate this TLV.
P
This TLV is going to be propagated across the network; it can be leaked. Obviously we don't want these TLVs to be leaked back into the area from which the summarization is being done, so there is a condition that avoids that. Then, when the receiver, say an ingress PE, receives this, it may pass the notification to BGP, and BGP may trigger PIC. Obviously, the procedure for what is done and how is a local matter.
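The trigger condition at the summarizing router can be sketched simply: a lost prefix qualifies for a summary component reachability loss entry only if it falls inside an advertised summary. This is an illustrative sketch of that check, not the draft's procedure:

```python
import ipaddress

def component_failures(summary, lost_prefixes):
    """At the router doing summarization: of the prefixes that just became
    unreachable in the source area, return those covered by the advertised
    summary; each would be reported in a pulse LSP."""
    net = ipaddress.ip_network(summary)
    return [p for p in lost_prefixes
            if ipaddress.ip_network(p).subnet_of(net)]
```

A prefix outside the summary needs no pulse; its loss was never hidden by summarization in the first place.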
A
Are you going to run into patent IPR issues with Snapchat? Just kidding, Acee.
D
It's basically, now, admittedly, most of the time it's going to succeed, but this is really like a best effort delivery.
D
Yeah, it's limited, but it's kind of like the next one, which we're going to see later; it's kind of temporal, based on when you lost the components of the summary that you're going to signal. There are two parts to it: the mechanism, and then the application of it to trigger the BGP reconvergence or reroute, right.
Q
Very interesting talk. My understanding is that you define generic procedures and encodings for distributing network events.
A
It may not come through the second time either; your mic is sort of going in and out, and it's kind of hard to hear. Maybe you can post your question to the chat room and we could echo it back, maybe even after the next presentation, because I think we will have some slack.
A
Okay, well, interesting new work; let's get some discussion on it on the list. I do have a question: this is related to the next presentation, is that right?
P
It's the prefix unreachability stuff, yeah, that's it.
G
All right, so my name is Gyan Mishra, with Verizon, and I'm providing an update on the prefix unreachable announcement, PUA. As Tony, or sorry, Peter, gave in the update on the last draft we went through, this is a very similar mechanism, and we've had quite a bit of discussion on it on the mailing list. It works very similarly; the mechanism is completely different, but it likewise works on the component.
G
So if you have a transit core which you have perhaps broken up into areas, either OSPF areas or IS-IS levels, then you're doing summarization, and you're summarizing the next-hop attribute, but for the edges, your egress or ingress PEs, there are components. So when you're summarizing all the /32s instead of having to flood them, and those components go away, this arises from a link or node failure.
G
What this draft does, in summary, is basically detect that via an advertisement, a control plane update, and then force the control plane to converge. It's an update to the control plane, so the control plane converges; there's no change to the data plane, and the data plane follows the control plane and converges quickly as well, just based on a component failure. So it is like an event notification, similar to the last draft, with a completely different mechanism, but from that detection it forces a convergence. Next slide, thank you.
G
So, the PUA mechanism: basically, upon a node or link failure for a prefix that's within an advertised summary from an ABR, within the domain from, let's say, the ingress PEs towards the egress PEs, with the ABR performing the summarization towards an egress point, we generate a new summary advertisement, but with the failed prefix associated with it, and it sets that prefix's information to null zero. That's really the capability: it's actually flooding that. So when that link or node goes down, where the next-hop attribute goes down, that prefix is set to null, and that forces the convergence to happen at the control plane.
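What the ABR ends up advertising under this scheme, as described, is the unchanged summary plus an explicit unreachable entry for the failed component. The field names in this sketch are illustrative, not the draft's encoding:

```python
# Illustrative sketch of the PUA behavior described above: the covering
# summary stays up, and a more specific "unreachable" (null) entry for the
# failed component is flooded so other areas learn of the loss despite
# summarization.
def pua_advertisements(summary, failed_component):
    return [
        {"prefix": summary, "reachable": True},            # unchanged summary
        {"prefix": failed_component, "reachable": False},  # nulled component
    ]

ads = pua_advertisements("10.0.0.0/16", "10.0.1.1/32")
```

The negative, more specific entry is what lets receivers react to a single component failure without the summary itself changing.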
G
And then the data plane follows that. For IS-IS, what we do is use the IPv4/IPv6 source router ID sub-TLV defined in RFC 7794, and with OSPF we use the prefix originator sub-TLV defined in the OSPF prefix originator draft. The flooding mechanism is exactly the same; there's no change in the flooding mechanism for either OSPF or IS-IS. Next slide, please.
G
So, the update action based on the PUA message: for the node failure scenario, when a node within an area receives the PUA message from all the ABRs, it will trigger the switchover. The switchover is basically on that component prefix. So let's say that link or node goes down, that egress PE node goes down; say you're doing next-hop-self from the PE and that loopback zero goes down, the node is down. Now that prefix is set to null zero, and you're forcing that switchover to happen immediately, despite the summarization. So now you're not waiting for convergence.
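The receiver-side switchover described above can be sketched as next-hop pruning: on a PUA for an egress PE loopback, the control plane marks that /32 unreachable even though the covering summary still exists, so service routes switch to the surviving PE at once. The RIB structure and addresses here are illustrative:

```python
def on_pua(rib, failed_loopback):
    """Control-plane reaction sketch: mark the component prefix
    unreachable, then prune service-route next hops that resolved over
    it, triggering an immediate switchover to the alternate path."""
    rib["unreachable"].add(failed_loopback)
    for route, next_hops in rib["paths"].items():
        rib["paths"][route] = [nh for nh in next_hops
                               if nh not in rib["unreachable"]]

# PE1's loopback fails; traffic moves to PE2 without waiting for normal
# convergence (addresses are illustrative).
rib = {"unreachable": set(),
       "paths": {"svc": ["10.0.1.1/32", "10.0.2.2/32"]}}
on_pua(rib, "10.0.1.1/32")
```

This mirrors the point made in the talk: the data plane itself is unchanged; it simply follows the control plane's pruned next hops.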
G
Convergence happens instantly onto the alternate path. For link failure or network partition scenarios, where only some of the ABRs can reach the failed prefix, the ABRs that can reach the prefix should advertise a specific route to the PUA prefix. It's a similar kind of negative advertisement; I would mention it's similar but different, but I guess you can draw some analogies to the RIFT negative advertisement. Next slide, please, thanks. So, conclusions and further action: we have had a lot of discussions, and we've presented this at the IETF a few times. I think the goal here is really trying to get more clarity on the process and procedure, how the PUA mechanism works, and what actually is changing.
G
So with the changes, it's really control plane; there's no change in the data plane. It's really a detection mechanism, detecting the component prefix, very similar to the previous draft we went through although the detection mechanism differs, and from that detection it forces the control plane to converge, and then the forwarding plane follows that up. Because it's a control plane update, all the nodes within the area would require an upgrade, since they're all acting on the failover capability to set that component prefix to null zero; it's to force the control plane to converge. So it does require an upgrade, and that's something that we do have to update in the draft; exactly, I think you said you noticed that.
G
So, on what this proposal is to solve: this is something as well that I think was mentioned, probably a few times, and maybe by others, and it's something that we will fix in the draft. The passive interface is an implementation-specific command, so we will replace any references to the passive command and change that to stub.
G
So thanks for those comments, and we'll get that fixed. What this draft does, and the goal, is to cover any use cases where you have a stub link. The passive command is implementation specific, but when you have a stub link with no neighbors, it's really about tracking that interface, that link that doesn't have any neighbors; it could be an interface within the data center, or an AS boundary.
G
And then I guess there's a new use case with the edge compute scenario, with 5G edge compute, the draft linked there, where we don't have any neighbors; it's an interface that's made passive with no neighbors, so it's a stub link. So, being able to track that and update either a BGP controller or a PCE controller, to be able to differentiate interfaces that are passive from interfaces that have neighbors, via BGP-LS. Next slide.
G
So, the OSPFv2 extension: I think we had a variety of different ways we were going to do this. As an update, initially we had it in the base OSPFv2, and that was fixed, so now it's in the TLV-based OSPFv2 encoding. And I think initially we had it using the prefix LSA, and then we changed that, because the passive interface being link-specific was really what we wanted: topological rather than prefix-based.
G
So we use the OSPFv2 extended link opaque LSA rather than a prefix LSA, and the same applies for OSPFv3 and IS-IS. What we propose is defining a new OSPFv2 extended stub link TLV; this has been updated in version 8, with the link type and metric fields in this newly defined TLV.
G
So, for example, link type one is AS boundary, two is loopback, three is LAN, and then there are a variety of other future extensions that we can add to that. Next slide, please. For OSPFv3 and IS-IS it's done similarly: with OSPFv3 we define a router stub link TLV to describe the router stub interface, and the TLV would be contained within the E-Router LSA; for IS-IS it would be a new top-level stub link TLV. Then there are similar options for the various link types that would be supported; I think for now in the draft we would have AS boundary, loopback and LAN, and then future extensibility that we can add to the draft. Next slide.
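The stub link TLV described above, with its link type and metric fields, can be sketched as follows. The TLV code point and exact field widths are illustrative assumptions; only the link-type values one, two, and three are taken from the talk:

```python
import struct

# Link-type values as described in the talk; the TLV code point is a
# placeholder, not an IANA assignment.
AS_BOUNDARY, LOOPBACK, LAN = 1, 2, 3
STUB_LINK_TLV = 250  # placeholder code point

def encode_stub_link(link_type, metric):
    """Sketch of a stub link TLV carrying a link type and a metric, so a
    BGP-LS or PCE controller can distinguish passive/stub interfaces
    from links that have neighbors. Assumed layout: 1-octet TLV type,
    1-octet length, 1-octet link type, 2-octet metric."""
    body = struct.pack("!BH", link_type, metric)
    return struct.pack("!BB", STUB_LINK_TLV, len(body)) + body
```

Future link types would extend the enumeration without changing the TLV shape, which matches the extensibility point made above.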
G
Yes, so there is a newly defined sub-TLV to describe the IP address information that's associated with the passive interface; the sub-object is defined within RFC 3209, the RSVP-TE extensions. And we propose one independently registered IANA code point for the stub link attribute, which can be referenced by OSPFv2, OSPFv3 and IS-IS, so all three of those would share one IANA allocation. Next slide, please.
G
So, for further plans: any comments? I think we have had a lot of comments; we'll probably spin this up on the mailing list again and get more feedback. We appreciate all the feedback that we've gotten so far on this draft and the PUA draft, from the chairs as well as folks within the working group; much appreciated.
G
We've added a new co-author to the draft, with ZTE, Zintong Son, welcome. And I guess for next steps we'd like to ask for an adoption call for this draft, if folks feel that this draft is ready for one.
D
I think, again, my comment is that I don't think we need this; we already have links for things that are not topologically significant to the IGPs, and we have prefixes for local addresses, and I don't think we need it for addresses that we're advertising as local or inter-area prefixes.
D
I don't think we need this new stub link construct explicitly. You could take this stub link type, or whatever you want to call it, and add it to a prefix if you really needed it. Now, the IGP certainly doesn't need it, so you want us to carry this solely for other applications.
D
I mean, other use cases; and for one TLV, I know we've done this before, where we have optional TLVs, and we definitely have sub-TLVs that we carry for prefixes that aren't used explicitly by the IGP, so that's kind of a moot point. But to invent a new construct, the stub link, and then realize, oh, I need the address of this as well...
A
Yeah, and before we run out of time, I wanted to throw out there that we do have "adopt as a working group document" on the slide, so we should speak to that. Do you think that...
D
I mean, actually you're not advertising the prefix, just the address; but advertising it two different ways would be a good indication that this is not the right way to encode it.
A
Okay, we're out of time, so I just wanted to give some final thoughts. I think the discussion at the beginning of this was really good, and I think that if people are willing to and have time to put more work into it, as Bruno suggested, we should maybe look at doing an interim. Yeah, that's all I had to say.
A
Hello. Okay, great; well, we'll have to see whether in the next two weeks we're going to have another virtual interim or not, probably, but thanks everybody for participating, and see you next time.