From YouTube: IETF-LSR-20210929-1400
Description
LSR meeting session at IETF
2021/09/29 1400
https://datatracker.ietf.org/meeting//proceedings/
D: At... when it... it's 10, I mean, seven or...
B: Yeah, we could flip the order, right, too, so that Bruno could just present the joint thing and then do his presentation, and then...
D: Yeah, yeah, this is like one of those B TV series where somebody goes missing, right.
B: Okay, well, let's go ahead and get started, I guess.
B: This is the standard Note Well. It just says that by participating you agree to follow the IETF processes, and any contribution you make is covered. Well, you can read what's on the screen.
B: And our agenda is going to be slightly modified, because Les is not here yet, so we're going to start with the draft consensus, as Acee called it; we're going to start with the punch line, then move on to Bruno's congestion control / flow control results from testing, then go back to Les, hopefully, and then we're going to have a wrap-up with the discussion and next steps.
B: Does anybody want to bash the agenda at all?
H: Okay, thank you for the slides. So a subset of the authors of both drafts met on Monday to discuss where the consensus is on the points we were in disagreement on.
H: That algorithm is implemented on the transmit side, the sender side only, so we don't need consistency for interop, and hence anyone can do whatever they want. So we don't need a real specification for that part. Also, contrary to TCP, we don't have to be fair with other IS-IS speakers, because we all fall under the same autonomous system and we're all saying the same thing. This is the...
H: So the draft could also discuss some algorithm, which would be non-normative, so provided as an illustration or example.
H: So we plan, as a target, to have a combined draft by the next IETF.
H: Okay, so, in short, we are not going to define any algorithm, whatever it is.
B: Does anybody have any questions? Go ahead and pop into the queue, or...
D: I was just going to say that hopefully we won't be relegated to the five authors, although I'll volunteer to go to contributor on the combined draft.
D: But I think this is a case where we should be able to let the main authors figure that out, and I know we have in total probably eight or nine people. So even if I get off, that's still eight people, which is over the limit, but hopefully we won't be relegated.
D: I think, yeah, Peter or Mark were involved more with the testing than anybody, so they'd be the best ones.
B: All right, well, Bruno, why don't we move on to your presentation then? It doesn't seem like there's a lot of questions on this, so I will load up the next one.
H: I have PowerPoint animations, so...
H: One is the receiver, obviously CPU limits and input-queue limits, and the second problem space is everything else, so mostly the communication from one control plane to another control plane. We have below a very simplified model; obviously it's implementation dependent, so we're not trying to discuss that.
H: So one is flow control, which is a control loop between a single sender and a single receiver, so really end to end between each IS-IS process, and the goal is to prevent the sender from overwhelming the receiver's control plane.
H: For the next one, control-plane congestion, we believe, I mean in our draft, we believe that flow control is useful, and it's a key difference.
H: So for draft-decraene we propose to use flow control, with a receive window, to handle the end-to-end flow control, and for draft-ginsberg, I believe, they propose to use the same congestion control to handle everything, including the control-plane flow control.
H: So again, it's control plane only. We want to avoid losing LSPs in the control plane of the receiver for any control-plane reason. It may be that we have reached the maximum throughput of the receiver as per its CPU, or the receiver is pausing because it's doing something else, for example BGP or SPF computations, or whatever reason. And so we're proposing to use a very classic receive window algorithm.
H: So the receiver advertises a receive window, which is the number of LSPs that it can store in its control plane, and the sender computes the number of unacknowledged LSPs, which we call UnAck. In short, the sender pauses when UnAck is equal to or greater than RWin. So it's very simple.
H: We do need a new specification for interop. We need two things: we need to be able to advertise the receive window, in either the Hello PDU or the PSNP, and also we need to be able to have faster PSNPs, to get faster feedback, in order to have a faster feedback loop.
H: In July we had some questions, so: the receive window can be static or dynamic, and there was some question about how it works if the window is static. So I have one slide to illustrate. On the left I have the sender of LSPs; on the right side I have the receiver.
H: And when it has sent 10 LSPs, it has to pause. On the receiver side, well, it has to store those ten LSPs.
H: So the sender, on receiving this PSNP acknowledging two LSPs, can compute that the number of unacked LSPs is now eight, so it can send two more LSPs, and it does exactly that. At some point the receiver chooses to process some more LSPs, let's say three, and, same movie, it acknowledges these LSPs in a PSNP. On receiving this message, the sender computes that the number of unacked LSPs is now seven, so it can send three more LSPs, and so on and so forth.
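A minimal sketch, in Python, of the sender-side rule Bruno describes: stop transmitting whenever the count of unacknowledged LSPs reaches the advertised receive window, and resume as PSNPs acknowledge LSPs. The names (rwin, unacked, send_lsp, wait_for_psnp) are illustrative only, not taken from either draft.

    from collections import deque

    def flood_with_rwin(pending_lsps, rwin, send_lsp, wait_for_psnp):
        # Flood toward one neighbor without ever having more than
        # `rwin` LSPs outstanding (unacknowledged).
        queue = deque(pending_lsps)
        unacked = 0                          # "UnAck": sent but not yet acknowledged
        while queue or unacked:
            # Send while UnAck < RWin and there is something left to send.
            while queue and unacked < rwin:
                send_lsp(queue.popleft())
                unacked += 1
            # Window full (or queue drained): pause until a PSNP acknowledges LSPs.
            if unacked:
                unacked = max(0, unacked - wait_for_psnp())

With rwin set to 10, this reproduces the slide's sequence: ten LSPs go out, the sender pauses, two more are sent when a PSNP acknowledges two, three more when three are acknowledged, and so on.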
H: So, in summary, we believe RWin with flow control provides interesting properties, by design.
H: The sender cannot overwhelm the receiver's control plane, whether it's CPU or input queue, because the sender has to wait for the receiver to acknowledge. Second point: the sender rate is completely controlled by the receiver, so it's aligned with its capability or performance, and again, if the receiver pauses for any reason, the sender will have to pause, same duration, same numbers.
H: ...a processing capability of 2000 LSPs per second, and that's all. The second option is we are limited by the IO transmission between control planes, so we have some loss of LSPs on the way, or some delays introduced, or whatever limitation. A third limit, which is introduced by RWin, by flow control, is to be RTT bound, because the rate that we can achieve is limited by the RTT, and by RTT I mean the control-plane RTT.
H: So let's have a look at the three limitations.
H: So the first one, IO bound. In short, it's not handled by flow control; in detail we have some feedback loops, but let's skip the details. I think we all agree that for that we have to rely on a congestion control algorithm, and we have one example in one draft, we have another example in the other draft, and anyone is free to pick another one, their favorite.
H: Also, any sender is free to pick its favorite shaping option, so you're free to send N LSPs every S milliseconds; it's purely local to the sender.
H: So I have a summary of three cases here. We're using 10 senders against one receiver, and we found that the overall capacity of the receiver is to be able to process 2000 LSPs per second, so it also needs to be divided by the number of senders. You may find that small or not, but it's a very old CPU, it's 10 years old, and we had zero loss of LSPs.
H: Even though we had competition between senders: clearly each sender is trying to send as many LSPs as possible and there is no coordination between senders, so some are trying to force more packets, and it's a very dynamic adaptation between senders.
H: We also see that we are CPU bound, so changing RWin does not change anything, because we have already reached the maximum capacity of the receiver.
H: So there have been some test reports, a few presented at IETF 111, and it shows that at least...
H: Another one is that all adjacencies, all communications, advertise the same LSDB, the same LSPs, so we don't really need to be fair: it's less of a big deal to drop some LSPs from one, and also it's not a big deal if one speaker has a high RTT; it's okay if other senders have a lower RTT, so we will be faster with the senders with a lower RTT.
H: Because there is some processing of LSPs before acknowledging, compared to doing the work in the kernel.
H: Another drawback is that the time to acknowledge will be longer for IS-IS, typically, because IS-IS does some checks before acknowledging, compared to TCP, where it's done in the kernel.
H: So that's the cost of the flow control proposal, but we believe it's a good trade-off, in our opinion, but we do need the receiver to send faster PSNPs.
H: We believe flow control provides key benefits, which is: when the receiver is CPU bound, we believe the sending behavior is optimal, so no loss of LSPs, we have the exact max rate supported by the receiver, and any pause on the receiver doing something else translates into the same pause on the sender.
H: There is a key cost, which is that we do need a change on the receiver, which is to ack faster than what is currently required by the ISO specification, and the second one is that the rate may be reduced for neighbors which have a high-RTT link.
H: So the last slide, next steps. We believe that in many cases the receiver is CPU bound rather than RTT bound, so we believe RWin to be very applicable, especially as we have lots of links which have a low RTT: links within cities, within a region, within a country if your country is of a small size, so typically in Europe.
H: So from our point of view we see no reason to forbid the ability to do flow control with RWin, but for that we do need a code point to advertise RWin, and hence we are going to ask for working group adoption, either of this draft or, now, of the combined draft we will move to, because we did ask for a code point from the ISO experts and there were issues, because the document was not a working group document. So we have an implementation on the way...
E: Yeah, I just had one: how did you determine that most implementations were CPU bound, most IS-IS implementations were CPU bound? I mean, I have much more...
D: ...experience, probably by a couple orders of magnitude, with OSPF implementations than IS-IS, and I never found them to be more IO bound. I know OSPF deals with much smaller units, in LSAs as opposed to LSPs; usually even the router LSAs are smaller. But how did you make that determination? That's just what I'm asking.
H: Okay, so, good question. So clearly for our implementation, which was Free Range Routing on a server...
H: We were able to track what was the exact reason for the performance, and it was not drops of LSPs on the IO path; it was just the sender being busy.
H: We found, and we can be wrong, that with current implementations one big limitation is that the receiver is waiting to acknowledge the first LSPs, and that is just too late; it is too late in the sense that it has a lot of LSPs in its backlog.
H: I was just wondering, thanks. But again, one correction: the limit is not when you have a single adjacency, it's when you have multiple adjacencies, so you have multiple senders against a single receiver.
B: So I have a question, a process question: you want to do a working group adoption; how is this affected by the idea of doing a new draft? I mean, it seems like it kind of doesn't make sense to do a code point allocation only to do a combined draft after that that brings this in.
H: Yes, I agree; the slides were done before the meeting on Monday.
H: I don't know, but I'm afraid you will have less content in the combined draft.
H: Because we restrict it to the common agreement, so yes, maybe less. Some people will be happy with the lack of detail, maybe, but we'll see, we'll see.
B: All right, anyone else?
B: All right, Bruno, well, thanks a lot, and let's move on to Les.
D: I was just going to make a point that, looking at the participants, we have a lot of the people that are going to have an opinion on this anyway in the meeting, with the exception of possibly some of the people in the China time zones, I...
B: Okay, all right! Well, I'm going to share Les's presentation. Can you hear me? I can hear you, Les. Let me share your presentation, unless you have it handy.
C: Okay. First, my apologies for being late; I'll have to speak to my secretary, who messed up my calendar.
C: So one of the additional tests that we've done now is we've tested where the receiver is not dropping, but it's just processing LSPs at a slower pace, and you'll see those results as we get to them. Next slide, please.
C: So this is the same slide we presented previously, just kind of an overview of the algorithm guidelines.
C: We understand that flooding burst durations are not long lived. For example, if you have a node with two thousand neighbors and it goes down, you have 2000 LSPs that need to be flooded. If we're flooding even at, shall we say, a modest increase over the current rate, at 300 LSPs per second, this is going to take roughly seven seconds. So if we're going to adapt, we need to adapt relatively quickly; otherwise the burst period will have gone by and we won't have been able to make any adjustments.
C: We understand that receiver performance can vary from time interval to time interval because of a variety of factors; we'll go into a little more detail later in the presentation. And what we want to be able to do, and I think this is certainly the same goal that Bruno has talked about, is we would like to minimize retransmissions, since the standard retransmission timer is five seconds.
C: So, our proof-of-concept algorithm that we've implemented in our testing: what we do is we track the rate of transmissions over a multi-second history and compare it to the rate of acknowledgements that we have received.
C: And in order to track accurately whether the receiver is keeping pace, we need to know what the PSNP delay is. By specification, ISO 10589 suggests two seconds; implementations today may be using two seconds, they may be using one second, they may be using something much shorter. Knowing what the acknowledgement delay of the neighbor is, is important to the accuracy of the algorithm.
C: Currently we're using this as a configurable knob. I think in the combined draft that we're talking about generating, this is a candidate for being advertised by the receiver, so you won't have to configure it; the receiver can actually tell you what it's doing. So we've incorporated the neighbor-specific ack delay into the algorithm, and we don't really care why the receiver is not keeping pace: it could be because packets are lost by the transmitter, it could be they're lost on the receiver, they could be corrupted on the wire. There could be issues with the punt path to the control plane, there could be CPU issues; we don't care. We just track: is the receiver keeping pace with the rate that we're sending, and if it's not, then we're going to slow down. The term that we use in the presentation, the current LSP Tx max, is the current value that we're using to flood to a particular neighbor. Next slide, please.
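A rough sketch of the adaptation Les describes, assuming one periodic check per neighbor that compares LSPs sent against acknowledgements received over the recent history, after allowing for the neighbor's acknowledgement delay. The names and the step sizes are placeholders, not the proof-of-concept's actual parameters.

    def adjust_lsp_tx_max(sent_hist, acked_hist, hist_secs, cur_rate,
                          configured_max, ack_delay_secs, ramp_step=50):
        # sent_hist / acked_hist: LSPs sent and acknowledged over the last
        # hist_secs seconds toward this neighbor.
        # cur_rate: current LSP Tx max (LSPs/second); configured_max: ceiling.
        # ack_delay_secs: neighbor's PSNP/ack delay; LSPs sent within that
        # window cannot have been acknowledged yet, so discount them.
        ackable = max(0, sent_hist - cur_rate * ack_delay_secs)
        if acked_hist < ackable:
            # Receiver is not keeping pace: drop toward its observed ack rate.
            return max(1, min(cur_rate, acked_hist / hist_secs))
        # Receiver keeps pace: ramp back up gradually toward the configured max.
        return min(configured_max, cur_rate + ramp_step)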
C: We've optimized the PSNP delay on the receiver so that it's one millisecond, and we have test code on the receiver that manipulates the rate at which the receiver processes the incoming LSPs, and we have two modes. Like I said, in the original testing that we presented previously, the receiver simply dropped whatever was in the queue beyond the arbitrary processing rate that we had set up, but we've now tested with another option where the receiver doesn't drop the LSPs.
C: You know, if you send at a thousand LSPs per second, it takes a little under two seconds for this to complete, and so forth. Next slide.
C: So here's a test we did where the receiver, for whatever reason...
C: So in this test we had a processing rate of 100 LSPs per second on the receiver. If we're only sending at 33 LSPs per second, then the receiver can keep up; it still takes 66 seconds to get all of the 2000 LSPs exchanged, but we have no retransmissions. If we simply send at roughly 300 LSPs per second, but we don't use our optimized transmission scheme, so we're basically sending these every 33 milliseconds, or sorry, every three milliseconds I guess, we get done faster, but there are a lot of retransmissions because the receiver is not able to keep up.
C: The two lines that are highlighted in green are the new tests that we ran since IETF 111, where the receiver is not dropping the LSPs; it's just processing 100 per second and then, the next second, processing another 100. And what you see, in the case where we start with a rate of 300 LSPs per second, is that we adjust down to 100 LSPs per second, and we're able to react quickly enough that there are no retransmissions.
C: Most of those retransmissions are redundant, but of course the transmitter doesn't know, and it's just a matter of it taking a bit longer to adjust from a much higher rate to the much lower rate. I think, if you go to the next slide, we have graphical representations of this.
D: A clarifying point here: you've modified the receiver to be able to set the rate beyond which it will drop?
C: So what you have on the left-hand side is the transition from a 300 LSP per second rate to a 100 LSP per second rate, and you can see that we decrease rapidly and we're actually matching the consumption rate of the receiver in less than five seconds, and as a result of this there are no retransmissions. In the case on the right, we've already sent a thousand LSPs in the first second or two and they're not going to get acknowledged; it's going to take up to 10 seconds for all of those LSPs to be acknowledged.
C: So this is the same test, but the receiver now is not dropping the LSPs; it's simply buffering them in its input queue and it'll process them as it catches up.
C: The receiver has a consistent receive rate, which we've already adjusted to.
C: In other words, our target LSP Tx max is either 300 LSPs per second or 1000 LSPs per second, but based on previous performance we've already adjusted to the receiver rate, and we continue to send at the rates that the receiver is able to consume, and as a result it takes roughly 20 seconds to send the 2000 LSPs.
C: Because we ramp up more slowly, it takes us longer to achieve the maximum rate, so it actually takes two bursts: at the end of the first burst we're not yet at a thousand; it takes another burst of 2000 LSPs in order for us to realize that the receiver is actually capable of maintaining 1000 LSPs per second.
C: And, as you can see, we ramp up to 300 before we've sent all of the 2000 LSPs, but in the case on the right, where we're trying to ramp up to 1000, after the first burst of 2000 LSPs we've ramped up to, you know, roughly a little under 600 LSPs per second, and then when we trigger a second burst we'll continue to increase, since the receiver is able to consume at the faster rate, and we'll eventually get up to our configured maximum. Next slide, please.
C: So, our future plans: we've got a number of internal parameters that we can tune, and we're looking at, clearly, the case where there's a steep slowdown: we were sending at one thousand LSPs per second and, for whatever reason, the receiver is now only capable of 100.
C: So what you have here: you have a line card, and a router may have one or more line cards. Each line card has a set of interfaces, and packets arrive that are destined, you know, for local processing for whatever reason. In our case we're obviously interested in IS-IS packets, but, you know, there are OAM packets, there are BGP packets; there are many packets that are going to be punted to the control plane for processing.
C: So this is looking at the control plane. From the control plane we've now got an input queue, which is the input from the punt queue on all of the line cards.
C: Whatever has been punted, we need to take those packets and put them into another queue that is specific to a particular packet type; in our case we have an IS-IS input queue. Note at this point we have packets from all interfaces on all line cards, at least all the ones that are enabled for IS-IS, and we have all IS-IS packet types: we have hellos, we have SNPs, we have LSPs. We may be running one instance of IS-IS.
C: We may be running multiple instances of IS-IS, so we now need to separate out from this input queue those packets which are for IS-IS instance number one from those that are destined for IS-IS instance number two, and so on. Again, at this stage we still have all IS-IS packet types: hellos, SNPs, LSPs.
C: So the sum of all of this is that you have to look at, you know, what is the state of the receiver at a given moment? How can I determine whether I'm in a congested state or not? Well, I've got these policed input queues in the data plane, I've got the punt queue, I've got the control-plane input queues.
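Purely to make the stages Les enumerates concrete, a schematic of that receive path in Python, with illustrative names only (not any vendor's actual structures): per-line-card punt queues feed a control-plane input queue, which is demultiplexed into a per-protocol IS-IS queue and then into per-instance queues, and backlog can sit at any of these points.

    from collections import deque

    class ReceivePath:
        # Traces where a punted IS-IS PDU can queue up inside one router.
        def __init__(self, line_cards, isis_instances):
            self.punt_queues = {lc: deque() for lc in line_cards}        # rate-limited in hardware
            self.control_plane_input = deque()                           # merged punt traffic
            self.isis_input = deque()                                    # IS-IS only: hellos, SNPs, LSPs
            self.instance_input = {i: deque() for i in isis_instances}   # per IS-IS instance

        def queued_total(self):
            # The "input queue to IS-IS" alone does not show the whole backlog.
            return (sum(len(q) for q in self.punt_queues.values())
                    + len(self.control_plane_input)
                    + len(self.isis_input)
                    + sum(len(q) for q in self.instance_input.values()))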
C: In order for IS-IS to accurately signal its current state to its neighbors, you would have to understand the state of all of these elements, and you would also have to do this in real time, because, as we discussed, the duration of a burst is not going to be a very long time. It may be a couple of seconds, it may be 10 seconds, it might be 15 seconds, if we're going to respond accurately.
C: The signaling is required when a flooding burst is happening. Most of the time, when the network is stable, our level of flooding is minimal; we're just doing refreshes, which are quite modest. During a burst, both the transmitter and the receiver are going to be very busy: they're going to be busy doing the IS-IS work, and they may also be busy doing other work which is a side effect of whatever the network topology change is. The node may be receiving LSPs and sending LSPs.
C: So the summary of all this is that accurate, real-time signaling of the current state of the receiver is challenging, and that's kind of the main reason that we've decided, in our approach, not to depend upon it; we're only depending upon the actual rate of acks that we receive from the receiver.
B: Yes, well, since I'm talking, I have a comment on this, what you just talked about, the signaling. If I'm not mistaken, what's been proposed so far, whether it be the RWin or even the one that you were talking about, the PSNP interval, both of those are static values, so it's not something that would have to be signaled.
C: Well, so I'm not the RWin advocate, so, you know, probably Bruno should respond to this, but I do believe that for RWin to be most useful, it needs to be dynamic.
B: I used to think the same thing when I was just sort of looking at it at a high level, but in fact, if you break the queues down per adjacency, you can give it a static value, right. The reason that you're thinking it needs to be, and I thought that it needed to be, dynamic was to adapt to the network conditions, right. But if each adjacency is given, you know, 30 queue slots, it's...
B: ...bring everything together and talk about BGP traffic and all that other stuff, you've got to do some other things, right, but, you know, we're...
C: ...talking about being static. So let me just make one comment, and I think, you know, Guillaume and Tony probably have much more to say about this, but if you're trying to determine the state of the receiver, and you want to know whether the receiver is getting overwhelmed, you cannot just look at the input queue to IS-IS.
B
Correct
this
is
why
a
good
it's,
why
it's
good,
you
guys
are
getting
together
right.
I
I
I
don't
I'm
not
saying
this
is
black
and
white
or
you
have
to
choose
one.
Your
approach
is
dealing
with
the
overall.
You
know
congestion
that
can
happen.
The
our
win
is
doing
something
else
which
is
saying
you
can
send
me
this
many
lsps
and
I
won't
drop
them
right
and
that
just
it's
just
another
way
to
to
optimize
the
situation.
J: Yes, hello everyone. So I wanted to go back on the RWin; you said that RWin needed to be dynamic, and I think that's, yes, when you want to do congestion control you absolutely need a window that is dynamic, but you can actually do both: use the advertised RWin as an upper limit on what your allowed window on the receiver side is allowed to go to. For example, the receiver advertises a receive window of 100.
J: If the senders just use this directly, this 100 receive window, the problem is that there could be losses before the input queue, just as Les explained. So it's good if you don't want to lose packets at the input queue, this is the first guarantee, but it's arguably not enough, and for this the idea is to grow from a very small window on the sender side, sorry, and with the acks you receive you can infer whether there was congestion or not. So for this to work...
J: ...have a question? Oh, but yeah, okay, I thought you were asking: do we need the RWin to be dynamic? No.
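A small sketch of the combination Guillaume outlines, assuming a TCP-like policy whose growth and back-off steps are placeholders: the sender grows its own window from a small initial value based on the acknowledgements it receives, and the advertised RWin simply caps it.

    def next_window(cwnd, advertised_rwin, congestion_inferred, initial=4):
        # cwnd: dynamic window maintained by the sender from observed acks.
        # advertised_rwin: receive window advertised by the neighbor (upper bound).
        if congestion_inferred:
            cwnd = max(initial, cwnd // 2)   # back off (placeholder policy)
        else:
            cwnd += 1                        # grow gradually (placeholder policy)
        return min(cwnd, advertised_rwin)    # never exceed the advertised RWin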
K: Yeah, I just wanted to chime in as another evil greedy vendor: implementations are all over the map right now, and hardware has been making great strides in making interesting progress.
K: That said, there are a whole lot of implementations that are not particularly elegant, that comply with the first model that Les showed, with a giant punt queue and everything just funneling into that punt queue. Now, there's more complications: the things that funnel into that punt queue are rate limited, and so even though the queue may be ten thousand packets deep, you can only dump a thousand packets per second into it, or something, and hardware is not very forgiving about that, and maybe you can program your punt queue rate or not, whatever it is.
K: So again, we've got an enormous amount of variation we have to try to capture, and RWin is only a first approximation, and maybe over time we learn better ways of doing things. But the whole point is to give us some mechanism, because we realize that some feedback in the control loop is better than no feedback.
D: I just want to say, because of all these implementations, this draft isn't going to be a hammer that says you must do this and you must do that, but I'd encourage implementations to allow speed-up without the receiver sending RWin.
D: You know, you could have a way to bypass that or statically configure it, so you don't depend totally on the RWin.
D: We are completely agreed that RWin is wholly optional. Yeah, yeah, but okay, and the other thing I was going to say: anybody who has a rate limit on their punt queue or queues without doing it per, without some classification, I mean, that would just be very naive. Anyway, I don't think there are any implementations that don't do it, that rate limit just arbitrarily on the punt queue and don't do it by packet types or any classification.
B: Yeah, but to be fair, I mean, most big routers will, you know, do some amount of classification even in the NPU; you know you've got an IS-IS packet. But yeah, Tony, I think your point is well made, that, you know, it's all over the place, right, and I certainly wouldn't argue against trying to deal with that.
C: So what came out of our consensus on Monday, and I think, you know, Tony's probably the one who voiced this, is: let's just provide the tools that are needed to support a variety of strategies, and over time we'll find out what works and what doesn't work.
B: Okay, Acee, we had, I'm looking for my agenda here, we had discussions and next steps; looks like we're at the end of discussions.
B: I agree with that. So if the combined authors want to get together and try to get something out pretty fast, we could even look at doing an adoption call on the mailing list before the next IETF.
H: Okay, thank you, but we may be ready just a few weeks before the IETF, so I think it's okay to present this as a new draft, but thank you for the offer.
C: Yeah, I think, Bruno, that the submission deadline, I checked yesterday, I think, is October 25th, so I think we're going to try to make that.
C: Yeah, yeah, but as far as, you know, the working group chairs, anything you can do to accelerate the working group adoption process, from the point of view of us authors, I think, is most welcome.
B: All right, everyone, I guess that's it, unless anybody wants to pop into the queue: going once, going twice, going...
D: Yeah, I think we can address that offline; it doesn't have anything to do with this. There are some technical issues I think we should discuss a little bit more, and whether we do that before or in the context of an adoption call is a question, but I don't think it's relevant to either... IS-IS flooding optimization.
B: Yeah, I mean, since we have time: if you're looking for an update, Acee and I just spoke about the PUA draft yesterday, so it is on our minds, and we want to go back and look at what was presented recently by Peter as well. But I think, you know, you're making a reasonable request; it's been around for a while and we're going to have to put it up for an adoption call, I think. So...
B: Let us just circle back to that over the next few days and get back to you. Okay.
B: Okay, well, thanks everybody, great outcome, and see you at the next virtual IETF. Thanks a lot.