From YouTube: LSR WG Interim Meeting, 2020-04-29
I can't share it. I can't share, and I can't join as a different user. Ten minutes is not an acceptable preparation time. This is probably not the forum, but we need the ability to join earlier than ten minutes before the meeting. I'm sorry for everyone that we weren't able to join earlier and get this set up.
Despite not meeting in person, we've had a lot of progress since Singapore in a lot of areas, and more discussion than usual, and that's been good.
This is a record for the number of RFCs we published between IETFs. Part of it is because we had all these MPLS segment routing drafts that were waiting on a final reference — I forget which one it was, but it was a chain of dependencies — so we published those four. We also published that long-standing one, [unclear].
This one's waiting on [unclear]. Invalid TLV handling is a good maintenance draft; there was pretty much unanimous consensus on that. This draft has made progress since the last interim meeting, but not enough. It was submitted to the IESG for publication.
What I didn't put on here: there's — what do you call it — an appeal on this draft, and we've also had some of the implementers other than the authors come forward and say that it's okay. So we have had some progress, but I think not enough. I'm going to let Chris talk about it, since he's the shepherd on this one.
I mean, there's a lot of contention on the base network programming draft. Well, I don't know how much contention, but it's in appeal — if you've followed the list in SPRING. Or maybe it's been ceased... no, I don't think so. Anyway, there's an appeal filed about the penultimate hop behavior; maybe there are other aspects to the appeal. Anyway, there's a lot of raging anger there that maybe we don't want to touch. I think the draft as it is — I don't think it needs changes, and I don't think there's any reason to bring that contention to us.
So we have something that's sort of ready to go to support the network programming document, but we're going to hold back on actually submitting it to the IESG, primarily to avoid taking on that poison pill. We could become a part of the appeal then, because it's sort of an end run around the process if we're pushing through extensions to support something that hasn't been standardized and is under appeal. That said —
— that's what we should support, I don't think, unless people disagree with that; I mean, that seems logical. So I think we should just wait and see how this plays out, and then, once the network programming draft — if it goes through unchanged, we'll just push ours through too. If there are changes, then of course we have to adjust anyway. The main point is: let's not become a part of the appeal process, right? So we'll just wait for that to play out.
Yeah, I'm in agreement. We've got a lot of important things there at a technical level that we need to discuss. We really don't want to waste any working group energy or mailing list bandwidth debating this one until it settles down a little bit, at least in the appeal process.
Okay, this one — I think Yingzhen has asked for a working group last call. I'm a co-author on this, and I think it's almost ready. There hasn't been any pressing need, but I think it's pretty much ready, so we'll take that to the list. And then we have these ones — these are really working group documents.
I'd definitely put dynamic flooding and flex algorithm ahead of those, just because there are implementations proceeding. And we covered the IS-IS extended hierarchy in the last session; there's no pressing need there now — nobody's talking about implementing that. So I think we probably downplay when we want to do this.
A
These
these,
these
all
these
young
model
ones,
it's
just
a
matter
of
I-
mean
we
get
getting
all
the
reviews
and
doing
them
this
one
I
think
we're
kind
of
dropping
doesn't
seem
to
be
a
lot
of
interest
in
it.
The
routing
for
spine
leaf
topology,
mainly
because
I
think
the
main
reason
is
because
the
other
flooding
optimizations
are
more
generalized,
and
this
was
really
just
specific
to
the
data
center
spine
leaf
between
those
type
notes.
A
This
one,
this
one's,
not
my
Lister
to
kind
of
review
and
and
see
if
it's
ready
for
adoption,
the
other
ones,
aren't
a
high
iron
high
priority.
Okay,
the
other
thing
I
was
going
to
say
regarding
the
interims,
Chris
and
I
were
talking
about
maybe
having
an
interim
also
like
this
one's
subject
on
the
flooding,
maybe
have
one
on
the
on
the
area:
abstraction
and
flood
and
flooding
reflectors.
A
E
[unintelligible crosstalk]
The main use case is really fast flooding when there is a node failure. When you have a node failure, each of your neighbors advertises the failure of the link toward you, so depending on the network — at least for Orange — it could be between 200 and 250 LSPs to flood for a single node failure. For us it would be good to flood them in less than 100 milliseconds for fast convergence, ideally [unclear] milliseconds, in order to quickly reconverge [unclear].
There were requests to collect some data on the behaviors, so we did a lot of lab testing to have an idea of existing behavior. We used idealistic test conditions, so it couldn't be better: we used a mature implementation [unclear], we used a high-end router, and we used the same implementation on both the sender and the receiver.
So there is no interoperability issue, no difference of assumptions — the same behavior on the sender and the receiver. And only IS-IS is running on the router, so no BGP interference; everything is within the lab, and we had a single IGP adjacency. There is a single receiver and a single speaker sending LSPs to the receiver, so it couldn't be better in terms of conditions.
Next slide, please. I'm not going to go into all the details, just a summary. Three kinds of tests. First, with the default parameters: in this case we have a slow LSDB synchronization — no surprise, it's as per the usual manner. Everything works fine, except it's not very fast: the communication is below 500 kilobits per second, so in order to flood, say, 4,000 LSPs, you need 150 seconds. But other than that it works fine. The second group of tests tried tuning the parameters.
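For scale, the arithmetic behind these reported numbers can be sketched as follows. Treating every LSP as a full-size 1492-byte PDU is an assumption made here purely for illustration; real LSPs vary in size.

```python
# Rough flooding-time estimate for the default-parameter case described above.
# Assumption (illustrative only): every LSP is near the 1492-byte maximum.
LSP_SIZE_BYTES = 1492
GOODPUT_BITS_PER_SEC = 500_000  # "below 500 kilobits per second"

def flood_time_seconds(num_lsps: int) -> float:
    """Seconds of pure transmission time to flood num_lsps LSPs at the given goodput."""
    total_bits = num_lsps * LSP_SIZE_BYTES * 8
    return total_bits / GOODPUT_BITS_PER_SEC

# 4,000 LSPs -> ~95 s of raw transmission; with pacing and acknowledgment
# overhead, the observed ~150 s is the same order of magnitude.
print(round(flood_time_seconds(4000)))  # → 95
```

The point of the sketch is only that sub-second synchronization is impossible at this goodput, whatever the exact LSP sizes.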
With tuned parameters it goes smoothly. And then the third group is if you try to go a bit beyond the best parameters, so you have slightly too aggressive parameters. In this case, by definition, the receiver is overwhelmed, so it cannot handle all the LSPs even in these best conditions — you have only one speaker, and it's a big router — and, as a consequence, you get a lower goodput.
It would be interesting to know how much you overwhelmed it. You changed the parameters so that you overwhelmed it, but if you overwhelm it two times that, or three times, or four times that, how does that change things? Do we stay at a sort of steady state of fifteen seconds, or does sending it faster and faster change the convergence?
A very good question. To answer it I would probably need to disclose the name of the implementation, so I'm not sure about it. It's a small change: you can see that on average the inter-LSP delay — which is sometimes called the LSP pacing delay parameter — is 1.2 milliseconds for the too-aggressive parameters, and it was two milliseconds before, so it's nearly two times exactly. There's a difference in terms of parameters, but if I go into more details... [unclear].
With this sender I'm about reaching the capacity of the sender, so I can't really go much beyond. But we could use two senders — it's a typical use case, a router flooding to adjacent neighbors — so with two neighbors we multiply the load by two. That's something we can do, but then we have some synchronization issues, because at the same time —
— we want to avoid losses and retransmissions because the sender was too fast for the receiver. That's the definition of flow control: it's end to end, between sender and receiver, and for that TCP uses a receive window which is advertised from the receiver to the sender. We propose to use exactly the same mechanism, except the unit would be a number of LSPs rather than a number of bytes, because IS-IS sends LSPs whereas TCP sends a stream of bytes.
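A minimal sketch of the mechanism described here — a TCP-style receive window counted in unacknowledged LSPs rather than bytes. The class and method names are illustrative, not from the draft.

```python
# Sketch: sender-side bookkeeping for a receive window measured in LSPs.
# The receiver advertises how many unacknowledged LSPs it can absorb;
# the sender stops flooding once that many LSPs are in flight.
class LspWindowSender:
    def __init__(self, receive_window: int):
        self.receive_window = receive_window  # advertised by the receiver
        self.unacked = set()                  # LSP IDs sent but not yet acked

    def can_send(self) -> bool:
        return len(self.unacked) < self.receive_window

    def on_send(self, lsp_id: str) -> None:
        assert self.can_send()
        self.unacked.add(lsp_id)

    def on_ack(self, lsp_id: str) -> None:
        # A PSNP acknowledging the LSP re-opens the window.
        self.unacked.discard(lsp_id)

sender = LspWindowSender(receive_window=2)
sender.on_send("lsp-1")
sender.on_send("lsp-2")
assert not sender.can_send()   # window full: stop flooding
sender.on_ack("lsp-1")
assert sender.can_send()       # an ack frees one slot
```

As in TCP, the window bounds how far the sender can run ahead of acknowledgments; only the unit changes.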
The faster the reaction from IS-IS — the shorter the delay — the larger the throughput; otherwise you need a larger window. That's nothing new; it's a trade-off. For a given window size, there is a benefit in sending — I mean, acknowledging — LSPs faster, because you have a faster feedback loop and so a faster throughput.
Well, yes, but it still works. We still have flow control, and even with a relatively small window we can try to get good throughput by sending SNPs faster, as a feedback loop. So, the short answer: you can pick a static value, whatever you want. Some implementations already send a number of LSPs back-to-back — for example, five — and you can reuse that exact same value as the receive window.
Next slide, please. There is an option that's a bit more dynamic: you can change the receive window based on the receiver load. You can increase the window if you have plenty of resources — if you are waiting for I/O, which more or less means waiting for more LSPs, you can increase the window, which is a way to tell your sender, "go ahead and send more" — and you reduce it if —
There is a third solution: advertise a dynamic window by monitoring some relevant resources within your platform — within the receiver platform — typically hardware resources such as buffer space or CPU or whatever. Clearly, in this case it's hardware- and platform-dependent, as you're monitoring the hardware or software resource which is the limiting factor for your implementation. So that one is hardware-dependent, but again, it doesn't have to be done that way.
If you want to do it that way, you can; if you don't want to, you can simply use a static value, the same way you have to define a TCP window.
Congestion control protects the network from being overwhelmed: the network sits between two nodes, and on each end there are applications — here the application is IS-IS. For IS-IS we have a specific case: our "network" is not that big, it's mainly one point-to-point link, which is very high speed. I think we agreed that it's not going to be the issue. But if you look a bit closer, there are also forwarding resources within the router, between —
Okay, that part is platform-dependent, and maybe we don't want to look at the details. We just assume it's part of the network between the two applications, so we don't care about the details; we don't need to look at that. In the Internet there are no assumptions about knowing the network, precisely so as to preserve the network and its services.
However, we can add congestion control on top of that — we probably will in the draft — to prove that it's doable. We're not going to invent anything new; we'll just reuse the existing AIMD algorithm — additive increase, multiplicative decrease — which is very old, nothing fancy. It's what's used in TCP, in SCTP, and in some DCCP modes.
It's quite simple. You start with a congestion window — it's kind of a slow start, but starting at one is too slow, because that's not effective for a short exchange of messages, so one option is to start at the size of the receive window. Then you do a linear increase if everything goes well, and a decrease proportional to the rate of received [unclear] LSPs.
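The AIMD behavior described — linear increase while things go well, multiplicative decrease on trouble, seeded at the receive window rather than slow-starting from one — can be sketched like this. The constants are illustrative assumptions, not values from the draft.

```python
# Sketch of additive-increase / multiplicative-decrease (AIMD) for a
# window counted in LSPs, seeded at the receive window instead of at 1.
class AimdWindow:
    def __init__(self, initial: int, increase: int = 1, decrease_factor: float = 0.5):
        self.cwnd = initial                     # start at the receive window
        self.increase = increase                # additive step per good round
        self.decrease_factor = decrease_factor  # multiplicative cut on loss

    def on_round_ok(self) -> None:
        self.cwnd += self.increase

    def on_loss(self) -> None:
        self.cwnd = max(1, int(self.cwnd * self.decrease_factor))

w = AimdWindow(initial=10)
w.on_round_ok(); w.on_round_ok()   # 10 -> 12 after two good rounds
w.on_loss()                        # 12 -> 6 on a loss signal
print(w.cwnd)  # → 6
```

This is the same dynamic TCP uses; the only IS-IS-specific choices are the unit (LSPs) and the non-unit starting point.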
That's a summary of the transmission layer; the slides go into a bit more detail. The changes in the draft since [unclear], following some comments: allowing the values to be advertised both in hellos and in [unclear], and, again following comments during the meeting, we will use sub-TLVs, one for each parameter.
We already received a lot of feedback, especially during the meeting and on the list, and I had a conversation with [unclear] — thank you — which will improve the draft. The next steps for the draft: what we have in mind today is a better introduction to the transmission layer and what we are doing — the overall behavior — before diving into the details, and, I think, the congestion control algorithm.
It's doing a lot. You're talking about flow control, you're also talking about congestion control, and then there's also that last slide — see if you can put it on, slide 15... well, it's up there anyway, it says "new sections", right — so that's even a third aspect, which is modifying the protocol mechanisms.
That's cool. I think I have a different idea of flow control and congestion control — I'm not necessarily right here — but when you say congestion control is about the network, I tend to think congestion control is really about managing loss. That's how I think about it. The loss sure can happen in the network, but it can also happen in the router, right?
We've talked about that on the list, and software people talked about different queues and loss points. So I think when you're solving the problem of loss, that's really when you're talking about congestion. And something I wanted to bring up that I don't think we've talked too much about: I see flow control — at least the basic form of it, if we go way, way back to RS-232 — as a way of saying "stop sending", right? And it's less about —
Congestion control algorithms work off of data like loss rates, whereas flow control I see as more active. And I think we should also explore this — I don't know if it will pan out or be fruitful, but just to throw out a simple sort of idea: you could look at your first-level queue, as long as your router's NPUs or whatever are depositing things into their own queue.
You can tell the other side to stop sending. I know there are multiple queues and all that, which we've discussed before, but queues drain, so you can tell when the first-level queue is not empty. Anyway, the point is, I think we should also explore that idea, because it could be a very simple solution. So I've sort of dropped that into the flow control category and put everything else in the congestion control category.
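The simple pause idea floated here — watch the first-level queue and tell the other side to stop, in the spirit of RS-232 XON/XOFF or Ethernet PAUSE — might look like the following sketch. The thresholds and names are hypothetical; a real implementation would do this in the NPU or line card.

```python
# Sketch: receiver-driven pause/resume signaling based on first-level
# queue depth, with hysteresis between a high and a low watermark.
class PausingReceiver:
    def __init__(self, high_water: int = 8, low_water: int = 2):
        self.queue = []
        self.high_water = high_water  # above this: signal "stop sending"
        self.low_water = low_water    # at or below this: signal "resume"
        self.paused = False

    def enqueue(self, pdu) -> None:
        self.queue.append(pdu)
        if len(self.queue) > self.high_water:
            self.paused = True    # would emit a pause toward the sender

    def drain(self, n: int) -> None:
        del self.queue[:n]
        if self.paused and len(self.queue) <= self.low_water:
            self.paused = False   # would emit a resume

rx = PausingReceiver()
for i in range(10):
    rx.enqueue(i)
assert rx.paused        # queue exceeded the high watermark
rx.drain(9)
assert not rx.paused    # drained below the low watermark: resume
```

The hysteresis (two watermarks rather than one) is what keeps the pause signal from flapping as the queue hovers near a single threshold.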
So from that viewpoint, the dynamic receive window sizing — kind of what you're talking about in the draft, where you're sort of advertising "hey, use these retransmission timers and these pacing intervals" — the way I perceive that is as a sort of second-order modification, a second-order input to the congestion control, or almost modifying the congestion control algorithm, right? At the first order, the input to a congestion control algorithm is your loss.
Generally it's round-trip time and your loss rate, and those variables aren't hard to get — they're already there, in a limited sense, from your NAKs and ACKs. So what I think you're proposing sits sort of above that: it's going in and saying, "I'm going to modify the behavior of the congestion control algorithm."
So that's an interesting take. I wonder if we're biting off more than we need to there, but it's definitely worth investigating. So those were my main comments on what I was thinking: let's check out things like Ethernet PAUSE or CTS/RTS and see if that simple approach might work. The window stuff seems complex, since it's more about modifying the congestion control than just giving input to it.
I'm not sure. I think you're talking about the burst rate, right? The burst rate is not actually like a receive window, because you still have to pay attention to the limits. It's more like a buffer size — I mean, maybe that's what the receive window is — but if you can send five back-to-back, yet you can only send five per second on average, you still only send five per second. Nothing is going to go faster.
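The distinction being drawn — a burst allowance on top of an average rate, rather than a window — is essentially a token bucket. A sketch, with illustrative parameter names:

```python
# Sketch: token bucket allowing a burst of 5 LSPs back-to-back while
# capping the long-run average at 5 LSPs per second.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = float(burst)  # start with a full burst allowance
        self.last = 0.0

    def try_send(self, now: float) -> bool:
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate_per_sec=5, burst=5)
sent_at_t0 = sum(tb.try_send(0.0) for _ in range(10))
print(sent_at_t0)  # → 5: five go back-to-back, the rest must wait
```

This matches the point in the discussion: the burst parameter shapes short-term behavior only; the average rate is what bounds throughput.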
So we put in a new example flow control algorithm. Note here, of course, that this algorithm is based solely on input local to the transmitter. That's the big point of difference, I think, between the two drafts. I'm not going to spend a lot of time on the details of the algorithm itself; we do think this algorithm reacts quickly.
For sending the SNPs — which, on a point-to-point circuit, is the way LSPs are acked — the base protocol specifies a relatively large timer of two seconds. If we're going to do faster flooding, then clearly we're going to need to send acknowledgments faster. I think implementations today don't necessarily adhere to the two-second value, but they stay in the same range — they might be one second or a little less — and they're certainly not much faster than that. We think that needs to be changed.
To avoid inconsistency in the LSP database across various portions of the network, we added a section recommending that the faster flooding not be enabled until all nodes support it. Flooding is scoped by the area, so you could enable this in a particular area where the nodes support it, while the nodes in another area, or in the Level 2 backbone, don't yet support it. That's possible. Next slide. Okay.
This is about — you send LSPs, but what happened to those LSPs? Did they get dropped, and where: in IS-IS itself, or down in some of the queues from the data plane to the control plane? We took a look at existing implementations and platforms as to what kind of QoS they apply to IS-IS PDUs.
The operation of different data planes varies a considerable amount, but I think it's generally true that most platforms do not implement a queue specific to IS-IS PDUs. They lump IS-IS PDUs either with all incoming traffic that needs to be punted, or with some subset of that traffic, which could be all the routing protocols — BGP, BFD, and so forth. There are varying ways of doing this, but they certainly do not typically apply queues specific to IS-IS PDU types.
Nor are they specific to a particular interface: in most implementations, the statistics used to control the punting are not based on a particular interface but are lumped over all the incoming PDUs across all interfaces. The second point is that in order for a receiver-based flow control algorithm in IS-IS to operate, it's going to have to get updates from the data plane as to its state, in real time, at a fairly fast rate.
Given that we have a variety of implementations, even if we try to define some sort of platform-independent API to send state from the data plane to the control plane, it's going to be difficult to map the platform-specific behaviors into a common notification to the protocol.
So we see these as significant challenges in utilizing receiver-based flow control. If one of the goals is to get faster flooding deployed in a modest amount of time — not many years from now — and to not impose restrictions on how new platforms need to handle the QoS for IS-IS PDUs, this is something we see as a huge barrier.
Alternatively, if you use transmit-based flow control, as we proposed in the draft, all of the input to the flow control algorithm is local to IS-IS and is in fact information that the protocol already has today. And that data is per interface, because the implementation of the update process in IS-IS requires us to maintain statistics on which LSPs need to be sent on a per-interface basis and whether they have been acknowledged or not.
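A sketch of what transmit-side flow control can do with purely local, per-interface state — the count of LSPs sent but not yet acknowledged, and retransmission events. The back-off rule below is an invented illustration, not the draft's actual algorithm.

```python
# Sketch: transmit-side rate adaptation using only local per-interface
# signals the IS-IS update process already tracks: un-acked LSPs and
# retransmissions. No feedback channel beyond normal PSNP acks.
class TxFlowControl:
    def __init__(self, lsps_per_sec: float = 100.0):
        self.rate = lsps_per_sec
        self.unacked = 0

    def on_send(self) -> None:
        self.unacked += 1

    def on_ack(self) -> None:
        self.unacked = max(0, self.unacked - 1)

    def on_retransmit(self) -> None:
        # A retransmission suggests the receiver is dropping: slow down
        # multiplicatively (floor keeps the rate from collapsing to zero).
        self.rate = max(10.0, self.rate / 2)

    def on_interval_ok(self) -> None:
        # Acks keeping pace with sends: probe upward additively.
        self.rate += 10.0

fc = TxFlowControl()
fc.on_retransmit()
print(fc.rate)  # → 50.0, halved from 100.0
```

The appeal, as argued above, is that every input here is already maintained per interface by the update process, so no data-plane API is required.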
So I mean, I'm sort of curious — you put out a range of things that are possible, or maybe not possible, with hardware that's out there. I think IS-IS is generally deployed on higher-end routers, so I would be shocked to find out that any successful router didn't separate IS-IS traffic at least from data plane traffic, rather than just punting all control traffic together. I mean, but that —
These might be barriers, but as an old BSD guy I'm familiar with trying to implement things on ancient or bad hardware. That doesn't mean we have to engineer to it, right? For example — this is not hard to do; whether it's done wrongly or not is one thing — IS-IS almost always gets separated very early in the NPU stage, because it's coming in on ISO, right? So it's like —
It's usually noticed as the very first thing that it's not a normal Ethernet packet, so it generally goes into a different queue. Maybe not always, but my point being, it's not hard to do; it's not hard to recognize IS-IS packets apart from other packets. So what I'm basically getting at is: let's not throw out a solution just because we'd have to engineer to it, right? Because if we had a simple pause solution, we could present that as something the market could design to.
So you say: okay, our implementation of IS-IS allows 50-millisecond convergence across the network, because we're able to flood so quickly, and our flow control is so efficient because we did this little tweak on our line card in the NPU, right? And then you have the market deciding. I just don't want to throw out possible solutions because current hardware might not support them. It is a barrier to implementation, but hey, the market can decide, and people can make money on providing good things.
Here's an input from practical implementation of all this stuff. The transmitter side is very useful and actually works pretty well. On the receiver side, you need a very high feedback frequency for this stuff to work well, which we simply don't have, right? So you have to start to go toward things like TCP, which has 200-millisecond RTT estimates and so on, and you very quickly get into designing a completely new transport — and that's a major effort.
It's also a question of queues and so on. My experience is that you cannot get decent feedback even at 500 milliseconds, and if you get feedback at 500 milliseconds while trying to flood really fast, mostly in bursts, the event is gone by the time you can do something about it. So if we put the stuff in because in the future something may fly with big enough engines — that's fine.
But you have to put it into something, right? And look at the system: it starts to emit packets somewhere at the very bottom, in the line card. To debug such systems — or even operationally, to try to figure out what's going on — can be pretty challenging. Yeah, it's an idea, but what would you put it in? You'd start to generate, you know, something like hellos — there's no —
You'd generate, all of a sudden, something the implementation knows nothing about, or you have to plumb the way the line card reports it back to the main implementation. Those are not easy things to build, and again, depending on the platform, your mileage will vary very, very much. It all boils down to whether you can implement very high resolution timers very precisely, and we all know that in user space that is pretty much a proposition from hell at scale.
Chris, the comment I would make is that your proposal only takes into account one point in the chain of punting from the data plane to the control plane. There are other points of congestion. So even if we were to agree — and I agree with what Tony has said in terms of the implementation problems of this — even if we were to agree, okay, let's do it, what you suggest does not cover all of the points where the PDUs may be dropped.
Implementation input here: RIFT has something like that, but for this stuff to work well — it's actually far more useful in a different context, and we cannot do it in IS-IS, because everything is meshed together into the same LSP. If you, kind of like OSPF, differentiate prefixes from the node description, then it is highly beneficial to push back on the sender saying: look, I'm having trouble processing; please send me only node descriptions.
To that point — why can't the NPU just say, "this is an LSP and I've got too many of those in my queue", right? It's not a lot of logic, and it can easily generate a pause from that. Now, how the control plane deals with that, I don't know. I'm just saying I don't want to throw this out so quickly. It might be an add-on solution.
It doesn't have to be the solution. I've done a few implementations and worked on some of this hardware, and I'm not convinced that it's that hard to put a simple thing like this into the line card network processor. But we can disagree, or we can argue about it on the list.
What other protocols are enabled? What's your link bandwidth, obviously? What is your hardware capable of doing? What is your SRLG deployment, which can influence the number of LSPs that might be triggered as a result of a topology change? I'm sure there are many more factors that people could contribute here. So the idea is that you could have a modest number of scenarios for a particular platform and say: okay, I'll take scenario one and use these values.
Obviously, TCP is byte-stream; IS-IS is packet-based. Key to the TCP control is the fact that you have ordered delivery: if I receive byte 90,000 in my data stream, that presumes the sender has already sent me all of the bytes up to byte 90,000. Whereas if I receive, say, LSP number five from node A, that tells me absolutely nothing about whether there are other LSPs that have been updated from node A, or other LSPs that have been updated from other nodes.
The resources that are used for TCP, and that trigger the adjustment to the window, are managed by the control plane: we set aside a certain amount of buffer space for a particular session. In the case of IS-IS, as we've been discussing, the resources for receiving IS-IS packets are dependent on the data plane. We've got input queuing, we've got intermediate queuing going toward the control plane, and we may have multiple levels of queuing in the control plane. So I don't find it convincing to say that what works for TCP will work here.
Flooding characteristics — these are just the basic flooding scenarios; I think a lot of this was also covered in the earlier presentation. If you have a stable topology, then during that time the only thing we're flooding is refreshes, and these can be distributed. If you're in a network that has a lot of LSPs, then it's advantageous to use something other than the default lifetime for your LSPs. The default lifetime is 20 minutes; most implementations allow configuration of up to about 18 hours — it's a 16-bit value.
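The "up to 18 hours, 16-bit value" remark checks out arithmetically, since the lifetime is carried as a 16-bit count of seconds:

```python
# The LSP Remaining Lifetime field is a 16-bit number of seconds.
max_lifetime_s = 2**16 - 1            # 65535 seconds
print(max_lifetime_s / 3600)          # ~18.2 hours: the "up to 18 hours" figure
# Versus the 20-minute (1200 s) default: refreshes ~54x less often.
print(max_lifetime_s // 1200)         # → 54
```

That ratio is why raising the lifetime matters in large databases: refresh flooding shrinks by the same factor.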
A link event typically results in a small number of LSPs being updated. Again, the draft talks about how to optimize LSP generation — or should I say minimize the number of LSPs that need to be generated — when a single neighbor goes down. If you follow those guidelines, typically you're only going to have two LSPs that need to be flooded.
Then we have the node state changes. When you're bringing a node up, that node needs to get a complete LSP database. If the LSP database is large — which is the case we're really concerned about — this can take a significant amount of time. However, bringing a node up is not a time-critical operation, and there are techniques, sometimes referred to as graceful startup, that essentially allow the node to come up without being used for forwarding in the network until it's able to sync its LSP database.
Node failure, however — that's probably the key case, because here the node was actively being used in the network. The node goes down, and in order to get traffic to reconverge we need to handle all of the updates that get flooded around the network. And if the node which failed has a large number of neighbors, then clearly we've got a large number of LSPs that need to be flooded.
No, just — I mean, when I think about it, like on the list: if we had a goal of flooding optimally, and without loss, and we just did it optimally, the network converges as fast as it can. I don't see why we should be so concerned about whether one interface is going faster than another, yeah.
Say the guys on the left can absorb flooding much faster than the guys on the right, so I flood faster to the left and slower to the right. That means that for any event which occurs — even with a modest number of LSPs being updated — the left side of the network is going to converge a lot faster than the right side of the network. Well —
But — I mean, so we just slowly converge? Let's talk about it: there's good info and bad info. If we're talking about a network failure, we want to converge as fast as possible. And let's take a pathological — the pathological but simple case: you've got one really slow router. Why do I want that one really slow router to slow the convergence of my entire network down? I'd rather route around that problem, right, and flood around it. So —
What I'm trying to say is, if you look at your network and you've got one or two routers that are significantly slower than the other routers, that's a problem. I think the analogy here is what happened when we went from one-second SPF intervals to 50 milliseconds: yes, if you only did this on some nodes in the network, to some degree it's a problem.
I just had a follow-up to Chris's comment — it's been too long since I've done this particular version of network sudoku to know this for sure — but typically, the answer to "why is it a problem for part of my network to converge faster than the other part?" is: because you can form forwarding loops.
To speak to that particularly: we actually went through this exercise with IS-IS a while back, where we talked about good information versus bad information, I think. In a perfect world, if your network's broken, you want that "breaking" news as fast as possible everywhere, because you're black-holing traffic anyway, right? Whereas when you're making new connectivity — you're bringing up an interface that maybe is bringing up —
F
I don't want to try to redo it right now in the middle of the meeting, but: if I'm black-holing traffic, I'm consuming basically one times whatever that link was, across the network. But if I'm forming a loop, I'm potentially consuming something like 255 times what that link was, in terms of bandwidth, for some period of time. That's...
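The bandwidth arithmetic behind this point can be sketched numerically. The figures below are illustrative, not from the meeting: a hypothetical 10 Gb/s link, and the IPv4 TTL ceiling of 255. A black-holed packet crosses its path once and is dropped; a packet caught in a forwarding loop can be re-forwarded until its TTL decrements to zero.

```python
# Illustrative sketch: bandwidth cost of black holing vs. a forwarding loop.
# A black-holed packet traverses the network once and is dropped; a looped
# packet keeps circulating until its TTL (at most 255) expires.

def blackhole_cost(link_rate_bps: float) -> float:
    """Traffic crosses the network roughly once before being dropped: ~1x."""
    return 1 * link_rate_bps

def loop_cost(link_rate_bps: float, ttl: int = 255) -> float:
    """A micro-loop can re-forward each packet up to TTL times: ~TTL x."""
    return ttl * link_rate_bps

link = 10e9  # hypothetical 10 Gb/s link
print(loop_cost(link) / blackhole_cost(link))  # -> 255.0
```

The 255x figure is a worst case; real loops involving only part of the path consume proportionally less, but still far more than a black hole.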
C
This is not a response to your comment so much as, from the working group's perspective, my viewpoint: the big debate that we're having at the moment is whether we should do receiver-side based flow control or transmitter-side based flow control. In the time remaining, and this is just my point, I would like to see us talk mostly about that and not get lost in some of these other issues, which may still be important but are, shall we say, less... not really a point of conflict.
H
Okay, just some clarification queries. It looks to me like both drafts try to do flow control, and the flow control needs to be done by the sending side, right? It looks like the only difference is where we get the parameters: what is the buffer size, or what is the number of LSPs we can flood before the count of unacknowledged LSPs fills up and we need to stop, and whether this is going to be determined by the sender or the receiver.
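Both proposals, as described here, reduce to a sliding window of unacknowledged LSPs on the sender. A minimal sketch follows; the class name, window value, and API are illustrative, not from either draft. The open question in the discussion is only where `window` comes from: configured locally (transmitter-side) or advertised by the neighbor (receiver-side).

```python
# Minimal sketch of window-based LSP flooding flow control: the sender
# stops flooding once the number of unacknowledged LSPs reaches the
# window, and resumes as acknowledgements (PSNPs) come back.

from collections import deque

class FloodingWindow:
    def __init__(self, window):
        self.window = window     # max LSPs allowed in flight
        self.unacked = set()     # LSP IDs sent but not yet acknowledged
        self.queue = deque()     # LSP IDs waiting for window space

    def enqueue(self, lsp_id):
        """Queue an LSP for flooding; send immediately if window allows."""
        self.queue.append(lsp_id)
        self._pump()

    def ack(self, lsp_id):
        """Acknowledgement received for an LSP; frees window space."""
        self.unacked.discard(lsp_id)
        self._pump()

    def _pump(self):
        # "Send" queued LSPs while the in-flight count is below the window.
        while self.queue and len(self.unacked) < self.window:
            self.unacked.add(self.queue.popleft())

fw = FloodingWindow(window=2)
for i in range(4):
    fw.enqueue(i)
print(sorted(fw.unacked), list(fw.queue))  # -> [0, 1] [2, 3]
fw.ack(0)
print(sorted(fw.unacked))                  # -> [1, 2]
```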
H
That's all we are debating here. And to that point: if we are going to advertise static values, I can configure that number on the sender as well. If we are going to do a per-platform static value, maybe that works. If you are going to do a dynamic one, that leads to the receiver window size, which, as I have expressed many times, is very problematic in my opinion. That is the difference. But if we are talking static values, there is not much difference in what we are trying to do.
E
See, if we talked more about solutions... how do I put this? It feels to me like when you're presenting this, we're getting a lot of data about why other solutions are not good, right? And I think that maybe that's not the best way to move things forward. Why don't we just talk about why your solution is good, and forget about saying "this other one doesn't work"?
E
You know, all the knocks... let's just get to: here's my solution, and it's really good, and here's the proof of that, right? And I think that then, you know, we'd just get some good work out of this. I feel like we're spending a lot of time as a working group, you know, on the mailing list and so on, talking about "your solution isn't good, because we can do it with mine", and I don't think... I think we can get somewhere faster.
E
We can just do that. Why can't we just do that? Why do we have to say, "Bruno, you know, these statics, whatever"? Just say: here's a transmit-side solution, let's go with it. We're going to go with the transmit solution, we come up with this great algorithm, and here's how it performs. I don't understand the need to keep talking about why the other one is not useful, or "you can just do it differently", or, I mean...
C
We may be changing behavior, but we're not changing the protocol, and there's nothing about what the transmitter may do that requires the code in the receiver to change. And if the algorithm is sound, so that we are able to correctly and quickly adapt to the rate at which the receiver is able to process, and everything works fine, then there is no interoperability issue. There are no protocol extensions required.
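One way to read "adapt to the rate at which the receiver is able to process" is a TCP-style additive-increase/multiplicative-decrease loop on the sender, keyed off whether acknowledgements keep pace. This is only an illustrative sketch of that idea, not the algorithm from any draft; the function name and all constants are placeholders.

```python
# Illustrative AIMD-style adaptation of the LSP flooding rate on the
# sender: speed up gradually while acknowledgements keep pace, back off
# sharply when they fall behind (e.g. a retransmission timer fires).
# Constants are arbitrary placeholders, not values from any draft.

def adapt_rate(rate, acks_keeping_up,
               step=10.0, backoff=0.5, floor=10.0, ceiling=1000.0):
    """Return a new flooding rate (LSPs/sec) for the next interval."""
    if acks_keeping_up:
        rate += step       # additive increase
    else:
        rate *= backoff    # multiplicative decrease
    return max(floor, min(ceiling, rate))  # clamp to sane bounds

r = 100.0
for ok in [True, True, False, True]:
    r = adapt_rate(r, ok)
print(r)  # -> 70.0
```

Because the adaptation is purely local to the transmitter, a sketch like this needs no protocol extension, which is the point being made above.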
E
I think we're getting hung up on that point, right? Like, okay, whether you're changing the protocol behavior or changing the on-the-wire encoding... I don't think that's pertinent to what we're talking about right now. I mean, it's not that hard to change the protocol either, right? We're trying to solve a problem. Let's come up with some solutions. Now, you could say "my solution is really great because it doesn't change the on-the-wire behavior". Great, right? But, I mean, getting hung up on these...
B
Yes, that is a baseline. That is not a demonstration of any working code. That's simply to say that, with what's out there today, the default parameters are too poor, and if you turn the knobs randomly and just dial it up to 11, that doesn't help. And the right number, the optimal rate for sending, is not something that we know, right?
E
Before... I don't know if I agree. See, I don't view these two drafts as being in conflict, right? Because if I were implementing, I could do both, right? So I don't want to let that go by, as a working group member. You know, they both can be done, so there's no reason to pit them against each other.
G
And the second comment was that, purely from a simplicity perspective, since it's a local behavior change, the second proposal that we saw seems easy to implement and roll out, and it can also get deployed in a mixed environment without any interoperability issues, right? Whereas the other proposal needs, you know, something to change on the wire. So, just my feedback: I think from that perspective the second proposal seems easier or faster to get deployed.
E
Tony's point is: let's get some numbers, right? So, yes, not changing anything on the wire is, you know, at least a selling point, right? And we can take all the data when we have it, look at the different outcomes, and make decisions based on that. But I don't prefer an unproven solution just because it doesn't change the protocol. That's sort of, you know, vacuous.
E
I love that. That's a great point, though, right? Because once we start talking about solutions... I sent an email earlier this morning saying, you know, we need to have... there's a proposed algorithm, but it only covers point-to-point. So, I mean, at minimum we have to at least cover point-to-point and LAN. Then, once you have that, we can start asking, you know, does it work? And then, once you have that it works, you can start looking at "hey, it works even better".
J
Hi. My question is very simple. Since we are doing some comparisons between these two solutions, my question is how we can do the tests. We can get a window size from the receiver side, and we can get another window size from the transmitter side. So do we have an ideal window size to judge which one is better, or do we compare the convergence time, or some other criteria, for both of them?
E
One way: you could also take a snapshot on the sender of the sequence numbers, right, and then poll the thing. But I don't know if the control plane is fast enough, whether the management plane is going to give you the information fast enough. But, yeah, I mean, basically you have to test when all the LSPs have arrived.
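The measurement idea described here, snapshot the sender's LSP sequence numbers and then poll the receiver's LSDB until it has caught up, can be sketched as below. The function names and the poll hook are hypothetical; in a real test the snapshot and LSDB contents would come from the device's management interface, which, as noted, may itself be too slow for accurate sub-second timing.

```python
# Sketch of measuring flooding convergence time: poll the receiver's
# LSDB (mapping LSP ID -> sequence number) until every LSP has reached
# at least the sequence number captured in the sender-side snapshot.

import time

def flooding_convergence_time(sender_snapshot, poll_receiver_lsdb,
                              timeout_s=60.0, interval_s=0.05):
    """Return seconds until the receiver's LSDB catches up to the
    snapshot, or raise TimeoutError if it never does."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        lsdb = poll_receiver_lsdb()  # hypothetical management-plane hook
        if all(lsdb.get(lsp_id, -1) >= seq
               for lsp_id, seq in sender_snapshot.items()):
            return time.monotonic() - start
        time.sleep(interval_s)
    raise TimeoutError("receiver never converged to snapshot")
```

The polling interval bounds the measurement resolution, which is exactly the concern raised about the management plane being fast enough.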
J
But actually, I think I have to say I largely agree with Les. I think the answer may not be so straightforward, because even for congestion control in the transport layer, we can just say one algorithm is more aggressive than the other one, or use some other criterion to judge which one is better, but maybe it is hard to get to the answer of which one is better. In my point of view, I...