From YouTube: IETF100-ICCRG-20171113-1330
Description: ICCRG meeting session at IETF 100, 2017/11/13 13:30
https://datatracker.ietf.org/meeting/100/proceedings/
C: Thank you so much. And you're doing Jabber, right? Come on — being a scribe is easy. Somebody to chat on Jabber, a Jabber scribe? Thank you. We've got a scribe and a Jabber scribe. Thank you very much. People, I'll remind you of the IETF IPR policy document.
C: There is a scribe, and a Jabber scribe. Now a very, very quick update, because I co-authored these ICCRG drafts as well — this is me speaking as an author, too. Just to give you an update here, because I'm also an author of both of these: we asked for feedback on the list, and as an author I want to thank the reviewers; as a chair I also want to thank the two reviewers.
E: That is quite a big problem we have. But the other thing, another myth on the next slide, is the latency. So basically, indeed, we have some 500 milliseconds of latency on the links, but we all know — thanks to all the bufferbloat issues we've seen in the past, and also the work that has been done through the RITE project — that latency is not just the signal propagation delay. And there are also many use cases, such as boats, planes and others.
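For a sense of scale, a worked bandwidth-delay product with illustrative numbers (mine, not from the talk): a GEO satellite path with a 500 ms RTT at 50 Mbit/s gives

\[
\text{BDP} = \text{rate} \times \text{RTT} = 50\,\text{Mbit/s} \times 0.5\,\text{s} = 25\,\text{Mbit} \approx 3\,\text{MB},
\]

which is why such links need large windows and have historically motivated performance-enhancing proxies (PEPs).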
E: It's a heavier page, and even though the large page takes some time for the data to actually arrive, once we have the first bits of the connection everything loads and is quite fluid. And also on the light page, on the left, you can see that even though there are some requests and some things that require interactivity, it's not that bad. Next slide, please. The last but not least myth is that we use middleboxes — and that is not a myth.
E
We
have
nasty
middle
boxes,
we
split
TCP
connections,
so
we
actually
use
some
transpired
competition.
That
is
not
that
bad,
but
for
the
for
the
TCP
connections,
we
make
our
own
TCP
connection,
so
we
break
the
end-to-end
connectivity
and
they
are
the
main
problem
we
have.
Is
that
it's
difficult
to
have
the
support
and
to
consider
the
whole
recent
improvements
you
may
we
may
have
other
servers
or
as
a
client,
for
example,
also
TF,
all
things
and
other
thing
next
slide.
E: Please. We use middleboxes for many reasons, and basically we have a quite specific network. I will not go into the details because we don't have much time, but another main problem we have is that it's a small community. So it's difficult for us to push specific modifications to TCP for our use case, because for terrestrial and fixed networks it's a corner case; and we also have a scarce resource to deal with.
E
E
So
that's
why,
when
it's
difficult
for
us
to
get
rid
of
this
strange
boxes
because
for
writes
such
good
results
at
the
moment
for
the
traffic
that
is
going
through
Bob
on
satellite
networks,
next
piece,
the
thing
I
want
that
why
we
put
in
results
on
BB,
r
and
MP
DCP
is
that
we
are
following
what's
happening
on
the
transport
and
we
are
constantly
figuring
out
whether
we
actually
need
these
boxes
or
not.
And
and
we
have
they
aware
there
were
some
modifications
in
the
NFC
2760
and
such
as
higher
congestion
window
packet
pacing.
E: We have different file transfers — FTP, with a server and a client downloading files — which are all sent with either the CUBIC or the BBR flavor. Next, please. On the left we can see the results of CUBIC and on the right of BBR; the top curve is the goodput and the bottom curve the state of the queue. We can see that CUBIC is acting as expected.
E: Basically, it fills up the whole queue, and on the bottom, in blue, we can see that lots of losses happen and we have an overshoot of capacity. On the right we see TCP BBR. Despite the strange satellite configuration, with the latency and the asymmetric links, we can see that we have a protocol that on the whole can use the available capacity, and with a low queue occupancy.
E
We
still
needs
to
have
some
specific
accelerations
bb/sbb
are
looking
at
more
application
level
metrics,
but
we
think
that
in
looking
at
the
queue
occupancy
and
other
shoots,
it's
quite
interesting.
However,
all
traffic
flows
in
the
Internet
today
are
not
ABR
to
still
so.
The
thing
is
we
still
for
the
moment,
we'll
have
to
make
some
TCP
speeds
and
so
on
anyway,
next
piece
and
the
other
use
case
I
will
show
you
is
about
the
MPT
CPUs
gave
v
on
the
top.
E
The
trend
for
the
5
GCM
networks,
architecture
and,
at
the
bottom
the
same
thing
that
we
can
have
on
satellite
access.
At
the
moment
we
have
an
MP
TCP
concentrator
here
that
basically
it
here
because
we
may
not
have
participe
deployed
if
we
were
on
Terrace.
So
that
is
a
use
case.
We
consider,
but
we
are
more
focusing
on
metrics
at
the
relevance
of
this
use
case
anyway.
At
the
moment,
next
slide,
please.
E
We
could
have
results
in
the
lower
capacity,
but
next
slide.
Please
and
that's
another
important
result
we
have
had
in
these
studies,
but
basically
when
we
have
an
MP,
TCP
aggregator,
let's
say
at
the
terminal
side
of
the
from
one
who
has
a
satellite
internet
access.
As
soon
as
we
deploy
an
MP
TCP
sing
at
the
client,
we
cannot
accelerate
the
traffic
any
more
on
the
satellite
link.
So
today,
if
you
are
using
satellite
access,
you
will
not
be
able
to
use
MP
TCP
on
this.
E
You
have
use
case
so
there
have
been
some
discussions
on
that
in
the
in
the
NPT
to
be
mailing
list.
But
another
thing
is
that,
despite
the
fact
that
we
cannot
accelerate
the
traffic
still
MP
TCP
is
doing
better
than
each
of
the
single
accesses
and
have
another
slide.
Basically,
we
have
made
some
tests
on
VBR.
E
We
have
seen
an
increased
training
testing
trade-off
between
the
queue
occupancy
and
the
actual
food
that
you
can
gain,
how
we
see
from
late
comer
fairness,
but
what
is
flow
rate
furnace
anyway,
and
but
the
thing
is,
we
still
don't
know
at
the
moment
whether
we
will
deploy
from.
We
will
still
need
in
the
futures
on
TCP
acceleration.
On
the
satellite
things,
when
we
have
only
TCP
PBR
light
traffic,
but
at
the
moment
the
share
of
the
we
still
have
lots
of
cubic
flows
and
lots
of
other
flows
that
are
not
PBR.
E
So
we
we
will
not
stop
deploying
peps
today,
and
so
we
also
have
worked
on
MPT
I,
don't
know
how
much
time
I
have
left,
but
we
also
have
worked
on
MPT
CP.
That
seems
to
manage
the
large
asymmetry
we
have
when
we
use
that
right
thing.
However,
we
may
do
better
than
that,
so
that
is
something
we
are
working
on.
E: So these are the people who worked on preparing the results that have been used for this presentation. Also, if you have questions about the platforms that were used, don't hesitate: most of them are open source or open for researchers anyway, and we also have an open platform. If you want to do tests on a real satellite link, we can help you with setting it up and making it part of research projects.
G: My name is Anita, from G. This is a very, very interesting experiment for MPTCP. So, with the link being up there — if I access the link over there, can I replicate the experiment you did? Or do we need to get more information from you? Okay — so I'm interested in replicating your experiment, and the link up there has the information I'd need to replicate your experiment.
H: All right. So, sorry — my name is Roland Bless, and I'm presenting a paper that was published last month at ICNP. It's about the experimental evaluation of BBR congestion control; this is joint work with Mario and Martina. You all know BBR, Google's congestion control; we'll skip the introduction, because it has already been presented, I think, three times here.
H: So what's wrong — sorry, what's wrong with the model? The network model is fine for the bottleneck, but BBR uses it at the sender, and it basically lacks the dynamics of multiple senders. So that's one of the problems. The behavior in the single-flow case is quite straightforward: if you have only a single flow, for example at a 100 Mbit/s bottleneck, you see that as soon as BBR probes for more bandwidth, it will also see a higher delivery rate — here, what it detects is the dashed line.
H
One
round-trip
time
later
is
basically
the
same,
because
there's
not
much
more
bandwidth
to
provide.
So
in
that
case
it
tries
to
reduce
the
queue
that
has
been
built
by
the
down
phase
and
the
gain
cycling.
So
the
flow
probes
and
cannot
get
higher
delivery
rate,
since
the
bottleneck
is
already
fully
utilized
so
and
the
excess
data
is
then
removed
afterwards,
so
so
far
so
fine.
So
that's
also
what
we
saw
in
the
experiments.
So
our
test
setup,
we
used
one
gigabit
per
second
bottleneck,
and
you
can
see
here
the
throughput.
H
The
dips
here
are
basically
the
probe
RTT
phases
and
if
you
look
at
the
RTT,
you
can
also
see
you
nicely
the
probe
bandwidth
cycles
here
and
the
probe
oddity
phase,
so
everything
works
expected
since
the
model
fits.
But
what
about
the
multiple
flow
case?
So
if
we
have
multiple
flows,
let's
assume
we
have
simplified
case.
H
Two
flows
now
sharing
the
100
megabit
per
second,
so
that
we
have
50
megabits
each
and
we're
starting
with
one
flow
that
starts
to
probe,
and
so
the
gain
cycling
phase
up
will
then
try
to
get
more
bandwidth
and
it
actually
gets
more
bandwidth,
because
the
other
flow
then
basically
loses
bandwidth
so
the
first
floor.
That
is,
probing
actually
measures
a
higher
delivery
rate,
and
that
leads
to
the
in
the
next
phase
to
using
that
higher
bandwidth.
H
It
just
saw
due
to
the
maximum
filter
that
is
also
applied,
so
the
windowed
maximum
filter
keeps
the
sent
right
now
at
the
new
high
level,
and
the
other
flow
also
are
keeps
it's
sending
rate
at
the
old
level
due
to
the
maximum
filter.
So
that
basically
means
that
for
multiple
flows
are
both
flows
together.
So
this
chart
here
shows
basically
the
the
individual
sent
rates
and
the
combined
sent
ray,
which
is
the
red
line
here
and,
as
you
can
see,
both
flows
together
actually
are
way
above
the
bottleneck
capacity.
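To make that mechanism concrete, here is a minimal sketch of how two per-flow windowed maximum filters can jointly overestimate a bottleneck; the window length and the traffic numbers are illustrative assumptions, not measurements from the talk:

```python
from collections import deque

class WindowedMax:
    """Windowed maximum filter, as BBR uses for its bandwidth estimate."""
    def __init__(self, window=10):
        self.samples = deque(maxlen=window)
    def update(self, sample):
        self.samples.append(sample)
        return max(self.samples)

# Two flows fairly share a 100 Mbit/s bottleneck (50 each). In round 5,
# flow 1 probes and briefly takes bandwidth from flow 2.
f1, f2 = WindowedMax(), WindowedMax()
for rnd in range(10):
    s1, s2 = (60, 40) if rnd == 5 else (50, 50)
    est1, est2 = f1.update(s1), f2.update(s2)

print(est1, est2, est1 + est2)  # 60 50 110: combined estimate > 100
```

Flow 1 keeps its probe result (60) while flow 2's filter still remembers its old maximum (50), so together they pace for 110 Mbit/s on a 100 Mbit/s link.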
H: So what does that mean? In a large buffer — meaning the buffer is at least one BDP large — BBR basically operates at its in-flight cap, which means you have 1 to 1.5 BDP queued. So BBR's operating point is not at the optimal point, as the paper points out, but more or less depends on the buffer size — here, for example, at maybe two BDP — and in a smaller buffer it's different.
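In BBR v1 terms (the gain constant is from the public BBR v1 code, not from this talk), the in-flight cap that produces this standing queue is

\[
\text{inflight\_cap} = \text{cwnd\_gain} \times \widehat{\text{BtlBw}} \times \widehat{\text{RT}_{\text{prop}}} = 2 \times \text{BDP}_{\text{est}},
\qquad
\text{standing queue} \approx \text{inflight} - \text{BDP} \approx 1\,\text{BDP}.
\]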
H: We needed a dedicated one gigabit per second link, because these interfaces were not able to switch down the link speed. We used the Linux 4.9 implementation of BBR, with bottlenecks at one gigabit and 10 gigabit per second. The RTT was around 20 milliseconds in most scenarios, and we had two types of bottleneck buffers: the large version is 160 milliseconds, while the small one is 16 milliseconds; with the base RTT these correspond to 8 BDP or 0.8 times the BDP.
H
So,
let's
start
with
two
flows
in
a
large
buffer
scenario,
so
two
flows
with
the
same
minimum
RTT,
the
the
prediction
of
the
model
says
that
basically
the
PBRs
operating
at
it's
in
flight
caps
OBB
our
uses
accuse
one,
be
deep
in
the
buffer
and
that
can
be
seen
here.
So
we
have
an
RTT
base,
our
TT
of
twenty
milliseconds,
and
this
is
basically
doubled.
H
Nearly
all
the
time
the
two
flows
are
active,
so
a
single
flow
operates
at
the
base
or
DT,
but
as
soon
as
the
second
flow
starts,
we
have
one
bpq.
So
if
we
increase
the
minimum
RTG
to
40
milliseconds,
this
should
double,
and
this
is
what
we
also
can
see
here.
So
it's
not,
then
the
effective
oddity
is
at
80
milliseconds,
and
so,
if
we
use
80,
milliseconds
or
80
min,
we
see
in
that
effective
ret
doubles.
It
was
at
160
milliseconds,
so
multiple
flows
in
a
small
buffer
where
small
is
not
really
small.
H: "Small" here is 80% of the BDP. If we use six flows, which means two per interface here, BBR causes massive packet loss, because the operating point is, as just explained, even beyond the buffer capacity. If you look closely at the senders' transmission rate here, you can see that they're sending above bottleneck capacity, actually even beyond one gigabit per second; we have roughly 10% overload here.
H
So
that's
quite
a
lot.
What
about
interrupt?
Interpret
all
fairness,
so
if
you
are
a
let
VBR
run
versus
cubic
for
example,
in
this
case
we
have
one
gigabit
per
second
bottleneck,
one
baby,
our
flow
against
one
cubic
flow,
and
we
have
small
buffers
so
baby
are
basically
suppresses
the
loss
based
congestion
control
here,
because
yeah
VBR
is
so
aggressive
that
it
causes
basically
packet
loss
and
cubic
then
backs
off
while
PBR
tries
to
get
as
much
bandwidth
as
possible.
H
But
this
are
still
more
or
less
within
the
model
of
PBR,
because
we
only
have
a
single
PBR
flow.
So
our
expectation
was
that
it
get
it
gets
worse
as
soon
as
you
started,
an
additional
be
awful,
and
that
is
really
the
case
so
to
bebe.
Arbors
is
to
cubic
floors,
for
example,
the
model
doesn't
hold
anymore,
and
in
that
case
multiple
PBR
flows
simply
behave
more
aggressively.
H
If
you
have
multiple
PBR
flows,
because,
as
explained
earlier,
the
operating
point
of
view
RS
here
and
at
least
in
the
version
that
we
tested
it,
it
doesn't
react
to
packet
loss,
so
congestion
packet
loss
as
congestion
signal,
is
basically
ignored,
so
intra
protocol.
Fairness
is
also
interesting.
So
are
again
six
floors
here,
twenty
milliseconds
or
eighty.
H
We
were
not
able
to
find
any
real,
consistent
fairness
behavior,
so
you
can
see
our
different
scenarios
like
more
or
less
fair
share
the
bandwidth
or
sustained
faces
of
few
flows.
Getting
only
a
small
portion
of
bandwidth
by
others
get
most
of
it
and
something
in
between,
and
also
here,
even
in
a
small
buffer.
Sometimes
you
see
are
quite
good
fairness.
H
So
what
about
RTD
fairness
since
the
the
prediction
is
that
bbro
operates
at
it's
in
flight
cap
and
in-flight
cap,
then,
basically
is
calculated
on
base
of
the
scene,
R,
DT
and
times
the
available
bandwidth.
So
the
bandwidth
delay
product
we
just
use
twenty
milliseconds
forty
milliseconds
in
80
milliseconds,
a
space
RT
t
for
three
different
flows
flowing
at
the
same
time,
and
so
the
prediction
is
that
each
PBR
flow
operates
at
it's
in
flat
cap
of
to
V
DP
in
a
large
buffer.
H
So
we
have
a
large
profit
scenario
here
and,
as
you
can
see,
that
the
flow
with
the
largest
RTT
basically
is,
which
is
the
red
one
here
gets
the
most
most
of
the
bandwidth
sherry.
So
the
20
millisecond
flow
is
ya,
know
nearly
here
down
at
the
bottom,
while
the
40
milliseconds
flow
still
gets
some.
So
this
is
due
to
the
fact
that
the
larger
rgt
means
more
data
in
flight,
and
this
directly
leads
to
unfairness.
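The arithmetic behind this (the proportionality is my simplification, using the in-flight cap from above): each flow's cap scales with its own RTT, so

\[
\text{inflight}_i = 2 \cdot \widehat{\text{BtlBw}}_i \cdot \text{RTT}_i \;\propto\; \text{RTT}_i
\quad\Rightarrow\quad
20 : 40 : 80\ \text{ms} \;\rightarrow\; 1 : 2 : 4,
\]

and since the flows share one queue, the higher-RTT flows hold proportionally more of it and thus get more bandwidth.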
H
So
float
with
the
larger
RTG
have
basically
an
advantage
here
so
on
to
summarize
net
PBRs,
basically
model
based
congestion,
control
and
works.
Well,
with
no
congestion
is
present
so
for
a
single
flow
at
the
at
the
bottleneck.
It
works
perfectly
right,
but
for
multiple
flows
on
PBR
steadily
increases
the
amount
of
in-flight
data
and
some
large
buffers
PBR
operates
its
in-flight
cap
and
also
has
a
consequence
of
showing
oddity
unfairness
and
in
small
buffers
we
saw
a
high
amount
of
packet
losses,
because
our
PPR
also
ignores
packet
loss
as
congestion
signal.
H
So
we
found
no
consistent
fairness,
behavior
and
especially,
we
saw
unfairness
to
flows
with
loss
based
congestion,
control,
cubic
or
Reno,
for
example.
So,
as
you
all
know,
BB
ours
already
news,
but
probably
right
now
its
many
application
limited
where
it's
used.
So
we
maybe
don't
see
these
kinds
of
effects
and
these
were
all
more
or
less
generated
and
then
in
a
lab.
So
that's
kind
of
maybe
not
real-world
scenario
but
still
think
know
what
you're
worth
to
to
think
about.
So
BB
are
still
on
a
development
and
I.
C: About BBR — I suppose the questions are probably going to come at the end of the BBR session. No? Oh, okay, there is one.
J: On data in flight: I was wondering if there's a risk that its estimate of the round-trip delay will be inflated because of the over-full buffers, which means its bandwidth-delay product estimate and the cap based on it grow, and the thing spirals out of control, because it's actually not getting accurate data. It sounds like you didn't see that; I wondered if you had any intuition of why.
H: No, actually, we do see that. Basically, for example, in the beginning, when a flow starts, you see that the flows will actually measure a higher effective RTT; in this case we have more or less some kind of latecomer advantage, right, and that's what we saw. But after some time BBR is able to more or less synchronize the flows going into probe-RTT, and then they have more or less the ground truth of seeing the right minimum RTT, and then that corrects itself. But you're right.
K: Well, this is Geoff Huston from APNIC. We've done experiments like this over the Internet. We were using data centers in Frankfurt, London and Australia, and the path between London and Frankfurt was actually interesting: it's a short RTT, but it's across the production Internet, and BBR, running on standard Linux with three parallel flows between the same endpoints, was doing a sustained 50% packet loss rate while maintaining a stable 300 megabits per second.
K
Of
it
brings
up
basically
to
be
DP,
like
that's.
The
theory
and
practice
are
exactly
the
same,
but
this
sort
of
bizarre
behavior
that
absolutely
sustained
massive
loss
rate
when
I
tried
to
bring
in
cubicle
Reno
no
packet
sweat.
You
know
they
just
got
crowded
out
on
loss
as
you'd
expect,
so
it
stabilizes
at
a
phenomenal
loss
rate
when
under
Germany
to
Australia
it
hits
about
140
million
megabits
per
second,
which
is
great
on
the
production
Internet.
Interestingly,
though,
the
loss
rate
is
way
down,
it's
earning
on
short
delay.
D
So
Jemaine
got
an
address
Wow
number
of
threads
here,
but
just
very
quickly.
There's
the
the
RTD
inflation
doesn't
really
happen
because
PVR
uses
for
its
beauty
estimate.
It
uses
the
minimum
on
TT
measured.
So
it's
not
the
continuous
oddity
or
the
SRT
or
the
instantaneous
oddity.
So
the
minimum
oddity
is
measured
and,
as
Roland
pointed
out,
there
is
a
probe
RTT
phase,
which
happens,
which
allows
which
is
designed
to
allow
flows
to
measure
a
minimum
RTT
so
that
during
periods
when
the
network
is
not
fully
utilized,
so
that's
that's.
M: Yeah, so my name is Neal Cardwell, and I'm going to give a quick update — I guess I just have ten minutes — on recent work we've been doing at Google on some updates to the BBR algorithm. This is joint work with the folks you see there. So we're going to start out with a super quick review of BBR version one — we'll make it quick, since Roland did a nice job of summarizing version one —
M
One
then
we're
going
to
dive
into
some
recent
work
that
we
doing
on
BB
our
version
two
and
then
we're
gonna
focus
our
discussion
on
the
behavior
and
shallow
buffers.
Since
obviously,
you
know
this
has
come
up
in
in
Roland's
talk
and
in
the
Q&A
session
afterwards,
and
then
we
will
have
a
quick
conclusion
next
slide,
please.
M
So
the
motivating
problem
here
for
bbr,
as
has
been
mentioned,
is
that
you
know
there
are
these
issues
of
philosophies,
congestion,
control.
If
the
packet
loss
comes
before
the
congestion
which
can
happen
in
shallow
buffers
and
with
bursty
traffic
or
random
loss,
then
your
loss
based
congestion
control
can
get
low
throughput,
but
on
the
other
hand,
if
losses
come
after
congestion,
which
happens
in
deep
buffers,
you
get
the
problems
that
people
have
called
buffer
blow
with
high
delays.
Next
slide.
Please.
M: So BBR is basically a new kind of approach that builds an explicit model of the network path. Version one had just the maximum bandwidth that's been recently seen and the minimum round-trip time, and it uses that model to control its sending, varying its pacing rate and cwnd to try to explore the network and feed samples back into that model.
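As a minimal sketch of that two-parameter model (window lengths and gain constants follow the public BBR v1 description, not this talk):

```python
from collections import deque

class BBRv1Model:
    """BBR v1's path model: windowed max of bandwidth samples and
    windowed min of RTT samples, driving pacing rate and a cwnd cap."""
    def __init__(self):
        self.bw = deque(maxlen=10)    # delivery-rate samples, ~10 RTTs
        self.rtt = deque(maxlen=100)  # RTT samples, ~10 s worth

    def on_ack(self, delivery_rate, rtt_sample):
        self.bw.append(delivery_rate)
        self.rtt.append(rtt_sample)

    def pacing_rate(self, pacing_gain=1.0):
        return pacing_gain * max(self.bw, default=0)

    def cwnd_cap(self, cwnd_gain=2.0):
        # cap in-flight data at cwnd_gain x estimated BDP
        return cwnd_gain * max(self.bw, default=0) * min(self.rtt, default=0)
```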
M
Next
page,
please
so
you
know
just
a
quick
summary
of
where,
where
we've
been
so
far,
so
with
VP
our
version
one,
we
deployed
it
to
Google
for
TCP
and
quick
traffic
on
google.com
and
YouTube
and
when
backbone
connections.
The
code
is
available
in
Linux,
TCP
and
quick
and
there's
also
active
work
on
bvr
going
on
at
Netflix
and
we're
in
close
communication
with
that
team.
There's
some
internet
drafts
discovering
describing
the
algorithm
and
we've
presented
it
ITF
and
there's
a
sort
of
an
overview
paper
in
the
communications
of
the
ACM
next
slide.
M
Please
so
currently
we're
working
on
improving
the
VBR
algorithm
and
there
the
effort
is
sort
of
continuing
on
multiple
fronts.
One
of
our
big
of
embassies
is
reducing
the
packet
loss
rate
in
shallow
buffers,
which
has
been
discussed
nicely
today.
This
means
that
we
were
trying
to
tune
the
algorithm
so
that
I
can
handle
both
deterministic
and
stochastic
loss
in
a
reasonable
way
and,
alongside
this
steady
state
improvement,
we're
also
working
on
a
faster
exit
of
the
start,
upload
in
the
presence
of
packet
loss.
M
That's
sort
of
a
lengthy
topic
so
we'll
have
to
get
to
the
details
on
that
another
day.
Another
IETF,
another
area
where
we're
working
is
on
improving
the
the
VP,
our
throughput
in
Wi-Fi
and
cellular
and
cable
networks,
where
we
see
all
sorts
of
very
common
act,
aggregation
and
decimation
features
and
the
work
there
is
sort
of
in
in
two
areas.
M
One
is
improving
the
bandwidth
estimator
for
this
case
and
then
the
other
is
making
sure
we've
provision
enough
data
in
flight
by
modeling
aggregation
and
we're
seeing
pretty
decent
results
with
the
latest
Wi-Fi
LAN
testbed
increasing
bandwidth
from
40
megabits,
with
version
one
to
270,
with
version
two
you
and
we're
working
on
improving
behavior
in
the
data
center
with
lots
of
flows
and
basically
the
goal
is
to
use
PBR
for
all
Google,
TCP
and
quick
traffic,
whether
it's
data
center,
where
public
Internet
next
slide.
Please
so.
We've
deployed
a
couple.
M
M
So
just
a
quick
comment
on
the
sort
of
the
major
issue
we've
been
focusing
and
that
has
come
up
today.
A
lot
so
bbr
has
a
known
issue
with
paper
and
cello
buffers
which
roland
discussed
and
some
depth-
and
you
know
the
real
cause
has
to
do
with
interactions
between
man
with
probing
and
then
with
estimator,
and
you
know
core
part
of
this
is
basically
that
the
bandwidth
probing
is
based
on
a
simple
static
proportion
of
the
model
parameters.
So
we
go
at
one
point.
M
Two
five
times
estimate
bandwidth
once
every
eight
round-trips-
and
you
know
this
was
based
on
a
set
of
competing
trade-offs
among
real-world
issues
that
we
saw
in
you
know
real
deployments.
You
know.
Cell
systems
often
have
dynamic
bandwidth
allocation,
so
they
need
a
big
backlog,
whereas
shallow
buffered
switches
with
lots
of
bursty
traffic,
they
need
some
compensation
for
that
stochastic
loss.
So
there
are
a
couple
of
different
trade-offs
going
on
here,
but
basically
for
BPR
version,
one.
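That static ProbeBW cycle can be written down in a few lines (the gain values are from the public BBR v1 code, consistent with the 1.25x-every-eight-round-trips description above):

```python
# BBR v1 ProbeBW pacing-gain cycle: probe up 25%, drain, then cruise.
PROBE_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

def pacing_gain(round_trip_count):
    return PROBE_GAINS[round_trip_count % len(PROBE_GAINS)]
```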
M: we chose, for simplicity and robustness in the cases that we were deploying it in, a sort of one-size-fits-all static probing system. BBR version two is going to use a dynamically adaptive approach that I'll discuss briefly here. So, next slide, please. There are a couple of different changes here; I would say there are basically three main buckets into which these changes fall.
M
The
first
one
is
a
generalization
and
a
simplification
of
the
the
long-term
bandwidth
estimator,
which
was
previously
only
targeted
scenarios
with
polices,
but
we
sort
of
generalized
to
apply
to
fast
recovery
in
general,
and
here
the
bandwidth
estimate
is
a
windowed
bandwidth
average
over
the
last
two
to
three
round-trip
times.
The
second
big
changes
is
some
new
algorithm
parameters
to
adapt
to
shallow
buffers.
Not
surprisingly,
we
have
a
an
estimate
of
the
maximum
amount
of
data
that
we
think
we
can
safely
inject
in-flight
into
the.
M
We
think
we're
causing
loss
and
second,
we
have
the
current
volume
of
data
that
we're
going
to
use
the
next
time
we
probe
the
network
to
see
if
there's
more
capacity-
and
you
know
it
starts
gently
in
one
packet
and
then
doubles
upon
success
to
rapidly
explore
if
more
bandwidth
suddenly
becomes
available
and
then
the
third
big
piece
is
a
sort
of
new
full
by
pipe
and
buffer
estimator.
That
uses
a
loss
rate
signal.
M
Current
iteration
is
just
using
something
simple,
looking
at
the
loss
rate
over
the
scale
of
a
round
trip
being
5%,
but
we're
looking
at
other
alternatives
as
well,
and
once
we
once
that
estimator
estimates
that
the
pipe
and
buffer
are
currently
full.
It
takes
some
steps
that
will
be
surprising
to
people
in
this
room.
We
sat
at
the
estimate
of
the
maximum
safe
volume
of
data
that
we
can
have
in
flight
to
the
current
flight
size.
M: Then we do a multiplicative decrease of the in-flight cap to 0.85 times that estimate, and then we wait at that sort of backed-off level for an amount of time that is currently a sort of scalable function, in the 1 to 4 second range, of the estimated bandwidth — to give us a sort of RTT-fairness mechanism that can also re-probe within a reasonable amount of time.
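A hedged sketch of that reaction sequence as described (the function and variable names and the round-trip loss accounting are mine; only the 5% threshold and the 0.85 factor are from the talk):

```python
LOSS_RATE_THRESH = 0.05  # loss over ~1 round trip that signals "full"

def pipe_and_buffer_full(lost_pkts, delivered_pkts):
    total = lost_pkts + delivered_pkts
    return total > 0 and lost_pkts / total > LOSS_RATE_THRESH

def on_full_signal(current_inflight):
    inflight_hi = current_inflight        # new max-safe-volume estimate
    inflight_target = 0.85 * inflight_hi  # multiplicative decrease
    return inflight_hi, inflight_target   # then hold back for ~1-4 s
```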
M: And this also gives us a sort of design curve that we can adjust to trade off how gentle we want to be on Reno and CUBIC versus how reasonable we think it is to wait — say, the way CUBIC waits 40 seconds to re-probe for bandwidth if you're running at 10 gigabits, or even 18 seconds if you're running at a gigabit and your RTT is 100 milliseconds. Anyway, that's a discussion for another day, but this is continuing work. Next slide, please. This is just a scenario to give you a sense of how the v2 approach behaves.
M
So
this
is,
you
know,
100
megabit
path,
100,
millisecond,
RTT,
very
shallow
buffer,
it's
just
5%
of
the
BGP,
and
this
is
just
a
60
second
Volta
transfers
with
six
flows.
This
is
staggered
entry
times,
so
this
is
sort
of
comparing
cubic
PBR
version
one
and
then
the
current
iteration
of
VB,
our
version
2
with
throughput
on
the
left.
You
know
you
can
see
that
basically
cubic
has
some
nice
fairness
properties,
the
barrel
version.
M
One
does
not
of
the
level
of
fairness
that
would
like,
but
VBR
version
to
as
a
pretty
decent
fairness
and
sort
of
approaches
cubic.
And
then
there
you
transmit
rate,
you
know,
cubics
is
nice
and
low
less
than
one
percent
BB
are
version.
One,
as
has
been
discussed,
is
pretty
high.
Here
it
was
15%
and
then
PVR
version
to
the
current
iteration
as
a
loss
rate.
That's
about
an
order
of
magnitude
lower
than
PPR
version,
one!
That's
in
the
neighborhood
of
1.3
1.4
percent
next
slide.
M: You can see basically BBR version one's fairness issues there, and then you can see that CUBIC and BBR version two do a nice job of leaving a little bit of headroom there, in terms of idle space, so that new flows can come in and discover the available bandwidth. And you can see that CUBIC and BBR version two are not constantly riding right up against the 100 Mbit/s mark, so they can keep the loss rate nice and low.
M
Alright
next
slide,
please
so
that
was
just
a
quick
discussion
of
PBR
version,
1
and
then
sort
of
some
quick
taste
of
whom
PBR
version
2
and
continued
work
continues
on
both
the
Linux
TCP
implementation
and
the
quick
implementation
at
Google
and,
as
I
said,
there's
work.
Some
nice
work
on
the
way
for
PBR
in
FreeBSD,
TCP
Netflix
and
of
course
we
are
always
happy
to
hear
more
test
results
and
more
research
results,
as
we
continue
to
work
on
PBR
on
our
end
as
well,
so
yeah,
so
I
guess
next
slide
and
then
any
questions.
O: All right, it's Bob again. The question is: where did the evidence come from that loss is not a good estimator for congestion? I don't think we've seen any evidence for that, and I think that's sort of one of the underlying problems here, in that there's this sort of prejudice against using it, whereas it's all information that you need to have. I know you've started to use it here, but you haven't really used it; you've just sort of tried to avoid it.
M
So
I
wouldn't
say
they
were
trying
to
of
just
trying
to
avoid
it.
I
mean
what
I
think
we
are
trying
to
use
the
law
signal
here,
but
anyway,
back
to
the
question
of
about
what
we
think
about
loss
and
in
its
relationship
to
congestion.
So
the
the
way
I
think
about
it
is
that
you
know,
as
I
said
early
on
in
the
slides,
it's
very
common,
for
example,
for
buffers
to
be
very
deep.
You
know
the
well
and
often
discussed
buffer
blow
problem.
M: where the packet loss doesn't happen until the queue gets way out of control and quite a bit too long. So that's one case in which packet loss is not a good proxy for congestion, because the congestion happened very early and the packet loss only happened once the queue got to be seconds long. That's one end of the spectrum, and we definitely, obviously see it; it's well described in many contexts.
M
The
other
situation
where
we
see
that
there's
this
real
disconnect
between
packet
loss
and
the
point
of
congestion
is
in
high
speed
shallow
buffered
winds.
So
if
you
have
a
network
that
you
build
out
of
commodity
switches
and
let's
say
they're
operating
at
ten
gigabits
or
faster-
and
let's
say
that
your
switches
have,
you
know
some
number
of
megabytes
of
buffer
memory
because
they're
just
commodities,
which
is
you,
know,
they're
what
you
can
afford
for
your
look
large
hide
speed
network.
M
So
in
this
case,
if
you
have
a
couple
of
megabytes
of
switch
buffer
and
you're
going
at
ten
gigabits
or
faster
than
you
know,
often
in
these
scenarios,
then
you
have
just
one
or
two
milliseconds
of
buffer
space.
Essentially,
and
if
your
round-trip
time
is
say,
100,
milliseconds
or
200
milliseconds,
which
can
obviously
happen,
then
you
can
you
you
can,
and
we
do
see
in
Google's
high-speed
who
had
backgrounds
that
you
get
these
scenarios
where
a
flow
is
operating
with
100
millisecond
round-trip
time
and
during
that
hundred
milliseconds.
M
You
would
get
a
signal
that
you
interpret
to
mean
slow
down
and
no
matter
what
you
do
you
you
get
a
signal
to
slow
down,
but
meanwhile
there
could
be
that
link
could
be
90%
idle
because
maybe
there's
90
milliseconds
of
total
silence
and
only
10
milliseconds
split
up
into
little
chunks
here
and
there
or
that
2
millisecond
buffer
was
full.
So
this
is
a
kind
of
the
other
kind
of
environment.
That's
important,
at
least
to
us,
and
it's
an
environment
where
packet
loss
is
not
closely
related
to
congestion.
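The buffer arithmetic behind that claim (my worked numbers, matching the magnitudes in the talk):

\[
t_{\text{buffer}} = \frac{\text{buffer size}}{\text{link rate}} = \frac{2\,\text{MB}}{10\,\text{Gbit/s}} = \frac{16\,\text{Mbit}}{10^{4}\,\text{Mbit/s}} = 1.6\,\text{ms},
\]

i.e. a couple of megabytes at 10 Gbit/s is only one or two milliseconds of queue, against a 100-200 ms path RTT.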
O: In other words, the assumption that loss is not really loss, or is not really congestion, means that BBR is holding at a certain delay level in the buffer, or higher, rather than trying to go slightly under it to allow all these other little things to happen.
M: Well, I think that's a fair description of BBR version one. As you probably saw in the presentation, we're trying to incorporate the kinds of considerations that you are mentioning right now, so I think we are largely on the same page as far as that particular issue is concerned. We want to respond to the bandwidth that's actually available to the flow, and we want to use loss as a signal to do that, in addition to the delivery rate and the round-trip time of the path.
P: I just wanted to add a comment or two. One is that it's not like, inside Google, we don't care about packet loss — it costs us a substantial amount of RAM, and the cost of that is quite significant.
P
But
it's
not
a
reliable
indication
of
congestion
in
the
way
that
it
was
designed
to
be
in
Reno,
so
I
think
BBI
2
is
going
and
by
the
way
this
is
the
first
time
I've
heard
exactly
what
Neal's
planning
will
give
you
attitude
the
I
think
the
about
Libya
is
heading
in
the
right
direction
of
softening
all
the
responses
to
some
a
reasonable
control
algorithm,
rather
than
being
super
super
hard
on
everything,
and
that
was
BBA
ones.
Mistake
being
super
hard
on
those
variants.
Q: I could even say: I saw that you have this kind of threshold where, at five percent loss, you do something. That's already great — so you don't ignore loss completely; I think this is really important for the safety of the Internet. However, five percent still seems high, so I guess that doesn't solve the fairness problem you have with loss-based congestion control, right?
M
You
had
5%
pretty
quickly
due
to
the
quantization,
but
anyway,
so
the
let's
see
the
the
other
consideration
I
can
think
about.
Is
that
what
matters
you
know
in
cubic
in
terms
of
what
they
can
utilize
and
in
terms
of
bandwidth
is
not
really
the
loss,
as
it
would
be
measured
with
primers,
really
the
distance
in
time
between
the
lost,
epochs
and
I.
Think
right
now
or
personally,
I'm
thinking
about
it
in
terms
of
making
sure
the
lost
epochs
that
BB
are.
M
M
We
want
to
be
reasonably
fair
to
these
algorithms,
but
hey
they're,
not
really
fair
to
myself.
B
they're,
not
really
RTT,
fair
to
themselves
for
sure
and
then
see
they
have
some
unreasonably
long
timescales
for
adjusting
in
some
cases,
so
I
think
it's
gonna
be
tricky
that
curve,
but
I
think
now
we
at
least
have
a
dial
in
the
algorithm
to
choose
this
explicitly
and
unconsciously.
J: Stuart Cheshire, Apple. I know we're short on time, so I'll make this brief. My comment is kind of along the same lines as Bob's, and sort of along the same lines as Geoff's about massive packet loss. What concerns me is that we have this belief — which I heard repeated — that loss is not always due to congestion; sometimes it's just random stuff that happens. And I hear this a lot from my management at Apple.
J
People
who
studied
computer
science,
but
not
networking
and
the
one
thing
they
remember
from
their
networking
class-
is
that
the
internet
loses
packets
all
the
time
and
therefore
TCP
is
wrong
to
treat
that
as
a
sign
of
congestion,
because
it's
not
it's
just
the
way
the
internet
works.
It's
throwing
packets
away
for
no
reason
all
the
time,
and
that
may
or
may
not
be
true.
It
may
be
that
there
are
Wi-Fi
algorithms
that
are
over-aggressive
with
the
transmit
rate
they
pick
and
their
rate
had
apt.
J
Ation
is
not
finding
a
reliable
operating
point,
but
if
that's
true,
the
question
is:
what
do
we
do
about
it?
And
why
answer
is
we
put
bigger
buffers
in
these
switches?
We
improve
the
Wi-Fi
algorithms
so
that
random
loss
is
not
happening,
are
quite
the
same
level.
That's
one
approach.
The
other
approach
is
to
say:
oh
forget
it.
Let's
just
ignore
loss.
Loss
means
nothing
anymore.
J: If you've got 50% packet loss on every link, then after 10 links you've only got one in a thousand packets getting out the other side, and I worry we'll be in a situation where we're all dumping a firehose of data into the network and basically nothing is coming out the other side. And that takes bandwidth on shared links: it's using Wi-Fi spectrum, it's using battery power, it's consuming a bunch of resources for data that doesn't survive to the other side.
H
This
is
Roland
again,
so
this
is
John
worked
with
Mario
Felix
and
Martina,
it's
about
TCP
Lola,
which
is
an
approach
toward
low
latency
and
high
throughput
congestion
control.
So
the
motivation
is
to
achieve
high
throughput
and
low
delay,
so
this
is
typically
considered
as
a
trade-off
or
even
as
conflicting
goals.
So
we
think
this
is
not
necessarily
so,
and
so
we
try
to
mitigate
actually
this
trade-off.
So
there
are
several
approaches
here.
H
We
all
know
like
aqm,
more
it's
weeks
to
existing
congestion
control
and
coming
with
aqm
workers,
l-
back
off
with
ECNs,
or
that
you
can
keep
the
utilization
high,
even
if
you
have
a
qm
and
traditional
law,
space
tcp
operating
on
that
and
yeah
there's
also
the
other
approach
of
trying
out
new
congestion
controls.
So
we
wanted
to
investigate
how
far
we
can
actually
get
with
the
congestion
control,
achieving
low
queuing,
delay,
high
utilization
and
throughput.
H
We
want
to
be
scalable,
so
that
means
that
we
can
also
use
even
ten
gigabit
per
second
and
me
on
so
in
several
orders
of
magnitude
of
bandwidth
that
you
need
to
scale
with.
We
want
to
achieve
in
our
TT
fairness
and
the
whole
approach
should
work
with
at
least
and
beginning
with
regular
Ted
or
cues.
Our
focus
was
basically
on
wide
area
networks
and
not
data
center
networks.
H
Specifically,
so
this
is
not
excluded,
but
wasn't
wasn't
in
our
initial
focus,
so
the
general
goal
is
basically,
if
you
know
our
point
of
view,
to
determine
a
suitable
amount
of
in-flight
data
and
order
to
achieve
high
bottleneck
link
utilization.
So
only
on
the
other
side,
we
want
to
avoid
creating
any
standing
queues
in
order
to
keep
the
community
lay
low.
So
what
Lola
does
is
it
provides
a
configurable
fixed,
targeted,
I
value,
so
we
can
say
we
want
to
have
a
queuing
delay.
H
You
know
more
as
five
milliseconds,
for
example,
and
on
our
definition
of
congestion.
We
think
that
congestion
is
when
we
see
a
persistent
queuing
delay
above
the
fixed
target.
So
the
challenge,
then,
is
basically
also
to
achieve
good
convergence
to
fairness,
so
that
the
the
problem
is,
you
may
end
up
with.
If
you
only
want
to
achieve
the
first
two
goals
without
considering
fairness,
you
may
end
up
with
a
total
amount
of
in-flight
data,
which
is
quite
okay,
but
you
may
see
unequal
ratios.
H
So
the
idea
is,
then
that
basically,
you
need
to
in
cry
and
increase
the
in-flight
data.
One
sender
while
reducing
it
for
others
and
the
this
particular
challenge
is
you
have
a
small
Q?
The
flow
is
more
or
less
interact
via
the
Q,
and
so
you
have
less
room
then
for
interaction.
H
If
you
want
to
keep
the
queuing
delay
small,
so
this
is
basically
more
difficult
and
we
want
to
do
that
without
sacrificing
the
low
delay,
also
not
allowing
overshooting
in
the
queue,
for
example,
and
in
order
to
appropriately
inform
more
bandwidth
and
so
on.
So
the
basic
approach
is
that
we
use
queuing
delay
thresholds,
so
this
is
showing
the
bottleneck
buffer
queue
here
and
we
define
more
or
less
the
areas.
So
one
the
first
area
is
below
a
threshold.
Targets
are
called
Q
low,
where
the
link
utilization
is
basically
unknown.
H: We try to stay below Q_target, which, as I just briefly introduced, may be set to 5 milliseconds, for example; in this green area we want to achieve the high throughput and low delay. But once we get beyond Q_target, we have a state of congestion, and we then try to actually back off.
H
You
need
to
estimate
the
queuing
delay
right
now,
we're
using
minimum
filter
over
fixed
time
period
in
order
to
measure
the
standing
Q
and
we
also
have
a
heuristic
in
order
to
adapt
to
any
network
path
changes.
So,
for
example,
if
you
have
path
change
and
the
minimum
R
DT
actually
increases,
then
we
you
need
to
adapt
to
that
once
a
while.
H
So
overall
Lola
is
a
congestion
window
based
approach
and
we
also
use
packet
pacing,
which
is
beneficial
for
its
operation
in
order
to
get
better
measurements,
but
it's
not
necessary
as,
for
example,
in
PPR,
so
the
flow
States
or
the
works
roughly
like
this.
We
have
a
slow
start
phase,
which
is
nearly
as
always
you're
doubling
sending
rate
Mullis.
So
we
have
some
exit
condition
where
we
try
to
exit
early
in
order
to
avoid
overshooting
too
much
in
the
queue
in
through
the
queue.
Then,
in
this
yellow
area.
H
Here,
where
the
link
utilization
is
unknown,
we're
starting
just
are
like
cubic
to
increase
the
congestion
window
until
we
basically
cross
this
queue
low
threshold.
Here
once
we
cross
that,
so
the
estimated
queuing
delay
is
larger
than
Q
lower.
Here
we
enter
a
mode
which
is
called
fare
flow
balancing.
So
this
is
a
special
stage
where
we
try
to
achieve
fairness,
which
is
explained
on
the
next
slide
and
then
once
we
actually
across
the
the
queue
targets
delay.
So
when
we
enter
a
more
or
less
congestion
state,
each
flow
holds
its
congestion.
H
We
know
for
some
time
in
order
to
let
others
detect
actually
that
they
crossed
all
together
on
this
state
here
or
the
the
target
here
and
then
this
is
also
important
on
some
action
is
trigger,
namely
in
the
so
called
Taylor
decrease.
So
this
state
or
this
action
tries
to
actually
drain
the
queue
completely,
but
still
keep
the
unity
utilization
high.
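A hedged sketch of the threshold logic just described (the threshold values are the ones given later for the testbed; the state names follow the talk, and the function shape is my simplification):

```python
Q_LOW = 0.001     # 1 ms: below this, link utilization is unknown
Q_TARGET = 0.005  # 5 ms: above this, we declare congestion

def lola_phase(queuing_delay):
    if queuing_delay < Q_LOW:
        return "cubic_like_increase"   # grow cwnd to find the capacity
    elif queuing_delay < Q_TARGET:
        return "fair_flow_balancing"   # converge flows to fair shares
    else:
        return "hold_then_tailored_decrease"  # drain the queue fully
```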
H: So, for example, the green one here starts alone; then a second one joins here. As you can see in these gray areas here, which are the phases of fair flow balancing, the green one always keeps its congestion window until they more or less come together, and the other flow here — the flow with less than the fair share — actually increases its congestion window, as you can see here.
H
We
know,
as
you
can
see
here,
and
this
curve
here
is
cubic
in
order
to
be
scaled
or
in
order
to
be
scalable,
so
we
have
a
simple
heuristic
to
use
or
calculate
and
lower
the
amount
of
data
in
the
queue
more
or
less
affect
you
share.
If
you
will,
which
is
time
dependent
so
over
time,
actually
the
allowed
amount
of
data
in
the
queue
increases,
and
now
the
problem
is
then
that
you
need
to
model
a
synchronize
at
some
point
in
time.
But
that's
a
point
of
synchronization
is
basically
that
event
here.
H
So
there's
the
good
chance
that
every
flow
will
reset
its
its
timer.
So
the
the
testbed
setup
is
nearly
the
same
as
in
VBR
case.
So
this
don't
tell
you
much
about
that,
but
the
cue
low
target
was
the
cue
low
threshold
story
was
set
to
1.
Millisecond
cue
target
was
set
5
milliseconds
here.
Croatian
time
is
set
to
250
milliseconds
and
measure
window
actually
set
40
milliseconds.
H
These
values
are
not
really
optimized,
so
there's
much
room
for
improvement,
maybe,
and
so
let's
look
at
the
results,
so
our
two
flows
sharing
a
10
gigabit
per
second.
So
the
first
flow
starts
and
is
able
actually
the
green
one
here
is
able
to
fully
utilize.
The
bandwidth
second
flow
starts,
and
then
you
can
see
that
they're
starting
the
fare
flow
balancing
phase
here
and
basically
they
achieve
quite
fair
share
here
in
comparison
to
other
delay.
H
Both
flows
do
there
have
flow
badness
in
face,
and
you
can
see
that
you
have
always
just
little
spikes
here
which
correspond
to
the
5
milliseconds
queuing.
Delay
target
in
comparison
to
that
has
already
known
cubic
TCP
always
fills
the
buffer
completely,
and
this
also
leads
to
higher
queuing
delay
in
seniority.
So
the
queuing
delay
basically
is
also
our
TT
independent.
H
We
buried,
for
example,
the
RTT
in
this
case
to
5
or
5
milliseconds
61,
milliseconds,
101
milliseconds
and,
as
you
can
see
always
our
Lola
gist
adds
5
milliseconds
to
the
base
or
DT,
and
so
it
keeps
the
target
of
5
milliseconds
queuing
delay
action,
so
the
queuing
delay
is
independent
of
the
base,
Rd
T
and
also
the
rate
which
is
not
shown
here
and
also
with
respect
to
the
number
of
senators
which
can
be
seen
in
the
next
slide.
So
we
have
several
flow
starting
here
in
succession.
In
this
case
there
were
18
flows.
H
Just
for
clarity.
I
show
only
two
flows,
but
it's
it's
nearly
the
same
for
the
others,
so
in
in
this
case,
even
while
we
have
several
flow
starting,
your
Lola
still
able
to
control
the
overall
queuing
delay
and
basically
produce
no
packet
loss.
So
in
contrast
to
that
comparison
to
PPR,
for
example,
PBRs
flow
starts
in
succession.
You
can
see
that
bbr
first
fill
the
buffer
completely
even
overshoot
some
times
and
then
yeah
that's
so
this
is
measured
basically
at
the
sender,
and
it
also
produces
several
or
retransmissions
here.
It's
just
you
can
see.
H: So this is just showing first tests of the overall concept. This was published at the LCN conference last month. The parameters are not thoroughly optimized yet, and it's not a full-fledged TCP variant; we were just looking into several aspects, and there's room for further investigation — like using one-way delay instead of RTT measurements.
H
Right
now
we
are
using
the
rtt,
but
it's
also
easy
to
use
one-way
delay
and
said
in
order
to
get
rid
of
any
disturbances
on
the
reverse
path,
and
we
didn't
look
into
the
issues
of
delayed
and
compressed
x.
Furthermore,
we
don't
know
how
it
behaves
in
wireless
environments.
Also,
multiple
bottlenecks
enarans
may
be
a
problem
and
we
also
did
some
work
on
coexistence
with
loss
based
variants,
but
we
don't
want
to
build
in
any
compatibility
mode,
but
instead
use
separate
queues,
for
example,
or
maybe
also
a
queue.
Q: Cool — I could also talk to Roland later, but anyway: I like the fair queuing approach, that's really nice. And I just want to comment that this gives the minimum overall amount of delay I've seen, but I think you actually try really hard to make sure that you empty the queue from time to time, right? So you're...
S: What we're looking at is the problem where you have large amounts of data that have to be transferred over the Internet. They don't have to be transferred as fast as possible, so perhaps we can send them in a less-than-best-effort way, but they also have a timeliness constraint (next slide), and because of that they often have to be completed by some sort of soft deadline. So we'd like to have a sort of deadline-aware less-than-best-effort service. Now, what is that? Next slide.
S
Have
these
sort
of
qualities
it'll
be
something
that
keeps
disruption
of
current
best
effort
flows
where
things
and
other
types
of
traffic
to
a
minimum?
So
do
good
and
react
earlier
to
congestion
than
a
normal
best
effort
flow
be
pragmatic
in
that
you
still
want
to
meet
a
deadline,
and
if
things
are
bad,
then
be
more
aggressive,
but
still
do
no
harm
and
never
be
more
aggressive
than
a
best-effort
type
flow.
Next.
Nice.
S
Now
our
approach
was
to
model
this
first
to
get
an
idea
of
the
dynamics,
and
then
we've
developed
our
framework
a
way
of
doing
this
that
can
adapt
in
principle
any
way
any
underlying
congestion
control
algorithm
to
to
do
this.
We
have
a
publication
there
that
you're
welcome
to
look
at
next
thing
now,
our
quick
overview,
probably
the
quickest
you've
ever
seen
of
a
network
utility
maximization.
S
You
have
centers
and
receivers
through
the
internet
and
one
of
the
ways
to
model.
This
is
next
slide
by
saying
that
the
sender's
try
to
maximize
their
utility,
they
get
more
utility,
for
particular,
send
right,
send
rate,
but
the
restriction
is
that
they
can't
collectively
exceed
the
capacity
on
any
link
through
it
along
their
path.
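Written out, this is the standard NUM problem (textbook form, consistent with this description):

\[
\max_{x_s \ge 0} \; \sum_{s} U_s(x_s)
\quad \text{subject to} \quad
\sum_{s:\; l \in \mathrm{path}(s)} x_s \;\le\; c_l \quad \text{for every link } l,
\]

where \(x_s\) is sender \(s\)'s rate, \(U_s\) its utility function, and \(c_l\) the capacity of link \(l\).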
S
Ok,
next
slide
things
now.
It
very
nicely
happens
that
if
you
work
with
the
Lagrangian
dual
default
solve
this
optimization,
then
it
can
be
solved.
Distributive
lis
and
it's
sort
of
looks
a
bit
like
TCP,
where
you
have
a
price,
but
the
price
is
not
dollars,
and
since,
in
this
case
it
can
be
lost,
ECM
delay
or
other
congestion
type
ways,
and
you
do
a
calculation
based
on
your
price
to
get
the
rate
you
send
sort
of
a
bit
like
what
TCP
does
on
average.
Ok
next
things!
S
Now,
if
we
want
to
make
this
less
than
best
effort,
there
are
really
two
ways
you
could
do.
This
one
way
is
that
you
could
change
the
utility
function
to
make
it
less,
invest,
effort
and
that's
the
equivalent
of
changing
the
congestion
control
algorithm
now.
The
other
way
you
can
do
it
is,
you
could
inflate
the
price
okay.
So
if
you
inflate
the
price
for
one
congestion,
control,
algorithms
and
it
will
think
there's
more
congestion
than
there
really
is
and
send
slower.
S
So
that's
the
other
way
and
that's
the
way
we're
looking
at
in
more
detail
here.
Next,
Thanks,
ok,
just
in
price
inflation,
so
you
have
the
sum
of
the
total
price
that
you've
got
for
your
whole
link,
which
could
be
your
RTT
or
could
be
a
loss
or
your
ecn
signals,
and
we
weight
that
by
a
number
between
not
quite
zero
and
one
to
get
an
inflated
price.
If
the
weight
is
one,
then
it's
best
effort.
If
it's
not
one
and
less
than
one,
then
we
have
a
degree
of
lbs
lbs
lbs.
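A minimal sketch of the inflation step; dividing by the weight is my reading of how a number in (0, 1] "inflates" the price, not a formula given in the talk:

```python
def inflated_price(path_price, w):
    """path_price: congestion price summed over the path (loss fraction,
    ECN marks, or queuing delay). w in (0, 1] is the LBE weight:
    w = 1 leaves the price untouched (plain best effort); a smaller w
    makes the flow see more congestion than there is, so it backs off."""
    assert 0 < w <= 1
    return path_price / w
```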
S: But at longer time intervals we want to keep the idea of the deadline involved, so we adjust what that weight is relative to the deadline, and we've experimented with a couple of ways of doing that: a PID controller, which is not too bad once you configure it, because you're always controlling something between zero and one; and the one I'll be showing here is a model-based control, because a lot of TCP algorithms have good models describing their average behavior. So that's one of the ways we've looked at. Next, thanks.
S
Okay,
the
first
thing
we
did
was
apply
it
to
cubic.
Why
this
cubics
not
the
best
for
this?
Yes
s,
we
know,
but
if
you
have
a
network
where
most
things
are
behaving
like
cubic,
then
it's
like
with
like
two
ways:
you
could
do
it
inflate
the
response
or
inflate
the
price.
That's
the
indirect
way.
We
tried
this
just
because
just
changing
the
beta
seems
easy.
It
turns
out
that's
more
complicated
to
do
in
an
actual
Linux
kernel
just
for
one
particular
flow.
S
Sorry,
what
we
did
do
is
look
at
inflating
the
price.
You
could
say
we
drop
extra
packets,
but
that's
not
very
nice.
It's
like
shooting
yourself
in
the
foot.
Instead,
we
introduce
concept
called
phantom
ecn
signals
that
allow
the
lost
space
protocol
to
react
in
a
ecn
like
way,
but
without
actually
losing
anything.
Next,
thanks
now,
as
a
simple
test
scenario,
we've
set
up
six
cubic
TCP
flows,
starting
and
stopping
of
two
note
in
a
short
interval
between
1000
and
1000
and
10
seconds.
S
So
this
is
what
we
get
when
we
try
to
vary
beta
and
the
reason
why
that
doesn't
work
so
well
and
doesn't
give
you
a
good
amount
of
El
Venus
as
such
is
because
Cuba's
quite
aggressive
and
loss
is
not
a
very
regular
signal.
Okay,
so
rare
a
signal
so
cubic
is
rapidly
ramping
up
again
now
with
phantom
ecn.
Next,
we
get
a
better
degree
of
lb
eNOS.
You
could
say,
however,
one
problem
with
this
is
how
do
you
know
when
congestion
goes
away?
S: Now, if we weren't using loss to detect that congestion had gone away, but something like delay, we could probably do a little bit better. So we chose (next slide) to do this with Vegas eventually, because it's a common delay-based protocol that everybody's quite familiar with. But the problem with having something like Vegas working in a network where all the other flows may be something like CUBIC — especially when the deadline is approaching — is: how does Vegas ramp up enough to compete with CUBIC?
S
We
calculate
weighted
by
the
particular
congestion
controls
reaction
to
congestion
so
cubic
it
might
have
a
beta
of
0.7
or
point
eight,
something
like
Vegas,
its
plus
one
minus
one,
but
what's
it
well,
if
your
window
size
is
two
minus
one
is
50%,
but
if
your
window
size
is
100,
minus
one
is
1%
so
relative
to
that,
but
Vegas
also
reacts
to
loss,
so
you've
got
to
handle
the
both
of
them
as
well.
Thanks
you're,
getting
really
good,
okay,
so
next
slide.
S
So
what
we
do
is
we
inflate?
Like
we
did
before
we'll
inflate
the
delay
part,
but
we
have
this
extra
parameter
Phi,
which
gives
us
sort
of
a
balance
between
the
aggressiveness
of
cubic
and
Vegas
at
that
particular
time.
So
we're
measuring
the
relative
reactions
to
congestion,
and
we
inflate
that
now
the
lost
part,
because
Vegas
is
reacting
to
both
of
them
when
it's
really
congested
and
we're
getting
near
a
deadline,
and
we
want
to
compete
a
bit
more
like
cubic.
S
Then
we
probabilistically
ignore
some
losses
there,
okay,
but
we
are
never
in
the
end
more
aggressive
than
cubic,
because
we
keep
track
of
what
a
loss
base
flow
would
be
reducing
relative
to
what
we're
doing
now.
This
is
example
of
the
Vegas
face
one.
Now,
it's
less
than
best
effort.
It
is
reacting
on
short
timescales
to
the
congestion
that's
happening
when
there's
nothing
at
all,
it
can
take
all
of
the
capacity,
which
is
what
we
want
to
be
able
to
do,
take
advantage
of
empty
capacity
and
well
in
this
case.
S
S: And, well, in this case you get a deadline-aware less-than-best-effort flow with that. Next, thanks. We did a bit of work making sure that it was all stationary, so we could collect proper statistics for it. Next. Now, this is one of our summary statistics — I'm only going to show one — where we just looked at what the completion time is relative to the offered load.
S
That's
not
the
actual
load,
because
the
actual
load
will
depend
on
retransmissions
and
other
things,
because
it's
TCP
happening
there,
and
what
you
find
is
that
the
cubic
base,
one
because
of
the
phantom
essence
and
not
being
able
to
take
advantage
of
all
the
capacity.
Yes,
it
always
does
quite
good
as
far
as
the
deadline
goes,
but
when
there's
not
much
happening,
it
can't
actually
send
our
Bob
transfer
as
fast
as
it
should
be
able
to,
and
do
it
without
disturbing
the
other
traffic.
S
The
vegas-based
one
actually
can
and
some
of
our
other
results
which
you'll
have
to
wait
to.
The
next
publication,
actually
show
that
the
impact
on
other
types
of
traffic,
the
other
traffic
happening
at
the
same
time
and
so
on,
is
very
minimal,
with
the
Vegas
and
more
with
that
and
much
better
than
if
you
have
say
a
normal
best
effort
flow,
and
we
also
look
at
fairness
and
some
other
measures.
Ok,
next
things
now,
where
we
are
at
the
moment,
is
we
have
our
mostly
working
stand-alone
version
of
this
in
linux.
S
In
the
background,
this
meta
control
is
collecting
stats
from
what's
happening
in
the
kernel,
what
the
loss,
what
how
many
packets
lost
our,
what
the
delays
are
and
so
on,
to
calculate
those
parameters,
so
it
can
adjust
through
the
control
the
dynamics
of
whatever
underlying
congestion
control.
We
happen
to
be
using
the
examples
we
had
was
Vegas
and
cubic,
but
it
can
be
applied
to
any
of
them.
Thanks.
S: NEAT will have a look at what congestion control algorithms are available and which one might be the best one for deadline-aware less-than-best-effort to adapt — if there's a delay-based one, that's usually better; if not, we'll just use CUBIC or something — and, sorry, back one step — and choose it underneath. You can read more about it there, and more about NEAT in the TAPS group. Next, thanks. So this is how it works in NEAT: we have the NEAT library, and applications talk to NEAT.
S: No need to — go to the next slide. So, our deadline-aware less-than-best-effort mechanism: it is for bulk-type transfers where you want to complete within some sort of loose deadline but need to be nice to all the other traffic that's flowing. It's sort of based on the network utility maximization idea: we inflate prices. We tested it with CUBIC and Vegas, but in principle it could be applied to any type of congestion control algorithm, working as a meta-control. And that's our ongoing work.
C: Any questions, anybody? I'm very sorry for this — actually, you know, we should have asked for a longer time slot, but then the presentations that came in were too good, and so I sent emails to people asking them to be short, which isn't easy. All right — it reminds me of the old days of, you know, too many presentations coming here, which is a good thing.
T: I'm going to talk about LEDBAT++. It's a low-priority TCP congestion control, which we have actually shipped in Windows for several scenarios. To set the background, an anecdote: basically, a Windows crash dump upload took out the in-flight Wi-Fi. So that's how we started looking into this problem.
T
So
our
goals
were
that
we
want
to
do
less
than
best
efforts
our
service,
but
we
don't
want
to
interrupt
foreground
traffic
like
interactive
web
pages
right
cause
and
things
like
that.
We
basically
did
some
literature
survey
and
then
LED
back
seemed
like
the
best
solution
because
it
ramps
up
and
there
is
no
competing
traffic
and
yields
to
foreground
flows,
but
we
did
find
problems
with
LED
back,
which
led
us
to
make
some
improvements.
T
Quick
recap!
So
let
byte
works
by
measuring
the
base
delay
and
then
it
adds
its
own
target
delay
on
top
and
it.
Basically,
if
the
delay
is
less
than
target,
then
there's
an
additive
increase.
The
delay
is
higher
than
target.
Then
there's
an
idea
to
decrease
no
particular
strict
requirements
from
slow
start
and
it
reacts
to
packet
loss
just
like
standard
TCP.
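That controller fits in a few lines (a sketch of RFC 6817's cwnd update with GAIN = 1 and without its off_target clamping; variable names follow the RFC):

```python
def ledbat_cwnd_update(cwnd, queuing_delay, target, bytes_acked, mss):
    # off_target > 0 below the target delay, < 0 above it
    off_target = (target - queuing_delay) / target
    # additive increase below target, additive decrease above it
    cwnd += off_target * bytes_acked * mss / cwnd
    return max(cwnd, 2 * mss)  # never shrink below a minimal window
```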
T: Next slide, please. These are the problems that we found. One-way delay measurement, which was required by the RFC, is hard to do with TCP.
T
So
there
is
no
standard
clock,
frequency
or
synchronization,
and
there
there's
there
are
cross
skew
problems.
There
is
a
late
comer
advantage
problem,
because
it's
a
delay
based
condition,
control,
so
flow
that
arrives
later
measures
a
higher
base
delay
and
it
basically
gets
a
higher
share
of
the
network.
There
is
also
internet
but
fairness
problems.
If
you
have
too
low
priority
connections
competing
then
essentially,
there
is
a
stable
queue,
but
they
don't
each
get
this
fair,
fair
sure,
even
if
their
base
delays
is
measured
as
the
same
the
recommendations
around
slow
start
were
vague.
T
So
that
was
another
thing
we
thought
was
a
drawback
with
the
RFC
there's
also
a
latency
drift
problem
that
we
noticed,
which
is,
if
you
relate
the
lead
back
connection,
to
run
for
a
long
time
like
longer
than
10
minutes.
We
noticed
that
the
base
delay
would
actually
keep
increasing
because
of
the
queue
that
was
built
up
by
the
led
bad
connection
itself.
T
So
we
would
notice
this
latency
ratcheting
problem
which
would
keep
increasing
and
then
there's
the
low
latency
competition
problem,
which
is
if,
if
the,
if
the
queue
never
builds
up,
if
it's
a
fashion
of
network
connection,
then
led
by
basically
behaves
like
standard
TCP
and
doesn't
be
able
to
standardise.
Can
we
go
to
the
next
slide?
Please?
So,
basically,
we
did
these
essentially
five
changes.
T
Instead of one-way delay measurements, we use round-trip latency measurements. We do a slower-than-Reno congestion window increase, and the gain factor we actually made adaptive versus using a fixed value. Instead of an additive decrease, we do a multiplicative decrease. And then a modified slow start, and then initial and periodic slowdowns, so that we can more accurately measure the base delay. This has been shipping since the Windows 10 Anniversary Update; it is in use by the error reporting service as well as peer-to-peer Windows Update.
T
We are working on making this API public, and the configuration as well. Next slide, please. So the advantage of round-trip latency is that it's already available in TCP; there's no need for clock synchronization. The disadvantage, obviously, is receiver delays and delayed ACKs. So we did some mitigations here: we enable the TCP timestamps option implicitly.
We filter the RTT samples, just as recommended in the RFC, to the minimum of the four most recent samples, and we use a target delay of 60 milliseconds, which is different from the one recommended in the RFC, which was 100 milliseconds or less. We basically picked a value that matches the typical server-side delayed-ACK timer, and we found that 60 milliseconds works much better for Skype versus 100 milliseconds, which covers two-thirds of the budget for a good-quality voice connection.
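A small sketch of the sample handling just described: round-trip samples (made usable by the timestamps option) are filtered to the minimum of the four most recent, the base delay is the running minimum, and the resulting queueing-delay estimate is what gets compared against the 60 ms target. The structure and names are illustrative, not the Windows code.

```python
# RTT filtering and queueing-delay estimation as described in the talk.
from collections import deque

TARGET = 0.060            # 60 ms target, matching typical delayed-ACK timers
recent = deque(maxlen=4)  # the four most recent RTT samples
base_delay = float("inf") # running minimum over the connection

def on_rtt_sample(rtt: float) -> float:
    """Feed one RTT sample; return the current queueing-delay estimate."""
    global base_delay
    recent.append(rtt)
    filtered = min(recent)              # min-of-4 filter from the RFC
    base_delay = min(base_delay, filtered)
    return filtered - base_delay        # compared against TARGET by the CC
```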
Next slide, please: slower than Reno.
T
So essentially, instead of growing the congestion window in the congestion avoidance phase like standard TCP, we introduce a factor F which makes the congestion window growth a fraction of standard TCP's. We experimented with fixed values for F, but it never worked out very well across connections with a wide range of base delays. We found that picking an adaptive scheme actually works much better.
T
So essentially we find that 16 is a good trade-off, so that low-latency connections do not eat a large share of the bandwidth when the latency is low. This essentially solved the low-latency competition problem. I'll be showing a bunch of graphs later which explain this in a little bit more detail.
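The talk gives the cap of 16 but not the exact adaptation rule, so the scheme below, which pushes the divisor toward 16 as the base delay shrinks and toward 1 on long paths, is an assumption rather than the shipped formula; it reproduces the behavior described, with F reaching 16 when the base delay is very low.

```python
# Assumed adaptive "slower than Reno" gain: grow cwnd by 1/F MSS per RTT.
import math

TARGET = 0.060  # seconds

def gain_divisor(base_delay: float) -> int:
    # Short paths get the full F = 16 slowdown; long paths grow nearer to
    # Reno so they are not starved. The exact formula is an assumption.
    return max(1, min(16, math.ceil(2 * TARGET / max(base_delay, 1e-6))))

def congestion_avoidance_step(cwnd: float, base_delay: float) -> float:
    return cwnd + 1.0 / gain_divisor(base_delay)  # Reno would do cwnd + 1
```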
Next slide, please. Multiplicative decrease was proposed in a paper, but we found that just implementing a multiplicative decrease factor was not good enough to solve the fairness problem.
T
Essentially, if two LEDBAT flows don't have the same base delay, then just introducing a multiplicative decrease was not good enough. We had to cap the multiplicative decrease coefficient to be at least 0.5 for it to be effective. We also had to ensure that the congestion window never drops below two packets.
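A sketch of the capped decrease as just described: the per-event multiplier is never allowed below 0.5, and the window never drops below two packets. The proportional shape and the BETA constant are assumptions; the talk only gives the cap and the floor.

```python
# Capped multiplicative decrease on queueing delay above the target.
TARGET = 0.060
BETA = 1.0  # assumed proportionality constant, not from the talk

def on_delay_above_target(cwnd: float, queuing_delay: float) -> float:
    overshoot = queuing_delay / TARGET - 1.0       # how far past target
    multiplier = max(0.5, 1.0 - BETA * overshoot)  # capped at 0.5
    return max(2.0, cwnd * multiplier)             # floor of two packets
```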
Next slide, please. So, slow start. This is interesting: we did not want to skip slow start, because when we skipped it we found that, when there was no competing traffic, LEDBAT++ would ramp up very slowly. So we still do a slow start, but we also apply the adaptive rate reduction factor to the congestion window increase, so that the slow start for our LEDBAT++ flow is actually slower; the ramp is slower than standard TCP's.
T
And then, yeah, we exit slow start early if we find that the queuing delay becomes 3/4 of the target, which is the 60-millisecond value that I described earlier. But we only apply this exit for the initial slow start, because for subsequent slow starts we do have an ssthresh value. Next slide, please!
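A sketch of the modified slow start as described, with the reduction factor fixed at 16 for brevity (the adaptive version appears in the earlier sketch); the return convention is illustrative.

```python
# Slow start scaled down by F, with an early exit for the initial ramp.
TARGET = 0.060
F = 16

def slow_start_on_ack(cwnd: float, acked: float, queuing_delay: float,
                      ssthresh: float, initial: bool):
    """Returns (cwnd, ssthresh, still_in_slow_start)."""
    if initial and queuing_delay >= 0.75 * TARGET:
        return cwnd, cwnd, False       # exit early once queue builds up
    cwnd += acked / F                  # standard TCP would add `acked`
    if cwnd >= ssthresh:
        return cwnd, ssthresh, False   # later ramps stop at ssthresh
    return cwnd, ssthresh, True
```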
T
So we had this latency drift problem, and the way we solved that was by introducing initial and periodic slowdowns. What we want to do is actually yield to the network to figure out what the accurate base delay is. This is especially important when your LEDBAT++ connection is going on for a long, long time, so we force these gaps.
T
Essentially, the RFC said this would happen automatically, as there would be quiet periods in the network where the base delay would be measured. But practically, when we did these experiments, we found that that didn't work very effectively. So we introduced this notion of a slowdown: it's basically an interval where the LEDBAT++ connection voluntarily reduces its congestion window down to two packets for two RTTs, and then, after those two RTTs, we again do slow start until we reach the previous ssthresh value.
T
The initial slowdown starts two RTTs after the first slow start ends. Then, for periodic slowdowns, what we do is measure the duration for the connection to have ramped up, after the two-RTT slowdown phase, back to the original ssthresh value. We measure that duration and then multiply it by 9, so that effectively it becomes only a 10% drop in throughput.
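A sketch of the slowdown scheduling as described: drop to two packets for two RTTs so the bottleneck queue drains and the base delay can be re-measured, slow-start back to the saved ssthresh, and wait nine times the measured ramp duration before the next slowdown.

```python
# Initial and periodic slowdowns for re-measuring the base delay.
def enter_slowdown(cwnd: float):
    """Start a slowdown: remember where to ramp back to, shrink to 2."""
    ssthresh = cwnd
    cwnd = 2.0          # hold two packets for two RTTs
    return cwnd, ssthresh

def next_slowdown_interval(ramp_duration: float) -> float:
    # Spending one ramp-duration recovering out of every ten intervals is
    # roughly a 10% throughput cost, hence the factor of 9.
    return 9.0 * ramp_duration
```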
T
We found that this solved the latency drift problem. Next slide, please. I'm going to go over a bunch of measurements. This is basically just showing that standard TCP, when there are other short flows, basically doesn't yield, and the graph below shows that. The purple graph is basically your short flows, and the red is LEDBAT++. As soon as the short flows ramp up, LEDBAT++ very quickly yields, and it's able to ramp back up when there are no competing flows.
T
Next slide, please. So, standard TCP again, this is the problem: slow start builds up a bunch of queue, sometimes up to one to two seconds. These graphs are not on the same time scale. For LEDBAT++, we see that it essentially stabilizes around 120 milliseconds of latency. Next slide, please. So this is the ratcheting effect that I was talking about: standard LEDBAT over time, over a period of 10 minutes, would keep increasing the base delay, so essentially the delay would keep on increasing.
T
So over time we find that it goes from about 120 to like 180, and then over 200 milliseconds. Just applying the multiplicative decrease, as I described before, didn't solve this problem. We had to also cap the coefficient, as well as introduce the periodic slowdowns, and then we don't see this problem of base delay continuously increasing for long-running connections. Next slide, please.
T
For standard LEDBAT, in the last graph here, both of them essentially accurately measure their base delay, and it's the same value. Next slide, please. This is the low-latency competition problem. When we run standard TCP with LEDBAT++ but keep a fixed reduction factor, we see that standard TCP basically only gets two-thirds of the bandwidth.
T
This is the case where, for example, the latency is like 1 millisecond or 10 milliseconds, a very low value of the base delay. But once we introduce the adaptive value, in this case, let's assume the latency is less than 10 milliseconds, we would end up with F = 16, and then we find that LEDBAT++ yields, and the majority of the bottleneck bandwidth goes to standard TCP. Next slide, please. So yeah, in summary, we found a bunch of problems with LEDBAT as described in the RFC.
L
Michael, speaking as one of the other chairs, fighting for it. I mean, it's a fair question to ask, but in this specific case I believe the submission could also be TCPM. What would happen in that case is that you would forward it to ICCRG to discuss, and so at the end of the day it would probably be discussed here again. Conceptually, for that specific document, the submission could target TCPM at the end of the day, in my opinion.
D
Yes, I will actually fight for this, and I say that specifically because it's a congestion control, and ICCRG has congestion control in its darn name. But let's not litigate that here. I do have one particular bit of feedback. It seems like there are two separable pieces in the work that you've done.
D
By the way, thank you so much for doing the work and for bringing it here; this is very, very valuable and useful. Some of the points you mentioned we knew early on when we were doing LEDBAT, like the stuff about expecting there to be empty, quiet periods in the network. We didn't know, but it's definitely good to know, that drift is a real thing, and BBR is similar there. That is definitely, yeah.
D
I think the separable pieces are: one of them is actually a direct update to the LEDBAT algorithm itself, and the second one is making LEDBAT useful with TCP, and I know that folks at Apple have implemented LEDBAT with TCP as well. So I think there may be value in separating this into the algorithmic part and how you make it work with TCP, and those two pieces could be in different places.
U
So the original LEDBAT was experimental, but done through the IETF track, and since then it's been seeing quite a bit of deployment. And there are fixes here that are looking very interesting, and they're made based on actually running this thing. I would hope that we could make the follow-on standards track; the original LEDBAT seems to have problems, so maybe we don't want to do it with that.
U
But this one might be, right? And if we're doing standards track, it means it needs to be done through the IETF. Remind me to not have beer at lunch again on Monday. So that still means that, well, I like the split that was just proposed as a possibility, right? That means that your algorithm part doesn't necessarily need to be standards track. But it might also be, and I mean, going on a little bit of a rant here, right?
U
We have done DCTCP as informational; we've done Cubic as, I think, informational or experimental. Cubic isn't even published yet, and we're already talking about adding it into the registry so that our standards-track RFCs can point to it. So I think we are too cautious with doing stuff on standards track.
T
One of the problems that we haven't solved yet is working well with AQM. Now, we do have some solutions here which I have not talked about in the slides, but that's a work in progress. So that's the only part that remains to be figured out from our side, but of course there could be other feedback from the community, and we'd be happy to address any kind of gaps here.
V
Cool. If you're thinking that there is still research to be done, and I'm actually also on the IRSG, so I get to have that conversation, then I think that doing it in a research kind of place could make a lot of sense. But I don't have opinions about where it should end up, because I don't understand all the things that need to be balanced. Those are just the things that I was thinking.
F
Actually, this is a draft of new research to provide a transport service for higher-bandwidth and also low-latency applications. Basically, the motivation is to try to do something which the current transport cannot support, such as AR, VR and the tactile Internet.
F
Based on the analysis of the current solutions, we take the approach that network devices will be involved more in this regard, and we also want the simplest solution protocol to support this. We try to avoid the same fate as TCP. So our design target is that the end user or application can directly use the new service, and also that the new service can coexist with traditional TCP, and both can share the network resources.
F
Also, the quality of service can be adaptive to the application requirement, which means the application can directly change it, and the service provider can also manage the service. On performance and scalability, which is the part we are most concerned about, we think the target should be achievable by vendors in hardware. The last one is that the new service is transport-agnostic, meaning that both TCP and UDP can use it.
F
So I only have time to cover this slide, but if you are interested, please read the draft, which was submitted to 6man first. Because we have a lot of other areas to work on, like congestion control and security and UDP, this draft is a first introduction of the ideas, and we may have other work to do. So if you're interested, give us feedback. Okay.