From YouTube: IETF114 ICCRG 20220728 2000
B
Welcome, this is the ICCRG meeting at IETF 114. If you meant to be somewhere else, this is more fun, I promise you, so just stay here anyway.
B
I'm going to very quickly go through and show you the Note Well. If you are not familiar with this, you should become familiar with it; and if you're not familiar with it, hit me up, I'm happy to share it and talk to you about it. Note that the IRTF Note Well is a little bit different from the IETF Note Well; however, at a high level it still applies. I'm not going to go over the goals of the IRTF; they are in the slides.
B
Finally, the agenda. We've got three topics today. First, Prague congestion control; Bob Briscoe and Vidhi Goel will be talking about that. After that, the agenda slightly changed: we're going to have a presentation on packet reordering in multipath transport scenarios. And after that, Colin will be talking about congestion control work, especially with respect to the new proposal in the transport area, and about congestion control work in the IRTF and in ICCRG.
B
All right, I will send you your gifts later and I'll charge the IETF and IRTF for it. We're going to go on to the first talk. Bob, you're already up there, take it away. We've got 30 minutes for this. Thanks, Bob.
B
So you should be able to... you want me to? I can do that if you want me to. Well, I'm...
F
Right, okay. So I had a look back and realized that when we've presented things about Prague congestion control twice in ICCRG before, we always just presented the latest thing we'd done and we never actually gave the sort of basic results. So that's what this is all about. There are the people involved in the authorship of the draft and in the experiments that I'm going to go through. Next slide, please, Jana.
F
Yeah, so just some caveats. What you're going to see is comparative results between different congestion controls and different AQMs, and the primary reason is to share the insights that we got from those. You'll obviously get the numbers in the results as well, but the test traffic isn't designed to be realistic; it's designed to explain the effects I want to talk about. So, next.
F
Right, so you've probably seen the right-hand picture before. The left-hand side of this slide is more about what the differences are between DCTCP and the Prague congestion control; the right-hand side is the principle by which DCTCP and Prague work. I'll start with the left, and I'm not going to go through all of these.
F
I'm just going to say, really, that the list isn't that long, and that's why it's there. This is what is in the draft to explain, assuming you start from DCTCP, what the differences are. So, on the right...
F
And if you push down delay by setting the set point of an AQM low, you don't start losing utilization, so you get the best of both worlds.
F
So the next slide is going to be empirical results that prove that, just with a single flow. On the left is a Cubic flow (ECN Cubic, in fact, to show that it's not ECN that's doing it, but the Cubic part of it). It happens to be using FQ-CoDel, but any classic AQM would do. The left-hand column of results uses a five millisecond target and the middle column uses a one millisecond target, and then the right-hand column is Prague over DualPI2, also using a one millisecond target.
F
The horizontal axis is a range of five link rates and five round-trip times, so you've got 25 different cases for each column, and you've got queue delay and utilization, as shown in the previous plot, now shown here. And you can see, as you push down the queue delay set point for FQ-CoDel from five to one millisecond...
F
You do get lower mean delay, which is the blocks, and you do get lower 99th percentile delay, which is the little dots on top, but you lose out on utilization. The mean, which is the little horizontal bar in the middle of the vertical bar, goes down to about 87 or 88 percent, and you can see the first percentile and 99th percentile as well.
F
The first percentile goes down below 85 percent in the highest BDP case tested, whereas on the right you can see that if you use a one millisecond threshold with TCP Prague, because of the small sawteeth you get near 100 percent utilization as well. That's just showing you proof of that point. Next slide.
F
First of all, when you look at a plot of delay, the little blue dots in the middle column... the columns are the same, by the way: the same five link rates and the same five round-trip times, giving you 25 of each case, but both plots this time are queue delay. The trouble is, as with all delay metrics, smaller is better, so you can't actually see the good ones; whereas if you show a log plot, which is what the bottom one is, you can see it's significantly below the other ones. And there's another interesting pattern.
F
Now, when you work out how many packets there are in the queue, you find that that's because, as the link rate gets higher... by the way, as the round-trip time changes, it doesn't change.
F
So each step is flat, but the link rate is obviously making a difference, and you find that there's a mean of about one packet in the queue and the 99th percentile is about two packets as the link rate goes up. The important point here is that this is one Prague flow and one Cubic flow competing with each other in a dual queue, and that's the reason the delay continues to go down.
F
In the dual-queue case, it's due to both the coupling and pacing. Now let me explain. The top time series shows you the number of packets in the queue over time, but just in the L queue; bear in mind there is also a C queue here that isn't shown. There's a second queue that is serving the classic flow, because it's one-on-one, remember. So what's happening is, when this queue is zero...
F
The queue is zero, and what happens is the coupling from the C queue causes the marking in the L queue to push back the Prague flow until it leaves enough gaps to be at about a one-to-one rate between the two flows. That's why, as the rate goes up, the queue delay gets lower and lower: the size of the queue is essentially just the size of the burst from pacing, which in this case is two packets.
F
So in this case, because the pacing burst is about two packets, you get small bumps with lots of small gaps; but if the pacing bursts were larger, you'd get larger and fewer gaps. On the next slide you'll see... this is a zoom back in on the very first plot I showed, of the delay versus utilization, but just zooming in on the delay: the one I said stayed flat while the utilization stayed good.
F
And so then it is butting up against the threshold of one millisecond in the L queue, and that one millisecond is how it's configured. So again, the same link rates and round-trip times, and the mean and P99 are shown there. The red one doesn't exist because this is just pulled out, zoomed in from the other plot; it's an inset from the other plot. So it's not a problem that there's a standing queue up to one millisecond, because one millisecond is small enough.
F
It's just interesting that when you have the traffic in the other queue, the L queue is actually just causing one or two packets of queuing, not a standing queue up to the threshold. That insight shows you that if you mark a queue from any other queue that's related to it, you can get rid of a standing queue, which could be useful.
F
For instance, if you had a virtual queue. Some of you may know what a virtual queue is; I haven't got time to explain it now, but it's what the queue would be if your link rate were slightly slower than it really is. And if you mark with that, you will lose the standing queue, so you just get the bursts, not the standing queue. So that can be useful.
F
Okay, next slide please, Jana. Right, jumping to a completely different point. We've still got one steady-state flow for each congestion control here, and this is just showing what happens when you have different numbers of flows of different types.
F
So now, along the bottom, instead of link rate and round-trip time, we're at one link rate, which is shown bottom left, 40 megabits per second, and one round-trip time, 10 milliseconds, but we've got different numbers of flows, A and B. If you look at the little gray box at the bottom, for example, A2 B8 means two A-type flows and eight B-type flows.
F
What A is depends on the column. In the left column, A is ECN Cubic and B is Cubic without ECN; in the middle column... sorry, am I right?
G
Sorry for this interruption; it's just that I have to take a flight, so we're going to sandwich my results on Apple QUIC in between Bob's slides. We have started working on Prague congestion control, as some of you know from the WWDC developer session that we had a few months ago, and these are some early results from our lab testing. Next slide, please.
G
As you can see, we tried to do the testing on similar bandwidths and RTT combinations as had been done for TCP Prague, just to compare the two implementations, and we see broadly similar results. In general, if you look at the different results, Prague has almost a 90 percent reduction in queuing delay compared to Cubic for the low-bandwidth case, you know, the four megabits per second one.
G
This is the application goodput plotted as utilization, so there's a little bit of a difference compared to the TCP Prague plots. First of all, we're not plotting the link throughput but the application goodput, and the link utilization includes header sizes while this one doesn't. The second difference is that we didn't start the measurement at steady state, so we'll improve this plot to match the TCP plot, and that should get to higher link utilizations for both Cubic and Prague.
G
And you can see that the first flow starts and it uses almost the full link, and then the second and the third and the fourth start, and they start to converge at around 20 to 40 megabits per second.
G
There is one flow that seems to pop out while the other three flows tend to stay together, and we're investigating the reason for this behavior. Next slide.
G
This is the corresponding smoothed RTT plot for the same test, where you can see, for all four flows, the measured smoothed RTT at the QUIC layer at the end host stays between 20 and 22, or sometimes 20 and 23 milliseconds. So this is very...
G
This is a very good result. We have much, much less deviation from the base RTT (the base RTT here is 20 milliseconds, as you can see), and the deviation is very small, and it also reduces the jitter. Next slide, please. Comparing this with Cubic, you can see the amount of jitter and deviation from the base RTT for Cubic. Next slide, please.
G
So those were the results from Apple QUIC. We are continuously working on improving the congestion controller, and now I'll go through a few points that are important for people getting started on this; they might have questions on these, so I thought I'd include them. Next slide, please.
G
Some folks asked me whether we need to use Reno for implementing the Prague requirements, and that's not true. You don't need to use that; you can use your default congestion controller behavior, like Cubic, and use that for the reductions and increases on loss.
G
Another thing that's interesting for Prague is, you know, when you do a reduction due to loss... sorry, when you do a reduction due to CE, and then you see a packet loss right after you did the reduction due to CE, within the same RTT...
G
Would you want to do another reduction independent of what you reduced due to CE, or would you like to combine the two so that the total reduction is either 30 percent for Cubic or 50 percent for Reno? This is something to investigate, and it would be great if folks can try this out and share their results.
G
Another important thing for L4S and Prague is that pacing is mandatory, because otherwise you will create burstiness and you will see a lot of marks. And for pacing there are obviously things to consider, like whether your congestion control is in user space; for example, the QUIC protocol has congestion control in user space.
G
What kind of pacing would you want to use? You need more fine-grained pacing than what you might have been using until now. If you use user-space pacing, you could have skew and timers that could basically cause some amount of burstiness if the pacing is not accurate. On the Linux operating system, there are ways to offload this to the kernel.
G
There are APIs like SO_MAX_PACING_RATE and SO_TXTIME, and you need to use a fair-queuing (fq) queueing discipline for both of them. Some of the QUIC implementers have already tried SO_TXTIME, so they have some experience, and you can try either approach and decide what works best for your implementation. For congestion controllers in the kernel (some TCP still exists in the kernel stack), it's pretty simple on Linux: you can set sk_pacing_rate on the socket, and then you have a callback for TSO segments which allows you to set the burst size. So that is all from me today, as I have to leave now. If you have any questions, you can ask Bob, and if Bob doesn't have those answers, please feel free to send questions to the mailing list. Thank you.
F
Yeah, sorry, Jana. Does anyone want to ask Vidhi questions? No? She's probably wanting to get off, isn't she? Okay, can we go one back? Yep. So here, this is a plot of normalized... this is now going back to TCP Prague. Vidhi was presenting Prague implemented in QUIC; this is going back to the Linux implementation of TCP Prague, from before we broke off to let Vidhi go so she could catch her flight. Right.
F
This shows, along the bottom, different numbers of flows, for instance A2 B8 meaning two A flows and eight B flows. On the left they're all ECN Cubic; on the right they're all Prague through FQ-CoDel; and in the middle it's Prague versus Cubic in a dual queue, so the Cubic flows go into the classic queue and the Prague flows into the other queue. And the aim is obviously to roughly hit a ratio of one.
F
But it's not a ratio, sorry; it's a normalized rate per flow, where one means your flow is going at one nth of the capacity when there are n flows. So along the bottom you see you've got A1 B1, A2 B2, and so on; for A2 B2, n would be four, and if you're getting a quarter, then your normalized rate would be one. And you'll see about halfway along it.
F
It also starts testing A0 B10, A1 B9, A2 B8, and so on. We have actually tested the full matrix, but this gives you representative results. You can see that FQ-CoDel is pretty spot on one all the time, with very little variance, if any, apart from the odd bump, and those are in fact hash collisions.
F
Given we did a lot of tests on this. And in the left-hand column you can see that PI2 is similar: the mean is sitting at one, but the variance is greater.
F
I should add that the top plot is Cubic and the bottom plot is Cubic versus Reno as the other flow. So for the PI2 case, the top is Cubic-Cubic and the bottom is Cubic-Reno; and in the middle it's Prague-Cubic at the top and Prague-Reno at the bottom. You can see in the middle, with DualPI2, it's not quite on one, but it's not far off; it sort of wanders around a bit, and the blue one is sometimes above one, sometimes below, but not significantly far off.
F
But the classic flow has a similar variance to the PI2 one, and the Prague flows have a similar variance; well, not quite as good as the FQ-CoDel one by any stretch of the imagination, but pretty good.
F
I'm going to jump over the point with the gray call-out in the interest of time, so you should be able to read about that, or we can talk about it on the list. Next slide, and then I think we're pretty close to done. This one is now with mixed round-trip-time flows, which has been a point of contention with the dual-queue algorithm.
F
If you look at the key at the bottom, they represent different round-trip times, so something like A5 B100 means five milliseconds versus 100 milliseconds. In each test, A is always Prague in the middle column, and B is either Cubic or Reno; sorry, not top or bottom: Reno is the red triangle and the black star is Cubic. The top one is rate ratio and the bottom one is window ratio. You'll see FQ-CoDel gets the rate ratio pretty much one-to-one, because it's an FQ scheduler, so you would expect that. The single queue is there on the left, PI2: you'll see the rate varies with round-trip time across that sweep of round-trip times, but the window stays pretty close to one, because they're window-fair; whereas on the right you can see the window is pushed into being different because the rate is equal.
F
Now, DualPI2 effectively emulates a single-queue AQM, with this RTT-reduction algorithm that's in it; it's just slightly worse. You can see that the worst case is a ratio of 6.3 rather than 5.5, so that's 1.14 times worse on a harm metric, if you like. I won't go through all the writing, which I've put on there for anyone who wants to study this in their own time.
F
Thanks, Jana. Oh yes, this is the last set of plots. This is now the first one that isn't steady-state stuff, to give a feel for what happens when you've got web-like load. This is heavy web-like load, plus a long-running flow, from both flows.
F
We've got Cubic ECN versus Cubic non-ECN in the left-hand and right-hand columns, the left being PI2 and the right being FQ-CoDel again, and the middle is Prague versus Cubic. So all the time we're trying to compare ECN versus non-ECN, so we cut out any effect of ECN itself.
F
The bottom two plots are queue delay and utilization again, as at the start, but this time, remember, with a lot of short flows as well; there's a very heavy load of short flows here. And you can see the queue delay is pretty much like it was before in the DualPI2 case and in the FQ-CoDel case.
F
The profile of the utilization is pretty much the same for the DualPI2 and FQ-CoDel cases, and really the point here is that, the way Prague works, the long-running flow has an EWMA in it which recognizes that there's a load of unresponsive short flows here, and it makes some headroom for them to keep the delay down. That was all part of the design of DCTCP, and that's what we wanted for the internet. The top plots are cumulative distribution functions; they're complementary, and they're also log scale.
F
I have shown one of these before. Along the bottom scale is the queue delay, and the vertical axis is percentiles on a log scale, and each one of those is just one case out of the others; as I mentioned before, there are 25 cases and it's just picking one of them, the 120 megabit, 10 millisecond case in each case, and the gray background plots are the ones off the other plot.
F
The Prague one is right down much lower: even at the five-nines percentile it's about six milliseconds, and at the 99th percentile it's about two milliseconds. I've shown you that plot, top middle, if you've been in any presentations about L4S before. It also shows where FQ-CoDel and PI2 are on the same thing, but just to point out that this is sort of seeing FQ-CoDel in a good light, because it's picking one of the better cases for FQ-CoDel. Okay, thank you, Jana.
F
I think there's one more slide, plus a sort of final... yeah, just a summing-up slide, I think. So the messages are: first, if you don't want a standing queue in your buffer, control marking from another queue, possibly a coupled queue or a virtual queue. Secondly, the proper place to address round-trip-time independence (or round-trip-time dependence) is in a congestion control like Prague, which is being newly deployed for low latency, which is where the problem is. And third, long Prague flows leave headroom for the short ones; that's an intended feature, for the utilization to be reduced when you've got a lot of short flows that are effectively unresponsive. And just finally, to say something about these results.
F
We use these sorts of plots to check for regressions, and these have been stable since about... that says July 2019; I think this is probably, Jana, the first version of the plots I sent you. I think a newer one says we can go back to 2016, and they're stable like this.
B
I'm going to step in here very quickly. We don't have a lot of time, so let's take one or two quick questions. I need to switch.
B
Oh, that is a problem in my life in general. I said we'll take a quick question, right? Yep, John, you're on... you're on the mic. Go for it, Jonathan.
D
So the first question is: you mentioned that it would need very smooth pacing in order to reduce burstiness. What happens if burstiness is introduced by the...
F
Link? Right, yes. So that's actually semi-answered by the point about marking from another queue. I mentioned that if you mark with a virtual queue, you can ensure that any bursts coming into your L4S link (whether caused by, like you say, another link, such as a Wi-Fi upstream or something) will still come into the L4S AQM, and the flow will still hit the threshold.
F
But if you put that threshold in the virtual queue, it will at least ensure that the bursts don't get into the real queue. Effectively it will reduce the utilization of the bursty flow up to the... well, it depends where you put the threshold in the virtual queue. You can absorb some of the burst in the virtual queue, and then, above that, the burst will go into the real queue.
F
But the reason for doing that is that what you don't want to do is set the threshold in the real queue. I mean, say, for instance, you're getting 10 millisecond bursts.
F
You don't want to set the threshold at 10 milliseconds just in case you get bursts, and then, when you don't get bursts, have a standing queue up to 10 milliseconds. Whereas if you put it in the virtual queue, when you have bursts you will still get bursts in the real queue, or at least the top of them.
F
But when you haven't got bursts, say you've got an Ethernet link coming in as well, with traffic coming over that, then at least you will have the smoothness of that link.
F
If you haven't always got it... so you get the benefit of smooth links when you're using them, and you can still absorb the bursts from the bursty ones. Is that a... oh, and I should add that that virtual queue is in the design of DOCSIS from the start; we would have to add it to the Linux implementation to test it, and we haven't tested that yet.
B
I'm going to say we need to move on to the next one, and I encourage you to continue this conversation on either the chat channel or over email with Bob. We have 15 minutes left and we've got two more things to do. So thank you, Bob and Vidhi, for the presentation; I do appreciate it. Like I said, there's some feedback on the chat; please go take a look. I'm going to switch over to Nathalie, and I'm also going to try to come over here as you leave.
H
So, first of all, when we compare multipath transport with single-path transport characteristics, we know that in the multipath scenario we experience extraordinarily higher jitter and also significantly higher out-of-order delivery. This comes precisely from the heterogeneous nature of the different paths which are bundled within the multipath connection.
H
Now, these characteristics, this high jitter and high out-of-order delivery, are exposed by the multipath protocols either to the higher-layer application that makes use of them, or to the end-to-end traffic that is carried by them in case an intermediary multipath transport is used, as is the case in the ATSSS framework specified by 3GPP, which defines bundling of 5G and Wi-Fi access for an MNO. Among the MP protocols, those which have strict reliability and strict in-order delivery, like MPTCP and MP-QUIC in stream mode, only experience high jitter, dominated by the path
H
latency difference. But for those protocols which do not have strict in-order delivery, like MP-DCCP, MP-QUIC with the datagram extension, and SCTP with the CMT extension, we experience jitter as well, but we also experience high out-of-order delivery, likewise dominated by the path latency difference. Now...
H
The problem is that the services running over the internet are designed to cope with the characteristics of single-path transport. So if we add this high jitter and this high out-of-order delivery, introduced, for example, by an ATSSS traffic-splitting use case, this might result in quality-of-experience and quality-of-service degradation.
H
The explanation of the algorithms that are going to be shown here is also in the draft that I linked in the presentation. The tests were executed using the MP-DCCP framework, but the results are also applicable to other multipath solutions, like MP-QUIC. We generated TCP and UDP traffic using iperf, and we also generated QUIC traffic using the quic-go implementation. Next slide, please.
H
The MP-DCCP connection has two subflows, or two paths, each of them with a 10 megabit per second capacity, and the latency difference between them is 15 milliseconds. We use a priority-based scheduling mode; that means there is a primary link that is going to be used by default, and once the primary link is congested, the secondary link will start being used. The congestion control used for the QUIC traffic is New Reno. So on the left we see the results for the UDP traffic.
H
So what happened in the QUIC case is that, as soon as the secondary path starts being utilized, the path latency difference causes a lot of packet scrambling. This scrambling causes duplicate acknowledgements, which for the QUIC protocol are an indication of packet loss, so the reliability mechanisms and the congestion control react immediately: there is a packet retransmission and the congestion window is cut, and therefore the throughput is also reduced.
H
In conclusion: for a protocol like UDP, where there is no congestion control or reliability, there is also no demand for in-order delivery, so the scrambling introduced by the multipath transport doesn't have any impact on the overall performance. But QUIC, unlike UDP, has congestion control, and therefore there is a demand for in-order delivery, and it fails to use the aggregated bandwidth due to the impact of the packet scrambling. Next slide, please.
H
So now that we've demonstrated that there is an impact when no reordering is in place, and that this impact is correlated with the characteristics of the carried traffic, we proceed to evaluate some solutions to correct this out-of-order delivery. The first one is a reordering algorithm based on the connection sequence number, with a static timer. What we do here is basically read the sequence number to verify in-order arrival.
H
In this case we use the same setup as before. We use QUIC traffic, because we know there is an impact on this type of traffic, with the only difference that we introduce a reordering algorithm with a static timer of 50 milliseconds, which is higher than the path latency difference and therefore enough to correct all the scrambling.
H
The results are shown in the figure. In this scenario, we managed to fully utilize both paths and to get the full aggregated bandwidth, close to 20 megabits per second. So basically, the reordering algorithm in this case corrects the scrambling, eliminates the duplicate acknowledgements, and therefore prevents the congestion control in the QUIC stream from reacting; as a result, the application manages to fully utilize both paths' bandwidth. Next slide, please.
H
So this reordering with the static timer works well as long as we know what the path latency difference is and this latency difference doesn't change, but there might be cases where the latency difference goes either above or below the timer that we configured. So this is what we tried to test here: we have the same setup as before, but with a path latency difference of 20 milliseconds and a timer of only 15 milliseconds.
H
The results are in the graph on the left side. We see that the secondary path is not fully utilized and there is an impact on the overall aggregated bandwidth. To correct this problem, the solution we tested is the same reordering based on sequence numbers, but with a dynamic timer. This dynamic timer is going to be updated to be equal to the latency difference of the paths, and this latency difference is estimated using the RTT measurements provided by the congestion control in place.
H
Now, at this point, we've proven that this sequence-based reordering solves the problem of the scrambling: there is no more scrambling, and a congestion control like New Reno doesn't react any more. But this sequence-based reordering doesn't do anything about the latency difference of the paths, so we still have the problem of the high jitter, even though New Reno doesn't react to that. Next slide, please.
Please.
H
So now the question is: what happens when the congestion control is not a loss-based congestion control, but a latency-sensitive congestion control like BBR? To test that, we used TCP BBR carried over our MP-DCCP framework. We have the same setup as for the static reordering, with a timer higher than the path latency difference.
H
So we know that the scrambling is corrected, but we still see that only one path is utilized. This happens because, for a congestion control like BBR, which is latency-sensitive, the jumps in latency generated by the scheduling across both paths cause a throttling of the throughput, under the assumption that these hikes in latency come from bufferbloat.
H
So to correct this problem, the solution we tested is delay equalization, or path latency difference compensation. What we do with this solution is that, at the receiving side, we delay the incoming packets on the fastest path to equal the latency of the slowest path. Again, this delay has to be equal to the path latency difference, which is estimated from the RTT measurements.
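Delay equalization at the receiver can be sketched as holding each packet for the difference between the slowest path's latency and its own path's latency. Again a toy model: the per-path latencies would in practice be RTT-derived estimates, not known constants.

```python
def equalized_delivery(arrivals, one_way_ms):
    """arrivals: list of (arrival_time_ms, path). Each packet is held for
    (worst path latency - its own path latency), so every path looks as
    slow as the slowest one and the latency jumps between paths vanish."""
    worst = max(one_way_ms.values())
    return [t + (worst - one_way_ms[p]) for t, p in arrivals]

# Fast path 5 ms, slow path 20 ms: fast-path packets are held 15 ms extra.
out = equalized_delivery([(10.0, "fast"), (10.0, "slow")],
                         {"fast": 5.0, "slow": 20.0})
```

The cost is added delay on the fast path; the benefit is that a latency-sensitive sender like BBR no longer sees latency jumps when the scheduler switches paths.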
H
It is also recommended, to achieve the ultimate performance, to use a combination of all these solutions together: a sequence-number-based reordering with a dynamic expiration timer, delay equalization, and also some algorithms for fast packet loss detection.
B
I think this is a... Magnus is in the queue. Magnus, if you want to respond to this quickly, that'd be great. I want to switch to Colin in a couple of minutes before we leave the room. Go ahead.
A
Magnus Westerlund. Your New Reno, did it include RACK?
H
A
Okay, so I guess it would more likely behave slightly better, but anyway. And when it comes to this, I think what you're actually asking about is the IETF scope here; it's a question. I mean, you're tunneling over an MP protocol; if you have an application protocol directly on top of the MP protocol, that's not in question. It's only when you're doing tunneling and have another flow. So it's a question of tunneling over an MP protocol, when this arises.
B
I'll say, without any authority, that it's a conversation in terms of where it fits in the IETF. I encourage you to reach out over email, and I can pull the right people in. Please reach out over email and I'm happy to take it from there.
B
Thank you for your presentation again. Before we leave the room, I know we are past time, but I just want to give Colin a couple of minutes to talk about congestion control work. Colin, are you there? Go for it, Colin.
E
All right, thanks Jana. I just wanted to give people a little bit of a heads-up on some work which is potentially happening on the IETF side of things. Those of you who were in the Transport Area working group earlier in the week, or are on the transport area list, will have seen that the transport ADs are considering chartering a new congestion control working group in the IETF.
E
A lot of the goal of that looks to be to update RFC 5033, which is the guidelines for how the IETF standardizes congestion control algorithms, but there are also suggestions that the group will consider developing standards-track congestion control algorithms after that.
E
The expectation, I think, from the IETF side is that proposals for standards-track congestion control algorithms will have been pretty thoroughly vetted by the research community before they get to the IETF, and I expect ICCRG will certainly play a role in doing that, in facilitating discussion and providing review of the documents. I think ICCRG is a really good venue for people to talk about this.
E
This sort of work has really strong links to the research community, and I think it really helps to add value by allowing the researchers, the industry, and the standards community to exchange ideas. The work in ICCRG has been really good at getting practical experience and at getting researchers and implementers talking to each other, and I think that's really important and I hope it continues.
E
ICCRG is also going to continue to be able to publish experimental RFCs. They're a really great way to document congestion control algorithms, and I think they're especially important as a way of documenting things that are hoping to move onto the IETF standards track, just as a way of clearly describing the baseline.
E
That said, I don't expect ICCRG to turn into an experimental-RFC factory; it's a venue for research, experimentation, and discussion first of all. So yeah, I think Jana's been doing a great job as chair, and I'm expecting and hoping that he will continue as chair of the group, but we are certainly looking to appoint new co-chair candidates to help move things along in ICCRG, and Jana and I are talking to some potential people in that space.
E
So that's what I think is happening. If there are any questions about it, I'm happy to try to answer them from the IRTF side, and I guess talk to Martin Duke on the IETF side if there are questions about that. I don't know if there are any.
C
Yeah, I seem to have walked in at exactly the right moment. Martin Duke, from Google, and transport AD. So, just to be clear, I completely endorse everything Colin has said. I think he and I are in complete alignment about the vision here, and I in fact insist that ICCRG continue to exist and do what it has been doing, if not more.
C
A very possible outcome is that we do this congestion control working group, it completes its assigned tasks with little if any standardization effort, the group closes, and ICCRG will be there as a congestion control forum, as it has been for years. So yeah, I just wanted to be absolutely clear about our complete alignment on this. Thanks.
B
Thank you both, Colin and Martin. I think by the next meeting we should have a lot more clarity on what's going on, so I would encourage people to stay tuned and definitely show up to the next ICCRG meeting and the next IETF; of course, both of those will have some interesting bits for those interested in congestion control and how that work is done in the IETF and IRTF space. So with that, thank you, everybody. Please enjoy the amazing heat in Philadelphia.