From YouTube: IETF111-ANRW-20210726-1900
Description
ANRW meeting session at IETF111
2021/07/26 1900
https://datatracker.ietf.org/meeting/111/proceedings/
A
So, just FYI: some people are telling me they cannot join the Meetecho with the IETF registration code. So I don't know if that's...
A
But okay, well, hopefully a few people have joined, so they have managed to find their way. So I guess we can start today with the first session, with the introduction and welcome. My name is Andra, I'm a researcher with Telefónica, and this year, together with Nick, we were entrusted with taking care of the technical program for the Applied Networking Research Workshop. So thank you very much to the steering committee, and thank you, Colin, for giving us this wonderful task. We have had lots of fun. We'll discuss a little bit about the process, how the program looks, and what we have in store for you. So thank you, everybody, for joining today, and we hope you're going to enjoy it.
B
Oh sure, I'll just introduce myself; thanks, Andra, for kicking it off. Hi everyone, my name is Nick Feamster. I'm a professor in the computer science department at the University of Chicago, and we're very excited to present the program to you. Andra, I don't know if you wanted me to talk about the IRTF now or later on.
A
Yeah, I just wanted to say: when we set out to put together the program, obviously, we got a lot of our friends together in the technical program committee, a lot of the people who support the Applied Networking Research Workshop. We couldn't have done it without the program committee, so obviously a huge thanks to all the 30 people who have helped us in this effort. You see them all here, and, as you've seen, we put a lot of effort into making sure that the reviews are mindful, constructive, and useful for the people who submitted. Three of the people we invited to the program committee actually helped us with this, and I know Nick has much more to say about this initiative.
B
One of the things that we wanted to do is ensure that the people who submitted papers got reviews of a quality that they were happy with, and so Andra and I tried very hard to maintain high quality on the reviews. As part of that, we asked three senior members of our community to serve on a review task force, which helped us oversee the review process: essentially reviewing the reviews, making sure those reviews were responsive to the technical concerns in the paper, that they were substantive, and that if they made critical suggestions, those were backed up with concrete feedback and actions that the authors could take to improve the papers.
B
And
I
think,
in
addition
to
that,
I
think
andre
and
I
spent
a
lot
of
time
ourselves
with
probably
members,
four
and
five
of
the
review
task
force
really
going
through
a
lot
of
the
reviews,
making
sure
that
everything
was
handled
with
with
care
and
and
with
thought,
and
I
think
some
of
the
authors.
I
hope
I
hope
appreciated
that
as
well,
because
I
think
we
did
go
back
and
review
the
reviews,
including
taking
some
appeals
which,
which
doesn't
always
happen
in
conferences.
B
But
we
hope
that
this
can
help
set
the
tone
for
the
future
of
the
kind
of
event
that
nrw
can
be
a
place
where
people
can
submit
all
kinds
of
applied.
Networking
work
and
expect
to
get
constructive
feedback
on
the
work
that
they
present.
A
I couldn't agree more. So thank you again so much to the task force. And not only that, but they also helped us link the submissions very well with the current interests that the IETF and the IRTF have.
A
Hopefully everybody here will find the program exciting. I just wanted to bring to your attention that a lot of the papers you're going to hear about during these three days of workshop have different artifacts attached to them, so hopefully contributions that are going to be useful for the community. And obviously we would like to thank the sponsors for their support and for making sure that everybody can attend the workshop. Now, just a little bit of boring details.
A
As you know, we are virtual, because of the state of the world today, so thank you so much for putting in the effort to join, even in these conditions and in this format. We do have a Slack channel, where we encourage you to go and interact with other people who might be presenting, or to continue conversations you might have here. There is also the Gather space, on the floor plan.
A
Just
if
you
roam
around
you're
gonna
find
us
somewhere,
so
you
can
definitely
also
join
there
for
for
coffee
breaks
and
so
on.
All
the
program
and
details
you
can
find
them
online
on
the
nrw
website
and
the
proceedings
are
are
also
live
just
you
know
that
all
the
sessions
are
recorded
and
I'm
going
to
bore
you
a
little
bit
with
some
more
information.
A
The Note Well on intellectual property does not apply to contributions made to the Applied Networking Research Workshop, so we just wanted to mention that. However, please do observe the Note Well on audio and video recordings: essentially, if you participate online and turn on your camera and/or your microphone, you consent to any recording that we might do. As I mentioned before, all these sessions are going to be recorded, and we're going to add them to the workshop's YouTube channel. Also, please observe the Note Well on privacy and the code of conduct: essentially, agree to work respectfully with the other participants. Now, just a few notes on Meetecho. As I mentioned before, we are going to run the whole workshop here, and presentations are pre-recorded.
A
You can actually go and preview them on the workshop program website. In each session, each paper basically gets 10 minutes of presentation, followed by five minutes of Q&A, and at the end of each session we will also have a 15-minute panel, where we want to encourage you, all the attendees, to engage with the authors in a more lively Q&A session.
A
If
you
would
like
to
ask
questions,
we
encourage
you
to
use
the
with
echo
queue.
The
virtual
cue
that
you
know
each
of
the
session
chairs
are
is
going
to
handle.
So
I'm
sure
that
if
you
need
more
information,
as
you
can
see,
we
have
support
here
both
nick
and
I
will
be
here
in
order
to
make
sure
that
things
run
smoothly
and
if
anybody
needs
help
yeah,
just
let
us
know.
B
Sure. So we organized the papers thematically into six sessions, and we'll meet three times over the next three days, for about two hours each time, with two themes per session. Today we'll hear about internet protocols and congestion control; the subsequent slides have the papers on each of these, so we can just look at those in a second. Then tomorrow we'll have interconnection and routing, and internet traffic monitoring; and then on Wednesday we'll talk about DNS and privacy, and then applications.
A
Yeah, I just wanted to add that, overall, we received about 28 submissions, so thank you so much to everybody who submitted, and thank you for your interest. Based on the review process we put together, we ended up accepting about 16 papers, which we basically tried to fit into these sessions.
A
It's a pretty deterministic program: it's always two hours, from Monday to Wednesday, and we're going to meet at 7:00 p.m. UTC in these two-hour slots. You can see that each session has a session chair assigned, and we're going to hand over today directly to the first session, on internet protocols, where we have three exciting papers from three speakers, as I mentioned before.
A
With the authors, we will have the 15-minute panels that each of the session chairs is going to manage. So today, as Nick was saying, we have internet protocols, with three papers, followed by practical congestion control, with two papers, so hopefully the slot that we have assigned fits perfectly. Today, Anna Brunström and [name unclear] are going to help us as session chairs.
A
They are the two chairs. Tomorrow, we're going to have the session on interconnection and routing, and here [name unclear] is going to help us chair this session, with two talks; and then session four, which is a longer session where we're going to have four different talks, and [name unclear] is going to help us chair that session.
A
On the last day, we have DNS and privacy, which Nick is going to take care of, and I'm going to be with you right till the end, with applications and specifications.
A
So this is it for me, right on time. Again, a huge thank you to everyone who submitted; congratulations to the authors; and thank you so much to the PC. We literally could not have done this without you. And again, a massive thank you to Colin for all the support, with everything, on all aspects of this whole process. Thank you so much, and again, thank you to the steering committee for trusting us to put together this program, which we hope you will enjoy.
B
I think you said it all; I have nothing to add, except I'll add my own thank you, again, to Colin, especially for shepherding us through the process of getting the program put together and organized, as well as to our program committee, who did a lot of hard work. And I'll also thank you, Andra, for handling some of the trickier logistics, as well as getting everything on the website, among many other things. So thanks a lot.
A
So thank you so much for everything, and we do hope you will enjoy this. We'll be here, but now, I think, I'm just going to hand over to Anna. I'm just going to go to session one here, and I think we can already start our first session.
E
Hello, my name is Markus Pieska. I work as a PhD student at Karlstad University in Sweden, and I'm going to present to you the outcome of work done on multipath scheduling in the context of multipath transport-layer tunneling. This presentation is organized into three parts. The first will introduce you to the main motivation for this work, which is the planned built-in multipath support in 5G.
E
This multipath framework would allow for the simultaneous usage of cellular networks with other wireless access, such as Wi-Fi. For TCP, the idea is to enable this by splitting the end-to-end TCP into regular TCP between the server and a proxy, and Multipath TCP between that proxy and the user; for non-TCP traffic, a transport-layer multipath tunnel may be used instead.
E
This is true for users as well as telecom operators, both of whom may reduce costs by offloading traffic onto the Wi-Fi whenever possible. One scheduler designed to deliver traffic distribution in this manner is the cheapest-path-first scheduler. The basic idea behind it is to rank-order the paths by cost, in ascending order, and to schedule packets over the cheapest path until the amount of in-flight data matches the size of the congestion window, at which point the scheduler instead attempts to schedule packets over the subsequent path.
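The cheapest-path-first rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the speaker's implementation; the `Path` fields and the `choose_path` helper are made-up names.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    cost: int        # lower is cheaper, e.g. Wi-Fi offload < cellular
    cwnd: int        # congestion window, in packets
    in_flight: int   # packets sent but not yet acknowledged

def choose_path(paths):
    """Return the cheapest path whose congestion window has room, or None."""
    for p in sorted(paths, key=lambda p: p.cost):
        if p.in_flight < p.cwnd:
            return p
    return None  # every window is full: the packet waits in the scheduling queue

wifi = Path("wifi", cost=1, cwnd=10, in_flight=10)  # primary path saturated
lte = Path("lte", cost=2, cwnd=20, in_flight=3)
print(choose_path([wifi, lte]).name)  # prints "lte": traffic spills over
```

Once the cheapest window fills, packets spill to the next path, which is exactly the behavior whose side effects the talk examines next.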
E
The problem with this is how "first" is defined. If "first" means "until the congestion window is full", as I previously stated, it will tend to lead to full utilization of the primary path, which is what we want, but also to congestion, which is troublesome for more reasons than just the increased delay. As it turns out, the congestion will also delay, or completely preclude, the utilization of the secondary path. The extent to which this is a problem will depend on the potential for congestion on the primary path.
E
After the event shown in the previous slide, the primary path is likely at least somewhat congested, while the secondary path is unused. If we treat the two paths as a single path for the purposes of analysis, we have both congestion and underutilization, which is something that you do not expect to see at the same time. This is obviously a malign state, and can be thought of as a kind of bad distribution of the server congestion window over these two paths.
E
So this brings us to the third and final part, where I show you the modifications made to this scheduler to overcome this problem. The basic idea is to redistribute the server congestion window to maximize the throughput. The first step is to determine the bandwidth-delay product of the primary path, with the delay in this case being the minimum delay. We then introduce what we call a live congestion window, which is a fraction of the full window, ranging from the BDP up to the full window.
E
The scheduler is then changed to act on the live congestion window instead of the full congestion window. Finally, we manage the live congestion window by periodically increasing it if there are packets in the scheduling queue, and decreasing it otherwise. Returning to the example shown previously, and using the same configuration except for the new scheduler, we see that we no longer have a problem with the utilization of the secondary path.
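A hedged sketch of the mechanism just described, with illustrative names (the real scheduler operates on live kernel state): the live congestion window is clamped between the primary path's bandwidth-delay product and the full congestion window, grown while the scheduling queue is backlogged and shrunk otherwise.

```python
def bdp(bandwidth_pps, min_rtt_s):
    """Bandwidth-delay product in packets, using the minimum observed RTT."""
    return int(bandwidth_pps * min_rtt_s)

def update_live_cwnd(live_cwnd, full_cwnd, bdp_pkts, queue_len, step=1):
    """Called periodically: grow toward full_cwnd while packets are queued,
    otherwise shrink back toward the BDP floor."""
    live_cwnd += step if queue_len > 0 else -step
    return max(bdp_pkts, min(full_cwnd, live_cwnd))

floor = bdp(bandwidth_pps=1000, min_rtt_s=0.02)       # 20 packets
print(update_live_cwnd(20, 40, floor, queue_len=3))   # backlog: grows to 21
print(update_live_cwnd(20, 40, floor, queue_len=0))   # idle: clamped at BDP, 20
```

Scheduling against this smaller window keeps queueing delay on the primary path low while still allowing growth toward the full window when demand is there.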
E
Sure, can you all hear me?
E
So, if you mean by that the end-to-end congestion control, one interesting aspect is: if you use something more traditional, let's say New Reno, then you get a cheapest-path-first scheduler that works only intermittently, so it becomes sort of unstable. Whereas if you use BBR end to end, then you get a cheapest-path-first scheduler, if it's tunneling over BBR, that either works or doesn't, but it's stable. So how this misbehaves will vary depending on the combination of congestion controls, but the default scheduler will misbehave.
E
Yeah, it depends on what you want, I guess. If you're unlucky, and the configuration of your network is a certain way, let's say, then getting aggregation with BBR over BBR is just hopeless: it will never happen. Whereas, almost regardless of the configuration, if you have, let's say, New Reno or Cubic over BBR, then you will get aggregation intermittently.
E
So
it
depends
on
what
you
want,
but
it's
certainly
a
bit
discouraging
to
see
that
the
the
consistency
with
with
which
bbr
or
bbr
misbehaves,
sometimes
it
just
refuses
to
show
any
aggregation
of
the
path
capacities.
F
Hi, I wanted to thank you for your presentation. This is something that is actually kind of interesting to maybe more people at the IETF than I had previously thought, including myself. I was wondering if you had an opportunity to think about a broader range of path characteristics, including satellite and things like that, which are starting to show up in the 3GPP specifications, for, I guess, phase-2 ATSSS, Release 17 kinds of things right now. So there is a real range of things. And, this is probably more speculative, I'm not talking about 3GPP futures here, but there's the idea that you may have three or more paths, rather than the two paths that ATSSS is defined to use right now.
F
So my question is probably about generalizing your findings to things that are not particularly ATSSS-specific. And I'll stop asking questions now.
E
What we are looking at in this case is just, we have a greedy flow, so we want to have aggregation all the time. That's a sort of easy test to put this scheduler to. A much more challenging one is where you have a rate-limited flow that will need aggregation only occasionally, when the Wi-Fi is particularly bad, and these are the kinds of things that we're looking into right now.
F
I'd say I'll mute and let progress happen, but this is stuff I'm really interested in talking with you all about more. Thank you.
D
Okay, thank you, Markus and Spencer. I think we should move on to the next presentation. You can also save up your questions: if you have more questions for Markus, we will be back at the last panel for all the papers.
D
You still have a chance if you think of some more questions, but let's move to the next talk, which will be by Matthieu Baerts. Matthieu is a software architect in the R&D department at Tessares, which is a spin-off from UCLouvain in Belgium, and his research interests are around multipath technologies like Multipath TCP and Multipath QUIC; he is also currently co-maintaining the MPTCP stack in Linux. Matthieu will be talking about leveraging the 0-RTT Convert Protocol to improve Wi-Fi/cellular convergence.
H
Hello, and welcome to this presentation about leveraging the 0-RTT Convert Protocol to improve Wi-Fi and cellular convergence. Behind this long title there is an experiment I would like to describe, but first I will start by describing why we wanted to do this experiment, simply by showing the current and future situation.
H
I
will
also
present
which
technologies
are
available
today
to
solve
the
problem.
Then
I
will
be
able
to
briefly
expose
the
main
results
we
got
before,
making
some
conclusions
for
technical
reason.
It
is
not
possible
for
me
to
know
the
background
of
the
origins.
I
hope
you
don't
mind
if
I
spend
a
bit
more
time
explaining
the
current
situations
issues,
mobile
operators
have
and
a
different
technologies
available
today
to
solve
that
feel
free
to
get
a
coffee.
H
Already today, the cellular traffic is significant: for a third of the operators, here on the graph, the average data usage per reported SIM per month is already above 10 gigabytes. A simple solution for the operator to cope with this increase in traffic is to improve the network, to avoid any saturation. Of course, yes, but the average revenue of these operators is not growing.
H
Many operators have already deployed their own hotspot solution, just for their clients, but today the Wireless Broadband Alliance, supported by many different network operators around the world, is looking at a global solution. Under the name of OpenRoaming, operators would share resources in order to relieve cellular traffic congestion. But there are drawbacks: as an end user, would you be ready to be disconnected when switching from one network to another, and potentially switch to more limited networks?
H
Did
you
not
already
disconnect
your
device
from
a
wi-fi
access
point,
because
the
cellular
network
was
better
at
that
time?
A
solution
to
that
problem
would
then
be
to
use
multiple
networks.
At
the
same
time,
multipass
tcp
can
help
you.
There.
Mptcp
is
an
extension
to
tcp
and
its
standardized
in
the
rfc
8684
to
describe
mptp.
We
can
do
that
in
one
sentence.
H
It
allows
to
exchange
data
for
a
single
connection
over
a
different
path
simultaneously
or
not.
Instead
of
having
one
connection
limited
to
one
pass,
the
same
connection
can
go
over
multiple
paths.
In
other
words,
you
can
then
have
more
redundancies,
but
you
can
also
have
more
bandwidths
and
more
many
things
like
supporting
handover
and
new
ability
use
case
like
here,
switching
from
one
weak
network
to
another,
where
all
the
signals
are
quite
valuable.
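As context for the client-side support mentioned in the talk (this example is mine, not the presenter's): on recent Linux kernels, where the presenter co-maintains the MPTCP stack, an application can request Multipath TCP by changing one argument to `socket()`, falling back to plain TCP when the kernel lacks support.

```python
import socket

# IPPROTO_MPTCP is 262 on Linux >= 5.6; older Python versions may not
# expose the constant, so it is spelled out here.
IPPROTO_MPTCP = 262

def stream_socket():
    """Prefer an MPTCP socket; fall back to regular TCP if unsupported."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: use a regular TCP socket instead.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = stream_socket()
print(s.type == socket.SOCK_STREAM)  # True either way
s.close()
```

The fallback mirrors how MPTCP deploys in practice: the extension is negotiated, and peers or kernels that do not support it simply continue with regular TCP.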
H
But
this
extension
to
tcp
needs
to
be
supported
by
both
the
client
and
the
server
on
the
client
side.
Some
smartphone
already
supported
like
the
one
from
apple
since
2013,
but
also
some
others
from
samsung
lg
and
more,
even
if
it's
restricted
to
some
applications,
as
shown
on
the
mpcpio
website.
On
the
server
side,
the
support
is
not
very
significant
for
the
moment.
For
various
reasons,
we
are
actively
working
on
it,
but
wait
again.
There
is
a
solution.
H
Enough with the introduction; let's have a look at our experiments and at the main results we extracted from them. From a high-level viewpoint, we have a smartphone connected to two networks. The first one is a DSL network, connected via Wi-Fi; it is a home network, unfortunately limited to 10 megabits per second. I said "unfortunately" because it is my home connection.
H
Here, we are disconnected from the Wi-Fi access point between the two green lines. When we are connected to both the fixed network and the Wi-Fi, we use the Wi-Fi in priority. Here, the DSL network can cope with the requested speed, so the traffic is fully offloaded to the fixed network: excellent for the operators.
H
This proof of concept shows that the future ATSSS already works. On one hand, it helps mobile operators to reduce their mobile traffic, while also helping to improve the end-user experience. MPTCP and the 0-RTT Convert Protocol can play a role in Wi-Fi and cellular convergence: today in 4G networks, and tomorrow in all 5G networks, thanks to ATSSS.
H
Is it better now? Yes? Sorry, my bad. So, yes, about TCP Fast Open: first of all, here we are quite fine, because we use it only between the smartphone and the HAG, the server that we set up in the network operator's network.
H
So
we
only
have
to
ensure
that
the
tcp
first
open
cannot
be
blocked
on
the
network
side,
but
typically
it
is
used
on
a
not
a
private
network,
but
I
mean
a
network
that
is
controlled
by
the
operator.
So
it's
the
operator
deploying
the
server.
So
I
hope
that
the
operators
can
also
make
sure
that
tcp
first
open
is
not
blocked
in
the
network.
D
Okay, yeah, so this of course reduces the problem, I guess. I noticed in your graphs that when you moved to the LTE network, you had quite a bit of fluctuation.
H
So we looked into it a bit, especially after the review that we got, and what we saw is that it seems the end server (it's Twitch, so it's a live video stream) was able to push more data on the LTE network. Because I'm quite "lucky" to have a quite poor DSL connection, we can see that it's quite stable, everything is good; but also, because I'm in the countryside, the LTE is sometimes better, but also fluctuating.
I
Hello, hello, can you hear me? Okay, yeah. I have a question regarding the DNS part. By default, the UE is going to get a DNS server from the mobile provider, but sometimes the UE can set its own DNS in its own way. So here you have both ways, the non-3GPP case, through Wi-Fi, and the 3GPP case. If the UE specifies the DNS in its own way, how are you going to do this type of convergence, like a switchover between the non-3GPP and the 3GPP paths? Thank you.
H
Yes,
I
hope
I
the
question
properly.
So,
if
I
may
just
quickly
repeat
the,
I
think
the
question
is
about
what,
if
there
is
a
custom
dns
that
may
be
forced
to
take
one
pass
or
another.
I
Yeah,
you
know
this
is
like
a
feature,
but
you
can
change
or
you
can
change
on
your
ue,
but
for
a
mobile
provider
they
default
well,
they
prefer
to
use
the
dns
provided
by
the
mobile
provider
itself.
But
here
you
have
the
non-3gpp
path
like
a
wi-fi,
so
the
wi-fi
will
have
its
own
dns.
So
now
that
you
have
three
ways:
one
dns
provided
by
mobile
provider,
one
provided
by
the
downstream
gpu
for
wi-fi
and
the
third
way
can
be
manually
set
by
the
ue
itself.
I
So
now,
how
are
you
going
to
do
this
type
of
the
switching
if
the
ue.
H
Okay, yes, thank you, it's clear now. So here, what we technically do is that when the UE requests to go to a website, we catch the connection and send it to our proxy server, the HAG proxy server, and what we put in the information to connect to the end server is its IP.
D
Okay, thank you for your questions. I think we move to the next talk; Matthieu will also be back for the panel at the end if you have more questions for him. So thank you, Matthieu, and our third talk will be by Zsolt.

Zsolt Krämer is a PhD student at the Budapest University of Technology and Economics in Hungary, and his main research interests are congestion control in cellular networks and cooperative traffic-handling solutions. Zsolt will be talking about cooperative performance enhancement using QUIC tunneling in 5G cellular networks.
J
I'm going to present our paper, titled "Cooperative Performance Enhancement Using QUIC Tunneling in 5G Cellular Networks". The other authors are Mirja Kühlewind, Marcus Ihlar, and Attila Mihály from Ericsson. 5G is going to bring very high peak data rates and significantly lower latency; however, the propagation properties of the new radio also result in higher volatility in the available bandwidth.
J
Previous studies showed that these factors intensify the need for a shorter control loop and local optimizations in the transport layer. In LTE networks, this has been mostly achieved by performance-enhancing proxies, which basically split the end-to-end connection into two separate control loops for the two separate domains.
J
We have identified three different use cases, based on different granularities of cooperation between the server, the client, and the proxy. These are: local loss recovery, promise signaling, and sending declarative messages to the server. When the client requests the local loss recovery feature, it is done by using the reliable data stream service of the QUIC tunnel, and besides this initial explicit request, no additional signaling is needed.
J
Although
packet
losses
are
considered
rare
in
4g
and
5g
networks,
when
using
acknowledge
mode,
this
use
case
can
improve
the
performance
of
unacknowledged
mode
as
well
and
thus
enable
lower
latency
on
the
right.
You
can
see
some
preliminary
performance
results
using
a
one
percent
emulated
random
loss,
and
you
can
see
that
the
average
download
time
is
significantly
lower
when
the
local
loss
recovery
feature
of
the
proxy
is
used,
especially
if
the
rtt
increases
promise.
Signaling
is
useful
if
the
bottleneck
is
between
the
client
and
the
proxy.
J
The third use case is sending declarative messages to the server. This assumes another explicit tunnel between the proxy and the server; the proxy is then able to send declarative, safe-to-ignore messages containing acknowledgement or even NACK info, and the server can then utilize these acknowledgements and apply a multi-domain congestion control algorithm, which maintains two separate control loops for the wired and the wireless domains.
J
One
of
them
is
clocked
by
the
acknowledgements
from
the
proxy,
the
other
one
clocked
by
the
acknowledgements
from
the
client.
One
of
them
can
be
more
conservative
and
the
other
one
more
aggressive,
and
thus
the
algorithm
is
able
to
provide
both
fairness
in
the
wire
domain
and
fast
utilization
in
the
wireless
domain
on
the
right.
J
In conclusion: transparent, connection-splitting performance-enhancing proxies are not going to be feasible for encrypted traffic.
J
We have shown three different use cases (local loss recovery, promise signaling, and declarative messages to the server), shown some promising early performance results, and our future work mainly focuses on a detailed performance evaluation in 4G and 5G network conditions. Thank you for your attention.
D
So, I see Markus in the queue. Please, Markus, you can unmute and ask your question.
K
Yeah, hello, can you hear me? Yes? Very good. After seeing the first presentation from today, about the nested congestion control effects when it comes to multipath aggregation, I wonder if something like this also applies to this QUIC-over-QUIC scenario you presented today, even if it's a single path. Are there scenarios where the overall performance suffers from congestion control over congestion control?
J
Good question, thank you. We have not encountered such a thing during this early performance evaluation. But, yeah, so basically that's my answer: we have not encountered such a thing yet.
K
Okay, if you allow me, I have a second question on this. So far you tested QUIC over QUIC, and there's also the idea to allow, in the future, end-to-end protocols which are not QUIC: so, for example, TCP, which uses congestion controls other than BBR.
J
So, we are utilizing MASQUE, and if MASQUE is there, and we have these two layers of connections, we have this tunnel, then we want to establish this channel between the client and the proxy and enable these performance-enhancement use cases with this added communication. And this is, I think, a QUIC-specific framework.
L
Thanks for the presentation, very good ideas. You said it is based on MASQUE, but when I looked at the MASQUE working group, they don't have any RFC or any published specification yet.
J
It's a relatively recent working group. What I mean is that, for example, the channel between the client and the proxy is established via an extension of the HTTP CONNECT method.
J
And this is very much related to the MASQUE working group: MASQUE basically describes this framework of having a QUIC tunnel over the end-to-end QUIC connection.
J
Yes, you could say so, but it follows the documents and the design that are being worked on in the MASQUE working group. Thank you.
M
Can you hear me? Yes? Okay, so my question is, and maybe I missed this during the presentation, but what's the main difference between the multi-domain congestion control and Cubic?
J
The multi-domain congestion control algorithm has two components running in parallel, and they are clocked by different types of acknowledgements: one by the acknowledgements from the client, and the other by the acknowledgements from the proxy. So there are two congestion window state variables maintained, clocked in parallel, and then the algorithm selects the lower one of the two.
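A minimal sketch of that min-of-two-windows idea; the growth rules below are made-up stand-ins (a Reno-like rule for the wired loop, a scalable rule for the wireless loop), not the algorithm from the paper.

```python
def on_proxy_ack(cwnd_wired):
    # Conservative, Reno-like additive increase, clocked by proxy ACKs.
    return cwnd_wired + 1.0 / cwnd_wired

def on_client_ack(cwnd_wireless):
    # More aggressive multiplicative growth, clocked by client ACKs.
    return cwnd_wireless * 1.01

def effective_cwnd(cwnd_wired, cwnd_wireless):
    # The sender is limited by whichever domain holds the bottleneck.
    return min(cwnd_wired, cwnd_wireless)

w_wired, w_wireless = 10.0, 10.0
for _ in range(50):                    # 50 acknowledgements from each domain
    w_wired = on_proxy_ack(w_wired)
    w_wireless = on_client_ack(w_wireless)
print(effective_cwnd(w_wired, w_wireless) == w_wired)  # wired loop limits here
```

Taking the minimum is what lets the sender be fair where fairness matters (the shared wired domain) while ramping quickly when only the scheduled wireless link is the constraint.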
J
Thus it is able to adapt to the location of the bottleneck. If the bottleneck is in the wireless domain, then it can behave more aggressively, because resource sharing in the mobile network is not the responsibility of the congestion control algorithm; it is taken care of in lower layers. And if the bottleneck is in the wired domain, so let's say the internet, then it behaves similarly to Cubic: like Cubic with a lower RTT.
J
Then yes, but in our measurements the bottleneck was in the mobile network, and then it's much more aggressive; it's similar to Scalable TCP. But you could use BBR, or any kind of algorithm with fast-utilization properties.
N
Okay. I guess that will be future work. I mean, is that something you're intending to pursue, or just...
I
Can you hear me right now? Hello, hello? Yes, okay. You know, I was reading through your paper, and in the figure you mentioned, okay, you're going to use the proxy, but I checked your paper and I couldn't find where you're going to put the proxy.
I
So if you put your proxy behind the UPF, or maybe the PSA UPF, how are you going to handle things like the delay, the latency? I couldn't find that in your paper.
J
Yeah, that's a very good question. I think this is going to be a very important part of future work: getting the proper placement for the deployment, and maybe even experimenting a little bit with that in the performance evaluation.
D
So maybe each of the speakers can say what is new about your approach and how it relates to the evolution of the internet, in response to Jean's question. I guess for the naming we have to ask the chairs, but this, I think, is an interesting panel question for all of you. So maybe, Markus, you can start, and we'll take the three papers in turn.
E
Okay, so there's no new protocol per se in the work we're doing, although we are working with Multipath DCCP, which is a new protocol. What it is, is an evolution of existing protocols and schedulers, so it's not exactly new protocols; I actually agree with that. It's more like maturing protocols, or schedulers.
H
All right, again my fault. Yes, of course, MPTCP is not a new protocol. For MPTCP, as maybe you know, there is a new version, MPTCP v1, but it's basically still MPTCP behind; not a lot of changes, even if they are important.
H
On the other side, we were using a proxy, the 0-RTT Convert protocol, which I think was never used in a production network, or at least not at that time that I saw. So it's kind of new, and we hope it will also be used. That's it for me.
J
Can you hear me? Sorry for the issues with the camera, I'm not sure what's happening. Yeah, so a somewhat similar answer: QUIC is not a new protocol, unless you count it as new due to the recent standardization, but the method to provide cooperative performance enhancement with QUIC is kind of novel.
D
Yeah, so in summary, I guess we had in the first session MP-DCCP, which is a draft at the moment, so that's maybe clearly a new protocol, and you have also used MASQUE, which we had a question about because it's not yet fully defined. So I think these are also new protocols in development, in some sense. And you had the 0-RTT Convert, which just became an RFC and, as you said, has not been used before.
D
I don't see any more people in the queue, and we are actually also running out of our time slot.
D
So I think this was a very suitable last question, and maybe, after the replies, the name was actually also quite well chosen.
D
So thanks a lot to all the speakers, and also to all the participants for your questions and engagement in the session. We will continue immediately with session two, so I will hand over to the chair for this session, Teresa. So thanks a lot.
P
Hello everybody, my name is Nathalie Romo, and welcome to the presentation of the paper "CCID5: an implementation of the BBR congestion control algorithm for DCCP and its impact over multipath scenarios". This presentation is divided in four sections. I'm going to start with an introduction, where I explain the motivation and the objective of this research, following with the description of the design and the evaluation, to finalize with the conclusion and the future work.
P
The specifications have already defined MPTCP as a solution for traffic splitting, but it is limited: it is limited only to the reliable and in-order transport of traffic, which means it is not suitable for any real-time service or application, and it's also not suitable for the transport of QUIC.
P
On that basis, the MP-DCCP protocol was developed to provide a multipath solution capable of transporting unreliable traffic. MP-DCCP extends the Datagram Congestion Control Protocol to support multipath sessions. As I said, it provides unreliable transport of data, and it is connection-oriented and congestion-controlled.
P
BBR defines two conditions to achieve an optimal point of operation. The first condition is that the amount of data in flight has to be equal to the bandwidth-delay product, and the second condition is that the bottleneck packet arrival rate must match the bottleneck bandwidth. To fulfill these two conditions, BBR continuously estimates the values of the bottleneck bandwidth and the round-trip propagation time, and later uses these values to calculate three control parameters, which are the congestion window, the pacing rate and the send quantum.
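The parameter derivation just described can be sketched as follows. This is a hedged illustration of a BBR-style computation, not the CCID5 code: the gain constants and the send-quantum bounds are assumptions chosen for readability.

```python
# Illustrative sketch: derive BBR's three control parameters from the two
# estimates it maintains (bottleneck bandwidth and round-trip propagation
# time). Gains and quantum bounds are assumptions, not CCID5's values.

def bbr_control_params(btl_bw_bps, rt_prop_s, cwnd_gain=2.0, pacing_gain=1.0):
    """Return (cwnd_bytes, pacing_rate_bps, send_quantum_bytes)."""
    bdp_bytes = btl_bw_bps / 8 * rt_prop_s        # bandwidth-delay product
    cwnd = cwnd_gain * bdp_bytes                  # cap on data in flight
    pacing_rate = pacing_gain * btl_bw_bps        # target sending rate
    # Send quantum: burst size, roughly 1 ms of data, bounded to sane sizes.
    send_quantum = min(max(pacing_rate / 8 * 1e-3, 1460), 64 * 1024)
    return cwnd, pacing_rate, send_quantum

cwnd, rate, quantum = bbr_control_params(btl_bw_bps=10e6, rt_prop_s=0.040)
```

For a 10 Mbit/s bottleneck and 40 ms round-trip propagation, the BDP is 50 kB, so the sketch yields a 100 kB window at a 10 Mbit/s pacing rate.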
P
The way these parameters are updated, and the way the whole protocol behaves, is ruled by the state machine depicted in this figure. In the startup state, the sending rate will increase rapidly until the pipe is detected to be full. Once this detection is made, the sending rate will be reduced to drain any possible queue, to finally go into the probe-bandwidth state, where the amount of data in flight is slightly increased from time to time to probe for more available bandwidth.
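The state machine just described can be sketched as a simple transition function. The transition conditions here are simplified illustrations of the Startup, Drain, ProbeBW and ProbeRTT phases, not the exact BBR or CCID5 logic:

```python
# Simplified sketch of the BBR phase transitions described above.
# Condition flags are abstractions; real BBR derives them from delivery
# rate samples and timers.

STATES = ("STARTUP", "DRAIN", "PROBE_BW", "PROBE_RTT")

def next_state(state, pipe_filled=False, queue_drained=False,
               rtprop_expired=False, rtt_probe_done=False):
    if rtprop_expired and state != "PROBE_RTT":
        return "PROBE_RTT"                 # periodically re-measure min RTT
    if state == "STARTUP":
        return "DRAIN" if pipe_filled else "STARTUP"
    if state == "DRAIN":
        return "PROBE_BW" if queue_drained else "DRAIN"
    if state == "PROBE_RTT":
        return "PROBE_BW" if rtt_probe_done else "PROBE_RTT"
    return "PROBE_BW"                      # steady state: cycle pacing gains
```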
P
In addition to that, the DCCP standard specifies that each CCID profile has to define some other aspects, like the format of the acknowledgement packets, the timing of their generation, and how they are congestion-controlled. All these definitions were taken from CCID2; that means that we use acknowledgement vectors and also the acknowledgement ratio.
P
Apart from that, there are many functions which are part of the TCP protocol but not of DCCP, corresponding to the tracking of the packets in flight and also to the acknowledgement generation. These functions were also taken from CCID2, and we also took some other ideas to verify the application-limited periods.
P
Finally, to integrate the congestion control algorithm with the multipath framework, we need to provide some information to the scheduler and reordering functions that I have mentioned before. The information that we provide is the congestion window and packets in flight for the schedulers, and an RTT estimation for the reordering algorithms. To test this implementation, we used two schedulers from the list of schedulers available, which are cheapest-pipe-first and round-robin. Cheapest-pipe-first allocates packets based on a predefined priority, and round-robin alternates packet sending through all the available paths.
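The two scheduling policies can be sketched as below. This is an illustrative model only, not the MP-DCCP kernel implementation; the `Path` class and `cwnd_pkts` field are assumed stand-ins for the congestion-window state that the CC reports to the scheduler.

```python
import itertools

# Sketch of the two schedulers mentioned: cheapest-pipe-first sends on the
# highest-priority path that still has congestion-window space, while
# round-robin alternates over all paths with space. Names are assumptions.

class Path:
    def __init__(self, name, priority, cwnd_pkts):
        self.name, self.priority = name, priority
        self.cwnd_pkts = cwnd_pkts          # free window, reported by the CC
    def has_space(self):
        return self.cwnd_pkts > 0

def cheapest_pipe_first(paths):
    # Lower priority value = preferred path.
    for p in sorted(paths, key=lambda p: p.priority):
        if p.has_space():
            return p
    return None

def round_robin(paths, _counter=itertools.count()):
    usable = [p for p in paths if p.has_space()]
    return usable[next(_counter) % len(usable)] if usable else None
```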
P
Transmission starts on path A. When the bandwidth is limited, the congestion control algorithm detects this congestion and informs the scheduler, and at this point the scheduler starts transmitting through the secondary path. We can see that CCID5 reacts quite a bit faster than CCID2, and we can also see that for CCID2 the primary path remains congested, which leads to higher latencies.
P
In this multipath scenario, we also decided to test TCP traffic, taking into account that this framework should also be capable of transporting QUIC traffic, and these two protocols have some similar behaviors.
P
So for this test we didn't set a fixed transmitting rate; we let TCP send as much as possible, but we limited the path bandwidth to 10 megabits per second. As we can see here, the response in terms of throughput is more or less similar for both CCID2 and CCID5, but CCID5 shows better results in terms of latency.
P
We managed to implement the BBR congestion control as an extension for the DCCP protocol, proving that the conceptual background of BBR also works for DCCP, and we also integrated our approach into a multi-access framework that can be applied either to the 5G ATSSS context or to the hybrid access scenario. As future work, we intend to do more extensive testing of the algorithm, adding some other variables and constraints, like concurrent flows, to evaluate fairness by delay and packet loss.
P
Yeah, we have different implementations of the reordering. The most basic one is to just do nothing, and the packets go through directly to the virtual interface. The second option is the buffer: the algorithm reads the sequence numbers.
P
If a gap is detected, then the packets are stored until the missing packets arrive or until a fixed timer expires. And the third one is an adaptive reordering that uses the information of the round-trip times that the congestion control can provide, and with this information of the round-trip time, the timer of the buffer can be adapted dynamically, so it's not a fixed value.
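The buffered variants just described can be sketched as follows. This is an assumed illustration, not the framework's code: packets are held when a sequence gap appears, released in order when the gap closes, and flushed when a hold timer (sized from the CC's smoothed RTT in the adaptive variant) expires. The `rtt_gain` factor is an assumption.

```python
import heapq

# Sketch of the reordering buffer described above: in-order release on
# arrival, plus a flush path for timer expiry. The RTT-scaled hold timer
# models the adaptive variant; rtt_gain is an illustrative constant.

class ReorderBuffer:
    def __init__(self, rtt_gain=1.5):
        self.expected = 0        # next in-order sequence number to deliver
        self.heap = []           # (seq, packet) pairs waiting on a gap
        self.rtt_gain = rtt_gain

    def hold_timeout(self, srtt_s):
        # Adaptive timer: wait a bit longer than one smoothed RTT.
        return self.rtt_gain * srtt_s

    def push(self, seq, pkt):
        """Return packets that became deliverable in order after this arrival."""
        heapq.heappush(self.heap, (seq, pkt))
        out = []
        while self.heap and self.heap[0][0] == self.expected:
            out.append(heapq.heappop(self.heap)[1])
            self.expected += 1
        return out

    def flush(self):
        """Timer expired: deliver everything buffered, skipping the gap."""
        out = []
        while self.heap:
            seq, pkt = heapq.heappop(self.heap)
            out.append(pkt)
            self.expected = seq + 1
        return out
```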
P
No, those are not explained in any figure. We just wanted to test, in the first place, that the integration works, that the algorithm is actually able to read the information that the congestion control provides. But in the first place the target was just to validate whether we could actually reduce the difference in the latencies.
N
Okay, yeah, all right, thanks. I guess we've seen efforts to do that in layer-2 switching before that were arguably counterproductive, but I was just curious if you're seeing anything different. Thank you.
P
So that's, let's say, the main challenge. And also there's another sort of limitation: the fact that we have the measurements in packets and not in bytes in DCCP. So of course it is more adapted to applications that use a fixed packet size.
O
I think he was just saying thank you. Okay, okay, and now we have Vidhi.
M
Can you hear me? Yes, but you're a little bit quiet. Okay, yeah, I'm sorry. It's all right. Still quiet. That's all right.
M
So, a question: I have just googled about DCCP, I haven't read much about it, so I was just curious. First of all, why did you choose BBR? And then, did it give you better performance? I mean, I was looking through the slides, but there's a lot of content. Is it giving better performance than CCID2 with respect to latency and bandwidth utilization?
P
First, we chose BBR precisely because of the good results that it has shown on TCP, because it avoids bufferbloat, which means that we will have better results in terms of latency. And second, we only tested against CCID2 because for the Linux kernel implementation we only have CCID2 and CCID3 available, and CCID2 is more or less Reno-based, so we have a baseline to compare also with TCP.
M
Because, I suppose, you did answer that question. Yeah, I'll think about it more and read your paper. Thank you for the great presentation.
O
Okay, thanks. You're welcome. All right, thanks, Vidhi. And now we have another question; I hope I'm not pronouncing the name wrong. And then I think afterwards we should move on to the next talk.
P
I'm not really sure. I mean, the tests that we did were in a closed environment, in a laboratory, but I don't think there's a bigger problem with packet loss; I think it's the opposite. As far as I understand, the capacity and the speed of the links available is higher than what you would have at home. But we didn't test it in a real environment; we just tested it in the lab.
O
Thanks. All right, so let's move on to our next presentation, and then towards the end we will have a panel discussion with both speakers. Thank you, Nathalie. And now our next talk is from Thomas, who just finished his PhD at the University of Illinois, congratulations. At this university he has been working on mobile, wireless and cellular networks, as well as the PCC Proteus congestion control algorithm, which he will be talking about in a bit, and he will be joining Huawei soon.
S
This work is inspired by the surging number of internet applications and, meanwhile, the fair-sharing objective in most previous congestion control protocols. Specifically, the applications shown here all require network bandwidth resources to accomplish data transfer. However, their users usually have different timing requirements. For applications with inelastic timing requirements, the data transfer should complete as soon as possible, but there are also elastic-timing applications, such as system and software updates.
S
Moreover, an online video has a highest encoding bitrate. Corresponding to that, if the throughput is more than enough to support the best QoE, the video streaming flow can conditionally lower its priority to scavenger. Furthermore, when Alice's video clip buffer is almost full, the risk of having a rebuffer is relatively low, and in that case the streaming flow can opportunistically switch to scavenger priority for a short time period.
S
That said, by enabling flexible and dynamic switching between the scavenger and primary priorities, we can further extend the scavenger use cases and increase the network utility. Therefore, in our recent work we proposed PCC Proteus. In summary, it is based on the PCC utility framework, where each priority corresponds to a utility function.
S
That inherently enables PCC Proteus to support dynamic priority switching. However, several factors are restricting larger-scale deployment of scavenger congestion control. From the perspective of implementation, the existing scavenger protocols have limited availability, especially in widely deployed transport data paths such as the Linux kernel and QUIC. To be detailed, the uTorrent implementation of LEDBAT is limited to the application itself.
S
So in this work we first push forward the open-source implementations of scavenger protocols. We port PCC Proteus to QUIC, which is becoming increasingly popular with the IETF standardization progress, and, for the convenience of academic research and performance experiments, we also provide our own LEDBAT++ implementation by branching off the uTorrent LEDBAT codebase. For benchmarking purposes, we compare the impact of LEDBAT, LEDBAT++ and the Proteus-QUIC scavenger on BBR and Cubic, using a combination of Mahimahi bottleneck setups.
S
The figure presents the CDF of the primary flow's throughput ratio when the primary flow uses BBR. We can see that LEDBAT++ does have improved performance compared with LEDBAT, and it is even similar to our QUIC Proteus scavenger. But for Cubic, a Cubic flow achieves 5.6 percent higher throughput competing with Proteus-QUIC than with LEDBAT++.
S
However, throughout the above process, the congestion controller is oblivious to the actual application requirements, which are necessary to determine the appropriate transport priority. So we claim that, to convey such application requirements, another interface between the application and the transport data path is needed when deploying scavenger congestion control. This interface is responsible for priority control, including two tasks.
S
Priority selection and priority switch. The first task, priority selection, is to inform the congestion controller which priority to use in real time. There are several different ways to implement that with the application-transport interface. First, a priority controller can be contained within the congestion control protocol.
S
It takes the input of real-time QoE through the interface and outputs the selected priority. Or the priority controller can be implemented together with the application; for example, when designing an adaptive bitrate algorithm in video streaming, the developer can directly determine the priority to use alongside the video bitrate selection.
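The first option, a priority controller fed with real-time QoE through the interface, could look roughly like this. This is a hedged sketch: the function name, inputs and thresholds are assumptions, not the paper's API.

```python
# Illustrative priority controller: map real-time QoE signals to a
# transport priority. The 0.8 buffer threshold and parameter names are
# assumptions chosen for the example.

PRIMARY, SCAVENGER = "primary", "scavenger"

def select_priority(throughput_bps, best_qoe_bps, buffer_s, buffer_cap_s):
    """Pick scavenger only when QoE has headroom and rebuffer risk is low."""
    qoe_satisfied = throughput_bps >= best_qoe_bps   # best quality sustained
    buffer_safe = buffer_s >= 0.8 * buffer_cap_s     # playback buffer nearly full
    return SCAVENGER if (qoe_satisfied and buffer_safe) else PRIMARY
```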
S
This is easy for our PCC Proteus because, as we mentioned, different priorities in Proteus are simply different utility functions under the same protocol; thus it only needs a bit flag to determine which function API to call. In comparison, the priority switch is not as convenient if the scavenger protocols are implemented separately from the primary protocols, like in the case of LEDBAT. But we think it is possible to leverage multipath transport as a workaround: as shown here, the sender can use different subpaths specific to the primary and scavenger priorities, respectively.
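The multipath workaround can be sketched as a steering layer that keeps one subpath per priority and routes packets to whichever matches the currently selected priority. The class and the list-based subflows below are illustrative stand-ins, not a real multipath stack.

```python
# Sketch of the subpath workaround: one subflow per priority; switching
# priority just redirects traffic to the other subflow. Lists stand in
# for real subflow send queues.

class PrioritySteerer:
    def __init__(self, primary_subflow, scavenger_subflow):
        self.subflows = {"primary": primary_subflow,
                         "scavenger": scavenger_subflow}
        self.current = "primary"

    def switch(self, priority):
        if priority not in self.subflows:
            raise ValueError(priority)
        self.current = priority          # e.g. driven by the app's QoE signal

    def send(self, packet):
        self.subflows[self.current].append(packet)   # stand-in for a real send
```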
S
It can then direct traffic to one of the subpaths according to the selected priority. Ultimately, we hope scavenger congestion control can attract more interest, so that our envisioned priority control, based on the interface between the application and the transport data path, could be accepted or implemented in practice; the corresponding implementation will also be important future work. Due to limited time, that's all I want to present here. Thank you all for listening; we definitely look forward to questions, suggestions and criticisms.
O
Thanks for this pre-recorded talk, and please join with audio and video if you would like. Does anybody have any questions for this particular talk so far? Yeah, I can hear you, okay. If there's no question so far, I have a question. Oh wait, we do actually have a question. All right, please go ahead.
R
Can you hear me? Yes. I have a question regarding that second control channel between the application and the transport. It suggests that the application should explicitly set its priority. My question is: what are your thoughts on the potential...
R
...situation where multiple applications are claiming that their use case has the highest priority, thus leading to some kind of tragedy of the commons, where everybody wants the highest priority but nobody can get any prioritization? I wonder specifically about the interaction between the application and the transport. Thank you.
S
Yeah, cool. So that is definitely a very good question, and we did receive similar questions a lot previously. So the simple answer is: consider the case that everyone wants the high priority.
S
Then that's basically the same case as the current congestion control deployment, where everyone uses either BBR or Cubic, just the default data-path transport protocol. In that case everybody uses the higher priority, which means it is not an obvious deployment scenario for the lower scavenger priority. The motivation for the scavenger priority is that there should be a chance for some application to lower its priority temporarily to benefit others, without impacting the application itself.
S
For that part, it is currently future work for us. Like right now — did you ask about the transport protocol for each path?
S
So right now that is future work for us. This is a position paper, and we talk about our vision of, and our thoughts on, this interface. As you can see, we provide the QoE information for our priority control to the congestion control protocols, and in that case there should be some communication of transport statistics from the lower layer to the higher layer. Then, since QUIC is implemented in user space, for this API we envision that there should be some communication from the application layer to the QUIC transport and congestion control part. And, I think, for the implementation...
S
Some thoughts here, specific to the QUIC data path: within a QUIC connection, or even lower, at the interface to the QUIC congestion control algorithm, there should be some function APIs, for example called priority-selection or priority-switch. Then the higher level, the application layer, or somewhere between the application and the transport layer, can use these function APIs when conveying the application requirements.
S
Without such APIs, flows just use the current default congestion priority; by default, most applications currently use the high priority. We just want to use this interface to tell the transport data path when it can switch to the lower priority and when to switch back. And this determination happens per flow, so each flow of a specific application does not need to know...
S
...the choices of the other competing applications in the network or at the same endpoints.
I
Okay, okay, just checking. Okay, yeah, it's good work and very interesting. You know, recently we are working in 3GPP on the tactile and multi-modality communication service. Basically that means, for this TACMM, it's going to have multiple input sources: it can consist of, like, video, audio, a sensor, and haptics, like some gesture things. So now, yeah, you know, I'm trying to think: there are some issues regarding the synchronization of, or even competition among, all the multi-modality inputs.
I
So here you just answered the previous question saying, okay, each flow is going to behave or do something based on its own decision. But for the TACMM service, actually, a composite flow is going to include multiple input sources, and these are different: something can be elastic, something inelastic, or something maybe opportunistic, if you define or categorize it that way. So I'm thinking, well, it's not especially about your work, but try to think from your side.
I
How can we do something, you know, by using your approach for the 3GPP TACMM service? So yeah, thank you.
S
Sure, thanks for the question. So personally I'm not quite familiar with the 3GPP details, but I think, for example, in our work we base the priority selection and switch only on the real-time performance of the flow itself. But if we can get more information on, like, the competitors, then some more centralized design may be necessary for scheduling and resource allocation.
S
In addition to this online, tuning-based approach, in our case the selection of the priority should be based on one rule, which is that the change, specifically selecting the lower priority,
S
should not impact the flow itself, but creates the opportunity for some competing flows within the network to acquire a higher bandwidth allocation. And in scenarios where there are multiple services, as mentioned by you, first we may have more priorities, instead of only one primary and one scavenger, and also, if we have more inputs in addition to the performance of the flow itself, then we can leverage that, either still per flow but with more information, or...
S
Yeah, yeah, on that front, thanks for the clarification. So on that front, I think there could be a separate priority determination for each of those sub-services. Like for audio, we can fix it to the default or primary priority all the time, and for the video parts, if the bandwidth is high enough, then this is the most intuitive situation.
S
Then we can switch to the lower priority for this video streaming flow when the highest bitrate is rendered smoothly, or when the local playback buffer, in the online-video case rather than real-time streaming, is heavily occupied, so there is no risk of rebuffering.
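The per-sub-service rule just described can be sketched as follows. This is an assumed illustration of the idea (audio pinned to primary, video dropping to scavenger only when playback is smooth and the buffer is nearly full); the 0.9 threshold and all names are assumptions.

```python
# Illustrative per-sub-service priority rule for a composite (TACMM-style)
# flow: audio never deprioritizes; video may, when rebuffer risk is low.
# Thresholds and parameter names are assumptions.

def video_priority(top_bitrate_smooth, buffer_s, buffer_cap_s):
    safe = top_bitrate_smooth and buffer_s >= 0.9 * buffer_cap_s
    return "scavenger" if safe else "primary"

def service_priority(kind, **stats):
    if kind == "audio":
        return "primary"              # latency-critical: never deprioritize
    if kind == "video":
        return video_priority(**stats)
    return "primary"                  # unknown sub-services stay primary
```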
P
Yes, of course. Well, starting from our research, the MP-DCCP protocol is intended to be standardized, so it's intended to be a part of the whole internet as well. And regarding the congestion control that we implemented during this research, we already have some prototypes that work within the public internet, with some fixes, like some converters to avoid middlebox problems. But in theory, yes, it is intended to work within the whole internet, not just in local networks or something like that.
S
Yeah, so in our case there are two parts in our plans, or in what we have already achieved. The first part is the implementation of the congestion control protocol itself, just to provide the scavenger priority, and for that part we currently have an open-source QUIC implementation, which we expect can be used by any internet user. And besides that, we have something closed-source which is not mentioned in our paper submission.
S
We have a closed-source Linux kernel implementation which already has some real-world internet users. The other part is the interface. This is an envisioned future direction in our discussion, and right now the most available transport protocol implementation is based on the QUIC framework, so at least we expect the very next step to be implementing these APIs within the QUIC framework, which is in user space.
S
So it should be easier to implement, and we have some collaborations with industry companies right now, which we hope can help us if we want to turn this interface discussion into some IETF standards.