From YouTube: ANRW-QUIC
Description
QUIC meeting session at ANRW
C
Yes, so a note about IPR: the IETF intellectual property rights disclosure rules, the Note Well, do not apply to this workshop. They do apply during IRTF research group meetings, so if you have questions about that, please ask someone from the IRTF who knows about IPR. We're streaming live, so talks will be recorded and posted on the IETF YouTube channel; you can see the URL there on the slide. And obviously a big thank you to our sponsors, Akamai, Comcast, ACM and the IRTF. Let's give them a hand.
C
So the goal here is to help inform practitioners at the IETF about academic research that might be relevant to them, and also to help folks working on research problems transition that work to practice: communicate ideas with practitioners, get a sense of where the research fits in terms of standards, and act as a gateway between the academic research we're doing at universities and, hopefully, the standards process going forward. So the goal here is really to bring research into practice.
C
Well, thanks for all the hard work doing reviews and discussing; as a result, we now have a nice program. Some statistics: we've got two tracks, invited talks and regular paper submissions. We had 13 talks that were nominated; I believe we invited four, and two have ended up in the program. We had 26 papers submitted to be presented; these were reviewed and then discussed at a program committee meeting, and 13 of them have been accepted into the program today.
C
Sorry, I just want to go back a slide for a second, a note for presenters: if you have your slides in PDF, can you please email them to the email address here, irtf-chair at irtf.org, in PDF format? They will be posted on the workshop web page, which is a good resource for people to look back at later, so please get those sent in. Next, the IRTF research groups.
D
So the IRTF has several research groups, and while you're here you should make use of this opportunity to see whether you're interested in the activities of these research groups. If you are, reach out to the chairs, talk to them, and find out whether you can contribute or pick up ideas from being part of a research group. The topics are pretty broad, and there are groups that get chartered periodically.
D
The IETF is where you would go to take a piece of research if you wanted it to become standardized, if you wanted it to be adopted by various industry participants. Here's a brief, informal overview that was put together for a tutorial last year, and we're sort of cannibalizing it here.
D
Basically, you bring your research ideas into the IETF, and the idea is to get them standardized. The most important aspect here is to find at least one IETF native as a guide. This is a person who can help you navigate the process; you will never understand the process I just showed on your own, but they can help you navigate and understand it, because typically the process tends to be very different depending on the working group and the work that you bring. The amount of time it takes can be quite different too. Can we go back one slide?
D
Take TCP Fast Open: this is a draft that was proposed by people at Google, and from start it took about three and a half years. I don't know what the average or the median here is, somebody else might be able to weigh in on that, but that's not atypical. It can be shorter if your draft is small enough and less controversial, and it can take up to ten years.
D
Again, you have to keep these numbers in your head, but I like this perspective: if you're thinking about taking published research and trying to get it to a standard, a couple of years can seem daunting, but you have to think about the potential for impact you can have by getting there, which is the day industry adopts your work and your work actually gets deployed. Next slide. So anyway, you want to find that guide.
G
Okay, hello everybody! It's my pleasure to welcome you to the first session of ANRW this year. This first session is focused on QUIC, which is one of the larger undertakings the IETF has taken on over the last couple of years. QUIC is a pretty ambitious protocol that combines security, application and transport, and it is well underway in the QUIC working group, and so we're really happy to, oh yeah, let's switch this over.
G
All right, as we do the setup, let me talk a little bit more about QUIC. The QUIC working group has been working diligently on developing this new standard, deploying it and testing it, and so we're really happy to have these two talks that explore QUIC and its performance in the real world. So first, let me welcome the first speaker here as he gets set up. Let's give a nice round of applause to Konrad Wolsing.
F
Ok, let's see what we can do here.
G
Okay, Fabio, are you here? I just saw you. All right, sorry about that, we're just going to move forward.
H
Okay, the purpose of this work is that we are proposing an alternative way of improving the spin bit measurement, using only one bit instead of the two bits used by the valid edge counter. In this way, not all of the three bits reserved for future experiments are used, and we can use another bit to implement the measurement of the loss rate.
H
Okay. As you all know, the latency spin bit is a simple mechanism that causes one bit to spin in the short packet header of QUIC packets, generating one edge, a transition from 0 to 1 or from 1 to 0, once per round-trip time. So an observer can measure the distance in time between two consecutive edges and generate one round-trip time sample per flow per round-trip time. The mechanism is really simple.
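As an illustration of what such a passive observer does, here is a minimal sketch (not the authors' implementation) that extracts the spin bit from the first byte of a QUIC short-header packet, assuming the QUIC v1 short-header layout, and turns edge-to-edge gaps into RTT samples; `packets` is a hypothetical iterable of (timestamp, first_byte) pairs:

```python
def spin_bit(first_byte):
    # In the QUIC v1 short header the latency spin bit is bit 0x20 of the first byte.
    return (first_byte & 0x20) != 0

def rtt_samples(packets):
    samples = []
    last_edge_time = None
    last_spin = None
    for ts, first_byte in packets:
        spin = spin_bit(first_byte)
        if last_spin is not None and spin != last_spin:
            # A spin edge: the time since the previous edge approximates one RTT.
            if last_edge_time is not None:
                samples.append(ts - last_edge_time)
            last_edge_time = ts
        last_spin = spin
    return samples
```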
H
Okay, the spin bit has some limitations: packet loss tends to cause wrong estimates of the round-trip time due to spin-period width changes, as you can see in the first image, and reordering causes a drastic underestimate of the round-trip time, since packet reordering causes the generation of fake spin-bit transitions, and this leads the observer to measure a round-trip time that is really, really small.
H
Moreover, an application-limited sender can introduce delay in the edge reflection, so if one of the two endpoints is not sending traffic, the measured round-trip time can be really high. These issues are addressed by the valid edge counter (VEC). The VEC is a two-bit signal that is added to each packet, and its purpose is to explicitly report whether an edge was valid when transmitted.
H
The mechanism is really simple: a value greater than zero is assigned exclusively to valid edges. Whenever an edge arrives at one endpoint, that endpoint marks the following outgoing packet with a VEC that is the previously received value incremented by one; so basically the VEC is increased every time an edge reaches one of the two endpoints. Instead, when an endpoint detects a problem, an impairment such as a reordered or lost edge, it directly sets the VEC back to 1.
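A minimal sketch of that update rule as described in the talk (an assumption on my part, not the draft's exact pseudocode; the saturation at 3 simply reflects that the VEC is a two-bit field):

```python
def next_vec(received_vec, edge_on_this_packet, impairment_detected):
    if not edge_on_this_packet:
        return 0                      # packets that do not carry an edge report VEC = 0
    if impairment_detected:
        return 1                      # reordered or lost edge: restart the counter at 1
    return min(received_vec + 1, 3)   # valid edge: increment, capped at the 2-bit maximum
```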
H
So the observer can use this information to avoid computing incorrect measurements. Okay, the delay bit is our proposal. The idea is to have only a single marked packet, which in our case is called the delay sample, carrying a second bit that bounces between client and server. So a passive observer placed on either direction can compute the difference in time between two delay samples, without considering any other information. In this scheme you can see that, within a spin period, the delay bit is set to zero on every packet except the delay sample.
H
Then, as you can see, the green packet is the one that carries the delay bit, and there is only one packet that carries the delay bit, which can be used by the observer to detect the edge for the measurement. We can go ahead. Okay, the mechanism is very simple: generation starts with the client, which sets the delay bit of the first packet to 1.
H
Both endpoints reflect an incoming delay sample onto the first available outgoing packet, but if the reflection is delayed for more than one millisecond, the sample is considered lost and the reflection is aborted. In addition, the client runs a control, the part of the algorithm that is in charge of verifying whether a spin-bit period ends without a delay sample.
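A minimal sketch of the reflection rule with the one-millisecond abort, under the assumptions above (hypothetical class and method names, not the paper's implementation):

```python
import time

REFLECTION_TIMEOUT = 0.001  # one millisecond, as described in the talk

class DelaySampleReflector:
    def __init__(self):
        self.pending_since = None  # when a delay sample was received and awaits reflection

    def on_delay_sample_received(self):
        self.pending_since = time.monotonic()

    def delay_bit_for_next_packet(self):
        """Delay-bit value to put on the next outgoing packet."""
        if self.pending_since is None:
            return 0
        waited = time.monotonic() - self.pending_since
        self.pending_since = None
        if waited > REFLECTION_TIMEOUT:
            return 0   # reflection came too late: the sample is considered lost, abort
        return 1       # reflect the delay sample on this packet
```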
H
In this case a recovery process starts: the client waits for an empty period, in which no delay sample is introduced, and then it regenerates the delay sample by marking the first packet of the following spin-bit period. This empty period is needed to signal to a possible observer that there was an issue and that a new measurement session is starting, so the observer can avoid computing a wrong estimate of the round-trip time. We can go ahead.
H
Okay, the key goal in this case is to stabilize the round-trip time measurement, which, as we have seen before, is influenced by packet loss and reordering. Packet loss is already handled by the delay-sample working principle, because a single sample is used to measure the round-trip time, and when an empty period is detected the delay sample is considered lost, so no measurement is computed. Packet reordering has no effect on the round-trip time, because we don't have the problem that, when a packet is lost, the period increases its width.
H
However, the observer must be able to correctly identify the periods and the related delay samples, because a detected empty period is the signal that the measurement has to be restarted, so spurious edges must be detected by the observer. To do so, we can use a waiting interval: a simple timer introduced into the observer, implemented by adding a time interval that is used to reject every spin transition observed too soon after a spin transition has already been detected.
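A minimal sketch of that observer-side filter (the 5 ms value is only an example; as noted below, the waiting interval is also the minimum measurable round-trip time):

```python
WAITING_INTERVAL = 0.005  # seconds; example value, also the smallest measurable RTT

def filter_spin_transitions(transition_times, waiting_interval=WAITING_INTERVAL):
    """Accept a transition only if it is at least `waiting_interval` after the last accepted one."""
    accepted = []
    for t in transition_times:
        if not accepted or t - accepted[-1] >= waiting_interval:
            accepted.append(t)
    return accepted
```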
H
This problem is handled by the mechanism that introduces an empty period, that is, by not reflecting the delay sample when the reflection is delayed by more than one millisecond. Okay, to test the entire algorithm we implemented a virtual network using Mininet, and the delay bit was implemented in a QUIC stack. This is the schema of the network: we have a switch on each side, the observer, and the clients on the sides. We can go ahead.
H
Okay, as you can see, the delay bit works in the same way as the spin bit and the VEC when only loss, and no reordering, is introduced into the network. This is because, in this case, the first packet of the spin period is the one that is marked with the delay bit, so the edge and the delay bit arrive on the same packet, and the measurements are equal. Okay, we can go ahead. Okay, when we introduce reordering...
H
Okay, the conclusions: the VEC produces more valid periods per edge under loss because, as I said before, it restarts more quickly, and the observer implementation is simpler than for the delay sample because it doesn't need a timer on the observer side. On the other hand, it requires 3 bits, so the entire amount of bits made available for experiments is used just for the round-trip time measurement, and its performance decreases in the presence of packet reordering.
H
Moreover, with the delay bit the observer implementation needs a timer, the waiting interval, to skip false periods, and this is also a trade-off of our solution, because the waiting-period duration is also the minimum measurable round-trip time. So if we set a waiting period of 5 milliseconds, we cannot measure round-trip times that are under this value.
I
Giovanni: yes, nice work. Just to make sure I understand it right, can you go back one slide? Yes. So the delay bit is the one that actually measures the user experience, is this right? Oh yes.
H
What we are trying to do is measure the round-trip time in a way that is more valid. If we consider that the spin bit would produce, in case of impairments, a lot of round-trip time measurements that are wrong, overestimated or sometimes underestimated, then with the delay bit we can try to validate which measurements can be used to have a real indication of the round-trip time.
I
All right. Let's say you send a packet and it doesn't get lost but actually gets delayed, because the network is congested or whatever, and when you get a response back you're going to have the RTT of that specific packet, and you know that would be different from the previous ones you had. So we have sort of an outlier there, so this would be roughly equivalent to the user experience, right? That's what the user would probably see.
H
So, as you can see, the measurement is done on a single packet; that single packet, if it is not delayed at the two endpoints, will bounce between client and server without any kind of problem, unless something happens somewhere during its journey through the network. So the round-trip time measured there will be the one experienced by the end user. Yeah.
I
I think, if you're an operator, and I'm just brainstorming here, you can have a lot of data points with this approach, which is great. But even if you only have the delay bit, let's say, you're going to see the delay towards a particular destination, or of the senders to it; that would have a certain distribution, like a median RTT, and all of a sudden you see there's a spike, so you know there's already a problem in there. So what I'm trying to say is maybe the delay bit is simpler.
H
The waiting period is needed just to filter out these spurious edges. If you set it to one millisecond, you'll only be able to filter out the spurious edges that arrive within one millisecond. So it's a trick to filter spurious edges and to avoid that the observer detects an empty period and restarts the measurement where that empty period is not really present, where it's a fake empty period, so there is no reason to restart the measurement.
M
The other author of the paper can add some words. There are two different timers. One timer is the one millisecond: that is the timer that bounds the error in the measurement. Because, let me say, this ping is made using real packets, not artificial packets, it is possible that our delay sample is delayed because there is no traffic in the other direction, so the measurement can be wrong not because the delay is big, but because there isn't traffic that can transport this packet.
M
So if our delay-bit packet has been waiting for more than a millisecond, we discard it, because otherwise we would introduce an error greater than one millisecond into the delay measurement. If we could accept an error in the measurement bigger than one millisecond, we could wait two milliseconds before discarding the sample; it is only to bound the error in the measurement. One millisecond is the maximum that we accept as an error.
N
I'm from a university in Germany, and I will present our paper on a performance perspective on QUIC versus TCP. To start this talk I have a very short recap of the two web stacks we compare. First of all, this is the TCP stack with TLS and HTTP/2, and this stack currently suffers from inefficiencies, for example head-of-line blocking, which cannot be solved within this stack, and deployment of changes is hard in this stack.
N
Take, for example, TCP Fast Open: it is still challenging to deploy, and the stack is ossified. To overcome these issues, QUIC joins this protocol stack, combining all three protocols into a single protocol on top of UDP, and by doing so QUIC can evolve without ossification over time and can also solve head-of-line blocking. Moreover, one key feature of QUIC for performance is that it has zero-RTT connection establishment.
N
So, regarding our measurements, we aren't the first to do performance measurements of TCP and QUIC, and there is already a lot of related work out there; this is just a small selection. Related work usually compares real websites or also synthetic websites, and performs measurements in realistic real-world networks and also in emulated networks. Most of the time related work uses page load time as the metric, but the overall outcome of related work is that currently QUIC is the faster protocol.
N
But when you step back for a moment, you can ask yourself why this is the case, because basically QUIC reimplements features that are already there. We found two major reasons which might explain this. First of all, QUIC comes already configured for web performance, while for TCP there are a lot of tuning knobs you can turn to increase its performance, and we think that related work up until now does not tune TCP.
N
Moreover, these two web stacks have two different connection establishments. QUIC usually requires one or two round trips fewer during connection establishment compared to TCP with TLS, so this is maybe where a second gap is introduced in related work. And when you start to reason about user experience, it has been shown that page load time is not well suited to capture user perception, so other metrics might be better there.
N
So the goal of this paper is to have a reproducible, user-centered performance comparison between these two web stacks, TCP and QUIC, and to do that on an eye-to-eye level. In the next step I'll explain how we achieve this eye-to-eye level. These are some properties these two stacks have right out of the box, and as you can see, TCP still lacks some configuration compared to QUIC, so the goal is to tune TCP to reduce this initial gap.
N
We do so by increasing the initial window and enabling pacing for TCP. We could also use the TCP Fast Open option to save a round trip on connection establishment, and also measure with TLS 1.3 with early data, but we don't do so in our measurements for the following reasons. First of all, at the time of our measurements, TLS 1.3 with early data was not implemented in the Chrome browser, which we use for the measurements; and second, TCP Fast Open is not widely deployed; for example, at our university the firewall still blocks TCP Fast Open.
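For concreteness, this is roughly the kind of Linux-side TCP tuning being referred to, sketched as a small script; the interface name, gateway and the initcwnd value of 32 are placeholder assumptions, not the paper's exact configuration:

```python
import subprocess

IFACE = "eth0"         # placeholder interface
GATEWAY = "192.0.2.1"  # placeholder gateway
INITCWND = 32          # example value; QUIC stacks commonly start with a larger window

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Larger initial congestion and receive window on the default route.
run(["ip", "route", "change", "default", "via", GATEWAY, "dev", IFACE,
     "initcwnd", str(INITCWND), "initrwnd", str(INITCWND)])
# Enable pacing by installing the fq qdisc on the outgoing interface.
run(["tc", "qdisc", "replace", "dev", IFACE, "root", "fq"])
# Optionally switch the congestion control (BBR is one of the measured variants).
run(["sysctl", "-w", "net.ipv4.tcp_congestion_control=bbr"])
```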
N
On the other hand, we don't use QUIC with 0-RTT either, because we see issues with 0-RTT in deployment: there are issues, for example, with replays, and since HTTP is not as stateless as it is promised to be, we currently don't see 0-RTT for QUIC coming soon. So there is still a one round-trip difference, but we will discuss this later in the evaluation. Finally, we use only HTTP/2 in our testbed for TCP, and for the emulation I derive these five different protocol variants.
N
We have stock TCP, which serves as the untuned reference; then we have a tuned TCP with the tuning from above; and then, since we expect the congestion control to have a huge impact, we also have one scenario with TCP and BBR as congestion control. On the QUIC side, we have a recent Google QUIC version and also one with BBR. So, regarding our testbed, how did we implement this? We used the Mahimahi framework and modified it to support Google QUIC and also the nginx server.
N
The Mahimahi framework is already capable of replaying websites realistically, and it also comes with a network layer to emulate different network settings. At the bottom we have the Chrome browser, which performs the page loads, and this is measured by Browsertime. Before we started our measurements we did a short benchmark of the performance of the two servers that we have: we downloaded files of different sizes without any bandwidth restrictions.
N
For file sizes below 1 megabyte, we can see that both implementations, Google's QUIC server and nginx, perform similarly, and only for files above 1 megabyte does a gap emerge; there we can see that Google's QUIC implementation is not as fast compared to nginx. But for our measurements this does not matter, because our website resources are usually smaller; our largest resource is 4 megabytes, so we won't see these gaps in our evaluation and measurements.
N
Regarding network configurations, we have five network configurations, one more than we presented in the paper; this Wi-Fi network is not in the paper. We have the speedy DSL, an LTE-like connection, the Wi-Fi, and then two special networks, the direct air-to-ground communication and the mobile satellite service, and these two were measured by other researchers doing actual airplane flights and measuring the internet reception up there. The direct air-to-ground communication uses a cellular LTE-like network, but with antennas pointing toward the airplane, and the mobile satellite service uses satellites.
N
So what do we measure on? We have a set of 38 websites, which is a subset of the Alexa and Moz lists, and, as you can see, these websites are quite diverse regarding size and also the number of IP addresses. We have four different visual metrics which we evaluate on, and these are all measured above the fold; only page load time is not above-the-fold. First, we have First Visual Change, and the Speed Index as a metric matching the user perception most closely.
N
We have the point in time when the website reaches a visual completeness of 85%, and also Last Visual Change. To calculate the difference between two protocols, or protocol combinations, we define the performance gain; most of the time this is the difference between some protocol and untuned stock TCP. We repeated our measurements 31 times and take the average, denoted here by the overline, and the performance gain is basically the normalized difference between two protocols.
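Written out (a reconstruction from the definition just given, with mean(t_A) denoting the metric averaged over the 31 runs): performance gain = (mean(t_A) - mean(t_B)) / mean(t_B), so negative values mean the first protocol is faster than the baseline, which is typically untuned TCP.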
N
So, as an example, if the page load time for QUIC is 0.5 seconds and the page load time for TCP is 0.75 seconds, then the performance gain would be -0.33. So now I walk through our networks and our results. First of all, in this plot you can see all five metrics side by side, and on the x-axis we have the performance gain compared to stock untuned TCP; note that negative performance gains show a speed increase.
N
On the y-axis we have a CDF over all 38 websites, and what you can see here is that tuning indeed already increases the performance, so we can show that tuning should be done when comparing the stacks. But still, QUIC outperforms even our tuned variant. What you can also see here is that the congestion control does not have a huge impact, because the BBR lines are mostly overlapping with their CUBIC counterparts. Still, we derive from these steep curves that the website structure in general has a low impact here on the overall achievable performance.
N
So next is the LTE network. This one is a little bit slower, but not that much slower in bandwidth, and here we see results similar to DSL; only the gaps in performance gain are increased, so there is not much to see here. But when we look at the 3G network, where the bandwidth is decreased even more and we have a higher round-trip time and a small amount of loss, we can see that the tuning impact for TCP is decreased, especially when you look at Last Visual Change and page load time.
N
But still QUIC is better here, and we can see that the variability rises; the curves aren't that steep anymore, and maybe there are some other aspects too. So when we decrease the bandwidth even more (this is the network with the least bandwidth), we can see that, first of all, tuning becomes a coin toss here and does not really have an impact anymore, and we think that this is the case because the initial window increase might lead to early losses in this network. Still, QUIC performs very well here.
N
Here we have an increased bandwidth again, and we can see that tuning is again profitable, but here the round-trip time and the loss have increased even more, so we have six percent loss on our packets, and we can see that here, for the first time, BBR outperforms CUBIC. When you track the green line for TCP BBR towards the end, you can see that at page load time QUIC BBR and TCP BBR are quite similar, so the congestion control here matters more than, for example, the protocol choice.
N
Still, QUIC with CUBIC is very fast, and we think that this is the case again because of no head-of-line blocking, the larger ACK ranges, and so on. Okay, to conclude these findings: here we have on the y-axis the mean performance gain over our websites and on the x-axis all protocol combinations, and the colors denote the different networks. So for the first one, tuned TCP versus untuned TCP, we can see that, except for one network, tuning TCP is profitable.
N
When we compare untuned TCP against QUIC, we see that this gives the largest performance increase, and when you take the tuning of TCP into account against QUIC, then the performance gain of course is smaller. And when you look at tuned TCP with BBR against QUIC with BBR, then we can see that there is usually no performance gain, except for our network with the high loss, where the congestion control matters most; for the other networks we see that QUIC is still better, but sometimes, again, the congestion control is more important, as for the MSS network.
N
For this talk we picked the influence of the number of resources, because we think this one might matter most given head-of-line blocking, which is related to the number of resources. On this plot you can see on the y-axis all of the websites that we have in our set; at the bottom are the websites with the least number of resources, and at the top the ones with the most.
N
The number of resources does not have such a huge impact, because already at First Visual Change there aren't that many resources loaded. For Speed Index and visual completeness we see that for websites with few resources there is no real impact, but there are some websites with a high number of resources where visual completeness and Speed Index are impacted most, and there are also some outliers.
N
One outlier is quite exceptional: the nytimes.com website. Here we have a large shift for visual completeness towards TCP, where TCP is better, and when we took a closer look at this website we found the reason in how the website loads. The normal nytimes.com website has an ad banner on top, and we were able to reproduce the time when the banner loads, but not the actual content of the banner, because we couldn't reproduce the ad network. So let's have a short look at this video.
N
What you can see here is that, basically, both websites load similarly, but TCP manages to load the ad banner first, and this triggers the visual completeness event, while QUIC takes some time to load the ad banner. So maybe visual completeness isn't the best metric here, because it depends, for example, on the banner; all the other elements under QUIC might already be present, and it is just the banner popping in late. So what is still left for the evaluation is that we have a difference of one round trip in our measurements.
N
So maybe this would explain the remaining gap between TCP and QUIC, and our idea was: maybe we can subtract one round-trip time from the page load time, since we know how we configured our networks and therefore know the round-trip time. But as it turns out this is not very straightforward, because a lot of resources on these websites have to be downloaded over a lot of different connections, and so we cannot simply subtract one round-trip time.
N
Luckily, in our set of domains there are two websites, Wikipedia and one blog, that rely on resources from only one single IP address. So here we can subtract one round-trip time for the connection establishment, and then we have removed this gap. When we do so, we get these results: you can see, for the different networks and the two websites, the difference in page load time, and again negative values are a performance increase for QUIC.
N
As you can see, for the fast DSL, LTE and 3G networks, the difference is usually very small, within milliseconds, and in terms of round-trip times the difference is below one round-trip time, so QUIC and TCP are similarly fast. There is one exceptional case here for the Wikipedia website, where the difference is still several round-trip times, around three, and you could argue that maybe this is the case because of head-of-line blocking under loss. For these networks there are also two cases where TCP is now slightly faster.
N
For the MSS network we had to look a little bit more closely, because here the congestion control again matters most. On the top table you see QUIC, and when we correct for the round-trip time difference here, QUIC is still faster; but on the bottom table, for BBR, we can see that there is now also a case where TCP is faster, for Wikipedia, and still the page load time difference is below 110 milliseconds.
N
So, to summarize, we found that tuning TCP is not negligible when we compare two different web stacks. QUIC still outperforms TCP, but the gap gets narrower, and we can argue that the remaining gap can be explained by the difference in connection establishment between TCP and QUIC. Moreover, there are some cases where the congestion control matters much more than the protocol choice. Okay, some discussion: we think that QUIC was not primarily built to improve performance, but to perform well, and as we saw, it performs quite well.
N
We think that QUIC's main purpose is to enable evolvability of the web stacks that are currently there, especially at the transport layer. One open question, which also arises from the video, is: we can measure these differences, but how do users perceive them? Maybe these differences are too small to even be perceivable for end users. So, other than that, thank you for your attention, and I'm happy to answer your questions.
E
Okay, so in reality LTE, even with a single carrier, can change its bandwidth from millisecond to millisecond, and with multiple carriers aggregated it can change in much larger chunks. So possibly a topic for further research would be: how does the congestion control algorithm influence that, and what's the performance difference there versus what you simulated here?
N
These are some major improvements that we see for QUIC, and also, for example, the larger ACK ranges that are there in QUIC, and with BBR and so on one can better cope with stochastic loss; QUIC can also see whether a packet is a retransmission or not, which can also help. But we weren't able to confirm these findings, because we have no tooling to look inside the encrypted packets.
N
So you mean that we should just benchmark one single file transfer? Right, no, we didn't do that, because we concentrated here on the web performance, on what you will get in your web browser. We did have our raw server performance measurement for these two implementations at the beginning, but that one was without any network effects.
K
Milliseconds, and similar. Like the previous speaker said regarding the LTE: I believe that, if you want to continue some of these evaluations, it would be interesting to report, for example, in the context of the access network, varying the number of retransmissions over the access with different error-condition models.
L
Hi, nice work. I have one question; it's sort of to you, but also to the gQUIC people. As far as I know, at least at some point, the CUBIC implementation in gQUIC was actually tuned to be as aggressive as two New Reno flows for TCP, because of some YouTube thing, and so I wonder if this explains some of the benefits you see. Have you looked into this? I mean, you could also sort of tune the kernel TCP to be as aggressive as two Reno flows.