Description
Day 3 of the IAB's Measuring Network Quality for End-Users Workshop, 2021-09-16.
Day 1: https://youtu.be/pFZEa3NN39A
Day 2: https://youtu.be/8vjft84gqFA
Workshop page: https://www.iab.org/activities/workshops/network-quality/
A
Today's sessions are themed as synthesis. After starting the conversation with a rather bleak perspective, that we still haven't figured out how to measure the quality of the internet and that maybe the previous attempts were futile, and after exploring different angles of where the quality may lie and how to measure it, today we are going to look into concrete suggestions by several participants on how to start measuring network quality today in a way that would be meaningful for the end users.
A
This may not be the final metric that measures it once and for all, but we hope that it will be useful in the following years to make the experience of the end users better.
A
Hopefully, after hearing all the presentations yesterday, on the first day, everyone now has rich context so that we can continue with fruitful and productive discussions.
A
I don't have much more to say; well, actually I do, logistically. This day will be slightly different from the previous two days: we have two sessions, not four, and the last part, the last two hours of the workshop, comes after the break.
A
All right, well, thank you. We have three sessions, and we are going to finish with the recap. So before I make any more blunders, let me give the mic to Randall, who is going to be driving today's first two sessions. Thank you.
B
We had one more comment at the end of the day. During that discussion, we will be trying to figure out next steps and conclusions that we think we could make as a whole. So as you're thinking through the day, do start thinking about where we go from here, so that what we get from the workshop might have a forward direction of next steps.
A
And the last reminder: let's try to utilize Slack as we did for the in-depth discussions and leave the chat as the control plane, so that our moderators will have an easier time recognizing who's in the queue and whom to call up next.
C
Okay, here's our agenda. Oh, first: I'm Randall Meyer, I'll be your chair. This is my first time, so please bear with me. Here's our agenda, and we will start with Sylvester.
D
Yeah, you can pronounce it like that. Okay, yes. Hello everyone, I'm presenting our work on incentive-based traffic management and quality of service measurements, the work of several people. Please go to the next slide.
D
So what is traffic management? Traffic management consists of mechanisms controlling congestion, and they can be, in a way, sorted by the duration of the congestion they address, from short to long. For example, if you want very small delay for your packets, that's a short congestion, so you can solve that with queuing and scheduling.
D
If you want to decrease the load on your network, for example in the busy hour, you might need to change your usage policy and service level agreements. And there are all these algorithms in between, like resource sharing, congestion control, maybe also load balancing and admission control; you need to dimension your network somehow. If you want to solve a problem, it's wise to choose the right mechanisms depending on the congestion duration. So again, it's unlikely that you can fix everything with any single mechanism.
D
It's unlikely that you can, for example, solve your packet delay by changing the usage policy; there are solutions for that at the right level. And for each mechanism there are alternative algorithms: for example, if you take congestion control, there are different TCP congestion control algorithms. We believe that traffic management can be grouped into strategies, and we define a strategy as a harmonized set of algorithms, with one, zero, or more algorithms for each such mechanism.
D
For example, best-effort internet access is such a strategy, even though there is minimal harmonization among the algorithms. In general, updating a single algorithm has limited impact; if you want to achieve something, consider also updating the strategy. An update might even break the harmonization of a strategy: if you introduce a new TCP CCA and a new AQM, the two might have different assumptions and might break the strategy itself.
D
So, as an example, we take the incentive-based core-stateless quality of service strategy. It's called Per Packet Value; you can find much more information about it on its home page. Again, this is an example of such a strategy, where resource sharing policies are expressed by something called throughput-value functions. You can see examples of such throughput-value functions on the right, and you can encode that function by marking a packet value in each and every packet.
D
Different flows may have the same policy, which means the same function but a separate marker, or they can have different policies. In this strategy, scheduling and AQM work only by maximizing transmitted value; there is no need for flow identification or policy knowledge at the bottleneck. And in this strategy we have something we call the congestion threshold value, which is a rich congestion measure. You can see on the right-hand side that the intersection of the throughput-value functions and the congestion threshold value defines the desired resource sharing for each and every flow.
D
This congestion threshold value is emergent from the scheduling, so you don't have to pre-calculate it. You can just maximize your transmitted packet value, and thereby you will have a congestion threshold value: packets with higher value are transmitted, lower-valued ones are dropped. And this congestion threshold value may help in harmonizing the algorithms of traffic management; for example, it can be used for load balancing.
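The marking-and-dropping behavior Sylvester describes can be sketched in a few lines of Python (a toy model, not the actual Per Packet Value implementation; the class name, capacity, and packet values are invented for illustration):

```python
import heapq

class PPVQueue:
    """Toy bottleneck queue: each packet carries a value, and the queue
    maximizes transmitted value by evicting the lowest-valued packet on
    overflow. The congestion threshold value (CTV) emerges as the highest
    value the queue has had to drop; it is never pre-calculated."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []   # min-heap keyed on packet value
        self.ctv = 0.0   # emergent congestion threshold value

    def enqueue(self, value, payload):
        heapq.heappush(self.heap, (value, payload))
        if len(self.heap) > self.capacity:
            dropped_value, _ = heapq.heappop(self.heap)  # drop lowest value
            self.ctv = max(self.ctv, dropped_value)

    def drain(self):
        """Transmit everything still queued, highest value first."""
        out = sorted(self.heap, reverse=True)
        self.heap = []
        return out
```

Under load, only packets marked above the emergent threshold get through, which is the "maximize transmitted value" rule; note that the queue needs no per-flow state or policy knowledge.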
D
Maybe the measurement itself should be part of the strategy, but we highlighted usage policies as also very important among the mechanisms to be harmonized with the measurements. We believe that using a rich congestion measure both for the measurements and for harmonizing the algorithms of a traffic management strategy looks like a very promising way forward; take the example on the previous slide. At the same time, the simplicity of the internet and of the best-effort internet access strategy was a very key success factor for the internet.
D
So most developed quality of service solutions are not used, especially not end to end, and any kind of session-based critical service is extremely unlikely to happen over the internet. So we somehow have to keep it simple, and we believe that incentives are a possible way forward. Incentives are proposed as pieces of information usable to make traffic management decisions. The approach is intended to be lightweight and not session based, but it should also enable more detailed SLAs over the internet; there are examples of that in our paper.
G
Am I audible? Yes, okay, awesome. Greetings and welcome to my presentation. My name is Satadal Sengupta, I'm a third-year PhD student at Princeton, and today I'm going to make the case for passive, real-time, and fine-grained RTT monitoring inside the network. This is work done with Hyojoon Kim and my advisor Jennifer Rexford. Next slide, please.
G
There has been a lot of discussion in this workshop already about how RTT is an extremely important QoE metric. RTT relates directly to throughput, has an influence on video quality of experience and page load time, and is especially crucial for latency-sensitive applications such as online gaming, algorithmic trading, and so on. But it's not enough just to measure RTT; we need to measure RTT while the application is running and a real user is being affected by it. Such in-network RTT monitoring is now possible.
G
Thanks to the advent of high-speed programmable switches, if an in-network device can measure RTT, it can enable network adaptation strategies such as routing and scheduling in response to significant rises in latency. For example, the programmable switch could select a CDN replica with lower latency, it could reroute traffic in a multi-homed network setting to improve video QoE, and it could even enable latency-aware handoffs among Wi-Fi access points. To give a concrete example, we see to our right a scenario where IP anycast-based CDN replica selection is happening.
G
So in this work we propose P4RTT, which is a tool for RTT monitoring and network adaptation written in the P4 language, a language designed for the programmable data plane. This tool would be running in an in-network programmable switch on the path between the client and the server; we call the location of the switch our vantage point. The vantage point sees a stream of packets from both directions. It has an RTT measurement engine that computes RTT samples for each flow and sends them across to an analytics and adaptation module that takes actions based on the RTT.
G
It can also send some statistics derived from these RTT samples, such as the minimum RTT in a window, to the control plane for further processing. We perform this RTT computation by matching data packets with ACK packets based on TCP sequence numbers. Now, this can become quite difficult to scale when the volume of traffic is large, which is especially true because certain features of TCP traffic, like retransmission and reordering, force us to maintain flow state for a large number of flows.
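The data/ACK matching step described above can be sketched in Python (a simplification of what the talk describes, not the actual P4 program; the class name, the eviction cap, and the retransmission handling are assumptions made for illustration):

```python
import time
from collections import OrderedDict

def expected_ack(seq, payload_len):
    """A data segment covering [seq, seq+len) is acknowledged by ack == seq+len."""
    return seq + payload_len

class RttEngine:
    """Toy data/ACK matcher: record the send time of each data packet,
    keyed by the ACK number that would cover it, then emit an RTT sample
    when that ACK arrives. The table is capped so space stays bounded."""

    def __init__(self, max_outstanding=1024):
        self.pending = OrderedDict()          # expected ack -> send timestamp
        self.max_outstanding = max_outstanding

    def on_data(self, seq, payload_len, now=None):
        now = time.monotonic() if now is None else now
        key = expected_ack(seq, payload_len)
        if key in self.pending:               # retransmission: the sample
            del self.pending[key]             # would be ambiguous, discard
            return
        if len(self.pending) >= self.max_outstanding:
            self.pending.popitem(last=False)  # evict oldest entry
        self.pending[key] = now

    def on_ack(self, ack, now=None):
        now = time.monotonic() if now is None else now
        sent = self.pending.pop(ack, None)
        return None if sent is None else now - sent
```

Discarding samples that involve a retransmitted segment avoids ambiguous matches; it is exactly this kind of bookkeeping that forces real tools to track extra per-flow state.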
G
While the high-speed data plane is extremely resource-constrained, we're able to pull this off by designing a computation method that consumes constant space, as opposed to a software tool like, say, tcptrace. Furthermore, our tool doesn't depend on the TCP timestamp option, which is sometimes unreliably implemented. Next slide, please.
G
Finally, I'd like to present some preliminary results obtained from our campus data set. The data set is anonymized, and this study has been approved by the university's IRB. We had thousands of packets per second going to and from host machines within Princeton to the external internet. We ran tcpdump at our vantage point, close to Princeton's gateway router. This gave us the ability to divide the total client-to-server RTT into two components.
G
As you can see to your left, these are the internal leg and the external leg. The internal leg accounts for the fraction of the RTT that is due to the campus infrastructure, while the external leg accounts for RTT beyond Princeton's control.
G
What you see on the plot to the top right is a comparison between wired and wireless RTTs in terms of how much the campus infrastructure contributes to the total RTT. We observe that for the 90th-percentile RTT to YouTube, the wireless part of the network adds significantly more latency compared to its wired counterpart.
G
In fact, it is possible to expand our setup to include more vantage points along the path, so that we can divide the RTT into even more components to pinpoint exactly which part of the on-campus infrastructure causes this increase in RTT. Now, to the bottom, we evaluate the accuracy of our tool under load. We compare accuracy against the software tool tcptrace, but for a fair comparison we limit tcptrace to order-one space. We implemented a prototype in the P4 language and simulated it faithfully in Python for ease of conducting experiments.
G
Early results indicate that P4RTT is able to collect 98 percent of the samples collected by tcptrace, despite severe resource constraints. The distribution of RTTs is also similar, as you can see in the plot to your bottom right, implying there is no noticeable bias against smaller or higher RTTs.
G
We aim to expand the study by deploying our prototype to our campus testbed, by testing some of the network adaptation strategies that I talked about, and by exploring the possibility of including QUIC traffic. I invite you all to take a look at our paper for more details, including a link to our campus testbed, called P4Campus. Thank you.
C
Thank you. Are there any clarifying questions?
C
Okay, let's move on to the next speaker, Al Morton.
H
Thank you, thanks for inviting me, and thanks for listening today. So I stand before you as the person who literally terminated my wife's teaching session on Zoom by trying to finish the potatoes in the microwave oven.
H
We all have things to learn. So I wanted to start out talking about the user's dream pipe, and it's been mentioned a few times here: always available, with sufficient capacity, and so forth. But when we want to get that anywhere on earth, and without the qualifiers I listed above, it's a pipe dream, and we all need to work together.
H
Obviously, we need to work together to accomplish that. One thing I did that I haven't seen many do is to consult a survey of users, to try to find out what the users want. The one that I pulled out here surveyed 2,500 users in the UK.
H
They want what they don't have. Everyone wants reliability; they want more capacity in rural areas; they want their ISP to bundle the security and privacy features they don't have. No mention of latency, though, probably because the guys in sweatpants in the basement aren't filling out surveys. Next slide, please.
H
We have the seven fundamental metrics that we don't want to forget about, and there's another hierarchy there, of singletons, samples, and statistics, and we can use them to create derived metrics. The derived metrics that we've done beyond delay variation, packet delay variation, and inter-packet delay variation, which compare two delays, are reliability and Matt Mathis's model-based metrics. Some others: maximum IP-layer capacity, which draws together loss, round-trip time, packet delay variation, and others, to measure up to the megabits that ISPs advertise.
H
So we need structure when we standardize the metrics, and this is for everyone thinking ahead: we need to have the definitions, the workload definitions, and the stream definitions nailed down, but we also want some flexible parameters for those, to be nailed down when we're actually doing the measurements. When reporting the results, we want a frame of reference that makes the interpretation easy, like the gauge on a tachometer; the RPM metric fits that pretty well, because every engine has a different red line and figure of merit.
H
So I want to wrap up with a couple of proposals. The first one is another derived metric, sort of new and shiny like the pipes above: it's the number of users supported on an ISP access, effectively. On gigabit access we can easily see that the number of bits per second is greater than a single user consumes today, and advertisements are picking up on that.
H
There are also benefits for latency and delay variation, but we need to define the standard user streams, or the number of apps or app streams that can be supported, and maybe the new metrics we're defining and looking at in this workshop can help with that. And the standard user streams, the workload, will change over time, so it has to be registered.
H
We can't argue about it for a year. And finally, the old derived metrics, which users again pick up on: connectivity, availability, reliability. COVID-19 made it really clear to a lot of people that reliability is important, when the house became the hub for everything. We have standards for point-to-point availability, reliability, and so forth, almost always derived from loss.
H
Measurement systems usually require connectivity to begin their work; they need to set up, and when the measurement setup fails, that could be the most important info. So you want to derive that from your testing and make it available as well. So there's the old man hanging on a wire. Thank you for your attention.
C
Good. Are there any clarifying questions?
I
A quick one: do your metrics or proposals deal at all with the asymmetric nature of so many service lines? You know, gigabit down, but only 30 up.
H
Sure. When the applications only require a certain workload up and a different workload down, then that would be what we would be trying to emulate in an active measurement. Also, purely loss-based connectivity measurements reflect that as well.
A
I have a clarifying question, Al, about when you said methods with ground truth.
H
It's a combination. The first one, Omer, is where you're grounded in a physical phenomenon, like the physical bit rate of a link, and you're trying to assure that you're actually getting that when you're asking for it. There is also the ground truth that comes from subjective testing.
H
When folks are trying to say this is a QoE metric, now we have a direct comparison coming from the mean opinion scores of studies that have been conducted, and so there can be a ground truth for these single-figure-of-merit kinds of metrics. It's much safer to have that kind of ground truth rather than to decide all the coefficients separately, just based on consensus.
C
Thank you, Al. Moving on; we're at discussion.
J
Oh good, you're talking about all kinds of great stuff on this other slide; I arrived late. Maybe it was the second presenter, the one that did the P4 analysis, so anyway, sorry to occupy more than 60 seconds, if you can find that one. For me, the core part when I looked at that is that it was of the bottom ninety percent of the RTTs for a BBR- and pacing-based system, YouTube. Which is great: we've just shown how well YouTube works on the campus network.
J
Obviously. But take a look at the next one. Okay, sorry, yeah, there we go. I love CDF plots. So my first question was, again criticizing constructively I hope, about the use of YouTube for this analysis.
J
However, I make this comparison to the discovery of the cosmic background radiation. Throwing out the top ten percent of the worst-case samples: I've always found that the world is more interesting if you look at the anomalies. If you could repeat this exact same analysis but throw out the bottom 90 percent, you might find some interesting anomalies and statistics that will help make everyone's internet experience better, if we can find out what those anomalies really meant. That's all.
C
Thank you, Dave. Next is Alex.
K
Yes, I have a question for the first presenter, for Sylvester, and it concerns basically the incentive-based aspect of your talk. One thing that I was wondering is: is this an incentive, or is it really a priority? Or maybe another way to frame it is: who gets to assign the value of the incentive to packets? And with the threshold there, isn't this pretty much like a priority? It would perhaps be good to clarify this, and depending on the answer I'll have some follow-up questions.
C
Sylvester, do you want to respond?
D
Yeah, if I can, without using the queue. So yes, the end user may be able to assign it, but then it has to be policed by whoever owns the resource. We demonstrated that you can do a way of packet marking that takes into account the priorities of the end user, so you can encode hierarchical resource sharing into this in some cases. But as a short answer, it could be the ISP.
K
Yeah, because this sounds to me really more like a priority, and I'm wondering actually why you call it incentive-based; I don't think this is an incentive basis. Well, one thought in my mind is that it's very easy for a user of an application to always ask: I want to get the highest priority, I want to get the best service. But an incentive is basically: what are you willing to give me in exchange, right?
K
So basically this brings up the question of monetization, for instance. But I think that, as you say, this would come down to a policing plane; it would not come directly.
N
Hi, I have a question for the second presenter; it's just a very small question about the slide set. It doesn't rely on timestamps, so how do you measure the round-trip time when there is a retransmission? Because for TCP you don't know what's a data packet and what's a retransmission, right? So without that, how do you do that? That's very intriguing, if you do that, and I would just like to learn more.
G
Yeah, sure. So what we do here is we keep some flow state which tells us whenever there is a retransmission, essentially, so we know the left and the right bounds of what is the valid measurement range for the traffic that is going on right now. I'm happy to talk to you offline more about this; there are more details in the paper as well.
O
Corey, you're next. I have a question for Al. The pipes over which my data comes have multiple sections.
H
Sure, Corey. I think there you're talking about needing segmented measurements and also sort of aggregatable or composable results. By composable I mean sort of a spatial composition, where you can add up the measurements and get a view of the end-to-end, and that should actually correlate with end-to-end measurements pretty well.
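As a small illustration of the spatial composition Al mentions (the segment names and delay numbers below are invented for illustration), per-segment delays measured on sub-paths can simply be summed to estimate the end-to-end figure:

```python
# One-way delay measured independently on each sub-path, in seconds.
segments = {
    "home-wifi":   0.008,  # in-home sub-path
    "isp-access":  0.012,  # access network sub-path
    "core-to-cdn": 0.015,  # remainder of the path
}

# Spatial composition: the end-to-end estimate is the sum of the segment
# measurements, which should correlate well with a direct end-to-end probe.
end_to_end_estimate = sum(segments.values())
```

The same composition applies to loss (multiplicatively, via delivery ratios) and, more loosely, to delay variation.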
H
So I think probably the solution is the measurement of sub-paths, which seems to be more and more possible with the deployment of flexible infrastructure, you know, VNFs that make measurements and so forth. Am I getting to your question there?
H
Good. I think the dream pipe, you know, when it's one gigabit, is actually revealing the bandwidth limitations elsewhere. I've personally experienced that in my home: when I was signing up for gigabit access, they said, well, you won't get this over Wi-Fi.
Q
Hi, thank you. It's actually a follow-on from Al's discussion again, and it's that intriguing idea about a number-of-users metric: clearly very difficult, but I'm going to make it even more difficult, because it triggered something in my mind that I've always been slightly angry about. Well, I'm slightly angry always anyway, but.
Q
We get a carved-out bit of a shared medium sold to each user, and where there's a shared medium, a lot of it is usually idle. Say, on a cable network, a lot of the other channels are actually unused, and one of the users could be getting more usage or more capacity if only they weren't sold this pipe. And that's purely a sales thing; it's not a technical one.
Q
Well, it's become a technical limitation, but it started off as a sales thing. And so when you talk about a number of users: if the user in any one household could be using other people's capacity, that actually makes the number-of-users metric sort of even more difficult to be meaningful, because someone who allows you maximum sharing, but not guaranteed, how do they answer that question?
H
Yeah, Randall, sure, go ahead. Sorry, go for it, thanks. I think it could also be phrased as a number of apps available for use, or supportable, and again for that we need to know the workload streams.
H
The number of those that can be supported on a particular service would be valuable to know; the mixture of those that a particular household is trying to activate would be useful. I didn't mean that to segment particular users, but.
C
Thank you. Back to the queue, so Sylvester.
D
How you distribute your values among your flows can be like an incentive. In that method, packets of the same flow or the same subscriber are not marked the same, but rather according to a distribution, and the throughput-value function defines that distribution. An incentive can be, for example: if you haven't used your internet access for a while, you are allowed to transmit at a higher speed for a while. I had a presentation at ICCRG about multi-timescale fairness and things like that, and this kind of multi-timescale sharing is an example.
G
So, a quick comment to Dave, because he said certain things about the presentation. Thank you so much for the constructive criticism. Of course, the idea here was to get a representative sample of all the RTTs, so that we are able to figure out what is anomalous in that distribution; we wanted that to scale first and then look at the anomalous RTTs. But thank you so much for the other comments as well.
R
Okay, thanks everybody. Is my mic working? Yeah, good. I really appreciate all the good thought that's gone into this workshop, but I see an elephant in the room: there are a lot of good metrics that we could gather, but the equipment that's currently in our homes, and out of our control, injects random latency into all these measurements. It can be anywhere between 10 and even 2,000 milliseconds.
C
Thank you. Next is Roberto.
S
So the survey had, I guess, the unsurprising result that everybody wants everything.
S
I think one of the things that I would wonder is whether changing the survey design a little bit, to ask what trade-offs people are willing to make, wouldn't give us a bit more information. For instance, if I were willing to give up half a gigabit of my bandwidth in order to get substantially better latency, with an explanation of what that actually means, is that a trade-off that people would take?
S
I think that is a very powerful way to ask the question, and that's something that I would be very interested in were I an ISP, for instance, because that's something I can monetize, and that drives incentives in the world that we're in today.
C
Okay, next is Stuart.
T
Hey, a couple of quick comments. First, just to echo what Dave Täht said: excellent points, to avoid things like YouTube that are inherently source-limited, because they don't really stress the network, so they don't find what's wrong with it.
T
But my question was for Al Morton. We've heard this example many times, about how the potato in the microwave broke the internet, and I wondered if Al did any more analysis. I'm asking because I hear this all the time, a lot of the time from people at Apple who are not networking experts, some people in leadership positions, and they say: when the microwave turns on, Wi-Fi loses packets, and that's why you should not slow down for packet loss.
T
I ask these people: did you take a packet trace? The answer is always no. And when I've taken packet traces, when I put the microwave on, it raises the noise floor, Wi-Fi lowers its channel rate, and that causes a queue to build up, and if the sender doesn't adapt to that changing rate, you get massive queue overflow. So all the packet loss is from queue overflow; it's not mysterious microwaves making the packets disappear. Did you do any analysis, Al, of what caused the loss?
H
Well, the short answer, Stuart, is yes and no. I can tell you this: the microwave was in direct line of sight with the Wi-Fi beacon that my wife's laptop was using; it was within 10 feet of the microwave.
H
Like I said, I really did put the potatoes in there to finish them up before dinner, because my objective was to have dinner on the table as soon as her class was over, and so that kind of reduced my opportunities to do packet traces.
H
But I think that's a good idea. I mean, I can certainly repeat the experiment, and I'd be happy to get back to you on it. Thanks.
D
I would like to comment on this CPE and user equipment thing. I agree that this is a point to solve, but in many cases, like most of the gamers, they don't use Wi-Fi, and there's a reason for that. If you solve that problem, they might start using Wi-Fi, but still there are issues in the network; there can be bottlenecks in the access network.
D
There is the international part of the line. There's a reason most applications use CDNs: if you want to have a decent webpage, you have to use a CDN. You could democratize that, because while there are advantages of using CDNs which cannot be avoided, part of the reason is that the international line, for example, is not that good. So I disagree with looking only at the CPE and the UE; I agree with also looking there.
C
Thank you. Next is Jonah.
U
Well, thank you. I will have two comments. The first one is on Al's position paper: I really enjoyed reading the draft, Al, thank you for writing it up.
U
I enjoyed it because it consolidates a number of things, and I enjoyed reading about what people asked for. But what becomes evident is that people are asking for things based on what is available, on what they use right now, and you note this in the draft as well. We want to push that: we want to make more applications possible that they don't know about yet, and so we are trying to push it towards a direction that they may not yet understand. So that's something to keep in mind.
U
The second comment I'll make is a more general one; actually, if I'm out of time, I can queue myself back in for my second comment, right?
V
I want to note that the microwave phenomenon is exactly the same as "kids, stop doing what you're doing, because my teleconference isn't working." What Stuart said is exactly correct, and it doesn't even have to be you; it can be somebody who happens to be sharing that same Wi-Fi channel who suddenly is using it.
W
Yeah, I just had a couple of comments on what Roberto and others have said. Obviously, there's always a trade-off of latency versus bandwidth when you do it outside of hardware acceleration, and that exists in the retail market today: if you want a gaming router that doesn't make that trade-off, you'll pay a thousand dollars; that's there, you can go buy it. In the CSP space it's a different story: you can't sell a thousand-dollar router to a CSP for the home, there's no business case for that. So there is a trade-off.
W
Maybe it's 10 to 20 percent on low-end SoCs, maybe it's 40 to 50 percent of your bandwidth, but it's there, and consumers are willing to accept that; I see that today around the world with many carriers and many end users. And you're right, gamers do know Ethernet is best, but a lot of them are still using Wi-Fi. In fact, on a lot of consoles, Nintendo and whatnot, there is only Wi-Fi; if you play competitively, you still have to use Wi-Fi, you have no choice around that. And in terms of the potato: I love the potatoes.
W
That's gonna be the quote of this workshop. There are CPEs and chipsets that can detect very accurately non-802.11 spectrum users: microwaves, baby monitors. And again, the issue is cost. We've tried selling that, we've tried deploying that, it does exist, but it's really hard to make a business case for a CSP to invest in that kind of stuff. So it's kind of relegated to the retail market.
U
Good, Randall. So this is a high-level thought, and I'm not going to be able to do a great job of articulating it very precisely in one minute, but I'm going to try anyway, and I'm going to flounder a little bit.
U
I hear this thing, and Stuart said this, though I don't want to throw you under the bus here, Stuart, but I'm going to disagree with you just a little bit about saying that something is wrong with the network. I don't think things are wrong with the network; there are things that make the network less useful in particular ways, and I do want to try to shift this notion. I mean, 30 years ago...
U
U
Thinking that something is wrong with the network suggests that you have an idealized view of the network, and maybe that's worthy of chasing, maybe that's good to chase. However, as we understand, users aren't necessarily going to be chasing that. The reason we like gamers is that we tend to be more aligned there.
U
I want to urge us to think about what the current network, or a property in the network (say delay, or whatever it is), makes difficult and makes for poor quality of end-user experience, because ultimately I think that is what drives adoption, and I think that's what we ultimately care about: a network is only as useful as its use.
C
Next, unless Stuart wants to respond, next is Rich Brown.
R
Okay, I wanted to respond to Jana saying the network's not broken. I'm sorry, it...
R
It is. If I step on the accelerator in my car and it slows way down and then it picks up, I take it to my mechanic and I say something's wrong with my car: it's not working. If I start to dump a lot of data into the network through my router and the latency goes to pieces, my router is broken; there's no other interpretation of it. And you know, you can call it bufferbloat, you can call it responsiveness, or you can call it any of those things.
C
Thank you, Rich. Excuse me: Jana.
U
Thanks, Randall, and thanks for the response, Rich. I'll quickly note that that doesn't mean something is wrong with the network. My son has a car which cannot accelerate beyond 70 miles per hour. That's a feature, not a bug, right? Not everybody needs or should have a race car. So what I'm trying to articulate here is: let's not talk about right or wrong with the network; let's talk about what it makes possible.
U
That's the more interesting thing. As network engineers, I think we all betray our origins when we start talking about, hey, that latency just went up, therefore it's bad. But if you're sitting back and watching Netflix, you didn't even notice anything; you don't care as an end user. There is a tension there. I'm again saying this in a blunt way, and I understand the nuance here, but I'm trying to push us towards talking about usefulness rather than right or wrong.
C
Thank you. Next, Omer.
A
I have a comment. So, Rich, thanks for joining and thanks for provoking healthy discomfort, but I want to use your comment as a trampoline to pivot a little bit, and please don't take it as any kind of argumentation or anything like that; I'm just trying to extend it a bit. So: a consumer believes that if they do something and it doesn't work, it's not working.
A
A
As a reminder, an hour of work of a support engineer at an enterprise company probably costs more, not only than the profit the manufacturer will see from the device, but more than the entire cost of the device, including packaging and shipping. So it's not just about whether the router is working or not; we can argue about absolutes, and probably we should not argue about that, it's endless. But is there money there?
A
Is there enough money there to shift the market? And if we believe that there is, then how do we make an argument that consumers should actually care about it, to the degree that would incentivize the entire economy, highlighting the economic possibilities of delivering lower-latency devices to the broader population? So I just wanted to take your comment and trampoline it slightly sideways, but not too much. Thank you.
C
Thank you. Jim.
V
People seem to be making arguments based on, oh well, we're going to have to pick one. And courtesy of the work that Toke and Dave have done, and that Jonathan Fox has brought to market, there's a counterexample.
V
You can have your cake and eat it too. If you haven't played with one of Jonathan's routers, you should; if you're an open source guy, go run OpenWrt on several of the Wi-Fi chips that are on the market. So I don't believe it's a good argument to say, oh, we need to pick, because it isn't possible to solve this problem. We have the counterexample.
R
So Jim sort of stole my thunder. I give away a few percent of bandwidth on my OpenWrt router to get nice, stable, low latency. Omer, you're exactly right that there is an economic trade-off, and I am hoping that we can come up with a way to give vendors, service providers, as well as hardware manufacturers, an incentive to put in fq_codel or CAKE or PIE or any of these technologies, because I want all their customers to call them and tell them their equipment is, I'm going to use the word again, broken.
C
Thank you, Rich. Dave.
J
Oh, I don't want to jump on that one, but I wanted to talk about a use of language that has crept throughout our discussion: the word consumer. Because most of us are working from home, we're actually producing things; we are actually contributing to the economy. If we were just consuming, the economy would have collapsed and everyone would have starved to death.
J
So I've tried to switch to using a different term, netizen, where we are interacting with the world through the internet and are still part of the world. And if we thought about enabling netizens to participate in this world, I would hope that you would see more actionable items. I'm not going to touch upon the previous two comments on this. Thank you.
C
Thank you, Rich Brown.
C
Okay, sorry, Jim.
U
Oh, Jana here; I'll go in because the queue is empty. Dave, I don't think it's consumer in the sense that you're sitting back and just eating up a TV show, right? It's consumer in the sense that you're consuming services that somebody is offering to you. So whether I'm producing content or not, I am still consuming my ISP's services; consumer depends on what layer and what exactly we're talking about as being consumed. That's it. I do want to share this:
U
I appreciate that latency and bufferbloat and responsiveness are important to us today. I don't think they are the end of it; ten years from now, hopefully we won't be talking about this problem, but we'll be talking about other problems, and that's what I want to draw attention to. We are not here to just solve one problem.
U
We are trying to come up with a way of measuring, or not measuring exactly, but of articulating or figuring out how to describe, network properties that are useful. This is why I said that I think Al's draft is actually quite useful: it talks about what the user sees as important. And the one thing we did not talk about is just connectivity.
U
One of the biggest problems for users is just having a reliable connection. Forget everything else: they are pissed off, and it happens often enough, when they lose connectivity and they don't know who to point to to say, hey, my network is broken. It could be because of the device, it could be because of whatever is going on in the ISP's network; it could be any number of reasons. But there are very basic problems for a user that we don't talk about.
U
Like I said, this betrays a bit of a network engineer's view of the world, right? Those are problems that we sweep under the rug, because we don't think about them, because we can't do anything about them. We chase the problems that we want to solve, that we believe we can try to solve; whether we will actually solve them or not, I do not know. But these problems exist from an end user's perspective of the internet. Yeah, we should probably move on a little bit.
X
Yeah, thanks. I'm beating the same drum that Jana is here: I feel like the discussion is sort of devolving into how we fix bufferbloat, or what to do when the network is not providing sufficient capacity, or something like that. If we want to focus on the metrics, then that should be the focus as well.
X
V
We will still be here in ten years if we do not successfully expose the problem and enable economics to drive things into the market. The economic cost of solving this problem is actually extremely low.
V
C
That's the end of this discussion.
Y
Yes, Call of Duty. So hi, everyone. My name is Kara Kirk, and I have worked with quality of service and quality of experience for some 30 years; somebody may even remember me from the IETF meetings in the 90s, but I don't know. Okay, the point here is that I'm actually proposing something similar to what Sylvester proposed in the first presentation today: I think it would be very good to somehow change the mindset around quality of service. This is, of course, a very ambitious proposal.
Y
I'll start with the remark that I actually already presented this in 1997 at an IETF meeting, so it is a very old idea. But then, I'm not a salesman, and as was said in the keynote, we should perhaps also be some kind of statesmen; I don't really think that I am one.
Y
So even though I proposed a very similar ideology some 20 years ago, and then worked with Nokia and even made some 15 patents, still the situation is that even DiffServ, as defined in the IETF, was just based on the fundamental idea that applications have different quality and quantity requirements, and that the task of the network is to satisfy those needs.
Y
Yes, thanks. So I'm now calling this incentive-based quality of service. In general, I think it's much better than relying on the satisfaction of specific quality requirements to build mechanisms that really create incentives for applications and users to behave in a way that is beneficial for all during congestion.
Y
Of course, that means adaptation to the situation. Now, TCP in a way is responsible for that, but that is also somewhat not so robust, because it's up to the applications to implement those TCP mechanisms. And then, in this model or mindset, the users and applications are totally free to use the network and its mechanisms as they please.
Y
So that is the optimal case. Nowadays, of course, to some extent we can rely on the reality that TCP is happy to take care of the congestion situation and network adaptation, and of course video applications that are adaptive are also very reasonable. But okay, one technical solution was actually patented by Nokia some 20 years ago; I think all of the key patents have already expired, so you are free to use this, everyone.
Y
Y
Of course, these are not necessarily perfect solutions, but they are solutions. The solution is really based on two kinds of priorities: delay classes, a kind of delay priority, which should be freely selectable, but where low delay comes with a kind of penalty in this proposed system.
Y
You get less benefit if you select the low-delay class; that is the only kind of penalty. And then we need something like at least four, maybe even eight, priority levels to implement an incentive to adapt to the situation, in such a way that when you are using a lower bit rate, you get higher priority, which means a lower packet loss ratio. I cannot go into the details, but this works very nicely with TCP also.
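The incentive the speaker describes, lower sending rate maps to higher priority and thus lower drop probability, can be sketched in a few lines. This is a toy illustration of the idea only, not the patented Nokia mechanism; the function names, the doubling rule, and the linear drop model are all my assumptions.

```python
# Toy sketch of incentive-based QoS: a flow at or below its fair share
# gets top priority (level 0); each time a flow doubles its share, it
# drops one priority level, so heavy senders are penalized first.
# All thresholds here are illustrative assumptions, not from the talk.

def priority_level(rate_bps: float, fair_share_bps: float, levels: int = 8) -> int:
    """Return 0 (highest priority) .. levels-1 (lowest)."""
    level = 0
    share = fair_share_bps
    while rate_bps > share and level < levels - 1:
        share *= 2          # one level lost per doubling of the fair share
        level += 1
    return level

def drop_probability(level: int, congestion: float, levels: int = 8) -> float:
    """Under congestion in [0, 1], numerically higher levels are dropped first."""
    return congestion * (level + 1) / levels
```

A modest flow keeps level 0 and sees the smallest drop probability, while a flow sending a thousand times its share lands in the lowest class; that differential is the incentive to adapt.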
Y
But the main point is that there needs to be some incentive. The principle is similar to what Sylvester proposed: that the decision to discard packets should be based on the value of the packet. This kind of system based on priority levels also somehow realizes that principle: the packets that have the highest value are the ones transmitted if there are not enough resources.
Y
Okay, the point here is that what I'm asking is: is there any interest in really promoting this kind of incentive-based quality of service mechanism? Yes.
C
Okay, let's move on. Next we have, now I'm not sure, are you speaking, Neil?
Z
No, it's me speaking: Thompson.
Z
All right, okay. So this is joint work between ourselves, Vodafone, and Domos.
Z
So we've heard a lot about the importance of things like responsiveness, and we've also talked about things like propagation delay and so forth. We actually come at this from the perspective of how an application sees the network, and the issue with responsiveness, for example, is that sometimes the issue is latency, but sometimes the issue is packet loss, which can really have a big impact. Next slide, please.
Z
So our perspective is that loss and delay really need to be considered together. We think in terms of what we call quality attenuation, which we write as delta-Q, which is really the combination of both. It's a combined measure of the extent to which the network is not instantaneous and not faultless, and this is basically what applications see. We had some discussions on this earlier.
Z
Dave Taht made the point that actually a lot of the interesting things happen in the tail of the distribution, and we would completely agree with that. So when we're thinking of quality attenuation, we're looking at a distribution, not an average. So we could say our good news is:
Z
We think there is a measure from which many of the things we've talked about previously, RPM and so forth, 99th percentile and so on, can be derived. The bad news is that it's not a single number. The reason you need a whole distribution is that different applications and their protocols are sensitive to different aspects of it, so any single moment or centile that you pick will not be universally applicable.
Z
The other advantage of looking at things this way is that you can specify a bound on how bad this attenuation can be, in the picture on the right.
Z
It's the blue curve. If you measure it (we'll talk about measuring in just a second) and you see everything arriving sooner than you needed it to, then you know you're doing better than you needed to, and you have a test for what constitutes "good". Okay, can we have the next slide, please?
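The "bound" idea above, a curve that the measured delay/loss distribution must stay on the good side of, can be sketched as a simple check of an empirical distribution against requirement points. This is an illustrative model only: representing the bound as a list of (delay, minimum fraction delivered) pairs is my assumption, not the talk's exact formalism.

```python
# Sketch: a timeliness bound is a set of (delay_ms, min_fraction) points.
# The measurement passes if, at every point, at least that fraction of
# packets (counting lost packets as never arriving) arrived within the
# delay. Loss and delay are thus judged together, as the speaker argues.

def meets_bound(delays_ms, losses, requirement):
    """delays_ms: one-way delays of delivered packets.
    losses: count of packets that never arrived.
    requirement: list of (delay_ms, min_fraction) pairs."""
    total = len(delays_ms) + losses
    for bound_ms, min_frac in requirement:
        delivered = sum(1 for d in delays_ms if d <= bound_ms)
        if delivered / total < min_frac:
            return False
    return True
```

Note how a lost packet hurts every point of the bound, which is why loss cannot be reported separately from delay in this framing.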
A
One
one
introduction
somebody
has
an
open
microphone,
please
I
I
would
like
to
ask
everyone,
but
the
speaker
to
mute.
Thank
you.
Z
Okay, so the way we measure this is basically to get one-way delay timings from one point to another, but using different packet sizes. Actually, just looking at those raw measurements frequently reveals a lot of interesting behavior. But what we can also do, if you take data like that and sort it by size, is decompose it into some useful components.
Z
For each size bucket, you will get some packets that have the minimum delay for that size, and if you do a regression through them, you can extrapolate to what would be the delay of a hypothetical zero-size packet. We call that component G: a kind of geographic delay. It's basically like the propagation delay that was spoken about on the first day. And then the extent to which bigger packets take longer than that is something we call S.
Z
The
serialization
delay,
that's
obviously
a
function
from
packet
science
to
an
additional
delay.
Quite
often,
that's
that's
roughly
linear,
and
when
we
subtract
those
two
factors,
what
we're
left
with
is
what
we
call
a
variable
delay:
the
serialization
delay
in
the
geographic
delay.
Essentially,
statically
determined,
they're
functions
of
the
the
layout
and
the
technology
of
the
network.
They
change
relatively
slowly.
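The G/S/V decomposition just described can be sketched directly: take per-size minimum delays, fit a line through them (intercept = G, the "geographic" delay of a hypothetical zero-size packet; slope = serialization rate S), and treat each sample's excess over that floor as V, the variable component. This is a minimal sketch under the stated linearity assumption, not the production ΔQ tooling.

```python
# Minimal G/S/V decomposition sketch: assumes min delay per size is
# G + slope * size, with slope the per-byte serialization cost.

def decompose(samples):
    """samples: list of (size_bytes, delay_ms) one-way measurements.
    Returns (G_ms, slope_ms_per_byte, v_list) where v_list holds the
    variable (queueing/contention) component of each sample."""
    minima = {}                               # minimum delay seen per size
    for size, delay in samples:
        if size not in minima or delay < minima[size]:
            minima[size] = delay
    xs, ys = zip(*sorted(minima.items()))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))   # least-squares fit
    g = mean_y - slope * mean_x                      # zero-size intercept
    v = [delay - (g + slope * size) for size, delay in samples]
    return g, slope, v
```

With real data one would fit through the low quantiles rather than strict minima to tolerate clock noise; the structure of the decomposition is the same.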
Z
Z
You can pick up a lot of phenomena from this data, but the contention effects are basically pushed into the variable component, where we can look at the distribution and the tail, and that tells us a lot about what's happening with queueing at some bottleneck point.
Z
Okay, so, I forget who said it now, but somebody was arguing that what we need is a composable measure, and this is one.
Z
This quality attenuation is a thing a bit like noise: it's something which adds up as you go along the digital delivery chain. In either direction it accumulates, and what you get end to end is the appropriate sum of what you get in the individual segments, and that's also true of the individual components.
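The composability claim, end-to-end ΔQ is the "appropriate sum" of the per-segment figures, can be illustrated with a discrete toy model: if each segment contributes a delay distribution and a loss probability, then (assuming independence) delays convolve and survival probabilities multiply. The representation below is my simplification for illustration, not the formal ΔQ algebra.

```python
# Toy composition of two segments' quality attenuation.
# Each segment: (loss_prob, {delay_ms: probability}) with probs summing to 1.
# Assumes segments behave independently -- a simplifying assumption.

def compose(seg_a, seg_b):
    loss_a, pdf_a = seg_a
    loss_b, pdf_b = seg_b
    loss = 1 - (1 - loss_a) * (1 - loss_b)   # lost if lost on either hop
    pdf = {}
    for da, pa in pdf_a.items():             # delays add, so distributions
        for db, pb in pdf_b.items():         # convolve
            pdf[da + db] = pdf.get(da + db, 0.0) + pa * pb
    return loss, pdf
```

Because composition is just this sum/product, a bad end-to-end figure can be attributed to whichever segment contributed the bulk of it, which is the localization property discussed next.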
Z
So
this
is.
This,
provides
a
method
for
being
able
to,
as
el
morton
was
asking
for
being
able
to
localize.
Where
issues
go,
where
issues
occur,
you
can
you
can
you
can
see
if
the
end-to-end
quality
attenuation
is
bad?
You
can
see
where
that
came
from,
and
by
decomposing
it
into
these
different
parts.
You
can
also
see
what
that
was.
Z
You
get
a
lot
of
insight
into
what
that
was
due
to
so
this
is
a
this
is
this
is
a
measure
that
you
can
you
can
process
to
extract
a
lot
of
the
the
the
single
value
metrics
that
we've
talked
about
already,
but
which
actually
contains
a
lot
more
information,
and
we've
used
this
in
a
number
of
contexts
that
that
always
reveal
interesting
things
that
you
didn't
know,
and
certainly,
for
example,
shows
shows
clear
differences
between
the
performance
of
different
kinds
of
access
technologies
and
and
can
give
a
lot
of
information
about
wi-fi.
Z
For
example,
domos
have
used
this
looking
into
the
behavior
of
of
wi-fi
systems.
Okay,
that
was
basically
what
I
had
to
say.
I
hope
there
are
some
clarifying
questions
to
pick
up
the
things
that
I
meant
to
say
and
forgot,
and
otherwise
I
look
forward
to
the
discussion.
Thank
you.
B
Yeah
really
quickly,
so
I,
like
the
approach,
do
you
try
and
take
into
account
the
fact
that
routing
switches
in
any
cast
and
things
like
that
can
cause
issues
with
your
measurements
over
time?.
Z
Basically,
we're
relying
okay,
so
so
so
so
yeah,
so
we're
we're
we're
relying
but
okay,
generally
speaking,
we
we
do
these
measurements
by
by
injecting
a
low
rate
stream
of
test
packets
with
appropriate
randomness.
So
they
they
appropriately
sample
what
we
want.
We
are
relying
on
on
the
test
packets
following
the
same
path
as
whatever
we're
interested
in
so
yes,
there
could
be.
Z
There
could
be
phenomena
which,
which
impact
that,
but
the
the
what
the
measurement
will
show
you
is
that
there's
something
this
you're
missing,
something
right.
It
will
you'll
you'll,
see
weird
behavior
and
then
you
can
start
drilling
down
to.
Where
is
that
happening?
And
what's
it
due
to
so?
Yes,
there
are
a
number
of
things
which
could
which
could
impact
this
but
you'll,
but
it
will.
It
will
provide
a
baseline
for
drilling
down
into
what
they
are.
C
Okay,
thank
you
moving
on
to
I'm
not
sure
who's
speaking
is
it
meet
and
greet
yeah.
AA
Here:
okay,
I
am
okay,
yes
yeah!
Okay!
Thank
you,
hello!
I'm
minori,
a
first-year
master
student
from
university
of
nebraska
lincoln,
and
this
is
the
collaboration
work
from
lady
at
apple
and
lisa
at
the
university
of
nebraska
england.
I'm
going
to
talk
about
our
work
user,
latency
to
measure
congestion,
control,
algorithms
and
less
recipes.
AA
Okay,
thank
you.
Ccas
is
an
important
component
of
transport.
Protocols
such
as
the
tcp
is
usually
measured
and
evaluated,
using
metrics
such
as
the
throughput
rtt
fairness,
and
while
these
metrics
are
important
for
network
operators,
they
are
not
sufficient
to
describe
the
latency
experienced
by
the
users,
most
ctas
use,
rtt
to
evaluate
delay
performance.
AA
The
rtt
of
tcp
is
usually
defined
as
the
time
from
the
departure
of
tcp
data
to
arrival
of
tsap
ack
rtd
measures.
The
delay
inside
the
network.
However,
what
user
perceived
time
is
from
the
user's
application's
request
to
the
arrival
of
application
response
on
this
daily
time
is
what
we
proposed
as
a
user
perceived
latency
and
upl.
AA
So the point is that RTT is not a good indication of the latency perceived by users. Therefore, while RTT is an important metric to measure packet latency inside the network, we want to propose an additional metric, UPL, to measure the latency perceived by users. By positioning UPL as a performance metric for CCAs, we hope it will promote better design of CCAs in terms of what the actual user experience would be like. Thanks.
C
Thank
you.
Are
there
any
clarifying
questions.
C
Discussion,
I
guess
so
so,
let's
start
the
queue
and
looks
like
omar
is:
oh,
he
has
a
clarifying
question
and
he's
in
the
queue.
So
I
don't
know
what
you
want
to
start
with.
First,
it's.
A
The
same
one,
okay,
I
have
a
clarifying
question
to
mean
gray.
I
hope
by
pronouncing
the
name
correctly.
A
My
understanding
is
that
upl
is
somewhat
equivalent
to
the
what's
so
called
responsiveness,
but
it
is
basically
inverse
of
responsiveness
and
to
that
degree,
what
what
mechanisms
have
you
used
to
measure
the
user
part
of
the
perceived
legacy?
Was
it
http
client
server?
Was
it
some
kind
of
binary
transfer
and
to
that
degree,.
A
AA
Okay, thank you. How we measure UPL: in our experiment, we measure the UPL in the application, whereas RTT is measured at the TCP layer.
AA
A
Okay, let's take a tough one, sorry about that: did you have a chance to experiment with TCP RACK, or RACK-TLP, in addition to the congestion controls? I'm just curious.
AA
It affects the responses, but it does not change the conclusion: the results always indicated that the UPL is different from the RTT.
C
You
so
first
in
the
queue
I
think
is
dave,
and
I
think
I
have
your
slides.
J
Yes,
thank
you
very
much
for
letting
me
point
to
that.
I
wrote
out
what
I'm
going
to
say,
so
I
can
fit
it
in
60
seconds.
J
So
everyone
here
knows
I'm
a
proponent
of
fair
flow
queueing
and
I
think
that
that
alone
can
solve
a
huge
percentage
of
the
problems
we've
observed
in
a
multi-user
home
that,
once
you
have
enough
bandwidth,
you
need
better
bandwidth
when
it
comes
to
this
incentive-based,
qos
concept
and
the
development
of
the
smart
team
management
and,
ultimately,
the
sched
cake
thing,
an
analysis
of
hundreds
of
third-party
qos
systems
and
bandwidth
management
systems.
We
ended
up
settling
on
a
very
simple
structure.
J
K
Yes, yes, hello. I wanted to comment actually on the first speaker. I like this incentive-based concept, and to me this seems to be also about fairness. I think this is actually something to perhaps look more into: how we can let users specify their trade-offs, right? It's one thing to say, well, I want to be prioritized.
K
I
want
to
hog
the
resources
and
so
forth
over
others,
but
it's
another
to
to
be
able
to
to
to
support
mechanisms
that
allow
you
to
specify
which
one
is
more
important
for
me
right
that
trade-off.
Basically,
is
it
more
important.
The
latency
is
more
important,
the
loss,
the
the
the
the
capacity
and
so
forth.
K
What
what
is
it
so
basically,
these
types
of
mechanisms,
I
think
are
are
very
interesting
to
look
at
one
question,
perhaps
or
perhaps
one
thought
is
also
how
users
could
indicate
this,
but
if
we
want
this
to
happen
dynamically
and
so
forth,
basically
does
this
require
potentially
also
ways
of
signaling,
where
you
would
put
these
types
of
trade-offs,
because
a
lot
of
respect
to
do
this
basically
makes
things
inherently
fair.
Y
Thank
you
can
I
can.
I
have
a
short
answer
sure.
Yes,
so
yeah
I
I
agree
absolutely
and
we
have.
We
really
did
continue
that
this
is
also
some
20
years
ago
and
have
some
solution,
but
in
in
the
proposal
that
I
and
we
met
nokia
was
that
in
principle,
the
user
is
an
application,
is
free
to
mark
the
packet
as
there
is
and
then
in
the
way,
somehow
declare
that
what
are
the
priorities.
Y
But
then
we
need
also
some
measurement,
the
network
boundary
to
measure
that
how
much
they
are
using
these
priorities
and
how
much
data
they
are
putting
on
different
level
priorities,
and
then
there
need
to
be
some
kind
of
feedback
loop
from
from
the
used
resources
and
priorities.
But
this
is
of
course,
complex
issue.
T
Sorry
I
was
looking
for
the
unmoved
button,
a
quick
echo
to
what
dave
tat
said.
It's
very
common
to
ignore
anomalies
and
outliers,
but
actually
looking
at
the
outliers
is
actually
most
interesting
because
that's
what
makes
the
video
called
fail.
I
had
a
question
about
the
graph
and
I
don't
know
the
slide
number.
Unfortunately,
it
was
the
one
that
showed
the
base
the
base,
speed
of
light,
latency
s,
the
geographic
latency
and
that
graph
seemed
to
show
that
the
geographic
delay
is
dominant,
the
serial
no,
no
go
back.
AB
E
Z
Okay, so this graph is just kind of illustrative, and you know, Stuart's absolutely right.
Z
The
thing
that
the
thing
that
the
constant
investment
in
in
higher
rate,
serialization
technologies
is
done,
is
basically
to
be
pretty
much
eliminate
the
serialization
delay
so
yeah,
if
it
if
to
plot
this,
for
a
real,
modern
technology,
the
the
the
the
the
green
line
would
be
almost
flat
and
the
variable
part
would
be
bigger,
but
that's
very
then
dependent
on
on
the
specifics
of
the
technology.
Z
Things
like
like
cable
and
wi-fi
with
shared
media
are
very
different
for
things
like
dsl,
where,
where
you
have
dedicated
media
and
the
geographic
delay,
though,
is
still
still
significant.
That
doesn't
go
away
as
as
the
as
the
as
the
bit
rates
have
gone
up.
The
speed
of
light
hasn't
improved,
so
so
yeah,
I
would
agree.
G
and
g
and
v
are
the
mostly
dominant
factors
now.
C
Can't
figure
this
out
ahmed
did
you?
Are
you
in
queue
again.
AC
I don't understand why an application needs to know about congestion algorithms and measuring them. The other thing is, for user-perceived latency, it's about the application: really, the delay in the server, and its ability to serve the client or the request, has to come into play to give a reflective user experience for that particular application. So I think UPL shouldn't just be about the window size of the response of the application server; it must also include the server's ability to access the database, search, and so on.
C
Okay,
thank
you
starting
sylvester.
I
totally
missed
your
spot
in
the
queue
I
apologize.
AA
Yeah, oh okay, so for that, the user processing... oh.
AA
C
Sorry, I'm sorry; I was going to go to Sylvester next, but you can go ahead, Gary, and then we'll go to Sylvester. Sorry.
C
Oh
okay,
sorry
sylvester.
D
I
I
think
that
a
way
to
compete
on
incentives
and
translate
these
traffic
management
strategies
is
maybe
more
interesting
than
than
to
say
that.
Okay,
this
incentive
is
good.
All
this
incentive
is
better,
so
and,
and
somehow,
if
that
competition
happens,
then
then
allow
the
users
to
have
an
opinion
about
like
which
incentive
he
would
like
to
use
for
which
service,
but
don't
bother
them
by
default,
so
somehow
have
have
meaningful
defaults
but
allow
users
to
to
change
them.
D
AD
Hi, just a couple of quick comments on the quality attenuation technique. First of all, I don't think Peter mentioned that it's now been standardized by the Broadband Forum, and as a result a number of vendors are now building it into products.
AD
So
there
are
commercial
products
available
that
use
that,
but
also
when
you
look
at
what
we
call
it,
the
presentation
there,
you
think
about
using
this
in
a
live
network,
to
measure
the
performance
of
of
networks,
which
is
what
we're
talking
about
today,
but
it
also
has
very
useful
use
cases
in
measuring
the
performance
of
equipment
in
a
lab
setup
and
in
in
actuality.
AD
One
of
the
very
first
use
cases
of
the
technique
was
back
in
the
cern
particle
accelerator
lab
to
select
between
different
vendors
and
different
models
of
ethernet
switches
in
that
high-speed
lab
and
compare
and
contrast
their
performance,
and,
I
think,
increasingly
with
modern
networks
as
we're
building
the
model.
Software
and
virtualized
network
functions.
We've
seen
a
huge
variation
in
the
implementation
performance
of
those
virtual
network
functions,
particularly
under
load
and
different
conditions,
and
that
that's
where
we've
quality
attenuation
in
a
lab
environment.
AD
It's
particularly
useful:
we've
used
it
for
low
latency,
docsis
and
virtual
pngs,
and
a
bunch
of
other
scenarios
in
the
labs
just
wanted
to
mention
it's
not
just
about.
You
know,
live
network
deployment.
You
can
use
this
in
the
design
and
architecture
and
supply
chain
evaluation
phase
as
well.
AE
Yeah
in
terms
of
tcp
rtt,
not
being
representative
of
qoe
or
transport
rgt
in
general,
that's
not
particularly
surprising,
given
that
transport
rtt
measurements
don't
incorporate
the
impact
of
re-transmissions,
so
you're,
looking
at
a
subordinate
delivery
rate,
you're
getting
more
of
an
estimate
of
throughput,
not
good,
but
in
terms
of
buffer.
Both
I
see
this
is
something
that
continues
to
come
up
in
this
meeting.
I
think
it's
it's
interesting.
I'm
curious
if
folks
have
an
idea
or
if
there
are
data,
sets
where
people
looked
at
four
access
links
at
in
residential
networks.
C
Thank
you,
omer.
A
Yep
awesome,
I
had
a
question
to
mr
thompson
regarding
the
geographical
delay,
variable
delay
and
civilization
delay
back
to
that
slide.
That
start
had
comment
on
earlier.
The
question
is,
in
your
experience,
is
the
geographic
delay
truly
constant
or
given
pair
of
fears?
A
The
reason
I'm
asking
is
that,
in
my
experience
most
recently
working
in
traffic
in
facebook,
the
routes
change
quite
frequently
at
least
a
couple
of
times
a
day
when
the
isps
are
running
out
of
budget
for
their
transit
agreements
and
they're
switching
to
the
next
next
least
expensive
agreement.
Because
of
that
the
well
coordinates
do
not
change
the
network.
Topology
got
changed.
A
This
was
my
experience.
I
wonder
whether
I
wonder
whether
your
experience
showed
different
different
results,
and
I
would
I
would
be
interested
in
hearing
a
bit
more
about
that.
Thank
you.
Z
Thank
you
shall
I
take
that
off:
go
ahead
and
respond
okay,
yeah
you're,
absolutely
right!
I
mean
when
we
when
we
call
it
geographic
delay.
That's
that's
just
a
kind
of
shorthand.
It's
it's
yeah.
Z
We
we
totally
see
the
effect
of
the
effect
of
route
changes
and
and
even
in
so
when
we
earlier
on
early
on,
we
did
some
exploration
of
vdsl
access
lines
and
if
you
do,
if
you
do
a
cluster
plot
of
the
of
the
g
and
the
s
you
can,
you
can
observe
the
effect
of
the
modem
seeking
seeking
different
operating
conditions
and
yeah
the
the
there's.
Z
The
g
is
not
boring.
There's
a
lot
of
information
in
that
part,
exactly
the
kind
of
thing
you
were
saying.
Z
C
Thank
you,
stuart.
T
T
I talk about this in my submission on the workshop agenda page; it's the first one. I remember talking to some people doing a video conferencing client, and they claimed they regularly saw 40 percent packet loss, 50, 60, sometimes 70 percent packet loss, and they claimed this is just normal on the internet.
T
The
internet
just
loses
60
of
the
packets
and
clearly,
if
you
do
a
tcp
trace,
tcp,
don't
have
a
tcp
connection,
you
don't
see
60
packet
loss,
so
they
were
getting
lost
because
they
were
sending
three
times
faster
than
the
network
could
carry,
and
that
was
what
they
were
causing
the
packet
loss.
The
internet
was
not
losing
their
packets,
they
were
causing
the
packet
loss
and
in
a
shared
queue
that
hurts
all
the
flu
flows,
sharing
the
queue
and
that's
why
any
application
built
on
udp.
T
C
Thank you, jana.
U
Thanks randall. A couple of quick questions. One is on the CCA talk: I like the measurement of application delay, and I think it's actually really useful. It indicates something about the queuing that happens. However, it is not necessarily a purely congestion-control metric; it's beyond that, and that's important to bear in mind. This gets trickier when you get to QUIC, because TCP has a sequential delivery model.
U
QUIC does not, so it's trickier to measure something like this and compare TCP versus QUIC. Separately: brandon asked the question of how often the RTT is greater than the network's min RTT; effectively, that's how I translate the question in my head, anyway. It's actually very often quite high, brandon. If you look at packet traces at servers, you'll see that it can grow up to 10 to 50 times the minimum RTT within a connection's lifespan.
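Jana's observation can be made concrete with a small sketch; the sample RTTs and the 1.5x "inflated" threshold below are invented for illustration, not data from the talk:

```python
# Hypothetical sketch of the "RTT inflation" question discussed above: given
# RTT samples from one connection, how often (and by how much) does the
# observed RTT exceed that connection's minimum RTT?

def rtt_inflation_stats(rtt_samples_ms, threshold=1.5):
    """Return (min_rtt, worst inflation factor, fraction above threshold)."""
    min_rtt = min(rtt_samples_ms)
    factors = [s / min_rtt for s in rtt_samples_ms]
    above = sum(1 for f in factors if f > threshold)
    return min_rtt, max(factors), above / len(factors)

# Invented samples from a connection whose queue builds up under load:
samples = [20, 21, 20, 35, 120, 400, 1000, 22, 20]
min_rtt, worst, frac = rtt_inflation_stats(samples)
print(min_rtt, worst, frac)  # worst case here is 50x the minimum RTT
```

Run against real per-connection traces, the same statistic would answer brandon's "how often" question directly.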
N
There is a huge difference. I would say, yeah, 50 times probably sounds about right, and that means there is a lot of buffer bloat, irrespective of whether I am the one sending the other competing traffic. The buffer bloat is just there in the network, probably because somebody else sharing the same cable is also sending traffic. But the variable latency, the variable RTT compared to the min RTT, is pretty bad, yep.
V
This problem is usually what other people do to you, and that's why buffer bloat went relatively unnoticed. We humans tend to do one thing at a time, and one measurement at a time, and it's literally when you're doing both things at a time, whether it's you or somebody else on your network,
V
that you see the problem. Most applications by themselves do just fine. So this really comes back to: the internet, or your home network, is a shared network, and it's shared in ways which are more subtle than most people realize, whether it's the microwave raising the noise floor on wi-fi or your neighbor with his own wi-fi network, right.
V
So this is exactly what stuart said.
C
Thank you, christoph.
AG
Oh yeah, I wanted to comment also on the buffer bloat question that brandon raised. I think one of the problems is that when it happens, it is extremely short-lived. Let's say right now, on my network, I start a huge file transfer: you will all see a one-second blip in my audio, and then webex is going to adapt to it, right. So you will see it, and then you will forget about it, but it happened. And so that's one of the reasons I...
AE
I think I did a poor job stating my point earlier. I agree that the RTT throughout a connection may change. Sometimes that's going to be due to the congestion control behavior of the sender: if I'm sending faster than the bottleneck link, I'm going to build up a queue there, and yes, we see that in production as well; the RTT observed throughout a connection will be higher than the minimum RTT. What I'm trying to understand is how often that is
AE
realistic. Because for me, okay, I have a one gigabit per second up/down link at home on a fiber connection; unless the total rate of packets arriving at this link is one gigabit per second, we're not going to have a queue built up there, on either the uplink or the downlink. And so, if you want to saturate that link and tell me here's how big the buffer is, how often will that scenario actually occur?
R
I'd like to speak directly to brandon's question, and you know, I'm delighted that you can get a gigabit. Until about six months ago I had seven-megabit DSL, and my neighbors here all did too, and the answer is: queuing delay, buffer bloat if you want to use that word, gangs up on everybody, a whole lot, all the time. And so, you know, I'm resisting the urge to ask people to start a ping test and then run a speed test during this webinar.
R
C
Thank you, rich. Dave?
J
My own comment, though, was that once we deeply understand these issues, I really do wish the content providers, the ones that are making the most money, would join us all in providing better in-home access, perhaps through things like better research funding, you know, something like the comcast innovation fund, to help make the network as a whole better for everyone.
C
Thanks, dave. So I think brandon has been popping in and out. Jana?
U
Two quick comments. I'll note that these 60-second slots make it hard for me to be nuanced, which is probably the right thing to do here, right; I don't want to spend five minutes talking about something, but I'm going to say something and walk away. Two things. One: christoph said that the buffer bloat that happens, that is witnessed, is short-lived and goes away. You're exactly right.
U
That begs the question, though: if you can't point a finger at it, is it really a problem? That is my bomb drop. I'm going to speak to rich as well, who said that a router that holds on to a packet is a router behaving badly. That's the equivalent of somebody screaming at the computer, saying "you're not working." Well, it's working as it's supposed to. This is what I keep saying: we have to stop talking about this as bad behavior, as wrong behavior.
U
It is doing what it's supposed to do, and it's serving a purpose. If you want to change the behavior, let's figure out how to do that, but it is doing something, and it's doing that thing well.
Q
Yeah, I wanted to pick up on the perception that people might get: brandon's saying he's got a symmetric gig, and therefore his data can't fill his queues; they're never going to catch up with him.
Q
That sort of gives the impression that this problem is going to be solved when we've all got more bandwidth. But the middle of the market, whenever it has more bandwidth, is filling it with more data, and so it's only because brandon's ahead of the market that he's not seeing the problem. Once everyone's got one gig, everyone will be filling one gig as well, just like everyone's filling whatever they've got now when they're in the middle; it's only the people at the bleeding edge that don't see this problem. And quickly, to pick up on jana's point, his bomb drop:
Q
yes, maybe that is how the internet works now, but it doesn't have to be how the internet works. It's unnecessary, and we can make it better. That's what we do; that's what "engineering task force" means.
AH
Okay, toerless here. So, this ongoing discussion about buffer bloat, and the RTT sometimes increasing because of it: yeah, we all want to get rid of it in the network, but I think what we should also remember is that there are higher-level things, you know, above the transport stack, in the media stack, that can be done.
AH
I'm not sure how many of you have experienced this: you sometimes have a buffer bloat increase, but your video application, and I think specifically zoom is subject to this, continues to have a long RTT. So over the course of a conference, your interactive response will get worse and worse, and you have to restart the application to basically get back to your normal, lower latency or low RTT. Those are really
AH
problems where there are a lot of solutions: good codecs to catch up, and other things like, you know, doing different streams and so on. And maybe that's something we could start to collect information about and document, at least informationally, in the IETF, because video conferencing applications on average really very often suck in that respect.
AH
AE
Brandon: on the symmetric gig point, I think even if you have 50 megabits per second or 25 megabits per second... Somebody brought up pinging during this call; that's exactly the type of data set I'm looking for, right: if I take users with different access types and look at how often throughout the day there is contention at their access link, for the uplink or the downlink.
AE
And the other question I have here is: I think when there is this type of buffer bloat, it's typically the uplink buffer that is saturated, or the uplink that is saturated, not the downlink. I realize the uplinks are typically going to be more constrained, but again, how often is the uplink under contention?
V
You know, the point of the long test is to push the network to where it stops working, and find out how much, or whether, it malfunctions. Okay, that takes time. That these are transient events a lot of the time is also extremely true, but each of those can destroy the quality of experience for many applications.
C
Sorry, jim, we've got to move on. Thank you. Christoph?
AG
Yeah, I wanted to respond to jana. Jana replied to me: if the problem is hard to catch, is it actually a problem? I think the problem is that it's hard to root-cause it and pinpoint that the reason for the short glitch in webex is buffer bloat. And so that's why I believe we need to build the right tools and measurements, to actually be able to point the finger at the root causes, so that they can actually get fixed.
T
I'm going to echo partly what christoph says, but maybe more strongly, and I'm going to paraphrase unfairly what jana said, so forgive me for that, jana. But in effect what jana said is: if the end user can't diagnose the problem, then it's not a problem. And I don't think that's the right way to think of it. If your car, a couple of times a week, has the engine just die halfway through a busy intersection, that's bad! It doesn't happen all the time, but a couple of times a week.
R
Yeah, I wanted to respond to jana, sort of on the same theme. You asserted that maybe the routers are working as designed, and I'm going to say: if it's part of the design spec to queue milliseconds, or hundreds of milliseconds, of data, that ought to be written on the box, and I don't think that we design our boxes that way. So I think buffer bloat's real, and I'd like people to think about it seriously.
R
I've been running an RRUL test here, and my latency has gone from maybe 16 milliseconds to a high of maybe 50 milliseconds. So I just put my money where my mouth is. Thank you, jonathan, for a good router.
U
Very quick responses. I'll start off with rich's point: I don't know that putting a delay metric on a router box is going to make much of a difference for the common user. For you and me, yes, but not for the normal user, to stuart's point yesterday. I mean, you know that I'm being provocative here a little bit, but there's a hint of truth in there, right. I'm a network engineer like all of you.
U
I would like to see buffer bloat addressed. However, I'm still trying to understand. To your point: my car breaking down twice a week at an intersection is very quantifiable, so effectively it's not a good analogy, I don't think. If I told my mechanic that, they would certainly want to go fix it, because breaking down twice a week in the middle of an intersection is a pretty big problem.
U
However, if once in a while my car makes a squeaky noise somewhere and it doesn't really bother anything or anybody, my mechanic is likely to say, well, keep going until something actually breaks. To me that's a better analogy, because if I can't put my finger on it, then it's kind of hard to fix it.
U
So I think there's a lot more to be said about incentivizing, and about what we should be incentivizing. And I don't want us to rathole on buffer bloat either, because I think the quality of the network has a lot more to be discussed, as we've seen in the past. I do want to see us, towards the end of the day, move towards talking about things like security, like privacy, like all kinds of other issues, like connectivity; like I said, basic connectivity, which we haven't even talked about yet. So, my thoughts.
C
A
We are indeed at a break. We will get back in 10 minutes, and we will start the third session of the day.
A
The moderator will be geoff huston.
J
I've got a question about the ending of this thing. I really had a great breakout session yesterday. I'm not really sure, though, what we are really aiming for in outputs. Arguably, we just went through buffer bloat hell, and there are other issues involving metrics. Is there some way we can get to a simple, clear, actionable, overall executive-style statement at the tippity top
B
of whatever we do next? So we'll talk about that in the last hour, and I have some slides to drive sort of that discussion, in fact. So I'm hoping omar will turn it over to me and we'll see what we can do. It'll take some work on the mailing list after this is over as well, but yeah, I've been trying.
J
Yeah well, I'd like to try to stay focused on the outputs in that last hour, so we can nail this piece of jell-o to the wall, and that would be great, thanks. And I'm gonna go grab my lunch real quick.
A
End of this thing, yeah, there is at some point the...
AI
...are the only important issues, I agree. We are now at the end of the break time. Hi, my name is geoff huston, and I'm moderating this next session. So we'll press on with the agenda, using a similar format, and first up is christoph paasch, with responsiveness under working conditions. Over to you. Thanks.
AG
Oh, am I the only one who sees the slides very tiny?
AE
AG
Yes, that works. So hello, everyone. This is a presentation about a draft we submitted to IPPM a few weeks back, about measuring responsiveness under working conditions. I'm not going to go into all the technical details that we present in the draft, but rather tell the big story around it. So why did we start off with this? Well, the first problem is that we have known about buffer bloat for more than 10 years now; it has existed for far longer than that, and solutions exist.
AG
Solutions have also existed since then, CoDel, PIE and so on, but still the problem is very widespread; when you go out and measure it, you can still measure latency under load. And so why is that? Well, we believe one of the problems is that we need to raise awareness of the problem, and create tools that allow people to actually measure it and quantify it.
AG
This awareness will enable the end user to be the forcing function and create the market incentives for the vendors to actually fix the problems, right. And by having the tools available to actually measure it, the vendors (I say vendors, but it could be operators, operating system providers, all of these) can use the tools to measure the buffer bloat and see whether their solution actually fixes the problem.
AG
Also, how do you measure it? Should you use an ICMP ping, a UDP ping, a TCP request-response, or should you use HTTP/3? How do you actually create the working conditions, right, the load on the network? It's surprisingly difficult to actually do that. And there are a few tools out there that are already trying to measure buffer bloat.
AG
We have, for example, fast.com, there's the waveform website, there are some command-line tools like flent and so on, but all of them measure it in a slightly different way, and if you run them one after the other, you will get very large variance in the numbers that you measure. So that's why we thought, hey, it's actually time to standardize a methodology to measure responsiveness under working conditions.
AG
So what do we measure here? Well, first of all, we focus on responsiveness for the end user, and that implies a few things. First, we use modern protocols. We don't use ICMP ping; nobody transmits data over ping. We don't use UDP ping either. We use HTTP/2 (HTTP/3 would be an option), TLS, right. And we are very well aware, as in what matt presented on tuesday, that buffer bloat can happen at different layers, and you can measure at each layer, and each measurement is valid.
AG
We just need to be aware that the layer we are measuring at has a certain meaning, and measuring at HTTP/2 has meaning for us, because we want responsiveness for the end user. And because we want to measure it for the end user, we want to measure all stages of the connection: the DNS lookup, TCP handshake, TLS handshake, HTTP request-response, and HTTP request-response on the load-bearing connections.
AG
So our methodology, and that is described in much detail in the draft, is: we create stable working conditions, and it is actually pretty difficult to reliably create these conditions. The problem is that you don't want a short-term spike of latency; you want it to be stable, because you want to run multiple repetitions of latency measurements, right. And so we create multiple HTTP/2 bulk data transfers that ramp up gradually, adding more and more connections, so that we can reliably fill the pipe.
AG
Then we measure responsiveness, and we measure it in two different ways. First, we measure it with HTTP/2 GET requests on the load-bearing connections: the load-bearing connections each do one HTTP GET for a huge file, and then we run small GET requests of just one byte, multiplexed on the same connection, to measure the latency.
AG
This allows us to expose buffer bloat in the network and also in the server implementations. Then we also measure separately on short-lived parallel HTTP/2 GET requests, which allows us to measure the DNS lookup, TCP handshake and so on. There might be prioritization happening in the network, so the TCP handshake might be faster than the GET requests, and things like that. So we measure all these different stages of the connections. Next slide.
AG
And so then we crunch the numbers and output them as round trips per minute. Why do we express it as round trips per minute? Well, first of all, we want a single number, and we want higher to be better; we don't want people to look for a lower number. Finally, expressing it as round trips per minute allows for a very nice range, from the low tens up to a few thousand, so it's very simple to express.
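The round-trips-per-minute idea amounts to a simple unit conversion; this is a minimal sketch of the arithmetic, not the draft's exact algorithm (which aggregates many probes across connection stages):

```python
# Round trips per minute (RPM): 60 seconds divided by the round-trip time
# measured under working conditions. Higher is better, and typical values
# land in a convenient tens-to-thousands range.

def round_trips_per_minute(rtt_ms):
    """Convert a working-conditions round-trip time in ms to RPM."""
    return 60_000 / rtt_ms

print(round_trips_per_minute(2000))  # badly bloated link: 30 RPM
print(round_trips_per_minute(20))    # responsive link: 3000 RPM
```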
AG
We have specified the entire methodology and published it in the IETF draft. We have a tool that is available in the macOS beta; it's called networkQuality. You can run it, and you can see the output here: the output is the capacity, the number of flows that ended up being used, and the responsiveness.
AG
As I mentioned, it has been specified, and we are also encouraging people to set up servers; the draft contains a specification of how you can host this kind of service, so that you can run the tool against your own server. Okay, time's running out; I'm at the end. So that's it.
AI
Q
I am sorry, I was using the wrong mouse. Okay, so this was aimed at the question in the call for papers for a simple metric that will focus people's minds: users, service providers, whatever. Next slide, please. Right, so this is just to give a bit of context.
Q
What we want to do is have a metric for varying packet delay, one that we can all agree on and that gets people focused. This is how variation occurs in different scenarios, a couple arbitrarily picked: on the right you see an orange one, on the left a blue one, and in the middle you see them both on the same plot. You can see the orange one hugging the x-axis and the blue one taking the y-axis, so you can't really see them both at the same time.
Q
So it's a question of whether you use the mean; the median (which I haven't shown; it's quite close to the mean on the right-hand one, but less than half of it on the left-hand one, as you can see in the numbers above); the standard deviation; or perhaps the 99th or another high percentile; or perhaps the max. Actually, this looks like an old slide, because the max is meant to be labeled; it's where the color ends, at about 4.5 on the left and about 57 or 58 on the right.
Q
Geoff, the next one, thanks. So, in picking which one: first I want to make the negative point that the mean and median are distractions.
Q
You've got skewed distributions here. For all the main applications, when you're measuring at the packet level, the packets are pieces of the objects your application needs, and if you measure the average of all those things, that isn't really representative of what the application can do, because it needs to collect up the stragglers before it can do anything with the pieces. Particularly with real time: if you, for instance, use the median to characterize delay, and you set your playout buffer at that level,
Q
you would be discarding 50 percent of the packets if you play them out after the median delay. So it's just pointless talking about the median. Similarly for short TCP flows: the receive buffer has to wait until packets are in order before it delivers them, so it has to wait for any of the stragglers. And even if you've not got in-order delivery,
Q
if you've got datagram-type systems, you tend to have application logic that has dependencies. So, as a generalization which isn't always true but often is: we're really talking about the user experience being about delivery of an assembled product, or at least an assembling product, in some sort of window, not the pieces. And so, measuring at the packet level, the mean or median really doesn't cut it, because these distributions are so asymmetric.
Q
What we propose, as a straw man if you like, is that if we try to go for a single metric, it ought to be a high percentile, to enable comparisons, if we're going to go for one. So the next slide asks the question, geoff: which high percentile? And this is obviously where things get difficult. The main thing is: not too high, otherwise it becomes too slow to calculate. Say someone's having to do tests, and they have to do a whole matrix of tests.
Q
If each test requires something like a billion packets in order to get a decently accurate 99.999th percentile or whatever, then it's just going to be impractical. So it has to be as low as possible while still being representative of typical application requirements.
Q
So, as a straw man: the 99th percentile. That means an application would generally be able to conceal the delay if it played out after 99 percent of the packets had arrived; you know, if it set its playout buffer on that basis, it might have to discard one percent of the packets. Is that reasonable? Does it make it easy to write applications that do that? We want to make it representative; otherwise you're not really providing a metric, and you get benchmark effects and poor incentives.
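Bob's straw man can be sketched numerically; the nearest-rank percentile definition and the delay samples below are invented for illustration, not taken from his slides:

```python
# Sketch: characterize delay by a high percentile, and check what fraction of
# packets a playout buffer set at that level would discard. The same check on
# the median shows why the median misleads for skewed delay distributions.

def percentile(samples, p):
    """Nearest-rank percentile: value below which ~p of the samples fall."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p * len(ordered)) - 1))
    return ordered[k]

delays_ms = [10, 11, 10, 12, 10, 95, 10, 11, 10, 10]  # one straggler

p99 = percentile(delays_ms, 0.99)
med = percentile(delays_ms, 0.5)

# Fraction of packets arriving after each candidate playout level:
late_p99 = sum(d > p99 for d in delays_ms) / len(delays_ms)  # 0.0 here
late_med = sum(d > med for d in delays_ms) / len(delays_ms)  # 0.4 here
print(p99, late_p99, med, late_med)
```

With these invented samples, a playout buffer at the 99th percentile drops nothing, while one at the median would drop 40 percent of the packets, which is Bob's point about the median being pointless for playout.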
Q
Before I go on to the last question, about how we would get consensus on this and things like that, I just wanted to make some clarifications. Jumping to the small print: we're not saying you don't also have to specify other things to make this metric useful. We're not saying you don't have to say whether it's one-way delay or two-way delay.
Q
You have to say whether it's layer 4 or layer 7 you're measuring at, as in the previous presentations. You have to give the traffic load, as in christoph's last presentation, the duration, and all the rest of it. But we just want to focus on this one question, and try to get an answer that we can all focus down on: which percentile should it be?
Q
And as long as we have one common one, it doesn't have to be the only one people report, but it's one that people can compare; everyone's got a lowest common denominator. As for which IETF working group: I suggest IPPM. I think the IETF is a good body to set this standard, and, now being extremely sarcastic, it's really good at setting standards, as spencer dawkins said, when there's only one choice; and we've got lots of choice here.
AI
No problem. Okay: going, going, gone. Kristen, let's go on to the last one in this set, which is an end-user approach to an internet score.
AL
Great, thank you very much. So I'd like to talk to you today about a case for an end-user internet score. This is not something that those of us who understand networks really well would necessarily consume, but rather someone who's not versed in the art, as they say. And it's not all about throughput; we've had several people discuss this already, that looking at speed tests or something like that doesn't give you the whole picture. There are many other things to consider: you know, responsiveness, jitter, and protocol conformance.
AL
Protocol conformance is a particularly important thing, and there are many other factors that we want to roll up into something that evaluates how our internet connection is doing. Also, a key piece of this is that we have different focuses: some people are really into gaming, we have video streaming, we have interactive video, and a number of other things. So the idea here is that we would like to define an internet score that works with different use cases and is a measure of both the network's quality and its utility.
AL
So there are plenty of network properties that we can evaluate. We know the standard ones: throughput, latency. We divide latency into at least two pieces, the idle latency and the working latency, and christoph kind of alluded to this: when we load the network, what can we expect to get in the way of latency? One way of characterizing this, which I really like, is the ability to multitask: can the network actually handle multiple functions at one time? And then there's protocol conformance.
AL
We have existing protocols like ECN, v6, wi-fi stuff, and there are new protocols that of course we can't foresee; you know, QUIC has come along and become an interesting protocol to work with, and someday it may be replaced by something else, who knows. From all this we want to kind of cook everything together and produce some kind of quality-and-utility metric, an abstract quantity. So the idea is to have a dimensionless number that's relatively small, something you can express in just a few digits.
AL
Again, higher is better, as we've spoken about with the RPM score, so a higher number means you have a better connection, with no particular upper bound, to allow us to expand into the future as things get better and better. The real sticking point here is correlation with user experience; I think a number of people have brought this up throughout the workshop.
AL
So our proposal is this synthetic score that we can scale into the future, and the idea is to use both linear and non-linear transformations. We note that in psychology, a lot of the ways human perception transfers from a technical metric into a perceptual metric are non-linear,
AL
often mediated by things like logistic functions. We want to be able to express that here, as a piece of this sort of generic block you see for how scores are put together, as in other scores that I've worked on. So the idea is to transform these input parameters and then combine them in some way, and the combination is a pretty important thing.
AL
How things combine is more of an art than a science, trying to get it to work in a way that humans perceive as useful. So we pass the inputs through these transformation functions, and then we apply the weighting table to combine them in some useful way, and that gives us the internet score. That will be just a quantity, and we can scale it to anything we want, to make it into a number
AL
you can tell someone in a chat window or something like that. So the idea is that it would be an integer of some kind. That's all I had.
AI
Okay, thank you very much. Over to discussion now.
X
Thanks, geoff. So, just a quick comment to bob: I completely agree that the mean and median are distractions. And then I'd just like to point out that choosing a single percentile comes with a trade-off, because one of the things you lose if you choose a specific percentile is the property that you can measure from a to b and then from b to c and add them up.
X
You can do that with means, and you can do that with other moments, but you can't do it with percentiles. The other thing you lose is generality in mapping to application requirements, because some applications may be sensitive to a different percentile, or a different combination of percentiles, than the one we potentially agree on. So yeah, I'm arguing for keeping as general a distribution as is computationally feasible.
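Bjorn's additivity point can be checked numerically; the exponential delay model and segment names below are an invented example, not data from the workshop:

```python
# Per-segment means add up to the end-to-end mean exactly, but per-segment
# high percentiles generally do not add up to the end-to-end percentile,
# because the worst cases on the two segments rarely coincide.
import random

random.seed(1)
a_to_b = [random.expovariate(1 / 10.0) for _ in range(10_000)]  # delays, ms
b_to_c = [random.expovariate(1 / 10.0) for _ in range(10_000)]
end_to_end = [x + y for x, y in zip(a_to_b, b_to_c)]

def mean(xs):
    return sum(xs) / len(xs)

def p99(xs):
    return sorted(xs)[int(0.99 * len(xs))]

# Means are additive...
print(mean(a_to_b) + mean(b_to_c), mean(end_to_end))  # identical values
# ...but the sum of per-segment p99s overestimates the end-to-end p99:
print(p99(a_to_b) + p99(b_to_c), p99(end_to_end))
```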
Q
Yeah, sure, very quickly: it's just an attempt to get a single number, and yes, it would be great, like I showed; the log-scale CCDF there would be ideal. But it's really about a single number. People may not even understand what a percentile is, but they know what a number is.
Q
And it may be inverted, like the RPM metric, but it's a single number. And yes, it's not additive, that's true, but I think it's more important that it's representative. I think a lot of the problems we're having are because median and mean latencies are being thrown about by everyone, as if they're some sort of measure of what we've got, and then people say, well, that's not very much. But actually that's not what the application is experiencing, and what users are experiencing, because we're not talking about the tail.
Q
Thanks, bob. Phoebe?
N
Thank you for all the presentations. I have a question for kristen; thank you, kristen, for the internet score presentation. Can we just go back to the previous slide, the one that shows... yes. So it's great that we have a final score for the user to look at, but is there a way to decompose it back to p0, p1, p2 for the developers to do the debugging?
AL
What we found, to make these things actionable: the scores are kind of like a dimensional collapse, so we're crunching a highly dimensional space down into a single dimension, and we have to be able to re-inflate that score. Sometimes it's a direct re-inflation, by going back to all of the parameters, and sometimes intermediate representations turn out to be very useful. So the idea would be that there is a diagnostic way of getting back and re-inflating those dimensions. Sounds good.
AK
H
Thank you, geoff, and thanks for all three talks. A quick response to bjorn's comment: you actually can combine percentiles, and we had a really smart mathematician from nortel contribute considerably to that work in ITU-T standards, and it's replicated in an RFC.
H
So we can get together at some point and share the specs you might want to look at, though of course it has limits as well. And then, since kristen's slide is up here: I think you said, kristen, that it's more of an art than a science when choosing the weights and the transformations.
AI
Thanks, Al. Wiz?
B
Thanks, Jeff. So in the United States we actually have an air quality index, which is defined by the US government. It's an interesting thing that does exactly the sort of thing you're doing, for an audience that doesn't necessarily understand the complexities of each individual measurement. It does a linear aggregation of multiple different measurements, with different multipliers, even depending on the range of the measurements, to come up with some number that dictates to people: can you go outside and exercise? And then they even band that down into color ranges, so green is good and it's 50 or less, and then 50 to 100 is moderate. As a scientist, I hate that number. I want to see the individual measurements. I care whether the smallest particles are getting into my lungs and I need a really good mask, versus just knowing that it's bad air quality with big particles, which aren't quite as dangerous, and things like that. But for the average user...
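The AQI aggregation Wiz describes is a piecewise-linear mapping: each pollutant concentration is interpolated between breakpoints onto a common index scale, and the index is banded into colors. Here is a rough sketch of that mechanism; the two breakpoint rows are modeled on the EPA's PM2.5 table but should be treated as illustrative, not authoritative.

```python
# AQI-style piecewise-linear index: within each band, the index is a
# linear interpolation between (conc_lo, idx_lo) and (conc_hi, idx_hi).
# Bands loosely follow the EPA PM2.5 breakpoints (24h, ug/m3).

BANDS = [  # (conc_lo, conc_hi, idx_lo, idx_hi, color) -- illustrative
    (0.0, 12.0, 0, 50, "green"),
    (12.1, 35.4, 51, 100, "yellow"),
]

def aqi(conc):
    """Map a concentration to (rounded index, color band)."""
    for c_lo, c_hi, i_lo, i_hi, color in BANDS:
        if c_lo <= conc <= c_hi:
            i = (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
            return round(i), color
    raise ValueError("concentration outside the breakpoint table")
```

Note the design trade-off the speaker objects to: the multi-pollutant detail (which particle size, which measurement dominated) is gone once the number is banded, which is exactly the cost of making it legible to a lay audience.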
AI
Thanks, Wiz. I now need to go and refresh my screen. Is everyone still seeing the same screen, or is it flipped elsewhere?
B
You have a white screen now. But the next person is Dave, if you want me to take over for half a second? No?
AI
No, no, I'm back in business. Dave wants it flipped back again, so back to the first one. Dave, over to you.
J
So over the years we have struggled to find tools and metrics that can universally describe the biggest problem I've been working on, which is bufferbloat. I only recently came to understand that one of the reasons why the two sides were talking past each other was that on the one side we had proprietary benchmarks with their own metrics, and on the other side the work that my project was doing was covered under the GPL version three, which, for all intents and purposes, in corporate America was verboten.
AI
Thanks, Dave. Who's next in the queue? Anna.
AB
Yes, I wanted to comment on the combined metric. The Swedish internet community did some work three or four years ago to try and define what was required from internet access, also defining a number of metrics that you would need to support, and we had a lot of discussions about whether all these metrics should be weighed into a single metric. What we ended up with was to say, you know, what it takes to pass, in a sense, to be a good access, for basic access or for a particular use case. So it was another way of combining metrics: you could perhaps have thresholds and say you need to meet these if you are to be a good connection for gaming, or a good connection for something else, and you could of course also have grades for it. But it avoids having to weigh the different metrics together, which can be a difficult task, I think.
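The threshold approach Anna describes can be sketched very compactly: instead of weighing metrics into one number, each use case defines pass/fail limits, and a connection either meets them or not. The profiles and limit values below are made up for illustration, not taken from the Swedish work.

```python
# Per-use-case pass/fail thresholds instead of a weighted single score.
# A connection "passes" a profile if every limit is met.

PROFILES = {
    "gaming": {"latency_ms": 40, "loss_pct": 0.5, "down_mbps": 10},
    "basic":  {"latency_ms": 150, "loss_pct": 2.0, "down_mbps": 2},
}

def passes(measured, profile):
    """True if the connection is good enough for the use case."""
    limits = PROFILES[profile]
    return (measured["latency_ms"] <= limits["latency_ms"]
            and measured["loss_pct"] <= limits["loss_pct"]
            and measured["down_mbps"] >= limits["down_mbps"])

conn = {"latency_ms": 60, "loss_pct": 0.2, "down_mbps": 50}
```

A connection can then pass "basic" while failing "gaming", which is precisely the per-use-case answer the weighting debate was trying to avoid collapsing away.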
Y
Okay, so my question relates to this last presentation, on this synthetic score. What is your opinion about the nature of this score? I can see at least two choices. It can be a kind of satisfaction scale, something like a mean opinion score or whatsoever; or another possibility is to have a kind of utility scale, for example value of time, which is used in the transportation area to measure the value of different kinds of situations. And these are different, because the value of time, the utility scale, is in a way a more linear measure. So what is your opinion about the nature of the scale?
AI
There's a few seconds left in your minute. Do you want to answer right away?
AH
Thank you. So in the IPTV world, where there is no congestion but the bandwidth is provisioned or reserved up front, the key metric of interest for quality is the number of disturbances that the user sees on the screen, video blocking or something. If I translate this into the congestion world, I'd say that I very much like the 99th percentile, but I think it would be good to try to capture a metric for exactly when, or how often, the 99th percentile is exceeded. In the IPTV world we said a broadband access network should have at most one such visible interruption within four hours, and I myself would also like to know how often my internet connection exceeds the 99th percentile, because it's a big difference whether it's one big interruption or ten smaller ones; ten smaller ones would typically be perceived as much more annoying than one big one.
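The "how often is the bound exceeded" metric suggested here, counting distinct interruption events rather than just a percentile, can be sketched as follows. The trace values and the 100 ms bound are illustrative; consecutive bad samples are merged into one event, in the spirit of "at most one visible interruption per four hours".

```python
# Count distinct interruption events in a per-second latency trace:
# a run of consecutive samples above the bound counts as ONE event,
# so ten scattered glitches score worse than one long outage of the
# same total duration would suggest by a percentile alone.

def interruption_events(samples_ms, bound_ms):
    """Number of runs of consecutive samples above the latency bound."""
    events, in_event = 0, False
    for s in samples_ms:
        if s > bound_ms and not in_event:
            events += 1
            in_event = True
        elif s <= bound_ms:
            in_event = False
    return events

trace = [20, 25, 300, 310, 22, 21, 400, 19]  # ms, one sample per second
```

This captures the speaker's point directly: two traces with the same 99th percentile can have very different event counts, and the event count tracks annoyance better.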
AK
Thank you. Sam?
AM
Sorry, I was on mute. Yes, a quick clarifying question to Christoph, actually. From a quick reading of your draft, it looks like the mean is used to aggregate multiple working latency measurements. I realize it's a bit more subtle than that, because there are multiple different ones, for DNS, TCP handshake and so on. But given the challenges of keeping the link saturated during the latency-under-load test, why not use something like a maximum? I assume that must have been considered.
AG
Second point: why not the maximum? The problem with the maximum is that, because we measure at the HTTP/2 layer, the maximum is very likely going to be basically a measurement of packet loss, so the latency is going to be much higher for those particular probes, and it's kind of skewed. I agree the mean is definitely not ideal, and we would love to have a better understanding of what a better way to express it is, but that's what we currently have. Finding a percentile is also difficult, because another problem we have is that if your network is hugely bufferbloated, measuring bufferbloat takes a lot of time, and then we run up against the usefulness goal we have of finishing the test within 20 seconds; in some networks our test actually times out because there's too much bufferbloat in the network. So we are balancing different goals that are sometimes conflicting, and that's what we currently came up with. We would be happy if people contributed to the draft and suggested other solutions.
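Christoph's objection to the maximum is easy to demonstrate: one loss-driven retransmission outlier dominates the max, pulls the mean up, and leaves a high percentile largely unaffected. A toy comparison, with made-up probe values:

```python
# Compare aggregations over latency-under-load probes. The last probe
# hit a loss/retransmit and reports 3000 ms, as happens when measuring
# at the HTTP/2 layer.

def percentile(samples, p):
    """Nearest-rank percentile (no interpolation)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, int(round(p / 100 * len(ordered))) - 1))
    return ordered[k]

probes_ms = [40, 42, 45, 41, 44, 43, 46, 42, 41, 3000]

max_ms = max(probes_ms)                    # dominated by the loss outlier
mean_ms = sum(probes_ms) / len(probes_ms)  # pulled far above typical
p90_ms = percentile(probes_ms, 90)         # close to what probes saw
```

None of these is "the" right answer, as the discussion makes clear; the sketch only shows why each choice measures something different under loss.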
AI
Thank you. Next in the queue is Jonathan.
I
All right, Christoph's going to get a lot of air time. So the first question here is: is there an open-source version of the server side of the RPM test?
AG
It's coming. Randall, who chaired the previous session, is going to post the server-side configuration in the coming days or weeks. It's very simple; it's all just HTTP/2. You need to provide a JSON at the beginning to bootstrap the process, and from then on it's all just hosting a few files over HTTP/2 and allowing an HTTP POST.
I
Okay, and any way to direct the iOS client to a different server than the default? Yes, there's an option, and...
AG
Sorry, right, no. I wanted to highlight one key point on the internet score: Kristen mentioned the weighting table, and the weighting table is really the tool that will allow us to create a score for gaming, create a score for video conferencing, create a score for people who only do email and web browsing, for example. And earlier in this workshop we talked about the nutrition labels, and we were wondering: well, what should the rows be? Should the rows be goodput, or should they be responsiveness? Well, maybe your row could be: your gamer score is 49, your video conferencing score is 600, so it's much better. And so we believe that the weighting table can be the tool that allows us to address all the use cases.
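The weighting-table idea can be sketched directly: the same underlying sub-scores are combined with a different weight row per audience, so one measurement run yields a gaming score, a video-conferencing score, and so on. The metric names and weights below are purely illustrative, not the actual internet-score table.

```python
# One set of measured sub-scores, many use-case scores: each row of the
# weighting table defines a different collapse of the same data.

SUB_SCORES = {"responsiveness": 80.0, "throughput": 95.0, "loss": 90.0}

WEIGHTING_TABLE = {
    "gaming":     {"responsiveness": 0.7, "throughput": 0.1, "loss": 0.2},
    "video_conf": {"responsiveness": 0.4, "throughput": 0.3, "loss": 0.3},
    "email_web":  {"responsiveness": 0.1, "throughput": 0.6, "loss": 0.3},
}

def use_case_score(sub_scores, use_case):
    """Weighted combination for one audience."""
    weights = WEIGHTING_TABLE[use_case]
    return sum(weights[m] * sub_scores[m] for m in weights)
```

The design choice here is that the measurement pipeline stays fixed and only the presentation layer (the table row) changes per use case.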
AI
Thank you. Next in the queue is Alex.
K
Christoph actually just stole my thunder, because I wanted to comment on the synthetic score as well. I don't think a single score is actually useful at all. I think all it says is: okay, I can quantify that this sucks, or this doesn't suck quite as much. I don't really see how that helps in a general way.
K
However, I think one can definitely make the case for benchmarking for these different types of use cases, because different things are important for different users. We were also talking earlier, in this whole incentive discussion, about whether delay, whether latency, is more important, or whether it's loss, and so forth, and this perhaps also helps relate to where you would put those trade-offs: what scores better for which area. So yeah, I do think benchmarking is good.
K
The one score itself is not so useful. On the other aspect, on re-inflating the dimensions, a separate comment: I'm not sure I actually fully understand this, because if you say this mapping should be reversible, then this is just an encoding; I don't see how this can be one compact score if you want it to be that way. So I'm a little bit skeptical on that. Anyway, I'm done.
X
To Bjorn: yeah, I just briefly want to describe how we do this mapping between application requirements and measurements in the Delta-Q system. That is, for instance, if you have this IPTV case, where what you care about is whether or not there are artifacts for the end user:
X
You can map that to an upper bound on the distribution of latencies and packet loss measured across that link, and then you can say something like: you can count the number of seconds that the network delivered a latency and packet-loss distribution that was above that bound. So basically, the network is too slow now. So yeah.
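The Delta-Q-style check described here, an application requirement expressed as a bound on the latency/loss distribution, evaluated second by second, can be sketched as follows. The bound (99% of samples under 50 ms, loss under 1%) is an illustrative requirement, not one from the Delta-Q literature.

```python
# Per-second check of a measured distribution against an application's
# quality bound, then a count of "bad seconds" over the whole trace.

def second_ok(latencies_ms, losses, bound_ms=50.0, quantile=0.99,
              max_loss=0.01):
    """One second of samples: does the distribution meet the bound?"""
    delivered = sorted(latencies_ms)
    k = int(quantile * len(delivered))  # index of the quantile sample
    q = delivered[min(k, len(delivered) - 1)]
    loss_rate = losses / (losses + len(delivered))
    return q <= bound_ms and loss_rate <= max_loss

def bad_seconds(per_second):
    """per_second: list of (latency_samples_ms, lost_packet_count)."""
    return sum(0 if second_ok(lat, lost) else 1 for lat, lost in per_second)
```

A second can fail either on the latency quantile or on loss, which matches the point that the requirement is a bound on the whole distribution, not on a single summary statistic.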
X
I think the point being made is that you need a metric that can also have a well-defined mapping to application requirements, and Delta-Q supplies that. Thanks. Thank you, Dave.
B
Thanks, Dave. Wiz: so regardless of whether we have a single score or a food-label type setup, I'd argue that we need to enable studying the scores over time, as trends and deltas are likely more important than just a static point in time.
N
I'm just thinking out loud about the internet score. If every application or use case gets its own score (for example, video conferencing gets 50 and gaming gets a hundred, and let's say 100 is good and 50 is bad), are we also planning on documenting what the user is supposed to do? Is he supposed to play games more, or is he supposed to figure out why his video conferencing score is low, and vice versa?
N
So, just thinking out loud, whether the authors are thinking about this as well, or if they have any thoughts.
AI
Thank you. Alex, back in the queue.
K
Yep. On the percentiles and those metrics: I think one thing that is important is to refine these over time and have this time component, which is a little bit echoing what was said. But I think one thing to think about is basically whether there should be metrics for recent high-water marks, or recent percentiles, so that we can distinguish the long-running behavior.
AI
Thank you. I put myself in the queue, so I'll put the timer on me. I think in searching for one metric we're actually engaging in telephone thinking. You know, if the network only had one use, it would be really easy to figure out the quality dimensions of that particular use.
AI
When we set up a packet-based network, we had no use in mind, and we've actually had a complete panoply of ways applications drive the network. And while we might say everyone uses TCP, therefore the network's optimized for TCP, I think that's wrong, and it's kind of tunnel thinking. One metric doesn't solve all of the various applications we use and would like to use, and trying to narrow this down to a single metric of quality is, I think, actually thinking about this the wrong way.
AI
Sharat?
AO
Hello, is it audible now? Sorry about that. Okay, so I just want to add some points and thoughts to what you were saying, Jeff, and also to the comments earlier: I do agree that we probably need to look at the metrics from an application perspective, because that's something I believe a user would understand.
AO
And if you listened to our talk yesterday, at Canopus Networks we are trying to do something of that sort, where a conferencing application has certain metrics that can be exposed back to the user: that a network right now can support a certain amount of latency or jitter. And of course, agreeing with you that it's a time-varying metric; it's not a fixed thing.
AO
It could be a particular value today, but it could change tomorrow as well. But making the application the center of exposing these metrics would, I think, be useful both for the customers to understand and for the developers to work on and improve further. Thank you. Thank you, Wiz.
B
So I don't really know how to rectify the fact that we need different things for different people, and how to come up with the debugging information. In the DNS world we've debated adding extra fields for debugging, and who actually should be able to interpret what a software author writes in there, and whether it should be shown to the user or not, and whether they should be given a button that says "ignore the certificate error", right?
B
I can say that there are a large number of cases where we have red and green lights, because that's what a user needs to know: isitdownrightnow.com is one, and isitraining.com is another one. They just want to know yes or no, and unfortunately that's what users want a lot of the time: does my network work, yes or no?
AP
Yeah, I believe we're overcomplicating this. I think there are not many applications that actually want latency.
AP
You know, just keeping latency low, and focusing on that as a metric, as something we can measure, is probably going to help very many applications. And then you can try and classify applications that care more or less about this and say: this matters for gaming, because even the low-rate traffic needs to have low latency; but then, you know, high-rate traffic may produce latency, and we just want to avoid it.
D
About metrics and video conferencing: some applications do have this, but I have never seen an application which said, okay, it's your line which is the problem, or it's the other party's line which is the problem; should I solve it, or should the other party solve it? So this kind of finger pointing, which we mentioned several times these days, that's, I think, really important.
D
Thank you. Rich?
R
So, yes, thank you. I want to pick up on what Michael just said. For people that are espousing that we need multiple measures for different applications: I guess I'd ask people to design them and then to implement them. Point us to the code; we'll run them and we'll see if we can get any good data out of them. That's it.
AI
Thank you. Wiz, we seem to have exhausted the queue. We're a couple of minutes early, but I'm guided by you: do you want...
AI
One of my friends who ran an ISP always said that there were a bunch of gamers that he called "low-ping bastards", and they were his nemesis: whenever the network clagged up, the low-ping bastards would hit the phones and give him a hard time. And maybe that's what we're talking about. So, what's next, Wiz? Is this the last slide? Yes, over to you.
B
All right. So we've been thinking about what to do in this last hour and trying to figure out how to wrap stuff up. So let me go ahead and steal the sharing.
B
And we will go with this. So, what's next, right? This has been a fantastic three days of discussion. In some ways it feels like we came to some conclusions, and in other ways it feels like we're just getting started, and I think that's probably the right thing to think about. So, first, a little bit of administrative trivia. Yes, the website is still out of date; we apologize for that; we are working on getting it fixed.
B
There are multiple issues, including running a workshop and trying to get other stuff fixed at the same time. Yes, people have asked: can we publish your slides? Yes, we will publish all three slide decks on the website as well. And yes, beyond today, please do consider continuing this conversation in both Slack and email.
B
Yes, okay. So it won't go full screen, and I'll just work through it.
B
That was the administrative slide, but I read it, so you probably don't need to. This is what we propose for the last hour: first, to try and document some of the conclusions that we agree on (I'll do that in two parts, and we'll continue that in a minute), and then for another 20 minutes after that we'll talk about what the next steps are. What do we need to do that we can do immediately? What can we do in the IETF?
B
What is longer-term research? And then, of course, coordinating on the mailing list and continuing to do stuff is a good thing; and then, once we decide on the next steps, who wants to commit to actually helping? But first off, for the first 30 minutes:
B
What can we agree on? So I'm going to propose the following experiment, and this is where there's a fine line between bravery and stupidity, and I'm going to figure out which side I'm going to fall on. So queue up if you have a statement that you want to propose. This is not for discussion.
B
I want you to pick some statement that you think we agree on. Not that everybody has to agree, but something that you think the general audience believes is true. Don't pick something that you think you're in the minority for. An example, in other words, is: there is no single metric.
B
I think a lot of people have said that. We'll write them down and we'll send them all out later, and I will be stupid again and actually start writing some of these down live after we sort of run out of items. I don't think there should be too many. I'll start a second queue for people that disagree with some of the statements, to really push back against them. This is just sort of an experiment for which conclusions we think we can actually come to, that we can document in the final workshop report, and which ones we still need to think about more. So with that, Jeff, I'm going to ask you to continue running the queue, if you can. I was just going to volunteer to do so. Thank you, sir.
Q
All right, I've actually got two, but they're very similar. I'm going to say: we do not have incontrovertible evidence, evidence that would convince Wi-Fi developers, that latency is more important, or becoming more important, than bandwidth. And your second?
AQ
Yes, so the background for this is that the internet keeps changing. When I make some observations during this conference, and otherwise, I still see things that are measurement-related but refer to technologies that may be on their way out, or at least not as useful as before. So I would propose that we say that new measurement or QoS techniques should not rely on reading TCP headers.
AP
Yeah, as I just typed: I think we probably have large agreement on this statement from Bob, that the mean and median are distractions when it comes to latency. But for the specific formulation, maybe he wants to say something, which would help me paraphrasing him. So...
AH
It is frustrating to work on measuring bad network services without simultaneously being able to work on improving those network services.
AH
Neil, sorry, I don't think "hard" is the right word; "insufficient", or so. Right, yeah, I said "frustrating", but there may be a better word; "hard" is not good. Thanks. Thank you.
X
My microphone's not working. Neil's microphone is not working, I think.
X
Localization. Thanks, Bjorn. Yeah, so that gets to the point that detecting a problem is not very useful unless you can also find out where the problem is, and potentially what to do about it.
L
All right: stakeholder incentives aren't aligned for easy wins in this space.
AH
In the queue: so I'm wondering if we're already in the phase of beating up these statements, because I'm starting to read the others and think about them. So for the first two, maybe my more positive view would be: we need better evidence and metrics for the fact that latency may be equally or more important than bandwidth, and that bufferbloat is a significant problem, in order to persuade developers.
Q
Bob: we need measurement to be continuous, which is, I think, from Michael Welzl's presentation. We're not colluding, and he also supported my point.
B
One I actually meant to poke at on that subject, because I don't think we've covered it enough. That's an incredibly important topic that I think we need to consider. So, thank you. All right.
B
So the next step that I wanted to take is: is there anything in this list, which is now too small for you all to read, that you think is just absolutely wrong, or at least that you think multiple people agree with you is absolutely wrong? I'm just trying to get a feel for which of these conclusions there are disagreements on, and which need more work. I'm not going to write anything down; I'm only going to mark the ones that people find contention with.
B
So shall I speak, or are we queuing, Jeff? Yeah, we're queuing.
AI
Okay, Dave, you seem to have... oh no, let me understand where this...
AI
Now I'm going to go back to where Al said "plus q" at 3:12 my time (god, it's early in the morning). So, Al, is this in response to Wiz's comments?
AI
No problem, I'll move down. I'm on to Evgeny.
AS
Thank you. Well, I cannot agree with the first statement, and I think that statement 13 is much better and much clearer, first of all because Wi-Fi 7 currently targets latency, and Wi-Fi 7 targets support of real-time applications.
J
Thank you, Dave. Okay. Obviously I would disagree pretty strongly with number two and number one. In the chat, Sam of SamKnows demonstrated 16 seconds of bufferbloat on his connection and how useless it was. So I believe that anyone that cares to look knows it exists, and as for other studies, I love what was done with the responsiveness metric. So I don't know what constitutes incontrovertible evidence, besides...
S
You're going to hear me say something very similar. I believe we do have incontrovertible evidence, for whatever definition you want to use for "incontrovertible", that bufferbloat is a prevalent problem, and I believe that we can show that latency is more important than bandwidth in a number of applications.
B
Okay, so let me make a point of order, because we don't have time to debate all of these. What I'm looking for is actually not the ones that I've marked in italics that we do have problems with; I think we've now concluded there are multiple people with issues with one and two, so those are things we need to work on, and we're not going to work on them now.
B
So if you're going to speak, speak to one that has not yet been talked about as something that you have problems with. We're just trying to find what's agreed upon and what's not at this point, and what goes into future work. Okay.
AI
Given that Jonathan's still in the queue, he's still got a comment.
AC
...respect for a standard home Wi-Fi router from... and that will include clock speeds and all that.
B
Ahmed, your audio just doesn't work at all, unfortunately; it's very broken. Can you paste what you want into the chat? I think you're proposing to add a new one, not talking about a current one, so please clarify that in chat as well.
Q
Yep, number 10. I just think it needs some more exposition as to what it means. It's possibly true, but it depends what "this space" is, and I don't really know what it's saying. I mean, if it's talking about latency, I would have thought incentives are aligned, because it's not a zero-sum game. But I don't really know.
L
I can give you an example, Bob, since this is my line. Since we just heard about bufferbloat again: I think we've understood bufferbloat very well for 10 years now, right? But the incentives aren't there to get the mitigation technologies deployed at scale.
H
Jeff, I like Michael's 15, "we need measurement to be continuous", but I think we need to extend it: it needs to include reliability measurement, or connectivity measurement, and that's the only reason to measure continuously. Otherwise it's an internet of measurements. But thanks for getting that point in, Michael; I missed the cue to do it. Thanks. Al? Omar?
A
I think that point 17 needs to be clarified, because I intuitively agree with it, but I believe it should be reworded, maybe not now; I think that statement can be made clearer. Thank you.
AG
Okay, Christoph: I wanted to speak to points six and seven, actually, on the localization part. I don't think that measurement absolutely needs to support localization; localization is needed to fix a problem, that's right, but in order to quantify and measure network quality it is not necessary. Also, localization is usually an entirely different task from measuring, and requires different tools, actually having tried to localize bufferbloat at Apple Park...
B
...the effect of bufferbloat itself. I may have transcribed that one poorly; I think six and seven were actually almost similar things, reported twice. I would rephrase, based on what you've said, to: the measurements need to support reporting localization in order to find problems.
AI
Okay, Dave. I knew you'd like to talk later, at length, on something in response to the chat, but you had something else in mind for the queue, right?
J
Yes, I did, thank you. Actually, when I first heard him say "localization" I jumped for joy, because if you go to Google Trends for "bufferbloat", you'll find that it's only in English-speaking countries, and yet I know from having lived in a Spanish-speaking country that they didn't get it, it just didn't, and they had, and still have, horrifically horrible problems.
J
So somehow, what the word "localization" meant to me is: this needs to be explained in all the core languages of the world where it needs to go, and we just changed it back. So just my thought: we need to get it out of English somehow. Okay.
K
Alex: I have a comment on a few of those. Particularly, I think we need to separate concerns; I think some of these items are conflating things, and I would separate them out. An example is actually number 11, which was... no, sorry. Where was the one with the active measurements and passive measurements?
K
It said basically that we need a combination of them. I think this needs to be articulated as: yeah, both have their place, we need to have both, but it does not necessarily have to be a combination. And similarly, in number 21, "future-proof networking: the ecological impact is as important as quality", I'm not sure why we are conflating them. I would simply say this is important as well, right? Quality is important, and energy usage...
K
Those things are important as well, but I don't know why we are mixing them up into a single one. And then, finally, the third one was actually what Dave was just mentioning: I think localization, isolation, is clearly very important, but I would separate that from detection, and I think again both have their place. Of course we need one; well, maybe we need both, but we should look at them separately. So I would call them out separately: isolation one thing, and detection another thing; both have their place.
AP
Well, thanks for putting mine in, but it wasn't really how I would have phrased it. My point about these continuous measurements was that they need to be queryable after the fact: you're using an application, and you want to be able to go back to it. And a part of my point was that that can be done reasonably when you use passive measurements.
AP
I am a little worried with just saying "measurements need to be continuous", because somebody was just saying that then we have an internet of measurements; I mean, we end up filling the network with traffic. So maybe the wording in 14 should say that at least passive measurements should be continuous; I'll just say passive measurement has to be continuous. No, no, this is in 14.
AP
And then the reliability and connectivity aspect: I'm not saying this is not important, but that's not what I would have... I mean, it can include anything. My concern would have been pinpointing the reasons for the latency. So I would honestly remove that bit, because I think it can include various things, and as long as we don't fill the network with unnecessary active measurement traffic, I think we can measure all kinds of things. Now, which one are you referring to?
AP
Well, I don't disagree. I mean, we can keep it in, it doesn't matter; I just don't think it's necessary. I'll mark it for consideration later. Before that, I would also demand an explanation of "better bandwidth" before we just leave that in, because I think "better bandwidth" in point 8 is a bit... it's like, what is that supposed to mean?
B
I mean, how useful is that statement then? Because it's a goal; it's not an implementation. I get it.
B
All right, thanks. I'll mark six as off the table, because I think we could rat-hole on the wording of that one. I think there are important elements in it that people agree with; we just need to figure out how to state it more properly, and let's do that on the list and in the report later. Okay, but just before I move to Bob, I wanted to check with you, Wiz: there's a time check...
Q
Yeah, just helping Michael with number 14 again. Well, I wrote it, and maybe badly. I agree with what he said, and maybe the point here is that metrics like latency, reliability and connectivity are all intermittent, and so you need to be measuring things continuously in order to find them. And so we need "passive"; you can take "passive" out of parentheses if you want. And Michael also wanted it to say "continuous and archivable", or something.
AP
Okay, thank you. Al?
H
Yeah, thanks, Jeff. And yeah, Michael, I'm sorry I had to jump on your particular thing here, but the person who said we don't want an internet of measurements, that was me, both times. So a lot of people have actually taken a look at using active measurements to assess, in a continuous way, the connectivity and so forth.
H
It just has to be a really low-bit-rate stream, and there are ways around this. I completely agree with the additional points you made. Thank you.
AH
So I really liked six before it became conflated with that second bullet point; I'm not sure who added the English stuff. But as to what Stuart was mentioning: how about "needs to support problem pinpointing", instead of "localization"? I think, yeah.
AB
Yes, if we're on the wording, I think the wording of number three is also a bit strange. I think I know what we're trying to say, but "new measurements"... I guess, if TCP headers are available in some network, there's no reason you couldn't measure there. I think the intent is that the new measurement techniques, or...
B
In that case, we're going to go on to the next step, so I will send this all out later; we can fight over wording and things like that. I do find it interesting that out of 21, we didn't strike out too many. That's actually a good mark to me. I was thinking it'd possibly be more contentious than that, and there are certainly some problems with some of them, but that's to be expected and that's okay.
B
So the next steps are really... we're wrapping up an IAB workshop, right, with the goal of: where do we need to go from here? Obviously there's a lot of work in this space. The conversations have been non-stop for three days, which has been fantastic; each day there was at least an hour of conversation afterwards, plus Slack conversations and things like that. So, in my opinion, there are sort of two different things that we can take away from this. We have some short-term goals.
B
What do we want to accomplish in the immediate future? And then, what are the longer-term goals, things that we really can't do without significantly more reframing? So this is your opportunity to jump in the queue to figure out what we need to do. The one thing that we have to do is a workshop report.
B
And, from the conclusions we're left with, to try and come up with some stuff that we can state as what we did in the workshop.
B
It won't necessarily be a definitive set of conclusions, but it will at least record what the workshop did. But is there immediate work that can be done in the IETF? Are there ideas that people have come up with in the last few days that can be done in either an existing working group or a future working group? And of course, you know, long-term goals.
B
Is there future research that can be done? I think industry outreach, and reaching out to end users and others to help get a broader perspective, would be good. And then, unfortunately, I put deployment under a longer-term goal, because we know how long it takes to replace CPE environments and...
AH
Like automating getting rid of the mute button. So, first, I would love to see that we finish up with a way to organize ourselves going forward, so that everything that falls over in our timeline today can be done there. And to that end, I would think that the mailing list might be converted into one that doesn't say "workshop" anymore: a non-working-group mailing list under IAB sponsorship, and that should be open.
AH
Of course. And I think the second most useful tool I've seen, in something like the design teams, was just trying to figure out how we get ourselves organized on the wiki, right? Different topics could get different pages, and people who are interested in them would start to collect the information beyond that initial first page, which obviously would be the points from the last slide. And that, I think, is how far organizationally we might want to go.
AH
I think, at the point in time when the mailing list sees enough interest in any individual subtopic, that might, you know, cause another interim meeting or something like that. So that's organizationally.
B
Okay, now, one point of order, if I could inject: I'm going to merge sort of the last two. If you are willing to work on something, then while you're speaking about things that should be done, please do say how you would contribute. Would you be an editor? Would you contribute in the discussion, or things like that? Just so we get a feel for the interest level among the community.
J
Important to me is what kind of deadlines we have ahead of us. There's a group of us here that are working on the BITAG report on internet latency explained, and some of this ties together, but the workload is enormous. So when do you need a workshop report by?
B
We don't have hard deadlines; it typically takes a couple of months to get something out, and it's not an exhaustive report. If other IAB members want to step in and give their opinions, that'd be great, but I would say six months at the latest would be my estimate. But it takes...
B
It goes through the typical public-comment type stuff, so.
J
Cool. I would shoot for three months and expect six; that's good. And then, what is the IETF definition of "immediate"?
B
That was badly worded: should we start new work immediately in the IETF? Are there work items where we think, based on the discussions, "hey, we can actually go work on that now," because there's actually a concrete item that people might want to consider working on? So, okay, one more question, if I may.
B
My goal is next week, but I have to work with the secretariat to actually get that done, and it's sort of up to them to do the editing and extraction and stuff. So, in terms of...
B
Yeah, we can work toward that, or at least discuss it on the mailing list. Vesna?
E
Hi, I'm a guest to the IETF and IAB; I come from the RIPE community, and thanks for allowing me to join. As a community builder, I would like to contribute the cross-connections between this workshop and the other communities where I'm active, and as a part of that, I'm inviting you to come to the RIPE meeting, which is virtual. It is going to be at the end of November, and we do have a working group for measurements; it's called Measurement, Analysis and Tools, and they are looking for contributions right now.
E
So if anybody wants to give a presentation there, you are very welcome. And I will publish kind of a report for our own audiences on the RIPE Labs platform that we have; it will not be as detailed as what you are looking for in the workshop report.
E
So that's what I want to contribute, and what I'm curious about is how others can contribute. So this is what I would like to hear at the end of this session. I need clarity on: is the list going to be public? Who can create an account on the wiki? Can anybody join Slack? Stuff like that. Thank you.
B
Good questions; I'm very glad you're here. When you do write up your report for RIPE, please do send it to the mailing list so that we can all see it. But in general, the IETF is a very open group; anybody can join efforts that get underway. We'll have to advertise how to join, as we may create a new list, as somebody mentioned earlier, where anybody can join. But it's an open group. Thanks. Next in the queue is Tommy.
AT
Yes, so just talking about the venue for doing work in the IETF: certainly we can have a list, but, speaking here as the IPPM chair, I think any immediate work we could do in the IETF would definitely fit within the charter of IPPM. And that group of people... it has a lot of the people here, it has the expertise, and bringing more of this energy into that working group would, I think, be quite welcome.
AT
So I guess I'd invite any concrete steps to happen there. If we have more researchy steps, those would also fit well in MAPRG. So we have places to do this, and it'd be good not to fork efforts; just bring it to IPPM and that's fine. And in that regard, I would love to contribute, at least in sharing, or figuring out how to shepherd that work into the IETF.
H
Thanks, Jeff. I'm turning on my video because this may be my last chance to speak, and I wanted to talk about the same topic as Tommy just now: IPPM. We seem to be sort of dominated by the protocols-for-measurement work, and I would really welcome a strong thread of the measurement-definitions work there again as well.
H
I think I had one slide where I tried to really encourage that, and it may mean that we might actually need to segment the work at various places, so that we can give each of the protocol work and the performance-metrics definition work adequate attention. And I say that possibly at the expense of my own protocol proposal, which is kind of languishing there because of all the other protocols that are actively adopted and controversial.
H
But I think that's a possibility too. I really don't want to split the working group, but I want to be sure we give time for both, and I'm obviously interested to review drafts that show up, and to work with people on adopting the templates that we have and making the work fit our framework.
AI
I'm never sure if I'm mangling your name or not, Jana, so I hope you'll excuse me.
U
So, thank you for actually asking that question. It's "Jana," and you weren't mangling my name, although about fifty percent of the IETF routinely does; I'm used to it, it's all right. The two quick things I'll note are: first, I think that it's useful to consider network quality as broader than just metrics.
U
I understand that the metrics are the way that we understand it and the way we measure it, so even in terms of short-term or long-term goals, I would encourage us to think about this particular distinction. And in terms of long-term goals, I would put the same onus on the IAB. From my point of view, the IAB, and an IAB program, is about as broad as it gets, and, as Bob said earlier, the engineering is in the IETF. However, this is not an IETF workshop.
U
It's an IAB workshop, so I'll call out that, if we're going to talk about further work, we should be careful to outline what is scoped in here. Most of us are network engineers, and we will try to drive it in particular ways: towards metrics, towards measurable outcomes, and towards things that we know, and believe strongly, firmly, with all of our burning hearts, to be problems. But that's not necessarily how I think an IAB program would be. Thank you.
AU
Thank you, Helen. So, I would love to see some of the people that are here continue some work on defining a metric that could be used in a consumer sense, as a way that we describe and sell and rate broadband connections. I mean, basically, today, broadband connections are largely sold by an imaginary, exaggerated version of downlink bandwidth, right? If there was something else that could come into play, that started connecting with latency, or the type of qualities that we've been talking about here...
AU
I think absolutely anything would be a vast improvement over what we have today. So I think working on that type of thing, focused on something consumers can relate to and then sell, would be valuable. Maybe that's RPM, maybe it's something else; I'm not saying what it is, but developing some simple metric that gets used for that use case would, I think, be very valuable. Thanks.
AI
Thank you. I'm at the bottom of the queue, Wes, so once more...
AH
Like, you know, we had these points one and two on the prior slide, and my attempt to improve on that was in terms of: what's the evidence, what are the current metrics, to basically highlight the prevalence and the importance of the bufferbloat problem, to, well, us, the community, but ultimately also those people that are implementing stuff and not doing it. Do we have anything like that? If not, then that would be a really good thing to try to get people working on. Gary?
O
Hi. I found it really interesting that we had such a broad range of people here, and it seems fairly clear that the IPPM people can do metrics and do measurements. But I'm not sure that this group of people, even in this workshop, would always be at an IPPM meeting. And when I go, I get a bit fazed by the wide variety of metrics and protocols which are being talked about.
O
So I suspect there are people in v6ops who should know about good metrics for testing networks, and I suspect there are people in the transport groups who should know to really use this tool as part of their benchmarking, or who would come to MAPRG to talk about this tool as well. So I think there's a role for the IAB to somehow figure out how to convince all the IETF people that this is important and that there are good tools coming, as well as developing the tools, which you've really done.
AI
Thanks, Gary. I've put myself in the queue here. I should note that I think we're all aware that a huge amount of work doesn't happen in the IETF, and a huge amount of work is orchestrated by other fora. Like, Nick McKeown's group at Stanford has done a series of buffer workshops which have been astonishingly interesting; there's work in protocols; there is all kinds of other work out there. And I think it would be silly if the IETF tried to sort of orchestrate all of this.
AI
So, you know, overreach doesn't really help us. But at the same time, trying to point out where there are architectural issues, and actually trying to understand them a little more, is, I think, of value, and that's the tension that's going on here: in searching for things to do, there's no end to things that we can do, but we should be searching for ways that we can complement existing work.
J
Sorry, here I come. So, going to the long-term goals: could you strike the word "industry" from "outreach"? Outreach is what you both discussed just now; absolutely, we need to be engaged with, right...
J
...with the other NOG organizations and the industry. And there is language that has crept into this debate that bothers me a lot: the word "consumer." It's the users of the internet, the netizens, that we're acting on behalf of, and I would just like to strike the word "consumer" from the conversation entirely, if possible. We use the internet; we do not just consume, we produce.
V
That doesn't mean there isn't further research; that doesn't mean... I think some statements from the IAB would be very useful: something, you know, that you can wave in front of people as part of the marketing.
S
I realized, because of the conversation on Slack and so on, that I forgot to bring up a conclusion. It's probably too late, but I just want to put it out there: we see applications using measurements of latencies and things at various different levels.
S
And we've heard that in the metrics. What's very clear to me is that we don't have a good answer for the APIs, even to the applications, today, so that the measurements that we're already doing as necessary components of congestion control and things like that can actually be reused. And that is setting up nasty feedback loops, or sub-optimal feedback loops.
AI
Sorry, that was just a comment. Let me just quickly: it was a comment that MAPRG was mentioned when there was concrete work to be done. And Wes, if you're still around near a keyboard: when you talk about further research, you might put, in brackets beside that, "MAPRG."
U
Thank you, Jeff. I want to note one thing that I've heard a few times, and I want to challenge us to think about it a little more carefully. I have found, in my experience, that being able to demonstrate convincingly to others that a problem exists is not simply a marketing issue.
U
It is an understanding issue. Or, put differently: being able to demonstrate convincingly that a problem exists helps us to understand much better how and where the problem exists. So, in terms of talking about pretty much anything, I would argue that being able to not just expose metrics, but also to demonstrate the problem to people who ask, in a way that makes sense to them, is super critical.
U
I'm going to leave it open as to demonstrate to whom, because ultimately that brings in the question of who you need to incentivize, who the decision makers are, and so on and so forth. I don't think it's a marketing problem; I think it's an understanding of how this ecosystem works.
AI
Thank you. So I think, Wes, it's over to you and the other workshop organizers.
B
I'll let... thank you, Jeff, and thank you for the past two hours, and thank you to all of the session chairs in particular that moderated; it's a lot of work to hold a queue for two hours and keep your sanity. But the IAB would like to thank all of you for coming.
B
I think this has been a very productive three days, and we really, really, really appreciate all the voices and all the work that went into it, both in writing papers and in the program committee, and to my co-chairs. I'll let Omar close us out, but please: we greatly appreciate all the thought and effort that went into this. It has been a fantastic conversation. Thank you.
A
Thank you. Yeah, it has been a great three days, and indeed it feels like we're doing something, but indeed it is really just the beginning.
A
I think that we should close this workshop with the understanding that we probably will see each other more in the future, focusing one step at a time. And sure, today we are talking about latency and responsiveness; it seems to be relevant. Hopefully, in ten years from now, five years from now, we will be saying that we don't need less latency, we only need "better latency," right?
A
Properties
of
the
network
will
get
richer
and
will
get
people
to
be
able
to
innovate
in
the
most
reckless
way
without
knowing
that
the
network
will
stand
there
for
them.
That's
all.
I
have
to
say,
actually
let
me
repeat
myself
thanks
a
lot
to
everybody
here,
thanks
a
lot
to
the
wonderful
members
of
dtpc,
thanks
a
lot
for
the
iab
for
giving
this
idea
a
chance
thanks
a
lot
for
my
colleagues
at
table
who
are
trying
to
move
the
needle
one
number
at
a
time,
and
I
think
that
this
is
it.