Description
Day 1 of the IAB's Measuring Network Quality for End-Users Workshop, 2021-09-14.
Day 2: https://youtu.be/8vjft84gqFA
Day 3: https://youtu.be/6q4-G9pnhfY
Workshop page: https://www.iab.org/activities/workshops/network-quality/
A
Here we are, set to go. I'm going to go ahead and get started while people dial in. So welcome, everybody, to the Measuring Network Quality for End-Users Workshop, an IAB workshop, virtual, 2021.
A
Thank
you
for
all
coming
for,
regardless
of
whatever
time
zone
you're
in
this
is
day
one.
You
probably
guessed
that
already
so
the
structure
of
the
workshop
I'll
go
over
really
briefly,
which
is
we
have
a
three-day
workshop
scheduled
over
the
next
three
days.
It's
four
hours
per
day.
We
have
roughly
hour-long
sort
of
sub-sessions,
so
the
way
that
the
chairs
in
the
program
committee
divided
things
up,
so
we
have
hour-long
sub-sessions
with
discussion
time
in
each
of
them.
A
So there's a small number of five-minute presentations, typically three to four. They're very, very rapid, and the whole point of them is to get your succinct points across, to seed the discussion times that happen later. There will be three minutes or so of clarification questions after each presentation, but please, no discussion during that time. That should really just be time for, you know:
A
I
didn't
understand
something
about
your
slides,
or
can
you
further
elaborate
on
something,
but
the
real
meat
will
be
in
the
discussions
which
we
have
set
for
30
to
40
minutes
after
the
presentation
sets,
because
we
have
a
huge
number
of
participants
we
are.
We
have
decided
to
try
and
limit
people's
just
comment
time
microphone
lines
to
60
seconds,
which
we
recognize
is
both
short
and
long.
Depending
on
how
much
you
like
to
talk,
but
we
encourage
you
know
all
workshop
participants
to
actively
participate
in
these
discussions.
A
You can see that the word "discussion" is in bold like four times on this slide. You probably get the point that that's what we're really shooting for, right? This should be an interactive, collaborative type of environment.
A
The
session
chairs
will
be
strictly
enforcing
all
the
times.
They.
We
have
a
number
of
session
chairs
that
we
picked
out
of
the
program
committee
that
are
volunteered
to
to
run
each
set
of
panels.
Basically
a
little
bit
of
administration.
This
workshop
is
being
recorded
supposedly
it's
automatic
up,
yep,
it's
recording
and
we
will
publish
the
results
to
youtube
sometime
after
the
event
is
over
working
with
the
iuts
secretariat.
A
to make that happen. The other thing that we've decided to do is that we will close the recording at the end of the workshop every day, but we will leave the room open for a few hours in case people want to stay around and maybe have some smaller-group conversations. And feel free to say during discussion time, "hey, let's continue this after the day is over."
A
If you want more of an informal discussion, people can hang around and maybe brainstorm and follow on. We'll try and do that every day; the room will stay open, of course, but the recording will be turned off.
A
I
want
to
thank
everybody
attending
today.
The
ib,
you
know,
really
appreciates
the
energy
that
has
been
put
into
this
workshop.
So
on
behalf
of
the
ib,
you
know,
thank
you
very
much,
especially
to
the
workshop
chairs,
jenny
and
homer.
A
They
put
an
immense
amount
of
work.
It's
very
difficult
to
put
together
such
a
rapid-based.
You
know
workshop
with
so
many
participants
and
we
couldn't
have
done
it
without
the
program
committee,
which
there's
a
large
number
of
people
that
helped
review
all
of
the
the
documents
put
them
into.
You
know,
keywords
and
groupings
so
that
we
could
hopefully
put
together
a
presentation
series
that
will
work
really
really
well,
but
even
more
importantly,
right.
A
All
of
you
that
have
contributed
documents
and
have
offered
to
you
know,
I'm
sorry
that
we
couldn't
have
every
document
author
be
a
present,
but
but
you
know
the
obviously
that
this
is
a
topic
which
is
highly
engaging
and
we're
really
looking
forward
to
this,
the
discussions
that
we
get
out
of
it.
So
thank
you
very
much
to
everybody
that
that
helped
and
for
those
that
are
showing
up
today
to
actively
participate
in
the
discussions.
A
The
agenda,
I
think,
is
on
the
website.
This
is
today's
agenda.
So
there's
a
lot
of
you
know.
Pack
time
we'll
spend
the
first
hour
we'll
go
into
our
first
presentations
for
presentations
as
well
as
a
half
an
hour
discussion
time.
A
There
will
be
a
keynote
in
in
an
hour
by
vince,
cerf
and
I'll,
introduce
him
at
that
time
and
then
we'll
have
a
couple
more
short
presentations
directly
there
after
there
will
be
some
time
to
ask
vin
any
clarifying
questions,
or
you
know,
discussions
based
on
him,
so
he'll
probably
give
about
a
20-minute
presentation
and
then
we'll
have
10
minutes
of
discussion.
You
know
around
his
particular
presentation,
we're
greatly
looking
forward
to
that,
and
then
we
will
have
a
short
break
in
the
middle
of
every
day.
A
And
then
we'll
have
a
10-minute
break
every
day
and
then
we'll
get
into
furthering
into
introductions
and
then
we'll
begin
sort
of
some
of
the
meat
of
our
our
discussions
later
in
the
day
which
is
sort
of
the
metrics
the
first
metric
session,
so
that
is
it
I
am
going
to
turn
it
over
now
to
keith
with
four
minutes
to
spare.
So
we're
we're
beyond
on
track,
which
is
fantastic.
A
Keith, are you here and available? You should... Hello, I'm here. Excellent, do you want to share your slides? Yeah, let me do that.
A
The chairs will be... the session chairs will be sharing all the slides, to decrease the amount of time spent switching between presenters with the short presentations. Okay.
C
Okay, great. Well, our first speaker will be the person whose work, literally, got me interested in this field. Without further ado, Stuart Cheshire: "The Internet is a Shared Network."
D
Thank you, Keith. Welcome to this workshop. Good morning, good afternoon, good evening, wherever you are in the world. Let's move to the first slide.
D
I've listed some items here, things that internet connections could do better. IPv6 support is one obvious example: reducing the need for NAT and giving devices addresses that make them reachable. And innovation in new protocols: Multipath TCP and QUIC give better mobility by supporting multiple parallel communication paths.
D
All of these are important, but my current focus is the next elephant in the room after we've provided abundant bandwidth. We should continue to work on yet more bandwidth, but in parallel we should ask the question: why don't we get consistently good responsiveness on our networks that are so fast? So, let's move on.
D
I want to take a moment to look at the choice of words here. As well as moving bigger objects through the network, we should try to move them through the network faster, and that means getting a request to a server and getting the response back quicker, so lower round-trip time. But I particularly care about end-to-end delays, not just the network ping time measured using ICMP echo request, because that's not what we spend our days doing on our computers when we use the network.
D
People often quote average round-trip time, but a good average by itself may not be enough. If, for five seconds out of a typical minute, the delay is briefly ten times worse, then that doesn't help; that would make for a very rough video-conference experience. So I care more about the 99th or the 99.9th percentile.
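Stuart's point about averages hiding brief spikes can be illustrated with a short sketch. The numbers are hypothetical, not from the talk: a minute of one-per-second RTT samples that sit at 20 ms for 55 seconds and jump to 200 ms for 5 seconds has a modest mean but a high 99th percentile.

```python
# Illustrative: mean RTT vs. high-percentile RTT (made-up numbers).
# 55 s of each minute at 20 ms, 5 s at 200 ms, one sample per second.
rtt_ms = [20.0] * 55 + [200.0] * 5

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

mean = sum(rtt_ms) / len(rtt_ms)
p99 = percentile(rtt_ms, 99)
print(f"mean = {mean:.0f} ms, p99 = {p99:.0f} ms")  # mean = 35 ms, p99 = 200 ms
```

The mean (35 ms) looks fine, while the 99th percentile (200 ms) exposes exactly the spikes that ruin a video call.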
D
I would love to see fine-grained congestion notification like L4S become successful, because that lets us keep queues even shorter. I have my reservations about flow queueing. Flow queueing assumes that we want to protect a well-behaved flow from a badly behaved flow, and there are cases where that may be useful.
D
Well, that's what I'm trying to do, and I'm trying to keep on time, because we have a lot of presentations to get through, and as the first one I want to set a good precedent for the rest of the workshop.
D
Bandwidth is a zero-sum game: if you take some away from one client to give to another, then one client loses so another client can win. But delay management is not a zero-sum game; we can give low delay for everything. And all current speed tests on the internet are not measuring this, and they're telling people something that doesn't matter anymore. It reminds me of teenage boys in the 1990s who would brag about whose computer had the most megahertz.
D
They even had little LED displays on the front showing the clock speed, and the turbo-boost button you could press to make it go faster. Nobody cares anymore how many megahertz your iPhone has. Nobody cares; they care about what it can do, not the specs. So let's move to the last slide.
C
I think we're really out of time; if you could sum up quickly. Thank you.
D
I'm sorry; on the timetable it said I had five minutes. (You're at about eight now, Stuart.)
D
Okay. Well, in my document I mentioned some other things which I won't talk about now, but in the spirit of SIGCOMM outrageous opinions: I have severe reservations about non-queue-building flows as a concept, and UDP is not a transport protocol.
E
All right, let's get started. So thank you for inviting me to give a talk on this. I am Jana Iyengar, and I'm going to start off with... next slide, please.
E
Talking about the workshop itself, broadly: when I saw this call for papers I was first very intrigued, and obviously excited, as all of you must have been. And I saw the questions that were described in the workshop call for papers, asking what are the fundamental properties of a network that contribute to good user experience, and so on; all worthy questions.
E
This is doomed to be eventually useless. And then I thought some more, and I've had this thought in my mind for a little while (next slide): there will never be any Maxwell's equations for internet quality. I've thought about this, and I wanted to say this out loud to many people: there are no fundamental laws of the internet, so to speak, and we'll never be able to find them; any program to look for them is doomed. And as I thought about this more and more... next slide, please.
E
I wanted to bring my inner Lewis Black out. For those of you who aren't familiar, Lewis Black is a comedian who is known for his angry rants; so, basically, your average IETFer at an IAB open mic. And I decided to write something that would express that. Next slide, please. And I started to write, saying that talking about metrics in the abstract is meaningless. I'm going to go through this in not so much a Lewis Black voice but an Indian Lewis Black voice, because that's the best I can do.
E
Application metrics are ultimately what matter; it's application metrics that ultimately drive deployment of anything in the network. Look at a whole bunch of examples: ECN, AQM, congestion control. All of these were demonstrably great in some abstract metric, but didn't move the needle for real applications. Next, please. And the best example of this is really congestion control.
E
The community went through many phases of trying to figure out how to evaluate congestion controllers: by trying to distill internet topologies, by trying to figure out what internet traffic looks like, then with just TCP and applications, and finally giving up and coming to terms, now, more recently, with actually thinking about what congestion control really does to real applications. Next, please.
E
So I'll propose shortcutting the whole thing and not going looking for abstract metrics, because you can't really do very much with them. I would propose that you look at a network as what a user can do and cannot do well on the network. And I warn you, you should be wary of abstract applications as well: video conferencing, broadly, is not Skype, which is not FaceTime, which is not Zoom, which is not WebEx.
E
These are all abstract, and the concrete is what matters. If you don't think of the concrete, the whole program is doomed. But then (next slide, please) I decided that this was not a good way to win friends, so I decided to actually buckle down and write something that was more constructive. Next. And this is what you have in my paper, which is basically three propositions.
E
The first proposition is that the internet exists in slices, one for every user. (That's the end of Lewis Black mode, by the way; I'm back to being myself, for those of you who know me.) And this is the fact that a user only sees the slice of the internet that they actually exercise, that they actually use. Next. And, to be relevant to a user...
E
So, with these three propositions (next), I'll point to just a couple of takeaways, and these are effectively my position here and the point I want to make: to be successful, any quality metric or measurement framework that we build or develop here must include, or show strong correlation to, real applications on real networks and the quality of those applications on real networks. Next. And applications change.
C
You left us speechless. Okay! Well, thank you again, Jana. Let's move on to our final speaker of this section: Yaakov Stein will tell us about the futility... "The Futility of Quality of Service." Yaakov.
F
Yes, hi everyone. Can you hear me? Yeah. I've spent (you can go on to the next slide) a lot of my career looking into QoS and OAM mechanisms, performance monitoring, and things of that sort. Can you go on to the next slide?
F
Yes, thank you. Basically, the reason that people use QoS is that it's related to QoE. Once again, I'm sure everyone here knows the difference, but by QoE I mean the subjective quality of experience, as the end user feels it. QoS is a bunch of parameters which are basically objective, that is, they're not subjective: they can be measured and defined.
F
Some background noise; I hope that doesn't bother too much. However, although easy to measure, these parameters are defined to be correlated with quality of experience. As a matter of fact, we have lots and lots of equations, and this ties in with what we just heard in the previous talk. We have lots of equations of the form: given the application, and perhaps details of the application, like what the codec is and whether it's video or audio or things of that sort, and a bunch of quality-of-experience...
F
Excuse me, quality-of-service parameters, such as delay and packet loss and things of that sort, you can actually predict what the quality of experience will be. And the reason that's important is that people obviously are willing to pay for quality of experience. People have gotten used to not paying for best-effort services: you get Skype and free email from Gmail, etc.
F
All
these
services
come
free
if
there's
no
guarantees,
but
if
you
have
the
best
quality
you're
willing
to
pay
a
lot
for
it
and
there's
some
kind
of
interpolation
in
between
those.
F
So
we'd
really
like
to
know
the
quality
of
experience
and
up
to
now,
we've
been
able
to
do
it
pretty
well,
either
in
a
subjective
way
by
putting
people,
you
know
taking
32
people
putting
giving
them
headphones
and
letting
give
scores
and
estimating
it
or
doing
the
objective
way,
which
correlates
well
with
the
quality
of
experience
and
that's
the
reason
people
have
been
doing
performance
measurement
and
oh
and
slas
and
qos
and
etc.
Next
slide,
please.
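The "equations of the form..." that Yaakov describes can be sketched with a toy, E-model-style VoIP predictor. This is a deliberate simplification with approximate coefficients, not the talk's material and not the full ITU-T G.107 E-model: objective QoS inputs (delay, loss) go in, a MOS-like subjective-quality estimate comes out.

```python
def predicted_mos(one_way_delay_ms: float, packet_loss_pct: float) -> float:
    """Toy E-model-style QoE predictor. Coefficients are illustrative
    approximations, not the real ITU-T G.107 formula."""
    r = 93.2                           # baseline transmission rating factor
    r -= 0.024 * one_way_delay_ms      # delay impairment (linear term)
    if one_way_delay_ms > 177.3:       # extra penalty past interactivity knee
        r -= 0.11 * (one_way_delay_ms - 177.3)
    r -= 2.5 * packet_loss_pct         # codec-dependent loss impairment (toy)
    r = max(0.0, min(100.0, r))
    # Map the R factor onto a 1..4.5 MOS-like scale (cubic mapping shape).
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

good = predicted_mos(30, 0.0)    # short path, no loss: high score
bad = predicted_mos(300, 2.0)    # long path, 2% loss: noticeably lower
print(f"good path: {good:.2f}, bad path: {bad:.2f}")
```

Yaakov's argument is precisely that in-network computation (CDNs, proxies, transcoding) breaks the premise that any such formula can be written per application.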
F
However, the problem is that recently the kinds of services that we give, the communication services, have fundamentally changed. Once upon a time the services that we gave were pure transport services. Basically, we said: from point A to point B, move these bits, with at least this data rate and no more than this delay, and that was it. But nowadays we have mixed up communications and computation.
F
We have a lot of non-trivial computation happening along the path of the packets that go through the network. It could be simple things like firewalling (simply letting the packet go through or not), more complicated things like compressing the data, transcoding video, doing TCP proxying in the middle, reassembling and re-segmenting the flows, et cetera, et cetera, and really complicated things like CDNs now.
F
Then, basically, all these quality-of-service parameters that we have so much information about, and so much knowledge about how to measure, become basically meaningless. Next slide, please. And the proof that this is the case, that is, that you cannot in general find a formula for a given application that, given packet loss rate and delay and jitter and all these things, gives you the quality of experience, is based on a sequence of thought experiments.
F
I
simply
point
you
to
the
paper
which
I
expected,
and
this
will
show
you
that
you
can't
actually
make
it
all
work,
and
just
the
last
thing
I
want
to
say
is
that
nfv
that
is
making
virtual
functionalities
makes
everything
much
worse,
because
if
the
functionalities,
the
non-trivial
computation,
was
in
one
particular
place-
and
I
knew
what
it
was,
I
could
probably
work
around
it
and
measure
before
and
after
and
but
but
if
I
can
bring
a
functionality
and
put
it
somewhere
and
move
it
from
place
to
place,
then
you
have
to
be
aware
that
anytime,
at
any
time,
the
quality
of
service
can
stop
predicting
what
the
quality
of
experience
means,
what
the
quality
experience
is,
which
means
that
the
end
user,
even
though
you
think
you're
giving
a
good
service,
might
be
unhappy
and
it
could
be
around
the
other
way
around.
C
Okay. Well, hearing none, I guess why don't we move on to our half hour of discussion. Everyone gets 60 seconds, and the first discussant is Bob Briscoe.
H
Hi there. Those were pretty dismal talks; they sort of basically said you can't do anything, which doesn't really give us much hope. So I don't know whether it was just trying to be clever or something. But I don't think, Yaakov, that just because QoE isn't a pure function of QoS means you can't do anything. And similarly, Jana:
H
I don't think that just because there isn't a correlation, or there isn't a strong correlation, between what the application quality is and what the network quality is, it means that if you, say, reduce the delay of network packets by 70 percent or something like that, you won't see any effect. I think maybe you're both trying to look for some precision that isn't there, but missing the propensity that's there.
I
Yeah, I'd like to respectfully disagree, Bob. I think the problem here is that the talks are objecting to trying to boil the ocean, by dividing the problem too broadly. And I think what you should take out of them is that actually what you want to do is have multiple metrics with different parameters,
I
each focusing on certain aspects of application and network behavior. That's much more likely to be solvable, and solving it that way, you don't connect the failure of one metric to the success or failure of another metric. So responsiveness to me means something very specific, which is different than jitter, which is different than throughput.
A
All right, can I interject really quickly for administrative purposes? A couple of things I should have mentioned in my earlier slides, and I apologize for not doing that. We are using the chat for queuing: if you put +q in the chat, the moderator for the session will call on you. The general thinking is that we really want to have more interactive discussion, so it won't be much like a panel you've seen in the past, where the speakers are really, you know, answering questions.
A
So the speakers can raise their hands, and we might call on them sooner, but in general it's encouraged that the speakers actually participate as part of the conversation, rather than do this back-and-forth Q&A.
E
So I wanted to step in there very quickly, because my name was raised a couple of times. To be precise: I don't think this is trying to boil the ocean. The idea is that, ultimately, an abstract metric has to be tied to something concrete and real if it is to mean anything at all.
E
If you have an abstract metric that doesn't correlate to anything... you know, if you say that this is supposed to do well for video conferencing, but you can't demonstrate that it does well for either Zoom or WebEx or Skype or what have you, then there's something wrong, right? Ultimately, the point of this is to show that the artifacts and the usage of the internet get better, not some abstract notion of the internet.
F
Yes. Once again, Jana is saying something very similar to me, but coming from a different angle. Just getting back to what Bob was asking: my problem is not the precision. I'm not claiming that, because of these computations that are now happening in the network, I'm off by five percent; it simply becomes completely meaningless. Think of the case of a CDN, for instance, where the end user is really getting bad quality.
F
So you're thinking that this is a normal path, with a lot of bandwidth somewhere where nothing is traveling, because it's all popping up in the middle of the network and getting to the end user, right? So you're wasting a lot of resources and doing absolutely nothing. I can give you a lot of examples like this; there are several counter-intuitive ones, where you reduce the delay and the quality of experience gets worse. And once again, look at my paper; you'll see all these really, really bad things.
F
The culmination is that a link break can sometimes actually improve things, right? So it's not that we're off by a few percent; it's simply that a lot of what we've been doing all these years has become meaningless.
D
Going back to the first comment from Bob: if I sounded pessimistic, then I apologize; that was not at all what I wanted to say. The reason I got involved with organizing this workshop is that I'm very optimistic that we can improve things in a big way. To do that, we need to focus attention in the right place, and my concern right now is that every end user who goes to an internet speed test measures their throughput and, if lucky, measures their idle ping time.
A
We wouldn't need a three-day workshop if this was an easy problem. And I think the important points that I took away from the talks are the understanding that generic metrics are hard, if not impossible, and that application-specific metrics are potentially necessary. One of the things that I think is interesting about that is that the quality is likely different at all layers of the stack, right? How do you measure, you know, a transport-layer...
A
You know, quality versus application-layer quality? That's a difficult thing to do. How do you measure the fact that the quality of security authentication is really hard to get into an application that is otherwise good, right? Those are two totally different sets of qualities. So I look forward to this sort of helping set the stage for the workshop. Thanks.
H
I just wanted to come back and apologize to Stuart; I wasn't actually including your talk in that. And also to apologize to Yaakov and Jana: it was more, maybe, the overstatement of the claims. You know that, yes, of course, things have to be linked to real applications and stuff like that, but it doesn't mean... say, for instance, you invent a new way of taking a round trip out of a TLS negotiation, or something like that.
H
Whether or not you link that to certain applications of the day doesn't matter; it will improve things, you know.
F
Okay, a real quick one. I wasn't trying to be negative or pessimistic. I was simply saying we're doing things wrong, especially people at, say, the Carrier Ethernet level, where they have SLAs which specifically say what the packet loss rate has to be, measured in 15-minute intervals.
F
However, there are some really, really good things about all this. Number one, the kinds of measurements I was talking about for the estimations can be done not only on QoS parameters but on end-user data nowadays. So you could actually put something on the end-user application which starts sensing that something is wrong and asks the end user, like when you're driving with Waze and it asks you: is there an accident in front of you? Are you in a traffic jam? And you say yes, and that information being reported is actually very useful.
M
Oh, I wanted to share one observation regarding Jana's and Yaakov's statements, and this is that in the last 10 years the rate of change in the internet has accelerated: from one router every three years to a new virtual function, probably at the extreme, every five minutes. And today, based on my experience building a Facebook CDN, it is possible to shift half of the traffic of a continent
M
within 15 minutes, and the effects it will have on the internet are immense. And yet having just one metric may be confusing and misleading. This is my comment. Thank you.
N
Okay. I hunger for a day when vendors, router manufacturers, and service providers compete on the basis of responsiveness in the same way they compete on speed: "up to X megabits, and it's responsive too." A lot of good thought is going into all these papers. The end-to-end and longitudinal measurement systems are important, but we need to make sure those measurements aren't skewed by badly behaved CPE. And I think I'm jumping on what Jim Gettys just said.
N
If the CPE is queueing hundreds of milliseconds of data, no amount of optimization is going to make things better.
N
In my little town of 1700 people, I turned a lot of people on to commercial routers that use fq_codel to make their seven-megabit DSL responsive, and they were all delighted, but it doesn't scale. Rich Brown going to your house, or telling you on the local list-serv, doesn't work. I can't Appleseed the world.
I
Missed my mute button. I think one of the things...
E
Thanks, Keith. So I want to quickly respond to an earlier comment by Bob, and perhaps others as well after that, to say that I don't think it's pessimistic... well, the way I came off might have sounded pessimistic, but I think I was trying to bring caution, perhaps, against getting carried away with a very academic exercise which might not ultimately (to Jim Gettys' point) drive adoption.
E
I still think that it's important for us to consider how, whatever it is that we do here, whether we're chasing a metric or a framework or whatever it is that we're chasing, we incorporate and bring real applications into the picture.
D
Coming back to Bob again: I'm going to push back on what you said, that saving the round-trip time on TLS is bound to improve things and we don't have to demonstrate it. The way my group works at Apple, in software, we're very driven by demos, and if we go to the VP with slides showing statistics, his answer is always "show me the demo." And if we have some technology, whether it's to do with the hard disk or whatever, and we can't demonstrate a user-visible benefit, then maybe our assumptions are wrong. So I do think, and you're probably right, that shaving a TLS round trip does help, but you need to demonstrate that, to show that we're not making faulty assumptions. And I want to make one plea to everybody: let's stop saying "speed" when we mean capacity. An oil tanker carries lots of cargo, but it's not faster than a speedboat.
O
I want to comment on the earlier comments that it's really the quality of experience that counts at the end of the day, not just the quality of service. While I agree with that, I think at the same time we need to be careful not to conflate what the application does and what the network does. One of the main issues today is also often the finger-pointing issue, right? So, for instance, you experience poor application performance (and okay, maybe it's a bad example):
O
Skype is not the same as some other application, but as an end user you often don't know what the issue is. Is it basically the network? Is it the application? Is it something else in the stack? So I think, as much as we would like to have just one metric, one thing that captures the entire user experience...
O
I am actually pessimistic that we can do that, and I do think that layering and structuring and segmenting, basically separating the concerns, really applies here and is going to be important if we want to make progress. Even if, to the user, it's... yeah, they want to talk about the application; but for the network and so forth, as network engineers, we do need to be a little bit more differentiated, I guess.
P
I'd just come back to this issue of the metric. I think one of the ways of viewing this that we found useful is to talk about application outcomes, to look at things like, you know, for video on demand, the probability that you will get a buffering event in an hour of watching.
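An application-outcome metric of this kind is easy to state precisely. A hypothetical sketch (the session log below is invented, not data from the workshop): estimate stall events per viewing hour from per-session watch time and rebuffer counts.

```python
# Hypothetical session log: (seconds watched, number of stall events).
sessions = [(3600, 0), (1800, 1), (7200, 0), (5400, 2), (900, 0)]

hours = sum(seconds for seconds, _ in sessions) / 3600
stalls = sum(count for _, count in sessions)
stalls_per_hour = stalls / hours
print(f"{stalls_per_hour:.2f} stall events per viewing hour")  # 0.57
```

The point of an outcome metric is exactly this directness: it is defined in terms of what the viewer experienced, not in terms of network-layer parameters.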
Q
I wanted to echo Jana's point about real-world applications. I think it's very apropos; it's hard to motivate any work without that.
C
All right. Wonderful. Wes Hardaker.
A
So a number of interesting things have cropped up, right? It is the user experience, as Neil just said, that matters, and one of the challenges of this work is going to be: how do you measure the subjectiveness of that user experience? And the flip side of that is: what experiences don't matter? If we're going to measure the things that do, we should also be able to figure out how to exclude the things that don't. For example, in JavaScript:
A
We put JavaScript scripts at the bottom of the HTML file to make sure they load last, because they don't matter as much. DNS resolution, the speed to resolvers, matters a whole lot; it directly affects the users. But responsiveness to the root servers has been demonstrated not to matter as much, though people seem to still care anyway. SMTP was designed not to matter: as long as the mail gets there in a reasonable period of time, we don't need it to be that much faster.
J
All right: Toke Høiland-Jørgensen.
R
But if your delay goes up by 200 milliseconds as soon as someone starts a download, that's not going to work for your video conferencing, but when you go and measure it, it's fine. This was one of the original things that Jim discovered when he coined the whole bufferbloat thing, right: you have this issue that goes away as soon as you start...
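Toke's 200 ms figure is just queueing delay at a bottleneck, which an idle test never sees. A toy sketch (the link speed and buffer size are made-up parameters, not from the talk): the latency of a ping through an empty queue versus one stuck behind a bulk download's standing queue.

```python
def queue_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    """Time for a packet arriving behind `buffer_bytes` of queued data
    to drain through the bottleneck link."""
    return buffer_bytes * 8 / (link_mbps * 1e6) * 1000

BASE_RTT_MS = 20.0  # propagation delay, seen by an idle ping test

idle = BASE_RTT_MS + queue_delay_ms(0, 10)          # nothing queued
loaded = BASE_RTT_MS + queue_delay_ms(256_000, 10)  # 256 KB standing queue
print(f"idle: {idle:.0f} ms, under load: {loaded:.0f} ms")
# idle: 20 ms, under load: 225 ms
```

The same link reports 20 ms when idle and over 200 ms while a download keeps a 256 KB buffer full, which is exactly why latency has to be measured under working conditions.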
H
All right, I don't want to go on about this; I mean, I genuinely agree with you and Jana and Yaakov. I just wanted to say that, yes, you need to link things to application experience.
H
You know, in my own work I've always done that. I'm just saying I don't want to make this into a religion, that you have to demonstrate everything, even though I would myself. You know, there are cases where things are obvious, no?
E
Nice one there. I'll respond very quickly to Bob to say: I don't think it's a barrier that I'm presenting, or we are presenting; it's a barrier that naturally presents itself. If you're not able to demonstrate, as Stuart was pointing out, it's difficult to get things accepted and deployed. So I want to come back to something that Wes asked about.
E
How do you find what the right metrics are? That's not quite what you asked, but I want to say that that's actually exactly why I want to bring in concrete applications. If you ask anybody who's working on any of these products, so to speak, that we might think of using on the internet, they all first come up with metrics and then they build the product, oftentimes at least, because they want to keep improving the product, and they are looking at particular metrics that drive usage.
E
If
you
look
at
youtube,
for
example,
or
netflix
or
zoom,
they
all
have
metrics
whereby
x
has
its
own
metrics
for
exactly
what
it
does
for
how
to
measure
improvement.
So
to
speak
and
they
might
measure
improvement
in
particular
ways
and
those
are
important
and
useful.
It's
certainly
valuable
to
generally
say
that
video
conferencing
does
better
with
responsiveness.
I
think
I
would
agree
with
that
and
similarly
with
your
streaming,
if
you
want
to
have
fewer
rebuffers,
then
you
want
to
have
a
smaller
buffer.
E
S
I
was
wondering
how
this
approach
that
is,
application
centric,
would
scale,
because,
when
you're
building
infrastructure,
let's
say
you're,
building
a
transport
you're
dealing
with
a
lot
of
different
applications,
so
focus
on
one
application.
Might
you
know
you
might
improve
on
metric,
for
one
application
might
end
up
harming
another
application?
So
generally,
you
know
how,
as
you
know,
infrastructure
builders.
How
do
we
deal
with
the
diversity
of
applications?
I
think
that
becomes
a
challenge
with
the
application-centric
approach.
L
This
again
I'll
keep
banging
on
the
the
economic
drum
right
now.
Basically,
people
are
either
given
insufficient
or
bad
information
without
any
knowledge
of
where
problems
are
occurring.
So
they
can't
do
anything.
They
can't
vote
with
their
dollars.
L
Toki
went
off
and
built
a
tool,
but
that
economic
pressure
has
to
happen
and
it
needs
to
be
at
least
approximately
integrated
into
into
applications
whenever
possible.
So
that
you
can
say,
I
don't
think
my
app
is
is
the
problem
here,
but
it
smells
like
it's
someplace
else.
Please
go.
Do
some
some
tests
to
try
to
point
some
fingers.
T
Yes,
on
on
the
question
of
the
network
quality
for
certain
applications,
I
think
it's
important
to
keep
in
mind
that
well,
people
use
many
different
applications
at
the
same
time
or
different
people
use
different
applications.
At
the
same
time,
on
the
same
network
and
as
stuart
mentioned
in
this
part
of
the
target
that
the
internet
is
shared
networking,
we
want
the
network
to
be
good
for
all
applications,
even
if
they
run
concurrently.
T
There's
no
reason
why
a
software
update
can't
coexist
with
a
webex
call,
it
should
be,
should
be
working
fine
and
I
I
think
it's
more
about.
We
need
to
identify
classes
of
use
cases,
let's say there are the power users,
and
then
there
are
the
people
that
work
from
home,
and
then
there
are
people
that
just
use
mail
and
web
and
identify
metrics
or
groups
of
metrics
for
these
kind
of
classes.
U
Yes,
I
just
wanted
to
add
that
I
think
if
we
it's
very
important,
those
that
we
think
about
who
the
metric
is
for,
so
I
really
agree
with
janna
in
the
end
for
the
end
user,
of
course,
it's
the
application
performance
that
that
matters.
I
think
what
bob
and
praveen
maybe
was
touching
on
is
that
if
you
think
about
the
whole
development
of
solutions
and
applications,
maybe
not
during
that
whole
development,
that's
the
only
metric
we
would
have.
U
V
So
when
jacob
mentioned,
the
vms
make
life
harder,
I was reminded of the first ibm computers, which supposedly inserted delay so that the interactive experience was very consistent
and
people
wouldn't
understand
that
you
know
they
could
get
at
low
usage
time
get
much
faster
response
times
than
over
high
load.
Nowadays,
we
have
you,
know
the
majority
of
internet
traffic
being
streaming, which has a playout buffer of up to 30 seconds.
V
To
give
you
an
average
bit
rate
experience
under
that
you
know
capacity
seeking
performance
of
the
internet,
so
maybe
at
some
point
in
time
we
wake
up
and
see
that
the
best
effort
service
for
the
quote
internet
isn't
the
only
thing
that
we
should
be
focusing
on.
You
know
in
the
majority
of
private
networks
that
are
doing
you
know,
mission
critical
services
in
the
network.
We
have
a
lot
more
of
the
services
for
a
network
transport
that
we've
defined
in
the
ietf
than
in
the
internet.
W
W
The
network
actually
relents
if
a
flow
exerts
pressure
on
the
network
and,
interestingly,
you
know,
like
many
fluid
dynamic
problems,
the
higher
the
pressure
you
exert
on
the
system,
the
greater
the
resistance
to
actually
giving
more
resources
to
that
particular
flow.
And
so,
if
you
think
about
this,
the
outcome
is
all
about
dynamic
equilibration
and
it's
not
a
static
resource
allocation
and
a
lot
of
these
measurement
arguments
are
just
trying
to
measure
static
resources
that
won't
work.
W
A
All
right:
well,
that's
the
end
of
the
queue
wes.
Let
me
turn
it
over
to
you
all
right.
Thanks
very
much
I
see
vint is actually logged in, which is fantastic. Vint,
if
you're
here,
please
go
ahead
and
start
sharing
your
screen
in
in
the
meantime
I'll
give
a
real
quick
recap.
I
think
the
past
you
know,
half
an
hour
has
been
a
very
interesting
discussion.
It
really
has
shown
the
breadth
of
the
problem
that
we
have
it.
It
really
has,
I
think,
shown
how
much
you
know.
A
We
have
to
think
about
all
layers
of
the
stack,
as
I
mentioned
earlier,
as
well
as
thinking
about.
Does
the
network
matter,
if
you
don't
have
enough
compute
cycles
to
actually
produce
the
non-static
content
to
draw
directly
on
jeff's
comment:
vint
are
you
here
yet
yeah?
Can
you
not
hear
me
and
see?
I
can
hear
you
now:
do
you
have
slides
today
or
no?
No,
I
don't
you
got
it.
Okay,.
A
That's
just
fine,
so
it's
my
pleasure
to
introduce
vint
cerf
to
you
all,
I'm
sure.
Most
of
you
know
him
he's.
His formal title at
google
is
vice
president
and
chief
internet
evangelist
and
he's
widely
known.
As
you
know,
one
of
the
fathers
of
the
internet.
You
know,
interestingly
enough,
he
is
the
author
of
rfc-0013,
which
is
probably
one
of
the
shortest
rfcs
you'll
ever
read.
So
I
suggest.
X
Well,
thank
you
very
much
for
that.
I'm
sorry,
I
don't
have
slides.
You
know.
The
chief
internet
evangelist
goes
around
doing
sermons,
and
so
this
is
my
sermon,
which
I
do
not
think
is
going
to
take
an
hour
on.
My
calendar
says:
there's
this
hour
slot
here.
Well,
it's
a
half
an
hour
now,
but
go
ahead.
Well,
in
any
event,
even
a
half
an
hour
might
turn
out
not
to
be
necessary.
X
X
So,
first
of
all,
I
I
just
want
to
tell
you
how
important
this
session
is
not
mine,
but
the
the
entire
arrangement,
I'm
so
happy
to
see
the
iab
convening
this,
and
I
think
it's
indicative
of
the
recognized
importance
of
the
question
of
performance
that
so
many
people
have
gone
to
the
trouble
of
actually
preparing
material
ahead
of
time,
yeah
evaluating
it
and
and
sending
print
ready
copy,
so
that
the
rest
of
the
community
will
have
an
opportunity
to
see
what
you've
had
to
say.
X
So
I
really
think
this
is
an
important
event
in
in
our
story.
Second,
just
a
tiny
net:
this
is
not
a
complaint.
It's
actually
a
thank
you
that
we're
paying
attention
to
bufferbloat; dave täht
and
others
have
been
beating
the
drum
and
maybe
their
heads
against
this
problem
for
a
long
time.
I
think
it's
demonstrably
quite
real.
X
The
question,
of
course,
is
what
to
do
about
it
and
how
to
go
about
doing
that
and
the
most
recent
episodes
have
been
looking
at
starlink,
which
of
course,
is
one
of
the
newest
large-scale
infrastructure
builds
underway,
and
I'm
hoping
that
the
starlink
team
will
recognize
that
this
is
a
performance
issue
that
deserves
attention
and
can
be dealt with through thoughtful
implementation
and
test.
X
The
third
thing
I
wanted
to
mention
is
that
it's
very
timely
for,
for
all
of
you
to
be
looking
not
only
at
the
potential
to
adjust
the
tcp
protocol
to
do
a
better
job
of
performing
across
the
network,
but
I
think
we
also
should
feel
free
and
I
believe
the
submitted
papers
demonstrate
the
exploration
of
other
protocols
at
that
layer
are
deserving
of attention,
and
that's
quick,
for
example,
which
is
an
alternative
to
the
tcp
tls
combination.
X
There
are
other
real-time
protocols
as
well,
but
I
want
to
emphasize
the
importance
here
of
feeling
free
to
consider
alternatives
to
the
original
core
protocols.
X
Since
many
years
have
have
evolved
and
a
lot
has
happened
to
tcp
implementations,
but
there
are
other
things
that
people
are
trying
to
do
that
might
work
better
with
alternative
implementations,
and
you
know
some
people
might
imagine
that
I
would
cling.
You
know
like
a
teddy
bear
to
the
original
tcp
work,
I'm
proud
of
what
we
did.
X
The
fourth
thing
I
wanted
to
mention
may
not
be
a
primary
topic
of
this
symposium,
but
I
still
would
like
to
see
ipv6
implemented
across
the
internet,
and
I
feel
like
a
little
bit
like
beating
the
drum
since
1996,
hoping
that
that
we
will
get
there.
I
know
it
has
flaws
and
I
wish
that
we
had
thought
more
carefully
about
the
interoperability
with
ipv4.
X
Although
I
don't
want
to
sound
like an apologist
here,
I
will
admit
to
you
that
in
1996
I
thought
that
it
would
be
so
obvious
that the advantage
of
this
relatively
simple
variation
on
ipv4
that
people
just
go,
implement
it
and
get
it
over
with
so
we'd
have
some
more
address
space
that
didn't
happen,
and
you
may
have
different
explanations
for
this,
but the one which I've come to think might be at least relevant
X
Is
that
1996
was
about
the
point
where
the
dot-com boom
was
happening
because
netscape
communications,
which
was
formed
in
94,
that
went
public
in
1995
and
stock,
went
through
the
roof,
and
everybody
was
throwing
money
at
anything
that
looked
like
it
had
something
to
do
with
the
internet,
and
the
result
must
have
been
hey.
We
got
lots
of
address
space.
X
What's
the
problem,
let's
go
build
some
something
and
make
money,
and
so
we
did
not
catch
the
moment
to
get
v6
done
until,
of
course,
the
rapid
expansion
of
the
internet
made
it
harder
to
get
there,
especially
considering
we
didn't
really
run
out
in
a
severe
way
until
2011,
which
was
at
that
point
in
some
15
years
in
the
future,
at
least
that's
my
my
little
apologia
for
for
not
getting
ipv6,
but
I
still
think
we
should
get
there.
X
The
the
fifth
point
I
wanted
to
make
has
to
do
with
the
fact
that
we
may
be
operating
in
a
space
with
dramatically
different
parameters
than
we
have
had
before,
and
that
parametric
variation
justifies,
in
my
opinion,
re-examination
and
scrutiny,
not
just
of
transport
protocols,
but
possibly
a
lot
of
the
others
that
that
make
up
the
internet,
and
so
the
idea
of
looking
at
performance
in
many
different
dimensions,
whether
it's
latency,
jitter,
absolute
bandwidth
or
other
you
know,
reachability
and
a
variety
of
other
metrics
is
timely.
X
X
A
good
example
of
that,
of
course,
is
the
bundle
protocol
work
which
is
going
on
with
the
interplanetary
internet
effort,
and
there
you
could
see
very
quickly
that
ping
is
not
your
friend.
You
don't
have
a
very
good
estimate
for
how
long
it
will
be
before
a
response
to
an
echo
packet
would
show
up,
especially
if
it's
a
multi-hop
and
disconnected
frequently
disconnected
system.
There
are
a
whole
lot
of
things.
X
Network
management
doesn't
work
very
well,
dns
is
questionable,
but
if
the
round
trip
times
are
40
minutes
to
four
hours
and
there's
a
possibility
that
the
ip
address
you
got
back
from
the
dns
turns
out
to
be
wrong
by
the
time
you
get
it
because
somebody
moved
to
a
different
network
and
got
a
different
ip
address.
X
These
kinds
of
parametric
changes
could
also
happen.
Terrestrially,
not
quite
in
the
same
way.
I
don't
expect
the
speed
of
light
to
be
altered
in
such
a
way
that
it
takes
four
hours
to
get
from
here
to
a
geosynchronous
satellite
back,
but
but
there
are
parametric
variations
in
even
a
terrestrial
environment.
Breakage
and
lack
of
connectivity
would
be
an
obvious
one,
especially
as
we
rely
increasingly
on
radio
based
protocols.
B
X
This
sixth
thing
that
I
wanted
to
suggest
to
you,
in
which
I
suspect
this
conference
may
generate,
is
well, I'll call this
a
desirable
properties
exercise
as
we
look
at
the
various
parametric
spaces
in
which
we
are
expecting
our
protocols
to
work.
X
Performance
among
the
desirable
properties
might
very
well
be
low
latency,
for
example,
and
you
you
will
all
recall
that
when
the
ip
was
split
from
the
original
tcp
design,
it
was
for
purposes
of
dealing
with
real-time
communication
that
did
not
require
100% delivery of all the data. That's the one-liner description of it.
X
B
X
X
The
the
painful
ipv4
ipv6
situation
might
conceivably
have
been
done
better,
particularly
if
there
had
been
a
way
to
arrange
for
a
backward
compatibility
mode
for
ipv6
and
ipv4,
but
I
could
never
quite
figure
out
how
to
get
the
ipv4
32-bit
address
space
to
contain
128
bits
of
address
in
order
to
refer
to
a
v6
target
that
didn't
exist
at
the
time
that
the
ipv4
protocols
were
developed
and
and
the
nomenclature
designed.
X
I
if
I
were
able
to
talk
to
my
younger
self,
I
think
I
would
have
said
something
around
1992
about
how
deployment
was
really
important
and
backward
compatibility
ought
to
be
given
deeper
consideration
at
this
point.
It's
it's
a
bit
late
for
that
for
v4
and
v6,
but
it
is
not
necessarily
late
for
thinking
about
variations
on
existing
protocols
and
to
make
them
to
adapt
them
to
new
parametric
spaces.
X
But
we
should
keep
in
mind
that
making
them
backward
compatible
is
extremely
valuable
for
for
introduction
into
use,
and
so,
if
I
had
a
list
of
desirable
properties,
that
would
be
certainly
one
of
them.
For
sure,
and
the
last
point
I
wanted
to
make-
and
I
see
that
I've
only
used
about
11
minutes
so
far,
which
is
I
don't
need
to
use
more
than
you
can
stand
and
I
have
to
offer,
but
the
last
one
I
wanted
to
mention
has
to
do
with
performance
across
multiple
hops
in
the
internet.
X
X
I
mean
it's
certainly
true,
that
that
pairwise
interactions
have
sometimes
led
to
special
agreements
by
you
know
handshake
between
two
network
operators
who
want
to
offer
something
to
their
respective
customers,
that's
other
than
purely
best
efforts,
but
getting
that
to
work
over
three
networks
that
are,
you
know
with
one
in
between
it's
turned
out
to
be
much
harder.
X
So
it's
an
important
question
as
you're
thinking
about
alternative
implementations
and
designs
to
ask
the
question:
can
we
get
this
to
work
across
the
multi-hop
environment?
X
And
the
same
argument
could
be
made,
I
suppose,
for
amazon
and
others.
I
haven't
done
the
appropriate
homework
to
look
at
the
bgp
tables.
The
forwarding
tables
to
confirm
that
view.
But
perhaps
some
of
you
have
and
can
say
something
about
the
shifting
architecture
of
the
internet,
where
an
increasing
number
of
large
players
have
substantial
networking
capability
and
heavy.
X
With
the
rest
of
the
the
public
internet,
so
as
we
think
about
improvements
in
design
and
parametric
adaptation,
it
seems
to
me
we
should
be
asking
ourselves.
What
does
the
multi-hop
situation
look
like
in
this
environment?
I'm?
X
X
X
So,
assuming
that
I
am
not
incorrect,
assuming
I'm
right
about
that,
there
might
be
some
utility
in
thinking
more
about
taking
advantage
of
broadcast
physical
broadcast
media
where
we
broadcast
and
hope
that
mostly
everybody
got
it
and
the
ones
who
didn't
there's
a
recovery
mechanism
to
get
the
stuff
they missed.
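The "broadcast and repair" idea just mentioned can be sketched as a toy simulation (receiver names, loss patterns, and the NACK mechanism here are all invented for illustration; real designs such as reliable-multicast protocols are far more involved):

```python
# Toy sketch of "broadcast plus repair": send every packet once to all
# receivers, then let each receiver ask (NACK) for only what it missed.
# Loss is hard-coded per receiver so the example stays deterministic.

PACKETS = list(range(10))                             # sequence numbers 0..9
lost = {"alice": {3}, "bob": {3, 7}, "carol": set()}  # who missed what

# Phase 1: one broadcast; each receiver keeps what wasn't lost.
received = {r: [p for p in PACKETS if p not in lost[r]] for r in lost}

# Phase 2: each receiver NACKs its gaps; sender retransmits just those.
repairs = {r: sorted(set(PACKETS) - set(received[r])) for r in received}
for r, missing in repairs.items():
    received[r].extend(missing)       # retransmission fills the gaps

total_sent = len(PACKETS) + sum(len(m) for m in repairs.values())
print(f"transmissions: {total_sent} vs {len(PACKETS) * len(lost)} pure unicast")
```

Even in this tiny example the broadcast-plus-repair scheme sends far fewer transmissions than unicasting the same stream to every receiver, which is the efficiency argument being made for broadcast physical media.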
X
X
So
I
think
I
should
stop
there
unless
you
want
me
to
literally
rattle
on
you
know
at
this
point
it
would
be
stream
of
consciousness
or
unconsciousness,
and
I'm
hesitant
to
impose
that
on
you.
What
I
would
like,
though,
is
to
know
how
far
off
the
mark,
I
am
with
any
of
these
notions
and
whether
you
would
add
to
them
some
concrete
foci
that
that
I
haven't
mentioned,
but
that
you
think
should
be
important.
X
So
if
it's
all
right,
mr
chairman,
I'd
stop
there
and
invite
reactions
from
people
who
are
still
awake.
A
That
makes
sense
keith.
Can
I
ask
you
to
run
the
queue
again,
although
I
did
put
myself
in
the
front
so
I'll
go
ahead
and
start.
So.
Thank
you
very
much
for
for
talking
to
us
today.
Vint
you,
you
said
a
number
of
enlightening
things
and
I
particularly
like
the
notion
that
you
know
why
do
we
use
wireless
networks
in
point-to-point
fashion?
That's
a
very
good
point.
If
everybody
in
the
house
is
streaming
the
same
movie,
why
are
we
doing
that
right?
A
But
so
my
question
to
you
is
given
sort
of
your
unique
hindsight:
do
you
have
any
advice
in
you
know
developing
workshops,
developing
metrics
in
this
workshop
today?
That
can
stand
the
test
of
time
I
mean
so
you
know
anything
that
we
do
today
is
so
much
is
going
to
change
in
the
next.
You
know
decade
or
20
years,
maybe
ipv-n
will
come
out.
I
think
we
should
stop
numbering
them.
You
know
how?
How
can
we?
A
X
Things
occur
to
me,
the
first
one
is
that
if
you
expect
it
to
last
for
a
long
time,
then-
and
you
want
to
get
it
propagated
into
the
system,
my
comment
about
backward
compatibility
is
pretty
important.
Finding
a
way
to
introduce
something
that
is
easy
for
everyone
to
adopt
on
their
own
time.
Schedule,
as
opposed
to
having
to
do
a
flag
day
is
really
important.
X
It's
also
important
to
find
arguments
that
will
cause
people
to
conclude
that
they
want
that
capability.
So
thinking
about
why
that
capability
might
be
important
is
a
very
important
part
of
your
success.
X
If
I
could
go
back
to
something,
I
think
I've
learned
in
the
course
of
my
career,
and
that
is
that
you
can't
do
anything
big
unless
you
can
figure
out
how
to
get
help,
and
the
only
way
you
can
get
help
is
to
convince
somebody
they
want
to
do
what
you
want
to
do.
There's
another.
X
It's
called
sales
and
you
know,
while
a
lot
of
engineers
sort
of
dismiss
sales,
as
though
you
know
those
rah-rah
guys
over
there.
I
have
to
report
to
you
that
the
sales
guys
don't
succeed,
you
may
not
get
paid.
So
that's
first
issue
and
the
second
one
is
there's
a
wonderful
worked example
of
salesmanship
in
a
book
called
tom
sawyer.
X
Some of you, if you've read it, will remember:
he
was
whitewashing
the
fence
and
he
managed
to
convince
his
friends
that
this
would
be
fun
for
them
to
do
and
and
all
he
offered
was
an
apple
core,
which
is
you
know,
pretty
weak
inducement.
So
so
I
would
say
better
learn
to
sell
the
idea.
If
you
wanted
to
have
widespread
adoption,
I
don't
think
there's
well,
let's
see
what,
of
course,
if
I
could
have
whispered
to
my then-28-year-old self
or
something.
Y
X
And
and
then
said,
pick
128
bits
of
address
space.
We
would
have
avoided
this
hassle.
However,
to
be
absolutely
honest
with
you,
can
you
imagine
in
1973
being
told
you're
going
to
need
3.4
times
10
to
the
38th
addresses
for
your
network
in
about
50
years?
X
X
L
I
assert
it's
primarily
at
the
edge
in people's
devices
in
their
homes
in
the
last
hop
or
two
to
the
isp,
and
that's
my
assertion.
X
So
and
you
you
and
I
are
pretty
much
in
agreement,
although
I
would
there,
let's,
let's
dig
down
a
little
bit
into
some
nuance.
The
first
observation
I
would
make
is
that
people's
wi-fi
systems
are
poorly
tuned.
Generally
speaking,
we
don't
have
wonderful
tools,
at
least
that
I'm
aware
of
for
helping
people
configure
them
well
putting
in
repeaters
where
they
would
do
the
most
good.
Sometimes
you
put
in
repeaters
and
they
cause
interference,
and
that
makes
it
worse.
X
So
we
have
that
problem
and
a
lot
of
the
speed
tests
that
people
run
don't
take
into
account
or
don't
exhibit
a
surface
at
poorly
configured,
wi-fi
environments,
so,
but
that's
still
out
of
the
edge,
so
you
and
I
are
still
in
sync
there.
Second,
I
certainly
agree
that
the
routers
that
we
use
to
get
access
to
the
public
internet
that
have
bufferbloat
problems
are
a
significant
component
of
performance
variation.
X
However,
there
is
a
piece
of
the
global
internet
which
is
potentially
troublesome
and
that's
where
the
isps
interconnect,
whether
that's
an
ixp
or
whether
it's
a
direct
peer-to-peer
connection.
There
are
cases
that
have
been
well
documented,
partly
by
the
m-lab
measurements
that
have
been
gone
going
on
since
the
last
11
or
12
years
now,
where
some
isps
have
chosen
not
to
upgrade
where
they
should
have
in
terms
of
providing
adequate
bandwidth
in
the
inter
core
interconnects
of
the
net.
X
I
would
say,
however,
that
for
the
most
part,
the
core
networks
are
operating
at
extremely
high
capacity.
We're
running
at
400
gigabits
a
second
in
most
of
our
backbones
now
and
with
some
possibility
of
moving
to
even
higher
capacity
on
those
optical
channels.
So
I
don't
think
the
core
of
the
net,
except
the
interconnect
points
tends
to
be
as
big
a
problem
as
the
edge.
So
you
and
I
are
largely
in
agreement
there.
C
D
D
I
arrived
at
stanford
for
my
phd
in
the
summer
of
1990
right
about
the
time
that
steve
deering
had
invented
ip
multicast
and
30
years
later,
we've
yet
to
see
really
any
widespread
usage
of
it.
We
imagine
back,
then
we'd
have
broadcast
tv
over
the
internet
and
what
we
actually
have
is
youtube
with
every
everybody
watching
their
own
customized
content
when
it
comes
to
wi-fi
all
modern
wi-fi
systems
have
multiple
antennas
on
the
access
point
and
on
the
client
and
they
use
phasing
to
plant.
D
Z
D
One
packet
actually
gets
planted
on
the
receiver
that
you
wanted
to
get
and
with
multi-user
mimo
it
can
actually
be
sending
multiple
unicast
to
different
clients.
At
the
same
time,
and
coming
back
to
the
earlier
theme,
people
have
mentioned
a
lot
of,
it
is
measurement
and
money,
because
people
measure
unicast
throughput.
That's
where
all
the
research
spending
is
going
in
wi-fi
to
make
the
unicast
performance
off
the
charts.
Y
Stuart
and
others
pretty
much
said
everything
that
I
was
gonna
say,
which
is
that
you
know
the
only
other
thing
about
multicast.
That's
a
big
issue
is
billing.
Is
that
there's
no
good
way
to
account
for
and
bill
for
these
things
we're
still
in
networks
that
are
where
people
want
to
pay
per
bit,
especially
when
you
talk
about
this
interconnection
problem,
a
lot
of
them
it's
about
who
wants
to
get
paid
for
what
bits?
Y
It's
disappointing
that
I
have
a
higher
connection
at
my
house,
a
higher
speed
connection,
my
house
than
some
carriers
have
as
their
global
interconnection
between
two
large
providers.
I
really
want
to
name
and
shame
those
carriers,
even
though
they've
disclosed
the
information
to
me
privately,
but
it
is
very,
very
disappointing
that
we're
in
we're
in
that
state-
and
this
the
same,
I
think,
can
be
said
as
well-
about
the
qos
discussion,
which
is
that,
broadly,
you
can
get
networks
that
have
good
qos.
Y
If
you
pay
somebody
for
them
and
they
will
happily
build
you,
a
private
network
with
the
qos
properties
that
you
want,
but
when
it
comes
to
the
public
internet,
there's
no
guarantees
once
you
leave
that
particular
carrier
and
then
you
know
the
other
comments
about
cpe,
etc.
So
I
think
everything
else
was
largely
said,
but
I
think
you
know
yeah
that
billing
component
and
the
major
interconnection
issues
they're
still
out
there
and
a
lot
of
the
companies
like
google.
I
work
for
akamai.
Y
AA
Hello,
thank
you
for
mentioning
the
design
for
the
future
and
considering
that
I
would
suggest-
or
at
least
for
me
personally,
one
desirable
property
would
be
ecological
sustainability,
so,
since
we
are
talking
about
measuring
and
maximizing
the
quality
of
network
service,
I
would
think
that
we
should
also
kind
of
try
to
balance
that
with
minimizing
the
ecological
impact.
So
what
do
you
think
about
that?.
X
So
actually,
this
is
a
really
interesting
point.
I
had
not
thought
about
that
as
carefully
and
specifically,
as
you
have
I'm
going
to
expand
on
that
for
a
second
and
suggest
to
you
that
there
is
a
metric
that
is
frequently
used
by
economists
called
gdp,
and
it
is
an
additive
metric
about
how
much
money
is
being
spent
on
products
and
services.
X
What
is
never
included
in
gdp
is
the
negative
impact
of
the
business
on
the
environment,
for
example.
I
think
we
need
some
negative
elements
of
gdp
and
the
same
argument
can
be
made
for
performance
on
the
internet
if
there
are
costs
associated
with
it
that
are
negative
in
fact
that
are
harmful.
We
ought
to
account
for
that
as
well.
AB
Hi,
it's
great
to
see
you
vint
and
to
hear
your
thoughts.
I
was
riffing
on
what
vesna
was
saying.
I
was
violently
agreeing
with
her
and
there
are
metrics
that
you
know
like
carbon
intensity,
that
again,
if
we
consider
as
a
cost
metric,
which
which
would
mean
how
carbon
intense
is
say,
the
path
through
which
you
might
route
your
data
that
might
help
towards
your.
AB
You
know,
accounting
for
your
negative
impact
as
as
you
were
describing
it,
and
so
I
I
wonder
you
know
how
many
other
folks
thinking
about
metrics
are
considering
things
like.
You
know,
delay,
and
this
goes
back
to
something
that
you've
held
near
and
dear
around
the
delay.
Tolerant
networks
is,
maybe
you
hold
on
to
data
until
such
time
as
the
renewables
are
available
or
there's
excess
renewables
when
the
cost
of
renewables
is
even
lower,
and
so
something
like
a
carbon
intensity
metric
might
be
something
to
weigh
against
delay
or
latency.
X
So
that's
a
really
interesting
idea.
Of
course,
not
all
applications
have
the
ability
to
accept
this
kind
of
artificial
delay,
but
the
cases
where
it
can,
where
an
application
is,
is
capable
of
being
delay.
Tolerant.
It's
a
very
interesting
idea
to
allow
someone
to
say
I
would
prefer
my
packets
to
go
on
the
least
you
know
expensive,
environmentally
expensive
path,
despite
the
fact
that
that
might
involve
delay
eve.
I
have
to
ask
you:
am
I
remembering
correctly
that
we
were
both
at
ucla
way
back
in
the
late
1960s.
AB
I
was
affiliated
with
ucla
for
a
little
bit,
yes
and
and
caltech
were
where
our
paths
crossed
and,
of
course,
the
internet
and
the
ietf.
But
yes
this,
I
I
think
you
know
some
of
the
things
that
google
is
doing
with
regards
to
shifting
workloads
to
follow.
The
sun
exposes
at
least
to
the
google
ecosystem,
things
like
carbon
intensity.
AB
AB
You
know
the
algorithms
that
they
use
or
the
metrics
that
they
consider
so
that
as
edge
computing
arrives
on
the
scene
and
and
we
expose,
we
want
to
maybe
load
balance
the
network
by
considering
the
network
carbon
footprint,
not
just
the
data
center
carbon
footprint,
whether
there
are
things
whether
we
could
see
what
it
is
that
google
is
doing
inside
to
do
that
sort
of
load
balancing
inside
their
networks.
Is
that
something
that
is
right
for
our
standardization?
X
Yeah,
I
don't
know
the
answer
to
that.
I
know
we're
out
of
time
here.
I
will
say,
however,
that
one
wants
to
go
after
the
largest
contributor
to
carbon
consumption
first
and,
to
be
honest,
that
tends
to
be
the
big
data
centers.
So
we
put
an
enormous
amount
of
effort
into
reducing
power
requirements
paying
for
low
carbon
or
zero
carbon
electricity
production
and,
taking
you
know
like
30-year
contracts,
and
recently
we've
announced
that
we're
trying
to
replenish
the
water
that
we
use
up
in
evaporative
cooling,
for
example.
X
X
A
All
right,
thank
you
very
much
for
your
time
vint.
We
hope
you
can
stick
around,
but
we
certainly
understand
if
you
can't,
but
we'll
we'll
turn
it
back
into
our
workshop
program
now
with
more
rapid
fire
presentations
and
followed
by
discussion.
But
we
greatly
appreciate
your
you're
coming
to
join
us
today.
X
Well,
thank
you
for
that.
I
appreciate
being
invited
as
the
talking
dinosaur
and
I
will
hang
around.
I
can
hang
around
for
another
28
minutes,
so
I'll
be
happy.
A
Excellent:
okay!
Well,
we'll
we'll
do
two
short
presentations
I'll
turn
it
back
over
to
keith
for
that,
and
then
we'll
join
the
discussion
as
you're
welcome
to
join
theirs
as
well.
C
All
right,
thank
you,
wes!
Our
next
talk
is
from
pedro
casas,
10
years
of
internet
qoe measurements
pedro.
Why
don't
you
take
it
away.
AC
Yeah,
hey
hi
thanks.
I
saw
you
change
the
the
format
of
the
slides,
so
I
was
just
checking
actually
my
first paper working on qoe
dates
back
to
16
years
ago,
so
I
could
say
15
years
of
internet
qoe
measurements.
AC
So
the
idea
is
to
understand,
on
the
one
hand,
a
bit
more
about
qoe
measurements
per
se,
and
if
these
tell
us,
what
do
we
need
from
the
network
side?
So
if
we
can
go
to
the
next
slide?
So
so
here,
basically
I
talk
about
the
term
internet
qoe
about
thinking
user
experience.
AC
In
a
broader
perspective,
not
only
thinking
of
qoe
as
a
user-centric
lab
study
perspective,
but
something
more
global
in
which
you
can
see
the
real-world
networks
and
services
at
scale,
and
you
want
to
basically
through
internet
qoe,
understand
a
bit
more,
the
quality
of
data,
communication
networks
and
services
from
the
user-centric
perspective,
and
why
that
because
we
always
say
that
you
cannot
understand
what
you
don't
or
let's
say
you
cannot
control
what
you
don't
understand
and
you
cannot
understand
what
you
don't
measure.
So
it's
for
me.
AC
In
my
perspective,
it's
very
important
to
understand
the
user
experience
and
quality
of
experience,
but
also
how
to
measure
it.
AC
The
operation
of
networks
through
a
user-centric
understanding
is
very
important,
and
I
would
just
take
the
example
of
what
you
were
saying
in
one
of
the
presentations
before
about
carbon
footprint
or,
let's
say
more
green
operation
of
networks.
AC
So
so
far,
networking
has
been
about
over
provisioning
right
because
over-provisioning is probably cheaper than
adapting
the
network
and
having
to
tolerate
all
the
problems
that
may
arise
from
there,
but
from
a
quality
of
experience
perspective
over
provisioning
could
just result in
a
complete
waste
of
resources,
because
some
extra
resources
might
not
really,
at
the
end
of
the
day,
translate
into
better
performance
for
the
user
of
all
the
services.
AC
So
it's
very
important
I
would
say
to
to
understand
the
user-centric
perspective
of
the
networks
so
actually
qoe-driven or qoe-aware algorithms.
They
can
help
in
better
optimizing.
The
network
operating
the
network
in
a
more
efficient
way,
thinking
from
traffic
engineers
of
more
offline
conception
of
networks
and
so
on,
but
also
to
more
online
monitoring.
AC
Just
think
about
diagnosis
of
problems
or
detection
of
anomalies,
understanding
when
there
is
a
problem,
you
want
to
prioritize
things
that
are
really
impacting
the
end:
customers
or
the
end
users,
but
qoe
modeling
measurement
and
assessment
is
super
complex
and
time
consuming,
and
so
far
we
have
had
a
discussion
about
qos: it's all wrong, what we have been doing is not good, I think. In that figure that I'm showing in the bottom left, the qoe perspective, or the whole thing.
AC
It's
a
multi-layer
thing,
so
we
have
to
consider
every
layer,
so
the
network
layer
for
sure
the
qos is
important
because
it's
what
we
control.
But,
of
course,
if
you
want
to
understand
what
the
user
is
experiencing,
we
have
to
get
closer
and
we
have
to
measure
the
application
layer
and
this
becomes
very
complex
because
of
course,
application
layer
do
not
offer
in
general
apis
or
information
that
can
give
us
this
type
of
information.
AC
But
in
order
to
understand
all
these,
and
especially
the
end
user,
we
have
to
be
able
to
measure
at
different
levels
of
the
stack.
Just
remember
that
quality
of
experience
is
is
really
complex,
because
there
is
a
lot
of
things
mixed
there
about
subjectivity,
so
there
is
personality,
usage,
context,
device,
usability
and
so
on.
So
it's
it's
broader,
but
doesn't
mean
that
we
are
not
advancing
in this direction.
So
next
slide,
please
so
from
the
experience
on
the
internet,
qoe measurements.
AC
I
would
just
it's
very
difficult
for
me
to
summarize
all
the
learnings,
but
at
least
I
would
like
to
share
with
you
some
some
comments.
We
were
thinking
or
we
were
discussing
at
some
point-
that
knowing
how
to
deliver
good
or
excellent
quality
is
good
because
you
can
actually
sell
it.
I
would
say
the
opposite:
it's
not
probably
that
we
can
sell.
But
what
it's
important
to
understand
is
that
poor
quality
of
experience-
it's
bad,
for
example,
for
user
engagement.
AC
So
this
is,
there
are
tons
of
studies,
understanding
that
when
the
performance
from
the
user
perspective
is
bad,
things
start
going
wrong
in the
engagement
perspective.
So
if
you
think
video
streaming,
when
you
think
about
re-buffering
video
quality
and
quality
switching
for
web
services,
if
you
think
about
waiting
times,
but
also
we
were
talking
about
responsiveness,
these
are
things
that
we
have
to
be
able
to
somehow
understand
and
measure
in
order
to
avoid
this
quality
dissatisfaction
and
user
engagement
problems.
But
it's
not
only
about
experience,
but
it's
also
about
productivity.
AC
When we talk about cloud services, interactivity and responsiveness is everything, and the many people who rely on cloud services for work can have their productivity severely impacted by these problems. So it's not only about enjoying things, but also about making them work when you have everything, let's say, in the cloud. That's the important point here.
AC
Yeah, okay, so maybe just one final comment here: it's not only the network, not only the services, but also the end devices that are in play. Maybe we can go to the next slide. So what do we actually need from the network? The answer depends on what the user perceives for a specific service. It's not only about fast connections and low latency; it's something more holistic. I think somebody said it at some point: it's not only about quality of experience for specific services.
AC
I think it's about quality of experience for the session. We humans don't just react to independent small things; we try to integrate components. So it's very important that we take a more holistic view and say: okay, as a user, we have a session in which there is the network, there are the services and the devices, and we have to see how to capture these things in something like a single-metric way.
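The single-metric idea the speaker sketches can be made concrete with a toy example. This is purely an illustration under invented assumptions: the weights, the normalisation constants and the three inputs are not from any standard or from the workshop; they only show what folding orthogonal network, service and device metrics into one session score would look like, and why the weighting is the hard, subjective part.

```python
# Illustrative only: a toy "session score" folding a few orthogonal
# quality metrics (network, service, device) into one number.
# Weights and normalisation constants are invented for this sketch.

def session_score(rtt_ms, rebuffer_ratio, device_load, weights=(0.4, 0.4, 0.2)):
    """Return a 0..1 score; higher is better."""
    # Normalise each component to 0..1, where 1 is "good".
    net = max(0.0, 1.0 - rtt_ms / 500.0)          # 0 ms -> 1.0, 500+ ms -> 0.0
    svc = max(0.0, 1.0 - rebuffer_ratio * 10.0)   # 10% rebuffering -> 0.0
    dev = max(0.0, 1.0 - device_load)             # fully loaded device -> 0.0
    w_net, w_svc, w_dev = weights
    return w_net * net + w_svc * svc + w_dev * dev

good = session_score(rtt_ms=30, rebuffer_ratio=0.0, device_load=0.1)
bad = session_score(rtt_ms=400, rebuffer_ratio=0.08, device_load=0.9)
```

Any real scheme would have to justify both the normalisation and the weights per application, which is exactly the subjectivity problem raised above.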
AD
Thanks. It's been a great session so far. I think this presentation is probably just going to be reinforcement of some of the things we've already discussed, so I'll keep it brief, because we're nearing the end of this one. The paper is really a position on the questions that were posed by the IAB's invitation to this workshop, and one of the questions that piqued my interest is whether there should be, for end users, metrics, whatever they might be.
AD
I think my position would be no. A corollary of this is: does the network need to know what the applications are doing? And there's an argument that we put forth that says no as well. So, going to the next slide, one of the premises of the document is that the internet is for end users, and the way they view the internet is as this box in which loads of stuff happens. It's a mess; there are different parts of it; they don't understand how it works.
AD
They want to use the internet, whatever that might be, to achieve their goals, and the goals they want to achieve can vary in time and space. The quality assessment of those goals varies by person and personality; it's subjective, and it's hard. This system, this box, is a complex system, and trying to model it is incredibly difficult. It's incredibly naive to think we can run stuff like a speed test to get some metrics to understand why maybe something was bad.
AD
They're disjoint. This isn't to say the network is irrelevant to these things, but the server apps and the client apps play such a large role in people's ability to have a good quality of experience that the network's role should not be overestimated, even though we're in the IAB and we're talking about networking. So we've got two examples of users; one's happy, on one side.
AD
If we've gone to the next example: the chap who couldn't load the picture of his dog. What's he going to think of this magic box, the internet? Is it slow? Why is this happening? Is it my client? Is it my ISP? Is the server overloaded today? Is my corporate security software playing up? They're probably unlikely to blame bufferbloat, because they don't know what that is. They might say their mum's microwaving a potato, I don't know. Stuff happens, and it changes over time.
AD
I think, unless you can measure everything all of the time, understanding why something happened in the past is going to be difficult, unless you can tie that to quality of experience and react. I just think this is too hard, so most people will say: I don't care why it was bad.
AD
It was just bad, and that made me sad. And filling out a survey at the end to report that it was bad doesn't fill people with happiness either, because quite often it's like: well, where did that report go? What measurable outcome or improvement did it actually make? Moving on to the final slide.
AD
And, you know, some of the points here: the reason the internet was slow is because everyone else is on the internet, getting in the way and doing stuff, and they shouldn't be, but that's life. In one example for me, my internet used to regularly be slow because my mum was microwaving a potato.
AD
You know, obviously that's not the internet but my local Wi-Fi, or LAN or WAN or whatever you might want to call it. But this was a regular occurrence, and it would knock out my interactive gaming session and really severely disrupt me. Really, there's an illusion of choice there. What can I do? Go out of band and speak to my mum and ask her to stop doing those things? There's not a technical measure that I could take, or that they could.
AD
I could plug into a wire. There's a diverse set of possible problems and solutions, and so measuring this stuff is hard. There's no universal frame of reference here; that's the position we take. There's also no grand unified means to combine these things together. I'd say 99% of the world's population don't know what LUL (latency under load) is, and I probably just made that term up right now, but it's something to consider: who are we targeting with this workshop?
AD
It's for end users, and if they don't know what we're talking about, let's not bombard them with weird metrics. You could ask: does knowing about any kind of problem help identify where it might be? Probably; I don't know. Or does knowing what you're theoretically capable of doing in the network help you understand what is actually happening?
AD
Everyone wants this data, but who's actually going to act on it to do anything? The changes are hard; they're hard to test in advance and hard to prove in the long term, because everything's changing all the time. So, given a plethora of information, what do you do? Change your phone because someone told you that would work faster? Well, that might not even be possible. Change your ISP? Well, that's severely restricted.
AD
You might identify a BGP routing error and fix that. But again, the ability to control this within your own system domain is probably unlikely, and where these problems happen because of system interactions, I'd argue it's not necessarily super helpful. But that's the end.
A
Great. I will remind people: please do use the Slack channel when we're having a discussion. While the presentations are going on it's not a big deal, but for Keith to keep track of the queue, let's keep the in-WebEx chat down to just queuing, please. I think that was a fantastic presentation.
A
I think the one thing that came out of a lot of the things we've been discussing recently is that even if we have a metric that told you you had a bad connection, it doesn't tell you why, as Vint very carefully hinted in the chat, and as a number of other people did. It doesn't tell you that it's the microwaved potato; it doesn't tell you that it's the router at your edge device. One of the previous questions to Vint was, you know: where is the problem?
C
All right, Jana Iyengar.
E
I was actually going to say something to Lucas, but I'm going to say something with Lucas and also respond to Bob briefly. I really love how you presented this, Lucas, and I appreciate the point that there's an illusion of choice; I wanted to bring this up earlier. Even if I know that my ISP sucks, there's very little I can do where I live.
E
For example, I have exactly one ISP, so it's not just an illusion of choice: there's actually very clearly no choice at all, not even an illusion, as far as I'm concerned. So knowing that my ISP sucks doesn't do very much. It's useful to some extent, but I cannot put any economic pressure on them.
E
So there's a fundamental problem here, and this applies not just to my ISP but to many other things that we talk about when we are stuck with something. For example, I'm on this webinar as an example of this: we've agreed to do WebEx here, and so we are stuck. Potentially you could move past that if it sucks, but in some cases that just isn't an option at all. Bob, to your point, I quickly want to say that "the right metric" is not, I don't think, the right adjective to use (funnily enough, I'm using it ironically). I think what we want to talk about is the useful metric and the not-so-useful metric, because there isn't a right or a wrong here.
L
Our problems are end to end, from the application to the other application on the other end of the network. You cannot solve this problem if you only look at the network; that is our experience in Linux. We've mostly fixed the problem in Linux, but there are these pesky damn device drivers. There are certain people, many of whom are on this call (I'm looking at you, Jason), who can help force things into the market by requiring that the equipment they provide to customers, things like the pesky CPE equipment, work properly.
L
I'm thinking about Wi-Fi where, at least under Linux, courtesy of Dave Täht's and Toke's wonderful work, we actually have working low-latency Wi-Fi. But the economic pressure has to be applied somehow, and right now it's not being applied.
AD
Yeah, just to respond to Bob: I was being quite glib in that talk, and I hope people understood. In terms of exposing metrics directly to end users, I see that as like the speed test: here's a thing, use that number to guesstimate how all of the applications you'll ever want to run in your home, with all of your family, and maybe your neighbors who are stealing your Wi-Fi, are going to behave. I don't think that's very useful.
AD
I do think what's potentially useful is the collection of end-user metrics, and then turning those into something that other people can act on. You see this with the web: being able to collect different kinds of metrics related to web page loading, just as an example, that then get fed back to the service provider, who can then correlate those with network and more longitudinal measurements to understand that, hey, for this subset of users in this population or geo, they have a really bad page load time.
AD
For some reason. They know this is happening, and they don't know why, and even if we told them why, they wouldn't know what to do to fix it; but those metrics could help. So I'm not super pessimistic that we can't do anything. I just don't think exposing this directly, as raw data, to people is going to be that helpful.
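The aggregation described above, client-reported page-load metrics grouped per population to find the subsets with bad load times, can be sketched as follows. The data shape, the 95th-percentile choice and the 2-second threshold are all invented for the illustration, not taken from any real telemetry pipeline.

```python
# Minimal sketch: group client-reported page-load times by geography
# and flag the populations whose 95th percentile is bad.
from collections import defaultdict

def p95(values):
    """Crude 95th percentile: nearest-rank on the sorted list."""
    s = sorted(values)
    return s[min(len(s) - 1, int(0.95 * len(s)))]

def slow_geos(reports, threshold_ms=2000):
    """reports: iterable of (geo, page_load_ms) pairs."""
    by_geo = defaultdict(list)
    for geo, ms in reports:
        by_geo[geo].append(ms)
    return {g for g, ms in by_geo.items() if p95(ms) > threshold_ms}

# Invented sample reports: Norway loads fast, the UK subset is slow.
reports = [("NO", 300), ("NO", 450), ("NO", 500),
           ("UK", 2500), ("UK", 3100), ("UK", 2800)]
```

The output is exactly the kind of "this subset of users has a really bad page load time" signal the speaker describes, without ever showing raw metrics to end users.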
AE
If you go to a Qualcomm or MediaTek or Cortina and you ask them, "hey, I'd like you to do this," they're going to say: okay, what's in it for me? Bring a CSP. And in order for the CSPs to be able to come to the table, they need to be able to monetize it, and that is the issue there. So when we talk about all these metrics and whatnot, that's not what's going to bring them to the table. What's going to bring them to the table is use cases. Can I sell this to gamers?
AC
Yeah, I just wanted to comment on the metrics. I don't know why we were talking about defining metrics that the end users have to understand; I don't really think the end users need to understand metrics. The end user just cares about their experience, and I think what's important for us is to understand, depending on what they are doing and what applications they are using, what are the things that we have to measure, or how do we change things in order to influence that?
A
You know, one of the things that we know is that BitTorrent is always faster than unicast downloads, and that's apples and oranges to some extent, due to the parallelization of BitTorrent. So we need to remember that everything involves the application and the server.
H
Oh yeah, just coming back on what I said about Lucas and how Lucas replied: I think the perfect is the enemy of the good. Just because a metric isn't perfect doesn't mean it's not useful, just like the speed test isn't perfect but is still useful to a lot of people; witness the fact that loads of people quote it and use it. When I first looked at speed tests as a researcher, I thought: yeah.
H
Okay, but you know, it's often not the actual speed across the broadband network; the bottleneck might be somewhere else. You know, it might be that they're not going to their CDN, or whatever. But ultimately it's a good 90-percent-type metric that is irrelevant in 10% of the cases. And similarly with latency under load, which was why I picked up on Lucas's presentation at the end, where he said latency under load wouldn't be understood by users.
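Latency under load can itself be condensed into a single responsiveness number, in the spirit of the round-trips-per-minute (RPM) idea that comes up later in the session: probe RTTs while the link is saturated and report how many round trips per minute the mean RTT implies. A minimal sketch, with made-up sample values:

```python
# Sketch of an RPM-style responsiveness number from RTT samples.
# Sample values are invented; a real test saturates the link first.

def rpm(rtt_ms_samples):
    """Round trips per minute implied by the mean RTT."""
    mean_rtt_s = sum(rtt_ms_samples) / len(rtt_ms_samples) / 1000.0
    return 60.0 / mean_rtt_s

idle_rtts = [20, 25, 22, 23]        # ms, link idle
loaded_rtts = [250, 310, 280, 300]  # ms, link saturated (bufferbloat)

idle_rpm = rpm(idle_rtts)      # high: the idle path is responsive
loaded_rpm = rpm(loaded_rtts)  # low: queueing under load hurts
```

The appeal of the framing is that "more round trips per minute is better" may travel further with non-technical users than "milliseconds of latency under load".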
N
There we are, thanks. I wanted to follow up on a couple of things that people said. Gino, you said that the vendors are there; the vendors need to be sold. I've been in the bufferbloat world, trying to talk to vendors; they're not anxious to listen, and maybe they just think we're trying to be the smartest guy in the room. I also wanted to say I like the idea of food labels.
N
I think capacity is one characteristic and responsiveness is another. I'd want to focus on metrics that can go on the box for router vendors, and I'll stop there.
C
All right, the next person in the queue is myself, so I'll try and keep myself to time, but I hope you all keep me honest. I also wanted to respond to this idea that the vendors are willing to do it. In my experience: I was talking to folks at several vendors, including, you know, Huawei, which makes a very large number of the eNodeBs and gNodeBs.
C
I said: you know, why, on your 4G and 5G downlinks, don't you have some sort of AQM? Don't you have some sort of better queuing? And they were very frank. They said: look, our scheduling and traffic management is highly proprietary, and we are expecting money in exchange for changing it.
C
And, you know: Facebook at one point came to us and said, we're not happy with the treatment of Facebook's traffic over the Huawei eNodeBs, there's bufferbloat. So Facebook paid us a gigantic amount of money to set up a joint research institute, and that joint research institute researched the problem and deployed a custom classifier that treats Facebook's traffic preferentially on our systems. And if another vendor has a problem, we would expect them to come to us and pay us.
C
You know, a nine-figure amount of money. So, empirically, the idea that the vendors are eager to deploy AQM, or some sort of better scheduling or queuing: it doesn't seem to be the case. It seems like they are expecting to be compensated for that by the people making money off the traffic. The next person in the queue is Jim Gettys.
L
Just to follow up on what you were saying: the problems are everywhere. In the eNodeBs there are severe problems. These are not something that there's a magic wand to fix.
L
Toke and Dave spent four, five, six years working on Linux Wi-Fi. But there are key points where there are costs that are sometimes hidden, things like service calls. I placed service calls to my ISP, in this case Comcast, long before I understood what bufferbloat was. There may be an education process that we can do to make them aware that that's part of where their support costs are, but we have to get certain organizations to move.
L
You know, we've named a fair number of them, and the way that seems to ultimately work has to do with money. Unless you point fingers and say "your device is broken," it's really hard for people to vote with their feet. We need to make it really easy for that finger-pointing to go on, or we will be having one of these same meetings in another five or ten years.
A
All right, that is the end of the queue. Wes, I'll turn it over to you. All right, great. I think this morning was a great set of discussions.
A
You know, we set this workshop up knowing it would be somewhat chaotic, and the introduction sections that we did this morning, and will continue after a short break, are really designed to frame the problem. And this is exactly what we've been doing: scoping out how hard it is. As the workshop goes forward, I think we'll find more people who think they might have a solution toward things.
A
So with that: thanks to everybody that's joined the Slack channel, it's growing nicely; thanks to everybody that has participated; and thank you to Keith for doing a great job of moderating a very rapid session queue. That's not an easy thing to do. With that, I'm out of espresso and I'm going to go fix that problem. We have 10 minutes, and we will see you at 10 minutes after the hour; we'll pick up with introductions number two. Thank you.
AF
Just as a note, I'll be running the slides for this next section, so I can click through them for you. Hello.
AF
When was the latest version updated? I sent them yesterday. Today? Yep, yep, I should have those then.
AG
I'm at your mercy, so I'm going to let you stay in control, but I'm going to try to whiz through this very quickly, to respect the time.
AF
Absolutely. All right, so we are in the second half of our introduction section. For the next hour we'll have three short presentations, again five minutes each with about two minutes of clarification questions, then half an hour of discussion. And so we are starting out with Ahmed.
AG
Good afternoon everybody, or good morning where you are. I work for the UK telecom regulator, Ofcom. I won't go through what this discussion is about; it's in the title. Next slide, please. I think, historically, the operators and the network engineers have had their requirements for measurements. Regulators have slightly different ones: we're concerned about making sure consumers have access to a wide range of information and can run the applications they want, and also ensuring that they have a good, wide choice of content online.
AG
Next slide, please. What is it about for us? It's about the access network: it's the black line that connects the end user with the rest of the internet. From a consumer perspective, what we see is that consumers need transparency. They need to know what they're getting, how good it is, whether they're getting value for money, that kind of thing. From a policymaker's perspective, it's around the notion of understanding the national infrastructure.
AG
One aspect that's important for us, and which the internet is able to deliver, is allowing innovation without permission, which matters from a regulatory perspective. Two risks have been highlighted over the years; one is the degradation of internet access as a service as a whole.
AG
I won't talk much about this slide, other than, from a requirements point of view, to note perhaps two things that are different for us than for operators and engineers, and those are comparability and trustworthiness. The last one is quite important for us, because sometimes we need to defend our decisions in court, which means that the measurements themselves, the measurement systems, the methodologies and so on have legal value. So designing a system from that perspective might be different from designing a system, say, for performance measurement.
AG
Over the years I've done a lot of work with BEREC, the Body of European Regulators for Electronic Communications, the umbrella for all the European regulators, before the UK left Europe: a lot of work trying to understand some of the systems like Atlas, M-Lab and other commercial systems.
AG
We
also
looked
at
lmap
and
ippm,
which
are
in
the
itf,
of
course,
and
and
and
what
we
found
was
that
all
systems
were
good,
but
they
didn't
quite
fit
our
the
regular
requirement
in
one
way
or
the
other
beric
eventually
embarked
on
building
a
system.
I
don't
know
how
far
that
got
to.
I
was
involved
in
the
specifications.
AG
I know for sure that we drew a lot from the LMAP architecture, and IPPM as well. But one thing that perhaps I would like to highlight here is the rising star in this game: crowdsourcing, because it offers a new insight into the experience of people and also into the networks themselves. However, there are gaps.
AG
There is a lack of standards in this space; privacy issues are prevalent as well; and the collection of data isn't future-proof. We're still at the mercy of the operating system vendors: sometimes some of the information is made available, sometimes not, and so on. Next slide, please.
AG
I won't talk about the quality-of-service metrics; this is bread and butter for you guys, and certainly for us. It has been for years and it will continue to be. But I think it isn't just about metrics. What we found is that it's metrics plus how to measure: the methodology for measurement, and also the software implementation of those methodologies.
AG
That's what we found: you can have the same metric being measured in two different ways, and you get different answers. Quality of experience is becoming more and more important. Listening to the discussions today, there's been a suggestion that it's difficult; however, we still see commercial companies selling quality-of-experience systems for a nominal fee. We think, or I think, that there's a common theoretical framework of some sort that needs to be worked out here.
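The point that the same metric measured two different ways gives different answers is easy to demonstrate: "average throughput" computed by two reasonable methodologies over the same transfer disagrees. The numbers below are invented for the example (a transfer that stalls in the middle):

```python
# Same metric ("average throughput"), two methodologies, two answers.
# Each tuple is (bytes transferred, duration in seconds) for one phase
# of a transfer that stalls in the middle; the values are invented.
phases = [(10_000_000, 0.5), (1_000_000, 2.0), (10_000_000, 0.5)]

# Methodology A: average the per-phase rates (overweights short bursts).
per_phase_rates = [b / t for b, t in phases]
avg_of_rates = sum(per_phase_rates) / len(per_phase_rates)

# Methodology B: total bytes divided by total time.
overall_rate = sum(b for b, _ in phases) / sum(t for _, t in phases)
```

Here methodology A reports roughly twice the throughput of methodology B for the identical transfer, which is why the methodology and its software implementation must be specified along with the metric.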
AG
We talked about that, and also, I think, about the future for this game, because we need a lot of data to make the measurements meaningful, and that is around standardization, common software implementations, and also common data formats and APIs. With that, I would like to say thank you for inviting me and listening to me. Thank you.
AF
All right, thank you. If anyone has a clarifying question, you can jump in for 30 seconds.
AF
All right, well, thank you so much! Next up we have Michael. Hello, hello.
AH
Yeah, so: a case for long-term statistics. This is about being passive and measuring different things; it ties in well, I think, with the requests from others to look at applications, and it's maybe a bit redundant with respect to what Lucas has been presenting before. So, quality metrics: one of the things we can do with them, at least as a user, is the following.
AH
You know, to have some long-term fix to problems: change the ISP, change equipment, change the application. It's probably not in this list here, but it could also have been telling people in the household to stop using some other application. And that doesn't help much in the case of somebody sitting and watching a ping delay, and I would say even running something like an RPM test: you run it, you test it, but it doesn't help you at the time you're affected. It may have been.
AH
So my case here is against such interactive use. I'm not saying this is useless to have, but it would perhaps be more useful to have more passive measurements. Next slide, please. So useful metrics, as examples, would be: is my connection the bottleneck? How often is it, in general? Or was it the bottleneck five minutes ago, during a telco that I just had? Do applications influence each other? This question of, you know: is there really a problem with somebody doing something in parallel?
AH
Okay, yeah, let me make this a bit more lively with a story; it should fit in, I hope. I have a friend in Austria whom I play table tennis with; I actually live in Norway. We use this Oculus Quest headset, a crazy thing: I move around in my home space and everything, and it shows, on the side of the screen, the ping delay in milliseconds. So you play, you see 20-30 milliseconds, and all of a sudden the delay goes up to 200-300.
AH
What happens is that he says: oh, oh, oh, it's getting bad, you know, and if he's losing points, he just takes off the headset. I hear him shout at his family; he comes back; he says: oh, that's obvious, they were watching video, it can't work at the same time, of course. And I wonder, I mean, of course, right? It depends. Does he know it really is his network? It seems to work; that seems to make the difference. But, you know, did he just tell them in vain that they should stop watching videos?
AH
Is that the right way to use the network? It's all very awkward. Next. So I do think there is something we can do. I think passive listening can work, as long as it is associated with applications.
AH
I looked at the Wireless Diagnostics tool in macOS, which I think is actually very nice for what it offers. It goes pretty much in the direction of what I'd like to see, but it's not showing per-application traffic. And it would be about gathering some other long-term statistics, things like packet loss and RTT growth: did it really happen on my side? Was it on my bottleneck? You could be doing pings with TTL limits and things like that to find out which applications were involved.
AH
Did they share a bottleneck? We made a mechanism some time ago for passive shared-bottleneck detection for WebRTC. That mechanism, as far as I know, is not implemented by anybody, because it's complex; it uses one-way delay. But there are many variants of doing shared-bottleneck detection in research, more or less reliable, all kinds of ways of doing it.
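The intuition behind passive shared-bottleneck detection can be sketched crudely: flows that share a queue see correlated delay variation. To be clear, this is not the WebRTC mechanism referred to above (RFC 8382 works on grouped summary statistics of one-way delay); it is only a toy correlation version over synthetic delay series:

```python
# Toy shared-bottleneck heuristic: flows sharing a queue have
# correlated delay variation. Synthetic data, illustrative threshold.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def same_bottleneck(delays_a, delays_b, threshold=0.8):
    return pearson(delays_a, delays_b) > threshold

# A queue builds up and drains; flows A and B both traverse it,
# flow C is on an uncongested path.
queue = [10, 40, 80, 120, 90, 50, 20, 10]
flow_a = [q + 5 for q in queue]    # queue delay + its own base delay
flow_b = [q + 12 for q in queue]
flow_c = [15, 14, 16, 15, 14, 16, 15, 14]
```

The real difficulty the speaker alludes to is in the measurement side (getting clean one-way-delay series passively), not in this final correlation step.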
AF
Thank you so much. Do we have any clarifying questions?
AI
To rate network quality, the first term would be service connectivity, including all of the associated protocols of the service. Then I rate the quality by whether it underperforms, matches or exceeds my user expectations, which are subjective, of course. Then, if I do measurements, I infer on a time frame: do I evaluate past network quality, the current network quality, or do I anticipate future network or service quality?
AI
In theory it sounds pretty straightforward: we have connectivity, delays, jitter, losses, transfer capacities, existing RFCs that we glue together somehow; we try to exhibit no bias for specific operators and technologies, and try to put some simple metric together. Unfortunately, it's not that simple, for two specific reasons. Next slide, please.
AI
The first thing, and this is what Jana already said: users experience service quality, not network quality. The first question is how we can differentiate between these two; and the second aspect is that services define the network traffic pattern.
AI
Why is this important? Networks today are not stateless copper wires, and unfortunately many quality-of-experience and quality-of-service models today rely on networks being stateless copper wires with linear behavior. Networks react statefully to user traffic at various layers, starting with the physical and transport layers, and so on. Schedulers allocate and reallocate capacities, and we have a huge number of uncertainty factors, down even to the content and compressibility of packets.
AI
Just as an example, on the right, to get technical: we have one-way delays in the uplink of a 3G network, random payload sizes, and packets sent at random start times, and depending on how these packets aggregate together, we see that the delay behavior differs on a six-fold scale, from 25 milliseconds to 150 milliseconds, just depending on how these random packets aggregate. So this is what the user experiences, and it is difficult to capture for random traffic. Next slide, please!
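The aggregation effect described can be imitated with a toy simulation: packets arriving at random times wait for the next transmission slot, so the delay a packet sees depends on where its arrival lands relative to the slot boundary. The slot length and serialization time here are invented numbers, not the 3G parameters from the slide:

```python
# Toy model of slotted uplink aggregation: delay depends on where the
# arrival falls relative to the slot boundary. Parameters are invented.
import random

def slotted_delay(arrival_ms, slot_ms=20, serialize_ms=5):
    """Wait for the next slot boundary, then add serialization time."""
    wait = (-arrival_ms) % slot_ms   # time left until the next slot
    return wait + serialize_ms

random.seed(7)  # deterministic for the illustration
arrivals = [random.uniform(0, 1000) for _ in range(500)]
delays = [slotted_delay(t) for t in arrivals]
spread = max(delays) - min(delays)  # ~slot_ms of spread from timing alone
```

Even this trivial model shows a multi-fold delay spread for identical traffic, purely from arrival phase; the real network adds payload-size and scheduler effects on top.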
AI
So, thinking about a quality measurement framework: first, I would say we must define the requirements for this network quality rating. Who is the audience: end user, operator, automation? Do we want to do SLA verification? Should it be an actionable item that the network accepts? Do we target the past, the present or the future? Do we have specific requirements with respect to accuracy? And can we ignore the services? We're opening Pandora's box.
AI
How can we capture user input? That's for discussion. And networks are instrumented for monitoring; we have heard already about LMAP. Can we extend it to the service profile? But simplicity is something else: how can we convey this to non-technical users? And I agree with most of the participants so far: I don't see a way to map network quality to one single value. We have orthogonal metrics that represent quality, and it's quite difficult. On the next slide I have, just for completeness, some uncertainty factors.
AI
I tried to structure the uncertainty factors. All of these measurements have been done in a static fashion, meaning no movement, no redundant paths, no aggregated paths over distinct technologies, etc. We have so many uncertainty factors in this game that it becomes rather difficult to capture the behavior. So I'm skeptical, but I'm very keen on having these discussions. Thank you.
A
So I'll just preface this discussion: this is the end of the introduction section, and it's important to note that we've tried to frame the problem at the beginning of this workshop.
A
So, you know, any thoughts on continuing that, in terms of how we can work forward in the rest of the workshop on solving all of the problems that we've been talking about this morning? I found a number of interesting points in the last couple of slide decks, including, you know, finding out which applications were actually affected by the metric that you just measured. That is a really challenging thing, let alone the privacy aspects of doing it, which are also, unfortunately, fascinating to worry about.
AF
All
right,
thank
you.
Anna.
U
Yeah, I was just thinking about your last comment there, that you're pessimistic. There are so many things that influence the measurements, and this is clear, but I guess we have to be aware that no matter what metric we pick here, it's not going to be perfect, and we're not going to be able to measure it perfectly. So maybe we need to have something that's good enough and measurable, so that it is useful. But it's never going to be perfect, I think.
AF
Just as a note to the people raising their hand in WebEx: we're using the chat queue, so please do the plus-q in the chat rather than raising your hand.
V
So, for the first presentation: I think it would be interesting to really understand what type of metrics might be useful for regulation. In the past there was a lot of, let's say, the degree of oversubscription of bandwidth. So: oh, you'll get a 16-megabit DSL link, but sorry, we've got a 10-megabit uplink and we've got a thousand people on it.
V
But hey, it's a 16-megabit DSL service, right? So as an end-to-end service it may not be very relevant, but for the actual bandwidth you're getting, it is very relevant. So what are examples of metrics we've seen being regulated, and what might be better metrics to regulate in the future?
AG
I think it possibly helps if we separate this into two areas: one, if you like, the demand side, which is what applications require in order to work, and then the supply side, which is what the network can do to deliver that demand. If we thought in terms of those two buckets, it might help decompose and simplify the problem a little bit. So, starting from bucket one, which is about the applications.
AG
Really, I think some of the things that we talked about, things like quality thumbs-up, thumbs-down: simple metrics that ordinary people can understand. And you want to make that as far away as possible from technical details.
AG
On the supply side, I think that could be the technical metrics, like the ones we play with these days.
A
Yeah, so, and even speakers should be in the queue as well. I actually find one big concern of the recent discussions about politics and regulations is that they can actually affect the definition of metrics, right?
A
I think a lot of us are technical geeks, and we might define what we believe is right and true, and then you get lawyers that need to frame it in terms of politics, and then you end up with companies that want to influence that and how you measure. There was a fantastically interesting discussion on NANOG a while back about high-speed internet and whether we should redefine what that means, and there were a large number of people from major telco end-user providers that said no, the current definition is fast enough.
B
To touch upon the previous comment: measuring packet loss and markings is good, and it should be a forcing feedback function into what overarching demands are made for more bandwidth. In my opinion, what the world does not need is more bandwidth; it needs better bandwidth. And if there were some set of economic factors and forcing functions, that would lead to a long-term ability to more dynamically upgrade our devices in place, including new metrics, etc.
AF
All right, thank you. Back to Ahmed.
AG
Just to pick up on that: the regulators have been reporting mainly on speed, and that's because of the market. The market at the moment is selling connections on a per-speed kind of basis.
AG
There is no compelling reason why we wouldn't move to another metric if the market shifted, or if it would provide more information to consumers. But I think, as a suggestion to the community here, standardization does need to play a role, and representation of the real experience would also play a big role in us accepting new metrics. And I just want to emphasize, from our experience, it isn't just the metric; it's how the metric is measured, and in some cases we've seen that even the way it's implemented in hardware and software makes a difference.
AF
All right, thank you. Neil.
P
Just to pick up on what Ahmed said: in engagements that we've had with people, there is a wider interest, because the digital supply chain is involving more and more parties. You have GPON providers who are separate from wholesale providers who are separate from this or separate from that. There is a great interest in holding your suppliers' feet to the fire, if you could justify the size of the feet or what fire they have to be put against, right?
P
So I think that one is regulation, but I think there's a large untapped demand for enforceable SLAs that could potentially be enforced in an appropriate court. So I think there are two ways of looking at this: it's not just regulation; I think "follow the money" is another way of potentially doing this.
K
Yes, my concern about policy making is that regulators are trying to employ speed tests to provide information to consumers. Once it's been widely accepted by consumers, it becomes a little bit too late to fix the problem. So we need to propose something better than speed tests before it becomes too late.
P
Actually, our experience has been that by defining the speed tests, the notion of speed, the speed tests become gamed, right? Once upon a time it was a single TCP session; we know of physical hardware devices that use 24 TCP sessions to get the speed up, thereby hiding all of the loss that they've created in the buffers, right, to try and get the speed up.
P
So I think the other thing is that you need an evil persona to look at these things, to understand how they are going to be gamed, and to stop that gaming from occurring if you can.
AG
Ahmed: so I'm sort of a glass-half-full kind of guy, so I know nothing about gaming, but I think maybe we need to start a change in the way we design systems and protocols, and try to think about performance by design, or performance measurement by design.
AG
As a network engineer many years ago, you would sit down and work out what the latency per TCP session was. Well, could the communicating ends themselves work that out by themselves and make that information available as part of the operating system? I think that would be great: for protocol designers to think about what the key metrics are for a particular protocol, and how they can be made available as part of the protocol design.
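As a rough sketch of what "exposing it from the operating system" can look like today: on Linux, the kernel already keeps a smoothed RTT per TCP connection, readable via the `TCP_INFO` socket option. The byte offset of the `tcpi_rtt` field used below is an illustrative assumption (it depends on the kernel's `struct tcp_info` layout), so treat this as a sketch rather than a portable API.

```python
import socket
import struct

TCP_INFO = 11  # Linux socket-option number for TCP_INFO

def parse_srtt_ms(tcp_info_bytes, rtt_offset=64):
    """Pull the smoothed RTT (an unsigned 32-bit microsecond value) out
    of a raw TCP_INFO buffer and convert it to milliseconds. The offset
    here is a hypothetical example, not a stable kernel ABI guarantee."""
    (rtt_us,) = struct.unpack_from("<I", tcp_info_bytes, rtt_offset)
    return rtt_us / 1000.0

def connection_srtt_ms(sock, rtt_offset=64):
    """Fetch TCP_INFO for a connected TCP socket and parse the smoothed RTT."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 192)
    return parse_srtt_ms(raw, rtt_offset)
```

Parsing is kept separate from the syscall so the decoding can be sanity-checked without a live connection.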
D
Neil made a really good comment just now, which I wanted to reinforce: any test we make is going to be gamed by people looking to optimize that score, and we have to accept that. So we should keep that in mind when defining a test. Ideally, if we had a test for consistent low delay regardless of any traffic pattern, then any provider who games the system is, in effect, giving consistently low delay regardless of any traffic pattern, which is exactly what we would want.
AF
AH
Yeah, I have a very concrete suggestion on that, which is related to Ahmed's comment about statistics that should be stored in, or offered by, the OS.
P
I was just going to pick up on Ahmed's point about this gaming. In other large-scale projects I've worked with, where gaming was a potential financial upside, game theorists are very useful in this process. So I think there are ways in which we could look at how this could be gamed for economic reasons, analyze it, and go and find a friendly game theorist to make sure we're doing the right thing.
M
Thanks. It says to me that we are currently discussing two types of metrics: a priori metrics, which define how the system will behave, versus a posteriori metrics, which are strongly correlated with good experience without explaining how this is happening. I just wanted to share this observation; maybe it can be useful, maybe not. That's my observation, thank you.
AG
From a regulatory perspective, we are not network engineers; we're not trying to solve network problems for the operators. In fact, if we tried, we would probably get it wrong. They're the best at running the networks.
AG
Really, from our point of view, we want to report the right things to consumers: subject to their contract, subject to their being able to choose the right package and, in the future, to switch, that kind of thing. And from the national-statistics point of view, it's about the health of the national telecommunications infrastructure.
A
A number of interesting comments; I think this discussion has been fascinating.
A
I have one story to tell, which is that my son recently had to work with his internet service provider, and the network engineer who came out refused to use his own company's measurement system, because they know it was gamed, and was actually using different things when trying to diagnose my son's apartment complex. But I'd love to hear people's thoughts, going back to the privacy aspect; we sort of glossed over that, and it seemed like an important one. Does anybody have thoughts?
P
I would encourage you to come and see what we have to say on Thursday, because basically it's possible to insert random measurements such that they don't reveal what was actually being done, in terms of which websites are being visited, at least at the network level.
V
Toerless here. I think the further you go up towards a quality-of-experience metric, the less privacy you'll have, right? I mean, if you don't want to reveal whether you're doing video streaming with a 30-second playout, or web browsing where you want to have the shortest round trip trying to see all the wonderful ads (just to give the two classic examples), then you won't get good quality-of-experience metrics for these two applications; you can still get the lower-layer network metrics.
AG
Ahmed: I'm just going to say that there is some encouraging work in the IETF, I think, about having servers in the network.
AG
That says something about the quality of service that the network is able to provide at that particular moment in time, or sometimes over a time window. So while so far we've spoken about over-the-top types of measurements, the networks themselves do have rich information, and if that could be made available, that would be really, really helpful, and I think the IETF does have the mechanisms for doing that.
AG
I think it needs more thinking and more of a theoretical basis for any work to push that forward.
I
Yes, one of my visions for measurement (and I hate to admit how far back this goes) has been always-on measurement in the server, and having ways of exposing the metrics that the content provider has of the path. I admit I gravitate towards those kinds of measurements, and I actually touch on that a little bit in my talk.
I
With food labeling, you can imagine standardizing some of the rows in such a label, a nutrition label, and having an ISP be able to say, in standard, agreed-upon nutrition language: your network doesn't have enough protein, or whatever it needs to say. And that would actually be useful information to the user, useful information backed up by measurements that can be done in multiple ways, and useful information to the provider.
AF
O
Yes, so I just wanted to comment also on the privacy aspect, and basically I have two comments there. One is that perhaps not all measurements need to be passive measurements; there's also a role for active measurements. Active measurements are one way, I think, to address some of the privacy aspects, since you have synthetic traffic and so forth. Obviously the question is what that says about your production traffic, but it depends on the purpose.
O
Active measurement has its place, particularly when you want to assess the general quality and so forth, and when you don't use the data for accounting purposes; then, of course, you would have to measure and observe the actual service levels. And the second aspect, with regard to privacy, is that perhaps there's a role not only for measurement but also for prediction, where you basically measure one set of metrics, which are not directly what you want, but from which you can essentially infer the metrics you do want.
AF
Thank you. All right, I've put myself in the queue; just another quick comment on the privacy aspect.
AF
Particularly as we look at things using QUIC, or as we start having more encrypted TLS client hellos, there's going to be a lot less visibility in the networks. And so I want to posit here that, as we are gathering these measurements, thinking about the client's role in getting the various measurements, either passively or actively, and aggregating them in a way that the client could export the data meaningfully, giving a good summary without revealing specifically what was being accessed, would, I think, be an interesting exercise.
AK
Yeah, thank you. Sort of following up on what Tommy said (and I agree with everything he said, but I want to expand on it just a little bit): the basic setup is that we have multiple parties. There are endpoints, there are network components in different parts of the network, there's test equipment and probes and so on. And I don't view the privacy problem as unsolvable.
AK
You just have to be careful, when you design the system, about what information you share. You don't have to share everything; that's not the point, and we're not in that world anymore. But you do have to share something if you actually want to get full measurement functionality: at least being able to figure out where the problem is, and not just the end-to-end result, or if you want to get more always-on measurements and so on.
AG
Sorry, I was on mute. On the privacy side, I can't agree more with what's been said; all our research shows that privacy is an important element in consumers' thoughts. And Tommy, I think you were echoing what I was saying, which is that revealing
AG
sort of performance measures should be part of the protocol design and the system design at birth, not bolted on as an afterthought. I'm just going to pick up on the server-side measurements which Matt, I think, was talking about. I am aware of one system that does that: all the work is done from the server side, very little on the client side.
AG
There are merits in that; if you think about mobile battery life, that sort of thing, there is that angle, of course. The claim for server-side measurements is that they don't burden the client, and that is better in the mobile case. Another point, I think, is active versus passive.
AL
Sorry, yeah. So, as Tommy said, it's a bit QUIC-specific: it becomes more difficult to do in-network stuff. The project I've been working on is called qlog, which we'll also present tomorrow. The idea is that we log everything on the endpoints, clients and servers, and then we aggregate across the different layers (and we also have privacy questions there). And what we hope, indeed, is something like what Tommy says: that we can aggregate a lot of these logs together if we want to share them with, let's say, service providers for observability.
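As a rough illustration of the endpoint-logging idea: a qlog-style trace is essentially a stream of timestamped JSON events per connection. The event names and fields below are simplified from the draft schema, so treat them as illustrative, not authoritative.

```python
import json

def make_event(time_ms, name, data):
    """Build one qlog-style event: a relative timestamp in ms, an
    event name such as 'transport:packet_received', and a data object."""
    return {"time": time_ms, "name": name, "data": data}

# A minimal single-connection trace (field names simplified).
trace = {
    "title": "example connection",
    "events": [
        make_event(0.0, "transport:packet_sent", {"packet_size": 1200}),
        make_event(12.5, "transport:packet_received", {"packet_size": 60}),
    ],
}

serialized = json.dumps(trace)
```

Because the events are plain structured data, aggregation across layers or across connections reduces to ordinary JSON processing, which is part of qlog's appeal.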
AL
But another approach that we had in mind was to define or standardize privacy levels: to say, if you have privacy level 15, then this is what you strip or what you should obfuscate. But in some use cases you still want to have some identifiable information, even if it's just IPs or just some URLs.
M
Yeah, regarding privacy, I just wanted to comment that privacy is not a static game, and if we are assuming that bad players exist, it would be prudent to assume that they will continuously extend their methods. Therefore, if we are thinking about including privacy in measurements, it would probably be great to think about how to update the privacy measures, and how not to ossify something that works today but will be naive and irrelevant five years from now.
E
Jana here; thanks, Tommy. I appreciate that, and the conversation has been great so far. First, I think it's really useful to think about how to share metrics, how to gather metrics and aggregate them; all of that is very useful. To the specific point that Tommy made: there is a PPM mailing list at the IETF, and there's probably going to be an effort sometime at IETF 112 on trying to do something exactly like that. So keep an eye out for that.
E
I
also
want
to
note
yet
again
harping
on
my
hey
applications
already
exist
on
the
network
thing.
There
are
applications
that
already
exist.
Vendors
already
exist.
We
are
not
doing
this
for
the
first
time
it's
been
done
and
it
continues
to
be
used.
Metrics,
there's
a
lot
of
people
already
measuring
the
network
in
a
lot
of
ways.
E
So
one
one
way
in
which
I
can
see
us
maybe
doing
some
standardization
help
or
work
here-
might
be
to
actually
help
or
encourage
vendors
to
share
metrics
that
they're
already
gathering
about
networks
for
their
particular
applications,
figure
out
how
to
share
them
privately
figure
out
how
to
share
them
with
all
the
privacy
controls
in
place
and
how
to
aggregate
those
bits
of
information.
That'll
be
very
useful.
There's
data
already
sitting
there.
We
just
need
to
gather
it.
AF
Go ahead.
AG
Just on the active versus passive: I'll put a comment in the chat, but that doesn't mean there is no room for passive or active. I think there is room for both in the toolbox; each one is important for something, and perhaps not the other. In terms of aggregating data from different measurement systems, just picking up on Jana: the important thing is that when we aggregate data from different places, it needs to be harmonized and comparable.
AG
Each measurement system will have its own biases. This is why I was talking about metrics, methods of measurement, and also down to the software and the implementation; that's something we have very much seen.
AF
I think Robin. Go ahead.
AL
The real thing is that I am not personally convinced that they are currently not doing this purely out of privacy considerations. I don't think privacy is the big issue there, or the big problem, or something that we cannot solve; I think there are deeper issues at stake. So: do we need other incentives for the big companies to share their real application metrics in any privacy-sensitive form? That would be my real question, now or tomorrow.
A
Like
a
good
end
to
that
first
session,
so
lots
of
good
discussion
there,
I
have
been
taking
little
tiny
notes
of
major
things
that
I'm
hearing
out
of
you
know
all
this
discussion
and
that
bullet
list
is
actually
getting
quite
long.
So
I
I
appreciate
all
the
thoughts
I
think
with
that,
we'll
roll
into
metrics
right
and
so
the
the
beginning
was
all
problem
framing
and
we're
going
to
start
getting
into
more
concrete
stuff.
I
think,
after
this.
I
About two months ago I first encountered the Apple responsiveness developer tool; it was actually a video presentation shown in connection with one of Dave Täht's bufferbloat RPM meetings that he's currently been running. And there's a whole bunch of things that the responsiveness metric does that I think are going to prove to be very, very important in this space.
I
I assume everybody has seen it, but it's basically the rate at which TCP, at which the network, delivers rounds of information. There are a bunch of nice things about this metric: you can construct a concrete definition of it, and there are parallel metrics at different layers in the stack.
I
What I'm using to estimate the rounds is the final value of the smoothed RTT estimator. Measurement Lab has 12 years of archived 10-millisecond samples of the smoothed RTT data for all of the tests, and I believe it's all intact, although I haven't confirmed that. Next slide.
I
So the responsiveness measure that I would like to use is actually accumulated rounds, which you can measure during a test, and I believe that we can compute this for all of our archived data. We could also imagine computing it inside the NDT measurement tool.
I
It's
it's
actually
just
a
telemetry
change,
it's
not
a
code
change
or
a
wire
format
change,
and
I
also
believe
that
it'll
turn
out
that
responsiveness
in
today's
internet
is
a
direct
predictor
of
page
load
time
for
the
vast
majority
of
web
pages.
The
constraints
are,
if
most
of
the
objects
run
out
of
data
before
you're
out
of
slow
start
so
before
they
receive
a
congestion
signal
and
most
of
the
compute
to
invoke
the
next
to
load.
Additional
objects
is
fairly
small.
I
They don't, which is actually something we haven't explored. There are a whole lot of caveats about this graph, which I will eventually get to; I'm actually more interested in the plumbing to produce the graphs and make sure the data is correct than in actually studying the graphs. I want to encourage you to go look at the paper; there's a link in the paper to a Colab document which contains the source code to generate these graphs.
I
This is the same data with a slightly different query; this is also in the published document. This one is split by ISP. It says "test volume" but I actually changed it; it's the ISPs with the most clients. The downward slope circa January 2020 is actually the version change for the platform, and there's a discontinuity note in the data, because the platform changed at that point.
I
So
a
bunch
of
things
on
my
to-do
list,
like
I
said
this
is
a
very
early
study,
there's
a
whole
bunch
of
different
ways
of
of
of
measuring
responsiveness,
and
I
want
to
compare
them
and
understand
what
the
differences
are.
It
might
turn
out
to
be
that
certain
kinds
of
methodologies
are
different.
One
of
the
things
I'm
concerned
about
for
multi-stream
responsiveness
is
how
you
generate
cross
traffic
is
extremely
important.
I've
already
noted
that
the
congestion
control
algorithm
is
extremely
important.
I
There's
a
bunch
of
things
there
that
need
to
be
looked
at
there.
I
want
to
prove
that
response
in
this
predicts
web
page
load
time.
I
actually
have
a
very
simple
way
of
doing
of
doing
that.
I
think.
Without
doing
a
big
study,
I
can
actually
construct
a
network
where
I
can
control
the
responsiveness
and
show
that,
as
I
adjust,
the
responsiveness
somebody's
got
a
microphone
open.
I
can
show
that,
as
I
adjust
the
mic
responsiveness,
the
page
load
time
is
predicted
by
it.
I
So I think that's all I have. Oh, one more: the real problem is that I want a responsiveness metric that is exposed. I think part of the reason why we haven't solved the bufferbloat problem is because it's not visible.
I
My hope is that we can expose a responsiveness metric side by side with our speed test. We're one of the people who could fairly be blamed for the over-focus on speed, and we could display responsiveness; it's only a matter of programming. I think we need to convince ourselves that the metric is right first, but we will do that, and then people can complain, people can see the data, and the engineers can see that networks are different.
AF
Okay, great. Let's see, we have a bunch of other stats in here, if people want to take quick glances at these. All right, and I think next up we have Brandon.
AM
Here, all the components in that path, including things like how the cable modem at the end of the path handles AQM for the uplink, for different aggregates of users. So the user aggregate on the right here is all ISP users, or cable modem users, in Redwood City; we break up users by geospatial aspects and also by their connection type and by their provider. Next slide.
AM
So
two
examples
of
qoe
metrics.
These
are
simple:
simplified
versions
of
what
we
use
in
production
so
for
video,
for
instance,
measuring
qoe.
I
want
to
make
sure
that
I
can
play
the
video
stall
free.
I
wanted
to
have
acceptable
moss,
which
means
that
the
quality
is
good
enough,
that
somebody's
going
to
want
to
watch
it,
and
I
wanted
to
begin
within
one
second
of
the
request
being
made
by
the
client.
AM
Likewise,
I
want
images
to
load
within
500
milliseconds
of
a
user
requesting
that
image,
meaning
that
the
box,
where
the
image
should
appear,
is
ready
to
render
the
image
next
slide
and
the
reason
we
want
to
know
this
is
we
want
to
be
able
to
characterize
our
products
requirements.
We
want
to
be
able
to
quickly
evaluate
when
changes
in
network
conditions
impact
qoa.
AM
So
if
a
metric
changes
does
it
actually
matter
or
is
it
inconsequential
in
terms
of
what
our
users
are
going
to
perceive
and
we
want
to
be
able
to
identify
actionable
improvements
at
the
product
network
and
transport
layers?
I
improve
the
product.
Yeah
include
the
product
piece
here,
because
we
can't
always
make
improvements
at
the
network
player,
but
we
can
often
make
improvements
to
how
our
product
adapts
to
those
network
conditions
next
slide.
AM
So
I'm
going
to
be
talking
today
about
the
types
of
traffic
I
mentioned
a
little
earlier,
I'm
not
talking
about
real-time
traffic.
We
use
a
slightly
different
set
of
metrics
to
understand
that
type
of
traffic
like
the
column
on
right
now,
but
for
this
transfer
of
object,
type
traffic,
we
measure
two
things.
The
first
is
the
propagation
delay
for
each
of
these
aggregates
and
the
propagation
delay
is
going
to
tell
us
the
minimum
network
time
for
any
request,
regardless
of
the
response
times.
AM
The
propagation
delay
is
telling
us
the
link,
the
propagation
time
of
all
the
links,
things
like
interweaving,
the
access
network,
which
can
be
significant
on
some
access
technologies,
transmission
time,
the
presence
of
persistent
queues,
and
we
measure
it
as
the
median
min
rtt,
as
measured
by
our
quick
stack
at
the
server
over
connections
from
each
of
these
aggregates
in
the
time
windows.
So
we
can
see
over
a
day
for
each
of
these
aggregates.
How
does
this
measure
propagation
delay
change
next
slide?
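A rough sketch of that aggregation step, assuming we already have each connection's minimum observed RTT for the window (the grouping and units are illustrative, not the production pipeline):

```python
from statistics import median

def propagation_delay_ms(min_rtts_by_conn):
    """Median of per-connection min-RTTs for one aggregate and one
    time window: each connection contributes its single smallest
    observed RTT, and the median across connections damps outliers
    such as an individual user's congested uplink."""
    return median(min_rtts_by_conn)

# Per-connection min RTTs (ms) for one aggregate in one window:
window = [18.0, 20.0, 19.5, 85.0, 21.0]   # one outlier connection
# propagation_delay_ms(window) → 20.0
```

Taking the min per connection first strips out transient queueing, and the median across connections strips out per-user anomalies, which is what lets the result track the shared path rather than any one client.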
AM
The
other
thing
we
look
at
is
what
we
call
the
good
put
success
rate,
and
this
is
the
probability
that
a
connection
in
one
of
these
aggregates
can
deliver
a
stream
of
bytes
at
a
given
rate.
Again
earlier,
I
brought
up
if
it's
500
milliseconds
of
propagation
play.
We
know,
that's
always
going
to
be
a
component
make
transfer.
The
question
then
is
hopefully,
after
I
account
for
that
propagation
delay.
Can
the
network
stream
bites
to
me
in
a
reliable
manner?
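The success-rate idea can be sketched as a simple empirical probability over a window, here against a hypothetical 2.5 Mbps target (the sample structure is an assumption for illustration, not the production definition):

```python
def goodput_success_rate(observed_mbps, target_mbps=2.5):
    """Fraction of sampled transfers in an aggregate whose achieved
    goodput (payload delivered per unit time, after removing the
    propagation-delay component) met or exceeded the target rate."""
    if not observed_mbps:
        return 0.0
    hits = sum(1 for r in observed_mbps if r >= target_mbps)
    return hits / len(observed_mbps)

# Four of five sampled transfers reached the 2.5 Mbps target:
samples = [3.1, 2.6, 0.9, 4.0, 2.5]
# goodput_success_rate(samples) → 0.8
```

Framing it as a probability against an application requirement, rather than as a raw throughput, is what lets it be compared directly against QoE thresholds like "2.5 Mbps for stall-free video."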
AM
This
is
going
to
account
for
congestion
control,
behavior
transport,
behavior
of
loss
shaping
at
the
bottom
like
link,
if
there's
contention
of
the
user's
uplink
preventing
ax
from
getting
back
to
the
server.
That's
going
to
appear
here
as
well.
AM
B
AM
Not
downloading
bulk
files,
they're
downloading
small
objects
and
furthermore,
we
can't
just
use
object
transfer
times,
because
those
are
going
to
be
subject
to
things
such
as
application
behavior.
If
I
change
how
my
application
multiplexes
traffic,
I
may
change
observe
a
change
in
object.
Transfer
times
that
doesn't
mean
there's
been
any
change
in
the
underlying
network
conditions.
AM
Likewise,
cdn
conditions
can
be
incorporated
into
object,
transfer
times
as
well
as
startup
behavior
and
things
like
propagation
delay
again.
So
the
way
we
measure
this
again
is
that
the
server
side
for
each
of
these
connections
in
an
aggregate
we're
looking
at
how
quickly
can
the
transport
deliver
bytes
when
those
bytes
are
available
to
be
sent
to
the
user
or
to
the
client,
and
that
becomes
our
measurement
of
goodput
next
slide.
AM
This
is
a
simple
example
here
of
propagation
delay
and
good,
put
success
rate
for
2.5
meters
per
second
for
a
users
in
a
cellular
network.
We
can
see
at
the
bottom,
which
shows
the
relative
traffic
volume
for
that
time
in
the
day
as
the
traffic
volume
increases,
the
success
rate
for
2.5
megabits
per
second
decreases.
AM
So
these
users,
if
the
application,
requires
2.5
megabits
per
second
to
provide
good
qov,
it's
going
to
have
a
harder
time
and
often
case
it
won't
be
able
to
provide
it
at
these
peak
hours
of
the
day
and
through
other
signals,
we
determine
that
this
is
likely
due
to
cellular
rand
congestion
next
slide,
so
we're
going
to
very
quickly
go
over
these.
There
is
oftentimes
very
little
degradation
for
aggregates
of
users.
Network
traffic
is
pretty
constant
over
time.
We
talk
about
this
further
in
the
paper.
AM
So I think the final points that are probably most interesting to this group are: why are we using this goodput success rate and propagation delay instead of metrics like SRTT, or the loss and retransmission rate? The reality is that loss and retransmission rates are really difficult to interpret in isolation.
AM
The goodput success rate more holistically captures how all these underlying network conditions, along with how the congestion control and transport behave under those conditions, impact our ability to deliver bytes to the users. And then the final thing here is: well, I talked about all these things, about how we measure QoS.
AM
How
do
we
determine
what
qos
conditions
are
required
for
good
qoe
and
what
that
really
comes
down
to
is
taking
those
qa
requirements
that
I
described
earlier
and
seeing
how
a
aggregate's
ability,
how
users
and
aggregates
ability
to
achieve
those
requirements
changes
as
network
conditions
change?
So
if
I
have
two
aggregates
of
clients-
and
they
have
different
network
conditions
that
we've
observed
through
these
measurements,
how
do
those
network
conditions
as
they
change
impact
the
client's
ability
to
have
security
and
with
that
happy
to
take
any
clarifying
questions.
AF
All right, any clarifying questions? Looks like we have a couple in the queue for that, so we have Robin first.
AM
We do this at the QUIC layer now; we also have a metric that works for TCP, but our QUIC metric is better. What we're looking at there is, as we push bytes into the network, what's the...
AM
Linux has socket timestamps that the kernel can give you on both TX and ACK; those ACK timestamps and TX timestamps can be used to do much of the same work. Happy to discuss. Thanks.
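As an illustrative sketch (not the actual production code), given per-chunk TX and ACK timestamps like those the Linux kernel can report via `SO_TIMESTAMPING`, a goodput estimate over a send could look like:

```python
def goodput_bps(chunks):
    """Estimate goodput from (bytes, tx_time_s, ack_time_s) tuples:
    total acknowledged payload divided by the span from the first
    transmission to the last acknowledgement, in bits per second."""
    total_bytes = sum(b for b, _, _ in chunks)
    start = min(tx for _, tx, _ in chunks)
    end = max(ack for _, _, ack in chunks)
    return 8 * total_bytes / (end - start)

# 100 KB acknowledged over a 0.4 s span:
chunks = [(50_000, 0.0, 0.25), (50_000, 0.1, 0.4)]
# goodput_bps(chunks) → 2_000_000.0
```

Measuring from transmission to acknowledgement is what makes this a goodput-style figure (bytes the receiver actually confirmed) rather than a raw send-rate.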
E
Sami here; thanks for the talk. Two quick questions. One: have you looked at and considered delivery rate, which is in TCP in the Linux kernel (and you might consider an equivalent of that for QUIC as well), in addition to this metric? And second: do you compensate, in your goodput measurement, for application-limited periods and application-limited behaviors?
AM
There is a hole in the measurement space where we're no longer pushing the network as hard as we could, and that's being incorporated into our measurements. For the first piece here, the delivery rate: the challenge with the delivery rate is that it's a measurement of throughput, not goodput, because my congestion control algorithm is using it to understand what rate it should be sending at, and so it's not always going to be representative.
AM
But
the
other
thing
is
that's
measured
over
a
single
rtt
window,
and
so
it
can
vary
widely
with
what
we're
doing
here,
we're
able
to
look
at.
I
have
a
50,
kilobyte
or
100
kilobyte
object.
How
long
can
I
expect
that
object
to
transfer,
and
we
found
that
the
delivery
rate
often
is
a
weaker
estimator
than
what
we
built
for
that
happy
to
discuss
further
offline.
D
I actually have comments for Matt and Brandon, so I'll try to do both in 60 seconds. Thank you for talking about round trips per minute, Matt; some people on the call will have heard of that, but I expect some have not.
D
The idea behind RPM is that latency in milliseconds is a kind of abstract concept, especially to the end users that we've been talking about. So we took the reciprocal of that (rounds per second, or hertz), and then we went to round trips per minute. It's nice because it gives a number where more is better; it's typically three or four digits and has no decimal places. So it's a nice, convenient metric, and, coincidentally, it aligns with car RPMs.
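The reciprocal relationship is simple enough to state exactly: a working-condition round-trip time of t milliseconds corresponds to 60000 / t round trips per minute. A trivial sketch of the conversion:

```python
def rpm(rtt_ms):
    """Convert a round-trip time in milliseconds to round trips per
    minute: the number of request/response rounds the path could
    complete back-to-back in 60 seconds."""
    return round(60_000 / rtt_ms)

# rpm(20) → 3000; rpm(300) → 200 (a loaded, bufferbloated path)
```

A 20 ms path scores 3000 RPM while a 300 ms path under load scores 200 RPM, which gives the "more is better, no decimals" number described above.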
D
I
put
a
link
to
the
video
in
the
chat
and
for
matt's
presentation.
That
means
that
when
the
graphs
go
up
and
to
the
right,
that's
a
good
thing:
that's
not
latency
getting
worse.
It's
responsiveness,
getting
better
for
brandon's
comments
about
video
quality
people.
AF
Often
measure
for
out
of
time,
so
can
you
re-include?
Okay,
sure
thank
you
all
right.
Next
torrelis
you
had
gotten
in
and
out
of
queue,
do
you.
V
want to just go? So I think what kind of trickled through at the sidelines of these presentations was the metric I, as a user, am always most interested in, which is the "whose fault is it" metric, right? Looking at the numbers and saying the most likely cause is this or that, which I think goes back to the ultimate question of actionability of metrics. I mean, what...
AF
AG
Go ahead. It was just something Matt said about how different versions of TCP give different performance, and that just sort of goes back to what I was saying earlier: one protocol, one definition, and yet the implementations give different performance. You can see the effect that has on measurements, on interpreting those measurements, and then on acting upon them, because if two TCP implementations give two different answers, you have a problem: which one do you believe?
M
Yeah, I have a question for Brandon. You mentioned that you were using the p50 to correlate between goodput and application performance. I wonder, what have you seen when looking at the p90, p95, or p99 instead of the p50?
AM
The reason we don't want to use these higher percentiles, and arguably we could even use the p5, is that as long as the aggregates we're creating are sets of homogeneous users, meaning, again, it's all cable modem users on ISP X in Redwood City, they should all have about the same propagation delay in terms of what we're measuring here. The p99 is going to be capturing things like, again:
AM
The
user's
uplink
is
congested
because
some
other
application
is
sending
at
that
point
in
time
and
that's
no
longer
a
measure
of
the
propagation
delay
between
facebook
and
that
aggregate
of
end
users,
that's
something
specific
to
that
individual
end
user,
it's
not
as
actionable
on
our
side.
Furthermore,
it's
going
to
appear
in
our
goodput
metric,
because
that
cueing
on
the
uplink
is
going
to
delay
x
and
because
we're
measuring
to
put
at
the
server
side
it's
going
to
appear
in
that
measurement.
So
we
don't
want
to
measure
the
same
thing
twice.
AM
We measure propagation delay, which is all the components that we expect, especially since our connections are pretty short, to be relatively stable over a five-minute period, for instance, and then we measure goodput to see how quickly the network can deliver bytes to our users. So that's why we don't look at the p99 or p99.9.
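The aggregation logic described here, taking a low percentile of RTT samples over a homogeneous user group so that transient self-queueing does not pollute the propagation-delay estimate, can be illustrated with a nearest-rank percentile. This is a sketch of the idea, not Facebook's pipeline; the sample values are invented.

```python
def percentile(samples, p: float):
    """Nearest-rank percentile of a list of RTT samples (milliseconds)."""
    ordered = sorted(samples)
    k = round(p / 100 * (len(ordered) - 1))
    return ordered[k]

# 95 samples near the 50 ms propagation delay, 5 inflated by uplink queueing:
rtts = [50] * 95 + [400] * 5
# percentile(rtts, 50) stays at the propagation delay, while percentile(rtts, 99)
# is dominated by one user's transient queueing; hence the preference for the p50.
```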
AF
B
That one made the case for longitudinal measurements I've just described several times in several talks. This was one of the most exciting graphs I've seen in a long time: roughly a five-fold improvement. I don't know why the CUBIC line got good for a month, but roughly a three- to five-fold improvement in responsiveness across the internet in the last five years.
X
B
The other question is to Brandon, for example at Facebook: do you have measurements over time of some of your core metrics?
AM
Yeah, we keep metrics over time, both to look at how interventions affect the network conditions and, ultimately, how they impact QoE. I think one of the most recent changes we've made in the past year that we've talked about publicly is switching from TCP to QUIC, and, again, we're looking at how that impacts the goodput that we can achieve and how it impacts propagation delay.
AF
All right, thank you. Stuart, you had re-enqueued.
D
Thank you. So this comment is for Brandon, not a criticism, just an observation. When you talk about video quality, you talk about the normal things that people do: MOS, mean opinion score, and rebuffering events, and things like that for watching a continuous video from start to end. The other thing that we're looking at is random access, because people don't always sit down and watch a video from start to end in a two-hour sitting.
AM
AF
Yeah
brandon,
I
guess
going
forward-
would
be
good
to
have
the
speakers
kind
of
enqueue
themselves
as
well.
That's
fine!
I'm
happy.
X
AF
AN
Hi, my question is for Brandon about the measurements. If I understand correctly, you mentioned that you're doing measurements on the server, but I think what's also important is measurement on the client, because sure, you have a good delivery rate, but what's the guarantee that the client has seen that data, or that the application layer has delivered that data to the client? Because that's what matters; it doesn't matter what the server is delivering, I think.
AN
I think both client and server are important. And the second thing I wanted to say, which is related to my first point, is that it's not just the transport layer metrics; the application layer metrics are also important. So yeah, that's all I wanted to say.
AD
Thank you. Lucas, hello. It was kind of a clarifying question, but it was too much to ask during the talk, about the round trips per minute and the responsiveness, which I think is much better than bandwidth.
AD
Let's just be clear, that's been the whole theme of today. But how would that map to web page loading, given how incredibly complicated the whole process of being a browser and rendering pages is? Moving resources around slightly, or changing the server software to maybe serve things in a slightly different order, or loading an ad from a third-party network on that page over a different TCP connection that competes with the things trying to come from the first party. I just wondered how that whole mapping was done, and whether there's a methodology that could be shared, or whether it worked for a specific type of web page, which is still fine. Just trying to understand how that might be more applicable to people.
E
Hi, I want to quickly go back to Matt's thing, which probably speaks a little bit to what Lucas said: Matt's graph showing the longitudinal data, the one that Dave was talking about just now. It seems to suggest that Reno was actually the most responsive of Reno, CUBIC, and BBR.
E
Now, that actually is counterintuitive to me in certain ways. But I want to note here that it's important for us to not just think about responsiveness as the better metric, but as an additional metric that fills out the spectrum of metrics that we need. We want to measure latency, we want to measure bandwidth; those are super useful and important.
B
E
Responsiveness is perhaps something additional; it offers a third axis, so to speak, that we also need to measure. So let's not throw the baby out with the bathwater. Let's not discard bandwidth; it is important and useful. It's just not adequate, is what I would like to suggest. But I would also like to hear from Matt what he thinks about the responsiveness of Reno versus BBR. Thanks. All right.
AM
Two things here. One was the question around measuring from the server side: whether that's going to be representative of what the client experiences. With the client stack that we have, for instance on an Android application, we write the QUIC transport layer; it's our own implementation, open source, called mvfst.
AM
We
write
the
application
that
sits
on
top
of
it
and
because
it's
quick
and
not
tcp,
we
can
be
assured
that
acts
that
we're
seeing
are
not
coming
from,
for
instance,
a
pep,
a
performance
enhancing
proxy
between
us
and
the
user
and
thus
are
representative
of
when
the
apple.
When
the
traffic
lies
at
the
client
and
when
it's
going
to
go
up
to
the
application,
I
mean
we
compare
it
internally,
our
time
stamps
and
their
you
know,
the
measurements
from
both
sides
are
strongly
correlated.
AM
The
other
piece
here
was
random
access
completely
agree.
I
think
we
have
some
of
that
in
looking
at
when
a
user
clicks
on
a
new
video.
How
long
is
it
going
to
take
for
that?
Video
to
start
playing?
Facebook
is
a
little
bit
different
than
netflix
or
other
content
providers,
and
our
videos
are
typically
short,
so
random
access
may
not
apply
as
much
you're,
not
shipping
skipping
forward
trap
chapters,
but
still
a
good
thing
which
comes
later
thanks.
I
Yeah, so there's a thought experiment you can imagine doing which shows why responsiveness predicts web browsing, and that is, I'm going to have trouble doing this in a minute, but you imagine a network device called a rounds gate.
I
The elapsed time for the browser to finish will depend on the number of times it goes through the rounds gate, so I can count round trips in the network. And if it's always slow start and always cascaded loads, caused by HTML invoking other embedded objects, it ought to be a fixed number. Oh, DNS also goes in there: a fixed number of rounds.
I
Some
of
them
are
application
layer
rounds
and
some
of
them
are
our
network
layer
rounds,
but
I
can
just
count
the
round
number
of
rounds
on
the
web
page
and
rpm
is
going
to
predict
how
long
it's
going
to
take
to
load
to
the
first
approximation,
and
that's
you
know
I
want
to
write
this
up,
but
yep,
that's
good.
It's
pretty
obvious
once
you
see
it.
Thank
you.
AF
All right. Stuart, responding to Lucas.
D
Yes. Lucas, you asked a question which Matt has touched on, but I want to expand on that. Why do round trips matter for web page loading? There's a sequence of round trips. DNS is a round trip: you wait for the reply before you can make the connection. Then, typically today, a round trip for TCP and a round trip for TLS. Then you've got to GET index.html; the first thing it refers to is a stylesheet, so you're going to do another GET for the stylesheet, or an image.
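The sequence just listed can be tallied directly. This is a sketch under simplifying assumptions (a one-round TLS handshake as in TLS 1.3, and one round per level of the fetch cascade); the function names are mine:

```python
def connection_setup_rounds(tls: bool = True) -> int:
    """DNS lookup + TCP handshake (+ TLS handshake) before the first GET."""
    return 1 + 1 + (1 if tls else 0)

def page_fetch_rounds(cascade_depth: int, tls: bool = True) -> int:
    """Each level of the cascade (index.html -> stylesheet -> image ...)
    adds at least one GET/response round on top of connection setup."""
    return connection_setup_rounds(tls) + cascade_depth

# index.html, then a stylesheet, then an image: 3 setup rounds + 3 fetch rounds.
```

Every one of those rounds is paced by the round-trip time, which is why responsiveness rather than bandwidth dominates page-load latency.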
AF
All right, thank you. I'd enqueued myself.
AF
I have more of a meta question for us, as we have these two different topics we're going through, and I wanted to hear from people. We have, on one hand, the concept of RPM and responsiveness, and then, from Brandon's talk, we have the propagation delay. To what degree are these different things? Are they the same thing? Are they converging on this axis that we're looking at, and is one way of framing it preferable to the other in the eyes of this group?
E
This will be partly in response to the question that you just asked. I want to say again: yes, it's absolutely important and critical, and people have heard me say this in the past as well; round-trip time is critical, and the speed of light isn't going anywhere. Web pages, typically 80 percent of page loads at servers, tend to end while still in slow start, so we know that web pages are not saturating the network, so bandwidth isn't the problem there.
E
However,
it
is
important
not
to
walk
away
from
bandwidth.
We
want
to
add
bandwidth
is
still
useful
and
still
important.
There
are
plenty
of
situations
under
which
you
are
still
bandwidth,
limited.
Consider
you
know
when
I
travel,
for
example,
in
india.
I
do
often
find
that
the
page
that
I'm
getting
google
automatically
sends
me
off
to
the
the
low
quality
page
loads,
because
my
my
bandwidth
and
roundup
time
are
measured
to
be
low.
Now
this
happens,
and
this
is
real
and
it's
important
to
consider
that
those
that's
important
as
well.
E
AJ
Yeah, hey guys. To Stuart's point there about the round trips:
AJ
I
think
that's
a
very
good
description
of
how
the
sequence
of
of
steps
makes
the
round
trip
very
important,
and
then
I
also
want
to
just
point
out
that
it's
very
it's
it's
important
to
keep
more
than
just
an
average,
for
instance
of
that
round
trip
time,
because
if
you
have
rare
events
that
show
up
once
in
a
while
on
one
of
those
round
trips,
then
because
they
all
line
up
in
a
in
a
change,
then
that
will
affect
the
outcome
of
the
load
time
right.
AJ
So
any
metric
that
wants
to
capture
that
effect
needs
to
either
do
the
entire
sequence,
as
I
think
is,
if
I
understand
correctly,
that's
what
you're
doing
with
the
rpm
or
to
keep
some
sort
of
the
statistical
distribution
of
those
round-trip
times
all
right.
Thank
you.
AF
AM
In terms of comparing the two metrics, I actually need a little more understanding from Matt to completely understand the other metric that I'm looking at, and also how NDT works. But in terms of responsiveness, if you think of it this way: propagation delay is going to be the best-case responsiveness. If I have a user connected via a satellite connection with 500 milliseconds of propagation delay, it doesn't matter what size of response I send.
AM
You
know
some
amount
of
time
for
this
response
to
arrive
depending
on
that
response's
size,
but
I
do
I
do
think
they
are
measuring
slightly
different
things.
I
think
that
they're
both
relevant,
I
think
the
looking
at
srt,
as
I
mentioned,
is
just
difficult
to
understand.
When
that
increases.
S
B
To be somewhat deliberately contrarian: this conversation is taking place over a video conference. In that case, one-way delays in both directions are very important, and coming up with additional metrics on top of what we have, along Jana's lines, for covering jitter and other flaws in the user experience of video conferencing is very important. I wish we could all work together towards improving video conferencing behaviors as well.
AF
All right, Toerless.
V
Yes, I do like the marketing benefits of RPM; that's really cool. I like the original vision of the internet, right, that I can go to places all over the world, and obviously the further I go, the harder it gets, and I think that's not represented in that metric yet. Obviously the absolute experience provided by a local cache is ideal, but I can't have that for all types of services.
AF
AF
J
D
Yes
does
this:
it
seems
like
there's
some
interest
in
this
that
there's
been
various
people
asking
questions
around
this
subject.
So
I
can
talk
a
little
bit
about
what
the
rpm
tool
does
bear
in
mind.
This
is
still
in
development
and
it
will
evolve
part
of
the
point
of
this
workshop.
D
Is
we
want
to
get
feedback
from
other
people
about
ways?
We
could
improve
that,
but
I'll
tell
you
where
we
are
now
for
start.
We
don't
measure
round
trip
time
with
ping,
because
we
want
to
measure
real
world
effects,
and
we
talk
tell
you
about
gaming.
When
you
measure
round
trip
time
with
icmp
echo
request,
it
motivates
network
vendors
to
prioritize
icmp,
echo
request
which
doesn't
help
anything
else.
D
Unless
you
run
all
your
traffic
over
icmp
echo
request,
so
we
measure
it
using
actual
http
2
ping
frames
in
the
same
http
stream,
because
that's
what
matters
when
you're
loading
a
web
page
you're
doing
a
get
and
getting
the
response,
so
we
want
to
measure
the
in
connection
around
trip
time.
D
It
should
send
it
as
fast
as
the
network
will
carry,
which
means
one
user,
with
one
phone
with
one
photograph
should
be
saturating
the
network
and,
if
they're,
not
file
a
bug
report,
when
I'm
doing
that
the
video
conference
should
not
go
to
hell.
So
in
the
rpm
test
we
saturate
the
uplink
in
both
directions.
Up
and
down,
we
send
tcp
data
until
it
gets
out
of
slow
start
and
experiences,
one
or
more
losses
when
we
believe
we
have
pushed
the
buffers
to
overflow.
D
We
start
measuring
the
in-stream
http
ping
frame
round
trip
time,
because
that
is
a
measure
of.
If
I
skip
ahead
in
a
video
and
suddenly
my
video
client
says
no,
don't
get
that
get
this
instead,
what
is
the
minimum
time
that
that
new
get
request
can
deliver
the
new
video
data
to
me
that
I
now
want
to
be
watching?
So
that's
a
bit
of
background.
It
will
change
over
time.
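The effect the test is designed to expose, a standing queue at the bottleneck adding its drain time to every in-stream round trip, can be modeled in a few lines. This is a toy model of the mechanism, not the tool's implementation; the buffer and link numbers are illustrative.

```python
def loaded_rtt_ms(idle_rtt_ms: float, queue_bytes: float, bottleneck_bps: float) -> float:
    """Under load, a full FIFO queue at the bottleneck adds its drain time
    (the bufferbloat) to every round trip."""
    queue_delay_ms = queue_bytes * 8 / bottleneck_bps * 1000
    return idle_rtt_ms + queue_delay_ms

def working_rpm(idle_rtt_ms: float, queue_bytes: float, bottleneck_bps: float) -> int:
    """Round trips per minute measured while the link is saturated."""
    return round(60_000 / loaded_rtt_ms(idle_rtt_ms, queue_bytes, bottleneck_bps))

# A 125 KB buffer on a 10 Mbit/s link adds 100 ms to a 20 ms idle RTT,
# collapsing 3000 idle RPM to 500 RPM under working conditions.
```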
AF
Thank you; that's given us good background and also time to rebuild our queue. Wes, go for it.
A
AF
E
Thank you for that explanation, Stuart. I had one, I don't know if it's a question or a comment. It's a question really, and if it's not a question, then it's a comment.
E
Basically,
what
would
be
really
useful
is
to
be
able
to
measure
the
responsiveness
of
the
network,
not
when
you
are
doing
the
measurement,
but
when
I'm
doing
an
activity,
so,
for
example,
if
I'm
watching
netflix
it'll
be
good
for
me
to
be
able
to
use
the
tool
to
measure
how
responsive
my
network
is
so
perhaps
I
can
predict
how
fast
I
can
scrub
the
video
so
to
speak
as
you're
talking
about
you
know
how
you
know.
E
Is
it
reasonable
for
me
to
scrub
through
the
video
and,
if
I'm
scrubbing,
through
the
video,
if
I
can't,
if
I'm
not
able
to
do
it
well
well,
I
can
run
this
tool
and
go
well.
That's
because
my
network
is
not
responsive
now
that
that
I'm
trying
to
separate
the
responsiveness
tool
from
from
from
the
measurement
that
you
have
you
are
doing,
but
I
think
there's
value
in
being
able
to
do
that
separately
as
well.
E
D
AF
AM
Brandon here. Yeah, I might have misunderstood, but it sounded like the RPM tool was driving the bottleneck: during the connection you'd be saturated, building up the queue there, and then you're looking to see what the responsiveness is given that queue. I guess one challenge there is that that might not be representative of what traffic looks like, even for video traffic, for Facebook. If we're not pipelining, it means we're always going to have at most one request in flight at any point in time. Our chunk sizes are on the order of two seconds, so that means at most I can have two seconds' worth of data sitting in that queue, and given connection speeds, I don't expect to typically see that queue extend to its full length like it would during a speed test or an ISO download. Also, given that we're using, in this case, BBR, we expect that to be trying to manage the queue and to prevent it from growing significantly. So I guess I'm curious.
J
AD
There's
interest
about
the
pings
ht
pings,
underneath
the
dos
attack,
and
I
know
that
there's
implementations
that
do
mitigate
those
things.
So,
if
you
have
some
like
frequency
or
ping
rate
in
mind,
soliciting
that
as
a
use
case
could
help
and
the
reason
I
say
that
is
because,
for
example,
grpc
uses
ping
as
a
way
to
gauge
bdp
and
to
try
and
do
some
clever,
auto
tuning
stuff
and
their
algorithm
is
very
aggressive,
possibly
too
over
aggressive.
AD
I
don't
know,
but
we
we've
seen
that
result
in
the
connection
getting
closed.
So
I
think
it's
probably
a
solvable
issue,
but
definitely
something
the
community
might
be
interested
in
cheers.
Z
Thank you, Stuart, for your description of the metric. However, I have another comment, or maybe question: do you think it would be fruitful, not to measure, but to load the network from, for example, another device? Because we often have a link which is shared in some way, and you load the link from your own device, as you said.
Z
As
far
as
I
understand,
you
say
that
you
send,
for
example,
data
from
your
device
and
then
measure
the
link
state
from
the
the
same
device,
for
example
in
real
life.
I
think
we
would
have
the
situation
when
there
there
is
our
neighbor,
not
our
neighbor,
but
our.
I
don't
know
my
mom
or
ch
or
son,
who
is
watching
some
youtube
and
it's
like
a
shared
channel.
Z
You
have
a
different.
You
have
some
kind
of
scheduling,
for
example,
in
the
network
devices
which
share
the
resource
between
you
and
somebody
else,
and
that's
more
like
close
yeah,
that's
it.
Thank
you.
Thank
you.
Thank
you.
E
Sorry,
so
I
want
to
clarify
something
that
I
said
earlier.
Basically,
the
tool
or
the
measurement
right
now
is,
and
this
is
specific
to
the
tool
steward.
So
after
this
I
will
take
it
offline
and
we
can
talk
about
it
separately.
It
seems
that
the
tool
basically
measures
the
network
network
conditions
under
load
that
is
created
by
the
tool.
E
That
is
useful,
but
it
describes
us
that
describes
network
as
a
static
thing
which
could
have
the
worst
case,
potentially
rpm,
of
whatever
it
ends
up
being
now,
bear
in
mind
that
the
rpm
is
here
still
a
function
of
the
condition
controller,
that's
used
by
the
sender,
because
the
sender
uses
a
buffer,
filling
congestion,
controller
versus
a
buffer,
not
filling
condition
controller.
Your
response
is
going
to
be
different.
E
What
I
wanted
to
see
here
was
a
tool
or
what
I
was
suggesting
was
a
tool
that
could
measure
my
network
as
it
is
right
now
without
loading
it,
assuming
that
I
am
loading
it
with
other
stuff.
So
if
I'm
watching
doing
webex
right
now,
my
network
is
in
a
certain
place.
It
would
be
useful
for
me
to
be
able
to
run
a
tool
that
simply
does
pings
and
back
to
some
known
server
just
to
measure
the
responsiveness
of
my
local
network.
E
Q
So I think any such tools are always useful and interesting. The thing that I'm worried about, though, is that, as I'm looking across everything that we're talking about, we haven't mentioned CDNs once, and whether or not you get a cache hit, and things like that, drastically changes the application experience.
Q
We
have
not
any
real
standards
for
how
to
measure
performance
in
the
presence
of
cdns
when
there's
cachets
not
cache
hits,
and
this
will
continue
to
be
especially
problematic
so
long
as
we
have
l7
proxies
that
do
reordering
or
or
or
dealing
with
packet
loss
as
part
of
their
processing,
because
right
now
deploying
an
l7
is
an
engineering
decision
with
trade-offs,
as
opposed
to
an
l4
where
I'm
just
packet,
switching
so
a
little
bit
of
a
mix
up.
AF
Thank you. Back to Stuart.
D
All right, I've got three to answer; let's make this quick. Brandon talked about Facebook only fetching two-second chunks of video, which is great: that limits the amount of damage Facebook can do to other traffic, but it doesn't limit the amount of damage other traffic can do to Facebook's video experience. Mikhail talked about measuring from a different device.
D
That
would
be
useful
thing
to
expand
in
the
future
right
now.
Our
assumption
for
most
users
is,
they
have
a
dom
fifo
queue
and
it
fills
the
capacity
and
it
really
doesn't
matter
whether
it's
two
apps
on
the
same
device,
two
people
in
the
house
using
different
devices,
the
fifo
queue
has
a
certain
size
and
it
gets
filled
with
whatever
packets
are
put
in
it
to
jana's
point
measuring
the
network,
as
it
is
right
now
and
he
doesn't
want
to
load
it
while
doing
webex.
D
Well,
if
one
of
your
family
members
sends
an
email
with
a
large
attachment,
they'll
load
the
network
and
there's
nothing,
you
can
do
about
it.
So
that's
the
scenario
we
want
to
replicate
with
this
tool
is
the
stuff
that
actually
does
happen
every
day
to
normal
users.
That
makes
their
webex
go
to
hell
when
someone
else
loads.
The
network.
I
can
load
the
network
because
I'm
running
fq
caddle
here
and
I
can
do
it
and
it
doesn't
affect
the
webex.
We
want
everybody
to
have
that
experience.
AO
Yeah, I wanted to talk a little bit about that. Low latency is actually a collaboration between both the network and the users of the network, let's say the applications. It relates a little bit to what Stuart said: applications can use the network, but they can also cause the latency.
AO
So it's also important, maybe, that networks can also measure how well applications respond to their signals, for instance.
AF
I
Yes, so I want to raise a side point that I think was missed earlier in framing the question. Knowing where the endpoints are is sort of a sub-problem which the users can't solve very well, but the content providers understand it and know about things like CDNs and such. This is why my vision, and it actually goes back to the Web100 proposal, was always about instrumentation in every content provider: the content providers do measurements, and the content providers know which ISPs they have problems with.
Y
Jared here; I was just going to comment on a few things real quick. Yes, CDNs have lots of measurements, just like the prior commenter said. Our customers actually have even more measurements of their multi-CDN environment, because most customers are multi-CDN, and so they're going to say, oh, today Akamai is great, Fastly is bad.
Y
Or Fastly and Cloudflare are totally impressive, etc., and so we're not going to give Akamai any traffic, or whatever the case is, and they can be hyper-localized in what they do. There are already a lot of companies that do this professional measurement as a service, and, with my IAB hat on, we probably should have done a better job of recruiting some of them to participate in this.
Y
You
know
I
didn't
think
of
that
until
just
now.
The
other
thing
is,
you
know.
As
a
you
know,
I
have
a
small
fiber
to
the
home
isp
the
client
device.
Behavior
is
super
interesting.
I
had
issues
with
my
kids
downloading
steam
games
because
they
I
could
plug
into
the
ethernet
and
pull
at
a
gig
which
impacted
my
webex
experience
with
them,
downloading
800,
megs
or
whatever,
based
on
the
design
of
my
home
network.
It
was
easier
for
me
to
just
upgrade
my
home
network
to
10
gig
to
solve
them.
Y
Sorry,
sorry
and
client
and
client
when,
when
amazon
starts
at
like
colors
up
something.
AP
Thanks, Tommy. I couldn't let the whole day go by without saying something, and it's going to come down to the expression of the RPM results to users. I heard Brandon say a couple of times that the propagation delay is the thing that's going to set the sort of minimum performance, but there's also Jana's point that we're not going to set aside the other metrics that we know and love, like a capacity metric and so forth.
AP
So
these
these
set,
the
you
set
the
upper
kind
of
the
upper
and
lower
bars
for
for
the
the
performance
that
could
be
plotted
or
expressed
on
an
rpm
tachometer
graph.
I
even
envisioned
that
later
on.
So
in
a
couple
of
days
from
now,
we
can
touch
on
this
again.
The
key
point
is
to
have
a
frame
of
reference
that
everybody
can
understand
when
you,
when
you
plot
the
data
for
a
user
thanks.
AF
All right, thank you. And then I had enqueued myself. I just wanted to point out something I was observing around what Mikhail brought up, around which device is generating the load and how that happens. Stuart rightly pointed out that for most of the routers today, the queue won't really be affected much by which device it is. But going back to one of the points that we were talking about in the first half of this day, about the game theory and how you might want to game these metrics:
AF
If
we
were
trying
to
have
rpm
or
something
like
it,
be
kind
of
a
gaming
proof
metric
or
something
that
has
aligned
incentives.
We
also
want
to
make
sure
that
that
metric
is
not
only
based
on
you
generating
the
load
yourself,
because
it
is
possible
to
build
a
router.
Then
that
games
it
such
that
you
say.
Oh,
you
know
this
device
is
generating
load,
but
the
other
ones
aren't
and
therefore
I'll
treat
it
differently
than
the
others.
AF
So
we
may
want
to
think
about
that
in
kind
of
our
overall
gaming
and
have
versions
of
the
metric
that
can
generate
load
for
multiple
devices.
And
I'm
done.
A
I was about to say "time", just to make sure somebody did it for you. All right. Thank you very much, Tommy, for running the last two hours of queuing and sessions; appreciate it greatly. I think today went really well. I'm happy to see all of the discussion that's happened; leaving room for lots of discussion has definitely fired some people up, which is a good thing. A couple of quick closing comments for the day.
A
One
we're
gonna
be
back
here
tomorrow,
starting
at
the
same
time
as
we
did
this
morning
evening
or
whatever
time
it
is
for
you,
one
quick
comment:
a
lot
of
people
have
pointed
out
that
the
camera
ready
copies
of
the
papers
on
the
website
are
are
not
the
most
recent
ones.
We
apologize.
As
we
said
we
we
threw
this.
You
know
together
quickly
due
to
the
compressed
time
schedule
and
we
don't
have
direct
control
over
the
website.
So
we
have
to
give
changes
to
somebody
else
to
actually
make
them
happen.
A
Thank
you
for
everybody
that
contributed
to
today's
discussions
and
presentations
and
verbal
discussions
and
discussions
on
the
slack
channel
which
is
taken
off
as
well
appreciate
it
greatly
and
as
a
reminder,
I
will
close
the
recording
momentarily
and
you
are
welcome
to
stay
around
and
hang
out
in
the
webex
room
and
chat
in
a
more
freeform
session
without
any
moderators
and
timekeepers,
so
you're
all
on
your
own.
A
This
is
sort
of
our
attempt
to
to
deal
with
the
fact
that
we
are
not
meeting
physically
and
there's
no
hallway
conversations
and
no,
you
know
running
off
to
coffee
shops
or
other
forms
of
beverage
allocations
and
to
have
discussions
so
feel
free
to
stick
around
or
even
leave
and
come
back
myself.
I'm
probably
going
to
go
make
lunch
because
I
only
have
an
hour
between
a
bunch
of
other
meetings.