Description
Day 2 of the IAB's Measuring Network Quality for End-Users Workshop, 2021-09-15.
Day 1: https://youtu.be/pFZEa3NN39A
Day 3: https://youtu.be/6q4-G9pnhfY
Workshop page: https://www.iab.org/activities/workshops/network-quality/
B
After I finish my very brief talk, we will be able to share them and continue with the rest of the whole session. Actually, not one but two sessions, that's good, yeah. Okay, do you have the slides?
B
Please write +q in the chat during the presentation or during the discussion. And also, please take into account that we ask all participants to limit the duration of the talk to 60 seconds, just to allow more people to participate in the discussion. And if you want to answer a comment which is currently being made, please either raise your hand or indicate it somehow in the chat, and ask the moderator to give you a turn to talk.
B
So also, just a reminder: we have a text discussion in Slack, and the link to the slides will be copied to the chat after my very brief presentation.
B
So I think that we have a lot to discuss today. So let's start, and I introduce Christoph, who will run the first two sessions of today.
C
Hello, so let's get started right away with Jonathan on metrics helpful in assessing internet quality.
D
Okay, hi, thanks Chris. Next slide, please. So today I'll be sharing with you some metrics that were developed over the course of multiple years of deploying a solution that promises to improve users' internet quality with a router that automatically adapts to their line. So some of these metrics were necessary for the automation functions; many were necessary for triaging issues and to inform and alert users about anomalies with their service.
D
First of these metrics is an accurate determination of line capacity, which actually is harder to nail down than one would think. It takes multiple samples, at varying times across multiple days, to really profile a line, since those are not always consistent. From our data set, we find that about 36 percent of lines vary by more than 10 percent over our measurement periods.
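(A minimal sketch of this kind of multi-sample capacity profiling; the samples_mbps list stands in for repeated throughput-test results taken at varying times across multiple days, and the 10 percent threshold mirrors the figure quoted above.)

    # Sketch only: profile line capacity from repeated throughput samples,
    # as described in the talk. Sample values below are made up.
    import statistics

    def profile_line(samples_mbps):
        median = statistics.median(samples_mbps)
        spread = (max(samples_mbps) - min(samples_mbps)) / median
        return {
            "median_mbps": round(median, 1),
            "relative_spread": round(spread, 2),
            "inconsistent": spread > 0.10,   # varies by more than 10 percent
        }

    # Example with made-up samples: an evening sag shows up as a wide spread.
    print(profile_line([92.0, 95.5, 88.0, 61.0, 90.3]))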
D
So, during these capacity tests, the capture of latency under working loads is performed to determine bufferbloat levels, with the traffic manager both disabled and, later, with the traffic manager enabled, to confirm whether settings are appropriate. And just like capacity, latency metrics from the router accurately profile current internet link performance and allow for accurate reporting. Performing these latency measurements on an ongoing basis allows determining whether current QoS settings are sufficient to mitigate bufferbloat, and they are used to refine these settings dynamically.
D
The depicted graph is a good illustration of a cable line with some significant sag in the evenings, which is usually due to backhaul or loop congestion, requiring the traffic manager to be adjusted down until latencies are acceptable. The system will also dynamically adjust upwards when latencies seem to have recovered.
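(The adjust-down/adjust-up behavior just described could be sketched roughly as the control loop below. This is illustrative only, not the product's actual algorithm; set_shaper_rate_mbps() and measure_loaded_latency_ms() are hypothetical helpers and the thresholds are made-up values.)

    # Illustrative dynamic traffic-manager adjustment, not a real implementation.
    # set_shaper_rate_mbps() and measure_loaded_latency_ms() are hypothetical.
    HIGH_MS, LOW_MS = 100.0, 40.0      # made-up loaded-latency thresholds
    STEP = 0.9                         # shrink or grow the shaped rate by ~10%

    def adjust(shaper_mbps, capacity_mbps):
        loaded_latency = measure_loaded_latency_ms()   # sampled only under load
        if loaded_latency > HIGH_MS:
            shaper_mbps = max(shaper_mbps * STEP, 0.3 * capacity_mbps)
        elif loaded_latency < LOW_MS:
            shaper_mbps = min(shaper_mbps / STEP, capacity_mbps)
        set_shaper_rate_mbps(shaper_mbps)
        return shaper_mbps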
D
This
preserves
a
high
quality
of
user
experience
and,
in
this
case,
there's
still
enough
capacity
for
most
online
activities.
This
user
is
perfectly
happy
with
their
internet
service.
This
reinforces
points
made
in
other
papers
that
qoe
at
the
end
user
level
does
not
necessarily
correlate
with
line
capacity
alone.
A
stable,
low,
latency
line
wins
most
every
time
next
slide.
Please
one
thing
rarely
discussed
are
link
stability,
metrics,
which
are
vital
to
good
qe
link.
D
Loss
is
an
obvious
one,
so
logging
and
reporting
this
helps
users
and
their
isps
have
facts
to
inform
them
about
when
these
events
occur.
This
also
helps
people
understand
when
loss
of
connectivity
is
due
to
local
events
like
poor,
wi-fi
connectivity,
but
the
metric
I'd
like
to
share
a
bit
more
about
is
discussed
further
in
our
document,
and
this
is
the
unstable
line
which
we
define
as
an
increase
in
latencies,
with
a
little
to
no
load.
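(A rough sketch of such an unstable-line check, assuming an idle-latency baseline is already known; ping_ms() and current_load_mbps() are hypothetical helpers and the thresholds are made-up.)

    # Sketch: flag an "unstable line" -- latency rising with little to no load.
    # ping_ms() and current_load_mbps() are hypothetical helpers.
    import statistics

    IDLE_LOAD_MBPS = 1.0       # assumed cutoff for "little to no load"
    RISE_FACTOR = 2.0          # assumed: flag when latency doubles vs. baseline

    def unstable_line(baseline_idle_ms):
        if current_load_mbps() > IDLE_LOAD_MBPS:
            return False                     # only evaluate while the line is quiet
        samples = [ping_ms() for _ in range(10)]
        return statistics.median(samples) > RISE_FACTOR * baseline_idle_ms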
D
Often this is a transient phenomenon, usually caused by failing modems or backhaul congestion, or, as we frequently use it, to detect a cascaded-router scenario where traffic is bypassing our traffic manager in the IQrouter, and this is usually due to the ISP's all-in-one Wi-Fi somehow being remotely enabled. In our data set, we see that more than a quarter of deployments see this at some point, and quite a few deployments have just a tremendous amount of these.
C
Thank you, Jonathan. Are there any clarifying questions or comments in the audience? I saw Bob had one question, but he seems to have clarified it himself already.
E
Thank you very much, and indeed thank you very much for the opportunity to present my position. So I'm Vijay, I'm a professor at UNSW Sydney, which is in Australia.
E
I have been working in the area of QoS and QoE for quite a while and, in fact, a couple of years back I decided to commercialize some of my ideas into a startup called Canopus Networks, where I'm also the current CEO. So if you can move to the next slide.
E
So there are two points that I would like to take the opportunity to make today, the first of which is that we take the position that experience, or quality of experience, should be seen from the user's perspective; we would very much like to see it from the user's perspective.
E
You know, obviously, in Australia the regulator measures things like speed, and I've just put up a sample graph of what consumers or users get to see in the latest report that came out a couple of weeks back. And frankly, as a user, I'm not getting a lot of information out of this, in the sense that the speeds are all fairly close to each other; the gap is very narrow.
E
In fact, the top three are within one percent of each other, and if anything, you know, this race for speed is causing the economics for ISPs to really race to the bottom. Of course, one could augment speed with things like latency and loss, but again these are things the user does not really perceive, or understand what they actually mean, because sometimes loss and latency can be absorbed by the application.
E
So we definitely take the view that it's quite useful to measure experience from the perspective of the consumer, exactly how they think of experience, and by that we mean, for example, streaming video. If you're watching Netflix or Disney Plus or Prime, or what have you, you're very concerned as a consumer about: well, I've just bought a 4K TV, am I really getting the video at the best possible resolution, given the investment I've made into a device such as a TV, and is my video freezing?
E
Am I getting that buffering or that spinning wheel? So those would be the most pertinent things from a user's perspective, and, in contrast to showing the speed as in the plot above, we could show them the different options they have for ISPs in Australia. We call them RSPs, because they're retail service providers, because the access network is actually nationalized by the government.
E
So how would the RSPs or ISPs compare in terms of their capability to serve video at the highest available resolution? And you can go down further and break it up by the provider of video; obviously, YouTube is different to Netflix, which is different to Disney Plus, and so on and so forth. You could do the same for gaming. Users who are gamers would very much like to know, if they are playing Fortnite or Counter-Strike or Apex Legends,
E
how consistent the latency they get is, and how low it is while they're playing, because we know that any jitter can mean gunshots not taking effect, or teleporting in the game. And being able to show that clearly: as on the bottom-right plot, you can see the green means experience has been good, orange is mediocre and red is bad, and this can be customized to the title of the game.
E
So what's shown here is aggregated across shooting titles, but you could even break it down by individual game titles, and you could equally show this experience measure for role-playing games, strategy games, sports games like FIFA, and so on. And you could do the same for conferencing applications, by telling users exactly what the relative performance on your ISP is in terms of getting stutters and dropouts on your conference calls. And all these measurements can be done passively, on real applications in their real, natural operating environment, rather than via synthetic
E
bursts of speed tests or via latency probes. So that was the first point: trying to really show these metrics that are very easy for the user to understand and appreciate and relate to. If you could move on to the second slide, that will be the second and final point we want to make today, which is... and sorry, you can click on to show the animations as well. Yeah, thank you.
E
The answer is yes, or at least we believe that, and in fact we have staked that on a company that we have started. First and foremost, in terms of the scale: you know, I'm sure the community is aware of what's been happening in the world of programmable networks, and the fact that you can actually get a multi-terabit switch which is able to export telemetry at a really fine time scale. So it can process hundreds of thousands of subscribers' worth of traffic, and we can isolate every single flow going on: streaming video, gaming, conferencing and so on. And we can actually extract very fine-grained telemetry, down at the sub-second level, often at the tens-of-milliseconds level, that is able to reveal the exact behavior of the application. And by being able to see the behavior, as shown on those two plots as examples, where I've showed the pulse,
E
you can actually make some inferences about not just the fact that this happens to be streaming video or gaming, and this is Netflix and that's Counter-Strike, but also whether or not this Netflix stream is working at the highest available resolution, and whether or not the playback buffer on this Netflix stream is healthy or is actually depleting.
E
So these inferences can be made. There are many academic papers that have been published on this in recent years; we have published a few. And these are generally trained on data because, obviously, you know, YouTube's algorithm for serving video is different to Netflix's, which is different to Disney Plus's, and so on, and likewise the games are all different. So we have, you know, developed AI models that are able to analyze this data and estimate experience quite accurately, and we have actually built this platform.
E
It is commercially deployed in a few of the large operators in Australia, and it's able to show experience on an application-by-application basis, across hundreds of thousands of subscribers per instance that we're putting in. And this is helping, of course, the operators understand various tunings of their network and what impact they're having on application experience. I'll just give one or two examples and stop. One of them is: we have an operator who's tuning
E
the buffers in their network, because there's shaping going on, and the impact of the buffers on different applications is different: obviously, a larger buffer helps downloads, whereas a shorter buffer helps gaming, or applications that need to reduce delay. So helping them tune that, to balance the needs of the various applications, is one, and another one we are working with... so.
F
Can you hear me? Okay, yes. Okay, good morning, everyone, at least from the Mountain time zone here. So I'm going to talk about, for those of you who recognize it, Measuring Broadband America; that's the FCC's Measuring Broadband America program, which has published about 10 reports now since 2011. And with Dr. Levi Perigo, who's also with me on the faculty of CS and at CU Boulder, we took a look at the latency under load data.
F
That's included in the FCC measurements; they actually, to the best of our knowledge, haven't reported it in the reports themselves. But if you go to the raw data, they do have a latency under load test that they run, which essentially operates while they're doing their 10-second downstream and upstream speed tests; they send some UDP packets while that's occurring. And so this paper takes a look at what that data looks like, and we're just going quickly through the results here.
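(The same style of latency-under-load measurement can be sketched in a few lines: time small probes while a bulk transfer loads the line, then compare against idle probes. The download URL and probe host below are placeholders, TCP-handshake timing is used as a crude RTT proxy, and this is not the FCC/SamKnows implementation.)

    # Illustrative latency-under-load test (placeholder URL and host).
    import socket, threading, time, urllib.request

    def probe_rtt_ms(host="example.net", port=443):
        t0 = time.time()
        with socket.create_connection((host, port), timeout=2):
            pass                         # handshake time as a crude RTT proxy
        return (time.time() - t0) * 1000.0

    def load_line(url="https://example.net/large-file", seconds=10):
        deadline = time.time() + seconds
        with urllib.request.urlopen(url) as resp:
            while time.time() < deadline and resp.read(1 << 16):
                pass                     # keep the downstream saturated

    idle = [probe_rtt_ms() for _ in range(5)]
    loader = threading.Thread(target=load_line, daemon=True)
    loader.start()
    loaded = [probe_rtt_ms() for _ in range(20)]
    loader.join()
    print("idle ms:", round(min(idle)), "loaded ms, max:", round(max(loaded)))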
F
So what this chart is showing is the downstream latency under load data by technology, and by technology; they have different ISPs that are participating in the FCC program.
F
We selected CenturyLink as a representative of DSL, Comcast for cable and Verizon for fiber to the home, and so you can see this is what the latency under load data look like for the last year that they reported, that big chunk in September and October. The FCC has withheld that data until they publish their report, which hasn't been published yet on their web page. And what we can see here, based upon these results, is that there's a substantial difference in the magnitude of the downstream latency between these different technologies.
F
You'll note a bit of a surprise here, that the fiber to the home with Verizon is higher than cable, and we would note that in the earlier report, for that data, they had pulled some of the Verizon data over concerns about the SamKnows platform and its ability to process one-gigabit-per-second services.
F
So that may be a reason for that; we don't know. We're just looking at the data here and can't do too much sleuthing on it, but we see there's substantial variation by technology. There's also, not shown here but in our paper, a substantial difference between the downstream and upstream latency under load.
F
The average in 2020 was about 150 milliseconds for the data we looked at in the downstream, and about 350 milliseconds in the upstream, and some of this differential could be due just to the download and upload speed differences.
F
So let's go to the next slide. The last chart was the average round-trip time that they measure for the latency under load; this is a plot of the maximum round-trip time that they measure as well. And by looking at the maximum round-trip time, we can get some more insight into the range of variability that is possible for the network delay under the higher traffic load conditions. And what you can see here is that the overall maximum latency under load increased about 65 percent on average from that 150-millisecond average round-trip time, and it's up about 50 percent, though they're not showing it here, for the upstream.
F
For the different technology types you see here, they're a little bit more bunched up, but still with DSL having a higher amount than the others, and again the higher magnitude here can be explained in part by the upstream, or by just the speeds of the broadband services that are offered by these different technologies. If we can go to the next slide.
F
So this is the historical, or longitudinal, analysis of the data, and we just focused on one ISP here; this is Comcast. And you can see this plot of their average round-trip time, and it shouldn't say 2020, so that's a typo in the title there. But you can see over time here that there's been an improvement.
F
There's a long trend line here of improvement in the latency, about 10 percent every year, averaged out. And, recognizing that this isn't something that you can necessarily predict about how the internet performs, based upon the changes in the dynamics associated with applications and the like, you're seeing an improvement here, and the reasons for improvement are likely due to the increasing service speeds that have been occurring during this time, along with improvements in the IP protocols.
F
So the 2020 data shown here for Comcast, and this is just for the month of November, actually includes their use of AQM in their CMTSs in the downstream direction. And so you can see substantial improvement there, both in the maximum and the average round-trip times.
F
Yeah, I can do that, that's fine. This is what I was going to talk to, so this shows the before and after, and what's interesting in this is:
F
this just shows how the distribution changes with the use of AQM, and you can see how most of the samples are now coming in below 150 milliseconds, and so that is good for the user experience. And with that, you can look in the paper at the findings, which basically summarize a lot of the observations that we found here in terms of the decrease over time associated with the latency of this metric. And, given that, it also shows a relationship between speed and latency as speeds increase.
C
Thank you. I think Jonathan had a clarifying question.
G
Christoph, I had a super quick clarifying question, if that's okay. Thanks very much, David; this is Sam from SamKnows here. Actually, you're absolutely right in what you mentioned at the beginning, that the FCC haven't published any of the latency under load measurements in the MBA reports yet, so first, I can confirm that. Secondly, you mentioned something about Verizon one gigabit there: in the raw measurements, you're looking at the raw files published on the FCC's website, and nothing should have been excluded from there, if there's something you're missing.
F
That's good. Again, we didn't do it; it was kind of beyond the scope of this and a quick turnaround, looking at the data to try to go to the next level to explain why fiber to the home would be higher than, say, cable, given expectations there, and some of the earlier idle latencies that have been reported by the FCC did have lower idle latencies for fiber to the home as compared to cable.
D
Some ISPs with fiber to the home have incredibly high latency under load, which, you know, we were surprised by when we started seeing people deploying our product on those types of lines. But clearly it's something to do not with the technology but possibly with how the infrastructure is deployed and managed. It happens, but plenty of others have excellent latency. Now, on to Comcast and all of that: it's great to see the dramatic improvement in download latencies due to AQM, but it is important to note that even on many DOCSIS 3.1 provisioned lines, the upload is still DOCSIS 3.0.
I
It would help if I unmuted, wouldn't it. A question for David on that last presentation; it's semi-clarification, but also a comment.
I
You mentioned a number of times that latency should be better as speed increases, or you would expect it to get better, and, you know, it just has been getting better. If it's a latency under load test, that shouldn't make any difference, because the flow should have been given time to reach the point where it's filling the buffer just the same. So I agree that in real life, if flows stay about the same length, or don't get longer as fast as the links get faster, then you will start to see less latency under load, but I'm not sure why you thought...
F
We have a scatter plot of the data, that I didn't show here, that kind of shows overall a clear... you know, based on the data, as speeds increase, the latency under load goes down. And I don't know if it's, like, the transmission delay piece, or for other reasons; we didn't have time to kind of sleuth and figure that out, but there's a clear association with that.
C
Thanks, Dave. Next up.
J
I wanted to point back to the second presenter's set of slides, where he showed the Netflix pulse. I see that in a lot of data, when you capture it as detailed as you can, and it's one of those things that does not show up on an average.
J
You know, finding ways of detecting pulses like that is really key to a better user's quality of experience; if you're interrupted every two seconds from whatever you're doing, it can be a quite annoying internet. Going back to the previous set of comments: as we get more and more bandwidth, the tests themselves are not running long enough to actually detect what happens for a long-running flow. So, you know, 20 seconds proved not to be enough even 10 years ago.
K
I have a few questions for Jonathan, so I will try to ask one or two if I have time. Question number one: it wasn't quite clear whether you're advocating for in-application monitoring, where the application itself is measuring the latency and QoE data, or for in-network monitoring, where a third-party device is observing the application passively.
K
So that wasn't clear. Question number two: when talking about capacity changes, have you specifically observed the effect of ISPs changing their routing from peering to transit and back, and if so, can you elaborate on that? Those are my two questions. Thank you.
D
Okay, really quickly: we advocate for the metrics to happen at the bottleneck point, since that's the only point, in a local network at least, where you can observe all traffic patterns and all use of capacity, and the true latency at the line level, versus, you know, latency induced by things like Wi-Fi.
D
So those metrics are what we believe are important for end users to be able to understand what their actual line quality is. So if I'm going to say my ISP is not giving me good service because my line sags every afternoon, you know, down to 50 percent, then the only way I can really know that is not from an application.
K
It was regarding changes in capacity; you specifically said we were talking about changes in capacity, and my question was: were you able to observe changes in capacity that happen whenever an ISP is changing their route from a peering agreement to transit, which would necessitate a completely different usage pattern of the backbone? Thank you.
D
Okay, yeah. Our observation, and all of our data, has pretty much shown us that when we see this phenomenon, described and reflected in that graph of the sag, it is pretty much 98 percent caused by local issues: bad, you know, local loop contention on cable systems; on copper lines it's going to be the backhaul being insufficient for the load. Thanks. Okay.
L
The other thing is that the buffering in these devices has been scaling with each generation, because, at least when there's no AQM, in order to do the single-stream test to maximize that bandwidth value, TCP has demanded that much more buffering for each generation as a minimum, and memory has gotten very cheap. So while I can believe that in your data set the latencies have been dropping with time, that doesn't actually mean that the worst case, which is what really bedevils you, has necessarily changed.
L
Thanks. Sam, your turn.
G
Yeah, just more of a clarification than a question, for the discussion between Dave and Bob earlier on. I think it's perfectly reasonable, when you're looking at a macro level, as Dave's presentation did, that you'd expect the idle latency to fall as you move towards higher access speeds, simply because they're using newer access technologies. So pick on, like, CenturyLink, for example: they're a blend of extremely slow DSL and also extremely fast fiber to the home, and a whole bunch of ranges in between, like VDSL and FTTC as well.
M
Good stuff. I want to speak to a discussion that's going on in Slack about Netflix pulsing, and so the square wave, or the wave nature, of the traffic. This is not new; I just want to make a comment about the traffic. This is basically ABR traffic, and there's a fundamental reason why it is so.
M
The reason it happens is because, to avoid any rebuffering at the receiver, the sender sends enough traffic to fill basically the time period over which it's playing. So, for example, a duty cycle here of 50 percent means that the sender will send five seconds' worth of video in about two and a half seconds, and this is what the receiver's ABR tries to optimize for: it tries to maximize the amount it can receive, but it will typically not go close to 80 or 90 percent, simply to avoid rebuffer probabilities, so it will reduce its rate so that it's small enough.
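(A small worked example of that duty-cycle arithmetic, with illustrative numbers: at a 50 percent duty cycle the on-period send rate has to be roughly twice the encoded video bitrate.)

    # Worked example of the ABR duty-cycle relationship described above.
    # Numbers are illustrative, not measured.
    video_bitrate_mbps = 10.0     # encoded rate of the stream
    duty_cycle = 0.5              # fraction of time the sender is actually sending

    # To deliver 5 s of video in 2.5 s of sending, the on-period rate must be:
    on_rate_mbps = video_bitrate_mbps / duty_cycle
    print(on_rate_mbps)           # 20.0 Mbit/s bursts, then idle, repeating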
M
Keep
it
high
enough
that
it
can
maximize
some
sort
of
duty
cycle
there,
but
the
duty
cycle
tends
to
be
60
or
so
that's
what,
in
my
experience,
I've
seen
and
this
the
the
frequency
of
the
square
wave
can
change
depending
on
the
kind
of
streaming
you're
seeing.
This
could
be
other
live
video
streaming
where
this
is
much
smaller,
so
you
can
have
like
one
to
two
second
square
waves
or
you
can.
M
C
Good
job,
jonathan,
no
time.
D
Okay,
just
another
comment
around
the
metrics
that
jim
and
dave
mentioned,
which
is
not
only
reinforced
the
need
to
run
these
tests
longer,
but
also
more
concurrent,
streams
are
required.
As
you
go
up
the
the
capacity
tiers.
It's
it's
pretty
easy
to
run,
let's
say
12
streams
on
a
gigabit
download
and
and
pretty
much
on
docsis.
N
Yeah, one question for Jonathan: there was a conclusion that you made that it's due to last-mile conditions, you suggested in the DOCSIS network. I'm curious if the testing methodology gives you more information that can support this. I mean, I've looked at the paper; you're showing 180 megabits per second, but for a lot of the day you're not achieving that.
N
I am curious if that's a function of the congestion control that's being used, and it's observing very slight variations in delay and adjusting for that, or if it's potentially congestion between wherever the server that you're transferring from is and the end user, and not actually in the access network. I mean, to give an example:
N
I was on a call a few days ago with a cable provider, and they claim that contention in the DOCSIS network is very rare, or, sorry, in the CMTS network. So it is interesting. The other piece here is this latency under load test; again, I'm wondering if this is realistic or representative of what users are going to see. If you're trying to fill the buffer at the bottleneck link, you know, how often is the workflow that users have actually going to cause that to occur?
D
Okay, and yeah, we do see... and again, our adjustments in the dynamic range are done when the line is loaded. So we will not make dynamic changes just because a latency metric happened to come back bad; we only do it under known load conditions, in which case we know that the increase in latency is being triggered by the load. Now, could it be because of a poor peering agreement, or, you know, a really bad-quality peering link?
D
Yes, it might be that, but more often than not, again, because it's seeing all of the traffic, not just a specific set, it can make that determination.
I
Sorry, I can't even remember why I went into the queue; move on.
H
I wanted to reply to what Jana was saying about the Netflix bursts; he said that was about the buffering at the receiver. To my best understanding, the adaptive bitrate algorithms of all the streamers are using this to measure the downlink speed, so that for the following sequence of segments that they receive, they know what bit rate they want to receive, right?
H
So this is a way of adjusting themselves, and the reason why they have to do this by, you know, sending unlimited-speed bursts is because, you know, from the network we don't give them another way to figure out what the available downlink bandwidth actually is. So there is, you know, some argument to be made that if we had a better network service, that would give some idea about the free headroom.
K
Yeah, I have a question for Vijay. When you mentioned that you were using videos from Netflix and such to estimate the ability of an ISP to serve high-quality traffic, I was a bit confused, because these large networks typically do not rely on the ISP's capabilities as much.
K
They just take colocation space, put in their own equipment, which is completely different from the ISP equipment, and use that. I wonder, have you seen a strong correlation between the behavior of Netflix, Facebook, Google, Amazon and other services, despite those providers having different technology? I'm quite curious how that has been seen from the outside.
E
So, what we measure... and maybe it's slightly misleading when I said we measure the ISP. What we measure is the end-to-end performance of Netflix. It could equally be bad Wi-Fi in your house; we don't know, right? What we are measuring is end-to-end: what is Netflix doing? And that pulse carries information on whether or not Netflix is working at the highest available resolution, and we detect when the resolution drops, and we detect when Netflix panics, when the playback buffer goes down and it opens more connections and tries to fetch chunks in parallel.
E
So we have built models around the behavior, and that is letting us make these inferences. So you're right, as in, it's not only the ISP who's at fault; they may or may not have a cache, we don't know, but we just measure the end-to-end performance of every Netflix stream. I don't know if that helped.
O
Yeah, so the first thing is, I wanted to echo what Tardis was saying, that there are a whole lot of problems around bandwidth estimation for video stuff, and this ends up presenting these patterns that we're seeing, but not only for Netflix-like content. I'm not meaning to say that Netflix is solving a very easy problem, but it's an easier problem than solving, say, a live streaming broadcast. And the reason I say that is because the lack of a good API for measuring bandwidth actually means that we have to assert a delay in the live streaming, so that we can make a buffer, so that we can send faster than the current streaming rate, so that we can actually get a probe of the network to increase the bandwidth, or the quality, of the video.
F
David, yeah. I just sent out on the Slack some of the numbers that the FCC reported earlier in their data last year on the different ISP technology types. But I think this question of what the long-term trend is here on latency and the performance is important for kind of understanding and putting the right context around different improvements, and if we don't have an agreement on the metric in particular, that's very important.
F
If, you know, the FCC has a 10-second test and it should be 20 or 30 seconds, that's an important thing to communicate out, because we know speeds are increasing: we're at one gig, we've got XGS-PON going to 10 gig. So do we anticipate improvements or not? You know, I didn't mention it, but the AQM that Comcast implemented is basically a 50 percent improvement year on year for the month of November.
F
Now, you know, again, I understand that you can't just take that to the bank, and it's not that you can guarantee that type of improvement. But, you know, as these protocols improve, there could be some expected improvement in performance over time as the service evolves, and I think that's an important piece of calculus for understanding how this problem, you know, the consumer experience problem, is going to evolve over time.
L
If you screw up the round-trip times in the upstream direction, it makes it real hard for the downstream to figure out how to buffer your video, right? And our applications are changing: the resolution of our video streams is going up, and the amount of image and video content that people upload, or want to back up, keeps going up.
M
Christoph, I actually want to echo what Jim said. I don't think we should be over-pivoting on Netflix; I mean, there's a lot of applications that have many different behaviors, and Netflix's behavior is not irrational, right? I mean, they're doing something that is solving a particular problem. It's not specific to Netflix either, let's be clear about that: this is ABR video in general. You might see some pronounced effects with Netflix, but it's going to be there as long as you have ABR video doing this capacity-probing,
M
bandwidth-probing thing that it keeps doing. And, Toerless, to your point: these are basically continuous tests; hence the 'adaptive' in ABR. It's not just done at the beginning of a stream; you can't just ask the network 'hey, what is your bandwidth' and then start streaming at that rate. It is a continuous measurement that's happening; we basically have to do online measurements. We don't have a way of simply querying and finding out what the bandwidth of the end-to-end network is.
M
So there is a real problem here, a practical engineering problem here, that they're trying to solve; it's not out of nothing. And I think Jeff made a point earlier that maybe we should engineer the network around this. To that point, I'll come back to maybe something that Jim was noting, but I'll note it anyway: having some amount of isolation across the applications, having some ways...
M
So, yeah, that's all I had. My last point is that being able to isolate applications from hurting each other is a potentially useful way of thinking about this, rather than trying to change every application on the planet.
E
Thanks, yeah. I think good discussions, and I'm also looking at some of the comments on Slack. I think the key point, which I'd like to reiterate, is that by looking at the behavior of any application, and Netflix was just an exemplar, you know, you have YouTube, you have Prime, we can build a data-driven model of how it behaves, and that can be used to estimate experience.
E
In fact, take the top hundred games on the Steam and Twitch charts: there's a lot of variation, TCP games, UDP games, tick-rate-based games, depending on the game, so it's laborious, but it definitely does have its rewards in terms of the meaning it provides to end users, because, you know, even Call of Duty is different from Counter-Strike, which is different from FIFA, and so on. So really getting down to an application,
E
an application level, is rewarding, even though it's fairly hard work, so I just want to reiterate that point.
C
Thanks, Jeff.
P
I'm on mute... I'd like to make the case that actually we should pivot, and we should think long and hard about video streamers. I lived through an age where we spent a huge amount of time engineering for VoIP, and it was marginal both in terms of traffic volume and, most importantly, totally marginal in terms of ISP revenue. We were spending a huge amount of time for basically minute, marginal pieces of traffic.
H
To get back to what Roberto was saying about real time and this bandwidth probing: I think everybody has seen that when you start a streaming video, the quality slowly ramps up, right? And that's certainly something which you don't really want to have in a real-time video conference, when you're starting and stopping sending video, like in a multi-party environment where your stream is not always on. So we're kind of having the problem of the TCP slow start, and any other,
H
you know, bandwidth probing of new flows. And so there are a bunch of other things that go into the codecs, like, you know, increasing and decreasing FEC, but ultimately, when you give up bandwidth, you have to get it back afterwards, and I think that's the most difficult thing that we can't improve on unless we improve the network service itself.
C
Thank you, Omar.
K
I wonder how long, very generally, it takes to develop a model that would match the behavior of a particular video provider. Because if, for example, let's say Facebook is running five different experiments and gradually shifting them, I just imagine that it would be quite hard to adapt the model at an almost real-time rate to accurately track the experimental allocation.
E
Absolutely. I'll give you a very short answer, and the longer answer Sharath can give, maybe offline, which is that, luckily, providers, the major providers like Netflix, have nerd stats, and if you know how to enable nerd stats on your video, you can train a machine to automatically harvest the ground truth in addition to what it's seeing on the network, and the training becomes fully automated. But not all providers give you nerd stats, so it can be more laborious and manual for the other providers.
N
I think it's a good point that we shouldn't constrain what we consider, in terms of measuring QoE, too much to any specific set of applications. At the same time, as other folks have mentioned, the vast majority of internet traffic is now video traffic. That's typically going to be served by CDNs that are local to the user, which means the transit networks are not going to be involved.
N
Sometimes those CDNs even have nodes co-located inside the end user's ISP network, and then that video traffic is going to be a continuous cycle of: the player requests a chunk, the sender dumps it into the network as fast as possible, based on its congestion control algorithm, then the receiver requests another chunk, and the process just continues to repeat, based on the receiver's buffer. The other thing you're going to have is image traffic.
N
That's going to be similar: I'm loading a web page, I'm going to quickly request as many images as possible and get them served to me so I can display that page. And then you have constant-bitrate traffic, which is kind of like what we're on for this call: my voice right now has to be carried to you, there's some constant-bit-rate encoding of it. The video can change, but in general we need this traffic to be constantly delivered.
N
None of these types of traffic are representative of me downloading a large file, and that's what we typically see when people are developing congestion control algorithms; those are the types of experiments they look at. If I have two senders or receivers, and they are both sending this large file, how do they share bandwidth? But that's not as representative of the actual traffic that's on today's internet. Thanks.
Q
Thank you. Hi everyone, my name is Kyle. I'm about to be a second-year grad student at the University of Chicago, working with Professor Nick Feamster, and today we wanted to share some preliminary results from a measurement study that we're conducting. Next slide.
Q
So we're about to deploy about 100 devices across homes in Chicago to measure certain aspects of the internet service that people receive, and before this we've had a bit of a beta deployment to 11 homes of the members of the group in the lab, to test that the system's working and validate that the results we're getting are reasonable. And we've been able to make some pretty interesting observations from the data collected off our latency under load test, and before I get into that...
Q
You can see that one subscriber experiences nearly consistently higher latency under load than another; the same observation can be made about the two subscribers in another neighborhood, South Shore. And on top of this, the subscribers in South Shore experienced at least twice as high latency under load as those in Hyde Park. And just to reiterate, all of the subscribers in the figures on the right are on the same ISP and are all subscribed to the same gigabit service plan.
Q
And so we wanted to bring these results to this workshop and get some feedback from the group, because, while the differences are clear, the causes are not. So any feedback about our methods, or suggestions as to how to narrow down the potential causes, would be very much appreciated. Thank you.
C
Thank you, Kyle. I think there was a short clarification question.
Q
No, so... it is likely that they're not, but let me put it this way: they're not, but a lot of them are; a lot of them get the cable modem box from the ISP.
C
Thanks, Jim. One last comment or question.
L
The device, the cable modem and the like, matters; the devices and the amount of buffering in them vary from model to model, and you should bin according to that. Okay, that makes sense.
T
In our discussions we use characteristics such as availability, path availability and serviceability, and one has got to wonder: what do we mean, and how can we measure and express this, if we want to use it as a metric? Next slide, please. So error performance OAM tool sets include methods that help us to detect network defects and measure performance.
T
So that's how, from the fault management plane, we can go up to performance monitoring. If we look at packet loss: a packet loss, in fact, is a packet that has been delayed infinitely, so it's an infinite delay of a packet.
T
So we believe that performance management, that is, error performance measurement, is active OAM. In packet-switched networks specifically, it can be measured using active measurement methods according to the classification of RFC 7799, and it's not new; it's well known from constant-rate media like TDM, and it's based on the guaranteed presence of data, so the data is anticipated and expected by the receiver. In a packet-switched network that is based on statistical multiplexing, that cannot be achieved unless we use periodically transmitted probe test packets.
T
So the active OAM can create a characteristic sub-flow that statistically can be correlated with the behavior experienced by the data. Next slide, please.
T
So, looking at what's been achieved and used for TDM, we look at several parameters in the error performance metric, such as the errored interval, the severely errored interval and the error-free interval.
T
These intervals can be combined into periods of availability and unavailability of the monitored instance, to make it stable.
T
As a result, other metrics that can be used are the errored interval ratio and the severely errored interval ratio.
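(A rough illustration of deriving those ratios from per-interval loss measurements; the severity threshold below is a made-up example, not a value from the talk.)

    # Illustrative classification of fixed measurement intervals into
    # error-free, errored and severely errored, plus the derived ratios.
    SEVERE_LOSS = 0.15   # made-up loss ratio marking a "severely errored" interval

    def interval_ratios(loss_ratios):
        n = len(loss_ratios)
        errored = sum(1 for r in loss_ratios if 0 < r <= SEVERE_LOSS)
        severe = sum(1 for r in loss_ratios if r > SEVERE_LOSS)
        return {
            "errored_interval_ratio": errored / n,
            "severely_errored_interval_ratio": severe / n,
            "error_free_interval_ratio": (n - errored - severe) / n,
        }

    print(interval_ratios([0.0, 0.0, 0.02, 0.30, 0.0]))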
T
You will note that we do not characterize the specific requirements for the packet loss ratio or the delay and delay variation.
T
Those can be specific to the object being monitored, to the instance of the service and the path, so the thresholds can be altered and adjusted, because different services and different paths will have different requirements in terms of their tolerance to packet loss and packet delay. There are several protocols being developed, in the IETF for example; that's where we do this work.
T
Primarily, we are coming with a proposal that helps to integrate fault management and performance monitoring OAM in one integrated, IP- or packet-oriented, layer-3 or layer-2.5 protocol that combines the best qualities of BFD and performance monitoring tools.
T
Okay, I believe this is our presentation. Any questions?
V
Thank you very much, Christoph. So before I start digging into the slides, I want to set a bit of context. I'm a little bit of an outlier in this workshop; in fact, some of you guys may even think of me as the bad guy, and as honored and excited as I am, it's a little bit difficult not to feel a bit of the impostor syndrome. But I'm hoping I can bring you guys a bit of a unique perspective.
V
I don't do R&D, I don't sell anything, I'm not part of a product group. My job really is to apply everything that you guys are bringing to the table here and make it relevant to the end users.
V
So, for years now, I've literally spent my days working with the whole ecosystem: the SoC vendors, the CPE vendors, the platform and web-scale companies, the CSPs, the CDN vendors, gaming studios, you name it, all around improving the quality of experience of the end user. So I bring a different view: you guys are the metric experts, you and all my friends at Apple, Google, Bell Labs, but my job is really to understand how to make this relevant to the end user.
V
So when you guys come to me with 'I've got this new queuing algorithm', I'm like, 'great, that's going to help my kill/death ratio', and if you look at me with blinking eyes and don't know what I'm talking about, then nobody's going to buy this stuff. So don't get me wrong: I work extensively on queueing and traffic optimization, but my focus is really to understand how to translate this into something the consumer understands and wants, something that a CSP can sell, a CPE vendor can integrate and an SoC vendor is willing to do.
V
We heard some amazing comments from Stuart, Jay, you know, Gina, Vijay, Jim and so many others on the real-world benefits, and I love that. So residential internet used to be best effort; Covid changed that. I think, Jim, you were mentioning that earlier; I totally agree. People don't just need it: it's a basic human right, in a sense; you rely on it for critical services, work, education.
V
So what I'm going to share with you guys here is understanding how we can solve that 'potato in the microwave' quote that came up a couple of times; I love that. There are ways to detect when there's a period of microwave interference. Now we've opened up the CPE environment, yeah, you know, containerization, and I'm working with a whole bunch of you,
V
actually some of you that are here, on deploying agents in the CPEs that will give you that low-level access to queuing, Wi-Fi, WAN, RAN, whatever you want. So no longer are you going to have to infer measurements; you'll be able to get access to them. So with that said, I'll go quickly to the slides; I want to share some real-world results. I like using cloud gaming because it's a great example of high-bit-rate, constant video with low latency, and the reality is,
V
as it's been said, you know, 90 percent of your issues are going to be in the home or in the last mile. There's a lot that you can do to focus on metrics and improvement, but you're hitting a point of diminishing returns if you don't focus on the in-home and the last mile; that's really where things matter. Next slide.
V
So what we've done is we've actually gone ahead and done some real-world measurements. When I say real world, we're talking residential homes with real gamers, pro gamers, amateur gamers. And I remember a tier-one operator saying: you know what, we have a fiber network, yet people complain about our gaming; and they're like, our ping tests are great. And yeah, you know, if you look at the lower-left graph, their ping times for that cloud
V
gaming service with NVIDIA were great, 62 milliseconds on average, but you can also see the inconsistency, because of the in-home network and Wi-Fi and everything else, and every time you see those little spikes, that gaming session, that video conference, was impacted. When we applied things like PI2 AQM and L4S, again same households, same real-world users and everything else, you saw a dramatic improvement in the quality of the experience for the end user. I don't really care that the ping time improved from 60 to 42 milliseconds; that's irrelevant.
V
It's the consistency, that deviation from the mean, that really matters. And for me personally, to be able to understand that I had to get involved: I play games, I'm a Twitch streamer on Friday and Saturday nights, and I saw my kill/death ratio increase by 0.5 in four weeks. This is what is going to drive the consumers to force the service providers to pay attention and to buy into this stuff. Next slide.
V
We also did the same thing for video conferencing, with a partner of ours, Domos, and I thank them for being able to share the stats. We did video conferencing using the same technology, with real-world enterprise customers, a worldwide global consulting firm, if you will. We took some measurements on video conferencing over time, to be able to understand whether, by focusing on the in-home and the access, in this case it was a 5G RAN, we could improve the video conference, and we did.
V
We provided some really amazing results at the 90th and 99th percentile, which is really where you want to focus. If I have one video blip in a half hour, that's enough to enrage me if I have eight hours of meetings and that happens every hour. So for us it's not the 50th, it's not the median; it's that 90th and 99th percentile that we really want to focus on, and to be able to show that if you focus on that last mile and the in-home,
V
you can have significant improvement here. And you know what, I don't have a stake in the source, whether it's PI2 versus another algorithm; what I care about is: will this translate into something the end user can feel, can want, and is willing to pay for? Next slide. And in this case, if you can go through... yeah, exactly. So we want to focus on tangible, real-world benefits from the consumer point of view. Consumers don't care about CDF curves; they care about K/Ds, they care about FPS, they care about
V
all of these things. And everybody said it already; all my thunder has been stolen for the last day and a half, but that's great. Now, speed tests have reached their end game: we can already provide 300 times more peak capacity than the average sustained usage from a broadband sub. So if you have a 500-megabit connection and you have gaming issues, you have video conferencing issues, going to one gig or 1.5 gig is not going to solve any of that. You know, the consistency issues also aren't from the general internet; they're largely confined to that last mile and in-home. There's a reason 70 percent of all the support calls that come to a service provider are Wi-Fi related; that's the reason. They don't call to complain about Netflix, they don't call to complain about this; they call to say 'my Wi-Fi is bad'. And, you know, the platform and web-scale providers, Apple, Google, Netflix, a whole bunch, you know, they've been... you know, I give them...
C
Let's try to wrap up, yeah.
V
I'm just gonna finish up with this one, really. Here is... I did look at consumer surveys: the consumers are willing to pay for better latency, for their own control of latency. 70 percent of the population is willing to pay from 5 to 20 dollars a month, and over 65 percent of them want the ability to tell the latency agent, if you will, what is important: is it gaming, is it streaming, is it in the mornings or in the afternoon. Even for the service providers, the NPS score increases when they give that ability. So, last slide.
C
Thanks. We have two clarifying questions, one from Sharat first. Sure.
W
Hi, Gino. I think a very good talk, and these are some of the problems we've seen operating at Canopus Networks, in ISPs as well. What would you think is the point at which we need to intervene to help alleviate some of these issues? Was it the home router, right within the household premises, or was it the CPE where typically ISPs enforce, you know, the bandwidth plans?
V
The CPE, yeah, definitely the CPE. From a historical perspective, the CPE was always a cost; at the bottom, the CPE was 'give me the cheapest thing I can put in the home that gives Wi-Fi and that, you know, I can manage'. That's changing: the CPEs are not a cost anymore, they're a way to differentiate, a way to create value and to monetize. So that's changing. You know, you can't go and replace everything that's in the field, but there's a different mindset that the service providers are bringing to the table now.
V
Let's take that offline, because that's a whole conversation in itself, on the BNG and virtual-BNG placement; that's going to take us half a day.
C
Okay, thank you, Gino. That moves us on to the next presentation, by Praveen, on transport layer statistics for network quality.
X
Hello, everyone. So I'm going to talk about the state of the art of transport statistics.
X
I'm also going to talk about some of the challenges we face today in using these transport statistics to gather network quality information, and some challenges and ideas where the IETF could play a role in making this better overall. Next slide, please. So why the transport layer? The transport layer has very good visibility into performance metrics.
X
So, for example, you know, RTT, which is latency, loss information from the network, reordering, bandwidth estimates; the transport layer has really rich information. With the encryption at the transport layer now, particularly with QUIC, it is also the right layer to expose this information up to the applications, because the visibility in the network is reducing compared to TCP. And, assuming ideal apps, in terms of, you know, not having bottlenecks in putting enough data out on the network, the transport layer basically determines
X
a very rich picture of why there is an increase in tail latency, or why certain requests are taking longer. So, in terms of what is being done today in platforms, we have APIs that are exposed, in the form of ESTATS or TCP_INFO, which can be used by applications to gather this information. One of the challenges today is that this is pull mode, right, so the application needs to query this information periodically.
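(For example, on Linux an application can pull per-connection transport statistics through the TCP_INFO socket option; the sketch below assumes Linux's struct tcp_info layout, which can vary by kernel version, and uses a placeholder host.)

    # Rough sketch of pull-mode transport stats on Linux via TCP_INFO.
    # Field offsets follow Linux's struct tcp_info; verify for your kernel.
    import socket, struct

    def tcp_stats(sock):
        tcp_info = getattr(socket, "TCP_INFO", 11)        # option value 11 on Linux
        raw = sock.getsockopt(socket.IPPROTO_TCP, tcp_info, 104)
        fields = struct.unpack("8B24I", raw[:104])
        u32 = fields[8:]                                   # the 32-bit counters
        return {
            "retrans_segments": u32[7],   # tcpi_retrans
            "srtt_us": u32[15],           # tcpi_rtt, smoothed RTT in microseconds
            "rttvar_us": u32[16],         # tcpi_rttvar
        }

    conn = socket.create_connection(("example.com", 80))   # placeholder host
    print(tcp_stats(conn))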
X
One of the enhancements here could be a push model, because the transport does know when significant events happen, for example congestion, or, you know, a stream gets closed, or the connection ends unexpectedly. There are cases where the transport could push this information to the application, versus it being the other way around. And QUIC introduces a little bit more complexity, because some of these statistics now become per-stream, which means we need to expose way more data.
X
So what do we correlate with? The application layer has its own set of metrics that it measures to determine, for example, what the effective user experience is, based on, say, response time, requests per second, delivery rate, jitter, etc., and being able to correlate this sort of transport information is extremely useful. One of the problems we see is with connection reuse.
X
So if the application layer reuses a connection, how do we exactly correlate the statistics that are associated with a particular request down to the right connection? And we also have problems with multiple streams, right: how do you exactly correlate that your request went on a particular stream, and that the bottleneck was due to the stream's behavior,
X
let's say flow control? And the other interesting idea here is also support for statistics diffs: basically being able to timestamp when a request starts and ends, between the application layer and the transport layer, and being able to extract just the right amount.
X
So what are the challenges in using transport layer metrics? One of the big challenges is that the multi-platform world is a multi-transport world, particularly with QUIC now.
X
The reason this is a challenge is because, typically, you don't want these metrics from just one side of the connection; typically, you want data from both sides, because there are cases where the performance could be limited by, say, flow control on the receiver, and the receiving application as well. The other challenge, I would say, in the cloud world, is different administrative domains: if you want to extract this information, it's not always the same application running on both sides,
G
X
The
peers
and
the
the
other
problem
also
is
that,
if
you
do
try
to
do
this
in
a
general
way
versus
application,
specific
there's
certain
information
that
is
not
available
today,
like
what
was
the
application
intent.
This
is
not
always
expressed
in
apis
and
somewhat
difficult
to
infer,
and
the
user.
Intent
is
also
not
expressed,
like
the
user,
may
choose
that
a
certain.
X
File
download
to
be
like
low
priority,
it's
not
always
explicitly
expressed
it's
quite
difficult
to
determine
what
that
intent
was.
There
are
cases
where
there
are
system
services,
for
example
like
an
operating
system
updates,
but
it's
very,
like
you
know,
it's
very
easy
to
infer
what
the
intent.
Q
X
But
if
it's
a
user
control
action,
it's
it's
pretty
difficult
to
infer
that
today
and
the
receive
site
transport
has
less
visibility.
So
another
reason
for
why
why
you
want
to
do
want
to
exchange
this
information
in
some
form,
so
some
some
ideas
where
I
think
the
ietf
could
help-
is
some
application
layer
mechanisms
to
exchange
this
transport
metrics
in
a
privacy
preserving
form.
This
would
be
extremely
useful.
Given
you
know
all
the
challenges
that
I
talked
about
before
api
improvements.
X
How do we infer user intent? Some of these are extremely useful research areas. And then, in terms of measurements, there are passive measurements and active measurements; both of them can yield very rich transport information.
X
The question is how you sample and pick between passive and active measurements, because with active measurements you may not be measuring the right application behaviour. That's all, thank you.
C
Are there any clarifying questions to Praveen specifically?
Z
A follow-up to Praveen: just because you mentioned the TAPS working group, I can't stop myself from asking what, specifically, you would like to see that we don't currently have in the API.
X
It's not only new applications that can write to a new API; it's also existing applications. So it's a platform challenge both to have new APIs that applications can adopt and to be able to infer that intent. I think that's where the challenge is: what do we do with all these existing applications present today?
D
Okay, a quick question: instead of inferring, I haven't heard that much discussion about requiring applications to actually mark their traffic using DSCP. So if you're going to do background downloads, mark them CS1, with the right DSCP marks, the same as my WebEx app is now marking my video and my VoIP correctly. I don't have to infer anything; it's telling my router, which then puts the traffic in the right queue, and it's going there.
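A minimal sketch of what that kind of application-side marking looks like on a typical BSD-sockets platform, assuming a Linux or Unix host; the code points shown (CS1 for background, EF for interactive voice) are the standard values, but, as the next comment notes, carriers may rewrite or bleach them in transit.

    import socket

    # DSCP code points (6 bits); the TOS / Traffic Class byte carries DSCP << 2.
    DSCP_CS1 = 8    # background / lower-effort style traffic
    DSCP_EF  = 46   # expedited forwarding, e.g. interactive voice

    def open_marked_udp_socket(dscp):
        """Open a UDP socket whose outgoing packets carry the given DSCP."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
        return s

    # Usage sketch: a background downloader marks itself CS1,
    # a conferencing app marks its media EF.
    bulk  = open_marked_udp_socket(DSCP_CS1)
    voice = open_marked_udp_socket(DSCP_EF)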
V
Maybe I can bring a quick comment on that one, Jonathan. From my experience dealing with hundreds of carriers around the world, many of them today will still sanitize or whitewash those markings coming in from the home. So even if you had the applications do it, there's a very good chance the markings will be sanitized before they leave the CPE anyway. I think there are ways to do it in band without using the DSCP markings, and there are some cool ideas I've been working with.
V
Unfortunately, as much as I like that approach, and don't get me wrong, I'd be totally in favour of it, a lot of the providers are whitewashing or sanitizing those markers.
U
Thank you. I have a comment and question for Kyle. When you are measuring latency on different access technologies, I'm interested in the challenges of measuring gigabit, because if I actually have gigabit service from the ISP, I probably don't have anything faster than that in the house.
U
If my computer is connected by gigabit Ethernet to a 35 megabit uplink, it's pretty clear that the bottleneck is in the cable modem. But if my cable modem or my fiber is doing gigabit bidirectionally, then when you're measuring latency under load you may actually be measuring the bufferbloat on the sending device itself: bufferbloat in the Ethernet driver, bufferbloat in the Wi-Fi driver. At that point you're not measuring the ISP equipment anymore.
U
It's just an observation that presenting meaningful results for gigabit is hard right now. We've had gigabit Ethernet on laptops and desktops for a decade, but it's rare to have more than that. There is typically one bottleneck on the path where the buffer forms, and that's what you want to measure, and that's the slowest hop on the path. When your laptop becomes the slowest hop on the path, now you're dealing with the vagaries of the Ethernet driver or the OS on that device.
V
So we've been forcing a lot of the SoC vendors to implement hardware acceleration so we can do these kinds of tests, but they're not really relevant today. I like the angle you raised, that the end device, PC, smartphone, whatever it may be, may have trouble doing that kind of line rate, but even the CPEs themselves, and the service provider equipment, may struggle there.
V
A lot of low-end routers and CPEs don't have SoCs that can do that kind of line rate, but that's changing; we're forcing it. We're seeing Qualcomm come out, we're seeing Cortina offer iperf3 hardware acceleration, so you can do 10-gig line rate in these kinds of tests, but that's not going to change overnight for the bulk of the population. Today that's still an issue.
C
Thank you, Gino. Robin, you're up.
AA
I want to say that the solution we propose for the API problem is to bypass it and just have each layer export metrics or information by itself, and then do post hoc merging via different systems. That translates the problem into how we sync or correlate these things, but maybe that's a bit more tangible or feasible in the short term. Another challenge that we identified, which I don't think you mentioned, Praveen, is that once you have these shared or merged metrics,
AA
then what? You certainly need new tools and new data mining techniques to actually find the correlations between what's happening in the application that is visible on the transport, and vice versa. We have some experience with that now with QUIC, but in my experience that is an even bigger challenge than just combining the data somehow. Thanks.
C
Thank you, Robin. Praveen?
X
I have three comments, and I'll try to be short. On the DSCP marking: yes, but there's also a gaming problem; the platforms still need to make sure that applications are not just cheating by trying to get the best quality of service. I had another comment on Robin's point: yes, the correlation problem is hard.
X
We do have applications that do this at scale, particularly at the tail: you zoom into the p99, look at those requests and the associated transport stats, so it's doable. There are challenges, though, because it does require some manual analysis. And then I have a question for Gino: you were talking about users being willing to pay for lower latency.
V
You're absolutely correct. The first thing is you need to prove it to the service provider. That's why I said you've got to care about what the end user does: if he's a gamer he's going to care about FPS, about kill/death ratio, about all these things. So we do real-life field trials to be able to show, with real gamers and the service providers,
V
what this does. We do the same thing with video conferencing with enterprises. Once we're able to show the value, NPS scores increase for enterprises, and gamers are willing to pay for it: give me ten dollars a month so I can get two more kills. Once we have that, and you give them the tool, you've got to give them something the end user can see. If there's a potato in the microwave, don't tell me it was a Wi-Fi issue that was causing it; I'm
V
okay with that. So you need to be able to isolate where the spikes come from, where the bufferbloat comes from, or whatever, but you need to be able to convey it in the terms of what they're going to sell. The service provider is not going to sell latency; he's going to sell you a gaming package, a working-from-home package, enterprise services. So whatever you do has got to fit into that context: what is the end user of those services looking for?
S
Yes, I have another comment, also to Praveen, which also ties in a little here. This concerns the application intent and the comment about prioritizing, about saying what traffic is more important and what is less important. I think there's also a case to be made for prioritizing between different metrics, and saying for certain applications which trade-offs you are willing to make. It's easy to say, oh, this is really important,
S
I want to get better service, but what is more important to me? Am I willing to sacrifice or trade off my latency against loss, or do I want the trade-off the other way? So it's more multidimensional, I guess, than just "better quality"; there are more trade-offs there. That's both a comment and a question.
X
Absolutely, I think that's why I called it a research area. In fact some applications might want to be better citizens on the network and say, hey, this is my max data rate; I'm willing to trade off some data rate for, let's say, low jitter. So yes, this would be expressed in terms of trade-offs, and there's more than just priority.
X
You could express this in terms of bandwidth caps, latency requirements, data requirements, so yes, this could get complex; I absolutely agree.
R
I have a question about report scoping. Most of us here are measurement oriented, but there's actually this comment about bleaching and the problems of bleaching QoS. One wonders if there are perhaps some statements we should make about infrastructure, that bleaching QoS in fact causes problems for all sorts of people, and whether or not that kind of statement would be in or out of scope.
A
If I can interject really quickly from a process point of view: with respect to the workshop report, we absolutely can put in anything we feel we have consensus about, or even state that some people think this type of problem exists.
A
So if you want to talk about bleaching and the problems with it, documenting industry practices that are getting in the way of things we want to do in terms of defining quality of service, there's nothing wrong with doing that. We do, of course, have to be careful about the wording, not call out names, and document where we may not be in 100% consensus, but we can absolutely do that.
L
I have two quick points. Hardware accelerators are not a panacea; they often result in more and more hidden buffering, so beware of that. The other thing is that we're in this bandwidth race, and have been for eons, between what Wi-Fi can provide and what the backhaul provides. In fact our bigger problem is usually in Wi-Fi; there, getting the SoC vendors to invest in cleaning up device drivers is the biggest thing, along with the platforms themselves.
L
I don't know what state macOS and Windows are in; Linux is mostly cleaned up. That bandwidth race is a historical race and I expect it will continue, and that also means the platform is involved. It's your operating system: we found at least half a dozen different places inside the Linux operating system that were doing buffering in the network stack.
AB
First I'd like to say I completely agree with you, Jim; that's a good point. And then there's this discussion about the latency-under-load test and how it may be hard to run the test long enough, or to generate enough traffic, to actually get the results you want, because you want to measure the depth of the full buffer at the bottleneck link.
AB
That points to an issue with that test: it gives you the worst-case performance, in a sense, but when it discovers that the performance is not what you want, it doesn't tell you where the problem lies. So I think a really important feature of a metric to solve these issues is that you need to be able to measure at intermediate points, or to subtract one kind of measurement from another, so you can isolate.
AB
Is it A to B, or is it B to C? If you measure A to C and you get a negative result, in the sense that performance is not good enough, is it the A-to-B link or the B-to-C link? If you can measure A to B by itself, or if you can measure both A to B and A to C at the same time and then do a subtraction, that allows you to solve that problem.
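A minimal sketch of that subtraction idea, assuming you already have simultaneous latency samples for the full A-to-C path and for the A-to-B segment (for example, to an intermediate measurement point); the sample values and the percentile used are illustrative only.

    def percentile(samples, p):
        """Simple percentile over a list of latency samples in milliseconds."""
        xs = sorted(samples)
        idx = min(len(xs) - 1, int(round(p / 100.0 * (len(xs) - 1))))
        return xs[idx]

    def attribute_segments(a_to_c_ms, a_to_b_ms, p=95):
        """Split path latency into the A-B segment and the inferred B-C remainder."""
        total = percentile(a_to_c_ms, p)
        first = percentile(a_to_b_ms, p)
        return {"A-B": first, "B-C (inferred)": max(0.0, total - first), "A-C": total}

    # Usage with made-up samples: most of the p95 latency sits beyond B.
    print(attribute_segments([48, 52, 95, 110, 103], [9, 10, 12, 11, 10]))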
C
Thanks for the comments. Let's go back to the queue. Gino, you had an answer to Stuart's gigabit comment; do you still want to... "I'm good, thank you." Very good. That brings us to Stuart.
U
Thank you. I wanted to comment a bit more on the gigabit. I kind of think it's pointless measuring gigabit, because 100 megabits is enough for two 4K video streams and WebEx and sending email at the same time.
U
An observation I'll make is that if your ISP service is 100 megabits and your Wi-Fi is 200 megabits, then the queue is in the cable modem or the CMTS, and if they're managing the queue competently you can have low bufferbloat. The irony is that if you upgrade your ISP service to 500 megabits or a gigabit, now your queue is almost certainly in the Wi-Fi access point, and that may have a dumb, bloated FIFO queue.
H
Toerless here. To answer Gino: I was very positively surprised about the data on customers being willing to pay money for lower latency.
H
I'd really love to see some more data on that, and a collection of examples, because the only one I ran across is rather old. That was in Germany, when in the DSL network they were offering lower latency for the FEC in the ATM layer of DSL, I think from 20 milliseconds down to five, for four or five euros a month. That seemed to have some success but went away with VDSL, so I haven't run across any other data points for consumer service, especially for gaming; that was very interesting.
H
The other point is about the bleaching. DSCP isn't really good, as a standardized means, to convey the type of policy differences we would potentially like to see, like flows wanting to be faster than other flows. We do know we can do that with congestion control in weighted fashions.
H
There are no standards for that in the DSCP, so those would all be proprietary, or specifically configured by a service provider that wants to support anything like that, and just passing the DSCP along of course also has problems. So it's a quite complicated topic.
J
Yes, please: I want to go to the second presenter's slide, that wonderful breakdown of where we find latencies and bandwidths. I have a very similar slide that I've been using for ages, and that's sort of the depth of the problem. I believe Stuart pointed out that we also have serious difficulties on the hosts themselves at higher rates, which kind of argues with the overall potential of in-home Ethernet and in-home Wi-Fi; I've seen over seven seconds of latency. DOCSIS has gotten a lot better, but it is wonderful for us to try to
J
agree: can we all agree that these are the classes of problems, and where they happen in the network? The part that I wanted to talk to specifically, however, was: can you go to the next slide, please? Sure, thank you. I'm really glad you don't have a horse in this race; I do.
I
All right, again: it's so long since I got into the queue that I can't remember what it was about; I've been talking in the chat. Sorry, pass over me again.
C
That's fine. Gorry?
AD
Hi, I'm Gorry. Yeah, so I think I was talking about DSCPs and saying that I think they are very useful when you are dealing with your upstream and you are close to the marker, and many ISPs are actually passing these upstream and many aren't. I think this is a thing that we should really start unpicking in the TSVWG working group, so I'll put my working group chair hat on and say: please come and talk there about this problem.
C
Thanks, Gorry. Bjorn, now it's your turn.
AB
Oh, thank you. Yes, so Jim, you said you can't subtract, because the bottleneck link sort of moves around, and I agree, when what you're measuring is the bottleneck link in that sense. But if you have the right metric, then the right metric will have the property that you can actually subtract; that's sort of my point there.
AB
Okay, let's discuss in a breakout room afterwards, because that's interesting. And then, yes, open sourcing that metric, the one I'm talking about, is something we're interested in pursuing, so let's discuss that as well.
L
Everybody's saying "the bottleneck link". There are two bottlenecks, one in each direction, and you can have a different one in each direction at any given time, and it varies as the applications change. So stop saying "the bottleneck link"; it is bottleneck links. It's often the same one, but it's very often not the same.
I
Just on the "always": if you're not restricted on the ack stream, there isn't a bottleneck in the other direction. So it's not always, but I agree with you that it can be, and commonly is.
H
Yeah, I just wanted to answer Gorry with respect to the usefulness of DSCP. There is no contract between me and any transit provider, so if I pass the DSCP on to the transit provider, what would they reasonably do? I think the only differentiation based on DSCP that makes sense through a transit path, without additional contracting for differentiated services, would be to react differently to the new scavenger class we did recently in the IETF in TSVWG, but that's about it.
H
Obviously there is a lot more that could be done with your direct access service provider, given appropriate contracting, but that's exactly where I think we don't even have standard DSCP meanings for, you know, differences in elastic traffic.
I
Yeah, Gorry reminded me why I just got in the queue, with what Matt was saying about the DSCP and how we need to sort this out and stop it being bleached. But it's not just bleached because ISPs randomly bleach it; there's a reason why they bleach it, and we have to sort out the reason why they bleach it, which is all to do with policing, and that's a really deep problem with the diffserv architecture.
T
Thank you. I wanted to add to the discussion thread between Bjorn and Jim. It appears that we're talking not about bottleneck links but about bottleneck nodes or interfaces, and if that's the case then we might look at different measurements: measure not end to end but on each hop that we traverse, in both directions. That will give us the advantage of an instantaneous reflection of where the bottleneck is, and the dynamics of these measurements will give us a better clue and understanding.
T
We need to have a tool and a method, and use a metric that already gives us the information we need, so we can do the localization, and that probably seems to be residence time, but we can talk about this later. Thank you.
C
Thank you, Greg. Evgeny, you're the last one.
B
Thank you. Related to the question about the number of bottleneck links or nodes: in the general case I agree with you that there are several;
B
there could be several bottlenecks. But today the main part of traffic is generated in wireless networks, and if we consider wireless networks, maybe not even Wi-Fi but cellular networks, we will see that the bottleneck is there, in terms of both throughput and latency, because cellular networks have very significant problems with delays, and if the users are cell-edge users they experience very low capacity.
B
So the bottleneck is there. Thank you. And after the break we will have a lot of presentations related to wireless networks and to the issues in wireless networks, so I think they will clarify this statement. Thank you.
C
And that is the end of our session; back over to Wes and Evgeny.
A
Thank you to Christoph for chairing the last two hours' worth of discussions.
G
Yeah, that's all right. We've got eight minutes.
K
Since you're both here, I think one of you should remind everyone that we should use the WebEx chat as a control plane and Slack as a data plane, because once a discussion emerges in the WebEx chat it's very hard to note every cue, and people can get upset. So maybe you can use the weight of your position to remind them.
K
Yeah, it turns out it is very, very easy to miss a cue.
K
And there's also a benefit: once an interesting discussion or question emerges in Slack, it is possible for us to include at least some of those ideas in the final report, if possible; with the WebEx chat that really isn't an option. It's a very smart
K
tool; Slack is used by the IETF and it's quite standard in the industry, and yeah, everybody wants to make their living somehow.
A
So the IETF is actually possibly going to move to a different tool; I don't know if it was announced yet or not. We'll probably continue to use Jabber for working group sessions, but for general chat I think we're heading to Zulip or something like that.
A
We can probably do that. Sorry about the interruptions today. Let's use the last minute while waiting for slides: most of it is the same, it's just that the last few presentations of the day are in a slightly different order; other than that the website is still accurate, except for today's. Again, we hit the same problem I talked about yesterday, where the person who can update the website is disconnected at the moment.
A
We got very late submissions for a number of papers and a number of slides. We'll try to get the slides online as well, but it will probably be next week before we get that totally wrapped up, unfortunately.
A
Yeah, as I said, juggling 40-plus submissions over mail was challenging at best.
B
I think we can start. The afternoon sessions will be chaired by Sam Crawford, so Sam, please.
G
Thanks very much. Okay, our first presentation is from Jari and it's titled "Observability is needed to improve network quality". Over to you.
AG
Yes, thank you. I think you can move to the next slide; I only have one slide. This is a bit of a big-picture, top-down presentation, and maybe appropriately so, because we're beginning the discussion on cross-layer topics, and there are more details and specific proposals in the coming talks. I also want to call out Cullen Jennings, who may not be on the call right now but was at least yesterday, and who last year talked about this
AG
problem: that if you're a content provider or a user or a network player, you often don't understand why some issue is a problem. You can't necessarily see what's happening on the user's Wi-Fi, or, if you are the user, you don't know whether the problem is at the network provider, at the cloud facility, or in the application.
AG
That's a problem, of course, and it's a problem of limited ability to observe: isolating a problem, debugging it and fixing it is difficult, and that's not desirable; we want better service. And of course this is not related only to problems: in general, people want to measure things and understand how good certain network offerings are, for instance, and not all aspects of desirable behaviour are directly visible, at least not with the simplest measurements, and even some more complicated measurements
AG
still leave out many other things, like various features and capabilities and security aspects of networks. Also, dynamically changing situations are usually not shared with others, so we're basically missing out a little on things we could be doing. Our paper with Amelia was basically proposing that the direction for solutions is more, or better, collaboration among multiple parties. We're arguing that we don't just have two parties, the sender and the receiver;
AG
we actually have many players collaborating to do something, and being able to talk to those other players would improve things at least somewhat. In some situations we want observability: we'd like to be able to explicitly identify a situation, as opposed to inferring something from some very indirect measurement. And if this sounds very abstract, it is; I do have some more concrete examples.
AG
The one that's not listed on the slide, but that we talked about yesterday, is: if you have measurements and you have data, please publish it. It could be a continuous publication rather than one talk at a conference once during the lifetime of the system.
AG
You can also imagine that there would be some actual interfaces, or the ability to use a network for assisting in some measurements in some way, by the user or by the application. Similarly, Robin is going to talk about capabilities for endpoints to produce information that could assist others in understanding what's going on, so that's another interesting angle. We sometimes have various kinds of probes and probing processes going on which could be used.
AG
I'd like to see capability discovery discussed a little bit more: what does this network do, in addition to having this latency and this bandwidth? Does it do v6? What kind of DNS, what kind of security, and so on. Also, for security, it would be really nice to understand what's actually going on. Sometimes you only see the immediate security situation, that
AG
my interface is connected this way and there's this security, but you don't know what's happening after that hop. Also, when you use various kinds of servers you don't necessarily know what's going on; you see that, well, I'm using TLS, but what's at the other end of the server? One of the things we played with a little is using confidential computing to be able to provide some assurances of what happens to your information.
AG
Usually this is only possible for mutually beneficial cases, and it needs to be explicitly decided, not just randomly sending some headers that other people look at. If you haven't read RFC 8558, go read it, it's a good one. And of course we do need standards; again, Robin is going to talk about some of that. And finally, of course, you have to be very careful with data; we also talked about that yesterday: what do you give, and to whom?
G
Thank you very much. Any clarifying questions from anyone?
AG
Was that a question for me? I'm not sure I understood the question: circular buffer, what?
G
I think let's move on. Nick, I don't think we got the question; if you wouldn't mind following up in the discussion afterwards, that would be great. Okay, thank you very much for that.
G
Moving on, we have Robin, who will be presenting "Merge those metrics: towards holistic logging".
A
We have the slides for this joint paper up.
AA
Yeah, okay, let's go. Next slide, that's the one; I'm sure the muting will happen sometime. Next slide, please. So we've been talking about all these different metrics that we might use to infer quality. Next slide, please; there we go. No, one back. Is this not the updated deck? Whatever.
AA
This is not the most recent deck, but it doesn't matter; the updated deck has just the left part of the image, so focus on that. We've seen all these different metrics you might use, and the observation we made is that these are typically extracted from different places, or at least in different formats. For the lower-layer protocols you typically do something in-network, or use something like packet captures, but then for the higher levels, TLS, HTTP,
AA
you typically extract these at the endpoints, from inside libraries or platforms such as browsers. And then you have this whole thing above, the business logic, which is often something completely different: you have custom tracing pipelines or things like Google Analytics that people use. This is of course logical; these things are implemented at different layers, kernel versus user space, and while we might want to use packet captures for everything, that becomes much more difficult once we start encrypting stuff from TLS onwards.
AA
Now, these three different levels don't really matter for most use cases; they're still useful. But the moment you move to more complex situations, you really want cross-layer visibility, from at least, let me say, TCP all the way up to the business logic. This is something we especially saw for things like web performance, web page loading performance and video streaming, and I think it holds for gaming,
AA
which we saw today, as well. And the thing is that while we do have data from all three of these levels, it is still very difficult to get it all together and to analyze it all together in a single view. So that's one of the issues, and then we have slide two, with QUIC.
AA
With QUIC we have something that makes this both easier and more difficult, because QUIC is now encrypting everything; you certainly can't use your packet captures anymore to do much of anything useful. But the fact that most QUIC stacks, at least, are in user space now, or at least were engineered from scratch
AA
this time around, means we can actually start logging, or grouping, things more at the endpoints. So, having both the transport-level QUIC stuff and everything that comes on top, HTTP/3, but also some of the business logic, like we are doing, for example, for ABR video streaming in our experiments, you can combine all of that into a single log, a single view, which is our qlog project. But there are also separate other projects using, for example, eBPF and things like that to extract things from, let's say, TCP. And the idea is that we
AA
might want to take this one step further, to the to-do on the slide, to have the full cross-layer view, or at least, maybe not down to the physical layer, I don't know if that's possible, but as deep as we can go. And the idea there, and this comes back to what I said about Praveen's talk before, because he touches on these things as well, is that I think it's not really doable to log all of this into a single framework,
AA
let's say a single API on the endpoints, but that we might want to extract these from the separate implementations or settings and then aggregate them in a post hoc fashion, so generating a single view from different data sources afterwards.
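A minimal sketch of that post hoc merging step, assuming each layer has already dumped its own events as timestamped, newline-delimited JSON records; the field names and the qlog-like event shape here are illustrative and not the actual qlog schema. The only real work shown is normalizing timestamps and sorting everything into one time-ordered view.

    import json
    from heapq import merge

    def load_events(path, layer, ts_field="ts"):
        """Read one layer's newline-delimited JSON events and tag them with the layer."""
        events = []
        with open(path) as f:
            for line in f:
                ev = json.loads(line)
                ev["layer"] = layer
                ev["ts"] = float(ev[ts_field])  # assumes all sources share a clock and units
                events.append(ev)
        return sorted(events, key=lambda e: e["ts"])

    def merged_view(sources):
        """sources: list of (path, layer). Returns one time-ordered cross-layer event stream."""
        streams = [load_events(path, layer) for path, layer in sources]
        return list(merge(*streams, key=lambda e: e["ts"]))

    # Usage sketch (hypothetical file names):
    # view = merged_view([("quic.ndjson", "transport"),
    #                     ("h3.ndjson", "http"),
    #                     ("player.ndjson", "application")])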
AA
Most of it is about that, and once we have this single view, then we can start creating tools to analyze and observe these cross-layer and cross-endpoint effects much better, and, like I said, I think this is going to be one of the main challenges going forward. Last slide,
AA
please. And once we have that, single aggregated logs that give these full views, I think this could be very, very useful for sharing with other parties as well. This of course again introduces the privacy aspect that we've discussed multiple times this week, but, like I said yesterday, we think this is fixable; we can do it via obfuscation, omission or aggregation for some cases.
AA
We think this might enable or solve some of QUIC's observability problems, for example by sharing some of this sanitized data with, say, network operators, who knows. But also, crucially, the bottom item there is the ability to share much more extensive data sets than we currently have, especially from industry, with people like me, the pure researchers who would like to get this data. Now, I am not naive enough to think that privacy is the main reason they currently don't
AA
do this; sorry, I see we're pretty much at time.
AA
To finish: I'm not naive enough to think that privacy is the only thing, and that just having a good way to handle the privacy of these aggregated logs will solve it. But I would like to have a discussion about whether this can ever happen, and what kind of incentives would additionally be needed to get these parties to share more of this low-level data, so that we don't have to rely on their conference publications with very high-level conclusions. Thank you.
G
Okay, thank you very much, Robin. Any clarifying questions before we move on?
AJ
Next slide, there you go, thanks. So, to set the context: the Xfinity Wi-Fi network is a public hotspot network; we have around 21-plus million hotspots, and for people who are familiar with cellular 3GPP architecture, it is built just like a cellular network.
AJ
Typically a customer of ours would be 100 miles from the WAG, so that's your first point. We have around 120-plus hardware types in the network at any given time; every Wi-Fi SoC you know of is out there in our network, typically in three flavors: residential, outdoor, which is on the strand, and business locations. At any
AJ
given time there are 200-plus firmware versions floating around in the network. On the bottom you see that we do both active observation and passive observation for quality of experience.
AJ
For active observation we do walk tests with synthetic clients, but they're limited by time and space and number of samples. Passive observation is what we are here to talk about: we do passive observation of KPIs, as shown on the top, basically at our core network, and our goal is to use passive observation, especially latency, as a proxy for quality of experience on our public Wi-Fi network. Next slide, please.
AJ
Just given the scale of our network and the number of TCP connections, as you'll see in a minute, measuring RTT on a per-transaction basis would be very cost-prohibitive and doesn't even scale. But there is enough richness and diversity, due to complex web applications, that there are enough TCP connections per application when a user connects to the network, and we actually found this correlates pretty well with RTT.
AJ
Then we also use TCP retransmissions, basically the percentage of frames that were retransmitted over the entirety of the session. Again we use a statistical distribution, but you have to have a benchmark: our TCP connect latency benchmark is 150 milliseconds round trip, from the core network to the client and back, which includes the uplink and downlink of the Wi-Fi to the client, and four percent is our marker for retransmissions.
AJ
But again, we are interested in the tail, p98, p99, though we do need a benchmark, which is four percent for us. And then finally throughput: for us throughput is not a KPI, it's an outcome. If you keep improving the network by doing network optimizations and reducing latency and retransmissions, we actually see that the throughput consumed by users, for applications like video, which is the primary application on the network, goes up in those markets where we reduce latency. Next slide, please.
AJ
So this is a typical example of what we do. We do KPI instrumentation at the point of observation, 24 hours a day; these instruments out there just measure KPIs for billions of sessions. Each sample you see here represents an individual TCP connection attempt. They aggregate at night, and the next day it's ready for passive observation and data collection.
AJ
So what you see here, this graph, is a statistical distribution, a CDF and PDF, for all the connections that happened the day before. Again, this captures every connection, and we then bin it into different cohorts.
AJ
You see a subscriber-type cohort, AP class, AP firmware, AP hardware, different markets, and then we use that for trending whether our network is measuring up to the benchmarks we have. But also, for all the proofs of concept and optimizations we do at a national level, we can actually compare before and after, which I'll talk about a little later, and see whether we're actually making an improvement.
AJ
As you can see, close to 98 percent of our TCP connects are below our 150 millisecond benchmark today, but we want to keep moving those samples to the left as much as we can. Next slide, please. Same thing with TCP retransmissions; we do the statistical sampling.
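A minimal sketch of the kind of per-day distribution check being described: given a list of per-connection TCP connect latencies, compute an empirical CDF, a couple of tail percentiles, and the share of samples under the 150 ms benchmark. The sample values and variable names are illustrative.

    def empirical_cdf(samples):
        """Return (sorted values, cumulative fractions) for plotting or binning."""
        xs = sorted(samples)
        n = len(xs)
        return xs, [(i + 1) / n for i in range(n)]

    def tail_report(latencies_ms, benchmark_ms=150.0):
        xs, _cdf = empirical_cdf(latencies_ms)
        def pct(p):
            return xs[min(len(xs) - 1, int(p / 100.0 * (len(xs) - 1)))]
        within = sum(1 for x in xs if x <= benchmark_ms) / len(xs)
        return {"p50": pct(50), "p98": pct(98), "p99": pct(99),
                "share_within_benchmark": within}

    # Usage with made-up connect times (ms) for one day of one cohort:
    print(tail_report([22, 35, 40, 55, 60, 80, 95, 120, 140, 300]))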
AJ
We have a lot of interesting observations on what these individual TCP connection types are, how long they are, how many packets, but this is for a separate day. Again, roughly eighty percent of the samples are within zero to one percent TCP retransmissions, but on the extreme right you see that two percent of sessions are at or above 25 percent retransmission. Those are the ones we are after, and why that happens. The biggest
AJ
challenge for us is the cell edge. So one takeaway I would offer is: if you have had one of those experiences where you're standing at a stop light and your Google Maps stops working because you're randomly connected to some Wi-Fi access point at negative 90, negative 88, negative 80 RSSI, then, people who are representing smartphones here,
AJ
please go back home and tell your engineering to build a better connection manager. If you have four bars of LTE and one bar of Wi-Fi, or less than that, please connect to the LTE, and do better application design. Next slide.
AJ
This is a standard A/B test; that's how we do network measurements and improvements, nothing new here. We compare markets, and whatever works, we make that optimization change, and then wash, rinse, repeat. That's pretty much it, thanks.
H
I wanted to ask the obvious question about Jari's presentation. The first time, when I was trying to introduce visibility into the IETF seven years ago or so, I was hit with a lot of backlash on privacy, and I don't think that has improved; if anything it's gotten worse, depending on which company you're working for. To give a simple example that I think would be interesting today, not even about DSCP: I've seen a lot of home routers that have improved
H
low latency by identifying TCP acknowledgement packets and putting them into a low-latency queue. Now we're moving to QUIC and a bunch of other protocols, so we don't have that information in the network anymore, and it's not even just the troubleshooting; it's the whole AQM and queuing and low-latency handling that will be suffering. So I'd really love to hear from Jari some
H
more explicit ideas about what exactly the bounds of visibility are that we can provide, to basically comply with whatever the IETF feels is the necessary privacy.
G
Thank you. Brandon?
N
Yeah, I think there's been some discussion about what data could be shared between companies that have access to it and other people who want to do research on internet quality, and I think a question is when that data would be actionable, because it can be a lot of work to export that data, and so what would you do with it?
N
To give an example: if I'm looking internally at a session where somebody was playing a video and it stalls, I'm going to want to look at what was occurring at the CDN, how it was occurring for that connection, how we were using network resources, whether we had multiple HTTP requests in flight competing with each other, and what was occurring for other users of that aggregate. Is this user connected to a cellular network that's congested, or did they just walk into a subway station and lose signal, and that's why the video stalled?
N
And then you really have to extrapolate from all that; you can't just look at one session. You have to find out how prevalent this type of problem is, and then experiment with possible solutions, and the experiments are often making changes at the business logic, as Robin talked about, but also at the transport or deeper layers. I think that kind of becomes a barrier to exposing data, because it's not clear what other parties would do with it, or whether it would really be actionable for them.
G
Okay, thank you, Dave. All right, thank you very much. Bob, you're next.
I
Checking that specs and so on at least have management interfaces to do monitoring and things like that: I pointed out not long ago that none of the experimental AQM RFCs have anything to do with monitoring in them.
G
Are you able to mute, Nick?
I
Right, yeah. So those RFCs don't have anything to do with monitoring or management in them, and so even if an ISP wanted to measure whether it was to blame for something, let alone someone outside trying to measure that, they can't do it. So the IETF needs to have a bit more diligence about checking the management stuff. I know a lot of people find it a pain, but it's got to be there.
G
Okay, thank you, Bob. Jari?
AG
Yeah, Bob has a point about the perfect being the enemy of the good. We should not shoot for sharing everything, obviously, and back to Toerless's point: it's again difficult to share everything, and you can't actually have a one-size-fits-all answer. Unfortunately, you have to look at the specific cases: is it okay to share this or not? With spin bits and similar bits and so on, people are relatively comfortable.
AG
If you start sharing, I don't know, application identifiers and the like, then people get more uncomfortable. But just to give some examples of the kinds of things you could actually share: certainly I think everybody should be okay with providing information about the security of the system you're connecting to or through, and you should be able to do capability discovery, for sure.
AG
With regard to networks providing measurement data: if you request a specific measurement, or something that relates to your own flow, that seems a reasonable request. What do you see of my flow, or what do you see towards this IP address? And if it's a content provider, they could certainly provide some aggregate results: well, in your area, or in your country,
AG
the average delays to our services seem to be like this, and in your ISP like that. I think those kinds of things are reasonable, but there is obviously a lot of per-user, per-application data that is not reasonable, and we just have to look at it case by case.
Z
Thank you. Okay, just a quick answer to Dave, who was saying that he saw that data point with the AF marking on those packets: yes, it happens, yes, it exists. We've measured that in the past, and Gorry's group has done some measurements as well. DSCPs sometimes pass through the network unharmed; it's rare, but it does happen, and there are numbers and statistics. So it's not true that they're always bleached; that's maybe a misconception, but very often, yes.
A
So I'm going to put on my pessimistic hat here for a second. We discussed a lot yesterday about the economics of being able to instantiate these metrics we may want to come up with, and how we do that in a way where we can actually convince the major players that they want to deploy something that allows end-to-end debugging, where the CDN can actually reach all the way into the user side of things and see
A
what's going on. In the past there have been network operators that have tried to promote fine-grained access control, where the smaller ISP can query the upstream against that interface, for example over SNMP or something else under network management, but that's really only single-hop. And I'm curious how we incentivize turning on some sort of debugging metric when nobody wants to have the finger pointed at them as the cause of the problem.
H
Just to answer Jari: I perceived that answer to be a fair assessment for the internet, where I think the privacy expectations, and therefore the limits on visibility, are quite different from private networks. I think it would be very helpful to also provide more background on the experiences people have had, and the requirements they do have, in private networks, even in things like enterprises. When Microsoft was implementing IntServ support in Windows XP, it had a very
H
rich policy mapping from knowledge about the application, exercised by the system operator, over to, for example, DSCP or RSVP, and likewise, if you look at industrial networks, you have exactly that: examination all the way up into the application layer by intermediate devices, to basically foster the performance and other metrics. So I think we should be aware that the IETF isn't only for the internet.
J
Having recently run deeply afoul of the privacy issues in my own work, I would really love to have guidance on, in particular, being able to widely expose congestion information from a very specific structure called the tcp_info structure. I just pasted it into the Slack chat, and someone coming up with a good set of rules and guidelines for being able to widely collect and/or distribute that information would be a great help to me presently.
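For reference, this is roughly how that structure is read from a connected socket on Linux; a minimal sketch that unpacks only the long-stable leading fields of struct tcp_info. The exact layout is kernel-dependent, so treat the offsets and field positions here as an assumption to verify against your kernel's headers rather than as a portable API.

    import socket
    import struct

    def tcp_info_summary(sock):
        """Return a few congestion-related fields from a connected TCP socket (Linux only)."""
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 192)
        # Leading fields of struct tcp_info: 8 one-byte fields, then 32-bit counters.
        fields = struct.unpack("<8B21I", raw[:92])
        return {
            "state": fields[0],          # tcpi_state
            "retransmits": fields[2],    # tcpi_retransmits
            "lost": fields[14],          # tcpi_lost
            "retrans": fields[15],       # tcpi_retrans (segments awaiting retransmit)
            "rtt_us": fields[23],        # tcpi_rtt
            "rttvar_us": fields[24],     # tcpi_rttvar
            "snd_cwnd": fields[26],      # tcpi_snd_cwnd
        }

    # Usage sketch:
    # s = socket.create_connection(("example.com", 80))
    # print(tcp_info_summary(s))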
G
Okay, thank you. The queue is getting low; if anyone has any questions, feel free to add yourself to the queue in the WebEx chat. Jim is next.
L
The enemy of the good is the perfect. I will note that, with most of the problems being at the very edge of the network, whether in a home environment or a corporate environment, a lot of the privacy issues don't exist there; they exist primarily when you cross administrative boundaries. So don't believe that, just because it may be really tough inside the internet, working at this problem isn't worthwhile, because it is: we need to be able to diagnose the networks where the problems occur.
A
Hey, we just need a dialog box on every single home router that releases that tcp_info. I mean, that's the answer to privacy, right: you've got to get authorization to release, and that is very hard. We have so many instances of that happening now, with every phone saying, hey, do you mind if I use background CPU in order to do federated learning on data, and maybe federated learning techniques are one way we could look at how we do this.
G
Thank you, Bob.
I
Having the ability to monitor things on a device isn't just for the ISP, obviously. If it's your home device, you can do it yourself, and then you can determine whether it's your device or something else that's causing the problem. But if it isn't there in the first place, the ISP can't use it, the user can't use it, no one can use it; it's got to be there.
G
Okay, Roberto?
O
So if we're rigorous about encrypting things, and come up with reasonable key management, as has been discussed in the chat here, it helps us solve at least the security and privacy related problems, and then we can orthogonalize the when-to-collect, when-to-trigger and where-to-store questions separately.
L
Thank you, Roberto. I will note that end-user customers have an incentive to release information to their ISP when they're having trouble, so I think this is something which can get worked out. Similarly, between peering, the different networks may wish to work those things out, but we have to make sure we can easily get the information.
J
All right, so let me point to another place where privacy is both misinterpreted by the public and also very difficult to do research on. A lot of my work has been focused of late, more and more, on video conferencing, and we don't get adequate statistics back from a host of these things, and yet the path that the server has is unencrypted:
J
the servers have access to the unencrypted data, and that's really worrisome to me. The perception in the public, however, is that it's encrypted, it's all safe, and that really isn't the case. Juliusz Chroboczek last year wrote a thing called Galène, mostly for teaching, and I have been working with his group on adaptive congestion controls and being able to look at the statistics, with my users' consent, to try to build a better video conferencing platform, and again the possibility of aggregating those statistics correctly, figuring that out...
G
Thanks for that. Jari?
AG
Yeah, two comments. People are bringing up the limited-network or "my network" type situations, and that's a valid point of course; I'm just not, at least personally, a huge believer in these separate networks. In the end they tend to actually be multiple administrative domains: even in my home network there are some applications, the app on my phone and the corresponding server, that actually believe they somehow own
AG
this thing, and maybe rightly so, maybe not, but it's actually multiple entities and it's difficult to coordinate everything. The other point, going back to this perfect versus good: I think the answer is not some huge architecture, even though I was basically talking about sharing everything; it's really small steps. Maybe it's a small improvement in
AG
traceroute, or the use of some probes, or periodic publication of content provider delays to different ISPs or regions, aggregated and anonymized, of course. I think that's the answer we should be looking for, not the grand change.
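A minimal sketch of that last idea, periodic publication of aggregated delays per ISP or region, assuming a content provider already has per-request latency records tagged with a coarse region or ISP key; the aggregation keys, the minimum-count threshold used to avoid exposing small groups, and the record format are all illustrative choices, not a proposed standard.

    from collections import defaultdict
    from statistics import median

    def aggregate_delays(records, min_samples=1000):
        """records: iterable of (region_or_isp, delay_ms). Publish only coarse aggregates."""
        buckets = defaultdict(list)
        for key, delay_ms in records:
            buckets[key].append(delay_ms)
        report = {}
        for key, xs in buckets.items():
            if len(xs) < min_samples:   # skip groups too small to hide individuals in
                continue
            xs.sort()
            report[key] = {
                "samples": len(xs),
                "median_ms": median(xs),
                "p95_ms": xs[int(0.95 * (len(xs) - 1))],
            }
        return report

    # Usage sketch with made-up records:
    # print(aggregate_delays([("ISP-A/region-1", 34.0)] * 1500 + [("ISP-B/region-2", 80.0)] * 2000))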
G
Thank you very much. Matt?
R
Yeah, so I want to tell two slightly related stories about tcp_info. The Measurement Lab project collects tcp_info in fine-grained snapshots;
R
I sort of talked about this yesterday, and all of that data is public. The telemetry from the connection is naturally available to the content owner, the content provider, and that is the source of the information in an encrypted internet. Something that people might not know, and I actually don't know how much of this is public, so I have to be careful: if you attend things like NANOG, Google shares with directly connected ISPs subnet-grained statistics on things like packet loss. This is our way of helping the ISPs be as good as possible at what they do, and the generalization of this applies to any large content provider.
N
Thanks, Matt. Following up on Matt's point: there are other content providers that also share this type of information with ISPs, but the key challenge is always finding out what signals should be shared, in that more is not always better. Matt talked about packet loss: when we changed from CUBIC to BBR, packet loss rates in some cases increased for some networks, and that's because BBR is going to be more aggressive under some circumstances where CUBIC would have backed off.
N
So a network operator would see an increase in packet loss rate for their network, despite the fact that the performance perceived by their end users, the quality of experience, actually improved, and that doesn't make sense to most people. But that's because packet loss is often a function of the workload, and that's also why it's sometimes difficult to use data sets like those from M-Lab, because they have a very specific workload.
R
Yeah, I put myself in the queue. What we're trying to do at Measurement Lab is to derive metrics. So, for instance, the responsiveness metric comes from a bulk transport test; it's a side signal off of a bulk transport test, and at some level it tells you something about what the queue management strategy is for that connection, which may or may not generalize.
R
On the other hand, it does give a bound for what the network is likely to do for other workloads. The case I was humming and hawing about is that one of the things we can't see is whether or not there are separate queues for different flows, flow isolation. We can measure statistics about the flow that our connection is using, and that information is useful and valuable, but it doesn't tell you whether QoS is being applied to other traffic, for instance.
G
Thank you, Matt. Anna?
Y
Yes, I wanted to follow up on what Brandon said, because I think that's a very important point. In general, if we talk about sharing data sets, it's also very important to have the appropriate metadata for the key metrics that are captured, so that you can actually derive correct conclusions, because otherwise you can clearly also reach very incorrect conclusions if you just have a limited set of data available. So I think the metadata is very important if we talk about sharing data sets.
A
I'm going to agree a thousand percent with Anna. If you don't have labeled data, it doesn't really matter if you have some metric that you have no idea whether it translates to the quality of experience on the user side of things. You could have all the loss in the world, but if they had a great video conference anyway, as somebody, I think Brandon, just pointed out, it doesn't
A
matter. So if you don't know that the metric corresponds with user experience, then we're sort of lost.
G
Thank you. Gorry?
AD
Yeah, I was actually on a similar thing to Anna; I was just slow in responding. The metadata is so important, to know what was going on at the same time as you have the data, because when you look at it later it could be utterly meaningless if you don't know what the user experience was, or what the other traffic was that was happening at the same time. It all needs to be seen together to get a proper response, and that's kind of tricky; it's something that we don't really know.
G
Okay, thank you very much. The queue is at zero, so I'll offer one last round of opportunities to get into the queue, if anyone wants to.
M
Sorry, what do I do to get into the queue: plus-q or something?
AJ
So one of the things we have observed, and part of this conversation, is network selection. Most mobile devices have more than one radio access technology to connect to, and we have seen that some improvements can be made by correct network selection for the right latency. We have observed that connection managers, especially on smartphone devices, have a pretty strong affinity for the poorest of poor Wi-Fi signals when there may be a much better cellular signal available.
AJ
So some more focus on this area would help. Again, we are doing admission control on our side: if we can't sustain a connection quality below our KPI, we start not admitting those at the time of association. But I believe app developers and connection managers can do a much, much better job as well, because Wi-Fi, unlike cellular, is a more federated environment.
G
Okay, thank you very much. Are there any more entries to the queue? If not, we can take a shorter, three-minute break before the next batch.
O
I'm seeing a discussion here about packet loss, and I suspect we all know this, but I'm just going to say it anyway, to make sure we're all on the same page: applications don't give a flying whatsit about packet loss, beyond maybe increasing a buffer a little bit. Applications like Zoom and other things will happily send packets, and if they're dropped: okay, I'll send more packets. And we have congestion control because that's obviously a bad strategy overall.
O
But I just want to make it clear, since we're talking as if the applications care: the applications don't care about any first-order impact; they're impacted by the second-order impacts, and they may care about those. But in general we don't see application writers really concerned with that, scarily enough.
G
I think that might change a little with QUIC, but we'll come back to that. Brandon, do you want to go next?
N
I
think
again
just
on
the
pack
of
lost
point.
Looking
at
pacquiao,
some
isolation
often
isn't
going
to
be
meaningful.
If
I
have
a
one,
millisecond
propagation
delay
between
me
and
the
server,
I
can
recover
from
loss
very
quickly
for
content.
That's
fetched
right!
If
I
have
a
100
millisecond
loss,
that
means
any
packet,
that's
lost
unless
I
have
something
like
fec
or
I'm
doing
aggressive
re-transmissions,
it's
going
to
take
100
milliseconds
to
recover
from,
and
if
I
have
a
100
millisecond
boundary
like
on
this
call.
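A small sketch of the arithmetic behind that point, with illustrative numbers: unless FEC repairs the loss from redundancy already in flight, recovery costs at least one round trip, which is invisible on a 1 ms path and eats the whole budget on a 100 ms path.

```python
# Illustrative only: how long one loss stalls delivery, as a function of RTT and FEC.
def recovery_delay_ms(rtt_ms, fec_repairable, retransmit_rounds=1):
    if fec_repairable:
        return 0.0                      # repaired from redundancy already in flight
    return retransmit_rounds * rtt_ms   # each NACK/retransmit round costs ~one RTT

for rtt in (1, 100):
    print(rtt, "ms path ->", recovery_delay_ms(rtt, fec_repairable=False), "ms to recover")
# 1 ms path: loss is effectively invisible.
# 100 ms path: every loss costs ~100 ms, the whole interactive latency budget.
```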
J: All right, I agree with almost everything that was just said, except that there is no retransmission in video conferencing. Typically you just blast another frame, so there's no induced latency at all for having dropped the packet, aside from, typically, a brief video distortion.

H: I once had, on a global path from the US to Europe in a corporate network, 0.5 percent packet loss because of a bad transceiver, and guess what: everything was slow enough that I didn't see a difference in TCP. But then I was doing video conferencing and it totally sucked. There was no FEC in those video conferences, and it was ECMP, so I couldn't even reproduce it from another system. Guess what, it took me more than a year to figure that out.

H: So really the diagnostics, and the ability to do that segment by segment rather than only end to end, are really crucial.

AK: On most of those leading deployments there is an incredible amount of FEC used. During the height of the COVID crisis, when there was really a lot of congestion, sort of before YouTube and Netflix had to ramp their bandwidth down, we were at one point using more bits for audio than for video, because of the full redundancy on it. There were several cases where a small, 10 kilobit per second audio codec stream had been expanded with FEC into over a megabit, so it was pretty extreme. So there's a lot of FEC, but there is also a lot of retransmission used in those systems too, because it's very hard to recover from everything with just gross amounts of FEC. So all of them will use retransmissions in some cases, or, you know, a NACK-type scheme.
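Taking those quoted figures at face value, the implied redundancy expansion is roughly:

```latex
\frac{\sim 1\ \text{Mbit/s (audio + FEC)}}{\sim 10\ \text{kbit/s (audio codec)}} \approx 100\times
```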
AK: The educational versions of these products often provide a worse environment than the enterprise versions, because people are trying to keep the cost down, and the education ones often have latency that is just... I can't actually see how anyone really uses them for a truly interactive call. Sorry, that's such a long answer, but I wanted to throw in a few tidbits on that.

G: All right, thank you. Moving on to our next series of presentations, the first one is from Koen, on challenges and opportunities of hardware-supported low queuing latency without packet loss.

AL: We see that these issues result from past mindsets, call it the former design goal that higher throughput is always better, regardless of latency, and maybe also a second-order expectation that low-latency traffic doesn't need high throughput.

AL: Maybe before continuing, I want to make clear that all of this doesn't mean we can't improve latencies under congestion today, using the existing low-layer protocols and chipsets. Most of the time we can easily gain a factor of 10 to 100 today by identifying and removing these built-in rate optimizations.

AL: Maybe next slide. When we speak about low latency, most of us think in terms of priority and guaranteed bit rates, like in DiffServ, but most existing applications actually need both low latency and high bandwidth, which is more difficult to achieve continuously, and especially to guarantee that this bandwidth is available all the time, particularly on mobile networks. So it would help if the network could give feedback to applications on what rate to use.

AL: This is basically a way to control the rate of flows even without having a queue at all, so you can see it as an extra tool that is missing today, once this rate is under the control of that mechanism.

AL: In this way you can have a kind of net-neutral, fair-share policy, so that you give all flows the same rate but still give those that need low latency a zero or very small queue; other policies can also be used for managed services, to keep latency low under all network conditions instead of purely guaranteeing a high throughput.

AL: The result is that not only the network is responsible for low latency; it becomes a cooperation between networks and applications. That's important. Next slide.

AL: All of this requires a different mindset. People who are really working on trying to reduce latency in endpoints probably often face the effect that bigger buffers get more throughput: you need to create a big buffer in the network to actually get higher throughput. That is usually unintentional, a pure result of over-optimizing the rate, and the problem is that if you try to use a smaller buffer, you get lower throughput.

AL: In a scheduled medium where there is a lot of packet aggregation and everybody is sending for a long time, if you unilaterally try to reduce your sending time you only reduce your own throughput; you hardly reduce the latency. So these low-layer protocols really need to look at giving enough scheduling opportunities to all flows.

AL: Also, as you see in the lower box, and as some people already mentioned on Slack, hardware accelerators and physical layers queue up a lot of packets that are sometimes even invisible to management, so there too it's important that things work together.

AL: At the top there is also the impact of bursty technologies in one part of the network, or even the network interfaces or stacks of an end system, which will potentially cause drops, or further queuing on lower-rate links. I believe there are already proposals for adding playout timestamps to individual packets, which would definitely help, and I can support that.
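A minimal sketch of what acting on such a playout timestamp could look like at a receiver or relay; the field names are assumptions, not any specific proposal's wire format.

```python
# Drop media that has already missed its playout deadline instead of queuing it.
import time

def should_forward(packet, now=None):
    now = time.time() if now is None else now
    return packet["playout_deadline"] > now   # stale real-time data is worthless

pkt = {"seq": 17, "playout_deadline": time.time() + 0.040}
print(should_forward(pkt))   # True while the 40 ms budget has not expired
```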
AL: Coming back to the measurements: here we see a cumulative plot of the latency that every packet experiences in the different stages, and it immediately shows where things can be optimized if you look at the gray curve. If you really have the CDFs of the different stages in the network, you can also see here that the black curve is the MAC aggregation time under load, which is pretty good in this case; but even then this chipset needed an extra stage and we had to double the latency, and especially when the MAC conditions are worse, everything doubles together. And maybe the last slide, quickly.
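A small sketch of that kind of per-stage breakdown: given per-packet delay samples for each stage, the per-stage CDFs (or just their upper percentiles) show which stage to optimize. The stage names and numbers below are made up for illustration.

```python
# Build a per-stage latency CDF and report an upper percentile for each stage.
import numpy as np

def stage_cdf(delays_ms):
    x = np.sort(np.asarray(delays_ms, dtype=float))
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

stages = {"driver_queue":        [0.2, 0.4, 1.1, 3.0],
          "mac_aggregation":     [1.0, 1.5, 2.0, 2.2],
          "extra_chipset_stage": [1.1, 1.6, 2.1, 2.3]}

for name, delays in stages.items():
    x, p = stage_cdf(delays)
    print(name, "p95 ~", x[np.searchsorted(p, 0.95)], "ms")
```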
AL: I noticed a lot of ideas about tools to measure network latency, but tools to measure application-caused or application-induced latency must also be considered. Probably not many people would like to hear this, but it's also useful for users to know which applications cause the most latency or problems in their networks; the security aspect of that has also been discussed already. Of course. That's it, okay.

AM: Sorry, I didn't know animations were active on this. Can you just click through? Yeah, thank you. So, as I see it: we have software that collects data to perform diagnostics and optimization of both broadband lines and Wi-Fi.

AM: It goes through an anonymization process, and that little graph over on the right there is a histogram.

AM: So we collect data from millions of lines at multiple operators' locations. To start with, the data has no user information about it whatsoever, and then it's all aggregated together through histograms, so there are no privacy concerns, because it's only aggregated across all of North America and all of Europe, where we have deployments.

AM: So we pull in a lot of data, and we did an analysis for the DSA, the Dynamic Spectrum Alliance, which does a lot of interesting work on changing the rules for access to spectrum, for Wi-Fi and for cellular, and we found some salient parameters of interest, particularly for looking at trends in the data over time.

AM: There are your typical things, the traffic, the throughput speed, latency, and there are also Wi-Fi-specific things. Congestion is basically the amount of contention due to stations on your own access point; interference is from stations connected to neighboring access points.

AM: That also takes up some of the airtime. And we also collected a number of broadband statistics; we get a lot of broadband data, but we only included some of it in the study. So next slide, please.

AM: For North America, you can see for both 5 and 2.4 gigahertz that the trends are increasing. The 5 gigahertz spectrum can carry higher speeds, so there is more traffic there, and when we look, the Wi-Fi traffic doubles approximately every three years, and you would expect a similar increase in interference, congestion and so on without any additional spectrum. Another analysis we did that was quite interesting is: when is Wi-Fi limiting the speed? That's what the plot on the right is showing. 2.4 gigahertz Wi-Fi is often slower than the broadband speed; these are downstream speeds. For 5 gigahertz, you might look at it and say only 20 percent of lines are limited by Wi-Fi, but...

AM: Yeah, so we looked at individual parameters here, plotting the trends by linear regression again, and this is real-world data, so you see some oddities, but we take the overall trends on the right, and we basically just take a simple linear sum of traffic, interference and latency, and subtract efficiency, because as efficiency worsens things get worse, the other way around from the others.
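A sketch of that composite with made-up yearly trend values: add the normalized trends for traffic, interference and latency, subtract efficiency, and fit a linear trend to the sum.

```python
# Illustrative composite "pressure" index; all yearly values are invented.
import numpy as np

years        = np.array([2017, 2018, 2019, 2020, 2021])
traffic      = np.array([1.00, 1.25, 1.50, 1.75, 2.00])
interference = np.array([1.00, 1.20, 1.40, 1.60, 1.80])
latency      = np.array([1.00, 1.10, 1.20, 1.30, 1.40])
efficiency   = np.array([1.00, 0.95, 0.90, 0.85, 0.80])

index = traffic + interference + latency - efficiency   # efficiency counts the other way
slope = np.polyfit(years, index, 1)[0]                   # linear-regression trend per year
print(f"composite index trend: +{slope:.2f}/year (~{slope / index[0]:.0%} of the 2017 value)")
```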
AM: When you do that, it compounds at about 25 percent annually, and the result, kind of, is that we need more spectrum. The 6 gigahertz spectrum is helping; we're hoping to get the full 6 gigahertz spectrum usable by unlicensed, including the upper 6 gigahertz, globally.
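For what it's worth, the two figures quoted are consistent with each other: doubling every three years corresponds to roughly 26 percent compound growth per year.

```latex
2^{1/3} \approx 1.26 \;\Rightarrow\; \text{doubling every 3 years} \approx 26\%\ \text{per year, compounded}
```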
G: And Ken, I think there is a clarifying question for you in Webex, asking for a URL to be pasted to Slack. Okay, on to our next presentation, from Mikhail, talking about cross-layer cooperation for better network service.

AP: Morning, afternoon or evening to everyone. My name is Mikhail, and today I'd like to emphasize some of the problems we face when we try to measure the quality of experience for the end users, and to advocate for cross-layer cooperation as the means to solve them. Next slide, please.

AP: I think that's rather obvious, and the problem with wireless links is that they have really different properties compared to wired ones: you have different rates for different users, different packet error rates, and so on. Moreover, on wireless links, besides the queuing delays, you get a lot of protocol-induced delays.

AP: For example, if you try to measure the end-to-end latency with ping measurements, which I call single-packet measurements on the slide, then you often run into these delays.

AP: The protocol delays: in LTE you can face retransmission delays, and in Wi-Fi you face the same kind of delay too. The problem here is that, first, they are really frequent and, second, they are present at any load, so they obscure the actual load and the actual network state from the measurements, precisely because they are present at any load.

AP: Then, if you try to measure, for example, throughput, which means you want to send multiple packets into the network, or if you would like to measure the network capacity, which means you would like to, for example, send a couple of packets and measure the delay between them at the receiver side, well, you face the problem that wireless networks have a very high degree of aggregation, so they create very bursty traffic.

AP: All of this means, as on the slide, that single transmissions on modern wireless carry dozens of kilobytes of data at once, and all this data arrives at the receiver simultaneously, which means you do not actually have anything to measure between the packets at the receiver; it feels as if all packets have arrived at you simultaneously.
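A small sketch of why that breaks the classic packet-pair estimate: the estimate divides packet size by the receive-side inter-arrival gap, and an aggregated burst collapses that gap to nearly zero, producing a wildly inflated capacity figure. The numbers are illustrative.

```python
# Packet-pair capacity estimate: packet size divided by receive-side gap.
def packet_pair_capacity_mbps(pkt_bytes, gap_s):
    return pkt_bytes * 8 / gap_s / 1e6

print(packet_pair_capacity_mbps(1500, 120e-6))  # ~100 Mbit/s on a non-aggregating path
print(packet_pair_capacity_mbps(1500, 2e-6))    # aggregated arrival: absurdly high estimate
```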
AP: Moreover, if you have retransmissions, and you very probably have some, because when you transmit dozens of packets with a single-packet loss probability of approximately 10 percent, which is a standard threshold, then for a dozen packets the probability becomes really high. And all the modern, let us say layer-two or L2, protocols on wireless try to provide in-order arrival, or delivery, of packets.
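As a worked example of that probability, assuming an aggregate of 32 frames and the 10 percent per-packet loss threshold mentioned above:

```latex
P(\text{at least one retransmission}) \;=\; 1 - (1 - 0.1)^{32} \;\approx\; 0.97
```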
AP: And so... we're almost at time.

AP: Yeah, just a couple of seconds, please. My main point on this slide is that if we had a kind of cross-layer cooperation, if we had communication between applications and the network devices, we could exchange this information: we could efficiently learn the requirements of the applications, we could efficiently know the state of the applications, and that can be done privately, because nowadays we have DPIs anyway.

AP: We anyway have some approaches to analyze traffic, but in the better case the applications could just reveal this data to the network, not the private data but the metadata, or the properties of the flow, and on the other side the network devices could advise the applications what rate to choose and say what their conditions are. So that's basically it. Thank you for your attention.

G: Thank you very much. Any clarifying questions before we move on?

G: Okay, thank you. Moving on to packet delivery time as a tie-breaker for Wi-Fi access points. I don't know if Francois or Olivier is presenting.

AQ: Yes, it's Francois, can you hear me? (Yes, we can.) Okay, great. So hello everyone. My talk comes from a problem we encountered when we were doing some experiments using Wi-Fi access points in our lab. As you know, there are a lot of use cases and a lot of devices using Wi-Fi, from file transfer to cloud gaming. Next slide, please. And these different use cases have different requirements.

AQ: You will need a low latency, and maybe I inverted online gaming and cloud gaming on the picture, but basically these use cases need quite a low latency. Next slide, please. And sometimes when you use Wi-Fi you have access to several access points, and the user basically never decides which access point to use if they have access to several; often it's the device that chooses the last one they used, or something like that. Next slide.

AQ: Please. And so, if something goes wrong while they are using that device for a real-time use case, how can they choose the correct access point? If you look at the metrics you have currently, they often only show the bit rate and some security information, but nothing about the latency or the jitter. Next slide, please. But with the same rates, the same configuration and the same distance, you can still have different latency or different jitter.

AQ: This is an experiment from our lab that made us apply for the workshop. Basically, we have the same configuration, but we change the channel. One of the channels is more crowded than the other, and this implies retransmissions, which arise at the MAC layer, and therefore sometimes a larger latency. Next slide, please. And so, basically, it's not revolutionary, but our proposition was just to announce some latency-distribution information to either the user or the application. For example, for the user, currently they see this. Next slide, please.

AQ: If you have some information about the distribution of the jitter on some access points, you could also do something at the application level, other than asking to change the Wi-Fi access point if there is another one. For example, for games that are not too latency-sensitive you can use a playback buffer; for strategy games and such, it could still work if you don't need fast reflexes. And if the user wants to play really latency-sensitive games,

AQ: they should know that the Wi-Fi has a large delay; they might prefer using 4G, for example, and things like that. We discuss more use cases in the short article we sent to the workshop. So that's it for me. Thank you for your attention, and don't hesitate to ask your questions. Thanks.

AQ: Yep, so it's on the eighth slide of the PDF. It was just for an illustration here; I added a gamepad, for example, for Google.

G: Sure, okay, thank you. First question.

AR: ...polling, so that the most up-to-date jitter distribution will be accessible?

AQ: It could be anything, but what I was thinking is that maybe, for Linux, you can look at the Wi-Fi driver and at the delivery time of each frame. You remember when you sent one frame for the first time, the last time, and the time when it is finally acknowledged, and that's a way to measure the delivery time. So you can have a distribution of the delivery time of each frame, and so you can also have an idea of the jitter.
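A minimal sketch of that measurement, assuming the driver can log, per frame, the first transmission attempt and the final acknowledgement; the records below are made up.

```python
# Per-frame delivery time (first TX attempt to final ACK) and a simple jitter summary.
import statistics

records = [  # (first_tx_timestamp_s, ack_timestamp_s) per frame, as a driver might log
    (0.000, 0.002), (0.010, 0.011), (0.020, 0.029), (0.030, 0.031),
]
delivery_ms = sorted((ack - tx) * 1000 for tx, ack in records)
p50 = statistics.median(delivery_ms)
p95 = delivery_ms[int(0.95 * (len(delivery_ms) - 1))]
print(f"median {p50:.1f} ms, p95 {p95:.1f} ms, jitter (p95 - p50) {p95 - p50:.1f} ms")
```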
K: A follow-up, to refine the question: is your intention to share information from the access point, or from the Wi-Fi driver on the device?

AQ: It's easier on the device directly, but we could imagine that the access point could send it in its beacon frames, for example, something like that. I guess on the device it's easier, but you only have the upload path in that case.

G: Okay, thank you. Okay, so we'll move over to the discussion portion; we have about half an hour for that. So again, please write plus q in the Webex chat. I see a few questions coming in now, so VB first.

AS: Thank you, and thank you for all the presentations. I have a question for Francois. My question is, I mean, the idea is good, but my concern is: if you start showing folks that this access point is better than that access point, then, let's say on a university network,

AS: everybody will try to switch to the one that is best, and then that one will get congested, and so on. So giving the user a choice to select, sure, but if everyone switches from one to another, then I don't know how it helps; and if it's automatic, maybe that is a better approach, perhaps, but I'm a little concerned about how this is going to work.

AS: The second question, or concern, that I had: let's say we are talking about the home network, where we don't have multiple access points; we usually have one access point. I'm not an expert in Wi-Fi, but I don't see that I can have different channels advertised from my access point; I can only set up a 2.4 gigahertz or 5 gigahertz AP. So perhaps you can provide some insight on that as well.

AQ: Okay, so for the first question, about when it's the user that has to decide: I agree with you; here it was an illustration.

AQ: The idea is that the latency can be a tie-breaker between different access points, but if an access point has a lower latency, maybe it's not the best overall, because maybe it has a poor bit rate. So that's why either we choose for the user, or we find a way to make them understand that, for example, these access points are good for real-time purposes but not for bulk download and such, to limit the bandwidth utilization.

AQ: So I don't know if that answers the question. For the second question, yeah, sure: here it's an experiment with two different channels that I activated, so they are not available at the same time. It's just to say that the bit rate information, and maybe also the frame loss information, may not be sufficient to determine the latency that you will have.

AQ: So yes, it can be used to choose between access points, but also, if you have only one access point, you can still send this information to the application so that it can do something with it. Even with only one access point, you can maybe configure your playback buffer, for example, to make it able to absorb all the jitter, or, if you have multipath on your smartphone and you have 4G access, you can tend to use that.
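A small sketch of the playback-buffer idea: size the de-jitter buffer from the tail of the observed delay distribution. The delay samples are illustrative.

```python
# Size a playback (de-jitter) buffer to absorb delays up to a high percentile.
def buffer_ms(delay_samples_ms, percentile=0.99):
    s = sorted(delay_samples_ms)
    target = s[int(percentile * (len(s) - 1))]
    return max(0.0, target - min(s))     # headroom beyond the fastest packets

print(buffer_ms([20, 22, 25, 30, 60, 120]))   # 40.0 ms of buffer for these samples
```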
G: All right, thank you. We should move on. Stuart, you're up next.

U: It seems that all buffers will forever be bloated, so the only answer is to move my traffic to a different bloated queue that happens to be mostly empty right now; but when that queue fills up, the delay comes back. For now I'm deliberately ignoring Wi-Fi channel access and aggregation, because the elephant in the room is bufferbloat: why focus on a few milliseconds of Wi-Fi channel access time when there are hundreds of milliseconds of bufferbloat? There has also been talk about different traffic patterns, which makes the problem seem intractable.

J: I'm ending up trying to respond to three different things, so I have to queue myself up. I'll respond to Stuart's comment there about scheduling better at the access point. We have generally found that the access point, using just the best-effort queue, can do a much better job than trying to maintain four different queues of differing priority, given how the underlying MAC actually works for clients.

J: The issue is a bit more vague, so it definitely should be a subject of more research, but scheduling better and handling bufferbloat better for one queue is a lot easier than handling it for four separate hardware queues with different characteristics. If I still have 20 seconds left, I'll give you 15. We talked about the latency issues; some of my work will really impact that metric. I liked very much the idea of a better icon presented earlier.

J: Today we are typically measuring how good the connection to us is, but we're not measuring the connection from us to the access point, and there's a lot of good stuff in the latest Wi-Fi standards that will allow us to measure that bidirectional connectivity, and maybe we could be reflecting that in an icon that users could see.

I: Hi, yeah, I want to comment on Francois's talk as well; that was nice. I just wanted to make the point that, in your plot of throughput and latency showing which applications use more of one or more of the other, I think that's not really an intrinsic property of the applications; it's just an intrinsic property of the way we currently do low latency, and I'm sure there would be applications that would want to be both high bandwidth and low latency

I: if you could do both. So that's really a problem to solve; it's not an intrinsic thing that all games are low throughput because they want low latency. I'm sure someone would want to make a high-throughput game if they could do both.

G: Okay, thank you. Bob, you're next.

AN: This has been kind of an arms race, where you find that your 2.4 gig band is full and 5 gig became available, so you had to buy some new equipment in order to jump in and take advantage of it. And, specifically to Ken: in the US, when is the 6 gigahertz band going to become available to us? I'm in a multi-dwelling unit now, and the SSID situation is really clogged, let's just put it that way.

AM: That's low power; it's not the greatest. What you really want is full power, allowing for outdoor use, and that is basically waiting for us to finish up the specs for the systems that allow it. It's called AFC, automatic frequency coordination: systems that allow us to share the spectrum with the incumbent point-to-point microwave links, which, because they're point-to-point links, don't use up a lot of geographic space. So that's the trick there.

H: Thank you, yes. I would like to take up Stuart on the point that, the way I heard it, one policy fits all packets. I think that's especially not true on Wi-Fi, given the wide variety of nerd knobs to optimize transmission for different targets, even as much as 10 years ago in Wi-Fi. Just to name two of the most important ones: you can define a different amount of spectrum time for different classes, and you can also define a different number of retransmissions.

H: So for something like typical best-effort internet traffic you would allow quite a few retransmissions, but that introduces quite a bit of latency, even if it's just a local Wi-Fi link; and then, on the other hand, for something like real-time traffic you would do fewer retransmissions, but reserve much more of the spectrum bandwidth. So yeah, I think the differentiation of QoS for different classes of service is especially underutilized in Wi-Fi.
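Purely as an illustration of the kind of knobs being described, the trade-off can be written down as a small policy table; these are not real driver parameters, just the shape of the policy.

```python
# Hypothetical per-class knobs: best effort trades latency for reliability,
# real time trades retries for a larger airtime reservation.
WIFI_CLASS_POLICY = {
    "best_effort": {"max_retries": 7, "airtime_share": 0.50},
    "real_time":   {"max_retries": 1, "airtime_share": 0.35},
    "background":  {"max_retries": 7, "airtime_share": 0.15},
}
```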
AL: Yeah, I also wanted to come back on the same topic, related to DiffServ. I agree with Stuart, and Dave also mentioned that a lot of queues, or maybe not queues but different channel qualities, pose a lot of problems related to managing the latency, because your best-effort, let's say, what is it called in Wi-Fi, TID,

AL: they get a different rate priority, and if you cannot couple their fairness together it becomes very complex, especially if you have Wi-Fi beacons of different users, separate access points, even in mesh networks; it becomes very complex. So it's better to have a lower layer that is very flexible and low latency, and build on top of that.

AS: Hi again. Just for the folks who presented about Wi-Fi packet loss and retransmission, I saw one slide where it said there's a 70 percent probability of retransmission. I don't know who presented it, but I just want to ask the general audience, or anyone who knows: what's the main reason for retransmission in Wi-Fi? Is it corruption, congestion, is it interference?

G: Okay, thank you. Dave.

J: To briefly answer that: yes is the answer. We see retransmissions at the lowest level, sometimes up to 30 percent or more, based on corruption from various sources. It is one of the most unchecked aspects of modern Wi-Fi; it has not been addressed by any good algorithms that manage it better. It was one of the things I intended to address in the Make Wi-Fi Fast project as load got bigger; we haven't got there.

J: Actually, why I got to this question is that I wanted to ask for a point of information from Ken regarding the 6 gigahertz standard. The FCC had mandated a centralized database at one point to coordinate spectrum selection for outdoor access points. I've lost track of that work, and I was curious what the current state is.

AM: Yeah. First of all, you're right about the retransmissions: there are all kinds of reasons. There are a number of different thresholds to declare the medium empty, and timings, and they don't always work perfectly, so yes, you get collisions and things like that, you get interference, and then you get retransmissions. As to the FCC: first of all, this is just the United States; we're still working on gaining access to the full 6 gigahertz in Europe and other parts of the world, and that's work that needs to progress.

AM: So the FCC has a process now where they will, somewhat, bless different AFC system providers, and those all use the same ULS database, which tells you where the microwave links are and where they're pointing and all that sort of thing. They access that, and then the system provider does the calculation based on certain parameters, and that feeds down into the AP itself.

AM: Actually, before that happens, the AP has to register and say, I'm at this location and I'm at this height; the height matters, but there are ways of handling that. And then the system will say you're clear to use all these channels, or maybe some channels with some power limits.

AM: In the Wi-Fi Alliance there is, unsurprisingly, an AFC group, but they don't really have jurisdiction; that has to go to the FCC. The Wi-Fi Alliance only provides guidance, guidelines and specs, which they actually do use, but they don't have the force of the FCC. It's all good.

G: Okay, let's move on. Michael, thank you very much.

Z: Okay, I'll blow the same horn as Toerless regarding the DSCP marks. Given the thought model of one common bottleneck that all the packets hit, so that we now have a large queue and need to differentiate between all those flows, I would rather have no queue and not differentiate; I am fully on board with that. But I also think that having the DSCP marks could be used for various other interesting things.

Z: Now, I have heard Koen's words of caution, and I agree that it probably needs to be used very cautiously at the link layer, if it's used there. But to give one example: this Lower Effort DSCP could really be used for various purposes. You could put it on other paths, you could let it wait, you could buffer it for a while, you could let it get out of the way; all kinds of interesting and probably good things could be done if that marking were available.
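For reference, a minimal sketch of marking a flow with the Lower Effort per-hop behavior (RFC 8622, DSCP 000001) from an application; whether anything on the path acts on it is, of course, a separate question.

```python
# Mark a UDP socket with the Lower Effort DSCP. The DSCP occupies the upper
# six bits of the TOS byte, so DSCP 1 corresponds to a TOS value of 0x04.
import socket

LE_TOS = 0x01 << 2   # Lower Effort DSCP shifted into the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LE_TOS)
sock.sendto(b"bulk background chunk", ("192.0.2.1", 9000))
```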
Z: I tend to think that we should probably offer the marks as a way to give the network a bit more freedom to do interesting things with them, but not within that model of having a large queue and scheduling it. Okay, I should stop.

B: Thank you. So I have two comments. The first one is related to queuing in Wi-Fi, actually, and packet processing in Wi-Fi. In the Wi-Fi standard we have a lot of solutions which, unfortunately, are not widely implemented.

B: For example, we can significantly improve channel access, we can improve quality of service, we have scheduled access, but unfortunately almost no vendors support all of this functionality. So this is an issue: we have really very rich standards, but many of the solutions are not used. Why is a question we can discuss. And the second comment is related, again, to queuing in Wi-Fi, but specifically to buffering in Wi-Fi. The situation is as follows.

B: Today Wi-Fi has very high nominal data rates, and to improve efficiency and to reduce the overhead related to channel access, to preambles, to all this stuff, to acknowledgements, we need to transmit a lot of packets in a batch.

B: It's called aggregation, and the latest version of Wi-Fi allows aggregating on the order of a thousand packets together, which is something like one megabyte. So, talking about bufferbloat and all these issues: from the efficiency point of view, it is sometimes worth having many packets in the buffer in order to improve spectrum utilization.
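A rough sketch of that efficiency argument: the fixed per-transmission overhead (channel access, preamble, acknowledgement exchange) is amortized over the frames in an aggregate, so larger batches spend a larger fraction of airtime on payload, at the cost of holding more data in the buffer. The timing constants are illustrative assumptions.

```python
# Fraction of airtime spent on payload as a function of aggregate size.
PHY_RATE_MBPS = 600.0
OVERHEAD_US = 300.0     # contention + preamble + block-ack exchange, per transmission
FRAME_BYTES = 1500

def efficiency(frames_per_aggregate):
    payload_us = frames_per_aggregate * FRAME_BYTES * 8 / PHY_RATE_MBPS
    return payload_us / (payload_us + OVERHEAD_US)

for n in (1, 16, 256):
    print(n, f"{efficiency(n):.0%}")   # roughly 6%, 52%, 94% with these assumptions
```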
G: Thank you very much. Okay, Ken, did you have anything else you wanted to add? You're still in the queue.

AM: I'll always throw in my two cents. On the last point, I was a little bit surprised to hear people saying they don't like the access categories in Wi-Fi; I'd like to understand that more and see what we could do about it. My opinion has been: we've got these four access categories and they're sort of not really used much. Some equipment can do that classification itself.

AM: Other than that, it's not particularly standardized. We do have a new spec in the Wi-Fi Alliance that we're quite happy about, for QoS management, and in particular there's a feature called mirrored stream classification, where the client devices, the stations themselves, can decide, well, this application really needs high bandwidth, this one needs low latency, and send that information up to the AP, which then does the stream classification back down to the clients.

H: Yeah, so the answer, of course, is always microwaves. No, actually, as much as eight years ago, commercial or professional access points started to be able, even with deep neural learning, to differentiate up to 50 different types of sources of problems, using spectrum analysis and the behavior of the traffic.

H: So there is a lot of very interesting commercial stuff out there. But to give a very simple example: you may simply have, from something flying through, or a microwave, or whatever, one second of outage, no traffic at all. Now think about how you would deal with two different applications. A Netflix flow wouldn't be bothered at all by this; this is not a congestion point.

H: This is just one point that had a one-second outage, and you are basically going to retransmit, within the access point, that burst of one second of data to the client one second later; no congestion incurred, but you have incurred one second of latency. You really wouldn't want the end-to-end congestion control to pick up on that. Then, on the other hand, you have a live video conference: well, you don't want the latency; you would rather see, for that one second, just a black picture.

G: Okay, thank you very much. Stuart.

U: Thank you. A quick reply to Ken on Wi-Fi traffic classes: Toerless talked about nerd knobs for Wi-Fi and different spectrum reservations for different traffic classes. That requires arbitrary value judgments about what traffic deserves to go in what class, and it presupposes that we can't give good service to all traffic, so we have to give good service only to a privileged minority.

U: I also disagree with the very commonly expressed view that video conferencing traffic would prefer packets to be lost rather than delayed. That might have been true before we had video compression, but these days the traffic is highly compressed: frames are sent as deltas from the last frame, and you can't decode the current frame if you've lost the previous frame it depends on. So even if a packet arrives too late to display, that doesn't make it useless; you still need it to construct the next frame.

U: The result of a loss is a complete stream reset, requiring things to begin again with a brand-new keyframe. The intentional discard of a single packet, because it would have arrived late, results in the transmission of hundreds more packets to recover from that single loss. So video conferencing is not that different from file transfer in today's world, with heavy compression.

G: Okay, thank you very much. We've got quite a queue to get through. Jim.

L: I'd like to raise a flag that, whether it's Wi-Fi or cellular, these systems are very complex beasts. If you want to understand all, or a bunch, of the different things that have been broken in Wi-Fi, then you should go read Toke's thesis; some of those problems are common to other radio technologies,

L: and some of those problems are not. So I give a very strong plug for Toke's thesis. It is possible, at least in Wi-Fi, to have something close to having our cake and eating it too: his thesis has shown basically something approaching two orders of magnitude improvement in latency under load, which is why I said earlier that I can walk around my house and teleconference at the same time as I'm doing a bulk transfer.

G: Okay, thank you very much. We need to be very brief with questions now, because we're going to run over time. Dave, I believe you're next.

K: Such a stream represents an entire video conference that can be routed cohesively, and it does avoid phase interference when different parts of the stream arrive at different times. However, what can we say about such a stream? Is it real time? Yes. Is it best effort? Yes. Is it going to deliver? Yes. Is it guaranteed delivery? No. I mean, it becomes more and more moot. So, to that point, probably just making the transfer blazing fast and utterly stupid is the approach.

AD: Okay, I was just going to stress that we're talking a lot here about performance, and performance is mainly governed by latency. So if you want buffer management, the DiffServ thing is orthogonal.

AD: If I put my other hat on: if I want to measure this, I need to know what the network is going to do with my traffic. So there's a little bit of information needed back from the network to interpret the performance of these apps that want to do clever things with the network. There's a bit of a two-way coupling here that we have to really get to grips with, and I haven't got time to talk about it. So that was my suggestion.

G: Okay, thank you. Koen.

AL: Yeah, on the same topic, related to giving more information: it will definitely help. I think knowing whether you need retransmission or not could help, and whether it is always really necessary.

AL: There was already that discussion, but if it then has to be coupled to link-layer classes and access categories and things like that, I think that's, well, dangerous, and the coupling is not always there. So it should be done at a kind of higher layer, where you give that information and use it in a more flexible and intelligent way, I would say.

G: Okay, thank you. Omar, did you still have another question?

AF: No, it's okay. Roberto?

G: He may have one, I think. We've still got some more in the queue; we've got Toerless next.

H: I would just like to reaffirm what Stuart was saying, that loss is not a good thing for real-time traffic, but I don't think that answers the example I gave. If you have one second of interruption, where there is no traffic flowing, then for a live video conference that is playing out, a packet that is one second too late is just not going to help you.

G: Okay, thank you very much. Mikhail.

AP: Well, my comment is twofold. First, about the retransmissions: it's not only interference for one second that can worsen the situation; in wireless we simply choose the coding, the redundancy, so that the probability of retransmission for a single packet is around 1 in 10.

AP: So it's just a matter of efficient channel usage. And the second part was about whether we are able to give everyone good performance. Well, maybe yes, while we have similar delay requirements, or a similar order of delays; but if we have, say, virtual reality and so on, those applications will require much lower delays, and it will just be difficult to meet them.

O: Thank you. As I posted in the Slack as well, I think it would make sense to allow applications to state much more richly the behavior they would wish the network to apply to their packets.

O: The most controversial thing I'm likely to say is that I wish all our clocks were synchronized, so that I could put a clock time in the packet beyond which you should no longer bother to retry. And then, of course, there should also be a separate thing, which is to bound the total amount of duplicate effort that the network should do on the application's behalf, which is effectively the same as a TTL like we have today, although measured in retries rather than in hops.
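A minimal sketch of how those two ideas could combine at a retransmitting hop, assuming synchronized clocks; the field names are assumptions, not a proposal.

```python
# Per-packet retry deadline plus a retry budget that behaves like a TTL counted
# in retransmissions rather than hops.
import time

def may_retry(packet, now=None):
    now = time.time() if now is None else now
    return now < packet["retry_deadline"] and packet["retries_left"] > 0

def retransmit(packet):
    pass  # placeholder for the actual link-layer send

def on_link_loss(packet):
    if may_retry(packet):
        packet["retries_left"] -= 1
        retransmit(packet)
    # otherwise: drop silently, the data is no longer worth the effort
```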
O: It's very clear that you don't want end-to-end retransmission when you can have link-by-link retransmission for any of these things, so there you go. I just wish that applications did a better job of saying what their actual requirements are with regard to latency, so the network doesn't mess up, doesn't over-commit, and minimizes the total amount of wasted effort.

G: Okay, thank you. Over to you, Wes; I think we're done with this section.

A: Okay, Jenny might close us out, if he's here.

A: All right, well, I'll do it. Thank you all, it was a great discussion; it's fantastic when we keep bumping up against the end of the time, because that means it was an interesting discussion that people wanted to participate in. So, good day two; we'll start day three tomorrow. As a reminder, I'm going to stop the recording, and I did create six breakout sessions, and you can sort of discuss among yourselves which breakout sessions you want to join to continue the discussions.

A: I know there was talk of at least one or two that we're going to run, so we'll keep those open for at least an hour, or until people run out of steam; yesterday there were four active breakout groups for at least an hour. So thanks, everybody. Where do we find these breakouts? I can't see them.

O: Click on the participants dropdown in the Webex application, and then you'll see a green thing that says show all breakout sessions.