From YouTube: IRTF Open Session
Description
The IRTF Open session includes a presentation by Rod Van Meter and Stephanie Wehner on "Vision for a QIRG: Quantum Internet Research Group". It will also feature Applied Networking Research Prize talks, including "Performance Characterization of a Commercial Streaming Video Service" by Mojgan Ghasemi and "Vroom: Accelerating the Mobile Web with Server-Aided Dependency Resolution" by Vaspol Ruamviboonsuk.
B
In addition, as you know, there's a photography consent discussion, and the IESG has made a statement on this, because we actually do have a lot of photography in here because of the ANRP. I want to make sure people know that if you don't want to be photographed, there will be video, but you have the right to consent or not. You've got your lanyards, but also, let us know if you have any worries about that.
B
You know, because you've been coming to research groups, you know that we have research groups. Last time, if you were here, I talked about how they're different; I'm not going to talk about that this time. We have an IRSG, and their names and all that information are on the irtf.org page, and I've noticed that some people don't seem to know this, so I've added it: the IRTF does not produce standards; we're pre-standards, so to speak, to the extent that RFCs come out of the IRTF.
B
So we've got a couple of proposed groups in process now: the DIN group, on decentralized internet infrastructure, and Path Aware Networking, and then we have twelve groups altogether, counting those two. Path Aware Networking had their third meeting yesterday, and we had a hum (we do hum when we can, because who doesn't want to hum), and the agreement by the room was that it should go to research group status, and I believe it will.
B
Another bit of news is that one of our research groups, GAIA, which is Global Access to the Internet for All, has helped to organize, with the IAB, the technical plenary tonight. So there are three GAIA-themed talks, moderated by Jane Coffin, who's one of the co-chairs of GAIA. And then this week everybody's meeting except the network management research group.
B
We do feel that groups should meet at research conferences as well sometimes, and they regularly meet with the NOMS conference, and that's what they're doing. Then today you're getting the first look at a potential new group, which is the Quantum Internet Research Group, and the goal here is that you'll eventually have to start calling yourselves the classical internet people if you're not working on quantum. We'll see if you're convinced of that, but we'll have a remote speaker on that shortly.
B
If you look up ANRW 2018, or look at the IRTF page, you can find it, and the good thing is that you can either submit a short paper, like a couple of pages of something that's hot and new, or you can submit a previously published paper from the past year. So you don't have to write a brand new, deep, 12-page research paper.
B
If you want to get more involved, you probably already know irtf-discuss and irtf-announce, and we have wiki links for all the research groups that you can find at irtf.org. We also tweet. The agenda for today: I'm almost done, and then Rod Van Meter is going to speak. Stephanie has a family issue, so she will not be joining us.
B
These are the two proponents for that research group, and Rod will tell you about how to get more involved in that. Then we have our Applied Networking Research Prize talks, and I'm very excited about those; I'll introduce them when we get there. So I think we're going to try to bring... no, let me tell you about the ANRP. I'll use this slide when we come back, I think... no, let's do it now, sorry.
B
So we have our two speakers coming. We are sponsored: the ANRP provides a monetary award and travel funding, and the sponsors that make that possible are Comcast-NBCUniversal and the Internet Society. So thank you, and if you think you'd like to sponsor, please get in touch with ISOC, with Mat, or with myself, because if we get more sponsorships we'll bring more people back for additional meetings to allow increased engagement. I wanted to say also that we have a wonderful program through the year: we have two prize talks at each IETF.
B
We had the largest set of submissions that we've ever had, and we could definitely have made many more awards, because we had so many good submissions, so I think you're going to have a good time with our presentations. And that's it. That picture is to tell you: don't be like that cat. Don't read your mail; engage with our meeting. It's particularly for the research group chairs, who should look around, and if you see people being like that cat, you probably need to get more discussion going. That's my cat, but okay, alright.
C
Okay, great. Thank you all for having me, and I'm sorry I'm not actually there in person. I would love to be in London with you all, but in solidarity with the weather you're having there, we're actually having snow here in Japan, in Tokyo too, for the first day of spring. We planned this presentation after Allison asked us to give it, based on the email that she, Stephanie, and I had exchanged over the course of the last, oh...
C
...although I can't see you very well. Stephanie Wehner is from the Technical University of Delft. This was actually originally her idea, and she used to work, I believe, for an ISP in the Netherlands before getting involved in quantum work. So she and I both have a long history with actual networking as well as the quantum stuff. So that's where we are. Allison, can you go to the next slide, please?
C
Okay, so Shota was a student of mine; he is actually also a participant in the IETF, one of the other people who sits right at the border of quantum and classical stuff. After I go offline here in an hour or so, you can track down Shota there at the meeting in London and talk to him about it. So first off, I should say what a quantum network is, and I know a lot of people in the room are already familiar with quantum key distribution.
C
Quantum key distribution is a way to use quantum effects to create a shared sequence of random bits that the physics of the system is supposed to guarantee are known only to the two parties at the endpoints, and then you can use that set of random bits as a key for IPsec or TLS or what have you.
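A minimal sketch of the BB84-style exchange behind QKD may help here. This is a purely classical, illustrative simulation (no real quantum channel), and all names in it are hypothetical:

```python
import secrets

def bb84_sift(n_bits: int):
    """Toy BB84 sifting: Alice sends random bits in random bases;
    Bob measures in random bases; they keep bits where bases match.
    (Real QKD also sacrifices some sifted bits to test for an eavesdropper.)"""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]  # 0=rectilinear, 1=diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]

    sifted = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:   # matching basis: Bob reads the bit correctly
            sifted.append(bit)
        # mismatched basis: the result is random, so the position is discarded
    return sifted                # shared random bits, usable as key material

key_bits = bb84_sift(1024)       # roughly half the positions survive sifting
print(len(key_bits), "sifted key bits")
```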
C
If it works, it adds to the longevity of the secrecy of your encrypted information, but it works only on the order of 100 or 200 kilometers through fiber. It also works over satellite, and in fact there have been demonstrations of that done over the course of the last couple of years.
C
It's sort of weak in multi-hop settings but better for point-to-point, and it's a lot easier than entanglement: building entangled networks at the physical implementation level is still not very easy. A lot of you are probably already familiar with the basic idea of QKD; you've heard us or somebody else talking about it at some point.
C
Entangled networks are good for a set of applications; I'm going to show you those a little bit more on the next slide. And in theory, using quantum repeaters, which are not like classical repeaters (a quantum repeater is more like what, in the internet world, we would call a router), we can couple together systems that are an arbitrary distance apart, within sort of the normal limitations of networking, as opposed to the distance over which you lose the single photon, which is the limitation in unentangled networks.
C
Next slide, please. Okay, so on this slide I've actually got a view of how I think about the applications for entangled quantum networks. There are three large bubbles there: the purple one is distributed cryptographic functions, the blue one is sensor networks, and the yellow one is distributed computation.
C
You can see on the Venn diagram that there is a set of potential applications. QKD is in the middle on the left; it sort of sits at the border between sensor networks and distributed cryptographic functions, because really what's actually happening with QKD is that it's behaving as a type of sensor, detecting the presence or absence of an eavesdropper in your communication channel.
C
That's one potential use. Another one, up there in the distributed cryptographic functions bubble, toward the top of the center, you can see Byzantine agreement and leader election. If you want Byzantine agreement and secure leader election, quantum versions let the participants reduce their dependency on public-key, one-way-function computational complexity as the basis.
C
So that clock actually is a form of reference frame: it tells you what the time is in a couple of different places and allows them to essentially synchronize via the axis of time. But distributed quantum states can also, in theory, be used as a reference frame in the physical sense as well, and so people are investigating different kinds of sensors for that, and also for what's called interferometry, where large-scale astronomical installations collect from multiple telescopes or multiple radio antennas and combine them into a single signal.
C
Now, why is that different from just being able to do what you can do today, which is log on to IBM's website and use a quantum computer that's sitting in Yorktown Heights remotely via the web? Well, if you begin with quantum entanglement between your site and the remote machine, then it's possible to do what's called blind computation, and blind computation is a mechanism for...
C
...asking a server to execute a function for you without giving the server any information about the input data, the algorithm that's being executed, or the output data. So it's like Gentry's homomorphic encryption in terms of its role in the ecosystem, although the underlying technology is very different; if you're familiar with homomorphic encryption, you can think of it in kind of the same way. So those are the three broad areas in which I can see uses for a quantum network: distributed cryptographic functions, sensor networks, and distributed computation. Okay, Allison, next slide, please.
C
So I mentioned repeaters a minute ago. Quantum repeaters are the equivalent of routers in the internet, and they have four basic tasks, which are listed up there on the screen. The first is to create the basic entanglement, the basic quantum state, over a single hop, over a fiber or over a free-space link.
C
The third task is to extend entanglement across multiple hops, so that you get the behavior of a network, and the fourth task is to actually be part of that network, including dealing with routing, managing resources, security, and those sorts of things. In particular, it's on those last couple of bullet points that I think the folks who are there in the room are likely to be able to make a positive contribution. Allison, next slide, please. Okay.
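To make the third task concrete, here is a minimal sketch of entanglement swapping, the operation a repeater uses to splice two single-hop entangled pairs into one two-hop pair. It is bookkeeping only (no quantum state is simulated), and the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EntangledPair:
    """Records which two nodes currently share an entangled pair."""
    left: str
    right: str

def swap_at(repeater: str, a: EntangledPair, b: EntangledPair) -> EntangledPair:
    """Entanglement swapping: the repeater measures its two local qubits,
    one from each pair, leaving the two outer nodes entangled directly."""
    assert a.right == repeater and b.left == repeater, "pairs must meet at the repeater"
    return EntangledPair(a.left, b.right)

# Two link-level pairs created over single hops (task 1)...
ab = EntangledPair("Alice", "R1")
bc = EntangledPair("R1", "Bob")
# ...become one multi-hop pair (task 3):
print(swap_at("R1", ab, bc))   # EntangledPair(left='Alice', right='Bob')
```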
C
So that gave you just three minutes on the "why": what is it that we want to accomplish by doing this? The next thing I want to do is give you an idea, a sense, of what's going on in the world, and the fact that this is in fact happening. On the right side of the slide there, you can see logos from five startup companies that have been around, most of them, for a while now.
C
Overall in the quantum information industry there are now more than 50 startups, many of them created within the last year. Some of them are hardware companies trying to build quantum computers; some of them are software companies trying to figure out how to use the existing quantum computers; and some of them are networking companies. The ones that are here on the slide are all QKD companies.
C
As far as I'm aware, none of these companies are actively working on repeaters yet, but there may very well be internal efforts that they're not actually talking about in public. So it's happening in the startup industry. It's also happening in the big labs: IBM, Google, Intel, and Microsoft get a lot of attention for their work, and there's also venture funding out there. There's also, of course, a lot of support from governments, and so there's an entire, complete ecosystem of people who are working to create a quantum information technology industry.
C
Over the next several years, you're going to see this actually put to use for quantum information technology. Now, the quantum repeater is arguably the hardest part of all of this, and so people have been saying, wow, it's going to be a little while yet before quantum repeaters are out there in the wide world, something people can go and buy off the shelf, but that's what we're working toward. Next slide, please.
C
As in the diagram at the bottom of the slide there, they conduct an interference operation between the photons there in the middle, and once that's done, there is quantum entanglement over this distance of 1.3 kilometers across the Delft campus. So this can be done today, over optical fiber, over modest distances; it's experimentally up and running, and the group at Delft is arguably the world's best, but there are a lot of people working on similar sorts of things. Next slide, please.
C
Now, this was actually in the news quite a bit last summer, or last fall, when they talked about it and used the key that was generated to encrypt a conversation between China and Vienna. So it got a lot of press. We can talk about the actual security implications of all of that, and whether or not you have to trust the satellite platform and things like that for QKD. But when you're actually using it, excuse me, for creating entanglement over this kind of distance...
C
...then those kinds of security arguments are actually very different. So please, if you're already of a particular mind with respect to QKD in certain senses, realize that the arguments are going to be very different for what we're talking about: wide-area networks that actually provide entanglement. And China's not alone in doing work on this; there are satellite experiments from Canada and Singapore, and the object there on the right, in the upper part, is actually a satellite.
C
So what does it take to actually build one of these networks? This slide comes from a paper that I wrote fully a decade ago on protocols for quantum repeaters, and you'll see that there's sort of a stack. Many of you in the room probably know Joe Touch; Joe Touch and I argued about this quite a bit over various meals and visits to each other.
C
The upper layers, everything except the bottom, are actually classical control protocols. At the bottom, in the purple box, you see it says physical entanglement; that's the only part of the entire system that's actually physically quantum. Everything else above it is classical protocols controlling the behavior of the quantum system, in order to go from having entanglement that exists over a single link to entanglement that actually spans end to end between your communication endpoints. Now, the protocol stack that's above it...
C
...involves a series of functions which are going to depend on certain key design decisions from your general architecture, from your quantum repeaters, but this is a reasonable example. Some of it happens over a single hop, some of it happens over two hops or four hops along the path chosen through the network, and then some of it is done end to end, and that's where the argument with Joe Touch comes in: all right, well, maybe it's not really a layered protocol in the sense that the IETF is accustomed to.
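One common way to picture "some over one hop, some over two or four hops, some end to end" is a doubling swap schedule over a path with a power-of-two hop count. The sketch below is a hypothetical illustration of that idea, not the specific scheme from the paper on the slide:

```python
def swap_schedule(left: int, right: int):
    """List entanglement operations for repeaters numbered left..right,
    assuming a power-of-two hop count: link-level entanglement on each
    hop, then swaps over spans of 2, 4, ... hops, ending end to end."""
    ops = [("link-entangle", i, i + 1) for i in range(left, right)]
    def swap(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        swap(lo, mid)                      # build the left half first
        swap(mid, hi)                      # then the right half
        ops.append(("swap", lo, mid, hi))  # splice the halves at the midpoint
    swap(left, right)
    return ops

for op in swap_schedule(0, 4):             # 4 hops: nodes 0..4
    print(op)
# Single-hop entanglement on each link, swaps over 2-hop spans,
# then a final swap that leaves nodes 0 and 4 entangled end to end.
```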
C
It's actually much more of a distributed computation, because all of the individual nodes that are in the path actually participate actively in the entire process as you are conducting your communications over an extended period of time. So it's a very different approach from the forward-and-forget approach to building a protocol. But it is an area where classical protocol design people can make a very strong positive contribution, because of their expertise in doing these kinds of things. Yes, so, Allison, perfect timing to switch to the next slide there.
C
So we are getting to the stage of wanting to take all of this and discuss the kinds of problems that are important in any network: how you deal with routing, how you deal with connection setup, how you deal with resource management, how you deal with internetwork interoperability and security, and all these kinds of issues which are near and dear to the hearts of the people who are there in the room at the IRTF. The experimental physicists doing the work so far have no idea how any of this stuff is done.
C
They don't know how to go about setting up a connection between two nodes across the network. They don't know anything about how you do resource management in a wide-area network. They know what multiplexing means on a single channel, but not so much across an entire network, for example. So we need the expertise of the people in the room, and we need a way to connect classical networking people to the experimental community.
C
That community is beginning to build these quantum repeater networks. Stephanie proposed two or three months ago that we should create a research group inside the IRTF, so I contacted Allison, and we've been exchanging email about it. So far, the proposed name is QIRG. The mailing list is open, and you are welcome to join it; there were a couple of dozen people on it already, even though we're just beginning the process of advertising it. So please get on the mailing list. And, Allison, next slide, please.
C
Oops, okay, this is not that one... all right, we'll do this. I did want to mention briefly, as we're finishing up, that there are also QKD-oriented standardization efforts, both inside ETSI and the IEEE, and, of course, inside the IETF we have been talking off and on for eight or ten years about methods for incorporating QKD-generated keys into IPsec, and about dealing with other out-of-band key management or key generation mechanisms.
F
Hirotaka Nakajima. Thanks for the talk; I have one clarifying question, well, a question about the classical network and the quantum internet. The first question is: according to your slides, on page nine, it looks like there is no IP or TCP, no classical network stack; your figure says only the bottom layer is the quantum network, so I think we can...
F
It seems like there is a new, different network, and we are trying to create the quantum network instead of the classical network. So is my understanding correct that we will create a new network? That's the clarifying question. And the second question is: if the quantum network cannot use IP or the classical network stack, then it's a little bit hard to communicate between the classical network and the quantum network.
C
We are going to have to have a new physical layer; there's no getting around that. That's going to mean deploying new devices with new transceivers, or the equivalent thereof. There is work going on to allow that to multiplex with standard TCP/IP traffic in the same fiber; that's been demonstrated experimentally. So you'll need new nodes, but you don't necessarily need to pull a completely new fiber in parallel.
C
All of those other functions that are on top of that are all new protocols, but of course they also need a means to exchange their classical messages reliably. So when I picture those protocols, I picture those communications actually happening over top of TCP, and then the application layer on top of that. What does an application use to talk to a quantum network? What is a request from an application to the network itself?
C
What is it that a classical application running on a node, one that wants to take advantage of these services provided by the quantum internet, does to make that request? What does that request contain? How do you exchange it, and do you agree on the semantics of all of that? So those colored boxes that are on there, I think, will largely ride on top of TCP in terms of the actual communication that gets done, but it's building a large distributed application on top of it.
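As an illustration of what "riding on top of TCP" might look like, here is a minimal sketch of a connection-setup request a classical application could send to a quantum network's control plane. Everything in it (the field names, the service type, the fidelity parameter, the host names) is hypothetical, not a proposed or existing protocol:

```python
import json
import socket

# Hypothetical end-to-end entanglement request, serialized as JSON
# and carried over an ordinary TCP connection to a controller.
request = {
    "service":      "end-to-end-entanglement",  # hypothetical service name
    "peer":         "nodeB.example.net",
    "pairs":        16,      # how many entangled pairs the app wants
    "min_fidelity": 0.9,     # quality floor the network should meet
    "deadline_ms":  5000,
}

def send_request(controller_host: str, controller_port: int) -> dict:
    with socket.create_connection((controller_host, controller_port)) as s:
        s.sendall(json.dumps(request).encode() + b"\n")
        return json.loads(s.makefile().readline())  # controller's reply

# reply = send_request("qnet-controller.example.net", 7000)
```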
G
Wes Hardaker, USC ISI. You know, originally I was thinking that this was a little bit early in the thinking, but then I decided that I'm wrong there, and we want to get this sort of right from the get-go, in the sense that the research output of something like this could actually affect the development of the hardware and the techniques. So I think the timing is about perfect, and I'm excited to see this go forward.
G
I do think there are a few other topics you could consider adding to your list of things to explore, such as the security requirements for what's going to be layered on top of it, in particular with respect to privacy concerns. As an example, your satellite is sort of a classic case of a store-and-forward mechanism where you have to store data at rest.
G
So, you know, what are the security implications of that? I think there are a number of things that need to be considered with respect to what sorts of technologies you need above that, and the requirements above that. What sort of encrypted traffic do you still need to send above that, and what are the properties of the protocols that you're running on top of this whole system that you can either relax, because you have quantum, or that you still need, because quantum doesn't solve them?
H
Yeah, certainly. Actually, this is not a question, just a comment. Right now, research on quantum networking requires an organization of terminologies, I believe, because some papers have started to propose different terminologies for the same concept. So we need a place for open discussion of such things; otherwise, various papers using different terminology will lead to a kind of hell. To avoid that, we need an open discussion, so the IRTF is the best place.
H
I believe so, and now we can refactor classical network terminology as needed here. Another thing we need to discuss in the IRTF is the abstraction of entanglement: entanglement is a very quantum property, and classical networking doesn't have a counterpart for it, so we should carefully discuss the abstraction of entanglement, and loss, yeah.
C
On those comments: note that I've had quite a bit of heartburn where the physicists reuse a term that we have as classical engineers, and the way they're using it is not necessarily wrong; it's just so different from the way we're used to seeing it that it causes problems. Multiplexing is one of them; repeater would certainly be another; and, of course, there's also when they say the physics of this system guarantees that it's secure.
C
The theoretical aspects of implementations are very different from the real-world implications, and so doing all of this, and discussing all of it with the people who understand real networks, is the key to a long-lived, healthy, robust, extensible quantum internet architecture, and I think the IRTF has a lot of the expertise that is really required. Yeah.
I
I'm Eric. Okay, I'm not so familiar with this area, and this is a comment directed to this community. I am sure quantum computing is important for network engineering, now and in the future, but I think there are not so many people who are familiar with this area.
I
So if you will work on this quantum computing discussion in the IRTF, how do you get IETF network engineers involved in this discussion, and also how do you bring the quantum computing people from outside this community into the IETF community? What is your plan for that?
C
That's the question. You know, I was pleased to hear the earlier comment (I didn't catch the name of the person who said it) that maybe now is the right time to begin, and that it's a good opportunity for people to be involved, to prevent the physicists from going down the wrong path. Now, convincing the physicists that they need this input, that's the other half of the equation, and we're working on that one too.
C
We restrict attendance at that event to 100 people so that we can have a small conversation, but we could have twice that many people, and the QCrypt conference is around 500 people, depending on the location. So there are a lot of people around the world who are already working on the technologies, and the goal is to bring them together with the people who really understand the way networks behave.
O
Right, we're back in business. So here's the list of performance problems that we found, but don't worry about it; I'm going to walk you through this list during the talk. But before that, let's look at the system that we instrumented, so you can get an idea of what could go wrong before we show you what did go wrong. The system is Yahoo's video streaming system, and the way that streaming works, as a very high-level overview:
O
It starts with the client, which is your player, requesting and receiving the manifest; the manifest is the list of available bitrates. Then there's usually an adaptive bitrate algorithm on the player that chooses which bitrate to request, and once it decides the bitrate, it sends an HTTP request to the CDN to get it. In this system these HTTP requests share the same TCP connection, and a chunk in this system is six seconds. Once these requests arrive at the CDN, the CDN inspects its local cache.
O
So our goal in this study is to identify performance problems that impact video QoE and that particularly make users unhappy, so that they will, you know, reduce your revenue. If we had only data from the player side, the caveat is that you may be able to detect some of these problems, you may be able to say there was a rebuffering, but some of the problems get masked because of the buffer itself.
O
So you may not see an immediate impact on the QoE, and also, if you only have player-side information, you cannot detect problems that happen in the network or the CDN. Looking at it from the other side, if you have only data from the CDN, for example server logs, you may be able to detect some of the problems, but you will not be able to isolate problems that are within the client's machine. So, for us, we are looking at this from a content provider's...
O
...point of view. For example, Yahoo or Google could do this, and what's unique about them is that they control both sides: this is an in-house CDN and their own player. Once you have a view into the entire path of the video delivery, then you can find problems everywhere and see exactly what happened for each chunk where you had buffering. So our approach relies on three principles: one is end-to-end, per-chunk instrumentation.
O
Measurements can sometimes be technology dependent, because the way HTML5 handles data arrivals is different from Flash, and we wanted the results to be applicable to all internet video. And finally, we collect TCP statistics; the statistics that we are collecting here are sampled from the CDN host's kernel. There are some limitations here as well: this is an operational, large-scale setting, so the frequency at which we can collect this information is...
O
...not high. So what do we mean by this end-to-end, per-chunk measurement? The bold lines here are where we are measuring things directly, and the dashed lines are where we cannot instrument things directly, so observations are based on inference. The life of a chunk starts with the player sending an HTTP GET request, which arrives at the CDN. The CDN has some processing time, which we show as the CDN latency. If this was a cache miss and the backend needs to be involved to get the first byte, then there's also a backend latency.
O
Everything that is not measured directly is shown in red, and everything that is measured directly in this instrumentation is shown in blue. Then, finally, the first byte arrives at the player. The time difference between when the HTTP GET was sent and when the first byte arrived is shown as TFB, the time to first byte, and there is also the download duration, shown in red as DDS.
O
And similarly, when the last byte of the chunk arrives: the time difference between the arrival of the first byte and the last byte is shown as LBD, the last-byte delay. This is important, because it's going to come up in the different kinds of studies that we do.
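Here is a small sketch of the per-chunk bookkeeping just described, with hypothetical field names: TFB spans request-sent to first-byte, and LBD spans first-byte to last-byte:

```python
from dataclasses import dataclass

@dataclass
class ChunkTimestamps:
    request_sent: float   # player sends the HTTP GET (seconds)
    first_byte:   float   # first byte of the chunk reaches the player
    last_byte:    float   # last byte of the chunk reaches the player

def tfb(t: ChunkTimestamps) -> float:
    """Time to first byte: network + CDN (+ backend, on a cache miss)."""
    return t.first_byte - t.request_sent

def lbd(t: ChunkTimestamps) -> float:
    """Last-byte delay: how long the chunk body took to download."""
    return t.last_byte - t.first_byte

t = ChunkTimestamps(request_sent=0.00, first_byte=0.12, last_byte=1.45)
print(tfb(t), lbd(t))   # 0.12 s to first byte, 1.33 s of download
```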
O
Before I show you our results, I want to point out that we are studying QoE factors individually in this talk. The factors we chose matter for two reasons. One: depending on the type of content that you have, some of these factors may matter more than others. For example, for a breaking-news video, startup time matters more, because the user just wants to see the news, whereas when you're sitting down to watch a long movie, the startup time really doesn't matter that much and video quality matters more. And second is the length of the video: with shorter videos, users are usually less patient.
O
This is the outline of the talk. We've already gone through the introduction; I'm going to show you the measurement data set that we have, then the problems that we found on the server side, in the network, and on the client side, and then we're going to conclude the talk with takeaways and what can be done about these problems.
O
This is video-on-demand data: 85 CDN servers across the U.S. were instrumented, selected randomly. We studied 65 million video sessions and more than half a billion video chunks. The users are predominantly in North America (93%), and mostly non-mobile users that are not using a proxy. We go into more detail in the paper on how we remove users that we think are behind a proxy; the main reason for that is that we want the TCP measurements to reflect the path between the server and the client, and proxies usually terminate the TCP connections.
O
So let's dive into the first category of problems, the server-side problems. Our measurement on the server side comes from direct measurement: at the player we have session ID, chunk ID, startup time, rebuffering, and video quality, and at the CDN we similarly have session ID and chunk ID, and we are measuring server latency, backend latency, and cache hits and misses. Because we have this data readily available, we can show, for example, the immediate impact of server latency on startup time.
O
There is no inference here; it's just ground truth. So, for example, this is one graph that we can make because we have data from both sides, showing the impact of server latency on startup time. The x-axis on this graph is the server latency in milliseconds; the y-axis is the startup time of the video in seconds, and you can see how it significantly increases with higher server latency. So next we are interested in knowing: why do we have these servers with such high latency?
O
The first issue that we found is a timer in the Apache Traffic Server, together with cache misses. What happens when an HTTP request arrives at the CDN is that it first inspects the memory; if the content is not there, you go to the disk, and if it's not there, you go to the backend. However, you don't want to overwhelm the backend.
O
So when multiple requests come for the same content, there's usually a timer that stops you from going to the next tier for a while, so as to not overwhelm the backend, and that's fine. But we found a misconfiguration in this timer between the memory and the disk, and it was impacting about 65 percent of the chunks in our study. Even more important than that is cache misses: we found that cache misses in this system increase the server latency significantly.
O
The median increases by 40 times, and the average by 10 times, when your server side has a cache miss. We also found extreme cases where server latency was worse than network latency, and those sessions were often caused by cache misses: the average cache-miss ratio in this data set is 2%, but for the sessions that had worse server latency than network latency it was 40 percent. So that extreme case is often caused by the significant impact of cache misses on server latency.
O
Another interesting thing that we found is that server-side problems are persistent: once a session starts having server-side problems, they stay. For example, like I said, the average cache-miss ratio is 2%; if you look at the conditional probability for sessions that had one cache miss, the cache-miss ratio for those sessions goes up to 60 percent. That means cache misses come in groups, and that's usually because of the unpopularity of the title.
O
Once a title is unpopular, its chunks are more likely to reside on disk, or, worse, only in the backend, and you start having cache misses, and every single one of its chunks is going to go through that. In fact, we found an interesting paradox in this system: more heavily loaded servers seem to have lower latency, and this is the result of cache-focused mapping.
O
Cache-focused mapping means you're trying to have hot caches, so the load balancer sends a request to where it was recently served from, which causes your popular content to go to the same servers. Your popular content has better performance, for the reasons I just mentioned, but you also have more requests for your popular content. So there are these servers that have less demand, because they're serving unpopular content, but have worse performance: content popularity is even dominating server load in this case.
O
Our network measurements come from us instrumenting the host kernel. Basically, the orange box here is showing what the operating system is doing: there's the TCP infrastructure, and the OS is collecting information about all the TCP connections, including a weighted average of RTTs called the smoothed RTT, or SRTT, the congestion window, and packet retransmissions. What we do is the blue box:
O
That is, we poll this every 500 milliseconds per chunk and store it, and then later we use it across chunks and across sessions to see what happened. Of course, there are some challenges in collecting it this way. We're looking at smoothed averages of RTT, SRTTs, instead of individual RTTs, and that's sometimes not a good idea, especially if you're dealing with these long connections for video streaming, because the SRTT is not reflecting your RTT during this chunk exactly.
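On Linux, the kind of kernel TCP state described here (SRTT, congestion window, retransmissions) can be sampled per connection via the TCP_INFO socket option; below is a rough sketch of such a 500 ms poller. The struct layout used is the classic 104-byte tcp_info and is an assumption for illustration; field positions vary across kernels:

```python
import socket
import struct
import time

def sample_tcp_info(sock: socket.socket) -> dict:
    """Read kernel TCP state for one connection (Linux TCP_INFO).
    Assumes the classic 104-byte tcp_info layout: 7 one-byte fields,
    one pad byte, then 24 32-bit counters. Treat as an illustration."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    fields = struct.unpack("7Bx24I", raw)
    u32 = fields[7:]                      # the 24 32-bit counters
    return {
        "retrans": u32[7],                # tcpi_retrans
        "srtt_ms": u32[15] / 1000.0,      # tcpi_rtt: smoothed RTT in usec
        "cwnd":    u32[18],               # tcpi_snd_cwnd, in segments
    }

def poll_chunk(sock, duration_s=6.0, interval_s=0.5):
    """Sample every 500 ms while a chunk is being served, as in the talk."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(sample_tcp_info(sock))
        time.sleep(interval_s)
    return samples
```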
O
It reflects what happened in all the previous chunks, too. The network snapshot frequency of 500 milliseconds is also, in many cases, more than the RTT of the connection, but it comes from operational limitations: how often we were allowed to poll this, and how much data we were producing in terms of storage overhead. And finally, because this is operational and at scale...
O
...we can't collect packet traces. In the paper we go into more detail on how we grapple with these challenges, but I'm just going to show you the interesting findings. So here's another similar graph that you can make, because we know, for example, the SRTT of the first chunk, and we also know the startup time. The SRTT of the first chunk is the x-axis here, and you see the impact on the startup time of the video.
O
It's just the variation. In both cases, we find that the majority of these prefixes are in enterprise networks, not residential networks, and what we speculate is happening here is that these enterprise networks are running middleboxes, which are causing high latency and high latency variation despite the fact that they're so close to the CDNs. Now, of course, we do not have in-network data to confirm this; it's just a speculation about why we see this high latency only in enterprise networks and not residential ones.
O
The second finding is the impact of packet losses. This graph is generally what we expect to see in terms of how retransmission rates, or losses, impact video QoE in terms of your rebuffering rate: generally, a higher loss rate indicates a higher rebuffering rate. But that's not all, and in fact we saw...
O
...the percentage of chunks that had rebuffering, and it is higher for the first chunk, and you might think that's because of loss. To take that into consideration, we calculated the green plots: those are the conditional probabilities, the percentage of chunks that had rebuffering given there was a loss at that chunk, and you can see that if there is loss, the percentage of rebuffering is higher for every chunk, but it goes significantly higher for the first chunk.
O
This is because of the existence of a buffer in a video streaming session. When you have a buffer in a video streaming session, initially it's not full, so it cannot hide these impacts from the user; but later on in the session, at higher chunk IDs, there is enough buffer to hide some of these impacts from the users.
O
This is the retransmission rate across chunks, and you can see that the red session, the one that had rebuffering, actually has a generally lower retransmission rate, and the green one had significantly higher retransmission rates; but because those happened after the first four chunks, which had already buffered twenty-four seconds of video (each chunk is six seconds), the user never actually finds out about it. The red session is very unlucky: it has losses in the first chunk, and rebuffers right there.
O
So here we have defined a performance score, which is the chunk duration, in our case six seconds, divided by the first-byte delay plus the last-byte delay. The first-byte delay is a measure of latency: if you remember from the earlier diagram, it was the time difference between when the HTTP GET request was sent and when the first byte arrived, so it includes the network latency and the CDN latency, and, if you have a cache miss, also the backend latency.
O
That means more than one second of video is delivered per second to the player, and from very simple queuing arguments, if more than one second of video per second is delivered to your player while the user is watching one second of video per second, you're going to build up buffer. When the score is less than 1, less than one second of video per second is delivered to your buffer; your user is still watching one second of video per second, which means you're depleting the buffer and are expected to have a rebuffering at some point.
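Here is a small sketch of that score and the buffer dynamic it implies, with hypothetical names (the talk defines the score as chunk duration divided by TFB plus LBD):

```python
CHUNK_SECONDS = 6.0

def perf_score(tfb_s: float, lbd_s: float) -> float:
    """Seconds of video delivered per second of delivery time."""
    return CHUNK_SECONDS / (tfb_s + lbd_s)

def buffer_after(chunks, start_buffer_s=0.0):
    """Track buffer level: each chunk adds 6 s of video but takes
    (tfb + lbd) wall-clock seconds to arrive while playback drains 1 s/s."""
    level = start_buffer_s
    for tfb_s, lbd_s in chunks:
        level = max(0.0, level - (tfb_s + lbd_s))  # drained during download
        level += CHUNK_SECONDS                      # chunk joins the buffer
    return level

print(perf_score(0.2, 2.8))   # 2.0  -> buffer builds up
print(perf_score(1.0, 7.0))   # 0.75 -> buffer drains; rebuffering likely
```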
O
The third category of problems is within the client. What is the client download stack? When the chunks arrive from the network, and before they are delivered to the player, they go through the download stack at the client, which is the NIC, the OS, and the browser, before they're finally handed over to the player. Unfortunately for us, at scale we cannot observe the download stack directly.
O
That's because we can't go and instrument these clients and see what's happening at their browser or network card, so here we're relying on detecting outliers. We do some statistical work to see whether there is a chunk that seems to have been buffered at the download stack: if a chunk has been buffered at the download stack, I expect it to be delivered late to the player, which means I expect the first-byte delay to be significantly higher than the others.
O
So let's say it's 2 sigma away from the mean. But also, because it was buffered at the stack and delivered so late to the player, I expect it to arrive at the machine's throughput, not at the connection's throughput, so it's going to have a very high standout throughput from the perspective of the player; let's say 2 sigma away from the mean again. But it should not have been caused by the network or the server, so it should have similar network and server performance.
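The outlier rule just described can be sketched as below; this is a paraphrase of the talk's criteria with hypothetical field names, not the paper's exact statistical procedure:

```python
from statistics import mean, stdev

def download_stack_outliers(chunks):
    """Flag chunks whose player-side TFB and observed throughput are both
    more than 2 sigma above the session mean while a kernel-side network
    metric (SRTT here) stays ordinary; `chunks` is a list of dicts."""
    tfbs  = [c["tfb"] for c in chunks]
    thrps = [c["throughput"] for c in chunks]
    srtts = [c["srtt"] for c in chunks]
    hi_tfb  = mean(tfbs)  + 2 * stdev(tfbs)
    hi_thrp = mean(thrps) + 2 * stdev(thrps)
    hi_srtt = mean(srtts) + 2 * stdev(srtts)
    return [
        c for c in chunks
        if c["tfb"] > hi_tfb            # delivered late to the player...
        and c["throughput"] > hi_thrp   # ...but arrived in one fast burst
        and c["srtt"] <= hi_srtt        # while the network looked normal
    ]
```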
O
Here's one example that we found in our data set. The graph on the top shows you the latency metrics: look at chunk ID 7, and you can see that it has similar RTT and similar server latency, but it has a significantly higher first-byte delay than the rest of the chunks. What we expect is happening here is that this chunk was buffered at the client download stack and delivered late to the player; the throughput measurements are at the bottom, as you can see.
O
It's important to be aware of these problems, which we can only find once we have data from both sides, because otherwise each side is going to blame the network, or the other side, for them. This can be very dangerous in the video streaming world: if your player makes incorrect assumptions about, for example, latency, and the adaptive bitrate algorithm is latency sensitive, it can cause undershooting, because the player freaked out and thought the latency had a spike where in reality it didn't.
O
So we found these download-stack problems. The transient problems that I just described to you, we found to be more common in the first chunk, and in the paper we go into more detail on how this could be a possible side effect of Flash. We also found persistent download-stack problems; the persistent ones are not very common, but what's important about them is, once they happen...
O
...they are often the most important factor, higher than network and server latency, and there's a more in-depth conversation about them in the paper if you're interested. Finally, the last category of problems is rendering-stack performance problems. What is the rendering stack? The chunks have finally arrived at the player, but they're not ready to be shown on the screen yet; here we have a rendering stack.
O
Now, if the CPU is busy and you're using software rendering, the quality may drop, which causes high frame drops; but also, if your video tab is not visible, browsers do optimizations to reduce CPU consumption, so they intentionally drop frames. To detect these problems, and to separate them from browser optimizations, we introduced a boolean variable for whether the player is visible or not, and we're also measuring the dropped frames; then, for each session, we also collect which OS and browser it has. There are some interesting findings here.
O
The first one is that good rendering is actually time-consuming: we found that demultiplexing, decoding, and rendering take time, and you have to provision for that if you want a good frame rate. In this graph we're showing the average download rate of the chunk on the x-axis, and the percentage of dropped frames.
O
And finally, we found some very unpopular browsers that had really bad rendering. Here we're looking at chunks that already had good performance, so the arrival rate is more than 1.5 seconds of video per second, and we know the player is visible, because we're filtering on only those chunks that have visibility, so the user is actually watching. We divided the chunks into two major platforms, Windows and Mac, and the blue bars are just showing you the percentage of the chunks in each platform for each browser, sorted from more popular...
O
...to less popular. And here you're looking at the percentage of dropped frames in each of these browser and OS combinations; you can see the trend is the opposite. In the paper we further break down the "other" category, the least popular browsers on each platform, and there are some interesting cases there that have really bad rendering, despite the fact that we made sure the arrival rate is good and everything else is similar.
O
One of the interesting examples: Safari is actually really good on Mac, but it's among the worst on Windows; and in the Windows "other" section there are some less popular browsers, like Yandex and SeaMonkey, that we found had huge problems in terms of rendering. So I've walked you through all these problems; now let's see what the takeaways are and what can be done about these problems at each of these places. Let's start with the CDN. I discussed three problems with you about the CDN; one...
O
...is the impact of cache misses. In this workload, which is popularity-heavy, I told you we are using the LRU cache eviction policy. Because the impact of cache misses is so high, and the workload is popularity-skewed, we proposed using other policies that are more tuned for these kinds of workloads, like GD-Size or Perfect-LFU. In terms of cache-miss persistence, we propose prefetching subsequent chunks; this is the problem I discussed with you, that once a session starts having cache misses, it...
O
...has cache misses on every one of its chunks, because it was most likely an unpopular title. Now, it's not that simple to just prefetch the subsequent chunks, because in many cases the CDN does not know what bitrate is going to be requested in the next chunk, but that's a whole other area of research; we talk more about it if you're interested. For the low-latency paradox that I explained to you, we propose better load balancing by partitioning the popular content.
O
Accordingly, for example, you can start with a more conservative bitrate, or increase the buffer size, to handle these latency variations better. In terms of the early packet losses, which are more harmful and, unfortunately, more common, I discussed using server-side pacing. And finally, where throughput is a major bottleneck, we think that's actually good news for ISPs, because it's easier to fix by establishing better peering points.
O
On to the takeaways for the client. I discussed the download-stack latency problem with you, and we think that's an important problem that we could only find because we had data from both sides and could confirm that it is not being caused by the server or the network; and I talked about how that can be dangerous for the adaptive bitrate algorithm.
O
It can cause overshooting or undershooting, and what we propose here is incorporating some server-side TCP metrics, or some awareness of the network path, into the player; I had these discussions with the Path Aware Networking folks yesterday. The second problem is that rendering is resource heavy, so you should provision for it; we propose using one and a half seconds of video per second of arrival rate as a rule of thumb.
O
This is particularly important if you are streaming videos where people care about frame rate, for example sports, where there's usually that one frame that shows whether or not it was a foul. And finally, rendering quality differs based on OS and browser; I showed you some example browsers that, even in good conditions and with similar network and server conditions, seem to have worse rendering quality. It's important for a content provider to know this, to avoid premature optimizations: for example, rerouting this client, when the problem was in the client's own download stack, is not helpful.
S
My name is Stuart Cheshire, from Apple. Thank you for a really interesting presentation. As you said, most of the traffic on the Internet today is streaming video, so it's clearly something people care about, and I think we're all frustrated by seeing that little spinny wheel, waiting for buffering. I had one comment: you talked about client download-stack latencies, and, working for Apple, that's the area of this that I'm more involved with. I think I may know what's going on here, so let me describe it.
S
In a common network setup now, unfortunately, we have lots of bufferbloat, so you could easily have a two-second queue on your cable modem link, and you lose one packet; fast retransmit fills in that one packet really fast, but it's at the back of a two-second queue. So you've got all this data arriving, piling up in the kernel, that the sockets API can't deliver. Yes.
O
The problem here is that our player is sitting on a black box, right? It's probably the way they're handling it: there's a buffer there, and how they're handling the data delivery causes these problems. But because it's a black box (and it's like a footnote in the paper that we can only guess this is what's happening), we confirmed it is not at the OS or the browser; it seems to be there, but we can't really measure what they're doing in that API.
G
Wes Hardaker, USC ISI. Excellent presentation and work, very thorough, and I really enjoyed it. Having said that, you killed my dream. I've had this dream that, with the advent of, you know, users being able to go out and find the things that they want, they'd help us discover new things. Unfortunately, your caching results kind of indicate that video streaming services, in their desire to bin everybody into popular titles...
G
...where, you know, they won't give you as many suggestions sideways to other things that you're interested in; they're more likely to give you suggestions that everybody else is going to watch too, because it's cheaper for them, gives them better performance, and thus, you know, better ratings. And that's sad, but thank you.
W
Good, cool. Okay, so good morning, everyone. My name is Vaspol; I'm a third-year PhD student at the University of Michigan, Ann Arbor. This work originally appeared in ACM SIGCOMM 2017 last August, and today I will be presenting Vroom, a new solution to optimize web performance. This work is a collaboration with Ravi, Muhammed, and my advisor, Harsha. So let's get started. As you have experienced, using a mobile phone connected to a cellular network is very, very common nowadays, right? We have all used phones to surf the web all the time now.
W
But despite all of this increase in mobile web usage, as you may have experienced yourself, loading many of these pages is actually pretty slow. One industry study found that it takes almost 10 seconds to load the median mobile retail site, and, on the other hand, DoubleClick found it takes 14 seconds on average to load a page over a 4G connection. We also confirmed this ourselves, using a Nexus 6 phone, a reasonably high-performance phone at the time, connected to a good LTE network in the Ann Arbor area, to load mobile-optimized popular pages.
W
One thing to note about the results that we found here is that these are heavily optimized, popular pages, so the numbers that we get are on the better side of things. This is a bar chart representing the page load times, measured in seconds, of the Alexa top 100 sites overall, and the Alexa top 50 news and 50 sports sites. The top of each bar is the median page load time, and the whiskers are the 75th and 25th percentiles. Look at the median page load times for the Alexa top hundred sites:
W
This is actually pretty slow, considering that some studies have shown that a five-second page load time has a 25% bounce rate, which means these pages are actually losing some money, right? On the other hand, if we take a look at the Alexa top 50 news and 50 sports sites, things are far worse: the median page load time in this case is actually 10 seconds, and the reason is that news and sports sites tend to be more complex than the Alexa top hundred sites overall.
W
So in this talk I will first be digging into why web pages are slow, to gain some intuition, and then we will use that intuition to improve web performance: I will use the intuition from the first phase to explore Vroom, our solution to make web page loads faster, and the last part will be the implications of our work. Now let me take you into why web pages are slow.
W
Let's take a very simple example. Let's say we want to load a page from a.com, and this a.com page contains only one image. What happens when this page gets loaded is that the client sends a GET request to a.com, a.com sends back a response, the client parses it, discovers the image, and then fetches the image.
W
If we take a look at the network utilization and also the CPU utilization at the client, as you can see here, the bars with the solid colors are times when these resources are being actively used. There are no periods when the CPU and the network are being fully utilized, and the crux of the problem here is that the client has to parse or execute a resource to discover additional resources to fetch.
W
The page load time when the CPU is the main bottleneck is actually much higher than the page load time when the network is the main bottleneck: in this case, the median page load time is 5 seconds when the CPU is the main bottleneck. So that experiment implies that the CPU is the main bottleneck in most cases. But is this actually the case everywhere?
W
There's one main process that does everything, so there's not much leverage in multiple cores. The implication here is that the CPU will remain the main bottleneck in the long run. So, just to recap what we found in this first section of the talk: the reason why web pages are slow right now is that browsers need to discover resources through parsing and execution, browsers are largely serial in discovering these resources and performing the page load, and the CPU is becoming the main bottleneck.
W
This process of discovering resources through parsing and execution will not become any faster, so we have to somehow rethink the way page loads work. Our main idea in this project is to have the server become more proactive during the page load: we want servers to aid clients in discovering resources during the page load, and that's the main theme of Vroom. So now that we know why web pages are slow and have gained some intuition, let me ask how we can make web pages faster.
W
None of these is modified; everything is the same as the status quo right now. But instead of only sending back the HTTP response, Vroom also uses HTTP/2 push to push resources down to the client, so that the client can receive many resources before it would actually discover them through parsing or execution and fetch them. But push by itself...
W
...is not really enough, because HTTP/2 push only allows you to push resources that the origin owns, right? But we know that a lot of these pages contain third-party resources, so we would be missing out on a lot of resources that could make the page load faster. So, in addition to HTTP/2 push, we also use dependency hints, resource hints.
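A sketch of the two mechanisms side by side may help; this hypothetical handler pushes origin-owned resources and emits standard `Link: ...; rel=preload` headers as hints for third-party ones. It is a simplification for illustration, not Vroom's actual server code:

```python
# Hypothetical response assembly for a page request: push what the
# origin owns, hint (via Link preload headers) what third parties own.
ORIGIN = "https://a.com"

dependencies = [
    f"{ORIGIN}/css/site.css",
    f"{ORIGIN}/js/app.js",
    "https://cdn.thirdparty.example/lib.js",   # not pushable by a.com
]

def plan_response(deps):
    pushes, hint_headers = [], []
    for url in deps:
        if url.startswith(ORIGIN):
            pushes.append(url)                 # HTTP/2 server push (same origin)
        else:
            hint_headers.append(f"Link: <{url}>; rel=preload; as=script")
    return pushes, hint_headers

pushes, hints = plan_response(dependencies)
print(pushes)   # pushed alongside the HTML response
print(hints)    # the client fetches these third-party URLs early
```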
W
Now, in order for the server to push, or to send these hints back to the client, the server has to have some kind of module to discover these resources, right? So we have a dependency resolution module running at the web server. The dependency resolution module is simply trying to find resources that the client will need during the page load, and on the client side we have some kind of scheduling mechanism, so that the client can use all these resources as effectively as possible.
W
Now that we know the end-to-end workflow of Vroom, in order to make Vroom a reality we have to answer two main questions: first, how web servers can discover dependencies in the first place; and second, how clients can schedule the fetches of these resources, or use the hints from the server effectively, so as to maximize the benefit they receive. So let's first turn our attention to the web server.
W
Let's consider a strawman approach to discovering resources for the client. The client sends a GET request to the origin, and this web server, foo.com, can start a page load of foo.com itself right at the web server. Now, because foo.com's machine is a server, it has a much more powerful CPU and also a highly connected network.
W
Unfortunately, it doesn't. There are two drawbacks to this approach. First, as we all know, web pages are by nature very dynamic: a lot of resources are dynamically generated, with randomized tokens in the URL and so on, so if we take everything from one particular load and hint or push it to the client, many of those URLs will no longer be valid.
W
Now, on the other hand, as we all know, many of these pages contain personalized resources. So in order for foo.com, in this case, to correctly account for personalization, it would need to get hold of the client's third-party cookies, but foo.com doesn't have those, so foo.com will never be able to correctly account for third-party personalization.
W
So what we did was use an intersection of offline loads to overcome the flux in URLs. At the web server, we load the page periodically and then take the intersection of these loads. That means anything that is randomly generated per load will be filtered out by the intersection.
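The intersection itself is simple; a tiny sketch with made-up URLs, where the randomized beacon URL differs per load and therefore drops out:

```python
# URLs observed in three offline loads of the same page (made-up examples).
load_1 = {"https://a.com/app.js", "https://a.com/beacon?tok=81f3"}
load_2 = {"https://a.com/app.js", "https://a.com/beacon?tok=c09a"}
load_3 = {"https://a.com/app.js", "https://a.com/beacon?tok=77d2"}

# Anything randomly generated per load is filtered out by the intersection.
stable = set.intersection(load_1, load_2, load_3)
print(stable)  # {'https://a.com/app.js'}
```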
W
So let me walk you through the very high-level architecture of Vroom again. The client sends a GET request to the web server; the web server sends back a response with HTTP/2 push and also dependency hints, using the dependency resolution module to decide which resources to push and which to hint. One approach to this scheduling would be to have the web server just push everything it can from the dependency resolution module and then, for the hints, just use link rel=preload to hint all of them.
W
This sounds great as well, because we are discovering resources much earlier in the page load, so things should work well. Unfortunately, it doesn't work well either, and this is a pretty serious problem, because pushing and fetching everything at the beginning of the page load leads to contention for bandwidth, and when there's contention for bandwidth, the important resources are sometimes delayed.
W
For example, a blocking script or some CSS will get delayed, and because those resources are delayed there is a cascading effect throughout the page load, and it can end up hurting the page load process. What we found in our experiments was that with this approach we don't see any page load time improvement, and even worse, sometimes we see degradation in page load times.
W
So what we did instead was prioritize the pushes and fetches of resources that can potentially have children, for example HTML, CSS or JavaScript. One very important detail here is that we have to schedule or hint them based on the order in which they will be processed. We don't want to fetch any resources out of order, because if we fetch things out of order, a resource that gets processed earlier might end up stuck waiting behind a resource that gets processed later.
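A sketch of that prioritization, under stated assumptions: resources that can have children (HTML, CSS, JavaScript) come first, preserving the order in which the browser will process them, and everything else follows. The input list is illustrative, not Vroom's actual scheduler:

```python
# Types that can reference further resources and therefore have "children".
PARENT_TYPES = {"html", "css", "js"}

# (url, type) pairs, already in the order the browser will process them.
hinted = [
    ("https://a.com/index.html", "html"),
    ("https://a.com/main.css",   "css"),
    ("https://a.com/hero.jpg",   "jpg"),
    ("https://a.com/app.js",     "js"),
    ("https://a.com/icon.woff2", "woff2"),
]

# Stable partition: parents first, in processing order, then the leaves.
parents = [url for url, t in hinted if t in PARENT_TYPES]
leaves  = [url for url, t in hinted if t not in PARENT_TYPES]
fetch_order = parents + leaves
print(fetch_order)
```

The stable partition matters: within each group, the original processing order is preserved, which is exactly the "don't fetch out of order" constraint described above.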
W
After these fetches are done, the client starts fetching the other hinted dependencies, such as images and fonts; in other words, resources that will not have any children and that don't require any processing. While these two sets of fetches are going on, we also allow the browser to parse HTML and CSS and to execute JavaScript, and if it discovers any resources while processing, we allow those fetches to go out as well. Now, this red line in the timeline marks a very important point in the page load process.
W
This red line is the time when all the bytes that need to be processed at the client are already local at the client. What this means is that, from that point on, when the client wants to process any resource, it can start processing it without having to wait on the network, which implies that from that point on the CPU can be fully utilized.
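One way to picture the red line, as a rough sketch with invented timings: it is the latest download-finish time among the resources that still need parsing or execution, since after that moment the CPU never has to wait on the network:

```python
# url: (download_end_in_seconds, needs_parsing_or_execution)
downloads = {
    "https://a.com/index.html": (0.8, True),
    "https://a.com/main.css":   (1.4, True),
    "https://a.com/app.js":     (2.1, True),
    "https://a.com/hero.jpg":   (3.0, False),  # displayed, but never parsed/executed
}

red_line = max(end for end, needs_cpu in downloads.values() if needs_cpu)
print(f"red line at {red_line}s: from here on the CPU can run fully utilized")
```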
W
So now that we have the two components of Vroom, let me sum up Vroom, and then we can see how well it works compared to the current state of the page load. Vroom starts by sending a GET request to the origin; the origin sends back the HTTP response, pushes important resources, and also provides hints for other resources.
W
So what we found was that Vroom's dependency resolution is actually very accurate, and because of this Vroom was able to speed up the page load in many of the cases. We have a number of results in our paper, but today I'll only be talking about how well Vroom does over the status quo; if you're interested in the other results, please refer to the paper. Before jumping into any numbers,
W
let me first tell you how we evaluated Vroom. We used a Nexus 6 phone connected over a 4G LTE network to a web record-and-replay environment. The reason we need a web record-and-replay environment is that Vroom requires server-side changes; ideally we would want to run a live experiment, but unfortunately getting adoption from all of these sites would be very, very challenging. Now that we know how we evaluate Vroom...
W
The tops of the bars are the medians, and the whiskers are the 75th and 25th percentile page load times. The status quo load for this set of pages is 10 seconds at the median, like we saw earlier. When we enable HTTP/2 push on all domains, we saw that by doing only that we were able to take the median page load time down to 7.5 seconds, but if we enable Vroom on all domains, we actually get double that improvement in page load times.
W
Now, if we evaluate Vroom on above-the-fold time, which is the time at which all the objects appear on the screen (this is the bar chart of above-the-fold time, measured in seconds), the status quo load takes 12 seconds at the median site, and when we use Vroom, Vroom was able to improve the above-the-fold time from 12 seconds to 8 seconds. So that's a 4-second improvement in above-the-fold time.
W
Now, one assumption we made in all of our evaluation so far is that everyone adopts Vroom, but, as we all know, adoption is challenging. So what we also did is evaluate Vroom when it is incrementally deployed. In the example here, Vroom is enabled on all the domains, so what we did was consider first-party domains. When I say first-party domains: let's say we are espn.com.
W
This actually consumes a lot of CPU cycles and network at the servers. Imagine you have thousands of pages being served from your web server: doing periodic loads of all of these pages would be a huge pain. So what we think we could do is have the client help the server with this offline dependency resolution; maybe we can crowdsource all the URLs that clients see during the page load and send them back to the server.
W
You may be thinking right now that this could violate the user's privacy, so let's take an example. This is the dependency tree of a website. In a very naive implementation, you could send all of these resources as one list back to a.com, but that is obviously not good, because if you look at c.com/a.html, that is an ad, so anything below it can be personalized and targeted to the user. Sending all of this to a.com means that we are giving away the user's privacy.
W
So what we could do instead is send everything in this green encapsulation, and this is in fact enough for a.com to do its offline dependency resolution, because a.com (if you recall from the strawman dependency resolution) cannot discover personalized resources correctly anyway. And then anything below c.com can be sent to just c.com, which also doesn't violate privacy, because c.com is the one serving the ad, so sending it back to c.com is fine.
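A hedged sketch of that partitioning: each observed URL is reported only to the origin at the root of its same-origin subtree, so everything under the c.com ad frame goes back to c.com rather than to a.com. The tree below is invented; a real client would build it from the actual load:

```python
from urllib.parse import urlparse

# (child_url, parent_url) edges observed during the page load (invented).
edges = [
    ("https://a.com/main.css", "https://a.com/"),
    ("https://c.com/a.html",   "https://a.com/"),    # the ad frame
    ("https://c.com/track.js", "https://c.com/a.html"),
]
parent = {child: par for child, par in edges}

def report_target(url):
    """Walk up while the parent shares the same origin; report to that origin."""
    node = url
    while node in parent and urlparse(parent[node]).netloc == urlparse(node).netloc:
        node = parent[node]
    return urlparse(node).netloc

for child, _ in edges:
    print(child, "->", report_target(child))
# https://a.com/main.css -> a.com
# https://c.com/a.html   -> c.com  (c.com served it, so nothing new leaks)
# https://c.com/track.js -> c.com
```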
W
Another very important lesson we learned in Vroom is that when doing these pushes and fetches of dependencies using link rel=preload, we shouldn't be fetching all of these resources at the same time: as we saw, if we do this we don't see improvement in page load times and, worse, we see degradation.
W
So maybe one direction we could pursue regarding this prioritization of preloads is to attach some kind of priority to the link rel=preload, so that the browser knows that one preload is actually higher priority than another and can schedule the loading of these resources better. In fact, there is already a draft on this in the W3C.
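That draft work eventually shipped in browsers as Priority Hints (the fetchpriority attribute). As a hedged sketch, a server could attach a priority to its preload hints along these lines; the exact parameter syntax here is illustrative rather than quoted from the spec:

```python
def preload_hint(url, as_type, priority):
    # Builds one element of a Link response header with an attached priority.
    return f"<{url}>; rel=preload; as={as_type}; fetchpriority={priority}"

link_header = ", ".join([
    preload_hint("https://a.com/app.js", "script", "high"),
    preload_hint("https://a.com/hero.jpg", "image", "low"),
])
print("Link:", link_header)
```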
L
AA
Let me offer thanks for an interesting presentation; it's really nice to see someone actually try this out in practice rather than just looking at the standardization of it. I have a question: do you employ any mechanism to avoid pushing the same resource twice to the same client? I'm not talking about link rel=preload, because there the client decides whether it already has the resource. But if we, say, pushed the CSS on each and every page, that sounds wasteful; or are there techniques employed here? Thank you.
AA
W
AB
L
AB
Figures for regular web pages, and it's a tremendous number of URLs. I just don't know if people who are optimizing for mobile are actually giving up some of this overhead. It was always understood that people were basically designing web pages thinking that CPU and network are unlimited, and that's not the case for mobile, right?
AB
W
AB
Right, which means that if the people serving these sites really thought about their users and what the experience would be, they could cut down on this, and maybe you wouldn't need your solution as much. I'm not saying your solution is not a great idea, but if the people who are trying to force lots of advertising and spyware down our throats instead thought of the customer and their quality of experience, they could solve this problem without needing more technology. Right?
Z
W
Z
W
I think that's a separate issue. The way browsers are actually designed right now, there's one main process doing these tasks. Sure, you can have another process that does preload scanning, but that's separate; the main work happens in one process. And with only that one process, you can only be executing something, or waiting on the network, in that single process at a time. So sometimes you see...
W
AC
Sure, we'll still need processing, but the part that I'm interested in is determining, in real time, that red line: determining that everything critical, or the most critical things, have finished loading and we can start loading the non-critical ones. Defining that red line would be extremely interesting. Yeah.
W
AC
AD
Alma Chadwick. I had a couple of thoughts, one of which is: did you do any research into how many unnecessary resources actually got downloaded, where you had pushed something in advance and it turned out the browser didn't need it? One obvious example would be high-DPI graphics, where the browser is deciding which resolution to download.
W
AD
And another thing that comes to mind is that this seems like a case where, if we had a better way to organize our HTML and JavaScript in the first place, so that the browser could identify the critical resources earlier in the parsing, then perhaps this server-side trickery wouldn't be necessary. But now I'm tilting at windmills, so thanks.
AE
Matt Mathis. I sort of have a continuation of that thought. This feels like very good work, very cool stuff, but it feels like you're optimizing at the wrong layer, in the sense that the content providers should be optimizing better ahead of time, and for some reason they're not, and I was wondering if you had any speculation about some of the incentives behind that. Things like domain sharding, for instance, feel to me like an example of a technology that works the same way that excessive choice in the grocery store works.
AE
It has the effect of crowding out competitors, because you can provide 14 different flavors of chips to use up ten times as much shelf space. What this means is that, underlying some of this, people have inappropriate incentives, and you're optimizing away some of their incentives, but the real problem is that they're optimizing against you. Did you look at any of the causes of why this stuff was done the way it was?
W
AC
B
E
Hi, Allison. I wondered if we might spend a few minutes talking about, or catching the community up on, what's going on with ANRW, the research workshop, and how it was formed. First I want to say thank you to the ISOC people and the ACM people for putting that together. The last time I looked at it, we didn't have a committee; there was no call and all that stuff.
E
So a bunch of people did a bunch of great stuff to get that in order, and, in the spirit of transparency, I was wondering if you, or one of them, could tell us how the committee was selected, what the outreach and diversity goals were, and whether you met them. I see there's an invited talk by one of the people on the program committee; that's an unusual thing, but could you speak to that, or to who's responsible for it?
E
We already do that for the ANRP, so there was a venue to do this, and I think we should raise the bar for ourselves and say how we selected the committee, and things like that. But I'm really happy that it's all in shape already, and it's looking really good. On the timeline, to get to a workshop in July, we've got a month to submit and they've got two months after that to form the program, so it looks good. It looks really good. Mostly I'm...
B
Besides submitting yourself, encourage your researcher friends to bring papers. I mean, if everybody in this room submits something and gets a friend to submit, what an amazing selection we'd have. I have had in mind that perhaps the committee should expand, with respect to the question of a continuing committee versus a one-off, once-a-year thing; we're thinking slowly about a bunch of those types of ideas, and I'll get you involved in that conversation, but also the people in this room.
B
We're looking to increase the amount of applied network research there is. You've heard two beautiful applied network research topics today, and also Rod's topic, but many times there's a very big gap between academic research and what we could then build and see deployed in the real-world Internet. So we're looking to enhance those relationships in every way we can. Okay.
E
And you actually hit on exactly the last thing I wanted to say, which was to remind people to solicit for it. One carrot you can use: one thing that's nicer about ANRW than, say, IMC is that the talks are recorded, and we're building this huge library of awesome presentations by those people. Some of the students I've solicited to come to IETF ended up having this talk on their web page afterwards, where somebody can see them in action, right?
B
E
Then, oh, the last thing is the program committee. It looks really good: about a third of it is women, and it looks like about a third of the people are IETF-participating academics already. So it looks like somebody did a really good job, but we should tell people how we're doing it. Okay.
B
Sounds good. Okay, anything else anyone wants to bring up before we send you off to lunch? Okay, all right. Well, thank you for coming; see you on our mailing list, see you tonight. I'm going to give another little bit of an overview tonight, hopefully more successfully in the analog department. Oh, and there should be another blue sheet; if someone can provide the other blue sheet, that would be helpful.