From YouTube: COVID-19 Network Impacts Workshop, 2020-11-09
Description
Recording from Day 1 of the Internet Architecture Board's COVID-19 Network Impacts Workshop.
Workshop home page: https://www.iab.org/activities/workshops/covid-19-network-impacts-workshop-2020/
B
Yes, my approach was to just ask everybody to share their notes with us later on, but if people want to do common note-taking in the notepad, that's probably even easier.
D
I think it would be more helpful to just take collaborative notes in the Etherpad, because if we don't actively engage in the note-taking, it probably won't happen, in my experience; or we'd end up trying to pull together everybody's individual notes in different formats.
B
So I don't think we need detailed minutes for this meeting, and we will have an IAB report published afterwards at some point.
B
Okay, even though it's five minutes past, I would suggest that we just start. I think most people are here; I don't hear the joining noise anymore. So let's start. Welcome, everybody, to the IAB workshop on COVID-19 network impacts.
B
This is the first session we have today; we have two more sessions, on Wednesday and on Friday. Thanks, everybody, for joining, and we will go right on to the next slide. Yes, so this is also part of the IETF, so the Note Well applies here. However, if you have something to share which needs another framework, you can indicate that and we can see what we can do; but otherwise you should assume the Note Well applies, and you can see this also as a contribution to the IETF.
B
Logistics: we have a web page with some information about the workshop, which you might have seen because you might have submitted something. It also has all the papers on there that we have accepted. Thanks, everybody, for submitting a paper.
B
We actually received a few more papers than have been published on the web page, because we only published real, full position papers there. If something was just a short statement of interest, we didn't publish it. But I think the papers we got cover a really nice, broad spectrum of input, and we are happy to have you all here.
B
We also created a GitHub repo. The repo currently has mainly the slides from today. It also already has an MD file for the report, so if you already want to put some notes there and start working on the report, that's also helpful. We're here today on Webex, and we will use the same Webex session on Wednesday and Friday.
B
The session is currently being recorded, and the intention is to publish the recordings later and also potentially use them for note-taking and looking things up. So if you have any concerns, or if you want to share something that should not be published, we can actually stop the recording; you have to indicate that, but otherwise we will just assume that we can publish the recordings later.
B
Otherwise, you should turn on your video when you're speaking or presenting. I'm not sure if you want to keep your video on the whole time otherwise; we have a lot of people in here. I don't think it should be a network issue, if the tools are smart enough by now, but feel free to turn it off anytime. Make sure you're muted when you're not speaking; it can get messy otherwise. And for the queue, we will try to use...
B
...the plus-q/minus-q scheme in the chat. Jari is helping me watch the chat for the queue, and I hope we can manage that way. The goal is to have a very interactive discussion here, but we are too many people to just speak up, I think, so we need the scheme. Here is also the link to the Etherpad, which has already been posted in the chat. Somebody is not muted and is making noise, so that person didn't listen.
B
Well, thanks. So, the Etherpad: we have the Etherpad for taking notes, but if you can also put your name and affiliation there, then we have a little bit of a record of who was here. That would be very helpful.
B
I think we have all seen many reports of how quickly the traffic went up in March and April, because everybody moved to working from home. There was a big shift, and also different applications were being used; especially video conferencing and media applications had a big increase. Overall we saw that things actually worked well. We know that some operators managed to upgrade their capacity quickly; there have been some small hiccups. We also saw, in the EU,
B
...that there was a request, for example, for Netflix to actually reduce the video rate and lower the traffic load, and these kinds of things. So this workshop is really to figure out a little bit more: what happened, what have we learned, and what do we need to improve in future, from a network operator and service provider...
B
...point of view, but also from an architectural point of view: what in the technology maybe needs changes, and what is set up in a way that survived this crisis very well, so that we should make sure we maintain it.
B
So it's really about figuring out what happened, figuring out what was behind the scenes in terms of network management and capacity extensions, and also what the general impacts of these traffic shifts are on our architecture, and potentially any kind of future effects. Around these three points we also set up our agenda: our session today will focus on measurements and observations.
B
So this is really to get everybody on the same page, get some knowledge about what happened, get some data, and have a discussion about it. The session on Wednesday will focus more on operational issues: what did the operators do, what was their experience, and what can we learn from it? Then on Friday we will be a little more forward-looking, and we will hopefully also have some time for other topics people want to discuss that didn't find a place in the earlier sessions.
B
We have Cullen on there.
C
Thank you, thanks. I was just thinking: at the end of Monday and Wednesday, can we reserve a little bit of time to step back from our normal discussion and just make sure that we sort of outline what we want to talk about the next day, so we can guide those agendas a little bit more?
B
Okay then, let's start with today's topic. I looked at all the measurement papers, which you can find on the IAB web page. I also looked at the slides I have received already, and the overall observation isn't that surprising: we had this huge increase in traffic in March and April, and we have observed changes in traffic patterns, because people are using different kinds of applications given the different ways they work and live. We have especially this very strong increase in video conferencing.
B
We also got some measurement papers showing that there are some problems with last-mile congestion and latency in some networks, depending on how the infrastructure is set up. So this is just the very high-level view, and today we will dive a little bit more into the data, looking at the details and figuring out: did we actually measure the right thing, and do we know everything we need to know, or did I miss...
B
Did we miss anything here? As such, we also have these presentations, which I tried to structure a little bit in this way. And there's Christie in the queue.
F
Hi, thank you. Yeah, I just wanted to, sorry, you mentioned whether there's anything you missed in the high-level observations, just in terms of the shift in traffic patterns. I think we should really acknowledge the shift in cyber attacks, and the security aspect of what we saw in terms of the shift in traffic patterns. So I just wanted to mention that as a high-level observation before we go into more detail here. Thanks.
B
Thank you, that's a nice contribution. We don't have a measurement paper about this, but that's definitely something we can discuss further.
B
Okay, so on the measurement papers: I considered the order in which we want to go through them. There are different ways to do that, but this is the order I chose. I would like to start with Oliver, on a paper which he called "The Lockdown Effect", because they have various measurements from different kinds of networks, and it gives a really nice high-level overview and some data on the observations I just stated.
B
Then we have this little block where we get a little bit more into specific traffic patterns in campus networks and mobile networks; then we have a few papers looking more at interconnects at different points; and at the end we talk about this last-mile congestion problem that I mentioned. So that's the plan.
B
I can also quickly flip to the next slide: there's a slide with potential questions we want to address later on. Of course, your own questions can go here as well. You can look at the slides on GitHub if you want to double-check this, but let's look at the presentations first. With that, we can start with Oliver, and I will try to find the slides.
G
All right, thanks, Mirja. So, thanks for the introduction. My name is Oliver Hohlfeld and, on behalf of all the authors, I'm going to present our paper on the lockdown effect. What we intended to do was take a very broad perspective on what is actually happening in the Internet when the entire world shifts its behavior. We have a couple of vantage points, an ISP, Internet exchange points, and an educational network, and we want to look at this whole broad set of networks and ask:
G
How did they actually cope with the traffic change, and what was actually happening in terms of measurable observations? So, next slide, please.
G
What kind of vantage points do we actually have? We're looking at three Internet exchange points: one big IXP in Central Europe, one in Southern Europe, and one IXP on the US East Coast. That's the core networking perspective, where a lot of the networks meet. Then we are also looking at a big Internet service provider, to get the residential customer perspective: all the DSL customers that now have to work from home. And we're also looking at a big educational network in the Madrid region that interconnects more than 16 different research institutions, with students, faculty members, and so on. So, next slide, to give you a view of how this traffic evolution developed: here you see a plot that starts in January last year and goes up until the end of June this year.
G
However, when the initial responses and lockdowns were announced (this is the part of the plot where it goes from the lightly shaded background to the gray shaded background, in March this year, when the governments imposed lockdown restrictions), we see a big jump in traffic on the order of 20 to 30 percent. This is just because people changed their behavior and because of the changes in their lives. So that's a big additional amount of traffic that these networks needed to handle.
G
So this is what we see at this Internet service provider. Now, we have a couple more vantage points: if you go to the next slide, we see how this looks for the Internet exchange points, and here we see exactly the same behavior at each of the three IXPs that we're looking at.
G
We have a more or less gradual increase in the traffic evolution, and then, once these lockdowns were announced and the restrictions were imposed, we have a big increase in traffic on the order of 20 to 30 percent at every vantage point that we're looking at. If you go to the next slide, we can have a look at how this looks from the perspective of a mobile operator, and here, as we would expect, we see a slightly different picture.
G
Once the lockdowns are in place, the traffic is slightly decreasing. So those are the general observations that we can see here in terms of how the traffic level is changing.
G
So, next slide, please. What are the takeaways that we can take from all of these measurements that we did? People's lives change, and this leads to the advent of new traffic patterns. Most notably, what we observe (and this is in line with many of the other works and operator perspectives that have been shared meanwhile) is that the difference between workdays and weekends vanishes.
G
All the workdays become a bit more like weekends, so the difference here kind of vanishes. The traffic composition changes as well: in terms of traffic composition we see shifts, so, for example, all the applications that we use for remote work and education, VPNs, video conferencing (you mentioned it in the beginning), all of these applications see an increase in traffic.
G
For example, if you look at educational networks: if you send people home, they cannot use the networks anymore; or rather, they use them for different kinds of services. They now need streaming, meaning that the traffic symmetry and the in/out ratios completely change.
G
The other thing that we see here is that most people, when they look into Internet traffic, look at the hypergiants, so the big CDNs and cloud providers and so on. What we noticed, however, is that focusing only on these hypergiants is not sufficient, because we saw a lot of increase in the non-hypergiant networks as well. In general, what we find here is that once human patterns and behavior change, this is directly reflected in the Internet traffic patterns that operators need to account for. So, next slide.
G
What we saw here is that if we just look at the traffic volume, we saw an increase of about 15 to 30 percent within just a few days, so it happened rather quickly. If you now consider that networks are usually provisioned for about a 30 percent increase per year, this is pretty substantial. In terms of how the networks coped with that: mostly pretty well, because they had sufficient idle capacity provisioned. But there is...
G
...another effect that happened here: the impact on the peak traffic was rather limited. Most of the traffic shifts occurred in non-peak hours, which means that the valleys get filled, but the increase in the traffic peaks is not as substantial as the actual shift in the traffic.
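The valley-filling effect described here can be illustrated with a small sketch (the hourly traffic numbers below are invented for illustration, not taken from the paper): extra load that lands mostly off-peak grows the total volume substantially while barely moving the daily peak.

```python
# Illustrative sketch (made-up numbers): why a broad traffic increase can
# fill the off-peak "valleys" while barely changing the daily peak.

baseline = [2, 2, 3, 5, 8, 10, 9, 6]       # Gbps over 8 example time bins; peak = 10
extra_daytime = [3, 3, 3, 3, 1, 0, 0, 2]   # new remote-work load, mostly off-peak

after = [b + e for b, e in zip(baseline, extra_daytime)]

total_growth = (sum(after) - sum(baseline)) / sum(baseline)
peak_growth = (max(after) - max(baseline)) / max(baseline)

print(f"total volume growth: {total_growth:.0%}")  # large
print(f"peak growth: {peak_growth:.0%}")           # much smaller
```

With these example numbers the total volume grows by about a third while the peak does not grow at all, which matches the qualitative observation that provisioning (driven by the peak) was stressed far less than the volume numbers alone would suggest.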
G
The other observation that we made here, when we looked into port capacities and how networks dealt with provisioning their links, was that most of the networks actually reacted rather quickly to the additional need for capacity, and they were very quick in provisioning additional capacity.
G
For example, here the Central European IXP reports a capacity increase of about one and a half terabits. So what can we take away from this? The networks could deal with this sudden increase in traffic and these shifts in demand pretty well if they had sufficient idle capacity provisioned and very quick reaction times in terms of provisioning new links and adding capacity.
G
This is why, for these kinds of bigger networks that we looked into, the impact of the pandemic was measurable, but it didn't really hit these networks hard.
G
So with that, I'm at the end. Thank you, and I think we go to the next presentation before we jump into questions.
H
Yeah, thanks, Ollie. Unfortunately, I'm currently not able to view the exact graph, so which one is it, Ollie? In my browser I don't see the shared presentation.
J
Question: okay, so here it was mentioned that the capacity can be increased well because of planning in advance. But in the process of COVID-19, I think the capacity may not be a very big problem; yet we know that the experience of the users may not be the same as usual.
B
Okay, in this case I would just move on; we have Martino next, and let me find those slides.
K
Okay, thank you. Yes, in our paper we collected some data from our campus, Politecnico di Torino: some passive data using a network probe running a passive meter called Tstat that we developed here at Politecnico di Torino. We compared what happens in our university with respect to other large universities in Italy. In these graphs we see how the traffic pattern changed from before the lockdown to during the lockdown.
K
Here we have the daily pattern. On the top y-axis we see the incoming traffic pattern, while on the bottom y-axis we see the outgoing traffic pattern. If we focus on the incoming traffic, this is mainly what people generate while browsing, and in all three universities we see a very large decrease of incoming traffic due to the lockdown: people are no longer at the university, so they do not browse and they do not download content.
K
Instead, if you look at the outgoing traffic, which is basically the traffic served by the servers of the university, we see that at the other two universities the outgoing traffic remained almost the same. But at Politecnico, which is the left image, the outgoing traffic really exploded; this is the red line, and the reason is that Politecnico adopted an in-house teaching infrastructure.
K
Basically, all the lectures are hosted and distributed by servers inside Politecnico di Torino rather than in the cloud, and this makes the outgoing traffic explode. In the next slide we have some details about that. In general, we observed different traffic modifications due to the COVID pandemic.
K
We also saw a large increase in the use of Microsoft Teams, because in our university Microsoft Teams is used by faculty to communicate with students, for administrative matters, for storage, and things like that. Finally, the most important traffic change was due, as I said, to the remote-learning infrastructure that we set up at Politecnico, which is based on BigBlueButton (BBB), an open-source application for remote teaching that is based on WebRTC.
K
In turn, it uses RTP traffic to distribute the live lectures to the students. This had a large impact on the campus traffic: if you look at the bottom-left image, we had around 600 virtual classrooms per day and 15,000 users connected every day. What is also important here is that these traffic changes are persistent: they were not there only during the first days of the lockdown, but lasted the whole lockdown and longer. If you look at the figure in the bottom right, this is the incoming/outgoing traffic pattern.
K
This traffic even increased, because we have all the new students, and because all the classrooms are now working remotely; so we have around two gigabits per second of mainly outgoing traffic.
K
If we go to the next slide, we can also see how this modification created some different traffic patterns in terms of daily and weekly patterns.
K
Here we have a breakdown of the traffic due to remote teaching. For outgoing traffic we have the live classrooms, which are based on RTP traffic. They are live, so this is UDP/RTP, and they happen only during weekdays, from 8:30 a.m. to 7 p.m.
K
We can observe up to half a gigabit per second of outgoing traffic due to multiple classrooms. But there is also a large amount of traffic due to the on-demand content that students download, because students can of course also download the classroom recordings offline, for example during the weekend.
K
In fact, we noticed that during Saturday and Sunday we have almost half a gigabit per second of downloads of teaching material, and this is huge, because before the lockdown we didn't observe such large traffic during the weekend; so this is a very big modification.
K
The number of new TCP and UDP flows exhibits a large peak at the beginning of the classrooms, and this can be a problem, for example for firewalls, for some middleboxes, for passive meters, for all those kinds of network equipment that need to keep per-flow state inside.
K
So this can be a problem for some network equipment. If we go on to the next slide, we can finally study the performance that students enjoyed while using our teaching facilities during the lockdown. Here we first focus on the download speed of the teaching material, which happens over HTTPS over TCP.
K
There is some difference across operators: we noticed that fixed and fiber operators are typically much faster than mobile and low-cost operators. This is the figure on the top left: we see that, for example, Fastweb was much faster than Iliad, which is a mobile operator.
K
We also noticed that at some point all the traffic from Italian students going to our university was routed through France and then back to Politecnico, and this was a clear performance penalty for the students. On the contrary, we didn't notice great differences across Italian regions; this is the graph on the bottom left, with the throughput.
K
Yes, there is some variability of the throughput between regions, but not as much as we expected, and this was rather interesting, because in Italy there is a large debate about the digital divide, for example for rural areas or for the islands; in Italy we have Sardinia and Sicily, and there is a huge debate about whether the Internet is slower or faster on these islands. Actually, we did not measure such different performance across regions. Finally, we can say something about the live classrooms, which happen over RTP in UDP.
K
Since this is not a batch download, it has a fixed bitrate at which the content is generated by the teacher, and we didn't notice any problems due to the network during the lockdown, also because the bitrate is rather low: the video is very seldom above 200 kilobits per second, because we set up a low video bitrate to avoid congestion, and the audio is typically at 40 kilobits per second. So this almost never saturated the networks of the students.
K
No big problems were reported by the students during the lockdown. And yes, this was the last slide. In general, we can say that, from the campus point of view, it has been a large effort to set up this teaching infrastructure.
K
This clearly stressed the campus network, with such a large amount of outgoing traffic, which really changed the pattern compared to before the lockdown. Before the lockdown we had incoming traffic and not much outgoing traffic; now we have almost all outgoing traffic. But in general the network managed to carry the packets, and that's all. Thank you.
B
Thank you very much for sharing, especially this last slide, because there's a lot of discussion already ongoing about how to measure user experience. Before we take Oliver, I have a quick question: in the previous slide you said that the number of TCP connections would challenge firewalls and these kinds of things. Did you see any outages there, and did you have to upgrade those firewalls or do anything there?
K
Regarding the firewalls, I don't know if the technicians had problems, but for our passive probe that is running Tstat, which is custom software, a custom passive meter: at some point we had to increase the number of cores, because it is multi-core and multi-threaded. We had two cores and we had to upgrade it to four cores, because the traffic increased too much. But I'm pretty sure that the network administrators had some other problems.
L
Yeah, I have a question regarding slide number two, where you compare the incoming versus the outgoing traffic before and after the lockdown, especially for B and C, so basically the universities that you did not investigate in depth, I guess. Why is there so little difference in the outgoing traffic after the lockdown, in the lower part of B and C?
K
Actually, I think this is because the outgoing traffic is mainly due to the servers, so the teaching portals, and this pattern didn't change too much, because these are students going there to download the slides and the teaching material, and this has maybe remained rather stable.
B
Definitely. We move on with looking into mobile networks a little bit more, and we have Andra here.
B
Go ahead.
M
Okay, great, thanks so much! Yeah, thank you so much for having us today. I also have Diego Perino joining from Telefonica today.
M
Specifically, we're talking about O2 in the UK, so this is joint work between Telefonica Research, Universidad Carlos III, and Telefonica in the UK. Specifically, we looked at how the COVID pandemic unfolded in its early weeks within the UK, and how all the measures the government issued to tackle the epidemic impacted the mobility of the people and, implicitly, the way in which they used the mobile network services.
M
The period we focused on captures about 10 weeks, from the end of February until around mid-May of this year. Within this period we captured a few interesting moments: when the World Health Organization declared the coronavirus outbreak a global pandemic, as well as all the follow-up measures the UK government took to impose social distancing, including the nationwide lockdown they declared on the 23rd of March. Next slide, please. So I'm going to start with the takeaways.
M
In our analysis, we found that, in the effort to tackle the pandemic, UK nationwide mobility decreased by 60 percent after lockdown, compared to the last week of February 2020. So people actually did stay at home.
M
At the same time, their usage of the mobile network also decreased: we found a drop of approximately 20 percent in downlink data traffic after lockdown, compared to earlier this year. London in particular: many of its residents actually relocated elsewhere, showing a 10 percent drop in residents around lockdown, and a steep decrease in downlink data traffic, especially in central areas of London.
M
Our analysis put together a rich data set that we collected from O2, including a general mobility signaling information data set from devices in the network, from the mobility management entity, and radio network performance metrics. We capture users' mobility only by checking which radio towers they actually connect to. I must specify that this data is always anonymized and aggregated, and we never access users' GPS information from these data feeds.
M
You
know
we
complement
with
third-party
information
from,
for
example,
the
office
for
national
statistics,
and
we
derive
different
mobility
patterns
at
the
user
level,
including
the
home
location
of
the
user
and
a
bunch
of
mobility
metrics.
So
one
of
these
mobility
meters,
for
example,
is
the
user
gyration,
which
tells
us
about
the
area
in
which
a
user
moves
only
based
on
the
radio
towers
to
which
the
user
actually
connects.
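As a rough illustration of the gyration metric described here, a minimal sketch follows. The tower coordinates are invented, and the study's exact formula is an assumption; this uses the common radius-of-gyration definition (root-mean-square distance of visited positions from their centroid).

```python
# Hypothetical sketch: radius of gyration from the positions of the radio
# towers a user attached to. Coordinates are made-up, in arbitrary units.
import math

def radius_of_gyration(points):
    """Root-mean-square distance of visited positions from their centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return math.sqrt(
        sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points) / len(points)
    )

# A commuter bouncing between home and office towers covers a wide area...
commuter = [(0, 0), (10, 0), (0, 0), (10, 0)]
# ...while a locked-down user stays on towers near home.
stay_home = [(0, 0), (1, 0), (0, 1), (0, 0)]

print(radius_of_gyration(commuter))   # 5.0
print(radius_of_gyration(stay_home))  # much smaller
```

A nationwide drop in this quantity, averaged over users, is what the bar plot described next visualizes.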
M
This is where we actually see the evolution in gyration. Based on all this information, we are able to observe how the mobility of people in the UK changed in these early weeks of the pandemic. What you see here is the average variation in gyration compared to the nationwide average in February, across our data set of about 16 million users. We captured the variation from the 24th of February until the 30th of March in this plot, so each bar that you see here corresponds to a different day.
M
What we observe is that, when the pandemic was declared in early March and social distancing was immediately encouraged, the mobility patterns of people did not change immediately. Mobility actually dropped dramatically once the government issued the stay-at-home order later in March: we note that after the 23rd of March, the gyration drops by at least 60 percent compared to earlier in February.
M
The main takeaway here is that the only effective measure to actually convince people to socially distance was the countrywide lockdown. Next slide, please. Then we asked ourselves what all these changes actually meant for the mobile network. To understand the impact on the actual network, we looked at six different indicators, performance KPIs: downlink and uplink data volume, downlink active users, downlink average user throughput, cell resource utilization, and the total number of users in the radio cell.
M
On the y-axis we report the delta variation of each of these KPIs compared to the nationwide average in the last week of February 2020, prior to the pandemic. We show here only average values across one week, so on the x-axis of each of these plots, each point corresponds to one week in 2020. We look both countrywide and at specific regions, namely five metropolitan counties, which we mark on the map below the legend there.
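The normalization described here can be sketched as follows. This is a hedged illustration: the KPI values and week labels are invented, and only the delta-versus-February-baseline computation from the talk is shown.

```python
# Illustrative sketch (invented numbers): expressing each weekly KPI average
# as a relative delta against the pre-pandemic baseline week (last week of
# February 2020), as plotted on the y-axis of the slides.

baseline = 100.0  # nationwide weekly average of some KPI, last week of Feb
weekly_avg = {"w10": 101.0, "w11": 95.0, "w12": 82.0, "w13": 78.0}

delta = {week: (v - baseline) / baseline for week, v in weekly_avg.items()}
for week, d in delta.items():
    print(week, f"{d:+.0%}")  # e.g. w13 prints -22%: a ~20% drop after lockdown
```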
M
In terms of the data traffic patterns, we noted that the evolution at the UK (countrywide) level is more or less similar to that at the metropolitan county level for the five different counties we capture. In particular, the data traffic volume dropped by 20 percent or more, while the number of downlink active users also decreased, together with the cell resource utilization. Maybe unintuitively, however, the throughput also dropped by 10 percent, and we attribute that, likely, to the applications; but this is just conjecture.
M
We also see that London strikes a discordant note compared to the rest of the areas, with a steep decrease in the number of users and in both the uplink and downlink data volumes. We confirmed later with our analysis that this is likely due to the temporary relocation of London residents, who did not remain in their homes after lockdown.
M
Next slide, please. Right, so we not only broke down all this analysis by metropolitan county, that is, by geographic units, but we also ran the analysis on geodemographic clusters, and, I'm not sure how much time I have, but I'll quickly explain what these are. Every 10 years the UK runs a census, and with the data from the census they classify output areas into different clusters.
M
They call them output clusters, and what you see here in the legend are the largest clusters that we can use; you see them in the plotted map on the left side, where you can see how these clusters map to different geographic areas. So, instead of looking within the limits of a specific area, we looked at the demographics and tried to understand how the performance of the network maps to these geodemographic clusters.
M
What we immediately see here, the first thing that jumps out, is the discordant note that London makes with the rest of the demographic clusters. The blue one is the "cosmopolitans": they usually map to central London areas and, as you will see in the next slide, we found that a lot of the people living in central areas of London actually moved away.
M
As I said before, on the initial slide of takeaways, the voice traffic actually surged: we found a 140 percent increase in the total volume. However, this put a strain on the network that interconnects the different operators, which was quickly fixed, basically showing the rapid response of the network operators and the service providers in restoring the quality of the service. So, next slide.
M
So, finally, I show you the temporary relocation of residents from London. In this matrix, the columns are different days and the rows represent different cities within the Greater London region and inner London, so you can see how people from London moved to other cities around London. You immediately see, in the first weeks, the weekday/weekend patterns, which then disappear.
M
Now, if we focus only on the main body of the matrix, you see that the color shifts slightly towards lighter blue, which basically means we see a decrease of ten percent in the number of inner-London residents who are actually still living in London after week 13 of the lockdown. And then we see that, just prior to the actual lockdown being imposed on the 23rd, which was the Monday—
M
—you actually see a bunch of movement happening that corresponds to residents moving from inner London to, for example, areas like Hampshire or Kent, just on the weekend before the 23rd.
M
So, next slide. This is just an update: as you probably know, and you've seen the news, there is a new lockdown being imposed now in the UK. They started with tiered lockdowns of towns, and now they've entered a full lockdown, like before in March. So we looked at how mobility changed across these last nine months.
M
So — I'm not sure how well you can read this; it's not the easiest — but this is just the evolution in the radius of gyration. The upper part is the same bar plot that you saw initially, but now it goes until the end of September. So you do see that across the summer the social distancing diminished considerably, and now, as we go into the second wave—
M
—the gyration starts to decrease and probably will go down to the levels that we saw in the first wave. We also looked at the different areas aggregated per month, and this is what you see in the bottom part: the steep decrease in March, and then a slight, step-by-step relaxation in the months towards the summer, until it looks, you know, more or less normal in terms of the mobility of people.
B
Thank you very much. So yeah, I think my two high-level takeaway messages are that, first, the situation looks slightly different on mobile networks than maybe for other networks, and also that there is a lot of interesting data about people's mobility, which is very helpful as well.
B
You mentioned very briefly that mobile operators also had to react and extend their capacity, but I guess — I mean, you cannot put up new base stations just now, right? So do you have any insights into what they did and how that worked? They did deploy—
M
5G — and I can confirm it does not cause COVID; that's a joke. And they were actually deploying 5G across these months, especially in London. You've probably seen the news and the press releases: ultimately they were deploying capacity, especially within London, around the areas of outer London. The issues they had in terms of capacity were more with the inter-operator voice traffic, which was short-lived.
M
It lasted from around March 23rd — the first week before lockdown and the week after lockdown. It was the effect of a natural disaster that you observe when people just pick up their phones and start calling; even in huge earthquakes or other natural disasters we saw exactly the same pattern, and this matched that. But actually, what was interesting was that even after the week immediately following the lockdown, the voice traffic stayed basically constant, with an increase of about 50 percent compared to February.
I
Yes, thank you for this. This was really interesting data and a really interesting presentation. I particularly liked that we went into the details behind the big numbers, like the demographics and people moving back and forth between places. It's good to comment on one thing: we at Ericsson — my smart colleagues — have measured traffic in different countries, and they saw a lot of variation. One of the possible reasons for that variation is how much fixed network capacity, or how many fixed-network users, there are in a country. So, for instance, India—
I
They rely quite a lot on mobile networks, so during the pandemic, traffic there grew a lot in those networks, and I wonder if that might be an additional detail behind the big numbers that somebody should perhaps look at: whether people have fixed networks, and whether that affects how much traffic they are sending or not.
M
So I think it would be great — and this is something that we've been struggling with quite a lot — to try to understand the "why" behind what we see. One immediate assumption is that, okay, traffic decreased in terms of mobile network usage, so the waterbed effect should have been seen in the fixed network traffic. However, we don't have that sort of information to check for the same users, so we're not very—
M
We didn't know how to verify this, so we left it at the conjecture level; we could only assume that that's what happened. But it would be very interesting to be able to put together, for example, an ISP in the UK with the mobile operator and check, within the same demographics, whether this waterbed effect actually happened — the 20 percent drop and a proportional increase in the ISP traffic.
B
Sure, thank you. Thanks. If we don't have any more questions right now, we will go into a break, given that we are already behind time and we have some more talks.
B
And Stephen says in the chat that you can, of course, keep discussing in the chat; we will also monitor the chat and try to make sure that points are captured in the report. Okay, see you in five minutes.
B
Okay, I hope everybody's back. The five minutes are over, so we will simply start again. I noticed that I actually didn't introduce myself, and Webex is nicely hiding my name.
B
So, just for the record, my name is Mirja Kühlewind; I'm an IAB member. I would also like to quickly introduce the people from the IAB who helped put this program together and review the papers: Stephen Farrell, who has already been talking, and you've seen his video at the beginning; Jari Arkko, who has also asked some questions already; Ben Campbell, who is here today but hasn't been active yet — so you're the next one to ask a question; and we also have Cullen Jennings here from the IAB, who will chair the session tomorrow, but he can introduce himself tomorrow.
N
Yup, thanks. My name is Jason Livingood; I'm an engineer at Comcast, in the IETF, in the United States, and I'll be presenting today with my colleague Nick Feamster from the University of Chicago. Next slide, please. We'll certainly go through this quickly: these are the folks that we are collaborating with. Next slide.
N
We are looking at both long-term trends in interconnect data and some near-term ISP observations, and so the primary data sources—
N
Thanks. So the primary data source, which Nick will go into momentarily, is from the interconnection measurement program, which was kicked off in 2016 and is still running today. Essentially this is comprised of sampled NetFlow or IPFIX data for all of the interconnect interfaces across multiple ISPs — so a timestamp, the region where the interconnect is, and then anonymized peer IDs.
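The kind of record set just described — sampled flow data per interconnect interface — can be sketched as follows. This is a minimal illustration only: the field names (`ts`, `region`, `peer_id`, `bytes`) and the 1-in-1000 sampling rate are assumptions for the example, not the program's actual schema.

```python
from collections import defaultdict

SAMPLING_RATE = 1000  # assumed 1-in-1000 packet sampling for this sketch

def daily_utilization(records):
    """Sum scaled-up byte counts per (day, region, anonymized peer id)."""
    totals = defaultdict(int)
    for r in records:
        day = r["ts"][:10]  # ISO timestamp -> YYYY-MM-DD
        totals[(day, r["region"], r["peer_id"])] += r["bytes"] * SAMPLING_RATE
    return dict(totals)

# Fabricated sample records, purely illustrative
records = [
    {"ts": "2020-03-01T00:05:00", "region": "northeast", "peer_id": "p1", "bytes": 1500},
    {"ts": "2020-03-01T12:00:00", "region": "northeast", "peer_id": "p1", "bytes": 500},
    {"ts": "2020-03-02T09:00:00", "region": "west", "peer_id": "p2", "bytes": 900},
]
totals = daily_utilization(records)
```

Trending such per-peer daily totals over months is what makes the capacity-addition rates discussed below visible.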
N
All right, well, I will jump into this then, and we'll come back to him in a second. The key finding here — I think similar to the other presentations that we saw — is that we saw an enormous increase in utilization in a very short period of time, and at the same time new capacity was added in a very brief period as well. We see, of course, a steady increase in capacity in the interconnection measurement program data, particularly from mid-2018, but beginning with COVID—
N
—we saw nearly twice the rate of 2019's additions. So in a very short period of time, all of these interconnect interfaces were having a lot of new capacity added, and that growth continued, not just in the initial first-quarter time frame, but through the second and third quarters. If we go to the next slide, we can see a bit more.
N
So, Nick, are you available now? I see you've stopped talking — are you available to speak to this, or should we come back to you in a second?
O
—at peer link utilization in particular: how much traffic goes across the links from a particular ISP in the upstream or downstream direction, comparing that between two periods, January 1st, 2020 and September 1st, 2020. What you can see on the diagonal, basically, is equal upstream/downstream ratios, respectively, between those two time periods.
O
This is data that we looked at both from the ISP perspective and from the Internet interconnection measurement project. Go to the next slide.
N
Great. So just to wrap up: in addition to the long-term IMP data set that Nick analyzed, just some individual observations. Looking at all of the detailed Comcast interconnect data, we definitely saw about a one-third increase in that March–April time frame, and it happened, as other presenters said, really in a matter of days. So that was a year's worth of growth in a few days, which was astounding. When we look at our access network to date, certainly a peak—
N
—in that March–April time frame; it went down due to seasonality in the summertime and has come back up now with the return to school in the August–September time frame. But over that period of time we've seen downstream consumption, or usage, in the access network up about 13 percent and upstream up about 36 percent, largely driven by things like video conferencing. We also saw a huge increase in the number of access network augments.
N
So, 400-plus percent in many weeks, which was big; 500-plus augments to our core network — those are primarily at the edge, so think of the interconnect links — that we've done through this period as well. This isn't really interconnect related, but we also run about 700,000 to 800,000 tests per day to cable modems in the network, just to measure QoE, in addition to traditional capacity measurements. And notably, within our interconnect link types—
N
—one of them is settlement-free interconnect, and typically those would grow at about 15 percent a year, so a little bit slower than the norm. But those went up 37 percent over this period, and some of the notable per-partner numbers were interesting. It seemed like our interconnect growth was heavily driven by specific partners — it wasn't across the board that everyone was increasing — but particular transit networks, content delivery networks, or web conferencing providers just saw amazing growth.
N
So in one example, one peer more than doubled — up 115 percent — basically in four to seven days, in a week's period, and then we saw other platforms up 3,900 percent, which was just mind-blowing. More of the norm was 100 to 200 percent, which is crazy to say. So that is it; for more information, on the next slide you can click on the paper — there are more insights there — and we're happy to take questions later on. Thank—
B
—you. Thanks. As you mentioned the speed tests here and QoE: you increased your test volume — did you also actually notice more QoE problems, and do you have any kind of data or rough numbers for that?
N
Yeah, we didn't share that in this paper, but there's another paper; I'll paste it in the chat.
N
Excuse me. And so it was kind of good timing that we had this test come out in early February, right before the pandemic, so we had a little bit of beforehand volume. What we ended up seeing, certainly in some cases in that early time frame, was an increase in latency and a decrease in the percent of advertised speed — but in all cases we were still above 100 percent of the advertised speed.
N
So we had that extra buffer of capacity, as one of the earlier presentations said, and it ended up getting consumed. We ran close to what we thought was the danger zone and then very quickly added capacity back, to recreate that buffer, if you will. So we felt pretty good about that, and it correlates with a lot of data sources, like RIPE Atlas and the latency changes in their data, and so on.
N
So it seemed to work out well, and we felt lucky that we had that extra provisioned capacity. It certainly makes my job, and our team's job, easier with our finance folks, who have always asked: why do we maintain all this extra capacity that's not being used? Well, it just got used very quickly, and that's why we have it. So it was interesting.
B
So did you mainly measure throughput and delay, or did you also try to look at more application metrics, which are closer to application performance?
N
So our tests did not look at application-specific metrics. They looked at upstream and downstream throughput, latency by itself, and then latency under load — and I'll just give a quick preview on latency under load.
N
It's a thing, and it was a big thing in the pandemic, both on the LAN side — which would primarily be the Wi-Fi LAN — and then on the upstream network link. And we actually had, just through an accident of where software releases ended up, one particular device model made by two different manufacturers, with different chipsets in them, so slightly different software.
N
On one of these we had basically an AQM turned on, and on the other it wasn't quite ready yet. So for a window of time we had the exact same make and model — slightly different chipsets and manufacturers — one with AQM, one without AQM, with very interesting differences in latency under load. That's in the process of getting written up now to share more information, but it's a big deal, and I think there's a lot more to be done to study the Wi-Fi LAN side of things.
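The latency-under-load metric being compared above can be sketched as a simple difference of medians between idle and saturated RTT samples. This is only an illustration of the metric, not Comcast's test implementation, and all sample values are made up.

```python
from statistics import median

def latency_under_load_delta(idle_rtts, loaded_rtts):
    """Median extra latency (ms) when the link is saturated: the
    'latency under load' metric. Comparing this delta between two
    otherwise identical gateways (one with AQM, one without) is the
    kind of A/B contrast described in the talk."""
    return median(loaded_rtts) - median(idle_rtts)

# Fabricated millisecond samples, purely illustrative
no_aqm = latency_under_load_delta([12, 11, 13], [250, 300, 280])
with_aqm = latency_under_load_delta([12, 11, 13], [30, 35, 28])
```

A large delta on the no-AQM device and a small one on the AQM device is the pattern of interest; the absolute numbers here carry no meaning.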
D
Hi Jason, I'm curious. I know a lot of people upgraded their speeds, and I actually also know several people who entirely switched technologies as a result of cable having much higher speeds. I'm curious what you see: how much of this traffic was just organic traffic growth, and how much of it was additional subscriber base?
N
I mean, I haven't looked too specifically, but off the cuff I would say most of it is from existing subscribers, just because the subscriber base is so large — it was something like 24 or 25 million homes.
N
Since before the pandemic, we have added a significant number of new customers, particularly with what's called our Internet Essentials tier — 25 Mbps down, 3 Mbps up, for 9.99 a month, for low-income folks — and that's added a huge number of people. But even within the existing subscriber base, lots of people are upgrading because they're all working from home, much like me: I went to a gig.
B
Maybe let's do that? Okay, and next we have David Clark, and we will look a little bit deeper into congestion.
P
Hi there, I'll try to be quick on this because I understand we're running late. This is a different perspective on some of the same stuff that Jason and Nick were talking about. Go to the next slide. Back around 2016, CAIDA at UCSD and MIT started measuring congestion on links, because at the time there were all these disputes between the big ISPs and the big content providers, and there was a lot of congestion on the transit links. And note this is outside measurement.
P
Jason's talking about data he has; we can't get the data from the routers, we have to probe. What we're doing here — I'll very quickly try to explain the background work — is basically, every five minutes, we probe a link: we send a TTL-expiring packet to the near side and the far side, and what we look for is episodes of elevated latency, which tells us there's queuing on the link. So it's an indirect measure of congestion, and we're probing every link we can identify. The data I'm going to show you here is from Comcast.
P
So we can look at this data and try to decide whether we've seen anything interesting when the pandemic started, because we've been collecting this data for several years. Go to the next slide. This is a picture from Comcast, and notice, just for fun, that it goes all the way back to 2016. What we're doing is we look at every link we can identify — we don't necessarily find all the links, but everything we can identify. In this case—
P
—it's Comcast peering with major content providers, cloud providers, transit providers; I'm not plotting their customers here. And I took an arbitrary measure, which is: is the link congested for more than half an hour a day? I just add up the total number of links that are congested.
P
But what you can see is that in the first few months of 2020 more links are congested, and then it sort of goes away again, and then maybe there's something interesting right at the very end there; we'll go look at that now. This is, of course, just looking at every link, but we can of course identify the autonomous system.
P
What I've plotted here is the interconnection partners that contributed to congestion, and the first thing we need to get out of the way: if you look on the right side, there's this huge lump. Let me explain what I'm plotting here. I've organized the data into two-week buckets, and what I count up for each interconnected party is the number of congested link-days there are in two weeks.
P
So basically, what you're seeing here is that there were two or three links from an Akamai server into Comcast that were congested — and in fact we did confirm that — and you notice it went up and then it went down. But it actually doesn't surprise me; there might be technical reasons why two or three links couldn't be upgraded.
P
Obviously, since they have 40 or 50 links, there's not a business issue here. If you look around March 2020, you see various things spike up — you see Cloudflare spike up, and you see Telenet — and then they go down again, which is exactly what we saw from other data. Now, the important thing is that Jason can tell you exactly about link capacity.
P
What we find here is actual congestion, and when I look at this picture — and I look off into September, October, up to the beginning of November — you see little spikes there, but in general, what I think you see is exactly what people are saying: bits of congestion occurred here and there, and people put in extra capacity. Just for fun, look at the next picture. I won't dwell on this — these pictures are all in the paper we sent in. Yeah, go ahead, skip over the discussion.
P
I already said that. And again, I went back to 2016 just so you could see who was congested back in the period when there were all these disputes, and what you see is that back then the congestion was on YouTube links, Tata links, and TT links. So the congestion moves from place to place depending on what the business issues are, more than the technical issues. There are graphs for the other major ISPs in the paper we sent in; in general, the story is about the same.
P
You see some congestion occurring in the March–April time frame and then it goes away, and in general what people are saying is: yeah, we put in extra capacity. So — go ahead, next slide — I think that's all I had. If you want to see some more pictures, go look at the paper, but as I said, for the U.S. they more or less look the same. So I'm done.
B
Next, we have Ricky, looking a little bit closer at the internet as measured from the cloud.
Q
Yep, hi everyone, I'm Ricky from CAIDA. In this project, instead of measuring performance from the edge, we do it the other way around: we measure performance from the cloud to access networks. Next slide, please.
Q
So in this project — which is one of the NSF-funded projects, specifically for setting up measurements of the impact of COVID — we actually built a system called CLASP to set up many VMs, mainly in U.S. regions, to perform frequent measurements to a number of speed test servers.
Q
We started our measurements in May, so we are kind of measuring the aftermath of the pandemic's effect: how performance evolved while people started moving again, and also during some re-lockdown events that happened during the pandemic. So here we are going to show some of the preliminary results that we have right now. Next slide, please.
Q
So actually we find that the cloud is not as good as we thought. We definitely see evidence of congestion, not only to small networks, but we can also find congestion events between cloud regions and big access networks. For Cox, we can definitely see a very significant throughput fluctuation — a kind of diurnal pattern recurring every day, where the throughput drops in a certain period of time.
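Surfacing the recurring daily dips just described can be sketched as grouping speed-test samples by hour of day and flagging hours whose median throughput falls well below the best hour. This is a crude illustration under assumed data, not the CLASP analysis itself.

```python
from statistics import median

def diurnal_dip_hours(tests, frac=0.5):
    """tests: iterable of (hour_of_day, throughput_mbps) samples.
    Return hours whose median throughput is below `frac` of the best
    hourly median -- a rough way to flag recurring daily dips."""
    by_hour = {}
    for hour, mbps in tests:
        by_hour.setdefault(hour, []).append(mbps)
    medians = {h: median(v) for h, v in by_hour.items()}
    peak = max(medians.values())
    return sorted(h for h, m in medians.items() if m < frac * peak)

# Fabricated samples: fast off-peak hours, a slow evening
tests = [(3, 900), (3, 880), (12, 850), (19, 300), (19, 320), (20, 280)]
dips = diurnal_dip_hours(tests)
```

With real data one would want many days of samples per hour before trusting the hourly medians; two samples per hour here are only for illustration.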
Q
This is the download direction, which means it is from the ISP to the cloud, and it is probably caused by the impact of video conferencing applications — like Google Hangouts — in this traffic direction. So next one, please.
Q
For small ISPs, sometimes it can be even worse. These two plots show the upload direction from the AWS Northern California region to two small regional ISPs: one serving the northern Colorado area and the other serving the north Texas area. Those service areas are more suburban, and the upload direction — from the cloud to the ISP — mainly carries things like video content and download activities.
Q
We can definitely see bandwidth degradation — users are suffering during the evening hours — and for the other one we actually see persistently low throughput, except during off-peak hours during the week.
Q
So, apart from daily variation, we also have a high-level view of the data across months. We pick the first week of each month to look at the median download throughput between certain cloud regions and ISPs, and we can find a downtrend between certain cloud regions and certain access ISPs, like the left—
Q
—one, which shows Spectrum to the Azure Central US region. We can see that the throughput from six or seven Spectrum servers actually decreases over time, and we can also see a slight decrease from Comcast servers to the AWS Ohio region over three months.
Q
Apart from access networks, we also took a look at educational networks, because everyone moved from in-person learning to distance learning, and we also observe a downtrend in certain cloud regions. We covered three different universities in one of the GCP regions on the East Coast. Although we saw some evidence that there are fewer users on campus, we find that their throughput showed a downtrend during these three months.
Q
We tried to investigate whether there was a route change, or whether they may have used more distance-learning tools that could cause the degradation. Next one.
Q
So, to summarize: we used throughput measurements — speed test measurements — running from the cloud to different ISPs, and we find that the cloud is actually still suffering some congestion events. We showed some evidence of—
Q
—end-to-end degradation: not just the usual last-mile link, but degradation in the performance between two big parties in the network. And we observed some downtrending throughput in the post-lockdown period. We have data until October, and we will keep working on the analysis to see whether the downtrend continues or comes back up later, and we will make the data publicly available — we are still working on it now and will put it online later. So, our next step—
B
Okay, yeah — but definitely, I think, one piece of the puzzle to look at. Thank you. Are there any further—
B
—questions? Okay. We have two more brief presentations; we now go into the part where we talk about last-mile congestion. We get some data from Japan, and then we also get some more general data from RIPE Atlas, also looking at latency. So next we have Kenjiro.
S
Until last year we had steady growth, and in this year we have a surge. We had the first wave in February, then in the April–May time frame we had a state of emergency, and then we had a second wave in the summer.
S
So, but if we look at the overall trend, it seems not that far from the original growth curve.
S
So what we observed, as Oliver said in the first talk, is that the weekday traffic became similar to weekend traffic under the state of emergency.
S
The left plot is a weekday and the right plot is a weekend; the blue line is before, the red line is under the state of emergency, and the green line is after the state of emergency. For weekday traffic during the state of emergency, the daytime traffic increased a lot, and after the state of emergency it goes down again. During the state of emergency the traffic looked very similar to the weekend, as people stayed home. Next, please.
S
If we look at it on a semilog scale, the download volume, for example, grew over the last 10 years by two orders of magnitude.
S
So 10 to the ninth is one gigabyte per day, and if we look at the mode, people download three gigabytes per day.
S
So, in summary, the overall macro-level impact on broadband traffic is not so big. The traffic increased during work hours, but it was still under capacity, and after the state of emergency was lifted it is slowly coming back to the original growth curve. We have many reports talking about minor issues, mostly with legacy infrastructure and equipment.
S
In Japan the broadband boom started 20 years ago, so we do have legacy equipment like PPPoE — which Romain will talk about in the next talk — and VDSL in apartments and old Wi-Fi at home. We had some good luck because we didn't have any real breakdown in Japan — are you still hearing me? Okay — and we are doing okay so far. Also, the Olympic Games were scheduled for this past summer, so many infrastructure capacity upgrades were scheduled for early this year.
B
Yeah — and here I would suggest we just go on to the next talk immediately and dive a little bit deeper into the latency measurements. Yes, and now I need to find that one... this one.
T
Okay — and that's great, because Kenjiro brought all the background. So in this work we're looking at last-mile latency — you can go to the next slide — and what we've done is we took RIPE Atlas traceroutes and we tried to estimate the queuing delay for the last mile in the traceroute.
T
We are not really interested in the delay for just one probe, so we try to aggregate those results per AS, hoping to see if a common pattern emerges, and this is what is shown in these figures. The x-axis is the time of day — all UTC time, because those networks are in different countries — and the y-axis is the aggregated queuing delay. The colors of the curves show the different measurement periods: the blue, orange, and green are in 2019, before COVID-19.
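The per-AS aggregation just described — estimate each probe's queuing delay against its own propagation baseline, then take the median across probes — can be sketched as follows. This is a simplified illustration of the idea under assumed inputs, not the authors' actual pipeline.

```python
from statistics import median

def hourly_queuing_delay(probe_rtts):
    """probe_rtts: {probe_id: {hour_of_day: [last-mile RTT samples, ms]}}.
    Per probe, estimate queuing delay as RTT minus that probe's minimum
    observed RTT (its propagation baseline); then, per hour, take the
    median across probes as the per-AS aggregate.
    Returns {hour: median queuing delay}."""
    per_probe = {}
    for pid, hours in probe_rtts.items():
        base = min(r for samples in hours.values() for r in samples)
        per_probe[pid] = {h: median(s) - base for h, s in hours.items()}
    all_hours = {h for d in per_probe.values() for h in d}
    return {
        h: median(d[h] for d in per_probe.values() if h in d)
        for h in sorted(all_hours)
    }

# Fabricated samples: two probes, a quiet hour (02:00) and a busy one (20:00)
probes = {
    "p1": {2: [10, 10], 20: [18, 22]},
    "p2": {2: [30, 31], 20: [44, 46]},
}
agg = hourly_queuing_delay(probes)
```

Using each probe's own minimum as the baseline is what lets probes with very different access latencies be combined into one per-AS curve, and using the median means the evening peaks discussed next reflect at least half the probes in the network.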
T
What all these networks have in common is that we see increased delay during the day; all of these figures have a peak in the evening. The top-left one is in Germany, where UTC is close to local time; the bottom ones are in North America, so the peak is on the left of the figure; and the top-right—
T
One
is
in
japan,
so
they
usually
all
have
like
a
peak
in
the
evening,
but
during
covenanting
we
can
see
the
the
latency
increase
during
that
time.
T
I
would
say
the
two
bottom
one
just
have
like
a
slight
increase:
the
the
most
significant
one
here
is
in
japan,
so
the
top
right,
one
where
we
seen
a
very
severe
last
mile
congestion
and
in
in
our
paper.
So
we
just
restate
what
angelo
said
like
there
is
some
problem
with
the
legacy
infrastructure.
So
using
those
results,
we
could
point
out
that
the
problem
we
are
coming
from
there
and
so
next
slide
in
in
our
paper.
We
try
to
correlate
those
results,
those
latency
measure
with
throughput.
T
This matches what Kenjiro said: in Japan there was a state of emergency, so the blue one is before, the orange one is during the state of emergency, and the green one is after the first wave.
T
So
we
can
see
again
in
the
throughput
that
we
had
throughput
drop
during
that
time
compared
to
what
we
had
at
the
beginning
of
the
year
and
like
kanjiro
said
again,
we
see
that
after
the
first
wave
we
have
a
bit
higher
shroop,
and
our
guess
is
that
this
is
thanks
to
the
upgrade
that
was
planned
for
the
tokyo
olympic
and
next
slide.
So
that's
that's
all
the
the
result
I
have
here
so
we
found
overall,
the
the
number
of
of
congested
network
we
see
using
atlas
is,
in
fact
quite
small.
T
We have other results I haven't shown here, but using M-Lab and the CDN data we found that the mobile network is unaffected. As for those two URLs: the first one is the paper with some more results, and the second one is actually completely unrelated—
T
It's some measurements we've done where we look a bit further into the traceroutes: we are not looking only at the last mile, but at the delay between two networks. So it's a bit more similar to what David Clark presented before. That's a dashboard we've done during the RIPE hackathon, but, well, it brings something else for our discussion, if people want to discuss that. Thank you, that's all I have.
B
Thanks a lot. We can check if we have any questions. There was one question about the unit of the y-axis on the previous slide.
T
Oh yeah, that slide. Those values are very small. In the paper we show that, because it's aggregated across all the probes, it's the median — I didn't explain it very well, but it's the median value we observe across all the probes in that network. That means that about 50 percent of the probes have more than that value.
B
So, basically, that's the end of the presentations, and we only have a little time left, but at least I would like to thank everybody who presented, because even though I had read the papers, seeing everything together and putting the data somehow in context was, I think, very helpful. My big takeaway really is that it's not enough to just look at one angle; we need all the different data to actually understand what's going on.
B
The other point is that I think we realized during the session that we actually do need more data, especially about quality of service or quality of experience — for specific users, for specific applications, for specific service providers; that's all different.
B
There might be regional differences that we didn't have enough data on. And with that, I will just quickly turn back to my own slide set, and we can stare at this slide for one more minute. I think we already detected that we potentially need more measurements, and we need some more digging, but I want to open the question to everybody: where do you see problems? Where should we look deeper? What else should we measure? What's missing?
B
What's actually normal? Do we understand well what has changed? Do we understand well what the requirements from society are, what the needed quality of service and quality of experience is, and what the differences are? And then also, maybe sliding a little bit into the session tomorrow, we should think about what we want to discuss tomorrow: did we actually manage this well, or was it just luck? So the floor is open.
B
E
That's the first point, yeah. So I think one of the things that struck me from looking at all of these talks, which were great (thank you to all the presenters, by the way), was that there were a lot of common bits, right? Like: hey, the internet kind of worked, at least.
E
If you look at the core, and a lot of expected bits. I guess if you had looked at the Japanese home broadband infrastructure two years ago and said, "by the way, we're going to need to triple the load on this network in three weeks"... I think, actually, if you look at the investments that were made for the Olympics, you can see that people knew that that's where the problem was.
E
The one thing that kind of jumped out at me from some of the talks about the capacity adds is that it was actually possible to add capacity. I mean, Comcast had one that went up by a factor of 40 during the pandemic, but it was possible to actually do that.
E
I'd like to... so, I think Jason made a comment that, you know, the bureaucracy kind of got out of the way, which is one thing. But it'd be interesting to look and see if there are any remaining gaps in how fast it is to roll out the physical infrastructure. It seems like at the interconnection points there isn't any remaining barrier to physical infrastructure, but that might be something we can't actually answer with measurement: what would the next big thing we'd have to do in terms of capacity expansion be, and is that a technical blocker or a non-technical blocker?
E
Right, because, I mean, you can see that over time the previous stuff was clearly business-limited, right? Because if you can do it twice as fast, or 39 times as fast as you did before, well, then you just didn't need it before. But yeah, I'd like to dig a little bit more into that again. That's probably also a tomorrow question, but thanks, yeah.
B
E
B
Definitely tomorrow. We had somebody else in the queue, but now I can't see it anymore.
F
U
First, I was wondering how well calibrated the data is about our background organic growth. Back when I was really active in the IETF, it was typically a 13-month doubling time, and now I'm hearing 30% per year. How well established is that? How stable is that? How well known is it?
U
How well documented is it? And one of the approaches that I know was used in the past, when there were big surges, was taking the point of view that an unexpected load is sort of equivalent to a failure: you have to reserve at least double capacity in order to be able to deal with failures, and being allowed to eat into that reserve to deal with crises helps a lot.
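The two growth figures quoted here are easy to compare directly: a doubling time of T months corresponds to an annual growth factor of 2^(12/T), and an annual growth rate r corresponds to a doubling time of log 2 / log(1 + r). A quick sketch (the 13-month and 30%-per-year figures come from the question; the conversion itself is standard arithmetic):

```python
import math

def annual_growth_from_doubling(months: float) -> float:
    """Annual growth rate implied by a doubling time given in months."""
    return 2 ** (12 / months) - 1

def doubling_months_from_annual(rate: float) -> float:
    """Doubling time in months implied by an annual growth rate."""
    return 12 * math.log(2) / math.log(1 + rate)

# The 13-month doubling time quoted for the early-IETF era:
print(f"{annual_growth_from_doubling(13):.0%} per year")        # 90% per year

# The ~30% per year quoted for today:
print(f"{doubling_months_from_annual(0.30):.1f} months to double")  # 31.7 months
```

So the quoted shift is from roughly 90% annual growth to about a 32-month doubling time, which makes the question about how stable and well documented that slowdown is quite concrete.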
I
Yes, we're trying to answer a couple of the questions, or propose some partial answers at least. So I think it is a conclusion that we need more measurements, and we need to look at things holistically, not just from one particular angle. I think that's exactly right, and some areas where I don't think we have enough are things like quality of experience, and also application-based measurements.
I
How much do we know about, I don't know, conferencing platforms and such? Everything that we saw today was really great, and I'm amazed that we had those numbers, but it would be even better to have more. By the way, in my paper I had some material on quality of experience: Ericsson had asked people around the world how they felt about the situation and so on, so I'll post a link to that. But that's a sort of feeble attempt, a simple thing.
I
I think we need much, much more. The other thing is that you're asking about luck or resilience; it's kind of like, what is your expectation, what do you want to be able to handle? Can we do a pandemic plus, I don't know, a major heat wave that takes down 10% of the current infrastructure?
I
That, I think, is the question mark. But one of the things that the Ericsson research actually did observe is that, at least for the moment, consumers are really interested in the reliability and resilience of the networking that they use, enjoy, or purchase. So at the moment everybody is interested in: okay, are you going to be able to deliver this thing for me going forward? Hopefully that sentiment can stay for a longer period of.
B
Time. Yes, totally agree. Let's take Oliver Hohlfeld's question or comment.
G
So I actually want to make a comment that goes a little bit in Brian's direction. To me the question is: at what level of granularity are we talking? Most of the results, including the results that we have shared, are at the granularity of taking the ISP or the IXP as a whole, and this picture can completely change if you look into individual users, individual peering links, and so on. And of course we have seen these cases where, you know, a smaller...
G
Smaller enterprises were not really planning for this pandemic to happen, and then, if they sent all of their employees to the home office, of course the small pipe that they had purchased went from a medium utilization to a really high utilization, so this can really have an effect on them.
G
But if you do a statistical analysis on the operator or the network as a whole, with a large set of peering links, then we see a kind of different perspective for the majority of the cases. So I guess that's one thing to discuss in the Wednesday session: the level of granularity, and, you know, how deep we should dig into the individual users that really suffered and the individual links that went to full utilization.
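The granularity point can be made concrete: a network-wide statistic over many peering links can look healthy even while one individual link is saturated. A toy illustration (link names and utilization values are invented):

```python
# Hypothetical utilization (fraction of capacity) of an operator's
# peering links; numbers are made up for illustration.
link_utilization = {
    "peer-A": 0.42, "peer-B": 0.55, "peer-C": 0.48,
    "peer-D": 0.51, "peer-E": 0.99,  # one link pushed to saturation
}

values = sorted(link_utilization.values())
mean = sum(values) / len(values)
median = values[len(values) // 2]

# Network-as-a-whole statistics look moderate...
print(f"mean={mean:.2f} median={median:.2f}")   # mean=0.59 median=0.51

# ...while a per-link view reveals the saturated pipe.
saturated = [name for name, u in link_utilization.items() if u > 0.9]
print(f"saturated links: {saturated}")          # ['peer-E']
```

Which view is the right one depends on the question: the aggregate answers "did the network hold up", while the per-link view answers "who suffered".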
B
Yeah, definitely. I mean, that was also one thing: actually understanding what the problems are, understanding all the details, understanding who suffers. But then the other thing is actually the big picture, right? Is it the architecture? Is it the technology where the limitations are, or is it a business question? I think that's also a good point where we hopefully come to some insights at the end of the workshop. I still have Colin in the queue, yeah.
C
I don't want to comment on this; I just want to tell people: if you would like a short lightning-talk slot, seven minutes or less, on Wednesday to talk about one of the operational issues, send slides or a request either to me (fluffy at cisco.com; I'll put my email in the chat) or to the program committee list, and let's see if we can have some more time for discussion of this and that stuff on Wednesday.
F
B
Think about what we should measure, what we should do, what we should care about, so we don't all forget about it immediately, and we can look at that on Wednesday again. Yeah, and with that we're at the end of this session. Thanks everybody for joining, thanks everybody for presenting, thanks everybody for comments. We didn't have as much time for discussions as I was hoping for, but that was also a little bit expected, to be honest. I hope you all enjoyed the session.