From YouTube: IETF100-MAPRG-20171113-1550
Description
MAPRG meeting session at IETF100
2017/11/13 1550
https://datatracker.ietf.org/meeting/100/proceedings/
So hello everybody, I think the session has started already, yeah. So if you could concentrate a little bit here and stop chatting, that would be really nice. I'm a little bit disappointed that there are only so few people, but they might also be in the break, and we have to start. So welcome to the MAPRG session in Singapore. This time we only have a one-and-a-half-hour slot; that's mainly because Dave and I were really lazy.
C
We have a mailing list, of course, that you can subscribe to, which doesn't have a lot of traffic, but you are all free to actually generate some traffic there. If you want to announce your fancy measurement work, or you have some questions about measurements, please use the mailing list.
C
We have some other logistics here about remote participation and the slides. Of course, this link doesn't help you if you don't have the slides here, but otherwise you can click on the link. And that's our agenda for today. So, as I said, we only have a one-and-a-half-hour slot this time, and Dave and I actually didn't put a lot of effort into trying to get various people from the research community into the room.
C
So we are extremely happy that we have a really nice agenda anyway, and that there were many people who already knew about MAPRG and came to us wanting to present their work — thank you very much. Next time, in London, we will definitely ask for a longer slot again, and we will spread out into the world and try to get interesting things we've seen somewhere into MAPRG, so stay tuned. But anyway, if you have something that you would like to present in London, you can also let us know about it at any time.
C
That way we can actually take you into account early on, because I guess the agenda in London will actually be crowded. Also, two announcements: as we have had in earlier sessions, we also give people five-minute slots if they just want to announce their measurement work that is somehow ongoing, or if they have requests to the community — that's our first talk. And another thing that we would like to keep up is inviting people to give updates on previous talks they gave.
F
Thank you, and good afternoon. I'm from the Finnish regulator — okay, closer to the mic — I'm from the Finnish regulator, and it's nice to be here. Next slide. I'm going to talk to you about the European net neutrality measurement tool project. It's based on the EU net neutrality regulation; I'm not actually going to talk about that too much here. But what does it mean?
F
Basically, for the end users it means that they are able to use and provide applications, and that the ISPs should basically treat all traffic equally, with a few exceptions. And for us, the regulators, we need to, of course, monitor and supervise that those obligations are met. Well, if you're interested in more about that, there's another presentation you can check, but I'm actually going to the next slide, about the draft we have available, which also describes the basic principles of the regulation.
F
The goal is to give sufficient detail for developing the actual measurement metrics. It describes the different use cases, basically for the quality-of-service performance measurements, but also for detecting the traffic management practices that may in practice affect the quality of service or the availability of certain applications — so, for example, throttling or blocking. And it's really important also to be able to detect whether it's the end-user environment that is actually making the impact, or whether it is the ISP whom we can blame.
F
So, basically, again, the draft is available. Next slide, please. Now to the main topic: the European regulators have decided to develop a net neutrality measurement tool. Well, we are just starting on the actual tender documents, so we don't have anything yet — I mean, any code available — but basically the idea is to have an open source tool that's also available for the whole industry. And basically we are now targeting a few mandatory measurements, like quality of service: speed and delay.
F
Maybe some net neutrality measurements, like port blocking, and, of course, it's a tender, so we are hoping to get many additional measurements to be available in the first place as well. So basically we're targeting a tool that's available for the end users: they can run it via browser or app and detect the quality or, how to say, the neutrality of their Internet access service — so whether their own ISP is doing something nasty.
F
So basically, the tender is going to be launched early next year, the development process is basically next year, and we are hoping to get the tool available early in 2019. And this is something that I would expect the industry would also be happy to contribute to — I mean, that's our goal: to have you check what we are doing, check that we are doing things right — and hopefully the tool is also useful for other people than the European regulators.
F
So, if we can go to the next slide — also some background information, since this presentation doesn't give you that much detail. There is the BEREC net neutrality regulatory assessment methodology, which basically lays out our ideas about how to do the measurements; that's the basis for some of the measurements we are now building.
F
It basically provides our current idea of how to measure speed. That's a bit different from what we're seeing in the industry, because it's really based on the IP packet payload, basically including also the headers — but we are talking about IP-level speed, not anything lower. And there are also some ideas on how to measure net neutrality; we don't have that many, how to say, standardized measurements available in the market, of course.
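To make that metric concrete, here is a minimal sketch of what IP-level speed means in practice: throughput computed over whole IP packets, headers included, rather than over link-layer frames or the application payload alone. This is an illustration under made-up numbers, not the project's specification.

```python
# Toy illustration of an IP-level speed metric: throughput is computed over
# the size of IP packets (IP header included), not the link layer or the
# application payload alone. The packet trace below is hypothetical.

def ip_level_mbps(packets):
    """packets: list of (timestamp_seconds, ip_total_length_bytes)."""
    if len(packets) < 2:
        raise ValueError("need at least two packets")
    first, last = packets[0][0], packets[-1][0]
    total_bytes = sum(length for _, length in packets)
    return (total_bytes * 8) / (last - first) / 1e6

# Example: five 1500-byte IP packets over 10 ms -> 6.00 Mbit/s.
sample = [(0.000, 1500), (0.002, 1500), (0.004, 1500),
          (0.007, 1500), (0.010, 1500)]
print(f"{ip_level_mbps(sample):.2f} Mbit/s")
```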
F
These are use cases for our tool. Then, if you are interested in the tool development, we have another document regarding how we have specified the tool. It gives you more of an idea of what we are aiming for. Basically, it should be available on mobile and fixed networks, it should enable the end users to make their own measurements, and it should help the NRAs to assess the regulation and how it's implemented.
H
I'm Kyle Rose; I work at Akamai — not with Dave, typically, but I'm interested in his work, and he unfortunately is not able to make it, so I'm giving the presentation for him. You can see here that this is a continuing study of the evolution of the IPv6 internet, specifically looking at the use of the address space. If you want to know more about the methodology involved in this study, you can take a look at these two links. Next slide.
H
You can see that there are more active IPv6 addresses seen in a week than exist in the entire IPv4 address space. So it suggests that people are using the IPv6 address space in a much more liberal way, which is good, but it's also interesting, and it reveals that the v6 internet is being managed in a very different way. Next slide. It's also the case that there are more active /64 prefixes seen in a week than there are total IPv4 addresses seen in a week. Next slide.
H
You see that there are 824 ASNs, and this graph and the next graph are limited to only that subset of ASNs. That means that of the roughly 6500 ASNs that have v6 connectivity, only 824 of them actually have more than 32 simultaneous clients. Next slide. So, of the 824, about 40% had more than 1000 /64s per day.
H
That's what the solid blue line indicates. Next slide. Now, if we zoom into the upper right of the previous chart — sorry, go back for one second — if you zoom in to the upper right, we're going to use a CCDF to look at what's going on with the networks that have many, many v6 addresses. So what this chart essentially says is: look down in the lower right, where it's getting toward the right side of the graph.
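For readers unfamiliar with the term: a CCDF plots, for each value x, the fraction of items with a value of at least x. A minimal sketch of this view — the per-ASN counts below are made up for illustration:

```python
# Minimal sketch of the CCDF view used on this slide: for each per-ASN count
# of active IPv6 client addresses, the fraction of ASNs with at least that
# many. The counts are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

per_asn_counts = np.array([10, 50, 200, 1_000, 30_000, 2_000_000, 15_000_000])

x = np.sort(per_asn_counts)
# Fraction of ASNs whose count is >= x: 1, (n-1)/n, ..., 1/n.
ccdf = 1.0 - np.arange(len(x)) / len(x)

plt.loglog(x, ccdf, drawstyle="steps-post")
plt.xlabel("active IPv6 addresses per ASN")
plt.ylabel("P(X >= x)")
plt.show()
```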
H
Those are the networks that have at least 10 million client addresses. Next slide. So about one percent of the ASNs in this study — about ten ASNs — have more than 10 million daily total /64 prefixes, or addresses. That says there's a ton of concentration in the v6 address space at the moment: many networks have v6 connectivity, but most of the clients are actually in a small proportion of those networks. Next slide. So, per ASN, the maximum lower-bound estimate of simultaneously assigned /64 prefixes is about 20 million.
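The talk doesn't spell out how that lower bound is computed; one common way to derive "simultaneously assigned" from observation data is a sweep over per-prefix first-seen/last-seen intervals, as in this assumed sketch (not the study's actual code):

```python
# Assumed sketch: peak number of prefixes whose observation windows overlap,
# computed with a sweep over (first_seen, last_seen) intervals per /64.

def max_simultaneous(intervals):
    """intervals: list of (first_seen, last_seen) per prefix."""
    events = []
    for start, end in intervals:
        events.append((start, 1))    # prefix becomes active
        events.append((end, -1))     # prefix stops being seen
    active = peak = 0
    for _, delta in sorted(events):  # ends sort before coincident starts
        active += delta
        peak = max(peak, active)
    return peak

print(max_simultaneous([(0, 10), (5, 15), (12, 20)]))  # -> 2
```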
H
It could be higher than this, but this has less bias than looking directly at observed addresses, because with observed addresses there are potentially many more of them, as clients can arbitrarily decide when to add a new privacy address. Next slide. So some of the key takeaways from this are that, despite the steady growth in the number of IPv6-capable networks, many seem not to have v6-capable hosts, and it seems like there's a very small proportion where most of the v6 clients are concentrated.
H
So a small number of networks continue to dominate client counts: the top 1% of networks have more than 50% of the clients, by the lower-bound estimates of simultaneous clients. And while most IPv6 clients use temporary privacy addresses, there are millions of v6 clients in some networks that are not using privacy addresses. This is evident from the study, but not from any of the previous slides — if you go forward one... keep going... yes, stop there.
H
If you look at this slide, you can see the big purple block there is Reliance Jio — note how large that is. This is the observed number of addresses seen per day; this includes privacy addresses. Next slide. If you factor out the privacy addresses and look only at simultaneous client IPs, that block is much smaller, which suggests that they're issuing privacy addresses to their clients where some of these other networks are not; those networks become more prominent as they become a larger share of the total number of simultaneous clients.
I
Right, but Reliance Jio is a mobile network; it doesn't use DHCP. I am not trying to quibble with your explanation, but I wonder if there's something in the data that you're not seeing that explains why — because on a wireline network you'd really expect to see these privacy extensions. So if you took Comcast, for example, you'd expect to see a big block there, particularly on wireline or Wi-Fi networks, where hosts join and leave frequently; sometimes they might rev their privacy addresses multiple times. But on a mobile network, at least in the implementations I'm aware of, they...
E
Wes Hardaker, USC. Can you speak to why you chose /64 as your aggregation boundary? Because I think the current recommendation — and my data might be out of date, because I don't follow v6 closely enough — but isn't a /56 another reasonable choice for studying where network boundaries are? Because you may be aggregating multiple /64s that are actually, technically, in the same network, since the /56 is the recommended assignment to networks. Isn't that true? I'm not...
H
So hi, I'm Brian Trammell. I'm talking today about a paper that I wrote with Mark Allman and Rob Beverly, which appeared in ACM CCR this April. This is an adaptation of a presentation that Rob gave at SIGCOMM, focusing on slightly different primitives. So I'm assuming everyone here cares a little bit about network measurement, or they just want to be sitting in a cold room checking their email.
H
So this is fundamental to operations, research, protocol design, policy development — I mean, you've got to measure the network in order to know how it works — but there's basically no support in the stack. There's one explicit measurement function in the IP stack: it's ping, ICMP echo request and echo reply. Otherwise, what we do is leverage unintended features, like traceroute, or we use brittle hacks, like looking at passive TCP loss and RTT. So you have to figure out...
H
...you have to figure out what congestion control you're using in order to be able to figure out exactly how much loss you're seeing, and so on and so forth. And then there's inference: the measurement conference that I just came from, IMC, had a whole track of, you know, cool little inferences you can use to take a data series that's unrelated to the thing we care about and try to tease out...
H
...something about the performance of the network, or something about the topology of the network, and so on and so forth. So this leads to a question. The result is that operationally relevant questions and research-relevant questions are hard to answer, right? That's why we have a research group here — because if it were easy, there would be no reason for us all to be here. If you could just say "look at my wonderful results, I ran a program and they dropped out the bottom" — yeah, you can do that...
H
...we don't need to have presentations about it. So: how do you do routing? That's a solved problem — BGP, right. But what's the capacity utilization of a link? How do networks interconnect? What AS operates a given router? All of these are things where there's no explicit functionality in the network for finding them out. Even things that we think are simple are hard. Delay between two points, right? Like, I'm going to ping this and I'm going to see what the RTT is...
H
...I'm going to divide it by two — that obviously works, that's the one-way delay! No, not at all: there's path delay, there's host delay, there's asymmetry, there's per-protocol traffic differentiation — so your ICMP might not even be going over the same path that the actual traffic is going over. We'd like to know what the endpoints are on a communication; as everybody knows, that's actually pretty easy, because the IETF said that there are no NATs in the network.
H
In what order did packets arrive at a destination? Were they reordered, were they modified, were they mangled? What path did they take? How were they queued? Once you send a packet — you put a destination address on it and it appears at the other side, or it doesn't — it's really difficult to figure out how it got there. So we looked at this situation, got depressed about it, and said: well, what would happen if we rethought the internet protocol stack from scratch, with measurability as a first-class component?
H
What if we said: well, we have the network layer, which is hop-by-hop, and the transport layer, which is end-to-end, and each of these things had facilities in it in order to be able to answer these questions? What if answering these questions were as important as delivering packets? So the approach that we took here was to define some first principles for measurability, and then to imagine that packets could carry this measurement information — what would we put in them?
H
So the reason for explicitness — other than, you know, "explicit is better than implicit", from the Zen of Python — is that it reduces the ambiguity of the measurement. By using IP TTLs for traceroute, you're basically trying to do something hop-by-hop that isn't necessarily hop-by-hop; you might not be seeing all the hops, for example.
H
It also increases future-proofing, because if the feature is explicit — it's there for measurement — we're not going to change it for some other thing and have the measurements break; like, for example, stretch ACKs for TCP RTT. Measurement should be in-band: this is primarily to ensure that the measurement traffic is going to get treatment as close as possible to the real traffic that you're trying to measure. The consumer should bear the cost.
H
So if you want to put a lot of boxes on path whose job it is to do measurement — and, okay, well, now your throughput through those boxes goes down by 90% because you're measuring things — nobody's going to deploy them. But it's a lot easier to take that cost and shift it off to someone who's doing the later analysis. So, for the information that comes out of this, it's okay if it's a little bit inscrutable, as long as there is some algorithmic transform you can apply to it. The provider should retain control.
H
This ensures that the users — so the measurement provider here, in this case, is the entity that's putting the traffic on the wire, the traffic that has the measurable data in it — we'd like them to know how much of their traffic is measured; you'd like to be able to opt out of it, or opt into it. Measurement should be visible: if I can see the measurement and you can see the measurement, then everyone can see the measurement.
H
This increases transparency — the fact that there is measurement traffic on the wire and that it can be trusted. And measurement should be cooperative; this is just sort of a rephrasing of principles three and four, and basically tries to leverage the existing tussle between the middle and the end. So we have a set of candidate primitives — have a look at the paper for those; I'll be talking about timing, arrival and change detection, and queue delay. So what we're not saying is, hey...
H
...we should actually build all of these things into, you know, IP version — what are we up to now, seven? — or that we should build these things into TCP 2, or whatever. We're saying: this is an illustration of the principles; we're using these to test the principles and see if they make sense. So, for timing, we want to be able to figure out...
H
...what the round-trip time of a particular transport flow is. It turns out that the TCP timestamp option is almost right for this, but since it wasn't designed for passive measurement — it was only designed to give information to the sender — it doesn't expose anything about the delay. So if you have a timestamped packet that goes in one direction and an ACK comes back, you don't know how long it took to generate that ACK. So the way that we fix this is that we basically take a "time now" and a "time echo"...
H
These are basically constant-rate clock ticks, just like the TCP timestamp option, and then a "delta time", which is the ticks, in that same clock at the sender, since we saw the echo. So one thing to note here is that there's a resolution/overhead trade-off. With respect to TCP timestamps, you have to put them on every packet, because they're used for wraparound detection; for this, you could basically say "I'm going to put this on one in N packets" and eat the loss in resolution.
H
So this meets our second and fifth principles: it's in-band and visible. The fact that we have this delta here just for measurement meets our first principle of explicitness, and then the resolution/overhead trade-off is basically a way to achieve sender control. So, the way this works — here's where it goes fast — okay: I'll be the sender, you'll be the receiver. I send a packet with a timestamp on it; the receiver sends a packet back with this timestamp, and then that goes back...
H
...and you can see there were no delays, and I wait two ticks of my clock and I send the next thing, and then we keep going, and so on and so forth. And if you put an observer in the middle, they can basically look at this, and look at the rate of this interlock — the rate of the receiver's clock...
H
...compare that to a local clock — I mean, you have the clock-drift issues that you have anywhere you're trying to do network-based measurement — and you can essentially figure out what the rates, the inter-departure times, and the inter-arrival times are; you can get jitter out of this, and the delay information gives you the correction that you don't get with TCP timestamps.
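To make the observer's computation concrete, here is a minimal sketch; the (now, echo, delta) packet layout and the tick duration are assumptions for the example, not the paper's wire format. The observer's component RTT toward the far endpoint is the gap between seeing a timestamp and seeing its echo, minus the reported hold time.

```python
# Assumed packet layout: each packet carries (now, echo, delta), in sender
# clock ticks; `now` is the sender's clock, `echo` the last `now` seen from
# the peer, `delta` the ticks the sender held that value before echoing it.
TICK = 0.001  # assumed tick duration: 1 ms

class Observer:
    """On-path observer computing its component RTT toward each endpoint."""
    def __init__(self):
        self.seen = {}  # (direction, now value) -> local time we saw it

    def packet(self, direction, now, echo, delta, local_time):
        self.seen[(direction, now)] = local_time
        # If this packet echoes a timestamp we saw going the other way, the
        # far-side RTT component is the gap minus the peer's hold time.
        other = "B->A" if direction == "A->B" else "A->B"
        sent_at = self.seen.get((other, echo))
        if sent_at is not None:
            return local_time - sent_at - delta * TICK
        return None

obs = Observer()
obs.packet("A->B", now=100, echo=0, delta=0, local_time=0.010)
# B echoes A's timestamp 100 after holding it for 2 ticks (2 ms):
rtt_to_B = obs.packet("B->A", now=500, echo=100, delta=2, local_time=0.090)
print(f"observer<->B component RTT: {rtt_to_B * 1000:.1f} ms")  # 78.0 ms
```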
So, arrival information. In TCP this is basically done by looking at the series of sequence and acknowledgment numbers and attempting to model how retransmission works. A more explicit way to do this:
H
You maintain a running sum, and then you echo the running sum. So this meets the consumer-cost principle, because actually teasing this apart does require you to look at every packet and pull things apart in order to figure out what the order is. It's also explicit and visible, like the other one — we only hit three principles on that one.
H
So the way this works is: you start with a starting number and you have an increment series, and we can just go through these. In the general case it goes back and forth and back and forth, and you just echo the last one that you saw each time you send. For loss detection, let's go through this and say, okay, we lose that packet here; an observer — okay, yeah — so here you're going to basically echo back on the next packet...
H
Okay, no — so you're going to see that the increment in the middle is just missing. So the sender now also knows, and any observer on the left side of the sender over here is going to actually see, that that packet got lost — even though there's no indication anywhere in this that there was a loss: there's no flag on this that says "hey, this is a retransmit"; you're not looking at anything in the transport-level sequence numbers, and so on and so forth.
H
The observer on the other side essentially makes different inferences by looking at the series of increments. This also works for reordering, actually — we can go through these very quickly. So here the first packet actually got reordered and was received after the second one, and you can again see this in the sequence of echoes. So, yeah — well, study the numbers and do the math for yourself offline, because I have, what, like eight minutes left at this point. Yes, good.
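A toy simulation of the running-sum primitive just described — field names and mechanics are simplified from the paper's description, so this is a sketch, not reference code. An observer near the sender detects a loss downstream of itself when the echoed sum falls short of the increments it saw go by:

```python
# Toy simulation of the running-sum arrival primitive.

class Receiver:
    def __init__(self):
        self.running = 0
    def receive(self, inc):
        self.running += inc
        return self.running  # echoed back toward the sender

class ObserverNearSender:
    """Sees forward increments and the echoed sums coming back."""
    def __init__(self):
        self.sent_sum = 0
    def forward(self, inc):
        self.sent_sum += inc
    def backward(self, echoed_sum):
        # Anything we counted that the receiver never summed was lost
        # somewhere between us and the receiver.
        lost = self.sent_sum - echoed_sum
        print(f"echo={echoed_sum}, seen forward={self.sent_sum}, missing={lost}")

rx, obs = Receiver(), ObserverNearSender()
for inc, delivered in [(3, True), (5, False), (7, True)]:  # 5 is dropped
    obs.forward(inc)
    if delivered:
        obs.backward(rx.receive(inc))
# Final line: echo=10, seen forward=15, missing=5 -- the lost increment is
# visible with no retransmit flag and no transport sequence numbers.
```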
H
Okay, you can take this a little bit further. There's a primitive in here called probabilistic and triggered stamping — it's in the second of the numbered sections in the paper. This is essentially a request for information to be added by a router. So probabilistic stamping is: one in N packets is going to get information stamped onto it; triggered stamping is: you essentially have something that looks very much like the TTL timer, or the IP time-exceeded message.
H
You say: okay, decrement this, and at the router where it runs out — if that router understands this — please put some information on the packet. The two basic ones we looked at here were things like performance diagnostics — okay, I'm going to give you the timestamp for now, my instantaneous queuing delay, and my instantaneous queuing capacity — and you can essentially use this to build maps of where the hot queues are in the network. You can also do this for technology discovery; this is essentially explicit traceroute.
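Downstream, an analyzer could aggregate such stamps into the "hot queue" map mentioned above. A minimal sketch, with a hypothetical (router_id, queue_delay) report format:

```python
# Assumed sketch: aggregating triggered-stamp reports of the form
# (router_id, queue_delay_ms) into a per-router view of where the hot
# queues are. The report format and router identifiers are hypothetical.
from collections import defaultdict
from statistics import mean

reports = [
    ("as64500-r1", 0.2), ("as64500-r1", 0.3),
    ("as64501-r7", 12.5), ("as64501-r7", 9.8),  # a hot queue
]

by_router = defaultdict(list)
for router, delay_ms in reports:
    by_router[router].append(delay_ms)

for router, delays in sorted(by_router.items(),
                             key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{router}: mean queue delay {mean(delays):.1f} ms "
          f"over {len(delays)} stamped packets")
```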
H
The addition that you get here is that you have the AS number, some identifier that the router chooses for itself — so if you know which AS it is, you can actually go and look that up in a table — and then you have both the incoming and the outgoing addresses, so you lose a lot of the aliasing problems you have in traceroute.
H
So, in conclusion: measurement is critical — I think the people in the room probably all agree on that.
H
This paper is kind of a position paper, to spur discussion and debate and to inform protocol development: we need better support from the network. So really, what we see as the contribution of this work is the set of principles — the idea that we have these six basic ideas we can test against measurement facilities that we look at adding, not just to this...
H
...imaginary future internet — wherein we actually can go back and say, okay, we're going to change things so all of this is built in — but also to any present design for adding measurability to a protocol, or for building a measurement facility that is not directly integrated with a protocol. The paper demonstrates these candidate primitives, which address long-standing and important real-world measurement problems. So this is basically sort of an ad: go read the paper.
H
Please talk to us if you think these principles are interesting. That's really the takeaway that I'd like you to get from this: we think this is a good groundwork set of principles for designing measurability for the future internet. So, with that, I think I do have a little time for questions.
A
Have we got a clicker? Hi, my name is Roland van Rijswijk; I work for SURFnet, the national research network in the Netherlands, and I'm also an assistant professor at the University of Twente. This is joint work with all the organizations here on the slide, and I'm going to talk to you about the Root Canary, and specifically about the evolution of a measurement that we started earlier this year. So, first of all, what is the goal of this talk?
A
It's about how the measurement evolved while we were setting it up and while we were actually measuring, and how we adjusted what we were doing. So, first of all, a quick recap: why did we start this project? I hope everybody has noticed that ICANN started a project this year to replace the root DNS key with a new one. Anybody not familiar with this project, please raise your hand. I don't believe you, Geoff.
A
So we use RIPE Atlas probes, which is one of the obvious choices; we use Luminati, which I'm going to tell you a little bit more about on the next slide; and we're hoping to use some of the data APNIC collects at the end of the measurement, so we can sort of compare notes and see what we observed from all of these different perspectives. And then, finally, we also want to look at traffic recorded at some of the root letters — hopefully most of them.
A
We can look at the day-in-the-life data for them to see what happened — what was visible from the root while, in this case, the KSK rollover was going on, and how resolvers were acting during the various stages of the KSK rollover. So, a little bit about Luminati — anybody not familiar with RIPE Atlas? Excellent. So, Luminati is an HTTP proxy service.
A
Has anybody heard of the Hola unblocker service? It's one of these Netflix unblockers — raise your hand if you have heard about it. Okay, very few people. So basically, if you're in a country where you can't watch certain content on a service like Netflix — and there are other services that this applies to as well — you can go through some third-party VPN service to pretend you are in another country, and then you can see the content that's made available in that country. And this is what Hola does.
A
But if you click "agree" on their terms of service, you actually agree to become an exit node for their VPN proxy service — and academics love this kind of stuff, because you can use it to do measurements from a residential perspective. This is actually something that the folks from Northeastern University collaborating in this project have done before; they've used this for other measurements.
A
So what we do is we have HTTP requests that trigger DNS queries, and using that we're able to cover about 15,000 ASes — and the interesting thing is that 14,000 of the ASes that we cover are not covered by a RIPE Atlas probe. So we are getting different visibility into the problem space, which is nice.
A
So then you get a matrix of algorithms that you can do measurements for, and for a resolver we can measure one of three outcomes: whether the resolver validates correctly; whether it fails to validate, giving us a SERVFAIL when we're not expecting to see one; or whether it doesn't validate at all — that is, even if it's a signed record with a bogus signature, it still gives us a response, so it's not validating. And of course there are some corner cases here, but I'm not going to go into detail about those.
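A hedged sketch of that three-outcome classification, using dnspython; the test-domain names and the resolver address are hypothetical stand-ins for the project's per-algorithm test zones (one validly signed, one deliberately bogus):

```python
# Sketch only: classify a resolver as validating / failing / not validating
# by querying one correctly signed and one bogus-signed test name.
import dns.message
import dns.query
import dns.rcode

RESOLVER = "192.0.2.53"        # placeholder resolver (documentation range)
VALID_NAME = "valid.example."  # correctly signed test record (hypothetical)
BOGUS_NAME = "bogus.example."  # record with a broken signature (hypothetical)

def rcode_for(name):
    q = dns.message.make_query(name, "A")
    return dns.query.udp(q, RESOLVER, timeout=3).rcode()

def classify():
    if rcode_for(VALID_NAME) != dns.rcode.NOERROR:
        return "fails (SERVFAIL on a correctly signed name)"
    if rcode_for(BOGUS_NAME) == dns.rcode.SERVFAIL:
        return "validates correctly"
    return "not validating (answers despite a bogus signature)"

print(classify())
```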
A
So if you want to have a look at this project, the URL is at the bottom of the slide. We actually have a couple of live results up: if you go to the website, you can have a look at the live results. This is for the most common signing algorithms, and it shows you visibility into the Atlas part of our measurement; unfortunately, that's the only one for which we have live results at the moment.
A
Luminati is a little bit trickier to get live results for, but we're updating our results as they come in. And what you can see here — these are all pie charts — the green part of each pie chart shows you the fraction of probe-plus-resolver pairs that are correctly validating that particular combination of signing algorithm and DS algorithm, and the orange part is the resolvers that are not validating it.
A
So those are just normal resolvers that don't do DNSSEC. And one of the takeaways here is that you don't see any red: we don't see any resolvers failing to validate, which is what we would expect to see if stuff started going wrong during the KSK rollover. Now, if you click on one of these pie charts, you actually get a more detailed view for that specific algorithm.
A
What you see, if you click on any of the algorithms, is that if a resolver is validating, it's actually really stable: it's validating all the time; it never flips from validating to not validating. And that's actually a nice result, because we didn't know this before — we didn't know whether resolvers were stable in whether they validated, or whether they flipped state very often. And the resolution of this measurement is once every hour, so we think we're getting quite good visibility into this.
A
So, just to give you an idea of why it's important that we measure from different perspectives: I guess everybody sort of assumes that RIPE Atlas is somehow biased, right? It's people like us who pick up an Atlas probe at an event like this and plug it into our network. We are not representative of the Internet; residential users are representative of the Internet, right?
A
So if you compare the fraction of resolvers that validate as observed in the Luminati measurement, in this case, with the RIPE measurement, you can already see a huge difference. Of the set of 13,000 vantage points in the Luminati measurement, 7% were behind a validating resolver; but if you look at the Atlas measurement, 42 percent of Atlas probes are behind a validating resolver. That's a huge difference — and, again, APNIC will give you a different number, because I think it's...
A
Until that point in time we had three keys in the DNSKEY set for the root: we had the new KSK, the old KSK, and one ZSK. And on September 19 a ZSK rollover started, which means we introduced a new key into that set; it grows in size, and consequently this was the first point in time when responses from the root might actually get fragmented...
A
...specifically, the DNSKEY responses, if they're transmitted over IPv6, because they would now exceed the minimum MTU for IPv6. So we wanted to know what happens. Well, I'm not even going to show you the Atlas probe results, because nothing happened there — and I'm going to say something about that. So these are the numbers of SERVFAIL responses that we got back in our Atlas measurement. You see one little spike a few days in from the actual introduction of the new ZSK.
A
We looked in detail at that, and it was just one vantage point failing for a completely different reason that had nothing to do with the new ZSK being introduced. So the takeaway from this is: nothing happened. This is traffic to B-root — with thanks to Wes for giving us access to that — and the takeaway from this is, again, nothing happened, because there was no noticeable increase in TCP traffic to the root, which you would expect to see if people were getting truncated responses and falling back to TCP. On the right-hand graph you can see...
A
...there is also no increase in truncated responses, so you wouldn't see TCP traffic because of that; nothing really happened. So, wow, summary of that: nothing exciting happened, okay. And then, on October 11 of this year, the new KSK was supposed to go live — and then ICANN decided to pause. Boo. So did we do all this work for nothing? Well, no: in the rest of the presentation I'm going to talk to you about the sort of side effects that we had from this measurement and what we learned from that — so, spin-offs.
A
The first spin-off we had was an online algorithm test. While we were designing this measurement, I was having a conversation with Willem from NLnet Labs, and we said: well, can't we use this matrix of algorithms to actually make an online test that allows you, as a user, to figure out which algorithms your resolver will actually validate? And actually, that turned out to be rather easy to do. So if you go to the website, you can actually click on the test link.
A
You'll see the results — and actually for the IETF network here. So this one is for my home, and I hope you can see, all the way to the right, that my home resolver actually validates one of the newer algorithms, Ed25519, because I run a patched version of OpenSSL to get that working. On the network here, you'll see in the left-hand column that RSA-MD5 isn't treated as insecure, it's treated as secure, and that probably means that they're running BIND as a resolver on the network here.
A
The second spin-off is that we test algorithm support for all of those Atlas probes over time. So if you go to monitor.rootcanary.org, you'll see a map, and you'll see measurements popping up as they come in — a little green or orange dot popping up on the map. You can click on that, and it will show you the state for that resolver at that time, and it will show you a lot more.
A
For instance, there is a whole table per AS: the number of probes in those ASes with validating resolvers, and whether that went up or down over time. So we get really nice visibility into the resolver ecosystem behind Atlas probes over time, and we've actually already seen that the number of validating resolvers increased over time. And I'm hoping that one day another resolver will pop up that actually validates some of the newer algorithms that we have for DNSSEC. Also — anybody have a guess which resolver these people are using?
A
You can see that it's 192.168.1.1, but it's forwarding its traffic to somebody else. Anybody guess? Okay, it's Google — yeah, because Google somehow thinks RSA-MD5 should return a SERVFAIL. Go on, go to the next... why is it not... can you press... yeah? Oh, thank you. Okay, so we made a little mistake somewhere during the measurement: we actually forgot to re-sign our test domains, and that means that the signatures were expiring.
A
So the final spin-off that I want to talk to you about is the Swedish canary — and I assume you're familiar with the Swedish Chef. Next slide, please. So Moritz from SIDN, who is also part of this project, presented this work at DNS-OARC in San Jose, in September I think, and after the presentation the good folks from IIS came up to us and said: well, we're doing a KSK and algorithm rollover for the .se ccTLD — and you can read more about the project at the URL...
A
...that's on the slide; the slides are also on the meeting materials website. And they asked us if we would want to measure that and signal problems to them, because they're kind of scared. And the interesting thing about this is that it's on, I would say, a more agile time scale than the root KSK rollover, for which we now have no deadline: this is due to take place in less than two weeks. They're actually starting at the end of November, and they hope to finish by the end of the first week of December.
A
For this project we developed some new methodology that covers issues specific to algorithm rollovers. I'm not going to go into details, but just to give you one example: one of the things that you need to do is introduce signatures before you introduce new keys, because the RFC specifies this — you could otherwise enable a downgrade attack — and some resolvers will actually fail if you introduce the new key before you introduce the new signatures, because then they will say: "I'm expecting to see signatures with this new algorithm, and they're not there"...
A
...SERVFAIL — regardless of whether there is actually a chain of trust from the root all the way down to the signature that they're validating — and that's a problem. Next slide, please. So, to conclude: I think the main takeaway for me from this project is that we started measuring this root KSK rollover sort of as an ad-hoc project. We had a discussion early in the year, and it turned out that nobody was actually measuring this, and then we said: well...
A
...we think this is important, so we should start measuring it. And as we were thinking about the measurement it evolved: lots of spin-offs developed, so we got lots of nice side results that will give us a better understanding of how this protocol actually behaves in the wild. And it's also a case study of why rare events — like the root KSK rollover, or like the algorithm rollover that they're doing in .se — really deserve our attention as a measurement community. Because if we don't measure these things, then we're going to keep making the same mistakes over and over again: some algorithm rollovers have occurred in the past, and almost all of them that I'm aware of have gone wrong in some way. That's why we need to start measuring this type of stuff. So I'm going to sort of say what the chairs probably always say: measurements give you better understanding and better protocols, and so fewer failures. Keep measuring, people. Next slide.
A
Please — most of the data that we collect is open data. The RIPE measurements are stored in the Atlas API, so you can actually extract all of those measurements if you want to have a look at them yourselves. And on the monitor.rootcanary.org site there is actually a WebSocket you can connect to, and it will stream live measurements to you as you connect to it — and that goes for all of the data sets that we're collecting.
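A minimal sketch of consuming such a stream with the Python websockets library; the endpoint URL is a placeholder, since the talk only says that a WebSocket exists on monitor.rootcanary.org, not its exact path:

```python
# Sketch: follow a live measurement stream over a WebSocket.
import asyncio
import websockets

WS_URL = "wss://monitor.rootcanary.org/..."  # placeholder; see project site

async def follow_stream():
    async with websockets.connect(WS_URL) as ws:
        async for message in ws:   # each message is one live measurement
            print(message)

asyncio.run(follow_stream())
```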
J
…server traffic, to show changes in traffic?
A
No, we only had data from B-root. At some point we will want to look at the other root letters as well, but for this particular event we only had access to that — and we had to do this really quickly, right, because this was in September and the presentation was two weeks later, so we had to work with what we got. And I think you're now going to say that another root letter saw something different, yeah.
K
Alright. So, yeah, this is work we conducted together with the University of Twente. Just before I start: how many DNS people are in the room here? Or DDoS protection people? Right. So, next, please. In this talk — this is a paper presented at IMC last week — the approach we take is the operator's approach: I work for an operator, and the thing we wanted to do is figure out how we can actually optimize and better engineer the deployment of our name servers.
K
So this came up from a production question, and by optimizing and better engineering I mean reducing the latency to our clients, because, well, time is money: there are many reports showing that — you can find Amazon saying that search times with higher latency mean loss of money — and DNS is part of that as well. So, next slide. And this is the real setup we have for .nl. Even if you're not working for an operator, and if you're not into DNS...
K
...let me break it down for you. So, for .nl — the ccTLD for the Netherlands — we run eight different authoritative name servers; they have these different colors here. Some are unicast. It's the same as for any other domain, just that for .nl we use eight — you can use a different number; it's just for redundancy...
K
...and high availability. Alright, thank you — let's see if it works. All right, next, please. And when we have a client at home — let's say you want to resolve a domain name, like example.nl — you go to your local resolver, which you can find configured on your computer. Next, please. And this resolver can actually make a choice to go to any of those eight authoritative name servers you see on top. I think it's the same...
K
...as if you're out here on the streets and you want to find an ATM, and there are eight ATMs around: you can go to each of them; some are going to be further from you, some closer to you. And we as an operator have no control over how resolvers make that choice, and the question that we want to answer is how we, as an operator, can actually help them get the best result — the shortest latency to the name servers.
K
Next, please. And, of course, some of the name servers use anycast, which means that these name servers are distributed across the globe, spread over various machines. Next, please. And this actually came out of an observation we had at the company. So this is the same setup, but the area of each node in the graph shows the actual number of machines that each authoritative name server has — Netnod, a big DNS provider, provides services for us, and you see the mix: unicast here, anycast clouds there.
K
Being located in the Netherlands, we see that 22% of our queries come from the US — even though our anycast authoritative servers have sites in the US. So what's going on here? That was when this work started. Next. And this is happening because we at SIDN only operate one side of it: on one side you have the recursive resolvers...
K
...which is actually code created by different DNS software vendors, like Unbound and BIND, and then the clients — those clients are located at homes, at data centers, at different places. So we have a lot of different entities, and they have different goals; our goal, as an operator, is just to try to serve them better. Next. And this kind of thing has been done before — there was earlier work along these lines — but it wasn't done in a live setup.
K
One of our concerns was that we don't want to focus on specific software versions or vendors; we want to see how this behaves in the wild, because that's what we care about — and that's what we did. Next. So we set up seven measurement configurations — actually more; the details of the measurements are in the paper, and I've simplified for the presentation — and we used Amazon data centers: we set up a different authoritative name server in each data center, and we activated some of them at certain times. Next, please. So let's consider one measurement.
K
We had only two authoritative name servers for our example domain, and with RIPE Atlas — you have only two sites here, one in São Paulo and one in San Francisco — this client at home can go to the resolver, and this recursive resolver can choose which one it wants to send the query to. Next, please. And we did that for all the combinations, to see how it actually varies...
K
...as you add more name servers. So the first question we had: do resolvers actually choose among all the authoritative name servers available? For .nl, right now, we have eight: do they actually go to all eight, or do they stick to one? And the first thing we found is that they use all of them — they actually query all of them. In the measurement shown here we only have two sites, and I don't recall by heart which ones these are, but what matters is that this happens very quickly.
K
That means that if you add a site, it's going to attract traffic from a lot of people, because that's the way the recursives behave. Next, please. And another question we had — so we found out, okay, they go to all the authoritatives, but how do they actually distribute the queries between them? These measurements were carried out around the clock; the records have a very short TTL, so they would expire from the cache, and we would query again every two minutes. So let me show you. Next, please. Let's look at this setup.
K
We have two sites, one in São Paulo and one in Japan, and what happens is: the first graph here shows the median RTT for all the RIPE Atlas probes towards São Paulo and towards Tokyo, and this graph shows the query distribution. If you have only these two name servers available for this example .nl domain, you see that they sit at a similar latency, and they get a similar number of queries.
K
So resolvers, if the name servers have more or less the same latency, distribute queries more or less evenly. Next, please. But since RIPE Atlas has a lot of probes in Europe, you would expect that Frankfurt would have much lower latencies, if that's the case, and Sydney would have higher ones — and if those are the possibilities you can go to among the name servers, what happens is that the distribution ends up roughly proportional to that; the toy model below illustrates the resulting skew.
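A toy model only — real resolvers implement their own varied selection logic — but picking each authoritative with probability inversely proportional to its RTT reproduces the kind of skew described here:

```python
# Toy model: each query picks an authoritative with probability inversely
# proportional to its RTT; the RTT values are made up for illustration.
import random
from collections import Counter

rtt_ms = {"Frankfurt": 20, "Sydney": 300}
weights = {site: 1.0 / rtt for site, rtt in rtt_ms.items()}

counts = Counter(
    random.choices(list(weights), weights=list(weights.values()))[0]
    for _ in range(10_000)
)
for site, n in counts.most_common():
    print(f"{site}: {n / 100:.1f}% of queries")  # Frankfurt ~94%, Sydney ~6%
```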
K
So, with these name servers, Frankfurt actually gets a lot of the queries and Sydney gets fewer. Next, please. So that confirms the previous work, but for the first time we have actually done it in the wild, which is the interesting thing about it. Next. Also, to confirm it, we looked not only at aggregated traffic; it's also important to look at individual resolvers and how each of them behaves. It's a very beautiful picture, but let me try to explain it to you.
K
Each column here represents one resolver — a RIPE Atlas probe and resolver pair — and we have divided them into different continents. So, next, please. Let me focus first here on Europe, and this is the measurement where we only had two name servers available, Tokyo and São Paulo. Next. We see — and this is over the number of resolvers — that some of them stick only to São Paulo here; they never distribute their queries.
K
The y-axis here shows the distribution, so if a column has only one color it means that resolver is only sending to GRU here. Next. Some other resolvers always stick mostly to Japan here. Next. Some others just distribute among the two of them. Next. And you see that 65 percent of all the resolvers — which is along the x-axis here — have a weak preference for one of the two available authoritatives, where "weak" is just a definition: they send between 60 and 90 percent of their queries...
K
...to one of those two authoritative servers. Next. We also found that 37% of them have what we call a strong preference: they send most of the traffic — 90% of the traffic — to only one of those two name servers. Next. And some resolvers just always choose the lowest-latency one available. So when you're measuring in the wild, that's the kind of thing you get...
K
...which is what you would expect. Next. This was all done with RIPE Atlas, but we also decided to look into production name servers. We looked at the root name servers, using the DITL data — the details are in the paper — and we got data there from ten of the root name servers. And this is .nl, which we run; we have data from four of the eight. And what do we see for the root queries...
K
...the queries are distributed across those authoritative name servers — the root letters — and each color here, each band, represents one server. So you see there is a more or less even distribution; some resolvers are very weird here and stick to one, but on further analysis it's a more or less even distribution. That means that the production operations of the root letters, and of .nl, confirm the findings from our testbed. Next, please.
K
So you can also watch that. But with anycast you can use the same name server and distribute it across the globe, in different locations, so you can provide locally better latency for all your clients — provided you have good peering. And this work actually had an impact in production: we are actually replacing our unicast sites with anycast. Next, please. Next: data sets, paper, questions.
L
Surely that recommendation is actually not a good recommendation unconditionally, and you need to be bloody careful when you deploy anycast in v6, because you will not ensure the integrity of any kind of ICMP Packet Too Big signaling. So I'm sorry, it's just not a good conclusion at this point; it should be a lot more conditional than what you're pointing out.
K
So, this work — I think the point is that we have only looked at this for IPv4; I haven't studied it for IPv6. But regardless, the focus of this work is to deliver better response times, and I think our conclusions still hold for that. As you say, it's something we have to take a look into; but part of our goal is to deliver better performance for our clients — we want to reduce latency — so I disagree with that.
E
Okay, so I'm Wes Hardaker, and this is about Verfploeter — I'm not a native speaker of that language, so I'm probably mispronouncing it a little bit, but close. So this is about a broad and load-aware anycast mapping study that we did in combination with the University of Twente; I'm from USC's Information Sciences Institute. Next... clickers... okay, next — okay, good, wait, wait!
E
No, go back. So our goals for this project were to develop a technique that did a few things: accurately map anycast catchments, study B-root's anycast IPv4 catchments, and predict load and the advantage of changes to it. This talk today is not about those three — those three were discussed at the DNS-OARC workshop a while ago, and you can go watch the counterpart to this talk — because in that talk I didn't talk about the study of anycast stability over time.
E
So today you're going to see sort of the flip side of that talk. Really quickly, as a reminder: anycast is multiple sites that serve, or host, the same address. And typically, for things like RIPE Atlas or any sort of anycast measurement utility that's been done to date, you have to have lots of measurement points...
E
...lots and lots of vantage points: you have all these devices around the net that are going to send queries at your anycast catchment, and then you're going to watch them come back and figure out where they got to. So, next: Verfploeter is sort of the inverse of that, and so we're actually using the entire Internet as the vantage point. The way we did that is we collect responses to ICMP pings that we send out from the catchment.
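The core idea can be sketched in a few lines; this is an assumed illustration, not the actual Verfploeter code (which is published with the paper). Echo requests go out with the anycast address as source, and every site logs which replies land there:

```python
# Assumed sketch of Verfploeter's core idea using scapy (requires root).
# Addresses are placeholders from the documentation ranges.
from scapy.all import IP, ICMP, send, sniff

ANYCAST_SRC = "192.0.2.1"                    # the anycast service address
HITLIST = ["198.51.100.7", "203.0.113.9"]    # responsive-address hit list

def probe():
    # Sent from one site; each reply returns to whichever site the target's
    # network routes the anycast prefix to -- that is its catchment.
    for dst in HITLIST:
        send(IP(src=ANYCAST_SRC, dst=dst) / ICMP(), verbose=False)

def collect(site_name):
    # Run at every anycast site: each echo reply maps its sender to us.
    def log(pkt):
        print(f"{pkt[IP].src} is in the catchment of {site_name}")
    sniff(filter=f"icmp and dst host {ANYCAST_SRC}", prn=log, timeout=30)
```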
E
So, in the example on the screen there's a box on the right, and you'll notice that it's within the anycast catchment — that was a server that I ran. We sent out five pings, and you see them come back to different places, and we noted the discrepancy of which ones went back to which place. So it's a very different way of doing anycast catchment measurement. And to do all those pings, we used previous work that was based on an IPv4 hit list.
E
That's basically a long list of IP addresses that are likely to respond to an IPv4 ping — it gives you a big, long list; there are example URLs later in the slides as well. Next. So what did we see? We studied a couple of anycast networks, the first of which is the DNS B-root, and you can see that this is the coverage from RIPE Atlas. There are lots of little circles on the diagram; they're all kind of small, to make sure that they all fit.
E
But you can sort of see the coverage: a lot of bubbles in Europe and, of course, some bubbles in the United States. Next — if you shift to our technique for doing that measurement, you'll find that the circles actually shift quite a bit, and one thing to note is that there is a huge scale difference.
E
Next. We also used the University of Twente's nine-site anycast testbed, which is sort of spread around the world, and you can see all the sites that we used for our test. Note that there are some sites that actually share the same upstream, so there's a little bit of bias: one upstream may actually be hiding two sites within it, because there's a single upstream.
E
So, similarly, when we studied that network in terms of which IP addresses out in the world were responding to which sites, we saw a similar kind of coverage, where now we actually have a nine-color map. And if you go on to the next one, you'll see that two things happen: one, we get more spread, as we did before; but not only that, the colors actually shift.
E
So if you toggle back and forth a little bit, you'll see that there are some places where the colors shift — look at Australia in particular — and note that the colors are shifting from one to the other, meaning that RIPE Atlas is measuring a different set of catchments than Verfploeter, which is arguably more accurate. Next. So, given the significant number of vantage points, what can we study and learn about networks around the internet? One of the things we asked is: can we study traffic catchments within an AS?
E
Now that we have so many vantage points, can we see what happens as the number of sites changes? And what happens with the size of the prefixes, as larger and larger prefixes are announced, as well as what happens when the size of the AS goes up — does that actually change visibility and how many sites are seen in an anycast network? Next.
E
So, to answer the first one: this is a graph where the bottom is the number of announced prefixes and the vertical axis is the number of sites. You can clearly see that there is a relationship: a larger number of prefixes announced for a given AS likely means that it is also better connected, to a larger number of anycast sites. So that gives you an indication that the larger ISPs — the ones more likely to actually deploy stuff — are more likely to hit more catchment sites. Next.
E
This is a bar graph, so I'm going to have to walk through it, because I'm sure it's quite small back there. On the bottom right are the /23s, and as you go along to the left it increases by one: a /23 on the bottom right, a /22 next to it, then a /21, all the way up to a /8 in the upper left.
E
And what we see is that the number of sites seen per network block increases as the prefix size goes up; specifically, 80% of prefixes smaller than a /16 are more likely to hit a single site, while the larger prefixes, like /8s and /9s, see a mix. And then, finally, we also wanted to look into the stability of anycast: how likely was it that things flipped?
E
89,000 actually did change, either from responsive to non-responsive or vice versa, and then 4,600 vantage points — which is 0.1% — actually changed catchments. In other words, these are vantage points that started off in one catchment and then at some point ended up at a different site.
E
The interesting thing to note, though, is that you could actually look at this and argue: well, it's actually more likely that they're going to turn their computer off than that they will actually switch catchment points, right? But the reality is that they probably turned their computer off intentionally, and they probably didn't flip catchment points intentionally — that's the network.
E
I'm making fun of it, but the end result is that there is only about a one-in-a-thousand chance that a TCP connection lasting longer than 15 minutes will actually switch to a different anycast site, in all of that data. So now, if we just look at the ones that did flip, it turns out that 63% of the ones that flipped came from five ASes — so these are the top five flipping ASes, and I'm sure...
E
...that's probably, you know, the result of that. Of all the flips, they're all located within 2,809 ASes out of the total set. Next. So, really quickly, sort of a summary of why we think that this technique is actually a better mechanism for measuring at large scale and for really studying an anycast network. I've given you some of the results, but in the end Verfploeter sees about four hundred and thirty times more network blocks than Atlas...
E
...but Atlas saw 2,079 that we didn't see, and similarly Verfploeter saw some that Atlas will never see, in certain countries that Atlas can't really be deployed in. And everything we did, by the way, was at the level of /24s, because that was sort of the smallest routable address block that you typically see in IPv4 on the Internet. In total — let me go back — in total Verfploeter saw about 3.8 million different /24s. Note that Verfploeter doesn't have everything geolocated.
E
There were some address blocks that we actually don't have geo-IP data for, whereas Atlas — because the probe IDs are actually encoded with the location that they're deployed at — actually has 100 percent coverage, so that's actually a better thing there. All right, next. And then, finally, this technique and the code and the data sets are all available.
E
Yeah, right — so we haven't been able to analyze exactly why there's that much difference, but the reality is that people can go plug those devices in behind a network where maybe the entire country doesn't allow incoming pings, right? So there are boundary issues there. On the flip side, if...
K
Giovane, SIDN. I was not involved in the study, but I know the people — and it's not a question, just a comment that I had. The first time I saw the presentation and the paper, I thought: this is interesting — how can I actually use this as an operator? And I was talking to them about that later, and I think, as an operator, the cool thing about Verfploeter is actually that...
K
...if you want to design your own anycast system, you can actually get your anycast prefix and, before you put it into production, you can actually say: hey, I'm going to have some sites here and there — what's going to be the load distribution? So it's just a remark for the people here: if you're designing an anycast network and want to estimate where the traffic is going to go for each of your sites, you should use this tool.
E
If you watch the DNS-OARC talk, that's exactly what we did: we were actually able to predict load at B-root in advance of actually switching over, so we could figure that out. And the results were — I forget which one is which — but we predicted 82.6% would still go to LAX and the rest would go to Miami, and the measured result was 81.4% — I mean, they were really, really close.
E
The data is on the next slide — yep, the data sets are all there, the paper is there — it's an IMC paper, so it was just published a week or two ago at IMC — and the software is there too. It's very easy software to use: you just say "ping" and give it a hit list. Okay.