From YouTube: IETF101-MAPRG-20180320-0930
Description
MAPRG meeting session at IETF101
2018/03/20 0930
https://datatracker.ietf.org/meeting/101/proceedings/
A: Everyone take their seats. We're going to get started; we're running a little bit late already. Okay, Jason, who's up first? Can you be ready to go?
A: Right, we're going to get started, so welcome everyone. This is the Measurement and Analysis for Protocols Research Group. I'm Dave Plonka, and this is Mirja Kühlewind; we're your co-chairs. We have a packed agenda in our two-and-a-half-hour time slot, eleven presentations this morning, so I'm going to ask the next speakers to get ready to come up right when the person before them is closing up, and we'll have you up here.
A: You'll have a clicker to change your slides, with the slides on the common laptop here. We'll spin through the introductions: the IRTF uses the same Note Well and intellectual-property process that the IETF does, so that's there. Here's a bunch of links to our charter and the mailing list; there's an Etherpad for note-taking, and the slides are all posted.
A: And Lars, thank you. So, the way we put together the agenda for this meeting is, again, the call for contributions on the mailing list that we put out on February 1st. We received about 15 proposals, of which 11 you'll see today. Just a couple were out of scope for MAPRG, and the others we've asked to present at a subsequent meeting, when it might be more convenient and also to fit the time we have. Thanks for sending that in; again, we're soliciting for future meetings. You can advertise your projects here.
A: If you don't have measurement results yet, we'll still put them on the agenda, and there will be a couple of advertisements at the end of this session. I'm not going to go through these right now, just to save time, but: 15 presentations, and we'll introduce them as we go along. Two announcements. The Applied Networking Research Prize has been awarded, I think to six people for 2018, two of whom will be presenting tomorrow morning in the IRTF open meeting.
A: So please join us for that; it's a really neat opportunity to see some of the best work from the academic conferences presented in this forum. The other thing going on in the IRTF right now is the Applied Networking Research Workshop. The call is out now; the paper-submission deadline (it can be previously published work) is April 20th, and the workshop will be happening at the next IETF, in July in Montreal.
A: Great, thanks. So: measurement challenges in the gigabit era, a view of some of the impacts that measurement has on things like network operators.

G: As background, I certainly recommend this paper from some folks at MIT a few years ago. There is in essence a wide range of existing systems, and they really fall into two primary camps, one of which runs automated measurements from homogeneous gear that sits in customer homes.
G: Most of these are national systems installed by regulators, for example SamKnows, which is based in the UK and used by Ofcom, the FCC, and a bunch of other regulators around the world. Of course, there are also user-facing systems, usually web-based, like Speedtest.net, and both of those, from my standpoint, are starting to show some…
G: And lastly, measurement systems today focus primarily on speed, but end-user expectations about their internet services are changing a lot, and customer-facing measurement systems haven't caught up to that yet; they're really still focused on speed. Customers care more about reliability, availability, and other aspects of application performance. So what are some of the existing issues that we're seeing in the systems that are out there today?
G
That
means
that
the
bottleneck
link
has
moved
off
to
primarily
an
interconnection
link
or
to
the
application
servers
where
the
tests
are
destined
or
something
in
that
data
center
network.
But
the
tests
really
the
designs
of
those
tests
really
haven't
evolved
yet,
and
so
in
many
cases
we're
finding
that
the
bottleneck
link
that
people
believe
is
being
measured
is
a
different
link
and
so
in
the
multi
hundred
megabit
era
that
were
in
today
we're
seeing
a
20
to
30
percent
negative
impact.
G: on occasion on some of these tests, which is significant. And why does that matter? A lot of regulators, state attorneys general, and others attach financial consequences if operators aren't meeting these tests, and users certainly use them for troubleshooting, so there are significant financial impacts to negative results in these kinds of things.
G: So, in terms of key questions that we've been asking some of our measurement partners and measurement researchers: does it make sense to continue this sort of test, where you're running one test, or multiple TCP connections, to one destination site? That bears no resemblance whatsoever to actual user behavior anymore, and there are really no application servers or services at the edge that use one gig.
G: We've used RIPE Atlas probes to measure the SamKnows infrastructure, and they sort of outsource that to some other providers, like M-Lab, and I found really interesting variations in availability over time that aren't really apparent even to regulators or operators. So there's a lot of variation on the server side that can affect the results as well. And of course, a key question that we've always asked in an operator environment
G
Is
you
know
when
will
the
test
away
from
simply
testing
throughput
tests,
move
and
start
looking
at
latency
and
reliability
and
other
things,
bearing
in
mind,
of
course,
that
the
things
that
are
measured
pervasively
are
the
things
that
network
operator
is
designed
to
the
things
that
they
market
to
and
then
the
customers
to
pay
attention
to
buy.
So
in
a
way,
things
that
start
getting
instrumented
in
measurement
may
eventually
make
their
way
out
to
the
way
the
services
are
marketed
and
so
on
and
of
course,
the
last
couple
points
here.
G: QoE is impacted as much by speed, by the network operator's access-network performance if you will, as by application or edge performance, and so we've contemplated measuring those things as well. And then lastly, as I mentioned before, the things that you measure are the things you're going to incentivize.
H: So, I attended the RIPE IPv6 hackathon, I think four or five months ago, and this is kind of a spinoff from that. So, the problem with visualizing v6 stuff is that we cannot really use what we have done for v4, which is often true for things in v6.
H: You have probably seen one or both of these visualizations: the xkcd one, or some of the work at ISI by John Heidemann and his colleagues. We can visualize the entire v4 space in something that's still readable, if you will. This is a Hilbert curve, a space-filling curve, showing the entire v4 address space. If you want to do the same thing for v6, it's quite useless; it looks something like this: you will see a small dot representing the 2000::/3 space.
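The Hilbert-curve layout he refers to can be sketched with the standard distance-to-coordinate conversion; this is a minimal illustration of the idea, not the code behind the slides:

```python
import ipaddress

def d2xy(order, d):
    """Map distance d along a Hilbert curve onto a 2^order x 2^order grid.

    Standard iterative conversion: numerically adjacent d values land in
    grid-adjacent cells, which is why adjacent prefixes cluster on the map.
    """
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Place an IPv4 /16 on a 256x256 grid of /16s (order 8): the curve distance
# is simply the top 16 bits of the address.
cell = int(ipaddress.ip_address("198.51.100.7")) >> 16
x, y = d2xy(8, cell)
```

For v6 the same trick fails because the allocated space is a vanishing fraction of the 2^128 total, which is exactly the problem zesplot addresses next.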
H: This is useful to convince people that we do indeed have enough address space in v6; other than that, I think it's quite useless. So I have created this tool called zesplot ("zes" means six in Dutch). It's based on squarified treemaps (you can look that up in the footnote), which basically means that we will always fill the same area regardless of our input set.
H: I will show you some pictures, but: we use the size of a square to depict the size of the prefix, and we use colour to depict how many addresses from our input set are in that specific prefix. And instead of plotting the entire v6 address space, we only plot what we feed it. This way we can look for outliers in, for example, measurement data, or HTTP access logs, or whatever.
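The per-prefix density that drives the colouring can be sketched as a longest-prefix count over the input addresses; the names and data here are illustrative, not taken from zesplot itself:

```python
import ipaddress
from collections import Counter

def density(prefixes, addresses):
    """Count, for each announced prefix, how many input addresses fall in it.

    Longest match: try the most specific prefixes first and stop at the
    first hit, so a /48 inside a /32 claims its own addresses.
    """
    nets = sorted((ipaddress.ip_network(p) for p in prefixes),
                  key=lambda n: n.prefixlen, reverse=True)
    counts = Counter()
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        for net in nets:
            if ip in net:
                counts[net] += 1
                break  # addresses outside every prefix are simply ignored
    return counts
```

Feeding this the announced prefixes plus a hitlist or log file gives the counts that zesplot turns into colours.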
H
This
is
this
is
a
visualization
from
says
blots,
based
on
all
the
announced
prefixes,
so
this
is
45
46,000
prefixes,
as
seen
by
right
few
rah-rah
fuse
and
for
the
addresses
I
used
a
hit
list
that
the
guys
at
the
TU
invention
make
right.
So
what
you
see
here
is
that
the
bigger
prefixes
are
in
the
top
left
corner
and
we
fill
up
the
entire
plot
with
all
the
smaller,
more
precise
prefixes.
We
then
color
all
the
squares,
according
to
the
number
of
addresses
that
we've
seen
in
our
input
set
all
right.
H: So what we can see here is that, for the entire set of announced prefixes, there is still some white space, which means that this hitlist has no addresses within those announced prefixes. So now you can say something about how complete this hitlist is: there is some white space, although there's a lot of stuff in there. If you want to look only at prefixes that are represented in the hitlist, we can filter the rest out.
H
How
will
you
get
a
I
think
a
more
more
fancier
picture,
and
now
you
can,
for
example,
spots
for
outliers
right.
You
see
a
big
red
square
on
top,
which
is
a
bigger,
a
s
with
a
lot
of
representation.
So
you
can
reason
that
this
might
be
a
over-represented.
Prefix
in
your
data
sets
well,
those
are.
These
are
more
things
to
spot,
but
because
of
time,
I
will
skip
from
things.
H: Another measurement I did: open memcached instances on v6. I found 361, and the interesting part here is that you can spot some smaller prefixes that, relatively, have a lot of them; that's the bright red square in the lower-right corner. One of the features I've built into this thing is that it outputs HTML with some interactivity: you can zoom in on the plot, and you get some extra information in tooltips, which you can see here on the screenshot.
H: You feed it a list of prefixes, you feed it a list of addresses, and that's it. And I'm wondering where to go from here; there are probably a lot of use cases that I didn't think about, and I want to continue working on this thing, because I think it's quite nice, but I would like some input to see where this can go. It's under an MIT license on GitHub; it's written in Rust, because I like Rust. This is basically the most important slide.
A: Thank you so much, Luuk. There's time for one question or comment while we set up Roland. What I think is interesting about this is the transition from a tool to a useful measurement, and I think you got a good start on that by being able to show, for instance, where the memcached stuff is. Thanks.
A
And
here's
Roland,
so
the
next
two
presentations
are
going
to
be
updates
on
things
that
are
at
map
RG
before
so
one
of
our
goals,
and
this
is
to
keep
bringing
you
the
data
back
again
and
again
and
again
and
Roland.
It's
been
one
of
the
one
of
those.
That's
visited
us
a
number
of
times
and
can
keep
us
up-to-date
on.
What's
going
on
within
a
sec,
yeah.
J: So, the goal: we've been measuring DNSSEC deployment for a long time, and one of the things that we observed, that everybody observes, is that in the general population DNSSEC deployment remains low. It's around 1% in .com, .net, and .org. Then there are some ccTLDs that do much better; for instance, .nl and .se both have almost half of all their domains signed with DNSSEC, and this is likely because they incentivize DNSSEC deployment by giving registrars a reduction in the registration price.
J
Now,
what
we
wanted
to
study
is,
if
organizations
that
do
deploy
DNS
a
get
it
right,
both
in
the
general
population,
but
also
in
these
CC
2ds
that
have
a
much
larger
at
the
intersect
deployment,
and
this
the
graphs
in
this
presentation
are
based
on
two
papers
and
the
references
around
the
one
of
the
final
slides.
If
you
want
to
read
papers,
we
used
longitudinal
data
from
the
open,
Intel
platform
go
and
have
a
look
at
our
website.
J
If
you're
interested
in
that,
we
measure
almost
60%
of
the
entire
name
space
on
a
daily
basis,
and
we
record
a
data,
and
we
now
have
some
over
three
years
of
data
collected
that
you
can
use
to
analyze
the
state
of
the
DNS
from
day
to
day,
and
we
have
a
new
website
coming
soon.
Now
for
the
comb
Network
study,
we
use
21
months
of
data
and
for
Delta
C
and
Delta
nel.
J
We
used
about
one
and
a
half
years
of
data
and
we
had
a
number
of
challenges
when
we
were
doing
these
measurements,
because
what
we
wanted
to
do.
If
you
want
to
see
if
the
NSTIC
is
done
right,
you
need
to
validate
all
the
signatures
and
we
really
had
to
validate
millions
and
millions
of
signatures,
and
also
we
wanted
to
see
if
people
do
the
more
complex
stuff
in
the
NSX,
such
as
key
rollover.
J
So
we
had
to
track
the
NSA
key
rollovers
over
time
and
that
was
actually
quite
challenging
to
do
for
such
a
large
data
set.
So
for
that
I'm
not
going
to
go
into
detail,
but
we
used
some
say:
quote-unquote
Big,
Data
technologies,
a
Hadoop
cluster
with
spark
on
it,
so
we
could
validate
all
the
signatures
and
if
you
want
to
learn
more
about
that,
read
the
papers.
J
Now,
if
we
look
at
the
general
population,
ComNet
org,
this
shows
you
a
graph
over
over
almost
two
years
of
the
development
of
the
number
of
signed
domains
in
in
dominant
and
the
takeaway
from
this
is
its
low
white
dot.
Org
is,
is
the
highest
run
about
1%
of
domains
in
the
doric,
are
deploying
the
in
a
sec
and
for
net
common
net.
It's
a
little
bit
lower,
but
it's
also
sort
of
going
towards
1%
mark
now.
J
This
data
is
almost
a
year
old
and
I
can
tell
you
that
since
then,
not
a
lot
has
changed.
It's
still
around
this
mark,
but
this
isn't
the
whole
picture.
If
you
look
at
it,
the
previous
graph
shows
you
domains
that
have
signed
their
zones,
but
up
to
30%
of
those
don't
have
a
secure
delegation
in
their
parents
own.
So
that
means
that
they've
gone
through
all
the
trouble
of
deploying
DNS
SEC,
and
then
they
didn't
create
a
secure
delegation,
so
nobody
can
validate
their
signatures.
So
that's
just
plain
stupid.
Why
would
you
do
that?
J
And
we
have
another
paper
that
I
realized
that
I
forgot
to
cite
in
this.
That
actually
looks
at
that
in
a
little
bit
more
detail,
but
it
turns
out
that
many
of
these
signed
domains
in
combat
and
org
are
actually
side
effects
of
the
incentives
in
the
dot,
NL
and
all
to
see
ccTLD.
So
these
are
people
that
deployed
in
a
sec
for
all
of
their
domains
and
they
also
do
it
for
combinatoric
domains
that
they
have,
but
then
they
don't
bother
to
create
a
secure
delegation
in
the
parent
zone.
J
Now
we
looked
at
errors
in
these
deployment
because
we
wanted
to
see
if
people
duty
and
I
said,
do
they
get
it
right
and
the
most
common
problem
that
we
found
is
actually
missing
signatures.
So
in
as
you
can
see,
income
data
and
org
up
until
the
end
of
2016
up
to
2%
of
science
were
missing
signatures.
And
basically
that
means
that
you
break
the
zone
and
it
turned
out
that
this
was
a.
It
was
mostly
one
operator
that
was
responsible
for
this,
and
what
they
were
doing
was
that
some
of
their
name
servers.
J
If
you
had
sent
them
a
query,
they
will
give
you
back
a
signed
response,
including
the
signatures
and
other
name
servers,
would
give
you
back
just
plain
DNS
response
and-
and
this
really
broke
stuff,
we
crawled
some
logs
and
this
led
to
validation,
failures
for
people,
so
that
was
really
stupid.
They
fixed
that
at
the
end
of
2016,
and
that's
why
this
huge
drop
occurs
at
the
end
of
the
graph
and
then
there's
also
a
small
minority
of
time,
domains
that
have
signed
some
records.
J: actually broken signatures, signatures that are either expired or in some way invalid, so we can't validate them because the content of the signature is somehow incorrect. These are very, very rare: less than 0.6 percent of all signatures were expired at any given point in time, and invalid signatures are extremely rare. And the other thing that we looked for, where we expected to see quite a lot but actually didn't,
J
So
much
was
mismatches
between
the
parents
owns
a
home
network
and
the
child
where
there
would
be
a
secure
delegation,
but
the
secure
delegation
would
not
match
the
key
or
there
would
be
a
security
Legation,
but
the
zone
turned
out
to
be
unsigned
and
that's
actually
also
very
rare.
So
that's
that's
good
news
because
it
means
that
at
least
people
are
not
breaking
that.
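The error categories from the last few slides can be summarized as a small decision procedure; the flags and labels below are a paraphrase of the talk, not the tooling from the papers:

```python
def classify(signed, has_ds, ds_matches_key, sigs_present, sigs_valid):
    """Bucket a domain's DNSSEC state into the categories discussed above."""
    if not signed:
        # a DS record pointing at an unsigned zone breaks validation outright
        return "bogus: DS but unsigned zone" if has_ds else "insecure: unsigned"
    if not has_ds:
        # signed, but nobody can build a chain of trust to it
        return "signed but no secure delegation"
    if not ds_matches_key:
        return "bogus: DS does not match DNSKEY"
    if not sigs_present:
        return "bogus: missing signatures"
    if not sigs_valid:
        return "bogus: expired or invalid signatures"
    return "secure"
```

The talk's findings map onto these buckets: "no secure delegation" is common (up to 30% of signed .com/.net/.org domains), while the bogus cases are all rare.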
J: Now, we also looked at the ccTLDs, because they have such large DNSSEC deployments: 50% of their domains are signed. We did the same checks that I showed you before, and the situation is much the same: if zones are signed, they're usually signed well. In the ccTLDs the number of missing secure delegations is much lower, because you only get the financial incentive if you have a fully working DNSSEC deployment.
J
So
what
we
wanted
to
check
there
is,
if
they
deployed
in
a
sec,
do
they
follow
best
practices,
and
we
took
the
NIST
guidelines
for
that
as
a
starting
point,
and
the
guidelines
are
up
here
on
the
slide
and
and
things
that
we
look
like
was:
do
they
use
to
write
key
size
do
to
use,
one
is
recommended,
algorithms
and,
more
importantly,
do
they
perform
key?
Will
overs
if
the?
J
If
the
guidelines
say
that
you
should
do
so
at
a
certain
frequency
tracking
key
role
over
I'm,
not
gonna,
explain
the
picture
in
detail
because
I
don't
think,
there's
time
for
that,
but
tracking
here
will
over
turned
out
to
be
a
pretty
tricky
and
because
you,
you
actually
have
to
look
at
all
of
the
keys
that
you
have
in
your
data
set
and
see
which
ones
changed.
But
this
is
not
one
day
to
the
next
day
changes
keys
are
used
simultaneously.
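Because old and new keys overlap during a rollover, tracking amounts to diffing the observed key set day by day; this is a toy sketch of that idea, not the Spark pipeline from the papers:

```python
def rollover_events(daily_keys):
    """Diff consecutive daily DNSKEY key-tag sets.

    Returns (day_index, added_tags, removed_tags) tuples. A rollover shows
    up as an 'added' event followed, some days later, by a 'removed' event
    for the old key, reflecting the overlap period the speaker describes.
    """
    events = []
    for day in range(1, len(daily_keys)):
        added = daily_keys[day] - daily_keys[day - 1]
        removed = daily_keys[day - 1] - daily_keys[day]
        if added or removed:
            events.append((day, added, removed))
    return events

# new key 2 appears on day 1; old key 1 retires on day 3 after the overlap
events = rollover_events([{1}, {1, 2}, {1, 2}, {2}])
```

Pairing each "removed" with the earlier "added" for the same zone reconstructs the rollover and its duration.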
J: So, to conclude: while DNSSEC deployment in the general population remains low, there are exceptions among the ccTLDs, and there are ccTLDs other than .nl and .se that have good results; in Norway, for instance, I think over 50 percent of the zones are signed. Real mistakes where people break stuff, so missing signatures, invalid signatures, or expired signatures, are actually extremely rare, which is good news, because it means that people are at least automating that properly. But other important best practices, such as regular key rollovers, are seldom followed.
J: You only get the incentive if you follow best practices, and otherwise you don't. We talked to folks at both .nl and .se, and the good news is that both of them are considering updating their incentives, and hopefully that will change the picture not just in their ccTLDs but also in the other TLDs that have signed domains as a side effect of these incentives. That's it.
A: Good. Next up we'll have Tommy Pauly give us an update; he visited us, I think, last year, possibly the year before, with some v6 measurements from their vantage point in Apple equipment. Alright.
K: Hello, I'm Tommy Pauly from Apple. We've previously spoken to MAPRG about measurements we've done on v6 adoption rates as seen from the client. We've also done some presentations about how we see Happy Eyeballs working when we have both v4 and v6: how often are the races favouring v6 over v4? One of the comments that had previously come up was that we didn't have as much RTT data, actual performance of how v6 is doing compared to v4, and so we wanted to rectify that.
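The Happy Eyeballs race he refers to, with v6 given a head start over v4, can be sketched like this; the delays are simulated stand-ins for real connect() attempts, so this only illustrates the racing logic, not Apple's implementation:

```python
import asyncio

async def attempt(family, delay):
    # stand-in for a real connection attempt; delay models the handshake time
    await asyncio.sleep(delay)
    return family

async def happy_eyeballs(v6_delay, v4_delay, head_start=0.050):
    """Race v6 against v4, giving v6 a head start (RFC 8305 style)."""
    v6 = asyncio.create_task(attempt("v6", v6_delay))
    v4 = asyncio.create_task(attempt("v4", head_start + v4_delay))
    done, pending = await asyncio.wait({v6, v4},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:  # abandon the losing attempt
        task.cancel()
    return done.pop().result()

# v6 wins when it answers within its head start, even if v4 alone is faster:
# asyncio.run(happy_eyeballs(0.030, 0.010)) returns "v6"
```

The handshake-latency CDFs in the talk are exactly the measurements that decide these races.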
K: So this percentage shows the number of times a client device is on a network that seems to offer v6, such that it even has a chance of connecting to a dual-stack or v6-only host. Globally, we see that 29 percent of the Wi-Fi networks we connect to offer v6, and 44 percent of the cell networks do. In the US we see that figure go up for both cases: 39 percent of Wi-Fi networks offer v6, and, quite nicely,
K
87%
of
all
cellular
network
connectivity
does
offer
v6
just
because
we
are
in
London
I,
also
grab
the
data
to
share
for
the
UK
Wi-Fi
is
at
a
pretty
decent
32
percent
and
cellular
just
completely
abysmal
at
barely
measurable,
but
at
a
point
one
two
percent.
So
let's
work
on
that,
so
the
rest
of
the
data
I'm
going
to
be
presenting
kind
of
in
this
format.
So
to
look
at
how
we're
reading
this,
the
pie
chart
is
just
saying
for
this
sampling.
What
percentage
of
the
connections
were
actually
using
v6
so
globally?
K: The solid lines are a CDF of what we see for the overall connection's smoothed RTT, and the dotted lines tracking those are what we see for just the handshake latency. So we're comparing how long it takes to actually bring up the connection, which is what we measure for Happy Eyeballs, against how well that translates into the connection actually being faster overall. The story for v6 here is really quite good. Overall, we do see that the CDF
K: has a nice little bump for v6, so on the whole there are a lot more fast v6 connections, and we see, in general across all networks, that the handshake is a good predictor of the actual connection RTT. But when we break it down, there are some interesting observations we can make. So here is the data for US Wi-Fi: here we see 14% of connections using v6, slightly higher than the global average, and again the v6 performance looks quite good overall.
K: We looked closer at the data and recognized that one of the carriers in the US was actually accounting for a lot of the difference here, and when we removed just that one carrier's values, we actually see the trend showing v6 again with better performance overall, but we still see that the handshakes are generally faster than the overall connection RTT. So this is a very interesting point; I'd be curious to hear anyone's thoughts on it.
K: So, just because we're in London, let's look at the UK as well. Over Wi-Fi we have 11% using v6, and the performance is actually quite good; there's an interesting curve for the v4 numbers here that I don't quite understand, but we do see the same trend that we saw in the US. And we see this kind of globally: over Wi-Fi we do expect that the handshake is going to be slightly slower than the overall observed RTT. And then, just to make us sad,
K: this is my last slide, so I'll just summarize and then we can go to the comments. Essentially, the observations are: on Wi-Fi we generally have slower handshake RTTs than we do for the full connection, and it's reversed on cell; in general the RTTs on cell are better; v6 is better across the board; cell is worse for some carriers that seem to be doing proxying; and the UK cellular network has very little v6 adoption. So, yeah.
N: Looking at the graphs and the pie charts, I get the impression that, pretty much across the board, you end up using v6 about half the time of the total connections, right? So, let's say in the US; I think the number is 32%, and you use v6 about half of that, which would mainly be based…
P: Hi, Geoff here. I'm having a slightly hard time actually interpreting these profiles, because what is going on is that you're not measuring v4 and v6 to the same endpoint.

K: That's exactly right.

P: So what you are measuring is: Happy Eyeballs selected v6, and here's a profile of what happened as a result; and Happy Eyeballs selected v4, or the server was v4-only, and here's that profile.

K: That's correct.
P
So
yes,
lots
of
biases
in
here
right,
so
I'm,
just
sort
of
trying
to
understand
that
profile
that
you
get
and
what
is
the
precise
meaning
of
the
differences
in
the
profile,
because
I
get
certainly
a
wildly
different
answer.
When
I
look
at
the
one
endpoint
and
look
at
v4
and
v6
to
the
same
endpoint,
dual-stack
endpoint
deeper
in
the
network.
So
then
this
profile
as
I
said,
I'm
still
trying
to
wrap
my
head
around
exactly
what
you're
measuring
right
guys.
Q: We've been looking at both TLS SNI and client IPv6 adoption. One of the main motivations for looking at this is HTTPS growth: we have HTTPS growing rapidly, yay, but without TLS SNI you have no IP multi-tenancy. If the client doesn't actually send the TLS SNI in the handshake, the only way the server knows which certificate to serve back is based on IP addresses, but IPv4 is exhausted at the RIRs, and IPv6…
Q
However,
there's
been
a
lot
of
movement
on
at
TLS
S&I
adoption
recently,
making
it
much
more
viable
than
it
used
to
be
so
we
look
at
HTTP
growth,
even
though
the
certificate
side,
this
graph
comes
from,
let's
encrypt,
and
you
see
that
we're
let's
encrypt
now
has
about
50
million
active
certificates
and
if
you
want
to
try
to
have
an
ipv4
address
per
certificate
to
do
this
without
TLS,
and
I
that
would
be
about
3/8
worth
of
ipv4
addresses
even
now
just
for
this
one
use
case.
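The "three /8s" figure is straightforward arithmetic on the numbers just quoted:

```python
CERTS = 50_000_000          # active Let's Encrypt certificates, per the talk
ADDRS_PER_SLASH8 = 2 ** 24  # 16,777,216 addresses in an IPv4 /8

# 50M certificates / 16.78M addresses per /8 is just under 3 full /8s
print(f"{CERTS / ADDRS_PER_SLASH8:.2f} /8s")
```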
Q
So
that's
not
going
to
be
such
that
going
forwards.
Just
using
ipv4
addresses,
2d
MUX
isn't
going
to
be
at
all
viable.
If
you
also
look
at
HTTP
growth
and
Akamai,
even
in
the
past
three
years,
we've
seen
a
lot
of
growth
of
customers
going
and
taking
sites
that
used
to
be
HTTP
only
and
moving
them
over
to
HTTPS,
where,
if
you
went
back
into
mid
2015,
we
had
around
30
37
38
percent
of
customer
host
names
that
were
delivering
over
a
hundred
million
age.
Q
P
requests
per
day,
we're
using
I'm
cut
their
own
TLS
certificates
with
their
own
names
on
them.
Too.
Now,
when
that's
somewhere
around
the
57
58
percent
range-
and
if
you
add,
on
top
of
this
customer
using
wildcards
ERPs-
and
you
also
kind
of
mix
in
the
fact
that
some
of
these
host
names
do
have
a
mix
of
HTTP
and
HTTPS
traffic,
we're
seeing
around
seventy
five
percent
of
requests,
we
serve
now
or
over
HTTPS,
which
is
a
huge
improvement
over
what
it
was
a
few
years
ago.
Q
If
we
look
at
TLS
S&I,
if
you
even
go
back
a
few
years
ago
to
to
2014,
you
start
the
s
and
I
story
was
pretty
bleak.
This
is
the
percentage
of
requests
that
would
come
in
to
Akamai
over
HTTPS.
It
sent
TLS
s
and
I
and
back
in
2014.
That
was
in
that
that
eighty
to
eighty-five
percent
range,
which
is
something
where,
if
you
go
to
a
customer-
and
you
say,
hey-
go,
go
turn
on
HTTPS,
but
the
s
and
I
only
it
only
break
15
to
20
percent
of
your
end
users.
Q: that's just not going to fly. But if you go back to even where we were at the start of last year, in 2017, that was starting to get up into the ninety-eight percent range, and still, that remaining two percent matters: there are a lot of large sites, who may have a three-or-four-nines availability goal, where we're telling them, "hey, we'll only break two percent of your end
Q: users"; that's still not great, but it's still a lot better than the 15 to 20 percent that it used to be. But if we now look at where it is and how this has changed in the past year or so: even in the past year we've seen substantial improvements in terms of clients that are sending TLS SNI, as some of the remaining lingering things have gotten fixed. So, this graph:
Q
So
the
yellow
one
is
from
February
of
this
year
and
then
you
look
at
that
and
we're
now
at
that
point,
where
we're
31%
of
of
customer
configurations
of
slots
have
sni
adoption,
that's
over
99.9%
so
and
if
you
look
even
at
at
the
median,
the
median
is
starting
to
get
into
that
case.
Where
we're
talking
more
about
the
median
later
the
medians
also
well
over
99%.
However,
there
are
still
plenty
of
customers.
Q
So
if
you
go
back
and
look
into
some
studies
that
were
done
a
few
years
ago,
it,
for
example,
CloudFlare
had
one
it
showed
a
big
variation
in
the
global
in
the
global
medians
we'd
see
some
countries
like
China
that
had
much
lower
SMI
adoption
than
others.
If
we
go
and
look
at
that
now,
we'll
see
that
that
globally,
it's
actually
leveled
out
a
lot
of
that
global
variation
between
customers
are
chemi
between
countries
has
settled
out.
So
for
us
we
see
the
our
median
customer
usage
has
around
ninety
nine
point.
Q: …they started sending TLS SNI, and that pulled China's median back up to be similar to the rest of the world, at least from our perspective. And if you compare this over to TLS 1.2, there's actually less usage of TLS 1.2 than there is of TLS SNI, and there's not always a strong correlation here: there are a lot of TLS 1.0 clients that do send SNI, and some TLS 1.2 clients that don't, but SNI usage is actually ahead of where TLS 1.2 is.
Q
The
Z,
so
what
does
it
send
to
us
s
and
I?
There's
it
there's
a
a
lot
of
what
we're
saying:
it's.
We
see
custom
clients,
apps
gaming
consoles,
antivirus
things.
There
are
things
that
are
have
spoofed
spoofed
user
agents
or
man-in-the-middle
devices
like
antivirus
and
secure
web
gateways.
Windows
XP
is
now
it
is
it
used
to
be
the
thing
that
was
was
the
Bugaboo
but
of
the
non
TLS
S&I
traffic.
Only
about
6%
of
that
it's
not
sinning.
Q
It's
not
sending
us
and
I
and
and
PI,
and
also
older
versions
of
Python,
Java
and
Apache
Clyde
are
also
around
4%
of
the
non
S&I
hits
we
have
and
there
used
to
be
a
big
issue
where
last
or
even
last
year,
a
lot
of
search,
bots
didn't
send
us
and
I,
but
all,
but
one
in
China
have
fixed
that
now.
But
then
there's
a
long
tail
of
other
things
and
anecdotally.
Some
of
that's
getting
fixed
like
I
found
Apache
bench
on.
Q: So, for IPv6 trends, the methodology we're looking at here is 24-hour snapshots of data. It's the same data that we've been sending to ISOC for worldipv6launch.org for the past few years, and we're looking there at a few hundred billion HTTP requests against dual-stack sites, and at what percentage of those requests are over IPv6 relative to the total hits. I've been focusing in the past on who the top IPv6 leaders are here.
Q
But
as
we
start
getting
as
we
start
getting
that
that
global
average
in
ipv6
moving
up
depending
upon
what
the
mix
of
content
is,
we
can
see
that
global
average
being
anywhere
in
the
17%
to
31
percent
range
and
you'll,
see
outliers
like
websites
that
are
heavily
mobile
may
see
something
closer
to
31
percent
ice
or
global
average.
You
may
see
some
enterprise
software
downloads
that
are
only
a
few
percent
ipv6.
But
if
we
look
at
how
do
we
get
that,
I
that
global
average
to
keep
moving?
Q
Q: There are places where IPv6 deployments are already in progress, and there are places where there's little to no IPv6 yet, and those two areas may call for different strategies for what can help move the needle on IPv6. A lot of this ends up being heavily influenced both by the content and by the ISP networks that have deployed v6, but there is a lot of content-mix sensitivity here, as I mentioned earlier.
Q
So
if
we
looks
at
what
are
the
top
countries
with
residual
ipv6
and
that
breaks
down
into
two
buckets,
so
those
countries
that
already
have
ipv6
deployment
in
progress,
but
we
there's
still
a
lot
of
opportunity
and
those
are
actually
the
the
top
ones,
at
least
from
what
we're
observing
and
everyone
will
observe
different
slices.
But
from
our
observations,
the
top,
the
top
opportunities
for
residual
ipv4
are
in
those
countries
that
are
already
moving
in
ipv6.
The
u.s.
Q
is
at
the
top
of
the
list
already
at
441
%
ipv6
by
on
hits,
but
then
you,
but
below
that
you
have
the
UK
and
Japan
and
and
Germany
and
India,
which
are
also
already
deploying
v6.
But
then
you
go
over
to
the
other
side,
there's
a
set
of
countries
that
are
further
down
the
list.
None
of
them
are
in
that
top
10
of
residual
v4,
but
they
have
less
3%
ipv6.
Q
In
some
cases,
it's
close
to
zero
when
observed
globally
and
those
are
Russia
China,
Italy,
Spain,
Indonesia,
Turkey
and
South
Korea
up
at
the
top
of
that
list,
and
there
are
a
few
of
those
that
may
have
ipv6
in
country,
but
they're
ipv6
connectivity,
you're
peering
to
the
rest
of
the
world,
is
sufficiently
bad
that
it's
hard
to
actually
measure
it
and
that
happy
eyeballs
is
flipping
over
to
not
use
v6
and
therefore
it
looks
low.
So
on
the
device
side
on
the
high
side.
Q
A lot of the modern devices do IPv6 well and use it quite a bit of the time. Oddly, while there's an IPv6 preference across the board, the rates we see for older versions of Windows seem lower — that may be because of heavier enterprise deployment, or heavier deployment in some countries. But there's a lot of opportunity on the very-limited-IPv6 side, especially among set-top boxes, some streaming set-top boxes, and some custom apps.
Q
There are some vendors of consumer electronics — of set-top boxes — that do v6 and have averages in that 30-ish percent category, but there are others at 0% that are actually fairly high in the residual-IPv4 bucket. The same goes for custom applications, where people may have written some app and just don't send v6 from it.
Q
So on the front end of networks, you have a lot that have started deploying v6 already. But if you want to get out and handle the next 50% of residual IPv4 traffic, that's going to require networks that haven't been deploying v6 to deploy it. If you want to get out to the 90th percentile of residual IPv4, there are 1200 networks there, and only around eighteen percent of those have IPv6 greater than two percent.
Q
But one positive sign is that there are a lot of networks out there that have v6 configured but haven't actually turned it on for their end-user eyeballs. For example, our networks team has been able to get v6 working on servers in around 840 networks and 114 countries around the world, which is significantly more than the set of networks you necessarily see high IPv6 eyeball usage from. So that's it.
A
So next up, we invited Jan Rüth to come and talk to us about two different topics that they've been studying lately, both active measurements in the v4 space. Okay, take it away, Jan — you've got 25 minutes total, and you get to decide how much you want to spend on each of your two presentations.
R
So this is work we've been presenting at IMC last year, plus some new stuff that hasn't been published yet. Basically, this is about TCP's initial congestion window. So why would you actually care? TCP's initial window is the amount of bytes that you're allowed to send in the first round trip of a new connection — it basically bootstraps the congestion window in slow start.
R
With every round trip you effectively double your congestion window, so a larger initial window, as you can see in the plot on the left, means your congestion window ramps up faster. And if you take a closer look at some performance measurements, you'll see that you can actually get faster flow completion times, for example. So you might think: well, this is nice.
R
So why aren't we just making it very much larger? Well, of course, at the start of a connection you actually have no clue about your network, so you'll be bursting — or at least this is how people typically send the initial window — into an unprobed network. You only know one sample of the RTT at the start, and as it turns out, depending on the available queue sizes at your bottleneck link, this can lead to a lot of losses, so you'll have a lot of retransmissions, which isn't good. So the question basically is:
R
What is the initial window? Well, the good thing is we have standards for that, and if you take a look at them, you'll see various things. At the start there wasn't really a lot of congestion control; only in '88 did Van Jacobson propose to use an initial window of one. That was then standardized; then there was an experimental standard that raised it to up to four segments; then people started measuring it; then that was actually standardized; and the most recent standard says:
R
You can use an initial window of ten segments — and this has also been in the Linux kernel since 2011. But basically what I would say is: we don't really know, because, well, do people actually follow the standards? So what we tried to do is figure out how it looks in IPv4, and to do so we actually wanted to contact all available IPv4 hosts. So the question is: how would you measure the initial window? We do it in the following manner.
R
On the left is our scanner; on the right is the host that we are going to probe. What we do is establish a regular TCP connection with a three-way handshake, but we announce a very small segment size, because in many implementations the initial window is a multiple of the segment size.
R
So we won't need a lot of data later on, and we announce a very large receive window for flow control, because effectively TCP sends the minimum of the congestion window and the flow-control window, and to never be limited by flow control we just announce a very large one. After we've done the three-way handshake, we send a request to that host in the hope of triggering a lot of data in response.
R
When the host starts sending segments, we don't acknowledge any of them, because acknowledging them would effectively increase the congestion window, which we don't want. So we just acknowledge nothing, and after a certain time the host will stop sending any more frames, because its initial window is full, and we'll see a timeout and a retransmission of, for example, the first segment. At that point we can actually start estimating the initial window by observing the sequence-number space of the packets that we got.
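As a rough illustration of the estimation step just described — this is a hypothetical sketch, not the authors' ZMap-based tool — one can reduce the captured server segments to an initial-window estimate by counting the distinct bytes of sequence space seen before the retransmission timeout:

```python
# Minimal sketch of the post-processing: given the data segments captured
# before the server's retransmission timeout, estimate the initial window
# in MSS-sized segments from the sequence-number space they cover.

def estimate_initial_window(segments, isn, mss):
    """segments: iterable of (seq, payload_len) observed from the server.
    isn: the server's initial sequence number from the SYN/ACK.
    mss: the (tiny) MSS we announced in the handshake.
    Returns the initial window in segments, ignoring retransmissions."""
    covered = set()
    for seq, length in segments:
        # Record byte offsets relative to the first data byte, so that
        # retransmitted segments are not counted twice.
        start = seq - (isn + 1)
        covered.update(range(start, start + length))
    if not covered:
        return 0
    # Bytes sent in the first round trip, rounded up to whole segments.
    return -(-len(covered) // mss)

# Example: a server with IW10 and MSS 64 sends ten 64-byte segments,
# then retransmits the first one after its timeout fires.
segs = [(1001 + i * 64, 64) for i in range(10)] + [(1001, 64)]
print(estimate_initial_window(segs, isn=1000, mss=64))  # -> 10
```

Counting distinct covered bytes (rather than the highest sequence number alone) is what makes the retransmitted first segment harmless here.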
R
However, there are some problems in doing this. For example, what happens if some segments are lost? This is especially problematic if the last segment is lost, because we can't detect that. So what we actually do is scan multiple times and hope that over ten scans we won't see a lot of tail losses. What you should also do: you shouldn't enable SACK, or tail-loss probes, or anything like that.
R
So we implemented that in ZMap to scan all of IPv4, with HTTP requests and with TLS client hellos, and when you do this the following picture appears. As you can see, all of IPv4 is basically dominated by four values: initial windows of one, two, four, and ten. The first thing you're probably noticing is that the TLS and HTTP distributions do not agree.
R
Well, now you might say: okay, it's a performance parameter, why would I care how some host in IPv4 is configured? So let's look at the services we are actually connecting to. If you take a look at the Alexa list of popular domains — the plot here has a logarithmic scale, so don't be confused — you can still see that two, four, and ten are present, but now the share of ten is a lot higher, around 80 to 85 percent, depending on HTTP versus TLS.
R
You can also see that both distributions start to agree, at least on the RFC-recommended values, and that there are some custom or larger values. Generally, we saw that the initial windows that were standardized a couple of decades ago are still present in access networks — if you connect to these IPs manually, you see that this is some gateway in some ISP or similar.
R
However, most of the CDNs are below an initial window of 50. Further, if you take a closer look, there are some CDNs in there that actually customize the initial window depending on which customer or which services are involved: some use 16, and for other customers they use 32, for example.
R
We find some very large initial windows, and it seems that CDNs are far beyond current standards. When you look more closely, you can even see that some customize it depending on the access network you're using to contact the source. You can get the data on our website, and I'm happy to take questions.
L
There was a presentation a few IETFs ago — it was actually about measuring ECN, but it also incidentally measured IW10 and presented the results, though it didn't go into as much detail as you. However, it also measured from about 50% of its vantage points on mobile networks, and I think all of yours are fixed, isn't that right?
S
(RIPE NCC) I'll be looking forward to your IPv6 results — that's part of the reason I'm asking; IPv6 enumeration is a bit tricky, and we can talk about scanning the v6 space later. Since you're talking specifically about CDNs: a bunch of the CDNs do fairly aggressive MSS clamping, so I'd be interested to see how that affects your results.
R
I don't need to tell you how awesome QUIC is, because you all know that. Basically, there's Google QUIC currently out there, and you're standardizing the other version, and we were asking ourselves how much Google QUIC is actually out there. To answer that, we basically want to answer three questions. First, what infrastructure is actually out there supporting QUIC?
R
Second, is it practically used by any website today? And third, how much traffic is QUIC today in the Internet? To tackle these questions, we again look at IPv4 and perform ZMap scans; we also look at zone files — the .com, .net, and .org zones — as well as the Alexa lists; and further we take a look at traffic shares in a university network, in a major European ISP and its mobile network, as well as at an IXP. I'll start with the first question.
R
QUIC version negotiation basically works like this: a client sends in its first packet a client hello — or, in IETF QUIC, an Initial — carrying its version, and if the server supports that version, it continues the handshake. If it doesn't, it sends a version negotiation packet, and that packet includes a list of the versions the server does support. So what we did is write a ZMap module that tests for that: we send a valid client hello,
R
but we include a version that is very unlikely to be supported by the other end. So if the server doesn't support that version, it should — if it is implemented correctly — send a version negotiation packet. From the packet structure we can deduce that the end host is capable of doing QUIC, and we also get a picture of the versions it actually supports.
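To make the mechanism concrete, here is a small hypothetical sketch — not the authors' ZMap module — that picks an unlikely-to-be-supported probe version and parses the version list out of an IETF-QUIC-style version negotiation packet:

```python
# Hedged sketch: craft a probe with a reserved "greasing" version and
# parse the IETF-QUIC version negotiation packet (long header with
# version field 0) that a correctly implemented server answers with.
import struct

PROBE_VERSION = 0x1A2A3A4A  # reserved ?a?a?a?a pattern, unlikely to be supported

def parse_version_negotiation(pkt: bytes):
    """Return the version list from a version negotiation packet,
    or None if the packet isn't one."""
    if len(pkt) < 7 or not pkt[0] & 0x80:
        return None                      # not a long-header packet
    version = struct.unpack(">I", pkt[1:5])[0]
    if version != 0:
        return None                      # VN packets carry version 0
    off = 5
    dcid_len = pkt[off]; off += 1 + dcid_len   # skip destination CID
    scid_len = pkt[off]; off += 1 + scid_len   # skip source CID
    versions = []
    while off + 4 <= len(pkt):
        versions.append(struct.unpack(">I", pkt[off:off + 4])[0])
        off += 4
    return versions

# Synthetic server reply (empty CIDs) advertising two supported versions.
reply = (bytes([0x80]) + struct.pack(">I", 0) + bytes([0, 0])
         + struct.pack(">I", 0xFF00001D) + struct.pack(">I", 0x00000001))
print([hex(v) for v in parse_version_negotiation(reply)])  # -> ['0xff00001d', '0x1']
```

Receiving a well-formed packet like this is what lets the scanner conclude both "this host speaks QUIC" and "these are the versions it supports" in one probe.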
R
We've been doing these observations in IPv4 since August 2016, and I'm going to show you data that covers roughly a bit more than a year, until September/October last year. What you can see is that from August 2016 to one year later, the number of QUIC-capable IPs more than tripled, to roughly 600,000 IPs. And, as I said,
R
QUIC is versioned, so we can further look at which versions were supported when, and how vibrant this QUIC landscape actually is. If you color the plot by the versions the hosts announced, you get the following picture. Basically, it's very colorful, and there are some versions that fade away over time; but if you take a very close look, you'll see that there are versions that were quite stable over the whole period we observed.
R
Given the colorfulness and the large number of changes over the last year, the question is: what will the future QUIC internet actually look like? How often will we change versions? As you all know, updating systems in the internet tends to be quite challenging, so the question would be: if versions are deprecated, will we be creating islands in the internet — versions of QUIC that just wouldn't talk to each other anymore?
R
The next question we asked ourselves is who's actually operating these IPs. To answer it, we looked at the ASes in which these IPs are hosted, at certificate data, and at reverse DNS entries, and what you find is that roughly 53 percent of the IPs can be attributed to Google.
R
Well, that's not a large surprise, I guess. For the rest, as of October last year, we were able to attribute them to Akamai, and you can also see that the growth in the plot is due to Akamai: in August 2016 there were only roughly a thousand Akamai IPs visible to us, and as of November the same year it was already 44,000, and it continues to rise. That still leaves us with around 6 to 7 percent of hosts that we couldn't classify that way.
R
Further, we found more Akamai and Google servers that we couldn't attribute using the classification above, and we also found roughly 7,000 LiteSpeed web servers — a web server that added QUIC support in August last year — and roughly 350 Caddy web servers, a web server based on the quic-go library.
R
There were no tools out there that actually investigate QUIC usage, so we had to build some that are efficient enough to look at this number of hosts. We built on top of quic-go, as I just mentioned, but modified it slightly so that we can trace the handshake in a very fine-grained manner and dump all the connection parameters that are actually established when you do a connection.
R
So basically we now have an efficient scanner for these domain lists, and we can then further analyze the connection parameters — for example, do these hosts actually deliver valid certificates for the domains we are requesting? Again, this is data as of October last year, just for completeness. So let's focus on some of the data: from all these 150 million domains,
R
you would first visit the HTTP or HTTPS variant of the website over TCP, and if you find a certain header in the HTTP response, it tells you that this host also supports QUIC, and only then would you use it. However, even among the hosts with valid certificates, we found only a very small fraction that actually announce this header. So in practice you probably wouldn't be using QUIC with a lot of these hosts today — or, rather, as of October.
R
So the next question is: given that there is not a lot of practical QUIC out there, how much QUIC traffic is there in the Internet? To answer this, we have to take a step back: how do we classify QUIC traffic? Classifying QUIC traffic is a bit hard, because everything is encrypted;
R
you're essentially just seeing garbage, apart from some packet numbers. So we basically relied on port-based classification: everything we saw on UDP port 443 we classify as QUIC, which gives us an upper bound on the QUIC usage — the true share might be a bit lower. We do the same with TCP port 443 for HTTPS and port 80 for HTTP, and, depending on the dataset I'll be showing you, we have AS-level information available to see who's actually generating that traffic.
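The classification rule just described is deliberately simple; a hypothetical sketch of it (labels and example flows are illustrative, not the authors' code):

```python
# Minimal sketch of the port-based classification the talk describes:
# label flows as QUIC / HTTPS / HTTP purely by transport and port,
# which yields an upper bound rather than ground truth.

def classify_flow(proto, dport):
    if proto == "udp" and dport == 443:
        return "quic"      # upper bound: any UDP/443 counts as QUIC
    if proto == "tcp" and dport == 443:
        return "https"
    if proto == "tcp" and dport == 80:
        return "http"
    return "other"

# Toy flow records: (protocol, destination port, bytes).
flows = [("udp", 443, 1200), ("tcp", 443, 5000), ("tcp", 80, 800), ("udp", 53, 90)]
bytes_by_class = {}
for proto, dport, nbytes in flows:
    label = classify_flow(proto, dport)
    bytes_by_class[label] = bytes_by_class.get(label, 0) + nbytes
print(bytes_by_class)  # -> {'quic': 1200, 'https': 5000, 'http': 800, 'other': 90}
```

Any non-QUIC protocol that happens to use UDP/443 would inflate the "quic" bucket, which is exactly why the talk frames the numbers as an upper bound.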
R
You'll see stars in that plot, and that means it's Google traffic — coming from or going to Google. In fact, Google is capable of pushing up to 42 percent of their own traffic over QUIC, which averages at roughly 39 percent. But we see close to no Akamai traffic in that trace — 0.1 percent. Still, if you look at the pure HTTP and HTTPS traffic, Akamai causes a lot of it, so given that they have a QUIC-capable infrastructure, that is potential future QUIC traffic.
R
When we look at the mobile network of that ISP, the first thing you probably notice is that the daily pattern looks different: people seem to use their smartphones throughout the day, starting in the morning. You can again see that there is QUIC in it, and here the QUIC share is slightly larger than for the whole network, at around nine-point-something percent.
R
To round that out, we had a look at a major European IXP, where we got the same quality of data for the same day in August, with the flows annotated by customer port, and you basically get the same image again. The first thing you notice is that there's a lot less blue here: QUIC is only at roughly 2.6 percent.
R
Furthermore, if you zoom in on that plot, you'll see that Akamai actually accounts for 60% of the QUIC traffic here, and Google only for roughly 33%. Honestly, we have no real idea why this is the case; we just assume that the two companies have different traffic-engineering strategies, so that the IXP is more attractive for Akamai to use and less so for Google. We don't know.
R
Nevertheless, even though the fraction of Google traffic is already quite high, it's very vantage-point specific, and there are companies that have quite some potential to increase the QUIC share given their infrastructure. It also raises the question of how QUIC traffic actually impacts Internet traffic as a whole. Okay, thank you very much.
A
We've got a few minutes for questions and comments if you have them. One of the things I particularly appreciated about Jan bringing this work to us is that one of these pieces is from IMC last year — the initial-window talk — but this one is an upcoming publication next week at PAM, so you're among the first to see it.
V
Yeah, so I would like to talk about the adoption, human perception, and performance of HTTP/2 server push. Just a quick reminder of the major changes in HTTP/2 compared to HTTP/1: it's a binary representation, not an ASCII representation anymore; we should use only a single TCP connection; we have streams on that single TCP connection that can be multiplexed and prioritized; and we have header compression.
V
To give you a quick overview of what push does: remember how your browser, back in the old HTTP/1 days, requested a website. You request the base HTML document, the browser starts parsing the document, it detects a stylesheet, it detects your JavaScript, and then it issues new requests to the server. With server push, the server has the ability, upon a client request, to push these resources to the client without an explicit request for them.
V
This technique saves requests and ultimately round trips, and so it really should improve performance. However, the standard itself does not provide any strategy for which resources should be pushed and when they should be pushed — and in that it differs from the other features offered by H2.
V
First of all, before we can look into what's being pushed, we need to identify HTTP/2-capable websites, and for that we use the Alexa 1 million list and the complete set of .com, .net, and .org domains. Again — because I work at the same institute as Jan, we are huge fans of ZMap — we built a scanner for the IPv4 space, and we explicitly look for TLS with Application-Layer Protocol Negotiation (ALPN), or its predecessor Next Protocol Negotiation (NPN), announcing HTTP/2. Given these two datasets, we then utilize an HTTP/2-capable library.
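The ALPN check described here can be sketched roughly as follows — a hypothetical illustration using Python's standard ssl module, not the authors' ZMap-based scanner:

```python
# Hedged sketch of ALPN-based HTTP/2 detection: offer "h2" (and
# "http/1.1") in the TLS handshake and record what the server picks.
# probe_h2() needs a live server; classify_alpn() is the pure decision rule.
import socket
import ssl

def classify_alpn(selected):
    """Map the negotiated ALPN protocol to the capability we record."""
    if selected == "h2":
        return "http2"
    if selected == "http/1.1":
        return "http1"
    return "unknown"       # no ALPN answer, or some other protocol

def probe_h2(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return classify_alpn(tls.selected_alpn_protocol())

print(classify_alpn("h2"))        # -> http2
print(classify_alpn("http/1.1"))  # -> http1
```

Note that a successful "h2" negotiation only shows capability; as the talk goes on to define, full support additionally requires the site to actually deliver content over HTTP/2.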
V
We define full HTTP/2 support as the website actually delivering content over HTTP/2, and we instruct the library to visit the landing page and follow up to ten redirects. We distribute this over multiple workers in our network, and we also publish the data on a regular basis on our website — for the small domain lists we do this daily, and for the larger domain sets weekly.
V
To give you a quick look at HTTP/2 adoption at a glance — this is just here for completeness; I'd like you to focus on the top right for now. Over the last year, from January 2017 to this year's January, we see that HTTP/2 adoption on the Alexa list has doubled, which is quite nice, and the use of server push has also increased by a large factor. However, if you look at the absolute numbers, out of 220,000 domains on the Alexa list,
V
only a thousand used server push, and we see the same on the .com/.net/.org set: there, too, the adoption of HTTP/2 has nearly doubled — we have over 11 million websites speaking HTTP/2 — and five thousand websites use server push. Back in January 2017 we saw 7,000 sites using server push, but six thousand five hundred of those were websites registered by a domain parker, all using the same template; if you take them out of the equation, you have only roughly 560 websites using server push.
V
If you look at this over time, you get the following picture. On the lower left you see what I just said: rising adoption of HTTP/2 on the Alexa 1 million list, and with it adoption of server push. I'd like to focus on two special things. In the lower-right plot you see a drop last September — so what happened there?
V
And if you look at the rise this February, we see that it's also caused by Cloudflare: it seems that a lot of users that host their website there use a certain content management system, HubSpot, which recently did a software update that introduced preload headers.
V
Preload is used to tell the browser that here is some content it may want to fetch very early, because it will need it later on — but Cloudflare uses this preload header to identify what to push to the client, so we see a rising adoption here. We next focused on how much the websites push to the client, and this is the picture for last year: in general we saw nearly 600 websites utilizing push, and, if you look at the left plot, around 80% of them push up to 20 resources to the client.
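As an aside, the push-from-preload behavior described for Cloudflare is driven by ordinary Link response headers; the following is a hypothetical sketch of how an edge might turn them into push candidates (the `nopush` attribute is the commonly documented opt-out; the header values are illustrative):

```python
# Hedged sketch: turn "Link: rel=preload" response-header entries into
# server-push candidates, honoring the "nopush" opt-out attribute.

def push_candidates(link_header):
    pushes = []
    for entry in link_header.split(","):
        parts = [p.strip() for p in entry.split(";")]
        url = parts[0].strip("<>")
        # Parse attributes; bare attributes like "nopush" map to themselves.
        attrs = {kv.split("=")[0].lower(): kv.split("=", 1)[-1].strip('"')
                 for kv in parts[1:]}
        if attrs.get("rel") == "preload" and "nopush" not in attrs:
            pushes.append(url)
    return pushes

hdr = '</app.css>; rel=preload; as=style, </app.js>; rel=preload; as=script; nopush'
print(push_candidates(hdr))  # -> ['/app.css']
```

Under this rule the stylesheet is both preloaded by the browser and pushed by the edge, while the script keeps the hint but suppresses the push.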
V
If you look at the share-of-resources plot — the share of pushable resources, i.e., those on the same origin as the landing page, that actually get pushed — we see that some websites push everything they can push, and that the top three content types are JavaScript, stylesheets, and images. Here's an updated view from this year: for the first part, not much has changed.
V
However, for the MIME types, we still see that JavaScript and stylesheets dominate the top list, and comparatively fewer images are being pushed to the client. So now that we see push is out there, it is used, and there's actually some stuff pushed to the client: how does this impact overall performance? Because, as I said in the beginning, on paper this should improve performance. What we did was configure a Chrome browser to automatically and repeatedly visit these websites, and we recorded performance metrics.
V
You see websites that suffer from using HTTP/2 with server push, and to identify whether this is caused by server push, have a look at the right picture, where we compared each website's HTTP/2 version without server push — the client can deactivate it — to the version with server push. Again, it seems that some websites improve with server push, but some websites, although they might improve when using only the other HTTP/2 features like stream multiplexing or header compression, suffer from using server push.
V
The speed index is a metric defined by Google that basically measures how quickly the page content is visually populated; I won't go into much detail here, but it draws the same picture. We then tried to map these results to what we saw before — how much the websites pushed, what content is actually pushed, and how large the content is — but we could not attribute the outcome to simple reasons like the number or share of pushed objects. So we next asked ourselves:
V
these are all technical metrics, and we saw inconclusive results, but in the end it's people visiting websites — do people even notice that there is a change going on? For that we conducted a user study: we created an online questionnaire where users had to view a side-by-side comparison of the loading process of a website. It looked like this: users visit the site and see side-by-side videos of the loading process of different versions.
V
The key takeaway here is that server push can lead to human-perceivable negative performance. After the questionnaire we talked to the participants, tried to identify reasons, and also looked into the rendering behavior of these websites in the browser, and there are various reasons that may have caused the decisions in the end. For example, as I said, a lot of websites already benefit from other HTTP/2 features like multiplexed streams, which lead to less head-of-line blocking.
V
We see that if you push too many resources before the base document, the browser is delayed in processing the base document, and the overall performance suffers. We also saw some real effects: some websites were configured to push resources that are not actually referenced in the website, or not used — but since we always used a cold connection to the server, pushing more grew the congestion window, and ultimately the actual page loaded faster.
V
Based on the results we saw in the technical analysis and in the user study, we are currently analyzing server push in our controlled testbed, to remove the variability we had when measuring in the internet. We replay real websites, using the Mahimahi tool for that, and what's cool about it is that it tries to replicate the real website structure.
V
It spawns multiple namespaces for third-party resources and for each resource, so you're not just saving a website to a single local server — it tries to replicate how the website looks in the internet. We are testing various server-push strategies there, and one early result is: you shouldn't push everything you could push on the website.
V
Push the stuff you see when you open the website, and try to interleave these pushes with the base HTML document: we instruct the scheduler to send just the first bytes of the website, pause the HTML, and then push resources like stylesheets. We already see that this can lead to promising results for some websites in our tests; however, it still depends on the overall structure of the website, the amount of third-party content, and ultimately the browser behavior. So the takeaway here is:
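The interleaving idea can be sketched as a toy frame scheduler — an illustration of the strategy described, not the testbed's actual scheduler:

```python
# Toy sketch of the interleaving strategy: emit the first chunk of the
# HTML, then the pushed critical resources, then the rest of the HTML,
# instead of pushing everything before the base document.

def schedule_frames(html_chunks, push_resources, head_chunks=1):
    frames = []
    frames += [("html", c) for c in html_chunks[:head_chunks]]
    frames += [("push", r) for r in push_resources]   # e.g. stylesheets
    frames += [("html", c) for c in html_chunks[head_chunks:]]
    return frames

order = schedule_frames(["<head>", "<body>"], ["style.css"])
print(order)  # -> [('html', '<head>'), ('push', 'style.css'), ('html', '<body>')]
```

The point of ordering it this way is that the browser can start parsing the document immediately, while the pushed stylesheet still arrives before it is discovered by the parser.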
V
Unfortunately, we cannot give you a single generic guideline: server push requires website-specific tuning and configuration. So, to conclude my talk: what's cool is that we see a rising adoption of HTTP/2, and with a lot of the features there, like multiplexed streams and stream prioritization, you can really improve your website — and every now and then there can be a drastic improvement just from updating your server, because all major browsers already support it. Regarding the server push feature, I still think it's a very cool feature.
V
However, you shouldn't just switch it on, because it's no silver bullet for improving website performance. We've seen that it can lead to human-perceivable negative performance, and up to now, I would say, it requires a lot of deep understanding of the page loading and rendering process in the browser. So I think we should all agree that we need best practices and guidelines for the use of server push. With that, I would like to thank you for your attention — that concludes my talk.
A
Maybe I'll offer — not just for your study — that using passive DNS can get you the collection of fully qualified domain names underneath a domain, and what I find a lot when I'm studying it is that there are very many Alexa-listed services that use many CDNs simultaneously. So I'm curious to know which subdomains might be using server push — something you wouldn't see just at the top-level domain.
Q
Isn't that going to become decreasingly useful, because many things may switch their behavior based upon which AS they see — and along with that, the IPv6 stuff is becoming more and more relevant there. But also, on David's point about the Alexa list: one thing you'll see is that Alexa very much focuses on www-style sites, and there ends up being an increasing amount of content on separate domains — for images, videos, etc. — that doesn't show up as well in that list.
V
We couldn't attribute this to the provider itself; it's more the configuration of the website, i.e., of the user hosting the website at the provider. For example, I said in the beginning that push requires manual configuration, but there are also some websites where we see that plugins do this configuration — like WordPress plugins that scan your file tree, see that there are some static files, and decide: why not push them?
W
(Google) First, I want to thank you for publishing this work. I've repeatedly asked for good public evidence that push actually works well or poorly, and it's really hard to get data; this is as good as I've seen, and I really thank you for that. I think it emphasizes the challenges of making push work in the wild, which are real — we've seen them at Google. I mean, it's possible to make it work.
V
We will look into that. As you might know, at today's HTTPbis meeting there will be a talk about cache digests, and we'll really look into it, because there is still a lot of stuff that can go wrong with server push. I think it's a cool feature, but maybe it was standardized too early. I understand that it's being used, but I think that without cache digests it shouldn't be used, for example. We haven't looked into that yet, but we're planning to do so. Thanks.
I
Okay, thank you. Let me start by saying that denial-of-service attacks are still all around us, every year becoming more and more dangerous — whether for profit, for fun, or supported by national regimes. When a network is under attack — even if only a small number of the IPs and servers hosted in the network are targeted — the network provider has to react, because if it doesn't, the overall performance of the network may suffer.
I
One very cheap and readily available solution is to just drop all the traffic that goes to the targeted host; this is called blackholing. Of course you can do better, because otherwise you carry all the traffic all the way to your network only to drop it once it has already arrived. To achieve that, you use something called BGP blackholing: you signal to the networks that are sending you the traffic to stop sending it, because you're going to drop it anyway.
I
So with BGP blackholing, when you signal the networks that send you the traffic, they can drop the traffic earlier. One way to do this — I would say the default way — is to use BGP communities: when you advertise the prefix that is under attack, you add to the community field a value, a 32-bit quantity where the first 16 bits are the AS number.
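A hypothetical sketch of that community encoding (the specific blackhole value is operator-defined, which is exactly the problem the talk describes; 666 is the value later standardized as BLACKHOLE, 65535:666, in RFC 7999, and is used here only as an example, with a private-use ASN):

```python
# Hedged sketch: a BGP community is a 32-bit value conventionally
# written ASN:VALUE, with the operator's AS number in the upper 16 bits.

def split_community(raw):
    """Split a 32-bit community into (asn, value)."""
    return (raw >> 16) & 0xFFFF, raw & 0xFFFF

def is_blackhole(raw, blackhole_values=frozenset({666})):
    """True if the community's value part is a known blackhole value."""
    _, value = split_community(raw)
    return value in blackhole_values

raw = (64500 << 16) | 666     # private-use ASN 64500 with value 666
print(split_community(raw))   # -> (64500, 666)
print(is_blackhole(raw))      # -> True
```

In practice each operator picks its own value part, which is why the study first has to infer a per-operator dictionary of blackhole communities.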
I
You can do the same thing if you are at an IXP: in this case the IXP provides the blackhole community to its users, the members of the IXP, and by announcing the prefix to the route server with this community, the members that participate in the route server hear about the announcement, and then essentially the IXP null-routes the prefix and drops all the traffic. In this case the IXP is the blackholing provider.
I
Now let's see the difficult problem we have to tackle in order to do this in reality. As I said, there is no common dictionary of communities. Together with very smart people we tried to find ways to create a dictionary for communities, and it is almost intractable. So the only thing you can do is go case by case — for example for blackholing, or for geolocation, or for other traffic-engineering properties — and use keyword searches over the documentation in order to infer communities.
I
For example, here I show how Level 3 documents it, and you can see that with simple data mining you can find what their blackhole community is. Doing this, we found approximately 400 blackhole communities, 50 of them at IXPs, and then we used them to analyze passive BGP measurements in order to infer blackholing activity.
I
So what we do is this: we have a list of communities, and when we see an announcement carrying one of these communities at one of the BGP collectors we have access to, we tag it and record the starting time. Then we wait until there is a withdrawal, implicit or explicit, in order to identify a blackholing event, and we do this everywhere we have access to this type of data.
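The event-inference loop just described can be sketched as follows. The update format and the community set are assumptions for illustration; the real pipeline works on BGP collector feeds.

```python
# Sketch of the talk's inference: open an event when an announcement
# carries a known blackhole community; close it on a withdrawal, or on a
# re-announcement without the community (an implicit withdrawal).

def infer_events(updates, blackhole_communities):
    """updates: iterable of (time, prefix, kind, communities), kind in
    {'A', 'W'}. Returns a list of (prefix, start, end) blackholing events."""
    active = {}   # prefix -> time the blackhole announcement was seen
    events = []
    for t, prefix, kind, comms in updates:
        tagged = kind == 'A' and blackhole_communities & set(comms)
        if tagged and prefix not in active:
            active[prefix] = t  # event starts
        elif prefix in active and not tagged:
            # explicit withdrawal, or re-announcement without the community
            events.append((prefix, active.pop(prefix), t))
    return events
```

With one tagged announcement at t=0 and a withdrawal at t=300, this yields a single 300-second event — matching the talk's later observation that most events last only minutes.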
I
Now let's see the trends in blackholing activity. Here I show the data sources we used for this study. Like many other academic researchers, we mainly used RIPE RIS and RouteViews, but we also used PCH, which provides route-server BGP feeds, as well as BGP feeds from a large CDN. For blackholing we were able to find three times more blackholing events from the CDN and PCH datasets compared to the other two, RIPE RIS and RouteViews.
I
In
total,
we
use
something
like
13,000,
PRI
peace
and
approximately
3000
pas
now.
One
of
the
big
observe
essence
is
that,
indeed,
the
use
of
the
beauty
people
are
falling
on
the
rise.
So
we
expect
that,
as
the
number
of
the
DDoS
attacks
increases,
the
number
of
providers
are
willing
to
offer
this
as
a
service
increases
and
indeed
the
during
the
less
than
three
years
of
this
study,
we
saw
more
than
250
percent
increase
in
the
number
of
providers
that
provide
blackhole.
I
In this plot I show that on average we save five IP hops and three AS hops, which means that with something as simple as BGP blackholing you can save three networks from carrying traffic that only has to be dropped later. This is also — at least to an academic — a very nice way to show how networks can collaborate with each other when there is an attack. I'll spend maybe the last minute — right, I have two minutes.
I
The
last
few
minutes,
in
order
to
show
you
some
insight
about
who
is
using
a
block
falling.
So
it's
not
the
big
users.
It's
not
it's
not
very
popular
in
many
countries.
Of
course,
it's
popular
in
the
years
because
it
has
a
huge
ID,
but
this
particularly
popular
in
Rosia,
and
to
become
even
more
and
more
popular
in
ukrainian
and
in
other
ASEAN
countries.
I
We
have
seen
also
that
there
is
a
huge
difference
between
how
many
prefixes
it's
provide
their
black
holes.
So
there
are
some
of
them
that
just
black
hole
a
few
tens,
but
there
are
some
the
black
hole,
thousands
or
tens
of
thousands,
and
also
in
terms
of
blackballing
uses.
Again
we
see
a
concentration
in
Rosia
who
crania
also
Latin
America
and
most
of
the
users
are
cotton
providers
or
cloud
providers,
and
this
is
somehow
expected
because
running
the
big
firms
of
servers.
I
So
when
we
did
the
in-depth
analysis
of
what
is
running
behind
that
and
who
is
actually
affected
by
black
hole,
we
see
that
mostly
our
HTTP
services,
but
there
are
also
other
services
that
have
to
do
with
mail
servers,
etc.
But
most
of
them
are
in
cloud
providers
and
most
of
them
most
of
the
domains
are
Lauren
domains.
So
it
seems
that
black
holing
still
is
a
medication
technique
for
the
poor,
probably
others
have
used.
I
We have also looked at durations: most blackholing events last for only a few minutes, and this was a striking observation. When we talked with people who do DDoS mitigation, they told us that yes, indeed, most attacks are not long-lasting — they last for a few minutes. Another thing we found is that many blackholes stay around for a long time, because operators are not sure when an attack is over, or are not sure whether they want to penalize that IP forever.
I
T
Alexander Azimov, Qrator Labs. I just want to highlight one thing: unfortunately, blackholing is becoming a service not only for DDoS mitigation — it is also a service for censorship, and especially in Russia blackholing is used to block some resources, and it is quite popular.
B
I
Among the long-lasting ones there are indeed these used for censorship — we have enough indications for that, but we don't have ground truth. What I can say right now is that when we see days-long blackholing, we use DNS and other tools to find out what that IP runs, and some of them are websites with political content, yeah.
F
X
Can everybody hear me? Okay. I'm Kevin Vermeulen, a PhD student at Sorbonne University, and I'm going to present ongoing work, in collaboration with RIPE NCC and Stephen, who is present in the room, about running MDA traceroute on RIPE Atlas probes. This is a presentation that we made at CAIDA for AIMS. First of all I will give you a short recap of what the multipath detection algorithm is and what its limits are, and that is what I will mainly focus on today.
X
So
the
goal
here
is
to
provide
a
better
nga,
and
for
that
we
have
done
and
we
are
currently.
It
is
a
an
ongoing
survey
that
and
I
will
present.
The
first
results
that
we
have
here
under
load
balancers
and
the
data
that
we
found
in
our
survey
so
for
those
with
which
are
not
familiar
with
what
is
multipath
detection
algorithm.
So
it
allows
a
user
to
discover
all
the
paths
between
a
source
and
a
destination
it
in.
It
is
based
on
Paris
trace
routes,
so
it
is
an
extension
to
Paris
trace
routes.
X
So far approximately one hundred thousand traceroutes have been computed, and by computing I mean extracting all the patterns that we can find in the traceroutes — in terms of what the diamonds they contain look like. We defined some metrics. First of all, the diamond length: here is a very easy example where the max length equals the min length, which is 2, and on the right is the distribution of the max lengths.
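The diamond metrics just introduced can be sketched on a toy representation. The representation is an assumption for illustration: a diamond is modeled as one set of interfaces per hop between the divergence and convergence routers, whereas the survey computes these metrics on real Paris/MDA traceroute output.

```python
# Sketch of two of the diamond metrics from the talk: length (number of
# hops between the divergence and convergence points) and max width (the
# largest number of distinct interfaces seen at any one of those hops).

def diamond_metrics(hops):
    """hops: list of sets of interfaces, excluding the divergence and
    convergence routers themselves. Returns (length, max_width)."""
    return len(hops), max((len(h) for h in hops), default=0)

# Toy diamond in the spirit of the talk's first example: two hops, the
# first splitting across two interfaces.
toy = [{'a1', 'a2'}, {'b1'}]
```

On this toy input the metrics are (2, 2); the extreme cases in the talk — max length 17, max width 96 — are just larger values of the same two numbers.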
X
So
we
can
see
that
mainly
these
diamonds
are
max
trains
1
and
also
you
can
see
that
the
power
of
2
are
are
more
current,
that
not
the
total
ends,
and
that
is
one
of
the
diamond
that
we
found
in
the
data
so
with
a
maximum
of
17,
so
with
the
source
and
the
destination
corresponding
and
for
us
it
is
important
to
understand
what's
behind
this.
So
if
anybody
could
explain
us
what
happens
in
the
network
in
Cyprus
network
operators,
here
we're
pleased
to
talk
with
you.
X
So
we
see
that
there
is
diamonds
with
very
large
max
width
and
in
particular
this
one
that
we've
found
so
yes,
so
here
is
max.
Width
is
96,
so
in
the
destination
and
sources
are
provided.
So
this
is
totally
IP
level
and
not
without
an
alias
resolution
and
the
last
one
that
is
important
for
us,
because
it
helps
us
to
define
the
heuristics
to
provide
in
better
MVA.
X
That metric is asymmetry. Here, on top, are two examples of asymmetry, and we provide the distribution. What is important to see is that the asymmetry values vary quite widely, so for us this is a good thing to take into account.
X
Here's
a
max
with
symmetry
is
39,
and
you
see
in
fact
that
there
is
so
in
the
red.
There
is
a
source
in
green,
there
is
a
destination
and
there
is
completely
two
divergent
paths
going
into
so
one
going
into
two
clustered
area
and
the
other
one
going
into
a
simpler
path.
So
here
also
will
please
to
discuss
with
network
operators
to
understand
better
what
is
going
on
here.
X
So
this
is
an
example
of
so
the
this,
the
the
diamond
on
the
right
as
a
percentage
of
Michigan
of
95%.
So
it's
what
we've,
what
we
have
found
in
the
data
and
the
last
one
just
to
to
give
an
example
of
when
you
enroll
a
topology
of
my
hope.
The
MBA
uses
eight
thousand
and
fifteen
five
hundred
packets
to
discover
all
this
topology.
So
that's
why
it's
important
for
us
in
in
their
constrained
environment
such
as
ripe
Atlas,
to
to
make
this
figure
lower.
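The packet count just quoted follows from MDA's per-node stopping rule, which can be sketched in simplified form. This is an illustration of the idea, not the exact algorithm: the real MDA applies a correction across nodes, while the sketch below only covers a single node and assumes the load balancer spreads flows uniformly.

```python
import math

# Simplified sketch of why MDA is probe-hungry: at a node with k discovered
# next hops, to rule out a hypothetical (k+1)-th next hop with failure
# probability at most alpha, one needs n probes with (k/(k+1))**n <= alpha.

def probes_needed(k: int, alpha: float = 0.05) -> int:
    """Probes needed at one node to bound, by alpha, the chance of missing
    an unseen (k+1)-th successor when k successors are already known."""
    return math.ceil(math.log(alpha) / math.log(k / (k + 1)))
```

The count grows quickly with k — 5 probes to confirm a single successor but dozens once a node is wide — which is how wide, long diamonds like the ones in the survey end up costing thousands of packets.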
A
Look,
it
looks
like
well,
we've
got
a
couple
minutes
for
questions.
If
anyone
has
one
anyone
other
than
me,
thanks,
Kevin
I,
think
I
I,
don't
know.
The
literature
here
I
think
it's
interesting
that
to
contrast
what
you
found
with
this
breadth
of
paths
versus
what
we
usually
call
the
diameter
of
the
internet
right,
we
usually
say
the
diameter
across
the
length.
Basically
the
trace,
we
see
a
maximum
length.
What
what
do
you
know?
What
can
you
tell
us
about?
A
X
So
there
is
a
paper
from
aristocracy
in
the
translation,
the
networking
which
has
done
a
survey
also
almost
ten
years,
seven
or
eight
years
ago,
and
it
defines
some
metrics
that
are
present
here,
like
Lenten
whiz,
but
we
have
added
the
asymmetry
and
machine
because
for
our
eristic
s--,
these
metrics
are
important.
Y
All
right
Carlos:
do
you
think
that
this
information
or
these
techniques
could
be
used
by
an
attacker
to
discover
weak
points
in
the
network,
in
particular,
if
there's
a
wide
node,
you
know
that
that
could
mean
something
or
if
there's
a
non
wide
node.
That
could
mean
something
and
could
you
as
an
attacker,
try
to
target
a
specific
load
balance
node
and
take
it
down
because
you
got
identified
it
as
a
bottleneck,
for
example,
so.
X
M
Yeah — Tim Chown. Thanks very much, it's very interesting. I regularly use a package called perfSONAR — I don't know whether you've heard of it — for measuring loss, latency and throughput between quite a large number of sites, and one of the issues there is that its traceroute task has no idea, when the path changes, whether the change is in the local site, the far site where the measurement nodes are hosted, or somewhere in between. I think the interesting thing is to understand, simply from an algorithm: is it something in the local site?
M
I think one of the interesting things you could get from this is some heuristics to determine, when you're doing a measurement between measurement points in two sites — using whatever package it is that does some kind of trace — is it a change in the path in the local site, a change in the path in the remote site where the device is hosted, or somewhere between those two sites?
X
M
Z
Okay, hello everyone. Let me start by giving a quick recap of what we have — a campus network — because that's the main motivation for this work. As operators of a university campus network, we have two ways to understand what's happening: NetFlow records and some logs from the DPI or the firewall, and that's about it. Now, there's this thing that's been popping up a lot over the last few years — you probably recognize where this comes from.
Z
Basically,
if
we
start
to
encrypt
everything
looking
at
net
fruit,
racists
or
DPI
locks
or
subtract,
this
from
be
useful
at
all.
In
order
to
understand
what's
happening
with
the
network,
do
I
have
a
transmission
diagnosis.
We
are
doing.
These
kind
of
things
seem
to
debates.
That's
been
happening.
The
creep
working
group
for
just
one
bit
in
the
clear
does
not
make
it
look
like.
Z
It
will
be
easy
to
understand
about
anything
else
about
it,
and
so,
basically
we
want
to
be
able
to
travel
through
the
network
at
our
scale
and
we
have
no
visibility
of
the
traffic.
So
the
main
inside
we
have
is
that
the
arrows
know
everything
about
the
connection
beads
for
TCP
as
of
now,
but
for
quick
answer.
Your
browser
sees
everything
the
clip
before
in
cryptic
and
transmitting.
Z
So
if
we
had
a
way
to
instrument
the
host
to
get
information
out
of
it,
get
exactly
what
we
want,
I
love
it,
we
would
be
able
to
troubleshoot
anything
the
ratio,
if
that
is
that
the
width.
This
is
that
so
far
we
have
MIPS,
so
the
TCP
MIPS,
for
example,
to
create
contours
to
SNMP
and
again
that's
about
it.
So
we
have
an
issue
there.
We
want
to
get
visibility,
insights
over
the
traffic
and
work
its
own
habits.
Z
What
we've
been
working
so
far
on
well
here.
The
thing
I'm
presenting
here
is
a
tool
on
which
we
have
been
working,
which
tries
to
pro
to
instrument
dynamically
and
host
so
computers
and
look
at,
for
example,
the
TCP
stack
on
the
Linux
kernel
or
DNS
resolution
routines
in
the
Lipsy
and
do
this
at
runtime
by
injecting
some
code
that
will
just
feedback
measurements
towards
some
user
space
demon.
Once
we
have
this
data
that
gets
extracted,
you
can
just
explore
it
again
over
generic
civilization
format.
Z
Society
such
as
IP
fix
you
don't
want
to
export
anything.
You
want
to
export
surf
that
we're
able
to
understand,
and
that
should
be
a
bit
that
you
should
be
able
to
apply
to
about
any
protocol.
So
we
extract
transition
for
state
machine
surfing
statistics
about
what
has
been
happening
in
the
TCP
handshake
and
then,
what's
when
the
connection
has
been
established,
what
happened
over
there?
We
did.
We
see
less
retransmission
this
kind
of
thing,
and
we
look
for
very
specific,
specific
bits
of
information
that
we
get,
that
we
can
extract
directly
from
there.
Z
So
that's
a
high-level
overview,
the
tool,
and
originally
we
wanted
to
stop
there,
but
then
we
deployed
it
and
it
turns
out
that
doing
a
deployment
is
quite
interesting
because
your
own
stuff
about
your
network
that
you
know,
did
not
expect
there.
What
so
we
push
this
on
every
single
machine
that
the
computers
that
students
views
in
our
labs,
because
the
computers,
the
student
mini
boroughs,
Facebook
Google,
and
this
kind
of
stuff?
We
have
three
C's
toward
these
services.
These
are
mainly
TCP
and
DNS
traces
because
well,
quick,
is
still
a
bit
hard
to
instrument.
Z
So
far,
that's
a
work
in
progress
and
I've
collected
here
data
from
the
past
month
and
maybe
just
as
a
final
comment.
We
have
a
network,
that's
vastly
of
a
project
provision,
so
we
don't
really
see
a
lot
of
big
issues,
but
we
still
have
a
few.
So
this
is
a
plot
that
shows
the
amount
of
flows
we
got
over
pv6
and
ipv4,
and
so,
as
I
said
earlier,
students
rows,
Facebook
and
Google,
and
so
these
are
dual
stack
services.
That's
the
delegate
we
get
from
there
and
the
ipv6
peak
for
all
the
next
graphs.
Z
The
colors
should
match
for
pv6
on
ipv4,
and
you
will
well
get
forms
every
12
hours
over
the
month,
I
care
about
the
people
for
an
ipv6
in
this
particular
example,
because
in
our
network
these
are
routed
over
different
epochs.
So
these
may
have
exhibit
differences,
and
so
the
first
performance
spawn
that
I
tried
to
look
into
us.
Did
we
have
connection
experiencing
word
gnosis
so
in
this
particular
case,
where
that
connection
that
we're
losing
some
TCP
scenes,
for
example,
when
we're
trying
to
establish
a
connection
and
how
is
it
comparing
course
across
IP
versions?
Z
Well,
that's
a
result
here,
so
we
have
actually
some
connection.
So
that's
a
tiny
amount
where
it's
distributed
evenly
over
ipv4
and
when
you
look
at
the
destination
addresses
that
were
targeted
by
this
particular
connection.
These
were
uniform
so
without
any
particular
bias
towards
a
given
destination.
Nothing
was
happening
over
v6
if
we
filter
the
data
and
check
only
connections
that
had
more
than
one
loss
in
the
number
of
scenarios.
Z
So
we
have
something
in
our
own
network
that
appears
to
be
I,
don't
know
overloaded
or
dropping
some
things.
We
don't
really
know,
and
it's
low
in
small
in
small
enough
that
when
we
query
our
network
administrator's
back,
we
don't
even
see
it
because
that
doesn't
appear
in
there
up
there.
Well,
it's
too
small
to
be
flagged
in
their
data
logs
so
that
just
one
of
the
corner
between
kind
of
no
ties
it
that
we
did
not
expect.
That's
a
super,
simple
parent!
Once
you
are
able
to
get
data
directly
from
the
annals.
AA
Z
Just to make sure, we also looked at connections that were failing outright, and there — well, the network works; that's what's happening here. We have a small spike that we investigated, and actually all of those connections were linked to some weather applet that was running on every student's computer and doing queries — basically we didn't know what was actually out there doing queries, so we had to clean up the network a bit. If we move on from the initial TCP SYNs, we can look at TCP RTT.
Z
So
these
are
media
smooth
our
duties
as
extracted
from
the
Linux
kernel
directly
and
compare
it
or
over,
for
example,
ipv4
ipv6
and
across
from
providers.
So
students
use
microsoft
services
to
get
emails
to
use
a
la
the
online
office
who
so
we
know
we'll
get
data
over
there?
Well,
Google!
That's
how
they
work
that
Google
depression.
Z
But
if
you
look
at
your
TT
over
there
well,
we
actually
again
have
no
clue
as
to.
Why
do
we
have
such
a
vast
difference
between
two
services
that
are
used
to
be
presenting
Parvati
versions?
So
that's
yet
another
data
point
we
need
20,
BC
I,
don't
know
you
know
or
network,
and
especially
because
we
are
starting
to
speak
here
but
latency
about
RTT.
That's
something
that
we
well
at
this
or
opera.
Torah
had
no
knowledge
about,
because
I
could
not
track
it
tires
bike.
Z
We
actually
really
need
to
take
a
deeper
look
at
what's
happening
now
in
our
own
network.
So
far
all
have
presented
our
results
that
are
tiny
that
currently
have
no
widespread
impacts,
because
because
people
are
not
complaining
that
much,
but
looking
at
all
of
the
data
we
analyzed,
these
were
kind
of
spikes.
So
the
track
that
expected
and
if
for
network,
was
to
be
more
more
used,
I,
don't
know
this
could
start
to
raise
issues.
Z
I
conclude
the
talk.
We
just
recap
us
something
we
actually
had
in
February
and
so
for
once
we
got
complaints,
so
students
were
having
issues
to
reach
the
online
course
platform,
so
Moodle
and
varied
wrestling
while
it's
slow,
it
takes
way
Malick's
wave
slower
than
usual,
and
so
how
do
you
get
run
troubleshooting
this
problem,
if
you
just
have
daylight,
but
that
you
can
pro
sue
and
that's
very
simple.
Z
You
know
that.
Well,
do
DNS
query,
so
value
use
our
own
DNS
resolver.
So
we
just.
We
can
just
look
at,
for
example,
the
time
to
establish
well
the
time
it
gets
students
the
time
it
took
students
who
get
a
DNS
reply
furnace
from
the
servers
that
we
know
are
co-located
in
the
same
data
center
that
web
servers
and
compare
it
to
the
time
to
establish
at
that
time
to
get
the
cynic
firm,
the
TCP
load
balance
over
there.
The
few
ties
were
in
this
part
of
the
data.
Z
The
data
center
was
200
meters,
away
from
the
post,
instrumenting
and
so
I
guess
at
30
milliseconds
for
just
200
meters
Prairie,
not
that
great
once
the
program
get
noticed.
Well,
that
was
a
load
balancer
that
was
over
row
there,
etc.
So
that's
upset
but
ethically
disappeared.
So
we
know
it
must
need
to
redress
that
because
the
Pearl
is
fixed
and
so
just
as
final
thoughts.
We
build
a
genetic
approach
which
actually
did
not
get
into
detail
here,
because
there's
that's
academic,
formal
stuff.
Z
Before
state
machine
reduction,
these
kind
of
things
we
have
a
prototype
that
instruments
tax,
that
students
currently
use
on
or
network.
Because
that's
what
you
care
about.
We
have
a
prototype
for
quick,
but
there
are
technical
challenges
to
deploy
it
over
there.
But
the
main
question
and
asking
is:
is
there
interest
for
a
kind
of
an
even
base
reporting
monitoring
mechanism
for
transport
protocols,
especially
if
we
start
to
encourage?
Because
we
don't?
We
can't
have
a
middle
box
sitting
in
the
network
decoding
everything
that's
visible.
Z
A
So let me — yeah, there's a question back there. We've got about seven minutes, so we may be able to answer those, and then I'll close. I think the mic might be off back there.
Z
The challenge we have with QUIC is that there are twelve implementations, and so this is really tailored to the owner of the network. We know that students, for example, here in our infrastructure have Chromium; we have the symbols; we can instrument that one. If they move to another implementation, we need to make it compatible, and the real struggle here is that we need to define what would be better than doing it on a per-stack basis — a generic way to instrument it.
N
AC
Neal Cardwell, Google. I just wanted to offer a conjecture on the mysterious delay spikes in some of the graphs. I noticed that some of the TCP RTTs you're seeing are suspiciously similar to delayed-ACK values on common operating systems — 200 milliseconds and 40 milliseconds. So that would be one thing to look at: that one was 40, and that one's 200. That's one possibility — I don't know if you looked.
F
Z
A
So, just in closing: you might have noticed the last two talks were a little different from the earlier ones. If you didn't notice, the way they were different is that they're a little bit more about the measurement tool or the strategy, and in MAPRG we specifically say we want to bring in measurements that provide insights into the engineering of the protocols or the operation of them.