From YouTube: IETF115-MAPRG-20221110-1300
Description
MAPRG meeting session at IETF115
2022/11/10 1300
https://datatracker.ietf.org/meeting/115/proceedings/
A: They'll just miss the intro, which is — I think you're fine. Good afternoon, folks in London. I'm Dave Plonka, co-chair of the Measurement and Analysis for Protocols Research Group, with Mirja Kühlewind. Our other co-chair is with you there in London, and she'll help me out when I either make a mistake or there's something we need to do in person that I can't do remotely. I'm at home, Central Time in the U.S. — 7 in the morning for me — and we've got a nice program today.
A: So this is the Note Well regarding intellectual property. As with the IETF, in the IRTF, as you share details, you should tell us if you have intellectual property concerning the presentation. So please read up on that — both the points here and the documents linked there — if you need to inform us about intellectual property that you're claiming. This session is being recorded, both audio and video, and will be available on YouTube afterwards.
A: I will be posting to the mailing list a link to each individual talk, which you can jump to on the MAPRG wiki after the meeting. This is the privacy and code of conduct.
A: As we said, we usually have pretty well-behaved people in this meeting, but we expect you to be well behaved, so please be respectful to other people. We'll have an extra 10 minutes in the session today to split between questions and answers between the talks, but please get in the queue if you have a question, and we apologize in advance if we have to cut you off. But anyway, please be respectful and respect the code of conduct for the IETF and the IRTF.
A: The goal of the IRTF is to do research work, coincident with the IETF meetings, but we're not a standards organization. So if you're new to the IRTF, just realize it's a little different from what happens in the IETF: in some ways it's a little looser, and it's got a slightly different focus. There's a primer that you can read, linked there, if you're new. So here's the charter for MAPRG: the Measurement and Analysis for Protocols Research Group is generally a group that's about studying the protocols defined by the IETF — in measurement, in practice and in operation — and also techniques to measure internet protocols. That's primarily our focus. There's a mailing list, of course, and you can subscribe to it, linked there. And there's the link to the slides and to Meetecho.
A: Of course, the meeting is being conducted for both remote and local participants through Meetecho, so please use that. In-person participants, please keep your audio and video off, and mask in the meeting room there. For remote presenters, a headset is great and will help you through getting your slides up, but please propose them when it's your turn.
A: So the agenda for today: once we get past this overview, we've got a series of talks, and we're pleased to bring them to you this time, because this is the time of the year that's coincident with the Internet Measurement Conference, one of the top academic research conferences having to do with internet measurement.
A: We've got a number of contributions that were presented just recently, or accepted and presented at IMC, and then Mirja and I also solicited some that we thought were particularly appropriate for MAPRG, and that's what we're going to see today — in addition to, as usual, a couple of new topics of unpublished work in either case.
A: Please share your ideas, especially from a standards and operational point of view, with the authors, or — if we have time — here in the question-and-answer portion after each talk. So we're going to do, first off, a short talk by Pawel Foremski about a system they built called Kirin, about distributed advertisements in IPv6 BGP.
A: Then we're going to go into a series of 15-minute talks, some of which are unpublished and some of which are from IMC. Waiting for QUIC — I believe that's unpublished; there's a link in the meeting materials to the pre-print for that on arXiv. Authors, please correct me if I'm wrong, if I got one that was already published and I said it wasn't. Then DNS Privacy with Speed.
A: Next, a First Look at Starlink Performance, another IMC contribution that we accepted, from François; and then Illuminating Large-Scale IPv6 Scanning, another IMC contribution, from Philipp Richter and his collaborators. Then Leslie Daigle will come up with a new piece of work about IoT security by the numbers, and Ignacio Castro will be up next with COVID and the IETF, considering something unusual — measuring the IETF itself and whether or not it's ossified — so sort of a little variation on the kind of measurements we typically look at. And then we'll close out with Tom Akawate talking about Where.ru, looking at the conflict — the war in Ukraine — and assessing the impact of that.
A: How can you study that, based on what we see in the DNS? So that's the agenda for today. As I said, we've just got about 10 more minutes than the time we've given to each author there, so keep that in mind when you're getting into the queue, and that's why I closed it — or why we closed it — when we did. Before we switch to Pavel: Pavel, if you're ready, you can propose your slides and switch to them right after this here.
A: One of the potential contributions that we couldn't fit in this time was a short paper from IMC, and they wanted to let you know — they provided these slides for us as a heads-up — that they're creating a testbed for low-earth-orbit satellite ISP systems. You can contact Muhammad, the first author of the IMC paper — his email address is in the paper — and read their short IMC paper about measuring a browser-side view of Starlink connectivity. And then, lastly, Muhammad offered to share with you that they're starting up a new webinar series called LEOCON, which he says is bi-monthly — I'm assuming that means every other month as opposed to two times a month — and if you're interested in that, there's the information and a link to it there.
A: I don't see — oh, he's here. Okay, you mean here in London. All right, take it away, Pawel. You've got five minutes.
D: All right. Hello, everyone. I'm Pawel from the Polish Academy of Sciences. I'm here today to shortly present our new paper, joint work with Lars and Oliver from the Max Planck Institute for Informatics. In this paper we are asking ourselves whether we should reconsider a well-known prefix de-aggregation attack against BGP, and in this talk I will shortly mention three points: first, a new context for this attack; then some results; and recommendations for BGP operators. Next slide, please. Excellent.
D
Okay,
so
first
the
context,
as
you
probably
know,
bgp
routers
don't
have
unlimited
space
for
their
fips
and
rip
tables,
mostly
because
sticker
memory
is
expensive,
for
instance.
So
if
you
manage
to
announce
too
many
routes
to
a
bgp
router
many
bad
things
can
happen
to
it,
for
instance,
it
can
crash
and
what,
if
we
manage
to
overflow,
not
one
or
few
erupters,
but
many
routers
that
run
the
internet.
D: So we are revisiting a well-known prefix de-aggregation attack, but we add a few new things that we believe can convince the community that this attack might actually be easier to exploit than previously thought. First, we are using IPv6, since it's much easier to obtain large v6 prefixes compared with v4. For instance, a /29 allocation — which is easy to obtain for an LIR, because it doesn't need additional justification — could allow you to split it into roughly half a million possible sub-prefixes down to /48, which is good for BGP: that is, it propagates globally.
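The arithmetic behind the claim above can be sketched as follows; the prefix value is illustrative (documentation space), and /48 is used only because it is the longest IPv6 prefix that generally propagates globally, as the talk says.

```python
# Sketch: how many /48 more-specifics fit inside a single IPv6 /29 allocation?
import ipaddress

allocation = ipaddress.ip_network("2001:db8::/29")  # hypothetical LIR allocation
MAX_GLOBAL_LEN = 48  # longest prefix that typically propagates in the IPv6 DFZ

# Each extra bit of prefix length doubles the number of sub-prefixes.
subprefixes = 2 ** (MAX_GLOBAL_LEN - allocation.prefixlen)
print(subprefixes)  # 2**19 = 524288, i.e. roughly half a million /48s
```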
D: Then, we no longer need to be physically present at facilities like IXPs to establish BGP sessions. For instance, more than 10 percent of peers at the larger IXPs are remote, or a five-buck VM at Vultr already has an option for free BGP transit sessions — that is, a session per VM. So in the paper we also show how one can use a pool of distributed BGP sessions to work around BGP max-prefix limits and prefix aggregation.
D: So, some results: is it possible to launch Kirin today against the internet? Based on simulations on real BGP data — the real infrastructure that runs the internet — we believe so. As you can see here, for instance, on the left-hand side you see the result of an optimization of how many transit providers and how many peering LANs you need to obtain a given number of v6 sessions. So for a big enough attacker — like a state-sponsored attack — using just 20 providers and 25 LANs, you can obtain more than a thousand v6 sessions.
D: Here, for instance, we used a real Juniper MX5 router — it's quite popular — and the virtual Cisco XRv9000, and we checked how many routes we can announce to it until it crashes. In the best case, where the AS-path length was just one and without BGP communities, the numbers were 2 million and 5 million respectively for Juniper and Cisco. However, in the worst case — which is here in orange — we maxed out the AS paths, so each has the maximum length, and we added as many large BGP communities as we could.
D: The numbers were 100K and a million respectively. Of course, this is not completely realistic, but it shows some bound. And we ran these experiments on empty tables and with minimal router configuration, so if you consider the reality — the v4-plus-v6 internet is now 1.1 million routes combined — there is likely little headroom left. So you normally wouldn't need to announce a full million new v6 routes to crash the routers. Next slide.
D: So, finally, what should you do? First of all, don't worry — it's quite easy to detect. We encourage more monitoring, to make sure there isn't someone preparing for this attack, and we came up with a few recommendations for operators.
A: Right, let's move on to Jonas Mücke's presentation.
G: I'm from Freie Universität Berlin, and today I'm going to present passive measurement opportunities using QUIC. This is joint work with Marcin, Patrick, Johannes, Georg, Thomas and Matthias. In this work we focus on hypergiant deployments, so we should first have a common understanding of what a typical hypergiant deployment might look like, and this is what you can see here on the slides. Usually a user will resolve a domain using DNS and in return get an IP address; in the case of hypergiants —
G
That
will
often
be
a
virtual
IP
address,
so
it
does
not
represent
just
one
host.
But
multiple
hosts
can
answer
requests
for
this
IP
address
and
then
the
user
will
use
any
protocol
to
connect
to
the
hyper
using
this
IP
address
to
the
hyper
Giant
and
then
at
the
hyper
giant.
The
request
will
pass
through
multiple
layers
of
load
distribution
using
ecmp,
for
example,
and
then
multiple
layers
of
local
lenses,
so
layer,
4
and
layer,
7
load
balances
throughout
this
presentation.
G: Okay, I hope that's better. So what have past measurement studies done? Past measurement studies have conducted active QUIC scans — sending out requests to, for example, the entire IPv4 address space — or they have used DNS to infer the number of IP addresses used by hypergiants, or how many domains they use. But what we focus on in this work is understanding the content-serving infrastructure of hypergiants.
G: So we want to understand the last part, which you can see here on the slides — the layer-7 load balancers. How we do this is: we first identify servers of specific hypergiants and their server configurations; then we use those detected server configurations to identify off-net servers; and then, as a last step, we look into how they deploy their layer-7 load balancers. And we do this in a non-intrusive way. Well, how? We do it by analyzing QUIC backscatter traffic. And why do we use QUIC?
G: Well, it's already broadly used by hypergiants — for example, in 2020 already 75% of Facebook's traffic was QUIC — so we're going to see a lot of it, and we can use it. And why do we use backscatter? Well, it's a non-intrusive and relatively easy-to-capture data source. So here you can see our measurement setup. Basically, we use backscatter: backscatter is just response traffic from spoofed packets.
G
So
if
an
attacker
sends
a
smooth
packet
to
a
server,
the
server
then
replies
to
that
spoofed
address
and
if
that
Spirit
address
is
in
the
range
of
our
Network
telescope,
which
is
just
an
IP
address
range,
we
can
see
these
packets
and
conduct
our
analysis
on
them
and
for
the
quick
handshake
that
you
see
here
in
the
middle.
That
usually
means
this.
An
attacker
will
send
an
initial
packet
and
then
the
server
will
reply
with
initial
intent
check
packets,
and
so
that
is
so.
G: That is what we observe in the telescope. And what we can additionally see here is that QUIC uses connection IDs: during the connection setup, each endpoint — the client and the server — determines the connection ID that is used to assign packets to QUIC connections at that endpoint, and the connection IDs are set by the respective side of the connection.
G: So first we had a look — what you can see on this slide is the reception of packets at the network telescope for different QUIC connections — and what we can see is that there are distinct patterns for different hypergiants when they resend packets. You can observe that Facebook starts to resend packets after a certain interval, and you can also see that the intervals double in size each time. So what we learned from this is that they use exponential backoff.
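The pattern being described — retransmission gaps that roughly double — can be sketched as a simple check over packet arrival times. This is a hedged illustration, not the study's actual classifier; the timestamps and tolerance are made up.

```python
# Sketch: does a sequence of retransmission timestamps look like exponential
# backoff, i.e. does each inter-arrival gap roughly double the previous one?
def looks_like_exponential_backoff(times, tolerance=0.25):
    """Return True when successive gaps grow by a factor of ~2."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return all(
        abs(nxt / prev - 2.0) <= 2.0 * tolerance  # next gap within 25% of 2x
        for prev, nxt in zip(gaps, gaps[1:])
    )

# Hypothetical arrival times (seconds) of repeated Initials at the telescope.
arrivals = [0.0, 0.2, 0.6, 1.4, 3.0]   # gaps: 0.2, 0.4, 0.8, 1.6
print(looks_like_exponential_backoff(arrivals))  # True
```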
G: Next we'll have a look at the server connection IDs — the connection IDs that are set by the servers — and we'll look at their hexadecimal representation, from left to right. This is what you can see in those graphs: on the x-axis you see the position in the connection ID, and on the y-axis —
G: You see the value that we observed there — or rather, the frequency with which we observed it. What we find is that Google and Facebook use the same length of connection IDs, while Cloudflare has significantly larger, 20-byte-long connection IDs. And we see that some values are more frequently used than others, while for Google we observe just a random distribution of values. What this means is that there is information encoded in the connection IDs, and in fact, if you look at the implementations, you can find details about this — for example, Facebook's.
G
So
now
we
will
use
this
information,
so
we
know
that
Facebook's
host
connection
IDs
will
always
begin
with
the
version
and
we
know
the
values
of
that
version.
So
we
can
now
use
this
to
fingerprint
hyper
Giant
deployments
and
if
we
do
so,
we
can
relatively
accurate,
find
Facebook
service
and
we
still
have
a
some
false
positives,
but
we
can
reduce
those
with
additional
information.
So
what
we
found
during
our
measurements
is
that
Facebook
uses
low
host
IDs.
G: Those host IDs are also part of the connection ID, so we can additionally predict the first positions of the host ID. We can now use 11 bits of a 64-bit connection ID to predict which servers are Facebook off-net servers, and so we can significantly reduce the number of false positives.
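The fingerprinting idea can be sketched generically: if a deployment encodes a known version field (and predictable leading host-ID bits) at fixed bit positions of its connection IDs, a handful of bits suffices to flag candidate servers. The offsets and expected value below are hypothetical, not Facebook's real layout.

```python
# Sketch: match a connection ID against a known bit-level fingerprint.
def extract_bits(data: bytes, start_bit: int, n_bits: int) -> int:
    """Read n_bits from a byte string, starting at start_bit (MSB-first)."""
    value = 0
    for i in range(start_bit, start_bit + n_bits):
        byte, offset = divmod(i, 8)
        value = (value << 1) | ((data[byte] >> (7 - offset)) & 1)
    return value

EXPECTED_VERSION = 0b10  # hypothetical 2-bit "CID version" value

def matches_fingerprint(cid: bytes) -> bool:
    # 8-byte CID whose leading two bits equal the known version value.
    return len(cid) == 8 and extract_bits(cid, 0, 2) == EXPECTED_VERSION

cid = bytes([0b10000000]) + b"\x00" * 7  # leading bits '10' -> matches
print(matches_fingerprint(cid))  # True
```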
G
So
now
the
question
is:
why
would
you
actually
need
to
encode
host
IDs
well,
classically
the
layer,
follow-up
lenses
forward
packets
using
a
consistent
hashing
on
the
five
Tuple,
and
that
works
fine
for
UDP
and
TCP
connections?
But
for
quick,
it's
possible
that
these
that
the
ports
or
the
IP
address
changes
during
an
existing
quick
connection.
So,
for
example,
during
client
migration,
the
IP
address
might
change
and
in
that
case,
layer,
4
load
planser
would
generate
a
different
hash
and
forward
to
a
different
host
ID,
which
would
break
the
connection.
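The contrast being drawn can be sketched in a few lines: a five-tuple hash changes when the client's address changes, while routing on a host ID carried in the connection ID stays stable. The hash, server pool and "host ID in the first byte" layout are purely illustrative.

```python
# Sketch: five-tuple hashing vs. connection-ID routing under client migration.
import hashlib

SERVERS = ["lb7-a", "lb7-b", "lb7-c", "lb7-d"]

def pick_by_five_tuple(src_ip, src_port, dst_ip, dst_port, proto="udp"):
    # Classic L4 approach: hash the five-tuple onto the backend pool.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    idx = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(SERVERS)
    return SERVERS[idx]

def pick_by_connection_id(cid: bytes):
    # Hypothetical CID layout: backend host ID encoded in the first byte.
    return SERVERS[cid[0] % len(SERVERS)]

cid = b"\x02" + b"\x00" * 7
before = pick_by_five_tuple("198.51.100.7", 50000, "203.0.113.1", 443)
after = pick_by_five_tuple("192.0.2.99", 50000, "203.0.113.1", 443)  # client moved
print(before, after)  # may differ: migration re-hashes to another backend
print(pick_by_connection_id(cid))  # unchanged by migration: CID stays the same
```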
G: This avoids having to share state between the layer-7 and layer-4 load balancers, or needing anything else to look deeper into the connection. Yeah.
G: And now that we know what the host IDs actually denominate — the layer-7 load balancers, where the QUIC connection terminates — we conducted active measurements: we sent out packets to Facebook servers and so on, connecting 20,000 times to those Facebook servers. We then collect the server connection IDs and extract the host IDs from them.
G: If we do so, we get around 37,000 different host IDs, of which around 19,000 we can already see in the passive measurement data set. And if we now group the virtual IP addresses that we scanned into clusters whenever they share at least one host ID, we get to this picture: 112 different clusters, most of the size of 22 virtual IP addresses, and there are three clusters that differ in size.
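The clustering step described here — merge VIPs into one cluster whenever they share at least one host ID — is a connected-components problem, which can be sketched with a small union-find. The toy data is invented; the study worked on scanned Facebook VIPs.

```python
# Sketch: cluster virtual IPs that share at least one host ID (union-find).
from collections import defaultdict

def cluster_vips(vip_to_hostids):
    parent = {v: v for v in vip_to_hostids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index VIPs by host ID; any shared host ID links the VIPs that use it.
    by_hostid = defaultdict(list)
    for vip, hids in vip_to_hostids.items():
        for h in hids:
            by_hostid[h].append(vip)
    for vips in by_hostid.values():
        for other in vips[1:]:
            union(vips[0], other)

    clusters = defaultdict(set)
    for v in vip_to_hostids:
        clusters[find(v)].add(v)
    return sorted(map(sorted, clusters.values()))

demo = {"vip1": {1, 2}, "vip2": {2, 3}, "vip3": {9}}
print(cluster_vips(demo))  # [['vip1', 'vip2'], ['vip3']]
```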
G
So
there's
one
with
22,
20
and
44
virtual
IP
addresses,
and
what
we
find
is
that
each
of
these
clusters
only
spans
a
single
24,
IP
prefix,
while
there's
one
exception
to
this
Rule.
And
what
we
found
is
that
single
cluster?
It's
not
a
single
cluster
behind
behind
up
here.
Behind
the
virtual
IP
addresses
we
can
we
always
detect
the
same
number
of
host
Eddies.
So
not
only
one
host
ID
is
shared
behind
the
virtual
IP
addresses,
but
all
host
IDs
in
general.
G
So
if
you
want
to
derive
information
about
the
cluster,
it's
sufficient
to
scan
just
one
virtual
IP
address,
and
you
know
how
many
different
host
IDs
there
are
in
this
cluster.
So
ultimately
we
use
Simple
ipeg
allocation
to
geolocate
the
Clusters,
and
what
we
then
found
was
that
the
Clusters
in
Asia
are
significantly
larger
than
in
any
other
continent,
and
this
was
surprising
to
us
first,
but
we
found
that
Facebook
has
fewer
data
centers
there
and
the
population
is
pretty
large.
So
that's
probably
the
reason
for
this.
G
So
will
this
approach
work
in
the
future?
Well,
we
think
there
will
be
no
internet
without
attackers,
so
there
will
always
be
back
Sketcher,
and
so
we
can
use
this
approach
to
further
analyze
hypergiant
deployments,
and
we
even
think
that
the
back
scatter
will
increase,
because
it's
not
yet
at
a
level
of
TCP,
and
this
will
increase
increase
likely
with
with
further
adoption
of
quick.
G
So
to
conclude,
we
have
learned.
We
can
use
passive
measurements
to
learn
a
lot
about
hypergiant
deployments
and
we
can
then
use
what
we
learned
to
create
fingerprints
and
detect
of
net
servers,
and
we
have
seen
that
structure
connection
IDs
can
be
used
to
simplify
routing,
there's
already
an
ITF
draft
for
this.
So
maybe
have
a
look
at
this.
This
will
also
not
reveal
your
layer,
7
loop
and
answer
infrastructure
in
the
default.
Configuration
with
that.
Thank
you
for
your
attention.
You
can
find
more
details
in
our
paper.
A: Thanks a lot, Jonas. We've got a couple of folks in the queue already, and a few minutes to take questions and for you to respond. It looks like Lorenzo is up — go ahead.
I: Yeah, out of curiosity — why is it interesting that these are passive measurements? I mean, it feels like you could just go to any random airport lounge or coffee shop on the internet and conduct these measurements actively, and this would work perfectly well for active measurements as well, right?
G: Yeah, it works for active measurements, but it's interesting because, for example, you can analyze your competitors, and it's non-intrusive — you don't have to send all those packets; you save some traffic and load.
G: Active scanning allocates state at the servers, so you take out capacity that could otherwise be used.
J: Okay, thank you. So it sounds like these are source-spoofed DDoS attacks on the target infrastructure, not reflection attacks.
J
Thanks
and
my
other
question
is
based
on
this
observation,
would
you
recommend
the
use
of
of
it
block
Cipher
approaches
to
quick
connection,
ID
generation
or
in
general,
you
know,
randomized
connection
IDs
that
have
no
visible
patterns
like
like
you
saw
in
the
Google
servers.
G
I
would
suggest
to
look
at
the
draft
at
this.
This
Heights
the
this
information
that
we
have
seen
like
for
Facebook
and
used,
but
you
will
always
try
to
have
some
information
to
make
routing
of
your
quick
packets
easier.
A: All right, thanks, Ben. Why don't you guys carry on with the draft and/or contact Jonas outside, and let's switch to Marwan.
H: Marwan, just in the interest of time — not necessarily an answer required here, but I think an interesting question is, rather than what structure you can see, how people are achieving the lack of structure, especially if you want to preserve that connectivity. That seems to me an interesting question, and useful for preventing people from deriving structure.
L: Hello, everyone. First of all, some context: I'm not one of the main authors of this paper — I was mainly a kind of consultant on the QUIC and web performance parts — but the main authors couldn't be here today, so I'm filling in as the presenter. Next slide, please.
L: The main differences between the two papers are that we now have a severely updated DNS-over-QUIC implementation — mainly adding session resumption, which helped with quite a few bugs relating to things like amplification prevention, which kind of muddled the results in the first version. We now also do measurements from multiple vantage points instead of just one, and of course we don't just look at DNS performance, as we did in the first paper.
L
Why
do
we
want
to
look
at
web
performance?
Most
of
the
web
contents
are
still
already
over
https,
but
we
still
have
some
privacy
leakage
over
DNS.
Obviously,
this
is
partially
resolved
by
Dot
and
Doh,
but
of
course
we
pay
a
heavy
performance
penalty
due
to
the
handshake
that
needs
to
happen.
The
idea-
or
the
hope,
is
that
quick
with
its
1,
rtt
or
zero
TT
handshake
can
help
alleviate
some
of
these
problems.
Like
I,
said.
L: Not all of the resolvers we found even do DoUDP, so if we filter for only the ones that do all five of the DNS flavors we wanted to test, we end up with only 313 — and that's what we tested. Most of them are in Europe and in Asia, as you can see on the map. We then have six different vantage points on Amazon, from which we executed the web performance measurements. Next slide, please. For that, we automate Chromium with the Selenium framework.
L
We
test
only
the
10
most
popular
web
pages,
going
to
the
Tranquil
list,
because
chromium
also
doesn't
do
all
the
flavors.
We
want
it
or
had
a
configurable
option
for
all
of
them.
We
run
a
local
DNS
proxy
next
to
the
chromium
instances
that
can
then
talk
the
different
flavors
to
the
actual
resolvers
next
slide.
Please
so,
basically,
for
each
of
the
web
page,
we
run
a
new
test
for
all
of
the
different
things
that
we
can
test
the
protocols.
L
The
results
Finance
points
we
do
that
about
four
times
spread
across
one
week
in
April
this
year.
Interestingly,
we
do
two
independent
measurements.
The
first
one
is
to
do
some
bootstrapping
and
the
second
one
is
the
actual
web
performance
measurements.
The
bootstrapping
step
is
mainly
to
populate
the
DNS
cache,
not
the
web
browser
cache,
of
course,
that
would
heavily
impact
the
measurements.
L
Only
the
DNS
resolver
cache
should
open
beyond
have
a
recursive
resolution
impact
there,
but
we
also
store
things
like
the
actual
quick
version
that
is
in
news,
address,
validation,
tokens,
which
is
very
crucial
because
for
some
reason,
all
of
the
doq
service
we
found
have
retry
always
on.
If
you
don't
know
what
that
means
that
it
basically
incurs
a
one
rtt
delay
for
every
single,
quick
connection,
setup,
which
we
don't
want,
but
that's
bypass
by
address
validation
and
session
resumption.
L: Very interestingly, all of the DoQ servers do support TLS session resumption, which is very good, and none of them supports 0-RTT, which is absolutely terrible, because that again incurs at least a one-RTT delay for all of the handshakes we do. Interestingly, also, not all of them were fully up to date — with neither the latest QUIC version nor the latest DNS-over-QUIC version — so to me it seems that not all the resolvers are production-level quality.
L: Next slide, please. We see similar things for DNS over HTTP/2: good support for session resumption — great; no 0-RTT — terrible; no TCP Fast Open — somewhat interesting to some people, maybe. Next slide, please. All right, for the web performance specifically, we look at two different metrics. The first is called First Contentful Paint — that's kind of like the first big or interesting thing shown on the screen when you're loading a web page.
L: This happens relatively early in the web page load, so it should correlate relatively well with the DNS times, as we'll see. And then we have one very late in the web page load, when everything is almost completely done — the page load time — which should have less of a correlation, because a lot of extra stuff is going on there as well. Next slide, please.
L: What we can see, at about the 40th percentile, is that DoQ is about 10% slower than the normal DoUDP, and DoH is about 20% slower. So it's not a terrible difference for this metric, but DoH is about twice as slow as DoQ. And this trend holds — next slide — even as we move up to the 88th percentile, although DoQ obviously becomes slower than the DoUDP —
L: — baseline there as well. But still, 20% should, I think, be relatively acceptable in some cases. Next slide. So the conclusion we have in the paper is that DoQ indeed significantly improves over DoH, at least for this metric. Next slide, please. So let's look at a second metric: the total page load time per web page.
L: As I said, we do 10 different sites; we only show four here — those are the four columns — and the four rows are the different continents where we have the resolvers.
L
This
image
is
much
bigger
in
the
paper
for
more
context,
but
this
kind
of
highlights
what
we
do.
We
split
this
up,
we'll
split
this
up
in
two
parts
on
the
left.
We
have
relatively
simple
web
pages
that
only
do
a
single
DNS
resolution
per
load
and
then
on
the
right.
We
have
a
bit
more
complex
web
pages
that
do
eight
or
nine
DNS
resolutions
until
they're
loaded.
Interestingly,
for
reasons
this
has
a
different
Baseline
than
previous
graphs.
L
L: So let's look at the left side — the relatively simple web pages — at the median. We find that, as you might expect, what held for First Contentful Paint still holds for this metric: DoQ is about 10% faster than DoH, but it's also still 10% slower than pure DoUDP without a connection setup.
L: If we look at the more complex pages, we do see some different things going on: things get much, much closer together visually there at the median. For some of these there is even just a two percent difference between the DoUDP and the DoQ setup — without 0-RTT, very important. That was a surprising result to me.
L: What the team found was that this has a relatively simple explanation: you only have to do the connection setup once, and then you can reuse the connection for the eight or nine different lookups that you have to do, so you basically pay the overhead once. It has a heavy impact on the simple pages but a much smaller impact on the more complex pages, because DNS lookups can happen in parallel to other resources being downloaded, and you don't have to wait for things to go on.
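The amortization effect being described can be sketched with a back-of-the-envelope round-trip count: the handshake cost is paid once per connection, so its relative overhead shrinks as the number of lookups over that connection grows. The RTT, handshake cost, and sequential-lookup model below are illustrative, not measured values from the paper.

```python
# Sketch: encrypted-DNS handshake overhead amortized over N lookups.
RTT = 0.05            # hypothetical 50 ms round trip
HANDSHAKE_RTTS = 2    # e.g., a QUIC handshake without 0-RTT, before queries

def total_dns_time(n_lookups, setup_rtts):
    # One connection setup, then one RTT per (sequential) lookup.
    return (setup_rtts + n_lookups) * RTT

for n in (1, 9):  # a "simple" page vs. a "complex" page
    plain = total_dns_time(n, 0)                # Do53-style, no setup cost
    encrypted = total_dns_time(n, HANDSHAKE_RTTS)
    print(n, round(encrypted / plain, 2))       # relative overhead shrinks with n
```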
L: That's what you clearly see in this more complex — or later — page load time metric, so the exact impact you should expect kind of depends on the type of website that you have. Next slide, please — that was the conclusion I just gave. Next slide, please. Right.
L: So the conclusion is that DoQ does indeed seem to have quite a bit of promise to reduce the performance impact: we go from about, let's say, 20% on most pages to half of that, which I think is a nice gain, and it's even better for more complex web pages that do a lot of lookups — which I guess is most of them, in many cases. There's a big caveat, of course: we only tested 10 web pages and only 300 resolvers. We're very aware of that.
L
We're
trying
to
scale
up
our
deployment
and
get
a
bit
more
insight
into
what
these
different
resolvers
are,
but
we're
not
there.
Yet
that
leads
me
to
the
last
thing:
the
future
work.
Of
course,
besides
that
increasing
the
scale,
there's
of
course,
a
big
opportunity
for
zero
rtt,
which
should
bring
doq
even
much
much
closer
to
the
UDP,
of
course,
should
have
only
benefits,
I
assume
at
least
if
we
can
get
it
secure
and
then,
of
course,
when
I
said
Doh,
we
only
tested
Duo
over
http
2.
L: Recently, while we were working on the paper, DNS over HTTP/3 also became supported by quite some big deployments, according to some blog posts, so that's also something to look forward to in the future. I don't think it will have a huge impact by itself — I think the impact is mostly on the QUIC layer there as well — but we will see. All right, next slide.
L: You can find some additional stuff there. And before we go to questions, one last thing.
L: Those of you who were in the QUIC working group earlier saw that I brought some Belgian chocolate. There were very few people who took me up on that, so there's plenty left — and actually, I have to leave right after this meeting; I have to go catch a train back to Belgium, and I refuse to take Belgian chocolate back to Belgium. I will not do it. So I will leave it here with Mirja, and whatever is left after Mirja's done with it, please —
A
Come
up
to
thanks
Robin,
we
got
four
people
in
the
queue
and
we
got
about
well
four
or
five
minutes
to
deal
with
that.
So
why
did
why?
Don't
you
go
Lorenzo.
I: Super interesting, thank you. Thank you also for noticing the Android implementation. So I have a few questions. First of all, just to clarify: these resolvers are just random ones that you found, is that right? Okay — so who knows what they are or what they do. Okay, exactly.
I: Second question: did you graph DNS response times as well, or just the page load times? No?
I: Raw milliseconds — but you need the percentage improvement to figure it out, yeah, because I would expect the first query to be the same, at 2x the cost, and the second queries to be — yeah.
N: Yep — so, thanks for this. First, I'm just going to comment: it's quite concerning that we're seeing deployments of DoQ servers that are configured so badly for fast connection setup. Sending a Retry is crazy for this, and the fact that they're doing session resumption without doing 0-RTT is also very bad, because that doesn't help the client's number of round trips — it just creates linkability. So it's a tracking vector, but it doesn't help you on performance. It's kind of worrying that —
L: There's more on that from Paulie in the paper, he told me.
N: — these are the deployments we test, right, which also makes me concerned about some of the measurements. And if we are doing more measurements, I really want to emphasize what Lorenzo was saying: we should be comparing DoQ and DoH3. I love DoH — I think it's actually the right answer, not DoQ — because it can run over TCP or QUIC, and I'm skeptical that DoQ should be any better than DoH3. And the nice thing about DoH is that it works whether QUIC is blocked or not.
N
So
let's
make
sure
that
we
are
letting
Doh
have
a
competitive
comparison
here
and
not
just
compare
it
against
an
H2
stack.
That's
probably
a
not
very
good
H2
stack,
let's
compare
it
against
a
production
level.
H3
stack,
yeah,.
A
Agree
right
thanks
thanks
Tammy
Siobhan,
can
you
call.
C
Hello,
shivans
I
have
Brave
browser,
just
wondering
when
you-
maybe
you
already
mentioned
this,
but
when
you
say
complexity
of
web
pages,
how
are
you
defining
that.
O
About the point on the slide here that encrypted DNS does not have to be a compromise: to me, it looked like the data showed there is a slight compromise, and it can get smaller. Is that right?
L
I would agree with that, yes — but it's not like you really have to make a choice. You can take a small performance hit and get the other thing there. It's still a compromise, but it's much less big, I would say, than with the previous encrypted DNS. Yes.
A
Oh, one final thing on that last one: there's someone in the chat asking for the link to the paper.
B
I believe there's a link to the paper in our agenda, but otherwise just use the mailing list to approach the authors, or approach the authors directly. I don't see your question — did you request it?
P
So, a quick reminder for those who don't know: Starlink aims at providing internet access to distant and rural areas by deploying satellite constellations a bit everywhere in the world, in low Earth orbit, meaning that the satellites are at a low height, basically. In the lab we are more interested in transport protocols and performance, and so we ordered the Starlink access to do experiments with our QUIC and FEC stuff.
P
At the beginning the objective was not to do a Starlink performance evaluation, but in the end we saw that there was not much public data released, so we decided to do some benchmarking of our Starlink access. So we ordered the standard Starlink access, we put up the dish on the top of our building in Belgium — so not that far away from London, actually — and we did several performance analyses. First, we did an analysis at a high level.
P
We did a small performance analysis using Browsertime, looking at the page load time. On this graph, the x-axis shows the time needed to load the web page, and the curves are CDFs — cumulative distribution functions. So let's first do a little guessing game: what's your opinion, where should I place Starlink? I'll let you think a bit in your head, and then I'll give you the answer. Starlink performed basically quite close to the fast internet access — so, who had it right?
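The CDF reading used on the slide can be sketched in a few lines — for each load time x, the curve gives the fraction of page loads completing within x. The sample values below are made-up milliseconds, not the measured data:

```python
# Empirical CDF sketch over hypothetical page load times.
def ecdf(samples):
    xs = sorted(samples)
    return [(x, (i + 1) / len(xs)) for i, x in enumerate(xs)]

load_times_ms = [820, 950, 1100, 1300, 2400]
for x, frac in ecdf(load_times_ms):
    print(f"{frac:.0%} of pages loaded within {x} ms")
```

Plotting one such curve per access type is what lets the slide compare Starlink against the wired baselines at a glance.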
P
Okay, so not many of you — so I think there is enough chocolate for you; I'm stealing Robin's chocolate, sorry for that. Okay, so Starlink is performing quite close to the fast internet access, so we decided to do more measurements to understand why. First, here is the outline of the experiments we did: we did some latency analysis, latency under load, and we also studied packet losses — especially the packet drop rate and packet loss bursts, because losses can occur in bursts.
P
First we did a simple ping campaign. We had a lot of anchors around the world, but here we will focus on the anchors in Europe, to limit our study to the satellite link as closely as possible. We have two anchors in the Netherlands, two anchors in Germany, and four anchors in Belgium. Why did we choose the Netherlands and Germany especially? Because we saw two Starlink exits when we did traceroutes: one in Germany and one in the Netherlands. So that's why we chose the Netherlands and Germany.
P
Here is a graph showing the results of all the pings. As you can see, there is a blue line around 20 milliseconds: this is the latency announced by Starlink, and basically this is the minimum latency that we could achieve. So we could achieve what was advertised, but the median was more around 50 milliseconds.
P
Also, we saw a small decrease here and a small bump at the end. So we searched a bit, because we were thinking about new satellites being released, for example — they are constantly launching new satellites at lower orbits and that kind of stuff — but we didn't see any real correlation between this variation and the satellites that were launched.
P
So we just have to expect that the latency may vary over long periods, basically. What we can conclude here is that we can reach what's advertised by Starlink, but only at the minimum, because the median was more like 50 milliseconds — and that was on an idle link. So what happens when we put load on the link? We could expect to see bufferbloat, I guess. Let's see: we did QUIC transfers — HTTP/3 transfers, actually.
P
What's easy is that, as you have explicit packet numbers that are continuously increasing, you can find the RTT of every largest-acknowledged packet. So we did these HTTP/3 transfers, download and upload, and we reported the distribution of the RTT of every acknowledged packet and plotted it on the graph. The green line is the median ping latency from the graph before, so around 50 milliseconds, and we saw that there was quite a lot of bufferbloat — probably at the user equipment, but we cannot be totally sure.
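The per-packet RTT trick described here — possible because QUIC packet numbers are never reused — can be sketched as follows. Packet numbers and timestamps are illustrative, and a real implementation would also subtract the peer's reported ACK delay:

```python
class RttTracker:
    """Minimal sketch of RTT sampling from QUIC-style packet numbers.

    Each sent packet gets a fresh, monotonically increasing number; an ACK
    reports the largest packet number received, so every ACK yields one
    unambiguous RTT sample (unlike TCP, where retransmissions are ambiguous).
    """

    def __init__(self):
        self.sent = {}      # packet number -> send timestamp (seconds)
        self.samples = []   # RTT samples (seconds), one per ACK

    def on_send(self, pn, now):
        self.sent[pn] = now

    def on_ack(self, largest_acked, now):
        if largest_acked in self.sent:
            self.samples.append(now - self.sent.pop(largest_acked))

tracker = RttTracker()
tracker.on_send(0, 0.000)
tracker.on_send(1, 0.010)
tracker.on_ack(0, 0.050)   # RTT sample: 50 ms
tracker.on_ack(1, 0.062)   # RTT sample: 52 ms
```

Plotting `samples` over the course of a transfer gives exactly the kind of per-ACK RTT distribution shown on the slide.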
P
We sent 25 messages per second at 3 Mbps, which is a lot lower than the throughput that we can achieve with Starlink, and what we saw with that video-conferencing use case is that the median latency for download was exactly the same as the ping latency. There's a question — I can answer it right now if needed.
P
At the end? Okay, let's take it at the end. So yeah, here the graph shows the difference in latency between light load and heavy load. We could see some bufferbloat — it's not really specific to Starlink, but it's related to how the CPU handles the buffer, basically. So this was about latency; let's look at the packet loss rates, because it's a wireless link at some point, so there might be losses — especially as the satellites are moving and the antenna is moving too, so it could lose the focus.
P
We studied the loss rates under heavy load, where you could have congestion — because we saw that there was some bufferbloat — and we also studied the loss rates with the light-load use case, where we should expect congestion losses to be rare or non-existent. What we saw is that with the H3 bulk download we apparently have congestion-induced losses — quite a lot of them, actually, and it's easy to measure with QUIC compared to TCP — while with the light-load message use case the congestion should not be present in the network.
P
So that's what we did. On the left you have the loss-burst distribution for our HTTP/3 transfer, and on the right you have the distribution of the loss bursts for our light-load message transfer. What we can see is that even if, with the heavy load, you have a high loss rate of nearly two percent, the loss bursts are, in proportion, quite small.
P
But if you do the light-load transfer, the loss rate is quite small, but the loss bursts are really longer, and that may be due to the fact that the antenna is losing focus at some point. There is another Starlink paper that also studies that: depending on the satellite position, when the satellite is far away, the losses get more frequent and longer, basically. So you can see quite long loss bursts with Starlink, even without pressure on the link.
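Counting loss-burst lengths is again straightforward once QUIC's gap-free packet numbers tell you which packets were lost; a sketch over a hypothetical per-packet delivered/lost sequence:

```python
def loss_bursts(delivered):
    """Return the lengths of consecutive-loss runs.

    `delivered` is a hypothetical per-packet-number sequence in send order:
    True if the packet was eventually acknowledged, False if it was lost.
    """
    bursts, run = [], 0
    for ok in delivered:
        if ok:
            if run:
                bursts.append(run)
            run = 0
        else:
            run += 1
    if run:                      # a loss run can end the trace
        bursts.append(run)
    return bursts

# one 2-packet burst and one isolated loss:
print(loss_bursts([True, False, False, True, False, True]))  # [2, 1]
```

A histogram of the returned lengths, split by heavy-load and light-load traces, reproduces the two distributions compared on the slide.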
P
So this is part of the results that we show in our paper. We have others — I didn't talk about throughput on Starlink because there are other metrics to look at, but you have throughput measurements in the paper. We searched for PEPs with TCP, we searched for middleboxes, and we did other measurements with browsers and stuff, so don't hesitate to have a look at the paper. And so we can conclude that the Starlink equipment, at a high level, can compete with our wired access.
P
Finally, one thing that we would like to say is that QUIC helped us quite a lot to do measurements, because of the packet-number thing, so it might be interesting to push measurements with QUIC — and this is what we started during the hackathon by pushing QUIC into NDT. So if you want to help us push measurements with QUIC, let us know. One limitation is that we only had a single vantage point, in Belgium, so we basically were only studying our own Starlink access.
P
So part of our future work is to collaborate with researchers to do multi-vantage-point studies and inter-satellite-link studies. That's it — if you want to collaborate with us, if you have any satellite access, let us know; if you have a GEO satellite access, let us know too, because we would like to benchmark especially our QUIC and FEC stuff. So don't hesitate to contact us, and finally, all of our dataset is publicly available.
A
Thanks, François. We've already got three people in the queue, and you finished a little bit early, so we get to hear from them. Geoff, you're up first.
Q
Hi, Geoff Huston. We did a very similar measurement back in March, with both geostationary and Starlink, and the first thing I want to comment on is your latency measurement.
Q
If you want to go back to that slide: if you do the maths, the satellite is at 550 kilometers up, but when it's on the horizon it's 2704 kilometers away — you know, out there — and what that means is that the raw RTT on the time series should change between 1.8 milliseconds — sorry, 7.3 milliseconds — and 36. And it's moving one degree of arc per second.
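Geoff's figures drop out of simple geometry. A sketch assuming a 550 km shell, a spherical Earth, and a bent-pipe path whose user and gateway legs have equal length — assumptions of this back-of-the-envelope model, not statements from the talk:

```python
import math

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in km per millisecond
R_EARTH = 6371.0                   # mean Earth radius, km
ALT = 550.0                        # assumed shell altitude, km

def slant_range_km(elev_deg):
    """Ground-to-satellite distance at a given elevation angle."""
    e = math.radians(elev_deg)
    return (math.sqrt((R_EARTH + ALT) ** 2 - (R_EARTH * math.cos(e)) ** 2)
            - R_EARTH * math.sin(e))

def bent_pipe_rtt_ms(elev_deg):
    """Propagation RTT for user -> satellite -> gateway and back."""
    return 4 * slant_range_km(elev_deg) / C_KM_PER_MS

print(round(slant_range_km(90)))        # 550 km directly overhead
print(round(slant_range_km(0)))         # ~2704 km at the horizon
print(round(bent_pipe_rtt_ms(90), 1))   # ~7.3 ms
print(round(bent_pipe_rtt_ms(0), 1))    # ~36.1 ms
```

So a satellite drifting from overhead to the horizon should swing the propagation RTT from roughly 7.3 ms to roughly 36 ms — the variance Geoff expected, but did not find, in the minimum RTT.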
Q
So you should see much more variance in the minimum RTT — but you don't, and neither did I. What this means is that the RTT you're seeing is actually an induction coming from either the client side or the earth-station side that is effectively compensating for the delay in the spacecraft from horizon to apogee, or whatever it's called, and also the switching delay. You don't see any of it, and that actually has a big impact on protocol performance. And so the next question is: did you look at BBR?
P
Basically, we didn't look at BBR, because we used the quiche implementation, and at the time it didn't have BBR. Now it has it.
Q
I was able to rip 300 megabits per second out of this circuit with BBR; on the same system I was only getting 60 out of CUBIC. So BBR has a remarkably different performance profile on Starlink — it kind of says "use me, ignore everything else" — versus CUBIC, which kind of runs more like a dog. But the interesting fact from all of these protocols is that the tight RTT actually makes most of them work much better than they would if you were given the raw RTT of the satellite.
P
Thank you for the insights — and yes, we had some throughput measurements with quiche CUBIC, and basically we couldn't reach the same throughput as TCP was reaching. We had the same finding: we got more than 100 megabits, but not much more compared to TCP. So yeah — go ahead.
P
It was a real loss: we did packet captures, and these packets were never acknowledged at any point. So I guess it's not spurious, yeah.
P
Yeah, we also saw weird stuff: at some point the first few handshake packets were being lost nearly systematically, and then it disappeared. So we had some weird kinds of traffic patterns at some point, so yeah.
R
Gorry here — first, a long-time lover of satellites as well as the internet. Well, that's interesting, thank you ever so much. One thing I didn't like in your talk: I think you have to be very, very specific about when the data was measured and where it was measured, because this constellation is changing. People are launching new stuff, and they're also changing, I think, what they're doing internally without telling anyone. So we may have to make sure that we tag our data with the right things.
R
Having said that, yeah, your loss stuff matches exactly what I've seen using iperf — there's lots of losses going on here. It's not within QUIC at all; QUIC can't be blamed, it's something in what Starlink delivers. And they seem to be fixing the RTT thing — yeah, Geoff's probably saying something really interesting there; we should work that one out — but they are still leaving us lots of losses, which may not be the right thing.
R
These strange bumps in height — yeah, you probably should look at those. Sure, I think these are not particularly big from the point of view of QUIC, but we saw some enormous spikes, which could actually be quite important if you drill right in on a fine time scale. So let's look also at the individual counts — did you do any measurements of individual QUIC packets and how long they took?
A
Cool — we're going to have to stop now. Thanks, Gorry. I apologize for having to cut it off, but we've just enough time to finish by the end of the hour.
S
Okay, thanks. So yeah, hi, I'm Philipp, I'm with Akamai. This is joint work with Oliver and Arthur, who are with the Max Planck Institute and Akamai/MIT respectively, and this talk is about detecting who is scanning in the IPv6 space. I want to start off with a quick refresher on what we actually mean when we talk about scanning.
S
In this case we'd get a TCP SYN-ACK back, and what typically happens next is that the scanner, or some entity associated with the scanner, does something malicious: that can be attempts to exploit known vulnerabilities in the targeted host, or to abuse the target host in subsequent amplification attacks — there are many, many scenarios possible. So what's clear is that for a lot of the cyber attacks that we see and that we're dealing with, scanning is a key component.
S
That's required to actually enable them. Now, most of what you read and hear about scanning concerns scanning in the IPv4 space, and I want to quickly start by talking about what scanning in the v4 space means. As we all know, the v4 space is comparably small: we have about four billion target addresses, about three billion of them routable, and you can relatively easily scan the entire IPv4 space in less than one hour.
S
So that is the scanning part, which is relatively easy to conduct. Also, when it comes to detecting who is scanning and what people are scanning for in the IPv4 space, that is readily doable by relying on darknets or network telescopes. There are some limitations to doing that, but overall, if you want to get a sense of how much scanning is happening in the IPv4 space, that's readily doable — and actually there's a lot of it happening.
S
What we see in the v4 space is millions of monthly active scanning sources, mainly driven by several botnets just scanning randomly through the IPv4 space. Okay. Now, if you look at IPv6, things just get vastly more complicated: we have about 10 to the power of 38 target addresses, and a full scan is simply impossible.
S
It would take trillions of trillions of years, and what that means is that scanners — whoever wants to scan the IPv6 space — need to rely on hitlists or on other mechanisms to direct their scanning traffic at very specific targets, as opposed to just randomly generating addresses. And this need for directing scan traffic at very specific destinations also makes the detection of such scanning activity much more difficult, because in the v6 space we now need a vantage point that actually attracts scanning traffic itself, so that we can then look at the data and study it.
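The "under an hour" versus "trillions of trillions of years" contrast is easy to verify with back-of-the-envelope arithmetic; the probe rate below is an assumed ZMap-class figure, not a number from the talk:

```python
PROBE_RATE = 1.4e6                  # assumed probes per second for a fast scanner

ipv4_addrs = 2 ** 32                # ~4.3e9
ipv6_addrs = 2 ** 128               # ~3.4e38, the "10 to the power of 38"

hours_v4 = ipv4_addrs / PROBE_RATE / 3600
years_v6 = ipv6_addrs / PROBE_RATE / (3600 * 24 * 365)

print(f"IPv4 full scan: {hours_v4:.2f} hours")   # well under one hour
print(f"IPv6 full scan: {years_v6:.1e} years")   # on the order of 1e25 years
```

Even a probe rate a million times faster would leave an exhaustive IPv6 sweep hopeless, which is why hitlists and targeted address generation are unavoidable.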
S
And so, perhaps unsurprisingly, as a result of these missing vantage points, the current extent of how much scanning — how much vulnerability scanning — is actually happening in the v6 space is largely unknown, and this is the key question that we want to shed light on in this work: what's going on in the IPv6 space in terms of vulnerability scanning. For that, we present the first longitudinal study of what we see in terms of large-scale IPv6 scans, and we use two datasets for that.
S
What we also do, to improve reproducibility of some of our findings, is double-check some of the most salient findings against what can be seen in publicly available traffic traces — namely the MAWI traffic traces, which are available to the research community.
S
Okay. So before showing you how much scanning is happening, I quickly want to talk about why our vantage points are capable of seeing IPv6 scanning, because I just mentioned that it's difficult to find such vantage points. For the CDN, our front-facing IP addresses are widely exposed via DNS: clients request content, we return our IP addresses, we engage in myriad transactions with millions and millions of hosts. So eventually the IP addresses can end up on IPv6 hitlists, and scanners can start targeting our addresses.
S
The MAWI dataset is essentially traffic captured on the wire on a transit link, and there it is really just: if someone is scanning the v6 space and the scan traffic of that scanner happens to cross that particular link, we will of course see it in the dataset. What we focus on in this work is what we term large-scale IPv6 scans, and what that essentially means is that we consider a source to be a scanner if that source targets more than 100 destination IPs in short order. There are many more details in the paper — we need to do a bunch of pre-filtering, of course — but this is the essence of what we call a large-scale IPv6 scan.
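The detection rule — a source counts as a scanner once it touches more than 100 distinct destinations — can be sketched as below. The flow tuples and the `prefix_len` knob are illustrative; the real pipeline also applies time windows and pre-filtering of connection artifacts:

```python
import ipaddress
from collections import defaultdict

SCAN_THRESHOLD = 100   # distinct destinations before a source is flagged

def scan_sources(flows, prefix_len=128):
    """Flag source aggregates contacting more than SCAN_THRESHOLD destinations.

    `flows` is a hypothetical iterable of (src_ip, dst_ip) strings; with
    prefix_len=64 or 48, traffic is first aggregated per source prefix
    before the threshold is applied.
    """
    dests = defaultdict(set)
    for src, dst in flows:
        key = ipaddress.IPv6Network(f"{src}/{prefix_len}", strict=False)
        dests[key].add(dst)
    return {src for src, seen in dests.items() if len(seen) > SCAN_THRESHOLD}

flows = [("2001:db8::1", f"2001:db8:1::{i:x}") for i in range(1, 151)]  # a scanner
flows += [("2001:db8::2", "2001:db8:1::1")]                             # a normal client
print(scan_sources(flows))   # only the 150-destination source is flagged
```

Running the same function at `prefix_len=64` and `prefix_len=48` yields the three columns of the table discussed next.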
S
Okay, now let me show you how much scanning we actually detect. In this plot I show, for our entire measurement period, the number of weekly active scan sources that we detect — in this case at the CDN vantage point — and the key takeaway from this slide is that IPv6 is now actively being scanned: we find evidence of scanning sources. Now, let's be clear: we find between roughly 10 and 100 active weekly sources scanning IPv6, and that is nothing compared to what we see in the IPv4 space.
S
The top 10 ASes account for more than 99% of all the scanning traffic that we log, and if we look at the business types of the scan sources, we can see that it's primarily data-center ASes and cloud ASes — you know, these are the ASes where you would expect the respective resources to be available to conduct these large-scale IPv6 scans. And I should probably mention that —
S
We also see several cybersecurity companies starting to actively scan the IPv6 space, and we see this in our logs. Now, speaking of sources: we see cloud, and we see cybersecurity companies, and then there are these two mysterious ASes at the very top, in the data-center category, which are geographically mapped to China, and we are not able to associate them with any major cloud provider.
S
And, interestingly, the top most active scanning source that we see in IPv6 at the CDN is also the most active scanning source that we see in our second dataset, the MAWI trace, and I think it's worthwhile to mention this. This is by far the most active source: it has been continuously scanning the IPv6 space for almost two years, and it is still scanning right now.
S
I just checked some publicly available sources earlier today. So it's the most active source at both our vantage points, and it was also reported thousands of times in publicly available data — I screenshotted one of those reports here, so this is public information. They must have ample bandwidth and resources, and we don't really know who or what is behind this. So if you have any insights, we would really love to learn more about this super-active scan source. Okay, a quick note on the ports targeted by the IPv6 scanners that we see.
S
Okay, so much for the services targeted. I want to briefly return to this table. You may have noticed that in this table, and also in the plot earlier, when we talk about scan sources we show three different columns. What we basically do is report scan sources when we treat each source's 128-bit IPv6 address individually — that is the rightmost column.
S
And then, additionally, we show what happens if we first aggregate all the traffic per individual /64 prefix and then apply our scan detection, and we also show the scan sources when first aggregating the traffic for an entire /48 IPv6 prefix and then applying our scan detection. And you can see that the scanner counts vary greatly depending on the aggregation level. This hints at something that we stumbled upon while doing this work, and that we believe is a major challenge: identifying and isolating the individual scan sources in the IPv6 space. I want to quickly show you by example what I mean here. First of all, the numbers in this slide are fiction — they're made up — but I want to highlight that we find very similar cases in our actual dataset.
S
We have a cybersecurity company, and they announce a /32 prefix in BGP, and what we see is that there's one single scan source behind this /32 prefix that leverages the entire address space behind the /32 to send out scan probes. What that essentially means is that every individual scan packet that hits our firewall carries a unique IPv6 source address — and of course that's a problem for detection, because if you attempted to detect v6 scanning for individual /128 IP addresses, it would just not be possible: at most you would see one individual packet. For this particular case, what we would have to do to isolate and then correctly pinpoint the scanner is first aggregate all the traffic up to the entire /32 prefix.
S
Now we could say: okay, then let's just do that. But that's a problem, because it only works for this particular source. In contrast, we find another host — which is actually a well-known cloud provider — from which we see scanning traffic as well. They also announce a /32 in BGP, but each individual virtual machine, each individual user, only has a /124 assigned, and the issue would be if we aggregated all of this up to a /32.
S
We would aggregate the entire cloud provider together, and if we subsequently used this to make decisions about, for example, rate-limiting or blocking traffic, we would cause a lot of collateral damage for all the other users in the cloud provider. And I just want to highlight this here again: the key thing is that without aggregation we may not be able to even detect scans, or we may miss a bunch of them.
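The source-spreading problem can be illustrated with a toy example: a single hypothetical scanner sends each probe from a different 128-bit address inside its /32. All prefixes below are documentation addresses, not the real networks from the talk:

```python
import ipaddress
from collections import defaultdict

# one probe per unique source address, 200 probes total
pkts = [(f"2001:db8:0:{i:x}::1", f"2001:db8:ffff::{i:x}") for i in range(1, 201)]

for plen in (128, 64, 32):
    agg = defaultdict(set)
    for src, dst in pkts:
        agg[ipaddress.IPv6Network(f"{src}/{plen}", strict=False)].add(dst)
    busiest = max(len(d) for d in agg.values())
    print(f"/{plen}: {len(agg)} aggregates, busiest sees {busiest} destinations")
# /128: 200 aggregates, each seeing 1 destination -> invisible per-IP
# /64:  200 aggregates, each seeing 1 destination -> still invisible
# /32:  1 aggregate seeing 200 destinations       -> the scan appears
```

Aggregating the same way at /32 would, conversely, lump every tenant of a cloud provider into one bucket — the collateral-damage tradeoff described above.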
S
Okay, and with that I want to conclude. Key findings: yes, we do actually have evidence of scanning in the IPv6 space; there are challenges with detection and attribution of scanning activity; and, of course, as usual, you can find many more details on the methodology, the vantage points, etc., in our paper. I hope I could get you interested in reading it.
K
Hi — I saw the paper, and I was very interested to see it. It seems to me you've seen two things. One, that there's not very much scanning going on that you can detect — it was nice to see that; I think the slide says 10 to 100, as opposed to the numbers you see with v4. But the other thing I had never thought of before was that it's very easy to generate a lot of source addresses.
A
Thanks, Bob. Eliot, you're up.
T
Hi Philipp, nice paper, thanks very much. One quick question: how did the aggregate scan traffic from those top sources you saw compare to the rest of the traffic that might have been scan traffic? Was that five percent, ten percent of total scan traffic, fifty percent of total scan traffic, would you estimate?
S
I'm not sure I fully understand the question. The thing is that we had to do a bunch of pre-filtering before we applied scan detection regardless, because we see a lot of connection artifacts. In terms of this one scanner, the large cybersecurity company, literally all the packets that we saw from them were scanning; for the cloud provider, naturally, you would expect a bunch of other stuff to show up in the aggregate.
T
The reason I ask is: is it that we don't have a lot of botnets out there today that speak IPv6, and therefore you can aggregate this way — but once we have those botnets, what should the methodology be?
S
Yes, I think that's a great point. From all that we see, we currently don't have botnets in IPv6, and what currently saves us may just be the fact that IPv6 addresses are not really exposed that often — I mean, ours are exposed via DNS, but client IP addresses often are not. But once that starts to happen, and once we have botnets, I don't know what the best way is, moving forward, to aggregate them and stop them.
A
All right, we have Leslie up next, and Leslie kindly offered to do her presentation in 10 minutes, so we'll have a couple of minutes for questions for the subsequent talks.
F
So hello, everyone. Just a point of clarity here: all of the actual hard work that I'm about to present — the numbers, the actual testing and so forth — was done by Rufo DeFrancisco and his team, and I basically grabbed a bunch of his slides and hard work and put them together to make a message today. So if there are things that seem a little odd in this, no doubt it was my transcription and not his work.
F
This actually follows on rather nicely from the presentation we just had. Everything I'm going to say is focused on the v4 space, but I'll have a few comments about how that relates to scanning and so forth.
F
I wanted to talk a little bit about the scope of the problem of attacks in the IoT space, and why my day job — why the Global Cyber Alliance — cares. Well, because we're a not-for-profit aiming to reduce cyber risk, and we have this project called AIDE, which is supposed to be an Automated IoT Defense Ecosystem: how to help address security at the edges, given knowledge about what goes on in the network. And the truth of the matter is, right now it's not automated.
F
It's not a defense ecosystem, and on bad days we're not even sure it's IoT. But nomenclature notwithstanding, we do have a global honeyfarm with hundreds of sensors across the globe that's been collecting data for four-plus years now, and we also have our own honeypot technology. And why do we all in this room care about IoT security?
F
Well, I think most people here will remember the Mirai botnet that was marshaled to attack the Dyn company's resources in 2016, which is just evidence of how very impactful a group of motivated tiny devices can actually be, given the wrong coordination.
F
One of the side effects of that particular event is that there are now many laws against having devices with default, unchangeable passwords, and a variety of other regulators have gotten interested in a variety of other policy proposals, mostly oriented towards IoT devices but generally oriented towards any device connected to the internet. I also wanted to make the point that the actors we see hitting our honeyfarm are hitting all of the open v4 ports as well.
F
So what we actually see is, in essence, a lot of the bad actors acting out on the v4 internet, and in a different version of this talk I would be talking about how maybe we should stop it at the source and get network operators to understand when stuff is coming out of their networks, rather than just trying to focus on how we defend at the device level — which I think goes a little bit to the "how do we blocklist v6, is it right to aggregate" question. What I think — what I hope —
F
The right answer is never "a better blocklist". I really want us to get to the point where we have better detection of crap coming out of networks, and stopping it, as well as whatever other mitigations on-device and in software. So that's where I'll focus for now. This slide just gives you a bit of a sense of how much crap there is out there, if you take into account that what each of our sensors — which is, you know, an undifferentiated little Unix-blob type of thing responding as if it's an IoT device — sees is the same thing that your connected home device, or anything of yours that's connected to the internet, sees. Consider that they're getting about 5,000 attacks a day. Just to explain the anomalies: the gray bit in the middle of this is when we were switching over, decommissioning an old honeyfarm and commissioning a new one.
F
Sorry, my screen keeps going to sleep. And so there was a certain amount of overlap between sensors, so it's a little vague in terms of how to attribute the attacks. And then, apparently, the spike over in September was a bunch of Amazon IPs attacking Romania — who knows. And again, it really is coming from everywhere, and it's going to everywhere.
F
There is nowhere to hide. Many of these attacks are actually coming through Tor-like VPN services, so the source IP addresses that we're seeing are, in any number of cases, unsuspecting home users who just thought they were getting, you know, cheap access to Netflix resources in some other country, and had no idea that they were actually offering up their internet access to be the source of an attack.
F
So, back to "let's make some policies": we decided we would do some A/B testing, configured according to the standard approach — have a control, have a test subject — and see whether or not any of these policies were actually useful.
F
In phase one we set up some of our own honeypots — these proxy pots — in a honeyfarm, with virtualized devices carrying the common controls from the policies that are being proposed, and then we put them out in the wild just to see what happens in terms of the attacks, and whether or not applying the controls was actually effective in securing the device.
F
So yes, we had the 70 honeypots emulating some fairly common devices with thoroughly common IoT-like software stacks — 10 honeypots deployed for each of the seven emulations, five A and five B — and collected data for two months. We saw over three-quarters of a million sessions, with over a million HTTP requests and HTTP responses; a very small number of those were actually scans by search bots, and the remaining three-quarters of a million plus were actually classified as attacks.
F
So I think that goes a little bit to the question of what proportion of the traffic you're seeing is scans versus attacks, at least in the v4 world.
F
Yeah, so this is just over 7,500 attempts to log in with password credentials to these honeypots. The hardened device — i.e., non-default password — was never cracked, and the default-password devices were cracked just short of 80 times. That may seem a pretty trivial penetration rate, but there are two things I would have you take away from this: one, when you amplify this by the number of attacks and the number of devices available —
F
— this is still a not insignificant number of compromises; and two, there are a lot of dumb attacks out there, right? This is mostly Mirai-like things roving the internet with this known set of default passwords, as opposed to targeted attacks, which also happen on the internet. That's one of the reasons why I think it would be valuable to get rid of the attacks at the source: so we could stop seeing so much clutter and actually have a better shot at tracking down some of the more targeted attacks. Yeah.
F
So these were some findings from that first phase: that, yes, "no default passwords" is actually good policy advice; that attackers prefer non-secured communication protocols, at least through this time last year; and that updated software prevents break-ins.
F
But we also found, in looking through the various attacks, that the attackers were attempting to exploit the software stack: largely, even on small devices, attempting to attack the web server used for administrative control of the device. So we thought, hey,
F
maybe we should have a look at that, which led to phase two. So we have the honeyfarm again, set up with 69 devices in a variety of configurations, with either weak or strong credentials and an up-to-date version of the software or not; that's the red and the green. But zipping right along in terms of time:
F
this one was active for just over half a year, recording almost 2 million meaningful attacks on the devices, approximately 100 attacks per device per day, and really there was no particular favoring of one type of device over another, although the patched ones seemed to have gotten slightly fewer than the others.
F
But in this instance, you can see that by and large two-thirds of the attacks were actually on the software stack, with a much smaller number, one sixth, on the device interface itself, and then there are the botnets again. PHP and SQL were the software ingredients that were most often targeted; there's probably no surprise there. But most attacks were really an attempt to exploit known vulnerabilities, so that's kind of an important point.
F
So one of the things that particularly stood out when we were looking at the results was the number of attacks against Boa, which has long since been unsupported and discontinued, like from 2005, but is still not only sought after by attackers; it's out there, it's deployed. And while people are still finding vulnerabilities in it, of course, since it's unsupported, these are not getting fixed and devices are not being updated. Apart from that, there were many attacks against ThinkPHP, which is a PHP framework.
F
So in this instance, it basically shows that having an up-to-date software stack and proper password control does improve things. It not only improves the likelihood of surviving an attack; it makes you less interesting to the attackers, because you can see in the far-right bar,
F
that's the most up-to-date and patched version with a strong password, which saw far fewer attacks than the rest. Yeah, not much more to say about this, unless there are further questions, and Elliot will get to you at the end. But I did want to say, in terms of conclusions, the fairly obvious: device security is necessary.
F
It's important to have device passwords that are updatable and not default. But there's still a whole world of hurt from known vulnerabilities, and we can't assume that devices are updated or are going to be updatable, or even that new devices being built are using current versions of software.
F
So the legacy of all of these CVEs, and the implications for the security of devices in your network, has a really long tail. Going forward, regulation may address some of this, but it's only going to reach the, you know, small category of devices that are responsive to regulation in various parts of the world, and there's still going to be that legacy of devices and software stacks. So we really do need to figure out how to address things going forward.
F
There is some work going on in the IOTOPS working group, which is looking at: how do you secure your IoT devices? That's pretty valuable, but we also have to think about, well, we can't air-gap every IoT device. There have been interesting stories of medical devices, medical fridges for instance: why does the medical fridge need to be on the internet?
F
Well, it turns out the medical fridge needs to be on the internet so that it can send an email if the temperature gets out of spec, and yes, when the medical fridge gets owned, that is a problem. So these are the kinds of real networking security challenges that need to be addressed in a meaningful way, given all of this, the implications for security in the device and software stack.
F
So I'm positing, as I said earlier, that I think it's an interesting question to deal with the software attacks at the source, out of the source networks. That's part of a solution, but I think we also need tools and techniques to monitor and manage the networks where all of our devices are connected. I don't know if we at GCA will ever have an automated IoT defense ecosystem, but maybe smarter minds will actually be able to develop that, and certainly we'd love to talk with anyone who's interested in figuring that out.
A
F
B
U
Next slide, please. We think that, as important as measuring protocols is, it is also important to measure the process of standardization of those protocols. Next, please. This is part of ongoing work with a bunch of colleagues from Glasgow and Queen Mary University, my university; some of it has been published already, it will also be at IMC, and all the work is available on our website. Next, please. Well, there are many conspiracies on the internet; the internet is not one of them.
U
We look at RFCs, emails, and drafts. Next one, please. One of the first things that we see is that there is a decrease in the number of participants. Next one, please. While at the same time we have an increasing, sorry, a stable number of emails, so basically the IETF seems to be becoming increasingly chatty. Next one, please. Next one, please. To get a better understanding of that, we look at how these emails are organized: we build a graph of the emails and we identify the different components, basically interconnected groups of people.
U
Next one, please. We identify the largest connected component. Next one, please. What we find is that the largest connected component has been mostly growing over time and the number of less-connected components has been decreasing, which would point to a more cohesive IETF, which is probably a good sign. Next one, please. To get a better understanding, we try to look at the influence of people inside the IETF, and we approximate this using betweenness centrality, which is the typical approach in social network analysis. What we consider is the largest connected component
U
when you start to remove people depending on how influential they are. Next one, please. What we see is basically two takeaways. There is a lot of reliance on influential participants, and this has been growing over time: basically, if you take away the most influential people in the IETF, the largest connected component drops very fast in terms of size.
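The removal experiment described here can be sketched on a toy "email graph". The study ranks people by betweenness centrality; for brevity this sketch ranks nodes by degree, which is only a rough proxy, and the edges below are entirely hypothetical.

```python
from collections import deque

# Toy email graph: an edge means two participants emailed each other.
# "hub" plays the role of an influential participant; x-y is a small
# separate component. All edges are hypothetical.
edges = [
    ("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d"),
    ("a", "b"), ("c", "d"), ("x", "y"),
]

def adjacency(edges, removed=frozenset()):
    """Build an undirected adjacency map, skipping removed nodes."""
    adj = {}
    for u, v in edges:
        if u in removed or v in removed:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def largest_component_size(adj):
    """Size of the largest connected component, via BFS."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

before = largest_component_size(adjacency(edges))
# Remove the best-connected node, mimicking "take away the most
# influential people" and watch the largest component shrink.
adj = adjacency(edges)
most_connected = max(adj, key=lambda n: len(adj[n]))
after = largest_component_size(adjacency(edges, removed={most_connected}))
```

On this toy graph, removing the single hub collapses the largest component from five nodes to two, which is the shape of the effect the talk describes at IETF scale.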
U
How about the emails, who sent those emails? Well, what we see is that, next one, please, emails are mostly dominated by influential participants. This has not changed much over time, which is probably a good sign. But what if we look at who authors which drafts? Next one, please.
What we find is that there is an influential minority that increasingly dominates draft production. Next one, please. At the same time, it takes a longer time to gain influence in the IETF. Next slide, please. And why is this?
U
Why is this rise of more influential participants? Maybe it's because conversations are more complex; that's the evidence that we seem to find. We find that increasingly more areas are discussed within the IETF, and we also find that those who are more influential discuss more areas. Next. And it's not just that they discuss more areas: the amount of discussion that they have across these areas is evenly distributed.
U
I won't get into the details of what topic entropy means, but the key takeaway is that people discuss more areas, and they discuss more in each of the areas they discuss.
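A topic-entropy-style measure can be sketched minimally as the Shannon entropy of a participant's distribution of messages across areas: higher entropy means discussion is spread more evenly over more areas. The per-area message counts and area names below are hypothetical, and this is only one plausible reading of the measure the talk mentions.

```python
from math import log2

def topic_entropy(area_counts):
    """Shannon entropy (bits) of a participant's messages across areas."""
    total = sum(area_counts.values())
    probs = [c / total for c in area_counts.values() if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical participants: one focused on a single area, one spread
# evenly across four areas.
focused = {"transport": 10}
spread = {"transport": 5, "security": 5, "ops": 5, "art": 5}
```

A single-area participant has entropy 0; an even spread over four areas gives log2(4) = 2 bits, matching the intuition that influential participants who discuss more areas, more evenly, score higher.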
Next, please; next one more, yes. So why are we having these more complex conversations? Why is that? We hypothesize that maybe it's because it's harder to publish. Next, please. And we find that this seems to be the case: it now takes three times as long to publish an RFC. That's the number of days
U
it takes from first draft to RFC publication. Next slide, please. At the same time, it also takes many more drafts to publish an RFC, and there are a bunch of other things that I'm not showing here that make this make more sense: for example, we have more authors, more institutions, more affiliations, more countries participating, which of course probably inflates the number of emails.
U
It's more difficult, as you probably know, to have a Zoom call with people from all over the world; more institutions, more emails. Next slide, please. We have more work ongoing, but these are the basic key takeaways that we have found: conversations seem to be more complex, it seems to be harder to publish, and it seems that there is an influential minority upon which the IETF is increasingly dependent. We are doing quite a lot of work in trying to get firmer ground on these findings.
U
So we're very happy to hear back from you if you have any insights, or if you want to give us a hand with this recommendation tool that we are building to help recommend reviewers for drafts. We also have a meeting a little bit later, if people want to join and think that analyzing the process of standardization is something that we should be doing. Thank you very much.
A
B
Sorry, sorry, we don't have a queue. So thank you for being here. Yes, but please talk to him and ask for more later.
V
Hello, my name is Gautam Akiwate, and for those who have seen me talk in the last few days: I don't just do DNS, I also work on other things. This is work that we've done with folks at the University of Twente, the University of Napoli, and folks at UC San Diego.
V
The primary author of this work is Mattijs, who couldn't make it today, so I'm presenting on his behalf. This paper appeared a few weeks ago at IMC; it's titled "Where .ru? Assessing the Impact of Conflict on Russian Domain Infrastructure". So, to set up some context:
V
of course, when we say we are assessing the conflict, we are talking about the recent Russian invasion of Ukraine, which produced a strong global response. Primarily, this response included Western countries imposing broad economic sanctions on Russian institutions, among other things, and in addition to government sanctions, there was also action from private-sector companies, who self-imposed restrictions or even exited the Russian market.
V
And, of course, the internet is very much part of the economy, and the internet did not escape this conflict. A concrete example of this is that corporate Russian websites, including banks, were on the US OFAC, the Office of Foreign Assets Control, Specially Designated Nationals list.
V
Basically, this is the list of individuals that you're not supposed to have economic activity with. At the same time, internet service companies, sort of as a result of these sanctions, were independently deciding to disengage from the Russian market for a variety of reasons. And while there is this push happening as a result of sanctions,
V
there is also this other aspect: the pull from Russia's long-held concerns about internet sovereignty. This has been a long-running effort, and we see some recent activity, a recent push from Russian authorities saying that they want all state-owned websites to switch to domestic providers. And, of course, there is also the troubling new development of the Ministry of Digital Development having their own Russian root CA, which is for now only trusted by Russian browsers,
V
but which does not log to the normal CT logs. So, given all of this context, our goal was to look at this push and pull that happened as a result of economic sanctions and, at the same time, Russian repatriation efforts, and put all of this on an empirical footing.
V
We wanted to look at the DNS infrastructure, specifically where the authoritative name servers are located, the hosting, and the certification, to see how the conflict, the sanctions, and also efforts from Russia itself to repatriate infrastructure have worked out. In order to do that, we look at a couple of data sources.
V
The first and primary data source is the DNS. The folks over at the University of Twente run the OpenINTEL project and, as part of it, they had access to the .ru and .рф zones for a long time, a period of five years. I think for reasons unrelated to this paper, the access to the zones got revoked, but suffice to say we have plenty of data to look at, starting in 2017, so well before the conflict started.
V
We have some insight into what the .ru zone has looked like over time. The second thing we look at is the TLS landscape, basically looking at certificate issuance for the Russian Federation domain names, so .ru and .рф again, and doing so longitudinally.
V
We look at historic CT logs and also active scans, using active scan data from Censys, which does a global IPv4 scan looking for certificates that were issued; this is primarily to study the Russian root CA. And, of course, we use IP geolocation and some of the sanctioned-domain lists.
V
So, before we start looking at some of the results, I just wanted to go through some of the definitions we have, and I know some of this can be confusing. Pre-conflict would be before February 24th; post-sanctions is basically when the economic sanctions came into effect; and there was this period of a month where the sanctions had been announced but had not come into effect.
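The three analysis periods just defined can be written as a simple bucketing function. February 24th, 2022 is the conflict start from the talk; the exact date the sanctions came into effect is an assumption here, since the talk only says it was about a month after the announcement.

```python
from datetime import date

# Period boundaries: conflict start is from the talk; the
# sanctions-effective date is assumed for illustration only.
CONFLICT_START = date(2022, 2, 24)
SANCTIONS_EFFECTIVE = date(2022, 3, 24)  # assumed, ~a month later

def period(day):
    """Bucket a date into the talk's three analysis periods."""
    if day < CONFLICT_START:
        return "pre-conflict"
    if day < SANCTIONS_EFFECTIVE:
        return "pre-sanctions"  # announced but not yet in effect
    return "post-sanctions"
```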
V
So those are the three time periods that we break all of our data down into. For each of these compositions, we ask whether the infrastructure is fully Russian, non-Russian, or part Russian, and essentially it has to do with whether all of the IP addresses associated with the infrastructure are in Russia, none are in Russia, or some are in Russia. We do a similar thing for the DNS infrastructure, and in that case the IP address of the authoritative name server is what we consider. Okay.
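The classification just described can be sketched directly: a set of IPs is "fully Russian" if every IP geolocates to Russia, "non-Russian" if none do, and "part Russian" otherwise. The lookup table below is a hypothetical stand-in; the study uses a real IP geolocation database.

```python
# Hypothetical IP -> country table standing in for a geolocation DB.
GEO = {
    "203.0.113.1": "RU",
    "203.0.113.2": "RU",
    "198.51.100.7": "DE",
}

def classify(ips, geo=GEO):
    """Classify infrastructure by where its IP addresses geolocate."""
    countries = {geo.get(ip, "??") for ip in ips}
    if countries == {"RU"}:
        return "fully Russian"
    if "RU" not in countries:
        return "non-Russian"
    return "part Russian"
```

For the DNS version of the question, the same function would be applied to the IPs of a domain's authoritative name servers.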
V
So 71 percent is fully Russian, and it shows just a slight increase after the invasion. Essentially, what we think is that this is a manifestation of a long series of efforts by Russia to actually move hosting over. And if you were to look at it
V
longitudinally, we would see a minor change that happens post-conflict, where roughly seven percent of the infrastructure moves over to being fully Russian from being partly Russian, and this is primarily because of one infrastructure provider, Netnod, cutting ties with RU-CENTER.
V
If we look at the hosting networks, so basically all of the hosting providers, you see that most of them are pretty stable. I think Cloudflare made it a point to say they are going to stay in Russia, and you can sort of see that. We see Amazon sort of flip-flopping, and I think Sedo is the one where we see a clear,
V
well, we see some flip-flopping, but at the end of it they have exited the market, and I think we have some graphs later to show what this is. Yeah, so looking at just the sanctioned domains, the list of 110 sanctioned domains on the US OFAC and UK lists,
V
we see perhaps the most significant change, but this is primarily because, again, Netnod decided to cut ties with RU-CENTER and, as a result, a lot of the sanctioned domains moved from being partly Russian to fully Russian. So essentially they had to repatriate as a result; you can essentially see most of the sanctioned domains move.
V
This is the Sedo example: on March 9th, Sedo announced that they were pulling the plug, and a couple of months later we can see that essentially they had, in effect, pulled the plug: 98 percent of the domains had relocated. We also talk about other cases. Amazon has this interesting Sankey diagram, where they sort of will-they-won't-they: about half of the domains relocate, but half of them don't, and actually they also onboard a few new customers.
V
Okay, so I think that brings us to the Web PKI stuff, which I think is the most interesting part of all of this, because it highlights an area that I think the Russian sovereignty project did not think about.
V
So again, we are looking at the pre-conflict, pre-sanctions, and post-sanctions periods, and you can see the utter domination of Let's Encrypt now: it has gone from 91 percent to 99.23 percent. There used to be this long tail of certificate authorities working with .ru domains, and that has all but disappeared.
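The issuer-share numbers behind these percentages can be computed with a one-pass count over observed certificates: one issuing CA per certificate, then each CA's fraction of the total. The sample of 100 records below is hypothetical, chosen only to mirror the lopsided shape the talk describes.

```python
from collections import Counter

# Hypothetical sample: issuing CA of each of 100 observed certificates.
certs = ["Let's Encrypt"] * 97 + ["GlobalSign"] * 2 + ["Sectigo"]

def issuer_shares(issuers):
    """Each CA's share of total certificate issuance."""
    counts = Counter(issuers)
    total = len(issuers)
    return {ca: n / total for ca, n in counts.items()}

shares = issuer_shares(certs)
```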
If you look at the number of certificate issuances over time in the period around the start of the conflict,
V
what was interesting to us was that GlobalSign sort of jumps into the top three, and I think we found this on the nic.ru website, which essentially says: oh, if you're being sanctioned, go use GlobalSign, which is a Japanese certificate authority, so that you don't get affected, since they are not subject to sanctions. As an aside, the Department of the Treasury also put out a clarification saying that certificate and internet-services
V
companies are sort of exempt from the sanctions. So, yeah. Okay, so looking at revocation: we used the certificate revocation list or OCSP status to look at revocations, and it's essentially a mixed bag here. But what we can see is DigiCert and Sectigo being very particular about sanctioned domains, and though, in general, we see higher revocation rates for the sanctioned domains, if you plot it on a longitudinal axis, which I don't do here,
V
you see that there is this period where certificate authorities aren't really sure how to handle it, and we suspect that there is a lot of manual revocation activity going on, as certificate authorities try to grapple with how to handle sanctioned domains. Before the clarification from the Department of the Treasury
V
came out, there was a lot of: are we affected or are we not affected? Are we subject to sanctions or not subject to sanctions? So there was a little bit of that.
V
The other interesting analysis we did was to look for the Russian trusted root CA. Now, of course, as part of this arrangement they do not log any certificates: certificates issued by the Russian trusted root CA do not appear in CT logs. So we had this question: what kind of certificates are they issuing, and is there a way to go look for that? This is where we use the active scan data.
V
We look for certificates that trace their authority back to this Russian trusted root CA, and we find that, at least from the publicly observable internet IPs, we don't really see a lot of domains: we see 170 domains that are secured by the CA, and nearly all of them are Russian-related entities.
V
A lot of the sanctioned domains are secured by this trusted root CA, but essentially what we find is that there is very low uptake, especially compared to Let's Encrypt: everyone sort of has access to the Russian trusted root CA, but seems to prefer using Let's Encrypt. Okay, and with that I think I'd like to wrap up.
V
We have two minutes left. So, we put all of the recent events on an empirical footing, assessing the effect of the conflict on the Russian infrastructure. We find that a lot of the repatriation has already happened on the DNS infrastructure and the hosting side, and we note that certification is
V
perhaps the one area of significant exposure: the near-complete domination of Let's Encrypt was actually quite surprising to us, and given that Let's Encrypt has a public mission but is also a US entity, it highlights an interesting conundrum for them. It is also, interestingly, one area that Russia seems not to have anticipated by establishing domestic CAs and those relationships beforehand, which they seem to have done for a lot of the other infrastructure.
M
Quick question, because we only have like 30 seconds or so. You talk about this pre-existing domestic provisioning, which was like 70 percent at conflict start, and you said it was close to 70 percent at data-set start, right? There wasn't a whole lot of movement ahead of that. Yeah, I'm wondering if you have some way to differentiate
M
you know, a country having a drive toward internet sovereignty from a bunch of people who speak the same language and do business with each other in a not very globally popular currency tending to have domestic supply chains. I have no idea how to do this, but it would be really interesting to try and measure the impact of a centralized governmental drive toward internet sovereignty, and I'm wondering if you have any sort of insights on that.
V
Yeah, so I guess what you're suggesting is: can we look at how different .ru is from, let's say, other countries, and
V
U
V
on a long-term basis. If it were possible, I think that would be something that we would be interested in.
M
E
V
Cool, makes sense, yeah. Thank you; happy to chat after, if you folks have any questions. Cool.
A
Thank you. Thank you so much, and thank you to all of the contributors in this session for bringing your work to a different audience. I can tell you, from what I've heard already, they really appreciate it. Mirja, last words?