From YouTube: IETF113-MAPRG-20220323-0900
Description
MAPRG meeting session at IETF113
2022/03/23 0900
https://datatracker.ietf.org/meeting/113/proceedings/
B: So, for some reason I see the slides in Meetecho, but I don't see them in the room.
B: Hello everyone, and welcome to the MAPRG session. I'm happy to be here and actually see some faces, and I also see Dave's face, so that's great. Let's start right away. Luckily we got a two-hour slot and we got some really nice presentations, so we should have plenty of time today. This is an IRTF session, but we also have a Note Well; it's very similar to the IETF Note Well, and if you're not familiar with it, you should look up the respective RFCs, maybe.
B: The IRTF also follows, very similarly, the privacy policy and the code of conduct of the IETF. So basically this is just a reminder to be nice to each other and to work together in a friendly way, and also, for the presentations: state your questions clearly, stay friendly, and, most usefully for this session, provide productive feedback if you have some.
B: Okay, this slide is usually just here, so if you have the slides in front of you, you have a quick link to everything you need to know, or don't know yet. Just as a reminder, because this is our first hybrid meeting: the people in the room are also supposed to use the virtual queue, and you can join the queue over the Meetecho interface.
B
Either
this
light
interface
or
the
full
interface
post
is
accessible
over
the
agenda
and,
more
importantly,
is
you
have
to
please
join,
even
if
you're
in
the
room,
one
of
these
two
ways
to
join
the
meat
aqua
session,
because
that's
also
how
we
generate
our
blue
sheets.
So,
if
you're
in
the
room,
please
join
the
meat
echo
as
well.
So
you're
noted
in
the
blue
cheese.
B
Yeah
and
like
dave,
if
you
don't
have
anything
else,
basically
we
start
our
presentation
from
here
perfect.
Then
our
first
speaker
is
actually
in
the
room,
just
arrived
first
time
attendee
and
we're
very
happy
to
have
you
here
so
how
much?
If
you
want
to
come
in
front
and
I'm
getting
up
the
new
slide
set
for
you,
so.
G: Can everyone hear me fine? I'm Hammas Bin Tanveer, and today I'll be presenting my work called Glowing in the Dark. This work is in collaboration with Rachee Singh from Microsoft Research, Paul Pearce from Georgia Tech, and Rishab Nithyanand, who's my PhD advisor. Just to give you an overview of where this work fits in: as everyone knows, we have IPv4 and we have IPv6, but then we have scanning in v4 and scanning in v6, and scanning in v4 is very well understood.
G: Yeah, so scanning is basically sending unsolicited communication to an IP address in order to draw a response, and it can be done for a lot of different reasons, both malicious and benign. But in this talk we are concerned not so much with what scanning is, but with how scanning is different for IPv4 and IPv6, and why that is the case. So, scanning in IPv4:
G: There are a lot of tools for that. For example, if you use ZMap, you can scan the entire IPv4 address space within a matter of minutes, given, obviously, that you have a good enough internet connection. But in IPv6 things are not quite so simple, because the address space is now 2^128 total addresses; you just cannot brute-force it.
G: So, let's say all of the IPv6 address space were equivalent to the 4.5 billion years the Earth has been in existence; the IPv4 address space would equal two trillionths of a second, or the time that light would take to traverse the period at the end of this sentence. So basically you just cannot brute-force it; it's not really possible.
G
So,
given
that
this
ipv6
space
is
so
large,
how
do
you
even
start
to
scan
it
so
right
now,
there's
two
major
techniques
that
you
can
use,
so
one
is
called
ip
scanning
and
the
other
is
called
nx
domain
scanning
so
in
ip
scanning.
What
you
do
is
that
you
go
to
these
public
sources
of
ip
of
ip
addresses
like
dns
zone
files
or
tor
relay
consensus,
data
or
ntp
public
servers.
You
get
all
these
addresses,
you
try
to
figure
out
patterns
in
them,
and
then
you
generate
newer
target
addresses
to
scan.
G: NXDOMAIN scanning basically involves exploiting semantics that were described in RFC 8020. I'll go into a bit of what is defined in RFC 8020 and how it works, but just to be clear: although the search space is reduced for both of these scanning techniques, for IP scanning the results are still probabilistic. You might generate ten more addresses, but you still cannot be 100% sure that they are actual addresses which are allocated.
G
Might
you
still
cannot
be
100
sure
that
they,
they
are
actual
addresses
which
are
allocated,
but
for
nx
domain
scanning?
You
will
always
get
to
get
an
ip
address
that
is
allocated,
so
what
is
rfc
8020,
so
rfc
2
8020
is
called
nx
domain.
There
really
is
nothing
underneath
so,
and
so
one
of
the
main
clauses
in
this
rfc
is
that
an
nx
domain
response
for
a
domain
name
means
that
no
child
domains
underneath
the
kuwait
name
exist
either.
G
So
when
you
apply
this
rfc
to
dns
reverse
trees,
it
unintentionally
presents
a
side
channel
for
efficient
scanning
of
the
ipv6
address
space.
So,
let's
say
over
here:
if
you,
if
you
have
the
ip
6.,
which
is
the
root
for
dns
ipv6,
reverse
zone
trees.
G
So
let's
say
if,
if,
if
my
dns
resolver
is
basically
set
up
to
reply
and
to
reply
next
domain
for
if
something
does
not
exist
under
a
t-
and
let's
say
I
look
up
zero
dot,
ip6
dot
arpa,
so
I
will
end
up
getting
an
nx
domain.
This
means
that
I
no
longer
need
to
care
about
zero
dot,
ip6
or
arpa.
G
So
I
will
go
to
zero
till
e
and
when
I
reach
f
I'll
finally
get
a
no
error,
but
for
all
others
I'll
get
an
nx
domain,
which
means
I
can
just
cut
all
of
those
sub
domains
out.
I
don't
need
to
care
about
all
of
them,
so
now
I'm
only
left
with
f.
So
now
I
can
proceed
with
zero
dot,
f,
dot,
ip6
dot,
arpa
and
I
can
just
go
on
until
I've
reached
the
whole
slash.
128
ibv6,
address.
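To make that walk concrete, here is a minimal Python sketch of the nibble-by-nibble pruning the speaker describes. It is illustrative only, not the authors' tooling; it assumes dnspython and whatever resolver the system provides.

```python
# A minimal sketch of the NXDOMAIN-based walk: descend the ip6.arpa reverse
# tree one nibble at a time, pruning every label that returns NXDOMAIN
# (RFC 8020 semantics). Requires dnspython; starting zone and resolver are
# assumptions for illustration.
import dns.resolver

NIBBLES = "0123456789abcdef"

def walk(prefix="ip6.arpa.", depth=32, resolver=None):
    """Yield fully expanded reverse names (32 nibbles) that exist."""
    resolver = resolver or dns.resolver.Resolver()
    if depth == 0:
        yield prefix
        return
    for nib in NIBBLES:
        name = f"{nib}.{prefix}"
        try:
            # The query type hardly matters; we only care whether the node
            # itself yields NXDOMAIN or NOERROR.
            resolver.resolve(name, "PTR", raise_on_no_answer=False)
        except dns.resolver.NXDOMAIN:
            continue          # RFC 8020: nothing exists underneath -> prune
        except dns.resolver.NoNameservers:
            continue
        yield from walk(name, depth - 1, resolver)

# for name in walk(): print(name)   # each result maps back to a /128 address
```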
H: Hi, it's Peter from deSEC. Actually, I don't mind asking at the end also, but my question is this: this reverse inference technique only works when the IP addresses actually have reverse delegations, exactly?
G: There have been prior works which try to learn new addresses using these techniques, and the number of addresses they generate, I'm not sure about the actual number, but it was a lot. But there's no study of how many of the total IP addresses have a reverse domain attached to them. That's a really nice point; we can incorporate it in future studies. Okay.
G: Yeah, so just to put this into the perspective of a /64 subnet, the subnet where a host resides in IPv6: let's say I am a scanner and I want to find that little green dot that you see at the top; that's the /128, the whole IPv6 address. Now I have two /65s over here, one on the right and one on the left, and I know that my address resides in the /65 on the left.
G: What this ends up doing is that it reduces the number of potential probes from 2^64, basically one for each IP address in this whole space, down to just 64 total probes to get to the actual address. This is a significant, exponential decrease in the number of probes that you need to send to find a host inside a /64. Okay.
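A toy illustration of that arithmetic, under the assumption of a single target host and an oracle that answers the RFC 8020 question "does anything exist below this prefix?":

```python
# Binary-search the 2**bits interface-identifier space; each halving costs
# one NXDOMAIN query, so a /64 is pinned down in 64 probes instead of 2**64.
def find_host(target: int, bits: int = 64):
    lo, hi, probes = 0, 1 << bits, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        probes += 1                 # one query per halving
        if lo <= target < mid:      # "no NXDOMAIN" -> target is in this half
            hi = mid
        else:
            lo = mid
    return lo, probes

iid, probes = find_host(target=0x1234_5678_9abc_def0)
assert iid == 0x1234_5678_9abc_def0 and probes == 64
print(f"found with {probes} probes instead of 2**64")
```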
G: So now I'll talk a bit about the experimental setup.
G: There were a few goals that we had in mind, motivated by the previous studies that tried to study scanning in IPv6. Most of them were based on studying IPv6 scanning in darknets, or from the vantage point of an authoritative DNS server.
G
But
the
problems
with
that
was
that
most
of
the
scanning
activity
they
got
was
as
a
result
of
ipv6
misconfigurations
or
they
could
not
link
the
activity
inside
the
ipv6
address
space
to
the
scanning
activity
that
came
in
so
for
this
experiment.
We
wanted
to
mimic
an
active
ipv6
address
space
capture,
actual
scanning
traffic,
so
nothing
from
its
configurations
and
link
scanning
activity
to
the
services
that
were
deployed
in
this
in
this
ipv6
address
space.
G: Just to give you an overview of my experimental setup: the grid that you see (this was supposed to come up one by one, but I don't know what happened) is divided up into four /58s. One of them is used to run our services, and the other three are control ones, so nothing was ever run in them. And these are all the services that we ran for the entirety of our experiments.
G: ...services are running in here, and you're more likely to find more addresses here, because you simply cannot brute-force everything. We capture all DNS and reverse-DNS lookup logs and all the incoming traffic from a router that was positioned here. Then we had two types of address assignments for each of our services: each service had four deployed instances, they were never deployed in the same /64 subnet nor in adjacent subnets, and each service had two lower-byte assignments and two random assignments.
G
So
lower
byte
assignments
are
addresses
that
people
put
up
manually
just
to
remember
ipv6
addresses,
so
they
will
have
a
lot
of
zeros
in
the
start
and
like
a
couple
nibbles
at
the
end,
with
a
value
like
one
two
just
to
remember
the
address
and
the
random
assignment
does
not
follow
this
pattern.
So
it's
a
random
string
of
numbers
throughout
the
subnet
throughout
the
interface
identifier,
okay,
yeah.
So
some
of
the
key
observations
that
we
saw
that
so
we
do
so
our
method
of
measuring
scanning
activity.
G
So
we
did
see
scanning
activity
even
before
we
ran
any
of
our
services,
but
the
main
point
was
that
scanning
activity
increases
significantly
after
services
were
deployed,
and
this
increase
was
not
only
not
in
terms
of
the
number
of
scanners,
but
also
in
the
number
of
probes
that
we
see
and
the
main
observation
that
we
made
was
that
nx
domain
scanners
are
using
the
side
channel
very
very
effectively.
So
in
in
this
graph.
G
You
can
see
that
we
have
the
duration
of
our
experiment
on
the
y-axis
and
the
subnet
id
is
on
the
x
axis,
so
0
0
to
f
8,
so
it's
256
in
x,
so
the
green
highlighted
portion
basically
shows
our
slash
58,
which
had
services
running
in
them
and
the
other
part
is
the
control
part.
So
this
is
the
part
before
any
of
our
services
were
deployed.
So
right
now
you
can
see
that
nx
domain
scanners
are
kind
of
like
guessing.
G
Even
the
ip
scanners
are
guessing,
but
nx
domain
scanners
probably
think
that
there
is
something
in
the
start
and
the
end
of
our
subnets
and
ip
scanners
think
that
there
can
be
anything
everywhere,
but
the
important
part
comes
in
when
we
deploy
our
services.
So,
as
you
can
see
in
the
for
the
nx
domain
scanners,
they
never
go
outside
the
slash
58
line,
so
they
know
exactly
where
the
services
were
deployed
and
they
stay
inside
that
they
because
to
basically
cancel
out
every
other
subnet.
G
They
just
need
to
send
in
one
request
to
each
of
those
subnets.
If
that
returns
an
nx
domain,
they
can
just
cancel
that
out,
so
they
don't
even
need
to
check
them
so
for
the
entirety
of
our
experiment.
They
only
stay
within
that
slash
58
and,
on
the
other
hand,
ipv6
scanners.
They
constantly
search
the
entire
256
subnets,
trying
to
look
for
newer
addresses.
G
So
this
basically
tells
us
that
this,
the
rfc
8020
semantic,
is
actually
being
exploited
by
a
lot
of
scanners
right
now,
which
reduces
their
search
space
by
a
lot,
and
this
is
our
results
for
all
of
the
services
that
we
ran.
G
So,
just
to
give
you
just
to
like
explain
what
this
means
so,
for
example,
if
you
see
wget
and
511,
that
means
that
the
mean
number
of
scans
per
subnet
that
was
running
the
service
that
you
get
got
511
more
scans
after
the
service
was
run
as
compared
to
before
so
delta.
Diff
basically
means
that
the
increase
in
scanning
activity
was
within
the
treatment
subnets
so
where
the
services
were
running
and
delta
c
says
that
this
increase
in
scanning
activity
was
in
control,
subnets,
where
nothing
was
running.
G
So
some
of
the
key
takeaways
we
have
are
that
that
nx
domain
scanners
target
treatment
subnets
for
almost
all
the
services
they
target
control,
subnets,
much
less
and
ip
scanners
kind
of
exhibit
mixed
behavior.
They
target
both
treatment
and
control
subnets
because
they
don't
really
have
an
idea
every
time
they
want
to
scan
something
they
have
to
go
deeper
and
deeper
inside
the
subnets
themselves,
to
figure
out
if
there's
an
address
in
it
or
not.
On
the
other
hand,
nx
domain
can
just
send
one
request.
G
If
it
gets
an
nx
domain,
it
can
just
wipe
out
the
entire
subject
and
nx
domain
scanners
target
different
different
services
than
ip
scanners.
So,
as
you
can
see,
all
of
this
services
are
kind
of
targeted
by
nx
domain
scanners
on
the
left,
but
iep
scanners
only
seem
to
care
about
dns
probes,
ntp
servers
and
dns
zone
files.
G
But
we
think
that
this
is
only
a
matter
of
time,
and
one
question
I
pose
here
is
that
is:
it:
is
the
efficiency
from
nx
domain
responses
worth
the
loss
of
defense
against
scanning,
so
so
rfc
8020
was
initially
introduced
to
improve
the
efficiency
of
caching
of
dns
trees.
So
if
it,
if
site
channel,
gives
you
some
much
easier
access
to
scanning,
is
it
really
worth
it
and
then
added
discovery?
Methods
are
very
different
than
we
expected.
G
We
expected
that
people
will
go
to
scanners,
will
go
to
these
public
lists
of
ip
addresses,
like
tor
ntp,
get
addresses
from
there
and
then
scan,
but
they
don't
really
do
that.
They
mainly
rely
on
open,
dns,
resolvers
and
then
neighboring
networks
should
expect
scanning
activity.
So
let's
say
for
64:
subnet
has
something
running
in
them.
G
We
expect
that
the
neighboring
64s
are
much
more
likely
to
receive
scanning
traffic
due
to
something
running
in
that
64.,
and
so
we
have
a
lot
more
takeaways
and
a
lot
more
results,
but
due
to
time
constraints
I
couldn't
put
those
in
so
if
anyone
wants
to
discuss
those
I'll
be
happy
to
do
it,
and
if
anyone
has
any
questions
and
comments,
I
would
love
to
take
them.
Thank
you.
C: Thank you, thanks for an interesting presentation. Alexander Mayrhofer from nic.at GmbH. I was wondering, did you also look at whether all name servers actually implement RFC 8020? Because that has been a big discussion in the DNS working groups.
C: Operational, yeah.
I: Rashford. So, also on 8020, pretty much: it took a long time to actually get this done, to have the answers, the NXDOMAIN answers, cover what is underneath. And if you don't want to have something in the reverse tree, either just don't put it in, or put everything in; because that's what a lot of people do, exactly. So, I mean, I would not...
G: ...8020, yeah. So this is one of the defense techniques that we actually want to propose in this paper: if you want to comply with RFC 8020, just make sure you put off the scanners, because this technique has also been discovered in many other papers; you can discover a lot of IP addresses using it. So one of the defense mechanisms against scanning would be: for everything that someone requests, just give an answer for it, just give a NOERROR for it. And yeah, that's all. Right, thank you.
J: This is Peter Koch, hi again, on 8020. I'm wondering a bit: the scanning technique was actually described as early as, what, 2005, and maybe before 2005.
J: ...8020 suggests that resolvers implement the semantics, but conceptually these semantics were already there. So if you bypass that, the scanning technique is available anyway; so any recommendation on dampening or mitigating 8020 is probably not achieving very much, because you can...
J: ...but happy to take that, yeah.
G: Yeah, sure. Just to talk a little about it: what the previous speaker suggested is that you can just return something for everything that someone asks. That can be one of the defense techniques, but obviously you can still kind of bypass that. I'll be happy to take it offline. Right.
G: There's a study by Tobias Fiebig, and he basically discovered addresses in the range of a couple of million that were previously not known from other scanning techniques. So I would imagine that in the wild, this is something people are not configuring in the way you just suggested. In this paper we want to basically tell them: please just do that. It's very simple, and you can achieve a lot more defense against scanning. Yeah.
B: ...great to have you here. We're switching over to some DNS-related topics. Our next speaker is Moritz Müller; you should already be set up and can start right away.
O: Yes, can you hear me okay?
O: Yeah, hi, and welcome to my presentation about the DNSSEC deployment metrics research. This is actually not a measurement study, but a study about measurement studies. It was initiated by ICANN, and we are currently carrying out the study together with the folks from NLnet Labs.
O: Let's dive right into the challenges. The first challenge is that DNSSEC deployment is a very wide field. If you start thinking about metrics to measure DNSSEC deployment, you probably come up right away with two things: how many domain names are signed with DNSSEC, at the root, at the top-level domains, at the second-level domains and so on, and how many resolvers actually validate the DNSSEC signatures.
O: Is it using NSEC or NSEC3, or does it support some other kind of DNSSEC automation? On the validation side, you again might think about the algorithms: what kind of algorithms does the resolver support, what kind of trust anchors does it have configured, does it support some kind of signaling protocols, and so on and so forth.
O
Additionally,
the
dnsc
protocol
is
still
being
extended,
or
at
least
related
extensions
are
being
deployed
and
developed.
This
could,
for
example,
include
cds
and
cdns
key
records
where
operators
can
automate
the
deployment
of
dns
sec,
so
do
these
metrics
do
these
attributes
also
should
be
measured
in
order
to
get
a
idea
about
dnsec
deployment,
and
finally,
we
have
the
challenge
that
there
are
also
other
protocols
related
to
dns
sec.
First
thing
that
comes
into
mind
is
dane:
should
it
also
be
taking
into
account
when
measuring
dynastic
deployment.
O
Each
or
almost
each
of
these
metrics
can
be
measured
in
different
ways
when
we
think
about,
for
example,
whether
resolve
is
invalidating
or
not,
and
then
we
could
use
active
measurements,
for
example,
from
wipe
atlas
and
issue,
queries
to
the
results
of
these
web
atlas,
clients
and
then
see
what
kind
of
responses
they
get.
O
So
for
this
reason,
in
order
to
address
these
challenges,
we
have
a
the
following
approach.
The
first
is
to
get
a
very
broad
overview
of
which
metrics
have
been
measured
so
far
by
the
community,
and
there
we
mean
not
only
the
academic
community,
where
we
look
into
academic
papers
at
more
the
high
level
high
tier
conferences
and
journals.
O
I
want
to
say
a
few
words
about
the
assessment
framework.
That's
probably
a
bit
a
big
word,
but
in
the
end
we
of
course
want
to
focus
on
the
coverage.
How
many,
for
example,
resolvers?
Can
we
potentially
cover
with
a
certain
measurement
technique
or
how
many
domains
we
can
we
potentially
cover,
but
we
think
in
order
to
have
measurements,
that's
measured,
dns
deployment
that
are
also
useful
for
a
broader
community.
We
also
have
to
look
into
other
attributes
of
these
measurement
techniques.
O: So we also want to take this into account when assessing the different measurement techniques. And with that I already come to the end of my presentation, where I would like to collect feedback from you as a measurement community. I would like to understand which DNSSEC metrics you think are most important to assess DNSSEC deployment now and in the future. Does that only include, well, how many domain names are signed, for example, and how many resolvers validate? Or do you think that more advanced metrics, so to say, are also necessary?
C: Thank you, Moritz, interesting presentation, a lot of interesting aspects. To answer your first question...
B: Yeah, I mean, actually, thanks for bringing this presentation here and starting the discussion; feel free to have more comments on the mailing list and have a discussion there. But I think it would also be very interesting to maybe take a similar look at different measurement studies more broadly and figure out these kinds of aspects, so very nice to have the discussion in this group.
B: Okay, next we have Jiarun Mao, great to have you here. I've just set up your slides; I think you should have control, and you should be ready to go.
P: Oh sure, thank you. Yes, so this is Jiarun from CWRU, and today I'm going to present our results in measuring the support for DNS over TCP in the internet. Right, so here are the topics I'm going to cover today: I'm going to look at DNS over TCP support on two sides of the DNS infrastructure, the recursive resolver side and the authoritative DNS server side. So...
B: Can you still hear us, Jiarun? That's a problem.
B: Okay, now we can hear you again, so we missed a little bit of what you said. I think you have to go back a little bit.
B: What do you think it is, Brian? Yeah, can you hear us? We lost you again; it might be your local AirPods or something that mutes your...
B: We can't hear you again. Let's actually switch the presentations and come back to you; you can figure out whether you can get some other headphones, because if we don't hear you, that doesn't help.
K: Thanks, thanks for having me today. So, I have some echo here.
K: Okay, now the echo should be gone. Yes? Perfect, okay. Yeah, thanks for having me today. I'm mostly blind here and I can't hear you, so maybe we take questions following the presentation. All right, let's get started. Hi, welcome; I'm Mike from the Technical University of Munich, and I'm going to present our paper on DNS over QUIC, a study we performed that was accepted at the PAM conference, which is scheduled to happen next week.
K
As
for
dns
over
quick,
we
support
the
dns
over
quick
versions,
draft
06
to
draft
0
0
and
for
quick.
We
have
support
for
the
rc
version
and
the
three
different
draft
versions,
which
are
stated
here
in
our
adoption
scans.
We
collected
different
metrics.
First
of
all,
the
negotiated
dns
overclick,
as
well
as
quick
versions
and
the
common
names
of
the
certificates.
K: We see that the adoption rises slowly but steadily: from the first week of our measurement to the final week, we see an increase from around 46 to 1,217 resolvers. But what we also see is high fluctuation: only 52% of the resolvers which were available in the first week are still reachable in the last week. If you compare this to DNS over UDP, around 97 percent are still reachable in the last week.
K
We
also
see
an
uptake
in
the
dry
and
the
weeks
which
are
highlighted
here
of
quick
version,
one
in
combination
with
dns
or
quick
draft
zero
two,
and
we
could
track
this
to
an
open
source,
dns
server,
implementation
of
edgard
home.
So
they
changed
their
d4
quick
version
from
draft
34
to
quick
version
one
in
this
week,
and
we
could
verify
this
also
by
looking
at
the
common
names
of
those
certificates.
K
So
we
find
something
like
edgard
does
something,
but
something
which
hints
at
the
usage
of
the
edgard
home,
open
source,
dns
server
implementation,
so
next
to
their
open
source.
Dns
server
implementation
edgard
also
offers
a
publicly
reachable
dns
over
quick
service
and
they
already
implemented
dnso
quick
to
f03
in
combination
with
quick
version,
one
which
is
highlighted
here
in
the
yellow
bars,
and
we
find
these
for
around
25
reservoirs.
In
the
final
week
of
our
measurement
with
the
common
names,
dean,
asterodetkat.com
and
edgar
ch.
K
Next,
we
come
to
our
response
times
so
for
our
response
times
measurement,
we
performed
hourly
measurements
over
the
course
of
one
week
and
we
use
the
other
stuff
ipv4
addresses
from
the
adoption
scans.
We
performed
a
single
credit
protocol
and
it's
important
to
note
that
we
have
a
location
bias
where
we
only
measured
from
one
vantage
point,
and
this
is
why
we
did
comparative
measurements
to
dns
of
udp
tcp
dot
indo.
So
in
total
we
find
246
resolvers,
which
support
all
those
targeted
dns
protocols.
K
Then
we
perform
two
subsequent
queries.
The
first
one
is
cash
warming,
query
to
warm
the
dns
cache
and
then
the
actual
measurement.
We
collect
different
metrics.
We
have
the
handshake
time,
which
is
the
time
from
the
start
of
the
connection
establishment.
Until
the
connection
is
established,
be
it
secured
or
unsecured.
K
Then
we
have
the
resolve
time,
which
is
the
time
from
the
time
we
state
the
dns
trivia
until
we
get
a
successful
answer
back
and
the
sum
of
those
is
the
so-called
response
time
other
than
that,
we
also
have
a
protocol
specific
rtt,
so
we
performed
protocol
specific
rtt
measurements
with
different
payloads
per
transport
layer
protocol
to
get
an
estimation
of
the
rtt
which
could
be
different
based
on
the
payload.
You
are
using
in
the
different
transfer
programs
as
the
limitations.
K
Starting
with
the
response
times
so,
first
of
all,
we
have
to
check
our
expectations
the
resolve
time
as
it's
just
a
dns
query.
Then
the
answer
from
the
cash
record
and
the
dns
sensor,
as
well
as
the
rtt,
which
should
take
roughly
one
rtt
over
all
protocols,
and
if
we
now
look
at
the
resolve
time
here
in
the
cdf,
we
can
see
that
all
different
protocols
overlap,
so
we
find
that
resolve
times
are
identical
as
expected.
K
Moreover,
if
we
now
look
at
the
protocol
specific
ltd
measurements
which
are
shown
here
in
the
subplots,
we
also
see
that
all
protocols
overlap.
So
we
find
no
protocol
specific
path
influences
here
and
if
we
would
overlay
those
figures,
we
would
see
that
the
resolve
times
and
the
rtts
are
actually
identical.
K
K
K
K
K: What we can now do is have a look at the handshake-to-RTT ratio. As I said, we also measured the protocol-specific RTTs with every handshake-time measurement, so we can divide the handshake-time measurement by the RTT measurement to get the handshake-to-RTT ratio, which is shown in the subplot here. What we can see for DNS over TCP, on the left-hand side, shown in green, is that this nicely aligns with one RTT, as expected. So this is confirmed.
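A trivial sketch of the metric just described: divide each handshake-time sample by the protocol-specific RTT measured alongside it. The sample values below are made up for illustration.

```python
# Hypothetical samples (milliseconds), not measured values.
handshake_ms = {"tcp": 49.8, "tls": 101.2, "quic": 150.7}
rtt_ms       = {"tcp": 50.1, "tls": 50.3,  "quic": 50.0}

for proto in handshake_ms:
    ratio = handshake_ms[proto] / rtt_ms[proto]
    print(f"{proto}: handshake/RTT = {ratio:.2f}")
# Expectation: ~1 for TCP, ~2 for TLS/HTTPS, and ideally ~1 for QUIC,
# unless an extra validation round trip pushes it toward 2 or more.
```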
K
If
we
now
look
at
dinosaur
materials
and
https
again
overlapping
here,
on
the
right
hand,
side,
we
see
that
it
follows
the
distribution
of
two
rtts
up
under
the
median
and
then
it
converges
into
a
long
tail.
So
this
is
somewhat
confirmed
and
for
dns
so
quick.
We
see
a
very
weird
distribution
here.
So
around
20
of
measurements
follow
the
one
rtt
distribution
and
then
it
converges
into
a
long
tail
where
we
see
around
40
percent
of
measurements
have
an
handshake
to
rtt
ratio
of
more
than
two
rtts.
K
So
analyzing
this
we
find
that
this
is
an
interaction
with
the
quick
client
address
validation,
which
is
a
mandatory
feature
of
the
quick
standard
to
prevent
traffic
amplification
attacks.
So
we
actually
did
perform
client
address
validation.
While
we
reused
the
token
which
was
issued
by
the
dnso
quick
server
issued
in
the
cash
form
query
and
our
subsequent
initial
of
the
actual
measurement.
So
according
to
client
as
well
as
server
state,
the
client
address
validation
is
actually
fulfilled.
K
So
as
a
conclusion,
we
see
a
slow
but
steady
adoption
with
high
weak
overweight
fluctuations.
As
for
our
response
times
measurement,
we
find
that
crick's,
full
potential
or
dns
over
export
potential
is
utilized
in
only
20
of
measurements,
where
40
of
measurements
show
considerably
higher
handshake
times
due
to
the
client
address
validation
behavior.
Q: Can you hear me now? You are muted in the room... Yes? Yes. Hey, Lorenzo Colitti, Google. Super interesting. I'm part of the Android networking team, and we're actually in the process of rolling out something that you didn't test, actually, which is DoH3.
Q
So
basically
doh
over
it,
quick
right-
and
one
thing
that's
related
to
this-
is
that
that
was
that
seemed
to
be
like
easier
to
implement,
because
there's
more
server
support
and
one
thing
that
we
noticed
is
there's
a
substantial
hit
in
terms
of
bandwidth
due
to
lots
of
http
headers.
Q
That
really
substantially
increases
the
bandwidth
used
for
the
queries,
and
it
would
be
interesting
if
you
had
that
type
of
measurement
as
well,
because
metrics
on
android
are
you
know
a
few
thousand
dns
queries
per
day
and
it
just
it
sort
of
adds
up.
Q
So
I'd
be
super
interested
in
seeing
what
metrics
you
have
well,
my
what
metrics
you
could
get
with
you
know
with
in
terms
of
bandwidth
size,
because
you
measured
rtt
and
that's
super
interesting,
but
also
like
how
much
bandwidth
is
used
by
these
queries,
because
that's
that's
a
that's
a
big
deal
so
that
yeah
just
wanted
to
say
that
it's
also
super
interesting
work.
I'll,
obviously
I'll
share
this
with
the
team.
Q
That's
working
on
this,
and,
in
particular
like
the
quick
bug
around
I
I
will
be
looking
at
the
paper
to
see
if
there's
a,
if
there's
anything
we
can
do
about
that.
We
didn't.
I
wasn't
aware
of
this
of
this
issue,
so
we're
going
to
be
looking
at
that.
Thank
you.
Thank
you
for
presenting.
B
Yeah
thanks
a
lot
from
my
site
again
like
you,
can
always
have
further
discussion
on
the
mail
list.
I
don't
see
anybody
in
the
queue.
F: So hello everyone, my name is Deutschmann, and I'd like to present some performance measurements of QUIC implementations over geostationary satellite links, which we have obtained using the QUIC Interop Runner.
F: Geostationary satellite networks heavily rely on TCP proxies, so-called performance-enhancing proxies. These are not applicable anymore in the case of encrypted transport, like VPNs or QUIC, and so far the performance of QUIC over geostationary satellite links has shown to be very poor. There's a draft which summarizes the most important aspects, we have talked about this in previous MAPRG meetings, and there's also a literature overview.
F: You're probably all aware of the QUIC Interop Runner; for those who are not, here's a screenshot of it. It was developed and is maintained by the QUIC working group, especially Marten Seemann, and I have to say a big thank you for providing this framework, which helped us to integrate our satellite scenarios. Next slide.
F: What we basically did is add satellite-related performance tests, shown in the table at the bottom right.
F
So
very
brief.
The
architecture
of
the
original,
quick
indoor
runner
it
consists
of
docker
containers
running
on
a
single
host
machine
ns3
is
used
as
a
link
emulation.
We
have
12
quick
client,
implementations
and
13
quick
server,
implementations
and
for
each
combination.
We
run
10
iterations
and
this
setup
is
used
for
the
emulated
scenarios,
terrestrial
sut
and
satellites.
F
In
our
case,
the
client
host
in
the
server
host
is
located
on
different
machines,
so
the
client
was
directly
connected
to
the
satellite
modem,
whereas
the
server
host
is
located
in
the
in
our
university
network
and
the
main
modification
is
then
that
client
and
servers
are
interfacing
with
real
interfaces.
F
There's
only
a
single
vantage
point,
so
it's
not
very
representative,
but
it's
mainly
used
to
check.
How
well
does
the
emulation
setups
compare
to
the
real
satellite
links?
Yeah
next
slide?
F
F
F
F
The
servers
server
implementations
are
shown
in
the
columns
and
there
you
can
see
that
mainly
io,
quick
k-wick
and
ancient
x
is
performing
rather
poor,
whereas
the
client
implementations
are
shown
in
the
in
the
rows
where
k-wick
move
fast
to
some
extent,
nico
and
ng-tcp.
Two
are
not
performing
that
good.
F
In
general,
we
see
very
mixed
results
on
the
right
side.
We
have
the
sub
scenario,
but
with
an
additional
packet
loss,
and
you
can
clearly
see
that
the
performance
decreases.
We
have
a
lot
of,
we
have
a
lot
more
timeouts.
Also,
the
timeout
was
set
to
six
minutes
in
this
scenario,
and
actually
there
are
only
a
few
combinations
which
perform
okay.
F: On the right side we see Eutelsat, which has a link capacity twice as much as the other scenarios. From looking at the legend, you see that the absolute goodput values are slightly higher, but compared to the link capacity, the goodputs are still not very satisfying.
F: Okay, next slide; then we have a summary of the results considering all implementations. We use this metric, the link utilization, defined as goodput divided by the link rate, and here you can clearly see that the satellite scenarios are way below the terrestrial link in terms of absolute numbers. Eutelsat has the highest values, but again, compared to the link capacity, it's not really beneficial.
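The utilization metric from that slide, written out; the numbers below are illustrative, not measured values from the talk.

```python
# Link utilization as defined on the slide: goodput divided by the
# configured link rate.
def link_utilization(goodput_mbps: float, link_rate_mbps: float) -> float:
    return goodput_mbps / link_rate_mbps

print(link_utilization(goodput_mbps=3.2, link_rate_mbps=20.0))  # -> 0.16
```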
F: Now, to try to understand why the implementations performed differently, we tried to look into which congestion-control algorithms the implementations are using. This table is not guaranteed to be 100% correct; we tried to look at the code, the command-line parameters, and the documentation.
F: You can clearly see that the implementations which are using Reno or NewReno do not perform well in the satellite scenario. Regarding Cubic, most implementations perform well in the satellite scenario, with a few exceptions, and BBR performs equally well in the terrestrial and the satellite scenarios.
F
What
we
did
not
look
at
yet
is
the
is
the
impact
of
flow
control,
which
of
course,
is
also
important,
especially
regarding
because
when
the
client
has
limited
flow
control,
then
the
server
cannot
do
much
about
it.
F: Okay, next slide. We have another example which does not work that well, msquic as server and xquic as client. You can see that the sending rate goes up and down a lot, and the overall time, yeah, it takes really long for the transmission of a 10-megabyte file.
F
The
key
message
is
probably
that
quick
over
geostationary
satellite
links
is
still
performs
very
poor,
it's
worse
with
packet
loss.
The
performance
depends
on
both
client
and
server.
Of
course,
some
open
implementations
might
be
proof
of
concept
implementations.
They
are
probably
not
optimized
for
satellite
links,
but
we
also
saw
a
big
great
variation.
Some
implementations
do:
okay,
others
do
not
that
good.
F
For
us,
it
was
very
hard
to
debug
each
and
every
implementation
or
combination
in
detail
simply
because
there
are
quite
a
lot
of
implementations,
but
his
next
steps
we
try
to
get
do
some
more
detailed
analysis,
maybe
consider
the
influence
of
the
flow
control
at
some
first
further
test
scenarios
and
long-term
measurements,
because
we
we
think
that
the
interpreter
is
really
helpful
for
us
to
to
to
test
the
performance
of
multiple
of
a
broad
range
of
quick
implementations.
F: Yeah, in case of any discussions, I'd like to welcome you, to invite you, to the mailing list. Thanks.
F: We have received some feedback already, which is very, very good, but really understanding the code of every implementation just, yeah, it takes a while.
G: Hello, Hammas, University of Iowa. A very interesting talk, first of all. So I understand that the performance is very poor for QUIC on geostationary satellites, but can this study be transferred over to low Earth orbit, something like Starlink, and what would that look like? Would QUIC still perform very poorly? Because now you're connected to a satellite for, let's say, six minutes at max, and then the satellites keep on changing. So how would this study transfer over to low Earth orbits?
F
Satellites
thanks
for
the
question,
we
actually
have
some
performance
measurements
with
starlink
already
there
on
the
website,
starlink
in
general
performs
quite
well.
It
does
not
have
the
problem
of
the
high
latency
so
yeah.
G: Would that affect these measurements?
Q
Lorenzo
coletti,
I'm
just
just
really
out
of
curiosity
if
there's,
if
you've
taken
any
sort
of
suggestions
to
the
quick
implementers
like
about
what
to
do
like
one.
One
like
obvious
and
probably
very
stupid
thing
to
do
is
if
the
latency
is
more
than
500
milliseconds
just
be
a
bit
more
aggressive
because,
like
that
doesn't
happen
on
wired
links,
but
I
don't
know
if
that's
like
just
really
stupid
or
only
stupid.
So
that's
just
really
out
of
curiosity
like
what
you
know
have
you
thought
about
what
to
do?
Q
F: So, the if-statement in the code, like "I am on a satellite link": picoquic actually does just this. There are other approaches which have been discussed in the QUIC working group.
F
It
helps
it
helps
pick.
A
quick
is
one
of
the
better
performing
implementations
over
satellite
links.
So
there
are
other
approaches
like
the
zero
rtt
pdp
approach,
which
was
discussed
in
the
quick
working
group
yesterday
and
of
course,
you
have
to
do
parameter
tuning
and
then
the
question
is
how
how
does
a
good
contestion
control
are
going
to
look
like
for
such
kind
of
links?
So
these
are
all
very
basic
questions,
but
it
already
helps
if
the
implementations
simply
add
a
satellite
test
case
to
their
benchmark
scenarios.
Q
Very
interesting
right:
I
actually
have
a
my
own
high
latency
link,
it's
not
a
satellite
link,
but
it's
and
I
had
to
write
my
own
performance
enhancing
proxies
for
tcp,
and
you
know
it's
yeah.
So
thank
you
very
interesting.
Thanks.
B: Yeah, thank you. I think that's the end of the queue, and we move over to Matthias at this point.
N: So yeah, hello everybody. This is a talk about QUIC and DDoS scanning... oops, just...
N: Yeah, hello. So this is a talk about QUIC and the measurement study that we conducted last year to better understand whether QUIC is used to conduct DDoS attacks. This is joint work with Marcin, Raphael, and Thomas, and it was presented last year at IMC. So, next slide, please. In a nutshell, the main takeaway of this measurement: the question we asked is "is QUIC used for DDoS?", and the answer is yes, and we measured this based on data from a network telescope.
N: Now I want to present a little bit more detail. Next slide, please. Just as a brief recap: QUIC is based on UDP and inherits typical UDP properties, so usually it does not have state, and it's based on UDP, of course, to prevent ossification. On the other hand, it also has some properties inherited from TCP, because it needs to actually implement state; it is connection-oriented, and this leads to some vulnerabilities. Next slide.
N: The first attack is a reflective amplification attack. The idea here is that the attacker sends an initial message to a server but uses a spoofed source address; the server will reply to the spoofed source address, and instead of sending one packet it will send two packets. So the server actually reflects the initial message and amplifies it.
N
The
question
here
is
whether
this
is
a
likely
attack
next
slide
and
the
answer
is
it's
rather
unlikely
it
may
happen,
but
it
is
unlikely
because
quick
by
design
allows
this
client
only
a
server
only
to
reply
three
times
more
volume
compared
to
what
the
initial
client
was
sending.
We
heard
this
already
in
the
previous
talk
first
and
second,
and
so
it
is
limited.
N
Amplification
volume
is
limited
by
a
factor
of
three
and
and
second
there
are
many
many
more
udp-based
protocols
available
that
allow
for
much
higher
amplification
factors
such
as
dns
or
ndp.
So
this
is
a
possible
attack,
but
by
design
is
limited
and
rather
unlikely
next
slide.
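A sketch of the anti-amplification rule the speaker refers to (RFC 9000, Section 8): before the client address is validated, a server may send at most three times the bytes it has received from that address.

```python
# Minimal model of QUIC's 3x anti-amplification budget per unvalidated peer.
class AntiAmplificationBudget:
    FACTOR = 3

    def __init__(self):
        self.received = 0   # bytes received from the unvalidated address
        self.sent = 0       # bytes already sent to it

    def on_datagram_received(self, nbytes: int):
        self.received += nbytes

    def may_send(self, nbytes: int) -> bool:
        return self.sent + nbytes <= self.FACTOR * self.received

    def on_datagram_sent(self, nbytes: int):
        self.sent += nbytes

budget = AntiAmplificationBudget()
budget.on_datagram_received(1200)   # client Initial (padded)
assert budget.may_send(3600)        # within the 3x limit
assert not budget.may_send(3601)    # would exceed it -> must wait or Retry
```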
N
The
second
type
of
attack
that
the
attacker
might
conduct
is
a
resource
extortion
attack,
and
the
idea
here
is
that
the
attacker
sends
an
initial
message
again
using
sports
source
ip
addresses
to
allocate
states
at
the
server
side
and
the
server
will
reply
as
usual
within
the
first
round
to
type
this
initial
message,
and
that
is
handshake
message
and
this
is
sent
back
to
the
source
to
the
support
source
ip
address.
N
And
ideally
this
source
address
does
not,
I
mean,
is
offline,
so
it
will
not
reply
at
all
with
so
reset
or
something
like
this,
which
means
that
the
server
allocates
states
for
a
decent
amount
of
time,
and
if
the
attacker
has
a
distributed,
botnet
for
example,
and
floods,
the
server
the
the
local
queue
at
the
server
will
fill
up
and
the
server
will
not
be
able
to
reply
anymore
or
even
to
a
benign
request.
N
So,
and
in
addition
to
this
to
allocating
state,
it
also
introduced
computational
resources
because
of
the
cryptographic
glsen
shake
next
slide,
and
these
replies
from
the
server
to
the
support
source
ip
address
can
be
observed
and
a
typical
measurement
infrastructure
to
observe
spoofed
packets
are
network
telescopes,
and
these
tele
telescopes
actually
will
receive
packets
that
are
spoofed
with
ip
doses,
from
the
network,
telescope,
ip
prefix
and
in
our
measurement
study.
We
leveraged
this
telescope
such
a
telescope
next
slide.
N
What
we
actually
did
is
that
we
analyzed
data
from
the
ucsd
slash
kaiba
telescope,
which
is
a
slash
noise,
prefix
or
as
a
large
address
space
which
allows
us
to
capture
more
than
two
percent
of
the
actual
ipv4
address
space.
So
that's
only
focusing
on
ipv4,
as
we
heard
in
the
previous
talk,
measuring
malicious
traffic
in
ipv6
space
is
a
different
story
and
much
more
complicated,
and
we
did
this
for
a
whole
month
into
an
april
2021,
and
this
telescope
receives
a
lot
of
malicious
traffic.
N
It's
not
only
quick
scans
or
a
quick
back
scatter,
but
also
tcp
scans,
tcp
back
together
and
so
on.
So
next
slide.
What
we
did
is
we
need
to
distinguish
this
quick
traffic,
and
how
did
we
do
this?
N
We
did
first
and
port-based
classification,
so
we
filtered
for
all
udp
for
443
traffic,
and
this
is
a
very
common
method
to
distinguish
quick
traffic
from
other
traffic,
but
we
also
applied
some
kind
of
the
packet
inspection
to
exclude
for
its
positives,
and
for
this
we
use
the
bioshark
detectors
and
we
also
did
some
manual
verification.
N
So
we
were
very
sure
that
the
traffic
that
we
identified
were
actually
quick,
driving
and
based
on
this,
this
denver
telescope
and
our
let's
say
to
identify
the
quick
traffic
we
detected
92
million
quick
packets,
and
then
we
distinguish
this
or
split
this
traffic
into
types
request
and
response.
So
requests
are
packets
that
are
sent
to
the
udp
443
and
responses
are
so
the
destination
port,
udp,
443
and
responses
are
packets.
That
included
the
source.
Port,
udp,
443
and
requests
are
packets
that
are
more
or
less
scans.
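A condensed sketch of that first classification step: treat UDP/443 packets toward the telescope as requests (scans) and packets from UDP/443 as responses (backscatter). The deep packet inspection with the Wireshark dissectors that the speaker mentions is not reproduced here.

```python
# Port-based request/response split over a stream of packet records.
from collections import Counter

def classify(packets):
    """packets: iterable of dicts with 'proto', 'sport', 'dport' keys."""
    counts = Counter()
    for pkt in packets:
        if pkt["proto"] != "udp":
            continue
        if pkt["dport"] == 443:
            counts["request"] += 1    # unsolicited query -> likely a scan
        elif pkt["sport"] == 443:
            counts["response"] += 1   # backscatter from a spoofed request
    return counts

print(classify([{"proto": "udp", "sport": 51334, "dport": 443},
                {"proto": "udp", "sport": 443, "dport": 40001}]))
```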
N
Now
I
mean,
as
we
heard,
researchers
do
scans,
for
example,
to
find
quick
servers
and
the
responses
are
backscatter
traffic,
so
traffic
that
was
sent
from
a
quick
server
back
to
the
spoofed
ip
address.
So
response:
okay,
next
slide
when
you
then,
but
what
you
see
here
is
a
high
level
view
on
the
traffic
that
we
captured
and
on
the
x-axis
you
see
the
time
and
on
the
y-axis
the
number
of
packets,
and
we
saw
two
heavy
scanners
which
were
from
tom.
N: ...we heard about it, and then there was the other one, and then some other traffic remains. We excluded the scan traffic, because it is not of interest here. In 2022 we also saw some scans from Censys, so the commercial scanners were a little bit late compared to the actual research scanners. So that was the sanitizing: excluding all of the scanners, because this traffic is benign, and we are interested in the malicious part.
N
Next
slide,
please
what
you
see
here
is
now
the
traffic
that
we
captured
after
sanitizing
distinguish
between
response
and
requests
and
for
the
request,
packets.
You
also
see
a
little
zoom
a
little
inlet
here
and
that's
the
blue
curve,
and
this
shows
more
or
less
additional
pattern,
which
most
likely
are
the
scanners,
but
not
the
heavy
research
scanners.
Maybe
malicious
cannot
looking
for
quick
servers,
but
with
a
much
lower
rate
compared
to
the
heaviest
gamers
and,
more
importantly,
it
is
a
orange
curve
which
shows
the
response.
N: Then we checked who actually is attacked: from which autonomous systems do we receive the responses, which means where the servers are located that receive the spoofed requests. That's what you see in the second column, responses; each line shows the type of autonomous system the responses come from, and the vast majority of responses that we receive in our network telescope actually come from content-provider networks.
N
So
next
slide.
Now.
The
question
is
whether
these
responses
to
the
sport
source
address
from
the
source
address
are
actually
denial
of
service
attacks
or
something
else-
and
this
is
a
general
challenge
identify
I
mean
you
see
a
lot
of
traffic.
Is
it
a
attack
or
not,
and
what
we
did
here
is
that
we
applied
comments,
research
from
a
prior
verb.
N: First we group our packets into sessions, where each session is split by an idle timeout of five minutes, and then we applied these common thresholds, which means we identify a session as an attack if the session lasts longer than 60 seconds, if we see more than 25 packets, and a maximum packets-per-second rate of 0.5.
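A sketch of the session-based classifier just described: sessions are packet streams from one source split on a five-minute idle gap, flagged as attacks when they last over 60 seconds, contain more than 25 packets, and reach the packets-per-second threshold. Timestamps are in seconds; exact threshold semantics are an approximation of the prior work the speaker cites.

```python
IDLE_TIMEOUT = 300.0   # 5-minute idle gap splits sessions
MIN_DURATION = 60.0
MIN_PACKETS = 25
MIN_MAX_PPS = 0.5

def sessions(timestamps):
    """Group sorted packet timestamps into sessions split by the idle timeout."""
    current = []
    for ts in sorted(timestamps):
        if current and ts - current[-1] > IDLE_TIMEOUT:
            yield current
            current = []
        current.append(ts)
    if current:
        yield current

def is_attack(session):
    duration = session[-1] - session[0]
    if duration < MIN_DURATION or len(session) <= MIN_PACKETS:
        return False
    # peak rate over sliding one-second windows (a simple approximation)
    max_pps = max(sum(1 for t in session if w <= t < w + 1.0) for w in session)
    return max_pps >= MIN_MAX_PPS

flood = [i * 0.5 for i in range(200)]   # 2 pps for ~100 s -> flagged
assert any(is_attack(s) for s in sessions(flood))
```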
N: Next slide. Applying this, we found more than 2,900 attacks in our one-month measurement set, and, surprise, surprise, next slide:
N: It relates to Facebook: most of the attacks go to one of these famous content-delivery or web-service providers. Next slide. Now you can ask, since this identification of attacks is somehow based on empirical data, whether our thresholds are valid or not, because they are from prior work some years ago. What we did is change these thresholds based on different weights; that's what you see here on the x-axis, and on the y-axis the blue curve is the number of attacks that we identify based on the weight.
N
If
you
have
a
weight
that
is
smaller
than
one,
we
have
a
more
relaxed
sweatshirts
and
if
you
have
a
weight
that
is
larger
than
one,
we
have
a
more
stricter
thresher
and
what
you
see,
even
if
we
put
a
much
stricter
10
times
stricter
in
our
service
version,
we
find
still
a
significant.
I
mean
a
decent
number
of
attacks
and
the
second
is
shown
by
the
orange
curve
that
the
that's
a
share
related
to
a
content.
Delivery
networks
is,
I
mean,
that's
a
at
expressions
more
or
less
independent
offices.
N
Next
slide,
then,
we
looked
a
little
bit
more
in
detail
into
the
victims
and
to
better
understand
whether
a
quick
denial
of
service
attacks
relates
or
correlates
with
other
attacks.
And
what
actually
can
happen
is
that
quick
only
that
the
attacker
only
quick
attacks,
a
quick
service,
that's
that
that
first,
another
service
is
attacked
and
then
the
quick
service,
or
that
an
attack
of
different
servers
occur
in
parallel.
So
this
is
you
see
here
illustrated
at
current
attacks.
N
That
triangle
shows
the
start
of
the
attack
and
the
green
attack,
as
a
green
triangle
shows
the
stop
of
that
attack
and
the
first
column
shows
you
a
concurrent
attack,
which
means
that
the
tcp,
typically
tcps
in
flutter
attack,
for
example,
occurs
in
parallel
to
a
quick
attack
and
after
that,
the
quick
attack
continues,
which
means
that
you
have
a
sequential
attack
next
slide,
and
now
we
do
a
little
statistic
of
all
all
events
and
found
that
half
of
the
attacks
are
actually
concurrent
attacks,
which
means
that
we
have
a
quick
attack
and
a
parallel
gcp
icmp
zoom
flat
of
icp
tcp
zoom
flat
and
roughly
40
of
the
attacks
that
we
found
are
sequential
attacks,
which
means
after
the
tcp
is
in
flight.
N
Now
the
question
is:
can
we
protect
against
this
and
they
submit
one
mechanism,
which
is
a
quick
retry
mechanism
that
allows
these
type
of
resources,
exhaustion
attacks
before
a
client
is
authenticated,
and
the
idea
here
is
to
follow
an
approach
similar
from
tcp
cookies,
the
before
the
server
the
quick
server
establish
the
state.
N: ...it sends a cookie, a secret, to the client, and the client needs to reply with it. If this is correct, the server then starts the typical QUIC handshake; and if the answer is not correct, the server just ignores it and no state is established at all.
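A stateless-retry sketch in the spirit of what the speaker describes: the server answers the first Initial with a Retry carrying an address-bound token, much like a TCP SYN cookie, and only allocates connection state once a client echoes a valid token. The HMAC key and token format here are assumptions for illustration, not QUIC's wire format.

```python
import hashlib, hmac, os, time

SECRET = os.urandom(32)   # server-side key for minting tokens

def make_token(client_addr):
    ts = int(time.time()).to_bytes(8, "big")
    mac = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
    return ts + mac

def token_valid(token, client_addr, max_age=30):
    ts, mac = token[:8], token[8:]
    if int(time.time()) - int.from_bytes(ts, "big") > max_age:
        return False
    expected = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

def on_initial(client_addr, token=None):
    if token is None:
        return ("RETRY", make_token(client_addr))   # no state allocated yet
    if token_valid(token, client_addr):
        return ("HANDSHAKE", None)                  # address validated: allocate state
    return ("DROP", None)                           # spoofed or stale: ignore silently

kind, token = on_initial("192.0.2.7")
assert kind == "RETRY"
assert on_initial("192.0.2.7", token)[0] == "HANDSHAKE"
assert on_initial("198.51.100.9", token)[0] == "DROP"   # wrong source address
```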
N: Now I'll show you a brief emulation, I mean, a testbed evaluation, of whether this QUIC Retry mechanism helps to prevent resource-exhaustion attacks, and what you see here is what we had to do.
N: We used nginx and initiated these QUIC floods towards the server, based on different attack rates and different configurations. What you see here is that in this configuration, with a rate of 100 packets per second, you are already able to make the QUIC service unavailable. Next slide. You can argue that you can use more resources, more CPU resources.
N: Yes, you can; but still, if you increase the packet rate, you will still be able to take the service offline, and the QUIC server will not work. So, next slide.
N: But if you enable the QUIC Retry mechanism, you will actually be able to prevent the server from being exhausted, and the server will still be available. The downside, and note this, is that the QUIC Retry mechanism adds an additional round trip; that is a bit of a disadvantage, but it prevents this type of attack. Next slide. Yeah, very important: this evaluation that I presented here is not about nginx.
N: It is a design issue of QUIC. And what we found in our data is that none of the servers currently use the Retry option, most likely because the Retry message adds an additional round-trip time, and you actually want to reduce round-trip-time delays. Next slide. So, the data that I showed you was from 2021.
N: We also analyzed more recent data from this year, and we found that these QUIC Initial floods actually doubled, so the attacks are increasing. We also analyzed Google's and Facebook's off-net servers, servers that are not located in the Google or Facebook ASes, and if you consider those, you find even more attacks.
N: So, to conclude: QUIC is vulnerable to Initial floods that try to perform a resource-exhaustion attack on servers, and we actually find this type of attack deployed in the real internet, with an increasing trend. You can prevent and mitigate this by using Retries. Currently, many, many servers do not use Retry, but if you want to prevent this, you need to enable it or find another way.
C: Thank you, very interesting information. Do I understand correctly that you said your telescope would catch about two percent of the potential backscatter of an attack?
C
So
that
means
that
the
attacks
would
probably
be
like
roughly
50
times
the
size
that
you
observe
and
compared
to
other
types
of
u.s
attacks.
Those
appear
really
really
tiny.
Yeah
I
mean
we
are
dealing
in
the
dns
industry
with
tens
of
millions
of
packets
per
second,
and
my
question
is
you
said:
you're
gonna,
you
saw
more
attacks
in
terms
of
number
of
lots
that
were
coming
in
in
2022.
N: Yeah, I mean, I don't know the...
N: ...attack sizes, yeah. I mean, we are not arguing that the services that receive this traffic actually go offline or anything like that. What we are saying is that we see patterns that hint towards these attacks; whether they are successful or not is a different question, and that is not what we are analyzing here. But it's a good point. I don't know the PPS for the updated data; that is something we should check. I just don't know, just recently.
G: All right, really good research. One of the things that we saw with our IPv6 scanners was that some scanners controlled, let's say, a /48, and they would send two packets from each of the IP addresses, then increment the address and keep on sending two packets every time. So would your calculation of attackers miss that, if someone's using some sort of distributed architecture to send scans, so that, let's say, it doesn't send 5,000 packets?
N: I mean, if you can go back, yeah, anyhow: we group it by source IP address, yeah.
N: That's fine, yeah; yes, by source IP address. We did not do it here, but you can also consider requests or replies from multiple sources in parallel, and then you can track and emulate some of the state increase, for example. Yeah, yeah.
B: Okay, thanks a lot. So Jiarun quickly drove to the university and now has better connectivity, so we will give it another try.
P: Oh great, thank you, and, first of all, my apologies for any inconvenience. Sorry, I just moved to another place for a better connection, so I hope it sounds better now. Right, so this is Jiarun from CWRU, and today I'm going to present our results... oh, I'm hearing echoes of you.
P: Okay, okay, so yeah. So today I'm going to present our results in measuring the support for DNS over TCP in the internet. Right, so let me see... oh yeah, here are the topics we're going to cover today. We're going to look at DNS over TCP support on two sides of the DNS infrastructure: the recursive resolver side and the authoritative DNS server side.
P: Let's move on to the TCP fallback support by recursive resolvers; let's just use "resolver", which is shorthand for recursive resolver, or recursor. The general approach is that we want to measure as many resolvers as possible, and we want to force those resolvers to talk to our ADNS through TCP.
P: We have four data sets on those resolvers. The first one is an open IPv4 scan with unique queries to our own domains. For the second one, we use the bounce message in the SMTP protocol: our scanners send emails to non-existing recipients at the domains from the Majestic top 1 million list, from our own domain; those mail servers won't be able to successfully deliver the emails, and so they are expected to send bounce messages for the delivery failures back to our mail servers.
P: So they have to query the MX records in our domain, and our ADNS is going to force the TCP fallback in those MX resolutions. For the third data set we use the famous RIPE Atlas platform: we let the RIPE Atlas probes send unique queries to our own domains through their resolvers. And for the last data set, we have the ADNS logs from a major CDN.
P
So
so
here
I
I
expect
an
animation
here,
but
sorry,
it's
pdf
version.
So
there's
no
animation,
but
just
to
illustrate
what
is
a
canonical
scenario
in
tcp
fallback.
You
know
tcp
fallback.
So
if
we've
seen
one
udp
query
from
resolver
and
one
tcp
query
and
another
tcp
query
from
this
resolver
for
unique
query,
then
we
can
say
that
this
resolver
is
tcp
fallback
capable-
and
this
is
a
canonical
scenario
of
the
tcp
fallback
and
the
figure
here
shows
a
more
complicated
non-canonical,
tcp
fallback
scenario.
P: You can see that we have two UDP queries, from resolver A and resolver B, and a TCP query from resolver C. In this case we do not have enough information to say that the TCP query from resolver C is a consequence of A's or B's UDP queries; so resolver A and resolver B are indeterminate in terms of their TCP fallback capabilities in this transaction. And in fact the non-canonical scenarios are actually very common in our datasets.
P: Only 46.8 percent of all the resolutions are canonical, and even for the canonical scenarios, 18.9 percent of them have the two queries coming from different IP addresses: the UDP query from one IP address and the TCP query coming from another IP address. For the non-canonical scenarios, it can sometimes be very complicated to match the UDP queries to the TCP queries, so here I've listed two real-world examples.
P: In the first one, you can see that our scanner sent just one single, unique UDP query to the resolver, and at our ADNS it ends up with five UDP queries and four TCP queries coming from six resolvers. In another example, again for one unique UDP query sent by our scanner, you can see around four UDP queries and three TCP queries coming from three resolvers. So you can see that, for the UDP and TCP queries there...
P: ...the fallback relationships are not very obvious in those two examples. So we have developed an algorithm that tries to group the queries into clusters by their potential fallback relationships, and we assume that the maximum gap between a UDP query and its real TCP fallback query is two seconds. So you might be wondering: what is a cluster? Why do we split all those queries into four clusters?
P
A cluster is a group of queries. A cluster ends with a DNS-over-TCP query, and in addition, any two consecutive TCP queries in a cluster have at least one UDP query between them.
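One plausible reading of these two rules, combined with the two-second fallback window from the previous slide, can be sketched as follows; the (timestamp, transport) representation and the exact boundary conditions are my assumptions, not the paper's code:

    # Sketch: group DNS queries into clusters by potential fallback
    # relationship, under my reading of the talk's rules.
    MAX_GAP = 2.0  # assumed max UDP-to-TCP fallback gap, in seconds

    def split_into_clusters(queries):
        clusters, current = [], []
        for ts, proto in sorted(queries):  # (timestamp, "udp" or "tcp")
            if current:
                prev_ts, prev_proto = current[-1]
                # start a new cluster when the gap exceeds the fallback
                # window, or when a second TCP query arrives with no UDP
                # query in between (a cluster ends with a TCP query)
                if ts - prev_ts > MAX_GAP or (proto == "tcp" and prev_proto == "tcp"):
                    clusters.append(current)
                    current = []
            current.append((ts, proto))
        if current:
            clusters.append(current)
        return clusters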
P
So here we have 13 DNS-over-UDP and DNS-over-TCP queries, and we split them into four clusters. In the first cluster, you can see that we have UDP queries number one and three and TCP queries number two and four. TCP query number two is preceded by UDP query number one, and TCP query number four is preceded by UDP query number three; we can also say that TCP query number four is preceded by UDP query number one.
P
But if we assume that TCP query number four is the fallback for UDP query number one, then we would leave TCP query number two unmatched, which is unlikely. So in this case, we just assume that UDP query number one falls back to TCP query number two and UDP query number three falls back to TCP query number four, and so both number one and number three are TCP fallback capable.
P
In cluster number two, this TCP query is not paired with any UDP query, so we just leave it at that. In cluster number three, it's obvious that TCP query number seven is the fallback of UDP query number six, so that's quite an obvious one. And cluster number four is a little bit more complicated, because there are several potential pairs and we cannot find a way to perfectly associate the UDP queries with the TCP queries: there are three UDP queries but only two TCP queries, for example.
P
So in this case, we just say that all those three UDP queries are indeterminate: we just don't know which UDP query is TCP fallback capable and which one is not. And finally there is UDP query number 13.
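The pairing behaviour walked through on this slide can be sketched as a greedy in-order matching; this is my reconstruction from the worked example, not the paper's exact algorithm, and it reuses MAX_GAP from the clustering sketch above:

    # Sketch: within a cluster, pair each TCP query with the earliest
    # still-unmatched UDP query inside the two-second window. If UDP
    # queries are left over alongside successful pairs (as in cluster
    # four), the cluster's UDP queries are all marked indeterminate.
    def pair_cluster(cluster):
        udp = [q for q in cluster if q[1] == "udp"]
        tcp = [q for q in cluster if q[1] == "tcp"]
        pairs, unmatched = [], list(udp)
        for t in tcp:
            candidates = [u for u in unmatched if 0 <= t[0] - u[0] <= MAX_GAP]
            if candidates:
                pairs.append((candidates[0], t))  # earliest unmatched UDP query
                unmatched.remove(candidates[0])
        if pairs and unmatched:
            return [], udp          # ambiguous: all UDP queries indeterminate
        return pairs, unmatched     # fallback pairs, plus unpaired UDP queries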
P
Right, so as we've seen in the previous two slides, some DNS transactions just don't allow unambiguous inference of the TCP fallback capability, like cluster number four. In that case, we produce two estimations: an optimistic estimation and a pessimistic estimation, and the only difference between the two is how we treat the indeterminate queries. In the optimistic estimation, we simply consider the indeterminate ones TCP fallback capable.
P
So you see, in this example, in cluster number four, we assume that UDP queries number eight, nine and eleven are TCP fallback capable, while under the pessimistic estimation we consider them TCP fallback incapable. Across all of our data sets, we have studied around 136,000 resolvers, and around 95 to 97 percent of them are TCP fallback capable.
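The two headline numbers differ only in how the indeterminate resolvers are counted, which is easy to state as arithmetic; the counts below are purely illustrative, not the paper's data:

    # Sketch: optimistic vs. pessimistic share of TCP-fallback-capable
    # resolvers, given counts of capable/incapable/indeterminate ones.
    def fallback_share(capable, incapable, indeterminate):
        total = capable + incapable + indeterminate
        optimistic = (capable + indeterminate) / total  # indeterminate counted capable
        pessimistic = capable / total                   # indeterminate counted incapable
        return optimistic, pessimistic

    # e.g. fallback_share(129_000, 4_000, 3_000) -> (~0.971, ~0.949),
    # i.e. a 95-to-97-percent style range like the one in the talk.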
P
So next, we're going to move on to TCP support by the authoritative DNS servers. In those measurements we have control over the transport, so the general approach is that we try to send TCP queries to the ADNSes serving certain domains from a testing machine on campus, so only one vantage point, and again we have three data sets.
P
The first data set includes the domains from the queries handled by the resolution service operated by the major CDN, and we test all the ADNSes listed for each domain. For the second data set, we use the Majestic top 1,000 domains, which we call the popular websites, and again we test all the ADNSes listed for each domain. And for the last data set, we have a list of CDN-accelerated domains; we just pick one domain per CDN and test all the ADNSes listed for each domain.
P
Here are the results for the ADNS side. First of all, for the domains from the queries handled by the resolution service operated by the major CDN, more than five percent of the domains fail to resolve TCP queries through some of their ADNSes. Even for the popular websites, still a little more than three percent of the domains fail to resolve TCP queries through some ADNSes. And even for the CDNs, 11 out of the 47 CDNs we studied
P
have deployed ADNS servers that do not support DNS over TCP. And for the rest of the ADNSes, the ones that do support DNS over TCP, we're going to move on and look at a race condition related to that DNS-over-TCP support.
P
So first of all, the RFC recommends reusing established TCP connections, and resolvers actually do reuse connections: in another email scan, we successfully induced around 13.5 percent of the resolvers serving the mail servers to reuse TCP connections. So resolvers do reuse connections, and here's the race.
P
The server tries to close the connection after sending a response. But while the FIN or RST segment is still in flight, the client doesn't know that the connection is being closed, and the client tries to reuse this connection for further queries.
P
That query is going to be left unanswered, so that's the race condition. And actually, around 33 percent of the popular websites and four of the CDN providers deploy ADNS servers that close the connection immediately after responding to the first DNS-over-TCP query, so they are vulnerable to this race condition.
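A client-side sketch of the race, under stated assumptions: the server at the placeholder address closes right after its first answer, and the query bytes are placeholders for real wire-format DNS messages:

    # Sketch: send one DNS-over-TCP query, then reuse the connection for
    # a second one. Against a server that closes immediately after the
    # first response, the second query typically meets a RST or an empty
    # read, i.e. it goes unanswered.
    import socket
    import struct

    def dns_over_tcp(sock, query):
        # DNS over TCP prefixes each message with a two-byte length
        sock.sendall(struct.pack("!H", len(query)) + query)
        hdr = sock.recv(2)
        if len(hdr) < 2:
            raise ConnectionError("server closed the connection")
        (length,) = struct.unpack("!H", hdr)
        return sock.recv(length)  # sketch: assumes the reply arrives whole

    sock = socket.create_connection(("203.0.113.10", 53))  # placeholder ADNS
    first = dns_over_tcp(sock, b"...")   # placeholder wire-format query
    try:
        # reuse the connection: this send can cross the server's FIN/RST
        second = dns_over_tcp(sock, b"...")
    except ConnectionError:
        print("second query lost to the close/reuse race")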
P
So first of all, there is already a fantastic EDNS TCP Keepalive extension (RFC 7828), and this extension allows the client and server to dynamically negotiate the timeout of the current TCP connection, so maybe we can build on this EDNS extension. Here are our proposed optional updates. First, a resolver must not reuse a TCP connection unless an explicit EDNS TCP Keepalive negotiation has been completed.
P
Second, maybe we can let the ADNS servers retain TCP connections for another two maximum segment lifetimes beyond the negotiated keepalive duration, so that all the outstanding DNS-over-TCP queries in flight can be correctly acknowledged and processed. And there might be another potential optimization; we haven't studied it yet, so maybe we need more discussion on it.
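The proposed server-side rule amounts to two deadlines per connection, which a small sketch can make concrete; the MSL value is the classic two-minute figure, used here purely for illustration:

    # Sketch: stop accepting reuse when the negotiated keepalive expires,
    # but tear the connection down only two maximum segment lifetimes
    # later, so queries already in flight can still be answered.
    MSL = 120.0  # seconds, a common maximum-segment-lifetime value

    def connection_deadlines(established_at, keepalive_secs):
        reuse_until = established_at + keepalive_secs  # no new queries after this
        close_at = reuse_until + 2 * MSL               # actually close here
        return reuse_until, close_at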
P
So we were also thinking: right now, EDNS TCP Keepalive is not allowed to be negotiated over UDP, but maybe we can let the resolvers and ADNS servers negotiate TCP Keepalive over UDP, so that the resolver and the ADNS know whether the remote endpoint supports TCP connection reuse, and maybe the client can shorten a previously negotiated keepalive duration. And also, because DNS over TLS explicitly borrows the connection management policy from DNS over TCP,
P
we hope that DNS over TLS can also benefit from these updates. Right, so here's the conclusion for today's talk. First of all, on the recursive resolver side, a small but non-negligible number of resolvers do not support TCP fallback, and they are actually very active.
P
So they are at risk of not being able to receive very large DNS messages. Second, on the ADNS side, still a non-negligible number of top websites and CDN providers use authoritative DNS servers that do not support DNS over TCP, and among the ADNS servers that do support DNS over TCP, many of them are vulnerable to the race condition.
B
P
Oh yeah, yeah: we found some bugs in some implementations, and we reached out to them.
B
I
Yeah, hello. The question I have is about the numbers you presented: you said some servers were not reachable. Did that mean that the resolution could still complete without TCP, or was it just one of the servers, or were all of a domain's servers not responding over TCP? Because some servers not responding, from a user perspective, is not...
P
I
From my understanding, these numbers just mean you have X domains, or X servers, that don't respond, but you don't have the number that actually would not resolve. That's a different outcome.
B
H
Hi, yeah, I have a question on the interpretation of large samples of resolvers, or in fact it's two questions. I think your slide said you checked about a hundred thousand or so resolvers, and when we do
H
resolver studies, we always wonder how much that means, because if I find some random IP address that responds to DNS queries, it may be a resolver that is not actually used, or it may be. And yeah, so
H
people sometimes argue that, especially in corporate environments, TCP support may be blocked by middleboxes or whatever, and that that would be a problem for DNSSEC deployments and other things with large payloads in DNS. Maybe I didn't get it in your talk, but if you could point out one or two insights on that aspect, and on whether that is true or maybe not true about TCP support lacking in corporate environments, that would be nice. Thanks, and very nice research in general.
P
Thank you. So first of all, let me try to repeat your question. Your first question is that maybe some of the TCP-fallback-capable resolvers, or some TCP-fallback-incapable resolvers, are not used in real-world applications, so how do we know? Right, so we have the ADNS logs from a major CDN.
P
So from those logs, we know whether a resolver is being used or not, and to what extent it is being used. And for this number here, you can see that we say around 95 to 97 percent of the resolvers are TCP fallback capable, and they contribute around 96 to 99 percent of the CDN traffic in our study. So that's our aggregated result.
P
So basically, we cannot say which individual resolver is being used and which is not, but we can say that overall the TCP-fallback-capable resolvers are used to that extent, and the TCP-fallback-incapable resolvers are still being used, because they still contribute around one percent of the CDN traffic.
B
E
B
An interesting question about the blocking of DNS over TCP, which might need further study, so maybe we can just take it.
B
Thank you, and thanks to everybody else who joined in the middle of the night; this wasn't a convenient time for everybody, but I think we had a great session. I'm so happy that we ended up with a two-hour slot even though we only requested one hour, because we got some nice talks here, and I'm already looking forward to the next session. Dave?