From YouTube: IETF114-IEPG-20220724-1400
Description
IEPG meeting session at IETF114
2022/07/24 1400
https://datatracker.ietf.org/meeting/114/proceedings/
A
Okay, so we have rearranged the agenda very slightly, and we will first be having, what's the actual full name? "Where Did My Packets Go? Measuring the Impact of RPKI ROV", and that's going to be by Koen, who's presenting remotely. Actually, before we do that, we should first do the hello: hello everybody, and welcome to Philadelphia. I'm Warren, this is Chris, and you're in the IEPG meeting, which is an informal gathering of network operators that happens before the IETF actually starts. So technically you don't have to be registered for the meeting, but I'm guessing everybody probably is anyway. So: hello and welcome again, and let's get started. How's this going to work? Would you like us to share the slides, or do you want to ask to share and move them yourself? Whichever you would prefer.
D
Lovely. So yes, I want to talk about: where did my packets go? Basically, we do RPKI ROV, so what impact does it have? First, why do we actually care about RPKI?
D
I would argue that, okay, for some of us it's probably job security, but generally speaking we care not so much about RPKI intrinsically; we care about where our packets actually go, and whether the place they end up is actually the place we intended our packets to end up. RPKI ROV is one of the ways we try to make sure they end up in the intended place. There is a slight issue with that.
D
Oh,
that
is
a
slash
21
that
is
announced
by
as333,
so
to
ripen
to
c,
and
we
have
the
same
prefix
as
a
slash
24
announced
by
as666,
which
I've
called
evocor,
which
I
know
it's
not
the
right
name,
because
but
hey
this
discussion
about
whether
we
used
reserve
prefixes
for
documentation
or
not.
D
That
has
been
going
on
forever
and
I
will
gladly
follow
that
tradition,
so
they
both
announced
that
free
prefix
to
serve,
and
let's,
for
this
example,
say
that
surf
doesn't
do
rov,
so
they
receive
both
them,
except
both
of
them
and
forward
both
of
them
to
the
university
of
twente.
So
that's
one
on
the
right.
D
So
if
the
university
of
tennessee
were
to
do
rpki
rov,
then
it
would
look
at
both
of
those
pgp
announcements
and
then
see:
okay,
hey
that
one
that
goes
from
1103
to
666..
Hey!
That's
that's
invalid!
According
to
this,
the
roast
that
I
have
so
I'm
going
to
drop
that
great
okay.
Now
we
have
the
only
the
other
route
available,
except
that
if
we
send
the
brackets
to
let's
say
193.0.1.1.
D
Then
we
take
look
at
what
the
routes
we
have.
We
only
discarded
the
one
big
that
is
more
specific,
but
we
still
have
they'll
still
have
the
one
that
goes
to
the
ripe
ncc.
We
forward.
We
send
that
packet
to
the
one,
the
next
hop
so
sure
and
surfed
and
says
hey.
I
know
a
more
specific
route
and
I
know
so
I
know
a
more
specific
destination,
let's
just
send
it
to
weaver
anyway.
D
And the inverse: even though the University of Twente would consider that invalid route, the packet would end up at SURF, and SURF says: yeah, no, that's invalid, so I'm going to send it to the RIPE NCC anyway. So then it does end up in the right place, even though they themselves don't do ROV.
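The route-selection behaviour Koen walks through can be sketched in a few lines. This is a minimal illustration, not the measurement code: the exact hijack prefix (193.0.1.0/24) and the reduction of BGP tie-breaking to longest-prefix match are assumptions for the example.

```python
import ipaddress

# The talk's example: AS3333 announces 193.0.0.0/21 (covered by a ROA);
# AS666 ("EvilCorp") announces a more specific /24. The /24 value is an
# assumed placement covering the test address 193.0.1.1.
ROAS = [("193.0.0.0/21", 3333, 21)]          # (prefix, origin AS, max length)
ROUTES = [("193.0.0.0/21", 3333), ("193.0.1.0/24", 666)]

def rov_state(prefix, origin):
    """RFC 6811-style origin validation: valid / invalid / not-found."""
    net = ipaddress.ip_network(prefix)
    state = "not-found"
    for roa_pfx, roa_as, maxlen in ROAS:
        if net.subnet_of(ipaddress.ip_network(roa_pfx)):
            if origin == roa_as and net.prefixlen <= maxlen:
                return "valid"
            state = "invalid"   # covered by a ROA, but origin/length mismatch
    return state

def best_route(dest, do_rov):
    """Pick a route: optionally drop invalids, then longest prefix wins."""
    addr = ipaddress.ip_address(dest)
    candidates = [
        (pfx, origin) for pfx, origin in ROUTES
        if addr in ipaddress.ip_network(pfx)
        and (not do_rov or rov_state(pfx, origin) != "invalid")
    ]
    return max(candidates, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

print(best_route("193.0.1.1", do_rov=False))  # the /24 hijack wins
print(best_route("193.0.1.1", do_rov=True))   # the legitimate /21 wins
```

The point of the example above is exactly what this sketch shows: a non-ROV hop later in the path still holds the more-specific /24 and will happily override the upstream's decision.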
D
So this is what we wanted to look into: does this actually happen in real life, and what is the impact?
D
So, quickly, about the experiment that we set up. We have one AS, AS211321; that's an AS used by NLnet Labs that I was able to borrow. We have two servers: one of them is at Coloclue in Amsterdam. Coloclue is rather well connected; it has a lot of peers and quite a few tier-one upstreams. And one at Vultr in Sydney, which is also reasonably well connected. Not as well connected as Coloclue, but still, it's not invisible to the outside world.
D
We
are
announcing
some
prefixes,
namely
t2a04
b905
42.
We
have
also
arrow
for
that.
That
has
a
max
length
of
also
32
for
this,
as
so
that
one
is
valid.
We
announced
that
one
from
vulture,
that's
what
the
blue
triangle
means.
We
also
announced
two
a04
p905
as
a
slash
43,
so
that
one
is
invalid
according
to
the
rows
that
we
have,
but
it
is
more
specific.
So
if
you
look
at
just
bgp,
then
it
would
be
more
specific.
So
we
choose
that.
D
But
if
you
do
rpki
rov,
then
you
consider
it
invalid.
So
you
choose
this.
Slash
ready,
too
and
then
to
check
whether
you
do
rpi
rov.
We
have
a
slash
48,
that
is,
has
an
as0
row
associated
to
with
it.
So
if
you
don't
do
rpg
rov,
then
you
wouldn't
even
send
your
traffic
to
your
upstream.
Unless
you
use
something
like
default
route,
you
basically,
you
would
just
discard
the
packet,
because
you
have
no
routes
available
for
that
prefix
and
if
you
don't
do
rpi
or
v,
you
send
it
up
to
your
upstream.
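The three announcements and their ROAs can be sketched with the same kind of validator. The valid/invalid/AS0 roles follow the talk; the exact IPv6 values, and the placement of the /48 outside the /42 (so that dropping it leaves no covering route), are assumptions for illustration.

```python
import ipaddress

# ROAs as described in the talk: the /42 is authorised for the experiment AS,
# and a /48 carries an AS0 ROA (AS0 is reserved, so nothing can match it).
ROAS = [
    ("2a04:b905::/42", 211321, 42),
    ("2a04:b905:4000::/48", 0, 48),   # AS0: every announcement is invalid
]

# What the experiment announces in BGP.
ANNOUNCED = [
    ("2a04:b905::/42", 211321),       # valid
    ("2a04:b905::/43", 211321),       # invalid: longer than the ROA maxlength
    ("2a04:b905:4000::/48", 211321),  # invalid: covered only by the AS0 ROA
]

def rov_state(prefix, origin):
    net = ipaddress.ip_network(prefix)
    state = "not-found"
    for roa_pfx, roa_as, maxlen in ROAS:
        if net.subnet_of(ipaddress.ip_network(roa_pfx)):
            if origin == roa_as and net.prefixlen <= maxlen:
                return "valid"
            state = "invalid"
    return state

def routes_seen(do_rov):
    """Routes kept by a network that does (or doesn't) drop invalids."""
    return [r for r in ANNOUNCED if not do_rov or rov_state(*r) != "invalid"]

print(routes_seen(True))   # ROV network: only the /42 survives
print(routes_seen(False))  # non-ROV network: all three routes
```

So a ROV network keeps only the /42 (and has no route at all toward the AS0-covered /48), while a non-ROV network follows the more-specific /43 and forwards /48 traffic upstream.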
D
All these prefixes also have their IPv4 equivalents, but for convenience's sake I will just keep talking about the IPv6 ones, because otherwise this becomes a very convoluted talk.
D
So what did we do? We set up three RPKI publication points. We have parent.rov.nu, which is hosted on 2a04:b905::8001.
D
So
this
is
inside
the
slash
42
announcement,
but
outside
the
slash
43
announcement.
So
this
always
ends
up
at
vulture
in
australia.
It
because
that's
the
only
route
you
could
possibly
take,
because
that's
the
only
place
it's
enhanced,
then
we
have
child.rov.com.l.
D
So,
depending
on
whether
you
do
rpki
rv
and
your
upstream,
where
your
upstream
send
your
packet
sku,
this
ends
up
at
either
goloklu
in
amsterdam
or
in
filter
in
australia,
and
then
we
have
invalid.rov.gov
now,
which
is
at
the
invalid
prefix.
So
if
you
hit
this,
then
you
are
likely
not
go
if
you're
likely
not
doing
rpg
rv.
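The two observations for each source (did it drop the invalid announcement, and did its packets arrive where the originator intended?) give the four outcome categories that the results table later breaks down. A small sketch of that classification, with wording that paraphrases the talk:

```python
# Hypothetical classifier for one measured source IP. "drops_invalid" means
# it never reached the service on the invalid prefix; "reaches_intended"
# means its packets for the contested space arrived at the originator's
# intended location (Vultr in Sydney, in this experiment).
def classify(drops_invalid: bool, reaches_intended: bool) -> str:
    if drops_invalid and reaches_intended:
        return "does ROV, packets end up as intended"
    if drops_invalid and not reaches_intended:
        return "does ROV, but an upstream follows the more specific anyway"
    if not drops_invalid and reaches_intended:
        return "no ROV, but rescued by an upstream that filters invalids"
    return "no ROV, packets follow the invalid more specific"

print(classify(True, False))
```

The second and third categories are the interesting off-diagonal cases the talk focuses on next.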
D
So yes, that's what I said. A quick intermezzo about why RPKI publication points: well, every measurement that you do on the internet has a bias.
D
We wanted to look at networks that were more likely to do ROV than the average, hence the RPKI publication points. And also, because validators load the data at least once an hour, they generate a nice, steady stream of traffic. So we thought: hey, this might be a good way to actually measure this, or at least get a feeling for it. But mind you, this doesn't say anything about the wider internet, of course, because most people, or most organisations, are still not doing RPKI ROV, and most that don't do ROV also don't run a validator, because you don't run a validator for fun. Or at least, not most people I know. So, this is actually the slide that most people are waiting for: the results. This is a small table.
D
It shows the number of unique IP addresses that drop invalids, so they didn't manage to reach invalid.rov.nu, and ended up at Vultr in Sydney, so at the intended location. However, as you can also see in the top left, there are quite a few that, even though they did drop invalids, still ended up at...
D
What
would
I
would
say,
is
the
wrong
location
or
the
unintended
location
if
you
look
at
from
the
rpgi
perspective,
so
that's
the
category
that
well,
I
mean
didn't,
even
though
they
did
rpki
rv
their
upstream,
send
their
packets
to
the
to
a
location
that
they
thought
hey.
We
know
something
more
specific
and
then.
D
...what I would say is the wrong location, or the unintended location if you look at it from the RPKI perspective. That's the category where, even though they did RPKI ROV, their upstream sent their packets to a location where it thought, hey, we know something more specific, and then the packets ended up not at the place where the originator expected them to go. However, we also see the inverse: of all those that don't drop invalids, quite a large portion does actually go to the intended location according to the RPKI, merely because their upstreams do. This is quite interesting.
D
It was quite interesting to see this at first, especially this bottom one, because I honestly didn't expect this. I didn't expect that this would have such an impact, but luckily it does. I'm not going to say whether this is a positive message or a negative message; that's something you may decide, but it's quite interesting to see.
D
So,
during
this
measurements
we
we
ran
into
some
challenges.
The
first
challenge
is
actually
quite
simple
at
first
99
of
traffic
that
went
to
kodaklu,
and
now
that
was
a
bit
that
seemed
a
bit
strange,
because
why
would
all
traffic
go
to
coda
glue?
I
mean
some
at
least
some
people
will
do
rpki
or
v
and
dropping
values
right.
Well,
I
mean
vulture
didn't
do
rov.
D
We
knew
that,
but
they
would
apparently,
if
their
traffic
reached
their
edge,
they
would
say
think
hey,
I
know
a
more
specific
route,
so
they
would
redirect
it
to
another
tier
one,
and
then
it
would
make
a
loop
around
the
world
and
end
up
in
amsterdam.
I've
seen
trace
routes
where
the
packet
starts
in
amsterdam
goes
around
the
world,
reaches
australia
and
then
goes
around
the
other
side
of
the
world
again.
D
So,
if
you
need
the
b2b
scenic
routing,
then
this
is
a
way
to
do
it
not
really
efficient
and
also
well
problematics,
for
what
we
wanted
to
test.
We
managed
to
solve
this
by
announcing
a
more
the
more
specific
filter
as
well
and
then
adding
a
bgp
community,
so
it
doesn't
export
outside
the
filter,
which
is
a
bit
of
an
ugly
hack,
but
it
works
and
was
for
our
measurements
good
enough.
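The workaround relies on standard community-based export filtering. A sketch of the idea, using the well-known NO_EXPORT community as the illustration; the community values Vultr actually honours are not stated in the talk.

```python
# The more specific is announced at Vultr too, but tagged so that Vultr's
# routers keep it to themselves instead of propagating it to peers.
NO_EXPORT = (65535, 65281)  # well-known NO_EXPORT community (RFC 1997)

def exported_routes(routes, to_external_peer=True):
    """Routes a router would pass on to a neighbour, honouring NO_EXPORT."""
    return [
        r for r in routes
        if not (to_external_peer and NO_EXPORT in r["communities"])
    ]

routes = [
    {"prefix": "2a04:b905::/42", "communities": []},
    {"prefix": "2a04:b905::/43", "communities": [NO_EXPORT]},  # stays local
]
print([r["prefix"] for r in exported_routes(routes)])  # only the /42 leaves
```

With the tagged more specific present locally, Vultr's own routers no longer hand the packet to another tier one, which removes the round-the-world loop.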
D
Another thing: because IPv4 is difficult to get nowadays, since it has been run out for the last five years or so, we initially ran everything on IPv6, and IPv6 only. We thought: okay, networks that do IPv6 will run their validator on a machine that also has IPv6, right? Well, the answer to that is sometimes yes, but often also no. So we would see: hey, why don't you understand what our route is? Why don't you...
D
Oh
wait?
We
you
don't
do
ipv6
on
your
validator,
even
though
your
network
support
ip6
and
interestingly
enough,
sometimes
networks
had
multiple
validators
running
and
some
of
them
on
machines,
some
where
some
machines
supports
ipv4
only
and
some
also
support
ipv6,
so
that
that
triggered
internal
alerts.
A
little
awful
lot
of
fun,
basically,
message
being
that
either
run.
Make
sure
that
your
publication
point
is
a
is
available
on
both
ip4
and
ip6
are
just
basically
many.
D
So
yeah,
that's
it
basically
for
me,
we
we've
seen
that
merely
doing
rov
and
dropping
effects
does
not
necessarily
mean
that
your
traffic
goes
to
the
intended
location.
We
knew
that
from
a
theoretical
point
of
view,
but
we
now
also
just
did
it
in
the
wild
and
we've
seen
the
same
behavior
and
also
the
more
varied
your
upstreams
are.
The
more
important
doing.
D
If you are more interested in reading an article about this than hearing me talk about it, it's available on RIPE Labs at that URL, or just google it. Lastly, I want to thank the people at NLnet Labs and the RIPE NCC for letting me do BGP things with their resources; luckily nothing went wrong, otherwise they would have taken the blame.
D
So
that's
it
for
me,
I'd
like
to
open
the
floor
for
questions
or
comments.
If
there's
time
for
this.
G
All right, hello everyone, and thanks for having me. I'm Nina, and we've got some of us here: Pavan, we have some of you over there, I'm sure you know. We're graduate students at George Mason University, working with the modular security lab, and really we want to show off some of the cool tools we've been working on around the DANE protocols, really bringing them to life, to power S/MIME specifically. But first, let's jump into a little bit of setup.
G
Good note. All right, let me try to be a little more succinct here. DANE is a powerful protocol suite: it really makes doing security and privacy easier. But what can we do to make DANE easier? That's the question that inspires us. For the everyday person: why can't we simply turn on secure messaging on the internet? Now, I know what you're thinking: hey, we can do messaging on certain platforms and apps, right? We've got WhatsApp, Signal; we've got our organisational PKI.
G
We
can,
you
know,
do
email
really
easily,
but
should
we
really
be
limited
on
the
internet,
for
you
know
proprietary
platforms
and
that
kind
of
pki
boundaries
right
and
what
about
usability
to
get
to
that
next
stage?
We
kind
of
want
to
be
sure
that
everyday
users
and
operators
that
make
the
gear
spin
are
not
burdened
by
the
overhead
of
having
to
manage
doing.
You
know
secure
messaging,
secure,
object,
sharing
and
whatnot,
so
the
idea
is
like
what
https
did
for
transfer
security.
G
We
want
any
entity
to
be
able
to
transact
end
to
end
secured
with
secured
objects
with
any
others
over
the
internet
right
on
like
a
wide
scale,
and
this
is
for
all
sorts
of
use
cases.
So
that's
why
we're
launching
this
basic
research
into
how
dane
can
unlock
these
long
needed
protections
for
those
uses?
G
So let's start by securing one of those basic protocols. If we can work with the Model T, we can work with the Tesla. What's something that everybody uses? Email. And what is this going to allow us to do? It's going to let us find out exactly what people need to make end-to-end internet security seamless and turned on everywhere, and the catchphrase here is to make it invisible.
G
So
for
that
we
need
to
make
usable
tools
right,
making
it
easy,
well
securing
email
with
dain.
If
that's
our
use
case,
we
need
the
tools.
We
need
kind
of
two
sides
of
the
equation
here.
We
need
to
be
able
to
set
up
dane
right.
It
requires
some
level
of
work
from
domain
holders
on
dns,
and
you
know
we
want
to
make
that
easy
for
them,
and
we
also
want
to
use
the
search
from
dane
on
the
user
side
on
the
clients
and
the
muas
that
the
users
will
use.
G
So
that's
a
lot
of
uses
they'll
be
able
to
do
email
easily
with
the
tools
that
we
made
to
really
show
that
off.
So
for
the
first
one
we
made
the
certain
management
portal
dameportal.net
and
then
we
made
the
mua
add-on
called
courier.
So
some
fancy
names
there
we'll
show
them
off.
Don't
worry.
G
And
really
the
goal
here,
a
little
too
hard.
The
goal
here
is
to
find
out
what
people
need
to
make
into
a
security
default.
As
I
said,
we
do
know
one
thing
for
sure.
One
thing
that
people
definitely
need
is
key
management
right,
insert
discovery,
and
we
got
a
lot
of
solutions
for
those.
But
you
know,
as
I
kind
of
implied
dane,
is
kind
of
an
excellent
answer
right
and
we
just
need
to
make
use
the
tools
to
make
it
easy.
G
Gameportal.Net.
Okay,
I'm
accidentally
clicking
on
the
thing
here.
Sorry,
it's
an
open
source,
federated
cert
management
system
and
I'll
show
you
what
that
means,
and
a
dedicated
dns
infrastructure
as
well
to
make
dain
easy
right
and
literally
the
way
it
works
is
domain
holders
will
enable
dane
for
their
dns
section
zone
and
the
email
users
will
simply
manage
their
certs
in
like
a
delegated
manner.
For
specifically
the
emails
they
are
given
control
over
to
the
degree
that
makes
sense
for
the
organization.
G
So
let's
go
ahead
and
see
these
are
screenshots,
but
we
can
get
the
idea
just
hop
on
gameportal.net
and
you
can
start
to
enable
dain
create
a
new
user
like
you
would
normally
imagine
these
online
portals
to
use
and
log
in
and
suddenly
on.
The
first
page,
we'll
see
this
dashboard
page
where
users
can
add
their
see
their
email
addresses
and
zones.
So
this
is
just
kind
of
like
a
subset
of
the
page
here,
but
basically
down
over
here.
G
If
a
new
admin,
somebody
that
owns
a
dome,
the
domain
holder
uri,
wants
to
enable
dame
for
their
zone
they'll
go
ahead
and
try
to
claim
it
now.
Here
I
can
try
to
claim
example.com,
even
though
I
don't
own
it,
anybody
can
claim
any
zone
and
they
will
need
to
actually
verify
it.
Obviously,
and
the
flow
here
is
pretty
straightforward:
we
just
use
the
acme
protocol
every
to
prove
ownership.
G
They
just
need
to
add
that
txt
record
in
their
zone
and
have
dnsec
enabled
and
we'll
verify
that
straight
through
and
it'll
we'll
go
ahead
and
try
to
hook
it
up.
Basically,
to
finish
the
delegation,
just
like
any
normal
dns
section
or
we
had
the
ns
record,
yes
record,
blah
blah
blah,
you
know
the
stuff,
so
this
is
pretty
straightforward
for
kind
of
delegations.
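The claim flow just described (portal issues a token, admin publishes it as a TXT record, portal checks it) can be sketched like this. The record name `_daneportal-challenge.<zone>` is a hypothetical placeholder; the actual name DanePortal uses is not given in the talk, and the DNS lookup is injected as a function so the sketch stays self-contained.

```python
import secrets

def issue_challenge() -> str:
    """Random, unguessable token handed to the claiming admin."""
    return secrets.token_urlsafe(32)

def verify_claim(zone: str, token: str, lookup_txt) -> bool:
    """Check the token appears in the zone's challenge TXT record.

    lookup_txt(name) -> list[str] stands in for a real DNS query, which in
    the portal's case must also go through a DNSSEC-validating resolver.
    """
    records = lookup_txt(f"_daneportal-challenge.{zone}")
    return token in records

token = issue_challenge()
fake_dns = {"_daneportal-challenge.example.com": [token]}
print(verify_claim("example.com", token, lambda n: fake_dns.get(n, [])))
```

This mirrors the ACME dns-01 pattern: possession of the zone is proven by the ability to publish the token under it.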
G
Now
this
is
the
interesting
part.
So,
with
dane,
we
can
do
kind
of
the
zone
game
zone
management.
Our
portal
will
actually
create
the
dame
zone,
for
you
enable
dane,
whereas
the
admins,
the
actual
domain
holders
have
full
control,
they
retain
control
over
the
keys
here.
They
can
turn
it
on
and
turn
it
off.
Have
it
accessible.
G
And
you
know:
how
do
you
check
that?
It's
actually
I'm
doing
the
same
thing?
Sorry,
how
do
you
check
that
as
active
is
well,
you
use
some
kind
of
tool
like
sex
spider
that'll
go
through
and
make
sure
that
the
delegation
works
because
dns
sec
right,
the
sec
part
is
to
it's
to
you
know,
make
sure
dane
requires
that.
Basically,
now
we
can
go
ahead
and
add
the
domain.
This
is
the
part
where
email
users
can
be
able
to
manage
their
own
search.
Now.
G
This
is
the
part
where
we
go
ahead
and
let
another
user
on
dane
portal
manage
their
own
search
on
their
email
address.
So
here
we
got
john
doe
on
the
picture
and
well
we
added
him,
and
you
can
see
that
when
you
go
back
to
the
dashboard,
you
can
go
ahead
and
click
through
and
manage
his
own
data
on
dane
portal.
So
this
is
another
screenshot
showing
the
page
where
john
joe
will
be
able
to
set
a
cert.
G
So
we
can
just
add
that
add
the
cert
on
this
page
and
over
here
is
pretty
interesting
to
note
the
dane
specific
protocol
usage
selector
matching
if
anybody's
familiar
with
those.
That's
all
those
options
are
there.
The
defaults
are
given
to
be
the
most
permissive
and
the
most
complete.
So
those
defaults
just
allow
you
to
start
doing
secure
email
and
once
again
we
give
the
ability
just
to
do
a
quick
self
sign
start
as
well.
But
you
know
that's
just
a
standard
open
ssl,
not
too
much
fanciness
there.
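Where does such a cert actually land in DNS? In an SMIMEA record (RFC 8162), whose owner name hashes the local-part of the email address and whose data carries exactly the usage/selector/matching-type knobs just mentioned. A sketch; note the talk does not say which defaults the portal picks, so the `3 0 1` values below (DANE-EE, full certificate, SHA2-256) are only one plausible choice.

```python
import hashlib

def smimea_owner(email: str) -> str:
    """RFC 8162 owner name: SHA2-256 of the local-part, truncated to 28
    octets, hex-encoded, under _smimecert.<domain>."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return f"{digest.hex()}._smimecert.{domain}"

def smimea_rdata(cert_der: bytes, usage=3, selector=0, matching=1) -> str:
    """Record data: usage, selector, matching type, association data.
    selector 0 = full certificate; matching 1 = SHA2-256 of that input."""
    assoc = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching} {assoc}"

print(smimea_owner("john.doe@example.com"))  # 56 hex chars, then the suffix
```

A sending client that knows only the address can recompute this owner name and fetch the cert, which is what makes the "stranger to stranger" demo later possible.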
G
We
can
download
it
and
use
that
sir.
Now
john
doe
go
ahead.
He
went
ahead
and
added
his
sir
here
and
his
well.
He
could
just
toggle
it
active
if
he
wants
and
he's
good
to
go
right.
He
can
start
doing
secure
email
just
as
easy
as
that
and
just
another
thing
to
note
there,
where
you
know
you
can
just
toggle
it
on
and
off.
That's
not
really
that
easy
in
classic
pki.
Is
it
cool,
so
we
saw
how
that
was
making
dane
pretty
easy.
This
is
a
straightforward
flow.
G
The
admin
added
that
zone
and
then
the
email
users
can
in
a
federated
manner
manage
their
own
search
on
that
on
the
dean
zone,
so
yeah
feel
free
to
check
it
out.
We
got
more
links
over
here.
Now
we
saw
that
first
half,
so
we
jump
ahead
and
so
how
users
can
actually
have
it
auto-resolved
inserts
on
career
in
their
muas
to
find
out
what
users
need
to
make
end-to-end
security
a
default
and
really
the
motivation
for
this
is
here.
G
We
don't
really
have
wide
scale,
ed
security,
but
by
observing
our
tools
and
action
we
can
find
out
what
makes
sense
if
we
are
to
make
ewe
default.
So
to
that
end
we
kind
of
instrumented
our
next
tool
as
a
live
experiment.
Where
really
any
one
of
us,
you
guys
can
help
us
get
some
real
numbers
on
the
human
puzzle,
piece
in
security
automation,
so
just
to
really
quickly
jump
in
we'll
show
how
easy
it
is
with
career
right.
A
It's 10:30; you've still got 20 minutes.
G
Alrighty, in that case I'll go ahead and show this live. Well, first, when you're trying to hook it up: we added a cert on DanePortal, so that means your cert is available via DANE; you just need the private key installed in your MUA. Now, all MUAs and OSes have their own keychains and all kinds of crazy stuff. Doing that with Courier is pretty straightforward: you just go into the settings, hook it up like a normal file, choose your key file, and you're good to go.
G
Now
you
can
jump
into
a
secure
email
conversation
with
a
stranger,
so
I'm
actually
show
this
off.
Why
not
so
in
thunderbird?
This
already
has
career
installed.
I
could
you
know,
do
something
fun
just
go
ahead
and
write
an
email
to
pavan
here
and
then
the
email
could
be.
You
know
anything
our
top
secret
communications
that
needs
encryption
and
all
that
so.
G
And you saw what I did there. Real quick, on this page you saw a little hint of it: it went ahead and encrypted the email, did something fancy with an attachment, and then just sent it natively. And on the other end...
G
Well,
whenever
it
comes
in
there,
you
go
so
this
top
secret
email.
It
comes
in
as
like
a
with
this
tag.
That's
basically
said
that
it's
encrypted
that
allows
the
outlook
side,
for
example,
to
go
ahead
and
read
this
and
standard
s
mime,
but
it's
just
doing
the
processing
for
us
and
it'll
tell
us
that
it
was
encrypted
and
that
it
was
signed.
So
both
this
is
possible
because
both
me
and
pavon
have
our
search
on
dane.
G
Despite
us,
not
sharing
any
keys
or
having
it
installed
on
our
own
operating
systems
or
clients
dane
allows
us
to
do
this
very
seamlessly
right
now.
I
can
seamlessly
reply
as
well,
and
you
know
I
could
say
something
like
this.
Thanks
and
pavan
could
have
that,
and
you
can
see
on
this
view
like
we're
conveniently
given
the
decrypted
email,
just
as
a
reference
and
to
be
able
to
send
and
for
specifically
on
the
outlook
side,
all
you
gotta
do
is
to
toggle
on
these.
G
And
then
you
can
toggle
on
signing
as
well
and
you're
good
to
go
it'll,
send
it
signed
and
encrypted
in
the
conversation
view.
So
you
see
another
reply
to
this
once
again
encrypted.
Now,
let's
see
how
it
looks
on
my
end,
whenever
it
comes
back
so
here
it
goes,
stop
secret.
Now,
on
the
thunderbird
side,
we'll
just
go
ahead
and
decrypt
and
you
can
actually
see
the
conversation
very
seamless,
verified
decrypted
gives
us
all
that
info,
so
real
easy
to
use
for
any
everyday
users.
G
So, just a quick rundown of the details. Secure messages are sent as standard PKCS#7 S/MIME objects, and, to avoid stepping on the standard flow of email, they are sent as attachments, in order to retain the cryptographically secured object from end to end. The cert resolution is handled natively, silently, and directly: we're not using any kind of listener or server or anything like that, as some implementations do; rather, this is done completely natively within the add-on environment.
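The "secured object as attachment" idea can be sketched with the standard library's email module. This is one plausible MIME shape, not necessarily the exact layout Courier produces, and the byte string stands in for a real PKCS#7/CMS blob.

```python
from email.message import EmailMessage

def wrap_smime_object(sender: str, recipient: str,
                      p7_blob: bytes) -> EmailMessage:
    """Carry an opaque PKCS#7 object as an ordinary attachment, so mail
    servers in the path see a plain message and never touch the payload."""
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Secure message"
    msg.set_content("This message carries an end-to-end encrypted attachment.")
    msg.add_attachment(
        p7_blob,
        maintype="application", subtype="pkcs7-mime",
        filename="smime.p7m",
    )
    return msg

msg = wrap_smime_object("nina@example.com", "pavan@example.com", b"\x30\x82")
print(msg["To"], msg.get_content_type())
```

On the receiving side, the add-on spots the application/pkcs7-mime part, resolves the sender's cert via DANE, and decrypts/verifies, which is the tag-and-process behaviour shown in the demo.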
G
So
these
are
standard
outlook
and
thunderbird
add-on
with
no
extra
softwares
that
are
up
and
running,
but
really
these
installation
flows.
Honestly,
you
could,
if
you're
curious,
you
can
check
out
that
email.
It's
got
more
of
the
info
on
the
installation
and
whatnot,
but
what
I
wanted
to
highlight
was
these
aren't
just
convenient
tools.
They
are
convenient
sure
I
hope
you
agree,
but
this
is
a
vital
part
of
our
research
to
discover
what
people
need
and
expect
to
make
ev
security
a
default
at
scale.
G
So
what
do
people
need?
Well,
we've
created
these
options
page
right
for
the
interest
of
time.
I
didn't
run
through
them
on
the
actual
demo,
but
I
hope
you
can
appreciate
that.
Well,
maybe
we
have
to
do
some
extra
clicking
there,
especially
maybe
on
the
outlook
side.
We
have
to
go
ahead
and
manually
toggle
on
for
each
email.
Maybe
we
don't
want
to
do
that.
G
Maybe
we
want
to
have
it
automated
so
that
every
email,
you're
gonna,
sign
it
by
default
right
and
every
email
you
might
try
to
encrypt
it
if
they
can
find
a
cert
on
dame.
You
know,
that's
the
kind
of
seamless
thing
we
can
go
for,
maybe
that's
what
users
want
just
configure
it
to
their
needs
right,
but
it's
not
just
what
users
want
it's.
What
we
can
find
out
about
what
users
want
so
when
to
be
silent
versus
explicit,
for
example,
with
private
keys.
G
You
know
you
can
think
of
a
use
case
where
a
you
can
have
a
separate
keys
for
signing
and
encrypting,
or
you
could
just
have
a
single
key
for
both
right.
There's,
there's
different
people
that
need
different
things,
but
it's
really
being
able
to
understand
this.
You
know
actually
seeing
this
in
action
and
seeing
what
the
numbers
will
help
us
know
that
how
this
correlates,
with
the
use,
the
use
case
of
those
individuals
and
the
other
settings
that
they've
used
what
they
expect
effectively,
and
this
is
where
our
study
comes
in.
G
Just the anonymous, general data will be shared as statistics to our telemetry server, along with the DoH server that we're using for the DNS resolution, but obviously only if it's public; only the public information. Users can optionally answer some basic demographic questions to zone in on that info as well, to support the statistics we're getting, and it does not invade your privacy in any way. The telemetry is shared only at specific times, when you change your settings, and it never tracks anything about your emails.
G
Only
noting
the
set
default
conflicts
and
naturally
you
have
the
right
to
be
forgiven.
You
can
toggle
off
at
any
time
and
it's
completely
anonymous
and
you
can
request
all
your
data
to
be
removed
if
you
ever
did
toggle
it
on,
and
we
can't
do
it
alone
right.
We're
trying
to
find
the
results
that
let
us
automate
and
enable
security
on
the
internet,
and
for
that
we
greatly
appreciate
participation,
try
out
our
tools
and
go
ahead
and
toggle
that
on.
G
If you help us find what users need, by trying the tools out and joining our study, then we can further our research on the new uses for DANE that I mentioned: cyber threat intelligence, mHealth, smart cities and IoT devices, to find what users actually need. And that's where I'll leave you. Thank you very much. Any questions or comments? I'll be out here as well to chat with anyone.
H
Can you differentiate here between what you mean by security and privacy? There are a lot of equivalencies that the IETF specifically talks about for security, where I think many of us know that HTML email is very good at violating your individual privacy, and encapsulating that transport inside of TLS doesn't really actually make that communication any more private than it was previously.
H
Because
the
assertion
here
is
that
this
will
make
my
communications
more
secure.
But
if
I'm
still
rendering
the
same
html,
my
privacy
is
still
likely
to
be
very
violated
by
the
people
who
want
to
engage
in
those
act.
Those
types
of
activities
and
the
community's
done
a
very
poor
job
of
differentiating
between
those
two
and
I'd
like
to
hear
kind
of
how
you
think
this
enhances
the
two
different
parts.
H
Because
for
this
it
seems
like
an
awful
lot
of
work
for
an
organization
to
go
through
to
get
a
secure
transport
for
the
messaging
back
and
forth,
which
is
already
afforded
by,
for
example,
enabling
tls.
So.
G
That's
a
perfect
question
because
that's
exactly
part
of
the
point
with
trying
to
look
at
bain,
it's
not
just
about
the
transport.
You
know
we
have
https
for
tls
transfer
security
data
is
looking
at
end-to-end
security,
so
the
objects
themselves
are
secure
at
the
ends
of
them.
If
I'm
sorry,
if
I
misunderstood
your
question,
the.
G
Extensions
to
that
protocol
could
definitely
do
some
work
in
that
regards
to
making
sure
email
has
objects
that
are
like
the
contents
of
email,
the
html
emails
and
whatnot
are
more
conscious
of
our
privacy
right
specifically,
but
looking
at
exactly
s
mime,
as
the
protocol
exists,
today
is
kind
of
the
objective
of
the
tools
that
we
had
so
implementing
dane
specifically,
but
that
is
an
excellent
point
and
you
know
worth
looking
into
in
the
future.
I
Yes,
hi,
okay,
whatever
I'll
just
bend.
Yes,
I
have
a
question
about
dane
porter
works
behind
the
scenes.
So
when,
when
you
kind
of
claim
a
domain,
then
the
domain
owner
somewhere
else
the
token
in
their
dns,
okay,
that's
fine,
and
then
once
that's
done
when
I
create
users
in
the
dan
portal
and
do
stuff
with
the
keys.
How
does
that
actually
relate
to
that
domain?
And
why
would
a
certain
let's
say:
email
client,
for
example,
contact
the
dain
portal
server
about
this?
I
Is
it
actually
like
that
or
is
some
information
put
into
the
customers
domain
dns
zone?
I
don't
know
that.
I
didn't
understand
that
at
all
and
regarding
coor
the
plugin
seems
to
be
relying
on
dane
porter
specifically.
So
what
if
I
do
my
own
deployment
of
that?
Can
I
make
it
to
use
my
database
and
where
is
it?
Is
it
a
dain
portal
or
is
it
in
the
customer's
dns
domain?
So.
G
This
is
a
perfect
point
as
well
there's
something,
unfortunately,
I
was
had
to
rush
through
here,
but
dane
portal
actually
just
implements
the
standard
dane
protocol,
which
is
just
part
of
dns.
You
know
dns
secured
by
dns
sec.
These
are
just
standard
dns
domains
that
are
zones
that
are
beneath,
like
our
normal
structures,
that
zone
holders
have
just
with
special
protocols,
domain
names
right
and
there's
a
lot
of
details
hidden
behind
that.
But
really
this
is
the
public
dns
right.
G
Obviously,
you
can
have
private
dns
implementations
too,
that
do
gain
within
your
own
organization,
and
that
is
totally
possible
and
your
own
male
clients
that
can
resolve
that.
Dns
can
indeed
do
that
desert
resolution
of
gain
through
that
private
infrastructure,
but
that's
kind
of
more
like
the
classical
pki,
but
with
the
when
you're
connected
to
the
public
dns,
you
have
this
secure
route
like
you're,
doing
dns
tech
you're,
starting
at
the
root
zone.
You're
coming
down,
it's
resolvable,
no
matter
whether
anybody
that's
hooked
up
just
like.
G
That is done through a manual delegation. I think I might have rushed through those slides a little bit, but for example, DanePortal will actually be serving the specific sub-zone for DANE, so it will have this known name server serving it. That's something it does currently, and we're looking into having more general name servers as well, but really the delegation is just a standard delegation.
G
Just
use
the
ns
record
and
you
put
it
in
your
parent
zone
and
it
is
delegated
to
the
game
portal
right
and
the
reason
it
does.
That
is
because,
underneath
that
zone,
it
can
add
the
records
and
change
the
records
as
needed
to
suit
the
user's
perspective
of
the
each
email
user
being
able
to
access
the
domain
and
add
their
own
certs,
because
dane
has
their
own
records
and
protocol
for
it.
I
think
there
was
a
question
on
the
career
side
as
well,
but
I
think
I
forgot
that
I'm
sorry.
G
That's
that
was
the
question.
That's
that's
a
perfect
follow-up
because
no
it
doesn't
career
is
resolving
through
the
public
dns,
so
it's
using
a
dos
server
and
starting
from
the
root
and
just
looking
for
the
email
address
with
the
proper
following
the
dane
protocol.
So.
C
There
was
a
lot
of
talk
around
day
in
things,
and
I
I
saw
how
hard
he
worked
on
this
first
hand,
and
I
really
want
to
thank
you
for
doing
this.
That's
great,
I
hope
everybody
you
know
tries
it
out.
K
[Question off-mic.]
G
So, which certificates are you talking about? DNSSEC?
G
Sure
for
the
s
mime,
the
users
will
have
them
so
in
that
we
briefly
showed
the
tool
that
allows
users
to
create
a
certificate
or
key
pair.
That
is
there
for
convenience.
That
is
using
just
a
it's
a
thin
layer
on
openssl,
so
using
openssl
you
can
generate
a
key
pair.
You've
got
a
certain
and
a
key
and
dateportal
promises
that
it's
not
going
to
save
any
of
that,
and
it
doesn't
it's
open
source
people
can
verify,
but
basically
you're
just
prompted
to
download
those,
and
once
you
download
them,
it
forgets
about
it.
G
The
actual
key
answer
for
convenience:
the
user
is
given
the
option
to
let
the
server
generate
it,
meaning
run
open
ssl
on
the
server
right
now.
Obviously
we
do
prefer
if
the
users
come
in
with
a
proper
cacer
key
combo,
we
don't
ever
see
the
key.
Definitely
we
do
not
want
users
to
rely
on
that
for
their
end-all
be-all,
but
it
is
a
matter
of
convenience
to
let
people
quickly
jump
start
into
doing
secure,
email
right.
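The "thin layer on OpenSSL" mentioned above amounts to a one-shot self-signed cert generation, which a user can equally run locally so the key never leaves their machine. A sketch of one common invocation; the exact flags DanePortal uses are not stated in the talk, so this is an assumed, illustrative command.

```python
# Build the argument list for a single self-signed S/MIME cert + key.
# Running it yourself (instead of on the portal's server) keeps the private
# key local. Requires OpenSSL 1.1.1+ for the -addext flag.
def selfsigned_cmd(email: str, days: int = 365):
    return [
        "openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
        "-keyout", "smime.key", "-out", "smime.crt",
        "-days", str(days),
        "-subj", f"/CN={email}",
        "-addext", f"subjectAltName=email:{email}",
    ]

print(" ".join(selfsigned_cmd("john.doe@example.com")))
```

The resulting smime.crt is what gets uploaded to the portal; smime.key goes only into the MUA, as in the Courier setup shown earlier.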
G
In that case, perhaps don't use the tool, and use OpenSSL yourselves. And that's actually a very good point: we can point people at a more complete discussion of how to make their own certs using OpenSSL on their own operating systems as well. That's a good point and a matter of improvement for the site.
G
Yes,
like
it
would
be
a
matter
of
pointing
to
like
kind
of
a
run-through
of
how
to
do
it
themselves.
It's
also
worth
pointing
out
that
it
is
perfectly
fine
if
you
are
a
user.
That
already
has
ca
certs,
that
you
know
you're
already
doing
s
mime
and
you
just
want
to
push
everything
on
dane
whatever
you
have
currently
right,
and
it's
perfectly
fine
for
you
to
keep
your
own
key
and
you
never
use
that
utility
at
all.
You
know
but
you're
right,
because
the
utility
is
up
in
front
there.
G
It
is
worth
doing
a
bit
of
a
stronger
messaging
as
well,
but
yeah
feel
free
to
actually
go
up
there
and
see
and
see
how
you
feel
about
the
messaging
that
exists.
I
believe
it
is
pretty
cautious
when
it
says
that
you
know
it
is
preferred
that
you
go
ahead
and
make
it
yourself
too,
but
this
is
just
kind
of
a
convenience
tool,
but
great
point.
Thank
you.
A
Any last questions? I don't think we do — excellent. Thank you very much, everyone. Thank you, guys, great presentation. And I think next we will have the IPv6 extension header Performance and Diagnostic Metrics talk, which is a very long title. So, hello.
L
A
M
So we did some extension header testing. As I think you guys know from some of the presentations here at IEPG too, there is an outstanding question about whether IPv6 extension headers can actually be used on the internet. It's been a controversy for quite a few years, and a number of people have done studies showing that they don't work. By and large, these studies sent crafted, fake extension headers to a number of the very large sites on the Alexa top list.
M
You know — Google, Facebook, the usual suspects. And so what we were thinking: we ourselves have been hard at work on an extension header, and we wish that to work, and we do not wish to throw all our work away. So if extension headers don't work, we have been wasting our time.
M
We
wish
not
to
do
that.
So
a
very
brief
explanation.
This
is
of
particular
interest
at
end
user
sites
enterprises
because
we
need
to
do
very
quick
triage
as
to
say
is
the
problem
at
a
at
a
hot,
very
high
level.
M
Is
it
in
the
server
or
is
it
in
the
network,
and
then
we
can
dispatch
the
right
set
of
technicians
to
to
go
to
either
way
and
the
way
we
do
this
is
we
put
timing
and
sequence
number
information
inside
an
ipv6
destination
option
next,
so
the
way
we
did,
our
testing
is.
First,
we
modified
a
freebsd
kernel
to
send
our
pdm
destination
option
with
every
packet
and
what
could
the
reason
we
did?
M
The
modification
in
the
kernel
is
that
we
wanted
to
test
real
data
going
through
and
we
wanted
it
to
come
through
all
the
time.
So
we
patched
the
kernel,
and
so
then
what
we
did
is
we
chose
locations
throughout
the
world
because
we
wanted
to
make
sure
that
we
were
going
to
multiple
transit
providers,
and
so
you
can
see
you
know,
warsaw,
toronto,
mumbai
and
so
forth
is
where
we
were
next.
M
And
you
can
see,
we
have
quite
a
few
choices
of
locations
from
this
small
hosting
service.
It
does
become
important
that
it
was
a
a
small
quote-unquote
like
no-name
hosting
service
and
not
like
one
of
the
brand
name
providers
like
you,
know,
amazon,
azure
and
so
on.
Next,
and
so
you
can
see,
our
pdm
locations
are
exactly
where
I
had
said
before
that
they
were
next.
M
So
let
me
first
give
a
shout
out
to
our
sponsors,
the
india
internet
engineering
society,
for
paying
for
these
little
servers
all
over
the
world
and
for
nitk
suratkal
for
providing
the
young
people
who
did
a
bunch
of
the
code.
Thank
you
so
much
and
then
our
own
organization,
which
is
a
consortium
of
industry
which
is
very
interested
in
this
kind
of
information.
M
So
next,
so
this
is
test
results.
So
what
I
did
was
I
took
a
very,
very
large
ftp.
You
can
see,
there's
a
ton
of
kilobytes
to
download
and
I
tested
from
toronto.
I
based
out
of
toronto
and
tested
to
all
the
locations-
and
you
can
see
here
and
pdm
is
in-
is
attached
to
every
single
packet,
and
you
can
see
here
that
the
ftp
worked
and
in
the
background
I
took
a
packet
trace
because
packets
don't
lie,
I
mean
people
can
lie,
but
packets
don't
lie
next,
so
you
can
see
there
is
pdm
headers.
M
I
took
the
psn,
this
packet
field
out
of
the
pdm
destination,
option
header
and
put
it
right
out
there
and
you
can
see
it's
in
all
the
packets.
By
the
way
all
these
traces
are
available
for
anyone
to
look
at.
We
have
them
here
and
you
can
see
surprisingly,
that
the
large
ftp
was
fragmented
and
validly.
So
so
you
can
see
fragment
headers
of
large
fragment
headers
also
going
to
the
other
end.
D
M
Please — so you can see here that everything went successfully. Next, please. Here is the destination option header out of the trace, and you can see it is a valid destination option with all the data filled in. The timing is extremely important, because those are delta times that are calculated: when I get a packet from one end, I save it and calculate the delta off of there. So you can see both ends are properly processing the previous PDM that was received at their end.
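The option being described — a destination option carrying packet sequence numbers and delta times — can be sketched as below. This is a minimal illustration of the PDM wire layout as specified in RFC 8250 (option type 0x0F, two scale bytes, two 16-bit PSNs, two 16-bit delta times); the field values used are made up for the example.

```python
import struct

PDM_OPT_TYPE = 0x0F  # destination option type for PDM (RFC 8250)
PDM_OPT_LEN = 10     # length of the option data in octets

def build_pdm(scale_dtlr, scale_dtls, psn_this, psn_last,
              delta_last_recv, delta_last_sent):
    """Pack a 12-byte PDM option: type, length, then the six fields."""
    return struct.pack("!BBBBHHHH", PDM_OPT_TYPE, PDM_OPT_LEN,
                       scale_dtlr, scale_dtls, psn_this, psn_last,
                       delta_last_recv, delta_last_sent)

def parse_pdm(data):
    """Unpack a PDM option into a dict of its fields."""
    t, l, sr, ss, psn_t, psn_l, d_r, d_s = struct.unpack("!BBBBHHHH", data)
    assert t == PDM_OPT_TYPE and l == PDM_OPT_LEN
    return {"scale_dtlr": sr, "scale_dtls": ss,
            "psn_this": psn_t, "psn_last": psn_l,
            "delta_recv": d_r, "delta_sent": d_s}

# Example: this packet is PSN 42, the last one received was 41, and the
# delta times are in (scaled) time units as the receiver computed them.
opt = build_pdm(0, 0, 42, 41, 100, 250)
print(parse_pdm(opt))
```

The delta calculation the speaker describes — save the arrival time of the previous packet, subtract on the next send — is exactly what populates the two delta fields at each end.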
M
Next, please. Now you can see both the PDM and the fragment header again — Wireshark is a delightful tool. Next, please. So, the bottom line: all the traces worked — I mean, all the FTPs worked with this very large file. We also have Apache set up on these, and we have been doing testing from here over to, I believe, Warsaw, and we also have Melbourne set up.
M
So these sites are set up to do Apache over the IETF network, and if you wish to see the results of those, please come see us at the hackathon. I will leave it there — I won't tell you whether it worked or not; you'll have to come down and see for yourselves. Next, please. So then some of the people internally in the group were like: wait a minute — okay, so you're using one hosting service:
M
Why
are
your
results
so
different
from
other
people's?
Are
these
people
using
some
kind
of
overlay
network
now
keep
in
mind?
This
is
a
small
no-name
service
which
I
did
not
think
had
the
money
to
have
their
own
servers
all
over
the
world,
but
nevertheless
one
wishes
to
verify
one's
results,
so
I
sent
them
an
email
and
I
said:
do
you
guys
have
some
kind
of
overlay
network
and
they
said
no,
we
do
not.
We
go
over
the
public
internet
next,
please.
M
M
So why are our results so different from other people's? What we believe is that we are using real data and a real extension header, and not fake data, which may validly be dropped by people. We are also not going to the Alexa top-whatever, and this becomes important, because we said, well, let's see whether our results are also consistent with other people's — and indeed, if you use the large hosting companies and go to the Alexa top-whatever,
M
Indeed,
there
are
issues,
but
the
question
is
well.
Why?
Because,
in
our
mind,
these
things
are
not
being
blocked
at
the
core
of
the
internet,
so
where
are
they
being
blocked?
Next,
please.
M
So
what
we
did
was
we
did
pings
and
trace
routes
from
our
pdm
enabled
machine.
Remember
we
have
a
patch
to
the
kernel
which
will
send
our
destination
option
out
with
every
packet,
whether
it
is
a
udp
packet
or
icmp
packet,
which
are
both
used
for
traceroute
or
ping,
and
so
what
we
wanted
to
see
was:
let's
say
that
our
pdm
enabled
trace
route
does
not
get
through,
but
the
non-pdm
enabled
trace
route
does
get
through.
M
Where
does
it
stop?
For
example,
if
there
are
eight
hops
between
ourselves
and
say,
facebook,
then,
is
it
always
dropped
at
hop
seven
or
is
it
always
dropped
at
hop
three,
or
is
there
a
random
number
where
it
is
dropped?
M
I
shall
also
leave
that
for
next
time
as
we
will
have,
we
will
have
a
draft
in
v6,
ops,
summarizing
all
our
results
and-
and
I'm
I'm
only
sort
of
kidding
about
about
some
of
this.
It's
like
we
are
we're
actually
doing
all
these
dns
resolutions
and
we're
having
some
internal
discussions,
which
we
wish
to
be
completely
in
accord
on
on
exactly
what
is
it
that
we're
seeing
and
so
once
the
team
is
is
in
sync,
then
we
will
present
the
results.
M
We
welcome
collaborators
we
wish
for
others
to
test
and
validate
our
results,
and
if
you
wish
to
collaborate,
we
can
make
our
virtual
machine
with
pdm
enabled
and
you
may
test
for
yourself
again.
We
want
to
be
careful
because
if
our
results
are
indeed
correct,
letting
this
kind
of
thing
loose
on
the
internet
for
anyone
to
use
at
all
for
any
reason
could
create
some
potential
issues.
M
So
please
come
talk
to
us
at
the
hackathon
and
hack
demo,
and
we
can
show
you
the
results.
If
you
wish
to
test
yourself,
we
can
work
with
you
and
you
can
do
a
trace
route
ping
or
actually
go
into
our
apache
and
take
a
packet
trace
for
yourself
and
see
what
happens.
A
M
We welcome collaborators — especially if collaborators have an interest in discussing, or rather fighting amongst ourselves, as to why we're seeing what we're seeing. Jen?
N
Hi
angeline,
actually
I
don't
think
they're
really
different
from
what
other
people
see,
because
most
of
the
results
I've
seen
is
packet,
draw
packets
being
dropped
near
the
last
near
the
destination
network
or
even
source
network,
if
it's
user
cpu
right
so
transits,
normally
let
them
through.
So
I
do
not
see
really
conflict
with
your
results
with
any
other
right
like
so
it's
not
normally
the
destination
network
drops
it
and
if
your
vantage
points
permit
them
yeah,
I'm
not
surprised.
You
see
them
going
through
and
a
question.
M
We
can
certainly
do
that.
We
can
the
reason
I
was
doing
dns.
Well,
I'm
not
okay.
So
a
couple
all
right.
Let
me
answer
that
one
and
then
I'll
go
back
to
some
of
the
other
comments.
You
had
it's
because
to
me
I
won
it's
like
for
me.
It
seems
really
obvious
it's
like
it's
like.
If
how
close
did
it
get
so
like
if
akamai,
for
example,
not
to
pick
on
akamai,
a
wonderful
company
is
if
they're
hosting
your
site
and
you're.
M
N
Yeah, I totally agree. I'm just trying — you are trying to find out which network drops the packet, right? So I'm trying to understand why you are using DNS to find out who owns that IP address, instead of using AS-number attribution for the address, because the DNS PTR might not even exist, while that address definitely belongs to some AS number, which usually indicates who owns the device.
L
M
As I say, we welcome collaborators — and if you want to see our results live...
L
O
I just wanted to bring up one that was in the chat. Ana asked: do you plan to do any hop-by-hop extension header testing? We're actually doing that in the hackathon — we're trying to break the RFCs — so yeah, come and we'll present. Actually, we're looking specifically at hop-by-hop, and the results are a little bit interesting — a little bit scary, also.
M
By
the
way
and
anthony
is
a
member
of
our
hackathon
team
and
he's
and
and
liquid
is,
is
we're
happy
to
have
them
working
with
us
at
the
hackathon.
P
And
I'm
part
of
these
troublemakers
too,
and
I
just
wanted
to
add
to
what
anthony
said
and
in
response
to
that
question-
that
a
key
determining
factor
of
why
these
results
are
occurring,
we
think,
is
the
type
of
extension
header
like
hop
by
hot,
might
be
different
than
doh
and
the
content
of
the
data
in
that.
So
those
are
key
factors.
We
think
we.
M
B
A little bit of a nasty question, please: what kind of bad conclusions do we expect to result from your claim that packets cannot lie?
L
L
A
Q
C
Q
Well, you all know that too. Next slide — yeah, pushing a lot of buttons very quickly. So, as some of you might be aware, we've actually tried to do large-scale measurement — with some support from Google — by enrolling around 20 million people a day through use of an advertisement campaign, where the ad actually contains a scripted set of URL fetches, and we look at the DNS, regard DNS query labels as microcoded instructions, and make the servers give unique answers every time a query hits them.
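The technique just described — every ad impression fetching URLs whose DNS labels are unique, so each query seen at the authoritative server can be tied to one impression — can be sketched as below. The label structure, field names, and domain here are hypothetical illustrations, not the actual encoding the measurement system uses.

```python
import time
import uuid

def make_label(experiment, domain="example-measurement.net"):
    """Build a one-use DNS name for a single ad impression.

    A unique id ties every query back to one client, and an embedded
    timestamp lets the server treat stale queries as replays rather
    than live measurements.
    """
    uid = uuid.uuid4().hex[:12]   # unique per impression
    ts = int(time.time())         # lets the server age out replays
    return f"u{uid}-t{ts}-{experiment}.{domain}"

label = make_label("frag6")
print(label)
```

Because the label itself carries the experiment parameters, the authoritative server can "execute" each incoming query name as an instruction and synthesize a unique answer, which is what makes per-client measurement possible without any cooperation from resolvers.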
Q
So,
unlike
lots
of
other
measurements
like
measurements,
the
alexa,
where
one
point
measures
a
hundred
or
a
thousand
or
a
million
sort
of
remote
points,
we're
actually
enlisting
millions
of
unique
users
every
day
to
come
to
a
small
collection
of
servers
which
are
on
vms
around
the
world.
Now
we
did
this
initially
to
actually
dispel
the
myth.
Q
Q
because, again, if you say, well, here's a name that's not validly signed, the number of people that go to it is kind of: well, you're not validating, are you? You've got a resolver out there that just doesn't care about DNSSEC validation. And then we started around IPv6 fragmentation and, yeah, extension headers — and, yeah, completely different outcomes to what Nalini is reporting. Completely different. It has a lot to do with the experimental technique: we have no control over which end users run this — basically, the ad campaigns do all the enrolling.
Q
Q
Now, I should make a note about that, because you're only going to see that there's an alt-svc record if you go and get it — and if you don't go and get it, you don't know it's there. So you need to actually get the thing presented twice. Now, with ads, we normally say: here's a list of URLs; they've all got unique names; you only see them once — so it doesn't matter what you put in there,
Q
It
isn't
going
to
get
triggered,
so
we
then
revise
the
script
slightly
and
for
a
couple
of
these
ones,
particularly
this
one.
We
wait
and
we
actually
wait
for
two
seconds,
which
is
an
interesting
number.
We
wait
for
two
seconds,
then
we
tell
the
user
go,
get
it
again
same
dns
name,
but
we
alter
the
http
query,
args
and
hopefully
altering
those
query.
Args
seems
to
defeat
most
values
of
https
proxy.
If
that's
what
you're
behind
so
the
idea
is
for
about
a
fifth
of
our
experiments.
We
do
two
fetches
for
everyone
else.
Q
Yeah
said
that
next,
so
these
are
the
big
answers.
We've
got.
We've
only
started
running
this
in
june,
at
sort
of
a
massive
level
of
deployment
we're
doing
around
15
to
20
million
experiments
every
day
across
a
unique
batch
of
users.
Thank
you
ad
engine
and
I'm
contrasting
the
second
fetch
to
the
first
fetch,
because
I
can
see
the
difference.
Q
Q
We issue the GET command inside the script after the two-second wait; what happens after that is browser magic. But the number is still pretty low: 3.5 percent. Next slide. So, where ads give us an enormous amount of detail — origin AS, network, et cetera — they also come with a rudimentary form of geolocation.
Q
Q
Q
India is red — there's some kind of weird geolocation thing going on inside browsers, because I'm pretty damn sure it's nothing to do with networks. Next slide. So that's curious. And here's a table: Malta, 28 percent on the second query, 1.4 on the first; the Central African Republic, in Africa — wouldn't have guessed it. This is not, if you will, a predictable list, but there are certainly systematic variations there around locales. Next slide.
Hi — a quick question:
B
J
Q
I've asked Apple, but not Chrome — and part of the idea of airing this is that there will be questions to Chrome as well, and I'm getting there. Let's actually look at the first query and sort that country list by first-query hit, and all of a sudden the Scandis come right up: eight percent of users in Denmark do a first-query hit on HTTP/3, whereas the second query is only slightly more. So there are certainly variations going on between first and second query. Next slide.
Q
Q
Thank you. So I have some questions here that I really would like to understand a bit better, and sort of see what's going on. The first is: which are the browser clients that are actually performing QUIC, and why the first- versus second-fetch variation?
Q
The
second
thing
is:
if
you,
you
read
the
quick
specs,
you
can't
fragment
you
just
can't
it
says:
do
not
fragment
these
udp
packets.
So
what
are
the
packet
sizes,
quick
sessions
actually
use
and
in
particular,
I'm
interested
in
what
the
clients
do
when
they
open
the
session
and
start
talking
to
our
servers?
So
in
other
words,
not
what
I
select,
but
what
is
being
selected
out
there
as
the
mss
values
for
quick?
Q
What's
the
connection
failure
rate,
because
there's
been
an
awful
lot
of
fear
and
distrust
about
udp
port
443,
is
it
filtered
like
crazy?
Does
it
get
through
like
glass?
You
know
what's
going
on
and
in
this
case
the
question
I
had
was:
is
quick,
faster
or
not?
Is
it
really
quicker
next,
so
let's
go
quickly
and
try
and
answer
some
of
those
whoever's
driving
this.
Q
This
is
an
odd
table.
Like
all
tables,
it
needs
explanation.
So
this
is
the
I
the
os
clients,
as
determined
by
the
browser
stream.
What
hardware
are
you
running
on
or
what
operating
system?
Everybody
lies
right.
So
in
some
ways
this
is
just
the
lies.
I
got
told
by
the
browser
in
their
browser
stream
there's
an
awful
lot
of
windows
3
out
there.
You
know
yeah
right
so
to
some
extent
it's
slightly
cloudy,
but
there's
patterns
going
on.
Q
They
aren't
comparable
horizontally,
it's
vertical,
so
I'm
just
breaking
down.
I
have
separate
tests
that
run
only
tcp
and
tls
completely
independent
of
the
quick
stuff,
and
for
those
I
see
well,
what
clients
does
the
ad
actually
get
to
so
around
5.5
percent
of
clients
out
there
for
an
ad
use
ios
another
1
uses
mac
84.5
uses
android.
This
is
sort
of
the
market.
Share
of
platforms,
windows
still
exists
and
you
linux
folk
you've
got
a
lot
of
work
to
do.
If
you
want
market
share,
so
they
add
vertically
not
horizontally.
Q
Q
It's predominantly the Android platform, with a little bit of iOS — and again, there are liars and all kinds of stuff, because it's the browser string — but predominantly it's an Android behaviour. Next slide, please. So let's go to the browsers and the browser clients, and this kind of sorts it out: the world is Chrome.
Q
Q
Interestingly, on the second fetch, a huge amount in Chrome, but also some degree of use in Safari. Some of you iOS users seem to prefer — or, sorry, some of you Android users — I really don't understand that; it's higher than it should be. Next.
Q
Q
Q
Q
46 percent are exactly 1200 octets on that first packet coming in — the packet that's padded up, that the spec says must be at least 1200 octets. 46 percent go: yeah, okay, fine, 1200. A few folk are more inventive — we'll go 1250; yay, good on you — and a few more folk go: we'll go 1252, okay. And a very small number go: we'll go 1354, yay. Then nothing higher.
Q
So
whatever
you
think,
there's
no
one
kind
of
pushing
the
boundary
here,
because
predominantly
on
that
first
packet,
you
have
no
idea
what
the
path
mtu
is,
and
so
1350
is
about
the
extent
of
which
people
are
going
to
say.
Okay,
let's
go
that
far,
but
no
further,
if
that
first
packet
is
fragmented,
nothing
works
next
slide.
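The constraint being discussed — pad the first datagram to at least 1200 octets, but not so large that it risks fragmentation on an unknown path — can be sketched as a simple size check. The overhead and MTU figures below are illustrative assumptions for IPv6 over a typical 1500-byte path, not values from the talk.

```python
QUIC_MIN_INITIAL = 1200  # RFC 9000: client Initial datagrams must be >= 1200

def valid_initial_size(udp_payload_len, assumed_path_mtu=1500,
                       ip6_udp_overhead=40 + 8):
    """Check that an Initial datagram is large enough for the spec and
    small enough to avoid fragmentation on the assumed path."""
    big_enough = udp_payload_len >= QUIC_MIN_INITIAL
    fits_path = udp_payload_len + ip6_udp_overhead <= assumed_path_mtu
    return big_enough and fits_path

# The sizes observed in the talk: most clients sit at exactly 1200,
# a few venture up to 1350 or so, and nobody goes near the MTU edge.
for size in (1200, 1250, 1350):
    print(size, valid_initial_size(size))
print(1500, valid_initial_size(1500))  # would risk fragmentation
```

Since the sender has no path MTU information on the very first packet, staying near 1200 is the conservative choice the data reflects: too small and the handshake is invalid, too large and a fragmented Initial simply never completes.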
Q
Quick
connection
loss:
this
is
unusual.
This
is
like
seeing
a
99
drop
rate
for
hot,
buy
hop
extension
headers
in
v6,
except
the
other
way
around.
Q
Now
that's
an
awful
lot
of
equipment
out
there
that
thoroughly
trusts
udp
port
443
incoming
in
response
to
an
outgoing
yay
good
on
them.
It's
much
higher
than
sorry.
That
is
much
lower
than
I
thought
I
was
expecting
something
to
three
percent
a
bit
like
the
v6
failure
rate
v6
by
the
way
in
tcp
has
around
using
the
same
methodology
around
two
2.4
failure.
The
v6
connections
don't
work,
and
it
seems
to
be
that
the
filters
close
to
the
customer
will
let
the
packets
out,
but
won't
let
the
syn
ack
packets
back
in.
Q
Q
This is getting a lot harder with QUIC, because looking at packets is difficult — most of it's encrypted — and actually trying to trace packet and ACK when the entire thing is potentially multi-threaded tends to make my brain explode. So I went for rough and ready: inside the browser, yay, is a timer, and I have no idea how accurate this timer is.
Q
I
have
no
idea
if
it's
just
a
random
number
or
whatever,
but
it
does
seem
to
be
that
if
it
takes
longer
to
load
the
browser's
timer
value
is
more
time
than
if
it
loads
quickly,
and
I
think
that's
about
as
accurate
as
you
get.
So
what
I
did
is
I
asked
the
browsers
to
go.
Well,
how
long
did
it
take
you
to
fetch
this
using
quick
and
not
using
quick,
because
there's
a
bunch
of
folk
who
don't
use
it
and
every
user
actually
gets
to
do
quick
and
non-quick.
Q
Q
So if QUIC is faster, the time elapsed to complete the entire transaction should be lower — it should be on the right-hand side of the zero point; if QUIC is slower, it'll be on the left. There's a huge amount where it said near enough the same time, which is fine, but there's certainly a bias to faster — and these are in milliseconds; it's sort of visible around the first 50 to 100 milliseconds as being most obvious. It's clearer in a cumulative distribution. Next slide. Around two-thirds of the time, the browser believes the QUIC fetch was faster.
Q
Q
Do I care whether it's HTTP/1 or HTTP/2? No, I don't — it's just: you're not using QUIC. And if you're coming to my server and you're not using QUIC, you're still using TLS — it's all TLS — so it really is TCP with TLS versus QUIC that I'm comparing, yeah.
Q
Didn't
look:
okay,
yeah
just
did
not
look.
In
fact,
what
I
was
doing
was
saying
it's
either
an
https
fetch
or
it's
an
https,
slash.
Three!
That's
all
so
you
know
I
didn't
factor
in
header
compression
go
forward,
two
slides
were
we
yeah,
so
this
is
a
summary
of
what
I
said
next.
Q
Now
my
answers
are
weird
and
I
tried
to
find
some
other
public
answers
and
the
most
immediate
one
is
on
the
cloudflare
radar
site,
where
they're
reporting
served
from
cloudflare
30
of
their
web
fetches
occur
over
quick,
so
what
I'm
seeing
is
far
far
lower
now.
I
have
no
reason
to
doubt
that
it's
a
fine
number,
but
again,
this
sort
of.
Why
are
we
seeing
very
different
things
next
slide?
Q
So
I
certainly
agree,
but
I
actually
think
cloudflare
is
low,
because
if
quick
is
enabled
by
default
in
chrome,
it
should
basically
all
these
connections
should
head
towards
quick
and
so
in
cloudflare's
case.
I
suppose
the
real
question
is
to
what
extent
is
cloudflare,
seeing
first
fetch
versus
subsequent
fetch.
What's
their
breakdown
now
I
can't
see
in
behind
cloudflare.
Q
So
I
really
don't
know,
and
maybe
that
30
is
a
reflection
on
how
many
times
the
client
got
there
once
received
the
signal
saying
if
you
go
there
again
quickly
within
the
case
time,
you'll
go
and
use
quick,
but
they
didn't
get
there.
So
only
30
of
their
folk
went
there
twice
or
chrome
is
not
doing
it
all
the
time.
I
have
no
idea
so
that
number
is
kind
of
difficult
to
unpick,
but
ours
is
certainly
a
lot
lower.
Q
So
why
is
my
number
so
low?
I
think
that
two
second
timer
is
way
too
fast
for
browser
behavior
inside
chrome.
Personally,
I
might,
when
I
say
at
the
script
level,
wait
two
seconds
and
do
another
fetch
the
incredible
internal
scheduling,
issues
of
the
chrome
browser
with
their
multiple
execution,
cues,
etc,
etc.
Q
I
could
make
that
a
really
long
time,
but
you
focus
users
when
you
see
an
ad
displayed.
If
it's
displayed
more
than
10
seconds,
you
are
and
click
the
xbox
and
kill
the
ad.
So
if
I
bring
that
timer
out
you're,
just
not
going
to
see
the
ad
anymore
and
I'm
not
going
to
get
the
measurements,
that's
why
the
timing's
so
aggressive
next
slide.
Q
So
I
don't
understand
a
few
things.
I
started
actually
looking
at
how
many
folk
asked
for
that:
https
that
https
resource
record
a
lot
three
to
four
times
the
number
of
folk
that
actually
do,
the
first
fetch.
Q
But
you
know
that's
for
apple
to
confirm
and
for
me
to
guess
the
question
about
that:
two-second
scripted
wait
time
in
chrome,
you
know,
I
don't
actually
know
how
long
I
should
be
waiting
for,
and
I
don't
know
how
long
chrome
keeps
it
going
well.
I
found
this
directive.
How
long
is
that
case
of
that
directive
lasting?
I
have
no
idea,
and
the
other
thing
about
chrome
is:
will
it
always
use
quick
or
if
you
live
in
burma
or
vietnam,
will
it
do
it
differently
than
if
I
live
in
australia?
Q
You
know:
are
there
locales
that
change
that
default
behavior
and
the
same
question
for
safari?
To
what
extent
is
this
behavior
triggered
by
various
locale
setting
defaults?
I
think
that's
it.
Is
there
another
slide,
one
more
slide:
oh
okay,
whoa
and
there's
a
web
page
where
all
this
gunk
is.
Is
there
as
pretty
pictures
and
graphs
any
questions
or
preferably
answers,
because
you
know
I
have
no
idea
some
of
this
stuff.
E
Question
about,
I
guess,
maybe
back
one
slide
or
yeah,
either
slide
12
or
16.
I
can't
remember,
but
your
https
response
does
it
have
an
ipv6
or
ipv4
address
hint
in
it?
No
hints.
E
There's
connection
racing
going
on
here:
it's
happy
eyeballs
between
http,
3
and
http2,
and
if
you
don't
return
the
ip
address
in
the
https
query,
because
these
are
one
use
domain
names,
the
client
is
also
doing
a
a
record
lookup
and
a
quad,
a
record
lookup
and
an
https
lookup.
It
gets
the
a
or
quad
a
and
the
https,
let's
just
say
even
say
all
at
the
same
time.
What
it's
going
to
start
doing
is
maybe
tls
http
2.
E
Q
Q
This is a larger question of selection bias, because I get to see the folk who get to see ads, right? And so some networks — there's a mobile provider in South Korea, I think it's SKA — seem to be massively ad-blocking. That's fine; I don't get to see them at all, and so whether you use QUIC or not doesn't matter. If you get to see the ad, the full measurement set runs. Right now, oddly enough, I don't get to see Russia very clearly.
Q
We
all
know
why,
and
at
some
points
I
don't
get
to
see
iran
very
clearly,
for
I
guess
similar
reasons.
China
quite
clear.
Oddly
enough,
the
ad
systems
work
in
china.
Just
fine
using
you
know,
double
clicks
infrastructure.
A
lot
of
folk
use
ads,
so
the
selection
bias
is
there
absolutely
for
ad
blocking,
but
it's
not
for
each
individual
ad
for
each
individual
type.
That's
a
different
problem
about
censorship
and
the
labels
that
folk
use.
Q
E
F
E
Q
Q
Q
Q
So this is, if it comes out and I answer — these are the ones that respond. But again, Eric, I'll check v4, v6, and if —
S
Hi
brief
question:
do
we
verify
that
the
initials
that
you
see
at
the
server
was
triggered
by
the
client
by
the
ad?
Actually
did
you
verify
that
the
initial
that
you
see
at
your
server
was
triggered
by
the
advertisement.
Q
Every
advertisement
has
a
unique
generated,
dns
name
at
the
initial
conversation
of
the
script.
So
all
of
the
queries
with
that
dns
string,
which
is
actually
again
a
piece
of
micro
code,
ultimately
came
from
the
same
client.
So
even
if
you're,
using
apple
private
relay
that
name
filters
through
and
emerges,
it
was
you
now.
Yes,
there's
tracking.
There's
logging
there's
query
replay,
so
the
label
has
a
timer
field
inside
its
label.
Q
If
it's
more
than
10
seconds,
it's
not
you!
It's
a
replay.
I
really
don't
care
it's
something
else.
Okay!
So
yes,
I
know
it's
you
almost,
irrespective
of
how
so,
even
if
you're,
behind
a
very
aggressive,
gnat
and
you're
changing
your
source
address
every
rtt,
which
is
about
as
extreme
as
you
get
with
quick.
It's
still
you
it's
still
the
same
dns
name.
I
Q
Oh — so, as I said, that's what I referred to in v6: SYN, SYN-ACK, okay, and I don't get the ACK. And again, I can only measure in six, because to get to the ad, four had to be working — so the failure rate in four doesn't really matter. In a straight TCP thing, all the ones that fail in four never got enrolled in the measurement, sadly.
I
Well,
but
I
guess
even
if
it
doesn't
seem
to
matter
there
may
be
still
be
the
possibility
in
before
that
it
does
fail
at
this
stage
with
tcp,
and
if
that
number
be
comparable
to
this
one,
then
I
think
the
conclusion
that
this
number
here
stems
from
client-side
filtering
of
incoming
four
for
three
packets
over
udp
is
not
necessarily
true.
I
think
the
point.
Q
I
Okay, then I agree — I thought you were saying this number is from that cause, and that's what I was questioning. Okay, cool. Then you have this diagram that shows the results that QUIC connections are quicker than non-QUIC connections — back two slides; I don't necessarily need the diagram. Yeah — so my question is: if there are shared results, the DNS lookup would benefit one part if you do it first. So perhaps the QUIC lookup is always the second one, and then it may be faster for that reason. I thought you had thought about this, but perhaps not — so, when —
Q
I
have
no
idea
what
it's
measuring,
none
when
I
ask
20
million
a
day
to
measure
it,
I'm
relying
on
the
fact
that,
no
matter
what
stupidities
happen
inside
that
browser,
it's
all
the
same
stupidity,
and
so
the
structural
difference
between
these
two
collections
of
measurements
is
the
transport
protocol
and
a
bit
of
the
dns,
and
therefore
this
is
relatively
rough
and
ready.
It's
not
a
measure
of
packet
rtt
times
or
anything
like
that.
It's
just
simply
the
browser's
view
of
the
elapsed
time
and
you
go.
What
does
that
mean?
I
Right
but
in
case
it
does
include
any
latencies
from
dns,
for
example,
and
if
there
is
a
bias
by
always
first
doing
the
non-quick
connection
and
then
doing
the
quick
connection,
you
could
average
that
out
by
randomizing
the
order
across
clients
and
50
clients
doing
quick
first
and
then
on
quick
and
the
other
50
percent.
The
other
way
around
that
might
average
out
some
of
it.
Q
I
Okay, that's very interesting — yes, that's what —
F
Hi
john
o'brien,
you
pin
just
checking
my
understanding
when
you
say
that,
in
order
for
the
browser
to
participate
in
the
measurement,
ipv4
must
be
working.
I
to
infer
that
that
means
that
the
ad
platform
is
an
ipv4
only
ad
platform.
No,
so
I
don't
understand
you
know,
oh.
Q
There's
a
lot
more
mechanics
about
limitations
in
ad
delivery
networks
and
what
we
are
trying
to
do.
We
originally
had
scripted
an
ad
that
when
we
went
off
to
the
admission
it
said
run
this
set
of
code
and
we
kept
on
changing
things
and
every
time
we
did
that
we
had
to
go
to
the
god
of
advertising.
Saying.
Could
you
please
approve
our
new
ad
and,
as
far
as
I'm
aware,
google
is
an
incredible
advocate
of
ai,
incredible
there's,
not
a
human
in
sight
in
the
entire
ad
machinery
platform.
Q
So
when
you
submit
a
new
ad
in
the
answer,
is
random
yes,
approved?
No,
not
approved,
yes,
wait
for
another
day,
and
we
got
pretty
frustrated
with
this.
You
know,
understandably,
and
so
what
we
did
instead
was
to
go.
Okay,
we're
going
to
load
a
skeleton
inside
the
ad
machinery,
which
then
is
its
first
thing
to
do,
is
come
back
to
us
going
hi,
give
me
some
some
tasks,
so
we
could
change
the
tasks
without
changing
the
ad
yay.
Q
That
step
is
b4.
Why
to
maximize
the
reek,
because
at
that
point
we
were
fixated
on
measuring
v6
and
so
in
some
ways
the
v4
always
has
to
work
for
this
entire
thing
to
flow
through,
so
don't
blame
the
ads.
Don't
blame
google
blame
me.
That
was
the
way
we
designed
it.
Yes,
it
could
be
dual
stack,
but
in
some
ways
this
stuff
works
and
google
has
approved
it
yay
very
scared
to
touch
it
thanks.
Thanks.
Q
We
do
full
packet
capture
full
packet
capture,
there's
an
awful
lot
of
packets
sitting
in
a
you
know,
spinning
storage
out
there
somewhere
do
I
plan
to
look
at
it
good
question,
currently
we're
fixated
on
extension,
headers,
we'll
probably
get
back
to
quick
once.