From YouTube: IETF115-PEARG-20221109-1500
Description
PEARG meeting session at IETF115
2022/11/09 1500
https://datatracker.ietf.org/meeting/115/proceedings/
B
A few extra notes for in-person attendees: you are required to have your mask on at all times in the room, apart from when you are actively speaking. Could in-person attendees also please sign in through the Meetecho lite client, because that replaces the blue sheets? Following what other groups have been doing, we will be using a single queue in Meetecho, including for people in the room, so if you have a question, please add yourself to the queue via the tool rather than going directly to the mic.
B
We have a slightly modified agenda from the one that was published on the datatracker, due to some uncertainty about whether one of the authors would be able to attend for the draft updates. We will go through those updates at the start of the meeting and then move on to the remote presentations. I believe we have a minute taker, so thank you, Gurshabad, for that.
B
Before we get going, is there anything anybody would like to add or change about the agenda?
B
No? If not, then we will start by running over a quick update of the drafts that we have going through the research group at the moment. Yes, it's a review of the drafts that have progressed through the research group. We have a pair of drafts on the history and the generation of transient numeric identifiers. We believe they are in the final throes of review, and we are hopeful we are one update away from getting those published, which will be very nice: they will be the first documents to come out of the research group.

B
We also have the survey of worldwide censorship techniques, which passed its last call a couple of months ago and has now been sent on to the IRTF chair for further review; it is currently with him.
B
Okay, thank you. Yes, I see a hand up from Mallory. In terms of documents that are being actively worked on, we have two at the moment. We have the guidelines for performing safe measurement on the internet, which had a fairly recent update in, I believe, August of this year; thank you, Mallory, for that. We think that is in relatively good shape. We would love some more review of it, but we think we could consider moving it forward in the hopefully not-too-distant future.

B
The other draft that is actively being worked on at the moment is the one on IP address privacy considerations. This is a document that came out of the interim we had on this topic the year before last.
D
...of the draft last week, and we think we have some good content there, although it is in bad need of a solid editorial pass to make sure it is helpful for the current topics that folks are looking to use the draft for. So we are really looking for folks to take a review and let us know what applications of the draft there would be and where it could be helpful, and we will be working on that editorial pass between now and IETF 116.
E
Hi, Mallory Knodel, CDT. I just wanted to give a really quick update on the safe measurement draft, because I appreciate your faith that this document is good, but there are actually some important parts of it that are unwritten or, in my view, aren't elaborated enough. So if there are folks who work on this, I could also, maybe with encouragement from you all, send messages to folks in, like, PPM or elsewhere that might have some overlap here, just to get, I think, better text in those places.

E
That would be much appreciated, so you can reach out to me directly or you can go on the list. The other update on that draft is that at the IAB workshop on measurement in encrypted networks, I talked about this draft a bit, so others have seen it. It was sort of in the context that it doesn't exactly speak to that workshop's mission or statement, but I thought that what is in the draft isn't really anywhere else, in terms of how you do harm reduction holistically, rather than just worrying about what specifically you're sending or what you're measuring. So it was well received, in the sense that folks found it useful, and I wanted to feed that back to the group as well. Thanks.
A
Very good. Well, I'd like to thank the chairs for inviting me to present, so here we go.
A
I put a TL;DR at the front here, because some folks in the room may be very familiar with privacy-preserving measurement, especially the activities at the IETF or from other areas, and some folks will see this presentation outside the room, so I thought I'd add that here. But we can move right along to our origin story for Clean Insights, which is that we started in 2017 with a hackathon project at the Harvard Berkman Klein Center, in association with MIT.
A
Now, it's hard to remember this at this point, but the European Union's GDPR had been adopted but was not yet enforced at this time, so we didn't have a lot of insight into how that was going to roll out. And there was interest, across a set of communities involved in civil society, internet freedom, and human rights, in what was going on with measurement and how measurement intersected, positively or negatively, with privacy and human rights, etc.
A
You'll notice the date on the next item, where Internews, the nonprofit who funded our work later on Clean Insights, appears: there's a long gap in there, and I think that's largely because it was a period of waiting to sort out what was going on with these higher-level privacy activities. Around the 2020 time frame, both the funders of human rights projects and the developers of those projects were sort of stumbling over how to address the problems that they were finding.
A
So I think we started by asking a bunch of questions early on that then became even more pressing later. How can funders understand the impact of the ideas they fund without putting their users at risk?

A
How can companies strike the right balance between preserving privacy and driving their development towards meeting users' needs better? Is it really possible to do measurement of digital interactions in a safe and sustainable way, especially in the communities that we were interested in? And can these sorts of privacy precepts be upheld even for small projects?
A
In mid-2020 there was a symposium held where Internews, ourselves, and several other parties hosted developers from the open-source, human-rights-type communities, to look at and try to understand the problems they were dealing with: the demands of their funders versus their deep interest in their communities. I think these key points came out of that. I should point out that this is kind of a different vector of interest into measurement than other vectors might be: understanding the patterns of behavior in a way that doesn't alienate users; seeking a platform for measurement, but one that doesn't harm individuals. There's an idea here that no measurement isn't an option: even the smallest projects need to understand their impact or problems with their software.
A
So the outcome of that is really about trying to bring measurement in line with respect for the user. The idea was to focus on the right questions and to gather just enough data to answer those questions. We looked at aggregating at the source; the populations we're interested in are very largely on mobile devices, so that was certainly a consideration. We wanted to have something in the path between the generator and the collector of information that would discard needlessly toxic personally identifying information. We also wanted to make the idea of measurement legible, or transparent, if that isn't the wrong word at this point, and part of a usage experience that would engage the user rather than alienate the user. And we looked at techniques for generalizing data: de-resolution, or whatever the right word is for reducing identifiability.
I'd like to point out here that ideas like differential privacy were quite solid in the literature starting really in 2006, but it wasn't until around 2015 that they really came to the forefront for bringing into this kind of measurement. So these kinds of ideas were still pretty new at the point we started; we started early.
A
We had a very basic piece of technology that worked back in 1978, sorry, 2017, and since then we've broadened it a reasonable amount. There are SDKs for both client and server available, and there's an anonymizing proxy that sits between the client and the initial platform we worked with, called Matomo, which can be generalized to be used with other analytics packages. And maybe as importantly, we started doing user research to define experiences that made measurement a collaborative sort of experience, more obvious for the user, and started on implementations.
A
This diagram is meant to show the kinds of implementations of Clean Insights that are possible. The dominant form used today is the direct connection, where devices running an application can connect directly to a proxy that then connects to the analytics engine.

A
But there are also scenarios, necessary for us, where different kinds of front proxies get involved to assist the user where there is either censorship, surveillance, or additional privacy needs. Domain fronting, which is a very specific term, can be used here, and then "alt" is a way to think of other kinds of fronts: maybe Tor is one of those, and there are some other tools being discussed at the IETF that can be used in this way.
A
When it comes to the actual technology we implement, I'd first like to point back to that idea that we were feeling our way initially here, moving from no answer towards some answer and iterating along the way.

A
What we wanted to start with was a way to do the general feeling of normal analytics: that is, counting and cross-tabs on some relatively general metrics, doing crash reports, and doing surveys. We felt that if the Clean Insights solution didn't provide a solution to those basic things that people would demand, people would use other existing, problematic side channels to get what they needed.

A
So we wanted a bare minimum of capability, without solving the whole problem right away.
A
In terms of improving anonymity, we looked at batching reports from the client to the server. That has a number of positive effects on usage of networks in areas that have very deep network problems, whether generated by authorities or just problematic networks, and it also hides the timestamps of measured activities. We looked at ways to generalize and de-resolve information on the client, and at the domain fronting I mentioned earlier. And we did recognize that at the scales we're talking about, the number of users of applications, etc., there was a tension between respecting consent and privacy and padding the anonymity set. That's a sort of detailed part of this, but it was an early criterion: remembering that we have to deal with small-scale projects.
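The batching and timestamp-hiding idea described above can be sketched as follows. This is a minimal illustration, not the actual Clean Insights SDK; the class and field names are hypothetical:

```python
import random
import time

class ReportBatcher:
    """Buffer measurement events and flush them in one batch.

    Coarsening each timestamp to the start of its UTC day hides
    when, within the day, the measured activity happened, and
    sending one batch avoids a network round-trip per event.
    """

    SECONDS_PER_DAY = 86400

    def __init__(self, flush_size=20):
        self.flush_size = flush_size
        self.buffer = []

    def record(self, event_name, ts=None):
        ts = time.time() if ts is None else ts
        # Round down to the day boundary: de-resolve the timestamp.
        coarse_ts = int(ts) - int(ts) % self.SECONDS_PER_DAY
        self.buffer.append({"event": event_name, "day": coarse_ts})
        if len(self.buffer) >= self.flush_size:
            return self.flush()
        return None

    def flush(self):
        # Shuffling removes the within-batch ordering of events,
        # which would otherwise leak relative timing.
        batch, self.buffer = self.buffer, []
        random.shuffle(batch)
        return batch
```

Coarsening to the day and shuffling the batch are the two moves that hide when, and in what order, the measured activities actually happened.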
A
Another thing we looked at was really borrowing from law, and maybe this is rethinking, again from the perspective of the user, things about the way measurement has canonically been done and the way we thought our community wanted to work on this. The first is the idea of time-bound contracts: the notion that you're bound to something that lasts forever is a challenge, so we introduced the idea of campaigns, which are ways to put a time binding on when measurements are going to be made.

A
The second was consideration, and this one ties a little bit to the bottom one, which is called contracts of adhesion: the idea that people feel that they must opt in because there's no alternative, that there's disparate power between the measuring entity and those measured. And at that point in time there were often issues where refusing consent meant you were just deprived of the service completely.
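A time-bound campaign of the kind described above might be modeled like this. This is a hypothetical sketch of the idea, not the real Clean Insights API; the names are invented:

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    """A time-bound measurement campaign: consent applies only to a
    fixed window, after which recording silently stops."""
    name: str
    start_ts: int        # epoch seconds when the campaign opens
    duration_days: int   # how long consent remains in force
    consented: bool = False

    def is_active(self, now_ts):
        end_ts = self.start_ts + self.duration_days * 86400
        return self.consented and self.start_ts <= now_ts < end_ts

    def record(self, event, now_ts, sink):
        # Declining consent is not punished: the app keeps working,
        # the event is simply dropped.
        if self.is_active(now_ts):
            sink.append((self.name, event))
            return True
        return False
```

The point of the design is that "yes" never means "forever": once the window closes, recording stops without any further action from the user.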
A
So we wanted to address these sorts of things in the way we did our work when we created a toolkit and a set of best practices for developers. I think the main thing that we focused on early, because it is part of the user experience as well as the technology, is this idea of consent: understanding who's going to use your app, what the app is, what they're going to use it for, and in what situations.
A
We have a number of situations around the globe that put a really fine point on user situations, and on the difficulty and challenges people have using the internet, especially in the communities that we serve, and the challenges we get into when we're creating network traffic simply by trying to measure. So we need to start, as we said earlier, by considering specific questions, narrowing the time during which we measure, handling the data carefully, and getting rid of it once it has reached its destination and the insight has been gained.

A
Working with developers at that symposium, we started developing design patterns around how consent can be handled, as well as around the information needs that developers and funders have.
A
We arrived at a set of principles around consent, and this is the user experience of consent: making it easy to understand what you're agreeing to; making the thing you're agreeing to pretty obvious, with no hidden messages in there; making sure that saying yes or no doesn't mean giving up use of the application; and making the way we deliver consent centered around those who need it, so asking for it in the right way, using it in the right way, etc.
A
Here are a couple of example models. This is the Umbrella app, and here's an example where the app needs to look at patterns of usage: it is in fact going to measure different things and the connections between things, so it wants to make obvious that that's what you're consenting to, and that you're going to contribute for a short period of time; here, 14 days is the measurement period.

A
On the next page we have another example, which is about collecting metrics while making it feel like a focus group. This is a case where only a portion of the user base actually gets this request, again time-bound, but a different way to present the idea of consent to measurement.
A
We of course knew that the number of studies we undertook and the number of groups we worked with weren't going to span the field of what consent models might look like.

A
So we came up with guidelines and best practices for rolling your own consent, and I would hope that some of these things feel awfully logical and right-thinking, I guess you could say. But in fact, given the ways these interact with the different kinds of things that applications do and the kinds of users using the applications, quite a bit of thought is required here.
A
We have done a number of implementations, but we also went back and worked with the teams to look at how our work worked out for them. So we actually have impact reports on some subset of the applications we implemented Clean Insights for. These are the apps here; you'll notice that they are privacy- and security-related, which again relates to our user base.
A
We're sort of going around the world, in a lightweight way, promoting this idea. The Guardian Project is fortunate to have a number of software libraries we've developed that people use, and we're talking to a lot of development teams in the internet freedom space.

A
So we do a little bit of a drive on these messages about how to implement metrics that people can understand and want to be part of. I think we have started at the small scale, working small, creating a new idea, thinking about metrics in a new way, and that, I think, is the key insight for Clean Insights. Now, on this last slide here, I'd like to talk a little bit, without getting into a lot of detail, about the commonalities and differences between Clean Insights, which is again a now four- or five-year-old project, and what's going on in privacy-preserving measurement at the IETF today. Our affinity is that we did identify that proxying was going to be necessary to protect the identity of the user.
A
So we have in our proxy modestly different functionality from what exists in the work going on now, or tremendously different, in that ours is quite lightweight and we have a client-server balance in who is implementing privacy.

A
We also like this idea of starting with the questions and the decisions, rather than just dumping all the data in raw format onto some server entity: this idea that there's a collection phase and an analysis phase that happen before human eyes are on the data, and then the idea of neutralizing the toxicity in data sets at rest once they do get to the server.
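The "neutralizing toxicity" step could look something like this sketch. The field names are invented for illustration, and the generalizations shown (a /24 network instead of a full IPv4 address, a coarse platform label instead of a full user-agent string) are examples of de-resolution, not what any particular deployment does:

```python
def neutralize(record):
    """Generalize a raw event record before it is stored at rest.

    Keep only the resolution the analysis question needs; everything
    identifying is either coarsened or dropped on the floor.
    """
    out = {}
    if "ip" in record:
        # Keep the /24 network, discard the host part of the address.
        octets = record["ip"].split(".")
        out["network"] = ".".join(octets[:3]) + ".0/24"
    if "user_agent" in record:
        # Collapse the full user-agent string to a coarse platform label.
        ua = record["user_agent"]
        out["platform"] = "mobile" if "Mobile" in ua else "desktop"
    out["event"] = record.get("event", "unknown")
    return out
```

Because the function builds a fresh dictionary rather than editing the input, anything it does not explicitly carry over (the raw IP, the raw user agent) never reaches storage.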
A
Importantly, I think what we sacrifice is that we are placing some trust in both the collector and the implementers, in terms of knowing what's toxic and being careful about what's toxic; there's a best-practices way to think about that, rather than a structured protocol or tool set that implements those things in a hard-and-fast manner. And lastly, we do continue to be concerned about the one-time visit, the one-time use, those kinds of problems that occur and that really are problematic in our area.
A
Okay. Lastly, I've just put up some links here that you can look at. I think the consent guide is a really interesting thing to learn from; it's one way to view Clean Insights.

A
In fact, consent is the key piece. We also have the code out there, and the impact reports if you'd like to look at those. We're at the Guardian Project, and we really appreciate our friends over at Matomo also looking at this problem from a server and analytics-package standpoint. So I'd be really happy to take your questions at this point.
B
Thank you, David. Are there any questions? And thank you for inventing a new word in the middle of your presentation; we have "de-resolutionizing" to add to the dictionary now.
A
You know, I also thought that "consentful" might be claimed as a new word too, but recently I've seen that elsewhere, so I guess we didn't make that one up.
B
Stephen, please go ahead. Yep.
F
Hi David, Stephen Farrell. Yeah, thanks for that; that's interesting stuff. I was just wondering how you kind of treat something analogous to the right to be forgotten, or when people change their minds about consent. Is that something that you've looked at as part of this project?
A
We haven't thought about that as such, but the other answer, which I think is maybe more in line, or a way that gives us a tool, is the idea of the campaign. We do talk about opt-out and being respectful of people who don't want to be part of a campaign, making it obvious that you're asking, and allowing them to say no without, you know, decrementing the function of the application totally if they do say no. So I don't think we have a right-to-be-forgotten-like answer for "I said yes once, but now I'm not saying yes."
G
You know that if you click "I do not consent", it's going to be at least 400 clicks before you can see the website. Wouldn't it be better to push for something like "rejecting all tracking must be at least one fewer click than accepting"? Then you'd find all these websites where "reject all" is the big red button, like, yeah, everyone wants that.
A
I'm smiling about that; that's a great point. There are some in the sort of group that this represents who think that consent has been diluted so badly by the current regime that it's basically impossible, for the reasons that Stephen mentioned and for the reason of the contract-of-adhesion idea: that the GDPR was initially our friend and now might have become our enemy. There are a lot of issues with the fact that the level of forcing you to give consent can be either hardcore or subtle, but it's there eventually. So I don't think we know how to answer that question at a macro level.
A
What we've tried to do is put tools in place that allow the people who want this to do it. And by the way, in the mobile area we're looking an awful lot at small-scale applications. We have an app out of the Guardian Project called Orbot, with several million users, but most everything else is below that, and the people who are interested in that are also a subset.
C
Hi, Konrad Kohbrok. Are there any plans to reduce the trust in the collector using cryptography? I understand there are a couple of projects that are working on protocols that try to do that. Is there anything on the agenda for your project?
A
I would say not yet, but part of the reason I attended yesterday's presentation on privacy-preserving measurement, and have been interested in it since it came up here, is that we do have a number of alternatives for increasing the privacy of what we're doing via encryption, for sure. The Oblivious HTTP aspect is one thing, but there are also a number of items possible by implementing more algorithmic privacy measures on the client, as well as on the intermediary, the Clean Insights proxy. But I don't think I can adequately speak to things directly on that item or on the list; we're just mapping and looking, and making sure our architecture can fit those things as they come along.

A
Differential privacy ideas were thought of early, and we have sort of in-spirit versions of that now, done as part of the toolkit in the client. But DP has a very specific meaning, and so I hesitate to use that term in a generic way; I think that's a specific thing that we don't have implemented yet.
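For reference, the "very specific meaning" of DP that the speaker is careful about is the calibrated-noise mechanism. A textbook epsilon-DP count with Laplace noise looks like this; it is an illustration of the formal definition, not something Clean Insights ships:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, epsilon, rng):
    """Counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon gives the
    epsilon-DP guarantee.
    """
    true_count = sum(1 for v in values if v)
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

This makes the speaker's caution concrete: "in-spirit" generalization and coarsening reduce identifiability, but only calibrated noise like this carries the formal epsilon guarantee.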
B
Thank you, David, for the presentation, and thanks for all the questions. We'll move on to the next presentation now. Sofia, could you please try and show your slides?
H
Okay, they are up. Hi everybody, my name is Sofia Celi and I'm joined today by Daniel Jones. Today we're going to be talking about a paper we published some weeks ago about finding certain practically exploitable cryptographic vulnerabilities in Matrix. You can find the paper online, and this is joint work with Martin Albrecht and Benjamin Dowling.
H
We started looking at Matrix because we wanted to do a formal analysis and a formal model of Matrix, and in the process of looking at the specification and implementation, we found several practical vulnerabilities, which turned into this paper. But just to give a little bit of context for the people in the room that don't know about it: Matrix is a standard for secure, decentralized, real-time messaging, and it basically aims to do what SMTP has been doing for email.
H
One of the biggest things to think about in Matrix is that it aims to do secure communication in the face of untrusted servers, which they call homeservers, in a federated way. It's very important to keep this untrusted server in mind, because it will be our threat model when looking at all of these attacks. And because these servers are untrusted, Matrix also enables end-to-end encryption by default on its channels.
H
The flagship client of Matrix, the one people are most often familiar with, is the Element client; that's how people usually approach Matrix. But the protocol itself is called Matrix and has been implemented in other applications and clients besides Element.
H
So why should we care about these vulnerabilities and, in general, about doing a formal analysis of this protocol? Because Matrix and its flagship client Element are widely used. The Element website, for example, reports that they have over 60 million users; the French and German governments are currently using it for internal and other communications; and Mozilla and KDE announced plans in 2019 to use it. So it's a widely used protocol.
H
As I said, this is a secure messenger, which means that they claim to provide certain end-to-end encryption properties such as confidentiality, integrity, and authentication. They also claim to provide a certain form of forward secrecy, which I think they call partial forward secrecy, post-compromise security, and a form of deniability.
H
Okay, just to give you a little bit of an overview of how cryptography is used in Matrix: you have several parties; you have a user, which is often represented by a client, and this user has many devices, for example a laptop, a phone, or something like that, and all of them communicate with each other through the relaying of messages via the homeserver.
H
That's the untrusted homeserver that I already talked about. This homeserver is also used to store the communication history and account information of the user and, furthermore, to provide a device identifier for the several devices that a user holds. In order to achieve authentication, they use a certain cryptographic identity: there's a master cryptographic identity, called the master key, which is used to provide authentication for the devices that a user holds and also to cross-sign the devices that other users hold.
H
So you can use all of these identities, for example if you're Alice, to also verify the cryptographic identities of the devices that Bob holds. In turn, each device also has its own cryptographic identity, so you have per-user identity and also per-device identity, and the device identity is the one that is usually used for the key establishment and the encryption channels that are going to be used by Matrix.

H
Those channels are called Olm and Megolm; those are the cryptographic protocols in use, and I'm going to define them a little bit later.
H
If you want to see how the process of verification actually works: you have this master public key, which in turn signs a self-signing key and also a user-signing key. The self-signing key is the one that is going to be used to verify the per-device keys, and the user-signing key is the one that verifies the list of devices of the other users. So there's this complex chain in which the different keys cross-verify each other.
H
Okay, as I said, they use Olm and Megolm as the underlying cryptographic mechanisms. So let's explain first what Olm is. Olm is basically a way to establish a secure channel between two peers, for example one device with another, and the reason they do this is so that they can establish a secure channel in which to share the key material that is going to be used to encrypt messages during the group chat.
H
They use an old version of 3DH, because right now Signal uses another version, which is called X3DH; they use the old 3DH, claiming to get certain deniability properties. And, as I said, they also have the channel called Megolm, which is basically the group channel: once you have established an Olm channel, a pairwise channel, with each one of the devices, you use these pairwise channels to share the group key information that is going to be used to encrypt the group messages.

H
Megolm also has some ratcheting properties that are different, not very different, but different from how Signal does the double ratchet algorithm, and the Megolm properties have not been formally analyzed by the community, so it remains to be seen what kind of established properties Megolm actually has. And here are just a few diagrams in case you want to look a little bit more.
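The pattern just described, pairwise Olm-like channels used once to distribute a single group key that then encrypts every room message, can be caricatured as follows. This is a toy model only: the "cipher" is a hash-based keystream for illustration, and real Megolm is a ratcheting, AES-based construction:

```python
import hashlib
import os

class ToyGroupSession:
    """Toy model of the Megolm pattern: share one group key over
    per-device pairwise channels, then encrypt each room message
    once with that key (instead of once per recipient)."""

    def __init__(self):
        self.group_key = os.urandom(32)
        self.counter = 0  # stands in for Megolm's ratchet index

    def share_key(self, pairwise_channels):
        # One copy of the group key per recipient device, each sent
        # through that device's own secure pairwise (Olm-like) channel.
        return {dev: send(self.group_key) for dev, send in pairwise_channels.items()}

    def encrypt(self, plaintext: bytes) -> bytes:
        # Keystream-by-hash: purely illustrative, not real cryptography.
        stream = hashlib.sha256(self.group_key + self.counter.to_bytes(8, "big")).digest()
        self.counter += 1
        return bytes(p ^ s for p, s in zip(plaintext, stream))

def toy_decrypt(group_key: bytes, counter: int, ciphertext: bytes) -> bytes:
    stream = hashlib.sha256(group_key + counter.to_bytes(8, "big")).digest()
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

The design point this illustrates is efficiency: the expensive pairwise channels are used only to move the group key, and each room message is encrypted once, regardless of how many devices are in the room.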
H
So let's go to the meat of the talk itself, which is the attacks. I'm going to define the first attack, and then we'll be explaining the rest of the attacks. The first attack we found, we call it the attack in which the homeserver has control of the users and the list of devices.
H
Why this happens is because, as part of a group chat, you don't only send user messages; you also send group membership messages, for example that a user wants to join, or that a user should be removed or modified somewhere. The problem is that none of these group membership messages are encrypted, checked for integrity, or cryptographically authenticated, which means that a malicious homeserver can indeed inject any user into the room.
H
Those injected users will be able to decrypt any future messages that are sent in the channel. Why do we think this attack happens in practice? Because we think there was an assumption from the protocol designers, but this is just speculation, that the only thing that needs to be protected is the user messages themselves, but not these room membership messages.

H
So maybe there was this assumption, and in general, sometimes there are practical issues when actually trying to encrypt and authenticate these other, non-user messages.
H
A second type of this attack is one in which, instead of adding a new user, the homeserver adds a new device to your list of devices. As I already said at the beginning, you use cryptographic identities to verify each one of the devices that you own; so, for example, you verify that it is indeed you on the phone and indeed you on the laptop. But this cryptographic verification is separate from the list of devices that the homeserver maintains.
H
This list of devices that the homeserver maintains is not authenticated and not encrypted; it's just a list that the homeserver maintains. So the homeserver can also inject a new device into this list, and again, this device will be able to decrypt any future messages that are sent through the room.
H
We think this happened for the same reason that we already gave for the first attack, and the way to actually solve these attacks is probably to use a list that is cryptographically authenticated and properly checked, not just a plaintext list that the homeserver controls. And in fact, we already have such a list, because of the cross-signing procedure that I was just describing.
H
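The fix the speakers describe — trusting only device-list entries that are authenticated by a key the user controls — can be sketched as follows. This is a hedged illustration, not the actual Matrix implementation: HMAC-SHA256 stands in for the real Ed25519 device signatures, and all names (`sign_device`, `trusted_devices`, the key and device IDs) are illustrative.

```python
import hmac
import hashlib

def sign_device(self_signing_key: bytes, device_id: str, device_key: str) -> str:
    # Stand-in for a cross-signed Ed25519 device signature.
    msg = f"{device_id}|{device_key}".encode()
    return hmac.new(self_signing_key, msg, hashlib.sha256).hexdigest()

def trusted_devices(server_list: list, self_signing_key: bytes) -> list:
    """Filter the plaintext, server-controlled device list down to entries
    carrying a valid signature; a device injected by the homeserver cannot
    produce one."""
    trusted = []
    for dev in server_list:
        expected = sign_device(self_signing_key, dev["device_id"], dev["key"])
        if hmac.compare_digest(expected, dev.get("signature", "")):
            trusted.append(dev["device_id"])
    return trusted

ssk = b"alice-self-signing-key"
devices = [
    {"device_id": "PHONE", "key": "k1",
     "signature": sign_device(ssk, "PHONE", "k1")},  # legitimately signed
    {"device_id": "EVIL", "key": "k2"},              # injected by the homeserver
]
print(trusted_devices(devices, ssk))  # ['PHONE']
```

The point is only that the trust decision moves from "is it in the server's list?" to "does it carry a signature the server cannot forge?".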
B
Okay, we can't hear you, Dan. Do you want to just try resetting your audio?
I
Ready? Yes, yes — sorry about that. So the next few attacks we're going to go through vary a little bit in the extent to which they are attacks against the protocol design or against the implementation, so apologies if we muddle those up. For this first attack: when two parties want to ensure that their connection hasn't been tampered with — that there isn't a Mallory-in-the-middle attack — they can do out-of-band authentication, and Matrix provides a couple of protocols to do this. The one we're going to talk about now is the short authentication string protocol. Very briefly, the idea is that the two parties do a key exchange to generate some kind of shared secret, and then they compare that shared secret out of band — one of the nice touches of this protocol is that they use short strings or emojis to do the comparison. Provided the strings do match, they then send their real, correct cryptographic identities to each other over a secure channel that they've constructed using the shared secret. This works really well in Matrix, but our attack targets that final stage: even with that nice secure channel, the homeserver can trick devices into sharing an identity that the homeserver controls. Could you go to the next slide? Thanks. So, how does the homeserver do this trick?
I
Well, first I'm going to explain the two types of verification in Matrix. Sometimes you want to verify the identity of another user, so two users do some out-of-band verification together; but sometimes two of your own devices also need to do a verification, and they use the same protocol for this. In step three of that protocol, where they exchange their cryptographic identities, they send a key identifier, and they use the same field for both of these types of verification. When you're verifying other users, you send a fingerprint of your master cross-signing key — that's the MPK field — but when you're verifying devices, you instead use the device identifier, and the problem is that the device identifier is actually controlled by the homeserver.
I
So, if you go to the next slide, please — thanks. The homeserver can do this trick where it generates its own master cross-signing key — its own cryptographic identity for a user — and assigns its fingerprint as the device identifier. Then, at some point, when two devices perform this out-of-band verification, the device sending the message will send a key identifier that it thinks is a device identifier, but the device that receives it will interpret it as a master cross-signing key fingerprint. This means they'll sign that user identity, and you've tricked them into trusting a user identity that's actually controlled by the homeserver. From that point onwards, you can mount an active Mallory-in-the-middle attack.
I
Next slide, please — thanks. So, what caused this attack? It was effectively just a lack of domain separation between these key identifiers and the device identifiers, which were used in the same place. In general, I think it would be really great in the future to avoid using server-controlled inputs in these kinds of out-of-band verification protocols, because it just helps with cleanliness.
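The missing domain separation can be sketched like this: tag each identifier with its type, so a device ID sent during device verification can never be reinterpreted as a master cross-signing key fingerprint. This is a hypothetical illustration, not the actual Matrix wire format; the kind names and functions are invented for the example.

```python
def encode_key_id(kind: str, value: str) -> str:
    # Tag the identifier with its type before it goes on the wire.
    if kind not in ("device", "master"):
        raise ValueError(f"unknown identifier kind: {kind}")
    return f"{kind}:{value}"

def decode_key_id(encoded: str, expected_kind: str) -> str:
    # A confused peer (or a malicious homeserver) now fails closed.
    kind, sep, value = encoded.partition(":")
    if not sep or kind != expected_kind:
        raise ValueError(f"got {kind!r} where {expected_kind!r} was expected")
    return value

tagged = encode_key_id("device", "DEVICEID123")
print(decode_key_id(tagged, "device"))  # DEVICEID123
try:
    decode_key_id(tagged, "master")     # the confusion attack path now fails
except ValueError as err:
    print("rejected:", err)
```

With the tag in place, the receiver can no longer silently treat a device identifier as a cross-signing key fingerprint.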
I
On to the next slide, please — thanks. Cool. The next attack we're going to talk about we've called the semi-trusted impersonation attack. The idea is that when a user adds a new device, they'd like that device to be able to decrypt messages previously sent to that user, and there's this thing called the key request protocol that Matrix provides to do that. Here on this slide, Alice's first device, on the left, receives a message it can't decrypt, so it sends out another message asking for the decryption key. Alice's second device sees this request and checks a couple of things: is the requesting device a verified device from the same user as me, or have I already sent this decryption key before? If it passes those checks, the second device will encrypt — over an Olm channel — what's called a forwarded room key message, and send the decryption key to Alice's first device. As part of that message, it includes the identifier of the device that is the claimed owner of that key, and for that reason it's really important that, on the receiving side, you check: do I trust this device
I
that sent it to me? This is where the checks were missing. If you move to the next slide, please — thanks. What a homeserver can do is create its own device, generate its own room session — its own Megolm session — and just forcibly send a forwarded room key message to Alice's device, pretending to be, say, Bob's device, and Alice's device would simply accept it.
I
So this kind of attack worked in practice, but it is a little bit weak, because all keys that arrive via these forwarded room key messages get flagged as semi-trusted. You can impersonate on the Megolm channel, but only in a way that's flagged in the user interface. On to the next slide — thanks.
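A more prescriptive receiver-side policy for forwarded room keys — of the kind the speaker argues the specification should mandate — might look like the following sketch. This is hypothetical, not the actual client code: drop unsolicited keys, drop keys from unverified forwarders, and even then mark the resulting session only as semi-trusted. All field names are illustrative.

```python
def accept_forwarded_key(msg: dict, pending_requests: set,
                         verified_devices: set):
    """Return a session record for an acceptable forwarded room key,
    or None if the message should be dropped."""
    if msg["session_id"] not in pending_requests:
        return None  # we never asked for this key: drop it
    if msg["sender_device"] not in verified_devices:
        return None  # forwarder is not a device we verified: drop it
    return {
        "session_id": msg["session_id"],
        "key": msg["key"],
        "trust": "semi-trusted",  # forwarded keys are never fully trusted
    }

pending = {"sess1"}
verified = {"ALICE2"}
ok = accept_forwarded_key(
    {"session_id": "sess1", "sender_device": "ALICE2", "key": "k"},
    pending, verified)
bad = accept_forwarded_key(
    {"session_id": "sess1", "sender_device": "HS-FAKE", "key": "k"},
    pending, verified)
print(ok["trust"], bad)  # semi-trusted None
```

The homeserver's forged forward fails both checks, and even a legitimate forward never escapes the semi-trusted flag.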
I
So what caused this? It was effectively just an implementation mistake, but as a general point — and this is my reading of the situation — the key request protocol was a little bit underspecified, in that it didn't say strongly enough what to do on the receiving side, and similarly what to trust; it leaves it up to the implementer when you should and shouldn't trust these forwarded room key messages. So I think a more prescriptive specification would have helped there. Next slide, please — thanks. Now, for the next attack, we're going to talk about a slight upgrade to this one.
I
When you have a Megolm session set up, you initialize an outbound session, an inbound session, and a signature that links them together, and you send these over the Olm channels as before. Knowing that we can do the semi-trusted impersonation attack over Megolm, the question is: can we use this?
I
If we can send these session setup messages over Megolm instead, we can upgrade our impersonation attack to one that isn't tagged as a forwarded key — and that's what our attack tries to do. If you go to the next slide, please, you can see what we try to do. We start off:
I
The homeserver is trying to impersonate Bob to Alice, and it starts by performing the semi-trusted impersonation attack. At this point, the adversary can send Megolm messages that verify as being from Bob's device. They can then generate a second Megolm session — we've tagged this g* — and send it using a normal session setup message.
I
So it's not a forwarded room key, but they do send it over Megolm — and this is where the protocol confusion, and the main implementation mistake, was. By sending it over Megolm, it passes all the checks that say "this is a verified message from Bob's device", and from this point onwards it gets tagged as a session you can use, but it's not tagged as a forwarded one or anything like that. So you get a slight upgrade. Cool — on to the next slide, please. Thanks.
I
Then the next attack — the final one we're going to cover in this presentation — is a confidentiality break that extends the issues we just talked about by looking at two new sub-protocols. The Megolm key backup protocol allows inbound Megolm sessions — essentially the decryption keys — to be backed up on homeservers, where they're encrypted using a recovery key that's shared between the user's devices.
I
The second sub-protocol we're going to use is the Secure Storage and Secret Sharing protocol, which allows users to back up account-level secrets and share them between their devices — things like cross-signing keys, and in fact the recovery key used for Megolm key backups gets synchronized using the Secure Storage and Secret Sharing protocol.
I
In this attack, we use the protocol confusion and the semi-trusted impersonation attack from before, but we impersonate a trusted device and use the secret sharing functionality to force the target device into using a Megolm key backup whose recovery key is actually under the homeserver's control.
I
So we impersonate a trusted device and send our recovery key, and then what happens is that the target device starts uploading all of its decryption keys to the homeserver, encrypted — but encrypted with a key that's under the homeserver's control. At that point, you can decrypt all the messages that this target device has access to. Cool — then if you skip forward again — sorry — thanks, yeah. So, what caused this? Well, again:
I
There are ways this could perhaps have been avoided, or at least discouraged, by the specification. The spec pushes quite a bit for pluggable, replaceable encryption algorithms, and that's reflected in the code architecture; but different algorithms have different security properties that change how they can be used securely.
I
So maybe a more prescriptive and less flexible specification might have helped avoid this kind of code issue — but yes, it's definitely an implementation problem. Over to you — thanks, Sofia.
H
Okay, so just to give you a little bit of a conclusion: what are the lessons we learned after looking at all of these vulnerabilities? First and foremost, they are practically exploitable — we even have a proof of concept showing that they can indeed be exploited in practice. We don't know to what extent they are exploited in the wild. Some of them have been fixed in the Matrix specification, some of them will be fixed in the future, and some of them might not be fixed.
H
It's not an easy task. And the other non-trivial task when thinking about group messaging is to work out what kind of properties you should have; at least from the Matrix specification, it's sometimes unclear what specific properties they aim to provide. So, what is the conclusion of this?
H
That we actually need a formal model and analysis of this protocol, and perhaps we should also take inspiration from protocols such as TLS and MLS, where there was a collaboration between the academic community, the standardization body, and the formal verification community to produce protocols designed with a good threat and adversary model in mind, with the specific properties they wanted to provide, and with a formal proof that they indeed achieve said properties. So we probably want a correct secure group messaging protocol that is interoperable as well.
B
Thank you — thank you both for an excellent and very sobering presentation. I think you have certainly answered the question "secure group messaging: what could possibly go wrong?", if we didn't already know the answer.
B
Okay, we do. Stephen, you're up first.
F
Hey — Stephen Farrell. Really good work; thanks for doing it and reporting out. I just had a question: did you spend any time looking at the federation between homeservers, and are there any issues there that you found, or that might be interesting?
I
We essentially modeled all the homeservers as a single entity — almost like one single untrusted entity — but it would probably be good to look at that. I think there is a cool paper that looks at the synchronization algorithm — the state sync algorithm — but I can't remember its name.
J
Yeah — Raphael Robert. So my question is in the same vein as Stephen's: do you plan a sequel?
I
You got it — thank you, yeah. We're working on some formal modeling and those kinds of things, so the natural follow-up would be to try to show that, now that Matrix have fixed the vast majority of these issues — and they did it really fast, and they were super responsive — we can get to a place where we believe it's nice and secure.
H
Something else we have been thinking about is understanding what kind of deniability properties Matrix aims to provide. It was a core aim of the Matrix protocol to provide deniability, but that has not been formally modeled either, so we will also be looking at that.
K
First time at the mic — well, thank you for doing this. First of all, I want to check if I got it right: are these the same vulnerabilities that were already disclosed in September, and I think mostly fixed already? For most of them, I think there's a disagreement on one. So, just to check whether these are new ones, or the same ones that were already disclosed. And the other thing I wanted to say — I don't know if you were there this morning.
H
Yes, these are the same vulnerabilities that we disclosed; there is nothing new in this presentation beyond what we already disclosed. I also attended the MIMI working group, and I do hope that there we apply the same lessons learned that we have from TLS 1.3: actually making a group that has a collaboration between the academic community, formal methods, and the standardization body, to put out something that's really safe and good.
G
Not a question — just, with respect to the formal analysis points on the last slide: there's a side meeting in Richmond 6 tomorrow on doing formal analysis.
B
Some of you may have seen the brief presentation that was given yesterday — I believe it was concentrating on censorship in Iran — and this is a much more general talk about measurements of internet censorship globally. Please go ahead, Simone.
L
Thank you. It seems you can hear me — hello. Okay, first of all, thanks for inviting me; I'm very happy to be here. I want to give you an overview of what OONI is doing for measuring censorship, and I would like to focus a little bit more on measurements of encrypted protocols, and also on the more experimental things we are doing, as opposed to the mainline measurements. So, let's say a few words about OONI.
L
It's a free software project; it started in 2012. The idea is to provide people with tools that they can install on their phones and on their computers, and use those tools to measure internet censorship and the like. Over time we have collected more than a billion network measurements in more than 200 countries.
L
The flagship experiment is the one for websites, where basically there is a list that the probe is going to measure — a list of websites which depends on the country you are in — or, if you want, you can choose custom websites to measure, and this will be part of what I will be talking about in this presentation.
L
I will not focus much on instant messaging; I will just say that we have specific tests for instant messaging apps, where we measure endpoints that matter to those apps. As for circumvention — again, I will not be focusing on it here, but briefly, and I think it's also related to previous conversations I've seen in the chat: we integrate Psiphon, which is a circumvention tool, and our test bootstraps Psiphon and tells you how long that takes.
L
We do the same for Tor — the vanilla version, without any pluggable transports — and we also integrate Snowflake, so we can tell you how long it takes to bootstrap Tor plus Snowflake. Then we also have other kinds of tests, for example performance tests and middlebox tests. While those are the tests that are part of the application proper, there are also a few experimental tests that do not run all the time, or only run on demand.
L
Okay, so now I want to spend a little time explaining the principles with which we measure, and I will use measuring the web as the example here, though measuring encrypted DNS is similar as a concept. Not all the experiments that we have follow these principles — it took us experience to understand them, so only the most recent ones follow them — but those are the ones that we either use the most or plan to use the most.
L
The first part is that we want to provide the probe with websites to measure — that is, a URL — but providing the URL by itself is not enough. We need to know more about that URL; it's important to know in advance
L
what the known-good IP addresses for that domain are, which helps us find more of the ways in which a website could be censored. We also have ways to know, at more or less the same time that the probe is performing a measurement, whether the website is expected to be up and working as intended. So that's the first bubble, basically: the target to measure, plus contextual information that helps with measuring.
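The "known-good IPs" idea above can be sketched minimally as follows. This is an illustration of the concept, not OONI's actual code: the domain and the address are illustrative only, and in practice an unknown answer must be confirmed (e.g. by a TLS handshake for the right name) before concluding anything, since CDNs legitimately return many different addresses.

```python
# Known-good addresses shipped to the probe ahead of time (illustrative data).
KNOWN_GOOD = {"example.org": {"93.184.216.34"}}

def check_resolution(domain: str, resolved: list) -> str:
    """Compare the local resolver's answer against the known-good set."""
    good = KNOWN_GOOD.get(domain, set())
    if not resolved:
        return "anomaly: resolution failed"
    if set(resolved) & good:
        return "consistent"
    # Not necessarily censorship: confirm via TLS before trusting.
    return "unknown: confirm via TLS before trusting"

print(check_resolution("example.org", ["93.184.216.34"]))  # consistent
print(check_resolution("example.org", []))                 # anomaly: resolution failed
print(check_resolution("example.org", ["10.10.34.35"]))    # unknown: confirm via TLS before trusting
```

The key benefit, as described later in the talk, is that even when local DNS is poisoned, the known-good addresses can still be probed directly.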
L
Then there is the first act of measuring, which for the web is a DNS lookup. In most cases, and historically, we tended to use getaddrinfo, and the reason for that is that in many countries — for example in Italy, where I am right now, and in many European countries — most censorship, or historically most censorship, has been implemented only by the resolver of the ISP. That's the historical reason why we use getaddrinfo; we don't always know what the resolver is.
Sometimes
it's
easier
than
other
times
to
discover
it,
and
so
that's
a
way
to
to
use
that.
But
more
recently
we
started
to
add
in
parallel
another
resolver,
and
this
one
is
using
the
DNS
unencrypted
over
53
EDP,
and
the
reason
for
that
is
that
in
some
cases,
get
under
info
may
be
using
an
encrypted
channel.
L
So,
for
example,
we
know
that
with
systemly
you
can
use
dot,
etc,
etc,
and
then
what
the
additional
unencrypted
query
allows
us
to
know
whether
there
is
a
blanket
interception
or
tampering
with
DNS
requests,
which
seems
to
be
the
case
in
most
countries.
So
it's
quite
uncommon
that
you,
you
have
targeted
filtering
for
a
specific
server
endpoint
in
general.
It's
foreign
queries,
okay,
that
that
that
that
part
of
our
like
measurements,
give
us
IP
addresses
or
errors,
and
generally
errors
are
like
wait.
L
we didn't expect this, because in most cases we know that the website should resolve to addresses. In general, when we see something unexpected compared to what we know to be true, we call this an anomaly. For IP addresses, we are not sure whether they are good for the domain, but when we are measuring encrypted websites we can say they are good
L
if we can perform the TLS handshake and validate the certificate chain, because we bundle Mozilla's CA store — we hope there is no rogue CA in there, and there is certainly no extra CA that the user may have installed. Okay: once we have a set of IP addresses, which is the set that we already knew plus the ones discovered by the probe, what we do is basically construct endpoints from those addresses and then do whatever is required — so, for HTTPS,
L
that's mostly TCP connect, TLS handshake, and then trying to fetch the resource. Each of these operations could fail, and if we expected it not to fail and it fails, we are again in anomaly territory. So, those are the principles. Users run the probe, and then there is a backend, and the backend is actually very important to us.
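The step-by-step measurement and anomaly logic just described can be sketched as a small classifier. This is a rough illustration of the concept, not OONI's actual pipeline; the stage names and failure strings are invented for the example.

```python
# Each stage either succeeds (absent / None) or records a failure string.
STAGES = ("dns", "tcp_connect", "tls_handshake", "http_fetch")

def classify(failures: dict, control_says_up: bool) -> str:
    """Classify one measured endpoint: a failure on a step that the
    control/context data says should succeed becomes an anomaly."""
    for stage in STAGES:
        failure = failures.get(stage)
        if failure is not None:
            if control_says_up:
                return f"anomaly:{stage}:{failure}"
            return "failure:site-down"  # expected failure, not censorship
    return "ok"

print(classify({}, True))                            # ok
print(classify({"tls_handshake": "timeout"}, True))  # anomaly:tls_handshake:timeout
print(classify({"dns": "nxdomain"}, False))          # failure:site-down
```

Recording which stage failed, not just that something failed, is what later lets the reports distinguish DNS blocking from TCP blocking from TLS interference.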
L
Even though users do not see it, it organizes measurements and it has an API. Through the API, and through a website called OONI Explorer, researchers — including us — and users can fetch measurements; they can not only fetch individual measurements but also group measurements and try to make sense of them, for example by looking at charts that show trends. In general, /reports is where we publish reports on our website. And now that I think I have covered all the basics,
L
what I want to do is discuss a bunch of recent work that we did, but rather than giving a broad view, I want to zoom in on some aspects that I consider interesting about what we are working on and what we found.
L
The first report I want to focus on is the one about Iran — about the Mahsa Amini protests at the end of September of this year. There was already a lot of disruption in Iran before the protests, but something changed around the protests. For example, here we have a chart of DNS-over-HTTPS services: each row is a service, and the height of each bar in the chart
L
is the number of measurements collected on a specific day. Each bar is then divided into different colors, and each color is a class. "OK" means literally everything was okay. Then there is "anomaly" — I already told you what anomaly means: basically, we expected something to happen, but there was a network error we didn't expect. And "confirmed" in general means that there are known signatures of censorship and we found some of those signatures in the data; in the case of Iran, that mostly means bogons — bogon IP addresses that are known to be used to implement censorship.
L
Now, if you look at the trend of the colors here, you see that after the 21st, for most services, it changes from yellow (anomaly) to red (confirmed). What I wanted to do here, to go a bit more in depth, is show you what happened for the DoH service doh.dns.apple.com on the 24th. So yeah — next slide, cool, okay.
L
When I told you our principles for measuring, I told you that we provide the probes with known-good IP addresses. This means that, regardless of what happens with DNS, we can check whether we can establish connections with known-good IP addresses and see what happens there. This table shows that: the first column is the AS number.
L
The last column is the number of times something happened, and the other columns in the middle allow us to classify how many times we saw specific blocking patterns — DNS means DNS was blocked, TCP means TCP was blocked,
L
etc., and success means everything was okay. For the first row, for example, DNS always failed, and it always failed with one of the three bogons shown above. That basically means that if you just use the local DNS — either the getaddrinfo resolver or our UDP one — you will not be able to go further. But we provided the probe with addresses, and when we used those IP addresses, we always failed in the TLS handshake, with this signature: you send the Client Hello and then nothing happens.
L
Then, eventually, you time out because you're tired of waiting. Another network showed a similar pattern: again, DNS was blocked the same way; however, the known-good IP addresses that we provided to the probe failed in two distinct ways. Sometimes it was timing out during the TCP connect; for other — let's say luckier — IP addresses, you were instead able to establish a TCP connection, but then you were still timing out in the TLS handshake, and that happened 21 times. But five times, something quite interesting happened.
L
A few of those IP addresses that we provided were actually working — they were succeeding — and this suggests, and it's not the first time we have seen this in Iran, that it's not just "if the SNI of the Client Hello is such-and-such, you're blocked"; it's a bit more complicated than that. It also depends on the IP addresses that are being used, and it's dynamic — probably depending on what people do. That's speculation, but I would support it.
L
That's my mental model of what I have seen, basically: it depends on which IP addresses are used, and it seems to change. Okay, so that's it for this specific case. Another case was this study we did for Russia at the beginning of the war, and again, I'm not going to describe the whole case; I just want to flag something that I found quite interesting. At the beginning of the war in Russia and Ukraine,
L
what we saw in OONI in Russia — what many people saw — is that Twitter was very difficult to use, and so the conclusion was that it was throttled. Many people published about this, and we also published about this, and the way in which we saw it was through a form of tracking that works like this:
L
while we are doing the TLS handshake, we record the timestamp and the amount of bytes received so far, and then, by collecting this data, we are able to say, at the end of the handshake, regardless of the result, how much data we were able to fetch in how much time — so, basically, the speed. The handshake size is more or less the same across measurements, in the sense that you need to fetch the same data, like certificates, etc.
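The throttling signal just described can be sketched in a few lines: log (elapsed time, bytes received) events while the TLS handshake runs, then compute the implied speed. Since the handshake transfers roughly constant bytes for a given site, a sudden drop in this number across part of the user population suggests throttling rather than outright blocking. The numbers below are illustrative, not real measurements.

```python
def handshake_speed(samples) -> float:
    """samples: list of (elapsed_seconds, bytes_received_in_this_event).
    Returns the effective download speed over the whole handshake."""
    total_bytes = sum(n for _, n in samples)
    elapsed = max(t for t, _ in samples)
    return total_bytes / elapsed  # bytes per second

normal = handshake_speed([(0.05, 2400), (0.12, 3100)])  # fast handshake
slow = handshake_speed([(3.0, 2400), (9.5, 3100)])      # same bytes, much slower
print(normal > slow)  # True
```

Plotting this per-measurement speed over time is what revealed the two populations the speaker describes next.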
L
What we saw was that there were basically two populations: a population of users for whom the TLS handshake to Twitter ran at, let's say, the speed that we saw before this problem — at the beginning of the chart, the 24th of February — and then, suddenly, on the 26th, some users instead were having these very slow handshakes.
L
And then, if you go and look, you see that those measurements with the very slow handshakes were timing out in the handshake, or timing out while trying to fetch what we were trying to fetch, which was an image. So that was the way in which we detected the signature of throttling in our data.
L
It was not something that we planned for — we actually added this data collection part because we thought it was useful, but we didn't think about throttling, and it was fortunate that we did it. Now what we are trying to do is have something similar also while we're fetching the body, which will help us detect this form of throttling. And I guess
L
the message here is that the kind of throttling you would expect when you want to disrupt something is very, very heavy, because in a way you want to be below a certain bandwidth that makes it very difficult to use the application at the end of the day. Okay — the third aspect I want to focus on: so far, I have basically discussed only TCP-based measurements.
L
However, we have been starting to look into QUIC as well, and while the mainline website-measuring experiment does not contain code for QUIC, we have experimental tests that we have run in some cases that do, and we are looking forward to merging those into the mainline experiment. And so again, I am not going to discuss everything; I'll just try to discuss something that we are trying to look at.
L
What happens to the same website on HTTP/3? There is this diagram where, on the left, we have the clusters of failures — connection reset, TLS timeout during the handshake, success, and timeout while connecting over TCP — and those translate quite neatly to categories that matter for HTTP/3: for example, "other" (which, honestly, I don't remember what it is, but it was very rare), success over QUIC, and handshake timeout.
L
What we already knew — and it was somewhat confirmed by looking into this — is that in China it's very, very common to block IP addresses. We're not sure whether — and honestly we cannot say whether — it's IP addresses or endpoints, but certainly the endpoints that were blocked for TCP were also blocked for UDP: you see that this 24 percent fraction translates to a more or less equal fraction for QUIC.
L
Instead, the majority of the sites that were blocked with a connection reset, or that timed out during the handshake, were actually accessible over QUIC. Now, this is not recent — it's from four or five months ago, so maybe now it's not exactly the same; it could have changed — but at the time, that's what we were seeing.
L
Okay, so we were asking the question: is it really that they are blocking these endpoints, or what exactly is it? So we created this other small tool called quicping, and the idea of quicping is that we want to work around the fact that QUIC embeds the transport part and the TLS part in the same packet.
L
So we sent an initial QUIC packet of minimum size; the part that should have been TLS was random, but we set a version number that was — if I recall correctly — 0xbabababa in hexadecimal, so it was invalid. The idea is that we send this invalid packet and, because the version is invalid, we expect to receive something back; and at the time, at least, that seemed to work: we were receiving "pings" back from servers.
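The quicping probe just described can be reconstructed roughly as follows. This is a hedged sketch, not OONI's actual implementation: it builds a QUIC long header per the version-independent layout of RFC 8999, sets a version matching the 0x?a?a?a?a "grease" pattern that RFC 9000 reserves for forcing version negotiation, and pads with random bytes instead of a TLS Client Hello. A cooperating server should answer with a Version Negotiation packet; the function below only constructs the datagram, it does not send it.

```python
import os
import struct

def build_quicping(dcid: bytes, scid: bytes, size: int = 1200) -> bytes:
    """Build a minimally valid QUIC long-header datagram with an invalid
    version, padded with random bytes to a plausible Initial size."""
    header = bytes([0xC0])                   # long header form, fixed bit set
    header += struct.pack(">I", 0xBABABABA)  # invalid / grease version
    header += bytes([len(dcid)]) + dcid      # destination connection ID
    header += bytes([len(scid)]) + scid      # source connection ID
    return header + os.urandom(size - len(header))

pkt = build_quicping(os.urandom(8), os.urandom(8))
print(len(pkt), pkt[1:5].hex())  # 1200 babababa
```

Because the "TLS part" is just random padding, a reply to this packet tells you the UDP path and the QUIC endpoint are reachable independently of any TLS/SNI-based inspection.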
L
So we tried this in that context, and what we actually discovered is basically that — as far as we could tell at the time — the timeouts that we were seeing with HTTP/3 on the left were mostly timeouts for the ping as well. So it did not seem, at the time, that this was caused by inspection of the TLS part of QUIC. Though, again, QUIC is a new protocol and time flies, so maybe now it has changed; but at the time, we didn't notice this kind of potential interference. Okay.
L
Now, this is the last topic to touch upon: the dnscheck experiment. This experiment follows the philosophy I described at the beginning — the way in which we try to organize measurements — so there is nothing really new to say here, except that there is an experiment in OONI that is dedicated specifically to measuring DoT and DoH, and that experiment tries to follow this principle of providing the probes with IP addresses that are known to be good for the domain.
L
So we are happy that, even if we are getting censored locally, we have good IP addresses to try; and the other good news is that this experiment is now in OONI Probe, and, as I mentioned before, in OONI Probe
L
there is a section called "experimental" that you can run manually, or you can ask the OONI app to run automatically along with the other tests. So we have now started to get data about this, even though we have not looked actively into this data, because we are a small team and, yeah, recent months have been very bad in terms of censorship — I've always wanted to look into this, but I could not do much so far. What we can do, though, and what we are working towards doing right now, is:
L
we are trying to take the scripts that we use for writing reports and convert them into tools that everyone can use with any data. These charts here are produced by those new tools, and what I find quite cool about them is that they explode measurements: every experiment produces its own kind of JSON measurements, but these tools break them down, so every experiment that needs to do
L
Tcp,
connect
and
it's
in
the
same
way
and
tell
us
likewise,
so
they
will
create
those
tables
that
all
experiments
will
end
up
filling
TCP
tables
until
last
tables
Etc,
and
so
now
we
are
able
to
do
stuff.
Like
okay,
give
me
all
the
TLs
connections
that
we
have
for
this
domain,
regardless
of
the
experiment,
and
then,
of
course
those
are
the
ones
that
arrived
to
the
TCP
point.
So
you
need
to
have
TCP.
L
Let
us
be
worked
at
that
point,
but
yeah,
so
that
that's
something
we
are
working
to
do
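The idea of exploding heterogeneous measurements into shared tables can be sketched as follows (illustrative field and table names, not the real schema):

```python
def explode(measurements):
    """Break per-experiment JSON measurements down into common tables
    (TCP tables, TLS tables, ...) that every experiment fills."""
    tables = {"tcp_connect": [], "tls_handshake": []}
    for m in measurements:
        for op in m.get("operations", []):
            tables[op["table"]].append({
                "experiment": m["experiment"],
                "domain": m["domain"],
                "failure": op.get("failure"),  # None means success
            })
    return tables

def tls_for_domain(tables, domain):
    """All TLS handshakes for a domain, regardless of experiment."""
    return [r for r in tables["tls_handshake"] if r["domain"] == domain]
```

With this, a query like "give me all the TLS connections for this domain" spans every experiment at once.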
So here is data from dnscheck, and I tried to, like, yeah, basically the meaning of the rows is: "cloudflare DoH domain" means the URL that you would use for Cloudflare DoH, which is quite long, so it was not fitting in the chart, et cetera. And we tried to provide a view of what it looked like ten days ago or so for China, Iran, Kazakhstan, Qatar and Saudi Arabia, and that is just connections that are at the TLS stage, where they are trying to do the TLS handshake. And then another thing we are trying to improve on is that, rather than providing this yellow "anomaly", which is a bit like a container for anomalies, we say: okay, this is the kind of failure that we have. So it's more interesting, because you can directly see the fraction of connection resets or timeouts.
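Computing those fractions instead of one catch-all "anomaly" bucket is straightforward; a minimal sketch, assuming each result carries an OONI-style failure string (or None on success):

```python
from collections import Counter

def failure_fractions(results):
    """Fraction of each concrete failure type, so charts can show
    e.g. connection resets vs timeouts rather than a generic anomaly."""
    counts = Counter(r.get("failure") or "ok" for r in results)
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}
```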
Okay, I'm concluding. So the final thing I wanted to mention is that we are trying to improve, and I told you already some of the ways in which we are trying to improve: analyzing our data in a better way, and making this available to everyone as opposed to being just internal to us. Other stuff we are working on:
L
It's very important to make sure we are using, I would say, the Chrome browser's fingerprint, because currently our tool is in Go, and the fingerprint that we have is not the one that Chrome would use, and sometimes the Go fingerprint is associated with circumvention tools. So we want to remove confounding factors here; that's another thing we want to do. Then we want to integrate QUIC into the Web Connectivity experiment. That's it, I think, yeah.
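One common way to compare TLS stack fingerprints like this is a JA3-style digest over the ClientHello fields; the sketch below assumes already-parsed field lists rather than raw packet capture:

```python
import hashlib

def ja3(version, ciphers, extensions, curves, point_formats):
    """JA3-style fingerprint: join the ClientHello fields into a
    canonical string and hash it. Two stacks whose ClientHellos
    differ (e.g. Go's crypto/tls vs Chrome) get different digests."""
    fields = [str(version),
              "-".join(map(str, ciphers)),
              "-".join(map(str, extensions)),
              "-".join(map(str, curves)),
              "-".join(map(str, point_formats))]
    return hashlib.md5(",".join(fields).encode()).hexdigest()
```

A middlebox keying on such fingerprints would treat the two digests differently, which is exactly the confounding factor described above.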
L
Those
are
the
most
important
things
we
want
to
do
and
with
that
I
think
we
reached
the
undend
I'm
happy
to
take
any
questions.
N
You
I'm
Marco
Davis
I
work
for
sidn.
Thank
you
Simon
for
this
interesting
presentation.
I
have
a
question.
I
was
wondering,
are
you
interested
and
do
you
look
at
the
the
root
cause
of
any
blocking
of
internet
access?
For
example,
our
company's
website,
as
at
the
end.net,
is
blocked
from
Iran,
but
that's
not
because
the
Iranian
government
or
anything
has
decided.
That
is
because
Google
Cloud
platform
has
decided
to
block
traffic
from
Iran
to
their
customers,
which
we
are
one
of
them.
Yeah.
L
Yeah, so excellent point, excellent question. It is not something we can, as far as I know, measure directly with our tool, but indirectly we can, in the sense that, yes, we are aware of the problem. We have a community of users and they have raised this problem with us. So when we analyze and produce reports, we try to always keep that in mind, and my recollection is that the way in which it is blocked on Google is that you get a 403. So that's correct.
O
Hi
Alexey
Google
I
was
wondering
if
you're
planning
any
extensions
to
your
quick
ping
Tool
from
quickly
looking
at
it.
O
It
looks
like
you're
only
sending
like
the
version,
negotiation
aspects
and
one
of
the
things
that
I
believe
we've
seen
in
the
wild
is
that
there
are
some
middle
boxes
which
look
at
specific
offsets
because
they
don't
understand
quick
and
therefore,
if
they
are
looking
for
things
like
you
know,
Sni,
which
I
think
moved
between
draft
29
and
RFC,
then
they
end
up
blocking
because
they're
trying
to
allow
lists
on
things
in
the
public
handshake,
which
means
that
you're
I
believe
you're
a
quick
ping
tool.
O
L
M
Hello, thanks for the presentation, and thank you for all the work on OONI. If I'm blasting out the speakers, I can turn it down further. I was wondering if you have any insight, or if you can share any insight, into the decisions you make about what data to gather.
M
In particular, I was just testing OONI Probe and it identified some blockages which I'm not sure are correct, so I'm trying to figure that out. I'm looking at the data that it gathered, and I noticed that some of the tests will say, you know, "this has failed in this way", and other tests, like the SSL unknown-authority response, I believe actually send the peer certificates that were gathered, so you're actually building, or retrieving, a data set of those things. I'm wondering: how do you think about how much information you want to take? Some of it is useful for identifying technical root causes, and then some of it might actually cause problems for the users that you're gathering from. I wonder if you could just talk a little bit about how, you know—to reflect the first talk of this session, David Oliver's talk about how do we gather measurements in a responsible way—I'm assuming you guys have thought about that, and I'm wondering what, you know.
L
Quite the contrary: your traffic is traffic that may stand out, and therefore you should read our, basically, informed consent documentation, and consult with that. And then it's much nicer in the mobile apps, or in general in the desktop apps, because, yeah, it's not a command-line tool, so it tries to give you more. And also, another thing that we do is that, after we provide these bits of information, we ask the user questions like "is OONI Probe going to do x, y, z?", and they need to answer true or false, to demonstrate that they have understood what the tool can do.
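The true/false quiz can be sketched like this (illustrative wording; the app's actual questions differ):

```python
QUIZ = [
    # (statement shown to the user, correct answer)
    ("Anyone monitoring my network, such as my ISP, could see that "
     "I am running this probe.", True),
    ("This probe is a circumvention tool that will unblock websites "
     "for me.", False),
]

def grade(answers):
    """Return the statements the user got wrong, so the app can pop
    up and correct the misunderstanding before measurements run."""
    return [q for (q, correct), given in zip(QUIZ, answers)
            if given != correct]
```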
L
I
I,
don't
remember
by
Art
the
exact
questions,
but
something
like
a
question
could
be
like
if
you're
using
uniprobe,
your
ISP
could
see
that
you
are
using
unipro,
true
or
false,
and
if
the
user
says
false,
we
pop
up
and
say
no
look.
Actually
that's
not
the
case,
because
it
is
a
measurement
to
it's
not
like
that.
So
we
try
to
educate
the
users
and
we
try
to
have
documentation
about
about.
L
Your
community
does
at
different
levels
of
complexity,
and
we
also
have
I
mentioned
that
we
have
experimental
experiments
and
that's
part
of
why
we
have
experimental
experiments.
So
the
the
you
cannot
run
experimental
experiments
with
the
normal
client.
You
need
to
download
the
specific
client
which
is
not
super
advertised,
so
you
need
to
know
what
you're
doing
if
you
run
experimental
stuff,
which
means
you
are
self-selecting
yourself
already.
I
hope
this
answers
to
your
questions.
M
Yeah, thanks. I thought it was interesting to see the pop-quiz approach to trying to confirm that the user actually understands. That's it; I thought that was an interesting thing, especially given the earlier talk. Thanks, thanks.
B
That's everything for our meeting today. Thank you all for coming, and we hope to see you next time. Thank you.