From YouTube: IETF112-PEARG-20211111-1430
Description
PEARG meeting session at IETF112
2021/11/11 1430
https://datatracker.ietf.org/meeting/112/proceedings/
A
Perfect, thanks Watson and Nick, great. So we can begin. This is our agenda for our one-hour-long meeting, and a reminder that the session is recorded and the blue-sheet attendance is auto-generated.
A
Please keep video and audio off when you're not speaking or not recognized in the queue. To enter the queue, there should be a join-queue icon, which is kind of like an upraised hand. I'll pause for a second in case folks have any concerns about the agenda and whether we're going to change it. Also, just bear in mind that it's good to speak slowly in general, and I'll try to keep that in mind as well. If there are no revisions to the agenda, then Antoine, you're up.
B
Okay, so hello everyone, welcome to my presentation. Today I'm going to talk to the PEARG attendees about some state-of-the-art work that I've been doing as part of my job. My name is Antoine Fressancourt; I'm a research engineer at Huawei, and I'm going to put into perspective some work that has been presented in previous PEARG sessions to protect privacy, together with some work in academia, and see whether there is room for importing some ideas from academia into the IRTF and then the IETF. Okay.
B
So first, as a word of introduction: privacy protection has become a global consumer demand. It has been especially stressed by the fact that, with Covid, we are doing increasingly important stuff online.
B
So many web solution providers are addressing this need for privacy. For instance, Google has announced that it would drop support for third-party cookies and build a Privacy Sandbox in Chrome, and at the Worldwide Developer Conference in June 2021, Apple made very strong statements with regard to privacy.
B
The Brave browser is catching up, even though its market share is still very small. Yet even though many web browser providers are addressing privacy, people are coming up with ways to circumvent the privacy protections that they put in place. For instance, the pixel-tracking technique that has been popularized lately is putting consumers' privacy at risk. So we think that to go further with privacy protection, you need to eliminate other identity-linking identifiers and network-layer identifiers.
B
So if you look at privacy from a network perspective, privacy means the inability for a third-party observer to determine, from the observation of a packet, who the source and the destination are, and the inability for this observer to link the source and destination of this packet. Depending on the attacker against which someone is willing to protect their privacy, we may or may not rely on third parties; for instance, ISPs tend to protect the privacy of their customers.
B
The privacy protection that you want to add depends on the strength of the attacker. Is it a very powerful attacker, a script kiddie, someone with a lot of cryptographic computing power, someone who is stable in the network? And we also need to address how high the risk is that is associated with the communication being protected.
B
In general, since the revelations made by Snowden, the state-of-the-art attacker against which we are trying to protect in-network privacy is an attacker that is able to eavesdrop on all the links and some nodes, bearing in mind that protecting against a network consisting of only rogue nodes is impossible.
B
Some other approaches, mostly in the realm of academic solutions, adopt a built-in approach. Instead of trying to put some overlay on top of IP and work with IP, they rethink the way network protocols behave to protect the privacy of communications. Their main objective is to protect against a rather powerful attacker, like the state-of-the-art adversary that I described on the previous slide, and those approaches possibly try to avoid using a third party, so as to avoid placing too much trust in someone else.
B
So I will first go quickly through works that have been previously presented in PEARG, mainly industry projects. Google's Gnatcatcher, which has been presented, I think, at IETF 110, is a technology that combines two approaches.
B
It takes two sets of technologies. Near-path NAT allows a group of users located in about the same location to send their traffic through the same privatizing server, which operates as a near-path NAT; this near-path NAT effectively hides the IP address from the sites' hosts. And this near-path NAT is combined with Willful IP Blindness.
B
In fact, in Gnatcatcher, Google has included the possibility for a website, for legitimate purposes such as abuse prevention, to de-anonymize the sender of a packet or of a network flow. Then the second solution, which has gained lots of public traction recently, is Apple's Private Relay. In Apple's Private Relay, you use a chain of two proxies to ensure source-destination unlinkability.
B
Those proxies use a temporary public/private key pair that is provided by a Private Relay access token server. As for the private access token, it relies on technologies that have been presented earlier, I think in CFRG. And then the state of the art, the project to which everybody compares, is Tor. Tor is a very famous anonymity protection solution.
B
Those three relays form a circuit; circuits are identified by a circuit identifier, and each relay removes a peel of the onion, meaning a layer of encryption. After the third relay, the traffic is sent in the clear to the final destination it is intended for. Tor is renowned to be weak against traffic-analysis attacks, and some researchers have demonstrated collusion attacks: when you control both the entry point and the egress point of a circuit, then you can easily de-anonymize the sender of the TCP flow.
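As an illustration of the peeling described above: the client shares a key with each relay, wraps the payload in one encryption layer per relay, and each relay removes exactly one layer. This is a toy sketch using an XOR keystream purely for illustration; it is not Tor's actual cipher construction, and all names are made up.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the key by hashing a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key removes the layer.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def wrap_onion(payload: bytes, relay_keys: list) -> bytes:
    # The client adds one encryption layer per relay, innermost layer first,
    # so the first relay's layer ends up outermost.
    cell = payload
    for key in reversed(relay_keys):
        cell = xor_layer(key, cell)
    return cell

def peel(key: bytes, cell: bytes) -> bytes:
    # Each relay removes exactly one layer ("one peel of the onion").
    return xor_layer(key, cell)
```

After all three relays have peeled their layer, the plaintext emerges at the exit, which is exactly why an attacker controlling both the entry and exit of the circuit can correlate the flow.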
B
Now I will present some academic research efforts, and I will start with a project called PHI, the Path-Hidden Lightweight Anonymity Protocol at the Network Layer. This project is the last of a series of lightweight anonymity protocols. Lightweight anonymity protocols were designed to address some concerns with regard to available network privacy solutions, without inheriting the heavy cryptographic constraints of mix networks and other privacy-preserving messaging solutions.
B
The goal is to prevent an observer located on the path between the source and destination from being able to de-anonymize the source and destination, and this attacker, in PHI, is rather powerful, because we assume that the attacker has knowledge of the topology. So you need to hide the length of the AS path between source and destination and the position of the various ASes in this path.
B
In PHI, the payload encryption is bound to the path, to avoid the session-hijacking attacks that have been mounted against previous lightweight anonymity protocols. So I will explain very quickly how PHI works. You have a source S and a destination D. The source S wants to send a packet to D by using a helper node, the helper that is on the slide, and wants to build a path going from AS6 through AS3, AS1, AS2 and AS5 to AS9, shown in red on the picture.
B
To do that, first S will send a packet to the helper, and in sending this packet it will ask each AS on the path to place some information in, say, a path segment vector. The path segment vector will contain, for each AS, information regarding which was the ingress point and the egress point, the position of information about the previous AS on the path, and a signature over this information.
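A rough sketch of the path-segment-vector idea described above, with an HMAC standing in for each AS's signature. The field names and layout here are illustrative guesses for exposition, not PHI's actual wire format.

```python
import hashlib
import hmac

def as_entry(as_key: bytes, ingress: str, egress: str, prev_digest: bytes) -> dict:
    # Each AS records its ingress and egress points plus a digest of the
    # previous AS's entry, and authenticates the record with its own key
    # (an HMAC stands in for the signature in this sketch).
    body = f"{ingress}|{egress}".encode() + prev_digest
    return {"ingress": ingress, "egress": egress, "prev": prev_digest,
            "sig": hmac.new(as_key, body, hashlib.sha256).digest()}

def build_segment(hops):
    # hops: list of (as_key, ingress, egress) along the path toward the helper.
    vector, prev = [], b"\x00" * 32
    for as_key, ingress, egress in hops:
        entry = as_entry(as_key, ingress, egress, prev)
        vector.append(entry)
        prev = entry["sig"]  # chain each entry to the previous one
    return vector

def verify_segment(vector, keys) -> bool:
    # Verify every entry and the chaining between entries.
    prev = b"\x00" * 32
    for entry, key in zip(vector, keys):
        body = f"{entry['ingress']}|{entry['egress']}".encode() + entry["prev"]
        expected = hmac.new(key, body, hashlib.sha256).digest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["sig"], expected):
            return False
        prev = entry["sig"]
    return True
```

Chaining each entry to the previous one is what lets tampering with any single hop's record be detected later.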
B
At the end of the process, S and D have built a path segment containing anonymized source-routing information that can be used to send packets between them. Another project from academia that is really worth mentioning is Sphinx. Sphinx is the major mix-network project.
B
It's a project that was built in the realm of messaging systems. It consists of a chain of proxies, so-called mixes, which take messages from multiple senders, shuffle them, and send them out in random order to the next mix node, and through the chain of mix nodes you perform an anonymization procedure.
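The batch-and-shuffle behaviour of a single mix node described above can be sketched like this. It is a toy model only: real mixes such as Sphinx nodes also cryptographically transform each message so inputs and outputs cannot be matched by content.

```python
import random

class MixNode:
    """Toy mix: buffer messages from many senders, then flush in random order."""

    def __init__(self, batch_size: int, rng=None):
        self.batch_size = batch_size
        self.pool = []
        self.rng = rng or random.Random()

    def receive(self, message):
        # Messages accumulate until a full batch hides who sent what.
        self.pool.append(message)
        if len(self.pool) == self.batch_size:
            return self.flush()
        return None

    def flush(self):
        # Emit the whole batch in random order, breaking the link between
        # arrival order and departure order.
        batch = self.pool
        self.pool = []
        self.rng.shuffle(batch)
        return batch
```

Chaining several such nodes means an observer must compromise every node in the chain to trace a message end to end.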
B
Okay, so I will go very quickly. Sphinx is very strong.
B
Because it's very powerful, it has not been successfully attacked yet. It's very demanding in terms of cryptography, but the very interesting part is that it uses the source-routing approach and cryptography to protect the anonymity of routing information.
B
I will skip over this. While Sphinx is very hungry in terms of cryptography, some projects, like HORNET, have tried to reduce the need for public-key cryptography in Sphinx in order to get better performance.
B
One idea that is important, and that I wanted to push in this presentation, is that for now many solutions that are being deployed rely on trusted parties and the use of relay nodes, and I think that in this community we should look at projects from the academic realm and at the use of source-routing concepts to address privacy issues. This presentation is a call to
B
work on this alternative approach to building privacy at the network layer, using source routing rather than a trusted party. So if you are willing to work on how those approaches could be translated and adapted to the current internet, I would be very happy to hear from you and to get your comments.
B
Is there time for questions, or should we take them later on the list? Oh, we can do questions.
A
That's correct, yeah, we can take questions.
D
B
This is a very interesting approach. In fact, it has been investigated in an article called "Tor instead of IP", and one problem with that is that Tor has some shortcomings with regard to some attacks on anonymity. For instance, compared to Sphinx, Tor can reveal the kind of application that is running, because it doesn't do pacing of the packets, so it's not immune against traffic-analysis attacks, while some solutions at the network layer are.
D
I know that Tor does actually have an essentially fixed block size and quite a bit of padding in order to try to defeat these kinds of attacks.
B
For me, one aspect at the network layer is that you really want to have something that is very quick to execute from a cryptographic standpoint. So my aim in adapting the technologies that are used in the upper layers to the network layer would be to reduce as much as possible the use of public-key cryptography, because it induces delay and latency that is not compatible with the requirements of applications that are not delay-tolerant at the network layer.
F
This seems to require a significantly larger amount of work and traffic handling by ASes that wouldn't be on the path anyway and now would be, and you're also asking services that really haven't done a lot of work in the past to do a lot of computational work. Have you considered how this impacts deployability?
B
Just as a first comment, I don't want to claim attribution for the work I presented; it was just for awareness. It's work that has been done by people other than me, so I'm just presenting it here; I'm not the author of this work. But the way I understand your question is that, well, sure: if we put privacy in the network layer, routing the packets will take more effort for transit ASes and transit equipment.
B
But I think that this computing overhead is necessary if we want to provide privacy at the network layer. The aim of future work on ensuring privacy at the network layer would be to reduce this amount of computational overhead. Ensuring privacy is going to come at a cost, yes, but the goal would be to reduce this cost as much as possible.
G
Jonathan Hoyland, Cloudflare. Have you categorized these pieces of work in the framework that Das et al. set out? Basically, they say that there's a minimum: you have to either grow your traffic linearly in the number of messages, or add latency that's logarithmic in the number of users, before you can get any results, as a sort of statistical minimum.
G
So could you, have you, looked at each of those proposed things and said: oh, this one is adding bandwidth linear in the number of messages, and this one is adding logarithmic time, or whatever the trade-off between those is?
B
I think we started trying to do this categorization, not in terms of messages and overhead, but rather by trying to say, for a given protocol, which building blocks it uses to protect against this or that attacker, and trying to categorize them from a very analytical perspective. As there are a lot of different technologies to protect against the same kind of attack by a given attacker, this fell short. Then we considered having a sort of scoring to categorize the level of protection
B
that a solution would provide, because it's fuzzier, but allows saying: okay, this protection is level seven, this one is eight, so it's a bit better. With regard to volume of messages, overhead, etc., this comes back to a discussion on the mailing list yesterday about a graph that I had in the slides before, which was questioned, and I think we need, as a community, a more solid method to compare the solutions; I threw back-of-the-envelope guesses on the slide.
H
Hi Antoine, David here.
H
Privacy enthusiast. So thanks for the presentation and for the overview of the various techniques. I just wanted to somewhat respond to your call to action, as someone who is working on deploying an IP blinding solution to protect user privacy. I care about this a lot, and, to quote the great Inigo Montoya, I'm not sure that the term "network layer" means what you think it means, at the end of the day.
H
You know, whether we do this in the network or, as you're saying, with source routing: I mean, Tor and Private Relay are all forms of source routing at the end of the day, because it's the client deciding how the circuit is built, and just doing it at a different layer is not a real distinction. As for saying that you're doing it at the IP layer: for this to have any kind of privacy properties,
H
you're going to need encryption. So you can use IPsec or whatever your favorite IP-layer solution is, but all of them do the same thing: all of them start off with an asymmetric key exchange and then use symmetric keys per packet. So all of these distinctions that you've made through your presentation, I don't think they're correct. I would encourage you to look a bit closer at how all these things are built, because I think you'll realize that the layer at which something is implemented does not impact the security properties nor the performance properties.
H
At the end of the day, it's the choice of solution, the choice of the number of hops, of which hops you trust, and things like that, that matter. So thanks for looking into this and thanks for bringing these new things, but in terms of a call to action for the folks who are deploying this, I don't think these techniques are necessarily an improvement over what we have. All right, thank you.
B
Thank you for your comment. I think it comes back to the previous question, with regard to having proper analytics and a solid basis for comparison.
H
So why are we doing this? We were basically thinking about what we can do to further improve privacy, given that communications encryption and so on is largely handled. We could use this for various kinds of things, like infrastructure functions such as DNS, but also for other stuff; it goes beyond that, but I'm going to give DNS as one example.
H
So obviously today, mostly, the domain-name metadata is still visible on the wire, even with payload encryption of the various actual data paths, but this will change as we get more DNS query encryption and more encryption of the SNI and so forth in TLS. But even when we get to the full-encryption situation, resolvers still have the potential to see the user's browsing history, which is sort of an interesting data asset, and this also makes large resolver services an attractive target for various kinds of pressures and attacks.
H
And of course, there's been lots of work going on: I already mentioned all the protocol-encryption aspects, but there's also RFC 8932, which came out of DPRIVE, that talks about the practices that people are supposed to follow when running some of these DNS resolver services; Mozilla has their own requirements, and so on. Next slide, please. So, the context of this work: how do we...
H
And our general worry is that for many types of these infrastructure services, such as DNS, those services will become a major remaining source of leaks. It could be accidental leaks, could be attacks, could be commercial use, could be authorities requesting information from one or multiple parties. If you look at this from a very high-level perspective, then we need to protect users' data in flight, at rest, and in use. So we've been running this little experiment with my colleague, who is also on the meeting, to see: what could we do with confidential compute?
H
It's sort of a maybe novel idea: run the service for the users and do not collect the users' personal data. We actually do think that there are some incentives for doing this, and the damage, if there is such, from not getting data about the users is not that big, if you look at the details; we'll get to this in a bit. So basically, the desired security property that we try to establish here is having a security perimeter around the service that prevents easy collection of information.
H
This will not be perfect; we'll talk about some of the issues a bit later, but it should complicate the life of those who wish to collect data. Next slide, please. I just wanted to provide a very brief, high-level introduction to trusted execution environments. I realize that in the audience there are people who know far more about this than I do, so I'll keep it short.
H
But basically, trusted execution environments are enclaves, environments where we can run code and ensure that that code is not tampered with, and also that any data that code uses is not readable outside the environment and cannot be changed either. And, of course, simply running something inside a safe environment is not enough.
H
We also need to understand what constitutes an acceptable piece of software that we should be running. If you just run in a sort of secure environment but don't care about the software that is being run, that's of course totally ridiculous, because, as we know, software can do anything; basically, if you don't check, then you don't care what actually happens. But you can do this kind of check of what software is running, of what software image is being run.
H
So the basic setup that we did in our experiment was that clients acquire DNS information from a resolver, and they do these queries through encrypted query protocols such as DoH or DoT. This encrypted connection ends up inside the trusted execution environment, the encryption is terminated there, and all the information about what this particular client is asking about, what name is being resolved, stays inside the TEE. So not even the operator of the DNS server, or the cloud platform, can see
H
what's actually being asked; only the client knows about that. This code, this particular piece of software inside the TEE, knows about it, but it doesn't leak that information. Of course, as with most problems, DNS is very similar in that you can't do it entirely on your own; you sometimes have to ask other people for some other information. In this case, if you don't have a cache hit, then you have to go somewhere else, so there is an interaction with the outside world.
H
There's some danger that an observer looking at this system from outside would see: aha, the client sent a query, and oops, now there's another query sent right after that to this server, and that server seems to be providing an answer for example.com, and therefore probably this client was asking about that.
H
In our case, since you do have caching, you can only do that in some cases; and then, for instance, in our system we implemented timing obfuscation and some random background traffic that confuses people outside of the service about what's actually going on, what queries relate to what. Next slide, please.
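The timing obfuscation and background traffic mentioned here can be sketched roughly as follows. The parameter values and names are illustrative, not the experiment's actual implementation: real cache-miss lookups get a random delay, and dummy lookups for unrelated names are mixed in so an outside observer cannot line up upstream queries with the client queries that triggered them.

```python
import random

def schedule_upstream(real_queries, chaff_names, rng=None,
                      max_jitter=0.5, chaff_rate=0.3):
    """Return (delay_offset, name, is_chaff) tuples for upstream lookups.

    Each real query is delayed by a random offset, and each candidate chaff
    name is included with probability chaff_rate, also at a random offset.
    """
    rng = rng or random.Random()
    events = [(rng.uniform(0.0, max_jitter), name, False) for name in real_queries]
    for name in chaff_names:
        if rng.random() < chaff_rate:
            events.append((rng.uniform(0.0, max_jitter), name, True))
    # Emission order follows the random offsets, not arrival order.
    events.sort(key=lambda e: e[0])
    return events
```

A real deployment would tune jitter and chaff volume against the latency budget, since every millisecond of delay is visible to the end user.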
H
So the upside, clearly, is that we were able to hide all user-related information, and we actually argue that this is something we should generally strive to do if we can. It's a very useful thing to do. It's not the case that we should always leak all information that we get into some data storage that can be used against the users.
H
That's bad. We should try to limit the information that's actually leaked out of these processes, and do it in a technical manner, so that we can actually trust that protection somehow. And that relates to the other upside: it's not just "we trust a particular company, they are a very good company, they can do this,
H
well, I trust them"; we could actually get some evidence of their compliance, that trust-but-verify model. And then, I mentioned the incentives earlier: one potential incentive is that you could advertise that "our service doesn't actually leak your information, the user's browsing history, but actually keeps it private, and we can show that we're doing this", and that, I think, would be a significant advantage in marketing a service.
H
The other thing is that you might consider the lack of data from users a downside, but you could also see this in a more positive light: if you don't collect that, then that's actually the way it should be, and you can still get some valuable data. This kind of reminds me of the talks yesterday about privacy-preserving measurement and how you can provide some aggregate statistics rather than per-user information.
H
You can also perform more complicated tasks, like if you need to do some geolocation or whatever special thing. As long as you do it inside this trusted environment, where no information leaks outside, then you can basically still provide a lot of useful services for the end user, and even get some information that's relevant for your business, but not leak the actual end users' private data. Next slide, please.
H
And these are actually difficult questions; I don't claim that we have clear answers to all of this, but we did find that you can have some ways to address it. So, for instance, you can do some reporting that stays at the level of aggregate statistics: how loaded is this instance, what are the top two biggest sources of questions sent to this server?
H
What are the top two, or top ten, questions sent to it? And from those kinds of things you could determine a lot, both about scaling and errors and about possible attacks that are ongoing.
H
A pretty hard downside is perhaps dependencies: you'll be dependent on particular hardware. Not all CPUs can do this; as I was, or we were, building some of these prototypes, I realized that some of the computers that I have in my basement can do this, but others cannot, so I was maybe too cheap when I bought the CPUs. So it does require some particular hardware or CPUs, and you'll also be dependent on these manufacturers for verifying that this is indeed their CPU.
H
You might be dependent on somebody who can actually check the software, that this is a reasonable piece of software: "we looked at the source code, we were able to compile it and get the same hash, and it doesn't leak information."
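The reproducible-build check described here boils down to comparing the enclave's reported measurement against a hash the verifier computed independently from audited source. This is a minimal sketch of that idea only; real attestation, for example SGX's MRENCLAVE plus a signed quote from the manufacturer, involves considerably more.

```python
import hashlib

def measure(image: bytes) -> str:
    # The "measurement" is simply a hash of the software image that the
    # enclave would report at attestation time.
    return hashlib.sha256(image).hexdigest()

def attest(reported_measurement: str, trusted_measurements: set) -> bool:
    # The verifier accepts the enclave only if the reported measurement
    # matches an image it (or a trusted auditor) built reproducibly.
    return reported_measurement in trusted_measurements
```

This is why reproducible builds matter here: without them, nobody can independently regenerate the trusted hash from the published source.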
H
It could be a significant effort, but maybe there are organizations in the world that could be looking at this, I don't know, the EFF or something like that, or Let's Encrypt; they could even provide such software. Performance is a common question, and here the performance impact does vary, and maybe there are other people in the audience who actually know more, but we found that there typically is an impact, yet it can also be fairly small. One other set of people who've done this is referenced in our draft.
H
As a reference in our draft, PDoT did a very similar thing with an existing open-source implementation, and I found that, at least in some situations, they were able to get the same or even better performance in the way that they were using it. But maybe that's a bit of a special case.
H
I like to think of it like this: generally, today, with the technology that's available right now, the CPUs typically have to do a context switch between the trusted and untrusted parts of the system, so basically for everything that you do, you add an extra context switch. That's not the only way to do things, and you don't have to do a context switch for every little step in the system, but it can have an impact. So having to burn twice as much time, or getting half the performance, is a possible outcome in some configurations, for instance, but it can also be negligible. Next slide.
H
So basically, this is not a cure for all problems. Obviously, there are lots of attacks that one can think of; you all know about the various CPU problems and so on, but it is an additional hurdle. You can get through many kinds of security technologies, and this is just yet another one, but it's a pretty big hurdle to go through if, say, an agency tries to get data. So we think it was useful, at least the way that we saw it.
H
We also think this is orthogonal to many other techniques. I think we are out of time, so I don't want to read too much of it; just go to the next slide, that's the last one, conclusions. Oh yeah, one more. There are obviously areas with active development: the hardware is actually developing, and we expect that future generations of hardware will have much less performance impact, for instance. There's obviously more research needed on operational impacts and on other aspects, like what's the best model and deployment setup for attestation. Next slide.
H
Maybe this technology is not yet prime time for super-wide deployment, but we think it's ready for some deployments; it's clearly used by a lot of systems out there today. We think it's interesting, even though it does have some interesting problems as well. I'd be happy to hear people's thoughts on this, whether you think it's science fiction or potentially feasible, or whether some things need to be taken into account.
D
Oh, I think this stuff is very interesting; I don't know how far we are from being able to use it. My question was just: did you use something like oblivious RAM, or Path ORAM?
H
I lost you there for a bit, but I'm not aware of that particular technique. For our particular system we used Intel SGX, which has...
D
SGX encrypts all the contents of memory, but a local adversary can still observe all the memory access patterns, so that actually makes it relatively easy to identify which DNS queries are being performed: you can see each query hit the right spot in RAM where that data is stored. I was just wondering: do you have this up somewhere where somebody can query it? What does the performance look like? And do you think this would preclude any of the other DNS privacy work that's being done at the IETF?
H
No, I think this is definitely complementary. If you do this, it doesn't mean that you don't need to do encryption or oblivious techniques or any of the other things. It just means that, for the particular function that you're running, you're protecting it better from outside influence. And of course, if you open up the computer and look inside the CPU with probes and everything, you can find some information; you could even see inside the CPU what's going on.
J
Thanks for presenting this. I think there was a lot of enthusiasm about five years ago for enclaves as sort of a replacement for multi-party computation and things like that. But the bad news is, it doesn't work, and I can just point you to a lot of papers of people breaking SGX, basically just from programs running on the machine; you don't even need to watch it with a microscope.
J
You know, Daniel Genkin has done quite a bit of work on this, and it's not just "you've got the data"; it's "extract the keys and then go sign your own stuff". So this generation of TEEs simply is not up to the challenge, and this really goes to the threat model. The threat model these were designed for was: I have a phone in my hand, and they want to stop me, probably, from stealing videos. But the situation
J
here is quite different: this is running in the data center of someone who is, by definition, extraordinarily sophisticated, able to run a very high-grade production service. So to have the threat model be that those same people can't call Daniel Genkin and learn how to extract the keys from SGX seems pretty implausible.
H
Yeah, I mean, there's no debate over whether there are attacks; there clearly are attacks, lots of them. But I think this is still a barrier for people to get past. I'm not really disagreeing with you, because we also are waiting for the next generation, so...
J
Yeah, I guess, as I said, I think it would have to be the case that you at least needed to decap the chip in order to get the keys, because the standard now is basically software attacks. It's just hard to understand why any cheating resolver isn't going to do it.
A
And Matthew, do you want to take control? You should be able to, actually, sorry, you should be able to use the preloaded slides, unless...
A
Yeah, you can either ask for the preloaded slides, or you can just, I guess, request control; yeah, preloaded slides are fine.
A
I think you can ask for that permission.
E
Okay, let's go. Hi everyone, I'm Matt, I work at Tor, and recently we started working on a draft around IP privacy considerations: specifically, what do we have to take into account as we develop and deploy and adopt IP privacy within applications?
E
We recently published an updated draft, and I encourage everyone to go look at it. All right, so where did this come from? It came about as a result of the IP privacy interim that took place earlier this year, and coming out of that, there was a question about,
E
you know, how IP addresses are used and why IP privacy is such a controversial and difficult topic in a lot of spaces. IP reputation is a significant use case, and so the draft itself started out by looking at how to create a new reputation system that could replace IP-based reputation.
E
And we'll actually get back to that, because it's actually not even one of the most important aspects of how IP addresses are used. So we're taking a step back, and the draft now takes more of a holistic view of IP addresses and how they're used. We're basically looking to categorize the different use cases and the different signals that platforms and providers currently need, or are currently using IP addresses for, and then we're looking at sketching replacement signals.
E
Within this draft we added a lot of new use cases, primarily around anti-abuse. There's also a new description of various IP address use cases and restrictions within legislation worldwide, which is captured; then we have some replacement signals, and we also started documenting categories of interaction, which provide an interesting motivation for some of the signals.
E
And then, in terms of next steps, we're continuing to talk to different parties. The draft itself is very heavy on anti-abuse, and we don't capture a lot of the other use cases of IP addresses.
E
So we're going to look for other people who are interested in talking about how they use IP addresses and what blockers they see for accepting clients over private IP connections, and through that we're going to iterate on the signals. The draft itself is becoming quite long and complicated, so we'll probably look at refactoring it and making it easier to comprehend, and we're also going to continue to be involved with other groups within the IETF and W3C, and various standards working groups.
E
There's an Anti-Fraud Community Group recently launched at the W3C, and we're interested to see what comes out of that. And then, in terms of open questions and issues:
E
obviously, going back to refactoring, what is the scope of this draft, what should be included within the purview of PEARG, and what's generally useful? As an example of open issues, there's the question of mandatory signals. This is obviously very contentious, because we're trying to get away from mandatory signals like passive, semi-stable IP addresses; but on the other hand, malicious behavior and just general bad actors take advantage of not having a mechanism to re-identify them.
E
And then there are just the general GitHub issues, which go into some of those questions and others. That's my update, so I'm happy to take any questions or comments.
C
Thank you, Matt, for all your work on this draft, as well as the others. Based on the interim meeting that we had a while back and previous updates to this draft, I think we're likely going to seek adoption by the group. So if folks here have not read it, we encourage you to read it, and we should be following up on the list soon-ish to start an adoption call.
C
No, that's it! Okay, great, that sounds great. All right folks, thanks for your time, and we'll see you on the list.
A
Thanks all, and thanks to our note-takers and Watson, yes.