From YouTube: IETF 105 Technical Plenary
Description
On 24 July 2019 (21:10-22:10 UTC) the IETF 105 plenary session includes talks on current thinking about privacy on the Internet.
A
We have an announcement to make before we begin — as you can see, we're having a bit of a technical problem up front with the projectors. These lovely people are working on it, but we want to let you know that you can download all of the presentations from the website now. This is a stress test for the network, because we may have to go forward with the rest of the program with those on your laptops, as opposed to on these nice big screens.
B
So I have the slides here on the monitor in front of me, so if everyone could please come to the front. Hi, I'm Brian Trammell; I'll be the emcee for the technical portion of tonight's plenary. Or you can hang out on Meetecho, which will also be a stress test — for me, and for Meetecho, also a stress test. As a beginning note, you'll notice on the agenda that we're trying out something new this time: we're more explicitly splitting the tech and admin plenaries, the way it was way, way, way back in the past.
B
This part of the session is one hour long, which is why we were going ahead and getting started even though we don't have video yet. We're going to be holding pretty strictly to time. There will be time for questions and discussion at the end — so, you know, clarifying questions only during the talks, but please do ask clarifying questions. With that: "Privacy?" — with the question mark. What's the delay on — okay, all right!
B
Yes — next slide, please. As some of you may be aware, the IAB and the IETF — at least, we hope, many in the IETF — are deeply interested in confidentiality on the Internet. This is a conversation that we've had ongoing for a while. We were interested in this in large part for reasons of privacy. We spend a lot of time on it in the working groups these days. That is the most applause I thought I would ever get for "privacy" with a question mark at an IETF meeting post-Vancouver. I'll go ahead — and it gets better.
B
We've been talking about it a lot in the working groups and in the hallways, and, you know, around the working groups and in the press and sort of everywhere. It's been a while since we've addressed it in plenary, so we'd like to change that tonight. We have a program where we'd like to talk about current issues and eternal issues in Internet privacy with Arvind Narayanan — I worked so hard to get their names right — and Steve Bellovin. Arvind Narayanan is an associate professor of computer science at Princeton.
B
He leads the Princeton Web Transparency and Accountability Project. He's also the recipient of the Presidential Early Career Award for Scientists and Engineers — there's an award ceremony for this that he's missing to be with us tonight, so we're very, very honored to have him here. He'll be talking about some of the implications of current trends on communications privacy on the Internet, sort of in the large — so it's sort of a contextual look at this. Steve Bellovin needs no introduction in this room, but I'm going to try to do so anyway.
B
He is a former member of the IAB and a former Security Area Director. He was instrumental in the creation of Usenet — some of you may have heard of that. He's currently a professor of computer science at Columbia and an affiliate at Columbia Law. He'll be showing us tonight that in Internet privacy, everything old is new again. So thank you both. Arvind, come on up.
D
Thank you, Brian. Thank you, everyone. I'd like to share with you what I've learned from a decade of doing privacy measurement. Privacy measurement is kind of a boring-sounding term, but what it really means is trying to find privacy vulnerabilities, ideally on a large scale — we're talking millions of endpoints — in an automated or mostly automated way. Before I do that, I'll start with a couple of caveats. One is, I want to be really upfront that most of my work has been in the web space, and my prior engagement with standards bodies has been with the W3C, and that's what a lot of this is going to be informed by. But nonetheless, what I'm going to try really hard to do is extract some principles that are much more broadly applicable, and that's what I'd like to share with you today. Another thing that's going to be a common theme of this presentation is that I'm going to be talking about issues beyond encryption.
D
I'm going to assume that we're in a world with pervasive encryption, and in fact some of the things that I'll touch upon are perhaps downsides of encryption for privacy, and how we can try to mitigate them. If that already sounds surprising to some of you, then I hope this will be an interesting discussion. I should also say that, as you heard from the intro, I'm an academic, so my job is to think, you know, idealistic blue-sky thoughts. There are going to be points where you're going to feel:
D
"Oh, this will never work in the real world" — and you're welcome to come say that in the Q&A; that's totally fair game and I appreciate it. Okay, so with those caveats, here are three things that I want to share with you. The first is an issue that very often comes up when we're talking about privacy beyond encryption, especially when we're talking about more subtle privacy threats such as device fingerprinting. An argument that often comes up is: oh, forget about fingerprinting. That ship has sailed; the horse has left the barn; it's too late for fingerprinting defenses.
D
There
are
too
many
fingerprinting
vectors.
It's
too
easy
to
do
tracking,
so
we
should
just
forget
about
that
and
accept
that
we're
gonna
be
in
a
world
where
it's
really
easy
to
track
and
profile
people,
and
that
is
a
point
of
view
that
I
actually
used
to
subscribe
to
and
what
I
want
to
tell
you
about
is
why
I
changed
my
mind
and
what
I
learned
from
that.
So
that's
the
first
thing
I
want
to
tell
you
about,
and
specifically
this
this
is
come
a
lot
in
the
web
context.
D
So let me start with that. Now, web fingerprinting, as many of you may know, really came to broad attention about a decade ago with the very cool work of the Electronic Frontier Foundation. They made a project called Panopticlick. Users could go click the button on it — and, in fact, it's still online.
D
You can go click the button, and if you do, the script on the web page is going to grab a lot of information from your web browser — the user agent, various HTTP headers, the list of fonts and plugins that you have installed, etc. It's going to use that to construct a fingerprint, and it's going to measure, among the, you know, million other people who have taken the same test as you, how unique your fingerprint is: how many others in that dataset share the same fingerprint that you do? And the very interesting thing was:
D
It depends on how you measure — depending on whether users have Flash installed or not — but at least back in 2009, over 90% of users had a unique browser fingerprint. And this was very concerning for privacy advocates, because fingerprinting is something a lot of people would consider to be a privacy violation: it cannot be seen or controlled by the user, and you can't get rid of it in the same way that you can clear third-party cookies. So this was a big concern.
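To make the mechanics concrete, here is a minimal Python sketch of what such a measurement does — combine observable browser attributes into a fingerprint and count how many visitors are unique. The attribute names and the tiny dataset are invented for illustration; this is not Panopticlick's actual code:

```python
import hashlib
from collections import Counter

def fingerprint(attrs: dict) -> str:
    """Hash a browser's observable attributes into one fingerprint string."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitors, each described by a few fingerprinting vectors.
visitors = [
    {"user_agent": "Mozilla/5.0 (X11; Linux)", "fonts": "Arial,DejaVu", "dnt": "1"},
    {"user_agent": "Mozilla/5.0 (X11; Linux)", "fonts": "Arial,DejaVu", "dnt": "0"},
    {"user_agent": "Mozilla/5.0 (Windows)",    "fonts": "Arial,Calibri", "dnt": "1"},
]

counts = Counter(fingerprint(v) for v in visitors)
unique = sum(1 for v in visitors if counts[fingerprint(v)] == 1)
print(f"{unique} of {len(visitors)} visitors have a unique fingerprint")
```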
D
There was another project called amiunique.org — it's also still out there — by researchers at INRIA in France, who came to some very broadly similar conclusions. So the question is: what do we do about this? How should standards bodies respond to this ease of fingerprinting? How should browser vendors respond to it? One way you could think about this is that there are way too many fingerprinting vectors — and, in fact, it is true: there are way too many fingerprinting vectors.
D
This is a partial list, and as we add new features to the web, like canvas, it only increases the number of features available for fingerprinting. Ironically, as you can see on the list, privacy features like Do Not Track also contribute to fingerprinting — because "do you have Do Not Track enabled or not" is itself a small amount of entropy, and so on.
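That "small amount of entropy" can be quantified. A hedged sketch with made-up counts: the Shannon entropy of an attribute's value distribution tells you how many bits it contributes to a fingerprint — an evenly split boolean flag leaks a full bit, a rarely enabled one much less on average:

```python
from math import log2

def entropy_bits(counts):
    """Shannon entropy, in bits, of an attribute's value distribution."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

print(entropy_bits([500, 500]))  # 50/50 flag: 1.0 bit
print(entropy_bits([950, 50]))   # rarely enabled flag: ~0.29 bits
```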
So one thing you might conclude from this is, you know, the horse has left the barn: fingerprinting is devastatingly effective, and we shouldn't even try to minimize the fingerprintability of new features that we put into standards. Now, the W3C, to their credit, had a lot of people who did still try to minimize the fingerprintability of new features, and I don't want to present this as a criticism of other people: this was absolutely me up until about a year ago. This is what I believed, and here's why I changed my mind.
D
So those studies that I presented — they were really, you know, excellent studies, but there was something wrong with the way that a lot of people interpreted them. One weird thing about those studies is that the users who participated were self-selected. So does that mean that the results could be non-representative in some way? What could be different about self-selected users? One possibility is that the users who would self-select into a study like that are actually really tech-savvy people — the kind of people who are likely to make a lot of modifications to their browsers.
D
That
would
make
them
more
unique
and
more
fingerprint
Abal.
So
that
was
one
interesting
kind
of
bias
that
some
researchers
suspected
could
be
in
those
studies,
including
some
of
the
researchers
at
INRIA
who
were
responsible
for
one
of
those
two
these
and
then
what
they
did.
Was
they
partnered
with
a
major
french
website
and
fingerprinted
all
of
the
users
of
the
off
that
web
site
without
really
telling
them.
D
So,
by
doing
this
lightly
ethically
questionable
thing,
they
did
a
statistically
much
more
rigorous
thing
and
they
published
that
study,
which
found
in
fact,
contrary
to
some
of
the
previous
findings.
Only
a
third
of
the
users
were
unique
and
as
more
and
more
activity
shifts
to
mobile,
less
than
a
fifth
of
mobile
users
were
unique
because
those
devices
are
less
customizable
and
further
has
flash
and
Java
and
other
old
plugins
are
getting
phased
out.
D
Even
little
things
that
browsers
can
do
in
order
to
minimize
the
fingerprint
ability
of
features
are
going
to
have
a
big
impact,
and
so
I
came
away
with
this
with
a
very
different
point
of
view,
then
a
lot
of
people
had
had
before
before
the
study,
which
is,
let's
not
even
bother,
let's
not
features
for
the
sake
of
privacy,
but
instead
after
the
study,
what
I
concluded
was
quite
the
opposite.
So I think this is a much more general principle. In a lot of contexts we hear the "ship has sailed, the horse has left the barn" kind of argument — and if you were at Pete Snyder's talk at PEARG earlier today, you heard a lot of similar points from him, you know, the same kind of thing that I'm saying here. So that's the first kind of insight that I want to give you: the ship has not sailed. And one of the reasons that people will say the ship has sailed is that if you don't have a perfect defense — even if you try to mitigate fingerprintability, oh, here's a clever way that somebody can get around it — well, if all I have is an imperfect defense, is it not better to, you know, not give people a false sense of security? So that is a point on which I will disagree. I think that imperfect defenses are still very useful, and one reason I believe that is that technology doesn't have to bear the full burden of privacy protection.
D
What do I mean by this? Here's an interesting example. Safari has third-party cookie blocking, as you might know, and it's not a perfect defense — it can be circumvented. In fact, Google decided to do exactly that: Google decided to circumvent it, and once they did, something interesting happened. In the US, the Federal Trade Commission got involved. They said: hey, you can't do that. You can't circumvent a privacy measure — that's actually a violation of the law. And they went after Google, and they fined Google.
D
So
that's
an
interesting
phenomenon
where
the
technology
itself
was
not
bulletproof,
but
it
turns
out
that
circumventing
even
a
weak
privacy
protection
measure
can
actually
get
companies
into
trouble
with
the
law.
It
can
also
be
a
reputational
harm.
So
when
we're
talking
about
the
privacy
adversaries
here
we're
talking
about
the
Facebook's
and
Google's
of
the
world,
we're
not
talking
about
somebody.
D
You
know
from
a
poorly
regulated
jurisdiction
somewhere
out
there
in
the
world,
and
therefore
technology
doesn't
have
to
bear
the
full
burden
and
perfect
defenses
can
still
be
useful,
even
if
all
that
it
does
is
raise
the
cost
of
some
of
these
fingerprinting
and
privacy
invasive
features,
and
it
takes
a
couple
more
years
for
those
kind
of
tracking
technologies
to
become
very
widespread.
That
is
still
useful
because
it
gives
a
couple
of
years
for
new
defenses
to
be
developed,
whether
they
may
be
technical
or
legal,
or
something
like
that.
D
So
that
was
point
number
one
point
number
two
that
I
want
to
talk
about
is
we're
in
a
world
where
what
privacy
means
to
people
changes
very
quickly,
whether
or
not
something
as
a
privacy
breach
changes
very
quickly.
So
both
privacy,
attitudes
and
private,
infringing
technology's
changed
pretty
quickly.
How
can
standards
cope
in
this
world,
given
that
standards
are
intended
to
be
pretty
long-lasting
documents,
and
so
how
do
you
resolve
the
tension
between
these
two?
D
So
one
good
example
of
this
is
one
of
my
favorite
examples
of
how
privacy
attitudes
evolved
quickly.
Is
that
if
you
thought
about
privacy,
ten
years
ago,
most
users
would
have
been
concerned
with
what
are
the
individual
harms
that
can
accrue
to
me
out
of
all
of
those
data
collection?
Out
of
all
of
the
databases
owned
by
companies
that
have
my
personal
information
is
that
identity
theft
is
the
data
breaches?
Is
it
perhaps
targeted
price
discrimination?
What
should
I
be
worried
about?
Those
were
the
kinds
of
privacy
questions
that
people
were
asking.
D
You
know
the
overall
society
that
we
live
in,
so
I
want
to
say
that
there
has
been
a
shift
from
these
very
individualized
concerns
about
privacy
to
more
collective
societal
concerns
about
privacy
among
privacy,
scholars
and
privacy
advocates.
That
shift
has
been
pretty
stark
and
even
among
the
general
public
I
think
there
has
been
a
substantial
shift,
and
so
what
this
means
is
that
a
certain
type
of
data
collection
that
might
have
seemed
pretty
innocuous
ten
years
ago
begins
to
look
very
different
today.
So
that
was
one
example.
D
I
have
a
couple
of
other
examples
that
I'll
skip,
but
the
result
of
this
is
that
it's
very
hard
in
a
standards
document
to
write
down
a
fixed
privacy
definition
and
then
say
that
I've
analyzed
this
protocol
with
respect
to
this
privacy,
definition
and
I'm,
confident
that
this
is
going
to
be
a
privacy
respecting
protocol
now
and
for
all
time
to
come,
and
so
going
back
to
that
example
of
individual
versus
collective
harms.
Let
me
show
you
very
quickly
the
paper
by
Cambridge
researchers.
This
was
in
2013.
D
This
was
the
paper
that
realised
that
you
could
take
people's
Facebook
Likes,
which
is
a
very
innocuous
sounding
type
of
information
and
use
that
to
predict
their
so-called
Big
Five
personality
traits
and,
though
are
things
like
emotional
stability,
agreeableness,
extraversion
and
so
on
the
stuff
that
you
see
in
green
over
there.
If
you
can
even
read
that
text,
sorry
about
that,
the
font
size
is
a
little
small,
and
this
is
exactly
the
research
that
was
allegedly
weaponized
by
Cambridge
analytic,
ofor
psychographic
targeting.
So
this
was
not
necessarily
anticipated
a
few
years
ago.
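The mechanics are mundane, which is part of the point. Here is a toy sketch — synthetic data, not the Cambridge study's actual pipeline — of how a plain classifier can recover a hidden trait from a matrix of innocuous binary "Likes" once a handful of them happen to correlate with it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 100
X = rng.integers(0, 2, size=(n_users, n_likes))   # user-by-Like matrix

# Pretend some trait correlates weakly with the first five Likes.
signal = X[:, :5].sum(axis=1) + rng.normal(0, 1, n_users)
y = (signal > np.median(signal)).astype(int)      # e.g. "high extraversion"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")  # well above 0.5
```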
D
There
are
many
other
examples
of
this,
of
improvements
in
machine
learning,
turning
innocuous
data
into
something
that
can
be
used
for
something
much
more
problematic.
This
was
a
headline
from
a
few
years
ago,
statistician
said:
target
had
figured
out
how
to
use
a
person's
shopping
records
to
figure
out
whether
they
were
pregnant
or
not,
and
so
a
one
concrete
threat
along
these
lines
is
well
stated
by
Paul
ohm
who's,
a
legal
scholar
who
calls
this
the
database
of
Rouen.
D
He
asks
us
to
imagine
the
consequences
of
a
single
massive
database
containing
secrets
about
every
individual
formed
by
linking
different
companies,
data
stores
and
I
think
one
of
the
technologies
that
is
enabling
something
like
this
today
is
cross
device
tracking
techniques
that
enable
the
linking
of
our
activities
between
different
devices.
Even
if
we're
not
identifying
ourselves
using
explicit
identify
areas
that
allow
such
linkage
just
using
statistical
patterns
to
link
these
different
devices
together
and
I,
think
these
types
of
concerns
should
perhaps
be
at
the
forefront
of
some
of
our
privacy
efforts,
including
in
standards
efforts.
D
But
these
are
not
things
that
we
really
recognized
as
privacy
concerns,
maybe
ten
years
ago,
as
much
as
we
do
today.
So
that's
kind
of
what
I
mean
by
the
landscape
of
privacy
is
shifting
pretty
quickly,
and
this
is
a
challenge
for
a
standards
document
of
standards
process
which
needs
to
be
really
long
lived.
So
we
thought
about
this
in
a
paper
recently
where
we
looked
at
specifically
the
battery
status
API
in
the
web
context,
and
this
was
an
API
that
turned
out
to
have
much
more
serious
fingerprint
ability.
D
Privacy
consequences,
then
was
realized
and
therefore
was
taken
out
of
a
number
of
browsers
after
it
had
shipped
and
after
people
had
started
using
it.
That
was
kind
of
fun
precedented.
So
we
looked
at.
How
did
this
go
wrong
and
how
can
we
be
more
aware
of
these
potential
and
misuses
during
the
standards
process?
And
so
here's
a
paper
citation
at
the
bottom
and
what
we
proposed
in
this
paper
at
a
high
level.
D
What
we
called
for
is
a
much
tighter
loop
between
standards
agencies,
as
well
as
researchers
and
developers
and
by
developers
I
mean
both
implementers
and
also
developers
in
a
much
more
general
sense
people
who
are
using
the
api's
that
are,
you
know,
implemented
by
by
the
browser
vendors,
for
example,
and
as
part
of
this,
we
think
that
it
would
be
really
useful
to
incentivize
academics
to
do
two
things.
One
is
to
get
involved
in
the
standards
process
and
do
privacy
reviews
of
standards
and
the
other
one.
D
This
is
perhaps
still
quite
missing,
which
is
once
an
API
is
out
in
the
wild
and
once
people
are
using
it
to
do
regular
privacy
audits
of
how
it's
being
used
and
abused
I've
talked
about
this
a
few
times
and
one
question
that
I
get
sure.
This
sounds
good
in
theory,
but
it's
hard
to
convince
researchers
to
do
this.
How
do
we
do
that
now?
One
good
thing
I'll
say
about
this.
This
actually
sounds
like
a
horrible
thing,
but
I'll
claim
it's.
D
A
good
thing
is
that
it's
fairly
easy
to
influence
academic
researchers
influence,
influenced
them,
not
in
the
sense
of
what
they'll
say,
but
influence
them
in
the
sense
of
what
they
want
to
work
on
by
funding
certain
work
or
by
making
it
more
prestigious
by
creating
awards,
for
example,
for
certain
types
of
work,
such
as
privacy,
reviews
of
standards,
I.
Think
it's
there's
a
there's,
a
fairly
straightforward
path
to
incentivizing
much
more
academic
work
as
part
of
the
standards
process,
which
I
think
will
be
a
good
thing.
D
Another
thing
that
I
think
would
be
useful
is,
as
part
of
the
standards
process,
to
be
explicit
about
assumptions
because
privacy
changes
so
quickly,
because
we
can't
anticipate
what
new
privacy
infringing
technologies
will
be
out
there
in
five
years.
It
helps
to
be
explicit
about
assumptions
as
part
of
the
standards
process,
and
that
is
to
be
able
to
explicitly
say
we
have
created
the
standard,
assuming
that
this
API
will
not
be
highly
susceptible
to
fingerprint
ability.
D
But
if
it
turns
out
that
that's
the
case,
if
it
turns
out
that
this
is
being
exploited
in
the
wild
here,
are
some
things
that
implementers
could
do
to
mitigate
that
risk.
So
that's
the
second
point.
Okay
and
the
third
and
final
point
that
I
want
to
talk
about.
Is
that
this
idea
of
measurement,
which
is
finding
these
privacy
violations
on
a
large
scale,
I'm
clay
that
it's
been
really
useful
for
privacy,
but
unfortunately,
it's
going
away
and
I
want
to
talk
about
whether
there
is
a
way
to
preserve
it.
D
I
don't
want
to
make
the
sound
like
a
sky
is
falling
kind
of
claim,
but
in
my
little
corner
of
the
research
world
the
sky
has
already
fallen
and
a
lot
of
fesses
that
have
moved
on
to
other
research
areas.
So
let
me
tell
you
why
that
is
and
why
that
should
worry
us
from
a
privacy
perspective
and
to
see
whether
there's
a
way
to
preserve
it
so
I'm,
claiming
that,
at
least
in
the
web
context,
measurement
has
played
a
very
key
role
in
keeping
the
worst
of
the
privacy
abuses
in
check.
D
Many
teams
around
the
world
have
been
working
on
web
privacy
measurement.
I'll.
Tell
you
a
tiny
bit
about
my
own
team's
work,
something
that
we
built
is
a
tool
called
open
wpm.
This
is
a
the
github
page.
If
you
want
to
check
it
out,
as
you
can
see,
it's
an
actively
developed
open-source
project.
It
was
developed
at
Princeton
and
now
the
main
developer,
Steve
Englehart
has
moved
to
Missoula,
it's
maintained
by
Missoula.
Now,
so
what
it
is
I
don't
mean
for
any
of
the
details
on
this
page.
To
be
important.
Is
it's
just
the
URL?
D
If
you
want
to
look
at
it
or
the
name
open
wpm
now
what
it
is
is
an
instrumented
version
of
Firefox.
It's
basically
a
bot
that
visits
the
web's
top
1
million
web
sites
every
month
and
looks
at
what
kind
of
privacy
violation
violating
techniques
are
out
there.
It
even
does
things
like
put
in
fake
PII
into
various
forums
and
tries
to
see
where
they
go,
and
it
saves
all
that
data.
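As a toy illustration of what "instrumented crawling" means — this is not OpenWPM itself, just a minimal sketch using Selenium and the browser's own performance log; the site list is a stand-in:

```python
from urllib.parse import urlparse
from selenium import webdriver  # pip install selenium; needs geckodriver

SITES = ["https://example.com", "https://example.org"]  # stand-in for a top list

driver = webdriver.Firefox()
crawl_log = {}
for site in SITES:
    driver.get(site)
    # Ask the page which resources it loaded; third-party hosts show up here.
    urls = driver.execute_script(
        "return performance.getEntriesByType('resource').map(r => r.name)")
    first_party = urlparse(site).hostname
    crawl_log[site] = sorted({
        urlparse(u).hostname for u in urls
        if urlparse(u).hostname and urlparse(u).hostname != first_party})
driver.quit()

for site, third_parties in crawl_log.items():
    print(site, "->", third_parties)
```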
D
We have half a terabyte of data per month, and then we run various scripts on that data to try to find privacy violations, publicize them, and get people to change their practices. We've written a number of papers based on this data. This is one example — it's called "Online Tracking: A 1-million-site Measurement and Analysis" — and, as you can see, one of the key things here is to be able to do this on a large scale, in a mostly automated way. It's had a number of positive impacts on privacy.
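The analysis side can be as simple as counting: given a crawl log mapping each site to the third-party hosts observed there, the hosts that recur across many sites are the tracking candidates worth publicizing. A minimal sketch over invented data:

```python
from collections import Counter

# crawl_log: site -> third-party hosts seen there (imagine a million sites).
crawl_log = {
    "https://news.example":   ["tracker.example", "cdn.example"],
    "https://sports.example": ["tracker.example", "ads.example"],
    "https://shop.example":   ["tracker.example"],
}

prevalence = Counter(h for hosts in crawl_log.values() for h in set(hosts))
for host, n in prevalence.most_common():
    print(f"{host}: present on {n} of {len(crawl_log)} sites")
# Hosts near the top of this list are candidates for filter-list review.
```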
D
One
of
them
is
enhancing
block
lists,
for
example,
if
you
use
adblock
plus
or
you
block
origin,
those
tools,
use,
filter
lists
and
the
developers
of
this
filter
lists
often
look
to
research
like
hours
to
try
to
figure
out
what
are
some
of
the
new
privacy
violating
endpoints
in
the
URLs
in
order
to
add
them
to
their
block
lists
various
other
things.
For
example,
in
some
cases,
there's
not
there's
been
an
enforcement
action
by
data
protection
authorities
by
the
federal
agency,
and
things
like
that,
so
I'm
claiming
the
the
this
kind
of
research.
D
Unfortunately,
the
downside
of
that
is
that
the
two
ends,
of
course,
F
into
an
encryption,
are
the
device
in
the
server
it
doesn't
involve
the
user.
It
doesn't
involve
a
researcher,
a
researcher
can't
Smitham
these
devices,
a
researcher
can't
figure
out
what
data
is
being
collected
and
where
it's
being
sent-
and
we
think
this
is
you-
know,
kind
of
a
crisis
for
this
kind
of
research.
It
makes
meaningful
privacy
measurement
basically
infeasible.
D
The
public
is
very
interested
in
these
questions.
For
example,
there
was
this
article
called
the
house
that
spied
on
me
that
just
looked
at
what
are
the
endpoints
of
communication
of
various
IOT
devices,
including
sex
toys.
Why
is
that
contacting
13
different
servers?
You
know
people
want
to
know
what
data
is
going
out
there,
and
this
is
important
not
just
from
a
privacy
advocate
point
of
view,
if
you're
a
company
and
you're
a
reputable
company-
and
you
want
to
be
able
to
show
your
users
that
your
data
collection
is
completely.
D
You
know,
according
to
your
specified
privacy
policies,
there's
no
good
way
to
do
that
today,
because
researchers
can't
examine
the
plain
text
of
these
communications
and
I
think
this
is
a
serious
issue.
For
example,
if
we
wanted
to
know
if
the
smart
light
bulbs
in
our
homes
are
transmitting
conversations
because
they
actually
have
microphones,
we
really
don't
have
a
good
way
to
check
that
today,
and
this
is
not
a
paranoid
scenario,
something
somewhat
similar
has
happened.
D
For
example,
this
interesting
thing
happened
a
few
months
ago,
where
Google
sent
an
email
to
all
of
the
owners
of
nest,
thermostats
and
said:
hey
your
nest.
Thermostat
is
also
a
Google
voices.
Just
now,
and
people
like
what.
How
is
that
possible?
It
doesn't
have
a
microphone
and
Google
said
no,
it
does
have
a
microphone,
and
people
said
what
we
didn't
know
it
had
a
microphone
and
Google
said.
D
Yes,
it
does
check
the
privacy
policy,
and
people
said
what
privacy
policy,
nobody
reads:
the
privacy
policy
also
when
they
read
the
privacy
policy,
it
was
actually
not
in
there
and
then
Google
said.
Oh,
we
meant
to
disclose
that
in
the
privacy
policy.
Sorry,
that
was
an
oversight
so
of
course,
in
this
case,
I'm
willing
to
believe
that
it
was
an
oversight
on
the
part
of
Google.
D
But
if
there
was
a
malicious
you
know
vendor
who
put
microphones
and
devices
that
are
in
millions
of
people
homes,
we
literally
don't
have
a
good
way
to
know
about
it.
This
measurement
research
has
been
in
the
past
one
way
to
know
about
it,
but
it
doesn't
work
for
IMT.
So
with
that
I'll
just
end
by
saying
that
what
it
likes
a
call
for
is
some
kind
of
debug
mode
for
IOT
devices.
D
I
think
this
is
a
critical
need,
the
idea
being
that
when
you
enable
this
kind
of
debug
mode,
the
user
or
more
likely
a
researcher,
you
know
the
details
and
user
experience
will
depend
on
the
device,
but
some
way
to
be
able
to
intercept
the
plaintext
in
order
to
be
able
to
audit.
What's
going
on
out
there,
there's
a
Stanford
project
related
to
this
called
TLS
TLS.
What
is
it
called?
Tls
replays
something,
and
so
what
I'm
proposing
is
slightly
different,
I'm
happy
to
hash
out
the
details
later.
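As a toy illustration of the "rotate and release" idea that such a debug mode could build on — this is invented pedagogy, not the actual TLS-RaR protocol — a device can keep encrypting under its current key while releasing a retired key, so an auditor can decrypt only traffic that was already recorded:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Device side: traffic in epoch 1 is encrypted under the current session key.
key_epoch_1 = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(key_epoch_1).encrypt(nonce, b'{"temp": 21.5}', None)

# An auditor's capture box records the ciphertext but cannot read it yet.
recorded = (nonce, ciphertext)

# Later, the device rotates to a fresh key for all new traffic...
key_epoch_2 = AESGCM.generate_key(bit_length=128)

# ...and releases the *retired* key to the local auditor, who can now
# decrypt only the already-recorded epoch, never future traffic.
nonce, ciphertext = recorded
print(AESGCM(key_epoch_1).decrypt(nonce, ciphertext, None))
```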
E
So, to talk about some modern issues in privacy today: you know, privacy is not a new issue. When I started doing the research that led to this talk — and, by the way, these slides are already on my web page, with linked references; the technical-cum-legal document this is based on is also on my web page — a lot of this stuff goes back to the 1960s.
E
You know, the New York City Bar Association started studying computers and privacy in 1962. Alan Westin prepared, basically, a report of that committee in '67 that has been very influential. The US Congress held hearings on this; legal academics were writing papers on this — all in the 1960s. And it actually goes back much further: the right to privacy is mentioned in Jewish literature from about eighteen hundred years ago. So it's not a new issue.

And the privacy paradigm that we work with today — the paradigm called notice and consent — goes back to Westin's 1967 book, which is the report of this Bar Association of the City of New York committee: that users, that individuals, can determine for themselves what they want to share and what they're willing to reveal. This statement from 1967 has been the basis for virtually all privacy regulation since then. And yet look at the timeline. He published this book in '67. Six years later, a US government committee came up with what became known as the Fair Information Practice Principles — of consent, of security, of openness, of use specification and so on. In 1974, a year later, the US government actually enacted this into law — but it only applied to the US government; it didn't apply to private corporations. Not the American way.
E
Seven
years
ago
the
gdpr
was
enacted
when
it
to
affect
a
couple
of
years
ago,
but
from
10,000
meters.
All
of
these
are
substantially
the
same
yeah
tremendous
differences
in
details.
But
fundamentally,
if
you
consent
the
data
that
you
have,
the
data
about,
you
can
and
will
be
collected
and
notice
and
consent,
and
so
notice
and
consent
is
sites.
E
Tell
you
what
they're
going
to
collect
and
what
they're
going
to
do
with
it
and
by
using
the
website
by
using
the
device
you
are
deemed
to
have
consented
to
this
policy,
and
some
of
the
risks
were
known
back
in
the
1960s
academics
law
professors
wrote.
People
are
just
going
to
go
along
with
the
requests
because
they
want
the
service
1960's.
We
didn't
have
Google,
we
didn't
have
Facebook,
they
realized
people
are
going
to
go
along
to
get
the
benefits.
E
They
realized.
They
told
the
US
Congress
people
are
gonna
share
passwords.
Maybe
we
need
multi-factor,
authentication,
1967
folks,
how
many
such
you
log
in
to
adjust
a
password.
Today
they
worried
about
hackers.
They
even
cited
MIT
the
MIT
students
breaking
into
systems
for
fun,
insider
threats.
Why
are
tapping
the
need
for
encryption,
the
importance
of
metadata
and
the
inferences
you
can
draw
from
metadata
in
again
1967-1969
the
danger
of
large
searchable
aggregate
able
databases?
E
There's a tremendous amount of data being collected, and we don't know who is collecting it. We have privacy policies; we have location data; and of course there are the governments of the world. There's a tremendous amount of over-collection. And apart from all the folks to whom you give consent, there are the data brokers: outside parties whose business is to collect data about people and sell it. They collect it, they buy it and they sell it — sometimes from public records, sometimes from private transactions that you know nothing about.
E
My mechanic, for example, sells my odometer readings. Well — did I consent to it? No; that was a private deal between the mechanic and some data collection company. The ads that you see on the web are generally not coming from the website you're visiting; they're coming from ad brokers, often multiple levels of ad brokers, who do HTTP redirects — each one is a separate website — and who collect and set cookies.
E
They combine that with information from the third-party data aggregators and estimate your age, your gender, your income, and they use that to say how valuable a customer you are, and therefore what ad is appropriate to show you. And you don't see any of this. But we have privacy policies! Out of curiosity: who in this room reads every privacy policy they encounter? I'm impressed. I am seriously impressed.
E
Are
a
few
my
hand
was
not
raised?
There
are
lori
crater
and
her
colleagues
at
to
Carnegie
Mellon
estimated
that
the
opportunity
cost
for
reading
all
the
privacy
policies
you
encounter
would
be
about
thirty
five
hundred
US
dollars
per
year
and
they're
deliberately
vague,
deliberately
expansive
because
at
least
in
the
US
regulators
will
come
down
on
you
not
for
what
they
collect,
not
which
way
you
collect,
but
from
when
you
break
your
promise,
that's
an
unfair
and
deceptive
trade
practice
according
to
US
law.
E
So
if
you
say
you
might
do
everything,
then
you
don't
lie
when
you
do
everything
you
know
we
may
collect
personal
information
and
other
information
about
you.
Remember
the
date
of
brokers,
remember
the
analytic
platforms
from
business
partners,
contractors
and
other
third
parties,
in
other
words
the
world
quota,
Advisory
Committee
report
to
President
Obama
about
five
years
ago.
E
Only
in
some
fantasy
world
do
users
actually
read
these
notices
and
understand
their
implications
before
clicking
to
indicate
their
consent
by
and
large,
that's
true
and
remember,
because
of
all
these
third
and
fourth
and
fifth
and
sixth
parties
on
the
web.
You
don't
even
know
what
websites
you're
consenting
to
you
go
to
a
news
site,
a
sports
site
what-have-you
and
you
you're
careful.
You
read
it
and
you
look
at
the
fine
print
says
by
the
way,
reader
advertising
partners,
privacy
policies,
who
are
they
good
luck,
finding
out
location
data?
E
It's
a
huge
issue
for
mobile
devices.
Lots
of
apps
are
collecting
and
analyzing
this
kind
of
data,
and
even
if
the
app
is
not
doing
the
collection
and
transmission
IP
geolocation,
a
very
mature
technology
reveals
a
lot.
Is
it
perfect?
No,
is
it
very
very
good?
Yes,
and
this
stuff
doesn't
have
to
be
perfect.
E
If
data
exists,
it's
available
to
governments,
sometimes
in
some
governments,
you've
got
a
complex,
restricted
and
somewhat
painful
process
to
gain
access
to
your
data.
I
said
the
US
government
at
this
this
45
year
old
privacy
law.
You
can,
under
certain
circumstances,
gain
access
to
certain
information
about
the
held
about
you.
Other
governments,
don't
really
care
about
the
niceties
of
privacy
policies
and
access
in
it
is
your
data
we
wanted.
We
haven't
go
away
and,
of
course,
that's
even
ignoring
what
you
know.
E
193
nations
in
the
in
the
UN
I
think
about
192
of
them
have
espionage
agencies,
they
collect
data
via
technical
means
and
other
means,
and
this
you
don't
get
to
look
at
it
all
the
privacy
laws
that
we
have
are
largely
based
on.
What's
called
PII
personally
identifiable
information,
your
name,
your
email
address
a
government
ID
number
of
some
sort.
The
definition
varies.
The
EU
considers
IP
addresses
PII.
E
much of the United States government does not. I'm someplace in between — I think they're both right, depending on the circumstances. But it turns out you don't need PII to invade somebody's privacy. Amazon doesn't need your name and address to recommend products. Oh, they might like it: oh, you live in a well-to-do neighborhood, we're going to recommend more expensive products; you have an ethnic surname — family name — let me recommend products that appeal to that ethnic group. It might help, but they don't really need it. You know: people who bought this also bought that.
E
Netflix doesn't need to know who you are to recommend movies. TiVo doesn't need to know who you are to recommend TV shows. There's a great essay out there — you can find it, search for it — called "My TiVo Thinks I'm Gay." Somebody overreacted when he started getting recommendations from TiVo for gay-themed movies, so he tried to overcorrect by watching manly, he-man movies, war movies and so on. At that point, it started showing him Nazi propaganda movies.
E
If
you're
worried
about
PII,
some
people
try
to
anonymize
the
data.
What
will
strip
off
the
identifying
information
it
doesn't
work?
First
of
all
for
most
kinds
of
anonymization,
the
real
world
has
shown
is
easy
to
re,
identify
or
even
you've
done.
Some
of
that,
as
I
recall,
haven't
you
and
if
you
do
too
good
a
job
of
anonymization,
you
may
actually
destroy
the
utility
of
the
data
for
certain
very
important
things.
E
For
example,
some
medical
dosage
calculations
done
based
on
machine
learning
on
a
large
database
of
patient
information,
very
successful
to
calculating
the
proper
dose
of
warfarin
aver,
which
have
been
a
very
tricky
problem.
But
some
academics
showed
that
if
you
anonymize
the
data
well
enough
to
really
hide
the
patient's
identity,
the
calculations
wouldn't
work
you've
hidden
too
much
of
the
subtle
details
about
the
patient's
medical
condition.
E
Pii,
focusing
on
PII
also
misses
the
importance
today
of
machine
learning
and
the
inferences
that
it
can
make.
You
can
tell
someone's
sexual
orientation
from
the
kinds
of
things
they
do.
I
will
ignore
them.
My
Tivo
thinks
I'm
gay,
but
you
can
infer
this.
Is
this
good
as
a
fan?
It's
private
information
to
a
lot
of
people,
whether
or
not
it
should
be
it's
much
much
harder
to
control,
because
it's
not
based
on
data
directly
collected.
E
The
foods
that
I
buy
might
indicate
my
ethnicity
proxy
variables
are
a
very
powerful
thing.
There
was
a
study
done
by
the
Federal
Trade
US
Federal
Trade
Commission
about
ten
years
ago.
They
discovered
that
auto
insurance
companies
were
using
credit
scores
to
set
rates.
What
is
your
ability
or
willingness
to
pay?
A
debt
have
to
do
with
whether
or
not
you're
going
to
get
into
an
auto
mode.
Whatever
bial
accident
and
the
FTC
staff
came
to
three
conclusions,
one,
it
was
a
valid
predictor.
Why?
E
The higher rates were going to certain ethnic groups, based on credit scores — well, that's bad social policy. So the FTC staff said: we're going to solve this; we're going to try to build a model that's just as predictive but not discriminatory. And guess what: they couldn't do it. There was something deep in the data that said: yes, there is this true correlation, and it's going to discriminate.
E
So
to
me,
notice
and
consent
is
dead.
No
one
knows
who
collects
the
data?
No
one
knows
what
they'll
do
with
it.
No
one
knows
where
it's
stored
and
some
of
the
most
sensitive
stuff
like
location
is
use.
It's
used
for
your
benefit.
You
know
how
do
I
get
from
point
A
to
point
B
in
a
map
program
and
it's
part
of
what's
called
you
data
shadow.
You
know.
Even
the
US
Supreme
Court
has
noted
how
sensitive
location
data
can
be
in
the
aggregate.
So
if
we
don't
have
notice
and
consent,
what
should
we
do?
E
What
should
we
replace
it
with?
One
answer
is:
use
control
it's
controversial
but
give
up
on
data
collection
restriction.
It
doesn't
work
better
in
the
EU
but
still
doesn't
work
that
well.
Instead,
let
people
specify
how
their
data
can
be
used,
not
what
can
be
collected,
but
what
it
can
be
used
for
targeted
advertising,
statistical
analysis,
medical
research,
what-have-you.
It
sounds
like
a
great
idea:
it's
not
that
easy.
E
Very few — if any — have gotten that right. You've got to give consent across long time intervals. You know, I have been posting stuff on the net for about 40 years now. Brian mentioned I was one of the people who created netnews, which went live in January of 1980 — I was one of the founders, so I was out there from the very beginning.
E
Do I have the same preferences today as I had 40 years ago? Well, I was lucky. My first boss at Bell Labs, when I walked into his office a few years after that, said: "I've seen your flames on netnews." Oh no. Yeah — okay, upper management reads these things. Data that exists can be abused — by hackers, scofflaws, governments, or simply through a change in the law. And it turns out that, under US law, it may be impossible to mandate use restrictions for companies. They could adopt them, but you may not be able to mandate them under US law.
So how do we implement this use control? You could start with a privacy-preserving credential scheme. Tag all the data that you create with a privacy-preserving sub-identity and a data type, and you can publish a tuple saying (data type, anonymous identity, allowed uses), all digitally signed with your anonymous credential. And where do we put it? Well, gee, do we put it in the blockchain? No — no tomatoes, please. And if you change your mind about something, you just push out a new statement: it's your newest statement that wins.
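A minimal sketch of what such a signed use-control statement might look like, using an ordinary Ed25519 key as a stand-in for a real privacy-preserving credential (the field names and the JSON encoding are invented for illustration):

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A pseudonymous sub-identity: nothing in the key links back to a real name.
sub_identity = Ed25519PrivateKey.generate()
pub = sub_identity.public_key()

def publish_policy(data_type, allowed_uses):
    """Sign a (data type, allowed uses, timestamp) statement; newest wins."""
    stmt = json.dumps({
        "data_type": data_type,
        "allowed_uses": allowed_uses,
        "issued": time.time(),   # later statements supersede earlier ones
    }, sort_keys=True).encode()
    return stmt, sub_identity.sign(stmt)

stmt1, sig1 = publish_policy("location", ["navigation"])
# Changed your mind? Push out a newer statement under the same pseudonym.
stmt2, sig2 = publish_policy("location", ["navigation", "medical_research"])

pub.verify(sig2, stmt2)  # raises InvalidSignature if tampered with
print(json.loads(stmt2))
```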
Enforcement? Well, as is often pointed out, governments have a role: if you break a legally binding promise, if you break a law, governments can come down on you. Difficult, but it might be doable — it might be worth examining as a research project. What we really need is a new privacy paradigm.
E
It's got to scale to very many data collectors, known and unknown, now and in the future. It has to scale across time. It's got to be comprehensible by individuals. It's got to account for inferences. It's got to trade off the harms and benefits of different kinds of data use. And I have no idea what such a paradigm would look like. For me as an academic, that's great — if we knew the answer, it wouldn't be research. But, you know, this is the real challenge: how do we do this?
E
So
what
should
the
IETF
do,
obviously
encrypt
as
much
as
possible?
The
IETF
has
been
moving
in
that
direction
for
more
than
20
years
and
that's
great
avoid
creating
unnecessary
third
party
metadata
one
place.
This
really
shows
up
in
protocol.
Definitions
is
stuff,
that's
left
to
the
implementation,
because
that
becomes
finger
printable.
How
about
to
pick
one
random
example?
E
what if the HTTP headers could only appear in a specified order — and even a header you weren't sending had to be represented, you know, with just a semicolon or something? Design more private protocols. Do a privacy analysis of protocols, similar to what is done for Security Considerations today. You know, some years ago there was the GEOPRIV working group. Geolocation — they said: okay, this is dangerous stuff from a privacy perspective.
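The header-ordering point is easy to demonstrate: the order in which two clients emit identical headers is itself a fingerprinting vector, and a spec-mandated canonical order removes it. A toy sketch with invented header sets:

```python
import hashlib

# Two clients send the same headers, but in a different order —
# an implementation detail the spec left open.
client_a = ["Host", "User-Agent", "Accept", "Accept-Language"]
client_b = ["User-Agent", "Host", "Accept-Language", "Accept"]

def order_fingerprint(headers):
    return hashlib.sha256("|".join(headers).encode()).hexdigest()[:8]

# Distinguishable purely by ordering:
print(order_fingerprint(client_a) == order_fingerprint(client_b))  # False

# A mandated canonical order makes them indistinguishable again:
print(order_fingerprint(sorted(client_a)) ==
      order_fingerprint(sorted(client_b)))                         # True
```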
F
Steve, Arvind — thanks very much for your presentations. Two comments. First of all, related to Arvind's work: I'm very pleased to fund a colleague of yours, Serge Egelman at Berkeley, who's done a lot of work in this space, particularly around linkages on cellphones. And the idea that you have about TLS and privacy of IoT is something that I am deeply involved in, and one of the things it raises is the question of privacy brokerage.
E
Privacy, like any other security problem, has to be handled in the context of a threat model: who is trying to collect this data? What are they going to do with it? You know, with DNS over HTTPS or over TLS, you might get a central aggregation point — and do you trust them to be honest? Secure against governments? Secure against governments who will come armed with legal process? And you don't necessarily have a business relationship with them.
E
It's
not
clear
to
me
that
guarding
against
the
NSA
or
GCHQ,
or
the
FSB
or
GRU,
the
Mossad
or
whomever
is
the
best
threat
model
versus
the
commercial
threat
model
that
one
might
actually
be
best
dealt
with
with
laws
saying
your
ISP
can't
used,
collect
or
use
this
data
in
any
way,
rather
than
this
technical
mechanism
and
avoids
the
central
point
of
collection,
which
is
a
greater
threat
for
against
certain
threat
modes
watch.
The
threat
model.
I
My question is for Arvind. I really liked your example of how a limited technical mitigation encouraged a strong policy that then filled that gap for user privacy — this was about a third of the way through your presentation — and I think that paralleling that example with your conclusion is also interesting. So I guess my question would be — and then I can explain a bit more, if that's helpful —
I
I mean, I think there is a role that measurement and research can still play, even if it's limited now because of the privacy-enhanced protocols that we're using. How can we pivot — instead of trying to bargain, like in the bargaining stage of the stages of grief — how can we do both of these things when there's an actual inherent technical paradox between the two, and rather pivot into a more complicated relationship between policy incentives and policy sticks?
I
And then, you know — I wonder how academic researchers can help with, for example, human-rights — not human rights, but, like, impact assessments, or with getting companies to take more responsibility for doing privacy audits, security audits. It's a slower approach — it's not as fast as scanning a million websites every month — but I think that, where we're at now, the way we advance privacy for end users and get the higher-hanging fruit is that we actually have to have more complicated approaches.
D
Economists call this an information asymmetry, and that particular information asymmetry was closed by lemon laws that mandated certain information disclosure — pardon me — and that guaranteed the right of buyers of cars to first take them to mechanics to be inspected, and so on. So, broadly, to your question: as long as we have some way of closing this information asymmetry that exists between the sellers of products and services and the people who use them, I think we're in good shape. One of the ways we've been doing that is with academic research.
D
That's
been,
you
know,
scanning
a
million
endpoints
at
once,
but
it
doesn't
have
to
be
the
only
way.
Another
critical
way
to
do
that
has
been
journalists
have
been
you
know,
individually,
examining
these
products
in
a
lot
of
detail
and
holding
companies
feet
to
the
fire.
So
as
long
as
we
have
some
oversight
mechanism,
whether
that
comes
from
law,
whether
that
comes
from
academia,
whether
that
comes
from
journalism
or
whether
it
simply
comes
from
a
more
informed
public
that
helps
close,
this
information,
asymmetry,
then
I
think
we'll
be
in
better
shape.
Thank
you.
J
Peter [surname unclear], Deutsche Telekom. From my point of view, that was an excellent presentation, but it was very technology-oriented — and also very North American-oriented. In Europe, and especially in Germany, we have very strict laws regarding privacy. You mentioned the GDPR, which has been in effect since last year.
E
You
I
agree:
I,
agree
completely.
The
document
that
that
I
wrote
to
it
my
talk
was
derived
from
was
a
submission
to
a
u.s.
government
process
on
privacy,
because
I
agree
completely.
That
is
a
very
important
legal
role
for
the
legalities
here
in
the
governments
around
the
world,
but
I
think
that
trying
to
base
your
privacy
on
notice
and
consent
from
a
technical
perspective
is
not
going
to
work
and
I
want
what
I
was
what
my
paper
said
is
we
need
to
find
a
different
paradigm
for
regulators
and
legislators
to
mandate.
D
I don't think it's a dichotomy. In fact, a lot of the investigations that have come about under the GDPR, and the fines that have resulted from them — those privacy issues only came to be known because of the kind of research that I described, whether it was done by academics, journalists or some other third parties. So that's technical work, in a sense, and for me the real success stories involve the collaboration between technical teams and legal measures.
K
Folks, thank you very much for your talks — both Steve and Arvind. I want to actually follow up on this discussion because, in fact, that's precisely what I wanted to say. In this world where we believe privacy is getting harder and the battle is lost, I very much appreciate the message of: no, it's not lost; we can keep improving things. And also, just following up on the previous speaker: we are going all the way from the extremes of regulation towards the purely technical, and I think the two have to marry at some point in time.
K
It's true we don't have the answers to all the questions. For instance, I was recently in a regulatory discussion where they were just discussing: okay, in a world where the watch is checking your vital signs, and then it's sending them to your phone, and then that's sending them to an app, and that's sending them to a cloud provider — whom do you regulate? It's no longer the world of "one piece does one thing." So it's important — and I agree with Arvind's message — that we can help the regulatory bodies understand
K
who the service provider is — probably the most accountable one — and make sure that the information flows down. And also, on our side as technical writers, we do indeed have the role of writing the right standard, but also of educating people as much as we can; the regulatory part could be a section. Steve, you mentioned the Privacy Considerations — I think that's a great way to communicate what the standard should do and what the issues are.
L
Hi,
riad
Wahby
Arvind,
you
mentioned
at
the
very
end
of
your
talk,
a
project
of
Stanford
TLS,
our
AR
rotate
and
release.
I
was
one
of
the
authors
on
that.
So
I
think
you're,
absolutely
right
that
there
are
some
technical
measures
like
that,
but
just
to
provide
a
little
background
in
kind
of
a
counterpoint.
L
While
we
were
working
on
that,
we
actually
spoke
with
some
of
the
people
on
the
TLS
working
group
and
said:
hey
look,
it
might
be
the
case
that,
like
a
small
change
to
TLS,
would
actually
make
this
easier,
and
we
very
rightly
got
pushback
from
from
the
TLS
working
group
who
said
yeah,
but
we
don't
want
to
make
this
easier
because
yeah,
you
might
want
to
use
it
for
watching
your
own
devices,
but
anything
that
we
make
easier
for.
You
is
going
to
also
be
easier
for
somebody
who's
spying
on
you.
L
So
while
it's
true
that
we
want
to
look
at
our
devices,
it
seems
like
technical
measures
at
the
level
of
you
know
the
encryption
standards,
maybe
not
the
right
way
to
go.
We
may
be
in
some
sense
at
the
mercy
of
the
people
who
are
building
the
device
is
almost
no
matter
what
we
do,
because
you
know
we
shouldn't
insert
back
doors
into
TLS
for
our
own
good.
They
will
hurt
us
more
than
they
will
help
us.
So
just
yeah.
C
Okay — Max Pala, CableLabs. Thanks for the talks. I would like to ask you a question following the gentleman from Deutsche Telekom, about ownership of the data. This is a very big difference between the US and Europe: in Europe, ownership of the data is always about me, while in the U.S., once it's collected, it's the property of whoever collected it. And this, I think, is the biggest issue in privacy.
E
Data ownership is a really complicated question. There's a fair amount of legal writing lately — legal academic writing — on why trying to treat data as property can have bad side effects. One of the interesting things under US law is that in a lot of these transactions there are two different parties that have ownership. So, I mentioned my mechanic uploading or selling my odometer readings. Well, yes: my mechanic is recording the odometer reading to, you know, let me know when I should change my oil again, and that's perfectly fine — that becomes a business record of the mechanic, and the data belongs to the mechanic, and therefore the mechanic can sell it as well as me. And my privacy problem is that it gets aggregated and attributed to me as well. So there are very complicated questions with trying to treat this as property, even apart from the international issues of different philosophies. And there's a lot of data that businesses very legitimately have to collect — medical personnel utterly rely on it.
D
...that measurement should be a standard part of the regulatory process. And just to tell you how much I agree with that: at Princeton, I'm part of the Center for Information Technology Policy, and it was started 15 years ago with precisely the notion that there need to be more technologists in government — exactly so that we can do more of the sort of things you're calling for — because today the main limitation is just the technical expertise that exists in regulatory agencies.
M
Name withheld, actually. So, Steve said we had the wrong threat model — you know, we're thinking about the CIA, etc., attacking. I think it goes beyond that. These third-party databases are a national security threat, and we saw them weaponized in 2016. And it isn't just personal data. I know of an insurance company that is operated by an individual who is widely believed to be operating on behalf of an intelligence agency — a hostile one. This insurance agency specializes in commercial vehicles.
M
If you think about what such an insurance agency would be doing: it is collecting data on all the trucks that are moving in that country, and they managed to get 70% of the market in a markedly short time, because businesses are very price-sensitive. So when we're thinking about this, it is no longer just us as individuals having concerns about our personal privacy; it is also a matter of national security and patriotism.
E
Data breaches of multinational US firms are frequently thought to have been perpetrated by foreign intelligence agencies — the Equifax and the Marriott breaches have both been attributed to foreign intelligence agencies. In fact, someone at Equifax said just yesterday that they've seen zero evidence that any of the stolen data has been used commercially, for identity theft or anything else — which goes along pretty well with the notion that it was an intelligence agency that took it. So yeah, this is very plausible.
B
So,
thank
you
very
much.
We
now
have
a
four
minute
break
I
think
before
the
administrative
plenary
I'd
like
to
thank
again
very
much
both
Steve
and
Arvind.
This
is
an
excellent
evening.
I
had
I
learned
a
lot
I
especially
appreciate
the
challenges
to
the
IETF
from
both
of
you
and
we
hope
to
live
up
to
them.
So
thank
you
very
much,
we'll
see
you
in
180
seconds.