From YouTube: IETF105-TECHPLENARY-20190724-1710
Description
TECHPLENARY meeting session at IETF105
2019/07/24 1710
https://datatracker.ietf.org/meeting/105/proceedings/
You can hang out on Meetecho, which will also be a stress test for Meetecho — also a stress test. So, as a beginning note, you'll notice on the agendas that we're trying out something new this time: we're more explicitly splitting the tech and admin plenaries. Way, way back in the past these were on separate evenings, and then they sort of came together, and then they came even farther together into a single session.
This part of the session is one hour long, which is why we went ahead and got started even though we don't have video yet, and we're going to be holding pretty strictly to time. There will be time for questions and discussion at the end, so during the talks, clarifying questions only — clarifying questions, with that "Privacy?" question mark in mind. What's the delay on... okay, all right!
B
Yes,
next
slide,
please,
as
some
of
you
may
be
aware,
the
IB
and
the
IETF
at
least
we
hope
many
in
the
IETF
is
deeply
interested
in
confidentiality
on
the
Internet.
This
is
a
conversation
that
we've
had
ongoing
for
a
while
we're
interested
in
this.
In
large
part
for
reasons
of
privacy,
we
spend
a
lot
of
time
in
the
working
groups.
These
days,
that
is,
the
most
applause
I
thought
I
would
ever
get
for
privacy.
With
a
question
mark
and
an
IETF
meeting
post
Vancouver
I'll
go
ahead
and
it
gets
better.
B
We've
been
talking
a
lot
about
in
the
in
the
the
working
groups
and
in
the
hallways,
and
you
know
around
the
working
groups
and
the
press
and
sort
of
everywhere.
It's
been
a
while,
since
we've
addressed
it
in
plenary.
So
we'd
like
to
change
that
tonight.
We
have
a
program
where
we'd
like
to
talk
about
current
issues
and
eternal
issues
in
Internet
privacy
with
Arvind
or
Ivan,
and
Ryan
I
worked
so
hard
to
get
your
name
right
and
Steve.
Elleven
Arvin
Ryan
is
an
associate
professor
of
computer
science
at
Princeton.
He'll be talking about some of the implications of current trends in communications privacy on the Internet in the large — sort of a contextual look at this. Steve Bellovin needs no introduction in this room, but I'm going to try to do so anyway. He is a former member of the IAB and a former Security Area Director. He was instrumental in the creation of Usenet — some of you may have heard of that. He's currently a professor of computer science at Columbia and an affiliate at Columbia Law.
So I'd like to share with you what I've learned from a decade of doing privacy measurement. That's kind of a boring-sounding term, but what it really means is trying to find privacy vulnerabilities, ideally on a large scale — I'm talking millions of endpoints — in an automated or mostly automated way. Before I do that, I'll start with a couple of caveats.
One is, I'm going to be really upfront that most of my work has been in the web space, and my prior engagement with standards bodies has been with the W3C, and that's what a lot of this is going to be informed by. But what I'm going to try really hard to do is extract some principles that are much more broadly applicable, and that's what I'd like to share with you today. Another thing that's going to be a common theme of the presentation...
D
Is
that
I'm
going
to
be
talking
about
issues
beyond
encryption
I'm,
going
to
assume
that
we're
in
a
world
with
pervasive
encryption
and
in
fact,
some
of
the
things
that
I'll
touch
upon
are
perhaps
some
downsides
of
encryption
or
privacy
and
how
we
can
try
to
mitigate
them?
So
that
already
sounds
surprising
to
some
of
you,
so
I
hope
this
will
be
an
interesting
discussion.
I
should
also
say
that
that,
as
you've
heard
from
the
intro
I'm
an
academic,
so
my
job
is
to
think
you
know
idealistic
lucify
thoughts.
D
There
are
going
to
be
points
where
you're
going
to
feel.
Oh,
this
will
never
work
in
the
real
world
and
you're
welcome
to
come,
say
that
I'm
Q&A,
that's
totally
our
game
and
I
appreciate
that.
Okay,
so
with
those
caveats
here
are
three
things
that
I
want
to
share
with
you.
The
first
thing
is
an
issue
that
very
options
up
when
we're
talking
about
privacy
beyond
approaching,
especially
when
we're
talking
about
more
subtle
privacy
threats
such
as
device
fingerprinting.
An
argument
that
often
comes
up
is
oh
forget
about
fingerprinting.
That
ship
has
sailed.
Ironically, you can see on the list that privacy features like Do Not Track also contribute to fingerprinting, because you either have Do Not Track enabled or you don't, so that's a small amount of additional entropy, et cetera.
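That "small amount of additional entropy" can be made concrete with a short sketch. This is illustrative only: the attribute values below are invented, not measured data, but they show how each browser attribute — including a boolean like Do Not Track — contributes Shannon entropy toward distinguishing users.

```python
import math
from collections import Counter

def shannon_entropy_bits(values):
    """Shannon entropy (bits) of the observed distribution of one attribute."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical population of 10 users: Do Not Track settings and time zones.
dnt = [True, True, False, False, False, False, False, False, False, False]
tz = ["UTC-5", "UTC-5", "UTC-5", "UTC+1", "UTC+1", "UTC+1",
      "UTC+1", "UTC+9", "UTC+9", "UTC+9"]

dnt_bits = shannon_entropy_bits(dnt)  # ~0.72 bits from the boolean setting
tz_bits = shannon_entropy_bits(tz)    # ~1.57 bits from the time zone

# A boolean feature contributes at most 1 bit, but bits from many
# (roughly independent) attributes add up toward a unique fingerprint.
print(round(dnt_bits, 2), round(tz_bits, 2))
```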
So one thing you might conclude from this is: the horse has left the barn, fingerprinting is devastatingly effective, and we shouldn't even try to minimize the fingerprintability of new features that we put into standards. Now, at the W3C, to their credit, there were a lot of people who still tried to minimize the fingerprintability of new features, and I don't want to present this as a criticism of other people — this was absolutely me up until about a year ago. This is what I believed, and here's why I changed my mind. Those studies that I presented were really excellent studies, but there was something wrong with the way a lot of people interpreted them.
One weird thing about those studies is that the users who participated were self-selected. Does that mean the results could be non-representative in some way? What could be different about self-selected users? One possibility is that the users who would self-select into a study like that are actually really tech-savvy people — the kind of people who are likely to make a lot of modifications to their browsers that would make them more unique and more fingerprintable.
So that was one interesting kind of bias that some researchers suspected could be in those studies, including some of the researchers at INRIA who were responsible for one of them. What they then did was partner with a major French website and fingerprint all of the users of that website, without really telling them. By doing this slightly ethically questionable thing, they did a statistically much more rigorous thing, and they published that study, which found, in fact, contrary to some of the previous findings...
...that only a third of the users were unique. And as more and more activity shifts to mobile: less than a fifth of mobile users were unique, because those devices are less customizable. Further, as Flash and Java and other old plugins are phased out, that number is actually going down. For me, when I looked at the findings of this study, the conclusion was very different.
Even little things that browsers can do to minimize the fingerprintability of features are going to have a big impact. And so I came away from this with a very different point of view than a lot of people had before this study — which was: let's not even bother, let's not hold back features for the sake of privacy. After the study, what I concluded was quite the opposite. So I think this is a much more general principle: in a lot of contexts we hear "the ship has sailed."
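One of those "little things" browsers can do is reduce the precision of the values an API exposes. Here is a minimal sketch of that idea, using a battery-level reading as the example; the bucket size is an illustrative assumption, not any browser's actual policy.

```python
def quantize_battery_level(raw_level, step=0.25):
    """Round a 0.0-1.0 battery level to a coarse bucket.

    A high-precision reading (e.g. 0.5731) is nearly unique across users
    and therefore fingerprintable; rounding to a handful of buckets caps
    the entropy this API can contribute at log2(1/step + 1) bits.
    """
    if not 0.0 <= raw_level <= 1.0:
        raise ValueError("battery level must be in [0, 1]")
    return round(round(raw_level / step) * step, 10)

print(quantize_battery_level(0.5731))  # 0.5
print(quantize_battery_level(0.98))    # 1.0
```

The same quantization trick applies to timers, screen dimensions, sensor readings, and similar values: the feature keeps working for its intended purpose while contributing far fewer identifying bits.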
The ship has not sailed. And one of the reasons people will say the ship has sailed is: if you don't have a perfect defense — even if you try to mitigate fingerprintability, here's a clever way somebody can get around it — why have an imperfect defense at all? Is it not better not to give people a false sense of security? That is a point on which I will disagree.
D
I
think
that
imperfect
defenses
are
still
very
useful
and
one
reason
I
believe
that
is
because
technology
doesn't
have
to
bear
the
full
burden
of
privacy
protection.
What
do
I
mean
by
this
here's?
An
interesting
example:
Safari
has
third-party
cookie
blocking,
as
you
might
know,
and
it's
not
a
perfect
defense.
It
can
be
circumvented.
In
fact,
Google
decided
to
do
exactly
that.
Google
decided
to
circumvent
it
and
once
they
did
something
interesting
happened
in
the
US.
The
Federal
Trade
Commission
got
involved,
they
said
hey,
you
can't
do
that.
D
You
can't
circumvent
a
privacy
measure,
that's
actually
a
violation
of
the
law
and
they
went
after
Google
and
they
find
Google.
So
that's
an
interesting
phenomenon
where
the
technology
itself
was
not
bulletproof,
but
it
turns
out
that
circumventing
even
a
weak
privacy
protection
measure
can
actually
get
companies
into
trouble
with
the
law.
It
can
also
be
a
reputational
harm.
D
That
is
still
useful
because
it
gives
a
couple
of
years
for
new
defenses
to
be
developed,
whether
they
may
be
technical
or
legal,
or
something
like
that.
So
that
was
point
number
one
point
number
two
that
I
want
to
talk
about
is
we're
in
a
world
where
what
privacy
means
to
people
changes
very
quickly.
Whether
or
not
something
is
a
privacy
breach,
changes
very
quickly,
so
both
privacy
attitudes
and
privacy
infringing
technologies
change
pretty
quickly.
D
So
one
good
example
of
this
is
one
of
my
favorite
examples
of
how
privacy
attitudes
evolved
quickly.
Is
that
if
you
thought
about
privacy
10
years
ago,
most
users
would
have
been
concerned
with
what
are
the
individual
harms
that
can
accrue
to
me
out
of
all
of
those
data
collection
out
of
all
of
the
databases
owned
by
companies
that
have
my
personal
information?
Is
it
identity
theft
is
the
data
breaches?
Is
it
perhaps
targeted
price
discrimination?
What
should
I
be
worried
about?
Those
were
the
kinds
of
privacy
questions
that
people
were
asking.
D
You
know
the
overall
society
that
we
live
in,
so
I
want
to
say
that
there
has
been
a
shift
from
these
very
individualized
concerns
about
privacy
to
more
collective
societal
concerns
about
privacy
among
privacy,
scholars
and
privacy
advocates.
That
shift
has
been
pretty
stark
and
even
among
the
general
public
I
think
there
has
been
a
substantial
shift,
and
so
what
this
means
is
that
a
certain
type
of
data
collide
that
might
have
seen
pretty
innocuous
ten
years
ago
begins
to
look
very
different
today.
So
that
was
one
example.
D
I
have
a
couple
of
other
examples
that
I'll
skip,
but
the
result
of
this
is
that
it's
very
hard
in
a
standard
document
to
write
down
and
fixed
privacy
definition
and
then
say
that
I've
analyzed
this
protocol
with
respect
to
this
privacy
definition
and
I'm,
confident
that
this
is
going
to
be
a
privacy
respecting
protocol
now
and
for
all
time
to
come,
and
so
going
back
to
that
example
of
individual
versus
collective
harms.
Let
me
show
you
very
quickly
that
paper
by
Cambridge
researchers
this
was
in
2013.
D
This
was
the
paper
that
realised
that
you
could
take
people's
Facebook
Likes,
which
is
a
very
innocuous
sounding
type
of
information
and
use
that
to
predict
their
so-called
Big
Five
personality
traits
and
those
are
things
like
emotional
stability,
agreeableness,
extraversion
and
so
on
the
stuff
that
you
see
in
green
over
there.
If
you
can
even
greet
that
text,
sorry
about
that,
the
font
size
is
a
little
small,
and
this
is
exactly
the
research
that
was
allegedly
weaponized
by
Cambridge
analytic,
ofor
psychographic
targeting.
So
this
was
not
necessarily
anticipated
a
few
years
ago.
D
There
are
many
other
examples
of
this,
of
improvements
in
machine
learning,
training,
innocuous
data
into
something
that
can
be
used
for
something
much
more
problematic.
This
was
a
headline
from
a
few
years
ago,
statisticians
at
Target
had
figured
out
how
to
use
a
person's
shopping
records
to
figure
out
whether
they
were
pregnant
or
not,
and
so
a
one
concrete
threat
along
these
lines
is
well
stated
by
Paul
ohm
who's,
a
legal
scholar
who
calls
this
the
database
of
Rouen.
D
But
these
are
not
things
that
we
really
recognized
as
privacy
concerns.
Maybe
ten
years
ago,
as
much
as
we
do
today,
so
that's
kind
of
what
by
the
landscape
of
privacy
is
shifting
pretty
quickly,
and
this
is
a
challenge
for
our
standards
document
in
a
standards
process
which
needs
to
be
really
long-lived.
So
we
thought
about
this
in
a
paper
recently
where
we
looked
at
specifically
the
battery
status
API
in
the
web
context,
and
this
was
an
API
that
turned
out
to
have
much
more
serious
fingerprint
ability.
D
Privacy
consequences,
then
was
realized
and
therefore
was
taken
out
of
a
number
of
browsers
after
it
had
shipped
and
after
people
had
started
using
it.
That
was
kind
of
unprecedented.
So
we
looked
at.
How
did
this
go
wrong
and
how
can
we
be
more
aware
of
these
potential
and
misuses
during
the
standards
process?
And
so
here's
a
paper
citation
at
the
bottom
and
what
we
proposed
in
this
paper
at
a
high
level?
D
What
we
called
for
is
a
much
tighter
loop
between
standards
agencies,
as
well
as
researchers
and
developers
and
by
developers
I
mean
both
implementers
and
also
developers
in
a
much
more
general
sense
people
who
are
using
the
api's
that
are,
you
know,
implemented
by
by
the
browser
vendors,
for
example,
and
as
part
of
this,
we
think
that
it
would
be
really
useful
to
incentivize
academics
to
do
two
things.
One
is
to
get
involved
in
the
standards
process
and
do
privacy
reviews
of
standards
and
the
other
one.
D
This
is
perhaps
still
quite
missing,
which
is
once
an
API
is
out
in
the
wild
and
once
people
are
using
it
to
do
regular
privacy
audits
of
how
it's
being
used
and
abused
I've
talked
about
this
a
few
times
and
one
question
that
I
get
is
sure
this
sounds
good
in
theory,
but
it's
hard
to
convince
researchers
to
do
this.
How
do
we
do
that
now?
One
good
thing
I'll
say
about
this.
This
actually
sounds
like
a
horrible
thing,
but
I'll
claim
it's.
D
Another
thing
that
I
think
would
be
useful
is,
as
part
of
the
standard
process,
to
be
explicit
about
assumptions,
because
privacy
changes
so
quickly,
because
we
can't
anticipate
what
new
privacy
infringing
technologies
will
be
out
there
in
five
years.
It
helps
to
be
explicit
about
assumptions
as
part
of
the
standards
process,
and
that
is
to
be
able
to
explicitly
say
we
have
created
the
standard,
assuming
that
this
API
will
not
be
highly
susceptible
to
fingerprint
ability.
D
But
if
it
turns
out
that
that's
the
case,
if
it
turns
out
that
this
is
being
exploited
in
the
wild
here,
are
some
things
that
implementers
could
do
to
mitigate
that
risk.
So
that's
the
second
point.
Okay,
the
third
and
final
point
that
I
want
to
talk
about
is
that
this
idea
of
measurement,
which
is
finding
these
privacy
violations
on
a
large
scale,
I'm
claiming
that
it's
been
really
useful
for
privacy,
but
unfortunately,
it's
going
away
and
I
want
to
talk
about
whether
there
is
a
way
to
preserve
it.
D
I
don't
want
to
make
the
sound
like
a
sky
is
falling
kind
of
claim,
but
in
my
little
corner
of
the
research
world
the
sky
has
already
fallen
and
a
lot
of
us
that
have
moved
on
to
other
research
areas.
So
let
me
tell
you
why
that
is
and
why
that
should
worry
us
from
a
privacy
perspective
and
to
see
whether
there's
a
way
to
preserve
it
so
I'm,
claiming
that,
at
least
in
the
web
context,
measurement
has
played
a
very
key
role
in
keeping
the
worst
of
the
privacy
abuses
in
check.
D
Many
teams
around
the
world
have
been
working
on
web
privacy
measurement.
I'll.
Tell
you
a
tiny
bit
about
my
own
team's
work,
something
that
we
built
is
a
tool
called
open
wpm.
This
is
a
the
github
page.
If
you
want
to
check
it
out,
as
you
can
see,
it's
an
actively
developed
open-source
project.
It
was
developed
at
Princeton
and
now
the
main
developer,
Steve
Englehart
has
moved
to
Mozilla,
it's
maintained
by
Mozilla
now,
so
what
it
is
I
don't
mean
for
any
of
the
details
on
this
page
to
be
important.
This
it's
just
the
URL.
D
If
you
want
to
look
at
it
or
the
name
open
wpm.
Now
what
it
is
is
an
instrumented
version
of
Firefox.
It's
basically
a
bot
that
visits
the
web's
top
1
million
websites
every
month
and
looks
at
what
kind
of
privacy
violation
violating
techniques
are
out
there.
It
even
does
things
like
put
in
fake
PII
into
various
forms
and
tries
to
see
where
they
go,
and
it
saves
all
that
data.
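The "plant fake PII and see where it goes" step can be sketched roughly as follows. Everything here is invented for illustration — the planted address, the hostnames, and the request log — and a real pipeline like OpenWPM's is far more thorough, but the core idea is simply to search captured traffic for the planted value and its common encodings.

```python
import base64
import hashlib
from urllib.parse import urlparse

# Hypothetical fake PII that the crawler typed into a web form.
PLANTED_EMAIL = "fake.user.105@example.com"

def leak_signatures(value):
    """The plain value plus commonly observed encodings of it."""
    raw = value.encode()
    return {
        value,
        base64.b64encode(raw).decode(),
        hashlib.md5(raw).hexdigest(),
        hashlib.sha256(raw).hexdigest(),
    }

def find_leaks(request_urls):
    """Return hosts whose outgoing request URLs carry the planted PII."""
    sigs = leak_signatures(PLANTED_EMAIL)
    return {
        urlparse(url).hostname
        for url in request_urls
        if any(sig in url for sig in sigs)
    }

# Hypothetical traffic log captured while crawling one page.
log = [
    "https://cdn.example.net/app.js",
    "https://tracker.example.org/collect?email=fake.user.105@example.com",
    "https://ads.example.biz/px?u=" + hashlib.md5(PLANTED_EMAIL.encode()).hexdigest(),
]
print(sorted(find_leaks(log)))  # ['ads.example.biz', 'tracker.example.org']
```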
D
We
have
half
a
terabyte
of
data
per
month
and
then
we
run
various
scripts
on
that
data
to
try
to
find
privacy
violations
and
publicize
them
and
get
people
to
change
their
practices.
We've
written
a
number
of
papers
based
on
this
data.
This
is
one
example.
It's
called
online
tracking
of
1
million
site
measurement
analysis
and,
as
you
can
see,
one
of
the
key
things
here
is
to
be
able
to
do
this
on
a
large
scale.
In
a
mostly
automated
way,
it's
had
a
number
of
positive
impacts
on
privacy.
D
Unfortunately,
the
downside
of
that
is
that
the
two
ends,
of
course
off
into
an
encryption,
are
the
device
in
the
server.
It
doesn't
involve
the
user.
It
doesn't
involve
a
researcher,
a
researcher
can
Smitham
these
devices,
a
researcher
can't
figure
out
what
data
is
being
collected
and
where
it's
being
sent-
and
we
think
this
is-
you-
know,
kind
of
a
crisis
for
this
kind
of
research.
It
makes
meaningful
privacy
measurements
basically
infeasible.
D
The
public
is
very
interested
in
these
questions.
For
example,
there
was
this
article
called
the
house
that
spied
on
me
that
just
looked
at
what
are
the
endpoints
of
communication
of
various
IOT
devices,
including
sex
toys.
Why
is
that
contacting
13
different
servers?
You
know
people
want
to
know
what
data
is
going
out
there,
and
this
is
important
not
just
from
a
privacy
advocate
point
of
view,
if
you're
a
company
and
you're
a
reputable
company-
and
you
want
to
able
to
show
your
users
that
your
data
collection
is
completely.
D
You
know,
according
to
your
specified
privacy
policies,
there's
no
good
way
to
do
that
today,
because
researchers
can't
examine
the
plaintext
of
these
communications
and
I
think
this
is
a
serious
issue.
For
example,
if
we
wanted
to
know
if
the
smart
light
bulbs
in
our
homes
are
transmitting
conversations
because
they
actually
have
microphones,
we
really
don't
have
a
good
way
to
check
that
today,
and
this
is
not
a
totally
paranoid
scenario.
Something
somewhat
similar
has
happened.
D
For
example,
this
interesting
thing
happened
a
few
months
ago,
where
Google
sent
an
email
to
all
of
the
owners
of
nest,
thermostats
and
said:
hey
your
nest.
Thermostat
is
also
a
Google
Voice
assistant
now,
and
people
like
what.
How
is
that
possible?
It
doesn't
have
a
microphone
and
Google
said
no,
it
does
have
a
microphone
and
if
people
said
what
we
didn't
know,
it
had
a
microphone
and
Google
said.
D
Yes,
it
does
check
the
privacy
policy,
and
people
said
what
privacy
policy,
nobody
reads:
the
privacy
policy
also
when
they
read
the
privacy
policy,
it
was
actually
not
in
there
and
then
Google
said.
Oh,
we
meant
to
disclose
that
in
the
privacy
policy.
Sorry,
that
was
an
oversight,
so,
of
course,
in
this
case,
I'm
willing
to
believe
that
it
was
an
oversight
on
the
part
of
Google,
but
if
there
was
a
malicious
you
know
vendor
who
put
microphones
and
devices
that
are
in
millions
of
people
homes.
I think this is a critical need — the idea being that when you enable this kind of debug mode, the user, or more likely a researcher (the details and user experience will depend on the device), has some way to intercept the plaintext in order to audit what's going on out there. There's a Stanford project related to this called TLS... TLS — what is it called? TLS-something. What I'm proposing is slightly different; I'm happy to hash out the details later.
So I'm going to talk about some modern issues in privacy today — and, you know, privacy is not a new issue. When I started doing the research that led to this talk — and by the way, these slides are already on my webpage, with a link to the references; a more technical, legal companion document is also on my webpage — I found that a lot of this stuff goes back to the 1960s. The New York City Bar Association started studying computers and privacy in 1962; Alan Westin prepared basically the report of that committee in '67, and it has been very influential.
Since then — look at the timeline — he published this book in '67; six years later a US government committee came up with what became known as the Fair Information Practice Principles: consent, security, openness, use specification and so on. In 1974, a year later, the US government actually enacted this into law — but it only applied to the US government; it didn't apply to private corporations. That's not the American way.
H
A
few
years
later,
the
OECD
suggested
more
or
less
the
same
thing,
but
applying
to
the
private
sector
in
94
the
EU
and
active
data
privacy.
Director
seven
years
ago,
the
gdpr
was
enacted
when
it
to
affect
a
couple
of
years
ago,
but
from
10,000
meters.
All
of
these
are
substantially
as
India
tremendous
difference
in
details.
But
fundamentally,
if
you
consent
the
data
that
you
have,
data
about,
you
can
and
will
be
collected
and
notice
and
consent,
and
so
notice
and
consent
is
sites.
...tell you what they're going to collect and what they're going to do with it, and by using the website, by using the device, you are deemed to have consented to this policy. Some of the risks were known back in the 1960s. Academics, law professors wrote: people are just going to go along with the request because they want the service. The 1960s — we didn't have Google, we didn't have Facebook — and they realized people are going to go along to get the benefits.
H
They
realized.
They
told
the
US
Congress
people
are
going
to
share
passwords.
Maybe
we
need
multi-factor,
authentication,
1967
folks,
how
many
sites
you
log
in
to
adjust
a
password.
Today
they
worried
about
hackers.
They
even
cited
MIT
the
MIT
students
breaking
into
systems
for
fun,
insider
threats.
Why
are
tapping
the
need
for
encryption,
the
importance
of
metadata
and
the
inferences
you
can
draw
from
metadata
in
again
1967-1969
the
danger
of
large
searchable
aggregate
able
databases?
There's a tremendous amount of data being collected, and we don't know who is collecting it. We have privacy policies; we have location data; and of course there are the governments of the world. There's a tremendous amount of over-collection. Apart from all the folks to whom you give consent, there are the data brokers: outside parties whose business is to collect data about people and sell it. They collect it, they buy it and they sell it — sometimes from public records, sometimes from private transactions that you know nothing about.
Last year I sold a car, and I found out that my odometer readings had been sold by my mechanic. Did I consent to it? No — that was a private deal between the mechanic and some data collection company. The ads that you see on the web are generally not coming from the website you're visiting; they're coming from ad brokers — often multiple levels of ad brokers doing HTTP redirects, each one of which is a separate website that can collect and set cookies.
They combine that with information from third-party data aggregators, estimate your age, your gender, your income, and use that to say how valuable a customer you are and, therefore, what ad is appropriate to show you — and you don't see any of this. But we have privacy policies! Nope. Out of curiosity: who in this room reads every privacy policy they encounter? I'm impressed — I am seriously impressed. Seriously. You can put your hands down.
So if you say you might do anything, then you don't lie when you do anything: "we may collect personal information and other information about you" — remember the data brokers, remember the analytics platforms — "from business partners, contractors and other third parties." In other words, the world. To quote an Advisory Committee report to President Obama from about five years ago:
"Only in some fantasy world do users actually read these notices and understand their implications before clicking to indicate their consent." By and large, that's true. And remember, because of all these third and fourth and fifth and sixth parties on the web, you don't even know what websites you're consenting to. You go to a news site, a sports site, what have you, and you're careful: you read it, you look at the fine print, and it says "by the way, read our advertising partners' privacy policies too." Who are they? Good luck...
...finding out. Location data is a huge issue for mobile devices. Lots of apps are collecting and analyzing this kind of data, and even if the app is not doing the collection and transmission, IP geolocation — a very mature technology — reveals a lot. Is it perfect? No. Is it very, very good? Yes. And this stuff doesn't have to be perfect. If data exists, it's available to governments. Sometimes, in some governments, there's a complex, restricted and somewhat painful process to gain access to your data — the US government has this 45-year-old privacy law, under which...
H
You
can,
under
certain
circumstances,
gain
access
to
certain
information
about
the
held
about
you,
other
governments,
don't
really
care
about
the
nice
ease
of
privacy
policies
and
access
yeah.
It
is
your
data
we
wanted,
we
haven't
go
away
and,
of
course,
that's
even
ignoring
what
you
know.
193
nations
in
the
in
the
UN
I
think
about
192
of
them
have
espionage
agency.
Is
they
collect
data
via
technical
means
and
other
means?
And
this
you
don't
get
to
look
at
it
all
the
privacy
laws
that
we
have
are
largely
based
on.
...what's called PII, personally identifiable information: your name, your email address, a government ID number of some sort. The definition varies: the EU considers IP addresses PII; much of the United States government does not. I'm somewhere in between — I think they're both right, depending on the circumstances. But it turns out you don't need PII to invade somebody's privacy. Amazon doesn't need your name and address to recommend products. Oh, they might like it: oh, you live in a well-to-do neighborhood, we're going to recommend more expensive products; you have an ethnic surname, a family name...
...let me recommend products that appeal to that ethnic group. That can help, but they don't really need it: people who bought this also bought that. Netflix doesn't need to know who you are to recommend movies. TiVo doesn't need to know who you are to recommend TV shows. There's a great essay out there — you can search for it — called "My TiVo Thinks I'm Gay." Somebody reacted when he started getting recommendations from TiVo for gay-themed movies, so he started trying to overcorrect by watching manly he-man movies, war movies and so on.
If you're worried about PII, some people try to anonymize the data: we'll just strip off the identifying information. It doesn't work. First of all, for most kinds of anonymization, the real world has shown it is easy to re-identify — Arvind, you've done some of that, as I recall, haven't you? And if you do too good a job of anonymization, you may actually destroy the utility of the data for certain very important things.
For example, some medical dosage calculations are done based on machine learning over a large database of patient information — very successful for calculating the proper dose of warfarin, which had been a very tricky problem. But some academics showed that if you anonymize the data well enough to really hide the patients' identities, the calculations wouldn't work: you've hidden too much of the subtle detail about the patients' medical condition.
Focusing on PII also misses the importance today of machine learning and the inferences it can make. You can tell someone's sexual orientation from the kinds of things they do — again, "My TiVo Thinks I'm Gay" — you can infer this. Is this good? Whether or not it should be, it's private information to a lot of people, and it's much, much harder to control, because it's not based on data directly collected.
The foods that I buy might indicate my ethnicity; proxy variables are a very powerful thing. There was a study done by the US Federal Trade Commission about 10 years ago. They discovered that auto insurance companies were using credit scores to set rates. What does your ability or willingness to pay a debt have to do with whether or not you're going to get into an automobile accident? The FTC staff came to three conclusions. One: it was a valid predictor. Why? Who knows — machine learning.
Two: the higher rates were going to certain ethnic groups, based on credit scores. Well, that's bad social policy. So the FTC staff said: we're going to solve this; we're going to try to build a model that's just as predictive but not discriminatory. And guess what — they couldn't do it. There was something deep in the data that said...
So to me, notice and consent is dead. No one knows who collects the data; no one knows what they'll do with it; no one knows where it's stored. And some of the most sensitive stuff, like location, is dual use: it's used for your benefit — how do I get from point A to point B in a map program — and as part of what's called your data shadow. Even the US Supreme Court has noted how sensitive location data can be in the aggregate.
So if we don't have notice and consent, what should we do? What should we replace it with? One answer is use control. It's controversial, but: give up on restricting data collection — it doesn't work. It works better in the EU, but still doesn't work that well. Instead, let people specify how their data can be used — not what can be collected, but what it can be used for: targeted advertising, statistical analysis, medical research, what have you.
How do you give people a really usable interface to specify: here's a kind of data we're collecting, and here's a kind of use you may or may not want to permit for this kind of data? Usability of privacy settings is a fiendishly difficult problem; very few, if anyone, have gotten it right. And you've got to give consent across long time intervals. I have been posting stuff on the net for about 40 years now — as mentioned in the intro, I'm one of the people who created netnews, which went live in January of 1980.
H
Do
I
have
the
same
preferences
today
as
I
had
40
years
ago,
I
was
lucky
my
first
boss
at
Bell
Labs,
when
I
walked
into
his
office
a
few
years
after
that
said,
I've
seen
your
flames
on
net
news,
Steve,
yeah,
okay,
upper
management
reads
these
things:
yeah,
okay
data
that
exists
can
be
abused
by
hackers.
Scofflaws
governments
were
simply
through
a
change
in
the
law
and
it
turns
out
that
under
US
law
it
may
be
impossible
to
mandate
use
restrictions
for
companies.
Companies could adopt them voluntarily, but you may not be able to mandate them under US law. So how do we implement use control? You could start with a privacy-preserving credential scheme: tag all the data that you create with a privacy-preserving sub-identity and a data type, and publish a tuple saying (data type, anonymous identity, allowed uses), all digitally signed with your anonymous, pseudonymous credential. And where do we put it? Well, gee, do we put it in the blockchain? No — no tomatoes, please.
And if you change your mind about something, you just push out something newer: it's your newest statement that wins. Enforcement? Well, as is often pointed out, governments have a role: if you break a legally binding promise, if you break a law, governments can come down on you. Difficult, but it might be doable — it might be worth examining as a research project.
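The tagged-tuple idea above can be sketched minimally as follows. This is a toy under loud assumptions: an HMAC over a shared secret stands in for a real anonymous-credential signature (a real scheme would use pseudonymous public-key credentials), and the identifiers and allowed-use labels are invented.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for a privacy-preserving credential. In a real scheme this
# would be a pseudonymous public/private key pair, not a shared secret.
credential_key = secrets.token_bytes(32)

def publish_policy(data_type, sub_identity, allowed_uses, seq):
    """Build a signed (data type, anonymous identity, allowed uses) tuple.
    `seq` orders statements so that the newest one wins."""
    body = json.dumps(
        {"data_type": data_type, "sub_identity": sub_identity,
         "allowed_uses": sorted(allowed_uses), "seq": seq},
        sort_keys=True).encode()
    sig = hmac.new(credential_key, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def newest_valid(statements):
    """Verify each signature and return the policy with the highest seq."""
    valid = [s for s in statements
             if hmac.compare_digest(
                 s["sig"],
                 hmac.new(credential_key, s["body"], hashlib.sha256).hexdigest())]
    return max(valid, key=lambda s: json.loads(s["body"])["seq"])

older = publish_policy("location", "pseudonym-17", {"maps"}, seq=1)
newer = publish_policy("location", "pseudonym-17", {"maps", "research"}, seq=2)
current = json.loads(newest_valid([older, newer])["body"])
print(current["allowed_uses"])  # ['maps', 'research']
```

The "newest statement wins" rule falls out of the sequence number; where the tuples get published, and how verifiers obtain the credential's public material, are exactly the open questions the talk flags.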
What we really need is a new privacy paradigm. It's got to scale to very many data collectors, known and unknown, now and in the future. It has to scale across time. It's got to be comprehensible by individuals.
H
It's
got
to
account
for
inferences.
It's
got
to
trade-off.
The
harms
and
benefits
of
different
kinds
of
data
use
and
I
have
no
idea
what
such
a
paradigm
would
look
like,
or
is
that
for
me
as
an
academic?
That's
great!
If
we
knew
the
answer,
it
wouldn't
be
research
but
yeah.
This
is
the
real
challenge.
How
do
we
do
this?
So
what
should
the
IETF
do,
obviously
encrypt
as
much
as
possible?
The
ITF
has
been
moving
in
that
direction
for
more
than
20
years
and
that's
great
avoid
creating
unnecessary
third
party
metadata
one
place.
H
One place this really shows up in protocol definitions is stuff that's left to the implementation, because that becomes fingerprintable. To pick one random example: what if the HTTP headers could only be in a certain specified order? Even things the spec could have specified and didn't, like how you quote a value or where a semicolon goes, become fingerprints. Design privacy into protocols; do a privacy analysis of protocols, similar to what is done for security considerations.
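A toy illustration of that fingerprinting point, with hypothetical header lists that are not taken from any real client: when the spec leaves header order to the implementation, order alone distinguishes clients, while a mandated canonical order removes that degree of freedom.

```python
import hashlib

def header_fingerprint(headers):
    # Fingerprint based only on header *order*, not values.
    names = ",".join(name.lower() for name, _ in headers)
    return hashlib.sha256(names.encode()).hexdigest()[:12]

def canonical_fingerprint(headers):
    # If the spec mandated one order, this degree of freedom disappears.
    names = ",".join(sorted(name.lower() for name, _ in headers))
    return hashlib.sha256(names.encode()).hexdigest()[:12]

# Two hypothetical clients sending the same headers in different orders.
client_a = [("Host", "example.com"), ("Accept", "*/*"), ("User-Agent", "A")]
client_b = [("Accept", "*/*"), ("Host", "example.com"), ("User-Agent", "A")]
```

With `header_fingerprint`, the two clients are distinguishable; with `canonical_fingerprint`, they are not.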
I
Steve, Arvind, thanks very much for your presentations. Two comments. First of all, related to Arvind's work: I'm very pleased to point to a colleague of yours, Serge Egelman at Berkeley, who's done a lot of work in this space, particularly around linkages on cellphones. And the idea that you have about TLS and privacy of IoT is something that I am deeply involved in, and one of the things it raises is the question of privacy brokerage.
I
We release this information, oftentimes for a purpose, and the notion of contextual privacy is something that I think is a relatively nascent area of research, if I understand correctly. I'd be very interested to see us continue that discussion here at the IETF, or at least at the IRTF, as to what that means; and this goes to the tagging that you mentioned, Steve. So thanks for your research, and if people haven't looked at Serge's work, he's done a lot of work, particularly around the Amazon Echo.
H
Privacy, like any security problem, has to be addressed in the context of a threat model: who is trying to collect this data, and what are they going to do with it? With DNS over HTTPS or DNS over TLS, you might get a central aggregation point. Do you trust them to be honest? Secure against governments? Secure against governments who come armed with legal process? And you don't necessarily have a business relationship with them.
H
It's
not
clear
to
me
that
guarding
against
the
NSA,
your
GCHQ
or
the
FSB
or
GRU,
the
Mossad
or
whomever
is
the
best
threat
model
versus
the
commercial
threat
model
that
one
might
actually
be
best
dealt
with
with
laws
saying
your
ISP
can't
use,
collect
or
use
this
data
in
any
way,
rather
than
this
technical
mechanism
and
to
Voyage
the
central
point
of
collection,
which
is
a
greater
threat
for
against
certain
threat.
Miles.
Watch
the
threat
model,
mary
hi.
L
Mallory Knodel, Article 19. My question is for Arvind. I really liked your example of how a limited technical mitigation encouraged a strong policy that then filled that gap for user privacy; this was about a third of the way through your presentation. And I think paralleling that example with your conclusion is also interesting. So I guess my question would be, and then I can explain a bit more
L
If
that's
helpful
is
I
mean
I,
think
there's
there
is
a
role
that
that
measurement
and
research
can
still
play,
even
if
it's
limited
now
because
of
the
privacy
enhanced
protocols
that
were
using.
How
can
we
instead
pivot,
instead
of
trying
to
bargain,
like
in
the
stages
of
grief
or
in
the
bargaining
stage
like?
How
can
we
do
both
of
these
things
when
there's
an
actual
inherent
technical
paradox
between
the
two
and
rather
go
into
pivot,
into
a
more
complicated
relationship
between
policy
incentives,
policy
sticks?
L
And
then
you
know
so
I
I
wonder
how
academic
researchers
can
help,
for
example,
human
right
or
not
Human
Rights,
but
like
impact
assessments
or
getting
companies
to
take
more
responsibility
for
doing
privacy,
audits,
security,
audience
and
it's
a
slower
approach.
It's
not
as
fast
as
scanning
a
million
websites
every
month,
but
I
think
that,
where
we're
at
now
and
the
way
that
we've
advanced
privacy
for
end-users
to
get
the
higher
hanging
fruit,
we
actually
have
to
have
more
complicated
approaches.
D
Economists call this an information asymmetry, and that particular information asymmetry was closed by lemon laws that mandated certain information disclosure, that guaranteed buyers of cars the right to first take them to mechanics to be inspected, and so on. So, broadly, to your question: as long as we have some way of closing this information asymmetry that exists between the sellers of products and services and the people who use them, I think we're in good shape, and one of the ways we've been doing that is with academic research.
D
That's been, you know, scanning a million endpoints at once, but it doesn't have to be the only way. Another critical way to do that has been journalists individually examining these products in a lot of detail and holding companies' feet to the fire. So as long as we have some oversight mechanism, whether that comes from the law, from academia, from journalism, or simply from a more informed public, that helps close this information asymmetry, then I think we'll be in better shape. Thank you.
H
I agree completely. The document that my talk was derived from was a submission to a US government process on privacy, because I agree completely that there's a very important legal role here for governments around the world. But I think that trying to base your privacy on notice and consent, from a technical perspective, is not going to work, and what my paper said is: we need to find a different paradigm for regulators and legislators to mandate.
D
I don't think it's a dichotomy. In fact, for a lot of the investigations that have come about under the GDPR, and the fines that have resulted from them, those privacy issues only came to be known because of the kind of research that I described, whether it was done by academics, journalists, or some other third parties. So that's technical work, in a sense, and for me the real success stories involve collaboration between technical teams and legal measures.
N
[Name inaudible.] Thank you very much for your thoughts, both Steve and Arvind. I want to follow up on this discussion, because in fact that's precisely what I wanted to say: in this world where we live, privacy is getting harder, and some say the battle is lost. I very much appreciate the message of: no, it's not lost.
N
We can keep improving things. And also, just following up from the previous speaker: we range all the way from the extreme of regulation to the purely technical, and I think the two have to meet at some point. It's true, we don't have the answers to all the questions. For instance, I was in a regulatory discussion where they were just discussing this.
N
Okay
in
a
world
where
the
watch
is
checking
your
vital
signs
and
then
it's
sending
it
to
your
phone
and
then
that's
anything
to
an
app
and
that's
sending
it
to
our
cloud
provider
and
who
do
you
regulate
it's
nothing
more.
The
world
of
one
piece
does
one
thing,
so
it's
important
I
agree
with
Arvind
and
message
that
we
can
help
their
regulatory
bodies
understand
that
is
the
service
provider,
probably
the
most
accountable
one.
That
will
make
sure
that
the
information
flows
down
and
also
on
our
side
as
a
technical
writers.
N
We
do
have
indeed
the
the
role
of
writing
the
right
standard,
but
also
educating
people
as
much
as
we
can
regulatory.
It
could
be
a
section
Steven.
You
mentioned
the
privacy
considerations.
I
think
that's
a
great
way
to
communicate
what
the
standard
should
do.
What
what
are
the
issues?
What
should
be
looked
after
and
then
follow
up
and
make
sure
that
we
do
improve,
because
definitely
the
the
battle
is
not
lost
and
there's
a
lot
of
things
we
can
keep
you.
F
Hi, Riad Wahby. Arvind, you mentioned at the very end of your talk a recent project from Stanford, TLS-RaR, "rotate and release". I was one of the authors on that, so I think you're absolutely right that there are some technical measures like that, but just to provide a little background and kind of a counterpoint.
D
Yeah, thank you for your comment; I'm somewhat aware of the debates that have gone on in the TLS working group. My main goal is to call attention to the severity of the problem. I'm not claiming that I know what the right solution is, but I think the current situation is perhaps not optimal.
O
I
would
like
to
ask
you
a
question
following
the
the
gentleman
from
German
Telecom
about
ownership
of
the
data.
This
is
a
very
big
difference
in
the
US
and
Europe,
for
example,
where
ownership
of
the
data
is
always
about
me
when
I'm
in
Europe
and
once
is
collected
in
the
u.s.
is
property
of
who
collected
the
data
and,
as
I
think,
is
the
biggest
issue
in
when
privacy.
O
Instead
of
giving
out
your
data,
you
can
borrow
my
data,
but
I
can
always
ask
you
to
remove
your
my
data
from
your
system
whenever
I
want
to
you
know.
I
don't
want
to
have
this
very
issue
with
us,
etc,
and,
and
if
you
can
elaborate
on
that,
if
you
think
that
this
might
be
a
tool
that
enhance
privacy
from
a
legal
standpoint
of
unit,
since
that
give
me
give
me,
as
a
user,
a
possible
recourse
of
action
in
case
you're,
you're
failing
to
protect
my
privacy,
and
the
second
point
is
about
the
measurement.
H
Data ownership is a really complicated question. There's been a fair amount of legal writing lately, legal academic writing, on why trying to treat data as property can have bad side effects. One of the interesting things from US law is that in a lot of these transactions there are two different parties that have ownership. I mentioned my mechanic uploading, that is, selling, my odometer readings. Oh yes, my mechanic is recording the odometer reading.
H
Let
me
know
when
I
should
change
my
oil
again
and
that's
perfectly
that
becomes
a
business
record
of
the
mechanic
and
that's
the
property.
The
data
belonged
to
the
mechanic
and
therefore
the
mechanic
can
sell
it
as
well
as
me,
and
my
privacy
problem
is
that
it
gets
aggregated
and
attributed
to
me
as
well.
So
there
are
very
complicated
questions
with
trying
to
treat
this
as
as
property,
even
apart
from
the
international
issues
and
different
philosophies,
there's
a
lot
of
data
that
businesses
very
legitimately
have
to
collect
and
medical
personnel
utterly
rely
on
it.
D
I want to say a couple of sentences about the measurement issue that you raised. I agree two hundred percent that measurement should be a standard part of the regulatory process, and just to tell you how much I agree with that: at Princeton I am part of the Center for Information Technology Policy, which was started 15 years ago with precisely the notion that there need to be more technologists in government, exactly because we can then do more of the sorts of things you're calling for. Today the main limitation is just the technical expertise that exists in regulatory agencies.
G
Name withheld, actually. So, Steve said we had the wrong trust model; you know, we've got to think beyond the CIA, etc., attacking. I think it goes beyond that: these third-party databases are a national security threat, and we saw them weaponized in 2016. And it isn't just personal data. I know of an insurance company that is operated by an individual who is widely believed to be operating on behalf of an intelligence agency, a hostile one. This insurance agency specializes in commercial vehicles.
G
If you think about what such an insurance agency would be doing, it is collecting data on all the trucks that are moving in that country, and it managed to get 70% of the market in a markedly short time, because businesses are very price-sensitive. So when we're thinking about this, it is no longer just us as individuals being concerned about our personal privacy; it is also a matter of national security and patriotism.
H
Data
breaches
of
multinational
US
firms
are
frequently
thought
to
have
been
perpetrated
by
a
foreign
intelligence
agency,
equi
fact
the
Equifax
and
the
Marriott
breach
have
been
attributed
to
foreign
intelligence
agencies.
In
fact,
the
someone
Fairfax
said
justice
just
yesterday.
This
is
zero
evidence
that
any
of
the
stolen
data
has
been
used
commercially
for
identity
theft
or
anything
else,
which
kind
of
goes
along
pretty
well
with
the
notion
that
it
was
an
intelligence
agency
that
took
it
so
yeah.
This
is
very
plausible.
B
So,
thank
you
very
much.
We
now
have
a
four
minute
break
I
think
before
the
administrative
plenary
I'd
like
to
thank
again
very
much
both
Steve
and
Arvind.
This
is
an
excellent
evening.
I
had
I
learned
a
lot.
I
specially
appreciate
the
challenges
to
the
IETF
from
both
of
you
and
we
hope
to
live
up
to
them.
So
thank
you
very
much,
we'll
see
you
in
180
seconds.