From YouTube: IETF117-ANRW-20230724-2000
Description
ANRW meeting session at IETF117
2023/07/24 2000
https://datatracker.ietf.org/meeting/117/proceedings/
C
By the way, while we're trying to solve this problem, let me remind you that you need to use the queue, so it would be great if we can enforce this fairness by using the queue for the questions. Thank you.
E
This way, yes, better now. Okay, hi everyone. Thanks for the kind introduction. My name is Simon Bauer; I'm a research associate at the Technical University of Munich, in particular at the Chair for Network Architectures and Services led by Professor Georg Carle. Together with my colleagues, I conducted active internet measurements in order to assess certain impacts on performance by client and server configurations, considering TCP option usage, QUIC, and also the hosting of a web service in the infrastructure of a content delivery network.
E
Well, why are we interested in this kind of measurement? I guess we can all agree that understanding and assessing the performance of network connections, as well as of networks as a whole, is crucial. This applies from a provider perspective, where we are interested in providing optimal performance to our users, as well as from a research perspective, to assess the effectiveness of emerging or already widely deployed measures.
E
Okay, before we dive into our measurement approach and some measurement results, let me briefly summarize some related work regarding the measures we consider in our study. We consider TCP window scaling, selective acknowledgments, and explicit congestion notification, which have been around for quite a while.
E
Next slide, please. Well, in the early 2000s, researchers found only little deployment of some of these options. However, in 2013, Kühlewind et al. found that selective acknowledgments, as well as window scaling, are supported by nearly 90% of the domains in the Alexa top 1 million list. Six years later, it was reported that ECN is now also supported by the majority of Alexa top 1 million domains. Next slide, please. Regarding CDNs, I guess we are all well aware of their importance for today's internet. In 2021, there was a study showing that hypergiant CDNs are maintaining more than thousands of autonomous systems, while their infrastructure is continuously growing further. Also, as I guess you are well aware, in 2021 QUIC was finally specified, and in 2022 it was already observed that QUIC accounts for eight percent of global internet traffic. Next slide, please. So how do we assess performance impacts by these measures?
E
We recursively follow links from the index page of public web servers, and we are looking for files satisfying a certain file size; in our study, we decided to rely on a minimum file size of one megabyte.
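As a rough, hypothetical sketch of this crawling step (not the speaker's actual pipeline; the use of the `requests` library, the page limit, and the HEAD-based size check are my assumptions):

```python
# Hypothetical sketch of the crawl step: follow links from a domain's index
# page, breadth-first, and collect files of at least MIN_SIZE bytes.
import requests
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser

MIN_SIZE = 1_000_000  # one-megabyte minimum file size, as in the study

class LinkParser(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=100):
    """Return (url, size) pairs for files meeting the size threshold."""
    domain = urlparse(start_url).netloc
    seen, queue, targets = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != domain:
            continue  # stay within the domain being crawled
        seen.add(url)
        # A HEAD request reveals Content-Length without downloading the body.
        head = requests.head(url, allow_redirects=True, timeout=10)
        size = int(head.headers.get("Content-Length", 0))
        if size >= MIN_SIZE:
            targets.append((url, size))
        elif "text/html" in head.headers.get("Content-Type", ""):
            page = requests.get(url, timeout=10)
            parser = LinkParser()
            parser.feed(page.text)
            queue.extend(urljoin(url, link) for link in parser.links)
    return targets
```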
E
Based on successfully crawled domains and the recorded files, we then map the IP addresses to autonomous systems, which are in turn mapped to the organizations maintaining those autonomous systems, based on AS2Org+. We do that to be able to check afterwards whether a domain is hosted in the infrastructure of a CDN.
E
We now have a list of files that can be downloaded from public web servers. We then iterate over our target files and conduct sequences of downloads, considering different permutations of TCP option usage: starting from a baseline that does not support any TCP option at all, then configurations where only a single option is supported, and finally one where all three considered TCP options are enabled.
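A minimal sketch of this download sequence, assuming a hypothetical `download()` helper that applies a configuration and returns the measured throughput (the option names and the structure are illustrative, not the authors' code):

```python
# Sketch of the per-file download sequence over TCP option permutations:
# baseline (no options), each single option, then all three enabled.
OPTIONS = ("window_scaling", "sack", "ecn")

def configurations():
    yield {opt: False for opt in OPTIONS}               # baseline
    for single in OPTIONS:                              # one option at a time
        yield {opt: (opt == single) for opt in OPTIONS}
    yield {opt: True for opt in OPTIONS}                # all options enabled

def download(target_file, config):
    """Placeholder: configure the stack per `config`, fetch the file while
    capturing traffic, and return the measured mean throughput."""
    raise NotImplementedError

def measure(target_file):
    results = {}
    for config in configurations():
        label = "+".join(o for o in OPTIONS if config[o]) or "baseline"
        results[label] = download(target_file, config)
    return results
```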
E
Afterwards, we conduct downloads with different QUIC implementations; so far, our pipeline includes quiche and aioquic. For our study, we consider three different vantage points: one physical vantage point located in our campus data center in Munich, and two virtual machines hosted by DigitalOcean, one here in San Francisco and one in Singapore. Download traffic gets captured, and we then extract different packet features and calculate performance indicators; however, we mainly focus on throughput in our studies so far. So this was our measurement approach as we applied it in our first measurement series.
E
However, we observed that the baseline configuration, which was always the first download in the sequence per domain, was significantly biased due to on-the-edge caching by different CDNs. So we repeated all measurements, including a warm-up run before the baseline, to ensure that files are already cached and all downloads are conducted under the same conditions. So this presentation includes results conducted with such a warm-up run.
E
Okay, regarding our targets. First, we generated a target set for TCP-based downloads, and accordingly we crawled the top 100,000 entries of the Alexa top 1 million list. Based on the successfully crawled domains, we chose 200 domains per considered hypergiant CDN, namely Akamai, Amazon, Cloudflare, Microsoft, and Google, and then we chose 1,000 domains that are maintained or hosted in other autonomous systems. What you see in the table are the successfully conducted downloads from one measurement run conducted in July 2023. As we only found small shares of QUIC support in this target set, we generated a second target set based on Google Chrome's User Experience Report; here, we again consider the top 100,000 entries.
E
We scanned these domains with the QScanner to identify domains that support QUIC, and afterwards we crawled them for files and also checked that all domains support all TCP options. This procedure resulted in a bit more than 500 domains, which we refer to as the QUIC target set in the following. The measurement results presented now rely on three measurement runs per domain. First of all, we were interested in how the usage of different TCP options impacts performance.
E
What you see here is the cumulative distribution function for the vantage points in Munich and in San Francisco. Note that we find similar patterns between the vantage points in San Francisco and Singapore, so I will only focus on San Francisco in this presentation. Well, the CDF shows the mean throughput observed. First, we run our warm-up download, and what we can see is that the vantage point in San Francisco shows a significantly more long-tailed distribution compared to the vantage point in Munich.
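For illustration, an empirical CDF of per-download mean throughput can be plotted like this (a sketch with synthetic, made-up numbers, not the paper's data; the larger lognormal sigma only mimics the longer tail seen from San Francisco):

```python
# Sketch: empirical CDF of mean throughput per vantage point, using
# synthetic data purely to illustrate the shape of such a plot.
import numpy as np
import matplotlib.pyplot as plt

def plot_cdf(samples, label):
    xs = np.sort(np.asarray(samples))
    ys = np.arange(1, len(xs) + 1) / len(xs)  # fraction of samples <= x
    plt.plot(xs, ys, label=label)

rng = np.random.default_rng(0)
throughput = {
    "Munich": rng.lognormal(mean=3.0, sigma=0.5, size=1000),
    "San Francisco": rng.lognormal(mean=3.0, sigma=1.0, size=1000),  # longer tail
}
for vantage_point, samples in throughput.items():
    plot_cdf(samples, vantage_point)
plt.xlabel("mean throughput [Mbit/s]")
plt.ylabel("CDF")
plt.legend()
plt.show()
```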
E
Next, we run our baseline configuration; note that the warm-up run and the baseline run do not support any TCP option. However, you see that the distribution of mean throughput indicates larger throughputs for the baseline configuration, which indicates an impact of edge caching. Next, we run downloads supporting only selective acknowledgments, respectively ECN, and we only find little improvement of mean throughput, which can also be explained by the fact that we only observe small retransmission rates during the baseline downloads.
E
Lastly, we conduct downloads supporting all options, but we only observe a small increase compared to the configuration supporting only window scaling.
E
However, analyzing distributions of mean throughput does not answer the question of how significant the speedup by a certain option actually was. So what we did per measurement run is compare the throughput of the configuration under test to the baseline throughput, and assign the runs accordingly to buckets of speedups.
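A small sketch of this per-run comparison (the bucket boundaries here are illustrative; the paper defines its own buckets):

```python
# Sketch: compute per-run speedup relative to the baseline and count runs
# into buckets. speedup = config/baseline - 1, so 1.0 means doubled.
from collections import Counter

def speedup_buckets(runs):
    """runs: iterable of (baseline_throughput, config_throughput) pairs."""
    counts = Counter()
    for baseline, config in runs:
        speedup = config / baseline - 1.0
        if speedup <= 0.0:
            counts["no speedup"] += 1
        elif speedup >= 1.0:
            counts["doubled or more"] += 1
        elif speedup >= 0.5:
            counts["50-100%"] += 1
        else:
            counts["0-50%"] += 1
    return counts

# Example: one run where the option doubled throughput, one where it didn't.
print(speedup_buckets([(10.0, 21.0), (10.0, 12.0)]))
# Counter({'doubled or more': 1, '0-50%': 1})
```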
E
Sorry. In contrast, enabling window scaling leads to a positive speedup for over 90 percent of our samples. We observed that for nearly 40% of the samples, supporting window scaling doubles the throughput compared to the baseline, and that over 60 percent of the samples showed a speedup larger than 50 percent.
E
With the same approach, we surveyed the performance of QUIC-based downloads in comparison to TCP. First, we compared downloads conducted with quiche to downloads conducted with aioquic, and we found that quiche outperforms aioquic in 70% of our samples. At the same time, we observe that over 45 percent of the quiche downloads show double the throughput compared to aioquic.
E
Next, we compare the TCP downloads supporting all options to downloads conducted with aioquic, and we observed that TCP with all options results in better throughput in over 55% of our measurements. However, we observe that over 30 percent of samples indicate that aioquic shows double the throughput compared to TCP with all options.
E
If we now compare quiche-based downloads to TCP with all options, we find a speedup for over 70 percent of samples, and that throughput is doubled for over 40 percent of the downloads conducted with quiche. And this already brings me to the conclusion of this talk. So what did we observe during our measurements? First of all, TCP window scaling is crucial. We did not observe similarly significant impacts by selective acknowledgments and ECN.
E
However, as our vantage points are kind of close to the core of the internet, these conditions might not be comparable to user conditions. Further, we observe significant differences between downloads conducted with quiche and aioquic; we also observed such differences in testbed measurements.
E
Further, quiche mostly exceeds TCP with all options. However, as I mentioned before, when we conducted our first measurement series, we did not observe such a significant increase by quiche compared to TCP with all options, so this motivates running further measurements with higher iteration counts to survey the reasons for the observed differences. Further, we observe different impacts by vantage point location and edge caching, which we discuss further in our paper. For future work, we consider including further QUIC implementations in our pipeline, as well as considering further transport layer parameters.
B
Colin. Colin first, sorry.
D
Hi, sorry, I'm hiding at the back, so it takes me a while to get here. Colin Perkins. So this is really nice work. I am not surprised that you were seeing the differences, especially with the QUIC implementations, although it's interesting that you do. I guess the question is: why are they implementing different variants, different sets of features, or is it just bugs in one or the other?
E
This might be a reason. I'm not so deep into the comparison we did in a testbed, which was surveyed by my colleagues; those are high-speed measurements, up to 10 gigabit.
E
Maybe they could provide you more details on that question, yeah. But, as already mentioned, those were our first measurements, and in the future we will consider further parameters to better understand the dynamics of such patterns. Yeah.
D
Yeah, okay, cool. I think that's really nice. I guess my long-term concern is that if we end up doing these measurements, we end up with a race to see who can make the fastest version of QUIC rather than who can make the most correct version of QUIC, and I don't know how to avoid that. But I think it's interesting that you're doing these measurements, and we should think carefully about what they mean. Yeah.
A
Hi Simon, Dave Plonka. Way back on, I think it was maybe your slide 18; I see it as table one in the paper. I see there's a huge swing between which of the hypergiant CDNs you're on between TCP and QUIC.
A
I'm sorry, slide 18. It was the one where you showed Akamai, Amazon, Cloudflare, Google, Microsoft. What we see between TCP and QUIC is that tons of stuff moved onto Cloudflare, if I'm reading it right. Maybe it's figure one in your paper; it's the one that shows the difference.
A
Let me have a look. You did show it anyway; it's table one in the paper. But, I guess, to make it a question: why can't we compare, as you show, TCP to QUIC when you use the same three observation points, but the hosting platform has completely swung from Amazon and Google to Cloudflare?
E
I'm sorry; when we compare QUIC to TCP, all measurements are conducted based only on the QUIC target set. Oh, okay.
B
Wait, can you control the slides?
F
I don't see that I have control yet, but maybe we can just try.
B
Okay, if I stop sharing, could you try sharing the slides from your side?
B
Has it started? Sorry, Tal, we couldn't hear you. Could you be closer to the microphone, or speak louder?
B
Yeah, could you say a few words again?
B
On to the next one then, okay; we can come back to this talk later, and...
B
Meanwhile, Tal, please keep testing your device.
I
Hello everyone. This talk was supposed to be at the end, which would have been a bit more interesting, because now it's going to be sandwiched between two very technical talks. This one is not very technical, as you can probably surmise from the name, but my hope here is to pitch to the audience, to people who are listening in, and to people who may stumble upon this in the future, some interesting problems that I think are useful for this community, and perhaps others outside of the community, to work on, and that would be beneficial to the IETF. The motivation for this talk was that I get asked quite a lot, by a lot of people, you know: what are some interesting problems that I could work on, as a young PhD researcher, or as a student who's trying to learn more about this particular problem space, or this particular area? And I figured, rather than just repeat myself over and over again, I would put it down in words, and this is an attempt to do so. This list of things may evolve over time, naturally, as we solve problems and as new things emerge, but that's the intent, and that's why I'm here. However, I'm not actually going to talk about the opportunities to start. I wanted to take a step back and, I guess, share a bit about how I think about what's an interesting research problem in the first place.
I find myself motivated by trying to deliver value or add impact in some particular way, and in my experience, the way by which I typically end up delivering value is by following an idea through a series of steps to actually getting it out there in the world: shipping software, effectively. In my experience, there are basically three, I guess, high-level pieces or components that fold into the process of shipping software. There's the core science that underlies all the stuff that we do here in the IETF and in the community at large. There are the specifications that take that science and really describe how to implement it, how to actually write code to do this particular thing. And then there's the software that goes about and implements that particular specification and does the thing. So, not surprising, I guess, to maybe most people here if they're engineers, but if you're a researcher, maybe this is new; I don't know. As a specific example, consider all of the work that went into what is now TLS 1.3 and QUIC.
If you open up Google Scholar and you search for TLS 1.3 or QUIC, or whatever your favorite search term is, you know, transport security or encrypted key exchange or whatever, you're going to get a huge list of papers. I've listed some of the impactful ones here that really, you know, influenced the current state of TLS 1.3 and QUIC as they are specified, implemented, and shipped today. There's effectively a huge body of work, and what the IETF did was build upon that work by writing these RFCs in a very meticulous way, to say: this is what QUIC is going to do. This is how QUIC sends bytes from, you know, client to server, this is how the transport protocol works, and this is how it uses TLS and all this stuff to encrypt those bytes. We even have a nice, cute logo for it, which is really great. And then a huge army of people in the QUIC working group went ahead and implemented a ton of different interoperable implementations; this is just a slice of the interoperable implementations that exist today. And now, basically, QUIC is like a household name.
It's going to be in textbooks, you know, and if it's not already, students are going to learn about it. Basically, the future of the internet is going to be built upon this new transport protocol, which is pretty fantastic, I think. And we're seeing, over time, the adoption of this particular protocol go up. It's perhaps not the most obvious trend, but the green line is kind of going up and to the right, and it was further down to the left, or earlier on. So I would expect that as more browsers and more operating systems enable HTTP/3 and QUIC, we'll see the green line go up, and we'll see old versions of HTTP go down, in particular HTTP/2.
That would be fantastic. But at the end of the day, all that work that went into the core science, all that work in this community that went into the spec, and all the time and passion poured into the implementations led to this value, which I consider to be pretty great. But QUIC is not unique: there are a ton of different communities in the IETF that have followed the same exact pattern for delivering value and positively impacting, you know, the internet. TLS, which predated QUIC, obviously did this, and I think in some ways it was sort of the first working group to really, I guess, trailblaze this close working relationship between the academic and scientific community and the actual specifications that shipped out of that particular working group, with TLS 1.3. We also have ACME, you know, sort of the protocol that is behind Let's Encrypt and, basically, what got us HTTPS everywhere, which is fantastic. MLS, which was just a recently minted RFC for doing, you know, basically end-to-end encryption between groups. And MASQUE, and that's our new logo, which is a pretty fantastic mask.
You can think of MASQUE like Tor, sort of, built on QUIC. But the takeaway, I guess, is that this is a working model for a lot of the different communities in the IETF, and it seems to be something that is quite effective. Specifications are really at the heart of everything we do here at the IETF. They are the thing that allows us to take what is actually, you know, in science and transfer it to practice. We want to take really cool things that really smart people think about, and we want to be able to ship them and use them; specifications are the way we get there. We spend a lot of time in the IETF trying to write clear descriptions in these specifications, so that they are easy to implement correctly and well, so that we can reasonably implement them, and so that we can verify and analyze them and stuff like that. But I think another hallmark is that these specifications encourage open collaboration, and they really build communities around particular problem domains, or what have you, and that's important to this process of moving something from theory to practice.
Software at the IETF is obviously pretty important, given that our motto is rough consensus and running code. We try, to the best of our abilities, to actually implement the things that we specify and get them out there in the world, because that's why we're here: we're trying to deliver value by actually running this software on end users' devices, or, you know, maybe not directly on their devices, but running it in a way that affects and improves someone's life. And, as a result of shipping this software, we learn new things. Maybe we learned that doing this particular thing in this way was hard; maybe we need a new protocol, or a new extension, to solve this challenging problem that emerged from deployment. That feeds back into, you know, the process of specifying something new, or maybe it uncovers a new research problem that people haven't thought about before. So there is sort of a feedback loop between shipping software, specifying things, and actually coming up with novel ideas for stuff. And then science, while not technically, like, the core component of the IETF, I would argue is at the bedrock of everything that we build upon.
Without all the work that's poured into the scientific community, we wouldn't be able to specify things, and we wouldn't be able to ship cool things that solve interesting problems; all the work that's poured into science has an effect on the other parts of the process. I guess what I'm trying to say is that, if you're thinking about engaging in the IETF from a research capacity, each of these components is perfectly fine to engage in from a research capacity. You could be, you know, just a computer scientist doing science in the IETF and spend all your time in that particular component; that would be great.
You could also be a research scientist who really spends their time, you know, writing, like, super formally verified software that is proven correct, and looking at implementations of protocols that we specify; that would also be super valuable for the IETF. Or you could be someone who's trying to improve the ways in which we specify protocols. All of these are useful: there is no, I guess, preferred or ideal way in which researchers engage. So, for those of you that don't know, multi-party computation, I guess, can be reduced to basically a way of computing functions over private inputs. You want to compute some arbitrary function; you don't want to learn the inputs, you just want to learn the output. That's what MPC is good for, and you can imagine that there are a lot of different applications where that might be useful. The IETF is working on one of those particular applications, and that is privacy-preserving measurement.
There's also work above the IETF where this particular technique is useful, like using MPC as a way of measuring ad click attribution, and recently the IETF has embarked upon trying to standardize this type of technology. In the PPM working group, there is the Distributed Aggregation Protocol, which is a specialized form of MPC built upon some really cool cryptography that's being specified in the CFRG, called a verifiable distributed aggregation function (not protocol). This work is really exciting. It's kind of the IETF's first attempt to do something concrete in this space, and it's a significant increase in scope compared to what the IETF typically does, which is, you know, typical client-server, two-party stuff. So this, I would argue, is perhaps the hottest research area for people, especially security and privacy people, to engage in, because there are some really hard problems that we need to solve, and they cover all three of the different components. For example, in, I guess, the science realm for this particular space, there's a problem called private heavy hitters. How do you compute, or solve, the heavy hitter problem, which is effectively learning what the most common elements are in some input set, amongst a set of clients, in such a way that you don't learn the individual inputs? There are some proposals for solving this problem, but, let's say, they're not as performant as we might like, or they're more expensive to run than other types of specialized MPC protocols.
So it is, in my opinion, still sort of an open problem to solve this in a much more performant way, and, I guess, also in a way that's a bit more ergonomic for applications. Some of the existing solutions right now are sort of rigid in how they can be used, in terms of the parameter space and technical details that I would lose people with, but that are not relevant here. This is a very important problem in practice, and this is something that interested people should definitely dig into. There's also, I guess, sort of an orthogonal problem, which is: how do we take all the work that has been poured into differential privacy and compose it with this MPC stuff that we're standardizing, in a way that can be implemented safely and correctly, and that can be used in a way that's meaningful for end users? Because, I guess, like most emerging privacy-enhancing technologies, it's not a binary thing where you, you know, turn privacy on or turn privacy off; there's a knob you tune, and in differential privacy, in particular, that knob turns out to be very important with respect to how much privacy you get and what the impact on the application is. So, anyway, there's a lot of space, or a lot of work, that could potentially be done here on the composition of these two particular domains.
On the specification side, as I alluded to earlier, there are these two emerging specifications in the PPM working group for addressing the PPM problem: the Distributed Aggregation Protocol and the underlying VDAF specification. There's been a ton of work poured into the underlying crypto protocol to, you know, give us a reusable abstraction that we know has very specific properties, so that we can sort of plug things into it and things should just work. But up above, at the DAP layer, we're not really sure that the thing is correct as specified, which, if you're looking at deploying DAP in practice, may be important to your particular use case.
And so we need people to really dig into that specification and ask, you know: is this thing well specified? Is it correct? Does it drive the underlying VDAF abstraction in the way that is required for the VDAF to be secure? And for the, I guess, mechanized or symbolic proof people in the room, people who like Tamarin and ProVerif and such, I would challenge them to think about whether maybe we could model DAP in one of these particular modeling languages, to check whether or not it does indeed satisfy our notion of correctness, or our notion of security or privacy.
So that's work that could be done on the specifications. And then, on the software side, there's definitely a lot to do. There are, to my knowledge, at least two implementations of this open specification: one that's developed by ISRG and one that's developed by Cloudflare. It would be fantastic if you could take these implementations, or components of them, or subsets of them, and produce formally verified implementations: things that we know are correct, that are safe to run in production, and that don't regress any of the performance properties. It would also be fantastic if people who are thinking about the differential privacy composition, about how to safely use these things in practice, could provide guidance, or safe defaults, or API models that are just easy to use out of the box for these implementations, such that users can't shoot themselves in the foot. And there are probably lots of other opportunities in the MPC space; that's just a few.
There are a ton of other, more qualified people in this room who may have ideas about what the interesting things to work on are, and I would be happy to dispatch you to these people if you're interested. But outside of the MPC space, there are other security and privacy things that are also useful to the IETF. This topic of anonymous credentials has come up time and time again. The IETF is working on a technology called Privacy Pass, which is, well, I wouldn't call it an anonymous credential, because it's very, very simple compared to what such a credential does. But there have been, you know, suggestions that maybe Privacy Pass should be extended in a particular way, or maybe it should do some other stuff that it doesn't currently do, and we're sort of getting closer and closer to actually, you know, specifying, implementing, and shipping an anonymous credential. As part of doing that, there are some interesting research problems to address. For example, how do you build these things such that they are post-quantum secure? We currently don't really know how to do that; in contrast to all the key exchange protocols that we have shipped today between clients and servers, post-quantum security for Privacy Pass and related things is still kind of an open question.
There is, again, the formal verification question for these implementations. And then there's really thinking about how you could take these, you know, these constructions that exist in academia and actually specify them in a way that makes them more amenable to deployment, because there's a huge space of literature on anonymous credentials, but very few have actually made it into practice, and there's probably a reason as to why this gap exists.
G
I
I don't know if they, like, came out of the blockchains, I guess, or whatever, but this is, I guess, a generally useful tool that, you know, actually is being used today for a number of IETF things. So, for example, Privacy Pass in some sense does use zero-knowledge proofs, and the PPM work does use zero-knowledge proofs. But there are other proposals for using these things, or there have been other suggestions that, you know, gee whiz, if we had a way to prove this particular thing in zero knowledge, we could solve this problem. But we don't yet have, you know, a reusable abstraction for such a thing that we can point to, in terms of a specification and implementation that we could ship today, and working on that, and trying to provide it to the community, could be really useful. As a concrete example, you could in theory build a post-quantum version of Privacy Pass using zero-knowledge proofs, and there's been some research to do that, but it's hard to specify and it's hard to deploy, because these reusable abstractions don't exist. So, anyway.
anyways.
If
I
can
leave
you
with
nothing
else,
I
would
just
encourage
people
who
are
thinking
about
research
to
not
constrain
themselves
in
any
particular
way.
The
research
can
be
done
all
over
the
place.
I
It
can
be
done
like
you
know,
if
you're
like
a
person
who
just
cares
about
science
and
writing
papers
and
Publishing
papers,
cool,
do
the
science
write,
the
papers
publish
the
papers
if
you
care
about
improving
specifications
come
help
us
do
that.
If
you
care
about
software,
that's
also
helpful
as
well,
and
if
you
have
questions
about
any
of
the
topics
that
are
discussed,
that
I
mentioned
here
that
you
may
have
already
forgotten
talk
to
me.
I
I
will
happily
dispatch
you
to
the
other
people
who
know
a
lot
more
about
these
things
than
I.
Do
because
that's
the
goal
here
is
to
effectively
get
more
people
thinking
about
these
problems.
So
with
that
I
will
I
will
take
any
questions
or
comments
you
have,
if
you
have
any.
D
Hi, Colin Perkins. Not a question, but thank you for raising this. The number of times I have heard people say some variant of "why should I try to come to the IETF if I do networking research; TCP is the same as it's always been"... There are so many things that need research input; it's good to see them starting to be so widely enumerated. So thank you.
B
Let's go back to Tal Mizrahi from the Technion, and let's see if it works this time. Can...
B
It's showing up, yeah. Could you try sharing again? Yes.
F
So one question that comes to mind is: why is it important to know where the refugees are staying and where they're crossing the border to? And the answer is: to be able to help them. That's important for large organizations like the UN and the Red Cross. One of the challenges here is that it's not necessarily easy to know accurately how many people are staying in each country, especially within the EU, where border crossings are not necessarily monitored; the borders between EU countries are not necessarily monitored in terms of people crossing them.
F
If we take this graph, and basically we see the same graph on the left here, and next to it, in the middle, we see the rate of Google Maps traffic in the same period of time, we can see that there is a very high correlation between these two graphs. And actually that's not surprising, because we know that people had to move around a lot; they had to travel, and obviously they needed to use navigation apps. So that's one thing we can see which is correlated with the refugee wave.
F
What we can see on the right-hand side is a figure showing the ratio between mobile device traffic volume and desktop device traffic volume, so basically we can get a general feeling for the volume of mobile device usage. We can see that during this short period of time there was a significant increase in mobile device usage, and again, that is not very surprising, given the fact that people had to move around a lot.
F
But specifically, if we zoom in on 2022, and that's what we see on the right side, we can see that the usage of Nokia devices went up from one percent to around 13 percent in that short period of time at the beginning of the war. This can basically be explained by the fact that people needed more mobile devices: they took out their old, unused Nokia devices and started using them again. So that's another trend that we can see which is highly correlated with the refugee wave.
F
Now, going back to the previous slide, one thing to notice here in these graphs is that the most popular mobile device vendor in Ukraine is Xiaomi. So if we look at the mobile device usage in Poland, which is what we see on the left here, we see that in the first few weeks of the war there was a steep increase in the usage of Xiaomi phones, and again it's very tightly coupled to the exact same time when we saw the very large number of refugees crossing the border. On the right side, if we look again at the mobile-to-desktop ratio, we can see that in Poland, during the same period of time, mobile device usage basically increased by a factor of two or something like that; so mobile devices were used a lot in Poland. So why, why are we talking about Poland?
F
So basically, what we saw here are the traces of the refugee crisis in Poland, as we can see them in internet measurements. One question that comes to mind here is whether the same traces could be seen in other countries as well, and when we started looking at the same metrics in other countries, what we saw was that these metrics were more affected by other factors than by the refugee crisis.
F
Okay, so the data we used in our analysis was basically from three sources, and essentially what we tried to do is use website analytics, specifically the website visit locations of each website. At the bottom we can see an example: five of the most popular Ukrainian websites, and, for example, on the right side we see google.com.ua, which is the Ukrainian version of the Google search engine.
F
This is data which is published, for example, by Cloudflare. So what we did was, first of all, extract the 15 most popular Ukrainian websites, and for each of these websites we extracted the percentage of accesses from each country. Based on that data, we did a maximum likelihood estimation of the number of Ukrainian people staying in each country, and each website had a different weight based on its popularity. What we can see at the bottom here are the estimation results, showing the percentage of people in each country.
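A simplified sketch of this estimation step (hypothetical; the paper's exact likelihood model may differ). Under a simple multinomial model where each site's visit counts are proportional to its popularity weight, the maximum-likelihood estimate of the per-country share reduces to a weighted average of the per-site shares:

```python
# Sketch: estimate the fraction of the (online) Ukrainian population in
# each country from per-website visit shares, weighting websites by
# popularity. All numbers below are made up.
def estimate_population_shares(site_shares, site_weights):
    """site_shares: {site: {country: share of that site's visits}}
    site_weights: {site: popularity weight}
    Returns {country: estimated population share}."""
    total_weight = sum(site_weights.values())
    countries = {c for shares in site_shares.values() for c in shares}
    return {
        country: sum(site_weights[site] * site_shares[site].get(country, 0.0)
                     for site in site_shares) / total_weight
        for country in countries
    }

shares = {"site-a": {"PL": 0.04, "DE": 0.02},
          "site-b": {"PL": 0.06, "DE": 0.02}}
weights = {"site-a": 2.0, "site-b": 1.0}
print(estimate_population_shares(shares, weights))
# roughly {'PL': 0.0467, 'DE': 0.02}
```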
F
So, for example, we can see for Germany that about two percent of the total Ukrainian population was in Germany when this was captured, which was in July 2022, and about two percent of the population was in Poland.
F
So those are the estimation results, but the main problem here is that this estimates the presence of Ukrainians in each country; it doesn't take into account the fact that, even before the war, millions of Ukrainian people were staying in countries around the world. So that's a major factor.
F
So, in order to account for that, what we did was use historical data, specifically historical data from Wikimedia about the number of accesses from each country to the Ukrainian version of Wikipedia. Based on this historical data, we computed the maximum likelihood estimator, which is basically a combination of the data about website visits plus the historical data. We also had a second estimator, which is based only on the data from Wikimedia, so two estimators, and at the bottom here we can basically see the estimated number of refugees in each country.
F
We said that the UN data is not very accurate to begin with, so, in order to try to assess the accuracy of our estimators, our ground truth analysis was based on data we isolated from the UN data. We specifically focused on countries which are either not part of the EU or not accessible from Ukraine by ground transportation, which means that in either of these cases the border crossing would be monitored, so we expect those numbers to be more accurate.
F
So what we can see here are the numbers for these specific countries, and we can see that in the ground truth analysis the ML estimator we had had a mean percentage error of 11.8 percent, which is actually lower than we expected, considering the simplicity of this method. And again, it's important to emphasize that this method is not meant to replace the data published by the UN, but only to be a complementary piece here.
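The mean percentage error used for this comparison can be computed as follows (a sketch with placeholder numbers, not the paper's data):

```python
# Sketch: mean absolute percentage error of the estimates against the
# ground-truth (monitored-border) refugee counts.
def mean_percentage_error(estimates, ground_truth):
    errors = [abs(estimates[c] - ground_truth[c]) / ground_truth[c]
              for c in ground_truth]
    return 100.0 * sum(errors) / len(errors)

estimates = {"US": 105_000, "UK": 98_000}      # placeholder values
ground_truth = {"US": 100_000, "UK": 110_000}  # placeholder values
print(f"{mean_percentage_error(estimates, ground_truth):.1f}%")  # 8.0%
```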
F
So, to conclude: what we basically did here was, first of all, try to analyze how the refugee crisis affects internet measurements, basically internet performance and internet usage, but we also tried to use internet measurements to try to map the Ukrainian refugees, and to be able to potentially use that data to help and to protect these refugees. So hopefully this method can be something that is helpful in this refugee crisis, as well as potentially in the future.
H
John Levine. I'm just wondering how you identified where the mobile users physically were. I'm just thinking: I know that when I roam with my mobile phone, it shows up as being in the country where the SIM is from, rather than the country where I'm physically located. So if I have a British SIM and I'm in Germany, it looks like I'm in the UK. So I'm wondering, you know.
F
Yeah, so what you're saying is that one of the aspects that would probably affect the accuracy here is whether the location of the visits is accurate. Basically, it's based on geolocation, and it does not necessarily reflect what we were expecting, and I agree: that's a potential factor which may affect the accuracy of this estimation. There's basically a whole list of factors which may reduce the accuracy, and this was described in the paper, and yes, I agree, that's an issue. Okay.
L
Hi, this is Eve Schooler, and I have three grandparents who are ostensibly from that region of the world, so it's very interesting for me to see this information; thank you. I am wondering: you listed the percentages, or estimations of percentages, of refugees fleeing to different neighboring countries. You had, of course, a very large arrow for Poland. I wonder if you were also able to reveal to us those, well, you actually didn't give it as percentages of the population.
L
You gave it in terms of hard numbers, and so I'm curious about the percentages for those countries, because the impact, ostensibly, is that it would be harder for countries if the percentage of their population that is refugees is higher. So that's one thing: I'd love to see those numbers; it would be an interesting reveal. And then: how did you take this data?
L
Do you have some partners who are trying to effect change from a policy standpoint, in terms of where the UN or other agencies, the Red Cross, things that are global and international, direct their aid as a consequence of your numbers? So are you somehow linked into that part of the process? That was really why I was asking about what those numbers represent in terms of percentages.
F
Yeah. So, regarding the first question: we basically showed the numbers as hard numbers, but the maximum likelihood estimation actually computes the percentage of people out of the total Ukrainian population. So, since we know the total Ukrainian population, we can compute the hard number in each country.
L
Oh.
F
Right, yeah. And regarding the second question: we used publicly available data, so we were not connected to any of these organizations or companies that published the data. But we are open to any cooperation with, you know, organizations who would like our help to try to get more accurate estimates from this kind of analysis, and we would be happy to help with that.
G
Have you taken into account, or can you measure, the fact that for a big fraction of the refugees Russian is actually their primary language? Ukraine has two languages, yeah, and the regions most affected by the war might in fact be predominantly Russian-speaking.
F
Right, yeah. Again, that's a good point, and it's discussed in the paper. Like you said, there's a large number of Russian speakers in Ukraine, and actually a lot of the Ukrainian websites have two languages, so you can pick whether you want to access them in Ukrainian or in Russian. And obviously we know that, since some of these sites are Russian-speaking, maybe people from Russia may be using them as well. So these are factors that we took into consideration.
F
We tried to eliminate some of these websites which were in both languages, and also, obviously, there's a larger difficulty in trying to assess the number of refugees who are Russian speakers, because they don't really fall into this computation. So that, again, is an issue which affects the accuracy of these estimates, and it's discussed in the paper.
C
All right, so it's my pleasure to welcome you to this pivotal panel discussion today. We have the unique opportunity to take a peek into the future of the internet. Concretely, we have this unique opportunity to have three world-class internet experts with us, who will be sharing their insights and their predictions.
C
So, other than being a veteran of the IETF, Jörg is also a professor at TU Munich, bringing his rich expertise in network architecture, transport protocols, and mobile network systems; he got his PhD at TU Berlin. Next, let me reintroduce Chris Wood, long time no see, who is a research lead at Cloudflare Research and holds a PhD from UC Irvine. Before Cloudflare, Chris worked on transport security.
C
He also worked on privacy and cryptography engineering at Apple, as well as at Xerox PARC. And last, but certainly not least, Lixia Zhang, who received her PhD from MIT and began her career also at the Xerox Palo Alto Research Center. Now she is, of course, part of the UCLA computer science department, and she has been innovating with the NDN, or Named Data Networking, project.
C
So, as we all understand, our panelists will have extremely interesting insights, and this will be an amazing discussion. But, as you have noticed, the actual title is: what do we want the internet to look like in 20 years? This is another way of saying you have to participate. That means that we really welcome your opinions, your questions, your insights, your advice, anything, your hopes; that would be super important to us. All right, so without further ado, I just want to start the discussion.
M
Okay, I'll be the first one. So, being a professor for many years, you know, I learned one trick: I have to give lots of talks, and the easiest way to prepare a talk is to copy from others. This morning we had a great keynote by Professor Philip Davis, if I pronounce the name correctly. I remember his last slide had a title saying that computers in 10 years will look very different.
J
Oh wow, that's just a great start here. I would have fallen back on the problem-solving part. I was actually wondering: ideally, in 20 years from now, the internet would have vanished, right? We don't want to see it anymore. So, following up on your transparency point here: it would be kind of ubiquitous, and you wouldn't worry about it anymore. Nowadays, people argue about download bandwidth and whatever.
J
If something becomes a true commodity infrastructure, you don't ask how many liters of water come out of your water tap per minute; you just expect it to work. Or, well, nowadays you don't quite expect power to work, but that would be some reasonable definition of the stuff being ubiquitous: accessible, maybe reasonably uncensored, if you want to factor that in, whatever that means, and globally inclusive. And, in order not to make this too long, maybe sustainable, or green; but maybe we can come back to that point later.
J
If we want to make things happen at the global scale, then we probably want to have things workable everywhere, which also means efficient, including in terms of energy.
I
I might, well, I would say that I hope the community of people who are, you know, doing things to help propel the internet forward continue doing so, consistent with principles that I find important. So, you know, putting privacy ahead of other potential, I guess, things that run counter to privacy on the internet would be something I'd like to see continue moving forward.
I
Of course, there are other things, like the ones Jörg mentioned; maybe we could be a bit more energy efficient with how we do things, and sure, no one's going to disagree with that. But I just hope for sort of a continuation of the principled approach to how we continue to do things in the IETF, and I guess beyond the IETF, which is kind of vague, but that's that.
C
To follow up on your initial point, I was just wondering: if, you know, a transparent internet, or one that just works and nobody really thinks about it, is what we're looking for, then who would actually fix it? Whose job is it? Is it technology that is missing? Is it policy, regulation, legislation? I don't know.
M
I have my answer to your question, but let me bring up my question; in my first round I didn't really bring up a question, so let me talk about this. There's a Winston Churchill quote, and I hope most people can recognize the name, that I really appreciate, something like: the farther backwards you can see, the farther forward you are likely to understand; something to that effect, you can Google it.
M
So this reminded me of a conversation I had with some old-timers a long, long time ago, probably in the 90s. I remember there were a few people around; the two names I can remember, and I don't think they're here, are [unclear] and Bob Hinden. I remember we joked about how many problems we had actually solved after so many years. When we said "so many years", we thought it was already many years, because I had started in 1986, and that was in the late 90s.
M
By that time, we looked around and said: routing scalability. We always said network routing scalability was a problem; even in the early 2000s, I think, there was a routing research group, and I was one of the co-chairs. But despite this persistent problem, it has never stopped, or even slowed down, the internet's growth. So eventually I realized that's not a road blocker. Now, about congestion control, there's a similar thing, right?
M
We still have a congestion control research group, I think, right? Yeah, and I know the working groups as well. But fundamentally, since Van Jacobson invented the TCP slow start solution to the congestion control problem, I think we have had that problem under control. People continuously improve performance, resiliency, whatever, all kinds of criteria applied to that problem, but by and large the internet no longer suffers from congestion collapse.
M
Yes, that's no more, no more. But think about the third question: security. Even back in the late 90s, we realized we had not made a lot of progress on that problem, and even today, I would claim, personal opinion of course, this is the number one problem, the number one challenge facing the internet today. And what's missing is not a lack of effort, but rather a shared understanding of the problem space and, of course, the solution space. I heard the mention of privacy.
C
So we have a couple of questions, people waiting in the queue. Maybe Chris, Christopher Patton?
N
Hi, yeah. This is a great conversation to have right now, I think. So, it kind of follows up on what Professor Zhang was talking about. Well, taking a step back: we want the internet to be invisible, but I wonder; the analogy you used was plumbing, the way plumbing is invisible, so maybe the internet could be invisible too.
N
I wonder if that's possible. The internet doesn't just serve a single purpose; the purpose is always changing, and it's very much influenced by how we use it. There's an attacker in the network, you know, that we're envisioning underpinning all of this communication. So, yeah, I wonder if you think, if anyone thinks, it's actually possible, or if we can rule that out.
J
Well, you can always wish for things, right? The question was what we would wish the internet to be like in 20 years. And, I suppose, I mean, take every piece of infrastructure that you rely on: it needs maintenance, right? And right now, given the amount of effort that we spend every day on fixing little things, like access networks being down, access lines being down, mobile internet not working everywhere, and all these kinds of things, that is a step way before we even get to attackers and applications.
M
Our panelists don't need to argue with each other; actually, I agree with Jörg. It's not so much that it disappears entirely and nobody takes care of it; of course, plumbers are always needed. The question is: where are the front-line challenges? I don't think that should be the low-level connectivity; we have long passed that stage. Again, back then, 30 or 40 years back, connectivity was a hard problem; these days, the problem is at a much higher level, and that deserves the attention.
I
Yeah, I guess I would generally agree. I think the use cases and applications that are driving how we think about the internet and how we use it are going to constantly change; the threat models are going to constantly change as things in society change, and aspects of the internet will have to change to adapt to that. There's probably some analogy to be made to the plumbing thing: the way you think about plumbing might change as a result of how stuff works, I don't know. But yeah, I don't think it's ever going to go away. I think the interesting task for us will be to ensure that, as the internet continues to evolve, it continues to evolve in a way that is, as I was kind of saying earlier, principled.
M
It has moved above the level of networking, right? You know, I'm co-chairing the Decentralization of the Internet Research Group, which published a workshop report for a workshop two years back. I think the attention, the industry's attention, is not so much about the network connecting things together; the attention, and the conflict, have really moved to a higher level.
C
While you're going to the mic: Lixia, can you elaborate on what you mean by the decentralization of authority? Can you elaborate on that a little bit?
M
Sure. Given that I'm the DINRG co-chair here, that's our business. Centralization, or otherwise decentralization, is a very important topic; lots of people pay attention to it, if not everybody. But I think we first need to agree on a definition. People say cloud services, that this is centralization; but, on the other hand, there is this thing called economy of scale.
M
I don't know about other people, but I used to do backups myself, all the time; no more, right? I upload to the cloud. That's economical, because of the economy of scale: cloud storage can do the backup for, I don't know, millions of people, or billions of people. The cloud is necessary. So, therefore, decentralization doesn't mean we don't need the cloud; rather, I think the fundamental question is who is in control. Today, I can tell you why I feel that I have lost control.
M
Just one simple example: who am I? I'm a nobody, except that Google gave me an identity called a Gmail address; that's how I am kind of recognized. As a result, wherever I go on the web, it's the Gmail, and Google says this is the person; you know, Google will do this authentication when you visit other sites. That is what I call losing control. And I prepared a few slides; I don't know, Maria, whether you could show them. You cannot, because...
O
I don't think the internet will be a commodity in 20 years, and here's why I say this: there will always be very interesting problems. Here's why I say this. The current world population is about seven and a half to eight billion; only 3.7 billion of that population is connected to the internet right now, and when I say internet, I mean the entirety of it, down to the device the user accesses it with. The world population in 20 years, we don't know, 16, 17 billion? Connectivity challenges need to be removed; some of them are technical challenges.
O
Some of them are economic challenges; let's assume those are removed. Experiences become more interactive and more immersive, which means a lot more data. I mean, already 90 percent of the traffic on the internet is video, and we're talking about more immersive data, lower latency, more interactivity: getting the bandwidth delivered, along with the latency constraints. And then, on top of that, let's put the supply side.
O
Wireless spectrum will always be limited. There are some physics-based boundaries that we have and that we can't get around, unless some Nobel-prize-worthy research happens and we figure out new ways of sending information. So that part is limited, and then you have all this influx of more people, more people connected to the internet, and more interactive and immersive applications, which demand much better latencies and bandwidth.
O
So I don't think we'll get to, I don't know, this is my opinion, we're not going to get to that plumbing state where plumbers just fix it. There will be hard problems to solve. Again, just...
J
The fact is that if you grow an electricity grid, if you grow a road network, if you grow whatever, you need to overcome new engineering challenges, and I don't think any of us is arguing that we aren't going to have demanding challenges ahead of us that we need to face. But still, a pretty nice perspective would be to get past having to fiddle with the basics on the side of the user; not necessarily on the side of the ISPs, or whoever that might be in the future, right?
M
There is a great infrastructure consolidation. I think there was a good talk given by Craig Labovitz, I forget how to pronounce his name, and for some reason I remember it clearly, a NANOG study presented back in around 2010 at SIGCOMM, showing measurements of the global backbone. By that time, the so-called Tier 1 ISPs that interconnect everything no longer carried the big chunk of the internet's traffic.
I
That is, like, absolutely the wrong thing to do, and so we're going to have some tough times ahead to enable a continuation of the services we have, but then also, you know, to support new things, like this interactive stuff, I don't know, like VR or whatever the new fancy stuff is. And yeah, it'll be quite hard.
K
Wes Hardaker, USC/ISI and the ICANN board, but speaking only for myself. To answer your question of where do I want to be in 20 years: I can tell you the things I can't do now.
K
My
wife
is
somewhere
else
on
the
planet.
I
can't
figure
out
where
she
is
without
like
going
through,
centralized
service
right,
I
I.
We
have
to
share
not
only
you
know
where
our
location
is,
but
our
relationship
and
everything
else
many
things
have
gotten
in
the
way
of
that
and
I.
Think
in
part.
A
lot
of
it
has
come
with
the
Advent
of
mobile
devices
right.
My
wife
often
carries
two
devices
at
least
that
I
might
be
able
to
communicate
with
her
through.
K
So
my
question
is
how
in
20
years
can
I
get
to
the
place
where
I
want
to
be?
That
allows
me
to
do
things
like
have
secured
Communications.
You
know
in
a
decentralized
way,
I'll
give
you
one
other.
My
other
famous
thing
to
do
is
like
if,
if
I
want
to
print
something
I
want
to
say,
hey
print,
something
to
Chris's
compute
to
Chris's
printer
right
Chris
is
a
very
context.
K
Localized
sensitive
name,
and
it's
only
because
I'm
staring
at
Chris,
you
know:
could
anybody
actually
figure
out
that
I'm
talking
about
that
Chris
right
and
his
printer?
That's
how
humans
work?
That
is
not
how
we
have
the
Internet
working
today.
So
my
fundamental
question
is:
how
do
we
get
to
an
end-to-end
private
communication
situation
in
a
dynamic
and
mobile
environment
with
secured
identification
location
discovery?
That
is
all
decentralized.
Good
luck.
I
One problem at a time, I would say. Doing end-to-end encryption first would be fantastic, and I think we're on our way there. But those are absolutely hard problems to solve, and I believe we've talked about them before in the past; wasn't there some workshop on naming stuff? I don't remember exactly, but yeah.
M
So, Wes, I really want to answer your question. You want to communicate with your wife directly, with end-to-end security. The fundamental thing, the very first question, is, like I said: I'm just my Gmail address. So you don't have a provider-independent identifier; how dare you talk about end-to-end encryption?
M
Here's where we are today: we have lots of encryption at work. HTTPS, okay, it's really secure communication. But there are two questions I can raise regarding that security. First of all, there are guys who actually decrypt your data, and it's not the server. Who are those guys? Middleboxes, CDNs, and also DDoS mitigators.
M
Okay,
when
did
US
happens,
you
know
the
the
servers
have
to
defrosting
mitigation,
the
service
you
can
buy
that
they
have
the
decrypted
traffic
and
all
the
traffic
to
sort
out
the
good
from
bad
traffic,
so
the
so-called
end-to-end
decryption
doesn't
really
exist
in
the
generally
speaking
manner.
The
second
thing
is
that
it's
not
end
to
end.
Like
you
communicate
with
your
wife.
You
actually
talk
to
the
server
guys
because
you
are
now
the
independent
entity.
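A toy illustration of that point, deliberately not real TLS: when a CDN or DDoS-mitigation service terminates the "secure" session, there are two independently encrypted legs, and the box in the middle necessarily holds the plaintext. The sketch below only assumes the `cryptography` package's Fernet API; all names and data are hypothetical.

```python
# Toy model (NOT real TLS) of a traffic-scrubbing middlebox: two separate
# encrypted legs, client<->middlebox and middlebox<->origin.
from cryptography.fernet import Fernet

client_leg = Fernet(Fernet.generate_key())   # key shared client <-> middlebox
origin_leg = Fernet(Fernet.generate_key())   # key shared middlebox <-> origin

# The client encrypts toward what it believes is "the server".
wire_in = client_leg.encrypt(b"GET /account?user=wes HTTP/1.1")

# The mitigation box must decrypt to inspect and drop attack traffic...
plaintext = client_leg.decrypt(wire_in)      # ...so it sees everything.
assert b"user=wes" in plaintext

# ...and then re-encrypts on a completely separate session to the origin.
wire_out = origin_leg.encrypt(plaintext)
print(origin_leg.decrypt(wire_out).decode())
```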
I
Just to reply to that: I don't think anyone in this room is going to disagree that centralization is happening and that there are very few parties that maintain a significant market share of traffic and data and whatnot, so I think we can all just take that as a matter of fact, and we maybe would not like it to be that way. But as a, I guess, more technical response to what you're saying, Lixia: you're saying that we don't really have end-to-end encryption in practice, which is really surprising to me, because there are a number of systems that actually deploy this technology today, and they do have a very well-defined concept of what an identity is.
I
What's
that,
for
example
like
ships
today
end-to-end
encryption
for
messaging
and
they
identify
users
by
their
public
keys
and
they
have
a
system
for
ensuring
that
those
public
keys
are
consistently
and
correctly
bound
to
like
an
individual
and
a
user.
So
I
there
are.
There
are
like
there
are
things
we
can
point
to
that
say.
We
know
how
to
do
this
in
practice
and
for
end-to-end
encryption
in
particular,
which
is
a
separate
topic
from
centralization.
I
We
should
keep
those
like
very
different
or
very
separate
if
we
can,
but
they
just
wanted
to
like
correct
that
I.
M
I just want to ask Wes: do you recognize your wife by her public key?
I
But that's exactly the point of these systems, like WhatsApp's key transparency system: you don't have to think about what Wes's public key is, or what my friend's public key is; the system takes care of that binding for you. We have the technology to do that, and in fact there's a new working group that's likely going to be formed to standardize that technology and give us that binding that we need, where you can associate a given public key with an identifier, like an email address, a name, a phone number, whatever. We're well on our way to shipping this stuff, yeah.
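A minimal sketch of the mechanics behind such a binding follows, assuming a toy RFC 6962-style Merkle tree over (identity, key) entries; real key-transparency designs, such as the one WhatsApp deploys and the IETF work referenced here, add signed tree heads, consistency proofs, and third-party auditing on top. All directory entries and keys below are hypothetical.

```python
# Toy transparency log: a Merkle tree over (identity, public key) entries,
# with an inclusion proof a client can check against one published root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(identity: str, public_key: bytes) -> bytes:
    return h(b"\x00" + identity.encode() + b"|" + public_key)  # leaf, domain-separated

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)                           # interior node

def build_tree(leaves):
    """Return all levels of the tree, leaves first, root level last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur), 2):
            nxt.append(node_hash(cur[i], cur[i + 1]) if i + 1 < len(cur) else cur[i])
        levels.append(nxt)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    proof = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    cur = leaf
    for sibling, sibling_is_left in proof:
        cur = node_hash(sibling, cur) if sibling_is_left else node_hash(cur, sibling)
    return cur == root

# Hypothetical directory entries; keys are placeholder bytes.
directory = [("wes@example.net", b"pk-wes"), ("lixia@example.net", b"pk-lixia"),
             ("chris@example.net", b"pk-chris")]
leaves = [leaf_hash(name, key) for name, key in directory]
levels = build_tree(leaves)
root = levels[-1][0]  # published widely, so everyone sees the same log

# A client fetching Wes's key checks it is really in the shared log.
proof = inclusion_proof(levels, 0)
assert verify_inclusion(leaves[0], proof, root)
```

The point of the panelist's remark is exactly this division of labor: the user never handles a public key; the client verifies the log so the binding cannot be quietly changed for one victim.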
P
Hi, I'm joining from Montreal, and I'm a QUIC security researcher. I have a question: what are the potential implications of quantum computing on internet security over the next 20 years, and how might quantum attacks target and impact the internet infrastructure?
I
For
example,
Chrome
is
shipping
post
Quantum
encryption
for
TLS,
and
it
does
talk
to
some
servers
so
we're
making
headwinds
and
and
strides
to
like
get
ahead
of
that
particular
curve.
There
are
other
types
of
technologies
that
we're
shipping
today
that
we
don't
have
easy
drop-in
replacements
for,
but
it's
you
know
ongoing
research
and
development
to
like
develop.
I
You
know,
post
Quantum
solutions
for
those
particular
problems
and
I'm,
confident
that
people
who
again
are
like
much
smarter
than
me
can
figure
them
out
and
we
can
talk
about
them
in
the
ITF
and
get
them
standardized
and
get
them
shipped.
So
I
I
know
there's
just
like
this.
You
know
this
looming
threat
of
like
post
Quantum
attackers,
but
I
am
very
confident
in
the
community's
abilities
to
deliver
and
I
can
deal
with
it
in
a
timely
manner.
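What Chrome ships is a hybrid key exchange, combining X25519 with a Kyber/ML-KEM encapsulation, so the session stays safe unless both components are broken. Below is a rough sketch of just the combining step: the X25519 and HKDF calls use the real `cryptography` package, while `kyber768_encapsulate` is a hypothetical stand-in for a real ML-KEM library.

```python
# Hybrid key exchange sketch: derive the session key from BOTH a classical
# X25519 secret and a post-quantum KEM secret, so an attacker must break both.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kyber768_encapsulate(peer_kem_public_key: bytes):
    """Hypothetical KEM stand-in: returns (ciphertext, shared_secret)."""
    shared_secret = os.urandom(32)  # a real KEM binds this to the peer's key
    return b"kem-ciphertext-placeholder", shared_secret

# Classical half: ordinary X25519 Diffie-Hellman.
client = X25519PrivateKey.generate()
server = X25519PrivateKey.generate()
ecdh_secret = client.exchange(server.public_key())

# Post-quantum half: encapsulate against the server's (hypothetical) KEM key.
_ciphertext, pq_secret = kyber768_encapsulate(b"server-kem-public-key")

# Combine both secrets through a KDF; the derived key stays secret unless
# both the classical and the post-quantum components are broken.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"hybrid key exchange sketch",
).derive(ecdh_secret + pq_secret)
print(len(session_key), "byte session key derived")
```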
M
I agree. Any new problem, I think we will solve it; that's not the challenge. The challenge is really to understand what the problems are.
Q
Hi, John Todd from Quad9. I think in order to understand the most optimistic future, it's also interesting to understand more of the pessimistic parts of the future. We're looking at a future where it actually is possible to have perfectly encrypted and secure communication between endpoints, and I think there are technical problems that we have to overcome, but we'll get over those. There are, however, very strong political oppositions to that, for a number of different reasons: national security, governments...
Q
That's
this
pushed
towards
data
Serenity,
so
I'd
like
to
get
the
panel's
opinions
on
what
you
would
see.
How
far
do
you
see
the
pendulum
swinging
back
towards
kind
of
a
negative
view
of
things
before
it
swings
forward
again,
because
I
think
we're
in
a
good,
optimistic,
optimistic
position
right
now,
with
these
Technologies
making
things
possible.
I
think
that
in
the
next
five
to
ten
years,
we're
going
to
see
some
pushback
on
that,
and
some
of
it
may
be
quite
severe.
I
To
clarify
quickly
by
like
the
negative
Direction,
you
mean
like
catering
to
the
the
request
to
like
you,
know,
backdoor
and
client-side
scan
or
whatever.
G
I
Great, I can take it, yeah. So, I guess it's no surprise that I believe we should continue on this path of encrypting all the things and ensuring that, for the systems we build, we can reason about their security properties in a way that makes sense. And, you know, the proposals that I've seen for, I guess, doing client-side filtering for CSAM purposes, for example: they don't technically make any sense in any reasonable threat model, so I'm not so worried about that kind of bringing us back. I guess what I am worried about is sort of our inability to come up with an actual technical solution to this problem, for which I do not know of one that exists. I think that's the hard problem here: how do you build an open system, built on open standards and open protocols, in a way that can't be easily subverted by changes to the endpoints, or abused by service providers to compromise your privacy and confidentiality? I don't know; super hard problem.
I
A lot of smart people have thought about it, so it's unclear how much longer we can stay on the path of trying to encrypt all the things before we have to acknowledge that we can't solve this problem and it has to come back. But I hope it doesn't go in that direction.
I
I somewhat agree, I guess, and I find it difficult to say that we will be able to reach a shared understanding of the problem, when the problem can be defined in many different ways and interpreted in many different ways. For example, in the particular use case of end-to-end encryption and client-side scanning: I may consider the fact that I can send private information to someone else a feature, and then someone else considers that a bug, because I'm sending CSAM or whatever. I do not have hope for us establishing any sort of consensus on what the problem is. I am very confident we can agree on what the technical facts of the situation are, but coming to a shared "you know, this is the problem that everyone needs to solve" at the same time, I do not see that happening.
J
That's an unsolvable problem to begin with, and you can either engineer for these properties to hold or for them not to hold, but that will always affect everybody. That's something we don't get out of.
M
So, we are living in a society; we have to have principles for how to address and resolve those conflicts, and that requires law, requires consensus. It's not that, because people or different parties hold different views, we just state all the different views and say we cannot have a shared understanding.
R
Max Planck Institute for Informatics, speaking for myself and not for my affiliation. We had some comments about taking positive and negative views of the future. I would argue that there are not very positive views of the future to be taken at the moment. If we look at the whole world, we are facing a world that is heating up; we are facing societies collapsing; we are facing water running out; we are facing wars around water, around land, around resources, tearing apart our world, currently tearing apart Europe already. In that world, I'm not sure whether the plumbing of the internet might be an irrelevant thing to consider. To put it in plumbing terms: while the plumbing might still be there in 20 years, the water running in those pipes might not be a commodity anymore. So I would like to hear the perspective of the panel on how the internet will look in such a torn-apart world.
J
Well,
that
that's
actually
a
good
Promenade,
there's,
obviously
a
big
picture
that
may
be
beyond
beyond
our
scope
here.
There's
one
thing
when
it
comes
to
the
fundamentally.
This
is
a
there's.
J
Multiple
things
you
can
say
about
this
one
bit
is
that
if
the
internet
has
enabled
one
thing
in
the
past
and
it
will
be
sharing
of
useful
information,
besides
all
kinds
of
disinformation
which
which
might
help
assist
coping
with
future
problems,
but
of
course
the
internet
itself
creates
resource
consumption,
and
we
had
this
example
earlier
with
that
was
given
saying
that
we
are
going
to
have
more
higher
bandwidth
demands
lower
latency.
J
So
we
might
need
to
reconsider
how
we
are
going
to
organize
future
growth
to
to
embrace
the
next,
whatever
billion
people
it
might
be
and
or
whoever
might
be
left
or
whatever
you
want
to
say,
but
so
merely
just
adding
more
routers
more
fibers.
More
processing
power
is
not
going
to
be
a
long,
not
necessarily
going
to
be
a
long-term
sustainable
approach
here.
So
that
might
be
one
one
aspect
of
that.
I
New
York
that,
like
this,
is
a
really
big
problem
and
perhaps
up
beyond
the
scope
of
this
particular
panel,
and
your
kind
of
question
was
like
what
does
the
internet
look
like
in
this?
You
know
sort
of
future
Society.
I
We
may
end
up
seeing
like
effectively
the
things
be
fractured
and
fragmented,
I'm
kind
of
already
seeing
that
in
a
way
in
terms
of
like
what
information
is
available
where,
but
as
like,
the
infrastructure
that
surrounds
like
the
internet
and
like
the
like
shapes
the
societies
that
use
it
I
I
only
kind
of
see
that
that
fragmenting
getting
worse
I
would
like
to
be
proven
wrong.
But,
like
you
know,
that's
seems
to
be
like
the
kind
of
remember
going
down,
but.
M
I'm,
more
I'm,
more
optimistic,
agree,
I
think
in
terms
of
getting
the
remaining
part
of
the
world
all
interconnected.
It's
not
a
purely
a
question
of
fiber.
Definitely
the
fiber
will
be
needed.
More
fiber
will
be
late,
we're
having
increasing
the
fiber
layout
I
think
very
fast,
but
then
at
the
same
time
we
really
need
to
look
into
new
ways
of
communicate.
M
The
you
know
the
wireless,
the
peer-to-peer
in
Academia,
you
see
so
many
Publications
with
regarding
to
data
meals,
I
reviewed
this
yeah
reviewed
papers
that
you
know
your
phone
can
be
data
meal
connecting
the
disconnected
parts
of
the
world.
I
think
a
diet
area,
I
hope
will
become
reality
in
less
than
20
years.
S
You know, between your browser and a web server somewhere, it's fully encrypted and all that, but the user, or you, may not have consented to the information transfer, which I would think is a privacy violation, yet it is totally secure and fulfills all our definitions of security. So yeah, I guess there is disagreement there. And I think why it's important to keep this in mind is that privacy plays really well into decentralization, or, you know, into how you distribute keys; all of that conversation becomes interesting if you're thinking about it through a classical privacy lens, where you're thinking about transparency and trust and consent.
C
All right, and with that we've come full circle to what, as you see, I said in the beginning. Now let's thank our panelists; it was an amazing discussion. Thank you, everyone.