I'm really excited for today. I think this can kick off a good long-term R&D project to imbue our networks with drastically better security and better privacy. What I want to give today is a higher-level talk that funnels in through a set of important points for everyone working on this topic. So, first off, I want to go through why secure and private communications are critical for human rights.
I've come to believe that it's one of the most important things to get right in the next decade or two. Then I want to talk about why secure and private communications are hard: why this is inherently such a difficult problem to get right.
Third, I want to go through strategies for how to achieve secure and private communications, meaning not specific cryptographic constructions but meta-strategies for how you organize large groups of people toward these efforts. Then I want to talk about why Web3 is special here: why Web3 can succeed where other approaches have failed in the past, and also what distinct challenges Web3 has relating to this.
Then I want to dive deeper into content-addressed networks and why security and privacy are hard there: the nuances of why certain things are harder and certain things are easier in content-addressed networks. And then I want to conclude with an attempt at formalizing a bunch of properties that we want in the content routing problem. That last point is very specific to content routing; the rest of the material is much more general. With each of these points I'm going to be zeroing in and zooming in. So, as was said earlier, I think that we need a secure and private computing platform.
Now, this is one of those things that is only going to get progressively harder to do later.
When you look back through history, computing has become this tremendously powerful platform that is imbuing humanity with superpowers. We now have on the order of billions of humans and trillions of computers integrated into a live running system, constantly interacting with each other, and over time more and more of our activity is totally governed by the different applications that we use. So here is a good exercise to convince yourself of this.
If this sounds fanciful, just log every single time in one day that you interact with some computing application or service, and then try to trace how many application services that interaction has to touch. You'll see very quickly that it's a massive spaghetti tangle of tons of different internet services and protocols and structures, generating massive amounts of information and invoking all kinds of computation, much of it with very weak security properties or guarantees.
So today I claim that you can't quite be a modern human without access to the computing platform, and this is only getting more severe every year as more and more superpowers get deployed into the network. This is why the underlying security properties of the internet matter so much: if the underlying platform is not secure, and you can either break it, or revoke people's access, or surveil what's going on, that can yield a very different kind of structure.
So I think right now we're entering very difficult territory. Mass surveillance has been a thing for many decades, starting with the early telecommunications networks; through the Cold War we had all kinds of surveillance through the phone systems and so on.
There were, of course, leaks that were critical in revealing just how deep the worldwide surveillance apparatus actually is, and I think even that was just a glimpse into a lot of what was going on. But I think this is getting drastically worse over time.
So now, not only have we assembled massive repositories of social data with enough information to make extremely reliable predictions about what people might do in the future (think of all the social network data sets), but those data sets are getting semi-democratized: many more states every year can get access to those kinds of data sets, and many more companies can as well. So you're getting into a panopticon level of problem. And the issue is not the social networks themselves, and maybe not even the passive surveillance by the current states.
The problem is that that apparatus can then be used by a different governance structure, one that is much more draconian and problematic. So it's never about the present. The debates about pregnancy surveillance and so on tend to center on the current governments and the current police forces that are trying to keep people safe, but it's never about them. Even as well-intentioned as everyone is right now, it's really about the future.
It's about what future group is going to take power, and what future group is going to use these incredibly powerful structures that are left behind. So it's not about how the current state behaves; just imagine some extremely nefarious future state that gains power and takes control of that machinery to implement an extremely terrible situation. And this is now getting connected to incentive structures and automated mechanism design; you can think of the social credit system in China.
As one example of this, the direct surveillance of people is automatically fed into something like a control-theory loop: it goes directly into issuing fines for certain behavior, or changing credit scores, and the credit scores then confer access to certain government systems, like access to trains, access to flights, access to money and loans, in order to be able to operate.
So you have an extremely powerful platform being put in place, with a form of control that is completely unprecedented. Humanity has never seen a level of potential social control like what's getting built now and as things progress in the next few decades.
All of that is going to get coupled with more. Think of robotics and how that's going to enable this kind of thing; think about what's going to happen once AR and VR become much more prevalent, or when BCIs start coming in, and suddenly you no longer have to worry about attacks on your computer but about attacks on your brain. The ML models that we have today, let alone much more intelligent systems, are strong enough and good enough to coordinate massive-scale action in response to circumstances; just think of AlphaStar playing StarCraft against opponents in that environment. So the computing infrastructure that is possible now can enable an Orwellian digital totalitarian state.
That is extremely hard to get out of. So I think today we're still in time to upgrade the infrastructure to avoid and avert this kind of potential outcome, and I think it's really critical to work on this. I'm not alone in thinking this; a lot of people think this as well. Recently I have been chatting with a number of people in groups like the Future of Life Institute and so on.
They regard this potential problem of arriving at an Orwellian state as one of the highest risks that humanity faces at the moment. It's not quite an extinction risk, because Orwellian states tend to not want to become extinct, but it is definitely an extremely bad outcome for humanity.
So, hopefully, that convinces you that we need a secure and private computing platform now, ahead of when much more nefarious groups in the future might take control of super-powerful machinery. Great. So what's hard about this? The internet is an extremely difficult medium to try to secure. There are, again, trillions of devices, billions of humans, and all kinds of information and computing happening.
Securing the communication on one link between two parties is already extremely hard. Now think of dealing with all of the different links between all the different machines, and think of all of the data that gets generated along the way and all of the correlations that you can start doing. The metadata problems are extreme when you think about how easy it is to correlate all kinds of traffic in the network if you can observe all the packet flows.
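As a toy illustration of that correlation point (everything here, the flow names and the numbers, is hypothetical and not from the talk): even with every payload encrypted, an observer who sees packet timestamps on an entry link and an exit link can match flows by timing signature alone, because a constant relay delay leaves the inter-packet gaps unchanged.

```python
# Toy traffic-correlation sketch: an observer who sees only packet
# timestamps on an entry link and an exit link can match flows by
# comparing timing signatures, with no access to any payload.
import random

def flow(start, n, seed):
    """Generate packet timestamps with a flow-specific rhythm."""
    rng = random.Random(seed)
    t, out = start, []
    for _ in range(n):
        t += rng.uniform(0.01, 0.5)  # flow-specific gaps
        out.append(t)
    return out

def signature(ts):
    """Inter-packet gaps: these survive a constant network delay unchanged."""
    return [b - a for a, b in zip(ts, ts[1:])]

def match(entry_flows, exit_flows):
    """Pair each entry flow with the exit flow whose gap pattern is closest."""
    pairs = {}
    for name, ts in entry_flows.items():
        sig = signature(ts)
        best = min(
            exit_flows,
            key=lambda k: sum(abs(a - b)
                              for a, b in zip(sig, signature(exit_flows[k]))),
        )
        pairs[name] = best
    return pairs

entry = {f"user{i}": flow(0.0, 20, seed=i) for i in range(3)}
# Exit side: the same flows shifted by a constant relay delay, relabeled.
exit_side = {f"circuit{i}": [t + 0.2 for t in entry[f"user{i}"]] for i in range(3)}
print(match(entry, exit_side))  # each userN maps back to its circuitN
```

Real attacks tolerate jitter and packet loss, but the core signal is the same: metadata, not content.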
Getting security and privacy out of something like this is tremendously difficult. And the added bonus here is that most cryptographic techniques we have for preventing certain kinds of metadata collection or correlation achieve it by structuring protocols in terms of rounds, and by hiding legitimate accesses within a bunch of noise. That means many more operations than you strictly need.
So they tend to mean a massive performance hit, and with that massive performance hit convenience goes away; and as soon as convenience goes away, people don't want to use those things, because they become not nearly as good as the other insecure, less private structures.
So the challenge is double. Not only do you have to get the tech right, and get all of the structures to be really good and hopefully compose well, but you also have to make the result work as well as, or ideally better than, the other counterparts, so that people will actually adopt these systems, use them, and move over to them.
I think Signal is a good example of great success on this: it actually produced a messenger that was good enough in terms of UX and good enough in terms of privacy that a lot of people started switching. That said, it is still not the dominant platform; most people still use many other messenger systems out of convenience, out of being able to message each other with stickers.
Out of nice, smooth UX flows. Signal is really good, but Telegram is just so much nicer, even though Telegram is, well, I don't know if it's busted compared to Signal, but right now I would probably trust Signal more. One thing that I think Signal did really well in the past was working with many other groups to try to upgrade their privacy; I wish we could see that a lot more.
So once you start diving into a lot of the cryptography, there's just an enormous attack surface. Every single system that you might build has tons of different potential attacks, and as soon as you start coupling those systems with others, some properties in one might get violated by the composition with another.
So as you start putting puzzle pieces together, forming a larger structure and a larger construction, you end up with constraints that are extremely difficult to satisfy, and it tends to get extremely hard to build secure systems.
Cryptography is a massive field; there are apparently 838,000 results in Google Scholar for "cryptography." So there's a ton to write about, a ton to reason about, and so on. It's a super active field, and a ton of it is extremely practical and oriented around real-world systems on short-term horizons.
A good case to look at is the ORAM model, the oblivious RAM model, where you want to achieve a structure in which the adversary controls all of the computing machinery that stores some data and can observe all accesses to that memory. Even in that extreme setting, where the adversary controls everything and can see everything, you want a pattern of accesses and a pattern of storing data that remains secure and private.
Even in that setting, this is possible: there are many constructions showing that you can achieve it. You pay some performance penalties for achieving it, but it is doable. I don't know how large the largest ORAM deployment is; figuring that out would be interesting work to do.
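A minimal sketch of the core idea is the trivial linear-scan ORAM (a teaching construction, not any production scheme and not one named in the talk): to hide which block it wants, the client touches and re-encrypts every block on every access, so the server's view is identical no matter which address was actually read.

```python
# Trivial linear-scan ORAM sketch: the client reads and rewrites every
# block on each access, so the server observes an identical full scan
# regardless of which logical address was actually wanted.
import os

def encrypt(key, block):
    return bytes(a ^ b for a, b in zip(key, block))

decrypt = encrypt  # XOR with the same pad is its own inverse

class LinearScanORAM:
    def __init__(self, n, block_size=16):
        self.keys = [os.urandom(block_size) for _ in range(n)]  # client-side
        # "Server" storage holds ciphertexts only.
        self.server = [encrypt(k, bytes(block_size)) for k in self.keys]
        self.trace = []  # what the server observes

    def access(self, addr, new_value=None):
        result = None
        for i in range(len(self.server)):          # touch EVERY block
            self.trace.append(i)                   # server sees a full scan
            plain = decrypt(self.keys[i], self.server[i])
            if i == addr:
                result = plain
                if new_value is not None:
                    plain = new_value
            self.keys[i] = os.urandom(len(plain))  # fresh pad: new ciphertext
            self.server[i] = encrypt(self.keys[i], plain)
        return result

oram = LinearScanORAM(4)
oram.access(2, new_value=b"secret data 1234")
value = oram.access(2)
# The trace is the same full scan for both calls: address 2 is never revealed.
print(value, oram.trace)
```

The O(n) cost per access is exactly the performance penalty mentioned above; modern tree-based ORAMs bring it down to polylogarithmic overhead.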
But one piece of good news here is that, because the computing platform is getting so fast in terms of processing speed, bandwidth, storage, and so on, you could have these ORAM-type systems spread out across the world, close to people, and you can replicate a ton of data.
So as you generate data, you can push it and replicate it in different places, and then have ORAM-style accesses close to the user, so that the additional accesses you have to perform against an ORAM become low latency and localized. But again, you have to worry about what correlations you might be creating, and there's a hard question here: could you really run one ORAM for everybody? Or, if you run one per person, can you reason about the data and information flows between them, the correlations between them, and the sharing with other people? It's an extremely difficult type of problem.
The other reason why this model is a good one: it came up because people noticed that, even when you encrypt, even when you have some data structure that is entirely encrypted, trees and so on, if people access certain parts of the information in different ways, you can use those access patterns to extract information about what's stored. And this gets worse with sharing: as you get more and more sharing of the encrypted data structures, the adversary becomes better able to correlate those accesses and derive information.
The adversary may learn what kind of information certain ciphertexts include, or even extract some of the underlying information outright. So ORAM is an interesting model, and it would be great to see it deployed more widely across the board when building different kinds of systems. Doing this right, actually achieving proper security and proper privacy, is monumentally difficult.
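The access-pattern leakage just described can be shown with a toy frequency-analysis attack (the record names, handles, and popularity numbers are all made up for illustration): every record is an opaque ciphertext handle, yet an observer who counts fetches and knows the rough popularity distribution of the plaintexts can deanonymize the handles without decrypting anything.

```python
# Toy access-pattern attack: the server stores only opaque handles, but
# an observer counting how often each handle is fetched can match the
# counts against a known popularity distribution of the plaintexts.
from collections import Counter
import random

rng = random.Random(0)

# Public background knowledge: rough relative popularity of each record.
popularity = {"tax_form": 50, "medical_file": 10, "grocery_list": 2}

# The true name -> handle mapping is supposed to be secret.
secret_map = {"tax_form": "0xA1", "medical_file": "0xB2", "grocery_list": "0xC3"}

# Simulate client accesses drawn from the popularity distribution.
names = list(popularity)
weights = list(popularity.values())
observed = Counter(
    secret_map[rng.choices(names, weights)[0]] for _ in range(5000)
)

# Attack: rank handles by observed frequency, rank names by popularity,
# and line the two rankings up.
ranked_handles = [h for h, _ in observed.most_common()]
ranked_names = sorted(popularity, key=popularity.get, reverse=True)
guess = dict(zip(ranked_names, ranked_handles))
print(guess)  # recovers the secret mapping with no decryption at all
```

This is precisely the attack class that ORAM-style constructions are designed to defeat, by making every access look the same.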
But it's really critical to do. One other thing here: the more these systems get coupled with others while the assumptions aren't well understood across the board, the more brittle or unsafe those systems might be. So when deploying a cryptosystem that is supposed to be actually secure and actually private, it is really critical to understand the whole thing end to end, including any composition involved. You should be very suspicious here.
A
Many
of
these
cryptographic
primitives
do
not
compose
well
you
they
have
some
security
properties
and
when
you
compose
them
with
another,
it
doesn't
quite
work
as
well.
So,
for
example,
you
might
have
an
orm
that
is
supposed
to
be
safe,
but
then
you
put
that
oram
on
top
of
some
other
structure
or
or
you
use
a
norm
in
in
certain
ways,
and
you
might
be
breaking
the
the
properties
of
that
particular
construction,
great.
So
how
this
is
really
difficult.
It
seems
like
both
really
critical
but
impossible.
How are we going to do it? The good news is that, again, cryptography is a massive field. There's a ton of people working on this, and many people know that this is super important to do. And today the outputs of the world of cryptography do give us tremendous superpowers: we can do things today that were thought impossible before. Even the beginning of the public-key crypto story is a good one. People thought that encryption had to be done by sharing private keys ahead of time, and that you could not establish private and secure communications in public. Then came Merkle puzzles, then Diffie-Hellman, then RSA, and so on; that was a great example of coming up with an amazing construction that enabled amazing superpowers afterwards. And today we're really operating with what people call "moon math," doing amazing things: proving properties of certain systems, proving the computation of certain operations, doing things in zero knowledge without actually having to share information, or doing things like fully homomorphic encryption, where a computer holds a bunch of encrypted data and computes functions over it.
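The Diffie-Hellman idea mentioned above fits in a few lines. This is a sketch with toy parameters (the modulus below is a small demo prime; real deployments use large, carefully chosen groups and authenticated exchanges): both parties derive the same secret over a completely public channel without ever transmitting it.

```python
# Toy Diffie-Hellman key exchange: a shared secret established entirely
# over a public channel. The prime here is far too small for real use.
import secrets

p = 0xFFFFFFFB  # small prime modulus (demo only)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent (kept secret)
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent (kept secret)

A = pow(g, a, p)   # Alice sends A in the clear
B = pow(g, b, p)   # Bob sends B in the clear

# Each side combines the other's public value with its own secret.
alice_key = pow(B, a, p)   # (g^b)^a mod p
bob_key = pow(A, b, p)     # (g^a)^b mod p
print(alice_key == bob_key)  # True: same key, never sent over the wire
```

An eavesdropper sees p, g, A, and B, but recovering the key from those is the discrete-log problem, which is exactly the "amazing construction" leap the talk refers to.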
Now, of course, you pay for it in performance and you pay for it in R&D complexity, but there's just an enormous amount of promising research across the board. And what's really cool, more recently, is that a lot of these systems are now getting deployed much faster than before.
A
So
I
think
in
the
1995-2005
era
it
was
like
there
was
an
important
kind
of
sets
of
different
encryption
systems
that
got
deployed,
but
for
the
most
part
it
was
kind
of
a
slow
process,
and
ever
since
the
crypto
currency
world
appeared
and
blockchain's
appeared,
we've
gotten
a
much
faster
deployment
of
cryptosystems.
Now,
there's
a
worry
there
that
we
might
actually
be
deploying
a
bunch
of
things
that
are
broken
and
we'll
only
find
that
out
after.
But I actually think it's really good that we are testing a lot of these things in the wild, often with implicit bounties in the system: if you can break some system, you can steal a lot of cryptocurrency. That creates a great forge for building secure systems, because there are a lot of people out there trying to break them constantly. It's a great automated bug-bounty program.
So I think in cryptography, as in many other fields, the critical piece is sifting the things being produced in research environments and getting them into workable, usable states: built into software, built into libraries, built into things that you can use and put into products, to then ship out to the world and let people use them. I think one of the key things here is that we learn from the Signal story and from many other things in the past.
A
We're
like
we
ideally
want
to
decouple
approaches
as
much
as
possible,
so
that
research
and
then
implementations
and
then
deployments
in
the
product
can
leverage
the
best
results
from
from
prior
art.
A
It
would
be
bad
if
some
groups
like
create
exclusive
structures
where
some
key
research
is
like
patented
such
that
nobody
else
can
use
it
to
sort
of
like
promote
a
product
that
then
may
not
actually
succeed
in
the
market,
because
I
don't
know
it
didn't,
have
the
right
stickers,
and
so
you
end
up
in
a
situation
where,
like
extremely
valuable
contributions,
end
up
not
being
used
because
of
some
other
problem
done
downstream,
and
so,
ideally
you
you
can
do
amazing
work
in
research.
A
You
can
do
amazing
work
in
implementations
into
libraries
and
and
so
on
and
and
in
testing
those
systems
at
scale,
and
then
you
can
learn
from
that
and
enhance
as
many
products
as
you
can.
So
you
you
want
to
create
a
structure
where,
at
every
stage,
every
group
can
leverage
the
best
of
the
prior
of
the
prior
group.
So
that
means
open
patents
or
or
either
no
patents
or
open
patent
pools.
A
It
means
open
source,
it
means
open
implementations,
it
means
a
lot
of
implementations,
it
means
robust
testing
and
then
it
means
like
really
good
standards
for
certain
implementations
of
things
so
that
they
can
get
into
products,
and
it
means
a
lot
of
collaboration
with
large
groups
that
have
large
products
right.
A
So
we
should
be
in
a
positive
sum:
oriented
environment
where
we
should
be
trying
to
get
the
existing
large-scale
products
to
be
enhanced
by
the
by
the
new
crypto
systems,
as
opposed
to
try
and
create
an
entirely
new
new
stack
that
that
just
kind
of
creates
a
better
environment
kind
of
what
I
mean
with
that
is
like
if
we
have
a
good
shot
to
get
the
existing
infrastructure
to
become
much
more
private
and
much
more
secure,
that's
a
great
shot
that
we
should
take.
A
So
we
should
be
greatly
encouraging
the
current
web
2
giants
to
invest
deeply
into
things
like
fully
homomorphic
encryption
and
into
these
kinds
of
privacy
techniques,
so
they
can
have
their
their
ad
business
model
without
putting
humanity
at
risk
right,
so
it'd
be
great
to
like
get
to
a
structure
where
we
can
convince
these.
These massive-scale data platforms should then at least use user-protecting encryption, so as not to put all of us at risk. So I think there's a very good positive-sum-oriented environment where we can do a ton of this work: do research, build new products and hopefully make them successful, but at the same time also push a lot of this R&D, and especially the really promising solutions, into existing systems, because they're already installed.
So the more we can secure existing things, the better everything is going to go. And one other component here: things get better by intentionally driving R&D through this pipeline.
So: setting a goal by saying, hey, there's this hard problem, and articulating what the problem is; getting as many people as possible to know about it; then doing the work against that hard problem, coming up with solutions, even partial solutions that just tackle part of the problem and advance it; organizing communities of practice to actually tackle those problems; and having good ways of measuring success over time.
Creating incentive structures for doing this: highlighting important problems and putting prizes against them, or using RFPs or grant systems to support the many groups doing this work. That way we can have a large-scale, integrated effort to get as much high-quality work as possible done across this pipeline.
Think of a larger research program here, where we figure out really good open problems; do periodic surveys of the current state of the art, what works and what doesn't; think about what products currently exist and which products could use which kinds of constructions; try to connect those things; and try to sift things through this pipeline.
All of this at a pretty fast clip, so that we can get really good outcomes. The other thing here is that it takes many iterations through this whole pipeline to build really good, robust things over time. You go and find a set of problems; it takes many attempts to actually solve a problem; and once you solve a problem, it takes many attempts to implement it well and get to a robust, good implementation.
Then it takes many attempts at getting that implementation put into products before a product becomes broadly installed in the world. And even after that, you'll notice that it actually has some problem, and then you have to go back to the beginning and come up with other solutions or enhancements: you solved the problem, but you created a different problem, and now you have to solve that one. So it takes a lot of iterations through this pipeline to get to a good outcome.
A
However,
it's
very
very
possible
and
doable,
and
we
do
this
all
the
time
across
many
many
fields,
which
is
that
in
this
in
this
particular
one
we
have
to
like,
do
it
with
a
certain
urgency
and
get
certain
outcomes
that
right
now
seem
pretty
hard
great.
So
what
makes
web3
special?
Why
can
web3
help
a
lot
here?
Web3 is a much stronger foundation for this kind of operation, because math and cryptography and game theory and economics are much more dependable foundations than words and contracts that can be changed over time. So the field, or the industry in a sense, has good foundations in terms of the model for how to arrive at long-term dependable tech. Now, that said, most of these systems are very far away from the right level of security.
So I think the Web3 world has good security in some areas, and it does have some better systems under some very narrow definitions: being able to have public blockchains with public smart contracts, and have that be a secure computation, is pretty good. We have zero-knowledge proofs, which is really good, and we're pushing that research.
These systems are robust enough and large enough that you can have things like public blockchains and public currencies, and we now even have private currencies, with zero-knowledge money and so on. So there are good examples of some very narrow successes. However, it's also a disaster when it comes to real-world privacy: everything is essentially in the public.
There are all kinds of correlations you can draw; even against the best systems you can probably use metadata to figure out all the operations. So in terms of privacy, Web3 is in a terrible state, arguably much worse than the Web2 world. The Web2 world is actually fairly private, comparatively: you have massive cloud systems which, as long as they are operating honestly and operating well, you can more or less trust.
You have VPNs; you have pretty good laws in certain jurisdictions that require certain deletion of data and so on. Web3 is nowhere near being better than that. Now, we have to get better than that, and we can: the community in general is very interested in achieving it and has many projects working toward it.
Plus, it has the right foundations: you want to go all the way in a publicly verifiable way, and make the arguments in terms of math, not in terms of a privacy policy that could change at any moment, or that could be changed not because the company intended it, but because it was compelled to do so by a secret court somewhere.
One other component here is that Web3 has very strong values around establishing digital rights: freedom of speech, freedom of communication, freedom of assembly, users being able to own and control their data, decentralization in general, not really trusting central authorities. All of these are really good values to have when your goal is to produce this kind of safe cryptosystem.
Now we have to actually make it happen; we're very far away from establishing all of these things, but we can do it. And all of these values map to specific systems: each of them you can break down into specific cryptosystems that you can think about producing and constructing. The hard part here is that you really have to model them end to end, and not accidentally build something that is really safe in itself but where the metadata around the edges enables breaking it.
A very good example of this is the famous image at the top left here; let me make it big. This is such a good lesson in cryptosystem building. It's from a Post-it note, I think, where some agents were figuring out how to break Google's encryption: even though Google had a pretty good setup, they just found a weak point and broke that.
You can bust the whole system that way. So, as we build a bunch of secure, large-scale cryptosystems in Web3, we should be very, very careful that each one fully works end to end, and subject the entirety of the system to the most stringent scrutiny possible across the community: everything from the research model, the actual model of the system, to the implementations, how the implementations work, and how the deployments work.
How do the software distribution channels work? The entire pipeline of building the software and pushing it out there, and so on: all of that we have to get into a really good state.
One other thing that Web3 has is hash linking, and hash linking is tremendously powerful for all of this. You can make strong assertions about software, strong assertions about data, and so on, and it becomes an amazing superpower enabling a lot of these things. So we have some really good superpowers.
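A minimal sketch of hash linking (the record fields here are hypothetical; the hashing is standard SHA-256): each record is named by the hash of its bytes and embeds the hash of its predecessor, so a single head hash commits to the entire history, and tampering with any byte anywhere changes every address downstream.

```python
# Hash-linking sketch: each record names its predecessor by hash, so the
# head hash vouches for the whole chain; changing any byte anywhere
# changes every address after it.
import hashlib
import json

def put(store, record):
    """Content-address a record: its name IS the hash of its bytes."""
    data = json.dumps(record, sort_keys=True).encode()
    addr = hashlib.sha256(data).hexdigest()
    store[addr] = data
    return addr

store = {}
a0 = put(store, {"prev": None, "payload": "genesis"})
a1 = put(store, {"prev": a0, "payload": "release v1.0"})
a2 = put(store, {"prev": a1, "payload": "release v1.1"})

def verify(store, addr):
    """Walk the chain from a head hash, checking every link."""
    while addr is not None:
        data = store[addr]
        assert hashlib.sha256(data).hexdigest() == addr, "tampered!"
        addr = json.loads(data)["prev"]
    return True

print(verify(store, a2))  # True: one head hash covers all history
```

This is the same mechanism that lets you make the "strong assertions about software and data" mentioned above: publishing one hash pins an entire dependency graph or release history.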
Another thing we have is the benefit of cryptoeconomics: we can use cryptoeconomics itself to construct these kinds of systems and provide for their operations.
The fact that you can have these cryptocurrency networks that provide a service and a broad utility to the world, and that are self-standing through economic incentives, provides a really good substrate for building these kinds of systems, because you no longer have to rely on a particular corporation in a particular country continuing to have a business model, or potentially disappearing because that business model wanes or suddenly is no longer accepted in that environment.
We've been very lucky so far that there are many corporations that have fought hard to preserve certain rights and freedoms and so on. But ideally we could end up in a spot where that just happens through very strong incentive structures, independent of specific people and independent of specific organizations.
One other thing that cryptoeconomics can do, beyond helping you design pretty successful protocols and run open services and utilities, is incentivize the entire R&D pipeline across all of this. You can use the cryptoeconomic primitives themselves to cause a ton of R&D to happen: you can do things like put up bounties for breaking certain things.
You can put up bounties and grant programs and RFPs for building certain kinds of things, and I think we'll see over time a large amount of R&D funding coming from crypto networks, funding even basic research and early-stage work.
Think of the whole zero-knowledge explosion that we've seen in the last two or three years as funded by cryptoeconomics. Today a lot of it is routed through private capital markets and so on.
But I think that even if that hadn't happened, that research would have been routed through public token markets. So think of cryptoeconomics as a really powerful set of tools that you can use to build these systems, not just in the actual implementations, but in the meta-structure: being able to fund groups to go and develop and pursue these kinds of projects.
So it's a super powerful tool: think of it as being able to warp the incentive structures to cause certain outcomes. You can use mechanism design in a pretty powerful way. So I think Web3 is really well poised to improve the structure.
So
today
you
know,
I
tend
to
think
of
like
the
internet
is
pretty
it's
pretty
good.
The
we
have
like
a
pretty
scalable
and
and
pretty
secure
computing
infrastructure.
On
top
of
that,
you
know
fairly
secure.
A
Maybe we could rate it yellow instead of green, but it's fairly good. We have personal computing devices broadly distributed around the world that tend to work really well, we have a ton of applications, and so on. But our human superpowers are conditional.
A
This entire structure is conditioned on a set of contracts and end-user license agreements and terms of service and corporations and courts, and all of that ends up creating a really unstable and shaky foundation for this edifice of technology we've built on top, because at the end of the day some government somewhere can change its mind about something, especially in any kind of critical crisis. Think of times of fast crisis as moments where all of the old freedoms go out
A
The window, right. Wars have typically been, throughout history, a great moment to forget about all those important rights we all thought mattered. They're all gone, and only if the good groups succeed and then restore those rights and freedoms do you get them back.
A
There have been many times in history where that didn't quite happen and those old rights never came back, and we've been very lucky to be in a timeline where things actually worked out fairly okay. So it would be great to establish a much stronger foundation underneath all of this: hard math, hard cryptography, and economics to drive the creation of these superpowers and make the long-term maintenance sustainable.
A
That way you don't have to rely on these much more unreliable components; I think these underlying systems are less reliable than math and economics.
A
So I think web3 has that promise. Now we have to cause it to happen, and there's a ton of work to do, but it is a pretty stable foundation worth building on. And in general the whole field, the whole industry, is very interested in funding public goods and very keenly aware of the R&D pipeline; there's a very high proportion of people working in the earlier parts of the pipeline.
A
So there's a lot of regenerative finance, public-goods funding, decentralized science, and so on. Great. So now let's get much more specific about content-addressed networks, going back to the internet grapevine. Today, because of location addressing, a lot of communications are forced to go to specific locations, and an adversary can leverage that over time to do all kinds of surveillance in different spots. This can get a lot harder for the adversary.
A
If you start propagating content and information all the way to the edges, where you're pushing massive amounts of content down to the edges and users are just reading local information, or going to each other, or going just to their ISP and back, then it gets a lot harder in practice to surveil all the different points in the network.
A
You also have a model where it's not write once, then read many times from all over the world; instead, as you write, you propagate updates to the rest of the world, but a bunch of the reads and writes happen locally. So by default you have a different access pattern in these content-addressed systems, one that ends up doing better against many kinds of correlation attacks that adversaries might deploy.
A
The flip side, though, is that you're in an environment where you can't just rely on secure links between a few parties, because you end up with protocols that broadcast a lot of information to get the job done: you tend to fan out requests for certain content, and you tend to fan out lookup requests to connect to certain peers.
A
Today, in most of these systems, all of those operations are highly observable: you can observe a ton of it and draw all these correlations between systems. Now, the good news is that there are some really good cryptographic solutions to those kinds of problems. You can get into things like mixnets, and routing systems like oblivious routing.
A
There you have a network of participants meant to propagate and diffuse messages over the network while making that diffusion hard to correlate and hard to leak information from. So it's going down a different track of how to build information-distribution systems, one that confers a bunch of advantages for solving these problems.
A
Now, the downside is that, because it's a much newer trajectory and a much newer set of systems, we have a lot less research on it. We have drastically less R&D on content-addressed secure private networks than on location-oriented networks, so we basically have to catch up and do a ton of R&D work to end up with pretty good systems.
A
Also, computing hardware is getting faster. Those two things mean that a lot of cryptographic protocols that seemed completely infeasible five or ten years ago are now viable and can run at pretty good performance, as measured by a human trying to use the thing. It's also worth noting that the whole NDN and CCN literature is full of tons of really good protocols for content addressing and content routing and so on, but most of them
A
They haven't been designed with privacy in mind or with adversarial security in mind. So read them for a lot of ideas, but in many cases we have to start from scratch because there's some fundamental incompatibility there; in other cases you can take NDN- or CCN-style networks and just drop them on top of mixnets and they'll work.
A
They might not be fast if you try to do CCN on top of a mixnet, but maybe it's a way to start, and from there you start improving it. Great.
A
So I wanted to end by giving you a view into my attempt to formalize the content routing problem. I'll take you through it and speak about a few different properties. This is going to be a little bit redundant, but let's walk through it and establish a shared language for today. The content routing problem is the specific underlying problem within a content-addressed network
A
where you have a certain set of content providers that have content and want to serve it out to various users, and a set of consumers trying to retrieve that content and use it in some way, and you're trying to connect the content consumers and the content providers to exchange that information.
A
Ideally, in the full content-addressed data distribution system, you want to be able to provide certain guarantees about the whole model, so that when clients are retrieving data from providers you can hide the accesses and so on. That's a much harder problem, and it's much harder to get one single uniform system across the board.
A
So that's not quite what we're working on in content routing security. We're working on a reduced problem: assuming that the connectivity between a content consumer and a content provider can be secured in some other way, through mixnets or other means, just the problem of storing routing information and making it available to both providers and clients so that they can find each other. Just that part: just the problem of how consumers know which provider to go to.
A
If we can build a pretty secure version of that and broadly distribute it, it makes it a lot easier to build those other, larger systems that have proper end-to-end security and privacy guarantees through the whole thing. So, to enable those consumers and providers to find each other, we can use a set of content routers. Now, these content routers might be separate devices, or they might be the same devices: you could have a network where the routers are both.
A
You can use cryptographic artifacts like provider records, which store information about the set of content providers and the content they're providing, and you store that information in the network, in the routers, to make it possible for consumers to find that content. This is what we describe as provider records: think of a record as being produced by a certain provider and carrying certain data that it's about. For most of our purposes
A
we think of those records as using CIDs, content identifiers, as the way to refer to content, but these could evolve, of course; they could become selectors or other kinds of information structures. And you have a model where, when consumers query routers for content, they get zero or more corresponding provider records. It doesn't have to be complete, because at the end of the day consumers just care about getting the data; they don't care about finding all possible providers of a thing.
A
Those are some basic definitions of what content providers are. Think of a dynamic set of content providers: you have many providers coming and going over time, their actual data sets change over time, and you want to be able to reason about the records they're making over time and how they're advertising those to various routers.
A
There's a protocol to store those in particular ways, and whenever a client issues a query to find certain records, routers might have to do a ton of operations within their own data structures, or a bunch of operations with other routers, in order to extract the records and serve the query. But ideally you want to get to a spot where the routers have no idea what they're doing.
A
You want the routers to not know what the data is, and you want the routers to not know who the providers or the clients are, or what they're interested in, and yet you want to build a system where these routers provide a pretty efficient and secure service to the clients and providers.
A
So these consumers make requests to routers, usually providing the content address they're looking for, or some encrypted version of it. You can think of taking content addresses, creating some encryption of them, and using that as the query parameter, so that routers can't recover what the actual data is. And then, when you're thinking about the data, you can think of the data as having links to other data.
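As a sketch of that idea, here is a hypothetical router index keyed by a value derived from the CID rather than the CID itself, in the spirit of double-hashing proposals for DHTs. Everything here is an illustrative assumption, not any real system's API; note that a bare hash only hides the query from observers who don't already know the CID, and real designs layer actual encryption on top.

```python
import hashlib

def lookup_key(cid: str) -> bytes:
    """Derive the value used to index and query routing records,
    so routers handle a derivation of the CID, never the CID itself."""
    return hashlib.sha256(b"routing-key:" + cid.encode()).digest()

# A router's index, keyed by derived values rather than raw CIDs.
index: dict[bytes, list[str]] = {}

def publish(cid: str, provider: str) -> None:
    index.setdefault(lookup_key(cid), []).append(provider)

def query(cid: str) -> list[str]:
    return index.get(lookup_key(cid), [])

publish("example-cid", "provider-1")   # hypothetical identifiers
assert query("example-cid") == ["provider-1"]
assert query("other-cid") == []
```

Clients that know the CID can recompute the key and find providers, while the router stores only opaque digests.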
A
Now, a set of useful data structures we're using to make this problem easier. We're using CIDs, which are self-certifying, verifiable identifiers for a specific piece of data. CIDs are pretty useful because they're modular: they're parameterized in terms of the hash function and the encoding functions and so on.
A
For simplicity we just think about a CID, but it's worth noting that there's a many-to-one mapping here: you can have many CIDs that all map to the exact same set of bytes, because you could be hashing with different hash functions or using different encoding functions. And then, of course, once you have a specific byte stream, you could have different byte arrangements that represent the same information for users.
A
You can, of course, have different ciphertexts that encrypt the same information, and different layouts arranging the same information: you can take a single file, chunk it in different ways, and arrive at different graphs. So you can't rely on there being one CID for every unique piece of information.
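The many-to-one point can be demonstrated directly. The sketch below uses raw hex digests and a toy two-level Merkle root as simplified stand-ins for real CIDs and real DAG layouts; the shapes are made up, but the effect is the same: change the hash function or the chunking and the identifier changes while the underlying bytes don't.

```python
import hashlib

data = b"hello, content-addressed world"

# Same bytes, different hash functions -> different identifiers.
cid_sha256 = "sha2-256:" + hashlib.sha256(data).hexdigest()
cid_blake2 = "blake2b:" + hashlib.blake2b(data).hexdigest()
assert cid_sha256 != cid_blake2

# Same logical file, different chunk sizes -> different Merkle roots,
# even though both trees reconstruct the identical byte stream.
def merkle_root(data: bytes, chunk_size: int) -> str:
    leaves = [hashlib.sha256(data[i:i + chunk_size]).digest()
              for i in range(0, len(data), chunk_size)]
    return hashlib.sha256(b"".join(leaves)).hexdigest()

assert merkle_root(data, 8) != merkle_root(data, 16)
```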
A
Now, a lot of the magic is in the provider record. You can think of maintaining this dynamic set of provider records; I think of those as advertisements that the provider has a certain set of content.
A
Each provider record has a particular provider it's associated with and a particular set of data it's associated with; that could be just a single item or it could be many items. And there's a set of properties, and this is where we start getting into the nice formal properties, that you might want about the records. You might want records that are non-repudiable.
A
This means that if the record was produced, it should only have been produced with the direct authorization of that provider, meaning it might carry a digital signature: the record itself identifies that the provider said "I have this." You can also make it stronger and include a content verification as part of that, where you include a proof of storage of some kind, potentially a proof of storage at a particular moment in time, in order to produce a record.
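To make the non-repudiation idea concrete, here is a self-contained sketch of a signed provider record. A real system would use a standard scheme such as Ed25519; to stay dependency-free, this sketch uses a Lamport one-time signature built from SHA-256, which is a genuine (if single-use) digital signature. The record format, field names, and address are all hypothetical.

```python
import hashlib, os

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Lamport one-time signatures: sign each bit of the message digest by
# revealing one of two secret preimages committed to in the public key.
def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, message: bytes):
    digest = _h(message)
    return [sk[i][(digest[i // 8] >> (i % 8)) & 1] for i in range(256)]

def verify(pk, message: bytes, sig) -> bool:
    digest = _h(message)
    return all(_h(sig[i]) == pk[i][(digest[i // 8] >> (i % 8)) & 1]
               for i in range(256))

# A provider record binds a CID to the provider's addresses and carries
# a signature, so producing it required the provider's authorization.
def make_record(sk, cid: str, addrs: list) -> dict:
    payload = (cid + "|" + ",".join(addrs)).encode()
    return {"cid": cid, "addrs": addrs, "sig": sign(sk, payload)}

def check_record(pk, record: dict) -> bool:
    payload = (record["cid"] + "|" + ",".join(record["addrs"])).encode()
    return verify(pk, payload, record["sig"])

sk, pk = keygen()
rec = make_record(sk, "example-cid", ["/ip4/192.0.2.1/tcp/4001"])
assert check_record(pk, rec)
rec["cid"] = "tampered-cid"     # any tampering breaks verification
assert not check_record(pk, rec)
```

The point of the sketch is the binding: given the provider's public key, anyone can check that this exact record was authorized, which is exactly what makes it non-repudiable.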
A
So you could actually produce these kinds of records that carry not only a signature that the provider had the content, but an actual cryptographic proof that they had that content at that particular moment in time. Now, that can be really useful for clients, and it can help avoid a bunch of spam from adversaries, but it creates a problem: now you have a direct link between the providers and what content they had.
A
So if the providers are not storing encrypted content that they know nothing about, and they can tell what they're storing, that might be a problem, because there might be places where storing it is an action they shouldn't be taking.
A
So you might also want certain records to be repudiable, where you can produce those records without direct authorization, and you tolerate some spam in order to give providers plausible deniability about storing that information. So this depends on the system: a system might want non-repudiable records, or a system might want repudiable records.
A
You might also have temporary records that are only valid for a certain period of time, or under a certain set of conditions. And you might use spam costs, where you allow repudiable records but introduce some cost to generation: a legitimate, honest provider that has the content can produce the record cheaply,
A
but a provider that does not have the data can produce a record as well, at some significant cost. You can think of churning through many keys and breaking a key in order to produce that record. This provides a pretty good sweet spot in between: it allows some spam, to create plausible deniability, but it cuts down on the spam problem.
A
Ideally you can operate on the records while they're encrypted, which is where routers want to be: operating on those records without knowing what they're doing. Now, when you think about piecing together these three sets of agents, the consumers, the routers, and the providers, using the content addresses and the records, you can piece those together into a content routing system: a protocol that allows these agents to use these data structures to implement one of these content routing systems. And there's a set of properties that you might want about these systems.
A
You might want the system to be authenticated: all the agents have cryptographic identities, and all of them can sign messages with public-key crypto or something like that. You might want it to be self-certifying, where agents have their own public keys, you can only dial them at their public keys, and they send messages under their public keys; and you might want the data itself to be hash-linked and so on.
A
So there are some self-certification properties going on. Now, when it comes to privacy, here's where it starts getting interesting. First off, the basics: you can have transport encryption between parties, so the message passing itself is encrypted.
A
You might have writer privacy, where a provider can publish content without unauthorized consumers or routers or adversaries learning what the provider has published. Writer privacy is this extremely good property where a content producer can publish content out to the world, but only the authorized parties that can retrieve the information know that they did; everybody else just sees a bunch of ciphertexts and cannot really know who's storing what. And you might have different forms of writer privacy. You might have identity writer privacy, where adversaries cannot learn who provided the content.
A
They could see that content was provided, but they can't tell by whom. You might have content writer privacy, where adversaries cannot learn what exactly the content is: they might be able to see that certain providers are adding a lot of content, but they don't know what it is. You could have full action writer privacy, where adversaries just cannot learn that a particular provider has taken any action at all; that would be a really great protocol to have.
A
And then you could get to full writer privacy, where no entity learns what entities provide content or what content is being provided at all. This is monumentally difficult to achieve, and it requires a bunch of different protocols between providers and consumers as well, because you don't want to create a system where the content routing part is private and secure but you can later just correlate the consumers getting data from the providers. Now, just like writer privacy, there's reader privacy.
A
On the other side, consumers want to be able to find content without unauthorized providers or routers or adversaries learning what they're looking for. Maybe the providers have privacy, or maybe they don't, but the consumers might want privacy separately. So you can think of systems that provide both, or just one of them, and again you can have the identity, content, action, and full notions of reader privacy.
A
Anybody can become a provider and anybody can become a router, and you might want notions of consistency over those sets. You might want an eventually consistent model where anybody can join the sets and nobody can quite know, at any moment in time, who's in the set; or you might want consistency in some of them, like consistency over the routers or consistency over the providers. That consistency can be very useful in building certain protocols, for example if you know all the routers and that's a consistent set.
A
But if it's an eventually consistent model and you have these sloppy sets, then it's much harder to do that. Now, a sloppy set is not consistent: you maintain partial information about the set, at no point do you have a complete registry, and nobody quite knows the full story. But that's enough, and you can use these sloppy sets to solve this problem.
A
If a router maintains a sloppy set of providers, it doesn't have to maintain all of them; it just needs to maintain enough so that consumers can get the data, and that could be a good place to be. Or maybe routers keep information about a subset of the providers and the clients just find some of them. So the sloppy sets can be useful.
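A sloppy set can be sketched as a bounded random sample. In the toy model below (all names hypothetical), each router keeps at most a couple of provider records per CID and each provider advertises to only a few routers; no single router holds the full registry, yet a consumer querying a handful of routers still tends to find some provider.

```python
import random

class SloppyRouter:
    """Keeps only a bounded subset of providers per CID; when full,
    it evicts a random entry rather than tracking everyone."""
    def __init__(self, capacity: int, rng: random.Random):
        self.capacity = capacity
        self.records: dict[str, set[str]] = {}
        self.rng = rng

    def advertise(self, cid: str, provider: str) -> None:
        provs = self.records.setdefault(cid, set())
        provs.add(provider)
        while len(provs) > self.capacity:
            provs.remove(self.rng.choice(sorted(provs)))

    def lookup(self, cid: str) -> set[str]:
        return self.records.get(cid, set())

rng = random.Random(7)
routers = [SloppyRouter(capacity=2, rng=rng) for _ in range(5)]

# Each provider advertises to two random routers, not all five.
for provider in ["p1", "p2", "p3", "p4"]:
    for r in rng.sample(routers, 2):
        r.advertise("cid-x", provider)

# A consumer queries three random routers and unions the answers;
# it only needs *some* provider of cid-x, not the complete set.
found = set()
for r in rng.sample(routers, 3):
    found |= r.lookup("cid-x")
```

The design trade is visible here: per-router state stays small and membership never has to be globally agreed, at the price of lookups being probabilistic rather than exhaustive.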
A
Now, there are the traditional liveness properties for any distributed system, where you want the system to maintain liveness and eventually make progress in the presence of some set of malicious parties. And critically, here in content routing, you want this to be partition tolerant.
A
You cannot tolerate a content routing system that is not partition tolerant, because the internet breaks down all the time and partitions happen, and that should not mean that the whole system halts. There's an additional security property there: if it's possible to halt operation by creating partitions, then not only is your system not going to work well in normal times, it has a huge attack vector, where all an adversary has to do is create a partition to prevent it from working. And then you might have game-theoretic security properties, like wanting the system to be Byzantine fault tolerant,
A
where you want it to tolerate a certain number of adversaries maliciously running the protocol. It might not be about the number of adversaries; it might be about the amount of power or influence they have over the system. You might want it to be BAR tolerant, that is, Byzantine-Altruistic-Rational tolerant, an extension of the BFT model that actually talks about rationality. And this is really where the web3 world is: the web3 world is not in BFT, it's in BAR.
A
Most of the providers in web3 are rational entities that are maximizing their utility functions, as measured by the amount of cryptocurrency they get, and so you really want to design protocols that are safe in the presence of rational parties, maybe a small proportion of altruistic, honest parties, and maybe some powerful Byzantine adversaries. But for the most part, you want the routers to be able to be rational.
A
You don't want to expect honesty or altruism out of the routers. Then, ideally, some of these systems are fully incentive compatible, where the rational protocol is exactly the same as the altruistic protocol. That's much harder to achieve; usually you can build systems where
A
the system can work given all the rational deviations from the protocol. Cool. And then maybe some diagrams: you have a bunch of consumers talking to a router network and then some providers; or maybe the clients and providers are all routers too; or maybe there are no routers and they all just know who to go to. The best content routing system is one where you don't have to do any content routing at all. All right, sorry for going a bit over, but thank you very much.
A
Well, in order to write protocols that are actually safe, you need to be able to prove properties about the system, and in order to prove properties about the system, you have to formalize the definitions of what you're proving properties about. So this is a first step at describing the practical distributed-systems problems that we have, in terms that cryptographers can actually grapple with and say, oh yeah, the way to achieve this
A
is this. So this is a step toward framing the problem in such a way that cryptographers can work with us to solve it, and we could publish some version of this. I want to be very clear: I wrote this out from my understanding of the model and the system, based on past learnings.
A
This is not exhaustive, and I'm sure there are a bunch of important content routing properties that other people have found that are not in here. The larger project here is maybe to create a canonical definition of this thing and publish that as a unit. It's important that these things end up published in the academic literature, because academics tend to focus primarily on things coming through the stream of academic publications.
A
Not completely, but you increase the likelihood that a lot of brilliant cryptographers are going to work on the problem if you advertise in the forums they look at, and the forums they look at are ePrint, Eurocrypt, Oakland, and so on.
A
So if we get these things published, if we solve a version of this problem fairly well and include in that paper a good articulation of the problem and point out the open problems, then you can just sit back, wait a few years, and you'll get really good solutions out of it.
A
However, I don't know whether conferences would accept that; they might say there's nothing interesting here. So you might smuggle this in along with some solution. But really, the value is here. I don't know; we could start with a website, just start with a website, define it there, and point people toward it. Maybe a forum: Ethereum Research has been very good at producing really high-quality cryptography work.
A
So maybe one possibility here is that we start a dedicated forum for this problem and then invite a set of researchers to come. We'd have to keep it very high signal-to-noise, but I think this could be pretty successful.
A
Yeah, so I think we now have the capability of deploying some partial solutions today, and so we should be engaging in some short-term projects to enhance the properties of the tooling now. Think of adopting things like WNFS, which is kind of a better UnixFS; and Peergos also has a ton of really important properties for preserving a bunch of the privacy guarantees that you want.
A
So maybe there's some enhancing of both of these systems and then adopting them into IPFS in general, or giving people tools for how to use them to build many kinds of applications. Then there's also improving the content routing storage of records in the indexers and the DHT; there are some easy, straightforward things to be done there.
A
That could improve the protocols, but that's not going to solve it. Just to be really clear: all of that is not good enough to get to the full level of privacy we need. I think what we need is this medium- to long-term, larger program, where we have very strong long-term goals, with important intermediate goals, and we describe a bunch of open problems.
A
You get people to work on those, you produce results; in some cases those results are good enough to go into live systems, and you start doing that while continuing to research more open problems, and over time you get to the solution. So you want to progressively upgrade. Think of it as incremental delivery, but on research timelines: instead of agile, it's research agile,
A
where the cycle is potentially years. So I do think we can achieve this. I think you could have a super dedicated, Manhattan Project-style research program and do this in two to four years, but that requires changing how the world works. On the current trajectory I think you could do it in ten years; maybe you could do it in five. It would be great to do it in five, but that's aggressive.
A
I think we can get to a pretty good place in five years that is just dramatically safer than other networks today, but still not fully there, because really getting an internet layer that is safe from this stuff is the kind of thing nightmares are made of.
A
So to really protect from all of this is way, way hard, but I think we can be in a pretty good place in three to five years. So yes, it's definitely a long-term oriented program. Cool. Thank you.