From YouTube: CRDT Research Meetup // Distributed Applications & Platforms Towards Them - Juan Benet
Description
Originally recorded during the Lisbon Hack Week from May 21-25, 2018.
PDF: https://ipfs.io/ipfs/QmPBq4f6a58dR4q5M1B7277h5cm8GSkHt4ie7XwKgx8f6j/2018-05-21-lisbonhackweek/crdts-day/pl-overview.crdts.compressed.pdf
Keynote: https://ipfs.io/ipfs/QmPBq4f6a58dR4q5M1B7277h5cm8GSkHt4ie7XwKgx8f6j/2018-05-21-lisbonhackweek/crdts-day/
I'm going to talk about fully distributed applications and how we're building platforms towards them at Protocol Labs, and I'll give a bit of context about Protocol Labs, just to give you a sense of how we work, how we go about problem-solving, and the kind of stack that we're building and the models for it. So it's a lot.
So the security and privacy of the application data, or the reliability of whether or not the application can work, no longer just means whether or not people can get work done that day; in many situations it can be life or death. As we saw in recent elections over the last year and a half, very important movements are starting to rely on these applications, sometimes relying on social networks like Twitter and Facebook and so on.
We saw the spring of revolutions in the Middle East as one example of the reliance placed upon these systems, so they had better work well, right? They had better work well in case of certain kinds of problems or disasters where the internet might be split, or where you can't reach the backbone, and they need to work well in situations where there might be huge attackers trying to collect all of this information and use it against you in some way.
So there's a whole bunch of different kinds of problems, and we tend to decompose problems and think about starting projects to address some of those problems. Over time we tend to refactor projects and say: oh well, this thing is actually quite big, and maybe we can decompose some subset. So we started with the IPFS project, and that ended up yielding a number of other projects along the way, and we recurse. These are the kinds of things that we end up thinking about; I'm not going to go into detail, I think everybody here is pretty familiar with these.

Just to give you a sense, a bit about Protocol Labs, just to kind of answer, I guess, some questions. We tend to think of ourselves in this stellar nursery model of saying: let's think about the problems and the potential solutions in one space, and then construct a project that could solve it, and then think about growing it over time and building a community around it.
So again, we live in an amazing time where we can go from conceiving of things, to doing research on them, to moving that research into development, to moving that development into deployment into people's hands, and give people a human superpower. In the 20th century that would involve hardware, and that was expensive and hard. Now, with software and application platforms like the web, we can make this extremely fast. So again, somebody can sit at a computer, program an application, and then grant a superpower to other humans.
The problem, though, is: what are the reliability guarantees of that superpower? Is it actually going to work in light of a whole bunch of potentially problematic conditions? What's happening now is that humans are starting to rely on a lot of these things, often without understanding when they might break, so we should build robust foundations and good primitives so that application builders can build really good software. And this pipeline is not linear, of course; it really goes all over the place.
Sometimes we start with something that exists and think about how it can be improved. Sometimes we think of new ideas and kind of see them through. Sometimes we take ideas that already exist, that other people came up with, and say: oh wow, this is awesome, why hasn't it been built? Let's think about building it.
Another thing to mention about this pipeline is that in these stages there exist a number of harsh filters that have nothing to do with the idea itself or the viability of the idea, and are mostly about just humans building things, and what humans are good at. So sometimes, you know, a lot of great labs produce a number of ideas that, based on how academia works today, get produced, but it's really hard to then get them to actually exist.
Getting them to exist in the real world as applications, and moving them through this pipeline, might take decades in some cases. You can look at IPFS as an example of this, where the vast majority of the ideas that went into IPFS had been conceived of probably 10-15 years prior, and it just took a while for that to become possible to actually build, or it actually was possible and people just hadn't done it yet. We work openly, and we work with a lot of people around the globe.
So this is a shot of a subset of our contributors on GitHub. We try to do as much as we can through GitHub, and that involves both research, in some cases, and a lot of development across hundreds of repositories and so on. Looking at it a bit by the numbers, we have two large ecosystems forming, one around IPFS and another around Filecoin, twelve large projects, over 500 GitHub repos, and 500 reusable modules.
We really value this, so whenever we implement something, we try as much as possible to refactor things out so that others can use the intermediate pieces, sometimes maybe to a fault. But we try very hard to make the components we make just improve other things, even if we end up not relying on them later. We have a lot of contributors, and all of what we do would not be possible without our huge community. In terms of research, I mean, this is kind of a hard thing to measure.
So I wanted to briefly mention that with all of these projects there's extensive documentation on the web where you can find out about each of them, and there are a bunch of talks about the projects. So this introduction is very, very short and tailored to the problems that we have here today: thinking about CRDTs, and thinking about the distributed model in which we want to operate. All right, let's shift gears to the good stuff, so the protocols and the models and so on. This is the stack that we're thinking about.
We start with a kind of promise around future-proofing and upgradability. We don't want our protocols to fail because maybe people move to using a different transport protocol; sometimes that happens, and that's a shame. Or, you know, hash functions: you choose a hash function and you bake it into everything you do, and suddenly the cost of moving from one hash function to another becomes extremely high. We also don't want to be victims of the fragmentation in the network stack.
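That hash-agility concern is what self-describing formats like multihash address: the digest carries a code naming the function that produced it, so links can survive a change of hash function. Here is a minimal sketch in Python; it is illustrative only, as the real multihash spec uses varint-encoded codes and a much larger function table:

```python
import hashlib

# Codes from the multihash table: 0x11 = sha1, 0x12 = sha2-256.
FUNCS = {
    0x11: ("sha1", hashlib.sha1),
    0x12: ("sha2-256", hashlib.sha256),
}

def multihash(data: bytes, code: int = 0x12) -> bytes:
    """Prefix the digest with <code><length> so readers know how it was made."""
    name, fn = FUNCS[code]
    digest = fn(data).digest()
    return bytes([code, len(digest)]) + digest

def decode(mh: bytes):
    """Recover (function name, digest) without any out-of-band agreement."""
    code, length, digest = mh[0], mh[1], mh[2:]
    assert len(digest) == length
    return FUNCS[code][0], digest

mh = multihash(b"hello")
print(decode(mh)[0])  # sha2-256: the hash function travels with the digest
```

Because every stored hash says which function produced it, a system can start writing digests with a new code while still verifying old ones.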
We want to bind all of the network together, and so we built a system and library for doing that, and I'll explain this a bit more. We have a way to model authenticated data structures, where we kind of treat all data as potentially authenticated data structures, and we have a way of translating between different systems. I'll explain a bit more of how that works, but this is kind of a refactoring of the heart of IPFS, treating all available data that can hash-link as the same kind of data structure.
IPFS is the much more concrete thing here. It's a project for distributing files and building fully distributed applications that can work offline or in a disconnected environment, and I'll walk a bit through an example of that. That's where we are. And then Filecoin is a project around incentivizing a huge network around the planet to form, to build a properly decentralized storage market that is not controlled by any one party.
There are data structures that use hash linking to address each other, and if there are a whole bunch of Merkle trees, why can't we bring them together in some huge Merkle forest, and be able to address from one to the other and be able to move through them? Similar to how you can mount different kinds of file systems in your OS, or how on the web you can access a bunch of different URLs, whether or not they're hosted by the same provider or the same databases and so on.
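A minimal sketch of that idea: nodes that link to each other by hash, with paths resolving across node boundaries no matter which tree each node originally belonged to. All names here are made up for illustration; IPLD defines the real formats and codecs:

```python
import hashlib, json

# A content-addressed block store: hash of a node's encoding -> node.
store = {}

def put(data, links=None):
    """Store a node and return its content address."""
    node = {"data": data, "links": links or {}}
    encoded = json.dumps(node, sort_keys=True).encode()
    addr = hashlib.sha256(encoded).hexdigest()
    store[addr] = node
    return addr

def resolve(addr, path):
    """Walk a /-separated path of link names across node boundaries."""
    node = store[addr]
    for name in [p for p in path.split("/") if p]:
        node = store[node["links"][name]]
    return node["data"]

# Two "trees" joined into one forest simply by linking hashes.
leaf = put("file contents")
dir1 = put("a directory", links={"readme": leaf})
root = put("another tree", links={"docs": dir1})
print(resolve(root, "docs/readme"))  # -> file contents
```

Because links are hashes rather than locations, resolution works the same whichever node or provider a block came from.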
So that was kind of the basic idea, and it turned into a huge rabbit hole of thinking about computation and execution and models and a whole bunch of stuff. But the basic idea is thinking of primitives for enabling software developers, both at the platform and the application level, to build things that can take advantage of all the nice things about authenticated data structures, and make it work well and play well with everything else. So this leads to a distributed programming environment: think of things like Bloom.
Think of things like a distributed Clojure, maybe Erlang, and so on. That's the kind of model this is pushing towards. So at a basic level, it's a series of formats that can help you bind all these data structures together, but at a much deeper level, it's thinking of a programming system that doesn't start with the IDE and the runtime in one process, but thinks about the runtime across all the machines on the planet, whether or not they're connected.
There's a lot embedded in that, so I'm not going to dive into detail, but I'm happy to discuss it with people if they're interested in that. What libp2p is, so I'm going to jump down the stack for a moment: it's a project that just tries to abstract away the network in as efficient and convenient a way as we can make it, but that preserves the flexibility of the underlying network.
So today the web does not really work over things like Bluetooth; you have to try really hard to get that to work, and most browsers and servers and so on wouldn't work over that kind of transport. And if you want to try to use something safer like Tor, you also have to go to great lengths to either enable its use as a proxy or set up your own hidden service and so on. It's quite difficult.
What if we try to fix it by creating a network stack that lives in the application layer but could be moved down, you know, application-first, that abstracts away all of that and enables applications to just run with opaque network addresses and identifiers? As long as one application can connect to any other node and can find a route through whatever transport set it has, then that should work. You shouldn't have to, as an application developer, rewrite your application to make it work over Bluetooth; that should just work transparently.
So that's, in a nutshell, what libp2p is about. But one of the important things here is: it does not rely on any kind of existing network infrastructure or centralized network infrastructure, to the point where you could split the network in half, or into whatever pieces, and the nodes that can still connect to each other should be able to continue operating just fine. You could be completely offline on your own, or connected with the rest.
So this is the part of our stack that deals with all of the peer-to-peer magic that you need in order to make this work in today's pretty crazy networking environment, with tons of different kinds of protocols and so on. This is a toolkit that solves a whole bunch of problems to enable that to happen, and so you have things like peer routing and connection setup and NAT traversal and a whole bunch of different kinds of encrypted channels and so on, just to make this possible.
But the idea is that you sink all of that hard work into this layer, and out of that you get a really nice environment for applications, where all you have to do is have an identifier for another peer, which is their ID, and you have a routing system that can find addresses for that ID, and then you can set up an encrypted channel between them and then start operating. And again, that is not really even bound to IP; you could have this running on any kind of network.
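The dial-by-identity flow described here can be sketched as follows. Every name in this snippet is hypothetical, invented to illustrate the shape of the idea; the real libp2p APIs differ by language implementation:

```python
# Sketch: applications dial an opaque peer ID; routing and transport
# selection happen below them, so new transports need no app changes.

class Transport:
    """A transport knows whether it can dial a given address."""
    def __init__(self, scheme):
        self.scheme = scheme
    def can_dial(self, addr):
        return addr.startswith(self.scheme + "/")
    def dial(self, addr):
        return f"channel over {self.scheme} to {addr}"

class Host:
    def __init__(self, transports, routing_table):
        self.transports = transports        # e.g. tcp, bluetooth, tor
        self.routing_table = routing_table  # peer ID -> known addresses

    def connect(self, peer_id):
        """Resolve the opaque peer ID to addresses, then try each transport."""
        for addr in self.routing_table.get(peer_id, []):
            for t in self.transports:
                if t.can_dial(addr):
                    return t.dial(addr)     # an encrypted channel would wrap this
        raise ConnectionError(f"no route to {peer_id}")

host = Host(
    transports=[Transport("tcp"), Transport("bluetooth")],
    routing_table={"QmPeerA": ["bluetooth/aa:bb:cc", "tor/abc.onion"]},
)
print(host.connect("QmPeerA"))  # dials over bluetooth; the app never chose it
```

The application code only ever sees the peer ID; swapping IP for Bluetooth or anything else is a matter of registering another transport.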
You could be getting it from a node elsewhere or whatever, and it's all the same content; you can execute it the same way. We have demos with this kind of stuff that we can show you later. So I wanted to just talk a bit about three applications to give you a sense of where CRDTs come in. I guess a teaser is this, right?
So if we want applications to work in this environment, where they can completely split, and we want things like chat or collaborative documents and all this kind of stuff, then we need a model that doesn't rely on a centralized database in the cloud somewhere. We need a model that allows convergent data structures to build the state, the shared state, of this application.
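A minimal example of such a convergent data structure is a state-based grow-only counter, one of the simplest CRDTs: replicas update independently while partitioned and still converge once they sync, in any order. A sketch:

```python
# G-Counter: each replica only increments its own slot; merge takes the
# pointwise max, which is commutative, associative, and idempotent, so
# replicas converge no matter how or when updates are exchanged.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica ID -> increments made by that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.increment(3)            # updates made while the network is split
b.increment(2)
a.merge(b); b.merge(a)    # sync in either order
print(a.value(), b.value())  # 5 5: both replicas converge
```

No coordinator decides an order of operations; the merge function's algebraic properties are what guarantee the shared state converges.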
Now, of course, in our model we would like to have full guarantees regardless of whether or not people are connected at all, ever. That's kind of hard; you get into really hard questions around: can you allow a replica to never sync, what will the logs look like once they grow, and so on? So a lot of interesting questions. There are also access controls; we'll talk about that later today. The first application I'll mention is PeerPad, which we can show a demo of later today.
A
That's
not
really
in
any
domain
anywhere
like
this.
This
is
could
be
loaded
from
our
local
ipfs
node
and
it
works
the
same.
So
it
demonstrates
the
use
case,
and
so
here
we're
already
thinking
about
how
do
you
deal
with
access
controls?
How
do
you
deal
with
sharing
documents
so
right
now
it
has
a
capability
model.
Another example is IPFS Cluster, where right now this relies on consensus, but it's an interesting question as to whether or not this could be a CRDT. IPFS Cluster is a tool that brings together a number of IPFS nodes that want to pool their storage to replicate the same, you know, huge data set, for reliability and so on, while also being able to address more than would fit in a single computer, and it transparently makes these nodes behave as a single IPFS node.
So this is kind of a recursive construction, where these four nodes would then also expose the same IPFS API, and you could still make the same kinds of requests, and they just get replicated to the rest of the nodes. You have a traditional consensus model here, where things get fully committed into a log and then externalized to the user. But there's an interesting question as to whether or not this could actually be a CRDT; there's no real hard requirement that it needs to be consensus.
There could be a model here where a CRDT might be a better fit. This is about just tracking pin sets, and a pin is just an identifier pointing to a subgraph of a huge tree that we want to back up.
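To make the CRDT-instead-of-consensus question concrete, a pin set could be modeled as a grow-only set, since set union converges without a leader. This is a hypothetical design sketch, not how IPFS Cluster actually tracks pins:

```python
# Hypothetical: a cluster pin set as a grow-only set CRDT. Set union is
# commutative, associative, and idempotent, so any two cluster peers
# converge on the same pin set without electing a leader or keeping a log.

class PinSet:
    def __init__(self):
        self.pins = set()  # content identifiers this cluster backs up

    def pin(self, cid):
        self.pins.add(cid)

    def merge(self, other):
        self.pins |= other.pins

node1, node2 = PinSet(), PinSet()
node1.pin("QmTreeA")          # pins added while peers can't reach each other
node2.pin("QmTreeB")
node1.merge(node2); node2.merge(node1)
print(node1.pins == node2.pins)  # True: every peer agrees on what to store
```

The catch is unpinning: a plain union can't express removal, so supporting it would need something like an OR-Set, which is exactly where the interesting design questions start.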
This is kind of showing the recursive construction: cluster is a tool that sits next to an IPFS node and can point to its API, and then works with the other nodes and talks to the rest of the world. And I guess the last example is chat, so I think...