From YouTube: Journal Club - 2018 06 13 CRDT JSON Datatype
Description
Presented by: @gpestana
Presented on: 2018-06-13
pdf link: https://arxiv.org/pdf/1608.03960.pdf
Summary: https://github.com/gpestana/research-CRDT/blob/0d474fe047425c821aca9a8eaf2ec4f4a7f33cec/research/json-crdt.md
B: Alright, that's great. Can you see the paper and the notes? Yep, yeah, all right, cool, looks good, yeah. Just let me know if you have any questions; just interrupt me. I might not see your hands, so again, just feel free to interrupt me. So thanks a lot for the time; let's jump into the paper. The paper is about a conflict-free replicated JSON datatype, by Martin Kleppmann and Alastair Beresford. A really interesting paper, in my opinion. Conflict-free replicated data types are structures which are, in my opinion, a super interesting way to solve a lot of distributed problems when it comes to collaborative applications. The basic idea of this paper is that they present an algorithm, and sort of a formal semantics, for a JSON data structure that resolves concurrent modifications automatically and locally, in each and every replica, so there's no need for any type of consensus.
B: The replicas can just modify their own local state optimistically, following the algorithm and editing all the metadata that is needed in the proper way, and they are sure that when they spread their modifications through the network, everyone will eventually have the same state; the states will converge locally, and this is super cool. So eventually we get a peer-to-peer network, or really any distributed network, in which the different replicas will converge towards the same state. No user input is lost whatsoever, and we don't need any guarantees from the network.

And then, what happens when these replicas come back online and share the modifications they made offline with the others? There might be some concurrent modifications which conflict, right? That's usually what happens, and then we get a lot of consensus algorithms, or some centralized peer, or a centralized system, to make sure that the conflicts are resolved. But this CRDT approach is different from that.
B: It's a really interesting result, and the really interesting thing about this paper as well is that we're talking about JSON, which is quite expressive already, so we can basically build a lot of applications on top of JSON data structures, and that's exactly one of the most interesting results of these data structures. So we're gonna go really quickly through the introduction: again, what a CRDT is, some applications, some examples of what kind of abstractions we have, and what kind of API we have to locally handle this JSON structure. It's really straightforward, what everyone would expect from a JSON API. Then we're gonna jump into the implementation details and see how things work in more detail, and yeah, that's pretty much it. So basically I've gone really quickly through the introduction.
B: We already know more or less the context: the JSON is kept in different replicas, changes are done locally and spread across the network, and eventually all the peers will get all the operations done on the JSON blob. So we don't require many guarantees from the network itself, which is really good and opens up a broad range of applications for us developers. We can think about peer-to-peer networks; we can think about situations in which the network is really, really spotty.

You know, it makes developing collaborative peer-to-peer or decentralized applications really easy from a developer standpoint, because all of what could have been a messy situation full of conflicts when merging operations from different replicas (I wouldn't call it consensus) is completely abstracted from us. That makes it really easy and really clean for developers. Obviously, that all comes with a cost that we're gonna see a bit further on.
B: There are different types which can be embedded, and that's exactly the novelty of this paper: how to create an algorithm and data structure which is able to properly handle embedded CRDTs. Because CRDTs have been around for quite a bit; it's been quite a hot topic nowadays, but they've already been used in many other systems, and that's also talked through in the paper. But that's the big thing about this one.
B: There's something that I'm asked quite a bit when talking about this with my friends and colleagues, so I think this is a really interesting idea to retain. As we've already talked about shortly, each peer has its own state, its own replica, and the authors were aiming at three main principles when they came up with this algorithm and the CRDT JSON. They were really pushing forward this idea of strong eventual consistency without conflict: if the peers get all the events which have been broadcast or spread on the network, then they will eventually converge to the same state. And another really important point is that no user input should be lost due to concurrent modifications.
B: That's in contrast to some of the CRDTs I can think of, like the last-writer-wins type of CRDTs, in which, every time two concurrent writes are done on a register, the algorithm will discard the first write. That's not what's aimed at here. If there's any such situation, this CRDT will most likely keep both results; we're gonna see that a bit further on. So yeah, with that we've gone through all the contributions.
B: Again, the idea is that CRDTs have been around for a while, but this is the first published work, to the authors' knowledge, that really embeds a lot of CRDTs, creating a really interesting data structure for applications and whatnot. So I think we can already jump ahead.
B: So maybe the concurrent editing examples. There are other approaches, which I've already talked about but which are not that interesting compared to CRDTs, which have been around for a while; here is where things start coming in. The really cool thing about this paper is also that they really show in diagrams how the different concurrent modifications turn out when the different modifications are merged. There are around four or five examples; let's go through them really fast. The idea is that we have two replicas, the left replica P and the right replica Q. Each of them starts with the same state, they do different modifications, and then they eventually perform network communication, and we can see how the algorithm resolves the conflicts.
B: So the first idea is already quite interesting. Okay, let's see the first one: we've got a map with a key whose value is a register. Replica P changes the key's value to one value, replica Q changes the same key's value to another, and then they perform network communication. This is a register, but because we don't want to lose any input, the conflict resolution is basically to keep both of the registers. So we've got a multi-value register at the end of the conflict merging.
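The merge just described can be sketched in Go. This is a minimal illustration of the multi-value register idea, not the paper's actual data structures; the operation-ID strings are made up for the example.

```go
package main

import (
	"fmt"
	"sort"
)

// MVRegister maps the ID of the writing operation to the value written.
// Keeping the writer's ID lets concurrent writes coexist.
type MVRegister map[string]string

// merge unions the concurrent writes instead of discarding either one,
// so no user input is lost; the result is a multi-value register.
func merge(a, b MVRegister) MVRegister {
	out := MVRegister{}
	for id, v := range a {
		out[id] = v
	}
	for id, v := range b {
		out[id] = v
	}
	return out
}

// values returns the surviving values in a deterministic order.
func (r MVRegister) values() []string {
	vs := []string{}
	for _, v := range r {
		vs = append(vs, v)
	}
	sort.Strings(vs)
	return vs
}

func main() {
	p := MVRegister{"1@p": "A"} // replica P writes "A" under the key
	q := MVRegister{"1@q": "C"} // replica Q concurrently writes "C"
	fmt.Println(merge(p, q).values()) // both writes survive
}
```

Picking a single winner deterministically (as the speaker notes) is then an application-level choice on top of this merged state.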
B
These,
these
sort
of
insight
is
even
more
interesting,
because
what
the
CRT
Jason
does
is
exactly
making
sure
that
both
replicas
end
up
with
the
same
result
without
having
to
solve
any
conflict
right.
It
doesn't
mean
that
on
an
application
level,
it
makes
sense,
that's
gonna,
be
others
responsibilities,
developers
most
likely,
but
as
far
as
both
replicas
have
the
same
state,
then
it's
easy
to
in
a
really
deterministic
way
to
pick
one
of
the
registers
or
do
whatever.
So.
B
What
happens
is
that
because
the
rightmost
replica
has
cleared
the
muck
and
the
left,
one
didn't
even
touch
the
key
blue
blue
will
be
deleted
altogether,
so
we
get
into
a
situation
in
which
callers
will
have
red
and
green
I
think
this
is
more
or
less
sort
of
a
sort
of
a
expectable
in
a
way.
So
there's
not
much
of
a
really
weird
results
coming
out
of
here,
the
happens
exactly
because
the
right-most
replica
has
actually
cleaners
the
whole
map
all
right.
So
for
the
next
example,
it's
also
quite
straightforward.
B
It
basically
shows
an
example
in
each
in
which
two
replicas
concurrently
creates
an
ordered
list
under
the
same
map
keys.
So
we
start
with
the
empty
map
empty
document.
The
leftmost
creates
a
list
or
or
creates
a
key
or
assigns
lists
to
the
key
grocery
and
the
rightmost
do
the
same,
but
they
add
different
different
ordered
items
to
the
list.
What
we
get
at
the
end
is
that
the
list
will
be
ordered
at
the
same
the
same
way,
so
it
doesn't
really
matter
whether,
for
example,
eggs
and
ham
would
be
third
and
fourth
item.
B
What
matters
is
that
both
of
them
converge
and
they
both
of
them,
get
the
same
sort
of
a
ordered
lists,
and
this
is
going
to
be
achieved
through
lamp
or
clocks.
Whatnot,
we're
gonna
check
how
that
works,
actually
a
bit
more
in
detail
and
if
it
in
a
few
minutes.
So
this
is
exactly
the
same
pretty
much
the
same
idea.
It
just
keeps
the
whole
ordered
list.
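The "arbitrary but deterministic" ordering can be sketched as follows. This is an illustration of why every replica computes the same list, assuming elements carry Lamport IDs; it is not the paper's actual insertion algorithm, which places elements relative to the cursor position they were inserted at.

```go
package main

import (
	"fmt"
	"sort"
)

// Elem is a list element tagged with the Lamport ID (counter + replica)
// of the operation that inserted it. Field values here are invented.
type Elem struct {
	Counter int
	Replica string
	Value   string
}

// deterministicOrder sorts concurrent inserts by a rule every replica
// applies identically: counter first, replica ID as the tie-break.
func deterministicOrder(elems []Elem) []string {
	sort.Slice(elems, func(i, j int) bool {
		if elems[i].Counter != elems[j].Counter {
			return elems[i].Counter < elems[j].Counter
		}
		return elems[i].Replica < elems[j].Replica
	})
	out := []string{}
	for _, e := range elems {
		out = append(out, e.Value)
	}
	return out
}

func main() {
	// P inserted eggs then ham; Q concurrently inserted milk then flour.
	merged := []Elem{
		{1, "p", "eggs"}, {2, "p", "ham"},
		{1, "q", "milk"}, {2, "q", "flour"},
	}
	// The resulting interleaving is arbitrary, but identical on every replica.
	fmt.Println(deterministicOrder(merged))
}
```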
B
It
shows
here
that
we
can
use
these
for,
for
example,
text
documents
as
an
application-level
and
in
my
opinion
these
are
probably
the
two
most
interesting
examples
of
what
we
can
expect
from
these
Jason,
a
CRT
algorithm.
So
the
first
example
is
that
we
assign
values
of
different
types
in
the
same
map
key.
So
these
mckee
a
is
a
sank
map,
whereas
this
this
under
the
the
rightmost
replica
it
the
key
a
gets
a
least
assigned.
B
So
what's
gonna
happen
is
that
it's
impossible
to
meaningful
me
merge
a
map
with
an
array
and
then
again,
because
we
don't
want
to
lose
any
user
input.
We're
gonna
keep
both
of
them
tag
them
as
okay
we'd
get
this
map
type
map.
Has
this
value
tab
list
has
this
list
and
then
again
the
application
level
has
to
resolve
these
conflicts.
B
So
there's
always
these
sort
of
edge
cases
in
which
there's
some
work,
that
the
upper
layers
for
application
will
have
to
perform,
but
there's
really
no
other
way,
besides
a
human
to
really
make
make
the
decision
of
whether
these
makes
more
sense
or
the
other.
So
that's
another
of
the
of
the
insights
of
this
example
and
the
last
one
is
probably
one
of
the
most.
A: [inaudible question]

B: That's a very good question. The idea is that each of the peers, each of the replicas, has an ID, which is sort of a Lamport ID, and they keep a Lamport clock, with which it is possible to make sure that there's a causal order of what's happening in the network, right?

A: [inaudible]

B: Thank you, yeah. Really good question, and that's, I think, one of the cool things about this: the final ordering in lists is deterministic; both replicas are guaranteed to have the same order. All right, so the last example is also really interesting in a way. It's basically the same as the example we've seen with the colors, in which the leftmost replica clears the whole item of a to-do list.
B
So
we've
got
a
map
to
do
which
has
a
bunch
of
which
have
a
bunch
of
items
to
do
items
and
we
get
the
left
replica
clearing
up
the
whole
the
whole
to-do
list,
and
then
we
we
got
the
right
replica
actually
setting
the
buy
milk
item
done
to
true.
So
when
we
merge
a
replica,
the
same
thing
has
happened
with
the
blue
and
whatnot
we've
seen
before.
What's
gonna
happen
is
that
because
the
left
one
cleared
the
item,
the
right
one
only
changed
it
done.
We're
gonna
end
up
with
a
to
do
list.
B
So
then,
there's
such
a
thing
as
a
to
the
list
without
done
or
title,
then
just
remove
everything.
So
that
would
be
probably
what
makes
this
the
most
sense
so
yeah,
that's
basically
it.
These
are
the
sort
of
a
weirdest
kind
of
merging
examples.
Everything
else
would
be.
Super
sort
of
a
working
is
expected
in
a
way
image.
Jason.
So
I
think
this
is
really
really
cool.
B: This is just an example, almost pseudocode, of what a bunch of JSON CRDT primitives, or an API, would look like, and the most interesting part for us is probably this idx(0). Basically, what we have is: we initialize the doc, we set the value of the "shopping" key as a list, and then we get the zeroth sort of item. It's not quite an item; it actually indicates a sort of special cursor.
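The shape of that API can be sketched in Go. This is a rough illustration of how chained keys and indices build up a cursor (a path from the root), assuming idx(0) denotes the head position of a list; the names and path encoding are made up for the example and are not the paper's formal syntax.

```go
package main

import "fmt"

// Cursor identifies a position in the document as a path of
// map keys and list positions from the root.
type Cursor struct {
	Path []string
}

// Get descends into a map key, returning a new cursor.
func (c Cursor) Get(key string) Cursor {
	return Cursor{Path: append(append([]string{}, c.Path...), key)}
}

// Idx descends into a list position. idx(0) points at the head of the
// list: the position before the first element, where an insert-after
// would place a new first element.
func (c Cursor) Idx(i int) Cursor {
	return Cursor{Path: append(append([]string{}, c.Path...), fmt.Sprintf("idx(%d)", i))}
}

func main() {
	doc := Cursor{}
	head := doc.Get("shopping").Idx(0)
	fmt.Println(head.Path) // the path this cursor identifies
}
```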
B: So this is basically it, but it shows how we can handle this JSON CRDT, and that's pretty much what would be expected anyways. We've gone through all of this, and I guess at this point, I hope, there's a good understanding of what the JSON CRDT is supposed to be used for, what the applications are, and how it resolves the conflicts, or at least what the final state looks like when some of those edge cases come up in the conflict resolution.

So now we can start actually getting into more of the implementation details: what kind of metadata do we need to keep in the JSON CRDT, and what kind of operations, when we insert or when we clear some of the elements in the JSON, have to be in place in order for us to reach all the properties that we want with this JSON CRDT. The first thing is how to evaluate the expressions they show.
B
So
the
expressions
are
these
we're
going
to
go
through
them
really
quickly.
There's
a
couple
of
interesting
insights
in
them,
but
then
I
think
the
most
important
part
is
how
to
what
our
operations
and
how
to
apply
the
operations
and
that's
I
think
where
we
can
learn
the
most
but
yeah.
One
interesting
insight,
as
well
or
sort
of
a
background
idea,
is
that
every
sort
of
an
operation
or
every
mutation
on
locally
done
on
a
or
performed
on
adjacency
rdt,
actually,
first
of
all
generates
an
operation.
B
So
this
is
an
operation
based
see
our
duty
so
before
applying
the
modification
locally,
the
algorithm
generates
this.
This
operation,
which
is
pretty
much
uniquely
identifiable
not
only
locally
but
in
the
whole
network,
and
and
then
can
be
gossiped
or
shared
with
the
other
peers,
so
that
they
can
also
apply
that
operation
and
then
be
sure
everything
fits
in
properly
and
one
of
them.
One
of
the
items
of
that
operation
is
cursor
and
that
cursor
it's
pretty
much
sort
of
a
list
of
keys
and
IDs
that
identify
a
position
in
the
Jason's
here.
B
The
local
position,
you
see
a
sincerity
but
we're
gonna
see
how
this
egg
actually
all
comes
together
a
bit
a
bit
later
on
when
we're
going
to
be
talking
about
the
appliquéd
applying
the
operations.
So
this
is
start
a
lot
of
sort
of
implementation
details.
This
is
a
I,
don't
know
if
you
are
familiar.
C
B
These
sort
of
operational,
operational,
semantics
notation-
it's
really
well
explained
in
this
link
actually
by
the
authors
and
basically,
it's
sort
of
a
formal
representation
on
how
to
evaluate
the
expression
so
by
expressions
I
mean
how
to
define
a
variable
or
how
to
how
to
sort
of
a
fetch
one
index
from
a
list
and
so
forth
so
forth.
So
I
think
the
most
interesting
part
is
these
ibx
4.
So
this
idx
4
is
a
case
in
which
we
want
to
traverse
a
list
in
which
some
of
the
elements
in
the
list
have
been
deleted.
B
So,
basically,
all
the
operations
that
are
that
have
been
using
there
have
used
that
element
for
one
reason
or
another,
whether
is
for
traversing
or
creating
creating
it
or
whatever
they
they
are
represented
in
these
elements
metadata
so
basically
clearing
the
state
is
just
removing
the
operation
IDs
from
that
list
of
that
elements,
and
eventually,
when
it's
completely
empty,
then
we
can
consider
at
a
application
level
that
that
element
is
deleted.
This
might
be
a
bit
still
confusing,
but
we're
gonna
go
through
this
through
it
again
later
on.
B
B
For-Expression
show
us
that
whenever
we
have
whenever
we
have
a
context
in
which
the
next
element
its
exists
and
whenever
it's
so
they'll,
the
element
exists
and
its
presence
set
the
one
that
shows
us
whether
there
are
operations
still
relying
on
it
or
not,
is
empty.
Then
we
can
pass
the
cursor
forward
on
the
list
because
we
are
traversing
the
lists
without
without
decreasing
the
index,
so
meaning
that
we
are
again.
We
are
traversing
a
list
just
a
bit
of
context.
B
We
are
traversing
a
list
and
we
are
going
forward
in
English
whenever
we
reach
these
one
elements,
we
check
its
presence
set
if
the
present
set
is
empty.
It
means
that
the
belief
that
the
element
has
been
actually
deleted
and
should
not
be
considered
when
traversing
the
list,
so
we
just
go
forward
without
decreasing
the
index.
So
that's,
basically
what
this
whole
thing.
Whole
expression
means,
and
it's
just
an
example
on
how
to
read.
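The traversal rule just described can be sketched as follows. This is a hedged illustration of the IDX idea (skip elements whose presence set is empty without consuming the index), not the paper's formal semantics; the node representation is invented for the example.

```go
package main

import "fmt"

// Node is a list element with its presence set: the IDs of operations
// that still rely on it. An empty presence set marks a deleted element.
type Node struct {
	Value    string
	Presence map[string]bool
}

// idx finds the i-th *visible* element, moving the cursor forward past
// deleted elements without decreasing the index.
func idx(list []Node, i int) (string, bool) {
	for _, n := range list {
		if len(n.Presence) == 0 {
			continue // tombstone: skip without consuming the index
		}
		if i == 0 {
			return n.Value, true
		}
		i--
	}
	return "", false
}

func main() {
	alive := map[string]bool{"1@p": true}
	list := []Node{
		{"a", alive},
		{"b", map[string]bool{}}, // deleted: empty presence set
		{"c", alive},
	}
	v, _ := idx(list, 1)
	fmt.Println(v) // the second visible element, skipping the tombstone
}
```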
B
We
might
be
keeping
a
lot
of
elements
that
have
been
actually
deleted.
They're
not
going
to
be
shown
to
the
application
layer,
for
example,
but
we
have
to
keep
them
because
with
the
present
set
empty
because
we're
not
sure
whether
all
the
other
replicas
have
as
well
deleted,
because
if
other
replicas
are
actually
changing
these
one
elements
that
we
are
seeing
as
deleted,
and
then
we
get
an
operation
which
change
it
and
we
don't
have
it
there.
Then
the
whole
convergence
sort
of
an
idea
is
not
gonna,
be
it's
not
gonna
work
anymore.
B
So
this
has
a
lot
of
problems.
One
of
them
is
that
we
have
to
keep
a
lot
of
unnecessary
data
when
all
the
replicas
in
the
network
have
already
sort
of
a
deleted.
Is
one
element:
there's
really
no
needs
to
keep
it
there.
We
can
actually
just
remove
it
there,
but
it's.
How
do
we
get
to
the
situation
in
which
we
know
that
all
the
other
peers
believe
with
the
elements?
How
do
we
do
this
garbage
collection?
B: Pedro has been doing a really good job at explaining and writing down the problem, and checking the discussions; if you're really interested in this point, I think it's really cool to go ahead and check this link and contribute as well. So, all right. Basically, through this section I just wanted to make sure that we can understand how to read this; it's not that easy. It took me a while. There's also the link where we can later check in more depth how to really read it, if you're really interested in it. It also introduces the idea of how to clear the state, how to delete elements, and what these tombstones mean: the nodes which are deleted, but not yet deleted from the data structure. But we're going to be seeing that again in a few minutes.
B
So
the
next
session
section
is
when
everything
starts
to
get
a
bit
more
interesting
or
at
least
more
practical
and
and
then
I
have
also
a
diagram
in
which
kind
of
shows
the
whole
algorithm.
So
now
we're
getting
into
a
situation
in
which
I
believe
we
can
start
getting
through
the.
What
kind
of
metadata
do
we
really
need?
And
then
what's
the
algorithm
as
well,
that
are
gonna
is
gonna,
make
sure
that
the
convergence
happens
without
conflict.
So
again,
I
think
I.
Just
I
already
mentioned
this.
This
is
an
operation
based
Jason,
CR
DT.
B
What
this
means
is
that,
even
before
the
mutation
is
done
locally,
an
operation
is
generated
and
the
operation
sort
of
encapsulates
all
the
necessary
information
about.
What's
the
mutation
that
the
replica
wants
to
do
locally
and
then
also
its
unique,
uniquely
identifies
these
operation
within
like
throughout
the
whole
network,
so
that
the
replicas
can
sort
of
change
these
operations,
so
basically
adjacency
R
DT
can
be
seen,
as
you
know,
can
be
constructed
by
a
set
of
operations
right.
B: Before applying the operation locally, we create the operation, then apply it properly, and then send it out to the whole network. Okay, so we've got Lamport timestamps, and the Lamport timestamps are really important to ensure that we preserve causality. The order of the operations will end up being sort of arbitrary, but it is deterministic.
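A Lamport ID of this kind can be sketched in Go. This is a minimal sketch, assuming the common (counter, replica ID) form of Lamport timestamps; the field names are illustrative.

```go
package main

import "fmt"

// LamportID identifies an operation globally: a logical counter plus
// the unique ID of the replica that generated the operation.
type LamportID struct {
	Counter int
	Replica string
}

// Less defines the arbitrary-but-deterministic total order: compare
// counters first, and break ties with the replica ID, so every replica
// agrees on the same ordering of concurrent operations.
func (a LamportID) Less(b LamportID) bool {
	if a.Counter != b.Counter {
		return a.Counter < b.Counter
	}
	return a.Replica < b.Replica
}

func main() {
	p := LamportID{Counter: 3, Replica: "p"}
	q := LamportID{Counter: 3, Replica: "q"}
	fmt.Println(p.Less(q)) // same counter, so the tie is broken by replica ID
}
```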
B
So
so.
That's
that
this
is
the
point
of
the
using
the
lamport's
tank
seems
pretty
much
everywhere,
so
we've
got
or
in
the
operation.
We've
got
the
dependencies
and
we're
gonna
see
that
a
bit
later
on
to
to
make
sure
that
operations
which
are
concurrent
the
order,
is
going
to
be
arbitrary
but
deterministic,
so
that
everyone
agrees
on
it.
So
how
does
actually
a
structure?
Look
like
what's
gonna
metadata?
Do
we
need
so
that
we
can
safely
and
guarantee
the
convergence
when
applying
these
locally
and
in
other
peers?
B
So
an
operation
has
an
ID
which
is
a
Lampert
ID.
It
has
a
dependency
set,
and
this
dependency
set
is
basically-
and
this
is
really
important
for
operation
sort
of
a
level.
A
dependency
set
is
a
set
of
Lampert
IDs
of
all
the
operations
that
these
one
operation
depends
on
and
what
does
depending
on
really
means
means
that
all
the
operations
that
have
been.
B
Apply
it
before
so,
let's
say:
yeah
all
the
operations
that
have
been
applied
before
this
operation
has
been
created
and
if
they
are
concurrent
to
it,
they
will
have
to
be
part
of
the
dependency
set.
So
we
are
sure
that
every
time
a
peer
or
replica
will
apply
an
operation,
all
the
dependency
said
have
to
be
met.
Otherwise,
there's
a
way
to
ensure
guarantee
the
cook
the
curser
we've
already
seen.
B
So
it's
basically
shows
the
path
to
a
certain
nodes
or
to
a
certain
node
in
the
tree,
and
then
the
mutation
is
basically
just
telling
exactly
what
what
can
a
mutation,
whether
it's
an
insert,
whether
it's
an
delete
or
a
sign
of
this
one
node,
which
is
pointed
by
the
cursor.
So
this
is
basically
it
and
an
adjacent
CR
DT.
Can
we
build
up
out
of
a
set
of
these
operations?
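The operation record just described can be sketched as a Go struct. The field names and example values are illustrative; the paper defines operations as tuples of an ID, a set of causal dependencies, a cursor, and a mutation.

```go
package main

import "fmt"

// Mutation describes what the operation does at the cursor's target:
// an insert, a delete, or an assign, plus the value involved.
type Mutation struct {
	Kind  string
	Value string
}

// Op is the operation tuple: a globally unique Lamport ID, the set of
// Lamport IDs this operation causally depends on, the path to the
// target node, and the mutation itself.
type Op struct {
	ID     string
	Deps   map[string]bool
	Cursor []string
	Mut    Mutation
}

func main() {
	op := Op{
		ID:     "4@p", // hypothetical Lamport ID: counter 4 at replica p
		Deps:   map[string]bool{"3@p": true, "2@q": true},
		Cursor: []string{"todo", "item1", "done"},
		Mut:    Mutation{Kind: "assign", Value: "true"},
	}
	fmt.Println(op.ID, len(op.Deps))
}
```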
B
So
this
is
really
cool,
because
we
can
just
throw
these
operations
around
in
the
network
and
apply
those
locally
and
we've
got
our
CR
DT
algorithm.
So
I
think
one
of
the
most
important
parts
is
exactly
the
dependency,
the
casual
dependencies
of
the
operation.
So
it's
it's
basically
again.
I've
already
explained
it
so
exactly
the
the
har
dependencies
that
an
operation
needs
to
be
sure
that
already
been
applied
so
that
it
can
be
applied
safely.
That
says,
that's
basically,
you.
B
Yeah
and
and
main
goal
of
these
is
that,
as
it
said,
but
here
is
that
the
secret,
the
sequence
of
operations
generated
at
one
particular
replica,
will
be
applied
in
the
same
order
at
every
replica,
because
one
operation
can
be
applied
only
when
its
dependencies
have
been
applied.
Otherwise
it's
gonna
be
buffered
and,
and
the
pier
will
keep
waiting
for
operations
until
to
come
until
those
are
met.
So.
B: So, to give you a bit of context: what's happening now is that a replica locally wants to change the JSON CRDT state, right? So an operation is created, whether locally, when the replica wants to change the JSON CRDT, or through receiving it from a remote peer. The algorithm now has to make sure that these operations are applied in a way that we get the convergence we need.

This is exactly a diagram showing how to go about applying the operations in the local state of the replica. This is all about metadata, and handling metadata in the different data structures of the elements. So the idea is (I think I actually didn't mention this before, sorry) that we've got operations.
B: So, for instance, every time a node is traversed when applying an operation, that operation's ID is added to the presence set of that element. These are basically the three sorts of metadata that the CRDT JSON keeps, and now what we're going to see with the algorithm is how those different levels of metadata are actually manipulated when applying an operation locally.

Okay, so let's say that we've got a local operation, so the replica wants to change one element. We can apply the mutation (and by "applying" here I mean non-destructively, because destructive changes are gonna be different; we're gonna talk a bit about that afterwards). So let's say that we are just assigning a new key-value to a map. Then we apply the mutation, and, if it's a local operation, we buffer it: we already applied it locally, we buffer the operation, and then it is sent to the network so that the others can apply the same operation locally.
B
The
when
a
remote
operation
is
received
locally,
the
there's
still
a
couple
of
checks
that
have
to
be
done
before
so.
First
of
all,
we
have
to
see
check
based
on
the
applied
operations
list,
which
is
part
of
the
Jason
C
R
DT.
If
this
operation
has
been
already
applied
or
not
because
then
again
we
have
no
guarantees
from
the
network.
We
might
get
gets
like
two
three
times
the
same
the
same
operation,
so
we
have
to
be
sure
that
we
don't
we
don't
apply
more
than
once
the
same
operation.
B
So
if
the
operation
hasn't
been
applied,
we're
gonna
see
based
on
the
dependency
set
of
the
operation
and
the
operations
which
has
have
been
applied
to
the
whole
Jason
C
R
DT.
If
all
the
dependencies
dependencies
are
actually
satisfied,
then
the
exactly
why
we
really
need
those
dependencies
on
the
operation
topo
and
one
of
the
reasons
the
other
one
is
for
clear
the
state.
But
we
see
that
forward.
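The remote-apply checks just described (deduplicate, check dependencies, buffer otherwise) can be sketched as follows. This only tracks the bookkeeping, under the assumption that applying an operation may unblock buffered ones; the real algorithm also mutates the document tree.

```go
package main

import "fmt"

// Op carries just what the bookkeeping needs here: its own ID and the
// IDs of the operations it causally depends on.
type Op struct {
	ID   string
	Deps []string
}

// Replica tracks which operation IDs have been applied and buffers
// operations whose dependencies are not yet satisfied.
type Replica struct {
	Applied map[string]bool
	Buffer  []Op
}

func (r *Replica) depsSatisfied(op Op) bool {
	for _, d := range op.Deps {
		if !r.Applied[d] {
			return false
		}
	}
	return true
}

// Receive deduplicates, applies when possible, and otherwise buffers.
// After applying, it retries the buffer, since one application may
// satisfy the dependencies of previously buffered operations.
func (r *Replica) Receive(op Op) {
	if r.Applied[op.ID] {
		return // the network may deliver the same operation several times
	}
	if !r.depsSatisfied(op) {
		r.Buffer = append(r.Buffer, op)
		return
	}
	r.Applied[op.ID] = true
	for progress := true; progress; {
		progress = false
		rest := r.Buffer[:0]
		for _, b := range r.Buffer {
			if !r.Applied[b.ID] && r.depsSatisfied(b) {
				r.Applied[b.ID] = true
				progress = true
			} else {
				rest = append(rest, b)
			}
		}
		r.Buffer = rest
	}
}

func main() {
	r := &Replica{Applied: map[string]bool{}}
	r.Receive(Op{ID: "2@q", Deps: []string{"1@q"}}) // out of order: buffered
	fmt.Println(len(r.Buffer))
	r.Receive(Op{ID: "1@q"}) // the dependency arrives: both get applied
	fmt.Println(r.Applied["2@q"])
}
```

This is exactly why operations from one replica are applied in the same order everywhere: later operations depend on earlier ones and wait in the buffer until those arrive.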
B
Through
exactly
the
same
algorithm,
whether
it's
local
operation
or
remote
operation,
but
when
it
comes
to
the
send
the
tree,
what
we
do
is,
instead
of
having
the
not
only
we
adds
the
operation
ID
to
them
or
sorry.
Instead
of
adding
the
operation
ID
to
the
present
set
of
the
nodes
that
are
traversed
and
after
after
it's
deleted.
What's
going
to
happen
is
that
it's
gonna
clear
the
present
set,
so
it's
basically
what's
gonna
happen.
B
We
want
to
delete
a
list
all
right,
so
we
traverse
the
beliefs
with
this
checking
the
cursor.
When
we
get
to
the
list,
we
Traverse
again
all
the
elements
in
the
list
and
then
we
remove
from
the
present
sets
in
each
and
every
of
the
elements
that
are
supposed
to
be
deleted.
We
remove
the
dependencies
of
the
operation.
B
So
what
this
means
is
that
we're,
instead
of
just
removing
the
element
itself,
because
then,
if
other
other
nodes
are
sort
of
making
modifications
in
the
set
in
the
in
this
element,
and
then
we
receive
those
operations,
it's
going
to
be
round
the
whole
converges.
But
what
is
make
sure
is
that
we
can
get
into
a
situation
in
which,
if
all
the
operations
that
have
been
somewhat
dependent
on
these
on
these
nodes
are
removed
from
dependency
cells.
Then
we
can
consider
the
elements
as
deleted
and
that's
exactly
when
we
get
to
the
empty
set.
B: That's exactly the situation in which we get the tombstones: nodes which are taking up space but are actually not valuable anymore, because they have actually been deleted. But we have to keep them there, because we're not sure that all the other replicas know that this node has been cleared; they might be concurrently, and offline, modifying it.
B
So
this
is
sort
of
a
really
interesting,
in
my
opinion,
so
obviously
we,
the
trade-off
here,
is
sort
of
a
metadata
and
space
for
all
this
conflict
resolution-
and
this
is
sort
of
one
of
the
most
important
parts
of
I
believe
algorithm.
To
really
understand
it
properly
is
the
mechanics
of
how
to
set
up
an
how
to
handle
the
different
types
of
data
metadata
in
the
difference,
server
layers
right
in
a
way
or
in
the
and
objects
and
and
yeah-
that's
that's
pretty
much
it
like.
There's,
there's
not
much
more
to
it.
B: And yeah, that's basically it. Again, just to go through this once more: this is really important, in my opinion, because it's not really explicitly said or written in the paper, but for me, who has been implementing a Go library of this algorithm and data structure, this is exactly what really opened up the whole data model and whatnot.

That's basically it, and with this algorithm and with all this metadata, we are sure that we get strong eventual convergence without losing any modifications, and we get all these guarantees from the data structure that all replicas get into the same state without any conflict. I will probably skip this next part, because I've already been talking for quite a bit; I'll probably skip these more formal convergence definitions. I think we went through them anyway, in one way or another.
B: Basically, there is an interesting part here at the end: if you're really into the maths of it, the authors show really well how the different steps in the algorithm make sure that we are always converging. Really interesting reading. Let's not go through that, because I believe we've at least gone through it in a somewhat less formal way.
B
So
so
yeah
I
mean
that's
basically
it
I
guess:
I
hope
it
was
not
that
confusing,
because
it's
for
me
it
still
sort
of
heart
and
really
in
a
really
succinct
way,
try
to
explain
how
the
whole
thing
happens,
but
that's
it
like.
The
paper
really
shows
how
to
build
up
these
Jason
CR
duties,
which
you
can
embed
all
the
different
types
Maps
lists,
and
also
have
registers
really
expressing
very
structure
and
make
sure
that
concurrent
operations,
concurrent
mutations,
can
be
locally
merge
optimistically
and
being
sure
that
it
all
converges
no
need
for
consensus.
B: This is super powerful, in my opinion. I hope a lot of the further work, the sort of hot topics on garbage collection and stuff like that, can be solved, or in a way improved, soon, so that we can definitely get this into our applications. We already have quite interesting libraries, such as Automerge, and also applications like PeerPad, using CRDTs in a really cool way. So, really promising, in my opinion. Yeah.
A: That's awesome, thank you so much; I feel like I learned a ton. One question that I had: it seems like the additional metadata is going to, like, double or triple the size of the database. Since you're implementing this, have you seen that sort of growth, or how would you assess its scaling?
B: Yes. There's also the problem that whenever someone, a new peer, joins the network, we have to sort of ship all, or send, the whole operation set: the local state that has to be shared with this new peer is basically the whole list of operations, right? So imagine that in the beginning of the JSON CRDT's life there were elements which were emitted, and none of the replicas have those elements anymore; every time a new peer comes, we still have to send all of that. So not only on the storage side, but also in network bandwidth, there's still a lot of waste of resources when it comes to operation-based CRDTs without the snapshot thing, without garbage collection. I don't really have the numbers, but I've heard, from some of the meetings on the IPFS side, about some of the benchmarks that have been done, and yeah, it can grow really fast.
B: Exactly. One of the really cool examples, or rather one way the authors actually suggest to create, let's say, Google Docs based on a JSON CRDT, is that when we are building up a string, each and every character is actually an element in a list, right? So then people can go down to the character level and change or remove, and all of that is going to be properly merged; the conflicts will be properly resolved.
A: Yeah, I was also going to ask, because I'm not, like, embedded in this world or working on things related to it: could you give me an overall sense of the landscape? I know there are differences between the different types of CRDTs; I know there are operation-based and Delta-based ones, and I know there are state-based CRDTs. Are there other kinds that I'm missing, and what am I missing?
B: Maybe yes, but I have to say that those are the ones that I've been reading and learning the most about. I think that the Delta CRDTs are by themselves really, really interesting as well, and as I've seen, it's been a super, super hot topic, and there's a lot of things happening.

There's a lot of new stuff coming in, and I haven't even been able, I believe, to catch up with every single new article which is coming out every day in the CRDT field, not even just on this JSON CRDT. But I would say that those are the three main types of CRDTs which are being used; then again, I will probably have to look it up and do some research, and I'll come back to you later on, if that's fine.
C: [inaudible question]

B: Yeah, that's a really good question, and I think I haven't run any tests yet. Then again, there were some benchmarks (I can also dig those up) on the stuff that has been tested with Automerge and also with the PeerPad work, but I guess it all depends on how big the network is and also on what kind of modifications there are; I think it's quite dependent on that, I would expect.

But it's kind of interesting as well (sorry, one more thing): I feel that CRDTs have been around for so long, right, or relatively long, when we think about all the decentralized and distributed (or mostly the decentralized, not so much the distributed) consensus ideas and how to really start building interesting decentralized applications. They're not that old, but they've been used for quite a bit. So, I mean, that's it.