From YouTube: Conflict Free Replicated Data Types
A: All right, welcome to today's CNCF live webinar, Conflict-Free Replicated Data Types. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Jared Dillon, CTO at Mycelial, and James Moore, principal instructor at Mycelial. A few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, but there's a Q&A box on the right-hand side of your screen, by the chat box.
A: Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs.
B: Thanks, Libby. As a little introduction, I'm Jared Dillon, the CTO at Mycelial, and with me is James Moore, our principal instructor at Mycelial. Just to walk through a little bit of what we're going to talk about today:
B: We're going to start off with a history of what distributed systems have looked like and how they've applied to the cloud native landscape over the past couple of years. Then we're going to talk a little bit about consensus on values in distributed systems, and why consensus on values is important in order to build reliable, robust, large-scale systems. From there, we're going to talk about the challenges of building consensus-based systems at global scale, and a little bit about the use cases for why you might want to solve these particular sets of challenges.
B: Last, we'll move over to James, and James will talk about what conflict-free replicated data types are, how to use them and how they're implemented, as well as examples of how they're actually being used in libraries, and ways to contribute and participate in the open source community. So, just to give a little background on distributed systems in the cloud native environment:
B: Our goal in designing this, and really a large goal of the cloud native movement, was to begin scaling and solving the problems of scale beyond single systems, while ensuring some sort of ACID compliance: making sure that all of our data is atomic, consistent, isolated and durable. We want to be able to write values and then read them and know that we're reading a valid result back, and that leads to some guarantees of data integrity. We can trust our systems.
B: So we're wanting to ensure continued progress: servers are available, and typically in these systems we can't make progress if a majority fails, but we always want to be able to return a correct result. This is very critical: if we're building systems on top of Kubernetes, even in a read-only state, we need the state of that system to return a correct read.
B: Even while writes are unavailable to us. So, an understandable way to go about this: there have been attempts at this, starting with Paxos and other consensus algorithms, but in the past decade a paper came out describing a consensus algorithm called Raft, ultimately designed around understandability, with the idea that sane, reliable, robust implementations would come out of strongly understood systems. I encourage everyone to go read the Raft paper if you haven't.
B: It is a great example of how to build a consensus-based system that agrees on values while remaining strongly consistent, and there's no better example of Raft in the CNCF than etcd, a graduated CNCF project. It was originally developed at CoreOS to build out what's called Fleet, a distributed system manager for deploying Docker workloads, in the early days of Docker, on top of a CoreOS cluster.
B: It was also used for other mechanisms inside of CoreOS and really was the beating heart of a cluster of operating systems. So what is etcd? Well, at its core, it's a distributed, locking, strongly consistent key-value store. I write a key and I expect to be able to read that result back once etcd confirms that the key has been written, and so it's ACID compliant from that standpoint.
B: So now, after years of use and being scaled, it's the core data store for Kubernetes, CoreDNS and a lot of other CNCF projects. It's based on Raft, and it's a single writer with leadership election and multiple readers. So what does single writer mean in this case? Well, any node can accept a write; however, those writes get forwarded over to the leader if the current Raft node being accessed is a follower. And so you really only have one node that is responsible for the writing of data at any given time.
B: So as systems have scaled, and we see this no better than in Kubernetes, multi-cluster Kubernetes and federation, and the need for getting workloads closer to users, we start to see challenges in these very consistency-bound environments where you have single writers and where latency is a significant issue. These sorts of consistency-oriented stores really work best in very low latency environments.
B: So a global-scale etcd cluster, for example, often suffers from heartbeat issues, because as you move across the globe latency of course increases, you're not going to beat the speed of light, and you start to see partitioning problems. And so what we've settled on is generally scaling into the region or the availability zone. I have a background building out very, very large Kubernetes clusters doing machine learning, graphics programming and GPU-based operations.
B: Some of the worst failures that I've seen have actually come from missed heartbeats due to latency, because etcd failed to get into a proper state because of these sorts of latency issues. So trying to scale that out to a global environment is incredibly difficult, as compared to running multiple clusters and getting data between those.
B: We also have a major bottleneck in this single-writer environment. Again, at large scale, and having developed KUDO and other operator-based systems, we're seeing, with custom resources, much more use of the Kubernetes control plane as a general-purpose store, and so scaling etcd in this environment becomes very interesting, because if you start to have more writes than your max throughput, you start to back up.
B: So, okay, as we're starting to think about these next systems, well, event sourcing comes into play. Event sourcing is this idea where we have a centralized bus of events, and you have multiple readers coming off of it that consume those events and perform different actions; it has a decoupled sense from the whole, where you interact with the entire system through events.
B: If you've heard of the term eventually consistent, or if you haven't: eventual consistency is a liveness guarantee, meaning that eventually all replicas will have the same information, but they can return any result in the meantime, depending on what state the system is in. And so there are no guarantees of safety with that, because an eventually consistent system, as it's been defined so far, requires every single node to have received the information in order to report the new value.
B: Strong eventual consistency adds a safety guarantee on top. That is to say, every node that has received the update is able to report the new value, and so it's eventually consistent, but it's also correct with the information that it currently has, and we'll see some interesting use cases for that. So data is converging to the same value across all replicas, but in the meantime every replica that has the data is strongly consistent. Connectivity is not guaranteed, low latency is not guaranteed, and ultimately, ordering here is not important.
B: We'll talk a little bit more about that. So let's look at some use cases for these needs. A big one is globally distributed databases. Now, we have this in some forms today: we have either sharded databases with a shard key, and that's one form of scale, or you ultimately have some sort of primary region that's responsible for accepting writes and distributing reads out globally. Neither of these really fits the bill.
B: If you think about systems like Cassandra, things that shard based on these shard keys, you're really managing scale on a different dimension than the geographic one: you're managing scale on the dimension of your data cardinality.
B: And so, if you have all of your replicas of a certain shard go down, now you have a partial outage for that type of data, but that doesn't say anything about your geographic distribution. It means nothing about having a multi-writer system that is global scale; it's really talking about multi-writer systems at that shard-key cardinality.
B: When we're doing this, and this is the difference between this sort of globally distributed database, like a Spanner, and other types of databases that choose other strategies for this, our goal here is to achieve monotonicity. What that means is that the ordering of events is not important: we all eventually converge on the same document, no matter the order in which events come in.
B: Another use case here is going to be building out local-first applications. What a local-first application really is, is the extension of this user data out to the client: clients are able to operate completely offline and then synchronize data back with your cloud when you're back online.
B: Building out this idea of edge native enables cloud native use cases in very adverse or low-bandwidth environments, where you can come back online and merge your data with the whole, with guarantees that your system is always going to progress; you're not going to deal with rollbacks. And just to move back, actually, one very important thing about this:
B: Being able to collaborate in real time with others, which most if not all of us do, is a feature of work in our current age, and a feature of most of these applications. And so applications built for a cloud native environment will be able to handle collaboration, or will need to handle collaboration, and this is on top of being distributed and local-first, right?
B: So we're really talking about bringing cloud native to the edge here, and we need this all to be observable, traceable and operable. Cloud native is great for the cloud, but new solutions are needed for what we're referring to as edge native environments, and what's nice is that there are solutions to these problems.
C: All right, so as Jared said, over the next few minutes I'm going to provide an introduction to CRDTs. So what exactly are CRDTs? Well, they're a collection of data types similar to the data types you're likely familiar with. So, for example, there are arrays, there are maps, there are text types, there are counters.
C: Okay, some of you might be thinking: well, writing a multi-user app like this isn't all that hard, right? Well, when I say I want this to-do app to be collaborative, I don't just mean it's a multi-user application; I mean something different, something deeper. Let me explain what I mean with a couple of examples.
C: Now, what you just saw in this simple example should give you a sense of what I mean by collaboration in a deeper way, not just a multi-user way. Or here's a similar but slightly different scenario: what if both of these people are offline, and the person on the left decides to delete the "mow the lawn" task and the person on the right decides to add a new to-do item? And again, remember, both of these edits are happening offline.
C
So,
first
of
all,
writing
applications
that
work
both
offline
and
online
is
not
an
easy
task
in
and
of
itself,
but
crdts
make
it
easier.
And,
secondly,
what
should
happen
in
this
app
when
both
of
these
users
get
back
online
well,
ideally,
the
model
on
task
should
be
deleted
on
the
right
and
the
new
clean.
The
garage
task
should
get
replicated
to
the
left,
okay.
So
to
achieve
this
kind
of
collaboration,
the
kind
of
collaboration
I
just
alluded
to
this
conventional
data
model
isn't
going
to
help
us.
C: In fact, the APIs of most CRDT libraries are very similar to their conventional counterparts. But you're probably wondering: what's the benefit of using these CRDTs? Well, this is where I want to turn back to the description I used earlier for CRDTs, when I called them shared data types. But what exactly do I mean by shared data types? Well, at a high level:
C: Now, this example only shows two computers that are synchronizing their state, but there could be many other computers involved. And I want you to notice that I never mentioned anything about servers. I mean, you could, and likely would, use servers in many scenarios, but it's not a requirement of CRDTs.
C: Well, you have to create these data structures in a special way, following certain rules, which we'll talk more about in a moment. It's also important to note that CRDTs store your application data, but they also store metadata. Okay, so what's with this metadata? Hold that thought, because I'm going to look at this metadata more in just a moment. Now, to fully understand how CRDTs work, you need to understand a bit of order theory, and in particular you need to understand join semilattices.
C: The reason I mention this is because it's important to understand that there are mathematical proofs behind CRDTs, and we should draw confidence in the technology because of its underlying mathematical principles. But in practice, most developers don't need to understand the math behind CRDTs; you just need to understand how to interact with them.
C: So we need an application that counts something. For example, maybe our application is running on some sort of scanners that collaboratively count things that pass them by on a conveyor belt, or maybe the application is meant to count people entering a venue, on smartphones at all the entrances, and it's possible that these smartphones need to work both offline and online because of the operating location.
C: For example, what we see on the left side of the screen represents collaboration between machines, and the example on the right side of the screen represents collaboration between people. I think a better general term to use, instead of machines on the left side and people on the right side, would be actors.
C: Okay, let's look at implementing this collaborative counter now. First, I want to see if we can implement this collaborative counter by using a primitive type, an integer, in the application's data model. But here's a spoiler alert for you: integers won't work for us, and you'll see why in just a moment. So let's say this device on the left counts the first concertgoer, which increments its local count from 0 to 1.
C: Next, the updated count is replicated from the middle device to the left device, and the one is changed to two. Then the updated count is replicated from the right device to the left device, and the two is replaced by two. Okay, that's not right. I mean, we know the total count should be three at this point, right?
C: And basically, this is all you need to know as a developer to interact with the CRDT counter, and again, by swapping out the integer for a CRDT counter, we can have a collaboratively maintained count that works even if the device is offline. But even though you don't technically need to know how a CRDT counter works, you're probably still curious how it does. So let's look at how this CRDT counter could be implemented.
C: Here's what I mean: what if the model for our counter was composed of two things, a unique ID for each replica and a map? Then, when the count is incremented, a key-value pair is added to our map, where the key is the unique replica ID and the value is the incremented count for this replica. Now, to get the actual count value, we just need to sum the values in the count map, which at this point is just one. Then we replicate the update.
C
So
both
of
these
counters
on
the
right
get
incremented
at
about
the
same
time,
then
each
node
adds
a
new
key
value
pair
to
their
count
maps
where
the
key
is
the
unique
replica
id
and
the
value
is
the
count
for
the
corresponding
replica.
So
the
first
time
increment
is
called
the
value
will
be
1.,
then
to
calculate
the
current
counter
value.
We
simply
sum
the
maps
values
which
totals
two
for
now.
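The per-replica map just described is commonly called a grow-only counter, or G-Counter. Here is a minimal Python sketch of that idea; the class and method names are my own, not taken from any particular library:

```python
class GCounter:
    """Grow-only counter CRDT: one map entry per replica; value = sum."""

    def __init__(self, replica_id):
        self.replica_id = replica_id  # unique ID for this replica
        self.counts = {}              # replica ID -> count made by that replica

    def increment(self):
        # A replica only ever increments its own entry.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        # The counter's value is the sum of all per-replica counts.
        return sum(self.counts.values())

    def merge(self, other):
        # Per-replica maximum: safe to apply in any order, any number of times.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Three devices each count one concertgoer, then sync:
left, middle, right = GCounter("left"), GCounter("middle"), GCounter("right")
left.increment()
middle.increment()
right.increment()
left.merge(middle)
left.merge(right)
print(left.value())  # 3, unlike the plain-integer version that lost a count
```

Because each replica only ever writes to its own key, concurrent increments can never overwrite one another the way the plain integers did.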
C: Okay, there are a few more details on how CRDT counters work, but hopefully this small exercise gives you a sense of how they're implemented. Do you remember how, a few minutes ago, I mentioned that CRDTs contain your application data and also metadata? Well, now you see what I mean by metadata.
C: Another trade-off is the consistency model with CRDTs. In particular, there's a period of time when replicas can have different values. For example, when we incremented one of the counters a moment ago, the other counters had different values until the updates arrived at the other nodes. So CRDTs offer high availability and strong eventual consistency.
C: The next question you may have is: how do you ensure updates get propagated to the peers as appropriate? Well, there are some interesting inherent properties of CRDTs that make updating peers a bit easier and more forgiving. Let me explain: there are three important update-related attributes of all CRDTs that go back to the order theory, and in particular the join semilattices, which CRDTs are based on.
C: Now, let's say the left device got temporarily disconnected and it's not sure if the other devices received its latest update, so the device on the left goes ahead and sends an update again. The same update can get sent again and again, multiple times, and the result of merging the update multiple times gives you the same result: it's an idempotent operation.
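The three update-related attributes being referred to are that a CRDT merge is idempotent, commutative, and associative (it is the join of the semilattice). A quick sketch of those properties, my own illustration using the per-replica-max rule from the counter example:

```python
def merge(a, b):
    # Join of two G-Counter states: take the per-replica maximum.
    return {rid: max(a.get(rid, 0), b.get(rid, 0)) for rid in a.keys() | b.keys()}

x = {"left": 2, "middle": 1}
y = {"middle": 3, "right": 1}
z = {"left": 1, "right": 4}

# Idempotent: receiving the same update twice changes nothing.
assert merge(merge(x, y), y) == merge(x, y)
# Commutative: delivery order between peers doesn't matter.
assert merge(x, y) == merge(y, x)
# Associative: how the updates are grouped doesn't matter.
assert merge(merge(x, y), z) == merge(x, merge(y, z))
# Every replica converges on the same state:
assert merge(merge(x, y), z) == {"left": 2, "middle": 3, "right": 4}
```

This is why a device that is unsure whether its update got through can simply resend it: re-merging is harmless.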
C: The best CRDT libraries at this point are JavaScript libraries: you have Yjs and you have Automerge, both of which have been worked on for several years at this point. There are also a few up-and-coming libraries written in Rust, which are of particular interest to us, and in some other languages as well.
B: Thanks, James. Yeah, so anyone who wants to come work on these, or is interested in advancing the state of the art: at Mycelial, we're working on open sourcing a whole bunch of tooling related to this, and on being able to embed it more into cloud native and edge native environments. If you're interested in discussing Mycelial, sorry, CRDTs:
B: We have a Discord where we talk about local-first applications and CRDTs all the time, and we'll have more news on that. So, happy to take questions and have some discussion around CRDTs. And we have a question already: what are the implications of long-duration disconnections on CRDTs, for example days? That's a great question. So CRDTs are intended to solve the question of merging towards the same document, and so at the end of the day, once all changes merge, they agree.
B: Every party has the exact same result, and this carries through. Actually, before I talk about it carrying through: this can be dependent on the type of CRDT you're using, because we're really talking about embedding a lot of the conflict resolution in the type itself. So let's say that you have a key-value store.
B: You have a map or a dictionary, a shared key-value store, and one of those values is a string. You're just using a string, not a text CRDT, for example, and that gets changed. Now the rule might be what we would call a last-write-wins register in that sense, and so, given that a version can be part of that metadata, if two parties edit the same key, the edit with the later version wins.
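A last-write-wins register like the one just described can be sketched as a value paired with a version: on merge, the higher (timestamp, writer) pair wins, with the writer ID breaking timestamp ties so that every replica deterministically picks the same winner. This is an illustrative sketch of the idea, not any specific library's API:

```python
class LWWRegister:
    """Last-write-wins register: the write with the highest version wins."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.state = (0, "", None)  # (timestamp, writer ID, value)

    def set(self, value, timestamp):
        # In practice the timestamp would come from a clock; it is a
        # parameter here only to keep the example deterministic.
        self.state = (timestamp, self.replica_id, value)

    def merge(self, other):
        # Tuples compare lexicographically: the later timestamp wins,
        # and the writer ID deterministically breaks timestamp ties.
        self.state = max(self.state, other.state)

    @property
    def value(self):
        return self.state[2]

a, b = LWWRegister("a"), LWWRegister("b")
a.set("draft", timestamp=1)
b.set("final", timestamp=2)  # a concurrent edit with a later timestamp
a.merge(b)
b.merge(a)
assert a.value == b.value == "final"  # both replicas pick the same winner
```

Note that one of the two concurrent writes is simply discarded; that is the trade-off a last-write-wins register makes.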
B: That said, there are other types of CRDTs that will try to better combine those two pieces of information, and this is where we're also looking at tools like WebAssembly, at the point of origination or at the point of conflict, in order to create more embedded logical decisions around that sort of data. Now, let's go to the example of adding a key to a shared map.
B: Let's say you add that, and there's no other conflicting party in the meantime, and now you hop back online: no matter the order of those updates and how old they are, everyone's going to end up with that new value in the key. These are what you can call observe-remove-like sets; you look at a map in this case as a set, and so in that case a long-term disconnection is not such a big issue.
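An observe-remove set like the one mentioned can be sketched by tagging every add with a unique ID, so that a remove only deletes the tags it has actually observed; a concurrent add on another replica keeps its own tag and survives the merge. An illustrative sketch (the names are mine), reusing the earlier to-do scenario:

```python
import uuid

class ORSet:
    """Observe-remove set: concurrent adds survive removes."""

    def __init__(self):
        self.added = {}       # unique tag -> element
        self.removed = set()  # tags this replica has observed being removed

    def add(self, element):
        # Every add gets a fresh tag, even for a repeated element.
        self.added[uuid.uuid4().hex] = element

    def remove(self, element):
        # Remove only the tags we have observed for this element.
        for tag, e in self.added.items():
            if e == element:
                self.removed.add(tag)

    def merge(self, other):
        self.added.update(other.added)
        self.removed |= other.removed

    def elements(self):
        return {e for tag, e in self.added.items() if tag not in self.removed}

# The offline to-do scenario from earlier:
left, right = ORSet(), ORSet()
left.add("mow the lawn")
right.merge(left)              # both replicas start with the same task
left.remove("mow the lawn")    # offline edit on the left
right.add("clean the garage")  # offline edit on the right
left.merge(right)
right.merge(left)
assert left.elements() == right.elements() == {"clean the garage"}
```

The delete wins over the stale copy on the right, while the concurrently added task survives on both sides, exactly the behavior the to-do example called for.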
B: Now, one interesting property that you can have there is that CRDTs can be made up of other CRDTs as well, as James was showing. So let's say you add a key that's a list: I add a key that says "todos", and that's a list of to-dos, and someone else has done that as well, and both of us are just operating on that list.
B: We can expect those lists to converge at the end of the day. Now, we're probably giving those items random IDs, and so there may be cases, depending on how we detect duplication, where you may end up with duplicates. And so you kind of get into the stories-of-your-data problem of CRDTs, and one thing that I really like about this sort of data structure is that it does force you to talk about: well, what does my data actually mean? What does it actually do? In order to decide the guarantees that I want.
B: But the most important thing there, by the way, just to wrap that up as well (and awesome, I'm glad that answered your question): the most important part is that, no matter what the ordering is, once all of those events are received, we've all converged on the same document. And so your long-term disconnection does not matter from the perspective of us having the same view of the world.
B: At the end, we've achieved consensus in a strong way, and you're not in a state like you would be in, for example, with Git, where now you have this giant merge conflict to work through. That possibility has been eliminated, because the data is monotonic, from the set of problems that you need to solve; we're not talking about whether it's correct at the end of the day. Next question: what do you think the largest barriers to adoption are for CRDTs?
B: At this point? That's a fantastic question. I'll give some of my views, and I'd love for James to give some of his as well. My biggest one, I think, is the accessibility and proliferation of libraries out there that are usable and just available for people to use. I think it gets caught up in these peer-to-peer applications and building one sort of application, whereas we are seeing examples of CRDTs being used for building global-scale databases; they're just either not open source or not being heavily used and promoted.
B: At this point, there's also the issue that there is an overhead to CRDTs. The latest research has dropped that write amplification down quite a bit, and there's a lot of work going into performance as well, but these are larger than scalar values. Now, there are some advantages to that as well, because now we can get to full causal views of our data.
B: We can go back in history, we can kind of understand things, and so there are actually benefits to that amplification. But a lot of our systems, a lot of the ways we look at and design things, are not built to really support more than just these scalar values.
B: Now, the other thing I would add to that is that not every application is appropriate for CRDTs, right? Scalar values are totally okay to use, but I think there are more applications out there where you might not want just a scalar value.
B: You might want the entire history, and of course, once you get into that, there are other interesting things around transactions and compression and other hard problems to solve. And so one of the things we're doing here at Mycelial is solving those hard problems and making these industrializable and usable. James, what do you think on that question?
C: Yeah, I think I agree with what you said there. Another barrier is that it's just a very big sort of paradigm shift in a lot of ways, and it's a very, very new technology, not particularly mature, but it's moving fast. And so I think we're going to have to think differently about how to build apps in a lot of ways, and, it being a somewhat new technology:
C: There's a lot of maturing, and patterns that sort of need to get figured out within the community on how to use these. And also, really getting back to libraries is a big one as well, but there is a surprising amount of effort going into various libraries that we see throughout the open source community, so we're excited about that.
B: Yeah, I'll just read this out: "Seems like real-world apps using CRDTs need both CRDTs and traditional storage mechanisms; seems like it might be confusing to decide how and where patterns are applied." Yeah, that's some of the work that we're doing. If you think about how we've learned patterns around designing very data-intensive applications:
B: Yes, that's absolutely true. There are very few resources right now, there's very little practice right now, and so we're enabling people to have more platforms to test that and to determine a practice, to kind of figure that out. Like in the first question, I mentioned the stories of data; I think that rings really true, because if you look at traditional storage mechanisms, they're designed very generally: they accept a scalar value. Here's Postgres, a very advanced database.
B: Here's even your JSON data type, but you also have your varchar, you have your number types. You can fit a lot of business use cases into just a bunch of values and not necessarily have to think about it, whereas I think looking at all the advantages of CRDTs forces you into that conversation a lot sooner of, okay:
B: An event-sourcing database would as well, and you know, we're talking to some of the people who are doing that, because ultimately a CRDT under the hood, when you're replicating it, is still just a very opinionated, shaped event, and so you could go store that and create a materialized view on it. But of course, that is a lot more specialized than, say, just putting something into Redis, for example, where you just care about the value and nothing else. And so, yeah.
B: I think that's absolutely true on that confusion, and sure, you can still be very oriented around use cases. Now, that said, I think orienting data towards your use case is a lot more powerful than just having random data that happens to be cardinal in some way, or have some cardinality, and those restrictions do add some powers.
B: Yeah, I like that comment, and thank you. The stories of data, I think, is really important: the real implications of that distribution. And I do agree with the comment that machines need to collaborate, not just humans, and the reason I said that, and the reason we bring it up.
B: So much of this is because you look at moving compute more and more to the local environment, and you look at clusters of machines that are actually performing tasks. I think that our consensus systems are very primitive forms of collaboration.
B: We agree on the same value, right, but that doesn't necessarily allow two actors to go perform work locally. And so I think the implications, once machines can operate in a local-first way while also collaborating, are going to be huge, especially for physical industries where you have multiple machines working on similar tasks.
B: Redwood is another attempt at this. It's still very early; there are a few SaaS options, but really what they're using them for in that case is the distribution of data. I think Gun and Redwood... it's Redwood (there's also the Redwood JavaScript framework; that's not what I'm talking about) that is storing a state tree among many, many peers and is using CRDTs to do so.
B: I think there's a good example, too, of an IPFS-based database that is storing its data as CRDTs, but we definitely need more examples of this. And, you know, we're not really focused on the database side of the problem, but I think good collaboration between the applications and databases is going to be more and more important, and I'll post those links in the chat as well.
B: As far as SaaS solutions for storage (sorry, sorry, Libby), I'm not aware of any that use CRDTs for storage. Maybe someone will chime in on Twitter or Discord later and show one, but the other thing that I'm seeing on the SaaS front is CRDTs being used for multi-writer replication between geographic regions.