From YouTube: IPLD weekly sync (Textile Threads deep-dive)
A: Hopefully pretty question-based, but yeah, I mean, an outcome of this might just be that I'd take the recording and make some more formalized materials from it. Even so, yeah, the white paper's there, but unless I update it every day it's always a little bit behind. Yep.
B: I'm going to be recording as well, just because I don't trust the live thing. Anyway, thanks everyone for joining us. If you're watching this, this is the IPLD weekly sync, although we weren't supposed to have one this week; apparently we've switched to every second week. But we're joined by the good folks at Textile, and we're going to focus today on Threads and possibly a little bit on Buckets.

B: So, the Textile data structure and protocol for collections of data and links of things. We have a bunch of people with technical questions about Threads, and we'd love to get these guys giving us some technical background that's deeper than what's easily accessible through the documentation. So I'll hand it over to Carson or Sander, let you guide us through, and we'll pepper you with questions as they come up.
A: What sort of problems do they solve, and then after that dig a bit deeper into the IPLD structures that we use, some of the roadblocks we ran into when developing the data structure, and why we chose one way of doing things over another. Because even out of that we may be able to say, oh, you know, there was a missing codec here, and it looks like this, and maybe we can go out and build it or something.

A: So if that seems reasonable to you, Sander, do you want me to just do a quick overview and then go from there? Does that seem reasonable? Cool, yeah.
A: This is an image from our Threads white paper, but it's probably the most useful, contextual, high-level overview of what a thread is. Realistically, we think of Threads, plural, as two things right now: the Threads protocol for data exchange, and then the Threads database, which is the practical implementation.
A: And then the thread, as a collection, is basically a collection of all these logs and the addresses associated with the peers involved in each log, plus various other metadata, including some keys, which I'll talk about in a second. And then, in practical use, you might have multiple peers that are interacting on a given data set.

A: Each one may be mutating the state of that data set within their own log, and then some materialized view of the final outcome, which is basically the folding of the updates of those logs to produce the final state, may be rendered or materialized on some client somewhere.
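The folding idea can be sketched in a few lines. This is illustrative only: the names and data shapes here are hypothetical, not the actual ThreadDB reducer API. Each peer appends updates to its own log, and the materialized view is just a fold over every log's updates in logical-clock order.

```python
# Toy sketch of materializing a view by folding per-peer log updates.
# All names here are illustrative, not the real go-threads API.

def materialize(logs):
    """Fold every update from every peer's append-only log into one
    dict, applying updates in logical-clock order (ties broken by
    peer id so every peer folds to the same state)."""
    updates = []
    for peer_id, log in logs.items():
        for clock, key, value in log:
            updates.append((clock, peer_id, key, value))
    state = {}
    for clock, peer_id, key, value in sorted(updates):
        state[key] = value  # last writer (by clock, then peer id) wins
    return state

logs = {
    "peerA": [(1, "title", "draft"), (3, "title", "final")],
    "peerB": [(2, "body", "hello")],
}
print(materialize(logs))  # {'title': 'final', 'body': 'hello'}
```

Because the fold is deterministic over the same set of logs, any peer holding the same log tips arrives at the same view, which is what lets materialization be deferred.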
A: One of the most useful properties is that you can essentially defer materializing that view until the time it's needed, because you know there cannot be any conflicts on write: only one peer is responsible for a given log. So there's never going to be conflicts on the write side, but there could be conflicts on the read side, when you actually try to materialize that view, and it's up to the peer that's materializing the view to deal with those conflicts when they arise.

A: So this is really nice for distributed systems, because you can basically exchange data and then deal with problems later. This turns out to be a really useful property given where we came from, which is the mobile world, collaborating peers and things like that, where you don't want to waste time or effort trying to deal with conflicts ahead of time and end up with these crazy branching structures.

A: Instead, you can kind of deal with it on read, and other teams have arrived at similar ideas as well. There are some really nice properties here. We can also talk about why single-writer logs are useful for other things, but that's a good high-level overview of that.
A: Okay. So a log is keyed by the peer ID, and Sander, feel free to jump in if I miss or brush over something really important. Anyway, we've got our logs, each peer is responsible for writing to their own, and a log is essentially a hash-linked, append-only log of IPLD blocks.

A: In fact, it's a hash-linked log of records; we call the higher-level thing a record, and each record contains what you can kind of see here in this figure. It's got some header information, including the key, or sorry, the time, and we use the term "time" loosely: that's generally some sort of logical clock or something like that.

A: It's used to represent time, but it need not be wall-clock time. And then there's the key that is used to encrypt the body of that record. You actually stick the key, a randomly generated key each time, into the header, and then that all gets wrapped by an additional key called the read key, which you can see here. The read key is used to encrypt the header, and the key that's stuck in the header is what decrypts the body.
A: The reason for this multi-layered way of doing things is that it allows a peer, or a group of peers, to pass off a replica... sorry, it used to be called the replicator key; now it's called the service key. Anyway, you can pass that key off to a service provider, and that service provider can track log updates over time without being able to access the contents of those log records.

A: So they only need to get one layer deep. They can follow the hash links, but they can't go any deeper than the read layer if they're not given the read key. That gives some useful properties, because now we can have trustless service providers on the network that do nothing more than replicate the thread content, but don't actually contribute directly to the updating or editing of any actual thread content. And yeah.
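A toy sketch of that layering, for illustration only: the "cipher" here is a throwaway SHA-256 XOR keystream, not the authenticated encryption a real implementation would use, and every name is hypothetical rather than the go-threads wire format. The body is encrypted with a fresh per-record key, that key rides in the header under the read key, and the whole event is wrapped with the service key, leaving only the hash link visible to a replicator.

```python
# Toy layered record encryption. NOT real cryptography: the keystream
# cipher below is for demonstration of the layering only.
import hashlib
import os

def keystream_xor(key, data):
    """Symmetric toy cipher (XOR, so it is its own inverse)."""
    out = bytearray()
    for i, b in enumerate(data):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.append(b ^ block[0])
    return bytes(out)

def seal_record(body, prev_link, read_key, service_key):
    record_key = os.urandom(32)                # fresh key per record
    body_ct = keystream_xor(record_key, body)  # innermost layer
    header = record_key + body_ct              # header carries the body key
    event = keystream_xor(read_key, header)    # read-key layer
    # The service-key layer wraps the event but leaves the hash link
    # visible, so a replicator can follow the log without reading it.
    return {"prev": prev_link, "event": keystream_xor(service_key, event)}

def open_record(record, read_key, service_key):
    event = keystream_xor(service_key, record["event"])
    header = keystream_xor(read_key, event)
    record_key, body_ct = header[:32], header[32:]
    return keystream_xor(record_key, body_ct)

read_key, service_key = os.urandom(32), os.urandom(32)
rec = seal_record(b"hello", "prev-cid", read_key, service_key)
assert open_record(rec, read_key, service_key) == b"hello"
# Holding only service_key, a peer can follow rec["prev"] but cannot
# recover the body without read_key.
```

This is exactly the "one layer deep" property: the service-key holder can verify and follow links, while the read key gates everything below it.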
A: Then that's where the hash-linking piece comes in. The bodies themselves can be pretty much any arbitrary data. In practice these are IPLD blocks, and so in theory you could have blocks all the way down, and you could have peers exchanging actual IPLD blocks in the body.

A: We can talk a bit about that, because that was a design decision we had to make to deal with encrypted data. But I think that gives you a pretty good overview of the general structure of what a thread really is at the base layer. And then, of course, the Threads protocol defines how you actually exchange records between peers, which peers are allowed to do certain things, pinning, and all that stuff: the actual exchange of data that needs to happen to make a distributed network work.

A: This is designed to be run on a peer-to-peer basis. Each peer would be running a threads daemon, which sort of just wraps an IPFS, or IPFS Lite, peer for exchanging data and all that good stuff.

A: But in practice we also find that people want multi-tenant systems, where you basically have one thread peer that's actually responsible for other peers' threads, and that presents new design decisions that you end up making on top of this. We could talk about that as well, because in practice that's probably a bigger challenge than coming up with a peer-to-peer protocol: it's the multi-tenant use of that peer-to-peer protocol that makes things a bit more tricky.

A: So I'm going to stop with that high-level overview and pass it off to Sander, who can talk a little bit more about some of the key design decisions that we made around the keys and things like that.
E: Good spot to stop. I was gonna ask if I can jump in, so: when you were going through the purpose of service keys.

E: You said it has something to do with making sure that someone can replicate the structures without needing to see into the encrypted bodies. I got that part, okay. So this has a lot to do with where the data storage block boundaries are, I guess, because you're worried about replicating with some model of blocks in mind, right? Yeah, I think I'm following you.

E: Okay, maybe that question was just a series of confirmations. Maybe that's it.
A: Yeah, it sounds like those confirmations are all "yes, that's right." I mean, there are different layers that you can have with the body in particular. In fact, Buckets is an example, which we can talk about later, of a practical use of what goes into the body. But in some cases the body will be just a reference to something stored on IPFS, and so really the thread...
E: So what happens if some of the content inside of that body area, so it's encrypted to this key, that's the purple dotted line here, what happens if that content is big? Like multiple blocks, more than a megabyte, or some connected structure?
A
So
if
it's,
if
it
so
we
we'd
have,
we
have
users
doing
kind
of
both
things.
If
it's
big,
then
generally
you
just
store
like
the
cid
reference
to
the
big
thing,
and
then
you
store
the
big
thing
in
ipfs
and
then
it
doesn't
really
matter
just
like
any
sort
of
blockchain
thing
that
you
stick
the
cid
there,
you
store
the
other
thing
somewhere
else
and
that's
what
buckets
wraps
that
process
up.
Basically,
unless
then.
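A minimal sketch of that pattern, with hypothetical names: a real CID is a multihash-based identifier, not the bare SHA-256 hex digest used here. The large payload goes into a content-addressed store, and only its address lands in the small record body.

```python
# Illustrative only: a large blob is stored outside the log, and the
# record body just carries its content address (a CID-like hash here).
import hashlib
import json

blob_store = {}  # stand-in for IPFS

def put_blob(data):
    cid = hashlib.sha256(data).hexdigest()  # real CIDs are multihash-based
    blob_store[cid] = data
    return cid

def make_body(big_file):
    cid = put_blob(big_file)
    return json.dumps({"file": cid}).encode()  # small body goes in the log

body = make_body(b"x" * 2_000_000)  # the 2 MB blob stays out of the record
ref = json.loads(body)["file"]
assert blob_store[ref] == b"x" * 2_000_000
assert len(body) < 100  # the log only ever sees a tiny reference
```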
F: Yeah, sorry Carson to interrupt, I was gonna say: maybe a clearer distinction is blob data versus IPLD data. Because with Buckets there's no point in storing file-like data, or at least for Buckets there's no merging, no resolution of blob-like data at this layer. But you could imagine some application having, say, a megabyte of application data that it does make sense to do resolution on, to actually track that state through time. For Buckets, though, it's just files.
E: ...split, and how it's persisted. So I guess the clarification I'm looking for is: if you have the replication key, aka the service key, you're giving that to somebody because you want them to be able to confidently replicate the thread and all of its component blocks of data. And then, if there's any complex, large, or multi-block structure inside of that body region, that's a separate responsibility: figuring out how that's persisted.
F: So, just to clarify that: I might have been the only one that didn't hear you say that.

F: Yeah, so right now, back when we did this originally, that wasn't around, so it's done in kind of a crude way. We basically just symmetrically encrypt each layer: the body gets encrypted symmetrically, so that key there is just a basic symmetric encryption key, and the same goes for the read key and the replicator key.
F
But
that's
something
we'd
love
to
improve
on
is
figuring
out
the
right
way,
and
maybe
that's
something
that
we
can
use
like
ipld,
prime
codecs,
for
because
it's
really
cumbersome
to
traverse
a
log
like
you
need.
You
need
to
have
the
the
right
algorithm.
You
know
you
need
to
know
how
to
do
it
basically
and
it's
not
self-describing
in
any
in
any
useful
way.
F
So
that's
something
we've
wanted
to
do
for
a
long
time.
We
just
haven't
had
the
bandwidth
to
tackle
like
okay.
What's
the
really
the
right
best
practice
way
to
to
do
the
log
encryption
and
maybe
there's
something
that
we
can
come
up
with?
F: Something like that is what people love, so we were trying to kind of emulate it, and that's where we landed on this log-based, event-sourcing architecture, where a thread is basically, yeah, a collection of these logs, and if you combine the tips of all of them, they represent some state that can be resolved into a document in a collection.

F: So you can imagine a to-do app or something, where one to-do is a document from one log, or maybe collaborated on across multiple logs. The way that those logs are resolved into some state is left to a higher layer, which Carson mentioned earlier; in our ThreadDB setup we call that the collection, or a collection instance, and that's where the parallels to MongoDB start. So that's sort of, if that helps, some context and motivation around this.
G: Dean, did you have a question? Yeah, I'm just trying to formulate it a little better, building off of what Eric mentioned, which is: do users tend to stick with small amounts of data in the body? Because I assume it just becomes much more complicated when you can't just send the replicator key and the thread ID and say...
F
They
really
can't
put
anything
more
than
like
json
application
data
into
these
bodies,
so
yeah
I'm
less
than
a
megabyte
for
sure.
It's
typically
the
the
common
use
case.
F
F
It's
pretty
naive
in
how
it
handles
complex
and
so
for
them
they
plugged
in
a
different
layer
there
but
yeah
it's
a
similar
thing.
Small
data
goes
in
the
thread.
Buckets
is
an
abstraction
that
handles
pinning
essentially
like
for
for
blob
type
data.
F: They see the replicator key and the read key, because they have to be able to actually track the state, but we don't really have that concept of "replicate a bucket." In that sense, buckets are more like: you can add a pinning service to a bucket, or you and I could share a bucket, but we haven't extracted that replicator functionality into the bucket world, because, yeah, you do need to see the read key to properly pin the bucket.
B: So these are not thought of as complete DAGs: they have leaves that may be unresolvable, whether that's in a bucket or something else within that body. So I guess we ought not to think too much about these things as complete sets from a root all the way down to the leaves. There's a boundary, a discrete boundary, where you hop out of this, and the resolution is your responsibility at the API layer or something like that.
F: Yep, yeah, exactly. I mean, we want to get to what you're describing; that was sort of the initial thing that we wanted to build, though we had to come up with this encryption scheme just because there wasn't really anything out there.
F: Yeah, if it was some codec-based thing, other peers could understand it, they could traverse it; they wouldn't have to be a specific thread peer. That would be amazing. We've even chatted a lot about making buckets less of a file-based API and making it understand just generic IPLD nodes; right now it's based on the UnixFS API, so a bucket, yeah...

F: You know, if you're manipulating the DAG that's describing a UnixFS structure, or some other file structure, whatever you want it to be, it would be really nice if that was more a part of the body. Then you could be more specific about what you wanted your replicator to replicate, so it would handle the whole pinning thing, if you wanted, as well as the replication of the thread. It'd be kind of more of a native thing, instead of these two different responsibilities.
F: But yeah, just for context there: we found that that was the thing most people really just wanted, a really easy-to-use mutable file system that you can sync peer-to-peer, and we just, in some ways, glued together some things we had to make that possible. But yeah, the vision is more of a full DAG thing.
B: Yeah, the graph, the complete-graph thing plus encryption. That seems to be the bit where, whenever we try to come up with encryption, that's the part that just makes it so complicated that it's hard to get over the line. How do you do it well? Where do you wrap the keys? Where do you expose them? How do you permission them? All that sort of stuff does seem to be one of the harder problems here.
E: So I might be interested in hearing some more about: you said that you're letting folks, and you're seeing folks, actually do the construction of their own conflict resolution algorithms on top of this thread structure. I would love to hear any lessons learned from that, just: how is that going? That seems like a wild place to be letting code loose, yeah.
F: Well, to my knowledge, I think it's just Anytype, unless you know of others, Carson.

A: So it's pretty much just Anytype that's doing it at this layer. Yeah, at the DB layer, with the DB API.
A: We also have the concept of a pluggable codec that you can use, and there are a few teams that have done that. And then I believe Fleek has actually done some document-CRDT implementation on top, but that's a little bit different, because that's even a layer removed: they just sort of ship data around, and then all the CRDT stuff happens in the browsers, kind of separately, because there are already pre-baked libraries for doing that stuff.

A: They just use threads as the distribution mechanism. I'm probably botching exactly what it is they're doing, but it's a layer removed. Anytype is the one doing the closest-to-the-metal CRDT stuff that we know of, and then our own CRDT is like an OT-inspired CRDT for the DB API. And, you know, every sprint...

A: We keep putting off developing a more awesome CRDT implementation, which is the nature of the thing, but we've done some preliminary research into sort of opaque CRDT operations, which has been pretty promising; we've just never gotten around to putting it into production. That would be a fully IPLD-based CRDT that takes advantage of the hash structure, that kind of stuff.
F: Yeah, Anytype is a special case, because at this point they have contributed as much as we have in the past six months to threads, to all layers of it, like the orchestration between peers and performance enhancements, so they're essentially on our team, you know. But I think for folks to really dive in and develop their own conflict resolution mechanisms, that's kind of the avenue you'd have to take, because we don't really publicize or have any documentation...

F: ...on, like, really how to do that. I mean, that was the idea to begin with, but...
A: ...if you'd just run it for us. So then, you know, we're like, okay, we'll have the Hub, and we have this database API, and now a big chunk of our users are using the gateway to interact with threads over a remote API. And then it's like a centralized system with a bunch of decentralized stuff underneath. The beauty of that, of course, is that if they change their mind at any time, they can just go off to the races and keep going, and that's important.

A: But, you know, the easier we make it, the more we abstract on the thing we build, the more people just want to use the abstraction. So that's why I think it's been fairly unpopular to build your own codec.
G: I think it also helps that, when you build the thing on top of the first thing, you also know more about how the first thing works than when you wrote it. Which means, yeah, some of it's better: some of it is nicer and easier to work with because it's the abstraction layer, and some of it's just because you understand the bottom layer more than you did last time.
A: And that's the sort of situation Anytype is in now, too: they probably understand the threads protocol as well as, or better than, we do.
F: Yeah, they've pushed it more than we have, actually, and are finding...

F: ...sort of, yeah, just areas to improve around orchestration, mostly: grouping updates by peer. They've done a lot of that kind of work already, batching updates and basically pushing how much throughput you can get through a particular peer.
A: Sorry, yeah: they did some interesting stuff around tracking thread heads that's sort of hash-based, so you could see right away whether you're in sync with other peers in a given thread or not, and decide very quickly if you need to request an update from another peer, that sort of thing. That's been pretty exciting.
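A toy version of that hash-based head tracking, hypothetical rather than the actual implementation described: summarize all of a thread's log heads into a single digest, so two peers can compare one hash to decide whether they need to sync at all.

```python
# Toy head-set digest for a quick "am I in sync?" check. Illustrative
# only; log ids and head CIDs are plain strings here.
import hashlib

def heads_digest(heads):
    """heads: {log_id: head_cid}. Order-independent single digest."""
    acc = hashlib.sha256()
    for log_id in sorted(heads):
        acc.update(log_id.encode())
        acc.update(heads[log_id].encode())
    return acc.hexdigest()

mine = {"logA": "cid1", "logB": "cid2"}
theirs = {"logB": "cid2", "logA": "cid1"}
assert heads_digest(mine) == heads_digest(theirs)  # in sync, skip the pull
theirs["logB"] = "cid3"
assert heads_digest(mine) != heads_digest(theirs)  # out of sync, request updates
```

One digest comparison replaces a per-log exchange, which matters when a node is tracking thousands of threads.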
B: Yeah, no, I mean, sorry, sorry, go ahead, Ron. I was just wondering if we could talk, we don't have to do it now, but I was interested in shifting to the view materialization side: the practicalities, the performance, whether you do caching, all that sort of stuff. Where does that sit? How does it work? That seems like an interesting part of this, because it does seem like, over time, these things are going to get big and messy.
A
I
can
take
I'll,
say
something
and
then
santa
you
can
correct
me
so
yeah.
This
is
an
interesting
thing.
Basically,
caching,
definitely
in
fact
in
in
our
reference
implementation
peers.
A: We have this notion of snapshots, so that you would actually snapshot the state at some deterministic interval that all the peers in the thread would effectively agree on, so that they'd all arrive at the same snapshot at the same time. We haven't implemented that, because the caching that all the peers are doing is effectively giving that to us.
A: But a true snapshotting framework would be more useful, because then peers could basically hydrate from some previously snapshotted state, having never interacted with the thread before. And that's a lot nicer for constrained environments like browsers, where keeping around six months of updates is probably not even possible, because your browser cache might just drop it.

A: These more hostile environments benefit a bit more from the snapshotting sort of stuff. We wrote about that on the... but we haven't implemented it.
G: No, it's totally doable, though, right? In threads, you know, if you have an ACL of all the peers, and over some period of time some fraction of the peers sign off on the update, and you can tell, because they have all the hashes that point in the right places, right? Like, it's very doable. It just takes time to do these things.

G: There are cheat codes, though, that get you around the consensus, right? Because you can just say: here is the fork; you are allowed to follow the fork at your leisure. You can see how many people follow the fork.
F
Right,
yeah,
absolutely
that
was
sort
of
our
initial
idea
for
how
to
do
this
kind
of
thing
is
this
yeah
just
continually
fork
have
an
acl
is
immutable
is
tied
to
the
thread
id
and
if
you
want
to
make
modifications
to
that,
you
just
fork
it
in
thread
id
that
people
can
follow,
has
new
rules,
but
that
has
its
downsides
too,
like
it's
annoying
to
have
to
track
multiple
things.
F
So,
just
to
mention
a
bit
specifics
around
like
view
materialization
the
way
it
currently
works,
so
the
materialization
happens
at
the
app
layer.
We
call
it
in
in
the
code
base.
F
So
there's
kind
of
this,
like
bi-directional
bridge
between
the
the
app
layer
and
the
networking
layer
and
the
when
that
player
decides.
Okay,
this
is
the
new
state.
It's
just
we're
just
dumping
that
json
it's
currently,
it's
all
json
at
the
app
layer
and
we're
just
dumping
that
into
a
go
data
store
and
then
it
can
be
read
just
from
like
a
feeling
api.
A: Yeah, actually, that's another reason why a lot of folks use the Hub, or some other running peer: they can have sort of semi-participating peers. So you have a web app that queries the Hub for the current state, but then pushes updates to that state. That's exactly to avoid that.
B
Right,
which
is
yeah
you
I
mean
again,
there's
the
blockchain
one
you've
got
the
light
nodes
and
the
full
nodes,
100
yeah,
yeah.
Okay,
that's
fair
enough!
That's
it
does
sound
like
a
hard
problem
to
to
solve,
but
it's
only
not
unsolvable
right
and
how
big,
how
big
practically
do
these?
The
the
are
these
getting
like?
Are
people
building
really
massive
data
sets
on
this.
F
Yeah
so
well
again,
fleek
and
any
type
are
our
biggest
users,
so
the
way
that
any
type
model
they
have
any
type
of,
if
not
familiar,
is
like
a
notion
based
app
or
notion
style,
app.
So
the
way
they've
modeled
it
is
that
each
database
in
notion
speak
or
each
each
page
has
its
own
thread.
So
any
updates
that
happen
there
go
into
that
thread.
F
So
you
can
imagine
you
know
thousands
of
edits
easily
happening
and
people
can
have
hundreds,
and
I
think
they
even
said
some
folks
have
like
thousands
of
threads.
So
these
things
become
it
becomes.
The
challenge
quickly
becomes
pushing
updates
to
peers,
and
that
becomes
like
a
very
intensive
heavy
lifting
thing
to
do.
F
If
you're
always
having
to
ask
peers
and
receive
updates
and
multiple
threads,
and
especially
in
the
multi-tenant
setup,
it
becomes
really
cumbersome.
So
that's
something:
we've
started
scoping
like
how
to
break
up
a
multi-tenant
node
into
multiple,
just
like
lots
of
different
processes,
because.
B
I
guess
sorry
no
go
ahead.
I
just
just
the
this
question
of
organization,
so
I
guess
there
is
a
choice
at
some
point
about
how
much
you
pack
into
a
thread
versus
having
separate
threads
like
you
could
manage,
like
any
type,
could
manage
everything,
presumably
in
one
thread.
F
B
Update
everything
in
the
in
these
massive
documents,
but
but
then
they
have
to
have
you
know
thousands
of
threads
going?
Is
there?
Is
there
any
sense
in
nesting
threads
do
p?
Do
they?
Can
you
track
threads
within
threads
or
is
there
do
you
have
a
better
mechanism
for
tracking
the
number
of
threads
that
like,
or
does
that
just
become
an
application
concern
where
it
depends
on
how
flexible
you
want
to
be.
F
I
think
it
always
comes
down
to
the
fact
that
there's
no
like
privacy,
so
a
thread
is
if
you
have
the
keys
for
a
thread,
then
you
have
the
keys
for
a
thread.
So
usually
the
application
decision
is
around
like
what
things
do
I
want
to
be
able
to
share
and
what
things
I
want
to
keep
private.
So
if
you
just
had
one
thread
for
every
user
and
they're
putting
all
their
stuff
in
there,
then
either
it's
public
or
it's
private.
You
know
so
it
becomes
like.
F
How
do
you
build
something
where
you
can
on
notion
create
a
little
page
and
share?
Just
you
know
a
section
of
that
page
or
you
know,
and
folks
always
want
like.
Oh
they
want
to
do
the
the
google
docs
style
like
share
with
specific
people
or
share
a
link,
only
people
with
access
to
the
link
and
that
kind
of
thing
so
yeah.
F
They
don't
have
to
be
so
I
guess
to
back
up
so
there's
two
ways
to
handle
this
make
a
thread
with
a
more
granular
acl,
and
then
you
have
to
deal
with
like
key
rotation
more
frequently
or
you
just
make
it
cheap
to
have
a
thread
and
that's
kind
of
the
avenue
we're
exploring
now
so
like
there's,
no
reason
that
you
can't
have
a
million
threads
on
a
node
and
that
that
comes
down
that
comes
more
down
to
like
how
expensive
is
it
to
keep
a
thread
up
to
date
and
how
expensive
is
it
to
basically
orchestrate
the
logs?
F
So
that's
kind
of
what
our
immediate
goal
is
around
scaling.
These
things
is
to
make
it
so
that
you
don't
care
how
many
threads
you
have
like.
It's,
not
you're,
not
moving
up
against
some
machine
boundary
like
your
cpu
or
resource
boundary
with
the
number
of
threads
you
have.
A: Because, realistically, a thread is a set of keys, right? There's not much more to track than that: a set of keys and some tracking around, well, peer addresses and stuff. So to have another thread that's referencing a lot of the same data is basically just the cost of creating a couple of random keys.
B: Do these sort of architectural choices about where you divide things, how you structure things, end up being a problem for users? Do you end up having to deal with a lot of those sorts of architectural questions about how they structure their use of these things, or did you find it's obvious for a lot of use cases?
A
I
think
that
is
a
very
good
question.
I
think
you
know
most
of
our
users
interact
with
threads
via
the
database
api
and
then
their
questions
are
like.
How
do
I
structure
my
data
like
they'll,
be
the
same
questions?
Would
ask
of
a
mongodb
like
how
many
collections
should
I
have?
What
should
those
collections
look
like?
Should
I
can
I
reference
one
collection
type
in
another
collection
type
like
those
types
of
architectural
questions
we
don't
get
the
like.
A
How
should
I
use
threads
the
protocol
questions
as
often
because
we
don't
have
as
many
users
interacting
at
that
level,
but
we
do
have
folks
like
any
type,
and
they
came
back
to
us
and
basically
said
we're
using
threads
like
this.
A
It
would
be
swell
if
it
was
even
easier
for
us
to
use
threads
like
this,
and
we
looked
at
that
and
said
yeah.
That's
a
good
idea.
We
should
try
to
do
that.
So
you
know
it's
like
a
difference
of
where
people
are
coming
in,
but
for
the
most
part
with
like
our
javascript
sdks
and
the
questions
I
feel
the
most.
It's
like
someone
who
works
for
mongodb
would
answer
the
same.
A
Sets
of
questions
maybe
a
little
bit
differently
because
we're
talking
about
a
distributed
system,
but
then
you
know
I
try
to
say
we
try
to
do
things
like
okay.
Well,
you
know
right
now.
A
thread
is
like
not
that
cheap,
so
you
might
want
to
put
like
more
collections
into
your
thread
or
you
might
want
to
like.
Have
one
user
be
responsible
for
these
collections
and
one
user
b
you
know
do
some
of
that,
but
at
the
end
of
the
day,
it's
more
about
how
do
I
structure
my
database
for
my
application?
Then?
F
Yeah,
I
think
that's
right
yeah,
I
think
they're
both
one.
In
the
same,
we
all
yeah.
The
snapshotting
thing
is
definitely
a
bit
of
a
blocker
for
us
at
the
moment
to
seeing
like
yeah
for
for
folks
to
really
open
up
their
floodgates.
F
We've
done
some
work
in
that
direction
already
so,
like
batching
updates
per
peer,
that
you're
talking
to
across
threads
and
like
cueing
and
and
all
that
kind
of
stuff
that
you
can
imagine
as
improvements.
F
But
currently
it's
still
kind
of
bound
to
like
a
thread.
So
that's
where
some
of
the
expensiveness
of
having
multiple
threads
comes
from,
because
it
just
each
thread
adds
some
multiple
multiplier
to
like
how
chattery
your
note
is,
regardless
of
in
some
ways
how
many
peers
are
in
those
threads,
so
yeah
it
should
it.
It
should
be
that
it's
just
more
expensive
to
communicate
with
more
peers
versus
like
how
many
threads
do
you
have.
You
know
kind
of
obvious
in
hindsight,
but
that
was
our
our
initial
implementation.
B: Right. I have a couple more things; I know we're running up against time here. There are a couple more things I wanted to cover, and I want to make sure that others, including Michael now, have their chance as well. One is, we haven't really touched on buckets very much, and I just don't have any...

B: I don't have much of a mental mapping of buckets at all, other than: this is a UnixFS representation, a UnixFS materialization of a large collection. What is their relationship to threads? Are you storing, like, dag-pb in these things, or are you doing something else where you can materialize a UnixFS out of the thread model? What exactly are buckets, and what is their relationship to threads?
F
Yeah, so a bucket is built on threads in that the root of its UnixFS DAG is tracked in a thread, and that's really where it ends, except that you get the replication piece, you know, minus the pinning of that thread. The history of the changes in the bucket can be replicated using the thread structure, but other than that you could imagine building buckets without the thread
F
history tracking, because the schema for a bucket is basically just a single entry, plus some metadata. We track metadata about paths in a bucket; you can add whatever application metadata to a path, plus the encryption keys.
F
It's full-node encryption, so we're encrypting every node in that UnixFS DAG. It's multi-key, so I could share with you one file in a UnixFS DAG and you'd be able to decrypt down through that structure and decrypt the file, the leaf that I'm trying to share with you. There's an ACL built into it; that's also part of its thread schema.
F
So a bucket is basically just kind of sugar on top of UnixFS. And then, just to recap its relationship to a thread: the schema is tracked in the thread, which enables the syncing of a bucket. So if Carson and I were sharing a bucket peer-to-peer, I would receive updates via threads of the changes he made in that bucket, via the hash changing or the metadata about paths changing.
F
We built a pinning API that lets you interact with the bucket over the standard pinning spec. So yeah, again, it's just sugar on top of UnixFS, basically: things that people who are familiar with S3 or other object storage want immediately when they start using UnixFS.
B
Yeah. The other thing I just wanted to quickly touch on was your roadmap, or your sense of what the priorities are for the development of threads. It sounds like snapshotting is important there. Do you have other things that feel pressing to work on?
F
One thing we've talked about is severing off the database layer and focusing more on the network layer, because what we always run into is people asking for complex database things: hey, how can I do custom indexing? We provide a very basic indexing mechanism, but if people want custom or compound indexes, or aggregators, or any of the complex database things that are baked into something like Postgres, which has had decades of development, we're just never going to do that,
F
practically speaking. So we think there's a way to back into that, and maybe create connectors for more advanced databases, by focusing more closely on the networking layer: just this thread structure. That's one piece. Another piece is something we've touched on, which is: how do you make having a thread inexpensive, so that when people are designing their applications they don't have to think, okay, I only want to have 10 threads total per user, and stuff like that?
F
Just getting rid of that restriction is something we need for Anytype and some of the other heavy users. And then the last main priority, well, I won't say the last, but one of the other priorities, is this multi-tenant setup, so that people can use the replication piece of threads more efficiently. That's just a basic breaking up of a thread node into multiple responsibilities, and then making the orchestration mechanism scale at a multi-tenant layer.
F
And then I would say the last one, that's on my mind anyway, and Carson can chime in, is redoing a bit of our encryption setup and trying to do it more in a best-practice IPLD way, given all the development that's gone on there since we initially did this. It would be amazing, and it's something that Carson and I have talked about.
A
Yeah, I mean, I'm definitely interested in leveraging some of the AES codec stuff that Michael did recently, and kind of standardizing that whole thing a little bit more. Even though single-writer hash-linked logs are sort of obvious, having a library that's just "new log, insert record" is pretty nice to have. It would make building threads really easy, and it would help other people who want to build similar things. And there's,
F
there are things that would be cool to standardize, like: do you backlink at a certain interval to allow for parallel traversal, and what are other intelligent ways to make verification of the log faster? Things like that would be cool to have as a standard sort of thing.
F
In some hand-wavy way that we've talked about a lot, but yeah, it just comes down to division of resources and,
B
time. So, in terms of where we're heading, our interest here is that we're looking at this general problem of collections and wanting to standardize something for the ecosystem. How do we instantiate something that is a reasonable standard and serves a basic set of use cases? You guys have done the most work in the ecosystem in creating something practical, and what we're trying to figure out is:
B
what are the base use cases that we care about, and what is the amount of complexity we can put up with to get that? My hunch is that with threads you're optimizing for a set of use cases that is perhaps a superset of what we're looking at, so I'm trying to figure out what that difference is, what the delta is there. And is there something within threads, maybe a subset of threads, that we could look at extracting, or standardizing on a piece of, or does it actually require the whole lot?
B
So that's what we're thinking about at the moment, so I think we'd love to have more conversations about that. There are other topics here; I have some other crazy stuff too.
H
A fair amount of things that people seem to think are ensured by the protocol aren't. One big one is that there is no guaranteed uniqueness anywhere: the uniqueness is of a chain transaction that you can't actually predict. So if you're looking at an NFT, or at NFT metadata or any of the assets, you can't go from that to an authenticatable structure. That's kind of one problem.
H
The only thing that really tells you about an NFT is this token URI, which we have now sort of pushed people to standardize on using IPFS URLs. People were already doing this, but now it's really clear that you need to do that, and the spec mentions that maybe you want to point this at a metadata.json file that conforms to a particular schema. But there are a lot of people that don't do that, and then a few people that just break the entire spec, which is really annoying.
H
If that makes sense. So one thing that we could do is create a spec for, say, a mutable NFT, where the IPFS URL points to a new codec that we identify, and it's always an identity multihash that just has bytes in it. Now that entire prefix, all the way up to right before the length, is effectively the prefix for a key-value store almost right on the chain, because you can look at any chain and ask, okay, who was the first person to use this particular identifier? That is effectively the key owner, and then the whole chain of custody
H
for that key in that particular chain belongs there. Then you can just post messages to the NFT address in order to do updates, and you can even do things like,
H
okay, only the owner is allowed to update. You can tool that kind of thing in, but once you start getting into updates and having this long-running data structure that actually takes updates, I start thinking about threads and some of the problems that threads solve. I think one of the differences here, though, is that you want to actually leverage the existing ownership information in order to do the permissioning.
H
You don't want to tool that outside of it, and you're also not relying on any outside tooling, any off-chain tooling, for that either, which is kind of cool. But that changes some of the ways that I think you might use threads. I don't know, have you guys thought at all about this
A
Weird space, yeah. I mean, this is exactly how the did:ethr
A
spec works: you actually mutate a DID, a decentralized identifier document, for a given Ethereum address by posting your updates to the address, and then the events on that smart contract can be queried and resolved to produce the actual document. So effectively you have a mutable document that you can change over time, and you could effectively make those updates like thread record updates, basically, but they're,
A
you know, native Ethereum events, so you can query for that. It's pretty cool, because it costs a lot right now to mutate that NFT or that DID document, but it's cheap to query it, so it's great for DIDs, because you resolve way more frequently than you update. Yeah, it's the same setup, I think.
H
I think this becomes really interesting in the context of NFTs, because if you look at what artists and different people are doing with them, they're creating these one-time assets, and then there's the secondary market for selling them. But if you have one that's mutable and only the owner can mutate it,
H
you can start to create artistic works that go viral, selling them as part of the virality and moving them around as part of the virality. And then, because you have that contract for a percentage of every sale, that actually makes it much more profitable than these big-ticket sales. Like, if
H
something really goes viral, you actually capture a huge amount of value from that. Yeah, a friend of mine and I worked on the Grimes NFT, and we've been talking about a mixtape where, when you own it, you get to add your track to it, and people sort of follow these mixtapes; when they get popular, greater resale value. Yeah.
F
H
It's in the contract. You write it in the contract, so you have to define it, and there's a de facto standard right now of just taking, I think, ten percent, and that's it. But you see other people that are minting them putting themselves into these contracts.
H
Of course, sometimes artists don't have that in there, but there's a lot of cool stuff you can do with it. You can spread out payments of different percentages to different wallets and stuff. It costs more money to create, because there's more data going in, but if you want to get fancy with it, you can.
F
Yeah, there's kind of a side discussion that we've had around threads for a while, around a thread DID, so that a thread peer can advertise services, and one of those services could be a buckets API or,
H
Yeah, I think one problem I've been tackling is that you have on-chain and off-chain data, and if you parse the chain, you can take some of the chain state into the off-chain state. But when all you're passing around is the off-chain data references, you don't necessarily have a way to call that back and authenticate it, and this becomes a problem when you're building some of the more advanced workflows. So, like, one
H
So then, if you ever get an NFT by address, you can always authenticate the entire chain of ownership for that NFT, and you can actually carry that context around. Whereas right now, if somebody on a side chain makes a really popular NFT, you can just mint it on Ethereum and then go sell it as if it were that one, because people will assume the Ethereum one is the real one; nobody's checking all the side chains to see.
H
It's a benefit in that it makes moving them easier: if you want to move it, you can use the same address and then stamp the tombstone on the old one. It points to the other chain, but you have to know to go and authenticate it on that chain.
F
URI, yeah, super cool. I think one thing that we've been missing for threads is just clear use cases that we can really rally around, because you can imagine doing things differently if we were to design this for IoT or something, or design it for NFT tracking. So there's been a slight paralysis around some decisions without super clear use cases, and right now we're mostly driven by our heavy users. But yeah, that would be great.
F
B
We're working on the basics of what it is we're even talking about right now. So I think maybe the next conversation with you would simply be about the use cases that we're seeing and thinking of as important, and then looking at the intersection of those with what you have, because that should make things obvious for all of us. Really, that's the next step for us:
B
outlining those use cases that we think are important, and,
A
B
Yeah, the way it was put recently was that the notion of collections of data, particularly IPLD collections of things, was something that was punted on early in the IPFS design process and never really gotten back to. It's taken people like you to come up with ways of solving that, and it really would be good to have something that was advertised as a standard:
B
this is how we recommend you do this. And then we can have interchange discussions about how you can talk to different services using these things, and how you can bake them into all sorts of other things like Filecoin, how you can build them into DAOs, data DAOs.
B
All that sort of stuff. There's a whole range of things where, if you have a single means of defining a collection of things, you can unlock a lot of power. So that's what we're thinking.
A
I mean, it's the same thing as we were talking about with DIDs. If you've got some sort of definition of what a collection is, then you can implement that on Ethereum, you can implement it in just hash links, IPFS CIDs, and you can do it on popcorn. You can do it everywhere, and that's pretty exciting.
H
What is a good multi-block data structure for storing a large collection of CIDs? Because the mutation rate is just horrendous. And I don't think that we've gotten very far, other than in that ZDAG spec that I wrote: there is a compression algorithm for CIDs that leverages ordering in order to get rid of the prefixes and some other stuff. So you can shave like 15% off of the size of just a bunch of CIDs, but other than that,
B
Okay, well, have any of you got any other questions that you want covered before we wrap up?
G
I guess just briefly, on the networking side of things: if you need stuff, if you want to talk through things, I'm available. I have not forgotten our conversation about trying to publish many, many IPNS records with buckets; I should have some nice surprises for you soon. And yeah, I too feel the pain: I would love to work on the cool mutability thing, but there are all these immutable things that need to get taken care of first.
G
F
Yeah, we've started to do a little Testground work with Anytype, basically to test little improvements that they've been making, and we'd love to just throw gossipsub in there. Ideally, we could play with little variations on gossipsub here and there and see these things run at scale
G
in Testground, because I know there was a group of folks who were doing a bunch of testing in Testground, especially in the months leading up to the Filecoin launch, asking: if we hammer this and try to be really mean to it, does the network still hold up okay? And it did fine, so you should be able to test those things at larger scale.
G
One thing that is sort of interesting is that, as it stands right now, gossipsub has a maximum number of topics that a node is able to tell you it cares about. So if you're running a single node that's trying to listen on a million topics, and the maximum message size is a few megs, you're going to start getting real sad.
F
G
Right, especially because you don't really need the 100-gig memory boxes; what you really need is a whole lot of CPU. So we can talk about ways to do some horizontal scaling here, and also, if we really wanted to, we could find ways to modify gossipsub to be able to handle more topics, more subscriptions. But it feels like there are probably some good ways to do
G
horizontal scaling here, and maybe the protocol warns you: before you go on, let us think whether there's another way to do this. So yeah, if you guys want to go brainstorm some options, we can definitely talk through those.
E
B
Probably wrap it up. Thanks so much for your time, but I think we'll come asking for some more of your time in the near future. It's been fascinating.
A
B
No, it's good. I've appreciated the freeform format here, because looking at documentation is one thing, since it's often user-focused, and then the white paper has this academic thing about it that leaves me wanting very specific details. So I think freeform is really helpful. Yeah, cool, great, okay, all right, thanks.