From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2021-02-01
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: I guess also some file converters, but I think the most interesting thing for this group is that I wrote a small blog post about WebAssembly multi-value returns in today's Rust, without wasm-bindgen. The background is that in the new versions of wasm you can have more than just one return value, and I thought it would be straightforward to do this in Rust, but it isn't; hence the blog post. The reason why you would want multiple returns is that WebAssembly is so restricted in its return types.
A: So it's two values, and that's way more convenient than just returning one value which you then have to read out, which packs the size and the position. So that's quite useful. And next week is hack week for everyone at Protocol Labs.
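(A hypothetical sketch of the convenience being described: with a single return value you pack pointer and length into one i64 and unpack it on the JavaScript side; with multi-value returns the host receives both values directly. The export names here are made up for illustration.)

```js
// Single return value: the wasm function packs (pointer, length) into one
// i64, and the JS host has to unpack it before it can read memory.
const packed = instance.exports.get_data_packed(); // hypothetical export, returns a BigInt
const ptr = Number(packed >> 32n);
const len = Number(packed & 0xffffffffn);
const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);

// Multi-value return (newer wasm): the host receives both values directly.
const [ptr2, len2] = instance.exports.get_data_multi(); // hypothetical export
const bytes2 = new Uint8Array(instance.exports.memory.buffer, ptr2, len2);
```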
A: That's the idea, we'll see! Probably next week I'll have an update on this, even if it's not IPLD-related. Okay, and next on my list is Daniel.
B: Cool. So this past week I was mostly finishing, well, I mostly got my HAMT ADL to agree with Filecoin's HAMT on just what the CBOR bytes look like for a very simple map. I was mostly there last week; I just got stuck on the parameters for a long time, essentially the default parameters for the bit width and the hash algorithm that the library provides.
B: The other next step is reification, so taking some block-layer nodes and then interpreting them as my ADL, and then supposedly that should magically work to read key-values from an existing map. But that's pretty much it. I need to tidy up the code and push it; it's quite a lot of if/else code for Filecoin support. And this next week, like Volker mentioned, we've got the hack week, and my project so far, at least, is essentially a re-implementation of IPFS Desktop, but with Electron and js-ipfs.
C: That works? Okay, all right, cool. Yeah, I'm still trying to write my updates up, but I can just talk through them for a second. The first one, and this is like so annoying, is ipjs, which we use for all of the JavaScript builds.
C: It turns out that the ordering of the keys really matters, because the spec kind of assumes that package.json is a configuration language and not the output of a build tool, so they really, really care about the ordering. If you have a browser field and you want it to actually be used, it literally has to be first; there's no way around it. So I fixed that, and that'll work now when people do it.
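(For context, this is the constraint in Node's conditional "exports": condition keys are matched in the order they appear in the object, so a more specific condition like "browser" only wins if it is listed before the general ones. The snippet below is an illustrative sketch, not ipjs's actual output.)

```json
{
  "exports": {
    ".": {
      "browser": "./esm/index-browser.js",
      "import": "./esm/index.js",
      "require": "./cjs/index.js"
    }
  }
}
```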
But there's another really bad bug in Rollup. Rollup basically has these separate resolution paths, even though they're effectively doing the same thing, and what that means is that it wasn't actually including browser in the default conditions for sub-path imports, even when you had said browser: true in Rollup. So you're literally telling Rollup "hey, get the browser things", and then you have a thing that says "hey, I'm the browser thing for this sub-path import", and it was not picking it up, even when it was first. You can work around it by adding the condition to the config, but it is a bug.
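(Something like the following is the shape of the workaround: a sketch assuming @rollup/plugin-node-resolve, whose option names come from that plugin's documented API, not from the recording.)

```js
// rollup.config.js
import resolve from '@rollup/plugin-node-resolve';

export default {
  input: 'src/index.js',
  output: { file: 'dist/bundle.js', format: 'esm' },
  plugins: [
    resolve({
      browser: true,
      // Workaround: list the conditions explicitly so "browser" is also
      // applied when resolving "exports" sub-paths.
      exportConditions: ['browser', 'import', 'default']
    })
  ]
};
```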
C: So that's logged in Rollup now, and I think I'll have the link in there, and we're in line to have it be fixed there. But what that means is that right now, multiformats' ESM imports in Rollup of all of the sub-package paths are broken. You can import the main package and pull all the properties out of it, but you can't do, like, multiformats/block right now. So that's dumb.
C: Other than that, I've been working on IPSQL, figuring out how to explain how to tease things apart, and now I have a fairly well-designed system for what I'm calling SQL proofs. I can actually give a little talk about it with some slides at the end of this, if we have time. But essentially, unlike a proof of work, it's not providing you something that you can verify with a fraction of the computation.
C: The computation to verify the proof is the same, but it reduces all of the data that you would need. So what happens is that you do any operation in SQL, and what you get back are sets of the blocks that are read and written by it, as well as the new state of the database. This ends up being an amazing primitive for building all of the traditional database workflows that you would want, so replication between any two states can actually be done by diffing these sets.
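(A minimal sketch of the shape being described; the API names here are hypothetical, not IPSQL's actual interface.)

```js
// Hypothetical shape of a SQL-proof result: every operation returns the
// result rows plus CID sets for the blocks it read and wrote, and the
// CID of the new database state.
const { result, reads, writes, dbCid } = await db.sql(
  "INSERT INTO users VALUES (1, 'alice')"
);

// Replication between two states is then just a set difference: send the
// peer only the blocks in the new sets that it doesn't already have.
const delta = [...writes].filter(cid => !peerSet.has(cid.toString()));
```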
C: So we don't have to do crazy graph traversals; we don't have to figure out all that logic. You literally do whatever crazy thing you want to do in SQL, we run that SQL engine over a full traversal for somebody who actually has all of the data, and then what they produce is something where, oh yeah, you can do this now with a fraction of the data. And you can treat these things like CRDTs, because they're one-way functional transforms.
C
Now
you
can
do
encrypted
versions
of
the
sets
that
don't
let
the
recipient
actually
see
any
of
the
data,
just
the
results,
and
you
can
even
exchange
that
back
with
the
original
key
holder,
and
they
can
use
that
to
calculate
all
the
same
deltas
as
they
would
with
the
unencrypted
version
of
this.
Even
though
the
the
person
that
you
shared
it
with
like
couldn't
actually
see
any
of
that
data.
There's
like
really
cool
stuff
that
you
can
do
with
this.
C
Now
that
I'm
working
out
in
these
slides
that
I
can
show
in
a
bit
but
yeah,
that's
kind
of
the
the
gist
of
my
update.
C: Same thing, yeah. No, I mean, this week, for the hack week, I want to get an in-browser playground working for IPSQL, so you should be able to create databases and do SQL queries and stuff like that. And hopefully that can actually be turned into a tutorial where I talk about how this stuff works, with in-browser embeds, if I get to that.
A: Cool, next one is Pedro.
E: Yes, so on the IPLD front, still stuck a little bit in the big old-school data place, but, you know, almost there. Other than that:
E: I spent a good part of last week, and today as well, with Adin, kind of mapping out, well, helping him map out the ability to get large blocks through bitswap, which, from our perspective, you can basically think of as how to move around large streams without opening yourself up to DDoSes. It's something we'll try to put together throughout hack week as a minimum, you know, concept, because there are a lot of corner cases.
E: We have to be kind of aware of those so as not to be completely DDoSed. But the basic idea is basically an augmentation of something that Steven sketched out way back, something like 2017 or so, where we basically say that there is some way for you as a client to query other peers for what we call a SID, which is essentially just a different streaming hash function. Be it, you know, SHA-1, SHA-256, it doesn't matter; anything that can basically restart itself with minimal state. And you get back a manifest that describes a set of blocks that, if you pull them, you should be able to assemble into a stream that you can then verify with this additional information. The manifest basically tells you, at every megabyte or so, with a particular initialization vector of the hash that you are using for this SID: if you run it against the data that you already got, you should end up with the very same SID as the hash of the entire thing. And yeah, we basically mapped out the higher-level portion of that, and I've been, you know, checking with other folks which one of our options they like best, and yeah, we'll see how this goes during hack week. And so that's what I have.
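(A rough sketch of the incremental-verification idea just described: restartable hash checkpoints roughly every megabyte, so each chunk can be checked on arrival with minimal state. All names are hypothetical; this is not the actual bitswap proposal.)

```js
// The manifest carries the streaming hash's saved state (its
// "initialization vector") at every ~1 MiB boundary. A client that can
// resume the hash from a saved state can verify each chunk on its own.
const CHUNK = 1024 * 1024;

function verifyChunk(chunk, stateBefore, stateAfter, resumeHash) {
  // resumeHash(state) resumes a restartable streaming hash (hypothetical)
  const h = resumeHash(stateBefore);
  h.update(chunk);
  // The chunk is valid iff hashing it lands exactly on the next checkpoint.
  return h.digestState() === stateAfter;
}
```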
F: Sure. The relevant thing to IPLD: last week I took the state diff and the GraphQL stuff and pulled that up to actors v3, so it now has a realized Filecoin schema with the new schema. It turns out that was a lot of code duplication but no actual changes, which is exciting, and that's because I don't actually schema the HAMTs themselves. I just treat them as their ADLs, so I pretend each one is a map, and both before and after they are maps.
F: So none of the types in my representation of the schema change. However, I have to duplicate all of the types so that I know that I am in a v3 node, and therefore, when I see a HAMT in that v3 node, I should use the newer HAMT loader to load it. So I now have essentially multiple versions, tagged just by their struct names, for the older- and newer-style nodes throughout Filecoin, so that my loader can tell, based on which subtree it's in,
F
If
it's
in
av3
versus
a
v2
state
route
know
which
camp
loader
it
should
be
using
when
it
runs
into
one
of
these
amp
types,
but
that
seems
to
largely
be
working.
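(A toy sketch of the dispatch pattern being described: version-tagged types choosing a loader. Nothing here is the actual state-diff code; the function names are invented.)

```js
// Choose a HAMT loader based on which actors-version subtree a node
// lives in; the version is carried by the duplicated, tagged type names.
const loaders = {
  v2: loadHamtV2, // pre-upgrade layout (hypothetical loader)
  v3: loadHamtV3, // post-upgrade layout with parameterized bit width
};

function loadHamt(node, stateRootVersion) {
  const load = loaders[stateRootVersion];
  if (!load) throw new Error(`unknown actors version: ${stateRootVersion}`);
  return load(node);
}
```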
The pressure has reduced somewhat, as the v3 actors upgrade is still pending. Two things I will call out on that one. The Filecoin people did not change all of their HAMTs to v3 HAMTs; there are still some v2 HAMTs lurking in Filecoin, so we will still not have a full data model even after this upgrade. In particular, messages as linked from each block header are still linked using the previous HAMT, and there is no versioning pointer on the block header or the messages struct with which to signal that it should now use a different HAMT implementation, unlike the state roots.
F: So they are thinking about how to do that, and I hope that at some point in the future they will; they likely expect to do it just on a block height. So once you end up with a block header whose height is above some number, you now know that the messages will be encoded in a different way. So there will be a new implicit thing to learn. The other implicit thing is bit widths, which previously we just sort of had as this constant 5 that we knew of in terms of how you size and think about HAMTs. HAMTs are now parameterized, with different bit widths in different HAMTs, and this is another piece of data that has to live somewhere in the schema, because there are times when you run into these HAMTs and you're like: oh, this is a HAMT with bit width 6, whereas these ones are HAMTs with bit width 5. I don't know if we have a great place to encode that anywhere besides a comment, but it is important, certainly when we go to mutate; less so, I think, when you go to read, although it is useful to know. I think you can potentially infer it from how wide the bit field is, but there is this additional implicit parameter that's been added. You know, they just keep making Rod's life easy.
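(A hedged sketch of the inference just mentioned: a HAMT node's bitmap has one bit per bucket, i.e. 2^bitWidth bits, so the bitmap's byte length implies the bit width. Purely illustrative.)

```js
// bitWidth 5 -> 32 buckets -> 4-byte bitmap
// bitWidth 6 -> 64 buckets -> 8-byte bitmap
function inferBitWidth(bitmapBytes) {
  return Math.log2(bitmapBytes.byteLength * 8);
}
```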
F: I think that's all I have to say about HAMTs and that data structure. And indeed, for this week, I'm starting to work on the JavaScript side of things. I have js-ipfs building in a browser, and it seems to get a node ID and connect to things, using the new webpack, which, oh boy, no longer does all the polyfills for you in v5. So you have to notice all of the various things that don't exist and slowly fix them until things load. But it does eventually load.
F: So that's good. And now I think the first step is: can I use the DHT providers to find all the nodes that are also interested in the same CID, and then from there make a pubsub group around that collection of nodes, to be able to send messages to other people in a room and do some sort of message back-end transport in the browser? So that's the plan for this one. Cool.
G: It's one of those weeks where I've had to go back and try to catalog what I've done, because it doesn't feel like I've achieved a big chunk of anything, but that is because I was looking after the kids mostly last week, so I actually didn't get much done. So dag-json was on the cards. I've been digging into that really heavily because I can taste completion of it.
G: I really feel like I've tied up dag-pb and dag-cbor. I really want dag-json done, because I just keep wanting to use it for test fixtures, to be able to express anything you want to do with block mutations and creations in a readable test, a test that you can copy and paste.
G: So that really is my main motivation with dag-json, but getting it just consistent and sorted out would be a really nice completion, so I'm trying really hard to get that. I've also got a new, I don't know, I think I was at the meeting last week,
G: so some of this is from the week before, but I have a new JavaScript implementation that uses a lot of the same back-end code as the dag-cbor work I did, to do parsing and encoding in the deterministic way, with all the rules that we care about. I was skeptical that I could actually replace the current one because of speed, but it turns out I can; the speed is comparable and even better in some places. So that's great. But the thing I wanted to get to was this problem that Eric's having with the tokens for CIDs and bytes: as you're essentially stream-reading, you want to be able to bail early and either say
G: this is not what I think it is and it's just some standard map, or it's malformed bytes or a malformed CID, or it is a CID or bytes. That processing step is one we classically haven't had to deal with in JavaScript, because you'd just instantiate the whole thing and then inspect it. But if you do it in a streaming fashion, then it becomes a different beast, and so I wanted to get to that
G: so I could play around with it to describe those rules. In the pull request that I've linked on the specs repo for dag-json, I've described those rules. I know there have been some comments on that fixing up the language, which I am appreciative of, because it's hard to describe them clearly, so I'll try and focus on getting that done.
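(For context, dag-json reserves maps whose only key is "/" for links and bytes, which is what makes early bail-out possible. The sketch below is illustrative, not the decoder under discussion.)

```js
// dag-json encodes a CID as {"/": "<base-encoded cid>"} and bytes as
// {"/": {"bytes": "<base64>"}}. A streaming decoder can bail early on
// the first key of any map:
function onFirstMapKey(key) {
  // Only a map whose first (and only) key is "/" can be a CID or bytes;
  // anything else is an ordinary map and needs no special handling.
  return key === '/' ? 'maybe-cid-or-bytes' : 'plain-map';
}
// For 'maybe-cid-or-bytes', the value's shape (string vs {"bytes": ...})
// plus the requirement that "/" be the map's only key decides between
// CID, bytes, and malformed.
```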
G: The other thing I did: I have been meaning to ask Eric where the ipld-prime version of this is at, because he started a pull request to properly add the bytes and, I think, CID stuff into ipld-prime, and that was then just sitting on a branch that he sort of abandoned, but the branch now seems to be gone. So I'm wondering if that work has just been completely thrown away and we need to sort of start again, or something else. It'd be nice to get ipld-prime up to scratch.
G: The other thing I've been playing with a lot is the encryption stuff that Mikeal started: the pull request in the specs repo and the one in multiformats, trying to work it out. Because even though it's like, yeah, let's use AES, we're still inventing a new crypto system, and that has all of the caveats about "you shouldn't invent your own crypto system" that come along with it. So I've just been trying to figure out all of the boundaries of this thing, so that when we roll it out we can say things like: this is good for these situations, it has sensible defaults, and it won't let you foot-gun yourself in most cases; but please don't use it in these places, because you won't get the guarantees that you expect. And yeah, there's a lot of space for that.
G
If
people
just
pick
this
off
the
shelf,
saying
here's
a
generic
encryption
thing
that
I
could
use
for
my
application,
which
does
it-
has
a
particular
model
of
moving
data
around
there's
potential
for
things
to
go
bad
so
yeah
anyway.
Some
thoughts
have
been
sort
of
dropping
into
that
specs.
Pr
and
yeah
we'll
see
we'll
move
forward
on
that
soon.
C
Yeah,
I
need
to
reply
to
that
I've
been
meaning
to
but
yeah
we
should
we
should
put
in
there
which
of
these
algorithms
that
we
recommend,
like
which
of
these
is
the
best,
because
it's
silly
to
not
do
that
and
just
have
a
list
of
algorithms
yeah
and
then
whatever
other
algorithms.
We
want
to
add
like
I,
I
don't
care
what
they
are
like
they're
just
they're,
just
cipher
entries
in
the
multi-formats
table
right
like
we're
not
ensuring
that
there's
an
implementation
around.
D: Hi, I'm the outsider here, and I'm actually in a public space, so I've got to keep the mask on; I'm in a hotel lobby, so I'll keep it super short. I've listed a few things that I'm doing this week, including tinkering with some of the IPSQL SQL-proofs-as-CRDTs stuff that hopefully a couple of slides will go through later. That's related to some work that I've been doing around Merkle DAGs as CRDTs, a general-purpose CRDT.
D: So that's quite exciting for me to see. Just today I was reading some of Mikeal's stuff and it was like: oh damn, we can solve a lot of problems with this CID-set primitive. So that's really cool. I've also started doing some documentation on the Textile front, and in doing so started releasing a bunch of JavaScript libraries for very simple primitives for testing peer-to-peer systems.
D
So
I
just
released
a
library
called
lib
p2p
bundle,
which
basically
is
just
the
the
lib
p2p
bundle
that
is
used
as
the
default
in
ipfs,
so
that,
if
you
want
just
to
adjust
the
lib
p2p
bundle
that
exactly
ipfs
uses.
But
you
just
want
the
p2p
bundle.
You
can
just
use
that
all
the
defaults
are
the
same
and
it
turns
out
to
be
super
duper
handy
because
I
can
just
drop
that
in
without
having
to
get
a
whole
like
limp,
p2p,
repo
setup
and
everything
and
it's
zero
config
and
it
just
kind
of
works.
D
So
that's
nice
and
I'll
put
a
little
more
effort
into
packaging
that
up
a
bit
nicer
for
like
a
full,
yes
build,
so
that
you
could
just
kind
of
pull
the
es
module
in
a
browser
and
a
few
things
like
that.
But
for
now
it's
pretty
handy
and
I've.
D
It's
helped
me
test
a
bunch
of
like
peer-to-peer
just
like
block
exchange
algorithms
really
easily,
and
so
in.
Coupled
with
that,
I
released
something
called
lib,
p2p
rpc,
which
lets
you.
Then
that
uses
michael's
really
simple,
rpc
library,
so
that
you
can
just
like
test
test.
Different
rpc
configurations-
and
you
just
use
this
lib
p2p
bundle
and
it's
really
easy.
You
can
test
all
sorts
of
really
cool
like
graph
sync,
algorithms
and
things
really
quickly.
D
D
D: And then you realize that if you can get two browser peers connected over WebRTC, you can do HTTP-style queries between the two of them and some fun things like that, so that you get a peer acting like a server and a server acting like a peer, which is fun. And then on the Textile front:
D
Basically,
it's
a
way
to
encode
a
jwt
with
like
nested
permissioning,
and
if
you
take
that
and
you
couple
with
some
of
the
deg
jose
work
that
that
oed
and
I
helped
to
fund,
then
you
can
get
like
basically
ipld
structures
that
nest
permissions,
and
so
you
can
pass
like
with
a
you
know
with
like
as
a
header,
you
can
say:
look
here's
my
my
you
can
and
whoever's
operating
that
you
can
can
trace
the
history
of
like
permission,
allocation,
all
the
way
back
to
the
root
and
those
that's
with
an
ipld
link.
D
So
you
can
do
things
like
say.
Look
I
own
this
data,
but
I'm
telling
this
I'm
telling
this
peer,
that
they
can
also
do
some
sort
of
operation,
and
I
sign
that
and
then
I
link
my
initial
signature
in
the
ucan
and
then
that
gear
can
pass
that
you
can
along
and
say,
look
so
and
so
said.
I
could
do
this
and
you
can
trace
it
all
the
way
back
to
the
root
permission,
and
so
you
can
start
to
and
the
rule
is
like.
D
Those
permissions
have
to
be
less
than
or
equal
to
the
permissions
of
the
like
root.
So
you
can
have
like
increasingly
permission
allocation
in
a
sort
of
trustless
or
well.
You
can
trust
that
the
the
route
is
is
allowed
to
give
those
permissions,
and
then
you
can
start
to
do
some
really
great,
like
capability
based
access
control
with
that.
So
that's
something
we're
exploring
and
it's
looking
pretty
promising.
I
D: And huge props to the Fission team for marrying JWTs with some of Google's, oh, what's that French dessert called, macaroons, Google's macaroons sort of setup; JWTs and IPLD all pieced together this way. It's quite a simple but powerful access-control primitive.
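(A toy sketch of the delegation chain just described: a UCAN-style token whose proof is an IPLD link to the parent token, with the attenuation rule checked on the way up. All field names and helpers here are illustrative, not the actual UCAN spec.)

```js
// Each delegation links, by CID, to the token that granted it. Verifying
// walks the chain to the root, checking that each token is signed by the
// audience of its parent and that every hop only narrows the capabilities.
async function verifyChain(tokenCid, { loadToken, verifySig, isSubset }) {
  let token = await loadToken(tokenCid);
  while (token.proof) {                       // proof: IPLD link to parent
    const parent = await loadToken(token.proof);
    if (!(await verifySig(token, parent.audience))) return false;
    if (!isSubset(token.capabilities, parent.capabilities)) return false;
    token = parent;
  }
  // The root token is signed by the resource owner itself.
  return verifySig(token, token.issuer);
}
```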
A: Thanks, Carson. Eric, do you have an update?
J: Not a ton this week; I didn't come super prepared, sorry, so maybe off the cuff: I did a little bit of code review with Daniel and a couple of other people this week and realized that maybe I actually need to back down on one of the approaches I was taking to the schema DMT unification stuff. The boilerplate
J
It
actually
got
out
of
hand,
so
we're
thinking
about
actually
cycle
breaking
with
that
in
a
different
way.
The
problem
originated
from
having
our
generated
code
had
methods
on
it
which
could
expose
their
own
type
information
self-describingly,
and
that
meant
it
ended
up
with
a
cycle
and
so
we're
actually
thinking.
We
might
just
stop
doing
that
because
so
far,
that
feature
has
actually
been
kind
of
hypothetical
in
its
utility
so
like
maybe
just
we
won't
and
that'll.
J
A: All right, thanks. Charlie, do you have an update?
H: Yeah, so: the stuff I'm working on with the DID specification, and just the drama behind it. Mikeal and Rod, I appreciate your comments and helping me out, and Juan chimed in too. I think we're just trying to figure out how to navigate, politically, the path forward. It's not about the technology; it's the issue of the IPLD and dag-cbor specs not being normatively referenceable.
H
Actually
it's
getting
booted
out
of
the
the
did
specification
or
the
fact
that
I
borrowed
some
of
the
canonical
algorithm
text
from
the
spec,
but
I
put
it
in
the
sebor
section,
hoping
that
this
the
dagsebor
section
will
inherit
it,
and
so
then
it
opens
up
the
whole
pandora's
box
as
far
as
contribution
and
ipr
and
and
then
mime
types
and
just
that's
just
oh
the
drama.
H
So
I
I'm
not
sure
what
the
path
forward
is
there.
Actually
some
talk
about,
maybe
creating
in
the
did
spec
registries,
but
it's
that
would
be
places
to
put
properties,
not
core
representations.
H
So
there's,
if
we
bump
it
out
of
the
spec
which
is
going
to
canada,
release
the
next
like
a
week
or
two,
then
it's
just
gonna
get
dropped
and
which
is
frustrating
because
I've
been
like
involved
for
like
two
years
now
and
to
get
I
wanted
to
see,
did
documents
as
that
dag
seaborn
and
mostly
the
anyways
frustrating.
A
D
A
A: Yeah, yeah. So those were the updates. Is there anything else on the agenda? Otherwise we'll get Mikeal to present some of his stuff.
C: I'm sharing my screen now. Here we go, here we go. Okay, we'll just do what we can get through, which is fine, because towards the end I ran out of steam a little bit. Okay, so: SQL proofs.
C: If you separate a database into three different parts, it becomes much clearer how you do a decentralized database, and SQL in particular, in this model. We're really separating state transitions from a consistent state representation. So you need these transitions, then you need to refer to the states of those transitions, and you need to do that over a mutex, right?
C
So
a
database
usually
offers
you
like
some
kind
of
head
that
people
have
agreed
to
that
you're
doing
that
on,
and
so
you
have,
you
have
sort
of
state
on
a
mutex
and
then
storage
is
separate
from
that,
and
I
think
that
we're
all
really
used
to
separating
out
that
storage
layer
and
that
that
being
sort
of
any
key
value
store
that
you
can
put
the
blocks
in.
C
But
I
think,
for
you
know,
other
people
that
that's
going
to
be
pretty
new,
usually
a
database
writes
to
a
very
specific
file
format
that
you
continue
to
run.
So,
if
you
look
at
like
original
database
architecture,
we
just
have
like
one
state.
We
need
to
move
it
to
the
second
state
and
we're
taking
a
bunch
of
operations
and
doing
kind
of
intermediate
transactions
that
we
can
roll
back,
as
we
do
that,
and
then
we
eventually
kind
of
commit
it
to
this
mutex
in
a
typical
database.
C
That's
done
with
like
an
fsync
call
where
you
actually
like
write
all
about
the
disk
and
sort
of
batch
up
the
entire
transaction,
and
you
know
if
it's.
If
you
want
to
guarantee
that
it's
in
multiple
locations,
then
that's
actually
an
sync
in
multiple
computers
and
different
places,
and
then
what
this
all
means
is
that
when
you
get
back
a
response
for
a
right,
you
have
a
guarantee
that
that
is
actually
written
to
disk,
that
that
is
actually
the
state
that
everybody
agrees
on
now
right.
C
We
can't
put
all
of
our
guarantees
on
top
of
this,
and
if
we
want
decentralized
systems
like
we're,
probably
going
to
do
that
with
cryptography
right
like
we
want
to,
we
want
to
replicate
some
of
these
guarantees
that
we're
getting
in
a
decentralized
system,
and
so
many
databases
like
in
the
nosql
world
and
are
are
eventually
consistent
and
highly
scalable,
because
it
turns
out
that
if
you
want
to
do
sql
you
it's
really
hard
to
make
it
entirely
consistent
and
distributed,
because
you
can't
really
model
it
without
like
one
of
these
epson
guarantees.
C
So
what
most
big
scale
databases
end
up
being
is
like
a
lower
level
primitive
than
sql.
It
doesn't
do
as
much
a
sql.
It
does
like
a
fraction
of
what
sql
does
often,
but
you
can
model
on
this
behavior
on
top
of
it
and
then
you're
working
with
a
primitive
that
can
do
everything.
But
what
we
want
to
do
here
is
actually
take
all
of
sql
and
actually
give
you
sql
as
a
language
to
do
the
manipulation
and
allow
that
to
work
in
a
distributed
system.
C
C
C: It literally takes an input hash and it produces an output hash, and for a given implementation and version of IPSQL it will always produce that hash; that's completely deterministic. Once this stabilizes and turns into a spec, it'll be really consistent. But we're essentially talking about a deterministic state transfer from one state to another, and the input is the current hash of the database plus a SQL statement.
C
We
take
those
things
together
and
hash
it.
That's
that's
the
input
hash
right.
The
output
is
a
little
bit
more
complicated.
It's
not
just
the
sort
of
database
after
that.
Like
we're
talking
about
reads
and
writes
here,
we're
not
just
talking
about
rights.
What
you
get
back
is
a
result.
So,
when
you're
doing
a
sql
query
guild
kit
act
like
a
bunch
of
column
data
right,
if
you're
doing
a
right,
you
won't
get
a
result
and
then
you
get
two
sets.
C
Essentially,
one
is
the
blocks
for
the
reads,
and
one
is
the
blocks
for
the
rights
and
if
you're
just
doing
a
read
you,
obviously
you
won't
have
any
rights,
and
these
cid
sets
are
our
trees,
just
like
the
sort
of
chunky
trees
that
I've
been
talking
about
forever
that
we
built
the
database
on.
But
we
can
like
continue
to
improve
and
refine
these
even
further,
but
they
already
are
like
really
capable
data
structures
for
doing
comparisons,
and
then,
finally,
you
get
the
hash
at
the
end
of
executing
the
statement.
C: Okay, very important to lock in right here: we are not talking about traversals, and we are not describing traversals as part of these data structures. There are obviously traversals happening; when you execute a SQL statement you're going to traverse the tree. But what you get back in this proof is not anything that tells you about traversing, and we never ship around anything about traversing between peers.
C
What
we
ship
around
are
these
proofs
that
have
sets
of
all
of
the
block
addresses
that
were
accessed,
and
you
don't
have
to
traverse
beyond
that
to
understand
the
working
set
of
data,
and
this
is
really
important,
because
we
we
can
compare
these
all
of
the
time
to
produce
deltas
right.
It's
not
just
that
a
write
will
produce
a
cid
set.
That
is
a
delta
on
the
previous
state.
C
So
if
we're
talking
about
you
know
like
what
we
sort
of
do
with
graph
sync
today,
we
we
actually
never
do
more
than
one
request
and
response
anymore
right.
If
I
need
you
to
do
a
sql
query,
I'm
going
to
say,
hey,
run
this
query
and
instead
of
sending
me
back
like
the
entire
proof
or
all
of
the
see
the
reads
in
that
set
give
me
back
the
delta
between
that
and
this
prior
state
that
I
had
before,
and
you
can
just
get
back
that
delta.
C
So
you,
you
can
turn
like
every
synchronization
operation
in
every
application
operation
into
a
like
a
quick
request,
response
cycle
so
yeah.
So
the
producer
are
both
the
state
chains
they're.
Both
these
transactional
state
changes
and
they're.
Also
the
read
interface.
So
that's
like
important
to
keep
in
mind
and
I
haven't
even
gotten
into
how
mutexes
work,
but
you
can
you
can
imagine
a
chain
of
these
right.
That
is
like
updating
a
database
in
the
storage
somewhere.
You
could,
you
know,
publish
that
new
states
in
in
ipns.
You
can
stick
it.
C: This is a little bit from the beginning, but it's really important to note that these proofs do not reduce the amount of computation that somebody would need to do to verify them; they only reduce the set of data. But they do that incredibly well. So you can have a data provider, which can literally be untrusted, that has access to all of the data, which could be petabytes, and what you get back is just the small fraction of that data that is necessary to verify the proof that you just got, for your query or for your mutation.
C: Okay, any questions before we move on? Actually, I should probably pause for a second.
A: I have one. So the sets returned: do they only return the leaves of your query, or every CID
C
You've
encountered
every
cid
every
cid,
it's
literally
like
it's
literally
a
set
that
gets
passed
like
in
javascript.
It's
literally
a
set
that
gets
passed
through
the
entire
read
interface,
so
every
time
that
it
reads
a
block
of
data
or
pulls
a
node
out
of
cache
that
has
a
block
address
associated
with
it
that
gets
added
to
a
set
and
and
yeah
yeah,
and
so
it's
it's
literally
like
the
entire
tree
all
the
way
up.
So
you
would
never
need
to
to
do
traversal.
C: Yeah, yeah. And the great thing about them is that if you encrypt the data, you don't have to encrypt the tree
C: on top of those addresses. You can have a bunch of encrypted addresses in a set, and the set itself is actually readable, so you can pass it around in the clear and use it for your replication state. Yeah, nice, yeah. And you can refer to that set just by its address, obviously. And I mean, these are designed for efficient deltas between each other; that's where we got these set data structures from, and these are no different.
C
Yeah,
I
think
that
we've
actually
covered
everything
in
this.
This
slide
already
cool
yeah.
Okay,
so,
let's
like
see
it
right,
real,
quick,
so
yeah,
so
we
take
this
db
is
null
because
we're
creating
a
new
database.
We
we
say
create
table
with
the
schema
and
what
we
get
back
is
a
set
of
all
the
writes
that
were
necessary
for
that
and
the
new
db
right.
So
if
we're
you
know
implementing
this
database,
we
just
write
all
those
to
storage.
C: We update whatever mutex we're using. Then you do an insert, so we're going to insert some values into here now, and what we get back is the new writes and the new state. We do the same thing again, and now we're updating our database. Pretty cool.
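(Roughly the flow being demoed, as a hypothetical API sketch; ipsql, storage, and head are stand-ins, not the real interfaces.)

```js
// Create: db starts as null; each call returns the write set + new state.
let { writes, dbCid } = await ipsql(null,
  'CREATE TABLE users (id INT, name TEXT)');
await storage.putMany(writes);   // persist the new blocks

// Insert: same shape, now against the previous state's CID.
({ writes, dbCid } = await ipsql(dbCid,
  "INSERT INTO users VALUES (1, 'alice')"));
await storage.putMany(writes);
head.set(dbCid);                 // finally, move the mutex/head forward
```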
Something that is really interesting about this, and I'll talk about it a little because I haven't had time to create slides around it yet: the schema for a table acts like a contract.
C: It ensures that whoever runs these proofs in the future, when they do new inserts, is going to do the work of generating the indexing structures that you need in order to do the WHERE queries against those columns. That's a really cool feature: as you design your tables and design what you're doing, you're also baking these requirements into the operations that get run on them. Another really interesting thing happens with DAG tables. I haven't documented this yet,
C
But
I
have
a
dag
table
implementation,
where
it's
rows
just
like
a
rig
just
like
a
typical
sql
database,
but
what
you
insert
are
cids
to
whole
graphs
and
then
the
column
names
that
you
add
for
the
schema
are
paths
into
that
structure.
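(A hypothetical illustration of the DAG-table idea, with column names as paths; the SQL syntax here is invented for the sketch, not IPSQL's actual grammar.)

```js
// Rows are CIDs of whole graphs; each column is a path into the linked
// structure, and only the pathed-to values get read and indexed.
await ipsql(dbCid, `CREATE DAG TABLE blocks ("header/height" INT,
                                             "header/miner"  TEXT)`);
await ipsql(dbCid, `INSERT INTO blocks VALUES ('bafy...')`); // CID of a graph
// A WHERE over "header/height" then uses the index built from that path,
// without ever storing the rest of the graph behind each CID.
```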
C: And so what happens when it is implemented as this contract is that you're effectively plucking off the data in the graph that you need for these indexing structures, and you're not actually storing the rest. So if you know all of the indexing that you need on a huge chain, and you don't want to store the entire graph because it's huge, it's the Filecoin chain, for instance, you can bake into the contract:
C
Oh
traverse,
these
properties
to
create
these
indexes
and
then
every
time
that
somebody
does
a
write
of
a
new
state.
We
know
that
those
traversals
will
happen
in
order
to
get
those
indexes
and
those
reads
will
show
up
in
the
read
set,
but
the
read
set
won't
include
the
rest
of
the
graph
that
we
didn't
say
that
we
needed
to
index
right.
So
this
is
like
an
efficient
sort
of
plucking
structure
as
you
as
you
go
over
time
as
well.
C
Let's
look
at
reads
real,
quick,
so
yeah
we
do
see,
we
select,
we
get
back
a
result
that
result
also
has
a
hash
on
it.
So
if
I
do
proofs
in
the
future,
I
can
check
if
the
result
is
changed
by
just
looking
at
the
hash,
which
is
really
nice,
provided
that
I
that
I
trust
the
other
party
and
then
I
get
back
all
the
reads
that
were
done
and
the
database
at
the
end
of
the
transaction
yeah.
These
both
have
hash
addresses
right
like
we.
We
can.
C
C
C: Oh damn, that's so cool, though. But if you imagine structuring this operation as an HTTP request, so you're putting into the HTTP request the different states that you're changing, that HTTP request can be cached forever in whatever caching layer, because it's just this hash to that hash. The same thing happens when you're doing delta calculations. If you say: do this query on this hash, but only return me a CAR file of the delta against this prior read set,
C
Then
it's
going
to
return
you
a
car
file
in
that
htm
request
and
that
whole
thing
can
also
be
cached
like
forever
in
hp,
cache
yeah.
We
talked
about
that
yeah.
So
now
we
we
insert
new
data
into
the
database.
We
run
the
query
again
with
the
new
database
state
and
oh
yep.
We
got
this
new.
Read
this
this
new
set
of
reads
for
the
cids.
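(A sketch of why these responses are immutable-cacheable: the request is fully determined by hashes. The URL scheme here is invented for illustration.)

```js
// Same dbCid + same statement => byte-identical response, so any HTTP
// cache can hold it forever; asking for a delta is just another hash.
const url = `https://example.com/ipsql/${dbCid}/query/${statementHash}` +
            `?delta=${priorReadSetCid}`;   // return a CAR of the delta only
const res = await fetch(url);              // cacheable: content-addressed
const car = new Uint8Array(await res.arrayBuffer());
```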
C: We can then... oh yeah, I do have that in here, sorry. Yeah, I just talked about this, so I don't need to talk about it again. But yeah, if you literally say "give me the read delta", then the response can be cached forever. And I'm really thinking that CAR files actually are the exchange mechanism in whatever protocol you build on this. Because, unlike cadb and the stuff that I've been working on that has an indexing structure over it,
C: here you always need all of these blocks, and you always just load all these blocks linearly when you get them. You actually don't seek into them, because since you're already doing delta calculations everywhere, you always want all the blocks, so you're always going to just soak all of them up iteratively. So yeah, I think the CAR file really is the right way to be moving this data around. Actually, why don't I pause here again; questions at this point?
G: Because I've been playing with the Filecoin data so recently, my mind immediately jumps to scale. This kind of stuff is perfect for large structures of data that you want to query and mutate, because I mean, that's what SQL is good for: taking something really large and narrowing down to the thing that you want. And these CID sets just make me think there's a lot of CIDs, there's a lot of...
G: Yeah, and I know these are all practical concerns that can come later. My concern is mainly that I don't think we're actually very good at making sure that those practical concerns actually play out. We don't have a good track record of that, and so I am becoming more concerned about it. We do a lot of theoretical work, but when the rubber hits the road, things stop, I mean.
C
In
in
some
ways,
I
feel
like
a
lot
of
these,
the
sort
of
garbage
collection
issues
that
we
have
right
now
in
sort
of
trimming.
The
set
of
data
really
comes
from
from,
in
a
belief
that
we
were
really
going
to
be
able
to
use
traversals
for
a
lot
of
this
stuff
that
we
could
describe
certain
traversals.
C
I
need
to
be
able
to
describe
to
you
just
the
data
that
you
need
for
something
it's
like
actually
like.
A
like
kind
of
more
important
than
even
the
structure
of
the
database
in
a
way
is
that
we
have.
These
sets
that
we
can
be
comparing
against.
I
C
G: Yeah. Because even the garbage-collection thing now: the stuff I'm playing with this week, I'm just messing around with browser video stuff, but I'm still coming up against the garbage-collection thing. I want to do IPLD data structures, but I need to be able to efficiently discard the things that I don't use anymore when I make changes, and it's just so painful.
C: Yeah, yeah; just working with arbitrarily large and long graphs is, you know, difficult. What was the other thing I was going to say? Oh yeah, there's another really interesting thing about this. I'll talk about it here because it's at the very end of these slides and I haven't had time to really work it out or document it yet. When I said you can use these like CRDTs,
C: what I mean is that, because they're a one-way functional transform, you can always test whether two changes commute, because you can just apply them against each other, and if you have the same hash, you're good. There's a little bit of extra stuff that you want to do to get pure append-only inserts out of the way, which you can do by examining the SQL AST.
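(A sketch of the commutativity test as described: apply both statements in both orders and compare the resulting state hashes. Same hypothetical API as the earlier sketches.)

```js
// Two transactions commute iff applying them in either order lands on
// the same database CID, which is cheap to check because states hash.
async function commutes(dbCid, stmtA, stmtB) {
  const ab = await ipsql((await ipsql(dbCid, stmtA)).dbCid, stmtB);
  const ba = await ipsql((await ipsql(dbCid, stmtB)).dbCid, stmtA);
  return ab.dbCid.equals(ba.dbCid);
}
```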
C: But for the most part, you can actually take a thousand transactions on the same state, where you've figured out what you want to do in parallel, and then, again in parallel, test all of those in pairs of two and commute them. You literally can run all of them concurrently in pairs of two, then pare that down by half, and then pare that down by half again from there. That can be a very expensive operation to do, though.
C
There
are
edge
cases
in
which
you
end
up
causing
merges
in
the
tree
when
you
change
it,
and
so
like
every
10.
000
changes
to
like
an
integer
index,
for
instance,
will
end
up
probably
changing
one
of
the
like
causing
one
of
these
three
merges
and
then
that
tree
may
need
to
read
data
to
the
right
that
it
didn't
actually
have
that
it
was
already
handed
to
it
to
commute
it.
So
we
just
need
access
to
the
same
data
that
the
whoever
launched
the
process
had.
C
So
it
may
need
to
do
like
one
block,
read
right
away,
but
that's
literally
all
that
we
need
to
do
so.
You
you,
you
can
actually
do
all
of
these
like
crdt
commuting
things
in
parallel,
without
a
ton
of
I
o,
which
is
nice
yeah,
any
more
questions
or
comments.
D
Well,
on
the
crdt
front,
so
for
almost
all
operations
where,
like
you're,
adding
or
querying
the
like
beauty
of
the
append.
Only
like
mergeability
of
sets
like
you're
as
you
commuting
no
problem
and
then
there's
a
and
then
only
in
the
situation
where,
like
you're,
mutating
state
and
removing
something
and
in
fact,
want
to
actually
clean
up
the
the
blocks
that
you
have.
I
D
You
have
to
worry
about
like
knowing,
if
some,
if
some
past,
like
mutation,
deleted
something
and
then
another
one
added
it
back,
or
vice
versa.
So.
C
No,
no,
it's
actually
it's
a
little
bit
simpler
than
that
right.
So
this
is
how
it
would
work.
One
is
that
you,
you
need
to
look
at
the
sql
ast
once
to
see
if
it's
a
pure
insert
without
a
read
associated
with
it,
and
if
it's
a
pure
insert,
then
you
can
actually
just
you
know
that
that'll
commute
with
all
the
other
inserts.
C
But
if
you
try
to
apply
them
in
different
orders,
you're,
actually
you
are
going
to
get
a
different
hash,
so
you
need
to
like
set
those
aside,
then
like
for
the
rest
of
the
query,
you're
going
to
have
some
kind
of
read
on
write,
semantics
going
on
and
and
so
and
you
cannot
predict
in
sql
what
that
looks
like
right,
like
it's
things
get
crazy
like
in.
C
If
you
look
at
the
kind
of
backflips
that
postgres
has
to
do
to
figure
out
like
which
intermediate
state
changes
that
it
needs
access
to
and
stuff
it
gets
pretty
complicated.
But
like
sql
is
just
such
a
powerful
language.
You
can
do
like
the
like
multiple
selects
that
do
math
and
commute
things
together
and
then
produce
the
right
that
you're
doing
right,
and
so,
if
you've
got
a
thousand
rights
that
are
all
happening
at
once.
C
You
like
have
no
way
by
looking
at
the
tree
really
to
know
like.
Oh
this
one
ended
up
like
reading
some
data
that
this
other
guy
wrote
actually
right
or
wrote
over
or
changed
so
the
e.
So
the
easiest
thing
to
do
is
literally
just
to
like
run
the
statement
against
each
other's
state
and
then
see
if
the
hashes
match
or
not
right,
like
that's
the
easy
way
to
do
it.
C
There's
there's
a
I
was
talking
with
mikola
about
it
and
he
was
like
well,
no
there's
a
bunch
of
stuff
that
we
could
do
to
actually
like
narrow
that
down.
So
there's
there's
a
way
that
you
can
get
out
of
like
a
lot
of
the
computational
like
overhead,
but
there
I
think
that
there's
always
going
to
end
up
being
some
really
complex,
sql
cases
that
you'll
you'll
need
to
like.
C
Not
do
that
and
just
always
be
running
the
statement
to
check
again,
but,
as
you
cut
them
right,
you're
just
kind
of
jamming
the
statements
together
actually
like
you
you're,
just
sort
of
like
adding
one
statement
to
another
as
you
commute
them
and
you're,
adding
the
the
read
and
write
sets
and
then
at
the
very
end
of
commuting.
All
of
them.
C
You
would
just
want
to
re-run
the
whole
combined
query
set
so
that
what
you
end
up
with
is
a
proof
that
is
what
like
doesn't
include
all
the
orphan
data
that
might
have
changed
right.
C
C
C: How do I boil all of that down? SQL is all about building data structures that are fast to query and easy to query, so just let SQL do that and, you know, get out of the way, and really work on how we define the subsets of data that are needed, and reduce the amount of data needed for operations. Yeah, I think I'm out of time now, so this is probably a good place to stop.
A: Yeah, I was just about to say that. So thanks everyone for attending, and see you all again next week. Bye everyone.