From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-06-20
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
And I believe we're live, so welcome everyone to the IPLD sync. This is the 20th of June 2022, and we're going to do a quick run-around of things that have been happening in IPLD land for the folks that are here on the call. I'm going to go first, because I'm first in the little doc.
A
So the big thing that I have to talk about is a tag of v0.17.0 of go-ipld-prime, the Go IPLD library, and the changelog has all the details for that. So if you are a Go programmer, you should have a look at that, and I'm going to quickly go through the highlights.
A
So there are three things that I'd identify as potentially breaking or disruptive changes, although they're unlikely to be things that most people run into. There's now a check for undefined CIDs in the codecs: if you try to encode an undefined CID — a zero-value CID — it will error at you, because there just isn't a way of encoding them properly. That should always have been the case, because you won't get valid bytes out. Bindnode now has an earlier check for when you hand it the wrong kind of pointer; there was a case where we were doing that, and the error came later — sometimes it didn't come at all — but it's just a check, because it really does need a pointer: just a pointer, not a pointer to a pointer. And Go 1.16 has been dropped from the CI system. That's sort of a build change — it doesn't mean it's not supported, but support could probably slip over time.
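The zero-value check described above can be sketched in miniature like this — note this is a hypothetical illustration of the idea, not the actual go-ipld-prime API:

```go
package main

import (
	"errors"
	"fmt"
)

// CID is a stand-in for a content identifier; a zero value means "undefined".
type CID struct{ bytes []byte }

// Defined reports whether the CID carries any bytes at all.
func (c CID) Defined() bool { return len(c.bytes) > 0 }

// encodeLink refuses to serialize an undefined CID, mirroring the idea that
// emitting a zero-value CID could never produce valid bytes on the wire.
func encodeLink(c CID) ([]byte, error) {
	if !c.Defined() {
		return nil, errors.New("cannot encode undefined (zero-value) CID")
	}
	return append([]byte{0x00}, c.bytes...), nil // placeholder framing
}

func main() {
	if _, err := encodeLink(CID{}); err != nil {
		fmt.Println("zero-value CID rejected:", err)
	}
	out, _ := encodeLink(CID{bytes: []byte{0x01, 0x55}})
	fmt.Println("encoded length:", len(out))
}
```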
A
So, some highlights in there. This release has a ton of work in bindnode. Daniel did a heap of work to get bindnode up to scratch — to be productionized, really, so we could use it in production systems. He did a lot of fuzzing, all the way through the codecs and schemas and things, and he picked up and fixed a whole bunch of panics that were there in the sort of early version of it.
A
So bindnode is pretty solid now, and we've been working on it. Schemas, as I said, have also had some work done to them — the schema DMT and the DSL. The DMT, which is the internal representation, has been improved, so the schema stuff in go-ipld-prime now covers around 90% of the schema specification.
A
And that's the 90% most used: the bits that aren't quite done are the bits that hardly ever get touched, so we're pretty close to full support there. Eric's patch feature got merged — it's the initial version, and it's very basic — and it goes with a spec that also got merged. So that's now tagged.
A
There's
a
couple
of
things
around
codex,
like
daxybore,
is
now
properly
strict
with
extraneous
data
afternoon
after
the
valid
object,
there's
an
experimental
determinism
flag.
You
can
use
for
the
codec
for
decoding.
A
That's
that's
fairly
basic,
but
it
can
be
expanded
over
time
and
there's
also
another
option
for
dac
jason
to
allow
it
to
go
beyond
the
end
of
a
valid
object.
So
it's
the
opposite.
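The "strict about extraneous data" behaviour isn't shown here for dag-cbor itself, but the same idea can be sketched with the standard library's JSON decoder — decode exactly one value, then insist the stream is exhausted:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// decodeStrict decodes exactly one JSON value and errors if any extraneous
// bytes follow it — the same kind of strictness described for dag-cbor.
func decodeStrict(data []byte) (interface{}, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	var v interface{}
	if err := dec.Decode(&v); err != nil {
		return nil, err
	}
	// After one complete value, the only acceptable thing left is EOF.
	if _, err := dec.Token(); err != io.EOF {
		return nil, fmt.Errorf("extraneous data after valid object")
	}
	return v, nil
}

func main() {
	_, err := decodeStrict([]byte(`{"a": 1} trailing`))
	fmt.Println(err) // extraneous data after valid object
	v, _ := decodeStrict([]byte(`{"a": 1}`))
	fmt.Println(v)
}
```

The dag-json option mentioned above is the opposite knob: it deliberately allows the decoder to stop at the end of one valid object and leave the remaining bytes alone.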
And lastly, we did some stuff with the build system, so CI now tests Windows and Mac and 32-bit, and does staticcheck and go vet and auto-releases, and a bunch of stuff that comes along with this unified CI system that we have across the org. So that's that!
A
A big release, so go have a look at the changelog for that — it's linked in the doc, or just go to the go-ipld-prime repo and have a look. The other thing that I can remember having done in the last two weeks that's notable is that I helped Jorropo get his proposed base256emoji merged. It's a new base encoding, and it's built from emojis. It's the opposite of what you want a base encoding to be: it's not only not compact, but it's expensive — it'll turn your bytes into a greater number of bytes, for a base encoding. It's a bit of fun, and one of the other reasons, other than fun, that Jorropo proposed it is that it tests our multibase encoding implementations — it sort of stretches them a little bit and gives them a new test case.
A
I just think it's a bit of fun: you can now get CIDs as emojis, and there's a valid base encoding for it. I've done a JavaScript version of this — I just haven't merged it yet, but it's working in js-multiformats. I did think it might be fun to get that merged and maybe get it supported in the CID inspector. That might be fun, but that can come later; the CID inspector's a little bit out of date. Yeah, anyway.
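The basic shape of a one-emoji-per-byte encoding is easy to sketch. The real base256emoji uses a curated table of 256 emoji; the consecutive rune range below is just a stand-in alphabet for illustration:

```go
package main

import "fmt"

// A toy 256-symbol alphabet: one rune per byte value, starting in the
// "Miscellaneous Symbols and Pictographs" block. The real base256emoji
// table is a hand-picked list; this range is only a stand-in.
const alphabetStart rune = 0x1F300

// encode maps every byte to one emoji rune — deliberately inefficient,
// since each byte becomes a 4-byte UTF-8 sequence.
func encode(data []byte) string {
	out := make([]rune, len(data))
	for i, b := range data {
		out[i] = alphabetStart + rune(b)
	}
	return string(out)
}

// decode reverses the mapping, rejecting runes outside the alphabet.
func decode(s string) ([]byte, error) {
	var out []byte
	for _, r := range s {
		if r < alphabetStart || r > alphabetStart+255 {
			return nil, fmt.Errorf("invalid symbol %q", r)
		}
		out = append(out, byte(r-alphabetStart))
	}
	return out, nil
}

func main() {
	enc := encode([]byte("hi"))
	fmt.Println(enc, len(enc)) // two emoji runes, 8 bytes of UTF-8
	dec, _ := decode(enc)
	fmt.Println(string(dec))
}
```

The 4x byte expansion is exactly the "turns your bytes into a greater number of bytes" property described above.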
A
Those are the notable things I can remember. So maybe, Mo, you want to go next?
B
Yeah, hi. I've got a bit more time in the next few weeks to work on IPLD stuff, so I'm going to be helping Rod with migrating a bunch of the JS IPLD stuff to use ECMAScript modules more properly. Part of that involves rewriting how the TypeScript types get generated: right now, all of the JS IPLD stuff is using JSDoc, but TypeScript doesn't support JSDoc in dependencies, because it sucks.
B
I went and confirmed that for myself in a test repo, and yeah, it doesn't work — I probably shouldn't just listen to folks. So I'm looking into the existing build tooling for generating those types, and just kind of grokking what the existing code base looks like before I propose any changes, but that's something I'll be working on. We also got some new docs, which I think are ready to merge, talking about why content addressability is good. So that's going to be in the motivation section of the website.
B
I think probably we'll merge that at the next triage — or maybe I should just do that now, I don't know, we'll figure it out.
B
Sweet, yeah, thanks for pinging me to copy-paste all that stuff and do some minor formatting. I forget who the original writer was, but props to them for doing the bulk of it. Yeah, the other thing I've been working on is helping this team called KenLabs — I think they were on here a couple weeks ago — to put together a dev grant to standardize prolly trees, based on what Michael worked on, which should make it easier to do
B
Ipld
based
search
indexes,
so
part
of
that
work
is
going
to
be
implementing
a
go
version
using
adls
and
schemas,
and
the
schemas
are
going
to
be
really
useful
so
that
we
can
port
to
other
languages
down
the
line
and
there's
probably
going
to
be
a
demo
library
for
actually
building
an
index
and
querying
it
for
arbitrary
ipld
data.
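Prolly trees aren't specified in this call, but the core trick behind them — content-defined node boundaries, so that two writers holding the same sorted data always split it into identical, content-addressable nodes — can be sketched roughly like this (hypothetical parameters, not the KenLabs design):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// boundary decides whether a key ends the current tree node. Because the
// decision depends only on the key's own hash, two builders holding the
// same sorted key set split it into identical nodes — the property that
// makes prolly-tree indexes deterministic and cheap to diff.
func boundary(key string) bool {
	h := fnv.New32a()
	h.Write([]byte(key))
	const target = 4 // average node size of ~4 entries, in this toy
	return h.Sum32()%target == 0
}

// chunk splits sorted keys into leaf nodes at content-defined boundaries.
func chunk(keys []string) [][]string {
	var nodes [][]string
	var cur []string
	for _, k := range keys {
		cur = append(cur, k)
		if boundary(k) {
			nodes = append(nodes, cur)
			cur = nil
		}
	}
	if len(cur) > 0 {
		nodes = append(nodes, cur)
	}
	return nodes
}

func main() {
	keys := []string{"apple", "banana", "cherry", "date", "elderberry", "fig", "grape"}
	for i, n := range chunk(keys) {
		fmt.Println("node", i, n)
	}
}
```

Inserting one key only perturbs the node it lands in (and occasionally a neighbour), which is why such a structure suits content-addressed search indexes.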
B
So that's probably going to start in the next few weeks, once it gets approved by the grants team, but hopefully around, like, January we'll have something implemented for folks to build off of, which I'm personally very excited for as well. I've also been starting to talk with lidel about IPLD stuff on the gateway.
B
So recently, I think the stewards team — or I don't remember what team he's on, I think it was stewards — put together some specs for the gateway: actual specifications of "here's what an IPFS gateway is and how it works". And some of the stuff that's been kind of floating around is how we deal with arbitrary IPLD data. Right now there are ways to just download stuff as dag-cbor or, I think, CAR files.
B
If you have a writable gateway, you could send a patch set to a given CID, and then it'll spit back a new CID for you to use in your application. That could make modifying existing IPLD data, in whatever encoding, a lot easier for an application — especially if you're in an environment where you don't want to have to import, like, all of go-ipld-prime or the raw bits.
B
So hopefully we'll figure out some rough idea of something there. I'm not sure what the next steps are — maybe we'll put together a GitHub issue, or start a pull request to the gateway spec, or something along those lines, just so that we can start getting commentary on what makes sense for the community.
B
So
right
now,
my
thinking
is
to
base
it
off
of
some
of
the
ipld
url
stuff.
I
did
in
the
exploration
reports
last
month
earlier
this
month.
B
I
don't
I
don't
know
what
time
is,
but
so
we'll
probably
start
there
and
then
see
if
we
can
expand
that,
I'm
probably
also
going
to
be
doing
some
non
ipld
gateway,
spec
work
to
do
with
writable
gateways
as
part
of
the
work,
I
did
for
a
recent
dev
grant
to
do
ipfs
protocol
handlers
in
the
agricore
browser,
so
we
kind
of
learned
a
lot
from
what
is
like
a
nice
interface
for
talking
to
an
http
like
interface
for
like
interacting
with
ipfs,
and
so
we
can
probably
take
some
of
that
ipld,
but
also
back
port.
B
Some
of
those
changes
to
the
official
writable
gateway
spec.
I
think
that's
about
it.
D
I'll keep writing — I was just writing more things in the notes so people can read them later. All right, I guess some things. Having nothing to do with IPLD: go-ipfs has gotten a new name, which is Kubo, so I will be trying to remember which name to use, and slowly getting better over time.
D
There is a Kubo PR that shows using WebAssembly IPLD codecs and ADLs: in your config file you just plop in a file name and the name of the codec or ADL, and it will just load it and then do it for you, which is pretty cool. It's another step towards showing how we can
D
see how this WebAssembly stuff can work and play out. The BitTorrent directory v1 codec works: you can use that, you can keep testing through them, you can do the UnixFS thing where you keep walking through directories and directories until you hit files, and then load the file. If some of you have seen the demos, they'll look the same as all the other demos, which is loading a koala — but this time the koala will be in a directory instead of just at the top level.
D
There's some work going on to clean up the PR that's there, and the specs. I also separated out all of the Go code into its own repo, instead of living inside of go-ipfs — go-ipfs uses almost nothing special here, it just imports the wasm-ipld library. Right now it's a single repo that has both some Rust code and some Go code.
D
And you can check out the PR that's there — sorry if it's a mess, I'm working on it. There is this IPFS Thing event that is coming up next month; there's a link in the notes.
D
The event is for maintainers and core contributors of an IPFS implementation, whether you have a production-ready implementation or you're just getting started — more details and info are on the website. If you have not already received an email and you want to go, just click the RSVP button and fill in some info. I am leading a track that is tentatively called "data agony in IPFS", and we'll probably have a bunch of conversations related to IPLD things.
D
So if you want to talk about something, or you would like to hear other people talk about something, either let me know or, if you have something you want to talk about, just file an issue or a GitHub PR to the website. The website has a "here's my GitHub" link — just go there, file a PR, and it will modify the schedule. Yeah.
D
So I'm looking forward to talking about some things with people there, and for all of you who won't be joining, don't worry: there will be recordings and stuff, and we will post the results.
D
Yeah, I feel like, to some extent, "data agony in IPFS" — which was a name I stole from somebody else who had a concept for this — could also probably be boiled down to: UnixFS, why is it special, and how could it be less special? How can we not put it in a privileged position, and allow us to do other sorts of things, so that people can experience the pain themselves?
D
Does
unix
fs
suck?
That's
okay,
you
have
a
better
way
to
do
it.
Does
that
thing
also
suck
it's
okay,
someone
will
come
up
with
a
better
one
and-
and
we
can
like
work
like
that,
instead
of
needing
to
decide
like
unix
fs
v2,
the
ultimate
unix
fs
must
look
exactly
as
the
following,
which
I
feel
like
has
been
a
a
bad
road
to
go
down
in
the
past.
E
Yep. I also have some sessions at the IPFS Thing on the concept of privacy — it's on that link. We need more people who are interested in it; I need to get my act together on organizing it, so expect some emails and such. One comment, Mo, on the gateway thing.
E
I did at some point get partway through hacking together ipfs-shipyard/gateway-prime, which separates the core HTTP package of go-ipfs into its own separate repo that only uses a go-ipld-prime-based interface, and it does have the tests basically passing. So it could be plausible to make that a thing that go-ipfs — that Kubo — uses, but that other things could also use.
E
It
needs
a
little
bit
of
work,
but
I
think
we
idle
was
also
reasonably
supportive
of
this
as
a
potential
path.
I
posted
it
as
the
first
thing
there
the
indexing
work.
We
are
looking
to
add
hampt
amps
as
the
way
that
we
ingest
lists
of
sids.
It's
not
quite
the
right
use
of
amps.
We
really
should
use
amps,
but
we
don't
have
amps
at
all
and
we
have
mostly
working
adls
of
hamps,
so
we're
going
to
use
hands.
E
But
that
means
we're
going
to
try
and
get
the
the
hamped
implementation
over
the
finish
line,
to
really
be
a
default
included,
ideal
or
closer
to
that.
So
we're
going
to
get
the
reifie
we're
going
to
get
put
in,
at
least
in
the
refires
of
where
we're
using
it,
which
is
in
coin
markets
and
in
our
indexer
side
code,
so
that
we
can
have
support
for
the
hands
and
selectors
like
we
have
with
unix
fs,
so
that
is
making
some
progress.
E
Another
thing
going
on
and
again
none
of
none
of
these
updates
really
are
for
me,
but
are
things
that
I
am
seeing
and
writing
code
reviews
for
things
like
that?
There
is
work
on
reframe.
This
is
tangential
to
ipld
as
well,
and
this
is
the
the
content
routing
evolution.
It
is
ipld
schema,
but
it's
thinking
about
how
do
we
make
these
protocols,
so
it's
making
use
of
idle
device
and
now
we're
thinking
about
how
we
make
that
cachable
when
you're
doing
gets,
and
so
there's
some
thought
about.
E
Well
what
what
is
a
query
versus
a
mutation
and-
and
how
do
we
think
about
that
in
these
protocols,
which
is
a
level
above
ipld,
but
is
one
of
these?
How
do
we
extend
the
ipld
data
model
into
full
protocol
level
things,
but
there's
a
good
set
of
conversations
going
on
in
ipfs
specs
about
what
that
looks
like
and
a
dean
has
been
also
driving
a
bunch
of
that
and
then
finally,
there
is
some
ongoing
work
again
on
cars,
making
the
indexes
be
able
to
stream
better.
E
So
you
can
expect
that
to
land
in
the
in
the
upcoming
weeks,
which
is
exciting.
I
think
that's
the
the
stuff
in
my
head
around
ipld
at
the
moment.
C
Yeah, I pushed a few fixes, got it all working, and also published benchmarking results for the tests I've written. I can see at least an order of magnitude improvement in memory and CPU — there might be a little bit of a bias in the tests, so I'm going to take a look at them again and maybe add more.
C
I also wanted to kind of discuss what the next steps would be, and what other things are missing in the implementation that would allow it to replace — or, like, merge into or integrate with — the existing patch implementation. So, Rod, Mo, you've kind of been interacting with me there; not necessarily right now, but whenever we get a chance we can chat about where to go with this.
A
I
think
maybe
it's
worth
because
this
question
is
covered
a
couple
of
times
now
to
me
at
least
what
what
the
difference
is
between
your,
what
you've
called
amend
and
what
is
current.
What
we've
currently
got
is
patch
and
I've
had
a
couple
of
different
answers
for
that.
But
I'd
like
to
hear
like
to
hear
from
you
the
sort
of
the
motivation
and
and
how
you've
gone
about
doing
this,
that's
different
to
what
is
there
impact
at
the
moment.
C
Yeah
so
the
name
just
it
kind
of
happened
to
be
a
man
from
my
discussions
with
eric,
wasn't
trying
to
name
it
differently,
but
now
that
the
one
in
maine
is
called
patch,
I'm
kind
of
using
it
as
a
different
creator
to
like
at
least
talk
about
the
two
implementations
separately,
but
but
anyway,
the
the
main
difference
in
the
amend
implementation
is
that
it
does
not,
when
applying
an
update,
do
a
bunch
of
copies
so
the
way
batch
does
it
it.
C
It
takes
the
change,
copies
everything
else
and
creates
a
new
node
with
the
change
applied,
and
if
you
apply
a
lot
of
updates
to
a
node,
there
are
a
lot
of
copies.
Every
iteration
kind
of
does.
The
same
thing
takes
change,
adds
everything
else
around
it
and
then
creates
a
new
node.
The
way
I've
done
it
with
the
amend
implementation,
it
it
stores
instructions
around
what
to
do
in
instead
of
actually
creating
the
node
and
then
what
it
also
does
is
at
the
time
of
encode.
C
It
presents
kind
of
like,
like
a
lens
since
we've
been
using
that
term
in
ipld
for
a
bit
a
lens
to
the
encoder,
where
it
only
sees
the
cumulative
result
of
all
the
instructions
applied
together.
So
you
could
do
you
could
change
everything
in
the
node.
C
All
that
does
is
add
more
instructions
and
at
the
time
of
end
code,
when
it
traverses
the
instructions,
it
will
just
see
the
result
of
all
the
updates
applied.
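A minimal sketch of that idea — an instruction log consulted lazily, rather than a copied node per update (names invented for illustration, nothing from the actual amend PR):

```go
package main

import "fmt"

// amendNode wraps a base map and records updates as instructions instead of
// copying the base on every change.
type amendNode struct {
	base map[string]string
	ops  []func(map[string]string) // deferred instructions
}

// Set appends an instruction; the base is never copied or mutated here.
func (a *amendNode) Set(key, val string) {
	a.ops = append(a.ops, func(m map[string]string) { m[key] = val })
}

// View materializes the cumulative result of all instructions exactly once,
// the way the encoder would see the node through the "lens".
func (a *amendNode) View() map[string]string {
	out := make(map[string]string, len(a.base))
	for k, v := range a.base {
		out[k] = v
	}
	for _, op := range a.ops {
		op(out)
	}
	return out
}

func main() {
	n := &amendNode{base: map[string]string{"name": "koala", "kind": "marsupial"}}
	n.Set("name", "wombat")
	n.Set("home", "burrow")
	fmt.Println(n.View()) // one copy total, however many updates were queued
}
```

The contrast with the eager approach is the cost model: N updates cost N closures plus one copy at encode time, instead of N full copies.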
A
So it's essentially like a copy-on-write. The way I'm seeing patch is that the current version is an initial placeholder of the feature, and without people who are interested in pushing it forward, it's going to remain a placeholder. And so, you know, your work has some really interesting efficiencies to it.
A
I
don't
know
whether
there's
a
conflict
in
I
mean
there's
a
potential
api
conflict
there,
which
is
probably
resolvable,
but
maybe
maybe
it's
also
just
an
alternative
mode
that
you
can
opt
into
for
patch,
but
I
think
that
the
step
forward
really
is
in
you
coming
up
with
a
proposal
for
how
to
to
to
either
amend
or
patch
the
current
implementation
and
and
then
we're
having
a
conversation
about
the
api.
I
think
you
know
currently
with
with
eric
still
out
until
next
month.
A
You
know
you're
the
main
person
that's
interested
in
the
feature,
so
you
get
a
lot
of
say
in
how
it
moves
forward.
So
I
think
that's
that's
really
up
to
you.
I
like
I
like
I
like
them.
I,
like
the
look
of
your
work.
A
I
am
interested
in
how
it
does
fit
into
the
the
layering
approach,
because
you
it's
like
you've
gone
to
the
bottom
end
of
it,
thinking
about
how
the
encode
works,
but
whereas
the
eric
does
take
its
perspective
of
stepping
up
in
the
the
very
very
much
the
lens
approach
of
how
we
look
down
at
this
stuff
and
patterns
and
stuff
implement.
So
that
would
be
something
to
pay
attention
to
in
terms
of
the
api
but
yeah.
A
I
I
think
it's
it's
completely
on
you
to
you
know,
propose
a
pr
and
then
let's
look
at
the
api
differences
and
what,
if
anything,
we
need
to
do
about
that
and
we
can
push
it
forward.
So
don't
keep
it
in
your
own
branch
for
too
long
and
yeah,
let's
get
into
your
piano.
Other
people,
I
think,
will
be
interested
because
I
think
patch
has
a
lot
of
potential.
B
If
I
may
add
on
to
that,
I
have
some
opinions
on
stuff
that
could
be
useful,
which
I
think
I
posted
on
github
but,
namely,
I
kind
of
see,
amend
and
patch
as
similar
things,
but
like
at
opposite
ends
of
the
interface,
so
like
patch
is
useful
because
it's
really
high
level
and
it's
just
like
here,
we
have
a
patch
set,
which
is
represented
as
ipld
data
that
can
be
fetched
and
encoded
and
put
everywhere,
whereas
amend
is
like
very
low
level
and
just
like
we
have
these
raw
nodes
and
we
can
do
these
operations.
B
I think this will be useful if we have very large patch sets, because right now, if we have, you know, eager evaluation of all of the patches, it's going to add up a lot. I guess it remains to be seen how large patch sets will be, but especially if, right now, we're talking about using it in the gateway, it could be important to have good performance. I don't know — that's just my viewpoint; obviously it'll be good to talk more.
A
And
and
there's
that
point
about
that,
like
you're
right,
the
top
down
bottom
up,
the
the
top
down
one
with
like
the
way
that
eric's
thought
about
this
is
is
very
much
if,
if
we
could
show
up
to
and
say
here's
a
hampt
and
I
want
to
apply
a
patch
to
a
hand,
I
should
be
able
to
just
use
that
as
I've
got
this
lens
here
for
the
patch
set
and
the
patch
sets
just
updating
key
values
of
the
hand,
whereas
underneath
it's
doing
all
the
crazy
hand
stuff,
and
so
we
ideally
would
want
patch
to
work
from
top
down
in
that
way.
A
So
that's
that's
something
we
need
to
keep
in
mind,
but
amen
has
the
potential
to
make
it
more
efficient,
underneath
so
that
that's
that
we
should
be
looking
at
how
to
resolve
those
two
things.
C
Yeah
that
makes
sense
yeah.
Thanks
for
the
inputs,
I
was
going
to
say
in
my
head.
I
was
considering
batch.
I
was
considering
the
bottom
half
of
batch
only,
but
batch
is
more
than
that.
It
is
the
the
spec
and
then
the
operations
handling
the
operations
applying
them
and
the
way
I
have
code
in
the
pr
right
now
it
kind
of
takes
it.
C
It
reuses
the
operations
and
the
schema
and
all
that
part
of
batch
and
then
just
does
replaces
the
the
traversal
package
updates
that
eric
made
like
it's
a
it
is
a
parallel
implementation
that
goes
as
far
as
reusing
batch
operations
and
then
underneath
so
mo
the
way
you've
described.
It
is
kind
of
the
way
it
currently
works
and
could
continue
to
work
like
replacing
the
the
bottom
piece
of
patch
with,
and
this
disadvantage.
D
Something
I
something
I
noticed,
I
guess,
while
I
was
doing
some
of
like
the
of
assembly
stuff,
which
I
don't
know,
I
guess
I
was
reminded
a
little
bit
when
talking
about
the
different
ways
that
we're
trying
to
do
like
the
the
modify
operations
is
even
on
the
read
side.
So,
on
the
read
side,
the
note
interface
covers
like
in
go
covers
the
we'll
call
the
the
big
items
right
like
I
wanna.
D
Alright,
the
top
level
things
I
wanna,
I
wanna,
you
know
bytes,
I
want
string.
I
want
integer
right
and
then
we
were
like
well
for
the
really
big
things
like
lists
and
maps,
we'll
give
you
like
iterators
or
some
things
like
help.
You
like
walk
through
them
and
then,
as
we
figure
like,
oh
well,
but
strings
can
also
be
big
and
like
bytes,
can
also
be
big.
D
If
you
try
and
do
them
that
way,
right
and
I
think
trying
to
figure
out
like
some
of
the
story
around
whether
it's
parallelism
or
just
we'll
call
them
bulk
operations
on
the
read
side
is
the
other
part
of
this,
because
again,
there's
you've
got
the.
I
want
to
do
the
things
at
the
very
high
level,
not
think
too
hard
about
them,
and
there's
like
I
want
to
get
everything
like
as
low
down
to
the
individual
pieces
as
possible
right.
We
have
that
both
on
the
right
side
and
on
the
read
side.
D
It depends how you slice it and deal with it. But yeah, I mean, that's one of those things to deal with at this IPFS Thing — there are some people who are very excited to talk about WebAssembly things, and I'm sure that kind of stuff will come up. If you have opinions, and you want to talk about them, and you've built some stuff, you should be there and express them.
A
I have another quick topic. If anyone wants to talk about patch a bit more, we can do that, but otherwise I can move on. Okay, so I've got a link in the docs: go-ipld-prime issue number 443. Hannah and I have been laying out a set of concerns around performance and memory, and I think this is going to become important very quickly, because of the work we've recently been doing in the Filecoin data transfer stack, which heavily uses bindnode.
A
So
we're
really
we've
been
using
that
as
well.
I've
been
using
it
as
as
as
one
as
a
way
to
push
forward,
bind
node
and
go
up
early
prime
we've
been
sort
of
replacing
seaboard
gen
with
go
openly,
prime.
In
a
lot
of
places,
it's
really
simplified
code
and
it's
sort
of
pushed
a
lot
of
the
concerns
about
encodings
into
go
up
any
prime,
rather
than
pulling
them
into
the
code
base
itself,
which
is
really
nice.
A
But
it
leaves
us
with
some
problems
with,
I
think,
are
going
to
bite
us
sooner
than
later
and
they're
outlined
in
this
discussion
thread
and
the
the
basic
frame
of
this
is
there's
a
there's,
a
three
layer
decoding
thing
that
happens
in
this
stack
where
and
it's
to
do
with
this
sort
of
a
sort
of
a
plug-in
type
of
approach
that
this
data
transfer
stack
has.
So
what
happens
is
graph?
A
is a generic protocol that, you know, can talk about graphs, so it's got its own messaging format, and we use bindnode for that, and dag-cbor, which is great. But graphsync
A
has these extensions in the messages, and there's this little spot where any consumer or user of graphsync can register an extension and say: hey, when you're talking back and forth between server and client, you're going to be getting these extensions, and I know how to deal with them, but you don't. So graphsync doesn't know how to deal with these extensions — in schema language they are Any nodes — and it's got these Anys, and then the user of graphsync — in the Filecoin case that's go-
A
Data
transfer
is
the
library,
it
pulls
its
extensions
out
and
then
it
says
I'll
grab
this
any,
and
I
will
turn
them
into
my
message.
Format
so
grab
data
transfer
has
its
own
message,
format
that
it
extracts
from
these
any
nodes.
Then
the
data
transfer
does
something
very
similar
where
because
it
has
to
deal
sometimes
in
payments
and
other
authorization
things.
A
It
has
these
vouchers
that
it
puts
inside
the
its
own
messages
and
it
doesn't
know,
what's
in
the
vouchers,
so
the
user
of
data
transfer
does,
and
so
the
vouchers
are
any
nodes
as
well,
and
then
the
user
of
data
transfer
will
say,
hey.
I
I
you're
going
to
be
passing
back
and
forth
vouchers
in
your
messages,
and
I
know
how
to
deal
with
them.
So
it
takes
those
eddies
and
turns
them
into
concrete
types
and
with
bind
node.
We
can
turn
them
into
go
types
really
really
nicely.
A
So we just sort of pass through bindnode, and we get these Go types — it's really nice to deal with. But what it means is that in go-ipld-prime, from the beginning, when we're decoding the CBOR and we hit the Any, we drop back into basicnode, instead of using bindnode to write the data directly into the Go types. Normally we skip basicnode completely — we're not creating those nodes,
A
We're
creating
go
type
values
which
is
really
nice,
that's
sufficient,
but
we
hit
the
nes
and
we
have
to
switch
to
basic
node.
And
so
we
create
this
basic
node
structure
and
sometimes
that
can
be
large.
And
then
we
go
up
a
level
and
we
turn
that
basic
node
structure
into
a
go
type.
And
then
we
sort
of
hang
on
the
the
remainder
bits
off
that
and
then
we
do
it
again.
A
And
so
we
end
up
with
the
go
types
all
the
way
up
for
all
of
the
data,
but
but
because
the
while
we're
still
dealing
with
the
message,
the
that
basic
node
structure
is
still
alive
until
the
message
is
done
with
so
already
we're.
Seeing
in
some
heap
dumps
that
the
basic
node
is
taking
up
a
lot
of
memory-
and
it's
also
inefficient
because
one
of
the
points
of
bind
node
is
that
you
could
write
directly
to
the
go
type.
A
But
now
we
have
these
basic
node
structures
just
hanging
around
that
we
we
use
to
re-encode,
but
it's
it's
inefficient.
So
there's
some
back
and
forth
in
that.
Well,
there's
a
minor
discussion
in
that
thread
about
possible
ways
of
dealing
with
this.
A
The
ideal
is
that
they
would,
because
we
know
we
know
at
runtime
when
we
start
what
the
full
schema
is
and
what
the
translation
is,
but
we
don't
have
any
way
of
putting
it
all
together,
because
it's
it's
initialized
in
these
three
different
places,
so
we've
got
to
figure
out
a
way
to
put
it
all
together
at
the
beginning
and
have
the
base
layer
be
able
to
say
okay,
I
I've
got
to
the
the
any,
but
I
know
who
to
defer
to
to
deal
with
the
any,
and
I
don't
have
to
go
to
basic
note
so
yeah.
A
I
think
I
think
this
bit
of
work
is
going
to
come
up
very
soon.
It's
probably
going
to
be
on
my
plate
and
something
to
deal
with
with
my
sort
of
half-time
ipld
stuff,
but
if
anyone's
interested
in
this
problem,
because
I
I
think
it's
actually
a
really
interesting
problem
and
there's
a
lot
of,
I
think
one
of
the
biggest
challenges
here
is
the
api
making
the
api
nice
because
we
can
hack
it
together
internally.
A
But
the
api
you
know
without
some
care
could
be
very
ugly
and
messy.
So
I
think
if
anyone
wants
to
engage,
then
api
discussions
would
be
the
most
helpful
there,
but
yeah.
B
D
D
A
Yeah,
but
it's
it's
it's.
How
do
you
wire
it
up
because
yeah
we
could
have
a
union
within
any,
but
we
still
need
to
put
something
in
there.
There
isn't
there.
One
complication
is
that
I
think
I
think
that
graphsync
will
still
allow
unknown
extensions.
It's
not
going
to
drop
your
messages
if
it
doesn't
understand
your
extensions,
it
just
won't
do
anything
with
them,
so
there
is
a
legitimate
any
there.
A
That
may
end
up
being
a
basic
note,
but
what
we
really
want
to
be
able
to
do
is
because
we
set
up
with
buy
note
at
the
beginning,
saying
here's
here's
the
schema
for
the
messages
and
here's
the
types
and
put
it
all
together
and
then
it's
only
later
on
the
other
layers
come
in
and
say:
hey
you've
got
this.
Can
you
give
me
the
node
and
I'm
going
to
decode
it
so
so
graphsync
does
uses
dag
cbore,
whereas
the
other
layers
they
will
use?
A
It's
like,
I
think
they
use
datamodel.copy
or
ibld.copy
or
assign
node.
Now
they
use
a
signed
node,
so
instead
of
using
the
codec
they're
using
a
sine
node
to
do
it
and
we
really
want
to
get
rid
of
the
assigned
node,
so
it
means
pushing
it
all
down
and
I
don't
think
in
any
union
would
be
useful,
although
I
still,
I
still
do
think
that
in
any
union
there's
an
interesting
case
that
we
should
push
forward
to
figure
out.
But
I
don't
think
it's
for
this.
A
It's
something
it's
more
something
like
one
of
the
one
of
the
ways
I've
proposed
is
two
two
different
two
separate
things
that
need
to
be
solved
here,
but
one
of
them
is,
I
think
one
of
the
easier
parts
to
solve
would
be
to
have
a
programmatic
typed
prototype
for
a
union
where,
instead
of
doing
it
up,
you
know
in
schema
and
ahead
of
time.
You
have
this
union
type
that
you
can
add
things
into,
and
the
key
union
would
be
the
obvious
way
to
solve.
A
This
would
be
you
create
a
type
node
with
some
api,
and
then
you
say
now
add
a
key
and
my
prototype
for
that
key
and
so
you've
got
this
bundle.
That
has
these
collections
of
arbitrary
prototype
prototypes,
whatever
they
are,
they
could
be
backed
by
binode
or
cogen
or
whatever,
but
api
wise
you're
registering
a
key
for
that
union
and
by
type
prototype.
So
when,
when
the
the
sort
of
the
node
builder
hits
that,
then
it
knows
how
to
fork
that
off
into
the
different
type
prototypes.
A
I
think
that
would
solve
part
of
the
problem,
but
then
there's
other
issues
to
solve
there
as
well.
A
Yeah, it would look like a node builder — like a generic node builder, yeah — but it's backed by this sort of thing.
A
Yeah, and I think the easiest version of this is a keyed union to start with, but there are other places where this could work, where you've got this programmatic building of a prototype or of a builder. So you can do other kinds of unions, but maybe you could do other things as well — I don't have the hypotheticals in my head — but maybe there's a place in go-ipld-prime where we put these programmatic builders.
A
Thanks, everyone, for tuning in, and if you want to show up next time, you're welcome to hop on the Zoom — or tune in again on YouTube. So.