From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-04-26
Description
An every-two-weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: Welcome, everyone, to this week's IPLD meeting. As every two weeks, we go over the things we've worked on and announce any interesting things we have to share in regards to IPLD.
A: In case you want to watch it: this week I don't have any updates in regards to IPLD, so next on my list is Eric.
B: Oh, okay, I get to deliver a melange of updates: stuff I'm working on, and a couple of things that other people have been working on. There's still some work that I'm doing, slowly but surely, on updating our website and our documentation, to try to increase the coherency of that.
B: We've got some prototype content. I've gotten to the point of working on the publication pipeline, and I've tried out a new service named Fleek, which people might have heard of before, or might not; I only just heard of it. It's kind of cool: it has a very smooth process for getting something on the web. It gives you a nice little subdomain and stores content in IPFS.
B: So I'm happy about that. Just wanted to share my joy.
B: I've spent a bunch of other time this week communicating with folks inside other research teams within PL, and folks inside other project teams like Vulcanize, about how IPLD should be used, and continuously doing backfill on the docs work that we actually need to do. So that's fun. As for some people who are missing today: mvdan is not with us, but I'd like to mention that he started some work in Go code recently, on something that is conspicuously missing from our golang IPLD implementations.
B: There's friction for many Go developers, where you can't just write your own Go structs and bind them to the IPLD data model using reflection; that's something that's missing, and mvdan started working on a prototype of it. I don't know if he wants me to talk about this publicly, but he did. It's early, it's not published yet, but hopefully we get to do some more work on this in the near future, and it could provide a much lower-friction way to work with IPLD things in Go. So I'm excited about that, and that's about it.
A: Thanks. Next is Rod.
C: Yeah, so we've been working on some big updates for the JavaScript stack for a number of months now, I think it's been, and I pushed them out yesterday, finally. The main impetus behind this was getting TypeScript definitions across all our main new next-gen libraries, but we're also trying to squash some bundling problems with our ESM-published toolchain.
C: Now, that particular problem hasn't necessarily been fully resolved, because we don't have all the tests to cover all the different bundling environments, but I'm fairly confident with these, because we had to do some similar work for the CBOR library that is actually being used by js-ipfs, so this inherits all of the fixes from that as well. So I'm actually confident, even though verification work is still to be done. There are also some changes in the types that are being used here.
C: We did some back and forth on the types for the codecs: what they look like, what signatures they export and then consistently implement. I think we ended up in a good place. The original forms were trying to do too much, trying to be too ambitious for what we actually need right now; then we ended up in an even more ambitious place, and now we've scaled back to a much more realistic place.
C: These things just really need to be very simple for now, and if we want to spend some time on making them more efficient in certain ways, then we can do that, but let's not make them be the source of complexity; we can add complexity further up the chain. So that's done and published, and we have new versions of all these things, and the CAR library as well, to match. So I'm pretty happy about that.
C: That's about it! Oh, yeah: there was also a little bit of interesting work on top of the meeting last week with Textile. We're looking at this notion of collections, and how to define something that works across many use cases, and Textile threads are very interesting from that perspective, because they've done a lot of work on that. But nothing to report on that concretely.
A: Thanks. Does anyone else have any updates?
E: My only thing is ADL selectors, and how we signal them, and things like that, but I do not have coherent thoughts about it right now, so it's probably not a good time to bring it up yet.
B: I guess I just want to make some comments on that, real quick: selectors over ADLs are really tricky.
B: There's an issue thread going on right now where people have expressed a wish to have selectors that compile down from the ADL to the data model, and I'm going to go on the record and say: that's a very nice idea, and it's impossible, or usually impossible, in any practical scenario that anyone cares about. I'm sorry, but.
B
But
if
it's
data
dependent
at
all,
if
there's
any
branching
factor
on
the
data,
then
you
can't
do
that
and
almost
every
adl
anyone
has
ever
proposed.
Is
data
dependent?
Because
if
it's
not
it's
going
to
be
shockingly
inefficient,
like
imagine
a
b-plus
tree
where,
instead
of
branching,
when
you
filled
up
the
bucket,
the
answer
was
always
branch
right.
That
would
be
a
terribly
inefficient,
b
plus
tree.
You
wouldn't
ever
use
that
algorithm
because
it's
useless
at
its
job,
so
sadness.
A: I can only add that I agree, because I also thought it would be possible, but it's not; I spent a lot of time thinking about it. The only place where I think it could be possible (but it's not an ADL) is for schemas, so I still would want to convert schema selectors to data-model selectors, something like that.
D: I think you could expand selectors to basically add conditionals, which I think they should have, but that's the biggest tricky part for me, at least: most of our selectors will have things like "hash this thing with murmur3" or "decode this protobuf", or something like that, where, like, whatever, okay, fine, we introduce [transcription unclear], and they're done with.
F: So I have some questions on this. It seems like you either need... I think the options I listed in that issue were: either you need some protocol negotiation on the ADLs, right, in order to do the transfer, or you need to be able to do things a little bit at a time, right? Like, if we were using Bitswap and not Graphsync, this problem goes away. And so the question is which of these is less bad, because I don't know if there's an alternative option. Before we go further, do people have thoughts on that?
D: Well, so I would like a way of knowing which selectors you support. So basically I could take a selector and say: you don't support this part of the selector; let me run this locally, to sort of fetch that part of the data, and then give you the sub-selectors you do support. This is complicated, but it would work in Graphsync.
F: So, I mean, well, this was my comment from like two years ago, when we said ADLs were going to solve all these problems. It was like: okay, we can hide all the complexity in the ADLs as long as it doesn't leak, as long as nobody else needs to care that I've decided on an ADL, right? Because for things that we want global agreement on, we want to use Eric's much-maligned, you know, global table of shared values.
D: One solution here is just, like: the selector describes the ADLs it operates over in some way. So, like, it says, okay: I expect IPFS-like things, I expect a selector that understands IPFS, or I expect something that understands HAMTs, or whatever.
B: Yeah, so we've had proposals about that floating around for a while, and we just haven't picked one and shipped it. There's a variety of different directions one could go, which makes it interesting and fun. Some of the ideas that immediately come to mind are, like: yeah, you could write this parallel tree structure which says, when you're at this path, or you're in this sub-path, then apply ADL name (intense hand-waving), or WASM CID of ADL (even more intense hand-waving). Doesn't matter, right; assume that part of the problem is solved: apply this at this path. That would be simple and relatively quick to implement, and shouldn't surprise anybody.
B
But
then
you
would
go
oh
yeah,
okay,
I'm
gonna
need
to
I'm
gonna
need
to
do
that,
recursively
and
so
now,
you're,
probably
writing
a
selector
for
where
the
other
adls
go
and
that's
getting
sort
of
fun
or
you
could
try
to
use
the
schema
system,
which
already
has
some
kinds
of
recursion
in
it,
which
could
be
very
useful
for
signaling,
something,
that's
probably
also
recursing
in
the
same
points
that
the
schema
is
recursing,
but
then
you
construct
a
dependency
on
schema,
something
we
want
to
do
both
these
things,
yeah
and
so
the
state
of
play
that
we've
reached
in
the
go
code.
B
At
this
point
is
somebody
recently
made
a
pr
which
introduces
this
function
called
node
refire
in
the
go
code,
the
naming
quality
of
which
we
can
debate.
B
But
that
gives
us
a
call
back
in
one
particular
place
where
we're
now
able
to
solve
these
issues
and
because
it's
a
callback,
it
hasn't
standardized
a
dang
thing,
of
course,
but
you
could
imagine
implementing
either
of
those
other
two
systems
that
I
just
described,
which
are
much
more
declarative.
You
could
imagine
implementing
them
within
the
callback.
E: Yeah, so the reason I basically brought this up last week, and why I feel that we need to talk about it even though I don't have a good plan for how to actually get around this, is because we are not making baby steps. We are effectively baking this in; let me... that means the other issue. We are looking to bake this into the entire stack, very aggressively, in a way that is not versioned, that is not signaled.
E: You know, we have a pretty bad track record of removing things that we decide later on we will not support anymore. So this switch that I linked, together with this issue that I just linked, is basically putting us on a path where this UnixFS user-agent detection, so to speak, is there to stay for good, and that's a bad outcome from my perspective. Yes, and the same people want to do pathing over ADLs.
F: Well, that issue... I mean, my opinion on that PR is: it's a protocol-breaking change to merge it as is. If you add in, like, a magic extension bit that says "we have agreed on this NodeReifier that looks exactly like this, and we promise everyone's using it in exactly the same way, and we're giving it a name", then, like, maybe it works. It's now a new protocol, but at least it functions.
E: Well, a little more context: this morning, Textile engineers were in our shared chat space, literally demoing, like, "oh, we're doing this thing, and it's awesome, and thank you, Riva and Hannah, for putting this together". And, you know, the horse has not left just yet, but it's like, you know, at the...
E: Precisely. And more importantly, to something that Steven said earlier, about "just do one selector and then do another one", and so on and so forth: Filecoin explicitly does not support that. You basically have one root, one selector, and that's that; you don't have a choice to renegotiate anything.
D: That's not actually what I meant. What I meant is that you act... well.
D: Basically, when you're doing file transfers, you can still specify one selector up front, yeah, but then, if you don't understand stuff, basically you sort of go back and forth. Now, it does make it harder to verify the selectors execute correctly, but that's actually not necessary for the file transfer protocol in any way. So you can have a system where, basically, I ask personally with a selector...
E: Right, I should rephrase: I should not have said Filecoin, I should have said go-fil-markets, because that is effectively our protocol support today, and this one, you know, basically will not even try to compile the selector before it gets, you know, past the payment channels and stuff like that. So the cycle to, you know, try and iterate is too expensive.
D
No,
so
my
point
here
is
that,
like
you
have
to
get
the
data
somehow
and
the
data
the
the
process
of
getting
the
data
is
going
to
be
expensive,
regardless
of
what
you
do.
This
is
basically
an
iterative
process
there.
I'm
not
sure
why
you
have
to
compile
and
verify
this
lecture
ahead
of
time.
E: I might be wrong, you know; logically, go-fil-markets actually goes through those steps, so, yeah.
D: Sorry, I have no idea how it does this. My point is, there's no fundamental reason why I can't do this, why I wouldn't be able to do this.
D: Basically, we have three options: either never add more selectors (or do really poorly at it), add, like, a full VM, or support some kind of selector negotiation like this, where you can sort of downgrade to simpler approaches.
D: Actually, sorry, I think there are two sides: there's "I'm storing the data" and there's "I'm retrieving the data". It's a lot easier to do the negotiation on the storage side; you actually don't need to do it there, so Eric's point about not being able to compile a selector down to, like, a data-model selector is not actually correct, because you have the data.
E: Yeah; the actual use case here is somewhat contrivedly limited, because what Textile is actually trying to do is traverse a predefined UnixFS structure which is guaranteed to not have HAMTs in it.
D: Let the user specify the actual selector, but then you have some validation somewhere that checks to make sure selectors have a certain form. For example, you could write a selector that checks, like, "I am traversing a link that has this name"; if you can write a selector that can actually do that, that would give us this one, if that makes sense.
F
I
just
want
to
make
sure
like.
Is
this
violating
the?
What
I
don't
want
to
have
happen
is
it
should
not
be
possible
for
me
to
make
a
request
and
for
you
to
misunderstand
the
request,
yeah
and
a
way
for
you
to
misunderstand
the
request
is
for
you
to
have
a
different
way
of
gluing
your
adls,
together
than
I
do,
because
the
adls
are
less
strictly
specified
and
less
globally
agreed
upon
than
yeah.
So
the
the
thinking
here.
D
Specifically,
like
like
textile,
has
you
know
it
wants
using,
invest
all
that
kind
of
stuff,
but
they
want
to
limit
how
data
you
can
store
the
data.
That's
my
understanding,
like
you
want
to
store
specific
files.
Basically,
what
we
could
do
here
is
have
something
we're
like.
Okay,
it's
up
to
the
user,
to
sort
of
like
lower
a
complicated
selector
down
to
something
more
level,
but
their
motor
level
selector
has
to
have
certain
properties.
D
It
basically
has
to
conform
to
a
specific
format,
but
like
like
it
has
like
check
like
have
a
check
inside
of
it
says.
Is
this
unix,
if
that's
by
raiding
some
bytes
in
japanese
or
something
like
that?
I
don't
know
if
you
can
do
this
right
now,
but
this
is
not
too
difficult
to
do.
As
far
as
I
understand,
and
then
you
could
say
like
like,
is
it
true
like
when
you
traverse
like
link
five,
the
selector
then
have
to
assert
that
link.
Five
has
name
whatever
with
the
correct
name.
D
It's
it's
a
bit
tricky
because,
like
you
can't
validate
the
data
is
well
formed
easily
I
mean
you
could,
but
that's
really
expensive.
Maybe
you
could
do
that.
I
don't
know.
D: Like a sort of blessed selector, where users give them a CID and they use their blessed selector to store some subset of the data. My thinking here is that, in theory, if we needed this lowering to work, we could make it so that the user has to lower their selector: basically, they start with the selector, they lower it down to the data-model selector, but then the standard...
D: No, it doesn't actually have to be Turing-complete at all, because for completeness you need things like loops, and things that can't terminate. In this case you can literally... in fact, what you do is you trace: you take your higher-level selector, you trace exactly what your high-level selector would do, and you just record that trace. It's going to be a fixed-length operation, basically, so it's definitely not Turing-complete, and checking that trace should be, like...
D
I
don't
know
how
much
data
that
would
be.
So
you
have
to
like
make
sure
the
trace
is
small,
but
you
should
be
able
to
do
this.
F: Maybe I'm just, like, missing the... maybe I, like, lost the plot here. Can I get, like, an explain-like-I'm-five of what it is Textile is trying to do?
E: So Textile is trying to have a way for a user to apply some sort of path to the root of a larger DAG. It's not important what the pattern is; it can be UnixFS. Suffice it to be, like, some kind of, you know, some kind of HAMT... sorry, not HAMT, some kind of, like, IPLD map or whatever; it doesn't matter. The point is that it's not, like, low-level data-model structures; they want to have some more holistic, like, naming thing.
F: Understood; go ahead.
E
And
one
more
thing,
so
the
thinking
is
that
because
they
are
also
the
ones
who
bundle
this
data
and
the
data
is
like
from
disparate
sources,
they're
like
a
bunch
of
small
dags,
they
need
to
build
onto
a
big
bag.
They
use
some
sort
of
deterministic
way
of
putting
this
stuff
into
a
larger
deck,
be
it
with.
You
know,
with
one
prospect
that
came
up
or
or
some
other
method
it
doesn't
matter.
E
The
point
for
them
is
that
they
can
go
from
just
and
then
they
can
go
from
a
file
coin
root
and
a
and
us
and
the
cid
of
a
subset
of
this
root
and
they
can
construct
deterministically
automatically
the
path
between
the
root-
and
this
thing
that's
their
goal,
and
they
want
to
be
able
to
express
this
of
course,
and
currently
it
is
unique.
Surprise
it
doesn't
have
to
be
an
xfs
can
be
something
else.
E: For example... but, like, forget DAG-PB and UnixFS: if they wanted to implement this with actual, like, you know, IPLD maps and stuff like that, they would still need something to express this selector. Well, because right now data-model selectors in JSON are kind of, like, difficult without the schema. So it's the same fundamental problem: how do I actually express a selector over some type of, you know, multi-path structure?
D
We're
like
bundling
code
with
ipld,
so
you
can
just
like
say:
hey!
I'm
gonna
resolve
this
path.
D: As in, like, get rid of the current selector system; we say, whatever, we're just going to go full-stack VM: we give you a bit of gas, you get to run your operations.
E
Right
so,
basically,
instead
of
itself
like
trying
to
device
like
individual
pieces
of
the
selector
not
to
be
attackable,
you
just
sandbox
the
entire
thing
and
just
run
whatever.
D
Yeah,
but
but
the
the
problem
here
is
like
still
you
have
this
problem
like
you
have
like.
If
the
trend
was
always
on
the
gateway,
the
gate
was
never
going
to
be
able
to
like
just
take
a
nice
path
and
then
resolve
something
through
this.
This
data
structure,
because
it
doesn't
like
you,
won't
know
what
that
is.
The
only
way
to
do
that
is
to
literally
have
a
system
where,
like
you,
have
like
the
pathing
algorithm
attached
to
the
object,
somehow
probably
has
code,
but
that
gets
into
the
full,
like
iplt,
with.
E
Code
thing
all
right
so
going
way
back
to
what
started
the
discussion,
the
the
thing
that
I
linked
earlier,
where
the
proposal
is
like.
Oh
we're
just
going
to
attempt
to
parse
this
this
way
and
then,
if
this
fails,
we're
going
to
attempt
to
pass
a
different
way
effectively,
this
implicit
mixing
of
the
code
and
the
selector.
This
is
something
that
we
can
agree
as
a
group
is
not
a
way
to
go
forward.
E: That concerns me, because so far everybody else has said, like, "yeah, we could probably do that."
D: So Eric's approach works; or, like, you have the reify function: it does your pathing for you, and you can, like, pass this in as part of the context. So, ideally, you'd have a path that starts with /textile or whatever, and then that would tell the commands, or the name resolution system, to pick this reification function, or resolution function, or whatever you want to call it, and then use that. That works.
D
Putting
this
code
everywhere
and
just
like
saying
this
looks
like
this
look
at
this
duck
typing
everything.
You
can't
reason
about
that.
E: IPFS... all right, fair enough, yeah. So then the problem remains: how do we solve this more holistically? For example, for miners not being able to understand the UnixFS data when you ask them for it today; for which specific use cases?
B: Going back to Graphsync: I think another likely, especially intermediate, result that we're likely to see, even if we go full WASM at some point (and that's a very powerful approach): doing Bayesian analysis on how often we've implemented that system, given how often we've talked about it, I'm going to make a Bayesian bet that it's not being implemented next week.
B
So
meanwhile,
we
will
probably
also
just
need,
like
yeah
some
pieces
of
code
for
adls
will
get
shipped
as
blobs
in
some
underlying
systems,
so
like
yeah,
maybe
you
build
them
into
the
clients
for
some
of
these
things,
and
it
would
be
ideal
if
we
have
a
path
in
the
future
where
we
replace
that
with
wasm
things,
but
even
then
it
would
probably
be
pretty
neat
if
we
take
the
idea
from,
I
think
some
other
languages
recently
have
called
this
concept
rockets,
where
you
might
have
like
the
interpreted
form
of
some
function,
and
you
also
have
a
shim
that
does
that
function
in
like
super
optimized
for
your
platform
assembly,
you
call
that
fastball
or
rocket.
B
Maintaining
functional
equivalence
of
all
those
things
is
going
to
be
a
big
pain
in
the
butt
and
very
difficult
to
do.
But
that
just
seems
to
be
like
the
nature
of
the
beast
that
we've
set
ourselves
up
to
fight.
A: I have a question about the use case of getting a file from a miner. Basically, would it mean that, if someone said we need negotiation, would it mean that you, for example, send over this, like, UnixFS path, and then it has the path in the ADL and the data?
A
So
it
could
then
return
you,
the
selector,
like
the
data
model,
selector
and
return
it
to
you
and
then
say:
do
you
want
to
do
this
select
and
then
you
can
actually
send
the
data
models
like
there
or,
if
you
like
to
because
then
you
have
the
cost.
For
example,
like
you
can
actually
say
yeah,
this
is
too
expensive.
This
is
cheap
enough.
Whatever
is
this?
What.
D: ...support the selector type I want, so I need to do some negotiation. So the idea is that, like, I ask with the more complicated selector; they say: I have the root CID, I have no idea what that selector is. Okay, so I then use some other protocol, or maybe even, like, a simpler selector, to just ask for a block, and then expand part of the selector locally, and then give them a subset from there.
D
Do
this
one
keep
on
going
back
and
forth,
ideally
not
too
much
but
like,
for
example,
like
you
might
have
to
do
this
for
pathing
through
some
part
of
the
tree?
Maybe
I
can
do
some
like
maybe
like
for
hamps.
I
have
to
do
this,
but
not
for
like
normal
unix
of
s
that
kind
of
stuff.
I.
D
To
avoid
this
as
much
as
possible,
and
I
might
also
be
able
to
write
special
selectors
that
like
say,
okay,
maybe
I
need
to
go,
prevent
some
data
or
specularly,
basically
fetch
some
data,
I'm
not
sure
what
what
the
correct
data
is.
Yeah
that
was
expected
once
I
get
that
expected
data.
B: We are hoping that it can be implemented as a sibling to the selector system, probably, because it's likely that for any other things (like, you want to do pathing over these things in general, with lower-power systems than selectors, or alternative selector systems, or whatever the heck else) you still want to just say: here's where I expect the ADLs, yeah. But I think it has to compose.
D
Like
I
want
to
be
able
to
select
down
through
unix
of
s
and
look
in
the
metadata
then
like
or
like,
supposing
assuming
unix
qs2
with
metadata,
then
pull
out
some
json
object.
That's
linked
or
something
like
that.
F
Reba,
I
have
a
question
about
the
textile
thing
in
particular,
if
I
recall
correctly
the
so
that
for
their
particular
case,
just
so
like
we're
we're
dealing
with
like
one
problem
at
a
time
they
have
these
directories,
which
are
filled
with
other
directories
with
files
in
them,
but
the
names
of
the
directories
and
the
names
of
the
files
are
all
cids
right.
E: Yeah, we don't have indexing on the miners yet at all; we only understand roots. There is work to do to add indices, but it's just starting. Actually, Rod probably knows more about that.
C: Yeah, there's a lot that's enabled by indexing all the CIDs, so it's sort of a... it's a thing to get done, but it's being worked on. It's a hard job, hard task.
C: I'm not tuned in to the details of it; I'm not involved in Bedrock so much as just watching notes and stuff. But it's related to this thing of: we've got CAR files; let's index CAR files and know what we've got, and then use that indexing information to tell us, so we can retrieve arbitrary stuff. But yeah, there's a whole lot of stuff that would really come out of that work.
C
So
I
think,
regardless
it's
going
to
end
up
getting
prioritized
in
some
way,
because
you
know
it's
just
it's
not
sustainable
to
to
to
keep
that
outside,
like
even
things
like
like.
Even
the
the
cross
chain,
bridges
and
oracle's
work
stuff
like
we
need
to
be
able
to
say
to
smart.
Con.
Smart
contract
needs
to
be
able
to
assert
that
a
particular
piece
of
data
is
stored
in
file
coin,
and
we
don't
have
good
ways
to
do
that
today.
So
there's
just
so
much
stuff
that
gets
unblocked
by
being
able
to
do
arbitrary.
D: ...then preemptively fetch, like, some number of tiers of these trees and then, when they receive requests from users, they can fetch pieces as needed. But because it's, like, a tiered set, or a HAMT, they can filter out requests without fetching the entire thing, and just fetch the eventual pieces they need. It should significantly reduce the amount of advertising you need to do. It does require centralized aggregation nodes... or not centralized, but, like, I guess "super-nodes", if you want to call it that.
D
It's
definitely
not
it's.
Just
the
problem
is
based
on
how
ips
works
like
yes,
it
would
be
like
we
do
need
to
make
ifs
better
at
like
finding
roots,
but
even
then
like,
even
if
we
only
need
an
aspirational
wikipedia
problem
or
the
you
know,
archive
problem
where
there's
too
many
roots.
E
Yeah,
and,
and
and
in
the
case
of,
for
example,
in
storage,
what
I'm
doing
like
132
gig
sector
has
like.
F: Yeah, right: individually accessible things need to be individually accessible, right? If I have a single picture, and that picture is stored in five different deals on five different miners, like, I should be able to find any of them, and all of them, without tracking all of the roots. But, like, there's one CID that lives in, you know... lives in an Ethereum smart contract somewhere; like, there's only one.
F
It's
not
like
a
list
of
all
the
paths
to
all
the
file
coin:
miners
right,
we're
using
content
addressing
not
location
addressing,
and
so
these
guys,
if
they
don't
want
like
this,
is
part
of
providing
useful
storage
is
like
maybe
wanting
to
help
people
retrieve
it,
which
presumably
right
I
mean
they're
getting
they
can
get
paid
for.
You
know
having
that
data
be
retrieved
from
them,
so
they
should
want
to
do
that.
F: You know, ask... there's some really complicated ADL, and I don't want to download all the blocks to figure things out, but it's really fast for you to execute on your end, and I wish I could send that over the wire. It doesn't mean that that problem goes away, but it means, like, we can be more precise about who we're trying to help, and why, so that the simple protocols can remain simple, and complicated protocols can exist when you need things to become complicated.
E: Things fall apart without... I mostly brought this up because, like, there is a solution that was proposed that, you know, we kind of talked about already. So, yeah, we effectively don't have a quick solution for this, and that's fine, I guess.
F: Yeah, I would like that to be, like, harder: like, you should have to pass more flags in order to do anything that changes things that drastically. We have a similar problem in Kademlia, where it's like: if you change the bucket size, you are now in a different protocol; or, if you remove support for IPNS, this is, like, a different DHT. And so you need to, like, specif... there are more flags.
F: You need to pass more things, you need to do more, in order to break the system, so that people don't break it accidentally. I'm not sure if that lives in the Graphsync layer, or, like, around the LinkSystem layer, but wherever that needs to live, there should be more flags.
D: A plea for... we're going to go ahead and merge it, unless we can start making progress. Other than that, I do want to touch on sorting, just very briefly, just to keep it in people's minds. Eric?
D: Order, yeah.
F: A few options as to... because there's, like (I think Eric pointed this out): there's, like, sorting in general; there's, like, sorting in the data model; there's sorting in DAG-CBOR; and there's, like, sorting in the go-ipld-cbor, or go-ipld-prime CBOR, library, right? And these are all, like, separate problems, to some extent. If we just narrow... like, if we're okay for it to start off narrowing in on, like, the end one, which is, like, go-ipld-prime CBOR sorting as related to the DAG-CBOR spec...
F
That
might
be
like
a
good
place
to
start,
which
is
just
to
like
recap
for
everyone
who
wasn't
reading.
Although
I'm
sure
everyone
is
who
is
here,
but
I
guess
we're
on
zoom
we're
on
yeah
we're
on
zoom
and
being
recorded.
F
There
is
the
dag,
seabor
codec
number
zero
x71,
which
is
associated
with
the
dagsebor
spec,
which
asserts
that
map
elements
should
be
sorted
in
the
way.
That's
in
the
seabor
rfc
for
how
they
recommend
you
do
deterministic
encoding
of
c4.
D: It just works: if you're reading in something that has the wrong sorting order, okay, maybe you don't round-trip, but that's your problem. We could add options around this, where you can say, like, "please be strict", or we could even add options on the outputs: for, like, when you try to encode a CBOR node, you could ask it and say, hey, like, don't sort this; I want my node to be in this weird order that makes sense to me.
C
And
we
did
something
recently
with
dag
pb,
where
we
made
it
all
strict,
and
then
we
had
to
back
off
on
one
of
the
rules
because
we
found
there
was
real
world
historical
data
where
so
in
in
the
in
the
root
of
a
dag
pb
block,
you've
got
data
and
links
and
for
the
most
part
they're
in
a
particular
order.
C
But
then
there's
this
one
ancient
case
where
they're
out
of
order
from
very
early
on
there
was
some
data
produced
that
way
and
it's
still
in
test
cases,
so
we
backed
it
off
for
decode.
We
made
it
less
strict
on
decode
that
it'll
accept
that,
but
you
just
want
to
round
trip
them,
and
currently
we
have
no
mechanism
to
say
I
want
you
to
round
trip
this
badly,
but
it'll
still
read
them.
It'll
just
give
you
the
wrong
city,
if
you
re-encode
it,
so
we
can
do
the
same
thing
with
other
stuff,
usually.
C: And this is the state of DAG-CBOR today, essentially: we've got more codecs than DAG-PB, but most of them will, by default, sort that way; some of them won't; but none of them will assert that the sorting order is wrong when decoding; like, none of them will actually fail you for a bad sort when you're decoding.
D: Famously... which ones are wrong, and in what way are they wrong?
C: cbor-gen is newer. So go-ipld-cbor uses refmt, which has the sorting, but obviously early on it didn't, and it had different sorting. Then cbor-gen was implemented; it's got map support, which sorts alphanumerically, how a normal person would decide to sort CBOR; like, it's got the way that you should do it, which the new RFC actually says; but we already baked in the original, length-first sorting.
B: Just, like... if we're going to put all of our chips in on "decode is always going to be loose on order", and we still, for whatever reason, really want encode to be sorting (which is not actually my position), but if that's where we wanted to go, then I would advocate for, within that conditional tree, using the new sorting, because I think that would be significantly less crazy, and other people implementing this would get it wrong drastically less often. Do you see, more...?
D: Yeah, Eric, one thing I want to comment here: I agree that sorting is not super critical; like, at the end of the day, the data is the data. But it's still, like... well, one: comparison is much easier, trivial, yeah; without that, you have log n, or n log n, against it. But beyond that:
D
Hey
if
I
take
some
data-
and
I
see
your
losses
back
and
I
do
it
again
somewhere
else-
I
get
the
same
thing
out
usually
and
if
we
don't
give
people
that,
then
they
can
be
very
confused,
especially
like,
like
it
works
every
single
time
and
then
they
like
they
bring
them
thinking
some
of
the
language
sorts
differently
and
now
suddenly,
like
oh
crap,
now
like
I
have
to
like
mimic
disordering
in
this
language
instead
of
the
language
in
order
to
get
like
consensus
on
this
thing,
and
that's
not
very
good.
D: So there's a very easy way to deal with this, and actually people do this at all levels: for us, we write test vectors. So we just have a set of test vectors of, like, basically, just objects (some are in the right order, some are in the wrong order; or, basically, descriptions of objects) and then say: just decode these, or re-encode them, and they should all render properly.
E: I must point out, though: about a month ago, this subject came up, and it was kind of halfway-seriously decided that test vectors are not part of PMF. So keep that in mind.
C
And
I
did
develop,
I
did
develop
a
good
sweep
for
dag
pb
and
they
just
shared
now
across
javascript
and
go
and
I'm
very
happy
about
that.
It's
just
it's!
It's
not
in
a
form,
that's
very
portable,
but
it's
in
it.
You
can
see
it
in
all
the
tests
for
those
two
libraries
yeah.
It
would
be
nice
to.
D
Just
have
a
set
of
objects
somewhere
like
so
if
we
can
extract
a
set
of
objects
from
what
it
is
that
we
create
in
terms
of
pnf,
actually
we
can't
argue
and
say
like
it's,
an
existential
problem,
we're
like.
If
we
don't
do
this
now,
we
will
shoot
ourselves
in
the
foot
later,
so
we
can
make
documents
like
that.
Oh
yeah.
A
This
meeting
is
already
like
one
hour
and
ten
minutes,
so
I
would
suggest
that
we
close
the
meeting
all
right,
then
thanks
everyone
and
see
you
all
in
two
weeks.