From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-09-14
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
If we have discussions... yeah. I'll start with myself. In the past week I've worked again on the Rust multihash stuff and polished things a bit. Currently I'm working on tiny-multihash, which is a fork of rust-multihash, and I've been trying to get it to feature parity with rust-multihash. That's now the case.
A
So today I've worked on actually making a merge request, which went more smoothly than I thought. The only missing piece is that there is BLAKE3 support in rust-multihash which isn't yet in tiny-multihash, but that will come soon. So that's the missing piece, and then it's ready to be merged, and then we'll again have one upstream version which hopefully everyone will use. That's also what I'll concentrate on this week: getting this actually merged, and then getting rust-libp2p using it, ipld using it, and so on.
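The multihash format being brought to parity here is small enough to sketch. A minimal illustration in Python, assuming only the sha2-256 entry (0x12) from the multicodec table; real libraries like rust-multihash and tiny-multihash cover many more hash functions:

```python
import hashlib

def varint(n: int) -> bytes:
    """Encode an unsigned integer as an unsigned LEB128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

SHA2_256 = 0x12  # multicodec code for sha2-256

def multihash_sha2_256(data: bytes) -> bytes:
    # A multihash is: <varint hash code><varint digest length><digest bytes>
    digest = hashlib.sha256(data).digest()
    return varint(SHA2_256) + varint(len(digest)) + digest

mh = multihash_sha2_256(b"hello")
```

The two-byte prefix (code, then length) is what makes the digest self-describing, which is the property every implementation has to agree on.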
C
Cool. So last week I finished getting up to speed on the docs and specs. I also sent up a pull request with an exploration report on my experience getting up to speed, which is mostly which pieces I found confusing. And I also started a thread in the Go channel on Slack about moving the Go import paths from github.com/ipld to our own ipld.io domain, which has some advantages, such as being future-proof.
A
Thanks. Next on my list... that's right.
D
Not a whole lot to report for me this week. I haven't been well, so it's not been a high-productivity week, but I've mainly been working on this block storage API, which has meant rewriting the CAR library: basically pulling it into pieces and then putting it back together again with a much more modern-feeling API that doesn't have hangovers from some old abstraction whose history is in places that no one remembers anymore. So that's working okay!
D
I mean, I'm happy with the simplicity of the API, but this conversation with Gozala about the dependency injection of the block and multiformats stuff is interesting and does impact it as well. I don't really know where that's going, although he seems to have an end point in mind, and I'm trying to engage with that with him. We've also been having some other conversations about other JavaScript APIs that we'd like to see.
E
I have just yelled a bunch more docs into the docs repo. I finally put the codec conversations in there, did a little bit of high-level structuring, left some stubs for other things, and included a table of contents over a bunch of stuff, so there are plenty more to-dos in there. But I hope that skeleton is something that we can build upon usefully in the future.
E
They
are
doing
some
really
fascinating
stuff,
and
some
of
it
of
course
grows
to
be
application
specific
in
a
way
that
is
very
much
peer,
goss
logic
and
and
not
something
that
we
would
be
likely
to
do
in
ipld
alone,
because
it
just
accumulates
opinions,
especially
when
cryptography
gets
involved,
as
you
might
imagine,
but
they're
kind
of
interested
in
making
more
of
their
libraries
reusable
around
the
ipld
data
handling,
which
would
be
really
cool
because
they
are,
of
course
writing
in
java,
and
we,
I
don't
think
anybody
else
on
the
call
is
currently
maintaining
java
libraries.
E
So
it
would
be
awesome
if
somebody
is
taking
that
on
talked
a
little
bit
about
the
challenges
there.
It's
java's
got
a
strong
type
system
that
requires
design
discussions.
Yeah,
that's
enough
details
for
that.
I've
been
trying
to
wrap
my
head
more
around
the
advanced
data
layout
stuff,
the
adl
stuff.
So
both
talking
about
that
with
danielle
and
also
I
tried
to
build
a
demo
adl,
that's
the
simplest
possible
thing.
E
I tried to make a ROT13 string ADL. So there's a branch, not even a PR yet (I should turn it into a more conversational format), but there's a branch in the go-ipld-prime repo which pokes at this, and at how to make the interfaces work and what to call some of this stuff.
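The ROT13 string ADL idea is simple enough to sketch outside of Go. A hypothetical Python illustration of the shape of such an ADL (a substrate node holding the ciphered text, and a synthesized view exposing the decoded string); the actual go-ipld-prime branch will differ:

```python
import codecs

class Rot13StringADL:
    """Toy ADL: the substrate stores ROT13-ciphered text,
    and the synthesized node presents the decoded string."""

    def __init__(self, substrate: str):
        self.substrate = substrate  # what would actually sit in the block

    def as_string(self) -> str:
        # The logical view the ADL exposes to traversal.
        return codecs.decode(self.substrate, "rot13")

    @classmethod
    def from_logical(cls, s: str) -> "Rot13StringADL":
        # Build the substrate from the logical value.
        return cls(codecs.encode(s, "rot13"))

node = Rot13StringADL.from_logical("hello ipld")
```

The point of choosing ROT13 is exactly that the transformation is trivial, so all the remaining questions are about interfaces and naming, not about the data.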
E
I
did
a
little
bit
of
work,
propagating
library
changes
downstream,
so
go
graph,
sync
and
the
protobuf
based
repos
that
we
have
for
pld
prime
both
have
prs
after
them
now
to
get
them
up
to
the
latest.
Major
versions
of
the
interfaces
and
unix
of
sp2
got
another
round
of
drafting.
I
almost
forgot
until
I
saw
michael
typing
it
into
his
nose,
so
thank
you,
michael
yep,
trying
trying
to
get
that
discussion
moving
along
a
little
bit
further.
A
Thanks. Next is Mikeal.
F
Hey, yeah. Let me actually pull it up, because there was a bunch of stuff on here that I've kind of forgotten. Okay, yeah: I fixed some bugs in the Block API and in a bunch of our little JavaScript libraries.
F
I
went
in
and
updated
the
fbl
implementation
in
js
to
the
latest
spec
and
all
of
our
latest
steps,
and
then
I
implemented
eric's
new
unix,
thus
v2
draft,
which
is
actually
really
pleasant
and
easy
to
implement
like
the
highly
consistent
version
of
it,
because
we're
not
doing
any
of
the
inlining
and
it
actually
it
composes
really.
Well,
it
looks
really
well
one
of
the
nice
things
about
it
is
that
a
file
is
just
an
fbl.
F
So
it's
just
our
flexible
byte
layout,
which
is
really
nice
for
reuse.
It
means
that
you
can
kind
of
take
any
binary
object
that
you
have,
because
the
fbl
schema
allows
you
to
inline
the
bytes
as
well,
so
you
can
really
take
like
any
binary
representation
and
now
that's
just
a
file,
and
so,
if
you
were
to
imagine
us
implementing
that
in
the
gateway
you'd
be
able
to
give
it
any
cid
to
any
fbl
and
it
would
be
able
to
read
it
as
a
file
as
well.
F
And
then
you
know
on
top
of
the
file,
we
add
the
attributes
as
part
of
the
directory
structure
and
as
part
of
the
the
hamp
that
goes
into
that
and
when
we
keep
it
really
consistent
and
we
just
make
all
directories
enhanced
and
make
all
files
fbls,
then
it's
actually
really
easy
to
implement
and
really
consistently
hashed
everywhere.
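The "a file is just an FBL" idea can be sketched as a toy byte layout. This is a hypothetical illustration, not the actual FBL schema: a root is a list of (length, link) entries, with chunks stored under their sha2-256 digests, so any binary blob can be addressed and reassembled piecewise:

```python
import hashlib

def chunk(data: bytes, size: int = 4):
    # Split the logical bytes into fixed-size chunks (toy chunker).
    return [data[i:i + size] for i in range(0, len(data), size)]

def build_byte_list(data: bytes, store: dict, size: int = 4):
    """Build a toy byte-list root: a list of (length, hash) entries,
    with each chunk stored under its sha2-256 hex digest."""
    parts = []
    for c in chunk(data, size):
        h = hashlib.sha256(c).hexdigest()
        store[h] = c
        parts.append((len(c), h))
    return parts

def read_bytes(parts, store) -> bytes:
    # Reassemble the logical bytes by following each entry's link.
    return b"".join(store[h] for _, h in parts)

store = {}
root = build_byte_list(b"any binary object is a file", store)
```

The reuse property discussed above falls out of this shape: any consumer holding the root can fetch and verify each section independently.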
F
So it's pretty nice. There's a bunch of places that we'll probably end up bike-shedding on, but I think it's a nice direction for the spec to go in, and I was really happy with the little implementation that I did. Then, over the weekend, I started poking around at a podcast client. I wanted to write a little podcast client that would basically live in GitHub, and so it's using DagDB and the new GitHub Git LFS stuff in DagDB.
F
So basically, every hour it'll go and look at all your podcasts, pull all that metadata into the database, and then generate HTML for the entire application. Then it's just a static website that you pull down, and in the static website you can, in a service worker, pull up all of the data and then actually map on top of that all of the data that's in your player, and all that kind of stuff.
F
I ended up adding a bunch of features to DagDB; just using DagDB right now surfaces a lot of things that could be nicer, so I've just been fixing those as we go. And the last thing I want to talk about is probably something that we need to discuss as a group, maybe towards the end.
F
I've been talking with a few folks about what the structure of the team, and the work that we do, could be after the Filecoin launch. Something that's been coming up is that the project framing is not great for what our team does. We work a lot on just data structures: we work on IPLD and multiformats and things on top of IPLD, and a huge part of what we're doing is building these data structures and then dogfooding the data structures, to figure out what's going on.
F
With other teams that need data structures... so yeah, a lot of other people are in a similar boat, where the project framing is not working for them. A couple of things where I think it's pushed us in the wrong direction, or a little bit too far in one direction: we've overly focused on implementations and not paid as much attention to the protocols. We're Protocol Labs, we're not Project Labs, and we have all these specs, but we don't have great tools around compliance.
F
We should take it as a sign of success when people implement another multiformats thing or another IPLD thing, and just kind of integrate that into our view of the project better. And if they're having problems... like, we've seen people having problems encoding CBOR, for instance, because DAG-CBOR is so restrictive.
F
We
should
be
prioritizing
building
better
tools
and
better
specs
for
them
to
deal
with
that
those
compliance
issues
rather
than
just
trying
to
funnel
everybody
into
a
couple
implementations
that
we've
written
so
yeah.
I've
just
been
kind
of
unwinding
that
a
bit
and
thinking
about
that,
and
it's
like
a
framing
for
the
team
in
the
project
that
I
think
that
we
could
probably
do
with
some
discussion
on
ongoing
for
the
next.
Like
few
weeks
at
least.
A
Thanks. I think that's it for the updates. Johnny, do you want to say something?
H
Just a detail on what you just said, I think. So I'm having a major challenge with the DID specification, the decentralized identifier spec. It mostly has to do with CBOR: they're threatening to drop CBOR support from the specification, because I can't represent CBOR COSE keys properly. The challenge is that it is a mixed media type, and so I can't express it completely in CBOR, because we don't have the full ints-as-keys representation in, like, the DAG-CBOR representation.
H
I
think-
and
we
talked
before
with
like
three
bucks
and
whatnot
about
this-
the
use
of
a
ipld
link
and
actually
basically
using
a
cid
or
that
cid
might
be
like
a
base
32
with
a
ed
2,
5,
5,
1,
9,
public
key
multi
codec
and
then
just
the
bytes
that
actually
goes
after
it,
and
so
the
challenge
is
that
there's
more
metadata
that
actually
has
to
wrap
that,
including,
like
an
id
field,
a
type
field,
and
so
it
also
gets
into
like
a
bike
shed
naming
problem
which
is
like.
H
What do you call this attribute, where the link is a DAG-COSE public key? And also the challenge is that in COSE and CBOR, or even in COSE (CBOR Object Signing and Encryption), there's no tag for a public key; it's actually application-specific, so you have to define it in your application. So there's actually no key node tag that says: hey, this is CBOR, and this section of it is the x and the curve, etc.
H
So it mostly has to do with mixed media types. I'm trying to advocate:
H
Let's just keep it in JSON serialized to CBOR, in which case it's just key-value pairs and the keys can be strings, and basically you could do JWKs, JSON Web Keys, as JSON but serialized to CBOR; let's just start off with that. And I'm getting major pushback as far as the DAG-CBOR stuff I put in, which is basically IPLD. I also wrote a bunch of stuff for just generic CBOR, explaining how things could be expanded using tags, but they're like: oh, well...
H
This
is
like
no
one's
really
using
this
and
you
need
at
least
three
implementations
and
like
hey,
listen,
I've
got
ipid,
which
I
created.
I
wrote
the
specification
myself,
those
guys
in
portugal.
Remember
those
guys
so
and
then
there's
another
guy
who's
in,
like
germany
or
austria,
who
actually
created
like
a
vr
app
that
actually
is
using
ipid.
H
So, talking about bike-shedding and the standards process... it's just so frustrating. So how, okay, how can you guys give me some help?
F
So I want to back up a little bit and make sure that I understand the problem. If I recall, where we landed on DAG-COSE was that when you use the DAG-COSE codec, it's just a regular COSE with a binary field that you're signing, and the binary field is a binary representation of a CID. It's assumed to be a CID.
F
Exactly
and
then,
if
you
want
to
do
anything
else
with
cose,
you
just
make
it
use
the
kosei
codec
instead
of
the
dag,
coseco
right
and
so
but
they're
saying
that
they
want
more.
So
they
need
to
be
able
to
sign
something
more
than
that.
H
No,
so
it's
about
actually
this
whole
thing
about
cryptographic
agility
is
that
you
want
to
represent
your
keys,
so
the
did
specification
is
all
just
basically
publishing
your
public
keys
so,
but
there's
this
whole,
like
library,
that
of
naming
that
of
like
describing
authentic
verification
method,
jwk
2
2020,
that's
basically
like
we
created
this
thing,
we're
calling
it
special.
H
We named it in 2020, it has a very specific purpose, which is DID authentication, and it's represented as a JWK. And they want me to do a one-to-one mapping of this exact same thing, all in CBOR, to represent the key. So it's... yes, eventually, it's about...
H
The
keys,
but
the
did
methods
are
about
publishing
your
public
key
data
and
there's
this
whole
field
of
of
cryptographic.
Agility,
of
declaring
the
type
of
of
key
at
the
class
of
key
in
the
type
of
key
so
gets
really
gets
into
linked
data
signatures,
rdf
models,
and
just
that
by
naming
things
you
invoke
it
and
it
has
special
meaning.
H
Yeah, so you need to wrap it. And the challenge is that you can actually basically create a COSE key representation, the COSE_Key format, which is fine: you're basically doing it as ints-as-keys, with parameters and so on, and x and y, and don't use d, because d is for private keys. All right, so you have this blob.
H
Now
there
has
to
be
this
way
of
actually
saying
hey
this
thing
that
I
I
represented
in
in
seabor
as
co
as
cose.
This
actually
has
some
special
meaning.
It's
a
basically
I
and
there's
something
that
they
did
suggest
is
public
key
multi-codec,
and
I
think
that
that
may
be
where
we
land.
But
the
challenge
is
that,
well
that
multicodec
is
really
it's
a
multi-base
multi-codec,
because
actually
a
public
key
multi-base
is
like
hey.
H
I can do JSON-LD on top of IPLD, basically adding an @context, and it has some magical purpose. And even though, suddenly, from a cryptography standpoint, the attributes are mutable: it's a mutable external dictionary, which means that all I need to do is a man-in-the-middle attack, to basically trick you into rerouting the signature to a different field, or a null field, and in many cryptographic libraries null equals true.
H
So there's going to be this huge security vulnerability years from now. Anyway, that's going to be a topic for my DEF CON meeting next year, but in the meantime...
H
No, and someone warned me when I... this was like two, three years ago. He said: are you sure you want to do that? He even actually offered to pay me, to support me, to do this, and I said: no, no, I've got it, piece of cake, man. Famous last words.
H
Tell me about it. So I think mostly I'm settling on: all right, you guys want to do JSON, JSON-LD, that's fine; I'm going to do something more robust. Ideally that @context (because I argued for it, it can be a URI, not just a URL) basically left a footprint for ipfs://, or ideally ipld://. But you get to a rooting problem: if you branch it, you know, you get this issue as far as naming conventions.
H
As
far
as
like
I
added
a
middle
name
and-
and
your
schema
only
had
first
and
last
are
we
talking
about
the
same
middle
name
that
you
talked
about
and
so
but
that's
more
sort
of
rooting
problem,
but
and
at
the
at
context,
I
think
I
have
a
way
of
actually
basically
creating
ipfs
or
ipld
and
they
add
contacts.
So
basically
it's
it's
now
an
immutable
reference
to
the
dictionary,
and
that
decides,
I
think,
I've,
at
least
in
my
libraries.
H
I've actually satisfied that. But mostly I'm just dealing with this mixed media type, and I'm okay with strings, byte strings, as keys; I actually left that in the specification. And there's this thing called CBOR-LD, CBOR linked data, where you basically describe an external dictionary, and that is basically a map of attributes to hints. Or, you know, there's also an algorithmic approach, which is basically: you sort those maps by magic, and then basically this one is zero.
H
Yeah, I'm so frustrated, because I finally got it into the specification: both describing CBOR with tags, and then specifically DAG-CBOR with CIDs. I pointed to tag 42 and said: hey, we can basically do signatures, and we can do DAG-COSE signatures this way; and also the key id would be the DID key URL, and it would point and say you can actually go fetch the public key, et cetera.
I
Can you just clarify something for me? Because I'm not up to date on what's been going on here at all. Are they mostly just hung up on the fact that they need to type everything, and that there's no inherent... we don't have a type-naming thing, like: oh, here is the schema that refers to this data? And they're all really happy to use HTTP links for things, and we don't have, like, a CID?
H
Yeah,
mostly
because
actually
it's
just
like
this
dependency
on
jason
ld
and
which
is
all
mutable
http
links
and-
and
that
is
the
that's
where
the
web
of
the
semantic
web
fell
apart,
which
is
basically
it's
they're,
not
persistent
and
they're,
really
not
urls
the
uris
that
describe
the
reason,
an
identifier
that
just
finds
it
a
thing
and
so
and
when
that
that
that
description
is
offline,
then
you
can't
have
logic
in
your
semantic
ontology
and
so,
and
I
think
there
has
been
some
discussions
in
the
ipld
world
and
of
conversations
with
rdf
models
and
I
think
there's
even
actually
a
multi
codec
suggested
for
rdf,
but
the
challenge
is
that
rdf
by
nature
actually
has
cycles
in
it.
H
So it gets into some concern: it's not a DAG RDF model, it's a cyclic RDF model, and so many of these actually have cycles where you could get lost in a loop, or they change; they're mutable and they're not persistent.
They could, or they should... they could have cycles; they shouldn't. But a lot of them are self-describing: you basically get this thing and it says, oh, I'm talking about myself, and it uses a fragment identifier to describe the id within my own document.
But here, I can share my screen and show you the document, and maybe give you an example. Am I sharing? Yeah.
H
So
here's
the
the
seabor
section
that
I
created-
and
this
is
cddl
markup
language
notation,
but
basically
what
I
did
is
here's
the
in
the
seabor
pretty
print
export,
which
is
basically
the
cid.
The
did
document
is
basically
just
this
is
just
strings.
These
are
text.
You
know
a
created
field
with
a
a
date
timestamp
and
in
in
seaboard.
This
could
be
more
concise
with
a
tag
that
would
be
specific
for
dates
and
the
same
thing
for
like
the
the
public
key.
H
So
here,
whatever
identifier
and
here's,
where
they
actually
like
to
do
like
type,
this
is
an
e
dsa
public
key
type.
I
think
actually
there's
a
better
example.
So
here
it
is
basically
in
json,
so
so
yeah.
So
here's
the
verification
method
as
an
array
which
is
an
object
and
it's
basically
ex
explicitly
stating
the
type
information.
H
And
then
I
had
wanted
to
use
links
in
this
as
basically
cids
with
maybe
dag
coz
sibor,
because
dag
cos
a
is
about
the
signature
aspect
of
it
anyways.
But
this
is
actually
where
I
put
that
bit
tag
like
here's.
Here's,
a
cozy
signature
with
a
tag
42
with
the
bytes,
which
is
basically
the
c
id.
H
I
I
I
don't
know
any
thoughts
as
far
as
like
I'm
also
all
alone,
because,
like
they
say
that,
there's
no
one
really
sort
of
implementing
this,
and
so
this
approach
with
the
dag
seabor,
despite
it
being
pretty
straightforward,.
F
Yeah
that's
unfortunate.
I
mean
I
know
that
there
are
folks
that
are
using
jose
and
kosei
with
some
of
our
stuff
and
that's
driving
the
dag,
jose
cosey
conversations.
But
I
don't
know
anybody
specifically
doing
a
bunch
of
did
stuff.
H
With
a
whole
bunch
of
blobs
that
they
have
an
algorithm
to
go
through
and
it's
stored,
json
data,
I
thought
ipld
was
actually
much
more
elegant
and
I
think
yeah
it's
a
solution
to
it,
and
it's
also
it's
I'm.
I'm
really
trying
to
be
all
immutable.
Hashes
everything
there's
no
ability
in
it.
There's
there's.
I
I
G
I
F
Did
everything
the
easiest
way
and
then
they've
been
adjusting
now.
G
F
F
Yeah, other than that, I just... I don't know. I have a hard time with the semantic web folks, partly because I wasted a bunch of my own time there in the 2000s, and so I still have some PTSD about it or something. But it's just...
F
There was this vision for a data layer for the web, and at some point they compromised: not a data layer for the web, but a serialization layer for the web, for semantic data. So they became very comfortable with the idea that databases are going to exist and they're going to silo information, but we're just going to have the serialization layer that presents all of that information to the web in a semantic way, for people to get more value out of it.
F
And I don't think that that model is really going to reshape the web, and that's not really a data layer for the web. If you're going to build a real data layer, and build real data structures and databases and file systems and these things that people need, then you need structures that are more stable than these JSON documents with mutable links in them that can go circular. Like, that's crazy: you just can't build stable structures that way.
F
But if you live in that world, and they're really comfortable in that world, then it's hard to have these conversations. I don't know.
H
Yeah, so it still just gets into this naming problem. So here's... I think I put something in, so it's...
H
Oh, it's probably not formatted right, but basically, here: let's say it's an object, and this is a part of the DID document, and it's a verification method, and that has an array of objects. That object has a type (in this case it's a verification key, just generic), it has a controller, and right now I'm saying it's a public key, and this is going to be, let's say, a base32-encoded link, a CID, which has an ed25519 public key codec, and basically the public key bytes follow inline.
H
Oh yeah, this... because it's... yeah, so here's a faked version of it.
H
There
we
go,
and
so
I
think
it's
ultimately
just
about
naming
it's
about
bike
shedding,
which
is
that
we
have
some
special
name
for
this
type
and
this
this
public
key,
that
what
they're
wanting
to
do
is
basically
say
that
it
is
it's
a
public
key
multi,
codec
2020
and
something
that
has
special
purpose
and.
H
And it's basically, like... it's somehow registering that into a registry that says: oh, what follows is going to be that type. And I think it's going to be a never-ending battle to actually maintain these registries and keep them up to date. And who maintains that? That's not really a scalable solution. It's also very centralized; in this case, described in the @context.
H
The same type, and that can be done in the @context. And so, I suppose, that would be... and I would actually want to have this as ipld. Actually, no: because it has to be interoperable, they don't let me do an object link like that with a CID, so I actually have to create this as a string; that's the "must". And that would be, like... I am suggesting, at least right now, support for ipfs, but I think ipld, and then actually it would be...
H
IPLD will have its own mechanism. CWTs will actually have their own; PEM public key encoding is going to have its own; there's one for X.509.
A
A question. What I don't get is: isn't the whole point of the context that you can interpret the contents differently? Like, I mean, you have the context; so wouldn't it make sense to have the same key names, because the context defines how to interpret the key names? So why are the key names specific to the context?
H
Yeah, yeah. So mostly it's kind of trying to boil the ocean, as far as just naming things. It's like this "cryptographic golem", is what I accused it of: basically, you're invoking its name and that gives life to it. And I think that's all it is: you're basically just declaring it. And because the charter says that they're not allowed to create new cryptographic...
H
Algorithms-
and
this
is
where
it
gets
interesting,
like
declaring
es256
k-r,
defines
the
recovery
signature
recovery
mechanism
of
ethereum
signatures,
so
that
that
is
basically
a
new
thing.
It's
in
the
ethereum
world
yeah,
but
basically
it's
recovering
the
signature
with
an
extra
byte,
that's
not
registered
in
iana,
it's
not
registered
in
the
jwts
or
java
json
web
signatures.
So
it
basically
it's
this
new
thing,
and
so
but
they're
getting
around
it.
H
H
By naming things. And then there's this ability to have cryptographic agility by saying that there are verification methods: that this type of key is only used for authentication, and is not used for signing a credential, for instance.
H
The bottom is basically... this is all just basically key-value pairs. So the CID with the public key multicodec is basically just describing the curve, the key type, and the x coordinate, in this case for ed25519.
H
It could all be done more simply and elegantly, but we have to get agreement as far as the meaning, the semantics: if I just give an object with curve ed25519 and this information, like that, we need to know that that is a public key format.
A
All right. So, are there any other things that people want to discuss, or updates to give, or...?
I
I mean, if people are cool with it, I could tell you a little bit about my looking around at what caused me to send that message out, about how you might have alternative proofs for what data is, other than just a hash. Basically, there's just a lot of content that already exists on the web, and people have found it through other mechanisms, right?
I
They
found
them
through
websites,
they
found
them
through,
wherever
and
even
in
locations
where
there
is
already
like
a
shot
256
of
the
file.
There's
no
way
for
me
to
know
how
to
find
it
over
ipfs,
and
similarly,
we
have.
I
We
have
things
that
are
we'll
call
it
like.
We
have
codecs
that
we
sort
of
are
not
fully
compatible
with
and
that
you
know,
git
objects
are
happy
to
be
100
megs,
but
if
we
refuse
to
send
blocks
that
are
more
than
a
meg,
then
things
stop
working,
and
there
isn't
necessarily
like
an
easy
answer
for
this.
I
You could glue it in as a special multihash; you could glue it in as a codec; you could glue it in as an ADL. It's not clear where any of these things would fit. And so I guess the question boils down to: what if I had a long self-certifying thing? So not a hash... not, like, a 256-bit hash, but something longer than we would feel comfortable calling a CID.
I
I guess here are two examples. One: I give you a magic SNARK that proves to you that the SHA-256 of the file equals the SHA-256 of all of the leaf blocks that are referred to by the CID, right? That's, like, the magic SNARK approach. Another one would be...
E
That
sounds
like
something
that
I
want,
but
it
doesn't
sound
like
something
that
how
do
I
compose
it,
and
what
does
this
thing
cost
to
produce?
How
hard
does
somebody
have
to
want
to
lie
in
order
to
make
this
do
the
opposite
of
what
I
want
it
to
do
and
like
how
can
I
prevent
that?
I
don't
understand
how
to
compose
any
of
this
or
if
this
primitive
even
exists,.
I
Well,
maybe
here's
one
that's
like
a
little
easier
and
a
little
more
like
it
has
has
problems,
but,
like
I
guess
something
you
can
wrap
your
mind
around
a
little
more.
I
could
slowly
build.
I
could
build
like
another
merkle
tree,
whereas
I'm
slowly
unwinding
this
thing
right
all
the
way
down
to
the
root
to
the
to
the
leaf
blocks.
I
I also include the intermediate SHA-256 hashes, right? Because the thing is a Merkle-Damgård construction, I can just glue these things together. And, you know, exposing intermediate state isn't great, but you might be able to use that to gain some confidence that this new DAG actually represents the same data as the old DAG.
E
Right. I believe that somebody can do such hashing and then produce such a tree, and I believe that you could compose another document which refers to both of those hashes, and then I could of course hash that, because I can hash anything; that's always a primitive that we have. But I don't understand how that document is supposed to let anyone else trust my claim. Like, I just hashed this random-ass claim that I made, right? And so... I don't...
F
Well, but I think that the use case really matters for where you would want to stick it. Like, why you're doing it actually really does matter in terms of where you want to put it in the stack. And I think one of the reasons why I'm having a slightly hard time with this is that there are just better ways to do this if you're building something from scratch. So that sort of inherently means that if you're doing this, then you're doing it to integrate with some kind of legacy, right? Like: a bunch of people have these ISOs that have this SHA on them.
F
And so we want to integrate with all of those external systems, those storage systems, or whatever they might be, and so we've constructed this way to bridge them together. Because if you were just doing it from scratch, if you were just building out a use case where, like: hey, I want people who have parts of this to be able to share it, and I want to be able to know when we basically have the same data...
F
If you're working with the FBL byte-list stuff, you can reuse each section of the byte list really easily, and then you have hashes for every single part of it. So you basically have the ideal hashing for validating each part of that data, in whatever network and from whatever people you pull it from. You don't have to map some external context on top of that and then reconstruct a hash for some random slice of the data that's being represented.
F
So there's, like, a better way to do that with the primitives that we already have. So if you're building what you're talking about, it would literally be because, like: oh, people already have this ISO and they're already seeding it by some kind of hash, but...
I
I don't know... let's say npm has a list of the hashes of all of the binaries they distribute, right? But (a) I can't convince everybody who publishes on npm to also publish an IPFS hash, and (b) I can't convince npm to, like, chunk up everything and dump the IPFS hash there as well.
I
...aren't gigabytes, so that I don't have to download more than a meg of stuff without knowing whether it's garbage, and then ban the peer, right? Yeah, the other approach, of, like: oh, I'm just advertising that, by the way, this graph happens to equal this 100-gigabyte file, I swear... that doesn't do it, because I have to download the whole thing and then hash it to see if it matches.
E
Part of this is trust, right, and denial of service is part of it, but I don't know what kind of proof can give me this thing that I want. And I have heard other people in our periphery say, for example: oh, zk-SNARKs can do this magical thing that I definitely want. And that is the total amount of information that I have received from somebody who is making that claim, and I do not believe it without massively more detailed substantiation.
I
Yeah. So I can try to write up a little more of my thoughts on the things that I'm familiar with, as opposed to the things where I would need help, like which cryptographic mechanisms make sense. But in terms of what the definition of "proof" is that I'm referring to: I would like a way to prove the statement, given a particular interpretation of the relevant pieces of data.
F
Two requirements... I don't know if you can necessarily get it to one megabyte, but just pretend for a second that a torrent file is a SNARK, right? It literally says: here are all the slices of the file, and here are all of their hashes. And then, if you ask people for that file and they're like: oh yeah, I have that, I have a different data structure for it, here's some of the data...
F
As long as you get each piece of the data in the torrent file, you can reconstruct and rehash to figure out if it matches each of those hashes, right? And the torrent file is going to be less than a meg to download: that's the proof. And then you would have enough data to... yeah, I mean, you're probably going to download a little more than one megabyte of data from people, but...
F
No: every sub-thing in the torrent file has a hash. It breaks up the file into parts, one for each hash, so you would have a bunch of hashes, one for each slice of the file, and those aren't going to exactly match, like, a UnixFS file. But you can download that part of the UnixFS file, take that part, hash it, and see if it matches the hash in this one.
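The torrent-file-as-proof shape described here can be sketched directly: a manifest of per-slice hashes, plus a check that rehashes each downloaded piece before accepting it. The piece size and data below are made up for illustration; real torrents use much larger pieces:

```python
import hashlib

PIECE = 16  # toy piece length in bytes; real torrents use e.g. 256 KiB

def make_manifest(data: bytes):
    """The torrent-file role: one sha2-256 digest per fixed-size slice."""
    return [hashlib.sha256(data[i:i + PIECE]).digest()
            for i in range(0, len(data), PIECE)]

def verify_piece(index: int, piece: bytes, manifest) -> bool:
    # Rehash the downloaded slice and compare it against the manifest,
    # so a peer can never feed you more than one bad piece.
    return hashlib.sha256(piece).digest() == manifest[index]

data = b"some file published long ago with only a flat hash..."
manifest = make_manifest(data)
```

Whatever structure the sender stores the bytes in, the receiver only needs the slices; the manifest is the small, downloadable object that makes each slice independently checkable.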
I
Right, but the thing is that, you know, if I go to a random place... I go to, like, one of these antivirus guys, and I know that they have published that the SHA-256 of this file is good. Okay, cool: so the only identifier I have for this thing that is content-based, and not website-based, is a SHA-256 of the whole file.
A
And
we
have
only
two
minutes
left
for
the
meeting,
so
just
in
case
someone
has
anything
else
last
chance
or
I
also
see
a
new
face
so
in
case
someone
wants
to
introduce
themselves
really
quick
at
the
end
of
the
meeting.
Sorry
for
not
which
I
mean
earlier.
G
Hi,
can
you
hear
me
yeah.
I
G
G
I decided to attend your meeting first, which is very, very difficult to follow for someone who doesn't know coding, but I've been trying to. It's basically like listening to a new language that you don't know; I'm trying to learn it by just listening to it. But yeah, that's me.
A
Thanks
yeah,
I
I
also
have
to
say
that,
like
especially
this
meeting
today
was
extreme
like
normally,
I
would
expect
the
meetings
to
be
easier
to
follow.
So
in
case
you
want
to
chime
in
next
week
again
feel
free
to
join
again.
It's
probably
easier,
like
this
week,
was
really
like
yeah.
A
Thanks
cool
thanks
all
right,
so
we
are
at
the
end
of
the
meeting,
and
so
thanks.
Everyone
for
attending
and
yeah
see
you
all
next
week
again
goodbye.