From YouTube: 🖧 IPLD Bi-Weekly Sync 🙌🏽 2019-04-15
Description
A bi-weekly meeting to sync up on all IPLD related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
First of all, I asked everyone to put their name in the pad; I also put some entries in, in case someone doesn't. And I need a note-taker. Any volunteers for note-taking? Thanks, Rod. OK, so if you haven't put what you worked on in the pad yet, please do so, along with any other agenda items.
A
So yeah, as I'm the first one on the list, I'll start with me. What I worked on is pretty short, because I just worked on the JavaScript IPLD formats stuff, which hopefully I will finish this week, so that we don't have endless discussions in the JavaScript meeting about why I don't get it done. Hopefully I'll finish that, and then I will probably work on the OKRs for this quarter. Other than that, I don't think I have any news.
B
Yep, so I did pull request number 110 in the IPLD specs repo. I did a bit of a survey across some major standard libraries, looking at collections and at some of the commonalities, some of the groupings we can put these things into, and then put up some initial framing thoughts in a pull request there. So the idea is in that pull request.
Anyway, that's one thing. The other thing is that Eric's going to talk about the schema work that he's been doing. I've been working on that as well, trying to turn it into something that is actually usable; really, the initial aim is just to prove its utility. So I wrote a grammar for it, to parse it, or at least the current version of it. It's pretty raw, so it'll probably have to change, but the idea is to quickly get to something that can do some basic reading and writing of blocks.
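As a rough illustration, a schema in the textual form under discussion looks something like this (an illustrative sketch only; the exact syntax was still in flux at the time, and `Person` and its fields are invented for the example):

```ipldsch
type Person struct {
  name String
  age optional Int
  friends [&Person]
}
```

A parser built from such a grammar would turn this into a structured description that tooling can use to validate data and do basic reading and writing of blocks.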
C
I guess they sort of exist now, since we started talking about them in earnest last week, and I've been writing up some PRs into the specs repo, finally. I think those stop short of all of the existing writing, but I'm trying to get things in some order as they go into the specs repo properly, and thank you all for the comments on that. That parser is also really pretty cool; it's very exciting to see.
C
Yeah, I think we've got lots of interesting conversations going on there already. I don't know, broadly speaking, what's going to be the most pleasant way to navigate some of those PRs, because it's super easy for any of the discussions around the data model specifications or the schemas to sprawl: there's a whole fractal of things that need to be specified at some point, and I just have no idea how to cleave different parts of the fractal apart in the flow of GitHub issues; it's a skill that requires magic, as far as I'm concerned. But it's going well so far, and, I don't know, maybe some meta discussion about that might be helpful, though I have no idea what shape it would take. So I'm really excited about the progress anyway, and I guess that's all I have to report; that's about all of mine, so now you know what I've been doing.
E
Yeah, so we're looking at some of the libp2p notes and seeing some of the recurring issues, which are essentially, basically: we want something centralized to get things working now, but we don't want to hamstring ourselves later. There seem to be four or five issues that are all basically that, including one that it looks like Michael has been doing some work on. So I took a look at your issue too, Michael, and I'm interested in hearing more about what you were thinking.
E
I decided to punt on ACL-type behavior for DAG synchronization, because it's going to require more time than I have right now to do it, I think. So I'll just do the normal thing: you get one ACL for one named data structure and call it a day, like IPNS is currently doing, and we'll figure the rest out later. And I'm going to try again with ipld-prime in a week or two and see how that goes.
D
I have a bunch of stuff on my list, OK, so yeah, let me knock through this.
So there was a bunch of stuff that happened in the JavaScript IPLD stack last week. I implemented dynamic browser fetching of codecs. The idea is that in the future, if you just compile this stuff with webpack, none of the codecs will actually be in the default bundle, except for the raw codec, and then all of the codecs will come in dynamically as you need them.
D
That should really, really help bundle sizes in the future. Right now, the way this works in IPFS is that users have to decide up front which codecs they want and pass them in, and they can't get rid of any of the default codecs either, which is a little bit limiting. I also factored out all the unnecessary async operations in the Block interface.
D
The idea here is that I'm starting to implement a bunch of the performance improvements that we talked about, to make sure that the API model we have actually supports all of them, so that one was pretty cool; you can have a look at it there. I also migrated the dag-json implementation over to this new stack, which led to a huge deletion of code.
D
Okay,
the
summit
in
Berlin.
We
have
finalized
t-shirts
and
then
use
so
that's
all
coming
together
this
week,
I'm
going
to
put
together
the
agenda
a
bunch
of
stuff,
that's
already
in
Google
Docs
for
the
first
two
days
haven't
quite
mapped
out
the
rest
of
them.
Yet
last
week
you
did,
we
did
in
front
of
okay
our
scoring.
Hopefully
you
saw
that
and
agree
with
your
scores.
D
ipld.earth. So this has been coming up a lot: we need some kind of permanent cloud store for IPLD blocks that is very, very simple and can host blocks on behalf of people in an authenticated way. I decided to just write something real quick using Lambda, so I registered a domain, ipld.earth, for it.
D
The way this works is that I use the import function. Have you ever used the import function before, like in webpack, or in any project where the compiler handles it?
D
So this is what happens right now: there are these big if statements in there, so we have to have an if branch for every codec that is ever possible, just like we have now. This get-codec module will eventually just have the import function calls in there, and then what webpack does is it goes, "OK, I'm going to need that, but I'm going to break it out into a separate file and load it asynchronously."
D
The nice thing is that the import function is being standardized, so eventually it will even be in Node and work more or less the same as an asynchronous require. It's nothing we need from Node necessarily, but it would mean that eventually we wouldn't need two entry points. Yeah.
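The lazy-loading pattern described here can be sketched roughly like this (all names are illustrative, not the actual js-ipld API; in a real bundle the loader thunks would be `() => import('...')` expressions that webpack splits into separate chunks):

```javascript
// Illustrative sketch of lazy codec loading. Each loader is a thunk;
// under webpack, a thunk like `() => import('some-codec-module')`
// becomes its own chunk and is only fetched on first use.
const loaders = {
  // The raw codec stays in the default bundle: it is trivial.
  raw: async () => ({
    encode: (bytes) => bytes,
    decode: (bytes) => bytes
  })
  // 'dag-cbor': () => import('dag-cbor-module')  // hypothetical split point
};

const cache = new Map();

// Resolve a codec by name, loading it at most once.
async function getCodec(name) {
  if (cache.has(name)) return cache.get(name);
  const loader = loaders[name];
  if (!loader) throw new Error(`unknown codec: ${name}`);
  const codec = await loader();
  cache.set(name, codec);
  return codec;
}
```

The `import()` expression is what lets the bundler discover the split point statically while still deferring the actual fetch until a codec is first requested.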
D
No, it's not just UnixFS, right: any time you create a directory with a bajillion files, you're going to need a HAMT, so it's actually part of directory support in general, not part of UnixFS as such. And the way this works in UnixFS v1 is that it was a one-off HAMT in protobuf, and so it's not really standardized.
D
So
we
need
a
Windows
Internet
stamped
on
top
of
the
data
model
because,
because,
like
you
know,
this
works
on
any
codec
in
the
data
model.
So
theoretically
you
should
be
able
to
create
dag
JSON
or
Dex
Ybor
like
graphs
that
have
UNIX
s
so
yeah.
So
we
need
that.
That's
slightly
more
generic
Hampton
at
some
point.
It's
not
so
it's
a
blocker
to
get
it
released,
but
we
could
actually
probably
start
the
integration
work
before
we
have
this,
because
the
interfaces
from
the
UNIX
from
from
this
library
side
will
look
identical.
D
Whether or not you use a HAMT, the way it works is that it's just an async generator that gives you all of the blocks that you ever need for the graph. So in the future it's just going to end up giving you more blocks, for all the intermediate HAMT nodes, and that support should happen more or less transparently. So we could start the integration in ipfs before it's complete.
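That shape can be sketched as an async generator (a sketch with assumed interfaces, not the real library API: `store.get(cid)` returns a decoded block, and `links(block)` lists its child CIDs):

```javascript
// Walk a DAG from a root, yielding every block needed to materialize
// the graph. Intermediate index blocks (e.g. HAMT shards) come out of
// the same generator, so callers never need to treat them specially.
async function* blocksForGraph(root, store, links) {
  const seen = new Set();
  const stack = [root];
  while (stack.length > 0) {
    const cid = stack.pop();
    if (seen.has(cid)) continue; // yield each block at most once
    seen.add(cid);
    const block = await store.get(cid);
    yield { cid, block };
    for (const child of links(block)) stack.push(child);
  }
}
```

Because the consumer just iterates with `for await`, swapping a flat node for a sharded HAMT later changes only how many blocks come out, not the interface.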
D
Yeah, OK. So last week we had this schemas talk, and we also did a small demo, like I said, of ipld-prime and selectors for the team, and they both went really well. I feel like it gave us a lot of really good feedback and visibility into those projects, and we have a bunch of other stuff coming down the pike.
D
Now, that's just stuff that we're finishing up and are kind of ready to hand off, or to get more feedback on, so I'd like to turn it into more of a regular thing. What I'm thinking is that, since we do this meeting every other week, on the off week, sometime midweek, not on Monday, I want to put something on the calendar for basically a show-and-tell, and we'll fill that in with whatever project happens to be ready every other week.
D
We should, at least by the end of the quarter, probably have enough new things that we want to gather feedback on to use that slot, and as each one comes up, we should proactively invite all the right people that we need around to review that kind of stuff. So for a JavaScript thing we should get the JS people on it, and if it's a Go thing, we should get the Go people.
F
So I have a quick request when it comes to IPFS Camp, which is coming up in June. Michael and I have been chatting a little bit about attendance and that kind of thing.
You know, this is not a required event for folks, but we are creating a lot of awesome new content that's meant to live on past this event, so it's very much targeted at leveling everyone up during the camp. The people that are coming will participate, but it's also meant for the future.
F
This stuff lives on ProtoSchool, so it's more available to the rest of the community, and it's something that would be used in a lot of other workshops and tutorials. There are four main courses that we are planning to create and have everyone walk through at IPFS Camp, and one of them is the core understanding of how IPFS deals with files and DAGs and all of that.
F
So
it's
very
much
UNIX
makes
if
sv1
stuff's
not
v2,
because
it
doesn't
exist
yet,
but
currently
the
people
who
are
kind
of
on
tapped
to
help
us
develop.
That
particular
course
are
Alan
and
Steven.
We
were
thinking
that
it
would
be
really
useful
to
have
a
group
of
three
with
someone
from
IP
LD
kind
of
jumping
in
there,
from
kind
of
a
dated
model
perspective
on
how
we
deal
with
files
and
so
depending
on,
if
there's
anyone
who's
interested
in
coming
from
IPL
B
wind,
who
wants
to
help
us
create
this
course.
A
I probably know the lowest level, basically, which on its own also isn't that useful, so I miss the things in between, but I can help out if there are any low-level questions. I probably know more about that part than Alan does, so yeah, file issues or ask me about whatever I'm able to answer. I guess Michael and I should probably just split it between us.
D
I mean, the general problem, right, and this was a problem with BitTorrent as well, is that you have a section of data that is in really high demand, so there are a lot of peers available to serve that content, and then you have this big long tail of data that's hardly ever accessed, so there are almost never peers on it, and it's prohibitively expensive to run a BitTorrent node for it.
D
No,
it's
constantly
it's
a
sort
of
that
data
that
only
gets
accessed
like
once
a
month,
so
they
did
the
system
called
web
seeds
where
you
could
basically
just
like
along
with
the
trackers
right.
So
you
have
like
each
tracker
and
then
you
can
also
just
say:
okay,
here's
an
Earl
and
it's
basically
at
the
peer
of
last
resort.
D
So
if
you
can't
find
files
like
through
any
of
these
tracks
or
peers
on
any
of
these
trackers
and
fall
back
to
this,
and
it
works
really
well,
actually
it's
what
like
most
of
the
web
tour
and
people
use
when
they
put
my
content
up
and
stuff
like
that,
so
it
just
dramatically
reduces
their
bandwidth
bill.
But
it
doesn't,
you
know,
change
their
storage,
so
obviously
because
still
storing
it
permanently
somewhere,
but
yeah.
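The same peer-of-last-resort idea applied to blocks might look like this (a minimal sketch with assumed interfaces, not an actual ipfs API: `p2p.get` and `http.get` both resolve a CID to block bytes):

```javascript
// Try the peer-to-peer network first; if the block can't be found
// there, fall back to a plain HTTP block store acting as the
// peer of last resort.
async function fetchBlock(cid, { p2p, http }) {
  try {
    return await p2p.get(cid);
  } catch (err) {
    // No peers had it: fetch from the well-known URL instead,
    // e.g. GET https://some-block-store/<cid> in a real deployment.
    return await http.get(cid);
  }
}
```

Because the fallback serves content-addressed blocks rather than opaque files, the result can still be verified against the CID like any other block source.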
D
We have an advantage in that we don't have to say, "here's a URL to content"; we can actually say, "here's a URL to blocks," and you can treat it like any other block discovery mechanism, like you were trying to do. I wasn't being very prescriptive at all about the mechanism by which we surface this, or how you enable it; I don't really care, and the effect is going to be the same.
D
If
somebody
can
build
an
application
and
say:
okay,
I'm
gonna
put
my
data
here,
try
to
get
it
out
of
the
the
peer-to-peer
network
but
like,
if
not
then
solve
accidents,
and
that's
fine,
and
so
your
proposal
totally
works
for
that.
But
you
can
see
some
of
the
discussion.
There's
probably
quite
well
that
any
other
beacon
question
yeah.
D
In that case, you would just do the base32 of the multihash rather than of the CID. The reason I think we want it to be a CID is that, as we're building up this ipld.earth thing, I've been starting to think about what it would look like to have a service that was providing blocks on behalf of thousands of people, potentially millions of people, so billions of blocks.
D
You have to parse the entire graph and follow the entire graph, and so what a service, or honestly eventually even regular nodes, are going to want to do is store a little piece of metadata next to the CID that says, "oh yeah, I also have everything underneath this." So at some point we will probably want to make a protocol adjustment on the Bitswap side to say, "hey, do you have all of the data underneath this CID?" and expose that same kind of thing.
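A sketch of that bookkeeping (illustrative names only): alongside each stored block, keep a small record of whether the full graph under that CID is known to be complete locally, so that "do you have everything under this CID?" becomes a cheap lookup instead of a graph walk.

```javascript
// Minimal completeness index: one flag per CID recording whether the
// entire graph reachable from that CID is known to be present locally.
class BlockIndex {
  constructor() {
    this.meta = new Map();
  }
  // Called once a full graph walk has verified every reachable block.
  markComplete(cid) {
    this.meta.set(cid, { complete: true });
  }
  // Cheap O(1) answer to "do you have all the data under this CID?"
  hasCompleteGraph(cid) {
    const entry = this.meta.get(cid);
    return entry !== undefined && entry.complete === true;
  }
}
```

A service answering a hypothetical "have you got the whole graph?" query, whether over Bitswap or as an HTTP header, would consult this index rather than re-walking billions of blocks.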
D
And
if
we
just
the
same
thing
over
HTTP,
we
would
want
to
be
in
some
kind
of
a
header
right
and
and
but
in
order
to
do
that,
you
have
to
be
talking
about
a
CID
another
multi
hash
right,
because
that
you
don't
know
what
the
codec
is
from
the
multi.
Actually,
you
can't
tell
if
you
have
all
the
data,
all
the
data,
sorry
all
the
data
that
links
to
in
the
graph
yeah.
So
that's
that's
why
I
was
saying:
go.
A
Just, why do you need the codec and not just the hash? Why do you need the full CID information, including the codec? I don't even see the relation between whether you have a tree beneath your CID or not and whether it's a CID or just a hash, so I don't get it.
E
Right, and that requires not really making anyone else who's currently running a node change their software. It just means that when you read the data, the interface needs to not return a peer-info object, which is what I think it currently does.
D
Right,
but
wouldn't
you
you
would
have
to
but
just
saying
it's
a
multi
address
is
probably
not
enough
right
because
you're
going
to
need
this,
like
it's
not
bad
to
the
earth,
or
are
you
just
saying
that,
like
okay,
so
further
purposes
of
these
careful
records,
if
you
get
a
multi
address,
you
are
just
expected
to
encode,
see
IDs
and
attach
them
to
and
like
append
them
to
it.
It's
not
like
what
is
considered
at
that
point.
Okay,.
E
Yeah, it's a little less direct. It requires one more round trip, because you actually still have to do a DHT request and can't just go hit the, you know, S3 bucket directly. But it still allows all of this; it doesn't make you do a hard fork to the centralized world, you just sort of get there more naturally.