From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-02-10
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD weekly meeting; it's the 10th of February 2020. As every week, we go over the stuff that we've done last week and plan to do next week, and then discuss any open items we might have, and also introduce anyone who's new to the call. Who wants to introduce themselves? Yeah, so perhaps let's start with this, since I see a new face. Do you want to quickly introduce yourself? If you don't, that's also fine.
A
All right, cool. On my list, I'll start with myself. I certainly haven't done that much IPLD-related work last week, but this week I definitely will have time to work again on the Rust side of things. Mostly I'll concentrate on the multiformats stuff, because there are several PRs open in forks, and it's about getting them together again and having a solid base with high quality. Yeah, that's all I have. Next one on my list is Michael.
C
Sorry — yeah, so basically, last week I got to the CommP generation side of this big dataset project, and it immediately just crashed Lambda. We investigated: it was using a lot of scratch disk space and a lot of memory. The first pass at fixing that wrapped it into Rust, and that got rid of some of the memory usage, I think, but it was still using a lot of scratch disk and memory. Then Volker looked at it, and from there I think we got it further together, so now —
C
I think we're finally near the point where it'll run in Lambda, the Rust-ported version, without crashing. Hopefully. But there's also another problem: we've been generating CAR files that are just a little bit over, because we thought that the way Filecoin sector padding worked was that you assembled all the pieces into a sector and then you did the padding — but the padding actually happens at each piece.
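To make the piece-versus-sector distinction concrete: Filecoin's Fr32 padding expands data by 128/127 (two zero bits are inserted per 254 bits), and it applies per piece, so each piece's raw payload has to fit under its padded piece size individually. A minimal sketch of the arithmetic in Go; the function name is made up for illustration:

```go
package main

import "fmt"

// Fr32 padding in Filecoin expands data by 128/127: two zero bits are
// inserted for every 254 bits, so 127 raw bytes become 128 padded bytes.
// Padded piece sizes are powers of two.

// maxRawPieceSize (hypothetical helper) returns the largest raw payload
// that still fits in a piece whose *padded* size is paddedPieceSize.
func maxRawPieceSize(paddedPieceSize uint64) uint64 {
	return paddedPieceSize / 128 * 127
}

func main() {
	// A 1 GiB padded piece holds slightly less than 1 GiB of raw bytes.
	fmt.Println(maxRawPieceSize(1 << 30)) // 1065353216, ~0.992 GiB

	// The mistake described above: since padding happens per piece, each
	// piece's raw payload must individually stay under this bound;
	// checking the summed raw pieces against the sector size overshoots.
}
```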
B
All right, so last week was productive. I went through all the constraints I was given for selector syntax and came up with this very esoteric syntax that may work — I don't know; I mean, it fits the constraints. So I wrote up a PR describing it, and right now I'm writing a parser for it to verify it's not horribly ambiguous. The biggest issue I see is: we may have problems in the future with, like, old versions of the syntax and new versions of the parser, and vice versa. It's not a fully self-documenting syntax.
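For context on why the parsed form matters here: a parsed selector is plain IPLD data described by the selector schema, so any consumer can decode it without knowing which version of the DSL produced it. A hedged sketch in Go; the JSON uses the selector schema's short field names as I understand them, so treat it as illustrative rather than normative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A selector in its parsed form: recurse to depth 5, exploring all fields.
// "R" = ExploreRecursive, "l" = limit, ":>" = sequence,
// "a" = ExploreAll, ">" = next, "@" = ExploreRecursiveEdge.
const parsedSelector = `{"R":{"l":{"depth":5},":>":{"a":{">":{"@":{}}}}}}`

func main() {
	// Because the parsed form is self-contained data, decoding it needs
	// no knowledge of the DSL version that wrote it.
	var sel map[string]interface{}
	if err := json.Unmarshal([]byte(parsedSelector), &sel); err != nil {
		panic(err)
	}
	fmt.Println(sel)
}
```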
C
I mean, maybe Eric can chime in with a case that I can't think of, but in all the cases that are in my head, I would not be worried about having to have a version-specific syntax parser. As long as the parsed form is fine and doesn't require that context, then we can always keep the parsed form and know exactly which version it is. I mean — that made me think the same; I hadn't —
B
Thought
of
it
right,
I
mean
yeah.
The
the
ILD
data
structure
is
the
source
of
truth
is
just
if
I,
what
I'm
most
concerned
about
is,
if
I
have
a
new
version
of
the
syntax
and
I'm
using
an
old
version
of
the
parser,
it
probably
won't
know
to
do
with
it.
But
if
we
don't
care
about
supporting
that,
we're
fine
yeah.
D
On the selector DSL stuff: I think as long as you don't go too crazy — let's say, bare words — I'm not super worried about the migration either. The last couple of comments about that I saw, chronologically, in email, were that whitespace has some semantics, and I think that's fine as long as you define how the whitespace collapses.
D
So
I'm
gonna
hope
that
I
can
port
all
of
the
codecs,
all
of
the
traversal
logic
and
all
of
the
selectors
to
the
new
interfaces
and
I'm
gonna
claim
that
I
can
do
that
in
a
week.
We'll
see
it's
a
little
bit
of
an
aggressive
bit
but,
like
maybe
these
interface
changes
will
also
probably
generate
some
work
for
anything
downstream.
D
Just
gonna
like
write
a
bunch
of
those
as
lessons
learned
and
do
a
reboot
after
the
interface
changes,
lens
I
think
that's
just
gonna
be
easier,
and
there
are
some
lessons
learned.
So
it's
like
purifying
Passover
as
well
and
I'm
having
a
lot
of
fun
doing
test
purposes
now,
so
something
that
I've
learned
from
the
last
time.
I
try
to
get
serialization
laberd's
is
the
matrix
between
probing
every
edge
case
and
serialization,
and
all
these
recursive
structure
assemblies
and
also
dealing
with
what
may
be
typed
or
untyped
flexible
data.
D
At the same time, the number of cases that you have to test for is just really enormous, so I'm starting to put some extra effort into making sure all the test specs shape up nicely this time. In part that's just naming them, unfortunately — like making sure I know which things I have cases covering. So, if this goes ideally, I want to have one set of programmatic test specs that say, like: this behavior should confirm correct operation of a map, so long as that definition of the map has string keys and integer values and these particular keys. Which is funny, because if you have an untyped map, a bunch of those constraints don't apply; but if you're using this on a piece of typed information — a struct that happens to act like a map — then suddenly all that extra entry info matters. So I'm starting to write test specs with that in mind. It's fun!
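A sketch of what such named, programmatic test specs could look like, in Go. Everything here is hypothetical — the `mapSpec` shape and the `buildNode` stub are invented — and it only illustrates the naming-and-coverage pattern being described:

```go
package specs

import (
	"fmt"
	"testing"
)

// mapSpec names each case after exactly what it covers, so it is easy
// to audit which behaviors have test coverage.
type mapSpec struct {
	name    string           // e.g. "map/stringKeys/intValues/extraEntry"
	entries map[string]int64 // the data under test
	typed   bool             // apply a schema (a struct acting as a map)?
	wantErr bool             // typed structs reject unknown entries
}

var mapSpecs = []mapSpec{
	{name: "map/untyped/extraEntry/ok",
		entries: map[string]int64{"a": 1, "b": 2, "x": 9}},
	{name: "map/typed/extraEntry/rejected",
		entries: map[string]int64{"a": 1, "b": 2, "x": 9},
		typed:   true, wantErr: true},
}

// buildNode is a stub: a typed struct-as-map with fields "a" and "b"
// rejects any other key, while an untyped map accepts everything.
func buildNode(entries map[string]int64, typed bool) error {
	if !typed {
		return nil
	}
	for k := range entries {
		if k != "a" && k != "b" {
			return fmt.Errorf("unknown field %q", k)
		}
	}
	return nil
}

func TestMapBehaviors(t *testing.T) {
	for _, s := range mapSpecs {
		t.Run(s.name, func(t *testing.T) {
			if err := buildNode(s.entries, s.typed); (err != nil) != s.wantErr {
				t.Fatalf("got err=%v, want error=%v", err, s.wantErr)
			}
		})
	}
}
```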
E
So anyway, that's something I've been toying with a lot in my head while I'm building this thing: say you do some codegen — with schemas you could really build in intimate knowledge about how these things get encoded, if you build codecs that can work in a similar fashion. So, you know, it's really easy if you say, okay, I'm doing this schema and I only care about CBOR — great, you could do some really cool vertical integration.
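As a hedged illustration of that vertical integration: if the code generator knows both the schema — say, a struct with one string and one small int field — and that the only target codec is CBOR, it can bake the encoding directly into the generated type and skip a generic data-model traversal. A made-up sketch, not actual generated code:

```go
package main

import "fmt"

// Person is what schema codegen might emit for:
//   type Person struct { name String, age Int }
type Person struct {
	Name string // assumed shorter than 24 bytes below
	Age  uint8  // assumed < 24 so it encodes as a single CBOR byte
}

// MarshalCBOR writes the struct as a two-entry CBOR map with the field
// names baked in at generation time (keys in canonical shortest-first
// order, as dag-cbor expects).
func (p Person) MarshalCBOR() []byte {
	buf := []byte{0xa2}                       // map header: 2 pairs
	buf = append(buf, 0x63)                   // text string, length 3
	buf = append(buf, "age"...)               // key "age"
	buf = append(buf, p.Age)                  // uint < 24: single byte
	buf = append(buf, 0x64)                   // text string, length 4
	buf = append(buf, "name"...)              // key "name"
	buf = append(buf, byte(0x60+len(p.Name))) // text header, len < 24
	buf = append(buf, p.Name...)              // value
	return buf
}

func main() {
	fmt.Printf("%x\n", Person{Name: "ada", Age: 7}.MarshalCBOR())
}
```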
E
I think there's a lot of potential to connect all the way from the codecs all the way up, but there's still a lot of vertical stuff going on where things can swap in and out, and you have efficiency gains by not treating the layering as super strict — just bleeding them into each other a little bit more, in a sensible fashion, while still being a good modular citizen. Anyway, that's all I have.
E
It turns out that Michael wanted to just throw data at the CAR file generator, and my APIs were assuming that you were gonna be a nice await citizen and calmly await for the blocks to be written. So I had to change that and make it actually write things sequentially, and assume that things could come in without waiting. So that seems to be working now.
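The generator in question is JavaScript, but the shape of the fix translates. A hypothetical Go sketch of a writer that accepts blocks without making callers wait and serializes the actual writes internally:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// block stands in for a CID+bytes pair destined for a CAR file.
type block struct{ data []byte }

// carWriter lets callers fire-and-forget: a single goroutine drains the
// queue, so blocks land strictly sequentially. Illustrative only.
type carWriter struct {
	queue chan block
	done  sync.WaitGroup
	out   bytes.Buffer
}

func newCarWriter() *carWriter {
	w := &carWriter{queue: make(chan block, 64)}
	w.done.Add(1)
	go func() {
		defer w.done.Done()
		for b := range w.queue { // one writer: order preserved
			w.out.Write(b.data)
		}
	}()
	return w
}

// Put enqueues a block and returns immediately; no awaiting required.
func (w *carWriter) Put(b block) { w.queue <- b }

// Close flushes the queue and waits for the writer goroutine to finish.
func (w *carWriter) Close() {
	close(w.queue)
	w.done.Wait()
}

func main() {
	w := newCarWriter()
	for i := 0; i < 3; i++ {
		w.Put(block{data: []byte{byte(i)}})
	}
	w.Close()
	fmt.Printf("%x\n", w.out.Bytes()) // 000102
}
```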
E
Actually, what we were doing was going from doing this in Go — using the Go libraries, which did some of the work — and then crossing over the FFI boundary into the proofs in Rust. Basically, now we've moved like three steps closer to almost re-implementing the proofs stuff, which, you know, means we need to think carefully about how much of this we really want to do. But it turns out there are two areas where it's using disk; one of them is running the padding.
E
So when you give this padding algorithm a reader with a big, huge — you know, something that can read sequentially, like a stream of bytes — it wants to be able to move forward, then go back a little bit, then forward, then back a little bit, and so forth. Because it needs to do that seek, you can't just give it a pure stream, unless it had the appropriate amount of buffering. So what the proofs code does is write it to disk and then feed it from that disk.
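The in-memory alternative described next is easy to picture: the padder only needs something it can read from and seek on, so the stream can be buffered in memory instead of spooled to a temp file. A minimal Go sketch, assuming the whole piece fits in memory:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// seekableFromStream buffers a sequential stream in memory and returns
// an io.ReadSeeker, so the padding step can move forward and back
// without any scratch file on disk.
func seekableFromStream(r io.Reader) (io.ReadSeeker, error) {
	data, err := io.ReadAll(r) // the whole piece must fit in memory
	if err != nil {
		return nil, err
	}
	return bytes.NewReader(data), nil
}

func main() {
	rs, err := seekableFromStream(bytes.NewBufferString("piece bytes"))
	if err != nil {
		panic(err)
	}
	// Forward, back a little, forward again - as the padder requires.
	buf := make([]byte, 5)
	rs.Read(buf)
	rs.Seek(-2, io.SeekCurrent)
	rs.Read(buf)
	fmt.Printf("%s\n", buf)
}
```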
E
So
Volker
wrote
an
in-memory
version
of
it,
just
writes
it
to
memory,
and
then
it
can
move
back
and
forth
from
memory,
and
then
we
found
it
was
still
using
double
the
amount
of
the
piece
size
in
disk.
It
turns
out
the
Merkle
tree
algorithm
in
there
that
it's
using
to
generate
the
compy,
which
is
just
companies
just
the
merkel,
proof
of
the
the
thing,
and
so
that
of
the
piece
and
it
so
does
a
it
divides
it
into
32,
bytes
and
then
hashes
them
and
then
makes
a
Merkel
treat.
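A toy version of that structure in Go: split the data into 32-byte leaves and hash pairwise up to a single root. The real Filecoin tree hashes into a truncated-SHA-256 field-element domain over Fr32-padded data, so this plain-sha256 sketch only shows the shape:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const leafSize = 32

// merkleRoot builds a binary Merkle tree over 32-byte leaves.
// Assumes len(data) is a power-of-two multiple of 32.
func merkleRoot(data []byte) [32]byte {
	level := make([][32]byte, 0, len(data)/leafSize)
	for i := 0; i < len(data); i += leafSize {
		var leaf [32]byte
		copy(leaf[:], data[i:i+leafSize])
		level = append(level, leaf)
	}
	for len(level) > 1 {
		next := make([][32]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			pair := append(level[i][:], level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0]
}

func main() {
	data := make([]byte, 128) // 4 leaves -> 2 hashes -> 1 root
	fmt.Printf("%x\n", merkleRoot(data))
}
```

A cache that keeps every level of such a tree resident holds roughly twice the leaf data in total (the level sizes form a geometric series), which lines up with the doubled scratch usage described above.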
E
Actually, the library that's doing the Merkle tree stuff can either take a disk cache or a memory cache, but in the Filecoin proofs it's hardwired in as a disk cache. So now I've got a link there just to the re-implementation of it.
E
That's what I did locally to test this. But in Lambda, I don't believe you have any way of creating or using tmpfs. And the other problem with using tmpfs under such a tight constraint is that you have to be absolutely sure that you leave enough for the runtime — you'd have to divide it perfectly. You know, Lambda gives you three thousand and eight megabytes, and during testing this executable is using two thousand nine hundred and ninety-five megabytes. So it's really tight. So, yeah.
F
E
64-bit. And this is the other dilemma, though: the further we move away from the Rust proofs binary, the more risk we introduce of generating CommP that doesn't match. So if we go ahead doing it this way, I think I would want to randomly sample some of the outputs and generate CommP with the traditional method, just to sanity-check that we're not hitting some weird edge case with this thing.
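A sketch of that sampling guard in Go. The `commPFast` and `commPReference` hooks are entirely hypothetical stand-ins for the re-implemented path and the traditional Rust proofs binary:

```go
package main

import (
	"bytes"
	"fmt"
	"math/rand"
)

// Hypothetical hooks: commPFast is the re-implemented path, and
// commPReference would shell out to the traditional proofs binary.
// Both are stubbed here so the sketch runs.
func commPFast(piece []byte) []byte      { return piece[:4] }
func commPReference(piece []byte) []byte { return piece[:4] }

// sanityCheck cross-checks a random sample of pieces: the further the
// fast path drifts from the reference binary, the more this guard matters.
func sanityCheck(pieces [][]byte, samples int) error {
	for i := 0; i < samples; i++ {
		p := pieces[rand.Intn(len(pieces))]
		if !bytes.Equal(commPFast(p), commPReference(p)) {
			return fmt.Errorf("CommP mismatch on sampled piece")
		}
	}
	return nil
}

func main() {
	pieces := [][]byte{[]byte("piece-one"), []byte("piece-two")}
	if err := sanityCheck(pieces, 2); err != nil {
		panic(err)
	}
	fmt.Println("sampled CommPs match")
}
```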