From YouTube: 🖧 IPLD Weekly Sync 🙌🏽 2019-10-21
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: As every week, we discuss anything the IPLD team has going on; we first go through updates from us and then through any urgent items. This time I even have an agenda item, so I'll start with myself. Last week I was still working on making [unclear] support [unclear]. I don't know if it's my brain or not, but I constantly forget how it works.
A: Every day I try to remember how it works, and then I code something, and then: no, this doesn't work. And then I have no idea if it's just complicated or if it's me, but I think I'm getting closer still.
A: Besides that, and this is not IPLD-related, when I do work I do Rust stuff, so I'm getting better there on foreign function interfaces.
C (Eric): I've been doing lots and lots of research spikes into the performance that we can expect from the code that is created by the code generation in Go. This is just a really deep topic, and it's very difficult to iterate on, because it touches everything at once. I know a couple of things that are useful as high-level guidance from previous experience with our refmt library, which does a bunch of serialization in some of our projects.
C: Right now, the main thing that I know from that experience is that, in making things perform well, the count of memory allocations (not the size, but the instance count of memory allocations) is going to be one of the heaviest-hitting things in holistic performance overall. I also know, the hard way, that redesigning things later to avoid this is tricky, so I've been trying to front-load a lot of those efforts.
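The allocation instance count Eric is optimizing for can be measured directly in the Go standard library with `testing.AllocsPerRun`. This is a minimal sketch, not IPLD code; the two build functions are hypothetical stand-ins contrasting per-field allocation with a single amortized allocation:

```go
package main

import (
	"fmt"
	"testing"
)

// buildPerField allocates one heap value per field, so the allocation
// count grows with the number of fields.
func buildPerField(n int) []*int {
	out := make([]*int, 0, n)
	for i := 0; i < n; i++ {
		v := i // each value escapes to its own heap allocation
		out = append(out, &v)
	}
	return out
}

// buildAmortized performs one backing allocation shared by all fields.
func buildAmortized(n int) []*int {
	vals := make([]int, n) // single allocation for every field's storage
	out := make([]*int, n)
	for i := range vals {
		vals[i] = i
		out[i] = &vals[i]
	}
	return out
}

// Allocs reports average allocations per call, the instance-count metric.
func Allocs(f func()) float64 {
	return testing.AllocsPerRun(100, f)
}

func main() {
	fmt.Println("per-field:", Allocs(func() { buildPerField(64) }))
	fmt.Println("amortized:", Allocs(func() { buildAmortized(64) }))
}
```

Running this shows the per-field build allocating dozens of times where the amortized build allocates twice, regardless of field count.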
C: It's also entangled with the API design questions from previous weeks, so that's fun. I think at this point the way forward is going to involve me building a table: figure out what the worst case is, then figure out which cases are better and what the trade-offs are. I have yet to find a single point in the solution space that is optimal in all cases.
C: There are some optimizations that I can make to amortize allocations, moving them down to one, very consistently. The trade-off is that the number of bytes taken goes up, and because of specific details of the way the GC and the heap work in the Go runtime, it turns out that those additional bytes will then get held on to for the lifetime of the node, rather than just the NodeBuilder.
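The retention effect behind that trade-off is visible with plain slices. This is a sketch, not go-ipld-prime code, and the names are hypothetical: a small field sliced out of one big amortized buffer keeps the entire backing array reachable for as long as the node lives, whereas copying pays an extra allocation but frees the buffer with the builder:

```go
package main

import "fmt"

// Node keeps only a tiny field, but what that field's backing array is
// determines how much memory the GC must retain for the node's lifetime.
type Node struct{ Name []byte }

// fromSharedBuffer slices the field out of one big amortized buffer:
// one allocation, but the 1 MiB backing array stays reachable via Name.
func fromSharedBuffer() Node {
	buf := make([]byte, 1<<20) // the builder's single amortized allocation
	copy(buf, "name")
	return Node{Name: buf[:4]} // cap(Name) is still 1<<20
}

// fromCopy pays one extra allocation per field, so the builder's buffer
// becomes collectable as soon as the builder goes away.
func fromCopy() Node {
	buf := make([]byte, 1<<20)
	copy(buf, "name")
	name := make([]byte, 4)
	copy(name, buf[:4])
	return Node{Name: name} // cap(Name) is 4
}

func main() {
	fmt.Println(cap(fromSharedBuffer().Name)) // 1048576: whole buffer retained
	fmt.Println(cap(fromCopy().Name))         // 4: only the bytes needed
}
```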
A: So I guess my takeaway is: perhaps test with old Go versions too. Obviously you can test with newer versions, but once we have benchmarks, just run them with an older Go version to see if it roughly matches or if it's completely off. I've only heard stories about it; I wasn't involved myself, but I heard it was a huge pain.
B: I had a week that was not dissimilar-sounding to Eric's, but certainly not as intense and serious. I spent most of my week on the Go schema library. I'm spending a lot of time on it and I'm sure it'll pay off eventually. It's not connected to go-ipld-prime yet, but I'm hoping to move it towards that, so that we have a bridge between the two and you can read schemas and even do codegen with them. It's getting into shape, I think.
B: The tests are all now in sync as well between the two. What I spent the week doing was one of those maneuvers like Eric's talking about, where you can't do it in small pieces; you have to do all of it. So I refactored the data structures that hold the schemas to get rid of the Go maps, which just made me so angry, and now there are basically no maps in the Go schema library.
B: Now it's all lists, plus custom JSON encoding methods to get them back into maps for JSON, and so now everything is ordered; it retains order. So now I can do the full test cycle where I read a schema, test the JSON version, turn it back into a schema again, and compare the raw schema text without worrying about the types jumping around between runs.
B: It just feels nice to have that stability. But then I started doing some extra experimentation with some other things and just flamed out in anger. Part of it was discovering that there's no way in Go to read JSON and know what order the JSON keys came in, unless you write a custom JSON parser, which we...
B: Right, so then I read a thread that made me rage. It was some of the Go authors, and somebody was saying: I'd really like to be able to get at the ordering of JSON keys. And they were like: no, what's the use case? Tell us the use case. Well, I want to do this. No, that's not a good enough use case. No, no, write your own. And I get it, I get the inclination: they want to keep it simple and approachable.
B: There are some extensions slated as a to-do for the core JSON encoding library, some hooks to get into the process, but anyway, I just came away with a bad taste at the end of the week. On top of it all, I was talking with a friend who has spent a bit of time with Go as well, and we ended up having the same things to rage about a bit. But I'm not going to say this is Go's problem: Go just doesn't want you to do that, and so you have to think differently. I'm totally not over that hump of thinking like a Go programmer yet, so my rage is more about it not working the way I wanted it to, rather than me thinking the way it wants me to. Anyway, it's a positive experience, because it's all learning and everything, just with a bit of rage.
D: Yeah, so last week, at the very end of the week, I got all of the js-unixfsv2 stuff working, which I've been cranking on for a few months now. So you can basically encode directories, files, and their data, and we have the full set of big-dag possibilities on the data side of it.
D: The data side ended up being sort of complicated, but also fairly reusable and a pretty generic problem, so I broke that into its own spec and sent that to the specs repo, called "data". I'm just calling it data for now; I'm not tied to any of the names, so if anybody has strong feelings about the names, just change them. I changed them so much during development that I'm actually equally unhappy with all of them, so I'm a total pushover.
D: In actual reader code, you can basically write readers for these advanced layouts, and then any changes that we want to make in the future to how the dag is actually constructed, changing the algorithms, doing flat or trickle dag or whatever the hell people want to do, or mutations over time, where you may have a perfectly flat dag and then you mutate it a little bit, stuff like that: all of that will just work with the same reader. So that's really nice.
D
That
means
that
we
don't
have
to
go
and
update
the
readers
in
every
implementation,
because
somebody
decided
to
encode
it
slightly
differently
with
a
different
algorithm
and,
of
course,
you
can
still
sort
of
like
encode,
the
name
of
the
algorithm
that
you
use
and
stuff
like
that's.
We
can
try
to
be
compatible
between
different
rebuilds,
the
same
data
anyway,
so
that
works.
That
was
a
real
pain
as
a
result,
the
eye
peeled.
The
image
in
library
is
pretty
close
to
finished
in
terms
of
the
feature
being
completely
future
complete.
D: So you're going to end up with schemas that refer to types that aren't in the schema, but that need to be there before it can actually generate an API. You don't really have independent schemas, because they are dependent on the code and the generator you end up using, if that makes sense. The moment you inject code into this, you can't really get away with just the schemas on their own.
D: So all the data stuff will be in a separate module. Anyway, this is quite nice. I think that for the next few weeks I'll probably write more tests and use what I have for a small project, in order to kind of put it through its paces before I bug the ipfs folks about it, because it's a bit ahead of schedule right now. Then I can just hand it off to the ipfs folks when the time comes. So anyway, that was nice. Oh, another thing that I did: I recognized, and this actually kind of sucks because the old way was nice, that in prior iterations of unixfs-v2 I would hand you a generator.
D
That
doesn't
work
anymore,
because
what
you
are
the
root
note
of
what
you
create,
isn't
necessarily
a
distinct
block.
So
what
you
end
up
creating
is
a
generator
that
you'll
block
until
it
yields
the
root
node,
and
then
it
actually
gives
you
back
essentially
the
the
instantiated
type
from
the
schema
gen.
But
it
could.
It
could
be
the
encoded
version
of
that.
If
we
wanted
to
write
but
but
effectively
like
you
need
actually
something
that
you
can
potentially
inline
into
something
else,
so
that
really
changed
how
long
the
EP
eyes
looked
like
they
yield
objects.
D
Now
that's
a
block
or
root,
and
it
looks
a
lot
more
like
other,
more
advanced
generators
that
have
like
type
variations
in
them.
But
that
was
an
interesting
thing
to
learn
in
terms
of
like
how
we
can
structure
things
and
not
always
just
to
clean,
like
hey
just
always
give
me
blocks,
because
in
to
some
extent
I'm
saying
okay,
you
figure
out
where
the
block
boundaries
are
in
the
sub
deck.
D: Yeah, there are a few things that aren't in the schema yet, just because I'm not using them, and those need to be added: renames aren't supported yet, and some of the representations aren't supported yet either. But there are places for them to go; it was architected with the understanding that those would come. I just haven't done the work to put them in yet.
D
It's
technically
de
Bourgh,
because
it's
an
inline
bytes
link
to
a
raw
block
by
it's
a
list
of
links
which
is
an
inline
list,
not
a
linked
list
and
then
there's
any
kind
of
nested
bag.
Beyond
that
anything
that
is
more
complicated
in
that
and
then
that
more
complicated
thing,
the
the
leaves
that
you
end
up
pointing
to
are
pointers
back
to
that
Union.
So,
within
that
big
mess
today
you
could
have
you
know
just
a
list
of
bytes.
It's
simpler
right.
D: So if you just have one, then this is actually kind of ideal, and if you have other algorithms, you can just plug them in. But yeah, I mean, we have some examples from ipfs where they have the flat dag and the trickle dag which, with graphs, I think we may not care about. But you still have the problem of: I create a flat dag, or some kind of balanced dag that I want, and then I mutate it. That happens.
D: But if you just say no, the layouts just consume these same types via the same methods, then you don't need to do that, and you can mutate. You can make really discrete mutations, and depending on how big those mutations are, they might, you know, create another huge section of the dag with another nested dag, or they might just, you know, take one of the lists and pop off a few things, or point one at a new piece, or whatever. And then, once you get into optimizing for different types of mutations, you're going to end up with a lot of different layouts. I think that's the thing that's really going to explode. The current layout algorithms are sort of hacks around the fact that we only had that one shape, and it was really expensive to do these round trips into sub-dag portions. But I think when you look at it, okay, I have a file that I'm only going to append to, and I expect to append to it often, that is just going to want a different layout.
A: All right, I have one internal item, which is just about the time of this meeting, because in Europe daylight saving time ends soon. For me personally, it would be great if we would just keep the UTC time, because then it would be one hour earlier for me, and probably in the US as well, but there it's in the middle of the day, so I don't really care.
D: Noted on the time zone. And then just a general note: I'm here next week, but the week after that I'm out the whole week. Oh yeah, another thing: the week after the week that I'm back, the week before lab week, is this planning week that they're doing, and so I'll have a template. There's a template.
D
I'm
gonna
work
through
it
and
try
to
figure
out
what
our
plan
for
2020
is
and
then
I'll
share
that
sometimes
this
week
with
everybody
and
I
just
need
you
to
get
in
your
feedback
in
the
next
few
weeks,
so
that
I
can
finalize
it
slightly.
But
it's
gonna
be
pretty
simple.
It's
basically
gonna
say
like
look.
We
we
are
working
on
getting
things
to
a
point
where
we
get
adopted
in
our
direct
dependencies
inside
of
protocol
and
so
ipfs
in
file
coin.