From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-03-28
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
So I have to look at the meeting notes, because today I don't have anything to share. Oh, and yes: if you attend, please add your name to the attendees list. The first name in the list is Mauve, so please share your news with us.
B
Yeah, hi. Last week was kind of busy for me with other things, so nothing super exciting, but I've been working some more on the IPLD URI spec. Eric and I met the week before and we did a whole bunch of drafting, writing a whole bunch of potential specs and what we wanted and didn't want, and it turned out to be really huge.
B
So right now I'm putting together, what is it, an exploration report, and hopefully I'll have something out there for people to look at. We still have some time in the Agregore mobile dev grant that we're working on, so we can maybe even put together an implementation to try it out, for developers to use. I'm pretty excited to see how that goes as well. We also got together to look at the checklist for IPLD implementers, and we did a whole bunch of restructuring.
B
So now there are a lot of to-do items and new sections that weren't there before. We're also working on updating the "what is IPLD" image in the docs. Eric put together an awesome diagram, on a whiteboard, that kind of gives this grand vision of how the different pieces fit together.
B
So right now we need to figure out how to keep the data in the old image visible in the page, and there is also a design request open, in progress, to make an actually clean, non-whiteboard version. Also, for the agenda, I wanted to bring up some thoughts.
B
I
had
about
setting
up
a
github
team
so
that
people
can
act
reviewers
in
a
pr
because,
right
now,
it's
a
little
unclear
as
to
who
should
be
reviewing
what
and
where-
and
it
seems
the
default
seems
to
be
to
at
warp
fork,
but
I
think
he's
already
got
a
lot
on
his
plate.
So
I
don't
know
if
that's
something
we
discussed
now
where,
after
everyone
else
does
updates.
D
Cool. So this past week hasn't been super productive for me, so I'm mostly going to cover what I did two weeks ago.
So I wrapped up the fuzzing of ipld-prime, meaning mostly bindnode and dag-cbor for graphsync. I couldn't find any more bugs in the parts that we care about, which is the schema and bindnode packages. We did run into some more dag-cbor issues; you could call them bugs, but essentially I wrote the fuzzer as per the spec, meaning:
D
You know, I expected the codec to require canonical encoding, but we don't actually require that on the decode side, which is fine; it's just that the specs maybe need to be reworded a bit to be clearer. And, as I wrote, there are multiple ways in which the decoder is lax.
So there are integers, there are floats, there's map order, there's a bunch of things, and I think with Eric and others we came to the agreement that instead of having one knob for each of them, we should just have one knob.
D
That knob is opt-in and says "be strict", because we want to keep the default as it is right now. I've linked the two PRs that work towards that; I'm not finished yet. I also have something at the top of my to-do list that I started but haven't finished.
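For context, the canonical-encoding rule a strict knob would enforce on map keys can be sketched in a few lines. This is a hedged illustration of the RFC 7049 canonical CBOR ordering (shorter encoded keys sort first, ties broken bytewise), not the actual dag-cbor implementation:

```go
package main

import (
	"bytes"
	"fmt"
)

// canonicalKeyLess reports whether key a sorts before key b under the
// RFC 7049 canonical CBOR rule: shorter encoded keys come first, and
// ties are broken by bytewise comparison.
func canonicalKeyLess(a, b []byte) bool {
	if len(a) != len(b) {
		return len(a) < len(b)
	}
	return bytes.Compare(a, b) < 0
}

// strictMapOrder reports whether a sequence of decoded map keys is in
// canonical order: the kind of check an opt-in "strict" decode knob
// could enable, while the lax default simply skips it.
func strictMapOrder(keys [][]byte) bool {
	for i := 1; i < len(keys); i++ {
		if !canonicalKeyLess(keys[i-1], keys[i]) {
			return false
		}
	}
	return true
}

func main() {
	ok := strictMapOrder([][]byte{[]byte("a"), []byte("b"), []byte("aa")})
	bad := strictMapOrder([][]byte{[]byte("b"), []byte("a")})
	fmt.Println(ok, bad) // true false
}
```

A single boolean knob then just gates this check and its siblings (integer width, float form) together, rather than exposing one option per rule.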
D
So
I
just
want
to
allow
it
to
work
with
nils
directly
if,
if
it's
already
an
option,
so
it's
not
rocket
science
in
terms
of
reflection,
but
I
want
to
add
enough
test
cases,
which
is
why
this
is
still
a
work
in
progress.
Also
a
bunch
of
code
reviews.
I
think
I've
unblocked
both
will
and
eric.
But
please
let
me
know
if
I
have
missed
something
and
an
fyi
to
rod
I'll
be
in
south
korea
in
two
weeks.
C
Right, I've been out sick, properly sick, and I'm just really coming back on board now. The only thing I've got in the notes is a pull request from Gozala on js-car that I thought would be interesting. They've got a use case for CARs in the dot-storage world, where they really are using it as a transport to throw blocks around, and so they're doing:
C
Smallish CARs to bundle blocks to send places. And the whole infrastructure of creating a CAR, writing it to a thing, then having to go back and redo the roots and all that, just gets in the way: you've got to write it to somewhere, and it's just a hassle. So he made a buffered writer where the main interface is that you basically give an estimate of how big the CAR is going to be.
C
At least how big the header is going to be: you can say how many roots you're going to put in, and it'll pre-allocate. It'll actually take an estimate of the whole lot, pre-allocate, write right into this thing, and then give you back just the slice of it that it needed.
C
So that's a neat little feature that gives them a really simple interface to just create this thing right into a buffer and then be able to send it off. I just thought somebody might be interested in that, so: PR 70 on js-car. That's it for me. Will?
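The shape of that writer can be sketched generically. This is a hedged illustration of the idea (pre-allocate from an estimate, then hand back only the used slice), not the actual js-car PR 70 API:

```go
package main

import (
	"errors"
	"fmt"
)

// BufferedWriter pre-allocates a buffer from a size estimate and tracks
// how much of it has actually been written, so the caller can retrieve
// just the used slice without any further copying.
type BufferedWriter struct {
	buf []byte
	n   int
}

// NewBufferedWriter allocates the whole buffer up front from the
// caller's estimate of the final size.
func NewBufferedWriter(estimate int) *BufferedWriter {
	return &BufferedWriter{buf: make([]byte, estimate)}
}

// Write implements io.Writer over the pre-allocated buffer and fails
// rather than growing when the estimate was too small.
func (w *BufferedWriter) Write(p []byte) (int, error) {
	if w.n+len(p) > len(w.buf) {
		return 0, errors.New("estimate too small")
	}
	copy(w.buf[w.n:], p)
	w.n += len(p)
	return len(p), nil
}

// Bytes returns only the portion of the buffer that was written.
func (w *BufferedWriter) Bytes() []byte { return w.buf[:w.n] }

func main() {
	w := NewBufferedWriter(64)
	w.Write([]byte("header"))
	w.Write([]byte("block"))
	fmt.Println(len(w.Bytes())) // 11 bytes used of the 64 allocated
}
```

The design trade-off is the one described above: a wrong estimate is an error rather than a reallocation, which keeps the "write once, send the slice" path simple.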
E
Sure. I just have a couple of small side things, or continuing things. There's a PR up that Dan took a look at, which adds a reification function to the Filecoin HAMT data structure so that it can be used as an ADL. In the same way that we use UnixFS as an ADL, we could take this HAMT and use it as an ADL, so that if you are, for instance, trying to pull part of the Filecoin chain, you can now directly ask for, like, the map...
E
Key of, like, "I want the miner with this ID", right? And even though that's actually a multi-layer HAMT data structure, you don't need to know exactly how it's laid out when you make that ask. That has some uses in answering specific queries as the Filecoin chain history gets backed up onto Filecoin itself and you then want to retrieve it without knowing the layout in advance.
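The point about not needing to know the layout can be illustrated with a toy two-level hash structure. This is a hedged sketch, not the Filecoin HAMT (no bitfield compression, no collision handling, invented names); the ADL's job is to expose only the `lookup` call and hide the layering:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// node is a toy stand-in for one layer of a HAMT: either a leaf holding
// a key/value pair, or a table of children indexed by a fragment of the
// key's hash.
type node struct {
	key, value string
	isLeaf     bool
	children   map[uint32]*node
}

func hashKey(key string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32()
}

// insert files the value under two 4-bit hash fragments (two layers).
// A real HAMT grows extra depth on collision; this sketch does not.
func insert(root *node, key, value string) {
	h := hashKey(key)
	i0, i1 := h&0xF, (h>>4)&0xF
	child, ok := root.children[i0]
	if !ok {
		child = &node{children: map[uint32]*node{}}
		root.children[i0] = child
	}
	child.children[i1] = &node{key: key, value: value, isLeaf: true}
}

// lookup is what an ADL would surface as a plain map access: the caller
// asks for a key and never sees the multi-layer walk.
func lookup(root *node, key string) (string, bool) {
	h := hashKey(key)
	i0, i1 := h&0xF, (h>>4)&0xF
	child, ok := root.children[i0]
	if !ok {
		return "", false
	}
	leaf, ok := child.children[i1]
	if !ok || !leaf.isLeaf || leaf.key != key {
		return "", false
	}
	return leaf.value, true
}

func main() {
	root := &node{children: map[uint32]*node{}}
	insert(root, "miner-42", "t0142")
	v, ok := lookup(root, "miner-42")
	fmt.Println(v, ok) // t0142 true
}
```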
E
So thanks for the review, Dan. I think there's still more to do. Probably we need to get that code sorted out: it lives in a Filecoin project, and I think it was initially maintained by specs-actors, so I need to figure out who the current owners of that code are, to make sure they're cool with it, in addition to us on the IPLD side being cool with it. And then, once we are all cool with it, we can add it to some of these ADL maps or registries.
C
In terms of specs-actors it's Wyatt, but in terms of reviewing other pull requests it's been largely Stephen and me for functionality stuff. But I think you were hitting up against question issues there with Alex; I think he seemed to have the biggest objections to the HAMT.
E
That was a long time ago; that was on the initial draft, when I was using it for state diff, like a year ago. So I think what is there now is different in substance from what the initial state of that PR was.
E
I would not look at the first round of comments on that PR, because it was essentially a different PR at that point. And then there was a question on how to resolve the ipld-prime extension to traversals for resumption. I'm most of the way through taking that and putting it into go-car so that go-car can use it, because go-car was really the only consumer that I had; I can just make it an interface there.
G
I do not have a highly populated list of stuff this week; it's been a crunch for time. One thing that does seem worth a brief mention: folks in the chat had brought up the use of CIDs as map keys, and I've written up a little bit of an issue to discuss that.
G
This is something that, long story short, people recurringly want. It is obviously a good idea and makes total sense, and yet it's a little bit tricky to pin down exactly how it should behave, because you end up in the juxtaposition of "well, this isn't a string" and "by the way, DAG-JSON exists". So one needs to navigate carefully in that vicinity and figure out how, indeed, we will make it fit together.
G
Put
this
declarative
description
of
where
a
lens
should
be
applied
into
other
protocols,
so
we've
hacked
this
together
for
some
special
cases
already,
which
totally
works.
For
example,
in
selectors
there's
a
I
forgot
what
it's
called
there's
a
clause
that
lets
you
say.
Please
use
an
adl
here
interpret
as
thank
you,
which
works
perfectly
and
does
what
it
says
on
the
tin,
but
it
was
originally
specified
for
adls,
and
so,
if
we
wanted
to
do
something
similar
to
that,
but
like
say,
please
use
a
schema
here.
It
would
need
more
and
different
parameters.
G
It
would
need
a
pointer
to
the
schema,
it
would
need
the
name
of
the
start,
type,
etc
and,
and
the
interpreters
clause
is
also
specific
to
selectors.
So
what
if
we
wanted
to
have
some
other
protocols,
which
also
need
to
invoke
lenses,
for
example?
G
Maybe
we
would
have
an
ipld
patch
protocol,
which
wants
to
say
please
use
this
data
transformation
to
navigate
before
you
apply
the
patch
and
also,
as
you
are
doing,
the
patch
apply
and
we
wouldn't
have
a
syntax
for
that
yet
so
this
lens
proposal
is
a
draft
of
such
a
syntax
right
now
it's
just
an
exploration
report
form.
So
feedback
is
highly
welcome.
Maybe
it'll
turn
into
code
eventually,
but
there
is
no
current,
like
actual
roadmap
schedule.
For
that,
so
just
feedback
phase.
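To make the "declarative lens invocation reusable across protocols" idea concrete, here is a purely hypothetical sketch of what such a clause might look like. Every field name is invented; none of this is the actual proposal, only the shape of the parameters mentioned above (a pointer to a schema, a start type, and a follow-on operation):

```json
{
  "applyLens": {
    "kind": "schema",
    "schema": { "link": "<CID of a schema document>" },
    "startType": "MinerState"
  },
  "at": "some/path/into/the/data",
  "then": { "patch": { "op": "replace", "path": "owner", "value": "..." } }
}
```

The point of such a syntax is that the same `applyLens` fragment could be embedded in selectors, in a patch protocol, or anywhere else that needs to say "reinterpret the data here before continuing".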
H
All
right
hi,
so
I
spent
some
time
last
week
on
this
protocol
that
I
have
named
reframe
for
lack
of
other
words.
That
is
a
response.
It
is
a
request
response
protocol,
I
guess,
sort
of
framework.
The
idea
is
to
basically
have
a.
H
Request
response
protocol,
where
you've
defined
the
methods
using
ipld
schemas,
and
so
they
can
sort
of
generically,
be
you
know
we
can
use
json
for
all
the
things
that,
like
json,
if
somebody
decides
that
they
want
to
use
a
binary
representation,
they
can
use
a
different
encoding
and
we
can
use
this
to
like
move
data
around.
The
initial
target
of
this
thing
is
for
routing
requests,
so
all
the
things
that
one
might
expect
the
ipfs
dht
to
answer
queries
about.
H
Who
has
my
stuff?
You
know
ips
records.
Where
is
this
peer?
Things
like
that
are
some
of
the
initial
methods
we're
targeting.
If
you
have
more
things,
you
can
add
them
there.
This
means
that
we've
sort
of
been
writing
a
bunch
of
schemas.
Some
of
them
have
been
pretty
easy
to
write.
Some
of
them
have
pushed
into
some
of
them
have
pushed
into
we'll
call
them
schema
boundaries.
These
things
might
be
interesting
to
talk
about.
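As a concrete sketch of "methods defined using IPLD schemas", a routing method's request and response could be declared like this. The type and field names are invented for illustration and are not taken from the actual reframe spec:

```ipldsch
# Hypothetical request: "who has this content?"
type FindProvidersRequest struct {
  key Link
}

# Hypothetical response: a list of providers.
type FindProvidersResponse struct {
  providers [Provider]
}

type Provider struct {
  id String
  addrs [String]
}
```

Because the schema is codec-agnostic, the same method definition can ride over DAG-JSON for debuggability or a binary codec for efficiency, which is the generality described above.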
H
The
two
main
ones
were
fallback
conditions
for
unions,
where
sometimes
you're
like.
I
would
like
to
have
an
empty
space
here
for,
like
I
haven't
figured
out,
I
don't
know
which
keys.
I
don't
know
what
I
have
a
map:
there
are
keys,
it's
a
keith
union.
H
Some
of
these
keys
aren't
defined
yet,
but
like
I'd
like
to
still
move
them
around
and
later
on,
someone
can.
Then
you
know
pr
the
schema
with
like
a
new.
You
know
what
does
this
new
key
do
right
and
take
it
from
there
and
then
the
other
is
for
non
for
having
unions
that
aren't
string
based
this
one's
less
less
critical.
It's just kind of awkward when the identifier for this thing is really a number or something, but all of our unions are strings.
H
So
do
I
just
I
guess
I'll,
just
I'll
just
encode
the
string
as
a
the
number
as
a
string
or
something
right,
which
feels
does
a
little
weird.
It's
not
so
dissimilar
from,
like
the
I
want
cids
as
links
thing,
but
it's
like
you
know
as
keys
in
that
it
explores
this
like
what
can
be
the
the
key
part
of
the
identifier
field,
but
I
suspect
that
whole
thing
is
grosser
for
roughly
similar
reasons
to
why.
How
do
I
put
the
links
as
keys
and
maps?
H
Is
a
little
grosser
so,
thankfully,
that
one's
a
less
big
deal
it'd
be
fun
to
talk
about
the
fallbacks
thing,
because
I
think
that
one
might
be
an
easy
one.
Also,
I
guess,
while
we're
reporting
things
we've
done
with
ipld,
that
might
be
fun.
I
did
some
hacking
on
like
a
bittorrent
file,
adl
and
b
in
code
codec.
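The fallback boundary can be shown in IPLD Schema syntax. The first union below is standard; the fallback arm in the second variant is invented syntax, exactly the gap being described, so that unknown keys could round-trip until someone PRs a real definition for them:

```ipldsch
# Standard keyed union: every key must be declared up front, and an
# undeclared key is a decode error.
type Message union {
  | Ping "ping"
  | Store "store"
} representation keyed

# Hypothetical variant (NOT valid IPLD Schema today): a fallback arm
# that captures any undeclared key instead of failing.
type MessageWithFallback union {
  | Ping "ping"
  | Store "store"
  | Unknown default
} representation keyed
```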
H
Mostly,
I
sort
of
grabbed
will's
code
for
unix,
fs
things
and
then
was
like
make
simple
and
it
seems
to
mostly
work
once
it
does
work.
There
will
be
a
repo.
It's
brought
up
some
interesting
questions
around
like
when
doing
backwards.
Compatibility
with
an
existing
thing.
Do
you
do?
H
Do you put the logic in codecs and have more codec things? Do you put the logic in ADLs? Do you split them, and how do you split them? The way I chose for now was to use basic serialization, just plain bencode, and then have all the BitTorrent stuff live in the ADL layer on top of the serialization thing. Yeah, and that's all for me.
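Bencode itself is tiny, which is part of why "keep the codec a dumb serialization layer and put the BitTorrent meaning in the ADL" is attractive. A minimal, hedged sketch of the scalar half of a bencode decoder (integers and strings only; lists, dicts, and real error handling are omitted):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// decodeBencode handles just the two scalar bencode forms, integers
// ("i42e") and length-prefixed strings ("4:spam"), returning the value
// and the unconsumed remainder. The split mirrors the layering above:
// this stays a dumb codec, and BitTorrent-specific meaning (file trees,
// piece hashes) would live in an ADL on top.
func decodeBencode(s string) (interface{}, string, error) {
	switch {
	case strings.HasPrefix(s, "i"):
		end := strings.IndexByte(s, 'e')
		if end < 0 {
			return nil, "", errors.New("unterminated integer")
		}
		n, err := strconv.Atoi(s[1:end])
		if err != nil {
			return nil, "", err
		}
		return n, s[end+1:], nil
	default:
		colon := strings.IndexByte(s, ':')
		if colon < 0 {
			return nil, "", errors.New("expected length prefix")
		}
		length, err := strconv.Atoi(s[:colon])
		if err != nil {
			return nil, "", err
		}
		body := s[colon+1:]
		if len(body) < length {
			return nil, "", errors.New("short string")
		}
		return body[:length], body[length:], nil
	}
}

func main() {
	v, _, _ := decodeBencode("i42e")
	w, rest, _ := decodeBencode("4:spamextra")
	fmt.Println(v, w, rest) // 42 spam extra
}
```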
H
Oh, I guess I should answer Daniel's question, or Petar can answer it, which is about the relationship between reframe and Edelweiss. I would say the difference is that one of these things is more targeted at specking out the wire protocols: the spec that's posted there has "here's a list of schemas that we know about, and you might want to use these things along with it".
I
Yeah, I mean, I was going to; if I get a slot I can, since I think nobody knows about it otherwise. I was going to briefly mention it, but to answer this question: Edelweiss is basically a language and a compiler. The language is a schema language which is a superset of IPLD Schema, and it's backwards compatible.
I
But
basically
it's
a
new
compiler
that
you
can
write
schemas
for
a
type
system.
Just
like
ipod
schema.
It
has
a
bunch
of
new
types
that
solve
some
problems,
that
sort
of
come
up
with
ipod
schema,
like
unions
and
singletons,
meaning
unions,
untagged
units,
so
structured
unions.
We,
you
know,
I
can
go
in
detail
in
a
little
bit.
I
It
also
has
other
things
you
can
define
services
and,
in
general,
the
compiler
framework
is
flexible
so
that
it's
easy
to
add
new
things,
because
we
expect
that
filecoin,
for
instance,
actors
are
going
to
want
to
be
able
to
send
lambdas
across
network
boundaries.
So
you
want
to
have
sort
of
types
for
these
things,
but
at
the
same
time
be
flexible
about
how
are
the
lambdas
encoded,
because
different
chains
have
different
ways
of
of
referring
to
a
function
instance.
I
So,
basically,
all
of
these
kind
of
problem
domains
are
in
the
scope
of
this
compiler,
and
so
it's
prepared
to
be
a
place
where
you
can
add
more
and
more
concepts,
and
the
second
part
of
the
compiler
is
that
it
has
a
code
generation
framework
which
makes
it
very
easy
to
write
generated
code
like
the
overhead
compared
to
writing
straight
up.
I
Gold
code
is
like
10
percent
or
so,
and
so
really
it's
a
framework
for
basically
playing
around
with
schemas,
adding
all
kinds
of
custom
objects,
and
this
framework
is
at
milestone
one
now.
So
it's
like
fully
functional
and
at
this
milestone
it
supports
so
it
it
has
the
the
type
system.
That's
basically
the
superset
of
ipod
schema.
I
That's the relationship, I guess. Since I'm talking, should I continue on with a brief introduction and give links?
A
I would quickly like to solve this code-owners and code-reviews question first, because this is where we can actually use the sync time, and then you go next. I mean, we should solve this in, like, two minutes.
A
Seen
in
the
meeting
notes,
we
already
discussed
it
there
kind
of
so
I
guess
the
question
to
move
is
like:
what's
the
what's
the
intent,
so
basically
that
you
get
co-reviewers
or
or
do
you
want
to
ping
people
so
like
it's
kind
of
like
two
things
or
do
you
want
both.
B
Yeah,
so
this
is
mostly
coming
out
of
trying
to
submit
prs
and
review
pr's
and
looking
at
stuff
in
the
ipld
repos.
It's
not
always
obvious
who
should
be
pinged
for
what.
So,
I
think
what
was
mentioned
about
having
like
a
code
owner's
file
would
be
useful,
for
example.
B
Recently
I
submitted
that
pr
with
updating
what
is
ipld
and
the
question
is
like
who
reviews
that,
because
I've
been
talking
to
eric
but
assuming
eric
should
review,
everything
seems
kind
of
rough
and
a
lot
of
pressure
on
him
or
like
what
happens
if
he's
not
around
or
on
vacation.
Who
is
the
next
person
to
reach
out
to?
B
I
don't
know
how
much
that
applies
to
like,
say,
specific,
go,
ipfs
or
sorry
go
ipld,
prime
repos
or
like
language
specific
stuff
feels
like
this.
Is
I'm
not
sure
if
it's
just
the
docs
that
need
it
or
if
other
repos
also
have
this
concern?
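For reference, GitHub's CODEOWNERS mechanism does roughly what is being asked for here: path patterns map to default reviewers, who are automatically requested on matching PRs. The team names below are placeholders, not real IPLD teams:

```
# .github/CODEOWNERS (hypothetical team names)
# The last matching pattern takes precedence, so the catch-all goes first.
*         @ipld/maintainers
/docs/    @ipld/docs-reviewers
/specs/   @ipld/spec-reviewers
```

This also answers the "who do I ping" question passively: a contributor never has to know the right person, because opening the PR requests the right group.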
B
I
think
like
looking
at
the
chat
or
at
the
hackamd
logs,
there
is
some
discussion
about
using
teams
and
also
how
teams
might
not
be
the
right
thing,
because
you
don't
always
want
to
ping
the
entire
team.
It
feels
like
a
reviewer's
group
might
be
useful,
but
I
don't
know
how
fine-grained
that
should
be.
F
Yeah-
and
we
can-
I
mean
so
you're
your
key
use
case-
whether
you're
submitting
a
code
review
into
ipld,
slash,
ipld,
the
docs
site
or
potentially
diplodigo
prime,
like
you
want
to
be
able
to
at
mention
someone
or
something,
and
you
have
it
show
up
in
someone's
queue
that
like
hey,
I
need
a
review
right,
and
so
we
can
certainly
create
additional
teams.
For
that
and,
like
I
said
we
don't
have
to,
we
don't
have
to
marry
our.
F
We
have
to
marry
the
teams
that
are
for
access
to
the
repos
as
the
ones
who
are
like
our
key
code.
Reviewers
are
maybe
our
key
key
maintainers
so
like
because
I'm
assuming
there's
really
a
handful
of
folks.
Most
of
the
people
on
this
call
that
we
like
would
want
to
be
looking
at
some
of
these
prs,
and
so
I
would
probably
just
create
a
team
for
you
know
for
that
case.
F
If
we,
if
we
don't
want
to
like
be
pinging
the
whole
go
team
or
javascript
team,
I
mean
we
can
even
create
a
team.
That
is,
you
know,
ipld,
and
you
know
pl
andreas
maintainers.
So
it's
not
trying
to
say
that
this
is
like
all
the
maintainers,
but
it's
people
on
this.
This
call
that
are
probably
closer
to
the
project
and
at
least
that
gets
you
more
of
the
way
there.
But
it's
not
trying
to
be
exclusive
that
we
are
the
you
know
the
only
maintainers
of
this
work.
H
I wonder if it's worth leveraging the plus-one/plus-two business, if IPLD is trying this (I don't know if it's the whole project or just go-ipld-prime that's trying it). You could just take all the people that are the plus-ones or plus-twos and put them in a list; that would also make it easier to track when someone should be added to or removed from that list.
G
Yeah,
okay,
yeah!
That's
me.
This
still
currently
only
resides
in
an
issue
in
go
ipld,
prime,
for
what
it's
worth
so
no,
it
is
not
discoverable.
My
apologies.
G
The
proposal
is
that
yes,
indeed,
we
do
need
to
have
more
reviewers
and
also
we
should
like
broaden
that
set
a
bit
and
the
the
arbitrary
numbers
there.
The
plus
one
and
the
plus
two
reference
comes
from
the
detail
of
the
proposal
of
like
in
order
to
merge
most
things.
One
should
go
for
a
total
of
a
plus
two
sort
of
a
score
on
it,
and
anyone
who
we
understand
to
be
like
well
versed
as
a
reviewer
can
get
a
plus
one.
G
So
if
you
get
two
independent
people
to
do
that,
then
you
proceed.
So
that's
a
proposal
that
I
kind
of
put
out
for
how
we
could
distribute
this
a
bit
more
for
the
go
ipld
prime
repo,
in
particular
we're
kind
of
in
the
trying
it
out
phase,
see
if
that
is
the
right
size.
If
we
can
actually
like
internalize
that
mental
model
and
if
it
provides
value
or
not,
I
wouldn't
mind
trying
it
more
broadly
too.
G
If
it
seems
helpful,
we
did
end
up
with
a
list
of
people
for
the
gliapild
prime
repo
identified
in
that
issue.
We
also
didn't
really
have
a
huge
discussion
yet
about
how
that
list
would
evolve
over
time.
So
right
now
that
list
there
is
what
eric
said
that
day,
honestly
capriciously,
I
might
have
forgotten
people
who
should
be
on
that
list.
I
might
have
yeah,
so
there's
still
work
to
do
in
developing
that
process
more
fully.
B
Yeah,
it
feels
like
that
list
might
be
ideal
for
putting
into
a
team,
and
maybe
having
docs
in
the
read
me
about
like
hey,
if
you're
contributing
ping.
This
group
for
reviews
I'd
be
down
in
also
like
helping
figure
that
out
for
the
ipld
ipld
repo,
since
that's
where
I'm
mostly
working.
I
don't
know
if
any
of
the
other
repos
are
interested
in
collaborating
on
that.
A
I
I
guess
in
the
past
it
has
been
that
mostly
everyone
was
subscribed
to
all
repos
anyway,
due
to
the
like
how
we
have
the
team
managed.
So
I
basically
read
all
the
notifications,
but
I
guess
it's
like
depends
on
like
if
people
are
like,
if
there's
only
one
or
two
people,
if
they
think
like
oh,
do
I
have
the
time
to
look
into
it
and
then,
like
I
would
say.
Basically
the
triples
you
are
working
on
are
the
ones
that
got
dropped.
A
The
most,
I
would
say
like,
for
example,
like
if
someone
opens
on
rust,
ipld
issue
like
on
some
rust
related
stuff.
I
hop
into
it
and
fill
it
out,
but
like
on
specs
and
talks
is
often
that
it's
like
a
little
just
like
exp
for
myself
is
like
oh,
hopefully,
someone
else
will
reply
and
then
so
yeah,
I
guess
yeah.
We
should
also
get
better
that
yeah.
F
So
I
guess
just
to
kind
of
closing
this
out
a
couple.
A
couple
of
thoughts:
one
is
pioch
who's,
a
productivity
engineer
on
pl
andres
has
been
setting
up
so
that
all
of
our
github
management
is
actually
in
github
itself.
So
we
can
actually
like
create
a
pr
saying,
hey,
create
this
new
team
and
assign
these
people
to
it.
It's
all
auditable
and
once
that
gets
approved,
it'll
all
go
create
you
know,
terraform
behind
the
scenes,
creates
the
resources,
etc.
So
it's
a
good
way
to
like
discuss.
F
Is
this
the
right
set
of
people
for
the
list
and
then
the
you
know
in
the
pr
captures
the
discussion
yeah?
I
I
think
what
you're
hearing
here
is.
There's
you
know
a
lot
of
it.
It's
kind
of
wide
open
for
someone
just
to
push.
You
know
to
suggest
something
and
get
something
across
the
line,
and
so,
if
you
don't
mind
doing
that
for
ipld
ipld,
that
would
be
great,
and
my
hope
is
that
we
could
do
that,
like
all
in
an
issue,
pr
that
ultimately
gets
merged
and
ratifies
it.
B
Then if anyone else comes along and wants to add it to another repo, it would be obvious. Yeah, it'd be cool to talk about how the go-ipld-prime repo is doing it so far, so I can learn; I'll follow up on that async, though.
F
Okay
yeah,
so
here's
someone's
talking
ipld
dev
but
yeah
thanks
for
driving
on
these
things,
and
we
want
to
get
get
you
unblocked
and
make
this
better
for
others
thanks
a
lot
mom
sweet.
Thank
you
very
much.
I
Okay,
so
I'll
just
so,
I
put
the
link
to
the
project
in
the
chat
and
I'll.
Just
briefly,
can
I
share
my
screen
here.
Yeah,
I
mean
so
I'll.
Just
briefly
talk
about
it
because
we
don't
have
time
to
like
go
into
details,
but
basically,
if
you
look
at
the
repo,
the
so
first
I've
kind
of
made
sure
that
all
the
documents
are
linked
through
the
home
page.
So
you
can
just
find
everything.
I
Essentially,
this
project
is
at
milestone
one.
So
it's
like
like
fully
functional.
If
you
want
to
write
rpc
services
and
just
general
ipod
schemas
that
you
want
to
encode
the
code.
I
Roughly
speaking,
like
I
said,
it's
a
superset
of
ipod
schema
and
all
the
code
for
encoding
and
the
coding
is,
is
code
generated
which
which,
which
means
the
following,
so
on
the
encoding
path.
I
On the decoding path, the decoders basically parse the data structures from a pre-parsed IPLD data model, so they're probably going to be a little slower than the alternative, bindnode, although I haven't checked. But for milestone two, essentially, I plan to square these inefficiencies away: basically, write code generation that natively implements NodeAssembler for all the types. When this is done, I expect that it is going to be faster than the other alternatives by an order of magnitude, because it will be both reflection-free and zero-allocation, so it should be faster.
I
So
in
the
continuing
in
the
roadmap
here,
you
can
read
about
how
we
plan
to
address
here.
The
variety
of
representations
that
ipod
schema
has
so
basically
one
of
the
kind
of
like
the
design
goals
which,
after
lots
of
conversations
with
people,
I
realized
this
like
an
important
goal,
is
to
basically
be
able
to
write,
schemas
and
be
able
to
capture
any
pre-existing
encodings.
I
So
there
is
this
notion
of
transforms,
which
at
least
superficially
sounds
like
what
eric
was
describing.
Lenses
are,
but
so
this
is
coming
in
milestone
two
and
this
transforms,
because
this
is
a
compiler.
All
these
things
are
code
generated
so
transforms
would
be
something
that
goes
from
ipod
data
model
to
ipod
data
model
and
it's
code
generated
and
is
aware
of
the
types
that
are
sort
of
at
the
end
of
the
of
the
encoding
or
the
coding
chain,
and
there's
something
that
you
can
chain
and
so
forth.
I
You
know
planning
to
add
lambdas,
which
can
which
can
have
sort
of
customizable
representation
based
on
what
system
the
lambda
lives
in
and
so
forth.
So
anyway.
So
this
is,
you
can
read
the
road
map,
the
big,
so
the
big
wins
in
the
existing
milestone.
One
are
essentially
kind
of
coming
from
the
union
type.
I
I will mention here, by the way (this is something that we can change), that IPLD Schema sort of uses:
I
One
word
union,
for,
I
guess
tagged
unions,
and
also
it
has
this
notion
of
kind
of
unions,
but
in
sort
of
like
the
type
systems
of
most
modern
languages,
there's
actually
two
different
types.
I
It
simply
represents
itself
as
the
type
that's
the
active
case
and
parses
itself
tries
to
parse
itself,
tries
to
parse
all
the
cases
that
it's
aware
of
in
sequence,
from
the
wire,
so
so
in
in
the
in
the
otherwise
compiler.
I
Basically,
the
type
names
are
union
for,
like
the
true
union,
which
doesn't
have
tags
and
inductive
is
for
what
is
called
the
uni
in
ipod
schema,
so
this
type
makes
it
possible
to
basically
substitute
if
this
type
is
very
powerful,
because
the
key
goal
of
this
is
that
when
you
have
any
kind
of
schema,
if
you
want
to
evolve
it,
you
can
always
substitute
any
element
anywhere
in
a
schema
by
a
union
of
what
its
type
used
to
be
and
new
alternatives
that
you
might
want
to
be
adding
the
so.
I
The
the
key
design
goal
here
is
that
we
don't
want
to
like
spend
too
much
time
perfecting
schema
designs
which
to
make
them
perfect
for
the
future,
because
this
never
works.
The
idea
is
that,
no
matter
what
schema
you
write,
you're,
never
backing
yourself
in
a
corner,
because
you
can
always
kind
of
use
the
the
union
type
to
like
expand
the
schema
at
any
point.
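That substitution move can be written out in plain IPLD Schema syntax (Edelweiss's own Union and Inductive types have their own notation, which is not shown here). A field's type is replaced by a keyed union of the old type plus a new alternative, so existing data stays describable; all names below are invented:

```ipldsch
# Version 1: the field holds a plain string.
type ProfileV1 struct {
  contact String
}

# Version 2: the field's type is substituted by a union of what it used
# to be plus a new alternative.
type Contact union {
  | String "plain"
  | StructuredContact "structured"
} representation keyed

type StructuredContact struct {
  email String
  phone optional String
}

type ProfileV2 struct {
  contact Contact
}
```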
I
I have here a document about how they're represented on the wire, and there's also a complete working example in the repo that you can build and run, and look at the generated code. So that's the main thing. There's one more thing, essentially, that I want to say.
I
I mean, first of all, it's a fully functioning system at the moment, but the way I think about it is not just as, you know, a fixed language for schemas.
I
The
key
thing
here
is
that
there's
a
very
sort
of
easy
to
use
code
generation
system
and
it's
and
it's
easy
to
well
relatively
easy
because
you're
adding
things
to
a
compiler
but
there's
a
compiler,
a
sort
of
pipeline
and
it's
easy
to
add
new
objects
like
adding
a
service.
The
notion
of
a
service
is,
you
know,
any
new
notion,
really
any
new
type.
Anything
is
actually
pretty
straightforward,
you
basically
add
asts
for
it,
and
then
you
have
a
stage
when
you
can.
I
Sort
of
generate
more
asds
from
it,
and
then
you
have
a
separate
stages
for
co
code
generation
and
the
code
generation
framework
is
something
like
the
go
template
system,
but
it's
much
fancier
because
it
understands
symbols,
so
you
can
type
pretty
much
straight
up,
go
code
and
just
have
wild
cards
where
you
want
to
insert
symbols
and
then
the
code
generation
framework
will
like
do
all
the
importing
and
aliasing
and
so
forth.
This
is
like
the
stuff.
That's
actually
hard
to
do
in
practice.
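Go's `text/template`, the baseline being compared against here, can already do the "nearly plain Go with wildcards" style; what the description says the framework adds is symbol awareness (import management, collision-free aliasing), which plain templates lack. A minimal sketch with invented names:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The template is nearly plain Go source with wildcards for symbols.
// A real codegen framework layers symbol handling on top of this:
// tracking which packages each symbol needs, adding imports, and
// aliasing on collision, none of which text/template does for you.
const getterTmpl = `func (x {{.Type}}) Get{{.Field}}() {{.FieldType}} {
	return x.{{.LowerField}}
}`

type sym struct {
	Type, Field, FieldType, LowerField string
}

// generate renders one typed getter from the symbol set.
func generate(s sym) (string, error) {
	t, err := template.New("getter").Parse(getterTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, s); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := generate(sym{"Profile", "Email", "string", "email"})
	fmt.Println(out)
}
```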
I
Okay, yeah. And the Reframe API is documented in IPLD Schema, because people are familiar with the syntax of IPLD Schema, and Edelweiss doesn't have a syntax.
I
You
know
we
can.
We
can
we
could
eventually,
when
we
have
parsing,
we
could
use
the
ipod
syntax
itself
but
yeah.
So
the
reframe
process
is
documented
in
the
ipv
schema
syntax,
but
it's
actually
implemented
in
in
otherwise
which
we
didn't
use,
because
presumably
people
are
not
familiar
with
the
advice
like
ast
language
for
defining
types.
H
Also,
to
make
it
a
little
easier
so
like
if
you
want
to
implement
a
client
for
one
of
these
things
and
or
server
and
like
you
know,
javascript
or
rust
or
whatever
that
you're
not
necessarily
like
rebuilding
like
I
need
to
make
the
compiler
output
to
my
language,
you
just
be
like.
Well,
I
already
have
a
bunch
of
this
ipld.
I
already
have
ipld
schemas
and
codecs
and
whatever
lying
around,
so
I
should
be
able
to
get
get.
You
know
a
version
of
this
going
pretty
quickly.
A
All
right
are
there
any
questions,
perhaps.
A
I
I
think
I
have
a
question
because,
like
the
the
goals
that
I've
thrown
are
pretty
similar
to
what
ipld
schemas
are
like,
for
example,
like
I
put
these
schemas,
where,
like
from
the
beginning
like
one
of
the
goals,
was
that
you
can
just
like
evolve.
The
schema
like
it's
also
like
one
of
the
major
goals
of
apple's
schema
or
also
like
the
code
generation.
So
the
code
generation
was
like
the
first
implemented.
I
Yeah, so, I mean, it does sound similar, but there are a few small differences which basically justify an entire new project, right? So, let's see. Code generation: IPLD Schema doesn't actually have a framework for code generation, and I think the big point here is that using just Go templates, it's impossible to actually write a proper compiler with code generation...
I
Precisely
because
of
like
things
like
juggling
symbols
where
you
define
them
and
so
forth.
So
so
the
reason
I
mean
long
story
short
like-
and
you
know
dan
can
correct
me
and
so
forth,
but
I
think
the
reason
bind
node
resource
to
reflection
is
precisely
because
there
are
some
things
that
are
very
awkward
to
code
generate
if
you
don't
actually
have
a
code
generation
framework.
Basically
so
the
the
the
other
thing
is
so
oh
yeah.
So
the
other
thing
is
like
I
wanted
to
the
couple.
I
You
know
jason
over
the
wire
with
some
other
service
and
uses
one
representation
for
a
specific
struct,
and
then
it
talks
cbor
and
uses
a
different
representation
when
it's
you
know
talking
to
file
coin
or
like
saving
things
to
a
file.
I
So, basically, I might want to have a struct that uses the map representation on the one hand and the listpairs representation on the other hand, and the current type of IPLD Schema code generators don't have a way of doing this; you would get two separate objects.
I
So
that's
what
I
mean
by
decoupling,
the
otherwise
compiler
will
actually
create
one
user
type.
That
represents
the
data
itself,
with
with
the
transformation
logic
being
other
objects
that
you
can
at
runtime
sort
of
you
know
attached
to
the
object
and
use
it
with
use
the
same
struct
type
with
different
representation
paths.
So
so
decoupling
types
and
representations
was
one
of
the
sort
of
things
that
was
awkward.
I
No, you just get the representation as another object in Go that you can chain at runtime. Basically, these are the transforms that I was referring to: these objects implement Node and NodeAssembler, you get them separately from the struct itself, and you can then say "I want to decode this object" by essentially chaining objects that implement NodeAssembler and Node for the two directions.
G
If two type definitions are structurally identical, that copy actually no-ops, and it turns out we use this heavily in the existing codegen implementation, because that's how we switch things between being the type-level representation and the representation-level representation for free: they're the same structure, and so that flip is for free. Sure, I mean, it would be distinctly... yeah, you'd be likely to use that with two different schemas, but it would, yes.
I
So, sure, there's always a way to hack around it, but there are multiple angles here. This compiler is not just meant for Go. The big point is that I want to have a compiler which is, from the start, ready to generate code in different languages. So while, yes, in Go it maybe happens to be the case that you can,
I
You know, memcpy or whatever; that's generally not something you want to do in a strictly typed language. Sure, you can probably even do it in Rust and so forth, but it's an awkward thing to do if you can code-generate something that lets you be strict in the language itself. But the bigger thing here is, let me just give you examples of why.
I
But then, when I'm building a service, for instance, as a new concept, the user defines the service in the AST, and then the compiler takes the AST definition and generates another AST of all the types that will be necessary for the service itself.
I
So in other words, it's feeding on itself, and then it goes off and generates all these things, because down the pipeline they can already be code-generated and so forth. So the point is that the compiler has a bunch of stages, and you can intercept and work in any one of them. You start from ASTs, and you can define any object there.
I
Then these ASTs are processed and all the linking is resolved, and at that stage, where all the objects are defined and the links resolved, you can do whatever manipulations you want, if you want to generate new objects and so forth. Once you're done with this, you can push the generation plans, which are basically descriptions of all the objects that you want, off to the code generation, which is a completely separate step, and which is deliberately dumb.
I
It just takes plans, but now these plans are hyperlinked together, and they all have assigned names in the target language, Go or otherwise. It makes it easy to implement complex things if you just have clean accounting between all the stages and so forth. And the lack of this is the reason why bindnode has to resort to reflection, basically. It seems like it's a lot of work to escape. Okay.
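The staged pipeline being described (parse, resolve, mid-pipeline manipulation, then plans handed to a deliberately dumb backend) might look something like this minimal sketch. All names here are invented for illustration; this is not the real Edelweiss code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Stage 1: raw type definitions, as the user wrote them.
type TypeDef struct {
	Name   string
	Fields map[string]string // field name -> type name
}

// Stage 2: Resolve checks that every referenced type name is defined
// or primitive, so later stages can assume all links are valid.
func Resolve(defs []TypeDef) ([]TypeDef, error) {
	known := map[string]bool{"String": true, "Int": true}
	for _, d := range defs {
		known[d.Name] = true
	}
	for _, d := range defs {
		for f, t := range d.Fields {
			if !known[t] {
				return nil, fmt.Errorf("%s.%s references unknown type %s", d.Name, f, t)
			}
		}
	}
	return defs, nil
}

// Stage 3: Plan is a flat description handed to the backend. It already
// carries the assigned target-language names, so rendering needs no decisions.
type Plan struct {
	GoName string
	Decl   string
}

func MakePlans(defs []TypeDef) []Plan {
	goType := map[string]string{"String": "string", "Int": "int64"}
	var plans []Plan
	for _, d := range defs {
		fields := make([]string, 0, len(d.Fields))
		for f, t := range d.Fields {
			gt, ok := goType[t]
			if !ok {
				gt = t // user-defined types keep their names
			}
			fields = append(fields, fmt.Sprintf("\t%s %s", f, gt))
		}
		sort.Strings(fields) // deterministic output
		plans = append(plans, Plan{
			GoName: d.Name,
			Decl:   fmt.Sprintf("type %s struct {\n%s\n}", d.Name, strings.Join(fields, "\n")),
		})
	}
	return plans
}

// Stage 4: the "dumb" backend only concatenates plans.
func Render(plans []Plan) string {
	var out []string
	for _, p := range plans {
		out = append(out, p.Decl)
	}
	return strings.Join(out, "\n\n")
}

func main() {
	defs := []TypeDef{{Name: "Person", Fields: map[string]string{"Name": "String", "Age": "Int"}}}
	resolved, err := Resolve(defs)
	if err != nil {
		panic(err)
	}
	// A mid-pipeline manipulation: derive a new type from an existing
	// one, the way the compiler "feeds on itself" before codegen.
	resolved = append(resolved, TypeDef{Name: "PersonEnvelope", Fields: map[string]string{"Payload": "Person"}})
	fmt.Println(Render(MakePlans(resolved)))
}
```

The clean accounting between stages is the point: each stage consumes a fully resolved artifact from the previous one, which is what generated code gets for free and reflection has to reconstruct at runtime.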
G
Bindnode resorts to reflection because that's useful, and people like not having generated code, for what it's worth. I also completely buy and believe you that the way I wrote the Go codegen stuff is terrible and unmaintainable, and I regret almost every architectural choice in it. So your arguments are in favor of making a code generator pipeline, and I'm so excited that you're doing that. But don't throw things that use reflection under the bus entirely either.
G
It's both, right? But it's situational. There are some places where performance has been super critical in codegen, yes, and allocation minimization, oh my god, is it worth it. And there are other places where somebody just wants to build a hello world, and if we ask them to use a code generator pipeline at all... yes.
A
B
H
H
So yeah, I think my take on some of the reason for the differences here: one of the snazzy things that I think is largely doable with big parts of the IPLD stack, whether it's the codecs or the schemas, is the ability to represent existing stuff in the way that your target language wants to see it, right? It's like: okay, I have the data, I tell you how it was represented, so you can properly decode it into the form that I'd like to see it in. Which is a little different than building a thing that is meant for protocols, in the sense that I don't think it's a goal of Edelweiss at all to be able to describe every existing thing; it's rather: how can we build them in ways that leverage a lot of the IPLD tooling to do interesting things? And that difference is sort of a difference in mindset, where I'm not trying to be backwards compatible with everything existing; I'm trying to build other things. That sort of both narrows your scope and expands some of the things that you can do instead. And so I feel like this is sort of a decoupling thing between, like, okay,
H
I need a compiler framework, because the way that we're doing codegen with, you know, the gengo stuff is no fun. Which is separate from: the IPLD schema syntax does not let me describe all the things I want to describe, or the way I want to describe them. And in this case we're sort of mistaking what's being described and the syntax for each other, because protobufs are trying to describe exactly that; they're like: I'm building a new thing, here's my protobuf, right? So when I want to build a new protocol, here's my protocol with those protobufs. Whereas IPLD Schema is like: I want to represent existing data in a way that is useful to me, which may not mesh exactly with the protocol business, which is why I think it's slightly different.
D
Daniel, yeah. I'll be very brief, just to expand a bit on what Eric said about reflection. I don't really have a horse in any race, and I don't think it's even a race, because it's apples and oranges comparing, for example, bindnode with Edelweiss: bindnode is a much smaller project, or package if you want to call it that. And I understand that Edelweiss as a compiler is targeting multiple languages. But at least talking about Go itself:
D
Code generation is fine for some things, but my take is that for things like protobuf or schemas it's not the best fit, because you end up being at the mercy of the code generation and the types essentially get handed to you. So I find that that's not the best workflow, at least for Go, and I think reflection can be made faster than most people think. So that's the reasoning behind it.
D
I
You want to have a type system that is truly language-agnostic and, like I alluded to earlier, it basically has to fit every possible language, and it has to have lambdas. And actually, I will say something that's counter to what was just said.
I actually do have as an objective to be able to take any pre-existing protocol, no matter what, it doesn't even have to be in the IPLD data model, and to describe it in this universal type system.
I
The reason is fairly obvious, right? Our goal is to connect blockchains from all kinds of programming languages and companies and so forth, and when you're trying to truly be language-agnostic, there is an intellectual question: I'm going to take protocols from all kinds of places, but the only way I can manage so many different protocols is to have a type system that sits on top.
I
One that is simple enough, but expressive enough to talk about everything. And so IPLD Schema was close to this, but it's not close enough: in particular, it's missing unions and singletons, and also a whole range of primitive types that have to be there. So the point is, my goal is to have a lingua franca for protocols. And this is not a one-off where we make the type system once and it's frozen.
I
It's going to have to evolve: there are going to be new primitives, ideally not new composite types, but lots of other little things. So my point is that I need a framework for a truly language-agnostic type system for protocols, one that can handle any pre-existing protocol and sets a good standard for writing future ones.
I
So Go is a zero concern here. And in general, when you want to have a lingua franca like this, you have to build a compiler, because the fact that you happen to be able to do something in Go with reflection is completely unhelpful if you're shooting for the stars, so to speak.
I
Does that make sense? The language design is the highlight here. And then of course the implication is: if you're going to have a language that sits between all programming languages, of course it has to code-generate. It's not an option to use the varying reflection systems of the languages being targeted, which is Go and Rust and Julia and Python at minimum.
I
I
And, you know, the problem is: when one puts a lot of work into making something work in Go, that's great, but it helps in no way with moving on to other programming languages. So anyway, that's the answer: it's the language design and the type system that has to absorb everything, and that's why Go is not a guiding design concern here. Okay.
A
Yep, thanks. All right, so we hit the hour. Thanks everyone for attending, and see you all again next time; perhaps we can even discuss this there again, if anyone's interested. All right, goodbye everyone, and see you all in two weeks. Bye.