From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-09-08
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's September the 7th, 2020, and as every week we'll go over the stuff we've worked on in the past week and plan to work on, and then discuss open issues, if there are any. So, I'll start with myself. Last week I found some time to work on rust-multihash. The plan is to merge tiny-multihash into rust-multihash, basically merging it upstream, and in order to do this I'm fixing any outstanding things so that tiny-multihash has feature parity with rust-multihash and the merge is easy.
A
And while I'm doing this I'm also improving the tests and finding bugs, obviously. So I'm fixing bugs, and the cool news is that today or yesterday someone posted that one missing feature is adding BLAKE3 support, and someone from the community is on the way to contributing this. So that's pretty cool: I don't have to do all the work, there is help from others. I've also looked into rust-cid, which of course then needs to use the new rust-multihash once it's merged, and I feared it would be a lot of changes, but actually the changes are pretty small, so that's also pretty cool. Once the rust-multihash work has merged, upgrading rust-cid is just a single PR and pretty straightforward. Hopefully the actual merge will happen this week, but it might slip into next week. That's all I have, and next on my list is Daniel.
B
I also asked quite a lot of questions on Slack, so thanks for helping with those. I also seem to have found a way to make GitHub Actions hang on Mac. I was talking to a couple of developers there and it seems like the process on CI just drops, so they're going to look into that.
C
So I've been doing a lot more writing stuff up, which is fun. The amount of write-up debt is, I think, also kind of substantial, so there's a couple of new issues that are brain-dumping some stuff around error-type thoughts. There's a lot of errors in the library that need improvement, and I'm thrilled that Daniel's around and can help us out with some of those.
C
I did a little more work on codegen and schema stuff. The test coverage around the codegen stuff is continuing to expand, using the new harness structures that I started working with last week. I added a bunch more tests around the union features especially, and found several bugs there, so that was nice. Those are now fixed; for a while iterators on unions would hang. Test coverage: it's important.
C
So with that fixed, I was able to go on and work a little bit more on the demo for the schema-schema, and that demo has now expanded quite a bit. It's now using the kinds of unions in the places where it should be. Those didn't exist the last time I was working on this demo, so now I got to introduce them, they work, and the whole thing is rigged up to a demonstration of parsing.
C
You take the node prototype from one of the codegen types and you hand that, along with the string, into JSON unmarshal, and it goes. There was no code that needed to be written for this to work at the end.
C
That has also produced some learning experiences. I ended up massaging the schema-schema JSON a little bit in a couple of ways, and I'll talk about that more later. I've also realized with some intensity that the error handling experience throughout this whole thing again needs a lot of work. It was a little bit tricky, in a lot of cases, to figure out what was wrong when I had a mismatch between the schema as I had typed it in programmatically and the document that I was parsing and trying to match against it.
C
So those are very interesting. Some of it is proposing changing one of the core unions in the schema-schema to be a keyed representation where it is currently inline, and that one causes the structure of the document to change kind of drastically, at least until you turn off the whitespace diff, and then it's not so drastic anymore.
C
That's a change that I made primarily because it meant I can do codegen for it now, because codegen for unions with inline representation is not yet implemented, but there's a lot of reasons why I think we might be inclined to recommend keyed unions as one of the most preferred choices in new designs. So I want to talk a little bit more about that, and consider whether we want to dogfood it in the schema-schema and, if not, why, and be ready to really defend our reasoning on that.
C
And then there's a couple of other changes in there that I can't remember off the top of my head right now. Oh, unit types: I haven't quite finished a new issue to write up the how and why and the what around units, but I think that is something we should look at adding to the schema system in the near future. A unit type is basically a type that has a cardinality of one.
C
Oh, we had some conversations in recent weeks about the identity multihash, and I want to forward a little bit of news from Ian from the Peergos project. We were chatting this week and he was working on something where he had previously wanted to use the identity multihash, and he implemented the code to work without it, and it was not as big of a problem as he had previously feared. So I think this is probably good news.
C
I'm going to try to beg him to write up an experience report about that, but it seemed to work out anyway. Figuring out what to do about it took him roughly a day, and he said the number of lines of code that are user-facing after that were quite minimal. So hopefully we can get him to talk about how that worked.
C
Yep, and I'm also going to try to talk to Ian a bit more about UnixFS v2 futures in the next week, because he's started to work on some other experimental file system designs, and I don't expect that our work will converge on that, because he's got use cases in mind that involve things like capability bits and permission systems, which are just going to be full of application-level stuff.
D
Okay, where's my tab... okay, so the story of my week is: I'm slowly navigating my way to some of this test fixture stuff that has been on my list for so long and came back up again with the Filecoin work. It's the reason I touched CAR files in the first place, way back so long ago. Anyway, I yak-shaved my way to a new dag-pb implementation in JavaScript, because my test fixtures for CAR files include some dag-pb stuff, and with the migration to the new multiformats stack that was the most awkward piece of it all; it just makes a mess of the tests. Michael hadn't done it, and I didn't see what the path was to getting that done, so I just did it. So there's a new dag-pb implementation for the new multiformats stack.
D
In issue number one in that new repo, I started a discussion about how we interact with the new block abstraction and these data shapes that come in that aren't their final form. The reason that's an issue is because, in the block abstraction, Michael has built this reader functionality, a reader abstraction that takes over all of the path resolution stuff that was built into all of the old codecs. In any of the other JS codecs, the path resolution is deferred down to the codec.
D
So whenever you want to do a resolve, it calls this resolve function on that codec, and that does the resolution for you, which was always a lot of duplication: you can find the same code again and again across these codecs, because most of them do pretty much the same thing. There's some variation, but it's very similar, so Michael pulled that up into the block interface.
D
So if you do path resolution and you've got this block, it will mostly do it all locally, and the way that works is that in JavaScript we're dealing with just plain objects. When we do a decode, we instantiate an object that is the shape of the data in the data model, and then we can path through that object: any path tells us how to get through that object.
D
We can do that in memory, and then, when we serialize, we take an object of the appropriate shape and turn it into the bytes for that format. That's fine with dag-json and dag-cbor, because it's just a matter of how you serialize these things into those codecs, and they're flexible, so you can take most shapes of data in memory and turn them into dag-cbor or dag-json, great. But we have these other codecs that are not flexible, like dag-pb.
D
So when you deserialize, when you decode a block that is dag-pb, you get back this object that has a Links property and a Data property, and the Links property is an array of objects, each of which has three properties: Hash, Tsize and Name. And there's a few rules around these things: what the defaults are, what happens if they don't exist, even the rules around what happens if the Data property is empty, that it gets transformed to a zero-byte array.
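For illustration, here is a minimal TypeScript sketch of the decoded shape being described; the property names follow the dag-pb spec as discussed, and the CID type is just a stand-in rather than the real multiformats class:

    // Stand-in for a real CID; the actual type comes from the multiformats stack.
    type CID = { bytes: Uint8Array; toString(): string };

    // A single dag-pb link: Hash plus the optional Name and Tsize fields.
    interface PBLink {
      Hash: CID;
      Name?: string;
      Tsize?: number;
    }

    // The decoded form of a dag-pb block: a Data byte array (coming back as a
    // zero-length array after a round trip rather than being absent) and a
    // list of links.
    interface PBNode {
      Data: Uint8Array;
      Links: PBLink[];
    }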
D
So that's fine. On resolving paths there are some questions about dag-pb, because there's this whole named-links thing, which you'll see if you look at the dag-pb spec. But we have this thing where in JavaScript, when you're instantiating a new dag-pb block, you can provide an object that allows some variation; it doesn't have to be perfect, and the codec will get it right for you before serializing.
D
A couple of examples of that: the easy one is that the Data property could be null. You could just provide no data at all, and that's okay, and then it will serialize a zero-length byte array, and when you deserialize it'll come out as a zero-length byte array. So what you pass in might have a null, but when it does a round trip it'll come out as a zero-length array. And the other example is, well, there's some flexibility in the new stuff I wrote. This is why this came up, because I was writing flexibility in, because it was just nicer.
D
You could just pass in a byte array and say "serialize this byte array for me", and it would take that byte array and put it into a Data property and make no links, and it would give you a nicely formed dag-pb object, as if you were serializing just a byte array, but packed nicely. Same thing with the links objects: you could just give a CID and it would turn that into the right object type, and a bunch of things where you can just make the format nicer if you're building it from scratch, because you know there are some holdovers from history that make that format not so pleasant. So you can pass in all these different objects of different shapes and it will do a round trip for you, and they'll come out different. But then, when we have this block abstraction, when you instantiate a new block from the data model and it's got a shape that doesn't quite match what would occur in a round trip, this reader thing, the path resolution, will resolve according to the data you've passed in, not according to how it will be if it does a round trip.
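A rough TypeScript illustration of that mismatch; the resolve helper and the shapes here are hypothetical stand-ins for the block abstraction's pathing, not its actual API:

    // Tiny stand-in path resolver: walks '/'-separated segments through plain
    // objects and arrays, returning undefined when a segment can't be followed.
    function resolve(node: unknown, path: string): unknown {
      let cur: any = node;
      for (const seg of path.split('/')) {
        if (cur == null || typeof cur !== 'object') return undefined;
        cur = cur[seg];
      }
      return cur;
    }

    // A fake CID value, purely for illustration.
    const someCid = { '/': 'bafy...example' };

    // Loose input: no Data at all, and a bare CID where a link object would go.
    const loose = { Links: [someCid] };

    // The same node as it would look after an encode/decode round trip.
    const roundTripped = {
      Data: new Uint8Array(0),     // absent Data comes back as empty bytes
      Links: [{ Hash: someCid }],  // a bare CID becomes a proper link object
    };

    console.log(resolve(loose, 'Links/0/Hash'));        // undefined: wrong shape
    console.log(resolve(roundTripped, 'Links/0/Hash')); // finds the CID stand-in

Pathing over the loose object answers according to the shape that was passed in rather than the round-trip form, which is exactly the discrepancy being described.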
D
So you get really weird side effects of that. If you were building a system where you were creating the blocks in memory and not doing a round trip (not deserializing, just creating them and using them as you created them), say you had a UnixFS system that was sucking in a file system and you also wanted to browse it in your file browser at the same time, while you were processing it: before even serialization you might throw these block objects into that, and if you were doing any pathing, the paths would be wrong, because the data shapes are not in their final form. So there's this question now about what we do about this block abstraction. One option is to force a round trip, which seems really wasteful.
D
If you were never going to encode... like, maybe you have a case where you don't actually encode, you're just using them as an abstraction. Another proposal is to have a prepare method of some kind for each codec, where the codecs have a way to register that they need to do some data preparation. Another example of this: in JavaScript we can have objects that have extraneous properties on them and they just get ignored. So if you were to do a round trip and you had passed in an object with a property foo, you wouldn't error on that, because it's not looking for extraneous properties, but it would disappear on the round trip. But if you didn't do a round trip, your path to foo would resolve, which is just silly.
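As a sketch of the prepare idea for dag-pb (the names and shapes here only illustrate the proposal, not an agreed API): prepare builds a new object in the exact round-trip shape, so extraneous keys like foo never survive and encode can stay strict:

    interface PBLink { Hash: unknown; Name?: string; Tsize?: number }
    interface PBNode { Data: Uint8Array; Links: PBLink[] }

    // Hypothetical prepare(): copy only the properties the codec cares about
    // into a fresh object, normalising as it goes.
    function prepare(input: any): PBNode {
      const links: PBLink[] = (input.Links ?? []).map((l: any) =>
        l && typeof l === 'object' && 'Hash' in l
          ? { Hash: l.Hash, Name: l.Name, Tsize: l.Tsize }
          : { Hash: l } // a bare CID becomes a link object with just a Hash
      );
      return {
        Data: input.Data ?? new Uint8Array(0), // missing/null Data -> empty bytes
        Links: links,
      };
    }

    // The extraneous foo property simply never makes it into the prepared node.
    const node = prepare({ Data: null, Links: ['bafy...example'], foo: 'dropped' });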
D
The other thing that had a lot of discussion, back to the CAR stuff, was the datastore interface that I've been using to write these abstractions over block storage in JavaScript. It comes out of Filecoin, and even initially it was like: this doesn't quite fit. It's got a lot of legacy ideas in it that just don't apply to what we want to do, and I just got to the point where it's too frustratingly wrong for a plain JavaScript IPLD block storage system.
D
So there's a discussion in the CAR repo about a new storage abstraction there, an API for that, and Gozala has been very helpful as usual in coming up with ideas about what that abstraction should look like and how the API should feel. So I'm in the middle of implementing that, and the idea is that it should be transferable to these other storage things we have. Michael's been storing IPLD in git LFS, for instance, and that same abstraction should be mirrored there, so you can just pass it around and be agnostic to where you're storing things.
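For context, the rough shape under discussion is an async block store keyed by CID; this is only a sketch of the general idea, not the API being settled on in that issue (CIDs are shown as strings to keep it self-contained):

    // A minimal block-store sketch: everything is addressed by CID, and the same
    // interface could sit in front of a CAR file, git LFS, or anything else.
    interface BlockStore {
      put(cid: string, bytes: Uint8Array): Promise<void>;
      get(cid: string): Promise<Uint8Array | undefined>;
      has(cid: string): Promise<boolean>;
    }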
A
Okay, any other updates from anyone?
A
I have one comment about issue one on dag-pb, because as you explained it I was wondering (I also posted comments on the discussion, but as you explained it): could it perhaps make sense that the codec's encode method always expects the correct shape, the final shape it should have, and that we then have this preparation method also on the codec? Because people are expected to interact through the Block API anyway.
A
So from the Block API perspective, you can just pass in whatever you like, and then it calls this preparation method, but on the codec level itself. So basically you don't push it down into the encode method, but rather define that encode methods always expect the correct shape of the data. I don't know if this is too strict, but it might make things cleaner in a way, because I also felt it's kind of strange where to put things.
D
Yes, that's where we've landed in that discussion, and where I said I would go and do experiments in that direction. I do have a concern about it, though, which is that we've got two layers here: one is multiformats and one is Block.
D
Interacting via Block all makes sense, because you're just putting in objects and then calling encode and decode methods on Block, and it does all the mechanics underneath, so you can hide all those details there and it would work quite nicely. But you can also interact with this through the multiformats layer, where it's a lot more manual: you're saying "give me the encode method for this codec" and "give me the decode method for this codec", and then you're doing it all yourself, which is what Block takes away the pain of, but you can still do it that way.
D
If you just use multiformats directly, then you would be in a situation where codecs need to decide how much validation they want to do in the encode method, and that ends up itself being wasteful. I'm thinking in particular here of the Bitcoin format, where you have very complex shapes and they're very specific: it has to have this property and this property, and this property has to be this type, and then it shouldn't have any other properties.
D
And so, if I've got a prepare, I can easily take input and put it in the right shape, and that would mean taking things like hashes and turning them into CIDs and removing properties that shouldn't be there. I can do that in a prepare method, but then, when I get to an encode method, I'm probably going to have to do it again anyway, because it's a little bit too much to say "I can assume that it's in the right shape". That's going to lead to a lot of potential errors of the form "hey, I was assuming that I could read this property here, because you should have prepared this data beforehand". There's something a bit too trusting about an API in JavaScript that doesn't check the data going through it and just assumes "hey, you should have already prepared this thing". Oh, that makes it awkward, yeah.
D
But I did have a thought about how this can be ameliorated, which is that you can do this prepare thing.
D
Yeah, no, it doesn't really solve it, but with the pathing thing you can defer calling prepare on any input data until the user actually calls that pathing thing, or asks for the decoded form back before you've done a round trip. So you can limit the cases where you do this double prepare to just those cases where they want to make the block with their input data and then use the block before even encoding it.
D
Instead of splitting the two and making them separate API pieces, you could let the codec say: okay, I'm only going to call the prepare method in these very limited circumstances, and I trust you will do your own prepare as well when I just call encode. So it limits the double processing. But anyway, these are all icky details that need to be experimented with; there's going to be some double processing here somewhere.
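A sketch of that deferral, with entirely hypothetical names: the block wrapper only runs the codec's prepare when the user paths into the node or asks for the decoded form before a round trip, and a direct encode call is trusted to do its own normalisation:

    interface Codec<T> {
      prepare(input: unknown): T;
      encode(node: T): Uint8Array;
    }

    class Block<T> {
      private prepared?: T;
      constructor(private input: unknown, private codec: Codec<T>) {}

      // Pathing or asking for the value triggers preparation on demand.
      get value(): T {
        if (this.prepared === undefined) {
          this.prepared = this.codec.prepare(this.input);
        }
        return this.prepared;
      }

      // encode() passes along whatever shape we already have; as discussed, the
      // codec may still end up normalising or validating again itself.
      encode(): Uint8Array {
        return this.codec.encode((this.prepared ?? this.input) as T);
      }
    }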
D
Yes, but then you're actually adding on even more checking, because, okay, an example with dag-pb: let's say my encode method only deals with exact-form objects, as if they did a round trip, and I just assume the properties are all there, and if they're not, then it throws. Well, what about this null data array thing, where the dag-pb spec specifies what it has to be?
D
It can't be an empty property; it has to be a zero-length byte array or it has to have bytes in it. So then my code has to check what you've given it. It can't just be a loose check of "if it's not there, then I'll replace it": it has to be a byte array. Does it have to have a length? No, it doesn't have to have a length, obviously, but it has to be binary and it can't be anything else.
D
So I need to have a type check for that, and then another thing I'd have to do on everything, and every sub-property, is check the keys of every map, because everything's an object, everything's a map in JavaScript. So I have to check that there are no extraneous keys: okay, you've given me something that's got Links, Data and foo.
D
Yeah, but that's what I'm talking about in an encode method. Normally, what I would do in dag-pb, if you gave me an object that had Links, Data and foo, is never touch foo. In the old codec and the new one there's no reason to even touch foo, to acknowledge that it exists, or even check for it. All I care about is: you give me an object, I'm pulling out Data and Links, and I'm serializing those.
D
But you've given me something that's got foo as well; that's invalid, and that wouldn't make it through a round trip, so it shouldn't be there, and the prepared form, yeah, I think that should win, right? So that means I then have to do a recursive keys check on every object that comes in, and that's additional processing that we've avoided until now.
D
Yeah, it seems like it to you, and I know why it seems like it to you, but this is not how we operate in this untyped world, because it gives you a lot of flexibility in untyped land, where you can reuse the same thing for different purposes and you can make something look like it's two things at once. This is the polymorphism it affords us. It's often not a good practice, but it can be useful; in this case it's not ideal.
D
Yes, but it's the kind of processing that we have been able to avoid for the most part. In this new case, though, with this new abstraction, because we are operating on these objects, they need to be pure, so we have to go through this purity cycle for a prepare method. Yes, it makes sense. What I would do is I'd probably create... no, and the other problem is you have to create a new object. That's the trick here: you can't just clean up an old object.
D
You can't just take the old thing and clean it up for them, because that's breaking an API contract that we generally stick to in JavaScript, which is: don't mess with the user's input, because they might be using it for something else; make something new and return that. And that gives you the opportunity to clean up properties. So what you do then is you take an object...
D
You've got Links, Data, foo; you make a new one, and all you care about is pulling out Links and Data, and then you give them that. There's not even any reason there to check for properties of the original one, so you don't need to do any type checking or keys checking, because all you've done is pull out Links and Data, and there's no error involved, there's no checking involved. Because, you know, somebody could potentially give you something...
D
...that's got a lot of objects with a lot of properties on them, and these checks could get expensive, and mostly you can avoid that. Anyway, this becomes really messy, but it's a question that won't go away with these non-flexible formats that have opinions about data shapes.
A
Cool, then we had a quick meeting, and yeah, we'll see each other again next week. So goodbye everyone... Eric, before we do?
C
So I changed everything that wasn't kinded to be keyed. Previously these were written as the inline representation of unions.
C
So all the things that are in the type-definition union, like TypeBool or TypeStruct and all these things, all of those are struct types, so they're all represented as maps. When you serialized them, you'd have this magic key called "kind", and then "string" or "struct" or whatever would be the value. I switched these all to keyed unions, which means the body of the rest of that got indented by one, the word "string" came out in front as a map key, and the "kind" keyword just disappeared, because it's encoded as a map with a single key now, and the key is the hint.
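To make that concrete, here is roughly what one entry looks like under each representation, written as TypeScript object literals; the field contents are simplified placeholders, only the union representation differs:

    // Inline representation: the discriminator is a magic "kind" key that sits
    // alongside the member's own fields.
    const inlineForm = {
      kind: 'struct',
      fields: { /* ...member fields... */ },
    };

    // Keyed representation: the discriminator becomes the single map key and
    // the member's body nests one level underneath it.
    const keyedForm = {
      struct: {
        fields: { /* ...member fields... */ },
      },
    };

With the keyed form the discriminator always arrives before the member data, which is the point made next.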
C
So the reason that a keyed union is often preferable is that the key always comes before the value, which is kind of convenient. The other representation strategies for unions, we've described all of those because they're things that we see in existing protocols and we want to be able to describe them; but using these other strategies, like inline, it is perfectly valid for the kind hint, the membership hint, to come below a bunch of the data.
C
But if this information comes in an unpleasant order, then you are stuck buffering something and basically going over a bunch of stuff twice in a row, because the first time you see it you don't know what it is yet. And if these are really big structures, say the first thing in a megabyte-size block is a union, it could turn out that 999 kilobytes of data come first and then the last two words of the block are your union discriminant hint.
C
So I dogfooded changing the schema-schema to use this format. It is arguably less pretty, I don't know; this is something that we should discuss.
C
One of the big changes is there's a unit type in play. Previously there was a map that had a bunch of keys, and the keys carry all of the meaning and the values are all null.
C
This draft PR that I put up does something slightly different, because the unit kind isn't introduced yet. I made a placeholder, which is basically "type Unit struct {}", a struct with nothing in it, and that has the correct semantics, but it means all of these things in the schema-schema are now serialized as an empty map. So this works, and it passed all the type checking and consistency stuff in my demo using the codegen types, but maybe this is not what we want; introducing a real unit kind would let us do something better here.
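Illustratively (these key names are made up, not the actual schema-schema entries), the pattern is that positions which used to be maps of meaningful keys with null values now carry an empty map wherever the placeholder unit struct appears:

    // Before: the keys carry the meaning, the values are all null.
    const before = { someFlag: null, anotherFlag: null };

    // After: with the "type Unit struct {}" placeholder, each of those values
    // serializes as an empty map instead.
    const after = { someFlag: {}, anotherFlag: {} };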
C
And yeah, that unit thing applied to those, basically. So I think those are the biggest changes; the naming ones aren't really interesting.
C
It would, but I don't know, it's the same thing. The important part is cardinality equals one. I don't really care what we call it; I think we should call it unit, because the literature in the history of programming language theory is pretty consistent on that.
D
I know, I bet you're dealing with a Go thing, and this unit thing is a Go thing, so we're doing some compromising here, but yeah. There are lots of reasons why inline is just a mess, and even recommending it... we have inline mainly because that's how a lot of data in the wild exists. It just seems to be a natural format for a lot of people.
D
Keyed is just cleaner, and I think keyed, and even kinded (well, maybe not kinded, but keyed certainly) is much more intuitive than inline when you are looking at data. Inline requires you to make a number of cognitive jumps to process it, whereas keyed is like: there's one key and it's this key, that's all. Whereas inline is: okay, there's these properties, and in this state these properties exist...
D
But in this other state there are these other ones, and there's this one indicator that you have to know. So keyed unions are much more pleasant to look at and to understand; they're just a little bit more inefficient, and this goes back to the problem with the Filecoin HAMT in Go.
D
These keyed unions make so much sense, but they can lead you to perverse situations where you're doing something you don't need to, because you're doing it for sort of... they're not really aesthetic, but sort of comfort reasons; it just seems right. And so the keyed union in the Filecoin HAMT is a complete waste, but it's done because it just feels right in the data model.
D
Anyway, okay, so this unit thing: I think you're stretching it, because I'm just thinking through all the changes you'd need to make to introduce this thing, and it's a lot, even down to the documentation. Yeah, the unit is going to be really annoying.
D
Well, we could work on making that null type concrete, so that it's a thing you can refer to. I mean, I think I understand your problem, but I still think it's more in the realm of your constraints around your types, your in-memory types, not so much the schema itself, because you could just fix that: you could instantiate a...
D
So it is a concrete thing, and so it sort of falls out naturally in the language, where it can become a default case without too much effort. And so maybe it's not appearing in what I'm looking at here, because it is implicitly used as something that just falls out.