From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-10-05
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
I haven't opened the pad yet, of course, but I remember what I did last week. I still spent a lot of time on the Rust multihash stuff, and the good news is that it's finally merged. So tiny-multihash probably won't be a thing anymore — it's now fully upstreamed into rust-multihash. The tiny-multihash stuff was the work of four or five months of coding and prototyping, and of changing the APIs almost every week, but I think it's now stable enough to get upstreamed.
A
Yeah, and the other thing I did is write an exploration report about IPLD data model numbers and IPLD codecs, which got simpler than I expected as I was writing it — my head was kind of blowing up because it was so full of information about numbers. So I suggest people read it if they're interested. Long story short: if you use the data model and you want to encode it, just use a single type for integers and a single type for floats. For integers that's probably varints, and for floats it's IEEE 754 double precision.
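That recommendation can be sketched concretely. The following is a minimal illustration only — not the actual rust-multihash or IPLD code — of an unsigned LEB128-style varint codec (the varint family used by multiformats) plus fixed IEEE 754 double-precision packing via Python's `struct` module:

```python
import struct

def encode_varint(n: int) -> bytes:
    """Unsigned LEB128-style varint: 7 payload bits per byte,
    high bit set on every byte except the last."""
    if n < 0:
        raise ValueError("only unsigned values")
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)          # final byte: high bit clear
            return bytes(out)

def decode_varint(data: bytes) -> int:
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (shift * 7)
        if not byte & 0x80:
            return result
    raise ValueError("truncated varint")

def encode_float(x: float) -> bytes:
    # One fixed representation for floats: big-endian IEEE 754 double.
    return struct.pack(">d", x)
```

`encode_varint(300)` yields the two bytes `0xAC 0x02`, while every float, regardless of magnitude, occupies exactly eight bytes — one rule each for integers and floats.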
B
So I'd have just tied it up: dag-pb! I don't know if that's published yet, Michael, but that's on you. That's just working with the latest js-multiformats stuff, and for dag-json there's a pull request to get that switched over too. So that's all three major codecs in sync and ready to go for the multiformats stuff. But I spent most of the week in Go.
B
And I'm struggling a little bit — partly it's a Go programming-model concern — with trying to get things properly separated out. The temptation is to just put everything in together, and I'm so uncomfortable doing that, just in programming in general, so I'm trying to find a model where pieces can be separated out, and that's been quite difficult.
B
I've been looking at other codec-authoring things with ipld-prime, and participating in the discussion about pretty-printing and dag-json and UTF-8-C8, which I think Eric will talk about. Also, this week — yesterday was a public holiday, so I did very little yesterday, which is my Monday, and this week is the last week of the school holidays, so I just want to warn that I'll be on somewhat reduced hours this week.
B
I'm looking at dag-pb and dag-json at the moment. Actually, that's been a bit of a distraction, but dag-json, dag-pb — that's where I'm at. That's it for me.
C
Yeah, so last week, as Rod kind of mentioned, I mainly spent the time getting the Block interface ported over to the new multiformats stuff that Gozala did, and getting all his work merged. That's all in — it's ready to go. I had a call with Gozala.
C
Today, though, he's going to give me some more feedback, and then I think what's going to happen is we're going to take that Block class and move it into multiformats, because it'll actually be possible to instantiate these now without having any of the registry or anything like that on top of them.
C
So we just want that to be a very simple type that you can pull out and instantiate, so that something like Rod's CAR file thing doesn't even need to import the Block and the registry and all that kind of stuff — it can literally just say, "no, this is all we're doing." So that's cool; that's going well.
C
That'll land tomorrow, I think — I think that's the final thing to land. And I had something else... oh yeah, we have a new team management page. I think everybody on the call has seen it, but that's what I'm going to use, at least through the end of the quarter, updating it every time there's any adjustment to what people are working on and why. And if we didn't capture a lot of your maintenance burden, or we just don't capture very well how you spend your time—
C
Please send a pull request against that, so we can capture it more accurately. It's better for some folks than others, because some people just have a more difficult workload to capture, to be honest. But anyway, let's just keep improving it. And then, yeah: reviews are happening, and lots of management stuff is happening that I've had to deal with. So that was most of my week.
D
Hey guys. So I got the js-graphsync spike completed, which was mainly working through libp2p issues and my understanding of it — so that's good. And I've been continuing to log various draft spec issues; there are some issues out there that I'll consolidate into an update of the specs later, probably once I think they've all been found for a given iteration.
D
I did provide some support for the Filecoin browser retrieval folks on js-libp2p stuff — I guess I'm an expert now, after my two weeks of exposure to it, but it's kind of nice to share some of that. And then I wrote a design document on js-graphsync, reviewed it with Michael, and made a few tweaks, so I kind of have a path forward now. So this week I'll basically be implementing js-graphsync, since I know how the underlying pieces work and I have a design in place.
D
So that's cool, and I think that can go pretty quickly. It's not something I can refactor from and make sure it's still working. And then I have a presentation with Rod on Wednesday about scalable IPLD/IPFS data ingestion for the Filecoin master class, so we've got to create some slides for that. I guess Michael will be there too. I thought it was going to be Friday and just realized today that it's Wednesday, so I'll have to crank on that pretty quickly. So that's my week.
A
Thanks. Next one is Eric. Hello.
E
This week we had a meeting between me and Will Scott and mvdan about ADLs and where to go with them. That discussion came in two big parts: trying to bridge towards the middle on what we expect the interface of these things to be, from what we can understand looking at it from the library-design side; and then getting lots of information exchanged about how a project such as the state diff — which looks at Filecoin data, for example — might actually want to use these things. And I think, fortunately, those converged without a lot of contention.
E
We got lots more detailed information, for us on the IPLD team, about what Filecoin is doing, and some of the application semantics that drift into it a little bit, as well as a couple of advanced-layout things that are a bit more interesting — there are some recursive advanced layouts in Filecoin. We're going to see how those go, but I'm hoping they aren't actually going to be interesting: they should just work naturally in the IPLD constructions.
E
We'll see, we'll see. So we've got some notes on the ground to cover there, and hopefully we'll just keep slowly rolling forward on that. I'm really curious how the implementation side of things that mvdan is working on is shaping up, but he's not here, so I won't speak for that.
E
We have landed some new helpful tools in the go-ipld-prime repo. There's now a handy-dandy function where you can take some arbitrary Go structure, hand it to the fluent.Reflect method, and it will give you IPLD data model that roughly matches. It's full of magical reflection and do-the-right-thing type logic, but it probably does the right thing. The purpose of this is basically to use it in quick demos and to provide an onboarding ramp.
E
It's not fast, it's not performant, but it does what you mean real fast. So hopefully it's good for demos and stuff. A bunch of internal communication stuff too. I've started looking at simple-dag, but I haven't published my notes on that yet. I don't know — Will, you haven't put any notes in the shared doc, but if you want to say something about how state diff is going — I think that's been going really well, so I'd be interested in it.
F
Yeah, I don't know — not too much there. There is a schema for all of the actor state, along with a lot of the other on-chain things like messages, and how those go from tipsets to blocks to messages and stuff. So there's this IPLD thing: there's an evolving codec that gets a pretty-printed JSON view of things, which is more efficient than what we previously had to do to make things work — that involved multiple rounds of interface{}, removing all the types and munging things, which is a little icky.
F
So now it's a trick of getting that used in the various places that want to work with this data and see it in different ways. So, yeah.
E
And we are making heavy use of the golang codegen stuff in order to pull this off, which is really exciting. Will's giving me lots of small bugs to fix and chase down — some of which I've gotten to in reasonable time, some of which need to end up on a to-do list.
E
But by and large it's working, and I think most of the big questions we have are: where do the ADLs fit into this now? So that's exciting.
E
The other big thing that consumed a lot more hours of my week than I expected: I'm afraid I lit the fuse on a string encoding and escaping question — and character sets in general. And Rod is making a face where he's containing his laughter, and several people are... yeah. So that's a fun topic. I tried to write a pretty-print tool, and I think that has gotten into the weeds in ways that I'm not even going to follow up on, but it opened the can of worms on string escaping.
E
Very little, except for the intention that it should probably be printable — but I think it's important to segment the dream of being printable from that; that's an implementation detail I want to push off just a little bit. So: defining strings as 8-bit bytes is nice because it is so clear, and because there exist other systems in the world which are going to force us to deal with this anyway. For example, do you guys remember all the discussions about file names in the UnixFSv2 draft specs?
E
It got really hairy. It turns out a bunch of file systems out there do not have very strict opinions on what sequences of bytes can end up in a file name, even though we would normally discuss this as a string. And this gets particularly impactful when we mash that together with our definition of maps in IPLD having string keys — and with, for example, wanting to use large sharded maps like HAMTs, which we want to use in a spec like UnixFSv2, which still has string keys.
E
All of these things are much, much simpler and more composable if we have this definition of string as an 8-bit byte sequence, and a fair number of systems have no trouble with this. CBOR strings, for example, are instantaneously compatible with it and have no difficulties: in an implementation of CBOR, byte-string parsing and text-string parsing are literally the same function — they just take a different parameter for "am I going to call this bytes or string at the end." Same parse — so clearly it works for all of the bytes.
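That "same function, different label" property is visible in CBOR's wire format itself. A minimal sketch (covering only the short form, lengths 0–23, following RFC 7049's header layout — this is an illustration, not any production codec): the payload bytes are identical, and only the 3-bit major type in the header byte differs.

```python
def cbor_header(major: int, length: int) -> bytes:
    # Short form only: one header byte = 3-bit major type | 5-bit length.
    assert 0 <= length < 24
    return bytes([(major << 5) | length])

def encode_bytes(data: bytes) -> bytes:
    return cbor_header(2, len(data)) + data        # major type 2: byte string

def encode_text(s: str) -> bytes:
    data = s.encode("utf-8")
    return cbor_header(3, len(data)) + data        # major type 3: text string
```

`encode_bytes(b"abc")` is `0x43 61 62 63` and `encode_text("abc")` is `0x63 61 62 63` — same payload, one flipped bit of type information in the header.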
E
Some other codecs are not so simple. JSON does not support enough escaping for this: JSON only supports escapes like \r and \n — the super-common special-case ones — plus \u followed by four hex characters, which is Unicode escaping, and that draws from the Unicode tables.
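That limitation is easy to demonstrate. A small sketch of why JSON's \u escaping can't carry arbitrary bytes: the escape names a Unicode code point, not a raw byte, so there is no way to spell the lone byte 0xFF inside a JSON string.

```python
import json

# "\u00ff" names the code point U+00FF, which serializes as TWO UTF-8 bytes...
decoded = json.loads('"\\u00ff"')
assert decoded == "\u00ff"
assert decoded.encode("utf-8") == b"\xc3\xbf"   # not the single byte 0xff

# ...while the raw byte 0xff on its own is simply not valid UTF-8 text at all.
try:
    b"\xff".decode("utf-8")
    decodable = True
except UnicodeDecodeError:
    decodable = False
assert not decodable
```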
E
There might be some good solutions out there. There's a thing called UTF-8 Clean-8 — UTF-8-C8 for short — which, as far as I know, was pioneered entirely by the Perl community, or at least all the docs that I've seen so far are from them, and it appears to have a mechanism which does losslessly encode things.
E
These are things that make it nicer to build content-addressable systems. But normalization is mutation. For example, if you force all strings to be normalized — say, to UTF-8 NFC normalization — then necessarily a normalization like this means you disregard whatever input you got, whatever you were given, and you produce a new output. Normalization is mutation.
E
If we want to losslessly regard data — by which I mean: take some data that has been serialized, load it up, perform a no-op operation on it just for the sake of discussion, serialize it, and be able to hash it again; perform this operation without loss and get the same hash — then we cannot mandate normalization, because normalization is mutation.
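"Normalization is mutation" is easy to see with a hash in hand. A small Python sketch: 'é' written as one precomposed code point versus 'e' plus a combining accent are NFC-equivalent, but the serialized bytes — and therefore any content hash — differ.

```python
import hashlib
import unicodedata

composed = "\u00e9"        # 'é' as a single precomposed code point
decomposed = "e\u0301"     # 'e' followed by a combining acute accent

# NFC normalization maps the decomposed form onto the composed one...
assert unicodedata.normalize("NFC", decomposed) == composed

# ...but the original serializations are different byte sequences, so they
# hash differently: normalizing on load would change the content address.
assert composed.encode("utf-8") != decomposed.encode("utf-8")
h1 = hashlib.sha256(composed.encode("utf-8")).hexdigest()
h2 = hashlib.sha256(decomposed.encode("utf-8")).hexdigest()
assert h1 != h2
```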
A
Yes — so it's a bit about whether you have a bijective mapping between things. It's basically the same thing as with numbers, which is my area.
A
It's the same question as: if you have different float types, do you normalize to the smallest one or not? It's kind of the same problem — if you do it, everyone has to do it, or else you get different hashes for the same data. But yeah, I can see the problem there, so I'm excited to get on to the next stage, from numbers into strings.
C
But I think the same framework applies, which is that we do need to realize that whenever you serialize something, you are mutating it. It's just that you may have found a lossless mutation — and that's great, and we should try to find all of the lossless mutations. But these cases where we have hash difficulties are often exactly because we don't have a lossless mutation, right? Or they can be.
H
Don't make a joke about that. It's supposed to be two weeks — or, you know, eight days — so it should be all good.
B
I don't think we want to be in the business of going further down this path. Just like the numbers thing, this raises questions about what our role is, where our position in the stack is, and what it is that we should be doing within that. We could spend a long time on this. Where is the point of diminishing returns, where we say, "okay, this is good enough"? I don't know.
E
That combination is actually really efficient. It's what CBOR does, and it works, and people seem to like it: most of the time you say "string" in CBOR when you mean string — when you expect it to be rendered — and when you know you don't mean string and you mean binary bytes, you say "bytes", and maybe you expect that it won't be rendered so much. All the CBOR pretty-printers respect those two things and highlight them appropriately, and it sort of does what you mean as a human, and everybody's happy.
A
As usual, the discussions often lean towards Go, so I've started a discussion around Rust. Basically the problem will be: if you do it this way, the things won't be strings in Rust — you just can't use them as Rust strings, because Rust strings are strictly Unicode things.
A
File names are special: there's the Path type, and they basically do weird tricks to make it work across platforms. But they're not strings, for exactly this reason — file names are obviously not necessarily valid Unicode. So basically, in Rust, you wouldn't be able to use the native string type if we define our string as arbitrary 8-bit bytes.
E
WTF-8 does give a way to place arbitrary bytes in things that are regarded as UTF-8 strings. So in the Rust case — I'm not super familiar with this — I would also seriously consider using whatever type it is that Rust uses for file names. Use that as our string; it sounds like it's correct.
A
Yes, so it's the WTF-8 encoding — which is a good name, because it basically does, yeah, strange things. So you can do that, but I don't think it supports fully arbitrary bytes. Anyway, there's good documentation about what they do with the file-name things.
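Python has a comparable escape hatch worth noting here — not WTF-8, but the same spirit: the `surrogateescape` error handler smuggles non-UTF-8 bytes through a native string and restores them losslessly on re-encode. A small sketch:

```python
raw = b"file\xff\xfe-name"      # bytes that are not valid UTF-8

# Strict decoding rejects the input outright.
try:
    raw.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
assert not strict_ok

# surrogateescape maps each bad byte to a lone surrogate (U+DC80..U+DCFF)...
text = raw.decode("utf-8", errors="surrogateescape")
assert text == "file\udcff\udcfe-name"

# ...and re-encoding with the same handler restores the exact original bytes.
assert text.encode("utf-8", errors="surrogateescape") == raw
```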
E
I would say it seems unfortunate to me if a data model in memory has to use this. I've been hoping it's something we can use in codecs that have restrictions, but we'll see. It's somewhat surprising to me that Rust would have made a choice like this. I've heard people complaining about Rust and file systems, and I guess now I know why.
A
Yeah, cool.
A
So: you have bytes and you have strings. And this is why I think we're talking about — I'm not sure if you're talking about this or not — whether we want to go in the direction that our strings are more than Unicode. Sorry: more than UTF-8 things.
B
I think the problem is that they already are, because we're already seeing uses where they are — like Filecoin. We've left it open, because we've not been willing to draw a line on this stuff — maybe we should, maybe we shouldn't — but because people are going to push on that, our strings are essentially bytes that can be printed.
E
I think we're going to need to generate a lot of documentation about this, in textual form, in the specs repo, and it's going to have to cover a lot of directions. We're going to have to say what we want strings to be in the data model; we're going to have to say where codecs force us to compromise, and what encoding and escaping schemes we use when they do. Mind you, escaping is distinct from normalization: escaping is not necessarily mutation.
E
As for the implementation details of various languages: we're going to have to not just say what those languages say they do — I'm afraid we're going to have to look at what they actually do, because that's not always entirely one-to-one, it seems to me. And the details there really, really matter. I barely even care if a language or a library or whatever says it only accepts UTF-8.
E
So, for example, I strolled through a bunch of different string-escaping functions in Go — and we don't have to hold these to perfect standards; just because I program Go, I don't expect anyone else to care — but what I found is, for example, the string-escaping function for JSON in the golang standard library—
C
So if you try to concatenate strings, if you try to actually get string representations, they get shredded. But if you store them in IndexedDB, they will come back with all the fidelity that you've shoved into them. One thing people actually do is have special compression libraries in JavaScript that compress into ill-formed strings, because those can get stored and round-tripped by IndexedDB. But yeah — if you do other things with them, then they break.
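The ill-formed-string hazard described here can be reproduced outside the browser too. A small Python sketch of the same effect: a lone surrogate survives some round trips (JSON text, in CPython's implementation) but explodes the moment you ask for real UTF-8 bytes.

```python
import json

lone = "\ud800"     # a lone surrogate: representable in the string type...

# ...and it even survives a JSON text round trip in CPython,
assert json.loads(json.dumps(lone)) == lone

# but it cannot be encoded as actual UTF-8 bytes:
try:
    lone.encode("utf-8")
    encodable = True
except UnicodeEncodeError:
    encodable = False
assert not encodable
```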
I
Yes — I was lurking on the YouTube channel, and I think this is apropos of what I'm struggling with in the DID specification: lossless conversion between JSON and CBOR. We're settling on an abstract data model — one which is just abstract, which is too hand-wavy for me. I've been proposing CDDL for a while — that's the Concise Data Definition Language — and, JSON being a subset of CBOR, as long as you have some rules in place, then—
I
So if you restrict things — like, integers have to be representable in 64 bits, even for integral values — then you have a constraint set that is losslessly compatible with JSON. And it's the same thing with the JSON string: it says UTF-8, and the \u escapes and other things are a little bit nuanced in ways I don't fully understand. But I think what I'm settling on is saying that the abstract data model is constrained, using CDDL to describe the abstract data model — even though it's kind of more syntax, but also somewhat classes and types. I'm pitching it tomorrow and trying to get some feedback on it.
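As an illustration of the kind of constraint CDDL can express — this is a hypothetical sketch for this discussion, not the DID specification's actual schema — a map restricted to text-string keys and 64-bit integers stays inside the subset that round-trips between JSON and CBOR:

```cddl
; Hypothetical example, for illustration only.
; Text-string keys and bounded integers keep the document losslessly
; convertible between JSON and CBOR.
document = {
  id: tstr,
  ? created: tstr,            ; date-time carried as a string in both codecs
  * tstr => tstr / int64      ; no floats, no byte strings, no tags
}
int64 = -9223372036854775808..9223372036854775807
```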
I
Unfortunately, the CBOR guy, Jim Schaad, died over the weekend. He was supposed to be giving me some feedback on my CBOR explanation, and I don't know what happened — I was waiting for feedback, and then I just heard from another colleague that he died, which sucks.
I
The sad news just came through my news feed this morning and really shocked me — and now it explains why I hadn't heard from him.
I
But — so, CDDL: again, like I mentioned last week, it's similar to how you do an @context for JSON-LD. If you constrain your schema with a data definition — and in that sense JSON is a subset of CBOR — and you constrain the data types to those schemas defined in CDDL, then there is lossless—
I
—conversion. CDDL actually facilitates that. But as soon as you get into, oh geez, IEEE 754 for representing numbers, and trying to do conversion between that and 64-bit or 32-bit representations, it explodes really quickly.
E
You can use CDDL as a syntax for describing those domain restrictions if you want; I'm indifferent to that. I think if you wanted to make your proposal not depend on another spec of that complexity, then you could probably just use simpler language around domain constraints and get away with it. I don't know how complex a thing you're specifying, so I don't really know how feasible that suggestion is, but it would be something to try if you get pushback on that. In general, though: constrain things, yeah.
I
So it's actually relatively straightforward — key-value pairs, mostly with strings. A lot of it is trying to get JWK keys represented properly, and to explicitly state the syntax for representing a JWK.
I
Let's say, you know, Ed25519 and RSA and secp256k1, et cetera. And what Jim had mentioned to me before was: just do it in JSON. Don't try to do binary with tag 23 with expected encoding to base64 — just keep it in JSON; it's human-friendly.
I
You get some compaction — about 25% — just converting it to CBOR, mostly because you're not dealing with any of the string escaping in JSON. So I think it's cleaner, and there is overlap between the data model in JSON and CBOR. And then for the more complicated stuff you just constrain — like numbers: just as in the CBOR spec, which I think IPLD conforms to, the most simple representation wins. So the base, like, everything, is base64.
I
Use 64-bit numbers — don't try to do IEEE 754 — in which case you just have one way to represent numbers. And, as he said: mostly, we're not trying to boil the ocean. Just keep it simple, and I think then everything will work out just fine.
I
Yeah — and then this gets... So, if you guys actually do get a chance, could you read what I wrote in the specification, to make sure I'm conforming? Because I wrote a bunch of stuff, and I don't know how accurate I am about what dag-cbor actually does. I know dag-cbor does most of that, but yeah. Okay.
B
On the numbers: we're not consistent, which is the problem that we need to resolve. What our specs try to do is say that we do strict CBOR, and we follow the guidelines in the CBOR specification for what it means to be strict. So that's where we get strictness from: okay, we'll take their version of strictness and do that. And across our main codecs we mostly do — like JavaScript and Go, we do that. It's just the numbers where we don't. So in Go—
B
We do 64-bit, and in JavaScript we do smallest-as-possible, which makes the encoded bytes as small as possible. And so the question has been: do we conform Go to that, or do we conform JavaScript to our own version of strictness? In the discussion yesterday — I think it was Volker bringing this up as well — maybe 64-bit is the way to go. I like 64-bit because it's simple: when you do smallest-as-possible, you have to try and then fail.
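The two integer policies under discussion are easy to contrast side by side. A sketch of both, following RFC 7049's major-type-0 header layout (an illustration only, not the actual js or go dag-cbor code):

```python
import struct

def uint_smallest(n: int) -> bytes:
    """Smallest-as-possible: pick the shortest CBOR argument width."""
    assert n >= 0
    if n < 24:
        return bytes([n])                      # value lives in the header byte
    if n < 2**8:
        return b"\x18" + struct.pack(">B", n)
    if n < 2**16:
        return b"\x19" + struct.pack(">H", n)
    if n < 2**32:
        return b"\x1a" + struct.pack(">I", n)
    return b"\x1b" + struct.pack(">Q", n)

def uint_fixed64(n: int) -> bytes:
    """Always-64-bit: one rule, larger output."""
    assert n >= 0
    return b"\x1b" + struct.pack(">Q", n)
```

`uint_smallest(500)` is three bytes (`0x19 0x01 0xF4`) where `uint_fixed64(500)` is nine — exactly the simplicity-versus-size trade described above.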
A
I'd say it's definitely busier — you have to check it for each smaller version. So basically, if you have a 64-bit value, you have to check: does it fit in 32-bit? Does it fit in 16-bit? You have to do that. But you could also start directly with: does it fit into 16-bit? And if it doesn't, check whether it fits into 32-bit, and so on.
B
It works in JavaScript, but it means doing an encode and then a decode to check if it fit; then, if it fails, you go up to the next size, encode, and decode again; and if that doesn't work, you go to 64-bit. So we do it fine in JavaScript — it works fine, and it follows the strictness recommendations from the CBOR spec that we reference in our dag-cbor doc. In Go, it just throws everything at 64-bit right now.
B
We'd have to go back into the JavaScript encoder and fix it there, which, you know... And then everyone that wants to implement this in their own language would have to go and implement it manually, because they can't just pull a strict CBOR parser off the shelf and use that.
B
It would be nice if we could just piggyback on that, but it's not — so anyway, I'd like to resolve this soon. It either means going and fixing Go — which is potentially easier, because Eric owns that code and it's easy to fix — or the JavaScript one, which is a little more complicated, because even now there's a stalled pull request on getting that changed over from Buffers to Uint8Arrays. I think even just modifying that thing has become complicated. But I don't know — anyway.
I
But mostly I need to go through and just test to make sure there is lossless conversion by round-tripping it. Mostly it's upfront constraints on how to represent things, and those constraints are explicit in the CDDL — that's the method for actually getting losslessness between JSON and CBOR — and then trying to leverage things like: keys must be strings. Specifically, in CBOR they could be byte arrays, and in JSON—
I
—they can only be strings. But it's funny, because we don't have any numbers that we're trying to represent yet, so this is all conjecture — it's like trying to get to this abstract data model and boiling the ocean without this even being a problem yet.
I
Yeah, yeah — exactly. It's a painful process of futility, which Juan warned me about.
E
So, until five minutes ago, I thought that use-the-smallest-encoding was what the CBOR spec said in no uncertain terms, and therefore I was sort of on that side — despite having myself written code that does the simple thing with 64-bit. We were talking about this, and you're right, whoever said it: that part of the CBOR spec is all examples. I didn't remember that.
I
Who's the author of this? He's in Germany, I think, at a university. I'll get some feedback about this for my specific protocol, to make sure I'm actually conforming to it, in the absence of Jim, who died. I think he'll have some sympathy for me, I hope — I'm looking for some wisdom.
A
I had some email exchange with him to get tag 42 reserved for IPLD, and to move things out of bounds — he was really easy to talk to, and a super nice, super approachable person. And yes, I think making the smallest float possible is needed if you want canonical CBOR but still want to support all floating-point types. Then you need it.
B
Do 64-bit integers, and then in language land you have to — it's like what Eric's doing with ipld-prime: he's doing the data model, and it's just one type of number, and when you get to the codec level, you just encode it however that codec has to. And if we said in CBOR that you only had 64-bit integers, that would work too. It just means that at the language level we have to deal with it in whatever way makes sense.
B
But the number thing makes sense — sorry, the integer thing makes sense — because it's so easy to check: you still have to do a bounds check, but that's it. With the floats, because you've got different-size mantissas for each representation, it just gets complicated.
I
Well, I'm trying to get it right, because they're trying to finalize and get this into Candidate Recommendation, so I wanted to make sure I actually get it right — and I need to do a bunch of test cases to make sure that what I wrote fits. What I put in the chat is the updated draft of what Bormann wrote about the additional considerations for deterministically canonicalized CBOR, so there are some additional things to consider.
B
Cool. Oh, here's another argument in favor of just doing 64-bit — I'm going to paste this thing in. This is the JavaScript; this is how it does smallest-possible, which is just crazy. The other argument in favor of going with just 64-bit is that many modern languages don't even include a 16-bit float encoding type, so it has to be hand-written.
B
You can do single precision and double precision, but you can't do half precision in a lot of languages. In JavaScript you have to implement your own half-precision encoder just to get half-precision floats — you're writing out the float format manually at that point. So not only is it algorithmically complicated to get the smallest encoding right, you also have to write your own manual encoder. So: not good.
B
It works — you can make it work; this is not about whether or not you can make it work. It's just that this is a complex path. If you're working with a CBOR encoder that already has strictness — like a lot of them do — then it becomes easy. But we're getting to the point now where we're starting to say you really should probably be implementing your own CBOR parser if you're going to be doing dag-cbor.
C
Yeah, I mean, we've had to say that for a while, and that's why it's just not — it's not the best. It's the situation that we're in: we're going to have to document it, and we're going to have to say all the things we're not super happy about. CBOR isn't going away. But there's a limit to what we can do here and what we can potentially recommend.
A
All right, we've hit the hour for the meeting. So thanks everyone for attending, and see you all again next week. Goodbye.