From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2021-01-11
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: Apparently we are live on YouTube. Welcome to the IPLD weekly sync for, what would it be, the 11th of January 2021.
A: Okay, so my week was mainly around cborg. I finally finished up the cborg library I've been tinkering with for the last year, so that's published now, and I've got a PR up for ipld dag-cbor — the newer JavaScript codec — so that's pull request 13 in that library, and it has a bunch of notes about what changes are coming in with it. It's breaking because there's more strictness — that's the main thing there — and there are some performance notes there as well.
A: I did also add a second commit yesterday for the NaN and Infinity thing, which is the next item I have. There's a pull request on the specs repo, number 344, which takes that discussion about NaN, Infinity and negative Infinity and says that we're ruling them out of the data model, and there's some additional clarifying words about number types, integer and float. I'd appreciate some eyes on that one. But once that's merged — if we agree that that's the way forward, and, to be honest, I know there were some strong thoughts in that original discussion and I'm a little bit on the fence about this, but I'm fine with just locking it down and being done with it. Some experimenting I was doing yesterday:
A: It will also decode any of the other bit layouts that translate into these things. So this is not just one way to get these values through cborg — there are a lot of ways, particularly for NaN, because NaN really is just a fallback for everything that's not a proper float. Anything that says it's a float in CBOR that can't be cleanly decoded into a mantissa and an exponent will end up as NaN, and then there's a whole slew of values that will end up as Infinity and negative Infinity as well. So we're talking about a very wide range of bytes that will turn into these things. With these we're a long way from the one-data, one-byte-layout ideal for codecs, so that alone, I guess, is a reason to pull the trigger on this one.
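For a concrete, library-free illustration of the point being made: CBOR floats can be encoded as half (0xf9), single (0xfa) or double (0xfb) precision, and within each width any exponent-all-ones pattern with a non-zero mantissa is a NaN, so many distinct byte layouts collapse into the same special values. This is a minimal sketch, not how cborg itself is implemented:

```python
import math
import struct

def decode_cbor_float(b: bytes) -> float:
    """Decode a bare CBOR float item (major type 7, additional info 25/26/27)."""
    initial = b[0]
    if initial == 0xf9:      # half precision, 2 payload bytes
        return struct.unpack(">e", b[1:3])[0]
    if initial == 0xfa:      # single precision, 4 payload bytes
        return struct.unpack(">f", b[1:5])[0]
    if initial == 0xfb:      # double precision, 8 payload bytes
        return struct.unpack(">d", b[1:9])[0]
    raise ValueError("not a CBOR float")

# Three different byte layouts, all decoding to positive Infinity:
assert decode_cbor_float(bytes([0xf9, 0x7c, 0x00])) == math.inf
assert decode_cbor_float(bytes([0xfa, 0x7f, 0x80, 0x00, 0x00])) == math.inf
assert decode_cbor_float(bytes([0xfb, 0x7f, 0xf0, 0, 0, 0, 0, 0, 0])) == math.inf

# NaN is even worse: every half-precision payload with the exponent bits
# all set and a non-zero mantissa is a NaN (0x7e00, 0x7e01, 0x7fff, ...).
for payload in (0x7e00, 0x7e01, 0x7fff):
    assert math.isnan(decode_cbor_float(bytes([0xf9]) + payload.to_bytes(2, "big")))
```

Ruling these values out of the data model sidesteps the whole family of layouts at once.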
A: And lastly, there's another pull request on the specs repo, 345, which is additional strictness clarifications on DAG-CBOR — just additional little details as my head gets deeper into this space. There are more things like, you know, the tag number: you can represent the tag number in the same way as any other integer, which means you could do it as the smallest possible encoding or you could make it 64 bits and it's still the same number. So we could say tag 42 and represent that as 64 bits.
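A rough sketch of the two extremes just described (not tied to any particular library): tag 42 in its minimal one-byte-argument form versus a padded 64-bit form, both naming the same tag — which is exactly the kind of wiggle room a strict codec wants to forbid.

```python
def decode_cbor_tag_number(b: bytes) -> int:
    """Decode the tag number of a CBOR tag item (major type 6)."""
    initial = b[0]
    major, ai = initial >> 5, initial & 0x1f
    if major != 6:
        raise ValueError("not a CBOR tag")
    if ai < 24:                       # tag number embedded in the initial byte
        return ai
    size = {24: 1, 25: 2, 26: 4, 27: 8}[ai]
    return int.from_bytes(b[1:1 + size], "big")

minimal = bytes([0xd8, 42])                        # 1-byte argument form
padded  = bytes([0xdb]) + (42).to_bytes(8, "big")  # 8-byte argument form
assert decode_cbor_tag_number(minimal) == 42
assert decode_cbor_tag_number(padded) == 42        # same tag, different layout
```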
C: Cool, so I'll be brief. I spoke to Adin a couple of hours ago, because they're having issues with badger — they have a driver for go-ipfs for that data store, and they do this funky thing with how the plugin currently handles versioning. So I think we came up with a way that's going to be okay: we're going to say there's essentially a single repo, and then version one tracks badger version one, version two tracks badger version two, and so on. If you use that and then you go-get an update, something might break, because badger might have broken, but there's no way you can work around that — if upstream breaks, then what do you do? So anyway, I think, as long as they put a warning like, hey, be careful when you update this or badger, because it doesn't follow semver, I think that's an okay trade-off.
D: I just — I don't know why you have major releases if you don't use them as a signal for breaking changes. You have an infinite number of numbers; just use them! What is the point of having these big version ranges in major when you're not using them to signal anything other than, like, let's have a party and get a cake?
C: I mean, to be honest, I don't think that's why badger does it this way. I think they just hate people, because you look at their main product, which is called Dgraph, I think, and the versions are literally called things like version 20.07.3 — that's a calendar year, 2020 July, and then the .3 is just because they like three, I guess.
C: So technically you're only allowed to break stuff on January 1st, and then it's over — you can't for twelve months. Anyway, sorry for the tangent. I also liked what Eric did with the new fluent package for go-ipld-prime. I had some thoughts on that, but I didn't want to — we decided not to write my changes over the top of his, so I published a PR that essentially exposes a new package that's quite similar but has some changes, and the benchmarks are pretty much a wash.
C: I also started writing the CI best-practices doc for Go that I've been talking about for a long time. I spoke with Mikeal and Eric, and we decided we're just going to stick a markdown file into the ipld repo and then we can start referencing that for our team; and then, if it works well and other teams, like ipfs, want to start following that as well, that's also fine. I think that's also going to be good for Martin, who's doing stuff for automating the distribution of GitHub Actions, so at least for Go that document can be used as a reference for, like, this is why you should do this or that, and so on.
C: I've only removed about 500 out of 3,000, but it's better than it sounds, because a lot of them are generated. So I would say I'm about halfway done, and that's it for me.
A: I had a couple of comments there — just quickly on the quip thing. One of the comments I made on the original — I started writing this up, but I thought it'd be easier just to talk about it on the original quip. One of the things I commented was that, as a user, it's kind of nice that as you're processing you get to see when it errors out, so that if you have some conditional logic then you can deal with that. Mostly you would just ignore the error and deal with it at the top, like you've done in your version, but there might be cases where you say, well —
A
If
I
get
to
this
point
and
there's
an
error,
I
really
don't
want
to
do
this
thing
over
here,
because
it's
not
item
potent
and
it's
you
know
it's
going
to
do
something
that
I
can't
back
out
of.
So
I
really
need
to
know
if
this
is
going
to.
If
this
is,
if
this
thing's
successful,
then
I
want
to
do
this.
If
it's
not
successful,
I
definitely
don't
want
to
do
this.
Like
you
might
want
to
garbage,
collect
your
data
storage
thing
and
it
so
it's
like
it's
like
a
transactional
thing.
A: That was my only comment on that, and I don't really know if it's a big deal, because if you were in that situation then maybe you wouldn't use that package anyway, and maybe you expect that most usages where there is an error would result in some kind of panic anyway. So that was just my feedback — take it as you will. I've got another comment about the other one, the HAMT, but any response to that?
C: I was just going to briefly say: I haven't tried, but I think it should be possible, just because the interfaces are essentially the same as Eric's, minus the error pointer. So you could have a bit of code, say three levels down, that just manually uses the go-ipld-prime APIs that return errors, and then, if you want to bubble anything up, you just panic, like my API does.
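The pattern being described — helpers that panic internally while a single top-level entry point recovers and surfaces one error — translates to most languages. Here is a minimal, hypothetical Python sketch of the idea; none of these names come from the actual quip or qp packages:

```python
class _BuildError(Exception):
    """Private signal used by the helpers; callers never see it directly."""

def assemble_int(value):
    # A deeply nested helper: on bad input it "panics" (raises) rather
    # than returning an error that every caller would have to thread up.
    if not isinstance(value, int):
        raise _BuildError(f"expected int, got {type(value).__name__}")
    return value

def build(fn):
    # The single top-level entry point "recovers": it catches the private
    # exception and turns it into an ordinary (result, error) pair.
    try:
        return fn(), None
    except _BuildError as err:
        return None, str(err)

result, err = build(lambda: {"answer": assemble_int(42)})
assert err is None and result == {"answer": 42}

result, err = build(lambda: {"answer": assemble_int("oops")})
assert result is None and "expected int" in err
```

The trade-off discussed above is visible here: intermediate callers never see the error, which keeps them clean, but also means they cannot react to it mid-build.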
A: And could you nest them as well, so you do multiple instances of it? Yeah. Okay, with the HAMT thing, maybe the best way to do it is — I know a lot of the tests are really focused on very minute details, but what you could possibly do, for some of them at least, is run the test and print out a CID, the root CID, and then that's what you care about.
A: So you could run the same logic, get to the same CID, all good. There are a few tests that I know rely on the root CID, I think, and then there's a bunch of tests in there as well that I did that actually check your byte layout manually, so that might be useful too. But I would say the root CID gives you everything you need, really: if you don't get the same root CID, then you didn't get the same form.
D: Okay, hold on — okay, all right, yeah. So, a bunch of IPSQL stuff. You can now import most CSVs and get a SQL database out of them. You can then export that database as a CAR file. You can do SQL queries and export a CAR file for just the query — literally a tiny subset to just satisfy that query. So, basically, we have Merkle proofs for SQL queries now, which is beautiful.
D: I also added encryption. I'll talk about the approach to encryption, and at the end of the call we'll do a bigger thing on it — we may actually just make this the way to do IPLD block-layer encryption, so I'll talk about that a little bit later. But you can export encrypted CAR files for any of these things as well, and then do queries over those CAR files by decrypting them, and you can actually take that CAR file and pin it in ipfs and replicate the encrypted graph around as well.
D: So it's pretty slick, yeah — that was some pretty fun stuff. I also added a command-line interface for doing mutations, like create and insert and all that kind of stuff. So yeah, IPSQL's really coming along.
D: I think the next set of things I'll do will probably be a key-value store — a key-value table, basically — leveraging the same sort of column indexing based on schema, but over a key-value store, and then having the columns use paths as the names of each column, so that we can basically put arbitrary dags into the key-value store and then the traversals will work.
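A loose sketch of the path-named-columns idea — entirely hypothetical, since IPSQL's actual indexing isn't shown in the transcript: flattening an arbitrary nested document into columns whose names are traversal paths, so that ordinary column indexes can serve path lookups.

```python
def flatten_to_columns(node, prefix=""):
    """Flatten a nested dict/list into {path: leaf-value} columns."""
    columns = {}
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        return {prefix or "/": node}    # a bare leaf value
    for key, value in items:
        path = f"{prefix}/{key}"
        columns.update(flatten_to_columns(value, path))
    return columns

doc = {"name": "alice", "tags": ["a", "b"], "meta": {"score": 7}}
cols = flatten_to_columns(doc)
assert cols == {
    "/name": "alice",
    "/tags/0": "a",
    "/tags/1": "b",
    "/meta/score": 7,
}
```

With every leaf addressable by a path-named column, a traversal over the original dag and an index lookup over the flattened table can answer the same question.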
D: And then you can just query them with regular SQL, because these column indexes will normalize it — so that'll be pretty hot, yeah. And then the last thing: I was talking with Mikola on Saturday. Mikola is at this Chinese game company where they're encoding all of their game data as JSON and these custom data structures. They're using IPFS, but they're just encoding everything as raw bytes and then passing around the CIDs for those raw bytes. We were talking about some of the things he wants to do, and I was understanding how his build pipeline works, and it turns out that they actually rebuild all of their game assets every time they change anything.
D: And so he's already halfway to a WebAssembly build system that does stuff with IPLD, and we started talking about what some of the base-level primitives would be that he would need to start referencing these WebAssembly functions in the data itself, and potentially doing more lazy rebuilding of things — maybe not rebuilding —
D: — unless you need to, and stuff like that. There's a link in the notes, but I wrote up a sketch of the lowest-level primitive that we would want to use for this, and it's just a really simple CBOR object with a library on top of it. You'd want to use an FBL (flexible byte layout) for big pieces of binary and stuff like that. But it's an interesting —
D: — let's just let somebody else deal with that. All we need to do is link to a binary that people will compile into WebAssembly, and then figure out how to run it, and then its input will be this other CID of, like, the output data, right? And then you can have multiple systems tooled on top of this, if that makes sense. So that was kind of a cool thing too, but yeah, I'll table the encryption stuff until we can get to it after everybody's done their update.
A: Okay, Peter.
E: Yeah, I'm next, I guess. I have been working still on the thing that I got working last week: the ability to put pretty much most of lotus into a single database, and to get it to a performance envelope where it can actually be used going forwards and backwards — it can actually go over a bunch of old states and ingest them in a reasonable time.
E: Unfortunately, the current variant is fast and wrong, in the sense that it deadlocks quite often, and I'm working on solving that. Other than that, the initial tests from other people have been pretty positive: you basically get a full Filecoin node of sorts within about an hour from start to finish, which is nice, with all the data that we have. The only thing I'm kind of waiting on is still input from Rad, on whether it gave you what you needed to work with things — we don't have to do this on the call. And yeah, hopefully this week I'll have another try midweek to get mainnet to include the new code and see how that goes. That's all I have.
F: So I was also doing a little more wrap-up work on some of these little research spikes of "can we make nicer APIs to crank out data with go-ipld-prime". Daniel and I kind of both did our own little thing on this, and I think they're both really interesting — mine's the one that shoves more errors in your face. We both have PRs out for these now, and I think we'll merge both; it's going to be cool.
F: I should write this up in issues on GitHub, but on the thing that Rod mentioned earlier: I had a very similar thought about the strategy of trying to use more panics for this. It gets into a situation that reminds me a little bit of the "What Color Is Your Function?" problem — which is a much worse problem; it's a cleverly named blog post about async features, mostly in JavaScript, I think — but it's a pattern that I sometimes look at in APIs.
F: Like, "this may panic, and I will know this" — and sometimes that works. But then, like what Rod was suggesting, you have this situation where you might need to know if you've had an error partway through, for some other logical-interaction reason, and if you need to do that, now you have to invoke the color model in your head again, right? And, if you have this pattern that's trying to help you by having this top-level function always doing the error gathering —
F: — then you end up seeing that top-level function in several places, and now your indentation has two meanings. In one place — in this quip example, anyway — the indentation means my data tree structure is getting a correspondingly deeper nesting of children; but in some places we'd gain an indentation because "I have error-handling logic here", and you'll be colorizing them in your head again: is this indentation data structure, or is it error handling?
D: I have a similar thing in JavaScript, actually: getting blocks will throw on not-found, and sometimes you just want to know if it's not there, so you end up having to do try/catches. But if you return null instead, which would be cleaner, then you get other bad errors in JavaScript, because null will just — it's an object, and so things happen with it when you return it, and errors happen in other places that are really hard to debug.
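A small, hypothetical sketch of that trade-off (Python stands in for the JavaScript here; the store classes are invented for illustration): a `get` that raises forces callers to wrap the common not-found case, while a `get` that returns a null-ish value lets the failure surface far from its cause.

```python
class ThrowingStore:
    def __init__(self, blocks):
        self.blocks = blocks
    def get(self, cid):
        if cid not in self.blocks:
            raise KeyError(cid)        # caller must try/except "not found"
        return self.blocks[cid]

class NullingStore(ThrowingStore):
    def get(self, cid):
        return self.blocks.get(cid)    # cleaner call site, but...

store = NullingStore({"cid1": b"\x01"})
block = store.get("missing")           # no error here --
try:
    block.decode()                     # -- it blows up later, far away
    raise AssertionError("unreachable")
except AttributeError:
    pass                               # the hard-to-debug distant failure
```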
F: So the other stuff I worked on this week: I was mostly trying to get towards the schema package in go-ipld-prime using the schema dmt package — which, as a reminder, is an acronym for data model tree, and it's what I call the package that is the output of codegen using the schema-schema. So that package can recognize, say, a schema document in JSON and give you the thing, and the purpose here is bringing those two things together.
F: I thought that we could approach this by writing really thin wrappers around the schema dmt package — so 99% of your data would still be held in these generated types, and we'd write these really thin wrappers for the couple of things that are, like, graph properties and need more pointers, and then it'd be fine. This would result in lots of immutability properties for free, and it would result in me writing not so much darn code, because it would not be redundant.
F: It just seemed like a nice way to go — and it doesn't work, because golang has an import cycle detector, and generated code implements schema.TypedNode, so having the schema package refer to this package of generated code in order to implement its own stuff is a big old nope. Sad trombone.
F: I just had a really, really sad day. So Daniel and I talked about the different ways to approach this; we came up with a lot of different ideas, some of them more complex than others — clever, I don't know. The thing that I'm thinking of going with now is trying to minimize complexity.
F: Some of the fanciest things we came up with were trying to use the aliasing feature of Go to make the cycles go away, and those were all genius but kind of scary. So what I'm thinking now is we'll have roughly three packages. One of them is the schema dmt package, as before, which processes the raw JSON data; then a schema compiler package, which will still have exported functions you can use to programmatically create this fully reified information; and then the schema package itself will just be interfaces.
F: And this drives me nuts, as the person writing the code, because the inside of a bunch of these packages is super honkingly redundant: the schema compiler package will have all of these Go native structs that I have to write again, which are almost exactly identical to the generated dmt structures, and I'm just so —
F: — but, import cycles. So this is what we have to do, apparently. When I finally issue this PR, it's going to also come with tons of caveats in the documentation, like: any other Go developer who's using the codegen tools, please don't do what we're doing here, because we're doing this for cycle-detector avoidance and literally no one else ever will have this problem.
G: Not much directly IPLD from me in the past week. At some point I will return to the GraphQL work once actors v3 settles out, and help with that update — that's probably the main thing.
H: I pushed a PR on this "hey, what if we use bitswap to download huge blocks" thing — I would appreciate some thoughts. The two biggest things are: one, double-checking the security properties — basically, can SHA-256 deal with what they call freestart collisions, or not — and the other is: is this the right thing to do? Is this something that, if we did it, we should sort of hide under the covers, or, you know, tell people —
H: — this is a good thing to do? And that would mean that, instead of sharing the dag identifier for your file, you would share the canonical identifier for your file.
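To make the "huge blocks" idea concrete with a library-free sketch (the actual PR's scheme isn't quoted in the transcript, so treat this as an assumption-laden illustration): SHA-256 can be fed incrementally, so a downloader could hash a very large block chunk by chunk as it arrives and compare against the canonical identifier at the end, instead of needing a dag of small blocks.

```python
import hashlib

def hash_in_chunks(blob: bytes, chunk_size: int):
    """Feed a large blob into SHA-256 incrementally, as a transfer would."""
    h = hashlib.sha256()
    for i in range(0, len(blob), chunk_size):
        h.update(blob[i:i + chunk_size])   # verify-as-you-go accumulation
    return h.hexdigest()

blob = b"a pretend multi-gigabyte block"
# Incremental hashing matches one-shot hashing, whatever the chunking:
assert hash_in_chunks(blob, 4) == hashlib.sha256(blob).hexdigest()
assert hash_in_chunks(blob, 7) == hashlib.sha256(blob).hexdigest()
```

The catch, presumably, is that nothing can be rejected until the very end, and intermediate hash states are being trusted mid-stream — which seems to be why the collision question the speaker raises matters.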
B: Yeah, okay — Carson, anything from you?
I: Well, I'm mostly just here to lurk in the background, but that's fine, yeah, so I'll leave it at that. Oh, there is some exciting stuff, but I'll wait until I have made a little more progress there.
A: Cool, cool. Are there any other headline items we want to cover in this meeting? Are we going to merge straight into this next meeting, Mikeal, or is that a separate thing?
D: No — well, we can just include it as part of this meeting. It's fine.
D: Okay, yeah — I mean, like I said, originally it was its own thing because I wanted to get some separate guidance to finish it up, but actually —
D: — it already works now, so I don't need that. I really just need to know: does this make sense to everybody as the way that I feel you should do block encryption, and should it be added to everything?
D: So I had a long chat with Carson on Friday about how they do encryption and some of the challenges, and what I really wanted to figure out was: how do we, in IPLD, and then also in IPSQL —
D: — how do I add just this single layer of encryption? Not the whole replication key and all the key management, but literally just: how do I do the block-layer encryption stuff? I think we'd gotten into some weird, recursive mental loops that were hard to get out of. One was that we were really assuming that there was going to have to be some kind of key negotiation in the link loader, and that that was going to, potentially —
D: — have to actually look up the key in the link loader in order to traverse through this stuff. And another problem is that, just conceptually, we've been thinking about block storage kind of the wrong way: because we tend to use ipfs as a reference point for a block store, we're taking all of these other semantics about ipfs along with us and assuming that that's how block access works. In order to really build applications, and build reliable services, and build data structures that you manipulate —
D: — we really need to think of block storage as the lowest-level possible primitive. It needs to function basically like a device driver — it's like a syscall, where I say "this address" and you give me back data. On some level we sometimes need it to be that reliable, right? Whereas in ipfs, when you store a block, other things are happening: you're sharing that block on a network, it's basically publicized now, and there are all of these other things —
D: — you have to worry about in terms of leaking. And it's really just not very realistic for me to build a data structure that works unencrypted and then also make that data structure work encrypted if I can't just access unencrypted blocks out of the store — I actually need those to already be unencrypted, or to have some kind of scheme for that.
D: So what I ended up building, after talking with Carson, was: all you need are codecs for the major encryption ciphers. What I have right now is just AES-GCM. So you have a CID with a multicodec that says "this is AES-GCM", and there's basically a codec for that, which understands that block format: it says the first 16 bytes are the initializing vector and the rest of it is the ciphertext. So, in terms of the data model, what does that codec return?
D: It gives you a struct with the initial vector and the encrypted bytes. At the data model layer, that's all that you see — you can't traverse into them, you don't see the links or anything else; that's all you get. But that codec ships with an encryption and a decryption function that you call separately, and when you call these encryption/decryption functions —
D: — they also have a block format specified for the unencrypted state, and this is so that I can call encrypt or decrypt with a key and then pass it the initializing vector and the bytes from the prior codec, and what that's going to return me is a new struct with CID and bytes — that's where the unencrypted block format comes in.
D: So the decryption takes the initializing vector, the key and the bytes, and then it takes that whole blob, parses out the CID from the rest of the bytes, and returns that in a struct, essentially.
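Assembling the pieces just described into a minimal, hypothetical sketch — the real IPSQL codec isn't quoted in the transcript, and a real implementation would use actual AES-GCM from a crypto library; here a stand-in XOR "cipher" keeps the example dependency-free:

```python
def decode_encrypted_block(block: bytes) -> dict:
    """Data-model view of an encrypted block: iv + opaque ciphertext."""
    return {"iv": block[:16], "ciphertext": block[16:]}

def toy_cipher(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Stand-in for AES-GCM: a keyed XOR, symmetric so it also decrypts.
    stream = (key + iv) * (len(data) // (len(key) + len(iv)) + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def decrypt(key: bytes, node: dict) -> dict:
    """Out-of-band key applied to the codec's struct; the plaintext layout
    sketched here is [cid length][cid bytes][block bytes]."""
    plain = toy_cipher(key, node["iv"], node["ciphertext"])
    cid_len = plain[0]
    return {"cid": plain[1:1 + cid_len], "bytes": plain[1 + cid_len:]}

# Round trip: plaintext layout -> encrypt -> decode -> decrypt.
key, iv = b"k" * 16, b"i" * 16
cid, payload = b"\x01\x71\x12\x20" + b"h" * 32, b"hello blocks"
plaintext = bytes([len(cid)]) + cid + payload
block = iv + toy_cipher(key, iv, plaintext)

node = decode_encrypted_block(block)
assert node["iv"] == iv                      # all the data model exposes
out = decrypt(key, node)
assert out["cid"] == cid and out["bytes"] == payload
```

Note the separation the speaker emphasizes: `decode_encrypted_block` takes no key and exposes nothing traversable, while `decrypt` is a separate call that takes the key out of band.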
So the key information is completely out of band and is applied separately, and all of these blocks work perfectly at a data model layer for ipfs and for everybody else.
D: For when they're replicating them unencrypted — like, if you want to encrypt all the blocks for a database, or for a query, or whatever — you just create an unencrypted graph over it, and as long as that unencrypted graph doesn't preserve any ordering from the initial graph, there's no way to tell what that data really is. And then you just need to make sure that you keep a reference for the unencrypted — sorry, for the encrypted — CID of the root node, because you're going to lose that reference otherwise.
D: It uses a chunky-trees CID set to create a big graph over that data, and then it puts that in a new struct along with the other root reference, and then that block is the root of the CAR file. And so when I say decrypt=key with IPSQL, it knows:
D: okay, this CAR file is going to be encrypted; I'll pass through the whole structure, grab all of the blocks that have this multicodec, decrypt all the blocks with that multicodec using this key, and put the decrypted blocks in a private block store — which is usually just in memory — and then that's what I query. And so I have all the same code to query that as I have to query any other graph.
D: There's nothing special. The nice things about this block format and this way of doing things are: one, it gives us a way to write codecs for encryption that include the decryption program alongside the rest of the codec, and it standardizes the format; two, it doesn't have an opinion about how to do all of the other encryption layers. So if you want to rewrite a graph so that all the links are encrypted in the graph — not what I'm doing — you can do that with this format.
D: It's the same layering, it's fine — or you can do what I'm doing. It doesn't take into consideration all of these other replication-key semantics, because that's all going to be implemented as a layer on top of this anyway, and it doesn't have to dive into key management again, because the key is applied completely out of band.
D: So if you want to have information about how to do key lookups, you just encode that into the graph — that information is going to be wherever it needs to be for that use case, and that's agnostic of IPLD; it's a separate operation that you're doing. So I hope that makes sense.
D: That was a lot of information very quickly — if you want to unwind this and talk about other points, I'm happy to. But this is something that, literally this week, I can go and implement in js-multiformats. It can be part of our regular block API; there can be codecs for all the AES functions that are in the browser and in node; and we can do the same thing in Go for go-ipld-prime in, like, a week. These are really simple codecs, right?
D: All of the hard parts of the encryption are in standard libraries. We shouldn't add anything that's not in the standard libraries, and we can —
A: — but it doesn't scale very nicely to very large data sets, does it? Because if you're opening up a block store and you say "here's my key, give me the private block store", that's a lot of blocks, and it's got to do initialization: scanning through everything, decrypting everything, and copying them.
D: That code is only as expensive as it needs to be, because those encrypted blocks, as long as they're for the same key, will have the same CID. So if you're taking these CAR files and putting them into a public block store — or one of these block stores — you're going to know all the CIDs you've already imported, and then you're going to put them in the private block store, which also has deduplication.
D: And, I mean, you could make a private block store for the unencrypted data that is on disk, right? You could have an encrypted disk — there are other ways of encrypting that data to keep it safe. It doesn't have to be in memory; that's just the easy way to do it right now. And my —
I: Yeah, it's not so bad anyway, because I think the first hour is going to be mostly introductions from the various groups — Textile's going to be giving some visions, giving some feedback, a bunch of preliminary stuff — so you might not miss much. And, just for some context for the other people on the call about what you just described here:
I do have a few questions about some of this struct-parsing stuff, but — well, we had the conversation on Friday — it's very similar to what we landed on for the lowest-level encryption at the block layer, and then we have some additional layers on top of that for the threads. And so Sander already has a bunch of Go code that pulls apart —
I: — the set object, where the relationship between the blocks is gone. You know, you're kind of starting to solve that, and that's pretty magical from our perspective at Textile, where we're trying to flush things to Filecoin — you know, we can take that, right?
D: It seems like, in general, where we're landing with replication patterns is: I want to do a query with somebody that has the data, and then what I want is those CIDs — that set — given back to me in some kind of a package, right? And the nice thing about the way that I did this in IPSQL is that you do the query —
D: — great, yeah — and then you can pin that graph now, and if you were going to create a replication queue, you'd create that over this, and you're doing it for this replicatable set, right? Whereas if you don't do this, what you end up with is just a bunch of round trips, because you can't share the final key to look at the links — the underlying links — with the replicator, right? Yeah.
D: Yeah — the unencrypted block format is: length of CID, CID, bytes.
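That framing is easy to sketch. Assuming, hypothetically, a single length-prefix byte (a real implementation would more likely use a varint):

```python
def frame(cid: bytes, payload: bytes) -> bytes:
    """Plaintext layout: [length of CID][CID][block bytes]."""
    assert len(cid) < 256              # single-byte length, for the sketch
    return bytes([len(cid)]) + cid + payload

def unframe(plain: bytes):
    cid_len = plain[0]
    return plain[1:1 + cid_len], plain[1 + cid_len:]

cid = b"\x01\x71\x12\x20" + b"\xaa" * 32   # a CIDv1-shaped 36-byte value
cid2, payload = unframe(frame(cid, b"block body"))
assert cid2 == cid and payload == b"block body"
```

Carrying the CID inside the plaintext is what lets decryption hand back a fully addressed block without any extra lookup.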
D: The CID — Eric brought this up earlier today — the cool thing about this is that it just transparently allows as many layers as you want, because if that CID says "oh, this is a different layer of encryption", then, yeah: you can end up re-encrypting things that you happen to have in your graph without even knowing they were necessarily encrypted before. It just transparently works, right?
H: So there's a lot of info in that last little bit, and I probably have not processed all of it, but I want to see if I got some of the bigger points, which is: you know, I have a graph —
D: So that's one of the use cases, right? I think the nice thing about doing the encryption this way — using multicodecs and CIDs — is that you can compose into that, but you don't necessarily have to. You can come up with other schemes for doing this as well; that's a good idea, and we're going to do that, but, for instance, how that manifest looks — no opinions, right? As long —
D: — as it's a graph that's connected and viewable unencrypted, to connect everything together, then it'll replicate around, right? You don't even need the codecs or anything. I'm using a chunky-trees thing that nobody has an implementation of but me, but it doesn't matter — ipfs will parse it.
I: And there's some interesting work they've done there to try to avoid leaking much information there too — like, you can't get deep enough into that manifest to learn much about any part of the graph that you don't actually explicitly have permission to see, something like that. It's some form of magic, so I will learn about it more tomorrow, but yeah.
F: Yep — and it seems like some of this manifest stuff is a really common approach, with lots of different implementations, and with the specific, understood application of making sure that pinning, and other forms of recursive walk over the cleartext component, provide some value.
F
Those things seem to be reusable, and at the same time it seems like a lot of these different implementations that applications are making are not. When we're programming, when we're creating specs, it's always important to wonder whether something is coincidentally divergent or if, no, actually there's a reason for these things to specialize in different directions. And I think some of these things actually do have reasons to specialize in different directions: in some applications you're going to care a ton about making sure that cleartext manifest leaks nothing, and at the same time, in general, that's probably not going to come for free.
F
It probably comes at the cost of significant algorithmic complexity, and sometimes maybe performance complexity, because you'll be doing these different rebalancing operations in the middle to hide things. And maybe there's padding; trying to hide metadata about size in a distributed context like we have here is really touchy. So yeah, having lots of different things going into the space and trying to find just a few pieces of common vocab seems really good. I'm excited about this.
F
About it earlier: the fact that this recurses cleanly is actually a really good indicator. By using the specification of "okay, we're going to use multicodec indicators to include encryption codecs", and our very methodical definition of codec in ipld now, right: the codec must transform binary data to some data model representation, and that should take no additional parameters other than the multicodec indicator.
F
It does not perform single-step decryption, because that would actually be incompatible with our definition of codec; it would require more parameters, et cetera. So this definition of just decomposing it into these other ciphertext primitives is actually really clean and clear, and doesn't clash with any of our other definitions, and then using feature detection as an intentional step.
F
On top of that, to say "this is an encryption codec, here's another function that is available to you, now wire your key into this", and doing all that under feature detection means that it still works really cleanly. And then you have another method which is going to give you another data model representation, that is, the cleartext. And so we have successfully avoided something that's been a big problem with a lot of these proposals before, which is that you get this awkward problem where it's like...
F
D
A
You do lose a lot of your graph primitives for the unencrypted case, though, don't you? When you're dealing with large data sets and you've got this manifest, you need all or nothing in order to look at your data. That's correct, right? You need everything. So if I'm... if you.
A
D
No, no. So say that's a car file. The car file format that I did, it's going to have the top-level object that's going to tell you what the encrypted block address is for the root. So you can just ask for that one, right, and then parse and unencrypt it and look at what the links are. Oh, no, no, you're right: those are to the unencrypted state. There's no way to map between them. You're right, you're right, yeah, you do need the whole thing.
D
That is specific to the way that I did this, though. If you were to actually just encrypt the whole graph as a link layer, you could use the same codec, and then what you would get back would be the encrypted address instead. You could build the graph that way and have encrypted addresses in there as well. You could rewrite the whole graph. It's just...
D
D
G
G
A
What I wouldn't mind seeing is an enumeration of the trade-offs; I think that would get to some of the heart of what this is proposing and why it is better or worse than other things. Because encryption will involve some severe trade-offs regardless of how we do it, and so it would be nice to see that and then look at the trade-offs compared to other things. Like, I would say one of the trade-offs that is in the proposal here would be complexity.
A
G
The other part of it, in the same vein of trade-offs in general: there are some that we can do, but I think the other side of that is having it rooted in specific use cases. So if you've got two people who both have the keys and are syncing, they can efficiently sync the delta in this world still, right, because they can come up with a delta manifest of the blocks that have changed and then sync that additional sub-car, and that'll be small.
G
Another thing, too, is that that's dependent on a specific use case, versus one person has the whole thing and another person's trying to do a view of a small subset of a large data set without having the whole set, right. So if we can come up with a few of these use cases that we see as primary, you know, things we're enabling, and then think about how encryption affects those, that might also be useful in terms of how we frame trade-offs.
H
But if I start encrypting blocks and then I strip all the structure out of the blocks, then finding who I can even ask for things becomes much more complicated. Because if I want to take a chunk of the encrypted graph and put it in another graph, I'm like: how am I going to find things? And it seems like it's probably...
D
H
Well, I mean, you can magic it away. If the thing you care about is "I don't want people to read my data", and the things you're not worried about are "I don't want people to know that I accessed this encrypted data" or "I don't want people to know how big the data was", right, if I just don't want people to read my stuff, this is not a problem.
F
Right, yep. So I think we're going to have to figure out where we want to draw the line between ipld libraries and our specs. Some of this codec stuff will have actual features that we will want to be clear about and specify, and then at some point these things turn into application-level logic. Maybe we should include some description of these potential use cases and trade-offs that we know will be encountered, and we might want to provide some of that in the ipld documentation.
F
Maybe, especially if it helps us describe vocabulary that helps people interact; that's when it's really worth it to be in our documentation, right. But if the far end of the spectrum is that we write cryptography introduction textbooks on the ipld docs site, I think that we probably don't want to do that, because...
F
I mean, that's cryptography. I have a bunch of crypto textbook material still on my desktop from courses I've taken; it never left my desktop. But do I want to regurgitate these things into the ipld documentation at full granularity? No, I do not. It is a subtle topic. We should have boundaries on how much breadth we give it.
D
That only does this, right. Like, you know, we're not encoding information about the key, we're not involved in key management or anything like that. So many problems have kind of been deferred that it's not clear that there really is a trade-off to write up at this layer. It's just that as soon as we prescribe how to use it, we have to talk about trade-offs between different methods.
D
D
D
Like, the manifest is entirely in a layer above this, though, above this sort of encrypted codec layer.
D
D
Since encrypted blocks have cids that are just like cids to unencrypted blocks, if you, you know, re-encode the graph to use encrypted links, or just all plain, and create graphs over them, none of that changes how the block-level part of it works, right?
H
D
Right, right. You're going to have to decrypt that data in order to look at the manifest anyway, so that's already an operation that is not a regular ipfs traversal operation, right. You're applying a key to this data in order to decrypt it, and then when you decrypt it you go, "okay, what do I do with this data now?", and that's where all these semantics come in.
H
That seems like it does then have trade-offs, right, as opposed to if I just encrypted all of the blocks and I was doing the gross thing where I passed a context through every operation, and the context can carry a key, and if the key is an aes key and it looks like it's an aes codec, then I apply the decryption thing. Then it means that I can...
D
So there are different ways you would interact with it, whether you did this atomic encryption of each block or the layers on top. But in terms of how you create the manifest file and how you tie those together for the replicator, that's basically the same. It's a little bit easier because you can just point at the root node and then you know that it's going to be able to traverse through... or no, no, wait. No, that's another key!
D
D
F
The key insight about the api for this encryption stuff is: yes, we can use multicodec indicators in a way that does not conflict with our definition of codecs being unparameterized, and by having the codec behavior be defined as actually still yielding you ciphertext, we removed all of the ambiguity around what happens to the poor, poor ciphertext.
F
I don't think we've solved that at all, but I also don't think that that's generally solvable, because you get deep into application logic territory there instantaneously. So the best thing we can do is figure out what apis to give people and play it by ear from there on out.
F
H
Let me just make sure I'm understanding what this is separating. The way this is working is by saying that all of the path traversals or selectors I might want to apply on a decrypted version of the graph can only apply to the graph once it's been decrypted. So, basically: get an encrypted graph, decrypt it, then run selectors. Which is different from the other proposals, which allowed running selectors on encrypted graphs. Which is fine, but...
F
D
Okay, so the way that I did this is: yes, you just unencrypt everything and then you can do the traversal normally. If you didn't do that, if you rewrote the graph to the links and you wanted to, say, add an api to the link loader so that it understood this key, you'd still want a multicodec identifier.
D
That tells you if it's an encrypted block or not, so that you know, when you hit that block, to do this dance and apply the key and swap out the reference to the block, right. So you still actually need the same block-level standard, along with the multicodec identifier, to know when to apply the key and when not.
H
D
No, no, but what you're saying in that case, though, is "I want you to treat encrypted blocks as if they are the unencrypted state". So rather than traversing the data model into the decoded encrypted block, which is just the iv and the bytes, you actually run the decryption program, and so what you're telling the graph traversal engine is...
F
Yeah, the thing that makes me really, really happy about this is that now doing things like applying a selector to ciphertext is actually really well defined with this proposal, because it applies to the dang ciphertext. It's just... it's good. We previously didn't have a story about how we expected to do that with any of the other encryption discussions we've had.
F
People have always wanted to propose "okay, we'll just make encryption a codec, and then by the time the codec runs, you will have the cleartext", and that is actually a bad idea, because if you wanted to write a selector which worked over the ciphertext, you can't; it's undefined. It's not just api-hard, it's literally not defined. And so now, with this definition where the codec just, say, chunks up the ciphertext into iv and ciphertext body, that's defined.
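A minimal sketch of what such an encryption codec's decode step could look like, assuming a block layout of a fixed 16-byte iv followed by the ciphertext body. The function name and the fixed iv length are assumptions for illustration; the point is that the codec only splits bytes into a data model shape and never decrypts.

```javascript
// Hypothetical sketch: an "encryption codec" decode step that only
// splits the block into { iv, bytes }. It takes no key and performs
// no decryption, which is what keeps it a valid, unparameterized codec.
// A fixed 16-byte iv (the aes block size) is assumed here.

const IV_LENGTH = 16

function decodeEncrypted (block) {
  if (block.length < IV_LENGTH) throw new Error('block too short')
  return {
    iv: block.subarray(0, IV_LENGTH), // initialization vector
    bytes: block.subarray(IV_LENGTH)  // opaque ciphertext body
  }
}

// A selector can now traverse { iv, bytes } as plain data model values;
// decryption is a separate, key-taking function layered on top.
const block = new Uint8Array(20).fill(7)
const node = decodeEncrypted(block)
```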
D
Like, I think the more useful part is that the unencrypted state also has a defined block format that separates out the cid in the block, because that's the other annoying step in encoding all this stuff: how to actually deal with the decrypted state to turn it back into the cid and the bytes. Yeah, but I mean, like, in javascript...
D
F
A
Yeah, you're welcome to join us, carson. Your input would probably be very...
A
Each week, same time, same channel. Is there anything else we want to cover before we close? Peter, did you want to do the float thing? I think that was you.
E
Nah, I guess it's getting so late. I basically just wanted to raise again: why are we so obsessed with keeping floats when at the same time we're documenting "yeah, it is, you know, not supported, except this and this and this"? Well, pretty much all our codecs fully encode and decode them, but we don't support this stuff, so don't rely on it; that's not the spec.
E
D
E
It's more like, we call out in a lot of the documentation that we're putting out that this is not a thing, that floats are pretty much antithetical to what we're trying to do with ipld, with, you know, immutability and one way to represent something, and we almost go like...
E
Actually, I think I wrote in the thing that you submitted there was a sentence: "do not use this". So why don't we just say, you know, we discourage them: they're here as a historical artifact, we learned our lesson, don't use them going forward. Why aren't we entertaining this instead, on the pr's as well?
A
So here's my thinking on that, and I'm just going to get the wording that I used in there, which was: so we support it, and we just put floats there. There's a note: do not consider this to be a one-to-one mapping to ieee 754; that's not how floats are in the data model.
A
"And it is recommended float values be avoided when developing systems on ipld, and content addressable systems in general, due to the broad scope for introducing variability in the byte representations. Alternative methods for encoding fractional numbers with greater precision and less variability should be considered where possible." My thoughts are that this is very use-case dependent. There are plenty of use cases where convergence on bytes for data is not important and you just want to spit out data and transfer it and have a content address for it. And so what...
A
If you could do the same thing in different forms? And so this is like a narrow side concern. It's one of the major concerns we have, but it's a narrow one for a specific set of use cases, and there's a lot of use cases where that's not a concern and people won't care: "okay, I want to encode 1.1, and who cares if it's not something I can do accurately; I just need to get roughly there". And I used to work in...
A
Actually, I did a lot of work in genetics. One of my previous lives was in animal genetics, and we used to measure animals by all these genetic parameters, and we would have these floating point numbers around for everything. And there was this point at which the precision you needed was pretty low, and so it was like: yeah, as long as it's in the right direction, that's fine. And there's a lot of people out there with a lot of data where that doesn't matter.
A
So the precision is not... like, your single precision would be fine, and roughly the same thing: you don't need true equality, and I don't need convergence of my data forms onto the same cid for the same data. That's just not a thing. So I think that's where I'm getting at with those comments, and it's really up to the user to decide; that's the spirit of ipld I think we're pursuing, versus if we were to rule it out and say "look, this is just not something we're going to support into the future".
A
E
Right, but then I have the converse question: if you are in a place where you want to shovel data around, like, "man, whatever, you know, the right shape is ipld", is this something we actually want them to use? Because they're getting all the baggage without any of the benefits. Or, for example, just like what we were talking about with this encryption stuff: you can't do any of that without a very well defined...
D
When you say... I mean, look, this note about using them in ipld is essentially a warning that says: look, if you know that you need a high degree of precision, you're not actually going to get that between languages, period. Languages have a really hard time having consistent float implementations, javascript especially. But if you have one programming language in one system and you just need to shovel the bytes back and forth, you could just use bytes, right?
D
Like, you know that that's going to be consistent with whatever your implementation is: you stick it in as a bytes value instead of using a float. That's kind of what we're warning people about. If you can just encode these as your own bytes and decode them that way, you may end up with much more consistent behavior, especially between languages.
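As a sketch of that suggestion, the eight raw ieee 754 bytes of a double can be stored as a data model bytes value and round-tripped exactly, regardless of how a codec would have re-encoded the float. The helper names here are illustrative, not from any ipld library.

```javascript
// Sketch: store a float as its raw 8 ieee 754 bytes instead of a
// data-model float, so the exact bits survive any encode/decode cycle.
// Helper names are hypothetical, not from an ipld library.

function floatToBytes (n) {
  const buf = new ArrayBuffer(8)
  new DataView(buf).setFloat64(0, n) // big-endian by default
  return new Uint8Array(buf)
}

function bytesToFloat (bytes) {
  return new DataView(bytes.buffer, bytes.byteOffset, 8).getFloat64(0)
}

const bytes = floatToBytes(1.1) // always the same 8 bytes
const back = bytesToFloat(bytes)
```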
G
D
That's my point: golang decode/encode of float bytes is very consistent, but decode a float representation from cbor in javascript and then re-encode it and send it back to go, and it will convert it to an integer sometimes, if there's no mantissa. So yeah, that's why it's better to just encode them as bytes if you know that's going to happen, right.
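The javascript side of that hazard is easy to demonstrate: there is only one Number type, so a decoded 1.0 is indistinguishable from the integer 1, and a re-encoder has no way to know the value arrived as a float.

```javascript
// JavaScript has a single Number type, so a float whose fractional
// part is zero is indistinguishable from an integer after decoding.
// A re-encoder will therefore typically emit it as an integer.

const decodedFloat = 1.0                            // arrived off the wire as a cbor float
const indistinguishable = decodedFloat === 1        // the two values compare identical
const looksLikeInt = Number.isInteger(decodedFloat) // so it re-encodes as an int
```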
F
Those things don't even agree on what "float" means. They definitely don't agree completely with ieee 7-whatever; they definitely don't consistently agree with what various programming languages implement; they do not consistently agree with what various silicon etchings, what smart sand does with floating points, because that isn't consistent between different generations of smart sand made on planet earth. And we are unwilling to claim to people that these things are not problems.
H
It's like, what happens when I want to store 1.1, right, and my data model is bytes and integers and strings, which are also bytes? Okay, so I have bytes and integers and cids, and I want to store 1.1: what's my plan? Because I feel like will's point was, if you say it's bytes, then that's kind of miserable for the user.
H
D
A
But different use cases will find appropriate solutions. Like in finance: I think most of finance has really agreed on "we use integers and then just divide when we want the fractional part", and you can see that through the blockchain industry, starting with bitcoin, where you store the very big number and then you divide it to get the btc value, and filecoin's doing the same thing.
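That fixed-point approach can be sketched as an integer amount plus a known divisor, as bitcoin does with satoshis (100,000,000 per btc). The helper names are illustrative only.

```javascript
// Sketch of fixed-point money: store an integer count of the smallest
// unit plus a known divisor, as bitcoin does with satoshis.
// BigInt keeps the arithmetic exact. Helper names are hypothetical.

const SATS_PER_BTC = 100000000n // 1 btc = 100,000,000 satoshis

function btcToSats (whole, sats) {
  return whole * SATS_PER_BTC + sats
}

function satsToBtcString (sats) {
  const whole = sats / SATS_PER_BTC
  const frac = (sats % SATS_PER_BTC).toString().padStart(8, '0')
  return `${whole}.${frac}`
}

const amount = btcToSats(1n, 50000000n) // 1.5 btc, stored exactly as an integer
```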
A
E
Yeah, and that then actually asks for, you know: can we put more examples in exactly the area that I was asking about? I think we're actually broadly agreeing. My stance is we don't warn people enough: don't use it.
A
D
A
A
F
F
This seems like one of those areas where we've, again, written some docs and specs, but just saying "don't use this" is insufficient. It seems like we actually should provide at least some recommendations of alternatives, because otherwise people get very confused reading those docs. We should not try to answer this question definitively, but we might include a paragraph and some bullet points, like: consider using fixed-point math with a known divisor that is relevant to your context.
F
Whatever, because all of these approaches are reasonable in some context or another; lots of good computing uses the fixed-point approach. Maybe we just need to name the darn things in our docs, because saying "don't use floats" and then hoping somebody finds the right wikipedia page with the right idea...
E
A
D
A
A
D
A
And because the truth is, most systems will deal with ieee 754 and you will get consistent floats between systems, except in the javascript integer case, and you will still be able to talk about the same number in different systems in the majority case. That's the truth: this is not entirely unsafe. It's not just "let's just record it"; we'll get a good amount of consistency among systems.
H
Would you want to adopt some of the spec's language around, depending on what gets landed on for floats, having the libraries that implement this right now (I guess go and javascript) appropriately warn you if you go near a float? As in, have the comments on the functions that handle floats be like "maybe think twice".
H
Right, like, specs are great, but not everyone reads the specs, and not everyone who reads the specs reads them all the way through. A comment on the float function that says "this works, asterisk, please see long asterisk notice: this didn't happen when you did .toInteger", right, might be useful, again depending on whatever gets landed on in the spec.
A
Yeah, yeah, and this is the problem: we put out this text and people don't read our text, because they don't go to this obscure corner of the internet that we live in, that we think is really important but no one else actually cares about. Yeah, I mean, depending on how forcefully we want to do it, you could do things like "it just won't do it until you provide some kind of override flag", but it just seems a bit hostile to go that far.
A
A
There is a question on youtube: can I use ipld in meteorology data analysis?