From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-04-20
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
B
Okay, welcome everyone to this week's IPLD meeting. It's April the 20th, 2020, and as every week we go over the stuff that we've done in the past week and plan to do. So I'll start first with the note that this meeting now got a one-hour expansion slot, but the plan is still to keep it short.
B
Finally — so it will be like an explainer [unclear], giving a little context, explaining things. The short version — my conclusion, kind of — is that it should be possible to store, to make, HAMTs with binary keys, but from an IPLD API perspective it should be strings to the outside, and what it has internally will be [unclear]. We can discuss the details. That's all I have, yeah.
C
I have a question about that. In the other libraries, when we're talking about HAMTs: the moment you're doing a lookup in a HAMT, it doesn't look like a lookup on a normal map a lot of the time. That's how it looks in JavaScript, because it literally moves from a sync operation to an async operation, for one thing. So there isn't really a restriction at all on what the key type needs to be — it can be binary.
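To make the contrast concrete, here is a toy sketch (not the real IPLD HAMT spec — node layout, fanout, and split rules are all simplified assumptions) showing why a HAMT lookup may touch several nodes/"blocks" while a plain map lookup touches one, and why nothing about the structure forces keys to be strings:

```python
import hashlib

# Toy HAMT-ish structure: a key's hash bytes pick the child bucket at each
# depth, so one lookup may descend through several nodes. Keys are raw bytes;
# binary keys work exactly like string-derived ones.

def _hash(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

class HamtNode:
    def __init__(self):
        self.children = {}   # bucket index -> HamtNode
        self.values = {}     # raw key bytes -> value

    def get(self, key: bytes, depth: int = 0):
        if key in self.values:
            return self.values[key]
        child = self.children.get(_hash(key)[depth])
        return child.get(key, depth + 1) if child else None

    def put(self, key: bytes, value, depth: int = 0, limit: int = 2):
        # Overly simple split rule: at most `limit` values live in one node.
        if key in self.values or len(self.values) < limit:
            self.values[key] = value
            return
        child = self.children.setdefault(_hash(key)[depth], HamtNode())
        child.put(key, value, depth + 1, limit)

root = HamtNode()
root.put(b"\x00\x01", "binary key works")
root.put(b"hello", "string-ish key works too")
root.put(b"\xff\xfe", "overflows into a child node")
```

In a real implementation each `HamtNode` would be a separate IPLD block loaded (possibly asynchronously) by CID, which is exactly the sync-to-async shift described above.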
F
The way this is set up already in the code base, we're looking forward to being able to go: if you apply a selector and it just sees the thing as a map, it turns out to be implemented by a HAMT — because the HAMT will also be using the same loader callbacks as the rest of the traversal process would already be using. Even extremely higher-level things that have no clue about this, like graphsync — as long as the HAMT [complies] — because even something far away like graphsync will just magically get it.
F
We already have this all over the place in Go. So from my perspective, at least: it's true that you need to solve the signaling problem, but I just already have that problem and already had to solve it. So that doesn't change something — the same way, in Go, we already have to decide what your in-memory representations are; that's what the whole node-style stuff is about.
C
I mean, you still have huge differentiations that you need to make in the selector engine, though, because you can't assume that you're only ever going to get at most one block or node out of a key selection. On a regular map that holds, but on a HAMT it could be multiple blocks and paths in order to get there. We've been down this road before; we have to.
C
If people are using binary keys, they probably want API forms that accept binary keys. So if they're using a HAMT in order to do binary keys specifically, they're probably not going to appreciate an API that makes them do string conversions all over the place. That's not actually going to be fun for them.
B
Yeah, but the point is: if you want to use the tooling — like selectors, for example — if you want to be able to use those, you need to basically comply with the IPLD data model. And if you do something outside of it, you're of course free to do so, but then you don't have all of that.
C
Just for reference, by the way: PowerPoint [sic] is doing HAMTs with binary keys all over the place, so just keep it on your radar — they're making heavy, heavy use of that. Okay: the big-data project for Filecoin is pretty much done across this table of data. We actually processed too much data, and we were going to have something like a two-hundred-thousand-dollar S3 bill every month, so we deleted a bunch of data and we will reprocess it closer to when we need it, so we're not paying to store it.
C
Because Lambda is so cheap, it costs us less to generate the data than to store it. But that's on the back burner now, so I'm freed up for other stuff. Got a bunch of DAG-DB stuff done. I've got an update on [that structure] now: it's basically an API for doing what we've traditionally called the naming problem — you have a mutable name for something and you need to swap out the reference. This is just an API layer for you to be able to plug this in.
C
So I've been working on the HTTP one recently. There's also an LRU-everywhere question — to Rod, in a minute: do you have a library that's good and consistent for doing globals across the browser and Node, that are guaranteed to be global even with all of the version mismatching and everything?
F
It's still nowhere near close to done, because the scope of the thing is just enormous, but a lot of stuff has happened. String types are now generated correctly. Integer types are now generated correctly. These are just simple scalar leaf nodes, so those are pretty simple, but that development is done and it can be used in tests now. Structs with maps as a representation — their natural representation — work. That includes if they've got nullable fields; it includes optional fields; it includes whether the maybes for each of the types are implemented as pointers or as embedded structs.
F
This is something that I have now made into a parameter that you can control through some adjunct config. So it's not part of the schema declaration, because it's something that matters to the Go codegen but doesn't matter to the semantics — just internals — but that's a thing you can configure now. By the way, there's an adjunct-config configuration system piped through all of the templating and all the codegen now, so that stuff is consistent.
F
Not fully wired up is stuff like field renames — for a distinction between what you said in the schema, in the logical sense, versus [the generated names]. It's like 90% there; it needs a coat of paint before it's actually exported, but all the guts are there, and the rename directives from the schema work for struct map representation, which is nice. And I'm, like, just getting started — this is such a ridiculously long list, but I'm also so excited I'm going to drag everyone through it, with some apologies. Structs with string—
F
—join representations now also work, which is really kind of consequential, because this is a working proof of the same type node having different functioning representation strategies. So we can take a struct and we can do string-join, or we can do map representation, and all the codegen symbols for this work. It's just really exciting. And all of these things also compose: recursive lists; even a struct that embeds a struct with a map representation that has fields, and another struct with string representation can even have other structs in it.
F
I was doing a bunch of this development quick and dirty, so all of the codegen [output] would be in [one spot], and when you ran tests on that package it would spit out a bunch of files of the generated code; then I would switch my editor context to that package and run tests over there — that was my flow. So, like, see: I was not doing this correctly at all. The reporting cycles were split in half, and I was just really not okay with it.
F
Oh — so now that's fixed. The new system tests the code generation itself working; it tests that the generated code compiles, at the same time; and it has mechanisms for running functional tests against the newly compiled generated code. And all this runs from nothing other than a regular `go test` command, with totally normal formatting. In fact, as I was saying "totally normal": this wasn't insanely hard. What I ended up doing was something using the Go plugin build system, and then loading that plugin dynamically into the same process again.
F
So, I should clarify — in case I'm talking too fast — this is all for the tests. You can use this without it; it's just by far the nicest way to get tests. I did also spend a whole bunch of time trying to do this as a `go test` on one package doing generation, with `go test` as a subprocess, and that—
F
—was worse. Neither of these solutions really makes me super thrilled, and [there are] implementation details, but managing all of the output correctly for the recursive invocations of `go test` as a subprocess was not even funny. I'm just going to zip my lips before I say anything more about that, because truly some problems are at least funny; this one was not. Moving on: there is another package that's also doing codegen that's not using this plugin stuff, so that one still kind of has a two-step manual process, but that one has a different purpose.
F
This package is testing the same benchmarks as the basicnode stuff does — the hand-rolled, no-type-info, generic/reusable basic node — and codegen that represents some of the same topological structures, basically maps and structs, using the same benchmarks. That is kind of exciting on its own, and also it says what I wanted it to say: codegen is faster, sometimes a lot faster. I want to write more benchmarks to say how much, but even on really small datasets, 25% faster. I'm pretty sure some of these things are also, like—
F
—O(n) versus O(1) scale, so bigger structures [should be] way faster, but I don't have enough benchmarks to say that yet, really — pretty sure that's what we're missing. So yeah, that's cool. There is still a ton of work to do on codegen: lists aren't done — they should be easy, I just haven't done them yet.
F
Error handling is going to take a lot of polishing, and I've just been flying through this without too much effort on [things like] stack-depth limits. Type-literal generation is also not included, which I'm skipping over for now, because nothing is directly depending on it — though some areas would love to have it because it would make them clearer, and some others, like meta-reflection, would love to have it.
F
This interplays with details of things like some of the rules around how you want maps to behave if you've got interesting things going on in the keys, which might be relevant to that other discussion we were having earlier. It turns out that there are some situations where I really want to coerce one kind of argument, in order to make sure that path segments — the things used by selectors — can actually be applicable.
F
[You'll] probably have to sync with me a little bit, because it's really big and I have not written coherent [overviews] of it. I mean, even calling it a draft would be pushing it — there's a massive amount of implementation that has to follow — but the pattern is definitely in my head and in a document. So it's going to be rough to read alone.
A
Whew — excuse my yawning; it's far too early and I'm not fully awake yet. So there are two main things that I've been working on. One is this Filecoin multicodec stuff, and that's been happening in the multiformats/multicodec repo, starting from pull request 161 and then issue 168, which is the same thing as a proposal.
A
One of the still-unclear things is the why — we kept hitting a brick wall there, and there's an insistence that this is important. But basically: there are these three commitment hashes that you get from various processes in Filecoin — CommP, the piece commitment; CommD, the data commitment, which I think is for sectors; and CommR, the replication commitment.
A
The problem with these things is that they're not just hashes. It's not "here's my data, run the hash function, and it gives you back a hash" — so you can't just use our standard paradigm to say this thing represents some underlying data. But you can say that this hash value represents the tip of a Merkle tree, and so one way to fit this into multiformats — into multicodec — is to just limit it to that and say:
A
these are just the top of a Merkle tree. For CommP and CommD, that's a binary Merkle tree. So you can say: this value corresponds to a block which has two more values that you can follow in the same way, to two more values, and two more values, until you get to nodes that point to something other than this type of thing.
A
[CommR is also] a Merkle tree — it uses a more complex algorithm to get there, but it's still a Merkle tree. So you could do a similar thing: this node points to a list of other nodes that you could follow, in an ordered way, to get to the base data. So I pulled apart the original pull request 161, and there are new multicodec proposals for each [of the three] in pull request 172.
A
That identifies these things as this type of thing. But then there are two more multihash entries, and what [they want] — in 171, [and] pull request 170 — is a SHA2-256 variant that replaces the last two bits with zeros. It's a novel SHA-256, and there's a precedent for that, because we have a double SHA2-256 in there already with its own code. So it's not a huge stretch to do this; it's just, you know, we're opening the door to having more of these wacky variants.
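As a hedged sketch of what that truncated variant does: the two most significant bits of the digest's final byte are zeroed, so the 256-bit value always fits in a ~254-bit field element. The exact bit positions and the eventual multicodec name (the table ended up with an entry along the lines of `sha2-256-trunc254-padded`) should be double-checked against the multiformats repo.

```python
import hashlib

# Truncated SHA2-256 sketch: compute a normal SHA-256 digest, then zero the
# top two bits of the last byte (the most significant byte when the digest is
# read as a little-endian field element), so the result fits in ~254 bits.

def sha256_trunc254(data: bytes) -> bytes:
    digest = bytearray(hashlib.sha256(data).digest())
    digest[-1] &= 0b0011_1111   # clear the two most significant bits
    return bytes(digest)
```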
A
So there's some bikeshedding about the name of that, because it's currently called "trunc-2", and "trunc" may not be the right word — and the "2" might not be the right signifier — so there's a little bit of discussion there, but I think we might be arriving at something. And then pull request 171 is adding a new hash function, Poseidon, which they use for the sealed data.
A
Poseidon is interesting because it's a new hash function, but it's also heavily parameterized as well. So you can't just say "I'm using Poseidon-256"; you'd have to say "I'm using Poseidon with this curve and this arity" — and then a bunch of other minor parameters. So I pulled out those two main things, curve and tree arity, into the name of this variant, and then added a variant [tag] on the end of it for the additional parameters.
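The naming scheme described here can be sketched as a tiny helper: curve and tree arity are baked into the multihash name, with a trailing variant tag for the remaining parameters. The concrete strings mirror the entry being proposed at the time (something like `poseidon-bls12_381-a2-fc1`); treat the exact names as illustrative, not authoritative.

```python
# Hypothetical helper illustrating the proposed Poseidon multihash naming:
# "poseidon-<curve>-a<arity>-<variant>", where <variant> stands in for the
# bundle of additional parameters a given project fixes.

def poseidon_multihash_name(curve: str, arity: int, variant: str) -> str:
    return f"poseidon-{curve}-a{arity}-{variant}"
```

Another project using Poseidon with a different curve, arity, or parameter set would simply register another name built the same way, which is the iteration/scalability property discussed below.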
A
So if somebody else was using Poseidon, they could show up and have another entry: they could list their curve, they could list their tree arity, and then they could have their own variant of the additional parameters as well. Furthermore, you can iterate on this: if they ever change the parameters of the hash function they're using, they can iterate; and if anyone else adds one, it's scalable, so that you can see the main bits but they also have space to vary those additional parameters.
A
And then, on top of that: when you generate these additional parameters in a Filecoin Poseidon instance, you can do it in a way where you end up with additional circuits through the hash function that provide a higher-security version. So when they generate the initial hash function, they will also get a second one that they can use if they want to, and that's useful—
A
—if there are security flaws found in the basic one, they can say, well, let's just switch to the high-security version. It's more costly, but it's available for them. So there's a second one in there that accounts for the fact that they have this thing to use, although they may not use it. So anyway, it's kind of complex, but there's discussion in those threads, and there is no—
A
—there's been pressure to get this resolved, and there is a bit of a round-peg-into-a-square-hole kind of thing going on in some of this stuff, but I think it makes sense. I think it's just stretching multicodec in ways that I don't think it was originally thought it would be stretched. Maybe — I don't know. I mean, [some]one seems to be adamant that this is—
A
It was. And basically: the standard header-plus-Merkle-tree-to-transactions [model] doesn't work for — now — about half of the transactions. I actually don't know how many transactions are segwit transactions, but if you follow that, you miss out on all of the witness data. So whenever somebody makes a transaction and they want to record witness data in there — you know, the blockchain is used for storing all sorts of things other than financial transactions — all that stuff now hangs off to the side, and to get there—
A
—you have to follow the second Merkle tree, but then you arrive at transactions that are essentially duplicates of the original ones, with some additional bytes in them. I'm going to share my screen on my second computer that's connected in here, and I'm going to run you through this thing I've been working on — because I've put a lot of brainpower into figuring out how this can work, you're going to sit through it with me.
A
But I do want some feedback on this process, because this has implications for the way we think about IPLD and other content-addressed data in coins. If you want to think of Bitcoin as content-addressed, then you have to squeeze it into our paradigm, or you have to stretch our paradigm to make it work better for something like Bitcoin.
A
So: a Bitcoin block is primarily identified by its header, and the block IDs that you see for Bitcoin blocks — there are all sorts of websites where you can look up a block by its ID, you know, starting with a string of zeros — that's the block ID, but it's a double-SHA—
A
—it's a double SHA-256 hash of the 80-byte header; that's all it is, and the block is a lot longer than the header. The way that works is: the header contains this TX Merkle root, which is the tip of a Merkle tree — a binary Merkle tree — that will get you to the transactions that are also contained within the block. The Merkle tree is a bit wonky: to account for uneven — for odd-numbered levels—
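The "double SHA-256 of the 80-byte header, displayed byte-reversed" claim can be checked directly against the well-known genesis block:

```python
import hashlib

# A Bitcoin block ID is dsha256 over the 80-byte serialized header, with the
# digest shown byte-reversed (big-endian display).

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def block_id(header: bytes) -> str:
    assert len(header) == 80
    return dsha256(header)[::-1].hex()

# Genesis block header: version, prev-block hash, merkle root, time, bits,
# nonce -- all little-endian in the serialized form.
genesis_header = (
    (1).to_bytes(4, "little")
    + bytes(32)
    + bytes.fromhex("3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a")
    + (1231006505).to_bytes(4, "little")
    + (0x1D00FFFF).to_bytes(4, "little")
    + (2083236893).to_bytes(4, "little")
)
```

Running `block_id(genesis_header)` reproduces the famous leading-zeros genesis hash, which is the "string of zeros" ID mentioned above.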
A
—they do a duplication of hashes, which has led to a security flaw. But you can decompose that TX Merkle root into nodes of the Merkle tree. So what you do is say: each node of that Merkle tree is an IPLD block itself, which leads to two more, until you get down to the thing that actually points to real transactions. So that can work. But then the problem is that this Merkle tree — [hope] my mouse is working, okay — so, the Merkle tree for the transactions:
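The odd-level duplication quirk mentioned here can be sketched in a few lines; the self-pairing of the last hash is exactly the ambiguity behind the known CVE-2012-2459 flaw, since a list of N leaves and the same list with its last leaf repeated produce the same root:

```python
import hashlib

# Bitcoin-style transaction Merkle root: pair hashes level by level with
# dsha256; at any level with an odd count, the last hash is paired with itself.

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the odd one out
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

(Real Bitcoin txids are themselves little-endian dsha256 digests; any 32-byte values demonstrate the tree shape.)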
A
If you look at the classic transaction down here, the TXID is the hash of the transaction. That's great — we've now encapsulated that whole block, and so a transaction can be an IPLD block, and its ID is its hash, and we can use that, we can translate — because a transaction ID is another thing that the Bitcoin community is used to passing around; you can look transactions up by their IDs. Great.
A
But then segwit came along, which said: we've got a problem with this witness data that's being attached to transactions, and it's causing some security problems; we need to segregate it from the rest of the block data. So if you look down at these segwit transactions, they still give a TXID — the transaction ID, down the bottom here — but it's a hash of only portions of the transaction. You actually have to remove two main sections from the middle of the transaction data to get the hash that gives you the transaction ID.
A
So if we were to do this in IPLD terms, with the TXID being your CID, it doesn't give you the full block — it gives you this partial block, with the witness data removed. There is a TX hash — they do also take a hash of the whole lot — but that is stored elsewhere.
A
So what happens is: once you've navigated to all these TXIDs, you've got these transactions that are not the full transactions — in many cases they don't have the witness data. You can go to the first one, which is the coinbase transaction — the transaction where the miner gets the reward — and in that one you can find this witness commitment. And the witness commitment is a hash of — not quite a Merkle tree node—
A
—it's a hash of the Merkle tree node plus a nonce, and the nonce is important: you need to use it to reconstruct the proper coinbase transaction. But the Merkle tree node that it's paired with will give you a Merkle tree that actually uses the proper hashes of the transactions. So you can reconstruct the first transaction using that nonce and this commitment, and then you can reconstruct all the rest of the transactions by following their Merkle tree down, and then you've got their proper hashes.
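The hash relationship being described — commitment = double SHA-256 of the witness Merkle root concatenated with the nonce — can be sketched as below. Where exactly the commitment sits inside the coinbase transaction's outputs is deliberately omitted; this only shows the hashing step, as a rough sketch of the segwit design rather than a full implementation.

```python
import hashlib

# Segwit witness commitment sketch: dsha256(witness_merkle_root || nonce).
# Both inputs are 32-byte values; the nonce comes from the coinbase witness.

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def witness_commitment(witness_merkle_root: bytes, witness_nonce: bytes) -> bytes:
    assert len(witness_merkle_root) == 32 and len(witness_nonce) == 32
    return dsha256(witness_merkle_root + witness_nonce)
```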
A
So if you follow all that, you'll see one of the problems here: we can reconstruct a Bitcoin block into this doubly-Merkleized structure. Say there are a hundred transactions in a block: you've got a hundred basic transactions, then all of the Merkle nodes that lead up to the Merkle root — but then you've got to build another, second Merkle tree that leads down to the same transactions again, where about half of them are different because they've got this witness data in them. So, the main issue with that — well, aside from the complexity—
A
—the main issue with that is the fact that there's duplicate data. For every segwit transaction you will have its classic form and also its segwit form in this Merkleized — in this IPLD-ified — version of a Bitcoin block. I don't know what the numbers are for this, but say the blockchain is currently—
A
—whatever size the blockchain is right now: if we were to turn it into IPLD content, we could be multiplying its size by, maybe, 1.5 times, just to get to our classic IPLD form. So it can work, but you can see where this is stretching our concept of it. The other thing we could do — and this has come up in this Filecoin multicodec discussion — is to say something like: a Merkle process is also a hash, and so actually add an entry into the multihash [table] for Bitcoin-Merkle—
A
—and then you could say that your CID involves the TX Merkle root, and that resolves to a list of transactions, and just leave it at that; and then you just say, well, that Merkle process has some really crazy features to it that let you resolve segwit data. That's a possibility — and then you could actually use that same argument for the Filecoin thing.
A
Yeah — and look, I just quickly read the abstract [of the segwit BIP]: "This BIP defines a new structure called witness that is committed to blocks separately from the transaction Merkle tree. The structure contains data required to check transaction validity but not required to determine transaction effects; in particular, the script signatures are moved into this new structure."
A
"The witness is committed in a tree that is nested into the block's existing Merkle root via the coinbase transaction, for the purpose of making this BIP a soft fork" — a soft fork compatible with a future hard fork that places this tree in its own branch. There's something — and I don't quite understand it — about the fact that this witness data can be used by miners to manipulate their hashing process, because the witness data is like an addendum to actual transactions.
A
It's got essentially nothing to do with the financial aspect of the transactions, and they want to make sure that, when miners are hashing these transactions, they are varying the right parts of the block to get to their winning hash, rather than getting into the witness data and messing with that by putting crazy [values] in. I think that's the main thing behind it.
D
It might also have something to do with when the miner chooses to include this or not, later, after they find a hash. So they're hashing without including or considering this witness data, and then putting that in as the block actually gets created afterwards — and that shouldn't then matter, or let them change anything.
G
So I guess one of the reasons they did it this way is so that they can reuse the storage mechanism of the blockchain to basically store this somewhere, but at the same time not have to change what TXIDs are, and stuff like that. So that probably answers Eric's question. But my question is then: if we represent the blockchain without the segwit data, is this a functional blockchain or not?
A
If we did, I think we'd lose a lot of information that people are using witness data for. You know, people say, "I verified this by storing it on the blockchain" — people do things like that; they can record history by saying, "look, I put this into the Bitcoin blockchain at some point," and you could look it up and verify that it's there. All of that would be gone, and that data would actually be interesting from an IPLD perspective.
F
So let me take a stab at summarizing this. I'm not sure if this is correct — I'm still trying to process — but it seems like what we're looking at is: they have what we might try to map into two different concepts of block, but they've actually got one physically inside the other. So that's kind of giving us head spins, because that's just not how we would normally think about it; whereas we try to define a lot of IPLD, for the purpose of any new designs—
F
—as links being this thing you put in the middle of your data-structure topology, but you don't lean on them super directly for any application purposes. For this Bitcoin stuff, we want to care about the links very directly, because that's what people are used to caring about in the applications they're writing. And so we're in this weird rock-and-a-hard-place where we want these links, but they just don't map onto discrete blocks in the same way, because it's this weird blocking [within blocks]. Is that the summary?
A
Yeah, that's about it. That's not necessarily a hard constraint — that's up for discussion — but one of my assumptions coming into this is that somebody should be able to show up to, let's say, the IPFS network, with these blocks [seeded]. Somebody should be able to take a Bitcoin — let's say — transaction ID; they should be able to take that hash, reverse it (because it's presented in big-endian, so we reverse it to little-endian), stick bytes in front to make it a CID, and you should be able to query the network and get that transaction. The same with a block ID. So that's my working basis for thinking about this. It doesn't necessarily have to be that way; it's just that—
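The "reverse it, stick bytes in front" translation can be sketched as below. The multicodec numbers used (0xb1 for `bitcoin-tx`, 0x56 for `dbl-sha2-256`) are the table values as I understand them and should be verified against the multicodec table before relying on them; note that CID prefixes are unsigned varints, so codes above 0x7f take two bytes.

```python
# Sketch: displayed Bitcoin hash (big-endian hex) -> raw CIDv1 bytes.
# Framing: varint(cid-version) | varint(codec) | varint(hash-fn) |
#          varint(digest-len) | digest.

def varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def bitcoin_txid_to_cidv1_bytes(txid_hex: str) -> bytes:
    raw = bytes.fromhex(txid_hex)[::-1]          # undo big-endian display order
    multihash = varint(0x56) + varint(32) + raw  # dbl-sha2-256, 32-byte digest
    return varint(1) + varint(0xB1) + multihash  # CIDv1, bitcoin-tx codec
```

A block ID would work the same way with the `bitcoin-block` codec in place of `bitcoin-tx`.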
K
[I've] been lurking and watching the livestream, but I have a question. Back a while ago there was this concept of adding functions to IPLD — certainly inherently insecure — but these seem to be pretty simple functions, like dropping out the hash in the middle of the transaction, or just a certain number of bytes in the middle of the transaction. So it's a pretty totally straightforward function call that could actually be a link in the CID — sorry, not in the CID, but in the node — and there are some things you [could do with the] payload.
C
You still have the problem of: I want to get this by that hash, and then I ask for it from somebody else, and I'm just dropping data that's in there. So they could give me not the actual block containing the segwit data — they could give me, like, whatever the hell. Whereas in the Bitcoin blockchain, when they pull this block, they just know this is a special block that has to get validated with two different — they validate it by two different [paths], yeah.
A
I should clarify something additional here, which is: even if someone wants a transaction and we've only seeded [the blocks] into the IPFS network, and they have a transaction ID — they can look it up, but they're only going to get that classic version of it, even if it's a segwit transaction. What Bitcoin does internally, when it decodes these things, is attach the witness data back into the individual transaction components, and so that fills them out — but we would be hashing them—
A
—Merkleized. You could just fetch it out as it is, but without a way to navigate back up to the block, you can't navigate back down to find the witness data — because you have to navigate down to the first transaction, and these transactions don't link to each other; they only link through the root. So you still can't get to the witness data.
A
It depends on how people would want to use this stuff. I would assume that if you're pulling this stuff, you'd be doing it by block anyway, because these blocks are nicely complete things. Pulling out an individual transaction is kind of interesting, and the financial data is all in there — but you just have to accept that if you're pulling individual transactions out of IPLD data, you may or may not get what you're looking for.
C
The middle ground that we found here is actually a pretty good one, and it kind of makes sense to me: this seems to provide a useful structure for working with this data that, natively, it doesn't provide all that well — unless all you're trying to do is actually do transactions in Bitcoin.
A
So, my next steps with this: I've got the [codec] stuff, but it's not complete yet in terms of IPLD. What I want to do is be able to actually spit out IPLD blocks and see how many there are for an average modern transaction, and then be able to walk back through the blockchain and verify that I'm—
A
—actually accounting for every byte of all the blocks — that's the biggest issue — so that I can then take those IPLD versions and reconstruct the block in exactly the same form. [If I] can do that, then I can prove that, okay, we can do the IPLD-ification, and it goes two ways; and then that would also tell me what sort of size overhead we have, which would be a very interesting thing to know.
C
We know that at some point we will need to do that, and we wanted to kind of create that type — and this is similar, it's just significantly more complicated, because it's like a link to slices of the binary in the block, which is a little [odd]. But, I don't know — maybe there's some kind of layout that you can do to actually just break that up into individual blocks. That's so much [work] — oh, that's terrible!
F
It's possible that some of the complexities in the link-loader functions in the Go code base — the ones you all have asked why they're there — might actually help here, and might be worth thinking about. For example, there's this whole sort of complexity around: you've got a link-loader function, and one of its parameters is the hash, yes — and then it takes this little tuple of parameters called the link context.
F
That's got all this info about, like, the sibling node, in case you need to go look up something over there, or the path you took to get here — and it's possible something like that could be useful here as well. Because one of the things I was originally visualizing with that whole link-context scenario is: oh, if I have some type information, I want to use that to infer that I have this different sharding of my physical storage on disk, based on application logic.
F
You know, these are things that you have to write some glue code around, but you could do it — and maybe a similar thing would actually work well here. You'd be like: oh, I have this kind of Bitcoin transaction; I know that when somebody asks for the link and it's in this set of fields, then that semantically gives me the information that this is the guts of this segwit thing, and I'm going to look up storage using a totally different chunk of logic.
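The dispatch-on-link-context idea can be sketched in a few lines. All names here are invented for illustration — the real go-ipld-prime link loader and LinkContext have a different, Go-specific shape; this just shows the pattern of choosing a storage lookup based on where a link was reached from.

```python
# Hypothetical loader factory: the loader gets the hash plus a context dict
# describing where the link appeared, and picks a store accordingly.

def make_loader(default_store: dict, segwit_store: dict):
    def load(link_hash, context: dict) -> bytes:
        # e.g. context might carry the field/path the link was reached through
        if context.get("field") == "witness":
            return segwit_store[link_hash]
        return default_store[link_hash]
    return load

default_store = {"h1": b"classic tx bytes"}
segwit_store = {"h1": b"full tx bytes incl. witness"}
load = make_loader(default_store, segwit_store)
```

The same hash resolves to different bytes depending on the traversal context, which is the glue-code trick being proposed for the segwit case.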
A
The fact that we want to be able to make their transaction IDs and block IDs useful puts a lot of constraints on how we can lay these things out to be useful. But we do have a lot of power with the way we can navigate through blocks, and nodes within blocks: if you have a block ID, then you can build a selector and we can get to everything really nicely. But the transaction-ID problem is—
A
—so if we extended the IPLD Explorer, for instance, on the web, to be able to understand Bitcoin blocks, we could actually make it something close to all these various blockchain-explorer websites out there: you put in a block [ID] and it gives you all this information about it. We could do that, if we have all this stuff in the network — but you couldn't do it for transaction IDs.
A
I think this will extend to other blockchains as well — all the blockchains that are based on Bitcoin have similar challenges. But then, you know, this whole ecosystem thinks about content addressing in this very heavily localized way, and we witnessed a lot of that with Ethereum recently, when we had a discussion with them. This sort of obsessive localization, down to the bit level, was very interesting.
C
[That speaks] to the multicodec table: ideally, you only know the parts of the multicodec table that you support, and we've got to move away from shipping around support for everything ever — that's not going to scale. We already have bundle-size issues in JavaScript, and a lot of our codebases are way unnecessarily large because we're pulling in every codec ever and every hashing function ever. So yeah, things are going to go into the table that not a lot of people support, and that's okay, yeah.
C
Text is mutable — we write new text to define that. I think, at the end of all of this, once it's sorted out, the Filecoin multiformats [additions] need to go back to the governance — and just the docs in general around multiformats — and change some of this language to be a little bit clearer. Because I think a couple of people have had really consistent ideas in their heads about what these things should do, but we have not done a great job of really communicating it.
A
I'd say that I have more consistently encountered the idea that Merkle trees should be treated as [structures] where the individual nodes are individual [blocks] with their own CIDs — that's been the more consistent thing I've encountered across people who have done work in this area, particularly Protocol Labs people — rather than the idea that the whole Merkle tree should be a hash function that we can bring multihash to, to address the base data. That's just—
C
So we do need to make sure that any language changes make it in there, but for the most part the table is not going to the IETF. The table would go into an IANA[-like] registry, and it remains mutable over time, so we can continue to add things to it — there's no ticking clock on, like, getting stuff in now versus later, [beyond what's] well established.