From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2021-01-04
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
I think so. Okay, well, welcome everyone to the IPLD weekly sync for 2021. We are without Volker today, but everybody else is here. So let's get into it; we're gonna do a quick standup. Daniel, you're first.
B
Cool. So this is my first week back, so I haven't done a lot yet; I'm still ramping up. The first thing, well, what I did right before I left for the holidays, was finish up the refactors for 0.7 with Eric, which were mainly renames, and also essentially tidying up the API a little bit in backwards-incompatible ways that we had wanted to do for months. So those are shipped now, but I think Eric's gonna talk about that.
B
I also did a bunch of reviews. Mainly what I've been looking at today is a pull request from Eric to add a new API to create nodes from Go. We had a package called fluent to do this, but it wasn't very nice, and it wasn't as fast as doing it directly by hand. So we came up with a new way, and I think I refined it a little bit without making it slower, which might be a good idea. So I linked his version, and my version on top of his.
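For context, the fluent package being replaced builds nodes through callback-style assemblers, roughly like this. This is a minimal sketch against go-ipld-prime of this era; treat the exact signatures as approximate, and the map contents as invented:

```go
package main

import (
	"fmt"

	"github.com/ipld/go-ipld-prime/fluent"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	// Build a small map node the "old" fluent way: errors become panics
	// inside MustBuildMap, and every entry goes through a callback assembler.
	n := fluent.MustBuildMap(basicnode.Prototype.Map, 2, func(ma fluent.MapAssembler) {
		ma.AssembleEntry("name").AssignString("ipld")
		ma.AssembleEntry("count").AssignInt(7)
	})
	fmt.Println(n.Length()) // 2
}
```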
B
If anybody wants to take a look. I was also, this morning, poking at why go-ipfs is so darn heavy. There's a project that I contribute to that uses go-ipfs, and, you know, importing go-ipfs from Go means that you import a bunch of Protocol Labs modules, and I realized that the binary was like 100 megabytes. So I started poking at that, and it turns out that ipfs itself is to blame for quite a lot of it. But I realized, for example, that quic-go uses a library called gojay for marshalling JSON for its logging, and gojay weighs, like, over a megabyte. So I think that's pretty sad. So I was looking at that and poking Martin about it. He said he benchmarked it against the standard library's JSON package in February, but he doesn't have that code anymore.
B
So
he's
going
to
get
me
that
code
in
a
couple
of
days
and
that
I
want
to
look
at
that
because
I'm
pretty
sure
that
we
could
get
you
know
fast
logging,
because
the
logging
is
pretty
simple
without
needing
to
add
such
a
heavy
dependency
just
for
quick
go
so
anyway.
This
is
just
to
say,
I
think,
go
ipfs
are
some
heavy
indirect
dependencies
that
don't
belong
there
and
I
also
just
are
fun
over
the
weekend.
Did
some
analysis
of
like
the
most
popular
go
modules
on
github?
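As a sketch of the kind of comparison being discussed (the actual numbers would have to come from rerunning Martin's benchmarks), a standard Go benchmark for marshalling a small log-style record with encoding/json looks like this; the record shape here is made up:

```go
package jsonbench

import (
	"encoding/json"
	"testing"
)

// logEvent is a hypothetical small record of the kind a qlog-style logger emits.
type logEvent struct {
	Time  int64  `json:"time"`
	Name  string `json:"name"`
	Bytes int    `json:"bytes"`
}

func BenchmarkStdlibJSON(b *testing.B) {
	ev := logEvent{Time: 1609718400, Name: "packet_sent", Bytes: 1200}
	b.ReportAllocs() // allocation counts matter as much as ns/op here
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(&ev); err != nil {
			b.Fatal(err)
		}
	}
}
```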
A
Okay, yeah. So I didn't do a whole lot since our last meeting. I did work around this JavaScript CBOR stuff. I almost made a pull request to swap the CBOR parser out, but sort of ended up wasting a lot of time doing benchmark stuff, trying to squeeze performance out.
A
I also got this package that bubbled up out of it, called ipld-garbage, which is just a garbage data generator that makes randomized objects that conform to the data model. That's useful for testing and also benchmarking. So that's what I'll be working on this week, this CBOR stuff, to get that done.
A
When comparing against our current CBOR parser, the benchmarking is a little bit complicated, because the current CBOR parser, it turns out, is really crappy for one-offs. So if you just use it as a decoder or encoder for a one-off, it's really bad.
A
But when you use it in a benchmark environment, where you're doing a lot of decodes and encodes, and you use it the way we use it in the codec, then it gets to do a lot of memory tricks, and so it ends up being quite reasonable when you're doing a lot of encodes and decodes. So I'm straddling this line of not wanting to use the same tricks as our current CBOR parser, which has two problems. One is that it becomes a memory hog, because...
A
One
of
the
tricks
it
uses
is
to
allocate
these
large
buffers
of
memory
and
then
just
keep
them
around
and
and
so
and
then
the
other
trick
it
uses.
Is
it
uses
the
the
performance
of
optimizations
of
the
node
buffer
to
get
some
to
squeeze
out
some
edge
performance,
and
I'm
I'm
making
this
thing?
So
it
doesn't
depend
on
that
when
you're
you,
when
you're
in
the
browser
so
straddling,
that
line
turns
out
to
be
a
little
bit
annoying
in
terms
of
squeezing
out
the
performance
in
comparison.
A
So I don't have the performance wins that I really wanted, except if I compare these things doing one-offs. So I'm a little bit disappointed about where that currently is, but I'm currently doing a bit more squeezing, and I've got some additional nice features coming out of this thing, like the ability to pull out just the links of a CBOR block without decoding the rest of it.
C
Yeah, you kind of need both. Like, if you look at the links method in the new multiformats codec stuff, it iterates over it, but it always gives you the paths as well as the link.
A
Yeah,
I
guess
for
the
use
case
I
want
like
I
I'm
using
this
for
the
file
coin
data
and
it
is,
it
turns
out
to
give
me
some
nice
performance
benefits
just
on
decode,
but
what
I
really
want
is
just
to
be
able
to
do
a
a
sort
of
a
raw
crawl,
and
I
don't
really
want
paths,
but
I
just
want.
I
just
want
the
links
so
because
my
and
so
yeah
I
mean
because
it's
going
to
add
overhead,
then
by
getting
doing
parts,
because
you've
got
to
do
more
way,
more
decode
but
yeah.
C
Yeah, that's true; you'd have to keep track of the state a little better as well. I mean, I can see the use for both; it's just, most of the time...
A
Yeah, yeah, it's the bytes to strings and then back again, and UTF-8, and... it's just, anyway. I wasted like a week on this, in this sort of fury of coding and benchmarking, and in the end it was way more disappointing than I thought it would be. Anyway, I'll get that done this week. That's my mission this week: done and published, at least in the initial form, and then move on. Oh, the other thing was the DAG-PB codec.
A
That is, go-codec-dagpb. I've tagged that as version one. Done; it works; it's good. And it's got the new Go test stuff, what is it, the GitHub Actions test stuff that Martin did for Go repos, and that picked up that bug from last meeting we talked about, with the int64s, which has been fixed in go-ipld-prime and now merged back into this. And then that's all published and done; that works nicely. And yeah, so, good. That's me. Who's next? Eric.
E
It takes the root of some document and a path that you want to do something to, and then it will call you back, with whatever callback you give it, and say: here's what I found in that document at that position; please tell me what you want to replace it with. And then it'll go through and rebuild the rest of the document around that in a copy-on-write way. And this works even if you have a path that jumps through several links in the middle into other documents; it'll rebuild all of them.
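What Eric describes matches the shape of a focused-transform API in go-ipld-prime's traversal package; here is a rough sketch of using it. Treat the exact function names and signatures as approximate rather than authoritative, and the document contents as invented:

```go
package main

import (
	"fmt"

	"github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/fluent"
	"github.com/ipld/go-ipld-prime/node/basicnode"
	"github.com/ipld/go-ipld-prime/traversal"
)

func main() {
	// A one-entry document to mutate.
	root := fluent.MustBuildMap(basicnode.Prototype.Map, 1, func(ma fluent.MapAssembler) {
		ma.AssembleEntry("greeting").AssignString("hello")
	})
	// Focus on a path; the callback sees what's there now and returns the
	// replacement. Ancestors (and any linked documents along the path) are
	// rebuilt copy-on-write, yielding a new root.
	newRoot, err := traversal.FocusedTransform(root, ipld.ParsePath("greeting"),
		func(_ traversal.Progress, prev ipld.Node) (ipld.Node, error) {
			return basicnode.NewString("goodbye"), nil
		}, false)
	fmt.Println(newRoot, err)
}
```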
E
So this lets you do point mutations, even in large graphs, and it'll do the right thing for you. It should save a lot of work if you want to manage documents like that. I also tried to make a more general form of that, which would let you specify a bunch of point-mutation operations, and then the function would figure out how to do them most efficiently. And that turned out to be a little bit trickier, so it ended up going back into the design bin.
E
So if you want this to be optimal, you are going to want to have this function where you give it a list of the transformations that you want to do, and then the function is going to look at that, do something kind of like query-planner-type logic, and figure out in which order it wants to do them in order to be optimal. And then that gets really tricky, because not all the operations that you're likely to want to do are actually commutative.
E
So, like, changes to something in a map will generally commute in the instruction list: if you're changing the same map key, silly, of course I can reorder these and not change the semantics, and appending map keys is probably fine. But okay, let's say we want to append to the end of a list: all right, still fine. Let's say we want to insert in the middle of a list: shoot, not fine. Cannot commute this operation, because it will change the meaning of every other list index in the rest of your instruction set.
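A tiny self-contained illustration of the frameshift problem: applying the same two list inserts in different orders produces different lists, so a planner can't freely reorder them. The operation encoding is invented for the example:

```go
package main

import "fmt"

// insert places value at index, shifting everything after it.
type insert struct {
	index int
	value string
}

func apply(list []string, op insert) []string {
	out := append([]string{}, list[:op.index]...)
	out = append(out, op.value)
	return append(out, list[op.index:]...)
}

func main() {
	base := []string{"a", "b", "c"}
	op1 := insert{0, "x"} // insert at head: frameshifts every later index
	op2 := insert{2, "y"}

	fmt.Println(apply(apply(base, op1), op2)) // [x a y b c]
	fmt.Println(apply(apply(base, op2), op1)) // [x a b y c]  (not the same!)
}
```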
E
Similarly, list deletes don't actually commute. And so then things just get really complicated, because you would have questions like: okay, if I have a bunch of insert operations that all say zero, so they frameshift the entire rest of the list, and then I do this thing, can I...? I think if you implemented fancy enough logic, you could make some of these things commutable, but then how much pre-processing do you want to do to check that out? And then you end up with edge cases like...
E
It got hairy, so I put that back in the design bin. But if somebody else wants to look and think about that, I think there's interesting stuff to do there in the future.
E
We tagged the 0.7.0 release of the go-ipld-prime libraries too. I think both people who talked already mentioned this briefly: ints have a fixed size now. Technically a breaking change, but a really small one that's really easy to upgrade, and Daniel gave us sed instructions for migrating all of these things basically automatically, probably, and those are in the changelog notes, so that should make upgrading really easy.
A
Okay, Peter, you're up. Thanks.
F
There is a link to a doc in which there is an actual set of credentials, so you can just log in, and this will be a replica that follows an actual Lotus instance that actually writes to the database: writes blocks with all their links already fished out, so you can actually do recursive SQL queries within the actual database and get blocks out, you know, as whole trees; figure out how much state weighs, and so on and so forth.
F
It also analyzes the several tipsets that it sees and breaks them down into another set of tables with the actual chain members: where are the messages, where are the states, and so on and so forth. So you can do a full traversal of the Filecoin chain without leaving your SQL console, more or less. Caveat: it is not the full state; it is from block 362,500 onwards, so from about a week ago.
F
This will continue working; it needs a few tweaks to get it right. Basically, right now it processes a whole tipset in about seven seconds, which totally lets it keep up with the chain, but it does not let me sync from zero to have the entire thing, ever.
F
I need to speed it up to about one second per tipset, and then within a week we'll have the entire thing. Performance of the actual queries is as I expected. Replication is as I expected: the 600-gigabyte database got streamed to a new host in about two hours and basically just booted up. And yeah, that's all I have. You know, log in, kick the tires, let me know if something is missing or doesn't look great. And yeah.
F
This is definitely what I'm using myself, because the Badger thing is impossible to keep alive; it requires too much feeding. So, what I have (it's certainly in the doc, but very quickly) does way more than keep track. Also, specifically for what you're working on, and, you know, Alex Norton's ground: it keeps a record per tipset of which blocks were read or written. That's the granularity it has: like a huge log that it just keeps writing to, keeps appending. So you can literally, for a specific CID, see who needs the CID, and get a full list of, you know, where it was used and how, and there are even epoch... sorry, wall times of how long between accesses, and things like that.
F
So it's super general on that level, and obviously you don't need this for a regular install. My thing for that is: if I get it to a point where it is exactly where I want it to be, within, like, a one-second range or something like that, then instead of distributing these ridiculous car files, you know, the snapshots that you need them to import and so on and so forth, I can start distributing an SQLite snapshot which has just enough of the state for you. And you just insert the SQLite thing, which is just a file, into your data directory, and you just use that going forward. And because it's a single node, it doesn't need to be super performant.
D
I guess to directly answer your question, Rod: more likely, what you might expect is a future version of Lotus that has a flag to allow a Postgres backend. It probably won't run Postgres itself, but you would be able to use the same Lotus code, and a local Postgres, and a flag to have that be your backing data store rather than Badger.
C
I'm back, okay. Yeah, so, yeah, I had some really good code time during my time off; it's been great. The big thing is that I wrote another implementation of the trees that we've been talking about for a while, that me and Mikola had been working on. This implementation, obviously having done it a few times, was a lot nicer and cleaner. And one thing that I realized implementing this data structure is that it's just a lot cleaner to implement if you don't make serialization the centerpiece of the design.
C
Serialization is a thing that some of the types do, but making it central actually doesn't work very well, if you think about it just in terms of what the serialization paths are going to look like. So it's just a much cleaner implementation of the tree, and then on top of the tree, that implementation has a bunch of data structures.
C
So there's a sparse array, a DB index, an ordered map, a CID set, some other things, right? So it really works, and it's really versatile; it works in these different scenarios, and it also works if you add IPLD serialization into it, which is what it does right now. And then I used that to write IPSQL. It's a full SQL implementation; there are some pieces of the syntax that aren't there right now, but you can create tables, and there are full indexes of all the columns, and I have a CSV importer, so you can take CSVs and import them in, and now you have this IPLD-based SQL database. And when you do queries on it, you only pull in the parts of those data structures that you need for those queries.
C
So I'm working on the demo piece of it now, where you kind of wire up some of the network stuff so that we can do these queries over the network. But even for these trustless queries over the network, if I want to just, you know, look at one particular index, all I'm ever going to pull is the data for that index and nothing else, and then I have that locally in cache. It's really, really nice, and the more time that I spend with it...
C
The more I realize that we can do a lot with this, beyond sort of basic SQL stuff. Like, yes, you want to support all the basic SQL features and have regular tables and columns and everything, but it's going to be pretty trivial to just say: oh, here's how to add a dag table, and part of the dag table is which paths you want to index, like they're columns.
C
So, yeah, I'm working on getting things a little bit more polished up and demo-able, and then from there it's just sort of a grind of knocking through the rest of the syntax, right? Like, I don't have ALTER TABLE yet, and, you know, I need to work on joins at some point, stuff like that, and there's a few more comparisons that I need to implement. But SQL as a query language is really nice: how you set up the combinations of different syntax features and stuff is really well understood and predictable, and so it composes really, really nicely into an implementation of the query language. So I was able to make way more progress, way faster, than I thought, and now it just works.
C
One sort of interesting thing that it uncovered, though, is this contention in these advanced data structures between doing things in a streaming way or doing them in a fully concurrent way, where you really have to pick one. So in the chunky-trees implementation, it has become very common for me to make my write interfaces just be a generator: you do a mutation operation, and that generator iterates over all the new blocks, and the last block is the new root of whatever mutation you did. But I really had to put that down for this, because when you're doing big mutation operations, you can really concurrently parallelize over every part of the graph, and every part of the tree that you work...
C
...your way down is another concurrency vector. But in order to implement that cleanly, you need to just use recursive functions that you run in parallel; you can't map that into a generator cleanly. And the same thing actually happens in the reads: you could do these iterative reads, but it's actually way more efficient to just take the entire range query, go down the tree for the entire acceptable range of the query, and then just piece it all together at the end. So you're going to use more memory, but it actually works pretty well in IPSQL.
C
...the old values, and then you can basically kick off the updates for every index concurrently. And so what I have there is that the upper mutation operation is a generator, and so as I do the first mutation, I kick out those blocks, and then, as all of the concurrent index mutations return, I also emit all those blocks. So it gets kind of the best of both worlds a little bit, where I'm not using up an insane amount of memory, but it is, you know, doing everything as concurrently as it possibly can.
C
Yeah, yeah, that's where that stuff is at. I think in a private call, after we finish, I want to talk a little bit about some of the stuff that I encountered trying to wire it up to ipfs and libp2p, but we won't get into that now. We can kind of move on. Who's next?
A
Yeah, good, okay. So we've got a couple other items on the agenda now. The first one is, I think, Daniel, this is yours: the int64 for lengths in ipld-prime. Should we do that? And I will confess that this was an annoyance when I was using it, and it did feel like...
B
So if something overflows, it's not like you would notice. So I'm thinking, yeah, forget I said anything.
B
Well, if I'm happily using the wrappers, the high-level helpers, with ints, not int64, and then suddenly my code uses something that's very, very large, the overflow would be silent, and in Go nothing is going to warn you about it. So that would be a big no-no. And you might...
A
Okay,
I
met
him.
Is
this
something
that
will
sort
of
shake
out
with
use
that
it
might
be
something
that
people
complain
a
lot
about
or
that
you
get
annoyed
at
by
using
it
a
lot,
and
therefore
you
need
to
revisit
it
like
leave
it
as
it
is
for
now
and
revisit
it.
If
it's
a
significant
annoyance
in
use.
E
I suspect it's going to be moderately annoying, because using anything that's not the underspecified int in Go is moderately syntactically annoying. But I think what would be worst is if we did a mixture of them; that's just really high cognitive overhead and really, really frustrating. So going uniformly with int64 across the board seems like the safest choice at this point, and I kind of doubt that we'll be forced to revisit it.
F
Yeah, just a note on int versus uint: the reason Go actually uses int in a lot of these places is not because, like in C, they have to signal minus one and things like that, but because the overflows are not symmetric. An overflow on the uint end is really catastrophic: it wraps around and just gives you something that nothing understands, and everything breaks. So that's a thing to keep in mind.
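A quick demonstration of the silent wraparound behavior being discussed; Go integer overflow neither panics nor errors:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Signed overflow in Go wraps silently: no panic, no error.
	var n int32 = math.MaxInt32
	n++
	fmt.Println(n) // -2147483648

	// Unsigned underflow wraps the other way, to an enormous value,
	// which is the catastrophic case for lengths.
	var u uint32 = 0
	u--
	fmt.Println(u) // 4294967295
}
```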
E
Okay, my opinion on this is: at some point these things become bit strings instead of actual numbers. If you're worrying about these edge cases, you have already become aware that these are bit strings, and you are not treating them as numbers anymore. And so, if that's the life that you've chosen, fine: then it's a 64-bit bit string, and enjoy. I don't care if it's negative or not; that's your problem.
A
Well, well, speaking of that leads us perfectly into the next item, which is IEEE 754 leakage into the data model. There's issue 342 in the specs repo, which is a really interesting one.
A
So this is, you know: IEEE 754 defines Infinity, negative Infinity, and NaN as things that are valid (you get them when you have floats), and there's a binary format for them, and so CBOR bakes them into the spec as well. And so both our CBOR codecs give you these for free: in Go and in JavaScript these will round-trip smoothly, and per spec.
A
So this issue started with somebody saying: well, hey, can we add this to JSON? And there's a proposal for doing it, like we've got for bytes and links. And so then the discussion is about: okay, well, then we have to confront this thing and say, is it something we explicitly support? Is it something you just happen to get for free from a codec that supports IEEE 754 in some way? Or is it in some other space?
C
I think I agree with Eric that we should just reject them. They're less agreed upon between languages than anything else that we support, and it just feels really painful to try and deal with them, because cross-format is not going to work, right? Like, we can't do it in JSON, so JSON's out, unless we want to do more crazy type customization than we already have, and we hate it. So I don't think that we want to do that.
A
That's really where we're at: there's this overlap, so it's like this Venn diagram where it's mostly overlapping. We could just jump all in and say, when we say float, we mean 754; or we could just say, when we say float, we mean these things where you put a dot in a number, and that's it, and what happens in the codec happens in the codec, and we don't recommend you use them. We really need to get that out and say: look...
C
I mean, there are formats that we want to be able to use in IPLD that don't have these types, right? Like JSON, yeah. And if we know that that's an issue, then, like I said, we just kind of punt on them.
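The JSON side of the problem in a few lines: Go's encoding/json refuses these values outright, which is part of why cross-format support is painful.

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

func main() {
	// Neither NaN nor the infinities have any JSON representation.
	_, err := json.Marshal(math.NaN())
	fmt.Println(err) // json: unsupported value: NaN
	_, err = json.Marshal(math.Inf(-1))
	fmt.Println(err) // json: unsupported value: -Inf
}
```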
E
So I would also like to just flat-out read the definition of IEEE 754 NaN from Wikipedia, because it is worse than you think: the sign bit may be either zero or one; the biased exponent must be all one bits; but the fraction component can be anything except all zero bits, because all zero bits would represent infinity.
A
Yeah, no, it's just, I think the 754 spec does specify bit layout, simply because it's to do with how you do arithmetic with it and all that sort of stuff, whereas CBOR comes down to: well, there are these three specials, and these are the bytes that you encode them as, essentially. Okay.
C
Yeah, I feel like those issues are less that we don't have a mathematical model for what a float is; it's just that our computing model for floats is a little weird. Like, the way that we have to do math on them in computers is actually difficult. It's not that, literally, in mathematics, we have not agreed upon the best way to treat this, right?
A
Okay, so, Mikeal, you might want to put your vote into that issue there, so that we can angle towards a resolution for that. It does sound like... I was leaning more towards leaving it in an undefined space: there are codecs that will do this, so you shouldn't rely on it, and codecs that won't, so you shouldn't rely on it. But, you know...
A
Maybe this is an area where we just need to be clear, because at the moment you can encode undefined in the JS dag-cbor, but go-ipld-prime, and then go dag-cbor, will reject that on decode, because it doesn't know what to do with it. So we've got this problem with undefined already, which I'm in the process of fixing: I'm going to reject undefined as something that you can encode.
C
I'm also... we tend to talk about these things in terms of how they map onto existing formats and languages, but another thing that we need to keep in mind is how these affect us as we build more native formats. I do think that we're going to have more block formats being engineered as we move forward, some of them...
C
You
know
being
much
more
customized
to
particular
use
cases
and
the
the
more
things
you
have
to
implement
the
harder
that
will
be
and
the
more
surface
area
for
bugs
and
it
consistencies
to
just
keeping
things
smaller
rather
than
bigger,
seems
better
to
me,
especially
when
they're
areas
that
we
know
are
going
to
be
difficult
to
do.
Compatibility
on.
A
Okay, anything else before we stop the recording?
I
And I did a little bit of poking at some stuff over, you know, people's time off. So, I mentioned this to you guys a while ago: the ability to basically download a large block in increments by just downloading it backwards. If it's SHA-2... if it were SHA-3, and anyone were using SHA-3, you could use a Merkle tree, but if it's SHA-1 or SHA-2, it's a streaming hash, and so you can download it backwards.
I
Another thing I was thinking about: Hannah put up a proposal recently around sort of GraphSync and Bitswap, to basically use GraphSync to grab a manifest of all the blocks in a file, and then just send those over to Bitswap, and then grab all the blocks in parallel from whoever...
I
So this is interesting, and what it sort of reminded me of, or pointed out to me, is: for a while, my understanding was that Bitswap has had these constraints, which is the reason why we have a block size at all, right? Why have a block size?
I
The only way it works is if I download multiple blocks at a time and I just verify them when I get the chance, right? If your pipe is faster than my CPU, then you just send me more garbage, and I'm okay with that. And if you're going to be okay with this, then you have to set some parameters, like: how much garbage is okay, right? How much trust am I willing to have? Is it one megabyte of trust? Is it a gigabyte of trust?
C
That's not the case. So I want to unwind a couple of things. One is that in a GraphSync request there's a deterministic ordering to the request, so there's actually a deterministic order to the blocks, and so if you're getting them one at a time, you actually can validate them one at a time, that they're required for the...
C
Being slow? No, no: I can start the query locally before I get any response from the remote, and so when the first block comes in, I can decode it, and I know what my next block get is, like literally synchronously, before I even ask for the next block, which would be when the next block comes off the network. So you actually should be able to never keep a block sitting in buffering. It may be the case...
C
The reason you can't do that is because of the way GraphSync is implemented, on separate streams for different control flows, and the way you have to wrap that on top of itself literally just defies streaming. There's no way to set up the flow control so that I know if I need to block it or not; you literally just have to defer everything. And so that's why it's implemented this way, and that's why we have this problem. But in theory, you should not have this problem.
C
Hold on, walk through the whole control flow. What's the whole control flow? Okay, you get 100 megabytes into your network driver. The network driver, like, sends the syscall that says: hey, I've got data here. Your program pulls the data off of the network and decodes the first block; it now hands the first block to you, and when you decode the first block, you synchronously decode that block's CBOR and traverse it, so you know what your next block get is. You start your next block get into some kind of defer-the-future pattern, saying "give me this once it's processed", before you have any ability to process the next block yet. So if you are actually streaming the blocks and yielding them into the program as they get streamed out of that buffer, you literally do end up in a state where, yes, the network driver might be buffering a lot of garbage, but your program's memory doesn't. You can do this.
F
And to add to that: the reason we have a block limit is not for the garbage in general; it is so that the individual block that you hash, and that you pull the links out of, is handled within a bounded time. Because otherwise, the very first block that I send you might be a petabyte of data, and you never know where it ends.
I
Yeah, eventually, yes, eventually blocks will start dropping, right? But this is the constant, right? So there is a constant, which is: how much garbage am I willing to download before everything falls apart? And we were assuming that we could set that in software and just say one meg, but, like, not really.
F
Right, yeah, yeah. The one meg does not protect your network; it protects your process memory space. That's what it does. Yeah, yeah.
C
So, a couple things. One is, I agree with you that this particular thing does not do that, and that is something that we say it's for. But there's a bunch of other stuff too: like, a lot of decoders work serially, so you're gonna want some block size, maybe not a megabyte, but some block size. There's a lot of reasons, actually, to keep blocks...
C
...small, keep blocks small, just outside of all of this, right? Like, all these data structures that we're mutating all the time: we actually do want to keep the block sizes reasonably small so that we can cut down on the mutations, right? So as far as the protocol design goes, we're always going to need a way to handle lots of blocks efficiently, even if we get out of some of this other stuff.
C
Lastly, I love the idea of getting them backwards so that you can stream the decode, but this would also require that you eat a memcpy to reverse the buffer before you write it to the socket. So you're going to eat an extra memcpy for every block that you want to send using any protocol that relies on them coming in backwards, and then when you get them, you're going to have to do another memcpy when you reassemble it.
I
What you might say is that there is a very common scenario where you have multiple ways of describing the same data, one of which is as a single large block, and the other is as a tree of small blocks, right? This is, like, files, right? And so I can store the file data as a sequence of small blocks, so I download the block...
I
...chunked up this particular way. But also you can have the file as a single block, which means that if someone requests the file from me as a single block, this works. So I can go to the canonical website, find the SHA-256 of the Ubuntu ISO, search for it on ipfs, and someone will be able to send it to me, because there is a way to deterministically go from this one large block to me, and even to do it in parallel.
I
Securely. So I can actually choose, as the client; I can ask you to stream it to me in whatever way I want. For instance, I can say: I want you to start sending me small pieces, and then, as I trust you more, I want you to start sending me bigger pieces. Like, I can choose the construction if I want to.
I
When you, you know, add files to ipfs, you would need to store a second dag, right, or a second, like, metadata sort of thing, which says: hey, this is the hash when I string all of these together, and here's the list of the pieces that go into it. And then I would need Bitswap to know that it can ask for this, like, virtual object.
C
So... but think about how useful that really is, right? Because when you have text data, ideally you want to use some kind of chunker like Rabin, where you get better semantics when you mutate it. When you have data formats that are effectively binary formats that don't mutate...
C
If it's a video file, you want it chunked around the keyframe boundaries, because that's the unit that everybody will always need. Or, potentially, like, you know, in certain zip files and stuff like that, you can actually bounce around and seek and do it, if you chunk it up appropriately.
C
So
like
a
lot
of
these
formats
actually
just
have
ideal
chunkers
and
you
would
want
to
do
it.
Variably.
I
Sure. Anyway, the advantage here is, like, backwards compatibility with the rest of the internet, right? Everyone who hashes a file: you can now just search the hash on ipfs and see if you find it. And the other one is that I can now change my chunker without it being a total game-over scenario. There's still maybe an optimal chunker, and we'll still be able to, you know, share the CID that includes the graph with the chunker stuff.
C
I mean, we should just put the hash of the whole file in the file metadata anyway, right? Like, there's no reason not to, if we have it; it's a useful thing to have around if you want to compare to other files. And it's going to be insecure the same way that the length is insecure: people can fake it, but we already have a precedent for that; we already have another property that has that. So...
C
No, it doesn't. You just put a record in the DHT for that, and then people find you, and they go, like: hey, do you have a file with this hash? And you go: yes. And then you stream it to them.
I
I mean, the one I would say is the one that is the ipfs default, right, so that you have a canonical thing. And if everyone decides... you know, if ipfs is SHA-2, and then later on we switch to, like, you know, some other hash, then everyone all of a sudden starts publishing using that as the canonical one, because that's the canonical one that everyone's using now.
C
Oh yeah, yeah. This conversation also reminds me of something kind of crazy that I should bring up. So this is, like, very early stages. Basically, a friend of mine who does a lot of web standards work reached out, because he's starting to work on some stuff that is kind of in the space that we're in. What he essentially wants to do is make bundler performance in the browser a lot better, so that you can, like, sync bundles...
C
You can get just kind of the parts that have changed in the bundle, so the bundle maintains some understanding of what files it came from and stuff like that. And he wants to do this by basically adding a lot of code to the tools that create the bundles, as well as the browser, and have kind of the transport and everything in between be pretty simple and stay the same. But what he's sort of landing on is, like: oh yeah...
C
These manifests could literally be valid files, and all the links in them be the hash links to the rest of the file, and we could effectively interop with what the browser is doing through the codec layer, because they're not gonna do exactly UnixFS; they can't.
I
So one of the things with this: when I brought this up, whatever it was, a couple months ago or something, the thing I was concerned about was that I didn't know how I was going to describe these objects, like this download-the-file-backwards object. And so this approach says: just don't care about it as an IPLD thing; care about it as, like, a network transport thing, which does circumvent a lot of those issues, but it comes with its own.
C
It's not like that's difficult to do, right? But once you start adding so much optionality, the ease of it kind of goes away; it's not really that useful.
I
I mean, to be fair, Bitswap is, like, a trivial protocol; all the complexity is the sessions stuff. So if you could extract the sessions out of Bitswap, then Bitswap would be like 12 lines of code. I mean, it says: I have block; please give me block, right? All of the intelligence is in the sessions, or whatever you want to call it. It's just: here is protobuf with options; please send me back protobuf with data.
C
No, no, it's not. You've gotta take in the block store, and you've gotta interact with it. Like, it's not "hey, here is a block; tell me when you get asked for blocks"; there's no interface like that. It's: no, you conform to our block store abstraction, and then you wire up the block store to this thing.
D
Yeah, the other part of this discussion that I've seen thrown out a couple of times is: can we have these custom chunkers as, say, wasm modules that get packaged with data? So the data somehow links to its own decoder, and now you've got generic decoders, rather than that needing to be a predefined set in your standard library.
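No such interface exists today; but as a purely hypothetical sketch of the suggestion, a dag root might link to both its chunks and the wasm module that can reproduce the chunking. Every name here is invented for illustration:

```go
package chunkersketch

// Chunker is a hypothetical interface a wasm-packaged chunker could satisfy:
// it finds cut points in a byte stream.
type Chunker interface {
	// NextBoundary returns the offset of the next chunk boundary within buf,
	// or -1 if it needs more data to decide.
	NextBoundary(buf []byte) int
}

// FileRoot sketches a root node that links to its data, to the chunker that
// produced (and can reproduce) the chunking, and to the whole-file hash
// discussed above. All field names are made up.
type FileRoot struct {
	Chunks       []string // CIDs of the data chunks, in order
	ChunkerWasm  string   // CID of the wasm chunker module
	WholeFileSha []byte   // optional whole-file SHA-256, an unverified hint
}
```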
C
Right. So, like, one of the things that makes it really useful: say it's Rabin, say it's just the Rabin chunker with some settings. If that's at the root of my file, then if I take that file from you and I mutate it, I know how to rechunk it up ideally, and I actually have the program to rechunk it, without, you know, including some external dependency or anything.
D
...of the overall file being this extra field. And maybe we'll have those in the DHT, so that you could have arbitrary identifiers going to something about the format that's associated with them. So that, you know, when you have that SHA-256 of Ubuntu that you are looking for, what that could be pointed to is the manifest that has that SHA-256, but also the validator-like code that helps you do smart decoding based on that format and gives you some verification as it does the decoding.
I
It's like they're existing...
C
Yeah, Dean, I think you did forget one important point here, though, which is that if you have a really large file and you have just the SHA-2 of it, you can validate the entire file going backwards no matter how you get the chunks, but only if you get them from one party. You won't really be able to very efficiently get them from multiple parties, because you can only validate one piece at a time as you work your way backwards over it.
I
Like, that's where the GraphSync knowledge comes in, or the understanding that we are already taking in extra blocks, which is: I just set my parameter, which is, I'm willing to download 10 blocks of garbage at a time, right? And then I can parallelize. I can ask, you know... I ask Will for a manifest for the next 100 blocks, and then I just shoot out to everybody around me, 10 at a time. Yeah, and I build up more and more over time, as I trust Will's manifest more and more, because the first hundred blocks look fine, so why wouldn't the next hundred blocks look fine?
C
If you keep the blocks small and you have smaller units of validation, everything else gets easier; trying to do anything below that validation, or trying to increase the validation unit, you end up with all these other long-tail concerns, especially once you're under attack. I mean, that's another thing, too: a lot of this stuff started happening and becoming much more important as the network was more under attack, where people were trying to do negative things, and giving you some good blocks before they would give you bad blocks, and stuff like that.
I
Yeah, I mean, although, like, again, everything is, we'll say, trade-off dependent. Like, maybe I'm willing to say: okay, I'll just do geometric growth, and I'm willing for half my blocks to be bad. If someone's actively attacking me, they have to give me half of the file I want before they can send me an equal amount of bad bits.
C
Like, so, the incremental payment channel: you're only paying for the blocks that you validate through the GraphSync request. So you can't actually allow the buffer to get too large, because then they'll actually just stop sending you data, if you haven't been sending them acknowledgements, because you stopped paying.
I
So that's, like, a separate issue. But this is just... I don't know, I guess I found it slightly crazy that if GraphSync is willing to sit there and, like, eat up, and just have this segregated block store of quarantined blocks that it's waiting to verify, then, like, we can do all these things in Bitswap.
C
And it's, like, so easy... don't look to that as, like, the "oh, they can do that, so it's fine." It's not fine. It's really not. There's tons of bugs in this, and, like, problems, and, you know, it's really slow right now, and the lack of flow control in streams is really hurting; it's really hurting performance. Nobody thinks that it's a great idea; it's just the only way that we could get things to work right now.
C
What's happening is that the client can send you bad data until you realize it and sever the connection, and the longer that you make that buffer, the more it's going to be attacked. The longer that we keep doing that in this... yeah, yeah, yeah, anyway. Can we stop the recording now? Because this dovetails into a thing that I just want to talk about out loud a little bit.