From YouTube: CAR Pool and more - @expede - Data and IPFS: Transfer
A: CAR Pool and more. So, this is essentially hot off the presses: we merged the spec last night, or the night before, but some work has already gone into this, and we hope to have an implementation to share relatively soon, or a full implementation at least. And yeah, we're going to talk about pragmatic improvements to syncing performance.
A: So really, the heart of the problem with syncing, and with trying to get deduplication in any sort of batched way, is that in a distributed system you're always contending with local knowledge, right? Like, 100% of the time.
A: The other thing... why do I have... there we go. The other thing is that some of your peers are more capable than others, especially if you're a service provider: you don't need to be as general. You don't have to say, well, this might be going from an iPhone to an Android phone directly. It's like: no, this thing is sitting on EC2.
A: It's got tons of RAM, it's doing its thing, and I know that we're going to be pushing and pulling from there, and that one side of this equation is less capable than the other. This is all work that's happened inside of Fission. Justin, who is doing the actual implementation, had hoped to be here; unfortunately he wasn't able to make it, but he's the one who's actually doing the real coding work on this.
A: You can absolutely ask me questions about this as well, but if you want to know more about the Go implementation, he's the one to ask. So, big caveat: this is, again, not general. Bitswap is amazing; it's completely general, it works everywhere, for all cases. This is an optimization on top of that, for trusted point-to-point, and we'll talk a little bit later about how we can get this to untrusted point-to-point, and later again how we can get out of point-to-point IPLD transfers.
A: Everyone in this room, I'm sure, loves IPLD, but we have deeply nested, cross-linked data that can't all fit onto our clients, that really benefits from deduplication, and that has multiple writers.
A: If we could use a mature, universally supported transport, we could write things like clients where we don't need the entire IPFS node. Setting up js-ipfs in a browser, or setting up Kubo on desktop, and connecting and doing transfer that way, has been challenging on both fronts: in browsers it's really difficult, and if you just want to upload something, spinning up Kubo, sending some data across, and shutting it back down again is really not what it's intended for.
A: We are currently at food, water, and shelter: getting bits from point A to point B is not reliable, so we need to have some way of making this better. Earlier, Hannah had mentioned gaslighting herself for several years.
A: I've gaslit myself for several years of "and then Bitswap will work, and we won't have to worry about this at all, and somebody else will figure this out." I really hope that the result of this talk is that somebody comes up to me after and says, "you know, you're just holding IPFS wrong"; that would be an amazing outcome. But right now what we're really solving is: I just want to get bits across the wire.
A: So again: browsers, GitHub Actions as well (if you want to have a publishing flow, things like that), and also the CLI. These are the main places that we're running this stuff today. I also know a bunch of other teams have tried running this stuff on native mobile as well, and mobile is where people do most of their computing these days; your average user does not walk around with one of these.
A: Yeah, even keeping the nodes reliably connected has been a challenge. We do a peer connection, and then 45 seconds later it drops; it has even dropped in the middle of a transfer. So we're halfway through transferring a photo and it just stops; we have to detect that, reconnect, start it again, all of these things. And up and down over an HTTP gateway is just LARPing decentralization: we've created an Apache server with extra steps. That's my spicy take for this talk.
A: Native push doesn't make sense today, because it looks like this: we send a request to a REST server, that goes to a managed IPFS node, that says "hey, do a peer connection," and then we can actually do the transfer. This whole extra ceremony doesn't need to be in there. We just want to say: HTTP connection, send it over. Or WebSockets or something.
A: This is an extremely simplified, reductive version of Bitswap, but these are some of the core challenges that we have with it. Say, at the top of the graph: do you have this thing? Yeah, totally, I've got that thing. Great. Then we look at it, we find more links, then we make another request: hey, do you have these other things? Oh yeah, I've got those. We think about it, and again, and again: hey, do you have these things?
A: So if you have really wide data, Bitswap actually does a really good job, because I can discover the links and say: here are the thousand CIDs that I need, send these all in one request; I add these to my want list. And so that's great.
A: If you have really nested data, it doesn't do quite as well, and I don't know about you, but my data mainly looks like the red one. So, trying to cut this down: we can try things like compressing the graph. There's a whole bunch of prior art; you just heard about one version, desync, where essentially you open a CAR file, or a streaming CAR file, and you start appending blocks to it, and those go across, and that immediately improves things like 1000x.
A: It's way better. Graphsync: I was kind of hoping that Graphsync would solve all of our problems. It doesn't quite do the deduplication part that we want, but we can reuse a lot of it. We're working at an orthogonal point from things like selectors, so you should still be able to use selectors to say "I want this part of the graph," and then use the rest of CAR Pool, which we'll talk about in a second, to say "but not the stuff I already have."
A: The fundamental problem is that you know the top of the thing that you want, and you know the parts that you have, but you don't know what's in between. In the WebNative File System (WNFS) v1 we had a hashed history table, and we could use this to sync as well, because it essentially turned into a stream.
A: It was essentially a manifest that we could then analyze and say: okay, I'll send these across. A lot of projects also use something called a Merkle bloom, where you have a Merkle tree whose internal index is made of various-sized bloom filters describing its membership. That was the original inspiration before we ended up going with this. There's actually tons of research in the space; how to do performant graph transfer is a well-studied area.
A: It's literally an impossible problem, but you can do better by creating trade-offs in the system.
A: This has been our internal rallying cry for this: objects in the mirror are closer than they appear. We can stick them together and send them over the wire, but if you don't need to send something, that's even better, right? If you have them locally already.
A: So you need to find some balance between these two. We're trying to cut down latency, but, well, you can cut down latency immediately by saying "just send me your entire store up front," and that's going to have other consequences. So we still want to keep the deduplication, and the curve looks a little bit like this: if you're extremely inaccurate, you get really good latency, because you just start sending stuff; and then on the far other end, you're really accurate.
A: You don't send any duplicate blocks, or very, very few, but you have this higher latency, because you're always going back and forth. So we want to find roughly this section here and play in this area. The TL;DR is: we get enough of a performance improvement that we can afford to be a little bit messy. We're going to have some duplication, but we're going to do much better on latency; and we're going to miss some blocks sometimes, and that'll trigger extra rounds.
A: But that's also okay, because we're going to be somewhere in this section. So, step one: reduce scope. If you're coming from a... well, actually, let me bring all of these up. If you have no overlap, or you're going from a complete cold start, you just start by making guesses, and over subsequent rounds (this middle one) you start to learn more about what your other peer has, and you're going to start sending information back and forth.
A: So you start to learn more as you go through multiple sessions. The next time I connect, if I haven't cleared the cache from last time, I already know some stuff about you. Or you can send in the request: hey, I'm pushing (or pulling) an update to this data structure, this thing in DNSLink, or in ENS, or relative to this other CID, and we can diff that and do much, much, much better in that case.
A: So, some background napkin math. If we just send a list of CIDs, then depending on the CID version that's, let's call it, roughly 53 characters each, times half a million nodes: that's 26, 27 megs, and gzipped that's about 11, 12. Which is too big to send across if we want to get, you know, five gigs of data; this just ends up being huge, especially in a web context, where you're sending way too much data up front.
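As a quick sanity check on that napkin math, here is the arithmetic as a runnable sketch (assuming ~53 bytes per CID string and half a million blocks, as above):

    // Back-of-the-envelope size of a plain CID list.
    package main

    import "fmt"

    func main() {
        const bytesPerCID = 53    // ~length of a base32 CIDv1 string
        const numBlocks = 500_000 // half a million nodes
        fmt.Printf("%.1f MB\n", float64(bytesPerCID*numBlocks)/1e6) // ≈ 26.5 MB
    }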
A: So, can we do better? Bloom filters are a probabilistic data structure. Sorry if this is review for some of you, but just a really quick primer: you have a sequence of bits; each of them is a bucket, and it's just going to be zero or one.
A: You test by hashing an element and placing it inside, and if it doesn't change, then yep, this is in the set already. It'll never tell you that something isn't there when it is, but it might tell you that something is there when it isn't. So you're getting a much better size by giving up a little bit of accuracy.
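For reference, here is a minimal bloom filter along the lines of that primer. This is an illustrative sketch, not the CAR Pool spec's actual construction; deriving all k indexes from a single FNV-1a hash is an assumption made for brevity:

    package carpool

    import "hash/fnv"

    // Bloom is a plain bloom filter: m one-bit buckets and k hash
    // functions, here derived from one FNV-1a hash by double hashing.
    type Bloom struct {
        bits []uint64
        m, k uint64
    }

    func NewBloom(m, k uint64) *Bloom {
        return &Bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
    }

    func (b *Bloom) indexes(item []byte) []uint64 {
        h := fnv.New64a()
        h.Write(item)
        h1 := h.Sum64()
        h2 := h1>>33 | h1<<31 // a second hash derived from the first
        idx := make([]uint64, b.k)
        for i := uint64(0); i < b.k; i++ {
            idx[i] = (h1 + i*h2) % b.m
        }
        return idx
    }

    // Add sets k bucket bits for the item.
    func (b *Bloom) Add(item []byte) {
        for _, i := range b.indexes(item) {
            b.bits[i/64] |= 1 << (i % 64)
        }
    }

    // Has never returns false for a member (no false negatives), but
    // can return true for a non-member (a false positive).
    func (b *Bloom) Has(item []byte) bool {
        for _, i := range b.indexes(item) {
            if b.bits[i/64]&(1<<(i%64)) == 0 {
                return false
            }
        }
        return true
    }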
A: So if we take this half a million nodes with a false positive rate of one in a million (in general, we recommend adding one order of magnitude to the false positive rate; you can tune the parameters on a bloom filter), that's about 1.7 megs, which is a 93% savings over the un-gzipped version, and gzipped...
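Those figures follow from the standard bloom filter sizing formulas, m = -n ln(p) / (ln 2)^2 bits and k = (m/n) ln 2 hashes; a sketch that reproduces the ~1.7 MB number quoted above:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        n := 500_000.0 // elements
        p := 1e-6      // target false positive rate: one in a million
        m := -n * math.Log(p) / (math.Ln2 * math.Ln2) // bits
        k := m / n * math.Ln2                         // hash functions
        fmt.Printf("%.1f MB, k = %.0f\n", m/8/1e6, k) // ≈ 1.8 MB, k ≈ 20
    }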
A: Usually you'll actually have zero false positives; on average you'll get one every ten requests or so. And you can also adjust the size based on how many elements, the size of the set, you're working with; it's all tunable.
A: So the stages are: selection, so, you know, initially saying "this is the kind of thing that I want"; trying to do some narrowing on that (you may or may not be successful, but you try); the actual transmission; doing the analysis on the graph that you've gotten back; and then doing cleanup, for any stragglers you might have left at the end.
A: On the wire, we send these bloom filters and the roots for the subgraphs that you want. As you're walking down this graph, you'll typically start with one root, and then you'll discover some more structure and you might have multiple; it's essentially a want list. And then we're going to construct a CAR file and send that across, for both pull and push. This is from the requester's side; obviously it would be mirrored for the other. The other one looks a bit like this, where it's like:
A: Well, I know about this part of the path, in green, and then everything below here is what exists but that I don't know about yet. And vice versa: when you're pushing, it's like, well, I've already pushed this stuff, never send it again; and below that is, well, I know I need to push this one. And then this red dotted line is: I got the bloom filter and it said not to send this one, but on the next round they're going to tell me explicitly, hey, I need this subgraph.
A: This is, like, 95% of the high level for pull. If you have previous context, great: you can pull that in, analyze which roots you need, and construct your bloom filter. Send that across, and then you'll get back a CAR file with a bunch of records in it. And now you have an updated bloom on the requester side, and on the sender side as well, and vice versa.
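As a rough sketch, the pull round-trip just described could be framed as messages like these; the field names are illustrative, not the spec's actual wire format:

    package carpool

    // PullRequest is what the requester sends: the subgraph roots it
    // wants, plus a bloom filter over the blocks it already has.
    type PullRequest struct {
        Roots     []string // CIDs acting as the want list
        Bloom     []byte   // bit array of the "have" filter
        HashCount uint32   // k, so the responder can test membership
    }

    // PullResponse carries a CAR file of every reachable block that
    // missed the filter; stragglers are picked up on later rounds.
    type PullResponse struct {
        Car []byte
    }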
A: If you're pushing, it's not exactly the same thing in reverse, but it's pretty close, and this is covered in quite a bit more detail in the spec, as you can imagine. And then, finally, straggler cleanup. So we've gone through, and our bloom filter has excluded these orange nodes; we don't know about the red ones yet. On the next round, the other side is going to say: hey, I need these specific ones, these subgraphs; you didn't send them to me. Let's do the next round and clean these up.
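Putting the stages together, the requester's round loop might look like this sketch, where BlockStore, bloomOf, and sendPull are hypothetical stand-ins rather than the real Go implementation's API:

    package carpool

    // BlockStore stands in for whatever local storage the client has.
    type BlockStore interface {
        Haves() []string                           // CIDs we already hold
        PutCar(car []byte) error                   // index a received CAR
        MissingSubgraphRoots(root string) []string // unresolved links under root
    }

    // Pull sends roots plus a "have" bloom, ingests the CAR, re-walks,
    // and repeats until nothing under root is missing.
    func Pull(root string, store BlockStore,
        bloomOf func(cids []string) []byte,
        sendPull func(roots []string, bloom []byte) ([]byte, error),
    ) error {
        want := []string{root}
        for len(want) > 0 {
            car, err := sendPull(want, bloomOf(store.Haves()))
            if err != nil {
                return err
            }
            if err := store.PutCar(car); err != nil {
                return err
            }
            // Bloom false positives and newly discovered links become
            // the next round's want list: the "straggler cleanup".
            want = store.MissingSubgraphRoots(root)
        }
        return nil
    }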
A: Constructing these, and actually walking over this graph every time, can be expensive, so you can perform graph contraction. We're actually finding lots of use cases for this general technique, where you break up the graph somehow, often at a fork or a merge, and create a bloom filter for each of these, whatever your boxes are, because bloom filters have this nice property where you can just add them together and get the sum of the two.
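Concretely, that additive property is just bitwise OR over filters built with the same size and hash count; a sketch reusing the Bloom type from earlier:

    // Union ORs two filters that share parameters (same m and k); the
    // result matches everything either input filter matches, which is
    // what makes per-segment filters cheap to merge.
    func Union(a, b *Bloom) *Bloom {
        out := NewBloom(a.m, a.k)
        for i := range a.bits {
            out.bits[i] = a.bits[i] | b.bits[i]
        }
        return out
    }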
A: There are two ways of doing this. One is that requests are controlled: somebody has their iPhone, and they can use rendezvous hashing to break up the request into buckets and say, hey, can you send me these, and you these. And then the providers can communicate in the background, and this is why it's trusted: so if, say, one of providers A through H doesn't have a certain block, they'll talk to the others and say, hey, if you have this, also send it as part of your CAR file.
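A generic rendezvous (highest-random-weight) hashing sketch: the requester scores each provider against a CID and sends that CID's request to the top scorer. The scoring hash here is an arbitrary choice, not anything the spec mandates:

    package carpool

    import "hash/fnv"

    func score(provider, cid string) uint64 {
        h := fnv.New64a()
        h.Write([]byte(provider))
        h.Write([]byte(cid))
        return h.Sum64()
    }

    // PickProvider returns the provider with the highest score for
    // this CID; every requester computes the same answer with no
    // coordination, which is the point of rendezvous hashing.
    func PickProvider(providers []string, cid string) string {
        best, bestScore := "", uint64(0)
        for _, p := range providers {
            if s := score(p, cid); s >= bestScore {
                best, bestScore = p, s
            }
        }
        return best
    }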
A: Invertible bloom filters are a great way for nodes to share entire sets with each other and to figure out what the difference is. So this is for providers sharing things with each other.
A: This is a regular bloom filter at the top here, and the rest of this will not be to scale. An invertible bloom filter, well, each bucket now has two fields: a counter, in this case three, and the XOR of all the CIDs that landed inside of it. That counter means there are three things in this XOR; obviously you can't recover the stuff in there until the counter goes down to one.
A: It has some nice properties. If you add them together, just like a regular bloom filter, and you XOR these, you get... actually, I didn't update the numbers at the bottom; imagine those are updated... the counters just sum together. And then you can also go the other way: you can XOR the second one into the last one and recover the first again. So it has these really nice algebraic properties.
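A toy version of those buckets, assuming fixed 32-byte IDs (say, multihash digests); real invertible bloom lookup tables usually carry an extra hash-sum field to validate peeled elements, omitted here for brevity:

    package carpool

    // Cell is one bucket of an invertible bloom filter: a count and
    // the XOR of every ID that landed in the bucket.
    type Cell struct {
        Count int64
        IDXor [32]byte
    }

    func (c *Cell) Insert(id [32]byte) {
        c.Count++
        for i := range id {
            c.IDXor[i] ^= id[i]
        }
    }

    // Subtract removes another cell's contents. With sets A and B
    // loaded into two filters, cell-wise subtraction leaves the
    // symmetric difference; any cell with Count == +1 or -1 is "pure"
    // and its IDXor is a directly recoverable element, which is how
    // decoding (peeling) proceeds.
    func (c *Cell) Subtract(other Cell) {
        c.Count -= other.Count
        for i := range other.IDXor {
            c.IDXor[i] ^= other.IDXor[i]
        }
    }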
A: So this is the really high-level version of this; you should go to Philipp's talk about this later today.
A: So then you can have a similar picture where you go through a single coordinator: you make a request to one of them, and they're all sharing these invertible bloom filters, the providers, with each other, and then they can stream in... they can make smart decisions about who has what and how to efficiently break up the task.
A: The other thing you can do when you have multiple providers is use techniques like linear network coding. In this case we have three providers; they have sets A and B, or streams A and B, and the third one will take the bitwise XOR of A and B. Now, if you get any two of these: if you get A and B, obviously, you get back A and B. If you have A, and A XOR B...
A: ...you can pull that A out with XOR, because XOR is amazing, and recover B. So if you're on an unreliable network connection, or you just want to pull from multiple sources and you're really performance-conscious, you only need any two of the three.
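The underlying identity is just a ^ (a ^ b) == b, applied bytewise; a minimal sketch:

    package carpool

    // xorBytes XORs two equal-length byte slices. Given any two of the
    // three streams {A, B, A^B}, the third is recoverable this way,
    // which is the property linear network coding leans on.
    func xorBytes(x, y []byte) []byte {
        out := make([]byte, len(x))
        for i := range x {
            out[i] = x[i] ^ y[i]
        }
        return out
    }

    // Example: with stream a and the coded stream coded = a^b from two
    // different providers, b := xorBytes(a, coded) rebuilds stream b.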
A: This is a really nice way to go about it. And finally, some bonus wild ideas for the future: private set intersection, private set union, neural network compression, XOR folding, and more. I'll just really blow through these. Private set intersection and union let you share a set, basically broadcast "hey, this is the stuff I have," without actually sharing the elements, and the only one who would be able to recover...
A: XOR folding is something that we just started looking at very recently. It's usually used for privacy-preserving links, but: you take a bloom filter, you break it in half (after that it degrades really fast), line the halves up, and then XOR them, and you actually don't lose much in the way of being able to recover things. You do get false negatives in this world, but it actually performs pretty well for the most part.
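A sketch of the fold itself; the false-negative trade-off mentioned above shows up because two set bits at mirrored positions cancel each other out:

    package carpool

    // Fold halves a bloom filter's bit array by XORing its two halves
    // together (assumes an even length). A bit set in exactly one half
    // survives; a bit set in both halves cancels to zero, which is how
    // folding introduces false negatives and why it degrades quickly
    // if applied repeatedly.
    func Fold(bits []byte) []byte {
        half := len(bits) / 2
        out := make([]byte, half)
        for i := 0; i < half; i++ {
            out[i] = bits[i] ^ bits[half+i]
        }
        return out
    }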
A: So this is another privacy mechanism, and it also has the added benefit of cutting the size of the bloom filter in half. And then, just because I was talking about all these privacy things and coordination between multiple providers: Boris and I are giving a talk tomorrow on the Content Addressed Alliance, which is an idea we have for: what if the backbone providers, so Fission, Cloudflare, Protocol Labs, etc., coordinated and pre-shared, hey, we're storing however many terabytes of data?
A: You don't need to go to the DHT; you're mostly making your requests to these providers. Yes, you can still drop down into the DHT if you need to, but you should probably ask us first, or we can tell you who has what, or even pre-seed to each other: hey, this is popular content, please store this for me. So, a little bit like the Bytecode Alliance or the Bandwidth Alliance, if you're familiar with those. And yeah, that's the whirlwind tour.
A: So let's say that you're the requester, on a phone, and you have 100,000 blocks. You stick those all into the bloom filter, and you say: I want everything under this content address, except for the stuff I already have. You send that to the server; it starts to construct a CAR file, skipping any time it sees something that's in the bloom filter, and so it can start to carve things out.
A: Okay, I think that you have this bit only, and if the bloom filter has a false positive, we'll clean that up on the next round. But I don't have to wait and go back and forth, like: here's one layer, and now, okay, this is the diff, ask for those again. We just grab exactly what you're telling me to grab and send it in one go.
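Server-side, the walk just described might look like this sketch, reusing the earlier Bloom type; getBlock is a hypothetical loader, and real CAR framing (header, varint lengths) is elided:

    package carpool

    import "io"

    // WalkAndWrite streams, depth-first from the requested root, every
    // block the requester's bloom does not claim to have, pruning the
    // subgraph under any hit. False positives are simply skipped here
    // and requested explicitly on the next round.
    func WalkAndWrite(root string, have *Bloom,
        getBlock func(cid string) (data []byte, links []string, err error),
        out io.Writer,
    ) error {
        stack := []string{root}
        for len(stack) > 0 {
            cid := stack[len(stack)-1]
            stack = stack[:len(stack)-1]
            if have.Has([]byte(cid)) {
                continue // probably held already; prune this subgraph
            }
            data, links, err := getBlock(cid)
            if err != nil {
                return err
            }
            if _, err := out.Write(data); err != nil {
                return err
            }
            stack = append(stack, links...)
        }
        return nil
    }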
A: So that's part of it, yeah. We have a few places where this happens. The history is a big one, because now we have every change to every file over time, so that gets big, actually.
A: So if I have a directory, with some subdirectory, and I add a file to it, it creates a new directory that points down to that new file and to all of the old files. So it's structural sharing.
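That copy-on-write update is easy to picture with a toy immutable directory type; a sketch, not WNFS's actual data model:

    package carpool

    // Dir is an immutable directory node. Adding an entry copies only
    // this one node and shares every untouched child, so the old and
    // new roots deduplicate almost all of their blocks between them.
    type Dir struct {
        Entries map[string]*Dir // nil value stands in for a file
    }

    func (d *Dir) With(name string, child *Dir) *Dir {
        entries := make(map[string]*Dir, len(d.Entries)+1)
        for k, v := range d.Entries {
            entries[k] = v // shared by reference, not copied
        }
        entries[name] = child
        return &Dir{Entries: entries}
    }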
C: If you go to the IPLD Explorer page and you try to load the graph, some of the code that tries to do this doesn't think about deduplication very well as it tries to fetch stuff, and so it actually pulls all the blocks. It keeps looking very nice, because it does the pixel thing, but just trying to walk the graph and basically generate a manifest for you is very expensive.
A: Speaking of deduplication in these things: the size calculations inside of WNFS are, like, hilariously wrong, because for every new file you add, it's counting the entire history recursively, every time. So people open up the gateway explorer and it's like: oh yeah, you have 30 petabytes of data.
A: But this is specifically about the deduplication and batching of the request, so they would work together, yeah.
B: ...go through it both times, and you can skip that, as long as it's like an explore-all, because there are no, like, hidden things. It's a whole mess, but in any case we should just make it so you don't fetch things twice, and then if people miss something they'll complain. With selectors it will be rare, but yeah, you could... and you can also apply the bloom filter technique to, like, a selector query.
A: You know, in Graphsync, if you miss something because you're deduplicating the link, you have this thing: can't you then get it on the next round, or is the expectation that you'll get it in one go?
A: So we haven't done that. I mean, please question me if I'm wrong, but yeah, that's really selectors, and we haven't implemented that here yet. That's something that we'll want for sure at some stage, but we're literally just starting with, like: yeah, but what's the default?
A: Yep, yeah, yeah. And so the kind of use case we have is: I've pulled the latest version of the file system, lazily, so I don't have the whole thing, I just have, like, one stripe; and somebody else has made some writes and I want to grab the changes.
A: So I want you to give me, like, the top layer only. And actually, yeah, we do ask for things lazily as well, so we might be like, hey, only give me this one specific block, I guess, and then switch into this protocol: right, now give me this subdirectory, yeah.
D: ...point, but that's this highly nested, duplicated data that arises from versioning. A lot of our technologies inside of IPFS fall over when we do this. Think for a second about when you try to partially pin the root of the latest version of the system: you have no choice but to pin the entire archive if you are to properly establish that. And similarly with the design of this protocol, sort of dealing with the implicit... these are differently shaped DAGs than balanced DAGs and other things, because...