From YouTube: RAPIDE - Jorropo
Description
A Design For A Much Faster Bitswap Client
Today I want to talk to you about my idea for how we can make a better client for data transfer. The first point is that data transfer right now in IPFS is not that great. There are two aspects of speed that matter. The first is latency, or time to first byte: when I want some data, how fast does the transfer actually start?
I've marked latency here, not to say the situation is perfect, but it's a widely known problem and we have many people already working on it. The second part is throughput: when you download the data, how fast does it actually come to you? To put it differently, latency affects how fast you can download small files, and throughput is how fast you can download bigger files.
I have this example of a website, ipfs.io. I checked how many peers are providing it right now, and you can see the result: almost 400 peers. That's a lot of nodes that have this website, so you would think that with 400 providers I could download it extremely fast, the way BitTorrent does, where I connect to many, many peers. So I tried to download it, and the speed was 2 megabytes per second. That's not great. I think most of this is due to the way Bitswap is built.
As was explained previously, the Bitswap protocol is "I request a block, you give me a block", and that's the actual interface we have in the Kubo code today. This is not great, because it doesn't really represent how we use the data. To work around this, the go-bitswap implementation at least has a bunch of workarounds that try to be smart about finding blocks that are related and maybe sending the requests for them to the same peer. So you can see that the go-bitswap implementation is quite big.
It has 2,000 lines of code, and you can also see that it does a lot of different things; it has become quite a sprawl.
The most important thing we have to change, and it's also a thing GraphSync does, is a better API. I don't want anyone to have to read code, but basically, instead of downloading blocks one by one, you ask for a root block and you give it a selector, which is the thing that Hannah talked about earlier, and which gives you some part of the DAG. That allows the client, the download client and not the library, to know which blocks are interesting and to make better decisions about them.
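To make the contrast concrete, here is a toy Python sketch (the names and the dict-based DAG are made up for illustration, not the real Kubo or GraphSync APIs): a block-by-block client has to come back to the network for every link, while a selector-style request hands the whole traversal to the download client in one call.

```python
# Toy DAG: each block maps to a list of child links (CIDs). Not real IPLD.
DAG = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}

def get_block(cid):
    """Old shape: 'I request a block, you give me a block'."""
    return DAG[cid]

def fetch_selector(root):
    """New shape: 'give me everything under root'. The traversal logic
    lives with the client, so it can decide which blocks matter and
    in what order to request them."""
    seen, stack = [], [root]
    while stack:
        cid = stack.pop()
        seen.append(cid)
        stack.extend(reversed(get_block(cid)))
    return seen

print(fetch_selector("root"))  # ['root', 'a', 'c', 'b']
```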
So how would RAPIDE work? I think the simplest way is to show how it would work. Let's say this is a DAG and we want to download it. The first node is yellow because we have learned about it, but we are not trying to download it yet. Now, let's assume we have to do a DHT search.
We find two nodes that have the block, so we send a message: "hey, I would like that block." So we have those boxes, K and J, which are the nodes that have this DAG we want to download; this is the work list. Right now both nodes are trying to send us the A block, and let's say that K is faster, so K already gave us the A block.
So now J doesn't have any more work, because K did it, and K will progress through the DAG, trying to go deeper and deeper. So now J updates: we send to J "hey, by the way, we don't want A anymore, please continue." Now K got D, and the first interesting thing happens. On the previous slide we saw that two nodes were red; that means the two nodes were racing for the same blocks.
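A minimal sketch of the racing idea (synchronous and with hypothetical names, for clarity): both peers are asked for block A, the first answer wins, and the loser receives a cancel for that block so it can move on to other work.

```python
class FakePeer:
    """Stand-in for a remote peer; latency decides who 'answers' first."""
    def __init__(self, name, latency):
        self.name, self.latency = name, latency
        self.wants, self.cancelled = set(), set()
    def want(self, cid): self.wants.add(cid)
    def cancel(self, cid): self.cancelled.add(cid)
    def fetch(self, cid): return b"data-" + cid.encode()

def race(peers, cid):
    """Ask every peer for the same block; the first responder wins,
    everyone else gets a cancel message for that block."""
    for peer in peers:
        peer.want(cid)
    winner = min(peers, key=lambda p: p.latency)  # stand-in for "first reply"
    block = winner.fetch(cid)
    for peer in peers:
        if peer is not winner:
            peer.cancel(cid)
    return winner.name, block

k, j = FakePeer("K", latency=10), FakePeer("J", latency=50)
print(race([k, j], "A"))  # K is faster; J's want for A is cancelled
```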
Both nodes want the same block, but the situation we really want to be in is one where we divide the work, where different nodes can work on different parts of the file. The simplest way to put it is that if you have a big file on two nodes, you want to download the first half from one node and the second half from the other node.
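The ideal end state can be sketched as plain range splitting (a simplification: the real client converges on a split dynamically through racing and cancels, rather than planning it up front):

```python
def split_work(blocks, peers):
    """Statically divide a list of blocks among peers, one contiguous
    chunk each, so no peer downloads the same block as another."""
    size = -(-len(blocks) // len(peers))  # ceiling division
    return {peer: blocks[i * size:(i + 1) * size]
            for i, peer in enumerate(peers)}

print(split_work(list("ABCDEF"), ["K", "J"]))
# {'K': ['A', 'B', 'C'], 'J': ['D', 'E', 'F']} - first half from K,
# second half from J
```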
So that's what's happening here: K has gone deeper on D and is now completely forgetting about B, at least temporarily, and hopefully J will download B while K downloads D. Now J has downloaded B, and you can see that J always tries to keep two blocks in the want list. One thing I have not touched on yet is that the want list has priorities, so we can see that even though F is red, meaning that two nodes want it, it's quite low in the priority list.
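The prioritized want list could be sketched with a heap (hypothetical; lower number means more urgent, so a contested "red" block can sit at the bottom while unique blocks are fetched first):

```python
import heapq

class WantList:
    """Priority-ordered want list: blocks that another peer is already
    racing for can be deprioritized, so this peer spends its time on
    blocks that nobody else is fetching."""
    def __init__(self):
        self._heap = []
    def want(self, cid, priority):
        heapq.heappush(self._heap, (priority, cid))
    def next_block(self):
        return heapq.heappop(self._heap)[1]

wl = WantList()
wl.want("F", priority=9)  # red: another peer also wants it -> deprioritized
wl.want("B", priority=1)  # unique to this peer -> fetch it first
print(wl.next_block())  # B comes out before F
```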
One question that might come up with the example I've shown is that we have many cases with red nodes, so if the two nodes are fast enough, you will download the same block twice. I argue that this doesn't really happen in the real world. This is a capture of some random website on ipfs.io, which shows you the DAG structure, and as you can see, it's pretty big.
We should have enough nodes that conflicts are rare and only happen at the tail end, where you're at like 99% of the file and whichever node is fastest races for whatever is remaining. Also, one thing is that we should get all the other protocols, like GraphSync and the new protocols that were named earlier, for free.
So let's assume that we now have two nodes: one is K, doing Bitswap, and one is G, doing GraphSync. GraphSync is a more server-driven protocol: we just ask for some CID and selector, and the remote peer keeps pushing stuff, so we will have to be smarter about how we choose to pick our work. Right now we are starting with G, and G is GraphSync, so it's just going to continue doing its thing, downloading more and more stuff.
The most important thing here is that we are using a depth-first search strategy. When we download a new block, instead of downloading all the blocks at the current level, we try to race to the bottom of the DAG first, because the deeper you go, the less likely you are to run into someone else.
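The depth-first strategy, sketched against a breadth-first walk for contrast (toy dict DAG, illustrative only): by always descending before touching siblings, two downloaders that start at the same root diverge quickly instead of colliding level by level.

```python
def dfs_order(dag, root):
    """Depth-first: dive to the bottom of the DAG before widening.
    The deeper two independent downloaders get, the less likely
    their paths overlap."""
    order, stack = [], [root]
    while stack:
        cid = stack.pop()
        order.append(cid)
        stack.extend(reversed(dag.get(cid, [])))
    return order

def bfs_order(dag, root):
    """Breadth-first, for contrast: exhausts each level first,
    so two downloaders keep meeting on the same shallow blocks."""
    order, queue = [], [root]
    while queue:
        cid = queue.pop(0)
        order.append(cid)
        queue.extend(dag.get(cid, []))
    return order

dag = {"A": ["B", "D"], "B": ["C"], "D": ["E"]}
print(dfs_order(dag, "A"))  # ['A', 'B', 'C', 'D', 'E'] - bottom first
print(bfs_order(dag, "A"))  # ['A', 'B', 'D', 'C', 'E'] - level by level
```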
So the GraphSync node is going to continue going deeper and deeper, and now K has just entered. What K is going to do is pick the node that is lowest in G's priority. We cannot choose the priority of GraphSync, because it's negotiated when you send a selector, but what we can do is look at the priority from G, see that F and J are the last ones, and just inverse them, so we try to go very far away from what G is likely to download.
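Picking work as far as possible from what the GraphSync peer will reach can be sketched like this (hypothetical helper: take G's implied delivery order and walk it backwards):

```python
def pick_for_bitswap(graphsync_order, already_done):
    """The GraphSync peer delivers blocks roughly in its traversal
    order, so the Bitswap peer starts from the *end* of that order:
    the blocks G is least likely to reach soon."""
    remaining = [c for c in graphsync_order if c not in already_done]
    return list(reversed(remaining))

g_order = ["A", "B", "C", "D", "E", "F", "J"]
print(pick_for_bitswap(g_order, already_done={"A", "B"}))
# ['J', 'F', 'E', 'D', 'C'] - K works backwards from the tail
```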
So even if G would download those blocks at some point, it should happen very late, hopefully. And now we see that we have this part being downloaded by K while G is downloading the rest, and because K got J, J is now purple: we downloaded it and removed it from the want list. The thing is, the other server doesn't know yet that we are not interested in the tail end of the DAG anymore.
What's going to happen is that when we reach that part, we're just going to close the connection, so the GraphSync server is going to stop sending us data. And that's the main idea of RAPIDE. Hopefully it should be adaptable to any protocol.
I've tried implementing it once, and I've had multiple issues with it. Mainly, Bitswap is message-oriented: when I request some blocks, the actual blocks are going to arrive in a totally different place in the code.
The other node is going to open a new stream and send me the blocks there, so you need some very complex data structures, like maps of which blocks are wanted by which workers, and it's not pretty at all. What I wish we had is a simple request-response protocol on one single libp2p stream, where we can just ask for some blocks and, on the same stream, the other peer gives us the blocks back.
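What a single-stream request/response exchange could look like, as a toy in-memory sketch (this is not libp2p, just the shape of the protocol: length-prefixed frames, request and reply on the same stream, so no correlation maps are needed):

```python
import io
import struct

def send_msg(stream, payload):
    """Write one length-prefixed frame: 4-byte big-endian length, then bytes."""
    stream.write(struct.pack(">I", len(payload)) + payload)

def recv_msg(stream):
    """Read one length-prefixed frame back."""
    (length,) = struct.unpack(">I", stream.read(4))
    return stream.read(length)

# One stream, strict request/response: ask for a CID, read the block
# back on the same stream - no separate inbound stream to match up.
store = {b"cid-A": b"block-A-bytes"}

wire = io.BytesIO()
send_msg(wire, b"cid-A")         # client writes the request
wire.seek(0)
request = recv_msg(wire)         # server reads it ...
reply = io.BytesIO()
send_msg(reply, store[request])  # ... and answers on the same stream
reply.seek(0)
print(recv_msg(reply))  # b'block-A-bytes'
```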
The other issue is a bug in the Bitswap server, where the connections don't go as fast as they could: even if you have plenty of connections between the nodes, if the ping starts to get a bit high, it's going to slow down significantly. And another thing is that selectors are hard; they have a lot, a lot of features, probably more than I need.
First, we could have GraphSync parallel downloads, which is one of the things GraphSync doesn't support today. Simply, we start a GraphSync download at the start, and then we apply the same logic: on one of the nodes that is unlikely to be downloaded by the first GraphSync stream, or where it will take a while until that happens, we can start a new GraphSync stream with another node that attacks this part of the DAG, because the design is very independent of the way each protocol works.
Basically, as long as the protocol either gives us single blocks or gives us many blocks in a stream, we should be able to support it, so CAR files over HTTP, for example, is another one.
So the conclusion is this: hopefully RAPIDE should have the advantage of Bitswap, which is parallel downloads, and the speed of GraphSync, where you have a server that is aware of the content of your download, and you try to spread the load more effectively. You can find me on the Filecoin Slack if you want, and I have some notes I have written; it's basically the same talk, with more detail, in written form.