From YouTube: bitswap AMA - @stebalien - Data and IPFS: Transfer
A: So we're here to talk about Bitswap: mostly a little bit of historical context and where it really should go. This will be a Q&A; I don't have slides or a presentation. For some history: Bitswap is the thing we use to transfer data around. The most advanced implementation is the Go Bitswap implementation. It's the one that does the most things, and the one that's been hacked on the most. That's part of the story.
A: The first version of Bitswap was kind of dumb. It basically asked everyone for everything, and predictably you had something like 20x wasted bandwidth, and the more peers you were connected to, the more bandwidth you wasted. This became a big problem. So the story of Bitswap really is that the gateway, which does a lot of downloads, drove fixes to Bitswap, and that has been happening repeatedly over time.
A: The problem with that being the driving factor is that Bitswap, or at least the Go implementation, has a lot of patches and has been modified over time in a hurry to try to fix things. That means it had some architecture and then kind of lost it, then it had another architecture and kind of lost that, and all of these have glommed together. Now, in theory, there are some nice async pieces, but they're just held together by global locks, and it has a lot of problems because of that.
A: So this is one of the main reasons it needs to be rewritten. The other part is that no one is entirely sure how peer selection really works right now: how you select which peers you ask for content and all that kind of stuff, because multiple people worked on it over time without enough knowledge sharing.
A: And honestly, people worked on it without enough time to actually finish what they were doing, because they got pulled onto something else. So, basically, that's Bitswap. We talked about this last night, and actually the best way to describe it is this: Bitswap is the protocol you use if you don't know what your data is. You don't know the structure of your data, you don't even know the size of your data; you just need it.
A: Someone says "give me this CID", and you say, okay, I'll find you that CID as fast as possible. It's also really useful if you need to download from multiple peers, quickly pulling in data in parallel from multiple sources that might have limited bandwidth, and again you just don't really know where to go.
A: The core of the protocol, for people who don't know, is built around wants; it's actually a pub/sub protocol. Basically, you subscribe to a specific set of blocks from other nodes, other peers, and then when your peers get a block, if they get the block, they'll send you the block.
A: There have been some additions to this protocol. That subscription model is why we got the roughly 20x overhead: we say "hey everyone, give me the block when you get it", and then everyone who gets the block gives it to you, which is not great. Now you can also say "actually, just tell me when you have the block". This is what we call a want-have: you send a want plus a "have" bit saying:
A: "Please just tell me when you get the block, but don't send me the block when you get it." This is how you can do some content routing, but again it's pub/sub: you basically subscribe to notifications about the block. The other bit we can set now is don't-have, which makes it kind of a request/response-oriented protocol: you say, "hey, I'm interested in this block; please immediately tell me if you don't have it, so I can go ask someone else."
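As a rough illustration of the wantlist entries described above (these are made-up names for the sketch, not the actual go-bitswap types), the combinations look like this:

```python
from dataclasses import dataclass
from enum import Enum

class WantType(Enum):
    WANT_BLOCK = 1   # "send me the block when you have it"
    WANT_HAVE = 2    # "just tell me (HAVE) when you have it"

@dataclass
class WantlistEntry:
    cid: str
    want_type: WantType
    send_dont_have: bool = False  # "reply DONT_HAVE immediately if you lack it"
    cancel: bool = False          # "I'm no longer interested in this CID"

def describe(entry):
    """Summarize what a peer receiving this entry should do."""
    if entry.cancel:
        return "drop this CID from my wantlist"
    action = "send the block" if entry.want_type is WantType.WANT_BLOCK else "send HAVE"
    if entry.send_dont_have:
        return f"{action} when available; reply DONT_HAVE now if missing"
    return f"{action} when available"
```

The `send_dont_have` bit is what turns an otherwise pure subscription into the one-round-trip probe he describes next.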
A: This lets you not just ask everyone up front. Instead you can spend one round trip asking some peers you think have it; if they don't have it, you say, okay, fine, now I'll ask more peers and redo some work. So that's kind of how it works. The other secret sauce of Bitswap is sessions. The concept is: if I ask for one block, the peers that have this block probably have other related blocks.
A: The tricky part of Bitswap is that, unlike GraphSync, we don't have any knowledge of the graph. We don't really know which blocks are related, because we aren't traversing it; instead, the user is just asking for one block, then another block, then another block, all along. So, sessions: basically, you group requests together into a single session, and then you can correlate the peers in the session, keep this clump of peers, and say:
A: Okay, these peers seem to have related things, so when you request a new block in the session, you say one of these peers probably has it. The way Bitswap, or go-bitswap, currently works is that you receive a request from the user on a session, and that session then has a set of peers that have things related to it.
A
You'll
then
ask
usually
one
of
the
peers
for
the
data
and
then
you'll
like
right
now,
you'll
actually
ask
all
the
other
peers,
they
have
it,
so
you
send
it
one
have
to
everyone
else
in
the
future.
We
probably
need
to
limit
this.
What
are
the
problems
with?
We
should
ideally
be
broadcasting
a
lot
less,
because
right
now
we
have
to
tell
everyone
we're
interested
in
a
lot
of
things
which,
like
especially
on
gateways,
takes
a
lot
of
cpu,
actually
and
also
takes
a
lot
of
bandwidth.
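A minimal sketch of that splitting strategy (hypothetical names; the real go-bitswap logic is considerably more involved):

```python
def split_wants(cid, session_peers):
    """Plan who gets a want-block vs. a want-have for one CID.

    session_peers is an ordered list, best candidate first (e.g. by
    latency or past usefulness). One peer is asked for the actual
    block; the rest are only asked to announce when they have it.
    Returns {peer: message_kind}.
    """
    if not session_peers:
        return {}  # caller should fall back to broadcasting / a DHT lookup
    plan = {session_peers[0]: "want-block"}
    for peer in session_peers[1:]:
        plan[peer] = "want-have"
    return plan
```

As the talk notes, the want-haves ideally go to only a few peers rather than "everyone else".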
A: This is also the reason why, if you just joined the IPFS network and you happen to connect to a gateway, you are sad: your downstream actually gets saturated, or at least heavily loaded, just by incoming requests, sorry, not from the DHT but from the gateway, for data you probably don't have.
A: So that's kind of how that stuff works right now. That was a bit scatterbrained, but I just wanted to give people a bit of an overview. I guess now I'll dive a little deeper into sessions and the way they're really supposed to work. You start off with no peers, so the first thing you do is find who has the data. Right now we ask everyone you're connected to; the assumption was that asking the DHT was really slow.
A: Asking the DHT is now a lot faster, so what we probably should be doing is asking the DHT first, or maybe asking one or two peers that tend to be really, really useful. Ideally, we'd have some kind of side table that says: hey, these 10 peers tend to be giving us a lot of stuff. It's kind of a super-session across all sessions. We could ask them first and then go to the DHT, but we shouldn't spam everyone, because we may have ten thousand peers.
A: Just spamming everyone is not great. Once you've found some peers that have that first piece of data you're looking for, you can add them to your session, and now you start splitting requests between those peers; it's a smaller set. So right now we ask one peer in the session for a block, and then we spam a want-have to everyone else. Ideally we'd spam a want-have to a few peers, but not everyone.
A: We also actually have this interesting mechanism that I want to make sure people know about: when someone gives you data, or really any message, they gossip how much data they have queued for you. We are currently not using this mechanism, but it's actually a nice way to figure out how heavily loaded your peers are, at least in terms of what they have queued for you, and you can try to keep everyone's queues slightly full.
A: An ideal Bitswap implementation would basically look at all of your peers, or all the peers in a session, find a peer that does not have data queued for you, and prefer sending wants to that peer. Ideally, if you can keep this number from ever hitting zero, then you're maxing out all of your peers, and that's the optimal strategy. That's kind of what I wanted to say. Who has questions on Bitswap? Anything you want to discuss?
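A toy sketch of that queue-aware peer choice, assuming each peer gossips the amount of data it has queued for us (the `pending_bytes` name is invented for illustration):

```python
def pick_peer(pending_bytes):
    """Prefer the peer with the least data queued for us.

    pending_bytes: {peer_id: bytes the peer says it has queued for us},
    as gossiped on each message. Keeping every peer's queue slightly
    non-empty is what maxes out each peer's bandwidth.
    """
    if not pending_bytes:
        raise ValueError("no peers in session")
    # A peer whose queue for us has drained to zero is idle: feed it first.
    return min(pending_bytes, key=lambda p: pending_bytes[p])
```

A fuller version would combine this with latency ranking, as discussed later in the session.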
B: Can you speak to adding more peers to the session, perhaps via don't-haves?
A: Yes. The way this currently works is, let's see: periodically, regardless, we will just broadcast random wants to everyone again. This is not efficient; we should be a little more careful about who we send these wants to, but that's one way we add peers. This is just so we can add parallelism if we're not maxing out our bandwidth.
A: Well, we don't even check that. Basically, to ensure that we max out our bandwidth, we constantly try to add more peers to the session by just occasionally broadcasting a random want. We also have a mechanism where we say: if the session is empty, then we broadcast things to everyone.
A: So that's how we fix that. We also have a mechanism, actually, that I forgot to mention: the session has peers that are related to the content you're looking for, but if a peer keeps returning don't-haves or just timing out, we add them to the bad-peer list for the session and we stop asking them for things. So basically your session has this good-peer list and bad-peer list. Well, actually, I'm not sure if it has a bad-peer list.
A
I
think
it
won't
have
that.
Maybe
now
we
just
kicked
out
entirely,
but
basically
we
will
remove
peers
from
the
active
list
of
peers
in
your
session
when
they
stop
giving
you
data
either
by
timing
out
or
by
it's
like
we
do
actually
have
like.
So,
if
I
ask
you,
if
I
said
you
don't
have
sorry
a
request
for
don't
have,
and
you
don't
give
it
to
me
some
period
of
time,
I
just
say:
okay,
you
timed
out
whatever,
and
I
treat
that
as
I
don't
have.
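A small sketch of that fallback, treating a timed-out request as an implicit don't-have (the timeout value and names are made up for illustration):

```python
import time

WANT_TIMEOUT = 10.0  # seconds; illustrative, not go-bitswap's actual value

def classify_response(sent_at, response, now=None):
    """Map a peer's (non-)response to HAVE / DONT_HAVE / PENDING.

    response is "HAVE", "DONT_HAVE", or None if nothing has arrived yet.
    A request that has waited past the timeout counts as DONT_HAVE, so
    the session can move on and ask someone else.
    """
    now = time.monotonic() if now is None else now
    if response is not None:
        return response
    return "DONT_HAVE" if now - sent_at > WANT_TIMEOUT else "PENDING"
```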
B: [inaudible]

A: I don't. So you ask me for the data, and I don't do anything. Okay, yes, so it depends: if you ask me for a don't-have, I send you the don't-have; if you don't ask, peers don't send it. For example, when I send people a want-have, I generally don't also ask for a don't-have; I just say: hey, tell me when you have this thing, and don't tell me anything until then. The final message type is cancel, so you can say: I'm not interested in this block at all anymore.
A: Right now we have a fair amount of bandwidth spent on cancels. A large portion of this is because we send a lot of wants to peers that we don't need to send; if we sent fewer wants, then we'd send fewer cancels. We probably also want to be better about batching up cancels. Right now we send cancels as fast as possible, because cancels used to be what would stop
A: data from coming to us. Nowadays, cancels are often just stopping want-haves and things like that from coming to us, because we try to only ask one peer for the block. So we probably need to get better about batching that data, although we do actually, especially in go-bitswap, have a fairly large mechanism around batching outgoing want-list messages: we sort of collect the next message and then, once we've waited a little, we send it out.
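The collect-then-flush behavior described here can be sketched as a simple batched sender (illustrative only; go-bitswap's message queue is considerably more elaborate, and the flush timer is omitted here):

```python
class WantSender:
    """Accumulate wantlist changes and flush them as one message."""

    def __init__(self, send, max_pending=16):
        self.send = send          # callback taking a list of entries
        self.max_pending = max_pending
        self.pending = []

    def queue(self, entry):
        self.pending.append(entry)
        # Flush early if the batch gets big; otherwise a short timer
        # (not shown) would flush the accumulated batch.
        if len(self.pending) >= self.max_pending:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(list(self.pending))
            self.pending.clear()
```

Batching cancels the same way trades a little latency for far fewer, larger messages.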
A: This is specifically designed for the Bitswap swarm situation, or for Filecoin. In the Bitswap situation you have a bunch of clients, a bunch of peers, downloading data at once. Ideally you would all connect to all the peers that are downloading this data; one of them might have some data right now but not the other data, and then eventually, ideally, they get the data and send it to you too. Sessions,
A: the way they currently work, actually kind of mess with this a little bit, because if you start seeing that a peer doesn't have the data, you'll kick them out of your session. Ideally, we'd still be sending want-haves, just not want-blocks, to all peers that are related to the session but haven't been giving us useful data, just to see if they ever have it again.
A: I don't actually know how this currently works, but ideally that's how it would work. The other thing here is message propagation, or block propagation, to the network in Filecoin. The way that works is: basically, if I mine a new block, I publish it on pub/sub, and then everyone else who's connected to me will try to retrieve the messages over Bitswap.
A: But the nice thing about this system is that you get this overlay graph from the pub/sub network, and then the messages get lazily pulled over Bitswap, kind of following the graph from pub/sub. What can happen here is that the messages are basically one step behind, but only one step behind, because my block header goes out to my peers.
A: My peers will then immediately ask me for the messages. I may not have the messages yet, because I may be pulling them in from somewhere else, but the second I pull them in, I say: oh, my peers are interested in these messages, let me forward them to my peers. So it creates this kind of overlay graph. Pub/sub has a lot of duplication, where it sends data to multiple parties multiple times; Bitswap has a lot less.
A: So that's the nice thing about using Bitswap here: we can have this push-then-pull system where you get deduplication. The other part about messages is that peers on the network generally already have some of the messages, because they've already received them via pub/sub; this is mostly for pulling the messages you don't have. Bitswap is really good for that kind of stuff.
A: Actually, switching over to another topic: one of the nice things about Bitswap, and why I said it's kind of a magical protocol, is that if I have some directory tree on my computer in IPFS, and then I modify something and create a new directory tree, with any other transport protocol I'd have to worry about duplicate data and stuff like that when I try to download the new tree. In Bitswap, I just start downloading.
A: If I already have a block, I stop there. In a lot of protocols I would have to say: oh, please download the diff between A and B; or I would have to say: download A, then quickly cancel all these other things I already have. With Bitswap it just works.
A: I have not seen the Rust one. I think JavaScript now has sessions; actually, I don't know. Basically, the problem is that Bitswap itself is a really simple protocol. The complexity comes from all of the heuristics, peer management, sessions, and all that kind of stuff, which is a blessing and a curse.
A: It means that you can implement it the stupid way: it's going to be really simple, but really inefficient. On the other hand, it means you can keep the protocol really simple, so anyone can just make an implementation, even if it's really inefficient, and then slowly add more features to make it more efficient. It also means the server side is really easy to implement if you're just doing a basic version, because someone tells you they want something and you give it to them.
A: But you do have some state management here, because it is a pub/sub protocol, not a request/response protocol: you have to keep track of what peers are looking for, or at least you should. We also have other block-fetch protocols that try to be more request/response oriented, like GraphSync.
A: You can. It's not nice, but yes, you can. For some people in this room who haven't looked at this: yes, you can just say, "do I have it now? No? Then I don't care about you." So yes, you could probably implement this in an AWS Lambda or whatever, where you spin up an ephemeral node that checks if you have something and, if you don't, shuts down.

You mentioned that the protocol is more like pub/sub?
A: Yes, it is, but the nice thing about it being truly pub/sub is that it just makes it more magical. The problem is:
A
If
it
is
purely
request
response,
then
I
have
to
keep
on
going
around
and
asking
and
asking
and
asking
and
asking
and
like
data
won't
just
flow
through
the
network
and
like
quite
often
I'll
do
is
say
like
oh,
you
don't
have
it,
I'm
gonna
mark
that
you
don't
have
it,
so
I
don't
ask
you
again
and
that
we
usually
do
that
because,
like
I
like,
basically
when
I
ask
you
the
first
time,
I'm
gonna
record
that
you
don't
have
it
so
like
that's,
why
it's
really
useful
to
like
to
have
it
being
like
request
response
like
pub
sub,
because
it
means
that,
like
no
matter
what
our
construction
is,
no
matter
how
the
data
is
both
networked,
I
will
probably
get
the
data
eventually.
A: The other way to do this is to just keep asking: as long as I'm interested, I keep going around in a circle. But that can get inefficient, especially in situations where we don't know where the data is.
F: [inaudible]

A: Yeah, so Bitswap the network protocol is a teeny, tiny little thing. Ninety percent of it is sessions, and a little bit is the decision-engine side: what blocks to send, and managing want lists on the receive side. But it's honestly not hard to manage the wants on the receive side. The main problem we have right now is that we don't have any backpressure on how big these want lists grow. That's something we definitely need to fix, and that's the one protocol change you'd really need to make there.
A: Okay, no, so we actually currently rank them by latency. And the optimal strategy here is: you rank them by latency, so the first few blocks go to the peers with the lowest latency, and then subsequent requests go to the least-loaded peers, using their queues.
A: That's the ideal solution. We do have some stuff in there where, basically, we record when we send a want, measure the amount of time it takes to get back a HAVE, and treat that as your latency. I think that's how we do it, or do we use pings now? I can't remember.
A: Let's see, I think now we might just use pings, but I can't remember.
D: [inaudible]

A: I care about either latency or bandwidth. When I care about latency, then yes, bandwidth can be a problem if I have one big block to download, but usually I either care about getting a bunch of small blocks very quickly, or I care about downloading a lot of data. When I care about lots of data:
A: basically, I try to even out how I spread the requests over many peers, so they all have something to send me. Ideally that means peers that have more bandwidth will have more outstanding wants, because I'm trying to keep their queues full.
A: So that's the idea: even if you're a small peer with low bandwidth, you're still maxed out. Now, the problem there is if I'm asking a small peer for an important node, a blocker; that's not great. Right now we have timeouts to deal with that: if I hit a timeout, then I go ask someone else.
A: We do need better logic locally on the client to deal with this, and this is where this whole "we just need to rewrite a lot of this stuff" comes from. We should have good client-side logic that says: hey, basically, I have one want left, or fewer and fewer wants left; I should broadcast these out to more peers, because it means they're probably stuck somewhere.
C: So the logic that requires... the thing that we...

A: It's not just from the server, the other peers, yeah, but...
A: So it depends on what you're doing. The nice thing about message distribution in Filecoin is that it will sort of create that automatically, because you'll create an overlay graph from gossipsub, and that has its own peer-exchange protocol built in, so ideally you get these sorts of clusters. This does not work in general, though. We really do need server-side sessions and server-side peer exchange, basically where the server says: oh yeah, I've received these.
A: "I sent this block, or related blocks, to other peers." It's kind of hard to do that, but it should be able to tell you: hey, here are some other peers you might be interested in. Basically, when it sends you a block, it should also be able to say: hey, FYI, if you wanted that block, here are some other peers that might be useful, so you can add them to your session.
E: On the have story: let's say that a peer receives wants from multiple of its peers, so, you know, that's pub/sub, and it's ready to notify them whenever it gets the data that all of these other peers want. How does Bitswap handle telling all of those peers, if you get all the data that all of your other peers want? A super hypothetical and probably rare situation, but how is that handled?
A: So when you get the data, we have this queue locally. Sorry, let me back up: I keep track of your want list.
A: Whenever I get something, I look it up in the want lists to see if anyone's interested. If they are, I add a bunch of entries to the queue saying I'm going to send this stuff out. It's basically this big priority queue, and based on that priority queue we pull off the next peer and pull off a chunk of work based on the wants we need to serve.
A: We queue that for them, put the peer back on the priority queue, pull the next peer, that kind of stuff, so we work through this queue. We have a set of workers that just flush this data out. Currently we don't have great limits; well, we have limits on this, but the way it works right now is not great. You can run into problems if you have some slow peers, but it usually doesn't overload you too much.
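A bare-bones sketch of that serve path (check who wants an arriving block, enqueue send tasks by priority, let workers drain them). This is an illustration of the idea, not the real go-bitswap decision engine:

```python
import heapq
from collections import defaultdict

class DecisionEngine:
    """Track peer wantlists; when a block arrives, queue sends by priority."""

    def __init__(self):
        self.wants = defaultdict(dict)  # peer -> {cid: priority}
        self.task_queue = []            # min-heap of (-priority, peer, cid)

    def want(self, peer, cid, priority=1):
        self.wants[peer][cid] = priority

    def cancel(self, peer, cid):
        self.wants[peer].pop(cid, None)

    def block_arrived(self, cid):
        # Every peer whose wantlist mentions this CID gets a send task.
        for peer, wantlist in self.wants.items():
            if cid in wantlist:
                heapq.heappush(self.task_queue, (-wantlist[cid], peer, cid))

    def next_task(self):
        """What a worker pops to actually send a block to a peer."""
        if not self.task_queue:
            return None
        _, peer, cid = heapq.heappop(self.task_queue)
        return peer, cid
```

The missing piece the talk highlights is backpressure: nothing here bounds how large `wants` or `task_queue` may grow.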
A: Think of it as a context; it's a request. The session comes into existence when, for example in Kubo, the user comes to go-ipfs or Kubo and says "pin this thing" or "give me this file", or the gateway does, or whatever. Internally, the command itself will create the session.
A: Then we basically treat that as a sort of scoped blockstore, and we make all of our requests through that scope. When we're done with the request, we throw away the session. In the future I'd really like to make this use Go contexts, so we can use it elsewhere, because we have a lot of other session information we need to shove in here. But the idea is basically: you have a request, and everything related to it.
A: You should be able to start attaching additional metadata to this: peers that are useful for this request, content-routing information, useful paths you've resolved related to this request, the root of the request, all this kind of stuff that you can then use to try to find the data you haven't fetched yet.
D: So, given that the session gets discarded, I guess it's up to the application if they want to somehow persist that information for the session?
A: We don't actually expose a way to do that. That would be something really nice to have. One problem we actually have right now is that if you are visiting a website, we really should have some kind of client-scoped session, where the gateway has something that says: any requests related to this domain name, or whatever, go into this session. Basically keyed by domain name and user, or something like that, or even just the user's connection.
A: I don't know exactly how you'd do this, but you have to have something like that. Right now we don't; right now it's very much "we create a session, we throw away the session, whatever". It's not too bad, because usually you are still connected to those peers, and the way it currently works is we broadcast to everyone on the first request, so we'll just ask them again and re-form the session. But it's not great; ideally we would still keep something.
F: [inaudible]

E: ...data that you're sending and receiving and what people are requesting. So, two questions about the metrics, I guess. Do you have a metric on the delay between wants and haves, like when a node then obtains data that it doesn't currently have? And delays between wants and sends, how many dropped wants, or cancelled wants, or...
A
Don't
think
we
keep
metrics
from
any
of
that
stuff.
Most
of
our
metrics
are
around
like
how
much
inbound
man
would
you
have
per
peer?
How
big
your
want
list
is
what
your
peers
are
wanting
that
kind
of
stuff
yeah.
I
don't
know.
No,
we
don't
really
have
any
like.
We
don't
expose.
At
least
it's
like.
We
do
internally
have
some
of
these
metrics,
but
we
don't
expose
them.
Those
are
great
things
to
probably
expose
through,
like,
I
guess
for
me,
set
points
but
yeah.
Okay,.
A: I mean, we already have a bunch of metrics endpoints; you could just add another one of those to expose this. I'm not entirely sure, though: some of this information is ephemeral, we compute it when we need it, so it's not necessarily stored anywhere. I don't know, actually, unfortunately.
C: [inaudible]

A: It is absolutely not part of it, no. So this is actually a big hole right now: you need to find other peers through the DHT, which is really annoying, because yes, if you talk to someone and they're serving data, they can probably tell you who else can serve your data, but we have no way of saying: hey, you're looking for this thing related to this CID, here are some peers you should maybe connect to.
A: This also opens up a whole can of worms of "please connect to this random peer". Ideally, what this would be is actually a signed provider record. Once we have signed provider records, then yes, I could ask a peer for data, and the peer could just tell me: and here is a set of provider records for this data as well, in case you're looking for related things.
C: [inaudible]

A: The want-have does the same thing. The want-haves are still pub/sub: when I send you a want-have, you will then send me a HAVE when you get the block. So it's very much like our gossip stuff, basically the same idea, where you eagerly receive messages from some set of peers and then lazily receive gossip from other peers telling you which messages you may have missed. It's the same kind of idea.
A: The problem with that, though, is that wants don't expire. Maybe we should have some kind of expiring want. My concern there, and this is what Alfredo was bringing up last night, is that I don't want to keep rebroadcasting, and right now we have to constantly send. So I'm not entirely sure, but yeah.
A: I think another way of doing this is delaying. One piece of logic we could have is to delay sending out cancels and things like that, and then basically compare my image of your want list with my version of the want list; if they differ enough, I just send you the full want list. That's another way to do this, because a full want list implicitly cancels everything else.
A: The other way to do this is to maybe assign IDs to wants. We should probably be doing this: we could assign IDs and then send bit fields, and that would make this a lot simpler. It does mean...
A: They get merged. No, we have a flag that's full or not-full, and we have no way of ordering them. Things have been proposed before as ways to fix this, but we need some kind of ordering, especially for the first message on a stream or whatever. We need some kind of ordering; right now it's a race condition.
A: We just take the latest stream. Basically, a message is either full or not full. If it's not full, it'll have cancels and wants, and we'll merge those in; if it's a full want list, we replace your current want list with the full want list that you sent us, and you set a flag on your request that says "full".
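The full vs. incremental merge he describes can be sketched like this (a simplification of the real message handling):

```python
def apply_message(current_wants, message):
    """Apply an incoming wantlist message to a peer's stored wantlist.

    message = {"full": bool, "wants": set_of_cids, "cancels": set_of_cids}.
    A full message replaces the stored list outright (implicitly
    cancelling everything else); an incremental one merges new wants
    and removes cancelled ones.
    """
    if message["full"]:
        return set(message["wants"])
    return (current_wants | set(message["wants"])) - set(message["cancels"])
```

The race condition mentioned above is that nothing orders an incremental message relative to a full one arriving around the same time.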
A: Yeah, ordering would be great, but bit fields would actually also be nice. That would be really interesting, because right now you have to say: oh, cancel this one, and now this one, whatever. It's very robust, but it means you basically send every CID twice, which doubles the bandwidth. It would be nice to not do that.
A: On the other hand, I think the main thing needed to resolve the actual problem is not that, because doubling the upload bandwidth is nothing compared to multiplying it by a thousand by talking to a thousand peers. If we just broadcast to a few peers, the output bandwidth drops to almost nothing.
A: So this is why Bitswap is super complicated. The way it actually works is you have a series of sessions, and then you have a series of want lists, basically peer handlers, and so it's basically a matrix: you have your set of sessions, and you have your set of want lists for your peers.
A: The sessions will then basically send wants to your local images of those peers; effectively they're facades. Then you say: oh, is this want already in the want list? If it is, don't send anything; if it's not, send something. So really what this does is: when you ask for something in a session, you find the peers you have and you add it to the local want lists.
A: If there is a change, you add it to a diff of things to send to the peer, and then after some timeout, or once the difference is big enough or whatever, you push the diff. Yes, it's global across sessions for the peers. Basically, every session has a want list, every peer has a want list, and then you also have a broadcast want list.
A: The broadcast want list is the sum of all the broadcast wants from all the sessions, merged together so they can be dealt with at once. But no, this is client-side.
A: Honestly, this is a problem on the gateways, because the client-side want lists on a gateway can get fairly large. On the server side, the amount of data you have to actually store here is pretty minimal, because usually you have a thousand peers or so at most, maybe five thousand, and then hundreds of wants or so per peer. It's actually not that much data.
A
Yes, yeah, so this is one of the reasons we have problems. Actually, we have highly optimized this, so it's not as bad as it used to be. This is also one of the main reasons why we moved CIDs to be strings in Go: because it means we can basically create one CID once and then reuse it, like, not have to duplicate the data, effectively.
A
It's basically like interning, yeah, so that saves us a fair amount. So we try to be very careful there. We also, I mean, peer IDs are also effectively interned, so it saves us a fair amount there as well. It's not great, but the biggest problem, actually, on clients like the gateway is more just, like, locks and handling notifications of connects and disconnects.
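The interning idea above could be sketched like this in Go: a toy intern table, not the actual go-cid implementation. The first time a CID's bytes show up, one canonical string is kept; afterwards every want list, session, and queue references that same string, so the bytes are stored once.

```go
package main

import "fmt"

// intern is a sketch of string interning: map from CID bytes to one
// canonical string shared by all holders.
type intern struct {
	table map[string]string
}

func newIntern() *intern { return &intern{table: map[string]string{}} }

func (i *intern) get(cid []byte) string {
	s := string(cid) // allocates a temporary copy...
	if canon, ok := i.table[s]; ok {
		return canon // ...but the copy is dropped; the canonical string wins
	}
	i.table[s] = s
	return s
}

func main() {
	in := newIntern()
	a := in.get([]byte("bafy-example"))
	b := in.get([]byte("bafy-example"))
	fmt.Println(a == b, len(in.table)) // true 1
}
```

Because Go strings are immutable and share their backing bytes on assignment, handing out the canonical string is cheap, which is one reason string-typed CIDs are attractive here.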
A
We now have a patch that, or, we made a change that hopefully helps this a bit. But yeah, the problem is: whenever a peer shows up, you send them everything you broadcast, like your broadcast want list; whenever a peer leaves, you have to do a lot of cleanup and deal with that. And we also treat it as a join and a leave: whenever a peer becomes unresponsive we treat that as leaving, and as joining once it becomes responsive again.
A
D
E
F
E
F
E
A
Sorry, this is only when you're looking for things. The point is, when you're asking for data, you're basically in one of two modes: you're either latency-sensitive, where you want, like, the first block or something like that really, really fast, or you're in a mode where you're just trying to get as much data as possible as fast as possible. Basically, the way you've seen it is: usually you're latency-sensitive when you have a couple of blocks you're looking for, like one or two, because that's when you're traversing a path, or you're downloading a blockchain or something like that, or you're downloading a small file. In that case, you find the lowest-latency peer and you ask them, where, ideally, you ask multiple low-latency peers. And then there's the case where you're trying to download as fast as possible.
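The two modes described above could be sketched roughly like this in Go. The types, the "two blocks" cutoff, and the "pick two peers" count are all illustrative assumptions; the real session logic is more involved.

```go
package main

import (
	"fmt"
	"sort"
)

type peer struct {
	id      string
	latency int // milliseconds, as measured by the session
}

// pickPeers sketches the two modes: when only a block or two is wanted
// (latency-sensitive, e.g. path traversal or a small file), ask a couple of
// the lowest-latency peers; otherwise (throughput mode) spread wants over
// everyone, so every peer can max out its bandwidth toward us.
func pickPeers(peers []peer, blocksWanted int) []peer {
	if blocksWanted > 2 {
		return peers // throughput mode: use them all
	}
	sorted := append([]peer(nil), peers...)
	sort.Slice(sorted, func(a, b int) bool { return sorted[a].latency < sorted[b].latency })
	if len(sorted) > 2 {
		sorted = sorted[:2] // latency mode: a couple of low-latency peers
	}
	return sorted
}

func main() {
	ps := []peer{{"p1", 80}, {"p2", 10}, {"p3", 40}}
	for _, p := range pickPeers(ps, 1) {
		fmt.Println(p.id) // p2, then p3: the two lowest-latency peers
	}
}
```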
A
The thinking is that you want every peer to be maximizing their outbound bandwidth to you, because that way you're maximizing your inbound bandwidth. But otherwise? No, you don't talk. Like, well, right now you broadcast a lot if you're looking for things, but if you're not looking for things, you don't send anything, you just stay quiet.
A
So that's the server-side problem. So basically the thinking here is: it's up to the server to decide what they want to send you and what they don't want to send you. So, like, when I'm giving you data, I have a local queue and I can make priority decisions based on:
A
do I want to serve you, or do I want to serve someone else? So the point is, actually, one of the goals here, we had this problem: I don't want you, as a client, to keep on sending me additional wants and CIDs if I'm not sending you anything. So that was the idea behind adding this extra number of, like, how much data do I have queued for you, so I can tell you: hey, I already have some data queued for you there.
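A minimal sketch of how a client might use that queued-data number: if the server already has plenty of our data queued, adding more wants just grows its queue, so hold off. The threshold and the function are illustrative assumptions, not anything defined by the protocol.

```go
package main

import "fmt"

// shouldSendMoreWants is a hypothetical client-side policy: only queue more
// wants to a server whose reported backlog of data for us is still shallow.
func shouldSendMoreWants(serverQueuedBytes int) bool {
	const highWater = 1 << 20 // 1 MiB high-water mark, purely illustrative
	return serverQueuedBytes < highWater
}

func main() {
	fmt.Println(shouldSendMoreWants(64 * 1024)) // true: queue is shallow
	fmt.Println(shouldSendMoreWants(4 << 20))   // false: server is backed up
}
```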
A
Matching wants is hard to track, because you don't actually know how many wants I'm really tracking for you right now; there's no real guarantee there, and it's an async protocol. A simpler way of doing this is, basically, I can send you a window and say: you can have 20 more wants, 20 tickets, you can send me 20 wants. Or, basically, you send me 10 wants and I say: now you have 20 more, or whatever, something like that. We don't currently do that, yeah.
A
It's still a bit tricky, because I want to make sure we don't have too much back and forth. Like, if you're just sending me a bunch of wants, I don't want to have to keep on signing these tickets. But the tickets are pretty small, and, like, okay, fine, one packet once in a while, it's not bad. As long as I batch it up in time, it's probably fine. But we don't do that. We'd like that, though.
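The ticket/window idea discussed here, which the speaker says is not currently implemented, might look something like this in Go. This is purely a sketch of the mechanism, with hypothetical names.

```go
package main

import "fmt"

// ticketWindow sketches the proposal: the server grants the client a budget
// of wants ("tickets"); the client spends one per want and must stop when the
// budget is gone, until the server grants more (typically piggybacked on
// blocks it is already sending back).
type ticketWindow struct {
	credits int
}

// grant is the server telling the client "you can have n more wants".
func (w *ticketWindow) grant(n int) { w.credits += n }

// trySendWant consumes one ticket; it reports false when the window is
// empty, meaning the want must wait for the next grant.
func (w *ticketWindow) trySendWant() bool {
	if w.credits == 0 {
		return false
	}
	w.credits--
	return true
}

func main() {
	w := &ticketWindow{}
	w.grant(2)
	fmt.Println(w.trySendWant(), w.trySendWant(), w.trySendWant()) // true true false
	w.grant(10)                  // server piggybacks a new grant with some blocks
	fmt.Println(w.trySendWant()) // true
}
```

Because both sides agree on the budget, the server never has to drop wants silently, which is the property argued for below.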
B
A
You could, you could have an ack to back off. The problem there is, like, no, the problem is, if everyone's overloaded, that doesn't work. The problem with stuff like that is you get cascading failures, where, like, you're sending me lots of wants, I'm now overloaded, or you're overloaded because you're doing lots of work, and so now my message back to you, to tell you to stop, gets blocked up on something, or it takes time, and now you take time processing that message, and you keep sending me wants.
A
But that's not great. So the problem is, if I start dropping your wants immediately, then you don't know what I've actually accepted. So that's the nice thing about having a more, like, request-based thing, where I tell you when you can send more: we've now agreed on how much I'm willing to give you. Yes, you could have an ack to back off, but I prefer making it so that, like,
A
basically, "if no one talks, no one can talk" kinds of situations. If you have something where you have to tell someone to go away, then there are cases where that's really useful, but I really prefer to make it so that, by default, you fail quiet, I guess. Otherwise, when you start getting overloaded, you then have to do additional work to unload things, and it's not great. And I don't think there'll be too much extra bandwidth, like:
A
if you're sending me lots of wants and keep on sending, then basically, if I'm going to send you another one of these "oh, you can send more wants" messages, chances are I've already sent you some blocks, like, I'm going to be sending you blocks as well. Otherwise, I'm not going to want more wants from you, usually. So it's probably just going to get piggybacked.
D
A
It's very, very much an implementation thing, but there should be, like, implementation notes that describe these different systems. Yes, I would like that to happen; right now I'm not focused on that, sure, yeah, but.
D
C
Picked
over
the
line
that
you
paid
up
for
finding
monster
names.
The only thing I do want to be careful of is, like, the prior, the prior slash existing version of the bitswap spec in the specs repo has two issues. One is that it doesn't refer to the latest version of bitswap, but the other is that it basically describes, like, the whole bitswap operation from that time period, which is not a spec, right? It makes it very difficult to understand what it is.
C
F
F
I think they should live in the repo, maybe, for, like, their first implementation, but they need to be there and they need to be updated, because there are so many lessons like that that have been learned already. And, interestingly, bitswap itself secretly has a big, nice "here's how it works" for a bunch of things.
F
So it doesn't, yes, but also it can be kept up to date. People are committed to it, and that one is way more up to date than what's in the spec, which is from, I think, like, David put it in there five years ago. It's not current.
A
Sorry, I'm saying, like, I think the spec itself should actually have some notes around this stuff. Like, there should be the corresponding "this is the protocol", but then there should also be "this is how you should implement sessions, this is how you should do these things, this is how the protocol should be used", where it's not MUST, but it's still SHOULD. I just want to say that just because something may be an implementation detail for.
E
F
A
Yeah, I think this is a little more than just that. It's very much, like: and here are all these weird things, we give you this information, here's how you actually use it. Because from the spec, it's like: well, you have these want-have and want-block flags, and you have this extra information telling you how much data someone has queued for you, but there's no idea of how you should actually use those extra bits of information.
B
F
B
C
F
Also, that drove me crazy when I was working on bitswap. I really wanted to be like: oh, how does it work? I want to know what they're doing, and it's nowhere. Like, you can learn about their little choking algorithm, where they turn peers on and off, but it's probably five years out of date, you know? And anyway, I think that, wherever it lives, it doesn't need to be in a spec, but just capturing the lessons learned over many years of development of this protocol is useful.
F
F
A
A
The core takeaways I have from this are, like: yeah, okay, we need a better spec that, one, fully specifies the protocol, and two, specifies, in a separate document or wherever, all the extensions and how they're supposed to work. We also, like we were saying before, you know, as we become more confident, do need some protocol changes. Basically, the main ones are:
A
we need some way to order, like, wants between different streams, so some kind of number, at least within the stream itself, on the full diffs. We might want to consider some way to more efficiently cancel, for example a bitfield, something like that. We definitely need some kind of backpressure on want-list sizes, so that I can have some default, like: no,
A
every peer gets, like, 10 wants, and then I can decide where I want to go from there. And then we also need some kind of peer exchange, where servers can say: hey, here's the thing you're looking for, and, by the way, here are some related peers that you should probably, like, add to that session.
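The first takeaway, ordering wants across different streams, could be sketched like this in Go. The message shape and names are hypothetical: each want update carries a monotonically increasing sequence number, and the server only applies an update for a CID if it is newer than what it has already seen, so a late-arriving old want can't resurrect a cancelled one.

```go
package main

import "fmt"

// wantUpdate is a hypothetical wire message: a want or cancel for one CID,
// stamped with a sender-side sequence number.
type wantUpdate struct {
	seq    uint64
	cid    string
	cancel bool
}

// wantState is the server's view: what is wanted, and the newest sequence
// number seen per CID.
type wantState struct {
	wanted  map[string]bool
	lastSeq map[string]uint64
}

func newWantState() *wantState {
	return &wantState{wanted: map[string]bool{}, lastSeq: map[string]uint64{}}
}

func (s *wantState) apply(u wantUpdate) {
	if u.seq <= s.lastSeq[u.cid] {
		return // stale update from an older stream: drop it
	}
	s.lastSeq[u.cid] = u.seq
	s.wanted[u.cid] = !u.cancel
}

func main() {
	s := newWantState()
	s.apply(wantUpdate{seq: 2, cid: "bafy-x", cancel: true}) // cancel arrives first
	s.apply(wantUpdate{seq: 1, cid: "bafy-x"})               // older want arrives late
	fmt.Println(s.wanted["bafy-x"]) // false: the newer cancel wins
}
```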