From YouTube: 🙌 IPFS Weekly Call 📡 💫 2019-09-09
Description
Dirk tells us about the Bitswap Protocol and upcoming performance improvements therein.
Also: Offline Camp got rescheduled! You can still go! 27th-30th Sept in Grants Pass, Oregon, USA. A few tickets and scholarships still available. http://offlinefirst.org/camp/
A
Hi, welcome everyone, it's the IPFS weekly call for Monday, the 9th of September 2019. I, [name unclear], will be your host today. We're gonna have a talk from Dirk on the Bitswap protocol and upcoming improvements therein, and I've just added the notes to the chat. So if there's anything else you'd like to cover today, stick it in the agenda. And if I could have a note-taker, please.
B
Yeah, so in case anyone on the call hasn't heard this spiel before: Offline Camp got rescheduled. It's coming up, September 27th to 30th in Grants Pass, Oregon, out west in the USA. So if anyone is interested in the offline-first movement, or in the connection between the web and making apps work offline, or in low-bandwidth use cases, Molly and I will be there. Looking around the screen to see who else is on here, a number of other folks from that community will be there too.
C
Yes, cool, alright! So for people who don't know me, my name is Dirk, I work at Protocol Labs, and today I'm gonna be talking about Bitswap.
C
So just a bit of background. When IPFS asks Bitswap for stuff, it typically does it in a couple of main ways. One way is that it requests a whole file full of stuff. Typically it'll request the root block; the root block contains links to child nodes, and those child nodes contain links to further child nodes. So the request pattern will be: ask for one block, ask for a few blocks, and then ask for many at once.
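The widening levels of that pattern (one block, then a few, then many) can be sketched as a breadth-first walk over an in-memory DAG. The `dag` dict and CID names below are invented for illustration; this is not the real go-bitswap code.

```python
from collections import deque

# Hypothetical in-memory DAG: each CID maps to the CIDs it links to.
dag = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1"],
    "a1": [], "a2": [], "b1": [],
}

def fetch_file(root):
    """Breadth-first fetch: ask for one block, then a few, then many."""
    order = []                 # CIDs in the order they are requested
    frontier = deque([root])
    while frontier:
        # Request every block discovered at the previous level at once.
        level = list(frontier)
        frontier.clear()
        for cid in level:
            order.append(cid)            # "request" this block
            frontier.extend(dag[cid])    # its links widen the next round
    return order

print(fetch_file("root"))  # root first, then its 2 children, then 3 grandchildren
```

Each pass requests everything the previous pass discovered, which is why the request sizes grow as the tree fans out.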
C
The other main way that it requests blocks is by asking for a path. So, for example, when it's requesting a page on a website, it'll ask for the root, and it'll check the links to find the page; once it's found the page, it'll check the links to find the HTML document, and then it'll fetch all the blocks in that document. So the request pattern here is: request one block, request one block, request one block, request many blocks.
C
So where we have a pretty high fan-out, like in Wikipedia or for a web page, typically there'll be a few nodes in the network that have all of the information, like the servers that are serving it originally, and then there'll be many nodes that have parts of the tree. So when users are browsing through it, they'll download different bits for different documents within the website.
C
Once someone responds, or a few peers respond, we add them into something called a session. Typically the same peers are going to have related blocks (a whole bunch of peers will have the same file), so a session just gives us a way of keeping all those peers together, so that we can make any subsequent requests to the same group of peers.
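As a rough sketch of that grouping (this is not the real go-bitswap session API; the class and method names are invented):

```python
class Session:
    """Keeps together peers known to have related blocks, so follow-up
    requests for the same file go to the same group of peers."""
    def __init__(self):
        self.peers = []

    def add_peer(self, peer):
        # A peer that responded to an earlier want joins the session once.
        if peer not in self.peers:
            self.peers.append(peer)

    def targets(self):
        # Subsequent wants are sent to this same group.
        return list(self.peers)

session = Session()
for responder in ["peerB", "peerC", "peerB"]:  # peers that answered the first request
    session.add_peer(responder)
print(session.targets())  # ['peerB', 'peerC']
```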
C
So we have this concept of want lists. In this example, peer A wants CID 1 and CID 2 from peer B, and peer B keeps a local copy of which CIDs peer A wants, so which blocks peer A wants. Once peer B receives the block for CID 1 (or maybe it's in its block store already), it sends that block to peer A, and it removes the want from its local copy of peer A's want list.
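That bookkeeping can be sketched like this; the ledger class and method names are invented for illustration, not the real data structures:

```python
class WantlistLedger:
    """Peer B's view: one want list tracked per remote peer, with
    entries cleared as the corresponding blocks are sent."""
    def __init__(self):
        self.wants = {}  # peer -> set of CIDs that peer wants

    def want(self, peer, cid):
        self.wants.setdefault(peer, set()).add(cid)

    def try_send(self, peer, cid, blockstore):
        """If the peer wants this CID and we hold the block, 'send' it
        and remove the want from our copy of the peer's want list."""
        if cid in self.wants.get(peer, set()) and cid in blockstore:
            self.wants[peer].discard(cid)
            return blockstore[cid]   # stands in for sending the block
        return None

ledger = WantlistLedger()            # peer B's ledger
ledger.want("peerA", "cid1")         # peer A wants CID 1 and CID 2
ledger.want("peerA", "cid2")
blockstore = {"cid1": b"block one"}  # CID 1 is already in B's block store
sent = ledger.try_send("peerA", "cid1", blockstore)
print(sent, ledger.wants["peerA"])   # b'block one' {'cid2'}
```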
C
So how do we know which CIDs to send to which peers? Basically, peers in the session are ordered by latency, and when we get a request for a bunch of blocks, we split that request across the peers. So in this example we have a split factor of three, meaning those three rows here: CIDs 0, 3, 6 and 9 will each be sent to peers A, D and G; CIDs 1, 4 and 7 to B, E and H; and CIDs 2, 5 and 8 to C and F.
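That split falls out of simple modular grouping: with a split factor of 3, CID indices 0, 3, 6, 9 form one row and go to every third peer. A minimal sketch with invented names:

```python
def split_wants(cids, peers, split_factor):
    """Group wants into `split_factor` rows and send each row to every
    split_factor-th peer (a sketch of the described distribution)."""
    groups = {}
    for i, cid in enumerate(cids):
        row = i % split_factor
        targets = tuple(peers[row::split_factor])  # every split_factor-th peer
        groups.setdefault(targets, []).append(cid)
    return groups

cids = list(range(10))                       # CIDs 0..9
peers = ["A", "B", "C", "D", "E", "F", "G", "H"]
for targets, group in split_wants(cids, peers, 3).items():
    print(group, "->", targets)
# [0, 3, 6, 9] -> ('A', 'D', 'G')
# [1, 4, 7] -> ('B', 'E', 'H')
# [2, 5, 8] -> ('C', 'F')
```

This reproduces the assignment from the slide exactly: row 0 goes to A, D, G; row 1 to B, E, H; row 2 to C, F.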
C
So, as you can see, a single want is sent to multiple peers, so in that case we may get the same block back from multiple peers. We call that a duplicate. So if we start getting a lot of duplicates, then we change the split factor. In this case we change the split factor to 5 (there are 5 rows) and, as you can see, each CID is now being sent to fewer peers: CID 0 gets sent to A and F, CID 5 gets sent to A and F, and so on.
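The adjustment could look something like the following; the thresholds and step size here are assumptions for illustration, not values from the talk:

```python
def adjust_split_factor(split_factor, received, duplicates,
                        high=0.3, low=0.1):
    """Raise the split factor when the duplicate ratio is high (so each
    CID goes to fewer peers), lower it when duplicates are rare."""
    ratio = duplicates / received if received else 0.0
    if ratio > high:
        return split_factor + 1          # fewer peers per CID, fewer dups
    if ratio < low and split_factor > 1:
        return split_factor - 1          # more redundancy when dups are rare
    return split_factor

print(adjust_split_factor(3, received=100, duplicates=40))  # 4
print(adjust_split_factor(3, received=100, duplicates=5))   # 2
print(adjust_split_factor(3, received=100, duplicates=20))  # 3 (unchanged)
```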
C
So Stephen and I got together and we came up with a few protocol extensions. This is a bit of a work in progress at the moment, but I'll tell you what our current thinking is. We've added a new message called a HAVE message. Basically this allows a peer to say "I have this block" without actually sending the block. This is particularly useful for discovery.
C
So in our example over here, peer A sends a want-have message saying, hey, do you have this CID? to peers B, C and D. Peer B says, yep, I've got it, so peer A says, okay, now give me the block. Meanwhile C and D just say, oh, I have this block as well, and then peer B actually sends the block back. In the case where the block is small enough, as an optimization, we can skip the HAVE part: peer B will just immediately send back the block.
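One way to picture a peer's reply to a want-have, including the small-block optimization (the size cutoff and function names are assumptions; a missing block is reported with the DONT_HAVE response, the other extension from this talk):

```python
SMALL_BLOCK = 1024  # hypothetical cutoff; the real threshold may differ

def answer_want_have(cid, blockstore):
    """Reply to a want-have: send tiny blocks immediately to skip a
    round-trip, otherwise just advertise HAVE (or DONT_HAVE)."""
    if cid not in blockstore:
        return ("DONT_HAVE", cid)
    block = blockstore[cid]
    if len(block) <= SMALL_BLOCK:
        return ("BLOCK", block)   # optimization: skip the HAVE exchange
    return ("HAVE", cid)

store = {"small": b"x" * 10, "big": b"x" * 100_000}
print(answer_want_have("small", store))    # block sent directly
print(answer_want_have("big", store))      # just a HAVE
print(answer_want_have("missing", store))  # DONT_HAVE
```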
C
So the second protocol extension we added was the DONT_HAVE message. Currently, in order to know whether a peer has a block, we just have to ask for the block and then either get the block back or wait for a timeout. The DONT_HAVE message allows the peer to immediately signal us and say, actually, I don't have this. So this allows us to very quickly determine the distribution of the blocks that we want. So in this example: this slide is a little complicated, but I'll try to explain it.
C
So then all the peers respond and they say: I have this block; I have this block; I don't have this block; I do have this one, but I'm not sending it yet. It just allows us to efficiently get blocks and also figure out where the blocks are, if the peer we were expecting to have it didn't have it. So, is that clear? Does anyone have any questions?
C
One more extension was to have an outstanding queue size. This is when, let's say, I have requested a bunch of blocks from a peer. The peer is sending me back a response, but not all of the blocks can fit into each message. Bitswap will actually round-robin its responses for fairness, so it won't just send all the responses to the first guy and then the next responses to the second guy; it'll send a few responses to each of the peers that is requesting, and it'll round-robin until it's finished.
C
So in this example, we've requested eight blocks from a peer. The peer is able to fit three blocks into the response message, which is of a limited size, and there are five blocks left over, so it just tells us, hey, there's five more coming. So we monitor this outstanding queue size, and if it gets too high, then we know we can back off. So this allows us to request a whole bunch of stuff without overloading the peer.
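The eight-block example can be sketched as follows; the per-message block limit stands in for the real message size limit, and the back-off threshold is an assumption, since the talk doesn't give one:

```python
MAX_BLOCKS_PER_MSG = 3   # stand-in for the real message size limit
BACKOFF_THRESHOLD = 10   # assumed; the talk doesn't specify a value

def respond(pending):
    """Peer side: fit what fits into one message and report the rest
    as the outstanding queue size."""
    msg = pending[:MAX_BLOCKS_PER_MSG]
    outstanding = len(pending) - len(msg)
    return msg, outstanding

requested = [f"cid{i}" for i in range(8)]   # we asked for 8 blocks
msg, outstanding = respond(requested)
print(len(msg), "blocks in this message,", outstanding, "more coming")

# Requester side: keep the pipe full, but back off if the peer's
# outstanding queue grows too large.
print(outstanding > BACKOFF_THRESHOLD)  # False: 5 is below the threshold
```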
C
So, as I mentioned before, we can ask for a block and, at the same time as we're asking for the block, we can also ask if the other peers have the block, just in case this peer doesn't. So let's say that in this case peer B responded and said, actually, I don't have this block, but peer A says that it does have the block. We can just directly go and ask peer A for it.
C
So, given that we want a block, how do we choose which peer we're going to send that want to? As I said, this is kind of an open question at the moment, but our current thinking is: we're going to prioritize peers that have told us they have the block; we're going to prioritize peers that we know are providers of that block; we're going to ignore peers that say they don't have the block; and then, given all those things are equal, next we order the peers by their have / don't-have ratio, meaning that if a peer has already told us that it has a bunch of the blocks we were looking for before, it's pretty likely to have the next block we're looking for. And finally, we select peers with the shortest queue. So each peer has a kind of queue of blocks, or requests I should say, that we're sending to it.
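Those heuristics can be sketched as a sort over candidate peers. The dict shape is hypothetical, and for simplicity the provider-record criterion is folded into the have/don't-have history:

```python
def rank_peers(peers):
    """Order candidate peers for a want, per the heuristics described:
    ignore DONT_HAVE responders, put HAVE responders first, break ties
    by have/don't-have history, then by the shortest request queue."""
    candidates = [p for p in peers if p["response"] != "DONT_HAVE"]
    def key(p):
        total = p["haves"] + p["dont_haves"]
        ratio = p["haves"] / total if total else 0.0
        return (p["response"] != "HAVE",  # HAVE responders first
                -ratio,                   # then best have/don't-have history
                p["queue_len"])           # then the shortest request queue
    return sorted(candidates, key=key)

peers = [
    {"name": "A", "response": "DONT_HAVE", "haves": 9, "dont_haves": 1, "queue_len": 0},
    {"name": "B", "response": "HAVE", "haves": 2, "dont_haves": 8, "queue_len": 4},
    {"name": "C", "response": "HAVE", "haves": 2, "dont_haves": 8, "queue_len": 1},
    {"name": "D", "response": None, "haves": 5, "dont_haves": 0, "queue_len": 0},
]
print([p["name"] for p in rank_peers(peers)])  # ['C', 'B', 'D']
```

Peer A is dropped outright for answering DONT_HAVE; B and C tie on history, so C wins on queue length; D never answered, so it ranks after the HAVE responders.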
C
For each peer, for each block, we're assigning a value. So here we're saying that if we send CID 3 to peer A, we're expecting, with a probability of 0.8, that peer A is going to give it back to us, because it said it had it. So in this case, two peers have said that they have these two blocks, so these have the highest potential gain, 0.8, and then, in order to choose between those two, we simply go by the order in which those blocks were requested.
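The 0.8 gain comes straight from the talk; the 0.5 default for peers that haven't said anything, and the tuple shape, are assumptions in this sketch:

```python
P_HAVE = 0.8  # expected chance a peer returns a block it said it has

def best_assignment(wants):
    """wants: list of (request_order, cid, peer, said_have) tuples.
    Pick the pair with the highest expected gain; ties between equal
    gains go to the earliest-requested block, as described."""
    def score(w):
        order, _cid, _peer, said_have = w
        gain = P_HAVE if said_have else 0.5  # assumed default gain
        return (-gain, order)
    return min(wants, key=score)

wants = [
    (0, "cid3", "peerA", True),
    (1, "cid7", "peerB", True),
    (2, "cid9", "peerC", False),
]
print(best_assignment(wants))  # earliest of the two 0.8-gain options wins
```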
C
So that's pretty much it. As I said, we're kind of working through a few design questions. I've written a proof of concept, I'm doing a bunch of benchmarking and working with Stephen on that, and there's a couple of open design questions that people might be interested in discussing on our issues board. So one design question is: given that peers are telling us how backed up they are, how much stuff they have to send to us.
C
How exactly do we vary the size of, or the amount of, wants that we're sending to them? That's an open question at the moment. And secondly, how do we vary the number of want-blocks versus want-haves? So here, should we send a want-block to both peers B and C, or a want-block to one and just a want-have to the other?
C
So it looks good. I don't want to say too much more, because I haven't implemented all parts of it, so I don't want to state any solid facts before we've done a full implementation. I don't want to say it's a lot better and then discover that, actually, once we've completed the implementation, there's a couple of things that make it a little bit slower. But in summary, for particular use cases it works a lot better, for example when the blocks are dispersed across a lot of different peers.
E
So you mentioned that you changed it: right now you're picking a peer based on latency, and that was changed to throughput, right? So what was the... I mean, have you thought of having both of them and assigning some weight? Because latency is also important. I mean, it's difficult to say which one is more important, basically.
C
So it's quite a tricky problem to measure exactly what the bandwidth is, and even measuring latency is actually quite tricky a lot of the time, because it varies. So what we're trying to do is react as well as possible to existing network conditions, which can vary over time. If we keep the pipe full, then we can do that without really having to do too much calculation or having to think about it too much; it sort of happens organically. Does that make sense? Stephen?
F
We're not optimizing latency for specific peers; it's more that we are optimizing for our own throughput, and then toggling the only things we can toggle. In the past, what we were trying to do is say, this peer responded the fastest, but that didn't actually mean that they were the fastest overall. Like, at the beginning a peer could appear to be returning blocks really quickly, but then they could bog down, because they have a larger backlog or something like that.
E
Yeah, yeah, it does make sense. It depends a little bit on the kind of application that runs on top. I mean, sometimes you have a big file, and this makes perfect sense; but then if you're delivering video or something, then the opposite might be true. I mean, it's the classic problem of mice and elephants.
F
To call it max... I'm just trying to find a way to keep a peer as busy sending us data as possible, so that will give me the maximum throughput. But it's not, yeah, you're right: if you had a usage pattern where you open a session and then you just ask for a block and wait a bit, ask for a block and wait a bit, then you really want the peer that returns the block the fastest.