From YouTube: Move The Bytes Working Group Meeting 4
A
Hello, everybody, and welcome to meeting four of the Move The Bytes Working Group: a bi-weekly series of meetings, spread across a couple of months, dedicated to building better data transfer protocols.
A
We've got a packed schedule today, so lots to get through. I'm going to quickly walk through the agenda first. We'll start with some revised purpose for the group, based on gathering feedback and notes from other folks; I'm going to take, ideally, less than 10 minutes to talk through some of that. Then Guillaume is going to talk to us a little bit about his findings from his research on Bitswap measurements, which I believe are now finalized; I have had a chance to read through them and they're super exciting. Then we have Dean, who's going to talk broadly about where a data transfer protocol starts and stops when we're talking about incremental verification and other things. I'm excited to hear Dean's talk; we haven't had a chance to coordinate yet. And then we'll have open discussion.
A
So
moving
right
along
a
couple
of
lessons,
I've
gathered
from
a
number
of
folks
I've
sought
feedback
from
a
bunch
of
you
over
the
sort
of
before
the
break
started
and
I
want
to
sort
of
just
revisit
what
our
original
purpose
for
this
group
was,
which
was
to
ship
a
data
transfer
protocol
that
can
replace
pit
swap
in
q1
2023,
like
the
ipfs
account
tweeted.
A
This
and
I
actually
had
people
reaching
out
to
me
saying
like
wow,
that's
pretty
ambitious
and,
like
the
timing
on
that's
really
well
I'm
like
yes,
we've
gathered
feedback
on
that
and
we're
refining
the
purpose
of
that.
But
to
give
you
a
sense,
our
original
timeline,
which
we
are
still
kind
of
following
we're
on
meeting
four
Hannah,
is
scheduled
to
speak
in
our
next
meeting
on
the
original
timeline.
A
That
is
I
think
we
should
revisit
that
and
I
think
we
should
revisit
what
sort
of
what
the
group's
purpose
is
and
make
sure
that
it's
actually
in
line
with
what
I've
been
talking
with
the
folks
about
what
is
useful
about
this
working
group
and
how
we
can
sort
of
still
accomplish
a
lot
of
these
high-level
goals,
but
maybe
not
with
as
much
like
Hey
we're
all
gonna
agree
and
do
this
and
sort
of
move
and
coordinate
really
in
this
sort
of
Titan
and
direct
way.
A
So
a
couple
of
things
that
we've
also
I've
also
want
to
share
a
couple
of
things
that
we've
learned
sort
of
steering
this
group
as
the
number
zero
team.
Specifically
we've
learned
that
the
actual
shared
infrastructure
of
this
project
has
been
far
too
expensive
to
maintain.
A
It's
we've
had
a
couple
of
challenges
getting
folks
to
contribute
outside
test
plans
for
the
test
ground
structure
we
set
up,
despite
writing
sort
of
read
me
on
how
to
do
that
and
getting
everything
set
up
in
place
to
sort
of
accept
new
plans
and
then
the
actual
maintaining
of
test
ground
itself
has
been
a
really
big
and
hard
for
us.
The
other
thing
that
it
really
competes
with
is
something
that
Juan
talked
on
talked
about
in
the
our
last
meeting.
A
For
us
as
a
team,
we're
really
fired
up
about
this
experimental,
Loop
and
I.
Think
that
one
thing
I
want
to
call
out
is
a
distinction
between
the
test,
ground
infrastructure.
We
have
been
maintaining
for
this
group
and
an
experimental
Loop
and
the
distinction
being.
If
you
look
at
all
of
these
really
fantastic
screen
grabs.
This
is
all
stuff
that
is
tightly
tied
to
a
single
project
and
running,
usually
in
CI,
where
you
actually
have
continuous
feedback
for
engineers
as
they
are
working
on
their
own
protocols.
A
And
if
we
contrast
that,
with
what
we've
been
doing
with
the
test
ground
plans
in
this
working
group,
that
has
really
been
focused
at
infrastructure
that
spans
across
projects
right.
I.
Think
that
if
we
look
at
our
core
value
proposition
of
what
some
of
these
initial
metrics
reports
produce
was
something
like
this.
That
gave
you
this
really
sort
of
like
high
level
pretty
ambiguous,
but
at
least
like
somewhat
meaningful
Apples
to
Apples
comparison
of
a
bunch
of
different
data
transfer
protocols
and
I.
A
Think
that
I
think
there's
a
way
to
accomplish
a
lot
of
this
without
building
that
shared
infrastructure
and
I.
Think
the
fundamental
lesson
that
I'm
taking
away
from
last
year
was
that
this
experimental
Loop
should
take
rest
in
for
all
of
the
teams
and
that
building
this
really
nice
set
of
instrumentation
and
feedback
into
your
own
project
is
probably
a
better
use
of
everybody
else's
time.
A
And
so
what
I'd
like
to
propose
is
that
we
repurpose
this
group
a
little
bit
and
while
this
might
seem
like
a
big
sort
of
step
down
away
from
like
let's
ship
something
towards,
let's
compare
notes
and
be
friends,
I
think
it's
still
actually
quite
meaningful
and
useful
to
lean
into
the
things
that
have
been
really
working
from
this
working
group.
We've
been
getting
some
really
great
presentations
on
findings
and
research.
A
I'm
particularly
excited
to
see
the
presentations
today,
I
think
he
has
some
really
insightful
pieces
to
present
and
that
getting
that
information
out
to
folks
working
on
data
transfer
protocols
is
super
useful.
Having
sort
of
like
trying
to
get
everybody
cajole
folks
into
pring
in
their
data
transfer
protocol
into
some
shared
highly
expensive
to
maintain
test
ground
resources
is
really
not
working
for
us
as
a
group,
and
so
I
think.
The
purpose
of
this
group
is
actually
not
about
shared
infrastructure,
but
instead
about
making
our
the
stuff
we
make
comparable
right.
A
If
you
look
at
I,
think
a
great
example
for
this
is
the
world
of
hash
function:
comparisons
where
you
can
pretty
easily
Benchmark
hey.
This
is
how
Blake
3
compares
to
shot
two
and
like
we
have
lots
of
arguments,
but
at
least
the
way
that
that
works
in
practice
is.
There
is
a
shared
set
of
metrics
that
everybody
uses
and
everyone
publishes
their
own
benchmarks,
and
you
have
a
meaningful
way
to
sort
of
compare
different
things
by
looking
at
that
shared
nomenclature
and
that
shared
set
of
approach
to
measurement
and
I.
A
I think that's a more useful approach to the way we should be working moving forward. It's a lot to talk about here, but I think it's probably the most feasible way forward, and at the end of the day it does mean we can still produce a protocol that can replace Bitswap.
A
But
this
group
has
to
like
help
us
to
Define
what
a
good
replacement
looks
like
in
particular
reading
through
some
of
your
talk,
your
conclusions
around
scaling
of
the
network
and
messages
per
peer
and
messages
per
CID
I
think
we
have
some
like
really
interesting
properties
that
we
need
to
look
at,
but
I
think
this.
This
group
can
really
do
a
lot
of
meaningful
and
impactful
work
around
that
without
necessarily
needing
to
develop
the
overhead
that
produces
this
like
charts
every
week
and
deadlines
every
week,
every
other
two
every
two
weeks.
A
So
maybe
that's
a
lot,
but
that's
based
on
sort
of
like
a
lot
of
individual
conversations.
I've
had
with
a
number
of
you
and
I
guess,
we'll
take
Hannah
I'll,
take
a
question
from
you
and
then
maybe
we'll
move
on.
Try
to
move
a
lot
of
this
conversation
to
the
discussion
section
at
the
end,
but
Hannah
take
it
away.
B
Yeah, I think all those things sound 100% good and realistic compared to the original Q1 timeline. I'm just wondering: why don't we continue the timeline after that, keeping the goal of getting to the original goal? Because there is a hard conversation that comes around, like, "okay, let's all agree and start adopting," and it would be a shame to completely lose that. I think it's absolutely realistic that we are not in a position to make that decision in two weeks, but, yeah, I'm just saying.
B
I think all of that sounds good, and then when March comes around, I hope we don't disband; I hope we continue until we hit our original goal. I don't know, that's for discussion amongst the group. I just think that, with all the things we would have in March, we will have the grounds from which to have a very difficult discussion, which is a necessary discussion, in my opinion. Yeah.
A
I
think
that's
great
and
I
I
want
to
sort
of
use,
leave
that
as
food
for
thought
for
others
and
then
let's
transition
now
to
ghee
who's
going
to
give
us
a
really
nice
tee
up
of
some
core
insights
that
have
now
sort
of
calcified
sound,
good
yeah.
Let
me
share
my
screen.
Hopefully
you
can
do
that.
Okay,
cool.
C
Okay, so hi. I'm working with ProbeLab, mostly on measuring IPFS components, and today I will present the results from our study on Bitswap. In particular, I want to point out that Bitswap has two roles: it does the data transfer, which is the reason we're here, but it also does some content discovery, some content routing, and that's mostly what I will be discussing here. So I will be discussing the effectiveness of the Bitswap discovery process.
C
All
right,
sorry,
so
the
motivation
to
never
beats
up
was
that
we
wanted
to
know
whether
bit
swap
was
efficient
to
discover
content
or
not,
and
it
is
so
in
this
regard.
C
It
is
a
bit
of
concurrent
to
the
DHT,
because
both
of
them
tried
to
find
the
content,
and
then
anyway,
if
you
get
an
answer
from
the
DHL
from
bit,
swap
you're
going
to
fetch
your
content
from
the
top,
and
we
also
so
we
try
to
get
rid
of
ipfs
magic
number,
which
has,
which
are
the
concept
number
that
are
shipped
in
ipfs
and
our
constant,
and
we
didn't
really
test
the
value.
So
here
the
way
bit
swap
work
is
first
when
the
user
Make
a
request,
for
instance,
apfs
get.
C
It will first broadcast the request to all of its connected peers, and if Bitswap didn't hear back from any of these peers within one second, it's going to make a DHT lookup in order to find where the content is located. So the motivation is to test whether this value of one second is appropriate for Bitswap, or whether we should, for instance, decrease it or remove it at once.
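The flow just described, broadcast first and fall back to the DHT after a fixed delay, can be sketched roughly like this. This is an illustrative model, not Bitswap's actual implementation; the function names and the default delay are assumptions mirroring the talk:

```python
import asyncio

PROVIDER_SEARCH_DELAY = 1.0  # seconds to wait before also asking the DHT

async def discover(cid, connected_peers, ask_peer, dht_find_providers,
                   delay=PROVIDER_SEARCH_DELAY):
    """Broadcast the want to every connected peer; if no peer has answered
    positively within `delay`, fall back to a DHT provider lookup."""
    async def broadcast():
        replies = await asyncio.gather(*(ask_peer(p, cid) for p in connected_peers))
        return [p for p, has in zip(connected_peers, replies) if has]

    task = asyncio.create_task(broadcast())
    done, _ = await asyncio.wait({task}, timeout=delay)
    if done and task.result():
        return task.result()[0], "bitswap"     # a connected peer had the block
    task.cancel()
    providers = await dht_find_providers(cid)  # timed out, or nobody had it
    return (providers[0], "dht") if providers else (None, None)
```

The one-second constant here plays the role of the provider-search delay the talk is questioning; setting `delay` to zero would model starting the DHT lookup immediately.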
C
This gave us a list of CIDs. We modified Bitswap to give it 15 seconds to find the block instead of one second, and it fetches only a single block, which means that if I get a root CID, I will not try to resolve any of the content inside this block.
C
We
also
prevented,
beats
what
to
use
the
DHT.
So
we
wanted
to
observe
only
bit
Swap
and
not
Beat,
Swap
and
DHT
command
and
in
the
case
that
bitter
failed
to
discover
the
content.
We
want
to
verify
that
the
content
is
available
in
the
ipfs
network,
so
we
did
a
DHT
walk
in
other
verify
whether
the
content
was
accessible,
and
so
we
run
all
this
from
a
VM
in
Central
Europe,
which
can
be
relevant
for
latencies
measurements.
C
So
the
results
we
have
is
surprisingly,
we
got
a
very
good
Discovery
success
rate,
which
means
that
so
within
15
seconds,
over
98
of
the
cids
we
requested
return
positive
so,
which
means
that
the
request
we
broadcasted
were
almost
all
positive.
So
at
least
one
of
the
theoretically
connected
peers
had
the
content,
and
we
also
learned
that
it's
very
heavy.
C
Now
trying
to
explain
these
results
out
of
roughly
50
50
000
request,
we
get
that
most
of
the
content
was
was
served
by
a
small
number
of
providers
by
looking
at
this
number,
for
instance,
we
got
that
the
top
20
peers,
so
the
top
20
providers
are
providing
75
percent
of
all
the
blocks
that
are
served
and
we
double
check
that
the
list
of
CID
is
representative
by
taking
the
logs
from
the
ipfs
gateways
and
we
got
similar
performances.
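A back-of-the-envelope way to reproduce a concentration number like that from a request log. The log format here is invented; the computation is the point:

```python
from collections import Counter

def top_provider_share(served_by, k=20):
    """Fraction of all served blocks accounted for by the k busiest
    providers. `served_by` holds one provider ID per served block."""
    if not served_by:
        return 0.0
    counts = Counter(served_by)
    return sum(n for _, n in counts.most_common(k)) / len(served_by)

# A toy log with the same flavor as the study's finding: a handful of
# providers serving most of the requests, and a long tail.
log = ["heavy-1"] * 60 + ["heavy-2"] * 15 + ["tail-%d" % i for i in range(25)]
share = top_provider_share(log, k=2)  # -> 0.75
```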
C
So
we
expect
that
there
are
a
few
peers
in
the
network
that
really
serve
a
lot
of
content
and
most
of
the
peer
are
saved
are
saying
a
few
or
no
content
or
the
content
isn't
widely
accessed
and
out
of
the
10
top
providers.
We
get
that
six
of
them
belong
to
nft.,
storage
and
the
rest.
We
don't
know
maybe
nft
or
web
trailer
storage,
but
I
couldn't
get
yeah.
I
couldn't
match
the
period.
C
Now.
Looking
at
the
latencies,
we
had
that
most
of
the
the
successful
bitspark
requests
were
coming
back
within
one
second
or
even
smaller,
so
yeah.
Here
we
can
see
at
the
latency
for
both
Discovery
and
fetch,
which
means
that
it's
in
the
worst,
you
know
in
the
best
case,
to
rtt,
because
first,
the
I'm
gonna
send
a
one-half.
The
remote
period
is
gonna
reply
with
the
Hub,
then
I
will
send
a
one
block
and
help
you
will
give
me
the
block.
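Summing that exchange up gives a two-round-trip floor, which is why a node close to the content sees on the order of 200 ms end to end. A trivial model (the message names are Bitswap 1.2's; the arithmetic is the point):

```python
def best_case_latency(rtt, block_time=0.0):
    """Best-case discover+fetch time for one block over Bitswap 1.2:
    WANT-HAVE -> HAVE is one round trip, WANT-BLOCK -> BLOCK is another."""
    discovery = rtt              # WANT-HAVE out, HAVE back
    fetch = rtt + block_time     # WANT-BLOCK out, BLOCK (plus transfer) back
    return discovery + fetch
```

With a 100 ms RTT this floors at 200 ms, consistent with the mode of the measured distribution.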
C
And
that's
to
rtt,
so
maybe
so
the
the
figure
would
be
shifted
to
the
right
if
the
the
location
of
the
tassels
was
in
a
remote
place
like
it
was
in
Central
Europe,
so
I
guess
close
to
the
content.
Maybe
in
North
America
would
be
the
same,
but
in
New
Zealand
I
would
expect
to
see
a
shift
from
a
couple
hundred
millisecond
to
the
right.
C
So
the
main
takeaway
from
this
study
is
that
we
measured
that
bit
swap
is
fast
and
accurate,
which
means
that
we
find
most
of
the
content
during
this
first
second
of
the
swap
broadcast,
and
it
is
very
fast
because
it's
coming
back
in
200
milliseconds
in
most
of
the
cases.
But,
however,
it
is
inefficient
because
for
one
block
that
I
want
to
fetch,
I
will
send
more
than
I
will
solicitate
more
than
800
peers,
which
is
really
a
lot
compared
to
the
DHT.
C
The
dhcs
at
most
20
periods,
I,
would
say
and,
however,
the
the
discovery
content
Discovery
for
bit
swap
doesn't
scale
because
in
order
to
keep
the
same
accuracy.
So
if
we
still
want
to
have
90
percent
98
success
rate,
when
discovering
content
on
bit
swap
when
the
network
goes
10x,
then
we
need
to
have
10x
open
connection
to
be
able
to
keep
the
same
Discovery
success
rate.
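That scaling claim follows from a simple coverage model. Assuming (and this is an assumption, not a measurement) that the providers of a CID are spread uniformly over the network, the chance that a broadcast over c open connections reaches at least one of m providers among N peers behaves like this:

```python
def discovery_success(num_peers, num_providers, num_connections):
    """P(at least one of our open connections is a provider), sampling
    peers without replacement. A toy model, not a measured quantity."""
    miss = 1.0
    for i in range(num_connections):
        miss *= (num_peers - num_providers - i) / (num_peers - i)
    return 1.0 - miss

# With ~800 connections in a 10,000-peer network, 50 providers give a
# ~98% hit rate. Grow the network 10x and the same 800 connections are
# nowhere near enough, while 10x the connections restores the rate.
small = discovery_success(10_000, 50, 800)
big_same = discovery_success(100_000, 50, 800)
big_10x = discovery_success(100_000, 50, 8_000)
```

The numbers (800 peers solicited, 98%) come from the study; the uniform-placement model is only there to show why connections must grow linearly with the network.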
C
And
so
that's
why
it
doesn't
scale
because
the
we
have
a
limit
on
the
on
the
Connection
Manager
and
also
something
worth
noting
is
that
we
have
a
small
number
of
peers
that
serve
a
lot
of
content.
So
maybe
there
are
some
easy
optimization
to
just
try
to
send
the
broadcast
to
the
top
provider
instead
of
flooding
the
whole
network.
C
So
that's
yeah
some
direction
and
concerning
the
once
again
yeah
latency
delay.
C
I
would
say
that
setting
it
to
zero,
so
removing
it
that
once
would
make
it
better
because
the
let's
say
Network
overhead
would
be
only
free
message,
because
it's
the
concurrency
parameter
of
the
THC
so
going
from
like
now
we
have
like
1700
and
if
we
had
just,
if
we
add
the
three
messages,
it
isn't
a
bigger
ahead
and
it
would
make
the
tail
light
and
see
one
second
faster,
so
for
in
the
case
bit
swap
doesn't
hit.
C
In that case, resolution will be one second faster, so it's not expensive to pay, but the improvement is not huge. All of the references are in this report, so you can follow the QR code or just read through the report.
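The cost side of that argument is tiny, as a quick check shows. The 1,700-message figure and the DHT concurrency parameter of 3 are the numbers from the talk:

```python
def relative_overhead(broadcast_msgs=1700, dht_alpha=3):
    """Extra messages from always starting the DHT lookup immediately,
    relative to the existing Bitswap broadcast volume."""
    return dht_alpha / broadcast_msgs

# relative_overhead() is under 0.2%: three more messages on top of ~1,700,
# in exchange for a one-second-faster tail on Bitswap misses.
```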
Another point, more of an open question: do we really want to keep things as they are? At the moment, Bitswap has both data transfer and content routing, or let's say content discovery, bundled together, and I'm not sure whether it's a good idea to keep them together. Sure, it's fast, but I'm not sure whether it will still scale at some point, or whether we should optimize content routing in one way and data transfer in a different way, in separate bundles, instead of having Bitswap doing both at once.
A
Fantastic,
so
many
questions
have
appeared
in
the
chat
and
so
I
loved
it.
So,
instead
of
asking
you
answer
them
out
loud,
what
I'd
love
to
do
is:
let's
move.
Let's
keep
that
conversation
happening
in
the
chat,
we'll
capture
a
lot
of
that
in
the
notes
and
bring
it
up
in
discussion.
I
want
to
keep
moving
right
along
if
that's
cool.
This
is
a
super
great
talk,
but
I
think
we
should
do
a
d
next
and
then
we
can
move
to
open
discussion
a
Dean.
Are
you
ready
to
roll.
D
For some definition of ready to roll. All right, let me figure out how to present.
D
All right, everybody. This is a little hastily put together, but overall I had some thoughts on data transfer that are maybe a little different from what we've generally been talking about, and one of the things pushing me along this route was thinking about block limits and things like that. So let's just get to it, so I don't eat up all the time here.
D
So
all
right
start
with
we'll
just
take
the
sort
of
a
Dean
opinionated
view
of
data
transfer.
So
you
see
where
I'm
coming
from
when
I
say
crazy
things
a
little
bit
about
what
we
have
so
far
and
talk
about
large
blocks
and
then
at
the
end,
implementations
are
not
protocols
and
that
has
good
and
bad
pieces
associated
with
it
all
right.
D
So
data
transfer
thanks
ipfs
is
peer-to-peer
tooling
it's
telling
for
the
peer-to-peer
web
that
uses
content
addressing.
So
this
is
not
how
do
I
you
make
Akamai.
Do
things
better
with
you
know
using
hashes
for
caching,
it's
not.
How
do
I
do
SUB
resource
Integrity
in
browsers
when
I
download
the
thing
from
Twitter?
This
is
not
how
I
do
friend
to
friend
networks,
which
is
git.
D
It's
not
peer-to-peer
distribution
of
like
the
one,
true
ultimate
file
format,
which
is
what
BitTorrent
does
and
then
this
one,
which
is
like
a
little
spooky
and
controversial
but
came
up
in
Iceland
last
year,
was
basically
that
nope
nobody
seems
to
be
interested
in
defining
ipfs
itself
as
a
if
you
have
exactly
A
and
B
and
C
wire
protocols,
and
you
know,
data
decoding
modules.
Then
you
have
ipfs
some
people
like
that.
D
Some
people
don't,
but
that
seems
to
be
where
the
impetus
is
after
Iceland,
but
we
can
use
ipfs
tooling
outside
the
primary
domain.
So,
like
there's,
no
reason
you
couldn't
use
ipfs
to
do,
git
things
or
BitTorrent
things
or
Docker
things
or
CDN
things.
D
We're
providing
abstractions
right
and
sort
of,
interestingly
about
providing
abstractions
is
that
it
turns
out
no
application,
rights
and
abstraction
writes
on
an
abstraction.
It
doesn't
do
exactly
the
abstract
thing:
it
does.
What
that
application
needs
it
to
do,
and
so
people
will
sometimes
punch
through
the
abstractions
when
they
want
something
a
little
more
right.
If
you
decided
you
were
going
to
build
basically
BitTorrent
on
top
of
itfs
you'd.
D
Do
it
and
then
you'd
say
hey,
you
know
what
I
might
be
able
to
eke
out
some
data
transfer
speed
if
I
make
a
data
transfer
protocol,
that's
specific
for
what
my
data
structure
looks
like
and
then
decide.
Is
it
worth
it?
Is
it
worth
a
specific
data
transfer
protocol
for
me
or
is
like
what
I
have
good
enough
right
this?
D
This
is
going
to
happen
sort
of
everywhere
so
as
as
Brenda
started
mentioned,
I
think
making
the
best
protocol
ever
probably
not
the
thing
whether
it's
things
like
we
want
to
reuse
data
that
we've
hosted.
That's
already
hosted
with
other
protocols
like
how
websites
work
with
torrents,
where
there's
an
HTTP
endpoint
that
has
a
file,
and
you
want
to
use
that
to
host
the
bytes
and
somebody
else
hosts
the
metadata
that
lets.
You
verify
it,
but
like
someone
else
is
hosting
the
bytes,
so
this
means
HTTP
get
resource
and
get
out.
D
Bytes
is
like
a
protocol
we
probably
want.
Maybe
you
want
that's
optimized
for
your
application
or
data
structure.
Git
has
a
very
particular
application
in
mind
and
how
it
handles
both
mutability
and
Trust
right.
That
makes
it
optimized
for
what
they're
doing
and
then
they're
just
high
level
trade-offs
where
people
are
going
to
want
slightly
different
things.
D
So maybe we want a few reusable protocols that are not that hard to implement and that get the job done. And, importantly (I only do a little bit of this here, because there was only so much time in the last four hours), we want a framework for understanding the trade-offs and opinions that come into this, so that when we come down to how we want to build new protocols, as mentioned at the beginning of this session, we know what we're working with. So, backing up a little bit.
D
The
user
has
a
request.
They
give
you
there's
a
thing
that
they
want
right.
It
could
be
an
ipfs
path,
let's
see
an
xfs
thing,
because
it
could
be
some
other
ipld
thing
it
could
be.
You
know
a
torrent
file
with
a
manifest
of
blocks.
D
Content.
Routing
is
the
thing
that
helps
you
figure
out
how
to
go
from
some
piece
of
your
knowledge
of
the
request
to
get
the
data
and
the
data
transfer
actually
helps
you
fulfill
it
and
there's
both
like
a
query
component
and
a
moving
bytes
component.
That's
going
on
here
because,
like
when
I
look
at
you
know,
slash
ipfs
unixfs
file
to
xfs
directory
file,
some
interesting
things
there
most
people
when
they
punch
that
into
like
ipfs.io.
They
don't
know
whether
this
is
a
file
or
a
directory.
D
They have a more abstract version of it in their head, and what's interesting is that the fact that IPFS has thrown away a lot of information compared to, say, a .torrent file, and compressed it down to the root CID or a path, has made it easier for users to work with, and they're generally pretty happy with that.
D
The client in a peer-to-peer environment always needs to execute the query anyway, to make sure it got the right stuff. But in other scenarios, where the client doesn't have much information, it might want the server to know what's going on, so the server can help it out in executing the query as well.
D
So we have this information gap. In addition to the obvious missing information, the bytes themselves, there's a whole bunch of things that, if you knew them, would make your life better. Who has the data? Then I don't need to go to the content routing system.
D
What
data
do
I
actually
need
for?
You
know,
as
you
know,
as
opposed
to
data
I
already
have
deduplication.
What
does
it
look
like
so
I
know
which
pieces
I
want?
First,
maybe
I
want
maybe
I
know
it's
a
file.
I
know
it's
a
video
file.
I
want
to
download
the
first
couple.
I
want
to
download
the
first
10
Megs
quickly
in
a
linear
way,
and
then
I
want
to
just
download
everything
else
in
parallel
whenever
it
shows
up,
because,
by
the
time
the
buffering
catches
up
it'll
be
fine.
D
How
long
is
it
going
to
take
to
deal
with
each
of
the
peers
in
the
multi-pier
environment?
Who
should
I
ask
for
stuff
so
that
I
both
get
it
quickly,
but
without
duplicate
data
right?
These
are
all
things
like.
We
wish
we
had
and
we're
sort
of
operating
in
a
low
information
environment
and
trying
to
make
trade-offs
around
all
of
this.
D
So
some
things
we
have
so
far,
so
these
are
kind
of
like
some,
some
big
ones.
Just
because
we've
talked
about
these
a
bunch
and
I
just
wanted
to
like
Hammer
them
out
a
little
bit
block
based
requests
and
and
graph
based
requests
so
blocks
they're
nice,
the
server
doesn't
even
know
anything
you
can
implement.
It
takes
like
zero
minutes.
They
don't
need
to
know
nothing
about
ipld.
D
In
theory,
they
don't
even
need
to
know
multi-hash,
but
probably
they
should
and
if
they
have,
if
the
requester
happens
to
know
whatever
they
need
to
know
about
the
data,
then
everything's
great,
if
you
run
a.torrent,
but
you
have
like
a
DOT
torrent
file
that
has
all
the
blocks
in
it
like
you're
good
to
go.
D
But
you
know
when,
when
the
requester
has
information,
but
you
know
and
they
sort
of
wish,
they
could
convey
that
to
the
person
on
the
other
side.
So
the
other
side
knows
how
to
pre-fetch
data
or
like
send
them
back
multiple
things
at
once,
so
they
don't
have
to
only
run
the
query
on
the
local
side.
Right.
That's
when
like
bit
swap
is
like.
Oh
please,
just
just
like
kill
me.
I,
don't
want
to
download
a
blockchain
one
block
by
one
block
for
all
of
eternity.
D
The
blockchain
has
a
finality
of
like
half
a
second
and
the
latency
to
slump
here.
That
has
the
data
is
one
second
I'm
never
going
to
catch
up
like
this
is.
This
is
just
sad
right.
Some
examples
are
bit
slop
or,
like
the
ipfs,
you
know,
you
know,
format
equals
block.
You
know
thing
on
the
Gateway
subgraph
descriptors,
there's
more
information.
More
information
is
good.
We
already
went
through
that.
It
should
just
be
an
optimization
on
top
of
block
based
protocols,
because
you
just
have
more
information
slightly
awkward.
D
It
is
an
optimization
over
say,
like
ipbs
block,
yeah
format
equals
block,
but
a
thing
that,
like
that
swap
does
which
most
I
don't
think
any
of
the
other
subgraph
descriptor
protocols
that
we've
described
do,
although
maybe,
but
they
easily
could,
is
bundling
of
requests
which
has
an
up
it's
basically
a
granularity
thing
like
cash,
granularity
and
overhead,
and
so,
if
you
wanted
to
say,
use
graph
sync
to
make
single
block
requests
and
just
basically
treat
it
as
an
optimization
on
top
of
this
swap
at
least
1.1,
if
it
doesn't
have
have
hat,
won't,
have
requests
you
can
Theory
could
do
that
except
you
can't,
because
now
you
have
all
this
overhead
because
you
have
no
bundling.
D
So
that's
like
a
sep.
It's
like
that's
a
totally
separate
topic
from
subgraph
and
block
is
just
like
granularity
of
request
and
caching,
and
all
of
that
downsized
servers
need
to
be
smarter.
They
need
to
understand,
however,
it
is
you're
describing
your
subgraph.
Is
it
a
path?
Is
a
selector?
Is
it
a
something
in
between?
Is
it
whatever
and
of
course,
everyone
hates
every
subgraph,
descriptor
format
that's
been
proposed,
which
leads
to
lots
of
like
oh
well?
D
It
should
have
been
this
way
because
you're
trying
to
describe
like
how
do
I
convey
enough
information
that
isn't
all
the
information
right.
There's
like
there's
like
something
in
there.
That's
like
just
trying
to
get
like
the
the
best
like
bang
for
your
buck
here
all
right.
This
is
you
know
graph,
sync
getting
car
files.
You
know
crack
insane
car
mirror
Etc.
D
You
know
we
have
other
things:
how
smart
and
dumb
should
the
servers,
be
it
soft's
very
dumb,
most
other
things
are
smarter.
How
much
work
should
the
servers
do
yeah?
D
You
could
compare
say
like
crack
and
sync
with
something
where
you
just
ask
for
a
manifest
of
all
the
blocks
in
the
query
and
then
and
then
exit
and
then
run
dead
swap,
and
the
difference
would
be
that
crack
and
sync
would
require
less
upload
bandwidth
from
the
downloader
in
exchange
for
more
execution,
time
and
disclosing
time
on
everyone,
who's
being
requested
from
right,
so
you're
sort
of
Shifting
work
from
the
client
to
the
server,
and
there
are
reasons
why
that
may
or
may
not
be
a
good
thing
or
a
bad
thing,
depending
on
your
scenario,
right
you're,
like
deciding
who's,
doing
more
of
the
work,
whether
this
thing
should
be
stateless
or
it
can
have
sessions
for
those
back
and
forth
right.
D
Sometimes
sessions
are
very
helpful
in
terms
of
being
able
to
like
work
through.
You
know,
accumulate
information
over
time
and
some
people
are
like
I
want.
My
state
list
thing.
Please
don't
make
me,
have
any
sessions
inside
of
my
service
work
inside
of
my
cloudflare
worker,
like
don't
do
it
to
me?
Please
right
whether
these
things
should
be
like
a
composable
protocols.
You
you
use
one
and
then
another
or
they're
like
no
you
this
one
protocol
solves
all
the
problems
for
you
right.
D
You
know,
composable
example
would
be
like
I
fetch,
a
manifest
and
I
fetch
blocks.
An
encompassing
one
would
be
like
I
hand
you
a
graph
descriptor
and
you
hand
me
the
whole
graph,
should
it
be
extensible
or
fully
specified
I've
seen
this
come
up
a
bunch
in
in
lip
P2P,
but
it
happens
here
too,
which
is
like.
Do
we
want?
Where
do
things
plug
in?
Should
they
be
allowed
to
be
plugged
in
I?
D
It's one of those things you've got to watch out for: make sure that when you're sending back errors, the errors are workable with, or that you can do content negotiation. In this picture, with content routing, there's this dead zone between steps two and three that nobody likes to talk about.
Because that way we can hand content routing to whoever works on the DHT or indexers ("you go figure that out"), and data transfer can be handed to whoever works on data transfer protocols ("you do that"). But there's this thing in the middle, which is: I got back a list of 100 peers that have stuff; who do I ask? That's the missing piece in the middle, and you have to worry about it if you're like, "oh, I thought you had my stuff, but you don't have all the codecs I need, because I just came up with a new one yesterday." And then the granularity question, which I just mentioned. Okay.
D
So this brings me to peer-to-peer transfer of large blocks. I am going to assume that many of you have seen my prior talk about this and how to do it safely, but I'm happy to answer questions about it. Short version: we have block limits. IPFS does peer-to-peer transfer and, as a result, it needs spam protection; if you only did friend-to-friend transfer, you wouldn't need that. These attacks do happen (there are some links there); it happened to the BitTorrent network.
D
They
had
some
hacks
to
get
around
it
because
the
attackers
weren't
very
sophisticated
in
V2
what
they
did
is
they
made
the
block
size
effective,
like
our
version
of
the
block
size,
effectively,
16k
decreasing
it
from
like
one
Meg
or
up
to
16
Megs
right,
so
they
actually
went
for
a
much
smaller
limit
than
anything
we
would
do
or
have,
and
their
transfer
is
working
just
fine,
because
there's
a
bunch
of
different
pieces
that
go
into
this.
Whatever
magic
number
you
choose
here,
isn't
going
to
work
for
everyone
right.
D
If my data center has 100 exabytes per second of bandwidth, two megabytes is way too small. But you accept a limit like that for the purpose of getting along and making sure your data can move to another system, and it's okay: it means you're interoperable with anyone who chose a bigger number, so BitTorrent v1, Docker, whatever. But we can create incrementally verifiable mappings between, say, a SHA-256 of one 100-megabyte block and a graph of, say, 100 one-megabyte blocks, and we can check this incrementally and verifiably. I'll sort of fly through this.
D
But
basically
this
is
Merkel
down
guard
you,
you
get
blocks,
you
shove
them
through
the
hash
function
thing
over
and
over
and
over
again
you
finalize
you
get
the
output,
so
we
decide
we're
going
to
go
backwards.
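Here is a toy of the "go backwards" idea. The compression function below is a stand-in (real SHA-256 has its own compression function and padding rules); the point is that if intermediate Merkle-Damgård states are exposed, a client holding only the trusted final digest can verify a large blob one small block at a time, from the end toward the start:

```python
import hashlib

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in for a hash compression function: state' = C(state, block).
    return hashlib.sha256(state + block).digest()

def digest_with_states(blocks, iv=b"\x00" * 32):
    """Hash a sequence of blocks, keeping every intermediate state."""
    states = [iv]
    for b in blocks:
        states.append(compress(states[-1], b))
    return states[-1], states

def verify_backwards(trusted_final, tail):
    """`tail` is [(prev_state, block), ...] ordered from the last block to
    the first. Each pair is checked against the already-trusted state, so
    every block is verified incrementally as it arrives."""
    trusted = trusted_final
    for prev_state, block in tail:
        if compress(prev_state, block) != trusted:
            return False  # tampered block or state
        trusted = prev_state
    return True
```

With 100 one-megabyte blocks, each (state, block) pair is independently checkable, which is the shape of the incrementally verifiable mapping from a 100-megabyte digest down to small blocks.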
D
So surely, at the end, this means you're just going to roll out a big-block protocol where you say "give me a large block" and it gets you the block? But that's just doing what everybody else is doing. All of you who are working on data transfer, whether it's blocks or graphs or both, are already doing all the interesting work here.
D
So
why
would
I
do
this
when
instead
I
could
reuse
the
existing
protocols
so
that
I
don't
have
to
wait
for
everybody
in
the
world
to
update
before
I
feel
safe
hosting
the
bytes
I
can
just
have
my
clients
update
right,
effectively,
leverage
the
dumb
servers
thing,
but
how
will
I
bridge
that
Gap,
so
we're
gonna
we're
gonna
leverage,
extensible
content
routing
over
here
content
writing
gives
you
hints
for
how
to
get
the
data
so,
instead
of
just
saying
peers?
D
Who
are
the
peers
that
give
me
the
data
instead
say
who
are
the
peers
and
like
what
might
what
is
perhaps
the
CID
of
a
manifest
that
will
allow
me
to
incrementally
verifiably,
get
the
data
and
just
sort
of
shove
that
in
there
it's
about
the
same
size
as
a
as
a
peer
ID
so
like.
Why
not,
and
then
we
can
store
that
manifest
graph
blocks.
D
We store them next to the real data: say, when I go to my storage provider, whether that's web3.storage, a Filecoin SP, or whatever. Now I don't have to host any data in order for this to work; I can still use all the existing data hosters and all the existing data transfer protocols, and my client will just make all of this work. And you could say the same for any, we'll call it, common graph request: files in UnixFS, a BitTorrent.
D
This works for anything where you tend not to rely on IPLD queries letting you look at the data from a different perspective. When everyone really only looks at the data from one perspective, you could just put a manifest in there, and that would help you get the job done. So: implementations, they're not protocols.
D
We can upgrade the protocols by adding hints in the content routing layer. So does Bitswap now let you transfer 100-megabyte SHA-256 blocks? I would say the answer is still no, it still has a two-megabyte limit, but you can orchestrate something on top of it that does, and that thing isn't another data transfer protocol.
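A content-routing hint of the kind described could look roughly like this; the record shape and field names are assumptions for illustration, not a deployed record format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderHint:
    """Hypothetical shape of an extended content-routing record: alongside
    the peer that can serve the data, carry the CID of a manifest that
    describes how to fetch a large object incrementally verifiably."""
    peer_id: str                        # who can serve the bytes
    protocol: str                       # e.g. "bitswap" or "http"
    manifest_cid: Optional[str] = None  # roughly peer-ID-sized hint

def pick_verifiable(records):
    """Prefer providers that advertise a manifest we can verify against."""
    return [r for r in records if r.manifest_cid is not None]

records = [
    ProviderHint("peerA", "bitswap"),
    ProviderHint("peerB", "http", manifest_cid="bafy-manifest"),
]
assert [r.peer_id for r in pick_verifiable(records)] == ["peerB"]
```

Old clients can ignore the extra field and still use the peer ID, which is the backwards-compatibility property being argued for.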
D
That thing is just data that helps you. And so, in some ways, to some extent, you can just have data be the protocol, right? Which is what's happening here, putting the manifest in the content routing layer. Like, data is the way you're extending the protocol from the implementation side.
D
It could be a side protocol, right? Like, maybe I want to query the manifest of the large blocks from the peer directly, instead of, you know, relying on these content routing lookups, because injecting it into the global content routing layer is not great. Like, it's good for getting the job done when you need it to, but it's not going to work once, say, you don't have access to the global content routing layer because you're offline, right, and so on: on your LAN or whatever.
D
Sorry about that. Oh, okay. And I guess just a brief note for people who are designing protocols and thinking about large blocks: if you have a block-based protocol, I'd probably just say you could return "block too big" and tell someone to figure it out and contact you with one of the other protocols to get the data. If you're doing a subgraph protocol, and then you run across a big block...
D
This is how we deal with large blocks, and there are proposals for this. This is sort of what Blake3 and Bao do, and there's another one for some HTTP-style requests, where you basically can send, alongside the blocks, the pieces you need to verify them as you go. And so, you know, you send block, block, block... oh no, this one's too big.
D
Okay, let me slice it up into pieces that let you verify them along the way. And that's, yeah, again, not super hard to do, because all of this is sort of already done.
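The slice-and-verify idea can be sketched with a flat hash-list manifest; Blake3/Bao use a tree instead, which also permits verified random access. The chunk size and manifest shape here are assumptions for illustration:

```python
import hashlib

CHUNK = 64 * 1024  # illustrative chunk size, not a protocol constant

def build_manifest(blob):
    """Slice a too-big block into chunks and record each chunk's hash,
    so a client can verify pieces as they arrive instead of only after
    downloading the whole blob."""
    chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
    return [hashlib.sha256(c).digest() for c in chunks]

def verify_chunk(manifest, index, chunk):
    """Check one received chunk against the trusted manifest entry."""
    return hashlib.sha256(chunk).digest() == manifest[index]

blob = b"\xab" * (200 * 1024)  # a 200 KiB "big block"
manifest = build_manifest(blob)
assert verify_chunk(manifest, 0, blob[:CHUNK])    # good chunk passes
assert not verify_chunk(manifest, 1, b"garbage")  # bad chunk fails
```

The client only needs to trust the manifest (e.g. by its CID); after that, every chunk is checkable the moment it lands.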
I think I lost some slides there, but in the meanwhile I'll just show you this briefly, and I can link to a fancier demo that shows me actually downloading Docker containers, like docker pull over IPFS. But this is me downloading a, you know, SHA-256 of a 100-megabyte block from web3.storage, right. And that graph, by the way, just to come back to the implementations-are-not-protocols thing: this took like seven seconds, okay? The graph is linear; you have to go back verifiably one block at a time. So I did not do a hundred round trips with web3.storage in seven seconds. That's not what occurred. Instead, I have a downloader that's growing trust and incrementally verifying, like, growing trust exponentially. So you send me two blocks of good data...
D
Okay, it gave me four good blocks; I can now do four that are untrusted, and I can geometrically grow my trust. And if I wanted that number to be capped, or to make it lower, I can. But that ends up being a decision on my client, right? My client gets to decide how it wants to do that trade-off between trust and verifiability and protection. The base protocol doesn't have to do anything, doesn't have to be very smart, but the implementation allows me to be smart.
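The exponentially growing trust window is purely client-side policy, and a minimal sketch might look like this; the growth factor, cap, and reset rule are all assumed, since the talk deliberately leaves them to the client:

```python
class TrustWindow:
    """Client-side policy: start with a small number of unverified blocks
    in flight and grow it geometrically as batches verify. The base
    protocol stays dumb; only the downloader is smart."""
    def __init__(self, start=2, factor=2, cap=256):
        self.start, self.factor, self.cap = start, factor, cap
        self.window = start  # how many untrusted blocks we accept now

    def on_batch_verified(self):
        # Each fully verified batch lets us risk a bigger untrusted batch.
        self.window = min(self.window * self.factor, self.cap)

    def on_verification_failure(self):
        # Hypothetical reset policy: drop back to minimum trust.
        self.window = self.start

w = TrustWindow()
sizes = []
for _ in range(4):
    sizes.append(w.window)
    w.on_batch_verified()
assert sizes == [2, 4, 8, 16]  # geometric growth
w.on_verification_failure()
assert w.window == 2
```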
D
I suspect Hugo is going to want to talk about this in a couple of weeks, because he has thoughts on how to compose, you know, things like GraphSync and Bitswap and other sorts of protocols. Yeah, but implementations are not the same thing as protocols.
D
I will drop some links in the chat if people want to introspect more into what's going on or look at the code, but yeah, that is the basic idea, and hopefully it didn't take too long. I think that was 25 minutes; it's a little over.
A
E
Hi Adin, it's been a while. I was curious: using side channels for these manifests is obviously one approach; another approach would be to, like, store these manifests in various ways in the encoded data itself. Do you have any thoughts on that?
D
So if the thing I have, coming from Docker Hub, is a SHA-256 of a 100-megabyte object, I cannot add data to that object without changing the hash, so I'm out of luck. However, and I think this goes to some stuff that Juan mentioned in the last talk, and part of why I think it's important to lean on some of the IPLD tooling and have support in things like IPLD URIs and gateways:
D
there are better formats than UnixFS for representing data. Like, I swear it's true. And so when, you know, I hear things like Rüdiger mentioning, or Friedel mentioning, "hey, I would love it if we could just have the links really separate from the data, and have all of these traversable this way with minimal parsing", I said that sounds pretty cool.
D
We need to make sure you can do those things and have them work, along with, like, you know, other tooling people want to use, like gateways and other sorts of resolvers, so that you can build the data structures. Like, putting a manifest inside the data structure and then referencing the root as "manifest over here, data over here": that sounds fine, but if that's a new data structure, you both have to get people to use it and then make sure it's actually usable by them.
E
Yeah, definitely. It'd be a trade-off: like, you'd have to pre-compute the data structure, right, and know up front what the access pattern would be, yeah.
D
That's, like, deeper in the graph, but you can still fetch the manifests and get all the blocks without having to do, like, binary-tree walking down to 16-kilobyte chunks, which would be sad, right? Like, they know how their stuff's going to be accessed. It's like, okay, that's cool, all right: allow for optimizations when you know your stuff.
D
Yeah, yeah. So I think there are, like, two pieces of this, and I'm also just gonna throw in one sort of side-channel potshot while I'm here. What we use as, like, the data protocol, we don't really care about; we haven't talked about transports, really. I've seen, like, people do TCP or QUIC or whatever. You want to make these things HTTP?
D
That's also totally cool. Like, making an HTTP version of Bitswap would take, like, I don't know, an hour, two hours: you just take the protobuf and then you send the protobuf over. Or, okay, then you can bikeshed with people if you'd rather use JSON instead of a protobuf or whatever.
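A JSON-over-HTTP framing of a Bitswap-style wantlist could be sketched like this; the field names are invented for illustration and are not the real Bitswap protobuf schema:

```python
import json

def encode_want_message(cids, want_have=False):
    """Illustrative JSON framing of a Bitswap-style wantlist, suitable
    as an HTTP request body. This is the 'bikeshed' JSON variant, not
    the actual Bitswap wire format."""
    return json.dumps({
        "wantlist": [
            {"cid": c, "wantType": "have" if want_have else "block"}
            for c in cids
        ],
    }).encode()

def decode_want_message(body):
    """Parse the body back into (cid, wantType) pairs on the server side."""
    msg = json.loads(body.decode())
    return [(e["cid"], e["wantType"]) for e in msg["wantlist"]]

body = encode_want_message(["bafy-a", "bafy-b"], want_have=True)
assert decode_want_message(body) == [("bafy-a", "have"), ("bafy-b", "have")]
```

The point being made is that the transport is the easy part; the interesting questions are the trade-offs in what the request asks the server to do.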
But the key question is some of these trade-offs that we're thinking about, right? The GraphSync one is, like, an interesting trade-off.
D
I ask more people to do compute and storage and lookups to disk, in exchange for me needing less upload bandwidth when I make my requests. There are times that might be a really good trade-off; there are times that might be a really bad trade-off. So I think, yeah, we need the data transfer protocols. I think we need some, like, base...
D
...scaffolding ones that give us our biggest bang for the buck, and then we're probably gonna need to make some data structures that are better optimized for some of this, right? Because if people want to make data transfer protocols where user requests are really bad whenever they require any type of multi-block data structure traversal, they don't want any of that: no HAMTs, no HAMTs for me, I just want to walk through blocks.
D
And so one of the ways in which it's easier and harder, right: it's easier because you just make a new thing and then you launch it, but it's harder because you've got to get people to actually use it, right? Data transfer is in many ways easier because it just lives in the background, right? You can just run a separate process, a separate thing next to it.
D
So, right, so we can design them, and then we can start playing around with them, right? Like, again, I think the protocol is not equal to the implementation, right? I made a comment, you know, in Gui's presentation, which was, like, I just wanted to replace the words "Bitswap" with "go-bitswap" and, like, "IPFS" with "Kubo". Because, like, the fact that Bitswap has support for, say, want-have is, I think, not a bad thing. Like, it got added because it was useful, because there were too many...
D
Sure, that's dumb; you just have to drop the messages, and you can do that. Like, the Go client now will let you just drop those messages. So that's, like, yeah, that's a protocol failure, right? That's a fixable protocol failure that has nothing to do with blocks. It has to do with, like, Bitswap being this wonky thing which shoved two different protocols into the same name for some reason, right? But I do think, like, one of the things I saw Friedel point out, and some other folks:
D
it's not scaling like that. That's the problem. The problem is basically spam, right? So no matter what protocol you implemented, you would still have the spam that was there even before the want-have messages existed; people were just getting duplicate blocks back instead, so, like, all the spam was still there.
D
The problem is, you know, none of our protocols... this is a little bit like the libp2p stuff, where it's like, oh hey, we need resource management, right? Like, we need 429 errors. We need "you have asked me for too much stuff,
D
please go away." And the fact that there's a big client on the network, which is Kubo, which is behaving terribly by spamming everyone... we need to bump the protocol version and then say: yep, if you're on, you know, version X plus one, we'll treat you nicely; on version X or lower, you're gonna get less. I'm gonna just answer your streams less, because I know you're bad at stuff, like...
A
B
I just wanted to share the history of why Bitswap broadcasts requests to everyone, right? Like, in the chat, Adin pointed out, or I just now, I think, pointed out, that before the broadcast of want-haves to everyone, it broadcast a request for a block and got back, like, tons of duplicate blocks. But the reason it does all of that is that originally, at the time, the DHT was super duper slow, and the thought was...
B
I think it probably still does exist to some extent, but it's just an interesting bit of information. Also, like, I'm not sure I realized, back when I was working on go-bitswap, that you weren't connected to 700 peers; you were connected to, like, 30. Like, that's interesting...
D
We'll likely fix this in IPFS Desktop long before the, like, Kubo defaults change.
C
F
Jorropo here. I didn't understand that we cannot fix the spam from other nodes, and it's true that right now the main implementation is quite spammy, and if you want to talk with it, you cannot avoid that. However, in the future, what I think we could do, if someone just gets annoyed that "oh, I'm spending too much resources answering requests that I don't care about":
F
One thing you could do is have some budget logic where, when someone spams you too much, you just stop responding, and if they continue spamming too much, you, like, kill the connection; and if they open a new connection, you don't accept it, which is quite cheap. It's what fail2ban does, for example. That won't help those peers.
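The budget logic described could be sketched as a per-peer token bucket with a fail2ban-style ban; all the thresholds here are assumptions, not values from any implementation:

```python
import time

class PeerBudget:
    """Per-peer request allowance that refills over time. Overspending
    gets requests dropped; repeat offenders are banned outright, which
    is the cheap fail2ban-style 'stop accepting their connections' step."""
    def __init__(self, rate=5.0, burst=20, ban_after=3, now=time.monotonic):
        self.rate, self.burst, self.ban_after = rate, burst, ban_after
        self.now = now
        self.tokens, self.strikes, self.banned = float(burst), 0, False
        self.last = now()

    def allow(self):
        if self.banned:
            return False
        t = self.now()
        # Refill tokens in proportion to elapsed time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        self.strikes += 1
        if self.strikes >= self.ban_after:
            self.banned = True  # stop talking to this peer entirely
        return False

clock = [0.0]
b = PeerBudget(now=lambda: clock[0])
results = [b.allow() for _ in range(24)]  # burst of 24 instant requests
assert results[:20] == [True] * 20 and results[20:] == [False] * 4
assert b.banned  # enough refusals, the peer is now banned
```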
F
So those peers will not work anymore, but they will have the choice to upgrade. Basically, if you have a client so bad that the only solution people have is to not talk to you, that's a pretty good incentive for people to upgrade their client. So it's like forcing people to upgrade protocols, which is less nice to people, but I think it's a valid strategy that we'll see at some point: if people get annoyed, then they will be forced to upgrade.
A
Fantastic. Who has the last word? Anyone? Hannah, did you re-raise your hand? No? Okay, cool. Well, then I think we'll call it there, because we're three minutes over time despite starting five minutes late. But thank you very much, everyone. Let's continue this conversation in the Move the Bytes working group channel on the Filecoin Slack. I have proposed a new purpose for this working group; I'm just gonna run with it unless someone tells us otherwise, but I'd love to continue that discussion and refine it in chat.
A
I think we still have lots of good stuff coming along, as evidenced by Gui's and Adin's presentations. So join us in two weeks for Hannah, who is going to be presenting. Hannah, are you presenting a new protocol conception?