From YouTube: Move the Bytes Working Group Meeting 6
A: Hello, hello, everybody. Welcome, welcome to meeting six of the Move the Bytes working group, where we talk about moving bytes: we figure out how to move those bytes, we measure the bytes that we're moving, and so on and so forth. Today we are on our sixth meeting. We have a pretty light agenda. Housekeeping is very easy. We've settled into a really nice cadence of folks using the time to present exciting upcoming protocols. We have a continuing discussion going in our Slack channel, and today we have Jorropo coming up, who's going to be talking about a new protocol — I put him down for 20 minutes. You pick the amount of time to talk, and from there it's open discussion, so this is really easy. Ideally we'll get some fun and exciting ideas and we'll have a chance to sort out challenges as we go. But with that — Jorropo, you ready to take it away? Cool, make sure you can share a screen; we'll give it a second to set up.
C: So hi, I'm going to talk about rapide, which is a thing I've talked — I described the idea at IPFS Camp 2022 — and I've made the first implementation now, so I'm going to talk about that again. For folks who don't know what the goal is: in Kubo we have the issue that download speeds are not very fast right now. We use Bitswap to exchange blocks and our client is very slow, so I'm really interested in doing a new client.
C: It's not a new protocol — unlike the things we've seen in this working group — so yeah, it's not a new protocol. It will be backwards compatible: you will be able to download from existing nodes. One of the goals is, first of all, speed.
C: Secondly, efficiency, because our current client uses lots of network and CPU to download very slowly, so we want higher efficiency. And the last one is being able to use more than one protocol, because we're probably going to find out that we don't have one awesome protocol that solves all use cases. So if we can use multiple protocols at once, that would be cool. I will start with the demo right away.
C: I have a small rapide client; it's going to download this file, which is a 50 gigabyte — gibibyte, sorry — file, from three gateways at once — no, four, sorry — and each gateway has three concurrent runners. I'm going to explain why we have multiple ones, but we can see the speed is pretty good. It varies a bit because the gateways themselves mostly use Bitswap and Kubo, which is very underwhelming: sometimes it sends you blocks fast, sometimes not — I don't really know why.
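The demo setup can be sketched as follows. This is a hypothetical structure, not the real client (which is written in Go), and the gateway URLs other than ipfs.io are placeholders of mine:

```python
# Sketch of the demo's worker layout: four gateways, three concurrent
# runners per gateway, all pulling parts of the same DAG in parallel.
from concurrent.futures import ThreadPoolExecutor

GATEWAYS = ["https://ipfs.io", "https://gateway-b.example",
            "https://gateway-c.example", "https://gateway-d.example"]
RUNNERS_PER_GATEWAY = 3

def runner(gateway: str, slot: int) -> str:
    # In the real client this would issue a CAR-over-HTTP request for the
    # subtree this worker's head points at, verify blocks as they stream
    # in, and cut the connection if the stream drifts off the wanted DAG.
    return f"{gateway}#{slot}"

with ThreadPoolExecutor(max_workers=len(GATEWAYS) * RUNNERS_PER_GATEWAY) as pool:
    futures = [pool.submit(runner, g, s)
               for g in GATEWAYS for s in range(RUNNERS_PER_GATEWAY)]
    workers = [f.result() for f in futures]

assert len(workers) == 12   # 4 gateways x 3 runners, as in the demo
```

Why multiple runners per gateway matters is explained later in the talk, in the bandwidth-delay-product discussion.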
C: But we can see that it has a very decent speed if we compare it to the Bitswap numbers from earlier. This is about the maximum I can reach with my fiber, which is great — it means I have a protocol that is sometimes limited by my fiber internet and not by the protocol itself.
C: Compared to the few megabytes per second you get otherwise — okay, I would say it works pretty well. So, how it works. This is the thing we're going to look at: the main internal representation. It's a tree which approximates the graph — there are always a few tiny differences.
C: This is really a graph, a DAG, but we're going to assume that this is a Merkle DAG and each node here is a block we want to download, and it works with a divide-and-conquer strategy. So at first we have three different connections — let's say it's three different gateways — and all of those gateways are going to start at the only block we know. For the color coding: purple is downloaded.
C: Red is a block we know about but have not downloaded yet — we just know the CID of that block — and black is a block we don't know about. So everyone starts downloading the same block. Now, let's say X gets lucky: X got the root block. The protocol I'm using right now is CAR files over HTTP, so I send one CID as a request, and that gives me a stream of blocks — all the blocks under that DAG.
C: So the dotted line means that X is still downloading the root. However, because of the way the protocol works, it's also going to give me all of the blocks underneath the root. Those protocols are called server-driven, because it's the Kubo server that is driving the download — it's the remote server that is sending me more and more blocks.
C: So now we have a small issue: Z and Y continue to download the root even though we already have it. So at this point I just kill the connections. There is a tiny efficiency issue there — it's not very efficient.
C: We can talk more later about the implications of doing that, but I just close the connection and see what happens. So what happens now is that we have some numbers, which will become important very shortly. I killed the connection of Z, and Z was downloading the root, so it's going to look within the children of the root for which one it wants to download next.
C: At this stage, from Z's view, they all have the same metric of one, so it just picks one at random. Now something interesting is going to happen: it's X's turn to choose what it's going to download — oh no, actually, sorry.
C: Yeah, whatever order — okay, so we are in this situation. We killed the connection of Y, but Y is very slow, so it's not responding yet. So now what we have is X, and X just got I. At this stage, I got explored, so it's in blue: now we know about it, and we also know about its child blocks.
C: These are the blocks under I, and again the dotted lines are because of the way the protocol works — it's server-driven, so I expect the server to send me the blocks. That's why X has dotted arrows to the leaf blocks, and the block Y is on has a metric of one even though we still haven't downloaded it. Okay, Z got more blocks — Z got its block — and now we have something interesting: X still thinks…
C: So it's going to look at the root, then look within all the children and see which ones are available, and its preference is to choose a child with the lowest metric. The point of the metric — the number next to each node — is that the higher it is, the less we want that block, that part of the DAG, to explore.
C: So when Y looks at this, it doesn't want to look at E, because there are already two people competing for E. Between competing with only X and competing with both X and Z, it's more interesting to compete with only X, because the likelihood of duplicated blocks is lower. So Y picks — it's a small optimization.
C: It's likely that if you see two blocks, one that is downloaded and one that is not — even though at this stage they have the same metric — because I is already downloaded, you would prefer to start your download from the top of the tree, closer to the root of the graph. So here A is the best one, because it's not downloaded and it has the lowest metric of one.
C: Duplicated downloads are a bad thing, and we don't actually know the order of the blocks from the protocol — sorry, the Gateway protocol doesn't tell us the order of the blocks, so we don't know in which order we are going to get them. So now X just sees that it downloaded a block that we already have, and it's going to kill the connection, because now we don't know what's happening.
C: We just know that we were downloading a bunch of blocks and some of them already got downloaded, and I want to avoid downloading duplicated data. So what X does is kill all of the dotted lines, all of the connections, and it stops downloading at this point. It looks at the root — because this was its previous head — and it's going to run the same algorithm: go down the graph, look up the metrics, and pick the best-looking block.
C: So it looks at the root, and at this point the best block is I, because I has a metric of zero. So it starts downloading I. However, we already have I, so it's not going to download it — it traverses it. I mean, it's interested in that part of the DAG, but we have not sent a network request yet. The second part, after looking at I, is to repeat the same thing: we look up the metric of all the children and we pick one.
C
They
all
have
symmetry,
so
all
no
downloaded
so
just
pick
one
randomly
and
so
yeah
Graphics
like
to
change
the
layout
of
the
graph,
so
it
moved
in
the
center
now,
and
so
it's
downloading
J.
An
important
thing
to
note
is
I
used
to
have
a
metric
of
one
you
can
think
of
the
metric
as
the
number
of
people
downloading
the
blocks
or
those
child
blocks.
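The selection rule described above — lowest metric first, and among equals prefer blocks not yet downloaded — can be sketched like this. Names here are mine, not the real Go API:

```python
# Hypothetical sketch of rapide's child-selection heuristic: each node
# tracks a "metric" (roughly the number of workers downloading it or its
# descendants). A worker walking down from the root prefers the child with
# the lowest metric, and among equals prefers not-yet-downloaded blocks,
# so it explores new territory instead of re-racing peers over known data.
from dataclasses import dataclass, field

@dataclass
class Node:
    cid: str
    downloaded: bool = False
    metric: int = 0                  # workers active under this subtree
    children: list = field(default_factory=list)

def pick_child(node):
    """Choose the next block for a worker: lowest metric first,
    breaking ties in favor of blocks we have not downloaded yet."""
    if not node.children:
        return None
    return min(node.children, key=lambda c: (c.metric, c.downloaded))

# Example: two workers already compete for E (metric 2), one for A.
root = Node("root", downloaded=True, children=[
    Node("E", metric=2),
    Node("A", metric=1),
])
assert pick_child(root).cid == "A"   # fewer competitors, less duplication
```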
C
The
goal
of
this
is
that
when
you
have
a
lot
a
lot
of
downloader,
you
don't
have
clusters
of
downloader
where
like.
If
you
have
a
part,
the
part
part
of
the
stack
that
is
very
deep.
We
don't
want
to
to
go,
keep
the
keep
going
deeper
with
tens
and
tens
of
people
we
want
to
repartee.
We
want
to
partition
people
somewhat
balanced
on
the
deck,
so
X
is
very
fast.
It's
just
download
the
Giant,
and
so
we
can.
We
see
it.
C: We just moved J out of the way, because it's not interesting anymore, and we remove J from the children of I. That's quite an implementation detail, but I need a way to know when to backtrack. Right now we only have logic to go down, which means that when we have a head somewhere, we always try to go deeper in that part of the graph.
C: The theory is that everyone is going to try to go deeper in their own part of the DAG, so they'll leave lots of open places for other nodes to start downloading, and the way I do this is by removing nodes. Here X just downloaded K, then it downloaded L, and now that it downloaded L, I only has one child left, which we're about to remove — and because I then has no children, we also remove I itself. So basically, that's what happens when a part of the DAG is done.
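The pruning rule just described — remove a finished block from its parent, and cascade upward whenever a parent runs out of children — can be sketched as follows (my own naming, not the real implementation):

```python
# When a worker finishes a block, the block is removed from its parent's
# child set; if the parent then has no children left, it is removed from
# *its* parent in turn, so workers whose head pointed into a finished
# subtree naturally get pushed back up the tree (the backtracking signal).

def prune(parents, children, cid):
    """Remove a finished block `cid`, cascading up through parents whose
    subtrees are now fully downloaded."""
    while cid in parents:
        parent = parents.pop(cid)
        children[parent].discard(cid)
        if children[parent]:
            break            # parent still has live work; stop cascading
        cid = parent         # parent's subtree is done too; keep going up

children = {"I": {"J", "L"}, "root": {"I"}}
parents = {"J": "I", "L": "I", "I": "root"}
prune(parents, children, "J")
assert children["I"] == {"L"}
prune(parents, children, "L")    # I has no children left -> removed too
assert children["root"] == set()
```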
C: Does anyone have questions? Because this is really everything I implemented — it's just this algorithm. So if people have questions, it's a good moment.
C: Sorry — good question. So X, Y and Z all run in one node. You run one node, and you can imagine it's three different gateways: I'm downloading from ipfs.io, from a Saturn gateway, and from my own gateway.
C: Exactly — it might be that I've limited it like that; in my head it makes sense. So wait: each gateway has a global queue, and each gateway's workers traverse the graph independently, right.
C: So no — in my example I had three workers per provider, because it optimizes the latency a lot, but it's just working around that; I'll talk about it right after this. Does anyone have a question about the algorithm before I move on? Hannah has one.
C: No, I go to the gateway with the CAR format. Oh, sorry — I get a CAR, so that's a stream of blocks, and I iterate over all the blocks in the CAR and run the algorithm on each one. That's the difference between the solid lines and the dotted lines: the solid line is where I actually made my HTTP request, and the dotted lines are the blocks I expect to get within that stream of blocks.
C: Okay, cool, yeah: if I don't like the blocks I'm getting, then I just close the request. Got it — so I guess no more questions. An interesting point: how do we add more protocols?
C: The first one is GraphSync, which I'm thinking about. It's very useful, because GraphSync is a lot like a very fancy CAR-file-over-Gateway: you start a request somewhere and it gives you more blocks — the server is pushing more and more blocks to you. So I already have all the logic for this.
C: The thing that would be needed is to implement this interface, which takes a context — because it's Go, and you want to be able to cancel the request — the starting CID for that request, and this thing which I call a traversal, which is just an inline representation of a request. It's basically like an IPLD selector, except it does less. So you could translate traversal objects to selectors, and even if you did not want that, you could do what I do with the Gateway protocol: I don't even send the traversal — if I get blocks that are not in my traversal, I close the connection, and I fall back to a Bitswap-like protocol, where I do one fetch per block, if that's not working. You could do the same thing on GraphSync: you just always send a selector that says "give me everything", and if I don't like what you're sending me, I kill the connection. So GraphSync would be fairly easy to integrate — you need to implement that interface.
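The traversal abstraction described here can be sketched like this. This is a hypothetical illustration in Python (the real IPSL API is in Go and differs): you feed it a block, and it tells you whether the block was expected and which CIDs to expect next, so a server-driven stream can be verified incrementally and cut as soon as it goes off-traversal:

```python
# Toy traversal: tracks the set of expected CIDs, expanding it with a
# block's children as blocks arrive. `links` is a stand-in for actually
# decoding links out of block bytes.

class Traversal:
    def __init__(self, root_cid, links):
        self.expected = {root_cid}
        self.links = links

    def consume(self, cid):
        """Return True if this block was expected; expand expectations
        with its children. False means the server went off-traversal,
        which is the signal to kill the connection."""
        if cid not in self.expected:
            return False
        self.expected.discard(cid)
        self.expected.update(self.links.get(cid, []))
        return True

t = Traversal("root", {"root": ["a", "b"], "a": ["c"]})
assert t.consume("root") and t.consume("a") and t.consume("c")
assert not t.consume("zzz")   # unexpected block -> cut the stream
```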
C: The more complicated part is going to be this one, because Bitswap needs to remember the direction of the — sorry: on the right we can see Z, which is a server-driven protocol, so I start a request and from that request I'm going to get all the blocks underneath. Bitswap doesn't do this — it's one-to-one: I ask you for one block, you give me one block. So we need to be more active with the head and move it more often.
C: How many heads — how many downloaders are underneath? The way I'm thinking of doing this is with a snake of work. So X is a client-driven protocol where I request blocks one-to-one, and for each block that I add to the request, I'm creating a linked list of them. That means we now have not only a head for the worker — the worker also has a tail. So when I download a block, I either cut them off if I…
C: Any questions? I'm mostly done. I was thinking of presenting in more detail the trade-offs of what cutting off the connection does, so I'm going to look at more code for this. This is the current test code I have: it starts a download request and it just iterates over all the blocks — that's a channel here. Okay.
C: I have four gateways and each gateway has three runners, so that's 12 — I run 12 workers in parallel. The reason I do this is that every time I cut off a gateway, for at least a round-trip time the remote peer doesn't know I've cut the connection, because my packets cannot get there faster than the speed of light.
C: So when I cut off the connection, at least one round trip of data is still in the pipes — in the internet pipes, in the buffers of different routers, in fibers — and it takes a while to get to me, and this is wasted data. So what I want is that when I cut off one connection, the amount of wasted data is lower, which is why I have more connections.
C: So for each connection: imagine I have one gigabit per second and one second of overall latency to a peer. We can compute the bandwidth-delay product, which tells me I have one gigabit of data stored in the pipe at any moment. Because now I have 12 connections and not one, the one gigabit per second is divided by 12 — assuming every connection gets an even share of the network — and so when I kill one connection, I waste a twelfth of a gigabit. Well, I don't waste it completely, but this helps a lot — having more workers. It has diminishing returns, though: at some point you can add more and more workers and they don't go any faster. So that helps going faster.
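The arithmetic above, sketched out: with bandwidth B and round-trip time RTT, roughly B × RTT bits are in flight when a connection is cut, and splitting the link across N equal-share workers means each kill wastes only about 1/N of that.

```python
# Approximate in-flight data discarded when one of `workers` equal-share
# connections is cut (ignores congestion control and buffering details).
def wasted_bits_per_kill(bandwidth_bps, rtt_s, workers):
    return bandwidth_bps * rtt_s / workers

# 1 Gbit/s, 1 s of RTT, a single connection: a full gigabit is in flight.
assert wasted_bits_per_kill(1e9, 1.0, 1) == 1e9
# The same link shared by 12 workers: each kill wastes only ~83 Mbit.
assert round(wasted_bits_per_kill(1e9, 1.0, 12) / 1e6) == 83
```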
C: And yeah, I just stick to a few gateways. The last interesting thing I've discovered: I decided to look at the CPU usage of my client, and it's extremely good. At first I was testing with the default client, and I just brought up the pprof profiler.
C: This is the result of the profiler while downloading at roughly 200 megabytes — mebibytes, maybe — per second, which is roughly 2.5 gigabit, and we can see that almost none of the CPU time is in my code — we have 12 samples within my code. All of the CPU usage, which I actually find quite funny, is in the HTTP library. I was using HTTP/2, because it's the default Go uses if it can, and like 30% of the CPU time is spent doing syscalls.
C: We also have quite a lot elsewhere in the HTTP library, so that was interesting, and I actually decided to change it to HTTP/1, because HTTP/1 is a simpler protocol. It goes faster with HTTP/1 because I have three streams to the same peers, and HTTP/2 does have head-of-line issues: if I have three streams to the same peer, with HTTP/1 each will get its own TCP connection, while with HTTP/2 the streams get shared on one TCP connection.
C: That means you can have a head-of-line issue: you cannot read stream A because you lost data from stream B. So HTTP/1 is faster and it also uses less CPU, but we're still heavily dominated by the HTTP usage rather than the hashing speed.
C: We can see 40% of the time is spent doing HTTP and 20% is spent doing hashing, so I'm quite happy, because I didn't try to optimize this and it's still miles ahead of go-bitswap. If you do the same thing with go-bitswap, you don't see go-bitswap a lot in the profile — you see mostly QUIC. So that was interesting to discover: it's surprisingly costly to run at a few gigabits per second.
C: Very good question. So I mostly care about downloading UnixFS data right now, and with the trick of killing the connection you still waste a round trip every time you do that. So I would add a way to pass a path through to the gateway: when I start downloading, if I'm only interested in a few files within a directory, I can give the path as part of the request and the remote gateway will walk the path for me.
C: So I don't have to kill the connection because it's going somewhere I don't want — I can tell it where I want to go, at least for UnixFS data. With the API I have with IPSL, I can in theory describe fairly complex requests, but I don't actually think we need that right now; I would mostly use this for UnixFS.
A: Yeah, this seems to be kind of carving around the querying problem that seems to show up in a lot of other protocols, by just saying there is no querying — the client receives everything. And I think what's really interesting about the rapide approach, in contrast to some of the others, is that your approach is much more client-heavy. By client-heavy, what I mean is we are reliant on having providers of content that are willing to serve ad hoc, that are trusting the client not to request too many blocks.
C: No, you still don't want that, because you would waste your own bandwidth.
C: You're limited by your receive bandwidth. One interesting thing, though, particularly about gateways: most gateways make themselves faster by caching responses. So if a first person downloads a CAR file, usually when a second one downloads the same CAR file, it's not going to hit the IPFS node — it's going to hit some nginx or whatever server they have. The issue with rapide is likely that it randomly walks the graph and selects random nodes, and it depends on what happened — on how we did the request.
C: Most of the blocks will be the same, but the CAR file itself will be different. So if you want to efficiently cache these requests, you really need a load balancer that knows IPLD data and is able to understand what a CAR request is and how to extract blocks from the CAR.
D: Go ahead, Matthew. — Hey, I think I missed this before, but what's the benefit of using CARs over raw blocks in this context?
C: You save a round trip on each connection. Most of the time you start downloading a block and there are thousands of other candidates, so the connection doesn't get canceled — it's fairly rare. It happens a few times, usually about every second you kill one connection, and I get to thousands of blocks per second at my speed. So for the default case, where everything is happening fine, you don't have to send a request every time.
C: If I implement the client-driven part, when I do Bitswap, you could take the Bitswap manager code — which does the work, like Bitswap, of doing multiple requests in parallel — and do requests for blocks in parallel to the gateway using raw instead of CAR. That would work, but I think it would still not be worth it — well, it would work, but you would have to do so many HTTP requests that it would be less efficient.
A: Yeah, it's really interesting — the amount of work that you put into it. I guess the big thing I wanted to talk about was the UnixFS versus IPLD distinction. One of the things we've noticed about UnixFS data is that it tends to be very bottom-heavy, right? The leaves are where the actual data lives, and the content connecting those leaves — these internal branches — is all just linking information connecting you up to a root CID.
A: So part of what I'm wondering, what would be really interesting — in addition to Lidel's question about seeing how QUIC plus HTTP/3 would compare when thrown into the mix — is this question of whether it matters.
C: Does it matter? Okay, that is a good question. It matters only if you have so little work that you force duplicated work — when you cannot find non-duplicated work — which only happens in the first few milliseconds: in the first 100 milliseconds of the request we still know very little about the DAG. It's useful there; after that it's not really useful, because UnixFS is so wide.
C: It's so bottom-heavy that it's very unlikely — it very quickly stops happening. So one thing I was thinking is that we could use something like GraphSync to download all the blocks to, let's say, a depth of two, and that should give us enough blocks that duplicated work stops happening. But this will only really speed up the very start of the request; for most of the request it's just not happening.
C: One thing that's also quite interesting to think about, from when I was reviewing where other people made their presentations about which protocol is more server-driven and which one is more client-driven: I think most of them fall somewhere in between.
C: The different nodes will use the protocol to agree on what moves over the wire, but you can usually make those client-driven by lying. For example, the tit-for-tat algorithm in BitTorrent is kind of the same thing, where the node tries to download the blocks other people want first — but most protocols still fall within client-driven or server-driven, I think.
A: And it seems like the primary focus of rapide is to do the DAG discovery process fast and have that be part of the protocol, and not require — like a lot of the other approaches we've seen, you know, almost all of them — not all, but many of them — involve some type of metadata superstructure or construction.
C: Yeah — still, you could take the metadata from another protocol and use it within rapide to download more efficiently.
C: If I have ten thousand nodes I know I can download from — okay, maybe not that many, maybe a hundred — and I just know the root CID, the only solution I have is to request the root CID from everyone, or some subset of them. At that point, if you were able to use another protocol to augment the information, rapide would be able to optimistically start downloading from other peers. So it would still be useful to have manifests or things like that — it's just not critical.
C: For a blockchain, it will just be as fast as the fastest protocol you have — for example, GraphSync from one peer. For a blockchain you need to do what Lassie does, which is quite fun: it requests a manifest from someone, which gives it a hundred blocks, and it optimistically downloads the hundred blocks in parallel over Bitswap. You could do the same thing with rapide; it will just not be as good in the wide case.
A: Right — it seems to me that the way you're doing this fast, the simultaneous construction of the DAG or the manifest or whatever while it's being downloaded, it just looks a lot like reference counting to me. Is that fair? You're just building reference counting based on fetches.
C: That's the nature of it — right now it is, because I didn't have any idea how to make a good metric, so I just say "X many downloaders". So it's a metric, right? Ideally you want a metric that's quite smart, because if you have two choices in the DAG, and one of them is very small and one of them is very big, you want more people going to the very big part. However, right now rapide will distribute them equally, which is not really useful. So what happens?
C
Is
that
the
fast
case
lots
of
people
too
much-
will
go
on
the
very
small
part
of
the
dag
it
will
get
downloaded
quickly,
but
we
also
have
lots
of
cancels
a
lot
of
a
lot
of
rays,
which
is
way
slow.
So
that's
bad
for
efficiency,
and
then
everyone
will
backtrack
and
everyone
will
go
back
to
the
big
case.
I
mean
as
a
half
that
go
back
to
the
small
one.
So,
ideally
is.
Should
we
borrow
a
ratio
of
like
how
big
do
we
expect
that
part
of
the
deck
to
be?
C
How
fast
does
it
knows?
Also,
because
right
now
it
has
seen
that
two
no
two
providers
are
equal.
Whatever,
however,.
A: Like, if you say, okay, we want throughput — higher throughput here is going to be able to assign greater weights to…
A: Yeah, you can see how that could create a lot of problems. If you just change it to a constant factor of the worker weight, then it's great — add two, remove two, cool. But if that's fluctuating based on throughput, then your reference counting isn't really reference counting at all — yeah, wackadoodle.
C: Yeah — the main issue I have right now, which I've not implemented, I call "don't go there": the point is that not everyone has everything, and right now I assume that all the peers have all the files. The bitfield might be useful for that. The issue I see with the bitfield is that you don't know the bitfield without first asking for it from someone, or trying to sync the data. Rapide doesn't need it — I could make it faster.
C: It could be faster if you had some precomputed information, but in theory you don't need it: it builds the DAG while it's downloading. So I think it could save some precomputation time — but again, you see, the thing is that rapide doesn't need any information beforehand. You just know the root CID and it does its stuff.
E: Yeah, so I have a follow-up question to rapide being kind of blind to the content — it just cares about being able to traverse things. Are you leveraging the codec inside of a CID for any optimization?
C: I've not done that yet. Actually, there is a better optimization to do if you're just dealing with UnixFS, because with UnixFS and dag-pb we actually have a Tsize field, which is an approximation of the size of the DAG under a block. So we could use the Tsize in UnixFS to steer faster peers to the bigger parts of the DAG.
C: A slow peer would download a small file and a fast peer would download a big file. I've not done that yet — I mean, it's already quite fast for something that's not optimized, so I'm very happy with that, and there is a certain simplicity in not having edge cases. The one UnixFS hack I do have is with the codec: if I know it's a raw block, I know I won't have blocks behind it.
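The Tsize idea suggested above can be sketched as follows. This is hypothetical — the speaker notes it is not implemented — and the pairing scheme here is my own illustration, not rapide's behavior:

```python
# Possible use of UnixFS/dag-pb's Tsize field (an approximate subtree
# size stored on each link): instead of splitting workers evenly, weight
# each subtree by its Tsize so fast peers land on the big parts of the
# DAG and slow peers on the small ones.

def assign(peers, subtrees):
    """Greedy pairing: fastest peer takes the largest remaining subtree."""
    by_speed = sorted(peers, key=peers.get, reverse=True)
    by_size = sorted(subtrees, key=subtrees.get, reverse=True)
    return dict(zip(by_speed, by_size))

peers = {"fast": 100e6, "slow": 5e6}                 # bytes per second
subtrees = {"video": 50_000_000_000, "readme": 4_096}  # Tsize estimates
assert assign(peers, subtrees) == {"fast": "video", "slow": "readme"}
```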
B: Yeah, I just want to say one thing I really like about this, which is the focus on the client specifically. Because if we have agreed that we are not going to have an über-protocol, then we implicitly agree we'll have clients that understand multiple protocols. So I think that actually means the client becomes an important piece of software, and maybe, to some extent, it's like, you know…
B: As long as the client is for one particular application, it can be simple, but in a lot of cases you're going to want a more complex client that speaks multiple protocols itself. And the algorithm is really interesting — I'm still trying to grok it. I want to make sure I really play around with the code, I think.
C: Yeah, if you look at the code, it's a bit ugly — I mean, the algorithm by its nature needs to be very concurrent. What's nice is that rapide on my CPU right now takes about two cores to download at two gigabits per second; however, because it's extremely concurrent — very well spread out over the 12 threads of my CPU — it doesn't look like it's using a lot of resources, because it just uses like 10% of each core. The way I wrote the code to do that…
C: …is that it's multi-threaded using locks, but the locks are on each node, and it's eventually consistent. So sometimes you have wrong behavior, where some worker will try to go down the wrong part of the DAG because the data has not been updated yet. However, if you go down there, you should realize it's the wrong part of the DAG later. For example, you try to go down, and someone just finished downloading that part — we have a branch of the tree…
C: …that's finished. We have workers that go up the tree one by one and remove all the nodes from the tree, so everyone gets pushed up — it's the thing where, because I removed all of the children, the default case of backtracking will happen. And, for example, in that case there can be a race where I start removing the nodes one by one but you're still seeing a node which I have not removed, so you go down, and you keep doing that.
C: And after that you realize: oh, there is no more stuff, so I'm going to backtrack. If you're looking at the code, the main thing to be aware of is that the code is weird because it does that. I think if you know this rough explanation of the algorithm, you can understand it — it's not that bad.
E: Yeah, kind of wrapping up our thoughts here: the shifting of complexity, or responsibility, to the client is actually, I think, a feature. And maybe just as food for thought: we want people to verify data they retrieve from IPFS — that's the entire point — but no one had any incentive to do that.
E: We understand why it's important; many people don't care, because it's slower or whatever. Rapide is actually providing the incentive: one, it's truly faster; and the second thing, by the sheer fact that it's on the client — I agree, it's very nice to have this generic base, but then the clients who mostly care about UnixFS can have additional optimizations on top of the generic one.
C: I was admiring the fact that BitTorrent, if you want to share files with lots of people, is the fastest technology — it probably still is faster — but before, IPFS was slower than HTTP. Now, if you use rapide, you're faster using IPFS with multiple peers than just HTTP from one server. That's the point we're making.
C
C
Where is IPSL? Ah, it was right here. So in rapide I have this IPSL part also, and that's also what I accept in my thing. So the current API we have, which actually I can find, because it should be... click here.
C
Good luck! Exactly, this one. I ask it for a CID, or a list of CIDs, and the client is going to give me back all the blocks. So that's the API of the old client, which I copied. I need to view the DAG, because the whole point of rapide is that I do ref-counting, and so I can go left and right, and I can smartly divide the DAG into smaller and smaller DAGs, where I keep applying the same algorithm.
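One way to picture "divide the DAG into smaller and smaller DAGs" (purely illustrative, assuming the client tracks the set of still-pending CIDs; the real scheduling in rapide is more involved):

```go
package main

import "fmt"

// splitPending is an illustrative sketch: given the CIDs still to
// download, split them into two halves so two peers can each work on
// a smaller sub-DAG, and the same algorithm recurses on each half.
func splitPending(cids []string) (left, right []string) {
	mid := len(cids) / 2
	return cids[:mid], cids[mid:]
}

func main() {
	l, r := splitPending([]string{"a", "b", "c", "d"})
	fmt.Println(l, r) // [a b] [c d]
}
```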
C
So I need to know that. The best way, I think, to do that is you have an object, some object where you give it a block, and it tells you what are the next blocks. You might know that as selectors, from IPLD, which has the same feature.
C
However, selectors are actually slightly different: they were not built directly for that use case, and it's surprisingly hard to get them to give you the relationship between the blocks. So I've created IPSL, which does mostly the same job as selectors, where I have a single object and I can progressively traverse it to get to know the DAG as I download it. However, IPSL has a really small API compared to selectors. It takes blocks, and a block is just a CID and some bytes.
C
So those two... I mean, it's just a lot of detail, but it has this API, which is a block and a CID, where selectors work on the IPLD data model, which is more complicated, more stuff. This is mainly useful because it's smaller to implement. So if I go to the UnixFS implementation right now, it's kind of fun, because I made this thing to describe graphs, and, for example, with IPSL you can say: I want.
C
C
It's like rapide says: I have a block. So the UnixFS implementation of the IPSL "everything" node just parses the block. If that's a raw block, raw block CIDs have no children in UnixFS. Then I parse the protobuf, and there are two protobufs within each other, because of UnixFS reasons, and I build a list. So I take the links, and I make a result.
C
A slice, which holds the traversals and the CIDs. So the CIDs part is just a list of all the CIDs, and the traversal, for "everything", is the recursive node that applies itself forever, so it's just itself. And I've also made it so that, for everything I'm using it for now, it's just useful to talk to rapide, like the rapide code itself.
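The "everything" node just described, a traversal that returns itself for every child link so the whole DAG gets fetched, looks roughly like this (a sketch under assumed names, not the real implementation):

```go
package main

import "fmt"

// everything is the recursive node: for each child link it yields
// itself again, so a downloader following it fetches the entire DAG.
type everything struct{}

// next takes the child CIDs parsed out of a block and pairs every one
// of them with the same everything traversal, mirroring the
// "slice of CIDs plus slice of traversals" result described above.
func (e everything) next(links []string) (cids []string, traversals []everything) {
	for _, l := range links {
		cids = append(cids, l)
		traversals = append(traversals, e)
	}
	return cids, traversals
}

func main() {
	cids, ts := everything{}.next([]string{"childA", "childB"})
	fmt.Println(cids, len(ts)) // [childA childB] 2
}
```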
C
C
So I also implemented a programming language to describe the rapide queries, but we don't actually use that right now, and we don't have any need for it right now. Because, for the gateway, I'm mostly interested in, instead of having rapide describe the requests on the gateway, where I go to a folder and stuff, having IPSL describe those gateway resource requests. We probably will just add rules to the gateway, saying: oh, that's UnixFS, and this is how UnixFS works. So right now it has a programming language which you can use.
C
You don't have to use it; I'm not using it. It's Lisp-like, so this sort of test case here, that's example code. So that's the outer one: this creates a node, which is a thing, a part in the tree. "Reflect" is the name of the node; reflect is a testing node that just outputs the same thing, it returns the same value you passed in. That's a comment, so that's a comment, and another one. And that's a CID literal, so it parses that as a CID, and so you can integrate constructs.
C
For example, if I go to the UnixFS test, this one, I can take this, which is just going to be a name, and this creates a UnixFS scope, loads a scope. A scope is a list of optional features. So you have lots of things that are different; we have lots of data formats, but not everyone wants to implement all of them, so either it's all built in, or you can use wasm to load more data formats that are supported. And so this is loading the UnixFS code, which is right here.
C
That's the name of the UnixFS scope, and this creates... so this is a query that basically downloads everything within some UnixFS. Testing this loads the UnixFS scope, and then it calls unixfs.everything, which uses the traversal object I was showing earlier, that is right here. It returns you one of those objects that, again, recursively will just give you all the blocks of some UnixFS.
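Putting it together, a client drains such an "everything" traversal roughly like this: start from the root, and for each fetched block queue up whatever blocks the traversal says come next (a hypothetical sketch over a toy in-memory DAG, not the Kubo code):

```go
package main

import "fmt"

// toy DAG: CID -> child CIDs. In reality the children are discovered
// by parsing each downloaded block.
var dag = map[string][]string{
	"root": {"a", "b"},
	"a":    {"c"},
	"b":    nil,
	"c":    nil,
}

// fetchAll walks the DAG the way an "everything" traversal would:
// every block's children are queued until nothing is left.
func fetchAll(root string) []string {
	var got []string
	queue := []string{root}
	for len(queue) > 0 {
		cid := queue[0]
		queue = queue[1:]
		got = append(got, cid) // "download" the block
		queue = append(queue, dag[cid]...)
	}
	return got
}

func main() {
	fmt.Println(fetchAll("root")) // [root a b c]
}
```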
C
The plan is not to merge this. What I'm probably going to do is take IPSL, take the useful part out of it, which is just the internal representation and the interfaces (it also has a compiler and the language design), and use that on top of other protocols. I have no use case for this right now, and we are probably not going to use it for now, at least. So, yeah.
C
Okay, so, because I know lots of people are passionate about graph description languages: I will take the language part out of it; we'll just use the things that are useful for now. Wow.
A
That's awesome. I love the idea of, like: here's the demo, here's a protocol, I wrote a programming language and threw it away, and did not find it reasonable to mention it as anything other than a final comment on a closer question. Jorropo, you take the cake for sheer effort, yeah.
C
C
So I want to include that in Kubo eventually. I actually have a master planning issue that describes it, but the goal is to have it in Kubo. So I have issues, yeah.
C
Oh no, I just implemented that because I was a bit on a... I was on a time crunch, and this was the easiest thing to implement. Yeah, I'm sending that right now, yeah.
C
So we can... I mean, I don't have any particular interest in which protocol goes into it. Ideally, we add as many protocols as possible, so everything works. In practice, we'll see.
A
Cool, thank you so much for taking the time to present, Jorropo, I really appreciate it, and thanks, everybody, for coming today to ask probing questions and help us all understand better. To everybody else, we'll see you in two weeks. I think we have a presenter prepared, but I'm gonna circle back and make sure they're all ready, but yeah.
A
If any of your work fits the Move the Bytes working group, come forth: send a message in the Filecoin Slack, in the working group channel. It can be tangential to moving bytes, it should be data-transport related, but we would love to see your work. Cool. With that, thanks, everybody, have a great rest of your day.