From YouTube: Move The Bytes Working Group Meeting 3
A
Okay, hello everybody, and welcome to meeting number three of the Move the Bytes working group. Today is December 14th, 2022, and I'm excited to get into it. Our agenda for today is nice and simple: we have five minutes of quick housekeeping, and then Juan is going to give us a great talk about data transfer, discussing a number of things.
A
The agenda says "open discussion, 35 minutes"; that's just going to be the remainder of the hour. But just to remind ourselves of why we're gathering this group: we're trying to ship a data transfer protocol that can replace Bitswap in Q1 2023.

I put an emphasis on "can" over the other words because, as I've been having a number of conversations with other folks about the purpose of this working group, I think the emphasis should be on the capability to replace, not necessarily that it is implemented in the wild. I have heard a little bit of confusion, like: "hey, is this working group intending to formally ship this in every implementation of IPFS ever by the end of March?"
A
I don't think that's plausible; instead, this is about actually coming up with a viable alternative to Bitswap as we move forward, on a meaningful time frame. Speaking of time frames, this is where we are: we're currently on meeting three of our schedule. We have our speaker selected for when we come back: Hannah Howard is going to be talking to us on January 4th. She has a proposal for a new data transfer protocol based on a lot of her research, so we'll be hearing about that at our first meeting back on January 4th.

A couple of notable reading bits have come in from the Move the Bytes working group channel, including "The Effectiveness of Bitswap Discovery Process." I do want to call this one out. It's coming from the ProbeLab team, authored by Gui. "RFM" stands for "request for measurements," so this is their 16th request-for-measurements report.

It's a really, really in-depth report on Bitswap as a data discovery tool. I highly recommend others take a read through it; I think it's really worth taking some time to digest. Moving right along: metrics report number three was supposed to come out today. We have no report this meeting, mainly because we have to spend some time reevaluating.
A
Before posting our results to the meeting, we've been getting a lot of feedback, both on what should be measured and from the team implementing the measurements, that we would benefit from stepping back for one meeting to clean up and polish, so that we're not constantly hacking results together. So for this meeting we elected to press pause, step back from doing a report, and instead spend some of that time cleaning up the infrastructure and pieces necessary to put these reports together.
A
We think it's a small investment that will yield better long-term results. For this, we just have to figure out a way to scale our measurement systems a little better, and we can talk about having a report next meeting. I think this is also a great launch pad into making sure that this working group is giving everybody, and all the stakeholders coming to these meetings, what they need and what they want to get out of this working group.
A
So what I'm proposing we do is solicit more feedback from the folks in the working group and around the working group. I've been talking to a number of you asynchronously, and I think this is a great time for us to have an asynchronous discussion in the Move the Bytes working group channel about what everybody wants to get out of this working group, and make sure that we're aligning properly on that. As item number zero: we are still very dedicated to shipping a viable data transfer protocol.
A
We need one for sure, and I know a couple of other teams are aligned on that need, but we want to make sure that we're reporting on stuff that's useful and providing the proper on-ramps for folks to get something valuable out of this working group. Rather than have that discussion synchronously, which may exclude folks who aren't on the call, I think it'd be best to structure it as a channel discussion that I'll kick off in the Move the Bytes working group channel inside the Filecoin Slack after this meeting.
A
So look out for that. If you're looking to weigh in on what this working group is doing and its primary reason for existence, that would be a great place; we're really soliciting feedback on what is and isn't working for folks as we try to figure out new paths forward in data transfer. That's our short update for logistics and housekeeping this week. With that, I wanted to save as much time as possible for Juan to give us a talk with his thoughts on data transfer. I know he's prepared a number of notes that I think are super important for this group. So with that, Juan, are you cool to take over?
B
Yep, thank you. One second, let me get my stuff set up.
B
Great, awesome. Great to see everybody. I'm really excited about this working group, and I had a good time watching the last two talks.
B
A lot of what I'm going to say here is already in strong and violent agreement with many of the things talked about before, but I do have a few ideas that need to be revived a little bit, and some ideas that are probably pretty different from what other people are thinking about. So before I get to the ideas, I want to talk about two sets of things. One is a set of recommendations for the working group in general, just because I think there are a lot of people here that want to solve these problems and get to good solutions quickly, so I want to talk a little bit about that.

Then I want to stress the importance of an experimental setting, and give a few concrete ideas of how you can level up the measurement over time, and how we as a community can keep improving this. This is in a drastically better space than it was a year ago, two years ago, three years ago, and so on, so the improvement trajectory has been great.
B
As many of you can attest, this is a very large problem space, with a very large potential design space for solutions. There are lots of different applications that have different requirements, and there's an enormous amount of stuff to instrument and test. Even if you have a good solution that hits one set of applications, somebody's going to come along with new applications and new requirements and so on, so things will have to keep evolving.
B
So one of the pieces here is that, as a community, we need to be able to keep experimenting with these different potential solutions for moving around objects, etc. (Somebody is unmuted, so we can mute that person.) We need to get to a spot where we can design extremely good benchmarks for various different workloads, and where we have a good way of getting reports out of those benchmarks for different kinds of protocols, on different devices, and whatnot.
B
That's what the community is going to need longer term, more than a single protocol that's just going to solve a bunch of problems. What I'm trying to get at is that, necessarily, because of the very different kinds of applications and different kinds of networks, this will yield different styles of protocols, and it's going to need to keep evolving. So having a strong emphasis from the community towards measurement infrastructure, to be able to do all of this together, is going to be super valuable longer term. So congrats on setting up this working group; it's solid. As a community, we should invest deeply in this working group now to level up our infrastructure, level up our measurement, level up all the tooling and so on, so that we can get to better solutions. Great.
B
From that, I wanted to talk a little bit at the meta level about a few different things. One is that the application context really matters. Different kinds of applications are going to want to do different things, and many of the discussions that differ just come with different contexts, so you're going to end up wanting different protocols because of that. Now, it is possible...
B
You
could
have
like
one
one
product
that
handles
all
that,
but
it's
just
be
aware
of
the
complexity
of
that
protocol
might
might
assume
and
just
know
that,
like
a
lot
of
the
constraints
will
differ
for
how
you
move
around
data,
so
in
some
settings
here
trying
to
just
replicate
some
large
file
or
large
set
of
files
and
whatnot
and,
like
there's
a
much
more
straightforward
thing
in
other
settings,
you're
trying
to
get
a
large
set
of
peers
to
pull
sub
graphs
from
each
other
and
so
on.
B
The request/reply latency problem changes entirely there. So just recall that the application context really matters, and to the extent that you can make this application-friendly, the better; meaning, if the application can choose something and tune something before it's used, that'll be great. Now, the problem with that (maybe I'll skip forward one slide before talking about threat models):
B
The problem with that is you can end up writing too many protocols, which then have to be written in too many languages. We already have this now. When we started Graphsync, the goal was to replace Bitswap.
B
The idea was to say: hey, let's take everything that we know from Bitswap, write it into Graphsync, and replace what we had before, based on our learnings. And the idea there was: hey, we're no longer swapping bits, we're swapping graphs; and it's now really more of a sync protocol as opposed to just an exchange protocol, and so on. So that's why it evolved in that direction.
B
However, at the same time as some of us were working on Graphsync and building it, other folks were working on and evolving Bitswap, with all the Bitswap sessions and whatnot. So we ended up with two protocols, and a variety of implementations and clients across a variety of languages that don't speak to each other, and there are so many groups trying to put Bitswap and/or Graphsync into more components. So having many protocols can be super painful.
B
So that's a warning in one direction. In the other direction, beware of one protocol, because you can also end up in a spot where it actually just doesn't work for everything.
B
Even something like git has multiple protocols now. The way it works, and why git manages to succeed with multiple protocols, is that it sort of enforces that all implementations are going to implement all of those protocols. I think there are two or three, and then there are some experiments that add more and so on, but none of them have quite succeeded beyond those two or three. So the point here being:
B
If we end up in a world where there are many protocols, then we're going to have to figure out how to make sure that everyone speaks to each other. This is another plug for programmability: things like IPVM and so on could really help here. If loading up a new protocol is just downloading a Wasm module and running it, then maybe you're in a good spot.
B
But then, at that point, we're going to have to solve who gets to decide what these transfer protocols do, who gets to publish them, and so on. We could get to a good spot there by writing these protocols in Wasm and having a community-oriented process for deciding which ones are good and, you know, relatively bug-free and whatnot. But getting into the spot where you're fetching the protocol at runtime from some dynamic source and so on...
B
Without
any
kind
of
certainty
it
like
opens
up
a
ton
of
like
potential
for
attack
right
and
so
from
a
security
perspective.
So
I
would
say,
like
there's
significant
benefit
to
our
ability
here
into
into
solving
get
getting
us
into
a
spot
where
we
can
experiment
with
it,
with
different
styles
of
protocols
to
to
get
them
to
work
for
different
kinds
of
applications.
B
But
if
we
do
that,
we
should
a
really
lean
towards
an
ipvm
and
all
the
VM
work
that
we're
going
to
be
doing
there
and
B
figure
out
a
community
process
by
which
we
like
sort
of
bless,
a
set
of
protocols
to
to
to
be
safe
to
like
load
up
at
runtime.
B
We
don't
want
to
be
in
a
spot
where,
like
you're,
you're
kind
of
allowing
anybody
to
write
these
and
even
then
just
be
aware
of
kind
of
the
explosion
of
of
possibilities
there,
like
you
know,
there's
lots
of
good
examples
in
the
history
of
certain
kinds
of
exploit,
exploits
enabled
like
I,
feel,
like
Java
had
a
famous
one
where
like
or
no
XML
had
a
famous
one,
where,
like
you,
you
like.
Some
expression
would
like
enable
you
to.
B
If
you
wrote
the
expression
just
right,
you
could
like
blow
up
the
cache
of
the
of
the
local
machine
and
and
and
so
on,
so
the
point
being
I,
don't
think
we
can
get
to
a
single
protocol.
This
is
going
to
set
up
work
for
everything
with
the
perfect
setting
and
nobody's
I.
B
Don't
think
people
will
be
happy,
however,
getting
to
a
world
of
too
many
protocols,
even
just
to
is
too
much
in
a
sense
like
it
we're
already
two
two
protocols
with
many
implementation:
many
possible
languages
and
and
places
to
implement
it's
already
kind
of
a
nightmare,
and
so
we
either
have
one
protocol
or
we
invest
in
programmability
I
think
is,
is
kind
of
the
the
output
of
that
so
that
that
may
not
kind
of
change.
B
This
working
groups
in
in
media
work
in
a
sense
in
that,
like
hey,
you're
kind
of
trying
to
experiment
and
find
a
better
protocol
for
now,
and
and
that's
all
good
or
improve
the
current
ones
and
that's
great,
but
it
might
change
your
like
mid
to
long
term
goal
set
in
that.
What
you
should
be
doing
is
probably
writing
in
Rust
and
probably
targeting
awesome
and
not
and
kind
of
a
lot
about
the
protocols
in
other
languages.
That
probably
don't
make
lot
of
sense
anymore.
That
would
be
my
my
guess.
B
The
the
other
thing
here
that
I
had
to
run
throughout
models.
The
the
sync
protocol
changes
a
lot
if
you're
dealing
with
trusted
nodes
or
totally
untrusted
malicious
nodes
that
might
be
changing
your
software
to
try
and
and
mess
you
up,
so
different
kinds
of
protocols
might
evolve
with
different
capabilities.
So
so,
and
here
I
mean
capability
crypto
capability.
B
So
if
you
show
up
with
like
the
right
capability,
then
you're
kind
of
in
a
trusted
setting
and
you
you
get
to
be
able
to
you-
can
trust
like
the
other
side,
much
more.
B
If
you
don't
have
that
and
you
and
you
want
to
be
kind
of
a
sub
model
where,
like
you,
don't
trust
anything,
then
you
have
to
be
a
lot
more
conservative
and
you
may
maybe
don't
trust
the
other
side
as
much
you're,
just
going
to
factor
that
and
keep
any
managed
to
sign
things
talking
about
experimental
settings
for
a
moment.
B
Just
remember
that,
like
you
only
improve
what
you
measure
so
as
a
community,
we
will
only
improve
the
stuff
if
we
have
good
and
reliable
and
robust
measurements
across
all
the
areas
that
we
care
about.
If
we
don't
measure
it
and
we
don't
keep
measuring
it,
it'll
either
not
get
fixed
or
degrade
over
time
like
you
might
spend
a
lot
of
time,
measuring
something
and
fixing
it
you
get
it
into
place.
You
stop
measuring
software
evolves.
B
Software
brought
us
something
changes
at
some
point
and
that's
not
only
things
are
slow
again,
so
the
the
only
way
to
get
things
good
is
to
establish
a
really
strong,
experimental
Loop
and
keep
running
it,
meaning
getting
it
to
CI
and
keep
testing
things
at
scale.
B
Just
on
a
you
know,
in
terms
of
an
experimental
Loop
like
you
have
to
you
know,
software
is
going
to
keep
evolving,
which
means
and
in
addition
to
keep
evolving
because
the
problem
space
and
the
design
space
is
going
to
keep
developing
too
the
velocity
of
a
community
writing
software
is
much
more
important
than
its
position,
meaning
the
the
ability
for
a
community
to
change
the
software
and
adapt
to
the
changing
net
environment
is
going
to
outweigh.
However,
good.
B
It
is
at
any
one
point
in
time,
so
like
Even
in
our
community
within
ipfs
I've
had
so
many
conversations
with
people
like
oh
great,
like
let's
not,
invest
in
all
this
like
testing,
let's
kind
of
write
a
great
protocol
once
and
like
we'll,
improve
it
and,
like
then
it'll,
be
solved,
and
this
is
you
know
the
story
of
like
bit
swap
a
few
times
over
where
a
lot
of
work
went
into
improving
this
up
to
a
certain
point,
and
if
we
had
invested
that
time
into
the
testing,
maybe
things
would
be
in
a
much
better
spot
and.
B
Like
we're
going
to
be
re,
running
and
we're
running
and
running
experiments
so
try
to
make
them
General,
so
you
can
swap
out
like
the
different
underlying
transfer
protocols.
So
let's
talk
about
test
for
this
for
a
moment.
One
thing
I
want
to
so
I'm
going
to
talk
about
like
some
industry.
Examples
I
want
to
talk
about
test
ground
briefly
the
iPhone
Observatory
and
then
some
other
things
that
might
be
in
the
pipeline.
B
So
the
the
kind
of
the
the
great
software
that
we
use
and
rely
on
day-to-day
to
do
most
of
our
work
is
tested
massively
at
huge,
huge
scales.
So
this
goes
into
into
even
things
like
programming
languages.
Things
like
go
and
rust
and
so
on
are
have
pretty
serious
benchmarks
for
every
single
commit.
That
happens
browsers
very
famously
enable
thousands
of
people
to
to
contribute
to
the
thing
and
they
keep
that
working
well
and
they
keep
performance
High
because
they
have
invested
in
massive
scale
testing
across
tons
of
different
machines.
B
They
like
they
have
super
insane
workloads
to
test
all
kinds
of
corner
and
edge
cases
to
be
able
to
to
know
as
developers
that
when
they
contribute
one
change,
that
change
is
not
destroying
performance
in
some
way
or,
ideally
is
improving
performance
and
they
put
a
very
strong
emphasis
on
the
kinds
of
things
that
that
improve
performance,
kind
of
famously
like
Chrome,
for
a
while
I,
don't
know
if
this
still
exists
but
Pro
for
a
while
had
this,
like
you
know,
top
10
improvements
in
the
last
seven
days
for
perf
and
10
regressions
for
perf
and
so
right
away.
B
If
you
wrote
this
commit,
you
could
know
that,
like
oh
wow,
like
my
commit
just
like
Blew
Up
Performance,
like
that's
bad,
like
that's,
clearly
not
going
to
stay
in
or
if
you
greatly
improve
performance,
then
you've
got
a
ton
of
credit
in
that
community.
B
So,
the
point
of
being
like
you,
we
can
use
the
the
CI
structure
and
the
testing
structure
and
the
benchmarking
and
the
metrics
and
so
on
to
drive
a
lot
of
improvement
as
a
community,
but
it
sort
of
requires
that
kind
of
investing
in
that
experimental
setup
in
a
sense
and
we've
are
in
a
much
better
place
today
than
we
were
in
the
past.
But
you
know
we're
we're
still
kind
of
working
towards
that.
B
Imagine the setting where you're going to make a commit to Chrome, but in order to get it into Chrome you have to run the experiments yourself, argue that this is actually going to be better, present the data, and whatnot. That's going to be kind of intractable and impossible to do. We have to get to a point where this all just runs automatically...
B
In
the
background,
like
you
make
a
change,
you
push
it
somewhere
and,
like
you
know,
CI
produces
all
these
reports
out
for
you
and
so
I
would
I
would
kind
of
encourage
us
as
a
community
to
to
invest
in
that.
In
that
part,
I.
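A minimal sketch of what that per-commit loop can look like with Go's standard benchmark harness; `FetchDAG` and the workload path are hypothetical stand-ins for whatever transfer implementation is under test, not an existing API:

```go
// bench_test.go: run on every commit in CI so regressions show up as a diff
// against the previous commit's numbers.
package transfer_test

import (
	"context"
	"testing"
)

// FetchDAG is a placeholder for the code under test; a real harness would
// pull a fixed, well-specified workload over the protocol being measured.
func FetchDAG(ctx context.Context, workload string) error {
	return nil
}

func BenchmarkFetchLargeFile(b *testing.B) {
	ctx := context.Background()
	for i := 0; i < b.N; i++ {
		if err := FetchDAG(ctx, "testdata/large-file.car"); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running `go test -bench=. -count=10` on each commit and comparing the output with `benchstat` gives the same "did this change regress performance" signal described above for Chrome.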
B
I'll also plug the work ProbeLab has been doing, measuring a lot of things and creating really good reports. It's been leveling up a lot of the conversation, and even in the last two calls in this working group we saw several graphs from that group being used to motivate decision making.
B
So that's great and working really well. It'd be great to get more ProbeLab folks into this working group, working with you to help design the experiments and help design some of the infrastructure to keep testing this stuff, or even just the graphs. Sometimes a well-designed figure can make a huge difference in guiding a design or engineering process, so getting those folks involved might be helpful.
B
Something to keep an eye out for: later this year and next year we're likely going to be in a spot to run experiments with thousands of machines, and later on tens of thousands of machines; I think we can probably hit millions. So one really cool thing...
B
One really neat thing is that we're going to get to a spot where we can run massive test cases across these networks. We're doing the work today to (a) enable users with access to servers, desktops, or phones to run arbitrary compute and data networks and things like that; then, after that, we need to write tooling to be able to issue tests across these things.
B
So
when
you
sort
of
like
put
these
these
things
together,
like
imagine
being
able
to
run
test,
run
or
or
a
related
type
of
thing
over
over
a
network
like
this,
so
then
be
able
to
test
certain
workloads
at
a
massive
scale.
So
we
have
some
goal
to
like
we
have
like
some
protocol,
but
it's
supposed
to
work
in
a
particular
way.
B
Okay,
great,
like
we
can
test
it
now
and
and
this
this
is
the
sort
of
thing
that
will
allow
enable
us
to
test
not
just
kind
of
the
immediate
things
about
changing
your
protocol,
but
like
it
might
address
a
bunch
of
the
questions
people
have
about
multiplayer
downloads
or
how
these
protocols
work
with
content
routing.
B
So
you
know
how
this
kind
of
works
with
DHC,
with
dhts
or
indexers
or
whatever,
and
how
those
performance
metrics
totally
change
when,
when,
when
a
workload
is
working,
a
particular
way,
but
anyway,
just
kind
of
like
the
status
quo
scope
for
now
for
for
the
working
group,
but
like
this
will
become
like
more
relevant
later
in
the
year
and
next
year.
One
other
thing:
I
wanted
to
kind
of
flag,
is
great
thought
systems
and
greatest
systems
have
been
have
relied
on
extremely
well-specified
workloads.
B
So
that
means
it
is
worth
spending
some
time
figuring
out
precisely.
What
are
the
workloads
that
you're
trying
to
to
optimize
for
and
and
create
those
as
like?
A
a
a
you
know
create
a
repo
somewhere
like
where
you
can
add
new
workloads
and
so
new
people
as
soon
as
they
want
to
optimize
for
something
differently.
They
write
up
on
your
workload
and
they
add
it
to
that
setting
and
you
can
then
test
out
the
performance
of
moving
around
these
kinds
of
workloads.
B
So
just
because
you'll
you'll
have
differences
like
workloads
where
you're
moving
around
one
one
big
file
will
look
very
different
than
moving
around
like
subsets
of
like
little
pieces
of
a
file
or
not
a
little
different
than
video
and
that'll
look
different
than
databases
that
look
different
like
than
a
winfs
graph,
and
so
on.
So
getting
ahead
of
this
and,
like
figuring
out
what
the
workloads
that
you
want
to
improve,
are
it's
going
to
be
time
well
spent?
Now
it
might
not
be
worth
doing
now.
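A sketch of what entries in such a shared workload registry might look like; every field name here is invented for illustration, not an existing schema:

```go
// Package workloads holds well-specified transfer scenarios so that every
// candidate protocol is benchmarked against the same fixtures.
package workloads

// Workload describes one scenario precisely enough to reproduce it.
type Workload struct {
	Name        string // e.g. "single-large-file"
	RootCID     string // root of the DAG to fetch (placeholder values below)
	TotalBytes  int64  // expected payload size
	Peers       int    // how many peers hold the data
	AccessShape string // "whole-dag", "random-subgraphs", "streaming-prefix", ...
}

// Registry captures how different the shapes mentioned above are.
var Registry = []Workload{
	{Name: "single-large-file", RootCID: "bafy...a", TotalBytes: 1 << 34, Peers: 1, AccessShape: "whole-dag"},
	{Name: "video-streaming", RootCID: "bafy...b", TotalBytes: 1 << 30, Peers: 4, AccessShape: "streaming-prefix"},
	{Name: "database-subgraphs", RootCID: "bafy...c", TotalBytes: 1 << 28, Peers: 50, AccessShape: "random-subgraphs"},
}
```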
B
At least some of the graphs that I saw in the last couple of talks show that there's just a lot of low-hanging fruit in getting some of the basics done first. But I think this is pretty quickly going to...
B
...make a huge difference, like factors of two to ten in terms of how one protocol works relative to another, so figuring out those workloads will matter. And then, of course, reporting: with any of this kind of experimentation, make sure the data is accessible and the takeaways are clear for everybody; good figures, write-ups, presentations, and whatnot. This is, you know, standard experimental-loop type stuff. Ideally this is hooked up to CI directly, so again we can get to the kind of setting like Chrome and Firefox and so on, where you're making a change, you're experimenting with something, you can push it to a repo, and you can get a whole...
B
You
know
easy
report
that
shows
you
like
how
this
thing
performed
relative
to
others,
cool
now,
with
that
I
want
to
jump
to
some
ideas,
so
I
I
wanted
to
I
piled
up
a
set
of
ideas
here
and
there's
probably
others
that
they'll
come
to
mind
as
I
go.
But
I
want
to
talk
about
kind
of
some
insights
from
the
HTTP
model.
First
I
want
to
talk
about
like
new
connections
and
why
they
really
hurt.
So
this
might
be
less
relevant
to
this
group,
but
I
think
it.
B
It
should
kind
of
give
you
a
pause
when
you
think
about
multiplayer,
so
meaning
like
say
like
there's
a
number
of
folks
that
have
been
pushing
really
strongly
for
like
single
period
rituals
and
so
on,
like
that.
Oh
that
makes
sense.
Just
kind
of
I
want
to
flag
like
that.
New
connections
can
can
hurt
a
lot.
I
want
to
highlight
bitter
and
bit
Fields.
So
it's
great
to
see
krakensync
using
the
fields
as
well
in
the
last
one
like
that's
that's
great.
B
This
is
one
of
the
ideas
that
very
early
on
I
wanted
to
implement
invisible,
but
just
kind
of
time
pass
and
we
never
got
to
it,
and
when
we
built
a
graph
sync,
we
thought
about
doing.
B
B
I
want
to
talk
about
how
good
does
it
and
then
maybe
discuss
certain
things
about
both
encoding,
that
data
and
meta
and
keeping
metadata
that
might
really
really
help
transfer
protocols,
cool
so
I,
don't
have
supplies
for
all
these,
so
I'll,
just
kind
of
for
some
of
these
I'll
just
talk
through
while
looking
at
this
slide
so.
B
We
have
to
remember
that,
like
the
HTTP
stack
has
been
hyper
optimized
by
thousands
of
people
to
serve
a
lot
of
data
really
quickly
to
tons
of
applications
around
the
world,
so
HTTP
Stacks
have
been
hyper
optimized
already
so
and
and
much
more
today
than
even
when
I
started.
So
when
I
started,
HTTP
was
not
nearly
as
high
performance
as
it
is
today.
So,
in
this
amount
of
time
that
we've
been
building
I
be
fast,
the
HTTP
world
has
greatly
improved
in
its
own
stack.
B
We've
gotten
things
like
HTTP
two
and
three
we've
got
the
TLs
model
evolved.
We
have
just
a
ton
of
like
better
tools
for
for
Distributing,
it
should
be
content,
and
so
on.
So
like
a
an
initial
plug
that
I
would
have
for
everybody
here
is
that
one
potentially
great
hack
might
be
just
Leverage
The
HTTP
stack
that
has
been
hyper
optimized
for
request
response
for
data
and
write
a
very
lightweight
kind
of
grass
think
style
protocol
on
top
of
HTTP.
B
So
that
means
provide
like
requesting
and
serving
seid
as
like
one
of
the
path
parameters
and
potentially
a
whole
car
file
or
a
whole
graph
or
whatever,
and
just
ship
that
and
ship.
That
is
like
a
super
super
basic
request,
reply
transfer
protocol
on
top
of
HTTP
and
leverage
the
fact
that
a
bunch
of
HTTP
protocol,
HTTP
Stacks,
have
been
hyper
optimized,
underneath
the
hood
to
deal
with
tons
of
requests.
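A minimal sketch of that idea using only Go's standard library. The `/ipfs/<cid>` route and the `CarStore` interface are illustrative assumptions (the `application/vnd.ipld.car` media type is the one actually used for CAR files):

```go
package main

import (
	"errors"
	"log"
	"net/http"
	"strings"
)

// CarStore is a hypothetical interface: given a root CID, stream the DAG
// under that root as a CAR file.
type CarStore interface {
	WriteCar(rootCID string, w http.ResponseWriter) error
}

// stubStore keeps the sketch runnable; wire in a real blockstore instead.
type stubStore struct{}

func (stubStore) WriteCar(string, http.ResponseWriter) error {
	return errors.New("no blockstore wired in")
}

func carHandler(store CarStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// GET /ipfs/<cid> streams the whole DAG rooted at <cid> as a CAR.
		cid := strings.TrimPrefix(r.URL.Path, "/ipfs/")
		w.Header().Set("Content-Type", "application/vnd.ipld.car")
		if err := store.WriteCar(cid, w); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	}
}

func main() {
	http.HandleFunc("/ipfs/", carHandler(stubStore{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The appeal of the hack is that everything underneath this handler (connection reuse, TLS, HTTP/2 and HTTP/3 multiplexing, load balancers, CDNs) comes for free from the hyper-optimized HTTP stack.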
B
At
the
same
time
you
know
being
able
to
handle
millions
of
requests
is
like
a
standard
thing
for
a
good
HP
stack
being
able
to
do
kind
of
request.
Pipelining.
All
of
this
kind
of
stuff
is
already
built
into
them.
So
a
very,
very
great,
a
cool
hack
that
this
group
might
come
out
out
with
is
just
say:
hey,
guess
what
like
just
put
HTTP
into
ipfs
and
and
just
lean
on
all
of
those
all
of
those
Solutions.
B
We
don't
have
to
kind
of
just
to
do
things
from
scratch,
especially
since
the
hyper
optimized
since
then
cool
so
on
new
connections
and
when
I
kind
of
make
that
point
to
this
group.
So,
like
new
connections,
can
be
super
super
damaging
and
the
reason
is-
and
this
is
one
of
the
you
know-
pain
points
of
peer-to-peer
in
general
and
one
of
the
reasons
I,
don't
think
dhcs
are
that
good
in
in
general,
but
but
it
just
it's,
it's
not
just
speed
of
light.
It's
the
it's!
B
How
speed
of
light
hurts
you
in
when
you're
setting
up
new
connections
to
new
peers,
given
the
security
context
that
we're
in
the
security
context
that
we're
in
usually
requires
establishing
a
a
an
encrypted
Communication
channel
between
new
peers,
which
usually
means
multiple
round
trips?
Sometimes
one
sometimes
two,
but
usually
more
in
this
particular
diagram
that
I
have
here.
This
is
like
from
2019
when
we
have
like
much
worse
handshakes
and
multi-stream,
almost
like
slower
or
whatever.
B
So
we
ended
up
with
like
four
to
six
round
trips
to
set
up
a
new
connection,
and
this
shows
us
how
slow
a
DHD
query
was
at
the
time
like
this
is
a
trace
of
the
reference
on
the
right
is
a
trace
of
a
DHT
connection.
There's
a
request.
Each
row
is
a
different
PR.
B
That
I'm
trying
to
connect
to
the
length
of
the
green
bar
is
the
length
of
like
the
you
know
from
the
moment
when
I
first
send
the
request
to
them
and
I
started,
trying
to
open
a
connection
to
when
you
know,
I
start
like
I
get
a
response
from
them,
and
this
this
some
of
these
Bars
were
like.
B
We
can't
quite
see
the
the
the
the
times
here,
but,
like
some
of
these
bars,
the
bulk
of
them
are
kind
of
like
four
to
five
seconds
long,
and
so,
when
you're
kind
of
limiting
the
number
of
like
active
requests
that
you
have
going
and
so
on
and
you're
trying
to
talk
to
this
many
parties,
just
you
spend
most
of
the
time
either
in
these
kind
of
like
round
trips,
to
establish
a
secure,
Channel
and
and
waiting
for
to
to
get
the
answer
from
the
party
that,
like
you,
know,
sorry,
your
the
princess
is
in
another
Castle
or,
like
you,
you're,
you're
you're.
B
The
thing
you're
looking
for
is
not
here,
go,
look
somewhere
else,
and
so
these
kind
of
like
peer-to-peer
protocols
where
you
have
to
open
tons
of
new
connections,
to
new
peers
that
you've
never
seen
before
or
that
you
haven't
seen
in
a
while,
and
therefore
you
don't
have
their
keys
and
you
don't
have
a
encrypted
Channel
like
really
really
suck.
Now
caveats
here
could
be
that
you
could
get
around
this
by
a
play.
Things
on
encrypted,
which
you
know
that
was
David
Mozart.
B
As
a
suggestion
in
the
in
the
Fireside
Knight
dress,
Camp
was
like
no
don't
encrypt
that
I
I.
Personally,
don't
think
it's
a
good
idea,
because
then
all
of
your
requests
are
kind
of
going
in
the
clear.
So
the
other
option
is
find
find
an
encryption
model
that
doesn't
lean
on
that,
potentially
a
DHC
type
request.
B
Responsored
call
could
be
built
to
use
public
history
as
opposed
to
trying
to
use
public
keys
to
establish
a
shared
private
key
public
key
crypto
is
now
computers
have
gotten
a
lot
better
now
than
you
know
20
years
ago,
so
you
could
actually
just
use
the
public
Keys
directly
and
encrypt
the
stuff.
So
you
could
have
a
one,
a
one
round
trip
type
thing,
with
request
reply,
protocols
to
make
something
like
this
drastically
faster.
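A minimal sketch of the one-round-trip shape, sealing a request directly to a peer's already-known public key with NaCl box (`golang.org/x/crypto/nacl/box`); the wire format of the request itself is made up for illustration:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// The responder's long-term keypair. The requester is assumed to already
	// know serverPub (e.g. learned from a signed record or the peer ID).
	serverPub, serverPriv, err := box.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Requester: SealAnonymous generates an ephemeral sender key internally,
	// so the encrypted query can ride in the very first packet, with no
	// handshake round trips beforehand.
	request := []byte("WANT <cid>") // hypothetical wire format
	sealed, err := box.SealAnonymous(nil, request, serverPub, rand.Reader)
	if err != nil {
		panic(err)
	}

	// Responder: open with its own keypair and answer in one round trip.
	plain, ok := box.OpenAnonymous(nil, sealed, serverPub, serverPriv)
	if !ok {
		panic("decryption failed")
	}
	fmt.Printf("server got: %s\n", plain)
}
```

The trade-off is the one described above: public-key operations per message instead of a cheap symmetric channel, which modern hardware makes far more affordable than it was 20 years ago.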
B
But
you
know
this
kind
of
like
newground,
new
cryptography
and
whatnot,
and
you
know
this
is
much
more
important
for
things
again
like
these
days
and
so
on,
but
it
does
kind
of
impact
the
multiplayer
question
for
these
sort
of
protocols
right.
B
So,
if
you're
in
in
the
kind
of
like
missile
versus
graph
sync
debate
of
having
like
multiplayer
retrieval
versus
single
peer
and
whatnot,
people
have
pointed
out
a
there's,
a
bunch
of
good
data
that
shows
usually
you're
just
requesting
for
one
peer,
so
single
peer,
just
just
optimize
for
that
and
but
but
B.
The
point
I'm
making
here
is
just
multiplayer
sucks,
because
you
have
to
opening
up
opening
up.
B
You
have
to
open
up
a
ton
of
new
connections
and
that's
really
really
painful,
and
so
you
you
may
want
to
kind
of
it
for,
for
the
most
part,
you're
going
to
end
up
always
requesting
from
one
peer,
even
when
you
try
to
to
request
from
many
peers,
even
if
you're
trying
to
request
for
many
peers.
Unless
you
have
stable
connections
or
you
know
their
their
keys.
B
If
you're
dealing
with
very
large
networks
with
like
thousands
to
sorry,
tens
of
thousands
to
million
superiors,
you're
gonna
end
up
having
to
set
up
new
connections,
new
channels
and
whatnot
and
you're
going
to
pay
a
lot
in
latency
here
now.
B
This
another
heavy
caveat
to
this
entire
di
tribe
is
that
this
is
for
retrieving
lots
of
small
content
from
lots
of
different
peers
in
a
different
kind
of
application,
setting
where
you're
trading
from
a
smaller
set
of
peers
or
you're
trooping,
very
large
content,
then
this
latency
doesn't
matter
because
spending
spending
even
five
seconds
to
start
moving
a
terabyte
file
doesn't
matter
right,
and
so
this
goes
back
to
the
to
the
point
out
points
I
was
making
earlier
that,
like
the
application
context,
will
change
the
the
thinking
cool,
so
I'm
gonna
go
into
big
fields
for
a
moment,
so
it
feels
are
a
great
great
idea
that
should
be
used
way
more
in
this
protocol.
B
So
it's
great
to
see,
correct
and
saying
talk
about
it
last
time,
I'll
mention
it
here
again
and
I'll
kind
of
credit
BitTorrent,
because
return
is
kind
of
one
of
the
first
protocols
that
that
lever
spit
feels
this
way,
there's
been
a
ton
of
other
approvals,
of
course,
over
the
last
20
30
years
that
that
I've
used
them
in
tons
of
different
ways,
but
that
but
the
basic
kind
of
idea
here
is
you.
B
If
you
know
the
data
that
you're
dealing
with,
if
you
know
the,
if
you
can
enumerate
the
objects
in
a
stable
way,
then
you
can
use
bit
fields
to
communicate
with
each
other
in
terms
of
which
objects
you
need
and
which
ones
you
already
have.
Or
you
don't
don't
want
right,
and
so
for
things
like
bit,
Swap
and
others
like
the
want
want
lists
and
whatnot.
Instead
of
sending
entire
cids
you're
just
sending
a
single
bit
per
CID.
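A minimal sketch of that trade; the piece numbering is assumed to be agreed out of band (BitTorrent gets it from the torrent's piece list):

```go
package main

import "fmt"

// Bitfield is a simple bitset: object i lives at bit i%8 of byte i/8.
type Bitfield []byte

func NewBitfield(n int) Bitfield  { return make(Bitfield, (n+7)/8) }
func (b Bitfield) Set(i int)      { b[i/8] |= 1 << (i % 8) }
func (b Bitfield) Has(i int) bool { return b[i/8]&(1<<(i%8)) != 0 }

func main() {
	// Suppose both peers agree these ten objects are numbered 0 through 9.
	want := NewBitfield(10)
	for _, i := range []int{2, 3, 7} { // the pieces we are missing
		want.Set(i)
	}
	// Two bytes on the wire instead of three full (roughly 36-byte) CIDs.
	fmt.Printf("wire bytes: %x, want piece 3? %v\n", want, want.Has(3))
}
```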
B
Now
you
have
to
understand
Which
CID
you're
talking
about,
and
this
gets
difficult
with
Dynamic
content.
The
reason
say
BitTorrent
could
do
this
is
because
they
did
a
bunch
of
pre-processing
on
an
individual
torrent.
You
know
exactly
how
many
pieces
you
had
so
you
could.
You
could
talk
about
each
piece
and
you
could
send
easily
a
bit
field
because
everybody
knew
which
pieces
you
were
talking
about.
B
So
you
could
easily
refer
to
the
you
know,
a
hundredth
piece
or
the
the
201st
piece,
and
both
parties
knew
precisely
what
piece
that
was
now
in
in
Kraken
sync:
we're
dealing
with
kind
of
a
penalty.
Log
I
think
the
inspection
was
kind
of
looking
at
apparently
only
logs
from
hypercore
and
others
work.
You
already
know
the
graph
ahead
of
time,
so
you
can
kind
of
lean
on
that,
and
you
can
talk
about
bit.
Fields
would
affects
that
graph.
B
The
same
thing
happens
with
Graphics,
so
one
of
the
goals
that
we
wanted
to
have
is
eventually
have
a
selector.
That's
a
bit
field
selector
over
graphing,
so
you
could
send
around
a
bit
field
to
select
from
with
it
from
a
CID.
So,
like
you,
pass
a
CID
on
a
bit
field
on
top
of
that
CID,
and
that
gives
you
all
the
information
you
need
to
to
Traverse
the
graph.
Now.
B
Just
one
caveat:
this
is
where
thread
models
matter,
so
in
a
trusted
setting
that's
fine
in
an
untrusted
setting
that
could
be
potentially
messy
because
you
might
not
have
the
like.
The
Midfield
might
not
actually
match
the
real,
the
real
graph,
so
just
kind
of
be
aware
that,
like
these
kinds
of
protocols
work
when
you
don't
have
like
malicious
code,
that
is
trying
to
exploit
the
fact
that,
like
it
feels
working
what
not
or
they
might,
we
might
need
to
do
put
some
work
in.
B
If
we
wanted
to
work
in
malicious
environments,
then
we
need
to
kind
of
make
them
potentially
have
proofs
and
so
on,
like
this
is
one
of
the
settings
where,
like
you,
could
do
some
work
ahead
of
time.
Pre-Process
some
content
end
up
with,
like
the
shape
of
the
graph
in
a
weather,
strive
way
and
a
proof
about
that.
So
you
can
say
you
should
say
CID.
If
you
have
such
people
like
that
graph
and
here's
a
proof
that
that
computation
was
done
correctly.
So
now
everybody
can
use
that.
B
The
problem,
though,
is
like
I
would
sort
of
like
say
like
there
are.
These
are
good,
interesting
Solutions
in
the
long
term,
if
the
problem
does
arise,
there
are
malicious
nodes
that
are
exploding
this
bit
field
stuff
and-
and
this
is
not
just
kind
of
random
circulation.
This
was
a
an
issue
in
some
of
the
Victorian
clients,
so
there
were
a
family
of
Veteran
clients
that
were
altered
to
be
malicious
in
some
ways
or
or
be
extracted
from
other
nodes.
B
Like
you
know,
things
like
bit
thief
and
bit
Tyrant
and
others
and
whatnot,
and
those
were
examples
where,
like
they
were
manipulating
kind
of
the
exchange
protocols
and
whatnot,
so
this
kind
of
stuff
can
and
and
will
happen
now.
Most
of
the
this
group
probably
doesn't
care
about
that.
Setting
most
of
this
group
probably
just
cares
about
hey,
let's
move,
let's
just
focus
on
in
a
fully
trusted
setting
move
around
just
bytes
from
one
place
to
another
and
like
just
let's
focus
on
making
this
fast
and
in
those
settings
then
yeah.
B
Just coupling bit fields with CIDs, being able to express bit fields coupled with CIDs, will work super well. Just be careful that once you deploy that into the wild, and you have random peers being able to make requests, your implementation is not exploitable if somebody sends you the wrong bit field. So just assume that whatever bit field people send you might be suspect, and as you write your implementation, don't assume it's correct. Cool. And yeah, I think there was a discussion...
B
I
last
talk
about
run,
length,
encoding,
which
is
yeah.
It's
it's.
Why
you
can?
You
know,
afford
to
generate
like
these
massive
bit
fields
and
and
send
them
around
in
a
super
compressed
way.
So,
like
your
one
list
can
be,
can
be
tiny,
there's
a
possibility
here,
where
even
for
very
large
graphs
with
millions
of
objects,
we
can
really
lean
on
on
this.
This
sort
of
direction
to
to
send
around
you
know
fairly
large.
B
It
feels,
but
you
know,
might
be
worse
to
kind
of
move
around
so
as
an
example
like
indexers
and
so
on
could
be
communicating
with
these
kinds
of
big
fields.
B
B
Where
and
and
how
to
how
to
get
them
and
whatnot,
so
so
I
think
I
think
some
work
probably
should
be
down
there
to
like
figure
out
ways
of
leveraging
good
Fields,
like
that
in
those
kinds
of
settings
where
you
have
multiple
different,
independent
graphs
and
then
you're,
starting
to
kind
of
bundle
them
together,
and
so
you
might
be,
you
might
be
trying
to
request
things
like
you,
don't
even
have
to
see
any
other
of
the
thing
that
you're
trying
to
look
for,
but
you
know
how
to
sort
of
get
it.
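A minimal sketch of run-length encoding a bit field; the alternating-run-lengths format below is illustrative, not the wire format of any of the protocols discussed:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// rleEncode emits alternating run lengths as uvarints, starting with the
// run of zero bits (possibly of length zero).
func rleEncode(bits []bool) []byte {
	var out []byte
	buf := make([]byte, binary.MaxVarintLen64)
	cur, run := false, uint64(0)
	for _, bit := range bits {
		if bit == cur {
			run++
			continue
		}
		out = append(out, buf[:binary.PutUvarint(buf, run)]...)
		cur, run = bit, 1
	}
	return append(out, buf[:binary.PutUvarint(buf, run)]...)
}

func main() {
	// "Have everything except the last 100 of a million pieces":
	// a 125 KB raw bit field collapses to a handful of bytes.
	bits := make([]bool, 1_000_000)
	for i := 0; i < len(bits)-100; i++ {
		bits[i] = true
	}
	fmt.Printf("1M-bit field -> %d bytes after RLE\n", len(rleEncode(bits)))
}
```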
B
This starts getting into pipelining, where you're trying to send a rough description of the thing that you're trying to request, so that the server processes the description, resolves it to the CID of the thing you're trying to get, and then on your side you need to have some other way of proving that that's the correct thing. But anyway, that's getting more speculative. So, on pipelining:
B
There is a problem here around codecs, and being able to process the IPLD object that you get and explore it. I think this was already discussed last time: pipelining does not work well (you can't count on pipelining) if you don't know how to traverse the graphs themselves.
B
My
prediction
is
that
this
will
be
sort
of
added
with
wasm.
So
what's
what
we
have
ipvm
and
we
have
wasm-
will
be
in
a
drastically
better
spot
and
and
this
pipeline
will
be
back
back
to
working
and
I.
Think
in
the
meantime,
you
can
just
kind
of
implement
pipelining
for
the
for
a
key
set
of
objects
that
you
care
about
and
or
suggest
that
people
like
encode
their
things
to
cboard
IBD
right.
B
So,
if
you're,
if
you're
other
more
advanced,
IBD
data
structure,
encodes
down
to
CBI
building,
then
then
it'll
just
sort
of
work,
and
you
can
do
pipelining
and
graph
sync
on
top
of
that,
and
so
I
think
I
think
enough.
Users
will
want
to
do
that
that
just
supporting
pipelining
for
those
those
types
of
objects
will
be
will
be
good
enough.
B
The very basic codecs, however, don't require a full selector, a super expensive evaluator thing, and you can still get most of the way there. You can have something close to full selector traversal that just supports a smaller or dumber set, or whatever, and get most of the utility and most of the benefit of selector traversals. And then, on the flip side:
B
On
the
other
side,
CSS
is
a
great
example
of
we
can
actually
Implement
these
kinds
of
things
and
make
them
really
fast.
So
when
you
load
a
web
page,
your
browser
is
doing
tons
of
the
kinds
of
set
operations
and
intersections
and
traversals
that
a
graph
sync
selector
or
an
iple
selector
would
want
to
do
and
it
and
your
browser
does
them
extremely
fast,
extremely
extremely
fast.
So
I
just
wouldn't
take
for
an
answer
that
selectors
are
slow
in
general
because
your
browser
does
stuff
like
really
really
quickly.
B
The
again
latency
is
the
is
the
biggest
problem
like
you.
Don't
want
to
have
to
deal
with
the
speed
of
light.
Local
computation
can
can
be
massively
accelerated,
so
I
would
just
imagine
that
there
will
be
a
set
of
applications
where
selectors
are
just
like
super
super
useful
and
having
a
transport
that
that
that
enables
that
and
and
just
really
doesn't
kind
of
fear
on
the
bush
there
and
like
really
does
support
it
and
supports
it
really
fast,
like
meaning.
B
The
implementation
has
been
optimized
to
support
that
that
that'll
I
think
in
the
long
term
end
up
winning,
because
you
should
sort
of
assume
that
you
know
in
a
few
years
from
now,
a
lot
of
this
stuff
is
going
to
be
hyper
programmable,
like
you,
you're
going
to
have
VMS
everywhere,
you're
going
to
have
awesome,
payloads
everywhere,
you're
going
to
have
machine
learning,
models,
generating
code,
machine
learning,
models,
generating
new
programs
and
new
applications,
and
a
lot
of
this
sort
of
stuff
is
going
to
really
rely
on
kind
of
local
evaluation
of
whatever
matters
at
in
in
that
setting,
and
so
I
think,
like
just
assume
that
the
world
is
going
to
get
drastically
more
programmable
and
smaller
smaller
little
embedded
languages
will
be
everywhere,
and
so
these
kinds
of
selection
traversals
will
probably
be
the
way
that
it'll
work
in
the
future.
B
Now, it might not work that way in the meantime, but, you know, that's up to you to decide. Cool. I want to talk about two-hop requests for a moment. A lot of the graphs about the network today, and how content is supplied on the network at the moment...
B
Just
sort
of
point
out
that
you
could
do
some
of
these
transfer
protocols
and
have
them
be
to
hop
instead
of
one
hop
meaning
if
you
have
a
setting
kind
of
like
bit,
swap
where
you
send
out
one
out,
you
could
propagate
it
to
this
to
a
second
hop
and
that
way
get
like
greatly
increase
the
likelihood
you're
going
to
get
get
the
content
back.
So
this
is
less
about
moving
the
data
quickly,
which
is
kind
of
what
this
workbook
cares
more
about.
B
However,
it's
worth
mentioning
here,
because
if
you
end
up
producing
a
an
exchange
protocol
that
doesn't
support
this
kind
of
thing
and
only
lives
in
a
in
a
single
request,
reply
World
in
kind
of
like
the
traditional
HTTP
World,
then
it
it
you
might
kind
of
make
it
harder
to
to
leverage
these
kinds
of
these
kinds
of
optimizations.
So
just
kind
of
something
to
be
aware
of
in
that
like
for
the
most
part,
you
will
be
requesting
everything
from
a
single
peer.
B
However,
you
will
be
finding
things
from
many
peers,
and
so
it
just
think
of,
like
the
multiple
setting
is,
is
more
about
like
content
Discovery
than
it
is
about
downloading
a
lot
of
data
from
from
multiple
peers.
B
One
plug
for
get
that'll
have
here
is
like
get
went
through
the
same
very
similar
kind
of
Discovery
processes.
We
are
and
they
ended
up
making
almost
the
same
kind
of
position.
So
so
one
one
part,
is
they
had
a
very
simple
and
straightforward
single
object.
B
Transfer
protocol,
where,
like
you,
would
kind
of
tell
the
server
what
you're
looking
for
and
what
you
have
already
roughly
then
it
would
make
kind
of
like
a
dumb
education,
dumb,
but
kind
of
educated
guess
about
what
to
send
over
and
then
the
server
would
just
start
sending
you
over
a
ton
of
little
objects
and
you
can
check
them
locally
because
you
have
the
hash
and
and
so
on,
so
think
of
that
as
kind
of
like
graph
sync,
it
was
like
a
very
you
know,
special
case
graph,
sync
sort
of
thing
you're
telling
like
it.
B
Yeah,
whatever
you
want,
what
ref
you
have
and
the
server
kind
of
figures
out
the
difference
between
those
two
figures
out
all
the
objects
that
you're
likely
missing
and
then
sends
them
to
you
in
order
to
do
this,
I
don't
know
if
the
dumb
protocol
does
this,
but
it
is
likely
that
you,
you
have
to
send
a
bunch
of
your
refs
over,
not
just
like
a
single
head
that
you
have,
but
you
might
to
optimize
this
to
to
reduce
the
likelihood
for
duplication
and
so
on.
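A minimal sketch of that want/have negotiation; the `Object` type and `lookup` function are hypothetical, not git's or IPFS's actual APIs:

```go
package negotiate

// Object is a content-addressed node with links to its children.
type Object struct {
	ID    string
	Links []string
}

// MissingObjects walks the DAG from the wants, cutting off at anything in
// haves: the resulting set is what the server should send over.
func MissingObjects(wants, haves []string, lookup func(string) Object) []string {
	seen := map[string]bool{}
	for _, h := range haves {
		seen[h] = true
	}
	var send []string
	stack := append([]string(nil), wants...)
	for len(stack) > 0 {
		id := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if seen[id] {
			continue
		}
		seen[id] = true
		send = append(send, id)
		stack = append(stack, lookup(id).Links...)
	}
	return send
}
```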
B
The
the
server
looks
at
those
figures
out
what
you're
trying
to
get
and
then
just
only
sends
you
the
objects
that
you
don't
currently
have
and
that's
a
really
good,
like
pretty
efficient
way
that
a
lot
of
data
gets
moved
around,
but
not
even
there
gets
found
that
this
was
too
slow
in
certain
settings
and
that's
where
dealing
with
them
like
each
individual
object,
kind
of
sucks,
and
so
this
is
where
Pac
files
came
from
in
the
same
way
that
we
have
car
files.
B
So
we
went
through
exactly
the
same
Discovery
in
a
sense,
getting
up
creating
pack
files
and
then
moves
around
entire
path
like
files
as
a
whole.
You
know
these
kind
of
like
bundles,
large,
bundles
of
objects,
and
then
you
can
move
around.
You
can
pull
objects
out
of
pack
files
and
fact
files
themselves
also
use
that
type
compression
which
is
which
is
pretty
sweet,
because
then
you
can
kind
of
move
around
move
around
objects.
That
way.
B
If you can change the encoding, meaning how you shape the data and how you lay out the data, that can greatly change the speed too. So, for example, Bitswap doesn't do very well with the default layout of graphs, which is kind of a large balanced tree, because every layer is kind of expensive.
B
If
you
said,
like
you,
use
like
the
the
left
line,
you
know
leaning
trees,
where
you
can
get
to
the
to
the
data
you
care
about
like
really
fast,
then
that's
what
actually
can
perform
really
well
because,
even
though
it's
kind
of
layered
and
Pipeline
and
so
on,
you
will
immediately
get
some
of
the
objects
that
you
care
about
and
you'll
start
be
able
to
start
being
able
to
stream
those
out
to
the
user.
B
So
in
the
video
setting,
for
example,
like
you
can
get
good
video
performance
that
way
the
other
thing
about
metadata.
This
is
we're
kind
of
like
decorating
the
graph
itself
with
the
things
like
the
shape
of
the
graph,
the
number
of
the
objects,
the
size
of
the
objects,
all
of
that
kind
of
information.
If
you,
if
you,
if
we
as
we
were
traversing,
the
graph
could
know
that
information
was
there,
then
that
could
like
yield
drastically
better
distribution
protocols
like
drastic
drastically
better
replication
protocols
that
could
lean
on
that
information.
B
That's
in
the
graph!
Now
you
have
to
worry
about
the
graph
being
maniformed
and
leading
to
security
problem
and
so
on,
and
so
so
you
have
to
be
careful,
but
that
could
be
like
a
drastically
better
place
to
be
so.
B
I
would
encourage
this
group
to
not
just
explore
ways
of
just
moving
graphs,
but
also
how
to
encode
data
to
make
the
movement
of
the
graphs
themselves
like
way
way
better
and
way
faster,
going
back
to
like
the
bit
fields,
for
example,
you
could
you
could
decorate
you
could
if
you
were
able
to
decorate
the
graph
itself
with,
like
the
shape
of
the
the
each
node
could
have
a
shape
of
of
itself,
then
the
bit
feels
could
work
extremely
well
and
you
could
use
the
bit
fuel
to
remember,
like
figure
out,
what's
underneath
a
particular
C
idea
and
whatnot,
but
again,
I
would
have
to
like
either
trust
the
data
or
have
some
proof
that
that
encoding
was
done
correctly.
B
All
right,
cool
I'm
going
to
ask
now
because
I
want
to
leave
at
least
a
few
minutes
for
questions,
but
I
I
agree.
Okay,.
A
Thank you so much, Juan. Does anybody have any questions? Do you want to kill the screen share for a second, Juan? Perfect.
A
I think one that jumped out to me, that this group has really been struggling with, is the interplay between content discovery and data transfer, right? Specifically, one of the first questions that came up in the working group was that Bitswap does a whole lot of content discovery.
A
What
is
the,
what
are
the
responsibilities
of
that
and,
and
particularly
when
we
think
about
coupon
like
there's
a
tight
coupling
there
and
there's,
as
you
mentioned,
with
the
two
hop
requests
conversations
there's
also
like
what
are
we
putting
in
dhts
and
network
indexes?
Are
we
just
putting
root
cities
or
are
we
just
putting
like?
Are
we
putting
group
City
Plus
bitfield
like
and
so?
B
I guess two threads of thoughts on the discovery protocol side of things. I do think that, in tons of applications, many developers will want to leverage the optimization that, if you're already connected to a set of peers, you should be able to pull data from them, or you should be able to ask them for the content just in case they have it and whatnot. That's a very cheap thing to do that can optimize bandwidth use and all that kind of stuff.
B
So
I
think
like
that's
the
case,
and
so,
but
but
it
doesn't
mean
that
that
a
single
like
like
a
single
party
or
like
two-party
protocol,
could
be
wrapped
by
a
multi-pier
protocol
right.
So
you
could
have
like
a
single
peer.
Retrieval
thing
that
gets
wrapped
by
another
thing.
That,
like
is
the
one
that
kind
of
knows
that
you
can.
There
are
many
parties
and
many
peers,
and
you
may
want
to
send
requests
to
multiple
of
them
and
whatnot.
B
My sense is that computing proofs over code is going to get cheaper and cheaper and cheaper, and just much more prevalent over time; it's just that right now it's very clunky and cumbersome, so I probably wouldn't recommend anyone worry too much about it at the moment, or spend a lot of time on it now.
B
But I do think that eventually this will sort of become the norm in a lot of software: you'll be able to carry around proofs that the software is doing things correctly. We're probably about three to four years away from that being super cheap to compute, and then, after that, people will just start embedding this in a lot more places.
D
This group has already been struggling with that, and I feel like when you start with "replace the protocol for everyone," you immediately have a general-case set of use cases you have to satisfy, and that makes it very hard to even get anything off the ground, right? So it's interesting.
D
If
I
was
hearing
you,
you
write
you,
you
were
sort
of
suggesting
the
that
we
like
possibly
abandon
the
idea
of
one
protocol,
which
I
think
is
right
and
then
hopefully
solve
the
problem
of
protocol
proliferation
with
programmability.
Eventually,
though,
that
does
seem
itself
to
be
its
own
sort
of
like.
Let's
hope
this
all
ends
and
works.
D
You
know,
but
but
I
like
the
end,
but
your
suggestion
is
write
it
in
Russ,
so
it's
at
least
compileable
to
wasm
and
then,
when
all
that
stuff
is
there,
maybe
that'll
be
not
a
some
a
good
transition.
If
I,
if
I
understood
you're
like
for
now
suggestion
yeah.
B
Yeah, I would be okay with having a few, and then I would use programmability to get out of having to implement them everywhere. And I've said this in other places: if I was designing IPFS today, two things. One, I definitely regret not putting a VM in IPFS from day one; meaning, having a VM in all IPFS nodes as soon as we can would be great, and then you can have programmability.
B
There
are
ways
of
handling
the
the
the
sandboxing
and
ways
of
handling
like
the
the.
How
do
you
trust
the
code
that
you're
gonna
execute
and
whatnot,
or
how
do
you
keep
those
runtimes,
low
and
whatnot?
B
And
then,
second,
if
I
were
implementing
episodes
today,
I
would
probably
implement
most
of
it
in
Rust
to
then
be
able
to
just
compile
it
down
to
awesome
and
run
it
everywhere,
especially
in
like
super
small
embedded
devices
that
are
just
gonna
have
like
not
going
to
be
able
to.
You
know
deal
with
like
the
go
run
time
and
and
or
JavaScript
runtime
and
whatnot.
B
So
I
would
yeah
that
yeah
it'd
be
okay
with
two
or
three
protocols,
write
them
in
Rust
and
plan
to
compile
them
into
wasm
and
put
wasm
everywhere.
D
Yeah
I
mean
also
like
that
particular
use
case
is
one
where,
like
the
proliferation
of
protocols
is
only
hopefully
n
n
equals
five
or
less
and
so
like.
So
then
you
so
then,
like
that's
relatively
trusted.
You
know,
you
know
if
you
have
five
well-known
SIDS
for
the
code
for
to
run
to
run
those
protocols
that
that's,
maybe
not
as
hard
as
some
of
the
other
potential.
You
know
things
that
could
happen
cool.
That's
all.
B
Great, thank you so much, everybody. Great talking with you, and thanks for doing this super valuable, important work; let me know if I can be helpful along the way. All right, thanks.