From YouTube: Move The Bytes Working Group Meeting 2
A: You see a slide deck. Someone give me a thumbs up. Perfect, cool, all right, welcome. Quick agenda for this call: today I'm going to give you a five-minute progress report on what the group has been up to and what's been going on, and then we're going to dive into a metrics report.
A: We have done another round of metrics collection and I want to report out on that. We had originally scheduled a talk from Dig about MemeSync, but our team internally shifted, and we have a different protocol, called KrakenSync, that Rudiger is going to present. You'll be hearing more from Rudiger on that; he'll talk for 10 to 12 minutes, and then we'll just have an open discussion from there.
A: That's a chance for everybody to weigh in and help shape the future of this working group. So, diving right into it. If you have questions or anything, the meeting notes are linked; I'll link them again. Actually, Casey, can I ask you to link the meeting notes in this chat? Feel free to drop things into the chat as the meeting goes along.
A: That should be a freeform document that you should feel welcome to add to. Anyway, progress report: where are we at? We're trying to ship a data transfer protocol that can replace Bitswap by the end of Q1 2023. In terms of time, we are here: we're on our second meeting, and we have one more meeting before the holiday break, so I just wanted to signpost that that's where we're at.
A: We'll be speaking at the next meeting on December 14th, and then it'll be break time, where everybody is obviously just going to quietly work on data transfer protocols for a while. Our first meeting back will be January 4th. All this is reflected in the Luma, but there has been some confusion with meetings, so I'm hoping to do a little better on that.
A: A couple of notable bits have come up in the Move the Bytes working group Slack. I'm going to try and call out some of the ones I found interesting. Good reads: Guillaume and Martin both had really nice write-ups that they posted in the channel; I've linked to those in the meeting notes. One of them makes a really great point for data over HTTP; we mentioned last meeting the necessity of HTTP support for a new protocol.
A: I think it does a really interesting job of positioning that as a constructive option. And Bitswap success measurements is a really interesting piece of work; I'm hoping we can do some work to connect it with the measurements that we're doing in this working group over time. Both are interesting reading if you have a chance, linked in the notes. We had a bunch of things we said we would do last meeting; we got three out of five done.
A: We still have to work on the Testground latency simulation issue. It's making progress; the Testground team is prioritizing it, which is great. We did get some multi-node test plans, which we'll talk about. We're still working on getting CAR Mirror measured in Testground; that's work in progress.
A: We have a new protocol, KrakenSync, that has been measured in our one-to-one measurements, and we also have documentation on how to contribute a test plan, which I think is pretty crucial given how many moving pieces are involved in setting up Testground. From that, we already know a few things that we want to get done in time for next meeting.
A: The Testground latency simulation, that's a carryover. We want to do multi-node hotspotting work; I'll talk more about that in a bit, and I have a request for two teams. We want to establish a plan for CID request frequency. This has come up a little bit, but I think it's emerging as a really important thing to know more about: namely, how often a CID is requested on the network.
A: And then, yeah, I think I'm getting a little bit of noise; if everybody could check their mutes, that'd be lovely. And then, last but not least:
A: Measure message counting in GraphSync. That's an open item we'd like to do; GraphSync's got some measurements, and it'd be nice to make sure we capture them. Moving on. How do I not move these? Okay.
A: So there we go, okay. If you can check out the doc, it is worth it if you are planning on contributing something. Ideally, what this does is let you speedrun some of the pain of hooking a protocol into this test plan. There's a document now getting you up into sort of a Hello World of "hey, this is how I test my protocol." Ideally, this should make it easier to contribute a protocol.
A: We want to lower that barrier to entry for other folks, and make contributing plans less of a thing only we are able to do. If you have questions about that: we're trying to direct all of our conversations into the Filecoin Move the Bytes working group Slack, so feel free to kick up discussion in there.
A: On top of that, we now have a set of normalized names for the metrics we're working on, which we'll go into; they're in the Notion document for the Move the Bytes working group. What we're actually doing to measure stuff is just grabbing blobs of output, and so if our protocols use the same names for the same metrics and measure them in the same ways, we get a lot of this for free. You don't have to do that:
A: if your protocol doesn't want to do that in the real world, we can adjust it for the test plan, but it's a lot easier if we work on standardizing it, and I think this would be a benefit. So we're working on some of these. These are our general first-draft pass at what we think we should be measuring. It would be really fun to work with others to refine these; that's my call to action.
A: Hopefully that covers some logistics. Again, if you have questions about that, it's in the notes. On to metrics report number two: what did we do? We got to multi-node testing. We got to the point where we have a test plan that runs real Kubo instances with any ratio of requesters to providers.
A: In the last two weeks, we focused on a single requester against one, three, five, and ten providers, for a five-megabyte DAG with two blocks. There's a full report in the Notion, but I'm going to walk through some of the high-level stuff. Generally, a single example of one requester and 10 providers looks something like this, which starts to get kind of hard to parse.
A: But imagine this top row is what the requester is doing, and the bottom row is the aggregate of what the providers are doing: provider received messages, provider sent messages. It's really important to note that here we're only testing Bitswap, and Bitswap will group messages together.
A: So the sent message count will include multiples of the wants and cancels side of things, and it's important to understand that this number is often higher as a result. These are crossed out because a requester should never issue a HAVE or a DONT_HAVE, and the opposite holds for providers.
A: Hopefully that all makes sense. If we look at the behavior we've observed with N providers, we're seeing this pattern. I don't know if you can read the text here, but the total number of messages sent across the entire simulation is the blue line, and the requester's sends are the gray line just beneath it, which is scaling with the number of providers.
A: We're now seeing the situation where, in a 10-node simulation, we're getting 74 messages. It's a similar story for requester bytes sent, where we're getting linear scaling of the number of bytes sent from the requester to providers as more providers are listed.
A: That leads us to a couple of takeaways. In an isolated single-requester environment, we're getting an overhead of roughly 450 bytes per provider. In the current setup, the overhead scales linearly with the number of providers we're connected with, and the message counts scale slightly below linearly. So, for the same result, between one provider and 10 providers we get 5.8 times more messages and 6.8 times more data sent. Some of this is understandable: you're coordinating with multiple peers.
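The linear pattern described above can be sketched numerically. The 450-bytes-per-provider figure comes from the report; the base cost used here is a placeholder assumption, not a measured value:

```python
# Hypothetical model of the reported scaling: requester-side control
# overhead grows by ~450 bytes for each connected provider, while the
# payload fetched stays constant. BASE_BYTES is a placeholder, not a
# measured number.
BASE_BYTES = 450          # assumed cost with a single provider
PER_PROVIDER_BYTES = 450  # reported marginal overhead per provider

def control_overhead(providers: int) -> int:
    """Bytes of coordination overhead for one requester."""
    return BASE_BYTES + PER_PROVIDER_BYTES * (providers - 1)

for n in (1, 3, 5, 10):
    print(n, control_overhead(n))
```

The point of the model is only that the slope is constant: each additional provider costs the requester the same fixed overhead.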
A: You should be getting more data. The thing I think we're really concerned with is the slope of that curve; ideally, having it not be linear would be cool. The thing that is really neat about this is that across all the simulations, they all came back with the same time to fetch, which I think is a very interesting callout: folks worked really hard to make that true, and it's a really neat thing.
A: It also calls into question one of the things I put up last meeting: having our highest-priority metric be the total time for all nodes in a simulated network to acquire all valid and wanted DAGs, because Bitswap is actually accomplishing that. In the multiple-provider scenario it is delivering on that promise, and so we need some refinement here: we need to think about whether this metric, as the North Star, needs to be either grown or made non-exclusive.
A: There are a couple of other big takeaways that came out of this. When our team started looking at this, we realized there's a bit of an interplay between the actual implementation and the design that I want to call out and raise to this group: I think we need to spend more time thinking about state management at the spec design phase.
A: Implementations really need to contend with this; they have to deal with a ton of logistics around managing communication state, and I think that interplay is something this group really needs to contend with. My first recommendation is that spec authors should at least do a back-of-the-napkin estimate of the state implementations will have to manage. Prioritizing that early lets you spend resources in the right places, and I think we should think more realistically about the efficiency of what we're doing. Next up from this: future measurements.
A: We want to graduate more protocols to multi-node tests. I'd like to see GraphSync and KrakenSync taken up to the multi-node tests. There are a number of proposals that propose a request-response solution, and I
A: think in that scenario, in multi-node, we should really be testing multiple requesters, because that would theoretically create hot spots on providers. We want to make sure we're not just designing protocols that shift the burden back to providers in a way that just inverts this relationship. And last but not least, we don't think these tests are meaningfully capturing enough of the actual real world as we scale. The worst thing we see in production is, you know, 200 to 300 megabits per second of incoming noise, which is just the chattiness of Bitswap, when we're talking about roughly a thousand peers, 50 of them connected via Bitswap. Our current tests don't do anything to capture that at all. Okay.
A: One last bit: we've added KrakenSync to our one-to-one measurements, so we have one new implementation on the board now, which is KrakenSync operating over QUIC; this is not libp2p, this is just raw QUIC. We've clocked in a new protocol, and it's landing somewhere between GraphSync and Bitswap in terms of its overall speed, its time to deliver, when compared to TCP. I think the thing to really emphasize here is that we're actually measuring it.
A: This group has shipped a new protocol and it is immediately measurable and comparable to GraphSync, Bitswap, QUIC, and regular TCP connections. I think this is the way we should be operating, and ideally you'll see more updates in the future: maybe KrakenSync gets a little quicker. But the most important thing is that we have a positioning in vector space that lets us reason about this stuff.
A: Thank you for letting me rifle through a really fast update. The PDF of this deck is linked in the notes if you need it, and from here I will pass over to Rudiger to talk about KrakenSync. Ready to take it away?
A: Are you going to share a screen? Oh yeah.
C: Perfect, I'll lean into that. I think that I should do that, right? I mean, that will be helpful. Where is this thing... real slideshow, okay. So, what's KrakenSync? KrakenSync is basically a tiny little experiment that came out of some thoughts about MemeSync from IPFS Camp and thoughts I had about bitmaps.
C: Oh, it's fine, it's fun. I forgot to... I'll have to restart Zoom quickly.
D: I think we can take on some of these small ones. "What is a message?" was from Martin: one TCP packet? No, it is not a packet; it is a logical message in the protocol, so it's hard to define "message."
B: Yeah, I want to call out that we should really get away from that measurement, because I can just define a protocol with "hey, my messages are bigger now, the thing does fewer of them." I don't know, it's a weird metric, because it's not actually comparable, in the sense that, yes, message count was related to round-trip time, but I think we need to get closer to packet-level measurement.
C: Yeah, finally ready, I guess. Okay, so what's KrakenSync? It basically builds on the idea of MemeSync, which is basically "just do something very, very simple," but then it mixes in some ideas from Hypercore.
C: Hypercore is a very, very well designed protocol that is basically just a single-writer, append-only log of blobs, where a blob is just a byte array. The nice thing about Hypercore is, since it is single-writer and it is just a log, you can identify every single block by an offset, and so you have a common language between Hypercores when they want to talk about what they have and what they want.
C: They can just use offsets into that sequence of blobs, and if you have multiple offsets, you can obviously make a bitmap, a bit field, out of that, which is way more efficient than having individual hashes per block. In addition to being more efficient, because you have one bit at a certain position instead of a hash, you also get the nice property that, for typical usage patterns, these runs will compress well. So it's a very well designed protocol.
C: Internally it uses a Merkle tree, but you won't see any of the hashes of this internal Merkle tree in the protocol. One thing to note: it is not a DAG, it is a tree. There's no deduplication; you can add the same data multiple times and it will appear multiple times. This is basically how it works: let's say you have a log and you have a few parts of the log. Then you can define a bit field which tells you which parts of the log you have.
C: Then you can send that to somebody else. They then know exactly what you have and can request accordingly, or you can request bitmaps from multiple peers and figure out who has what and optimize your retrieval. It's very nice. So the idea of KrakenSync is: can we maybe use bit fields, despite the fact that we have DAGs and not trees?
C: The idea is that you define some kind of deterministic traversal for a DAG, and from then on you have a common language between the requester and the responder to talk about blocks in terms of offsets. You can then say: give me the block which comes at offset X in this traversal.
C: So let's say you have a tree; this is a tree, there's no deduplication. Then you define a traversal. This traversal says we only go to depth two, so the things below are basically ignored, and it is a depth-first traversal.
C: So we number one, two, three, going deep, and then you go up again, and so on. Now every node has a number, and you have a way to talk about individual nodes of the tree by offset. Then you can make a bit field and say "I want this": all you have to do is set a certain bit in the bit field to one, and the other node knows what you're talking about.
C: Now, can you also do this for a DAG? I just took the tree and added a few duplicates to make it a DAG. Of course you can do it; the only difference is that you just don't count the nodes which are duplicates. You still have a deterministic order, so you can still make a bit field, and once both sides have agreed on the traversal order
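The offset scheme just described can be sketched in a few lines: a deterministic, depth-limited, depth-first traversal that visits each duplicated node once, so both sides derive the same offset for every block. The toy DAG and its names here are made up for illustration:

```python
# Deterministic, depth-limited DFS over a DAG. Duplicates are counted
# once, so requester and responder derive identical offsets as long as
# they agree on the traversal rules and child ordering.

def offsets(dag, root, max_depth=2):
    """Map node -> offset in deterministic DFS order, skipping duplicates."""
    order = {}
    seen = set()

    def walk(node, depth):
        if node in seen or depth > max_depth:
            return
        seen.add(node)
        order[node] = len(order)
        for child in dag.get(node, []):  # children in a fixed, agreed order
            walk(child, depth + 1)

    walk(root, 0)
    return order

def want_bitfield(order, wanted):
    """Set one bit per wanted node, at that node's traversal offset."""
    bits = 0
    for node in wanted:
        bits |= 1 << order[node]
    return bits

# "d" is linked from both "b" and "c", making this a DAG; it still gets
# exactly one offset.
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"]}
order = offsets(dag, "a")
print(order)
print(bin(want_bitfield(order, ["d", "e"])))
```

Requesting blocks "d" and "e" then becomes setting two bits rather than sending two hashes.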
C: you can then talk about bit fields instead of individual hashes. There's one thing you have to be aware of: obviously, this whole thing fails as soon as you have a place where you don't know how many leaves there are. Say you don't have node number six; that means you don't know how many leaves it has, and despite the fact that you have the node to its right, you don't know which offset that node would have.
C: So you cannot talk about it. Basically, if you do this, you have to abort as soon as you're missing a node that would mess up your order. If you don't have this one, it doesn't matter, you can just continue; but if you don't have this one, which would influence the order of the nodes after it, you have to stop.
C: But this just means that maybe breadth-first traversal is an advantage, or that you should make sure to get the branch nodes first, and so on; basically, there are ways around this, but it is something to be aware of. Okay, so one thing: I just want to reiterate how great bit fields are. Let's say you have a bunch of hashes and you make a Bloom filter out of them. You will get a bit field, but it will be very random.
C: A Bloom-filter bit field will be a bit field like this, where you just have bits all over the place, and if you wanted to compress the upper one, the compression would not be very good. I mean, you get some compression because you have these large gaps, but the bits are all over the place, so the compression is basically as if you just compressed a sequence of integers. Whereas the bit field below would be: let's say you have Wikipedia and you've browsed a few pages of Wikipedia, but not all of them.
C: Then you would always get these continuous runs. Imagine this is a GIF and you've watched this GIF; that means you have all these continuous blocks, and so two adjacent bits in this bit field mean data which is a little bit related, because the blocks are close to each other: it could be two blocks of one file, or two images on the same Wikipedia page. They are more related than above, where every bit could map to any block.
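The compressibility argument can be checked directly: a bit field whose set bits form contiguous runs (blocks fetched in traversal order) compresses far better than one with the same number of bits scattered randomly, as a Bloom filter would produce. zlib here is just a stand-in for whatever compressor a real protocol would use:

```python
# Compare compressed sizes of two bit fields with identical popcount:
# one contiguous run versus randomly scattered bits.
import random
import zlib

N_BITS = 100_000

def to_bytes(positions):
    buf = bytearray(N_BITS // 8)
    for p in positions:
        buf[p // 8] |= 1 << (p % 8)
    return bytes(buf)

run = to_bytes(range(20_000, 60_000))                    # one contiguous run
rng = random.Random(0)                                   # deterministic demo
scattered = to_bytes(rng.sample(range(N_BITS), 40_000))  # same count, random

print(len(zlib.compress(run)), len(zlib.compress(scattered)))
```

The run-structured field collapses to a handful of bytes, while the scattered one stays close to its raw size.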
C: You have to make sure that this is cheap for the server. So the rules should be as simple as possible; it should be based on data that the store has available anyway. The only thing we know about blocks is that blocks have links; that's the thing the store knows. The store usually does not know how the links are named or what the exact meaning of a link is.
C: But it does know that there are links, and that there are blocks which have a certain size, and there are also codecs on the links; that might be something you could use too. Other than that, in my opinion, you should try to use as little information as possible that would require you to open up the block; basically, just use the data that you already have as a store. I mean, if you write a store, you have to know about the links, otherwise you cannot implement garbage collection, but other than that
C: you don't really want to know anything about the blocks, at least in this opinionated version of the whole thing. Then the next thing, I think we covered this already: if you have a traversal, you don't know how fast or slow it's going to be, so a traversal should have as little state as possible. It should also not lock the database or anything; it should just be something that runs in the background with a little bit of state, and not much.
C: It should not be huge, otherwise you will get into problems; but this part is basically an opinion of mine. You could also think about having a very complex way to produce these deterministic sequences. You could have GraphSync, for example, and say: I want to send a GraphSync query to multiple nodes and somehow distribute which node gives me what. That would work totally fine, because I presume that GraphSync has a deterministic order.
C: So it would also work fine, but it would require you to look more into the blocks themselves, so it would probably be harder to implement for the server. Okay, so then the next thing: as I said, you have to be very, very careful that the server is not stressed too much, so you need to put limits on everything. You put a limit on the number of bytes that you send per request.
C: You put a limit on the number of database operations you do in the store, and if you know that the other node is friendly, or if you have a good history with the other node, you can then relax those limits; you can say "I will send this node 1000 blocks." But the thing is that every client needs to be able to work even with very small limits. In the limit, the server only ever sends one block, and then you basically have Bitswap:
C: you send a hash and you get back a block; you send another hash, you get back another block, and so on. So if you set the limit to one, then you basically have Bitswap, and the client needs to be able to deal with that. The client cannot assume that it will get 100 blocks or whatever; it has to be able to deal with very restrictive servers.
C: Okay, so here I looked at two data sets. One is Wikipedia; I couldn't look at all of Wikipedia, so I took a subset, and it is a 3.5 GB CAR file with this number of blocks. A bit field over those blocks would be just 48 kilobytes, and that is uncompressed; if you compressed it, it would be even smaller. So you can communicate about what you have of this Wikipedia subset without thinking about it; I mean, you can just send it.
C: It's no big deal, whereas HAVE requests for this Wikipedia subset would be 40 megabytes, and just asking for the data one by one would be 14 megabytes, because of the hash plus the overhead and so on. It's that big. And one thing I found interesting: only one percent of the size is branch nodes; the rest is leaf nodes. That's because it's a typical UnixFS DAG with large files. The next thing I looked at is the Linux kernel.
C: It's similar, a little bit smaller, and as you can see, again the bitmap is so small that you don't even have to think about it, whereas HAVE requests are not something you would want to do just casually, so to speak, and just a tiny fraction of the data is branch nodes.
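The size gap quoted for the Wikipedia subset follows from simple arithmetic: one bit per block for a bit field versus one CID per block for a request list. The block count below is inferred from the quoted 48 KB bit field, and the 36-byte figure is a typical CIDv1 length, both assumptions for illustration:

```python
# Back-of-the-napkin check: bit field (1 bit/block) vs CID request
# list (~36 bytes/block) for the Wikipedia subset mentioned in the talk.
CID_BYTES = 36                         # typical CIDv1 size, assumed
bitfield_bytes = 48 * 1024             # quoted uncompressed bit field
n_blocks = bitfield_bytes * 8          # one bit per block

request_list_bytes = n_blocks * CID_BYTES
print(n_blocks, request_list_bytes / 1e6)  # block count, CID list in MB
```

That lands right around the 14 MB quoted for asking block by block, roughly a 300x difference from the bit field.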
C: Okay, so now let's look at what the protocol would look like. You have a server and a client. The client sends a query; the query is there to establish a common language. Then it sends a bitmap, and in this case the bitmap says "give me everything," basically. Then you get back a stream of data, one block at a time, and the stream of data you get back does not contain the CIDs.
C: It just contains the data, because the requester knows which CID to expect at each position, so the requester will get the data, hash it again, and check whether it's the right data; otherwise it will basically abort the interaction. You don't need to send the CID. As you can see, the only CID that flows here is the root CID in the query; other than that, there are no CIDs flying around, just data and offsets.
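The CID-free response stream can be sketched as follows: the server sends raw block bytes in the agreed traversal order, and the client verifies each one by re-hashing it and comparing against the digest it expected at that offset. SHA-256 stands in for the real multihash, and the block contents are made up:

```python
# Verify-by-rehash: no CIDs on the wire, only offsets and raw bytes.
import hashlib

def serve(blocks_in_order, want_bits):
    """Yield (offset, raw bytes) for every set bit in the request bitmap."""
    for offset, block in enumerate(blocks_in_order):
        if want_bits & (1 << offset):
            yield offset, block

def fetch(stream, expected_digests):
    """Re-hash each block; abort the interaction on any mismatch."""
    received = {}
    for offset, data in stream:
        if hashlib.sha256(data).digest() != expected_digests[offset]:
            raise ValueError(f"bad block at offset {offset}")
        received[offset] = data
    return received

blocks = [b"root", b"branch", b"leaf-1", b"leaf-2"]
digests = [hashlib.sha256(b).digest() for b in blocks]
got = fetch(serve(blocks, 0b1010), digests)  # request offsets 1 and 3
print(sorted(got))
```

A corrupted or substituted block fails the hash check immediately, which is why the stream can safely omit the CIDs.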
C: This is a scenario where you have multiple nodes, in this case two, and you just ask one node for every even bit and the other for every odd bit, and then you get back the interleaved data, so every node is responsible for half of the data. This is obviously assuming that both nodes have the entire data set.
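The even/odd split amounts to masking one request bitmap into two disjoint ones, one per provider. A tiny sketch over a 16-bit request; a real bit field would be a byte string rather than an integer:

```python
# Split one request bitmap into disjoint even/odd bitmaps, so two
# providers each serve half of the interleaved blocks.
def split_even_odd(want_bits, n_bits):
    even_mask = sum(1 << i for i in range(0, n_bits, 2))
    odd_mask = sum(1 << i for i in range(1, n_bits, 2))
    return want_bits & even_mask, want_bits & odd_mask

want = (1 << 16) - 1            # "give me everything"
even, odd = split_even_odd(want, 16)
print(bin(even), bin(odd))
assert even | odd == want and even & odd == 0  # disjoint and complete
```

Any other disjoint partition of the bitmap works the same way; even/odd is just the simplest interleaving.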
C: But you can quickly check whether that is the case beforehand by just asking for a bitmap. And now a scenario where you are basically a little bit impolite and just ask for everything from everybody. You get something back, and then what you do is send a kind of update to the other node: "I've got offset zero already, so I don't need it anymore," and then vice versa.
C: If you get offset one from this node, then you send an update to the other node that you don't need it anymore, and this is how you would coordinate if you have a large number of servers you pull data from: basically, either you subdivide the request across the nodes, or you just request everything from everybody and then send cancel requests.
C: And again, this cancel is just part of the stream, so it doesn't require opening a new connection or anything. It's just a single offset here, and in practice it would probably be a bitmap: you would bundle a bunch of cancels, so you wouldn't send offsets around, you would send bitmaps; but this is just simplified. Okay, so now, this is basically how it would look if you request from a large number of nodes.
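The "ask everybody, then cancel" coordination just described can be sketched like this: when a block arrives from one peer, the requester cancels that offset at every other peer, and the cancels are expressed as bitmaps so they can be batched. Peer names and state layout here are invented for illustration:

```python
# When a block arrives, cancel its offset at every other peer.
# Cancels are bitmaps, so many of them can be OR-ed together and sent
# as one message, as the talk suggests.
def make_cancels(offset, sender, outstanding):
    """Return {peer: cancel_bitmap} after one block arrives from sender.

    `outstanding` maps peer -> bitmap of offsets still requested there,
    and is updated in place.
    """
    bit = 1 << offset
    cancels = {}
    for peer, bits in outstanding.items():
        if peer != sender and bits & bit:
            outstanding[peer] = bits & ~bit  # stop expecting it here
            cancels[peer] = bit              # batchable cancel bitmap
    return cancels

outstanding = {"peer-a": 0b111, "peer-b": 0b111}
cancels = make_cancels(0, "peer-a", outstanding)
print(cancels, outstanding)
```

In a real client these per-block cancel bitmaps would be accumulated and flushed periodically rather than sent one offset at a time.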
C: You have a large number of nodes, and you have this thing in the middle which has to coordinate the requests from all these nodes. And this is my logo: the thing is called Kraken because it kind of has its fingers in all directions and is trying to pull in data from multiple places. So, yeah, I got this implemented. It's just a very hacked-together implementation, so I'm not surprised that the performance is not super awesome.
C: So you can now say "I want the Linux kernel": you import the Linux kernel CAR, you do the same thing, and it will get all the blocks. This is interesting because it cannot be done in one query, because the queries are limited, so it will basically get some data, hit the limit, get some more data, and so on; but at some point it will be done, and then you have a time for how long it took. Yes, and that's it.
C: Basically, one more thing: it would be nice if somebody has interesting CAR files, like DAGs which are not the typical UnixFS structure and are not artificial data sets, but real-world data sets, in the range of... I don't know, something realistic, not totally small, around a gigabyte or so. I would love to have them. If you have something which is a little bit more interesting than just a big UnixFS file, it would be great to add it to the test set. Yeah, and that's it, basically.
E: I had a question around the bit field. Is it true that you're trying to optimize for half-duplex speed? Because, from what I understand, the bit field is sent from the client to the server, and this was mainly important if you would be running half-duplex... so the connection.
C: can use that; and then the stream of responses that comes back also contains offsets which refer to this bit field, basically. It's just a compressed way to store these offsets. The purpose of the whole exercise is that you don't have to send hashes in your protocol; you can always just refer to offsets.
C: But I am very interested in the scenario where you have, let's say, Wikipedia, and you have a lot of servers that basically have Wikipedia, where the main server has gone away and you have lots of clients which suddenly have to serve the data, like the bootstrapped Wikipedia, and then you have to talk to many nodes before you find one that has the data. That's exactly where this optimization with the bit fields is very important, because, yeah, I mean, if you have one peer, and large blocks, then it won't help much.
C: Just to say "I want the thing at offset two" instead of a hash: that's quite a difference, I mean, that's a factor of 100 difference or something. Well, people always say ignore constant factors, but a constant factor of 100 is still something that you might want to take, kind of, yeah.
E: It's just, I'm a bit scared that the server is required to do the traversal, which might be both IO- and CPU-intensive, whenever we need to understand the bits... well, not the bits, but to serve a bit field.
C: So, I mean, obviously there is some overhead produced by traversing, but the traversal also happens exactly in the order in which you probably added the files. So if you traverse the files in this order, you traverse them exactly in the order in which you added them, if you, for example, added a UnixFS file.
C: I mean, sometimes people don't even ask for the bitmap first; they just ask for the data. But it is basically a very, very natural iteration order, let's put it that way.
D: Go ahead... oh sorry, I don't know, okay, never mind. Yeah, so I have one question about metrics, and then a couple of thoughts for the KrakenSync folks. For metrics, there's one thing that was confusing: I think we're doing a 10-megabyte file, that's five two-megabyte chunks, is that right? And when we say two-megabyte chunks, is that a UnixFS file with, like, a header block, or how is that?
A: I think we need to look at it; I need to go back and look at it. It's basically just the result of running a flat-file add of a 10-megabyte input, so five blocks.
D: Yeah, I do think that we should, because I think data structure is important; all the messages that people are sending in here are about different data structures. And then also size, I think, is important, because the performance characteristics I care about for, like, a GIF, a one-to-three-megabyte file, versus a movie or a Filecoin piece, are very different: you know, latency, time to first byte.
D: The other... so then my comment, my thoughts for the bitfield folks, I mean for the KrakenSync folks: one thing to just look into, that I've encountered. I don't know exactly what this would look like, but with the whole question about ordering, and "well, what if we're missing nodes, maybe breadth-first will be easier": something that I've started to think about
D: is that you might consider, like, a hierarchical bit field, which essentially is like: the first thing is "I want to go down these paths," and then "I want to go down these paths," and then "these paths," and they're all bit fields. You add a little extra, but you can totally avoid
D
You
can
avoid
that
whole
problem
around
like
missing
notes,
and
then
on
top
of
that
you
can.
You
can
also
like
give
people
way
more
freedom
to
order
how
they
like
the
traversal,
how
they
like
so
just
just
something
that
I've
come
up
with
like
this,
the
more
I
worked
with
crossing
the
more
this
felt.
Like
a
you
know,
a
blocker
is
like
trying
to
do
truly
deterministic,
traversal
and
I
end
up
building
a
tree
structure
for
some
of
the
validation
stuff.
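The hierarchical bit field idea floated here can be sketched on a toy tree: one bit field per level selects which children to descend into, so a request never names nodes the responder might be missing. This is illustrative only, not code from Graphsync or Kraken-sync:

```python
def select(node, bitfield_levels, depth=0):
    """Yield the names of children selected by the bit field at each
    depth, recursing with the next level's bit field."""
    if depth >= len(bitfield_levels):
        return
    bits = bitfield_levels[depth]
    for i, child in enumerate(node["children"]):
        if bits & (1 << i):  # bit i set -> descend into child i
            yield child["name"]
            yield from select(child, bitfield_levels, depth + 1)

tree = {"children": [
    {"name": "a", "children": [{"name": "a0", "children": []},
                               {"name": "a1", "children": []}]},
    {"name": "b", "children": [{"name": "b0", "children": []}]},
]}
# Level 0: descend children 0 and 1 (0b11); level 1: child 1 only (0b10).
print(list(select(tree, [0b11, 0b10])))  # -> ['a', 'a1', 'b']
```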
D
That was helpful, just an idea. And then the other thing that I think you mentioned, "traversal is cheap", which speaks, in my opinion, to a key optimization that isn't about the transfer protocol itself but about storing metadata on disk: metadata that allows us to avoid these complex traversals, where traversing the DAG becomes an expensive operation just to answer "which CIDs do I have?". Anyway, so that's it. That's a whole other area that's relevant to this work and that I feel we should probably factor in. It's not in the protocol, but it probably should be specified as: you'd better do this if you want to perform well.
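One way to read the "store metadata on disk" point is an index mapping each root to the set of CIDs beneath it, so "which of these CIDs do I have?" becomes a lookup instead of a DAG walk. A sketch under that assumption, with the stdlib `shelve` standing in for a real metadata store and all names hypothetical:

```python
import shelve

def record_dag(index_path, root_cid, block_cids):
    """Persist the set of block CIDs known to live under root_cid."""
    with shelve.open(index_path) as idx:
        idx[root_cid] = set(block_cids)

def missing_under(index_path, root_cid, wanted):
    """Return which of `wanted` are NOT indexed under root_cid,
    without touching or traversing the blocks themselves."""
    with shelve.open(index_path) as idx:
        have = idx.get(root_cid, set())
    return [c for c in wanted if c not in have]
```

Answering a want-list this way costs one index read per root rather than one block load per DAG node.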
A
Yeah, I think it's interesting to see. Dig, you've been super quiet, and you have a hand up.
B
Hello, yeah. Following on what you said last, Hannah, I think it's really important, and it just keeps rearing its head again and again: the challenge of having traversals that mix boundaries.
B
More specifically, mixing the boundaries of internal block structure traversal and external block traversal. And we, in the transfer protocol, end up paying probably one of the highest penalties for this mixture, because before, paths and links looked like they only extend across blocks. When you now have paths, even in UnixFS (you don't need to go full IPLD, just the paths that you use to traverse UnixFS today), you encounter, say, a HAMT data structure.
B
Now
the
links
that
you're
looking
at
the
path
that
you're
looking
at
does
not
mapped
to
the
actual
block
structure
anymore,
and
this
is
really
really
painful.
You
need
real
translation
layers
that
become
very
complicated
and
a
large
portion
of
things
in
iple
that
I've
been
developer.
B
Trying
to
like
how
can
we
abstract
over
that,
but
I
think
one
of
the
questions
that
I
would
I
would
like
to
more
to
ask
is
like
how
can
we
just
avoid
that
and
have
those
traversals
live
in
application
land
and
have
the
underlying
structure?
Potentially,
just
really
only
worry
about
clock
structure
traversal
because-
and
this
is
I
mean
a
reference
to
the
way
that
I
want
to
do
Corey's
in
meme
sync,
which
is
the
only
type
of
query
you're
allowed
to
is
across
blocks.
There's
no
internal
block
links.
B
But what is in reality inside the HAMT? So you add round trips, because you now have to, well, I haven't figured that out, which is unfortunate, but yeah.
D
If other folks have things they want to ask, go first. I mean, that's a discussion we could talk about for 20 more minutes, right.
A
Interesting, yeah. I think that would be a perfect example of: let's try and push that discussion into the Slack channel and see if we can't hash some of it out, all puns intended. Who else hasn't had a chance to speak up? Anything jumping out at you?
A
Yeah
I
think
we
have
one
thing:
I
would
I
do
want
to
jump
on
is
like
the
how
how
much
Oh
I
thought.
So
one
thing
I
want
to
come
back
to
you
is
we're
we're
as
a
working
group.
We
are
on
this
pretty
tight
timeline
and
ideally
would
like
to
sort
of
like
get
as
many
protocols
in
here
as
possible,
and
one
other
thing
that
I
think
that
rudiger
pointed
suit.
A
That
was
really
exciting
is
like
This,
pitfield
research
and
like
if
you
look
at
what's
happening
in
Kraken,
sync,
there's
actually
a
ton
of
precedent
in
BitTorrent,
which
is
also
using
good
fields
and
is
also
using
this
and
has
like
some
pretty
sophisticated
behaviors
that
emerged
in
BitTorrent,
where
there
are
actually
implementations
that
will
lie
about
what
they
have
in
their
Midfield.
A
to get you to not ask for it. And so I do want us to keep looking at other things. I asked her to talk about hypercore as well, because I think bringing some of these outside influences into our design discussion has really helped: not only as references to prior art, but these are things we can measure, and these are projects with histories that we can draw upon. But with that, one thing that we haven't really talked about much is the current state that we're in, and what problems our protocols are trying to solve, right? We have one school of thought around request-response, because we have a lot of CIDs that are being provided by a single provider, and then on the other side we have this place where we want to be, where we have many more providers for any given DAG. And I think it's another question for the group to ask: hey, what are the design goals?
E
An interesting thing about that is the way BitTorrent uses bit fields. I think what BitTorrent cares a lot about is scaling with demand: the more downloaders you have, the faster your files download. And so one thing BitTorrent does is look at the bit fields of your partners and count how many partners have each block. That tells you which blocks are popular, lots of people have them, and which blocks many people don't have, and most clients first try to download the blocks that are unpopular, in order to recover balance. That helps a lot with the tail latency of downloading files with lots of nodes, because it makes it far less likely that, say, 50 nodes will all connect to the one guy that has this one block, since all the nodes will have tried to fetch that block first.
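That rarest-first selection can be sketched in a few lines. This is a toy model of the policy described here, not BitTorrent's actual implementation: count holders per block across the partners' bit fields, then fetch the least-replicated missing blocks first:

```python
from collections import Counter

def rarest_first(peer_bitfields, have):
    """Order the blocks we are missing by how many peers hold them,
    rarest first, as BitTorrent clients prioritize pieces."""
    counts = Counter()
    for bf in peer_bitfields:
        counts.update(bf)  # how many partners have each block
    missing = set(counts) - have
    # Fewest holders first; ties broken by index for determinism.
    return sorted(missing, key=lambda i: (counts[i], i))

peers = [{0, 1, 2}, {1, 2}, {2}]      # each peer's advertised bit field
print(rarest_first(peers, have={2}))  # -> [0, 1]: block 0 has one holder
```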
A
I completely agree. While I was reading through that, how the Tit-for-Tat algorithm drops into this prioritization, you get ranked as a better peer if you have a rare block, and the way that helps BitTorrent stabilize data availability. And if we contrast that with our recent numbers from the ProbeLab team around the churn in CIDs, it really does feel like there are some more questions, and more things to be learned. Like, is that baked into the data transport?
A
As
you
said,
this
negative
one
scaling
approach
all
the
way
down
through
the
stack
right
and
so
like
one
of
the
things
I
think
we
want
to
kind
of
contend
with
is
just
like
scope
creep,
for
this
is
is
a
real
problem,
so,
like
you
know,
making
sure
that
this
project
just
sells
like
can
we
do
faster
data
transfer
I,
don't
want
to
say
like
oh,
we
should
also
try
and
solve
data
availability
at
the
same
time,
if,
if
we
get
it
for
free
as
a
byproduction
design,
a
great
protocol
awesome
but
I,
don't
you
know,
I
want
to
be
careful
about
our
timetable.
A
D
This could do everything that Bitswap does, but slightly better, like on average faster, no matter how you use it. Because I'm trying to think, okay, if I were to ask what it would take to, say, swap over to Graphsync for the actual data transfer and then move a whole bunch of blocks at once, you'd have to factor that in anyway. It's just a bunch of questions, and that actually sounds, in and of itself, like quite a hard project. And Graphsync is a protocol that, despite the complexity around the selectors, is a point-to-point protocol, you know.
D
It's
it's
not
request
response
only
because
of
terrible
protocols.
You
know
like
protocol
design
but
like
it
is
behaving
like
request
response
so
yeah
it
could
be
converted
if
one
were
to
stop.
You
know
what
to
say.
This
was
a
terrible
original
message
format,
but
you
know
like
so
there's
there's
some
interesting
yeah.
It's
just
an
interesting
question
like
how
would
you
yeah
what's
the
best
way
to
do
this
so
yeah.
A
I
think
that's
a
I.
Think
that's
just
an
interesting
question,
but
like
is
it
worth
writing
damage?
Is
it
maybe
someone
can
investigate
this
like
just
as
a
hypothetical?
What
would
it
take
to
just
use
graphic
because
right
now,
Graphics
in
the
lead
right,
like
yes
and
so
like
I,
think
it
would
be
interesting
to
see
that's
there?
Okay,.
C
So in ipfs-embed we have a high-level API which is just called sync. You give it a hash and we'll make sure that the DAG hanging from that hash is there. This is a much more high-level protocol, and it is much easier to put something different than Bitswap behind that high-level API. But if your API is "give me that one block, give me that other block, give me that third block", then there's only so much you can do behind that API.
E
We have, I think, a consensus in the Kubo team that we want to change that, it's pretty bad. We don't really want selectors as the full thing, because it's a bit too much, we think, and maybe a bit complex. We don't think we need an almost-Turing-complete programming language to download files; there are only like three features I really want. So we probably will switch to that in the future, at some point.
D
Sorry, can we rewind two seconds? We've heard the word memesync a few times, and that was originally going to be the presentation today. Is that still a project in development? I mean, I've heard of it, I've seen a slide of an idea and I've seen a message about it. Is it still in development, just delayed this week?
B
Yeah
anyway,
so
yeah,
it's
still
a
project,
I'm
I,
just
it's
not
where
I
wanted
to
to
be
at
for
a
presentation,
but
ridiculous
cracker
thing
was,
and
so
it
was
like
makes
much
more
sense
for
him
to
present
this
first,
but
yeah
I,
hope,
I'm
working
on
an
implementation
and
like
a
description
of
it
and
it'll
it'll
happen
in
the
next
week.
A
I
also
really
really
wanted
to
get
the
Midfield
research
in
front
of
this
audience.
I
think
we
haven't
heard
about
the
fields
much.
We
have
a
really
nice
I
think
we
have
a
really
nice
exploration
of
blue
filters
in
this,
like
sort
of
like
other
world
that
occupies
the
similar
design
space.
I
really
wanted
to
see
that
exposed
today.
So
yeah
I
had
an
answer
question.
It
takes
gonna
present
more
on
namesake
in
the
future
yeah
and.
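For reference, the Bloom filter alternative mentioned here trades the exactness of a bit field for constant size: membership answers can have false positives but never false negatives. A minimal sketch, with arbitrary parameters, not taken from the research being discussed:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: approximate set membership with false
    positives but no false negatives, a compact alternative to an
    exact bit field for advertising which blocks a peer has."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # m-bit bitmap stored as a Python int

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("block-cid-1")
print(bf.might_contain("block-cid-1"))  # -> True (guaranteed)
print(bf.might_contain("block-cid-2"))  # almost certainly False
```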
D
Yeah,
just
for
your
reference
later
in
the
week
I'm
going
to
be
putting
out
like
this
proposed
protocol,
that
is
very
I,
think
it's
actually
very
similar,
but
there
might
it
sort
of
really
based
around
HTTP,
which
might
be
so
there's
probably
a
synthesis
somewhere
in
there
like
yeah.
Oh.
A
Yes,
awesome
and,
and
speaking
of
which
we
should,
if
you
would
like
a
slot
to
present
at
this,
the
only
remaining
scheduled
talk
is
one
next
next
meeting
in
two
weeks
and
so
for
the
new
year.
Hey
folks
want
to
want
to
talk,
I'd
love
to
love
to
do
it
put
your
hand
up
in
the
of
the
best
working
group
section
and
with
that
we're
one
minute
to
the
hour,
who
hasn't
made
last
thoughts,
Pearls
of
Wisdom,
as
always,
Wednesday.
A
No? Okay, that was meant to be a joke. Thank you so much, everybody, for joining, really appreciate it, and we'll see all of you in two weeks, hopefully on December 14th. Thank you.