From YouTube: RETMKT Builders - RETMKT WG Keynote - Juan Benet
Description
Juan Benet kicks off the Retrieval Market Builders Mini-Summit held in April 2021.
I wanted to kick off this workshop by talking about four things. First, I wanted to give a quick intro to what this workshop is about, and to the working group that David and I have been thinking about in order to help a lot of the teams and groups working on the retrieval market. We're observing a lot of different kinds of things being built, and there's a lot of knowledge diffusion that needs to happen, and some kind of coordination and planning that might be valuable. So we're thinking of creating a loose, very informal working group that can help all the teams orient.
Then I wanted to talk a bit about the retrieval market's vision, just to align everybody on what exactly we're building. I also want to give a super fast preview of some of the updates you'll hear about. The reason I want to quickly preview a few things is that I want you to page them into your mind and cache them in your retrieval-market minds, so that as you go into the various updates you can connect some of the dots. The goal is to help a lot of the knowledge diffusion happen by giving you a slight preview of what's to come. And then I want to talk about decentralized development, because this is one of the key things about the way all of us are approaching building the retrieval market: we're all developing different pieces of the system, and we're doing it in a very decentralized way. That comes with a lot of advantages and disadvantages, again in terms of knowledge diffusion, coordination costs, and whatnot, and so I'd like to use this workshop to get us to coordinate as well as we can and lean into the advantages of decentralized development.
I'm going to give super high-level objectives here. Everybody's pretty familiar with this, but again I'm just trying to page it in. The goal of the retrieval market is to build the world's best CDN, leveraging web3 tech. Of course that's a long road, a long vision. It will take many years to really be able to compete with the cloud CDNs of today, but we can lean into the advantages of web3 to build something that has a tremendously stronger potential to actually solve the content delivery problem. One of the key things is native content addressing: everything from the bottom up being content addressed is super useful for building a CDN.
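To make that point concrete, here's a minimal sketch (my own illustration, not any project's actual API) of why content addressing gives a CDN deduplication and verification essentially for free: the address of a block is the hash of its bytes, so identical content collapses to one cache entry, and any cache can verify what it serves by re-hashing.

```python
import hashlib

def cid(data: bytes) -> str:
    """Toy content identifier: the SHA-256 hex digest of the bytes."""
    return hashlib.sha256(data).hexdigest()

class ContentCache:
    """Any node can run one of these: blocks are stored by CID, so
    identical content is automatically deduplicated, and a fetched
    block can be verified against the CID that named it."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        key = cid(data)
        self.blocks[key] = data  # same bytes -> same key -> one copy
        return key

    def get(self, key: str) -> bytes:
        data = self.blocks[key]
        assert cid(data) == key  # verification is free: re-hash and compare
        return data

cache = ContentCache()
a = cache.put(b"hello world")
b = cache.put(b"hello world")  # a second upload of identical content
assert a == b                  # dedupe: both map to the same entry
assert len(cache.blocks) == 1
assert cache.get(a) == b"hello world"
```

The same property is what lets untrusted caches participate: a retrieval client never has to trust the cache, only the hash.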
We are also able to use verifiable computation to create markets that can likely optimize the process of delivering content better than a centrally planned structure. If we can do this, and we can couple solid markets with really good distributed systems and networking, then we can end up with a CDN that has order-of-magnitude improvements along certain very key feature vectors, which will make this kind of CDN outperform other CDNs.
Now, of course, that will again take many years to realize. We can start by building the best CDN for web3 and from there scale out to other kinds of applications, but that's the long-term goal. In the short to mid term, let's make all of that content broadly available. So, as I said, we're observing a lot of groups building different parts of the retrieval market and different thrusts of it. By the way, forgive me if I missed a group here; I tried to collect a number of logos, but if I missed you just let me know and I'll add you. The goal is to create a loose working group to support the development. This might mean giving some amount of coordination help, creating some amount of dev support, and being able to do design review and knowledge diffusion.
If anybody's running into questions with certain parts of the stack, we can help answer those, or if people want feedback on their designs or their architectures, we can fast-track that. In some cases we can also provide funding for various groups. We're thinking of things like creating a demo day every two weeks, and creating a set of channels with high SLAs, to be able to coordinate better as a group. The objective for 2021 that we've been kicking around is to build the first production version of the retrieval market, and this is a goal that a lot of you already have and are already on the path to achieving.
Concretely, this might mean a network supporting payment channels, or potentially a payment channel network, on the order of tens of thousands of nodes or devices, maybe on the order of a thousand people. And ideally we really want to hit sub-second delivery of CIDs worldwide. This might involve pre-work in loading a CID into this kind of worldwide service, but it should be possible towards the end of the year for a developer or a Filecoin user to say: I want the content behind this CID to be cached in a bunch of places in the world and delivered in a sub-second way to parties retrieving it. Ideally we would also like to have the beginnings of content redistribution happening in the network to meet demand. That means a network that is able to respond to shifting demand for content and starts moving content to certain areas; that would be a really great achievement to hit this year.
We might not hit all of these; many of these key results may not be achieved, but this is what we're thinking of potentially aiming for. Great. So what is this workshop about? First of all, introducing everybody to each other: making sure that you're aware of each other, start building relationships, and so on. We'll use this time to do a bunch of updates and deep dives into a number of the projects.
In order to diffuse knowledge, we aim to record a lot of the sessions so that we have them around for people to watch later on. And we really want to have you directly shape the agenda on the second day, to cater to what you want to know more about: the kinds of things that you would like to hear more about or discuss, we'd like to turn into deep dives or discussions. Towards the end we would like to chart a little bit of the future, and align on some rough future plans and roadmaps of the different groups.
Some quick guidelines. Code of conduct: I think most people are familiar with this because they've been in the Filecoin community for a long time. We really want you to present your work in progress, so don't hold back in discussing work-in-progress ideas and things in development. The goal is to diffuse the knowledge of where you're headed, but definitely signpost what works today versus what's in progress.
We don't want to create a structure where people are depending on things that don't exist yet. So we definitely want to diffuse the ideas and the knowledge of where things are going, but teams should be able to build on the things that are there now, like in almost every other platform-oriented workshop, in Ethereum and other communities.
I'm sure there are a lot of things blocking each one of your teams, and if you can orient us and tell us what's blocking you, then we can problem-solve against that and help accelerate you. We wanted to have this workshop be private so that people can feel more comfortable without having to do a lot of prep, but we'll record all the talks and discussions with the hope to publish. It's totally okay if you prefer not to distribute something that you participate in; we can definitely cut things out. So don't worry about that: we'll record, and afterwards you can tell us whether you want to keep it or not. Great, let's jump into the vision.
This is probably familiar to a lot of people, so I won't recap a lot of it; I just want to dive into some more of the details. For folks that are not that familiar, the filecoin.io explainer goes a little bit into a visualization of what the retrieval market might do, and there was a prior workshop last July that you can watch, which has an articulation of the vision.
I'll do a super fast recap here, but not the larger view. The tl;dr is that the internet is a grapevine, and content must flow through very crowded branches all the way to the leaves. Content distribution is about delivering content as close to the user as possible so that we can cut down on the latency. So it's about moving lots of static content to users, ideally with sub-10-millisecond delivery; that's what very serious CDNs target.
Things like S3, Glacier, and object storage are all about storage, and then CDNs are all about products like CloudFront, Cloudflare, Akamai and so on. And by the way, note that content addressing and hash-linked data structures are amazing here, because they mean you can store all kinds of versions of the content close to the users. The persistent data structure aspect of it is just so well tuned to the CDN problem.
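As a toy illustration of that last point (hypothetical code, not IPFS's real chunking or DAG layout), hash-linked versions of a file share their unchanged blocks, so a cache that holds one version close to users has already done most of the work for the next version:

```python
import hashlib

def cid(data: bytes) -> str:
    """Toy content identifier: SHA-256 hex digest."""
    return hashlib.sha256(data).hexdigest()

def chunk(data: bytes, size: int = 4):
    """Split content into fixed-size blocks (real systems are smarter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

store = {}  # one shared block store, keyed by CID

def add_version(data: bytes):
    """Store a version as a list of block CIDs (a flat toy Merkle DAG)."""
    cids = []
    for block in chunk(data):
        key = cid(block)
        store[key] = block   # unchanged blocks land on existing keys
        cids.append(key)
    return cids

v1 = add_version(b"aaaabbbbcccc")
v2 = add_version(b"aaaabbbbdddd")  # only the last block changed
shared = set(v1) & set(v2)
assert len(shared) == 2  # two of three blocks are reused between versions
```

This is the "persistent data structure" property in miniature: publishing a new version only adds the blocks that actually changed.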
Here's a shot of what CDNs look like in Filecoin terms. If you take a snapshot of a cloud like Google Cloud, which is on the left, and you think of the core data centers and then the edge caching and whatnot, you can map that straight to storage miners and retrieval miners.
The core data centers are the storage miners, and maybe the edge points of presence, where you might have a bunch of racks in an ISP or things like that, close to the user but not super close. Then you can think of retrieval miners as the CDNs, and potentially even users in homes, in apartment buildings, universities and whatnot. So storage miners, and then retrieval miners much closer to the end consumers of content.
This is a graph that I made last year; it's probably outdated by now, but it gives you a sense of where storage miners and retrieval miners might sit.
One of the key components of the retrieval market is that the whole thing has to work off chain. It means that all kinds of operations that we might do to reward parties, create mechanisms, or register parties need to happen in this off-chain payment style, where there's some pre-work that needs to happen first.
Then parties are able to transact in payment channel networks, and there's some settlement after the fact. There are many different potential topologies for how retrieval might work, and many different groups are pursuing different approaches.
But you should expect to eventually see many of these things yielding networks like this, where there are groups of devices that communicate a lot with each other, distribute accounting to each other, and form regions, or the equivalent of regions, mapping to where users actually are and where content wants to be delivered.
There are three scale numbers that I think are really useful to keep in mind. Latency, in the best case, is one to ten milliseconds. That's very aggressive; nothing in the blockchain world right now works like that. I think even hitting 100 milliseconds is already really good, and that might be a really great target for 2021, but getting down to sub-10 milliseconds is where very serious CDNs live. Next, the number of objects.
This is where it gets pretty crazy, and this applies both to storage and to retrieval. If you want to provide a CDN and a storage system for the applications of the world, you have to be able to deal with something on the order of 10^18 objects, which is a lot of objects. This is what you get when you do a back-of-the-envelope calculation of things like Twitter and YouTube and other social networks, and all of the things you have random access to on the web that you want to be able to retrieve, and retrieve quickly.
You end up with something like this. Now, you don't necessarily need to provide random access to every single object like that; there's some grouping you might be able to do, but you're likely only going to shave three or maybe four orders of magnitude at the end of the day. Most cloud systems do provide this kind of retrieval and, of course, they use all kinds of lookup trees and so on to achieve it.
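The back-of-the-envelope math can be sketched directly (my own illustrative inputs, not figures from the talk; the point is the exponents, not the exact values):

```python
# Hypothetical order-of-magnitude inputs for a web-scale object count.
tweets          = 1e12   # total tweets plus their media objects
web_pages       = 1e11   # indexed pages...
assets_per_page = 50     # ...each pulling in many static assets
videos          = 1e10   # videos across the large platforms

objects = tweets + web_pages * assets_per_page + videos
assert 1e12 < objects < 1e14   # ~10^13 for just these sources

# Headroom for everything else randomly accessible on the web pushes
# a full-coverage target toward ~10^18 addressable objects.
target = 1e18

# Grouping objects into retrievable bundles only shaves three or four
# orders of magnitude, as noted above:
grouped = target / 1e4
assert grouped == 1e14   # still an enormous keyspace to index and serve
```

Even after aggressive grouping, the index has to cover a keyspace far beyond what a single lookup service comfortably handles, which is why the lookup-tree and indexer discussions later in the talk matter.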
Then, in terms of the number of retrieval miners: at the beginning we should think of it in the low thousands, and we should think of this potentially scaling to hundreds of thousands, or on the upper end something like 10 million, though that ultimately seems like a really large number.
You can think of the users as segmenting into two categories, clients and miners, and there are two sets within each of these. On the client side, you have the retrieval end users. These are the people using applications, actually consuming the content, and they usually don't want to pay for content.
That's one of the key components of retrieval that needs to be worked out. Then there are the apps and system developers, who are the people building applications that want to deliver the content to the retrieval end users. They're usually the ones comfortable paying, because they want their content to be viewed and they have some other business model by which they charge.
Now, of course, this could be shifted, and there are definitely cases, for certain kinds of content, where the retrieval end users end up paying, but that's definitely not the norm for the bulk of the content on the network. So a payment mechanism that allows app and system developers to pay for the retrievals of end users needs to exist. Then, on the miner side, you can think of retrieval miners as the people providing content for pay.
These are parties all over the world that are helping store and cache some content and deliver it to the retrieval end users. And then you can think of a set of ancillary miners, or other service providers, that are assisting the retrieval miners in this process. This might be the storage miners themselves, acting as the root providers of the content.
These could be indexers that help track the content: you can think of the IPFS DHT, or the DHT that the PegaSys team is proposing, as a version of this, or the indexers coming from the Lotus team for indexing the content within storage miners and whatnot. You can think of those services as a set of components that need to exist in order to assist the retrieval miners.
You can also think of other kinds of hub-like entities, like Textile hubs or payment channel hubs, or other systems like this, that become watering holes for both clients and retrieval miners to meet around some content that has some locality to it. A totally valid version of the story here is to have a bunch of hubs that don't even allow service providing across them at first, but then eventually get wired up in the future.
All right. One of the things that makes the vision for the retrieval market in Filecoin pretty interesting, with high potential for changing the CDNs of the world, is that, as I mentioned earlier, we have a bunch of really fancy tools and technology that provide some unfair advantages.
First of all, having authenticated content that is secure gives you dedupe and caching basically for free, which is super useful and valuable for this use case. Then there's peer-to-peer resource sharing, where you have permissionless entry to providing services.
This is the classic Filecoin story, or Bitcoin and Ethereum: any party can run a node and start providing a service. That enables access to a whole set of hardware that is just not accessible to current CDN providers. And the ability to do mechanism design with assets and so on is again an amazing unfair advantage, because it enables you to use markets.
We can think of all the standard blockchain-type tooling like this: tokens, payment channel networks, economic mechanisms, and so on. Using all of these, we can create an open service that can be very robust and that ideally doesn't even require any sort of normal production service management tooling to keep the whole thing up.
Then the flexibility we have in the peer-to-peer networking layer is really useful, because it gives us runtime freedom: we can embed the p2p stack in a bunch of different deployment scenarios. It can go in the browser, it can go in back ends, it can even go across things like Bluetooth and so on.
You can enable it even across onion routing and so on, and it makes it easy to swap out certain things. For example, it turned out that swapping the congestion control was an advantage for certain CDN providers: switching from normal TCP congestion control to BBR gave some CDNs a leg up in their throughput. So us being able to do that is a significant advantage.
Then, of course, the staple UnixFS file system is needed to model what most people end up using CDNs for, and all of that packaged together in IPFS, with the data transfer component, gives you the ability to move all this static content around. From there we build up with other components like gossipsub, which gives you some ability to coordinate a set of parties through a pubsub layer.
This might inform a lot of content that gets collocated, either with hubs or with other kinds of components. And then we have a bunch of components like regular IPFS nodes, the IPFS gateway, Brave and so on, which would be consumers of the retrieval market. In some cases these parties may pay directly; in other cases they might not pay and instead be subsidized by the app providers, or, in the case of the IPFS gateway, PL can help subsidize the retrievals for the gateway.
Then we have payment channels and the token for payments and, of course, there's the possibility of doing things like ERC-20s and so on for other kinds of components.
That could matter if it becomes important and useful to isolate some of the activity. Then you can think of the storage market and IPFS pinning services as providing the long-term storage, holding all the content that becomes the source material for the retrieval market. And then we can think of payment channels and state channels and other such tooling as providing the routing network to do the fast payments.
At the moment, most parties are just trying to get it working without payment channels, and that's great: let's just get fast delivery working for free, and then adding payment channels after that seems like a great path. And of course we have the EVM to do any kind of more complex smart contracts and mechanisms, so from the get-go you can deploy contracts to Ethereum or other networks, and eventually, later on, Filecoin will end up having the EVM as well. Until that point, being able to deploy some of the contracts directly will help us avoid having to do a bunch of things through bridges.
Then there are other components, like these indexing nodes that are emerging; there are different types of services and side services that might emerge here, and we'll need some way of coordinating them.
The structure from the PegaSys proposal has this nice registry component, where you have some place to meet each other and find other service providers. That becomes a first entry point: some index of service providers, from which you jump to the different components. And then one more thing that is super useful and valuable is the Filecoin mining reserve.
The Filecoin mining reserve is a big capital pool allocated for many types of mining, including retrieval market incentives, and so it can be a very valuable way of incentivizing and subsidizing a lot of the retrieval market activity.
Now, of course, that needs very good economic design, because incentives like that are double-edged swords, for sure. We are also seeing the emergence of reputation systems that are starting to track the activity of various miners and certain kinds of quality metrics, like uptime, retrieval speed, jurisdiction, and all kinds of other features.
You can imagine that extending to things like HIPAA compliance and whatnot; those kinds of features are getting tracked and measured by these emerging reputation systems that you'll hear about. There's probably stuff that I'm forgetting; it's a lot already and it's probably going to be more. So one of the big things about this workshop is helping everybody navigate this soup of stuff and understand what it all is, how it fits, and how it relates; that will be important on an ongoing basis.
The approach that I would suggest to everybody is to follow a path of three steps. First and foremost, build a typical CDN product experience for clients. This means: learn from CDN markets and the product design of those CDNs, and aim to meet the expectations of users.
This means actually look at and study the exact product structures that users already expect, and meet users where they are. Don't try to shift the behavior of developers and so on too much; they're already going to have to shift in terms of dealing with crypto and whatnot. But that's the long-term goal.
We might not get there this year, because this year might just be about web3, and that's totally fine. But for this product to be really successful in the long term, you need to be able to appeal to parties that have no idea about web3 or crypto or anything like that. And here there are two user experiences that are absolutely key. One is the end user.
Retrieval is by far the first and foremost, most important thing: that's the product. The product is really fast delivery of content to the user: a user loads a page, sees something, doesn't wait, and immediately has a very smooth experience. That, at the end of the day, is the product.
A very good example of this: go use one of the modern social networks that are video oriented, things like Instagram or TikTok, and you can feel the app very smoothly delivering very high quality video to you; or Netflix, things like that.
There's a very good CDN powering that entire structure, and that's the product. Then the second UX is the developers: they have to have a very smooth and easy way of mapping what they're doing today to this new CDN experience, which means finding ways of removing the complexity that comes from the web3 world.
Then, after aiming to deliver that well, step two is building profitable and easy-to-run retrieval mining software. There's another set of users, the retrieval miners, who need to download some software, some product, and run it, and for them it needs to be straightforward.
Most of these miners are not going to be running very large-scale operations; they're going to be very different from the large storage miners. They'll usually be individuals, or maybe a small group, with small deployments: either one computer or a small rack. They're not going to be able to spend a lot of time and labor managing these things. So the UX of that product and the reliability of that product become really important.
You want high-quality consumer product UX here, where people should be able to download the thing with a few clicks, double-click to run it, and then have it just run in the background. And of course the economic flows must be profitable, otherwise the retrieval miners won't do it. Especially in the early days, as the network is getting going, there need to be very high margins, otherwise people won't get over the hump of doing this.
Now, of course, once the network is in large-scale operation, those margins will likely get competed away to some degree, and that'll be fine. But in areas where maybe there are no retrieval miners yet, there might still be high margins; imagine a new city with few retrieval miners, where there might be a strong margin to be had. Again, subsidies could be very powerful here, but they are a double-edged sword, so finding good economic structures for this is really key.
And step three, which is really where we are today (so we're starting with three and working our way to the most important thing, which is number one), is to wire the components together.
All these different components that we're talking about need to get wired into those good products. My advice here would be: try to keep the architectures of the systems, and the products themselves, nimble and evolving. Think of compartmentalizing components that are loosely coupled, so that as the ecosystem evolves, as other teams build new things and so on, we can all adapt pretty quickly. This was a big learning from the DeFi world and, of course, before that, from Node and Unix and many other communities.
If you can keep the broader architecture of the thing nimble and evolving quickly, then you can boil down to the right primitives and the right components, and enable really fast system evolution.
Great, so that's it on those components. I'll now give you a super rapid-fire set of updates, again just to prime your mind. You'll hear a lot, so don't be worried if you don't fully get one of these things, or why it matters; you'll hear more about it either today or in the next couple of days.
Many of you have seen these reputation systems that have been developing. Here's one example: the Textile miner index, which is a way of looking at a whole set of properties for miners and being able to judge them, which is really useful for selecting good miners.
There's another one, FilRep, that launched recently as well, with a different set of features and a different set of measurements; also really useful.
It's really useful to have these different sets of deal bots that can track and test all kinds of features, and to imagine this as a market where many different groups come up with different things to test; I sort of expect things like, again, HIPAA compliance or critical security properties fitting in here. Then there's the PoC 1 from PegaSys, which is booting up the retrieval gateways and, later on, is going to be adding a DHT.
That DHT is meant to be incentivized with staking. It's getting off the ground, and it has that registry where you can register the various different parties, with a lot of the components already going. And then there's a single-hop DHT, which I think is not built yet; it's coming in the future.
It really tries to eliminate the need for a bunch of connection setup, so that ideally you can find exactly which parties to go to. An incentivized DHT makes some sense, and we've thought about this for a long time. I think you need both: a world where there's some kind of fully free alternative for a set of use cases, but also an incentivized DHT with a low number of nodes, run as a highly professionalized service; that makes a lot of sense.
There's also the update from the team at PL looking at a bunch of different areas in Filecoin deal success and addressing a bunch of problems there; you'll get an update on how that's going. I think I sent the talk out earlier today, so you can watch that async. I believe there's a set of architecture changes and reframings (it's more of a reframe than a ton of change), and maybe the addition of new systems that are really key. This is coming out of really great work in the last few weeks: boiling down a set of interfaces, and identifying new components and new systems.
So there's space for the network indexer, space for the reputation systems, space for the deal bots or miner audits, and this picture really brings it all together in a very good way. It also shows things like Lotus, go-ipfs, and js-ipfs as all being different types of IPFS nodes and, notably, the IPFS gateway pulling content from the retrieval miners: how is that going to work, and so on?
Then there are updates around architectural changes: separating out processes for important security constraints. We'll hear about indexing, and some proposals there around how to index all the content within Filecoin miners and large IPFS pinning services; we'll hear more about the indexer interface.
We'll also get a formulation of the retrieval market and a decomposition of all of its components, laid out in a set of milestones: an overview based on all the different components that are coming together, which can be super useful for a number of groups. And then we have things like these reputation services and some aggregation infrastructure that is going to be added, so that we can get verifiability added to the deal bots and the reputation measurement.
Then we can create a space where tools can leverage these reputation systems to make decisions about which miners to recommend. There are also some interesting developments in the user-defined smart contracts world. A lot of people want to add a bunch of actors into Filecoin, and because we didn't ship with the EVM as we had originally planned way back, and kind of punted on that, this is now coming up again, with a number of groups wanting to push for it. We'll see when this happens.
It's not clear when it might happen, but it would be really awesome if the retrieval markets work could maybe help accelerate it. There's also really cool work happening around bridges and storage oracles, where you can have a Filecoin-style contract on other chains that virtualizes some of the Filecoin activities, so you can store, form deals, and so on from there.
A
But
you
can
also
start
doing
some
other
very
interesting
things
like
create
economic
structures
that
that
value
things
differently
and
whatnot.
There's
a
bunch
of
thoughts
going
into
into
starting
to
to
kind
of
see
the
marriage
of
d5
and
and
storage
great.
A
So
hopefully
I
didn't
fry
any
brains
to
that,
because
there's
a
lot
of
stuff
there
and
you'll
hear
a
lot
more
later,
but
hopefully
that
quick
overview
helped
just
prime
your
mind
with
a
bunch
of
the
different
things
that
are
gonna
come
together,
so
that,
as
you
see
each
update,
that's
going
to
happen,
it
will
kind
of
make
more
sense
in
context
of
what's
to
come
later.
A
Great
so
decentralized
development
there's
a
ton
of
advantages
to
centralized
development
and
some
challenges,
and
so
I
try
to
kind
of
map
a
few
of
the
ones
that
I
think
are
really
key
here.
So
one
you
know
massive
advantage.
We
have
many
teams
and
many
orgs
exploring
the
space
there's
more
of
us,
so
we
can
get
a
lot
more
done
in
parallel.
We
can
run
a
bunch
of
experiments
and
and
kind
of
have
a
market
of
approaches
to
see.
A
What's
going
to
work,
we
don't
know,
what's
going
to
end
up
being
successful,
so
it's
very
good
to
have
different
teams
trying
different
things
and
we
have
shared
goals
and
objectives,
and
so
it's
a
group
where
kind
of
aiming
towards
roughly
the
same,
the
same
set
of
things
and
as
a
as
a
you
know,
kind
of
team
of
teams.
We
have
much
greater
knowledge
span
as
a
group
than
than
kind
of
any
one.
A
One
group
and,
of
course,
this
kind
of
being
in
a
centralized
development
space
makes
it
much
easier
for
us
to
sort
of
recruit
entire
teams
of
people
to
help
us
solve
specific
problems
so
like.
If
we
find
some
problem
there,
where
we
would
like
to
kind
of
get
recruit
an
entire
team
to
go
and
solve
that,
like
that's
that's
available
to
us
now,
the
challenge
is,
though,
it
means
coordination,
just
coordination,
complexity,
not
just
across
one
team
and
and
so
on,
how
it
works.
A
But
now
you
have
coordination,
complexity
across
many
different
organizations
with
different
norms.
You
cannot
really
accelerate
things
sequential
things
you
can
do
more
in
parallel,
but
you're
not
gonna,
just
speed
up,
sequential
sequential
work.
You
might
end
up
with
a
bunch
of
duplicate
work,
because
those
different
experiments
try
different
things
and
you
end
up
with
like
bizarre
versions
of
the
same
things,
and
that
might
be
just
a
you
know,
wasted
time
for
people,
and
so
this
is
where
the
knowledge,
diffusion
and
understanding
what
other
people
are
doing
can
become
really
useful.
A
That, I think, by the way, is the biggest problem with decentralized development, and I've seen it plague all kinds of open source projects for years. Even though we have a much larger knowledge span, the diffusion is slower because we sync less, and whatnot. So we have a lot more progress, but it's more chaotic and way less linear, and so we'll have to be comfortable with some of these challenges and try to mitigate their damage.
A
But
at
the
same
time
try
to
lean
into
the
advantages
so
lean
into
the
many
experiments
sounds
lean
into
systems
designs
that
incorporate
composability
loosely
coupled
systems
and
whatnot.
So
what
will
be
so?
Hopefully,
this
kind
of
like
loose
working
group
can
play
a
useful
role
to
amplify
the
advantages
and
dampen
the
challenges
so
help
support
teams
and
coordinate,
help,
motivate
our
source,
concrete
goals
and
potentially
some
like
light
synchronization
could
help
here
and
it's
very
kind
of
opt-in.
A
If
it's
useful
to
you
great,
if
not
give
us
feedback,
we'll
try
something
else,
and
if
it's
not
useful
at
all,
then
don't
do
it.
But
we
are
thinking
of
doing
something
around
gathering
some
high-level
plants
and
roadmaps
from
groups,
at
least
in
terms
of
features
not
necessarily
time,
because
time
is
very
uncertain.
Right
now,
davina
and
I
are
very
strong
believers
in
demos,
and
so
if
you
have
like
a
demo
cadence
of
demos
every
two
weeks
or
something
could
be,
it
could
be
really
useful
to
kind
of
help
defuse
knowledge.
A
We
thought
that
we
would
provide
office
hours
for
folks
to
to
book
whenever
they
want
and
we
were
very
able
to
offer
a
runtime,
but
not
sure
if
we
can
offer
other
people,
please,
if
you
can
consider
adding
you
know
some
kind
of
like
office
hour
schedule.
So
if
other
people
can
can
book
you
and
chat
for
15
minutes
alone,
that
might
be
might
be
really
valuable
and
we're
still
thinking
through
how
to
prioritize
prioritized
blocking
requests.
A
But
we'll
try
to
come
up
with
something
here
and
then
we
thought
it
would
be
useful
to
have.
So
it's
been
very
valuable
for
the
webster
def
team
at
pl
to
have
this
kind
of
like
engineering,
product
design
reviews.
It's
been
super
useful
to
diffuse
knowledge,
and
so
we're
wondering
if
we
can
borrow
some
of
that
and
try
to
do
something
like
that
with
a
set
of
folks
here,
where
maybe
there's
a
group
of
us
that
can
help
review
certain
designs
by
different
different
teams
and
provide
feedback
along
the
way
again.
A
Only
if
it's
useful,
it's
like
a
hey.
If
is
this
useful
to
you?
Do
it
so
now
some
recommendations
that
I
would
give
every
team
here
really
aim
for
decoupling
and
refining
composable
primitives,
so
learn
from
the
composable
platform.
A
So
unix
is
the
the
most
famous
one
node
as
well
and
and
d5
and
of
course,
there's
many
more,
but
these
kind
of
like
loosely
coupled
systems
that
do
like
one
thing
really
well
and
then
enable
other
groups
to
kind
of
recompose
those
those
things
and
use
them
in
different
ways.
So
things
like
the
a
good
example
of
this
is
the
indexers
that
we
were
that
I
was
describing
earlier.
A
Those
will
and
others
work
to
decouple
a
lot
from
filecoin
itself
and
they're
now
like
pretty
generic,
and
they
can
work
for
any
ipfs
thing.
So
maybe
they're
just
hyperphase
indexers
and
you
know
really
lean
to
that.
Composability,
because
that
composability,
which
you
sort
of,
require
a
certain
level
of
simplicity
to
earn
that
composability.
That
simplicity
itself
will
make
it
easier
for
people
to
learn
and
understand
what
the
component
is
it'll,
make
it
easier
to
build
and
maintain
components.
A
So if we end up with things in the retrieval markets where certain structures are really easy for some team to come by, compose into something else, try out, and see if it works (some new mechanism, a new economic structure, something like that), that might be really valuable.
A
And
I
really
think
that
this
simplicity
and
composability
enables
teams
of
people
to
find
the
right
actual
primitives
faster,
because
you
boil
things
down,
you
try
a
few
things.
Then
you
throw
some
away
and
you
kind
of
keep
what
what
really
is
useful
to
people.
A
Of
a
set
of
composable
primitives
that
we
already
have
somewhere
in
the
in
the
stack
and
there's
a
bunch
more
that
are
being
sort
of
developed
right
now
in
in
different
ways
and
by
the
way
this
is
like,
probably
20.
I
just
off
the
top
of
my
head.
I
listed
a
few
just
to
kind
of
prove
the
point,
but
just
think
of,
as
we
go
into
doing
things
think
of
creating
these
composable
primitives.
So
maybe
the
the
incentivized
chd
from
the
pegasus
team
might
be
its
own
component.
A
That
could
be
reused
by
for
a
lot
of
things,
not
just
the
ritual
market
and
yeah.
I
think
you
know
kind
of
getting
into
a
cadence
of
building
shipping
and
demoing,
I
think,
will
help.
All
of
us
have
like
this
low
amount
of
time
bandwidth,
but
but
really
high
quality
way
of
seeing
what
other
teams
are
doing
and
kind
of
update
quickly
on
what
others
are
doing
to
kind
of
know
how
to
reorient
what
we're
doing
to
play.
Well
with
what
other
other
groups
or
groups
are
making.
A
Now
I
will
advise
against
coordinating
roadmaps
to
some
extent,
meaning
the
the
moment
where
there's
like
multiplicative
risk
of
one
team
is
building
a
thing
that
another
team
is
waiting
for,
that
another
team
is
waiting
for
that.
Another
team
is
waiting
for
like
that
is
just
a
recipe
for
that
thing
is
not
going
to
ship
for
years,
and
so,
like
really
really
recommend
kind
of
detangling
things
as
much
as
possible,
going
for
composition
know
what
people
are
making
and
so
don't
necessarily
duplicate
things,
but
kind
of
you
know
maybe
bound
that
risk.
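That multiplicative-risk point can be made concrete with a toy calculation (the per-team on-time probability of 0.8 is an illustrative number of mine, not something from the talk): if each link in a chain of sequential dependencies ships on time with probability p, the whole chain lands on time with probability p to the n.

```python
# Illustrative only: even fairly reliable teams compound badly when one
# team waits on another, which waits on another, and so on.

def chain_on_time(p: float, n: int) -> float:
    """Probability that all n links of a dependency chain ship on time."""
    return p ** n

for n in (1, 2, 4, 8):
    print(n, round(chain_on_time(0.8, n), 3))
# prints: 1 0.8 / 2 0.64 / 4 0.41 / 8 0.168
```

Detangling the chain (composition instead of hard dependencies) is exactly what keeps n small.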
A
It's
always
a
trade-off
great.
So
that's
it
sorry
for
blathering
you
for
15
minutes,
but
I
thought
it
was
important
and
valuable
to
kind
of
go
through
all
that
now,
I'm
gonna
go
through
questions
really
quickly.
Fantastic.
B
I
actually
got
your
impressions
into
the
document
and
to
make
it
easier
for
you.
So
actually
we
had
a
bunch
of
technical
questions
as
you
were
presenting
which
I
believe
will
be
answered
through
other
sessions.
B
So
we
could
like
answer
those
or
if
other
people
have
more
questions
about
what
juan
presented
about
the
the
regional
market,
working
group
or
the
plans,
or
if
anyone
wants
to
bring
a
question
around
that
all
right,
I
don't
see
any
so,
let's
go
to
the
technical
and
starting
from
the
bottom,
so
interested
in
discussing
tools
towards
search
and
indexing
of
calculate
content,
especially
before
it's
committed
to
be
sealed
into
a
falcon
sector.
A
So, to that: that's the indexing tooling that I was describing earlier. You'll hear more about it, I think, in a couple of days, and there is a talk that I linked that also has a description of it.
A
I
can
find
it
for
you,
but
the
the
gist
is
yeah
where
that
tool
is
gonna
index,
the
payload,
the
the
the
car
payload
of
the
sector,
before
it's
sealed
and
basically
allow
help
miners
provide
retrieval
to
to
nac
idea
that
they
have
now
necid
asterisk
there's
a
lot
of
cids,
so
there's
some
kind
of
filter
there.
Then
you
supply
you'll,
hear
more
from
from
rebel
and
will
on
that.
But
the
deal
is
like
as
close
to
every
city
as
we
can.
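As a rough mental model of that pre-seal indexing (all names and data structures below are my own illustration, not the actual indexer interface being discussed): walk the blocks of the sector's CAR payload before sealing, and record where each CID lives, so a miner can answer "do you have this CID, and where?" without unsealing anything.

```python
# Toy sketch: map cid -> (sector_id, byte offset) from the payload blocks.
from typing import Dict, List, Tuple

def index_sector(sector_id: str,
                 blocks: List[Tuple[str, bytes]],
                 index: Dict[str, Tuple[str, int]]) -> None:
    """blocks is the sector payload as (cid, raw_block) pairs, in order."""
    offset = 0
    for cid, raw in blocks:
        # First writer wins; a real index would have a dedup/replica policy.
        index.setdefault(cid, (sector_id, offset))
        offset += len(raw)

index: Dict[str, Tuple[str, int]] = {}
index_sector("sector-001",
             [("bafy-aaa", b"hello"), ("bafy-bbb", b"world!")],
             index)
print(index["bafy-bbb"])  # prints ('sector-001', 5)
```

The filtering asterisk mentioned above would live in the loop: you would only `setdefault` the CIDs that pass whatever inclusion policy the miner runs.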
B
All
right,
thank
you,
so
much
juan,
let's
go
to
the
next
one,
because
I
think
we
have
time
to
cover
all
of
them
any
thoughts
towards
privacy
and
encryption
of
data,
storing,
falcoin
or
towards
levels
of
encryption
secret.
Key
shared
keys,
end-to-end
encryption,
double
ratchet
encryption,
key
custody,
key
management
web3.
I
am
yeah.
A
Totally
so
this
is
where
client,
like
the
client
applications
right
now
need
to
do
the
heavy
lifting.
We
should
be
able
to
make
it
easier
to
do
this,
so
finding
really
solid
primitives
to
do
this.
I
believe
that
buckets
textile
buckets
already
do
this
for
you
or
they're
going
to
don't
quote
me
on.
They
absolutely
do
it,
but
I
remember
it
being
part
of
the
the
product
product
direction
and
there
are
other
kind
of
developer
tools
that
are
strongly
orienting
towards
solving
that
at
the
ipld
layer.
A
So,
ideally,
we
would
solve
the
encryption
questions
with
encrypted
versions
of
the
ipod
data
structures
and
then
all
of
that
just
all
of
the
ciphertext
just
get
lumped
into
into
into
filepoint,
now
there's
a
whole
separate
set
of
things
which
is
really
writer
privacy.
That
is
way
bigger
way
harder,
because
we
know
how
to
do
encryption
of
content.
We
don't
know
how
to
do.
Well.
Is
reader
writer,
privacy
and
short
of
a
full,
oblivious
routing
thing,
most
approaches.
A
Don't
work
super
well.
There
are
some
good
proxy
approaches
that
that
can
work
well
and-
and
maybe
this
might
be
a
good
discussion
topic,
but
this
is
totally
a
thing
that
we
should.
We
should
care
about,
especially
for
cdn
cdn
of
content,
so
cdns
of
content,
that's
exactly
what
things
like
social
networks
need,
and
social
networks
and
other
kind
of
media
distribution
systems.
That's
the
kind
of
thing
that
you
would
want
to
try.
A
If
you
want
to
know
what
some
group
is
thinking
or
or
viewing
that's
what
you
would
try
a
track,
and
so,
even
though
the
content
is
encrypted,
you
might
be
able
to
know
which
users
are
requesting.
What
and
so
that's
the
kind
of
thing
that
we
need
drastic
improvement
to
into
all
of
these
systems
around
establishing
very
good
rewrite
of
privacy
right
now
I
don't
know.
A
I
don't
know
that
any
group
here
is
working
on
that,
but
super
interested
in
helping
groups
go
for
it
and
try
to
solve
those
questions,
and
so,
if,
if
there's
any
group
that
wants
to
work
on
it,
we
can
help
fund
or
help
support.
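The content-encryption half of the answer above has a simple shape: encrypt client-side, then content-address the ciphertext, so the storage network only ever sees opaque bytes and a reader needs both the address and the key. A deliberately insecure, dependency-free toy (a real system would use an AEAD cipher and real CIDs, not this hash-chain keystream; and none of this touches the much harder reader/writer privacy problem):

```python
# Toy "encrypt before you store" flow. NOT real crypto: the keystream is a
# SHA-256 counter chain, used only to keep the sketch self-contained.
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes):
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    address = hashlib.sha256(ct).hexdigest()  # address the *ciphertext*
    return address, ct

def unseal(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

addr, ct = seal(b"reader-key", b"hello ipld")
assert unseal(b"reader-key", ct) == b"hello ipld"
assert ct != b"hello ipld"  # the network only ever holds ciphertext
```

Note that even with this in place, the network can still observe which addresses a given user fetches, which is exactly the reader-privacy leak described above.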
B
Thank
you,
so
much
corn.
We
just
got
a
new
question
from
puja
tactical
question
about
recommendations,
best
practices
for
coordinating
green
groups,
what
comes
channels,
public
world
maps
or
shared
google
notions,
etc,
where
we
can
share
plans
with
each
other.
A
Yeah,
so
this
is
what
I'd
like
to
resolve
in
the
next
couple
days
to
just
get
a
little
bit
of
feedback
from
the
group
on
what
will
work
for
people,
because
I
david
and
I
are
very
hesitant
to
just
come
up
with
some
process
and
throw
it
at
people
and
people
being
like
what
the
hell
is.
This
we
don't
want
this,
and
so
so
we,
the
last
thing
we
want,
is
to
over
process
something
that
does
not
want
the
process.
A
However,
we
seek
to
get
people's
feedback
on
what
would
be
like
very
light
touch
way
of
providing
this
coordination,
and
so
the
the
easy
things
to
reach
easy
things
to
reach
for
might
be
hey
a
slide
channel
is
helpful,
hey
some
way
where
we
can
describe
the
tracks
of.
We
can
visually
see
the
road
map
of
of
what
people
are
working
on
and
their
milestones
or
their
versions.
They
expect
to
ship
like
the
very
coarse
many
month,
ones
that
might
be
useful.
A
We
probably
don't
need
to
be
more
granular
than
that
some
index
of
all
the
different
things
that
are
going
on
might
be
useful
and
valuable
to
keep
updated
and,
and
then
some
of
the
and
then
trying
to
do
kind
of
feedback
and
reviews
and
office
hours
and
demos.
To
then
allow
the
subset
of
teams
that
want
to
talk
about
something
to
do
so
without
kind
of
dosing.
Everybody
else.
B
A
Yeah, so it could work. So, I mean, retrieval miners could be pulling from pinning services just like anyone else, just like a Filecoin storage miner or any other party. If it's useful to the pinning service, and if it's useful to the retrieval miner and to the end user, there's very much a market question there.
A
So
like
what's
the
utility,
but
if
there's
utility
we
have
there,
then
yeah
totally
fps
spending
services
could
totally
use
the
retrieval
market
without
having
to
go
through
the
storage
market,
because
it's
just
the
ids
right.
So
this
is
the
composability
thing.
The
things
going
through
the
tool
market
do
not
need
to
be
coming
from
popcorn
storage
seals.
They
could
just
be
any
cid
and
if
they're
in
ac
80,
then
any
amplifier,
spinning
service
or
any
ips
node
could
hire
a
tool
miner
to
do
this.
A
I
see
so
so.
The
question
is
more:
around
hey
could
pinning
services
instead
would
be
ritual
miners
yeah?
So
one
of
the
things
that
I
showed
was
this
diagram
with
the
access
patterns
for
the
storage,
minus
material
miners,
and
I
sort
of
hinted
at
the
fact
that
pinning
services
are
actually
very
close
to
being
very
good
virtual
miners
and
so
yeah.
I
think
that
residual
mining
might
be
a
profitable
endeavor
for
pinning
services,
but
but
it
kind
of
remains
to
be
seen,
meaning
like
they
already
have
a
solid
business
model.