From YouTube: Retrieval Markets 2023 - Juan Benet
A: So what I want to do is just go and update everyone's perspective here on a number of projects that have been advancing across the ecosystem over the last six to twelve months, so that you can get a sense of how a lot of these things are going to be interconnecting, and what possibilities are now available to us that we can just plug in and have work.

We're going to be going through, you know, sort of a changing landscape. I want to call out three specific projects that are going to come in and change the game dramatically for a lot of the original market projects. I want to talk about networks and how these networks might interact, and so on. I want to talk a little bit about scale, then some of the early applications that we should focus on, and then maybe we can get to some kind of network goals.
I think a lot of the network goals area will be covered thoroughly throughout the day, same with some amount of the applications and so on, so I'll just lightly touch on this. So just to remind everybody, the objectives of the Filecoin retrieval market are first to build the world's best CDN, leveraging web3 tech. That doesn't just mean the best CDN in crypto or in web3: it means the world's best CDN, leveraging these tools, and to make worldwide fast and efficient distribution and retrieval of IPFS content. This is on the path to the top. So let's start there, but let's just remember that the goal is really to be able to handle most of the traffic on the internet.

There's a ton of projects, a ton of groups and so on, working on all of this. You'll hear from many of them, and awesome videos have been collected over time through the working group. In this summit I really want to orient you towards a lot of the discussions and workshops and so on that will happen, so use the talks to update on the state of the world, and then, after that, we'll be able to upgrade the future by working together to define what new goals we should be pursuing.
I'm not going to talk about the vision for retrieval markets and so on; this is well covered in a lot of other talks and whatnot. Maybe the summarized version is: hey, as Patrick was saying, there's a massive explosion of data on the network. The internet is this massive grapevine. So think of distance not as a physics measurement, but as packets over the internet, and actually, you know, time to deliver the whole file; or, in other cases, being able to place a lot of the content where it's going to be computed. You'll see the emergence of a lot of computational networks over the next year or two, and being able to retrieve for those networks is also a key application. So we'll talk about that a little bit. I'll just quickly update everybody on the sort of Filecoin master plan.
I think many of you have probably seen this, but just as a quick refresher: first off, web3 must get to web scale to cross the chasm. Today, web3 is not web scale. Web scale means being able to handle the, you know, types of applications that you use day to day. Think about the range of applications in terms of messaging, consumer apps and so on: how many of them use web3? Close to zero. Why? Because the infrastructure doesn't yet handle the kind of standard computing infrastructure load that these applications require. So it is sort of our job in the retrieval markets world to figure out how to support the content delivery part of this picture: how do we get to being able to move that amount of data, with that high quality of service, efficiently, and so on?
So here's how we're going to do this broadly in the broader Filecoin project. First we build the world's largest decentralized storage network; we've done that, which is awesome. Then we onboard and safeguard humanity's data; that's what a lot of our projects are working on at the moment. Then we bring compute to the data to enable those kinds of web-scale applications. You know, this is kind of a great growth in terms of capacity and so on.
One important thing about onboarding the data: retrieval is a piece of this, and probably I should start writing that into step two. In order to have good data onboarded, you want to get the core functionality of Filecoin, being able to store and retrieve the data efficiently, working extremely well. So all of the retrieval work fits into that step two.
We've had tremendous progress on this. Over the last year, we've seen a tremendous explosion in the amount of data that's being stored. Some of this is retrievable efficiently; a lot of it is not. A ton of this data is not easily retrievable at all; it takes a lot of time and so on. So think of it like maybe matching things like Glacier, but it doesn't yet match something like S3, and we need to get there. And not only that, but we need to get all the way to CDN-level status. That means, right now, there's a ton of groups already onboarding their data into Filecoin that want really high quality retrieval, so think of those as kind of latent users.
They might not be the best first users for our retrieval markets network and so on, but they're there, and they might be second or third order users.
On the compute side, a lot has been said about the FVM; I won't go too much into detail, but this is going to be a game changer for retrieval markets, because you're going to be able to use smart contracts directly on Filecoin that interface with the storage protocol, so they're able to deal with the storage market and deals and so on natively, which is a really useful component for building a lot of these retrieval market networks. Building on other chains and trying to do cross-chain computation is just too hard right now. I think in the future blockchain inter-communication and so on will be much easier, but today that's not the case. This will also enable a lot of large-scale compute networks, so that's going to enable a lot of different groups to be able to process massive amounts of data. And we're also talking about scaling the consensus itself, to be able to deal with the consensus bottleneck.
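To ground the "deals natively" point: today, deal state is typically read off-chain by asking a Filecoin node over JSON-RPC, as in the minimal sketch below; the FVM is what lets smart contracts do the equivalent on-chain. This assumes a local lotus node on its default API port and uses a placeholder deal ID; depending on node configuration, an API token may also be required.

```go
// Minimal sketch: read storage-deal state from a lotus node over JSON-RPC.
// Assumes a lotus node listening on localhost:1234 (its default API port).
// The deal ID below is a placeholder, not a real deal.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "Filecoin.StateMarketStorageDeal",
		// params: [dealID, tipSetKey]; a null tipset key means "current head".
		"params": []interface{}{12345, nil},
	}
	body, _ := json.Marshal(req)

	resp, err := http.Post("http://127.0.0.1:1234/rpc/v0", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Printf("deal state: %v\n", out["result"])
}
```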
It's going to require a lot of work: we have to get the compute right, we have to get the retrieval right, we have to get the use of the data right, and so on. So there's a lot of focus for this group on this retrieval piece.
I want to walk through kind of the storage and retrieval life cycle, briefly touching on a few projects. There are a lot of projects involved in this, so I'm just going to touch on a few, mostly out of convenience of slides already existing.
Once you store data, both the on-ramps and the storage providers bring the data into the index, or contribute indexing data into the indexer service, which makes it easy to retrieve. Then, on the outflow, retrieval clients or retrieval providers use the indexing information to be able to find which SPs have the data, and actually retrieve it.
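As an illustration of that lookup step, here is a rough sketch of asking a network indexer which providers claim to have a CID, using the public endpoint at cid.contact; treat the exact response shape here as an assumption rather than a stable API contract.

```go
// Sketch: ask a network indexer which providers claim to have a CID.
// Uses the public indexer endpoint at cid.contact; the response shape
// decoded here is illustrative, not a guaranteed contract.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	cid := "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi" // example CID
	resp, err := http.Get("https://cid.contact/cid/" + cid)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	// Decode loosely: we only want provider IDs and addresses.
	var out struct {
		MultihashResults []struct {
			ProviderResults []struct {
				Provider struct {
					ID    string
					Addrs []string
				}
			}
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		return
	}
	for _, mh := range out.MultihashResults {
		for _, pr := range mh.ProviderResults {
			fmt.Println("provider:", pr.Provider.ID, pr.Provider.Addrs)
		}
	}
}
```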
So this piece is a really core part of the picture. As a broader network, many groups have built, in the last year or two, on-ramps, retrieval provider structures, indexers, storage provider tooling, and so on. But what's missing is to tightly integrate it all so that it works really well, with very high quality UX. The success rate of storing things and making sure they actually get indexed, or, once you're trying to retrieve things, actually using the indexer tools to be able to pull things out, is not perfect.
On-ramps, you know, are these kinds of tools and systems that enable applications and so on to structure the data and onboard it. Then we can dive into the storage provider tooling. There's probably going to be a lot more over here over time, but think of this as kind of the core storage provider nodes, and then things like the core Filecoin implementations, and then tools like Boost and others that are going to sidecar into what a storage provider is running, to make some of the connectivity much, much easier. I'll highlight here that this is a great entry point for retrieval networks: Boost is a really good tool that you can think about integrating with retrieval market networks. In kind of the Saturn architecture, we think of this as the L3 layer; you have the L1s and L2s and kind of the layering in the network.
On the indexer part: this component is scaling in terms of the amount of content that's being indexed, but this is still just a fraction. It's very significant, but still not, you know, over 50% of the content that's in storage providers. This provides a really good way of finding exactly who has the content and so on. We still need to use this to build some kind of end-to-end retrieval testing structure, to be able to ensure that SPs are actually providing really high quality retrieval conditions; that's a whole other project. And then we get into retrieval providers and tools in this area, where different networks and different systems will exist to facilitate the retrieval of data for clients.
Some highlights here: Saturn is, I think, pushing a really large amount of data. There are also other networks that, again, I'm not highlighting, for slide-production reasons, but lots of groups have been testing at pretty large scale. We need to get to having these kinds of numbers for actual usage this year. So kind of the target that I want to paint for everybody in this room is to build retrieval market networks that hit this kind of scale.
You know, a billion retrievals per week and so on, delivering content to actual users, and growing from there. One other cool thing is that the ipfs.io gateway has grown a lot this year, and so this is a good source of traffic for retrieval networks. These are end-user HTTP clients who could, you know, connect to things like Saturn and others to provide the retrievability.
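For a feel of what gateway retrievability looks like from an end-user HTTP client, here is a small sketch that fetches an example CID through ipfs.io and measures time-to-first-byte; the CID is just a placeholder.

```go
// Sketch: measure time-to-first-byte for a CID through an HTTP gateway.
// The target here is the public ipfs.io gateway; the CID is an example.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://ipfs.io/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Read one byte to capture time-to-first-byte, then drain the rest.
	buf := make([]byte, 1)
	if _, err := resp.Body.Read(buf); err != nil && err != io.EOF {
		panic(err)
	}
	ttfb := time.Since(start)
	n, _ := io.Copy(io.Discard, resp.Body)
	fmt.Printf("ttfb=%v total=%v bytes=%d\n", ttfb, time.Since(start), n+1)
}
```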
And still, you know, the response time still sucks. I mean, it's much better than it used to be, but still not very good. We want to get that down to sub-second; being able to deliver sub-second from the gateway would be great. And remember that, at the end of the day, browsers themselves can be enhanced with functionality. That means, you know, Brave already ships with IPFS embedded, and there are other browsers that have some light connectivity; those are great entry points for retrieval networks.
You can get some code into those systems to be able to retrieve components and whatnot, and that includes things like service workers and so on. So don't think of the retrieval clients as just HTTP that you have to deal with. That's definitely the largest number of users and the first entry point, but for some of the most advanced use cases, like video and large-scale applications like 3D worlds and games and so on, you can think of getting code all the way into that application to make it way easier; you don't just have to deal with moving around HTTP requests. Cool.
They want to be able to retrieve very quickly, so there's high demand in getting all of the retrieval market stuff working really well. One thing about the consensus scaling: the architecture here is about building this network that can, yeah, there was a whole session yesterday that went in depth into the architectures, so I'm not going to give the whole thing here. But think of a scaling structure where you have these subnets that can derive smaller and smaller subnets, and this maps extremely well onto the regions of a CDN. So think of being able to have, you know, the global network, which is just the Filecoin mainnet, and then you can derive, say, an application-specific retrieval network subnet that's global, and then underneath that you can create subnets on a per-region basis. This is a big reason why we designed Interplanetary Consensus this way: so that you can create these regional networks that match the application you want to provide.
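To make the shape of that concrete, here is a toy sketch of the hierarchy just described: a root network (think Filecoin mainnet) deriving an application-level retrieval subnet, which in turn derives per-region child subnets. All names and the struct itself are illustrative assumptions; real IPC subnets are defined on-chain, not as an in-memory tree.

```go
// Toy sketch of the subnet hierarchy described above: a root network
// with an application-specific retrieval subnet, which derives
// per-region child subnets. Names are hypothetical placeholders.
package main

import "fmt"

type Subnet struct {
	Name     string
	Children []*Subnet
}

func (s *Subnet) Derive(name string) *Subnet {
	child := &Subnet{Name: s.Name + "/" + name}
	s.Children = append(s.Children, child)
	return child
}

// walk prints the tree, showing how regions hang off the app subnet.
func walk(s *Subnet, depth int) {
	fmt.Printf("%*s%s\n", depth*2, "", s.Name)
	for _, c := range s.Children {
		walk(c, depth+1)
	}
}

func main() {
	root := &Subnet{Name: "/mainnet"}
	cdn := root.Derive("retrieval-net") // global, application-specific subnet
	for _, region := range []string{"us-east", "eu-west", "ap-south"} {
		cdn.Derive(region) // one subnet per CDN region
	}
	walk(root, 0)
}
```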
So a CDN wants to have regions, because you want to coordinate the nodes close to the user, and you want to move the data into those regions and serve it from there. Because, what's really important, again: the whole job of a CDN is to deliver the data when the application or the user needs it. And that means moving it ahead of time into good spots, and then delivering it from the good spots to the user as a request is coming. So all of it is about how to relocate bits to avoid seconds being wasted. It's: how do you carefully position the data so that you can route the requests, and you're not making a request from, say, one part of the world, going all around the earth, and then trying to pull out a video that way; instead you're very quickly able to route a request to something very close to people and deliver the data there.
So IPC was explicitly designed to support this kind of use case, and it's likely going to be extremely useful to a lot of the CDN networks. So think about a retrieval network where you have some component that's going to provide, you know... in Saturn's case you have layer one, layer two and layer three. Layer zero is the retrieval client, so layer zero might be the service worker or an enhanced application.
L1 is the area where you deal with the bandwidth coming in: billions of requests hit the L1s, and then the L1s produce a much smaller, much more manageable set of requests to L2s or L3s, and the L1s just, you know, cache. This entire unit, the set of all L1s and the set of all L2s, can be an entire region, with some kind of orchestrator or hub thing.
The orchestrator or the hub could just be a blockchain, so that entire thing could be a subnet, so that you can deploy that component into a region. Now, how do you draw the exact boundaries for regions? You can just start by adopting the regions that have been spelled out by other cloud systems; those are well chosen for a bunch of reasons.
Does it make sense why the L3s are here? You can ship some code, some retrieval provider code, to sidecar along with SPs, and Boost might be a great platform for this: Boost might think of potentially adding some component-loading mechanism, where you onboard SPs to use Boost, and then Boost can have a set of integrations with retrieval providers to create the L3. Yeah.
So maybe I'll walk through that flow again, just to make sure it's clear. When a retrieval client makes a request, it would go from here: it would first hit one of the L1s. There's a lot of decision making you have to do about which L1 you hit and so on, and you want to try to make that as determined ahead of time as possible. So this is just one request to the L1. If the L1 has the content, then it just delivers the content and you're set, you're done; awesome, super fast. Ideally, the L1 is within 50 milliseconds of the user; that would be the best case, or, well, better would be within 10 milliseconds of the user, but that's harder. Now, if the L1 does not have the content, there are two options: one, go to an L2 that's local in that region, if it exists, which might have the content; or ask the indexer who's got this data, get a response back from the indexer, and then go over to an L3, or whatever SP has it, to retrieve it back out and then serve it to the user. Now, the next time that L1 sees that request, it's going to be cached there and delivered efficiently. Does that sort of make sense? Now, L2s are not strictly necessary here; you could have this working with just L1s going straight to SPs. But they really help if you have regions.
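A minimal sketch of the L1 decision in that flow, under the assumption of simple stand-in transports (fetchFromL2 and fetchViaIndexer below are hypothetical placeholders, not real APIs): serve from the local cache if possible, otherwise fall back to a regional L2, then to the indexer-plus-SP path, and cache whatever came back so the next request is a hit.

```go
// Sketch of the L1 decision described above: serve from local cache if
// possible; otherwise fall back to a regional L2, then to the indexer +
// SP path; cache whatever came back so the next request is a hit.
// fetchFromL2 / fetchViaIndexer are stand-ins for real transports.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type L1 struct {
	mu    sync.Mutex
	cache map[string][]byte
}

func (n *L1) Get(cid string) ([]byte, error) {
	n.mu.Lock()
	data, ok := n.cache[cid]
	n.mu.Unlock()
	if ok {
		return data, nil // cache hit: the fast, sub-50ms path
	}
	// Miss: try the regional L2 first, then resolve via the indexer.
	data, err := fetchFromL2(cid)
	if err != nil {
		if data, err = fetchViaIndexer(cid); err != nil {
			return nil, err
		}
	}
	n.mu.Lock()
	n.cache[cid] = data // warm the cache for the next request
	n.mu.Unlock()
	return data, nil
}

// Stand-in transports; a real node would speak HTTP/libp2p here.
func fetchFromL2(cid string) ([]byte, error)     { return nil, errors.New("no regional L2") }
func fetchViaIndexer(cid string) ([]byte, error) { return []byte("content for " + cid), nil }

func main() {
	n := &L1{cache: map[string][]byte{}}
	b, _ := n.Get("bafy...example")
	fmt.Println(string(b)) // a second call for the same CID would be a cache hit
}
```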
If you don't have regions, then a lot of the L2 construction is not that useful; if you get SPs to be really fast, the L2 component might not matter much. But once you have regions, this is why the regions really matter. An important thing here: if you have a ton of L1s hammering the indexers, the indexers are going to have to scale along with that, but I think the indexers are in really good shape on that at the moment.
What we need to do is build an end-to-end testing structure that shows how all of this is working at scale, with retrievals from all over the world. So we need to instantiate some test networks that can pretend to be these retrieval clients and exercise this entire flow across a system like this, to be able to see what's working and what's not working, with what degrees of latency, with what quality of service, and so on. I think this is one of the things I see for 2023 that's going to be pretty important: working on this kind of end-to-end testing. What I want to get to is a page where you can see the entire flow working, and you can see the stats for all of the CIDs that are stored on Filecoin: how retrievable are they around the world, and with what latency. That's hard to get to, but that's what would be great to get to. Cool.
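As a sketch of what one probe in such an end-to-end test could look like, here is a small loop that fetches a list of CIDs through a public gateway and records per-CID latency; the gateway URL and CID list are placeholders, and a real harness would run this from many regions and aggregate the stats.

```go
// Sketch of an end-to-end retrieval probe: fetch a list of CIDs through
// a gateway endpoint and record per-CID latency, the kind of measurement
// a retrievability dashboard would aggregate. Gateway and CIDs are
// placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probe(gateway, cid string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(gateway + "/ipfs/" + cid)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	cids := []string{
		"bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
		// ...more CIDs sampled from deals would go here
	}
	for _, c := range cids {
		d, err := probe("https://ipfs.io", c)
		if err != nil {
			fmt.Println(c, "FAIL:", err)
			continue
		}
		fmt.Println(c, "ok in", d)
	}
}
```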
One project that's mentioned here is Station; you'll hear more about Station in a moment. Actually, I guess this was already shipped on Monday and then discussed elsewhere, but with Station we decoupled a component: earlier in the year, as we were designing Saturn and so on, we decided to decouple a component and create a separate sort of infrastructure node.
Think of it like a Filecoin worker node that you can run. Now, what does the worker node do? It runs a bunch of different modules: it could be CDN workers, or compute workers, or whatever else you want. This makes the onboarding, the user experience, easy: you do it once, because that's usually the biggest hurdle, and once you've done that you can make decisions about what to run later. But this means it's a great entry point for retrieval market networks: you can deploy your worker nodes into Station. So think of being able to use Boost and Station as deployment platforms that you can ship modules into; that should save you a lot of the work in terms of user adoption, getting a bunch of nodes to run your code to be able to build a CDN or to have connectivity with SPs. Cool.
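A sketch of that "modules on one worker node" idea; the Module interface and both workers here are hypothetical, purely to illustrate how one onboarding step can host interchangeable CDN or compute workloads.

```go
// Sketch of a Station-style host that can run interchangeable workloads.
// The Module interface and both modules are hypothetical illustrations.
package main

import (
	"context"
	"fmt"
	"time"
)

// Module is anything the worker node can host: a CDN cache, a compute
// worker, a retrieval-network node, etc.
type Module interface {
	Name() string
	Run(ctx context.Context) error
}

type cdnWorker struct{}

func (cdnWorker) Name() string { return "cdn-worker" }
func (cdnWorker) Run(ctx context.Context) error {
	<-ctx.Done() // would serve/cache content here
	return nil
}

type computeWorker struct{}

func (computeWorker) Name() string { return "compute-worker" }
func (computeWorker) Run(ctx context.Context) error {
	<-ctx.Done() // would execute jobs here
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// One onboarding step, many deployable modules.
	for _, m := range []Module{cdnWorker{}, computeWorker{}} {
		fmt.Println("starting", m.Name())
		go m.Run(ctx)
	}
	<-ctx.Done()
}
```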
So let's talk about networks for a moment. Once the FVM arrives, it's going to enable a lot of L2s on Filecoin. The main L2s I've been talking about are compute networks. There are a lot of reasons for this, but we're going to end up with a lot of different compute networks, and Filecoin is aiming to be this really great L1 for all of these, and Bacalhau is building a lot of the infrastructure that you need to be able to do this. So think of Bacalhau nodes as being able today to run a bunch of computation across this larger compute network, and think of the data moving here. Ideally, you want to move the computation to the data, to where it is, so the indexing story really matters there; but if it's not all in one place, then you're going to have to move it, and so you have to move the computation.
Now you care about retrieval, and that's a whole different story: retrieval for compute looks very different than retrieval for content delivery. Those are two separate problem spaces; I would not recommend the same retrieval network address and tackle both. They look pretty different. You could do that down the road, but start with one. And this means Filecoin will also be a great L1 for CDNs.
I don't have a nice picture for that, but think: as soon as the FVM arrives, you can build a bunch of L2-type networks this way, and you can use the IPC structure to have all the subnets and whatnot. So that's sort of what I expect will be happening by, like, Q2/Q3.
In terms of network scale, this is a little bit harder to guess, like what kind of scales we might see. I think on the low end we will see retrieval networks with tens of thousands of nodes this year. On the high end, we might get to millions; I think these kinds of things can scale really quickly.
If you have the funding to move through it, meaning if you have a bunch of demand from users and you couple that economic demand to then paying worker nodes to serve that content, then you can actually scale really fast.
So it's really hard to guess whether it's going to be tens of thousands of nodes or millions of nodes this year. I would encourage you all to build for scale.
Try to build a thing that will be able to scale well to those millions, because then, as soon as you have significant demand, you don't have to go and re-architect your system along the way and hit scaling problems. So really shoot for large-scale, smooth scaling, lean on the region structure, you know, regions with IPC and so on, and then really find extremely good demand drivers that are going to give you massive amounts of content that you need to deliver.
Yeah. So then, in terms of applications: what I would encourage people to focus on right now is small static assets. This includes NFTs and so on, but it also includes websites, images, all the little bits that get built into the web; the web has all these little components everywhere. Anyway, quick check on timing... okay, great. So yeah, I really encourage everybody to optimize for that first.
I think the second thing you want to optimize for is video. Video is a huge fraction of the traffic of the web, so if you get it right, that's the best use case for any CDN. You want to use the small static assets to get the whole thing working, but that's going to be a small amount of the traffic; the large part of the traffic will be the video. So get it working well for video, and then go from there.
You can then go into more real-time things like games and virtual worlds. There are a lot of people in web3 trying to mix games and virtual worlds, and guess what: the retrieval life cycle sucks for them today. It's just really difficult, and so they end up leaning back into web 2 systems. So those are great users.
Then at that point you can decide whether you want to go and add privacy, meaning access controls and all that kind of stuff, or you go into, you know, more real-time, changing things like games and virtual worlds.
So eventually we can get to being able to serve all of this stuff with these retrieval market networks, and I kind of imagine all of this being loaded from your CDNs, which would be pretty sweet. Cool. I'm going to pause for questions here; maybe I'll leave the network goals thing for you to think about. I'll maybe take two questions.
B: Thank you. It's Andreas from Legacy. A question about the granularity you envision: you talk about regions, so is a region sort of a continent, a country, a metropolitan area, even smaller? What are your thoughts about that? And then the second question is, you know, where would the hosting of those CDN nodes actually be?
A: So the internet is this big grapevine, right, and what you want to do is choose your region component to minimize the distance from the user to where the data is. Where exactly you draw the boundary: there are many layers here, and your region topology is not going to match up to this topology exactly; you're going to choose some subset. The cloud basically has three, maybe four layers right now: you have the end users, you have the data centers, and then in between you have points of presence, and in some cases you have edge stuff that's even closer to the user. Those three or four layers are a pretty good place to start.
You know, this is happening: we're actually going to be bringing IPFS into space, and that's another region, but it's going in the other direction. So I would say don't worry about this yet; this is like a 2024-25 problem.

B: You expect that the nodes will be sitting in the same data centers where Akamai sits today?
A: Yeah. So the cool thing about the internet grapevine is that it has been very well built out, and all of these things are not single companies and so on: a lot of these data centers are places where you can add racks, and you have these internet exchanges where you can just add another device to route. So when Netflix built theirs... and I'd encourage everybody here to go and watch all of the amazing technical talks that groups like Netflix and others have given over the last 10-15 years, because they go in depth into all the problems that they faced and their solutions, how they thought about it.
Netflix started with the question of whether they should just use a CDN that existed or build their own, and they chose to build their own for very good reasons, and they outperformed every other CDN in a much cheaper way, because they were able to just focus on the use case they wanted. In their case, they were going around with these machines pre-loaded with all the content that they predicted was going to be viewed in that region, because they had a lot of data on what people were watching; or, ahead of when they were going to launch something, they would pre-load all the content there.
They would have these machines that they would just add into an internet exchange and serve it right there. So the clients would go into this kind of gray area (there's a gray cloud here, and then immediately, or I guess maybe all the way over here... hold on, let's look at the more corporate stuff... yeah, here): they would basically go into either this gray area or this cloud over here, and then immediately there the box was sitting, right there, and they would go back. So this is roughly a 10-to-50-millisecond distance. 50 milliseconds is great: if you can start delivering bytes 50 milliseconds in, you're very good; you're going to be able to load the first frames of the video really well and then start playing back, and playback helps you a lot. Because once you have the first, say, 24 frames (that's one second, usually) or 60 frames (that's one second), once you're able to serve that, you can load the rest much faster, sorry, from the user perception perspective. So what really matters in video is that you deliver the first part, like the header, as fast as possible, and then, you know, people start playing. That's why, when you seek, it sometimes waits a little bit: because you've tripped up all of the optimizations that they did.