Description
Every other week, the Retrieval Market Builders get together to share progress in their projects in a demo format. We want to sincerely thank all of our collaborators for demoing their developments and helping to improve the FIL Retrieval Markets one demo at a time.
May 11, 2022, Demo Day: In this video, you can find quick demos from -
• New Web Group
• Leeway Hertz
A
Sure, hello, welcome to this week's Retrieval Market demo day. Today's date is Wednesday, the 11th of May 2022. On the schedule today we have New Web Group talking about the Titan Network's first milestone, and Leeway Hertz speaking about the Retrieval Performance Dashboard. So we'll hand over to Edmund from New Web Group to start us off.
B
In this meeting we have our senior software engineering manager, Lynn, our project manager, Yuan, and me, Edmund, as the presenter. If you have any questions, you can ask Lynn and I'll do the translation for you, or you can also ask Yuan. Let's get started with today's agenda. We're going to talk about two things: first, what the Titan Network is, and second, an interpretation of the Titan Network's functional model.
B
How to join the Titan Network: service providers of edge shared nodes only need to install the Titan Network shared-resource client, connect to the network, generate a Filecoin wallet, and become a family of edge IPFS lightweight nodes that participate in contributing resources and earn FIL rewards. Idle bandwidth and storage resources are transformed through the Titan Network into a shared pool of resources, which are cached in advance by the intelligent scheduling center to achieve acceleration based on resource usage frequency. Conditions for becoming an edge node:
B
For upload bandwidth, the initial bandwidth threshold is set at 10 megabytes per second. Users are allowed to freely choose the amount of resources they want to contribute at the beginning of the installation process, and the scheduler will not use more resources than the value filled in by the user for the service provider nodes.
B
Acceleration principles, in other words command scheduling. How does the scheduler know what blocks are stored in each candidate node, verifier node, and edge node? The scheduler directs the candidate nodes or verifier nodes to pull blocks or CAR files from IPFS/Filecoin. If blocks are pulled, the CIDs are returned directly to the scheduler. If a CAR file is pulled, the candidate nodes or verifier nodes need to return the CIDs of all the blocks inside the CAR file after pulling the whole CAR file.
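The reporting rule just described can be sketched as a small, self-contained toy. Everything below (the fake content map, the helper names, the task shape) is hypothetical and only illustrates the two cases, single blocks versus a whole CAR file; it is not Titan's actual client code.

```python
# Toy illustration: nodes report CIDs block-by-block, or report every block CID
# after pulling a whole CAR file. The "network" here is just an in-memory dict.
FAKE_NETWORK = {
    "bafyCarRoot": ["bafyBlock1", "bafyBlock2", "bafyBlock3"],  # CAR root -> blocks inside
}

def pull_blocks(cids):
    # Individual blocks: each requested CID is reported back directly.
    return list(cids)

def pull_car(root_cid):
    # Whole CAR file: report the CIDs of all blocks contained in it.
    return FAKE_NETWORK[root_cid]

def handle_pull_task(task):
    if task["kind"] == "blocks":
        reported = pull_blocks(task["cids"])
    else:
        reported = pull_car(task["root_cid"])
    return {"node": task["node"], "cids": reported}  # payload returned to the scheduler

print(handle_pull_task({"kind": "car", "node": "candidate-1", "root_cid": "bafyCarRoot"}))
```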
B
How does a verifier node know what blocks are stored in the candidate nodes and edge nodes? In the Titan Network, according to our setting, each candidate node is qualified to be a verifier. In each election cycle, currently seven days, a certain number of verifiers will be randomly selected from the candidates to verify the validity of the storage and bandwidth of the nodes in the whole network.
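A minimal sketch of that election rule, assuming nothing beyond what was said above: a random subset of verifiers is drawn from the candidate pool once per cycle. The candidate names, the verifier count, and the seed handling are invented for illustration.

```python
import random

CYCLE_DAYS = 7  # election cycle length mentioned in the demo

def elect_verifiers(candidates, num_verifiers, seed=None):
    # Randomly select verifiers from the candidate nodes for this cycle.
    rng = random.Random(seed)
    return rng.sample(candidates, k=min(num_verifiers, len(candidates)))

candidates = [f"candidate-{i}" for i in range(20)]  # hypothetical candidate pool
print(elect_verifiers(candidates, num_verifiers=5))
```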
B
What types of files can be accelerated, and how large a file transfer can be accelerated? Let's take a CAR file as an example. If a CAR file is pulled, the candidate nodes or verifier nodes will split the CAR file into several blocks after pulling the whole CAR file and transfer the blocks to several edge nodes that are closer to the user. Compared with direct acceleration of the whole CAR file, the effect of accelerating blocks is much better, and it requires less hardware.
B
Does it support Filecoin file storage? Does the Titan Network's PCDN support storing data? The Titan Network does not support users storing data freely in the early stage; however, the candidate and verifier nodes of the proposed design are essentially IPFS nodes, so this can be supported at a later stage.
B
If you have any questions, please do not hesitate to ask me, and maybe I can translate them for Lynn, or maybe you can ask Yuan.
C
I can start. Okay, thanks everyone for joining this call and giving us this opportunity to present. I am Deepak. Let me share my screen.
C
And let me know if you can see my screen.
C
Perfect. Okay, so I'm Deepak, I'm the CTO at Leeway Hertz, and we've been working with Protocol Labs, building on Filecoin, for almost three months now. We've been working closely on capturing the network performance, so we're working on the Retrieval Performance Dashboard.
C
In this specific product, as you all know, Filecoin has been able to successfully measure and analyze the storage performance of various files, but retrieval performance is something that has not yet been monitored or measured so far. As part of this project, that is what we are planning to do: we are basically comparing file retrieval from various storage providers.
C
The overall networks, the storage providers that we are planning to cover in this project, are Cloudflare, Estuary, Meson, web3.storage, IPFS, and Myel, and as we move forward, if there are new storage providers, we'll include them as well. The comparison is based on different metrics.
C
What we are trying to do is get the performance of the retrieval of a specific file based on the location. The metrics are TTFB, which is time to first byte: let's say I, as a user, try to fetch a file from one of the networks, say Meson or Myel, how much time does it take to fetch the first byte? And then TTLB, which is time to last byte: what is the overall time it takes to download that file using the CID?
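As a concrete illustration of these two metrics, here is a minimal sketch of how TTFB and TTLB can be measured for a single retrieval over HTTP. The gateway URL and CID are placeholders, not the dashboard's real configuration.

```python
import time
import requests

def measure_retrieval(url, chunk_size=64 * 1024):
    # Stream the response so the first byte can be timestamped separately from the last.
    start = time.monotonic()
    ttfb = None
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for _chunk in resp.iter_content(chunk_size=chunk_size):
            if ttfb is None:
                ttfb = time.monotonic() - start   # time to first byte
    ttlb = time.monotonic() - start               # time to last byte (full download)
    return ttfb, ttlb

# Placeholder gateway URL; substitute a real CID to try it.
ttfb, ttlb = measure_retrieval("https://ipfs.io/ipfs/<cid>")
print(f"TTFB: {ttfb * 1000:.0f} ms, TTLB: {ttlb * 1000:.0f} ms")
```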
C
So that's the overall project that we are working towards now.
C
This is the high-level design. I'll walk you through the whole design, what it contains, and what all the different components of this project are. So let me just jump to the Retrieval Performance Dashboard. This is the working prototype of the overall project. Over here you can see the different storage providers that we are trying to compare in this first version. Then what we have done is launch multiple bots in different locations, so these are the different locations where all these bots are available, corresponding to each storage provider.
C
What we have done is store one file, and corresponding to that file we have the CID available, which is the content identifier, and we are trying to fetch that file using the different bots that are running in all these different locations. What we are trying to show is, let's say we have nft.storage as one of the storage providers and we are trying to see the performance from the location North Virginia.
C
This is the file, and this is the time when we last fetched this performance. You can find this performance for the last 7 days or 30 days, and corresponding to this we are showing the best, average, and worst speed times for this specific file, for this specific location, for this specific storage provider.
C
Now, on the right-hand side, you can also see all these different nodes, the different bots that are running from different locations, and based on this color coding, these legends, we are showing what the performance of that specific location is for fetching that specific file. This is a real-time dashboard that we are showing here, so for Bahrain, for Estuary.
C
What is the current TTFB for a given CID? All these files that we have stored on these different networks are pretty much the same size. I'll explain in a little more detail when I walk through the source code structure as well. So this is the current retrieval time, what it takes to fetch any CID. If you see, there are different color codes: under 200 milliseconds is this color code, and we have divided it into five different legends: excellent, good, average, bad, and poor.
C
You can go ahead and change to the TTLB performance here and see the difference. It's live. You can change the location also: let's say we change it from North Virginia to Southeast Asia, and this is the time it is taking for that specific content to be fetched. You can try different storage providers and see what their performances are, so this is the performance based on the file and the location.
C
Let me just try to show it here. This is the historical graph that we are trying to show for these different storage providers, with respect to a specific location.
C
It shows how these networks are performing over a period of time, and we are also showing, based on certain events, whether a certain network's performance goes down or up. We are trying to capture these different events that have happened: let's say there is a new version of the storage provider, any upgrades to their network or anything, and what the performance is after that specific event. That's what we're trying to show here.
C
You can also change the time period, and over here you'll see the dates and the performance in terms of milliseconds. This is something that we are still working towards. Now let me quickly move to the high-level architecture for this application.
C
What we have here is a config file in this system, which basically consists of the information around the content URL, the CID, the base URL that we're using for that specific network, and the destination from where we are trying to fetch that specific file. This config file is available to the scheduler service.
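Based only on the fields mentioned in the demo (content URL, CID, base URL, network, destination, file size), a hypothetical entry in that config could look roughly like the following. The field names and values are illustrative, not the project's actual config.json.

```python
import json

# One hypothetical config entry; the scheduler would hold a list of these,
# one per storage provider / test file.
config_entry = {
    "network": "estuary",                            # which provider this entry targets
    "cid": "bafy...",                                # CID of the stored test file (placeholder)
    "baseUrl": "https://dweb.link/ipfs/",            # placeholder gateway base URL
    "contentUrl": "https://dweb.link/ipfs/bafy...",  # full URL the bot fetches
    "destination": "/tmp/retrieval-test",            # where the bot writes the fetched file
    "fileSizeBytes": 1048576,                        # files are kept roughly the same size
}
print(json.dumps(config_entry, indent=2))
```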
C
The scheduler service picks up this config file and keeps pushing "file retrieve" events into the AMQP queue over here, and this happens every 30 seconds, so that we can monitor the performance of these storage providers in real time.
C
These are the various bots that are running in different regions of AWS as of now. I'll explain a little more about the different infrastructure services that we are planning to use, but as of now we are using AWS, and these bots are running in almost 22 regions.
C
These bots receive the event, and they have all the information they need in order to fetch the file. So they fetch the file, and they have logic built in wherein they create another event, a "file retrieved" event, and push it to the next queue. In this event we are including the TTFB and TTLB measured for that specific file content, the time period, the time at which it was fetched, and the bot location from which we retrieved that file.
C
We store these measurements in the database and then, using the APIs, we show them on the front end. That is, at a very high level, how the overall architecture looks.
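To make the flow above concrete, here is a sketch of one bot's consume/measure/publish loop, assuming a RabbitMQ broker driven through pika and reusing the measure_retrieval helper sketched earlier. The queue names, broker host, module name, and event fields are guesses at the shape described in the demo, not the project's actual code.

```python
import json
import time

import pika  # assumes RabbitMQ as the AMQP broker

from retrieval_metrics import measure_retrieval  # hypothetical module holding the earlier TTFB/TTLB helper

BOT_LOCATION = "us-east-1"  # hypothetical region label for this bot

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="file-retrieve")    # events pushed by the scheduler every 30 s
channel.queue_declare(queue="file-retrieved")   # results consumed by the storage/API services

def on_retrieve_event(ch, method, properties, body):
    event = json.loads(body)
    ttfb, ttlb = measure_retrieval(event["baseUrl"] + event["cid"])
    result = {
        "cid": event["cid"],
        "network": event["network"],
        "botLocation": BOT_LOCATION,
        "ttfbMs": round(ttfb * 1000),
        "ttlbMs": round(ttlb * 1000),
        "retrievedAt": time.time(),
    }
    # Publish the "file retrieved" event for the downstream service to persist.
    ch.basic_publish(exchange="", routing_key="file-retrieved", body=json.dumps(result))

channel.basic_consume(queue="file-retrieve", on_message_callback=on_retrieve_event, auto_ack=True)
channel.start_consuming()
```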
C
Now, if I just quickly jump to the source code: these are the different microservices that are running in the system. As I explained earlier, we have a scheduler microservice, and in this scheduler microservice we have the config file defined under modules/jobs. This is the config.json wherein we have defined the content URLs, network, destination, file size, and the CID for each file. It basically reads this config and pushes the content to the AMQP, which again is running on AWS as of now.
C
Now, what the bot does is, as I explained: it fetches the file using the config, captures the TTFB and TTLB for that specific time period, then accordingly constructs an event and pushes it to the AMQP. Again, there are a couple of microservices that we have created for storing the data and for the API purpose. So this is, at a very high level, the source code for this project.
C
Now let me talk about what's upcoming, the new things that we are planning to work towards, and then maybe after this we can take up some questions.
C
As of now, as I mentioned, we are using AWS for launching these bots. In order to get a real-world scenario from an end-user perspective, where there could be network issues or some latency on the end user's side, we are trying to utilize infrastructure services that are available in different regions, so that we can have much better metrics.
C
Some of the storage networks that we are planning to bring into the system: Saturn is one of them, then the Cloudflare IPFS gateway, Titan, and Myel.
C
From a metrics perspective, one more metric that we would like to bring into this system is an end-to-end use case, wherein a bot receives a file, and that bot stores the file using one of these networks, one of these storage providers.
C
We store the file and then we try to retrieve the file as well, so that from storing up to the retrieval we can calculate the overall time taken for that entire process. In that case, what we are trying to capture is, one, the storage time, another is the retrieval time, and then there is the cloud distribution time as well. That is also one of the important aspects that we are trying to cover in this project.
C
Beyond that, we have some thoughts around the creation of a reputation system, using the reliability, scalability, and performance of these storage provider networks to create some sort of score around that. So that's it at a very high level.
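As a purely illustrative sketch of how such a score could be composed from reliability, scalability, and retrieval performance, consider the weighted combination below. The weights, the 0-1 normalisation of the inputs, and the 0-100 scale are all invented for the example; the demo did not propose a specific formula.

```python
def reputation_score(reliability, scalability, performance, weights=(0.4, 0.2, 0.4)):
    # Each input is assumed to be normalised to the range 0.0-1.0; the result is a 0-100 score.
    w_rel, w_scal, w_perf = weights
    return 100 * (w_rel * reliability + w_scal * scalability + w_perf * performance)

# Example: a provider that is highly reliable, moderately scalable, and fast to retrieve from.
print(reputation_score(reliability=0.98, scalability=0.7, performance=0.85))
```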
C
I know we can go into a little more detail, but we can always take questions after the second presentation.
A
Great, yeah, thank you very much. I just have one clarification, which is that at the beginning you were talking about storage providers. We use the word storage provider to talk about the Filecoin nodes that are actually storing the file and sending proofs to the chain, so in this case it's retrieval providers, or retrieval networks, that we're measuring the performance of. But yeah, it all looks great, so thank you very much. Anyone else?