From YouTube: Slingshot Phase 2.4: Closing Ceremony
Description
Slingshot 2.4 closing ceremony, happening on August 3, 2021.
With presentations from:
00:00 Welcome and Overview of Phase 2.4 - Deep
13:12 Filecoin Proofs - Susmit from Huddle01
21:32 Dataset Diversity - XinAn Xu
31:01 FileDrive in Slingshot 2.4 and What's Next? - Laura from FileDrive
39:15 Solutions for New Slingshotters - Baoyuan from FilSwan
46:13 Slingshot Phase 2.5 and Closing - Deep
A
Hey everybody, welcome to the closing ceremony for Phase 2.4 of Slingshot. This has been a super productive phase: lots of exciting updates, lots of news to share, feedback to share.
A
For Phase 2.4 of Slingshot, first up, I want to start by walking through the agenda real quick so you folks know what's coming. First things first, we have the Slingshot Phase 2.4 closing overview, then Susmit from the Huddle01 team, then XinAn Xu presenting, Laura presenting from FileDrive, and Baoyuan from the FilSwan team. Then I'll talk a little bit about the future of Slingshot and upcoming phases at the end, and we'll try to leave some time for questions.
A
So if you have any, please feel free to share them in the chat; we'll have somebody watching regardless of whether you're on the live stream or in the Zoom with us. So let's get started. First, to all the participating teams, of which 46 qualified for awards, there are a lot of you: congratulations on completing Phase 2.4. In this phase about 7.2 pebibytes of data was onboarded, which led us to cross the massive milestone of 20 pebibytes of data onboarded through Slingshot Phase 2, which is absolutely massive. Congratulations.
A
This is a huge step toward making Filecoin more productive in the near term, and it has already proved to be an incredibly valuable stress test in many ways: lots of learnings and takeaways about how deal making in general can be better, and about the things that need to exist for Filecoin to actually deliver its value and its mission of being the data store for humanity's information.
A
This phase had 240,000 deals, which was slightly less than the previous phase, but each individual deal was actually more efficient: we had an average deal size of close to 32 GiB, so sectors are very densely packed. The average data onboarded per participant was also a little lower, at 137.17 tebibytes, mostly because we had so many more teams participating.
A
So it's really great to see the wide variety of data being onboarded, with massive diversity in the datasets being used and onboarded to the network. This should hopefully help in the future, especially as teams around the world that aren't just storing but are looking to retrieve data of different kinds build projects where they can take a dependency on data that lives on the Filecoin network.
A
So huge, huge congratulations to the participating teams. It's clear that you found your efficiency and effectiveness sweet spot and are able to make rapid progress in onboarding data to the Filecoin network. I'm excited to see this kind of trend continue, and to gather additional learnings and takeaways about how we should improve the network together as a community, to ensure that, as I mentioned, it meets its mission.
A
So the judging and results for Slingshot Phase 2.4 were shared, I think, a couple of weeks ago. The total reward pool for this phase was 50,000 FIL, spread across the 46 teams. The link to the actual table can be found in Slack; there's a channel called slingshot-announcements that many of you are aware of, and I'll post the public link there so you can take a look anytime. We'll be talking through some of the takeaways and learnings from judging and scoring.
A
There were bonuses too, and we'll talk about that. But to see so many teams participating at such a high level is incredible, and I'm excited to see that continue, and also to refine the criteria and the rules as a result of these learnings and takeaways.
A
Each team submits metadata for all the deals that they made on the network. This includes the files that they stored, and documentation about how they stored their data and how one should retrieve it, if you're interested in retrieving the specific datasets they've onboarded to the network. It also includes building a Deal UI, which is effectively an index and a search tool, to ensure that if you're interested in the data they've onboarded, you have a way to access it: you can go to their website and look it up.
A
Then retrieve it from the Filecoin network. And then a demo video, which ties a lot of these elements together, so that you can watch a roughly four-minute clip and in that time understand what the dataset is, how it was stored, how you should retrieve it, and what you can do with the retrieved content. And because we have so many teams that performed super well, especially at the top end, there are always disproportionately performing teams.
A
We had initially capped rewards at ten percent in the first allocation: rewards are attributed proportionally to all the teams, but a single team can receive at most 10 percent of the reward pool; then we award bonuses and re-attribute the outstanding rewards. This is to ensure that participating teams do actually get the entire pool.
A
The pool that was supposed to be distributed to them does get distributed to them, while also ensuring that teams operating at a much higher scale don't end up just overwhelming teams at the bottom, even when those teams at the bottom do participate, contribute usefully to the network, and complete the competition. We will talk through a little bit of how this happened in detail, but if you have any questions, feel free to share them in the chat. First up, congratulations to all the participating teams.
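The cap-and-redistribute scheme described above can be sketched roughly as follows. The 10 percent cap comes from the talk; the function shape, scores, and variable names are illustrative, not Slingshot's actual implementation.

```python
def allocate_rewards(scores, pool, cap_fraction=0.10):
    """Proportionally split `pool` by score, capping any single team at
    cap_fraction of the pool and redistributing the excess to uncapped teams."""
    final = {}                 # team -> final reward
    remaining = dict(scores)   # teams not yet capped
    pool_left = pool
    cap = cap_fraction * pool
    while remaining:
        total = sum(remaining.values())
        share = {t: pool_left * s / total for t, s in remaining.items()}
        over = {t for t, r in share.items() if r > cap}
        if not over:
            # nobody left exceeds the cap: distribute and stop
            final.update(share)
            break
        for t in over:
            final[t] = cap
            pool_left -= cap
            del remaining[t]
    return final
```

With 46 teams and a 10 percent cap, at least ten teams are needed before the whole pool can be paid out, which is why the loop simply stops once everyone left has been capped.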
A
46 were awarded rewards. There were six more teams, I believe, that weren't, because of some caveats: mainly, we still have to ensure that participating teams store their data with at least four miners, and that no single miner ID is attributed more than, I think, 30 percent, if I remember off the top of my head, but all of this is outlined on the website. This is primarily to ensure that the storage providers are actually serving data on the Filecoin network.
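As a rough illustration of those two eligibility checks (at least four miners, and no single miner holding more than roughly 30 percent, per the speaker's recollection), a minimal validation sketch might look like this; the deal-list shape and function name are assumptions, not the official checker.

```python
from collections import defaultdict

def check_distribution(deals, min_miners=4, max_share=0.30):
    """deals: list of (miner_id, bytes_stored). Returns a list of rule
    violations; an empty list means the distribution passes both checks."""
    per_miner = defaultdict(int)
    for miner, size in deals:
        per_miner[miner] += size
    total = sum(per_miner.values())
    problems = []
    if len(per_miner) < min_miners:
        problems.append(f"only {len(per_miner)} miners, need {min_miners}")
    for miner, size in per_miner.items():
        if total and size / total > max_share:
            problems.append(f"{miner} holds {size / total:.0%} > {max_share:.0%}")
    return problems
```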
A
This helps keep your availability high for future clients that might be looking to retrieve this data, or if you want to retrieve your own data in the future. Having multiple replicas reduces the risk that anything goes wrong with one of the storage providers that you're choosing to work with.
A
So, some takeaways from judging this time around. For the first time, we actually integrated a retrieval success rate into the scoring. We've talked a lot about how retrieval is extremely important as part of Filecoin scoring.
A
This is because not only is it important to onboard your data, but the data should be onboarded in a way that makes it accessible to people. So we currently test in an automated but random manner, where we make retrieval deals based on data that you've onboarded as a Slingshot participant, on a daily or weekly basis, and then we publish those logs on the website. So you can actually go anytime, look at the leaderboard, click into the RSR number, and see the retrievals that were attempted. We should also be publishing the logs so that you can take a look and make adjustments, and we're going to do our best to update this much more frequently, so that you have chances to correct, dispute, and discuss with us through the course of a phase, as opposed to needing to wait until the latter half of the phase. This was included in Phase 2.4 scoring as part of a data onboarding and retrieval score, which is a holistic score weighted 50 percent by how much data was onboarded and 50 percent by the retrievability of that data.
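The holistic score described here, half data onboarded and half retrievability, can be sketched like this; normalizing onboarded data against the phase maximum is an assumption on my part, as the talk only gives the 50/50 split.

```python
def onboarding_score(data_onboarded_tib, max_onboarded_tib, rsr):
    """Holistic data score: 50% weighted by relative data onboarded,
    50% by retrieval success rate. Both parts are in [0, 1], so the
    final score is too. `rsr` is the retrieval success rate (0..1)."""
    data_part = data_onboarded_tib / max_onboarded_tib if max_onboarded_tib else 0.0
    return 0.5 * data_part + 0.5 * rsr
```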
A
Documentation that was published generally met an extremely high bar. The one piece of feedback I wanted to call out for participating teams is that many of the datasets are lots of very small files, while some of the datasets are massive single datasets that are divided into chunks. So for the ones where you're actually constructing pieces out of files:
A
You should have some explanation of how those pieces were constructed. A lot of teams will say, "oh, we just divided them into 16 GB chunks or 32 GB chunks," and while that works, it's very difficult for a client who's reading it to understand and follow that logic. So if you're doing that, then there should be a clear mapping in your Deal UI, or somewhere else, that a client can follow through the obvious steps of "ah, it was just divided into these pieces."
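For fixed-size chunking, a minimal sketch of producing the kind of client-facing mapping asked for above might look like this; the manifest columns and part-naming convention are illustrative, not a Slingshot requirement.

```python
import csv
import hashlib
import os

def chunk_with_manifest(path, out_dir, chunk_size):
    """Split `path` into fixed-size chunks and write a manifest.csv a client
    can follow: chunk name -> source file, byte offset, length, sha256."""
    os.makedirs(out_dir, exist_ok=True)
    rows, offset, index = [], 0, 0
    with open(path, "rb") as src:
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            name = f"{os.path.basename(path)}.part{index:04d}"
            with open(os.path.join(out_dir, name), "wb") as dst:
                dst.write(data)
            rows.append([name, os.path.basename(path), offset, len(data),
                         hashlib.sha256(data).hexdigest()])
            offset += len(data)
            index += 1
    with open(os.path.join(out_dir, "manifest.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["chunk", "source", "offset", "length", "sha256"])
        writer.writerows(rows)
    return rows
```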
A
"I need to figure out which piece to retrieve." This is also extremely important as we work towards enabling partial retrieval in the future, so clients can figure out exactly the file that they want within a dataset and go get it from the right piece that's been stored with the storage provider. And then there's the case where your dataset is one massive single file that's been chunked.
A
In that case, I think it matters less, because typically clients would have to retrieve all the parts for all of the chunks to be able to view that dataset. But in those cases you should also show reconstruction, and then what to actually do with the dataset in order to use it. Some projects have excellent examples of this; I'll try to share some of them towards the end.
A
If we end up with time; if not, I'm happy to chat in Slack. But there were some project videos that had excellent examples of reconstruction and then actually viewing the data in custom software, or in video or media software, for datasets that are published in chunks.
A
The Deal UI is generally much improved from the last phase; congratulations to all of you for taking that feedback. Search was much more consistent as a feature. The only thing I would add, similar to the previous point, is to make sure that searching for files within pieces works, which I think most teams did do.
A
One thing that was a little tricky was partial file names. For file names that have piece or part IDs, just putting in the ID should be good enough, and not all teams were able to handle that. Demo videos, again: extremely high bar, extremely improved overall.
A
Just show us the retrieved content at the end: once you finish your demo of retrieval, open it up in the appropriate application, so that I, as a viewer, can watch and see "oh yeah, this is how I use this file type or file extension," even if it's as simple as double-clicking and having it play in iTunes or something, in the case of media files. But yeah, overall, content continues to remain at an extremely high standard and meet a really high bar. Congratulations to all of you.
A
Lots of bonuses were awarded for high quality of content, so it's awesome to see. We got some feedback as well, and so I'll be working towards improving some of the definitions of the rules and expectations.
A
So you know what you're working towards. Integrating retrieval success rate is always a little bit tricky, so we should have conversations in Slack if you have ideas for this, as well as for adjustments to rewards in general and how we assign rewards to the pool. We want to ensure that the competition continues to scale up and continues to make the network more productive. We're running a little bit behind schedule.
A
So if you have any questions, drop them in and I'll either try to address them in chat, or we can take them at the end of the session. With that, I wanted to invite Susmit from the Huddle01 team to take over. Susmit, thanks so much for making the time to be here. Go ahead and grab control of the screen.
D
Okay, so yep, we participated in Slingshot as team Huddle01. What we did was store the Filecoin proofs on the Filecoin network itself, and we wanted to do it in the best manner, the way the Filecoin protocol was designed to be used: the data needs to be decentralized in various respects, and it should be retrievable, as Deep mentioned, to the highest standard possible.
D
So that's what we did. A brief introduction to Huddle01: Huddle01 is a decentralized real-time engagement platform, or protocol, that was born out of HackFS 2020 last year. In simpler terms, I would describe it as kind of a decentralized Zoom. Right now, Zoom is centralized; it's based on centralized server hardware.
D
We focus on the tech side and on internet penetration in tier-three cities. To visit, go to huddle01.io and everything is there. Now, the main motivation for Huddle01 to get into Slingshot: our main motivation was contribution to and engagement with the Filecoin ecosystem.
D
That's because we wanted to meet and collaborate with other like-minded community members out there, because it helps us grow as individuals and as a community. The other goal was to develop the storage infrastructure at Huddle01 itself, test those capabilities, and gain learning and knowledge as usual. So, in Slingshot:
D
Our approach to the problem was: first, we set up a Lotus node on the cloud, and we downloaded the data from proofs.filecoin.io. It's an IPFS link, so at first we were downloading over IPFS, but we found it a bit slow, so we went for multi-threaded downloading with wget, and that sped up the downloading process.
D
We put a lot of focus on retrieval because, in the end, it's the data that actually matters, not how the data was stored, right? So if an end user is able to retrieve that data, that is much better. In the Deal UI, one can search by file name, by deal ID, or by miner ID, and once the CAR files have been downloaded, you can use go-graphsplit to actually stitch those CAR files together and regenerate the original file.
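go-graphsplit does this stitching for CAR pieces. Purely to illustrate the manifest-driven reconstruction idea, a sketch might look like the following, where the manifest format (columns `chunk`, `source`, `offset`, `length`, `sha256`) is hypothetical rather than graphsplit's actual output.

```python
import csv
import os

def reconstruct(manifest_path, chunks_dir, out_path):
    """Concatenate chunk files in manifest order to regenerate the original
    file. Assumes a manifest.csv whose rows carry the chunk file name and
    its byte offset within the original file."""
    with open(manifest_path, newline="") as f:
        rows = sorted(csv.DictReader(f), key=lambda r: int(r["offset"]))
    with open(out_path, "wb") as out:
        for row in rows:
            with open(os.path.join(chunks_dir, row["chunk"]), "rb") as part:
                out.write(part.read())
```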
D
We had sent deals to around 230 miners, and the replication rate is around eight plus, so a CAR file would be available with at least eight miners. Even if one or two miners are offline, or you are not able to retrieve the data from them, you can actually retrieve it from six other miners, which are located in geographically different places. And our retrieval rate, as tested by Slingshot, was around 75 percent.
D
Regarding deal success rate: whenever we were making online deals, we had a deal success rate of around 17 percent. And the best part was that there were many deals that we could send through at a very low price; there were many miners out there offering that.
D
So at a very low cost, we can actually put a very high amount of data onto the Filecoin network; that was also the best part of it.
D
So yeah, we are on a mission. As we said, we are still building, still learning, and still scaling, so our main aim is to be able to store data with a high level of decentralization, as the protocol was designed for. The second step at Huddle01: Huddle01 is a decentralized Zoom, as I mentioned, and we are developing a recording feature. In that recording, what would happen is stream-to-storage.
D
So all of the video streams would be directly stored to the Filecoin network. The video generated is a very high amount of data: it easily scales up to a couple of GB for an hour of video generated at 720p from a simple webcam, and a meeting can have around five to ten people.
D
So a meeting of five to ten people, with all of their audio and video on, would generate a massive amount of data for one hour. And if we scale those numbers up multiple times, that would be a bit infeasible, or a bit expensive, with web2 approaches like Amazon S3 or other storage.
D
So that's why we are developing an architecture that we are calling stream-to-storage, and we are also looking forward to connecting with miners that can help us build this architecture. To follow Huddle01, I have mentioned the Twitter handle, and if someone wants to connect with me, I'm available on Slack, as shown on the slide. Ayush, also from Huddle01, is available on Slack too, and here is my email ID if anyone is interested. Yep, thank you.
A
Thanks so much for making the time to present this and to talk about Huddle01 as well; I know you folks are doing some excellent work as a project team. In the interest of time I might hold questions for now, but if there are any, I'll let you know; and of course, if you end up leaving, no problem, people know how to follow up with you as well. Thanks so much for participating in Slingshot and then joining us today to share. Thank you.
A
Awesome. So with that, I would like to introduce XinAn, who has been a prolific competitor in the Slingshot competition for quite a while. Take it away, sir.
C
Hi, thanks Deep. Hi everyone, my name is XinAn. I'm one of the participants in the Slingshot phase who is doing all the work solo, which I believe some of you are too. I myself am a programmer, and other than Slingshot, I also own a small Filecoin miner, and I maintain a few miners for my clients too.
C
I first started Slingshot during Phase 2.1, but at that time I was disqualified because I stored too much data on my own miner. Since then I've been learning and improving, and finally I'm here. I'm so honored to be here and present my projects to you all. I hope my presentation will inspire other solo participants and ultimately make the Slingshot competition more diverse. As a small participant in Slingshot, I know what I'm capable of and, more importantly, what I'm not capable of as someone who is doing it without a large team.
C
I don't really have a large network of storage providers to give me huge sealing power, so I shouldn't be looking at a dataset with petabytes of data. Instead, I like to focus on another aspect. The goal of Slingshot is to preserve humanity's most important datasets on the Filecoin network, and if you look at the list of datasets, you'll find that all the datasets are equally valuable regardless of size. So the value I think I can bring to this community is increased diversity.
C
So instead of storing as much data as possible from a huge dataset, I focus on smaller datasets, and I do my best to make sure they are well documented and easily retrievable. There is another benefit to storing a smaller dataset: it is easier to store the complete dataset with multiple storage providers, and it's more valuable to have the complete dataset available for retrieval, compared to storing only a portion of a huge dataset and never finishing the rest.
C
Of course, that's not a problem for the top participants, but for a solo participant, or a smaller one like me, I think this is very important. So what I've been doing first is exploring all the available datasets listed on GitHub, and I found lots of similarity between those different datasets in terms of data preparation; they can be categorized into the five categories below.
C
The second category, which I think most participants have also had some chance to work with, is datasets hosted on AWS S3, or sometimes on Google Cloud. It's very easy to use the AWS CLI to download them. However, there are a few exceptions here: some datasets, like arXiv, are requester-pays, so when you download them, you actually need to pay. Another exception is something like Mozilla Common Voice.
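For those requester-pays buckets, the AWS CLI needs an explicit `--request-payer requester` flag, or the request is denied. A small helper that builds the command (the bucket and key in the usage note are only placeholders for illustration):

```python
def aws_cli_download_cmd(bucket, key, dest, requester_pays=False):
    """Build the `aws s3 cp` command line for downloading one object.
    Requester-pays buckets reject plain GETs; the caller must opt in to
    paying for the transfer with --request-payer requester."""
    cmd = ["aws", "s3", "cp", f"s3://{bucket}/{key}", dest]
    if requester_pays:
        cmd += ["--request-payer", "requester"]
    return cmd
```

For example, `aws_cli_download_cmd("some-bucket", "some/key.tar", ".", requester_pays=True)` yields the command with the extra flag appended.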
C
That one requires the user to enter an email to get a one-time token, which can then be used to download the data from AWS; but in most other cases, it's just hosted for free on AWS and you can download it from anywhere. The third category is BitTorrent files, for example Google Open Images and the UC Berkeley computer science courses. When I was trying to download those using BitTorrent, the speed was actually a big problem, and that's also, I think, where Filecoin will be able to help. And another problem with BitTorrent:
C
Oh sorry, not Bitcoin: BitTorrent. Another problem with BitTorrent data is that, after a while, if there is nobody still seeding the file, there will not be any seeds for you to download from; whereas once we store the data on the Filecoin network, that's no longer a problem.
C
The fourth category is first-party CLIs.
C
For example, websites like the Internet Archive have their own CLI to search for and download data from their website directly. And the fifth category, which is the hardest one: you'll need to write your own crawler, and in that case you'll probably need a program to help you prepare the dataset.
C
Next, I'd like to talk a little bit about how I prepare the data. I'm using old-school tar archives for those deals. We know that at this stage, the storage providers are not yet as reliable as traditional storage services, so we need to prepare for a situation where some of the deals in a project may no longer be retrievable. To overcome that and still provide value:
C
I myself don't have a computer science degree, so I'm always interested in taking computer science courses to broaden my knowledge, and downloading them is very simple: you just click the download button, it gives you a torrent file, and you can download that torrent file with any client you like, such as uTorrent, qBittorrent, and so on; I use Transmission. On to the next slide: the Deal UI we built is actually very similar to the last presenter's.
C
You can search for deal IDs, miner IDs, and file names on the UI, and, more importantly, you can also search for a file name within each archive. For example, if you search for "operating system" in the second search bar, it's going to display all the deals that contain that video, and if you click the download button there, it shows the download instructions, and you can see that the operating system video file has been highlighted in the list.
C
Once you download the tar file, you can just extract it. Actually, with a tar archive, you can extract an individual file from the archive, so you don't need to extract all the files in the archive. So this is the Deal UI we have built.
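Extracting a single member from a tar archive without unpacking the rest is a one-liner with Python's standard library; a minimal sketch:

```python
import tarfile

def extract_one(archive_path, member_name, dest_dir="."):
    """Pull a single file out of a tar archive, leaving the rest packed."""
    with tarfile.open(archive_path) as tar:
        tar.extract(member_name, path=dest_dir)
```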
And lastly, I'd just like to talk about some tricks we've been using to make the deal-making process more automated. It's not covered in the slides, but I will talk through them.
C
What we are doing to facilitate the deal making and data transfer is this: on the client side, we have an API that lets the miner query what data is available to download, and we also have an HTTP host for them to download the data from, which is usually better than the default data transfer module provided by Lotus, because a traditional HTTP host is more reliable for data transfer. We also have a web API, so the miners can just call it and make a deal by themselves.
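As a sketch of the client-side availability endpoint described here (the JSON shape, directory layout, and function name are all assumptions, not the speaker's actual API), the payload a miner could poll might be built like this:

```python
import json
import os

def build_availability_index(car_dir, base_url):
    """Build the JSON payload a miner-facing endpoint could return:
    which CAR files are ready to download, with sizes and plain-HTTP URLs."""
    entries = []
    for name in sorted(os.listdir(car_dir)):
        if name.endswith(".car"):
            entries.append({
                "file": name,
                "size": os.path.getsize(os.path.join(car_dir, name)),
                "url": f"{base_url}/{name}",
            })
    return json.dumps({"available": entries})
```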
C
This eliminates the need for manual interaction and makes the whole deal processing automated. Before that, I would actually talk to the miners every morning: "okay, this data is available for download, and I'm going to send you deals right now."
C
"Here are the proposal IDs," and they would just take the proposal ID and manually import, and repeat it every day, every day, every day. So we invented a way to make this process more streamlined and automated, and that really helps a lot. Of course, since you are actually exposing an API for them to kind of manipulate your Lotus node, a lot of validation is required to make sure that miners will not abuse your API.
C
So this is the trick that we're using in this Slingshot phase, and that's all I would like to present right now.
A
Thanks so much, not just for this, but also because, over the course of the Slingshot competition, you've been a great resource for feedback, suggestions, and improvements to the competition, ensuring that it is productive and fair. So thank you for your contributions so far; it's been great to see the really high quality of content that you've been able to deliver. I actually didn't know you were doing this solo, so I'm even more impressed. I really appreciate the time you took to share this with us.
A
Thank you so much. Awesome. So with that, I'd like to hop on to our next presentation, which is actually from Laura, from the famous FileDrive team. Many of you know them because of their contribution in creating and sharing go-graphsplit, which many teams have used over the course of this competition. I'll be sharing a video that Laura sent us.
E
Hello, Slingshotters and friends. This is Laura from FileDrive. Now we are at the Slingshot 2.4 closing ceremony. Congratulations to all the participants: over seven pebibytes of data was onboarded to the network in Phase 2.4, and the total qualifying data has crossed the milestone of 20 pebibytes onboarded onto the network. The achievement is awesome and incredible.
E
We found that if we pack data from a large dataset into a tarball, slice it into small data pieces, and store these pieces onto the Filecoin network, this significantly increases the efficiency of storage data onboarding, but it also brings difficulties for data retrieval. To solve this problem and make data truly usable, our team developed go-graphsplit. It is a tool for slicing large datasets into graph pieces fit for making deals on the Filecoin network, which facilitates data retrieval as well.
E
It takes full advantage of the IPFS protocol's UnixFS data structures: it regards a dataset directory and its subdirectories as a big graph and cuts it into small graphs. Each small graph keeps its file system structure as close as possible to what it used to be. Besides, a manifest CSV will be created to save the mapping between each graph slice's name, payload CID, and piece CID. Graphsplit works well as a data prep tool when we migrate data from the centralized web to the Filecoin network, and it allows users to retrieve data through the deal information.
E
In the ideal application scenario, the IPFS and Filecoin networks can be used by anyone who has data storage requirements, without needing technical skills or knowledge of the IPFS protocol and the Filecoin consensus mechanism, to store data directly to IPFS and the Filecoin network. We are working on a new product, tentatively named FileDag. FileDag is a decentralized storage solution based on IPFS.
E
It will provide a high-quality IPFS pinning service and customized network service based on user requirements, plus an easy-to-use front-end user interface and a more developer-friendly API service, to minimize the technical threshold and make it truly useful. It can also be regarded as one of the necessary technical components for the Filecoin retrieval market, and part of the storage infrastructure for Web3.
A
Thank you so much, Laura, for sharing the video. FileDrive has been consistently a top-performing team in the competition, just doing phenomenally well, both in terms of onboarding a lot of data and doing it with quality, and also sharing tools that have become instrumental in the success of other teams in the competition as well.
A
So thanks for your contributions, and I'm looking forward to seeing what comes next, especially with FileDag and the other projects that you're working on. With that, I'd like to invite our final participating speaker from this phase, Baoyuan from the FilSwan team.
F
So I will start. Today I'm going to bring you guys through... could we go to the next slide? Yes, thank you. Today I'm going to bring you guys through the solutions for new Slingshotters.
F
First, about us: we're a team based in North America, in Canada. We have been participating in the Slingshot competition since 1.1, which started last October, all the way up to 2.4 now. It has been almost one year, and this is really a long journey, but we really enjoy it.
F
So we can go to the next slide. As time has gone by, we have made a lot of improvements in the deal processing and the importing process.
F
On this platform, we provide two important tools for participating in the Slingshot competition. The first part is on the client side: you can use our Swan client tool. Using it, you will be able to do things such as chunk data, merge small files into a large file, aggregate them, generate a CAR file, and send it to the miner; it will also synchronize all the tasks with our FilSwan platform.
F
On the platform, you will be able to track the entire life cycle of the file management, which means you can see the status of the deals, like downloading, ready for import, or deal active, and you can also get a CSV file to download all the information you need; for example, it includes the file name, the payload CID, which dataset, and so on. On the other side is the miner side.
F
We provide the Swan miner tool to continually import data without human interaction. You just need to start the Swan miner tool, and it will automatically pick up the offline-deal orders and import them for the miner. After everything is done:
F
It will update the information to our FilSwan platform, so your clients will see the progress and know whether everything is done or not, and it gives you time to do other things without just watching the terminal to know if it's finished. And we can go to the next slide, thank you. Here is an example of the actual interface of our application, and you can see here we define everything as tasks.
F
One task can contain up to a hundred deals, and each deal is about 32 GiB, which means you can have about three terabytes of deals within just one task, and the task can be sent to different miners. If you want, it will generate a task name for you, or you can just put in whatever you want. And here we can see that there are different processes ongoing, showing the different stages of the deals.
F
Using different transfer protocols, you can either mount it as a local path, or you can just put in a link for the miners to download your file, and everything will be included in the CSV file. Also, here on the website you can see there are different colors, meaning your deals are in different states. And we can go to the next slide, thank you.
F
You can always reach our founder, Charles, via Slack, and we will also have a workshop on Thursday, where we're going to show you some videos and also answer your questions. You can follow it up on our Twitter, Medium, or Facebook, or you can also get information updates from the lobby of our Slack. So thanks, everyone; I hope you enjoyed my introduction.
A
Thank you so much, Baoyuan, for presenting, and to Charles and the team for helping put together this content.
A
So just one question I had: if somebody watching this video is interested in joining the office hours or participating in the workshops, is the recommendation that they reach out to Charles on Slack, or will he be making an announcement somewhere? Should they go to a site to register for these?
F
I think they're going to make an announcement about this, more from the business team, so I will confirm with them.
A
Sounds great. So whenever you reach out to them, feel free to suggest that they make an announcement in the Slingshot channel as well, because in the Filecoin Slack we have the #slingshot channel.
A
Thank you so much for joining us to present today, really appreciate it. Take care. Sweet, so with that we have about eight minutes remaining, and I'd love to talk through what's upcoming and what's next for Slingshot.
A
So for those of you that are participating, you already know what's next, which is: there's more Slingshot. As we wrap up phase 2.4, we've already transitioned into phase 2.5, and we've already announced dates for that, as well as for phase 2.6, because we want this to be a little more predictable for you folks around your schedules and the work that you're doing, since for many of you this is also one small part of the different things that you do, whether inside or outside the Filecoin ecosystem.
A
So for phase 2.5, the expectations for submission are very similar to the previous phase. There'll be continued focus on retrievability through the retrieval success rate metric, but we'll also continue to refine the guidance, as well as the scoring and judging mechanisms, over the course of the next weeks. So any feedback is welcome and encouraged, for sure, as we continue to make this more consistent, but also, in general, to ensure that you feel like you're being fairly rewarded for the work that you're putting in.
A
I also wanted to call out the fact that several new data sets have been added to the competition. A special shout-out to our participant Askender, who I think has single-handedly proposed like 10 new data sets, of which I think at least six or seven are now on the list of eligible data sets for the competition.
A
Several other participants have also proposed interesting data sets; for example, an entire category of the Internet Archive was recently added, so definitely check the website when you're registering a project to see what data sets are available to you. We also always have the published list of data sets, which I'll share a link to later on in the slides, so definitely check that out as well. The current status is that roughly 200 tebibytes have been onboarded, which is actually significantly less than previous phases in terms of rate of onboarding.
A
Partly, this is because we're still early in this phase and still wrapping up 2.4, but beyond that, we're definitely looking at ways in which we can ensure additional data is onboarded at a faster rate. Right now, the way that the rewards stand, there's a reward unlock at 25 pebibytes for 25,000 FIL, and then one at 30 pebibytes for an additional 25,000 FIL.
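That unlock schedule is simple enough to express directly; the thresholds below come from the talk, while the function itself is just an illustrative sketch, not an official reward calculator.

```python
# Tiered reward unlocks as described in the talk: 25,000 FIL at 25 PiB
# onboarded, and a further 25,000 FIL at 30 PiB. Purely illustrative.
UNLOCKS = [(25, 25_000), (30, 25_000)]  # (threshold in PiB, FIL unlocked)

def unlocked_fil(onboarded_pib: float) -> int:
    """Total FIL unlocked once onboarding reaches onboarded_pib pebibytes."""
    return sum(fil for threshold, fil in UNLOCKS if onboarded_pib >= threshold)
```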
A
So if this phase were to be more productive than the last two, then the reward pool would probably be similar; if not, we'll probably look at some ways in which we can ensure there's some smoothing operation for the teams that are putting in time and participating. If you have any specific feedback on reasons why you're choosing to hold off on participating, or on changing the patterns by which you onboard data through the Filecoin network, please do share it.
A
But yeah, looking forward to seeing us hit some additional massive milestones. Crossing the 25 pebibyte milestone will be massive for us, as well as for the Filecoin network, and so for all of those of you that consider yourselves part of the Slingshot community, please feel free to, you know, encourage and suggest ways in which we can continue to do that. Looking forward to seeing continued engagement, feedback, questions, and suggestions in Slack; you can reach me in Slack anytime, or drop a question or thoughts in the slingshot channel.
A
So one thing I did also want to talk a little bit about is, as we look at the point of Slingshot and sort of why it exists, there's this aspect of it which is: we want to onboard useful data onto the Filecoin network, and that's primarily because we want to ensure that data we deem to be useful as a community is stored in the right data store, so that, ideally, Filecoin becomes sort of this archive of useful, relevant data for humanity.
A
But also beyond that, part of this is about ensuring that the data that's onboarded can be used, or is productive, beyond what it would have been in its original location, let's say. Good examples of that are the deal UIs being built by project teams that help index and identify files within a data set.
A
So, for example, XinAn's work taking all the computer science courses and finding video: you now have a website where you can download that video, regardless of what country you're in or where in the world you are, video that's educational and useful to humanity at any point in the future, indexed by the specific courses you're interested in, such as operating systems.
A
That was the example that he used, which is, of course, a typically very difficult and lauded course, and so it's awesome to see that we're already looking at ways in which the data itself can become useful. That's part of why RSR was introduced as a metric as well: just to ensure that clients attempting to retrieve data that's been stored can actually retrieve the data that they wanted.
A
The way that works today is effectively very simplistic, right: we try to make a request to a storage provider with a particular CID, and if it shows up, then great, and if it doesn't, then no. There are definitely ways in which we could improve that and make it more and more realistic in terms of how clients would behave. But the best bet, I think, is just ensuring that more clients come and use the Slingshot data. Part of that is improving documentation and videos, and ensuring clients can actually follow the content being produced by the teams participating in Slingshot, so it's actionable by interested clients; and part of that is actually exploring business development for the network.
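The simplistic retrieval check described above can be sketched in a few lines. `fetch_fn` here is a hypothetical stand-in for a real retrieval client call (lotus, an HTTP gateway, and so on), not an actual Filecoin API; it is assumed to return bytes on success or None on failure.

```python
# Sketch of the simplistic retrieval-success check described above:
# request a CID from a storage provider, and count it a success only
# if the payload actually comes back.

def retrieval_success_rate(attempts):
    """attempts: iterable of (cid, fetch_fn); returns the fraction that succeeded."""
    results = [fetch_fn(cid) is not None for cid, fetch_fn in attempts]
    return sum(results) / len(results) if results else 0.0
```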
A
So project teams like Huddle, who came out of a hackathon, an accelerator program, or an engagement in the ecosystem, are enabled, and have realized what an absolute treasure trove Filecoin is turning into, because of programs like Slingshot that have participating teams such as yourselves operating at such a high level of efficiency.
A
To onboard data in the range of pebibytes in a matter of weeks is incredible. I know that you've all talked about ways in which you could be even more efficient or introduce more tooling, but you're already doing a great job as well, and so you should definitely take some credit. Massive congratulations to each of you for that. And then, with regards to tooling and becoming even more effective and efficient:
A
A lot of the libraries and tools that have been built by the Slingshot community are an excellent representation of, as well as excellent solutions to, the problems that large-scale clients actually need solved in order to adopt Filecoin, whether that's things like graphsplit or the more complex tooling that's being built.
A
You saw XinAn talk about his tooling, Huddle01 talking about theirs, and Laura also referenced FileBack, another project that they're working on. Tooling like this is super valuable, and so if you are interested in building some, Slingshot is a great way to get involved and understand the needs of both clients and storage providers.
A
If you're looking to improve some of it, take a look at all of these; everything shown to you today is on GitHub and is largely open source, and, of course, the Filecoin Foundation is very plugged in via dev grants. So if you're interested in expanding some of this to be more usable for clients that are not in the Slingshot sphere, and that might be interested in leveraging it to make deals at scale on the network, definitely check out the dev grants program and see if you'd be interested in pursuing some work there.
A
Lastly, and certainly not least, I also want to touch on hacking on the data that's actually available. This sort of exists at two levels. One is that we have all this metadata now about Slingshot itself, which is turning into an archive of its own, and cataloging and identifying data via metadata, or via deal info, starts to become a very interesting and compelling problem. And then there's the secondary aspect: once the data is actually available on the network, what are interesting use cases for it? There's a bunch of data that's media-oriented: what about, you know, a Netflix that operates on Filecoin, and things like that, where content could be created based on the data being onboarded to the network? It's definitely coming on the horizon. Many of you have heard us talk about having a notion of a hack track, or a hackathon-based process, involved in Slingshot as well, and so we're looking to define that a little bit and share updates on that in the near future.
A
This is specifically about public data sets, open data sets that can be accessed by anyone, that have or enable some interesting use cases around the world: lots of scientific data, lots of public data, training data for AI models, different kinds of large-scale data sets that generally make technology more effective, or that leverage technology such as Filecoin to make us, as a population or community, more effective. So yeah, thank you so much to those of you that have already contributed tons of data sets.
A
We started with a very small list several phases ago, and now it's nearly doubled in size, so it's great to see contributions coming in from all sorts of different participants in the Slingshot community. Cool, with that, I just wanted to thank you all for coming today. I know this slide says Q&A, but we're running a couple of minutes late, so if you do have any questions, please reach out in the slingshot channel or DM me in Slack.
A
I really appreciate you being able to be here for the closing ceremony of what's been a prolific phase, with its milestone of 20 pebibytes being crossed through the participation of various teams. Looking forward to seeing what future phases bring, as well as engagement from you on how we continue to make Filecoin a more productive and more effective network for being a data store for humanity.