From YouTube: Compute + IPFS: Compute over Data with Filecoin & IPFS - @aronchick - Building Apps on IPFS
A
This is probably going to be a little bit boring for a lot of you, because it's some fairly high-level stuff, but we'll see how it comes across; forgive me. This is a replay of a talk I just gave at FIL Toronto, so I hope you're okay with that.
A
Okay, with that: the broad strokes of what we're doing here. You see this graph, and it's obviously going up like mad, all good stuff, and in the broad strokes we see how useful big data is going to be, everywhere. You all know this; I don't need to sell you.
A
Big data is going to change the world, blah blah blah. But what you see, by and large, is stuff like this, where big data projects are just unbelievably unsuccessful; you could flip a coin and you're basically guaranteed a big data job failure. One of the reasons we think this is the case is that the tools haven't kept up. These data developers are handed a box of miscellaneous crap and asked to put it together themselves, and it's a bit of a nightmare.
A
You know who these people are, and they are growing super, super fast: up about 10x since 2016. So this is an awesome audience; they're ready to use data, and we give them really shitty tools. When you look at where they spend their time, I thought this was a really interesting graph: more than seventy percent of their time is spent just prepping and looking at data. They're not even using it.
A
It's just prepping, and it's really interesting because there's almost no work done on this part. You look at the geniuses of the industry out there, building on Spark and TensorFlow and PyTorch and so on and so forth; that is way down in those dark blue sections. It's the stuff at the start, where they're like, oh yeah, this is in euros but I need it in dollars, so I have to spend an afternoon doing that. I think we can help.
A
One thing that's kind of interesting, again probably old hat to a lot of you: the data pipeline looks like this. You have ingestion and processing, you have this whole engineering phase, then finally you get to training, splitting, whatever, and finally you get to serving it, or actually using it in your app, whatever it might be.
A
If you're really good, you loop it all the way back. I think we can focus on just that left part and make the data out there far more useful. So what would they like? Our thesis is they would like the following: they want it to be familiar, they want it to be simple, and they want it to be collaborative. What does familiar mean? It's this.
A
A lot of people think, when it comes to building a model (and I don't just mean an ML model; literally anything you use to make your application smarter), all right, I'm going to go out and build a model. Except it's not that. It's really this: 17 steps loosely coupled together, where they just have to figure out how it all works, and so on and so forth. And it's really interesting, because when you go and talk to these folks, they use a ton of tools.
A
This is Microsoft, theoretically one of the most advanced ML and data organizations in the world, and an internal study found 159 different tools that they have supported. Could you imagine being the IT pro trying to figure this out? A nightmare. But the moment you try to pry one thing out of a data scientist's hands, they're going to go to their VP, and you're fired. So we have to work in the space of familiar. And that's just the tools; when you actually get to the platforms...
A
It gets even worse. If you're talking about how to do the computation, you pick one of the things on the left; then the data platform that sits on top of that computation might look like one of the things on the right. The problem is we have too many choices. And the funny part is, and many would debate this, it's my thesis again, a strong opinion weakly held: what most of them actually want is this: sed. It was invented in 1974.
A
It's a perfectly good tool for manipulating data sets, even very large data sets. So my thesis is: we should bring data science back to the 70s. That's what familiar means: you let them use the tools they already know, in a place where they can get to this scale and actually access it. Second, they want it to be simple.
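To make the "back to the 70s" claim concrete, here is a minimal sketch of the euros-to-dollars chore mentioned earlier, done with sed and awk. The file name, column layout, and exchange rate are all made up for illustration:

```shell
# A toy price file with a euro column (layout is made up).
cat > prices.csv <<'EOF'
item,price_eur
widget,10.00
gadget,2.50
EOF

# sed renames the header column; awk applies an assumed 1.08 EUR->USD
# rate to every data row. One stream edit, no cluster, no job queue.
sed '1s/price_eur/price_usd/' prices.csv \
  | awk -F, 'NR==1 {print; next} {printf "%s,%.2f\n", $1, $2 * 1.08}'
```

The point is not these particular tools but that a one-line stream edit covers what the earlier slide described as an afternoon of work.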
A
So if you haven't seen how a data science job is run today, this will be kind of interesting for you. The thing on the left is the housing price example; this is the hello world of data science. You can see it's just a simple Jupyter notebook: I run a couple of commands, most of which are actually just printing things out, I send it off, and I can figure out what's going on.
A
This is Spark doing the exact same thing, and that's not even all of it; what a nightmare. And you're asking them to move from the language they know and love, Python, into Java, which is a nightmare too. Again, Java isn't bad; it's just that you're asking people to take on things that are way outside their job description. And worse, it's an SRE thing, a site reliability engineer thing: the people who have to keep these platforms up and running.
A
Let me tell you a little story in one act. You have a data scientist. She says, you know, I'm ready to go, I have a data science job, it's perfect. Our IT ops person says, sure, but I have a million other things; file a Jira ticket, maybe I'll get to it next week. Fine, so she does that. She comes back.
A
Finally the cluster she needs has been provisioned, and our IT ops person now says: okay, can you go do this? And that's not even all of it: rewrite it, figure out your drivers, figure out this, figure out that. Good luck. And she's like, all I want to do is just access it, it's the simplest thing in the world, why are you making me do this? And the answer is: because that's what our platform requires.
A
So she goes off and does that, which sucked, and then off she goes: she deploys her job, she runs it, and it's done. Awesome. And then they say: oh no, we hope you didn't forget anything, because now you've blown through your budget for the year. Again: constant management, constant upkeep, really the opposite of what people want. And so, when you ask data developers about attributes...
A
...the one thing they can all agree on is that MapReduce sucks. Which it does; I mean, it was great, but it's not 2005 anymore, so we can do better. And finally you get to collaborative. Collaborative looks like this: you have these enormous data sets out there, and they're super popular. There are over 350 amazing data sets on Amazon Open Data today; by the way, we should have all of those on IPFS, someone. But before, here's what you would do. Take the Landsat example.
A
It's a petabyte and a half of amazing, super-high-quality images. Let's say you have three data scientists here. The first needs these Landsat images tiled, so she wants the set reduced down to just the places that are important to her. The second wants it scaled, so reduce the number of pixels so that it churns through the system faster. And the third just wants it grayscaled, which is actually quite common.
A
You oftentimes will just downscale your images so that they run through your data model faster, because if each image is a gigabyte, it's going to take weeks to accomplish your thing. Now we get to our fourth data scientist, and she says: actually, I want all three. What do I do? She has to go and rewrite those things herself, even though all the work has been done before. So that's not good either.
A
So what we could do, using the power of IPFS and the fact that all this stuff is public and content-addressable and all good things, is publish these transformations to the world, such that when our fourth data scientist comes along, she picks up the work that was already done. And by the way, when she does her final step, she creates a fourth data set, which is now consumable from that point forward as well. So that's collaborative: real, true, collaborative data science.
A
Okay, so those are the broad strokes of what we're trying to do around compute over data, with this project I'm working on called Bacalhau: Compute over Data on Filecoin. Our initial offering, and this is something I really want to stress, is just an initial offering, and I'll get to that in just a second. Bacalhau is a punny joke: bacalhau means cod in Portuguese, and we were in Portugal when we came up with this. CoD, Compute over Data, cod, get it? So there you go.
A
If I didn't hear groans, I wasn't doing it right. So: CoD, blah blah blah, lots of vision words; we'll go through this. Our vision is to do something that looks a lot like this. You add a large GPS recording, this thing says, I just want to filter this for data points within 50 kilometers, I can do that with sed, it's actually pretty easy, and then I can fetch the results.
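The filter step itself can be sketched locally with standard Unix tools. The talk does it with sed; awk is used here because the comparison is numeric. The file name, the column order, and the idea of approximating "within 50 kilometers" as a fixed bounding box around one point are all assumptions:

```shell
# A toy GPS recording: latitude,longitude per line (layout is made up).
cat > gps.csv <<'EOF'
51.5074,-0.1278
48.8566,2.3522
51.7520,-1.2577
EOF

# Keep points within roughly 50 km of London (51.5074, -0.1278),
# approximated here as a simple lat/lon bounding box.
awk -F, '($1 > 51.06 && $1 < 51.96) && ($2 > -0.85 && $2 < 0.59)' gps.csv
```

Only the first point survives; a real great-circle distance check would be a slightly longer awk program, but the shape of the job is the same one-liner.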
A
Now, again, this is just proof-of-concept, getting-started stuff, but you can see the direction we're going. There are no special clusters, no transfers; I get to reuse sed; I get to use mostly idle compute, since it's the machines that are storing my data already; familiar commands; it automatically resolves failures; so on and so forth, and we're just getting started. Plus, and this is the killer point: no egress, meaning I don't have to download this massive data set to do one stupid job and then re-upload it somewhere else.
A
Now, again, you can achieve this with other platforms, but I hope I've made the case that a lot of these other platforms, which are designed to be very high-performance platforms, should be focused on the high-performance stuff, not the stuff where you're like: all right, just get me back this filtered data set by tomorrow.
A
Okay, so here's the little story again: she submits it, off she goes, it works perfectly, it's all done, it downscales itself, and our IT ops person gets to find out how many cat videos are uploaded every second. All right, so what does this look like? Where are we today?
A
We checked in our first line of code in February. We did a six-week sprint on a proof of concept, learned a whole bunch, and figured out a whole bunch of stuff. Then we threw away every line of code on April 5th and started over with all that learning, and here's what we have today. This is, I think, from last week.
A
What you're going to see here is a slightly more advanced experience. You have a file; this is a sample directory of 10 images. These are stored in IPFS; I just happened to download them here to demonstrate what's going on.
A
You can see over here on the left-hand side, this is the command. It certainly could be a lot easier, and we're working on getting a lot of the defaults correct, but it looks a lot like the standard thing: if you exclude the stuff in the middle about concurrency and inputs and outputs and things like that, it looks just like the command someone might execute on the command line.
A
ImageMagick is one of the most popular image-manipulation CLI tools. People use it all the time, and they often have scripts already written; you can see there, they're just running it against the thing that is hosted on IPFS in this case. To answer your question: for this particular example we are actually using a raw Docker container, the one published by the ImageMagick folks.
A
No, I cannot; this is the Docker container published by the ImageMagick folks, but you can build your own Docker container and include your own requirements and things like that. For security and other purposes we have networking turned off, so all jobs must be embarrassingly parallel, with no inter-node communication; that also buys us a bunch of security and other nice properties. So there you go: I ran the job, and you can actually watch the job as it takes place.
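The embarrassingly parallel constraint described above has a familiar local analogue: fanning a command out over files where no invocation sees any other's input or output. A minimal sketch with xargs, using a trivial transform as a stand-in for the real container; the file names and directory layout are made up:

```shell
# A directory of sample inputs (plain files standing in for images).
mkdir -p inputs outputs
for i in 1 2 3; do echo "pixel data $i" > "inputs/img$i.txt"; done

# Process every file independently, up to 4 at a time. No invocation
# reads another's input or output: that independence is what
# "embarrassingly parallel" means here.
ls inputs | xargs -P 4 -I{} sh -c 'tr "a-z" "A-Z" < "inputs/{}" > "outputs/{}"'

cat outputs/img1.txt   # PIXEL DATA 1
```

Because each file is independent, the work can be scattered across however many nodes happen to hold the data, with no coordination between them.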
A
You can see it spits out its status and it's running there. It'll take just a second to complete; off we go, and then it completed.
A
Now I go forward here and get my results. Here I am downloading it, a little mistake there, and you can see I downloaded that to my local file system. Each one is much smaller: you can see up here these are megabytes in size, and these are just 17 kilobytes, and they've all been downscaled. So I didn't have to think about any cluster, and I didn't move stuff out of IPFS.
A
It all happened exactly where the data was already loaded and running. Now, going even further than that: just last week we added support for this flag, determinism. Again, we're just working it out, but you can see here that we are executing raw Python against this deterministic runtime. Behind the scenes...
A
...we are converting it into WASM and executing it as a WASM binary, and there you can see it's running. When I go off and fetch the result, it downloads and gives me a hello world that says: hi there, Toronto. Okay, so that was raw Python, executed on the nodes where the IPFS storage is. So there you go; that's the demo.
A
Just last week we hit the ability to schedule 10,000 jobs at concurrency 10 across three nodes with zero failures. And just to prove that our failure recognition was correct, when we pushed it up to 20 concurrent jobs we got about 30 failures. So at 10,000 we're pretty confident, and it's just up to us to figure out what the right scale is and work it out, but so far so good.
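That deliberate-overload check can be mimicked locally: launch a batch of concurrent jobs, let a known subset exit non-zero, and confirm the failure count matches. The failure rule here is invented purely to exercise the counting:

```shell
# Launch 20 concurrent jobs; every 7th one "fails" (exits non-zero),
# standing in for jobs lost under overload. Then count the failures.
failures=0
pids=""
for i in $(seq 1 20); do
  # $(( i % 7 == 0 )) is 1 for multiples of 7, so those jobs exit 1.
  ( exit $(( i % 7 == 0 )) ) &
  pids="$pids $!"
done
for pid in $pids; do
  wait "$pid" || failures=$((failures + 1))
done
echo "failures: $failures"   # jobs 7 and 14 fail, so this prints: failures: 2
```

The scheduler-scale version is the same idea: known failure injection, then check that the platform's reported count agrees.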
A
The project is out there and running. All right, so what about this, you might ask? No, we're starting small; not yet, anyway. Certainly we would love for you to come along and help. Our current goal for October is 10,000 jobs simultaneously scheduled, with five nines of job resolution. By job resolution we're not saying your job is going to work, but that we scheduled it properly and resolved it with whatever error code applies.
A
In other words, we can say it wasn't our fault that the job failed. Also: 100 nodes simultaneously, and support for data smaller than 32 gigabytes. We don't want to span sectors right now, though we are working on that.
A
We have some ideas around it, especially with some of the advanced stuff out there and whether or not we use IPLD, and so on and so forth. Public data only. We will support determinism, and we will support CPUs; GPUs are a really popular request, but on the other hand, if you can schedule this and just have it finished by tomorrow without GPU support, that's fine. And then, by production time...
A
...we want to hit one petabyte of processing across many files, a 99% job success rate, and we want to be able to tolerate 49% malicious nodes. That will be a challenge; I'm not going to lie to you. The problem is that we very intentionally do not have an incentivization layer yet, meaning it will be easy to grief us: you can sign up a node and then just be, you know, crappy. Without the threat of staking or some economic disincentive...
A
...you're not going to be able to stop that. So there's no question we're going to tackle it; we just want to get to scale first. And most importantly, having DAG execution, being able to wire these jobs together so you never have to download anything until the final step, is our goal. I should say we will always be re-uploading all the results to IPFS, so again, it's got to be public only, but that means you don't have to download anything in between anyway.
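The "never download the intermediates" goal has the same shape as a Unix pipeline: each stage feeds the next without the client ever materializing intermediate output. A minimal local sketch, with three arbitrary transforms standing in for chained jobs:

```shell
# Three chained "jobs": sort (job 1), aggregate duplicates (job 2),
# reshape (job 3). Only the final result lands on the local disk;
# the intermediate outputs are never written anywhere the client sees.
printf 'b\na\nc\na\n' | sort | uniq -c | awk '{print $2, $1}' > result.txt
cat result.txt
```

In the DAG-execution version, each stage's output would stay on the network as a content-addressed data set instead of flowing through a local pipe, but the client-side experience is the same: you only ever fetch the end of the chain.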
A
It'll just be up to you whether you want to do it all as one pipeline and only keep the data sets when they complete. That's going to be October.
Okay,
so
when
I
said
earlier
that
that
is
our
initial,
it's
because
this
is
serious
right.
It
can't
just
be
us
when
I
think
about
that
original
model
of
all
those
ways
to
build
things.
I
think
there
are
tons
of
ways
for
compute
to
be
specified
by
domain,
so
you
could
imagine
anyone
along
the
line.
You
know
plugging
in
saying.
A
...oh, I'm going to take the output of this compute-over-data thing over here, and I'm going to serve it off of whatever, in the following way. You could imagine people having different verifiers, different hardware profiles, so on and so forth. We want to support all of these via a pluggable system, and we have that out there. I don't have the link up here; oh, I have the link on the last slide. But we have a very extensible system as it stands right now.
A
Again, our goal is to provide some core components that are useful to you as it stands today, useful to the entire compute-over-data community, and you pick and choose the things you like. If you don't like something, great: swap it out, and off you go. If you don't like any of it, that's okay too; we want to support everyone in the community. At the end of the day, it goes back to that original graph.
A
So: you want it now? That's up to you. You can see some data sets that we're already working with. I want to point out the folks in the back, Richard, for all the help that he's doing; I think you're up here. Which one are you doing, the SOCAT one? I don't know, one of them. But we are already working with significant data sets today, and we need more. We need lots more. We need researchers, we need...
A
...your personal data sets. We also want to do a lot of processing of chain data and things like that, since it's right there; why not do it? All that good stuff. It's not real until people are using it. Yeah?
B
A
Fantastic. You literally could say any words and I would say okay. No, but exactly to Boris's point: it's got to be public. That's the one thing. Open Crawl? Sure, great. How do you help? Just come talk to me and I'll do it. Right now we are on IPFS, but we pre-pin the data to just our nodes, so that when you schedule the job it will find it, because otherwise it wouldn't find it. I mean, that's not...
A
That's not bad-mouthing IPFS indexing; it's that we literally aren't pulling the content down. But absolutely, just come talk to me and we'll figure it out; we're looking for lots of opportunities. That last bit is the most important part. If you can in any way squint at this and say, oh, you know, I want them to use this project that I'm working on...
A
...use my IPFS client, use my executor, use my WASM bits, use my verifier, I don't care: we would love to talk and figure out ways to share the energy and wealth and help you get going. We just launched the Compute over Data working group last Thursday, brand new. Here you go: I have a bitly link, and I have a QR code that you're going to... never click on QR codes.
A
There you go; you can trust me, though. So every week we're going to meet, and we want you to demo. It's not me running this thing; it's just us getting together, chit-chatting, helping each other. This is all of us in it together. We should come up with a shared goal, like: we want to process, whatever, a petabyte of compute over data by the end of the year. Who knows? We're up for whatever you like. And there you go: we have Slacks...
A
...we have websites, we have this, that, and the other. A lot of this stuff is out there. We have the Bacalhau website; go there, there's a link to the docs, and we really try hard to keep the docs up to date. We have a repo; like I said, you can download the binary yourself today and mess around. And there you go. Oh, that was the talk. Any questions?
B
A
Okay, got it, got it. No, I know where you're going: does it have to span multiple nodes and reduce the data back? No, it's single node only for now; parallelization and sharding are coming, we hope this month.
B
A
Absolutely. So, to be clear: python run converts to WASM. Even though it shows Python, it converts behind the scenes. How we express that is something we're working on now; TBD. There are lots of tools out there that do this, but because of our requirement of being embarrassingly parallel, it does mean we're going to have to start stacking stuff together to some degree, because every node that runs is not going to have visibility into the fact that other nodes are also running that job, or a shard of that job, anyway.
B
A
Absolutely. So there are many things; once you get into Filecoin, there's no question we're going to need some incentivization model. It could be Filecoin, it could be something new, it could be, who knows, a cross-chain thing; we truly have no opinion on that today. We know we need it; there's no way to prevent bad behavior without it.
A
There's also no way to incent storage providers to use it; as it stands, we're literally just using their electricity for free. So we have to have an incentive model, but we want to get to scale first, because there's no value in an incentive model if the thing does two jobs and crashes.
B
A
Absolutely, absolutely. We do also have the ability right now in Bacalhau (we have it flipped on, but it's not really well publicized) to actually pull your data down from another IPFS node. So theoretically, once you get the incentivization model right, you could broadcast to the entire world: hey, I'm ready to take jobs.
A
I have all this spare compute, I'm ready to take jobs even though I don't have the data, come give it to me. The data would be downloaded from IPFS, or from Filecoin, to your node; you run the job, and then, theoretically...
A
Yes, actually, all these jobs are content-addressed, so there you go. I think one of the other ideas is in there as well: Filecoin has a proof primitive, as an example, that might be extremely useful. There are no other blockchains that have a proof primitive, so...
A
I said to David, literally, compare two hosting providers at four dollars a month, and how many people might pay five... Absolutely. So that is actually an excellent point, or it reminds me of an excellent point, which is: well, what about FVM?
A
We very much consider this to be perfectly complementary with FVM, meaning nothing you saw here connects to the chain at all, other than the beginning of the job, where we pick up from Filecoin (excuse me, IPFS) and submit back to IPFS. This is just running the job wherever it is. That said, this is just the start; we would love to use FVM as, potentially, and again there's lots of thinking here, the beginning and end of the job.
A
So instead, you submit your job to the chain, and FVM kicks off this entire process and resolves it at the end, including developing consensus and things like that around the results. Again, lots of thinking here; we haven't begun on that. We're just trying to get to scale with the system you see today.
A
Call to action: join the CoD, yeah. I think that's probably your best bet: join the CoD working group. It's every week; just come demo your stuff, learn about stuff, whatever it is. Or our Slack; feel free. Again, squint at this.
A
If you can think of a data set or a data provider or a researcher or whoever who might find this useful, or you can squint at this and say, I would like to partner with you to use my thing in your thing, we're happy to talk. Or you can say, I want to use your thing in my thing. Whatever it is, there you go; we're ready to talk.