Description
This talk was given at IPFS Camp 2022 in Lisbon, Portugal.
Bacalhau: how do we get started, you see here? This is the Compute Over Data Summit v1, in April of 2022, where we introduced Bacalhau to the world, a fishy way to process data. It was just an investigation, really, at that point: could we process data in a decentralized way? Would it make sense? What were the edges we were going to run into, and so on and so forth? And the number one question we got was: "Hey, this looks pretty good. When are you going to give it to me?"
So when we first kicked this off, we decided to set a big goal. Even before the summit, we really wanted to set a BHAG, a big, hairy, audacious goal, and the timeline looks like this. I have kind of given a little bit away, but we wrote the first line of code in the last week of January.
If I remember correctly, it was me; it was some test framework. It was terrible, and I'm sure it's gone, so you can trust the code. But at that time we said: okay, for April, for that Compute Over Data Summit, we want to support a reasonably sized cluster, so 100 Linux nodes (no Windows), 10,000 jobs able to be scheduled, so you can have queues and things like that, and small data, all within a single IPFS sector.
We wanted to have high job resolution, one CID only per job and so on, public data, deterministic, and so on. But then we also wanted to take a look forward, so that we knew, as we were building, we were guiding ourselves to something that we thought was actually significant. And we felt that by October we wanted to really target something significant: thousand-node support. Maybe not a thousand nodes actually participating, but we wanted to be able to test and show a thousand nodes actually running.
We wanted to be able to run ten thousand jobs, not just schedule them: actually run them, and run them to completion, with a 99% job completion rate. Now, to be clear, a job could fail for its own reasons, but it wouldn't be because of the orchestrator. We wanted to support up to a petabyte of processing, so across CIDs, across sectors. We wanted to be able to tolerate a third of the nodes being malicious.
We wanted to be able to execute DAGs and have a primitive reputation system as well. And then, finally, we wanted this all to be based on interfaces, so that people would be able to swap components in and out and didn't just have to rely on our system. So then we had the first Compute Over Data Summit, and we did great; we actually hit, what was it...
Six out of eight I would consider totally green. I would say two were about yellow: we weren't able to get to five-nines job resolution, though we got really good; we just weren't able to test it at the scale that we wanted. And determinism was pretty much a hack, I'll be honest with you, but it was there, and we showed that the system could work on two different executors.
So now we had our production target, and we said: October? Oh my God, there's too much to do; we'll never make it. But it's a big, hairy, audacious goal, so why not? We went live to production in June, so anyone in the world could go use our docs and try it out. And, most importantly, we went out to customers, or users, and listened to them: "Hey, you have a big problem here. You're an academic institution, you have a petabyte of data. What can we deliver that will make this significant?"
We delivered Filecoin and Estuary support, which wasn't even on the spec, and we had that by August. And then just at the beginning of November... October... excuse me, end of September, we were able to deliver native wasm and GPUs, which is pretty sweet. So going back to our original goals, we took a look at it and said: well, we're at 50/50, which is good, right? You shouldn't hit 100%; that means it's not audacious enough.
I'm trying to be very conservative about this. We definitely didn't hit processing across many files; we weren't able to do distributed reads. We didn't handle malicious nodes yet; the system will just take any jobs. DAGs and a primitive reputation system we weren't able to deliver. But we did deliver all this by September, even about a month early. And in addition to that, during that time we went out and really listened to customers, and we listened to all their needs, and it turns out...
We were wrong. They didn't actually care about those things. What they cared about was these, and we delivered all of them, every single one: GPUs, HTTP downloads, OpenTelemetry, progress bars, and so on, all of that built into the system, with significant users and a ton of examples.
So that brings us to today, when I am proud to announce: we are declaring ourselves beta. Now, that does not mean the API is stable, but it's certainly stable-ish, and we would love to collaborate to make it more stable.
We'll talk about the roadmap at the end, but we think it's certainly big enough to get out there and start messing around with, and several hackathons and conversations with more customers show that it is. Now, again, I will not claim five-nines uptime and so on and so forth; even last night I was having issues. But suffice it to say, it is a real system and it's doing real things. And by real things, this is our target.
I like to say it; I almost like to read this word for word, because it matters to me. We want to transform the way that people do data processing by giving people simple, low-cost, distributed-first tools to unlock a collaborative ecosystem. And with that, users will get simplified distributed execution, particularly against data platforms.
Today, it will actually improve performance, because we'll be able to process things faster, in a more embarrassingly parallel way, without user or administrative intervention. And we hope to also launch a new collaborative scientific community. One of the most fundamental elements of this, and this is what we kept holding ourselves to, is no rewriting, or as little rewriting as possible. We wanted to enable people to use the tools that they have today. I love sharing this slide.
That's the way people are building things today, and when we did a rough analysis of what we did, I'd say we got a lot of greens, a lot of greens and mostly yellows. There are some red ones that we'll never be able to support, or that will certainly take some time, particularly where they're calling out to external web services, and I'll talk about that. But we'd love to get this chart to as much green as possible.
We're reasonable, right? I don't think we're going to support PowerPoint, for example, which is a thing. So you might ask: what can I do with it today? And I would say you are very harsh. But maybe you're not harsh, right; you just want to know what you can do with it. I get it, I get it: you're a PM, you're waving your hands, you're saying this is great. So let's walk through. Let's show you what you can do with it; let's show you how to abuse Bacalhau for fun and profit. Well, what do you do?
We swear it's done, and you might ask: can I abuse this to my heart's content? And the answer is yes. So here I've written a, I don't know, four-line bash script that walks over a thousand lines of download links from the open data set in the NASA bucket, and it runs, and it runs fine, and presto: you've now uploaded a thousand files.
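The batch idea from the demo can be sketched as follows. This is a hypothetical reconstruction, not the actual script from the talk: the file name, container image, and flag spellings are illustrative, and the commands are printed rather than executed.

```python
# Sketch of the "walk a file of links, submit one Bacalhau job per line" idea.
# All names here (links.txt, the ubuntu image, the flag shapes) are illustrative.
from pathlib import Path

links_file = Path("links.txt")
links_file.write_text("https://example.com/a.tif\nhttps://example.com/b.tif\n")

commands = [
    # In the real demo each line becomes roughly a submission like this;
    # check the Bacalhau docs for the exact syntax of your version.
    f"bacalhau docker run ubuntu -- wget -O /outputs/out {url}"
    for url in links_file.read_text().splitlines()
]
for cmd in commands:
    print(cmd)
```

Each printed line would be one independently scheduled job, which is what makes the thousand-file case embarrassingly parallel.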
I think it's like 400 gigabytes, all taken care of for you, and I have no worries, so abuse away. Oh, by the way: you'll see in the upper right corner of every slide a QR code. Just go there, and we link directly to the live example backed by the documentation.
You can try this out yourself. All right, how about Python? Python's used a lot; what does that look like? There you go: if you want to issue literally any Python command, you can do it right there inline, just pop it in the text and off you go. Or maybe you would like to issue something more complicated, where you're reading something off of IPFS. You see there: you do Docker run with Python, you add your input CID, you tell it where to mount, and then you can pipe in, or you can write, your script right there.
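The "Docker run plus input CID" shape looks roughly like this. The CID, mount path, and flag spelling below are assumptions for illustration; the exact syntax varies by Bacalhau version, so treat this as a sketch and check the documentation.

```python
# Hypothetical sketch: build (but do not execute) a job that mounts a CID
# into a Python container and runs an inline script against it.
cid = "QmExampleInputCID"   # hypothetical CID of the input data
mount = "/inputs"           # where the CID gets mounted inside the container

cmd = (
    "bacalhau docker run "
    f"-v {cid}:{mount} "     # flag name approximate; mounts the CID read-only
    "python:3.10 -- "
    f"python -c \"print(open('{mount}/data.txt').read())\""
)
print(cmd)
```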
If you want, presto, you can also do that straight off of standard in, or anything like that, if you want to do it like that. Or let's say you want to upload your script to IPFS itself, and you want to just run that script: now it becomes a permanent place on the web, and you can go use it on IPFS and execute it again.
So that's easy. Maybe it's too easy; maybe you don't trust me. It gets harder. How about pandas? Pandas is the number one platform for processing data, I would argue, I mean, that and scikit-learn. What does that look like? Well, let's up the level of difficulty: I'm going to upload not just the Python script but also the transaction data that I want to run my processing over. The first thing I do is just add it to IPFS; you can see how to do that there. And next I do the exact same thing I did before, but now I get an entire directory mounted for me, and inside it you can see I'm executing that Python script, and that Python script is reading from the CSV in the same directory. So now you can upload entire blocks of things and process them all at once, and you can see me building that out there.
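The in-job script side of this can be sketched as below. The talk's demo uses pandas; this sketch uses the stdlib csv module so it stays dependency-free, and it stands in a temporary directory for the IPFS-mounted one (e.g. /inputs in the real job). The file and column names are illustrative.

```python
# Sketch: read a CSV that sits in the same mounted directory as the script.
import csv
import tempfile
from pathlib import Path

# Stand-in for the directory Bacalhau would mount from IPFS.
mounted = Path(tempfile.mkdtemp())
(mounted / "transactions.csv").write_text("id,amount\n1,10\n2,32\n")

with (mounted / "transactions.csv").open() as f:
    rows = list(csv.DictReader(f))

total = sum(int(r["amount"]) for r in rows)
print(f"{len(rows)} rows, total amount {total}")
```

With pandas the body collapses to a `read_csv` on the mounted path; the point is that the script and its data travel together as one directory.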
We have a crazy one that I love. I love this and I talk about it all the time; the author's back there, you can go yell at him. Let's say I want to use DuckDB. DuckDB is a crazy, cool tool: it will basically just mount a database over an arbitrary column in your columnar format, in this case Parquet. It will deploy DuckDB, it will launch it and run it, and then it will mount in that data file, and then you can issue any query you would like against that, using standard SQL. It's crazy. But there you go: it runs (it speeds through at the end of this run), and at the end of the run it will print it out. You can see there it's delivered the query inline. I've got to make sure it doesn't loop over these GIFs.
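The "mount data, issue standard SQL" pattern can be sketched as below. The real demo runs DuckDB directly over a Parquet file; since DuckDB may not be installed here, this sketch uses the stdlib sqlite3 module as a stand-in purely to show the query shape. Table and column names are made up for illustration.

```python
# Sketch: once the data file is mounted, any standard SQL query works.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (station TEXT, temp REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("LIS", 21.5), ("LIS", 23.0), ("OPO", 18.0)],
)

# In DuckDB this would be a query over the mounted Parquet file instead
# of an in-memory table, e.g. SELECT ... FROM 'data.parquet'.
rows = conn.execute(
    "SELECT station, AVG(temp) FROM readings GROUP BY station ORDER BY station"
).fetchall()
print(rows)
```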
How about YOLO? YOLO is one of the most popular object-detection frameworks in machine learning. Let's say you have a thousand files that you want to run YOLO against, in order to do object detection inside them. Here you go: in this case, you're running against one GPU. You can see there I'm loading the data set and then running it using a standard YOLO container, and then I run my Python script. And one thing that I want to show here that's really cool is that spinning this up is super fast, because the data, excuse me, the weights and source and projects, are all on IPFS too. So the container is super small: I'm just running the container, but all the data is already on IPFS, so I just mount it in and run, and there you can see.
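A GPU job of this kind has roughly the shape below. Everything here is illustrative: the container name, the CIDs, and the flag spellings are assumptions, and the command is printed rather than run. The point is that a resource request (one GPU) rides along with the job spec.

```python
# Hypothetical sketch of a containerized YOLO job requesting one GPU.
cmd = (
    "bacalhau docker run "
    "--gpu 1 "                        # ask the scheduler for one GPU
    "-v QmExampleImagesCID:/inputs "  # hypothetical CID of the image set
    "ultralytics/yolov5 -- "          # illustrative YOLO container name
    "python detect.py --source /inputs --project /outputs"
)
print(cmd)
```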
Okay, all right, you're saying: fine, that's lame. What do you want? What do you want?
Okay, so here you go: this is Stable Diffusion. Sorry, I've got to jump in here, so let me show you this real quick. You can see there, and I'll show you this code in just a moment, but all we're doing is taking this and saying we're going to run it against a GPU. We have a container out there in the world with the Stable Diffusion thing built; it runs, you run your main, and you can see there, right at the end, you give it your prompt. In this case...
Oh, we are basically no longer Bacalhau; we are the Stable Diffusion nut jobs, because this is just so much fun. You've got to try this. Here you go: here's that code, there's the link, and that's it, seriously, that's all you've got to do. Enjoy every one of those images. It's crazy, including some really nightmare-fuel ones; you can go look at our channel. So it's still too easy, you might say. God, you guys, I mean, so demanding.
So we start with our superhero dog, and we want to do a transform on the animal. What does that look like? In this case, we're going to do a crazy kind of transform, using the thing as a prompt. You can see the code there: that is Python code ready to execute on wasm, and what it does is go against those images that we downloaded earlier and run over them, and in this case it's going to transform them using brand-new coloring.
So again, to show you: using wasm, you write your program, you compile it to wasm on your local machine, you know it works, and that's fine, and we take care of the rest. We upload it to Bacalhau; we find an appropriate node, or nodes, or clusters, anything like that; we mount in the data from IPFS; we run the job; and then we write the results to Estuary. Again, all taken care of for you, we promise. And the best part is, you don't have to install it.
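The wasm flow just described can be sketched as two steps. The module name, entry point, target triple, and flag shapes below are assumptions for illustration; the commands are collected and printed, not executed, and the exact submission syntax should be checked against the documentation for your version.

```python
# Hypothetical sketch of the wasm flow: compile locally, submit the module.
steps = [
    # 1. Compile your program to wasm on your own machine.
    "cargo build --target wasm32-wasi --release",
    # 2. Hand the module to Bacalhau along with the data to mount.
    "bacalhau wasm run ./main.wasm _start -v QmExampleDataCID:/inputs",
]
for step in steps:
    print(step)
```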
There is a client, of course, and we recommend it, but if you just want to mess around, every one of the examples you see here has a link on it that lets you run it in Colab, so you don't even have to install it: just go to the browser and off you go. So our rule is: if you can contain it, if you can compile it to wasm, you can run it, mostly. And the thing you can't do today, which we're sorry about and we're working on,...
...is networking: anything that requires reaching out during the job is not possible, as is intercommunicating between nodes. Again, we know this, but there are very, very serious security concerns around it, and it will require a fair bit of work, so we're not promising networking anytime soon; but we have heard the issues. And so you might say: these seem like science projects. Fine, you're off generating a thousand Stable Diffusions, enjoy; I want real world. Well, we're very happy to announce a series of partnerships.
Today we have a brand new partnership: we're working with Caltech high-energy physics computing. They're bringing petabytes online, they're operating test clusters today, and they will soon operate production clusters on the back of the Bacalhau network. LabDAO is accelerating scientific progress, making tools more accessible, and so on and so forth; they're going to be partnering to integrate Bacalhau into their network. And the city of Las Vegas has a very strict requirement about where data and computation jobs run; they're collecting a whole bunch of IoT data...
...that needs to exist entirely in a data center in Las Vegas, and we're going to support that: they're going to have a Bacalhau cluster on premises. And BOINC, you might have heard of them, the collective compute team behind SETI@home, climate predictions, Rosetta: they're going to be using Bacalhau as a target cluster. So again, these are real people using it today, or certainly in progress toward getting it live.
So our theory, our goal, here is that Bacalhau can transform big data: local-to-data, reproducible execution, new incentive models, provably secure. You saw me talk about that at the beginning; that is our target for this project as well. But we're also doing it as part of the Compute Over Data working group, trying to make it something that everyone can build a platform on top of, and our theory is that Bacalhau sits at the middle. So what's next? The roadmap; let's get to it.
So for end users it looks like this. We have M1, where we're going to get Fil+ as a data storage solution; we're going to improve performance quite a bit, reliability, and so on. We're also going to try for a dashboard so you can visualize your jobs; right now it's CLI-only, and we think we can do some really cool things around that for 2023. The next milestone is March 2023.
We have wasm support; it's very alpha right now, and we want to get it into production. And in addition to that, we want to improve reliability. We really want to make it much easier to do a quick, what we call a REPL, just a quick loop on building and iterating on your job. Right now it's a little bit stilted as you move from place to place; we'd love to make that easier.
Our API for the client side will reach 1.0, and we hope to start a grants program and a Bacalhau hack season. At the same time, June 2023: streamlined developer experience, including DAGs, to production. We want to do federated reads, so you'll be able to read across sectors, and have a rich local client that will be able to run on anyone's laptop and provide compute, similar to the way SETI@home did. And finally, M4, September 2023...
Our goal is to get to consensus and verification of deterministic jobs. Right now, we have to be honest: jobs that are delivered with a non-deterministic binary, like Docker, are non-deterministic on the cluster. We just can't turn chicken salad into chicken, or vice versa.
At the same time, we will support wasm. Wasm does support determinism, and we will be able to do very formal verification around things like that. Also, our goal is to do arbitrary networking, including reaching out and reaching between nodes. On the compute provider side, Fil+ will give them an opportunity to earn Fil+: if you are the person that runs the compute provider that runs the job, you will be first in line to get the Fil+. So hopefully that's an incentive for folks.
Also, we want to have a much simplified setup; there are still quite a few flags you need to configure. We want it to be one command, and have that one command be self-bootstrapping, after that, in March of 2023, when we hit 1.0. Also on the server side, we want to have a unified control plane, so that you'll be able to manage all your nodes from a single place, a single API.
We also want to partner with the storage providers so that we have a single program for you to spin up both compute providers and storage systems. By June of 2023 we're going to support additional deal engines, and we're going to start supporting unreliable nodes. And we think this is a benefit for compute providers, because uptime requirements for participating are no longer required; you know, we're not saying unreliable necessarily means malicious.
It could just be that you have data centers that go offline and things like that, and the system will be more supportive of that. And finally, in M4, we're going to make everyone a compute provider. If you have a laptop, you'll be able to download this and run it. You could actually do it today, but we're going to make it a lot, a lot easier via Station, which you might have heard about. Also, we're going to have our reputation systems in place, and clustered deployments with inter-node connectivity. And that's it; with that, I have about two minutes left. Any questions?
A: That is a very frequent request. We don't have label-based provisioning today; I think that is probably coming super soon, as it's a pretty straightforward request. There are already a number of fields that you can request; you saw me requesting GPUs, CPUs, memory, things like that. We get that request a fair bit, and we would like to augment things with labels. The biggest challenge, of course, is that people can lie.
B: Hello. So, great talk; this seems really cool. I didn't know a lot about Bacalhau before listening to this talk, but aren't you charging for the computations that your nodes perform?
A: An interesting question; we get that all the time. Hold that one a moment. One thing you can do, by the way, is you can target, you can have, private clusters. Bacalhau runs on IPFS Cluster, I just want to answer that. So if you wanted to spin up your own Bacalhau cluster and not connect it to the network, you could do that. So if that's a scenario that solves your problem, we should talk about it. Okay, charging: yes, we know that is absolutely important.
However, we feel all the milestones that you see here lead up to that. Now, along the way you will see a lot of benefits, right? You will see Fil+. So that is an opportunity, if you are a storage provider, to earn Fil+, which is 10x quality-adjusted power, which is a net benefit for you.
We also do expect to do partnerships and things like that; things like Fil mining will help bridge the gap. Now, the problem is that we can't really start charging until we get full determinism and verification in, because it runs into a number of attack vectors and things like that, and so we feel the ordering must happen in the way you see here. But there is no question that it is on our map, or on our radar; we just didn't feel confident enough about the date in order to commit to it. Okay.
B: One more question: until, well, I guess, estimating the cost for the computation and charging for it is a good way to protect, in a certain way, against DoS attacks against your nodes. Until you start doing that, do you have any protections currently in your system?
A: Right now, the way that the system works is you submit directly to the Bacalhau cluster, and, you know, basically you could DDoS the system. You don't have visibility into the specific nodes that are delivering it; those are all hashed and sharded, and things like that.
A
But
yes,
today,
I
mean
I
could
just
go
and
and
attack,
and
you
know
issue
10,
000,
stable
diffusions
and
fill
up
the
entire
cluster.
No
question:
you
can't
issue
a
job
that
runs
forever
like
I.
Can't
just
go
issue
cat.
You
know,
Dev,
you
random,
we
have
you
know
defense
against
things
like
you
know,
long-running
jobs,
for
you
know
10
minute
timeouts.
The total amount of space you can take up is capped, and things like that, but yeah, there are certainly attack vectors that we know about. We're kind of going to tackle those as they come up, but we know that as we work toward this roadmap, it will automatically address many of them. But you're absolutely right.
C: [inaudible question]

A: I'm so sorry, it's super echoey up here. Can you say that again?
C: I have a question from the miners: I wonder if we can be an infrastructure provider there, connecting people with iron on the one hand, and people or businesses on the other hand. We have a lot of miners, yes, with GPUs, and I'm interested in this.
A: Yeah, so, job locality, scheduling, and things like that, if I understand correctly. Again, I think that's the other half of the coin there: as we start to add labels, people will say, hey, I'm part of this. I would like very much for us to have sophistication around scheduling, to have awareness of distance between nodes in meaningful ways. That is not built into the system; we have talked about it.
We are in the process of doing a bunch of revamping around how we schedule and being much more aware of that. I would be surprised if it didn't happen soon, but we don't explicitly have it on the roadmap. It's definitely something we're aware of, and we want to target. Okay.
Will there be a token? You, my friend, ask that question. So the net of it, to the other side of the gentleman's point there: how are we going to create incentives, right, or how are we going to incent people? I think that's an open question.
Could there be a token? Possibly. Could we reuse Filecoin? Possibly. Could we bridge to other networks? Possibly. Could we set up a credit card system, like Lambda? Possibly. I think all options are on the table. Most of all, we want to meet customers where they are: what are the systems that they want?
I'd be very surprised if, as part of our incentivization system, we didn't explore everything here. I think, probably, potentially for the near term, beyond just what you saw here, you will also see things like storage providers and other compute providers who are already partners for Filecoin have systems as well. But I cannot stress enough: all options are on the table. We haven't made any decisions, and if you have thoughts around these, we'd love to hear them.
Yeah, no, I think that, again, in the same respect of meeting people where they are with their data platform, meeting people where they are with a payment system is part of our core goal. Now, does that mean that Protocol Labs is going to set up, whatever, a Stripe account and start collecting your money? No, there is no chance that that will happen.
What I think is much more likely is something like how storage providers today provide a simplified way to onboard your things: you have an invoice or credit card relationship with them, and they handle the complexity over here. I think something like that, certainly in the interim, is far more likely. But again, I can't stress enough:...
All options are on the table. And again, our goal is to make this trivial. Like, look, I'm not an idiot, right? I go out and see Google Cloud Functions or Lambda or whatever, and how incredibly easy it is to onboard there. And if it's hard to onboard, because you have to rewrite your stuff, because it's another unreliable system and you don't know what's happening, because you have to go out to some crazy exchange to buy something just to run a, you know, two-second job, that's a non-starter for us.
Okay, thank you very much. You can see there, again, QR codes in the upper corner. You can come join us; you can come join the Compute Over Data working group, of which we are a member and very proud to participate in, and we meet there every two weeks. You can come join all our Slack channels and knock yourself out. But thank you very much.