From YouTube: Bacalhau State of the Union - David Aronchick
A
So I'm back again to talk about Bacalhau, a platform to bring compute to data, as I mentioned earlier. Bacalhau, we hope, is just one of many platforms, many different specifications and things like that. But we've made some terrific progress, and I'd love to show what we have today. So that's us, that's the fish.
A
By the way, does everyone know why it's called Bacalhau? Does anyone know? We were in Portugal last year when we were thinking it up. Compute over data, cod, bacalhau: that's the joke. And the question we had at that time was: can we build a decentralized system that was built for decentralized compute, right? Start with decentralized compute first and work from there?
A
Interestingly, I was one of the first PMs on Kubernetes and I led it for several years, and this goal is much bigger than the one we had there: Kubernetes gated at 50 nodes at the start. A little data point for you: this was actually more substantial than that, but we said, with eight weeks we feel like this is something we can do as a proof of concept.
A
So we started. The first line of code was written by, I think, Kai or me or Luke, I don't know. Somebody wrote it, somebody checked it in; it was, I think, CI, who knows. Anyhow, you can go check the repo, it's there. And the first Compute Over Data summit was just eight weeks later, and we did really well. I was shocked, actually. Luke and Kai got on stage, they demoed it, they demoed our interfaces and things like that.
A
I'll give us yellows for five nines; you know, I think we didn't have quite enough monitoring at the time. And determinism: certainly it was there as a proof of concept, but it left a lot to be desired. But the rest of it did work: scheduling and CIDs and public data and so on. So then we said, okay, let's present this to everyone. We got really good feedback.
A
You can see there, that's our first teeny little group getting together, being excited. And the number one question was: okay, you're practically done, right? When is it coming? So we set ourselves a big hairy audacious goal, and that goal is what you see here. We wanted to get to a thousand nodes, provably a thousand nodes.
A
We don't actually need a thousand nodes all running; the goal was being able to run, not just schedule, 10,000 jobs; being able to process a basically unlimited amount of data across many nodes; job success rate; handling malicious nodes; DAGs; and so on and so forth. You can see those there, and that's a lot. That's a lot to dig in on for October, but we are, if nothing else, ambitious. So on June 1 we launched the alpha to production. No gates, no anything.
A
You could go out there, download a client, and have at it. In August we added Filecoin and Estuary support, and in September, right at the beginning of September, we were able to add native WASM and GPUs.
A
So we went back and looked at this, and we said: okay, we're actually where we wanted to be, timeline-wise. We were actually about a month early. And, you know, some reds here, no question: very, very large file processing we weren't able to do; we didn't do DAGs or malicious nodes at the time; and no reputation system to speak of. But we did succeed in a bunch of things.
A
You can see those there. But along the way we had listened to a bunch of people from the ecosystem who were already giving us feedback, who were always saying: hey, this would be great if you could do this, can you do it? And you can see that list on the right here. Everything you see on the right there was not in the original spec.
A
Yet it is what we delivered: GPUs, HTTP downloads, you know, OpenTelemetry, you name it. Huge performance wins, huge examinations of locks and mutexes and things like that. So we were really pleased with that, and we were moving forward. Which brings us to today, where we are proud to announce our beta. Now, obviously, it is a beta; there will still be API movement. Oh, thank you. So there will still be movement in the API and things like that, but we really are taking on SLAs and things like that.
A
We want to have uptime, we have canaries, we have all sorts of things that we want to nail, and I'll talk about our path to getting to 1.0 and things like that at the end. And again, it goes back to our original vision.
A
Can we deliver simple, low-cost, distributed-first tools and unlock new ecosystems and collaboration? And if we do that, you'll get simplified execution, you'll get huge performance wins over even potentially a lot of existing systems today, and you'll launch a new collaborative scientific community. And one of the things, when I say simple, is really meeting people where they are. So I went through this thought experiment. I gave this slide back in April and I said: look, this is what people are using today. This is Microsoft research; they published it.
A
They use 158 different tools, 68, oh, 59, sorry, different tools; it's just an enormous amount. And if you map that to what we can support today, it's actually not bad; I don't know, call it 50/50. And some of these we'll never support; like, we're not going to support PowerPoint. Sorry. Well, who knows, maybe a good script in PowerPoint, I have no idea. We do support more Windows, by the way, so maybe. And some are yellow: like, yeah, you can throw it in containers, but that isn't really doing it.
A
But the point is that it's not that bad, right? You can throw a Jupyter notebook in, you can throw Python in, and so on and so forth. Not terrible. But I think this is really the mission: how little can we have data scientists do and still take advantage of all these things? So you may be asking, what can I do with it today? So let's show, right here on stage. I'm going to start with a very simple one.
A
One thing that's kind of weird about Filecoin, no matter how awesome and, like, widespread and universal it is: it's actually kind of hard to upload your data. Here's Bacalhau again. This is it, seriously.
A
Can you abuse this to your heart's content? Well, obviously. So let's go. I don't know if you've seen... can you see this here? Yeah. So, I'm a cheese fan, so I'm going to search for some cheese.
A
I'm going to be very responsible here and use only a Creative Commons one, because I want to be a good citizen to the world. Okay, and I'm going to grab this image of Wensleydale cheese here, which is under a Creative Commons license. I'm going to find this, I'm going to go over here, I'm going to enter it as a URL, and now I'm going to execute this command. Sorry.
A

That's it. I've taken an image that was out there in the world. It is now running: it is finding it, downloading it, copying it over to Bacalhau. Presto, we're done. And now I take this and I say: go get it for me. I was a little bit nervous; it's not that we don't support it, it's just that oftentimes the handshake to actually getting the thing... oh, fantastic. I'm going to go grab this. It downloaded it into this handy folder here, combined results, outputs, and there we are: Wensleydale cheese. Presto. Now, that was uploaded.
A
It was put on Estuary, put on Filecoin, and downloaded through Bacalhau. And by the way, not just there: I can go straight to any other gateway and get it. Presto, magic, and it's cheese, which I'm always a fan of. So, I mean, that's not abuse, that's just nothing. I don't know if you've seen, for example: there are these great data sets out there in the world. This, in fact, is NOAA's geostationary orbit data; there are a lot of images in that.
A

Abuse away. And by the way, I should point out, you have a QR code in the upper right corner. Every one of those QR codes is different, and they all link to our public documentation where you can see stuff. So, all right: now I have a bunch of data on Filecoin. What can I do with it? Well, I could run some Python. There you go, upper right: you can run it right there inline. Middle: you can run a complex thing where you might need to import things.
A
Unfortunately, we don't support... well, we'll get to how you do that in a minute. You can't do requirements and things like that in this format, but anything that's native to Python you can do. Or, if you do need to do something complicated, you can see down there in the lower left: you can actually put your entire script on Filecoin, using the same URL approach that I showed earlier, and it'll run just fine.
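A minimal sketch of the kind of self-contained script described here: no requirements file, stdlib only, reading from a mounted inputs folder and writing to an outputs folder. The /inputs and /outputs paths and the summary format are assumptions for illustration, not confirmed Bacalhau conventions.

```python
import os

def summarize(in_dir, out_dir):
    """Count the mounted input files and write a one-line summary.

    Only the standard library is used, since a requirements.txt
    cannot be installed in this inline format."""
    os.makedirs(out_dir, exist_ok=True)
    names = [n for n in sorted(os.listdir(in_dir))
             if os.path.isfile(os.path.join(in_dir, n))]
    total = sum(os.path.getsize(os.path.join(in_dir, n)) for n in names)
    with open(os.path.join(out_dir, "summary.txt"), "w") as f:
        f.write(f"{len(names)} files, {total} bytes\n")

# Inside a job the data would be mounted at /inputs (an assumed path);
# the guard keeps the script harmless to run locally too.
if os.path.isdir("/inputs"):
    summarize("/inputs", "/outputs")
```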
Okay, how about R? R is pretty popular in data science.
A
We've got R, there you go. And you're like, all right, fine, hello world, let's get to the good stuff. All right, so now we're going to start to get more complicated. Pandas, again a huge data-processing thing, super easy and super powerful. In this case, I mentioned using multiple files simultaneously; how would you do that? Well, there you go, you can see in that tree.
A
You get CIDs back, and then down below you can see I'm going to mount those CIDs into a folder. You can see that I can name the folder, you know, a working directory of files. I run my pandas container, and then I run the script that I was writing, and it's able to mount in all the files that I had named earlier into that single folder and just execute it like it was on my local machine, right there. You go. This one's insane, but it's so good, I love it.
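The pattern above, sketched locally: several CIDs end up mapped into one directory, and the job processes them as if they were local files. The demo uses a pandas container; the stdlib csv module keeps this sketch dependency-free, and the file names are illustrative.

```python
import csv
import os

def concat_csvs(mounted_dir, out_path):
    """Combine every CSV in one folder, mimicking how multiple CIDs can be
    mounted into a single directory and then processed as if local."""
    header, rows = None, []
    for name in sorted(os.listdir(mounted_dir)):
        if not name.endswith(".csv"):
            continue
        with open(os.path.join(mounted_dir, name), newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            header = header or file_header  # assume all files share a schema
            rows.extend(reader)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return len(rows)
```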
A
Let's say you have a Parquet file. Who knows DuckDB? Anyone? All right, not a lot of people. It's this wonderful thing that basically allows you to mount in columnar databases, or columnar files at all, and gives you full SQL functionality against them. So in this case, you're going to mount in the CID of a Parquet file that I want, start DuckDB, and execute arbitrary SQL against it.
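The pattern here is: mount a file by CID, point a SQL engine at it, and run arbitrary queries. DuckDB queries Parquet files directly (e.g. `SELECT * FROM 'file.parquet'`); as a dependency-free sketch of the same mount-then-query shape, SQLite from the Python stdlib stands in below, with an illustrative table and query.

```python
import sqlite3

def query_mounted_file(db_path, sql):
    """Open a database file at its mounted path (read-only) and run
    arbitrary SQL against it, as the DuckDB demo does with Parquet."""
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```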
A
Who knows what YOLO is? YOLO is the number one object-detection thing for images. You can see here, we're just going to mount in YOLO and run that. Presto, YOLO. And you can do that over thousands of images if you want. And one thing I want to show about this architecture here: the YOLO container is actually quite small, because we are mounting in the weights from IPFS, meaning you don't have to download all these various things in order to do it. Especially if your containers are quite large, that makes it extremely portable, and because we schedule to where the data already is, spin-up time for those should be quite quick as well. And if they're not available and you want more CPUs, you're able to do that. How about: go figure out the next protein? You can do AlphaFold. Not too bad.
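The architectural point, sketched below: keep the container small by mounting large model weights in from IPFS at job time instead of baking them into the image. The mount directory and filename are assumptions for illustration.

```python
import os

def resolve_weights(mounted_dir, filename="weights.pt"):
    """Find model weights at their mounted path. If they are missing, fail
    with a hint to mount the weights CID rather than enlarging the image."""
    path = os.path.join(mounted_dir, filename)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"no weights at {path}; mount the weights CID into the job "
            "instead of baking them into the container")
    return path
```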
A
All right, you're saying, lame, you're demanding Stable Diffusion. Let's get to it. So, if you are silly like me, you like cheese, and you really want to be silly about this. So what we're going to do here: we do have a Stable Diffusion container, and we have GPUs on the network. Let's abuse this. I like cheeses, sorry. I downloaded a thousand cheeses; you can see them here.
A
I wanted to have a verb, just to say that the cheese is going to do something; I don't know, who knows what, Stable Diffusion will figure it out. I merged them together into a bunch of sentences: "cheese cheddar cloth-bound", something, "Oklahoma City", blah blah blah; who knows, Stable Diffusion will figure it out. And I executed a thousand of them. So there you go, running on the Bacalhau network.
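A sketch of how a batch like that might be assembled: pair each cheese with a random verb phrase, and each resulting prompt becomes one Stable Diffusion job fanned out on the network. The word lists, seed, and count are illustrative, not the ones used in the talk.

```python
import random

def make_prompts(subjects, verbs, n=1000, seed=0):
    """Build n prompt strings by randomly pairing subjects with verbs;
    each prompt would be submitted as its own job."""
    rng = random.Random(seed)  # fixed seed keeps the batch reproducible
    return [f"{rng.choice(subjects)} {rng.choice(verbs)}" for _ in range(n)]
```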
A

Some of them are crazy, like... look at that, that looks like a real picture. Man, Stable Diffusion is amazing. And some of them are just... what is she doing? She's running with cheese? I don't know. All right, back to the demo. So, like I said, you can go get that; we have that published.
A
I will say, by the way: if you run on CPUs, expect to wait. That's fine; there aren't that many GPUs in the network. We would love you all to join and participate with GPUs, but there you go, seriously. And here you go, lots more stuff that we did; some real nightmare fuel out there, by the way. One more thing: it's WASM time. So I took that... actually it wasn't me, my colleague took that Stable Diffusion demo.
A
Here we generated a bunch of superhero dogs, and we ran those. Oops, sorry; yeah, here's the code. Again, it's all hosted out there, very simple. It takes the existing image that's already out there from Stable Diffusion, on IPFS, and it runs a transformer in WASM to convert that into... I don't know, take your random image conversion tool. Here we're going to run that over here to the side. All right, we'll zip forward.
A
Here are my superhero dogs, generated by Stable Diffusion, and now we're going to apply a filter against them. You can see those there, right: again in WASM, executed on the network. I didn't have to think about anything; I just ran my code. And let me just break that down for you for one second. You write your component app, whatever it might be. You use, in this case, cargo to target your WASM runtime, and that's it, you're done. We upload the job.
A
Then Bacalhau finds an appropriate node, mounts the data in from IPFS, runs the job, and writes the results for you. Again, it could not be simpler. Just stay focused on what you do great, what every data scientist does great, and go from there. We promise. And even further, you don't even have to install it: every one of these examples has a notebook on Colab.
A
You can just go click it and run it right there in the browser. Browser, kind of. So our motto is: if you can contain it, or you can compile it to one of our runtimes, you can run it. Mostly, because we don't have any networking for now.
A
Networking is a huge part of our security surface area, and we want to be very, very sensitive about that. And you might say: these seem like science projects, these seem like Dave having a good time with the demo. Too much fun with the demo? Probably, but there you go. You want real world? We are very happy to announce many real-world partners that we're launching; a few are listed here. So first, we're proud to announce:
A
We will be partnering with the Caltech high-energy physics lab to do work with their Large Hadron Collider CERN processing.
A
We will be partnering with the city of Las Vegas and Blocks to launch CCTV data for internal processing and other things like that; that data is going to be available and online. This was a really interesting one, because they have requirements on locality and where that can be deployed, and you can deploy the Bacalhau network to your own private network, if appropriate, again.
A
My feeling, our feeling, is that this is even easier than a lot of the big data platforms today; it's just far more valuable and easier to engage with than, you know, having to set up Hadoop, Spark, and so on. I don't want to take anything away from those platforms, but we serve a different purpose: we're doing a lot in a sharded, decentralized-first way. When you get to the end of that, then sure, engage with those other things that are more appropriate for things like streams.
A
We're also proud to be partnering with the BOINC project. For those that don't know, it's one of the earliest broad-scale end-user contribution projects: SETI@home, climate prediction, Rosetta, things like that. We're going to be partnering with them as well. And finally, you'll hear about WeatherXM later today; we're very excited to be partnering with them. They're collecting weather data from all over the world, and we're going to help them process and run their models on it.
A
So those are all launching today or very soon; certainly they're in the announcements today, but very soon we'll be talking about them and what they're doing publicly. I talked about earlier the Compute Over Data ecosystem and what we're trying to achieve via compute over data.
A
You can see that here. Our hope is that Bacalhau sits, you know, right at the center of it, and if we're not, we really want to get there. So we are very serious about being a very open, very extensible platform, not just for us. We want to also enable other platforms that want access to compute over Filecoin, compute over data.
A
You know, helping them achieve that. So, what's next? I'm going to walk through a little bit of our roadmap over the next year or so. When we think about it, we really think about breaking it down into two major categories. First, the end-user side: people who actually want to run jobs on it. And the compute-provider side: people who want to enrich their storage-provider deployments or potentially earn incentives.
A
Obviously we don't have incentives yet; we're working on that. But there is making the storage they're already hosting on Filecoin and IPFS richer, since we plug in directly: all you have to do is mount it in and we can run it. In M1, in December, we're going to be powered by Fil+, and we're going to make significant improvements on performance and reliability.
A
We're also going to launch our dashboard, which we're really excited about, so you'll be able to look across the entire network, see jobs, see your jobs, and things like that. In March we will finalize our support for WASM. What you see here is, you know, what I would call beta quality, which is fine; we are at beta. We're going to improve reliability, particularly in transport, making sure your jobs are running to completion and that the network handles that. We are also very serious about improving the developer experience.
A
We want to make that loop of executing jobs very, very straightforward, so you can, on your local laptop, simulate what it's like to run a full Bacalhau network, and then, when you move it to the network, you can be very certain that you're getting exactly what you ran locally.
A
We're also planning on launching our API client at 1.0. 1.0 means API stability: building on it will be much more predictable, and if nothing else, we're going to have versioning. We do have API versioning right now, but we're going to be better about enforcing it and having backwards compatibility, per whatever we decide as a community. Also, our grant program, our Bacalhau season, begins in June; we're again going to work
A
on improving our developer experience, particularly around some of the most difficult concepts for decentralized systems, or, excuse me, distributed execution: map/reduce, sharding, things like that. We already support very rich sharding today, but not over single files, which can be a problem: if you have a very large file, how do we split it up? How do we make it easy? Also finalizing support for DAGs, full DAGs; we already have some initial work on that, and I think we could do a lot more. Federated reads.
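The single-large-file question above can be pictured as computing byte ranges that independent nodes could each read. A minimal sketch, with the shard size as an illustrative parameter:

```python
def shard_ranges(total_size, shard_size):
    """Split one large file into (start, end) byte ranges so separate
    nodes can each process a slice independently."""
    if shard_size <= 0:
        raise ValueError("shard_size must be positive")
    return [(start, min(start + shard_size, total_size))
            for start in range(0, total_size, shard_size)]
```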
A
So, if you have a large file that crosses many machines, how do you read from them all simultaneously? How do you join them and execute on them? A rich local client, and finally, consensus and verification of deterministic jobs. In September: a website for data, no-code-type experiences, those kinds of things. On the compute-provider side, we're going to have Fil+. Our plan right now is to look at: if you run the compute job, you will get first rights to bid on the Fil+ deal.
A
So if you run it on Fil+ data, you'll be able to get Fil+ on the other side. Obviously, that's a huge incentive to anyone wanting to participate in that program. In March, we're looking at the unified control plane: if you have many nodes, being able to look at them, graph them, understand what's going on, understand who's running what on your machines. You can already do that right now, but this gives people a much cleaner way to interact with that. Multiple executor support.
A
We talked about that, and our server API reaches 1.0 in June of 2023. Additional deal engines: like I said, we already support Estuary; we want to add many more. We're also going to add support for a much more malicious network, you know, attacks: Byzantine fault tolerance up to one-third. And then finally, our end-of-year goal is that anyone can be a compute provider.
A
I was, you know, incredibly inspired by AlphaFold and SETI@home and things like that. I want everyone, you know, with this very powerful machine that most of the day does nothing, to be able to participate in the Bacalhau network and potentially, you know, advance humanity. And with that, that's it. No more cheese. Any questions?
B
For this server milestone, M3: how are you approaching the Byzantine fault tolerance, and is that going to be an FVM integration?
A
We are exploring it as we speak; we are talking very closely with the FVM team. Obviously they provide you a layer of consensus immediately, and so we're very hopeful that we're able to do it. I think, if I were to place my bets, integration with them would be quite likely. But, you know, we don't want to have them need to take a dependency on us.
A
If they have other priorities, then I want to make sure that they're able to execute according to their schedule. But over time, I'd be very surprised if we weren't closely integrated. Thank you.
B
C
Hi, great talk; also love cheese. Perfect. So my question is: do Bacalhau nodes communicate? What I mean is: I submit a job, and the job has two computations, let's say. Can two Bacalhau nodes coordinate those computations? Is that something that you are looking toward, or not?
A
There's a subtle thing there, and I'm going to break it apart, if I might. The first is DAGs. A DAG, a directed acyclic graph, right: how do you start at one place and get to another place along many steps that may, you know, fan in and fan out, have conditionals, all that kind of stuff. We plan on supporting that.
A
We are actually already working on that as we speak, so that you can submit your DAG to the network and we will figure out how to execute it, whether it's on a single node or over a series of nodes with intermediate steps, things like that. We are working on that.
A
However, the networking thing we do not support right now, because the attack surface is too high. Today every node has no networking. You know, it's very disappointing, but it is true: no networking, meaning after you start your job. You saw there that I could mount in from a URL, anything, but that is before the job begins. Once the job begins, that's it. So no, for example, downloading from PyPI for your requirements.
A
You've got to build that into the container, or have it, you know, already mounted on IPFS. No worker parameters, no master nodes for a three-tier TensorFlow deployment or anything like that.
A
Unfortunately, it's just too big an opportunity for attack, and it also hurts the ability to do determinism. That said, we have a lot of really good ideas on how to get around that in very isolated ways; we just haven't gotten there yet. As you saw on the roadmap, our back-of-the-envelope guess is September of 2023, which is a long time. I think we can come up with intermediate steps that might get you there, but that's the reality for truly treating this like a general network.
A
Verification is still very much in the spec phase. We have a lot of thoughts; I think we're exploring just about every model that is out there, whether it's having referees that oversee things until we bootstrap a reputation system, or historical reputation once things are running. You can also imagine generalized consensus, where you deploy this: it's actually quite trivial to deploy, say, 10 jobs concurrently and then, at the end, evaluate whether all the results equate to each other. You can do that too, or require that some number of them equate to each other. But I will be honest: it is very early thinking there. We're always looking to plug someone else in; we would love not to invent it ourselves.
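The replicate-and-compare idea mentioned here can be sketched as: run the same deterministic job on several nodes, hash each node's output, and accept a result only if enough replicas agree. The quorum parameter and the choice of SHA-256 are illustrative.

```python
import hashlib
from collections import Counter

def majority_result(outputs, quorum):
    """Return the output that at least `quorum` replicas agree on,
    comparing replicas by the SHA-256 of their bytes."""
    digests = [hashlib.sha256(o).hexdigest() for o in outputs]
    digest, votes = Counter(digests).most_common(1)[0]
    if votes < quorum:
        raise ValueError(f"only {votes} of {len(outputs)} replicas agree")
    return outputs[digests.index(digest)]
```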
B
A lot of questions; big fan. Do you guys support harnessing multi-CPU or GPU hardware already, or where does that fit in?
A
We do; it is fully supported, so it's already multi. Basically, just assume anything Docker can do, we can do. So absolutely multi-thread, multi-GPU, things like that. We don't have any multi-GPU machines on the network, so you've got to add your own, or, like, encourage people to add their own, but...
A
To be clear, we are gated on Docker for now. So if you can give me a Docker container that runs TPUs, and then mount a machine that supports TPUs, there's absolutely no reason you couldn't do it. There was one right here, then, I promise, that's the last one. Unless... is Juan here?
D
E
I do a lot of work with ecological data, like in a forest. How do you imagine this could work for edge computing and edge storage and those kinds of environments?
A
Sorry... oh, edge. Well, as I said in the first talk, this is designed specifically for that. So, like I said, if you can contain it, you can run it. What I would say is, depending on how low-power your edge is, this is an ideal network for you. What you would do at the edge is collect your data.
A

It would, you know, just sit there and run, and you would make a request to the network and select: I want it to run against things that have this data, or this particular machine profile, or my private signaling label, right: I want it only to run in, whatever, Yosemite, on whatever these devices. Again, we're looking through a lot of the node selection stuff right now; there's a lot of flexibility, and we'd love to hear your specifics. But the net is: you would take that job, either
A
raw compiled code, WASM, or your container. It would push it down to that edge, run it locally, and then give you only the derivative. And so, in many ways, this is hugely valuable for that, because those things are on, whatever, 256-kilobit connections that connect once a day; you don't want to be interacting with them in some long-standing way that requires huge long uptime. You want them to trickle back the results, trickle back just what you need.
A
So if you had said, hey, 98 percent of the day there's no data and two percent is anomalous: today you would have to push all of that back to the server in order to figure out that 98 percent of it was worthless. If you could run a simple filter job on the edge, where the data is right now, and then push back only the two percent, that would be a huge win.
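The edge scenario above can be sketched as a tiny filter job that runs where the data lives and ships back only the anomalous slice; the simple threshold rule stands in for whatever detection the real job would do.

```python
def filter_anomalies(readings, threshold):
    """Keep only readings above the threshold, so a low-bandwidth edge
    node pushes back the small slice that matters instead of the whole
    day's telemetry."""
    return [r for r in readings if r > threshold]
```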