From YouTube: Compute Over Data Working Group 7th Session (Charity Engine and Decentralized Search Engine)
Description
The Charity Engine team takes us through an overview of their decentralized compute platform, and Stan Srednyak takes us through his work on decentralized search engines.
Charity Engine: https://www.charityengine.com/
Decentralized Search Engines: http://rorur.com/
A
And we are good to go. Okay, hello, everyone listening to the recording. Today is our last session here before we actually have our on-site CoD Summit in Lisbon, but we're fortunate: we've got a lot of content to cover today. We've got the GridRepublic team, who are going to talk to us about the Charity Engine platform, and we'll see some demo content there. We have Stan, who's done some interesting work around decentralized search engines, so we're going to cover that. And at the very end we're also going to get into a little bit of logistics planning for the CoD Summit coming up November 2nd and 3rd in Lisbon, just talking through logistics and topics that folks in the community are interested in us covering. So, without further ado, I will hand it over to you, Matthew, and let you guys kick it off.
B
So I'm going to be talking about Charity Engine and some work we've been doing integrating with Filecoin. Let me share a screen; I've got some slides to walk us through. Tristan Allen is also on the call; he'll be doing a demo. But let's get started.
B
Great. So, as I mentioned, the agenda is: first I'll talk a little bit about Charity Engine, with an overview of our service, some use cases, and some projects we're working on, and then Tristan will do a demo of our Filecoin integration. The overall theme, particularly of the demo, is making data computable.
B
We also contribute a considerable portion of our capacity to scientific research, so we have revenues we're generating for causes like Oxfam and CARE, and compute cycles that we're providing to humanitarian research.
B
On the latter point, we've donated at this point well over a billion core-hours to scientific research, and this is as much limited by our ability to find projects that need large-scale compute as by our resources. So if there's anybody out there in the world of DeSci who needs a lot of compute capacity, please be in touch: there's a contact page on our website, or anybody on this call, Wes, has our info.
B
The network has at this point over a million CPU cores and 100,000 GPUs. We have about two petabytes of distributed storage within our network, which is small scale, I suppose, compared to Filecoin, but ours is performant in particular ways that we might get into later.
B
We allocate these resources by provisioning them in standard instance types of the sort that you would see at AWS or Google: you get a certain number of CPUs and a certain amount of RAM, and then you can get as many instances, practically speaking, as you want. We use software standards as well.
B
Our first software publishing partner is Wolfram Research, so you can run Wolfram Engine and Mathematica jobs on Charity Engine. By the way, the way the system works with the app store is that the publisher can specify an hourly rate for use of their software, so when you get your charges, there'll be a charge for computing and a charge for the proprietary software, if you opted to use it. Access to the system is through a number of fairly easy-to-use interfaces.
B
There's an API for programmatic integration. There's a CLI, so you can, basically from the command line, have access to hundreds of thousands of CPU cores as if they were on your desktop; the CLI lets you use command-line tools like GNU parallel that you might be familiar with. There's also a web UI that makes it nice and easy to submit jobs and, in particular, to monitor the status of your work. So, regardless of the interface you use to submit a job, you can come to the web UI and see what's going on with your workloads. You can also provision and use our compute and storage resources through smart contract, initially on Ethereum; this should ease the transition to smart-contract-based integration with Filecoin resources when Filecoin's virtual machine is operational.
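To give a feel for the GNU-parallel-style fan-out being described, here is a minimal Python sketch that builds one submission command per input file. The `ce-submit` command name and all of its flags are invented for illustration; they are not Charity Engine's actual CLI.

```python
# Hypothetical illustration of fanning a batch of inputs out to a
# compute-platform CLI, GNU-parallel style. The "ce-submit" command
# and its flags are invented for this sketch, not the real tool.

def build_commands(inputs, container, instances_per_job=1):
    """Build one submit command (as an argv list) per input file."""
    return [
        ["ce-submit",
         "--container", container,
         "--instances", str(instances_per_job),
         "--input", path]
        for path in inputs
    ]

commands = build_commands(
    ["sample-001.fastq", "sample-002.fastq"],
    container="docker.io/example/genomics:latest",
)
for cmd in commands:
    # In a real workflow each command would be executed (e.g. via
    # subprocess or GNU parallel); here we just print them.
    print(" ".join(cmd))
```

Each argv list could then be handed to `subprocess.run` or written to a file consumed by `parallel`, which is the pattern the CLI description suggests.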
B
One thing we're kind of excited about is that you can now access Charity Engine compute resources basically directly from inside Wolfram Language. So if you're a Wolfram Engine or Mathematica user, just as you're writing your code, there are commands now baked into Wolfram Language that let you execute your workloads on Charity Engine. And we are fully integrated with the BOINC ecosystem, so we've donated basically hundreds of millions of core-hours to more than 25 BOINC projects.
B
As part of our broader commitment to distributed science, we have a marketplace that allows third parties to advertise resources. So providers, for instance Filecoin service providers, can advertise their CPU and GPU capacity and their storage; software, for instance through this publishing program; and data sets can all be advertised, and people can buy and sell. This is a beta; in particular, the data and the storage are early stage, but the compute resources work quite nicely.
B
So, a couple of key features, just to clarify what the platform is. One really vital point is that the system has an integrated batch scheduler. So in some of those interfaces you were looking at before, you just point to your input files and your application container, and you say how many instances you want and for how long, which mostly functions as a cap, and basically all the provisioning and scheduling are handled automatically. So you just have to focus on your work.
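The batch-scheduler workflow described above amounts to a small declarative job description: inputs, a container, an instance count, and a runtime cap. A minimal sketch, with all field names and values invented for illustration rather than taken from the platform:

```python
# Hypothetical job description for a batch scheduler of this kind.
# You declare what to run and at what scale; provisioning and
# scheduling are the platform's responsibility, not yours.
job = {
    "container": "docker.io/example/app:1.0",            # application container
    "inputs": ["store://bucket/part-0001", "store://bucket/part-0002"],
    "instances": 1000,   # how many nodes to fan out across
    "max_hours": 6,      # mostly functions as a cost/time cap
}

def validate(job):
    """Basic sanity checks a client might run before submission."""
    assert job["instances"] > 0, "need at least one instance"
    assert job["max_hours"] > 0, "runtime cap must be positive"
    assert job["inputs"], "at least one input required"
    return True

print(validate(job))  # True
```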
B
You don't have to think about infrastructure. You just say: look, here are my jobs, here's my application, and I want to process this on a thousand nodes. You click the button and it's all taken care of.
B
Also part of the scheduler is the capability to send computations to data, rather than vice versa. Computing on data in place is extremely important in a context like Filecoin, where it's basically slow and expensive to take data from the network, move it to a compute resource, and then push the results back; particularly for large distributed data sets, that just really undercuts the utility of a distributed storage platform.
B
So our scheduler's ability to know where the data is and to send your computations where they need to go is really powerful, and it adds, I think, an interesting dynamic to how Filecoin service providers can create value, because through programs like Slingshot, if they host popular, widely used data sets, they'll get more computing work.
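The compute-to-data placement idea can be sketched as a scheduler that prefers a node already holding a job's input and falls back to transferring the data only when no node has it. All names here (the node records, the CID strings) are invented for illustration:

```python
# Hypothetical sketch of compute-to-data placement: prefer a node
# that already stores the input, so no bulk transfer is needed.
def place_job(input_cid, nodes):
    """nodes: list of dicts with 'name' and a set 'stored_cids'."""
    local = [n for n in nodes if input_cid in n["stored_cids"]]
    if local:
        return local[0]["name"], "run-in-place"
    # Fall back: ship the data to some node (the slow, expensive path).
    return nodes[0]["name"], "transfer-then-run"

nodes = [
    {"name": "sp-A", "stored_cids": {"bafy-dataset-1"}},
    {"name": "sp-B", "stored_cids": set()},
]
print(place_job("bafy-dataset-1", nodes))  # ('sp-A', 'run-in-place')
print(place_job("bafy-dataset-2", nodes))  # ('sp-A', 'transfer-then-run')
```

This also illustrates the incentive mentioned above: a provider that hosts a popular data set wins the `run-in-place` branch more often, and so gets more computing work.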
B
The software that runs on a participating compute node has two modes: a task mode and a service mode. In task mode, the provider can poll the marketplace for compute jobs; if the provider sees jobs that they'd like to accept, the client can be launched, run those jobs, and exit. So you monitor the marketplace and you launch jobs when you see something that's, so to speak, worth your while. The other mode, service mode, runs in the background persistently and utilizes idle resources as they're available.
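Task mode as described reduces to a poll-accept-run-exit loop driven by a provider-side acceptance policy. A toy sketch, with the marketplace mocked as a list of offers and the pricing policy invented for illustration:

```python
# Toy sketch of the task mode described above: poll the marketplace
# once, accept only the jobs worth our while, run them, and exit.
# (Service mode would instead loop forever on idle capacity.)
def worth_accepting(offer, min_rate=0.05):
    """Provider policy: only take jobs paying at least min_rate/hour."""
    return offer["rate_per_hour"] >= min_rate

def task_mode(offers):
    """One poll cycle: return the job IDs we choose to run."""
    return [o["job_id"] for o in offers if worth_accepting(o)]

offers = [
    {"job_id": "j1", "rate_per_hour": 0.10},
    {"job_id": "j2", "rate_per_hour": 0.01},
]
print(task_mode(offers))  # ['j1']
```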
B
So this can be a really optimal way to monetize your compute resources. For instance, we have a container available for multi-coin proof-of-work mining, so there's sort of an infinite volume of work available there. You can just have this running, and whenever your resources have some capacity, it will mine the optimal coin given the current market prices and difficulty levels; so by running the client, you could basically always be generating money.
B
Hopefully, with increasing frequency, there will be commercial workloads that you can run to generate more revenue than you get through the mining. Alternatively, if you run persistently in the background, you can donate your background resources to scientific or medical research, where we have quite a large number of projects with quite significant need for compute.
B
Some current activities we're working on: we're running a neat biosurveillance project with the CDC, where we're processing a fairly large daily volume of environmental samples to search for evidence of pathogens of concern to the CDC. As part of our partnership with Wolfram, their Wolfram Alpha project uses our platform for large-scale data collection. A drug discovery project does a quite large-scale, petabyte-scale genomic search. We've supported the University of Washington's Institute for Protein Design with their work.
B
So it was exciting to be able to support that; we're in no way involved with that research, but we provided a meaningful portion of their compute capacity. And about a year or so ago we did a quite large computation for what's called a sum of three cubes, a mathematics problem which had been unsolved for over 100 years, and we were able to help the researchers there kind of brute-force their way into a solution, which was exciting. These are just representative.
B
We have a bunch of cool projects, particularly ones that we hope we'll be able to talk about more before the end of the year, in the categories of big data, distributed science, and distributed AI.
B
So that brings us to the demo part. At this point I'll hand it over to Tristan, who will show you what it looks like to use the platform in action. So, Tristan, I'm going to stop my screen share, and I guess you can pick up and start sharing yours.
C
We also set up rootless Docker for security reasons, so if there's any kind of a container breakout, someone doesn't get root access to the system. And we also lock down the networking for security, so that it's not just free access to the local network; everything goes through a network container that we have set up.
B
Just to jump in and clarify a point: this is showing somebody starting from scratch. In particular, if they were running in service mode, you would do this once to do the install, and then you just walk away and it's running persistently in the background. Correct? (Yeah.)
A
And hey guys, just raising a quick question from the chat: there was this question about network access. Do you have any experience or lessons learned about opening network access, or trying to restrict issues like botnet attacks and otherwise? It's a common topic that a lot of projects are thinking about.
C
It's a big area of concern, because if the wrong person gets in the wrong position, they could suddenly take control of this gigantic network. So yeah, we do a lot with cryptographic signing keys, so that nothing can be run without being signed. And as far as network openings go, everything's kind of operating in reverse: the client nodes are checking in with the server, and the server has no control over the client nodes. The client has to ask for work, and then it receives work.
D
All right, let me change this to the topic slide. So does that mean that Charity Engine reviews every workload that runs?
B
People can run any container. As Tristan was noting, we lock down Docker to make it secure, to limit the prospects for containers breaking out. But in particular, Tristan was mentioning before that we locked down all the networking, so there's no networking allowed by the compute jobs while they're running, except... (That was my question.) ...except, well, there's an important caveat, because we do want to allow applications to have access to the network, because there are a lot of kinds of things.
D
Yeah, I mean, that's 90% of what I wanted. I guess my question is: why couldn't I just have a job that ran ApacheBench against the White House, distributed to a thousand nodes? Like, presto, it's a DDoS.
B
Over HTTP, yeah. Well, there's a lot of sort of latency issues and so on; generally speaking, our network would be crappy for DDoS.
B
It works that way, actually. And sometimes it's basically a decentralized gateway, so to speak, or decentralized networking. The constraints about what's allowed are all running locally, and all those constraints are enforced locally. Then there's some centralized accounting that just keeps track of what's going on a little bit, to monitor for abuse, and for a variety of reasons.
C
Any of that string would be provided by Charity Engine for this node, and setup is complete. So now let me switch roles and assume I'm someone submitting a job to this network. Right now we just have this one node that's operational, but in a live environment there would be thousands or hundreds of thousands of nodes, and then hundreds or even thousands of jobs being submitted.
A
Well done. Thank you guys for the presentation; I actually have a lot of follow-up questions as well. I will post them, though, in the Slack channel for you guys, so that we can continue the conversation. Unless, Matthew, do you have anything else you want to share just to wrap up?
B
No, I just wanted to, I guess, put up the last slide of the deck, maybe just to emphasize: we're really excited about being able to help people in this ecosystem do cool stuff. It's ready to go, it works, and we're keen to see what we can all do together.
A
Well said. Thank you so much for sharing, guys; brilliant content. I'll make sure to post a link to your site and any contact information in the channel as well, so that everybody can get looped right in to your project. Let's go ahead and transition, then. So, Stan, if you are ready, we would love to give you a few minutes to talk about your project as well. Can you hear us okay?
E
All right, so I'll be talking about our decentralized search engine project. You can find a lot more information at our website here.
E
So, decentralized search. As you know, we are now in the middle of a decentralization revolution, where people try to decentralize whatever businesses they can.
E
Mind you, putting a business on blockchains or ledgers is actually not that easy a process. One of the first businesses that comes to mind is decentralized search, and by search I mean the set of operations that need to be done in order to do a web search. As you know, search these days is centralized: it's run by a few massive companies such as Google and Microsoft.
E
And there are some drawbacks in how they do things. First of all, we have to trust their results, and there's no way to benchmark how accurate the search is. There's also censorship present: for example, they can suppress sites as they choose.
E
There have been multiple complaints from companies about how they manage the ranking system, and there's no way for users to choose from different rankings. As you know, you might need different ranking algorithms for different purposes; people are different, and they might need specialized ranking algorithms for web pages.
E
So this is not provided. And most importantly, perhaps, these search engines make their revenue out of users, and the users are basically not in this feedback loop.
E
It has to do with the fairness of advertising. And one important thing is that the broad research community has no way to participate in constructing the knowledge graphs and the underlying knowledge systems that these companies internally construct to ensure good-quality search. This is also something we are going to address in this project. So, according to the architecture that we implemented, the search will be done by a network of independent and trustless nodes.
E
Anybody can join. Roughly speaking, it's like Google search done by the Ethereum network: imagine that all those nodes that were doing proof-of-work computation for Ethereum suddenly switched the protocol and started to do search indexing, knowledge mining as we call it, out of web data.
E
There are multiple challenges here, of course, and they include basic verification that the results are correct, because the first thing that comes to mind when one thinks about decentralized search is that there will be hackers who will try to influence the ranking and manipulate it in order to bring some sites to the top of the results page.
E
We expect large data volumes here, and these computations can be checked. So the revenue system basically models the usual revenue system of existing search engines, except that it is governed by smart contracts. Whenever there is an ad-consumption transaction issued by the user's browser, it will go directly to the blockchain, and it will distribute the funds to the node maintainers and the user himself.
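As a toy model of the smart-contract revenue split being described, funds from one ad-consumption transaction divided between node maintainers and the user; the 70/30 split here is invented for illustration, not taken from the project:

```python
# Hypothetical on-chain-style revenue split for one ad-consumption
# transaction: funds divided between node maintainers and the user.
# The 70/30 split is invented for illustration.
def split_payment(amount, node_share=0.7):
    nodes = round(amount * node_share, 6)
    user = round(amount - nodes, 6)
    return {"nodes": nodes, "user": user}

print(split_payment(1.0))  # {'nodes': 0.7, 'user': 0.3}
```

In the described design this arithmetic would live in the smart contract itself, so neither the search operator nor the user has to trust the other to account for the funds.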
E
We also implemented support for multiple ranking systems, so there's an algorithm market, as we call it. Anybody who thinks he is knowledgeable enough to supply a good ranking system can go and submit his ranking in a specified format; you have to pay up front, because it's an expensive computation. Then the network will pick up this ranking and compute it, users will be able to use the ranking among others, and maybe revenue will also be shared with the ranking providers.
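An algorithm market like the one described reduces, at its core, to letting anyone submit a scoring function in an agreed format and letting users pick which one orders their results. A minimal sketch, with the interface and the two example signals invented for illustration:

```python
# Hypothetical sketch of an "algorithm market": a ranking is just a
# scoring function submitted in a specified format, and users choose
# which one orders their results.
def rank(results, scorer):
    """Order results by the chosen scoring function, best first."""
    return sorted(results, key=scorer, reverse=True)

# Two competing ranking algorithms someone might submit.
def by_links(page):      # classic popularity-style signal
    return page["inbound_links"]

def by_freshness(page):  # specialized ranking favoring recency
    return page["updated_at"]

pages = [
    {"url": "a", "inbound_links": 10, "updated_at": 2020},
    {"url": "b", "inbound_links": 2, "updated_at": 2023},
]
print([p["url"] for p in rank(pages, by_links)])      # ['a', 'b']
print([p["url"] for p in rank(pages, by_freshness)])  # ['b', 'a']
```

The same result set orders differently under each submitted algorithm, which is the point: different users can need different rankings for different purposes.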
E
So that's what we implemented already, but this also brought up a few other questions, for example this.
E
Transitioning to Web3, we have to consider decentralized computation with large amounts of data. Here I think mostly about how decentralized computing is necessary for social networks such as Twitter, and recently there has been some effort, for example decentralized social. That brings about the notion of distributed computation, and, as we all know, Ethereum can't really stand up to the search challenge.
E
I accept that L2 solutions may actually do part of the job here, but we started working on our own independent product in this domain, and here are some requirements. First of all, we want to get rid of virtual machines, so that arbitrary languages can be used to...
E
I think these are clearly the main sort of players here, but there are also roles which are listed here: special storage nodes, executors, and sequencers. We separated sequencers into a special role because, as we know, in DeFi it's very important to track the strict sequencing of transactions; as we know, there's the emergent phenomenon of miner extractable value, and it's something that we have to keep in mind in our system.
E
We also want to have flexible consensus models in different parts of the network, and some agents of a confidential computing environment; in particular, we would like to have trusted-execution-environment nodes in some parts of the system. Another part of the project has to do with user data, and what we do there we call digital immortality, for the following reason.
E
...Of data, and all this data is savagely wasted these days; it's not collected. But that data constitutes what we call a web reflection, a digital personality, and the goal for us is to create a set of tools that will facilitate the collection of this data, with the goal of combining it into certain...
E
So we are working on the creation of a marketplace for this attention data, shared between users and data providers such as websites and basically other sources of knowledge.
E
Right, yeah, all right, I'm actually done. So here, we are running a small cluster on which we deployed some of our nodes for the search engine, and if you go to these ports, you can actually try out the search engine. It's a very small cluster, so it's probably not very meaningful, but it's scalable.
E
So we are very interested in partnerships, especially with hardware providers, because we are looking to scale our search engine, and we are also looking for partnerships with advertising libraries and data providers. That's basically it; let me just show you one more thing. So here's the cluster that we are running on AWS.
E
Just four machines here. So here's how... let me see.
A
Yeah, I tell you what, Stan: would you mind publishing this into our Slack channel, so folks can follow up afterwards and run the demo?
E
Because it's quite expensive; but yeah, we can, if you're interested, especially if you're a...
E
That would be great, if you can, you know, collaborate on this. Yeah.
D
I feel terrible that I'm interrupting the...
D
...work, but there you go. Okay, so the CoD Summit. Again, this is supposed to be all of us. Can everyone see my screen?
D
The URL to look to is codsummit.io. Please register here; we're trying to get headcounts. Please tell everyone you'd like to register, invite whoever you'd like, and let us know, and we'll make sure to have space. The nightmare scenario is that we have too many people and didn't book a big enough space or enough food. It does look to be very, very exciting, with a lot of really great speakers coming up.
D
Oh, speaking of which, I'll share this sheet again, but we're trying to finalize some of the talks. I know I've talked to a lot of you about the talks; if I've talked to you, we should nail down exactly what your title is, so we can get it in here.
D
The thing that we would love to publish by the end of this week is the schedule, which is currently open; you can see it here in this sheet. Day one will be at Time Out Market, which supports about 150 people. We are starting to fill up, but now is the time to get your name in, so that we don't overflow. Our current plan is to do day one at this 150-person spot, where we'll have just a series of talks.
D
I really would love this to be a series of things about what it means to run compute over data. This is not about Protocol Labs or Filecoin or anything like that; this is about the entire space, because, as someone mentioned at the beginning of this call (I don't remember who), there are so many interesting projects in this space, and I think we are purely in collaboration mode at this point with everyone, so the more that we can talk about different approaches, the better.
D
The way it lays out is what you see here; the order is still to be decided, but this is roughly right. There's an intro that we'll do, then Bacalhau, and we have a special guest: for those of you that know Anaconda, Peter Wang, the CEO and founder of Anaconda, is going to be giving a talk on Wasm and portable computing, which is really exciting. The Fission folks will be giving us a talk on decentralized authentication.
D
Then Juan will be rounding out the morning with the landscape and impact of compute over data. When we get to the afternoon, we'll have structured data in Web3; we'll have a talk about incentive models from Gridcoin; we have EVM-compatible Filecoin on the FVM, so I'll talk about that; and we have Spice AI, who are doing a bunch of analysis on blockchains and other things like that in a compute-over-data format.
D
So that's kind of more of a customer phase, and then we have some slots on the first day. We've talked to a bunch of people; if I've talked to you and said you have a talk, please let me know what your titles are and we'll try and slot them in. It's not the end of the world.
D
If it goes to the second day, we're absolutely going to have space there. But, just to round that out: the high level for the second day, the current thinking, is to have two tracks in a mostly unconference-style discussion. One track is what it means to build a platform: invocation, talking about the standards we want to work on together, SDKs, interop, all that kind of good stuff.
D
Again, this is very unconference-y, so we're happy to slot whatever here. The second half is data, I think data and usage: how people need, want, or are using HPC; private, government, and reputation data; data pipelines; building models; and so on and so forth. So those are the two slots. Again, this is not intended to be me talking to everyone else.
D
Sorry, I'm just noticing my audio is low. I hope people heard me.
D
Okay, that's probably better. Is that better? Okay. So, again, this is not supposed to be me dictating the schedule; please let me know if this is not compelling or if there are things you'd like to talk about, but this is roughly the two days. As soon as you do get a talk, or if you want to have an attendee headshot here, we're going to be adding these as well this week.
D
We will both have a Zoom link and record it.
E
And what about contributing talks over Zoom? Is that possible?
D
On the second day, each one of these is going to be 75 people. You should note this is not in Time Out Market; this is in the Hyatt Regency. We'll have all this... there you go: each one of these is 75 people, open. Oh, I had some other thoughts here, like running CoD at scale, exactly what Kudos just did; I'd love to have people who have run things at scale talk.
D
Providers on the first day as well would be great, but we shouldn't feel the need to fill up either of these. It's totally okay if we just get a small group together; what I'm really looking for is collaboration and joint discussion.
B
Well, by way of suggestion, one thing that might be useful is some way for people to communicate to others present what their capabilities are, and what their needs and pain points are, too.
D
Absolutely, some kind of matching; that was exactly it. So again, this was my rough thing: the first day, people just talk from their experience. This is what we've seen when we went to try and do compute over data; this is how we're approaching the problem space; this is why we think it's compelling; whatever it might be. This day here is exactly supposed to be matching, so people who aren't familiar with government data, for example, or want to run on government data...
B
Yeah, yeah. You know, would it be possible to fit in there an unstructured section? People's needs, you know, may be quite idiosyncratic.
D
That's
a
great
Point,
too
I
mean
yeah
in
truth,
I'm
I'm
already
breaking
the
rules
of
unconferences.
By
doing
it,
this
way,
by
having
proposals
now
like
in
a
true
unconference
form,
you
would
not
have
anything
until
the
morning
of
but
I
want
to
help.
People
like
you
know,
get
ideas
about
the
direction
to
go.
B
Sure, maybe.
D
Okay, well, with all that, I guess we have a quorum. I will be posting this spreadsheet to the channel shortly, like I said. Oh, sorry: if anyone has any thoughts, please add them there, but I really would love to nail all this stuff down by the end of this week.
E
So you mentioned that on the second day they'll have, like, storage providers. Is there any way to get some idea of what these companies are? For example, I'm very interested in talking to compute providers, because we are going to launch our product, and it would be very useful to have some kind of list of storage providers there somewhere.
D
Yeah, so: is there a list of storage providers? Is that what you're asking for? Yeah.
D
I'm happy to put you in contact. I don't know those offhand; I mean, there's a big long list of them. Send me an email and I'll put you in contact with a set of storage providers, or at least figure out what you're looking for and figure out how to get you the answers you need.