Description
Daniel Marin and Nexus.xyz: a new network for decentralized computation, enabling general-purpose verifiable cloud computing. Powered by MPC and ZKPs.
John Fletcher and The Innovation Game: a crypto proof-of-work project that leverages scientific computation to secure its network and reward participants for processing important scientific research workloads (the-innovation-game.webflow.io)
A
And we're live. All right, hello everyone listening to the recording. This is the 13th session of the Compute Over Data working group, and happy Valentine's Day. We've got a pretty packed session today. We've got Daniel from nexus.xyz, who's going to tell us about the new project they're launching, very exciting stuff, and we also have John Fletcher, who's joining us all the way from Cambridge with his crypto team. If we're lucky and we get through broadband issues across the Atlantic undersea cables, we'll have him on today; if not, we'll stitch him in on a future session and add it into the YouTube playlist.
B
Yeah, thanks a lot. Hello everyone. As Wes said, I'm Daniel. I'm the founder of Nexus, of Nexus Labs; that's the official company name. Nexus Labs is a company working on verifiable cloud computing in a decentralized manner. I'm very excited to talk to this group, because we're all doing stuff around decentralized computation, and that's exactly what we're doing: decentralized, verifiable computation. So, some background about us.
B
Nexus Labs originated at Stanford, and in fact we're based here at Stanford, right outside campus. In fact, I'm right outside campus right now and I can see it through the window. So we started through the Applied Cryptography Lab.
B
It began as part of research into verifiable computing and blockchain in general, and the company also went through the Stanford blockchain accelerator, so in general our roots are very much there at the university. The problem we are trying to solve is scaling all kinds of blockchain computation, whether that's by giving them...
B
...higher capabilities around computation, storage, and I/O, and I will explain a little bit what we mean by I/O. The classic scalability problem, which on Ethereum, for instance, is addressed by rollups, well, we're providing a different way to achieve scalability that is very different from rollups, very different from everything, and it's through a form of off-chain compute using verifiable computing, such that we run things off-chain and just verify them on-chain.
B
So the solution, as I mentioned, is: how can you achieve verifiable computing, such that you can verify that some set of people ran a computation correctly off-chain, or are storing your data, which the folks from Filecoin are very aware of, or that you have I/O capabilities. And by I/O capabilities we mean:
B
How do you make, let's say, for example, a blockchain interact with another blockchain, a rollup, or the internet, in a way that is verifiable and secure, such that you know the interaction between any two decentralized systems is computationally correct?
B
So: computation, storage, and I/O. And on that note, let me say something. Maybe you're familiar with a little bit of classic theoretical computer science, and, well, part of theoretical computer science and classic computer architecture is this notion of a Von Neumann architecture, of a Von Neumann general-purpose machine. And a Von Neumann machine...
B
It's a machine that has access to computation through the CPU, to storage, and has input and output access. So something that we are building, and I'll talk a little bit more about this, is kind of like a general-purpose, Von Neumann, decentralized, verifiable machine. That's a lot of words, but I will explain what all this means. So at Nexus Labs we're building two projects. One of them is called Nexus and the other one is called Nexus Zero.
B
So, for instance, we just run any program that you want, specified as a WebAssembly binary, on a decentralized network which replicates computation through a proof-of-stake, decentralized computer network that is allocated specifically for you. So it is, in a form, like a blockchain, with the difference that these blockchains are, number one, application-specific; number two, connected to Ethereum directly; and number three, serverless, kind of like a form of serverless computing.
B
Secondly, Nexus Zero is the zero-knowledge counterpart of Nexus, which is attempting to achieve verifiable general-purpose compute through zero-knowledge proofs and a zero-knowledge virtual machine we're building. The difference is that whereas Nexus achieves verifiability of some degree through multi-party computation and state machine replication, Nexus Zero achieves it simply through zero-knowledge proofs, proving whatever program you upload to the network.
B
You
know
kind
of
like
a
couple
of
many
each
other
very
well,
but
I
will
talk
a
little
bit
more
about
that
later
so
well,
this
is
how
Nexus
looks
like
Nexus
is
literally
a
network
of
networks,
all
of
which
belong
to
the
same
kind
of
like
proof
of
stake,
Network
and
essentially
you
as
a
developer.
What
you
do
is
you
deploy
one,
your
application
to
one
of
these
networks?
B
Like
you
know
here
in
the
diagram,
we
have
what
four
seven
seven
networks,
so
your
application
goes
into.
One
of
them
and
you
specify
how
many
nodes
you
want
your
application
to
be
run
on,
and
so
your
application
runs
from
one
of
these
networks,
which
do
you
rent,
similarly
to
AWS
or
Google
Cloud,
we're
literally
attempting
to
achieve
you
know
the
same
experience
as
AWS
Lambda
in
that
regard,
so
you
upload
your
program.
It
runs
on
a
dedicated
computer
network.
B
So essentially our thesis here is: you have a blockchain, let's say Ethereum or an L2, and you need enhanced computational capabilities that are not achievable within your blockchain environment, perhaps because you cannot do floating-point arithmetic, or you cannot exceed the block gas limit, or it's just simply a very constrained computational environment.
B
Then perhaps you can resort to cloud computing: a verifiable cloud computing network that computes your program, or part of your program, off-chain and injects the results of the computation back into your smart contract.
B
Theoretically, the upper bound on the allowed number of malicious parties that collude is around one-third of the nodes. So if you're a blockchain smart contract user who wishes to offload computation to Nexus, what you're implicitly making, to achieve higher computational power, storage, etc., is a trade-off in terms of security, because now you're not running everything on the underlying blockchain's security mechanism; you're outsourcing part of the computation and therefore conceding a little bit on the side of security, and now you're trusting something like the Nexus cloud. But, well, regardless.
B
What we offer is what we call decentralized cloud functions, which are literally just like AWS Lambda, but running on a decentralized network, engaging in proof-of-stake consensus and participating through protocols like multi-party computation. And when we talk about multi-party computation, what we mean is things like threshold signatures and distributed key generation that allow the networks to synchronize together and submit joint transactions to blockchains.
B
So
that's
what
Nexus
does
and
an
example
of
how
an
access
looks
like
is
well.
You
write
your
program,
which
is
you
can
write
them
in
raw
simple,
go
anything
that
compiles
a
web
assembly
here
I'm,
showing
an
example
of
Cloud
function.
So
you
declare
your
function
and
you
know
just
write
the
code
that
your
function
will
do
and
it's
just
like
a
normal
program
that
you
would
traditionally
upload
to
AWS,
but
instead
of
uploading
it
to
AWS,
you're
you're
uploaded
to
Nexus
and
on
Nexus.
You
know
your
program
runs
equally
well.
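The slide with the cloud-function example isn't reproduced in this transcript, so here is a minimal sketch of the kind of function being described. The shape and names are assumptions, not the actual Nexus API; the point is only that it's an ordinary function, free to use things like floating-point math, that the network runs on your behalf.

```rust
// Hypothetical sketch of a Nexus-style cloud function (illustrative
// names only, not the real SDK). The core idea: an ordinary function,
// compiled to WebAssembly, that the network invokes for you.

/// Handler the runtime would invoke each time a subscribed event fires.
/// Floating-point math is fine here, unlike in an on-chain contract.
pub fn handle_event(prices: &[f64]) -> f64 {
    // e.g. compute a moving average that a smart contract cannot
    let sum: f64 = prices.iter().sum();
    sum / prices.len() as f64
}

fn main() {
    let avg = handle_event(&[1.0, 2.0, 3.0, 4.0]);
    println!("average: {avg}"); // 2.5
}
```

Compiled to a Wasm binary, a function like this would be the unit of deployment, mirroring the AWS Lambda workflow the talk describes.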
B
In
addition
to
that,
your
program
has
ACT,
has
read
red
access
to
ethereum
state,
so
that's
literally
how
it
works
now,
I'm
just
going
to
pause
a
little
bit
because
we're
now
going
to
jump
out
from
Nexus
and
in
into
our
newest
project.
B
We
recently
announced
a
few
days
ago
we're
built
we're
still
building
both
in
development
and
we're
actually
close
to
opening
Nexus
to
something
like
a
devnet
but
we're
building
Nexus
0,
which
is,
as
I
mentioned,
before
kind
of
like
the
zero
knowledge
counterpart
of
what
Nexus
is
trying
to
achieve,
and
so,
while
Nexus
achieves
verifiability
through
multi-party
computation.
Nexus
here
achieves
verifiability
through
to
your
knowledge,
proofs
and
well.
The
mechanics
are
very
similar.
Nexus
0
is
like
an
AWS
Lambda
in
which
do
you
essentially
upload
a
program
to
aforementionless
network
of
Proverbs.
B
This
is
also
a
Ross
program
and
we
generate
a
proof
for
the
computation,
and
this
proves
get
submitted
back
to
ethereum
or
any
other
smart
contract
system
that
supports
our
cryptographic,
Primitives,
and
so
this
essentially
enables
a
way
to
Outsource
computation
from
Smart
contracts
to
a
decentralized
network
of
Serial
knowledge
perverse.
B
This
is
kind
of
like
the
architecture
of
the
network,
but
it's
it's
simply
that
you
have
a
sport
contract.
You
say:
I
want.
You
know
to
upload
this
computation
to
Nexus
zero
and
Nexus
0,
just
computes
this.
For
you,
it
generates
a.
C
B
And then it submits it back to Ethereum, where it gets verified. Something important is that, using Nexus Zero, number one, the computation is provably secure, and number two, it inherits Ethereum's security instead of depending on other assumptions, as is the case, as I said, with Nexus. And so for Nexus Zero we're building a zero-knowledge virtual machine.
B
That
is
not
a
CK
evm,
but
it's
just
a
ckvm
designed
for
the
risk,
five
architecture
and
you
know
operating
it-
has
the
Von
Neumann
architecture
that
I
previously
described,
which
means
that,
among
other
things,
that
it
works
essentially
as
a
general
purpose
computer.
So
it
can.
It
is
designed
to
run
traditional
programs
like
just
like
Nexus
but
like
Ross,
simplest,
plus
and
go
and
how
it
works.
Is
we
have
a
universal
CK,
snark
circuit
such
that
folk?
B
You
know
you
we
have
this
little
diagrams
here,
but
the
important
part
is
that
the
prover
receives
an
encoding
of
the
program
as
part
of
the
input
and
so
does
the
verifier
and
so
using
a
single
circuit,
which
is
a
universal
circuit,
because
it
accepts
any
other
program
as
input
we're
able
to
verify
any
general
purpose
computation
and
specifically
those
written
in
Rust,
C,
plus
plus,
and
go.
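As a loose analogy for the universal-circuit idea (one fixed circuit that takes any program as input), here is a tiny fixed interpreter whose input is a program. This is not Nexus's actual circuit or instruction set; it only illustrates why fixing one "universal" evaluator suffices to cover arbitrary programs.

```rust
// Loose analogy, not the actual zk-SNARK circuit: one fixed `eval`
// function runs *every* program, because the program arrives as data.
// A universal circuit plays the same role: prove the fixed evaluator
// once, and arbitrary programs are covered.

#[derive(Clone, Copy)]
enum Op {
    Push(i64),
    Add,
    Mul,
}

/// The single, fixed evaluator: a toy stack machine.
fn eval(program: &[Op]) -> i64 {
    let mut stack = Vec::new();
    for op in program {
        match *op {
            Op::Push(n) => stack.push(n),
            Op::Add => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // (2 + 3) * 4 = 20, expressed as data fed to the fixed evaluator
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    println!("{}", eval(&program)); // 20
}
```

In the zkVM setting the evaluator is a RISC-V step function and the "program as data" is the encoding both prover and verifier receive, as described above.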
B
Using traditional languages, you compile your program into our representation for Nexus Zero and upload it to Nexus Zero, which, as I mentioned, is a permissionless prover network. Nodes on Nexus Zero simply compute the proof for your application, which then gets submitted back to Ethereum, and that way you achieve verifiable computing that was outsourced from Ethereum, from your smart contract. So it's the same thing as Nexus, except that it's provably verifiable.
B
So, just finishing our talk: we're very well capitalized, we're venture-backed by great partners, and we are accepting a small number of early partners to try out our technology.
B
You
know,
especially
smart
contract
developers
that
wish
to
you
know,
achieve
crazy,
computational
stuff,
just
shoot
us
an
email
and
follow
us
on
Twitter.
We
recently
made
an
announcement
over
there
too
or
Hannah
was
down
there
and
or
we're
very
actively
hiring
we're
hiring
mostly
senior
scientists,
cryptographers
and
well.
You
know
anyone
that
is
interested
in
serial
knowledge
proves.
You
know,
feel
free
to
contact
me
or
the
team
over
there
and
that's
so
yeah.
That's
all!
Thank
you
very
much.
Thanks.
A
Fantastic, thank you Daniel. This is tremendous. I love the vision, I love the investment in the core underlying cryptography that you guys are making, which benefits the entire community. Thank you.
A
Else,
an
opportunity
to
add
questions
in,
but
I'd
love
to
just
hear.
One
quick
question
for
you
is
your
opinion
about
the
use
cases
that
might
be
underserved
with
the
current
infrastructure,
which
is
predominantly
smart
contracts
and
then
AWS,
Google
Cloud.
Are
there
any
use
cases
or
problems
to
be
solved
that
you
think
could
both
benefit
from
the
the
tech
you
guys
are
building?
Yes,.
B
Yes, quite a lot. Number one is, for example, accessing historical blockchain state. If you have a smart contract and you want to access the state of your blockchain from, say, a year ago, you can't do that. So today the only way to do it is to set up an AWS web server. By deploying it on Nexus instead, you do everything you would otherwise do, with the difference that it isn't centralized.
B
The second one is interoperability. If you connect your smart contract to one of our cloud networks, you can specify in your cloud network application that it should execute a transaction on Ethereum every time an event happens on some other smart contract, or on some other Ethereum-like network. That achieves quite a good amount of what you would call interoperability, or state synchronization between different applications, so I'm particularly excited about that.
B
And
lastly,
well
just
the
fact
that
there's
a
lot
like
you
know,
anyone
who
has
coded
a
smart
contract
knows
that
you
cannot
do
floating
Point,
arithmetic,
happily
and
easily
right,
and
so,
if
you
are
trying
to
do
stuff
like
Matrix
multiplications,
solving
equations
or
some,
you
know
whatever
thing
that
uses
floating
Point
arithmetic.
B
Well,
if
you
need
a
lot
of
power,
you
can
just
run
it
on
Nexus
receive
the
results
back
on
your
smart
contract,
and
that
would
be
it.
You
know.
I
I
have
some
ideas
of
which
applications
could
use
floating
Point
arithmetic,
but
it
did
like.
The
examples
are
just
so
wide
I
think
it's
clear
that
you
know
there's
a
lot
of
you
can
do
with
floating
points.
So
I'm
particularly
excited
about
that
too.
B
Brilliant. Yeah, happy to take any other questions, guys.
D
Actually, a wonderful presentation. Just one thing I'm curious about as you're going through the slides: I wonder, in a serverless structure like yours, how do you orchestrate the incoming tasks?
B
Oh
yeah,
that's
that's
an
excellent
question
and,
in
fact,
I
think
that's
exactly
where
we're
innovating,
as
opposed
to
you
know
the
academics
and
all
that
kind
of
stuff,
because
orchestration
is
I
believe
something
that
hasn't
been
explored
on
a
decentralized
network
of
networks
and
so
how
it
works
is
there
are
two
ways
so
number
one:
if
any
of
you
are
familiar
with
the
the
internet,
computer,
for
example,
the
way
they
achieve
orchestration
between
networks
is
by
setting
up
another
Network,
like
literally
another
Network.
B
However, that has a downside: if the orchestrator is compromised, then the whole network is compromised. So we're not taking that approach. In our approach, the orchestrator is actually the whole Nexus network. The whole Nexus network, meaning the whole network of networks, operates as the orchestrator and runs the matchmaking mechanism, which is just like market making: it receives bids and requests, ranks them, and distributes jobs.
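A toy sketch of the matchmaking step described above, assuming nothing about the real protocol: bids carry a maximum price a job will pay, offers carry a node's price, and jobs are assigned greedily to the cheapest fitting offer. All struct and field names here are illustrative.

```rust
// Toy sketch of the matchmaking idea described above (not Nexus's
// real protocol): rank resource offers by price, then assign jobs
// greedily to the cheapest remaining offer within each job's budget.

#[derive(Debug)]
struct Offer {
    node: &'static str,
    price_per_unit: u64,
}

/// Match jobs (name, max price) to offers; returns (job, node) pairs.
fn matchmake(
    mut offers: Vec<Offer>,
    jobs: &[(&'static str, u64)],
) -> Vec<(&'static str, &'static str)> {
    offers.sort_by_key(|o| o.price_per_unit); // cheapest first
    let mut assignments = Vec::new();
    let mut next = 0;
    for &(job, max_price) in jobs {
        if next < offers.len() && offers[next].price_per_unit <= max_price {
            assignments.push((job, offers[next].node));
            next += 1;
        }
    }
    assignments
}

fn main() {
    let offers = vec![
        Offer { node: "node-a", price_per_unit: 5 },
        Offer { node: "node-b", price_per_unit: 3 },
    ];
    let jobs = [("job-1", 4), ("job-2", 10)];
    // job-1 can only afford the cheaper node-b; job-2 takes node-a
    println!("{:?}", matchmake(offers, &jobs));
}
```

The point of the sketch is only the ranking-and-distribution shape; in the design described, this logic would itself run under the whole network's consensus rather than on a single coordinator.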
B
But
you
know
everybody
is
engaged
in
the
computation
of
that
orchestration
mechanism,
which
means
that
well,
the
network
doesn't
have
this
security
vulnerability,
as
this
already
designed
it
number
one
and
number
two
how
it
operates
is
every
node,
then,
is
essentially
running.
Two
different
tasks
is
running
tasks
for
consensus
on
the
whole
Nexus
Network,
which
includes
this
orchestration
mechanism
Etc,
and
it's
running
its
particular
task
for
its
own
application
that
the
user
uploaded.
B
So
it's
kind
of
like
running
two
processes
at
the
same
time,
consensus
for
its
own
application,
specific
Network
and
consensus
for
the
whole
Nexus
Network,
and
that's
it
that's
how
it
happens.
D
I
see
so
so
like
which
route
are
you
picking
for
a
Nexus?
No.
B
Yeah,
the
first
one
it's
more
efficient
by
far,
but
it
does
not
achieve
great
security
because
of
what
I
just
described
in
our
case.
We're
more
conservative,
we're
more
worried
about
security
than
probably
everything
and.
D
I
see
and
also
I
see,
as
you
mentioned,
you're
leveraging,
wassum,
so
I
guess
like
besides
the
you
know,
being
portable
and
language
agnostic
for
developers,
it's
you're
also
looking
to
have
a
greater
speed
right
so
boost
up
rate
fast
right,
but
in
a
decentralized
setting.
You
know
whenever
a
worker
is
called
upon
right.
It
boosts
up
this
runtime
or
VM,
and
now
is
retrieving
tasks
right.
It's
downloading
the
files,
so
so
in
this
process
this
becomes
kind
of
uncontrollable
right.
D
You
don't
really
know
you
know
how
long
this
part
will
take
so
like
how
do
you?
How
do
you,
you
know,
reconcile
the
fact
that
okay,
we
want
this
to
be
awesome.
We
want
this
to
be
really
fast,
but
also
in
a
decentralized
setting.
You
know
we
don't
know,
there's
no
like
hot
functions
or
code
functions
per
se
so
like.
How
do
you
deal
with
this.
B
So
for
webassembly
you
can
Define
arbitrary
host
functions
that
get
injected
into
the
runtime
and
how
it
operates
is
so
Nexus
has
this
Nexus
virtual
machine,
which
is
the
one
that
you
know
the
webassembly
runs
I
mean
which
application
is
run
on
and
the
Nexus
virtual
machine
injects
host
functions
into
the
webassembly
runtime,
which
user
applications
can
just
call
on,
and
so,
when
that
call
happens
essentially
Bob
the
runtime
gives
gives
the
computation
back
to
the
node
and
the
node
can
just
do
anything
it
wants.
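A minimal sketch of the host-function pattern being described, with no real Wasm engine involved (names like `Runtime` and `eth_read` are illustrative, not the actual API): the node injects a table of host functions, and a guest call hands control back to the node's implementation, which is where limits and policy can be enforced.

```rust
// Illustrative sketch of host-function injection (no real Wasm engine;
// all names are hypothetical). The runtime owns a table of functions
// the node is willing to provide; guest calls dispatch through it.

use std::collections::HashMap;

type HostFn = fn(&str) -> String;

struct Runtime {
    host_fns: HashMap<&'static str, HostFn>,
}

impl Runtime {
    fn new() -> Self {
        Runtime { host_fns: HashMap::new() }
    }

    /// The node injects the capabilities it chooses to expose.
    fn inject(&mut self, name: &'static str, f: HostFn) {
        self.host_fns.insert(name, f);
    }

    /// A guest call hands control back to the node's implementation;
    /// unknown names trap instead of executing arbitrary effects.
    fn call(&self, name: &str, arg: &str) -> String {
        match self.host_fns.get(name) {
            Some(f) => f(arg),
            None => format!("trap: unknown host function '{name}'"),
        }
    }
}

fn main() {
    let mut rt = Runtime::new();
    // e.g. a stubbed Ethereum state read the node chooses to expose
    rt.inject("eth_read", |key| format!("state[{key}]"));
    println!("{}", rt.call("eth_read", "balance"));
    println!("{}", rt.call("http_get", "example"));
}
```

A real engine such as Wasmtime exposes the same shape through its linker API; the sketch only shows why the node, not the guest, decides what each host call actually does.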
B
So,
for
instance,
let's
say
that
your
Nexus
application
says
go
and
read
the
state
on
this
on
on
ethereum
or
maybe
Iran
go,
and
you
know
fetch
this
data
from
the
internet
in
the
case
of
the
internet,
for
instance,
you
can
just
simply
have
like
a
timeout.
If
the
request
is
not
resolved
within
this
time,
then
we
blog
it.
B
Well, when you query the Ethereum blockchain, you also have to have some sort of timeout, unless you're running Ethereum natively or a light client. So all these kinds of questions, and especially, I think, this is what you're trying to get to:
B
How
do
you
keep
your
applications,
deterministic
and
gas
metered
if
you're
running
applications
on
a
webassembly
environment
that
is
calling
into
non-deterministic
non-guess
metered
host
functions
and
the
way
is
simply
the
node
handles
that
and
it's
part
of
the
consensus
protocol
to,
for
example,
limit
the
time
this
host
functions
might
take
on
the
Node,
and
you
know
how
much?
How
many
calls
should
you
make
Etc?
B
So
it
is
made
deterministic
by
the
note,
and
so,
if
you
have
deterministic
gas
metered
host
functions
combined
with
a
deterministic
and
gas
metered
web
assembly
run
sign
environment,
then
everything
is
deterministic
and
if
everything
is
deterministic,
you
can
run
you
can
reach
consensus
and
if
you
can
reach
consensus,
you're
happy,
because
if
you
can't
reach
consensus,
then
well
you're
destroyed
so
yeah.
You
know,
maybe
that
kind
of
answered
your
question.
But
let
me
know
if
that's
not
the
case.
D
I
see
so
I
guess,
like
one
thing,
I'm
trying
to
get
is
I
say
if
I'm
running
a
note
for
Nexus
right
am
I
required
to
basically
download
all
the
possible
functions
that
will
be
called.
B
Oh,
the
boss,
oh
I,
see
I,
see
I,
see
no,
no,
no
okay,
so
yeah
exactly
yeah.
So
so
that's
part
of
the
yeah
I
think
I've
got
a
little
bit
confused
about
what
you
meant
by
functions.
Yeah,
sorry,
applications
right,
user
applications.
Yes,
yes,
so
so
no,
you
don't
need
to
download
that
you
don't
even
need
to
download
the
state
of
any
other
application
except
the
one
that
it
has
been
assigned
to
you
and,
if,
like
even
when,
you
are
unassigned
to
that
application
and
reassigned
to
some
other.
B
You
can
completely
delete
the
state
and
storage
of
the
original
application
that
you
were
running
and
run
the
new
one.
So
every
every
network
is
just
application
specific.
It
does
not
know
anything
about
the
other
applications
except
well.
You
know
perhaps
like
the
hash
of
the
function
or
something
like
that
and
that's
it.
So
you
don't
it's.
It's
literally
like
renting
AWS
servers
to
some
language.
D
So if I want to deploy something, I would say: hey Nexus, I want 10 machines and I want them for, say, the next 24 hours.
B
Priced by time? No, it's priced by execution.
B
But it is gas-metered. So you have, like, a tank with a balance, some amount of money there, and every time your application executes, which is in response to events like Ethereum events or internet events, it consumes a little bit of your gas tank on the way to hitting zero. So all you have to do is ensure that your gas balance is always positive, so your application can keep executing.
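The gas-tank model described here can be sketched in a few lines (illustrative only, not the real accounting): a prepaid balance that each execution draws down, with the application running only while the balance covers the next execution.

```rust
// Toy sketch of the per-execution gas-tank model described above.
// Each event-triggered execution draws a cost from a prepaid balance;
// once the balance can no longer cover a run, execution stops.

struct GasTank {
    balance: u64,
}

impl GasTank {
    /// Try to pay for one execution; returns false once funds run out.
    fn consume(&mut self, cost: u64) -> bool {
        if self.balance >= cost {
            self.balance -= cost;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut tank = GasTank { balance: 10 };
    let mut runs = 0;
    // Suppose each (hypothetical) event costs 3 gas to handle.
    while tank.consume(3) {
        runs += 1;
    }
    // 10 -> 7 -> 4 -> 1: three executions, 1 gas stranded
    println!("{runs} executions, {} gas left", tank.balance);
}
```

Keeping the balance positive, as the speaker says, is the developer's only obligation; topping it up extends the application's lifetime by executions rather than by wall-clock time.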
A
Yeah, of course. Fantastic. Well, clearly, Daniel, I think you've hit on a nerve; folks are very interested in your technology. So I don't want to stifle the questions, but I will, if you don't mind, route folks to our Slack channel to ask you further follow-up questions. Absolutely, thank you. You guys are in the middle of a lot of things: ZK, unstoppable applications, things that a lot of people care about. So thank you.
C
All right. So, you know, the strategy for decentralized science. And yeah, if we can skip to the next slide. Okay, I just want to quickly tell you about our founding team. Ying is online, actually, so they might be able to help you guys with any questions. I'm the CEO. Philip; and David, at the end, spent 20 years at Arm Holdings in Cambridge as general counsel.
C
So
he's
it's
been
a
big
inspiration,
particularly
as
I'll
mention
we're
interested
in
in
open
source
and
the
economic
model
around
that
so
he's
you
know
he
did
a
lot
of
stuff
with
open
source
while
I'm
holding.
So
that's
just
what
I
wanted
to
flag
that
one
up?
Okay,
we
can
go
to
the
next.
C
So
what
we're
interested
in
more
specifically
is
this
notion
of
open
and
collaborative,
which
is
something
that
we
came
up
with.
I
mean
there
may
be
a
an
equivalent
notion
out
there.
It's
a
pretty
Sim,
simple
thing:
it's
defined
with
these
three
points
that
anyone
can
submit
a
contribution
and
we're
thinking
about
something
like
a
software
project.
Here,
of
course,
like
a
collaborative
software
project,
full
disclosure
of
methods
and
results,
resulting
knowledge
is
a
public
goods,
and
you
know
you,
if
you
are
familiar
with
open
source.
C
Also,
interestingly,
science,
or
at
least
academic
science
meets
these
criteria
too,
or
at
least
it
should
people
are
supposed
to
disclose
their
results
and
scientific
knowledge
which
is
produced
is,
is
a
public
good
assuming
it's
not
patented
which,
which
you
know
things
like
early
stage
applied
and
and
fundamental
science,
which
is
typically
government
funded,
fits
fits
that
bill
and
the
interesting
thing
about
this.
C
You
probably
won't
find
it
too
controversial,
but
you
dig
into
it
if
you
like,
it's
the
open
and
collaborative
projects
when
you
have
a
compact
project,
if
it's
open
and
collaborative
by
this
definition
usually
has
a
lot
of
benefits,
I
mean.
One
reason
is
that
you
know
people
can
sort
of
check
each
other's
work
provide
feedback
peer
review.
If
somebody,
you
know
stops
working
on
it
for
someone
for
seeing
reason,
then
you
know
other
people
will
carry
on
because
they
know
that
they'll
get
the
benefit
of
the
of
the
project.
C
So
this
is
really
just
the
reason
why
open
source
Works-
and
you
know
it's
kind
of
a
very
useful
thing.
You
can
crowdsource
expertise
and
people
can,
you
know
spot
interesting,
you
know
kind
of
spot
problems
earlier
and
it
tends
to
just
result
in
a
better
quality,
higher
probability
quality
product.
So
but
this
is
all
predicated
on
there
being
some
Vision
participation
in
the
in
the
project
and,
as
you
probably
know
you
know,
there
were
very
many
famous
open
source
projects.
C
For
example,
you
don't
really
hear
about
the
ones
that
aren't
famous
you
know
they
don't
get
very
much
participation,
but
things
like
Linux.
You
know
when
it
works
well
and
you
get
lots
of
people
chipping
in
it
can
be
extremely
successful
and
it
just
ends
up
in
a
with
a
better
product
produced
more
quickly
and
at
lower
cost.
So
it
has
all
these
these
fantastic
benefits.
You
know
it
it.
C
So
if
you
can
do
a
complex
project
to
make
it
open
and
collaborative
that's
great,
but
you
know
the
interesting
things
are
that
it's
a
sort
of
economic
in
some
circumstances
and
and
not
so
economic
and
others.
So
we
can
skip
to
the
next
one.
C
So,
as
I
said
we're
kind
of
inspired
by
open
source
software,
you
want
to
open
source
software,
so
you
can
see
we
can.
We
can
skip
onto
the
next
one
and
and-
and
in
this
case
you
know
that
they
produce
this
code,
which,
which
is
a
which
ends
up
being
a
public
good
and
people
chip
in
and
some
people
work
part-time
for,
free
and
so
on.
It's
a
it's
all
together
a
fantastic
thing,
but
there
are
some
criteria
for
open
source
projects
which
work
better
than
others.
C
As
I
said,
you
don't
really
hear
about
the
ones
that
don't
work
so
well.
You
can
open
source
anything
of
course,
but
it
doesn't
mean
to
say
it's
gonna
be
a
successful
project,
so
the
successful
ones
tend
to
be
that
the
outcome
such
that
the
outcomes
are
uncertain
but
not
too
uncertain.
I'll,
come
back
that
one
in
a
second
that
code
base
is
large.
C
The
reason
why
that's
relevant
is
because
open
source
licenses
lean
on
copyright
quite
hard,
and
if
you
can
just
the
the
code
base
is
quite
short,
you
could
re-implement
it
in
circumvent
copyright,
and
then
you
have
a
problem
with
a
free
Free.
Rider
number
three
code
base
is
highly
modular.
This
means
that
lots
of
people
can
chip
in
in
the
in
their
part-time,
and
it
means
you
know
perhaps
100
part-time
people
would
be
more
or
less
equivalent
to
10
full-time
people.
C
So
it
sort
of
scales
in
that
interesting
way,
and
it
means
that
even
though
open
source
the
model
doesn't
typically
offer
payment
to
contributors
because
people
can
afford
to
you
know,
participate
in
their
spare
time
and
they're
hoping
to
get
a
better
piece
of
software
out
of
it
than
the
projects
can
gather
momentum.
For
that
reason,
so
the
sort
of
interesting
structure
you
know
the
architecture
of
you
know
certain
projects
work
better
for
open
source
and
others
going
back
to
number
one
outcomes
are
uncertain.
C
That
really
just
means
that
the
every
complex
project,
the
outcomes
are,
are
uncertain.
So
there's
you
know
and
making
it
open
and
collaborative
means
that
you
know
for
uncertain
projects.
You
tend
to
get
a
better
outcome.
You
know
sort
of
more
more
efficient,
but
if
it's
too
uncertain
you
know,
we'd
be
thinking
about
some
kind
of
open
source
project
to
I
know
stabilize
a
plasma
for
a
fusion
reactor,
or
you
know
some
kind
of
piece
of
fundamental
research.
C
The
the
incentives
to
participate
are
generally
reduce
because
you
don't
really
know
that
you're
going
to
get
a
usable
sort
of
useful
piece
of
software
out
of
it,
and
one
of
the
reasons
why
people
contribute
to
open
source
software
projects
is
because
they
want
to
have
a
better
piece
of
software
that
they
can.
They
can
actually
use.
So
if
it's
some
kind
of
experimental
type
of
software
using
some
extremely
sort
of
esoteric
stuff
which
may
not
pan
out
you
know
in
the
next
20
years
and
the
then
the
incentive
is
reduced.
C
So
this
was
it's
not
just
so
me
saying
this.
You
might
have
heard
of
Eric
Raymond's,
one
of
the
founding
fathers
of
Open
Source,
a
real
kind
of
big
big
advocate
of
it,
and
he
wrote
a
famous
series
of
essays
and
probably
the
most
well
known
as
the
Cathedral
on
the
bazaar,
but
he
wrote
another
one
more
in
the
economics
of
Open
Source
called
the
magic
cauldron
and
you
know
we'd
have
to
read
through
all
of
this,
but
he
sort
of
identified
cases
when
open
source,
you
know,
doesn't
work
so
well.
C
But,
interestingly,
when
you
look
at
that
in
an
example
like
that
it,
it
doesn't
fulfill
those
three
criteria,
they
just
listed
I
mean
the
code
will
be
relatively
short
so
that
if
it
was
open
source
copyright
protection
which
open
source
licenses
lean
on
really
wouldn't
have
any
effect
at
all.
You
could
just
re-implement
it.
It
was
all
about
the
idea.
C
You
know
these
projects
tend
to
be
sort
of
kind
of
less
modular
things
in
computational
science,
as
as
they're
going
to
explain.
So
really
he
was
sort
of
musing
on
you
know.
The
open
source
is
great.
It
works
for
a
lot
of
stuff,
but
it
doesn't
work
for
everything.
And
what
could
you
do
to
sort
of
tweak
the
model
hypothetically
to
get
it
to
work
for
more
stuff?
And
he
came
to
the
conclusion
that
you
know
one
way
to
do.
C
It
would
be
to
pay
people
for
their
contributions
to
a
software
project,
but
that
in
fact,
is
very
hard.
I
mean
one
thing:
is
you
know?
How
do
you
pay
people
sort
of
international
across
borders?
You've
got
to
pay
some
Anonymous
guy,
you
know
which
maybe
we
could
do
now.
This
is
written
in
1999.
They
didn't
have
cryptocurrency
there.
C
So
maybe
you
could
do
that
now,
but
then,
even
if
you
do
that,
you
don't
really
know
how
to
pet
how
much
to
pay
people
for
different
contributions
like
who
prices
them,
and
so
he
said,
you'd
have
to
have
kind
of
coined
this
phrase
of
a
super
being
you
know
trusted
super
being.
Who
would
he
would
just
decide?
You
know
kind
of
unilaterally
who
who
who
deserves
how
much
payment
for
different
contributions,
and
you
know
that
those
kind
of
super
beings
are
sort
of
a
bit
thin
on
the
ground.
C
So
it's
a
bit
difficult
to
see
how
you'd
do
this,
but
it's
interesting
to
use
those
words.
I
mean
the
first
paragraph
there
in
strongly
a
cryptocurrency.
You
know
a
kind
of
a
payment
system
and
the
second
one
is,
you
know:
do
you
have
a
trusted
super
big?
Well,
actually,
Bitcoin
was
a
bit
like
a
trusted
super
being
in
the
sense
that
it
it
it's
substituted
for
a
trusted
intermediary
which
you
otherwise
require.
C
For
sort
of
you
know
transfers
of
of
of
electronic
cash,
so
it's
sort
of
hinting
at
something
there
and
we
get.
We
can
actually
skip
to
the
next
slide.
C
Yeah
I
was
just
saying
Bitcoin,
yes,
we
can
skip
to
the
next
one.
I
just
wanted
to
say
that
so
and
okay,
so
Bitcoin
famously
uses
proof
of
work.
This
is
the
original
proof
of
work
paper
written
in
1995,
so
again,
work
well
before
give
Bitcoin,
and
what
proof
of
work
is
one
way
to
construe
it
and
they
can
shoot
it.
This
way
in
the
paper
is
a
pricing
function.
It
says
actually
at
the
bottom
line
there
in
the
in
the
abstract.
C
So
proof
of
work
basically
is
a
way
of
determining
how
much
you
should
pay
somebody
for
a
thing,
and
so
we
have
a
pricing
function.
We
have
a
sort
of
you
know
something
which
takes
the
place
of
a
of
a
centralized
third-party
and
we
have
a
need
to
price
things.
Of
course.
So
you
know
these
are
the
kind
of
pieces
of
the
puzzle
that
we
fit
together
so
yeah
skip
to
the
next
next
slide,
so
computational
science.
So
what
the
Innovation
game
is
really
about?
Computational
science,
primarily.
C
So
this
is
one
of
the
categories
of
thing
which
open
source
works
less
well
for
and
by
computational
science
we
mean
computational
chemistry,
visiting
biology.
We
don't
mean
computer
science,
which
is
really
a
branch
of
mathematics,
so
this
will
be
computational
where
natural
science
that
we're
focusing
on
here,
and
so
it's
simulations
and
inversions.
C
Primarily
you
mostly
hear
about
simulations,
and
one
of
the
reasons
is
because
inversions
are
quite
hard
so
that
you
know
people
talk
less
about
things
that
they're
less
good
at,
because
they
can't
see
that
you
know
it's
not
so
impressive
when
you
put
it
on
YouTube,
but
inversions
are
sort
of
really
what
science
is
all
about.
Anything
like
Medical
Imaging
is
an
inversion
you're
taking
a
state
at
a
later
time,
sort
of
scatter
of
some
em
radiation,
trying
to
infer
what
happened
at
an
earlier
time,
and
scientists
generally
want
to
do
inversions.
C
They want to take some sort of measured output and infer from that what the initial conditions were which led to that measured output. So simulation versus inversion is the difference between forwards and backwards in time, and because it's all essentially based on physics, this being a natural science as I said, the second law of thermodynamics applies. And that means that inversions are hard to solve but easy to verify, which is a necessary property of proof of work.
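That asymmetry can be shown with a toy model, assuming nothing about the real problem classes used: the "simulation" below is a cheap forward iteration, verifying a proposed initial condition is one forward run, and inversion (recovering the initial condition) falls back to search. All names here are invented for illustration.

```python
def simulate(seed: int, steps: int = 50, p: int = 2**31 - 1) -> int:
    """Forward 'simulation': cheap to run forwards in time."""
    x = seed
    for _ in range(steps):
        x = (x * x + 1) % p
    return x

def verify_inversion(candidate_seed: int, observed: int) -> bool:
    """Checking a proposed initial condition costs one forward run."""
    return simulate(candidate_seed) == observed

def invert_by_search(observed: int, search_space: int):
    """'Inversion': recover an initial condition consistent with the
    observed output. With no better algorithm, this is brute force."""
    for seed in range(search_space):
        if simulate(seed) == observed:
            return seed
    return None
```

The gap between one forward run (verification) and many forward runs (inversion) is what a more efficient inversion algorithm would close, which is precisely what the game rewards.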
C
Okay, so we can skip to the next one. And, you know, this was written on February 3rd, 2023, so actually it seems quite topical. If you're interested in why it is that inversions are hard to perform but easy to verify, you could always read Stephen Wolfram's "Computational Foundations for the Second Law of Thermodynamics". If you're having trouble sleeping, I'd recommend that one. Yeah, skip to the next slide.
C
So this is just to confirm that. So why didn't people use things like inversions, for example, in proof of work before? And the reason is that it was believed not to be possible. This is Poelstra's paper, I think he's at Blockstream, and he talked about the reason why cryptocurrencies are secured by, quote unquote, "optimization-free"
C
proof of work. The argument is that if somebody finds an optimization for the proof-of-work algorithm, he says, there's a strong motivation to keep it to yourself. That's not, I would say, the greatest reason, because if you discover a better ASIC, for example, there's also a motivation to keep it to yourself. What the economics are with ASICs and things like that is that it ends up making more sense to mass-produce them and sell them.
C
There is an issue with the fact that if you find a software optimization, it's a little bit harder to sell than just a piece of hardware. But anyway, it was previously believed that you couldn't have optimizable proof of work, where you could find a more efficient algorithm. You know, famously, proof of work has only been optimized by finding more efficient pieces of hardware to run it on, until now.
C
Okay, so skip on to the next one. Without going into all the technical details, the way that we solve this problem, there's a sort of analogy. We actually have lots of proofs of work for lots of different types of inversions, in diverse areas of computational science. So imagine that one of these proofs of work was just like a guy in a rowing boat, and he found an optimization.
C
He suddenly became a hundred times or a thousand times stronger than everybody else. If you think about it, it wouldn't really... I mean, it would make the boat faster, right, but only to a limited degree. He wouldn't just blitz all the other boats completely, because he has to row in unison with the other rowers, right?
C
But if you confine the optimization to one or two proofs of work among many, the degree to which the discoverer can benefit is constrained, and therefore the economics work out such that it makes more sense for them to enthusiastically share their optimization and get money from other people, rather than try to keep it to themselves, because their benefit is limited. Okay, we'll skip on to the next thing. So this is the flow in The Innovation Game.
C
We start at number one: a benchmarker, which is what we call a miner, really. Benchmarkers are similar to miners in Bitcoin: they pick their hardware, as Bitcoin miners do, but they can also pick the algorithm with which they get to solve a proof of work, which is a random instance of a scientific inversion.
C
And if we go to number two: they get rewards from doing the benchmarking, which is just the same as mining, and each time they do some mining, they get to solve one of these problems.
C
They're going to get a token reward themselves, but part of that token reward is also going to go to the so-called innovator, who invented, who uploaded, that algorithm. Obviously they're incentivized to choose the best algorithms, and in fact the algorithms which the benchmarkers select are the measure, in our system, which defines which of the algorithms are the best at any given time.
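The reward flow just described can be sketched in a few lines. The numbers and names here are invented for illustration, not The Innovation Game's actual split or API: each solved instance pays the benchmarker and routes a cut to the selected algorithm's innovator, and benchmarkers rationally pick whichever uploaded algorithm benchmarks fastest, which is what turns selection into a market measure of algorithm quality.

```python
def settle_reward(block_reward: float, innovator_share: float = 0.15) -> dict:
    """Split one solved instance's token reward between the benchmarker
    who solved it and the innovator who uploaded the algorithm used.
    The 15% share is purely illustrative."""
    innovator_cut = block_reward * innovator_share
    return {"benchmarker": block_reward - innovator_cut,
            "innovator": innovator_cut}

def pick_algorithm(solve_rates: dict) -> str:
    """A benchmarker's selection rule: choose the uploaded algorithm with
    the highest solve rate (solves per second), maximizing expected reward."""
    return max(solve_rates, key=solve_rates.get)
```

Because every benchmarker runs `pick_algorithm` independently, the share of benchmarkers on each algorithm doubles as a decentralized ranking of which algorithms are currently best.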
C
You know, which mine most efficiently. And so there's a market mechanism selecting which innovators and which algorithms are the best. And this is kind of interesting, especially if you imagine that some of these inversions could be quite esoteric things, early-stage research: how do you keep a plasma in a toroidal fusion reactor stable, or something like that? There really is no market for these things.
C
You know, maybe not for another 20 years or 50 years. And this is one of the issues with science: because there's no market allocation, it's so uncertain, you get the sorts of difficulties in science, the Matthew effect and so on, rather than having an objective market measure.
C
But in this case we can select problems which are known to be important but maybe at a very early stage of research, and they become valuable right now, at least to somebody: the benchmarkers. And what the benchmarkers are doing, even though they're just solving random instances, is providing an incentive to innovate and move that problem forward with more efficient algorithms.
C
So, going to number five: these algorithms will embody a certain amount of IP, and the IP is captured by The Innovation Game and is available to be licensed by commercial enterprise. You know, they're not all early-stage, fundamental problems; they can be late-stage problems with current commercial value as well. And number six:
C
The commercial enterprise will pay a subscription fee in return for this, and the subscription fee is means-tested, so you're not excluded on the basis of ability to pay. The idea is that if a commercial enterprise isn't making any money, it doesn't have to pay anything in order to use all these things. That's a bit like a tax: governments tax profits.
C
So you don't get people saying, oh, we can't pay our tax, because, by definition, if you weren't earning any profit, you wouldn't have to pay any tax; so you can always afford it. And it's the same here. So here on the right, we're just saying the problems are curated: we only put in problems which are known to be scientifically important.
C
That's absolutely true, but there are a lot of problems which are known to be very important, and people just work on them, trying to find more efficient solutions for them all the time, and those are the types of problems that we want to put in The Innovation Game in particular.
C
There's no option to pay to have your problem put in. These are going to be the world's problems, you know, rather than some individual guy's problem. So we don't have that option; it's a curated set.
C
The problem instances are random, and I just want to emphasize this: this is not so-called useful proof of work. Useful proof of work, by the strict definition, means the computations are solving real-life problems, such that the output itself is valuable and interesting. In this case that's not so: they're random instances, so it's unlikely
C
that they correspond to any real-world thing. The whole thing is just an economic framework, the objective of which is to incentivize the optimization of routines for solving these problems more efficiently. Source code is mostly published; everything is out in the open. It's very much like open source.
C
In that sense, you know, there's this possibility of a free-rider problem, because people could just take it. But for very interesting reasons, open source tends to work quite well, especially for the successful projects, which we can go into. There's a build-up of social norms, for example, because people are on board with the creation of a public good. So it leans on that, and we know that that can be successful. License fees create token demand, so license fees are payable
C
in the token, and the token means that it can be decentralized. It sort of makes sense. I wouldn't say that blockchain or decentralization is the raison d'etre of this whole scheme; it's not decentralization for the sake of it.
C
I would say that it's decentralization because it would be inappropriate for it to be centralized. Like, if this spins up and becomes big and very successful, you don't want one company having any control over the world's science IP, or at least its computational science IP. And, you know, whose currency would you pay it in, anyway?
C
Would it be like the petrodollar, where everything is paid for in one nation's currency? That would be kind of inappropriate as well. So it's a neutral method of payment, and it's a structure which doesn't belong to anybody. And you incentivize three types of optimization. So I talked a little about algorithm optimization, but code optimizations can also make a huge difference and be very valuable.
C
Those are also incentivized, also remunerated. And hardware optimizations are incentivized by the fact that there's a large market, created by the benchmarkers, for hardware on which they can benchmark more efficiently to earn more tokens. Yeah, skip to the next one.
C
How does this accelerate science? We accelerate science in slightly different ways, depending on what type of science you're talking about. This is all computational science, which pertains to other science too. In fact, it's not like biology is a subset of science in that sense; computational science is part of all science these days, chemistry, physics and biology. It feeds into better theory.
C
It feeds into better experiment, and experiment and theory feed into better computational science, so there's a sort of triad between them. But more specifically, in terms of The Innovation Game: commercial research can be privately funded but performed on an openly collaborative basis. By "commercial research" here, and that might not be the best term, I didn't mean research conducted in a commercial context, like at a Pfizer or something; I mean commercially viable research. So in academic research at universities
C
these days, it's quite common that researchers are paid by some kind of outside party, and the deal is: we'll give you money for your research, but if you produce some IP, we own that IP, right? So that's a proprietary sort of deal for them, and academic researchers do kind of take that deal, because they get to do the research which they love. But in this case the deal could be: I, as a private investor, will give you some money.
C
You do the research, and if there's good IP that comes out of it and you earn tokens in The Innovation Game, I get to own the tokens. That is the deal, and the IP goes into The Innovation Game and becomes a public good. So it's a different model on which commercially viable research can be performed, one which is open and collaborative rather than proprietary, and that leads to efficiency, better research, and that sort of thing. And early-stage, fundamental research can achieve higher participation.
C
So, as I said before, open and collaborative is the best thing, but it only works if you have decent levels of participation. And a lot of fundamental, early-stage research, even though it can be open source, because it's funded by the government and the government often encourages open-sourcing, doesn't get many participants, and that's because the incentive isn't high enough. And the third one, accelerating the generation of non-proprietary scientific knowledge:
C
that just happens because of the first two, so you accelerate the topping-up of the stock of non-proprietary knowledge. Okay, skip to the next one. That might be the last slide. Yep, okay, this is just contact details, and that's it. How did I do on time? Fantastic.
A
You nailed it, John, and I love this theme. We had some folks from the Berkeley BOINC project on a few months ago; we've had some groups like Petals, which is doing volunteer-based machine learning. But I love the thread of, like, you know: this is a good application of crypto, it's good for science, and then people also can have an economic benefit. You can sort of serve all.
C
It's not hard to know what is good, right? You know, more fundamental science; when science is done, people share it, they make the source code available, and they make it a public good. That is good. But the question is: is that profitable? Can it be done on a sustainable basis? And to me the great dream of crypto would be to make it such that it is the case, because we designed the system that way.
A
C
Yeah, so that is important. One of the reasons why algorithmic improvements were, as I said, "optimization-free", kind of prohibited up until now, is that they can be absolutely gigantic. The funny thing about algorithmic improvements, and you'll know this if you know anything about computational complexity, is that various algorithms have gone from, say, an N-squared complexity to an N log N, which is almost linear. So it depends on what N is: N could be a hundred, but if N is a million, then that means it's something like a million times faster, which is just absurd, right? So if you had a big mining network and somebody found a way to make it a million times more efficient, they would simply bankrupt everybody else immediately, and all the mining would be centralized to them, which would be clearly unacceptable. So this is one of the reasons algorithmic improvements, all types of improvements, are very important
C
you know, over time. But there's just the potential for more clever mathematics, in terms of the strategy of solving things, to make things absurdly more efficient, like a million times more efficient overnight, because someone had a good idea in the shower, sort of thing. So, yeah, they don't really top out, basically.
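Ignoring constant factors, the gap between an O(n^2) and an O(n log n) algorithm can be computed directly. A small sketch (the function name is mine; the "million times" figure in the talk is loose, the constant-free ratio at n = 10^6 is on the order of tens of thousands, which still makes the bankrupting-the-network scenario concrete):

```python
import math

def speedup(n: int) -> float:
    """Ratio of an O(n^2) cost to an O(n log n) cost at input size n,
    ignoring constant factors: n^2 / (n log2 n) = n / log2 n."""
    return (n * n) / (n * math.log2(n))
```

At n = 100 the gap is around 15x; at n = 1,000,000 it is roughly 50,000x, which is why a single algorithmic discovery could instantly dominate a proof-of-work network built on one problem.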
A
I love it, very good. Well, this is a lot of material. I'm going to go ahead and hit pause on this recording for now and get this posted on YouTube. John, thank you so much for presenting today. This is incredible content, and it's been a great fit for the things the community is interested in. Great, thank you, Wes. All right, yeah.