Description
Join NEAR's community:
Website: https://near.org/
Reddit: https://www.reddit.com/r/nearprotocol/
Discord: https://near.chat/
Medium: https://medium.com/nearprotocol
Blog: https://near.org/blog/
Twitter: https://twitter.com/NEARProtocol
GitHub: https://github.com/near https://github.com/nearprotocol
Dev Docs: https://docs.near.org/
Create a wallet: https://wallet.near.org/create
Apps on NEAR: https://awesomenear.com/
Learn to Build On NEAR: https://www.near.university/
Grants & Funding: https://near.org/grants/
#Blockchain #FutureisNEAR #NEAR #nearprotocol
#nft #dao
Okay, before we have the slides, let's talk about artificial intelligence. It's been taking off recently, and if you follow the history of NEAR, back in 2017 it started as an artificial intelligence company. Moreover, the name itself, NEAR, was referring to the advent of AGI: it was the singularity that was supposed to be near.
Eventually, I guess, we did not build the singularity. We built something else, something cool, but I never gave up on the dream of building AGI one day, and as of last December I started a new company, which will be announced in due course, with the sole goal of building artificial general intelligence. More news will come out later this year.
In today's presentation, once we have slides, I want to talk about the state of AI today, how people build artificial intelligence, and how blockchain is a very fitting tool for many things that people do. It is not being used yet for those things, but it can improve them in many ways. So let's wait for the slides.
Well, actually, screw the slides. Okay. So if you wanted to build a large language model today, something like ChatGPT or Stable Diffusion, it builds on top of three pillars. One pillar is compute: you would need something on the order of 1,000 or more of a very specific, very expensive GPU from Nvidia called the A100.
The second pillar is model architectures, or research in general. If you want to push the boundaries, you need the best people in the industry designing the model architectures and pushing research for you, and today research is entirely consolidated in the hands of very few entities. And finally, you will need data, and you will need a lot of data. I only have 15 minutes,
so I will very briefly talk about two of them, and a little more about the one I'm particularly excited about. First, compute. If you think about compute and crypto, and your first thought is "let's pool together compute from many people and orchestrate access to it on chain," you wouldn't be alone in thinking that. I think that's a very natural application of blockchain technology to democratizing access to compute.
Today, however, it doesn't quite work, for the reason that all the modern architectures that exist need extremely fast interconnect between the machines. People literally have InfiniBand between all the machines in their clusters, and obviously, if you pool together compute from many, many people, that will not be the case: you will have a regular network between them.
A
So
before
we
can
do
that,
we
need
several
more
breakthroughs
to
actually
make
it
possible
to
teach
models
over
slow,
Network
and
there's
quite
a
bit
of
research
happening,
but
we're
not
quite
there
yet
to
start
pushing
it
from
from
the
blockchain
side
when
it
comes
to
model
architectures.
Today.
Oh
sorry,
actually
there's
something
we
can
do
before
we
go
to
model
architectures
and
specifically,
I.
Specifically, I think one interesting idea that is worth exploring is to give up, for now, on the idea of having a decentralized compute cluster: have a fully centralized cluster from one of the hyperscalers like AWS or Azure, but build a DAO on top of the cluster that will be distributing access to it to the community. Imagine you're a researcher at one of the smaller universities and you don't have access to compute. You go to the DAO and you apply for a grant, but the grant will not be in tokens or anything like that.
The grant will be in compute hours. I think that's a very interesting idea to explore, and it would be a massive step forward in terms of democratizing access to compute.
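As a rough sketch of the idea, such a DAO's accounting could track grants denominated in GPU hours rather than tokens. This is plain Python modeling the concept, not actual NEAR contract code, and all the names here are illustrative:

```python
class ComputeDao:
    """Toy model of a DAO that sits in front of a centralized
    cluster and hands out grants denominated in GPU hours."""

    def __init__(self, total_gpu_hours: int):
        self.available = total_gpu_hours
        self.grants: dict[str, int] = {}

    def approve_grant(self, researcher: str, hours: int) -> bool:
        # In a real DAO a passed governance vote would trigger this;
        # here it is called directly for illustration.
        if hours > self.available:
            return False
        self.available -= hours
        self.grants[researcher] = self.grants.get(researcher, 0) + hours
        return True

    def consume(self, researcher: str, hours: int) -> bool:
        # Drawn down as the researcher's jobs run on the cluster.
        if self.grants.get(researcher, 0) < hours:
            return False
        self.grants[researcher] -= hours
        return True

dao = ComputeDao(total_gpu_hours=10_000)
dao.approve_grant("small-university-lab", 500)
dao.consume("small-university-lab", 120)
```

The key design point is that the unit of value never leaves the cluster: a grant can only ever be spent as compute, so the DAO is distributing access rather than money.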
When it comes to model architectures, it's also very interesting, because today all the best people who are pushing the frontiers of research are concentrated in a number of entities which I can count on one hand, primarily Google and OpenAI.
If you wanted to contribute to one of those projects, it would take you several months of reading the code base, understanding it, and doing smaller work items. Because of this commitment of time that has to happen ahead of time, from both you and the project you want to contribute to, contributing usually only makes sense if you're ready to commit over multiple years.
That is not the case with research, because with research, if you want to contribute to some project, you can come and go: you can join the project at any point, and usually you can leave at any point, so there's no inherent commitment that you have to make to any of those entities. The big reason why people have to go to those entities is access to resources. So again, if you did have the DAO that oversees a large cluster, that could be a step towards democratizing
research and pushing the frontier. Okay, data. Data is the one I'm most excited about. The reason is that when it comes to compute and research, centralization is a problem, and I think centralization is holding us back, but it's the only problem; mostly, everything works in a centralized fashion. Data annotation is the exception.
Everything is broken for everybody involved, whether centralized or decentralized, and I just want to walk you through today the problem of data annotation, how it can be solved, and what has been done to solve it already. Generally, the idea is: you're an entity, a company, a person, and you want to teach a model to have some new capability.
You want a machine learning model that, given some input, given some task that a human can do, would be able to do the same task. The way you would approach it is: you have some amount of data, some amount of inputs for the task that you want the model to be able to produce outputs on. But because the model doesn't exist yet,
You give the inputs to a data annotation platform. The platform will have a set of workers, and each worker will get a few of those tasks and annotate them with the correct answer, the correct output that the model of the future will have to learn to produce. All the solutions go to the requester, and at this point the requester goes through all the outputs and says: yeah, this output is good, this output is good,
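The flow just described can be sketched as follows. The class and method names here are my own illustration, not any real platform's API; the point is the shape of the incentive, with the requester as the final judge:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    input: str                       # e.g. an image URL or a prompt
    output: Optional[str] = None     # filled in by a worker
    accepted: Optional[bool] = None  # verdict set by the requester

@dataclass
class CentralizedPlatform:
    """Mechanical Turk-style flow: the requester has the final say,
    so the workers carry all of the risk."""
    reward: float = 1.50
    tasks: list = field(default_factory=list)

    def submit(self, inputs):
        self.tasks = [Task(input=i) for i in inputs]

    def annotate(self, outputs):
        for task, out in zip(self.tasks, outputs):
            task.output = out

    def review(self, verdicts):
        # The requester keeps every output either way; only accepted
        # tasks are paid, so rejecting everything costs nothing.
        for task, ok in zip(self.tasks, verdicts):
            task.accepted = ok
        return sum(self.reward for t in self.tasks if t.accepted)

platform = CentralizedPlatform()
platform.submit(["img1.png", "img2.png"])
platform.annotate(["a cat on a sofa", "a red bicycle"])
payout = platform.review([True, False])
```

Note that the rejected annotation stays with the requester even though it is never paid for, which is exactly the abuse this setup invites.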
this output is not good. Depending on their verdict, the workers whose outputs were accepted will get paid; the workers whose outputs were not accepted will not. There are plenty of centralized providers today: Scale AI, Mechanical Turk, CrowdFlower. And in this setup, it is broken for every single entity involved. It is broken for the platforms, it is broken for the requesters, it is broken for the workers. For the platforms,
it is hard because the workers are usually distributed all over the world, and so as a platform you have to figure out how to make payments, somehow, across a lot of geographies. I'm running out of time, so in short: payments are something that, on the blockchain, just work. For requesters, the problem generally is that the quality is very low. In Silicon Valley, AI is taking off right now, so
if you go outside and talk to a random person, they are an AI founder, and all of them annotate data. You can talk to them about their experience, and the experience is consistently, disastrously bad. If you use Scale, if you use any of those providers, the quality is not good. And finally, for the workers, it's the worst of all, because, first of all, there's absolutely no support.
If you do some work and you're supposed to be paid a dollar and a half for it, support is not economically viable, so a company like Mechanical Turk just cannot possibly have support. You cannot justify having people on salary doing support, because the amount of money in question is just too little.
Secondly, the requester who submits the work has the final say in whether you're getting paid or not. It is actually commonplace on Mechanical Turk for a requester to come, have the data set annotated, and then just auto-reject every single submission, keeping the data and not paying the workers, because they can do that, and in the absence of support, unless they do that consistently, it just works. And finally, payments are delayed, for multiple reasons.
One reason is that if you did some work and you were supposed to be paid two dollars, it's just not economically viable for the providers to send you the two dollars; it's too little of a payment. And secondly, even when you accumulate enough money to be paid out, say $20 or $50, it does take multiple business days for the money to actually arrive. So for the gig workers, the gap between when the work was performed and when they actually get paid is long.
All of those problems can be solved with a single smart contract, where you just change the model: the requester comes and provides the inputs and a specification of the task, written in plain English. The task is sent to some of the workers; they perform it and send the solutions back. But instead of sending them back to the requester, the smart contract implements a game between the workers, where they review each other's work.
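A minimal sketch of that contract logic, in plain Python rather than actual NEAR contract code. The simple majority vote below is a stand-in for the real review game, which the talk only describes abstractly; names and numbers are illustrative:

```python
class AnnotationEscrow:
    """Toy model of the on-chain flow: the requester escrows funds up
    front, and payment is released the moment peer review accepts a
    solution -- the requester never issues verdicts."""

    def __init__(self, reward_per_task: float):
        self.reward = reward_per_task
        self.escrow = 0.0
        self.balances: dict = {}

    def fund(self, amount: float):
        self.escrow += amount

    def submit_solution(self, worker: str, peer_verdicts) -> bool:
        # Peers, not the requester, decide; a simple majority here.
        accepted = sum(peer_verdicts) > len(peer_verdicts) / 2
        if accepted and self.escrow >= self.reward:
            self.escrow -= self.reward
            # Paid immediately: no payout threshold, no business days.
            self.balances[worker] = self.balances.get(worker, 0.0) + self.reward
        return accepted

contract = AnnotationEscrow(reward_per_task=0.75)
contract.fund(10.0)
contract.submit_solution("worker-1", [True, True, False])   # accepted
contract.submit_solution("worker-2", [False, False, True])  # rejected
```

Because the funds sit in escrow before any work starts, the auto-reject scam is structurally impossible: the requester has no rejection lever, and the worker is paid the instant peers accept.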
So the task is then sent for review to some of the workers; they review it and send it back. You need to design a game, and this is a very interesting exercise in game design, such that the workers do not converge to some sort of equilibrium where it's more beneficial for them to do low-quality work rather than high-quality work. If you do design this game well, then the quality of work will be very high.
Effectively, the requester can stop collecting the data set at any point: they can say, hey, as of this moment, do not give any more tasks to people; but for every task that was already given out, they have to pay, assuming the workers internally concluded through the game that the quality was high. So in this model, because it's a smart contract, first of all, payments are solved: the moment a worker's output is accepted, the payment is just sent. It doesn't matter if it's 75 cents or one dollar; on the blockchain, the payment is cheap.
Secondly, for the requester: if the game is designed properly, and I will talk about that in a second, the quality will be consistently high. And for the workers, first of all, the requester doesn't have a say in whether the quality was high.
It's the game between the workers that determines whether they get paid or not, and as long as the game is designed properly, they will be paid whenever the quality of work is high. So they're not at the mercy of the requester anymore, and as such, support is not really needed. And finally, payment is instantaneous.
The moment your work is reviewed by the others, which usually takes less than a couple of hours, you get paid. There has been a very long-running experiment on NEAR for the past two years around designing such a game, and it's been running smoothly and successfully. In the past half a year, the quality of every task annotated on the platform was extremely high, significantly higher than you can get from any centralized solution, with very little cost overhead.
Something I didn't mention is that the cut a platform like CrowdFlower takes is between 20 and 30 percent. That's a massive overhead, and it's gone, because there is no intermediary anymore. Before I conclude this presentation, I don't want to go very deeply into the design of those games; at some point
I will write about them, but I want to give you an example of a failed game: a game that was designed, was considered by the designer to be completely foolproof, and within a week was completely broken. The workers just completely wiped out the smart contract: they took the money without producing any meaningful work. The particular data set in question was a very simple one: the workers were given an image and needed to provide a description. The game was the following.
Given an image, the worker gets one of three assignments. Assignment number one: they produce a description. Assignment number two: they get someone else's description and need to say whether it's good or bad. Assignment number three is called a honeypot, and the honeypot is necessary because if you don't have honeypots, people will obviously converge to providing garbage descriptions and always accepting.
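The intended effect of the honeypots can be sketched in a few lines: a reviewer who blindly accepts everything gets caught on them, while a reviewer who actually reads the descriptions does not. The example items and the scoring rule here are my own illustration of the mechanism, not the actual contract's logic:

```python
def reviewer_score(accepts, items):
    """items: list of (description, is_honeypot) pairs. The correct
    call is to reject honeypots (which contain a deliberate mistake)
    and accept genuine work."""
    correct = 0
    for desc, is_honeypot in items:
        # accepting a real description or rejecting a honeypot is correct
        if accepts(desc) != is_honeypot:
            correct += 1
    return correct / len(items)

items = [
    ("a cat sleeping on a sofa", False),
    ("a dog with three heads", True),    # deliberate mistake
    ("a red bicycle against a wall", False),
]

lazy_score = reviewer_score(lambda d: True, items)  # blindly accepts all
honest_score = reviewer_score(lambda d: "three heads" not in d, items)
```

The lazy reviewer scores 2/3 while the honest one scores 3/3, so honeypots do create a measurable gap between reading and not reading, which is what made the design look foolproof.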
But if you do have honeypots, where a person has to intentionally make a mistake in the description, reviewers cannot just always accept. With honeypots, the game was considered by the designers to be completely foolproof, and so this game was launched on the contract. That happened more than a year and a half ago.
Within a week, the designer looked into the data set and observed that people were submitting complete garbage and had managed to get around the honeypots. Doing the retro, trying to understand what happened: initially, people would properly describe pictures when given a regular task and make the deliberate mistake when they got a honeypot.
But they quickly observed that on a honeypot you actually have no motivation to do high-quality work, because the task has to be rejected by design. So several people, a very small fraction initially, started always doing the same honeypot, which says: hey, there's a Spider-Man in the picture. And then, because people review each other's work,
if you get a review assignment, you can just say: if there's a Spider-Man in the description, reject it; if there is no Spider-Man, accept it. Once that happened, people realized that anything you write into the description which does not mention Spider-Man just gets auto-accepted. So within a week, people would just submit complete garbage without mentioning Spider-Man on a regular task, submit something mentioning Spider-Man on a honeypot, and review each other's work accordingly. It was fully automated, and the contract was depleted.
That concludes my presentation. Overall, I think it's an exciting time, and generally I don't think there is much happening today at the intersection of AI and blockchain, so there are many, many opportunities for people to start working on. Data annotation, I think, is one of the very lucrative ones.