From YouTube: The Biological Path Toward AGI (Matt Taylor's talk from UCSC Silicon Valley Meetup Jan 2020)
Description
Presentation given by Matt Taylor of Numenta at the "Towards AGI" Meetup at UCSC Silicon Valley Extension.
Original live stream at https://www.youtube.com/watch?v=qVKVj4nx-mE
But the good news is, if you're struggling with the math, it's okay! You can still understand how intelligence works without understanding how to navigate gradient descent via backpropagation through a neural network.
So again, I should probably note that I'm live-streaming this. Thanks to the viewers who are watching; there have been about 40 people watching as we've been presenting. So thank you all for joining, along with the group of people in front of me. I was asked this question:
How many of you think human beings are intelligent? Who believes that an amoeba is intelligent? There are still some hands up. Who believes that plants are intelligent? There are more people who think plants are intelligent than amoebas.
Okay, so there is a big, broad range of definitions of intelligence, and everybody is going to tell you something different, so I'm going to define intelligence basically by trying to explain how the brain exhibits what anyone would call intelligent behavior. This is all biological. First, I'm going to do a quick review of deep learning, because we've just seen deep learning.
That was a very simplified view; I mean, the math is hard, right, but that was the simplest way to understand it. All of the math is super hard to understand. I went through this process about four years ago, because I thought maybe I was missing something. I thought I understood a lot of things about the brain, but maybe there was something in deep learning networks that I was missing that was going to lead to AGI. So I took the course, I studied it, and I don't think so.
So how close are we to AGI from the deep learning perspective? I'm going to point out a few things that really make it poignant how deep learning fails, and I'm not doing this to be mean; I'm just doing this to be realistic.
So this is an adversarial attack. All deep learning networks are susceptible to some type of adversarial input. In the case of an image recognition network, there is some combination of input to the system that will short-circuit it, and if attackers can find that combination of input, they have the ability to short-circuit the system. And the tricky thing is, look at the randomness of this thing: that means there are a lot of random combinations that could potentially short-circuit the system.
There are inherent flaws in the landscape of deep learning, because it's really built on mathematics, and a lot of the models that get created can be referred to as brittle, in that if you show one something outside of the set of things it has ever seen before, it will fail catastrophically. Here's another example of adversarial noise: you can apply noise to images that humans can't even see, and it will cause classification networks to fail. And this isn't just for images.
This is for all types of deep learning networks, including the ones that process voice input, because they're all essentially doing the same mechanism that Robbie just told you about, and that mechanism is what's being attacked. One more, maybe the most dangerous one, is the misclassification of a stop sign as a 45-mile-per-hour speed limit sign. And this really drives it home.
Reinforcement learning, which I'll talk a little bit about later, is susceptible too. These systems are extremely vulnerable to adversarial attacks, because every component that builds up a representation using these gradient descent methods is another possible way the system can be attacked, and as you move through space and time as an agent, everything you're seeing is a surface area for attack. It's just that our brains don't work like this. You can short-circuit human brains, for sure, but they're certainly not as easily fooled.
Since 2005 we've learned a lot about the brain, and that's what I'm going to talk to you about: what we've learned about the brain. But one more example of deep learning first.
Again, did you guys see AlphaStar? This is insane, the kinds of things you can do with deep learning, and you're going to keep seeing things like this happen. DeepMind decided to create an AI that could play StarCraft 2, and so they did, and it played basically every top human StarCraft player.
It ended up beating all those humans, with something like 600,000 years of in-game playing; if you were to make the computer play in human time, that's 600,000 years. So you can imagine how bad it was for the first several hundred thousand. We humans don't learn that way. It's just not the way that we learn. And even these things are susceptible to adversarial attacks: if you knew how this system was architected, and there's a whole paper about it online, you could figure out how to attack it.
So I'm telling you that deep learning is not going to lead to AGI. Why not, right? Okay, there are two big things. The main thing is that the perceptron, the point neuron at the core of deep learning, is a little bit too simple.
All of these deep learning networks that Robbie talked about are layers and layers and layers, and they all have units, and each unit is inspired by a neuron. Of course, these things were built back in the 60s and 70s.
These ideas emerged way back then, and it's only now that we have the massive amount of compute power to actually do things with them. But this perceptron neuron is vastly too simple. Basically, one unit is a sum of the weighted activations from the previous layer, and you run an activation function over it. Actual pyramidal neurons in your brain are more complicated than that, probably vastly so.
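That unit, a weighted sum followed by an activation function, can be sketched in a few lines. This is just a minimal illustration of the point neuron idea; every number here is made up:

```python
# Minimal sketch of the "point neuron" (perceptron unit) described above:
# a weighted sum of the previous layer's activations passed through an
# activation function. All numbers here are invented for illustration.

def point_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by ReLU."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

activations = [0.5, 0.1, 0.9]   # outputs of the previous layer
weights = [0.4, -0.2, 0.7]      # learned connection weights
print(point_neuron(activations, weights, bias=0.1))
```

That one line of arithmetic is essentially the entire unit; everything else in a deep network is wiring many of these together.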
So we at least know that a neuron is more than that; there is a lot more going on in a pyramidal neuron. We have a model for this neuron that we've used since 2010, and what it enables is a network that learns with context, in our case a temporal context, which means the network learns by looking at the state of itself over time. That's what this context is. This is the feed-forward input, the world coming up into the neuron, and this other input is contextual information, and it can be vast; this network right here is the distal input.
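The two kinds of input can be caricatured in code. This is my own toy sketch of a neuron with feed-forward plus distal context, not Numenta's actual HTM neuron model; the cell names and threshold are invented:

```python
# Toy sketch of a neuron with two kinds of input, loosely inspired by the
# model described above (not Numenta's implementation):
#   - proximal input: the feed-forward "world" signal that can make it fire
#   - distal input: contextual activity from other cells in the network,
#     which puts the neuron into a predictive state rather than firing it.

class ContextualNeuron:
    def __init__(self, distal_segment, threshold=2):
        # A distal segment is a set of other cells this neuron listens to.
        self.distal_segment = set(distal_segment)
        self.threshold = threshold

    def is_predicted(self, active_cells):
        """Predictive if enough of its contextual partners were active."""
        overlap = len(self.distal_segment & set(active_cells))
        return overlap >= self.threshold

n = ContextualNeuron(distal_segment={"c1", "c7", "c9"})
print(n.is_predicted({"c1", "c9", "c4"}))   # context matches: predicted
print(n.is_predicted({"c2", "c4"}))         # context absent: not predicted
```

The point is only that the cell's state depends on what the rest of the network was just doing, which is what lets sequences be learned.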
Think of the earliest organisms that were already locomoting through whatever primordial ooze they existed in. Locomotion comes before intelligence. Locomotion exists before we even have neurons that take action, because taking action is what neurons evolved to do; you have to have some form of locomotion. So in the very simplest picture of how this evolved, we have some simple organism and some type of external stimulus that indicates either something good that I want to go toward or something bad that I want to get away from: a chemical trail, whatever that is.
This thing learns that when it sees one signal, it activates the motion that gets it the hell out of Dodge, and when it sees the other, it activates the motion that goes and eats its dinner. That is how core this idea of movement and space is to intelligence. The very emergence of intelligence as a survival strategy already involves space, objects, and the movements associated with them. So these are, I think, the core, primitive ideas for defining what intelligence is, at least in biological systems.
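That stimulus-to-movement loop is about the simplest control program imaginable. Here is a hypothetical sketch with entirely invented numbers:

```python
# Minimal sketch of the approach/avoid behavior described above: a 1-D
# organism senses a chemical signal and moves toward food or away from
# a threat. Entirely hypothetical numbers, just to show the loop.

def step(position, sense):
    """Move toward attractive stimuli, away from aversive ones."""
    signal = sense(position)
    if signal > 0:      # smells like dinner: approach
        return position + 1
    elif signal < 0:    # smells like danger: get out of Dodge
        return position - 1
    return position     # nothing sensed: stay put

# A food gradient that is attractive everywhere below x = 5.
food = lambda x: 1 if x < 5 else 0

pos = 0
for _ in range(10):
    pos = step(pos, food)
print(pos)  # the agent climbs the gradient and stops at the food
```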
We also know of the existence of location-based cells in the brain. In 2014 the Nobel Prize was given for the discovery of something called grid cells. There are different types of these cells, which have been found in the hippocampus and in the entorhinal cortex, and they respond to where the organism is in a room.
They seem to map out space; they do map out space. When people realized this, they did a bunch more experiments, and this is a real thing: you have neurons in your brain that fire in certain patterns as you walk across a room. They keep track of where you are in your environment, and they do the same thing whether you're in this room or another room.
They are generic space-mapping neurons, things that respond to just space and where you are in it, so we think this is a core mechanism of intelligence. It tells you how important space and time are to the brain. We can see the way the brain is representing space, and the way we see it doing it, it can do it one-dimensionally, two-dimensionally, three-dimensionally, n-dimensionally.
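To build intuition for how a population of cells could encode location, here is a cartoon "grid cell" module in one dimension. Real grid cells tile two-dimensional space hexagonally; the periods below are invented:

```python
# Cartoon of grid-cell-style location coding in 1-D. Each "module" has a
# different spatial period, and the cell that fires in a module depends on
# the position modulo that period. Jointly, a few modules pin down position
# far beyond any single module's period. (A toy for intuition only.)

PERIODS = [3, 5, 7]  # invented spatial periods for three modules

def grid_code(position):
    """Which cell is active in each module for this position."""
    return tuple(position % p for p in PERIODS)

print(grid_code(2))    # (2, 2, 2)
print(grid_code(23))   # (2, 3, 2)
# Distinct positions get distinct joint codes up to 3*5*7 = 105 units:
codes = {grid_code(x) for x in range(105)}
print(len(codes))      # 105
```

A small number of periodic modules yields a combinatorially large, unambiguous code for location, which is part of what makes grid cells so interesting.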
So if you were to trace mice all the way back, the area of the brain where we see this happening is right there, and I bet that even way back then it was happening back there. I bet this originated when we were all just little tadpoles going through the primordial soup; we started mapping space then. What we have now is this really sophisticated system for mapping space, because of what happened over 500 million years. Watch this green dorsal pallium.
You'd have to have a pretty good neuroscientist to tell you which part of the brain had just been probed by looking at the structure of it, because your neocortex, this green part that's doing all these different things, is essentially running the same cortical function all throughout the whole thing.
What the brain basically did here is say: I developed this mechanism to map my agent in space, to know where I am in space. But as the brain expanded, and as mammals in particular started evolving dextrous areas, fingers, and vocal cords that could evoke different types of sounds, these space-mapping cells and the circuits they were involved in evolved to start representing objects, all the objects that we interact with.
So now it's not just "where am I in the room," it's "where is my finger on the object that I'm moving it across?" This core mechanism, which was probably first observed in mice and rats in the entorhinal cortex and hippocampus, we believe was hijacked in some way by evolution, way back in the timeline, and evolved into what we think is happening in the neocortex right now. And that's where all of our detailed memory about every object we've ever interacted with is stored.
The structure repeated throughout the neocortex is called a cortical column. What we do at my company, since I work there, is try to figure out how this works, because the cortical column is complicated. It's six to ten layers, depending on who you talk to; there are a bunch of different cell types in there, and they're all piped into other layers, different areas of cortex, and different places of the brain. It goes through the thalamus. It's complicated; this is a messy system. A little pitch for my company: I livestream all of our research meetings.
This is one of the things we do: we research the neuroscience. We just had a research meeting this morning where we were talking about grid cell firing, a hybrid mechanism of grid cell firing using continuous attractor networks and oscillatory interference. Those are the sorts of research meetings that we have, and we do this pretty much every week. If you're interested in this stuff and you want to learn more, my company is very transparent about everything that we do. So, a quick pitch.
So, this Thousand Brains idea; that's what we like to call it, that's sort of our marketing term. The Thousand Brains idea is that every cortical column in your cortex is essentially doing the same thing, and we try to simplify the basic function of intelligence down to this one sensory-motor loop. The sensory-motor part is important, and it's kind of all throughout our brain. All these different parts of cortex are functionally different; they're doing different things.
There are language areas, speech generation areas, and auditory areas, but we think that if we can figure out the core algorithm, it will inform us about all of the brain, including the hierarchy of the brain. You don't hear all that much about hierarchy in deep learning talks, but it's very important in deep learning networks to learn the structure of things, to have a hierarchy, and that's inspired directly by the neocortical hierarchy in neuroscience as well.
There is a hierarchy in your brain, and this is what it looks like. It's really messy and really confusing, and neuroscientists have been trying to figure out how it works for decades and decades. It's not deep, it's wide, and it's really, really scattered: there are connections going all over the place. I'll talk more about that in a minute. And we think that at every level of the hierarchy, the same essential function is carried out. So I'm going to do a hierarchy example.
The bottom layer will do basic feature extraction; we'll try to identify low-level features, edges and points and dots and things like that, and then it sends its new representation to the layer above it, and we ascend the hierarchy. As you go up the hierarchy, the theory is that the layer above will take those components and compose them in a logical way, taking some of these ideas and turning them into feet and legs and torso or whatever, and then you get to the top.
At the top, and I think this is from a video, you can actually identify the object that's being classified. So this is how all of the machine learning textbooks show hierarchy, but it's actually not even this clean. If you look into real neural networks, you can't just look at one layer and see faces and noses; at some point everything gets mangled and mashed together. So there's no logical composition of features like a human would do, right? But okay.
This is a bit of a toy example, but everything I'm going to say here is, I think, factually true of the brain. I'm going to use the visual cortex as an example. Input flows feed-forward into multiple areas of the cortex; it doesn't just go to the bottom, it goes into several levels. What I've got here is an example of one cortical column at, let's say, the very bottom of the hierarchy. At that level, that cortical column gets some of the input to process.
It gets a field of view of the space. At the bottom of the hierarchy the field of view, and you can test this yourself, is about the size of your thumb at arm's length. So in V1, which is right here at the back of your head, you're only processing about the size of your thumbnail, but it's very highly detailed; I mean, you can read a piece of paper right there at arm's length. So that's all the bottom part of the hierarchy is scanning.
As you go up the visual hierarchy, the field of view of the sensory input broadens, but the detail is blurrier; you get less detail. And we ascend once more: at the highest region of the hierarchy it's the same thing again, a broader field of view, less detail.
You explore the thing, you move around, you look at it in all these different ways. You're moving your sensors, at all the different levels of your hierarchy, across an object, and writing sensory input tied to locations in space into the synapses in your cortex. That's how you learn objects: through sensory-motor exploration. Of course there's more to it than that; there's a lot, there's a sleep cycle involved and so on. But at a very basic level, that's what you're doing.
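A crude way to picture "sensory input tied to locations in space" is a map from locations to sensed features, built up by movement. This is a toy sketch under my own assumptions, not the actual cortical algorithm; the object and features are invented:

```python
# Toy sketch of sensorimotor object learning as described above: move a
# sensor over an object and store (location -> feature) pairs; later,
# check whether what you feel at a location fits the learned model.
# The object, locations, and features are all invented.

coffee_cup = {(0, 0): "smooth", (0, 1): "rim", (1, 0): "handle"}

def learn(movements, sense):
    """Explore: record the feature sensed at each visited location."""
    return {loc: sense(loc) for loc in movements}

model = learn([(0, 0), (1, 0)], lambda loc: coffee_cup.get(loc, "air"))

def matches(model, loc, feature):
    """Does this (location, feature) pair fit the learned object?"""
    return model.get(loc) == feature

print(matches(model, (1, 0), "handle"))  # fits what exploration stored
```

The essential idea is only that what gets stored is feature-at-location, not a bag of features, which is why movement is part of learning.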
I don't want to make it seem otherwise: the hierarchy is definitely important, but it's not as important as we think it is, okay? The hierarchy is like the purple stuff here. All of these cortical columns have massive amounts of lateral connections; layer 2/3, layer 5, layer 6 all have these lateral connections, and some of them go huge distances across the brain and connect to some cortical column on the other side of the brain.
Why is it doing that? Well, I think one of the reasons is that information goes all over: because there are all of these different lateral connections, it goes up the hierarchy, it goes down, it goes all over the place. So you don't have to think that a representation forms at the bottom of the hierarchy, travels all the way up, and then comes back down from the top. It's very likely just that whatever figures it out first transmits that to all of its neighbors, that is, to whatever it's sharing its dendritic synapses or axons with.
I talk a lot about the thalamus; next week I'm going to give a presentation about how the pulvinar in the thalamus could be involved in propagating signals across the cortex in some way. But what do we do right now, especially in deep learning, because that's what everybody's doing? We asked ourselves this question because, obviously, we've been studying the brain and trying to figure this out for so long, and nobody's paid any attention, so we thought: what would get their attention? Let's do something in deep learning.
Your brain uses sparse representations. When I say sparse representations, I mean that when you think of your aunt, or whoever, when you think about somebody, there's a certain neural code that comes on in your brain; you have neurons that represent that thought somehow. But no matter what you're thinking about, only about 2% of the neurons in your brain are ever active at any one time.
All those things come together to form your representation of what "dog" means to you, and it's all based on your sensory experience with dogs over your whole life. That's your representation of dog, and it's all very sparse, and it's all very associative. You couldn't take your representation of dog out of your brain and put it somewhere else; this wouldn't work between people.
But within one brain, you could take your dog representation and your cat representation and compare them. You would see that they overlap in the places where both dogs and cats are furry mammals, have tails, have ears, breathe, and are living things, but they would not overlap in the parts where dogs bark and cats meow, and cats are finicky and dogs are overly emotional, and whatever.
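You can play with that overlap idea directly by treating each representation as a set of active bits. The specific bit indices below are invented stand-ins for shared semantic features:

```python
# Toy illustration of comparing sparse representations by overlap.
# Each representation is a set of active bit indices; shared bits
# stand in for shared semantics (furry, has-tail, barks, meows, ...).
# The specific indices are, of course, made up.

dog = {3, 17, 42, 80, 113, 200}   # furry, tail, ears, alive, barks, loyal
cat = {3, 17, 42, 80, 150, 251}   # furry, tail, ears, alive, meows, finicky

overlap = dog & cat
print(len(overlap))               # 4 shared active bits
print(len(overlap) / len(dog))    # similarity as an overlap fraction
```

With dense representations this comparison means nothing; with sparse ones, overlap is a direct measure of semantic similarity.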
Anyway, my point is that we wrote this paper called "How Can We Be So Dense? The Benefits of Using Highly Sparse Representations," and it's all about how to take standard, vanilla neural networks, like convolutional neural nets, and make them sparse. We took MNIST and the Google Speech Commands dataset, some of the basic datasets that everybody does machine learning on, we took the best examples of standard convolutional neural networks for those datasets, and we made them sparse.
There's a certain way these networks are built, layer by layer: they can have thousands and thousands of units in those layers, or hundreds, or whatever, but they're all densely connected. Every single neuron in every single layer of a standard deep learning network has a connection to every neuron in the layer before it, there's a weight associated with each connection, and there's an activation function that decides whether that neuron is going to be active in the next state or not.
What we're doing is basically cutting the majority of those connections entirely, because we know that you don't really need them. There have been some deep learning studies on this; you can look up things like the lottery ticket hypothesis. If you take a neural network that you've already trained, and it's running really well, you can go through the network and zero out the majority of the weights, and it will still perform, which is crazy, right?
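The zero-out-most-of-the-weights observation is usually demonstrated with magnitude pruning, which is easy to sketch. This is a generic illustration, not the exact procedure from the lottery ticket paper:

```python
# Sketch of magnitude pruning as described above: zero out the weights
# with the smallest absolute values and keep only the strongest ones.
# (A generic illustration, not any one paper's exact procedure.)

def prune(weights, keep_fraction):
    """Zero all but the largest-magnitude fraction of weights."""
    k = int(len(weights) * keep_fraction)
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

trained = [0.9, -0.05, 0.02, -0.8, 0.1, 0.01, 0.7, -0.03]
sparse = prune(trained, keep_fraction=0.25)
print(sparse)  # only the two strongest weights survive
```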
But it proves the point: there's a lot of stuff going on in there that doesn't necessarily contribute to the accuracy of the model. And if we can find a way to apply that sparsity in a logical way that doesn't hurt the system as it's trying to model whatever it's training on, then we're going to come out with deep networks that cost a small fraction of the computational power we currently spend training them.
This is something I'm very excited about. I think this is the future of AI, and I think it'll change the playing field. All of the GPUs that we use today to train deep learning networks are really efficient; they were built by Nvidia or whoever to do the type of dense matrix calculation that's required for backpropagation, and they are very good at that.
But we can make the computation sparse. We can already make these models sparse and prove that it takes fewer computations, less training time, and so on, but that doesn't amount to much until we have hardware that can take advantage of the sparse computations. So we are looking forward to developments in the neuromorphic hardware scene, which is really hopping right now; they're going to start making chips that are able to handle sparse representations.
We're going to have, I think, neuromorphic chips that can handle sparse computations. This is going to level the playing field for the AI industry, and I think it's going to mean that many more AI startups will be able to get into the research business again, because right now it's very hard to do real research with real live models and real data; it takes such a long time, ridiculous amounts of time and compute for training, and so on.
So I think this is going to make a splash as the hardware appears, and hopefully we'll get some of the big companies doing research coming back, looking again at the brain: how does the brain compute, and how can we create systems that are more like it? Deep learning has a lot of legs left, so I'm not saying deep learning is dead.
If you put your mind to it, you can do almost anything with deep learning, given enough time and compute power, so there's going to be a whole swath of new applications. A lot of innovators are digging through the math bucket right now, trying things out: what's the next great thing? It's like a huge toolbox full of functions.
But I don't think that deep learning is going to lead to strong AI or AGI. There are people in the field who do believe that, like Yann LeCun at Facebook AI and, to some extent, Yoshua Bengio, but I just don't think it's going to happen. Like I said, you need real neurons, or at least better neurons, and you need movement. You need to understand that time and space are sort of the scaffolding with which we build our simulation of the world, and the brain is a simulation that you're constantly running.
You have to choose which weights you want to prune, and that means you have to decide which ones are meaningful to the system, and that's a tricky thing to do, right? You don't want to prune weights that are contributing to the accuracy of the model. So there are several different mechanisms for creating sparsity. What we do to achieve sparsity is something different: in brain-speak, in our theory, it's called spatial pooling, and in machine learning speak I would probably call it...
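One common way to enforce that kind of activation sparsity in a layer, and, as I understand it, the approach used in the "How Can We Be So Dense?" work, is a k-winners-take-all step. A minimal sketch of the idea, not Numenta's implementation:

```python
# Minimal sketch of a k-winners-take-all sparsity step: keep only the k
# largest activations in a layer and zero the rest. (A generic sketch of
# the idea, not Numenta's actual spatial pooler.)

def k_winners(activations, k):
    """Zero every activation except the k largest."""
    winners = sorted(range(len(activations)),
                     key=lambda i: activations[i], reverse=True)[:k]
    keep = set(winners)
    return [a if i in keep else 0.0 for i, a in enumerate(activations)]

layer = [0.1, 0.9, 0.3, 0.7, 0.2]
print(k_winners(layer, k=2))  # [0.0, 0.9, 0.0, 0.7, 0.0]
```

Unlike pruning, which removes connections, this enforces sparsity on the activations at every forward pass.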
No, the sparsity is not going to help there. It doesn't reduce the amount of training data that you need to push through; it reduces the amount of training time it takes to push it through. So you're still going to need to push the same amount of data through to get the same results.
The more buckets you have, the easier it is to look things up; I'm not sure if it's the same concept. You have to think about semantics. When you think about sparse representations, it helps to think about semantics. Every representation in your brain has semantic meaning; that's one of the things we miss out on in the artificial neural network world.
Every representation has semantic meaning to it, and if you take another representation, you should be able to compare the two and tell how similar they are, right? And the sparseness just opens up a big blank field. Say you have a data structure that's 5 bits wide, and you can put a bit in each one of them.
A
That's
five
to
the
power
two,
but
as
you
make
that
thing
really
big,
there's
a
sweet
spot
right
a
rather
you
know
around
several
thousand
units
where,
if
you
computationally
convert
represent
anything
in
the
known
universe
right,
but
just
by
turning
these
bits
on
and
on,
you
have
the
space
in
your
neural
representations
to
represent
an
infinite,
essentially
infinite
amount
of
things.
That's
why
you
can
always
continue
to
work.
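The arithmetic behind that sweet spot is just counting bit patterns. The sizes below are illustrative; n = 2048 with 40 active bits (about 2%) is a scale often used in HTM examples:

```python
# The combinatorics behind sparse-code capacity. With n bits, of which
# exactly w are active, the number of distinct patterns is "n choose w".
from math import comb

# A 5-bit structure where any bit can be on or off: 2**5 = 32 patterns.
print(2 ** 5)           # 32

# A sparse code at the scale the talk mentions: a couple thousand bits
# with ~2% active. n = 2048, w = 40 is a size often used in HTM examples.
print(comb(2048, 40))   # roughly 2.4e84 distinct patterns
```

Even at 2% sparsity, the number of available codes is so large that collisions between unrelated representations are essentially impossible.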
I think a better word for that would be "attractor," if you're familiar with the term. I like to think of neural representations in the brain as attractors. An attractor is a pattern that has the ability to self-complete if you invoke a portion of it.
You're making that call as you're touching the thing; that invokes an attractor, right? You're like, oh, it's fuzzy, it might be a cat or a dog, and then the cat pushes up against your hand and you're like, that's the cat, right? That's the attractor relationship to representation. It's sparse because you don't have to store a lot of data just to say something is furry or fuzzy or not.
You don't have to remember every single hair that you touch when you're petting the cat. That's the way I think about it, and if you think about it that way, the cat's an attractor, the dog's an attractor, and where they overlap are their similarities, right? The way things are represented in your brain is just like a web of interconnected things, like a train of thought.
You can carry it from one thought to another to another: you're invoking attractors, exploring the space a little, grabbing a subset of it, and invoking another attractor, then exploring the space a little, grabbing a subset, and invoking another attractor. That's how your thought process works, sort of holding things in your brain as you think through ideas, for example.
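That self-completing behavior is classically modeled with attractor networks such as Hopfield networks. Here is a tiny toy of that idea, not a claim about how cortex implements it:

```python
# Toy Hopfield-style attractor: store one pattern in a weight matrix via
# the outer-product (Hebbian) rule, then recover it from a corrupted cue.
# A classic cartoon of "self-completion", not a model of real cortex.

def store(pattern):
    """Hebbian weights for a single +1/-1 pattern (no self-connections)."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, cue, steps=5):
    """Repeatedly update each unit toward the stored attractor."""
    state = list(cue)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(weights[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

cat = [1, -1, 1, 1, -1, -1]      # an arbitrary stored pattern
W = store(cat)
noisy = [1, -1, 1, -1, -1, -1]   # cue with one bit flipped
print(recall(W, noisy) == cat)   # the partial cue completes to "cat"
```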
You could make a smart web crawler, for example. You could give it an environment that starts at page zero, and then it scans the page and looks for links. Its movement could be choosing a link to go to next; that could be a movement, a movement through internet space. There are lots of ways you might define moving through a space. It doesn't have to be bound to physics. It doesn't have to be bound to simulation.
Yeah, actually, the grid cell part: we have a paper called "How does the neocortex create a sensory-motor model of the world?", I think that's the title. Go to numenta.com/papers and you can see all of them; the one that has hands and fingers touching things, that's the one.
The cortex is generating motor output everywhere; your whole cortex, which has confused neuroscientists for decades. But that motor output is like your predicted state of the world: it's where you think you're going to be in the next moment. And we don't really know how it works; nobody knows how it works.
It's genetically encoded into some part of your subcortical structures to recognize something that looks vaguely like a snake and invoke a fear response; everybody's afraid of snakes and spiders. So that's pretty much fired for you before the cortex even gets involved. It's almost a reflex, a subcortical short circuit.
I don't know; I wouldn't take it like that, since I don't know. Your brain is constantly pattern matching, not just in the neocortex but in all the other parts of your brain. It's trying to resolve to certain patterns, and when sensory input comes in, it makes a stop in the thalamus before it projects all the way up to the top.
Reinforcement learning has one great thing going for it: it allows you to get movement into the loop. But you still have to have this loss function, so I don't think pure reinforcement learning is what we're after. I think you have to have a model of reality, and perhaps something more like the brain is a way forward, if we can get there.
A thing about neurons is this: one single neuron means absolutely nothing. You could take a neuron out of the brain and it would never affect anything. A neuron will fire, and if you're just looking at one neuron firing as you're watching the brain, it's a completely random pattern; there's no way you can associate that neuron with anything. You have to look at the brain as a population of neurons, all the time.