So I'm going to talk about the biological path toward strong AI. I'm an open-source community manager, as well as an engineer, at a very small company, one we still call a startup, named Numenta. So I'm going to give you some context.
First, I'm going to talk about intelligence at a high level, strong versus weak intelligence, and then we're going to talk about a bit of the brain science (we won't get too crazy), which will introduce the concept of hierarchical temporal memory.
Has anyone ever heard of this: HTM, hierarchical temporal memory? Great. So I'm going to describe the key concepts behind this theory of intelligence, and describe how the algorithms work, with visualizations. But first, about intelligence.
So I read this back in early 2015, the title of a Wired article, and I had to do a double take, because I was like, "No, it hasn't." But I realized that my definition of intelligence was different from a lot of other people's definitions of intelligence in the field.
So if I modify this to say "weak AI has arrived," I would agree with this statement. Not that I have anything against weak AI: we can do really amazing things with artificial neural networks today, and a lot of the talks at this conference are about all the amazing things we can do with it. We need this technology; I'm not belittling it in any way. But I do think there are some things about weak AI that are misconstrued. I don't think we've done intelligence yet.
The AI tech, the deep learning, the convolutional neural networks, all the stuff that you're hearing about today at this conference, originates from an artificial neural network neuron that was created, you know, decades ago. And when you compare it to what a real neuron does, the pyramidal neuron in the neocortex, it's really lacking in capabilities. But we've built some really amazing things with this very simple neuron model, so it does hint at something.
Maybe we should investigate more about how the neuron works and understand how it works, so we can try to build more intelligent systems. So I believe you need more realistic neurons to build things that are truly intelligent. And another big thing that a lot of people don't talk about is that you need something to have the ability to explore its environment. Everything that is intelligent today has the ability to explore its environment. You can't have intelligence, in my opinion, by having something sit and be fed data.
It can learn a lot from that data, but I don't think it's going to be able to make intelligent decisions. Think about a newborn baby: constantly moving, wiggling around, just randomly moving. It's interacting with its environment, and it's learning, learning, learning all the time. That's what we need in order to make systems that have that ability. Your data scientists know that when you get a new data set, there's a lot of human exploration that has to happen so you can understand the different aspects of that data set.
We need to build systems that are able to do that exploration without us prompting them, so I think movement is very important.
So, a bit about my company. What we're really primarily interested in is how intelligence works, and we're focused on the mammalian neocortex for specific reasons. We want to create software based upon those principles, because we think that to create intelligence, we need to study what we know is intelligent today, and the one thing that everybody, I think, can agree is intelligent.
So the cellular structure in one of these slices is going to be the same, whether you're doing visual processing, auditory processing, motor command generation, whatever it is. Your cortex has a common algorithm throughout, and that gives us the key to try and find out how this is working. So what we've done, what we've been trying to do for years, is to model one layer.
Excuse me: really, one region. I don't want to get too far into the neuroscience, but there are regions and layers; there are six-ish layers within a region. There's still some confusion about that. We model layer 2/3 (another point of confusion), but there's a lot of stuff going on in just one region, and so we make models of neural activity within a small cortical structure built from these pyramidal neurons. I'm going to go into that.
Each one of these little cubes in our model is a neuron, one that we model with some specific attributes that we think are important. First (this is my hand drawing of a neuron): a neuron doesn't just get one type of input. It can get different types of input; I'm just going to talk about proximal and distal input.
A proximal input is like feed-forward input that's coming either from some sensory modality, or from some other part of the cortex, or another part of the brain. Now, when a section of cortex gets proximal input, it has no idea where that input is coming from, because it's just this algorithm that's running. It doesn't know if it's processing vision data or auditory data; it has no clue.
It just knows that it's mapping onto this input, and that's what it sees changing over time. The distal inputs for pyramidal neurons are not coming from some sensory modality; they're coming from neighboring neurons within the structure. This provides context to the input. So, before I get into some deeper stuff, I want to note that this is an evolving theory.
We have tried to create a theory of intelligence based on the most recent neuroscience research that we can find, and we continue to evolve it as more research comes out. So this is an evolving theory based upon the neuron model that I just described to you. One of the key components to understanding it is understanding a data format called sparse distributed representations (SDRs) at its core.
Think of it like this: imagine a giant fiber-optic cable that could have thousands of fibers in it, and at any one point only two percent of them are on. Two percent is important, because we took that number from the brain: at any point in time, only about two percent of your neurons are active, and that turns out to be really important. So if you took a nerve, an axon bundle, it's analogous to a fiber-optic cable.
So that's the input that might be coming into a cortical section: it's just firings of neurons happening over time. But the cortex's job is to try and make sense of that information without knowing where it comes from or what it means. So over time it's looking at this firing. We call these sparse distributed representations, and at every time step there's a different firing happening, temporally, over time. So I'm not talking so much about spatial information: each time step is spatial information, but the kicker is that we're trying to understand them through time as they change. So I'm going to show you a couple of demos about SDRs that I think will kind of illuminate this a bit.
So here's an example of an SDR. It's got 2048 bits, only 40 of which are on; that's about 2%. This is what it might look like, and the capacity of this is surprisingly large. This is a typical size that we use in our models.
So this is like a sample of each one of these SDRs, and from this random generation I'm going to try and match one. I'm going to show you: I'm going to pick one of these, I'm going to add some noise to it, and I'm going to compare it to all of the SDRs that I've collected. This takes a few seconds, and what we end up with is this: here is the SDR that it's matching, and I've added 25% noise to it; you can see that right here. I can add more noise if you want.
Right, let me try and make this smaller, sorry. This ranking that I have here is that SDR compared to every other SDR. This is just a simple AND operation; it's just an overlap score: how many bits are similar? We can easily tell that it's that SDR, even in the presence of 45% noise. We can tell we've seen it before, and we can handle a lot of noise. This is one of the properties I want to explain about SDRs: they are very noise-tolerant.
So I can identify that I've seen that SDR, even in noise, and I can actually know exactly what my false positive rate is: you know, 3.5 × 10^-15. So it's not very often that I'm going to hit a false positive within this space of 100 SDRs that I'm comparing against. The brain is extremely reliant on this noise-tolerance capability of this medium, so it's something that I like to point out. So that's the SDR data format.
Okay, so now I'm going to talk about encoders. Your retina is an encoder, a very complicated encoder. So is your cochlea; so are your somatic senses, your feeling. We're not interested in that; at least at Numenta, my company, we're not trying to create encoders. We're trying to understand how the cortex processes the encodings that are coming from the encoders. So your retina and your cochlea are always sending these streams of SDRs into your brain, and your visual cortex is huge.
A
So
it's
spread
out
throughout
a
large
mass
of
cortical
space
and
that's
what's
understanding
what
you're
seeing
all
the
time,
the
way
the
retina
encodes.
This
is
very
complex
and
we're
not
trying
to
write
that
in
software.
But
we
do
have
some
examples
of
encoders,
very
simple
encoders
that
I
like
to
show
to
at
least
show
you
what
I
mean
when
I
talk
about
STRs
requiring
a
semantic
meaning
so
I'm
going
to
show
you
a
date
encoder-
and
this
is
going
to
seem
really
simple.
But what I have here is this: I'm trying to encode the semantics of a date, a timestamp. I want to know what day of the week it is; I want to know whether it's a weekend or not, the general time of day, and the season. So this is me going from today and skipping a week ahead. When I skip a week ahead, you see the season block tick, and as I continue, the season continues to tick. When I move to a Saturday, the day-of-week and the weekend blocks change. So I'm encoding this timestamp differently depending on different features of the data point, and I can also change the time of day if I want to.
So you can see that block change too. And down here I've got the entire encoding, and you can see it change as well. Because, to send this into a region of simulated cortex, we don't have to have different regions handling the different semantics that we want to extract; we just throw them all together. We just concatenate these representations, and that is going to be what I call the input space for the next process that we're going to run. Let me give you a little bit more of a tangible example of this.
So here is an example of an input space; let me make this smaller. What I've got here is real data: it's just a stream of scalar values, each associated with a timestamp, and I am taking the actual scalar value.
So this is giving me a representation of one scalar value moving through time, with an associated time for each point. Now, there are lots of ways that I might be able to encode this specific data in an SDR format. Here's another way: instead of using a scalar encoder that has continuous buckets, we can distribute those buckets throughout the space. It still retains the semantics of the scalar value, but it looks a little bit different; as you can see, as things go up and down, the encoding changes some.
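A minimal sketch of the first kind of scalar encoder described here, the one with contiguous buckets. The parameter values are illustrative, not NuPIC's ScalarEncoder defaults:

```python
def scalar_encode(value, min_val=0.0, max_val=100.0, n=400, w=21):
    """Contiguous-bucket scalar encoder: nearby values share on-bits."""
    value = max(min_val, min(max_val, value))            # clip into range
    n_buckets = n - w + 1
    bucket = int((value - min_val) / (max_val - min_val) * (n_buckets - 1))
    return {bucket + i for i in range(w)}                # w consecutive on-bits

a, b, c = scalar_encode(50), scalar_encode(52), scalar_encode(90)
print(len(a & b))   # 13 shared bits: 50 and 52 are semantically close
print(len(a & c))   # 0 shared bits: 50 and 90 are not
```

The "distributed buckets" variant mentioned above would scatter each bucket's bits pseudo-randomly across the space instead of keeping them contiguous, while preserving the same similarity structure.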
There are a lot of details to the encoding process that I won't get into, but I'll have some resources for you at the end of the presentation. This is a typical input space that we might be looking at with the next phase of HTM, which is called spatial pooling. And so another structure I need to introduce in the brain is called a minicolumn.
So, in a layer of cortex, the way the pyramidal neurons are structured is that they have this grouping into columns. So if we were looking at an input space like the one we were just looking at, an SDR that's changing over time (that's sort of what that green is), each column has a proximal input from that input space, and all of the cells, all of the pyramidal neurons within that column, share that proximal input. And columns become activated when their receptive field overlaps enough, when their connections overlap enough with the patterns occurring in the input space. I'll show you this in just a minute.
So this is all about proximal, feed-forward input, and the whole point of this is that it allows us to normalize the sparsity of the representation. We can take an encoding that may be very erratic and chaotic, and we can normalize it, and it provides a space for the next algorithm to run, the one that does sequence memory. So we are retaining the semantics. I'm going to do another demo here; let me get this connected. Okay, so this is called spatial pooling.
Okay, so remember that three-dimensional structure I showed you, of cells in a cube? This is like looking down on top of it: the spatial pooler columns. This input space is where the data might show up, and it might look like this; this could be a possible encoding that's coming from somewhere. I just want to show you how spatial pooling maps columns onto the input space.
So each one of these columns (and we can have many cells per column) has a specific relationship with the input space. For this column in particular, these orange boxes are its receptive field: those are the cells that it could create connections to, and the blue dots with the lines are actual connections. This is pseudo-randomly initialized, so that every one of these columns has some relationship with the input space, and it's a starting point for learning. I can even show you that there are permanences involved here as well.
So, for example, it's not just whether it's connected or not: we have a numeric permanence value associated with each of these connections, so that active column has a specific relationship with every one of these cells, including a permanence value. You can see the permanence value on that graph on the right: if it's above a certain threshold, we're going to call that synapse connected, and this is basically what the column's view into the input space is. Each one of them looks completely different.
It has a totally different viewport onto the input space, and these connections, these permanence values, degrade over time or increase over time depending on how well the column is matching the input that's coming in. That's the next thing: the learning rules. So this is going to be a similar visualization, but this time I'm showing you an actual piece of data that's showing up in the input space. This is an encoded SDR, and I'm showing you, at a certain time step:
these are the columns that are active, and this becomes important in the next topic of temporal memory. These columns are activating specifically because, for example, this column has enough of its connections with the input space overlapping with the current input that it has breached a threshold. So we basically take the top so-many columns that have the most overlap with the input, and we turn those columns on. And we can control the parameter, so we can specifically tell it what we want.
Okay: I want two percent density in this representation, or I can dial it up or down. We typically keep two percent, because that's what we see in the brain, and it works. So this column, for example, does not have enough overlap with the input space to become active, but these all do. That's how spatial pooling works. And over time, in the active columns, when they become active, their proximal segments are also updating.
The permanences are incremented, so those columns will continue to activate based on that type of pattern, or on patterns that have a high overlap with it, and the connections that don't match get decremented. So this is called spatial pooling. The whole point is to retain the semantics of the input, but put it into a space where we have more control over the representation's sparsity. Okay.
Okay, so the next big thing is called temporal memory. We've only talked about spatial stuff so far: all the proximal inputs are basically just spatial mappings of encodings onto columns, and the spatial pooling construct learns over time, learning to represent that data better. But within each of those minicolumns there's another process happening as it sees input over time, and this is what we call the temporal memory algorithm. Basically, this is that distal context, those distal connections I was talking about. The proximal is all about spatial input.
It maps to the input space. The distal connections give context to that particular spatial input, and they allow the cells to decide whether they think they might be the next to fire, based on the context they see. So a cell identifies the context through those distal connections, and these distal connections are happening between cells in the minicolumns of the structure, so they can all connect to each other if they need to. But it happens all within the minicolumn structure, and it puts cells into predictive states for the next time step, and I wanted to explain that quickly with this.
This is a comparison of the biological neuron model with, sort of, our software model of an HTM neuron. Here is the feed-forward input coming from the input space, and here is the distal context. One cell could be connected to one or many others; it can have one or many segments (this is all neuroscience terminology), and each segment could have synapses connected to a bunch of different cells.
Each one of these segments is basically a coincidence detector, so this cell could have a lot of segments. At each time step, if it's in an active column (scratch that: at any time step, whether it's active or not), it can look across all of its distal segments and decide if any of them breach a threshold. If so, it puts itself into a depolarized state, which means it thinks it's going to fire next. Question?
So, initially, there's a pseudo-random initialization. Actually, I don't think there are any connections initially, but there are, like, branches reaching out, you know. So there are permanences on these distal connections as well as on the proximal connections, but we start them all, I believe, disconnected, and they grow over time.
You learn the connections over time. But there's no feedback mechanism as of yet; that's a part of the theory, but we're not working on that part of the theory. The brain is really complicated, it turns out. So, I mean, honestly: we have spent years researching layer 2/3 of the neocortex, and there are six layers, so we're just getting into researching layers four and five and trying to figure out what they're doing. I'll talk a little bit about that, but the whole idea of feedback and hierarchy is still further out.
Yeah, I mean, when the predictions turn out to be right, that reinforces the connections that the cell had to those distal segments, and when it's wrong, it decrements them. So let me show you; this is sort of a big demo, but okay.
So what I've got here is a note sequencer. You can hear it playing C#, A, E, F#, over and over and over and over and over. It's a four-note sequence that I'm just playing through, resetting, and starting over.
This is the input space, in green, and there's a really, extremely simple encoding: I'm just taking blocks of the input and turning them on for each note. The bottom block, the one that's never showing up, is a rest. And this structure is the minicolumns of the spatial pooler. Okay, so let me spread this out; you can see the columns now, right?
A
These
are
the
active
columns,
these
active
columns.
If
you
look
closely
for
each
one
of
these
inputs,
the
same
columns
are
firing,
the
same
columns
are
becoming
activated
and
it's
representing
the
same
semantic
data
as
the
encoding
is
in
the
input
space.
So
that's
the
spatial
pool
are
working
now
I'm
going
to
show
you
cell
activations.
So
this
is
the
temporal
memory
algorithm
now
working
on
top
of
the
spatial
pool,
and
let
me
explain
a
couple
things:
there
are
two
ways
that
they
sell
within.
One
of
these
many
columns
can
become
active.
The first way: when an active column occurs, we look at each one of the cells in the column, and if none of them is in a predictive state, if the column is completely unpredicted, this means it's an unexpected input; nothing was predicting this to happen. Then we'll activate every cell. That's what you see right here. We call this bursting, and it's a neuroscience term; that's bursting. So every cell bursts, and what this represents is either the start of a sequence that we haven't seen before or a branch of a sequence.
So we were in the middle of the sequence, but something unexpected happened. So this is one way of sort of kicking off a sequence. The second way (and I'm going to turn on some different cell states here), the second reason that a cell might activate within a minicolumn, is if it is in a predictive state. So we'll look through all the cells in the minicolumn, and if any of them are in a predictive state, then we'll activate them. That means, basically, that the prediction was correct.
You know, because the predictive ones that I'm showing, these cyan ones, are in a predictive state based on the last time step. So these think that they're going to fire next; actually, they thought they were going to fire next, and then this input happened. They were all in the predictive state, and the active columns that they predicted were represented, so they become active. Yes?
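The two activation rules just described (predicted cells fire alone; unpredicted columns burst) can be sketched like this. This is a toy illustration of just that rule, not the full temporal memory implementation:

```python
def activate_cells(active_columns, predictive_cells, cells_per_column=32):
    """Apply the two temporal-memory activation rules.

    If an active column contains cells that were in a predictive state at the
    last time step, only those cells fire (correctly predicted, sparse).
    Otherwise every cell in the column fires: the column "bursts",
    signalling an unexpected input.
    """
    active, burst_columns = set(), set()
    for col in active_columns:
        column_cells = {(col, i) for i in range(cells_per_column)}
        predicted = column_cells & predictive_cells
        if predicted:
            active |= predicted          # correctly predicted: sparse firing
        else:
            active |= column_cells       # unexpected: the whole column bursts
            burst_columns.add(col)
    return active, burst_columns

# Tiny illustration: cell 7 of column 3 was predicted; column 9 was not.
predictive = {(3, 7)}
active, bursts = activate_cells({3, 9}, predictive)
print(len(active))   # 33: one predicted cell plus 32 bursting cells
print(bursts)        # {9}
```

Which specific cell fires in a predicted column is what encodes the sequence context, since different cells in the same column represent the same input arriving in different contexts.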
So yes: at every layer of depth of the minicolumn, yes, they can connect to any cell in that structure. Now, when you start talking about the biology, I mean, there are a lot of fuzzy lines, you know, because I've basically cut out a small section of cortex and not allowed it to play with any of the other cortex around it.
So there are some fuzzy areas here that we find hard to model, but for the sake of this model, any cell can potentially have a distal connection to any other cell. Also, those distal connections don't have to come from cells in this region, but that's a different story. Okay, so the second reason was because the cells were put into the predictive state. How do cells get into a predictive state? That's the next big question. So I'm going to show you: based on this input, these are the cells.
All of these other cells have no connections, but every one of these blue cells is connected to the cells that are currently shown. So if we move forward one step, you're going to see all of these blue cells turn orange. Right, so it was correctly predicted. Now these blue cells are predicting F#; we move forward, and those turn orange. At the end of the sequence there are no predictions, because, like I said, I'm just sending in four notes and cutting everything off, four notes and cutting everything off. Yes?
Well, for each time step, whenever a cell is activated correctly... but let me move forward in the example, and I think I can explain it a little better. So I'm going to keep playing this. I think it's learned it pretty well at this point, so I'm going to stop it. Okay, so we're at A, and all the E cells are about to be activated: they're predictive, and they're going to be activated. I'm going to change this now, and I'm going to make it a C#.
So what's going to happen? Anybody know what's going to happen when we move forward? Those E cells are not going to fire; those columns won't fire. Bursting. So at the next time step we see a completely different set of columns activate, columns that we didn't expect. We see C# columns activate, and that's never happened before, so bursting occurs, because we've got a new sequence. First we had C#, A, E, F#; now we've got C#, A, C#, I-don't-know. At this point, the sequence can branch.
So you can see half of them are incorrect: it was guessing C#, because that's the pattern it's been seeing, and the other half were wrong, and those are the E cells. So, your question about learning: there's punishment happening for these incorrect cells. So distal segments will grow to connect to more cells as cells activate because of proximal input; cells activate only because of proximal input.
The first thing that kicks them off is the bursting, and after they're bursting, we've got context going on, because we have to let it run for a while before segments start growing. So you've got to sort of think of it as a newborn brain: it's got no connections; it's got the wiring set up, but it doesn't have any reinforcement for anything.
So that's a mechanism of bursting, actually. I don't know a ton about this area of the theory, but bursting will activate those cells, and it also sort of randomly chooses one cell to represent that new pattern over time. So I think that's where you get those first connections starting, because that's reinforcing a connection to an existing cell. Right, so I think that's how it starts. Yes?
That's the thing: each state doesn't just show you the previous state; it shows you the history. So in this state that I'm in right now, we just show active cells. Let me go back to it. Well, here we're in the C# area, but these active cells right now are representing not just "C# following A" but "C# following A following C#." So that context of the entire pattern is followed through as patterns get created.
There are some other experiments I can show you, but I'm probably going to run out of time. Think about this: two four-word patterns. "Boys eat many cakes," okay, and the second four-word pattern, "girls eat many pies." The two words in the middle, "eat many," are spatially exactly the same. But suppose you were to train it on "boys eat many cakes," "boys eat many cakes," over and over, and then you train it on the other pattern.
"Girls eat many pies." And then you give it the spatial representation of "many": it will activate both the cells that mean "girls eat many" and the cells that mean "boys eat many." Does that make sense? Okay, all right. So I'm running low on time, and I want to take some more questions. So I just want to emphasize this: we believe these are the foundational components of neocortically inspired intelligence.
There's a lot to learn from the brain. Like I said, there are many parts of the brain, and we're talking about just one part, the seat of intelligence, which is the neocortex. And there's lots of stuff going on in the neocortex that we still have no idea about, and when I say "we," I mean humans.
So there's a lot more to learn here, but I think that we're on the right track toward understanding, at a very low level, how intelligence works and how we can build things with it.
We are currently working on research involving sensorimotor integration. Like I mentioned, movement: we believe that's a key component of intelligence. Layer 2/3 does this sequence memory that I just described here, but the same mechanism (the same distal connections, proximal connections, that same algorithm of cell activation) works in layer 4 to do allocentric object recognition. But there, the distal connections don't come from the same area of cells; they come from motor commands and other places. So we're researching that area, and we still think there's a lot that we can do here.
If you like these visualizations, they're all from a YouTube series that I did about how HTM works, called HTM School. You can Google it; it should be the first thing that comes up. These go into a lot of detail, and if this isn't enough detail, we have papers on numenta.com about this tech. We're constantly working on more papers about how columns and layers work.
Some of these are silly, but they should be entertaining. There's lots of stuff about SDRs, spatial pooling, and temporal memory. And remember, all of our code is open source. We've got a decent community that I'm sort of the head of; I'm always on our forums. You can go to numenta.org to learn more. If you have any questions, I will answer them, and I think that's it. One more thing.
[On] the methods we use for neuroscience research: we don't do the research; we read the research, and then our research is about how... okay, so the thing about theoretical neuroscience is that there are not many people doing theoretical neuroscience. Our founder, Jeff Hawkins, found that to be the case, you know, fifteen years ago, and he founded the Redwood Neuroscience Institute to do the research. And since then, there have been a lot more advances in how we understand, specifically, how neurons interact with each other.
So they read a lot of papers, and currently they're really interested in grid cells, location cells, place cells: these things that may denote locations of things in time and space. So we do try to influence what papers get written, but we're not the ones with the wet labs, and, you know, we have to leave that to the hardcore neuroscientists. We do have a neuroscientist on our team, but he helps us implement software to prototype things, based on how we understand intelligence to work. Any other questions? Yes?
Absolutely, yeah: a sequence that's seen over and over and over just gets wired in, because those permanence values on those distal and proximal connections just get weighted higher and higher. And it does forget over time, and those parameters are all tunable. You know, if you create a model, you can tell it to increment at a certain rate or decrement at a certain rate, distally and proximally.
Yes, definitely; this is an online learning system. This is something where, once you establish what the data format is and how it's encoded into SDRs, you can stand up a system that will handle that data format even as it changes dramatically over time, as the patterns change. If the data type changes, then we have to create a new model; but as the patterns change over time, it will adapt to them.
Some of it is theory, and some of it is not. A lot of this, some of the things I described to you, like cells going into a predictive state: the implementations of this stuff are at the limits of our discovery, and some of it is conjecture. Honestly, we think this is how it works.
Our biggest success has been anomaly detection for streaming data over time. So we've got a good set of parameters for, like, one model for a scalar, just, you know, a scalar piece of data that's changing over time. And as long as there are patterns that fit the human descriptions of time (hour of day, day of month, season: the type of stuff I was showing you), we do a pretty good job of identifying anomalies that occur, like in server data, or in power consumption, like you saw there. So that's probably our most promising application, and we do have at least one partner, called Grok (grokstream), that has an IT analytics product that runs NuPIC, which is the code base that is the HTM implementation. So definitely analytics and anomaly detection is a big one for us. Yes?
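The anomaly detection mentioned here is commonly built on a simple score: the fraction of currently active columns that the temporal memory failed to predict. A sketch (the function name is mine, but HTM's published anomaly score takes this form):

```python
def anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were not predicted at the previous
    time step: 0.0 means fully expected input, 1.0 means total surprise."""
    if not active_columns:
        return 0.0
    unexpected = active_columns - predicted_columns
    return len(unexpected) / len(active_columns)

print(anomaly_score({1, 2, 3, 4}, {1, 2, 3, 4}))   # 0.0: perfectly predicted
print(anomaly_score({1, 2, 3, 4}, {1, 2}))         # 0.5: half unexpected
print(anomaly_score({1, 2, 3, 4}, set()))          # 1.0: complete anomaly
```

In streaming applications this raw score is typically smoothed over a window before alerting, so that a single noisy time step does not trigger a false alarm.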
B
A
In our model, we treat all neurons within the structure as the same, but in a realistic model there's topology involved, so there will be more privileged connections to things that are closer. I've got some visualizations of topology I played around with a little bit. We don't even use topology now, because our data is not rich enough to take advantage of it, but it's definitely a crucial part of the theory. We can't throw it out, so we've implemented it; we just don't use it. Yes?
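The topology idea, closer inputs getting privileged connections, can be sketched as restricting each column's potential pool to a neighborhood of its center in the input space. This is an illustrative sketch under my own naming, not the NuPIC implementation.

```python
def potential_pool(column_index, num_columns, input_size, radius):
    """With topology, a column connects only to inputs near its
    'center' in the input space, so nearby inputs are privileged.
    Without topology, every column could sample the whole input."""
    # Map the column onto a center position in the input space.
    center = int(column_index * input_size / num_columns)
    lo = max(0, center - radius)
    hi = min(input_size, center + radius + 1)
    return list(range(lo, hi))

# Column 0 of 8 sees only the start of a 64-bit input; column 7 only the end.
print(potential_pool(0, 8, 64, 4))
print(potential_pool(7, 8, 64, 4))
```

This also shows why topology only pays off with spatially rich input: if the input bits have no meaningful neighborhood structure, restricting pools this way buys nothing, which matches the remark above about their data not being rich enough to use it.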
A
You're
welcome
to
try
that
might
be
a
good
approach
to
trying
to
develop
encoders,
because
our
encoders
have
had
millions
of
years
to
evolve
into
these
really
complex
systems
like
anybody
like
research,
the
cochlea
I
thought
this
would
be
easy.
It
is
not
easy,
there's
a
lot
of
stuff
going
on
in
your
cochlea
that
creates
those
activation
patterns
that
your
cortex
needs,
and
it's
not
simple.
Somebody
had
pushed
from
here,
so
the
back.
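An encoder's job, as discussed above, is to turn a raw value into the kind of activation pattern the cortex (or an HTM model) consumes. For a scalar, the standard HTM-style approach is a sliding block of active bits, so that nearby values share bits and distant values don't. A minimal sketch, with the defaults (`n=64`, `w=9`, range 0–100) chosen only for illustration:

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n=64, w=9):
    """HTM-style scalar encoder sketch: a contiguous block of w active
    bits whose position in an n-bit array reflects where the value
    falls in [min_val, max_val]. Similar values -> overlapping bits."""
    value = max(min_val, min(max_val, value))   # clamp into range
    num_buckets = n - w + 1
    bucket = int((value - min_val) / (max_val - min_val) * (num_buckets - 1))
    bits = [0] * n
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits

overlap = lambda x, y: sum(i & j for i, j in zip(x, y))

a = encode_scalar(10)
b = encode_scalar(12)
c = encode_scalar(90)
# Nearby values share many active bits; distant values share few or none.
print(overlap(a, b), overlap(a, c))
```

The overlap property is the whole point: semantic similarity in the data becomes bit overlap in the SDR. The cochlea comment above is about how hard this gets for richer signals like sound, where the "similarity" you need to preserve is far less obvious than distance on a number line.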
A
Yeah, absolutely. The sequence memory happens in layer 2/3. When you get input to a cortical region, all of the layers are going to get that input, one way or another, at the same time, and they work together to try and put the picture together. We're just modeling the sequence memory; that's what I showed you today. But once we add more layers to this model, like to do object recognition...
A
Yeah, but it goes off track really easily, so we don't typically do that. We can, though. I don't know exactly how we do this, but we use typical classifiers to try and get output out of the system based on cell activation. So a predicted state like I showed you should be mapped to some encoding that we can then turn into a value, based on how it got encoded. I don't totally understand how all that jazz works, but we can predict not only one time step in the future.
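The decoding step described above, turning a predicted cell state back into a value, can be sketched as a simple lookup classifier: remember which values co-occurred with which column activations during learning, then map a predicted activation to the best-matching remembered value. This is an illustrative stand-in with hypothetical names (`OverlapClassifier`), not the classifier NuPIC actually uses.

```python
class OverlapClassifier:
    """Toy decoder: maps a set of active/predicted columns back to the
    value whose remembered activation pattern overlaps it most."""

    def __init__(self):
        self.patterns = {}   # value -> set of columns seen active with it

    def learn(self, value, active_columns):
        # Associate this value with the columns that fired alongside it.
        self.patterns.setdefault(value, set()).update(active_columns)

    def decode(self, predicted_columns):
        predicted = set(predicted_columns)
        # Pick the value whose stored pattern best matches the prediction.
        return max(self.patterns,
                   key=lambda v: len(self.patterns[v] & predicted))

clf = OverlapClassifier()
clf.learn(10.0, {1, 2, 3, 4})
clf.learn(20.0, {7, 8, 9, 10})
print(clf.decode({2, 3, 4, 5}))   # best overlap is with the 10.0 pattern
```

Because decoding is just pattern matching, a predicted state several steps ahead can be decoded the same way as one step ahead, which is consistent with the multi-step prediction mentioned above.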
A
That's a good point, and we've experimented with that. This method is not good at just static image classification. Your brain is really good at image classification, but this particular model that we have is not; something else is going on that enables that. But the idea of a saccade, which is the way your eyes move so many times per second without even knowing it, is important, because that's how you analyze a scene. Your retina gives you a high-resolution image of the very center of what you're seeing, and it's very low resolution otherwise; that's why your eyes are always bouncing around. But in order to get this right, you have to have motor commands involved.
A
They
have
to
have
motor
commands
feedback
because,
if
you're
bouncing
around-
and
you
don't
have-
that
your
core
of
your
cortex
is
going
to
have
no
idea
what
those
images
mean,
there's
going
to
be
random
images
that
it
can
never
put
together.
But
if
it
has
a
motor
command
to
know
where
your
body
orientation
is
and
where
your
eyes
are
moving,
it
can
tell
because
it
told
it
to
move
there.
A
Yeah, let me correct that: it does a rank order of the columns that are active. So this isn't a predictive state; this is just a column activation based on a spatial pattern. There's not a prediction occurring here; this is just a mapping. Now, the mapping does change over time, because the mapping improves as it learns what portions of that input space are lighting up, and different columns get customized to look at different features in that input space.
A
So
all
these
columns
are
going
to
have
overlapping
receptive
fields
to
the
input
space,
and
some
of
them
may
be
representing
the
exact
same
thing,
there's
not
usually
exact,
but
something
very
similar,
but
we
can
enforce
that
sparsity.
Just
by
drawing
that
line
of
how
many
columns
we
are
how
many
of
the
top
X
columns
we
want
to
turn
on
based
on
the
input.
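That "top X columns" rule is a k-winners-take-all step: score every column by its overlap with the input, then activate only the k best, which fixes the sparsity regardless of how strong the input is. A minimal sketch (function name mine, not the NuPIC API):

```python
def activate_columns(overlaps, k):
    """Spatial-pooling style sparsification: given each column's overlap
    score with the current input, turn on only the top-k scoring columns.
    Returns the active column indices in ascending order."""
    # Rank columns by overlap, highest first, and keep the first k.
    ranked = sorted(range(len(overlaps)),
                    key=lambda i: overlaps[i], reverse=True)
    return sorted(ranked[:k])

# Ten columns at 20% sparsity: only the two best-matching columns fire,
# even though several others also overlap the input somewhat.
scores = [3, 9, 1, 7, 4, 0, 8, 2, 5, 6]
print(activate_columns(scores, 2))
```

Fixing k rather than a score threshold is what keeps the representation at a constant sparsity, the property the SDR machinery depends on.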
A
So what I've shown you here is one layer, called layer 2/3, and that was chosen, I think, because we knew it was involved in sequence memory, and we knew sequence memory was really important. It's something that current AI techniques aren't necessarily very good at: temporal sequences over time. We do really awesome at spatial recognition, but finding anomalies in things over time is harder. Of the types of methods used for that today, LSTM is one of the big ones.
A
Now, LSTM does that type of analysis, but even then it has to be batch trained again; over time those patterns change, and we know the brain doesn't have to be batch trained. So we were trying to find what part of the cortex included sequence memory, and it was layer 2/3; it's commonly thought in neuroscience that layer 2/3 is doing a sequence memory operation. As for layers 4 and 5, and I am NOT a neuroscientist, I think 5 is involved with motor command output, and 4...
A
We
think
it's
closed,
so
there
are
some
differences
between
the
different
layers
in
the
cortex
like,
for
example,
2/3
has
higher
cell
oxidations
than
layer
4,
that's
doing
object,
recognition
and
I
think
we
have
ideas
about
that.
But,
honestly,
this
is
a
research
area,
so
we
have,
in
addition
to
our
open
source
code
bases,
that's
running
every
all
the
visualizations
that
you
just
saw.
We
have
research
repositories
that
are
open
source
too,
so
you
can
see
the
type
of
work
we're
trying
to
do
with
sequence,
memory
and
sensory
motor
integration
as
well.