From YouTube: AI / Neuroscience Chat! Pooling in Neural Networks in silicon and in your brain - (artificial intell
Description
Spatial pooling, maxpool, CNN, HTM, temporal memory, bursting, boosting, etc. -- Watch live at https://www.twitch.tv/rhyolight_
Alright, let's get started. Thanks for watching. I am Matt Taylor from Numenta. This is, I think, only the second AI chat that we've done. I did one last week, yeah. So this is only the second AI/neuroscience chat that I have done, and I even have a little chat command for it, "ai chat", and I just kind of described what it is. Today we're going to be talking about pooling.
Pooling: is there a way to describe it? I guess, in deep learning, I only know one way of describing it, and let me show you what I'm talking about here, on the screen. So this is the deep learning idea of pooling that I understand, and that is: you have an input space. I like to call these input spaces.
I mean, these really represent neurons. If you're trying to relate deep learning to neuroscience, these bits really represent the neurons, or synapses to another neuron. You know... "you know how the pooling sucks"? Why does pooling suck? I guess I don't understand that; I've never heard anyone say pooling sucks. It's actually pretty cool when you understand what it's doing. It's a cool trick that your brain is doing to take information in one space and translate it into another space.
You basically squelch down the feature space, and in this case, I think this is probably showing an example of max pooling, which is: you basically look in that space and you take the max value, the unit with the maximum value, right, and that's the one that's represented.
But if you think about this, you could do this in lots of different ways. I mean, you could apply any function over that space and transform a whole bunch of units into one unit, right? That's pooling, whether you're doing max pooling, min pooling, mean pooling.
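The point that pooling is just some function applied over a window can be sketched in NumPy (a toy helper, not from any particular framework):

```python
import numpy as np

def pool2d(x, size=2, fn=np.max):
    """Apply `fn` over non-overlapping size x size windows of a 2-D array."""
    h, w = x.shape
    # Trim so the array tiles evenly, then reshape rows/cols into windows.
    x = x[:h - h % size, :w - w % size]
    windows = x.reshape(x.shape[0] // size, size, x.shape[1] // size, size)
    return fn(windows, axis=(1, 3))

feature = np.array([[1, 3, 2, 0],
                    [4, 2, 1, 1],
                    [0, 0, 5, 6],
                    [1, 2, 7, 8]], dtype=float)

max_pooled  = pool2d(feature, fn=np.max)   # keep the strongest unit per window
min_pooled  = pool2d(feature, fn=np.min)
mean_pooled = pool2d(feature, fn=np.mean)
```

Swapping `fn` is all it takes to go from max pooling to min or mean pooling; the transformation from many units down to one unit per window is the same.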
You know, map it in a different way. So here's the way I would draw it, and you can have lots of pools. There could be a pool up here, and then, you know, another stream leading into that. There could be a pooling here, a pooling here, there could be another stream, another pool, and each one of these pools might mean something or represent something different, depending on how it has been mapped onto the stream, sort of like the space that's being pooled into it.
You might see this space... I like to call this... If you're talking about a layer of neurons, you could think about this in a deep learning manner as well: you could say a deep learning layer, or a cortical layer. Let's just say a layer is a bunch of neurons that are all working together, a population of neurons. These neurons... I mean, they have input, right?
So this is some topological space, and it's, you know, really similar to the convolved feature; that's also a topological space, right? So in this topological space, what we're doing differently is: we're not going to say, for this one neuron (let's just say there's one neuron here), "you're connected to one quarter of it", or "this little portion here is connected to that little portion". We're going to do it differently. That's a start. That's a start!
So what we're going to do, and this is what biology tends to do, is: each one of these... say we've got each neuron; for simplicity's sake, I'm just going to use one example of a neuron here. Each neuron has a potential pool of inputs that it might be connected to. In deep learning networks, typically, you have dense connections between a unit in one layer and the units in the layer previous to it.
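That contrast can be sketched in a couple of lines (a toy illustration; the 50% potential fraction is an assumed value, not a quoted one):

```python
import random

random.seed(42)  # deterministic toy example

n_inputs = 100
potential_pct = 0.5  # fraction of the input a neuron could ever reach (assumed)

# Dense (deep-learning style): the unit can connect to every input.
dense_pool = list(range(n_inputs))

# Potential pool (biology/HTM style): only a fixed random subset of the
# input is even reachable by this neuron's dendrites.
potential_pool = random.sample(range(n_inputs), int(n_inputs * potential_pct))
```

The dense unit sees all 100 inputs; the biological-style neuron can only ever form synapses inside its 50-input potential pool.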
Hiya, Caleb51, your first time here. I'm talking about neuroscience and AI, specifically pooling, and I'm sort of relating the max pool idea in convolutional neural networks to how it really works, sort of, in the brain. So I'm talking about a population of neurons in the cortex getting sensory input, proximal feed-forward input. The important thing is: this is going to be input. Every cell in here is going to get input from this space, and this is feed-forward.
So what neurons typically do is... neurons grow, right? They grow from one point to another point, so they're not connected to all of them. There's some potential amount of connections, where this cell's dendrites are close enough to the other cells' dendrites, or axons, to make synapses, right? And it's actually these axons that are growing up into, I think, the dendritic branches of these neurons. So we're talking about axons from this layer and dendrites from this layer, I believe.
...how the brain works in the neocortex, and the thalamus. We're trying to understand the common cortical circuit in the neocortex, which involves the thalamus. Anyway, I'm talking about pooling because pooling is a crucial concept that we have to understand: spatial pooling. So I'm going to get to spatial pooling. This is basically what I'm describing here, spatial pooling, and in spatial pooling this would be a minicolumn, which is a small column.
Did you join the HTM Forum? That's the best place to join, right there. That's a very active community of people. You're welcome to come and ask questions; stupid questions are fine, and if you have a question here, I'm happy to take it. That's why I do these AI chats: I just want to engage the community, give you guys a chance to ask any questions live, and I'll keep doing it on Mondays at 1:00. "Pyramidal."
Let me spell that out for you. It is like a pyramid: pyramidal, like this. So a pyramidal neuron is what is typically... pyramidal cells, pyramidal neurons. These are typically what we think does most of the hardcore computation, I guess, of the brain. Here's sort of what one looks like. So there are inhibitory neurons and there are pyramidal neurons, and pyramidal neurons, I believe, are always... or, excuse...
...me, are never inhibitory; they're always excitatory neurons. Yeah, okay, I see: "you explained advanced knowledge". I guess... I don't know where... I'm going to try and meet you guys wherever you're at, so I'm happy to step back (whoops), step back and explain some things. But I'm talking about our model of pyramidal neurons. However, I'm also talking about minicolumns, and we believe these minicolumn structures are most likely enforced by inhibitory...
...neurons that activate in the same layers as the pyramidal neurons. So, I mean, there's a lot more than just pyramidal neurons going on in your brain. If you took out everything but pyramidal neurons, your brain wouldn't work, but we're focusing on modeling the pyramidal cells, like you see here, with...
...all the inhibitory cells too. So, I mean, we'd need to model the whole brain, but for the most part, these are the ones that seem to be holding the computations, or holding memory, while the inhibitory neurons perhaps enforce a structure that they can compute within. Okay, so I'm trying to explain pooling. So the tricky thing here is... it's a lot easier when you think about the convolution, because it's really simple to think about.
One of these minicolumns that I have drawn in blue here, one of these might project (and this is just my speculation) topologically onto the space, so it's not going to project across the whole space. There's usually a localized inhibition happening here, so this minicolumn is sort of competing with its neighbors in its network, right up here, about which one is going to represent the activations here. So you have to also remember that in this space, there are neurons firing too.
So at every time step... this isn't a very good graphic here, but at every time step (let me go back to the whiteboard, sorry), every time step, we're seeing an activation pattern in this input space, just similar to how you would see an activation pattern over here in the convolved feature.
You'll see an activation pattern in this input space, but each of the minicolumns is going to have localized connections, and we call this local inhibition, where you might have a minicolumn over here that projects into this space, and a minicolumn over here whose receptive field is in this space. But we're breaking it up in a way that we don't have to... like, if you look at the convolved feature, we don't have to have less space; we can make this structure as big as we want.
So one of the ideas of spatial pooling is to change the representation here: transform this representation using these minicolumns' receptive fields, so that the minicolumns will start to represent things in here. And, like I said, this is not a dense connection. It's not going to connect to every single... it will never connect to all these neurons in here. It only connects to a percentage of them. That's the way it works in biology. I mean, this...
A
This
mini
column
will
never
have
the
opportunity
to
connect
to
all
these
neurons
because
they're
too
far
away
I
mean
it
does.
It
just
doesn't
happen
like
that
in
your
brain,
so
all
of
these
mini
columns
are
essentially
pooling
features
pooling
things
that
they
see
in
here
and
representing
them
when
they
see
that
that
thing
occurring.
A
So
if
you,
if
you
look
at
this,
how
this
we
saw,
we
can
sort
of
randomly
set
up
these
initial
connections,
rights
from
from
one
mini
column
to
an
input
space
and
if
we
just
randomly
set
them
up
so
there's
a
potential
pool
and
we
just
randomly
set
the
weights
of
the
connections
around
some
Gaussian
distribution
so
that
they're
almost
connected
then,
as
soon
as
we
start
streaming
information
remember,
remember,
I,
said
stream.
That's
super
important
stream!
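A minimal sketch of that initialization, assuming a permanence threshold and a Gaussian spread around it (illustrative values and names, not Numenta's defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

n_columns, n_inputs = 8, 64
potential_pct = 0.5        # fraction of the input each column could ever reach
connected_threshold = 0.2  # permanence above this counts as connected

# Each column's potential pool: a random subset of the input space.
potential = rng.random((n_columns, n_inputs)) < potential_pct

# Permanences drawn around the threshold, so synapses start "almost connected":
# a little learning pushes them over or under the line.
permanence = rng.normal(loc=connected_threshold, scale=0.05,
                        size=(n_columns, n_inputs))
permanence = np.clip(permanence, 0.0, 1.0) * potential

# The connected synapses are the ones that actually carry input.
connected = (permanence >= connected_threshold) & potential
```

Because the Gaussian is centered on the threshold, roughly half of each potential pool starts connected, and the rest sits just below, ready to connect as streaming input arrives.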
That's another difference I'm going to point out here. As soon as we start streaming information through this, it's going to be spatial information by nature, but the fact that one point comes after the other is super important. That's the temporal nature of the world. You have to think of this as a sensory array, not necessarily just an image that we're looking at as a part of a million other images that we're trying to classify. That's not how your brain works. That's how deep learning networks work. Yes: that's not how your brain works.
Your brain is expecting each image that comes, or whatever it is, each sensory array, to logically follow the next. There's a reason this activation comes next; there's a reason the next activation comes next. It may be because you're moving the sensor through space; it may be because an object is moving against the sensor in space.
You know what I mean. So what you end up having here is: you get a set of activations in your input space, and you will get a set of minicolumns that activate here (let's say there's another one), and because of these activations that occurred, we have another set of activations in this other layer, in this population of cells, that now represents this data. So we have transformed the data out of this space and into another space, and in this space, one of the important things is, we can control it.
This is sort of like the basics of how a spatial pooler works in neurons, and we have simulations of all this. This is tried-and-true, software-tested; we've got this all in software. It works, it works, it totally works. And what this gives you is a way to represent semantics. This is sort of like how information gets encoded from your senses into your brain, and in a way, you've got...
...encoding coming out of your senses, out of your retina, out of the optic nerve, auditory nerves, coming into your brain, into your thalamus, and once it gets up into cortex, we have to translate it. It has to transform to be represented in cortex, because the cortex, the neocortex, is where you're modeling objects based on the sensory input. And so this is sort of the first step of modeling those objects: taking the sensory input and transforming it into sort of a dimension that the brain can control, if that makes sense.
Let's see. So this is spatial pooling, and there's more to it than this. There's boosting, which I'm not going to talk about... well, yeah, we'll talk about that. So one of the problems that we face by applying spatial pooling in this way, and it's a problem your brain has found a solution for, is... I wouldn't call it... it's a little similar, I guess you could say it's similar in concept, to overfitting.
A
Homeostatic
homeostatic
equal
up,
but
basically
it's
homeostasis
in
neurons.
How
do
you
say
it?
It's
some
type,
there's
a
freeze,
that's
home!
You
know,
static,
plasticity,
I,
think
this!
Is
it
the
capacity
of
neurons
to
regulate
their
own
excitability
yeah
right
relative
to
network
activity?
So
so
this
idea
of
spreading
out
the
meaning?
Okay,
is
something
that
we
had
that
the
brain
seems
to
do
and
there's
papers
about
this
and
mechanisms
of
doing
it.
"What sort of activation function...?" Okay, so we're talking about activation functions. We call this... or, if you're going to look it up, look for "k-winners" or "winner-take-all". K-winners, or something like that, is the equivalent activation function that you would use in a deep learning network, because essentially, you might have a thousand, two thousand minicolumns up here, and in response to this...
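A minimal k-winners activation (a hypothetical helper, not any particular library's API) can be sketched like this:

```python
import numpy as np

def k_winners(overlaps, k):
    """Return a boolean mask selecting the k columns with highest overlap."""
    winners = np.argsort(overlaps)[-k:]       # indices of the top-k overlaps
    mask = np.zeros_like(overlaps, dtype=bool)
    mask[winners] = True
    return mask

# Toy overlap scores for eight minicolumns; only the top 4 stay active.
overlaps = np.array([3, 40, 12, 38, 5, 37, 1, 38])
active = k_winners(overlaps, k=4)
```

Everything outside the top k is inhibited to zero, which is what makes the resulting activation sparse.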
Episode seven talks about this, this mapping to the input space, and episode eight talks about learning. So episode seven, at the end of it... well, it'll talk about... I don't even talk about boosting in that one, do I? There's a whole other episode about boosting. I think that's episode nine. It's in episode nine! That's what it is. Hey.
Boosting works... instead of... so, usually what happens is: in response to some stimulus, some activation pattern in the input field, you've got, you know, all these thousands of minicolumns, and we stack-rank them based on their receptive field. They've got a certain amount of connections open over here, right, and their overlap with the bits in the space that they're covering, right. So they're trying to pattern-match in the space, looking for a lot of activations in their receptive field, and so you might get one that's got...
Oh, I've got... this column has 40 overlap, this one has 38, this one has 38, this one has 37, and we'll stack-rank them. All of these columns will be ranked by overlap, and that is the overlap of its potential connections, its receptive field of synapses that have already been established, right (connected synapses), with the input coming across those synapses. And what we'll do is, we'll stack-rank them and then we'll draw a line.
We'll pick... we'll say: I only want K winners to represent this input, and that's what we call k-winners, and that is inhibition. In HTM, all of this is inhibition. I'm expressing this as a local function, so there's local inhibition. So there are groups; you know, each minicolumn has a neighborhood of columns...
...that it's competing with and getting stack-ranked against, right, because that's what you need if you have topological input. But a lot of times you don't have topological input, if you're just looking at scalar data, for example, which is a big use case: streams, temporal streams of scalar data. If you just look at a server and its metrics over time, that's all streams...
...of scalar data. BeatAlice, thanks for following, I appreciate it. Awesome, I'm glad you guys are enjoying this content. So I'm talking about local inhibition, but there's also global inhibition, and just imagine that in global inhibition, the potential pools of each minicolumn spread across the entire input space. So, basically, there's one neighborhood: all of the minicolumns are competing with each other every time. That's global inhibition.
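The local-versus-global distinction can be sketched as two ways of running the same top-k competition (a toy 1-D column layout; the function and parameter names are illustrative):

```python
import numpy as np

def winners(overlaps, k, radius=None):
    """Pick active columns by inhibition.

    radius=None -> global inhibition: one neighborhood, top-k overall.
    radius=r    -> local inhibition: a column wins if fewer than k columns
                   within r positions of it have a higher overlap.
    """
    n = len(overlaps)
    if radius is None:
        top = np.argsort(overlaps)[-k:]
        return sorted(int(i) for i in top)
    active = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        hood = overlaps[lo:hi]
        if np.sum(hood > overlaps[i]) < k:  # i survives its neighborhood
            active.append(i)
    return active

overlaps = np.array([5, 9, 1, 8, 7, 2, 6, 3])
global_active = winners(overlaps, k=2)            # top 2 anywhere
local_active  = winners(overlaps, k=2, radius=2)  # top 2 per neighborhood
```

Global inhibition picks only the two best columns overall, while local inhibition lets activity survive in every neighborhood, spreading the representation across the space.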
Okay, and like I said, if you want to see that, go to that HTM School link, and there's a video about boosting there. "What will happen if the input is empty?" Nothing will fire. These cells up in this layer that we're simulating will only fire in response to activations in the input space. If there's nothing there, they don't do anything.
It's like... nothing happens. Compute stops; I mean, there's nothing to do. And in the real world, that doesn't exist. There's always something. If you're awake, you're receiving data. Even if nothing's going on around you, you still have sensation, right? You can still feel. Like, close your eyes: I can still feel where my hands are. I still have sensors that are giving input constantly. So you have to think about this as a stream over time.
This is not just a spatial classification task that we're talking about. When we're talking about HTM, hierarchical temporal memory, we're talking about a stream of temporal data flowing across this input space, and this layer responding and learning to it in real time: online learning, completely online learning. This is something that deep learning networks cannot do today.
...a small percentage of these minicolumns will start representing a large percentage of the spatial patterns that come across. And that's why I was talking about homeostatic plasticity, because your brain solves this in a certain way, and we're trying to do it in software in a similar way.
So, oh, so what we do, and this is a little compute-heavy, right, is: you have to keep a duty cycle for each one of these minicolumns in here. So, essentially, for every minicolumn... we typically have 2048; we'll say 2048 minicolumns is a typical size of an HTM network that we'll create and build experiments on.
Tess asks: "Doesn't DropBlock perform spatial pooling?" DropBlock... drop block... I don't know what DropBlock is, sorry, so I'm not sure. And, by the way, "spatial pooling" may be a bit of an overloaded term. You know, these are both general terms, "spatial" and "pooling". So DropBlock might be spatial dropout.
Oh, I don't know what spatial dropout is either. As far as I know, this concept of spatial pooling is a unique discovery that's a part of the HTM canon, so I don't... I've never seen anyone else doing something like this.
Spatial dropout, that might be a technique that's used in deep learning networks to try and sparsify the network, because deep learning folks have recognized that sparsity helps; it helps especially against noise.
So there may be a tactic called spatial dropout, but I'm not familiar with it. So, anyway: to apply boosting, you have to keep an active duty cycle for each one of these columns, and an active duty cycle... just imagine a time window. Say I'm going to look at this one column, this one minicolumn, over time. So you'll just keep a little counter, and you'll say: for the last hundred time steps, how active has this column been? That's basically it. You're just going to check:
Is it active now? And then add it and, you know, tally it up, and give an indication. We call this ADC, active duty cycle. So we're going to keep an active duty cycle for each one of these, and the boosting step will then do a computation based on all of the active duty cycles and come up with a boost... we call it a boost factor. Boost factors, one for each column, and we just multiply that column's overlap by it. I'm not exactly sure how it does this in the code.
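Since the exact in-code formula isn't recalled here, this is a hedged sketch of the duty-cycle bookkeeping plus one common exponential boost-factor rule (names and constants are illustrative, not NuPIC's actual API):

```python
import numpy as np

def update_duty_cycles(duty_cycles, active_mask, period=100):
    """Moving-average tally: how often has each column fired lately?"""
    return (duty_cycles * (period - 1) + active_mask) / period

def boost_factors(duty_cycles, strength=2.0):
    """Boost quiet columns, damp overactive ones (exponential form, assumed)."""
    target = duty_cycles.mean()
    return np.exp(strength * (target - duty_cycles))

# One column is hogging the activity; two have been silent.
duty = np.array([0.30, 0.10, 0.00, 0.00])
boost = boost_factors(duty)
# Multiplying each column's overlap by its boost factor nudges the
# k-winners competition toward the columns that have been quiet.
```

Columns above the average duty cycle get a factor below 1, silent columns get a factor above 1, which is exactly the "spread the meaning out" effect described above.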
It looks across the active duty cycles, because you want to prevent some of the columns from being too active. So you'll prevent the columns that have been very active from representing new input, and you'll pass off that semantic meaning to another column that has not been active. So you're basically just keeping... because, like I said, otherwise you'll get... because you can represent a lot of information with these minicolumns. They can represent a lot of spatial input,
...a lot of spatial patterns, and you'll get the situation where you've got just a few of them representing a lot of spatial patterns, and you don't want that, because you want to enforce entropy in this layer so that you can spread the meaning out through all of those minicolumns. Marc Brown is here; good to see you, Marc. I was just talking about boosting, homeostatic plasticity. So: we were talking about pooling, I talked about local inhibition, I talked about minicolumns and how they map to input space.
We talked about k-winners. We talked about active duty cycles, boost factors. So, if there are any other topics... I'm going to go on to the idea of temporal pooling, or object pooling, next. But one of the key concepts that you really have to understand, that's different from deep learning if you're coming from the machine learning and deep learning world (I keep hitting on this again and again), is that this is a stream of data. It's a stream of data!
The interesting thing is... I like explaining bursting, so I'll give you a little teaser. If a pattern comes across here... so, at every step, there are cells in these minicolumns that are predicting what they're going to see next. Okay, that's the idea of temporal memory, which I haven't touched on yet. But there are going to be cells that are active up here, like right here, and that means they're not trying to represent...
...what's here; they're representing, based on everything they've seen in this space, what they think is going to happen next. What pattern is coming next, right? So if it's wrong, if this cell is wrong, and you get the next input step and there's a totally different input that comes across here, a really interesting thing happens: every cell in the column then turns on. And you see this in the brain. This is called... I think they call it... I don't know if they call it bursting in the brain or not.
And as that pattern replays itself, it locks in to one of these neurons, which will decide to represent that pattern over time, as you see it over and over, and then the next time you see it, that's the cell that will predict, right? That's how you learn. That's how you create memories. Well, one of the ways in which you learn; it's a really crucial part of how you learn.
Honestly, I don't know how you could learn without a mechanism like this, in some way, that takes sensory information over time, which is constantly flowing into your mind, into your brain, specifically through your thalamus into your neocortex, and just tries to understand what's happening all the time. And if something unusual happens, like a bunny pops up in front of me, all my minicolumns are going to burst, because that's never happened before. That wasn't what I was predicting, you know? Now...
I think it will have something... so, tonic: you're going to love it. I've got to get this podcast edited, Marc, because we talked about tonic and burst modes with Subutai, and now I can't remember all the details about it, but it's important for sure. Okay, I'm going to... we're going to go on to a different type of pooling. So, assuming that we understand this idea of...
So, in response to that, you'll have some minicolumns that get activated in here; this is how I typically draw them. And, you know, there will be cells... we haven't done temporal memory yet, but there will be cells in here that are active, and those represent sort of the context of that spatial input. We'll talk about that when we talk about temporal memory. "How is the information returned back?" So, that's a tricky question.
How is it returned back? Returned back to... are you talking about feed-forward? Are you talking about feedback to...? Because information sort of flows from sensory through your brain, up, you know, essentially through the hierarchy, and motion information emits out of this. So I haven't really talked about this, but every single cortical column in your brain outputs a motion signal, a movement signal, which flummoxed scientists for a long time, because: why would you need that in every single part of your neocortex?
But it does. It seems like every part of your neocortex, including, you know, your ears, your eyes, all of that, is creating some motion signals. So you're sort of constantly putting together a motion pattern for your whole body, all the time, right? So there's always this constant output, and it's not coming from this particular layer; it's coming from a different layer. But let's ignore that.
It's like you'd have to have access to so much information, in a global way, to run that algorithm and do that error correction. We don't see that there is the connectivity, I think, to do that. Maybe it's done through apical... credit assignment; that's what I think. But anyway, I'm getting off-topic. "If you just saw the bunny for the first time, you pool it, but you can draw later if you want." Oh, you mean if you've never seen a bunny before?
I mean, even if you saw a bunny and you'd never seen one before, you'd immediately know some things that were similar to it, right? That's crucial. That's because that bunny that you saw with your sensory input has semantic overlap with other objects that you've already learned in your neocortex. So even without learning it before, you can already relate to it. You can already relate to it, and you can compare it to things that you already have representations for...
...in your brain. So yes, there would be bursting. There'd be bursting because it's out of context: you've never seen a rodent appear in front of you on your desk, or whatever, before, and also that one looks a little odd, and you've never seen a rodent that looks like that, with long ears and a little bitty button tail, right? But that's something you would have to learn, and you'd have to inspect it.
You know, you'd have to look at it. And if you just had, like, a flash of it and you weren't able to inspect it, like explore it... because all of our intelligence, I believe, is based on the exploration of space, using your sensory input and coordination with your movements to explore space. You have to know how you move through space and how the objects respond to your movements in space. Anyway, I think I'm getting off-topic, but that's...
So we can get off-topic if you want to, but I want to talk about another type of pooling. So this is a little bit of a trickier type of pooling, because I'm jumping ahead. I'm going to talk about this idea of temporal pooling, and we haven't really talked about temporal memory yet, so I think the easier way to think about this is to talk about object pooling. And one of the key ideas here is this circuitry that we have in our cortical column.
So not only do you have this layer; you have another layer, and you have another layer. There are something like 10, 12 different layers, depending on who you ask. I think there are six easily differentiable layers in your neocortex, but then there are multiple different cell types in many of the layers, and so each one of those could be said to be doing something different, and maybe you can treat each different cell type in a layer as a different layer itself, since they have different compute. I mean, there are even going to be layers down...
I'm not going to explain that right now, but essentially, it activates different cells in each minicolumn, depending on the context over time in which it's receiving the input. So, essentially, it's storing sequences of spatial patterns. So, using this mechanism, which I'll explain in another AI chat, probably, you can identify... you can make predictions. That's the key thing: you can make predictions about what you think is coming next, and the predictions will be in the context of all the other sequences that you've ever seen.
But the hard thing here is that you don't know what the sequences are. That's the tricky thing: you have the predictions, but to dive into this layer and identify all the different sequences that this particular activation might, right now, be playing out, based on your experience, is particularly hard. Let's see, Mr. Chat here: "Is explainable AI possible with HTMs also? How would it go about explaining decisions being made?" Okay, so I'll take that question. Explainable: so, definitely, I think it's possible, definitely, and in fact, I...
...don't think that there... that's, like, our mission. Our mission is to understand how intelligence works in the neocortex, in the brain, essentially. So how much more explainable can you get? That's our mission: to explain AI, to explain intelligence, so that we can make intelligence. That's crucial. So, absolutely, it's explainable. That being said, there's this idea of reality...
We haven't talked about this, I don't think. I'll bring this up, because it really hit me when I read it. So I read a book by Max Tegmark (actually, Jeff Hawkins suggested it, so I read it), and it was called Our Mathematical Universe, something like that, and he has this idea of three different realities, and this totally hits home for me. When you're talking about AI, this is a great way to think about it. You've got three different realities. First of all, there's, like, true reality.
There's external reality, he calls it: the reality that's out there. Everybody observes it; everybody has a window onto that reality, but nobody has a complete view of that reality, right? Then there is internal reality. So I have my own internal reality, represented in my brain. You have your internal reality; it's represented in your brain. There is no way for me to completely share my internal reality with you. The way we do this is through a third type of reality that Max Tegmark calls consensus reality.
That's the reality that we're both existing in right now. It's the reality in which I'm speaking words to you and you're understanding those words, or trying to understand those words and making an internal representation of them, as you're trying to understand these concepts. So I have this internal reality that I'm trying to project through the consensus reality, through education and through this Twitch channel, for example, so that all of you can update your internal realities to have a better understanding of my internal reality, right? Does that make sense? So, think about AI.
My internal reality could never be interpreted by you. There's no way, unless you took my head off and put it on your neck, and in that case, who are you anymore, right? There's no way. I could freeze all the neurons in my brain, I could map them all out, I could put them all in a computer; there's no way you could make sense of that.
There's no way anything could make sense of that, except me, period. And you know why? Because all those representations are based on my body. They're based on all of the experiences I've ever had. I did this interview with a neuroscientist... I've done too many of these... Michael... I followed him on Twitter... I'm sorry. He's a motor guy; he does motor inference stuff, and I asked him... what is it? Somebody jog my memory...
A
If you can find it, it's on my YouTube channel, under "Interview with a Neuroscientist". I asked him what happens in your brain when you go reach for a cup, right? What happens in your brain when we reach for a cup? What is activating? And he's like, well, it's like the sum of every experience you've ever had of reaching for a cup, or of ever doing...
A
...anything with your hand; it all comes into play when you do that. There is no separating the way I reach for a cup from me, from my identity. That internal reality cannot be separated from my experience, right? You can never understand whether my interpretation of "red" is also your interpretation of "red". That's the idea. So with AI you're going to have the same problem.
A
If we're creating intelligence that's based off of how our brain works, even if we can explain all the minutiae of the cells and the layers, and what's happening here and what's happening there, you're never going to be able to take an internal representation inside of a common cortical circuit, one that's related specifically to the sensory input of this agent and its experience with reality, and compare it with anything else and make sense of it. Does that make sense?
A
That being said, we can create some really elaborate consensus realities so that we can communicate with these AIs once we get to that point. But you can never expect to look into these cell activations that I'm drawing up here and know exactly what experience that agent, that entity, that intelligent thing, is having. Jonathan Michaels! Thank you, Falco. That was him: Jonathan Michaels.
A
B
B
A
So that interview is worth a look if you're interested in that idea of motion and grasping. Okay, I'm gonna try and catch up on chat here for a minute, because I've been just talking and talking. "So, just mini-column sequences in one map of mini-column sequences in another map?" Yeah. "Is explainable AI possible?" That one's for you. "You might like the next wall of text sent to Predictor." Okay, that's on the HTM forum Mark's talking about.
A
"Maybe it's explainable in its principles, but unexplainable due to complexity." It's not complexity; it's unexplainable by nature. That's the key you've got to remember, because an intelligent agent's experience is unique to its movement through space. So let's say you had an AI robot and it was smart, however we made it smart, and we turned it on and let it learn how to move around in a little room. Okay, so it's learned the room.
A
Now you could clone that thing and send the clones off into a bunch of different rooms, and the rooms would be different, right? Then turn them back on, they're all learning again, and they all keep moving through those rooms. Each one of them has now diverged from its original internal representation. You cannot compare them to each other anymore. There may be some ways you could compare them for a small amount of time, while the overlaps are similar.
A
Okay, "reality that is collective", yeah, that's the consensus reality. "Where have I heard 'everything you learn is in terms of what you have learned before'?" I don't know, but that's absolutely true. You bring your full experience of the world to every new situation. There's no getting around it. You can't really boil it down to just "here's the input"; everything you've ever learned is a part of your experience.
A
"This reminds me of the Sibyl System in the anime Psycho-Pass." I'm gonna leave it to you guys to investigate that. "Have you tried to build reinforcement learning HTM in a common reinforcement environment?" So, that is definitely a huge deal. That's the future, absolutely. Reinforcement learning with HTM is most certainly the future, especially in the direction of robotics and gaming applications for HTM.
A
A
"RL has to be learning; HTM is a whole different animal." It's true, but they both have to learn together. They both have to learn together; they each have something to tell the other. They have to have feedback, there has to be back and forth, there has to be some type of integration. The key thing, I think, is that the HTM system plays a part in creating movement, right, like I said.
A
The HTM system has to play a part in the causation of movement. In that way, it has to understand how its movements affect the sensory input that it gets back, and I think the way it has to do that is by having some representation of displacements between locations, somehow, in one of these layers, so that it knows the repercussions of its movement, because it knows where its sensor is.
A
That's the point where you take the action in the environment, and you have some type of goal/reward setup there in the "lizard brain", as Mark Brown likes to call it, right? I hope you agree with me here, Mark. So we can attach our neocortical model of the world to that control center, that control system.
A
Somehow. So that's sort of the idea: the reinforcement learning system would be more of a model of the older part of the brain that's dealing with reward and punishment and stuff, but at the same time it's constantly keeping a link to the models that it's attending to and exploring, etc. And make no doubt about it, I feel like we're talking about the thalamus and the neocortex here. I think that's the huge link, and that's the direction we're headed, definitely the direction we're headed.
A
The emotions are a part of this, for sure, and I think emotions get represented all across your neocortex. I think they're a part of every experience that you have. Your emotional state when you've learned something can affect the meaning of it, absolutely, and I think that sort of spreads through your whole brain. It's a way that the lizard brain projects to the cortex: your fear response, your fight-or-flight response.
A
A
I haven't even gotten to object pooling yet. Since we've already reached about an hour, I'm wondering if we should talk about temporal memory instead of object pooling, because I'm thinking I'm gonna have a hard time explaining object pooling without explaining temporal memory.
A
A
I'm enjoying doing this, but I'm gonna have to take a break. So why don't you give me five minutes, and I'll come back and we'll talk about some temporal memory. I haven't prepared anything on this, so it's just gonna be sort of off the cuff. "You're too kind", thank you so much, that's awesome! I'm glad you guys are enjoying this; I am. I think this Twitch thing is just amazing. I'm very excited about it.
A
It feels like I'm the only person consistently creating any AI/neuroscience related content right now on Twitch, and that's a great place to be, because I feel like this platform is really blossoming right now. There's a lot going on in the Science and Technology category on Twitch, if you know what I mean. All right, guys, I'll put a little music on for you, and I'm gonna go to the "be right back" screen.
A
A
I feel like with Twitch I can just do whatever I want, so I'm gonna wear shades. We're gonna talk about temporal memory. I'll leave the music on real low; if it bothers anybody, let me know. I'm gonna draw something different than this to explain temporal memory, you guys.
A
A
Alright, this is some layer of cortex, so there's lots and lots of mini-columns. We're just going to talk about this one mini-column, and I think we would typically say we see this behavior in layers 2/3... I always forget. I'll talk about cortical layers another time, and I'll have done my homework by then. So let's just say this isn't happening in every single layer of cells in the neocortex, but I think it's crucial in at least two of them.
A
A
B
A
A
Olfaction even goes straight into your cortex. So maybe the sensory input is direct, maybe it comes through the thalamus; the point is, this layer doesn't care, it doesn't know. That's one of the things that makes it a common cortical circuit. We're looking for something that can be generalized, that generalizes everything, right? That's why we're so smart: because we can generalize so much stuff. So we've got this.
A
A
One of these days it will. And just in case anybody cares to know, I've got three types of shows: AI Chat, which is what we're doing right now, every Monday at 1:00; work sessions, which are technical engineering sessions, Tuesdays and Thursdays at 9:00; and then Fun Friday, which is whenever on Friday, and I'll do something fun. Check my Twitch schedule, or give me a follow.
A
Follow if you want to get notified when I'm doing any of those things. It's all gonna be related to AI in some way, because that's what I do. I work for Numenta as an open-source community manager, and I try to make educational materials explaining this type of stuff to people who might be writing it in code. So if you want to write some of this stuff in code, I'm here to help. Anyway, yeah, someday I want to integrate HTM into the tooling, either for the chat...
A
B
A
I already have an API to apply HTM anomaly detection over top of it. It's old, and I'm not sure if it works. But I don't know what I'm gonna do yet, Mark; I'm still thinking it through. I honestly don't have time to do anything yet, and I'm worried about NuPIC, I'm worried about Python 3. I need to get that sorted out before I do any really fun stuff. So you're gonna see me probably doing some boring technical Python work, trying to figure out how we're going to get a decent...
A
...how we're gonna get a Python 3 environment for HTM, because our time is running out; the clock is ticking. I should put a countdown up here so it reminds me every day how many days I've got left before Python 2 is end-of-life. Anyway, if I had a yak-shaving button, I'd push it right now, but I don't. So: trying to explain temporal memory.
B
A
A
A
I've got a list of things to do like that, a list of things to do to make the Twitch experience better. I came in over the weekend to do some of it; it's just busy right now. I don't want to put too much of my time and effort into Twitch, because I have a job, and honestly, this wasn't part of it before last month, so I still have to keep up with all my other responsibilities. And then there's this Python 3 thing. Anyway.
A
A
A
A
Now, like I said earlier when we were talking about spatial pooling: if we get this input and nothing was predicted, if we didn't predict this input, we're gonna burst the whole column. Okay? And that means that this input is new.
A
A
I've got orange and peanut butter here... anyway, guys, stop looking at my bald spot. I know it's sexy, but my eyes are right here. Okay, so let's say nothing is predicted, so we burst every column. I'm gonna get rid of this, because I don't want to make my pen black. So we've got active cells, active cells, all the way down, all the way down, active cells.
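The bursting rule just described can be sketched in a few lines. This is a minimal toy, not Numenta's NuPIC API; the function name, the cell count, and the tuple representation of cells are all illustrative assumptions.

```python
# Toy mini-column bursting. A cell is a (column, cell_index) pair.
CELLS_PER_COLUMN = 4

def activate_columns(active_columns, predictive_cells):
    """Return (active_cells, burst_columns).

    If a winning column contains cells that were in the predictive
    state, only those cells fire. If nothing in the column was
    predicted, every cell fires ("bursting"), signalling that this
    input arrived in an unanticipated context.
    """
    active_cells = set()
    burst_columns = set()
    for col in active_columns:
        predicted = {(c, i) for (c, i) in predictive_cells if c == col}
        if predicted:
            active_cells |= predicted                       # prediction confirmed
        else:
            burst_columns.add(col)                          # novel context: burst
            active_cells |= {(col, i) for i in range(CELLS_PER_COLUMN)}
    return active_cells, burst_columns

# Column 0 had a predicted cell (cell 2); column 1 had none, so it bursts.
cells, bursts = activate_columns([0, 1], {(0, 2)})
```

Running the example, column 0 contributes only its predicted cell while every cell of column 1 becomes active.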
A
Okay, so that's one reason one of these cells could fire. The other reason: there's another state that HTM neurons can be in, and we'll call it "predictive". I'm gonna make it red, I guess we'll make it red. Hey, "in a beta", okay. So let's say this cell is in a predictive state. What this means is that it has recently gotten enough distal input on one of its distal dendrites that an NMDA spike has just occurred, which is a dendritic spike. You can Google "dendritic spike" and there's a...
A
A
...bits to it, but I don't want to go down that path, sorry. My point being: usually the soma spikes when it gets proximal input. That's what I'm trying to show there. If the soma gets proximal input, and these are proximal dendrites, near the cell body, then the soma fires; it creates an activation. But there are all these other dendrites around the edges that are distal: distal, far away, right?
A
Distal input wouldn't breach the activation threshold, but it would get the cell so close to the activation threshold that it only needs a little bit of stimulus to fire. So that's sort of a different state of the cell, and we call it a predictive state: if a cell gets enough, excuse me, distal dendritic activity, that can put it into a predictive state. There's not much more to it, and that's what I'm talking about here.
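As a sketch of that mechanism: a distal dendritic segment that sees enough active presynaptic cells crosses a segment threshold (the NMDA-style dendritic spike) and depolarizes the cell without firing it. The threshold value and data layout below are invented for illustration.

```python
# A cell enters the predictive state when ANY of its distal segments
# has at least SEGMENT_ACTIVATION_THRESHOLD presynaptic cells that
# were active at the previous time step.
SEGMENT_ACTIVATION_THRESHOLD = 3

def predictive_cells(distal_segments, prev_active_cells):
    """distal_segments: {cell_id: [set_of_presynaptic_cell_ids, ...]}"""
    predicted = set()
    for cell, segments in distal_segments.items():
        for synapses in segments:
            if len(synapses & prev_active_cells) >= SEGMENT_ACTIVATION_THRESHOLD:
                predicted.add(cell)   # dendritic spike: depolarized, primed to fire
                break
    return predicted

segments = {
    "c1": [{"a", "b", "c", "d"}],     # 3 of these just fired -> predictive
    "c2": [{"x", "y", "z"}],          # only 1 just fired -> not predictive
}
primed = predictive_cells(segments, {"a", "b", "c", "x"})
```

Note the cell is only *primed* here; actual firing still needs the proximal (feedforward) input, as described above.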
A
If we have a cell that's in a predictive state, and I'll talk about where that distal input comes from in a minute, but right now I just want to tell you how it fires. If it is in a predictive state inside this mini-column, that means this cell is predicting that this mini-column is going to fire, that it's going to be activated in response to some signal coming through the input. So if that happens, and the activation of the mini-column does occur, and there is...
A
...that means it's going to fire fast. It's going to fire fast because it's sitting there waiting for proximal input; it just had some indication that input might be coming soon, and it got real close to firing because of that NMDA spike. So if that occurs, then that means the prediction was correct, and it learns: the permanences for its synapses are then reinforced. It thought the pattern it saw was predictive, the NMDA spike put it into a predictive state...
A
...and it actually was correct, because it happened to be inside the mini-column that the input space activated when spatial pooling occurred. It was correct, so it gets reinforced. The distal connections that caused it to go into a predictive state can be reinforced, to reinforce that pattern, because it was... "Something's coming", I don't know, okay, that's ominous. Oh, you're talking to the audience.
A
Where should I go next? Now I have to talk about how that cell gets into a predictive state; that's the next logical thing. Well, I sort of just talked about how a cell gets into it: it looks at its distal connections. Each one of these cells in the mini-column has distal connections. This one might be connected to that one, might be connected to this one; it can be connected to any cell in any one of these other columns.
A
A
This cell has all of these different potential distal connections that might put it into a predictive state, and those represent one sequence. Now, this cell could also have sets of these for many different sequences, so it could go into a predictive state because of some pattern that it sees evolving, or because of a different pattern that it sees evolving that's completely different. One cell can learn lots of different parts of patterns. That's just the way that memory takes advantage of these sparse connections in your brain. Okay, let me take some questions.
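That "one cell, many sequences" point can be made concrete: each distal segment is tuned to a different prior context, so the same cell helps predict the same input in more than one sequence. Everything below (names, threshold, contexts) is illustrative only.

```python
# One cell carrying two distal segments, each learned from a
# different sequence context. A segment "matches" when enough of its
# presynaptic cells were active at the previous step.
def segment_matches(segment, prev_active, threshold=2):
    return len(segment & prev_active) >= threshold

cell_segments = [
    {"A1", "B2"},   # context learned from a sequence like "A B ..."
    {"X3", "Y1"},   # context learned from a sequence like "X Y ..."
]

# The same cell becomes predictive in either context:
in_seq_1 = any(segment_matches(s, {"A1", "B2"}) for s in cell_segments)
in_seq_2 = any(segment_matches(s, {"X3", "Y1"}) for s in cell_segments)
```

An unrelated context matches neither segment, so the cell stays out of the predictive state there.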
A
"Are the distal connections on each cell connected to cells in other columns?" Not in other macro columns, not in other cortical columns, at this level. There are distal connections in other layers between cortical columns, yes, but not in this mechanism that I'm teaching right now. This is all happening within one layer, one layer of a cortical column. You don't have to go outside that layer at all. You can apply spatial pooling and temporal memory; they're like peanut butter and jelly.
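A rough sketch of that "peanut butter and jelly" stacking: the spatial pooler maps raw input bits to the best-overlapping mini-columns, and temporal memory only ever sees those column activations. All names, numbers, and the trivial transition table here are illustrative assumptions, not NuPIC's API.

```python
# Stage 1: spatial pooling picks the k mini-columns whose receptive
# field best overlaps the input bits.
def spatial_pool(input_bits, receptive_fields, k=1):
    overlaps = {col: len(field & input_bits) for col, field in receptive_fields.items()}
    return frozenset(sorted(overlaps, key=overlaps.get, reverse=True)[:k])

# Stage 2: a (drastically simplified) temporal step that learns which
# column set follows which, and predicts the successor of `cols`.
def temporal_step(transitions, prev_cols, cols):
    if prev_cols is not None:
        transitions[prev_cols] = cols       # learn the transition
    return transitions.get(cols)            # predict what follows

fields = {0: {1, 2, 3}, 1: {3, 4, 5}, 2: {6, 7, 8}}
transitions = {}
a = spatial_pool({1, 2, 3}, fields)         # column 0 wins
b = spatial_pool({6, 7, 8}, fields)         # column 2 wins
temporal_step(transitions, None, a)         # first input: nothing to learn from
temporal_step(transitions, a, b)            # learn that `a` is followed by `b`
prediction = temporal_step(transitions, None, a)  # seeing `a` again predicts `b`
```

The point of the toy: temporal memory never touches the raw input, only the mini-column language the spatial pooler produces.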
A
You've just got to have one for the other. Temporal memory works right on top of spatial pooling, and then we've got other mechanisms that can also work on top of temporal pooling in the same way, or on top of spatial pooling in the same way. We'll get into those at some point. "Does the distal cell...", oh, I guess so, yeah. So this neuron inhibits others in a mini-column; the mini-column inhibits other mini-columns from expressing themselves.
A
I'm just talking about what's happening inside the mini-columns. The inhibition I talked about in the last hour was when we were talking about spatial pooling; inhibition happens when the mini-column competition occurs. So we're saying: in the context of that competition, what are the cells within those mini-columns doing? They're learning sequence memory. They're learning the spatial patterns that are flowing through this space, and connecting them over time. Okay, that's...
A
...what these connections are. This guy is going into a predictive state because this cell was just on, that cell was just on, that cell was just on, and that represents something it has seen before. Last time it saw that, let's say, it fired next; that's why it's in a predictive state. And then, if it turns out it's wrong, we decrement those synapses, so there's less of a chance of it being wrong in that way.
A
The next time, if it's right, we increment the synapses, so we nail in that pattern.
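That decrement-on-wrong, increment-on-right rule is the whole learning step, and it fits in a few lines. The permanence values and increments below are invented; real HTM implementations also clip permanences to [0, 1], which this sketch does too.

```python
# Toy permanence update: synapses behind a correct prediction move up
# toward the connection threshold; synapses behind a wrong prediction
# move down, so the bad prediction fades over time.
PERM_INC, PERM_DEC = 0.10, 0.05

def update_permanences(perms, prediction_was_correct):
    delta = PERM_INC if prediction_was_correct else -PERM_DEC
    return {syn: min(1.0, max(0.0, p + delta)) for syn, p in perms.items()}

perms = {"s1": 0.30, "s2": 0.95}
reinforced = update_permanences(perms, True)    # prediction came true
punished = update_permanences(perms, False)     # prediction was wrong
```

Repeated correct predictions drive a segment's permanences up until the pattern is "nailed in"; repeated misses drive them back down.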
Okay. "Is the receptive field connected to the cells or the column?" There are two different receptive fields in this idea. We've got the mini-column's receptive field over the input space; that's what this is. And then this is the other receptive field: every cell inside a mini-column, every pyramidal neuron, also has a receptive field, a distal receptive field. This one is the proximal receptive field.
A
Okay, that's what the mini-column is enforcing, and this is the distal receptive field of the same cell. Does that make sense? "And what time frames is this happening on?" Okay, forget about time frames. There is no time; there's a step, and then there's the step after that step. That's all you have to think about in the brain. I had this...
A
This is hard to understand, okay, but think about how time slows down when a traumatic event occurs, or when your adrenaline kicks in, how it feels like time slows down, and other times where time goes very fast. That idea of time being relative is, I think, very applicable here, because all your brain knows is input after input, input, input, input. That's it. This space just changes. How fast it's changing is sort of irrelevant; that's a context that might be expressed in the space, but as far as this layer is concerned, it's irrelevant.
A
A
A
At least through lateral connections across cortical columns in layer 2; that's a little further down the road. Okay, "is inhibition not only limited in time?" Inhibition happens at every time step. Every time step there's this mini-column competition. Every input that lights up this board is a new competition that chooses the active columns in this layer, right? And then there's yet another computation, the temporal memory computation, that occurs within the space that the spatial pooler...
B
A
The spatial pooling operation here is translating the semantics of this input into a new language of mini-column activations, and within that language of mini-columns, the temporal memory plays out. Temporal memory only works within the context of this space. Okay, am I getting everything right here? Mark can correct me if I'm wrong, or "in a beta", if you're still there. I think he left anyway. He's good; he talks about neuroscience stuff too, and he's got some good presentations.
A
A
A
Well, the neuroscience term is "inhibition", because this activation is essentially enforced by inhibitory cells, but I never understood inhibition until I understood the neuroscience. What it really is, is a competition for which mini-columns are active. When I did the stack-rank thing, that's inhibition. It's inhibiting by saying: I'm drawing a line here, and I'm not letting any of the other...
A
...other mini-columns express themselves. That's the inhibition part of it. If you're confused about inhibition, you're probably overcomplicating it in your mind. It's really just the activation of mini-columns based on stack-ranking their overlaps with their inputs, and then choosing only the ones that most strongly overlap the input to represent that input. That's the inhibition part, and then boosting sort of happens in addition to that.
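That stack-rank view of inhibition, with boosting bolted on, can be sketched as a boosted k-winners-take-all. The overlap scores and boost factor below are made up for illustration; real spatial pooler implementations compute boosts from each column's activity history.

```python
# Inhibition as a stack-rank: scale each mini-column's overlap by its
# boost factor, sort, and let only the top-k columns become active.
def inhibit(overlaps, boosts, k):
    boosted = {col: ov * boosts.get(col, 1.0) for col, ov in overlaps.items()}
    ranked = sorted(boosted, key=boosted.get, reverse=True)
    return set(ranked[:k])

overlaps = {"c0": 8, "c1": 5, "c2": 7, "c3": 2}
boosts = {"c3": 4.0}            # a rarely-active column gets boosted
winners = inhibit(overlaps, boosts, k=2)
```

Without the boost, c0 and c2 would win; the boost lifts the starved column c3 past c2, which is exactly the job boosting does.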
A
Well, I didn't read a neuroscience paper; there's not necessarily a neuroscience paper that explains this. You'd have to read the neuron paper; that's the paper to read. Let me go back to the papers section: "Why Neurons Have Thousands of Synapses". That's definitely the paper you want to understand, if you want to understand this idea of inhibition. And it's even in BAMI. You did? Yeah, read it again. Honestly, there is no shame in re-reading these; oh, I had to read these several times.
A
All of them. I feel like I'm a pretty smart guy, but I had to read these papers several times, so don't feel bad about it. I think it's normal. I think that's what you do when you do science, when you're faced with hard, challenging problems. Sometimes you have to read things several times before they make sense, and sometimes I'll read something and completely discount it, because I think, oh, that's not applicable at all, and then a year later realize how insanely applicable it was. Anyway.
A
A
A
"The inhibition timescale is a half-life effect in the basket and chandelier cell activations." Yeah, I can see that. I don't think we modeled that, but I can see that happening, and I'm not sure how. That's one of those questions: how strongly do we model these things? We could add a whole procedure for that sort of half-life inhibition effect decaying over each time step, instead of applying inhibition fresh at every time step and not letting it carry onward.
A
That might increase the accuracy of things; it might, I don't know. I don't think we've ever tried that, that I know of. There are a lot of things like that that could be tried within the realms of the HTM theory I'm talking about here. There are a lot of knobs that could still be installed and twiddled, and maybe they'll have big effects, and maybe not. This is frontiers-of-science type stuff here.
A
A
Okay, so one last thing I want to try to explain, the tricky bit, is high-order memory versus single-order memory. If you understand a Markov chain, it's this idea where, I don't think I'll fully explain it, but basically you make your decision about what state you're going to be in as a unit depending only on the state of the system at the last time step. So you are essentially only looking at the last time step.
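A tiny example shows why first-order (Markov) memory falls short, and why high-order temporal memory matters. The sequences and the table-based predictor here are my own toy, assumed only for illustration: a predictor keyed on just the previous symbol cannot separate "A B C D" from "X B C Y", because after "C" it has seen both endings.

```python
from collections import defaultdict

# First-order predictor: for each symbol, remember every symbol that
# has ever followed it, with no memory of earlier context.
def train_first_order(sequences):
    table = defaultdict(set)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            table[prev].add(nxt)
    return table

model = train_first_order([["A", "B", "C", "D"], ["X", "B", "C", "Y"]])

# After "C" the first-order model predicts BOTH endings at once:
ambiguous = model["C"]
```

High-order memory resolves this by representing "C after A B" and "C after X B" with different cells in the same mini-columns, which is exactly what the per-column cells shown earlier buy you.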
A
A
There was a previous sequence that I learned this input was a part of, too, and that can be expressed in this layer of cells. I really think the best place to understand that, and I'm gonna go back to this again, is this one HTM School video. I worked a long time on the visualizations for this one, and I think it's still probably the best place to see how it works. I'm just going to mute it and go.
A
There's a point in here where you see the bursting happen, and you can see how the sequence splits. I'm explaining these notes: I dialed in these notes in this little thing, and then had the sequence memory learn the system. No, I don't want to do that, don't do that, that's pressing the wrong button. And I had the sequence...
A
...memory learn this pattern of notes, and after a while it learned it. It stopped bursting, because that's what it does: it bursts at the beginning, because everything's new, and then eventually it stops bursting and nails it down. Every predicted cell is properly predicted, everything's reinforced, all the weights are stable, everything's locked in. And then, if you go change one of the notes, watch what happens. That's the interesting thing, and I'm gonna leave that for you guys to do, if you want to do it.
A
A
I'm just gonna leave this for you guys to do, because it's sort of fascinating; I thought it was fascinating. When I change the note, you'll see something happen, and that's what really nailed it in for me. So take a look at that if you want to understand. Now, I am going to log off. I've got all you guys hooked in with me right now, but I've got some stuff to do. I really appreciate everybody hanging out, and before I go...
A
Does anybody have any questions they want to ask, or any suggestions about how I could make this Twitch experience better or more interactive for you guys? If you come to my work sessions, Tuesday and Thursday, I get really interactive with people; I've been working on code with some people, looking at the same Trello boards and everything. So yeah, thanks, everybody, for watching. I appreciate you jumping in and following me, etc. I'm not doing this to make money or anything on Twitch; this is really just to spread...
A
A
B
A
...gonna send it to this guy, because we were recently talking about him. So let's raid this guy; he is apparently doing a lot of algorithm stuff, so that's exciting. If you want to learn about algorithms, I was just explaining a couple of algorithms to you just now: the spatial pooler and the temporal memory algorithms, etc. So we will go raid him when you guys are ready. We've got 12 of us, thanks for sticking through, 12 of us are ready to raid, and I've got 17 people watching right now.