From YouTube: BHTMS - Describing Spatial Pooling
Description
Writing a prose description of Spatial Pooling for https://building-htm-systems.herokuapp.com/spatial-pooling
So I should be live-streaming right now.
Hello, Falco. I had some difficulties with the other link. I wanted to be able to create these live events beforehand, so I can plan my streams ahead, but it certainly didn't work the way I did it, and I don't even know why yet. I'll have to experiment a bit, I think, and try to figure it out.
In any case, we're going to be talking about spatial pooling. I will open up live stream chat, so I'm connected to live stream chat and Discord.
There's Discord. I'm still working out the kinks of YouTube live streaming, but thanks for joining. We're going to be talking about spatial pooling at a very high level. I mean, we're going to get detailed too, but I have these illustrations and these diagrams that I've been working on for a long time. Here's the area of some illustrations. If you followed the Building HTM Systems series, you'll recognize these, I believe.
Anyway, there's much more that I could do to these diagrams and these parameters, but here's what I really feel like I need to do. I'm going to turn the music off entirely, because I need to think and I need to write, and I need a little bit more silent space in the brain. I'm pretty happy with these diagrams; sure, there's some styling issues we could work on. So what I'm going to do is write some things. At a high level, here are the things that I want to communicate.
I'll write some of the prose and create an outline of what's going to be put on this page, because I want to fill this in and make it coherent, not just a bunch of diagrams. And if we come up with any more diagrams that we might want to build, we'll put placeholders in there for them. If the noise from the window gets too loud, let me know and I will close it.
Why is it happening in the brain? We're going to link this all back to the neuroscience as efficiently as we can, but I think it's important, from our standpoint, to always point back to the neuroscience, because that's our mission: to understand what's going on in the brain. So if you're interested in understanding this, we've got to link it back to the brain. Why is it happening? What's the point?
Where is it happening, and how does it work? I think these are good questions, and I think we can answer all of them. Also, I have a window cam, if you want to see out my window; that's where some of the noise is coming from, like the fire engine we just heard. I'm easily distracted. I'll go to the cam, and I am going to basically be writing this in markup. So I'll write the things here.
Watching it sort of fill out here. I don't necessarily need to answer all these questions in order: what is spatial pooling, why is it happening, where is it happening, and how does it work? I'm not going to block it out and say, here's the answer to one question. I just want to make sure that we answer all these questions in a clear way. And so I drew.
Let me show you what I drew out first. I drew this on the iPad. It's sort of answering several questions, like where is it happening and why is it happening. From a cortical standpoint, spatial pooling happens within a layer of a cortical column, so we're within two structures here: within a cortical column, and within a layer of that cortical column. If you don't know what I mean, you can find the cortical circuitry HTM School video, which I will link; I think I can add a card for it, but I'm not going to worry about that right now. That's a detail we won't deal with. So the layer that we're talking about, in the neocortex, in the cortical column, will be receiving proximal feed-forward input.
We have to decide whether and when we want to introduce terms like proximal feed-forward input; these sorts of things we need to define as we reach them. So as we run into those terms, I may simply put a footnote about them, or just define them at a later date or something. And then we have to introduce the idea of a minicolumn, because this is all new in this tutorial. We haven't talked about minicolumns; all we've talked about is encoding, so these are all new terms.
So we talked about feed-forward input; again, this is proximal. Then there's the idea of an input space. I like to use this term a lot. It's not necessarily used throughout the rest of the HTM literature, but I use the term input space to define the potential input to a unit of neurons that we're modeling. The proximal feed-forward input is coming from this space of neurons that exists somewhere else, and it's very generic just to say input space.
Hey, one of my bots worked! I haven't figured out how to get the bot to say hello, and I'm getting distracted. The bot doesn't fully work; something works, but not everything. I have it set up to say thank you for subscribing. So, thanks, yep.
Okay, so again, I'm just sort of reviewing what we have so far in here. For the encoders, I like this prose; I like this text. It needs to be reviewed and updated, but I'm happy with the direction it's going. I still have some technical problems with some of these diagrams.
We started on this and I didn't even get finished on categories or time, so there's still lots more stuff to backfill. But right now we're just going to talk about spatial pooling and describe what spatial pooling is. I'm just beating around the bush, because I need to get started somehow and I'm not sure how. Maybe the best way is to start with this picture.
I'm open to that. Keep in mind that all of these diagrams are represented in here; I haven't written them all yet. I can imagine, for example, that we'll have this diagram, which is going to be a live, sort of streaming representation of the data that you see at the top of this page.
Okay, where was I? I do want to get to the point. In these visualizations I'm defining all of the structures of neuroscience first; maybe that is the right thing to do. At least that's the way I tried to do it visually: I define one layer of a cortical column, and I can maybe just put placeholders for a definition of a cortical column. Let's start somewhere; we have to start somewhere.
It is separated into individual processing units called cortical columns. Okay, so at the moment I'm happy to just link to Wikipedia pages for the time being, and we might end up changing that. But for the moment, let's link to Wikipedia pages; it's rough.
Done. I could probably keep going: different layers perform different processes. I want to say they're wired up differently, or that the input and output is different. Some of them may be doing similar things as other layers, but the input and the output makes a difference, and so does what they represent.
Let's start a new paragraph here: spatial pooling is a process that occurs in at least one of these cortical layers, throughout the neocortex, in every cortical column. Okay, maybe explain the difference between excitatory and inhibitory neurons? I will have to, but let's get to minicolumns, which is coming up really soon; we'll absolutely have to define that.
Now we've introduced the term pyramidal neurons, and we have to say something about pyramidal neurons, or at least link to more information about them. Let's define them a bit more.
Or, instead of a population of pyramidal neurons, maybe we should just say neurons, and then define pyramidal in a moment, when we talk about excitatory versus inhibitory. So: a layer performing spatial pooling receives feed-forward input to a population of neurons. This feed-forward input drives neuron activations.
And I do want to say that when we say feed-forward input, what that typically means is that the input drives neuron activations. Yeah, I think that's a good point, Falco: I think we have to say something about how feed-forward input is typically considered driver input, in that it drives cell activations. It's what directly causes cell activations.
This input drives, or causes, neurons in the layer to activate. Now, do we have to say excitatory here? Maybe not yet; let's just leave it generic. In the next paragraph we'll introduce the idea of excitatory versus inhibitory neurons, after we cover minicolumns.
Let's talk about minicolumns first. Okay: spatial pooling is a process that occurs in at least one of these cortical layers, throughout the neocortex, in every cortical column. A layer performing spatial pooling receives feed-forward input to a population of neurons. Do I want to hyphenate this or not? I just need to be consistent. Let's emphasize this, though, because it's an important concept.
I wonder if this works... no, I can't do this. Cut, paste, all right. Maybe say that many inputs to one neuron drive the output. When we talk about potential pools / receptive fields, we'll get to that for sure, but I'm trying to stay as far away from the details as possible and define some high-level things first; then we'll have some context to talk about the low-level things. So we will definitely get to that.
This input drives, or causes, neurons in the layer to activate. And I could contrast this feed-forward input with, you know, distal input. Proximal is typically what we call feed-forward; there's also feedback, which we're not going to talk about at all. So maybe we should just call this proximal.
In layers performing spatial pooling, there are structures (sorry about the passive voice, but that's how we talk, so that's how I like to write it) called minicolumns. I've never known how to write "cortical minicolumn"; people are using it as all one word, so I'm going to use it as all one word: structures called minicolumns.
All right, that's not great, but it's a start. So in this paragraph we've defined feed-forward input. We haven't used the term input space yet, just input. As for the goal: I wouldn't say the goal is to produce a sparse distributed representation. That's what it does, but I wouldn't call it the goal.
What spatial pooling is doing is sort of constructing a place for another computation to occur. It's a computation that makes way for more computation, and the important thing is that it retains the semantics of the input. It retains information; it's very lossy, but it retains the semantics of the information without knowing anything about it. That's another important thing: without knowing anything about where it comes from or what it represents at all.
For a layer performing spatial pooling, the input space is the complete set of neurons that it may be connected to. I know I'm ending in a preposition; I'm not so worried about the grammar focus. I like to think of spatial pooling as stabilizing complex inputs into something relatively constant, or maybe consistent.
Hey, Schmidty. Each minicolumn receives a unique subset of the input from the input space. That makes sense, since it's basically a mask of information: minicolumns force their group of neurons to only receive a subset of the information in the input space. I'm going to wordsmith this later, but this is what I'm trying to get across. So I'm going to use the term potential pool to refer to that subset for a minicolumn.
Okay, you don't think we need this sentence, "this input space contains a massive amount of information"? I think that's important. Minicolumns force their group of neurons to only receive a subset of the information; each minicolumn receives a unique subset of the input from the input space. Like I said, we'll wordsmith this later, but at least I'm introducing a term and I've pretty much defined it. We introduced several things in this paragraph: minicolumn, feed-forward, input space.
Let's say spatial pooling can be defined as a competition. I love calling it a competition between minicolumns to represent the information in the input space. Let's just say that: spatial pooling is a competition between minicolumns to represent the information in the input space. Like that.
This is just a draft, so really I just want to get a script out of my head, make sure I'm answering the right questions, and have something to edit, basically. So: the neocortex is... I'm introducing a lot of terms here, which is tricky. The neocortex is a modular sheet of neurons, separated into individual processing units called cortical columns. Each cortical column performs essentially the same computations, and is separated into... now I'm seeing typos that I want to fix.
This is editing; if I can just get something out, it's a place to start. Spatial pooling is a process that occurs in at least one of these cortical layers, throughout the neocortex, in every cortical column. A layer performing spatial pooling receives feed-forward input to a population of neurons. This feed-forward input may be sensory input or input from other cortical areas. This input drives, or causes, neurons in the layer to activate. In any cortical layers performing spatial pooling, there are structures called minicolumns.
These structures group neurons and force them to pay attention to the same subset of the input. The feed-forward input space for a layer of cortex performing spatial pooling is the complete set of neurons that it may be connected to. This input space contains a massive amount of information. Each minicolumn receives a unique subset of the input; I'll refer to that subset as the minicolumn's potential pool. This could be a place to introduce inhibitory neurons, but I don't know if I'm going to do that yet, because we can wave that off, just ignore it, and say that these minicolumns exist without explaining why. I don't know how deep someone wants to go. They don't have to understand the neuroscience to understand spatial pooling; I really just want them to understand the process of spatial pooling and sort of how it originated in the cortex.
And I don't even want to get too specific about the layers we think it's occurring in. I just want to define the process, for the most part, so people can understand why it might be happening. They don't really need to know what layers it's in just to answer these questions. So I'm going to try and keep it as simple as possible.
You don't have to understand inhibitory neurons to create this minicolumn competition. At its core, this document is going to be describing the logic of the spatial pooling operation, and to understand the logic of the spatial pooling operation, I don't really need to talk about inhibition at all.
The inhibition part is more the neuroscience component. It's the why and the how. We don't have to model inhibition; we just model the effects of inhibition. So I want to sort of leave the door open, so that people who want to understand the neuroscience can get there, because they'll at least have the terms to Google.
But I also don't want to go too far into the neuroscience, because I want this to be like a functional guide. So this is sort of setting up why this matters: why do we care what spatial pooling is? Here's what it looks like, here's why the brain is doing it, and sort of how it's doing it, where it's doing it, and what it's doing. As for inhibitory neurons: I think I should say something about them, but I'm going to skip that part for right now.
Yeah, I want to talk about the k-winners, but going through my notes, I'm far from that at this point. I want to introduce the minicolumn competition, but not talk about k-winners, because I've got a diagram all about that in a little bit. So I do want to introduce this idea of a competition in the next paragraph, and this can at least be a statement about excitatory versus inhibitory neurons.
There are maybe thousands of minicolumn structures within a layer of cortex. I think I'm going to leave that paragraph there for right now. Spatial pooling is a competition between minicolumns to represent the information in the input space as internal activations. As the input space changes, different minicolumns represent different input.
I know what I'd really like: I'd really like it if I had this diagram created. I don't, but I can at least assume it's there. Because before I get too far into this stuff, I want to have a tangible example of some input to the system that's changing over time.
Slash permanence? No, not yet. We'll talk about permanence later; I've got a diagram all about permanence, so we'll get to that when we get down there. What I think I should talk about is defining sort of a data stream. I even want to get down to the level of saying: here's exactly how one might create a minicolumn's potential pools. I want this to be extremely tangible for people, with explicit examples, just like we did in encoding numbers: you know, "this is the exact function I used to create these encoders." I want to do the same thing here.
Simulated input. Let's, for now, create another header called "Simulated Input Data"; that's what we'll talk about for a bit. Actually, let's say "Simulated Input Space", because what I really want to talk about is how this data is represented in the input space.
A
So
hypothetical,
so,
let's
still
sort
of
it.
Let's
imagine
sort
of
scenario,
so
we
need
a
tangible
example
to
try
and
describe
this
whole
process.
So
that's
what
that's
what
we're
gonna
set
up
now.
So,
let's,
let's
imagine
a
single
scalar
value.
Let's
imagine
a
single
scalar
value
changing
over
time.
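That imagined input can be sketched directly. This is a hypothetical stand-in, not the site's actual data source: a noisy sine wave standing in for "a single scalar value changing over time", run through a minimal scalar encoder like the ones from the earlier encoding sections. All the parameter values (range, bit length, active-bit width) are made up for illustration.

```python
import math
import random

def scalar_stream(steps, noise=0.1):
    # Hypothetical data source: a noisy sine wave, one scalar per time step.
    for t in range(steps):
        yield math.sin(t * 0.1) + random.uniform(-noise, noise)

def encode_scalar(value, minimum=-1.5, maximum=1.5, length=100, w=11):
    # Minimal scalar encoder sketch: a block of `w` active bits slides
    # across a `length`-bit array as the value moves through
    # [minimum, maximum]. Out-of-range values are clamped.
    value = max(minimum, min(maximum, value))
    bucket = int((value - minimum) / (maximum - minimum) * (length - w))
    return [1 if bucket <= i < bucket + w else 0 for i in range(length)]

for value in scalar_stream(3):
    encoding = encode_scalar(value)
    print(sum(encoding))  # always `w` = 11 bits active
```

The point of setting this up is only to have a concrete, changing input space to point at in the rest of the discussion.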
This is really not going to incorporate any sensorimotor stuff. Features detected as motion happens? No, I think that opens up another, bigger can of worms. I want to do this without motion, first of all because I don't need it; I mean, that's still sort of research.
Okay, so after that we introduce the combined encoding diagram, which you don't really interact with, except that we can make it gray or combined. The point of this was to emphasize that the spatial pooling layer does not know how the encoding is performed or what semantics exist in the encoding. That's what I want to emphasize here, so I'm actually going to use this.
Let's just put this in parentheses so it's a little bit more explicit. I'm going to change this eventually; for now I can just remove this. I'm going to change this little toggle; I'm not crazy about the toggle. What did I do, how did I break it? Oh, it didn't really break, okay. So we can toggle it, and like I said, I'm going to change the UI and user experience of that at some point. But as you can see by toggling it, many different semantics of information are encoded in the input space.
...a competition between different minicolumns to extract the semantics of the input without (Falco uses the term "prior knowledge") without prior knowledge of its structure. How's that, machine learning guys? Without prior knowledge. I'm going to emphasize that, because that's super important: without prior knowledge of its structure.
All right, okay, so now we have to talk about potential pools, and fortunately we have this great display here that shows them. Writing this prose is going to help me better create the diagrams. I know these diagrams aren't in their end state; we're going to find ways we want to improve them as we surround them with this prose.
I can put "combined encoding" under the header "simulated input space". Actually, let's not say simulated; let's just say "input space", then "combined encoding", and now "potential pools". I'm going to make that a higher-level header, an h2, the same as input space, because it's an important concept. So we've introduced potential pools, and now we're going to talk about them.
Now, the way this has been described to me (I don't need these headphones anymore) is that if you have a layer of cells, there's a ton of information in the input, and there's no way for a minicolumn to connect to everything. Thank you, okay. A minicolumn will only ever connect to input cells that fall into its potential pool. It's observed in the brain, and it's also just logical: they will not be able to connect to everything, so we only allow a minicolumn to connect to a percentage of the input.
I'm not even passing that in; I'm not even using it. Okay, whatever, this was the first diagram I made. So I've got props in a diagram: the encoding diagram, with potential pools selected for a minicolumn. That's right, I think I changed this; that was before, when I had the computation happening. Oh gosh. So connectedPercent was what I was looking for.
So that's going to be something... I swear I had a connectedPercent. ConnectedPercent, yeah, there it is; it's in state. Okay, so what I don't have is a toggle or something to change this, and I want a way to change it. So I basically need to create one of these number-value toggle thingies for it. Let's steal this one, and we'll call it connectedPercent.
The percent of the input that minicolumns can connect to is connectedPercent, and that's something readers should now be able to change. All right, so I can move this, and then you can see it change. That's what I want. That's exactly the type of interaction that I think is important in these diagrams.
It displays the different potential pools of connections on the right, and as input passes through the input space, you can see how they apply.
Okay, so I definitely want to add a legend to this; that's on my list of things to do. Does anything break? Yeah, I hope not. It'll break if I go to 0... I bet it might break... no, it doesn't break. These broke, but that's sort of right; those are the winner minicolumns, and they're all at 0 overlap, so you've got to pick something.
"Setting up minicolumn potential pools is not complicated." All right, just leave it at that? Done? No, no, let's go look at the components showcase, because... oh no, I have an example in encoding numbers, I think. First of all, I need code syntax, and then there are the examples in encoding numbers.
Code examples: this is going to be from spatial pooling. Okay, so now we should have examples-dot-something. In this case, what did I do? examples.code. That's basically it; this is all I want, something like this, and I'll actually probably turn these into figures with figure captions. Maybe not.
We want a code example here. Code example one... examples.code one... okay, let's see what breaks. It didn't break, okay. "Setting up minicolumn potential pools is not complicated", and then we have a nice code example. I don't know if I'm going to scrutinize this a whole lot right now; I can already tell there are things I don't like about it, so let me fix some things. What happened to...
This is too much indentation. Oh, it's just that tabs are super huge; I don't want tabs to be that huge. It's okay, I'm not going to worry about that right now. But let's say we set up the pools. This shouldn't really be opts.size; this should be miniColumnCount or something like that. So let's call it miniColumnCount, and we'll call this inputCount, and then just connectedPercent. That sort of generalizes it, so it's not tied to a specific code implementation and it still makes sense.
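A minimal sketch of what the generalized parameters above might look like in code. Only the three parameter names (miniColumnCount, inputCount, connectedPercent) come from the discussion; the function name and the random-sampling strategy are assumptions for illustration.

```python
import random

def create_potential_pools(mini_column_count, input_count, connected_percent):
    # For each mini-column, sample the subset of the input space it is
    # allowed to connect to -- its potential pool. Each pool is a sorted
    # list of input-bit indices covering `connected_percent` of the input.
    pool_size = int(input_count * connected_percent)
    return [
        sorted(random.sample(range(input_count), pool_size))
        for _ in range(mini_column_count)
    ]

pools = create_potential_pools(mini_column_count=2048, input_count=400,
                               connected_percent=0.85)
print(len(pools), len(pools[0]))  # 2048 pools, each of 340 input indices
```

Each mini-column ends up with its own unique mask over the input space, which is all the potential-pool idea requires at this point.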
Okay, like I said, these legends don't exist yet, but I'm planning to create them, so those will be part of the page; they need to be part of the diagram. On click? I'm not doing this on click; I'm showing the input all the time. I think it makes sense to just always show it. Okay. So now we have to talk about... and I'm a little bit hesitant. I don't like this terminology that much, because it seems like I'm the only person who uses it.
So maybe it's not right. I'm not going to use the term receptive fields; I think potential pools is fine. But I want to talk about the permanences within the potential pools. That's the important thing, so that's what this is about: permanences.
Okay, so along with labels for potential pools, let's create another note, and we'll call it "do not display connections on permanences diagram initially". Actually, I'm going to turn this into an issue right away: there should be an option for the reader to turn connections on and off from outside the diagram.
That's coming further down; that's going to be when we talk about the competition. That's where the pooling really starts: we're not doing any pooling until we do the k-winners operation. That's really the pooling, right? Does that make sense? That's certainly the crux of the biscuit that we want people to understand.
But wouldn't you agree we're not pooling yet? We're not pooling even when we create permanences. Hello, FDG; sorry, I almost missed you. How's it going, thanks for joining. We're not pooling until we do the k-winners, all right. This is just establishing initial connections.
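That k-winners step can be sketched in a few lines. This is a deliberately bare sketch of the mini-column competition being described, not the site's implementation; in particular, breaking ties by index is a simplification I'm assuming.

```python
def spatial_pool(overlaps, k):
    # The mini-column competition: given each mini-column's overlap score
    # (how well its connected synapses align with the active input bits),
    # only the k best-scoring mini-columns become active. Python's sort is
    # stable, so ties fall back to lower index -- a simplification.
    ranked = sorted(range(len(overlaps)), key=lambda i: overlaps[i], reverse=True)
    return sorted(ranked[:k])

overlaps = [3, 9, 1, 7, 7, 0, 5, 2]
print(spatial_pool(overlaps, k=3))  # -> [1, 3, 4]
```

Everything before this point (potential pools, permanences) just sets up the inputs to this competition; the k-winners selection is what actually produces the sparse set of active mini-columns.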
Okay, all right, let's do a quick review here. I'd like to keep going, so I think I will. Quick review of spatial pooling: here are the terms we're introducing: cortical columns, layers within cortical columns, minicolumns, sensory input. Excuse me. We keep talking about spatial pooling, but we never really introduced what it is we're doing.
Spatial pooling is a process to extract semantic information from input, to provide a controlled space in which to perform further operations. Additionally, the input is converted into a sparse representation. I said "further" twice; maybe say sparse distributed representation, since SDR is such a recognizable term.
What do I call this? Just because there's such redundant coverage, because there's so much overlapping receptivity.
Is it still really that laggy? I thought I'd changed it; last time I checked it wasn't as bad. I've got it on ultra-low latency, but I guess I can't do anything better; I don't know what to say. I don't quite know what to say here: even though information is lost during this transformation...
That's too generic, Falco, just too generic. Okay, let's see where we are. We've added minicolumns, input space, combined encoding, potential pools. I love that you can put stuff in the text; this is what I was really going for. Okay, so permanences: let's talk about permanences now.
I was just joking with you, Falco; I know that we're serious. Thanks, guys, for your help; it helps to have somebody to knock ideas around with. Setting up minicolumn potential pools is not complicated: upon initialization, each minicolumn's pool of connections is established using a simple random number generator. For each cell in the input space, a minicolumn either has the possibility of connecting or not. Again, I'll wordsmith later; I just want to define what this is doing.
Within each minicolumn's potential pool, we establish the initial state for each potential connection. I call each potential connection a permanence. So now I have to describe a connection threshold, which I haven't introduced yet. That's the hard thing: introducing so many concepts at once. Does permanence have a counterpart in biology? Yeah.
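The initialization being described here can be sketched as follows. The stream only says "a simple random number generator"; centering the random values on the connection threshold is a common choice, but it's an assumption in this sketch, as are the function name and default values.

```python
import random

def initialize_permanences(potential_pool, connection_threshold=0.2, spread=0.1):
    # Give every potential connection in a mini-column's pool an initial
    # permanence value. Centering the random values on the connection
    # threshold (an assumed convention) means roughly half the synapses
    # start connected and half start just short of connecting.
    return {
        input_index: random.uniform(connection_threshold - spread,
                                    connection_threshold + spread)
        for input_index in potential_pool
    }

perms = initialize_permanences([2, 5, 9, 14], connection_threshold=0.2)
print(sorted(perms))  # permanences keyed by input index: [2, 5, 9, 14]
```

Keeping permanences as a mapping from input index to scalar makes the later threshold check (connected or not) a one-liner.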
It's the strength of a synapse. You can say that for the state between two cells, if they could possibly connect to each other, there's potentially a synapse growing there; there's, you know, a dendrite growing. So the permanence is how close that synapse is to being established or not, right?
A synapse that might never connect? No, no, it could connect. We represent the synapses that will never connect with the potential pools: that number defines how many of them could potentially connect or not. That's what the potential pool is; it establishes a pool of potential connections, of potential synapses. Yeah, "strength of synapse": that's a good way to say it.
That's part of the exercise, isn't it: just working through it and trying to figure out what words need to be on the page and which ones don't. "This represents the strength of a synapse." So I'm trying to work toward introducing this next diagram. Initially, I think I'm going to break this up into two diagrams.
Yes, I'll talk about them changing over time when we get to learning; that's going to be in a later section. Actually, we really should talk about learning after we talk about the minicolumn competition, because you have to apply learning in the context of the competition. So probably we'll talk about the competition and then talk about learning.
A
"...must establish the initial state for each potential connection. We call each potential connection a permanence. This represents the strength of a synapse." Okay, let me just write down what I'm thinking: "if the permanence" — that I already put — "is above a defined connection threshold, we put..."
A
"You flew over it quite fast, and I'd like to understand where it came from. You could state that these permanence values need to be stored too and can change in time." Well, that's what we're doing — we're going to have a code example coming up soon that establishes these connections and creates a data structure, so we'll get there. "Under the definition, give a very broad introduction of what it is you're going to describe in detail below, if you're..."
A
A
Okay, okay: "The memory of all neural networks is stored in the connections between cells, called synapses. We model synapses as scalar permanence values. If they breach a connection threshold, they are connected. Within each minicolumn's potential pool, we must establish an initial state for each connection." I don't think I have to say "potential connection" — I've said "potential" enough; I think it's contextually there. "This represents the strength of a synapse. If the permanence is above a defined connection threshold..."
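The sentence being drafted here boils down to one comparison. A minimal sketch of that model — a synapse is just a scalar permanence, and it counts as connected once it breaches the threshold (the names and the 0.5 default are illustrative assumptions):

```python
def is_connected(permanence, connection_threshold=0.5):
    """A potential synapse counts as connected once its scalar
    permanence value breaches the connection threshold."""
    return permanence >= connection_threshold

perms = [0.12, 0.49, 0.5, 0.73]
connected = [p for p in perms if is_connected(p)]  # only the last two
```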
A
"We've already—" I've already said that, I've already said that. "It's a common techie thing to say one really greatly detailed sentence, too dense for anyone to really take in. It's not a sin to say it easy, as a warm-up exercise." Yeah, yeah — and then the body slam, right? That's a better approach, isn't it: boil it down as simply as possible and then add more detail. That is exactly what I'm trying to do.
A
A
A
False — okay. So now, if I go into — oh, oh, I've got a better idea: show connections. Show — oh no, we'll just do the whole thing. Whoops — yeah, that's right. Okay, show connections. So if I have show connections false, I'm not going to show those connections, and I'm not going to show the connection distribution either. Okay, so let's try this — this should be easy. So then the permanences...
A
A
B
B
A
A
There we go — okay, beautiful. Okay, so now we can see the permanences. The only thing I don't like about this is I think I might be doing the highlighting wrong. Oh no, I'm not, because they're right — because it's a bit of a Gaussian. Excuse me — yeah, it's a Gaussian distribution. So, okay, that makes sense. *coughs* Oh, I forgot about the cow sound — I have the cow sound even on YouTube. I have the cow sound.
A
B
A
A
A
A
You never think about permanence as a spectrum — to you, it either is or it isn't, right? But in this case it is a spectrum. Okay, okay, this is looking pretty good; I'm pretty happy with this progress. I've been going for two and a half hours now. I'm going to keep going a little bit longer, but not too much longer — we're gonna get to the minicolumn competition and I'll cut it off there, and then we'll have editing and stuff to do. So okay, quick, quick review on the permanences.
A
"The memory of all neural networks is stored in connections between cells, called synapses. We model synapses as scalar permanence values. If they breach a connection threshold, they are connected." So here's what I want to add down here now — I can put in the connection threshold: "if it reaches a connection threshold"... connection...
A
I'm going to have show connections and show distribution as separate things. The first one's going to be false, false; this one's gonna be true, false; and then for the last one we're gonna have all three of these. Yeah, okay. So I am gonna do this: instead of this show connections, this is gonna be show distribution. Whoops.
A
A
B
A
A
Distributions
and
distributions
yes,
it's
happening,
permanence
is
connections
and
distributions.
Permanence
is
connections
and
distributions
figure,
3-3
permanence
values,
connections
and
permanence
distributions.
Now
we'll
have
three
right
now:
I
have
to
put
true-true
okay,
all
right,
a
little
redundant
a
little
redundant
but
and
maybe
we'll
come
and
clean
that
up
later,
but.
A
A
A
B
A
A
A
B
A
B
A
...should show just the permanence distribution, so there won't be any — so show connections false, show distribution true, show — oh, you know what, we'll just say that if we show the distribution, that's all we're going to show. So I don't even need this — I don't need show connections, so I'm going to change that. And the third one that we do is only going to show the histogram. That's even better — that's exactly what I want. Okay, so permanences...
A
A
A
Beautiful — okay, perfect, it's exactly what I want. All right, so we've split this diagram up into three parts, which is great. This is great: they're all linked, it's very modular, and as we change anything, they all change. Okay, this is great, I love this, and we're able to inject the prose in between.
A
A
A
A
A
A
A
You see what the problem is: you want them to learn quickly, so the closer I get to it, the more likely — the more entropy you want. Entropy — that's what we're going for here. So if you center the connection threshold in the middle of the distribution, there's going to be immediate entropy, because things are gonna start changing right away, and you want that entropy for the system to stabilize, you know — or else it gets stuck.
A
Entropy is not — I don't know, I think entropy might be the word. You want things to start moving quickly and randomly, right? If you're up here, things don't move quickly or randomly — they're locked in, and that's like less entropy, because there are fewer things changing, fewer things happening. So you want it where our connection threshold is: 0.5, 0.5. This, this...
A
A
Now, think of "stable" as when the same minicolumns are active all the time — that's stable. You want the entropy; that's what boosting does. Boosting enforces entropy: it prevents the stability of the same columns firing all the time and pushes the semantics out to the other columns. It increases the amount of entropy in the system, and this is the same concept, I think — you want that entropy. And also, I think entropy is a term...
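The boosting idea mentioned here can be sketched very simply: a column that has been active less than its fair share gets its overlap score multiplied up, and an over-active column gets pushed down. This is a hedged sketch of the general mechanism, not Numenta's exact formula; the exponential shape, parameter names, and the strength of 3.0 are assumptions for illustration.

```python
import math

def boost_factor(duty_cycle, target_density, boost_strength=3.0):
    """Scale a column's overlap up when it has been under-active and down
    when over-active, preventing the same columns from winning forever."""
    return math.exp(boost_strength * (target_density - duty_cycle))

quiet = boost_factor(duty_cycle=0.0, target_density=0.02)  # never fires: boosted
busy = boost_factor(duty_cycle=0.5, target_density=0.02)   # fires constantly: suppressed
```

A column sitting exactly at the target density gets a factor of 1.0, i.e. no adjustment.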
A
B
A
A
A
A
I'm not gonna do a quick review, because I know I'm just editing. If I do this: minicolumns, input space — this diagram doesn't exist yet, but it will — combined encoding, combined split, no prior knowledge of the structure, potential pools. So all of these affect everything else on the page. At this point I could — and I probably should — show... Let me do this real quick: the permanences...
A
A
I'm tired. Okay, yeah, we'll do that next time. I'm pretty happy with where we got. We didn't write a ton of prose, but that's good — you know, I don't want to write a ton of prose. I just want to write just enough to link concepts to the diagrams we're creating. That's the whole point: setting the conceptual stage for these diagrams, and the diagrams are supposed to, like, drive the concepts home.
A
A
"If the permanence reaches the connection threshold (0.5), we see the connection established, and the neuron is connected to the input cell in the diagram shown above. Connections are initially established in a normal distribution around a center point of 0.5, which we can change" — for the initial permanence... as the connection — wait.
A
A
Okay, I'm glad that helps, Falco — that's a good indication that we're on the right track here. All right, let's commit this. I'm happy with this — happy enough to commit these changes — and I'll do it in a pull request, just to make the changes we've made clear. Let's do a quick code review first, like we're supposed to.
A
A
So we basically split this up into three diagrams, really easily: potential pools — I just removed some cruft I wasn't using — I added an examples file for the spatial pooler, and then we imported the example. Most of this was all in the spatial pooler: connected percent, number of inputs, and then prose, prose, prose; added this temporary diagram; prose; and then splitting this figure up, and more prose. So that looks fine — everything looks good. So let's say "first pass at SP prose, up to perm..."
A
B
A
A
Hello — I don't know how to say your name. How's...
A
A
numenta.com/papers — that's weird, it wouldn't let me type numenta.com/papers, but I'm having a problem sending anything right now, which is weird. Let me see, can I send you — I'm new at the YouTube... oh, I'm not signed in! That's crazy — I got signed out while I was live streaming. Okay, hold on, hold on. Although I'm always there, I can get back.
A
A
I'll send it — I'll just link it to you. It's from 2015... doesn't it say 2015? No, 2016: "Why Neurons Have Thousands of Synapses". This is the paper that you want to look at; this is what describes these ideas of spatial pooling, etc. And the link to the code base that I'm working on is here.
A
"Is the encoding of the RDSE getting converted to a scalar before being sent to the spatial pooler?" The RDSE — I'm not using that. I'm just using a very, very simple scalar encoder, and you can see how it works here in "Encoding Numbers". I don't have a document for the RDSE yet. All I'm using is a normal scalar encoder, and you can see it in the spatial pooling diagram right here. This is it — this purple bit, the purple block. That's the scalar.
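A minimal sketch of the kind of simple scalar encoder being described: a contiguous block of on-bits slides across a fixed-width bit array as the value moves. Illustrative Python, not the site's actual JavaScript encoder; the bounds-clamping behavior, `n`, and `w` values are assumptions (the site also demos an unbounded variant that slides off the edge).

```python
def encode_scalar(value, min_val, max_val, n=100, w=10):
    """Encode a scalar as n bits with a contiguous block of w on-bits
    whose position slides with the value (clamped at the edges)."""
    value = max(min_val, min(max_val, value))          # bounded variant: clamp
    span = n - w
    start = int(round((value - min_val) / (max_val - min_val) * span))
    return [1 if start <= i < start + w else 0 for i in range(n)]

bits = encode_scalar(27.4, min_val=0, max_val=100)  # one block of 10 on-bits
```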
A
The encoder is just going up and down, up and down — so, very simple. It's not distributed at all, randomly or otherwise; it's just going up and down. All the rest of these semantics are about the date: day of month, hour of day, weekend, day of week. These colors might not correspond to it, because I'm still setting this all up. "So what type of data is being sent to the SP?" This data right here, this scalar value — it's just a sine wave, a modified sine wave.
A
It's a sine wave; I've added a little bit of noise to it, and I've synced it up to days, right? So it goes regularly up and down with the days, and then, like every third day, I amplify it a little bit. So it's got not only daily patterns but weekly patterns, and we'll start to see those as we move through. I did that on purpose, because the temporal memory will recognize those patterns — but I haven't gotten to that yet; we're still just doing spatial pooling.
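A hedged sketch of a data stream like the one just described — a daily sine cycle, amplified every third day so a longer pattern emerges, with a bit of noise on top. The 24-steps-per-day resolution, the 1.5x amplification, and the noise level are all assumptions for illustration, not the demo's actual parameters.

```python
import math
import random

def modified_sine(step, steps_per_day=24, noise=0.05, rng=None):
    """One sample of the demo-style stream: a daily sine cycle, amplified
    every third day to create a longer pattern, plus a little noise."""
    rng = rng or random
    day = step // steps_per_day
    phase = 2 * math.pi * (step % steps_per_day) / steps_per_day
    amplitude = 1.5 if day % 3 == 2 else 1.0   # every third day is amplified
    return amplitude * math.sin(phase) + rng.uniform(-noise, noise)

# nine days of samples: days 2, 5, 8 stand out with larger swings
stream = [modified_sine(t, rng=random.Random(t)) for t in range(24 * 9)]
```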
A
It's 2018 at 1:48 p.m., you know — that's the data point, and it's just moving through time and space, time and space being the scalar value moving. That's the spatial information. Any other questions before I hang up? You guys can go see this right now — I just put the link in right here. This is what we just created, so you can go take a look at it and mess with the settings like I was just doing. It looks like it's all deployed and it's working. If you turn learning on, it gets more interesting.
A
I haven't talked about that yet, and we haven't written the prose for the competition or active duty cycles. "How do you see hardware-designed spiking neural network units influencing the architecture — greatly?" Not so much the spiking — we don't do spiking. I mean, we don't model the spiking; we just model on or off, not how fast it's going on and off. But the hardware is definitely a big deal: once you have neuromorphic hardware, that'll make a big difference in the performance of these HTM-style systems.
A
A
That's the whole point of this tutorial, and you can see there's interactive stuff here — how to make it bounded. So this one is a bounded scalar encoder: you see how, when I get to the edges, it doesn't slide off the edge — it's bounded. The normal one is not bounded, so it slides off the edge. So there are different types of encoders, different ways you can do it.
A
A
There's even — we've got this one that's a cyclic encoding, and I've turned it into a circle. This is all still work in progress; some of these are broken. So here's — there's the idea of discrete versus continuous encodings and the difference there. This diagram is broken, but the basic scalar encoding code is right there, and you can see it: if you follow the links in here, it'll go right to the simple HTM implementation, and the encoders are right there.
A
A
...encode more information. For what we're trying to do, we don't need it yet; we might need it eventually. Oscillations are certainly important in your brain, and we're probably going to have to eventually model oscillations, but for the level that we're talking about, for spatial pooling and temporal memory, we don't see any reason to model it yet. It's certainly important in other aspects. "The additional derivative-based temporal-domain spiking could be helpful." So — "Is the output an array or a number?"
A
It's an array of bits, on or off — that's what the encoding provides. If you look at this blue thing down here, that's like an array of bits: the blues are on and the whites are off. So in this case, 27.4 gives you that specific output, right? Yeah — "feel free to extend HTM with spiking" — that's certainly possible; there are a lot of different projects. I mean, we call HTM biologically constrained; it's not perfect, it's not biologically perfect.
A
We try to understand as well as we can what's happening, and we're not implementing anything that goes against the logic that we understand to be happening in the cortex — but we understand there's more happening than we're modeling. Based on what we're trying to do, we're not modeling the spiking stuff. You could certainly get more functionality out of a system, perhaps, by adding spiking or oscillations, for example — and oscillations and spiking, I think, are very integrated. And it's not — it's pretty...
A
You know, it could be partially due to limitations in the hardware, because it can't take more computing power. But as for the bit encoding — no, the bit encoding, the binary representations, that's super important, and not just because of hardware. You get these benefits from using sparse distributed representations that are really important. Let me point you to some papers: numenta.com/papers again — there's a whole section here on sparse distributed representations.
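One property behind the claim that SDRs matter, sketched in a few lines: two randomly chosen sparse bit arrays share almost no active bits, so meaningful overlap is an extremely strong signal of shared semantics. The 2048/40 sizes are conventional HTM-style numbers used here as assumptions; this is an illustration, not any library's API.

```python
import random

def random_sdr(n=2048, w=40, rng=None):
    """A sparse distributed representation: n bits with only w active,
    stored as the set of active indices."""
    rng = rng or random.Random()
    return set(rng.sample(range(n), w))

rng = random.Random(7)
a = random_sdr(rng=rng)
b = random_sdr(rng=rng)
overlap = len(a & b)  # two unrelated random SDRs share almost no active bits
```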
A
So go there and read some stuff. Sparse distributed representations are super important; that binary representation that the brain uses is really, really important — it just is — so read up on it there, I would suggest. And if you want more formulas for encoding data for HTM systems, right here — this is everything I learned about encoding; I learned it from this document. And here's...
A
If you want some more stuff about SDRs, read these other papers — there's a ton of information about SDRs and why they're important. If you want to know how they benefit machine learning systems, this is our most recent paper, "How Can We Be So Dense?", on the benefits of using highly sparse representations in deep learning systems.
A
A
Have you seen the HTM School videos on SDRs? Because that would be helpful — they're on this YouTube channel; just go look at the HTM School playlist and watch those. Yeah, so maybe watch them again. I mean, the binary thing is really important, and they're just bits — they're just bit arrays.
A
A
Thanks, you guys, for watching. Don't forget to like the video and subscribe to the channel if you haven't already — that helps me personally, because Jeff and Donna and the other folks at Numenta can see that what I'm doing is helpful to you guys, and I get to keep doing it. So yeah — it was a long time ago, I remember we had some conversations with DARPA, but we're not looking for funding at the moment.
A
Okay, I'm gonna hang it up, everybody. Thanks for joining. I usually do this on Twitch — I tried, I tried, all right — but I'm trying YouTube. I'm gonna do another livestream on Friday; actually, we're gonna do Hackers' Hangout on Friday, which will be on YouTube. So everyone take care, unless there are any last-minute questions or anything — take care and have a great week.