From YouTube: HTM Hackers' Hangout - Aug 5, 2016
Description
Come and chat with other HTM Hackers. Informal, anyone may join.
A: And we are live. Hello, HTM community. This is Matt Taylor from Numenta, and I've also got with me David Ray from Chicago, Fergal from Ireland, Marcus and Scott at the Numenta offices in Redwood City, and Paul Lamb, who's been a recent addition to our community, but he's done a lot of interesting things, so happy to have you here too. I'm going to run over the agenda real quick, which is posted in our forum.

A: If you remember, like two years ago, two and a half years ago, we had this initiative to get to a 1.0 and figure out what that meant, and the main thing that people wanted to do was to split NuPIC, which was one codebase, up into two codebases, one with Python, one with C++, the core C++ having a complete algorithmic set, so it could run independently from any of the Python stuff.

A: The Python would provide language bindings and also a set of algorithms implemented in Python, but this would allow people to extend nupic.core in their own favorite programming language. So they could write a client that binds to the core algorithms that exist in C++ and then create their own set of client tools in whatever language they want, similar to what we've done at Numenta with Python. So anybody could do what we've done in Python in any language they wish. So someone asked what happened to that idea.
A: There are things like, we don't have enough encoders in C++, so anybody who writes a client would need to write their own encoders. But I think all of the algorithms are there: the temporal memory algorithms are there, the spatial pooler, et al. I think the algorithm set is complete, but there would still be work to do, like with the anomaly stuff, anomaly likelihood for sure; all the client-type stuff that's currently written in Python and not in C++ would have to be reimplemented. But the gripe, I think, is: what happened to that plan?

A: And why don't we hear about it anymore? And that's a valid gripe, because we don't talk about it much. Because, after we broke up those projects into a C++ and a Python project, people stopped complaining; I really didn't hear much at all from anybody about "Oh, I want to make my own language binding, how do I do it?" No one's asked that, so I kind of figured maybe it wasn't as important to the community as we first thought.

A: So if anybody out there is listening and wants to write language bindings to our nupic.core in C++, please speak up on the forums or in the comments or whatever and let us know, because if somebody really wants to do that, I want to enable it. The only reason I haven't been pushing on this, you know, 1.0 version of nupic.core is because I haven't felt like there's been much demand over the past year. So, does anyone want to comment on that before we move on to another topic?
A: Great, all right. The next topic is really quick. If you're watching this and you have problems installing NuPIC, there's a section of our forum called NuPIC. That's the category; that's where to put the installation questions. It's just general discussion about the NuPIC library, which is different from the HTM.java library that David is in charge of. So we have different categories for them, because they're, you know, different implementations. There's also a NuPIC developers category, which is for contributors to talk about ongoing code development.

A: That's not the place to post installation and build questions, or problems about getting it running, so I keep moving those back. I noticed there's a trend of people posting installation questions in the wrong place, so I just wanted to bring that up; I'm going to keep moving them. And the last topic, aside from the meetup, that I wanted to talk about was this, and we had a little mini hangout about this last week, last Monday: we were still trying to get HTM.java running on NAB and producing good scores.

A: Theoretically it should produce similar benchmark results as NuPIC, because they're the same algorithmically, but we're having, it's either an integration bug or just some type of bug in the anomaly system in HTM.java; I don't think we're quite sure yet. So I wanted to let David and Fergal have a chance to talk about that. Maybe, David, you could explain the current state of what you're doing. I know you talked about this in the forum.
B: Oh, that's okay. My approach right now: there have been some, I guess, updates to the temporal memory written in Python and in C++, due to Marcus's handiwork with the column/segment walk style implementation, to try to reduce recursive iterations over the different data sets and make it more efficient. So I was busy trying to debug.

B: I have a bare-bones file which is the same in Python as it is in Java, and it just assembles basically the algorithms together, and you can walk the data through and see the output. So I'm going to use the Python and the Java versions side by side, and now that I have an algorithm that produces the exact same output, I can test and see, you know, where the results are diverging.
A
You're
in
a
good
place
for
to
continue
investigating
right
sounds
like
so
one
suggestion
I
would
say:
is
it's
going
to
be
hard
to
keep
up
with
with
with
new
pic
I?
Think
algorithmically
is
things
are
in
a
bit
of
flux
because
Marcus
has
come
along
and
made
some
changes
and
I?
Think
if
you
try
and
freeze
your
HTM
java
version
to
some
shot,
some
version
of
new
pic,
you
can
focus
on
the
anomaly
thing
and
or
sun
now,
but
not
worry
about.
C: Sure. I've recently been in a similar position as you, David, with two different implementations that hit the random number generator in different orders, and trying to figure out why they're producing different results. I had this recently with some of our research work on temporal memory, and the way I ended up debugging that, because it's hard when the results aren't the same, is I visualized it. I had to run it and see if similar things were happening, because you expect similar things to happen: like, similar numbers of segments should have grown.

C: Each of those segments should have a similar number of synapses, and when I was comparing side by side with that, I would find differences. That's how I would debug it. I'm not saying you should do that; I think you're on a good path, but I'm just throwing it out there that that's another type of debugging I had to do. Yeah.
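A minimal, library-agnostic sketch of that kind of side-by-side check; the temporal memory accessors used here are hypothetical stand-ins, not NuPIC or HTM.java calls:

    def tm_stats(tm):
        """Coarse statistics that should match across implementations even when
        the individual RNG draws differ (hypothetical accessor methods)."""
        seg_counts = [tm.num_segments(cell) for cell in range(tm.num_cells())]
        syn_counts = [tm.num_synapses(cell) for cell in range(tm.num_cells())]
        return {
            "total_segments": sum(seg_counts),
            "total_synapses": sum(syn_counts),
            "max_segments_per_cell": max(seg_counts),
        }

    def compare(tm_a, tm_b, step):
        # Flag the first timestep at which the two implementations drift apart.
        a, b = tm_stats(tm_a), tm_stats(tm_b)
        for key in sorted(a):
            if a[key] != b[key]:
                print("step %d: %s diverges (%s vs %s)" % (step, key, a[key], b[key]))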
B: That was the method I'd been using for the last year and a half. The thing is, as you said, you can't get the same randomly chosen segments and randomly created synapses, but the numbers of those should be the same, and the types of output should be the same. The numbers should be exactly the same, whether or not the actual indexes of the segments in cells and synapses are the same. So I agree, I mean.

B: That was the method I was using before. But since, you know, I've had my reality tested, my confidence level has decreased a little bit, so now I'm looking for just getting the exact same output.
A: Sounds good. At least you know what you have there; you know where to start. Okay, thanks for your work on that, David, and we'll keep monitoring it. I think when HTM.java is on that NAB leaderboard, that's a big one, I think, for everybody, so I'm looking forward to that. We're all working towards that. That would be awesome.

A: The next topic I have is another forum category, called numenta-apps. Let me steal the screen here. So under NuPIC again, as I mentioned earlier, let me use my whiz-bang feature, okay. There's a developers category: this is for us, contributors, committers, people who are working on the codebase. And then there's a numenta-apps category: this is for any questions people have about Numenta applications that we've been creating and publishing, which are all open source.

A: So if you're trying to get that stuff running, this is the place to ask, as opposed to somewhere else, maybe HTM Hackers, which is kind of a more general place to just ask about HTM in general, HTM apps in general, building things with HTM, et cetera. So those are the main things I wanted to talk about. Does anyone else want to have the floor for a discussion, bring up a topic? We've got people in the room here that might be able to answer questions.
B: Well, I just want to know, I just added a method to the universal random number generator that returns doubles, so now it could,

B: it could potentially be used inside of the spatial pooler also, and it gets the same output in Java as it does in Python. And my method is, you know, not to try to keep the same full precision but to truncate the precision: use the int, truncate the precision, and create a double from it. That way you're getting random doubles and you get the exact same number; you don't get some really weird, you know, trillionth-place variation in one language or the other. And I'm able to keep them,

B: keep the generated numbers positive too, so I don't get any weirdness there. So it's pretty efficient and it's fast, I mean, because it's an xorshift. So I'm just wondering, I want to ask Scott what his affinity, I'll call it, is toward, you know, considering that for future use.
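A minimal, self-contained sketch of that truncated-precision idea; this is not the actual htm.java UniversalRandom API, and the generator and helper below are stand-ins:

    class XorShiftRandom(object):
        """Tiny 32-bit xorshift generator, standing in for a cross-language
        'universal' RNG that already produces identical integer draws."""

        def __init__(self, seed=42):
            self.state = (seed & 0xFFFFFFFF) or 1

        def next_int(self, bound):
            x = self.state
            x ^= (x << 13) & 0xFFFFFFFF
            x ^= x >> 17
            x ^= (x << 5) & 0xFFFFFFFF
            self.state = x
            return x % bound

    def next_double(rng, digits=8):
        """Positive double in [0, 1) built from a single integer draw and
        truncated to a fixed precision, so Java and Python can agree exactly."""
        scale = 10 ** digits
        return rng.next_int(scale) / float(scale)

    rng = XorShiftRandom(seed=42)
    print([next_double(rng) for _ in range(3)])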
D: I don't know. Yeah, potentially. I don't know how big it'll be; I'm not sure how involved that'll be. But David, keep in mind that just changing to that random number generator isn't going to get you exact matches by itself. There are other things that we'd have to do. One is,

D: you know, in the docstring or somewhere, saying this is how many times you call it and this is the order in which we use it, so that if some other implementation, you know, does something out of order or whatever, they might have to call the random number generator in one order but then use the values out of order or something, to make sure you get the same results. The other thing is, even if we do all of that, we

D: might not get the same results, because of floating point fall-off noise. There's kind of a separate idea floating around to stop using floating point numbers, but that's a whole other change, so just keep that in mind. Yeah.
B: Right, right. Well, I mean, we're doing very complex software and it has very esoteric behavior, and it's very difficult for, you know, for people. I mean, there are probably five people on the planet that can look at the code and actually have some real intuition about how it works. So, I mean, what are you supposed to do? And so, you know, as the community's affinity toward the code grows, we just need some ways to ease that transition.
A: About this: do you think, if we're going to do something as drastic as changing the RNG for NuPIC, do you think that requires that we have benchmarks before and after to compare, some type of performance comparison, just to make sure we're not screwing anything up, you know?

A: Do you think that's an option, Scott? Do you think that's valid? It's hard to say. What about basically keeping more than one RNG: one that's used by NuPIC, and one that nupic.core makes available for other implementations to choose to use instead? Yes? I don't know.
D: I don't think that makes sense. Right now we have a random number generator implemented in nupic.core and provided for other people to use. It's implemented in C++; it's not implemented in a bunch of different languages. I don't think it makes sense to add a second one that we're not using. So, I mean, we could add that in, but only when, I'd say, only when we switch over to it.
A: Okay, so let's continue discussion of this where we have been discussing it. In the meantime, David, you're in a place where you can continue working on the bug you're working on, and this is sort of a separate issue. So, Chandan hasn't joined us yet, so I'm going to talk real quickly about our meetup next week.

A: I think we've got more than 60 RSVPs so far, which is more than usual. So show up at six if you want to hang out and network and eat pizza or whatever we end up having. Show up at six; I'll be there. Hopefully we'll have a couple of engineers from the team there, and it looks like Marcus is going, so we can hang out and talk about HTM, whatever you guys want to do. I think H2O, which is our host, is actually who Jeff works for now.

A: Another ten-minute thing: our director of marketing, Christy Maver, will be there just to give a message from Numenta, or just a company update on what's going on there. And then we're also going to have Yuwei from the Numenta research team coming to give some short thoughts about current HTM theory, and it really could just turn into a Q&A session if you have questions. He knows a lot about the internal research stuff.

A: We can just kind of pick his brain to see what's going on there. And then this whole lightning talks and demos thing is just kind of loosey-goosey; some people have signed up to give some talks, and we're going to do it. So Ryan McCall will be there; he's a former intern at Numenta and has used NuPIC and HTM Engine extensively, so he's been working on HTM Engine to add prediction to it.

A: I think he's going to talk about that, which is cool; I've always wanted that to have prediction. I'm going to run through the HTM School visualizations, sort of live, and explain them and take questions; not sure how long that's going to take. And I already talked about Yuwei. Then Chandan is going to talk about a comparison of popular AI technologies; he's been doing some research in that area. So it should be an eventful meetup, and I'm looking forward to it. I'm sorry for you guys who are not in the area

A: and won't be able to make it, but I'm going to try and get everything recorded. I don't know how good the quality is going to be, we'll see, but I will have a recording of it online afterwards. And that's all I wanted to talk about today. So I'm going to give you guys one more chance to speak up about anything, if you want to bring up a topic for discussion right now.
E: I have a quick theory question. Can you guys hear me?

C: Yes.

E: So, that's awesome. Also, HTM School is just amazing. So, okay, my question is: I'm feeding in just scalar data, and one thing I'm wondering is, what would be lost if, in theory, we set up the encoder to be sort of the size of the system, let's say the encoder's n was 2048 columns and w was 40, sort of the overall size of the system?
E: What would be lost if we fed that encoding straight into the temporal memory and skipped over spatial pooling, in that sense? Because, to me, I guess I'm having a hard time understanding why, in the case of just simple floating point values, we actually need spatial pooling, if we can create a sparse representation sort of directly from the encoding there. So that's one quick question that I have, because I'm kind of tempted to want to do that, but I feel like I must be missing something there.
C: It's funny you ask; I've thought the same thing, and I don't disagree in general. If you're passing one thing into the spatial pooler, like if you're encoding a scalar and then sending that into the spatial pooler, you're right that it's not clear why that's necessary. I think, like, when you need to get a sparse SDR out of just a set of bits, it's useful for that, but I do think you're right that you could make an encoder that just does the whole job, basically.
D: Even if you have one field, it's easier to pick the parameters a bit, give it a good encoding, and then let the spatial pooler normalize it, the sparsity and everything. But yeah, there's no reason you can't feed it in directly, and we've done that in some cases. So when we were doing the natural language experiments, we were doing a similar thing where we skipped the spatial pooler and fed our representation directly to the temporal memory.
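A minimal sketch of that "skip the spatial pooler" setup, assuming a 2016-era NuPIC install; exact import paths have moved between releases, and the parameter values are illustrative:

    import numpy as np
    from nupic.encoders.scalar import ScalarEncoder
    # In older releases this lived at nupic.research.temporal_memory instead.
    from nupic.algorithms.temporal_memory import TemporalMemory

    # Encoder sized like the layer itself; NuPIC's ScalarEncoder wants an odd w,
    # so 41 stands in for the "w of 40" mentioned in the question.
    encoder = ScalarEncoder(n=2048, w=41, minval=0.0, maxval=100.0, forced=True)
    tm = TemporalMemory(columnDimensions=(2048,), cellsPerColumn=32)

    for value in [10.0, 12.5, 15.0, 12.5, 10.0]:
        encoding = encoder.encode(value)           # dense 0/1 array of length 2048
        active_columns = np.nonzero(encoding)[0]   # every on-bit becomes an active column
        tm.compute(sorted(active_columns), learn=True)
        print(value, len(tm.getPredictiveCells()), "cells predictive")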
F: I think the answer is that the spatial pooler is necessary if you need to orthogonalize or decorrelate the bits, the actual choices of bits. So, for example, if you use a scalar encoder, the one that slides up and down through the window of the encoder bits, then what you actually want is to have the individual bits separated, distributed,

F: distributed, basically. So you need to have somewhere where you randomize. Or, in the case of the Cortical.io encoder, the position of the bits is actually semantically important, and that's why you can put the Cortical.io encoder output straight into the TM.

F: But if you use the scalar encoder, you don't get the same sort of thing, particularly if you use things like local inhibition rather than global inhibition. So at some point you need to effectively randomize the spatial positions of the on bits, and the SP does a very good job of that.
A: This will make more sense after a couple more episodes of HTM School. There's a lot more the spatial pooler is doing, a lot more options, than I showed in the first episode, especially when we talk about local inhibition and topology; those are entirely different subjects. And one other thing: in the real world it's extremely rare that you're going to have an encoding that has the characteristics that the spatial pooler produces, so most of the time you're going to have multiple inputs coming in that need to be processed by the spatial pooler.
D: So one thing I want to mention: what was said about local inhibition is right, but we normally don't use local inhibition. For a typical network, if you're speaking of a single scalar value, you probably wouldn't use local inhibition, and I don't want people to get confused and think that there is a relationship between bits that are close together, as far as that goes, when you're using global inhibition.

D: As far as the temporal memory and everything downstream are concerned, bits next to each other are no more similar than bits very far apart. When we say distributed, we don't mean that; we mean the ones are sort of randomly scattered around the input space. It just really has a very different meaning, so I just want to make sure we don't conflate those two things.
B: So that would mean that all of the semantic possibilities have equal representation somewhere in the bits. So, if I have five semantic meanings that I'm encoding, or semantic variations, then all five will have a bit, or more than one bit, that is responsible for representing that semantic meaning? Wow.
F: Yeah, that's correct, but this particular thing that I think Scott is talking about is the fact that what you don't want is for the semantics to be dependent on a very small number of individual bits. Okay, yes, so the idea is that, for example, if you have an encoding like the scalar encoder, if you just look at that, then from a certain point of view many of the bits are effectively on/off bits, right?

F: So you need to have these particular bits mean, say, a five in the range, plus or minus a little bit of uncertainty, right? So a lot of them will stay on. Now, that's what you want, but you don't want there to be some direct, like, topological connection between those particular bits and the eventual output of the system, because what happens then is that there can be some failure where the system is not looking at a sufficient number of bits. What you want to do is share the semantics out among as many bits as possible.
F: It's called overcompleteness in the mathematics, basically. But the way that it's ensured, in the standard way that NuPIC is used, is that you put in a layer that basically randomizes the connections, so that even if the thing is very correlated on the input or encoder side, by the time it gets to the TM, which is the important part, the individual bit position is not as important as it would be at the encoder stage.
A: You can see that really clearly at the end of the last spatial pooler episode of HTM School, looking at the active bits, the active columns, coming out of the spatial pooler, and at the encoding. You can look at the encoding and tell what the data looks like; you can't look at the active columns and tell what it represents at all, but it does represent pretty close to the same thing.
F: And this is particularly important to me, to the work that we're doing, because we're doing stuff on GPUs, and this is a big problem, because you have to basically spread out where the information is coming from when you're using local inhibition. And at large scale, when you have a lot of units, you need local inhibition, you know, and you also need to have the choice.

F: So if you're doing things like vision or auditory or anything like that, where you have very high-dimensional inputs, and I think the same is also going to be true of Cortical.io, where you have 16k binary inputs, then you're going to be using local inhibition at some point. And at that stage you need to make sure that your representation is distributed in a spatial sense as well as in a combinatorial, abstract mathematical sense.
E: One more quick question, if nobody else has one, and you can cut it short if needed. Okay, so I just want to make sure I understand this right. So right now I'm feeding in one value, but I want to eventually feed in four values, because it's a one-dimensional thing now, but it'll be a four-dimensional thing.

E: But then, if the number of values gets too high, is it that it would just become very slow, or that the space would become sort of too saturated with on bits? I know the spatial pooler would only activate 2% of the bits regardless, I guess. Well, I'm interested in the idea of scaling out

E: to potentially a high number of variables. If you had many, many variables, let's say you had like 100 variables, but only a few of them actually turned out to matter, like six of them turned out to matter, could you feed in all of those streams and then it would eventually sort of discern that, you know, it turns out that 90 of them actually don't really matter, they seem to show no real pattern, whereas the real pattern is coming in just a few of them?
A: Let me start on at least one part of that. So the last thing you're talking about is something that swarming helps with. If you have a lot of different inputs and you don't know which ones contribute to the prediction of the thing that you want to predict, you run a swarm on it, and the more inputs you have, the longer the swarm is going to take, because it's going to try different iterations of different permutations of encoder parameters for each of those inputs and try to find which ones contribute to the prediction the most.
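A rough sketch of that workflow, modeled on NuPIC's swarming examples; the field names and CSV path below are made up for illustration:

    import nupic.swarming.permutations_runner as permutations_runner

    SWARM_DESCRIPTION = {
        "includedFields": [
            {"fieldName": "timestamp", "fieldType": "datetime"},
            {"fieldName": "altitude", "fieldType": "float",
             "minValue": 0.0, "maxValue": 40000.0},
            {"fieldName": "velocity", "fieldType": "float",
             "minValue": 0.0, "maxValue": 600.0},
            # ... the other candidate fields would be listed here ...
        ],
        "streamDef": {
            "info": "flight data",
            "version": 1,
            "streams": [{"info": "flight.csv",
                         "source": "file://flight.csv",
                         "columns": ["*"]}],
        },
        "inferenceType": "TemporalAnomaly",
        "inferenceArgs": {"predictionSteps": [1], "predictedField": "velocity"},
        "swarmSize": "medium",
    }

    # The swarm permutes encoder parameters per field and returns the model
    # parameters that predicted the chosen field best, which shows which of
    # the candidate fields actually contributed.
    model_params = permutations_runner.runWithConfig(
        SWARM_DESCRIPTION, {"maxWorkers": 4, "overwrite": True})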
C: I don't have anything super insightful to say here. There are a couple of different kinds of performance; I don't think we're talking about the speed of the code here, I think we're talking about how well it does, what the results are. And, I mean, are you focused on anomaly detection or on prediction right now?
E: Right. Well, okay, essentially anomaly detection. Well, it's sort of both, because we're trying to basically look at the sequences, these streams that come in, and each represents a certain person's control behaviors, like they're landing a plane. Let's say they would have four different controls they would use to land the plane. So we want to see a plane landing and not know who it was, but then try to determine which model best explains that behavior.
A: I don't know, but there's a hacky way to go about this, because you're basically trying to classify behavior through a stream of events, essentially. So the way I've tried to do this before, and your mileage may vary because I'm not sure how well this works on different data, is: you train one individual model for each of the classifications and feed in just that data to each one.

A: So if you sort them by anomaly likelihood and take the lowest anomaly likelihood, that would be your winner. That's currently who it thinks it is: the one where the stream is most similar to what it's seen in the past. And this, like I said, is sort of a hacky way to do it. It's not biologically plausible; that's not how your brain is doing classification, but it's a way to do it with HTM.
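A small sketch of that one-model-per-classification idea; the anomaly_likelihood helper below is hypothetical, standing in for however each trained model exposes its anomaly likelihood:

    def classify_stream(models, records):
        """Return the label whose model finds the stream least anomalous.

        `models` maps a label (e.g. a pilot's name) to an already-trained model
        object exposing a hypothetical anomaly_likelihood(record) helper."""
        scores = {label: [] for label in models}
        for record in records:
            for label, model in models.items():
                scores[label].append(model.anomaly_likelihood(record))
        # The model that is least surprised, on average, best explains the
        # behavior, so the lowest mean anomaly likelihood wins.
        means = {label: sum(vals) / len(vals) for label, vals in scores.items()}
        return min(means, key=means.get)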
C: Well, you could also, I mean, another way you could do this is with one model, and I don't know how well this would work, but a pretty straightforward way to do it would be: whenever you're starting a new person flying, you do a quick reset, like you reset the HTM, which doesn't change anything it has learned; it just knows that it's starting a new sequence. Then you just go and you let that person fly, and the whole time you'll have a classifier

C: that's learning all of this. The classifier looks at the state of the temporal memory and you label it as, like, this is Scott flying, this is Scott flying, at every part of the sequence. And then what you're going to get is a classifier that can look at the model at any time and tell you whether it's Scott flying or Matt flying or whoever. So you could do this with one model and the classifier.
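A rough sketch of that single-model-plus-classifier idea; the reset, compute, learn, and infer helpers below are hypothetical stand-ins, not a specific NuPIC or HTM.java API:

    def train(model, classifier, flights):
        """`flights` is a list of (label, records) pairs. `model.compute` is
        assumed to return the temporal memory's active-cell SDR for a record,
        and `classifier.learn(sdr, label)` associates that state with a label."""
        for label, records in flights:
            model.reset()                      # mark a sequence boundary
            for record in records:
                sdr = model.compute(record, learn=True)
                classifier.learn(sdr, label)   # label every step of the sequence

    def infer(model, classifier, records):
        """Vote over the stream and return the most frequently inferred label."""
        model.reset()
        votes = {}
        for record in records:
            sdr = model.compute(record, learn=False)
            label = classifier.infer(sdr)
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)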
E: Then you would have one model where you show it the first person flying, then reset and show it a second person flying, reset, show it a third person flying, and then you've got this model that covers everybody, sort of compacted into one. And then you show it a stream and, you know, it could say, oh, that's probably Scott, oh, that's probably Marcus, or whoever flying.
C: It'll already have mappings from tons of cell SDRs, basically, to the labels, so in general it'll be able to look at the differences. Yeah, you know, I can see both of these working. The anomaly detection one might actually be better, now that I'm thinking about it a little more, but there are different ways to do this.

C: Your original question was about, like, the number of fields, and that's a complicated question, because one thing about passing in a lot of fields is that the anomaly score is now being determined by a lot of them; it's kind of spread around. Like, your different columns that are being spatially pooled over, their meaning is going to be, there might be only a fraction of the meaning that is given to each field.
E: Like that weighting problem; you talk about that in one of the videos, that you're going to weigh some things more than others. If you sort of dedicate only two bits to whether it's a weekend or a weekday, then that's not going to weigh very much, so you could control it through weighting. Is that what you're saying?
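A small sketch of controlling weight through bit allocation with NuPIC-style scalar encoders; the parameter values are illustrative, and NuPIC's ScalarEncoder requires an odd w:

    import numpy as np
    from nupic.encoders.scalar import ScalarEncoder

    # The field that should dominate gets many more on-bits than the field
    # that should barely matter; both counts are illustrative.
    velocity_enc = ScalarEncoder(n=400, w=41, minval=0.0, maxval=300.0, forced=True)
    weekend_enc = ScalarEncoder(n=20, w=3, minval=0.0, maxval=1.0, forced=True)

    def encode(velocity, is_weekend):
        # Concatenating the two gives velocity roughly 14x the influence of
        # the weekend flag in any downstream overlap computation.
        return np.concatenate([velocity_enc.encode(velocity),
                               weekend_enc.encode(float(is_weekend))])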
A: You could. I mean, this is where the tuning of HTM becomes important, because you can't just swarm on something, take the model parameters that you're given, and expect that to do really well. At this point in the evolution of HTM and HTM systems, you sort of have to understand it well enough to do this type of tuning if you're trying to apply it to a particular problem like you're doing, knowing that velocity changes are much more important than altitude changes or attitude changes.
F: So it looks at a large amount of data, and then it figures out how to map that into a consistent encoding and does all that pre-weighting and combining. I think he was telling me even a year and a half ago that on some of the aircraft they were measuring, they were getting 186 different scalar inputs, and that his system, what NASA were talking to him about, was basically turning that into something that was an SDR representation that could be fed directly into something like HTM.

F: When you have, like, I'd say, three or four dimensions, it's no problem; even double that and it becomes a big problem if you feed in, you know, unfiltered, unweighted data with no domain expertise feeding into it, because most likely the majority of those variables aren't going to explain anything. You're killing those variables; if you just give everything equal weight, it's effectively like tuning into a random radio station and just feeding in that value. That's literally what it is, because a lot of these things are just not important. Yeah.
A: So it should be able to tell that when rainfall goes up, flooding goes up. But basically the system can predict, without the rainfall, that the flooding, the water level, is going up, just because that's a seasonal pattern in the data; it doesn't need the rainfall. The rainfall is the cause, but it doesn't necessarily need to know the cause to predict the effect, because it sees the effect over time, right? Yeah.
E: So if you're finding that, then that's good with me. I'm similarly interested in the idea of being able to scale it, potentially being able to scale it, and in any thoughts on that. But thank you guys so much for spending so much time on my questions; I totally just crashed this party. I don't contribute to the codebase or do anything, I just try to get it installed, which is sort of the challenge, yeah. So thank you very much; I'll let you guys get back to the important stuff you were talking about.

E: It's, I think, just S-H-T-I-S-E-R-1. I asked about using a hierarchy for handling multiple fields some time ago, and also basically the same question I asked today, and you recommended using the anomaly likelihood, and you said, can I see a little of your data. And I know you answered; you deal with a lot of questions, that's why it's not coming right to mind, but yeah, I'm a graduate student working on this stuff.