From YouTube: HTM Hackers' Hangout - Apr 5, 2019
Description
Discuss at https://discourse.numenta.org/t/htm-hackers-hangout-apr-5-2019/5735
HTM Hackers’ Hangout is a live monthly Google Hangout held for our online community. Anyone is free to join in the discussion either by connecting directly to the hangout or commenting on the YouTube video during the live stream.
More info on all these topics at http://numenta.org.
A: Alright, welcome to HTM Hackers' Hangout. Thanks, everybody, for joining today. I appreciate you taking the time out of your busy schedules; I know we've all got stuff to do, so I'm going to get right to it. Today I've turned my camera on the whiteboard here. I have an agenda I'm going to talk through, so just so you know, if you want to skip forward in the video, the first thing I'm going to talk about is Python 3.
A: Let's just talk this out. I'm definitely glad you're here, David, because the first thing I wanted to talk about was Python 3. We've been looking into it, and we've already created a Python 3 package called nupic.torch. Python 3 has this idea of namespace packages, so you can pip install, for example, nupic.torch, and it lives in that nupic namespace in your Python 3 environment.
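As background on what a namespace package like nupic.torch means in practice, here is a minimal sketch using setuptools; the directory layout and version shown are illustrative assumptions, not the actual nupic.torch project files.

```python
# Sketch of a PEP 420 style namespace package layout, as used for packages
# named like nupic.torch. Illustrative only, not the real repository layout:
#
#   nupic.torch/
#     setup.py
#     nupic/            <- no __init__.py, so "nupic" stays a shared namespace
#       torch/
#         __init__.py
#
# setup.py
from setuptools import setup, find_namespace_packages

setup(
    name="nupic.torch",       # installed via: pip install nupic.torch
    version="0.0.1",          # placeholder version
    packages=find_namespace_packages(include=["nupic.*"]),
)
```

After installing, `from nupic.torch import ...` works alongside any other package that shares the `nupic` namespace.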
A: We're going to have to do that conversion soon; I'm just putting the decision off at the moment. But I do want to toss this idea out about naming things. If we end up in a situation where we have two Python 3 projects, and this may or may not happen, I just want to have the conversation we might have to have. I think it would make more sense to have an htm.py, which would be the community version, and this one, which would be the Numenta version.
A
If,
if
this
happened
now,
I
would
I
would
really
like
for
this
not
to
happen.
I'm
like
I,
have
an
a
goal
to
try
and
steer
the
ship
and
see
if
we
can
take
all
the
work
that
you
guys
have
done,
especially
in
new
pic
cpp
and
see,
if
that
that
core
is
enough
for
us
to
move
ahead
with
with
the
research
that
we
want
to
do
and
create
the
bindings.
A: Maybe we use nupic.py as is; maybe we create a very small package of Python 3 bindings directly against nupic.cpp, just for research. Whatever we end up doing, I don't know, but we're going to have to do it pretty quickly, over the summer. This is something that we have to do, so I wanted to float this idea of what we're going to name things in Python 3, and the potential of a name change if this happens. I don't know how you feel about that.
A: Okay, great. Well, like I said, I would rather not have two packages. I want to steer it towards this: hopefully we can write whatever we need for research against the clean version of nupic.cpp, but we're going to have to spend some time evaluating that and making sure that we can adjust our research.
A: Hopefully there won't be much adjustment to do, but we haven't even started the exercise yet, so that's still coming. I just wanted to float that idea about namespaces. Next week, Wednesday in particular, I'm having another meeting with the team, and we're going to talk about what the forcing function is going to be internally for Numenta research. It's hard to force scientists to move from one programming environment to another.
A: Whatever work we're going to have to do, we're going to have to do. We're going to talk through all of the details of that work, I think next week, and we're going to put something in place. We're going to set a date and say: this is the date we're going to go Python 3, this is basically what we think we're going to have to do to make that happen, and everybody's going to have the expectation that there's going to be some overhead in dealing with that.
A: So the ball is rolling. I just wanted to let you guys know that's where we're at right now. As this goes along, you're going to see me, hopefully, working on this on Twitch, so you'll get a feeling for the struggles I'm having and the needs we have to support research. I'm hoping the transparency involved here is going to help everybody in the long run have the tools that we want. So that's the plan with NuPIC and Python 3.
A
It's
fits
a
loosey
goosey
plan,
but
it's
a
plan
and
and
we're
in
we're
taking
steps
forward.
So
any
comment
about
that
and
yeah
and
and
anybody
who's
watching
on
YouTube
for
sure,
if
you're
invested
at
all
in
the
HTM
community
or
HTM
forum,
let
me
know
I'm
I'm
open
to
comments
on
this.
I
want
to
do
what
makes
sense
for
everybody
and,
like
I,
said,
I'd
love
to
have
just
one
Python
3
HTM
package,
but
in
the
end
you
know
people
are
going
to
write
their
own.
So,
but
we
we
do.
A: There was an implementation of this; someone actually had an implementation before we even had the paper out, from talking to us on the sidelines of the forum, and then I think a lot of things fell into place when the paper came out. So we're going to have basically that code, just packaged in a nice way. Lewis has been working on packaging it nicely, so it can be easily incorporated into PyTorch projects. There will be, like, a modular k-winners module, and, what was the other one? There was another module, anyway.
A: The modules that were required to get this to work are packaged up and easy to add to, or to create PyTorch modules with. So that's coming, and I'm the bottleneck on that right now. Lewis already showed it to me; I just have to make sure it looks good, that it's got the right license, and there are a couple of boxes that need to be checked before we can make code open source. So that's on me; I'll get it out next week.
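For readers following along, here is a minimal sketch of what a modular k-winners layer in PyTorch can look like. This is an illustrative assumption about the shape of such a module, not the actual nupic.torch implementation, which also tracks duty cycles and applies boosting.

```python
import torch
import torch.nn as nn

class KWinners(nn.Module):
    """Keep only the k largest activations per sample; zero out the rest.

    Sketch only: the real nupic.torch KWinners module additionally
    maintains duty cycles and boost factors during training.
    """
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        # x shape: (batch, n). Find each row's k-th largest value and use it
        # as a threshold (ties may let slightly more than k units through).
        topk, _ = torch.topk(x, self.k, dim=1)
        threshold = topk[:, -1].unsqueeze(1)
        return x * (x >= threshold).float()

# Usage: drop it into an ordinary model, e.g.
# model = nn.Sequential(nn.Linear(784, 256), KWinners(k=40), nn.Linear(256, 10))
```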
A: Any comments on that, on the machine learning side? I know there's been some new stuff that we've put out there, you know, the noise-resistance thing with MNIST, so we've gotten some interest on that. I think people are starting to say, hey, this could be something we could explore more in the deep learning space, which is what we wanted. So we're going to keep going in that direction and see what else we could take advantage of in the machine learning space.
A: Okay, so the last thing I'll talk about is the Twitch stuff. I know most of you have probably seen what I've been doing on there. This has been a drastic change for me, although it feels right; I think it fits well. It's just thrown a monkey wrench into my whole process and adjusted some of my priorities, so I'm going to lay this out for you guys, because you guys are sort of my sensors in the HTM community.
A
It
feels
like
people
want
to
work
on
this
stuff
and
they
want
to
have
somebody
to
kind
of
guide
them
along
and
I'm
happy
to
do
that
actually
I
think
I'm
in
a
good
place
to
do
that.
I
actually
understand
most
of
the
theory
at
this
point,
whereas
three
four
years
ago,
when
I
was
still
doing,
this
I
didn't
understand
the
theory
honestly
well
enough
to
guide
anyone
into
making
anything.
A
So
so
now
what
there's
three
things
I
want
to
do
on
Twitch
I
want
to
expose
research
in
the
form
of
research
meetings
which
I've
done
only
one
of
but
it's
sort
of
it's
been
a
slow
week,
but
I'm
gonna
continue
doing
this.
Jeff
is
psyched
about
this.
He
wants
to
continue
streaming
research
meetings,
so
I
will
continue
to
do
that
and
then
I
also
have
established
this
weekly
show
and
is
sort
of
a
chat,
show
it's
about
AI
and
neuroscience.
A: They thought, well, there's a bunch of live coders doing things, so let's make a category for them, so they made Science & Technology. You see these robotics competitions and all this stuff, but the field is so empty when it comes to scientific education and programming education. The tool suite that Twitch gives you is perfect for collaborative work, for interactive work, for remote execution and planning. It's great for programmers; it's like the perfect set of tools for collaborative programming over a distance.
A
It
is
right
there
with
twitch
if
you
can
take
advantage
of
them.
So
I
see
that
and
I'm
like
this
is
perfect
for
an
open
source
community
manager
like
me
to
do
exactly
what
I
should
be
doing
and
and
setting
up
these
community
coding
sessions
and
talking
with
people
in
this
twitch
community
about
neuroscience
about
AI,
because
nobody
is
talking
about
AI
on
Twitch.
Nobody
and
it's
this
big
science
and
technology,
section
and
I
feel
like
I'm.
The
only
person
having
a
conversation
about
machine
learning,
AI
how
it
could
apply
to
games
or
whatever.
A
So
it
feels
like
I'm
in
a
really
good
space
to
create
a
bunch
of
content,
and
so
that's
what
I'm
doing
I'm
trying
to
create
a
bunch
of
content
trying
to
engage
a
community
that
wants
to
learn
about
this
stuff,
because
we
seem
to
be
the
the
only
loudspeaker
in
this
giant
space
right
now,
so
I'm
just
going
to
create
a
bunch
of
content.
That's
that's
the
plan
there
and
engage
people
who
wants
to
learn
about
what
we're
talking
about
when
we
say
strong,
AI
versus
weak,
AI,
cortical
columns
versus
mini
columns.
A
The
exciting
thing
is
the
community
coding,
so
I've
got
at
least
three
projects
I've
been
going
on
about
about
the
2d
object
project,
and
hopefully
a
few
of
a
few
people
are
really
engaged
in
that
so
I'm
going
to
try
continue
to
work
on
that
and
define
this
test
sort
of
it.
I'm
gonna
do
this
via
something
called
behavior
driven
development
and
the
reason
I
like
this
is
because
I
can
describe
test
scenarios
in
English
completely
English,
and
then
other
people
can
write
the
code
to
pass
the
tests
in
what
language
they
want.
A: All of these features will be in English, and then anyone else who wants to throw their hat in the ring and use an HTM implementation, like HTM.Java, to solve the same problems will have an example of how it could be done and also a test suite to run against it. So then we could turn this into a challenge for people: if you want to learn HTM, here's a set of tasks to accomplish, and here's also reference code in one language showing how they can be accomplished.
A: It depends on the day, but sometimes it's very hard to get things done on Twitch, because there's a lot of interaction; at the same time, that's why I'm doing it, because I want to engage an audience too. Okay, so that's one project that I'm excited about. Another project is Building HTM Systems, and that currently lives at buildinghtm.systems.
A
If
you
want
to
go,
look
at
it,
but
I
wanted
to
create
a
full-on
guide
to
how
to
build
HTM
systems
with
visualizations
all
along
the
way
that
shows
I've
got
encoder
several
encoders
working
now,
but
I
wanted
to
go
onto
spatial
or
build
a
spatial
cooler
and
show
how
it
works.
So
it's
sort
of
like
the
HTM
school
animations
but
live
I
want
to
have
this
be
a
server
that
stands
up
runs
an
HTM
has
spatial
point:
has
a
temporal
memory?
A
That's
constantly
flowing
some
sample
data
through
its
own
live
temporal
could
be
semi
random,
whatever,
but
something
live
through
it
and
all
of
the
pages
that
are
displayed
that
are
described
we'll
be
showing
a
live
stream
and
the
algorithms
running
on
it,
and
some
visualization
of
that
happening
so
I've
had
a
vision
for
this
for
a
long
time
and
I've
never
had
time
to
work
on
it
and
now
I'm
thinking.
This
would
be
something
I
could
do
on
Twitch
and
build
out
an
HTM
system.
Essentially,
okay.
A
So
that's
another
thing
and
the
third
thing
is
community
projects
and
I.
Don't
what
I
really
meaning
is
is
like
other
HTM,
let's
say
like
I
said:
I
would
might
solve
this
with
which
H
with
H
TMJ
s,
and
that
would
be
an
introduction
to
the
HTM
J
s.
Framework
I
could
solve
this
with
new
pic
PI
or
PI
3
at
some
point.
Hopefully,
whatever
is
the
Python
3
version,
or
we
could
do
it
in
Java.
A
So
these
are
sort
of
the
ways
that
I
plan
on
engaging
with
the
technical
community
of
builders
in
the
HTM
hackers
community
over
the
next
year
or
so,
which
has
totally
changes
changed
from
my
original
direction.
This
year,
which
was
to
make
videos
and
finish
this
building
extreme
systems,
but
turn
it
into
like
a
wordpress
doctor,
I,
don't
know
what
I
was
going
to
do,
but
but
not
on
twitch.
C: Go ahead; I think it sounds really good. The Building HTM Systems thing kind of got started and was dormant for a while. That's the one that's got all the encoders, and then it's got kind of placeholders for everything else. Yeah, the placeholders look like things that would be cool to have finished. So you're talking about something that's like a course, with a consistent reference implementation as you go through? Yeah.
C: Do you have any thoughts on how you're going to express those object recognition scenarios with BDD? Because I've used Cucumber before, in more of a business-logic testing scenario, where you say, you know, a customer buys a product, it goes in the cart, that sort of thing. This is really a bit unique; you're talking about something navigating a grid. Are you going to be describing that in English, or are you going to be trying to draw ASCII-art pictures?
A: So, no. If you want to check it out, it's on YouTube too. I'm starting to think this out, but I think it should be possible. I mean, every one of these implementations is going to have to have some code that loads the object schema, loads an object, and then serves it to an agent, right?
A
So
there
has
to
be
an
API
there,
so
the
BDD
will
initially
start
with
an
object
library
of
some
kind
and
or
an
interface
for
the
object
to
be
loaded
and
inspected,
and
then
we'll
move
on
to
an
agent
representation
or
something
that
when
given
an
object,
has
the
ability
to
move
through
I
mean
we
have
to
start
really
simple
and
then
once
you've
described
these
key
components
of
the
system.
I
would
move
on
to
okay.
A
The
scenario
is
after
training,
you
know
a
training
scenario
where
an
agent
is
where
we
write
the
rules
for
what
training
is
essentially
maybe
there's.
Maybe
we
end
up
with
some
type
of
training
training
schema,
you
know.
So
we
can
say
this
object
is
gets
to
skip
this
many
touches
in
this
order
or
use
this
movement
half
you
know,
I,
don't
know
these
are
things
we're
gonna
have
to
work
out
as
you
go
along,
but
I,
don't
think,
there's
anything
in
the
BDD
idea
or
in
gurken.
That's
going
to
prevent
us
from
doing
this.
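As a rough illustration of the kind of scenario being described, here is a hypothetical Gherkin feature and a matching Python step definition using the behave library; the feature wording and the ObjectLibrary/Agent interfaces are invented for the example, not existing project code.

```python
# features/object_recognition.feature (hypothetical wording):
#
#   Scenario: Agent recognizes a learned 2D object
#     Given an object library containing a square and a triangle
#     And an agent trained on the square with 10 touches
#     When the agent senses the square at a random location
#     Then the agent should report "square" within 5 touches
#
# features/steps/steps.py -- behave step definitions.
from behave import given, when, then
from objects import ObjectLibrary, Agent  # hypothetical modules, for illustration

@given("an object library containing a square and a triangle")
def step_library(context):
    context.library = ObjectLibrary(["square", "triangle"])

@given("an agent trained on the square with {n:d} touches")
def step_train(context, n):
    context.agent = Agent()
    context.agent.train(context.library.get("square"), touches=n)

@when("the agent senses the square at a random location")
def step_sense(context):
    context.result = context.agent.infer(context.library.get("square"))

@then('the agent should report "{name}" within {n:d} touches')
def step_check(context, name, n):
    assert context.result.name == name and context.result.touches <= n
```

The point of the split is exactly what's described above: the feature file stays in English, while each implementation (Python, HTM.Java, HTM.js) supplies its own step code.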
A: Yeah, jump in here, go for it.
D: Okay, what I love is your deal about writing code on Twitch. May I suggest that perhaps what you could do is start with the BAMI pseudocode and show that being fleshed out into real working code? There's an awful lot of people that are struggling with how to go from the concept to the code, and we're getting some newbies here that aren't real good with that.
A: That's a good idea. That would probably be part of this, because the next big section is the spatial pooler. That's essentially what I would be doing: looking at the BAMI pseudocode and trying to figure out how to implement it. So you're saying I could probably prioritize that a little sooner, because we've got a lot of new people coming in wanting to understand the next part of the system. So that's good, people. And the...
A: I can handle it, I can handle it. It will not decrease the value of the channel, I promise. There are tools in place that I could use to handle high-volume traffic, if that happens; I would be ecstatic to have that problem to solve. I wouldn't worry about it right now, because from what I've seen of Twitch, there are tools in place; I just don't need to use them yet. Okay, all right. I don't mean to discount your concern; it's a valid concern. What I'm saying is I've totally thought it through. Okay.
F: This is a little self-indulgent for a second, but first I want to tell you that I got a data science job, and I think it's pretty much because of what I'd done with HTM. And now that I'm here, I'm bringing them HTM, which they didn't know about. So, all right, you guys have basically been building my entire career right on top of your backs.
F: Yeah, so I've created my own wrapper class to try to do the anomaly detection stuff. But anyways, I wanted to say thank you; I'm really excited and happy that people are interested, and that's really cool. And just so you know, it's a major company too, so this is going to help the stuff get into the bloodstream more. I wanted to... well, actually, okay, I guess I won't ask you this here, because maybe it's more of a discussion-post thing.
F
I
would
maybe
bug
you
to
check.
I
made
a
discussion
post
like
two
days
ago
I'm
trying
to
for
my
implementation
here
for
very
practical
industry
purposes
as
much
as
my
own,
so
I'm
trying
to
make
sure
I
have
full
control
of
the
anomaly
likelihood,
because
you
know
that's
sort
of
the
ultimate
filter
that
actually
decides
things
and
so
I'm.
F: So I'm getting into that more. I know there's sort of a batch mode with it, and then there's more of an online mode, where what I was doing was just creating the anomaly likelihood object and then feeding it data. But I noticed that there was a short-term averaging window of ten, I think, and I guess...
F: What I want to understand is this: let's say you have sequences that are all of a sudden much shorter and simpler, as opposed to noisy, longer-term, real-world data. I want to be able to make sure I know how to control everything, such that if the sequences all of a sudden became really short, like 10 steps or something like that, then I'd be able to change the settings so that I still get the anomalies that I feel like I should get.
A: Right. I mean, the patterns that you're looking for are shorter, but because you're not resetting the temporal memory at any point, you're never telling it where a sequence ends. You're relying on anomaly detection that's happening over all of these sequences at once, right?
F
Yes,
no
I
know
I
know
that
the
actual,
like
the
TM
core
learning,
doesn't
change,
because
it's
just
a
continuous
learning
I
just
like
in
these.
So,
for
instance,
like
an
anomaly
likelihood,
there's
like
the
there's,
the
total
window
size
and
then
there's
that
averaging
window.
So
what
it
looks
like
is
by
default.
It'll,
look
back
at
the
last
like
eighty
six
hundred
points
and
compare
a
rolling
average
of
the
last
ten
points
and
then
do
that.
F: Say the data is just "four, three, two, one, five; four, three, two, one, five" repeating. I feel like if the averaging window and those hyperparameters of the anomaly likelihood are too big, it will sort of have a bigger gaze, but the action is small, in a small space. So I just want to be able to flex to that situation, just so I have full control, because if an anomaly doesn't get detected, then I'll probably have to answer exactly why, so I want to know that I can answer exactly why.
F: But let's say the data that I have is only like 500 data points, right? So maybe that larger window that slides over time doesn't have enough space to slide anywhere, so you're always using everything. But maybe if things are shorter in terms of their time, on a smaller timescale, you might want to have that window smaller, so that the overall distribution you're comparing against can adapt more.
F: Yes. So you can set the historical window size; I have no problem doing that. It's the averaging window, because it's not actually a class attribute; it's just a defaulted, hard-coded input that you pass into the function. So I can't just say anomalyLikelihood.averagingWindow = this; I would have to pass it into that function. So I'm trying to use that lower-level function, and I'm just having trouble, generally.
A: So what you're talking about sounds like a code change. What you could do, without making a code change, is extend AnomalyLikelihood in your own code, add this property that you want, make it part of the configuration, and then override the function that does that calculation in your own code.
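A rough sketch of that suggestion in Python, assuming the NuPIC 1.x anomaly_likelihood module, where estimateAnomalyLikelihoods() accepts an averagingWindow argument that AnomalyLikelihood.anomalyProbability() normally hard-codes to 10; treat this as an illustration, not tested production code.

```python
from nupic.algorithms import anomaly_likelihood as al

class TunableAnomalyLikelihood(object):
    """Online anomaly likelihood with a configurable short-term averaging window.

    Sketch only: drives NuPIC's lower-level estimate/update functions directly
    instead of going through AnomalyLikelihood, which fixes averagingWindow=10.
    """

    def __init__(self, averaging_window=5, historic_window_size=500):
        self.averaging_window = averaging_window
        self.historic_window_size = historic_window_size
        self._records = []   # (timestamp, metricValue, rawAnomalyScore)
        self._params = None

    def anomaly_probability(self, timestamp, value, raw_score):
        self._records.append((timestamp, value, raw_score))
        self._records = self._records[-self.historic_window_size:]
        if self._params is None:
            # First estimate: pass our own averaging window instead of the default.
            likelihoods, _, self._params = al.estimateAnomalyLikelihoods(
                self._records, averagingWindow=self.averaging_window)
        else:
            likelihoods, _, self._params = al.updateAnomalyLikelihoods(
                [self._records[-1]], self._params)
        # A production version would also re-estimate self._params periodically,
        # the way AnomalyLikelihood itself does.
        return likelihoods[-1]
```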
D: We have the same problem at work. We process streams of data that we try to average for noise reduction. When there's a large discrepancy, what we do is reset the buffer length to a very short length and then extend it progressively as the data streams in, until the buffer reaches some predetermined length. So what happens is, when there's a large anomaly or an undesirable change, it throws out the junk and starts over, like what you're talking about here.
D: Whichever one you're trying to optimize. The idea is that rather than having a fixed window, you have a rolling buffer: basically a sum divided by the number of samples. So a larger buffer makes sense when you have more data to work with. You're saying you have a buffer underflow, if I've got that right. The best thing might be to consider playing with the buffer size, if it doesn't hurt your code terribly.
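A small sketch of the adaptive-buffer strategy being described here, in plain Python; the reset heuristic and thresholds are arbitrary placeholders.

```python
from collections import deque

class AdaptiveRollingAverage:
    """Rolling average whose window restarts small after a large discrepancy.

    Illustrative sketch of the strategy described above: on an anomalous jump,
    throw out the old buffer and let it grow back toward max_len as data
    streams in (a bounded deque grows naturally until it hits maxlen).
    """

    def __init__(self, max_len=100, reset_ratio=3.0):
        self.reset_ratio = reset_ratio            # placeholder threshold
        self.buffer = deque(maxlen=max_len)

    def update(self, value):
        mean = sum(self.buffer) / len(self.buffer) if self.buffer else value
        # If the new value is wildly off the current average, start over small.
        if self.buffer and abs(value - mean) > self.reset_ratio * (abs(mean) + 1e-9):
            self.buffer.clear()
        self.buffer.append(value)
        return sum(self.buffer) / len(self.buffer)
```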
F: Okay, maybe, Mark, I'll post about this; I don't want to drag everybody through it. I want to make sure I understand the details of how you update the buffer, but I've already been dragging everybody around, so maybe I can make a post, if you wouldn't mind, or something; not that this isn't very relevant.
D: The pooling... they're still in straight HTM. We're modeling such a tiny section of the cortex that having an input area that covers the whole field is okay. That makes sense, because it's actually not that much more of a reach than a single cell could reach in the cortex. As your models get bigger, you need a neighborhood more than you need that; it's just a different approach. But as our processors keep getting faster and we can do more with them, that may become more of an issue.
C: It's a conference for a data warehouse called Snowflake, and I've actually been experimenting with in-database processing and thinking of applying it to HTM. I'm kind of imagining an in-database version of HTM, where you can do anomaly detection on, say, a scalar stream, because it supports JavaScript in stored procedures and, you know, table-valued functions. So you can actually do quite a lot in it, and there's a lot of compute power behind it.
C: Elastic compute, so I'm thinking it might be useful. I started looking at it, and it was all fairly easy until I got to the part where you have to turn the prediction back into a value. I think I asked a question about it a few weeks ago on the forum. I've used HTM.js, and it sort of does everything I need, except it won't tell me an actual value to predict; it will only tell me that something is different.
C: It's very basic; it's not the same as, what's it called, the predictor that's in NuPIC, which actually has a little...
A: So you could just attach, you know, an HTM to any one of these scalar temporal streams. It just has to be a temporal scalar stream, period. You say they start here and end here, run anomaly detection on it, and then it will store data back into the table along with the data. So you can store your prediction or your anomaly score along with it. You're thinking along the right lines; I think that's kind of a powerful tool.
H: Here we go; this app is confusing, sorry. All right, so I was noticing something when I was playing with my own version of the spatial pooler in the PyTorch repository. I've got boosting and the k-winners implemented, so I feel like that part is pretty close, but not the inhibition. But I was noticing, with some other things... so take the autoencoder, for typical convolutional neural networks and deep learning; there's a variational version of that, and it seems like it does kind of the same thing.
A: Cool, thanks, Josh, thanks for the work you've been doing; you seem to be right on the bleeding edge of this machine learning application of HTM, which is nice. I love seeing people working on the edge. Hi, ky6; I didn't know where else to say hi, but I saw you over there. Anybody else have anything to say? We're just sort of wrapping it up. Gary is in chat; he's talking about hexagons, but it's not the chat that we see, it's the other YouTube chat, the one on the YouTube page.
A: He says accounting for hexagonal connections has always been such a chore that he had to mention what he most needs right now: a Python-related standard. His first thought was that we should use it for hexagonal arrays in graphics, so he might end up creating some package for Pygame that does that sort of thing. Hexagons; it's all about hexagons. Okay, let's open it up once again.
F: Super quick, man, if I could just prod you: I made one other post today, which this guy was already responding to. It was basically about setting the encoding parameters, because I was worried that some of the data I'm dealing with has long-tail distributions. You know, most of the data is over here, but if you were to take the max, it would stretch out the encoding space. So I'm balancing between the two.
A
I
know
it
was
a
big
effect.
It
should
work
he's
right,
I
think
the
spatial
blueish
should
should
adjust
for
the
tail
in
the
RDS
C
and
the
scalar
encoder
really
similar
I
mean
it's
just
a
different
method
so
like.
If
you
have
a
long
tail
or
you
encode
that
tail
in
the
RDF
see
it
will
it
will
move
through
all
of
the
bucket
all
the
way
to
get
to
that
tail
off
to
this
off
to
the
side,
so
it'll
be
semantically
very
different
from
everything
else,
and
that's.
F: You say log-scaling on the buckets; do you mean as, like, a pre-processing of the data, to a log scale? Okay, oh, cool, cool, cool. All right, I'll try that, because that way, okay, that'll kind of make the distribution not so long-tailed, basically.
F: So then I would just say, if I call the function that gets the scalar anomaly params, it gives you an RDSE, but I could then have a function that takes that and changes it to the log version, because I'm trying to do this all automatically and dynamically for different measurements, when I don't know what the measurements are going to be at all. So then just say, okay, instead of the RDSE, try the log encoder.
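A minimal sketch of the log pre-processing idea, in plain Python with NumPy; the skew heuristic is an arbitrary placeholder, and it assumes non-negative metric values.

```python
import numpy as np

def maybe_log_scale(values, skew_threshold=2.0):
    """Pre-process a metric stream before encoding.

    If the distribution looks long-tailed (placeholder heuristic: mean far
    above median), compress it with log1p so an encoder's buckets aren't
    stretched out by the tail. Assumes values >= 0.
    """
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    if median > 0 and np.mean(values) / median > skew_threshold:
        return np.log1p(values)   # log(1 + x), safe at zero
    return values
```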
F: Worth trying; that's great, that's great! Thank you, thank you. And I'll write one post to make sure I understand the logic of your buffer-updating thing, because that's really interesting and I want to get that, if I could just bother you for that briefly. Yeah, well, I will do that, if you don't mind. All right.
A: We've had a really busy Hackers' Hangout. We've still got ten more minutes, so I'm willing to keep this open; well, you know, I think I said I'd make it half an hour, but it doesn't matter, if you guys want to keep chatting or if you have any more questions. Otherwise, you can certainly check out my Twitch stream, because I'm online every week now.
G: Can I ask a question real quick, about something we've talked about on the forums? I work a little bit with Rod Rinkus on Sparsey, and he has really structured encoding fields. I'm wondering: earlier you guys were talking about the fact that, with the layers you're using, you're modeling such a small field that it's really okay for a single neuron to see the whole field. But when you start scaling that up, how are they supposed to be talking to one another?
A
Neuron
has
have
to
be
segregated
into
different
populations
and
then
each
population
you
know
Marc's
laughing
yeah,
yeah
I'll.
Let
you
he's
probably
gonna.
Give
you
a
better
answer
than
me
right,
but
you've
got
to
have
a
bunch
of
different,
essentially
cortical
columns
and
they
all
are
doing
their
own
local
computations
and
they
have
their
own
local
receptive
fields
and
then
they
all
share
information
with
each
other.
Amongst
these
lateral
factions
mark
go.
D
I'm,
a
little
off
the
reservation
here
that
that
post
I
put
the
long,
rambling
thing
about
HTM
into
grits.
That's
the
whole
concept
there
is
is
structuring
the
interaction
locally
into
a
larger,
coherent
pattern.
The
thousand
brain
theory
is
doing
pretty
much
the
same
thing,
but
a
different
idea
about
what
the
output
should
look
like.
Lateral
connections
for
the
wind,
yeah.
A
It's
all
about
lateral
connections
and
I.
Think
Mark's.
Whole
he's
got
a
lot
of
ideas
about
the
hexagonal
grids
and
it
doesn't
really
go
counter
to
any
of
the
HTM
stuff.
We're
just
stopping
at
the
point
of
those
lateral
connections
and
we're
saying
the
cortical
columns
have
to
share
representations
via
lateral
connections
to
inform
each
other
remarks.
Ideas
have
taken
these
ideas
from
Calvin's
book
about
hexagonal
structures
at
this
bigger
macro
level
and
and
repercussions
of
that.
G
Okay,
yeah
the
because
in
rincón
Farsi
algorithm
uses
the
lateral
connections
of
the
last
frame
to
kind
of
do
the
temporal
pooling
and
facial
pooling
all
at
the
same
time.
So
if
you,
if
near
on,
is
kind
of
looking
down
at
the
receptor
field
and
then
it's
also
seeing
that
it's
local
neighbors
from
timestep,
that's
like
an
encoded
locally-based
temporal
context
to
it.
So
that's
what
he
thinks
the
lateral
connections
are
used
for,
but
he
is
is
based
off
of
familiarity.
G
It's
not
it's
not
really
like
you
overlap
or
anything
like
that.
It's
a
familiar.
He
calculates
it
with
a
bunch
of
sigmoids
and
do
you
know
what?
Where
is
he
talking
about
layer?
Two,
three
lateral
connections?
He
doesn't
really
wasn't
really
have
layer,
one
through
six
or
anything
like
that.
He
just
goes
with
the
mini
column,
assuming
that
the
mini
column
is
connected,
like
all
of
the
neurons
are
connected
by.
A
I'm
not
sure
that
in
some
layers
that
I'm
not
sure
they
have
lateral
connections,
I,
know
and
layer.
Five
there's
output,
connector
affections
in
layer
two
three
and
in
in
layer,
2,
3,
there's,
lateral
connections
as
well
as
internal
connections
too.
So
you're
going
to
get
a
temporal
context
and
or
cheering
I.
Think
both
of
those
happens
is
that
right
mark
that
I've
know
of
Geoff
say
that
work.
A
It's
it's
pretty
close,
but
that
all
sounds
like
within
a
cortical
column
to
me,
those
but
but
I
think
it's
also
sharing
and
you
have
to
think
of
both.
You
have
to
think
of
how
it's
computing
internally
and
also
what
it's
sharing
and
how
it's
taking
what's
being
shared
with
it
and
to
make
it
a
part
of
its
representation.
If.
D: If I could add again: what I have walked away from, and hopefully will never ever try to do again, is the idea that going through the levels makes the data join up in some sort of hierarchical processing. I'm completely on board now with it being parallel all the way through, every single one of the maps. That's the trick: once you get to that point, you realize that's how the data is arranged.
D
You
have
to
rethink
everything
about
how
its
processed
and
how
its
represented
it
doesn't
ever
really
come
together
in
the
cortex.
It
always
stays
separate.
So
that's
that's
what
I
was
saying:
lateral
connections
and
the
overall
large
pattern
that's
what's
presented
to
the
lower
brain
structures
to
use
for
their
evaluation
and
their
issuing
of
commands
to
the
frontal
lobes
to
start
action,
so
it
stays
separate
all
the
way
through
it
doesn't.
G: Well, a lot of the... ooh, I'm getting a lot of feedback for some reason. But a lot of the signals that are going back to the cortical columns are coming from the hippocampus, recycling the signals and strengthening them. So do you really think you can do without that? I know that's one of Rinkus's concerns with Sparsey: that he can't model full intelligence without all these other parts to it.
D
Going
to
be
necessary,
you're
not
going
to
be
able
to
walk
away
from
them
right
now.
We're
doing
the
the
processing
of
the
stream
of
what
kind
of
calculations
are
done
with
that,
but
you're
not
going
to
get
away
from
the
lower
structures
to
actually
do
a
fully
I
think
I
mean
they're,
absolutely
critical
for
doing
the
command
and
control
right
now.
G: Well, one of the tricks we were thinking of doing, since both HTM and Sparsey are good at really just capturing features... really, the goal in AI has been to capture states and then assign some sort of action to the states. So since they're good at capturing these states, within temporal context and such, it seems like Q-learning would be the perfect thing to take advantage of that.
G
Value
to
the
states
that
are
being
represented
in
the
in
the
ATM's
and
I
think
that
might
be
one
of
the
one
of
like
the
first
like
shortcuts,
not
necessarily
for
intelligence,
but
a
nice
little
cheat
shortcut
to
get
something
working
but
I
know
it's
been
talked
about.
Why
why
HTM
is
gonna
like
do
anomaly.
G
I
think
if
you
use
just
even
if
you
just
slap
a
neural
network
right
at
the
end
of
the
of
an
HTM
or
something
like
that,
I
think
you
still
could
get
some
sort
of
state
or
categorizing
recognition
from
it
at
least,
and
just
as
input.
We.
A
Definitely
do
that
I
mean
we
that's
what
our
SDR
classifier
does
make
predictions
all
right,
but
but
you're
right
and
the
cue
or
anything
I've
heard
before
I,
don't
know
anything
about
Q
learning.
Is
it
it's
a.
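For anyone else unfamiliar with it, here is a minimal tabular Q-learning sketch of the idea being floated: treat (a hash of) the HTM's active-cell SDR as the state and learn action values over those states. The action set, thresholds, and state extraction are placeholders, not existing APIs.

```python
import random
from collections import defaultdict

# Q maps (state, action) pairs to learned values; states could be, e.g.,
# hash(frozenset(active_cells)) taken from an HTM at each timestep.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1        # learning rate, discount, exploration
actions = ["left", "right", "up", "down"]     # placeholder action set

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Standard Q-learning update toward reward plus discounted best next value.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```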
A: It sounds promising. One of the things that's limiting right now with HTM is that, in order to have a truly intelligent system, you need to incorporate movement; the system has to be causal to the movement somehow. Honestly, I think, in order for the representation to be truly valid, it has to be created through movement and experience: the reaction to the movement, the sensory experience that was caused by the movement.
A
That
has
to
be
learned
over
time,
so
that
you
can
learn
objects
and
HTM
is
currently
where
the
fringes
of
the
theory
right
now
is.
How
is
that
movement?
How
is
that
movement
represented
and
we're
talking
about
just
populations
of
cells
that
may
be
representing
displacements
between
different
locations
and
how
that
is
at
play
when
we're
talking
about
movement
and
feedback
from
the
motor
system
back,
you
know
and
that's
an
unknown
area.
You
know
it's
still,
research
all.
D: For the fellow hackers, let me kind of show you way, way down the road. Okay, suppose we've gotten down to the hypothalamus; and by the way, if you're interested in that, it's worth digging into. I've got a bunch of papers on the hypothalamus, and there have been a bunch of really cool things coming out on that lately. Assume the hypothalamus has got this digested version of a state, and it goes into these lower brain structures.
D
They
don't
have
the
layered
uniform
structure
that
we're
used
to
seeing
on
the
cortex
they're
doing
something
very
different.
So
what
they're
doing
is?
Is
they
get
a
like
a
postage
stamp
copy
projected
onto
them,
and
they
do
this
weird
thing
that
looks
very
much
so
like
a
Boltzmann,
oscillating,
Network
and
that
combines
all
of
these
disparate
things
into
a
state
inside
that's
represented
by
a
for
the
want
of
a
better
description.
A
circular
oscillation
I
ain't
sounds
really
goofy,
but
it's
what
it's
doing.
D
Look
at
you'll
factory
world
for
a
really
good
example
of
that.
So
at
some
point
here
at
a
very
high
level,
that's
going
to
drive
some
completely
different
kind
of
network
and
then
that
network
is
going
to
the
output
of
that
oscillations
will
be
projected
back
onto
another
set
of
cortex
to
develop
a
drive
for
action
system
to
select
an
action.
So
even
though
you're
going
to
be
dialed
in
cortex,
cortex
algorithm,
there's
more
down
the
road
right.