From YouTube: HTM Hackers' Hangout - Apr 15, 2016
Description
Come on in, talk about HTMs. No big whoop.
A: Hi everybody, hello. This is Matt Taylor, and this is the HTM Hackers' Hangout, and in today's news: I broke the build. So, we have, we've talked about, we have this Bamboo infrastructure that we're setting up, and it sort of runs in parallel right now to the open source Travis and AppVeyor ones, and I broke it because I made a release for nupic.core and NuPIC today. So I was trying to explain.
A: So it worked in Travis and AppVeyor, but it might not work in Bamboo. Anyway, if that sounds familiar: when you investigate this stuff, you might just kill the builds that are failing if there are subsequent builds coming afterwards. In my case, usually the subsequent builds will pass, and it's just a temporary blip in the pipeline.
B: All right, so looking at the nupic.core builds in Bamboo, at ci.numenta.com, and just looking at these latest failures, and looking at the logs, we're all the way down at the bottom. Oh.
B: Looks like the nupic.core one is a different problem now.
D: Sure, if you don't mind. Yeah, it's fine. So, I've done a few things to the temporal memory.
The C++ one, that is. Because we have the Python temporal memory, which is like the pure implementation of temporal memory from the white paper, and then there's a C++ version of that which is designed to be really fast. So I took it and first did a bunch of case-by-case profiling of it ("oh, we can squash this bug"), just a bunch of those, and then once I got those out of the way...
D: I saw a big, fundamental way we could reorganize it to make a lot of complexity fall away and make the whole thing orders of magnitude faster, and I did that. There's currently a pull request where we're talking about it, but we've pretty much approved the algorithm.
D: Temporal memory is still the same thing from the white paper, essentially, with the couple of changes that have happened to temporal memory now versus the white paper. But the change to it is, basically, you could call it two big changes, and they're dependent. I'll start with the first one.
D: Okay. Suppose you have an ordered list of your active columns, your active segments, and your matching segments. Matching segments are just like active segments, except with lower thresholds: they're the things that might learn if a column bursts or whatever. You have those, and they're sorted, and in a single pass you can walk the entire list.
D: I don't know whether I'm showing images here or not, but in a single pass you can walk the entire list and basically bucket them: okay, here's a column, here are all of its active segments, here are all of its matching segments, do whatever you need to do with this column, and it will activate the cells and do all the learning right there. Then it goes on to the next column, and it never needs to go back and touch this one again.
D: So there are two big things that came out of that. One is that you're making this single pass, a linear pass, and you're not doing a bunch of set operations, like looking things up in sets. The other thing is that you don't have these big sets of data structures. Here you're never making a list of all of the segments that you're changing; you're just changing them. So you're dramatically reducing the number of, you could say, transient data structures.
D: At every step, you're not building up this set of data that you're using and then throwing away. The only data structures you modify are the fundamental ones, the things that you would see if you were to serialize this model. So those are kind of the two good points to make.
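The single-pass layout being described can be sketched roughly like this. This is a toy illustration in Python, not NuPIC's actual C++ code; the function name, segment IDs, and action labels are all invented for the example:

```python
# Illustrative only: active columns, active segments, and matching segments
# are all kept sorted by column index, so one linear walk can bucket
# everything per column without building any transient sets or lookup tables.

def single_pass(active_columns, active_segments, matching_segments):
    """All inputs are sorted by column. Segments are (column, segment_id)
    pairs. Yields (column, action, segments) for each active column."""
    ai = mi = 0
    for col in active_columns:
        # Skip segments that belong to earlier (inactive) columns.
        while ai < len(active_segments) and active_segments[ai][0] < col:
            ai += 1
        while mi < len(matching_segments) and matching_segments[mi][0] < col:
            mi += 1
        # Bucket this column's segments; each pointer only moves forward.
        acts = []
        while ai < len(active_segments) and active_segments[ai][0] == col:
            acts.append(active_segments[ai][1])
            ai += 1
        matches = []
        while mi < len(matching_segments) and matching_segments[mi][0] == col:
            matches.append(matching_segments[mi][1])
            mi += 1
        if acts:
            # Correctly predicted: activate those cells, learn on the segments.
            yield (col, "activate-predicted", acts)
        elif matches:
            # Column bursts; the best matching segment gets to learn.
            yield (col, "burst-best-matching", matches)
        else:
            # Column bursts with nothing matching; grow a new segment.
            yield (col, "burst-grow-segment", [])

steps = list(single_pass(
    active_columns=[2, 5, 9],
    active_segments=[(2, "s0"), (2, "s1"), (7, "s9")],
    matching_segments=[(2, "s0"), (5, "s3")]))
```

Each column is visited exactly once and never revisited, and the only lists built are the tiny per-column buckets that are consumed immediately, which is the "no transient data structures" point.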
D: Cool. So, as I showed just now, looking at the little perf runners, the ones that used to take like a hundred and fifty seconds: for me it runs in like 25 seconds, so it's like six times faster. But in a natural run, like if you run hot gym or something, it's about 15 times faster than the old C++ thing, which did a lot of impure things but was heavily optimized. And this is dramatically faster than that.
A: Outstanding. So, not all of these performance enhancements have gotten into the binary release that I just cut today; it includes some of these improvements, but not the big one that he was just talking about. So I told Marcus: as soon as you're done, let me know and I'll cut another release, so anyone installing NuPIC from a binary will get those performance enhancements.
D: That just made me think of something, one question that I should answer: does this change results at all? The answer is yes, because it hits the random number generator in a different order. Previously, we would basically figure out all the active cells, which involves some use of the random number generator... sorry, not collecting cells: figuring out the winner cell in a bursting column. That used to happen first, and then later we'd go and grow segments, adding segments and also adding synapses.
D
Also
it's
the
random
number
generator
because
you
have
to
choose
which,
which
cells
to
connect
to
so
choosing
a
winner
cell
and
choosing
web
cells,
which
selling
connect
to
now
happen
together,
rather
than
doing
all
of
them
here
that
all
over
there.
So,
basically
we're
going
to
have
the
finest
to
mere
this
and
on
the
Python
side,
and
it's
going
to
change
the
results
in
a
way
that
doesn't
actually
change
the
algorithm,
but
it
we
want
to
keep
them
safe.
Well,.
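A minimal illustration of why reordering the work changes results even though the algorithm is unchanged. This is generic Python, not NuPIC code; a counting stand-in replaces the real RNG so the effect is deterministic and visible:

```python
# Both versions make the same decisions, but they consume the shared RNG in
# a different order: the old code drew all winner cells first and all synapse
# targets later, while the new code draws both together per column. With a
# seeded RNG the draw stream is fixed, so the pairings come out different.

class CountingRNG:
    """Stand-in RNG that returns 0, 1, 2, ... so the draw order is visible."""
    def __init__(self):
        self.n = -1

    def draw(self):
        self.n += 1
        return self.n

def old_order(n_columns):
    rng = CountingRNG()
    winners = [rng.draw() for _ in range(n_columns)]   # all winner cells first...
    targets = [rng.draw() for _ in range(n_columns)]   # ...then all synapse targets
    return list(zip(winners, targets))

def new_order(n_columns):
    rng = CountingRNG()
    out = []
    for _ in range(n_columns):
        winner = rng.draw()    # winner cell and synapse target are now
        target = rng.draw()    # chosen together, column by column
        out.append((winner, target))
    return out

# old_order(2) -> [(0, 2), (1, 3)]; new_order(2) -> [(0, 1), (2, 3)]
```

Same draws, different pairings: the per-column decisions land on different random values, so the concrete results diverge without the algorithm itself changing.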
E: It'll actually be bigger than that. So what Marcus said is that, because of these changes, the results will be slightly different than the temporal memory's were before. But most of the NuPIC community is not even using that; they're using the old C++ one, where the results will be dramatically different. Yes, potentially. And some of the parameters may need to be tuned as a result. I think, as part of cutting this next release, we should think about moving all of the examples over, so they don't use the old version anymore.
E: They'd just use this version. Because I think even if we were to just cut a release with this PR in it, no one would ever use it, because none of the examples use it. So we should just make sure we update everything in the whole chain, so that it becomes kind of the default, blessed version.
A: All right, great. Good work, guys. I want to remind you that we found Marcus in our community: he's been a member of the HTM open source community for a long time, and he's working as an intern at Numenta, and it's obviously paid off. So thank you, community, for Marcus. All right. One other thing that came along with this release was a fix to the geospatial coordinate encoder.
A: It's obvious that no one had tried to run anomaly likelihood on GPS stuff, because there was a bug that prevented that from happening. I found that bug, and you can now run the geospatial anomaly stuff through the anomaly likelihood algorithm. Let me show you how to do it.
It's pretty easy. This is the trucks data set and experiment framework that I've been working on, so I'm just going to show you: inside the HTM NuPIC folder I've got...
A: This is my code that runs NuPIC, and there's a function call to run one point. Anyway, I just want to show you how this is done. When we create a model input row, we give it a vector (we call it "vector") that includes speed, longitude, and latitude. This is how you send data into a GPS coordinate encoder model row.
A: If you're going to use the anomaly likelihood, you have to grab that vector from the raw input and then feed it into the likelihood; you have to give the value to the anomaly likelihood algorithm. So you just take that vector out, and that's what you feed in as the raw value. It's not hard to do; it just didn't work before, and now it works.
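The wiring being described (pull the raw vector back out of the input row and hand it to the likelihood step alongside the anomaly score) looks roughly like this. This is a hedged sketch: the model callback, `run_one_point`, and the `SimpleAnomalyLikelihood` class are stand-ins invented for illustration, not NuPIC's actual API.

```python
import math
from collections import deque

class SimpleAnomalyLikelihood:
    """Toy stand-in for an anomaly-likelihood step: it rates how unusual the
    latest anomaly score is against a rolling window of past scores."""

    def __init__(self, window=100):
        self.scores = deque(maxlen=window)

    def probability(self, raw_value, anomaly_score):
        # raw_value (the GPS vector) is passed along, mirroring the fix in
        # the video, even though this toy estimator only uses score history.
        self.scores.append(anomaly_score)
        n = len(self.scores)
        mean = sum(self.scores) / n
        var = sum((s - mean) ** 2 for s in self.scores) / n
        std = math.sqrt(var) or 1e-6
        z = (anomaly_score - mean) / std
        # Gaussian CDF: close to 1.0 when the score is far above its mean.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def run_one_point(row, model, likelihood):
    """row carries a 'vector' of (speed, longitude, latitude). The fix
    amounts to grabbing that vector from the raw input and feeding it to
    the likelihood step as the raw value."""
    vector = row["vector"]
    anomaly_score = model(vector)  # stand-in for the HTM model call
    return likelihood.probability(vector, anomaly_score)
```

The point is the data flow: the same vector that goes into the encoder is also what you hand to the likelihood step as its raw value.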
A: So there's that, and I'm actually getting better results. I mean, if you've ever run the geospatial anomaly detection sample app, the one where you've seen commutes, or seen mine with dog walks, that just uses the anomaly score, not anomaly likelihood at all. And I've got a huge data set now, and the anomaly score doesn't work well after it's seen hundreds of thousands of different points, so the anomaly likelihood is providing me better value on that data set. That's why I tried to use it. So that's fixed, and that's available in the latest release.
A: There's a transportation association covering, like, all the nine counties around San Francisco, an organization that organizes all of that stuff. They held a hackathon in Oakland. They were cooperating with Automatic; Automatic's a company that provides a connected-car API. So I looked over the API, and we got involved with this hackathon, and we thought that would be a great data set: that API has a streaming API and a real-time API for getting car data. So we went there; we sponsored it.
A: You know, I had a little table, a little presentation, and a breakout session, and there weren't very many people there. This was the first hackathon the MTC has ever done, so there wasn't a ton of people, maybe like 20, but it was a good sort of first-time experience for us, sponsoring a hackathon instead of running one. I was disappointed with Automatic's API, though, because the streaming API consisted of two events, car ignition and car shut-off, and that's all. And the way that it works is, if you're a developer...
A: You have to have some type of identifier for a car. So you have to have an Automatic device in a vehicle and link your developer account to that device somehow. You don't get aggregated data from all the different vehicles that are in the system at all; you have to have specific vehicle IDs. And nobody who came to the hackathon had an Automatic device, so we were relying on a dummy data set that was provided by Automatic.
A: That was like 16 paths, like drives, 16 drives. I think it just was not nearly enough data for NuPIC or an HTM system to do anything with. And the real-time API was those two events: you can tell when your car started and stopped. I think what they wanted out of it, or what their point is: when you get a car-stopped indication, you can then go back and query the API for some type of aggregated information about the trip, right?
A: You don't get raw access to all that data that they're collecting. You just don't; they don't give it to you. So it sort of seemed useless to me. I mean, I'm a hacker, right? I want live streaming data; I don't want your impression of the trip. I would really just like all that data, especially from a device that I paid for and installed in my car. I would like that data. Oh, you can poll, but you can't poll for...
A: I know, I don't want to poo-poo Automatic. I think that idea is cool, and I think the connected car is a big, big potential space, but we definitely want the streams; your developer community is not going to be able to make something really great without these streams. That's the feedback, if anybody from Automatic is listening. Oh, isn't there an app on the phone? Yeah.
A: So I would not call that a waste of time. I met some good people at that hackathon; I spent a lot of time talking to people, and a lot of people get what we're doing, which is great. Out of the 20 people, probably five of them knew what Numenta was and what we were doing and wanted to talk more about it. So that's a good ratio. Small sample set, but a good ratio.
A: OK, so two more quick things I want to talk about. One is, I do want to share these visualizations that I've been working on. Which is my screen... there you go. OK, so in the NuPIC community there's this project called SDR Viz. I've already showed this to the guys in the office at a demo this morning, but if you click through this, you can see it live right here and see these...
A: These are big, high-resolution SDR visualizations. I'm not going to go through them all, but I just want to show you guys: if anybody's interested, you can go and just look at this without installing anything; it's all just browser-side JavaScript. And if you want to create your own visualizations, that's cool, go for it; it's an open source project, you can do that. But I am using this to support the whole HTM School video series that I've been working on.
A: I'll probably continue to work on this as I need visualizations, so it will probably expand into more HTM-related visualizations. I might try and pick Marcus's brain and see if I can use some of his visualization stuff, which is very powerful, to explain some things. And the HTM School videos are doing really well: the first episode's over a thousand views after two weeks.
A: The second episode is at 400-and-something after a week, and the one from this morning is already over 200 views in just, like, nine hours. So they seem to be doing well and to be well received by the community. That's awesome. I'm very happy to bring in new people who are getting interested in this technology, because we all believe it's really important.
F: We are working on a new SDR classifier as a replacement for the current CLA classifier. This SDR classifier is a single-layer, feed-forward classification network using softmax as the activation function, and the output can be interpreted directly as a probability distribution.
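A compact sketch of that architecture may help. This is plain Python with invented names, not the NuPIC implementation: one weight matrix maps the SDR's on-bits to class scores, softmax turns the scores into probabilities, and learning nudges the weights with the cross-entropy gradient.

```python
import math
import random

class TinySDRClassifier:
    """Single-layer feed-forward classifier over sparse binary inputs,
    with softmax output, as a toy model of the SDR classifier idea."""

    def __init__(self, input_size, num_classes, lr=0.1, seed=0):
        rnd = random.Random(seed)
        self.w = [[rnd.gauss(0.0, 0.01) for _ in range(input_size)]
                  for _ in range(num_classes)]
        self.lr = lr

    def infer(self, active_bits):
        # Sparse input: only the active bits contribute to each class score.
        scores = [sum(row[i] for i in active_bits) for row in self.w]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        total = sum(exps)
        return [e / total for e in exps]          # a probability distribution

    def learn(self, active_bits, target_class):
        probs = self.infer(active_bits)
        for c, row in enumerate(self.w):
            # Cross-entropy gradient w.r.t. the class score: target - prob.
            err = (1.0 if c == target_class else 0.0) - probs[c]
            for i in active_bits:                 # update only active columns
                row[i] += self.lr * err

clf = TinySDRClassifier(input_size=64, num_classes=3)
sdr = [2, 7, 40, 41]            # indices of the SDR's on-bits
for _ in range(50):
    clf.learn(sdr, target_class=1)
probs = clf.infer(sdr)          # class 1 now dominates the distribution
```

Because the output layer is a softmax, the result sums to one and reads directly as a probability distribution over the output buckets, which is the property described above.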
F: Also, from the research we did before, we found that it has better performance in terms of prediction accuracy and sequence likelihood. So we are trying to incorporate that into NuPIC, and there's an issue that lists all the steps, a super-issue that keeps track of the progress.
C: Well, I was just going to talk about HTM.java. You can now save your network: there's a new persistence API that allows you to serialize your network to disk, serialize it to a byte array, or serialize it to a stream, and it's real easy to use. You can checkpoint while it's running. So it'll allow frameworks... like, we have a new community member, or, well, he's a community member...
C
That's
been
around
he's,
not
home,
but
his
name
is
Aaron
right
and
he
has
made
a
flink,
which
is
a
spark
like
it's
part,
2
point
0,
basically
or
10
point.
Oh,
it's
better
than
spark
because
it
it
actually
does
streaming
rather
than
batching
things
and
calling
it
a
string.
C: So that's Flink, and it's got some complex event processing stuff. For those who don't know, that's like a rule engine that operates on time data, so you can make rules about how things change in time. Anyway, he's got a Flink implementation of a distributed platform for HTMs, and he used HTM.java to make it, and it looks really cool. So frameworks like that can now use serialization, because they need to send things over the wire.