From YouTube: 2016 Year In Review (HTM Hackers' Hangout)
Description
A chance to join in live discussion with Numenta engineers and HTM community members about ongoing work on open source HTM software.
I didn't really have a huge plan for this, but yesterday and this morning I decided I was going to write up a year in review. I liked the idea, and most of my job is to make sure the community is happy, so I'm going to share my year in review with you guys and just run over what I think the highlights have been over this past year for the community, the things we've gotten done. This is in addition to any research work that we've done at Numenta.
Research has really been our focus this year, so there was a big shift from applications to research, and there's a bit of overhead involved in that. But even with all of that, we've done some significant things. We've got Windows builds. I have to stop our little mouse effect. There we go. Windows builds for NuPIC. We have also moved all of our Linux builds away from Travis CI and onto Bamboo. This will give us flexibility in the future to run everything on Bamboo.
You can create bindings, and if you need help with that, please contact anyone in the community. I've been sort of wondering when someone is going to say, "hey, I want to write Go bindings" or whatever, so let me know. Another significant thing, which David put a lot of work into, is getting HTM.Java running in NAB, the Numenta Anomaly Benchmark. Actually, HTM.Java and Comportex are both represented now on the NAB leaderboard, so I think it's great that we have these community HTM systems represented on our leaderboard along with NuPIC HTM.
A couple of other major things: we moved from mailing lists to a forum, so if you're still on the old mailing lists, you're not getting much email, are you? That's because we shut them down. So if you're watching this wondering why you don't get emails, join the 21st century and come to discourse.numenta.org; that's the new forum software.
Additionally, we had a couple of community integrations come up this year. One of them is from Eron Wright; he's been in our community for a while. He created an integration with Apache Flink, which is a streaming data architecture. That's really cool, and it uses HTM.Java as the processor. I also created an integration with InfluxDB, which is a time series database that can hold the data.
Moving on: Marcus, who's in the office right now, with some help from Andrew Malta, one of our interns, made a bunch of algorithm performance improvements. I won't go over all of them, but if you want to see them, there they are. I don't know exactly how to characterize the speed-up, but I think it touched a lot of different places. So if you haven't used NuPIC in a while, don't be surprised if it's a lot faster, because I've noticed, and that's awesome.
Also, HTM Studio is a product that Numenta has created, and we did this so that anybody who's got data they can get into a CSV can run it through HTM Studio and get anomalies from it. It makes it really easy for non-programmers to get their data into an HTM. I have to figure out how to turn this mouse highlighting thing off, because it's just awful. Stop it.
Cool, well, at least that problem was fixed. So that was the year update I wanted to do, and I was happy to see it. You know how I got that list? I went through all the HTM Hackers' Hangouts for the past year (I've been doing them for a year now), went through them at about double speed, perused them, and got this list of things that we accomplished. So it's pretty cool, I think. Good job, everybody who helped out with this stuff.
I appreciate it. A couple of other things before I open up the floor for general discussion: this month I will be pushing numenta.org content changes on our numenta.org website. So that's going to change its look and feel completely.
A
Nobody
really
seems
interested
in
dividend
or
content,
so
I
didn't
really
open
it
up
anything
with
community
except
I'm
thinking
is
here,
but
it's
going
to
be
simpler.
I
think
it
better
than
it
is
today
and
it
will
better
be
integrated
with
the
new
method,
calm
sort
of
footprint,
so
that's
going
to
be
coming
this
month.
What else? We had hoped to switch some deployments and get another build done in the fall, but that didn't happen; we're still working on it. Scott and I are going to be talking about priorities for the coming year. We haven't had that discussion yet, so afterward I'll update the community about what the priorities are: my community priorities versus the more project-oriented, people-oriented priorities. We'll be communicating that over the next month too.
I have to say, I'm really looking forward to working with your visualization library. I think the stuff you put out there looks really cool, so I'm going to be using it over the next month. I haven't gotten around to really digging into the temporal memory yet, but I like the way (and Marcus did this too) that in your visualizations you kept it all as one-dimensional as possible, so you don't show a topology that doesn't exist.
I think that's sort of important, because I always assumed, when I was looking at that stuff, that whatever was displayed three-dimensionally or two-dimensionally actually had a two-dimensional structure. So I like how both of you guys, in your visualizations, really line it up in a one-dimensional way and show those connections in that fashion. I think it's clearer about what the algorithm is actually doing.
The very, very simplest thing I could do is, you know, send it the results of a coin flip and see what happens. I mentioned this in the forum earlier today; I thought I'd make a really easy, simple starter example, so that we can see it making predictions for both outcomes at the same time, and over time, because it's random, those predictions are going to be 50/50 over thousands of runs.
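The 50/50 intuition above is easy to check outside of any HTM system. This is a minimal sketch, not NuPIC code: it just streams a fair coin and counts the outcomes, which is exactly the distribution a predictor would converge to, since there is no sequence structure to learn.

```python
import random
from collections import Counter

def coin_flip_frequencies(n_flips: int, seed: int = 42) -> dict:
    """Flip a fair coin n_flips times and return the outcome frequencies.

    A model fed this stream has nothing to learn: the best it can do is
    predict both outcomes with equal strength, so over thousands of runs
    the observed frequencies settle near 0.5 each.
    """
    rng = random.Random(seed)
    counts = Counter(rng.choice(["heads", "tails"]) for _ in range(n_flips))
    return {side: counts[side] / n_flips for side in ("heads", "tails")}

freqs = coin_flip_frequencies(10_000)
print(freqs)  # both values close to 0.5
```

The seed and helper name are just illustrative choices; the point is only that the long-run frequencies pin both predictions at about one half.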
You should do a Monday-through-Sunday, days-of-the-week thing, because I'm not sure a coin flip won't just... it won't show you how certain columns are prioritized. I mean, I don't understand how that would really illustrate anything, except that it's capable of making simultaneous predictions, with probably equal permanences for both sides of the coin, heads or tails. Identical sides of the coin. So maybe it's too contrived an example, but I'll try it and see. I haven't even dug into it yet.
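For the days-of-the-week idea, a simple category encoding makes the point about distinct columns. This is a hypothetical sketch, not NuPIC's actual encoder API: each day gets its own contiguous block of active bits, so every input produces a distinct, non-overlapping pattern.

```python
import numpy as np

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def encode_day(day: str, bits_per_day: int = 4) -> np.ndarray:
    """Encode a day of the week as a sparse binary vector.

    Each category owns its own block of bits, so different days share
    no active bits at all (their overlap, the dot product, is zero).
    """
    index = DAYS.index(day)
    sdr = np.zeros(len(DAYS) * bits_per_day, dtype=np.int8)
    sdr[index * bits_per_day:(index + 1) * bits_per_day] = 1
    return sdr

# Distinct inputs produce non-overlapping encodings.
print(int(np.dot(encode_day("Mon"), encode_day("Tue"))))  # 0
```

The block size of 4 bits per day is arbitrary; real encoders would use larger, sparser vectors, but the distinctness property is the same.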
You have to have distinct inputs, right, to the temporal memory.
So if I can say, yeah, here's a coin flip, to express some of the very low-level concepts, then it might be useful, because I've tried this before. I've tried things like splitting the input space into, say, seven or ten sections, and just going blip, blip, repeat, and then every once in a while introducing, like, a skip, you know, so one of them happens on a border randomly, and I can show that pretty easily. When it gets to the point where it might split, you can see that really easily.
I'm looking forward to it. Are you planning on using the really simple example we use quite often, ABCD and XBCY, to show... I have to, I think I have to. But it would be nice if I could express that in a little bit of a richer medium than just characters. That exact concept has to be expressed somehow, really, but maybe I can express it a little bit better, because I don't want to encode strings or categories. It seems like it.
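The ABCD/XBCY example works because after "B, C" a first-order model cannot tell whether D or Y comes next, while a model that keeps context can. This is a minimal dict-based illustration of that idea, not a temporal memory implementation:

```python
from collections import defaultdict

SEQUENCES = [list("ABCD"), list("XBCY")]

# First-order model: predicts from the current symbol only.
first_order = defaultdict(set)
# Higher-order model: predicts from the full prefix seen so far.
higher_order = {}
for seq in SEQUENCES:
    for i in range(len(seq) - 1):
        first_order[seq[i]].add(seq[i + 1])
        higher_order[tuple(seq[:i + 1])] = seq[i + 1]

# After 'C' alone, a first-order model is ambiguous...
print(sorted(first_order["C"]))        # ['D', 'Y']
# ...but with context the prediction is unique.
print(higher_order[("A", "B", "C")])   # 'D'
print(higher_order[("X", "B", "C")])   # 'Y'
```

The temporal memory achieves the same disambiguation with different cell representations of "C after A,B" versus "C after X,B", rather than with explicit prefix keys.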
At the core, the change here, the fact that it's using a sparse matrix, is kind of an implementation detail. The important thing is that there is a set of batch commands into the connections class. In this case, our connections class is the sparse matrix, and we're making a bunch of batch calls on it to change synapses. So this could equally well be implemented as a few new methods on the connections class. Oh, okay.
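The flavor of those batch calls can be sketched with NumPy. This is a hypothetical helper, not the actual Connections API: it updates a whole segment's synapse permanences in one vectorized pass instead of a per-synapse Python loop.

```python
import numpy as np

def adapt_synapses(permanences, active_inputs, inc=0.05, dec=0.02):
    """Batch-update one segment's synapse permanences.

    permanences: float array, one permanence value per synapse.
    active_inputs: boolean mask, True where the presynaptic input was active.
    Active synapses are reinforced, inactive ones punished, all at once,
    then clipped back into the valid [0, 1] range.
    """
    perms = np.where(active_inputs, permanences + inc, permanences - dec)
    return np.clip(perms, 0.0, 1.0)

perms = np.array([0.30, 0.50, 0.01, 0.99])
active = np.array([True, False, False, True])
# Active synapses move up (0.99 clips at 1.0); inactive move down
# (0.01 clips at 0.0).
print(adapt_synapses(perms, active))
```

This is also why pushing the work into batch operations pays off: the arithmetic happens inside the library's compiled loops rather than in interpreted Python.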
So when you collect matching or active segments, they're marked inside of the sparse matrix instead of collecting lists? Is that it? I don't know, I'm just going off the top of my head, just talking crap, but I don't know exactly how you would eliminate the loop from the algorithm itself. Oh, you just say: give me all of these. Yeah.
It's just doing a set of operations like that, and so inherently what it's doing is more complicated, it's doing more computation, but the vast majority of it is happening down in C. So that makes it even faster.
Right, right, and then everything's compiled down anyway, unless it only runs five or six times. Yeah, okay, yeah, because I've seen Java's JIT; like, I've watched my little test algorithm run, and then by about the fifth pass through all the synapses, oh yeah, it takes off. It's like, wow, man, it really does inline all the crap and does all of the optimizations.
So, delayed links. Okay, sure. I'm not sure what angle to come at this from. The Network API gives you a lot of ways to hook up a bunch of different nodes. You could call them nodes; we call them regions, which we don't like calling them, but that's what they're called, and so, yeah, they're layers, but yeah.
You have these inputs and outputs between them, and you can connect inputs and outputs via links. So basically, every time step, each node (each layer, each region, whatever you want to call it) runs a computation and puts its result into its output. Then the links play the role of passing the outputs between nodes into each other's inputs, concatenating them together and all that kind of thing.
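That compute-then-propagate loop can be sketched in a few lines. These `Node` and `Link` classes are illustrative stand-ins, not the actual Network API types:

```python
class Node:
    """A minimal stand-in for a region: it computes once per time step
    and exposes the result as its output."""
    def __init__(self, fn):
        self.fn = fn          # the node's computation
        self.inputs = []      # filled by incoming links
        self.output = None

    def compute(self):
        self.output = self.fn(self.inputs)

class Link:
    """Copies the source node's output into the destination's inputs."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst

    def propagate(self):
        self.dst.inputs.append(self.src.output)

# One time step of a two-node network: sensor -> doubler.
sensor = Node(lambda _: 21)
doubler = Node(lambda ins: ins[0] * 2)
link = Link(sensor, doubler)

sensor.compute()
link.propagate()
doubler.compute()
print(doubler.output)  # 42
```

A real network would also clear inputs between steps and run many links; the point is just the separation between a node's computation and the links that move data.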
So the work that he's added to it is the ability to add a delay, where, like, one node writes to its output, and then the next time step, or two time steps later, or three, or whatever number you customize, that's when the other node receives it. So that gives us the ability to add time delays inherent to the network. If you don't do that, then it's kind of in this weird state where, if you have nodes that are kind of codependent on each other, then one of them will write first.
We're basically not running it in parallel, so one of them executes before the other, and you get this weird thing where one of them gets the current time step's input and one gets the previous. It's all just kind of messy, and we want to have time delays. That's kind of part of our algorithms, part of the temporal memory: unmyelinated axons are inherently slower, so they automatically model time delays, and this is putting that at the level of the Network API. Okay.
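A delayed link of this kind is essentially a fixed-length FIFO pre-filled with a neutral value. This is a minimal sketch of the buffering idea, not the Network API's implementation:

```python
from collections import deque

class DelayedLink:
    """Delivers a value written at time step t at time step t + delay.

    The buffer starts full of a neutral initial value, so for the first
    `delay` steps the receiver sees that placeholder instead of fresh data.
    """
    def __init__(self, delay: int, initial=0):
        self.delay = delay
        self.buffer = deque([initial] * delay)

    def send(self, value):
        """Write this step's output; return what arrives this step."""
        if self.delay == 0:
            return value                 # no delay: pass straight through
        arriving = self.buffer.popleft() # value written `delay` steps ago
        self.buffer.append(value)        # queue this step's value
        return arriving

link = DelayedLink(delay=2)
received = [link.send(v) for v in [10, 20, 30, 40]]
print(received)  # [0, 0, 10, 20]
```

With a delay of two, whatever a node writes at step t shows up at the receiving node at step t + 2, which resolves the "one node gets current, one gets previous" ordering mess deterministically.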
We also happen to use the word "nodes" as well; like, a region has nodes in it, but we don't actually use that. So that's what makes it a little confusing to use the word "node", but that's all details. Yeah, I agree, "node" would be the best word. I mean, the Network API is architected in a way where you could have a node that contains another network. Right, so I think that will happen eventually in the future.
So have you thought about being able to combine nodes in a functional graph, like you would do with, say, Spark nodes, and be able to do operations on them? Because then you could do all sorts of things, like skip, or delay, or, you know, combining: take from each, combine latest.
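The "combine latest" operator mentioned here comes from functional reactive stream libraries: whenever any input stream produces a value, emit a tuple of the most recent value from every stream. This is a small illustrative sketch over a pre-recorded event list, not any particular library's API:

```python
def combine_latest(events, n_streams, fill=None):
    """Rx-style combineLatest over a sequence of (stream_index, value)
    events: each incoming event emits a snapshot tuple holding the
    latest value seen on every stream so far (fill until one arrives)."""
    latest = [fill] * n_streams
    out = []
    for idx, value in events:
        latest[idx] = value
        out.append(tuple(latest))
    return out

# Two interleaved streams: stream 0 emits letters, stream 1 emits numbers.
events = [(0, "a"), (1, 1), (0, "b"), (1, 2)]
print(combine_latest(events, 2))
# [('a', None), ('a', 1), ('b', 1), ('b', 2)]
```

Skip and delay fall out the same way: they are just other small functions over the event stream, which is what makes the functional-graph framing attractive for wiring nodes together.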
That's useful. Okay, okay, all right, we'll see, yeah. If you live in a C or a Python world, you might not be as exposed to this, like the functional reactive stuff, you know, where you combine monads and you can do... so you take... I don't know, I don't want to go all into it.
Yes, so you can take any output and combine it in any kind of way, and, I mean, I could dress it up as a node, but then it inherits all this other functionality too. It's really good for streaming stuff, but anyway, we can talk about that. In the end, congratulations.