From YouTube: NuPIC Office Hour - Feb 25, 2014
Description
- Core extraction update
- Demo app
- Season of NuPIC?
- Q & A
A
All right, hello everybody, and welcome to the, I don't know, fifth NuPIC office hour? Fourth. So we've got myself, Matt Taylor, and in the new Numenta offices: Scott Purdy on the left, Subutai Ahmad in the middle, and Jeff Hawkins on the right. And off screen, I think, are Ian Danforth and Chetan Surpur.
A
We also have Rick joining us from Australia; he's the only one currently who has asked specifically for an invitation to take part in the conversation. But if anyone has questions, there are several ways you can ask them. I have the IRC channel open, which is the #nupic room on Freenode, and there's also a Q&A app on Google Hangouts.
A
So if you're watching on Google Hangouts, you can also just type in a question, in the comments, I believe, and it will show up for us and we can attempt to answer it. So the first order of business I'd like to go through is our agenda for today. Well, let me share my screen here; I've got a PowerPoint. I'll just share this whole desktop and see how well it shows up. Okay, so here is our office hour.
A
If you're watching and you want to comment on this, you can do so on our mailing lists at numenta.org. We're going to be talking about the extraction of NuPIC Core and how the progress of that is going. I've got quite a few slides just detailing the progress and our plan, which I've updated a bit. Then, after running through that, we'll talk about building a compelling sample application, which we've talked to Francisco Weber about, and we have some ideas.
A
Chetan has ideas, so does Jeff, about building that. And we will also brainstorm about creating a Season of NuPIC. Since we were not accepted into Google Summer of Code this year, we still have a lot of momentum behind that, because we've got a fully populated idea list with people interested in mentoring and working on them. So we should take advantage of that, as was suggested on our mailing list by a community member, Pradipto, so thank you for your comment there. And throughout, I think we'll probably accept Q&A, and we'll probably get to the Q&A at the end, so feel free to submit your questions. Once we get through this core extraction plan, maybe we'll stop and take some questions before we move on to the next topic. Does that sound good, everybody?
B
...regarding hardware that could potentially run NuPIC. And so he thought maybe I could also, sometime in this, just give a few comments on that.
A
Yeah, that would be great. We don't necessarily have a time limit; we can go as long as people have time, so that's fine. Rick, did you have something you wanted to comment about?
A
Let me at least get through the core extraction plan, that's good, and then we'll take some questions there before we start talking about some of the tertiary topics. Is that fine? All right. Okay, so here is an update on our core extraction.
A
You can see the whole plan on the wiki. I did quite a few updates yesterday to make it more focused toward getting a NuPIC Core release out the door, so that's kind of the priority I'm shooting for, but the overall goal has not changed: we want to have an independent NuPIC Core that contains all algorithms in C++, with a stable API.
A
That will be done. Okay, so our current progress: I'm calling this step one, and kind of phase one of step one was just to split up the repositories, which was easy enough. Updating the build was something we needed to do to move toward getting NuPIC Core building itself. So we do have two repositories now: nupic and nupic.core.
A
They have separate issue trackers, but currently nupic is responsible for building nupic.core. So nupic.core is not necessarily independent.
A
It's really just a source code repository. So we've at least got this phase done; thanks to everyone who helped, especially with the CMake stuff. Shout out to David Ragazzi, who did a lot of work on that and subsequently became a committer on NuPIC. We like to thank our community members for contributing. The next step for this, which I'm calling phase two, is that nupic.core can build itself. So a lot of the CMake stuff...
A
...that's currently within nupic today needs to be moved into an independent build process within nupic.core, which will most likely involve a directory restructuring or rethinking there. But we want the build and tests and all of that to run in CI properly, and then have the nupic project ask nupic.core to build itself, instead of having the logic for building nupic.core within nupic. So this is the current priority: to move in this direction and get nupic.core building itself.
E
Yes, quick question. I'm fairly ignorant about this process, so does that mean that you can't have, like, an already-built checkout in nupic, or is it always better to build locally at the same time?
A
What exactly that means, I'm not sure yet, but this is the first step. Eventually we want to have multiple ways for people to get Core installed, so they don't have to compile and build it from within nupic.
A
Okay, assuming this is complete and nupic.core builds, let's move on to step two, and that is to prepare for a nupic.core release. A nupic.core release is important, and I'll get to why in a bit. But I think for us to be able to do a release of nupic.core, we need three things.
A
There has to be a fully functional test suite that, I think, needs to define what the API is, what the signature of this project is. Once this is established, it sets a precedent, or a contract, that cannot be changed without some consensus from the community. It also makes it very obvious if a PR comes up that changes these tests; we will know right away that this is a breaking change. So I think it's really important that we have this defined and that we also provide documentation for it.
A
Automated API docs, published and advertised: these are the official Core documentations. We're going to engage the community to help out and get that documentation scrubbed and reviewed and really nice. We could also add tutorials for usage of the Core, and examples, etc.
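The "API as a contract of tests" idea above can be sketched in a few lines. This is a minimal illustration, not nupic.core's actual test suite: the `SpatialPooler` class here is a stand-in so the example runs on its own, but the pattern (pin the public signature in a test, so a PR that changes it fails loudly) is the one being described.

```python
# Sketch of an "API as contract" test. The class below is a runnable
# stand-in for a hypothetical nupic.core binding, not the real thing.
import inspect

class SpatialPooler:  # stand-in for the real binding
    def compute(self, inputVector, learn, activeArray):
        pass

def test_compute_signature_is_stable():
    # If a PR changes this signature, this test fails, and the change
    # is immediately visible as API-breaking.
    params = list(inspect.signature(SpatialPooler.compute).parameters)
    assert params == ["self", "inputVector", "learn", "activeArray"]

test_compute_signature_is_stable()
```

A suite of tests like this doubles as the machine-checked version of the "contract that cannot be changed without consensus" mentioned above.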
A
So I think that's a really important step. The next part of it, and this can be done in parallel with defining the API, is an attempt to consolidate the algorithms. Again, there is another section in the wiki plan for this. This matters because currently the temporal pooling code, or the sequence memory code, is still within nupic, written in Python.
A
So we need to move those over into C++ and nupic.core and get some proper tests around it, so that we can have everything in one place.
A
It's also important for prototyping new algorithmic changes. We, as Numenta, want to continue to have the ability to rapidly prototype algorithmic code, and we want to keep that ability for anybody: if we can do it, anybody should be able to do it as well.
A
...the Core test suite, is that true, or... Yeah, that's correct: there's only going to be C++ in nupic.core.
A
Okay, so the last thing is to remove unused code. I don't have a pretty graphic for this, because it's pretty obvious. This will be a process; I think Subutai will need to be involved pretty heavily to decide what is not being used. We can certainly set up some type of instrumentation and run nupic.core in different ways to see what code paths are not being executed, but we need to decide what needs to be pulled out.
A
So once we have these three things done, I think we would be ready to actually release 1.0 of nupic.core. What will this give us? I think this is really important because it gives us a stable release for users, so they know that this will always exist in this form.
A
I think we still need to allow for critical bug fixes to be applied, and it also gives us a Core that has a stable API for developers to build language bindings and client libraries for the community. Once there is a version 1.0, the head of master of the development fork should have a faster velocity for any developers currently working on it, so that's a good win, in my imagination.
A
It looks something like this: currently, today, we've got nupic and nupic.core, and we split them. The mechanics of the release process we're not discussing right now; I think we'll get to that when we get to it. But there will be a nupic.core that is essentially the development branch moving forward, while nupic.core v1 stops. At the same time, I think we should do the same thing with the NuPIC client as well, and do further work on nupic while nupic v1 stops.
A
What's more of a cleanup on the client side is to split up that nupic project into two projects: one for Python language bindings, and one for an actual client that uses those bindings. And when we do this, we should try and make it a good example for other developers to create their own clients and bindings. So that is the plan as it is now; as I said, it's all up on this wiki page, and I'm definitely accepting feedback on it.
A
If anybody has any questions or comments, you can ask them here or on the mailing list.
E
That's going to be super confusing for the following reason: if you have nupic.py, that seems to be in contrast to nupic C++, which says to me immediately, "this is the same code but written in Python." Which, for some of the algorithm code, it is.
E
Well, I'm trying to suss out what the... It might be naming, or it might be deeper than that. Yeah. Because we do have clients, and I think of a client as, you know, the Python-specific wrapper that provides the Python endpoints to the C++ code; so Matt's calling those "bindings" there. Yeah.
E
And I don't think of the client that you have there... I think of that as an application that would use the Python client for NuPIC. So there's sort of the Python implementation, there's the client, and then there are applications. So yeah, maybe this is just...
A
...a description issue. I think it is. The way I was thinking when I described this was that there's one thin layer that only provides a binding between the C++ API in nupic.core and some other language's API. It just exposes those endpoints within another language, and those would be the bindings. Yeah.
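A thin binding layer of the kind described here can be sketched with Python's `ctypes`. The C math library stands in for nupic.core's shared library below, since binding the real Core would require its compiled artifact; the pattern (declare the C signature, forward calls, expose a native-feeling endpoint) is the same.

```python
# Minimal sketch of a "thin binding" layer: expose C endpoints in
# Python without reimplementing them. libm stands in here for the
# nupic.core shared library.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]  # declare the C-side signature
libm.cos.restype = ctypes.c_double

def cos(x):
    """Python endpoint that simply forwards to the C implementation."""
    return libm.cos(x)

print(cos(0.0))  # 1.0
```

A "client" in the sense discussed above would then be built on top of endpoints like this, rather than talking to the C++ layer directly.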
E
That works most of the time, but because we have an alternate implementation in a different language, it's going to be very tricky to get that naming correct.
A
We should have consistency; I think that's the most important thing. Yeah? Okay, all right. Let me stop sharing my screen, and let's take some questions now before we... whoops, there we go. Rick, you were kind of in line first, since you're joined, so why don't you ask?
D
Okay, first a quick question for Jeff. Jeff, we met at your conference in Melbourne about two months ago, and you asked me to email you a link to a presentation we were talking about. Just to make sure that that has arrived?
B
Yes, it did. That was sent back in December.
D
Good to hear. The other question, this one more complicated, maybe, but at the last open office hour...
D
You mentioned you were looking forward in 2014 to publishing some articles in peer-reviewed journals, and I was pleasantly surprised to hear that. Surprised, though, because in another context, in some interview that I watched, you mentioned that some of your scientific thinking had been guided by the book by Thomas Kuhn on scientific revolutions. What's it called? The Structure of Scientific Revolutions, yes. And so, from that, I understood that you see your approach to modeling...
D
...the brain as basically a Kuhnian paradigm shift. And if that's true, according to the book, which I just went through last weekend, you would have a hard time convincing, you know, the academic establishment out there of your theories, because, the way Kuhn put it, you and they live in different worlds. Do you see it a bit that way?
B
Well, first of all, yeah, okay, so this is a little deep, but we'll go into it. Actually, according to Thomas Kuhn, I think we're in a pre-paradigm science, not in a paradigm shift.
B
A paradigm shift is when you have an established paradigm and then it changes, and there's a lot of resistance to that. We're in the pre-paradigm state, which is... and I haven't read the book in a long time, but I believe that's what we've talked about. So there really is no establishment dealing with this, and it is difficult to get published in a sort of classic peer-reviewed journal, because the journals generally wouldn't cover the interdisciplinary nature of the work that's being done. And I've talked to several people about...
B
...this now. I've talked to some scientists and some other authors that I know, and we've had some discussions internally about it as well. Generally, everyone thinks it will be difficult to find the right home for the CLA in a peer-reviewed journal. Not because people don't respect it; in fact, there are a lot of people who respect what we're doing. It just doesn't fit some of the most classic...
B
...you know, "here's my newest result on X, and it increments the value of someone's result" style of paper. But I don't think it's going to be impossible, and so we need to come up with a strategy for how to present our work. Are we going after neuroscience journals, or machine learning journals, or other things? How broadly do we want to do it? How narrowly do we want to do it? And so I'm concerned about it, but I'm not saying, "oh, we can't do this."
B
I think we'll be successful at it. Another thing I can do, since I have a lot of friends in neuroscience and in all these related fields, is sort of work the back channels as well. You can get people to review your papers before you submit them, and so you can sort of generate some...
B
...goodwill going, so that when some editor gets this and doesn't know about it, and wonders "what the hell is this?", someone else has already told them, "this is important, you should look at it." So those are just some thoughts about it; I'm not overly concerned. I think we just have to go about it in a methodical way, and we'll have success. There's no question there's enough interest in, and knowledge about, what we're doing that people want to see it published.
D
Okay, okay. I mean, I have friends who I want to convince that this is a good thing, and they are somewhat conservative, asking, "okay, who's this guy, and what has he published?" In the meantime, until the new publications come out, can I show them the stuff that's already been published, you know, five years ago, or is that considered too much out of date?
B
Well, I think the papers that we did with Dileep and so on are definitely out of date. The other thing you can say is, like, "hey, you know, there are some other papers in the past, but I don't think they're really indicative of what we're doing now."
B
So
I
you
know,
I
don't
refer
to
them
much.
I
know
there's
above
stock.
B
I just got another speaking invitation yesterday, to go out and speak to the brain project. So, you know, it's not like we're just nobody; people know about this and they know about us. Not everybody, but enough people do that we're being taken seriously by quite a few people, and I think we just need to fill that out properly with some peer-reviewed journal articles. That'll help some other people.
A
Thanks, Rick. Okay, so I'm going to go to some questions on Google Hangouts. This one is from John Blackburn. He asks, and this is probably for you, Jeff: can you explain what a sequence segment is, which is mentioned in the white paper?
C
Yeah, I can do that. That's the nomenclature that's in the pseudocode of the white paper. Basically, in the temporal pooling algorithm that was described there, there are two parts: there's the core sequence learning part, and then there's the pooling part, which tries to predict, you know, activity that might happen multiple time steps into the future. The sequence segments are those segments that just do the sequence learning part of it, and so they're directly activated by bottom-up activity.
C
It's, you know, definitely a little bit confusing. This is the part of the temporal pooler algorithm that I think people get most confused about when they look at the pseudocode, but essentially that's what the sequence segment does. And if it helps, you know, I could write up a short paragraph on the exact difference and maybe send it to them. Well, the question was: what's the difference between temporal...
B
...what is the sequence, yeah. I mean, you know, another way to look at it, and I just did this in the talk yesterday: I talked about the neuron. There are all these synaptic inputs somewhere near the cell body, and those really define the cell's, you know, classic receptive field. And then there are all these synaptic inputs that are more distal, further away, and they just cause the cell to depolarize, or be predicted. And so the temporal segments are the ones that basically say, "these are predictions." Yeah.
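The distinction being described can be shown with a toy sketch. This is not NuPIC's real data structures, just an illustration: a segment is a set of presynaptic cells, and a sequence segment is one that becomes active directly from the previous time step's bottom-up activity, thereby predicting the next step.

```python
# Toy illustration (not NuPIC's actual classes) of sequence segments:
# a segment stores presynaptic cells; a *sequence* segment activates
# directly from the previous step's bottom-up input.
class Segment:
    def __init__(self, presynaptic_cells, is_sequence_segment):
        self.presynaptic_cells = set(presynaptic_cells)
        self.is_sequence_segment = is_sequence_segment

    def active(self, prev_active_cells, threshold=2):
        # Active if enough of its presynaptic cells fired last step.
        return len(self.presynaptic_cells & prev_active_cells) >= threshold

# Cells 1 and 2 were active at t-1 from bottom-up input.
prev_active = {1, 2}
seq_seg = Segment([1, 2, 7], is_sequence_segment=True)
other_seg = Segment([5, 6, 7], is_sequence_segment=False)

# The sequence segment fires (a next-step prediction); the other
# segment, connected to different cells, does not in this toy state.
print(seq_seg.active(prev_active))    # True
print(other_seg.active(prev_active))  # False
```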
A
Okay, there's a follow-up question from John that might be related to this. He's talking about the hot gym example. He says he ran it and it seems to do well at one-step predictions, but the five-step prediction is poor and seems to get worse as time goes on. He references predictions 900 to 2,000: after 900 to 2,000 records, the predictions are worse than at 200 to 300, he says.
C
Yeah, it's a good question. The hot gym example is set up to predict both one and five steps into the future. There are a couple of issues here. One is, generally speaking, the further out you predict, the worse your prediction is going to be, usually, and so that's one reason why the five-step might be worse than the one-step. But probably the more important reason is that we found, from our experience, that when you do a swarming, you end up with slightly different parameters...
C
...if you optimize for one-step prediction versus five-step prediction. So a single set of parameters, at least in the current implementation, doesn't always do best for both of the multiple predictions. We used to optimize separately for one step versus five steps, and what's in the code is probably just optimized for one step.
B
Okay. It wasn't clear, though, that, you know, at any point in time the CLA makes multiple predictions, and so if you just ask for the most likely one, you'll get one. But if you really want to understand what it's predicting, you need to look at all of them.
A
Okay, so the next question is from Michael Hale. He says he watched a recent TED talk from Alex Wissner-Gross about a new equation for intelligence. Is Numenta compatible with this, or how could they be used together?
B
I didn't see this one. I think I saw another one, a very smart...
B
...we talk at a detailed, mechanistic level about building models from the data stream, and he's considering a more conceptual framework regarding, I think he's saying, entropy. And, you know, I don't see those as incompatible at all. It's like, I don't say our work isn't compatible with psychology; but it's hard to make the connections between them in language that would be meaningful.
A
...back to APIs: what about doing a CUDA Thrust library for parallel processing? He says he's still a noob, so he may be off the rails on this.
C
Yeah, I can answer that. I think that'd be a great project for someone to try. My impression is we need a couple of prerequisites, and the kind of plan that Matt presented earlier, I think, would really fit well into that. Having a full C++ reference implementation of the algorithms is, I think, almost a necessary prerequisite to doing a CUDA implementation, so I think the whole nupic.core extraction plan fits really well into that. Of course, you can start...
C
I guess we have a reference implementation of the spatial pooler, so someone could start with that right away if they want to. The other thing is, there's a Google Summer of Code project that I put up there for doing timing benchmarks, to have a set of standardized timing benchmarks around the algorithms, and I think doing that would be very helpful in this project too. That way, you can actually gauge what the actual speedup is in various situations with the GPU versus the straightforward implementation. And part of this benchmark can...
C
...tell you which optimization, yeah. So we could build some profiling into that, and of course the benchmarks would also stress different aspects of the algorithm as well. So I think it'd be a great project. Of course, someone doesn't have to wait for that, but I think they're all kind of part of the same family of projects.
E
So as long as you have an NVIDIA graphics card... and, you know, the kernels have to be written, but...
C
Yeah. What Scott's referring to, I think, is that the current C++ code is fairly optimized to take advantage of the fact that all our representations are very sparse. So if only two percent of the bits are on, you actually need to do only roughly two percent of the computation that you would otherwise have to do; you get somewhere around...
C
...like, you know, a 40 to 50x speedup just by exploiting sparsity. It's an open question whether those optimizations will translate to a GPU implementation or not, and I think that would be a very interesting thing to try. I think there was a mailing list discussion on exactly that topic a few months ago.
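The sparsity optimization being described can be shown with a toy comparison (illustrative only, not NuPIC's actual implementation): computing a column's overlap against a sparse binary input only needs to touch the on bits, not every position in the input vector.

```python
# Toy illustration of why ~2% sparsity cuts the work dramatically:
# overlap = number of connected synapses whose input bit is on.
n = 1024
on_bits = {3, 250, 777}                   # sparse input: 3 of 1024 bits on
dense_input = [i in on_bits for i in range(n)]
connected = {3, 42, 777, 900}             # one column's connected synapses

# Dense formulation: scan all n input positions.
dense_overlap = sum(1 for i in range(n) if dense_input[i] and i in connected)

# Sparse formulation: intersect only the on bits (3 items, not 1024).
sparse_overlap = len(on_bits & connected)

print(dense_overlap, sparse_overlap)  # 2 2
```

Both give the same overlap, but the sparse version does work proportional to the number of on bits, which is the ~50x factor mentioned above when only a few percent of bits are active.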
B
Recently, one workshop was by DARPA, and the other was one I was just at yesterday with Sandia National Labs. These are workshops on neural network computing, building custom hardware for cortically derived algorithms, and our work is seen in many ways as the exemplar of what the cortically derived algorithms are. So there's a lot of discussion about how you implement algorithms like the CLA in hardware, and these people have lots of different approaches.
B
You know, they're talking about totally custom hardware and different photonics and very different memory technologies and so on. But one thing that came up there, which I don't really understand, was talk about having sort of a hardware description language for it all, a way of specifying what's to be done. They were asking, like, you know, what is this thing, and how do we know if we're building it correctly and it's performing? So I just think the conversations that have been happening go beyond just doing that.
A
So a couple of quick things. We've been talking about the Google Summer of Code idea list, and to let everyone know, it is on our wiki and it's populated with several ideas. Subutai just mentioned one down here, somewhere down here: performance benchmarks, nine and ten. So we're probably going to continue trying to do something with this, for the thing we're going to talk about later, our Season of NuPIC. The other thing is: Scott and Chetan and Ian...
A
...please try and speak a little louder; I'm having a little trouble hearing you sometimes. One last question, it seems, from Matt Keith, the robotics guy from the hackathon, back to the TED talk question: are there plans to add goals, or a concept of good versus bad predictions, in NuPIC?
B
It depends what you mean by good and bad. We already have a concept of good and bad, as in correct: I mean, the whole nature of the learning algorithm in this area is constant correction toward correct prediction. But if you mean good and bad more in the sense of goal seeking, or, you know, good behavior: there is, I think, what we're working on, and I have been working on, adding motor behavior to the CLA, which is the sensorimotor piece of the neural region.
B
That question comes up a lot, and you have to answer it in some way. So yes, we're thinking about it, but I don't think there's anything really useful to talk about yet.
A
Okay, there are no more questions, so our last two topics are a sample application and Season of NuPIC. Let's talk a little bit about this sample application. This is just something we're starting to think about; Chetan has put a little bit of design effort behind it. Chetan, maybe you have a good understanding of this; maybe you can kind of give an explanation of what you'd like to do with it?
H
I actually have a little more than a design.
H
Yeah, okay. So, what do you want me to explain, the demo project in general? Sure? Okay. So one of the big things that the community requested is a sample demo. Or, sorry, a compelling demo of NuPIC, and we've been discussing with Francisco Weber of CEPT how to use the technology they developed for some good natural language processing demos, using NuPIC and CEPT together.
H
So the idea is that we are considering, I believe, opening up the project to the community and having the community create some interesting demos and games based on language, using NuPIC and CEPT together. So I've been... you know, yesterday on the flight back home, I built the beginnings of a library that should make it easy to work with NuPIC and CEPT together to create predictions of words.
H
You feed words into the CLA, and it will communicate with the CEPT API to get the CEPT retina representations of those words, and it will make predictions using the temporal pooler, based on Subutai's code from the hackathon. So it's basically a library form of the "What does the fox eat?" demo from the hackathon, and people can get started with that and build... You know, we have some...
H
We
have
some
suggestions
which
I'm
sure
we're
about
to
go
into
on
some
demos,
but
this
library
should
make
it
easy
to
build
those
demos
without
having
to
know
how
new
pick
or
sep
work
just
using
the
interface
of
the
library
and
I'll
put
a
link
to
what
I
have
so
far.
H
I've
submitted
as
a
question
within
the
google
hangouts,
but
there's
a
live
link
to
a
repository
there
and
it
works.
Currently
it
can
do
predictions
of
it
can
read
a
document
and
predict,
read
it
word
by
word
and
predict
the
next
word
using
a
new
concept,
and
it
can
also
run
the
box
demo.
H
The
word
association
demo
within
that
framework
as
well
so
there's
some
examples
in
the
readme
for
that
repository,
so
people
can
get
started.
So
it's
really
an
api,
though.
H
Yeah,
if
we
want,
if
you
want
to
open
that
link
on
your
screen,
you
can
actually
see
the
examples
and
how
to
use
it.
H
Yeah, the idea is basically that we can use CEPT and the CLA together. We input CEPT's representation of a word, which is a sparse distributed representation, an SDR, of the word that contains contextual information about that word, which they've gained from Wikipedia and their various sources. So it basically inputs the SDR into the CLA and, using the temporal pooler, predicts the next SDR, and then sends that back to CEPT to get the closest representation of that new predicted SDR, and so...
A
Right, so I'm looking at your first example here, and it's very similar to the "What does the fox eat?" example from the hackathon. It looks like you're creating terms out of "coyote", "eats", and "mouse", then feeding them in sequence and resetting, then just giving it "wolf eats", getting the prediction out, and asking CEPT for the closest string. And it's already predicting "mouse", because of the similarity between coyote and wolf, I imagine. Right?
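A sketch of what that usage could look like follows. The class and method names here are hypothetical stand-ins for Chetan's library, and a toy transition memory plays the role of the real CLA plus CEPT backends; the flow (feed a sequence, reset, feed a partial sequence, ask for the closest predicted term) is the one described above.

```python
# Hypothetical sketch of the word-prediction flow: names are
# illustrative, and a toy bigram memory stands in for the real
# CLA + CEPT retina backends.
class WordClient:
    def __init__(self):
        self.transitions = {}  # previous word -> next word seen
        self.prev = None

    def feed(self, term):
        if self.prev is not None:
            self.transitions[self.prev] = term  # learn the sequence
        self.prev = term

    def reset(self):
        self.prev = None  # end of one sequence, like a CLA reset

    def predict(self, term):
        # Real version: the CLA predicts the next SDR and CEPT returns
        # the closest term; here we just look up the learned transition.
        return self.transitions.get(term)

client = WordClient()
for word in ["coyote", "eats", "mouse"]:
    client.feed(word)
client.reset()
client.feed("wolf")  # with CEPT SDRs, "wolf" overlaps "coyote"
print(client.predict("eats"))  # mouse
```

In the real system the generalization from "coyote" to "wolf" comes from the overlap of their CEPT SDRs, which the toy lookup above cannot capture; that overlap is the whole point of feeding SDRs rather than raw strings.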
H
Yeah. And then there's the read tool, which I described, that will read through a document. For that one, I've actually copied the data set from Subutai's demo, and you can see, running on that data set, if you run it through to the end, what the fox eats.
H
But yeah, it basically works that way, and I'm actually interested in running it on a larger data set and seeing how well it does word for word. But you can do any number of things. Okay, yeah.
B
I wouldn't hope for too much there, you know. Remember...
B
I think the thing you can do is some very clever, fun games that get a lot of involvement. I'll just point out that since the last hackathon, I've been using that "What does the fox eat?" demo in my talks, and it generally gets an amazing response.
B
You
know
people
just
blown
away
by
it
or
sometimes
they
don't
believe
it
they.
You
know
it's
almost
like.
It
doesn't
seem
right,
you
know
possible,
except
we
don't
want
to
oversell
it.
I
mean
it
was
a
very
short
demo.
You
know
we
haven't
explored
this
too
much,
but
I
think
in
the
context
of
the
limitations
we
have,
we
should
be
able
to
do
some
really
cool
demos
and
make
some
really
cool.
H
I think this library should support any number of those kinds of association games, where you feed in... yeah.
B
We have to be careful, you know: we don't have a system that understands language. But what we do have is a system that is working on the right principles. It's a very, very small system, but it's working on the right principles, the same principles that language uses in the brain, such as sparse distributed representations and sequence memory. So from that point of view, we've got a good starting point, but we should be...
A
So, Chetan, I think that is great initial work on this; I'm excited to get busy with it. I would love to have a web application with pages that community members can add their own sample applications to, creating their own games, that we could potentially publish at some point and make available for other people to try things out, with a live running NuPIC on the back end. That would be kind of my pie-in-the-sky goal with this thing.
B
I mean, it would be really cool, since I don't program anymore, to have something where I could just experiment at a user-interface level, like the hackathon demos. You know, I could just type in sentences and see what it's predicting, and, say, start a new session with a clean CLA, start training it over here, and every time I go, it could just be predicting. It would be very, very fun to play with this.
B
Just that would be fascinating for me to try. And, you know, no programming involved, through a web interface or something like that. If anyone wants to do that, I think a lot of people would.
A
I know we've got some web programmers in our community who would love to bite off some of this stuff. So once we have this framework out there, and we've got it wrapped in some type of web app, it'll be easy to add new functionality. It'd be very cool.
A
All right, we've got another question from John Blackburn before we go on to our next topic. He says: if you have a sensorimotor loop, can prediction become action? Example: an animal sees food and predicts the food will come closer. It makes this prediction come true by activating motor neurons in sequence. Behavior is learned as a sequence of visual input plus motor input.
B
Yeah, you know, as a general rule, I would agree with that. In fact, I think I even said something similar to that in On Intelligence; I'd have to go back and check. But the trick is the details, right? The trick is, how do you get that to work? What does it actually mean? So, yeah, I think that's right, but we've got to go down to the level that we work at around here: neurons and synapses and layers and things like that.
B
It's details, details, details: how you get it to work exactly. You know, I did do a talk about the sensorimotor stuff at a hackathon, I think it was the first one, and I have some new progress on that, a few more things I figured out. Maybe at the next hackathon, if anyone's interested, we can talk about some of that.
A
Great. Okay, our question queue has been depleted, so let's move on to our next topic: Google Summer of Code. Unfortunately, we were rejected this year. I'll find out more information on Friday about why, so we can do a better job next year of getting accepted as a mentoring organization.
A
So I think we should try and make this sort of an official thing that we do, with or without Google's support, especially trying to reach students that didn't get accepted into Google Summer of Code's program, advertise that this thing exists, and try to get some more community participation from this.
A
I'm wondering if anyone has any thoughts. I know Pradipto has been involved in similar things on the KDE project, so I think he'll have a lot of input. He gave some links with a lot of information about that on the mailing list, so I'm going to be investigating those today and coming up with a plan for how to kick off this project and how to deal with students and mentors. I'm open to feedback and suggestions about that, now or at any time on the mailing list.
C
Yeah, I think it's a great idea. I think figuring out how to structure it, and what exactly it is, is the key part.
B
A
Well, at a high level, Google Summer of Code is a project to link students looking to do interesting work with open source projects looking for work to be done. Google pays the students, and the process is closely monitored by Google: mentoring organizations have to apply, students have to apply, and then there's a certain amount of oversight during the process to ensure that students are fully engaged, that mentors are always available for them, that progress reports are submitted, and stuff like that.
A
So it's something that a student could put on their resume at the end of the engagement. It seems to be very professionally done and could benefit both parties quite a bit, so it's something that we could try and do on our own as well, and that's what I'd like to try and do.
C
A
I have a suspicion about where our submission lacked: not calling out specific mentors, not making it well known that they were dedicated to that project and would have time, and not having backup plans if mentors dropped out, etc. So we kind of have to have primary and secondary mentors for each project and a higher level of commitment to support Google Summer of Code. That's my suspicion.
B
We're going to do our own thing, I think. I mean, Google has probably learned those are important things to have, and so we've got to make sure…
B
Yeah, I don't know. I imagine they probably have full-time people working on it. So, you know, you've been through a lot of the process…
A
To do our part as well, it's no small amount of work, so truthfully I was a tiny bit relieved when we were rejected, because of all the other work that we have on our plate for this summer. But looking towards the future, I think it would be great to be accepted next year, to have all of our ducks in a row, and to make sure that we're fully engaged with our community before that happens, preparing them for how it's going to work.
A
Looks like we're out of questions, guys, and we're just about right on time. So if there are no other closing comments from you, Jeff, or Subutai…
B
Oh, I just, you know, sure. For people who aren't in our offices all the time: there are a lot of things going on in our community that may not be visible. I just came back from a talk yesterday at a workshop, and as I mentioned, let me just try to summarize it. There are several projects underway that are related to our algorithms, NuPIC's algorithms, that people might be interested in.
B
One is, I think people might know, a team at IBM Almaden labs, run by Winfried Wilcke, which has several people dedicated to this, who are looking at how to implement the CLA in new custom hardware. They've done their own implementation of the CLA, actually two different ones. I don't know if they're active on the email list right now; I know some of them, but I think they're listening in. I haven't seen anything, yeah. And then that's one thing.
B
That's going on. Then there's a program being created at DARPA, the Defense Advanced Research Projects Agency, and this is being run by Dan Hammerstrom. Dan is very, very familiar with the CLA, extremely so, and he is putting together a program to build new hardware for sort of biologically, cortically inspired computing algorithms, and he uses HTM and the CLA as a prototypical example. That program has been under development for a while. It's not official yet, but it looks like it's going to happen.
B
This would be a large program if it goes through, you know, many tens of millions of dollars over five years, and it's not specifically for us, but we're used as the prototypical example of an algorithm. So I've been invited to speak at a series of workshops on that, and basically I'm one of the lead speakers, or at least one of the people there. And then there's this other one that was just going on.
B
I just came back yesterday from Sandia National Labs, from a workshop run by a guy there named Okandan, and he's also trying to put a program together at Sandia about neuromorphic computing, so I spoke at that yesterday. It's really interesting, because these projects are all about, you know, starting with the neocortex and the cortical circuitry, figuring out the algorithms and principles by which it works, and building machines that work on those principles. And we are by far ahead; I think we're really the only example of anybody doing anything exactly like that.
B
So we play a critical role in these meetings. Not everyone necessarily appreciates it, but quite a few people do, so it's kind of amusing and entertaining for me to go to these meetings and talk about our work, see what the reaction to it is, and see how people are trying to accommodate these ideas. So now, you know, the term HTM, hierarchical temporal memory, is used a lot at these meetings, and the need for time and inference and hierarchy are all talked about extensively.
B
So the terminology that we've been promoting has been sort of seeping its way into these fields. It's not just us, but I think we may definitely have a large impact on it. So that's going on in the background. Our attitude here at Numenta is that we're not relying on this stuff. We're not saying, oh, we have to have this hardware; we're going to take advantage of it if we can, but you know, these are the different components that people are pursuing.
B
That's the stuff that's going on. Some of them will work out, some of it won't, and we're not going to play an active role in them.
D
So can I just ask you: the project at IBM, what you said sounds like the most specific one that's dedicated to the CLA. Do these people just work off the CLA white paper and etch it into silicon?
B
D
B
A
Okay, so there's one more question I got over email; the Q&A wasn't working for someone. This is from Joe Timmons. He's asking about Cerebro: he's trying to figure out how to load his own CSV data into Cerebro and can't figure out how to accomplish this. Are there any guidelines?
A
So, unfortunately, Cerebro is not in the best state. It started out as an entirely internal tool inside of Numenta for trying to understand the state of the CLA as it was running, and before the engineer who built it left, we convinced him to clean it up a little and open source it. He has subsequently left the organization, and that code is just kind of there. I know a few people have gotten it to work properly, but it is very unfriendly.
A
It's something that we need to work on, because it's a very cool tool. I'd love to see it used more often, but it needs a lot of usability work and cleanup.
B
A
I'd love for somebody to try and take ownership of this project and get it working, especially someone with web experience. There's a lot of JavaScript and Python involved, and it's mostly web work. So, a call for volunteers on that. Sorry we can't help you, Joe.
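On the CSV question itself, one likely sticking point is the input format: NuPIC's OPF record streams generally expect a CSV with three header rows (field names, field types, and special flags such as `T` marking the timestamp field). Whether Cerebro consumes exactly this format isn't guaranteed, so treat the following as a hedged sketch; the field names and values are made up for illustration:

```python
import csv
import io

def write_opf_style_csv(rows, fileobj):
    """Write records in the three-header-row CSV layout used by NuPIC's
    FileRecordStream: row 1 field names, row 2 field types, row 3 special
    flags ('T' marks the timestamp field). Field names are illustrative."""
    writer = csv.writer(fileobj)
    writer.writerow(["timestamp", "consumption"])  # field names
    writer.writerow(["datetime", "float"])         # field types
    writer.writerow(["T", ""])                     # special flags
    for timestamp, value in rows:
        writer.writerow([timestamp, value])

buf = io.StringIO()
write_opf_style_csv([("2014-02-25 10:00:00", 5.3),
                     ("2014-02-25 11:00:00", 5.5)], buf)
print(buf.getvalue().splitlines()[0])  # prints "timestamp,consumption"
```

Converting an arbitrary CSV into this shape (prepending the type and flag rows) is often all that's needed before NuPIC's file-based tooling will read it.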