From YouTube: State of NuPIC
Description
Matt Taylor describes the current state of the NuPIC open source project. 3 May 2014, NuPIC Hackathon.
Okay, three people. I'm going to start because I'm recording this anyway, so I'd rather you guys get hacks done than watch me. I'm going to talk about NuPIC, the open source project: not necessarily anything technical at all, except for our plans.
I'll talk about the history of the project over the past year since we went open source, where we're at right now and how we've evolved, and also what we're planning for the future.
We've come a long way since then, but basically it was just one repo, one lump: everything in the same place, C++, Python, encoders, all in one spot. So, very quickly after June 3rd, on the 21st of the same month, we had our first hackathon in the Numenta offices. It was a 24-hour event, and we had 15 to 20 solid people. At the end, we turned out six or eight demos, about half of which were somewhat significant, but we ran into lots of build problems, which I think turned a lot of people off right away. I would still say it was a successful event for a project that had been open source for less than a month, and we generated a lot of buzz, and that was fun.
We do this to engage the community and get people involved, and as far as that went, it was very successful. But we still had a lot of build problems. It was still a pain to get people up and running with NuPIC so they could even do hacks. People wanted something that could be put up in many different languages, virtual machines, environments, et cetera. There was also a lot of feedback about usability: people wanted better documentation, code samples and tutorials, and they wanted the build to be easier, which was very obvious feedback for us, and people also wanted hierarchy.
We've got a CMake build, which is drastically improved from our previous shell-script build, and that was entirely provided by our open source contributors, so we're really happy about that. A big thanks to Marek Otahal and David Ragazzi.
Out of that we also got better compiler support, so we run builds in both Clang and GCC, and we support Python 2.7. We have a build matrix that consists of those two compilers and Python 2.6 and 2.7, and our build process is actually simpler at this point. The installation isn't as hard and doesn't take as many environment changes, and that was fairly evident yesterday when we had our installathon: it went very smoothly, as opposed to previous hackathons, where it took hours of arduous work.
We also have Docker support for a virtual machine, and that's been working out pretty well for people. So, where we are right now: I've been working a lot on usability, creating tutorials that are very beginner friendly.
We have the sine tutorial; Subutai provided some swarming examples in his code base that also work against sines; and we have a new Hot Gym tutorial. We also have some community-provided tutorials, like the audio stream example and a spatial pooler example that came from a community member as well, which are also helpful. And I made another screencast on how to contribute to NuPIC using our process for Git, GitHub and Travis.
We recently have a contributor who has done a bunch of work with Doxygen on the NuPIC Core API, really fleshing out the Network API, and he's continuing work on that. That's Utensil Song; thank you for that work. And I put a lot of work into giving our wiki a facelift, guiding people from never having seen NuPIC through the resources they need to understand the theory, how to install it, how to use it, and then learning about how NuPIC works and how to contribute.
A
So
our
the
state
of
our
c
plus
plus
python
split
is
not
quite
done.
We
do
have
two
repos.
Now
we've
got
a
new
pick
core
that
is
as
an
independent,
build
and
an
independent,
continuous
integration
process.
So
it's
totally
split
from
the
python
new
pic
repository,
but
we're
not
quite
done
yet.
We
have
this
four
step
plan
and
we've
only
really
executed
step,
one
we're
working
on
step,
two.
So
we're
moving
towards
creating
a
release
version
1.0
for
new
bit
core
and
new
pic,
but
right
now,
nuke
core
still
needs
some
work.
A
It
does
build
autonomously.
It
still
needs
api
testing.
We
want
to
define
that
that
core
api,
that
c
plus
interface
really
well,
not
just
with
documentations
but
with
api
tests,
and
then
once
we
have
that
defined,
we
can
release.
You
can't
release
software
until
you
have
public
interface
defined
properly.
So
that's
the
plan,
and
so
we
still
need
some
help
and
some
work
on
a
c
plus
plus
test
suite
with
proper
reporting
in
x
unit
format.
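As a rough sketch of what xUnit-style reporting amounts to (a generic illustration in Python for brevity, not NuPIC's actual C++ test harness; the element and attribute names are the common JUnit-style subset), a report is just a `testsuite` element containing one `testcase` per test, with a `failure` child for each failing one:

```python
import xml.etree.ElementTree as ET

def xunit_report(suite_name, results):
    """results: list of (test_name, error_message_or_None) pairs.
    Returns a minimal xUnit/JUnit-style XML string; real consumers
    (Jenkins, Travis, etc.) accept extra attributes beyond these."""
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, err in results if err)))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # Failing tests carry a <failure> child with the message.
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

print(xunit_report("SpatialPoolerTest",
                   [("computeOverlap", None),
                    ("growSynapses", "boundary off by one")]))
```

Emitting this shape from a C++ test runner is what lets standard CI tooling chart pass/fail counts and coverage trends over time.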
A
So
we
can
do
you
know
analysis
of
unit
testing
coverage
over
time
and
all
that
good
stuff
you
get
from
you
know:
enterprise
style
software
projects,
all
this
moving
towards
the
release
version
1.0,
the
one
other
major
thing
that
we're
missing
in
nupit
core
is
the
sequence
memory,
algorithms,
which
are
still
in
python,
so
that
needs
to
be
ported
from
python
into
c,
plus,
plus,
that's
really
the
major
code
piece
that's
missing
before
we
can
truly
make
nupicore
totally
independent
and
have
all
of
the
spatial
pooling
sequence,
memory,
algorithms,
and
what
we
currently
call
temporal
pooling.
A
So
as
we
work
towards
adding
some
of
the
things
jeff
and
subutai
and
chain
have
talked
about
in
the
future,
we're
going
to
be
prototyping
those
things
in
python
in
the
python
client,
with
the
in
goal
to
put
transport
them
all
into
c,
plus,
once
we're
done
with
the
prototypes,
so
the
state
of
contributors
at
the
moment.
This
is
sorry
about
the
display,
but
we've
got
93
contributors
94
soon,
because
I've
got
a
cla
on
my
desk.
A
I
haven't
processed,
yet
28
of
you
have
pushed
code
into
either
new
pick
or
new
core
we've
got
10
or
so
consistently
active
developers,
which
has
been
really
great.
We
have
11
committers
three
of
those
have
been
promoted
from
our
community
so
due
to
high
activity
and
lots
of
interaction
with
us
and
suggestions.
So
that's
great
we're
really
happy
about
that
and
we
have
a
really
active
mailing
list.
Total
subscribers
to
all
three
of
our
mailing
lists
is
almost
800.
A
Now,
of
course,
there's
a
little
crossover
between
those
if
you're
subscribed
to
two
or
three
of
them,
but
for
the
most
part
the
general
discussion.
Mailing
list
is
almost
500
subscribers,
so
and
and
lots
of
messages.
If
you're
subscribed
to
the
discussion
list,
there's
it's
a
lot
to
keep
up
with
sometimes
so.
This
is
just
a
pattern
of
steady
growth
over
the
past
year,
so
this
is
kind
of
all
the
trends
I
try
and
track
get
up.
A
Followers
get
up
forks
or
twitter
and
facebook,
social
media
stuff,
and
then
subscribers
and
mailing
list
messages,
so
we've
we're
having
a
good
healthy,
steady
growth,
we're
not
looking
to
grow
extremely
quickly.
That
can
be
kind
of
hard
to
maintain.
So
I
am
very
happy
with
this.
My workload has increased quite a bit over the past six months, just based on all the different activity that's been coming from our community, so we're really happy with the NuPIC community's growth. So, where are we going in the future?
A
So
we've
been
talking
about
this
for
a
while
jeff
has
published
his
ideas
on
on
temporal
pooling
that
he
wants
to
implement,
and
so
we
want
to
try
and
make
this
happen
as
soon
as
we
can
and
and
jeff
and
and
subutai
will
be
working
on
this
as
as
soon
as
they
have
the
the
bandwidth
to
do
so.
So
we
want
to
try
and
move
towards
hierarchy
and
getting
these
new
temporal
pooling
ideas
in
place
and
and
motor
ideas
in
place
is
going
to
be
moving
in
the
right
direction.
A
I
would
say,
like
the
pie
in
the
sky
dream
here,
and
this
is
partially
me
speaking,
so
I
can't
speak
for
the
entire
community,
but
this
is
the
impression
that
I
get
from
from
the
community
conversations
that
I
monitor
all
the
time
is.
We
want
to
have
some
way
to
easily
configure
a
hierarchy
of
lots
of
different
cla
regions
and
make
that
distributed.
So
you
have
the
potential
to
have
each
region
implemented
in
a
different
client
language,
potentially
on
the
cloud
somewhere
and
the
some
type
of
standard
communication
protocol.
A
So
you
can
have
all
these
regions
talking
to
each
other,
passing
sdrs
back
and
forth
to
each
other
on
the
cloud
or
whatever
you
want
to
call
it.
So
I
think
this
is
a
good
goal.
A
lot
of
people
in
there
in
the
community
are
kind
of
striving
towards.
This
would
love
to
have
this
at
some
point,
and
this
would
this
would
really
be
an
interesting
situation
to
try
and
set
up.
So
it's
going
to
take
a
lot
of
work,
but
we're
going
in
that
direction
as
quickly
as
we
can.
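As a toy illustration of the kind of protocol being described (nothing here is an actual NuPIC interface; the wire representation and function names are invented), an SDR can travel between regions as little more than its size and the indices of its active bits:

```python
import json

def encode_sdr(size, active_bits):
    # An SDR on the wire: total bit count plus sorted active indices.
    return json.dumps({"size": size, "active": sorted(active_bits)})

def decode_sdr(payload):
    msg = json.loads(payload)
    return msg["size"], set(msg["active"])

def overlap(a, b):
    # Overlap score: how many active bits two SDRs share.
    return len(a & b)

# One region "sends" an SDR; another receives it and compares it to its own.
wire = encode_sdr(2048, {7, 21, 300, 1999})
size, received = decode_sdr(wire)
print(size, overlap(received, {21, 300, 500}))  # prints: 2048 2
```

Because active bits are sparse, shipping just the indices keeps messages small regardless of region size, which is what makes a language-neutral wire format for SDRs plausible.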
A
We
also
want
to
have
a
sort
of
a
global
standard
for
model
serialization.
That
is
fast
right
now.
Our
model
serialization
is
dependent
on
on
python,
so
we
want
to
have
that
ability
to
serialize
models
up
at
a
higher
level,
potentially
within
the
core
itself,
so
that
any
one
of
these
distributed
that's
running
cla
can
serialize
its
state
and
pass
somewhere
else
and
they
can
reinstantiate
it
and
have
all
of
that
memory
of
of
the
data
that's
been
seen
transportable
into
different
places.
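To make the Python dependence concrete: serialization at the time was essentially Python pickling, and a pickle can only be read back by another Python process. A minimal sketch (the `TinyModel` class is a stand-in, not a real NuPIC model) of the difference between that and a language-neutral dump of the state itself:

```python
import json
import pickle

class TinyModel:
    """Stand-in for a learned model; not NuPIC's real model class."""
    def __init__(self, permanences):
        self.permanences = permanences  # synapse index -> permanence value

model = TinyModel({0: 0.21, 5: 0.87})

# Python-only: the pickle bytes reference the Python class itself, so only
# another Python process (with this class importable) can restore it.
restored = pickle.loads(pickle.dumps(model))

# Language-neutral alternative: serialize the *state*, so a C++ or Java
# client could rebuild an equivalent model. (Note that JSON turns the
# integer keys into strings, one of the details a real format must pin down.)
state = json.dumps({"permanences": model.permanences})
```

A binary schema shared by the core and every client is the kind of "global standard" the talk is pointing at.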
One thing that I want is a hosted swarming service someday. Swarming is a very CPU-intensive process, and it requires you to have a database installed. One of these days, I'd love to have an initiative to create a software-as-a-service swarming interface, a RESTful API, something easy, so that any one of these clients
A
You
know
python,
client
a
go
client,
a
java,
client
or
rust
client.
They
don't
have
to
re-implement
the
entire
swarming
logic.
To
to
get
the
proper
model
parameters
for
a
data
set,
they
could
just
be
an
interface
to
somehow
some
hosted
service
that
can
run
this
form
for
them.
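Sketching what a thin client's side of such a service might look like; to be clear, no such API exists, and the endpoint, field names and function below are all invented for illustration. The point is only that a client would assemble a small request describing the data and the predicted field, and the hosted service would do the heavy lifting:

```python
import json

def build_swarm_request(stream_url, predicted_field, iterations=3000):
    # Body a client might POST to a hypothetical /swarms endpoint.
    # All field names here are invented; a real service would define them.
    return json.dumps({
        "stream": stream_url,
        "predictedField": predicted_field,
        "iterationCount": iterations,
    })

# A Go, Java or Rust client would build the same JSON, with no need to
# re-implement swarming itself; the service would return model parameters.
body = build_swarm_request("https://example.com/gym.csv", "consumption")
```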
A
So
that's
a
dream
of
mine,
hopefully
someday
we'll,
have
and
jeff
talks
about
this
a
lot
hardware,
implementations
of
cla
and
htm
would
be
awesome.
Lots
of
people
are
interested
in
running
it
on
gpus,
but
even
more
than
that
run,
you
know,
implementations
of
on
silicon
and
there's
big
companies
interested
in
this
there's
ibm
and
there's
seagate,
and
they
want
to
do
this
sort
of
thing
and
they're
they're
investing
research
dollars
in
it.
People
are
working
on
this
right
now.
Some
of
them
are
working
closely
with
us
or
they're.
A
Looking
at
the
coa
they're
cla
they're
running
the
cla
running
new
pick,
trying
to
figure
out
how
they
can
do
this
and
then
hardware
implementation,
so
that
could
be
very
cool,
very
cool
at
some
point
in
the
future.
So
last
slide
is
a
motivational
slide,
and
it's
just.
A
We
need
more
people
that
are
interested
in
working
on
opf,
the
python
client,
expanding
interface,
making
it
simpler
doing
your
own
tutorials,
even
diving,
into
the
algorithms
in
c
plus,
if
you're.
If
you
have
some
expertise
and
you
see
somewhere
that
you
could
help
contribute,
I
encourage
you
to
email
me.
If
you
need
to
ask
me
how
you
can
contribute
go
to
the
wiki,
where
it
tells
you
how
to
contribute,
we
need
your
help.