From YouTube: Cerebro for NuPIC
Description
A CLA visualization tool for the Numenta Platform for Intelligent Computing.
So, this is all going to be open source. This is all non-confidential: basically a tool I built a while back. It's gone through multiple iterations, but it's been a great way of diving into the guts of the CLA, and it's a good way of understanding what's going on.
Is it another client of the OPF, which is our core framework? Or a tool for finding mutants? It's actually all of these things; it is all of the above. Basically, what Cerebro does is let you create data sets and run CLA models over them, allowing you to edit the different parameters: all the little itty-bitty parameters that we swarm over, or even ones we don't. You can hand-edit them, run models over them, and at every record it will actually store the internal state of the CLA algorithm, all the permanences, all the connections. Then you can go backwards, replay the data set, and actually look at how the state evolved over time as the data came in.
(In the movie X-Men, Cerebro is used for finding mutants.) So, real quick: right now, pre-open-source, it's located in the cerebro project. Running it is super simple: you do python cerebro, you specify a port number, and it is a local web application. It's not built to actually be hosted yet (there'd be some trickery to have that happen), but you can just download it, run it locally, then go to your browser and go to localhost on that port.
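In other words, the invocation he describes would look roughly like this (the exact script name and default port are assumptions; check the project's README for the real entry point):

```shell
# Hypothetical invocation; the actual script name and flags may differ.
python cerebro.py 8888

# Then, in a browser, open:
#   http://localhost:8888
```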
For parameters, you can actually load two things: you can load a base description file, and you can load what we call sub-description files, which override some parameters. That's usually what we get pre- and post-swarm: the base description file, what we get pre-swarm, just specifies everything, and the sub-description files, which we usually get after the swarm, tell you what specific parameters the swarm found.
Alternatively, you can write a function to generate the data instead, and actually I think this is one of the really cool features: you can generate artificial data sets really easily by writing Python code. I put in a bunch of useful utilities that I will take great pleasure in demoing to you guys, just some nice tricks and utilities for creating realistic data sets, or, you know, a wide variety of data sets.
All right, so I'm going to load up a description file here. This is the hot gym data that we have; actually, this one's for our anomaly detection.
Here are the parameters. At the top we have the classifier parameters: alpha, and the steps that it's trying to predict. You can see here it's a temporal anomaly model, and then all the encoder parameters are here. So for consumption you have your n and your w; it's an adaptive scalar encoder, and you can look at the impacts of that.
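For reference, the encoder section of a description file has roughly this shape. The keys follow NuPIC's description-file conventions, but the values below are illustrative, not necessarily what this model used:

```python
# Illustrative sketch of encoder parameters like those shown in the UI.
# The values here are assumptions, not the real hot gym settings.
encoders = {
    "consumption": {
        "fieldname": "consumption",
        "type": "AdaptiveScalarEncoder",
        "n": 134,  # total bits in the encoded output
        "w": 21,   # active bits per encoded value
    },
}
```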
And then we have all our SP parameters over here. We can see that it's set to a random spatial pooler. We can edit these later and re-run experiments. You'll notice that this is a non-spatial-pooler model, so let me just, for the sake of clarity, do a spatial pooler one first.
On the right we have the actual spatial pooler. The spatial pooler is 2048 columns right now; the two-dimensional layout of the spatial pooler is irrelevant, it's just for keeping it all on the page. These are predicted, and the anomaly score says they are predicted; I don't know why they're not showing up there.
You see it record by record, so you can see, you know, where the spatial pooler stays the same and where we didn't predict this one, and you can cut through what's going on at the bottom here.
Here, if you want to jump across multiple records, you put how far you want to jump in that box, then hold down shift and use the arrow keys, and you can jump, if I actually did the math right, from point to point. That's really useful for comparing two faraway points and seeing what the representations are.
I split them up by field, so you can see that in this data set there are two fields: a demand, which is an integer, and the day of the week. That's one that's useful in the beginning, and a little bit later on.
We have some debugging output too, which is just textual, so you can see the data at each time step. It's just reiterating what the record number is here, but then you can confirm what your suspicions were about what data is going in. Right now we just have the inferences of what's coming out, so there's no anomaly label. We can see what the anomaly score is, which should match what's in that graph, and you can see what the multi-step predictions are: at each time step, what were the predictions.
Any questions right now, just about that?
One thing: the easiest way to do that is a repeating function. What function, what mathematical operation do you think you would use? So you can say, if you wanted it to be a five-step sawtooth, we just say t mod five, and so t is going to count zero, one, two, three, four, and then repeat. You want the field to be called foo, so you say fields foo is t mod 5, you say you want to run that for 200 iterations, you hit create, and now it's created that model. You can see all the parameters on the left side here; it's created a multi-step prediction model with some basic encoder parameters and everything. This is just the default.
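A minimal sketch of that sawtooth generator, assuming a fields-style hook like the one described (the function name and the dict return shape are guesses, not Cerebro's documented API):

```python
# A five-step sawtooth: t mod 5 counts 0, 1, 2, 3, 4 and then repeats.
# The hook name "fields" and the return shape are assumptions.
def fields(t):
    return {"foo": t % 5}

# Running it for 200 iterations, as in the demo:
data = [fields(t)["foo"] for t in range(200)]
```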
Unfortunately, the running code here, like the scrolling code, isn't working, but it's running all of them. Here we have basically 200 records: the green, the solid line, is the data, and then the blue points are the predictions. It takes a while for it to catch up there, but then, the same as before, you can step through and see what all the different inferences are. You can see foo at the bottom here, and you can see the different encoding outputs it's getting as we step through.
And you can see the anomaly score going on right here. So how do you use this to debug? Well, one thing is the spatial pooling, which I can't really visualize here. What you can do is use a parameter here called the SP verbosity.
I can go through and it tells me, like, you know, what's connected, how many things are firing, what's the density inside the inhibition radius, and I get that now at every point. It's a little slow when I try to scroll through, but you can see, like, this is at 0.56; it's being a little hard on me here, give me a second.
These verbose outputs get very big very fast, but you can dig in.
So here you can see, at each point, which cells were connecting to which other cells. If backtracking was happening, where it lost the sequence and had to go back, it'll tell you that. It can tell you which columns are bursting or which new segments are created, and again, you get that as a play-by-play for every record. So you can see why it's learning or why it's not learning.
And you can see the different resolutions.
Is this version committed already, or is it...?
So that's a good way to simulate noise, because in all of our customer data you never have super clean data; there's always a little bit of variability. So you can simulate that here and see how it affects the learning, and you can see it takes a little bit longer to learn because of all the different sequences now.
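One way to sketch that kind of noisy data set, under the same hypothetical generator shape as before:

```python
import random

# Overlay small Gaussian jitter on a clean repeating signal to mimic
# the variability in real customer data. "noisy_fields" and its return
# shape are assumptions about the generator hook, not Cerebro's real API.
def noisy_fields(t, noise=0.1):
    clean = t % 5                       # the clean repeating signal
    return {"foo": clean + random.gauss(0, noise)}
```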
What you can do is, like I said before, you can say that foo is, say, something that is one, two, three, or four, and you can also have conditions in here.
You can see that foo is just going to be some random number in one, two, three, four, but if you look at the raw input here, whenever foo is four, that four becomes a hundred. That's one example.
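That conditional generator could be sketched like this, again assuming the same hypothetical fields-style hook:

```python
import random

# foo is drawn at random from {1, 2, 3, 4}, with a condition that
# rewrites the value: whenever the draw is 4, the raw input becomes 100.
def fields(t):
    foo = random.choice([1, 2, 3, 4])
    if foo == 4:
        foo = 100
    return {"foo": foo}
```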
Or else you can have some temporary variable equal to one thing if a condition holds, else another. Actually, it's a little hard to visualize, but trust me, it works.
You know, you can make a lot of fun data sets like that, basically, with all these tools. I had one example, let me see if I can find it, that was like a cleaner version of our hot gym data.
And then you can add varying levels of noise. That's a good way to test everything. Yeah, any questions in general about Cerebro? That's basically it: that's how you run it, and those are all the different features in there. What libraries are available from this?
The coolest thing is, it is kind of crazy, but we might be able to harden it, maybe, yeah. Also, with the verbose output, you probably wouldn't be able to run, like, a hundred thousand steps, because that would be huge.
Although I will say that one of the benefits I've had working with this, and I don't know if it'd be suitable for a hosted version because of it, is that what I would do is modify the CLA code and then run this.
Yeah, I just wanted to ask: how was it implemented?
How's it implemented? So I have a web.py backend, and again, this was my foray into learning about how these things work, so it's probably not written the way it's meant to be written. So it's got that, and running experiments actually happens in either a separate thread or a separate greenlet, to try to keep it responsive, so as it's crunching the data you can click on stuff. And then it's just jQuery and Bootstrap on the front end for all these things.
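The responsiveness trick he describes, crunching records off the main request thread, can be sketched roughly like this; the queue and the model call are stand-ins, not Cerebro's actual code:

```python
import threading
import queue

# Results queue the frontend would poll while the experiment runs.
results = queue.Queue()

def run_experiment(n_records):
    # Stand-in for feeding records through the CLA model one at a time.
    for i in range(n_records):
        results.put({"record": i, "anomalyScore": 0.0})

# Run in a separate thread (Cerebro used a thread or a greenlet)
# so the web UI stays clickable while data is being crunched.
worker = threading.Thread(target=run_experiment, args=(5,))
worker.start()
worker.join()
```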
Yeah, that's totally doable, and the library I'm using for the editor is called CodeMirror; it does the highlighting and stuff already.
And you can see here that if you look at the encoders, first of all, the first one, if we look at it, should not have that issue anymore of, like, zero, then max, max, max, but zero.