From YouTube: NuPIC Office Hours - Oct 23, 2013
B: Yeah, there's no... yeah. Like, when I see people talking on here, they're pretty much synced with the audio. Well, they're synced.
C: With the audio. But I'm saying, like, the fact that I can see my lips offset by less than a second. Yeah, but that's because this is literally... I think this is the capture USB display.
C: All right, it's been a success so far. It's been amazing attendance, and certainly the presenters have been fantastic.
C: Welcome, here you are, our first person joining. Yeah, thanks. Who's this? This is Ian. Ian, okay. And everyone else is outside, just waiting for things to ramp up and get to nine o'clock, so we're just going to hang out until about nine, and Jeff, Subutai, and Matt should be joining us.
B: ...a lot of time, you know. And so we're going to print... I don't... the schedule's not firm yet, but it'll firm up at the beginning of this week, because we've got to print the badges. I don't want to put the schedules on the back, so, well...
B: So we'll do some introductions, since it's nine right on the dot, close enough. Here at the Numenta Grok office we have Austin Marshall, across from me, who is oxtopus on GitHub; Ian Danforth, who is iandanforth; and Jeff Hawkins, who doesn't do GitHub. I don't do that.
B: That's lucky. And I'm Matt, the open source guy, rhyolight. Yeah.
B: Hello! Hi Pedro, who's been organizing the Kaggle competition, right? Pedro, you're there. If you can't talk, that's okay. Hey Pedro, nice to see you're on. And Rick L, hello, hello, hello. So we might as well open it up, if anybody has anything that they would like to ask Jeff or Subutai... well, or anybody.
B: This is really just to engage the community and talk about what you guys would like to talk about. So, would you like to volunteer? I've got something.
F: So I'm watching what the NuPIC community is doing on the mailing list, and I watched the hackathon, and the impression I have is that almost everyone is into it right now to solve a particular problem that they have, whether it's, you know, credit card fraud, or "my robot doesn't do this and that." The long-term goal with NuPIC, though, from what I understand, is to advance the algorithms and solve long-term questions, such as...
F: ...whatever particular one you pick. And I don't see people in the game for this long-term goal; I see people in the game for more short-term goals. That's my impression right now.
I: So, sort of paraphrasing: I think his impression of the community so far is that people are focusing on very specific problems, like a particular data set, rather than the broader questions of intelligent computing and bigger things. Which, yeah, good observation, I think.
J: You know, I guess when we started NuPIC we had very few specific expectations about what the community would do. We did it first and foremost because we wanted to make sure that the technology was available. We knew there were people who were interested in studying it for various reasons, and it was the start of something. But we didn't put it out there with the idea of, like, "well, lots of people are going to be able to do this right away and make all these great advancements."
J: It's actually pretty tough stuff to work on. So I think this is something that could evolve over time.
J: If you look at our company right now, Grok, we're in the middle of producing a product, so we're all heads down; we're doing very little algorithms work right now. But it wasn't like that a year ago, and I don't want it to be like that six months from now. We kind of oscillate back and forth between doing deep algorithms work and practical stuff. I've been working on the side...
J: ...and I'll talk about that at the next hackathon. I'm hoping some people will start picking up that topic again at the hackathon, which is a much deeper AI type of problem. So those are just some observations. I don't think this is designed to do one thing or the other; it's designed so that whatever everyone wants to do, that's what they do. I think another thing is, for many people it's tough stuff to even get into. There's a lot of work you need to put in to understand the basics of how SDRs work, how the temporal pooler works, and those concepts, and to start working with them, just to get to a point where you could start thinking about that. So a lot of our focus has also been just basic education.
J: That's a good point, because we ourselves went through a long period of education before we could even figure out what to do with this stuff. Now we're pretty comfortable with it, but it did take us a while as well. So I think it's going to evolve. You may not be aware of it, but there's a lot of stuff happening outside of the NuPIC community as well, people interested in CLAs.
J: I've been involved in a bunch of projects with other companies which have been doing hardware, or companies that are looking at the future of computing; military agencies and funding agencies are interested in this stuff. So there are multiple things going on, and I think it'll evolve over time.
J: We launched this in June, and people are just learning how to get going. I think we're in it for the long haul; we're not in it for a short time. This is not like, "hey, we expect something right away, it's going to happen, blah blah blah." Everyone contributes according to their ability and how they want to contribute, but I think it will evolve over time to get more theoretical. I want to do that, and I know some other people here do as well.
C: One thing that I've observed from the community is how valuable the feedback has been, and the work that people have been doing to get people up and running. We had a lot of issues with people who wanted Python 2.7 support, and thanks to the community we have 2.7 support. People wanted a different compiler available, and now Clang is available, thanks to that same person. And so one of the most valuable things that people who are interested in the advancement of, you know...
C: ...machine intelligence in general can do, and I think it is a good road to that, is to give that kind of feedback and do that kind of work: to make sure it's open, that it's accessible to a wider audience, and that it's easy to get started. Because the concepts are hard enough; just the mechanics of getting started should be easy. And the community has made that leaps and bounds easier, and the hackathons are helping with that as well.
F: I know. I mean, you're right that it's a tough problem, but I was hoping that in the community I would meet the corresponding tough people, right? People who are not afraid of tackling the really big questions. And so far, that's my impression.
F: My impression is that... so, I mean, I've posted a couple of things on the list myself which I think are large-scale ponderings, and I've had some interesting discussions, often off-list, ones that you don't see there.
D: So it's been good so far, but...
F: Yeah, I was hoping for more. And if, as Jeff says, there are things going on that I don't see, maybe just because I'm not in the relevant circles, then that's good to hear.
J: One thing that's helpful for me would be a very pointed question. People post things, and I don't know who's going to respond, and I don't know what other conversations are going on. But if you have something very particular, like "I really want to know who else is interested in this topic," you can even ask me specifically: "Jeff, would you comment on this?"
J: That helps me a lot, if you say "Jeff, would you comment on this?", because there are a lot of people and I can't comment on everything. But if there's something you really wanted some feedback on, let me know. I'm not sitting around worrying about these things all the time, so don't worry about bothering us; just be very pointed about it. I don't know which ones you posted, I can't remember, Rick, but don't be bashful. I think that was Subutai's point: just bring it up.
F: You mean email you directly, instead of posting on the list? Okay, I'll ask that.
I: And I would say, even if you're having interesting discussions off-list, like you mentioned, you might want to see if you can bring them back onto the list. Ask the other person, "is it okay if we bring this back on the list?", and just help be a catalyst for that kind of conversation.
J: Otherwise it's very hard to sort through all the things you get every day. All right, anyway: I'm working on sensorimotor integration. That's a big...
N: Oh, sorry, is this better? (That's much better.) Okay, I think I was covering the mic. So, after reading the white paper, one specific confusion I had: I wasn't exactly sure how the hierarchy is set up. Is it that the output of the temporal pooler, which is the union of all of the predictions, is the input to the next layer in the hierarchy? I mean, I'm talking about multiple regions feeding into a smaller set of regions in the higher layer.
J: All right, great question. There's a lot we don't know about that, let's be honest with you. We know a lot from the biology, and in the biology we actually know there are two pathways that go forward from a lower region to a higher region.
J: We have spent a little bit of time modeling the first one, which takes the cell activity right from the CLA. However, it gets a little bit more complicated, because what you want to do as you go up the hierarchy is condense time. You want to take sequences and collapse them, so that you don't pass all the details up the hierarchy, at least not all the time.
J: The original name "temporal pooler" came from the idea that the cells actually pool over time: they become active over a sequence in time. So if I have a line moving in a visual field, the cell will be active throughout that entire movement over some area. That's what we see in biology, and so we built that into the temporal pooler as part of the CLA.
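That pooling behavior, a cell staying active across a learned sequence rather than flickering per input, can be illustrated with a toy sketch. This is my own simplification for intuition, not NuPIC's temporal pooler: one "pooled" cell knows one sequence and stays on while the input follows it.

```python
# Toy sketch (not NuPIC code): temporal pooling as a stable output over
# a learned sequence. The pooled cell turns on when its sequence starts
# and stays on while the input continues to follow that sequence.

class ToyTemporalPooler:
    def __init__(self, sequence):
        self.sequence = list(sequence)  # the sequence this cell has "learned"
        self.active = False
        self.pos = -1

    def step(self, pattern):
        # Start pooling when the sequence begins...
        if pattern == self.sequence[0]:
            self.active, self.pos = True, 0
        # ...stay active while the input matches the next expected element...
        elif self.active and self.pos + 1 < len(self.sequence) \
                and pattern == self.sequence[self.pos + 1]:
            self.pos += 1
        # ...and shut off when the input departs from the sequence.
        else:
            self.active, self.pos = False, -1
        return self.active

pooler = ToyTemporalPooler(["A", "B", "C"])
outputs = [pooler.step(p) for p in ["A", "B", "C", "X"]]
# Stable (True) across the known sequence, then off for the surprise.
```

The point is the shape of the output: a higher level sees one slowly changing representation instead of three fast-changing inputs, which is the "condensing time" Jeff describes.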
J: For a while there was a lot of time and memory spent on having the cells learn to predict their activity over a longer period of time, so multiple steps in advance of when a cell actually gets input. We were starting to do some hierarchy work on vision, and that took a lot of computational resources. So when we stepped back and said, "okay, we're not going to do the hierarchy right now," we disabled that.
J: But that's a key part of the theory, and I think the code's there; we're just not using it right now.
I: There's also the issue of: is it the active cell activity or the predicted activity that's passed up? Yes.
J: This is a long conversation, and I'd be happy to... I could talk about it for an hour. The bottom line is that there are multiple ways you could go about this. We don't know exactly the right answer; we have some parts of the answer, and we had not pursued it much because of the computational burdens associated with it. That's one of the reasons I just mentioned that, you know...
J: ...I really want to focus on sensorimotor integration as a next step, because to me that can be done with the computational resources we have today, whereas the hierarchy gets a bit tricky: the experiments take a lot longer to run, and we were just barely getting them running. (Unless you distribute it.) Yeah, distributed. So it really depends, yeah.
C: Since we have people sort of ready to go: would you say that a good first step might be an experiment to see whether you collapse to the active set of columns and use that as the input to the next layer, or take the combination of both active and predicted cells and feed that up?
J: Well, be careful: when I use the words "predictive cell," I'm referring to a cell that in biology would be depolarized, and that has no output activity. So when we look at the standard CLA and say "oh, this cell's in a predictive state," it's a cell that biologically would be a depolarized cell; it's not actually spiking, so it has no output. But when we said a cell predicts multiple steps in advance, we're actually saying it's active: its spiking activity exists over time.
J: So the cell could be depolarized, but it could also be spiking; alternately, it can be spiking over a period of time. It's that spiking over a period of time that we're trying to model in the hierarchy. I think it's possible.
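Jeff's distinction can be made concrete with a small sketch of my own (these are toy sets, not NuPIC's data structures): a layer tracks active (spiking) and predictive (depolarized) cells separately, and only the active set appears in the feed-forward output.

```python
# Toy sketch: active (spiking) vs. predictive (depolarized) cells.
# Only active cells contribute to the feed-forward output; predictive
# cells are silent and only bias what activates on the next step.

class ToyLayer:
    def __init__(self):
        self.active = set()
        self.predictive = set()

    def compute(self, input_cells, learned_transitions):
        # Cells that were predicted and now receive input become active;
        # if nothing was predicted, the input passes straight through.
        predicted_and_on = self.predictive & input_cells
        self.active = predicted_and_on if predicted_and_on else set(input_cells)
        # Depolarize the cells that learned transitions point to next.
        self.predictive = set()
        for cell in self.active:
            self.predictive |= learned_transitions.get(cell, set())
        # Feed-forward output: active cells only, never predictive ones.
        return self.active

layer = ToyLayer()
transitions = {1: {2}, 2: {3}}          # learned transitions: 1 -> 2 -> 3
out1 = layer.compute({1}, transitions)  # {1} activates, {2} is depolarized
out2 = layer.compute({2}, transitions)  # {2} was predicted, so it activates
```

Whether the feed-forward output should also include the predictive set is exactly the open question raised in the conversation; this sketch shows the "active only" choice.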
J: We were looking at this with vision, and vision problems are really hard; there are just so many resources devoted to it in the brain, and needed in a machine. So I think you might be able to come up with a very simple problem that illustrates or deals with all the issues of hierarchy.
N: But if you limit the vocabulary, you might be able to... (In vision it's hard to limit things.) I would...
J: ...come up with some sort of artificial problem, just very, very simple. You know, two fields of data, and you're trying to get them to converge in a hierarchy. Something like that: two very simple CLAs, and you're trying to get a converging hierarchy, or something along those lines. So, yes: three CLA implementations, one for variable A, one for variable B...
I: That's what that was doing; we also would show the images in a certain way to build up the invariances. So basically, as you went higher and higher up in the hierarchy, the higher levels would have more invariance to various distortions than the levels below.
C: Say I'm external and I'm trying to actually build this, and I've got NuPIC today. Would it be a good first step to take just two regions, and take the active output, not the predicted output, but the active columns themselves...
J: It's got 2,000 columns, yeah, so you've got 64,000 neurons. Then the next level actually has 64,000 inputs. Some number of those cells are active; it's very sparse, but there are still quite a few of them, because they're pooling over time. So instead of 40 or 80 or 120 cells, you might have many, because they're pooling over time, but still out of 64,000. The next level's spatial pooler then says: convert those back down to some number, like 2,000 columns.
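The arithmetic here is easy to check. Note that 32 cells per column is my assumption, a typical CLA setting that reproduces the 64,000-cell figure; it isn't stated in the conversation.

```python
# Back-of-the-envelope check of the sizes discussed above.
# 32 cells per column is an assumed (typical) setting that yields
# the 64,000-cell figure mentioned in the conversation.

columns = 2000
cells_per_column = 32
cells = columns * cells_per_column   # input width seen by the next level
print(cells)                         # 64000

# Even with temporal pooling inflating the active-cell count, the
# representation stays very sparse relative to 64,000 inputs:
active = 120                         # e.g. the 40/80/120 cells mentioned
sparsity = active / cells
print(f"{sparsity:.2%}")             # well under 1%
```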
N: So then the higher level actually connects to individual cells?
J: You'd do the whole SP; it's standard, it's the standard SP. Each column would look at some subset of the input space. In vision you have to do this with topology, which gets a little bit trickier, but I think you could do one without topology. It's a very simple problem, and that's what I would do. I would just, like, feed them...
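The subsampling Jeff describes can be sketched minimally. This is my own simplification of a spatial pooler with no topology and no learning (the real SP adds permanences, boosting, and learning): each column draws a random "potential pool" of input bits, scores its overlap with the current input, and the top-k columns win.

```python
import random

# Toy spatial pooler sketch (no topology, no learning): each column
# subsamples a random subset of the input bits, and the k columns with
# the highest overlap with the current input become active.

def make_columns(n_columns, input_size, pool_size, seed=42):
    rng = random.Random(seed)
    # Each column's potential pool: a random subset of input indices.
    return [set(rng.sample(range(input_size), pool_size))
            for _ in range(n_columns)]

def spatial_pool(columns, input_bits, k):
    # Overlap = how many of the column's sampled bits are currently on.
    overlaps = [len(pool & input_bits) for pool in columns]
    # Winner-take-all over the k best-overlapping columns.
    ranked = sorted(range(len(columns)), key=lambda c: overlaps[c],
                    reverse=True)
    return set(ranked[:k])

cols = make_columns(n_columns=64, input_size=1024, pool_size=128)
active_input = set(range(0, 1024, 16))    # a sparse dummy input pattern
winners = spatial_pool(cols, active_input, k=4)
```

Because each column only ever looks at its own random subset, no column needs a connection to all 64,000 inputs, which is the point being made.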
J: You know, in the standard CLA, the way we use encoders with fields, these are very impoverished input spaces; the CLA doesn't like working with too few bits. That's what we've been pushing on at the low end. It's kind of happy to have a lot of input, and you can just subsample from...
C: ...them. And so it sounds like one of the key parameters, in terms of runtime, that we played with is the potential pool percentage, you know, what the potential overlap is. If you've got 64,000 inputs, then having 50 percent potential... Here's what I would...
J: ...not do topology, because I would try to do the simplest thing first. I would probably say: make my CLAs not have 64,000 cells; give them five or ten cells per column. So now I have, you know, twenty thousand or ten thousand cells in these regions. So now my next level sees ten thousand here and ten thousand here, and I can combine them into an input of twenty thousand instead of 128,000. You know, in real brains...
J: ...it's all subsampled, everything. A region projects, and some of the cells go over here, some of the cells over there, because it's all distributed. It seems to work just fine, so we don't have to connect to everything. But anyway: I would try to simplify my two first-level CLAs, which can still do temporal pooling, and then converge them on another, just regular CLA, like we normally do. And then you have to find a problem.
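Wiring-wise, the suggested experiment amounts to the outline below. This is a structural sketch under my own naming (`Region` here is a stand-in stub, not NuPIC's `Region` class): two small first-level regions, one per variable, whose outputs are concatenated and fed to a third, converging region.

```python
# Structural sketch of the suggested experiment: two small first-level
# regions (one per variable) converging on a single higher region.
# "Region" is a stand-in stub, not NuPIC's Region class.

class Region:
    def __init__(self, name, n_cells):
        self.name, self.n_cells = name, n_cells

    def compute(self, input_bits):
        # Stand-in: a real region would run SP + temporal pooling here.
        # We just fold input indices into this region's cell range.
        return {b % self.n_cells for b in input_bits}

region_a = Region("A", n_cells=10_000)   # small: ~5-10 cells per column
region_b = Region("B", n_cells=10_000)
top = Region("top", n_cells=10_000)

def hierarchy_step(bits_a, bits_b):
    out_a = region_a.compute(bits_a)
    out_b = region_b.compute(bits_b)
    # Concatenate by offsetting B's cell indices past A's range, giving
    # the top region a combined 20,000-bit input space.
    combined = out_a | {c + region_a.n_cells for c in out_b}
    return top.compute(combined)

result = hierarchy_step({1, 2, 3}, {4, 5})
```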
H: I'm talking through my phone, so... well, okay. I mean, you're doing a very good job attracting people, I must say; you're probably the best in...
B: We can't understand what you're saying; maybe you could type your question in the chat box and we can get to it. Oh, you're on your phone, yeah. Do you have...
O: Hi, I'm actually trying to... by the way, am I audible properly? (You're good, yeah.) Okay. I'm following a slightly different approach. My problem is machine prognosis, and I have studied the CLA white paper.
O: I am also looking at NuPIC, but I'm trying to get a more intuitive understanding of how the algorithm works, and I believe the best way of doing that is to have my own flavor of the algorithm, using NuPIC as a sort of reference. As I'm trying to do that, obviously I'm running into some issues in terms of how I interpret the data set that I have. In my problem I'm looking at multiple pieces of data; most of the examples that I've seen with NuPIC...
O: Could you comment on what happens if I use multiple variables, and how that impacts the number of columns and the topology of the columns for a particular region?

I: I mean, in NuPIC the number of columns is fixed, so you can add in multiple inputs; you have to have an encoder for each input and feed that in. In reality, though, if you go beyond a certain number of variables feeding into the same region, it's going to be...
J: So in our instances, we've successfully combined up to, like, four different metrics, four different fields. But you've got to imagine, when you use it with a single CLA region, what's happening is you're taking those four fields, you're encoding them, and you're just concatenating them, so they're really like one big bit vector. What Subutai is referring to is: yeah, it's not going to see them as separate things.
J: It just says: okay, the combination of these fields right now is the state of my input. And if you start concatenating too many of those guys, then you really ought to be doing that in the hierarchy, because you want to be saying: okay, this is the input region over here, here's another region over there, understand them separately, then combine them. So you can do multiple variables. We've experimented with up to six...
J: ...I think, but you get diminishing returns, because, as Subutai says, you have to have a lot more data to really tease apart the patterns that exist across all that data. It's a little bit like this: if you think about the retina in your eye, it's like a million sensors, and if you tried to recognize patterns across that million sensors all at once, you'd never be successful; there are just way too many patterns. So what the brain does...
J: ...is look at little pieces of the retina at a time: what are the patterns I can find locally? Trying to do all this in one step would be like going right from a retina to one region of the cortex. You really just can't have very large input spaces, just like you can't put a whole retina into one region and expect it to recognize patterns across the whole thing; it's going to be local. So anyway, you can do it; we do it a lot.
C: ...or five. And we've seen people attempt to throw many, many fields into the process, and the ones that have proved useful are the ones that are a strong signal, the ones that differentiate patterns that would otherwise look the same.
C: For example, in the energy usage data, if you know that it's a weekday or a weekend, that's a very useful, strong signal to differentiate two patterns. So every time you add a new field, you really want to think: is this going to help distinguish otherwise similar patterns? Otherwise it's just confusing. There's another thing you...
J: ...can do, which is just sort of an ensemble type of approach. This is what we're using in our product, Grok, which we're building right now, which is for anomaly detection. We're trying to detect anomalies in servers: computer servers, IT infrastructure. We can look at a particular server, look at multiple metrics on that server, and build a CLA model for each metric, so each metric on its own is being modeled.
J: Sometimes we'll include time with that, so it'll be time plus the value of the metric. So now we have, let's say, three or four CLA models running at once. Then you can use all kinds of ensemble techniques to improve your result, whether you're trying to detect anomalies or make predictions. I won't go into them, but ensemble methods always work well. In the brain...
J: ...that's done in the hierarchy, but you can get a lot of value out of just doing traditional ensemble sorts of combinations of these things, and that would help in prediction as well. So that's what I would do if you've got a lot of variables: find out which ones are good, build reasonable models on those individual variables, maybe two at a time, something like that, then say, all right, how do I combine the results of those together in some sort of ensemble fashion, not in a biological way? That's how I would do it today, and you'll get better results; you almost always get better results with ensembles.
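A minimal version of that per-metric ensemble looks like this. It is my own sketch: one model per metric, each emitting an anomaly score in [0, 1], with a server-level score combining them. Combining by max ("worst metric wins") is an assumed choice here, not necessarily what Grok used.

```python
# Sketch of the ensemble idea: one model per metric, each emitting an
# anomaly score in [0, 1]; the server-level score combines them.
# Combining by max ("worst metric wins") is an assumed choice.

def server_anomaly(per_metric_scores, combine="max"):
    scores = list(per_metric_scores.values())
    if combine == "max":
        return max(scores)
    if combine == "mean":
        return sum(scores) / len(scores)
    raise ValueError(f"unknown combine rule: {combine}")

# Pretend three per-metric CLA models produced these scores:
scores = {"cpu": 0.12, "mem": 0.95, "disk_io": 0.30}
worst = server_anomaly(scores)           # 0.95: memory looks anomalous
smooth = server_anomaly(scores, "mean")  # dilutes single-metric spikes
```

The max rule surfaces a single misbehaving metric; the mean rule trades that sensitivity for fewer false alarms, which is the usual design choice in such ensembles.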
J: I understood the question, but I didn't understand your qualifier on the question. I think what you want the CLA to do is this: you say, look, I've got a whole bunch of inputs coming in here, and I'm trying to find the structure in it. The hierarchy sort of makes an assumption, and this is true in brains as well.
J: If I were to try to build a vision system where I scrambled the bits from the retina, it wouldn't work; there's no way it would work. There's an assumption that there are local correlations, first in some region, the visual area. And the same thing with multiple fields: you'd say, all right, I'm assuming there's some local correlation between these fields, and I'm going to find that, and there's some other local correlation between those fields...
J: ...I'm going to find that, and then I'm going to look for the correlation between the summations of those guys.
N: My qualifier was that that only works for spatial patterns, right, for individual snapshots of time. How would it collate information across time steps? When it comes to energy, for instance, the variables don't really mean that much on their own in a single snapshot; they're more useful across time.
J: I don't see why time wouldn't work the same way. Remember, the output of the CLA, if you have temporal pooling enabled, as I talked about earlier, is a sort of spatial-temporal representation of the input. It's basically an output that's stable over time, capturing the temporal statistics. So I guess we'd have to go into it deeper; I don't know why you think it wouldn't work.
I: Yeah, the active cells; it's not the active columns, it's the active cells.
J: The active cells contain all of that, yeah. That's an important concept: the active cells contain basically everything the system knows about the world. It's sort of like the current stream of inputs in the context of all prior learning; that's what the active cells represent. So it's a pretty rich state of...
I: ...what's going on, yeah. And that's what we use with the classifier as well, to map back into values; we just use the active cells.
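That mapping can be approximated with a very small sketch. This is a naive frequency-count decoder of my own, far simpler than NuPIC's CLA classifier: during learning, each cell accumulates counts of the values seen while it was active, and decoding an SDR is a vote over those counts.

```python
from collections import Counter, defaultdict

# Naive sketch of "active cells -> value" decoding. During learning,
# each cell counts the values seen while it was active; decoding an
# SDR is a vote over those counts. (NuPIC's CLA classifier is more
# sophisticated; this only shows the direction of the mapping.)

class ToyDecoder:
    def __init__(self):
        self.cell_votes = defaultdict(Counter)

    def learn(self, active_cells, value):
        for cell in active_cells:
            self.cell_votes[cell][value] += 1

    def decode(self, active_cells):
        tally = Counter()
        for cell in active_cells:
            tally.update(self.cell_votes[cell])
        value, _count = tally.most_common(1)[0]
        return value

dec = ToyDecoder()
dec.learn({1, 2, 3}, "low")
dec.learn({7, 8, 9}, "high")
decoded = dec.decode({2, 3, 9})   # majority of cells vote "low"
```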
B: Maybe you can email the list with the question and we'll try to address it. Yeah, it's not muted. You might be muted. Oh, maybe I'm muted. Oh, we muted it; I'll try. Okay, this time. No, no, no, there's something wrong here. I muted ours; it doesn't seem to make a difference. Oh, it's just only muting them. No, no, it's when you hover over...
I: When you say "interpret," you mean bring it back into the original input space? Is that... Yes.
J: But it gets harder and harder as you go up the hierarchy. Imagine I'm looking at the top of the visual system, and I say, here's a bunch of cells that represent a dog. Then I say, well, what does that represent down four levels in the visual hierarchy; what's my input? Well, there's a gazillion inputs which could have led to that state in the upper hierarchy, so it's not even close to a one-to-one correspondence.
C: What I was hoping to do is this: if the SP is performing properly, then at the low end, and we're talking about vision, you're going to end up with very low-level features, you know, essentially edge detectors, etc., at the lowest...
J: You know, Google is using deep learning networks, and those have no time concepts at all; they're static things, no kind of time, using, like, restricted Boltzmann machines or something like that. So it might be possible to have a serial, hierarchical, spatial system where you just store gazillions of patterns, and you can take one and bring it back. I don't know. But the general principle, I think, in the hierarchy of brains, is that you can't go backwards.
J: You can't say: given this higher-level state, reconstruct exactly all the things it could be. Just...
I: ...as an example, remember these are spatial-temporal patterns. So let's say one cell represents someone smiling, the act of smiling, and another cell represents the act of yawning, or something else. If you reconstruct them both down, they'll give you a blurry representation; you won't be able to tell the difference.
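The "blurry" effect is just what the union of two SDRs looks like: reconstructed downward together, the patterns merge and neither can be recovered. A tiny sketch with toy bit-sets of my own:

```python
# Toy illustration: reconstructing two higher-level patterns downward
# together yields their union, from which neither original can be
# recovered on its own.

smile_sdr = {3, 17, 42, 88}    # low-level bits implied by "smiling"
yawn_sdr = {3, 21, 42, 97}     # low-level bits implied by "yawning"

blurry = smile_sdr | yawn_sdr  # the "reconstruction" of both at once

# The union contains each source pattern, so the identity of the
# original is lost: both hypotheses are equally consistent with it.
both_consistent = smile_sdr <= blurry and yawn_sdr <= blurry
```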
N: Okay, I understand; actually, I think you'd end up with a blurry mess, but you'd still be able to reconstruct it. But what if you stopped at a higher level? So you start over here, you reconstruct out this level, you choose one of those outputs, and then you reconstruct with that one down to the lower level. Then you end up disambiguating.
I: Maybe... no, no, you won't, because it's not choosing one of the outputs. These are SDRs here; you have to choose all of them in a way that is self-consistent.
I: So that's one way you could do it. What you could do is build something outside of the CLA, you know, like the classifier, something else. And this we've done before: you can show all of the input data and see which patterns this cell becomes active for, and you can look at those patterns. You can do that, and that's reasonable. Then you would get the smiling face, or all of that stuff; you'd get everything. But it's not...
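That offline technique, replaying the data and recording what each cell responds to, is easy to sketch (a minimal version of my own; the model here is a stand-in function, not a CLA):

```python
from collections import defaultdict

# Sketch of the "show all the input data" technique: replay the data
# set through the model and record, for each cell, every input pattern
# during which it was active. Inspecting the log afterwards shows what
# each cell has come to represent.

def build_receptive_log(dataset, model_step):
    cell_patterns = defaultdict(list)
    for pattern in dataset:
        for cell in model_step(pattern):   # cells active for this input
            cell_patterns[cell].append(pattern)
    return cell_patterns

# Stand-in "model": cell 0 fires for short strings, cell 1 for long ones.
def fake_model(pattern):
    return {0} if len(pattern) < 5 else {1}

log = build_receptive_log(["hi", "hello there", "yo"], fake_model)
# log[0] now lists every input that cell 0 responded to.
```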
I: ...better, too. And again, these are very simplified scenarios, yeah; in a more complex scenario, with vision and things like that... And there's nothing like that in the brain. So they're accessible? Sure there is; it's just that you have to...
C: It's an inverted pyramid, right? No, no, you don't go backwards through a neuron. No, no, but it's an inverted pyramid, so you have connections that converge, and then a second set of neurons. Yes, a second set. Well, that's totally different, but that's only because in biology you can't go backwards through the neurons.
J: And you can't ignore the whole attention mechanism, which is what those backward feedback signals are involved in: attending to various parts of the input space. So this is a complex topic. I think at least a couple of us are on the side that it's really hard to reconstruct in a hierarchy, but we know that it's been done; we've done it in one.
C: Not the way brains do it, yeah. I agree that there are going to be patterns which are temporal in nature, and reconstructing those will be difficult to visualize and intuit, but there are going to be a large set of strictly spatial patterns, like grandmother cells, that you can directly reconstruct back.
J: A grandmother cell, by the way, is a cell that represents the grandmother no matter how you see the grandmother, and therefore from your grandmother cell you couldn't actually reproduce a grandmother. It's any grandmother you could imagine; anything that would make your grandmother cell go off. So again, biology doesn't do this.
D: Do you hear me? (Yeah, perfect.) Hello guys, thanks for the discussion; I've been following. So does this mean the hierarchical CLA would not help for simple real-world problems, like predicting a sequence, because you wouldn't be able to use the hierarchy to get more abstract patterns and improve the prediction this way? And if that is so, what is the way to improve? Can you get a better prediction just by scaling the single region and making it much bigger?
I: So I think the question is: with a hierarchy, will it help classifying sequences, or learning sequences? Absolutely, yeah; I mean, it absolutely should. Just because you can't reconstruct that sequence doesn't mean you can't use it for classification. You know, if you have temporal sequences that have temporal hierarchies in them, so it's a sequence of sequences; people's motions are a great example of that. My fingers have a particular pattern...
J: ...and if you're actually receiving input throughout this period of time, each level in the hierarchy is making predictions, and they're informed by the levels above them. So you'll make very detailed predictions based on the current input stream coming in. You can classify, and you can make these detailed predictions, but you're not making those detailed predictions purely in a top-down fashion; it's top-down plus your current input that leads to that. So again, you can make very...
J: ...detailed predictions, but it's not a purely top-down thing; you have to have some bottom-up clues as to how to fill in the picture. But yeah, the hierarchy should help in classification of temporal sequences, yeah.
O: Sure. So again, with respect to what's mentioned in the white paper: when you're talking about temporal pooling, the white paper talks about extending the predictions back in time by attempting to train a second dendrite associated with each active cell, one that's chosen as the one that best matches the state of the system in the previous step.
I: They say... is that... So the first thing you talked about was pooling, yeah, where you're trying to extend that.
K: That mechanism: is it related to what we're trying to do with this classifier? I don't see how it's related.
J: Yeah, I mean, you could say both that the classifier is predicting things in the future and that the temporal pooler is predicting things in the future, but we're using them to two very different ends. Yeah, well...
O
Well, my question pertains to the implementation aspect. I would guess that in the implementation these are two different...
I
Yes, totally, totally different. The classifier's main purpose is to bring it back into the input space, so that, you know, we can calculate error, and we can use it to, you know, do some control.
J
Or whatever it is, yeah. And the temporal pooler was designed not for that at all. The temporal pooler was to allow us to basically build representations that span over sequences. So you want, basically, as you go up the hierarchy, you want your representations to sort of encode longer and longer periods of time. So you'll see cells that, you know, stay active during a complex sequence of behavior, for example. So you need to be able to take a series of spatial patterns and build a more stable output over them, and the temporal pooler did that. It does it actually in a very elegant way; I could get into it, but it solves some problems. But that's a totally different problem than the one we're trying to solve when we use the classifier; there we're just trying to produce an output with the classifier.
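That pooling idea, a slow, stable output built over a fast-changing series of spatial patterns, can be sketched in a toy way. This is a caricature under my own assumptions, not the actual temporal pooler algorithm: a real pooler learns which patterns belong together from temporal structure itself, whereas here the grouping is simply handed in.

```python
# Toy sketch (not the actual NuPIC temporal pooler): a "pooling" layer whose
# output stays stable across a whole sequence of spatial patterns, so the
# output changes more slowly than the input.

class ToyTemporalPooler:
    def __init__(self):
        self.learned = {}  # spatial pattern -> sequence label it belongs to

    def learn_sequence(self, label, patterns):
        # In a real pooler this association would be learned from the
        # temporal stream; here it is given directly for illustration.
        for p in patterns:
            self.learned[p] = label

    def compute(self, pattern):
        # Emit the (stable) sequence label instead of the (fast) input.
        return self.learned.get(pattern)

tp = ToyTemporalPooler()
tp.learn_sequence("walking", ["step_left", "step_right"])

# The input changes on every tick, but the pooled output is stable across it.
outputs = [tp.compute(p) for p in ["step_left", "step_right", "step_left"]]
print(outputs)  # ['walking', 'walking', 'walking']
```

The higher level sees "walking" for the whole span, which is exactly the longer-and-longer time scales described above.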
O
Okay, and I had one more question. I think Rahul talks about it in his YouTube video: the use of particle swarm optimization. I put a question on the forum as well. I'm still not exactly able to get my head around how you're using the particle swarm optimization. Is it to kind of converge on the active columns?
I
In order to get a CLA working well on a data set, there are a number of parameters that need to be tuned. There are the encoder parameters, and there's the threshold of the temporal pooler, and there are a few other parameters. Once you find that set of parameters, you don't need the particle swarm anymore; you just run the CLA. What particle swarm optimization is used for is to find that parameter set in, you know, as efficient a way as possible.
I
You know, the space of parameters explodes exponentially, and so it's just an optimization technique. You can think of it kind of like what evolution does. You know, our retina and our cochlea and all that evolved over time, over generations, and those have parameters that had to be tuned...

J
...in order to get that working well with the kind of data we deal with, right? So we deal with certain types of visual input and, you know, frequencies; we deal with certain types of sounds. And so, as I said, we had to evolve to get a set of encoders for our brain, and we had to evolve the basic hierarchy of our brain. We developed some of the learning rates that are suitable for humans, and that's all that particle swarm optimization is doing: trying to find one of those sorts of basic encodings.
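For readers who haven't seen one, the basic particle swarm loop being described can be sketched as follows. This is a generic, minimal PSO, not the actual nupic.swarming code, and `model_error` is a made-up stand-in for training a CLA on the data and scoring its predictions; the "ideal" values 12 and 2048 are invented for the example.

```python
import random

# Minimal particle swarm sketch. Each particle is a candidate parameter pair
# (say, a threshold and a column count); the swarm pulls particles toward the
# best-scoring settings seen so far.

def model_error(params):
    # Hypothetical objective standing in for "train a CLA, measure error".
    return abs(params[0] - 12.0) + abs(params[1] - 2048.0) / 100.0

def particle_swarm(n_particles=10, n_iters=50, seed=0):
    rng = random.Random(seed)
    particles = [[rng.uniform(0, 40), rng.uniform(500, 4000)]
                 for _ in range(n_particles)]
    velocities = [[0.0, 0.0] for _ in range(n_particles)]
    personal_best = [p[:] for p in particles]
    global_best = min(particles, key=model_error)[:]

    for _ in range(n_iters):
        for i, p in enumerate(particles):
            for d in range(2):
                # Inertia plus random pulls toward personal and global bests.
                velocities[i][d] = (0.7 * velocities[i][d]
                                    + 1.5 * rng.random() * (personal_best[i][d] - p[d])
                                    + 1.5 * rng.random() * (global_best[d] - p[d]))
                p[d] += velocities[i][d]
            if model_error(p) < model_error(personal_best[i]):
                personal_best[i] = p[:]
            if model_error(p) < model_error(global_best):
                global_best = p[:]
    return global_best

best = particle_swarm()
print(best)  # should settle near the made-up optimum [12, 2048]
```

As the discussion notes, this tuning is one-time and offline: once the swarm has found a good parameter set, you just run the CLA with it.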
J
This is something I don't think we've ever told anyone about, but we had an intern this summer who was studying this, and we found something very useful, very interesting.
J
But in reality, we only had to search over a very small space, because, just by nature, whatever it is, it turned out a few seem to do really well. In fact, what we found out is, if you actually pick the center of one of those clusters and use that, as opposed to what the particle swarm actually gave you, which is somewhere near there, you got better results, for some reason. That's really weird. So we're still using particle swarm optimization, but in our product, for example, the Grok product we're about to ship, we're not doing it; we're picking from a few pre-existing models that seem to do pretty well for everything. Okay.
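The "use the cluster center" observation can be illustrated with a trivial centroid computation. The parameter pairs below are made-up stand-ins for the best results of several swarm runs that all landed in the same region of parameter space; in practice you would cluster real swarm outputs first (e.g. with k-means) and then average within a cluster.

```python
# Sketch of picking the centroid of a cluster of swarm results and using it
# as the model's parameters, instead of any single swarm result.

def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

# Hypothetical best-parameter pairs from several swarm runs on similar data.
swarm_results = [
    [11.8, 2010.0],
    [12.3, 2090.0],
    [11.5, 2060.0],
    [12.6, 1990.0],
]

center = centroid(swarm_results)
print(center)  # roughly [12.05, 2037.5]
```

The surprising empirical claim in the discussion is that this averaged point can outperform the individual swarm results it was averaged from.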
I
Yeah, I agree. I think it'd be really interesting to see what happens. I think there are some kinds of self-consistent sets of parameters. You know, if you have a certain temporal pooler activation threshold, then, well, the minimum threshold has to be sort of related to that. So there are consistent groups within this parameter space, and I think it'd be great to run it on all sorts of data. We want to see what happens.

J
We were very surprised by this result. An interesting thing: when he did this, he used mostly these energy data sets, and so we said, well, maybe those results are just related to these energy data sets. But they seem to work on other data sets too. So it would be a good experiment to try it on different ones and see what happens.
C
It was very surprising. It could be very valuable for the community if, you know, we have a list of these centroids. If other people discover them in their vision or natural language processing work, they could list them, so that when you're doing additional experimentation, you start out with a reasonable starting point for a type of data. Yeah, yeah, no guarantee that that's possible, but...
J
But we definitely think this is possible. If you're trying to apply this in a commercial way, like we're doing with Grok, where we're working with IT metrics, it definitely seems like this is a good strategy. You know, maybe you have to do a teeny, what we call a mini-swarm, to figure out which of these centroids to use, but that's a much simpler problem, because, you know, the full swarm takes a while. It's the most computationally intensive thing about the whole process; you know, evolution is a long process. So once you've got it, you can learn. But, you know, if you're trying to do something really cheap and fast, you don't want people waiting around for a few minutes; you want to just, you know, pick a set and go with it.
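The "mini-swarm" idea reduces to a tiny search: rather than running a full swarm, score a handful of pre-existing centroid parameter sets on a small sample of the new data and pick the winner. Everything in this sketch is hypothetical: the centroid names, their values, and `sample_error`, which stands in for running a model on a data sample and measuring its prediction error.

```python
# Sketch of a "mini-swarm": pick the best of a few known-good parameter sets
# instead of searching the whole parameter space.

# A few centroids found previously on other data sets (made-up values).
known_centroids = {
    "scalar_metrics": [12.0, 2048.0],
    "categorical":    [18.0, 1024.0],
    "spatial":        [30.0, 4096.0],
}

def sample_error(params):
    # Pretend the new data is best served by settings near (17, 1100).
    return abs(params[0] - 17.0) + abs(params[1] - 1100.0) / 100.0

# Score each known-good candidate on the sample; far cheaper than a swarm.
best_name = min(known_centroids, key=lambda n: sample_error(known_centroids[n]))
print(best_name)  # categorical
```

Evaluating three candidates takes seconds, which fits the "nobody waits around for a swarm" constraint described above.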
B
Okay, guys, we need to wrap up. Thanks for attending; this has been great. We'll try and schedule...