From YouTube: Moksha: 2014 Spring NuPIC Hackathon Demo
Description
Sergey & Mohamed
A: All right, so, I think we're the last team. I'm going to try to be really, really quick. The premise of our work is that, for reasons that people who like to be called evolutionary psychologists would be eager to explain to you, people have friends.
A: And we decided to run anomaly detection on our friends' check-ins at different places on Facebook.
A: So we got the data from Facebook, which is a relatively difficult process, considering they changed their API four days ago and nobody knows how it works. But basically we go ahead.
A: Yeah, so now it's downloading and analyzing the data to figure out which of the latest friends' check-ins are anomalous. The reason we think that's interesting is that most of our friends kind of do the same things every day, so we don't necessarily want to talk to them about it; but if they do something interesting, something outside of their usual range, we'll likely want to ask them about it. The interesting part about this project is the encoders.
A: So the data for a check-in is the location of the place they checked in, whom they checked in with, how many people like the place, and categories; there's a lot of categorical data in how Facebook categorizes different types of venues.
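For reference, a single check-in record with those fields might look like the following sketch; the field names and shapes here are our illustration, not the actual Facebook Graph API response:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical shape of one check-in record, with the fields
# mentioned in the talk: location, companions, popularity, categories.
@dataclass
class CheckIn:
    place_name: str
    location: Tuple[float, float]  # (latitude, longitude)
    companions: List[str]          # friends tagged in the check-in
    place_likes: int               # how many people like the place
    categories: List[str]          # Facebook's venue categories

checkin = CheckIn(
    place_name="Some Bar",
    location=(37.77, -122.42),
    companions=["friend_a", "friend_b"],
    place_likes=1200,
    categories=["Bar", "Nightlife"],
)
```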
A: Some friend, which is the default. So, as you can see, we tried to make the things that are very predictable kind of transparent. So you see that people who go to bars and assembly shops are kind of transparent, but whenever it's an event that doesn't occur all that often, it's a bit more opaque. The interesting part about this is creating the encoder, because we're not predicting any scalar value, but rather the similarity between check-ins, so you can't do any swarming.
A: You have to manually create the encoder. We assigned half of the SDR to location (that's half of what we consider to be an anomaly), approximately a little bit more than half of the rest to categories, and the remainder to things like how popular the place is, how frequently people check in to it, and how many people they have checked in with.
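That bit split can be sketched as simple arithmetic over the encoder's total width; the 60% share for categories stands in for "a little bit more than half of the rest," and the 2048-bit total is an assumed typical SDR size, not the team's actual parameters:

```python
def allocate_bits(total_bits: int) -> dict:
    """Split an SDR bit budget the way described in the talk:
    half to location, a bit more than half of the remainder to
    categories, and the rest to the scalar fields (popularity,
    check-in frequency, companion count)."""
    location = total_bits // 2
    remainder = total_bits - location
    categories = int(remainder * 0.6)  # "a little bit more than half"
    scalars = remainder - categories
    return {"location": location, "categories": categories, "scalars": scalars}

print(allocate_bits(2048))
# → {'location': 1024, 'categories': 614, 'scalars': 410}
```

In NuPIC itself this would be done by choosing per-field widths when building a multi-encoder by hand, since (as noted above) swarming can't pick them for you when there is no scalar prediction target.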
A: So these are the people who have the most data: on average, a little bit more than 100 check-ins per person.
B: [inaudible question]
A: The time period? Throughout their history on Facebook, yeah.
C: [inaudible]
A: Exactly. And so even for the transparent ones, the anomaly scores are really high; they're at like 0.3 to 0.4. But compared to the real anomalies, which are at 0.8 to 0.9, the difference is noticeable.
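Given that separation (roughly 0.3 to 0.4 for routine check-ins versus 0.8 to 0.9 for real anomalies), the transparency effect shown in the demo can be sketched as a score-to-opacity mapping; this function and its thresholds are our assumption, not the project's actual code:

```python
def score_to_opacity(score: float, lo: float = 0.4, hi: float = 0.8) -> float:
    """Map an anomaly score in [0, 1] to a display opacity.
    Scores at or below `lo` (routine check-ins) render mostly
    transparent; scores at or above `hi` (real anomalies) render
    fully opaque; in between, interpolate linearly."""
    if score <= lo:
        return 0.2  # keep routine items faintly visible
    if score >= hi:
        return 1.0
    return 0.2 + 0.8 * (score - lo) / (hi - lo)

assert score_to_opacity(0.35) == 0.2  # routine check-in: faint
assert score_to_opacity(0.9) == 1.0   # real anomaly: fully opaque
```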
[crosstalk]
C: Now, we find that anomaly detection in contexts like news or check-ins or events is really interesting, because you could actually, on the fly, look at news, for example, and teach it while you're doing it: this is content that I like. And since we have an SDR for that content, the page could be filtered on the fly while I'm looking at the New York Times, and it would actually show me only the things that interest me.
[crosstalk]
B: In Grok we have this; we take all these server metrics and server data, and we show them in order, ranked by how anomalous they are. But this is a really cool thing: you're using the opacity to sort of make it pop out at you, right? Much cooler than what we did. Cool, really nice.