Description
By Marion Le Borgne
EEG data is classified by NuPIC based upon the thoughts of the subject.
A
Hi, so my name is Marion and I made a classifier for motor imagery. In case you're not familiar with what motor imagery is, it's something we already talked about: you record the brain's electrical activity.
A
So I'm working in my free time with the Berkeley community technology group. We're getting this EEG data from a board called OpenBCI, which is a very cheap, medical-grade, open source EEG recording device, and what we had at the recent hackathon was a quadcopter that was mind-controlled with EEG recordings of motor imagery. The motor imagery that I mentioned is like recording you when you're moving your left arm or your right arm, and then, if you're just thinking about it, your brain actually makes the same pattern.
A
So the idea is to have a system that will then recognize you just thinking about that, and that way you can potentially control things with your mind. Just to show you what we did with a basic machine learning classifier: it actually worked pretty well. I mean, it's like 50% accuracy, but it was a nice breakthrough with this open source device.
B
A
Going to get crushed by this quadcopter, yeah. So the idea is to use NuPIC to hopefully get a higher accuracy rate.
A
Left arm and right arm, which is actually exactly what I'm trying to classify as well, and I'm using the same data set that I set as a benchmark, to compare it to this kind of more classical machine learning approach. So we have eight EEG channels; as you can imagine, not all of them are useful. And you have three phases: no movement, left hand, and right hand, with one sample every four milliseconds. So I guess, let me show you the demo of how we did it, and so on.
A
Actually, before that, I want to explain a little bit how I massaged the data. Talking with the people here, I got a couple of insights about how I could do that, and the idea is to create one model per channel per phase. So that's right: three models per channel, and then we disable the learning at the end of each phase, so that when we feed new incoming EEG data, we can see whether the anomaly score is high or low.
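The per-channel, per-phase scheme described above can be sketched in plain Python. This is a toy stand-in, not NuPIC's actual API: `RangeModel` is a hypothetical anomaly model that simply learns the range of its training signal, but the overall flow (train one model per phase, freeze it, then pick the phase whose model reports the lowest mean anomaly score on new data) follows the talk.

```python
class RangeModel:
    """Toy stand-in for an HTM anomaly model: learns the min/max of
    a training signal, then scores new samples by their distance
    outside that range."""

    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")
        self.learning = True

    def train(self, sample):
        if self.learning:
            self.lo = min(self.lo, sample)
            self.hi = max(self.hi, sample)

    def disable_learning(self):
        # Freeze the model at the end of its phase, as in the talk.
        self.learning = False

    def anomaly_score(self, sample):
        # 0.0 inside the learned range, growing with distance outside it.
        if self.lo <= sample <= self.hi:
            return 0.0
        return min(abs(sample - self.lo), abs(sample - self.hi))


PHASES = ["no_movement", "left_hand", "right_hand"]

def train_models(training_data):
    """training_data: {phase: [samples]} for a single channel."""
    models = {}
    for phase in PHASES:
        model = RangeModel()
        for sample in training_data[phase]:
            model.train(sample)
        model.disable_learning()
        models[phase] = model
    return models

def classify(models, window):
    """Feed a window of new samples to all three frozen models and
    pick the phase whose model reports the lowest mean anomaly."""
    mean_scores = {
        phase: sum(m.anomaly_score(s) for s in window) / len(window)
        for phase, m in models.items()
    }
    return min(mean_scores, key=mean_scores.get)
```

The real system would use one HTM model per (channel, phase) pair; the classification rule at the end is the part that matters here.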
A
So then, let me first show you the prediction part, because it helps to get an idea of how well it performs. What we're going to see is one channel's data streaming, because there is quite a lot of data. NuPIC has been trained already, and I'm loading the model that has been saved. The data is very spiky because it's very dense and the information rate is pretty high, but in my opinion it's doing quite a good job.
A
Then the classification part is really cool, because so far it's actually a hundred percent accuracy. In my mind, I need to re-evaluate that afterwards, but I can show you. Really, I'm feeding it 400 milliseconds of data and then it's going to give a result.
A
That's coming from one channel, channel zero. So let's classify this EEG data. The guy is not moving. It tries the three models, and the lowest anomaly score, both average and median, is actually the no-movement model, so that's cool. But then, let's see the motor imagery, like actually moving your hands: for the left hand, left hand is actually the lowest anomaly score as well, just like the no-movement model was for the data coming in when the patient is not moving at all. So far, it doesn't fail to predict correctly.
B
A
So this is over, I think, 25,000 data points. During the training phase it's ingesting about 25,000 data points per channel per phase, and then, after that, each time I get an incoming EEG signal, I take just a little chunk of it, 400 milliseconds, feed it in, and it gives me this result. That's really cool.
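The chunking step can be sketched as follows, assuming the four-millisecond sampling period mentioned earlier in the talk, which would make a 400 ms window 100 samples long:

```python
# Carve an incoming EEG sample stream into 400 ms chunks.
# Assumes one sample every 4 ms (250 Hz), as stated earlier.

SAMPLE_PERIOD_MS = 4
WINDOW_MS = 400
WINDOW_SAMPLES = WINDOW_MS // SAMPLE_PERIOD_MS  # 100 samples per window

def windows(stream):
    """Yield consecutive non-overlapping 400 ms windows of samples."""
    for start in range(0, len(stream) - WINDOW_SAMPLES + 1, WINDOW_SAMPLES):
        yield stream[start:start + WINDOW_SAMPLES]
```

Each yielded window would then be fed to the frozen models for classification.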
C
Yeah. We've been talking about doing classification using anomaly models like this for a long time, and I think this is the first time we've actually seen it work.
B
A
No, I think if I actually combine all of them, I could actually reduce that, and not take as much as 400 milliseconds. Potentially, I hope, and again I need to investigate that, I could take a smaller amount of data and then combine all of my anomaly scores.
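One way to read the combination idea, as a sketch under the assumption that each channel contributes a per-phase mean anomaly score: average the scores across channels for each phase, then take the minimum. The score values in the usage below are made up.

```python
# Pool evidence from all channels: average each phase's anomaly score
# across channels, then classify as the phase with the lowest average.

PHASES = ["no_movement", "left_hand", "right_hand"]

def combine_and_classify(per_channel_scores):
    """per_channel_scores: list with one {phase: mean anomaly score}
    dict per channel. Returns the phase whose anomaly score, averaged
    over all channels, is lowest."""
    n = len(per_channel_scores)
    combined = {
        phase: sum(ch[phase] for ch in per_channel_scores) / n
        for phase in PHASES
    }
    return min(combined, key=combined.get)
```

With eight channels pooled this way, each window contributes more evidence, which is why a shorter window might suffice.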
B
A
On calibration, this is actually why there are two phases. One is training, exactly like what you generate: I train it on a data set. That is the training phase, where we tell the guy, okay, move your left hand, move your right hand, and he feeds it to me. We freeze the model, so that's the training part, and then.
B
A
Of, like, updated data, kind of recreating a streaming data set. Let's say, with the quadcopter, we have a feedback system: it's going left, or indeed going right. Once the classification was made and I'm happy with what I'm seeing, I could label this predicted data as being valid and then put it back into a data set to continue training the model, so that it would reinforce itself over time.
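The feedback idea above can be sketched as a loop. Here `classify` and `feedback_ok` are hypothetical stand-ins for the classifier and for the external feedback (e.g. watching whether the quadcopter actually went left):

```python
# Self-reinforcing loop: keep windows whose classification was confirmed
# by external feedback, labeled with the predicted phase, so the model
# can later be retrained on the enlarged data set.

def feedback_loop(windows, classify, feedback_ok, training_set):
    """training_set: {phase: [windows]}; grows with confirmed predictions."""
    for window in windows:
        phase = classify(window)
        if feedback_ok(window, phase):
            # Label the window with its predicted phase and keep it.
            training_set.setdefault(phase, []).append(window)
    return training_set
```

The retraining step itself is left out; the point is only that validated predictions become new labeled training data.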