Description
AI / Neuroscience Chat Episode 2: Basics of sparse distributed representations -- Watch live at https://www.twitch.tv/rhyolight_
Okay, hi. Am I too loud on the volume? Okay, thanks for watching. I'm Matt Taylor from Numenta, and this is an AI chat — an AI/neuroscience chat, but we're talking generally about both, I think. So we had a good chat last week, and the chat box is up and working. If you didn't notice it there: it is there.

Volume is good — thank you, thanks for listening. So before I start: oh, what you're hearing is Caravanserai by Santana. This is the first track on the album, and this is the best Santana album, in my opinion. Whoops — wrong thing. Okay, yes, so what do I want to do? First of all, you probably noticed I've only been doing this Twitch thing for a little while — this is my third real week doing it.

My second Monday, probably, and so the second sort of AI chat show. I'm working on overlay stuff, so you might notice I put this on top and this on the bottom. I'm hoping that this works as the title: it is an artificial-intelligence chat, episode two, and we'll talk about just HTM basics today. And I think the viewers thing over there looks good too, hopefully — I'm sort of testing it out today, so we'll see. It updates like once a minute or so. So, good news.
It's nice — you have to get a certain amount of followers or something, and then I can have things like subscriptions and do neater things in chat. I can create VIPs in chat, so it gives me a lot of tools to engage more with the community, and I think that'll be fun. So what I'll do to celebrate: at the end of this little chat, I will gift five subscriptions to this channel, randomly, to whoever's in chat at that time.

Yeah, so that's fun, and I'm happy I got more than 50 subscribers watching. So one of the things I wanted to do: I noticed that I got some comments on YouTube from last week, so I want to go over some of those and try to answer them. Let me find them. I just put all of the Twitch videos on YouTube, on my YouTube channel here, and I didn't really know what to expect with that — how they would be received. Let me turn this off.
I didn't know how they would be received, because all my other videos I try to make a bit more polished, but I got good feedback and a lot of people watching them on YouTube, which is good — it's sort of encouraging. So I have been replying to the comments there, but there's one in particular that I wanted to look at: "Please stop talking about deep learning. You have no experience at all. Your statements are misleading or plainly wrong."

If so, please tell me what I'm wrong about — this particular comment doesn't tell me what I'm wrong about. I would be happy to have a conversation with anybody who wants to join in, in chat or on Twitch or whatever, and if I'm wrong about something, I want to know, because I'm looking for what's right. I'm looking for the truth, the right answer. So please engage me in chat, or email me at matt at numenta dot org if you don't want to do it publicly. I am open to being corrected about any of this stuff, for sure.
So, for the most part, I think everybody was happy that this content was being made available on YouTube, so I'm just gonna keep doing it — and I got a nice spike in views, so I'm just gonna keep pushing this stuff. What do you mean I still have the opening slide showing? I don't know what you're talking about — you can see my screen, right? I mean, I'm not viewing the video; I just had it paused. So, okay, next thing on the agenda. Oh, this is kind of neat.

Let me see if this shows up: if I go to my Twitch page, I have this poll down here — you're gonna see the Twitch page; I should be able to... yeah. So there's this poll down here. I thought it would show up on the stream as you're watching, but apparently it doesn't — I guess I didn't configure it correctly — but I thought it would be fun to have a poll. This one I gave on YouTube as well.

"When will truly intelligent machines be a reality?" So, if you're interested, go down to my Twitch page and vote on this poll. Next time I do a poll, I'll figure out how to have it pop up live while you're watching, because I know there's a way you can embed it, sort of, in the video. I'm gonna shut that other stream down, because I don't need to stream two things at once. Okay, so how's everybody doing? How many people have we got — eight or so in chat?
That's great, I'm glad you guys showed up. At any point in time, if you want me to explain anything, please let me know. Some of you guys are old HTM Forum people — not old people, but you know, you've been around and you know the basics of what I'm going to talk about. But even if you just want me to go over something in particular, ask, because I'm just gonna whiteboard it today. You can ignore this — this is a future plan.

A future plan: I love time-series databases, and Influx is a time-series database, and I want to do something with it. So, let's see — when we're talking about HTM, I think, especially if you know something about deep learning or about machine learning, one of the most important things that you have to think about is sparsity. And if you look at your brain at any point in time — I just happen to have one right here.
At any point in time — and I don't know if this is exactly true in every single part of the brain, but... this one comes apart nicely. So let's just talk about the neocortex, which is like this part without the inside, without the old-brain stuff — the thalamus, the midbrain, and all that. The neurons in here that are doing computation: if you were to take a snapshot of your brain at one point in time, you would see that of all the billions of neurons that are in here...

...only two percent of them are actually in a firing state — releasing their action potential, their voltage is high, they're firing, they're spiking, whatever you want to call it. Only 2% of those. So if you think about that logically, you might ask: why aren't we using all of our brain? Why are we only using 2% of our brain? That's not what it means. We are using all of our brain all the time — it's not that most of it is idle just because the activity is sparse.
Mark says he's having a hard time seeing the video — if anybody else is too, say so. It's all good? Thanks for the check. I know there's a stream health check I can do here; it says "excellent". I don't know, Mark — I don't know what to tell you, sorry, but you can always check it on YouTube. And Mark, you know all this stuff anyway. I know you're being supportive and I appreciate it, but I know you know all this stuff. So a refresh fixed it.

So if you have video problems, refresh. Anyhow: sparsity is important. One of the reasons it's important — and it's not just sparsity, necessarily, it's the distribution of semantic meaning as well. So we talk about this in HTM — sparsity... did I spell "sparsity" right? — we call them sparse distributed representations.
Sparse, because in the population of neurons that will represent something, only a small percentage of them will actually be on, doing the representation, at any point in time. Distributed, because you want to take the semantic idea of the thing being represented and distribute it across the population of neurons, so it's not just focused in one area. And obviously it is a representation of something.

So when we talk about neurons — especially populations of neurons and the activations of those neurons — an SDR is, in that sense, essentially a collection of neurons and their on-or-off states. So all of our SDRs are essentially bit arrays: just a big list of neurons, and which ones in that list are on and which ones are off. And this has semantics — it means something; all the bits mean something. The best way to show this is still with the HTM School stuff, so hold on.
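If you want to play with this yourself, here's a minimal Python sketch of an SDR as a bit array — just the set of on-bit indices, using the typical 2048-bit, 40-on sizes I mention later (my own toy version, not code from any particular library):

```python
import random

def make_sdr(n: int = 2048, w: int = 40) -> set:
    """A random SDR: n bits total, w of them on.
    Stored as the set of indices of the on bits."""
    return set(random.sample(range(n), w))

sdr = make_sdr()
sparsity = len(sdr) / 2048  # about 0.02 -- 2% of the bits are on
```

Storing only the on-bit indices is natural here because the representation is sparse: 40 indices instead of 2048 booleans.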
The thing you have to realize about why sparsity matters is this: if you've got, I don't know, even half of these bits on — let's say a thousand or fifteen hundred of them on — there's a certain number of ways that those bits can fit into this space of 2048 total positions. This is one of them, right? This is just one randomly chosen arrangement of these bits in the space, but there's a whole bunch of others, and there's actually a mathematical formula for this.
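That formula is just the binomial coefficient: the number of distinct SDRs with n positions and w on bits is "n choose w". A quick sketch:

```python
from math import comb

def num_unique_sdrs(n: int, w: int) -> int:
    """Number of distinct ways to arrange w on bits in n positions: C(n, w)."""
    return comb(n, w)

# Half the bits on: the maximum-capacity (dense) case.
dense = num_unique_sdrs(2048, 1024)
# 2% sparsity: still astronomically many codes, but far fewer than dense.
sparse = num_unique_sdrs(2048, 40)
```

Both numbers are enormous, but `dense` dwarfs `sparse` — which is exactly the capacity trade-off I talk about next.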
So when you enforce sparsity in a population like this, you're losing capacity, which makes sense. I mean, I can understand why dense representations seemed like a natural way to represent things: you can represent so many different unique things — an enormous number of unique things — in that space; you could put values in there until the end of time. But with sparsity you lose that; you see the capacity going down.

I didn't use mathematical notation because I like lots of zeros, but it's really a big number. Even if we dial the sparsity down really low — we typically use 40 on bits out of 2048, which is 2% sparsity, which matches the proportion of neurons active in your brain — so in that respect this is a good representation of your brain.
If you like these visualizations, give me a follow, because I'm going to show a lot of visualizations, I hope, on this channel, having to do with HTM. I also want to do 3D visualizations of HTM systems, which I've done in the past, and I'll show some at some point.

So in this example I've got two SDRs, one on the left and one on the right. They're both different and they're both random. They're both the same size and they have the same w, so their sparsity is like 10% — and I did 10% because I want to show you what happens when you compare them. So there are two comparisons you can do with SDRs that your brain is doing all the time, in different ways. One is an overlap.
Okay, so basically, when you take an overlap, I'll take the one on the left over there and put it over top of the one in the center — excuse me — and what you see on the right is the overlap. Those are the bits in the array — in the space — that were on in both of the things being compared. So that's called an overlap, and it tells you what is similar between the two representations.
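The overlap operation itself is tiny — with SDRs stored as sets of on-bit indices, it's just a set intersection:

```python
def overlap(a: set, b: set) -> int:
    """Overlap score: how many bits are on in both SDRs."""
    return len(a & b)

x = {3, 17, 42, 100, 512}
y = {3, 42, 99, 512, 2000}
score = overlap(x, y)  # bits 3, 42, and 512 are on in both, so the score is 3
```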
You have to remember these SDRs — these sparse distributed representations — have meaning: the bits mean something. You might not know what they mean, but they mean something. If you point at any particular neuron in your brain, you don't know what it's doing — but it's doing something; it's playing a role in computations, most likely, in some way or another. And if you take it out, it isn't going to noticeably affect the computation of the whole population.

It'll be a super-minor degradation of the system. So, because everything is sparse, and because of the way these comparisons are done, they're also inherently tolerant to faults. If any of these neurons, for whatever reason, decided to die, or just didn't happen to fire that particular time, this comparison — this overlap — is still going to be super similar to what it was before. Okay, so overlap is one of the important things to think about.
It's sort of like having a bucket and filling it up with things — filling it up with features. And what you can do with this — let me just show you the comparison real quick. This just kind of shows you: we typically use an overlap score to identify how close two representations are. Yeah, so Mark says the bits mean something — right. That's a good point, Mark: in computers, too.

If you're looking at, say, the ASCII code for the number five — the number five, excuse me, not the letter V — that resolves down to an array of bits; ASCII does, essentially. And if you changed one of those bits, it would change the five to something random: it might be an asterisk, or it might be a bracket, or it might be a seven, or it might be an 'A' — I don't know — but it would not be anywhere near a five. You can't make any assumptions about similarity there.
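You can check this yourself in a couple of lines — flipping single bits of ASCII '5' lands on unrelated characters:

```python
five = ord('5')  # 53, or 0b0110101 in ASCII

# Flip each of the 7 ASCII bits in turn and see what character falls out.
for bit in range(7):
    print(bit, repr(chr(five ^ (1 << bit))))
# Flipping bit 1 gives '7', bit 3 gives '=', bit 6 gives 'u' --
# a one-bit change lands anywhere; nothing stays "close to" five.
```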
With SDRs, if you change a few of the bits in the representation, you're essentially guaranteed that it'll still be similar, because these representations are sort of cumulative. So that's a good point. He says: in SDRs, each bit stands for something — it'd be silly to have all the bits turned on, meaning everything at once. It'd just be a complete overload; it wouldn't mean anything — it would mean nothing. So let me show you a bit about matching. Is this the one I want to do? I haven't done this in a while.

Okay, thanks for the ASCII for '5' in binary — appreciate it. Flip one of those bits and tell me what you get; it'll be something completely different. Yeah. So there's a great video on this that I'll put into chat: Cortical.io's semantic folding. I really like this. I misspelled that, I think — semantic folding. Here it is.
I'm not gonna watch through it right now, but let me pause it and find — there are a couple of great scenes in this video where they're talking about SDRs. So here it is. If an SDR were representing "cat" — this is a great example — you might have a bit that means pointy ear, or tail, or whatever.

But the whole meaning of "cat" is distributed across the whole representation. So if you have one for dog and one for cat, they might not have the same bits defining their ears, but they might have other bits saying that they're furry, and those overlap. Anyway, let me put this link into chat and you can watch it at your leisure — there you go. So what I want to show here is this idea of matching with theta, and this is a super important operation in an HTM, because you want to compare representations; you want to...
...decide which cells might want to become predictive, etcetera. Hey — thanks for coming along. We're talking about neuroscience stuff; I'm actually talking about sparsity in the brain right now: how very few of your brain cells — your pyramidal neurons, in your neocortex specifically — are on at any one point in time, and the kinds of attributes you get from that sparsity when you're trying to do computation.

You know — cognitive computations and stuff. So I'm going to show you, first off, with the noise turned down to nothing: I've got an original SDR here on the left, and I guess I will add a little bit of noise — we'll add 20 percent noise. So what I did here was keep the sparsity the same, but I took 20 percent of the bits and, like, flipped them, or something like that. And then when I compared them, I get an overlap score. So out of the 40 bits...
...32 of them were the same between these two, because that 20% noise messed with 8 of those bits. So we can use this idea of theta, meaning: how many overlapping bits do I need before I will consider this comparison to be a match? Typically, when we're talking about sparse distributed representations and we talk about theta, we're talking about the number of bits required in the overlap score for a match. So the more noise you add...

So in this case I'll say: okay, my theta is 20 — half of the bits in the original SDR on the left have got to be in the one you're showing me before I will say that's a match. Right now it says it's a match, because 32 of them are good, and my noise is currently at 20%.

As I add noise, you can see the overlap score going down, down, down — and there we go: at some point it stopped matching. So at 60% noise, or somewhere around the 55-ish range, our matching system broke. But that's pretty darn good — that's a whole lot of noise, isn't it? So you can have some pretty resilient pattern matching using this type of sparse distributed representation and doing comparisons with them, especially when you control the threshold of what you consider a match and what isn't a match.
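Here's a sketch of that noise experiment, assuming "noise" means moving a fraction of the on bits to random off positions while keeping the sparsity fixed:

```python
import random

def add_noise(sdr: set, n: int, pct: float) -> set:
    """Move pct of the on bits to random off positions (sparsity unchanged)."""
    k = int(len(sdr) * pct)
    removed = set(random.sample(sorted(sdr), k))
    added = set(random.sample(sorted(set(range(n)) - sdr), k))
    return (sdr - removed) | added

def matches(a: set, b: set, theta: int) -> bool:
    """A match means the overlap score reaches the theta threshold."""
    return len(a & b) >= theta

n, w, theta = 2048, 40, 20
original = set(random.sample(range(n), w))
noisy = add_noise(original, n, 0.20)      # 20% noise moves 8 of the 40 bits,
result = matches(original, noisy, theta)  # so the overlap is 32: still a match
```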
I don't remember what this — oh, this is the chance of a false positive. So if I make my theta 25, and I say okay, 25 out of the 40 have to match, this just shows me the chance of a false positive — it's really, really small; super-astronomically small. And the proof is that this is happening in your brain all the time: you barely ever think "oh, there's a rabbit" when there isn't one when you look. But every once in a while, you say, "what was that over there?" — and you look, and there's nothing there, or it was just a blanket folded up. Your brain pattern-matched something; it lit up some sparse activations in your head, and you thought, "oh, that matches one time I saw a cat that looked like that" — but then you're looking and you're like, "that's not a cat, it's a blanket."
Those are the types of false positives I'm talking about, and generally, when they do happen, it's because the thing you're observing is close to what you thought it was — what was the mismatch, right? That blanket might have actually been folded in a way that resembled a cat. So on failure, when you're looking at these SDR systems, they pretty much generalize, which is really powerful: when comparisons between SDRs fail to match, or when they match improperly, it's sort of a more general error.

Yeah, I would say maybe you wouldn't make that pattern recognition — but from what I know about cats, anywhere you might find a blanket, there could be a cat there too. You never know. Okay, let me go to the next one, which is overlap sets. Is this too deep for you guys? I mean, I'm basically going over HTM School stuff — I don't know if I showed all of these in the videos. It's a little bit too much, but not too much? Okay.
This is great, because I haven't looked at this in a long time, so now I have to think. So, I've talked about overlap sets: if you have an SDR — which here I'm calling x, in this case — it has these properties: there are 600 total bits in it, and we call w the number of on bits. You can ask questions now, Falco — I don't mind taking them any time. Also, I think I have the Discord chat channel open, yeah?

So if you want to voice chat, you can come in and voice chat in the Discord channel; the link for an invitation to Discord is on my Twitch page. Okay, you'll wait for later — that's fine too; I don't mind either way, I'm happy to do it either way. Okay, so in this situation I've got 600 bits. And — someone who watched a live coding session is asking for an estimate... that's a whole other topic.
Okay, I'll tell you right now: NuPIC is currently in maintenance mode. It has been in maintenance mode for well over a year now, as we focused completely on research and not on building out software. We decided we are not going to put the effort in to upgrade NuPIC to Python 3 — Numenta is not going to do it.

I'm going to start deferring people to the community fork of NuPIC, which has an initiative to work towards Python 3. I'm going to make an announcement on the forum very soon; I just wanted to make sure I have all my ducks in a row. But since you asked: we're not going to be doing any of that stuff that I talked about, for the most part. Anyway, I'll talk about that later. I'm gonna keep talking about SDR stuff right now, because what you have to realize about Numenta, my company, and what we're doing is...
...the theory is the most important thing. There are probably 10 or 15 different implementations of HTM algorithms out there, open source on GitHub right now, that I could find you. You can look them up on HTM Forum; you can go to numenta.org under "code" — there are a few there — and you'll find them pretty quickly when you go to the forum; there's a whole thread on all the different implementations. There's Python and Pony and Rust — I haven't seen a Haskell one yet — but there's a ton of different implementations.

It's not about the code; it's not about NuPIC specifically. I've always said — I'm gonna be blunt here — I've always said, since I first started this community in 2013, that I didn't think NuPIC would be the HTM implementation that would take the world by storm. NuPIC's role, essentially, is more that of a reference implementation, so the next generation of HTM engineers and implementers can build something that's really more relevant, I think. And our job at Numenta, I think, is still to continue to focus on the theory.
All of our theory builds on all that core stuff in NuPIC that I'm talking about right now. You know, I haven't even talked about the HTM algorithms yet; we've only been talking about sparsity. The next thing would be to talk about spatial pooling and the input space: how do you get sensory input in — how can you transform it into a place where you can control the entropy of the neuron population? But I don't know if — I don't think...

...that's in today's topics. Well, maybe — we'll see. Okay, anyway, back to the screen over here. So this is a small example: six hundred bits, forty on. I can change them around, but I'll just leave it — I just refreshed — at forty on and that sparsity. And the overlap set is, like, the...
...set you get given one random SDR with these parameters: every SDR that has the same parameters and has exactly 40 bits of overlap with it. That makes sense, right? In this case the overlap set is one, because there is only one other SDR — one other representation — that could possibly have those bits in those positions. I think I've got — hold on there.

Yeah, the two pictures are basically exactly the same. I like this as a bigger example. Now we're gonna start changing how many overlapping bits — the theta, essentially; we're calling it b in this example. So now we can expand the overlap set by changing the theta and saying: okay, now we only need 30 bits of overlap with x. So now the overlap set expands hugely. So you've got to think about matching: I've got an SDR, and I want to know —

I want to match it against other SDRs that I'm seeing, because an SDR represents features in a space, and if I get another SDR, it's also representing features in a space. How close are those features to the features that I have? Is this the same set of features? Is it something relevant? So we can do this with this bit of math, and we can change how fuzzily we match this SDR against the things we're comparing it to by changing this theta value.
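The overlap-set count on screen can be reproduced with the same kind of counting — choose which b of x's on bits to share, then place the remaining on bits among the off positions (assuming the other SDR has the same w, which is what the demo uses):

```python
from math import comb

def overlap_set_size(n: int, w: int, b: int) -> int:
    """Number of SDRs (n bits, w on) overlapping a fixed SDR x in exactly
    b bits: pick b of x's w on bits, then w-b bits from the n-w off bits."""
    return comb(w, b) * comb(n - w, w - b)

exact = overlap_set_size(600, 40, 40)  # all 40 bits shared: only 1 such SDR
loose = overlap_set_size(600, 40, 30)  # at b=30 the set expands hugely
```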
These are all kind of old — I haven't played around with these in literally years. So, what does an overlap mean? An overlap only makes sense when you're thinking of it with respect to the SDRs you're comparing: an overlap is just, basically, what's similar between two SDRs. Well, you can compare overlaps — you can say "this is more similar to this input than that input" by comparing the overlap scores, I guess. "Red apple and red herring", though — now that's a different thing.

Cortical.io has a great demonstration of how you can take a word — what was it they used? They said Jaguar — no, "apple" and "computer"; or was it "computer" plus "apple" equals "jaguar"? Something like that. So if you take the term "apple" and you give it to Cortical.io, they'll give you back an SDR representation for that word — they'll give you, like, a bitmap — and it will have semantic meaning; the bits in it will have semantic meaning. That video that I showed earlier about semantic folding...
...tells you all about how they create these. Okay, so you could give it the word "apple", and if you look at what it gives you, it's gonna be a bit overloaded, because "apple" can mean more than one thing: an apple could be a fruit, Apple could be the Beatles' record label, and Apple could be a computer company. Okay — so, three major things that "apple" could mean.

So it turns out, if you take the SDR from the Cortical.io service for "apple", and you also ask it for the SDR for "computer", and then you subtract all of the bits of "computer" from "apple", you'll get a representation of apple that no longer includes computers. It will just be the apple representation for, you know, the Beatles' record company or the fruit. So — what I'm trying to say is, those overlaps would be basically all of the semantics that are shared between those terms.
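You can mimic that word arithmetic with plain set operations. The bit indices here are made up purely for illustration — real Cortical.io SDRs are large bitmaps learned from text:

```python
# Hypothetical toy word SDRs as sets of on-bit indices.
apple    = {1, 2, 3, 10, 11, 20, 21}  # fruit + record label + computer senses
computer = {20, 21, 22, 30, 31}

# Subtracting the computer bits leaves apple-the-fruit / the record label.
apple_only = apple - computer          # {1, 2, 3, 10, 11}

# Intersecting several animal terms keeps only their shared semantics.
dog, badger, bear = {5, 6, 40, 41}, {5, 6, 42, 43}, {5, 6, 44, 45}
shared = dog & badger & bear           # {5, 6} -- e.g. "furry", "mammal"
```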
So you could take, perhaps, three different terms that you think are related, ask what the overlap between them is, and that will give you the semantics — the things that they all have in common. That could be useful, right? Say you took a dog SDR, a badger SDR, and a bear SDR, and you just computed their overlaps — what would those bits represent? Well, probably things like "furry" and "mammal"...

...things that walk on four feet, things that are alive — you know, all the attributes or semantics of those objects that are true for all of them. Okay, cool — back to what I was doing. Okay, so I want to talk here about subsampling, and I'm trying to remember what this even meant, because, like I said, it's been such a long time since I've messed with this stuff. It looks like I've got three examples preloaded. Okay, so I have an original SDR, which is really sparse.
It's like 1% — and then I have a subsample, all right. So what I've done here is I've taken an SDR and I've only sampled half of the bits. So here's another thing — I remember what this is about now — here's another thing you can do with SDRs that's tricky, that your brain can do too, and probably does: you don't have to store all the on bits. This is amazing, but you don't have to store all the on bits.

I won't say you'll never, ever get a false positive with this — you'll get a false positive really, really, really rarely. But the thing is, you don't have to store all the bits: you can get away with storing just half of the bits that you see, and you can still use that for comparison. That's how resilient this memory system is — it uses the sparse distribution of memory; that's how fault-tolerant it is.
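A sketch of the subsampling trick — store only half the on bits, and matching against the original still works, because every stored bit is guaranteed to hit:

```python
import random

def subsample(sdr: set, fraction: float = 0.5) -> set:
    """Keep only a random fraction of the on bits."""
    k = int(len(sdr) * fraction)
    return set(random.sample(sorted(sdr), k))

n, w, theta = 2048, 40, 15
original = set(random.sample(range(n), w))
stored = subsample(original)  # only 20 of the 40 bits kept

# Every stored bit is in the original, so the overlap is the full 20,
# comfortably over a theta of 15.
still_matches = len(stored & original) >= theta
```

The rare false positive here would be an unrelated SDR that happens to hit those 20 stored bits — possible, but vanishingly unlikely in a 2048-bit space.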
This is tricky to explain, though, because I want to show you a way you can accumulate a whole bunch of representations and then tell, given a new one, whether you've seen it before. That's sort of what this is supposed to do. So here's an SDR that goes all the way across the screen, and I'm just going to add a bunch of them.

You can sort of imagine that top one dropping down every time I hit this button — drop it down, and I'm storing it, right? So I'm accumulating this storage. You can sort of think of this as my memory storage; I'm storing all these — whoops, I pressed the mini button. That's okay, so...

...given that — let's say I've forgotten that I've seen all these representations — I can take this one here on top, and I could say: okay, don't add it, but I want you to match it; just tell me if you've seen it before. Okay, so — match it. Oh, I guess I'm going to actually pick one out of it; this is how I did this. I'm gonna pick one out of the stack, and I'm gonna say...
...and it says yes — here it is. It's got four bits in common, and it only has four bits total, so that's the one — it's a 100% match. But there are also these others that have one bit in common with what I saw. You can sort of see that on the bottom here, on the right. Let me make it a little smaller — this works better when it's... you know what, let's go big. We're gonna go big!

All right, when we go big: I just made these representations huge, and I'm not rendering all of them across the screen — I'm only rendering 12%. So imagine that these representations go way off to the right. But I'm only going to show you this part; it doesn't matter — I can just show you part of it and you'll get the gist of it.
So I've basically done the same thing — I'm gonna even add more, okay — and then I'm saying: okay, let's match one; let's pick one out of the stack and try to match it to the rest of the stack. And here we go: there are 40 bits on, and it's found this one where there are 30 bits of overlap, and my matching theta is 32, which I can change — and that sort of changes the calculation, right?

So this basically gives me a stack rank of all the SDRs I've seen — it just ranks them all in order of how similar they are to this one that I've got — and if any of them are over some threshold, I'm gonna say: yes, I've seen this before; it's probably this one. In this case, I made the threshold 30, I think, so we're gonna say yeah. And now let's add noise — this one actually has 10 bits of noise.
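That stack-rank matching can be sketched the same way — keep a list of seen SDRs, sort by overlap with the query, and report anything over theta:

```python
import random

def stack_rank(query: set, memory: list, theta: int) -> list:
    """Rank stored SDRs by overlap with the query, keeping those >= theta."""
    scored = sorted(((len(s & query), s) for s in memory), key=lambda t: -t[0])
    return [(score, s) for score, s in scored if score >= theta]

n, w = 2048, 40
memory = [set(random.sample(range(n), w)) for _ in range(50)]
query = memory[10]  # pick one back out of the stack
hits = stack_rank(query, memory, theta=32)
# The stored copy overlaps itself in all 40 bits, so it tops the ranking.
```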
Well, just think of cell death — I mean, it's not like the brain is actively doing subsampling. The point is that if your neurons don't work properly, you'll still get matches. So it's not like it's actively saying "I must perform subsampling"; it's just the way that it works. "Looks a lot like a Pareto distribution" — that sounds familiar, although I'm not a math guy, or not...

...a statistics guy, so I'm looking that up real quick. Oh yeah, yeah, yeah — you know what, you'll see me referring to this when we start talking about k-winners and sparsity in spatial pooling. We do this in spatial pooling: we'll stack-rank all of the SDRs. Okay, this is something we do all the time. We will take —
Let's see — so, all the mini-columns... I haven't talked about spatial pooling yet, but all the mini-columns in our spatial pooling operation have different receptive fields in the input space, and given an activation in the receptive field, that lights up some of the mini-columns, and we decide which ones become active based upon their overlap — which is this number right here; we would say how much overlap they have with the input. But I'll have to talk about that later, because it goes — we're...
No, no, we haven't — we haven't... The probability of a false positive here, given this union — and again, we're not even doing individual comparisons here. In the previous screen I showed you, you might have noticed there was just a bit of a lag, a second or so, when I told it to match, because it was doing a manual comparison against all the different SDRs — a binary comparison against every single one of them.

In this case, all I'm doing is comparing this one SDR to this union SDR and checking if it's over whatever threshold I decide. In this case, this overlap is 14, and I'm telling it — well, it looks like I don't have it on the screen, but our chance of getting a false positive — it's not super, super low, but it's really low; it is pretty low. But the more you add to it — watch that number — it goes higher, because the union is getting overpopulated.
A
So here I've got 66 SDRs in my union, and it's so overpopulated that my probability of a false positive has crept out of the scientific-notation range. So maybe we're getting a little bit too overloaded at this point, but it just shows you the power of being able to represent a whole bunch of sparse things in a dense way and still do valid comparisons with new sparse things.
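The union trick can be sketched as follows. This is a toy version with assumed sizes; the exact false-positive probabilities are worked out in Numenta's SDR papers:

```python
import random

random.seed(1)

N, W, theta = 2048, 40, 20  # SDR size, active bits, match threshold (assumed)

def random_sdr():
    return set(random.sample(range(N), W))

stored = [random_sdr() for _ in range(20)]
union = set().union(*stored)  # OR all 20 SDRs into one set of "on" bits

def union_match(sdr):
    # An SDR matches the union if enough of its bits fall on union bits.
    return len(sdr & union) >= theta

# Every stored SDR is fully contained in the union, so all of them match.
all_stored_match = all(union_match(s) for s in stored)

# A fresh random SDR only overlaps the union by chance. With roughly a
# third of the bits on, a 20-of-40 overlap is unlikely, but the
# false-positive risk climbs as the union gets overpopulated.
fill_ratio = len(union) / N
```

With 66 SDRs, as in the on-screen demo, the union would be roughly 70% full, and a random SDR matching by chance stops being negligible; that's the overload being described above.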
A
So I think this is really powerful, and your brain takes advantage of it in a lot of ways, I think. I don't know. Okay, so I've done some of the basics. I could go on to encoding, if you guys want me to, or we could just chat, if you have anything else you want to talk about, because, I mean, this is like the basics.
A
Let me give you some resources before I do anything else, and if you guys have anything else you want to talk about, throw it into chat. Let's see, I'm going to go to numenta.com/papers, and over here I think there's a topical... yeah, so over here, if you just go to Sparse Distributed Representations... Go ahead and ask, Falco. All right, you can get into the Discord voice chat too, if anybody wants. And ask there... Falco, you're in the wrong place.
B
C
A
C
A
C
A
This is from 0 to 100 or something, and then I haven't really bounded it at the end. The only reason these bits are close to each other is that the encoding logic that decided to encode this number this way put them close to each other. It doesn't really mean anything as far as the brain goes. You can do this exact same thing, and in fact it's almost better to do it, in a more randomly distributed fashion. So let me show you this.
A
This representation, as I move... and I'm going to turn the comparison off so you can see. This is basically the exact same function; it encodes numbers, scalar values, into a bit array. But as I move from 509 to 510, you can see the bits barely changing, right? The semantics are not changing much, but they're slowly migrating through. You can't tell where they're going; they're not going specifically from one place to another, they're just moving, you know. So by the time I get to 600, all the bits have changed, right?
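That sliding-window behavior can be sketched with a minimal scalar encoder. The parameters here are made up for illustration; this is the idea, not the actual NuPIC encoder:

```python
def encode_scalar(value, min_val=0, max_val=1000, n_bits=1000, w=21):
    """Encode a scalar as a contiguous block of w active bits out of n_bits."""
    span = n_bits - w
    start = round((value - min_val) / (max_val - min_val) * span)
    return set(range(start, start + w))

a, b = encode_scalar(509), encode_scalar(510)
near_overlap = len(a & b)                   # neighbors share almost every bit
far_overlap = len(a & encode_scalar(600))   # distant values share none
```

Moving the value one step shifts the block by about one bit, so the semantics change gradually, and by 600 the representation has moved entirely, just as in the demo.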
A
I think you have to have a low resolution for this. You can make it have a lot of overlap, so that when you change the value, not all of the bits change. So you have a lot of control over how you represent whatever you're trying to represent in the space, in the encoding logic. So encoding is really important to get right.
C
A
Yeah, I mean, whatever is computing the SDRs sort of determines how the data is represented, and it's hard to think about this. You have to think about data coming into the system. If you're talking about the brain, we're talking about the neocortex, and you're talking about input to neocortical layers, and those are coming from other neurons. They're probably coming from sensory neurons, or from neurons lower down in the hierarchy, or parallel in the hierarchy, or something. You can't make any assumptions about those neurons and what they represent, ever.
A
That's the tricky thing, I think. When we're talking about, you know, the canonical cortical algorithm in a cortical column, we can't assume what is being mapped, what's coming in. We can't assume anything about the encoders. The encoding problem is almost a completely different problem from the computation problem, which should take whatever the encoders are sending and understand it, and to do that, it has to generate motor commands that operate within the environment the encoding came from.
C
Well, then the octave should be the priority, because it's more harmonious with the other note than, for instance, a second. And by doing that, by building that into your encoder, you're actually cheating, and I think you're making a system that is not general. You're making a system that is specific to understanding music and not something else, while our brain is supposed to be very...
C
A
I mean, this is an interesting thing to think about; I've thought about this a lot too. But you have to think about the context of the notes, because, you're right, it's not enough to just encode a C. You have to know what came before it to know what that C means. You have to know the interval from the note before it to where it's at, because a C in music means different things depending on what came before it. There's a different meaning to it. Is it going up? Is it going down?
A
Is it a perfect fifth? Is it a seventh? Is it a sixth? Those all have different meanings in music theory; they invoke different feelings in your brain, right? But what you said was that these harmonic properties, you know, these intervals, exist in nature. They exist in sound. And a lot of what I've been trying to figure out in my head is: are the properties of music things that we invented up here in our brains, culturally?
A
Because if you think about the intervals, you're talking about, you know, different frequencies. As you go up the frequency range, it's a logarithmic sort of range. So the higher you get, the farther you jump, I think (or is it the other way?), when you move the same interval, if you're just talking about hertz, right? The wavelengths are moving faster and faster. So it's not necessarily just this frequency versus that frequency that defines an interval.
A
It depends upon the note itself and where it is in the frequency spectrum, to help you define where the next second or third or fourth or whatever interval is, whether it's farther up or closer, okay, whichever way it is. But it's not a linear sort of frequency progression, right? As far as I understand it, I think it's an exponential, or logarithmic, frequency progression as you go up the scale.
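That exponential progression is exactly how equal-tempered pitch works: each semitone multiplies the frequency by 2^(1/12), so the same interval covers a bigger hertz gap the higher up you start. A quick check:

```python
A4 = 440.0  # concert pitch in Hz

def semitones_up(freq, n):
    """Frequency n semitones above freq in 12-tone equal temperament."""
    return freq * 2 ** (n / 12)

# An octave (12 semitones) always doubles the frequency, but the gap in
# raw hertz depends on where you start:
low_gap = semitones_up(110.0, 12) - 110.0   # A2 -> A3 spans 110 Hz
high_gap = semitones_up(880.0, 12) - 880.0  # A5 -> A6 spans 880 Hz
```

So an interval is a frequency ratio, not a fixed hertz difference, which is why a linear frequency encoder would represent the "same" interval very differently in low and high octaves.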
A
So that, yeah, so the differences between the frequencies in the lower octaves are different from when the same interval happens in the higher octaves, and that should be encodable, right? Because you're encoding that; you can encode the frequency. I don't think it really matters; it's just that it's different, you know. It's not linear. The frequencies certainly go higher as they go up the scale, and I...
A
...imagine that the differences between the different frequency values get bigger, because, yeah, because of the longer wavelengths at lower frequencies. Anyway, it's a challenge. I don't know what the answer is, honestly; it's a challenge. But I think that there are musical properties that we can extract in...
A
...an encoder, the same way that our ear does it, or similar to how our ear does it, I think. I mean, our ear is doing some computation, because it's evolved to, you know, pair with our brain and help us survive. So you have to figure out how much you want to learn from the ear, how much you want to take from it. The cochlea is actually, you know, really complicated. Yeah, well, it's not just about whether the notes are objective or the timbre is subjective.
A
It's about the context of the note. It's about where you're at in the song. It's about what happened five minutes ago in the score. You know, it's about the temporal context. Music is all about temporal context. The same exact progression, the same exact interval, in the same key, in the same octave, could mean something different if it's played in this part of the song versus that part of the song. So it's all about temporal stuff.
A
So if you can figure out the spatial properties of encoding sound, then your brain should figure out the meaning using the temporal layout of those sounds through time. That's where we should be trying to apply meaning: in the understanding of the sounds as they progress through time. At least, that's how I think about it right now, which changes sometimes. But if you use that part of the brain to do something else, do you have a problem? How would it work for other meanings?
A
The part of the brain that's listening to music, or understanding music... because, I think, your whole experience... you experience things with all your senses sort of at once, and you might apply an emotional meaning to a song because it played at a certain time in your life that had emotional meaning to you, and that'll mean something. It'll always remind you of that day or something, every time you hear it.
A
Yeah, so I don't see a problem with cortical columns being able to... I mean, that's what sequence memory is. What I'm describing here is the spatiotemporal sequence memory we've talked about, happening inside cortical column layers: using the spatial pooler to inject, you know, consistent and entropic sparsity, and then a temporal memory layer that links the representations together through time. Each one of our cortical columns is doing that, I think; it works at all...
A
I remember what they were, but I would say: if all these bits represent A, and all of these bits represent C, then basically the encoder would just encode this block of bits or that block of bits or that block of bits. It was a very dumb encoder. I wasn't trying to encode real meaning at all; I was just trying to show the sequence memory algorithm organizing the transitions between these states, right? Yeah. Bit King is saying he transposes scales; relative pitch is important, absolute pitch is not as important. Yeah.
A
...that we can use it to store meaning, you know, as sound is played over time. That would be interesting, because I'd love to see that. Oh, you guys are going to love this, if you like music theory. Falco, did you know that I have a video with Charlie Gillingham of the Counting Crows? I hope you saw this, because if you're interested in music theory... I've got to find it now; it's in one of the hackathons. Music...
B
A
Here it is; I'll throw it into chat. Okay, cool. That was a really interesting discussion on music theory. He talked about consonance and dissonance and that sort of thing, and the more I've thought about it since then, the more I think that these are all spatial properties, and it's the brain, it's the cortex, it's the cortical column...
A
...that's really parsing out the temporal context within all the spatial things that are happening in music. And the encoder, you know, your ears, that's encoding the spatial aspects of sound... they might be doing a little bit of prediction and temporal-context stuff; I don't know, it's hard to say. But I think that most of the temporal memory part of music understanding and replay is really going on in the cortex. I play guitar, obviously, and it's hard for me...
A
...not to think about temporal memory when I'm playing guitar, and it makes so much sense when you're trying to memorize sequences, because the sounds go along with movements. As I'm moving my finger to a different place, I start to anticipate the sounds that are going to be made. And you're storing all these memories based on, like, your complete representation of reality, not just what you're hearing, but as you're moving and everything. Yeah, so you know what I'm talking about: it's a whole experience.
A
It's like a full-body experience, especially if you're a musician and you play music. The act of creating music is like a full-body experience, and it's much more enjoyable that way too, I think. Thoughts on synesthesia? Yeah, I think synesthesia is sort of... So, if anyone doesn't know what synesthesia is: there are some people who are born... I don't know if I'd...
A
...but they can hear a color, or they can... I think the most common way to think about synesthetes is that typically, when they think of a number or a letter, they always associate a color with it, or a sound, or a tone, or something, you know, that you regularly wouldn't associate with something visual. You know, when you think about the number five, some people always think yellow; that's just part of the way that they think. And there are other ways.
A
You know, when you hear something, you can see colors. I'm not synesthetic, so I don't really understand what it means, but someone in our offices said that they were synesthetic, which was interesting. But the thing is, you know, you've got these regions in your brain, where you might have an auditory region here and some other type of region here, and some of the connections between regions...
A
Here's the brain I used in that video. You guys know David Eagleman? He's like one of the most famous neuroscientists that I know, but he was nice enough to let me interview him, so we had a fun time. Professor at Stanford, director of the Center for Science and Law. Anyway, we talked about synesthesia; he had a better explanation of it than I did. Yeah.
A
That was my first interview with a neuroscientist. I was super nervous, because I cold-called him. Well, he'd been to our office before, actually; he's interested in our work. He wrote a recommendation (was it... what do you call it, a referral? whatever) for Jeff's... for our latest paper, you know, the Frameworks paper. Hey, thanks for the follow; nice to see you, Mike Moloch. What are you up to? What's going on? Hey, I promised you guys... it's been over an hour, and I promised you guys I was gonna...
A
I was going to donate some subscriptions, so let me do that; I think it's time, and then we'll do a raid. All right, so let me figure out how to donate some subs. Yeah, I know, look, check this out: I've got double-digit viewers, I've got 50 followers. I'm gonna go to the vid page and figure out how to do this. Gift a subscription... I'm going to do it to the community: we're gonna gift five subs to the community, and I think this will work.
A
C
A
Alright, so I just got... alright: Falco, Freeman, revolta, bit King, rxj, you guys all got subscriptions to the channel. I just appreciate you guys all watching. And that means, like, for the next month... (there was a subscription; I was a noob; that's new too, right? You like that?) For the next month, you guys will see no ads on this channel, and I think you'll get custom emotes and stuff. I'm still figuring out all that jazz. So thanks for watching; it's just for fun. Look!
B
A
...subscriber, and I don't... there's something else that I'll figure out, something else that I can do: I can give VIPs in chat somehow, and there are custom emotes, but I have to decide what my custom emotes are gonna be and create them, and I haven't done any of that stuff yet. So, in the meantime, here's one. Here's...
A
This is one of the beta cat emotes, yeah. If there's a cat in the chat... the cat in the chat, okay. So let's find someone to raid off to now. Yeah, definitely, I'm gonna make a brain one for sure; I mean, you saw the brains. Let's see, who am I gonna... I'm gonna go see who I'm following that we could go raid. He did raid me once already.
A
He's not online right now. Usually he plays Magic, but lately he's been doing... in this last video of his, he was doing anomaly detection; it was spatial anomaly detection with PI data. He said he was going to come back and do it, but I don't see him online right now, so I don't think we can raid him. So we'd have to find somebody else, but honestly, nobody's doing machine learning.
A
It's not bad to start raiding some people that have a lot of viewers, because, you know, that's how you get viewers. Raid that guy. You guys ready? Here we go. Three viewers are ready... four viewers are ready! You guys ready to raid? This will be fun. Hey, thanks for watching! I'm gonna do this every Monday. I'll be back live-streaming my work tomorrow morning, so if you want to check in with me tomorrow morning, I'll be back online. And we are ready to go. Take care, everyone. All right.