So, just while people are filing in, I like to do this little poll. Raise your hand if you think that humans are intelligent. Come on, come on. Everybody thinks humans are intelligent. Okay, now just bear with me. Raise your hand if you think chimpanzees are intelligent. This is a central theme of my talk, so this is relevant. Dogs? Do you think dogs are intelligent? Okay, a majority for both. Cats? What about a mouse, who thinks a mouse is intelligent? Okay, how about a parrot? More hands. How about a sparrow? Does it matter, sparrow versus parrot? Yeah? Okay, how about a lizard, a chameleon, let's say? We still have some hands going up. How about a frog? A praying mantis? Who thinks a praying mantis is intelligent? We still have a few hands going up. Okay, how about a paramecium?
My name is Matt Taylor. I work for Numenta, which is a startup that's been around for about 12 years in Redwood City; I've worked there about five years. I'm the open source community manager, and I've also been an engineer there. I'm going to talk to you a lot about the brain, about how intelligence works in the brain, and about what we've done to try and replicate that in software.
My agenda is to talk about intelligence, to define weak versus strong intelligence, and then dive into some minor brain science. We'll talk about different structures in the brain like neurons, layers, and columns, and then I will introduce the theories of intelligence that Numenta has worked on for years and years. We call it hierarchical temporal memory (HTM). I'll introduce a data format that we call sparse distributed representations; that's what SDRs are. You're not missing anything, anybody, so please come on in; I'm just giving you the agenda. I'll talk about spatial pooling and temporal memory, which are intelligence algorithms as we understand they work in the brain, and then I'll talk some about 3D object modeling and recognition, and our discovery of a sensorimotor circuit in the brain, and try and explain to you how that works.

So our mission at Numenta... am I good, or should I be talking right now? Am I good to go? Okay.
Our mission at Numenta is to understand how intelligence works, and we have chosen the neocortex as our target, because everybody's hand was raised when I asked if humans were intelligent, and that's sort of what we're going for. Everybody agrees humans are intelligent, and that intelligence comes from the newest part of our brain, which is the neocortex. Then we want to create systems based upon those principles, once we've done that research and we think we understand how it works. This was back in 2015.
Somebody always has a different definition of intelligence than I do, which is typical, and I don't like to poke at deep learning or ANNs, but I do want to make a comparison between humans and the type of intelligent systems that we have today. A child that has never seen a dog before, for whatever reason, can be shown a dog about five times and will then recognize dogs indefinitely, for the rest of his life, in any orientation and in any context. Whereas our state-of-the-art deep learning systems need to train over lots and lots and lots of images. Now, there are some caveats to this, and there are some ways that you can train things a little bit faster once you have a system that's already trained, but it does expose something that seems wrong.
It seems this isn't really intelligent, so I call this weak AI, because I'm comparing it to strong AI. You might, from science fiction, have an idea of what strong AI is. This is what Wikipedia says it requires: the ability to reason, strategize, and solve puzzles; make judgments under uncertainty; represent knowledge; learn; express common sense; plan; and communicate in a common language. Now, all of those things may be possible with different weak AI technologies that are perhaps focused on those specific endeavors.
However, that last bullet is a huge one: integrating all of those skills towards common goals. That's taking your expert knowledge in one little niche area and being able to apply it to a lot of other places, and that's something that we just don't have today. I posit that weak AI in its current form, in artificial neural networks, will not produce something that we would refer to as strong AI, or artificial general intelligence, or just intelligence in general. I don't like the term AI. I don't like "artificial intelligence"; there's nothing artificial about intelligence.
It's either intelligent or it's not. So what's missing, if we're not going to get to strong AI from what we have today? I'm going to show two things that I think are missing. One is realistic models of neurons. The ANN point neuron that's at the core of pretty much every system you work with today in deep learning is a very simple neuron.
It has very few synapses, it doesn't create new synapses over time, and it's just a basic sum of its weighted inputs; it learns by modifying those weights. Well, the pyramidal neuron in your brain does a whole lot more than that, and I'm going to go into some of what it does. The other thing that I think is missing is movement. Who can name something that's intelligent that doesn't move?
Nobody ever does, because you can't. There's nothing that is intelligent on this earth that doesn't move. It feels like it's baked in. An entity that needs to understand the world has to understand the boundary between itself and the world, and in order to test that boundary, you must interact with the world. You have to incorporate movement and behavior into the core of the system. When babies are born, what are they doing? They're moving all over randomly; they're learning and learning and learning like crazy.
All of that random movement is their neurons firing and learning: when I do this, I feel that, and I hear that. Eventually, years and years later, they're playing baseball and all of that. This movement, we think, is totally essential to intelligence. I don't think something that just sits and does nothing while you feed it data can really be intelligent. I'm going to talk more about movement later, but first we have to talk about brains. So this is your brain.
The neocortex is the wrinkly part; I'm not even going to talk about the old brain. If you spread it out, it's the size and proportion of a dinner napkin. That's your neocortex if you iron it out, and it's homogeneous throughout the entire structure. It looks the same everywhere: if you cut into it here versus here, you're going to find the same cellular structure. It's based on the cortical column, and this is the structure that's repeated over and over and over, about 200,000 times.
Each one is about the size of a strand of spaghetti, and this cortical column is the key to intelligence in the cortex, because if we can figure out what this little processing unit is doing, we can figure out what makes us intelligent in our brains. We've known for over a hundred years that that sheet of cortex has many different layers. This is Cajal; he was a neuroscientist back in the late 1800s.
He drew lots of beautiful pictures of brains, including this one, which shows the different layers of cortex as he saw them in his microscope at the time. So we've always known these layers exist, and we've been trying to figure out what they do for years and years. It's a more recent discovery that there are also columnar structures, like I just showed in that animation. So there are columns and there are layers; each column has layers. In addition, those layers contain neurons, and those are really the atomic computing unit in your brain.
So I'm going to talk quite a bit now about the pyramidal neuron. This is sort of what the pyramidal neuron looks like: it has these dendrites that span out to the sides, and another apical dendrite that goes upwards, and there are different types of dendrites on this neuron. I'm going to talk about proximal and distal dendrites, the difference between them, and how that makes the cell behave. I'm not even going to talk about the apical dendrite at this point; I just don't have time. Proximal dendrites cause the cell to fire.
That's what happens when they are stimulated. Each one of the little spindles on these dendrites is a synapse where another neuron has made a connection to this neuron. Proximal means close to the cell body: if you saw earlier, those are the ones close in; the red ones, far out, were distal. So the neuron has these proximal connections, and it's watching all the different neurons connected to it, seeing whether each is firing or not.
If we get some activity, and watch closely here, we get a little activity here and there and there, and it increases the voltage of the cell locally. All of these neurons try to maintain a consistent voltage, a little bit less than their surroundings. If they get stimulation through the synapses, that increases the voltage through some chemical processes involving ion pumps.
I know, I am NOT a neuroscientist; I'm not going to get into the terms. It increases the voltage locally, but not enough to do much if the inputs just come sort of here and there and everywhere, because the cell wants to dissipate that voltage. It's like a leaky pipe: it wants to equalize and be back to a normal voltage. However, if you get a lot of proximal stimulation through a lot of these synapses all at the same time, so that they all increase the voltage at once, then that can affect the voltage at the cell body.
If the cell body voltage increases enough to breach a threshold, you get what's called an action potential, and that is a chain reaction that fires all the way down that cell's axon, which indicates to any cell connected to it that it has fired. This is neuroscience 101: proximal stimulation, close to the cell, causes the cell to fire. The interesting thing here is: what about all these distal dendrites? What about all these distal connections? Well, for years...
Neuroscientists didn't know why they even existed, because they don't cause the cell to fire. No matter how much activity they get, they're too far away from the cell body to ever cause an action potential. So what do they do? Well, again, it's similar to the proximal ones. If you get some activity scattered across different parts of the dendritic tree, it's not enough to increase the voltage anywhere to make anything happen.
But if you get a localized burst of activity on one of these dendrites, on one of these segments, that causes something very similar to an action potential. We call this a dendritic spike (I hit the wrong button; there we go), a dendritic spike. If you want to look up the neuroscience, it's called an NMDA spike. This was discovered, or reported, by Matthew Larkum, a neuroscientist, I believe in Europe.
This distal stimulation sends that voltage pulse down the dendrite to the cell body, but it's still not enough to cause an action potential, so it still doesn't cause the cell to fire. What it does do is raise the voltage at the cell body just enough that it affects the cell's behavior for a certain period of time. We say it depolarizes the cell. It makes the cell predictive: the cell thinks it's going to fire soon, so it's primed to fire.
So when it does get that next proximal input, it's going to fire fast, because it's almost at the threshold where it fires. This is core to our theory of intelligence, HTM theory: distal stimulation causes the cell to go into a predictive state. Modeling this predictive state is really important, because what is your brain doing all the time? Your brain is a prediction engine. All your brain is doing is constantly predicting what happens next.
So these neurons have different integration zones: connections in one area are treated differently than connections in another area. And it goes further: because the neurons are all oriented in the same way, and a layer contains about 10,000 neurons, those integration zones apply to the layer as well. This allows us to treat a layer of neurons much like a compute unit that has different inputs. The proximal input is like the primary driver signal.
Proximal input causes the cell to fire, right, but the distal input and the apical input can cause that cell to fire a little differently depending on the situation. The distal information is contextual information: something about the state of the cell, or of the world, at the time that proximal input was received. So these are more like modulatory signals.
These are the ones that cause the cell to go into a predictive state and think it's going to fire next. I'm going to show you some software visualizations of the system running. I should have said this up front, but all of this theory is implemented in code, and the visualization I want to show you is code that I'm running and visualizing. When I show you something like this, it's a layer. It doesn't look like a cylinder, but it's a layer, and the different colors of each little block are different states of a pyramidal neuron.
So each cube is a pyramidal neuron. Okay: sparse distributed representations. It's always tricky for me to try and explain this. Imagine a fiber optic cable, and imagine your spinal cord; this is the best way, I think, to do it. Your brain receives information in the form of fiber-optic-like transmissions, for the most part. Okay, this is an analogy, but that space is constantly transmitting information, lots and lots and lots of times a second, and those are nerve axons; those are, excuse me, neurons that are firing. That's information.
It's coming up your spinal cord to your brain. At any point in time, your brain's neurons are about two percent active, so that's a number we usually shoot for when we're trying to represent something in this data format. This is going to work better if I show you an example, so let me show you something about sparse distributed representations. Here's an example of a small sparse distributed representation. All it is is a binary array.
The interesting thing about this is that I've got it set up so there are 256 boxes, five of which are on right now. There are about 8.8 billion ways to fit those five bits in this space, so it's a very big space to put data in. We typically use about 40 bits on, and at the capacity we typically run with, the number of possible representations is more than there are atoms in the known universe. So the point of this is that the data structure, the data medium, the communications medium that your brain uses is never going to run out of space.
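The counting above is just binomial coefficients, and you can check the talk's numbers directly. A quick sketch (the 2048-bit, 40-on configuration is my assumption of a typical HTM size, not something stated here):

```python
from math import comb

# Number of distinct SDRs with w active bits out of n total bits is C(n, w).
print(comb(256, 5))        # 8809549056: the "8.8 billion" for 256 bits, 5 on

# An assumed typical configuration: 2048 bits with 40 on.
capacity = comb(2048, 40)
print(capacity > 10**80)   # True: more SDRs than the ~1e80 atoms in the known universe
```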
It's never going to run out of ways to represent information, ever, so we're going to take advantage of that when we build our systems. A couple more interesting graphics. Like I said, each one of these bits has some semantic meaning. Say you were trying to represent living things in this space, for whatever reason. The different bits might actually point to characteristics of living things, and whether those characteristics exist on that living thing or not. This gives us some interesting things that we can do, because we can do comparisons.
We can compare an SDR against what we've seen in the past and get an overlap score, which is a number that indicates how much they overlap, and you can tell semantically, specifically, what things they overlap on. Unions are also really important, and a union is just a binary OR: it represents the semantics of both of those SDRs. These are things that we use, and that your brain uses, all the time.
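The two operations just mentioned can be sketched with Python sets of active-bit indices (one common way to store a sparse binary array; the bit values here are made up):

```python
# Two hypothetical SDRs, stored as the indices of their ON bits.
a = {2, 7, 19, 40, 63}
b = {7, 19, 50, 63, 90}

overlap = len(a & b)   # overlap score: count of shared ON bits
union = a | b          # binary OR: semantics of both SDRs combined

print(overlap)         # 3 (bits 7, 19, and 63 are shared)
print(sorted(union))   # [2, 7, 19, 40, 50, 63, 90]
```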
With these comparisons, there are some really interesting things you can do. We have a partner called Cortical.io, and they have an API where, if you give it a word, they'll give you back a bitmap, an SDR, a representation of that word where each bit has semantic meaning. If you ask it for the SDR for "jaguar", well, that's an overloaded term: a jaguar could be a cat, it could be a car, and I think there used to be a gaming system called a Jaguar. So it's overloaded, but you can help define better what you mean, because you can do these calculations on SDRs. You could say: give me "jaguar", but also give me "animal", and I'm going to take all the "animal" bits out of the "jaguar" representation. What do I have left? Well, I'll send their API what I have left and ask for the most similar term, and it's "car".
Okay. So if SDRs are the information medium of the brain, how do we get data into that format? We have to get data into some binary format so we can process it the way the brain processes it. Biologically, our sensory organs are what do that. We're not in the business of replicating sensory organs; we're focused on the cortex, so all the encoders that we create are very simple. But they have the properties that we know are necessary for an encoding system, like: you have to encode similar inputs...
...similarly; they have to be semantically similar. Dissimilar inputs also have to be dissimilar in the SDRs that you create when you're encoding. Let me give you an example of a date/time encoder. This is a sense that you don't even have. This would be like having a clock embedded in your brain, so you just knew exactly what time it was, all the time.
So here's a date; this is today, and I'm trying to encode four different things about this timestamp. If I step to tomorrow, you can see the day-of-week bits change and the weekend bits change, so the SDR, the semantic representation, is being maintained as we move through time. The season bits change as I go through February, March, April. If I change the time, the time bits change, and it all goes into this encoding down here at the bottom. Let me go back to my calendar.
All we have to do is simply concatenate all of those representations together and feed that to the algorithm as its input space, and then it knows the semantics of a date/time, at least the ones that we've defined here. So this is a simple example. You can also imagine a scalar encoder. I'll just show you one: here's a scalar encoder, and this is just, you know, 1 to 100, periodically moving through the space. There are a lot of different ways you can encode information; we do it very simply.
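A minimal scalar encoder in the spirit of what's on screen might look like this (the parameters and layout are illustrative, not NuPIC's exact implementation): a value in a range becomes a binary array with a contiguous run of ON bits whose position slides with the value, so nearby values share bits.

```python
def encode_scalar(value, lo=1, hi=100, n=64, w=8):
    # Clamp into range, then map the value to a start position so that
    # close values produce overlapping runs of w ON bits (shared semantics).
    value = max(lo, min(hi, value))
    buckets = n - w + 1                              # possible start positions
    start = round((value - lo) / (hi - lo) * (buckets - 1))
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

a, b, c = encode_scalar(10), encode_scalar(12), encode_scalar(90)
overlap = lambda x, y: sum(p & q for p, q in zip(x, y))
print(overlap(a, b))   # 7: 10 and 12 share most of their bits
print(overlap(a, c))   # 0: 10 and 90 share none
```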
I think that encoding information into this space is going to be just as big a business as processing it, because every different area of expertise is going to want to encode its data a little bit differently: create your own sense, if you will. Okay, let's see where I'm at. Spatial pooling. I've got twenty more minutes, so I always debate whether to skip this or push through it.
There are structures in the brain called minicolumns, and in some of those layers, neurons are organized into minicolumns. You can actually see them here; I think this is a mouse cortex. We represent those in our model too, and these structures contribute to a process we call spatial pooling. Imagine the input space that you're getting, that fiber optic cable, that we want to process with our simulation of cortex.
I'm going to show you a visualization of this that will make more sense, but for right now, what I'll tell you is that each one of these minicolumns has a receptive field over the input space, and they're all different. A minicolumn activates when enough bits in its receptive field turn on. So those minicolumns learn to recognize specific characteristics and features of that spatial input space over time; we weight the connections and we do Hebbian learning, so that each column starts...
Each minicolumn will start to hone in on certain specific features that it has an affinity for over time. This is all about proximal, feed-forward input activating these minicolumns. It helps us normalize the sparsity: no matter what sparsity the encoder is generating, we can normalize it and always turn about 2% of the columns on. It also has to maintain the semantics of the input data, and it does a good job at this. But this is all just talking about proximal input.
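The two ideas just described, random receptive fields over the input and a fixed 2% winner selection, can be sketched very roughly like this (a real HTM spatial pooler also adapts synapse permanences with Hebbian learning and uses boosting; the sizes below are made up):

```python
import random

random.seed(0)
INPUT_BITS, COLUMNS, FIELD = 128, 200, 16
# Each minicolumn samples a fixed random receptive field of input bits.
fields = [random.sample(range(INPUT_BITS), FIELD) for _ in range(COLUMNS)]

def spatial_pool(active_input_bits, sparsity=0.02):
    # Score each minicolumn by how many of its receptive-field bits are on,
    # then activate only the top 2%, normalizing sparsity regardless of input.
    scores = [sum(b in active_input_bits for b in f) for f in fields]
    k = max(1, int(COLUMNS * sparsity))
    winners = sorted(range(COLUMNS), key=lambda c: -scores[c])[:k]
    return set(winners)

active = spatial_pool({3, 17, 42, 65, 77, 90, 101, 120})
print(len(active))   # 4, i.e. 2% of 200 columns, whatever the input sparsity was
```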
This is a conceptual drawing of proximal connections to an input space, getting feed-forward input and turning minicolumns on. But there's also an interplay of distal connections. I told you the distal connections are the ones that give us contextual information about what's going to happen in that minicolumn. This is called the temporal memory part of the algorithm. Now, in your brain, these two things are happening simultaneously.
We break them out into two different algorithms so that we can better understand what each is actually doing within the space of those active columns. I think it's time for me to turn the visualization on. Okay, this is a note sequencer, and I don't think the audio is on; maybe you can hear it.
Now it has to learn a sequence memory of this spatial sequence over time. One caveat before I talk about cell activity: in this sequence, this E-to-A sequence, I'm playing four notes and then resetting; I'm not looping. The algorithm doesn't see this as a loop of the same four things over and over and over, because that's a problem: how do you tell when the loop ends? Is it three cycles? Is it four?
Is it five? It would keep building segments, it would keep growing synapses, because it thinks the sequence is going to go on forever. So what we do is play the four notes and then basically tell it: that's the end of the sequence. So every time it sees E, it sees it as if it were out of context; it has never seen anything come before E. When it sees the C sharp, like this...
...it sees it within context, because E came before it. So the temporal memory algorithm does two things: it decides which cells within the active columns are going to be active at this time step, and it decides which cells will go into a predictive state, and these sort of depend on each other. There are two reasons that a cell within one of these minicolumns would become active. One is if the column activates and there are no predictive cells at all.
This is what happens when we get to E here. Okay, here I'm going to turn on the predictive cells, and you can't see them because they're not there: it can't predict from nothing. It's seeing nothing, and all of a sudden E comes; it couldn't predict that. So when that happens, the algorithm turns on every cell in the minicolumn. This is called bursting.
This is a neurological thing that happens in your brain: when you're expecting something to happen, and it doesn't happen, and something different happens instead, the minicolumns for that other, different thing become activated and all their cells burst. It's basically saying: I don't know what just happened; this could go in lots of different directions. So it's priming the sequence to go in many possible different directions, and I'll show you a better example of this in a moment.
The other reason it would turn cells on: okay, so I'm at C sharp right now. I'm going to show you the cells that it is currently predicting will be active next, and I'll explain why in a moment. We're at C sharp, so if this is right, these must be the F sharp cells. It's predicting F sharp is next, because you've seen it learn this sequence several times, over and over, and it should be right.
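The activate-versus-burst logic just described can be caricatured in a few lines. This is only a first-order sketch of my own (real HTM temporal memory is high-order and works per cell and per dendritic segment; here there is one column per note and a simple learned transition table):

```python
CELLS = 4           # cells per minicolumn
transitions = {}    # learned: previous symbol -> set of symbols seen to follow it

def step(prev, current):
    predicted = transitions.get(prev, set())
    if current in predicted:
        active = 1        # a predicted cell fires alone
    else:
        active = CELLS    # surprise: the whole column bursts
    # Hebbian-style learning: remember that `current` followed `prev`.
    transitions.setdefault(prev, set()).add(current)
    return active

song = ["E", "C#", "F#", "A"] * 3   # train on the four-note sequence three times
counts = [step(p, c) for p, c in zip(song, song[1:])]
print(counts[:3])    # [4, 4, 4]: first pass, every transition is new, columns burst
print(counts[-3:])   # [1, 1, 1]: later passes are predicted, single cells fire
```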
If we move forward, you'll see all of those F sharp cells, the blue ones, turn orange. Good, thank God that worked. Then it goes on to predict the next one, which is going to be A. So why is it doing that? Let's go back to C sharp. These cells are predictive because, and I'm going to go grab one of them...
...there we go. Here's that cell that's in a predictive state right now. It's in a predictive state because it has all these distal connections to the previous time step's cells, the ones that were on last time. It has learned that it follows them: whenever those cells turn on, it goes predictive; it thinks it's going to be next. That's why it's predictive. The distal dendrites that I told you cause the cell to become depolarized are depolarizing the cell, because every time it has seen this pattern in the past, it has fired next.
Let this run a little bit more, and I think it's probably learned the sequence well enough; I'm going to stop it right here. So let's turn on the predictive cells. What are these cells predicting? F sharp. What happens if it doesn't see an F sharp? What happens if we turn this off and play another E? You guys already said it; you're on point. We're going to see...
...instead of all those F sharp columns turning on, a bunch of E minicolumns turn on, and it's going to be confused, and because it's confused, it's going to burst all the cells, because it's just starting to learn E, C sharp, E. This is the first instance of E, C sharp, E it has ever seen. I'm just going to play it through, and you'll see that it continues to burst for a while, because it doesn't know if this...
...just happens once. If it happens only once, we're not going to reinforce or build any segments to those previous neurons. But if we keep seeing it over and over and over... and now we're not bursting anymore, right? What are we predicting? Wow, we're predicting a lot of cells. Why are we predicting so many things at this point? We used to be predicting about half this. We're making both predictions.
It's predicting E, because it has just seen that a bunch of times, and it's predicting F sharp, which it saw a bunch of times before. So you just saw it burst because it was confused and didn't know the sequence, then you saw me play the new sequence over and over, and then it nailed that sequence down, and now it knows both sequences. It knows the original sequence, and it knows that the sequence could potentially branch at C sharp.
The thing I want to drive home here is that this is a high-order sequence memory. I'm at the very end of the sequence, at A, and these specific active cells represent the A that followed the F sharp that followed the C sharp that followed the E. Does that make sense? The semantic information for every step in the sequence follows through with every step. Okay.
So you just saw how the sequence memory uses its distal connections to loop back to itself. Well, when I showed this earlier, that distal information was coming from elsewhere; I didn't say where, but it was coming from elsewhere. What makes that temporal memory temporal is that its context is its past state. When it loops its distal dendrites back to other neurons within the same layer in that column, it's essentially linking to its past state. When your context is your own state, the only thing that context can represent is your past.
That's why it's a temporal sequence memory system, and that's what this visualization was supposed to show. But what if the context wasn't the past? We can see different circuits in the cortical column where the distal information does not represent the temporal state of that layer. So, for example, let's say the distal information represents a location on some object. Okay, and now I've got to talk to you a little bit about location.
Imagine a coffee cup in your brain. Imagine it over there on the floor, imagine it floating in the air, imagine it on the table in front of you, and imagine it in my hand. Right: you have a representation of that coffee cup that is not related to your position at all. It just exists in your brain. That's called an allocentric object model. Allocentric locations are locations related only to the object itself: you can define a coffee cup in relation to itself and nothing else.
That's what I'm talking about when I say a location on an object: not in relation to you and where you are, but just in relation to the object itself. So suppose you know that you're touching an object, or you have a sensor that's getting feature information from an object at a certain place on the object, and we're getting proximal information.
That proximal information is the actual sensory data, like the sensation of the touch, or the retinal information for the image you're looking at. Then you've got a way to identify objects over time as you sense them. This is just another example: say you have an entity that's touching a thing, and it touches it a bunch of times. The locations you're storing when you touch something new and identify it aren't relative to your body; they aren't egocentric locations.
Those
are
relative
to
the
thing
you're
trying
to
understand
right:
okay,
I'm
gonna,
swap
to
a
totally
different
format,
which
I
know
throws
people
off,
but
this
is
a
really
good
animation.
Okay,
so
imagine
two
layers.
So
this
is
the
sensorimotor
circuit
that
that
we've
discovered
that
we
just
did.
We
just
put
a
paper
out
on
imagine.
We
have
an
input
layer
that
is
going
to
receive
the
location
that
we're
about
to
touch
something
the
allocentric
location
on
this
coffee
cup.
It
gets
that
as
distal
input,
because
distal
input
we
know
depolarizes
the
cell.
That's going to prime us; that's going to be our prediction. If we've ever touched something at that point on an object before, we might have some sensations that come up there. Once we do touch it, and we get sensory information proximally, then we can say: okay, now I have a pair of data here. I've got a sensory feature at a location on an object. So that's what our first layer is going to do: identify sensory features at locations on objects.
It's going to retain and learn them. So assume we have trained this thing on a bunch of different objects; it has touched a bunch of different things, and it knows them. What these bits in the output layer represent is all the objects on which it has felt something like that, at something like that location. That could be a coffee cup, that could be a can of soda, that could be a ball.
They all have similar tactile feelings at that point on the object. Then we're going to go touch it again. Here we get a totally different location and a totally different feature, but since it's still the same object, we're going to narrow down our representation of what this object is, because now we know the ball does not feel like that at that location. So we can do this union operation, which, if you remember, is what we were talking about with SDR unions and comparisons.
A
This is an SDR union operation that helps narrow down what this object is that we're touching. You might imagine the object is in a black box and you're just trying to figure out what it is. That's what your brain is doing when you touch something: that could be any number of a hundred things, but then you touch it again and again and it's like, oh yeah, that's my cat, Jimbo.
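The narrowing step described here can be sketched with plain Python sets standing in for SDRs. This is only an illustration of the idea, not Numenta's implementation; the object names and (feature, location) pairs are invented.

```python
# Which learned objects produce a given (feature, location) sensation?
# Each touch rules out every object inconsistent with it.
learned = {
    ("curved", "side"): {"coffee cup", "soda can", "ball"},
    ("flat", "bottom"): {"coffee cup", "soda can"},
    ("handle", "side"): {"coffee cup"},
}

def narrow(touches):
    """Intersect the candidate objects consistent with each touch."""
    candidates = None
    for touch in touches:
        hits = learned.get(touch, set())
        candidates = hits if candidates is None else candidates & hits
    return candidates

# One touch is ambiguous; each extra touch narrows the candidates.
print(narrow([("curved", "side")]))
print(narrow([("curved", "side"), ("flat", "bottom")]))
print(narrow([("curved", "side"), ("handle", "side")]))  # only the cup
```

In the real circuit the candidate sets are unions of sparse bit patterns in the output layer, but the logic of repeated touches shrinking the set is the same.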
A
And same thing: we go back, have another touch, get more sensory information, and narrow down once again, saying the coffee cup is the only object in my database of objects I've ever touched that this corresponds to. We can do this with multiple columns, too; so far I've only talked about one cortical column, that Vienna sausage.
A
So each column is going to have an idea of what the object is, and each one of those columns is going to say what it thinks; they're going to vote on what they think that thing might be, and they can inform each other, because the distal connections in that output layer go between columns. They actually share their distal information with other nearby columns, so they can inform them.
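The voting between columns can be caricatured the same way: each column, touching a different part of the same object, keeps its own candidate set, and sharing lateral information amounts to keeping only the objects every column still considers possible. All names here are invented for illustration.

```python
def vote(column_guesses):
    """Consensus across columns: objects every column thinks possible."""
    return set.intersection(*column_guesses)

finger1 = {"coffee cup", "soda can", "ball"}  # a column feeling a curved side
finger2 = {"coffee cup", "soda can"}          # a column feeling a flat bottom
finger3 = {"coffee cup"}                      # a column feeling a handle

# Three simultaneous touches resolve the object in a single step.
print(vote([finger1, finger2, finger3]))  # {'coffee cup'}
```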
A
A column can say, by the way, I don't think that's a ball, and its neighbors can update their models. This is all happening in one cortical column and its cooperation with other cortical columns, without even introducing hierarchy into the equation. One more touch and we can quickly identify that this is a cup. So the more sensation you have, the better, and this makes intuitive sense, right? If you just touch something once, like this, I don't know what that is.
C
A
So the main thing here is this whole cortical column. If we can figure this thing out, that's the key to intelligence in the neocortex: figuring out what this cortical column does. We think we have a pretty good idea of what three or four of these layers are doing. The thing I just showed you was the circuit between layers 2/3 and layer 4. We think there's a very similar circuit between layer 5 and layer 6a that's doing another very similar process, related to egocentric location.
A
So we have to have that interaction, and we have to have it baked in. Our theory proposes that model of intelligence. It's completely biologically constrained: we don't do anything in our code or in our research if we can't explain why, based on sound neuroscience, and we try to incorporate these missing pieces. And everything I've been talking to you about is very open: all of our code, all of our research code, all of the code that supports our papers, is on GitHub.
A
It's on the Numenta organization. I am the open source community manager, so if you want more information, you can contact me directly: matt at numenta org. We publish all of our papers; the recent one is the theory of columns in the neocortex, which is what I was just trying to explain to you. We're really excited about this; we think it's going to drive a lot of interesting progress in AI.
A
Numenta.org is our primary portal if you want to learn more about HTM, and I also have a YouTube channel. The visualizations and material I showed here were basically taken from that channel, and I keep producing more educational material about HTM; I even have an episode on bit arrays. So if you're hard up for educational material, search for HTM School on YouTube. With that, I'm open for questions. It's Friday and it's the last session, so shoot. Yes, sir?
A
Yeah, that's a good point. Yes, the example I gave was about touching a cup, and I did a lot of examples with touch, but it doesn't have to be touch; it can be other sensory modalities as well. If you think about the objects you have represented in your brain right now, you don't just represent what they feel like: you represent what they look like, sometimes what they sound like, and sometimes what they taste like. All of that is feature information related to an allocentric location on that object.
A
You could, I'm sorry, you could have what? Oh, sure, yeah, right. We're going to have tons of different sensors; we're going to create new senses like crazy. It's easy to create new senses. I just showed you one that was like having a clock in your brain. Honestly, your imagination is the limit.
B
A
That reminds me: we're having a meetup tonight at 7:00 at the Crowne Plaza down the street, if you want to come learn more about HTM. So yeah, sensory modalities are all treated similarly. Like I said, if you cut the cortex here versus here versus here, it's the same algorithms happening. So, yes, you had a question?
A
I just hard-coded it. We're talking about the note sequencer's encodings: I just said, okay, there are four notes and a rest, so I broke the space into five different sections and turned on one of the sections for each symbol. But honestly, if I had distributed the bits randomly for each, it would have been the same behavior.
A
It doesn't necessarily matter how you encode it, as long as you encode it semantically. I could have done something different and encoded a sharp and a flat a little closer together, because they're closer in the chromatic scale; that's the right word, they're closer in the scale. But I didn't want to do that here; for the sake of this example, I just wanted to keep it simple.
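A minimal sketch of that hard-coded encoder: five symbols, four notes plus a rest, each assigned its own contiguous block of active bits. The symbol names and the block width are my assumptions, not details from the talk.

```python
SYMBOLS = ["note1", "note2", "note3", "note4", "rest"]
BLOCK = 8  # bits per symbol; an assumed width

def encode(symbol):
    """One-hot block encoding: a flat bit array with one active block."""
    bits = [0] * (len(SYMBOLS) * BLOCK)
    start = SYMBOLS.index(symbol) * BLOCK
    bits[start:start + BLOCK] = [1] * BLOCK
    return bits

# Distinct symbols share no active bits, so the model treats them as
# semantically unrelated, which is all this example needed.
print(encode("note1"))
```

Encoding sharps and flats with partially overlapping blocks would be the "closer in the scale" variant mentioned above: shared bits mean shared semantics.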
A
So, I like to think about consciousness and intelligence, but I've come to the conclusion that if you ask ten different people what consciousness is, you get ten different answers. So it's already hard to define what we're even talking about. I do think there's a difference between intelligence and consciousness. What we're trying to do has nothing to do with consciousness, as far as we know; we're just trying to figure out how intelligence works. Nobody knows how consciousness works, so it's hard to speculate about it.
A
I think you can have intelligent systems that don't necessarily need to be conscious, and perhaps you can have conscious systems that don't necessarily have intelligence; I'm not sure. It's really hard to speculate. I don't think we're going to build conscious systems for quite a while. When we do, maybe this work will contribute to those things, but there are a lot of very small things happening at the synaptic level that we still don't quite understand and that we're trying to abstract from a functional standpoint.
A
You can't visualize how it's modeling that three-dimensional space, because it's so abstract at that point. But we're doing a lot of research about locations and three-dimensional space right now. If you want to learn more about how this works in the brain, look up something called grid cells. There's some really interesting research; I can give you the quick and dirty version.
A
If you have a mouse in a room and you can tell which neurons in its brain are firing as it walks around, and you watch one neuron that you think is a grid cell, it will fire at different parts of the room, and those firing points form a hexagonal grid, because that neuron associates itself with specific points in the room. This actually scales up and down. So your brain is doing some crazy stuff to model three-dimensional space, and we're trying to figure a lot of it out.
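The quick-and-dirty version can be caricatured in one dimension. Real grid cells tile 2D space hexagonally; here a toy cell just fires wherever position lands on its lattice, and cells with different spacings jointly pin down a unique location. Periods and offsets are invented.

```python
def fires(period, offset, x):
    """A toy 1D grid cell: fires at regularly spaced positions."""
    return x % period == offset

# One cell alone is ambiguous about location: it fires at many spots.
one_cell = [x for x in range(20) if fires(4, 1, x)]
print(one_cell)  # [1, 5, 9, 13, 17]

# Cells with different periods fire together at far fewer positions,
# so a population of them can identify a unique location.
two_cells = [x for x in range(20) if fires(4, 1, x) and fires(5, 1, x)]
print(two_cells)  # [1]
```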
A
That signal has to exist, and we're researching right now and writing more papers about how it is created. If you want to learn more, come to the meetup tonight at nine, because one of our research engineers is talking about location and how it's treated in the brain. Yes, sir, in the back.
A
Right. So at any point in time, the model has a certain number of predictive neurons, and those predictive neurons are predicting lots of things at the same time. If we're talking about a sequence, it may be predicting that you go this direction, or that direction, or that direction; all of those neurons could be active, and if one of those things actually happens, that confirms it. So there is, in effect, a probability distribution over things in the future.
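The "predicting lots of things at once" idea can be sketched like this: from previously seen transitions, every plausible next symbol is depolarized into a predicted set, and the arriving input either confirms one of them or counts as surprising. The sequence data here is invented for illustration.

```python
transitions = {
    "A": {"B", "C"},  # after A, both B and C have followed before
    "B": {"D"},
    "C": {"D"},
}

def observe(prev, nxt):
    """Return whether the new input was among the predicted symbols."""
    predicted = transitions.get(prev, set())
    return nxt in predicted

print(observe("A", "C"))  # True: one of two simultaneous predictions hit
print(observe("B", "A"))  # False: a surprising (anomalous) input
```

The fraction of recent inputs that arrive unpredicted is essentially the anomaly score mentioned later in the Q&A.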
A
Imagine yourself throwing a ball. All the neurons involved in you physically throwing a ball just fired in your brain, but they didn't get any proximal sensory information confirming that you were throwing a ball. That's because the activity was coming from apical dendrites: your imagination was unfolding that action down the hierarchy of your brain, and that is what was causing the cells to fire. So there's something to think about.
A
Where the neuron connections are coming from? Okay, here's the thing: a layer can't know anything about that, right? I didn't talk much about hierarchy, but these cortical columns are all part of a hierarchy, just like in CNNs and deep-learning-type systems. There's a hierarchy going on in your brain: lower levels handle more granular things like edges and touches, and the higher you go, the more you get into abstract objects, and then Bill Clinton.
A
So I'll tell you exactly what it is. I'm not giving this talk there. We have a talk from someone in our community who is going to talk about 3D modeling environments for training AI systems, which is relevant to the research we're doing, since we're doing 3D object recognition. Jeff Hawkins, our founder, is going to talk about this new paper.
A
That's
called
why
columns
exist
in
the
neocortex
and
so
he'll
go
over
some
of
the
the
hard
theory
behind
the
last
part
of
what
I
talked
about
here
and
then
we
have
a
research
engineer,
who's
going
to
talk
about
how
your
brain
represents,
location
and
that's
gonna,
be
an
interesting
talk.
If
you,
if
you
know
no,
you
don't
need
to
go,
you
don't
need
to
go
through
HTM
school
I
mean
you
might
be
a
little
lost
here
and
there,
but
I
would
encourage
you
to
come
anyway.
Yeah.
Your
second
question,
yeah.
A
I ask myself this question quite a bit. The thing is, you have to go back to the mission of our company (where did it go... there it is): understand how intelligence works in the neocortex, and create systems based upon those principles. We're not focused on productizing, or even necessarily on finding anything useful to do with this. We believe the mission itself will be valuable no matter what: if we can understand how intelligence works, it will present its value at some point in the future.
A
That being said, we do have some sample applications that we've built just to test things out and see what we could do as we go along, a lot of them anomaly detection applications, because we seem to do pretty well at temporal anomaly detection. That's something where we can say, well, ANNs aren't great at this, and we can do reasonably well at it. But we're not trying to compete; we can't do spatial image classification the way that GANs can do it, at all.
A
Yeah, no, they're not doing fraud detection; they're helping DevOps people identify when servers are having problems. They hook into CloudWatch, get all the information from the AWS servers (the CPUs, the network in and out, the reads and writes, and all that stuff) and automatically create models, HTM models, one for each of those metrics, and just do anomaly detection. The longer it watches those metrics over time, the smarter it gets about them, and you never have to refresh the models.
A
Sure, yeah. We have some papers, actually, that simulate not lesions exactly but failure of a certain percentage of the neurons in the system, and it still performed surprisingly well, in sort of the same way that your brain does. It's really redundant; your brain is extremely redundant. I mean, you kill neurons every time you drink a beer, so yeah.
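Those failure experiments can be illustrated with a toy SDR match: drop a fraction of a stored representation's active bits and check that the overlap still clears a recognition threshold well below the full size. All the sizes here are assumed, loosely in the range HTM examples use.

```python
import random

random.seed(0)
N, W, THRESHOLD = 2048, 40, 20  # assumed SDR size, active bits, match bar

stored = set(random.sample(range(N), W))  # the learned representation

def kill(sdr, fraction):
    """Simulate neuron failure by dropping a fraction of active bits."""
    survivors = int(len(sdr) * (1 - fraction))
    return set(random.sample(sorted(sdr), survivors))

damaged = kill(stored, 0.3)  # 30% of the active cells fail
overlap = len(stored & damaged)
print(overlap, overlap >= THRESHOLD)  # 28 True: still recognized
```

Because sparse patterns are matched by overlap rather than exact equality, losing nearly half the active bits still leaves an unambiguous match, which mirrors the redundancy argument above.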
A
Well, you have to remember these are whole different sets of neurons and whole different cortical columns being activated if you've got one hand over here touching something hot and the other touching something cold. But I don't know. I used to go to this place in St. Louis as a kid called the Magic House, and they had these alternating coils with hot and cold water inside them, and if you touched them all at the same time, it felt burning hot. You're like, that's it.