Description
In today's meeting, Jeff Hawkins gives some brief comments on the book, "Human Compatible" by Stuart Russell. Then Michaelangelo sparks a discussion on why dimensionality is important in The Thousand Brains Theory.
A
Okay, it's recording now. All right — you guys can hear me, right? Yep. So, I stopped in the office last week, briefly, just to put something in the office, and I saw on my desk that I had received the book — let me share... no, you can share here. So this is the book I got, and it's by Stuart Russell, who is, of course, a very famous researcher, and the book is called "Human Compatible: Artificial Intelligence and the Problem of Control."
The package was not there — Terry must have opened it — but it had a letter in it. The letter was pretty generic; I mean, I don't think it was customized to me. But they said, "Dr. Hawkins" — this is the letter; I'm reading from it right here:
A
We
think
this
book
is
important,
read
in
it
stuart
russell,
a
leading
researcher
and
author,
the
definitive
textbook
on
iai
clarified
many
of
the
risks
related
to
artificial
intelligence
and
discusses
how
research
that
rebuilds,
ai
on
a
new
foundation
could
address
these
risks.
We
hope
you
enjoy
the
book
all
the
best
and
then
there's
three
people
listed,
none
of
them
signed
personally
yashio,
banjo,
tim
o'reilly
and
max
tegma.
I know Tim O'Reilly and Max Tegmark personally; of course I know who Yoshua Bengio is, but I don't know him personally. So it's interesting. The letter was dated March 31st, so it was in the office for a while — I just found it last week. Interesting. I don't know how many people they sent this to.
A
Clearly
it
wasn't
a
personalized
letter,
but
they
must
have
probably
censored
a
bunch
of
people
they
thought
would
be
interested
and
I
think,
given
the
book's
been
out
a
year
as
published
last
year,
I
don't
think
it's
a
way
of
you
know
increasing
sales.
It
might
be,
I
think
it's
more
like
they
just
want
people
in
the
field
to
know
what
they
think
is
important
and
they
want
people.
I
think
you
should
know
it
know
it.
So
I
don't
know
how
to
interpret
that.
That
was
my
interpretation,
but
I
appreciate
it.
A
I
did
read
it
this
weekend
and
I
enjoyed
it.
It
was
it's
interesting,
it's
reasonably
easy
to
read.
If
you
know
these,
you
know
the
top
of
the
field.
I
think,
if
you're
outside
of
the
field
that
wasn't,
it
would
be
a
bit
technical,
so
I'm
just
going
to
share.
I
have
some
notes
from
that.
I
just
thought:
I'd
share
those
notes
right
now,
so
I'm
going
to
share
my
screen.
If
that's
okay
am
I
allowed
to
do
that
it
should
yeah.
A
So
here's
here's
some
just
notes
in
that
guy.
Could
you
guys
see
that
already.
B
A
So
again,
the
title
book
is
human
compatible,
artificial
intelligence
and
the
problem
of
control.
So
I
I
just,
I
think,
there's
some
interesting
observations,
how
other
people
think
about
ai
and
and
the
risks
associated
with
it.
So
it's
worth
going
over
that.
So
I
have
some
quotes.
A
Sorry
early
on
the
book,
stuart
russell
sort
of
defines
intelligence,
and-
and
this
is
the
definition
he
uses-
machines
are
intelligent
to
the
extent
that
their
actions
can
be
expected
to
achieve
their
objectives,
which
in
some
level
that
makes
sense.
I
think
it's
a
little
bit
too
it's
too
broad
for
my
world,
I
mean,
by
that
definition
of
bacterium
is
intelligent
because
its
actions
are
expected
to
achieve
the
objectives
of
finding
more
food
and
replicas.
You
know
so,
but
I
think
it's
the
whole
thing.
A
The
whole
book
is
built
around
this
idea.
That
objectives
are
the
an
intelligent
machine
has
objectives
and
the
objectives
are
the
problem,
because
the
objectives
satisfying
those
objectives
or
may
not
be
in
the
best
interest
of
humans.
A
I
thought
this
was
an
interesting
quote,
and
this
was
again
sort
of
a
foundation
of
how
the
how
stuart
thinks
about
ai
the
central
concept
of
modern
ai
is
that
a
stream
of
perceptual
interest
is
converted
into
a
stream
of
actions
again
on
the
surface,
this
kind
of
makes
sense,
but
if
you
recall
both
in
on
intelligence
in
my
new
book,
a
thousand
brains,
I
say
just
the
opposite:
well,
not
the
others.
I
say
that
ultimately,
of
course,
the
actions
of
a
machine
are
dependent.
A
You
know
what
matter,
but
ai
is
not
about
producing
actions
about
building
a
model
of
the
world,
and
you
may
or
may
not
act
upon
that
model
later,
but
this
concept
that
that
an
agent
is
constantly
producing
actions
to
sort
of
infuse
throughout
the
entire
book.
It's
the
there
isn't
really
a
much
of
a
discussion
that
you
know
the
the
intelligence
is
about
building
a
model
of
the
world.
It's
really
about.
Does
it
act?
How
does
it
act
and
you
know
how
does
it
act
to
achieve
certain
objectives?
A
And
so
I
think
that's
the
wrong
way
of
thinking
about
intelligence.
That's
the
way,
maybe
thinking
about
an
entire
system
that
has
part
of
it
has
intelligence,
but
but
it
misleads
you
into
thinking
that
this
is
constant,
like
you
know,
driving
a
car,
I'm
constantly
getting
inputs
and
reacting
or
I'm
playing
well,
I'm
getting
inputs
and
I'm
making
a
move
as
opposed
to
I'm
I'm
just
learning
about
the
world
and
and
later
on.
After
using
that
learning,
I
can
base
that
learning
to
create
my
actions.
A
I
thought
this
was
a
little
bit
on
page
81.
I
thought
this
is
a
little
bit
also
symptomatic
of
that
kind
of
point
of
view
where
he
says
he's
talking
about
a
dilemma.
He
says
reading
requires
knowledge
and
I
certainly
would
agree
with
that.
You
can't
read
the
majority
know
a
lot
about
the
world,
but
that
knowledge
largely
comes
from
reading.
A
Well,
the
certain
that's
true
for
certain
knowledge
and
he's
views
this
as
a
dilemma
like
how
can
we
learn
by
reading
if
reading
you
know,
we
already
have
to
know
something,
and
I
just
think
that
I
think
that's
the
wrong
way
of
looking
about
it.
A
I
think
knowledge
comes
from
interacting
with
the
world
first
and
then
later
we
can
embellish
our
knowledge
by
reading
and
learn
things
we
can't
interact
at
first,
but
it
sort
of
ignores
the
whole
century
motor
version
of
thinking
about
about
learning,
and
so,
as
far
as
I
haven't
finished
the
book,
yet
I
think
I'm
about
two-thirds
through
it.
Maybe
three
quarters,
but
the
whole
idea
of
sentry
motivating
learning
is
really
not
a
theme
in
the
book.
A
He
says
largely
comes
from
reading,
so
if
I
wanted
to
think
about
well,
if
I'm
learning
about
astrophysics-
yes
reading
in
knowledge,
already
becomes
a
meeting,
but
I
want
to
think
about
everyday
knowledge
right.
You
know
at
the
point
like
animals
have
everyday
knowledge,
then
that's
not
coming
from
me.
C
Yeah — and babies, you know, kids who don't yet know how to read: do they not have knowledge?
A
Of course they have knowledge — even before they're speaking, they know a lot about the world already. I think the crux here is this next quote, on page 141. This really gets to the core of his worries about the threats of AI — and I'm not saying it's the only threat, but this is a prevailing theme and he builds upon it: "If a machine is sufficiently intelligent, it will certainly understand that it will fail in its objective if it is switched off before completing its mission."
The same is true for curing cancer or calculating the digits of pi. So he says that if you give a machine an objective, it will always figure out how to prevent you from stopping it pursuing that objective, and he uses the off switch as the main metaphor. People say, well, can't we just turn off the machine if it's doing something we don't like? And he says no, you won't be able to: the machine will already have figured out that turning it off would prevent it from achieving its objective; therefore, it will prevent you from turning it off.
A
In
the
same
page,
he
says
we
can
expect
ai
systems
act,
preemptively
to
preserve
their
own
existence,
giving
more
or
less
any
definitive
definite
objective
and
any
was
his
emphasis.
So
he
says
no
matter
what
you
ask
the
machine
to
do
an
intelligent
machine.
It
doesn't
matter
what
the
objective
is.
It's
going
to
basically
preserve
itself,
so
it
can
or
it
won't
let
you
prevent
it
from
doing
that
objective.
A
I
think
this
is
a
these
are
some
pretty
extreme
views
to
me,
and
I
you
know
it's
hard
for
me
to
put
my
finger
on.
Why
I
disagree
with
it.
So
much
I
mean
humans.
Don't
do
this.
I
think
he's
going
to
argue
in
the
next
thing
that
why
that
is,
but
it
seems
a
pretty
severe
idea
that
you
know
every
machine
that
no
matter
if
that's
intelligent
about
what
level
intelligence
or
super
whatever
it
tells
you
is
going
to.
A
If
it's
smarter
than
us,
it's
going
to
figure
out
how
to
prevent
us
from
forwarding
whatever
objective
we
gave
it,
and
I
just
find
that
hard
to
believe
that
just
doesn't
just
doesn't
strength.
So
I
think
it's
true
at
all,
and
I
don't
think
there's
a
lot
of.
I
don't
think
he
justifies
his
claim
sufficiently
instead
of
just
as
a
given
like
this
is
obvious.
Isn't
it
like?
Oh,
I
don't
think
so.
I
don't
know
if
anyone
else
reacts
to
that.
D
Well,
I
I
guess
it's
if
you
conflate
it's
it's
the
necessity
for
the
ai
to
overcome
obstacles.
It's
a
question
of
whether
it
qualitatively
evaluates
an
off
switch
as
an
obstacle.
A
Right — I might pursue something aggressively too, but someone might come along later and point out, "please don't do that," and I'll say okay. Yeah, well, anyway, I think his basic thesis in the book is in the next three points, which first appear on page 173, and I actually have no objections to these. I think these are reasonable ideas to pursue; I'm just not certain they're necessary, and it's just not clear to me that the aforementioned risks are really true in any sense.
A
But
but
I
think
this
is
his
thesis
and
I
think
it's
a
reasonable
one,
the
police
to
discuss.
He
proposes
sort
of
three
laws
or
three
objectives
or
three
things
we
can
do
to
prevent
this
problem,
and
the
number
one
is
that
the
machine
being
an
ar
system.
His
only
objective
is
to
maximize
the
realization
of
human
preferences.
She
says
that
we
have
to
make
ai
systems
that
that's
their
only
objectives
is
to
do
what
humans
prefer
and
I'll
come
back
to
this
in
a
second,
because
that's
a
problem
to.
C
Yeah — I mean, if these things become commonplace... you know, today people can buy guns in the U.S.; imagine all the people who could buy guns instead buying these kinds of machines and telling them to do things.
A
That's
pretty
dangerous
to
me,
so
I
I'll
be
honest,
full
disclosure.
He
says
he's
going
to
address
this
later
in
the
book
and
I
haven't
gotten
to
it
yet
so
I
don't
know
how
he's
going
to
does
this
yet
I
just
thought
I
would
put
this
out
there.
He
says
yes,
this
is
a
problem
and
he
starts
going
to
length
about
well
what
first
of
all
how
we
need
to
know
what
those
preferences
are
and
knowing
that
those
preferences
are
are
good
enough
for
everybody.
A
This
is
this:
is
his
his
basic
formula
for
preventing
the
risk
of
aoc
that
yeah
the
book.
That's
the
books.
This
is,
it
gets,
I
believe,
the
core
of
what
the
book
is
about
the
book.
The
first
two
thirds
of
the
book
lays
out
the
problem,
as
I've
talked
about
in
the
earlier
quotes,
and
this
is
his
solution.
C
Yeah,
I
have
kind
of
a
meta
objection
to
anything
like
this.
It
seems
super
unrealistic
to
me
because
you
know,
if
you
look
at
you,
know,
machine
learning
and
ai.
Today,
you
know
stuff
is
available
in
open
source
everywhere.
It's
a
distributed
set
of
programmers
and
people
who
are
running
these
things.
They
have
full
access
to
the
source
code.
What
is
to
what's
going
to
enforce
these
three?
Any
rules
like
this.
A
Want
so
his
example
of
where
this
has
worked
in
the
past
was
there
was
a
famous
asilomar
conference,
and
I
forget
the
years
quite
a
while
ago,
where
people
talking
about
modifying
dna,
recombinant,
dna
and
dna
modification,
there
was
a
meeting
and
a
silver
meeting
where
a
whole
bunch
of
people
got
together
and
said
we
have
to
slow
down
and
basically
enforce
that
no
one
can
go
and
re.
You
know,
modify
human
dna
right
now
we
have
to
you
know
we
have
to
stop
this
from
going
out
of
control
and
go
slowly.
A
Well,
you
can
modify
dna,
but
it's
I
it's
hard
to
do
it
still
and
create
a
human
with
modified
dna,
and
he
pointed
out
there
was
recently
you
know
some
chinese
researcher
who
claimed
to
have
done
that
and
that
person
was
ostracized
and
lost
their
position,
and
you
know
so
it's
I
think
it's
a
little
again.
He
may
address
this
later
in
the
book,
so
I
apologize.
A
If
he
listens
to
this.
You
know
it
is
harder.
Perhaps
I
think
maybe
to
control
this
here,
but
his
point
is
we
have
to
try
and
there
are
examples
where
people
have
done
this
in
the
past,
and
so
you
know
if
the
world's
scientists
and
politicians
can
get
together.
We
could
put
these
controls
into
place
now
before
it's
too
late.
D
I guess I have a similar objection with that first statement — basically with the word "only." That makes a strong case for him, because he's basically setting the thing up to be dangerous when he says that we're just looking for an assistant that enables us to accomplish a goal, whatever that goal might be. I don't like to appeal to Asimov's three laws, but it would seem that any device with that kind of capability and power needs to be regulated in some sense, like we have with any potentially dangerous technology. There have to be some overriding objectives — I'll just use the three laws as a straw man — so that, okay, you don't want it used to commit murder.
A
Well, I think he's saying it's better than the alternative. He views it as an existential threat: that all of humanity could be wiped out by AI machines — he talks about super-intelligent machines whose doings we can't fathom, which come to control everything. So his point is that that's far worse, and that if every machine at least tried — setting this out as a goal — tried to make sure that its ultimate goal was to not do something that humans wouldn't like, then that's better.
A
Well,
that's
quite
and
again
I
apologize
to
stu
for
not
reading
the
rest
of
the
book.
Yet
so
he
says
he's
going
to
address
that.
But
clearly
that
is
a
big
problem
like
who
gets
to
decide
and
he
starts.
I
started
reading
some
of
this.
Let
me
just
finish
up
the
other
two
things
he
basically
says
the
machine
is
initially
uncertain
about
what
those
what
human
preferences
are,
and
so
he
says
you
can't
assume
that
we
know
what
human
preferences
are.
They vary, and they change over time. So he says that the way this has to work is that the source of human preferences is observation: the AI systems have to observe human behavior. And then he starts jumping into sort of Bayesian theory and probability theory about what it means to observe humans — multiple humans, with different objectives and different preferences — and to try to mathematically ascertain what would be the best path forward, the best assumptions about human preferences. I think it's going to get a bit technical later on in the book, just flipping ahead a few pages, but I don't really know yet how he's going to do it — he's just started down that path, I believe.
A
He
acknowledges
right
up
front
like
hey,
there's
some
problems
with
this,
but
you
know
I'm
going
to
try
to
address
him
and
I
think
he's
going
to
try
to
address
them
mathematically
in
some
sense
and
he
uses
you
know
analogies
to
turing
and
the
the
algorithm
completeness
theories
that
churning
proposed
for
computers,
and
things
like
that,
like
these
are
hard
things
to
do,
but
maybe
we
can
do
them.
You
know
so
yeah.
So
I
I
think
he
starts
from
a
premise
that
we
have
this
huge
problem.
A
Machines
are
going
to
get
out
of
control,
they're
going
to
be
super,
intelligent,
they're,
just
going
to
like
run
away.
You
know
ai.
He
talks
about
the
intelligence
explosion,
idea
that
once
the
machine
gets
a
bit
intelligent,
it's
going
to
just
skyrocket
us
without
really
defining
what
intelligence
is
or
having
other
than
the
definition
of
earlier,
and
you
know
it's
almost
like
the
assumption.
The
machine
can
learn
everything
just
by
reading.
A
So
once
it
reads
everything,
then
it's
just
you
know
be
super
smart,
so
I'll
I'll
maybe
give
an
update
when
I
finish
the
book
about.
But
you
know
how
he's
addressing
these
issues,
but
but
this
is
his
proposal
like
we
need
to
do
something
like
this
dna
modification
thing
we
need
to
get
behind
this.
We
need
to
start
putting
things
in
place
now
before
it
gets
out
of
control,
and
I
I
don't
think
these
are
bad
thoughts.
A
I
mean
these
these
ideas
here
and
these
three
points
are
interesting
to
think
about,
but
I
think,
as
I
said
earlier
to
me
personally,
the
setup
to
the
requirement
from
this
is
is
not
convincing
me
at
all
and
I
think
it's
more
of
a
sort
of
a
lack
of
understanding
of
how
brains
learn
and
what
it
means
to
know
about
the
world.
I
mean
this
stuff
we
figured
out
in
the
meantime,
but
most
people
don't
know
it.
E
So this idea in the line there — "we can expect AI systems to act preemptively to preserve their own existence" — is such a common theme in the threat-of-AI literature. Presumably, from his line of argument, the reason for preemptively preserving the machine's own existence — in the coffee-fetching example — is that without the machine there would be nothing to fetch the coffee. So it's not a selfish preservation; it's for the sake of the objective. But that itself requires some awareness on the machine's part of what it does. There always seems to be a leap to this need for self-preservation that's built into the threat, and I don't — I don't know.
A
Yeah — so that's sort of his conclusion: you can't give a machine an objective without it basically, ultimately — very quickly — deciding that, to satisfy that objective, it has to, you know, protect itself. And he uses the example from the movie 2001: A Space Odyssey — how the computer ended up murdering the astronauts, the travelers in the spaceship, because it concluded that the safest way to complete its mission was to get rid of the humans.
It needs an awareness of what it does — and maybe he assumes that, if we didn't put it in there, it wouldn't be aware that this would be bad for humans, and probably bad for itself, to destroy all humans. I mean, it's hard to imagine that an AI system could destroy humanity and yet still keep going.
You know, it's funny, because Janet was reading my book again this weekend, and she's not a technical person, so I can really see the things she has trouble with — things that you and I, or people in this meeting, would not have trouble with at all.
A
But
when
I
read
stuart
russell's
book,
it's
far
more,
you
know
he's
immediately
jumping
into
reinforcement
learning
as
if
we
know
what
it
is
and
now
it
talks
about.
You
know,
inverse
reinforcement,
learning
and
you
know,
makes
like
everyone
even
know
what
that
is
and
starts,
throwing
some
equations
about
probability
and
bayes
theorem,
and
so
from
that
point
of
view,
it's
not
a
a
a
book
that
a
that
anyone
could
pick
up.
You
know
and
say.
F
What I find interesting there is that the description of these three points is basically a superintelligence which is a slave to humans, and I wonder if there is a third way. I mean, one, it's a threat, true; two, it's a slave; and then maybe a third way is that we have superintelligent machines which coexist with humans but aren't necessarily slaves.
A
Well, I think his argument would be: how can they coexist if they have their own objectives? They'd have to care about us in a very significant way. I guess I don't know how that would — I'm trying to put words in his mouth; I don't know. I mean, I think you're right. I just don't believe the premise that these machines will just, you know, become totally bent on self-preservation.
H
Yeah — I also just thought there'd be a fairly simple test. Maybe this is too simplistic, but if you did have a machine that you thought was intelligent, and you didn't really know how it would express its objectives: could you just sort of put in a kind of paradigm where it has to propose what it's going to do, and then you have to approve it? It's almost like you could do a trial run, in a sense. Like, if you gave a gun to the machine and said, you know, catch a deer for me — ideally you would know what it would do; you would ask, "how would you use this gun?" and then vet it. But maybe that's —
A
Yeah, well — maybe that fits into his three rules here; I don't know. What do you think? You're saying the machine wouldn't figure it out — the machine would just come back and ask permission?
H
Yeah, I mean, that's it. It seems, I don't know, somewhat trivial to do that, right? If you don't know how it's going to express its objectives, can you just ask it before it does?
A
Yeah, that's an alternative to this, perhaps. I think it's a good observation: instead of the machine trying to figure out what our preferences are, the machine could just be programmed to always ask. Yeah.
D
I'm looking at this thing, and if you substitute the word "government" for "machine," you have an interesting analogy. And what do you learn from that? Well, basically, you can see that there's a variety of different governments through history and across types of schemes, but governments are kind of built, or put into power, with a set of goals — ostensibly to help the common good. But they're trying to get reelected, so they're trying to look at what their constituents' preferences are, and sometimes they can go badly awry, or they find a particular way of remaining in power that is ultimately toxic — I'm thinking of fascism in Mussolini's Italy, for instance.
A
I think he's aware that that's not true, but I don't know.
D
But I used "governing" there — to me it is a meme of sorts: you could apply this abstraction to whole ranges of systems that are out there and see where the failure modes are.
C
Okay — Michelangelo, you wanted to do something. Should I stop the recording for that or keep it going? What do you think?
H
I mean, it's definitely very unstructured, so either way — but you tell me.
So — hopefully a quick question. In general, I don't really have it pinned down perfectly yet, but the question is really just: what are our questions about wide networks? If we were to do any investigations into wide networks, in particular in machine learning, what kinds of things are we curious about? I don't know — that's a bit broad, I guess, to start, but that's sort of it.

D
Yeah.
H
What was that — wide networks as opposed to what? Like, relatively wide networks — wide as in, you know, wider than ResNet? As far as, you know — I mean —
F
Wider and shallower, as opposed to deeper networks? I mean, same number of parameters, but instead of making it deeper, make it wider — is that it?
H
So I guess that's the thing. Part of it as well is understanding, when we say "more powerful" — or when you say more powerful, are you talking in terms of its ability — like the SDR intuition of having a smaller and smaller chance of overlap, for instance?
A
Well, that would be the result of having a wide network, but I mean, I don't know what it means until — powerful in terms of, I guess, you would achieve the same results with, I don't know, faster training and less computation time? I don't know; someone else could speak up about this. I think we do have this intuition that when you go sparse, it's better to have a higher dimensionality, because if you don't have high dimensionality and you go sparse, you lose representational capacity.
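[Editor's note: a minimal sketch of the capacity point, not Numenta code; the sizes are made up for illustration. Holding the number of active units fixed, the count of distinct sparse patterns grows enormously with dimensionality, which is one way to quantify the representational capacity lost when a sparse code lives in a low-dimensional space.]

```python
from math import comb

k = 20                      # number of active units, held fixed
for n in (64, 256, 2048):   # increasing dimensionality
    # Number of distinct k-of-n binary patterns available to the code.
    print(f"n={n:5d}: {comb(n, k):.3e} distinct patterns")
```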
H
Yes. In particular — by the way, I have a meta-question, which is: what are our questions with wide networks; what are our interests in wide networks? As far as thinking about the value of that sparse linear package I had mentioned before, or of porting some of that functionality into our own library: is that valuable, and why is it valuable to us, with respect to the questions that we have?
D
So there is an engineering reason for doing that, even if there were an equivalency in terms of capacity. Okay —
A
But inherently, I think you would expect continuous learning to require high sparsity; that's going to be a requisite for continuous learning, and it's also going to be a requisite for high robustness. And so there are a lot of benefits that come from sparsity, and you just can't get sparse systems to work well if they're not wide. So I think Kevin's right about computational efficiency, but I think there are also going to be some functional benefits: I don't think we're going to get to continuous learning without wide networks, and probably dendritic properties.
H
I
get
that
yeah
or
yeah.
I
think
I
think
I
get
the
general
intuition
for
that.
So
so
you
also
think
that
the
the
y
network
should
possibly
supplement
the
the
continuous
learning
stuff
that
we're
working
on
quite
well
as
well.
A
I
mean
just
reducing
the
overlap
between
representations
is
a
fundamental
part
of
continuous
learning.
It
means
that
you
know
I
can
learn
new
things
about
impacting
existing
things,
so
it's
just
a
fundamental.
Otherwise,
if
you
have
a
dense,
completely
dense
network
as
most
convolutional
neural
networks
are
today,
then
you
know
you
change
anything.
It
changes
everything
when
you
have
a
sparse
network,
a
very
sparse
network,
you
change
something.
It
changes
very
few
things.
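[Editor's note: a small sketch of the overlap argument, with made-up sizes — not Numenta code. Two random patterns share about k²/n active units, so at 50% density nearly half the units are shared (learning on one pattern disturbs the other), while at ~2% density the expected overlap is under one unit.]

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pattern(n, k):
    """Binary vector of dimension n with exactly k active units."""
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=k, replace=False)] = True
    return v

n = 2048
for k in (1024, 40):   # dense (50% active) vs. sparse (~2% active)
    a, b = random_pattern(n, k), random_pattern(n, k)
    # Units that an update to pattern a would share with pattern b.
    shared = int(np.sum(a & b))
    print(f"k={k:4d}: expected overlap k^2/n = {k*k/n:6.1f}, observed = {shared}")
```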
C
Yeah — and I think we've said three things. If you have very sparse, high-dimensional networks, then you don't have interference from random patterns, which is kind of a requirement for continuous learning. With really sparse, high-dimensional representations, you can also get a lot of robustness to noise: you can have very noisy versions of the patterns you've stored and still recognize them, which is harder to do with dense networks.
C
That's
another
one
and
the
third
one
is
computational
reasons
you
can.
If
you
have
really
high
dimensional
systems
you
can
get
by
with,
at
least
in
from
the
biology
we
know
you
can
get
by
with
a
tiny
number
of
synapses
like
20,
you
know
10
or
20
synapses
and
still
retain
this
real
robustness,
properties
and
interference
properties
and
in
fact
they
get
better
in
that,
so
you
can
get
by
with
you
know
extremely
low
computational
resources,
in
theory,
with
very
very
high
dimensional
systems.
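[Editor's note: the "10 or 20 synapses" point can be made concrete with the usual SDR overlap combinatorics — a sketch with illustrative numbers (n, k, s, theta are all assumptions), not Numenta code. Recognize a stored pattern by sampling only s of its active units, and ask how often a random pattern clears the match threshold; in high dimensions, the false-match probability is vanishingly small.]

```python
from math import comb

def false_match_prob(n, k, s, theta):
    """P(a random k-of-n pattern overlaps a fixed s-unit sample in >= theta units).

    Hypergeometric tail: draw the random pattern's k active units and count
    how many land inside the s sampled units."""
    hits = sum(comb(s, b) * comb(n - s, k - b) for b in range(theta, min(s, k) + 1))
    return hits / comb(n, k)

# e.g. n=2048 cells, k=40 active, sample s=20 synapses, threshold theta=12
print(false_match_prob(2048, 40, 20, 12))
```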
A
I think it's interesting to see that in biology, the brain settles on certain levels of sparsity in its connectivity and activity.
A
It
might
be
that
it
would
be
far
more
desirable
to
have
even
wider
networks
and
even
more
sparsity,
not
as
a
percentage,
not
as
absolute
number
of
bits
on
and
biology
has
trouble
doing
that,
because
the
sparsely
makes
something
it's
harder.
It
is
for
the
neurons
to
find
the
other
neurons
it
has
to
connect
to
yeah.
You
know
if
I'm
saying
oh,
I
need
to
make
20
synapses
to
a
pattern
out
there.
a large part of how brains learn is that they have to form new connections, and those connections have to be nearby; you have this whole pruning cycle when you're young. But we don't have any of those limitations in a non-biological system, and so it might be possible to make extremely sparse systems that even perform far better than human systems in some ways.
A
But
biology
couldn't
go
there
from
a
practical
point
of
view
from
biology.
It
just
doesn't
have
enough.
Wiring
to
you
know,
connect
two
neurons
that
are
remotely
apart
in
this
and
there's
nobody
sitting
there
going.
Oh
this
neuron
a
has
to
connect
the
neuron
b,
so
I'll
I'll
put
a
wire
in.
They
can
only
do
that
with
local
rules,
which
just
makes
it
difficult.
H
Yeah, there's some — in terms —

C
In terms of non-biological things like backpropagation: with very high-dimensional systems, people are showing that learning works better — it's harder to get stuck in local minima, things like that — but I don't think those results are dealing with sparse systems. Anyway, that's one example. I thought of another reason why high dimensionality is important with sparsity: you can do unions more effectively, so you can represent more and more things in superposition, and you can represent uncertainty better.
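[Editor's note: a hedged sketch of the union property with made-up sizes — not Numenta code. OR together several sparse patterns; each stored pattern still matches the union perfectly, while a random pattern almost never does, because the union stays relatively sparse in a high-dimensional space.]

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, stored = 2048, 40, 10

patterns = [rng.choice(n, size=k, replace=False) for _ in range(stored)]
union = np.zeros(n, dtype=bool)
for p in patterns:
    union[p] = True            # superposition: OR of all stored patterns

def matches(p, theta=30):
    """Match if at least theta of the pattern's k active bits lie in the union."""
    return int(np.sum(union[p])) >= theta

print(all(matches(p) for p in patterns))   # stored patterns: True
novel = rng.choice(n, size=k, replace=False)
print(matches(novel))                      # random pattern: almost surely False
```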
There's a branch of statistics that has looked into this, I think from a Bayesian perspective — I forget what the name is; it's something like "distributed coding" — that, from a statistical point of view, a Bayesian point of view, has looked into this as well, and they reach similar conclusions, I think.
A
Don't think of it as far away or close; think of it as just more neurons that are sparsely activated. It's really not about distance. I could take a single cortical column and, in theory, put a lot more neurons in that cortical column — let's say without having to increase the volume of the column. It's not about distance, and it's not about saying I'm now going to connect to something different — although our voting neurons do that.
A
It's
it's
just
a
matter
of
saying.
Imagine
if
I
could
put
500
000
neurons
in
the
cortical
column
and
I'm
going
to
make
them
one-fifth.
As
I
went
five
times,
sparser
or
one-fifth
of
the
density
activation.
You
know
it
doesn't
mean
that
the
neurons
are
actually
doing
something
else.
It
just
means
I'm
going
to
use
more
neurons
to
represent
the
same
thing
and
and
just
because
they're
sparser
means
it's
in
the
brain
physically.
That
means
you
know.
A
I
do
have
to
go
a
little
bigger
distance,
but
again
in
the
brain
I
might
have.
I
could
say
well:
the
brain
could
have
settled
on
a
cortical
column
that
was
two
millimeters
in
diameter,
so
that
would
be
four
times.
As
you
know,
area
as
one
that's
one
millimeter,
and
so
a
particle
column
would
still
process
and
do
all
the
same
stuff,
but
it
would
be.
It
would
still
be
harder
for
the
neurons
to
find
the
other
neurons
that
are
active
because
physically
the
wires
have
to
go
by
near
each
other.
C
Okay, yeah — these are good questions. I think there might be more to it, too, but —
G
There's this computer science theorist who's done quite a few studies on the effects of depth and width of networks — Sanjeev Arora. I haven't really looked too much into his stuff, but I think in one of his papers he may have claimed that wider networks help with tasks when you have a lot less training data. But I think they've always been dealing with dense networks, so I'm not sure if his results would apply to sparse networks too.
Oh yeah — I was just looking through one of his papers on the effect of infinitely wide networks given less data, and in the conclusion he said that they're more efficient at learning from smaller amounts of data. But they're dealing with dense networks.
A
So what would be the intuition for that? Why would a very wide, dense network require less data to train on? It's not immediately obvious to me. Does anyone have an intuition for that?
A
That makes sense to me on a sort of semi-intuitive level, but I wouldn't have been able to say confidently that it was actually going to work. I mean, maybe it's clear to you, but to me that's like a hand-waving thing — like, yeah, yeah, kind of, sure, you know, it's a more complex —
F
I was going to say, there is this idea of over-parameterized networks: once you cross what's called the interpolation threshold — the whole idea is that when you have as many parameters as you have data points, you can basically just memorize the data points — and then, when you cross that threshold, you have more free parameters than you have data points.
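[Editor's note: a quick illustration of the interpolation threshold using ordinary least squares — a toy with made-up sizes, not anyone's proposed model. With fewer parameters than training points, the fit has residual error; at or beyond the threshold, a linear model can memorize arbitrary targets exactly.]

```python
import numpy as np

rng = np.random.default_rng(2)
n_points = 50
X = rng.normal(size=(n_points, 200))   # 200 random features per data point
y = rng.normal(size=n_points)          # arbitrary (random) targets

for n_params in (20, 50, 200):         # under, at, and over the threshold
    # Least-squares fit using only the first n_params features as parameters.
    w, *_ = np.linalg.lstsq(X[:, :n_params], y, rcond=None)
    err = np.linalg.norm(X[:, :n_params] @ w - y)
    print(f"{n_params:3d} params -> training error {err:.2e}")
```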
A
I'd have to have a lot more than a hundred thousand parameters to get that idea to work, right? Yeah. So I can see what you're saying: if I basically say, oh, I want to learn a few things, sure, I can see that. But it seems to me we want to go in the opposite direction — we want to learn more things, not fewer things.
C
But Lucas, are you saying you might have a network with, you know, a billion parameters, and it's only learning a million things — and now what does it do with the other 999 million parameters?
But Lucas, there must be something other than just a lot of extra parameters — there must be some other pressure towards good interpolation mechanisms. Because when you have so many extra parameters, the argument is you can learn almost any function of the training examples, and so you can learn really bad interpolation functions too. So there must be something else to force it to learn, quote-unquote, good interpolations.
F
Right
right,
yeah,
I
I
don't,
I'm
not
sure
what
which,
which
pressure
what
is
driving
the
optimization
towards
that
I
remember
reading
there
was
this
drive
towards
learning
simpler
representations
with
all
these
extra
parameters,
but
I
I.
D
Yeah — no, I can't tie it to a particular region of the brain, except for the fact that we do know that replay occurs, and that would be a great opportunity for something to, you know, simplify the representation.
D
Right,
but
I
mean
we
have
so
many
arguments
that
we
we
don't
have
a
mechanism
for
back
prop,
but
if
there's
a
way
of
consolidating
the
memory
in
a
simpler
representation,
I
mean
at
a
cognitive
level.
You
know
I
I
stick
myself
in
something
and
all
of
a
sudden,
you
know
things
align
and
I
can
say.
Oh
you
know,
the
simpler
representation
suggests
itself.
That
you
know
explains
everything
else.
I
see
I
can
forget
all
the
crap.
A
I was thinking, you know — that transfer, all of a sudden, when you understand something more deeply: I argued in the new book that much of that is figuring out the right reference frame for the data you have. So you have all these facts, all these observations, but they're not actionable until they're applied to a reference frame where you have actions which lead from point to point — and then everything becomes actionable.
I argued in the book that the process of becoming an expert, or really deeply understanding something, is to have the right reference frame for it. So — I'm just throwing this in as an aside — I think that what you were calling consolidation, or simplification, whatever: much of that is just figuring out the right behaviors and reference frames in which to rearrange the facts you have. And it's not obvious up front what the right way to do that is. So when we're puzzled by a bunch of facts in front of us — I even wrote about this in On Intelligence — you just rearrange them, put them down differently on a piece of paper, sort them in a different way. Or, in the example in A Thousand Brains,
I argued how you could take historical facts arranged on a timeline versus on a geospatial map. And so, to me, that moment when you say, "oh, now I get it" — that's what's happening: you've found the better reference frame, so everything kind of clicks together, which means you've discovered a representation closer to the actual structure of the world, or of the thing you're looking at. Right? If you map the data in a way that doesn't really correspond to the physical world, it's just not going to work, and you just won't be able to figure out how to solve problems. But as soon as you find some reference frame that really maps to the physical world, then you go: oh — now I understand.
D
Well, I mean, to me — to the extent that emotions are a driving factor for memorization —
A
That aha moment is clearly something that's recognized, and it's an emotional moment, and then the brain says, remember this. Yeah — the clarity that comes with it somehow has to be recognized in the neurons right away, like bingo. Like I talked about: all of a sudden, all these constraints are satisfied at once. I don't quite know the biological mechanism.
It has to be local in the cortex, because that's where the models are created, right? So there has to be something in the cortical column that says, oh, I've discovered it. I mean, the simple thing to say is that it's just making correct predictions, but it seems like you do a whole bunch at once — it's like, all of a sudden, you know that all your predictions are going to be true. It somehow just knows; I don't know how.
D
Well, one epiphenomenon: when they look at the brain activity of someone who's given a mathematical problem, in non-specialists the brain lights up everywhere — it's like they're trying to harness all the parts of the brain to solve the problem, trying to find it, whatever it is — whereas in the specialist there's only activity in a localized area, at a low level, because they're basically, you know, "I've got this scheme, I can do it, I can work through it."
A
That goes back to Michelangelo's argument — you know, it's fundamental to our theories that correct prediction means you understand something, and it'd be less active, right? So —
D
Yeah,
so
what
drives
it
to
find
you
know
to
to
you
know,
find
that
aha
moment.
It's
there's
got
to
be
something
persistently
irritating
the
brain
about
something
you
know.
A
Yeah, I guess the prediction hypothesis we have makes sense. It's just the quantity issue for me, not the quality. I mean, I think you're right: the answer is that the brain all of a sudden figures out it has a model here that makes correct predictions, and it's low activity, and that looks good.
The problem I have, again, is: how does that happen for a really big aha moment, when you satisfy all these constraints seemingly instantaneously? But maybe it's not instantaneous — maybe you go through a series of tests really quickly. You know, it could go, oh, let me go through my constraints —
H
Well, I guess, if anything — I wasn't probing it from that perspective. I was more just probing it from, like: over the next week, as I think about this — maybe diffusely, in the background — just sort of keeping tabs on these things. I'm particularly interested in whether this is helpful to our investigation into continual learning as we undergo that. In this conversation, though, I was also just kind of curious — and I'm going to try to think through this
H
For
myself,
it's
just
how
does
this
relate
to
column
voting
just
because
I
also
think
about
those
lateral
connections
as
some
form
of
wideness,
so
I
kind
of
want
to
sort
of
reconcile
that
form
of
wideness
with
you
know
this
sort
of
other
definition
of
like
a
wide
layer.
That's
the
speed
forward
layer.
A
I think the problem with column voting is that it was based on the idea that each column is doing a sort of complete object modeling, and so what they're voting on is a commonality — a thing. And I don't know if that works in a system that doesn't have reference frames and movement and is more of a convolutional network. Maybe it does; I don't know. It's not clear to me how different parts of the wide region, or wide layer, would have something to vote on.
Yeah — as you said, the union property reminded me of something; this may be pretty obvious, I don't know. When you said the union property: if I have a pattern and I have another pattern, and one representation holds them simultaneously, the union property says I don't get confused — the two do not interfere with each other sufficiently to get confused about anything. With sparsity, it occurred to me that that's the same as noise. If I'm pattern A, and pattern B is another pattern being represented simultaneously, I don't want to get confused by pattern B — but pattern B could just be noise, too. So from pattern A's perspective, pattern B is noise, or another thing; it doesn't really know. So it's sort of noise robustness.
I think that basic idea — that you can have multiple representations, whether you're learning a new pattern, or doing a union of patterns, or you've got a noisy input — the fact that other patterns do not interfere with the original one in any significant way is the basic idea that we're shooting for here.
H
I wasn't saying that it gives you column voting. I was just drawing a weak analogy between the notion of a wide network and the fact that, if you consider all the lateral connections in column voting together, it's wide in a different way. So I just had a kind of question of how to reconcile those two different definitions of wide.
A
In a system where you have columns — you know, a true Thousand Brains system — I think the same requirement holds: sparsity of column activation, or, you know, each column having a very sparse activation to share. Because otherwise you have the same issues in voting between columns in general: if voting is going to work, it works better if the patterns being voted — the votes — are sparse patterns, for the same reasons. You can have multiple votes at the same time, and the noise resistance, and so on. But I don't think that — it's just: anytime you want to represent something, it's better to do it with a very sparse, wide area.
Yeah, yeah. An interesting thing just popped into my head: imagine I have a region with several thousand columns — maybe five thousand columns, something like that. Do we assume that the output of each of those columns would be sparse when it's voting and saying, hey, I think it's this? But do we also assume that some individual columns are inactive, so that the actual number of columns that are active at any point in time is itself sparse?
C
Yeah, I don't think we have any, necessarily — in the models we implemented, there was no inhibition across cortical columns, and I don't think there's anything in the biology that works that way either.
A
No, I don't think so. It just occurred to me. So my working assumption would be that that wouldn't be the case: every column would be trying to do something — every column would have a very sparse output, but every column would be doing something; every column would be active.