From YouTube: NeurIPS 2019 Conference Recap from Numenta
Description
NeurIPS 2019 Conference Recap from Numenta. Discussion at https://discourse.numenta.org/t/numenta-research-meeting-dec-18-2019/6928
To NeurIPS it was myself, Marcus, and Michelangelo. I have three or four papers I picked out, and that's just a couple of flavors; I'll go through them quickly. I don't understand any of them in detail yet, so don't ask me too many questions about them, but some of them we might want to go into later at some point.
Growing pains: it was huge. It was basically like drinking from a firehose, and you could see, I felt, that the organizers were really going through growing pains. It was kind of hard to find some stuff, it was just really crowded, and at some points it was really hard to navigate.
Talks starting early, or going late, it was just insane. But anyway, the really good thing about NeurIPS, I would say, is that anyone you want to talk to is there, so it's a chance to meet a lot of people, kind of like SfN. I feel SfN, even though it has 40-50 thousand people, is actually better organized than NeurIPS was; but then, they've been that big for a while.
So that's my overall NeurIPS impression. The venue is beautiful; it's Vancouver, right on the water. The building on the left is the conference center, one of the conference centers, and you can just see the ocean, with really nice places to hang out. Here's what it looks like at night, it's quite scenic, and it's like five or ten minutes from downtown Vancouver, so it's a really good location. Anyway, I'll give two prizes. This is the most eye-catching poster design.
The video will show up in here. Yeah, you can see it's the most insane thing. It took a long time: this thing would grab people from the audience, you would sit there, and this robot would airbrush you. It just took a long time; I think it would analyze your head or something, which took a while, and then eventually it would come and paint.
So, just a few papers and talks I thought were notable. This one was from Yoshua Bengio: he gave a talk that was widely publicized, and it was about, you know, the problems with deep learning and what you need to do next. The first couple of slides were pretty good, kind of along the lines of stuff we've been saying, and then he had a few kind of specific things I was a little less sure about.
That's one nice thing, again; I've mentioned this before with machine learning conferences, and NeurIPS does a really good job of this: all the papers are there, most of the posters are there, most of the talks and even the workshops are live streamed, and the recordings are there. So it's really nice that way, unlike neuroscience conferences, where you can't even take a picture of a poster.
He talked about System 1 versus System 2, and the basic thing there is that System 1 is fast; an example would be recognizing something flashed in front of you. System 2 is slower, you know, things like planning and reasoning. This was kind of weird to me; it's a really weird distinction to make for AI, but this is, I guess, the framing he used.
The basic idea is: classical machine learning is really dependent on IID theory, and you need to not have IID assumptions. IID means independent and identically distributed; 99% of statistics is based on that, on the idea that you can shuffle your data and it doesn't matter, because it's all one distribution. Of course, if you're dealing with temporal sequences, then you can't shuffle the individual observations, and if you have continual learning or a changing distribution, you can't assume identically distributed data.
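To make the shuffling point concrete, here is a minimal sketch (my illustration, not from the talk): shuffling leaves IID data statistically unchanged, but it destroys the structure a temporal sequence carries.

```python
import numpy as np

rng = np.random.default_rng(0)

# IID data: order carries no information, so shuffling changes nothing
# a classical learner could detect.
iid = rng.normal(size=10_000)

# Temporal data: an AR(1) sequence where each step depends on the last.
ar = np.zeros(10_000)
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

def lag1_autocorr(x):
    """Correlation between consecutive samples; ~0 for IID data."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_autocorr(iid))                  # ~0.0
print(lag1_autocorr(ar))                   # ~0.9: temporal structure
print(lag1_autocorr(rng.permutation(ar)))  # ~0.0: shuffling destroyed it
```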
There was a slide on this, but I thought Rich Sutton had a much better formalism of it. He said: well, maybe the entire world is all one distribution, we don't know, but it's a huge state space, and when you're an agent moving around in it, you should treat it as if things are changing all the time, because you can only ever sample a small subset of it.
You have model parameters, or either type of parameters, and you learn task 1, then you learn task 2, and so on all the way down. Then you test on all of the tasks you've learned, and if there's no catastrophic forgetting, at the end you should see perfect performance on all the previous tasks. That's a kind of rigid way that people think about continual learning.
What this work does is treat that whole process as just one step in a bigger optimization process: you go through that entire continual-learning sequence, you test it, you get some result, and then, if you can formulate the whole thing as a differentiable system, you can actually backpropagate through it.
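Here is a toy, runnable version of that protocol (my sketch; the model, a single scalar trained by gradient descent, is made up to keep it short). Training tasks sequentially and then testing on all of them makes the forgetting visible; the meta-learning move is then to treat this entire loop as one differentiable function of the meta-parameters and backpropagate through it.

```python
# Toy continual-learning protocol: the "model" is one scalar w, trained
# by gradient descent to hit a task-specific target. Training on task 2
# overwrites task 1 -- a scalar analogue of catastrophic forgetting.
def train_on(w, target, steps=100, lr=0.1):
    for _ in range(steps):
        w -= lr * (w - target)        # gradient of 0.5 * (w - target)**2
    return w

def loss(w, target):
    return 0.5 * (w - target) ** 2

tasks = [1.0, -2.0, 3.0]              # targets for tasks 1, 2, 3
w = 0.0
for target in tasks:                  # sequential training, no replay
    w = train_on(w, target)

# Test on ALL tasks at the end: perfect continual learning would give
# near-zero loss everywhere, but only the last task is remembered.
print([round(loss(w, t), 3) for t in tasks])   # -> [2.0, 12.5, 0.0]
```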
In his case, the particular network he set up is interesting in that there's a feed-forward network, which is the orange one, and then I think some sort of a gating network or attention network up top, and this thing basically acts as a bias on what inferences pass through. The idea is that the orange network is learning all of the tasks, and the blue network tells you what subset of the orange network to focus on at any point in time. That's the basic idea.
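A minimal sketch of that gated forward pass, with made-up sizes and names (this is just my illustration of the idea, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=64)                     # input
W_pred = rng.normal(size=(128, 64)) * 0.1   # "orange" prediction network
W_gate = rng.normal(size=(128, 64)) * 0.1   # "blue" gating/attention network

h = np.maximum(W_pred @ x, 0.0)             # features shared across tasks
gate = 1 / (1 + np.exp(-(W_gate @ x)))      # per-unit gate in (0, 1)

# The gate multiplicatively biases which units pass through, selecting
# a task-specific subnetwork of the prediction pathway.
y = gate * h
print((gate > 0.5).mean())                  # fraction of units let through
```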
You can, you know, tie it in a little bit to temporal memory, but they're doing it in a particular way: there's only one big mask at the end here, and then he does meta-learning on top of this whole thing. His system ended up doing pretty well on the 600-task example; the blue line is their system, which he called "the animal," across all of them.
If it had learned perfectly, you would expect 1.0 across the top, but instead you have this blue line, which is still much better than all of the other competing techniques. And what was kind of interesting is that the way this network did it is by learning to become really sparse. The top shows you the activations of the big network, the orange one, and then there's the middle one.
The next one involves a kind of egocentric spatial structure. The idea here is that you're moving around in this world, and if you're touching this object at the bottom and you move up, you want to make a prediction of what you're going to sense up there, independent of where this object is relative to you.
What you have is the motor state of the arm at one point in time, and then the state of the arm at the next point in time; that's m(t) and m(t+1). Some outputs have been sent out, and you get the sensory input you're sensing right now; the task is to predict the sensory input at this next configuration.
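Schematically, the task is to learn a function s(t+1) ≈ f(m(t), m(t+1), s(t)). A minimal, untrained sketch with made-up dimensions (mine, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

dim_m, dim_s, hidden = 8, 16, 64            # hypothetical sizes

W1 = rng.normal(size=(hidden, 2 * dim_m + dim_s)) * 0.1
W2 = rng.normal(size=(dim_s, hidden)) * 0.1

def predict_next_sensation(m_t, m_t1, s_t):
    """Predict s(t+1) from the arm state now, the arm state next,
    and the current sensation."""
    x = np.concatenate([m_t, m_t1, s_t])    # condition on both arm states
    h = np.tanh(W1 @ x)
    return W2 @ h

s_hat = predict_next_sensation(rng.normal(size=dim_m),
                               rng.normal(size=dim_m),
                               rng.normal(size=dim_s))
print(s_hat.shape)                          # (16,)
```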
I talked to him a little bit about that. I think I would have to go through it in more detail, but I think there's some simplistic notion of at least egocentric reference frames in there. I asked him: okay, have you thought about having features that are shared across objects? He said no, they haven't dealt with that, but they do deal with a problem that we haven't really directly addressed, which is this transformation of motor state into an external-world reference frame, a coordinate space.
There was a talk on something called scene representation learning. This one I would really need to look at again, but the basic idea is: you have a camera that can move around the scene, you know where the camera is in the scene, and you know what pixels, what sensory information, you're seeing. And this thing more explicitly tries to learn the actual location, a 3D-location-based representation.
At every point you know where the camera is and you know the pixels, the sensory representation; then you move around and you train on that. Then, for any object, you can go to a novel viewpoint that you haven't seen and re-render it. They made the computer graphics system behind it differentiable; it was a kind of ray tracing algorithm that they made differentiable, and that's how they did it. I don't understand it yet.
Then there was a whole session on climate change, covering just about everything. Climate change is a big problem, there's tons of data, so how can machine learning, which is all about interpreting data, hook up with climate change? It's very broad, so it went all over the place: you can make predictive models of weather and stuff like that, and maybe do it well using machine learning. Carla Gomes had a really great talk on that.
Once you have that data, what they're pointing out is that different states have done a better or worse job of having electricity that's low carbon, so once you can measure it, you can actually choose where you want to run your training. And maybe, if you make some changes to your model, it might actually do better: they look at the GPU and actually monitor the power output from the GPU. So it's maybe a non-trivial thing; you may make small changes in your model, and maybe that will have an impact.
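For what it's worth, the measurement side of that is easy to try: nvidia-smi reports instantaneous board power draw, which you can sample while training to estimate energy use. A small sketch (my illustration, not the speakers' tooling; it requires an NVIDIA driver and fails if nvidia-smi is absent):

```python
import subprocess

# Query the current power draw (in watts) of each visible NVIDIA GPU.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)
watts = [float(w) for w in out.stdout.split()]
print(f"current draw per GPU (W): {watts}")
```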
A little interesting insight: when I made my prefrontal cortex-thalamus model extremely sparse, the runtime decreased quite a bit, because the neural network simulator behind it, NEST, which scales nicely on supercomputers, is actually event based. So the fewer spikes there are, the less compute power the whole simulation takes.
I have two posters to show involving deep learning in the brain, two things involving grid cells, and then a brief thing on the Yann LeCun talk on hardware. This first one is an update: in the past I've presented this work from DiCarlo and Dan Yamins; DiCarlo's at MIT and Yamins is at Stanford. They do these deep learning models of V1, V2, V4, and IT, and there's a new thing here.
I'm hoping you can kind of remember what I presented before: if you train a network on object recognition, you can then use combinations of those neurons to predict neurons in actual brains that are seeing those same images. They found, surprisingly, that these hundred-layer networks were useful for predicting neurons as well, and they didn't know what to make of that. The way they decided to incorporate that insight was they took their four-layer network, shown at the right, and essentially reinterpreted these hundred-layer networks as maybe unrolled versions of a much shallower recurrent network.
At each level of the network they have these connections looping back, so activity keeps kind of going up and around. Did I include the other figure here? I'll just go back and show it here; actually, I don't need to show a figure for this. Basically, through doing this, with a network that looked like this, they got better object recognition with this four-layer network, and it's better than ever before at predicting neurons in IT and V4, etc.
The second deep-learning-in-the-brain thing I'll show, the final one, is this paper called "Deep Learning without Weight Transport." The premise of this is that after Rumelhart and Hinton did the original backpropagation paper, the next year Stephen Grossberg did a paper showing why this was impossible in the brain, and he called the issue the weight transport problem. In this paper they proposed a solution to it, though their solution is sort of schematic.
Backprop imposes this really weird constraint where you need to keep your weights, your connections, in sync: anytime you learn in one pathway, you have to also learn in the other. Another way to say this is that a neuron's axon and a neuron's apical dendrite would need to connect to the same upstream or downstream cell.
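To see where the weight transport problem lives, here is a toy one-hidden-layer example (my illustration of the problem, not the paper's algorithm, which instead trains the feedback pathway so that it converges toward the transpose of the forward weights). Exact backprop carries the error backward through the transpose of the forward weights, which is exactly the "transported" quantity a biological circuit wouldn't have:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, ReLU, squared-error loss.
W1 = rng.normal(size=(32, 10)) * 0.1   # forward weights, layer 1
W2 = rng.normal(size=(5, 32)) * 0.1    # forward weights, layer 2
B = rng.normal(size=(32, 5)) * 0.1     # separate, independent feedback weights

x, target = rng.normal(size=10), rng.normal(size=5)
h = np.maximum(W1 @ x, 0.0)            # forward pass
err = (W2 @ h) - target                # output error

# Exact backprop: error flows back through W2.T -- the forward weights
# must be "transported" into the feedback pathway.
delta_backprop = (W2.T @ err) * (h > 0)

# Without weight transport: error flows back through independent feedback
# weights B (feedback-alignment style); no sharing with W2 is required.
delta_no_transport = (B @ err) * (h > 0)

cos = (delta_backprop @ delta_no_transport) / (
    np.linalg.norm(delta_backprop) * np.linalg.norm(delta_no_transport) + 1e-12)
print(round(float(cos), 3))            # low at init; the paper trains B toward W2.T
```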
So this work sought to solve that specific problem: to build biologically plausible networks that do that syncing of weights in a way that isn't obviously wrong. Basically, they show how, if you set up a network architected in this kind of way, where information goes up and error goes down, the error could be a prediction error, it could be reward,
it could be anything; if you have a network set up like this, then you can use these kind of special Hebbian rules to make it all work. They've kind of removed one of the objections to backpropagation in the brain, and my assessment is they're going to keep going with this. I think they're going to be able to show networks that kind of resemble the brain and are doing backpropagation or something equivalent to it.
Whether that makes it right, I don't know, but I think they're going to successfully make it so that backprop in the brain isn't obviously wrong; I think that's kind of their goal. Oh, and to say one last thing on this: part of their achievement here was that, using this technique, they were able to achieve ImageNet results that were on par with those of networks trained on true backprop.
So not only does it kind of resemble something biologically plausible, it gets really good results. That's it for deep learning. The next thing I'm showing, oh, 15 seconds on this one: this is Ila Fiete. She showed this thing involving, you almost don't have to look at the title, you can just listen, Hopfield networks, which are kind of relevant to us, like our layer 2/3, our object layer.
Oh no, that's another figure for that paper. But now, something kind of interesting happened in one workshop session: there were back-to-back talks from Ila Fiete and Surya Ganguli, who are both physicists who work on grid cells, and they laid out two views that aren't totally contradictory, but they're not telling the same story. Ila's is kind of more in line with what we often say, although, I don't know, we play around with different ideas.
That's the idea that grid cells are this fundamental 2D thing, a continuous attractor, that at the core there are these 2D neural circuits. The second viewpoint is that grid cells are really more of a special case of another thing that's fundamental. One way I would describe it is that, if you describe neurons as performing a sort of manifold learning, then in certain cases you would naturally expect grid cells to emerge, but they're not the thing that's at the bottom; they arrive more like a result.
And the idea is that even 1D variables are handled as embeddings into a 2D manifold. On the bottom you see some figures from the paper Mirko and I did with her and are still working on with her, and so this was kind of the rallying cry: representing continuous variables using these 2D modules. And just to show one brief thing from the end of her talk.
The second view treats grid cells as an emergent property of creating a place cell code. There's also a poster and paper that went along with this, and I think we're going to talk about their view more in depth in the future; it's not totally new. I'll bring up the poster: "A unified theory for the origin of grid cells through the lens of pattern formation." It makes some sense to me, but I'm still wrapping my head around it.
Okay, the final thing I'll show, and this is the end of the grid cell portion: Yann LeCun gave this workshop talk, it's online, called "What deep learning hardware will we need?", and he made a series of points. I'll go through a couple of slides where he lists things he's learned. One thing I'll just say first, though: a recurring theme through his talk was that he sort of bashed on matrix multiplication in hardware.
One of the things he brought up was that most of the processing time in a lot of deep learning hardware is spent on convolution, and convolution is, like, taking a filter and moving it over an image, or over an input. What he was pointing out is: yes, you can take a convolution, reformat it, and compute it using matrix multiplication, but that's not very efficient; you have to do all sorts of copies, you have to expand memory.
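The standard reformatting trick is im2col: copy every overlapping patch out into a row of a big matrix so the convolution becomes one matrix multiply. A toy sketch of the memory expansion he's referring to (mine, not from the talk):

```python
import numpy as np

def im2col(x, k):
    """Stack every k x k patch of the 2D input x as a row."""
    h, w = x.shape
    return np.stack([x[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)])

x = np.arange(36, dtype=float).reshape(6, 6)
cols = im2col(x, 3)                 # (16, 9): pixels copied up to 9 times
kernel = np.ones((3, 3)).ravel()
y = (cols @ kernel).reshape(4, 4)   # the convolution, via one matmul

print(x.size, cols.size)            # 36 vs 144: a 4x memory expansion here
```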
So if you design your hardware to do matrix multiplication, you're going to pay this huge overhead, whereas if you perform the actual convolution more natively, you're going to be able to do much better. That was one thing he brought up. So, I'll show lessons-learned number one; I don't know if you want to just skim this here, yeah.
Point 3.3 here I just want to call out: "Say goodbye to matrix multiplications?" "Say goodbye to tensors?" He was laying out the point that the changes in algorithms that are on their way may reverse our assumptions about what you'd expect the silicon to be doing while performing what will be known as deep learning in the future.
On results: I think what he was talking about was that if you're publishing a machine learning result and it requires you to use nonstandard hardware, then other people aren't going to be able to replicate it, and it doesn't go anywhere; you can end up with kind of a dead end.
I guess, yeah, I've been adding more pictures to slides. I had, like, this is the big conference hall where Yoshua gave his talk, and it's just sort of absurd how big it is, and sort of how cool it is, so I'm just kind of standing up on the side; I wanted to take that picture of the overall conference hall.
I guess in retrospect it would have been nice to prepare individual slides for the different things; I think a lot of this stuff Subutai and Marcus had already gone over. But I guess what I wanted to capture here is my personal experience, just the aura of NeurIPS: you just keep on seeing so many people, and, as Subutai said, it's just a great way to talk to people.
One theme was the inherent benefits of connectivity, of the way the network is connected. Over the summer I presented the weight agnostic neural networks paper, I don't know if you guys remember that, let me know, but that one was basically saying: we're going to figure out a network connectivity such that, no matter what weights we set it to, it's going to do well on some task, for that single architecture. That was actually one of the papers at NeurIPS; a toy sketch of the idea follows.
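A toy illustration of the weight-agnostic evaluation (my sketch, not the paper's code): fix a topology, tie every connection to one shared weight, and score the architecture across several values of that weight; the search keeps topologies that do well no matter which value you pick.

```python
import numpy as np

def forward(X, w):
    """Tiny fixed topology: 2 inputs -> 3 hidden (tanh) -> 1 output,
    with every connection tied to the same shared weight w."""
    h = np.tanh(w * (X @ np.ones((2, 3))))
    return np.tanh(w * (h @ np.ones((3, 1))))

# Score this one architecture on XOR-like data for several shared weights.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
for w in (-2.0, -1.0, 0.5, 1.0, 2.0):
    err = np.abs(forward(X, w).ravel() - y).mean()
    print(w, round(err, 3))         # mean error at this shared weight
```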
Along similar lines you also have the lottery ticket hypothesis papers, finding random subnetworks within larger networks that work, things that just do well on a given task. I just thought that was kind of neat as a theme, to see people thinking along those lines, asking what's in the connectivity of the network itself. There was also a whole subset of posters dedicated to sparsity.
So this is open, and it's networked, and it's growing by orders of magnitude, as you've described. It seems to me the machine learning community is much more efficient and faster moving than the neuroscience community, which is, you know, historically big, but kind of stuck in old ways in many ways. So my impression is that machine learning is just going to overtake it, and they're just going to say, look, we've got you covered, guys, don't worry about it.
The last time I went to NeurIPS, my impression was that you saw there were those massive corporate sponsors that were coming in and, if you wish, seeding the direction in which people would go. I'm just wondering if you guys got some impression as to who the big players were among the sponsors making it such a rich conference.
I think one of the major things was just the sense that we need something new. That was there even in the likes of Yoshua's slides, which said, you know, "maybe machine learning," with sort of a question mark, like whether or not what we have now
isn't the new thing, but something just different; I mean, it would still be the new thing, yeah, but rephrased ever so slightly. Also, in the NewInML workshop, the mentor that I had was Tom Dietterich, am I saying his name right? We were having breakfast, and this is a guy that's been around for so long, but he was saying things sort of along similar lines, like, yeah,
a lot of people have been around for a while, but, you know, we're really trying to be open to just different ideas; and I think they are really looking for different ideas and biological inspiration. So, yeah, okay.
So then, the two papers. I did have two papers that I thought were interesting in this domain; I think Subutai pointed out one and then Marcus pointed out the other, but at some point in time I'd like to present possibly both of them, maybe on the same day, just down the line. The grid cells one in particular, because I think it's maybe a good entry point for me to understand grid cells, since it really tries to lay down some of the math about how they may come about.
I was trying to take in the things he laid out, but I don't think I could really comment; I vaguely remember it going, okay, biological inspiration and stuff. But he actually had this whole presentation, again, about how you can potentially meta-learn everything: you can meta-learn the architecture, you can meta-learn the learning rules, and you can meta-learn the whole evolution. Yeah.