From YouTube: DevoWorm (2020, Meeting 41): Computational Pareidolia, Temporal Scaling, and Deep Neural Control
Description
Papers on Temporal Scaling and Deep Neural Control, an update to the DevoWormML lecture on Computational Pareidolia, and a review of the task board. Attendees: Susan Crawford-Young, Krishna Katyal, Shruti Rajvanshsingh, and Bradly Alicea.
B: Not bad, I guess; they woke up to see. The dog had tried to eat the guts of the deer that they brought home the other day, which was not good, but they've been hunting. Last year, hunting was yesterday. Okay.
B: Anyway, I found a paper online that I thought you might be interested in, so I can either read the title to you or send it to you. I don't know if I can find it at the moment, but I...
B: Yeah, just remembering; I'm trying to finish a bunch of online mechanics, and so things are not as together as they normally are. Yeah.
A: Harvard development, yeah: we have that, actually. Krishna sent it to me.
A: To send it right now? Well, probably not; probably after. Well, you can send it now, but I won't be able to see it till after the meeting. So, yeah.
A: I don't know if I should start, because I'll record it for people who aren't able to make the meeting, and we'll just follow along. If someone comes in during our agenda, then they can catch up. So welcome to the meeting; welcome, Susan. Today we're going to talk about a couple of...
A: ...things. I've prepared an update on a talk from a session on machine learning that we had in fall of 2019, called DevoWormML. If you've been in the group a while, you probably attended it or heard about it; it was a series of lectures on different topics. So today I'm going to talk about a topic that we touched on last week called pareidolia (this will be computational pareidolia), but I've updated it with a number of slides.
A: I created the lecture in a somewhat ad hoc manner in 2019, but I've since run across articles, and things have happened since then that deserve an update, so we're going to do that. We'll also have some papers that I'd like to review; the temporal scaling paper is one, and there are a couple of other papers that we should probably talk about, and they're pretty interesting. And then, for people watching on YouTube, we'll probably go over the task board and give an update on it.
A: All right, so we're just starting the meeting. I'm going to give a presentation update, then go over some papers and talk about the different things going on in the group, just as a reminder of where we are. All right. So, again...
B: Yeah, I could shut off my visual feed.
A: We have, yeah, we have a... so why don't we get started here? I'll probably start with this presentation. For Krishna and Shruti, this was a presentation that you haven't heard yet.
A: Okay, so he's going to come back. All right; I'll actually go to the meeting board, or the task board, first, just because that's a little bit less involved, and I'd like Krishna to see the whole presentation on this. So, all right, our task board is here. This is our repository.
A: There were a bunch of lectures here that are sort of frozen in 2019, but I'm trying to update them. We had a number of different presentations on machine learning from a DevoWorm perspective, meaning either projects that DevoWorm is engaged in, or machine learning that's relevant to biological systems, developmental systems, and so forth; so we had a lot of material. The pre-trained models lecture actually turned into a Google Summer of Code project.
A: Hi, Krishna. So this was the course that we did last fall. It needs to be updated a bit, but I think it was a pretty nice course, and if you get the chance you should check it out, even in its current state. We'll be updating the course gradually.
A: So that's the course. As for the task board, let's go over that real quick: the major tasks for 2020. This is the task board for group meetings, and we have a lot of things here. We have a number of items that are either in progress, on hold, to do, or maybe an action item, which is more like something we want to accomplish short term.
A: In progress, we have something called the Krishna paper review. I think that was a paper Krishna wanted to review; or that might have been the paper you're writing on SARSA, I'm not sure. Okay, so that's sort of in progress, and we can keep it in progress for as long as we want. The DevoWorm bibliography in EndNote is also in progress; Dick and I have been working on a bibliography with different resources in it.
A: That's in progress. Lagrangian embryo readings: that was something we talked about several weeks ago; we're still working on the readings for that, and it kind of got pushed back. We could put that on hold for now.
A: I can't move it because I'm not in the right account, I guess; well, anyway, that's okay. The bibliography-in-EndNote issue, 59, is tied to 48. Create a theory layer for DevoLearn: that's something that's actually labeled Hacktoberfest, but I've been working on it a little, and it's something we'll be making progress on shortly.
A: This, again, is the DevoLearn platform, which has the DevoLearn software and some other software: our collection of deep learning and machine learning software.
A: This periodicity paper is something that'll be coming up in the next month, because we need a draft by the end of the year. So I'm going to be working on that, and in the next week or two I'll start really focusing on it; maybe in a couple of weeks we'll talk about the details of what's needed to get that draft out.
A: I know we talked about people being interested in it; I think Susan, and maybe Jesse, and maybe Dick. There are a bunch of people who want to do things on that. We haven't really fleshed it out yet; I'm going to be fleshing it out in the next couple of weeks, and then I'll let you know more specific things you can help with or contribute to. This paper is for the BioSystems special issue; issue 41 is linked to 47.
A: Complexity measures is something we haven't talked about recently, but it's where we've discussed different topics in complexity, and we're kind of assembling a set of measures, or a set of techniques. So that's ongoing; we haven't had any developments on it recently, though.
A: Finally, this Bacillaria non-neuronal cognition paper. This is a paper we're working on, and I think we're planning on submitting it early next year, so we have a bit of time on that.
A: There's an issue here, 65, which is a follow-up on Bacillaria psychophysics, and it's actually linked to this, because this is a presentation that we prepared for Neuromatch, and that laid the groundwork for some of the technical detail for this paper. So the next step for this paper will probably be finishing it up and getting it fleshed out, maybe in January or February; stay tuned for that.
A: Then, I don't know if you got the email last week, but Dick Gordon sent out a bibliography for the gigapixel technique. That was sort of related to Susan's presentation on imaging; he had proposed that you could use gigapixel techniques, and he sent a bibliography on that and maybe a short article. I sent it out to the group via email.
A: That's something we might follow up on as well. It was interesting; it's a technique where you basically sample an image at a very high resolution and then downsample it. I didn't really read a lot about it, but it looks interesting. And then, finally, we have the axolotl embryo animations and segmentation. We're still kind of in a holding pattern on that, but I wanted to make it into an action item.
A: Maybe that'll be happening soon. So, under hold, we have updates on axolotl data and analysis.
A: Yeah, because right now the next step that we have, and it's a little hard, is to take all the acquisitions and put them onto a spherical projection, or something similar, so that we can represent them as a map.
A: It could be on a sphere, or it could be a flat map that you can move around on and explore. It can be put into a projection, but that's a little tough, because you end up deforming some of the images somewhat to get everything to fit together.
A: But the better the sampling we can have in that 3D space, the better, because you can then sample the different sides of it and you don't have to deform it as much.
A: All right, sounds good. So that was for the axolotl project. Then we have recruiting people as DevoWorm contributors; that was Hacktoberfest, and as I updated you a couple of weeks ago, we had good interest in it, at least at the beginning. It kind of fell off towards the end, but we had a fair number of people contribute. For some larger orgs...
A: ...they get hundreds of people, but I think we did pretty well. We're going to keep recruiting people; perhaps we'll be doing some public presentations about DevoLearn and other things in the near future. That'll help get public awareness up in terms of what DevoLearn is all about. I still have these Hacktoberfest labels on here, just as a reminder of the things we did for Hacktoberfest, but also of promotional things that we might do in the future.
A: These are issues that someone might take on as a new member of DevoLearn or DevoWorm; they might want to take ownership of one and create something. So that's a possibility.
A: Neural organoids: we haven't really talked about that at all in here, but it's something I've been discussing with other people. I haven't really given too much thought to it, so we'll leave that on hold for now. Then there's embryo visualization for the OpenWorm Docker container; that's related to issue 38.
A: Again, we need something, a visualization or a very simple simulation, that we can put into the OpenWorm Docker container, which contains sort of an executable of all the different programs that run under OpenWorm, from a biophysical simulation to a neural network simulation to other types of simulations that show the worm. Those are all adult worms, so they don't really have a developmental component, and it would be nice to have that.
A: That was proposed long ago, but it never really materialized. Then there's the axolotl montaging, which is related to the axolotl work. Montaging just means you take the images and project them somehow onto either a sphere or a flat surface, so people have something to explore. That's a whole process in and of itself, so it won't be a trivial thing. And then, finally, to do...
A: ...we have a recap presentation for 2020. Every year now, I think, we're going to do this: in early January we'll do a presentation on the previous year's goings-on, where we assess where we are and where we're going. I did this last year and I think it's a good thing to do every year, so that'll be coming up. I think it'll be useful for aligning our group meetings board here with the reality of what we're going to do going forward.
A: It'll reorient us, or orient us, to the same things, so I think it'll be a good exercise. Then there's this annotated bibliography on computational developmental biology topics. This is something I think I mentioned a couple of weeks ago with respect to annotated bibliographies.
A: If we think of a topic we want coverage on, perhaps some imaging topic or some machine learning topic, we can create these annotated bibliographies, where we gather some references and write a couple of sentences about why each paper is important, and then have it published in one of our locations. It might be useful for education; it might be useful for people coming into the group.
A: So that's an open issue, a recent one, I think, but it also relates to the bibliographies that we're creating in EndNote. The bibliographies in EndNote are just references formatted so that people can download them in one place. This is a little bit different: this is a collection of references, each with an annotation, and it helps people understand the literature better.
A: Then this paper review on scale-free biology: this is something Jesse Parent wanted to do, but it hasn't really moved, so it should probably be on hold. And then these last two are sort of older issues that...
A: ...have fallen into the background. So, as you can see, we have a lot of issues that maybe need to be reevaluated, merged, or split apart. I try maintaining it, but (let me give a link to this) if you have any suggestions on how we might sort through these, or if there are things you think someone should take on...
A: ...or if you can think of ways we can split them up, then you're welcome to suggest those. To review: every issue has a number here, so if you want to talk about an issue, you usually cite the number, and then we can go to that issue and address it as an individual thing. Some of these issues are linked; anything that says bibliography, for example, like 40, 48, 59, and...
A: ...61: those are all linked. GitHub issue boards don't have a good way to link issues, so it's just something you see from the co-occurrence of a term across issues. Those are all things we can review as needed. So, anyway, I just wanted to present on that and review things to see where we stand.
A: The next thing I wanted to talk about was this presentation on computational pareidolia; I think Krishna dropped right when I was talking about it. The idea behind this is that it's one of the presentations we did in DevoWormML in 2019, and I'm updating it for 2020.
A: I'm going to keep updating these presentations as needed; I just want to keep the content fresh. So this is computational pareidolia. We talked about this with respect to Mayukh's work. I'll explain what pareidolia is in a minute, but this is...
A: ...I think it's a generative model created by a StyleGAN, which is a generative adversarial network using a style transfer algorithm, and they're using it to create this artwork. This is one of the pieces of artwork it generated; you'll see that there's kind of a face in there, and you'll see why that's relevant in a minute.
A: This is Mayukh's project that he posted on LinkedIn last week; he presented on it in the group last week. He's building this thing called torch-dreams, which is a platform for creating a similar type of generative art. He's generating these images based on what the network is doing; you can use it to see what neural networks "see". The idea here is generating these patterns that look like they really have a lot of structure, even though they're generated from essentially random processes.
A: So why do we see that? It's related to a phenomenon called pareidolia. This is where you see faces in things: objects in nature, or in human environments. Here's an example; this one is obviously deliberate, since someone put a set of Cookie Monster eyes and a cookie on the lid of this trash can. But it was put there because the trash can kind of looked like Cookie Monster from Sesame Street; the opening reminded them of that mouth.
A: Well, yeah, that's exactly the kind of thing I'm talking about here. People see things; in this cup of coffee, in the air bubbles, you have what looks sort of like a face. Wikipedia defines pareidolia as interpreting a vague stimulus as something known to the observer.
A: These are not necessarily faces, but they look like faces to us. We recognize faces pretty well, and when we see these things, which have what look like eyes and a mouth, we assign the face category to them, interpreting those features as parts of a face. This is something everyone does, and it's not something...
A: ...that means anything is wrong with you; we'll talk about the categorization aspect of it later. As for what pareidolia means in Greek: "para" means beside or beyond, and the "eidolon" part means form or image.
A: So it's "beside the image", or "beside the form". This brings up a lot to think about, with things like this doorway, where you have these two openings here and then this slot here, and it looks like a face.
A: The thing has a structure, but then we assign meaning on top of it, which is actually pretty interesting from a semantic standpoint. Here's another example; this one is actually of bistable perception, and you may have seen these in psychology textbooks or experiments. If you take this image here on the left, it can look like a duck or it can look like a rabbit, depending on how you look at it.
A: If you're looking at it, you'll see a duck or a rabbit, and then, if you shake your head and retrain your eyes, you'll be able to see the other one; the rabbit's ears are here, and the duck's bill is here. So it's bistable...
A
I
think
I'll
buy
stable,
and
so
this
is
something
they
call
the
duck
rabbit
illusion,
and
so
what
they
did
with
this,
though,
is
they
took
this
image
and
they
used
google
cloud
vision
to
analyze
it
so
now
this
is
like
we're
taking
this
thing
and
we're
taking
a
machine
learning
algorithm
that
isn't
you
know
it
doesn't
have
this
sort
of
semantic
layer.
A: The idea is that once they start rotating this image (and remember, they're not changing the image, just rotating it in space), it starts being predicted as either a duck or a rabbit. If you shift it 45 degrees to the left from the top, it's more likely to be predicted as a duck, whereas if you shift it 45 degrees in the other direction, it's more likely to be seen as a rabbit.
A: That's interesting, because there's really no difference in the actual content of the image, just in how the algorithm sees it. In humans, this is something we do as a way to assign meaning, as an interpretation of what we're seeing, but the machine learning algorithm doesn't have that aspect to it.
A: This pareidolia also works with ambiguous images. So what's actually going on here? I've written a blog post about this, and there are some other blog posts if you're interested in the topic a little more; they provide some basic information about it.
A: Specifically, in terms of how computers might deal with this problem, there's this post here directly on pareidolia; another one, "Robot looks for faces in clouds"; "A bug of the human mind reproduced in computers"; and then "An idea for a personality engine", which is where machine pareidolia meets a face tracker. I'll make these slides available after the meeting, so you can get these links.
D: Yeah, it's a mixture of biological population networks, whereas the core of OpenWorm is C. elegans, I guess.
A
It's
in
pretty
much
interesting
topic,
yeah!
Well,
yeah.
We
actually
we're
going
to
talk
about
a
paper
later
that
one
that
you
sent
before
so
we'll
we'll
talk
about.
A
We
talk
about
the
other
one,
so
yeah.
That
was
that's.
Thank
you
so
check
these
blog
posts
out
and
I'll
send
the
links
later.
A: So this is a really interesting thing that they did here. We've talked about how things can look like faces, and how things can be seen in two different ways: is it a duck or is it a rabbit? In this case, what they're doing is basically taking a bunch of geometric shapes and breeding what they call faces. In genetic algorithms, what you do is take a bunch of...
A: ...genes that encode different features of an image; in this case they're geometric structures, shapes like triangles and so on. Then you recombine them, so that they're overlaid on each other in the image, and they produce all these different variant images. So you get about...
A: ...100,000 images that have different variations of these overlapping shapes, and then you select on those based on what looks most like a face. So you're actually selecting on things that aren't really faces, just shapes overlaid on each other, and selecting on what looks most like a face. You give the computer a criterion; sometimes people do this with human evaluators, and yes, they...
A: ...can pick a face out of something that doesn't look like a face. But you can also automate this and say: I'm going to select the things that look most like a face given these criteria. Over a number of generations, you end up with something that looks like the Mona Lisa.
A
Although
it
doesn't
really
look
like
the
mona
lisa
entirely,
I
mean
that's
what
we're
seeing
it
looks
enough
like
the
mona
lisa
to
say
that's
what
it
is,
but
it
actually,
if
you
zoom
in
it's
just
a
bunch
of
polygons,
and
some
of
the
detail
of
course,
is
missing
like
around
the
mouth
and
around
the
eyes.
We
can
still
recognize
it
as
a
face.
A: So that's an interesting way to approach it. Another, more general way to approach this problem is a study on visual illusions. This zooms out a little and asks: given these visual illusions, can we predict them using a deep learning architecture?
A
Can
we
you
know?
How
do
we
know?
How
can
we
sort
of
divide
out
like
what's
going
on
in
terms
of
visual
illusions
versus
not
because,
like
I
said,
computers,
don't
have
that
semantic
layer?
So
maybe
it's
a
matter
of
processing
and
so
that's
what
these
people
these
authors
did
here
want
to
knobby
at
all,
a
looser
emotion
produced
by
deep
neural
networks
trained
for
prediction
and
so
what
they
did
was
they
used.
A
This
architecture
called
prednit,
which
is
a
deep
learning
neural
network
based
on
predictive
coding,
and
so,
if
you
don't
know
what
predictive
coding
is,
I'm
not
going
to
explain
it
here,
but
it's
a
method
that
we
it's
a
theory.
I
guess
that
people
use
to
explain
how
the
brain
is
able
to
sort
of
predict
stimuli
incoming
stimuli
in
a
way
that
is
that
allows
them
to
put
things
together
and
form
a
coherent
percept.
A
So
you
know
you're
able
to
do
you
know
the
neurons
in
the
brain
are
able
to
predict
subsequent
stimuli
coming
in
and
put
together
like
a
coherent
percept
put
together.
A
You
know
a
response
and
so
forth,
and
so,
if
you
want
to
know
more
about
productive
coding,
we
can
talk
more
about
it,
but
just
suffice
it
to
say
that
that's
the
way
it
worked,
and
so
using
this
architecture
they
were
act
able
to
accurately
predict
motion
and
direction
of
the
so
this
image
here
on
the
left,
it's
a
series
of
propellers.
A
Now
this
image
isn't
moving,
it's
just
a
visual
illusion.
It
looks
like
it's
moving.
If
you
look
at
it,
you
stare
at
it
for
a
while
you'll
see
it
looks
like
it's
moving
slowly,
the
gear
these
gears
here
are
but
they're
not
moving.
So
what
they
did
was
they
had
one
case
in
which
the
gears
actually
moved
all
right.
They
gave
it
motion
and
then
another
one
where
they
just
gave
it
this
illusion.
A
So
this
illusion
has
a
lot
of
motion
cues
in
it,
but
it's
not
actually
moving
versus
something
that's
actually
moving,
and
so
they
wanted
to
see
a
prednick
could
distinguish
between
the
two,
and
so
they
found
that
prednit
can
accurately
predict
motion
and
direction
of
propeller
motion
in
moving
images.
So
when
they
had
it
the
case
where
this
was
moving,
it
can
predict
the
motion
and
direction
of
these
propellers.
A
It
can
also
represent
motion
and
direction
for
visual
illusions.
So
when
this
these
things
are
stationary
like
they
are
right
now,
it
can
also
represent
the
motion
and
direction
that
you
can
see
if
you
look
at
it
as
a
human
and
so
these
motion
generated
illusions
may
be
produced
through
something
called
predictive
coding
which
we
talked
about.
So
the
idea
is
that
there's
I
don't
know
if
this
shows
it
well.
This
is
the
architecture
for
pregnant.
A
So
this
is
where
you
have
a
number
of
layers
in
this
pregnant
architecture,
and
you
have
a
bunch
of
input
features
and
it
just
generates
something
over
time
that
allows
it
to
predict
things
like
motion
and
things
like
other
types
of
features
like
that,
and
so
this
is
from
an
article,
predictive
coding
networks,
meet
action,
recognition
and
so
there's
an
active
area
of
research
in
this,
where
they're
looking
at
different.
A: But we have to remember that human and deep learning perception are very different things. This article on Medium by Carlos Perez actually talks about this; he makes a comparison between human perception and deep learning perception, and asks: how do machines, or in this case deep learning systems, do perception?
A
So
deep
learning
can
be
trained
specifically
to
ignore
higher
order
invariances,
which
means
that
they
are
selective
in
the
things
that
they're
looking
at
in
in
an
image.
Networks
are
not
trained
with
the
ability
to
identify
affordances,
which
are
things
in
the
environment,
that
sort
of
suggest
an
action
like.
A: ...door handles. We have certain things that are affordances, like door handles: we learn at a very young age how to use them, and then we also expect things that look like door handles to behave the same way. Networks are not trained with this ability, so they don't know how to identify affordances that are obvious to us; you might even call it common sense in a broader sense, but deep neural networks don't have that. They also...
A
They
also
rely
heavily
on
occlusion
perspective
and
shadow,
which
is
both
good
and
bad,
and
it
allows
them
to
differentiate
things,
but
it
also
doesn't
allow
them
to
do
things
like
engage
in
this
paradolia.
So
they
can,
you
know
you.
Can
they
can
identify
things
that
the
the
as
you've
seen
from
the
previous
examples,
human
perception
and
machine
perception
differ
quite
a
bit,
and
so
he
lays
out
why.
A: But there's also a neuroscience of pareidolia, and a couple of studies focus on this phenomenon and explain what's going on in the brain. This article by Liu et al., "Seeing Jesus in toast: neural and behavioral correlates of face pareidolia", looked at the interaction between top-down and bottom-up processing. Top-down is face recognition, which is something we do (we can recognize faces pretty easily), and bottom-up...
A
So
we
can
see
we
can
recognize
faces
more
quickly,
especially
if
they're
famous
faces
or
ones
that
we've
seen
before
much
quicker
than
objects
that
form
a
face,
but
other
forms
of
perception,
such
as
audition
also
exhibit
paradolia
and
I'll
talk
about
that
in
a
couple
slides
an
example
of
that.
So
another
example
of
this
is
from
barrick
2019
and
that's
a
machine
learning
approach
to
predict
perceptual
decisions
and
insight
into
faceperidolia.
A
So
in
this
case,
they
talk
about
the
perception
of
external
stimuli
in
two
ways:
one
is
they
care
as
the
characteristics
of
the
stimulus
and
two
is
the
outgoing
brain
activity
prior
to
presentation.
A
So
when
we
perceive
an
external
stimulus,
we
look
at
its
characteristics,
but
there's
also
like
a
prior
activity
that
we
evaluate-
or
at
least
the
brain
does,
and
they
found
that
spontaneous
face.
Brain
activity
during
the
pre-stimulus
period
might
predict
face
paradolia.
So
when
we
have
a
noisy
image,
we
will
see
a
face
because
that's
what
our
prior
experience
tells
us
we
should
see,
and
so
this
is
based
on
spontaneous
brain
activity.
A
They
measured,
I
think,
eeg
in
this
study
and
they
actually
found
that
there
was
some
brain
activity
related
to
this
sort
of
you
know
sort
of
prediction
and
they
were
able
to
find
that
time.
Frequency
components
of
hemispheric
asymmetry
in
the
brain
gave
the
best
classification
performance,
but
pre-stimulus
alpha
oscillations,
which
are
components
of
the
eg
signal,
lead
to
predicting
perceptual
decisions.
A: I don't know whether these are useful in machine learning or not, but we can definitely think about this in a classificatory way. If we think about it in terms of identifying stimuli as a true positive or a false positive, we can think of pareidolia as a false positive: seeing something that is not a face, but giving it the attribution of a face.
A
And
so,
if
you
see
this
in
machine
learning,
this
can
actually
be
elicited
by
malicious
actors,
but
it
can
also
be
elicited
by
malicious
actors
in
human
life
too.
And
if
you
see
something
that
doesn't
exist,
you
know,
if
you
see
like
a
face
in
something
you
know,
people
can
use
that
to
exploit
your.
You
know
they
can.
A
You
know
they
can
use
it
to
exploit
you
in
different
ways
or
they
can
use
it
to
exploit
something
you
know,
but
they
can
also
do
it
in
machine
we'll
talk
about
how
this
works
with
machine
learning
algorithms
in
a
bit.
It's
actually
interesting
false
pop
these
false
spots
are
interesting
when
they
happen
in
the
human
brain
because,
on
the
one
hand,
they're
sort
of
you
know
it's
a
false
positive,
so
you
see
a
face
in
something
where
there
isn't
a
face.
A
It's
you
know
it's
beneficial
in
some
ways,
but
it's
interesting
one
of
the
examples-
and
this
is
not
a
visual
example.
A
This
is
called
the
cutaneous
rabbit
because
they
use
the
example
of
a
rabbit,
but
it
could
be
anything
that
sort
of
scurries
up
your
arm,
and
so
obviously
there
was
nothing
scurrying
up
your
arm.
It
was
just
someone
delivering
taps
to
your
arm,
but
what's
happening
here.
Is
that
you're?
Not
it's
not
that
there's
anything
running
up
your
arm.
It's
that
you're
taking
those
cues
that
where
it's
hitting
your
arm
and
it's
filling
it,
your
brain
is
filling
in
the
gaps.
A
So
it's
basically
taking
these
sensations
and
it's
filling
in
an
experience
that
maybe
you've
had
or
maybe,
if
you've
seen
a
rabbit
hopping.
You
know
that
that's
what
happens,
and
so
you
translate
that
to
what's
going
on
on
your
arm,
and
so
they've
done
studies
on
this,
where
they
thought
about
this
in
terms
of
multi-sensory
integration,
and
they
found
that
indeed,
people
are
integrating
information
to
explain
what's
going
on
with
their
arm,
but
they
can't
see
and
verify
with
vision.
A
People have thought about this in terms of deep learning AIs being easy to fool. There's an article on that, and there's also an article on robust physical-world attacks on deep learning models. So there's this idea of being at risk for an adversarial attack, and we'll see what that looks like in a minute.
A
There's
another
paper,
houdini,
fooling
deep,
structured
prediction
models,
so
they
they
give
away
to
full
deep
structure
prediction
models
which
are
based
on
deep
learning
models
and
ad
hat
real
world
adversarial
attack
and
arc
face
face
id
system.
So
this
is
again
another
example
of
an
adversarial
attack.
So
an
adversarial
attack
is
where
you
attack
the
network
with
things
it
has
not
seen
before
or
you
try
to
fool
it
in
some
way
and
you
try
to
break
the
system,
and
so
it's
a
little
bit.
A
It's a way to test for robustness. This is an example where they took a stop sign and transformed it: it still looked like a stop sign, but they rotated it so it would look like a dumbbell, or squished it down so it looked like a tennis racket. They were able to change the shape of the stop sign, so when they enter it into the algorithm, the algorithm might misidentify it as something that it isn't.
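A minimal sketch of that idea: a purely geometric transform flips a crude classifier's answer even though the object is unchanged. The shape, the aspect-ratio rule, and the labels are all made up for illustration; the attacks in the cited papers work on images fed to deep networks, not on point sets.

```python
import math

# Toy "adversarial" geometric transform: a classifier that labels a
# point set by its bounding-box aspect ratio changes its answer after
# the shape is squashed, even though a human would still call it the
# same object. Both labels here are hypothetical.

def aspect_ratio(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

def classify(points):
    r = aspect_ratio(points)
    return "sign" if 0.8 <= r <= 1.25 else "racket"

# A regular octagon, like a stop sign seen head-on.
octagon = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4))
           for k in range(8)]
squashed = [(x, 0.3 * y) for x, y in octagon]  # squish it down

print(classify(octagon))   # roughly square bounding box
print(classify(squashed))  # same object, different label
```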
A
You can fool deep neural networks because they might focus on the picture's color, texture, and background rather than picking out the salient features that a human would recognize. So they see this insect here as a manhole cover, just because it maybe looks like a manhole cover given the background and the foreground, or this mushroom looks like a pretzel because it's shaped like a pretzel, not because it is a pretzel; it's just using a different set of cues.
A
Finally, this might also be happening in cells themselves. We talked about human brains and neural networks; now we might actually talk about cells. This is a paper I found from 2015 on oscillatory stress stimulation uncovering an Achilles' heel of the yeast MAPK signaling network.
A
They delivered stress at different frequencies over time, and they saw that the cells showed a hyperactivated transcriptional stress response. They also claim that in yeast, which remember has no brain but is a eukaryotic cell, there's a sensory misperception: the cells incorrectly interpret oscillations as a staircase of ever-increasing osmolarity.
A
So instead of understanding that the fluctuations in osmolarity are oscillations, the cells respond as if the stress were ever increasing, and the authors interpret this as a sensory misperception. That might not be the correct interpretation of the data, but this is what they're doing, and it's a Science paper.
A
There
might
be
something
to
it
so
misperception
as
they
as
they
make
this
term
as
they
define
this
term,
is
the
capacity
of
the
osmolarity
sensing
map
k
network
to
re-trigger
or
to
introduce
sequential
osmotic
stresses,
and
so
this
this
is
sort
of
what
they
define
the
misperception
and
it
results
in
a
trade-off
of
fragility
to
non-natural
oscillatory
inputs
that
match
the
re-triggering
time
so
as
they
make
these
perturbations
they're
able
to
expose
hidden
sensitivities
of
the
network
of
the
cell
regulatory
network,
and
so
in
triggering
these
sort
of
responses
they're
able
to
play
around
with
the
way
it
perceives
the
environmental
stimulus,
and
then
that's
all
I
have
on
that.
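The re-triggering idea can be sketched with a toy accumulator: a response that fires a fixed increment on each new pulse and never decays within the experiment, so an oscillating input reads as an ever-rising staircase. This is a loose illustration, not the paper's MAPK model; the `threshold` and `step` values are arbitrary.

```python
# Minimal sketch (not the paper's model): a sensor that re-triggers a
# fixed stress response on every rising edge of an oscillating input.
# The input returns to baseline each cycle, but the accumulated
# response only climbs -- the "misperception" of oscillations as
# ever-increasing osmolarity.

def staircase_response(signal, threshold=0.5, step=1.0):
    response, total, above = [], 0.0, False
    for s in signal:
        rising = s >= threshold and not above
        above = s >= threshold
        if rising:            # re-trigger on each new pulse
            total += step
        response.append(total)
    return response

# Three square-wave pulses: the input oscillates, the response only rises.
pulses = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
print(staircase_response(pulses)[-1])  # -> 3.0
```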
A
I wanted to update the lecture because I didn't think there was enough on biology when we did the original DevoWormML course. We really just focused on the machine learning aspect, which was nice, but I wanted to put a bit of a biology twist in there, because there's already a lot of material out there on machine learning.
D
It was a nice presentation. Thanks. I have something to add regarding the theory building that we are going to do.
D
Yeah, so we have all done something regarding Neuromatch and individual learning. I think that, if it's possible, we can compile all of those things, add more, and why not make it into three or four volumes and produce them as free Kindle books, because it would give us, the DevoWorm group, a lot of recognition, you could say.
D
Anyone can download it, and since Amazon has such a great user base, we can get more contributors and more people who can join us, because very few people, mostly of biological background, surf GitHub, and the odds of being found there are low. So we can produce a free ebook, say four volumes of 100 to 150 pages.
A
Yeah, I think that would be great. We'd have to think about what we would want to put in it, but yeah, I think we could definitely put something together.
E
No sir, but the presentation, I really liked it; it's something that even I was looking for, and I guess links would really help me because I was working on it right now. I'm learning something like this; I work on convolution.
A
Okay, all right, so I guess I'll finish up today. We'll talk about two papers. I have a bunch of stuff in this folder, but I do have two papers. Krishna actually sent me the one that we just saw, but he also sent me a link to this, and then Susan mentioned this paper today, so we should probably talk about it. So this is one paper, and then we'll talk about this other paper, which is related.
A
It is essential that the correct temporal order of cellular events is maintained during animal development. You have a single cell, and then you have a bunch of cells that proliferate, and in C. elegans, of course, the cells are programmed, so they're sort of deterministic. Once you get an AB and a P1 cell at the two-cell stage, the P1 cell will always become the posterior end of the animal.
A
You know, neurons and muscle and things like that, and then they migrate as development proceeds, and you end up with your adult. So yes, it is important to have that temporal order. During post-embryonic development, the rate of development depends on external conditions such as food availability, diet, and temperature. What they're referring to there are the larval stages, where you can actually have plasticity depending on environmental inputs. So in the L1 stage, which is soon after hatching out of the egg...
A
They have this period where, if they experience starvation, they can live for a long time in a sort of suspended animation, and then maybe around L2 or L3 they can go into the dauer stage, where their cuticle grows really thick and they can basically hibernate, all in response to food availability. So there are a lot of things going on in the larval period.
A
That
can
be
that
you
know,
but
then
that
affects
their
later
growth
as
adults
and
so
how
the
timing
of
cellular
events
is
impacted
when
the
rate
of
development
is
changed
at
the
organismal
level
is
not
known.
A
So
if
you're
changing
the
rate
of
development,
what
happens
to
cellular
events
in
terms
of
their
timing?
That's
the
question
they're
asking,
and
so
they
use
a
novel
time
lapse,
microscopy
approach
to
simultaneously
measure
timing
of
excellatory
gene
expression,
hypodermal
stem
cell
divisions
and
cuticle
shedding
in
individual
animals
during
c
elegans,
larval
development
from
hatching
to
adulthood.
A
This
is
what
they're
measuring,
and
so
it's
worth
saying
that
there
are
actually
quite
a
few
cells
that
are
born
post
embryonically,
so
once
the
egg
catches
it
has
about
580
to
600
cells
and
then
in
the
post-embryonic
period
it
gains
the
rest
of
its
cells
and
it's
usually
a
lot
of
things
in
the
in
the
epidermis
and
things
like
that.
So
there's
a
lot
of
cell
birth
in
the
post-embryonic
period.
It's
just
not
they're,
not
like
critical
to
the
physiology
of
the
worm.
It's
just
kind
of
like
you
know
adaptations
adaptation
related.
A
So there was a lot of strong variability there. However, this variability obeyed temporal scaling, meaning that events occurred at the same time when measured relative to the duration of development in each individual. They also observed pervasive changes in population-average timing when temperature, diet, or genotype were varied. So now you're varying the temperature, diet, or genotype systematically: with genotype you can have different types of defined mutants, and they can behave differently, or you can change the diet.
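Temporal scaling in this sense is easy to state in code: divide each individual's event times by that individual's total duration of development, and the rescaled times collapse onto each other. The numbers below are invented for illustration, not data from the paper.

```python
# Temporal scaling: event times differ between individuals in absolute
# hours, but collapse once each is divided by that individual's total
# duration of development. Times are made-up illustrative values.

def rescale(event_times, total_duration):
    """Express event times as fractions of total developmental time."""
    return [t / total_duration for t in event_times]

# Two hypothetical worms: one develops in 40 h, one in 50 h.
fast = {"total": 40.0, "molts": [10.0, 20.0, 30.0, 40.0]}
slow = {"total": 50.0, "molts": [12.5, 25.0, 37.5, 50.0]}

print(rescale(fast["molts"], fast["total"]))  # [0.25, 0.5, 0.75, 1.0]
print(rescale(slow["molts"], slow["total"]))  # same relative times
```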
A
Development
divided
into
epics,
they
that
differed
in
how
the
timing
of
events
were
impacted.
Yet
these
variations
and
timing
were
still
explained
by
temporal
scaling
when
timing
was
rescaled
by
the
duration
of
respective
epochs
in
each
individual.
Surprisingly,
timing
will
be
temporal
scaling,
even
mutants,
lacking
lin-42
period,
which
is
a
core
regulator
of
timing
of
larval
development.
A
So,
even
when
you
have
this
mutation
that
sort
of
knocks
out
this
regulatory
mechanism,
you
still
have
the
scaling,
and
so
this
the
scaling
is
exhibited
by
strongly
delayed
heterogeneous
timing
growth,
arrest,
timing
of
larval
development
is
likely
controlled
by
timers
based
on
protein
degradation
or
protein
oscillations.
But
such
mechanisms
do
not
inherently
generate
temporal
scaling.
A
But
basically,
what
they're
saying
here
is
that
a
lot
of
develop
these
post-embryonic
developmental
things
that
happen?
There's
a
temporal
scaling
that
isn't
it's
actually
independent
of
some
of
the
things
that
you
know
we
would
expect
them
to
be.
A
Related species that vary greatly in size exhibit the same number of bands at a similar position relative to the size of the embryo. So they basically have the same number of bands; they're just spread out. The signal isn't tied to the size of the organism; it's related to some other factor, something about the way the genes are expressed. Here they examine whether there is scaling in time analogous to this scaling of spatial patterns in development.
A
So
in
other
words,
you
know
if
you
change
the
length
of
development
like
from
maybe
two
weeks
to
three
weeks,
and
this
is
just
hypothetical.
The
different
events
will
spread
out
in
time.
They
won't
just
like.
At
the
same
time,
then
there
will
be
an
extra
week
of
nothing
going
on
they'll
spread
out
evenly.
A
It's
interesting
because
you
can
actually
do
manipulations
experimentally,
where
you
delay
development
through
some
of
the
things
I
mentioned
about
starvation
and
you
can
actually
change
the
length
of
development
in
that
way.
And
so
then
the
question
is
what
happens
after
they
come
out
of
starvation
and
the
answer:
is
they
either
catch
up,
meaning
that
they
start
to
grow
a
lot
faster
or
they?
You
know,
maybe
miss
some
of
those
developmental
periods
altogether?
A
There's this order in how the events occur, and that can't really be changed by changing the environment. Due to its invariant cell lineage and highly stereotypical development, C. elegans is an ideal model organism to study this. Its post-embryonic development consists of four larval stages, and there's a clear periodic aspect to C. elegans development, with molts occurring every eight to ten hours at 25 degrees Celsius.
A
So
they
what
they
do
is
they
mold
their
cuticle,
their
cuticle
sheds
and
it's
a
very
regular
process.
Each
developmental
stage
is
where
they
have
a
mole.
So
at
the
end
of
l1,
there's
a
mole
at
the
end
of
l2.
There
is
a
mole
and
so
forth,
and
this
is
very
regular
in
terms
of
its
timing,
and
so
these
larval
stages
are
accompanied
by
a
genome
wide,
a
solitary
expression
of
a
multitude
of
genes
which,
with
peaks
occurring
once
per
larval
stage.
So
these.
A
Alongside these pulses of gene expression over time, you have divisions in the cell lineage, so you have these different peaks in terms of the cell lineage; this is L1 through L4.
A
These
are
seam
cells.
These
are
the
cells
that
undergo
morphogenesis
during
the
post-embryonic
period,
and
so
these
seam
cells,
which
are
in
the
cuticle
basically
in
the
epidermis
they
divide,
and
they
can
look
at
the
cell
division
and
seam
cells
to
see
this
sort
of
scaling.
And
so
why
is
scaling
important?
You
might
ask,
I
mean
other
than
regulating
development.
A
When organisms split into different species, those species can have developmental periods of different lengths, and this scaling helps keep everything even as the developmental period shrinks and expands with speciation. So you might have a species of nematode with a developmental period...
A
That's
three
times
the
length
of
like
c
elegans
or
could
be
one
third
of
the
length
of
c
elegans,
and
so
you
know
that
might
be
in,
but
they
share
like
basically
the
same
genetic
program
and
everything.
So
how
do
you?
How
does
that
all
get
regulated?
Well,
this
temporal
scaling
actually
helps
because
it
just
basically
scales
things
up
and
down.
You
see
this
a
lot
in
the
in
in
the
growth
of
animals.
So
you
see
this
with
terms
of
body
length.
A
You
see
this
in
terms
of
body
length
versus
like
head
size
or
wing
size
or
whatever
you'll
see
this
a
lot
where
you
know
the
body
size
depending
on
what
it
is,
if
it's
larger
or
smaller,
the
other
organs
will
scale
in
terms
of
size
as
well,
and
that
scaling
will
persist
across
the
wide
range
of
species,
and
so
it's
a
very
it's
a
very
useful
mechanism
in
terms
of
developmental
change,
but
it
also
helps
to
keep
development
regulated,
and
so
you
can
see
there
are
a
lot
of
papers
on
scaling.
A
Basically,
it's
mathematical
scaling
where
they
look
at,
like
maybe
like
the
length
of
development
or
the
body
size
across
the
wide
range
of
variation,
and
then
they
see
that
there
are
other
things
that
are
sort
of
related
to
it
fractionally.
So
you
know
the
like.
The
larval
periods
are
one
example,
the
larval
period
scale.
You
know
one
quarter
of
the
length
of
development,
and
so
you
might
see
that
as
development
there,
the
length
of
development
varies
that
those
larval
periods
will
vary
as
well.
Now
that
can
change
over
evolutionary
time.
A
You
can
see
what
changes
in
different
developmental
periods
in
terms
of
their
relative
lengths,
but
you
know,
if
you
leave
everything
alone,
you
leave
these
mechanisms.
A
The animal-to-animal variability arises because each animal proceeds through its phase evolution at an intrinsically different rate, giving rise to the strongly correlated variability they measured for the timing of event pairs. While these phenomenological models do not provide a molecular mechanism for temporal scaling, they reveal a remarkably simple organization that unifies the broad variations in timing seen in their experiments. So if you want to learn more about this paper, I would just go ahead and read the rest of it; I can send it out.
A
I'll put the link to the folder in the chat, and I'll send out the paper later. Then I wanted to talk about this other paper that Krishna sent me during the meeting: an intelligence system, inspired by the worm's nervous system, that drives a car using only 19 control neurons. It imitates the nematode's nervous system, and by nematode that means C. elegans, to process information efficiently. This new intelligent system is more robust, more interpretable, and faster to train than current deep neural network architectures with millions of parameters.
A
So
this
is
actually,
if
you
know
anything
about
the
open
world
robotics
group,
you
know
that
they're
doing
basically
this
they're,
creating
these
networks
of
c
elegans,
neurons
and
then
they're
simulating
the
neurons
using
a
couple
of
different
programs,
like
I
think
c302
is
the
main
one
and
then.
A
With that, basically, they can create a movement circuit like you would observe in C. elegans, and they know the parameters that are used in C. elegans movement, so they can simulate that, map it to a robot, and the robot will behave like a worm. So we know how to do this.
A
We
kind
of
know
the
control
circuitry
for
c
elegans,
but
what
they're
doing
here
is
they're,
reducing
it
down
to
19,
neurons
and
then
they're
actually
using
this
to
control
the
vehicle,
I'm
not
mistaken
yeah.
So
this
is
the
picture
here
where
there's
a
self-driving
car.
This
is
the
camera
input.
This
is
the
attention
map
and
they're
using
a
convolutional
method
here
and
then
here's
the
network
here
of
interneurons
command
neurons
in
a
motor
neuron,
and
so
this
is
a
deep
supervised
model
that
they're
comparing
it
to.
A
As you may already be thinking, there must be another way to make machines intelligent with much less data or fewer layers. To get around this trade-off, applying it here to self-driving cars, they're using this neural network with C. elegans as an inspiration. Researchers from IST Austria and MIT have successfully trained a self-driving car based on the brains of tiny animals such as threadworms.
A
You know, we can tune it relatively well. In this case they actually cite OpenWorm here.
A
They
do
have
the
open
worm
image
here
so
they're
giving
the
attribution,
but
this
is
an
example
here
of
how
this
works.
So
you
have
this
level
you
have
motor
neurons
command,
interneurons,
upper
interneurons
and
sensory
neurons,
and
these
array
them
in
this
network
and
then
they
make
a
bunch
of
observations,
and
so
this
is
where
they're.
This
is
their
sort
of
their
model.
Their
network
model
that
they're
using,
and
so
they
developed
a
new
mathematical
model
of
neurons
in
synapses
called
a
liquid
time,
constant
or
ltc
neurons.
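A rough sketch of an LTC-style update, following the published idea that a nonlinear, input-dependent gate modulates the neuron's effective time constant. All parameter values here are illustrative, and this is a single-neuron Euler integration, not the trained 19-neuron network.

```python
import math

# Hedged sketch of a liquid time-constant (LTC) style update, after
# Hasani et al.: the neuron's effective time constant depends on the
# input through a nonlinear gate f, so the dynamics speed up or slow
# down with the stimulus. Parameters are illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(x, inp, dt=0.01, tau=1.0, amp=1.0, w=2.0, b=0.0):
    f = sigmoid(w * inp + b)        # input-dependent gate
    dx = -x / tau + f * (amp - x)   # liquid time-constant dynamics
    return x + dt * dx              # explicit Euler step

# Drive one neuron with a constant input; the state relaxes toward
# a fixed point set jointly by tau, the gate, and the amplitude.
x = 0.0
for _ in range(1000):
    x = ltc_step(x, inp=1.0)
print(round(x, 3))
```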
A
One
way
to
make
this
network
simpler
was
to
make
it
sparse,
meaning
that
not
every
cell
is
connected
to
every
other
cell.
When
a
cell
is
activated,
the
others
are
not
which
reduces
the
computation
time,
since
all
the
deactivated
cells
will
not
send
any
output
or
a
zero
input,
which
is
extremely
faster
to
compute.
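The computational saving from sparsity can be sketched directly: store only the synapses that exist and skip sources whose activation is zero, so silent cells cost no multiplications at all. The wiring and weights below are a made-up toy, not the actual NCP.

```python
# Sketch of why sparsity saves work: only existing synapses are
# stored, and sources with zero activation are skipped entirely.
# synapses[target] = list of (source, weight) pairs -- a toy wiring.
synapses = {
    "command_1": [("inter_1", 0.9), ("inter_2", -0.4)],
    "motor_1":   [("command_1", 1.2)],
}

def sparse_step(activations, synapses):
    out = dict(activations)
    for target, inputs in synapses.items():
        # only active (nonzero) sources cost a multiply
        total = sum(w * activations.get(src, 0.0)
                    for src, w in inputs
                    if activations.get(src, 0.0) != 0.0)
        out[target] = max(0.0, total)   # simple rectification
    return out

state = {"inter_1": 1.0, "inter_2": 0.0}
state = sparse_step(state, synapses)
print(state["command_1"])  # 0.9: inter_2 is silent and skipped
```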
A
Oh
ramin,
hassani,
who's,
a
person
from
open
worm
is
on
this
paper,
so
talks
about,
I
think,
he's
on
this
paper.
I'm
not
really
sure
I
know
he's
in
austria
so,
but
he
was
cited
in
this
article.
The
processing
of
signals
within
the
individual
cells
follow
different
mathematical
principles
than
previous
deep
learning
models.
A
So
we
get
into
this
a
little
bit
deeper
in
the
model
consists
of
two
parts
at
first
there
is
a
compact
convolutional
neural
network,
which
is
used
to
extract
structural
features
in
the
pixels
of
the
input
image.
Using
such
information,
the
network
decides
which
part
of
the
image
is
important
or
interesting
and
passes
only
this
part
to
the
second
system,
which
they
then
can
call
a
control
system
that
steers
the
vehicle
using
decisions
made
by
a
set
of
biologically
inspired
neurons.
This
control
part
is
also
called
a
neural
circuit
policy
or
ncb
and
cp.
A
Basically,
it
translates
the
data
from
the
compact
convolutional
model,
outputs
to
only
19
neurons,
in
an
rnn
structure
inspired
by
the
nematodes
nervous
system
controlling
the
vehicle
and
allowing
it
to
stay
in
the
lanes.
Okay,
I
remember
this.
Actually,
I
think
I
posted
this
paper,
the
actual
paper
I
think,
in
the
papers
channel
in
slack.
So
if
you're
interested
go
check
the
paper
channel
in
slack-
and
there
should
be
a
paper
in
there,
it
could
be
the
the
papers
channel
or
the
general
channel.
A
I
can't
remember
which
well
anyways
they're
able
to
do
this
and
they
compare
it
to
a
number
of
different
types
of
networks,
cnns
of
different
types,
rnns
lstm
and
npc
ncp,
and
so
you
can
see
that
the
number
of
parameters
is
lower.
The
number
of
neurons
is
lower,
the
number
of
synapses
is
much
lower
and
then
our
trainable
parameters
is
lower
as
compared
to
all
these
other
types
of
networks.
A
So
being
so
small,
they
were
able
to
see
where
the
system
was
focusing
as
the
tension
on
the
image
is
fed,
and
so
they
found
that
such
a
small
network
extracting
the
most
important
part
of
the
picture
made
the
few
decision
neurons
focus
exclusively
on
the
curbside
and
on
the
horizon.
So
this
is
a
self-driving
car
and
they're,
focusing
on
these
two
different
places
in
the
image,
which
is
a
unique
behavior
among
artificial
intelligence
systems
that
are
currently
analyzing
every
single
detail
of
an
image.
So
it's
basically
filtering
the
image.
A
And
then
so,
this
is
so
while
noise
is
a
big
problem
for
current
approaches
such
as
rain
or
snow
and
lane
keeping
applications,
their
ncp
system
demonstrated
a
strong
resistance
to
input
artifacts
because
of
its
architecture
and
novel
neural
modeling
keeping
their
attention
on
the
road
horizon.
Even
if
the
input
camera
is
noisy,
as
you
can
see
in
this
short
video.
A
So
here
you
can
see
they're
driving
in
the
rain,
and
the
algorithm
has
to
focus
attention
on
sort
of
the
road
as
it's
like
going
off
into
the
distance.
So
it's
constantly
able
to
keep
its
attention
on
the
end
of
the
road
here,
where
sort
of
as
the
road
is
going
off
to
the
edge
of
the
scene.
Here,
it's
able
to
keep
its
attention
on
the
road
and
not
maybe
on
other
things
like
the
white
windshield
wipers
or
the
rain
or
the
trees.
A
So
they've
created
tutorials
on
this,
so
their
tutorial
links
to
these
tutorials
and
github,
and
you
can
check
it
out
there.
This
is
yeah.
So
if
you
read
the
original
paper,
that'll
give
you
a
lot
more
information
plus
this
video
also
may
give
you
a
lot
more
information.
A
So
it's
a
very
interesting
thing
and
thank
you
krishna
for
giving
out
or
bringing
your
attention
to
that.
I
know
that
that
was
the
paper.
That
roman,
I
think,
was
a
part
I
put
that
in
the
open
room
slack
and
I
had
meant
to
present
it,
but
I
never
got
around
to
it.
A
So
thank
you
for
bringing
that
up
and
thank
you
for
susan
and
krishna
for
bringing
together
the
that
our
attention
to
that
temporal
scaling
paper
as
well
was
very
good,
although
I
think
it's
gonna
require
a
little
bit
more.