Description
Attendees: Bradly Alicea, Mayukh Deb, Jesse Parent, Ujjwal Singh, Richard Gordon and Vinay Varma.
Featuring topics such as unsupervised learning as a developmental analogue, morphogenetic agents, and deep chicken terminator.
F
But Notion is like just this: it's such a sandbox that you could get lost in it, but if you know what you're doing and figure out how to relate to the databases the right way, then, you know, it basically works. I think it's SQL-based, but you're not typing SQL code. You can get these modified views in it, and then it's sort of a drag-and-drop interface, and you're making pages that you can embed the databases in.
D
All right, I think we'll get started then. So welcome to the meeting. I guess the summer is always a good time to catch up on things, but, you know, people are probably pretty tired of meeting in virtual spaces. That's okay; thanks for sticking with us. Today we have Ujjwal as well as Mayukh, of course; they're going to give updates on their presentations, on their projects.
D
We were going to hear from, I guess, Krishna, but I don't know if he's going to share his presentation with us today, and I have some things I'll go over later. So I wanted to say, as an update, that the first-period evaluations for GSoC are coming up this week. So it's up to me to fill those out, and, like I said, it's not going to be too much.
D
I don't think you guys have too much to worry about in terms of passing or failing; you'll pass. You've been doing good work, and we've been having a lot of interactions on Slack. So don't worry too much about it. Mayukh sent me some things about the first-period report: Google will send you guys a form, and it's merely a form; you just fill it out. It's basically going to be feedback on me. So, oh boy, hopefully, you know, you'll give me a good grade. Yeah.
H
But then I recognize, like, the people in physics feel like you opened a tangent X, because X is a cocoon. It provides enormous instability with the data in the Excel format, except they don't have to write for everything, except this elusive everybody back, so most of it is also an exercise that people are in.
H
But it does seem like the outstanding objects you can see are the cells, which are where they started each segment. So segments, and other things like segment borders, are related by the same side. These are the right results; well, these are not certified results, but they are using masks, so I'm working on it right now. They are valid for now.
D
Go for that, and then that's on hold, and then integrate cell tracking with the segmentation; we're not quite there yet on that, but that's sort of like phase 2. Then the weekly blog posts, so that's number eight. So that is Mayukh and Ujjwal's. So how are you guys coming along on that? Obviously this...
D
Movement tracking and, let's see, the color palette, I think that's done, but we...
D
So the first one is: this is actually pretty machine-learning oriented; it's unsupervised neural network models of the ventral visual stream. It's machine-learning oriented, kind of neural-network oriented, but it's based on the human visual system, so I found this interesting.
D
"However, such networks remain implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development." So they're talking about the development of our visual systems, the way our visual systems developed. Of course, some people argue that, you know, there's an innatist view of our visual abilities, so we can see things sort of right out of the gate: you can recognize shapes or, you know, color contrasts. But there's this development of the nervous system.
D
They're trying to make sort of an analogy between the development of vision and these neural network models that, you know, can recognize images. And so: "Here we report that recent rapid progress in unsupervised learning has largely closed this gap." So they're talking about this unsupervised learning technique that they're using that is sort of developmentally sophisticated, or maybe inspired.
D
"The mappings of these neural network models' hidden layers are neuroanatomically consistent across the ventral stream. Moreover, we find that these methods produce brain-like representations even when trained on noisy and limited data measured from real children's developmental experience. We also find that semi-supervised deep contrastive embeddings that leverage small numbers of labeled examples produce representations with substantially improved error-pattern consistency with human behavior. Taken together, these results suggest that deep contrastive embedding objectives may be a biologically plausible computational theory of primate visual development." So this paper is actually...
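The "deep contrastive embedding" objective the abstract refers to can be sketched in a few lines. This is a minimal NT-Xent-style contrastive loss in NumPy, not the paper's exact formulation; the batch shapes and temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss: embeddings of two augmented views of the
    same image (matching rows of z1 and z2) are pulled together, while
    all other pairs in the batch are pushed apart."""
    # L2-normalize each embedding so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, d)
    sim = z @ z.T / temperature                   # pairwise similarities
    n = z1.shape[0]
    # index of each row's positive partner (row i pairs with row i + n)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # exclude self-similarity from the softmax denominator
    np.fill_diagonal(sim, -np.inf)
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()
```

Identical views of the same batch should score a lower loss than unrelated embeddings, which is the sense in which the objective is "unsupervised": no category labels are involved.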
D
You don't really have sophisticated language yet; you're still acquiring, you know, information from your environment, but you're able to recognize shapes and somehow start learning things like that. So they're trying to mimic this with a neural network, and they're actually comparing it with what we see in the developmental psychology literature, and they're making some inferences about that. It also talked a little bit about the structure of the human visual system, which is quite complex.
D
It involves a number of different areas of the brain. There are early visual areas like V1, and then there are other areas, like the ventral visual areas, the inferior temporal cortex, and other areas like that, that contain separable information about categories. And so those are, you know, a higher level of processing, as they call it. Once the lower-level features are detected in V1, that information is passed up to other areas, and these areas in the ventral system, which is the inferior temporal cortex and other areas, do more sophisticated processing.
D
So the idea is that the brain actually does this stepwise processing, and there are different areas of cortex that are similar in terms of their anatomy, to some extent, but they're also very different functionally, and they actually are different anatomically; it's just that they're all in the neocortex, more or less.
D
This is something that they've been working on in the area of computational neuroscience for a long time, and people probably haven't accounted for development. And so what they're doing is trying to figure out how you can build a model, and they don't use something that's explicitly developmentally inspired; they're using these existing tools to sort of get at this issue.
D
We can say: okay, this is neurophysiologically inspired. So they go through this, and they talk about how, you know, task-based neural network optimization approaches have had successes in modeling the human auditory cortex and aspects of the motor cortex, so the goal-driven modeling approach may be of general utility for modeling sensorimotor systems. And so you can apply this as well to the visual system and object recognition. So, without going too much farther into this paper, I'll talk a little bit about this.
D
They talk about this model. They kind of go through a lot of the math, and they've reviewed some of the unsupervised models that have been used. Unsupervised learning, of course, is a very hard problem in terms of, you know, getting a good rate of correct identifications and something that generalizes. So you can read through this if you're interested; they do have a lot of really interesting work in this. I don't know if they have any figures here.
D
So they kind of come to the conclusion that it works. They talk about some of the techniques that they use from neural networks. So: "As a mechanism of the learning rule, our work still uses standard back-propagation for optimization, albeit with unsupervised rather than supervised objective functions." And so they talked about back-propagation being good for modeling but not necessarily analogous to real organisms, and a lot of the things we do in neural networks, I mean, I think it's good to remember this from time to time, are not really...
D
Nevertheless, they also talk about better training environments being critical to understanding, like, how biology deals with image processing, especially from a developmental standpoint. And so, you know, they talk about getting a better environment than the standard data sets that we use for testing models and training models. And so, let's see.
D
They talk about things that are features of development: "There are many important components of real developmental data streams missing from SAYCam," which is their training data set, "including, but not limited to, the presence of in utero retinal waves, the long period of decreased visual acuity, and the lack of non-visual (auditory and somatosensory) modalities that are likely to strongly self-supervise, and be self-supervised by, visual representations during development."
D
"Moreover, real visual learning is likely to be, at some level, driven by interactive choices on the part of the organism, requiring a training environment more powerful than any static data set could provide." That's an interesting point, especially the point about multi-sensory integration and multi-sensory experience.
D
In my other group, we've been talking about this a bit: it's not just vision that you have to think about; it's different senses being combined. And so a lot of image recognition and things like that operate sort of in a vacuum of vision. Right, so you recognize some image as something that's visual, you make that analogy, but there are actually other sensory things that we use.
D
If we see an image, or even, you know, symbolic things in the world, there are other senses that we use to sort of integrate that image and recognize it. So if I see a picture of a famous face, I can recognize it, but I also associate it with things, right? So we don't just recognize objects in a vacuum.
D
If I see a picture of a sandwich, I associate it with things, and if I see a sandwich embedded in, maybe, a bunch of grapes, you know, something that obscures the fact that it's actually a sandwich, I associate it with other things. And those things could be, you know, sounds; they could be songs; they could be smells; they could be, you know, touching a sandwich. There are all sorts of things that we use to, like, integrate that information. And so, when they say that, I would say...
D
So I'm going to leave these here for now, these other papers. I just want to quickly move on to another topic that I had been thinking about. Yeah, so I presented a little bit on this in a talk recently. I'm calling these morphogenetic agents, and I don't want to, like, throw it all at you; I'm saying that this is a work in progress.
D
We've done some stuff with Morphozoic, which was Tom Portegys's invention. You know, he's been working on these cellular automata that will recognize patterns, or generate patterns that look like morphogenesis. In this case, you know, what I'm interested in here is: I've noticed the connections between morphogenesis and sort of pattern recognition. And so in this model you have these agents: one's an observer and one's an emitter, okay? And so the emitter produces different patterns, and I'm...
D
The observer is recreating, reconstructing, that pattern and making sense of it. And so the tag here is: what is the relationship between morphogenesis and perception? And so we can use this kind of model, and the benefit of this kind of model is that you can treat this problem as sort of a predator-prey, co-evolutionary relationship. So, you know, it's like maybe a competitive game, or even a cooperative game, where the emitter and the observer either are two different... you know.
D
The link would be the observer, which would be the predator, and the predator is trying to eat the prey, but it has to recognize the prey first. And so, if you have that kind of relationship, you can, you know, evolve, or you can make these two things adapt to one another. And, you know, the emitter can, say, conceal the pattern, and the observer can, like, find new patterns and figure out where that concealment is happening. So there are a lot of different things you can do with that.
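The predator-prey framing can be sketched as a toy coevolutionary loop. Everything here is an illustrative assumption rather than the actual emitter/observer implementation: patterns are bit strings, "recognition" is a Hamming-style match, and both sides adapt by simple hill climbing.

```python
import random

def mutate(bits, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [b ^ (random.random() < rate) for b in bits]

def match(a, b):
    """Fraction of positions where the observer's template agrees
    with the emitter's pattern (a crude recognition score)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def coevolve(n_bits=32, steps=200, seed=0):
    """Hill-climbing coevolution: the observer (predator) is rewarded
    for matching the emitter's pattern; the emitter (prey) is rewarded
    for escaping the match, i.e. for concealment."""
    random.seed(seed)
    emitter = [random.randint(0, 1) for _ in range(n_bits)]
    observer = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        # observer keeps a mutant template only if it matches better
        cand = mutate(observer)
        if match(cand, emitter) > match(observer, emitter):
            observer = cand
        # emitter keeps a mutant pattern only if it is matched *worse*
        cand = mutate(emitter)
        if match(observer, cand) < match(observer, emitter):
            emitter = cand
    return match(observer, emitter)
```

Turning the emitter's update into a reward for being matched would give the cooperative version of the same game.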
D
So that's one thing you can do, and then the other thing is: you can combine these two, emitter and observer, into something called a morphogenetic agent. And so this is where you're linking, you know, the sort of evolution of shape, or the creation of shapes, with attentional mechanisms in the observer. So the emitter is creating these morphogenetic patterns, the observer is decoding them, and then the emitter can also perhaps, and we haven't worked this out, serve as, like, a way to decode.
D
In that case, you know, they're trying to match the stripes. So in this case the observer thinks it's like some sort of checkerboard pattern, but in this case the observer is able to match what the emitter is producing and recognize it as something that is a morphogenetic pattern. So that's kind of the idea behind this, you know, competition aspect.
D
Sometimes the cryptic expression of a pattern is good for the emitter's fitness; sometimes it helps to train the observer. So, can the observer recognize different patterns? That's one question. But the other question is: what is the minimal form required for recognition? And so, in this case, say, you know, I kind of thought about a thought experiment here where we draw an elephant using a genetic regulatory network. We haven't talked about GRNs, but you can use those to generate shapes and things like that. Let's say we draw an elephant with a GRN.
D
So, in this case, we have nine modules, each encoded continuously by an expressed binary gene. So this gives you the state, from zero to one. How many modules can be shut off and still result in something recognizable as an elephant? So we have these modules that represent different parts of the elephant, like an ear or the trunk or the head, and they all sort of, like, encode this shape.
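The thought experiment can be sketched as a knockout search over binary module genes. The module names, their weights, and the recognition threshold below are hypothetical stand-ins for a real observer; they only illustrate the question "how many modules can be shut off?"

```python
from itertools import combinations

# Hypothetical weights: how much each part contributes to
# "elephant-ness" (trunk and ears dominate; values are illustrative).
MODULES = {"trunk": 0.30, "ears": 0.20, "head": 0.15, "body": 0.10,
           "legs": 0.10, "tail": 0.05, "tusks": 0.05, "eyes": 0.03,
           "skin": 0.02}

def recognizable(genes, threshold=0.6):
    """genes: dict module -> 0/1 expression state. The shape counts as
    an elephant if the summed weight of expressed modules clears the
    threshold (a stand-in for a trained observer)."""
    score = sum(w for m, w in MODULES.items() if genes[m])
    return score >= threshold

def max_knockouts(threshold=0.6):
    """Largest number of modules that can be switched off while the
    remaining shape is still recognized."""
    names = list(MODULES)
    for k in range(len(names), -1, -1):
        for off in combinations(names, k):
            genes = {m: 0 if m in off else 1 for m in names}
            if recognizable(genes, threshold):
                return k
    return 0
```

With these particular weights, six of the nine modules can be silenced as long as the trunk, ears, and head stay expressed; a block-model encoding would correspond to flattening the weights.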
D
We need those to get the elephant recognized, and so we can obscure the elephant in various ways, and this encoding, you know, will help us along that path. The encoding can be highly accurate or not accurate at all; it can just be, like, a block model of an elephant. And we can test the observer to see if it can recognize that, and this is sort of the idea about what the brain is doing in perception.
D
So, you know, we can use, like, a neural network, but we can also use these nearest-neighbor models. This is sort of like cellular automata, where you have these nearest-neighbor interactions among a spatial array, or the cell array, and the idea is that it would reproduce these shapes. So you would have...
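A nearest-neighbor observer of this kind can be sketched as a majority-vote cellular automaton: each cell looks only at its immediate neighborhood, and repeated updates reconstruct (denoise) a binary shape. The grid size, Moore neighborhood, and threshold are illustrative assumptions, not a specific model from the discussion.

```python
import numpy as np

def ca_step(grid):
    """One update of a majority-vote cellular automaton: each cell
    adopts the majority state of its 3x3 (Moore) neighbourhood,
    itself included. This is a purely local, nearest-neighbour rule."""
    padded = np.pad(grid, 1)
    # sum of the 3x3 neighbourhood around every cell
    neigh = sum(padded[i:i + grid.shape[0], j:j + grid.shape[1]]
                for i in range(3) for j in range(3))
    return (neigh >= 5).astype(grid.dtype)  # majority of 9 cells

def reconstruct(grid, steps=5):
    """Iterate the local rule; holes in a solid shape fill in and
    isolated specks disappear, a crude form of shape reconstruction."""
    for _ in range(steps):
        grid = ca_step(grid)
    return grid
```

The rule is deliberately crude: it repairs interior defects and removes noise, but it also erodes sharp corners, which is part of why such models tolerate "less labeled, less defined" input.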
D
It's sort of a crude way of doing pattern recognition, but it also allows for things that are maybe less labeled or less, you know, defined. And so one way you can train these observers is through moiré patterns, which are these patterns where you take two sets of, in this case, concentric circles and you intersect them. And when you intersect them with movement, moving the two sets of concentric circles across one another, they make these patterns.
D
These interference patterns actually have quite a lot of variation, and the pattern is very rich in terms of its information content. So we can use these kinds of things to train the model: we move these concentric circles across each other, and you can get these patterns. And so then, finally, to sum up here, we get observer effects. I don't know if you've ever heard of the, you know, observer problem in quantum mechanics, where you have, like, you know, you observe like this:
D
You have light going through a slit, and then you observe on the other side of the slit, and you see this interference pattern. That's kind of what we may expect here, and this is kind of interesting, because it gives us really complex patterns that we can observe, and we can watch how the observer analyzes them. I should say, you can also use a neural network for this as well, just to look at, like, how these effects are encoded and classified. So that was pretty...
D
So I don't think people are doing too much with this. I think, like, maybe the machine learning people are starting to say: well, maybe we need to understand visual perception a bit better, because they're, you know, dumping the model and the algorithm on the patterns, but you know, we know that...
D
Yeah, well, thank you, everyone, for attending today and contributing, and if you have any questions about anything, you know, ask offline on Slack. Otherwise, talk to you next week. We have GSoC updates again, and then, if anyone wants to present anything, you can, if you want to volunteer. I don't know if Krishna is planning on presenting; maybe he'll present next week, but we'll figure out what we want to do for next week. So have a good week, everybody. Yeah.