Description
Neuromorphogenetic Patterns and the Theory of Deep Learning. Poster presented at NetNeuro 2021 (satellite session of Networks 2021).
Contributors: Bradly Alicea, Mayukh Deb, Mainak Deb, Krishna Katyal, and Jesse Parent
Hello, my name is Bradly Alicea, and I'm presenting this poster, "Neuromorphogenetic Patterns and the Theory of Deep Learning." My co-authors are Mayukh Deb, Mainak Deb, Krishna Katyal, and Jesse Parent. We're from the DevoWorm group, which is part of the OpenWorm Foundation. This poster is about the theory of deep learning: it's about deep learning networks, but also about how they apply to, or can be informed by, biocomplexity. By biocomplexity we mean brain networks, but also developmental self-organization. So why are these themes important?
Well, as you can see, there is a difference between brain networks and deep neural networks in terms of topology, structure, and output. But we can also bridge this gap by looking at how developmental or self-organized structures are formed, and at what they contribute. This provides an extra dimension to deep neural network topology, so the outcome might not look like a brain network; it might look like a developing organism, or like a developing brain network.
We're exploring this theme in this poster, and we're basing our assumptions on the idea of computational equivalence: the idea that systems found in the natural world can perform computations up to a maximal, or universal, level of computational power. Basically, we can take a natural system, abstract away a lot of its complexity, and end up with a fundamental skeleton, which you can see in the deep neural network. If we get that abstraction exactly right, it will be equivalent to, or very similar in many ways to, the biological system. That's the idea of computational equivalence. We can argue about whether that's actually the case, but we have to find the right sort of representation for it, and this is why we're introducing a developmental aspect as well.
In a GAN, the discriminator classifies data, but there is also an adversarial component that generates fake, or adversarial, data. This adversarial data is then contrasted against the real data, and it helps, or maybe even hinders, the learning of the network. So the network learns not only from the environment, but also from the adversarial set, which is generating a bunch of fakes. You can see this in the figure, which shows a generic GAN: you have a fake image and a training set of real images, and the model is trained to discriminate between the fake and the real images. If it's a successful model, it will be able to successfully discriminate between the fake images and the real images in the training set.
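The adversarial loop just described can be sketched in a toy one-dimensional form. This is purely illustrative and not from the poster: the "generator" is a single shift parameter applied to noise, the "discriminator" is logistic regression on scalars, and all the hyperparameters (learning rate, distributions, step counts) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 0.0, 0.0   # discriminator: logistic regression on scalar "images"
theta = 0.0       # generator: shifts noise toward the real distribution
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)          # training set of real samples
    fake = theta + rng.normal(0.0, 1.0, size=64)  # generator output

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update (non-saturating loss): push D(fake) toward 1
    d_fake = sigmoid(w * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr * grad_theta

# After training, the generator's mean has drifted toward the real mean (4.0)
```

The two gradient steps pull in opposite directions, which is the adversarial contrast the talk describes: the discriminator sharpens the real-versus-fake boundary, and the generator climbs that boundary toward the real data.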
So we have a couple of topical areas that we want to focus on. Why would the approach we're proposing be superior, or why would this sort of idea be a superior direction to go in? The first is feature discovery: in GANs, we know that feature discovery proceeds in part through an adversarial approach.
GANs not only offer novel mechanisms such as adversarial training, but also show that game-theoretic approaches can be applied to understanding the networks that control morphogenesis and development. Here we have a preview of some work that our group has been doing on what we call morphogenetic agents, which combine pattern production (in other words, morphogenesis) with presentation: these agents produce patterns and present them to an environment. This is a sort of co-evolutionary training mechanism that might be a direction worth going in.
Another point is that neural networks provide us with both circuits, meaning network subgraphs that serve as a set of solutions, and universality, meaning features and circuits that generalize across different systems and contexts. We want to view neural networks as a sort of universal type of nervous system, so we can look at different organisms and different instantiations of nervous systems, compare them with deep neural networks or with different deep neural network designs, and this will yield some interesting information, I think, about both the artificial systems and the natural systems.
The second area is network depth. The GAN framework utilizes deep learning, deep neural networks of course, at its core. Unlike cellular automata, which are the structures shown here and which operate as a lattice where neighboring cells all influence each other's state, standard deep neural networks can exploit many layers of processing, specifying the generation and recognition of classes of specific features within a complex pattern.
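The lattice update the talk contrasts with layered networks can be shown concretely with a one-dimensional elementary cellular automaton (a minimal sketch, not from the poster; Rule 110 and the zero boundary are arbitrary choices): each cell's next state depends only on its immediate neighborhood, with no hierarchy of feature layers.

```python
RULE = 110  # Wolfram rule number; bit i gives the output for neighborhood i

def step(cells):
    """One synchronous lattice update with a fixed zero boundary."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        idx = (left << 2) | (center << 1) | right  # encode neighborhood as 0..7
        out.append((RULE >> idx) & 1)              # look up the rule's output bit
    return out

row = [0, 0, 0, 1, 0, 0, 0]  # a single active cell
row = step(row)              # -> [0, 0, 1, 1, 0, 0, 0]
```

Every cell applies the same local rule in parallel, which is exactly the "neighboring cells all influence each other's state" dynamic, as opposed to a deep network's sequence of increasingly abstract layers.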
So we can actually classify things in the environment using these structures, but we can also link them to the way neural networks can do the same thing. We've done some work in our group with neural networks trying to produce embryonic shapes, but also to classify embryonic shapes, and we can do the same thing with GANs to an extent.
There are two parts here. The first is that a high number of layers in a biological network provides a conduit between the genotype and the phenotype, mediating complexity in various ways, analogous to modern deep learning networks. Again, in a biological organism there are many layers between the genotype, the production of gene products, and the phenotype, which is the expression of those gene products but also a product of interacting with the environment.
The final part of this is network heterogeneity. Future DL models may allow for network heterogeneity, which has been identified as a factor in making artificial networks more like intelligent biological systems. Again, these artificial networks are generic; brain networks are not generic, they have a heterogeneous structure.
So how might deep learning networks incorporate features of brain networks, which exhibit scale-free, distributed, rich-club topologies? You could have something that has multiple aspects of, say, brain networks, and that couples pattern generation with embodiment and perception, which goes back to these morphogenetic agents. But more generally, we want to build agents that have a body, that have a four-dimensional structure: a three-dimensional structure plus time. We want them to behave in the real world, and to do that we need to give these networks some sort of structure that resembles the brain, but also structure that allows them to process information at multiple scales, both time scales and spatial scales. Future artificial neural networks need to map to distributed, decentralized topologies by mimicking general system functions such as opponent processes.
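The scale-free, hub-dominated structure mentioned above can be illustrated with a preferential-attachment growth process (a minimal sketch, not from the poster; the node count, attachment count `m`, and seed are arbitrary assumptions): new nodes link preferentially to well-connected nodes, producing a few high-degree hubs rather than the uniform connectivity of a generic layered network.

```python
import random

random.seed(1)

def preferential_attachment(n, m=2):
    """Grow an edge list where each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    targets = list(range(m))  # endpoints for the next node's edges
    repeated = []             # each node appears here once per unit of degree
    edges = []
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = random.sample(repeated, m)  # degree-biased choice
    return edges

edges = preferential_attachment(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
max_deg = max(degree.values())
mean_deg = sum(degree.values()) / len(degree)
# max_deg far exceeds mean_deg: a heavy-tailed, hub-dominated degree distribution
```

The hubs that emerge are the kind of rich-club candidates the talk refers to, and they are exactly what a homogeneous, fully generic layer structure lacks.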
So we can also look toward things we can observe in the brain, such as opponent processes or other types of general behaviors, both within the network itself, within the brain, and behaviorally, outward into the world. Opponent processes, of course, operate in the brain, and this is maybe a first step in terms of replicating output behaviors.
Finally, the future also involves graph neural networks, which can be structured in ways similar to complex networks. Graph neural networks are neural networks organized on the principles of graph theory and of complex networks, and the question is: can this approach allow us to simulate brain networks, to understand the functional role of certain motifs and structural configurations?
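The core operation such graph neural networks share can be sketched in a few lines (an illustrative example, not from the poster; the 4-node motif, feature sizes, and random weights are arbitrary assumptions): node features are mixed along the edges of an adjacency matrix, so the network's computation is shaped directly by the graph's structure.

```python
import numpy as np

# Adjacency matrix of a small 4-node motif (a triangle with one pendant node)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Add self-loops and symmetrically normalize: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# One graph-convolution layer: H' = relu(A_hat @ H @ W)
rng = np.random.default_rng(0)
H = np.eye(4)                # one-hot node features
W = rng.normal(size=(4, 3))  # weights (random here; learned in practice)
H_next = np.maximum(A_hat @ H @ W, 0.0)
```

Because `A_hat` encodes the motif's wiring, stacking such layers propagates information exactly along the graph's paths, which is what makes this a plausible tool for probing the functional role of specific motifs and structural configurations.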