From YouTube: DevoWorm (2021, Meeting 17): DevoLearn, Biological Networks, Branching, Learning, and Timing.
Description
DevoLearn update and Open-source sustainability, Parallel biological networks (embryo networks and connectomes), biology of Axolotls, branching in Artificial Neural Networks, Learning and Morphogenesis in single cells, interval timing in development. Attendees: R Tharun Gowda, Jesse Parent, Richard Gordon, Susan Crawford-Young, Krishna Katyal, Bradly Alicea, Shruti Raj Vansh Singh, and Akshay Nair.
A
When did they find out, like next week or something? I think in two weeks; we can't announce it until, I think, the 17th, but yeah. So that's good! How are you?
B
Okay, most of my updates are not for this group, obviously, but I'm pretty good. I have just a whole lot that's cooking right now, and, like, I wrote a bunch yesterday, and it was just, you know... I am going to try to finish something; I'm going to try to put them together for that interest group, yeah. The case is a little bit different than I realized, so I don't know, it kind of was like.
A
It's something... I don't know if it's something that requires a group of scholars to be working on it for a couple of years, but I don't think this one is that; I think it's more, you know, just some idea. In any case, just put down some people, like, in the group that we have, and then if it doesn't get accepted, it doesn't get accepted. It's just.
B
So I mean, I'm gonna go for it; I'll write something up, but, you know, we'll see what happens with that. But outside of that, I still have my own, like, big things from my own hashtag Summer of Data happening: one of the MicroMasters courses is starting up now, and that's going to be a little layer of stuff, and I'm in the midst of, you know... you saw a little bit.
B
I don't know if there's any more reaching out there, talking about, you know, the EA research thing; the answer might be doing so. There's that, and I have so many submissions to do, with things to write out, you know, about being a group, and there's a lot of things to sort out like that. There's a lot of exciting stuff, and yeah, that's just my mindset, and I'll kind of step away.
A
We get all sorts of pull requests, so hopefully this is gonna be a good contribution and.
A
On to the DevoLearn platform; so I want to set that up. I will do.
A
And it's very bad there in terms of COVID right now, and so I hope everyone is doing well. Here's Krishna; I'll give you my best, and hopefully things turn around.
A
Yeah, so Tharun is here and Krishna's here, and so, Krishna, Tharun, do you have any updates? Or... hi, how are.
H
Right, yeah, I was just on the call today, he said.
A
Yeah, that's good, yeah. Usually, you know, if you're doing stuff like that, if you're doing multi-dimensional, interdisciplinary stuff, or things that involve different disciplines, it's always good to build one thing on top of the other, and then you have to put it together. So it takes a while to do that. But.
H
I read the reviews on the ANN and BNN paper, and yeah, it was, you can say, disheartening, but at least we have.
A
What hap... yeah, that's normal, actually! Yeah, I mean, it was a pretty competitive venue, so it wasn't.
A
You know, they were almost maybe looking for excuses to reject things. Hi, hello, how are you? I'm good, Ellie, good. So yeah, I mean, we'll talk about it later; we can talk about it on.
A
Hello, Shruti, yeah. All right, did you have anything you wanted to present or talk about today, or not?
A
Hello. So, I was fielding some inquiries about Summer of Code, and we'll be hearing back; we'll be able to announce people on the 17th of May, I believe. So hang in there; I can't announce anything before that. So good luck, and again, if you don't get selected... I don't know which projects are going to be; not all the projects might be selected, so there's that. But if you don't get selected, don't worry about it.
A
We can do something, and many people have done that in years past, where they've submitted an application which was quite good and it didn't get selected, but then we worked on something and it became something really good. So, the first thing, to touch on a couple of things here: there's this Upstream conference. If you're interested in open source, or, like, maintaining repositories, or doing stuff like that, this might be something of interest to you.
A
This is... I don't know, I've never been to this, but it might be something worth checking out. At least, you know, they're going to have a lot of information about open source projects and what people are thinking in that space.
A
You know, talking about community building and founding things and managing projects and software engineering. And so being a maintainer is something that Mayukh has had some practice doing, and, you know, it's basically where you build an open source project and you need to maintain it in order for it to become successful; it just doesn't become successful right off the bat.
A
You have to put some effort into promoting it, and we're going to talk about that in a minute, but you have to promote it, and then maybe accept pull requests and be, you know, sort of meticulous in terms of how things are maintained and updated and put out into the world. So that's what we're trying to do with DevoLearn, and this might be an opportunity to learn more about that. Now, in terms of DevoLearn itself.
A
So we have a number of different repositories here, as well as DevoLearn's. We have the data science demos, which I think a number of you have submitted to; we have at least nine contributors to that, and all the different.
A
Demos that have been made there. We have a contribution guidelines repository, which actually needs to be updated; I'm not doing good maintenance work on this, necessarily, but this is, you know, the kind of thing we need to update on a regular basis. I know that Mayukh is also working on another document, on sort of a long-term plan for things, and so we'll incorporate that into this as well.
A
Soon. We have different organisms for DevoZoo, so we have different model organisms, and then I was just asked about where to put things for the axolotl project. I'm not exactly sure where that's going to be; I'm going to create a repository for that, and so that's yet to come. So we're building on this, and I just put out a blog post, and I apologize if I didn't mention you in it; I'm just hitting some of the people who've done a lot of different.
A
You know, I'm talking about different things in the contribution stream. So this is a blog post, and I'll put it in the chat here. This is on DevoLearn maintenance, and I'm just giving an update to people on what we've been doing in terms of maintenance. I mentioned where Mayukh has been busy maintaining the DevoLearn pre-trained model software.
A
So that's the one repository in the DevoLearn platform, and I've spent time in 2021 engaging in what they call technology evangelism, to advance awareness about the initiative. So, actually, I presented two talks, one at the OSF unconference in February and another one just recently at the INCF assembly, and this has been about DevoLearn and introducing people to the platform. And, you know, sometimes people don't necessarily see the greatness of it.
A
But, you know, it's a lot of work, and people won't necessarily see what's all there, but at least getting the word out is the key here. So, the DevoLearn pre-trained model: we'll talk about that. We've made a lot of advances since January, and Google Summer of Code is really facilitating this, and then I talk about these talks here.
A
So if you want links to the talks, I think we have some slides here and a video here, and then I talk about the onboarding guide, which is something that we've been developing; actually, Krishna was involved in that as well. Ujjwal actually took ownership of the Bacillaria project two years ago, and now he's going to be a mentor for that project, and so we talked a little bit about that. So that's just an update on the DevoLearn platform; I wanted to put that in.
A
Now I wanted to give an update on the Networks conference. So, Networks 2021: there was an abstract that was accepted, "Embryo networks as generative divergent integration," and so I've been working on the slides for that. I kind of keep these slides where I keep, like, parts of the abstract that was submitted.
A
You know, up at the top, to remind everyone of what the flow should be, or what we should be covering in this; and so this may diverge a little bit from the abstract, but hopefully not too much. I have some slides from previous talks on embryo networks. I don't know if I've ever presented a whole talk on embryo networks to this group, but to give sort of a rough idea of what they are here.
A
You have a bunch of cells in an embryo, and they have centroids, and between each pair of centroids there is a distance. So you can characterize distance in an embryo in terms of three-dimensional space, but you can also characterize it in terms of time and context, which is this theta, and the context could be any number of things; it could even be multi-dimensional. So it doesn't need to be this five-tuple.
A
It can be even longer, where it describes sort of the state of the embryo at any given time or any given spatial location. And so this is what the network looks like: it's just connecting cell centroids, depending on maybe how close or far away they are in space, and then that changes over time. So the network changes its shape over time, and it relates to the lineage tree in the following way.
A
You have a lineage tree where cells are dividing and producing these lineage trees, and the cells are at any given point in the lineage, and we usually use a distance threshold. So this is the point cloud based on cell tracking data. A lot of you, from working on the DevoLearn project, know what cell tracking data is: it's where you have all these dots representing cells, and those dots are centroids.
A
So what they do is they sample the centroids and they build this model; you know, this is over many, many embryos, just sampling that space, and this is what it looks like. And, of course, the cells need to be labeled in order to make sense of this cloud. And then, what an embryo network is: it further sorts that space based on the cells' proximity, or based on some other criterion from which you can create an index; you then threshold that index, and that determines which cells are connected.
A
So if they're, you know, very close to one another, they're connected; if they're a certain distance apart, they're not connected. Alternatively, you could have, like, a signaling criterion: you could have a hypothesis, say, that there's some signaling molecule that diffuses across the embryo, and so, based on that diffusion.
A
You can actually use that as a criterion for connectivity as well. It gives you a different network, but it tells you something about what you're trying to understand. Maybe I want to understand how something diffuses through the embryo, or I want to understand the distance between cells in the embryo over time. Do we get these changes in structure in the network? Do we get little modules in the network?
A
Or does it remain pretty much randomly connected? And so this is what an embryo network looks like: it's one of these hairball things that I think people have seen if they've looked at, like, papers on protein-protein interactions, or, like, social networks. You get these hairballs, which are very hard to interpret on their own, but we have statistics, and we have different things like clique analysis, that can help us sort through these connections, and there are other ways to do this as well.
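As a rough sketch of the construction described here (centroids, a distance criterion, a threshold), the following toy example uses the names of the four-cell-stage C. elegans cells but entirely made-up coordinates, not real tracking data:

```python
import math

# Hypothetical cell centroids: name -> (x, y, z) position in the embryo.
# The coordinates are invented for illustration only.
centroids = {
    "ABa": (0.0, 0.0, 0.0),
    "ABp": (1.0, 0.5, 0.0),
    "EMS": (0.4, 1.2, 0.3),
    "P2":  (3.0, 2.5, 1.0),
}

def distance(p, q):
    """Euclidean distance between two centroids."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def embryo_network(centroids, threshold):
    """Connect two cells whenever their centroid distance falls below threshold."""
    names = sorted(centroids)
    edges = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if distance(centroids[a], centroids[b]) < threshold:
                edges.add((a, b))
    return edges

edges = embryo_network(centroids, threshold=1.5)
# Node degree: one simple statistic for sorting through the "hairball".
degree = {n: sum(n in e for e in edges) for n in centroids}
```

Swapping the distance index for, say, a modeled signaling gradient is just a matter of changing the criterion inside `embryo_network`.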
A
You can see how they're all connected and what the relations are, and then we can look at actual spatial connectivity, interactively. So this is actually.
A
This is based on a connectome. If you go and look further into development, and you look at how the connectome is connected (a connectome based on gap junctions), they have this sort of structure where, you know, they're connected in various ways. Well, these cells have precursors in the embryo. So these are neurons in the connectome, and this is their connectivity based on their sort of ancestral developmental cell lineage. So this is all of the connectome cells in ABal.
A
These are all the connectome cells in ABar, ABpr, and ABpl, and it tells you something, maybe, about those sublineages: what kinds of things they're contributing to later on in neural development, in the connectome, in how the brain is connected. So I talk about cell division in these networks, and then I give an example, actually, from the connectome.
A
So what I'm trying to do in this talk, and I haven't sorted it out yet, is go back and forth between the connectome and development, the sort of building of this neuronal network in the worm. So all these cells here, highlighted in orange, are neural cells that are emerging over development, and this is put onto an adult worm to show kind of where they are in the adult. So they're all over the place in the adult; a lot of them are clustered in the head.
A
Some are clustered in the tail, and you have neuronal cells that go up and down the body line. And you can see, in this, what they look like in the embryo: their locations as precursors in the embryo as they're being born. Where are they being born? They're born along these two different axes.
A
So this is something we can use to understand our network a bit better. Finally, there's this principle of generative divergent integration; this is the main idea of the talk. You have this example from an eight-cell embryo, and this is just a generic example: you have a bunch of cells that are connected by distance, but there are no neurons in this yet, just developmental cells, and these developmental cells form this continuous network. And then, in the 24-cell example, it's a little bit clearer what happens.
A
The network is joined together, but you also have two networks here. You have this neural network that's forming: these green dots are the neurons, and these blue dots are somatic cells, or developmental cells that haven't differentiated, and they form a separate network. These networks are integrated, and that's.
A
The whole point is that these two networks sort of evolve in parallel, but they're also integrated, and we want to see what those look like, and then kind of consider the idea that maybe networks in embryos and in connectomes, and the combination of the two, don't behave like any typical regime of a complex network. And so, you know, we think of networks as being random, scale-free, or small-world, and so these are what those statistical signatures look like.
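To make "statistical signatures" concrete, here is a minimal sketch (my own, not from the talk) that compares degree histograms for a random Erdos-Renyi graph and a preferential-attachment graph; the heavy tail of the latter is the classic scale-free signature:

```python
import random
from collections import Counter

def degree_counts(edges):
    """Histogram of node degrees (degree -> number of nodes) from an edge list."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return Counter(deg.values())

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p): each pair connected independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def preferential_attachment(n, seed=0):
    """Growth model in the Barabasi-Albert spirit: each new node attaches to an
    existing node chosen proportionally to its degree, producing hubs."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    stubs = [0, 1]  # each node appears here once per edge endpoint
    for new in range(2, n):
        target = rng.choice(stubs)
        edges.append((target, new))
        stubs += [target, new]
    return edges

er = degree_counts(random_graph(100, 0.05))
ba = degree_counts(preferential_attachment(100))
# The preferential-attachment histogram has a longer tail (a few hubs),
# while the Erdos-Renyi one clusters around its mean degree.
```

The point of the comparison is that an embryo network's degree histogram could fail to match either shape, which is exactly the possibility raised next.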
A
It's possible that these networks follow none of these types of regularity. So what if the correct model is not a complex scale-free or small-world network? And so I talked about a couple of different options here from the literature, and then some things about growing networks, which are networks that expand over time and that find, you know, connectivity patterns as they emerge, and then talking about new types of topologies.
A
So this is pretty disorganized right now; I have to work on it some more, but that's the basic idea of the talk. I'm going to work on it some more, and maybe try to do some data analysis for this, or build some more animations.
A
I was working on that one figure for a couple of hours. It was a little difficult to figure out how to visualize some of these things, because it's a pretty hard topic to get your head around, and I want to make sure people can do that. So that's something that's coming up. That's due, I think, at the end of this month, and the conference, I think, is in June. So if you're interested in networks and network science, that's something maybe you should register for and attend.
A
Why don't I give the link in the chat, and we'll follow the same principle that we usually do: if you have some comments, you can leave a comment. Try not to change the slides per se, but you can leave comments in them, and then we can review them, yeah. So, I mean, that's something that'll probably become a paper at some point, too.
A
I'm just doing a presentation now, because it's something that I presented in 2017 at NetSci, which was the Networks conference's precursor, when they were live in Indianapolis, Indiana, and they were sponsored by Indiana University, which does a lot of network stuff, but it's mostly social networks and other types of networks that aren't what I'm talking about here. So, you know, I did a.
A
I did, like, a satellite session and a regular session talk, and it was pretty interesting; it was fairly well received. So, you know, this is a topic, and I've talked about this topic with other people, that I think is interesting and might have some sort of legs to it. So I definitely think we should pursue a paper on this topic eventually. And then, of course, we have other network interests; so we've talked about that, like, with our submission of the artificial neural networks and biological neural networks paper.
A
I'm gonna make a note of that. Oh, you have some papers on connectomes? All right, so here are some papers on connectomes, if you're interested. I think this is... yeah, yeah. So it's a similar idea to complex networks. Actually, complex networks is based on graph theory; it's all based on graph theory. But if you go to a complex networks conference, they don't talk a lot about graph theory; they're not trying to sort of solve graph coloring conjectures or anything like that. They're.
A
It was a pretty interesting experience. So, I don't know if Shruti was going to present or Krishna was going to present; I think Krishna said he needed a little bit of time to get his stuff ready.
A
Why don't we go over the submissions, and then we'll... oh, okay. I think... are you... you're not ready? Okay, we'll go over the submissions, and then maybe we'll... actually, let me unshare my screen. Do you need me to do that, or?
H
No, okay, try it now. I shared my screen.

H
Okay, so you can see clearly, like.
And the reasons to study them: their limb regeneration and evolution. The feature that attracts the most attention is their healing ability; they can heal without any scarring.
H
It's kind of a great research issue, because if we are able to mimic their regeneration, then humans could also regenerate their wounded parts.
H
But instead.

H
During... and these are the references: Susan's paper, and that's it.
A
So, first of all, yeah, go up to the first one: "DevoWorm" is misspelled, but.
A
Okay, but more importantly, yeah, in the first couple of slides you mentioned salamanders and axolotls; axolotls are a type of salamander, a species. And then, you know, sometimes in biology talks they give sort of the taxonomic position, so they show, like, a tree of life, or they get very specific about where it is in the tree of life.
A
Like biology, yeah; so, if you've seen the trees in, you know, some of the papers we've reviewed... usually, if you look up "tree of life, axolotl" on Google, or whatever search engine you use, you can find some information. So that's one thing people usually do, but yeah, I think that's a nice little introduction to it. So I had a couple of questions in the chat.
A
Dick points out that in the regeneration slides you talk about regenerative capacity, and he also points out that they also study hearts; it's a model of heart regeneration. So, limb regeneration and heart regeneration; and, I don't know, you might be able to find some images of some experiments where they do this, if it's something that you want to really kind of demonstrate to the audience.
A
A really good thing would be to show some pictures of, like, a regenerating limb, and also spinal cord. So limbs, hearts, spinal cords: those are all things that they study. The reason they study axolotls is, in fact... I think a lot of amphibians have this capacity for regeneration, but I think axolotls are particularly good at it.
A
Yeah, yeah. So, let's see: Dick says he posed the challenge, can we read the differentiation tree of the axolotl from its DNA sequence?
C
Cool, cool. So, actually, there are so many images in the dataset provided, right? I think about two thousand... right, three thousand, yeah.
C
Yeah, so what my recommendation would be is maybe, you know, to write a script which does that; if you're going to do it manually, it's going to take a lot of time.
H
Yeah, I'm not doing it manually; I just had to give the regular...
A
...life, I think once or twice. Dick knows a lot more about them, in terms of, like, working with them, than I do.
H
I have a question, Dick: can we pet them? Do they make good pets? Because I find them very beautiful.
A
And breed... so they have breeding colonies of them, and they use them to get the... yeah, they're nearly extinct in the wild. They're from central Mexico, pretty much; that's where you can find them, and yeah.
A
Yeah, yeah, yeah. He said that hobbyists have a lot of, you know, breeding colonies, and maintain them and so forth.
A
Okay, yeah. So, yeah, the next thing I wanted to talk about was this; I'll get to the papers in a little bit. I want to talk about submissions. Susan's here: hello, Susan.
A
That's fine, yeah, welcome. So I wanted to go to the submissions document; I just wanted to make sure that we were up on the latest deadlines.
A
"Mathematics of DevoWorm" is still outstanding; I know I presented on that a while back now, but we're still kind of thinking about that. Then there's this Williamson symbiosis.
A
So this is a test of an idea that organisms sort of retain developmental programs from different organisms. So you have organisms that do things like metamorphosis, and Williamson proposed, I believe (and Dick, correct me if I'm wrong on this), that those organisms have multiple developmental programs that overlap and that get expressed in different parts of development.
A
So the idea would be to look at, like, maybe some genetic data, to see if we can find evidence of this, or some other similar signature. So that's quite hard, and if people are interested in bioinformatics, if they're really into that, this might be something you would want to explore.
A
A lot of the stuff we've talked about in the group was segmenting images and creating, you know, dynamical simulations or dynamic representations of movement, and things like that, using machine learning. But this would actually be, like.
A
You know, simulations at the molecular level of these movement dynamics, or the things that are responsible for movement. So that's another thing that's outstanding. Other than that, we have a number of things that are out, in terms of being submitted.
A
There was the artificial neural networks and biological neural networks paper that was rejected, and I talked to Krishna earlier in the meeting, and he was a little disheartened by the reviews. But I think it's not so bad; I looked them up as well, and, you know, reviews often look worse than they are. Sometimes people are very critical, and you just have to power through and figure out how to remedy the issues, if you're trying to submit it to the same venue as a revision.
A
They'll make things up in the review; like, they'll have a certain idea of what the paper is about, and that's not what the paper is about. So, you know, you have to balance that out. But I think it's valuable for people to learn how to respond to reviewers, and to understand that some people are going to see your paper differently, and you have to just kind of make sure it's broad enough that people can understand what you're saying. But you can't please everyone at the same time.
A
That's the idea. And so, you know, I think we can revisit that paper and come back to it later. So, Dick said that... okay, so this is contact information.
A
If you want to order axolotls and eggs, it's the University of Kentucky. I think, you know, they probably want you to do research with them or something; I don't know if they'll just hand them out to you. But that's if you want to inquire about using axolotls as a model organism.
A
So I think the statement "and they're due to successful matings between different species" was... yeah, we got to Williamson, okay, yeah, Williamson. And then: "I am writing a paper and Mathematica code for a simulation of the kinematics of diatom motility at the molecular level." So this is the submission that Dick is working on, and then this item in the submissions, number 24, is due to this; I'll put the details in here.
A
And then, so, yeah, if you're interested in that, we can talk about that offline. And then, you know, there's a conference opportunity here: the Conference on Complex Systems, abstracts due May 20th. NeurIPS, again, is May 19th for the abstracts; May 26th is the full paper. And then they'll be having satellite sessions of NeurIPS, where the deadline is later. So, if you're interested in some sort of machine learning or AI stuff, NeurIPS, you know, one of the.
A
Maybe one of the satellite conferences might be better, not necessarily the main track, because that's quite competitive. And then the Conference on Complex Systems might also be a candidate for some work as well, but we'll stay on top of that. So that's all I wanted to talk about with respect to submissions.
A
Finally, I wanted to talk about some papers. The first one I wanted to talk about is this paper on branch specialization; this is something I've been wanting to get to for a while. So this is from Distill, which is, of course, the machine learning journal, and we've talked about.
A
About Distill articles: they have these threads, and this is part of the Circuits thread. So this is a format where they collect short articles and critical commentary about some aspect of neural networks. So this is part of a collection, and one of these topics is branch specialization.
A
So what is branch specialization? They say: if we think of interpretability as kind of an anatomy of neural networks... so we talked earlier in the meeting about the anatomy of connectomes, and in development especially; this is the anatomy of neural networks. So this would be a neural network, and we're understanding its anatomy, as it were. Most of the Circuits thread has involved studying tiny little circuits, looking at the small scale of individual neurons and how they connect.
A
However, there are many natural questions that small-scale approaches don't address. And so with branch specialization, I think what they're trying to get at is to take the idea of biological branching, and biological networks based on branching, and bring it into neural networks.
A
So they talk about how the most prominent abstractions in biological anatomy involve larger-scale structures: for example, individual organs like the heart, and the circulatory system, or entire organ systems, like the respiratory system, where you have a lot of branching in, say, the lung, and then it's taking in or pushing out, you know, things from the rest of the body, or from the circulatory system. And so these systems are very broad, but they have these branching patterns that allow them to maintain some sort of structure and transport things across the network.
A
And so they wonder: is there a respiratory system, or heart, or brain region of an artificial neural network? Do neural networks have any emergent structures that we could study that are larger-scale than circuits? So this is a question about the kinds of huge emergent structures that we're interested in, in this group, in terms of biology, but they're actually talking about neural networks.
A
So they describe branch specialization as one of three larger structural phenomena they've been able to observe in neural networks. The other two are called equivariance and weight banding, and those are separate articles; as you can see here, the article on weight banding is next. I didn't look those over before the meeting, but you can look at them later.
A
So branch specialization occurs when neural network layers are split up into branches. Each layer, of course... a lot of times we think of a layer as this homogeneous thing, but if you split it up into branches, or separate streams, then this is when you get this sort of branch specialization: the neurons and circuits tend to self-organize, clumping related functions into each branch and forming larger functional units, a kind of neural network "brain region."
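As a toy illustration (my own sketch, not code from the Distill article), a branched layer just runs separate weight blocks side by side and concatenates their outputs; specialization is the observation that, after training, related features tend to end up in the same block:

```python
def linear(weights, x):
    """Plain matrix-vector product: one dense sub-layer."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def branched_layer(branch_weights, x):
    """Run each branch on the full input and concatenate the results,
    as in Inception-style blocks or AlexNet's two parallel streams."""
    out = []
    for weights in branch_weights:
        out.extend(linear(weights, x))
    return out

# Two hypothetical branches with hand-picked weights; in a trained network,
# these weights would be learned, and related features would cluster per branch.
branch_a = [[1.0, 0.0], [0.0, 1.0]]   # identity-like features
branch_b = [[1.0, -1.0]]              # a contrast/difference feature
y = branched_layer([branch_a, branch_b], [2.0, 3.0])
# y == [2.0, 3.0, -1.0]
```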
A
In other words, if you take a neural network and you run the simulation and get connection weights, those connection weights will tend to segregate into streams or branches. And you'll see this in the brain with different functional regions, where, you know, you don't just compute on everything in the brain.
A
There are certain functional regions for certain things, like vision, or learning and memory, or attention (and this is just in the human brain, mind you), but those areas are specialized, and the signal doesn't just run all over the network; it's specialized into these streams. And you see this with respect to, like, river networks as well.
A
Where you have, you know, energetic optimization in river networks: you go from little streams to tributaries to rivers, and then finally to the river delta, and there's a transfer of energy into sort of its maximal expression at the delta. And so you get this energetic network, and then productivity follows that, and all these things happen because it's basically based on this network structure.
A
So that's what they're getting at in this article, and it's really good, because I think it kind of lays out this idea of branching, and sort of the heterogeneity of networks. So you can see, again: AlexNet, for example, features some of these branches; Inception v1 has different branches, called inception blocks.
A
Residual networks, I guess, are a special type of network that has these sort of pseudo-branches that branch off. So there are a lot of different structures you can implement in your neural network. And so, why does branch specialization occur? It's defined by features organizing between branches: in a normal layer, features are organized randomly.

A
A given feature is equally likely to be any neuron in the layer, but in a branched layer we can often see features of a given type cluster to one branch.
So
this
both
reinforces
this
branching
mechanism
and
it's
almost
fractal.
In
other
words,
it's
it
sort
of
reinforces
additional
branching
structures
down
the
line,
and
you
have
this.
They
think
that
there's
some
role
for
positive
feedback
during
training,
so
there's
a
positive
feedback
loop
that
reinforces
these
branches
over
time,
and
so
the
more
data
you
add
in
you
know
the
more.
A
It's like a branch of a river that's reinforced by the flow of water, or like a trail system in a forest where people walk through and clear the area: if people keep using a certain path, it becomes progressively cleared and becomes the path of choice.
A
So this is a nice article to read or to follow up on. Let's see what we have here in the chat; we have a couple of things. Jesse said: might you want to mention the Biologically Inspired Cognitive Architectures conference? Okay, so there's this, and I can send it.
A
Maybe next week; I don't have the document open, but there's this conference going on called Biologically Inspired Cognitive Architectures, and they're accepting papers on neural networks and AI systems that might be of interest to some of you. Jesse, if you could post an announcement of it in the DevoWorm Slack channel, that would be good. They're accepting both abstracts and papers, and the deadline is a ways off.
A
So if you're interested, you might want to check that out and submit something. Dick says: growing graphs, see Stephen Wolfram. This is a YouTube video from Stephen Wolfram on growing graphs, which is something we talked about earlier in the meeting: how you have these graphs that grow in terms of their topology.
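A graph that grows in topology can be sketched with a simple preferential-attachment rule, where each new node attaches to existing nodes in proportion to their degree. This is a generic textbook growth model for illustration, not Wolfram's specific construction; the seed triangle and parameters are arbitrary choices:

```python
import random

random.seed(1)

def grow_graph(n_new, m=2):
    """Grow a graph by preferential attachment: start from a triangle,
    then each new node attaches m edges to existing nodes chosen in
    proportion to their current degree."""
    edges = [(0, 1), (1, 2), (0, 2)]          # seed triangle
    # Each node appears in this list once per incident edge, so a
    # uniform draw from it is a degree-proportional draw over nodes.
    endpoints = [v for e in edges for v in e]
    next_node = 3
    for _ in range(n_new):
        targets = set()
        while len(targets) < m:               # m distinct attachment targets
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((next_node, t))
            endpoints.extend([next_node, t])
        next_node += 1
    return edges, next_node

edges, n_nodes = grow_graph(50)               # 3 seed nodes + 50 grown nodes
```

The topology changes as the graph grows: early nodes accumulate degree and become hubs, which is the kind of history-dependent structure the growing-graphs discussion is about.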
A
I think Dick sent me this last week. This is an old classic paper from Conrad Waddington, who is known for his epigenetic landscapes idea, and Russell Cowe, someone I'm not really familiar with, but he was a computer scientist and Waddington was a geneticist. They wrote this paper in 1969 called "A computer simulation of a molluscan pigmentation pattern," and this goes back to the things you were talking about with seashells and the patterns on seashells. So Trudy, of course, is interested in that.
A
So, from the abstract: the formal characteristics of the color pattern on a molluscan shell, which has a complex appearance (if you've seen a seashell, you know what that looks like), were studied by developing a computer program which produces an acceptable simulation of it. They didn't really use anything fancy here, especially in 1969.
A
The pattern clearly involves a random factor determining points of initiation of diverging lines of pigment. The program contained parameters controlling the density of random initiation points in different sectors of the growing edge of the shell, and the angle of divergence of the lines.
A
So they had to set this problem up in some way, and they used a certain approach; it's in the paper. In order to produce a satisfactory simulation, it was necessary to introduce another rule by which new initiation points arising within a certain distance of an existing pigment line are drawn onto that line.
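Those rules (random initiation points along the growing edge, pairs of diverging pigment lines, and would-be initiation points near an existing line being captured by it) can be sketched as a toy simulation. The grid size, initiation probability, capture distance, and unit slopes below are my own assumptions for illustration, not the parameters from the 1969 paper, and the capture rule here simply absorbs the new point rather than redrawing it onto the line:

```python
import random

random.seed(0)

WIDTH, STEPS = 60, 40       # growing-edge width and number of time steps (assumed)
P_INIT, CAPTURE = 0.05, 2   # initiation probability and capture distance (assumed)

def simulate_shell():
    """Toy version of the Waddington & Cowe rules: random initiation
    points on the growing edge spawn pairs of diverging pigment lines;
    an initiation point arising near an existing line is absorbed."""
    pigment = [[0] * WIDTH for _ in range(STEPS)]   # rows = time, cols = edge
    tips = []                                       # active (position, slope) line tips
    for t in range(STEPS):
        for x in range(WIDTH):
            if random.random() < P_INIT:
                if any(abs(x - pos) <= CAPTURE for pos, _ in tips):
                    continue                        # captured by an existing line
                tips += [(x, -1), (x, +1)]          # diverging pair of lines
        tips = [(pos + slope, slope) for pos, slope in tips
                if 0 <= pos + slope < WIDTH]        # advance tips; drop off-edge ones
        for pos, _ in tips:
            pigment[t][pos] = 1                     # deposit pigment at the edge
    return pigment

pattern = simulate_shell()
```

Because each row is deposited in turn at the growing edge, the finished grid is a record of the whole history, which is exactly the diachronic character discussed below.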
A
Today we would just need some exemplars to train a model on, and then look at the model and see how it's interpreting the patterns inside. There are different ways you can do this, but this was back in 1969, so they were all using rule-based models, and that very much gets you down to the nitty-gritty of the problem and how to set the problem up.
A
So I think that's why it's worth revisiting some of the old papers, because they actually do talk about the way they set the problem up. They talk about the Turing model of morphogenesis; let's see what they say about that. The pattern they observe differs from those studied by Turing and Scriven by being diachronic in the sense of Waddington; that is to say, the shell does not exhibit the instantaneous state of the pattern-forming system at a particular moment in time.
A
Rather, the pattern is built up gradually over a period of deposition of pigment at the growing edge of the shell. So what happens is you have this spatial system that goes from one end to the other, and this thing is laid down over time by deposition. It's almost like a printed document coming out of your printer.
A
It starts printing from the top and goes down to the bottom, maybe going back over for different colors, and a bunch of stuff is deposited on the surface, and then that's your page. You ask: how did this page get printed? The answer is that a printer went over it and printed it, and in this case it's something that is diachronic.
A
It happens at different points in time, and the pattern is just the accumulation of all that; it builds up gradually. The Turing model, by contrast, deals more with the interactions of morphogens; it doesn't necessarily depend on the deposition of things over time. That's the difference here; it's a different model of morphogenesis, and again, depending on how you set the problem up, you get a different result.
A
So they talk about the physiological model and some other things in here, and here are two patterns that result from the first program. This is not a cellular automaton; these are simply lines deposited on the surface, and they wanted to see how you would produce something like that.
A
This is Randy Gallistel, who is a learning and memory person, and Samuel Gershman, who is a neuroscientist; a number of people who are interested in cognitive science and psychology, and I think in Brains, Minds and Machines, which is a sort of interdisciplinary group. They're interested in this idea of learning in single cells. In the early 20th century there was a question about whether single cells could learn, where learning means conditioned learning: the type of very simple learning that we observe in, say, sea slugs.
A
We talked about that, I think, last week or the week before: they do this thing where they associate different stimuli. Of course, single-celled organisms don't really have brains, but they can nevertheless do this sort of thing. Why is that? We've talked about non-neuronal cognition, and this is basically what they're getting at here.
A
So there was a view that prevailed that these single-celled organisms were capable of non-associative learning, but not of associative learning such as Pavlovian conditioning. That's where you associate two stimuli; in the case of Pavlovian conditioning it was ringing a bell and a dog salivating, thinking it was time for dinner. Experiments indicating the contrary were considered either non-reproducible or subject to more acceptable interpretations.
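Associative (Pavlovian) conditioning is often described with the Rescorla-Wagner update, in which the association strength between the conditioned stimulus (the bell) and the outcome (food) grows with each paired trial in proportion to the prediction error. This is a generic textbook model added here for illustration, not a model from the paper under discussion, and the learning-rate value is an arbitrary choice:

```python
def rescorla_wagner(trials, alpha_beta=0.2, lam=1.0):
    """Return the trial-by-trial association strength V for repeated
    CS-US pairings: each trial moves V a fraction alpha_beta of the way
    toward lam, the maximum association the US can support."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)   # prediction-error update
        history.append(v)
    return history

curve = rescorla_wagner(50)           # smooth, saturating learning curve
```

Nothing in the update requires a synapse; the point of the single-cell work is that some intracellular variable could, in principle, play the role of V.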
A
So the prevailing orthodoxy was that single cells could not learn. What we found from revisiting and reinterpreting the work on paramecium was that in fact they can learn, and they can even do associative learning. I think this is both an interesting historical aside and a good point in favor of this idea of non-neuronal cognition. Also, the emergence of learning is much faster than evolution: an organism can learn things much faster than it can evolve, and that's good for survival.
A
The focus on learning has led to a biological model of learning based on synaptic plasticity, which of course is fine if you have synapses and a brain. But if you don't have synapses and a brain, can you even learn? Well, we see from some of the experimental data that there is learning going on in these systems, so why? That's what they talk about here.
A
It's interesting to note that in sponges they've found analogs, or paralogs, of synaptic genes: genes in sponges that are expressed in ways that parallel what's going on in the synapses of brains. They don't get expressed in a synapse, but they get expressed in the sponge morphology, and that's an interesting parallel.
A
I remember this paper from a while back: basically they found that organisms without brains have a lot of analogs to some of the mechanisms of brains, and that's another interesting aside. So, in connectionist models of learning and memory, synaptic conductance is represented by connection weights. This is our typical inspiration for artificial neural networks, but unicellular organisms don't have any of this, yet they're still capable of complex behaviors.
A
So in this paper they go through some of the evidence from paramecium, which we don't talk about much in this group. And this is the alternative hypothesis to having brains: memory may be stored using a cell-intrinsic substrate.
A
So there's a biochemistry of memory. We don't know exactly what the signals are, but we know there are different sorts of event timers, molecular in nature, that might provide interval timing: biochemical mechanisms for keeping time in the organism, and keeping time with respect to what stimuli are coming into the organism.
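One simple way a molecular event timer could work is accumulation to a threshold: a product is synthesized at a constant rate and degraded in proportion to its level, and an event fires when the level first crosses a threshold. This is a generic sketch of the idea, not a mechanism from the paper; all rate constants are invented for illustration:

```python
import math

def time_to_threshold(k=2.0, d=0.1, theta=10.0, dt=0.001):
    """Integrate dp/dt = k - d*p from p = 0 (simple Euler steps) and
    return the first time p crosses the threshold theta. The interval
    is set by the chemistry: analytically t* = -(1/d)*ln(1 - theta*d/k)."""
    p, t = 0.0, 0.0
    while p < theta:
        p += (k - d * p) * dt   # synthesis minus first-order degradation
        t += dt
    return t

t_event = time_to_threshold()
t_exact = -(1 / 0.1) * math.log(1 - 10.0 * 0.1 / 2.0)  # analytic crossing time
```

Changing the synthesis rate, the degradation rate, or the threshold retunes the interval, which is what makes such a circuit usable as an adjustable timer.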
A
So the paper goes through the historical background of this: what people used to think about these mechanisms in the early part of the 20th century, then the paramecium as a model organism, and then Beatrice Gelber's work, which really brought some of this together.
A
But of course this wasn't interpreted that way at the time, because the peer-review process didn't accept those interpretations. This is an example of some of the experiments that were done, and the paper goes through that whole intellectual process, but also what might be happening in single cells. I think it's an interesting paper; we should definitely revisit it later.
A
Let me see what we have here in the chat. Jesse says this is also interesting with respect to Paul Cisek's recent paper. That's something we covered in my other group, where we talked about this idea of how the brain evolved.
A
How did different developmental precursors of the brain evolve throughout the tree of life? We have these different structures in the human brain, or in the mammalian brain, and Paul Cisek wrote a paper looking at this phylogeny. It wasn't really a statistically supported phylogeny; they put different brain structures on a tree and evaluated when they may have arisen in the common ancestor. It's an interesting paper, and it did have a lot to do with this.
A
It's very interesting in that context. Then Dick says we may have to solve the problem of single-cell morphogenesis before single-cell learning, and that's probably true; there's a lot in single-cell morphogenesis that we can understand as well, and maybe they're connected. We don't really know; that would be an interesting question.
A
Finally, I'll talk about this paper on the pace of development; I don't really want to get deeply into any of these. If you're interested, we have this paper that we published on the timing of cell divisions and cell differentiation in different embryos and what that temporal pattern looked like, and this is a news article from, I think, Nature.
A
So this is talking about the pace of development: researchers are starting to work out why animals develop at different speeds, and the key could be tiny timekeepers inside cells. We just talked about interval timing perhaps being regulated in single cells, so this is a good follow-up, because it talks about some of these mechanisms for timing. When a cell develops, there's a whole internal program that gets expressed, and there's a certain timing to it.
A
So cell division has a timing. There's a very intricate gene regulatory circuit that manages cell division: in different phases of cell division, different things are expressed, accumulate, trigger other things to be expressed and accumulate, and then degrade (I'm talking about the gene products here). All those things have to be timed just right within a single circuit for cell division to occur normally, and so this perhaps requires some sort of timekeeping mechanism; it definitely requires a temporal cadence.
A
Animal cells bustle with activity, and the pace varies between species: in all observed instances, mouse cells run faster than human cells, which tick faster than whale cells. By ticking they mean the metabolism in the cells, so this isn't exactly cell division, but it's a similar idea.
A
Cells proliferate at a faster or slower rate, and that's actually controlled by what they call heterochronic genes. There's a whole literature on heterochrony in timing and development: things slow down or speed up with respect to body growth, to body length and growth, but those things ultimately reside in the cells and cell division. So there's this quest to look at the cellular timekeepers.
A
These timekeepers control the speed of development, the speed at which bodies and tissues are built, and so forth. They're saying there's a clock in early embryos that beats out a regular rhythm by activating and deactivating genes. This segmentation clock creates repeating body segments, such as the vertebrae in our spines, and that's what this lab in Barcelona is interested in.
C
A
They've looked at waves of gene expression moving along the embryo from tail to head, oscillating in synchrony with the development of somites. So there's this wave activity; maybe differentiation waves have something to do with this, I don't know, but it's nice to see these things in other parts of developmental biology.
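The segmentation clock is commonly explained by delayed negative feedback: a gene product represses its own production, but only after a delay, so its level overshoots and then oscillates. Below is a deliberately crude discrete toy of that logic, not any of the published clock models; the delay, Hill steepness, and other numbers are arbitrary choices that merely make the oscillation easy to see:

```python
def delayed_repression(steps=40, delay=5, k_max=100.0, x0=10.0, n=4):
    """Toy delayed negative feedback: the product level x[t] is set by
    steep Hill repression from its own level 'delay' steps ago,
    x[t] = k_max / (1 + (x[t - delay] / x0)**n).
    The delay makes the repression arrive too late, so the level
    swings between high and low instead of settling."""
    x = [0.0] * delay                    # history: no product initially
    for t in range(delay, steps):
        past = x[t - delay]
        x.append(k_max / (1.0 + (past / x0) ** n))
    return x

trace = delayed_repression()             # square-wave-like rhythm, period 2*delay
```

In the embryo the analogous oscillation in each cell, coordinated with its neighbors, is what lays down one somite per clock cycle.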
A
Oh, and they do mention Haeckel and heterochrony in this article. This is at the organismal level instead of the cell-division level, but this sort of clock contributes to heterochrony. Modern developmental biologists regard heterochrony as a key concept that helps to explain a core mystery: at the earliest stages of development, all vertebrate embryos look alike, yet as the embryos develop, they become easily recognizable.
A
If you were to slow that down in a single species, you could get different morphologies; you could do that experimentally, I guess. What they're saying is that if you slow things down, or if you speed them up, the organisms are going to look different, and across species there's a variable timing that contributes to these differences in form. They have this diagram: they show a mouse embryo, they show the somites, and then they show the tail, where there's gene expression.
A
And then there are these segments that form the somites, which emerge and expand the tail outward, elongating it. The somites, depending on how fast they get put in place, determine the length of the tail. This is an interesting area of research; I probably haven't done it justice, but this is a nice piece to go through to find out more about the phenomenon. This is just a news article.
A
And with every news article, there are always papers to follow up on different topics, so here's a list of those papers. I think that's good. So what do we have in the chat here?
A
Could you send the links to these last two papers here, or in Slack? I'll probably put the links in Slack: pace of development, et cetera. Dick says synchrony in cell division is usually just in early embryos. Yeah, there's differential timing in cell division later on in development; a heterogeneity of cell-type timing in cell division emerges, but the synchrony is usually just in early embryos. And Jesse says he would like to look a bit more into this metabolism with respect to development.
A
Well, I'll follow up on some of these things and send out the papers. If we have anything else that we want to add... oh, Jesse had something to say.
B
A
Yeah, thanks Susan for showing up, and thanks Krishna for your presentation. Next week we'll probably talk about more very similar things, but if we have anything to follow up on, please feel free to contribute or to send papers my way. You can see we have a very long queue of papers, but we'll get through them. It's always interesting to see this. Have a good week, everyone; stay safe.