From YouTube: DevoWorm #16: Multiview Imaging, GNNs, Cell tracking Methods, Metabolic and Neuronal Network Origins
Description
Multiview imaging and methods for spatiotemporal integration of microscopy images, Advances in Graph Neural Networks for embryogenesis, techniques for reconstructing cell lineages from sparse annotations and cell tracking, origins of metabolic (composomes) and neuronal (peptidergic-based) networks. Attendees: Susan Crawford-Young, Karan Lohaan, Jiahang Li, Richard Gordon, and Bradly Alicea
A: Wow, that's a horrible spring.

B: So hopefully somebody will contact me from hamsall and tell me how to put the formula in the physics section of the simulation.

A: Yeah, I don't think anyone in our group does, but yeah, I don't.

B: Oh, is he there? Oh okay, I'll shut off my... hello, hello, yeah.

D: Still the axis-of-rotation thing, you know, that part is still open. I'm currently working on that right now, because the thing is, in the current model rendition that is coming out, it's the outlines. You know, if they're not aligned under that particular axis of rotation, some will protrude in a weird manner. That was visible in that model.
B: Yeah, I'm going to... oh sorry, have you tried the images from my other paper at all?

B: If you put too much pressure on it, then it wore out; the whole thing needed to be more professionally put together, I guess you'd call it, for it to work properly. It was a concept only. So this one, because it stands still, with the fixed angles, should be easier for you to work with.

D: Okay, yeah, because the angle, the rotation, is already predefined.

B: Yes, the angles are 90 degrees on the top and 90 degrees on the bottom, and then there are the top and bottom ones. Well, they won't be precisely like that, because you have to change them so that you can actually zoom in on the object, but they stay the same, which is useful: once you've got the angle, it's not going to change, because it's stationary. That's why I built this; I have had enough of moving parts.

D: Because the angles are predefined, we know the projected outline from each angle, right, and then it's just mapping the contours properly. That won't be too much missing information, because the axis is already fixed; it's stationary now. And an interesting thing that Dr. Richard Gordon pointed out was that they're not rigid structures, you know, so if they're rotating very fiercely, maybe their actual size differs from the size that has been captured on the microscope.

B: I eventually put a little bit of plastic in a tube so that it wouldn't take as deep a dive, so it would stay in focus better. So yeah, it was somewhat difficult to work with.

B: I could use a 24-megapixel camera to take the images, that's all, so you had better-resolution images with it. Because I got a microscope tube for the lenses from a company in Europe, and they guaranteed higher-resolution images, I'm hoping that with the 10 images at 2 megapixels, maybe that'll compensate for it.
D: A good approximate model won't be that difficult now, you know, when I'm trying to approximate the axis of rotation. So the only part is meshing all those projected images together, you know, into a single equirectangular image. That would greatly benefit from more samples, but the model will remain the same.

D: When we see an axolotl image, right, the pixels closer to the outer edge, there's not much information that can be taken out of them, unlike the region in the center, which is in the focus area. That is clear, because that's what we see, but towards the edges, you know, those pixels...

D: They need a lot more, you know, augmentation techniques to work around, so that I can get the software thing out.
B: Well, they're fun to watch, but yeah, we'll see. I'll try to find the sound first. Okay, maybe.

B: It's seashells as well, so I'll get some images of peppercorns and seashells.

B: Yeah, I have an uninterruptible power supply on my computer, but even then I was just worried about spikes, because I've had one take out my computer before. Actually, this one crashed my computer: my D drive was just gone, and anything I had stored on it. So we had a storm here, and it snowed, and there was an ice storm in the middle of it, and there was thunder and lightning, everything.
A: So, thank you for that update. Karan, do you have anything to update us on, sort of things you've been doing?

C: Yeah, maybe I have some... maybe there are some problems with my network connection. Yes, maybe you cannot hear me clearly, but if you can hear me...

C: Yeah, if you find that you cannot hear me, you can just type, and I will know. And I do remember that I proposed a research idea about combining sequence-to-sequence models and graph generation, and we can employ this idea on the serial movements prediction. Yes, you've mentioned this topic in the proposal of the graph neural network projects, but we didn't focus on that, because I told you that there are a few works of deep learning using different techniques to solve this kind of problem. So I think maybe you can...
A: Yeah, that would be great. I saw that you submitted a project proposal, so I'm gonna read through it and give you some feedback on it. In the context of this, I know that we mentioned this in the Slack as well, the seq2seq solution, so we'll talk about that.

C: Actually, this research idea is separate from the project. So I think if we are interested in both the project and the research idea, maybe we can work on them concurrently. Yes, and just like what I said, there are a few works on that, and actually there are a lot of scholars in the field of graph neural networks focusing on graph generation.

C: Yes, it is actually a rising, trendy thing, so I think maybe we can try something with that. And actually I've surveyed a lot of papers, and I thought there are only a few works on combining graph neural networks with, like, the dynamics of developmental analysis of biological things. So I think there's a lot we can try. Yes, that's what I mean, yeah. That sounds great.
A: Yeah, could you put together a list of things that you've read? I know that you said you've surveyed some of the literature.

A: Okay, yeah, sounds good. I'm glad to hear that you're making some progress on that, finding things and making connections between things. All right, well, thanks for the update. So the deadline for the GSoC projects was Tuesday, and I got your projects.
A: I got all the applications that people sent in, and we'll be making a decision in the next week or so on which ones get funded. This is actually up to INCF to make a determination as to what gets funded and what doesn't, but we'll release those results in a couple of weeks, and then we can still work on projects if people want to work on them, whether they get funded or not.

A: I think we've been doing a lot of good work here, and there are other opportunities for funding, whether people get GSoC projects or not. So if we get a GSoC project, that's great: we get a summer where you get paid for it. If not, we can always find other opportunities to get funding, and then there are always publications that this will lead to. There are all sorts of opportunities we've had.

A: Like I said, the DevoLearn project was the culmination of a couple of years' worth of GSoC projects, and I want to have the graph neural network work incorporated into DevoLearn at some point, so we would be building off of that as kind of a module.
A: So people could, you know, download DevoWorm, or download DevoLearn, and then have another set of libraries that they could use for the graph neural networks. And then there are some other algorithms that we have that we can maybe add in later as well, for other organisms. I mean, in the GitHub organization for DevoLearn we actually have those sort of already in there, but they're not in the same release.

A: That's something we're working towards. I hope it'll be kind of a resource where people can download a pre-trained model for deep learning, have some other libraries that also have some deep learning algorithms, maybe for a broader set of organisms, and then have the graph neural networks so that they can derive graph structures for their data. I think that differentiates that line of work from a lot of the other stuff that's been going on, because a lot of people have been embracing this area of deep learning and microscopy analysis, and there are a lot of people doing things with it. But it's largely segmentation; there aren't a lot of pre-trained models, and there aren't really too many graph neural network models. So that's why I want to make sure that they eventually become part of, you know, sort of a family of things that you can download and use.
A: But that's the long-term vision for that. And then for the digital microsphere project, that would be nice to have as well; maybe something like a web interface that people can go to or download. It wouldn't really be in the same family of approaches.

G: Yeah, okay, yes, a question. Harikrishnan and I talked on Saturday, and he's run into a problem superimposing images on a sphere from Susan's data. And I suggested a technique he wasn't familiar with, which is simulated annealing.
A: Well, in terms of optimization, there are other optimization techniques, a lot of different types, but I don't know if there's something that's like simulated annealing but better. So you can use that basic approach, and I don't know the state of the art for simulated annealing. I'm sure people have worked on this a bit more in the recent past, but maybe not; you could throw a regular set of optimization algorithms at it.
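Since simulated annealing is the technique suggested here for registering images on a sphere, a minimal, generic sketch may help. The cost function, step size, and cooling schedule below are illustrative assumptions, not anything from the discussion:

```python
import math
import random

def simulated_annealing(cost, x0, step=1.0, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize cost(x) over a single parameter (e.g. a rotation offset)
    by simulated annealing: random local proposals, always accepting
    improvements, and accepting worse moves with Boltzmann probability
    that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # local random proposal
        cc = cost(cand)
        # Accept better moves always; worse moves with prob exp(-delta/t).
        if cc < c or rng.random() < math.exp(-(cc - c) / max(t, 1e-12)):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling                           # geometric cooling schedule
    return best_x, best_c
```

For the sphere-superposition problem, `cost` would be a mismatch score between the overlapping projected images at a trial alignment; here a toy quadratic stands in for it.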
D: Yeah, my method is actually using these ideas. Since, let's say, there are eight images per sample of an axolotl, we'll have eight perspective images of the axolotl embryo. Then what we do is, like you've seen the projections of Earth from the globe onto a map, we do a projection transformation, and we get an equirectangular projection.

D: It uses, you know, a coordinate transformation to a cylindrical projection, like a globe being projected onto a cylinder. Then from that cylinder we just lay it out flat, so that we get a very big panoramic image. The problem that I'm trying to solve currently, with the current lmbo data set, is this: since there are eight perspective images, I can convert all of them to those equirectangular projections, you know, projections of that spherical thing onto a plane.
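The globe-to-map transformation described here can be made concrete. This is a minimal sketch of the equirectangular mapping; the image dimensions are arbitrary and nothing in it is specific to the axolotl data set:

```python
import math

def sphere_to_equirect(lat, lon, width, height):
    """Map a point on a sphere (latitude/longitude in radians) to pixel
    coordinates on an equirectangular image: longitude becomes x linearly
    and latitude becomes y linearly, the same layout as a world map."""
    x = (lon + math.pi) / (2.0 * math.pi) * width   # lon in [-pi, pi)
    y = (math.pi / 2.0 - lat) / math.pi * height    # lat in [-pi/2, pi/2]
    return x, y
```

A point at latitude 0, longitude 0 lands in the center of the image; the poles map to the top and bottom rows, which is exactly why the edge pixels end up stretched and distorted, as discussed below.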
D: The tricky part right now is that the side pixels are also kind of distorted, you know, because of the resolution plus the angle-of-rotation aspect. So mapping them correctly to each other, stitching all these eight images together, is the main problem that I'm trying to cover right now.

D: Otherwise, working with more images kind of improves this model. So let's say there were 12 images; we'll have more overlapping points, and the more overlapping points we have, the easier it is to use OpenCV, the computer vision algorithms that I'm currently using to stitch these images together, because they have more data to work with. So they give a better output.
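The point that more overlap makes registration easier can be illustrated without OpenCV. Here is a 1-D stand-in for image stitching: find the shift that best aligns two overlapping sequences. The data and scoring are toy assumptions; real stitching matches 2-D features, but the principle that a larger overlap gives a more reliable match is the same:

```python
def best_overlap_offset(a, b, min_overlap=3):
    """Return the shift of sequence b relative to sequence a that maximizes
    agreement in their overlapping region -- a 1-D analogue of feature-based
    image registration used in stitching."""
    best_off, best_score = None, -1.0
    for off in range(-(len(b) - min_overlap), len(a) - min_overlap + 1):
        lo, hi = max(0, off), min(len(a), off + len(b))
        if hi - lo < min_overlap:
            continue  # require a minimum overlap to score the alignment
        matches = sum(1 for i in range(lo, hi) if a[i] == b[i - off])
        score = matches / (hi - lo)
        if score > best_score:
            best_off, best_score = off, score
    return best_off
```

With only a few overlapping samples the score is noisy; with more images (and hence more overlapping points, as described above), the correct offset stands out more clearly.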
G: Which was new to him. And by the way, I guess the cylindrical projection would be analogous to a Mercator projection.

D: Yeah, otherwise there's another, orthographic way of doing it; orthographic projections are there. But I think the closest approach would be similar, like a cylindrical projection to an equirectangular projection.

A: Yeah, there's this whole emerging area of 360-degree videos, where they take video from different locations and then put it together, and you can walk through a scene and rotate around it, and you get a view that's pretty seamless in terms of the perspective, so you're moving around a scene.

G: Okay, yeah, that was a fad about 10 years ago.

B: It's just, you can take... it's like you have two eyes, so that will give you depth of field.
D: Of the majority of the reconstruction techniques that are used, right, there's one called structure from motion, in which you have a very random array of perspectives that you've taken from a camera, and the software tries to make a point cloud, like a 3D point cloud, of the things that it has seen. Similarly, like Susan was mentioning, using two cameras to measure the depth aspect of it, that is MVS, multi-view stereo.

G: Yeah, I had a student years ago with whom we wrote a paper on doing that from one view.

D: Yeah, this is the list of, you know, various 3D reconstruction techniques and that kind of thing. Most of them are currently used for projects like drone mapping; current SfM techniques are used for road mapping in open source software, you know, drones which take a lot of images and then construct a 3D model of whatever area has been covered. So this is kind of a list of different ways people have approached reconstructing 3D models.

D: While I was going through my axolotl embryo work, looking for ways to reconstruct the model, I kind of got this list of methods. So structure from motion is there; structure from motion is basically a camera that is moving around and takes different perspective images that are unsorted. Otherwise we usually have a sorted array, a particular angle and a particular plane from which the camera is taking pictures; SfM instead uses an unsorted method for that.

D: Similarly, down below we have, I think, MVS, and SLAM is also pretty popular. SLAM is, I think, the basis for current self-driving cars; they try to visualize using SLAM techniques.
A: So does SLAM require dynamic data, or can it be used on something that's static?

D: I've seen SLAM techniques being used where, again, the data is static, in the sense that you'll have a still image from each of the different angles from which you've taken the picture. But it's the stitching-together part and the constructing-the-depth-information part, you know, that is kind of unique in how it has been approached in this method.

A: Okay, so this is, yeah, a display of 3D anisotropic images from limited-view computed tomograms; that should be Jamon as the first author. Then there's multi-view stereo vision; can you tell us a little bit about that?
D: Yeah, multi-view stereo vision uses visual odometry techniques, in the sense that it has two cameras for taking simultaneous images, and then it kind of creates a depth field of some sort. But whereas for SLAM you have continuous data, time-continuous data, for multi-view stereo vision, instead of taking just one perspective from that particular camera...

D: ...you have two or three perspective views of that same scene that you're trying to model. So it has slightly less data, and it makes slightly more approximations as to what the model will look like. So, you know, point cloud computation, then making the surface mesh based on those point clouds.

D: Sometimes there are a lot of outliers. I think I was trying to get a project based on this, and they were mentioning that unless the camera has been calibrated properly (because a camera also has two or three properties that distort the image, in a positive or a negative way, where the image will either bow inside the plane or come outside the plane)...

D: ...so when those kinds of details are left out, it tends to create more inaccurate models. Right, all right.
A: All right, I was gonna point out something. This is going back to the graph neural network stuff again: there's a target conference we might look at; this is something that was announced recently.

A: They want to start a Learning on Graphs conference. The graph neural networks area has kind of been under the umbrella of a lot of the machine learning conferences; they've had workshops at NeurIPS, for example, that have been very well attended. And so now they're going to start a virtual conference this year; they're trying to organize it and get everything going.

A: So they're trying to kick-start this area of research in terms of its own conference. There have been a lot of different specialized workshops in the past having to do with graph ML, like graph ML in industry, learning on knowledge graphs, molecule discovery, and graph learning benchmarks.

A: What they want to do is create a sort of large conference that covers all of these areas, so they're soliciting papers. They want to use OpenReview for the review process, and hopefully they get a pretty good set of submissions. The deadline for this, I guess, is tentatively in September.

A: So this is a couple of months off. Maybe we can put together a paper for this, based on doing some work over the summer.
A: It looks like you have to submit by September, then you have a revision period in October, and then the final decisions are released in November. I like that they allow you to revise the papers before they give you the final decision, because a lot of conferences don't let you do that; it's just, here's the paper, is it acceptable or not? Which is different than journals, because journals usually allow you to revise a little bit. But anyways, the submissions aren't open yet; I'm not really sure if they have all the details on it, but I think the idea is that you're gonna have a couple of different tracks.

A: You know, thinking about how we want to put together some sort of publication or some sort of report on this, I think this would be a really good target.

A: It looks like there are all sorts of different approaches here, from generative models to knowledge graphs to expressive graph neural networks.

A: So basically it's just about anything you can think of, so it would be great if we could target that. And again, if we don't get accepted to this, we still have the paper, so it's not like it's going to be a waste of time. It's just a matter of getting something off the ground, getting something tangible out there, and seeing where it goes.
A: So that's good. I know we've talked about cell tracking and lineage construction. This kind of goes back to some of the stuff that we've talked about with graph neural networks, but it applies to other things, and last week I covered the data that we have. So we have a lot of data available for embryos.

A: Some of it's like the axolotl model data, which is kind of raw images where nothing is being tracked, through to the C. elegans data which we use, which has this cell tracking aspect to it, and there's cell tracking for all sorts of organisms, for embryos. They also have cell tracking or, actually, behavioral tracking; they use similar techniques for tracking behavior in OpenWorm.

A: We have a version of this for worms that's called the Tierpsy tracker. They take images of worms in a culture dish and they track each worm: they use some marker on the worm, look at its movement, and track the movement around its environment. You can do this for other organisms as well: you can take high-quality video of the enclosure they're behaving in, and you can track their behavior and follow different trajectories.

A: This is all sort of enabled by deep learning technology and, in the case of embryos, also by a lot of the fluorescence microscopy that we use for looking at gene expression, but also for other markers of different structures in the cell.
G: Let me say something here: if Harikrishnan succeeds with the montaging, then one obvious step is to segment all the cells that are on the surface and track them.

A: All right, so let's see. This paper is actually one example of something that people are doing with this technology. For the C. elegans work, people have published papers on this; they also publish papers on zebrafish, and this is for embryos; you can go look at those. Philipp Keller is a person who does this in zebrafish.

A: This paper is by a number of people from HHMI Janelia. This is a lab run by the Howard Hughes Medical Institute in Ashburn, Virginia, which is near Washington, DC, and the paper is on automated reconstruction of whole-embryo cell lineages by learning from sparse annotations.
A: So this is the sort of stuff that we see. In this case they're using stacks of images: they're going from the dorsal to the ventral side of the embryo, going through different focal planes and taking high-resolution images. That allows them to get an image of every cell, because you have cells in different positions along that axis, cells across the anterior-posterior axis, which is front to back, and then to the sides, and so you can get every cell in the embryo that way. But then you also have things in time.

A: It allows you to see cell boundaries pretty well at that resolution and, more critically, it allows the algorithm to pick up on those boundaries and do the segmentation pretty efficiently. We've used a lot of light sheet microscopy for our training data sets, for DevoLearn, for example, because it's very easy to see things, or for the algorithm to see things, in there.

A: In this case they're demonstrating their technique on three common model organisms: mouse, zebrafish, and Drosophila. Drosophila is the fruit fly, zebrafish is the zebrafish model, and mouse, of course; these are all common model organisms. The most difficult data set is mouse, which has a different mode of... well, its development is more like human than the other two: you get a lot of cells where you can't really track the cell lineage.

A: They will, you know, adopt their fate, so it's not deterministic like in C. elegans; you can't really say that this developmental cell will become this adult cell. It really is like you get a bunch of precursor cells, and then they can become specific cells in the organism, with a specific position. So they have these three different modes of development, different model organisms, so they're actually testing this out, trying to test for its robustness.
G: Bradly, I've got a question about this work: are they able to mark the cells in any way in vivo, so that they can tell what kind of cell it is?

A: So this is the main part here, where they talk about some of the manual lineage tracing. You can trace lineages manually using different techniques, Mastodon being one of the two competing methods they mention. These methods are arduous, and for complex developing organisms it is only feasible to annotate a small percentage of all tracks.

A: So they have these different tracks that they're analyzing, which are created in the program, and then they analyze them, making automatic cell tracking necessary for holistic analysis. A number of cell tracking algorithms have been developed where hand-engineered features are sufficient for cell detection and tracking, and this Bao 2006 paper is C. elegans.
A: This is one of the papers where a lot of the C. elegans cell tracking data come from, and C. elegans, of course, has that mode of development where it's not as challenging to do cell detection and tracking. But they're talking about organisms where you don't know the lineage ahead of time; you have to actually map it out and then track the cells, so it's a little bit harder to do.

A: Deep learning has been shown to improve cell detection and segmentation on a wide variety of data sets. Additionally, it has been shown that tracking methods that take into account global spatiotemporal context perform better, especially for data sets with more movement between time frames. So as the cells are moving around and going to different locations, and cell migration is actually much more diverse than what we see in C. elegans, this helps overcome that.

A: Let's see. Oh, you can also do tracking by graph optimization over a large spatiotemporal context. This allows inclusion of biological knowledge about track length and cell cycle, improving track continuity and even allowing recovery from noisy detection and segmentation.
A: So why don't we get into the methods? To attain per-voxel predictions for cell locations and movements, they use a U-Net architecture with four resolution levels; to incorporate temporal with spatial context, they concurrently feed seven 3D frames centered on the target time point into these four-dimensional convolutions.

A: Due to valid convolutions, the time dimension is reduced to one, so they're doing some downsampling. They use 12 initial feature maps, then they upsample; they do a lot of different things within the network. So okay, here it is: this is the raw data. This is three dimensions, and with time it's four dimensions.
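The arithmetic behind "seven 3D frames reduced to one" works out if the network applies valid (unpadded) convolutions along time: each convolution of temporal width k shrinks the time axis by k - 1. A tiny sketch; the kernel width of 3 and the three temporal layers are assumptions for illustration, since the transcript only states the 7-to-1 reduction:

```python
def valid_conv_extent(size, kernel=3, n_layers=3):
    """Temporal extent left after n valid (unpadded) convolutions of width
    `kernel`: each layer removes kernel - 1 samples from the time axis."""
    for _ in range(n_layers):
        size -= kernel - 1
        if size < 1:
            raise ValueError("input too short for this many valid convolutions")
    return size
```

With 7 input frames and three width-3 temporal convolutions, a single frame remains, matching the reduction described above.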
A: These are the reconstructions here without anything on them; then there's a sparse point annotation, where they superimpose the first frame over the first frame of the raw data. So they're reconstructing these graphs from the cell indicators and the movement vectors: they're looking at the movement vectors of cells and the cell indicators in three dimensions, and then they build a candidate graph from that, which gives them information about the movement and where the cell is in the embryo, and then they're able to reconstruct the lineage.
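The candidate-graph step can be sketched: each detected cell in frame t is linked to any detection in frame t+1 that lies near its position shifted by the predicted movement vector. This is a minimal, hypothetical version of that idea; the coordinates, IDs, and matching radius are made up, and the paper's actual construction is more involved:

```python
import math

def candidate_edges(frame_a, frame_b, moves, radius=2.0):
    """Build candidate links between consecutive frames.

    frame_a, frame_b: {cell_id: (x, y, z)} detections at times t and t+1.
    moves: {cell_id: (dx, dy, dz)} predicted movement vectors at time t.
    Returns (id_t, id_t1) pairs whose predicted next position lies within
    `radius` of a detection in frame t+1."""
    edges = []
    for cid, (x, y, z) in frame_a.items():
        dx, dy, dz = moves.get(cid, (0.0, 0.0, 0.0))
        predicted = (x + dx, y + dy, z + dz)  # position shifted by the vector
        for nid, pos in frame_b.items():
            if math.dist(predicted, pos) <= radius:
                edges.append((cid, nid))
    return edges
```

A discrete optimization (the ILP discussed below) would then select a consistent subset of these candidate edges to form the lineage tree.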
A: Because they're able to see how the cells with this marker are moving, and they're able to do it that way. I'm trying to see if they say anything about what kind of markers they're using.

A: Candidate graph extraction, discrete optimization to find lineage trees. I think they're using standard markers for the cells, and I don't know whether they're doing this in vivo or what exactly they're doing. Obviously the embryo is under a microscope, but I don't know how they're doing this. I think they're just using standard markers in the cell, like fluorescent markers, to determine where the cell is.

A: Like something in the nucleus, but I don't know exactly if they're using anything else to mark the cells themselves. A lot of this is looking at the cell, where it is in time. So, for example, here you have a division event: you have a single cell, and then you have two cells, so you're tracking one and then you're tracking two.

A: There's also something unique to some cell that gets passed down to another cell; in cases where you put a marker in a specific type of cell, when the cell divides it passes down that marker. So you can tell if a cell comes from another cell, and thus maybe from a lineage of dividing cells, and that's another way to do it.

A: And then they have this ILP solution, which is an optimization technique, so they're actually able to trace out some of these lineages and things like that.
A: So ILP is integer linear program; this is the constrained optimization problem they're trying to solve, so there's a lot of computation in this, a lot more computation than cell biology. This is largely just taking microscopy data with some marker for the different parts of the cell we want to track, and then the tracking part is really kind of finding the position, putting it into a framework, and then reconstructing the lineages from that.

A: You can't really do that directly because of all the different sources of error; they actually have five types of tracking errors, so there are all sorts of different forms of error in cell tracking. So, aside from being a very computationally intensive method, it can also introduce a lot of error in what you're actually looking at. There are ways to ground-truth these reconstructions, but really it's a very tough problem.

A: So that's good. I think that's just a little bit of a dive into some of the deep learning techniques that they're using, and something that, you know, you have to be one of these huge labs to really be able to innovate in; it's not something that's easy to implement, at least not today. We can do some things with deep learning as a small group, but reconstructing a whole embryo is really kind of a difficult proposition.
B
Okay,
so
yeah.
So
there's
I'm
not
sure
of
the
details.
I
just
know
he
makes
lovely
sparse
sets.
So
the
computation
is
easy.
A
So
you
know
we
have
like
annotation
data
for
c
elegans
is
pretty
well
documented,
but
for
other
organisms
less
so
and
you
know,
but
furthermore,
it's
hard
to
know
the
process,
because
there
are
a
lot
of
things,
there's
a
lot
of
signaling
going
on
and
a
lot
of
things
that
are
dependent
on
where
the
cells
are
in
space.
A
And that's where the sparsity comes in: we just don't know these different things. We can't even really measure them directly, but we still want to reconstruct the lineage to a reasonable degree. So we're trying to get from zero to something in terms of understanding these lineage trees, and we can do that with all different types of data integration, but we can also do it with machine learning, with optimization techniques, saying this is the most likely path for this process.
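The "most likely path" idea can be sketched as frame-to-frame linking that minimizes total cell displacement. The brute-force search below keeps the example dependency-free; a real pipeline would use an assignment solver such as SciPy's `linear_sum_assignment`. The coordinates are invented:

```python
# Minimal sketch of most-likely linking: match cells in frame t to cells
# in frame t+1 by minimizing total squared displacement. Brute force over
# permutations is fine for a toy example; real pipelines use e.g.
# scipy.optimize.linear_sum_assignment (the Hungarian algorithm).
from itertools import permutations

def link_frames(cells_t, cells_t1):
    n = len(cells_t)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum((cells_t[i][0] - cells_t1[j][0]) ** 2 +
                   (cells_t[i][1] - cells_t1[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

frame_t = [(0.0, 0.0), (5.0, 5.0)]
frame_t1 = [(5.2, 4.9), (0.1, 0.2)]   # same cells, listed in swapped order
match, cost = link_frames(frame_t, frame_t1)
```

Here `match == (1, 0)` recovers the swap: each cell links to its nearest successor rather than to the same list position.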
A
B
Okay, I wanted to tell Dick that I've run across someone who's making small microscopes for dentistry that actually do optical coherence tomography. So maybe one could do optical coherence tomography from all sides of a ball, for instance.
F
G
A
All right, so if you can stay on, we have one more thing I wanted to share with people; if not, you can go ahead and go. This is something that I wanted to kind of go over.
A
This is some work on the chemical brain hypothesis and early life. I think Dick sent me this paper on systems biology and prebiotic networks. We've talked about this topic a bit in the group: the origins of embryos in geologic time and how that organization might have happened, and the issues with the great oxygenation events in Earth history. That's one aspect of it.
A
Another aspect is to go back even further and talk about what was happening at the origin of life. So I'm going to go through a couple of papers on this, to talk a little bit about the systems biology. It's really interesting work. This paper is on early systems biology and prebiotic networks.
A
It is assumed that such complexity arose gradually, beginning from a few relatively simple molecules at life's inception and culminating with the emergence of composite multicellular organisms billions of years later. So there is an origin of life, and then there was a period where very simple life dominated and you didn't really see very much complexity until maybe a billion years later in Earth history. So it's really interesting, that period, and if you've seen some of our previous sessions where I've talked about this, I make the point:
A
This was largely due to some of these oxygenation events. The Earth in its very early history was very inhospitable to life, and it became more hospitable as time went on; then some of the simple organisms were able to actually change the atmosphere so that it would enable more complex organisms that were based on oxygen, and that sort of thing.
A
So the main point in the present paper is that very early in the evolution of life, molecular ensembles with high complexity may have arisen, which are best described and analyzed by the tools of systems biology. They show that modeled prebiotic mutually catalytic pathways, and these are different types of pathways...
A
You had these small molecular structures that weren't really cells, but were like protocells, and they had these network attributes, attributes of metabolic networks that are similar to present-day living cells. That suggests there's a very long evolutionary history of this, and it emerged early on. This includes network motifs and robustness attributes: properties of networks that are often seen as optimized parts of a network.
A
Robustness means that a network can overcome perturbations; network motifs means there are things grouped together that enable function in an organized manner. So these networks weren't just random networks; they had some structure to them. They point out that early networks are weighted (graded), meaning the edges have some weight to them: they're not just binary networks that turn on and off. But using a cutoff formalism, one may probe their degree distribution, which is basically how many connections each node has, and show that it approximates that of a random network. So these actually look like random networks, in the random-network sense of network science.
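A minimal sketch of that cutoff formalism, under the simplifying assumption that edge weights are i.i.d. uniform (the paper's actual interaction strengths are model-derived): thresholding a weighted network then yields an Erdős–Rényi random graph, so the mean degree should sit near (n−1)·p, where p is the fraction of weights above the cutoff.

```python
# Sketch of a cutoff formalism: threshold a weighted ("graded") network
# into a binary one and inspect its degree distribution. With i.i.d.
# uniform weights, the thresholded graph is an Erdos-Renyi random graph,
# so the mean degree should approximate (n - 1) * p.
import random

random.seed(0)
n, cutoff = 200, 0.9                  # keep the strongest ~10% of edges
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() > cutoff:  # weight above cutoff -> keep edge
            adj[i].add(j)
            adj[j].add(i)

degrees = [len(adj[i]) for i in range(n)]
mean_degree = sum(degrees) / n
expected = (n - 1) * (1 - cutoff)     # ER expectation, about 19.9
```

Plotting `degrees` as a histogram would show the roughly Poisson-shaped distribution characteristic of random networks, with no heavy hub tail.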
A question is then posed regarding the potential evolutionary mechanisms that may have led to the emergence of scale-free networks in modern cells.
A
So scale-free networks: if you've ever heard of a small-world network, these networks have both features of hierarchy and features of distributedness, so they're these kinds of mixed network topologies. In this case they're talking about things that approximate a random network. A random network is a network where there's no real hierarchical structure. That doesn't mean these networks don't do anything or don't have structure; it's just that the structure isn't organized in any coherent way, as opposed to scale-free networks, where there is a certain type of organization that leads to a certain type of behavior.
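The random-versus-scale-free contrast can be seen directly in degree sequences: preferential attachment (Barabási–Albert style growth) produces hubs that an equally dense random graph lacks. This is a sketch with invented sizes, and it allows multi-edges and self-loops for simplicity:

```python
# Contrast a random graph with a scale-free one built by preferential
# attachment. At the same edge count, the scale-free graph develops
# high-degree hubs while the random graph's degrees stay near the mean.
import random

random.seed(1)

def random_graph_degrees(n, n_edges):
    # Degrees of an Erdos-Renyi-style graph with n_edges random edges.
    deg = [0] * n
    for _ in range(n_edges):
        i, j = random.sample(range(n), 2)
        deg[i] += 1
        deg[j] += 1
    return deg

def preferential_attachment_degrees(n, m=2):
    # Barabasi-Albert-style growth: each new node attaches m times to
    # existing nodes chosen proportionally to their degree. `targets`
    # repeats each node once per unit of degree; multi-edges and
    # self-loops are tolerated to keep the sketch short.
    targets = [0, 1]                 # seed: one edge between nodes 0 and 1
    deg = [1, 1] + [0] * (n - 2)
    for new in range(2, n):
        for _ in range(m):
            old = random.choice(targets)
            targets += [new, old]
            deg[new] += 1
            deg[old] += 1
    return deg

n = 500
sf = preferential_attachment_degrees(n, m=2)
rnd = random_graph_degrees(n, n_edges=sum(sf) // 2)   # same edge count
```

At equal density, `max(sf)` far exceeds `max(rnd)`: the hubs are the signature of scale-free organization.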
A
So this paper goes through prebiotic molecular networks. They talk about some of the scenarios for early life: that life may have begun with a single molecular replicator, an RNA-like biopolymer that could make copies of itself, and it's further assumed that complex molecular networks came much later.
A
So the networks that we see in modern cells, for example, came much later; there's some gap in between these biopolymers and the very complex metabolic networks that we have today. The idea is that they're trying to model this lost period, and they want to understand whether the network came first or other things came first. So it is further assumed that complex molecular networks came much later and were genetically instructed by the replicating polymers.
A
The second set of scenarios asserts that early replicating entities must have constituted complex molecular networks right from the outset. The latter view claims that the emergence of single molecules whose inner workings allowed them to instruct the synthesis of their own copies is extremely unlikely under prebiotic conditions.
A
So this is where you had life that formed from spontaneous mixes of different things, and it just kind of happened that way. But this is dependent on these assemblies having a certain set of network properties, ones that mimic present-day cells.
A
So basically, what they're saying here is that the metabolic networks at this early stage not only mimic what we see in modern-day cells and their metabolic networks, but that this is probably the way it worked; there's no real alternative. This type of organization happened very early, and it just became a little bit more complex over time.
A
So in the network you can actually see patterns of how life emerged, but also, those networks really can't be organized any other way. They go through some of these networks, and the paper is very heavy on metabolic networks and the analysis of those networks.
A
I don't really want to get into the technical details of that, but this figure shows prebiotic and contemporary biological networks. I guess this is the legend here: this is a graded autocatalysis replication domain (GARD) system. Autocatalysis is a very rich field; they talk about it a lot in complexity theory, and in biochemistry even. It's the idea that you have a catalytic cycle: you have to have certain things in place for a catalytic cycle to occur. And this is an autocatalytic network, meaning a network that's wired to perpetuate chemical reactions, and this is what it looks like when it's self-organized.
A
So this is what they call the composome, and this is something that I'm not going to get into, but it's a word that they throw out. I guess it means the assembly is composing itself from different components, and this is something that they've been able to simulate and build together. And then B is the schematic illustration of this autocatalytic set. Stuart Kauffman, who has worked on other types of networks like NK networks, also proposed an autocatalytic set. This is something where you have a number of essential pieces that have to be in place for some of these catalytic reactions to occur; it's, I guess, a minimal set of catalytic components that need to be in place. And then there's a comparison with something you might see in a cell today. So this is a network.
A
It
looks
much
different
than
this
compassome
here.
This
is
a
protein
interaction
map.
Where
you
see
different
things,
the
structure
is
very
different
from
this
network
and
likewise
for
this
d,
which
is
the
metabolic
network
of
e
coli.
This
is
very
different
from
this
autocatalytic
set,
as
proposed
by
kaufman,
but
a
and
b
are
actually
the
thing.
That's
we
see
very
early
in
life,
as
as
the
authors
are
proposing,
where
c
and
d
are
things
that
we're
seeing
in
modern
cells,
and
so
that's,
then.
The
next
paper
is
about
the
chemical
brain
hypothesis.
A
So now we're moving towards the origin of nervous systems instead of the origin of cells, and this paper talks about what they call the chemical brain hypothesis. I'll just go over the abstract a little bit. In nervous systems, there are two main modes of transmission for the propagation of activity between cells.
A
Synaptic
transmission
relies
on
close
contact
at
chemical
or
electrical
synapses.
It
is
possible
to
wear
complex
neuronal
networks
as
they
call
them,
which
are
networks
of
neurons,
instead
of
like
computational
structures
by
both
chemical
and
synaptic
transmission.
So
your
synaptic
transmission
have
neurotransmitter,
your
chemical
transmission
can
be
across
the
synaptic
cleft
or
it
can
be,
like
you
know,
out
in
the
milieu
where
the
cells
live.
A
Synapses came later. The evidence for this is the sort of peptidergic signaling that happens in most animals except for sponges. You see this today; it's a very non-specific form of signaling in the brain, in neuronal networks, but it happens, and it's probably a remnant of that precursor network: you have these peptides signaling each other or signaling cells, and you don't have synapses yet, but you have chemical communication between cells.
A
Small
peptides
are
ideally
suited
to
link
up
cells
into
chemical
networks.
They
have
unlimited
diversity,
high
diffusivity
and
high
copy
numbers
derived
from
repetitive
precursors.
So
this
chemical
signaling
is
diffusion,
limited,
but
chemical
signaling
is
diffusion,
limited
and
becomes
inefficient
in
larger
bodies.
So
these
networks
had
to
be
fairly
small.
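The diffusion limitation can be put in numbers with the mean-squared-displacement estimate t ≈ L²/(2D). The diffusion coefficient below is an assumed order-of-magnitude value for a small peptide, not a measured one:

```python
# Why chemical signaling is diffusion-limited: the 1-D diffusion time
# scales as t ~ L^2 / (2 D), so doubling the distance quadruples the
# delay. D = 1e-10 m^2/s is an assumed order-of-magnitude value for a
# small peptide in a cytoplasm-like medium.
D = 1e-10  # m^2/s (assumption)

def diffusion_time(L):
    """Characteristic time (seconds) to diffuse a distance L (meters)."""
    return L ** 2 / (2 * D)

t_cell = diffusion_time(10e-6)   # across ~one cell (10 micrometers)
t_body = diffusion_time(1e-3)    # across a ~1 mm body
```

Under these assumptions, crossing one cell takes about half a second, while crossing a 1 mm body takes over an hour, which is exactly why purely chemical networks favor small bodies.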
A
Otherwise they would become inefficient. So you can see how, as life maybe became more complex, these peptide networks became less viable, so you started to have synapses emerge, and that sort of signaling became predominant in large brains, in brains of any size. To overcome this, peptidergic cells may have developed projections and formed synaptically connected networks, tiling body surfaces and displaying synchronized activity with pulsatile peptide release. Basically, this would be a sort of set of adaptations to the size limitations.
A
So,
as
the
size
of
these
networks
got
larger,
they
started
to
develop
things
that
look
more
like
a
synaptic
nervous
system
and
then
synapses
just
made
this
all
more
efficient.
So
you
still
have
peptidogen
signaling,
but
it's
less
predominant.
In
most,
you
know
complex
organisms,
we'll
say
the
advent
of
circulatory
systems
and
neurohemo
organs,
which
are
basically
blood
flow
and
things
that
you
know
you
see
in
a
lot
of
more
complex
organisms,
further
reduce
the
constraint
imposed
on
chemical
signaling
by
diffusion.
A
Are
our
ancestors,
the
ancestors
of
c
elegans
and
the
ancestors
of
a
lot
of
organisms
that
have
this
bilateral
symmetry,
so
the
stem
biblitarians
are
a
basal
form
of
all.
I
think
a
good
number
of
animal
species-
neuros
neurosci
neurosecretory
centers
in
extent
nervous
systems-
are
still
predominantly
chemically
wired
and
coexist
with
the
synaptic
brain.
A
So
neurosecretory
centers
are
things
that
are
like
different
places
in
the
brain
where
you
have
the
secretion
of
hormones,
for
example,
and
so
this
is
actually
maybe
this
is
the
origin
of
hormonal
signaling,
but
they
kind
of
go
through.
So
this
is
a
this
is
from
this
article
or
this
special
issue
on
basal
cognition
that
they
have
they
had
in
philosophical
transactions
b.
A
So that's something worth looking at. And then there's this last paper on the premetazoan origin of neuropeptide signaling, which is from the same author with some other co-authors. It goes through this idea of neuropeptide networks, where you see this in different types of organisms that still have this type of nervous system, and they're using this as a model.
A
The
extent
choanoflagellates
that
they're
looking
at
and
they're
using
that
as
a
model
for
evolutionary
origins,
so
they're
looking
for
two
neuropeptide
precursors
with
broad
evolutionary
conservation.
So
that's
a
one
way
to
look
at.
Like
you
know,
the
origins
of
something
is
to
look
at
things
that
are
highly
conserved
or
the
same
across
multiple
species,
and
then
you
take
that
data
and
you
and
you
hypothesize,
that
there
was
a
common
ancestor
and
because
it's
conserved
that
means
that
you
can
predict
the
common
ancestor
without
too
much
trouble.
You
can
basically
get.
A
You
can
predict
that
they
kept
a
lot
of
the
same
features,
and
so
you
can
hypothesize
a
common
ancestor
with
those
features.
This
is
something
they
call
deep
homology,
which
is
where
you
can
go
very
far
back
into
evolutionary
time
and
assume
that
things
are
basically
shared,
amongst,
maybe
very
diverse,
divergent
species.
A
So
you
know
something
that's
shared
between
c
elegans
and
humans,
and
there
are
some
things
that
are
in
terms
of
genes
are
have
what
they
call
deep
homology,
which
means
they're
shared,
and
so
this
is
a
kind
of
a
tough
area,
because
it's
hard
to
know
exactly
what
has
deep
homology
and
finding
those
things
very
difficult.
But
this
paper
they
kind
of
go
through
this
for
this
neuropeptide
signaling,
so
so
anyways.
That's
all
I
wanted
to
talk
about.
A
B
Yeah, I said to you that it was with the authors in the 8 14, like the Monday morning. I don't know if he was the second one or the last one, but they basically showed that certain gene sequences came out on top with regards to that kind of evolution, and they were able to simulate that. So it was interesting, and I think you should follow up on it; it's not really my area of expertise, but you and Dick have talked about it a lot.
A
Well, yeah, thank you. I saw that, actually. I've been working on that list of potential contributors, so you and Dick both sent me things; thank you for that. I've got to get moving on that and figure out how to start contacting people, and we'll get to that. So next week we'll be here again, and if people want to present...
A
They
can
bring
something
the
meeting
and-
or
you
know,
opinion
ping
me
on
slack
and
we
can
talk
about
some
of
you
know
some
things
and
I
hope
everyone
has
a
good
week.