From YouTube: DevoWorm (2023 #1): Biological tensegrity networks, D-GNNs and DevoGraph, Self-organizing biophysics
Description
Presentation on biological tensegrity network modeling. Discussion of the Google Summer of Code 2023 project (DevoGraph and D-GNNs) and the Synthetic Daisies blog post on Learning on Graphs. Paper on self-organizing biological materials and the thermodynamics of biophysics. Attendees: Richard Gordon, Susan Crawford-Young, Sushmanth Reddy Mereddy, and Bradly Alicea
B: Well, I heard from maybe one or two people that they might show up. You know, people are trying.
A: Yeah, I have a lot of paper here that I've been reading, and it's giving me a headache. So maybe, if you want it in the chat, it's about tensegrity.
A: Go ahead, all right, and then you can show the PowerPoint there.
A: So I start off with this. Actually, this is something I wanted to show you. I turned this upside down, but the cytoskeleton is represented here by microfilaments, and then you have this. This is a soft surface, and you end up with some microtubules, a bunch of microfilaments across here, and desmosomes, which don't seem to attach to too much, and then there's a bit going on at the bottom.
A: So I guess this is the soft case: there are really just some attachments at the bottom. Then this is a stiff surface, with a lot happening with the substrate, the extracellular matrix, at the bottom.
A: So this is kind of what I was going with: an attachment at the top, an attachment at the bottom, and varying the bottom, the ECM.
A: Anyway, that's sort of what I was modeling, and then I found this paper; the reference is here.
A: Back to this: they just tried it with a simple three-strut tensegrity. It moves between the strings touching and the bars touching, basically. This is looking down at the top of it, and this is the side, so it does change height, and it twists when it moves down. You can see from this that they have a twist on the triangle: the triangles are not aligned, they're twisted. So this is a twisted version.
C: A small mathematical point: has anyone proven that that is the minimal tensegrity structure?
A: That this is the minimal tensegrity structure? Oh, okay. You can do 2D ones.
A: And this is an interesting curve here: if you have a short model instead of a long, tall one (they're calling it the thick model) and you have elastic strings, you get these curves for different rigidities.
A: And if you have rigid cables, you get your J curve, and it's dependent on the base. Alpha and theta are the rigidity multipliers of the strings and the bars. So which curve is a J curve? Well, these are J curves, actually.
A: And if you take the derivative of these in certain ways, you also get a J curve, so I suspect they're a kind of exponential curve. Anyway, this is what they were getting with their modeling, and there's your J curve; they're using rigid cables, just like Stephen Levin said. This is in compression, though, and this is in tension.
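As a hedged numerical aside (my own illustration, not the paper's model): if the J curve is approximated as an exponential tension-extension response, its derivative is again exponential, consistent with the observation above that differentiating a J curve gives another J curve. The constants `a` and `b` are arbitrary illustrative values.

```python
import math

def j_curve(strain, a=1.0, b=5.0):
    """Exponential J-shaped stress-strain response: soft at low strain,
    stiffening rapidly at higher strain (a, b are illustrative constants)."""
    return a * (math.exp(b * strain) - 1.0)

def numerical_derivative(f, x, h=1e-6):
    """Central-difference derivative, used to show the slope is J-shaped too."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

strains = [i / 100.0 for i in range(0, 31, 5)]
stresses = [j_curve(s) for s in strains]
slopes = [numerical_derivative(j_curve, s) for s in strains]

# Both the curve and its derivative increase monotonically (J-shaped).
assert all(s2 > s1 for s1, s2 in zip(stresses, stresses[1:]))
assert all(s2 > s1 for s1, s2 in zip(slopes, slopes[1:]))
```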
A: And they all seem to follow the same curve in tension, which is really interesting, except for the ones that are basically loose, that are not really attached. If alpha equals one and beta equals one, they're very soft materials.
A: It's interesting. Then this is the tall model, and they actually made them; this is one way to make a tensegrity structure, which is kind of nice. By showing this, they show you how you could do it yourself. The curves are like this with the slender model, with rigid cables in compression, and then you have your J curves again, except these are different at the higher tensions. And of course the blue curve, where alpha equals one and beta equals one, is totally different.
A: It goes its own way, let's just say. And then in this one the dotted lines are where the pre-strain is; they're looking at various pre-strains, that's the p0, and at rigid and elastic cross strings. The rigid cross strings give you a sort of J curve here, and the elastic strings give you this kind of thing: sort of a J curve, sometimes sort of not. So this would make a difference if it was in a cell.
C: Okay, put on your engineering hat. Are you familiar with space needles?
C: Unless it flops over, like the dinosaur neck that Steve Levin showed.
C: The idea was to make them out of very long carbon nanotubes, but the people proposing them didn't seem to know anything about tensegrity.
A: There is a point of collapse that this paper points out. I'll quit sharing here; that's about all I got to. I ran into the paper doing further research, this being further research with a lovely derivative. These are derivative curves.
B: So what are the differences between the stiffness and elastic components of the network? I think they model both in that paper. We've talked about that with respect to biology, how you actually have a lot of elastic components as opposed to stiff components.
A: If you look at some of the papers about actin, they're rather stiff; in tension they're quite stiff.
A: I did what Stephen Levin suggested with this and put fishing line on it, and if you compress it like this, you can see it.
C: What if you replaced the fishing line with rods?
C: It seems to me that in the paper you just showed, at some extremes the elastic components are replaced by rods.
A: Well, they're rigid cables; they're cables, they don't.
C: Yeah, I suppose, but the comparison is an extreme example, because even a rod has some compressibility and some stretchability. It's at the extremes of their parameters, but.
A: I know in diabetes, for instance, they have softer cells. So I was wondering if you could relate a diabetic model to the elastic models; they have a slightly different behavior. Anyway, this is my application.
A: Okay. I just saw some papers on that, but of course I didn't download them.
B: I had some interest in the meeting from a couple of people, but I don't know if they're going to make it; they'd mentioned it in the Slack. In any case, I'm going to go over a couple of things today. There's some new project work.
B: We're going to have a project for this summer's Summer of Code, and I'm going to go over a little bit about that. Then I'm going to go over the Learning on Graphs conference; I wrote a blog post on it, and that was one of the conferences I talked about in the last meeting, and I prepared a whole review of it. Then I'll probably go over a paper.
B: Let me share my screen. All right, you can see my screen? Okay. The first thing I want to talk about is this project. Every year, right around January, they request the Summer of Code projects. We're sponsored by INCF, which is the neuroinformatics coordinating facility; they sponsor projects. So this is the project I've come up with for this year.
B: I think we're only going to do one this year. I talked to Jiahang, who did the graph neural networks project last year, and he's interested in mentoring this project with me as a co-mentor. That's good, because he did a lot of the work on building the pipeline that is mentioned in this project. So this is the graph neural networks project that we did last year, but a different version of it.
B: Last year we did the graph neural networks project; we wanted to get into it and see how far we could go with this tool. Basically, you're taking input data, maybe from microscopy, and you're forming a graph out of it. Any kind of data that has graph structure to it can be used with a graph neural network, and that's fortunate, because there's a lot of data in the world that forms a graph in one way or another.
B: It could be points in a space that can be connected either geometrically or through some statistical association, or it could be something like a tensegrity network, or other things. So what we're interested in here is building a neural network that can process that structure.
B: In this case it would be microscopy images, at least for this pipeline, and then it would produce these graph embeddings, which are representations of the structure in the image, produced as a computational structure on the other end. It's basically processing things in the image: grabbing landmarks, putting those together, and then finding the common graph representations, or embeddings, that exist in that image and in that data set.
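As a toy illustration of that landmark-to-embedding idea (my own minimal sketch, not the actual DevoGraph pipeline), one could connect landmark points within a distance threshold to form a graph, then summarize each node with a trivial hand-made "embedding" of its neighborhood:

```python
import math

def build_graph(points, radius):
    """Connect landmark points (x, y) that lie within `radius` of each other."""
    edges = set()
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i < j and math.dist(p, q) <= radius:
                edges.add((i, j))
    return edges

def degree_embedding(points, edges):
    """A trivial per-node 'embedding': (degree, mean neighbor distance).
    A real GNN would learn this representation instead of hand-coding it."""
    emb = []
    for i, p in enumerate(points):
        nbrs = [j for (a, b) in edges for j in (a, b) if i in (a, b) and j != i]
        if not nbrs:
            emb.append((0, 0.0))
            continue
        dists = [math.dist(p, points[j]) for j in nbrs]
        emb.append((len(nbrs), sum(dists) / len(dists)))
    return emb

# Hypothetical landmarks: three clustered points and one isolated one.
points = [(0, 0), (1, 0), (0, 1), (5, 5)]
edges = build_graph(points, radius=1.5)
emb = degree_embedding(points, edges)
assert emb[3] == (0, 0.0)  # the isolated landmark has degree 0
```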
B: You have a lot of variation in the input data, and you want to find the common embeddings that exist for it. That's what graph neural networks do. So in this project last year, we did this pipeline where we took DevoLearn, which is our existing software, and we refactored part of it, or rewrote it, to optimize it for this sort of application.
B: Another part of the pipeline that was completed was implementing a neural network that could do this type of transformation. That was Jiahang, and there was another person involved in the project whose name I can't recall right now. Jiahang was the person who laid out this pipeline: processing the data, building this neural network and benchmarking it, and then this other third step, which wasn't completed, which was topological data analysis and higher-order data analysis.
B: So this is the thing we're going to focus on this summer. These graph neural networks are actually really good at figuring out shape transformations, which is very important in embryogenesis, because you have a lot of things changing shape, cells moving around, and things like that. So this is one thing we want to capture.
B: So we have this ability to segment raw data and incorporate it into this pipeline: refining the method with respect to deriving graph embeddings, refactoring the models, and then extending this to topological data analysis and complex network theory. Those are the two output areas. Once we get these neural networks that can do these transformations and produce these embeddings, then we can go on and do topological data analysis, so we can look at the system in different states.
B: It'll have all these embeddings that get produced, which we can then analyze; we can compare the structures using some sort of unified method. So this project involves doing a lot of maintenance work, like refactoring where needed, basically taking the model and running through it.
B: Refactoring things where needed, maintaining it, and then focusing on this third aspect, which is figuring out how to use the output. That's what this project is about. It's kind of hard to get your head around; I found that it was hard for the students to get their heads around it at first, because there is a lot here. There's a steep learning curve for this sort of graph neural network.
C: I've put a name in the chat. Would you please include her on the mailing list for possible summer students?
B: So that's our project for Summer of Code. I think we're supposed to submit them by the end of the month, so this will go through maybe a couple more revisions, and then we'll put up the project and have a call for involvement.
B: If people are interested in participating, they can go to some of these resources. Of course we have the website with some papers; we don't really have much on graph neural networks right now, but you can get a sense of what we're doing in the group. We have the DevoGraph software on GitHub. There's our DevoLearn repository, which involves a lot of the work we've done with deep learning, and within it we have DevoGraph: last fall I took a lot of the stuff from the Google Summer of Code project and published it into the repo. So this is the DevoGraph repo; you can go in, download the software, and run it.
This is open source under an MIT license. You can run the software and see where we are with it, and of course I would point potential students to this, because it allows them to identify what they could do: if there are things they see are problematic when they try to run the software, or if they think there are other things that need to be done. I think there needs to be more documentation as well, so that's another thing that can be done in the project period.
B: They don't often talk about that, but it's very important for getting people to understand what's going on, because it's a very hard thing to really get right off the bat. So that's going to be part of it, along with going through some of the other things we've done. And I actually published this blog post, which I'll talk about in a minute.
B: This is our Learning on Graphs conference recap. So yeah, the project uses PyTorch, and if you have experience with Python, then you're probably ready to go with this. Any work with PyTorch or TensorFlow is good; if you know how to use a workflow there, that's good, because you'll require a lot of the tools they have. I think they have toolboxes for graph neural networks, so that will help, along with being able to work with data sets.
B: Last summer we had a couple of benchmark data sets that we used. I don't know if they're in this repository; they're in this data directory, and I think the data sets are in here. The EPIC data set was used, which is a C. elegans data set, and Fluo-N2DH-SIM+, which was another data set, and iPSM, and this other one here, I'm not sure. Anyway, there's also some raw data, so these can be benchmarked.
B: We may have to go back and annotate what these are. I'm already seeing things that we need to improve upon to get this repository in shape. That's all part of the fun, though. So there's more to come on that.
B: The next thing I want to talk about is this Learning on Graphs conference recap. Last meeting I talked about Learning on Graphs, this conference that happened in early December. It's a nice conference, the first standalone conference that I'm aware of devoted to graph neural networks; there have been a lot of workshops at the machine learning conferences, but not a standalone conference. So this was a nice tour of the diversity of research in graph neural networks.
B: Graph neural networks are as I described before: you have a lot of graphical input in the world, things that can form graphs, and you transform them into graph embeddings, which you can use for computational analysis. One of the first talks was on applying this. In fact, they had a lot of talks where they applied graph neural networks to medicine, but in this case they applied it to molecular problems, specifically protein folding and working with proteins.
B: Basically, different types of prediction tasks. As it turns out, graph neural networks have a lot of applications in the medical domain. You can look at things like a molecule, where you have a graphical structure, and at other types of biological things with graph structure. I think some of the images that Susan showed of the cell, with a lot of the actin molecules and things like that interconnecting, those especially are very good graph neural network problems.
B: There were a lot of interesting talks in medicine, not so much on biology or bioengineering.
C: Okay, this is a little crazy, but many decades ago I wrote a paper on blinking Christmas tree light bulbs in series. I tried to solve the mathematics for them, and I actually did experiments with it. I couldn't get beyond two light bulbs with the mathematics; it's too complicated.
C: Okay, do you get a short circuit, or do different parts of the network blink on or off?
A: Anyway, an LED will just allow current to flow in one direction. What turns it off is the lack of current, so that's not it.
A: Yeah, it depends on your definition of a short circuit. There's some resistance there; it's just how much resistance.
B: Not really. I mean, I've heard of people analyzing power grids and things like that, where there's a flow of electricity and then there's a disruption.
B: Things like that. Then there's synchronization work on things like fireflies, but that.
C: The network there is different, because each one is perceiving the other and trying to entrain with it. If it's just a single line, each one can literally turn the bulb off. But if it's a whole network, what happens?
B: That actually brings up an interesting point with GNNs, because they do this thing called message passing. Message passing is a technique they use to build the network. They have a series of inferential techniques to build these networks because, as I said, in the real world you have these features in an image or something like that, and there's a graph structure, but you have to find the graph structure; you have to discover it in order to train the network.
B: One way they do this is through message passing, which comes from communication networks but also works with electrical currents. You're passing things from node to node, which could be a message or could be electricity, and that allows you to approximate the network.
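A minimal sketch of one round of message passing, in plain Python rather than PyTorch, on a hypothetical three-node path graph. The aggregation and update rules here (neighbor mean, then averaging with the node's own value) are illustrative choices, not a specific published scheme:

```python
def message_passing_round(adjacency, features):
    """One round of message passing: every node collects its neighbors'
    feature values (the 'messages'), aggregates them by averaging, and
    then mixes the aggregate with its own value (the 'update')."""
    updated = {}
    for node, value in features.items():
        messages = [features[nbr] for nbr in adjacency[node]]
        if messages:
            updated[node] = (value + sum(messages) / len(messages)) / 2.0
        else:
            updated[node] = value  # isolated node keeps its value
    return updated

# A tiny hypothetical graph: a path 0-1-2, with made-up node features.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
features = {0: 1.0, 1: 0.0, 2: 1.0}
after_one = message_passing_round(adjacency, features)
# Node 1 pulls in the mean of its neighbors (1.0) and mixes with itself.
assert after_one[1] == 0.5
```

Repeating such rounds spreads information further across the graph; a GNN replaces the fixed rules here with learned functions.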
B: Good day to you too, bye, see you later. So this is one of the inferential techniques that they use; it's very common in the field, and there are others. The point of this blog post is to point out some of the contemporary things going on in this field.
B
But
there
are
a
lot
of
there's
a
lot
of
opportunity
to
innovate
and
to
find
like
new
approaches
to
these
things,
because
message
passing
is
good,
but
it
has
its
limits
and,
of
course,
if
you're
dealing
with
different
types
of
data,
like
biological
data
of
different
types,
you
may
need
to
use
other
techniques.
So
some
of
these
other
techniques
are
maybe
inspired
by
differential
geometry
and
algebraic
topology.
These
would
be
continuous
methods,
whereas
message
passing
is
more
of
a
usually
thought
of
as
a
discrete
technique.
A: So message passing is a digital technique there. There is an underlying sort of pulse in the brain, I'm sure, and it's due to the way the arteries and veins work in the brain. It's a sort of interesting fractal problem, similar to some of these networks, I guess. I don't know if you have valves there or not, but it takes a while for the brain to be perfused with blood. So anyway, there's some sort of underlying pulse that goes on.
A: There's somebody studying the underlying electrical pulse in the brain at the University of Manitoba, and I asked him about ventilator injury: a ventilator has a mechanical pump that just pumps regularly, whereas your normal breathing is kind of fractal, and the air gets into the lungs, and therefore oxygen gets through all parts of the lungs and your body better, if the pump is running in a more fractal manner. So it all fits together here with the networks; some of this would fit together.
A: I'm interested in this, the graphs, partly because people could solve the tensegrity structure for me too, because tensegrity structures are basically a graph.
B: One of the things I noticed about a lot of these techniques, and there are a lot of techniques here, is that there's this gap, I think, in the literature as to how to link them to real-world problems. If you're taking data from some source with graph structure, the graphs can be very diverse, and they say, well.
B: These are generalized models, and you can analyze the graph with all these tools. There are all sorts of isomorphism tests to say whether two graphs are isomorphic, and other types of tests to infer the structure of these graphs. But the idea is that there's a graph in the real world that you can analyze and that they're all universal, and maybe some of these graphs aren't universal.
B: Maybe you need specialized graph neural networks for different types of networks, whether it be a set of electrical pulses in a network, a set of forces in a network, or something more abstract, like a statistical association. And there's something called graphons, which they talk about here, which are alternate graph representations: you get these different graphs that form, but they're all basically related to one another, and you can treat them similarly mathematically.
B: So there's a lot of diversity in terms of what we're dealing with with graphs. I think this approach is widely applicable, but you also have to think about the system you're dealing with, and there are a lot of systems that form graphs, very widely construed, like tensegrity networks.
B: So then, to sum up here: there's this category theory and AI workshop, which was hosted in October. This is maybe the precursor to this conference.
B: This goes over category theory and artificial intelligence. You might ask what category theory has to do with networks, and the answer is that you can use category theory in really interesting ways to define the networks that you're going to be analyzing, and this has links to artificial intelligence and some of the classification tasks you're doing there. So there's this whole sub-research field on category theory for AI, and one of the speakers talked about this in depth.
B: Taco Cohen, who is from Qualcomm AI, talked about some of these tools from category theory: applied math that you can use to analyze the input data in interesting ways and then have a nice output that's amenable to things like causal models, so you can look at causality in the networks.
B: What causes something to happen? With networks it's hard to get a handle on causality, because you have a lot of things interacting, so finding these causal networks is really kind of powerful. And then the end of the conference was just these workshops.
B: Each day, at the end of the day, they had workshops. They had one on neural algorithmic reasoning, which says that optimization problems are hard to solve, but they can be made easier using these kinds of graph neural networks, or neural representation learning.
B: Then there was this issue of expressiveness, which is basically how widely you can cast the net of your method over a series of problems. If I have a neural network and I'm running it, can I run it on any problem in the world, or only on a small range of problems? When you talk about an algorithm's expressiveness, you're talking about how widely you can cast that net.
B: Maybe there are some network approaches that are only good for looking at brain networks, things extracted from neuroimaging data, or certain things that are maybe good for power networks, like utilities, but those models are of limited usefulness beyond that. So there's a whole research area on expressive algorithms, and they talked about that. They also talked about graph rewiring, where they discussed the message passing approach and how you can rewire graphs.
B: This is very important for graphs that grow and change shape, like the things we might be doing this summer; this whole area of graph rewiring is interesting and probably pretty relevant. Finally, they featured a very practical workshop on implementing these networks in TensorFlow. There's a TensorFlow framework for graph neural networks; if you want to implement some of these networks, you can use this framework, and it works in TensorFlow.
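Graph rewiring, as mentioned above, can be sketched in a few lines. This is a hedged illustration of one simple strategy (adding shortcut edges between two-hop neighbors so messages travel in fewer rounds), my own toy example rather than the specific methods presented at the workshop:

```python
def two_hop_rewire(adjacency):
    """Add a shortcut edge between every pair of nodes that are exactly two
    hops apart; on long chains this shortens message-passing paths."""
    rewired = {n: set(nbrs) for n, nbrs in adjacency.items()}
    for node, nbrs in adjacency.items():
        for nbr in nbrs:
            for second in adjacency[nbr]:
                if second != node and second not in adjacency[node]:
                    rewired[node].add(second)
                    rewired[second].add(node)
    return {n: sorted(nbrs) for n, nbrs in rewired.items()}

# A path graph 0-1-2-3: rewiring adds the shortcuts 0-2 and 1-3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rewired = two_hop_rewire(path)
assert 2 in rewired[0] and 3 in rewired[1]
```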
B
So
if
we
know
tensorflow,
you
can
walk
right
through
it,
and
so
there
are
some
nice
tools
that
are
available
for
this
there's
some.
You
know
interesting,
Concepts
and
I.
Think
that's
something
that
people
are
interested
in.
We
could
talk
about
well,
we'll
have
to
talk
about
it
for
the
summer
of
code
project,
but
just
in
terms
of
research
talk
about.
A
It
you
Demi
udemy
has
a
course
on
10
or
several
courses
on
tensorflow
and
someone
needs
to
educate
themselves
like
I
do
on
them,
because
I
don't
know
them.
Yeah.
B: We have a lot of papers competing for attention here, but what I wanted to talk about was this paper by Jeremy England. I think we've brought up his name in past meetings; he did some really interesting work with physics, thermodynamics, and biology. He published this paper recently on self-organized computation in the far-from-equilibrium cell, and this is in Biophysics Reviews, I believe. It's an interesting paper. The abstract is very short. Actually, he's at Georgia Tech now, and he has an affiliation in Israel as well.
B: The abstract reads: recent progress in our understanding of the physics of self-organization in active matter has pointed to the possibility of spontaneous collective behaviors that effectively compute things about the patterns in the surrounding patterned environment.
B: Let me zoom in here. "We describe this process and speculate about its implications for our understanding of the internal organization of the living cell." So this is a pretty broad, general abstract, but basically he's interested in the self-organization of active matter: it generates these spontaneous collective behaviors, they compute things about patterns in the environment, and then there's this relationship to the internal organization of cells. One of the things he does here is talk about something that has been studied going back to Schrödinger's essay.
B
What
is
life
back
in
1944
and
this
talks
about
how
the
cell's
whole
array
of
chemical
reactions
and
transduction
of
molecular
signals
and
the
generation
of
controlled
forces
somehow
happened
in
obedience
to
known
physical
laws,
but
still
it's
impossible
to
imagine
how
this
leads
to
a
living
system,
or
at
least
according
to
Schrodinger,
because
we
kind
of
know
some
of
that.
Now
we
have
some
good
ideas
anyways.
B: We've worked through that since then, and there's been a lot of biophysics work on this; we know a lot of the properties of some of these molecular mechanisms. For example, we know many more things about inheritance and DNA and other things that are important to this question. Then this goes through the literature.
B: In cell biology, advances in microscopy have revealed an economy of adaptive and multifunctional membraneless structures whose feats of self-assembly may harness a variety of effects of non-equilibrium statistical mechanics. Here he may be talking about some of the things within the cell: self-organized structures, self-organizing adaptive matter. Finally, progress in our understanding of the physics of active matter suggests that a variety of spontaneous learning and computing behaviors would be surprisingly easy to get for free, which is a common theme in complexity theory. This idea of order for free: self-organization is often referred to as order for free, where the idea is that there's entropy contributing to everything decaying, but somehow over time we get self-organization of structures, and that at least intuitively shouldn't happen.
B
When
you
have
entropy
act
subject,
you
know
everything
subjected
to
entropy,
it
shouldn't
happen,
but
it
does,
and
so
we
have
to
explain
this
to
yourself
organization,
so,
maybe
surprisingly,
easy
to
get
to
for
free
from
the
general
principles
governing
self-organization
and
active
mixtures
exposed
to
patterns.
So
he
talks
about
closing
something
to
patterns,
I'm,
not
sure
exactly
what
that
means.
Taking
together,
these
distinct
sources
of
evidence
and
Concepts
provide
a
new
perspective
on
biological
adaptation.
B: So he's talking about biomolecular self-organization here. He gives us this measure that he comes up with called the rattling number, and specifically this idea of low rattling. He draws from things like dissipative structures and other physical tools that have been formulated in the past to explain this, but he's coming up with a new innovation here, which is the rattling number.
B: This part of the paper I want to go over a little bit; it may give us some insight. Active matter is a class of non-equilibrium many-body systems in which individual particles or components have access to energy from external driving forces.
B: That could mean external energy like the sun, or some chemical reactions, or something else. A variety of active matter systems, ranging from protein motor and filament mixtures, which we've talked about as being part of that macromolecular milieu, to chemically catalytic colloid suspensions, which are different particulates in suspension.
B: We've talked about things like the self-assembly of different particles or different types of macromolecules, even robots and worms, which have been studied experimentally, and they are known to exhibit a range of striking collective behaviors: spiral vortices, regular crystals, and synchronized dances have all been observed in instances where many interacting active elements are subjected to non-equilibrium driving. So you have this non-equilibrium energy out in the environment, fluctuating in a non-equilibrium state, and then you have the system.
B
That's
in
equilibrium,
where
the
at
least
nominally
where
these
elements
are
interacting
and
they're
being
driven
by
this
non-equilibrium
driving.
So
this
leads
to
this
sort
of
fluctuation.
This
introduced
fluctuation
into
this
system
of
interactions.
This
is
how
you
know
you
kind
of
explain
the
self-organization
and
so
active
matter.
May
provide
an
exciting
empirical
experimental
playground
for
discovering
characterizing,
interesting
non-equilibrium
Collective
phenomena,
but
of
course
it's
highly
challenging
to
make
a
prediction
here.
B
So, you know, there are things called fluctuation theorems, which have really kind of driven the progress in this field, where, you know, we have these general results in statistical thermodynamics governing the arbitrary, far-from-equilibrium and non-linear regime. So you can use these tools to kind of look at some of these.
B
These systems. And, let's see — perhaps the most fundamental statement was by Crooks, who showed that the relative probability of a dynamical trajectory — which is this formula here, or this statement here — and its time-reversed movie (so the dynamical trajectory has a time-reversed movie, which is basically the inverse of that) are related simply to the heat Q released during the forward trajectory, by this equation here. The Crooks relation does not imply a generalization of the minimum entropy production principle that holds in linear non-equilibrium regimes.
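As an aside, the relation being paraphrased here is usually written as follows — a minimal sketch of the standard Crooks microscopic-reversibility statement, with the symbols assumed from that standard form rather than read off the slide:

```latex
% Probability of a forward trajectory x(t) versus its time reverse \tilde{x}(t),
% related to the heat Q released to the bath along the forward trajectory:
\frac{\mathcal{P}[x(t)]}{\mathcal{P}[\tilde{x}(t)]} = e^{\beta Q[x(t)]},
\qquad \beta = \frac{1}{k_{\mathrm{B}} T}
```

Trajectories that release more heat as they form are exponentially more likely than their reversals, which is the link to "work absorbed during the history of their formation" mentioned below.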
B
Indeed, it makes clear instead that no simple thermodynamic principle must govern non-equilibrium steady states. So this is a rigorous argument for an exact quantitative relationship between the stability of ordered non-equilibrium structures and the likely amounts of work absorbed during the history of their formation.
B
So you can actually get a sense of the thermodynamic work being done in these ordered, non-equilibrium structures. So he talks about — let's see if he talks about the rattling number — so: the experimental example described turns out to be a simple and intuitive instance of a more general phenomenon, known as low rattling. He's talking here, in the previous paragraph, about this experiment that separates local and global dynamics.
B
So, let's see, okay — specifically, it can be the case that the same external driving forces acting on a system may activate motion that is more or less orderly, or more or less violent, depending on what state the system starts in when being driven. A simple example is that of living crystals, in which self-propelled colloidal particles form regular crystalline arrays despite having only repulsive interparticle forces. So — we talked about liquid crystals last year, I think, and how they can be applicable to biology.
B
You have particles at different scales; you can model bacteria, or the collective behavior of bacteria, that way, but also other types of particulates in actual crystals or in things that approximate crystals. He says this is the kind of system he's talking about: when the particles are dispersed and separate, each one gets propelled from place to place. So when you disperse them, by some means, they get propelled all over the place.
B
On the other hand, when the particles are jammed together in a dense array, the randomly oriented individual propulsion forces acting on them average out to zero, and the non-equilibrium driving produces very little motion. So when we talked about liquid-crystal biology, one of the features of that is this idea of a jamming phase transition, which is where you go from having a bunch of particles moving around, interacting, to this jammed state, where they're all sort of fixed in place because there are too many of them in a space.
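The jamming picture in this turn can be sketched numerically. The toy below — all numbers are illustrative assumptions, not from the paper — places hard disks at random and measures how often a small random trial displacement avoids overlap, a crude proxy for mobility that drops as the packing gets denser:

```python
import numpy as np

def trial_move_acceptance(n_disks, box, radius, n_trials=2000, seed=0):
    """Place non-overlapping disks at random, then return the fraction of
    small random displacement moves that still avoid any overlap."""
    rng = np.random.default_rng(seed)
    pos = []
    attempts = 0
    while len(pos) < n_disks and attempts < 200_000:
        p = rng.uniform(radius, box - radius, 2)
        if all(np.linalg.norm(p - q) >= 2 * radius for q in pos):
            pos.append(p)
        attempts += 1
    pos = np.array(pos)
    accepted = 0
    for _ in range(n_trials):
        i = rng.integers(len(pos))
        new = pos[i] + rng.normal(0.0, 0.5 * radius, 2)
        others = np.delete(pos, i, axis=0)
        if np.all(np.linalg.norm(others - new, axis=1) >= 2 * radius):
            accepted += 1
    return accepted / n_trials

# Sparse packing: particles explore freely.  Denser packing: moves start to fail.
sparse = trial_move_acceptance(n_disks=20, box=20.0, radius=1.0)
dense = trial_move_acceptance(n_disks=50, box=20.0, radius=1.0)
print(sparse, dense)
```

Pushing `n_disks` toward the random-packing limit drives the acceptance toward zero, which is the jamming transition: past a density threshold, essentially no move succeeds regardless of the drive.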
B
So, if you think about, like, if you have a bunch of particles in, you know, some sort of membrane — if they're enclosed in that membrane and it's packed very tightly, there's a lot less opportunity to move around than if that membrane contains a sparse collection of those particles.
A
And in cells, if the cell elongates, it becomes more likely to be a fluid cell body than if they're all regular and shorter, or more regular. If you elongate them, they tend to move more like a fluid.
B
Yeah, so this example that we just talked about describes something he calls low rattling. Low-rattling behavior is first identified through the observation that the same non-equilibrium system can be capable of exhibiting randomly diffusive exploration of configuration space — meaning that things are exploring this configuration space: they're moving around and they're taking different configurations. And so, you know, just through diffusion you can explore this space, and this is done randomly. This non-equilibrium system then can also explore the space just through this driving.
B
So it's driving these particles around the space; they're moving at sort of random rates, and moving randomly in random directions, but they're actively exploring this configuration space. So if there are things that are interesting in this configuration space, they tend to move towards those things.
B
So if you think about, like, structure: if you have things that aggregate in certain locations, that can be done through random processes, just through some driving force forcing it into exploring its configuration space. And then you can also get orderly motion — or you can get stasis as well — depending on what part of configuration space it happens to be in while exposed to the same external drive. So the external drive is driving this exploration, and it can drive the system into these ordered states, depending on what's going on.
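A minimal numerical caricature of low rattling — my own toy with an assumed state-dependent kick amplitude, not a model from the paper: a random walker whose drive-induced kicks are weak in one region of configuration space and strong everywhere else spends most of its time in the weakly-kicked ("low rattling") region, even though nothing attracts it there:

```python
import numpy as np

rng = np.random.default_rng(1)

def kick_amplitude(x):
    # Assumed drive response: states in [0.4, 0.6] "rattle" weakly,
    # all other states rattle strongly under the same external drive.
    return 0.005 if 0.4 <= x <= 0.6 else 0.05

x = 0.1
steps = 200_000
in_quiet = 0
for _ in range(steps):
    x += kick_amplitude(x) * rng.normal()
    x = min(max(x, 0.0), 1.0)  # keep the walker in the unit interval
    if 0.4 <= x <= 0.6:
        in_quiet += 1

frac = in_quiet / steps
print(frac)  # well above 0.2, the quiet region's share of the interval
```

The ordered state is simply the one the drive shakes least; the walker finds it by exploration and then lingers, which is the "drive the system into ordered states" behavior described above.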
B
This effect has now been established in a variety of simulated and experimental systems, ranging from disordered spin glasses and mechanical networks to robot swarms. So the basic principle is: a choice of external drive — like this time-varying force — will determine the amplitude and degree of diffusive randomness in the local motion.
B
A neural network is a complicated function with many parameters that map inputs to output quantities. In training the network, you're selecting the parameters so that the input-output mapping optimizes the configuration — and that's an analogy of what's going on here with this low rattling: self-organization by low rattling.
B
So this is kind of what he talks about, and then he's talking about life far from equilibrium, which is where you're now applying this to biology and statistical mechanics. So now you have living systems that can be organized using this low-rattling principle — this low-rattling measure — and you can characterize it. And he's talking about cells being far from equilibrium in at least three crucial respects: they're far from chemical equilibrium; they're far from conformational equilibrium.
B
There are conformational disequilibria, and then they're in locational disequilibrium, since macromolecules are arranged across space and are tightly influenced by driven transport processes. So this means that living systems are very amenable to this sort of approach and this sort of explanation.
B
So this is an example of two assembly states here — this is sort of what he's talking about. So, you know, you have something in this state; say, for example, the external drive energy washes it up into this state, and you have these multiple states that something can transition between, from one state to another. This is a very simple, you know, sort of complex-systems-diagram kind of explanation.
B
Then we talked about spatial diffusion — so you have molecules that can diffuse over space, and this has a similar effect: it drives them into new states, into different aggregations. You also have chemical reactions that, with the driving force of energy, can be driven to new states, and so, you know, that holds — and this is spatially dependent.
B
So, you know, this is very amenable to the cell. And then it talks about some new opportunities for this, and then kind of the same thing as what we talked about, with, you know, these different types of subsystems in the cell — chemical systems.
B
You know, physical systems, that sort of thing. So that was that paper — I thought that was a really interesting one. Now I'd like to go over some examples relevant to the Jeremy England paper: the first one being the role of actin filaments in wound healing, which shows self-organization, and the second being some drawings where we're going to talk about the fluxes of energy going into a system.
B
These are actin filaments that are forming a ring and then disappearing. So they're all over the place, and then they're pushed into this configuration of a ring, and then they disappear as the wound heals. So they need to be there temporarily, and they need to form a pattern, in order for this process to unfold.
B
So this animation is from the paper "Coordinated efforts of different actin filament populations are needed for optimal cell wound repair." The abstract reads: cells are subjected to a barrage of daily insults that often lead to their cortex being ripped open, requiring immediate repair. So this is where a cell is ripped apart — the surface of the membrane is ripped apart, and the cells lose their integrity.
B
An important component of the cell's repair response is the formation of an actomyosin ring — which you saw in the animation — at the wound periphery, to mediate its closure. Here we show that inhibition of myosin, or of the linear actin nucleation factors Diaphanous and/or Daam, results in a disrupted contractile apparatus and delayed wound closure. So these are factors that are essential for this process: if you inhibit them, it disrupts this contractile formation.
B
So you have this ring, and it closes down and then disappears. This is an important aspect of this process.
B
We also showed that the branched actin nucleators, WASp and Scar, function non-redundantly as scaffolds to assemble and maintain this contractile actomyosin cable; removal of branched actin leads to the formation of smaller circular actomyosin structures at the cell cortex and slow wound closure. So you can have variations on this.
B
The larger ring is more efficient than the smaller rings. But, as we mentioned with the paper, we need to have this searching of conformational space, ending up with these patterns that actually do something — so we end up with either these large rings or these small rings. In this case, removing linear and branched actin simultaneously results in failed wound closure. So if we remove all of the actin simultaneously, we don't get wound closure.
B
Surprisingly, removal of branched actin and myosin results in the formation of parallel linear actin filaments that then undergo a chiral swirling movement — so this is where you have this ring, but it swirls around in a certain direction or orientation to close the wound — thereby uncovering a new mechanism of wound closure.
B
The number of possible states. Now I want to talk about a system — a generic system where you have this energy flux. So we'll think about this as, like, a signal that goes up and down. So this is the intensity of the signal of energy, and this is the external driving force. This internal system will just have a bunch of molecules in it — we'll just put circles, which is a good example.
B
Well, these look like cells — the cells are kind of forming this packing — or you could even have a square or something like that. This is driven by the energy flux. So if we had a normal energy flux — something that wasn't really fluctuating over time, but was basically uniform over time — we would get this sort of constant Brownian motion in this space, and these molecules would bounce around, and they may or may not find their conformationally stable state.
B
They might just bounce around infinitely — and that's, of course, what you see with a Maxwell's demon. In one chamber you see molecules bouncing around, and they don't really form any sort of order; it's only when they go through the door to the other chamber and sort themselves that they form that order. That's the idea of Maxwell's demon, and the idea there is that it appears to defy thermodynamics.
B
So this is a normal, uniform signal — I guess that's a better way to put it. Now let's think of a non-uniform signal, or a large set of fluctuations. So let's think of the energy signal as something like this — I mean, this is a little bit different in style than the sine wave I gave before, but this is a highly irregular energy flux. So you have, you know, large fluctuations at the top and at the bottom, and you have states in the middle, and there's no character to it.
B
It's non-uniform. So this sort of signal actually can do a lot of work to sort these molecules in this chamber. It could push them into rings; it could push them to the sides, to the walls; it could do a lot of things. It could disrupt some sort of pattern that's already forming — so it could even disrupt that. The point is, it would force these molecules into different states, or would force them to explore their space a little bit more.
B
So we would end up getting these kinds of patterns, or they'd end up engaging in these sorts of weird behaviors — molecules on opposite sides of the chamber, for example, might end up getting close to one another, things like that. Yeah, so that's what —
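The contrast being drawn — a uniform flux that only jiggles the molecules, versus an irregular flux that can actually re-sort them — can be sketched with a toy threshold model. The barrier height and kick statistics below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
steps = 50_000
barrier = 1.0  # kick size needed to knock a molecule into a new configuration

# Uniform drive: every step delivers the same modest, sub-barrier kick.
uniform_kicks = np.full(steps, 0.5)

# Irregular drive with roughly the same mean input: mostly tiny kicks,
# with occasional large fluctuations that clear the barrier.
bursty_kicks = np.where(rng.random(steps) < 0.05, 9.5, 0.0263)

def rearrangements(kicks, barrier=barrier):
    # Count kicks strong enough to push a molecule over the barrier.
    return int(np.sum(kicks > barrier))

print(uniform_kicks.mean(), bursty_kicks.mean())  # comparable mean flux
print(rearrangements(uniform_kicks), rearrangements(bursty_kicks))
```

Same average energy input, very different sorting power: the uniform drive never crosses the barrier, while the fluctuating drive repeatedly forces molecules into new states.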
B
Video — and let's take our chamber, and it's at a certain, you know, a certain sparsity here; it's not too sparse, but not too crowded, and the molecules can bounce around. It's like Brownian motion here, and that's all well and good — we have the example we showed before. But this next example is where you get what they call critical jamming, and critical jamming is where you have molecules that are packed at a certain density so that they cannot move — and so something like this would be jammed.
B
There's a critical parameter of crowding, or of density, that triggers this transition from these freely exploring molecules to this packed state where they don't really move around — and this is regardless of the input energy, whether it's uniform or non-uniform. So a non-uniform energy might serve to break up this jamming, if there is nothing else in the chamber; but if the entire chamber is jammed at a uniform density, then it can't really do much to push them around.
B
It can just kind of introduce energy, and that may introduce other factors — so it may translate into forces between the cells or molecules, or it might translate into pushing them in different directions, up and down, if you're in a three-dimensional box. If you only had a two-dimensional chamber that was square, then that would translate into some sort of energy applied to either the molecules or the cells, or whatever these things are.
B
So if you have local jamming, of course, it could break it up — but this is a very important point. If you have this relatively sparse packing at the top, you're going to have these sorts of patterns that are able to be formed; if you have jamming, you won't necessarily be able to form these patterns. Although you could, if you had local jamming instead of global jamming — you could force these apart and force these molecules into new configurations.
B
So if you have something that's, as we said, non-uniform here, and it's coming in and affecting this packing, it's either breaking it up — or maybe it's enforcing jamming, because it's forcing all these molecules to a certain part of the chamber; say, for example, they have an electrical potential or a magnetic potential. And so, you know, we can see that effect as well. So this is an interesting paper. I could give a number of other examples.
B
This doesn't necessarily lead to the biology too much, except to say that if we think of these as cells, or molecules like, you know, actin filaments or something like that, then it's a little bit easier to visualize. And then, taking this further out and thinking about networks: how are these things networked? How does this network change with the introduction of energy in this —
B
— factor? Like a neural network, where you have neurons that are next to each other, and you have energy — maybe pulses of energy — that modify their growth, or modify their ability to send axons out to find neighbors, or even, like, activation energies, activation strengths. So I think that's all I want to talk about for that. Hope you learned something. Oh, hello.
B
I don't know — I mean, I guess it kind of recaps a lot of stuff that we know, but it also kind of explores potential mechanisms for explaining some of these things you've been talking about, with physics and physicochemistry and all that.
D
Actually, I was thinking about last year's project, DevoGraph — actually, I have some doubts from while reading it. I have gone through the work of Jiahang, I think. So, on stage one — just a small aside — I just need a recap or confirmation from you. I'm just asking: in stage one of DevoGraph, they are converting raw data — videos and images — into some CSV file, right?
D
From the CSV file, we are transforming it into some graph embeddings. Is this the process? I mean, is the process right? I'm a little confused. While I was reading it, I had a meeting with Mainak and Mayukh too; they also helped me. They told me: you can just go and ask Bradly, he will be —
B
Yeah — so step one, that we did last year, was to use the DevoLearn software to generate CSV files, which are basically the output of the analysis: you're analyzing data sets, or images, and then they're analyzed, and there's an output CSV file, which gives you the segmentation data from the images — the training data set that we have. Then, of course, we've been refactoring the software to make it optimal for these graph structures that we want to produce.
D
In stage two, actually, the parent and daughter cells — like, some cells are not connected. I mean, can I share my screen and show it to you?
B
Could you scroll down a little bit? I can't remember exactly what this graph was. I think it might be labeled down at the bottom, or up at the top. Yeah — I can't really see your screen.
B
He generated these — okay, yes, I can see it. Yes. So, yeah, I mean, the thing here is that he's making different connections with this, so I'm not sure exactly what these connections represent. Usually, once we have the data from step one, we'll have these points, and they can be related in space. Now they form a graph, but the graph doesn't have an absolute connectivity, so you can use different criteria for the connections — so I don't know what criterion he was using here.
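The point about connection criteria can be made concrete. A hypothetical stage-1 CSV of cell centroids — the column names here are my assumption, not DevoGraph's actual schema — can be turned into a graph with, for example, a distance-radius criterion; k-nearest neighbours or lineage information would be alternative criteria:

```python
import csv
import io
import math

# Hypothetical stage-1 output: one row per segmented cell centroid.
csv_text = """cell_id,x,y
c1,0.0,0.0
c2,1.0,0.0
c3,5.0,5.0
c4,1.0,1.0
"""

def build_edges(rows, radius):
    """Connect two cells whenever their centroids lie within `radius` of
    each other -- one possible connection criterion among several."""
    edges = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (id_a, xa, ya), (id_b, xb, yb) = rows[i], rows[j]
            if math.hypot(xa - xb, ya - yb) <= radius:
                edges.append((id_a, id_b))
    return edges

rows = [(r["cell_id"], float(r["x"]), float(r["y"]))
        for r in csv.DictReader(io.StringIO(csv_text))]
edges = build_edges(rows, radius=1.5)
print(edges)  # c3 lies too far from the others and stays isolated
```

Changing `radius` (or swapping in a different criterion) changes which edges appear, which is exactly why two people can draw different graphs from the same centroid CSV.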
B
Just minor.
D
Mayukh was also sitting by him; he just explained to him the process going on. They were talking about, like, their GSoC work — actually, what they had done in the development. They explained a lot to me yesterday; I heard it in the meeting yesterday night, actually. Yeah — after that, I have gone through the documentation of DevoGraph. I mean, it's interesting. Right now I have started learning, for DevoGraph, these graph neural networks, actually. I was watching some Stanford lectures about it — it's pretty cool. I think I will just see.
D
I will — I need to work on this. Like, there are some connections, right? Some parent cells are not having some daughter-cell connections. I think — was it implemented correctly for different timestamps?
B
Yeah, yeah — Jiahang Li is the person who you might want to message in Slack, because he did a lot of work on this step two last year. That was his project, and so he really worked through this. Yeah — what he did is he developed it from an existing paper, and he refactored that code, so it's actually published in a paper too. Okay.
D
If you don't mind, can you share that paper? I think there is a — yeah, yeah, I think there is a paper, yeah, a PDF.