From YouTube: DevoWorm (2023, Meeting #15): Diatoms and DevoLearn/Graph, Synthetic Morphology, and D-GNN Readings
Description
Planning for a summer of DevoLearn/DevoGraph development. Papers from the new book "Mathematical Biology of Diatoms". Synthetic morphology and Antoni Gaudi's hanging-chain model of bio-inspired tensegrity. Four papers on GNNs laying out the scope of future work in developmental biology. Attendees: Susan Crawford-Young, Jesse Parent, Jiahang Li, Himanshu Chougule, Sushmanth Reddy, Jyothi Swaroop, and Bradly Alicea
B: Well, I got it. They told me how to get a stress-strain curve out of COMSOL.

B: I checked online, and ResearchGate, and their help desk, and the help desk through the CMC program that I'm with, and they didn't seem to understand what I was asking.
B: You can add a point probe or a line probe, or just wherever you want in the model you click on a node and say, I want a probe over here, and now it's actually generating data for me. With a spring tension element it gives me a sort of J curve, and with a straight element it tends to give me a straight curve. So I've got to find something that'll give me a J curve, to see if it'll give me a J output.
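The J-shaped versus straight responses B describes can be illustrated with a toy calculation. This is a minimal sketch, not the actual simulation setup: the `j_curve_stress` function assumes a Fung-type exponential law, a common idealization for the J-shaped stress-strain curves seen in soft biological materials, and the parameters `a` and `b` are made up for illustration.

```python
import numpy as np

def linear_stress(strain, k=1.0):
    """Hookean (straight-line) response: stress proportional to strain."""
    return k * strain

def j_curve_stress(strain, a=0.1, b=5.0):
    """Fung-type exponential response: compliant at low strain,
    stiffening rapidly at higher strain (the classic J curve)."""
    return a * (np.exp(b * strain) - 1.0)

strain = np.linspace(0.0, 0.5, 6)
linear = linear_stress(strain)
j_shaped = j_curve_stress(strain)

# The J curve stiffens: its incremental slope grows with strain,
# while the linear element's slope stays constant.
j_slopes = np.diff(j_shaped) / np.diff(strain)
print(j_slopes)  # monotonically increasing
```

Comparing the two slope sequences is one quick way to check whether a simulated element is behaving like a J-curve material or a plain spring.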
A: Well, thanks, yeah; it sounds like you're making progress, then.

B: Yeah, finally. I think this has held me up at least six weeks.

A: Wow.
A: Yeah, so we're getting ready for GSoC; a couple more weeks, I think next week maybe, or the week after, they announce. So right now I think we need to get things ready.

A: If we have things in the repository that we want to clarify, or make sure that they're in place (I think I mentioned this), we should do that now. I also have to put together a guide; I haven't done it yet, but there are things we need to clarify about some of the data. But that's on me, so I have to do that.
C: Yeah, actually, for now; remember, I mentioned one paper, maybe in the last year, the GNN for cell tracking. There was one paper which was published at ECCB 2022.

C: Yeah, and actually in these few weeks I'm working on reproducing the results of this paper, and I think this model is a good starting point for our projects. For now, my work is not done.

C: Yes, I'm still working on it, and there are some problems with the source code. Very quickly, on the GSoC projects: maybe I have missed much of the progress regarding GSoC, so I think I need some time to catch up with the progress of other people.

C: I'm going to prepare the DevoGraph project so that other participants can contribute to it.
A: In terms of DevoGraph specifically, people really haven't done much with that, but Sushmanth has been working on upgrading DevoLearn, which is sort of the segmentation and tracking aspect.

A: Yeah, okay. I don't know how we're going to proceed with GSoC; I don't know how many people we'll get. I requested at least two people, so I don't know.
D: [inaudible]

A: Yeah, I think you broke up there. I heard that you're talking about data augmentation in the scikit library, and that they have the website work done. I saw a lot of the pull request activity, and that looked good; keep up the good work. It looks like good things are happening there.
A: I think last week we talked about how we have devoworm.github.io, which is a place where we usually have various tutorials and things to go over, and Sushmanth and Jyothi, another contributor, have been updating that quite a bit, putting in pull requests. It just needed an update; it was aging. It's been a couple of years since we've touched it, but it looks like we're getting some pretty good things up there.

A: Yeah, we might do that as well. Basically we have these blog posts that have been put up here, but I'm thinking that we use that as a documentation aspect of the project, so people can go through those and read more about what was done, or things for the future, or whatever.
C: Yeah, I got your point. Actually, about DevoGraph: for now I think the status of DevoGraph is not very mature. We need to do a lot of things to finish this project, so I didn't prepare a full document for DevoGraph, but I think this is one of our jobs this summer, for GSoC 2023: we want to do more on DevoGraph, including the documentation parts.

C: Furthermore, I think if we have successfully finished DevoGraph, with great documentation, then we can put some descriptions or introductions of DevoGraph in the blog post, I think.
A: All right, so yeah, we'll kind of target that as a thing to do for the summer, and we'll see who's participating this summer.

A: It should be a really good time. We'll keep our weekly meeting time as a place for showing off work; people can present whatever they want. And then we might have meetings within the week to talk about specific things, problems people were having or anything else, and then we'll use the Slack and the GitHub repositories too.

A: So we'll have that as kind of a way to communicate with people, and make sure that we can get everything going on time. Hello, Jesse. So that's good; I look forward to the summer, it should be good. Like I said, the community period starts in about a week and a half, so right at the beginning of May. It's coming up.
A: Okay, good. All right, so what did we want to get into today? Okay, he's just listening today, for right now anyway. So, we have a new publication: Jesse, of course, and Dick Gordon and myself. Let me share my screen. This is something that's been around, that we've talked about for a while; there's actually a presentation on the YouTube channel on this work.

A: This is the diatoms work, and it's called "The Psychophysical World of the Motile Diatom Bacillaria paradoxa". It talks about some of the Bacillaria work we've done. The last paper we did was a couple of years ago, where we did a lot of deep learning analysis of the microscopy data, and then we did some biomechanics analyses.

A: In this book chapter we talk about what we call the psychophysics, which is the response to stimuli that diatoms have, and especially this genus Bacillaria, which of course is this colony of cells that moves back and forth like an accordion. It has this collective behavior, this movement, that's very intriguing. We characterized this in the last paper; in this paper we consider some models and some ideas of how this movement is generated.
A: So there's a lot of good stuff in here about how this organism, without a brain, generates this response to stimuli. We have an open-source version of it off of the website, off of the Publications page, so you'll be able to find it there. This is part of a larger volume called "Mathematical Biology of Diatoms".

A: This is chapter nine, as you see here, and this is the book, which is on the Wiley Online Library. People may or may not be able to access it (I wasn't able to access it for some reason), but basically it's a whole book of papers on this topic, the mathematical biology of diatoms.

A: There are a number of different papers here by different contributors. Part one is diatom form and size dynamics. Diatoms are a really fascinating group of organisms: they're algae, they have a solid cell wall, which is actually made of silicates, and they accrete their cells.
A: They don't necessarily grow them like, say, an animal does. They can divide, they actually have sexual reproduction, and they respond to different cues in the water column in different ways. So, without going into the details of diatom biology, which we have in that video on the YouTube channel, we can do a number of different things to look at how they behave, their biology, and how they persist over time. So part one is diatom form and size dynamics.

A: This talks about size regulation in diatoms, and getting a sense of what the size and shape of diatoms are. Diatoms are very small; a lot of them are less than 100 microns in size. But the cells themselves are actually quite exquisite; they can form all sorts of shapes. So there's modeling of size-resolved diatom populations, and a mathematical description of diatom algae.
A: Then there's the silicated exoskeleton: structure and properties, through colonial growth kinetics and prospective nano-engineering applications. They use diatoms in biofuel production and in other nano-engineering applications. As you can imagine, you have these very small organisms that are motile, that have silicate cell walls, and they can move through the water column quite effectively, so you could use this for a number of applications in nano-engineering. But to do this,

A: you have to characterize the cells and the different cell types, of which there are many amongst diatoms. Part two is diatom development, growth, and metabolism. Whereas the last section was about the descriptive aspect of diatoms, using imaging and some other technologies to measure them, this one talks about growth and metabolism.
A: So this section is somewhat about evolution and growth modeling, some aspects of noisy data in characterizing growth, and metabolic modeling, which I think we've talked about in previous meetings. It's actually quite a tough modeling exercise, but they do this in diatoms.

A: Then there's diatom motility, and that's where we've come in. Our chapter is in here, along with a couple of chapters from Thomas Harbich, who's been in the group before. He did one paper modeling another Bacillaria species as a Kuramoto oscillator with time delay, so he gets into the specifics of some of the movement models. The Kuramoto model is a dynamical systems model; it deals with coupled excitable oscillators.
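To make the Kuramoto reference concrete: the model describes N phase oscillators, each with its own natural frequency, pulled toward the phases of the others through a sinusoidal coupling term. This is a minimal sketch with illustrative parameters, not Harbich's time-delay variant:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / n) * coupling)

def order_parameter(theta):
    """r in [0, 1]: 0 means incoherent phases, 1 means full synchrony."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = rng.normal(0.0, 0.05, n)       # similar natural frequencies
K = 3.0                                # coupling well above critical

r_start = order_parameter(theta)
for _ in range(3000):
    theta = kuramoto_step(theta, omega, K, dt=0.01)
r_end = order_parameter(theta)
# With strong coupling the oscillators synchronize: r_end should be near 1.
```

Watching the order parameter `r` rise from near 0 to near 1 is the standard way to see synchronization emerge in this model.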
A: We use it in neuroscience to model neurons and neuronal networks, where you have a bunch of coupled neurons, and he's using that in this case. And then this paper is on Diatoma vulgaris colonies, describing them using a Lindenmayer system.

A: A Lindenmayer system is a system of growth where you put together different shapes and symbols, and you can grow systems from there. So that's another interesting paper. And then Richard Gordon, with Krishna and Shruti (all three of them have been in this group before), did a paper simulating the dynamics of diatom motility at the molecular level: the domino-effect hydration model with concerted diffusion. So that's their model of diatom movement.
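The Lindenmayer (L-)system idea mentioned above can be shown in a few lines. This sketch uses Lindenmayer's own original example, which modeled algae growth by parallel string rewriting; it is illustrative only, not the model from the paper being discussed:

```python
def lsystem(axiom, rules, steps):
    """Iteratively rewrite a string: every symbol is replaced by its rule
    (or kept as-is if it has none), applied in parallel each step."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's original algae example:
# A -> AB (a mature cell divides), B -> A (a young cell matures).
rules = {"A": "AB", "B": "A"}
gens = [lsystem("A", rules, n) for n in range(6)]
print(gens)  # ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA', 'ABAABABAABAAB']
# String lengths follow the Fibonacci sequence: 1, 2, 3, 5, 8, 13.
```

Adding geometric interpretations to the symbols (turtle graphics, cell shapes) is what turns this rewriting scheme into a growth model for colony morphologies.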
A: All four of these papers are really interesting; I would read all of them if you have time, if you can get access. Then there are diatom ecological and environmental analyses. Again, these are mathematical models describing the interaction of diatoms with light; there's another paper on light response.

A: Another paper is on chemical sensing: this diatom species as a biogenic charge sensor for heavy-metal-ion contamination detection. This diatom can sense chemical pollution, it can respond accordingly, and you can use that as a determinator of whether you have a large amount of this pollutant in the local area that you're testing. So those are the papers in this volume.

A: I think that'll be something that people might want to check out. Let us know if you need access to things; we may or may not be able to get them for you. So anyway, congratulations to everyone involved in that.

A: It looks like we have Himanshu with us. Hello, Himanshu, how are you?
A: Okay, that's great; nice turnout here. Thank you for attending. The next thing I'll talk about is this piece on synthetic morphology. This is something that we talk about a fair amount in the group, and last week we talked about the special issue on symmetries and morphogenetics and things like that.

A: We've also talked about this in the context of artificial life and morphological computation. You have the body of the embryo, the body of the worm, the bodies of different organisms, and development shapes that body, what it looks like; we call that morphogenesis. So this is a Scientific American article on synthetic morphology.

A: "Synthetic morphology lets scientists create new life forms." This is a new field that's emerging, and it's sort of based on some of the work that's been done in ALife. There's an area called morphological computation, which people have pursued largely with robots, where they've been able to take a morphology and use different ways of calculating sort of the optimal morphology, and different things like that: evolving shapes using genetic algorithms.
A: This is kind of moving that toward the biological realm. They go through some interesting things here at the beginning: they talk about the mythical hybrid beasts that you find in mythology, like mermaids, centaurs, and chimeras, which speak to our fascination with biological forms that are different from what exists in nature.

A: So, let's see: "But a crude stitching of components will not produce a viable organism. Bodies aren't a collection of arbitrary pieces; a human embryo grows into a being with the standard features of a human body, all the parts working in synchrony. Biological forms seem to have inevitable, unique target structures. An emerging discipline called synthetic morphology is now questioning that notion. It asks how, and how far, the natural shapes and compositions of living matter can be altered."
A: So this is really about understanding the rules of morphogenesis, and how to use them to build engineered structures. Say you wanted to build an embodied robot. People often just kind of put a body together; they say, well, this robot needs a body. If we have something like a Roomba robot that moves across your floor to vacuum, it has a body.

A: It can use that body to sense the corners of the room, to sense where it is in the room, but we can actually do much more with that body, and to do much more with it we have to understand the rules by which that body exists in an intelligent agent. So synthetic morphology might be considered the next stage of synthetic biology. The latter discipline has racked up impressive achievements in retooling cells for non-natural tasks, for example programming bacteria to glow in the presence of pollutants and other chemicals.
A: That's kind of what we were talking about with diatoms, except here people actually create things: they take bacterial cells, they manipulate the genome so they keep only the essential genome, and then they put in specialized genes that will produce things like enzymes, and do this efficiently.

A: They interview Michael Levin, and also Roger Kamm, who does a lot of work in synthetic biology. Then they get into the rules of living form, which is something we've been looking for for many years, ever since Darwin. In fact, in the mid-1800s, Thomas Huxley began to suspect that there was a generic form of living matter, often called protoplasm, from which the most primitive life forms are fashioned.
A: This was an idea that there was a sort of force of morphogenesis, and of course that was incorrect, but I think we're getting to understand what these things may be. In his 1912 book "The Mechanistic Conception of Life", the German physiologist Jacques Loeb argued that life could and should be understood according to engineering principles. After discovering that he could stimulate asexual reproduction by treating unfertilized sea urchin eggs with simple salt solutions, he became convinced that nature's way of doing things with living matter is not the only way.

A: That's a very early-twentieth-century quote; it doesn't give specifics, but it's a starting point. And then people figured out how to do tissue culture and things like that.
A: But now we can use tissue culture to serve as the experimental test bed for some of these ideas. The idea of cells as the building blocks of our bodies might make them seem rather passive, like mere bricks to be stacked into the masonry of tissues. What they mean by cells as building blocks is that cells form larger structures.

A: They form tissues; they form networks, like the ones we talk about with our network work. Cells can be part of larger systems, networks, structural integrity, and organs, and cells are much smarter than simple bricks in a wall. Each cell, in many respects, is a living entity in its own right, able to reproduce, make decisions, and respond to its environment.

A: So this is where they're going with this: they're talking about using cells as components in a system, but it's not like just putting together a set of bricks. You have to account for the fact that each cell makes its own decisions, sometimes collectively with other cells, sometimes independently.
A: Then they talk a little bit about morphogenesis: how does the featureless blob of cells that is the early embryo know what to make, and where to make it? We talk about that a lot in this group, and we've talked about how a lot of it is encoded in the genes, some of it is mechanical, and there's this aspect of morphogenesis that is a sort of collective behavior, both mechanical and communicative; there are these signaling pathways.

A: Some of them are mediated by genes, and some are mediated just by chemical signals and their sort of spatial location. So there's a lot here. They give some examples of cell communication: chemical signaling, mechanical signaling, and electrical signaling, which we kind of think of as something that you see in neurons, but these are actually in all cells. They show a neuron here, but they could just as easily show a skin cell, because skin cells also have electrical potentials.
A: So you have these three different channels of communication, chemical, mechanical, and electrical, and they go through some aspects of this. They go through this idea of a control knob: we can modify some of these communication channels, turn them up or down as desired, and we can build things with cells so that the way they behave collectively is tunable, and you can do this in different ways.

A: Then they talk about the plasticity of cells, which we've talked about a lot in the group with respect to stem cells and cell differentiation, and then they talk about reprogramming cells, which I think we've talked about a little bit in the group. Cells can be reprogrammed in terms of their fate, but also in terms of their shape, which they don't show here too much.
A: They do show different cells of different shapes down here. It's important to know that cells have vastly different shapes depending on their cell type, but cells also shape-shift during their existence. In a lot of the work we've talked about with mechanical stimuli, cells change their shape depending on where they are in a group of cells, or on the mechanical stimuli

A: they're exposed to. So cell shape is important as well, and then there's this aspect of programming the shape, which is basically controlling that process, and knowing exactly what it is that's controlling it. Then, finally, they talk about this idea of a chimera, which is two different organisms contributing to the same sort of hybrid organism. These chimeras are very common in mythology.
A: They'll differentiate into different tissue types, and sometimes they form systems of different cell types: you can make a small gut, or you can make a small mini-brain, or something like that. This is done in culture, not in the organism, and it's highly controllable: you can measure its activity, and you can affect its form and shape. And then xenobots: these are based on frog embryos, but they're little robots that you can build based on developmental principles.

A: Remember, these are little proto-embryos, and they can move around and do things in a way that is programmable. So this is really interesting work. They show some different synthetic-morphology engineering operations, such as construct, coalesce, develop, and generate; you can take single cells and put them together in different ways according to these basic functions. Construct is to take a single hexagonal cell, form a little row of these, and then form an array of hexagonal cells.
A: Coalesce is where you don't stack an array; you actually let the cells drift into an arrangement that looks very similar to construct. Develop is where you let the cells change their shape in the array, and then they grow according to differential rules, and you end up with the same structure. And generate is where you have a small number of cells, and the groups of cells replicate and form fractal patterns, so this looks like a snowflake that emerges from this basic array here.

A: So yeah, I'll put this in the Slack if you're interested in reading more about it. Any questions at this point?
A: I don't know if Sushmanth is able to talk more, if you wanted to talk more; I know he was cut off earlier by technology.
D: Last week I couldn't work much because my brother had surgery on his hand; he fell off a bike, and it was kind of terrible. Right now we are working on the DevoWorm website; we are hosting the whole set of publications there, and in parallel we are working on the instance segmentation model. Actually, I was reading some related research papers about it, what instance segmentation is, et cetera. While doing that, I thought of implementing it in the Chainer library, but then I thought:

D: the whole of DevoLearn is on PyTorch, so I was thinking to implement it in PyTorch only. And I want to mention some things. Actually, DevoLearn was working fine, but in DevoGraph stage one we are extracting the centroids
D: at different time frames, and the lineage is also made, but these all can't be as expected, mainly because, when I watched some of the video frames of what the DevoLearn segmentation produces, it was not working properly; there is a disturbance affecting the centroids. That is the reason I coupled this with instance segmentation, and that's the reason I am explaining all this.

D: That's what my update is. For the next few weeks I will be working on the instance segmentation model that I am implementing. In my thoughts, I was trying to get the centroids perfectly; I'm using a lot of stuff there as well. I think I need to write a blog about this, and I need to share it with you.
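The stage-one step D describes, turning a segmented frame into per-cell centroids, reduces to averaging pixel coordinates per label. Here is a minimal NumPy sketch on a toy labeled image; the real pipeline would take DevoLearn's segmentation output, and the function name and data here are purely illustrative:

```python
import numpy as np

def centroids_from_labels(label_img):
    """Given a 2D integer label image (0 = background, 1..N = cells),
    return {label: (row_centroid, col_centroid)} as pixel-coordinate means."""
    cents = {}
    for lab in np.unique(label_img):
        if lab == 0:
            continue  # skip background
        rows, cols = np.nonzero(label_img == lab)
        cents[int(lab)] = (rows.mean(), cols.mean())
    return cents

# Toy frame: two rectangular "cells" in a 10x10 image.
frame = np.zeros((10, 10), dtype=int)
frame[1:4, 1:4] = 1   # cell 1, centered at (2, 2)
frame[6:9, 5:10] = 2  # cell 2, centered at (7, 7)
print(centroids_from_labels(frame))  # {1: (2.0, 2.0), 2: (7.0, 7.0)}
```

Running this per time frame and matching labels across frames is what produces the centroid trajectories that stage two of DevoGraph would consume.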
A: Well, that's great, yeah; thank you. So it's great that you're able to get some support from Hugging Face, and it'd be great to have it as a Transformer. And then also, about the lineage information: that's going to have to come separately, I think.

A: Although I think you can easily get it out in C. elegans; you can easily get it out of the cells, since their position in the embryo basically tells you what cell identity they are, and then we have the information about the lineage tree, so we can link those two together. We've had GSoC projects in the past where we've done that sort of linkage, so I'm not sure if you're familiar with some of the stuff that was done, I think in 2017 and 2018, in that era.
D: Because there are two different file names I saw in the code; that is the reason I want to implement a new model. Actually, DevoLearn can't extract the volume of each cell, and the centroids are also needed at the different time frames; for model accuracy we need exact centroids of each cell at each time frame. That's what I want to update. I want to cleanly construct the stage-one pipeline so that it can be easily used with some commands.

D: If we run that command, it just needs to extract the centroids, and they need to be given to stage two of DevoGraph, and after that topological data analysis can be done. Actually, a tool which you have shared is very useful to me: the TTK toolkit; it is there in the DevoWorm Slack channel. We can use that to do the topological data analysis. Also, there is a separate library; we can give just a microscopy image to it, and we can extract all the topological data analysis from it.
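The birth and death quantities mentioned around here come from persistent homology, the core construct of topological data analysis. TTK computes this for real scalar fields; as an illustration of the idea only, this toy version computes just zero-dimensional persistence of a 1D signal: each local minimum births a connected component of the sublevel sets, and when two components merge, the younger one dies:

```python
def persistence_0d(values):
    """Birth-death pairs of sublevel-set connected components of a 1D
    function (0-dimensional persistent homology), via union-find. The
    global minimum's component never dies (death = None)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    comp, birth, pairs = {}, {}, []

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]  # path compression
            i = comp[i]
        return i

    for i in order:                  # add points from lowest value up
        comp[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):     # check already-added neighbors
            if 0 <= j < n and j in comp:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # The younger component (larger birth value) dies here.
                    old, young = sorted((ri, rj), key=lambda r: birth[r])
                    if values[i] > birth[young]:
                        pairs.append((birth[young], values[i]))
                    comp[young] = old
    pairs.append((min(values), None))  # essential (never-dying) class
    return sorted(pairs, key=lambda p: p[0])

signal = [3, 1, 4, 0, 2, 5]
print(persistence_0d(signal))  # [(0, None), (1, 4)]
```

The two local minima (values 0 and 1) each birth a component, and they merge at the saddle value 4, killing the younger one; that (1, 4) pair is one point on a persistence diagram.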
D: Like the depths, the births, and the instance segmentation of those things; we can cross-verify our model against the topological data analysis toolkit output. So it's kind of huge work. That's the reason I started working on it before the GSoC period, whether I was selected or not; I don't need that, but I want to complete the work and host it.
A: Okay, yeah, that sounds great. Let us know what you need for that, if there's anything that you need. I know you were asking for blog posts; I gave you the Synthetic Daisies blog posts. Those are just things that we have on our website that are useful to people, and then we'll probably use it also, like I said before, as a documentation portal.

A: So we'll have some other posts that we'll generate this summer, probably on different things, and we'll have that all up there in that spot, so someone can go there and read what they need to.
D: Actually, when I came to DevoLearn, when I was starting to understand it, I couldn't understand it at first; that's the reason my wife told me to build a blog for the whole of DevoWorm, where if a person comes there, the particular blogs will be there. Actually, I contacted another guy last year as well; I don't remember the name. He did GSoC with the Orthogonal Research Lab; he's from my same college, Amrita.

D: Last year two guys actually did a project, on the digital microspheres.

A: Yeah, they did a different project last year, which we need to revisit: the digital microspheres.
A: So they built, well, Susan actually has some data from a specialized microscope she's built, where she has images coming from different angles, and their job was to create a sphere where you stitch those images together, with a kind of viewer where you can spin the thing around and explore the surface. We haven't gotten past kind of the basic viewer, so we've been trying to get it back off the ground, but it's been,

A: you know, it's kind of hard when you don't have devoted time to do it. But that stuff has been there too. So I was thinking of putting that into DevoLearn as well, but it's kind of different in a lot of ways, so I don't know where to place it.
D: Some kind of documentation would be useful. Actually, everyone responded, which is pretty cool; they are writing the blogs and they are helping me. They are sending them to me, so I got so many blog posts. I need to create that blog framework and I need to push them all; due to this lot of disturbance I couldn't do it, but maybe by next week or the upcoming week. And I will start working on the instance segmentation model, and also on DevoGraph and topological data analysis. That's what my update is. Anyway, thanks.
A: No problem, yeah; sounds great. So yeah, thank you. Let's see, some things in the chat. Okay. So, one thing I want to tell you is that I've worked with Himanshu in our other group, and he did GSoC last year in the other group; he did a project on open-source sustainability, so he did some reinforcement learning methods and built them into an agent-based model.

A: "I am interested in contributing to DevoWorm, glad to be here, and currently studying the literature on topological deep learning; the recent paper shared by Jiahang, this arXiv paper." This was in the Slack that we have: if you go to the devolearn/devograph channel of the Slack, we have a number of things in there relevant to what we'll be doing this summer. So Jiahang shared this paper here; it's "Architectures of Topological Deep Learning: A Survey on Topological Neural Networks", and this is on the arXiv.
A: This talks about TDL, topological deep learning, and I think we just talked about this with respect to DevoGraph. We talked about stage one and stage two: stage one being image segmentation, which we've been working on because that's kind of the fundamental aspect of it. If you don't have good data, it's hard to get stage two to work well. And then stage

A: two is, of course, assembling the graph embeddings and creating graphs of the data, and then stage three is the topological aspect, topological data analysis, and that's something we never really addressed.

A: We just didn't have time, and it's very difficult to do that in addition to the other two stages. So Himanshu wants to do basically stage three of that, and we need to kind of figure out, if you want to do that, how to do it; and there's a lot of stuff on topological data analysis.
A: This topological deep learning area is a good place to start, I think. "Architectures of Topological Deep Learning: A Survey on Topological Neural Networks" was uploaded to the arXiv four days ago. It's a great resource, and then there's an associated GitHub repository called awesome-tnns, which summarizes schemes in various topological domains. So this is the awesome-tnns repository; it looks like it accompanies the paper. It covers cellular complexes, combinatorial complexes, hypergraphs, simplicial complexes, and then a bibliography.

A: So this, I guess, shows a message-passing simplicial network; I guess this is code for it, or at least mathematical notation. No, it's mathematical notation; okay, it's rendering now. So these are different schemes. Without getting too much into what they are, because there's a whole field there and you'd have to define a bunch of terms, without getting into that,
A
you can go to that repository and browse around and get a sense of the major areas in this field. So you have schemes in the hypergraph domain — HyperSAGE, Hyper-Atten (these are just different methods), HyperConv, which I imagine is a convolutional network, AllDeepSets. They're different schemes, and I think this is a good place to go and look, and at least try to figure out what might work with what we're doing in DevoLearn
A
and in DevoGraph. So yeah, that sounds great — I look forward to talking about this more, Himanshu. And you might want to get in touch with Sushmanth and Jiahang: Jiahang did some of the original DevoGraph work last year, and Sushmanth has been working on things this year with DevoLearn, so we'll be getting more into this over the summer.
A
Oh, the link to the article? Yeah — all the articles I'll be sharing in the Slack. So that's good. Any questions before we move on?
A
So this one is actually about — Susan has developed a good theme here in the meetings of talking about tensegrity and different tensegrity networks. These are a little bit different than the graph neural networks or the embryo networks we've talked about. This is where you have these networks of cells that have structural integrity; they form, like we talked about in the synthetic morphology paper.
A
You form these networks of cells in a morphology, and there are reasons why they exist — for forming tissues, but also for the physical integrity of the structure. So you want something where you can either control, say, the stiffness or the passivity of the structure, or have it be hyper-stable. And one thing I found this weekend —
A
I was looking around and I had forgotten about this, but if anyone's familiar with the architect Antoni Gaudí, who did a lot of work in the early 20th century: he built a number of pieces of architecture in Barcelona, in Spain, including this huge church, this cathedral. It was kind of based on a termite mound — basically, it looked like a termite mound.
A
It had this very odd architecture, at least in terms of what was being built at the time, and so he was a pioneer in a lot of bio-inspired architecture. The reason I bring him up in this meeting is because he built structures that sort of echoed some of these biological structures. A termite mound is known for its rigidity, but also for its ability to self-regulate. Termites will build these things collectively and then they'll live there, and
A
they're able to regulate the chemical composition of the environment inside the mound; they're able to regulate the temperature. The design is such that it stays pretty stable — it has to be stable; it's always getting rain and wind and other animals pulling at it — but also
A
it's this collective structure, so it needs to be built without — I don't know if there's an architecture that they follow, but it has to be hyper-stable with respect to the larger structure, and there's not really any evidence that termites build things based on a master architecture. So they have this sort of self-organized, collective way of building it. In any case, Gaudí came up with a number of different models for how these things may be built.
A
One of the things he developed is this hanging chain model, a parametric approach to design. So this is the hanging chain model — this is actually a model of the church that he built near Barcelona, the Colònia Güell, which looks like a termite mound, or something that doesn't look like a typical European church. And so this is the sort of skeleton of it.
A
It is known that Gaudí preferred modeling architecture over drawing it, especially models made of chains hung from a ceiling, or strings with small weights attached. Through experimentation with such models, he discovered a way to use traditional Catalan masonry techniques in new, more complex ways. A chain suspended simply from both its ends results in a catenary curve, which naturally distributes the static load — in this case tension — evenly between the links of the chain.
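For reference — this is the standard textbook form, not something stated in the talk — the catenary that a hanging chain traces, and the inverted arch Gaudí exploited, can be written as:

```latex
% Catenary of a uniform chain hanging under its own weight:
% a is the ratio of horizontal tension to weight per unit length.
y(x) = a \cosh\!\left(\frac{x}{a}\right),
\qquad
y_{\text{arch}}(x) = c - a \cosh\!\left(\frac{x}{a}\right)
```

Flipping the curve turns pure tension in every link into pure compression in every block of the arch, which is the tension–compression analogy discussed below.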
A
So when it's hung as a set of chains from the ceiling, it's able to manage tension; when it's flipped back over — when it's standing from the ground — it's dealing with compressive forces in the same way. What Gaudí did was apply this tension–compression analogy to chains hanging from other chains, or arches superimposed on arches asymmetrically, permitting him to design a much more fluid architecture. So you had these arches, then you had other arches on top of those, and that sort of nested design was actually hyper-stable
A
with respect to the supports here. Sounds like we have something in the chat: oh, cable-net tensegrity — yeah, Susan pointed out that's cable-net tensegrity. So again, this is a wire-frame model that they built in the museum here, and this was all done without a computer. This was all done in this architectural context of wire models; they built wire models to show kind of what this looks like.
A
So this is another example here — this is from 1889, so you get a sense of what this looks like — and then this is the close-up view. Using scale models made of chains or weighted strings, it was found that the optimal arch follows an inverted catenary curve, which we talked about.
D
A
And then there are some citations here, which basically cover the stuff that we just read, and then this just goes over some more of it. So that's it — I just found it interesting. Okay, well, thank you for attending, Sushmanth.
A
B
Not too much — I'm just curious as to whether you could have tensegrity structures for a neural net, combining something to make, say, the network more flexible or self-supporting? I don't know.
A
So, in a lot of the graph neural network work, they're building neural networks that kind of have the topology of a graph. The graph embeddings are just one way to do it; people have actually designed topologies that resemble the graph they propose is underlying some process.
A
So it stands to reason that you could build a tensegrity neural network. You could have the weights be something that translates into physical variables — so the weights would be like the stability of the connection between two nodes, the way we have our strings and our nodes in a tensegrity network.
A
Those would be analogous to two processing units and then a weight, and the weight would just vary with respect to the physical stability — the physical tension or compression — that we want to give. So, like the form-finding in tensegrity that we talked about earlier this year, where we had those matrices that described the strings between different nodes: you'd have to find the stable structure, and the stable structure would be a description, in the matrix, of stable connections —
A
stable links: what are the forces, what are the compression or tension components — and then finding the optimal one. And you could do that in a neural network, I think, just by translating those into weights and then asking what the optimal set of weights is.
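As a rough sketch of the idea — this is a hypothetical toy, not any form-finding method discussed in the meeting — you can treat the connection matrix entries as weight-like force densities and use gradient descent to settle the free nodes into a stable (minimum-energy) configuration:

```python
import numpy as np

# Hypothetical 4-node example: two fixed anchors, two free nodes.
# q[i, j] acts like a "weight" (a force density) on the string i-j.
# Form-finding = settling the free nodes into the configuration that
# minimizes the energy E = (1/2) * sum_ij q_ij * ||x_i - x_j||^2.
q = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # symmetric connection matrix

x = np.array([[0.0, 0.0],    # node 0: fixed anchor
              [0.3, 0.9],    # node 1: free (initial guess)
              [0.7, 0.8],    # node 2: free (initial guess)
              [1.0, 0.0]])   # node 3: fixed anchor
free = [1, 2]

for _ in range(2000):                       # plain gradient descent
    grad = np.zeros_like(x)
    for i in range(4):
        for j in range(4):
            grad[i] += q[i, j] * (x[i] - x[j])
    x[free] -= 0.05 * grad[free]            # only the free nodes move

# At equilibrium the net force on each free node vanishes.
residual = np.linalg.norm(sum(q[1, j] * (x[1] - x[j]) for j in range(4)))
print(residual)  # ~0: node 1 has settled into a stable position
```

The "learning" here is exactly the optimization over weights described above; a real tensegrity would distinguish tension and compression members, which this sketch ignores.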
A
So yeah, that's great. Anything else we want to say before we go? I think that's it for today.
A
And so, yeah, I see a lot of turnout here for the summer. We're starting the summer here; we're interested in these GSoC projects, working on DevoLearn and DevoGraph and related things. So let's keep in touch on the Slack — we'll be ramping that up soon. May 4th is when they make their final selections, but if you're not selected for whatever reason, or if you didn't apply this year and you want to be involved, please feel free to participate.
A
We can help you build things, and you can learn about open-source software and how to contribute to an open-source project. So there are a lot of benefits to being part of our group and working on this.
A
So if you have any questions, let me know. And then in the month of May we'll be doing a community period — it's related to something we have to do for GSoC, but we'll be doing this in the group, where we talk more about the organization.
A
There are a lot of things in the group — we've been around since 2014, so we've done a lot of things that are sort of in the past and that we don't revisit every week. So if people want to find inspiration, there's probably a lot of it in things we've done in the past that we just aren't talking about all the time now. We'll talk about those as well, and then we'll also talk about how to build open-source projects and software.
A
There are some readings that are really good, I think, for people to go through. I have a couple of books I can share with you, and they basically go over a lot of the details of this. So this is a good opportunity to read about how to build and market your own open-source software —
A
how to develop, what it means to be open source, and so forth. That's all in the next few weeks. So, okay, thank you for attending, and have a good week.
A
B
A
B
A
The first paper here is "Everything is Connected: Graph Neural Networks." Now, this paper was put in our Slack channel — in the DevoLearn/DevoGraph channel, I believe — and this is Petar Veličković, from DeepMind and the University of Cambridge.
A
So this is on the arXiv, but it's also on our Slack. This is the graphical abstract; it shows what the idea of this paper is. You have this representation here with inputs X and A.
A
Then you have this GNN, which is an embedding that can be built from those: the inputs come into X, and the latents come into H. So this is the relational network structure, and many graphical network structures can be generated from the same input data. Your inputs are X and A, your latents are H — the inputs get transformed into a set of latent states, and that's where you get your graph neural network embeddings.
A
Now, out of these different embeddings, we have three things here. We have node classification, where z is a function of the latent space H — the h_u are the nodes in our latent representation, and we can classify those nodes accordingly. Then we have graph classification, which is basically over a collection of nodes.
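The pipeline just described — inputs X and A turned into latents H, then per-node classifications read off from H — can be sketched in a few lines. This is a generic GCN-style layer with random, untrained weights, purely to show the shapes, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency: the "A" input
X = rng.normal(size=(4, 3))                  # node features: the "X" input

A_hat = A + np.eye(4)                        # add self-loops
deg = A_hat.sum(axis=1, keepdims=True)
W = rng.normal(size=(3, 5))                  # untrained layer weights

H = np.maximum(0.0, (A_hat / deg) @ X @ W)   # latents H = f(X, A): one layer

V = rng.normal(size=(5, 2))                  # untrained classifier weights
logits = H @ V                               # node classification z_u = g(h_u)
z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(z.shape)                               # one class distribution per node
```

Graph classification would add one more step: pool all the h_u (e.g. a mean) into a single vector before classifying, as discussed later in the talk.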
B
A
So I don't necessarily know if I agree with this statement that graphs are the main modality of data. You can certainly create graphs out of anything, but, as holds true in network science, you have to have a natural sort of graph structure — that is, you have to have nodes, you have to have connections between the nodes, and there has to be some sort of discrete nature to it.
A
So you can create a graph out of anything, but the things best suited to graphs are things that resemble this sort of discrete structure of highly interconnected components. Not everything in nature is like that; sometimes other data structures are more suitable.
A
In any case, this term "elegantly representable" is interesting, because the graph neural network literature talks about expressivity — meaning the graph neural network should be able to express representations for a lot of different types of systems. Which is true; it's just not every system in nature. So prominent examples include molecules, represented as graphs of atoms and bonds, social networks, and transportation networks.
A
D
A
You know, we have our nodes here — say i — and then we could say that j is this other node, and then there's a connection ij, which could have a strength, a quantitative strength.
A
This could have just a regular value — often in the interval zero to one, but it can be anything. Now, in the meetings we've talked about tensegrity structures, and in form-finding tensegrity we sometimes use matrices, which are actually a sort of representation of this. So this can be represented in a matrix.
A
It'll have a bunch of entries in it, and it'll describe the physics of the bonds between two things. So, for example, if we have our nodes, which are like blocks, we have strings between the blocks; we introduce a tension here, and the value goes here. And so form-finding tensegrity is finding the optimal matrix for hyper-stability — we're finding an optimal matrix.
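The matrix representation just described is easy to make concrete. Here is a minimal, made-up example — three blocks and two strings — where each entry stores the tension on the string between a pair of blocks:

```python
import numpy as np

# Hypothetical 3-block example: entry T[i, j] is the tension on the
# string between blocks i and j (0 means no string). The matrix is
# symmetric because a string pulls equally on both of its ends.
T = np.zeros((3, 3))
T[0, 1] = T[1, 0] = 0.8   # string between blocks 0 and 1
T[1, 2] = T[2, 1] = 0.5   # string between blocks 1 and 2

assert np.allclose(T, T.T)   # undirected structure: symmetric matrix
print(T[1].sum())            # total tension acting on block 1
```

Form-finding then amounts to searching over such matrices (or over the node positions they act on) for the configuration that is stable.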
A
It would look like this, with a matrix, and then we can apply it specifically to tensegrity or a form-finding exercise. Or sometimes, if you're dealing with a neural network, you already have that network structure — you already have the matrix — and all you need to do is find the optimal weights. And so graph neural networks really feed into sort of more conventional neural networks. But it's important to remember that you need to have a certain structure to your problem.
A
So there's a lot of potential for these kinds of models. The main aim of this short survey is to enable the reader to assimilate the key concepts in the area, and to position graph representation learning in a proper context of related fields.
A
What's interesting is that GNNs are kind of like phylogenetics in evolutionary biology. In evolutionary biology, phylogenetics is a field where we use directed graphs to find the true tree of life, and we have a similar process: we take species, we take relatedness between species, we put it into a matrix, and we optimize the matrix by certain rules — like, what's the most efficient path through the tree for traits to be grouped together as having common ancestry.
A
We can do these searches on a number of generated trees and find the true tree, and we do this with graph neural networks too: we try to find the true graph neural network from a set of options that we generate based on our matrix. So we actually have a precursor to this in phylogenetics, which of course has been done for decades. So this is another adjacent area. Let me briefly go over this.
A
They have the tips of the trees, which are species, and then common ancestors within the clade. The idea is: if you have, say, a trait that appears in these two species, you say the common ancestor is here — that's more plausible than placing the common ancestor down here. Then this is the common ancestor of this clade, which has this character, nested within another clade, and then this is the common ancestor of the entire tree, which should have the least derived set of phenotypic characters, as they call it.
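The "most efficient path for traits" criterion mentioned above is parsimony, and the classic way to score a candidate tree is the Fitch algorithm. This is a minimal sketch on a fixed, hypothetical 4-species tree, just to show how a tree is scored by the number of trait changes it implies:

```python
# Minimal Fitch parsimony sketch: score a binary tree by the minimum
# number of changes needed to explain the 0/1 trait values at the tips.
def fitch(tree, states):
    """Return (possible state set at this node, number of changes below it)."""
    if isinstance(tree, str):                 # tip: its observed state
        return {states[tree]}, 0
    (ls, lc), (rs, rc) = fitch(tree[0], states), fitch(tree[1], states)
    inter = ls & rs
    if inter:                                 # children agree: no change
        return inter, lc + rc
    return ls | rs, lc + rc + 1               # conflict: one change needed

trait = {"sp1": 1, "sp2": 1, "sp3": 0, "sp4": 0}

good_tree = (("sp1", "sp2"), ("sp3", "sp4"))  # groups the trait-sharers
bad_tree = (("sp1", "sp3"), ("sp2", "sp4"))   # splits them up

print(fitch(good_tree, trait)[1])   # 1 change: trait arose once
print(fitch(bad_tree, trait)[1])    # 2 changes: less parsimonious
```

Searching over many candidate trees for the lowest score is the matrix-optimization loop described above — the same shape of problem as searching over candidate graphs with a GNN.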
A
Now, you can apply that to graph neural networks, in that you could look at the directedness of the graph neural network — the arcs may have a directedness to them, or they may have a bidirectionality, which would also be interesting; that's something that phylogenies don't have. Let's see.
B
A
B
A
But some people might use that — that's what we have. So there are all different types of networks: we have molecular networks with chemical bonds, and we have connectomes, which are neurons connected by synapses or in other ways.
A
"Graphs are a universal language for describing" — truly, there are a lot of different networks in biological systems. I guess the point I'm trying to make here is that not only can we apply graph neural networks to natural networks, but we also have to recognize that there are different types of networks within those natural systems. So we have the DAGs, we have the CPGs, we have basically all-to-all connectivity.
A
We have a lot of different types of bipartite graphs. In the stuff that we do with embryo networks, we have these sort of composite structures, which may be bipartite at one point in development and then highly interconnected at another point in development. So there are all different ways you can model biological systems with networks and GNNs.
A
You know, this is a Wild West of different types of approaches, and I think a lot of the model systems that they hang themselves on here — that they mention — are only a subset, and I think they pick the easiest ones, the ones that are most similar to, say, a computer science problem. But that does not mean you have to treat it like a computer science problem: you should treat it like a biological problem first and then find the computational analog to that. Beyond this —
A
"It is likely that the very cognitive processes driving our reasoning and decision-making are, in some sense, graph-structured" — that is paraphrasing a quote from Forrester, 1971. No one really imagines in their head all the information known to them; rather, they imagine selecting concepts and relationships between them, and use those to represent the real system.
A
So, going back to the DAGs example, this is very much what you find with Bayesian graphs, or Bayesian directed graphs, and people use those for decision-making and other types of cognitive applications.
A
So there are a lot of ways we can use GNNs and graphs, and there are a lot of intersections, both in biology and cognition and in other domains. Then they talk about Transformers, which is interesting — Transformers are actually themselves a special case of GNN. I know there was some talk about turning DevoLearn into a Transformer — that's something I think Sushmanth is working on — and that is definitely going to lead us down the road to GNNs, so stay tuned for that. So there are some fundamentals.
A
There's a lot of math you have to understand. You have to understand permutation equivariance and invariance. So when we have graphs — when we have all those nodes and they're interconnected — we have to know that you can put them together in different ways and still get the same answer.
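The invariance just described can be checked directly: relabel the nodes of a graph and confirm that a permutation-invariant readout doesn't change. This toy check uses the sorted degree sequence as the readout — a stand-in for the pooled GNN outputs the paper has in mind:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(5, 5))
A = np.triu(A, 1)
A = A + A.T                            # random undirected graph

perm = rng.permutation(5)
P = np.eye(5)[perm]
A_perm = P @ A @ P.T                   # same graph, nodes relabeled

# A permutation-invariant readout gives the same answer either way;
# here the readout is simply the sorted degree sequence.
deg = np.sort(A.sum(axis=1))
deg_perm = np.sort(A_perm.sum(axis=1))
print(np.array_equal(deg, deg_perm))   # True
```

A readout that depended on node order (say, the unsorted degree vector) would generally fail this check, which is exactly why GNN layers are built from order-independent aggregations.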
A
Sometimes you can put them together in different ways as different graphs, and there's no way to say that one graph is better than the other. Indeed, in some of the work in phylogenetics, they've had cases where, if you don't have enough data, you get a bunch of equivalent trees, and that doesn't necessarily help you narrow your data set down to the best tree. But that's what you're dealing with with graphs, and with interconnected, bidirectional graphs it's even worse, because there are a number of different isomorphisms that you can have.
A
So you can describe a certain set of relationships using many different types of graphs. This, though, is actually also helpful, because sometimes it narrows down your search space. If you find a class of isomorphic graphs, you know that the whole class basically solves your problem, and so maybe it doesn't matter which one you use — you only use one of them — and that cuts down your search space to some extent.
A
So they go through invariance and equivariance, and then they go through graph neural networks themselves. There's this issue of message passing, of diffusion and propagation — the idea that a node will pass things to another node, or that there will be some communication from one node to another — or, like I said, there's directionality and bidirectionality. So let's draw this on the board again.
A
With directionality we can assume that something passes from one node to the next node, and this is usually in time. So this is a flow of something — I'll use F, which is the flow, and then time — so this is a flow over time.
A
What's interesting here is that you can have flow simultaneously in either direction. In other words, at any given point in time — and this is the undirected case — you can have something flowing from B to A or from A to B. So you can have a collection of things at any one time point: at t1 you might see B to A, or just A to B, or both. You have a number of those events, and you can count them up.
A
So it's almost like a distribution of a flow over time. Over that period of time you can get a density function, and you can do interesting things with that. In any case, the point here is that you have these different approaches to passing on information, and they call them spatial — sort of propagation, spatial diffusion.
A
It's basically moving across the graph, because if you move from A to B, that's just one step, but in a network you of course have a bunch of these. So A might be connected to B, C, D, E, and F, and things diffuse from A along these arcs. They may diffuse at different rates; they may diffuse at the same rate; they may diffuse bidirectionally at the same rate or a different rate. All of these are possible. And so Bronstein, who's a big name in graph neural networks —
A
— talks about how most of these diffusion, propagation, or message-passing events can be classified into one of three spatial flavors. The first is convolutional.
A
The second is attentional, and the third is message passing. These parameterizations allow us to plug this into a neural network context and account for that activity in the network.
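The three flavors differ only in how a node's neighbors are weighted and combined. This toy sketch (random features, made-up message function — an illustration, not any specific published layer) aggregates one node's neighborhood in all three styles:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=(4, 3))        # node features
nbrs = [1, 2, 3]                   # neighbors of node 0

# 1. Convolutional: fixed coefficients c_uv (here, 1/degree)
conv = sum(h[v] / len(nbrs) for v in nbrs)

# 2. Attentional: coefficients a(h_u, h_v) computed from the features
scores = np.array([h[0] @ h[v] for v in nbrs])
alpha = np.exp(scores) / np.exp(scores).sum()        # softmax over neighbors
attn = sum(a * h[v] for a, v in zip(alpha, nbrs))

# 3. Message passing: an arbitrary (here, toy) message function psi(h_u, h_v)
psi = lambda hu, hv: np.maximum(0.0, hu + hv)
mp = sum(psi(h[0], h[v]) for v in nbrs)

print(conv.shape, attn.shape, mp.shape)   # each is a vector for node 0
```

Each flavor is strictly more general than the last: fixed coefficients are a special case of attention, and attention-weighted sums are a special case of arbitrary messages.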
A
So this kind of goes over the representatives. Representative convolutional GNNs include the Chebyshev network, the graph convolutional network, and the simplified graph convolution. Representative attentional GNNs include the mixture model CNN, the graph attention network, and its recent GATv2 variant. Representative message-passing GNNs include interaction networks, message-passing neural networks (MPNNs), and graph networks (GNs).
A
So, given such a GNN layer, we may learn many interesting tasks over a graph by appropriately combining the h_u, exemplifying three principles, with the tasks grounded in biological examples. So these are biological examples. There's node classification — say, a protein–protein interaction network, with the problem set up here. There's graph classification, where we need to reduce all the h_u into a common representation — this is classifying molecules from their quantum-chemical properties, estimating pharmacological properties like toxicity or solubility, or virtual drug screening.
A
So that's classification: you would take all of your h_u, make them into a common representation, and classify it. Then we can predict links. These links are maybe assumed from the data, or you might have a very clear definition of what a link is. In connectomes we usually have some sort of physical link we can measure — people have worked on connectomes where they've looked at microscopy data, looked for gap junctions, or measured activity in the brain to determine
A
which units in your connectome are linked to which units. There's this idea of effective connectivity, and then there's anatomical connectivity; those can be gotten from the data. And sometimes you want to predict these — sometimes you don't know what the links are until you actually have a prediction mechanism to say which things are most likely to interact, or it's not clear from the data which things are most likely to interact over time.
A
So you can actually build a model with link prediction. We may be interested in predicting properties of edges (u, v), or even predicting whether an edge exists, giving rise to link prediction. In this case, a classifier could be learned over the concatenation of the node features, along with any given edge-level features. So this is things like predicting links between drugs and diseases for drug repurposing, predicting drug–target binding affinity, and predicting adverse side effects from polypharmacy.
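The link-prediction recipe just read out — a classifier over the concatenated node features — is a one-liner once you have embeddings. Minimal sketch with random, untrained weights, just to show the wiring:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 4))          # node embeddings h_u from some GNN

def edge_score(u, v, w):
    """Classifier over the concatenated node features [h_u ; h_v]."""
    x = np.concatenate([H[u], H[v]])
    return 1.0 / (1.0 + np.exp(-(x @ w)))   # sigmoid: P(edge u-v exists)

w = rng.normal(size=8)               # untrained weights, illustration only
p = edge_score(0, 1, w)
print(p)                             # a probability for the candidate link
```

In practice w would be trained on known edges versus sampled non-edges, and any edge-level features would be concatenated into x as well.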
A
So you can use all these different things as building blocks to build a graph. You can use graph classification, link prediction, node classification, and other techniques to build a graph — you can build it from building blocks, or you can build it from a real data set.
A
Then you can also build GNNs without a given graph, and that's where your Transformers come in. Your Transformers basically give you this computation graph, and passing messages over it may be problematic due to bottlenecks, so you want to optimize that network. You get an input graph, and you want to optimize it or understand it better. So it talks about Transformers as GNNs that aren't really operating on a graph per se but are optimizing over a graph. Then there are geometric graphs, which go beyond permutation equivariance.
A
We have assumed our graphs to be a discrete, unordered collection of nodes and edges, only susceptible to permutation symmetries. In other words, if you change the order of how things are connected, you can find symmetries; you can find structures like simplicial complexes; you can find equivalencies in terms of their function. But sometimes you have systems that aren't discrete — you have more continuous systems. They may in fact be less amenable to graph analysis; like I said, not everything is.
A
You have this more continuous property, making it harder to actually extract what the real, true graph should look like, especially over time. So if you're working in dynamical systems or dynamical contexts, extracting a graph like that, one that operates over long periods of time, might be very hard. So the graph is often endowed with some sort of spatial geometry.
A
I think if you look at the graph that I showed you here, this doesn't really have much of a spatial geometry, whereas this graph — the phylogeny — has a very distinct spatial geometry; this little triangle is a spatial geometry. Sometimes the geometries can look very distinct. There are things like the hairball network, where you have a mass of links connecting a mass of nodes — there's no structure there.
A
Sometimes you have a circular network like this, where you get these nearest-neighbor connections and then connections that go around the circle, so there's this structure of local and global connectivity. Sometimes the structure of the network is actually relevant to the geometry of the system. So in a connectome, you might have something like this, where something might go out to a tail or a head — so you have this anterior–posterior geometry, but you also have this left–right geometry.
A
Like that, it's actually both easier and harder to deal with, so we can exploit the spatial geometry to understand at least how things are connected. It gives the example of molecules — I did not, but that's a good example — molecules being structures that kind of look like the phenotype I showed you here, not what we would call a hairball network.
A
And, of course, for techniques like graph neural networks — while hairballs don't really have a lot of internal and explicit structure, something like this might. So this is the kind of thing where we might be interested in designing a model that is equivariant, meaning it behaves consistently under different configurations — not only permutations but also 3D rotations, translations, and reflections.
A
So in this group we talk a lot about embryos, and one of the things about embryos is that there's a lot of 3D rotation of the data, especially in development, where things fold and cells migrate. Your embryo networks will stretch, they'll fold in upon themselves, they'll connect — so you have bipartite graphs, which are two distinct graphs that connect over time. All these things can happen, and these are things we really want to get at.
A
So an E(3)-equivariant message-passing layer transforms node coordinates and features and yields updated features f'_u and coordinates x'_u. So these are the features and coordinates — updates in time. You have your features and coordinates, f_u and x_u respectively, and then the prime, which is the update.
A
There exist many GNN layers that are E(n)-equivariant, and one particularly elegant solution was by Satorras et al. in 2021, whose equations are given here. The key insight behind this model is that rotating, translating, or reflecting coordinates does not change their distances — each pair of nodes has a distance. So, as you see here, especially if there's a geometry involved, this has a distance.
A
So we can actually use that information — we don't want to change the distances when we translate or rotate — and especially when we have an embryo network, or a neural network in the brain, if those symmetries and those fundamental distances are broken, then we can have functional deficits.
A
So a lot of this is about mathematical transformation, but it's also about biological function. Operations where the distances are invariant while the coordinate system undergoes rotation and translation are called isometries, and those are things we want to exploit when we find them.
A
And so, if we roto-translate all nodes as x_u → R x_u + b, the output features f'_u remain unchanged, while the output coordinates transform accordingly. So we can have this output feature that's persistent while the output coordinates transform. If we look at the network at time one and time two, it's changed its position, its rotation, and its orientation, but the output remains the same.
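The key insight — roto-translating the coordinates x_u → R x_u + b leaves all pairwise distances untouched — can be verified numerically. This is a generic check with a random point cloud, not the Satorras et al. layer itself:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 3))                      # node coordinates x_u

theta = 0.7                                       # rotate about the z-axis...
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
b = np.array([2.0, -1.0, 0.5])                    # ...and translate
X2 = X @ R.T + b                                  # x_u -> R x_u + b

def pairwise_dists(pts):
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Distances are unchanged, so any feature computed only from them
# (as in an E(n)-equivariant layer) is also unchanged.
print(np.allclose(pairwise_dists(X), pairwise_dists(X2)))   # True
```

This is why features built from distances stay invariant while the coordinates themselves transform equivariantly — exactly the split between f'_u and x'_u described above.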
A
So there are a lot of things we can learn from this sort of thing, and that's what they have to say about GNNs. There are some other GNN papers here — AlphaFold 2 is sort of a GNN; that's this citation, Jumper et al., 2021.
A
There are also protein-related papers that have made headlines in both Nature Methods and Nature Machine Intelligence — Méndez-Lucio et al., 2021, and Gainza et al., 2020 — both of which might be useful. Protein folding, protein design, and protein binding prediction all appear to be extremely potent areas of attack for geometric GNNs, and so one thing we might do is look at how structural biology relates to some of the stuff we're doing in DevoWorm.
A
So
the
next
paper
I
want
to
talk
about
is
this
architectures
of
topological,
deep
learning
which
we've
talked
about
in
the
meeting?
This
is
also
in
the
slack.
This
is
about
topological
data
analysis
and
topological
neural
networks.
So
there
are
a
lot
of
links
to
mathematics
and
physics
here
and
I'll.
Just
read
the
abstract
on
this
paper
and
we'll
get
too
deeply
anyway.
A
Such
as
in
some
of
the
drug
Discovery,
Networks
tdl's
demonstrated
theoretical
and
practical
advantages
that
hold
the
promise
of
breaking
ground
in
the
applied
sciences.
However,
the
rapid
growth
of
TDL
literature
is
also
led
to
elect
unification
in
notation
and
language
across
TNN
architectures,
which
is
what
they
call
topological
neural
networks.
A
This
is
an
interesting
point,
because
I
mean
in
gnns
it's
bad
enough
and
I.
Think
you
see
in
the
last
paper
that
there
are
these
issue
these
these
areas
of
interest
that
people
have
identified,
at
least
but
the
language
is.
You
know
a
little
bit
sloppy
or
depending
on
what
kind
of
network
you
want
to
build,
depending
on
what
application
you're
using
you
have
message
passing
you
have
other
types
of
diffusion.
A
You
have
other
links
to
other
types
of
models,
so
I
think
in
both
areas,
but
especially
in
this
top
watchful
neural
networks
area.
There's
going
to
be
an
issue
with
language
and
one
of
the
problems
of
doing
research,
especially
when
you
cross
between
biology
and
computational
science
is
that
the
language
is
it's
non-standard
I
mean
there's
going
to
be
language
that
you
use
for
biology
and
language.
It's
used
for
computation
and
it's
going
to
confuse
people
from
one
area
or
another
unless
you
define
that
up
front
and.
A
An
activity
I
think
that's
that's
useful,
at
least
to
some
of
what
we're
doing
is
to
really
kind
of
Define.
What
we
mean
you
know
nail
our
terms
down,
make
sure
we
understand
or
make
the
audience
understand
what
we're
trying
to
do
so.
I
think
that's
an
interesting
point
presents
a
real
world
or
just
presents
a
real
obstacle
for
building
upon
existing
networks
and
different,
deploying
tnns
to
new
real
world
set
of
problems
in
the
real
world.
A
To
address
this
issue,
we
provide
an
accessible
introduction
to
TDL
We,
compare
the
recently
published
tnns
using
unified,
mathematical
and
graphical
notation
through
an
intuitive
and
critical
review
of
the
emerging
field.
We
extract
valuable
insights
into
current
challenges
and
exciting
opportunities
for
the
future,
so
they
kind
of
go
through.
You
know
this
idea
of
relational
structure,
which
we
talked
about
with
respect
to
graphs.
These
interactions
are
relational,
so
this
node
is
really
this
node
in
a
certain
way.
It's
either
related
as
a
direct
connection,
or
maybe
an
indirect
connection
with
some
of
these
other.
A
You
can
make
multiple
hops
between
this
node
and
that
node
or
you
can
make
a
direct
connection
between
them.
Instead
of
sort
of
relational
structural
aspects
are
really
interesting,
not
only
from
the
aspect
of
building
networks
but
I.
Think
in
the
past,
we've
talked
about
relational
biology
in
some
of
the
applications
to
both
relational
biology
and
a
category
Theory.
So
there's
a
lot
of
interesting
stuff
there
under
the
surface.
A
Once we build these sort of machine learning tools, we can then attack some of these other issues. We've actually already published a little bit on our thoughts on category theory, in a paper in 2021, and we'd like to develop that further, but it's a secondary priority.
A
So these pairwise relations are the thing that we see in the matrices. When we have matrices like this, these are pairwise comparisons: this is i, this is j, we get an (i, j) pair, so we get M sub i j; that's the matrix. We have a collection of these and we can build relational statements out of them: this node and this node interact in some way, they're related in some way, and we can describe that using a number.
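The pairwise relational structure just described can be made concrete in a few lines of Python (a toy example, not from the paper): an adjacency matrix M whose (i, j) entry is the number describing how node i and node j are related, here simply 1 for connected and 0 for not.

```python
# Minimal sketch of pairwise relational structure as a matrix.
import numpy as np

M = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# Each nonzero (i, j) entry is one relational statement:
# "node i and node j interact in some way."
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3) if M[i, j]]
print(pairs)   # [(0, 1), (0, 2)]
```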
A
So
that's
what
we're
doing
we're
building
relational
structures
based
on
pairwise
relations,
the
pairwise
structure
graphs,
is
limiting.
Of
course,
we
can
have
many
many
you
can
have
like
three-way
interactions,
four-way
interactions,
Etc
and
in
in
graph
in
graph
Theory
and
in
network
science.
People
have
been
working
on
this,
so
people
have
been
working
on
this
idea
that
you
know
you
have
these
matrices
that
are
just
I.
Subject:
iij
You,
Know
M
sub,
I
j,
but
you
also
have
like
three-way
and
four-way
and
five-way
interactions.
That
should
be
characterized.
A
Some
people
have
done
this
with
like
separate
matrices.
Other
types
of
data
structures
that
allow
you
to
capture
these
topological
deep
learning.
Leverage
is
more
General
abstractions
to
process
data
with
higher
order,
relational
structure,
the
theoretical
guarantees
of
its
models
lead
to
the
state-of-the-art
performance
of
machine
learning
tests.
A
So
this
is
something
that
you
know
we
might
explore
a
little
bit
more.
It
might
actually
come
up
with
some
interesting
examples
of
it's
very
way
in
Fort,
Wayne,
Bible
interactions
and
maybe
how
these
have
been
dealt
with
in
in
network
science,
but
they're
very.
Suffice
it
to
say
you
know
we're
dealing
with
a
conflict
system
here,
it's
not
just
simply
pairwise
interactions,
you
know
added
together,
there's
a
lot
going
on.
This
is
a
simple
example
in
our
hairballs,
which
we
have
here.
B
A
They're just useful, you know, categories that we might put in. So we might find all triangles in a graph, or in a hairball; we might find all circles, or circular paths. But that also means that there's this sort of higher-order set of relations; sometimes there are three and four and five nodes that are all interacting simultaneously. So to find those, you know, would be useful for a number of biological problems.
A
So
that's
I
think
if
you
want
to
read
more
about
that.
That's
on
the
slack,
but
it's
also
been
pointed
out
in
the
meeting.
A
This
is
a
nice
paper
on
accelerating
Network
layouts
using
graph
neural
networks,
so
this
is
actually
having
to
do
with
visualization,
and
so
this
is
an
application
of
graph.
Now
that
works.
It's
quite
interesting.
The
Albert
lazel
whereabc,
who
is
actually
a
Pioneer
in
network
science,
is
an
author
on
this.
That's
nice
he's
getting
into
graph
neural
networks.
A
Current
Network
visualization
software
relies
on
the
force
directed
layout,
and
so,
if
you
use
something
like
geffy,
you'll
find
the
first
directed
layout,
it's
just
basically,
where
you,
you
know,
you
have
certain
options
for
like
visualizing
all
these
nodes
and
their
connections.
You
know
it
kind
of
doesn't
lay
it
out,
typically,
doesn't
lay
it
out
in,
like
a
spatially,
explicit
manner
enjoy
lays
it
out
into
where
it
finds
space
on
the
graph.
So
if
you
have
a
field
of
you
know,
if
you
have
like
a
a
figure,
that's
a
square.
A
It traps large graphs in high-energy configurations, resulting in hard-to-read "hairball" layouts. So the hairball layout, like this, and this isn't a very good image of it, but suffice it to say you can't really find a lot of the structure just by looking at it, because there are way too many connections and it's in this hairball, because, again, it's the most efficient way to display it in this square box that we have here. But you know you could get into it and explore it and maybe find other ways.
B
A
To do it, but that's not the way the force-directed layout algorithm works. So: here we use graph neural networks to accelerate FDL, showing that deep learning can address both limitations of the algorithm; it offers a 10-to-100-fold improvement in speed while yielding layouts which are more informative. So again, you know, having something maybe like this, which is anatomically explicit, would be better, but we don't have the ability to do this using force-directed layouts.
B
A
Partly it's because, in very large networks, your search space is very large and it's hard to find the optimal display. Now, this goes back to this idea of finding the true tree in phylogeny, right? You have millions and millions of trees, sometimes, to sort through, and you know you can't sort through all of them using current algorithms and current computational power; it's prohibitive. So you have to find heuristics, and the same holds true with displaying graphs. You know, how do you find the absolute best layout? Well, it's hard to do, so again...
A
D
A
Which is giving us these adjacency relationships; we can find the eigenspectrum that you can extract from that, and you can look at different aspects of the graph that way, and you can actually relate this.
A
The
computational
power
of
dnms
to
this
ability
to
sort
of
pick
things
out
of
the
graph
predicting
the
dnns
are
particularly
effective
for
networks
with
communities
and
local
regularities,
so
communities
are
just
groups
within
the
graph
that
are
highly
clustered
together
and
regularities
are
just
you
know
like
the
triangles
that
I
mentioned
or
different
structures
substructures
in
the
network.
Like
certain
you
know,
circular
connectivity
or
something
like
that.
A
Finally,
we
use
GNN
to
generate
a
three-dimensional
layout
of
the
internet
and
introduce
additional
measures
to
access
the
layout
quality
and
its
interpretability.
Exploring
the
algorithm's
ability
to
separate
communities
and
Link
length
distributions
so
link
length
is,
is
like
the
length
of
the
links
in
space.
A
It's
not
the
weight,
it's
the
length.
Is
it
the
length
and
then
these
communities,
Community,
finding
and
Community
Discovery
have
been
big
topics
in
network
science,
and
so
you
know,
there's
there's
this
this
in
this
Quest
sort
of
to
find
the
optimal
number
in
the
communities
in
the
graph
or
like
the
strong,
the
clusters
of
activity
in
a
graph,
and
so
it's
hard
again
when
you
have
this
sort
of
hairball
Network.
A
You
know
it's
really
hard
to
find
communities
just
by
eyeballing,
because
you
have
it's
not
like.
Actually,
when
you
have
a
bivariate
graph
and
you
see
clusters,
those
often
can
be
misleading.
You
know
you
need
a
formal
clustering
to
determine
whether
something
is
in
one
cluster
versus
another,
just
eyeballing
it
isn't
going
to
work.
A
You
can
imagine
taking
that
bivariate
case
and
exploding
it
into
this
High
dimensional
space,
and
you
can
see
that
it
doesn't
necessarily
work
to
say
that
they're
clusters,
even
if
things
look
clustered
in
a
hairball,
it's
almost
impossible
to
like
really
Define
what
nodes
are
part
of
that
cluster
and
which
art.
So
that's
why
we
use
these
Community
detection,
algorithms,
and
so
this
actually
helps
with
that
task
as
well.
A
So
it's
really
interesting
that
we
have
this
sort
of
ability
to
use
neural
networks
to
do
these
things
graph
neural
networks,
especially,
and
so
they
kind
of
go
through
this-
their
method
here
for
doing
the
graph
layouts,
but
they
also
talk
about
some
of
the
application
domains
as
well.
A
There are latents there, which you can choose from if you want alternates. This shows the initial layout, which doesn't show very much structure at all.
A
A cube. So this is a highly connected cube, but we have to find this from the initial layout. This is a force-directed graph where it doesn't necessarily show the structure; it just shows the connections, and it displays them in a way that, you know, kind of fits into this space. So it doesn't really consider any spatial information or anything else, and you can train this algorithm to make it sort of conform to the final layout here. So we can do this sort of rearrangement of the nodes in a way that's more informative.
A
This
kind
of
shows
a
similar
thing
here.
This
has
comparisons
between
two
different
methods,
I
think
FDL
and
new
way.
So,
oh.
A
B
C
A
This is where you get, you know, again, interpreting it by eye, which is a common activity in network science, sometimes, to just see where things are and see the structure, because...
B
A
We want to see things sort of intuitively, instead of relying on, like, statistics, which are actually more valid, but you know that's not the way people's minds work. So we want to be able to do this in a low-dimensional space, and you can see that this is where you're building these layouts that are highly interpretable. And this is an example of a hairball network here with some structure.
A
So
the
hairballs
are
these
like
masses
of
connections
that
are
like,
don't
really
have
any
structure,
although
this
has
structure
along
the
edges,
you
have
these
clusters
of
of
hubs
and
nodes
hubs
are
highly
connected.
Nodes
and
nodes
can
be
peripheral
to
the
hub,
but
they
can
be
in
the
same
cluster,
and
so
this
is
what
they're
showing,
and
so
this
is
the
3D
layout
of
the
internet,
generating
this
hairball,
but
you're
also
generating
some
of
the
structure
within
the
Hairball.
So
you
can
see
local
communities.
Sometimes
it
fails.
A
The
the
FDL
fails
to
identify
local
communities.
They
actually
have
a
video
on
female
that
shows
the
full
3d
representation
of
this,
so
the
2D
representation
probably
isn't
enough
to
really
see
what's
going
on
here
and
so
the
proposed
new
layout
Rhythm,
which
is
what
they've
developed,
is
they
show
that
it's
Superior
to
this
FDL,
which
is
the
old
approach,
the
force
directed
layout,
and
they
show
that
it's
Superior
to
that
I
kind
of
yeah?
Okay,
that's
about
it.
A
Currently,
the
efficiency
of
new
way
is
limited
only
by
the
computational
complexity
of
GNN,
which,
while
considerably
faster
than
FDL,
can
be
expensive
or
non-exceptionally
large
graphs.
So
this
is
something
you
have
to
consider
when
you
have
very
large
graphs,
it's
the
computational
costs
and
we
still
have
these
run
up
against
these
problems,
but
especially
with
graphs,
because
you're
dealing
with
these
huge
combinatorial
problems-
and
so
you
have
to
really
have
maybe
good
sets
of
heuristics-
is
the
way
forward
like
we
have
with
phylogenies,
but.
A
So
this
is,
they
don't
really
talk
too
much
about
the
future
directions
here,
but
they
do
give
some
of
the
Mathematica
behind
their
methods.
A
Finally,
I'd
like
to
talk
about
just
just
kind
of
briefly
discuss
this
paper:
I
mean
people
aware
of
it.
This
is
a
paper
where
they
look
at
unstructured
protein
aggregates,
and
so
we've
talked
about
Aggregates
having
a
role
in
development
and
having
a
role
in
self
assembly
and
self-organization
of
biological
structures.
A
So
there
it's
an
up-and-coming
topic
talking
about
protein
aggregates,
especially
like
in
vesicles
or
in
cell
cytoplasm,
and
so
this
is
an
interesting
area
and
in
this
case,
they're
building
Network
hamiltonian
models,
so
they're
actually
applying
networks
to
a
logical
problem
with
building
the
specific
type
of
network
and
it's
a
framework
for
sort
of
understanding
protein
protein
interactions,
topological
topologically
complex
protein
protein
interactions,
and
so
this
is
not
not
graph
neural
networks,
but
it's
an
application
where
it's
high
it's
closely
Allied
to
it.
A
So
I'm
just
going
to
go
over
the
abstract
and
then
we'll
you
know
you
know
I
can
make
a
paper
available
to
people.
Let
me
talk
about
another
piece.
This
is
from
the
general
physical
chemistry,
the
abstract
reads:
Network
hamiltonian
models
or
nhms,
or
a
framework
for
topological
course.
Grading
of
protein
protein
interactions.
A
A
So we can do this: each node corresponds to a protein, and edges are drawn between nodes representing proteins that interact non-covalently.
A
The NHMs in this study are generated from atomistic simulations of equilibrium distributions of the wild type, meaning the non-mutated genotype, and the cataract-causing variant W42R in solution. So there's a mutational variant, W42R, that's cataract-causing, and we want to know how that is distinguished from the wild type, where people without this variant... I guess eyes; I don't know if it's in humans or not. So this kind of talks about this: the network models are shown to successfully reproduce the aggregate size and structure observed in the atomistic simulations.
A
So
you're,
looking
at
these
crystalline
structures,
that
form
is,
is
cataracts
and
you
want
to
sort
of
understand
the
size
and
shape
of
them.
You
want
to
simulate
that
and
reproduce
it.
So
this
crystalline,
these
crystalline
proteins
coalesce.
They
aggregate
into
these
things
that
block
your
vision
or
that
sit
in
your
retina
and
do
bad
things
are.
D
A
They're, you know, able to successfully reproduce the aggregate size and structure that are shown in simulation, because they can't necessarily access this process in an experimental setting, so they want to simulate it. But they also want to use this network model to then provide information about the transient protein-protein interactions therein; so they just really want to understand the interactions of this network model.
A
The
system
sizes,
skill
from
the
original
375
monomers
to
a
system
of
10
000
monomers,
revealing
the
lowering
of
the
upper
tail
of
the
aggregate
size,
distribution
of
the
variant
extrapolation
to
higher
and
lower
concentrations
is
also
performed.
So
you
can
do
these
titrations,
where
you
change
the
concentration
and
see
what
the
simulation
and
the
interactions
should.
Look
like.
A
These
results
provide
an
example
of
the
utility
of
an
HMS
for
poor
screen
simulations
of
protein
systems,
as
well
as
their
ability
to
scale
large
system
sizes
and
high
concentrations,
reducing
computational
possible,
retaining
topological
information
about
the
system.
So
hamiltonians
are
these
dynamical
systems
models
and
you're
just
simply
treating
each
node
as
a
hamiltonian
that
evolves
over
time
and
you're
connecting
them
into.
A
So
that's
basically
what
they're
doing
here,
I
don't
know
if
there
are
any
good
images
here.
This
just
shows
an
example
of
the
aggregation
graph.
Where
you
have
these
monomers,
that
Aggregate
and
they're
you
know
they
form
the
structure
where
they're
interacting
in
certain
ways.
So
it
looks
like
when
they
come
together.
They
have
this
linkage
in
the
network,
so
some
of
these
are
linked
together
physically,
but
then
you
can
see
this
in
the
graph.
So
you
have
basically
this
circle.
A
The
circular
sample,
with
a
bunch
of
dots
which
are
these
average
and
then
the
Aggregates
bind
together,
and
so
then
they
form
a
link,
and
then
you
have
these
little.
You
know
sub
graphs,
which
then
maybe
connect,
and
then
you
have
a
cataract
and
so
in
the
mutant,
it's
much
more
likely
that
they
connect
and
interact
in
this
way.
A
So
we're
observing
the
graph
microstate.
We
have
this
language
we're
using
hamiltonian
in
this.
You
have
any
proposed
set
of
model
terms,
choice
of
time
interval.
We
perform
parametric
inference
using
the
full
maximum
likelihood
method
of
Vietnam,
which.
D
A
Our goal here is to reproduce the distribution of aggregate sizes, corresponding to component sizes. So each of these little subgraphs is a component, and then they link together, and there's a concept called the giant component, which is very deep in graph theory history, which describes this sort of network that spans the entirety of the system. And so the giant component is, you know, fully connected.
A
You
can
go
from
one
end
to
the
network
to
the
other
walking
across,
so
that
or
you
know,
you
can
do
a
random
walk
across
it
and
the
size
is
dependent
on
the
the
shape
of
the
network
and
the
properties.
So
that's
what
they're
trying
to
do
they're
trying
to
find
these
components
of
different
sizes?
Maybe
they'll
find
a
giant
component
if
it
spans
the
entire
network,
but
that
that's
what
they're
getting
at
they're
using
hamiltonians,
determined
H,
reflect
multi-body
interactions
is
reflected
in
the
topological
degrees
of.
B
A
Of the aggregation graph. So, yeah, and I guess people have used this before in different contexts, but...