From YouTube: DevoWorm #42: Modeling biological tensegrity networks, chemical perceptrons, Neural Operators.
Description
Modeling tensegrity networks in MATLAB/SciLab and COMSOL. From a 3-D structure to a connectivity matrix. Components of biological tensegrity and structural propagation. Neural operators and solving for reaction-diffusion morphogenesis. Attendees: Richard Gordon, Susan Crawford-Young, Jesse Parent, Bradly Alicea, and Morgan Hough
B: Yes, we were talking about tensegrities with Stephen Levin.

A: Oh good, that's great. What was the outcome of that?

B: Well, he says my "tensegrity"... what I've got is a squish. In other words, the elastics on it are not like in a normal tensegrity; there's not... hello?

A: I don't know, I just hiccuped, I guess.

B: Oh, okay. Well, anyway, yeah, I need to find some fishing line and fix my little toy so that it's more like a tensegrity. They behave differently, yeah.

A: What?

B: Yeah, it's not tense enough in between the nodes, okay, so it's a squish rather than a tensegrity. So that's easy to fix here. It's when you start pushing on it: the elastics lengthen.

B: So that was nice, because Dr. Zhang, who I'm working with (he's very intuitive about mechanics), said yeah, he wanted it as a tension structure only. And so I'm going, yeah, it's a tension structure: no compression whatsoever and no elasticity. Well, there is some elasticity in any material, but it just needs to be a material.

A: Yeah, it's great. I was reading that email that he sent to... oh, I got the email you forwarded to me, and it was from Stephen Levin. One of the things he was saying is that in a biological tensegrity structure, even the struts have some elasticity (I guess he's talking about actin or whatever), and that needs to be accounted for. It's almost like a fractal.

B: The struts would be microtubules, and he says they're kind of like an island of compression in a sea of tension.

A: I mean, I don't know; I don't have a good drawing of it or an animation. But I guess the microtubules are sort of the things that hold the structure together. Like the cytoskeleton, the membrane: it holds things together, it holds the structure in a certain conformation, and it also plays a role in some of the other structures, like the nucleus and other things. I guess whenever you need structure in a cell... I know they have...

B: They do a lot of things there (hello?), the microtubules, and in the neurons certainly, the axons... and in that, yeah, I just asked: where are the microtubules in the cell? Like, I know the actin in a normal, non-dividing cell, etc., is on the outside; it forms an outside layer. There's stuff inside too, yes, okay, but it does tend to the outside of the cell. At least I have a paper on that, and like I said, they're biologists and they took a picture of it.

B: That's what happened with the cells they were looking at. Sometimes it'll come into the center, in odd cases, perhaps when the cell is dividing or doing something else, but normally it's in this outer coat of the cell. So...

C: Okay, there are two sets of microtubules in axolotl epithelial cells. One set is along the long axis of the cells, perpendicular to the surface, and the other set is at random at the apical surface. Oh okay. The apical surface, plus the microfilament ring, plus the intermediate filament ring, are what constitute the cell state splitter.

B: They think... oh okay, yeah, so there's the microfilaments, which are actin, and then the intermediate filaments.

C: Okay, yeah. You can read it in English or Russian, yeah.

B: Me, like, reacting.

C: Okay, and then... what was it... it was Beth Burnside who discovered the two sets of microtubules.

C: Yeah, so... all right, okay, okay, all right: Chris Martin. He didn't talk to me much when I visited him.

B: Oh, he didn't?

C: No, he didn't want to talk about science. No, yeah, he's out of it, yeah. He has been... wanting. And I said, well, you know, I came basically all the way here to talk to you about this, so can we spend five minutes talking about this? I don't think [we did].

C: He was my student as an undergrad, and went on... I think he did zebrafish work... and then he ended up as a professor at the University of Ottawa. He got disgusted with that, and then he took [a post in] Grenada, where he taught 500 medical students who couldn't get into medical school in the U.S.

B: There are lots of doctors in the U.S. that have gone to Grenada, got their medical license, and then had to jump through a few hoops to get back into the U.S. But we... I've heard about it from my daughter, Nancy, like she was... and, and...

C: You know, so that's why I wrote those papers, the paper on the physician-scientists: because one of the things they do is complain that there aren't enough physician-scientists, and then reject anybody who could possibly be a scientist from getting into medical school.

A: So we talked a few weeks ago about something called morphological waves, and we had some skepticism in the group about the concept and how it's similar to differentiation waves. This idea actually goes back, I think, to discussions with Christof Teuscher. He was actually a pretty interesting person in the field of ALife, and I've gathered some papers of his on chemical computational perceptrons.

A: So let me take a look... let's take a look at some of these papers. The first paper is "Online Learning in a Chemical Perceptron," and of course perceptrons are very early neural networks. They are systems where you have a lot of inputs that are summed into a single unit, or neuron, and then there's an output; so you're taking in a lot of different inputs and producing classifications from that. Perceptrons go back to the 1960s, and they've been used as toy models a lot for different things.
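[For reference, the classic perceptron, as in standard textbooks rather than anything specific to the paper, computes a thresholded weighted sum and adjusts its weights from the classification error:

$$ y \;=\; \Theta\Big(\textstyle\sum_i w_i x_i \;-\; \theta\Big), \qquad \Delta w_i \;=\; \eta\,(d - y)\,x_i, $$

where $\Theta$ is the Heaviside step function, $\theta$ the threshold, $d$ the desired output, and $\eta$ the learning rate. The chemical versions discussed below have to realize both the summation and this update rule with reactions.]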
A: In this case, we're looking at a chemical perceptron, and so we're looking at biological learning, such as it is. Christof Teuscher is the second author on this paper and is at Portland State University. The abstract of this paper reads: "Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized." So we're interested in autonomous learning in a chemical system, in a synthetic chemical system.

A: This has a lot to do with synthetic biology and other applications like that. "Learning promotes reusability and minimizes the system design to simple input-output specification." So again, you have this perceptron, where you have multiple inputs to a single unit, which either sums the inputs or filters the inputs using some learning rule, and then gives you outputs. "In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial chemistry."

A: Artificial chemistries are actually an interesting area of ALife research, and they involve taking chemical worlds, or chemistry systems, and simulating them in silico. So you can simulate a chemistry system, say, to understand the origins of life, or to understand how to design different types of chemical and biochemical structures, things like that.

A: So being able to learn online is a very useful thing in these artificial chemistries: you know, to learn simultaneously with sort of exploring the space of the chemical system.

A: So this is something where, you know, they're trying to model this as an actual chemical system: they're using a Michaelis-Menten model, a discrete-time model, and a deterministic model. Those are the properties of their artificial chemistry.

A: "We present two models." So those are the properties at the top, and then in the second sentence they talk about the two types of perceptron they're using: the weight-loop perceptron and the weight-race perceptron, "which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions" (so these are things they throw at the perceptron to get it to classify correctly, or to test whether it can do this) "and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing."

A: So one of the criticisms of the, I guess, morphological waves was that it had sort of a mysterious component to it. There was a learning component to the reaction-diffusion equations, for example: does that occur by magic, or is there some sort of underlying... is there maybe an underlying intelligence? And even that seemed a little bit outlandish.

A: But this is sort of addressing that issue, where you have a chemical system, or a biochemical system, that has its own learning capability. So you can actually create perceptrons in natural systems using a variety of techniques; I mean, the perceptron is just an abstraction. You can use this sort of model in chemical systems. So, for example, if you have a lot of different combinations of something that go into a process, and there's a simple input-output, that would be considered a perceptron, even though it's not a perceptron or a biological neuron per se; it's that sort of structure of the problem, or the structure of the system, where things get processed. So I think this is a useful mechanism, or at least a useful model, for looking at some of these chemical or biochemical mechanisms.

A: So let's see if we can get down to the bottom here. The contributions of their model are as follows. "Our system is the first full-featured implementation of online learning in a simulated artificial chemistry, called the chemical perceptron, so learning, as well as linear integration of weights and inputs" (which is integration at that same single unit that we mentioned before) "is handled internally."

A: "The chemical perceptron learns perfectly all 14 linearly separable logic functions after 200 learning iterations," so it doesn't necessarily take that long to work. "The chemical perceptron is reusable, since it recovers its internal ready state after each processing event." And I want to get down to these weight-loop and weight-race perceptrons to give an idea, because they talk about the artificial chemistry here and their details for that, and there's a lot of kinetics in this artificial chemistry. Really, it's based on Michaelis-Menten enzyme kinetics.
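[As a reminder of what that kinetics looks like, this is the standard rate law, not a formula quoted from the paper:

$$ v \;=\; \frac{d[P]}{dt} \;=\; \frac{v_{\max}\,[S]}{K_M + [S]}, $$

where $[S]$ is the substrate concentration, $v_{\max}$ the saturating rate, and $K_M$ the substrate concentration at half-maximal rate. Catalyzed reactions in the paper's artificial chemistry follow this saturating form rather than plain mass action.]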
A: So if you know that, you can go through that. Representation, variables: so we have to define our variables in the computational system, the system inputs and outputs.

A: "The concept of actions, or action series, is an extension of the input configuration: chemical species concentrations can be modified at times other than t0. An action emulates a step in the execution of an experimental protocol, where at a certain time the person performing the chemical experiment mechanically injects or removes substances into or out of a tank." And so, you know, the tank metaphor can be used in self-organizing chemistries as well, in terms of, like, the supply of certain chemical reactants, reactive species, or whatever.

A: And so you have this rate-limiting aspect to it that you can use to understand both, sort of, you know, if you're doing chemistry experiments, or if you have these self-assembling chemistries as well. And so they kind of show some of these concentration-versus-time-step dynamics, and this is the model of the perceptron: you have these multiple inputs coming into this single unit that summarizes things according to some sort of rule or conditional, and then there's an output.

A: ...a fuel species E. So this is the weight-loop perceptron: it "computes the weighted sum directly by transforming weights W into output species Y. The problem is that the weights encode the state of the perceptron, so their concentration must be preserved. Apart from the Y species, the perceptron must also create backup copies of the weights. The perceptron can then restore its weights after the output production is over." And so, you know, this kind of works by preserving information and re-encoding it.

A: So they give a table with all these different weights in it. Then the weight-race perceptron is the second model. "The functioning of the weight-loop perceptron is based on rather conservatively designed phases working in sequence. This approach works well, since there is almost a one-to-one relation between the routines of a formal perceptron and those of a chemical perceptron. Nevertheless, the idea of a direct calculation of the weighted sum and recovering the original state seems unnecessarily cumbersome for a chemical system."

A: "The weight-race perceptron improves upon the weight-loop perceptron by switching the chemical roles of inputs and weights. That is, instead of having inputs catalyzing a transformation of weights to a weighted sum, which determines an output, weights simply catalyze the input-to-output reactions, as presented in Figure 4," which is this figure. It shows this process where, in the binary-function mode, the weights W catalyze, or compete on, the input-output reactions of X and Y; so X and W compete, and then there's an output Y. In the learning mode, which is (b), the desired-output species D0 and D1 transform to weights W if the provided value does not match the actual output. So in this case, your [desired-output] species transforms to the weights W.
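[To make the "weights catalyze the input-to-output reactions" idea concrete, here is a toy mass-action simulation. The species names, rate constants, and the use of plain mass action (instead of the paper's Michaelis-Menten kinetics) are all simplifications for illustration; this is not the paper's actual reaction network.

```python
# Toy sketch of the weight-race idea: weight species W1, W2 act as
# catalysts converting input species X1, X2 into output species Y,
# while inputs and output annihilate at a different rate.
# Rate constants and species are invented; see Banda et al. for the
# real network, which uses Michaelis-Menten rather than mass action.
import numpy as np
from scipy.integrate import solve_ivp

k_cat = 0.5   # assumed rate of W-catalyzed X -> Y conversion
k_ann = 0.1   # assumed rate of X + Y -> (nothing) annihilation

def rhs(t, s):
    x1, x2, w1, w2, y = s
    dx1 = -k_cat * w1 * x1 - k_ann * x1 * y
    dx2 = -k_cat * w2 * x2 - k_ann * x2 * y
    dy = k_cat * (w1 * x1 + w2 * x2) - k_ann * (x1 + x2) * y
    return [dx1, dx2, 0.0, 0.0, dy]   # catalysts W1, W2 are conserved

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0, 0.8, 0.3, 0.0])
print("final [Y]:", sol.y[4, -1])
```

The larger the weight concentrations, the faster the inputs "race" into output before annihilation removes them, which is the chemical stand-in for a weighted sum.]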
A: So that's this paper. The second paper is this feedforward chemical neural network, "an in silico chemical system that learns exclusive-or." So XOR is exclusive-or; this is usually used as a logic gate. And this is also... Christof Teuscher is the third author on this paper, and this also has, you know, "chemical neural network" [in the title]. So this is a similar type of system, inspired by the natural biochemistry that performs complex information processing within living cells.

A: "We design and simulate a chemically implemented feedforward neural network, which learns by a novel chemical-reaction-based analog of backpropagation. Our network is implemented in a simulated chemical system, where individual neurons are separated from each other by semipermeable cell-like membranes."

A: So this is where we have neurons that have these semipermeable membranes, which is an interesting design feature, because most neural-network neurons don't worry about this too much; they worry about mathematical functions. So this is an interesting, compartmentalized, modular design that allows a variety of network topologies to be constructed from the same building blocks.

A: So they talk about this as, you know, sort of an embodied system. "Having developed one family of individual chemical perceptrons, we wished to design a method for connecting these into a more computationally powerful network. The network should be modular and allow for different topologies to be constructed. We achieved this goal with the feedforward chemical neural network, or FCNN, which is a network of cell-like compartments, each containing a chemical neuron as a module. Communication between nodes in the network is achieved by permeation through the walls of these compartments."

A: So these compartments have walls, the walls are selectively permeable, and things pass between the walls, which allows the network's feedforward and backpropagation mechanisms to function. So this is a little bit different from the way we usually think about neural networks, which is that they have connections that have weights, and then those weights determine whether the message is passed on to the next neuron in the network. Okay: "Like standard single-layer perceptrons, each of our individual chemical perceptrons can learn linearly separable binary two-input logic functions, such as AND and OR."

A: These are the simplest logic functions, AND and OR. It's that you have a logic gate that says: [for AND], if both things are activated, [the output is true]; and in OR, if one thing or the other thing is activated, [the output is true]. So there are different rules that we can use, when a logic function is performed, to pass on information to the next unit.

A: These simple [single perceptrons], however, are "incapable of learning the linearly inseparable functions XOR and XNOR," which is exclusive-or and exclusive-nor. So this is where you have exclusivity in some of these functions; these are linearly inseparable. "Here we demonstrate that the FCNN learns each of these functions."
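[The inseparability claim is easy to check numerically. This small demo is my own illustration against the textbook perceptron, not the paper's chemistry: a single threshold unit converges on AND but never on XOR.

```python
# Self-contained check that a single perceptron separates AND but not XOR.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    w = np.zeros(X.shape[1] + 1)                # weights plus bias
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(w[0] + xi @ w[1:] > 0)   # threshold unit
            w[0] += lr * (target - pred)        # bias update
            w[1:] += lr * (target - pred) * xi  # weight update
    preds = [int(w[0] + xi @ w[1:] > 0) for xi in X]
    return np.array_equal(preds, y)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print("AND learned:", train_perceptron(X, np.array([0, 0, 0, 1])))  # True
print("XOR learned:", train_perceptron(X, np.array([0, 1, 1, 0])))  # False
```

No line can separate XOR's positive cases from its negative ones in the input plane, so learning XOR requires a hidden layer, which is exactly what the FCNN supplies chemically.]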
A: "To our knowledge, it is the first chemical system able to learn a linearly inseparable function." So this is kind of a breakthrough for these kinds of networks, actually for neural networks in general, in that you have these linearly inseparable functions that can be performed in one step. And so, I don't know if there's a figure here, but this is the chemical reaction network; again, this is, you know, sort of a rehash of what we saw in the other paper, these cell-like compartments.

A: This is a major motivation behind the modern study of protocells, which we talked about last week, or a couple of weeks ago, as being something that people use to study minimal cells and the origins of life. And it's actually something they do in artificial life a lot: they work with protocells, which are these vesicles that you can load up with different things. You can boot them up with biochemicals or other proteins, and you can get them to do things, behave like minimal cells, with enough... you know, enough stuff in them to get them to do things.

A: And "the analogy of the cell as a self-contained, self-sustaining regulatory machine is a familiar one, and the most basic requirement for such a machine is compartmentalization." So compartmentalization is actually interesting, because it enables a lot of things. It enables local information processing. It enables, by extension, what we call modularity, or the specialization of functions in a certain place: it's bounded by something like a boundary, or a membrane, and it allows for higher-order things, like redundancies and other things where you have multiple copies of something, so that if one thing is damaged, or one thing goes offline, the other things can take over. And you can only get these sorts of properties with compartmentalization, because you need to have specialization: you need to have individual units, instead of having things as kind of a generic system. And we don't see a lot of that in neural networks. You see some, like... we see layers of neural networks, and we see some specialization of different layers, but people don't really talk too much about modularity.

A: So this is an interesting way to approach this problem. A few things here: they talk about their networks as modular and having these compartments, but that also implies that they have boundaries, and so the membrane boundaries are permeable; they have to be. And, of course, actual cells are permeable: we have ion channels, we have gap junctions, we have other types of passageways in and out of the cell, and these things are specialized for different functions. So you can think about, like, an ion channel: it's very selective for certain ions to pass through, and it has a function. So this is something that can be highly specialized for a certain thing that you want to do. And, of course, in neural networks we don't really talk too much about... we don't talk about ion channels very much. When we model real neurons, we do, and this is kind of what they're trying to replicate in this work.

A: So they modeled this mathematically, and you can see the math here. The chemical neuron they chose as the building block for FCNNs is the analog asymmetric signal perceptron, or AASP, and two new variants of the AASP are used: one for the hidden neurons and one for the output neuron. As a reference throughout the section, Table 1 of the article lists the chemical species in the FCNN. So this is in the supplemental materials; this is an Artificial Life article, so it's at the Artificial Life journal website. So they go on, they talk... this is the table here, with the inputs, outputs, weights, and the different chemical species that are associated with these things, and they kind of go through these things: inputs, input-weight integration, and they talk about, basically, the mathematical mapping of these things.

A: So this is a map of the reactions in the input-weight integration step in a two-input AASP. You can see that the dotted arrows denote a catalyst (so this is a catalyst here, for example, or this one), and then this signifies a reaction in the network between input X1 and Y. Each weight catalyzes a reaction turning its associated input into Y, so everything gets turned into Y, while the input and Y simultaneously annihilate each other at a different rate. So...
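[Written out as reactions, with generic rate constants in my own notation (not necessarily the paper's), that description is roughly:

$$ X_i \xrightarrow{\;W_i\;} Y \qquad\text{and}\qquad X_i + Y \xrightarrow{\;k_a\;} \varnothing, $$

so the output species builds up as $\dot{[Y]} \approx \sum_i k_c\,[W_i]\,[X_i] \,-\, k_a \sum_i [X_i]\,[Y]$: a weighted sum realized through competition between catalyzed production and annihilation.]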
A: ...that is where you get these sorts of dynamics, these kinetic dynamics, and you can classify it, or you can characterize it, in a network. So then they get to the learning part, and they go through some of this. They have these AASP networks, and they show kind of how these work: positive weight adaptation, negative weight adaptation, and an output comparison. So those are the different types of learning that they propose, and then, finally, we can talk about what these are useful for, which is that you have these FCNNs and they have a tree-like topology. So you can take a tree of these compartments; you can take these compartments in, say, a cell or a structure, and you can map them to a tree, so you can show how these things sort of operate, or maybe even evolve. And so this is a nice way of representing this. Okay, so thank you for listening to that.

A: I just checked... I got a check-in from Milan Samuel, and he doesn't have anything to report. But I hear that there's an email thread between Dick and Alan and Thomas.

C: You know, I think Thomas has finally gotten my point. It's based on a very bad experience, which Susan might remember: I bought the first Apple camera. It was separate from the computer, okay, so you plugged it into the computer, and we took axolotl eggs and dropped them in a tube, and after the initial transient they should fall very smoothly.

C: So I took movies of this, and the movies were completely jittery. I mean, the time interval between frames was very, very bad. Okay, so based on that, I don't trust digital cameras, digital movie cameras, okay. And I think that Thomas has finally got my point, and we'll try to do an experiment where the motion of something under the microscope slide approximates the size of the diatom and does not undergo stop-and-go friction with the slide.

B: The other thing is that, beyond a certain limit in size (something that's about, say, a quarter of an axolotl egg, whatever that is), beyond that limit the Stokes drops-through-water experiment doesn't work, because they float rather than... oh yeah.
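[The drop experiment being described relies on Stokes settling. For a small sphere at low Reynolds number, the terminal velocity is (standard fluid mechanics, not a formula quoted in the discussion):

$$ v_t \;=\; \frac{2}{9}\,\frac{(\rho_p - \rho_f)\,g\,r^2}{\mu}, $$

where $r$ is the sphere radius, $\rho_p$ and $\rho_f$ the particle and fluid densities, and $\mu$ the dynamic viscosity. If $\rho_p < \rho_f$ the sign flips and the object floats, which matches the failure mode described; for larger objects the low-Reynolds-number assumption itself also breaks down.]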
B: I was going to do drop experiments with the axolotl eggs, that I was doing in the sucrose solution, yeah.

C: The guy who was a high-school teacher, who spent a year with us. Yes... what was his name? Ross? Flint, Flint, Flint? What was his first name? Russ. Russ Flint. Oh okay, yeah. What we did is we hard-boiled an axolotl egg. Okay, you can do it just by cooking it a little bit, and then we sliced it vertically.

C: And then we let the slices fall in a sucrose gradient, and they separated very nicely. So there's a gradient of density, top to bottom, highest at the bottom, inside an axolotl egg that's been hard-boiled.

B: And what I did was, I kind of exploded the eggs... well, not exploded; I put them through a... it looks like sort of a little mini guitar, a tiny little thing, that broke the eggs up into the sucrose solution, and I had some on the bottom, and then, with the graduation, in my sucrose gradient, and they were...

B: And then axolotls do an interesting thing: they float on top of your sucrose gradient... yes, just when the blastopore becomes its largest. Oh, we did it... they become less dense than the gradient.

B: I don't know if this is a known phenomenon. They... when the blastocoel is finishing growing, yeah.

B: They actually do this, and then they do that: it's going over inside, and then they fall. Then they become... it's interesting: at this point, when they have risen, that's when I want to get out the optical coherence tomography and take an image of the blastocoel. I want to do this experiment, but, like I said, I have to wait until I've finished my other stuff for my PhD. But he said I could do that; he said I could do this at the end. I go, okay, it's a promise! This is the carrot.

B: My husband put it together, because I don't have that kind of patience, some days.

C: But I mean... I think his ideas are good. So, despite his... okay.

A: Yeah, so I wanted to revisit something I talked about last week, which was that book, and I was looking at how you might simulate tensegrity networks, and so...

C: Especially [notable] is that he doesn't use elastics for the string component between the rods; he uses fish line.

A: Oh.

C: Which is very rigid, okay, and he thinks that's an essential part of it.

A: Okay, yeah. So squeezing one part would be, like... what would that entail? Well...

A: Yeah, yeah, it's hard to find references on different things, especially about networks. There isn't a lot, yeah.

B: There is a paper that was recently written about microfilaments being a tensegrity structure. So I'm going to turn off my camera; things are going too slow. Okay, yeah, see if that helps.

A: So this is the book that Susan sent to me, and this is... let's see if it comes up... Computational Modeling of Tensegrity Structures, and this is where I got a lot of the MATLAB code for this example. Oh okay, so in the book, I think it's chapter two, they lay out MATLAB code for simulating a tensegrity network, and it's like: basically, you define the network, you define the nodes, you define the edges, and you basically build a matrix. And then, the way they have it set up, that matrix [has entries of] negative one, zero, and one. Those are the three states, and then it's, like, either there's neutral tension, there's positive tension, or negative tension. They define in the book what those are.

C: Oh, they don't use struts and strings?

A: Well, it's basically the same thing. So the rods would be the... sort of the nodes, I guess: the rods would be the nodes, and then the strings would be the edges. So it's just a very generic structure.

A: Okay, so it's like you have a header with the notation of these.

C: So do they assume linear springs between the nodes, that have the... what should we call it, the elastic component? Yeah.

A: You have this a_ij, which is a connectivity matrix, where you have, yeah, i and j, and then you have three states: plus one, zero, and negative one. And you can make different assumptions about the connectivity, like whether the connections are linear or not, but you end up with these three states of force, and then those states are...
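[A common convention for such a matrix (my reading of the description, since the book's exact definition isn't quoted here) is a signed connectivity, or incidence, matrix over m members and n nodes:

$$ C_{kj} = \begin{cases} +1 & \text{member } k \text{ starts at node } j,\\ -1 & \text{member } k \text{ ends at node } j,\\ 0 & \text{otherwise,} \end{cases} \qquad C \in \mathbb{R}^{m \times n}, $$

so that with nodal coordinate vectors $x, y, z \in \mathbb{R}^n$, the products $Cx$, $Cy$, $Cz$ give the member direction components. The sign of a member's force then distinguishes cables (tension) from struts (compression), matching the three states described.]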
A: And so that's the way they set it up, and then you can do different matrix operations. In the book they talk about doing, like, support vector machines and finding eigenvalues, and things like that: different matrix-algebra things that you can do with these matrices.

A: Because... this is the code here; let me bring it up. In MATLAB they post an m-file, which is a function. I don't have it set up as a function here, but I have the code. So you have three dimensions, these are your inputs, x, y, z, and then this is your matrix C.

A: So these are just the values of these states, so this is, I guess, your set of, you know, your three things, and, actually, I think these are your spatial coordinates of the different... of the nodes. And then these are the force... this is the force vector; these are the force vectors as a matrix. And then you work through this, you do some matrix algebra, to get down to the bottom here.
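[Here is a rough NumPy translation of the workflow being described: coordinates, a signed connectivity matrix, an equilibrium matrix, and the rank check discussed below. It is a sketch of the general method, not the book's actual m-file, and the matrix conventions are the ones assumed above.

```python
# Rough NumPy sketch of the workflow described above: node coordinates,
# a signed connectivity matrix, an equilibrium matrix, and a rank check.
# This is NOT the book's m-file; the conventions are assumed.
import numpy as np

# Nodal coordinates of a toy 4-node structure (columns: x, y, z).
N = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Members as (start node, end node); struts and cables treated alike here.
members = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

# Signed connectivity matrix: +1 at the start node, -1 at the end node.
C = np.zeros((len(members), len(N)))
for k, (i, j) in enumerate(members):
    C[k, i], C[k, j] = 1.0, -1.0

# Equilibrium matrix A: columns map member force densities to nodal forces.
diffs = C @ N                                   # member direction vectors
A = np.vstack([C.T @ np.diag(diffs[:, d]) for d in range(3)])

r = np.linalg.matrix_rank(A)
print("rank of equilibrium matrix:", r)
# Null-space dimension = number of independent self-stress states;
# comparing the rank against a threshold is the stability heuristic
# mentioned in the discussion below.
print("self-stress states:", len(members) - r)
```
]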
B: Okay, okay, well, I have that finite-element-analysis paper that maybe I should send you, or maybe I did, I don't know. I did, yeah... did I ever send you, what's his name... did I ever send you "Equivalent mechanical properties of tensegrity truss structures with self-stress"?

A: I think it's just, like... they build, like, a standard tensegrity structure, and then they just put in numbers, like, you know, a physical toy model like Susan has. But, you know, the idea would be that you would be able to characterize any structure like that and evaluate it.

A: Right, right... you need to generate them; they need to figure out, like, you know, how it behaves physically. Well, I guess that's something to be doing in, like, a physics simulator, like COMSOL or something, where you can actually evaluate: if something is in a position in a three-dimensional space, how does it behave? Yeah, yeah.

B: The... this paper that I just... well, I put the title of it in the chat, and I can send it to you too. Oh... I have another paper, about a material for wearable electronics, that has a J-curve, like Stephen Levin says that all tensegrity structures have, and they've taken a close-up look at this material, and it looks sort of tensegrity-like, okay. So I have that paper too. So maybe I should send you those two papers, yeah.

B: So people have been playing around with this, with models, for a while. I should just... if you want any of these...

C: Yeah, well, okay. I think you're going to end up having to write it, so... and then have him critique it and fix [it].

A: [I] think it's very similar, it's very similar, and then this is where you get the rank for the different components of the matrix. You get a graph like this. This isn't very informative; it just gives you, like, a rank of... yeah.

A: I think if the rank is above a certain value, it gives you, like... basically, what they're trying to do with this code is evaluate the state of the structure, whether it's stable or not. And so I think, at certain levels of the... like, if the rank is above a certain number it's stable, and if it's below a certain number it's not.

A: Basically, that's... I mean, that's what I got out of it, and I think that would be the most heuristic [approach], but I think there's probably a better way to figure out... you know, a summary statistic for the structure. That's basically what this is, but yeah, yeah. So that was the... so that's one option for computing this stuff, and there are probably, maybe, better options, or things that could be added to this, as we, you know, would want.

A: So this is the paper Susan was talking about; I think this is the truss structures one, yeah. So, again, you can make it a little bit bigger.

B: A truss is like a building, a high-rise building, and you're just looking at... you know what, can I share my screen? Yeah, yeah.

B: Yeah, and then... thank you... okay, so there's the truss that's being pushed around there. Can you see it? Okay, hard parts. There are, like, bars and strings in this; this is a tensegrity structure, yeah. And a truss, it just, well, has bars and strings; it's not necessarily a tensegrity. Like, you can bolt the hard parts together and then have strings hold up the sides, like a flagpole, and that's a truss structure.

B: [They] put a wave through it, and then they check to see... see if both of these different types of modeling matched, and they sort of did, the TSG-FEM and ANSYS, okay. And I'm not using ANSYS, I mean, [I'm using] COMSOL, but ANSYS and COMSOL are, well, similar. It's just that COMSOL is multiphysics, so you can use... I could put light through it; I could simulate putting light through the structure, and that's why I chose it. But I almost want to use ANSYS, because Dr. Zhang knows it, and I get hung up with the darndest things in COMSOL. Yeah, anyway.

B: Sorry, there's... there's the one with the water in it. I just saw that there, see.

B: Yeah, yeah. Well, you can hardly see the oscillations on the left side, without the water, and then, for that, fluid... and then the right side is with fluid, and you can see the oscillations in the different directions. And they're just looking at one node on the tensegrity structure, and...

C: Of course, then they assigned a mass to the edges, or to the nodes.

B: A mass, [on] a truss structure.

C: Okay, yeah. This is looking like a proper book. Taking into account these things, and extending them, and incorporating Steve's ideas, is probably...

B: Yeah, true. And I'm going with the engineers on this: we really just want something that mimics part of the behavior of the cells. To build the material that covers part of the J-curve that he's talking about, that is, the stress-strain curve, that covers part of it, that's good. I'll get another material for the other part of it, thicker, and a third one for the top part, or something, so then I can cover the whole J-curve, stress-strain curve, and then we can improve optical elastography. Not...

B: [There are] structures making up the rods and strings, and then the rods and strings have their own tensegrity structure. So how do you do this? Well, [they're] not going to be very pleased, because they want to do inverse problems, and it has to be linear.

C: Okay, so getting the basic ideas out of it: it would be best [done] by questioning him, yeah.

A: Yeah, that would be good. I'll take a look at it after the meeting and see.

A: We have the... well, Susan shared that paper; see if I can share my screen here. Okay, so this is the paper she shared: "Equivalent mechanical properties of tensegrity truss structures with self-stress included." So this is the... let's see if there are any figures in this paper. Well, these are the major... well, these are the block matrices here. Where did they define...?

B: N_G is similar, except it's got a complicated component, you know, an imaginary component. It's something you find through oscillating the material.

B: Yeah, so it's quite obvious.

A: A lot of this stuff, like, if you're simulating it, gets pretty deep into matrix algebra and matrix manipulation. So: "the mechanical properties of five typical tensegrity modules, inscribed in the cube of edge length a, are determined and evaluated using the proposed continuum model." So they have things like four-strut simplexes, three-strut simplexes, an octahedron, a tetrahedron, and an X-module; so these [are] a series of different structures to do this, and then they just... they analyze structures consisting of cables and struts, again.

A: So these are not biological structures; they're just generic structures that form this network. Then they have this parameter k, which is EA-cable over EA-strut, where E is the Young's modulus (which is defined by, well, it's a... yeah) and A is the cross-section. And then the self-equilibrated system of forces is represented by this parameter s over EA, where s is a free multiplier and EA is that quantity there. So this is...
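[Writing that out (the grouping of symbols is my reading of the talk-through, so treat it as approximate):

$$ k \;=\; \frac{(EA)_{\text{cable}}}{(EA)_{\text{strut}}}, \qquad \bar{s} \;=\; \frac{s}{(EA)_{\text{strut}}}, $$

where $E$ is the Young's modulus, $A$ the cross-sectional area (so $EA$ is an axial stiffness), and $s$ a free multiplier scaling the self-stress state.]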
A: These are the matrices that they derived from this, and they do the calculations. And then this is what they look like, these four-strut simplex modules. So they have this structure here, and again they have this x, y, z and these coordinates. They have the forces acting upon the nodes, and then, you know, those are transmitted throughout the structure, and then you do the calculations and you can figure out whether it's stable or not.

B: Yeah, I like the stability analysis in this. That's something that Dr. Sharif, who's the electrical engineer in the party, he was very interested in where the thing was stable and not stable. And G is the shear modulus, okay.

A: Yeah, and I think you sent me two other ones. There was this paper on Turing and wave instabilities in hyperbolic reaction-diffusion systems (this is... I don't know if I did; maybe I added these in) and "Learning the stress-strain fields in digital composites using Fourier neural operator." So...

A: Yeah, I don't know... basically, they're analyzing materials for their, sort of, properties and performance using these neural operators, and this is a Fourier neural operator. I don't know what a neural oper... I've never heard of a neural operator, actually.

D: Yeah, instead of using, you know, like, a finite-element solver or something like that.

A: Oh.

B: Maybe I want to do that with my simulations, just because the department I'm with would love that. Oh...

D: Yeah, yeah, this is... this is a really hot area of, you know, current deep learning: to solve ODEs [and] PDEs. Well, so, some of these are PINNs (sorry, my daughter's playing with a machine here), physics-informed neural nets, and other, you know... yeah, these neural operators.

A: Yeah, yeah, so they're, yeah... they're learning the mappings between finite-dimensional Euclidean spaces, and they're predicting these outputs for... yeah, they're solving equations, basically. And then there's a Fourier integral operator, which does a similar thing; it's doing, I guess, a Fourier analysis. Yeah, okay, so this is... yeah. And then this shows you, kind of, the diagram of the system, where you have an input; you put it into this inner operator, and you go to [the] output.
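[As a concrete picture of the "Fourier" part: the core FNO layer transforms the input field to Fourier space, keeps only the lowest modes, multiplies them by learned complex weights, and transforms back. The sketch below is my own minimal 1-D illustration with untrained random weights; the actual architecture (Li et al.'s FNO, as used in the digital-composites paper) adds channel lifting and projection, a pointwise linear path, and nonlinearities.

```python
# Bare-bones sketch of one Fourier-neural-operator layer in 1-D:
# FFT the input field, keep the lowest modes, multiply by learned
# complex weights, inverse FFT. Real FNOs (Li et al. 2021) add a
# pointwise linear term, channel lifting, and a nonlinearity.
import numpy as np

def fourier_layer(u, R):
    """u: real field sampled on n grid points; R: complex weights
    for the k_max lowest Fourier modes (random here, i.e. untrained)."""
    k_max = R.shape[0]
    u_hat = np.fft.rfft(u)                 # spectrum of the input field
    out_hat = np.zeros_like(u_hat)
    out_hat[:k_max] = R * u_hat[:k_max]    # learned spectral multiplier
    return np.fft.irfft(out_hat, n=u.size)

rng = np.random.default_rng(0)
n, k_max = 128, 16
u = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))  # toy input
R = rng.normal(size=k_max) + 1j * rng.normal(size=k_max)      # untrained
v = fourier_layer(u, R)
print(v.shape)   # (128,): an output field on the same grid
```

Because the weights act on Fourier modes rather than grid points, the trained layer can be evaluated on a finer grid than it was trained on, which is part of the appeal over a fixed finite-element mesh.]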
A: Yeah, so that's... that's something; that's an option too. Then, of course, it relates to the third paper here. Of course, some of this relates to pattern formation and some of these observations of wave instabilities.

A: Oh, it's just, I guess, a type of equation that they use to model reaction-diffusion. So this is a hyperbolic reaction-diffusion system. So: standard reaction-diffusion systems are at best approximations of the mean-field behavior of particles.

A: So an example [of their shortcomings] is the infinite propagation speed of disturbances in these parabolic systems. So they're looking at, like, reaction-diffusion systems, but they're looking at disturbances within them as they're operating. So: "one phenomenological way of addressing the issue with propagation speed is to consider a hyperbolic reaction-diffusion system, which can be shown to admit both finite speeds of propagation and behavior compatible with the standard case in the singular limit, at least for dissipative dynamics."
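[The usual single-species form of such a system, as the generic textbook version rather than the paper's exact multi-species equations, is:

$$ \tau\,\frac{\partial^2 u}{\partial t^2} \;+\; \frac{\partial u}{\partial t} \;=\; D\,\nabla^2 u \;+\; f(u), $$

where the inertial time $\tau$ gives disturbances a finite propagation speed on the order of $\sqrt{D/\tau}$. Letting $\tau \to 0$ (the singular limit mentioned) recovers the standard parabolic equation $\partial u/\partial t = D\,\nabla^2 u + f(u)$, with its unphysical infinite-speed propagation.]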
A: So this is hardcore dynamical systems: looking at reaction-diffusion as a process, but then looking at how it sort of can be perturbed, or showing [in]stabilities in it.

B: Yeah, okay, all right. I just... I need to know about, kind of, the broad breadth of how to solve for these things, and I'll get myself into trouble when [it's] being done, yeah, and going, no, I didn't want to use that system because of... yeah.

B: Besides, my niece might know something about some of the neural nets, because she was... she was looking at quasars and... no, nebulae, that's it. She was trying to figure out the shape of a nebula from just one direction. So she... you only have an image of a nebula from one direction, because that's where we are in the universe, and she was trying to figure out their 3D shape from just looking at it from one direction. Okay, oh boy. So that's... that was her PhD thesis, okay.

A: Well, that was great, yeah. I think there are some things to follow up on here, and, you know, we'll... I guess we'll continue the discussion. This is a pretty interesting area; glad we've kind of gotten back into the tensegrity area. So...