From YouTube: DevoWorm (2021, Meeting 9): DL for Cell Division and Diatom Colonies, Cellular Automata, Microscopy
Description
Deep Learning for Cell Division (ResNet) and Diatom Colonies (YOLO). Custom Cellular Automata for Pattern Formation, Cellular Reprogramming, and Autophagy. Ball microscopy, GSoC Onboarding, and Zebrafish goodies. Attendees: R Tharun Gowda, Assaf Wodeslavsky, Aayush Kumar, Yash Vadi, Krishna Katyal, Bradly Alicea, Mayukh Deb, Mainak Deb, Akshay Nair, Ujjwal Singh, and Shruti Raj Vansh Singh
A: Oh, I don't think she's joining today. She said something about her computer needing to be fixed, so I don't know, but I can probably answer some of the questions too. I have a picture of what she was talking about last week. So yeah, hello, Mainak and Aayush.
A: Hello, you're muted. I think your audio isn't working somehow.
A: Okay, welcome to the meeting, everyone. We'll probably have a couple more people come in; we always do. There's someone right now, that's Akshay!
A: Okay, so first, introductions, and then we'll have some other things: I have some presentations and some papers, and we'll go over our deadlines. If you're watching on YouTube you'll be able to follow our catch-up as we go along here. Hello, Mayukh. So why don't we have some of the new people introduce themselves first? I think Tharun introduced himself a couple weeks ago, and Akshay and Aayush, I don't know if they've introduced themselves yet.
C: I'm currently doing my undergraduate studies in computer science and engineering. I'm deeply interested in working with open source.

C: KDE, it's actually application-oriented software, not totally.
A: Thank you for going over that again. And next is... hello.
C: ...and [unclear] segmentation, and then after that I worked on the volumetric segmentation of muscles. So basically what we did was develop a deep learning segmentation model, a 3D model, for carrying out body wall muscle segmentation, and after that we incorporated the deep learning model into a [unclear].
C: So basically I worked on that, and once the 3D model was developed we delivered that product to NCBS, the National Centre for Biological Sciences, which is based in Bangalore. Since then that work is over, and I'm currently working to improve the segmentation further by using coordinate convolutions as well as adaptive anti-aliasing.
A: There's a lot here, so why don't we start with the presentations. Tharun, if you want to go first, maybe, and then we'll go to Mainak. I think mine are still there.
D: So I'll be talking about diatom tracking using object detection. The main motivation here is to study the non-neural cognition of diatoms. That is, given an input such as the intensity of light, the concentration of CO2, and the nutrient concentrations, we want to study the responses of these diatoms. As movement is the most apparent response, it will be much easier to study, so we need to track the objects across multiple frames, finding their relative position with respect to other diatoms, and their velocity.
D: So one approach could be using OpenCV methods such as thresholding and masking to find the positions, but these won't generalize well, as the diatom images might change in light illumination and resolution. Since neural networks generalize well across multiple domains, we'll be using a neural-network-based detection.
D: So the most naive approach to detect objects would be to use a sliding window, where we just take part of an image and run a CNN classifier over it to classify whether there's an object or not, and another neural network to predict the coordinates and the width and height. But the immediate drawback is that we need to do multiple passes over the image to get all the objects, and when we pass a part of the image through the CNN it is possible that the whole object is not present.
D: So here you can see it's a 3x3 grid, and the model assigns each grid cell to an object whose center lies inside the cell. Each grid cell makes a prediction of the probability of an object being inside the cell, the coordinates of the center of the object, the width and height, and the class label. So, for example, in this cell, if you're trying to find whether there is a car or not, there's not, so the probability will be zero, and we won't care about the other predictions.
D: But in this cell, as you can see, there is a car, but it is placed across both of these cells. The center of the car lies in the rightmost cell, so that cell will be the one making the predictions of the center of the object, the width and height, and the class.
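The grid-cell assignment just described can be sketched in a few lines. This assumes normalized coordinates in [0, 1) and the 3x3 grid from the example; a real YOLO head also predicts per-box confidences and uses anchor boxes, which are omitted here.

```python
def owning_cell(cx, cy, grid=3):
    """Which cell of a grid x grid layout owns an object whose center
    is at normalized image coordinates (cx, cy) in [0, 1)."""
    return int(cy * grid), int(cx * grid)  # (row, col)

def decode_cell(row, col, tx, ty, tw, th, grid=3):
    """Turn one cell's prediction (center offsets tx, ty within the cell,
    width/height tw, th as fractions of the image) back into image coords."""
    cx = (col + tx) / grid
    cy = (row + ty) / grid
    return cx, cy, tw, th

# A car centered at (0.9, 0.5): its center falls in the rightmost middle
# cell, so only that cell is responsible for predicting it.
print(owning_cell(0.9, 0.5))  # (1, 2)
print(decode_cell(1, 2, 0.7, 0.5, 0.2, 0.1))
```

The cell-relative encoding is what lets every cell predict in parallel in a single forward pass, which is the speed advantage discussed next.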
D: So this is one of the main reasons that we chose YOLO for object detection: it is fast, so it can make real-time predictions, around 45 frames per second, or more for the faster variants, depending on the model. And YOLO is trained end to end, so the parameters are shared between the classifier and the bounding box predictor.
D: So it also has much higher accuracy than a pipeline of two different models which separately predict the classification and the bounding boxes. These are some of the predictions from the model which we trained on the diatom object detection task.
D: And certain drawbacks that we can observe in these images are: it fails to detect objects which are very close to each other, because each grid cell can only predict a certain fixed number of bounding boxes, so sometimes it is possible that two or three diatoms get predicted together into one single bounding box.
D: You can see this at the bottom right of the image. And the preprocessing for preparing the dataset also becomes much more difficult, as these object detection models require bounding boxes to have their sides parallel to the x and y axes.
D: So we have to rotate the object in such a way that the bounding boxes we predict are always parallel to the x and y axes, and this becomes much more time-consuming, and we can't automate it, as the images need to be manually rotated. And if you have a bunch of diatoms which form a structure and are bent, it becomes nearly impossible to rotate the image in such a way that the bounding boxes are axis-aligned.
D: So we cannot detect all the objects in a single pass. What we're planning to do is modify the algorithm in such a way that we can increase the number of grid cells, so that multiple bounding boxes can be predicted, and also to increase the dataset size so that it can generalize much better across multiple images. And to overcome the preprocessing problem, we could rotate the images through all possible angles, predict the bounding boxes, and stitch those predictions together to get a single bounding box per object.
D: And one more model which we can try is instance segmentation, where the model makes pixel-wise predictions, so it predicts the class of each pixel. But preparing the dataset for this becomes much more difficult and time-consuming.
G: Yeah, I get it. So if you rotate it, then the bounding box does not actually cover the thing itself; it becomes a diagonal sort of thing. So I think what you can do instead, like you talked about this, I think what I had...
H: So basically, we have explored YOLO, including the bounding-box approaches, but ultimately we have to go for semantic segmentation; or instance segmentation is also not going to work for us, we have to opt for semantic segmentation. And the thing is manually annotating the dataset, so basically if you are making...
H: Hello? You can see, like, bounding boxing is not going to... it's generating results, but we have...
H: It is not giving us very good results. So what you have to do, and it is a tedious task, I know, but you have to label images using different labeling tools. I know there are those labeling tools; you still have to rotate the image, because they also support only axis-aligned boxes.
A: Yeah, that sounds good. Thank you Ujjwal, and Mainak and Tharun, for presenting. I don't know if I have any comments; I mean, it looks good. I was going to maybe raise some of the comments that other people were raising, but I think we got that taken care of. So we got some people: Josh is a new person in the Slack, he's here.
A: Welcome. Assaf is here as well, and Shruti is here. So now it's time for Mainak to present the thing he wanted to show. Can you share your screen?
G: So this is a Jurkat cell. As you can see, the bottom is clearer. This is basically an immortalized line of human T lymphocyte cells, and these cells are used to study acute T-cell leukemia and T-cell signaling. These cells undergo a cell cycle which has three stages: the first stage is the G1 stage, then the S stage, and then the G2 stage. So these processes, right, this cycle...
G: And the sequence in which it undergoes these states: the model itself came up with the conclusion that the S phase comes in the middle of the G1 and G2 phases, and that in itself is pretty spectacular, actually. It could also give us functionalities like this, from the training [unclear], having the ability to know the stage in the process just by...
A: Thank you very much, that was good. Well, actually, could you bring the slides back up? Just re-share your screen, yeah, because I wanted to make maybe some points about it. So, I like the UMAP analysis; he showed me that in the Slack a couple days ago. This is the UMAP, and of course we talked about how UMAP works in a prior meeting.
A: So I refer you back to that presentation, and they have a new version of it out now, so if you're interested in methods it's a good place to go. But this is basically a dimensionality reduction of your different cells and their different phases, and how they're sort of related to one another. So they overlap.
A: So the basic logic here is that if they're separable groups, they'll form clusters, and if they're not necessarily separable, they'll blend together. And this is sort of because this process is, you know, the same cell; it's just going through these stages, and you can identify it using the phenotype.
G: Actually, it's not really significant about this chart. What we can improve from this, as a rule of thumb, is that if there are two different points, these two different points actually refer to different images. So that's the basic thing, but this chart really has no...
A: But this is basically the important part: that they blend together. So, I mean, when you look at any one image of cells, unless you synchronize the cells in the culture, wherever you're looking at them they usually have cells in many different stages across the image. So one image would be like one snapshot in time of a cell culture, and those cells...
G: Yeah, so what I want to add to that is, see, the labeling procedure that they followed, they have basically assumed that under each label every image is homogeneous. But after training the model, the model itself comes to the conclusion...
A: Yeah, and you know, it depends. Now, one thing you didn't do is tell people what your Jurkat cells were. I don't know if you know what the, I mean, sometimes they give names to cell lines that are kind of... I can't remember; I looked it up before, and it's some sort of model...
A: Bring that up, yeah: an immortalized line of human T lymphocyte cells that are used to study acute T-cell leukemia and T-cell signaling. Yes, so this is a model cell type; it's something easy to culture. And so these cells are going to be, you know...
A: Different cells have different growth rates, different phenotypes, and so I think it's good that we have these model cell lines, because we kind of know how they behave. And actually, if you're doing something like this, it's good, because you can maybe build a model where you can characterize what the cell looks like, what things should look like in the cell. See, like here: you have the G2, the S, and the G1 classes, and they all have very distinct phenotypes.
A: Sometimes some cells don't necessarily have that, so I think that was a good choice. Yeah, did we have any other questions for Mainak, or anyone?
A: Okay, yeah, I mean, if people have feedback for Mainak we can talk about it on Slack. But yeah, that's an excellent presentation, and Tharun, excellent presentation. I know you guys have been working on this, and this is great stuff. So why don't we follow up on this in the coming weeks and see how it goes? I wanted to start going into some of the things I had prepared now, and there are a bunch of things.
A: I know Susan isn't here today; she had some computer problems, but she sent me some images, or an image, of her ball microscope. Okay, so we actually have something in the chat here; I didn't look in the chat. Mayukh says the neural network learned the continuous representation order without the user explicitly feeding it into the model. Interesting. So yeah, this was where we did, it's...
A: It's sort of this self-training, yeah, self-supervised type of thing going on here. So yeah, we'll follow up on this. So, hello, Krishna.
I: So, see you in the next week. So how many of you are here for GSoC? So here...
I: Okay, so I hope you all got the Google Doc and the slides for GSoC. Bradley, did you share them, the onboarding files?
A: Actually, I'm going to talk about that in a minute, yeah. We have some things to organize and send out, so I'll be sending those things out this week. So let me see if I can pull them up here. All right, so this is the onboarding. Right now we have two onboarding documents, and I'll make these more prominent in the Slack.
A: The first is this thing that Krishna put together, and this is on open source, sort of the open-source motivations of our group and of Summer of Code. So this is a short PowerPoint presentation, and it kind of goes through what open source is.
A: You know, you can come to the meetings, or you can do a project proposal where you can get selected and do a project, but there are other ways to contribute to DevoLearn and DevoWorm. And I'm going to go over these slides and edit them a little bit before we put them up.
A: The other thing, of course, is the thing that Mayukh sent me, which is specifically about the projects. So we have the project descriptions up on Neurostars and on GitHub, but we also have a little bit of "what can I do before GSoC?"
A: So, you know, GSoC has roughly this application period, which we have in the slide, so you have the deadlines there. But there are things you can do before you write your proposal. So you're going to be writing a proposal for GSoC; what can you do before then? And he has these things listed here which you can do before...
A
Gsoc
starts
for
each
project,
that's
very
nice
and
then
the
skills
and
requirements
here
that
you
might
want
to
highlight
in
your
proposal
or,
if
you
really,
this
is
something
that
you
really
specialize
in.
You
might
want
to
make
sure
everyone
knows
that
or
get
you
know,
ramped
up
on
it
before
g-suck
and
so
there
you
know
this
is
I'm
gonna
make
these
available
here,
there's
also
a
general
general
faq
that
faq
that
he
put
together.
So
thank
you
for
this.
My
oak
and
we'll
probably
be
also
we'll
be
putting
these.
A: Okay, so that's good. I think for onboarding we'll have the very specific open-source part and the general part about the projects, so I think that's good, and thank you to both of you for putting that together. And I also wanted to point out here something Susan sent me, and this is for the third project: this is the ball microscope.
A: So if you can see my screen here, this is what... So what this is, is a microscope that images, like, an embryo from all sides. And so what happens is you put your sample inside this ball; you put it up, or actually I think it's down here. Well, this is the light source, I think, but your sample will be loaded around here, and then all of these are different cameras here. I don't want to comment...
A
These
are
different
cameras
here,
hooked
up
to
a
capture
device
and
they're,
basically
taking
images
from
different
angles.
So
you
can
see
that
they're
coming
up
from
below
coming
up
from
on
top
light
source
is
coming
in
here,
the
samples
in
here,
and
so
it's
able
to
image
simultaneously
from
all
these
different
angles,
and
so
now
that
we
have
all
these
images
from
different
perspectives.
A: We can then take all those images and stitch them together into a sphere. So you can capture image information from various angles on the upper hemisphere of the embryo, and then different angles on the lower hemisphere of the embryo, and then one of the goals of the digital microsphere is to be able to take those data that are being collected and stitch them together.
A
Now
I
think
I
I
made
it.
I
need
the
data
that
so
susan
has
some
data,
that's
available
that
she
made
available
on
to
me
and
I
put
them
in
a
secure
server,
and
I
think
I
sent
the
link
to
one
person
who
was
interested
yeah
okay,
so
that
was
actually
yeah.
C: Regarding that, I was actually thinking about montaging pictures, so now it got clear, since you were talking about stitching, so now it's totally fine. So the subsample you sent, like: is it possible to create at least a demo for just the proposal?
A: Those images are in sequence, and if you stitch them together it's kind of like it's rotating. Now, you can do things with that. You don't need to build, like, an animation just from those images; you can montage them or stitch them together in some way. I mean, the way that you might want to think about...
A
This
is
like
a
geometric
projection
where
you
have,
if
you
make
a
flat
map
on
a
surface,
that
flat
map
has
been
taken
off
of
a
globe
right
off
of
a
sphere
or
off
of
a
curved
surface
right
and
you're
deforming
that
surface,
but
you
don't
want
to
deform
it
too
much
or
else
you'll
be
able
to
tell
what
the
features
are
on
the
map.
So
you
want
to
be
able
to.
You
know,
create
like
a.
A: ...montage of the different images. You want to take the coordinate system, and you want to deform it enough so that it flattens out, or so you can sort of see it clearly, but you also don't want to deform it too much. I mean, that's basically the idea. So in the proposal, I think we said that students can come up with their own solution to the problem, and then it would be evaluated.
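The globe-flattening trade-off described here can be made concrete with the simplest map projection. This toy equirectangular projection flattens a sphere onto a rectangle but stretches features near the poles, which is exactly the "deform it too much" problem; it is only an illustration, not the microscope's actual stitching method.

```python
import math

def equirectangular(lat_deg, lon_deg, width=360, height=180):
    """Map a point on a sphere (latitude, longitude in degrees) to pixel
    coordinates on a flat width x height map. Simple, but it stretches
    features near the poles."""
    x = (lon_deg + 180.0) / 360.0 * width
    y = (90.0 - lat_deg) / 180.0 * height
    return x, y

def stretch_factor(lat_deg):
    """How much an east-west feature is stretched at a given latitude
    under this projection: 1.0 at the equator, unbounded at the poles."""
    return 1.0 / math.cos(math.radians(lat_deg))

print(equirectangular(0, 0))  # equator, prime meridian -> map center
print(stretch_factor(60))     # a feature at 60 degrees is stretched ~2x
```

A stitching pipeline would pick a projection (or several local ones) that keeps this stretch small over the region of interest.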
C: Over time it becomes 4D, then; the fourth dimension is time, right? So I was wondering whether you sent me the images of one instance, or is it just a random subsample?
A
I
think
it's
a
random
sub
sample.
I
think
I
I
gave
you
like
one
folder
that
she
gave
me.
I'm
gonna
have
to
go
through
it
again
and
look
and
see
exactly
what's
in
there,
but
I
think
it's
like
just
yeah
just
one
instance.
You
know
there
yeah.
I
would
just
we
can
go
through
it
later
and
see
what's
in
there,
but
I
think
it's
one
instance.
A
And
so
if
people
wanna,
you
know,
people
want
access
to
this
secure
server
that
we
have
it
on.
Let
me
know,
and
I
can
send
you
the
link
and
you
know
it's
like.
I
think
that's
a
good
way
to
distribute
data.
I
don't
want
to
make
it
public
yet
so
we'll
just
do
it
that
way.
A
So
very
good.
Let's
see
we
have
some
more
things
in
the
chat
here.
Okay,
usually
asked
akshay.
Are
you
using
3d
rendering
models
or
trying
it
in
blender,
photoshop.
C: Well, I was thinking of using OpenCV, actually, just stitching it, but yeah, I would give it a look. So far, right now, I've got an idea about what the project should be like after going through the research papers, and you clarified some doubts.
A: Yeah, I will review people's proposals when they have them; I can give you feedback on them. So just drop a link in the Slack or a private DM, that'd be fine. So yeah, let's go on here. I have a lot of things, actually, that I was wanting to cover; I'll probably get through them. I'm gonna very...
E: Quickly, one quick thing I wanted to ask. Okay, hello, yeah, go ahead. Yeah, hi. So, a few days back we were talking about the cellular automata thing, like I told you, regarding different models that we can address. So I was wondering whether in the future you might be presenting something on that, or something like that. Lately I've been a little busy, so I could not do a lot on it.
E
So
if
there
are
some
resources
that
you
would
have
in
mind
or
something
I
would
love
to,
you
know
have
to
cut
them
and
get
some
idea
about
it.
A
That's
good
yeah
so,
but
I
wanted
to
go
over
a
couple
things
here
before
that
the
first,
my
I'm
not
sharing
my
screen
anymore,
okay,
so
the
first
is
just
briefly
to
touch
on
the
submissions,
so
we
have
again
some
deadlines
coming
up,
complement,
which
is
march
26th,
the
biosystem
special
issue,
the
periodicity
in
the
embryo.
That's
been
resubmitted
already.
Thank
you
bourgeois
for
your
comments.
They
have
been
incorporated
and
jesse
as
well.
A
We
had,
I
think
it
came
out
pretty
well
we'll
be
waiting
for
that
decision
pretty
soon.
So
I
I
hope
it
gets
accepted
soon,
but
we'll
see
this
incf
neuroinformatics
assembly
is
coming
up
in
april
and
the
deadline
for
submissions
for
that
is
march
31st
and
they
want
a
1500
word
or
1500
character,
abstract,
and
so
I've
actually
prepared
something
for
that
here.
This
is
on
the
divo
learn
platform,
so
I
I
think
I
mentioned
this
briefly
last
time.
We're
going
to
walk
through
this
devil
learn
platform.
A
This
is
the
presentation
that
I
we
would
give.
I
would
take
the
lead
on
it
and
just
kind
of
go
over
the
platform,
talk
about
the
different
components
and
then
finally
discuss
a
little
bit
about
future
development
in
terms
of
these
epistemological
directories,
which
you
know
would
be
something
that
we
would
go
over
a
little
bit
before
hand
in
the
group.
I'll
probably
give
a
presentation
on
that
and
then
also
further
development
of
some
of
the
software.
A
So
I'm
going
to
submit
this
this,
I
can
send
a
link
to
this
for
people
to
edit
or
to
look
over
before
I
submit
it,
but
then
we
can
submit
it
and
hopefully
it
will
be
presenting
in
front
of
incf.
I
mentioned
in
here
that
part
of
the
people
learn
sort
of
development
has
been
sponsored
by
incf,
so
hopefully
they
say
look
at
that
and
say:
oh
yeah.
We
need
to
include
this
in
the
symposium,
so
we'll
we'll
be
revisiting.
A
This
also,
I
think
krishna
had
to
rejoin,
but
I
wanted
to
point
out
that
his
work
that
he's
been
doing
we've
been
doing
on
a
ns
and
bnns,
which
are
the
artificial
neural
networks
and
the
biological
neural
networks.
A
We're
doing
a
paper
for
the
artificial
life
conference
on
that,
so
we're
going
along
on
that
there's
still
a
lot
of
stuff
to
do,
but
this
is
something
that
is
due
probably
next
weekend,
so
we'll
be
kind
of
getting
I'll
be
cranking
through
that
this
week-
and
you
know
you
know-
hopefully,
maybe
maybe
people
want
to
give
feedback
on
it.
I
can
send
it
out
later
this
week
when
it's
almost
complete
people
can
read
it
over
and
look
at
it
and
then
I
think
that's
it
for
the
submissions.
A
I
think
there
isn't
too
much
else
coming
up
very
soon.
Europe's
actually
is
coming
up
in
may
the
the
deadlines
for
paper,
so
people
want
to
submit
something
to
nurips
it's
very
competitive,
but
if
you
want
to
do
that,
that's
that
deadline's
in
may,
so
it's
coming
up
soon.
So
just
just
to
keep
that
in
mind
and
then
the
evolution
abstracts
are
now
open
again
and
we
can
submit
something
there
by
april
30th.
A
So
that's
now
I
want
to
move
on
to
the
the
cellular
automata
stuff.
So
this
is
a
sureties.
She
just
asked
me
about
this,
so
I'm
gonna.
A
I
have
some
resources
here
that
I
want
to
point
out,
and
this
is
a
very
crude
sort
of
introduction
to
it,
but
I
wanted
to
show
you
what
we've
been
doing,
what
we've
done
in
the
past
in
the
group
here
and
some
of
the
stuff
I've
done
in
the
past
on
it.
So
we
have
this
paper
and
this
is
available
on
the
divaworm.weebly.com
website.
That's
the
group
website
officially,
and
this
is
in
the
publications
tab,
it's
morphozoic
solar
automata
with
nested
neighborhoods.
A
So
this
is
thomas
portages,
who
I
think,
gave
a
presentation
in
the
group
last
year
he's
been
a.
He
was
a
more
active
member
several
years
ago
and
he's
come
up
with
this
technique
for
a
cellular,
automata
called
morphozoic,
and
so
this
is
a
model
that
he
came
up
with.
That
is
open
source.
So
we
have
the
code
on
our
github
if
you're
interested
in
working
with
it.
A
It's
I
think
it's
written
in
c,
and
so
morphozoic
is
a
cellular
automata
that
generates
patterns
that
you
might
see
in
an
embryo,
and
so
we
have
these.
You
know
this.
It's
a
platform
where
you
have
not
just
the
regular
neighborhoods
that
you
would
see
in
a
cellular
automata
but
extended
neighborhoods.
A
So
if
you're
not
familiar
with
what
a
cellular
automata
does,
let
me
see
here:
it
is
it's
basically
something
like
this.
You
have
these
cells
that
you
know
focal
cells
that
you
want,
that
that
are
active,
so
in
this
case
would
be
the
middle
cell,
and
this
middle
cell
has
neighbors
and
it's
influenced
by
the
activity
of
its
neighbors.
So
this
cell
here
this
white
one
in
the
middle,
is
the
focal
cell,
and
so
it's
going
to
listen
to
its
neighbors.
A
But
what
are
its
neighbors
exactly
so
in
some
cases
in
what
they
call
the
von
neumann
neighborhood.
You
have
these
black
cells,
which
are
the
neighbors
which
are
the
direct
neighbors
in
the
cardinal
directions
in
a
more
neighborhood,
it's
every
cell
that
touches
the
focal
cell.
So
it's
all
these
cells
around
it,
this
nine,
these
nine
cells
in
black
here
and
then
finally,
the
extended
moore
neighborhood,
which
can
go
out
maybe
two
or
three
layers
away
from
the
focal
cell.
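The three neighborhood definitions just described can be written down directly. A short sketch, using (row, column) offsets, where radius 2 stands in for the extended Moore neighborhood:

```python
def von_neumann(r, c):
    """Direct neighbors in the four cardinal directions."""
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def moore(r, c, radius=1):
    """Every cell within `radius` steps of the focal cell (excluding
    the focal cell itself); radius=1 is the classic Moore neighborhood,
    radius>=2 an extended Moore neighborhood."""
    return [(r + dr, c + dc)
            for dr in range(-radius, radius + 1)
            for dc in range(-radius, radius + 1)
            if (dr, dc) != (0, 0)]

print(len(von_neumann(5, 5)))      # 4 neighbors
print(len(moore(5, 5)))            # 8 neighbors
print(len(moore(5, 5, radius=2)))  # 24 neighbors in the extended version
```

An update rule then maps the states found at these offsets to the focal cell's next state, which is what the following paragraphs walk through.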
A
So
the
idea
is
that
these
these
focal
cells
are
influenced
by
the
state,
the
collective
state
of
all
these
neighbors,
and
this
happens
across
the
grid.
So
you
have
a
grid
where
each
automata
or
cell
is
influenced
by
its
neighbors,
and
so
you
get
these
patterns
that
emerge
because
they're
all
active
and
they're
all
influencing
each
other
in
parallel
and
they're,
basically
producing
these
patterns
across
the
grid.
A
You
know,
they're,
you
know
simple
pattern
formation
in
some
cases
like
the
game
of
life,
you
have
things
like
things
that
move
across
the
screen,
like
gliders,
where
you
have
different
patterns
that
emerge,
and
so
there
are
a
lot
of
ways
you
can
classify
these
output
patterns
and
so
in
morphozoic
they
use
you
use
a
neural
network
to
classify
them.
A
There's
a
lot
of
detailed
amorphozoic
in
in
under
the
hood
in
terms
of
what
the
focal
cell
does.
You
know
there
are
distributions
that
you
can
analyze
within
the
cells,
and
so
you
know
there's
a
lot
of
information
in
there.
A
But
basically
the
idea
is
that
you
have
this,
these
sort
of
multi-layered
networks
or
these
multi-layered
neighborhoods
that
determine
what
the
state
of
the
focal
cell
is
so,
and
you
can
produce
things
like
this,
where
you
have
this
grid
in
there,
these
cells
and
they're
all
interacting
and
they
form
coherent
patterns,
and
sometimes
you
can
actually
analyze
these
patterns
as
different
the
execution
of
different
rules.
A
So
each
focal
pattern
will
execute
a
rule
over
time
based
on
the
state
of
its
neighbors
and
so
it'll
be
like
something
like
if
a
majority
of
my
neighbors
are
off.
I
turn
off
if
the
majority
my
neighbors
are
on,
I
turn
on,
and
so
just
with
those
rules
being
implemented,
you
can
get
patterns
like
this,
so
this
is
rule
30
right.
A
So
this
is
rule
30
where
a
single
cell
is
turned
on
in
the
top
row
and
then
next
time
step,
all
the
cells
are
subjected
to
rule
30.,
two
of
them
change
state
as
a
result
leaving
three
cells
turned
on
in
the
second
row.
So
in
this
case
the
rule
is
to
have
these
cells
executing
in
sequence
and
then
producing
this
pattern.
A
So
there
are
ways
you
can
do
this
that
you
know
produce
these
patterns,
and
stephen
wolfram
has
a
huge
book
on
this
and
he's
classified
like
he's
identified
at
least
200
rules
that
these
cellular
automata
follow,
and
you
know
they're
all
different.
They
all
result
in
different
shapes
different
patterns,
but
they
all
have
like.
We
all
know
what
the
rules
are
and
that
went
into
creating
them.
A
So
this
is
like
this
is
very
analogous
to
pattern,
formation
and
embryos.
As
we've
seen
in
some
of
the
papers
we've
talked
about,
and
we
you
know
the
the
good
thing
about
using
something
like
morphozoic
is
you
can
understand
the
rules
that
unfold?
So
if
we
compare
that
to
something
we're
doing
with
image
segmentation
image
segmentation,
we're
trying
to
discover
the
pattern
of
images
right,
that
of
a
process-
that's
already
taken
place
in
this
case,
what
we
can
do
is
we
can
actually
sort
of
forward
generate
a
pattern
using
certain
rules.
A
So
we
know,
for
example,
that
certain
rules
have
certain
outcomes
and
and
if
we
apply
them
in
a
computational
environment,
we
can
get
these.
You
know
we
can
simulate
them
and
get
different
patterns
and
then
understand
you
know
which
patterns
are
maybe
replicable
or
you
know,
aren't
just
ephemeral
things.
So
this
is.
This
is
like
what
you
can
do
with
ca,
and
so
this
is
something
that
is
a
specialized
package
that
we
have
in
the
group,
but
there
are
all
sorts
of
different
packages
you
can
use
for
this.
A
Everything
from
like
a
standard,
cellular
automata
model
to
some
very
highly
specialized
model.
And
again
I
would
recommend
that
you
read
through
the
paper
and
it
gives
you
a
lot
of
detail
of
how
this
technique
works.
So
it's
a
very
specialized
type
of
ca
technique.
Oh
we
have
so.
This
is
referring
to
cell
type
densities.
So
there's
a
in
morphogen
neighborhoods.
A
You
have
these
cell
type
densities
that
you
can
analyze
as
well
as
the
state
of
the
cell
and
the
pattern
on
the
grid.
So
there's
a
lot
of
potential
here.
I
don't
think
it's
really.
I
I
think
it's
been
sort
of
not
developed
enough.
I
know
that
tom
has
moved
on
to
other
things,
but
this
is
something
that
we
might
look
good
back
into.
A
Yeah,
so
this
is-
and
this
is
the
game
of
life
which
I
was
talking
about
before
so
the
game
of
life
is
where
you
generate
these
patterns
and
they
are
mobile
across
the
grid,
and
you
see
you
know
you
basically
set
up
the
cellular
automata
and
you
run
it
and
you
can
see
these
different
things
that
look
like
life
that
are
emerging
on
the
grid
and
you
you
know
they're
different
things
in
the
game
of
life
that
you
can
see.
A: So this is, I guess, a very crude introduction to this, but you can have... and this is sort of the proof of concept here, using image restoration. So I would recommend that you read this. And then, let's see if this is the one that I wanted. Okay, so this is something where I was working on this for a project.
A: If you transfect cells with mRNAs, they will affect gene expression, which allows them to change their cellular state, so they go from, like, the skin cell type to stem cells. And so the question is: how do you model that process in a way that allows us to understand it? You can use cellular automata, and in this case you're using something called a sliding neighborhood, which is where the neighborhood will actually deform based on the nature of the process.
A
So you'd have these neighborhoods where the cell would grow, and then the neighboring states would influence a cell, and it would eventually change. So there are different ways you can set up these models; that's the point of showing you this. And the advantage of using a cellular automaton for something like this is that you can look at the spatial heterogeneity of a biological process.
A
You know, different cells will take up different amounts of the stimulus, and then some of them will have different things going on inside of them that enable the reprogramming process. So you end up with basically these colonies of transformed cells that change their state, and some cells around the edge that don't change their state or die off. So that's the idea: we want to understand that process better.
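The colony-forming dynamics described above can be caricatured as a stochastic CA. This is a hypothetical sketch, not the model from the poster: each cell gets its own mRNA "uptake" value, and the probability of converting to the reprogrammed state grows with uptake and with the number of already-converted Moore neighbors, so colonies of transformed cells grow while low-uptake cells stay unconverted. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def reprogram_step(state, uptake, p_base=0.02, p_neigh=0.2):
    """One update: a somatic cell (0) converts to a reprogrammed cell (1)
    with probability driven by its mRNA uptake and by how many of its
    Moore neighbors are already reprogrammed. Parameters are hypothetical."""
    n = sum(np.roll(np.roll(state, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    p = np.clip(p_base * uptake + p_neigh * n * uptake, 0, 1)
    flips = (state == 0) & (rng.random(state.shape) < p)
    return np.where(flips, 1, state)

h = w = 32
uptake = rng.random((h, w))          # heterogeneous stimulus uptake per cell
state = np.zeros((h, w), dtype=int)
state[h // 2, w // 2] = 1            # a single seed conversion
for _ in range(30):
    state = reprogram_step(state, uptake)
```

Because uptake is heterogeneous, the run typically ends with transformed colonies surrounded by cells that never convert, which is the spatial heterogeneity the CA lets you examine.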
A
We also introduce a mathematical model for this, the FitzHugh-Nagumo model, which is an excitable cell model. So you can actually model this process as a model of excitability across space: each cell that takes up the mRNA that you're delivering and is transformed successfully is excited, and the cells that don't do it are not excited. And this has a lot of parallels with neural activity; they use this model,
A
the FitzHugh-Nagumo model, in modeling neural activity and neural excitability. It's also something we've been talking about for looking at the Bacillaria colonies, as a model of excitable media. In this case it would be like, you know, each cell, if it's moving, there's an excitation there and a relaxation, and you can capture all of that in a mathematical model. So another, oh, this is the same one.
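For reference, the FitzHugh-Nagumo equations mentioned above can be integrated in a few lines. This is a generic single-cell sketch with textbook parameter values, not the specific parameterization from the poster; with a sustained input current the model fires repeatedly, showing the excitation-then-relaxation cycle.

```python
import numpy as np

def fhn_trajectory(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.1, steps=2000):
    """Euler integration of the FitzHugh-Nagumo equations:
       dv/dt = v - v**3/3 - w + I     (fast, excitable variable)
       dw/dt = eps * (v + a - b * w)  (slow recovery variable)"""
    v, w = -1.0, 1.0
    vs = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs.append(v)
    return np.array(vs)

vs = fhn_trajectory()
```

With these standard parameters the voltage-like variable `v` swings between roughly -2 and 2 on a limit cycle; with a weaker input the cell instead sits at rest and only fires when perturbed, which is what "excitable" means.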
A
So this is, and then there's another poster where I think I advance this idea a little bit more. This is what's called dynamical cellular encodings. So this is a dynamical system where you have, again, these sliding neighborhoods, and you have these intracellular functions
A
as well, and you can incorporate that into the design and it'll result in different patterns. So it's not just a matter of having a grid where things turn on and off in parallel; you can apply all sorts of rules. Within each cell you can apply things like the FitzHugh-Nagumo model, or some sort of gene regulatory network model, or a number of other things. There are a number of very innovative ways you can approach cellular automata.
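As a sketch of the "model inside each cell" idea, here is a grid where every cell runs FitzHugh-Nagumo dynamics and neighboring cells are coupled through a discrete Laplacian, a simple excitable-media automaton. The coupling strength, grid size, and stimulus are all illustrative assumptions, not anything from the posters.

```python
import numpy as np

def fhn_grid_step(v, w, D=1.0, a=0.7, b=0.8, eps=0.08, dt=0.05):
    """Each grid cell runs FitzHugh-Nagumo dynamics; cells are coupled by
    a discrete Laplacian of v (a reaction-diffusion, excitable-media model)."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
    dv = v - v ** 3 / 3 - w + D * lap
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

n = 40
v = np.full((n, n), -1.2)       # approximate resting state of each cell
w = np.full((n, n), -0.6)
v[:2, :] = 1.5                  # stimulate one edge to launch a wave
peak = np.zeros((n, n))
for _ in range(2000):
    v, w = fhn_grid_step(v, w)
    peak = np.maximum(peak, v)  # record the largest excitation seen per cell
```

The stimulated edge launches an excitation wave that sweeps across the grid, so every cell transiently fires even though only the edge was stimulated; that propagation is what makes excitable media interesting for colony-scale questions.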
A
So that's what I'm going to talk about there. Like I said, it's a very crude introduction, but I think it'll help people understand it a little bit. If Shruti wants to do a presentation on it, she's welcome to; otherwise I can do a short presentation on it in the coming weeks as well.
A
Okay, [inaudible] has to leave; thank you as well for attending, and thank you, Akshay, for attending. Yes, so again, I'm going to go over a couple more things before we end the meeting. I wanted to talk a little bit about some papers that I've found over the week. So, last week's Twitter thread:
A
I put a Twitter thread up for the meeting every week, with the YouTube video, so if you're on Twitter you'll want to check that out. And I found a thing related to cellular automata last week on Twitter: it was this animation where one cellular automaton is used to feed another. I don't know if I have the GIF here.
A
But basically, this is where there's a cellular automaton here, and it's hitting a barrier, and then it's transforming the output of this automaton into some other visualization. And I found out after seeing this (it was very cool) that there's something called cyclic cellular automata. These are things that recycle, so there's a sort of trophic relationship between the cellular automaton and its environment. So there's this repo on cyclic cellular automata.
A
There's this animation, and I think the code as well. So if you go back to last week's Twitter thread on the meeting, there's this thing that I put together: cellular automata as a trophic system. I call it "computrophagy" [phonetic], which means that computational systems are eating one another, or that they're being consumed. So this is the food chain for cellular automata.
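A cyclic cellular automaton of the kind mentioned above is easy to sketch: with k states arranged in a cycle, a cell in state s is "consumed" by any neighbor in state (s+1) mod k, which is exactly the trophic flavor being described. The state count and threshold below are arbitrary choices, and this is a generic textbook rule rather than the code from the repo.

```python
import numpy as np

def cyclic_step(grid, k=12, threshold=1):
    """Cyclic cellular automaton: a cell in state s advances to (s+1) mod k
    when at least `threshold` von Neumann neighbors are already in that
    state, so each state 'eats' the one below it around the cycle."""
    nxt = (grid + 1) % k
    count = sum((np.roll(grid, d, axis) == nxt)
                for d, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)))
    return np.where(count >= threshold, nxt, grid)

rng = np.random.default_rng(2)
k = 12
grid = rng.integers(0, k, size=(64, 64))
start = grid.copy()
for _ in range(50):
    grid = cyclic_step(grid, k)
```

Run long enough from a random soup, this rule famously organizes into spirals and keeps cycling forever instead of freezing, since every state is both predator and prey.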
A
So that's, again, something we can revisit. I don't know how much I want to go into the different papers, but I did want to go into the zebrafish gastrulation thing that I found as well.
A
So this is a nice visualization of a zebrafish embryo undergoing gastrulation, from the Zebrafish Rock Twitter feed. This is where they have the transformation from a spherical embryo in zebrafish, and you have this gastrulation stage taking shape, where there's differentiation along this axis. You can see it in this image, and this is the person who took these images, so credit goes to them; and then this is the actual GIF, which I have as an MP4.
A
So this is the process here. You can see how it starts at the top, and then the cells work their way down and around into a sphere, you see, like that, and it becomes more defined, and then you get this axial differentiation right along here. This is the neural crest area here, so you can see how that works.
A
It's kind of coming together, and there's the neural crest, and there you go. So that's a nice visualization of that process. And I thought, you know, this is what I'm trying to strive for here in the meetings: to understand what these things look like, which is kind of hard to communicate sometimes. So I think that's all I'm going to do for today. I had a couple of papers, but it's probably getting pretty late in the meeting, so I wanted to ask if people had any questions.
A
Actually, let me go to the chat first. Okay, so Mayukh says: "Thank you, Mainak and Tharun, for the presentations, great work. I also encourage others to present their findings in the coming weeks, if possible." Yes, please do; if you want to present anything, let me know before the meeting, on Slack or by email.
A
We also have an email newsletter. I don't know how many of the new people are getting that, but let me know if you're not getting it and I can subscribe you. It just tells you about the next week's meeting. And do we have any other questions before we go?
A
Well, thank you, and again, if you have anything you want to present in the coming weeks, let us know. Otherwise, everyone have a good week.