From YouTube: DevoWorm (2021, Meeting 37): Diatom Hacktoberfest, Deep Sets/Neural Processes, Neurointeractome.
Description
Overview of progress on Diatom Deep Learning using pix2pix. Hacktoberfest reminder. Call for involvement in NMC 4.0. Deep Sets and Neural Processes. Papers on the Neurotransmitter Connectome of C. elegans and Kuramoto oscillators on a sphere. Attendees: Bradly Alicea, Ujjwal Singh, R Tharun Gowda, and Krishna Katyal.
C
Yeah, so his meeting starts from 6:30; mine is going to start from seven. So that's why I was able to join for half an hour.
D
Yes, so I'll just brief what I did last week. I trained on a video which had around 50 frames, where the objects had proper rectangular shapes, so it was easy for the model to convert into our required images, and it did perform well. I also trained on the video which had overlapping boundaries, but the session got disconnected, so I couldn't save the weights. So I'll just train it again.
C
I guess for our next step, Asmit and I are looking into the literature, and we saw some methods for finding the corner edges of each and every cell in the model. There are simple techniques as well as deep learning models, so we have to see which we are going to implement.
C
Also, the last sample that Tharun sent to us has a little bit of inaccuracy: in the real_B there are fewer cells, and in the fake_B there are more in some frames, so it is creating more rectangles than the original.
C
It's been very few cases, so let me see how many frames with such things we got. After analyzing the data we can either skip them or — because we have to get overall volumetric data, basically over ten seconds or five seconds, to get the movement, velocity, and all those things — we can skip a few frames. Once we are done with that, after scrutinizing all the data, I guess we can go for the rectangle identification.
D
Yeah, I guess those extra ones — actually it's my fault, I didn't label them. Those were very blurred, or you just couldn't discriminate them: two or three would get combined with no boundary, so it did recognize them, but it couldn't recognize them individually. It has just drawn a single box — it considered the whole branch a single object. So I guess the dataset has a flaw.
D
Yeah, so for training I used only one video, and for testing I used the other video. In the videos the objects were repeating, so I thought it could be overfitting or something, but it did perform pretty well on the other video, which it did not train on. Also, our data is very small — we just labeled around 50 frames. So I guess if we increase the dataset and include some noise or something, then it might get better.
C
With the kind of results that we are getting right now from such a small number of frames — Asmit and I were discussing — you can also pitch it as a strength of the method: you don't need a whole bunch of thousands of images; you can just take 50 or 60 images and get a very good model out of it. So we can pitch that as well.
C
I don't think anyone has used it like this before. Basically, what we are doing is completely the opposite of what pix2pix is meant to do — we are kind of reversing that model — and I don't think it has been used in such a way before, because when we started using it in our DM project as well, our professor gave us the same feedback: what you are doing is basically reversing the entire model.
D
Yeah, and the PyTorch-based model is better than what they have published. Compared with the results shown in the paper, we are getting better results because we are using the PyTorch one — they have said that it should give better results. So I guess that is also a reason.
A
Good, yeah. So let's go through these data that we have for the Bacillaria, and I'm almost wondering, if it's working that well, whether we should try another type of data, or another set of images from a different organism, to see what those look like as well.
A
We have other types of data as well, so I'm going to look around and see if I can find something, and maybe we can run that through as well, once you have that worked out, to see if it does as well or better. That would be a good comparison too — to have two different types of cell.
D
Yeah, we can check whether the model is looking for these rectangular shapes, or whether it is just finding the edges and creating boundaries. If it's doing the latter, then it can recognize all the shapes. So yeah, we can test those.
D
Yeah, on the labeling part: what I'm doing is drawing the polygons around the objects and then just getting the JSON files. These JSON files have the boundary points. Then I'm using the Pillow library to plot — I create a polygon using those points. So I guess we can label any shape; it will be a bit more difficult than doing it with rectangles, but yeah, we can try that.
C
Yeah, for the C. elegans you cannot do it with rectangles; you have to use some higher-order polygons. If I recall correctly — I will share it on Slack as well — I saw a couple of labeling packages where, like in Photoshop, you can basically cut out the images which you have to label by hovering your mouse over the edges. So if I can find that out, I will definitely share it on Slack.
D
In labelme, what it does is you can draw polygons with any number of points, so we just click on the boundaries, then we get those points, and then we can just process those points into a polygon.
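A minimal sketch of the labeling step described here — labelme-style JSON boundary points rasterized into a mask with Pillow. The JSON snippet, label name, and image size are made up for illustration; the real files come from the labeling tool:

```python
import json
from PIL import Image, ImageDraw

# Hypothetical labelme-style output: each shape carries its boundary points.
labelme_json = json.loads("""
{"shapes": [{"label": "cell",
             "points": [[10, 10], [60, 15], [55, 50], [15, 45]]}]}
""")

mask = Image.new("L", (80, 64), 0)            # blank single-channel label image
draw = ImageDraw.Draw(mask)
for shape in labelme_json["shapes"]:
    pts = [tuple(p) for p in shape["points"]]
    draw.polygon(pts, outline=255, fill=255)  # fill the cell region

# A pixel inside the polygon is labeled; a corner pixel stays background.
print(mask.getpixel((30, 30)), mask.getpixel((0, 0)))  # → 255 0
```

The same loop handles polygons with any number of vertices, which is what makes this preferable to rectangle-only labeling for irregular cell shapes.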
A
You said that we had discussed labeling in the Slack and you had questions — what kind of scheme did you use for labeling?
D
Yes, so what I was trying is: when they were overlapping one another, what I thought was that with two overlapping, you can just consider the two as one, or something like that. But I think I kept the overlap itself — two rectangles — however they were in the images, I just labeled them as they were.
A
All right, yeah — because I remember we had talked about having a label where you would say this is the first cell, this is the second cell, and using some sort of scheme like that. But it sounds like you're just using a simpler technique.
D
Yeah, so I considered all of them as the same object. We can give different labels to different objects, but I thought, to simply test out the model, I'd just create a smaller dataset. So yeah, we can do that also.
A
Right, yeah, that sounds great. Did you have anything to show, like a notebook or anything, or is that still pretty rough?
D
Yeah, I just shared the notebook — the pix2pix one from the GitHub link that was sent. I used the same thing; there were like four or five lines of code I had to change to just run the files. I'll share those files.
D
Five commands, that's it — you just had to run it in the GitHub — sorry, the Colab — cells. I just cloned the repository and used the same code, so there's not much difference from the original code.
A
All right, that sounds good — that sounds great. I'm looking forward to seeing more progress on it; you've made some pretty good progress so far. We'll keep updating on this, and I can get you some data for another species.
A
I know there was a little bit of worry that this algorithm could not work at all, so it's good that it's working out.
C
Yeah, at first I was a little skeptical that it might not work, because in our case as well we had to change a lot of things in the algorithm to make it work — basically to translate English text into Japanese. But in this case, as Asmit was also suggesting, the shape of the Bacillaria is very similar throughout, whereas English characters are very different and Japanese has something like a 10,000-character set.
D
Yes, after this model I think we should try those edge detection modules — you're suggesting some kind of edge detection and a watershed algorithm — so we'll look into this this week as well, after training on this.
D
Yeah, so after finding these bounding boxes we can extract the coordinates of the boxes, and then we can just track each box across different images.
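The tracking step described here can be sketched as greedy IoU matching between consecutive frames — a simplified stand-in for a proper tracker; the function names and threshold are made up for illustration:

```python
# Boxes are (x1, y1, x2, y2) tuples.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(prev_boxes, new_boxes, threshold=0.3):
    """Greedily match each new box to the previous box with highest IoU."""
    matches, used = {}, set()
    for j, nb in enumerate(new_boxes):
        best, best_iou = None, threshold
        for i, pb in enumerate(prev_boxes):
            if i in used:
                continue
            v = iou(pb, nb)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches  # maps new-frame index -> previous-frame index

frame1 = [(0, 0, 10, 10), (20, 20, 30, 30)]
frame2 = [(21, 21, 31, 31), (1, 1, 11, 11)]  # shifted slightly and reordered
print(track(frame1, frame2))  # → {0: 1, 1: 0}
```

Chaining these matches frame to frame gives each box a persistent identity, from which displacement and velocity over a window of frames can be computed.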
A
All right, well, that's good — thank you for that update. So yeah, that sounds good. For those of you watching on YouTube, welcome to the meeting. We were just talking about the update on the Bacillaria — more Bacillaria stuff. We've done the Digital Bacillaria project, and we're doing a second dataset where we're applying some different algorithms to those images.
A
So thank you for that update, Tharun. I thought I'd go through some other things here that we have. Let's see — today we have a number of things to catch up on that some people here are interested in. I think we're talking about Neuromatch, which is coming up, and I wanted to remind you that we have the abstract submissions document. So if you're interested in doing something for the Neuromatch conference, we have our ideas board.
A
This is shared between this group and the other group that I work with, so you'll see that they're labeled here in this column — and you don't have to pick something from DW, which is the DevoWorm group. You can pick something from any of these if it looks interesting to you. We have the ones that we've done in previous conferences; we did some work on artificial and biological neural networks.
A
We have some stuff on complex networks; we have this unified theory of switching. Some of these aren't easy to — I mean, for some of these the title isn't straightforward. So if you're interested, query us to find out what the thing listed here is, and then, if you're interested, you can put your name in this interested-collaborators column, or you can just contact us on Slack and ask what it's all about.
A
You'll probably be contacting me, because I'm coordinating this. So we have a number of different things now; not all of these are going to make it as submissions, but we want to create a submission, and we want to pick the strongest ideas. So we have a number of different ideas here, and then we have this document, which is a submissions document.
A
So when we come up with an abstract, we'll put it in this file, and then you can actually browse these abstracts to find out more about the things that people are doing. Right now we have a couple of abstracts that are candidates — it looks like Jesse Parent has been populating this with things — and if you have an idea, you can put it in in this format, where you have the title and the body of the abstract.
A
And then, you know, we won't just be submitting to Neuromatch with this system; we'll be finding other conferences to submit to and putting more ideas in here. I want to get a system where people can put ideas out there, make an abstract for each, and then people can browse and see if they find something that they like, become a collaborator on it, and then maybe we can submit it somewhere.
A
I will put this in the chat if you want to take a look, and this applies to Ujjwal and Tharun as well: if you have an idea for something we might do with the work that you're doing right now, put it in there as an item, and then maybe you can put an abstract in. You don't have to do that today, but we're going to do this for future submissions.
A
So this is good, and then, of course, if it's something like a machine learning conference, where they want a paper instead of an abstract, I think still having the abstract there guides us in terms of where we want to go with that, so we can put out a paper.
A
So that's that part. The next part is the overall submissions document. This is the thing we've been working on since last year — actually, I think, since the beginning of this year — and this is everything in 2021 that we've been doing.
A
There are a lot of things we've missed and a lot of things we've hit. We've been rejected from actually not that many things — the ALife conference, for example, was a rejection — but we have a number of things that are outstanding. So there are a number of papers, if you're interested in diatoms, diatom movement, genomic research, or cell segmentation; those are things we have as items, like the quantitative comparison one and this NeurIPS workshop.
A
I think that one has come and gone in terms of submissions — I don't think anyone submitted anything successfully to that. NeurIPS itself is actually coming up at the beginning of December, and if you're interested in machine learning and deep learning, that's a good place to just attend if you can. It's going to be virtual again this year, so if you want to attend — I think student rates are lower than the regular rates.
A
So if you want to attend, try to register early. They'll also have a lot of materials up on the web, where they'll have different workshops and different sessions that are open to the public, sometimes after the fact. So maybe what we'll do towards that period is try to focus some of our efforts around promoting some of those materials. As it turns out, Neuromatch 4.0 and NeurIPS are happening right around the same time, in early December.
A
So maybe we'll try to do a DevoWorm-centric review of some of the content — some of the ideas that are going around at these conferences — and try to align ourselves a little bit with that during the conference period.
A
We used to do these watch parties in person, but I was thinking of doing something like a virtual group — maybe during one of our meetings — where we kind of recap some of the things going on in these two different conference venues and see what fits into what we're doing here. NeurIPS is extremely large and has a lot of ideas floating around, and it's hard —
A
It's
easy
to
get
lost
and
everything
that's
going
on
in
my
other
group,
especially
with
the
help
of
jesse
we've
been
able
to
distill
a
lot
of
this
stuff,
and
you
know
kind
of
focus
on
interesting
ideas,
interesting
to
our
group
discussions.
A
So maybe we'll try to do something like that, and I think that would be useful for a lot of different people — like Ujjwal and Tharun, who are interested in new algorithms, but also people who are interested in some of the larger ideas. I don't want to speak for you guys, but if you're interested in some of the broader ideas and concepts that they have at the conference, then please, you know —
A
We haven't submitted anything, but the conference is coming up and the material will be available, so we'll be able to see what the state of the art is in the field, get an appreciation for that, and maybe apply some of it to the work that we're doing. I don't think there are any other pressing deadlines. Let me put a red mark on this for Neuromatch — this red mark means it's coming soon. Actually, I don't know — oh okay, that's the workshops.
A
All right, there we go — I think I made the distinction there, good. All right, so that's good: we have those two things coming up, and some of these things are kind of open, like the Williamson symbiosis, the diatom motion jerkiness, and so forth. Those are all open to collaborators, and I think a lot of the stuff we're doing with diatoms here involves Tharun, Ujjwal, and Asmit.
A
No, I do have to move it for this week, because I had something else. But okay, I thought, like the —
A
Oh, actually, it starts in a couple of weeks, so that'll be a shift more generally. So yeah.
A
Yeah — okay, great work, Tharun, that was good. So that's good. So, did you and Krishna have anything to update us on?
A
Yeah, I may have to re-up it, because I don't know if he'll see it, but this is the document here. I just covered that, actually, so it's on the recording, but just kind of going through that document: we list the different things you want to submit, and there's a link to the abstracts. Maybe put an abstract in there, and then we can make a determination of whether we want to submit it to Neuromatch.
B
Yeah, so do you have any earlier experience with neural decoding? — Not really? No, I mean, I know a little bit about it.
A
All right, well, let's see — I can go back, and I want to continue on with some of this stuff. So we have this abstract submissions document down here.
A
Let me see if I can find the — I don't know if we have the agenda here, but he's going to be talking about 10 years of OpenWorm this afternoon, at least my time. It might be too late for Tharun and Krishna.
A
I don't think they have the schedule up here — it's hard to get to, because they have it in a special portal — but anyway, this is the event: Stephen Larson is going to talk about 10 years of OpenWorm. I think this is all going to be on YouTube, hopefully, so if you don't catch it, you can catch it later. This is something we talked about last week: OpenWorm is 10 years old — they've been doing this since 2011 — and they're —
A
Some people are more of the mind that the C. elegans brain should have been, quote unquote, simulated by now. But of course, simulating a brain is a little bit different from what we've been doing, which is approximating the brain and approximating the biology of the worm. So this is going to be an interesting set of sessions; I think Steve Larson's talk will be really interesting.
A
There are a lot of interesting talks in here in general — a lot of stuff on visualization, on open data pipelines, on model standards, on standardization of AI approaches in biological modeling, multicellular modeling, and all these other things. So I think this will be a nice event. I don't know if you can still register.
A
I'll let you know how the talk went and everything. So, last week I talked about the Nobel Prize — actually, last week was something I called Oct4 Day, and the reason I called it Oct4 Day was because there's a technique called cellular reprogramming, where they take some sort of somatic cell and they can turn it into a stem cell. To do that, they overexpress something called a transcription factor in the cell. This is something that is created by the expression of a gene — it's an mRNA molecule — and these get expressed at a certain concentration and affect the expression of other genes. They activate different pathways, and ultimately, if you have the right mRNAs expressed at a high level, it can affect the physiology of the cell.
A
The people who won the Nobel Prize identified certain factors in cells that make them change — what they call transforming from one phenotype to another. They identified, I think, four or five transcription factors that were responsible for creating a stem cell from a somatic cell, and one of those transcription factors was called Oct4.
A
So we talked about that Nobel Prize — that was in 2012, to Yamanaka and John Gurdon, who did a lot of work on reprogramming germ cells: taking a mature, differentiated cell's nucleus and putting it into an egg cell, which would then create a clone, a clonal organism. The gene expression program in that case would be rewritten for that cell, and it would behave as if it were a germ cell from a different organism. Cool stuff.
A
So that was the 2012 Nobel Prize, and surrounding that I mentioned something about the 2021 Nobel Prize for Physiology or Medicine, which is actually quite different in terms of what the award was for. It was just announced last week, right before the meeting, so I didn't have a chance to read through the material on it. I promised that I would present it today, and so, with that long-winded introduction, I wanted to talk a little bit about it, because it's actually interesting in a neuroscience sense.
A
This prize is very neuroscience-centric. So this is the article: the 2021 Medicine Nobel Prize winner explains the importance of sensing touch — Ardem Patapoutian, who shared the Physiology or Medicine prize for work on mechanisms crucial to everything from bladder control to knowing where our limbs are. This prize was about cells that sense touch and pressure from the environment.
A
Pressure, heat, that sort of thing. One of the discoveries surrounding this prize was the discovery of the capsaicin receptor. If you've ever eaten hot peppers: there's this small chemical compound called capsaicin, and it hits your tongue and it burns your tongue, and you have to get water to cool it off. That burning sensation is actually a touch receptor — or a pain receptor —
A
That's
activated
in
the
tongue
when
you
eat
that
when
you're
exposed
to
that
compound-
and
it
triggers
a
response
in
your
brain,
you
know
in
your
nervous
system
and
it's
transduced
from
the
environment
into
the
cells
and
then
down
the
nerves,
and
so,
as
it
turns
out,
there
are
a
number
of
different
types
of
cell
like
this,
of
course,
if
you
feel
pain,
you
have,
you
know,
pain,
receptors
that
do
this
sort
of
transduction.
A
You have proprioceptors that do this too: if you're moving through the environment, you feel your body moving — you're aware of where your arms and your legs are. Those are all proprioceptors that take in tiny amounts of force from the environment, and they can tell you roughly where you are in space. It's very subconscious; you don't really know that you're doing it.
A
These are protein molecules that are involved in sensing pressure — pressure-sensitive ion channels. That means that when you have the stimulus on the cell, the stimulus activates these pressure-sensitive ion channels: when there's a certain amount of pressure applied to these receptors, the ion channels are activated. These proteins are responsible — they're embedded in the membrane of the cells, and they either open or close the ion channels, and so they activate the cell. Basically, you're getting environmental transduction.
A
It's
activating
some
molecule
could
be
piezo1
pso2.
It
could
be
there's
another
one
for
the
capsaisin
example.
They
activate
these
ion
channels
and
then
they
activate
the
cell,
and
so
then
the
cell
is
active
and
then
it
transduces
information
from
the
cell
and
down
into
the
nervous
system
where
you
feel
it,
and
so
now
your
brain
is
getting
these
signals
and
they're
regulated
by
these
by
these
proteins
at
the
receptor.
A
They had to do a lot of what they call screens, which means they had to test a bunch of different genes in parallel and try to figure out the one that had the biggest effect on the thing that they were testing.
A
So
that's
a
lot
of
hard
work,
and
so
this
I
think
this
is
leading
us
closer
to
understanding
what
we
call
proprioception
and
you
know
also
pain,
reception
and
other
things.
So
you
know
this
is
where
there's
a
lot
of
unknown.
There
are
a
lot
of
unknown
things
here.
It
seems
like
we
would
know
a
lot
about
it,
but
we
don't
so
this
the
work
on
the
genetic
work
on
this
is
about
20
years
old,
and
so
this
is
interesting.
A
This
this
article
was
an
interview
with
with
this
scientist,
and
you
know
they
talked
talked
a
little
bit
about
why
this
sensing
temperature
is
important
and
touch
so
it's
temperature
and
touch
kind
of
combined.
But
this
is
you
know,
one
mode
of
sensation
that
we
don't
really
think
about
too
much,
but
it
does
play
a
role
in
our
cognition
and
in
our
behavior.
So
so
you
know
it's
fascinating.
How
one
of
the
five
major
senses
mechanistically?
A
I consider it perhaps your most important sense, and I would say a majority of people have probably never even heard of it, or never stopped to think about the sense. Your sensory neurons actually innervate most of your body — your muscles, your skin, everything — and this is the mode of sensation where your muscles and your skin are all kind of monitored: it detects changes in your muscle contraction and changes in your skin.
A
If
there's
like
you
know,
you
put
a
wet
towel
on
your
arm,
you're
able
to
sense
it
without
having
your
eyes
open.
You
can
do
this
and
it's
you
know
it's
an
interesting
set
of
experiments
if
you
decouple
vision
from
proprioception,
but
that's
another
conversation,
and
so
this
is.
This
is
all
very
interesting,
and
it's
very
not
very
well
known
as
to
how
this
works.
We
actually
know,
in
fact,
that
c
elegans.
A
— this is an important mode of sensation in C. elegans, because there's a lot of touch reception in C. elegans, and it especially affects the way that C. elegans moves around its environment. You can do these experiments where you brush a C. elegans on the nose, or on the anterior end, with what they call a worm pick, and the worm will sense and detect that at a very fine-grained level and then move away from it.
E
Yeah, it's a little bit related to the topic: have you heard that some people who don't have vision — they're blind, they have loss of vision — have very good touch sensing? Like, their skin can sense better than normal people's.
E
So is there any correlation between one sense and the other? Because I've heard that some scientists or artists usually touch different things, or they listen to music — they use their senses in one way and it stimulates their senses in another way. How do I say it — I don't know if it's true, but it's said that if Beethoven had to compose music, he would look at art pieces or paintings, and that would inspire him. And some blind people have a very good sense like this. How?
A
Yeah, so there is a whole field of what they call sensory integration, or cross-modal plasticity — in humans, and not so much in C. elegans, although C. elegans has some of this — where you have two different senses that are taking in information from, you know, kind of being in the same place. So if you —
E
Yeah, is there literature on this — like, if we give them one stimulation, would the other sensory organs be more, how do I put it, exaggerated, something like that, if you put them into like a high —
E
— that this would be affected by that, yeah.
A
Yeah, so it happens, I think, in people who go blind early — they've found that in humans, because in humans we have a large neocortex that is devoted to processing the different senses. If you take away vision very early, the entire visual cortex is not connected to the visual system, and what happens —
A
— is that the other senses recruit parts of that cortex, and so they have a heightened representation in that part of the brain. The touch sensation and the auditory sensation recruit those areas, so to speak, and have a larger sort of representational canvas, and that takes the place of vision in those people. So yeah, there is a lot of that sort of cross-modal plasticity. I can, you know, guide you, I guess.
A
If you just look under cross-modal plasticity, you can find some references, but I have some resources on that as well. So — thank you. You're welcome. So yeah, if you have questions about that, we can talk about it later. I did want to go on to this other thing that I found; I was reading this blog post here.
A
This is a diagram of the deep sets that they're talking about here. So, from the post: "I'll discuss two topics I've been thinking about a lot recently: deep sets and neural processes." This is about meta-learning, actually. Meta-learning is this learning-from-learning sort of idea. In the standard supervised learning setting, we're often interested in learning, or approximating, a function that maps inputs to outputs.
A
This is kind of like what pix2pix does. A supervised learning algorithm may be considered an algorithm that, given a dataset of such input-output pairs, returns a function approximator: if L is a good algorithm, f-hat is equivalent to f in some meaningful sense. The function approximator is equivalent to the function that maps inputs to outputs.
A
So the idea is that you're trying to approximate that mapping, and you want to get close to it — that's what you're doing here. In the meta-learning setting, rather than assuming we have access to one such dataset, we assume that our dataset is comprised of many tasks, each containing a context set and a target set — kind of like the mapping between these two.
A
Each
of
these,
in
turn,
contain
a
variable
number
of
input
output
pairs.
So
now,
instead
of
having
just
one
mapping,
you
have
a
variable
number
of
them.
Our
assumption
is
that
well,
the
mapping
between
inputs
and
outputs
may
differ
across
tasks.
The
tests
share
some
statistical
properties
that,
when
modeled
appropriately,
should
improve
the
overall
performance
of
the
learning
algorithms,
so
the
goal
of
metal
learning
is
to
produce
a
black
box
algorithm
that
maps
from
data
sets
to
function
approximators.
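The two settings being paraphrased here can be written compactly (notation assumed, following the usual presentation rather than the blog post's exact equations):

```latex
% Supervised learning: one dataset, one approximator
\hat{f} = L\big(\{(x_i, y_i)\}_{i=1}^{N}\big), \qquad \hat{f} \approx f

% Meta-learning: the learned algorithm maps each task's context set
% directly to an approximator evaluated on that task's target set
\hat{f}_t = L_{\text{meta}}\big(D_c^{(t)}\big), \qquad
\hat{f}_t(x) \approx y \quad \text{for } (x, y) \in D_{\text{target}}^{(t)}
```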
A
Our
goal
is
to
use
data
of
tasks
to
train
a
model
that
accepts
new
training,
sets
and
produce
function.
Approximators
that
perform
well
on
data.
That's
not
seen
before
or
is
not
labeled
meta
learning
learns
a
learning
algorithm.
That
is
appropriate
for
all
the
observed
tasks.
So
that's
learning
to
learn
is
that's
what
mental
learning
is.
So
one
of
the
most
compelling
motivations
for
metal
learning
is
data
efficiency,
so
your
average
setting
requires
a
lot
of
data
to
learn.
A
So
we
want
to
reduce
that
amount
of
data.
We
want
to
make
the
learner
more
efficient
on
samples
that
are
presented
to
it.
If
we
can
do
that,
then
we
can.
You
know
this
is
something
we
can
do
with
something
like
fuchsia
learning,
where
we
only
introduce
a
couple
of
examples
and
it'll
pick
up
the
larger
category.
That's
basically
what
we're
trying
to
do
so.
Here's
some
examples
this
this
segway
scooter.
You
know,
does
this
generalize
different
types
of
wheel,
transport?
A
You
know,
or
does
this
letter
approximate
a
a
number
of
other
letters
in
a
in
a
rating
system?
So
you
know
that
you
know
that.
That's
something
that
you
want
to
try
to
test,
given
your
algorithm
and
if
you
can
get
few
shot
learning
you
can,
like
you
know,
maybe
introduce
a
couple
of
different
types
of
wheel
transport
and
get
the
entire
set
of
wheel
transports
that
exist,
and
so
that's
the
the
kind
of
thing
you're
trying
to
do
with
fuse
shot
learning.
A
So y sub t is what we wish to make a prediction at, and D sub c is the context set to condition on, and so this is the computational structure of a neural process. This is an encoder-decoder perspective here, so we can build this: this is an encoder and then the decoder. On the left we have these context sets x and y, and that represents the context set. The e block is the encoder, and then the alpha, or the a circle, is the pooling operation.
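To make the encoder, pooling, decoder structure concrete, here is a minimal sketch of a conditional-neural-process-style forward pass with untrained random weights. The layer sizes and the choice of mean pooling are my own assumptions for illustration, not the talk's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    # Random-weight MLP layers; weights are placeholders, not trained.
    return [(rng.standard_normal((m, n)) * 0.5, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, h):
    for W, b in layers[:-1]:
        h = np.tanh(h @ W + b)
    W, b = layers[-1]
    return h @ W + b

enc = mlp([2, 32, 16])   # e block: embeds each (x, y) context pair
dec = mlp([17, 32, 1])   # decoder: maps (x_target, r) to a prediction

def neural_process(xc, yc, xt):
    pairs = np.stack([xc, yc], axis=1)        # context set as (x, y) pairs
    r = forward(enc, pairs).mean(axis=0)      # pooling step -> representation r
    inp = np.concatenate([xt[:, None], np.tile(r, (len(xt), 1))], axis=1)
    return forward(dec, inp)[:, 0]

xc = np.array([0.0, 1.0, 2.0])
yc = np.sin(xc)
preds = neural_process(xc, yc, np.array([0.5, 1.5]))
```

Because the pooling is a mean over context pairs, the prediction cannot depend on the order in which the context points arrive, which is exactly the set property discussed next.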
A
So this kind of goes through the math, and this shows the training procedure here, and then deep sets are kind of an extension of this. It goes into the inner workings of this neural process. A useful way to think about them is as follows: our decoder is a standard neural network that maps x to y, with one small tweak: we condition it on an additional input, r, which is a data-set-specific representation.
A
So this is an input that tells you something about the data set. The encoder's job is really just to embed data sets into an appropriate vector space. CNPs specify such an embedding and provide a way to train this embedding end to end with a predictive model. So what is the correct form for such an embedding?
A
So what does a function that embeds a data set in a vector space look like? This is different from a machine learning model where we expect inputs to be instances of some space, like a vector space, or maybe sequences.
A
What we want is a function approximator that accepts sets as inputs. So we want to have a set of inputs that is treated as a set instead of as a space. So what are the properties of sets, and what properties do we want such a function to have? Sets come in varying sizes, and we want to be able to handle these arbitrarily sized sets. You know, you have different types of things in the world.
A
They don't all have the same number of samples in them. Sometimes they're very small, sometimes they're very large, but you want to treat them uniformly and normalize across sets.
A
The other thing is that sets generally have no order, at least in this context. Your sets are just kind of there; there's no real information about ordering. It's just information about this thing, this category. And so any module operating on sets must be invariant to the order of the elements.
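A sum (or mean) pooling gives you exactly that invariance, and it also handles sets of any size. A tiny sketch, where the tanh feature map is just a placeholder for a learned per-element network:

```python
import numpy as np

def embed_set(points, phi=np.tanh):
    """Deep Sets-style embedding: apply phi to each element, then sum-pool.
    Sum pooling makes the result independent of element order and
    well-defined for any set size."""
    return phi(np.asarray(points, dtype=float)).sum(axis=0)

a = embed_set([0.3, -1.2, 2.0])
b = embed_set([2.0, 0.3, -1.2])       # same set, different order
c = embed_set([0.3, -1.2, 2.0, 0.7])  # a different-sized set also works
```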
A
They give some previous examples from earlier research on this problem, kind of going through some of these things, and then the summary is that we've taken a first look at neural process families, these model families that allow us to do probabilistic meta-learning.
A
The modeling of sets is also a very interesting subfield. It's at the frontier of machine learning research, but it's not really very well developed. So yeah, I'll be talking more about some of this stuff later, in other posts. But this is something that, you know, if you're interested, maybe we follow up on. I'm not really sure how hard this is to implement or if this is related to some of your work.
A
But this post is definitely maybe going to be useful to some of you, especially in terms of thinking of inputs as sets. We haven't really talked too much about some of these things, but I think we talked about category theory in my other group, and this is very similar in some ways to category theory, although the mathematics is very different, and so is the way you treat the input sets. You know, they don't use these categories, these things like functors and other terminology, which makes this problem a little bit different, but it's very similar in spirit, in a lot of ways, to what's going on here, where you don't have, you know, labeled data; you have sets of instances and things that are not necessarily ordered categories, but they're sort of semi-structured, and so you're putting them into this model and you're trying to learn from some of the rules that already exist. You're trying to do this meta-learning, and so this is an open area.
A
I don't think there are a lot of hard and good results to show. It's just kind of what people are thinking about: how to do meta-learning and how to do this better. So, okay, I was going to mention that this is again Hacktoberfest month. So this is the Digital Bacillaria
repository, and we have the badge here on this repository. We also have the Hacktoberfest badge on the group meetings repository, and so if you look here, this is more along the lines of the different projects that are ongoing. So if you go to this group meetings project board, you can see some of the things we're involved in. If you want to participate in some of these things, pick an issue and mention it in the Slack; say, I'm interested in issue 90 or 100.
A
Actually, 100 is finished, but some of these issues are live and they're very interesting. So if you want to pursue some of these things, you know, ask about it by number and we can discuss it, and maybe you can make a contribution here. We also have the Hacktoberfest badge on DevoLearn, which is, of course, the software and learning platform that we have for, you know, machine learning. We also have data science demos that a lot of people have contributed.
A
So, finally, I wanted to finish with some papers that we've been talking about, or that maybe are related to some of the things that we're talking about, and I'm going to go over the top of the hour a little bit to do this.
A
One of these is this paper on multi-layer C. elegans connectomes. So this is a newer paper. This is by Eviatar Yemini, who's worked with the OpenWorm Foundation before; Ed Bullmore, who's a person who does a lot of fMRI work, especially networks; and William Schafer, who's also been involved in OpenWorm quite a bit.
A
So this is the multi-layer connectome of C. elegans. The abstract reads: connectomics has focused primarily on the mapping of synaptic links in the brain, yet it is well established that extrasynaptic volume transmission, especially via monoamines and neuropeptides (so these are neurochemicals in the brain), is also critical to brain function and occurs primarily outside the synaptic connectome.
A
So our embryo networks that we talk about in this group are actually based on the principle of, you know, paracrine signaling, or cells that are near each other sharing chemical signals with each other, and so this is actually something in that direction. To map this out, we need to understand how genes are expressed and what they're producing in terms of the chemicals that allow for this type of data set to be constructed.
A
So these types of networks exhibit a distinct set of topological properties.
A
The monoamine network displays a highly disassortative, star-like structure with a rich club of interconnected broadcasting hubs. So these are cells that are putting out the signal into the extracellular matrix, and in fact this is a broadcast model, because it's broadcasting from the cell outward. You usually have this area of influence where the signal is strong, and then it decays sort of linearly as you move away from the cell.
A
So the neuropeptide network is, you know, being expressed at maybe shorter ranges; maybe it's being expressed more regularly across cells, and the like. So there are differences in these two different types of chemicals, in how the genes are being expressed to produce them, and in their diffusion.
A
Despite the low degree of overlap between the extrasynaptic (or wireless) and synaptic (or wired) connectomes, we find highly significant multilink motifs of interaction, pinpointing locations in the network where aminergic and neuropeptide signaling modulates synaptic activity. So these chemical signals that are out there actually modulate synaptic activity.
B
A
The signals are also outside of the cells, like, you know, in the space in between the cells, so they're also regulated by this. So synapses are these connections between two cells. Usually the cell sends out an axon, and there's some junction where cell processes meet, and this is where a lot of the chemical communication occurs.
A
These
chemicals
that
are
sent
out
into
the
extracellular
area
will
affect
this
sort
of
transmission
milieu
by
interfering
or
augmenting,
what's
being
transmitted
at
the
synapse.
So
it's
usually
things
that
are
being
exchanged
across
this
short
gap
between
the
cells
processes.
A
So these are images of a C. elegans worm, an adult worm, and these images here show some of these receptors at very high density, and so the idea is that these receptors will take up these chemicals and use them for different processes.
A
So let's see: in panel A these are the RIM tyramine-releasing neurons. You can see that there's this core of neurons where there's a release from these two neurons that are affecting these cells, and then in panel B we have the RIC octopamine-releasing neurons, showing outgoing synaptic edges and neurons expressing one or more of the three octopamine receptors in gray.
A
So this is again where you have a subset of neurons that are putting out this chemical signal, a subset of neurons that are taking it up, and then the rest of the neurons here around the edge. So this is how they construct this connectome, and then panel D is this multi-layer expansion of the synaptic, gap junction, monoamine, and neuropeptide signaling networks.
A
So you can see that the topology changes as you consider different types of connection. You have a gap junction network, which is where you have cells that are adjacent and share what they call an electrical synapse, which is a gap junction. And there are other types: you can use different criteria to build different types of network.
A
That's the key takeaway from this: the network looks very different depending on what you're considering as the basis for it. And so then they have a table showing the number of receptor-expressing cells that do not receive synapses from releasing cells, and the number of connections in each layer that are non-synaptic, including connections between neurons within the same class. So this is cells with no synaptic input.
A
These are non-synaptic edges, for different neurotransmitters, and you can see that there are differences in the numbers. And then this forms a multiplex network: the full C. elegans connectome can be considered as a multiplex, or multi-layered, network. You can put these different network topologies on top of one another because they sort of function together; the entire nervous system has all these different things going on simultaneously.
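One way to think about a multiplex network computationally is as a stack of adjacency matrices over the same node set, one matrix per interaction type. The neuron names and edges below are made up purely for illustration; they are not the paper's data:

```python
import numpy as np

# Toy multiplex connectome: the same 4 neurons, three interaction layers.
# Neuron names and edges are hypothetical, for illustration only.
neurons = ["AVA", "AVB", "RIM", "RIC"]
n = len(neurons)

layers = {
    "synaptic":     [("AVA", "AVB"), ("AVB", "RIM")],
    "gap_junction": [("AVA", "RIM")],
    "monoamine":    [("RIM", "AVA"), ("RIM", "AVB"), ("RIC", "AVA")],
}

idx = {name: i for i, name in enumerate(neurons)}

def adjacency(edges):
    A = np.zeros((n, n), dtype=int)
    for u, v in edges:
        A[idx[u], idx[v]] = 1
    return A

# Stack the layers into an (L, n, n) tensor: one topology per interaction type.
multiplex = np.stack([adjacency(e) for e in layers.values()])
```

Stacking makes the point from the talk literal: the same neurons participate in several networks at once, and each slice of the tensor can have a completely different topology.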
A
So you can stack these on top of one another and say that when you're considering one type of neurotransmitter versus another, the topology shifts. And then you can say things about behavioral circuits. So if you consider a behavioral circuit, some behavioral circuits are reliant on some types of neurotransmitters but not others.
A
So this is different from another type of behavioral circuit: say, for example, the hunger circuit, the circuit that controls satiety and finding food, versus one that's involved in navigation. Those circuits are going to be different not only in their structure, like their gap junction network structure, but in the way they utilize neurotransmitters.
A
Let's see if there's any other nice stuff in here. This is the monoamine rich club. A rich club is where you have maybe a couple of neurons that are doing most of the work in the network, or that are connected to most of the other cells in the network, and this figure is not really that easy to read.
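The rich-club idea can be quantified: for each degree k, measure how densely the nodes with degree above k connect among themselves. A minimal sketch on a toy hub-and-spoke graph (my own example, not the paper's data):

```python
import numpy as np

def rich_club(A, k):
    """Rich-club coefficient of an undirected adjacency matrix A at degree k:
    the density of connections among nodes whose degree exceeds k."""
    deg = A.sum(axis=0)
    rich = np.where(deg > k)[0]
    m = len(rich)
    if m < 2:
        return 0.0
    edges = A[np.ix_(rich, rich)].sum() / 2   # edges among the rich nodes
    return 2 * edges / (m * (m - 1))

# Toy graph: nodes 0 and 1 are hubs connected to everything, including each
# other; nodes 2-5 connect only to the hubs.
A = np.zeros((6, 6), dtype=int)
for hub in (0, 1):
    for other in range(6):
        if other != hub:
            A[hub, other] = A[other, hub] = 1
```

Here the two hubs form a perfect rich club: among nodes with degree above 2 (just the hubs), every possible edge exists, so the coefficient is 1.0.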
A
This graphic is a kind of graph that they like to use for visualization, what they call hive plots. They're showing the connections between different types of functions, so sensory, motor, and interneurons, and how they're connected together using these criteria. So your interneurons.
A
So that's all I'll talk about in that paper, and this actually shows neuropeptide networks across different types of neuropeptide. This is really cool: it has all the different molecules that they've identified in this study and their different topologies. You can see some are very small networks; some are like this fan-like thing, where they have one very highly connected node here and then the rest are kind of peripheral.
A
You
have
others
that
are
other
networks
that
are
more
densely
connected,
so
these
connections,
these
have
meaning
you
can
characterize
these
using
a
statistical
parameter,
but
you
can
also
it
does
have
meaning
in
terms
of
you
know
the
the
networks,
the
cells
that
are
key
in
the
network.
So
if
you
were
to
knock
that
cell
out,
the
network
would
collapse,
and
so,
if
you
imagine,
if
you
knocked
out
a
cell,
it
would
collapse,
maybe
for
that
neurotransmitter
function,
but
the
other
neurotransmitter
functions
might
retain
their
ability
to
connect
other
cells.
A
And this is something you might find if there's, like, some mutant where, you know, some cell is deleted or some neurotransmitter is deleted; that will affect the behavior, and you can trace it back to these types of networks.
A
So that's all I'm going to talk about for that today. I'm going to talk about one more paper, and I think this one would be good here. So a couple weeks ago we talked about, I think it was Akshay, who's working on tiling the sphere for the axolotl atlas. He's taking the axolotl data, and he created a sphere in,
A
I think it was in Python, where you have this three-dimensional sphere that you can move around and explore, and then you can tile it with these microscopy images of the axolotl embryo. And so we have the data to create this map, and so you have this thing that's on a sphere, right? You'll have this model that's on a sphere, and so now you have this embryo, but it's not dynamic; it's just kind of a static image. It could be dynamic if we put different,
A
you know, images from different time points on it. But then the question is, okay, we have this changing set of images on the sphere; now, what about representing dynamical systems on such a sphere? If we can make a static sphere of the axolotl embryo, and then we can extend that out to multiple time points, can we go a step further than that and create dynamical systems on the sphere? And this is what this paper is about.
A
This was in the journal Chaos. So Kuramoto models are these models of synchronized oscillators. You might have an oscillator for, like, neural activity; you might have an oscillator like something like a firefly that's flashing. You know, they generate light through a sack of luciferase that they have on the back end of their body, and they flash it as a means of communication, and this flashing can occur at different frequencies.
A
So
this
is
a
oscillator
that
oscillates
at
a
certain
period,
or
maybe
it's
unsynchronized
in
groups
they
actually
synchronize
their
behavior
and
kuromoto
was
a
scientist
who
created
a
model
of
this
type
of
behavior,
the
sort
of
synchronization
of
these
oscillators.
So
you
can
apply
it
to
you,
know
fireflies.
You
can
apply
it
to
neurons
groups
and
neurons.
You
can
apply
it
to
anything
that
has
an
oscillation
period
if
you
have
a
lot
of
agents
with
different
oscillation
periods
to
watch
them
synchronize.
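The classic model is easy to simulate. Here is a minimal sketch (the coupling strength, frequency spread, and step count are arbitrary choices of mine) showing the order parameter r, which is near zero for scattered phases and near one when the oscillators lock:

```python
import numpy as np

rng = np.random.default_rng(0)

def kuramoto(n=50, K=2.0, dt=0.05, steps=400):
    """Classic all-to-all Kuramoto model:
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases
    omega = rng.normal(0, 0.5, n)             # heterogeneous natural frequencies
    r0 = np.abs(np.exp(1j * theta).mean())    # order parameter before coupling acts
    for _ in range(steps):
        # sin(theta_j - theta_i), summed over j, for every oscillator i
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / n) * coupling)
    r1 = np.abs(np.exp(1j * theta).mean())    # order parameter after settling
    return r0, r1

r0, r1 = kuramoto()
```

With coupling well above the critical value, the final order parameter is much larger than the initial one: the population has synchronized.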
B
A
The model places particles on the unit sphere in d-dimensional space, so it's just this bunch of particles on the surface of a sphere, and they're interacting with one another. The particles are self-propelled and coupled all-to-all, which means there are no, you know, particular pairwise dependencies; each one is just kind of associating with every other particle. For d equals two, the system reduces to the classic Kuramoto model of coupled oscillators; for d equals three, the model has been proposed to describe the orientation dynamics of swarms of drones or other entities moving about in three-dimensional space.
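Here is a minimal sketch of the d equals three version, with the natural-rotation (frequency) terms dropped so that everything synchronizes. The coupling form (pull each unit vector toward the population centroid, projected onto its tangent plane) is a standard way to write the higher-dimensional generalization, but the specific parameters are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_kuramoto(n=100, K=1.0, dt=0.05, steps=300, d=3):
    """Kuramoto-like dynamics on the unit sphere in d dimensions:
    dx_i/dt = (K/n) * sum_j (x_j - (x_i . x_j) x_i),
    i.e. each particle moves toward the population mean along its
    tangent plane. Natural-rotation terms are omitted for simplicity."""
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # random points on the sphere
    r0 = np.linalg.norm(x.mean(axis=0))                # initial order parameter
    for _ in range(steps):
        m = x.mean(axis=0)                             # population centroid
        tangent = m - (x @ m)[:, None] * x             # project onto tangent planes
        x += dt * K * tangent
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # stay on the sphere
    r1 = np.linalg.norm(x.mean(axis=0))
    return r0, r1

r0, r1 = sphere_kuramoto()
```

Starting from particles scattered over the sphere (order parameter near zero), the population collapses into a single synchronized cluster (order parameter near one), which is the clustered state shown in the paper's figures.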
A
Here
we
use
group
theory
to
explain
recent
discoveries
of
that.
The
model
shows
low
dimensional
dynamics
for
all
n
greater
than
three
and
or
at
or
greater
than
three,
and
to
clarify
what
it
admits
why
it
admits
the
analog
of
the
antennas
and
nsats
in
the
continuum
limit.
A
So
I'm
not
really
sure
about
what
that
means.
It's
beyond
my
advanced
skill
set
in
chaos
theory,
but
the
underlying
reason
is
that
the
system
is
intimately
connected
to
the
natural,
hyperbolic
geometry
of
the
unit
ball
in
this
geometry,
isometries
form
a
lead
group
consisting
of
higher
dimensional
generalizations
of
the
mobius
transformations
used
in
complex
analysis
once
these
connections
are
realized,
the
reduced
dynamics
and
generalized
form
of
this
fellow
immediately.
This
framework
also
reveals
a
seamless
connection
between
the
finite
and
infinite
end
cases.
A
So,
finally,
we
show
the
special
forms
of
coupling
yield
gradient
dynamics
with
respect
to
the
hyperbolic
metric
and
obtain
globally
stable
results
about
convergence
of
the
synchronized
state.
So
I
think
what
they're
trying
to
do
here
is
they're,
taking
this
model
they're
taking
these
I'm
just
kind
of
hoping
for
some
visualizations
here.
D
A
So you have these different states, with different sorts of ways the particles migrate across the sphere, and the idea is they will either cluster like this, or they'll be all over the place like this. As you can see, they're either scattered and random, or they're synchronizing in one location, and this was generated in Python and visualized with Plotly.
A
So you can do this sort of simulation, where you have the surface of the sphere with all these particles, and you can use this type of system, with different densities of particles, to observe synchronization behavior. And so I don't know that you need to apply really high-level math to get some interesting results in something like this. The reason I show it in the group is because I wanted to show what this looks like.
A
These
are
for
different
initial
conditions,
so
you
have
different
initial
conditions
to
start
off
with
and
they
result
in
this
sort
of
distribution
of
of
particles,
and
so
in
this
case
it's
a
different
type
of
criterion.
So
you
see
different
results
here
and.
A
That's
all
there
is
to
that
paper,
so
I
mean
this
is
again.
This
is
a
bit
beyond
what
we're
kind
of
okay,
so
thanks
krishna
for
attending
so
anyways,
that's
all
I
have
for
today
next
week
we'll
talk
about.
I
think
susan
might
present
on
her
materials
that
she's
been
working
on
she's,
been
working
on
some
physical
models
of
the
embryo
and
some
signal
processing
stuff.
I
don't
really
know
what
that.
B
A
She's going to present also on some soft materials, so it's going to be a mix of things that she's going to focus on. I don't know if she's going to present next week, but she said that she might be able to do it, and so I look forward to that. And if you have any updates on the Bacillaria work... and if we have Akshay, maybe he can update us on the stuff he's doing with the axolotl embryos. And anyone else, if they want to present something, they're welcome to do it.