Description
Eighth DevoWormML meeting, October 23. Attendees: Bradly Alicea, Richard Gordon, Jesse Parent, and Vinay Varma
C: We might as well get started, and if anyone else comes up during the proceedings we can let them in and they can catch up. So welcome to today's DevoWormML meeting. I'm going to present on something called computational pareidolia, and then we'll talk about the pre-trained model, so Vinay, we can go through that in a little bit more detail. I worked on it yesterday, actually, basically fleshing out the outline, so we can go over that and then we can talk about what Vinay wants to add in more detail.
C: So what is pareidolia? Well, it's a Greek word that means beside or beyond the form or image. And the definition, according to Wikipedia, is interpreting a vague stimulus as something known to the observer. So you see these examples here: we have Cookie Monster, where a trash can in a video game serves as a mouth and they put eyes on it; this power outlet looks like a face; and this coffee cup has foam in the shape of a face.
C: So this is a good example. This isn't pareidolia necessarily, but it's an ambiguous image. Ambiguous images are images like this thing here on the left: it's a duck, it looks like a duck, but it also looks like a rabbit. So if you focus in on it the first time, you'll see either a duck or a rabbit, and then, if you focus in a second time, you might see the other image. So you see a duck the first time, then you see a rabbit.
C: So if you look at it kind of sideways, you can see the ears of the rabbit here, or, if you look at it this way, it's the bill of a duck, and there are a lot of these ambiguous images. In this case, Google Cloud Vision is actually able to interpret these ambiguous images either way, depending on how you orient the image.
C: So here they've actually taken this concept and turned it into a genetic algorithm that creates faces. In this case you have a field, a two-dimensional array, with a bunch of geometric shapes that are stacked together in different ways, and then you use this genetic algorithm to select for things that look more and more like a face.
C: So in this case we have a bunch of polygons that are overlapping, and we select for the polygon overlaps that look like a face, and eventually we get to something that approximates the Mona Lisa. So again, we have something that isn't organized, I mean it's just random polygons stacked on top of one another with different shape characteristics, and they form this face.
C: Well, I mean, they don't necessarily form it on purpose; it's just that you have these shapes that are overlapping, and you do rounds of selection on the different shapes that look more like a face. So you're kind of bootstrapping this face into existence from a bunch of different shapes being presented. I mean, it takes a while for this to converge, depending on the initial condition.
C: You get these different sets of polygons that overlap, a population of these things in the first round, and then you select the ones that kind of look more like a face. So the second image here shows that you have some formation of something that looks like a face and hair, and then you select for that, and it breeds another set of similar images, and then you select again, and it breeds another set of images like this.
C: So there is a genetic algorithm, and the fitness function is how much it looks like the Mona Lisa. You basically take your first round of images, so you might have a population of a thousand images that all look different, and you ask which ones look most like the original image; then you select those, produce a bunch of new images from those in a reproduction step, and then you find the ones that are most closely related to the original image. And you do this over and over.
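The selection loop described above can be sketched in a few lines. This is only a toy illustration, not the actual polygon renderer from the Mona Lisa project: individuals here are flat lists of 0/1 "pixels", the target image stands in for the fitness function, and the population size, generation count, and mutation rate are arbitrary assumptions.

```python
import random

def evolve_toward_target(target, pop_size=200, generations=100,
                         mutation_rate=0.02, seed=0):
    """Toy genetic algorithm in the spirit of the polygon Mona Lisa.

    Individuals are flat lists of 0/1 'pixels'; fitness is the number
    of pixels that match the target image. Each generation keeps the
    better half and refills the population with mutated copies of the
    survivors.
    """
    rng = random.Random(seed)
    n = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection step
        children = [
            [1 - g if rng.random() < mutation_rate else g for g in parent]
            for parent in survivors                 # reproduction with mutation
        ]
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Because the best individuals survive unmutated each round, fitness can only improve over the generations, which is the "bootstrapping a face into existence" effect described above.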
C: So in genetic algorithms you have a population of different images, like a population of organisms. I don't know what the population size was for this, I can't remember, but say it's something like 10,000 images that have different configurations of these polygons. You take the original image and say this is the fitness function, this is what I want to approximate, and then the ones that are closer to it get selected and the ones that aren't don't. You do this over and over, and it takes a while.
C: It doesn't look exactly like the Mona Lisa, but our perceptual systems see it as that, and the algorithm sees it as such. So again, it actually is just a bunch of polygons; we're just interpreting it as a face. The neuroscience of pareidolia is a bit different than that, of course. So here's an example, "Seeing Jesus in toast: neural and behavioral correlates of face pareidolia." This is described in this article as an interaction between our face recognition and our visual perception abilities.
C: So in our brain, our attentional system uses top-down mechanisms and bottom-up mechanisms. Attentional processing is both: bottom-up, where you're finding random patterns in the environment and evaluating them, and top-down, which covers things like object recognition and face recognition, so you're taking these things that are random and applying some sort of mental model to them. So how do you recognize someone's face?
C: Well, some people argue that we have neurons that encode specific faces, but in general we can recognize a face more readily than we can recognize some arbitrary object; we have a stronger ability for that. We also have strong abilities for very familiar objects, like pencils or mugs, things you see every day, versus things you don't see every day. And so because of that, we have this top-down mechanism that enforces some sort of order on all those sorts of things that we see in the environment.
C: Faces, again, unlike other objects, are processed as a whole thing, so you look at something and you either see a face or, sometimes, not a face. I've got to let someone into the meeting here. Hey Jesse, we were just in the middle of this talk on computational pareidolia, so I can go over it a little bit with you afterwards, but welcome. All right, let me go back to this slide.
C: So faces are processed holistically: you see a face or you don't, and sometimes you falsely recognize things as faces as a result. So you look at a power outlet, or an electrical outlet, and it looks like a face, or you might look at some set of clouds and they look like a face, but they're not faces, and the reason for that, of course, is this top-down attentional mechanism. And then, as I mentioned before, you can also have things like auditory stimuli that exhibit the same phenomenon.
C: I don't know if a lot of you have heard the thing about misheard song lyrics: you're listening to song lyrics and you hear certain lyrics that are actually wrong. That's another form of this pareidolia, where you're taking familiar things and sticking them into the ambiguous stimulus.
C: So, for example, if the musician is singing things interspersed with the beat of the music, you might misinterpret some of the lyrics as words that are maybe more familiar to you. So that's the neuroscience of it. And this is not the neuroscience of it, but this is how you can replicate it in a computer: you simply try to form something that looks like a face, and then it'll produce something that looks like a face, but it's not really one.
C: So this is, of course, what we're talking about here when I say a false positive, and pareidolia is a false positive of things like faces or shapes that you see in the environment. So here we have this Venn diagram. You have true positives, which are this area here: you see a face and it is a face. True negatives are things that are properly rejected as faces, so things you see and you say, no, that's not a face. False positives, of course, are the territory of pareidolia.
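The Venn diagram being described maps directly onto the standard signal-detection confusion matrix. A minimal sketch, where the face/no-face labels and example data are hypothetical:

```python
def confusion_counts(actual, predicted):
    """Tally the four signal-detection outcomes for a face/no-face detector.

    actual and predicted are sequences of booleans (or 0/1), one entry
    per stimulus. A pareidolia event corresponds to a false positive:
    the detector says 'face' when no face is actually present.
    """
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for is_face, says_face in zip(actual, predicted):
        if says_face and is_face:
            counts["TP"] += 1          # saw a face, and it was a face
        elif says_face and not is_face:
            counts["FP"] += 1          # pareidolia lives here
        elif not says_face and not is_face:
            counts["TN"] += 1          # properly rejected as a face
        else:
            counts["FN"] += 1          # a real face that was missed
    return counts
```

For example, `confusion_counts([1, 1, 0, 0, 0], [1, 0, 1, 1, 0])` tallies one hit, one miss, two pareidolia-style false alarms, and one correct rejection.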
C: Going back to this part here, where we talked about how faces are processed holistically: you see something that's familiar, and it's good to see something that's maybe familiar versus not being able to tell what something is in the environment. If there's something like a snake in the grass and you see it, but it's not really a snake, you still benefit from that more often than not. This is just kind of the technical language for it.
C: Does it elicit some feeling or not? And people have a range of different sensitivities to it, so some people are much more sensitive to some stimulus than others, and they'll classify it as a tone even if most people don't, and you can actually measure this on a performance curve as well. So even in people it's highly variable in terms of what they're responding to, and so again, it's not necessarily a false thing.
C: It's just the way we recognize patterns. But these kinds of things can also be elicited by malicious actors, and so you can create stimuli in a machine learning context where you produce something that looks like, maybe, a face. Maybe you want to replicate fingerprints, or you want to replicate a face for some reason, to fool the recognition system.
C: When they use these terms, they do sort of connote certain things, but they're just using them as a stand-in for different things that they want to convey. So "Why deep-learning AIs are so easy to fool" is one example of an article where they show this sort of thing. They think of it in terms of model vulnerability versus model robustness, but you could think of it in terms of some sort of model pareidolia response, depending on what you want to do.
C: But these are a bunch of articles that talk about this topic, so this is the interaction of this sort of pareidolia effect and people actually attacking systems like deep learning models and AIs for different purposes. This "Houdini: fooling deep structured prediction models" paper is kind of interesting, and there's "AdvHat: real-world adversarial attack on ArcFace Face ID system." So there's a thing in computer science called an adversarial attack.
C: An adversarial attack is where you might, say, make a false-positive data set to sort of flood the zone with false examples. Sometimes you can use adversarial techniques to improve models, as in a GAN, but in this case they're using them to elicit false results in a face ID system, which, if you want to recognize faces correctly every time, is a bad thing, but if you want to create a system that produces faces for art or something like that, is actually a good thing.
C: So you can use these things in different ways, but this phenomenon can be elicited in machines, and those are some examples of cases for that. So here's another example. This is actually from that Nature article, "Why deep-learning AIs are so easy to fool": this is an image from that article, and they talk about being able to rotate objects to confuse DNNs, and again, DNNs are really dependent on this sort of false positive/true positive system of classification.
C: So you can actually rotate images, like we saw with the duck and rabbit illusion, and elicit different responses from the system. Here we have a stop sign, and this is generally the orientation that we see stop signs in when we drive down the street, or, probably more importantly for this example, when a self-driving car drives down the street. But here the stop sign isn't right-side up; it's all tilted.
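A tiny way to see the orientation brittleness being described, without a real DNN: the toy exact-template "classifier" below (an assumption of mine standing in for the stop-sign model, not anything from the article) recognizes a pattern in its canonical orientation but has no built-in rotation invariance.

```python
def rotate90(img):
    """Rotate a binary image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def classify(img, templates):
    """Toy exact-template matcher.

    Like the DNNs discussed above, it has no notion that a rotated
    version of a known object is the same object: the pixels either
    line up with a stored template or they don't.
    """
    for label, template in templates.items():
        if img == template:
            return label
    return "unknown"
```

Feeding it the same pattern upright versus tilted gives two different answers, which is the stop-sign failure in miniature.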
C: So it ends up recognizing all these other things out of one image if you just change the orientation of the shape. And this, of course, you can also see from the Mona Lisa example: if you have these shapes that are squished or combined in different ways, you can elicit certain responses, in the nervous system but also in a neural network.
C: So it's sitting on this surface, and I don't know how it's detecting a manhole cover, maybe from the pattern underneath this mat, but it's recognizing it as such. And then this is my favorite: a mushroom that's twisted so that it's canting sideways, and that's classified as a pretzel, because it's twisted like this. So you get this sort of effect where, if you change the orientation of these things, you change the identity of it, or at least what the algorithm uses as the identity.
C: So the caption here is that even natural images can fool a DNN, because it might focus on the picture's color, texture, or background rather than picking out the salient features. The way we process objects is actually a little bit different from faces as well: you're picking out certain features in the image, like corners or edges, and the relationships between those, and so here we can see how that could be manipulated or changed.
B: Maybe what's missing from the DNNs is what's called mental rotation. There's a whole body of research going back decades on mental rotation. For example, if you took the stop sign rotation example, the amount of time it takes you to recognize it as a stop sign is longer if you have to rotate it back to the standard orientation in your head, and you can actually measure this. So I'm going to suggest that the difference here is that we recognize this, but first we rotate it to a canonical position.
B: Then you ask: is it the same, or is it the mirror image? And the time it takes the person to make the decision depends on the angle it was rotated. So there actually seems to be a rotation algorithm going on in your brain that iterates until you can decide whether it's a match or not, and that's not the same as being presented with the object at many different orientations, where you could just detect that it's there.
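The iterate-until-a-match idea can be mimicked in code. A sketch, assuming rotation proceeds in 90-degree steps, where the step count plays the role of the reaction time measured in the mental-rotation experiments:

```python
def rotate90(img):
    """Rotate a binary image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def steps_to_match(img, template, max_steps=4):
    """Iteratively rotate until the image matches the template.

    Returns the number of rotation steps needed (a stand-in for
    reaction time), or None if no orientation matches, e.g. for a
    mirror image of a chiral shape.
    """
    current = [list(row) for row in img]
    for step in range(max_steps):
        if current == template:
            return step
        current = rotate90(current)
    return None
```

Larger misorientations take more iterations, echoing the finding that decision time grows with the angle of rotation, while a mirror image never matches at any orientation.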
C: Yeah, because in a DNN, or maybe in any neural network, you really have to train it that way. So you might get a good set of results if the images are all in one orientation, but if they're in different contexts, or even different backgrounds, it'll have the same problem.
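One common remedy for the orientation problem just described is to put the variation into the training set itself, i.e. data augmentation. A minimal sketch, where the four-fold 90-degree rotation scheme is just an illustrative assumption:

```python
def rotate90(img):
    """Rotate a binary image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment_with_rotations(dataset):
    """Expand a labeled image dataset with all four 90-degree rotations.

    dataset is a list of (image, label) pairs; the returned list keeps
    the label of each original image on every rotated copy, so the
    orientation variation is seen at training time.
    """
    augmented = []
    for img, label in dataset:
        current = img
        for _ in range(4):
            augmented.append((current, label))
            current = rotate90(current)
    return augmented
```

A dataset of N images becomes 4N, and a model trained on the augmented set has at least seen each object at each cardinal orientation, though, as noted below, this only covers transformations you knew about in advance.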
C: So they have to know the class of transformations in advance for this, but that doesn't solve your problem if you run into things that you don't know. If you have a set of transformations like rotations around a circle, you know what those are, but if you squish it and stretch it out, that might not be something it handles well. So you have to train on all those things in advance, as a separate module, and this is always a tricky problem: trying to find things that generalize.
C: I mean, it's not really so much a research topic as just a thing that we can observe in nature and then also observe in algorithms. I mean, I gave you a paucity of references for it, so I don't really think it's been built up. That was a couple of years ago, when I did that blog post, and I don't think much has come out since then; I was looking it up as I was doing this talk, and I didn't see much more than that out there. So it's definitely an open area. These are the slides here.
C: It's definitely an open area if people are interested in thinking about it more; I don't think people have talked about it too much. That's a thing about a lot of machine learning research: there are a lot of topics that kind of come from human cognition that apply to these models, because we're not recreating a nervous system, necessarily, but we're creating the conditions for cognition, and so there are things in cognition that are really interesting.
C: Yeah, well, it's like the term pareidolia: they use it in cognition, but then, when they talk about it in computers, it's slightly different. So if you find someone who has studied visual perception, they might say, well, that's not pareidolia, that's something else. Point taken, but we don't have a better word for it.
B: Okay, I think that's the only thing that's missing, so that one can see how the algorithm, through the various steps, ends up producing a result, at least in most cases, and also show where it fails. Now, you pointed out that if the cells are all strung out, so they're almost in a line, you kind of lose it.
B: It's not static; static implies it doesn't change over time. Okay, I guess it's a single frame in a movie, but what is not used is information such as the fact that the number of cells is the same from one frame to the next. Therefore, one could say that if the number of cells has changed, one of two things has happened: the algorithm screwed up, or there's been a cell division, right?
B: Okay, so a higher-order algorithm would take into account consecutive frames and the number of cells in each. Then I can correct a frame based on the frame before it and the frame after it, unless I think I've detected a cell division. Now, a cell division, I think, could be detectable by the cells getting wider; the width rather than the length would increase, I think.
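The frame-to-frame consistency rule being proposed here is easy to state as code. A sketch of the idea, with the one-cell-increase-means-division heuristic as an assumption rather than a worked-out method:

```python
def check_frame_counts(counts):
    """Flag frame-to-frame changes in cell number in a time series.

    counts is the per-frame cell count from the segmentation algorithm.
    Between consecutive frames the count should be constant unless a
    division occurred, so an increase of one is flagged as a possible
    division and any other change as a possible segmentation error
    that a higher-order pass could correct.
    """
    events = []
    for i in range(1, len(counts)):
        delta = counts[i] - counts[i - 1]
        if delta == 1:
            events.append((i, "possible cell division"))
        elif delta != 0:
            events.append((i, "possible segmentation error"))
    return events
```

Frames flagged as errors could then be re-segmented using the frames on either side, as suggested above, while flagged divisions could be cross-checked against cell width.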
B: We've got to leave some time for review, I'd say, so push it; it would be nice to submit it within a couple of weeks. And certainly I think we're pretty close, so I'd like to submit by October 28th if that's possible, because we've got a meeting coming up on that date with the editors to discuss these things.
C: They slide together in a colony, and we're analyzing images of those organisms. And then Dick proposed having a second paper to follow up on this one, but that's something that would come later on. That's what you get when you do research: you get more research from the research that you've already done.
C: So I wanted to talk in the last part of the meeting about the pre-trained models blog post. I actually worked on fleshing this out, and I'm going to present the screen. Let me put the link to the document in the chat so we have a common reference point. This is the blog post I talked about; it actually comes from the pre-trained model discussion we've had in the group, and Vinay and I were talking about this yesterday.
C: So this last week I was able to flesh out a lot of the details. Last week I presented an outline, and this week I've got a draft here. The outline was based on three different sets of topics, and we're heading towards a 1,500-word limit. Right now I think we're at 750 words.
C: So we're well within our limit, and we have room for people to contribute things if they want. This starts out with an introduction to pre-trained models, and this could use some work; I just gave a couple of examples here. We talked a little bit about what the advantages are for biologists, and then a vision for a developmental-biology-specific model, so we contemplate what the advantages of such a pre-trained model might be.
C: So there are a couple of features of developmental systems bullet-pointed here that are relevant. I got these off the top of my head, so if you want to rephrase them, change them, or add to them, that would work as well, but I was basing these points on general things in developmental biology that are unique to that type of system.
C: Maybe not unique, but things that definitely need to be addressed, like cell division and differentiation events: the ability to identify a cell when it divides, when it replicates, but also as it's replicating, to be able to identify features like that. Then we have some discussion about models.
C: There's the discussion of models not equaling phenomenology, which has implications for the model, and I wrote that out a bit. And then this is the part that I'm not done with, which I'm going to work on later: giving some practical examples. This is a place where we can bring a lot of this stuff to bear from, say, the Google Summer of Code, and also some practical examples from actual data sets.
C: So we might create a hypothetical set of biological images in our head, or we might think about a process or something; I'm not quite sure what it'll look like. But I know Vinay said he was going to work on it today and tomorrow, and then, Jesse and Dick, if you want to contribute to this you're welcome to, but you're not required to. Even going over it and making corrections would be helpful too, because we're trying to get this published.
C: We're going to publish this on the Node, which is a developmental biology blog, maybe sometime this week or next week, so we want to wrap this up fairly quickly. If we can get some feedback, that would be great, or some contributions, that would be great as well. So yeah, definitely take a look at it, and even if you just want to do some copy editing, that's welcome, and we appreciate your input.
C: Sorry, I was just talking about this pre-trained model blog post, and if you want to look it over and have any suggestions, or copy-editing suggestions or whatever, that's welcome. We're going to try to put it out on this blog, the Node, which is a developmental biology blog, maybe around the beginning of next week. And then we might do another blog post later in the year; I'm not sure yet, we'll see how this one goes over.
C: I hope we can get some people to read it and get interested in it. I mentioned the group on the Node, and I don't know how many people saw the post, but I'd like to keep them updated a little bit on what we've been talking about, so that would be good. If people want to go over it again, in case you missed the link, this is the link.
E: I came across some pre-trained models, especially for the medical field, say for brain scanning and that sort of segmentation, especially from the healthcare field, so I'd like to use those, and I had some introduction to those models.
C: What I've learned from doing blog posts is that you want to keep the descriptions brief in the main part of the post: you can make a brief description of something, and that's good, but for more detailed things we have an appendix area, which is this resources section. I found this one, the Model Zoo, which is a nice little reference on different pre-trained models for a wide variety of things, but we can also put in resources for different specific models and annotate them.
C: Readers read it, digest it, and then they go down and can find some references and some resources, which don't have to be in the way while they're reading. I find with blogs that people are very impatient when they read posts; they want to cut to the chase, and then you'll have the resources at the end. I've just found that to be a good strategy, so keep that in mind when you're going through these examples.