Description
Ninth DevoWormML meeting, October 30. Attendees: Bradly Alicea, Richard Gordon, Jesse Parent, and Vinay Varma
B
C
C
D
C
Okay, yeah, thanks for going over the blog post. I saw the edits, thank you. So the blog post is on its way out. I'll post a link and send it out to the group when it's posted and finalized. Jesse contributed some edits, and Nick is contributing as well. So that's good, and it looks pretty good. It's very long for a blog post, but you know, these are the types of things that we can,
C
you know, maybe build upon in the future, and maybe some people will find it helpful. So, you know, you'll see the blog post when it's finished. It'll have some links, and it's going to look pretty good, I think. I could have used a diagram, but it's always hard to make the perfect diagram. I'd like to do one more post this year, or by the end of the year, depending on the topic; we'll have to think about a topic that we can do another one on.
C
Yeah, yeah, so Jesse said he was trying to find a visual. Yeah, it's hard for me to figure out the perfect visual. You know, I actually taught a course on science outreach and we had a whole section on blogging, and one of the things they say is to use a lot of visuals. But sometimes it's hard to find the right visual, also. But yeah.
C
Well, I think it was a good experience, and we'll see how this one does. And then, if anyone has any ideas for a second blog post, we can talk about it later. So today we're going to do two things. I have a talk on GANs, which are generative adversarial networks. It's not something that everyone has probably heard of; maybe Jesse has, but it's an up-and-coming technique.
D
C
D
C
So GANs are generative adversarial networks, and I put "developmental" above because I'm going to talk about some examples from development at the end. There aren't that many examples, but this is an up-and-coming area, as you'll see. So "generative adversarial networks" is a pretty big mouthful of words; let's see if we can break that down. GANs, or generative adversarial networks, consist of multiple deep learning networks.
C
So we talked about the deep learning networks here at the bottom. These are, of course, differentiated from neural networks because they have multiple hidden layers instead of just one, and we talked a little bit about the benefits and drawbacks of that in earlier talks. Basically, it allows you to have higher resolution, to increase your feature space, but there are some drawbacks in terms of the types of input data that are appropriate for deep learning networks, and you have an increased chance of multiplying your error.
C
If you have a data set with some systematic error in it, you can really screw up your task by just throwing those data at the deep learning network. So there are some drawbacks to using deep learning. The advice that a lot of people seem to give is: don't just throw deep learning at a problem, just as you wouldn't just throw a t-test or a regression at a problem.
C
So anyway, these GANs take advantage of these types of deep learning networks and, as you'll see, they use them in a very innovative way. They have something called a generator and something called a discriminator, and I'll talk about what those are in a minute. So, a generative model (and I know a lot of you are familiar with them, but maybe the jargon is a little fuzzy): what they do is generate instances of a type or a class using some underlying distribution, and I show
C
a slide of this. This is actually the evolution of peppered moths, which is a common example in evolution textbooks, one which has not exactly been told as the full story, but that's an aside. The point is that you get this generation of variants at different generations, and what's generated changes the underlying distribution.
C
So, you know, a neural network is technically a generative model, but a genetic algorithm is also a generative model, and a cellular automaton is a generative model. That's the type of thing we're dealing with. And then a discriminative model is where you generate a classification using an underlying distribution. So in this case, instead of generating these different
C
colored moths, we're actually trying to classify butterflies. So we can classify them here into different genera, or other sorts of classifications, based on their physical characteristics: their wing color and their wing shape. So for discriminative models, regression would be considered an example, but also some sort of taxonomy. That's what we're talking about when we talk about a discriminative model, and it's very different from a generative model.
C
So if you're interested in learning more about these different parts of a GAN, there's a link here. This is a Google Developers tutorial, and I can share that link later on. Google Developers has a bunch of tutorials on different topics, and they talk about the structure of a GAN. I'm going to talk about it in a couple of slides, but that's another resource. So, GANs are actually pretty new.
C
We only have to go back to 2014 to find the first instance of a GAN in the literature. Ian Goodfellow, who works at Google Brain (he doesn't look too old here, and he hasn't aged that much since), submitted a paper at NeurIPS, in their proceedings, on generative adversarial networks, and that lays out the whole model of a GAN, the different parts of it and everything.
C
And since then there's been kind of an explosion of people applying this type of model to different types of data. It can range from things like generating artwork to biology data, and there are many different potential applications. But one more recent paper that's come out is from the ICLR proceedings; ICLR is another machine learning conference.
C
It's called "Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence at Every Step." What's interesting is that GANs were always formulated a little bit differently from most machine learning models. Instead of talking about the algorithm converging on a solution, they talked about two different ways of viewing how GANs can be considered optimized or successful. One of them is that the generator and the discriminator play a game, and the model
C
is a game-theoretic model, and they look for the Nash equilibrium. The Nash equilibrium is actually from game theory, and it's the idea that neither player can reduce its cost by unilaterally changing its own parameters. So, in other words, you have a generator and you have a discriminator, and they're both competing to outwit the other.
C
But that view, of course, is contrasted with something called divergence minimization (and you'll see how this is applied), which is where the generated data have the least amount of difference from the classified data. So you generate your images, and then you classify your images, and there's this minimal divergence between the two, and that's how you gain some understanding of your GAN being efficient or successful. And so both of those views are competing.
C
C
So you probably didn't think, when you logged on this morning, that we'd be talking about game theory, but we are. And it's not very clear what's going on in this type of model, because we've only had GANs for about five years, so people probably don't even understand the models that well at this point. But it's an interesting path.
E
C
C
E
So it's been around for a long time. Basically, you have a temperature, and you alternately heat up the system and cool it; this temperature parameter lets it lock into lower-energy states. Then what you do is slowly decrease the temperature over time so you can lock into a solution. So it's usually used for optimization problems.
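The heat-and-cool loop described here can be sketched in a few lines. This is a minimal illustration, not something from the talk; the one-dimensional objective, the geometric cooling schedule, and all parameter values are assumptions chosen for the demo.

```python
import math
import random

def simulated_annealing(objective, x0, t_start=5.0, t_end=0.01, alpha=0.95, steps_per_t=50):
    """Minimize `objective` by perturbing x while slowly lowering the temperature."""
    x, fx = x0, objective(x0)
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            cand = x + random.gauss(0, 1)      # propose a nearby state
            fc = objective(cand)
            # Always accept improvements; accept worse states with
            # Boltzmann probability exp(-(fc - fx) / t), which shrinks as t cools.
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
        t *= alpha                              # cooling schedule: lock in slowly
    return x, fx

# Toy objective with many local minima; the global minimum sits near x = 0.
random.seed(0)
best_x, best_f = simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x) + 3, 10.0)
```

At high temperature the walker escapes local minima freely; as the temperature drops it settles into a low-energy basin, which is the "locking in" behavior described above.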
C
Yeah, it's a very obscure technique, at least now it's obscure, but I learned about it in school, so I don't know if they cover it that much now. With machine learning, it's like everything is new again; people are rediscovering this stuff. And so, GANs. Let me go over, well, first of all, let me go over how they've applied this to art. I think this is really fascinating, and then I'll talk about the actual structure of the algorithm. So this is what they call,
C
this is one example: it's called GANGogh, a play on van Gogh, and what they're trying to do is create art with GANs. Van Gogh, of course, is an artist, and the idea is you take images that are van Gogh paintings, and there's an assumption that there's an underlying style. Art critics will tell you that, but they won't quantify it or anything. And then you have this generator, which generates either some sort of random array or some sort of fake vector,
C
you know, fake data, made-up data that you plug in; it could be a random number generator, it could be some systematic noise or something like that. Then you combine those two and you feed them into this discriminator (this is just kind of a shorthand for it). What you're doing is feeding that in, trying to extract the artistic style, and then using that artistic style to generate new versions of van Gogh paintings, new images with the same style as van Gogh.
C
So the pictures up here look kind of familiar, but they're not; they're sort of eerily similar to something you've seen before, but not exactly the same. You can see it looks a little trippy, the lines and everything. That's because it generated these images based on a style that it learned, and then it's generating new instances. So it takes a van Gogh,
C
it takes the style, and then it says, okay, make me a van Gogh, and it'll make a van Gogh-like image that has the same stylistic properties but is a little bit off. They call that sort of thing style transfer, and it's actually very fascinating, and it's been quite successful. A lot of people do this, and there's another example: "A Neural Algorithm of Artistic Style."
C
It's an arXiv paper from 2015. So they're using this very successfully in the art space. I mean, people fail a lot, and there's a community of people who try different techniques, but I think in terms of creativity, and in terms of applying GANs to an interesting problem, this is definitely a place that people should check out. There's a comment in the chat here. Okay, so Dick sent out a simulated annealing citation, so check that out. So yeah, there's a whole liter...
C
you know, there's a whole literature now on these artistic works that were created using GANs. And so, I don't know; maybe it's just interesting art, but it might also be something that people can learn from for other domains.
C
So I'm going to go through again what the components are now, because I think it's probably worth doing; it's pretty confusing when you first encounter it. Up in the top left-hand corner, you have what I referred to as the generative model earlier. What you do is feed in some sort of pseudo-data, some sort of fake data. Again, it could be a random number generator sending in information; it could be something like systematic noise in two dimensions, like static on a television array.
C
C
You would have actual pictures of van Gogh paintings, and you train the algorithm on what those look like. The generative model, meanwhile, might be some sort of random noise or some sort of scrambled image, and so it's learning; basically, it's trying to fit the training data to some sort of what they call a latent space, and then it's feeding that into the discriminative model. The discriminative model picks instances from that latent space and distinguishes things that are real from things that are fake. And so this is a procedure
C
that's used to build what we could call a landscape. I did a deep dive on Saturday into latent spaces, and I found out that they're related to multidimensional scaling. Latent spaces are actually low-dimensional manifolds, or spaces, that are derived from very high-dimensional information, like a bunch of paintings, and you're creating a computational representation
C
that essentially has all the different combinations. Then the discriminative model is picking out the genuine article from a bunch of fake articles. The "adversarial" part of the name GANs comes from the idea that you're taking real data, and you're also taking data that are maybe not real, and you're making the discriminative model choose between those two options: it's fake or it's something else, or it's real or it's something else. And it's supposed to improve its training over time. So it generates a classification.
C
It could generate a binary classification, or it could generate something more complicated, but you always have backpropagation. These discriminative models don't operate on the basis of magic; they operate on backpropagation. So the discriminative model is evaluated over time and improved based on how it's performing on its classification tasks. It might start out basically accepting a bunch of nonsense as real images, but then it improves over time.
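The generator-versus-discriminator loop described above can be made concrete with a toy GAN on one-dimensional data, where the "real" samples come from a Gaussian centered at 4. Everything here (the linear generator, the logistic discriminator, the learning rate, and the non-saturating generator update) is an illustrative assumption for the sketch, not the architecture from any paper discussed in the talk.

```python
import math
import random

def sigmoid(s):
    s = max(-60.0, min(60.0, s))        # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-s))

random.seed(1)
a, b = 1.0, 0.0      # generator g(z) = a*z + b, initially outputs N(0, 1)
w, c = 0.0, 0.0      # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    reals = [random.gauss(4.0, 1.0) for _ in range(n)]   # "real" data ~ N(4, 1)
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]      # noise fed to the generator
    fakes = [a * z + b for z in zs]

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    gw = sum((1 - sigmoid(w * x + c)) * x for x in reals) / n \
       - sum(sigmoid(w * x + c) * x for x in fakes) / n
    gc = sum(1 - sigmoid(w * x + c) for x in reals) / n \
       - sum(sigmoid(w * x + c) for x in fakes) / n
    w, c = w + lr * gw, c + lr * gc

    # Generator ascent on log D(g(z)): move fakes toward where D says "real".
    ga = sum((1 - sigmoid(w * (a * z + b) + c)) * w * z for z in zs) / n
    gb = sum((1 - sigmoid(w * (a * z + b) + c)) * w for z in zs) / n
    a, b = a + lr * ga, b + lr * gb

# After training, generated samples should cluster near the real mean of 4.
fake_mean = sum(a * z + b for z in (random.gauss(0, 1) for _ in range(1000))) / 1000
```

The discriminator's feedback (its gradient) is exactly the backpropagation signal described above: each time the discriminator catches the fakes, the generator's parameters shift to make the next batch harder to catch.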
C
So this is just some math here. This is an example of an unsupervised GAN, from a reference that we'll talk about later. They use a minimax function, where they take x as an image and z as a noise vector, perform an operation on each, take the expectation of those, and use the minimax function to find the best fit. So here are two examples with faces. This is an example where they take,
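For reference, the minimax objective from the original GAN paper (Goodfellow et al., 2014) can be written as:

```latex
\min_{G} \max_{D} V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here the discriminator D tries to push D(x) toward 1 on real images and D(G(z)) toward 0 on generated ones, while the generator G pulls in the opposite direction.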
C
this is an input face, and they're recovering a latent space, and then recovering an image from that latent space. Here you have an almost perfect match in this example. But in this other example, you encode this face to a latent space of faces; you know, you put in a bunch of different faces, and the idea is you're taking the face, encoding it to this z-prime latent space, and then recovering an image from that latent space.
C
In this case, it's a little bit fuzzy; it's not recovering exactly the same face, it's a little bit distorted. That's because you're putting in this noise, this fake data, to fool or to attempt to fool the model, and that's what you usually get. You can usually get pretty good performance, but not always. And then here are some citations (oh, and I'll make the slides available later), some citations that might be useful. I think we have another question in the chat.
C
C
So I'll make these slides available later, but this is basically the idea of a GAN. So again, you have this latent space, and I've made it an oval, an ovoid shape, to represent something like a two-dimensional space instead of a one-dimensional one. The basic idea is you have this latent space, and it's kind of like a fitness landscape, or maybe
C
some sort of problem space that you would encounter with a regular machine learning algorithm. So you have this latent space of possibilities, and then you have this concept x, and you have the reproduction of a concept. This is the input of the model, this is your desired target, and x-star stands
C
for these two points. So maybe you can get really close to your original concept, and that's great, but oftentimes the reconstruction loss is a bit more. The thing is, you want to minimize this reconstruction loss as you run the algorithm, and that determines your performance. They don't call it convergence, but basically it would be the convergence of it. And so you synthesize fake examples: you give it some data, you get a random noise vector, which is in the latent space.
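Written out, the quantity being minimized here is a reconstruction loss. The squared-norm form below is one common choice, an assumption rather than the talk's exact definition:

```latex
\mathcal{L}_{\mathrm{rec}} = \lVert x - x^{*} \rVert^{2}
```

where x is the input concept and x* is its reproduction decoded from the latent space; driving this difference toward zero is what the talk describes as the algorithm's effective convergence criterion.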
C
You have some raw input, which is the images, and that's your starting point. Then you run the algorithm so you can minimize the reconstruction loss, which is x minus x-star. That's basically it. Then there are strategies that you'll run into in that literature. One of them is to use something called an autoencoder. An autoencoder is a way to take the input and create this code, which is in the middle here.
C
This is the latent space creation: you're encoding your data, and then you're decoding your data as the output. With GANs you can go very deep into this area and come up with some pretty innovative solutions, but I just wanted people to know that autoencoders exist and that they're a strategy. But now we're going to talk about examples from developmental data, and this may make it a little bit clearer in terms of how it's used and how it's useful.
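The encode-then-decode idea can be shown with a deliberately tiny linear autoencoder: two-dimensional inputs that actually lie on a line are squeezed through a one-dimensional code and reconstructed. The architecture and training details are illustrative assumptions, not taken from any paper in the talk.

```python
import random

random.seed(0)
# Tiny linear autoencoder: 2-D input -> 1-D code -> 2-D reconstruction.
enc = [random.uniform(-0.1, 0.1) for _ in range(2)]   # encoder weights
dec = [random.uniform(-0.1, 0.1) for _ in range(2)]   # decoder weights
lr = 0.01

def autoencode(x):
    h = enc[0] * x[0] + enc[1] * x[1]           # encode: project input to latent code h
    return h, [dec[0] * h, dec[1] * h]          # decode: reconstruct from h

# Data lies on a 1-D line (x2 = 2*x1), so a 1-D code can represent it exactly.
data = [[t, 2 * t] for t in (random.uniform(-1, 1) for _ in range(200))]

for epoch in range(300):
    for x in data:
        h, xr = autoencode(x)
        err = [xr[i] - x[i] for i in range(2)]  # reconstruction error (x* - x)
        # Gradient descent on the squared reconstruction loss.
        for i in range(2):
            dec[i] -= lr * 2 * err[i] * h
        for j in range(2):
            enc[j] -= lr * 2 * sum(err[i] * dec[i] for i in range(2)) * x[j]

loss = sum(sum((autoencode(x)[1][i] - x[i]) ** 2 for i in range(2)) for x in data) / len(data)
```

After training, the mean reconstruction loss is small: the one-number code in the middle is the "latent space" the talk refers to, and encoder and decoder have jointly learned the direction the data lives on.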
C
So this is an example from a recent paper. These are cells that have been generated from a GAN model. They're based on real examples of embryos: human embryos whose images have been fed into this algorithm at different stages of development (one, two, four, and eight cells), which is pretty early in embryogenesis. So you have these pretty simple examples for the algorithm to classify, and then you have these two performance
C
C
So in this case they took a bunch of cells that were differentiated cells at different stages of development. They took motor neurons and hematopoietic cells, and then looked at hematopoietic cell differentiation in different ways. So they took a bunch of different cell types and acquired scRNA-seq data, which is a form of sequencing RNA so that you can map it back to the genome. RNA-seq is a very complicated technique, but it basically takes transcripts in the cells, whatever genes
C
they're expressing, and maps them to a whole-genome map. So you get a fragment of RNA out of the cell, a transcript, and you map it to the genome, and then you find the abundance of those different transcripts. It tells you what a cell was expressing, how much of it was expressed, and so forth. It's sequence-specific, and it happens across the genome, so it's a very good technique for understanding
C
differentiation, or other types of perturbations that you want to analyze. In this case, they applied a GAN to that type of data for a bunch of different cell types. They used epidermal cells, neural cells, and hematopoietic cells, and then they used a generative model that simulated realistic types of RNA-seq data. So they basically simulated some data and put it in as the generator. So you give the model
C
some input data; it's just fake RNA-seq data. It looks like RNA-seq data, it has the same structure as RNA-seq data, but it's just made up. You put it into the generative model, and it's generating different possibilities from that alphabet, and then you're matching it up with real RNA-seq data. Then you take the two, you generate these models, and they do some mathematical projection of the data. In this image you can see that there's a real data set and a generated data set.
C
So the real data set is in blue and the generated data set is in red, and you see, for each example here where they give the real data and the generated data, that for the most part they overlap. You get some outliers of the generated data here, but you also get some outliers of the real data for motor neuron differentiation. By and large they match up, and that's a good sign that you can use this for more than generating art.
C
So it's a good sign that it's probably useful, in terms of looking at, say, what your data look like as opposed to a null hypothesis. There are different ways you can use these models; that's one of them. Another one, of course, is this: generative models of tissue organization with supervised GANs. In this case they're using a variant of the GAN model where they use supervised data, with labels. They're building generative models of electron microscopy images, but in this case they didn't just give it the image data.
C
They annotated the positions of some membranes and mitochondria in the images, so they were able to actually mark them off and give that information to the algorithm. Then they were able to use this specific procedure within the GAN framework, and they produced realistic images using a supervised GAN. They synthesized the labeled image, so that was sort of your training set, and then they gave noise images as input.
C
The labeled one provided supervision for classifying the EM image, and even for EM image synthesis, which is the synthesis of the image, or the recovery from the latent space. And then the full model generates label-image pairs, so you're able to generate an image and you're able to generate labels to put on top of it. And remember, the images it's generating are realistic images; it's not spitting back out the real images, it's generating new possible images, but it's keeping the labels for both the membrane and the mitochondria.
C
C
C
The error rate, 12.3%, might be pretty good considering what we're trying to do here, but there's no literature that really gives a good error rate to compare against. I mean, good error rates are usually within a percent, so this is kind of high compared to other types of machine learning and other types of pattern recognition. But considering what we're trying to do here, it might actually be pretty good. And then, finally, there's this paper on contour extraction from Drosophila embryos based on GANs. In this case,
C
it's a conference paper, and they're generating images of in situ hybridization. This is where they take an antibody and stain some tissue, and they look at where the stain settles into the tissue. In situ images look kind of like fluorescent images; they have fluorescence in the stain, and they show these areas of interest. So say you want to stain for some neurochemical in a tissue or a cell culture:
C
you would do this in situ hybridization process, and it would basically bind up the places where this compound is found. Then you'd put it under a microscope and you'd see these large regions of fluorescent activity, and that's what you'd be interested in. So they used images like that: basically Drosophila embryos with these in situ stains on the embryo. It's not the whole embryo, it's just where the thing of interest is located.
C
It could be a gene being expressed, it could be a protein, or whatever. They're using these images, and they're doing a very high-tech way of analyzing them: the conditional GAN, which is yet another variant of the GAN model, where they use a Bayesian filter to generate contour maps. So they're taking the in situ images, which have fluorescence information and anatomical information in them, and they're generating contour maps, which I assume are like concentration gradients of the hybridized fluorescence.
C
They didn't have any images in this paper, so I don't really know for sure what that looks like, but they base this on a Drosophila embryo benchmark data set. So they used a benchmark data set as the input data and some fake images as the generator, and then they put that into a latent space and selected contour maps that looked to match the input data. So again, in this case, you're trying to extract contour features from images
C
based on these adversarial examples and real examples, and it's supposed to sharpen the result. So that's all I have for that. I guess what I would say is that it's a pretty new area. I picked those examples not because they were the best, but because those are the only ones I could find, and so there's definitely a lot of room for improvement in this area.
E
E
C
E
A
C
E
C
C
C
E
E
C
C
C
E
C
You could have that, yeah. I mean, I guess this is something that they came up with to solve a problem originally, but there's no reason why you couldn't do something like that. When I put this together, I was thinking that you could use not just deep learning networks; you could use other types of models. This could be a genetic algorithm, and this could be some sort of Bayesian filter, and you could use it, I don't know, maybe a lot better.
E
A
C
C
Jesse or Vinay, does anybody want to present? Let's see what we have in the chat. Okay, so Jesse said: thank you, I took some things away from that; I'd heard of GANs for a while, but haven't much used them. Yeah, they're pretty new, so it's a pretty new area. So why don't we have Vinay give his presentation on his idea? This kind of ties into the GANs; we'll see how it compares and what we might be able to extend from this. Oh.
D
D
D
So this is the paper that I found today. This is based on image synthesis. I will show a set of four images here. In this case, they are retinal images, and we would ask the model to generate more data which looks like realistic retinal images.
D
So this was all done by [authors' names inaudible] of the [inaudible] Institute. So let's get started with how they approached this problem. These are the contents I have today: I will be discussing the aim and overview, then I'll go through the approach that they discussed in the paper, the model implementation details, the results and experiments, and finally the conclusion, the key takeaways from this paper that we can learn.
D
D
D
D
D
D
So in this case, we will try to make use of the 500 labeled image-label pairs that we have, and we will try to label the other 4,500 unlabeled images that we have, so that after that process we'll have 5,000 sets of image-label pairs. That's basically how semi-supervised learning can come in handy in this approach. So the aim is to generate realistic-looking retinal images using vessel segmentation and image-to-image transformation with adversarial learning.
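The 500-labeled / 4,500-unlabeled bootstrapping step described here can be sketched with a toy stand-in. A trivial one-dimensional threshold "model" plays the role of the segmentation network, purely to show the pseudo-labeling mechanics; the feature distribution and the threshold rule are assumptions, and only the 500 / 4,500 / 5,000 counts come from the talk.

```python
import random

random.seed(0)

# Toy stand-in for the semi-supervised step: 500 labeled examples train a
# simple model, which then pseudo-labels the 4,500 unlabeled examples.
def make_example():
    y = random.randint(0, 1)                  # true class ("vessel" vs "background")
    x = random.gauss(3.0 if y else 1.0, 0.7)  # 1-D feature standing in for an image
    return x, y

labeled = [make_example() for _ in range(500)]
unlabeled = [make_example()[0] for _ in range(4500)]   # labels discarded: unlabeled pool

# "Train" the model: midpoint of the per-class means as a decision threshold.
m0 = sum(x for x, y in labeled if y == 0) / sum(1 for _, y in labeled if y == 0)
m1 = sum(x for x, y in labeled if y == 1) / sum(1 for _, y in labeled if y == 1)
threshold = (m0 + m1) / 2

# Pseudo-label the unlabeled pool, then pool everything: 5,000 pairs total.
pseudo = [(x, int(x > threshold)) for x in unlabeled]
full_training_set = labeled + pseudo
```

In the paper's pipeline the threshold model would be the pre-trained U-Net segmenter and the labels would be vessel masks, but the flow (fit on the small labeled set, label the large unlabeled set, pool the 5,000 pairs) is the same.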
D
C
D
D
D
D
What we'll do is take a pre-trained model which can segment, a segmentation model, and we'll train this model to segment these images with the help of the 500 image-label pairs that we have. So basically, after this step, we'll have a model which can segment those retinal images and extract the vessel parts of those images. We...
D
D
...image and label pairs, so that the pre-trained model will learn to extract those features, extract those vessel parts, and then we'll try to label the unlabeled data we have, that is, the 4,500 images. So of the points that I've written here, the third point is to make use of these 5,000 pairs of retinal images and then try to generate these kinds of images, which look realistic and are very close to what we have in the data sets. So let me discuss that in a bit.
D
D
D
These are 100 pairs of images, the real images and the corresponding labels that we have. We will take this; this would be a supervised segmentation task. So we take a pre-trained model, known as a U-Net, and we train the segmenter. Now this segmenter can take the unlabeled images, the 4,500 images, and generate the vessel segmentations for those unlabeled images.
D
So now we have 5,000 pairs of images and labels. Now, with the help of these 5,000 image-label pairs, we'll try to generate more training samples with the help of GANs. So this is how it goes: we'll take these generated segmentation maps, which we treat as the entire data set now, with the 5,000...
D
D
Then this generator gets the feedback from this discriminator, and it will update the weights so that the generated images will look closer to these real images. So basically, how this happens is that, first of all, these labels and these real images are shown to a discriminator, and it will learn the difference between a fake image and a real image. Then this generator will take each of these images.
D
It will multiply this image with some weights that it has acquired during training, and then it can generate these images. So this discriminator will tell it what to change so that these will look more realistic and closer to these real images. And likewise, this generator and discriminator will keep training until they reach the state where the generator is able to pass the discriminator's test.
So
we
see
what
I
mean
by
that
is
that
the
data
was
able
to
generate
an
image,
makes
the
decimator
couldn't
identify
it
as
a
fake
image.
That
means
that
that
generated
image
looks
very
realistic,
very
realistic
and
very
close
to
the
ocean
images.
So
that's
how
again
works.
That's
how
is
generated
the
relationship
between
generator
and
estimator
works,
so
so,
at
the
end
of
the
training,
the
generator
will
be
able
to
generate
these
realistic
images
with
the
help
of
this
labels.
D
This image is fed in patches into this encoder. Up to this part it is called the encoder, and from there it is called the decoder. So how this encoder-decoder architecture will help us segment this image is that, first, the image will be taken in by the encoder; it will learn the representations and it will...
D
D
D
Let's discuss this in more detail. So we'll take these labeled images and we'll send them into a generator, which is also an autoencoder, which is nothing but the encoder-decoder model; this is the encoder and this is the decoder. This encoder will extract the features from this image and will put them into a long feature vector here, so this vector will have all the features of this image. Now, what the decoder in this generative model will try to do is ask: from this image,
D
how can I reconstruct an image which looks something like this? That's the main job of the generator. It should take this image and it should output these kinds of images. So how does the generator know how to generate this kind of specific image? That information will come from the discriminator.
D
It will provide a score on how realistic the output looks, and based on that score the weights of this decoder will get updated, and then the generator again tries to generate images in the next iteration. This training will keep on happening and they will... so, excuse me, can I go and grab my charger for a second? One second.
C
Any ideas for the next blog post? I don't know yet. That said, I was thinking about an idea, maybe on GANs or something else. If someone has an idea, propose it: send me an email or a Slack message and we will talk about it, see if we can get some interest in it. That's the idea, so yeah, you're welcome to suggest anything.
B
C
Okay, Dick has a comment in the chat about asking Thomas Harvick to present his ideas on modeling the vessel area. Yeah, I think he should give a presentation on that; I can invite him. I talked to him the other day, and I don't know where he is, if he's traveling, but he said he'll be available starting next week again, or he'll be back on his regular schedule next week. So I should get him involved; I'll send him a message.
C
Actually, I had a conversation in the other group that I'm doing on collective behavior, and we kind of went over that a little bit. I think it's interesting that people talk about emergence and collective behavior, but they don't talk about other types of collective behavior that might exist. There could be things like cumulative behavior that isn't necessarily collective, or some sort of coordinated behavior. So I think that's worth talking about a little bit, at least. Yeah, oh, and Jesse, yeah.
C
D
F
D
Explaining this again: the generator model learns from the feedback of the discriminator model and tries to fool the discriminator model so as to produce realistic images. That's basically what's in this diagram. This approach was taken from the image-to-image translation paper; it was published in 2018 by Philip Isola et al., so I just took this image from that paper so as to.
C
D
Now, previously I stopped at this slide right here, talking about the relation between this generator model and the discriminator model. This was fully inspired by the image-to-image translation model, which was published in this paper by Philip Isola et al., 2018. I took this image just to provide one more example of how this GAN network works.
D
Basically, what they're trying to do is, from these edge images, to generate an object like this. It's the discriminator that tells you how well the generator is working, and it's the generator's job to fool the discriminator and convince it that whatever images the generator generated look realistic. In that paper they actually ran many experiments based on image-to-label and label-to-image translation as well.
D
One such example is this: they trained a GAN on this dataset. This is the satellite map of a city with some buildings, and this is the label in this case. When this image is input into the generator after it has been trained, it was able to convert this image into this image. That's why it is called image-to-image translation.
D
These are images from the 4,000 final images that we discussed earlier. After training the segmentation model on the 500 images for which we have labels, the model was able to generate the labels from the corresponding unlabeled retinal images, so this corresponds to the labels of this previously unlabeled data. Now what we...
D
As you can see, we can infer that this is the real image, this is the corresponding vessel segmentation image, and this is the image generated from this vessel image. So it is possible that, for a single label, there can be multiple images as well, because if you look closely at this generated image, the vessels follow the same pattern as this one. In that paper they concluded that for a single input we can have multiple outputs, like for the same vessel.
D
So yeah, like I discussed on the earlier slide, this is the first point, which is the same. Another key takeaway from this paper is that these conditional GANs are great for semi-supervised problems. While searching for this paper, I've seen so much research, so many papers, on how we can use this kind of training on semi-supervised datasets, when we have such kinds of datasets. So yeah.
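The semi-supervised workflow described in this talk (fit a model on the small labeled set, then use it to produce labels for the large unlabeled pool) can be sketched in miniature. This is an editorial toy, not the conditional GAN itself: the "images" are single numbers, and a nearest-neighbor lookup stands in for the trained segmentation model.

```python
def nearest_label(sample, labeled):
    """Predict a label for `sample` by copying the label of the closest
    labeled example (a stand-in for the trained segmentation model)."""
    return min(labeled, key=lambda pair: abs(pair[0] - sample))[1]

def pseudo_label(labeled, unlabeled):
    """Semi-supervised step: use the model fitted on the small labeled
    set to generate labels for the larger unlabeled pool."""
    return [(x, nearest_label(x, labeled)) for x in unlabeled]

labeled = [(1.0, "vessel"), (9.0, "background")]  # the small labeled set
unlabeled = [1.2, 8.7, 0.5]                        # the unlabeled pool
print(pseudo_label(labeled, unlabeled))
# -> [(1.2, 'vessel'), (8.7, 'background'), (0.5, 'vessel')]
```

In the talk's setting, the 500 labeled retinal images play the role of `labeled` and the remaining unlabeled images play the role of `unlabeled`; the generated labels can then be inspected or fed back into training.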
D
This row actually came from this row, and row three came from this row. Basically, what we tried is: we picked out the segmentation, we segmented this image, that is the result showing here, and from this segmentation we tried to reconstruct an image which looks like this image, but not exactly this image. From this segmentation, we try to generate an image which looks like a retinal image, but not necessarily the original image that this segmentation actually came from. So, oh yeah, okay.
C
So yeah, again, this GAN stuff is very interesting. I think it's not very widely applied to biological problems; it's not very well understood there. And then, of course, Vinay wasn't here, but Jesse asked about another blog post, and I told them we maybe have some ideas from these presentations, maybe some other topics. But just keep in mind that we're looking for one more blog post idea. So thank you, Vinay, for that. Well,.
E
D
Yeah, the discriminator tells us how related, how close, this generated image and the original image are, so the discriminator has a loss function to decide this. Basically, as you said, maybe it would try subtracting this image from this image, or it would extract the features from this image and extract the features from this image, and then compare those.
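The first comparison floated here, subtracting one image from the other, corresponds to a simple pixel-wise L1 distance. A minimal sketch, assuming images are flat lists of pixel intensities of equal length (the talk's actual discriminator is a learned network, not this formula):

```python
def pixel_difference(img_a, img_b):
    """Mean absolute pixel-wise difference between two images
    (an L1-style comparison: 0.0 means identical)."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

real      = [0.1, 0.9, 0.8, 0.2]
generated = [0.2, 0.8, 0.7, 0.1]
score = pixel_difference(real, generated)  # small value -> similar images
```

Pixel-wise distances are cheap but strict: a generated vessel shifted by one pixel scores poorly even if it looks right, which is one motivation for the feature-based comparison discussed next.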
G
E
D
Yeah, that makes sense, but in that paper they did not give any metric for how these two would compare. But I think we can do such things: we can take this generated image and this image, try to analyze the features of both images, and apply some matching algorithms or comparison algorithms once we have all the features, yeah.
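The feature-based comparison proposed here (extract features from both images, then run a comparison algorithm on the feature vectors) can be sketched with an intensity histogram as a crude hand-made feature and cosine similarity as the comparison. Both choices are illustrative assumptions, not what the paper or the speakers used:

```python
import math

def histogram_features(img, bins=4):
    """A crude feature vector: histogram of pixel intensities in [0, 1]."""
    hist = [0] * bins
    for p in img:
        hist[min(int(p * bins), bins - 1)] += 1
    return hist

def cosine_similarity(u, v):
    """Compare two feature vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

real      = [0.05, 0.95, 0.90, 0.10]
generated = [0.10, 0.85, 0.90, 0.15]
sim = cosine_similarity(histogram_features(real),
                        histogram_features(generated))  # near 1.0 -> similar
```

Unlike the pixel-wise subtraction, this kind of comparison tolerates small spatial shifts, since the feature vector summarizes the image rather than matching it position by position; in practice one would use learned features (e.g., from a pretrained network) instead of raw histograms.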
E
D
Sure, I think they actually gave a link to reproduce these results as well. Let me try to find that link again. They gave this link to reproduce the results, and they also give a link to where to find this dataset and everything. So maybe we can try and see if we can replicate the results ourselves, and then we will have this image and this image in our system, so then we can try to.
D
C
E
C
E
I'm just saying that these techniques, you see, are techniques for generating and comparing images, but if you have a time sequence of images, then you can start to detect things like the beginning of a tumor, or a beginning blood vessel problem, stoppages, blockages, okay? So these techniques can be extended to the time domain.
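The time-domain extension suggested here, detecting the onset of a tumor or a vessel blockage from a sequence of images, reduces in its simplest form to frame differencing: flag the pixels that change between consecutive frames. A minimal sketch, assuming frames are flat lists of intensities and a hand-picked change threshold (real change-detection pipelines would add registration, smoothing, and learned features):

```python
def frame_change(prev, curr, threshold=0.2):
    """Return the indices of pixels whose intensity changed by more than
    `threshold` between two frames of a time sequence."""
    return [i for i, (a, b) in enumerate(zip(prev, curr))
            if abs(a - b) > threshold]

frame_t0 = [0.1, 0.1, 0.1, 0.1]
frame_t1 = [0.1, 0.1, 0.6, 0.1]  # intensity jump at pixel 2: a new structure?
changed = frame_change(frame_t0, frame_t1)  # -> [2]
```

Running the same comparison over a longer sequence would show where and when a structure first appears, which is exactly the early-detection use case raised in the discussion.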