Description
Video of Steve McGrew's talk on Ptychographic Methods to the DevoWorm group on May 15, 2017.
Okay, thank you. Well, I think this was billed as an informal presentation. Let me show you my screen.
Now, that diffraction pattern contains all the information about the region of the object that was illuminated, but it sure doesn't look the same as the object. The reason is that what you get downstream really isn't an image of the object. It's an image that represents the amount of energy in every spatial frequency inside that region of the object.
If you take the object and decompose it into a whole lot of gratings — and let's say the object is a window screen — then predominantly it's going to have two components: one will be a horizontal grating and one will be a vertical grating. Downstream, what you'll see is predominantly two spots that correspond to the pitch, or the spacing, between those lines in the window screen.
Well, anything can be decomposed into its frequency components — its spatial frequency components — so the diffraction pattern is a map of the amount of power in each one of those. In the middle are the low-frequency components, which correspond to lines that are spaced very far apart, or to large objects with no features; and out around the boundaries —
That's where all the fine detail is. That's one of the reasons that you need a big lens to get a very detailed image: you need to capture all that fine-detail information downstream, and then squish it all back together and rearrange it to form an image downstream of there. The lens has to be big enough to catch all that information that gets diffracted around; otherwise you only get the low-frequency components to form your image from.
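The idea that the far field is a map of spatial-frequency power can be sketched numerically. This is my own illustration, not the speaker's code; it assumes the Fraunhofer (far-field) regime, where the recorded pattern is the squared magnitude of the 2-D Fourier transform of the illuminated region.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[:n, :n]

# A toy "window screen": a mesh of horizontal and vertical lines.
obj = (((xx % 8) < 4) & ((yy % 8) < 4)).astype(float)

# Far-field diffraction pattern: intensity of the centered 2-D FFT.
# Low spatial frequencies sit in the middle; fine detail lands near the edges.
pattern = np.abs(np.fft.fftshift(np.fft.fft2(obj))) ** 2

# The camera records only this non-negative intensity: all phase is lost.
assert np.all(pattern >= 0)
# Parseval's theorem: total power matches, up to the FFT's n*n normalization.
assert np.isclose(pattern.sum() / (n * n), (obj ** 2).sum())
```

For a mesh like this, most of the power should land in rows and columns of spots whose spacing in the pattern corresponds to the 8-pixel pitch of the screen — the two-gratings picture from the talk.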
Okay, so this will be a little repetitive, I apologize, but ptychography is a lensless method of microscopy. You get an image without using any lenses at all, at least downstream from the object. So normally what you'll do is scan an object with a laser beam in overlapping spots, and at each spot location the intensity distribution in the far-field diffraction pattern is recorded. In other words, going back here to this image —
You simply put the sensor from a camera here; you don't need to take a picture of this. You just put the camera sensor right there and capture all this stuff. The actual scale — this might be anything from a centimeter to a foot, whatever distance you have to go to capture all the important information that's there. And then — let's see, I think I have it.
This diffraction pattern down here is actually the Fourier transform of this portion of the object, and you'd think, well, you just apply an inverse Fourier transform and you get the object back. And yeah, you could do that if you knew what this really is, but you don't. All you know about this diffraction pattern is how bright it is at every point; you don't know its phase. Here you've got a laser beam, nice and clean; it illuminates this object, which has both a thickness and an absorption at each point.
The absorption decreases the amount of light that's emitted from, or scattered beyond, each point on the object, and the thickness alters the phase of that light. Light actually is a complex quantity: to fully describe light, you need two numbers at each point, the phase and the amplitude. So this diffraction pattern recorded down here is really a measure of the amplitude of the Fourier transform — all the phase is gone — and in fact it's not even the amplitude, it's the square of the amplitude.
You can take the square root and get the absolute value of the amplitude back, but you don't even get its sign; you don't know if it started out positive or negative. So to invert this was classically considered impossible — there just wasn't enough information here. However, a couple of decades back, a fellow named Fienup — he's still around — came up with a wonderful method for what's called phase retrieval. He can actually reconstruct this object, and its phase, from its Fourier transform.
All right, this is a very sparse description of how you get the object back from its Fourier transform. Over here, this red thing, we'll say, is the object. Now, we don't know anything about the object right now, but we can sort of take a guess. We have some idea how big it is, so we can take a totally random two-dimensional distribution of bright, dark, and phase, and then just trim off everything that we know is outside that object — we just get rid of that.
So what we're doing here — there's this term that took me a while to figure out what it meant, but it's called the measure of the object, and it's just kind of the smallest window you can put around the object. So here we've got a random guess, trimmed down a little bit by this constraint, which is the size of the object, and we apply the Fourier transform to that. Now, that will give us the Fourier transform of our initial guess.
It won't look at all like the diffraction pattern we know is right, so we can apply another constraint there. We know what the amplitude of this Fourier transform is, so we can just plug that in as the amplitude part. The phase part — we don't know what that is, but we took a first guess over here at what the object was, and it gave us both a phase and an amplitude here.
So we take the known amplitude and the kind-of guess at the phase, which came through this forward transform, and now we have something we can send back here and see how well it matches the object. If it matches the object exactly, then we have the right answer — but it won't. So we come back here, and we have sort of a guess, with two constraints applied, at what the object is, except it's not going to fit inside this window anymore.
So we get rid of everything outside the window, and now we've got a new guess. We just take what landed here, trim off everything we know isn't really part of the object, and send it down here again. You go round and round and round, and eventually you find something that meets both of these constraints; at that point you cannot make any further progress, but typically you end up with a really good solution for what the object really is.
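The loop just described — trim to the support, forward transform, swap in the measured amplitude, inverse transform, trim again — is essentially Fienup's error-reduction algorithm. Here is a minimal NumPy sketch of it (my own illustration, with a made-up random test object), using support and positivity as the object-domain constraints:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# The "measure" of the object: a known support window, empty outside it.
support = np.zeros((n, n), dtype=bool)
support[8:24, 8:24] = True
true_obj = np.where(support, rng.random((n, n)), 0.0)

# What the camera gives us: only the Fourier-transform amplitude.
measured_amp = np.abs(np.fft.fft2(true_obj))

# Error reduction: alternate between the two constraints.
guess = np.where(support, rng.random((n, n)), 0.0)  # random starting guess
err0 = np.linalg.norm(np.abs(np.fft.fft2(guess)) - measured_amp)
for _ in range(500):
    F = np.fft.fft2(guess)
    F = measured_amp * np.exp(1j * np.angle(F))  # keep phase, fix amplitude
    g = np.fft.ifft2(F).real
    guess = np.where(support & (g > 0), g, 0.0)  # trim outside the window

err = np.linalg.norm(np.abs(np.fft.fft2(guess)) - measured_amp)
assert err < err0                   # the mismatch only ever shrinks
assert np.all(guess[~support] == 0) # support constraint holds exactly
```

With positivity and an oversampled support like this, the iterate typically ends up close to the true object, up to the usual trivial ambiguities of phase retrieval.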
And there were a bunch of other people doing similar work back when he was doing that, and there are a number of algorithms named after the different people. But what's interesting is they're all basically the same algorithm, just with the constraints applied in slightly different ways.
If you are illuminating — let's say we just do this to one piece of an object, so we illuminate that one piece with a laser beam, one round spot, we'll say. Then, instead of this square box here as the first constraint, we can use the circle that surrounds the spot that we illuminated. We can use that as this constraint.
Dick asks: do you have a sequence of consecutive approximations you can show us? I could email you that, but I think it'd be really clumsy to show it to you right now; I don't have that prepared. I worked most of the last two years on some algorithms related to this, and made a whole lot of good progress.
It turns out only a few very natural constraints are required to make the problem solvable. If you know the rough size and shape — that's the measure — and if you know, for example, that the object is only absorptive, that it can't amplify the light going through it, that it doesn't have any negative brightness, that sort of thing — those are all useful constraints that you can put in one place or another in that looping algorithm.
Okay, so here's a little more detailed description of the typical phase-retrieval algorithm. You start here with an object guess. Now, if you have some idea what the object really looks like, you throw that in; but if you don't have any idea, it doesn't matter very much. You do the Fourier transform of it; that gives you a diffraction pattern based on your guess. You keep the measured amplitude, you keep the phase from your guess, do an inverse Fourier transform, and go round and round. Each time you do this, you compare the diffraction pattern.
Okay — and in ptychography, each spot in the scan across the object produces a different diffraction pattern. You need to overlap the spots. Now, you'd think you might be able to just add all those diffraction patterns to get one diffraction pattern, and then go through that looping algorithm to solve for the object. You cannot do that, because the diffraction patterns are not complete — they're only the intensity of the Fourier transform, and you cannot really get the Fourier transform out of them directly.
So there's a modification of that iterative algorithm. It goes like this: you spread the calculation out among all the spots. First, you pick the first spot and its diffraction pattern, and you do a small number of iterations of that loop on that spot. That gives you a somewhat refined guess. You take the result of that, and you feed it into the one little portion where it overlaps with the next spot, so the next spot has a better starting guess than purely random.
You go through that a few times, then you take what you got from that and go on to the next spot. You go through all the spots, and now you've got initial, fairly refined guesses on all of the spots. You go back to the front and go through it all again. You do that multiple times, until you find that you've come up with a really close approximation to all of the diffraction patterns. At that point, you can just add the amplitudes of all those guesses.
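That sweep — a few inner iterations per spot, writing each result back into a shared estimate so the overlap seeds the next spot — can be sketched in one dimension. This is my own toy illustration of the bookkeeping, not the speaker's algorithm; real ptychography uses 2-D spots and a known probe profile.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
true_obj = rng.random(n)  # hypothetical non-negative 1-D "object"

# Three overlapping illumination windows ("spots") across the object.
spots = [slice(0, 16), slice(8, 24), slice(16, 32)]
# One recorded diffraction amplitude per spot position.
measured = [np.abs(np.fft.fft(true_obj[s])) for s in spots]

obj = rng.random(n)  # shared running estimate of the whole object
for _ in range(100):                 # repeated sweeps over all spots
    for s, amp in zip(spots, measured):
        piece = obj[s].copy()
        for _ in range(3):           # a few iterations of the loop per spot
            F = np.fft.fft(piece)
            F = amp * np.exp(1j * np.angle(F))  # enforce measured amplitude
            piece = np.abs(np.fft.ifft(F))      # positivity in object domain
        obj[s] = piece               # write back: overlap seeds the next spot
```

No accuracy is claimed for this toy; the point is the data flow — each spot refines the shared estimate, and the overlapped samples carry information from one spot to the next.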
The constraints over in the diffraction-pattern domain are actually not so much binary masks; they're more like "the amplitude cannot exceed a certain amount," that sort of thing — somewhat different constraints. But yes, I think it could be done by optical computing. As it turns out, with modern computers this can go pretty fast, so I don't think you need the speed that I would assume you could get from optical computing.
And to talk about the factors that can affect the quality of the image produced in ptychography: in general, they're the same factors that affect image quality on a microscope — dirt on the optics of the illumination source, for instance. If the laser beam has bugs flying through it, you're going to end up with a pretty lousy image. If the scan stage doesn't position the sample — the object or the laser beam, whichever you're moving — precisely, then it gets a lot more difficult to get a good image out of it.
A typical camera is about eight bits; if you can get a 16-bit camera, you're going to get a lot cleaner imagery. Camera noise relates directly to the dynamic range: if you have an 8-bit camera, then anything in the further bits just isn't there — it's been truncated — and so it acts like noise. Camera nonlinearities —
In my experience that hasn't been such a big problem, but if the output of the camera is nonlinear — so that you double the intensity at some spot and you don't get double the output — then that will cause problems. And then one other thing that I failed to include in this is the size of your sensor array, of your image chip. If you don't catch all of the diffraction pattern, then you're not getting all the information, and you will not be able to get the maximum resolution.
But that's a pretty good question, actually. Any place in a laser beam — anywhere in the laser beam — is a diffraction pattern; it just is, that's the way it goes. But when you focus it to a spot, it's a very simple diffraction pattern. So the way you would confine it to a spot is, you take the laser beam and first clean it up with a pinhole spatial filter.
Then you send it through a circular aperture, or a square aperture, or whatever you want, so it's coming through there, we'll say, more or less collimated. And then you use a pair of lenses, kind of like a telescope, that will re-image that spot somewhere else but still leave the beam collimated — and that somewhere else would be on your object. If you want me to sketch out the lenses for that, I can do that later and email it to you.
This is the "before": this is the object. It's the face of a squirrel, and then the amplitude of a fox image was used as the phase on this. This is all done just numerically — this is not a real test, just a simulation. So the complex object was the phase of the fox and the amplitude of a squirrel.
You do a Fourier transform of that, throw away all the phase information, keep only the intensity information of the Fourier transform, and go through the loop many times, and this is what you get back out — it's pretty much indistinguishable. Now, you'll notice here that this is broken into a bunch of spots: this was actually a ptychography simulation.
Okay, so that is my quick and simple and informal presentation of what ptychography is. Are there any questions?
Okay — an earlier question Dick asked was: so the Fourier image is of the projection? Okay, we really have two things: we have the object and the light hitting it. The light distribution right at the surface of the object is really the product of the phase and amplitude of the incoming laser beam with the phase-and-amplitude function of the object itself. It's the product of those two, point by point, as the light field propagates through space to the image sensor.
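That product relationship can be written down directly. A minimal sketch (my own, with made-up probe and object functions): the exit wave is the pointwise product of the illuminating beam with the object's complex transmission — absorption in the amplitude, thickness in the phase — and the camera records only the intensity of its far-field transform.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[:n, :n]
r2 = (xx - n / 2) ** 2 + (yy - n / 2) ** 2

probe = np.exp(-r2 / (2.0 * 8.0 ** 2))         # clean Gaussian laser spot
absorption = 1.0 - 0.5 * (r2 < 100)            # amplitude loss at each point
phase = 0.3 * np.cos(2 * np.pi * xx / 16)      # phase shift from thickness
obj = absorption * np.exp(1j * phase)          # complex transmission function

exit_wave = probe * obj                        # point-by-point product
recorded = np.abs(np.fft.fft2(exit_wave)) ** 2 # what the sensor captures

assert np.isrealobj(recorded) and np.all(recorded >= 0)
```

Because `recorded` keeps only the intensity, both the probe's and the object's phase must be recovered by the iterative loop rather than read off the camera.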
A fellow by the name of — let's see — Ori Katz, an Israeli researcher, wrote a very short, simple paper that is what really caught my interest in this. He showed that you could take a piece of shower glass — which is intended to let light through but be effectively opaque — take a photo of something on the other side of that shower glass, and then unscramble the photo. I sort of doubted it, and I wasn't able to make his algorithm work, but I made my algorithm work on it. So you can do that, and that's totally incoherent light.
The Airy disk — you're talking about around the laser beam?
Gosh, yes — it would be an Airy disc, of course, at the focus, but actually it doesn't matter. It turns out that in this math —
The count depends on the algorithm — it depends on the specific algorithm you're using, and it depends on how you count your iterations — but two or three thousand iterations is not unusual. In the algorithm I ended up developing, it's more like 40 or 50. In terms of total time, it depends how big and complicated your object is, but for something like that squirrel-fox thing, it could be just a few minutes.
Okay, Susan asks: do you have MATLAB code that does this? I do, and I would be happy to provide it. It'll only be useful, though, if I can sort of tune it so it suits what you want to do with it, because I didn't design it to be easy for people to use. What frequencies of lasers do you use?
Okay, Tom asks: what is it converging to? What does it converge to? Okay, let me show the image again, because I think you came in kind of late, Tom. So, where is it —
That's the basic thing we're dealing with, and the objective is to take this diffraction pattern and, from that, get the object. And then what ptychography is, is that generalized to the case where you scan the object by putting the spot — the laser spot — in many different places on the object. So you have a whole lot of different diffraction patterns, and then what you want to do, from all those diffraction patterns, is reconstruct that object.
And if everything is done right, you can end up with resolution down to close to a half wavelength — it's super good. And in principle, because you're illuminating it monochromatically, it should be possible to measure the absorption spectrum at every point in the object down to that kind of resolution. I haven't heard of anyone doing that, but it would be a really interesting thing for someone to do.
I regret that I grabbed this picture off the internet, because I didn't have much time to prepare for this, but we can presume that this thing here is a microscope slide with something on it, and we know roughly where that something is. So we scan across that — we do need to cover the whole thing with the laser beam. Okay.
The basic data gathering — I'm going to turn off your echo — when that's done, you've obtained the intensities at every point in the diffraction pattern. You've obtained — let me start over again — you've obtained the absolute value of the Fourier transform of each of those spots on the object.
Okay — so how do you put together the individual spot images, or rather the spot Fourier transforms — the full Fourier-transform functions of the spots — after the iteration? Right now you have the amplitude and phase components of the individual spots; how do you put those together to get the amplitude and phase of the whole image? Okay.
Then we run this through a few iterations. What that does is give us a further refinement of what's in the overlapped area, and then it gives us the first guess at what's in this next overlapped area. So after we've gone through all this stuff, we've got a somewhat refined — I mean, it might not even be recognizable, but a somewhat refined — first guess at everything in here.
So when we're all done running through all this stuff, we have an X-Y map of the phase and amplitude.
Each subsequent spot includes a portion of the phase-and-amplitude map of the previous spot and refines that a little bit, and in the process, at each step, it's assigning a phase and amplitude to an X-Y position in the whole image — first within the spot, and then within the next spot, which overlaps in here. So as we're going across the image, we just keep building a phase-and-amplitude map, point by point, across the whole image.
No, no —
This is the result of going from all the diffraction patterns back to the object. So this represents two things, really. One is where we're illuminating the object in a whole series of spots; and then off-screen here someplace we have all these diffraction patterns. Every diffraction pattern that we get corresponds to one of these spots, and after we've gone back to the spot, the spot represents just a peephole looking at the object at that point, and it contains the phase and amplitude information in that region of the object.
Bradley, you had asked me if I would do a brief talk about that other paper. Do you still want me to do that, or is it getting too late?
Okay — we never make it; we never build it. All we have is the Fourier transforms of the individual pieces of the object. But if you wanted the Fourier transform of the whole object, we could construct it — we have enough information — and the easiest way to do that would be to take the whole object that we recreated and take its Fourier transform. And what that would give us, then, is —
Okay, let's just move this back up here where we can see it. Well, the assumption in this drawing is that the laser beam is normal to the surface of the screen here. To be able to use the projection theorem, and thereby have the information you need to recreate a three-dimensional image of the entire object, you would need to do this many times with the laser beam coming in from different angles that are not perpendicular to the screen here, and that would produce a whole lot more different diffraction patterns than we had on our first scan. Does that help any? You just need a whole lot of diffraction patterns to build the 3-D Fourier transform that you can then invert to make your object.
The assumption is normally that the laser beam is the same — that it doesn't change as you're scanning the object. So you can scan the object with the laser beam at one angle, and then you tilt the laser beam and scan it again; then you tilt it again and scan it again. Or, alternatively, you can keep the laser the same and turn the object.
Dick, I've turned your microphone back on, but if you could mute it when you're not using it — okay, good. So, is everybody acquainted with the term photoacoustics? Dick says no — okay, here's the general idea. If you shine a laser beam, or any kind of light, into some tissue, or into a solution, or into anything that the light will penetrate, it will heat wherever it hits. Sometimes the amount of heat is very small, but when something's heated, it expands.
So if you imagine that you're focusing the light into one little point somewhere inside a piece of tissue, and you just send a pulse of light to that one point, that one point is going to emit a sound wave, because it expands briefly when the light hits it. That's what photoacoustics is about: one way or another, you illuminate a sample, and then you've got microphones sitting on the sides to pick up any sound waves that are emitted. Now, there's all kinds of information you can get from that, which I'll go into later.
So imagine that we have some tissue — that's this pinkish stuff — and we surround it with acoustic detectors, just little microphones, and we have a source of sound someplace inside the tissue. All these detectors around here are exquisitely good at detecting when a signal arrives. So by comparing when the signal arrives at all these different places, we can localize — we can determine with great precision, down to on the order of half a wavelength of the sound, where that source is.
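That arrival-time localization can be sketched numerically. This is my own toy illustration (made-up detector positions and a water-like sound speed, not from the talk), recovering a source position from the absolute arrival times at four detectors by a brute-force grid search:

```python
import numpy as np

c = 1500.0  # speed of sound in water-like tissue, m/s
detectors = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
source = np.array([0.031, 0.072])  # hidden source position, metres

# Each detector records when the pulse arrives: distance / sound speed.
arrivals = np.linalg.norm(detectors - source, axis=1) / c

# Grid search: find the point whose predicted arrival times best
# match the measured ones (least squares over all detectors).
xs = np.linspace(0.0, 0.1, 201)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])
pred = np.linalg.norm(grid[:, None, :] - detectors[None, :, :], axis=2) / c
best = grid[np.argmin(((pred - arrivals) ** 2).sum(axis=1))]

assert np.allclose(best, source, atol=1e-3)  # localized to within the grid step
```

Real systems solve this with proper time-difference-of-arrival algebra rather than a grid, but the principle is the same: the pattern of arrival times pins down where the sound came from.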
That's what ultrasound is all about — that, and variations on it. Now, normally what happens is there will be an emitter array that sends a bunch of sound into the tissue, and then a bunch of detectors that pick up what comes out of it, and they do a lot of computation and come up with an image of the scattering of sound inside the tissue. But it doesn't have to work that way: you can actually have a sound source right in the middle.
Let's imagine you had a little magnetic particle. You could just twitch that particle; that would launch a sound wave, and you could tell exactly where that particle was. And in the process, you'd also learn some things about the tissue — how the tissue affects the sound wave going through it in each direction.
You have a laser pulse that illuminates the volume of your tissue diffusely, because — like your axolotl embryo — most tissues scatter light pretty crazily. They don't scatter sound quite so much, but they scatter light. So you illuminate the whole volume of the tissue with a pulse of light. Now, for all practical purposes, the light arrives everywhere in that volume at the same instant. As far as the sound is concerned, it's all one instant, because the sound frequencies are much, much slower than the light frequencies, and the laser pulse is much shorter than the period of a sound wave.
A typical ultrasound period is on the order of microseconds, so we're talking about the period of the sound wave being on the order of a thousand times longer than the typical laser pulse. So, in essence, we've got every point in that tissue instantaneously illuminated by light, and some of the light gets absorbed. And by the way, it doesn't matter that the light is diffused; it gets in there, so everything's lit up.
Everything is momentarily heated — by quite a small amount — by the light, so every point in that tissue emits a sound wave in proportion to how much light it absorbed. The micro— sorry, the microphones that are surrounding this pick up the sound wave, and each microphone hears it a little bit differently, because it's travelled a different distance from every point in the tissue.
I don't know the math for doing this, but it's going to be analogous to the math that we just described in the ptychography: you're dealing with Fourier transforms, and basically trying to find the best model of what's inside that tissue, to give you a prediction of what the acoustic detectors will see that matches up with what they actually did see.
So there's some really neat — oh yeah, one more thing. The resolution that can be obtained here is the ultrasound resolution. It's not unusual for ultrasound waves to have a wavelength on the order of a few microns, and that's the kind of resolution that you can get using this ultrasound imaging.
Let's see if it says anything useful down here. Here's the setup: in this case they're using an infrared laser, and they're diffusing the light out. It's already going to get diffused by the mouse, but they're diffusing it some more before it gets to him. In this case, over here, they're hitting it with two different —
It's important to note that what you're looking at — what you see — depends on the wavelength of light that you use, because different tissues, and different components of different tissues, will absorb different wavelengths of light more efficiently. So you're kind of doing absorption spectroscopy, in 3-D, of whatever you're looking at — which, again, like I said for ptychography, just seems like it could be really, really useful. Now, this won't give you the same resolution that you can get using ptychography, but I think it's a whole lot faster.
[Audience question, mostly inaudible: something about working with an optical system whose source is specified at 10 nanometers, and whether those wavelengths would work here.]
Right — any wavelength for which there is a suitable light source will work for something like this, as long as you have enough power. Another thing I'd like to mention about photoacoustics: when I first encountered photoacoustics, it was close to 20 years ago.
But then you get some more pulses that follow, which result from the fact that molecules will absorb some of the light and then relax — and then they might relax more, and relax more, and every time they do that they emit another sound pulse. So you can learn something about the electronic structure of what's in the liquid sample from the kind of sound that comes out when you hit it with a pulse. I haven't seen anything in these papers talking about using that phenomenon.
You can send light into a diffusing structure and it goes everywhere; but if you can tell how much light is reaching a certain point, you can modify the shape of the wavefront as it comes in, and keep adjusting it until you get more and more light focused to that one point, and you can end up getting a really good focus. Basically, you come in with a scrambled light beam that is then unscrambled by the diffusive properties of the medium it's passing through.
You can then hit that with a much more powerful pulse with the same wavefront shape, and all of that light — or a large fraction of it — will focus down to that one point in the pulse. Which means you should be able to do laser surgery: you come in here, and you just pick this one spot — you want to blow that up, you want to heat it.
You want to break some molecular bonds, or whatever — and all this stuff on the outside that is diffusive doesn't matter. As long as you've got a suitable ultrasound system and a wavefront modulator and the right algorithms, it ought to be possible to do laser surgery on the scale of a few microns inside diffusive tissue. Okay, that was what I wanted to say. And yes, it's called laser ablation.