Description
Narrowband imaging – why and how
Francesco Meschia
Capturing spectacular images of nebulae with narrowband filters has become almost commonplace, and for some good reasons. In this meeting we’ll see what those reasons are, and how to capture narrowband data effectively.
In a future meeting, we’ll see how to process the data into a color image.
A: Hi everybody, welcome to the August 2022 SJAA Imaging Special Interest Group meeting. Tonight we're going to continue our series on narrowband imaging. Last month Alex talked to us about narrowband imaging and described his app for doing that; tonight Francesco Meschia is going to talk to us about the why and how of narrowband imaging, and he'll follow it up in a couple of months with some examples of processing narrowband images. Next month Julian Lecomte will talk to us a little bit about maker projects for astro imaging. And I want to once again encourage anybody who has other presentations to make: we would love to hear you speak at one of these meetings. Please send an email to our imaging mailing list, or to me directly — we'd love to hear from you and get you to talk. So, without further ado: Francesco, take it away.
B: Great — thank you very much, and good evening to everybody. My name is Francesco Meschia. This is the first of a series of two presentations I'm giving about narrowband imaging, and I called this first one "Why and how?" because we're going to discuss exactly that: why we do narrowband imaging, why it has changed the way amateurs do astro imaging, and how we capture narrowband data. Tonight we're not going to talk about processing the data.
We call narrowband imaging a set of techniques — it's not just one — that involve capturing and processing images of astronomical objects using only very small, narrow portions of the visible spectrum of light. We do that by selecting the portions of the spectrum of interest with filters that have narrow bandpass windows.
So that's why we call it narrowband imaging: the name refers to the bandpass of the filters. But how narrow is narrow? I put on this slide a chart — I believe I took it from Astronomik — that is a representation of the visible spectrum and a bit beyond, everything between 400 and 800 nanometers.
What we can see with our eyes is essentially the region between 400 and 700 nanometers. Of course there are individual differences — some people can see a little more, some a little less — but let's say that on average we have about a 300-nanometer window in terms of the wavelengths of the photons we can see. When we do what's called broadband imaging, using for instance RGB filters or a one-shot color camera, each of the filters covers roughly a 100-nanometer slice of that window.
When we talk about narrowband filters, what's commercially available today has transmission windows that go from 3 nanometers to 15 nanometers, so they are several times narrower than a typical R, G, or B filter — anywhere from five or six times narrower up to 30 times narrower — and there are differences, of course, in what you can obtain with the different bandwidths. As a graphical representation, I have superimposed on the three RGB windows the transmission windows of three 3-nanometer narrowband filters: Oxygen III, Hydrogen-alpha, and Sulfur II. Oxygen III has the shortest wavelength, at 500.7 nanometers; H-alpha has an intermediate central wavelength, in the red region at 656.3 nanometers; and finally Sulfur II has 672 nanometers as its central wavelength.
Okay, this is great, but why do we care about this? Why is it important — or, since it's not really important for us amateurs, why do we like to do narrowband imaging? Well, I'll start off by quoting our host, who once asked in this forum: why do stars shine? Stars give off light because they're hot, and they behave essentially like a piece of metal brought to a very high temperature — in more scientific terms, they behave like black-body radiators. The spectral composition of the light they emit is a continuous spectrum, and depending on the temperature of the star this spectral composition may peak at one wavelength rather than another. For instance, a red star may have a peak wavelength of about 700 nanometers, so it would look red.
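(Aside from the editor, not from the talk: the temperature-to-color relationship Francesco describes is Wien's displacement law, and a short sketch makes the 700-nanometer red star concrete. The example temperatures are illustrative assumptions.)

```python
# Wien's displacement law: the peak wavelength of a black-body radiator.
# The formula and constant are standard physics; the example temperatures
# below are illustrative assumptions.

WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvin

def peak_wavelength_nm(temp_k: float) -> float:
    """Wavelength (in nm) at which a black body at temp_k kelvin peaks."""
    return WIEN_B / temp_k * 1e9

for temp_k in (4100, 5800, 10000):
    print(f"T = {temp_k:5d} K -> peak ~ {peak_wavelength_nm(temp_k):.0f} nm")
# 4100 K -> ~707 nm (looks red), 5800 K (Sun-like) -> ~500 nm, 10000 K -> ~290 nm
```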
However, not everything in the universe behaves like that. Astronomers started studying the spectra of nebulae in the 19th century — and back then anything that was not a star was called a nebula; they had no perception of the difference between what we now call a galactic nebula and a separate galaxy (Andromeda used to be called the Andromeda Nebula until the 1920s). They found that some nebulae had a spectrum that resembled a star's spectrum, with a continuous distribution of photons across the various wavelengths, but other nebulae were radically different. This is, of course, a modern spectrum that I'm picturing here, but you see that it's a line type of spectrum, in which there is a continuum — basically almost a background continuum.
But there are very prominent spectral lines. Now, we call them lines just because if you look at such a spectrum through a spectroscope, you're going to see a line: the slit of the spectroscope illuminated by one monochromatic wavelength. If you plot them in a chart like this, they look like sharp vertical peaks. Back then, although they didn't understand why, they knew that different chemical elements had different sets of spectral lines, and they had atlases of spectra for the different elements. As an example, here I have the spectrum of the emission from hydrogen.
But despite the fact that they had these atlases, and some mathematical rules that allowed them to predict what wavelengths to expect from an element, there were some combinations of lines that couldn't be replicated by any known element on Earth. So people started wondering: are they due to new elements that we haven't discovered yet? I'm bringing up here the two most famous cases. In 1864 Huggins observed the planetary nebula NGC 6543, also known as the Cat's Eye Nebula.
It was the first time: nobody had looked at a planetary nebula through a spectroscope before that. What he found was a combination of three spectral lines, at 372, 495, and 501 nanometers, and this combination could not be traced back to any known element found on Earth. So he thought he had discovered a new element, which was actually given a name: it was called nebulium. Five years later, in 1869, there was a solar eclipse, on August 7th, and two different teams of astronomers took the spectrum of the corona. They found a green line at 530.3 nanometers; this, again, could not be associated with any element known on Earth, and so they christened another new chemical element, previously undiscovered, and called it coronium.
However, as our understanding of how spectra are created — our understanding of the mechanics of the atom — became better and better, we found that these better theories unmasked familiar faces. Nebulium didn't actually exist: when people worked out the number of electrons such an element should have, they found there was no room for a new element in the periodic table. And what they found is that you actually don't need it.
You don't even need to assume there is a new element, because that spectrum is perfectly explained by the emission of oxygen from which you take away two electrons — oxygen ionized twice — in very low pressure conditions, hard to replicate on Earth. Do that, and you get exactly that kind of spectrum.
Same thing with coronium: it was found to be nothing else but the spectral line you get by ionizing an atom of iron 13 times — taking away 13 of the 26 electrons that an atom of iron has. So, okay, this is great, but why are we still talking about it in this presentation?
Well, because most of the stuff in the universe, as we know, is hydrogen. Hydrogen has been present in the universe since the Big Bang; it's the simplest of all atoms, made up of one proton and one electron, and it isn't produced by fusing anything else — it's the simplest one.
When a hydrogen atom is ionized by radiation — say you shine UV light on a cloud of hydrogen — eventually the freed electron will recombine with a free proton to form another hydrogen atom. In doing so, the electron may initially find itself at a higher energy state than the one it occupied before, because it has received some energy from the incoming radiation, and it will eventually fall toward what we call the ground state — the lowest energy state — by transitioning through intermediate states. When it transitions from the state associated with principal quantum number three to principal quantum number two, it emits radiation at a very precise wavelength, 656.28 nanometers, and that's what we call hydrogen alpha. It's one of the lines of what is called the Balmer series, which are all the transitions from a higher energy state into the state with principal quantum number two.
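(Aside: the Balmer wavelengths quoted here follow from the Rydberg formula; a minimal sketch, assuming the standard hydrogen Rydberg constant.)

```python
# Balmer-series wavelengths from the Rydberg formula:
#   1/lambda = R_H * (1/2**2 - 1/n**2)   for transitions n -> 2.

R_H = 1.09678e7  # Rydberg constant for hydrogen, 1/m

def balmer_nm(n_upper: int) -> float:
    """Vacuum wavelength (nm) of the hydrogen transition n_upper -> 2."""
    inv_lam = R_H * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_lam

for n in (3, 4, 5):
    print(f"n = {n} -> 2 : {balmer_nm(n):.1f} nm")
# n=3 -> 656.5 nm (H-alpha; 656.28 nm is the in-air value),
# n=4 -> 486.3 nm (H-beta), n=5 -> 434.2 nm (H-gamma)
```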
The beauty of this is that most of the universe glows in H-alpha when excited by ionizing radiation. So when you have a very hot star — lots of ultraviolet radiation — and around it there's a cloud of gas, chances are it will glow in H-alpha. It makes a lot of sense to image at this wavelength, because we know we're going to see stuff. But what other interesting stuff is there? Well, it's star stuff, as Carl Sagan would put it. There is, as I said before, some helium that was created in the Big Bang.
But it's interesting because it's associated with the later stages of evolution of very massive stars — red supergiants — when they start forming the heavier elements, so these lines are markers of stellar evolution. These are some of the reactions you find in the so-called alpha process. A star like the Sun fuses hydrogen into helium, but once you have produced helium, if the temperature is sufficiently high, helium nuclei can fuse and produce beryllium.
If you fuse another helium nucleus with a beryllium nucleus you're going to produce carbon; then carbon plus helium gives you oxygen, oxygen plus helium gives neon, neon plus helium gives magnesium, and then silicon, and then sulfur; and you can continue, producing argon, calcium, titanium, chromium, and finally iron and nickel. Iron and nickel are the last two elements that can be produced in a steady state.
So why sulfur — why is it interesting, and to whom? Well, ionized sulfur is present, as we said, in planetary nebulae, especially those shed by very massive stars. They need to be big, because they need to be hot enough at some point in their life cycle to produce sulfur, and in order to produce sulfur they must have produced all of the other elements up to silicon along the way. And of course it's also present in supernova remnants, because a supernova produces essentially everything.
Professional astronomers found that the abundance of sulfur appears to be correlated with regions of compression, like shock fronts in the interstellar medium — and of course, if you are a research astronomer who wants to study the interstellar medium and its interaction with supernova explosions, planetary nebulae, and so on, this is interesting. But why do we care about Sulfur II as amateurs?
I don't really have a precise answer. Maybe it's because we like the Hubble images in the SHO palette and we like to do the same. It's also true that some of us amateur astronomers don't care about S II and are perfectly happy doing what's called bicolor imaging, using just hydrogen and oxygen — you might have heard about HOO imaging; that's what it is.
These three lines map to red, green, and blue, and so it is convenient to capture data in three different wavelengths that have different morphologies — different shapes — so that you can combine them into an interesting false-color image. We're going to dig deeper into this in the second session, in two months.
So are there other interesting lines? Sure, there are a lot of them. For instance, another one I'd like to see more of is H-beta. I remember it from when I was buying Sky & Telescope in the late 80s or maybe early 90s:
if you put in a filter that increases the contrast of that line, you can see some of those objects visually. From an imaging perspective, though, it's probably not that interesting: where there's H-beta there's also H-alpha, so it's probably going to show the same type of structures — the same morphology — that you've already captured in H-alpha. And we also have to deal with what's available on the market: most commercial narrowband filter sets are still H-alpha, S II, and O III.
Okay, so we know why we would like to image at those wavelengths — but why does it need to be done with a narrowband filter? Can't I just use my RGB filters or my OSC camera? After all, if you capture everything with a 100-nanometer-wide filter, you're also going to capture those wavelengths. Why is it important to capture only those wavelengths? There are a number of reasons.
For instance, selectivity: you are going to cut everything else out and see only — or almost only — the objects that have that line emission, and this also gives you better contrast. Narrowband filters give you less star presence: the stars are smaller, tighter, less intrusive on your image. And then there's the reason which, in my opinion, is the most important: with a narrowband filter you are greatly reducing the adverse effect of light pollution — and the narrower the bandwidth, the greater the benefit.
So I have here a few examples that I'd like to share with you, and I'm going to start with the effect on contrast.
With a narrowband filter, the throughput of the stars — the photons your sensor collects that come from the stars — will be far fewer than with a broadband filter, so you're going to get results similar to this.
In the broadband image you mostly see the stars; in the image on the right the stars are not intrusive at all. They don't take anything away from the nebula — they actually make it very legible.
This is the famous — I can't remember the catalog number — Tadpoles Nebula, again on the left with the OSC camera and on the right in the bicolor palette, the HOO, so hydrogen and oxygen. Look at the two tadpoles: you almost have to make an effort to see where they are in the OSC image, while here they are very contrasty.
Also look at the darker clouds: you see details in those dark wisps around the dark clouds that are simply not visible in the OSC image. And again, the stars: there's a big open cluster at the core of this nebula — it's actually the cluster that makes the nebula glow — and it's much more manageable, I would say, in the narrowband rendition.
So narrowband helps in making images that are more striking, in which the contrast of shapes and the contrast of color is much more attractive. Again, I'm talking about aesthetics here, not about science — it's pretty pictures, okay. What I believe is the real game changer, though, is the reduced impact of light pollution, and this is especially important for urban and suburban imagers.
You will read on Cloudy Nights that, oh, you can achieve the same with good gradient management — gradient reduction, gradient subtraction, you name it. Well, let's see if that is true. There is a common misconception that I'd like to debunk here: that we cannot image objects fainter than the background, because light pollution hides the faint object.
If you think about it, it doesn't really work that way: light always adds, so even a faint object always rides on top of the background light, not under it.
If you have a background sky that gives you one photon per second per pixel, and you get half a photon per second per pixel from an astronomical object, you're still going to count one and a half photons per second per pixel where the object is. So there's always additional light. And it's not just a matter of contrast or saturation either — otherwise we would solve all of these problems by doing gradient subtraction and using short exposures to avoid saturation, and we would have defeated light pollution.
It doesn't work that way — I think we all know that's not the right proposition — and I'd like to explain why. In my opinion, it's because the name of this game is signal-to-noise ratio, SNR. Light is not a liquid;
photons do not arrive as a continuous flow, they arrive one by one. They are quanta, and the result is that their arrival is described by a statistic with the properties of a Poisson distribution, as studied by Poisson.
Rare events behave in a certain way, and so if you have two detectors — like two pixels in a sensor — counting photons from exactly the same source, at the end of an exposure they will likely end up with two different tallies, even in a perfect camera. It has nothing to do with the camera; it has everything to do with the nature of light.
What will the distribution of the photon counts coming from the background look like? You're going to have a mean count across all of your pixels that are exposed to the same light, and this distribution is also going to have a certain standard deviation — a certain width. Well, a property of Poisson events is that the standard deviation is equal to the square root of the mean.
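(Aside: a minimal simulation of this square-root-of-the-mean property, assuming an arbitrary mean of 600 photons per pixel.)

```python
# Photon counting is Poisson-distributed: the standard deviation of the
# counts equals the square root of the mean.

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 600                       # expected photons per pixel
pixels = rng.poisson(mean_photons, size=1_000_000)

print(pixels.mean())                     # ~600
print(pixels.std())                      # ~24.5
print(np.sqrt(mean_photons))             # 24.49... -- same thing
```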
So if the mean represents the number of photons counted by your detector, you can expect a standard deviation in this distribution equal to the square root of the average number of photons. This is a very important property, because we can work it to our advantage: it's not the number of photons from light pollution that hides a faint object — it is the fact that from pixel to pixel there are variations, with that standard deviation. About 68 times out of 100,
you can expect the variation from one pixel to the next to be less than one standard deviation, and it's this variation that makes a weak signal undetectable: the variation itself is larger than the signal you're after. This is what we call photon noise. Some people also call it shot noise — the same thing, named by focusing on a different property — and it has nothing to do with the camera. Again, it's the nature of the photon beast.
If we start with a situation in which I get one photon per second per pixel from the background, and the brighter square at the center only adds 0.05 photons per second per pixel — so, on average, one extra photon every 20 seconds — then at the end of a 600-second exposure the mean number of photons counted is going to be 600 per pixel.
The standard deviation associated with that will be about 24 photons. So from one pixel to the next I can expect, 68 percent of the time, a difference of less than 24 photons — and the signal is only 30 photons per pixel, so the signal-to-noise ratio is about one. In the chart on the lower left I have plotted the count of photons in each pixel, along with a dashed line, and I don't see the object — I don't see any difference that makes me think "oh yeah, this is the brighter part of the object." To my eye, at least, it's virtually undetectable.
You mean — I don't have a cursor here — but you mean the chart on the upper right? Yes: the upper right represents, let's say, the object space — sorry, not the image space, the object space. There is a difference there: if I look at the actual count numbers, the difference corresponds to the real values. But in the image it's undetectable.
Now let's do the same with a larger number of photons. I have the same background, one photon per second per pixel, and I increase the signal — to, let's say, 0.1 photons per second per pixel... actually, sorry, that's a mistake: it's 0.08. The exposure is still 600 seconds, so the mean is the same and the standard deviation is still 24 photons, but the signal collected from the object is now 48 photons per pixel, so the signal-to-noise ratio has become two. If you look at the chart on the left, you start seeing that there is an increase around the center — you still cannot really tell how sharp it is.
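(Aside: the two worked examples above, redone numerically; this keeps the talk's simplification that the noise is the Poisson noise of the background alone.)

```python
# The two simulated cases: 600 s exposure, background of 1 photon/s/pixel,
# object adding 0.05 vs 0.08 photons/s/pixel. As in the talk, the noise is
# taken to be the Poisson noise of the background alone.

import math

def snr(bg_rate: float, obj_rate: float, t_s: float) -> float:
    """Object photons divided by the background's photon noise."""
    background = bg_rate * t_s           # mean background photons per pixel
    signal = obj_rate * t_s              # object photons per pixel
    return signal / math.sqrt(background)

print(snr(1.0, 0.05, 600))  # ~1.2 -- lost in the pixel-to-pixel variation
print(snr(1.0, 0.08, 600))  # ~2.0 -- just starting to show
```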
Now, this is great, but we are not in this simulation; in reality it's too bad that we cannot just crank up the brightness of an object we'd like to image, the way I did in the simulation. What we can do is change the other element of the signal-to-noise ratio — the number below the division sign, the noise.
For instance, we can turn off light pollution by loading the telescope in the car and driving to the Sierra, or we can increase the exposure time — or, equivalently, stack multiple exposures together. This works, but it takes patience, because the signal-to-noise ratio only goes up like the square root of the total integration time. I have hand-drawn here what the relationship looks like between the signal-to-noise ratio on the vertical axis and integration time, and to me this is a law of diminishing returns.
If you want to double the signal-to-noise ratio, you have to quadruple the exposure time. So if you image for one hour, the next big change will require four hours, then 16, then 64 — it quickly becomes unmanageable.
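(Aside: a sketch of the square-root law just described, assuming an arbitrary SNR of 2 after the first hour.)

```python
# SNR grows like the square root of total integration time, so every
# doubling of SNR costs four times the hours.

import math

snr_after_1h = 2.0                       # assumed baseline
for hours in (1, 4, 16, 64):
    print(f"{hours:2d} h -> SNR {snr_after_1h * math.sqrt(hours):.0f}")
# 1 h -> 2, 4 h -> 4, 16 h -> 8, 64 h -> 16
```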
So if you want to image a nice hydrogen cloud and you're using an H-alpha filter, almost all of the photons at the 656.3-nanometer wavelength will go through — these narrowband filters typically have a peak transmission of 90 to 95 percent, which is pretty good. Of course, narrowband doesn't work for all targets. It works great for emission nebulae — gas clouds, but also planetary nebulae.
It doesn't work for dark nebulae. It doesn't really work well for reflection nebulae, because a reflection nebula typically reflects the light of a star, and as we know the light of a star has a black-body spectrum. And it doesn't work really well for galaxies — unless you want to image the H-alpha or O III regions of that galaxy — because galaxies are mostly made of stars, and stars have a continuous spectrum.
But let's simplify things: with a 3-nanometer filter against a roughly 100-nanometer broadband window, I'm only going to receive about three percent of the incident background radiation. The associated noise will be reduced by the square root of that ratio — or, if you prefer, inverting the fraction: if the signal from the object is transmitted essentially entirely, let's say 95 percent of it, the signal-to-noise ratio goes up by the inverse of those factors. Assuming you start with a 100-nanometer filter and end up with a 5-nanometer filter, by switching the broadband for the narrowband the signal-to-noise ratio increases by four and a half times. That's pretty significant.
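(Aside: the same back-of-the-envelope calculation in code, under the talk's assumptions — the background scales with bandwidth, and the emission-line signal passes essentially in full.)

```python
# Background-limited SNR gain from narrowing the bandpass: background
# scales with filter bandwidth, the emission-line signal passes in full,
# and noise is the square root of the background.

import math

def snr_gain(broad_nm: float, narrow_nm: float) -> float:
    """Factor by which SNR improves when swapping bandwidths."""
    return math.sqrt(broad_nm / narrow_nm)

print(snr_gain(100, 5))  # ~4.5 -- the "four and a half times" above
print(snr_gain(100, 3))  # ~5.8 for a 3 nm filter
```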
So now that we have an understanding — and our appetite has been whetted by all this — what do we need to do to capture narrowband data?
Inevitably, you are going to have to spend some money on filters. There is one traditional way that has always been around: use what's called a monochromatic camera — actually, in my opinion, a misnomer; it should be called a panchromatic camera, because it doesn't see only one color, it sees all colors, but it sees them identically, so it doesn't differentiate between colors. Then you put a filter wheel in front of this panchromatic camera.
In the carousel of this filter wheel you put the typical three filters: H-alpha, O III, and S II. More recently, though, there's been an alternative: a number of multi-narrowband filters have hit the market. I believe one of the first ones — probably the most famous — was the Radian Triad.
The filter is designed so that the windows it transmits are somewhat compatible with the transmission windows of the Bayer matrix, so you're going to get some signal in the red pixels, in the green pixels, and in the blue pixels, and you're not wasting your color sensor the way you would if you just put a single narrowband filter in front of it. There's also a reason to use them with a mono camera, as an alternative to the luminance filter.
A luminance filter cuts everything shorter than 400 nanometers and typically everything longer than 700 nanometers. Instead of that, you could use a multi-narrowband filter — if what you're trying to image, of course, emits light at those wavelengths. There's also another benefit of using these, in particular of using a mono camera with narrowband filters.
Any chromatic aberration you may have in your system becomes almost negligible, because each time you're imaging one very narrow window of the spectrum, so there is no significant dispersion inside it, and you're going to be in focus each time. Everything will be sharp — although not all at the same time, because you're imaging in three installments.
These are just some examples of manufacturers of narrowband filters that you can find. Probably the two that are considered the best from a quality perspective are Astrodon and Chroma; they are also the most expensive.
Astronomik is another very reputable brand — they're German. Baader is also German; they make narrowband filters, and they released a new series last year. There's been some controversy about those, especially if you image at the fastest f-ratios; I use them, they work for me, but your mileage may vary. And of course there are other players on the market, like Optolong, which makes a lot of optics for amateurs, and Antlia.
I believe Antlia has a set of 3.5-nanometer filters in SHO, and I'm sure there are other ones. These are also some examples of the multi-narrowband filters. I already mentioned the Radian (OPT) Triad; the Triad Ultra, despite the name, does not transmit three sections of the spectrum but four — S II, H-alpha, O III, and H-beta — and in this way it tries to cover all three of the Bayer filters.
Optolong has also long been present in this segment with their L-eXtreme filter. You can use it as a luminance substitute, and it has a pretty narrow bandwidth, as you can see from this chart. More recently, IDAS has come out with some multi-narrowband filters, although they are not as narrow as the other ones.
They will tell you that it's a benefit, because in addition to capturing H-alpha you also capture the ionized nitrogen, and in addition to capturing O III you also capture H-beta and the second line of O III at 495 nanometers. But of course the price you pay is that you also capture more light pollution.
You want your images to be dominated essentially by photon noise, and there are two things you can do to achieve this. You can decrease the read noise of your camera, which is possible to some extent: typically, with most CMOS cameras (and also system cameras), the read noise goes down as you increase the gain. But two things —
one thing it's important to be careful about is your full-well capacity. All cameras have their largest full-well capacity at low gain, where the read noise is higher, and the full-well capacity decreases proportionally as you increase the gain.
The other thing you can do, although it seems counterintuitive, is to increase the photon noise. Now why on earth would we want to increase noise? Because you're actually not increasing the noise, you're increasing the signal — and the noise goes up as a consequence. Increasing the signal, in this case, means increasing the exposure time: we expose for a longer time.
What's interesting is the bottom chart. Like many modern cameras, there is one critical value of the gain parameter that triggers the high-gain mode, or something like that, and creates this sharp step where the read noise drops: it goes from 2.8 electrons per pixel at gain 99 down to 1.45 electrons per pixel at gain 100. That's a huge change.
Of course, you may want to say: okay, but why should I stop there? I'll crank the gain up to 450, and then I have one electron per pixel of read noise. Yes, true — but you're only going to have a few hundred electrons of full-well capacity, so you're going to hit saturation very quickly. Be careful.
The other aspect — we said the other parameter we can play with is exposure time. You might have heard that with the old CCD cameras, which had very high read noise, like 10 or 15 electrons, narrowband used to require very long sub-exposures, up to 60 minutes.
This reminds me of the days of film astro imaging, when you exposed for such a long time that if an airplane or a satellite crossed your field you started swearing, because you had just wasted an hour. But low-noise CMOS cameras have changed this.
Nowadays, with modern CMOS cameras, the readout noise is so low that sub-exposures of less than five minutes are sufficient to swamp the read noise — even from a not-so-dark site; you can do it from Bortle 5 or 6 and still swamp the noise. And this brings me to another point that is worth mentioning, with a few examples that I'm going to give you.
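(Aside: a sketch of why those sub-exposure lengths come out the way they do. The "sky electrons at least ten times the read noise squared" criterion and the 0.1 e-/s/pixel sky rate are common rules of thumb I'm assuming, not numbers from the talk.)

```python
# Sub-exposure length needed for sky photon noise to swamp read noise,
# using the assumed criterion: sky electrons >= 10 * read_noise**2.

def min_sub_exposure_s(read_noise_e: float, sky_rate_e_s: float,
                       ratio: float = 10.0) -> float:
    """Seconds until sky_rate * t >= ratio * read_noise**2."""
    return ratio * read_noise_e**2 / sky_rate_e_s

sky = 0.1  # e-/s/pixel, an illustrative narrowband sky rate
print(min_sub_exposure_s(1.45, sky) / 60)   # modern CMOS: ~3.5 minutes
print(min_sub_exposure_s(12.0, sky) / 60)   # a 12 e- CCD: hours by this
                                            # criterion -- hence the very
                                            # long subs of the CCD era
```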
The question that I want to answer is: is it worth imaging narrowband from a light-polluted site? And is it worth driving to Pinnacles to image narrowband? To answer these questions I'm going back one slide, because I want to show you two charts. This chart is actually an analog calculator — an analog exposure calculator.
This is an example for Bortle 7. You start plotting a line from the point that corresponds to the brightness — the radiance — of your sky, in magnitudes per square arcsecond. Now, I am in Bortle 7, and when I measure my sky with my SQM I typically read 18.5 magnitudes per square arcsecond, so my line starts from that point.
It goes through the point that corresponds to the f-ratio of my telescope, and then I keep drawing with pencil and straightedge until I meet this vertical line. From that point I draw another line to the point that corresponds to what's called the spectral efficiency of my filter, which is a fraction.
This line, again, will hit a point on this vertical line, and from there I draw another line through the point that corresponds to the pixel size of my camera. This lands somewhere on the scale called phi, in photons per second, and that is the estimate of how many photons per second per pixel I'm going to receive from the background sky.
That leads to the final answer: the minimum exposure time that I want is about 1.9 minutes. So, from my Bortle 7 backyard, two-minute subs are sufficient. But if I go to Pinnacles, Bortle 3, things change: Pinnacles has a 21.5-magnitudes-per-square-arcsecond sky, and if I redo all this geometric construction with pencil and straightedge, I get that I should expose for 30 minutes for each sub-exposure.
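(Aside: a rough numeric reconstruction of the analog calculator, with an approximate photometric zero point, an assumed ~80% QE, a 7 nm filter, and 3.76 µm pixels — all assumptions on my part. It lands close to the talk's ~2-minute and ~30-minute answers.)

```python
# A rough numeric stand-in for the analog exposure calculator. The ~1000
# photons/s/cm^2/Angstrom zero point, the 80% QE, the 7 nm bandwidth and
# the 3.76 um pixels are all assumptions; at a fixed f-ratio the aperture
# cancels out.

import math

PHOT0 = 1000.0  # approx. photons/s/cm^2/Angstrom from a magnitude-0 source

def sky_rate_e_s(sqm: float, f_ratio: float, band_nm: float,
                 pixel_um: float, qe: float = 0.8) -> float:
    """Sky electrons per second per pixel."""
    flux = PHOT0 * 10 ** (-0.4 * sqm) * band_nm * 10     # per cm^2 per arcsec^2
    geom = math.pi / 4 * 206.265**2 * pixel_um**2 / (100 * f_ratio**2)
    return flux * geom * qe

def min_sub_minutes(sqm, f_ratio, band_nm, pixel_um, read_noise_e):
    """Sub length so that sky electrons reach 10x read noise squared."""
    return 10 * read_noise_e**2 / sky_rate_e_s(sqm, f_ratio, band_nm, pixel_um) / 60

print(min_sub_minutes(18.5, 7, 7, 3.76, 1.45))  # Bortle 7 backyard: ~1.6 min
print(min_sub_minutes(21.5, 7, 7, 3.76, 1.45))  # Pinnacles, 21.5 SQM: ~26 min
```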
By this I mean that the ratio of the brightness of the Pinnacles sky to the Mountain View sky is roughly one to twenty.
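(Aside: the magnitude arithmetic behind that ratio.)

```python
# A difference of m magnitudes is a flux ratio of 10**(0.4 * m):
print(10 ** (0.4 * (21.5 - 18.5)))   # ~15.8, which the talk rounds to ~20
```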
It means that if I expose using this suggested 30 minutes, I'm going to get a much better signal-to-noise ratio — the background there is some twenty times fainter — than I can get from Mountain View for the same total integration time. So is it worth doing? Well, it depends on many things.
It depends on whether your mount is up to it — whether your mechanics are up to holding a 30-minute exposure without differential flexure (if you use an OAG you're probably in a better place), and whether your guiding is up to the task. But in general I would say: even if you don't do 30 minutes, if you just do 10, yes, you're going to give up something, but it's much better to have some data than to have no data because something went wrong along the way. So my final message — and this is my last slide — is: don't sweat too much over it, just get out and start imaging. You have two months to capture all the data that you want, and in two months we're going to talk about how to process the data.
B: It's not in the visible spectrum.
A: Is it brighter at its wavelength than the n = 3-to-2 line?
A: Great presentation, Francesco — thank you very much.
B: That's a great question: Lumicon used to do that. I remember Lumicon had H-alpha filters with the standard 49-millimeter thread for photographic lenses, and also 52 millimeters — I believe they went all the way up to 72 — but they were extremely expensive.
Nowadays I haven't seen any. However, both Chroma and Baader make filters in, I believe, the square 65-by-65-millimeter format, which you can put in a filter holder that you then screw on in front.
Back in the days you mentioned, years ago, I didn't have an H-alpha filter; I was using Technical Pan 2415 film, which was very sensitive to red, and I used a Wratten 29 filter as a makeshift H-alpha in front of the camera.
A: Yes — and I remember Astronomik used to make clip-in filters that fit right in front.
B: I'm actually using my telescope — that's my blurry screen.
A: I'd rather spend my money on camera equipment — it's either one or the other, right?
B: So, one lens that I personally don't have, but that has become really popular in recent years, is the Samyang / Rokinon 135.
A: I have one. It's a manual-focusing lens, and I have it in Nikon F mount.
B: Perfect. There's one thing you want to be careful about: that is a very fast lens, f/2. You can put the filter in two positions with that lens — maybe not with a DSLR, but with an astro camera you can have a filter drawer and put the filter between the lens and the camera.
If you do that, you need to buy special filters that have the central wavelength pre-shifted for very fast f-ratios, because the central wavelength is specified for the cone formed by the light rays when they hit the filter. If instead you put the filter in front of the front lens, the wavefront there is flat — it's a plane wave at that point.
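(Aside: a sketch of the blue shift that motivates pre-shifted filters, using the standard tilted-interference-filter approximation; the effective index of 2.0 is an assumed, coating-dependent value.)

```python
# Blue shift of an interference filter's central wavelength for light
# arriving at an angle -- the reason fast optics want pre-shifted filters.
# Standard approximation; n_eff ~ 2.0 is an assumed effective index.

import math

def shifted_cwl_nm(cwl_nm: float, theta_deg: float, n_eff: float = 2.0) -> float:
    """Central wavelength seen by a ray at incidence angle theta_deg."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return cwl_nm * math.sqrt(1 - s * s)

edge_of_cone = math.degrees(math.atan(1 / (2 * 2.0)))  # marginal ray at f/2
print(shifted_cwl_nm(656.3, edge_of_cone))
# ~651.5 nm: a ~5 nm blue shift, enough to push H-alpha right out of a
# 3 nm bandpass
```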
So, perfect: you don't have any problem and you don't need the pre-shift. The drawback is that any error on the wavefront introduced by the filter will be amplified at the focal plane — but if the magnification of the lens is not too high, like a 135-millimeter one, you should still be in good shape.
A: Sometimes, with these narrowband filters, I've had reflections inside the refractor. Do you have any comments about that, and what to do about it?
B: People who use expensive filters like Chroma's or Astrodon's in general don't have the problem. There seems to be a correlation between the type of multicoating they put on the filter and how strong the reflections you're going to get are. Apparently the Optolong filters were pretty bad, but they recently changed and improved a lot. Baader was also very bad, but I have the latest-generation Baader filters.
And really, I have no reflections, even on relatively bright stars — I did a 10-minute H-alpha exposure on Deneb, and there are no reflections, actually. But yes, it's a problem, and some people have noticed that the type of coating on the front side of the filter is not the same as on the back side. So if you have a strong reflection, it may be worth trying to flip the filter and see if it's any better the other way around.
C: Oh no, not a question, but a side note about what I think Alan was asking before, about which photographic lenses to use. I think the sweet spot is between 300 and 400. In my experience there are, let's say, beautiful targets for a 135 or 150, but if you want to do some nice nebulae or something like that, you have to have a longer focal length — anyway, 300 or 400.
B: An interesting project that I have on my hands is a tool repurposed for astronomy: this beautiful Pentax 135-millimeter lens from the early 70s used to belong to my father. Of course, this is not an apochromatic lens — it would show chromatic aberration in full color — but my idea is to use it in narrowband, and so I want to add some focusing mechanism that turns this helicoid to refocus when I switch filters, so I can have perfect focus at all three wavelengths.
A: So I take it that what you're doing is taking a series of exposures with one filter; then, while leaving the camera on the tracker, you replace the filter with the next filter and take exposures, and then you replace it with another filter.
B: And in a setup like the one I have, this is automatic, because the filters are mounted in a carousel in front of the camera, between the camera and the telescope, and there is a stepper motor that controls the carousel. The computer tells the camera: now go to H-alpha, and the wheel goes to H-alpha; and the computer knows — the program has stored in its memory — what the difference in focus point is when I switch from H-alpha to O III, and it tells the focus motor to move to compensate for that. With this level of automation you can, if you want, take 20 images with one filter, then 20 with another one, then 20 with the third one — or you can do one-one-one and change every time, because you don't need to refocus by hand: you can predictably and reliably automate that part.
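(Aside: a minimal sketch of the per-filter focus-offset logic Francesco describes. The offset values and the wheel/focuser objects with these method names are hypothetical stand-ins for whatever your capture software actually exposes.)

```python
# A minimal sketch of per-filter focus offsets. The offsets and the
# wheel/focuser method names are hypothetical stand-ins.

FOCUS_OFFSETS = {"Ha": 0, "OIII": -35, "SII": 12}   # focuser steps, measured once

def switch_filter(wheel, focuser, current: str, target: str) -> None:
    """Rotate the carousel, then move the focuser by the stored delta."""
    wheel.set_position(target)
    focuser.move_relative(FOCUS_OFFSETS[target] - FOCUS_OFFSETS[current])

# e.g. cycling Ha -> OIII -> SII all night without manual refocusing:
# switch_filter(wheel, focuser, "Ha", "OIII")
```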
A: And also, just for those of us with reflectors: you may not have a focus difference between your filters at all. Even better, yeah.
B: Good point. One question that you could have added to my presentation is: can you mix filters from different brands? Yes, of course — what counts is only the central wavelength and the bandwidth of the filters. However, if you do that, you may need to be prepared for some more refocusing: if the filters were cut from the same piece of glass substrate, with the same thickness, they will likely be parfocal, so they don't need it; mixed brands may not be.
C: In my experience, Astrodon and Chroma are parfocal.
B: Everybody says so — if you can spend the amount of money they ask, it's definitely worth it.
A: Right — no, no, I'm saying that for refractors; reflectors are another story! Now, Nick has a comment: if you want Astrodon, use iTelescope for free.
A: Alrighty then — well, thank you very much, Francesco. We look forward to hearing from you in a couple of months. Everybody, take all your narrowband images, and maybe you'll give us some practice data to play with before then, for the processing session. And then again next month we have a talk — you were talking about making a focuser for your lens, and Julian will cover that and other maker projects next month. Thank you very much, everybody — please unmute yourselves, and thank you, Francesco.