Description
The Dark Side of Luminance Subframes
Alex Woronow
Collecting luminance subs along with RGB subs has been a long-standing approach to teasing out the most possible detail in an image. But my recent analyses indicate that L subs do nothing to boost detail in the domain of modern image processing. I will demonstrate this with examples and explain why what may once have been a wise practice (collecting a load of L subs) has become archaic and now simply sucks down telescope time and processing effort.
A
Welcome to the March 2023 SJAA Imaging Special Interest Group meeting. Tonight we've got Alex Woronow to tell us all about LRGB imaging, or should I say RGB imaging. Take it away, Alex.
B
You should see the first slide now, and I think we should get started. What I'm going to talk about this evening is that luminance subframes have become, using the correct word, obsolete. There was a time when they weren't, but now they are, and I'm going to explain why. I'm also going to try to explain the relationship between L and RGB color spaces, and why, from that point of view, we can understand how L subs recently went obsolete, and then demonstrate that with a few case studies.
B
So, as I've kind of just said, I'm going to discuss how L is related to RGB color space, and why we probably started augmenting RGB images with L in the first place, which started a long time ago. And finally, the question: is the motivation we used to introduce L into RGB images still a motivation, given our current equipment and software?
B
So, a few words about color spaces, and there are a lot of them. Over here we see several different color spaces and how they're related to each other, and these are all mathematical transforms. We can go from RGB color space to an XYZ color space to an L*a*b* color space, all by mathematical manipulations. They're all expressing exactly the same thing, and I'll show that in just a little bit. So we have all these different color spaces.
B
Some of them, I think, are simply for research, whether research on human vision or research into display device design, and there are color spaces that artists prefer to use when mixing colors and things like that. We, of course, use RGB color space for our image processing and image capture, predominantly; we do also have narrowband imaging, for example. And we do capture L, which comes from a different color space, like Lab.
B
When we extract L, that same information was already embedded in the RGB image; all we did was go to a different coordinate system and look at that particular parameter. And if we now add L back to our image and then convert back to RGB, here's an example of what that might look like: we take RGB values of, say, 210 and 50, and when we add L they might become 240 and 160. It is still an RGB image. It still has only R, G, and B; those values have simply been changed, and there's nothing in the physics of it that would not have allowed us to go straight from the first values to the new values by manipulations such as PixelMath, or altering the curves or histogram, or some combination of them. So L doesn't do anything. Nothing is added to the RGB space; it just modifies the values of the pixels, their red, green, and blue values. I liken the RGB color space to two important coordinate systems.
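As a minimal sketch of that round trip (Python with scikit-image, which is an assumption; the talk names no library, and the pixel values are illustrative rather than the slide's exact numbers):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

rgb = np.array([[[210, 110, 50]]]) / 255.0   # one hypothetical RGB pixel, scaled to [0, 1]

lab = rgb2lab(rgb)        # RGB -> CIE L*a*b*: L is derived from R, G, B, not new data
lab[..., 0] *= 1.2        # "add L": brighten the luminance coordinate
rgb_new = lab2rgb(lab)    # back to RGB: still only three layers, now with new values

print(np.round(rgb * 255), np.round(rgb_new * 255))
```

Whatever is done to L, the result is still just three modified R, G, B values, which curves or PixelMath could have produced directly.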
B
We all learned them in high school somewhere, probably (and now, of course, they're probably teaching them in junior high or even earlier): the Cartesian coordinate system, x, y, and z, and the spherical coordinate system, r, theta, phi. Here are the two of them. Any point that can be identified in x, y, z space is the identical point when expressed in r, theta, phi, and the distance between two points can be expressed by calculating that distance in x, y, z or by calculating it in the spherical coordinate system.
B
Liken the XYZ, spherical, and cylindrical coordinate systems to the different color spaces that we work in: color spaces are similar to spatial coordinate systems. Any one system totally and completely describes an object's color, or its position in space, whatever it is. L is no more required in an RGB color space than theta is required in an XYZ coordinate space. If we have the x, y, z of a point, we don't also need to have the theta of that point.
B
We also don't need to have the r. But if we took that x, y, z space and likened it to RGB, and r to L, we could change r: we could add some r to a point and push it outward, and then convert it back into the XYZ color space, and we would simply have a different point in that space. That's what I showed in the previous slide, where adding L modifies the R, G, and B values.
B
Okay. There are only three layers in any image we process. Whether you use PixInsight or Photoshop, when you open an RGB image it has only three layers: a red, a green, and a blue layer. Yes, in Photoshop there's the distraction of adding other layers and blending other layers and all those things, but the RGB image that's displayed on the screen has only three layers, R, G, and B. All of those manipulations you do simply alter the values in one or more of those layers.
B
The well-known imager Robert Gendler, in a Sky & Telescope article in 2015 describing image processing, in this case using Photoshop, says L "will provide almost all the detail and contrast to the final LRGB composite." David Payne, who authored the Generalized Hyperbolic Stretch and gave a talk to a group I belonged to, said toward the end of that discussion, "we add L for detail," and he repeated that twice.
B
Okay. To get that detail, or whatever extra detail, into the image, we can of course adjust the histogram, for one thing, to improve contrast or to change the brightness without adding L. And of course (this is PixInsight) these options down here to manipulate the individual channels are available to us, as are other things like L, a (red versus green), and b (blue versus yellow). That is a different coordinate system; L, a, and b are different coordinates. Color, hue, and saturation are also from different coordinate systems.
B
R, G, B is one coordinate system, and these others are from other coordinate systems, and it's absolutely true we could do everything in just these three. But sometimes it's easier to adjust blue versus yellow, for instance, or to adjust the saturation, rather than having to adjust the values of R, G, and B separately.
B
If it really is true that all the detail is in the L channel, then the question is: why don't nature photographers, who are into detail (I don't know, the whiskers on a tiger or something like that), take L photographs? And how come their photographs have detail when they don't take L photographs?
B
How do these one-shot color cameras get detail anywhere near comparable with that of their grayscale colleagues' pictures? And they do. Narrowband images have excellent detail, and nobody puts L subs in narrowband SHO images, to my knowledge; I guess they could, but I don't think they generally do. So how come there's all this detail in these images, and how come we have been led to believe that we need L?
B
So it's my thought that we probably started collecting L as a cheap way to reduce noise in our images. It became a standard, and I think today people don't realize that that was its main purpose, at least as far as I can tell. Now, I have done research looking for someone who actually said this, perhaps 25 years ago or something like that, and I haven't found any reference that actually says this is why we do it.
B
I haven't found any reference that gives any reason for doing it other than that we do it because L contains all the detail, which it clearly doesn't, from these three examples, for instance, shown here in red. Okay. Another confounding variable in terms of detail is the rods in our eyes. We have a lot more rods, which receive grayscale information, in our eyes than we do color receptors.
B
So we can resolve more detail when it's in grayscale. But unless we leave the L as a grayscale image, which is quite permissible, when we convert it, introducing it into an RGB image, we lose that advantage of having the highly sensitive, small, and abundant rods dominate our vision.
B
You can test this if you want: if you have a well-processed RGB image, extract the L from it and see if that L image doesn't look more detailed than the RGB image that was its parent. You could also, for that matter, take an L image and just color it red, transforming it into the red part of the palette.
B
Don't do anything with blue and green, and compare the detail you see in the red image with the detail you see in the L image. Particularly if you make both images pretty bright, I think you'll see that the L appears to have more detail than does the red image, yet they're identical images in their detail content.
B
Here, for instance, is an RGB image. I threw out about 15 hours of L, never even processed it, and I think there's plenty of detail in this image. So to say that the RGB only provides color and all the detail comes from L is clearly not true. By the way, this was taken with the 24-inch telescope in Chile.
B
Okay. So what really happens when you take L? Well, if you take R, G, and B as well, the R obviously goes into the R layer of your RGB image, and green and blue go into their respective layers. But what about this L? This big luminance filter here doesn't discriminate about color; it simply counts photons. So once you decide you're going to assign those photons to your RGB image, they have to go into those layers in some proportion.
B
These are kind of the default proportions they go in: six percent of the photon signal is put into the blue, 22 into the red, and a whopping 72 into the green, which we then treat with something like SCNR to remove the green content in our images. So we're wasting a whole lot of data, and we're just dropping the photons arbitrarily into different color channels, because we don't know what the wavelengths of those photons actually were.
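As a sketch of that apportionment (Python; the array is a stand-in, and the 22/72/6 split simply mirrors the PixInsight defaults quoted above):

```python
import numpy as np

l_signal = np.full((4, 4), 100.0)   # hypothetical stacked luminance frame

# A luminance filter records no wavelength, so its signal must be split
# among the three RGB layers in some fixed, arbitrary ratio.
weights = {"R": 0.22, "G": 0.72, "B": 0.06}
layers = {ch: w * l_signal for ch, w in weights.items()}

for ch, layer in layers.items():
    print(ch, layer[0, 0])   # R 22.0, G 72.0, B 6.0 per 100 L counts
```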
B
So if noise is our actual foe, certainly more subframes is one solution. Whether those subframes are L or not is kind of up to you. You could take more RGB subframes; that's a little more expensive, to gather hours of each of those, but you may not need hours. You may have been collecting enough all along, but that's a topic for another day. In any case, taking more subframes isn't a very sophisticated solution, given our modern tools.
B
Now we can use modern noise-reduction tools to get rid of that noise, to improve the signal-to-noise and reveal the signal, or the detail, that lies below it. New noise-reduction techniques include new stacking technologies that throw out outliers or better estimate the true value of a pixel, usually something like the mean value of that pixel. PixInsight offers eight alternatives in stacking, which are all aimed at a better stack: a more noise-free stack, one truer to the object being imaged. And, of course, AI.
B
To a degree, there are other noise-reduction techniques that work too, such as adjusting wavelets, MureDenoise, and others. Plus, we are now getting technology advancements: CMOS sensors have very low read noise, and even CCDs, I believe, are improving their signal-to-noise ratios. Compared with collecting large numbers of luminance subs, or large numbers of R, G, and B for that matter, maybe collecting a few more RGB subs and using one of these modern noise-reduction techniques would be the optimal way to increase the signal-to-noise ratio and the detail in our images.
B
Okay, let's look at some of the analysis I've done. First, some definitions, just for nomenclature. L-captured is the luminance captured through luminance subs: these are subs taken with a luminance filter in the light path, as we normally collect luminance. L-extracted is simply the equation shown, applied before any stretching.
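The equation itself is on the slide rather than in the audio; a common form of the extraction, consistent with the 22/72/6 channel weights quoted earlier in the talk, is a weighted sum of the linear channels:

```latex
L_{\mathrm{extracted}} = w_R R + w_G G + w_B B,
\qquad (w_R,\ w_G,\ w_B) \approx (0.22,\ 0.72,\ 0.06)
```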
B
So one thing we know, of course: if we have one hour of R, one hour of G, and one hour of B, the three of those together are not three hours of extracted L. That's the equivalent of one hour of captured luminance, since it takes all three of those filters to cover the same wavelength range that is covered by the luminance filter.
B
The two L's are processed identically, at least up to the level at which I can do it identically. There's always a possibility that one of the programs, unseen to the user, does some jiggery-pokery behind the scenes, but if it does, I haven't seen it. So I'm going to assume that when I apply these various methods, they are all applied equally to the extracted and captured luminances.
B
So I go down through here to give some idea of what the processes are, and I won't dwell on any of these. I'll just look at the bottom one here: histogram matching is the only one I need to discuss, I think; the others you can study at your leisure. To compare the two images one to another, I used three different methods. Very simply, I blinked the two images, that would be the L-extracted and the L-captured; I took the difference between the two images, where the equation is L-extracted minus L-captured plus a half, so that if the two images are identical, the result should be a more or less uniform grayscale image; and I measured something called a multi-scale perceptual index, which is a development by Wang.
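A minimal sketch of that difference test (Python; the arrays are stand-ins for real, histogram-matched L frames):

```python
import numpy as np

rng = np.random.default_rng(0)
l_extracted = rng.random((256, 256))
l_captured = l_extracted.copy()      # pretend the two L's match perfectly

d = l_extracted - l_captured + 0.5   # the +0.5 centres the result on mid-grey
print(d.std())                       # ~0: a flat grey frame, no detail difference
```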
B
The word "perceptual" here is key: it's meant to say whether the detail in image A and the detail in image B would look the same to an observer with standard eyes, whatever those might be. It is a mathematical method that uses several statistical measures, combined in a certain way, to give this index.
B
So, obviously, if you blink the L-extracted and the L-captured and those two are exactly identical, then adding the L-captured to the RGB image can't be adding any detail to it. And if they pass either of the other tests as well, that is, if there's no difference in the difference-image equation, or Wang's method sees no difference in the images, then, once again, the captured L has no advantage, for adding detail to the RGB image, over what's already in the RGB image.
A
Okay, but I mean, you know, come on. That's sort of like saying if you take one hour of images of your favorite target, and then you take another hour of images of that same target, and they both look alike, the two different hours, then you may as well just use one hour and throw away the other hour.
A
I guess you'll have, you know... you'll have more detail and less noise, but...
B
If you blink the two images and you see no difference in the detail, as far as the L is employed... We handled the less noise just a little while ago and said we can eliminate most or all of the noise using AI. We don't really need that much; we need enough. We need enough exposure to reach the level at which we become asymptotic in adding more detail. One hour might be enough; two hours may not be enough; five hours may be enough.
D
Do you mind if I jump in? I think I see the disconnect between the two of you. I think you're saying that if you have two equivalent images and you're adding them together, one hour and another hour, you're going to get reduced noise. But you're not just adding them together, because in the case of this LRGB processing you're essentially taking the L data that would have been there via RGB, removing it, and adding in this other hour. So you're not actually averaging them, so to speak.
A
Muted, sorry, I was muted. Yeah, I mean... I'll let you finish your presentation. I agree with your point that if they were identical, I mean bit-for-bit identical, then it won't improve the images. But if they're very similar, and not bit-for-bit identical, I claim that you would benefit from, you know, exposing for two hours, combining the two hours. But anyway, I'll let you finish your...
B
The process... that may be an issue of what scale we look at it at. And I'll say one thing about this that I wasn't going to mention: there are a lot of limitations on how much detail we can get.
B
It's not only exposure time that goes asymptotically to some limit; it's also things like seeing conditions, sky conditions, telescope alignment, camera quality, and finally, and actually I think most importantly, the ability of the image processor to bring out detail. If you reach the limit at which you can process an image for detail, there is no use capturing more data. And also, I don't think it has to come down to bit-by-bit, because nobody looks at these images with a microscope and sees which pixel looks different in the two images.
B
By the time we publish them, we've changed them to things like 8-bit and we've dropped them into a JPEG or some other format, and a lot of the detail is not there anymore. In the case of JPEG, we've actually already added artifacts that look like detail but aren't really in the image. So it's a complex problem, I admit that.
B
Okay. The mechanics of these image comparisons rely on a step called histogram matching. You can't blink two images and expect them to look similar at all if, in fact, their histograms don't have a similar dynamic range and a similar overall content. So histogram matching maps one histogram into another, and it does this with monotonic adjustments. There's no taking a faint pixel and saying, "I need more pixels in the bright tail, so I'm going to move these faint ones over there." That's not what it does. It simply compresses and maintains the overall range; if you looked at a cumulative curve, it would maintain the monotonic character of the cumulative curve and wouldn't disrupt it. And here's an example of a program that does that. Say we had this image and we wanted to make it look like this image (I don't know why we would want to do that).
B
What we want is to make the histogram here look like the histogram here. We run it through the program, and this first histogram becomes this one over here. So now these two are a pretty good match, and when we blink them we'll see any differences between these images. Now, the binning is a problem: the bins of the original histograms may or may not be exactly the same, so you can't make an exact match.
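The speaker's histogram-matching tool is his own Python program (offered on request a little further on); as a stand-in, this sketch uses scikit-image's equivalent routine to show the idea of monotonically mapping one image's histogram onto another's before blinking or comparing:

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(1)
source = rng.normal(0.3, 0.05, (256, 256)).clip(0, 1)     # synthetic stand-in images
reference = rng.normal(0.5, 0.10, (256, 256)).clip(0, 1)

matched = match_histograms(source, reference)
# 'matched' now spans the reference's dynamic range with a similar overall
# histogram, while the monotone mapping preserves the ordering of pixel
# values, so histogram features keep their relative positions.
```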
B
But here, if you look at some of the features, like this big peak, B, its equivalent is over here; D is here, and E is here: B, D, E; B, D, E. So it's maintaining the structures within the histogram, and that allows us to measure or compare these two with something like a blink. Now, as I understand it, the program by Wang that compares detail doesn't need histogram matching first, but I used it since it was part of my workflow before applying this method.
B
By the way, I wrote that program; it's in Python, and if anyone wants a copy, I'd be happy to send it. Okay, this Wang's method is called MS-SSIM for short, and I modified it somewhat so that it looks at the image as a pyramid. It takes the greatest resolution, half that, a quarter, an eighth, and in fact a sixteenth resolution, and compares them quantitatively at all those different scales to see at what detail level they match and don't match.
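A sketch of that pyramid comparison (Python with scikit-image; this shows only the multi-scale idea, not the speaker's modified program):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.transform import rescale

def pyramid_ssim(img_a, img_b, levels=5):
    """Wang-style SSIM at full, 1/2, 1/4, 1/8 and 1/16 resolution."""
    scores = []
    a, b = img_a.astype(float), img_b.astype(float)
    for _ in range(levels):
        score, smap = ssim(a, b, data_range=1.0, full=True)
        scores.append(score)   # 1.0 = perfect match, 0.0 = none
        # 'smap' is the per-pixel similarity map: white = good, black = poor
        a = rescale(a, 0.5, anti_aliasing=True)   # halve the resolution
        b = rescale(b, 0.5, anti_aliasing=True)
    return scores
```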
B
It then converts the statistic it comes up with to lie between the values of zero and one. Zero means no match (and zero, when we look at an image, is black, of course), and one is a perfect match (when we look at an image, one is of course pure white). So a good match is something bright, near one; a poor match is something nearly black; and gray is more or less matched.
B
Okay, so I did three different images: a bright nebula, a galaxy, and a dark nebula. These were processed in PixInsight and Topaz Studio 2, to varying degrees of processing. This is the first one, and this is a blinked comparison. If you look down here, you have L-captured and L-extracted. This is three hours of L-extracted; that is, there's about 3.3 hours of R, the same for G, and the same for B, and about four hours of L-captured.
B
A
B
A
A
B
Thank
you,
yeah,
and
if
you
added
the
cell
captured
version
to
the
RGB,
since
these
two
images
that
are
blinking
here
have
identical
detail,
at
least
to
my
visual
I've,
seen
it
in
a
bigger
scale
than
is
shown
here
on
the
screen
and
on
a
better,
it's
better,
not
to
broadcast
it
over
the
internet
too.
So
I
looked
at
it
up
close
and
personal,
and
there
really
aren't
very
many
differences.
There
are
some
differences
in
the
grade
level
yeah
in
between
the
two
engines.
C
I think one thing that's missing from the discussion is light pollution, right? Unless you're taking luminance through a light-pollution filter, or you're at a super dark site, you're going to be dealing with a lot of light pollution that is hopefully filtered out by modern RGB filters, which are designed to miss a certain amount of the light pollution.
B
Well, yeah, there are issues about the filters you use, of course, but I would say that light pollution is one of the factors that ultimately limits the total detail you can get, whether it's RGB or L or a combination thereof. There's a limit, and light pollution is one of those limiting factors, and of course aperture, maybe, is another one, and various others. All of these images were taken with a CDK17 in Chile, so they're all comparable.
B
So this image I've processed a little harder, and again it's the same exposure. Now, the difference between the two images: quantitatively, they have very good matches up until the factor at full resolution, and that is still considered a good match in the business, but it's certainly not at the level of 0.96. So there's some apparent loss of shared detail between the two images; it doesn't say which one has less detail.
B
It just says there's a detail mismatch between the two images. So I go on. Here's an image, and I'm blinking again: I'm blinking L-extracted, that's this image, with a similarity map. Remember, in a similarity map, where it's white is a good match and where it's black is a poor match. Notice all the poor matches in this image; this is the full-resolution version. All the mismatches are in dark areas of the image, and this ends up being a flaw in the measuring technique.
B
The match is very good, almost all white on the similarity map, so even with this additional processing, in my opinion, there's no significant difference in the detail between those two images. Okay, here's a galaxy, and again I'm blinking the L-captured and the L-extracted. Here we have 1.4 hours, or 4.6 hours of total telescope time, of RGB, and five hours of L-captured. So, in fact, again, this is almost four times as much L-captured as the equivalent L in the RGB. Okay.
B
So here, I'm sorry, this is the difference: one of the images is the difference between the L-captured and L-extracted, this image here, and seeing that it's a fairly flat image means the two images have pretty much identical structure across the image. Let's look at it a little different way. Here's the similarity map, and again a value near one, white, means almost identical detail, and a value near black means very little detail match.
B
You see the match doesn't look very good, 0.3, 0.4, but within the galaxy the detail map is very high. It's the black sky around it where the detail-similarity map says there's a detail mismatch, and again that's, of course, in the black areas, where the method is overly sensitive to small differences. Even the faint stars in this area have very good matches. So, taken all together...
B
It appears that if you're interested in the detail in the galaxy, the L is not going to add anything over what the 1.4 hours each of RGB got you. The Horsehead Nebula, a dark nebula: the similarity scores pretty much say it all, 0.99 for all resolutions. And here we had four hours each, that is, 16 hours of telescope time: four hours each of R, G, and B, and 14 hours of L-captured, and there's basically no difference between the two. So this was 14 hours of wasted telescope time.
B
So my conclusion at this point is: augmenting an RGB image with separately captured luminance subs does not enhance the detail within the images, at least in those that I analyzed. If you can come up with a case where it does, that's so much more information we have about the process. Modern noise-suppression technologies, I think, are what give us this new advantage of not needing to capture the L.
B
You should analyze this for your own setup, your own skies, your own cameras, etc. And again, that comparison program, which I'm busy rewriting to do a little more than it currently does, is a program written in Python, and I'd be happy to share it with anyone who wants it as well. Of course, if you use Python, you need to have all the supporting rigmarole installed on your computer.
A
All right. Please, anybody out there have questions for Alex?
C
Well, I don't know if it's a question, but, you know, my understanding of the thinking of why this came about had to do with, you know, CCD cameras having a noise advantage at higher bin ratios, or lower, however you want to say it, and that they were actually processing...
C
...you know, only taking the color image from the color data, from the color stacks, and not trying to take any detail from that at all, right? So it's a technique in Photoshop, right, where you can take only the color detail from certain layers; it's a layer mode, I guess. So a combination of those two approaches, I think, is why we ended up where we ended up. But yeah...
C
I haven't taken luminance in forever, basically, but...
B
Yeah. One of the other reasons I suspect we might have done this is, again, as a time-saving noise-reduction technique: binning the RGB two-by-two, right, while not binning the luminance. And that may have been the motivation too. But once we have these noise-reduction technologies, everything from stacking to AI, reducing noise that way is no longer a plus.
A
I'll give you the motivation. You know, I mean, there's many ways to skin a cat, as they say. So, first of all, noise and detail are two aspects of the same thing: if you have detail out there and then there's noise on top of it, you can't see the detail, so it's sort of the same animal. And, you know, the reason folks collect more images, integrate more and more exposure, whether it's LRGB or RGB... I mean, you know, why not just take an image...
A
...you know, which is 20 minutes of RGB data, from Glenn's house, right? Well, it just won't work. You get... well, anyway, it just won't work. There's too much light pollution, too much noise. Maybe it would work from Hubble, but it doesn't work from Glenn's house. So you need to integrate more data to reduce the noise...
A
...so you can see the detail. So then, why LRGB versus RGB? I'm not saying this is the only way to do it, but I think the motivation for doing it is that, with L, for a given hour on your telescope, you collect more photons with that L filter than you do with an R filter, let's say. And so more electrons get put on that CCD, or of course CMOS, imager, and so you have, you know, the square-root formula to reduce the noise by that factor.
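A hedged aside on that square-root formula: in the shot-noise-limited case, N collected photons carry Poisson noise of sqrt(N), so

```latex
\mathrm{SNR} = \frac{N}{\sqrt{N}} = \sqrt{N}
```

and a filter that passes, say, three times the photon flux improves the SNR by only about a factor of 1.7 for the same exposure time.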
B
Noise, in this technique... the noise is point noise, or nearly point noise. In the cameras it's pixel noise: things like shot noise and read noise and things like that. When you read out a pixel, you get a value that's not quite correct; that's a one-pixel noise value. But he's dealing with a sky that's bright. If I shine a light in your eyes, no matter how many times you blink, you're still not going to be able to see past that light.
C
Muted... yeah. I mean, the real trade-off is: why doesn't Glenn go to a dark site, right? So, you know, it's 10 hours per filter, and it doesn't matter if it's narrowband or RGB; basically, I'm not happy with the results unless I have about 10 hours per filter from Union City, and I have to make a judgment about driving to Pinnacles to do that same work in like 45 minutes or an hour, yeah.
A
So, but Glenn, when you go to Pinnacles... let's say you and I make a trip next weekend, we go to Pinnacles, yeah, with your target. Why is it that you don't just shoot, you know, 50 targets a night at two minutes each? Why is it that you're going to shoot an hour of your target? I mean, it's the same argument.
B
You still need to build up signal regardless. Even if you can improve the signal-to-noise, you still need signal. So why don't you take zero exposure and then just process it? Because you need to see... you need to build up the signal.
B
I totally agree, and I said this is an asymptotic process. You build up images to some level, and then, after that many captures of the image, additional images don't add very much. But you still want to be far enough up that asymptotic curve, and I'm not arguing that you can take images that are five seconds long and you're done with the L. I never...
A
Right, so I agree with what you're saying. You know, if you have the skies and the telescope time to image in RGB, enough hours where you get a sufficient signal-to-noise, you know, to create the image that you're going to be happy with, then you don't need the L. And I think the only reason some folks use the L is because they can't, you know, given their skies and their...
A
So it may be that the L filter is a little more efficient, because you can collect more photons per hour, and when you mix that in with RGB... anyway, that's the motivation. It adds complexity to the workflow, and some folks can choose to do it and some might not. But sure, if you can collect all that data in RGB, then you're in business.
C
You know, another thing to think about, too, is that having an L filter uses up a position in your filter wheel, and for my money, I'd rather have a dark filter so I can take darks, you know, when it's not necessarily totally black out, right? And...
C
...I'd rather use that position for experiments. I think, you know, over the years in these SIG talks we've had multiple people, including myself, try to come up with, you know, how to do RGB with narrowband filters, essentially, and so you're looking for that orangish-yellowish, you know, filter, and it never really works out, for various reasons.
B
Right, yeah. I gave this talk to the Tucson group, and one of the people pointed out... he says, this is great, because I have a five-position filter wheel and I'm using up one of them for luminance, and so I need RGB, which means I'm not going to have SHO.
D
So, I'd love to jump in. I have to be careful what I say, because I work for a large company and I have signed NDAs, but my background is in image processing.
D
So if you want to make the comparison of a grayscale image to an RGB, what you actually want to be doing is looking at an RGB image where, in Photoshop, you convert it to, say, YUV or CIELAB, and then you set the luminance channel to gray, to a single value. What you will then find, in the remaining residual color when you remove the luminance signal, is that you will not be able to see as much detail as when looking at the grayscale, which is the pure luminance.
D
As for the statements about wildlife photographers: some manufacturers, actually a long time ago, did have exotic Bayer patterns, or CFA layouts, where they had red, green, blue, and luminance pixels, but it was an absolute nightmare to process, which is why most people gave up. So, in terms of when you're trying to image: if you have a certain time budget, you could argue that if you image lots of luminance frames and then do less color, and then you do the right processing, then, because the human visual system is not good at seeing purely the color component, without the luma, you could actually process the chroma with fewer frames contributing toward the color, and you could noise-reduce them more heavily, and it wouldn't be apparent to the user, as long as the detail is there, which is in the luma, or luminance, channel.
B
That's an interesting thing. I'm going to study that in the recording and give some of the things you said a try. I would also point out that we do... our eyes do detect differences in color, and...
B
Undersampling the color, by not taking enough images, is also a fault, because there's information in subtle color changes within nebulae as well, and we perceive that. Large flat areas of a single color really look abnormal in a nebula. I'm just doing an image right now where there are such things, and it is quite annoying; I've done everything I can to try to break up those monochromatic patches.
B
And, by the way, I am not an expert in image-processing technology such as you, Richard. When I first started noticing this problem with the L and never using it, I went out and bought about five books and spent hours and hours on the internet, and that was my introduction to it. So I have maybe the equivalent of a spotty freshman course in human vision and color perception.
C
Well, maybe you can serve the visual community by arguing what color light they should use to stay dark-adapted, because red is not the answer, right?
A
Why is it that people use red, do you know? Is it historical?
B
It's that, with red... well, our eyes are sensitive to blue and red, and among the colors, the cones in our eyes are the least sensitive to red. Or are they? I don't know; maybe they're just not as strong a receiver. They're the largest ones. So... but they're not...
B
We obviously don't use blue, so perhaps the red cones just don't transmit the signal as well. But the other thing is, the problem is that the brain convolves all these things for us. Our eyes are RGB and gray, but our brain works in what are called opponent colors, so we see red versus green and blue versus yellow, and that's how we perceive things.
B
If you stare at a red piece of paper and close your eyes, you see green; that's the opponent color kicking in, apparently. So our eyes interpret these three things into all kinds of colors. For instance, there is no purple wavelength in a spectrum, yet our eyes mix blue and red together and tell us it's purple, and yet there is no wavelength for purple.
A
Explain that chart you had where you said... I don't know... you were breaking up the L channel into, like, eight percent red or green.
B
Okay, so what you're referring to is a little different thing. What I'm saying is that in PixInsight, and, I suspect, all programs, if you have luminance data and RGB data and you tell it to combine those, it has to go through some sort of mathematical manipulation to get R, G, B and L into a similar coordinate system.
B
So it translates the R, G, and B into something like... I forget which of the color systems... Lab: it converts the RGB into a, b, and L, and then you replace the L in that image with the L you captured. And that L is apportioned into the R, G, and B layers in the image according to these percentages, which are the default percentages in PixInsight: six percent of all the L signal goes into blue, 22 into red, and the majority, 72, into green.
B
So we throw out that green eventually in our image processing. Our eyes are 22 percent of the total sensitivity, I guess, to red, and six percent sensitive to blue, on a relative scale, and so that's how these photons that were captured in the L are converted to electrons and attributed to the different layers. And this is entirely arbitrary: it doesn't matter what the wavelength was.
A
The sun is, you know, shining on the earth, and, you know, to avoid that saber-toothed tiger, which was illuminated by a yellow sun, our eyes adapted to be sensitive to that color so that we survived. In other words, we're sensitive to that color because that's the color of the sun. So to say that there's no astronomical... I mean, that's the color of the sun. You know, there are a lot of stars that put out a lot of that.
B
Right, and there is green out there. If you look at the black-body radiation of a star, there are stars that radiate in the green, whose peak radiation is in the green. The problem is, their peak radiation is in the green, but the red and the blue radiation are also so intense that we see them as a mixture of those three, and we see them as white, right? And I was really talking about DSOs; I'm not really that interested in detailing stars or anything like that.
B
But yes, there are red stars, which are only receiving a 22 percent boost from the L filter. And certainly green is important to us because of the color of our star, of our sun, and also because it's the center of the bandwidth where the sun's light can penetrate: it's not in the IR, where the atmosphere interferes, or the UV, where, I guess, absorption bands or something cut it off. That's the part of the spectrum that's in our vision.
A
Are there more questions for Alex out there?
B
I want to thank everybody for listening. It's been an exciting and stimulating discussion, and I appreciate that.
A
I really appreciate you coming out and doing all this work and presenting this material. That's great. Okay, great. Thanks, everybody! Please give Alex a hand, unmute yourselves, and thank you so much for the presentation, Alex. Thank you.