Description
Processing Narrowband Images in PixInsight
Francesco Meschia
With the summer behind us, hopefully many of us are sitting on a treasure chest of narrowband data from the great showpieces of the Milky Way. In Francesco’s 2nd talk on narrowband, we’ll see a few examples of the techniques that can be used to process them with PixInsight.
A: SJAA Imaging SIG, and today we have Francesco Meschia again — the fourth time this year — and we're looking forward to this talk on processing narrowband images. So Francesco, please take it away.
B: All right, so this is the second episode on narrowband imaging for the summer. This time we're going to talk about image processing, ideally starting where we left off. In the previous talk we said: go out and collect data. Now we're going to process that data. These are the topics that we're going to attack tonight.
B: We're going to see a few palette combinations that can be used to render narrowband images, and we're going to close with how to process stars in narrowband imaging. My plan is to go through this presentation relatively quickly, but feel free to stop me anytime that's needed. Then we're going to do a step-by-step processing of one of my images in PixInsight.
B: We'll do it with the Tulip Nebula data that I collected in June, I believe. So let's get onto it. Processing philosophy: in my opinion, the first question one has to ask is, how do I want to treat color? Is there such a thing as natural color when we're talking about narrowband?
B: I think that when you do SHO, you are not even trying to approximately recreate the visual stimuli. Even if our eyes were sensitive enough, you wouldn't see the Eagle Nebula the way NASA rendered it in the Pillars of Creation, although there are some approaches that try to achieve that. My view is that when you do it with this set of filters — maybe with other combinations it would be possible — SHO filters are doomed from the start, if we're talking about natural colors. And why is that?
B: To answer that, I need to talk a little bit about color science and color perception. This chart that I have on the screen is the CIE 1931 chromaticity diagram. It tries to reproduce the perception of color that our visual system gives us — not just the electrical signal captured by the eye, but everything that's processed by our brain — as a function of two parameters, x and y, which are essentially two chromaticity coordinates.
B: These coordinates are essentially positions against abstract basis vectors in a color space. What's interesting in this curve is that it has this strange shape, a little bit like a horseshoe, and on the periphery, where you see those numbers ranging from 460 to 620, those numbers are actually the wavelengths of the monochromatic radiation that would yield that particular color. So monochromatic radiation marks the perimeter of the possible colors that we can see, but it doesn't include all of the colors that we can see.
B: So the rainbow does not include all the colors we can see. You can mix pure monochromatic radiation and produce all the colors inside that curve, which is a great way to represent what's possible in our visual system. But how does this come into play when we're talking about SHO?
B: Well, if you combine three or more pure monochromatic beams in an additive synthesis, you're going to be able to represent not the entire curve, but essentially the polygon described by all of the possible lines that go from one wavelength to another. If we do that for the three radiations of the SHO filters — 672 nanometers, 656 nanometers, and 501 nanometers — the color space that those three radiations can represent is this. It looks like a line, but it's actually a triangle.
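In chromaticity coordinates this is just convexity: an additive mixture lands on a weighted average of its components, so three fixed wavelengths can never reach outside their triangle. Stated as the standard colorimetry identity, for reference:

$$
(x,y)_{\mathrm{mix}} \;=\; \sum_i w_i\,(x_i,y_i),
\qquad
w_i \;=\; \frac{X_i + Y_i + Z_i}{\sum_j (X_j + Y_j + Z_j)} \;\ge\; 0,
\qquad
\sum_i w_i \;=\; 1 .
$$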
B: So this means that the only portion of this color space that can be represented in natural colors by SHO is that very skinny triangle. All of the other colors that you see in this image are not possible to produce with just those three radiations. In my opinion, this is precisely why nobody — or very few people — tries to do natural-color imaging in narrowband: the color space would be so tiny that it would not be interesting.
B: Different monitors may place these three dots slightly differently, but these are the three primaries, and the space of colors that they span is the shaded triangle I have here. So this is going to be a much more interesting, much more colorful image than the skinny triangle I was showing earlier. But it means that, essentially, these colors are false. Instead of deciding that the H-alpha red is almost as red as the sulfur-2 red, we can decide something else entirely.
B: We can decide that H-alpha is green, for instance, or that O3, instead of being a teal color, is going to be a nice indigo blue. This is what we're trying to achieve. To do that, most imagers assign each of S, H, and O to one of the three R, G, and B primaries, but we are free to decide how — this is the first degree of freedom that we have as narrowband imagers. We can decide to do what NASA has done with the so-called Hubble palette.
B: The rationale is: the longest wavelength, which is sulfur-2, maps to the primary with the longest dominant wavelength, which is red; the shortest wavelength, corresponding to oxygen-3, maps to the primary with the shortest dominant wavelength, blue; and of course the remaining wavelength, H-alpha, maps to the remaining primary, green.
B: But all of the other combinations are possible: instead of SHO you can do HSO, or OHS — you can decide. And there's no need to assign each wavelength to a pure primary — or rather, to assign each primary to a pure wavelength: you can mix them with a mathematical expression. You can do pretty much anything you want, so there are infinite possible mixings.
B: When I started doing narrowband, I felt like I lacked the solid footing for my experimentation that I would have had with RGB, because I could decide absolutely anything. It could be that you like an image in which the colors are completely flipped compared to what NASA does, and nobody can tell you that it's any more or less wrong than what NASA decided for the Hubble palette.
B: But how do we get there? How do we get to data that we can combine? As we all know, after you acquire the data in the field, you need to calibrate it. In the first talk, we did our best with very narrow filters to cut off all of the unwanted signal — the moon, the light pollution. So, ideally, at this point we have a nice set of raw subs in which only good signal is left.
B: The problem is that the signal is not magically amplified by this process; it's just as weak as it was to begin with. Remember, what we did was not to amplify the signal but to cut down the unwanted signal, and with it the noise associated with the unwanted signal. That's like saying that the distribution of the background signal, instead of being a wide bell curve, has become very narrow.
B: It's become so narrow that it's comparable to the width of the distribution of the dark signal that you get from the master dark you use for calibration — to the point that the tails of the two bell curves have some overlap. This can be a problem, because an overlap means that some pixel values could be calibrated with a dark value that is greater than the pixel value itself.
B: The remedy is a pedestal that's added to the pixels before the calibrated image is produced and clamped to the zero-to-one range. I put here, on the right part of the slide, two examples of where you find the output pedestal setting: it's here, in the output section of the ImageCalibration process, and this is where you find it in the Weighted Batch Preprocessing script. In both you can enter a value expressed in ADU.
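In simplified form — a sketch that ignores the flat-field division — the problem and the fix look like this:

$$
c \;=\; \mathrm{clip}_{[0,1]}\bigl(r - d + p\bigr)
$$

where $r$ is the raw pixel, $d$ the master-dark value, and $p$ the pedestal. With $p = 0$, every pixel where $d > r$ clips to zero and the faint tail of the background distribution is lost; a small positive $p$ keeps the difference above zero.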
B: Or, in the case of WBPP, you can let it choose a value that is just enough so that only a few pixels — or maybe even none — are clipped to zero. And don't take my word for it: there's a great video by Adam Block with lots of demonstrations of this problem and how to solve it, and this is the link. Adam is a great teacher, so you'll find it very useful, I think.
A: Hey Francesco, can I ask — if you put a little bias on your camera, is this a problem?
B: You mean offset?
A: Yeah, or whatever you want to call it. You know, there's offset and gain, and if you had a small offset you'd think you could overcome this. This isn't a big absolute-value kind of number, I'm assuming.
B: Right. If you're talking about the offset value that you give to the camera while taking the data, yeah, it could help a little bit. But it's not the solution, because in order to properly calibrate your data, your darks also need to be taken with the same offset value. So you are just going to shift both distributions to the right; you're not going to alter the fact that the tails overlap.
B: So, assume that calibration went well and we assigned an appropriate pedestal value. I tried this, for instance, the one time I went imaging narrowband at Pinnacles, where the sky is very dark: my sky background distribution was almost a line.
B: Once we have all the data, we need to start mapping it — mapping SHO, as we said, to the three primaries that are going to produce our color image. We might have read online that, okay, you assign sulfur to the red channel, hydrogen goes to the green channel, and oxygen goes to the blue channel. What's difficult about that?
B: Well, the problem is the relative abundance of the three elements in the universe — we talked about this in the previous talk. Hydrogen is by far the most abundant element in the universe, and it ionizes at pretty low energy. So there's a lot of H-alpha coming from the universe: much more H-alpha signal than O3 signal, and certainly way more than sulfur-2.
B: So whatever primary we assign H-alpha to becomes the dominant color of the image. An uncorrected Hubble-palette image — take the three linear datasets and assign them to R, G, and B — becomes a beautiful green image, and this is usually not what we want, because we would like a wide range of color in our images. Otherwise we're back to the same problem of having a very narrow color space. So how do we fix that?
B: The key is to rebalance the relative strengths of the three channels. This technique is called tone mapping, and we owe it in particular to a Finnish astrophotographer called J-P — I'm going to butcher this name — Metsavainio. He has a website, Astro Anarchy, and he has produced some incredible, really beautiful narrowband images. He's very much into mosaics, so you can see some incredible, super-wide-field views of the Milky Way on his website, and he teaches this technique.
B: I think we hosted — no, it was not SJAA, it was the Santa Cruz Astronomical Association — a seminar that he gave a couple of years ago. The basic trick to tone mapping is to rebalance each of the channels so that the histograms of the three channels, when you look at them in PixInsight or in Photoshop (which is the tool Metsavainio uses), are roughly comparable in their black points, their white points, and, most importantly, their midtones — so that they overlap with similar shapes.
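One concrete way to line up the midtones — a sketch, not necessarily how Metsavainio works — uses PixInsight's midtones transfer function: for each channel $c$, pick the midtones balance $m_c$ that puts the channel's median on a common target (0.25 here is only an illustrative choice):

$$
\operatorname{MTF}(m,x) \;=\; \frac{(m-1)\,x}{(2m-1)\,x \;-\; m},
\qquad
\operatorname{MTF}\bigl(m_c,\ \operatorname{med}_c\bigr) \;=\; 0.25 .
$$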
B: The problem is that if we do that directly on the initial data — given that H-alpha overpowers the other two — you end up saturating the data very strongly, the stars of the weaker channels in particular. This is the reason why, maybe less so today but certainly in the past, narrowband images were affected by the magenta star syndrome, in which the stars acquire a magenta hue or magenta halos.
B: Nowadays we have ways to avoid it, but that was the typical case until a few years ago.
B: The first step to avoid that is to remove the stars. J-P's original technique did that manually in Photoshop, using the dust and scratches tool, I believe. But today, as we all know, we have very efficient machine-learning-based tools like StarNet++ or, my tool of choice, StarXTerminator.
B: Once the stars are gone, we can stretch the channels — even very aggressively, if you want — and not worry about over-stretching the stars, clipping them to white, or making them turn magenta.
B: Fortunately, some recent tools — StarXTerminator for sure; I'm not sure about StarNet++ — have a way to do that even while the images are still linear. Then I do an initial stretch with GHS, the Generalized Hyperbolic Stretch, using the same type of stretch and the same strength in the three channels, changing only the black point setting, because the three channels may differ in their black points.
B: In the end, you get three images with similar histograms, but of course the weaker channels are stretched way more than the stronger channel. This makes the noise in the weaker channels very visible. The interesting part about this technique, though, is that you don't need to worry about that, because these are not the data that will produce the detail in your image. These are just the tone maps.
B: These are maps of the presence of the elements that will just provide the color information to the final image, not the detail, so you can be very forceful in the noise reduction. You can use machine-learning-based tools again, or you can aggressively remove the smaller wavelet layers, or even use Convolution, or MLT — sorry, MMT or MLT — without worrying about the detail. The detail will come later, in the next step.
B: So this is another one of the original contributions, like star removal, that J-P gave to the community: separate the stars from the non-stellar objects, and separate the color contribution from the detail contribution — he talks about layers because he's a Photoshop guy. Okay, but where do we recover the detail?
B: Well, in many cases H-alpha, the strongest channel, has the most detail. It's not necessarily the rule, though: some targets have a very strong spatial separation between the three elements — oxygen, hydrogen, and sulfur — so if you just take H-alpha you're going to throw away a lot of the spatial information, the detail information. A perfect example is an image that I processed a couple of months ago, the image that I put in the teaser for this talk.
B: The Wolf-Rayet star WR 134 is surrounded by a beautiful spherical shell of O3. That shell has only O3 radiation; there's no H-alpha at all. Another typical example would be the Squid Nebula, in which the squid is exclusively — sorry — O3, and the H-alpha is only in the shell that surrounds the squid itself, which is the planetary nebula.
B: So you need to make a judgment call. If an image is mostly H-alpha, like the Pelican Nebula or the Wizard Nebula, I may use H-alpha as the luminance data — as the detail layer. But in cases where the channels are more balanced, I like to build a synthetic luminance, or synthetic detail channel, by overlaying the three signals — sulfur, hydrogen, and oxygen — using PixelMath, and this is the expression that I use.
B: Basically, these expressions with the tilde are essentially the PixInsight equivalent of the overlay operator between layers in Photoshop. What this expression says is: take the difference between the sulfur signal and its median, overlaid with the difference between the H-alpha signal and its median, overlaid
B: again with the difference between O3 and its median; and then, to avoid having a median value of zero, add back the median value of the H-alpha signal — though you could have done it with O3 or S2, or even just a constant. In the end you produce an image that has information from all three channels. I like to do this while they are still linear — actually, you have to do it while they are still linear, before stretching — so that the noise contribution is the one in the original files.
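As a concrete sketch — assuming the three linear views are named S2, Ha, and O3, and recalling that in PixelMath ~x means 1−x — the expression described above is:

```
~( ~(S2 - med(S2)) * ~(Ha - med(Ha)) * ~(O3 - med(O3)) ) + med(Ha)
```

Inverting, multiplying, and inverting back is the blend referred to here; and, as noted next, a noisy channel can be scaled down by replacing its term with, say, 0.2*(S2 - med(S2)).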
B: But if you find that there's too much noise coming from — typically — S2, which is very noisy, you can put a multiplier in front of it and say: I want to take only 20% of sulfur-2; and maybe, instead of taking all of the signal from oxygen-3, I'll take 60 or 70 percent. Trial and error — yet another one of the many degrees of freedom that narrowband gives you.
A: There's a question. Jeffrey asks: do you also use the image layers taken with the LRGB filters in your wheel?
B: I think I understand. Normally I don't do it for the non-stellar side of things, because the signal-to-noise ratio in the narrowband data is so much higher than the signal-to-noise ratio in the RGB channels that it's not worth it. But the exception is the stars. In the last part of this presentation we're going to talk about how to recombine the stars into the final image, because I don't want to end up with just the starless image.
B: One of the basic techniques that has been used for a while is to replace the narrowband stars with RGB stars. The stars are essentially points of light, so their signal-to-noise ratio is high enough that we can use them, throwing away the non-stellar part of the RGB data, and get more pleasant, more true colors.
B: Good question. Now we have the three SHO datasets, we have processed them, we have stretched them. Now it's time to produce the color information, and to do so we create a channel combination in which, as we said, we assign the stretched versions of the narrowband channels to the three RGB color channels. We can do it with the basic ones: SHO, HSO. HOO is also popular — it used to be called the "natural color" palette.
B: Using only H-alpha (hydrogen) and O3 (oxygen), you are somewhat closer to the actual color that your visual system would perceive, at the cost of reducing your color space to a line — that's the problem. But in general you can use any combination. For instance, you can assign to an arbitrary color channel a linear combination of the three wavelengths: a constant times sulfur, another constant times hydrogen, and a third constant times O3.
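A hypothetical PixelMath example — the coefficients are illustrative only, not a recommended palette:

```
R/K:  0.7*S2 + 0.3*Ha
G:    0.8*Ha + 0.2*O3
B:    O3
```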
B: Dynamic palettes are still combinations, but instead of constant multipliers like in the expression I posted here, the multipliers are derived in a non-linear way from the channels themselves, so they reinforce or depress your data depending on what the data looks like, pixel by pixel. What does this mean? This is an example of one of the dynamic palettes you can use for SHO, one that I use frequently.
B: You assign to R, G, and B three rather arcane expressions, but if we try to interpret them they become a little more understandable. The key to understanding this palette is this factor: when you take an image and raise it to the power of the inverse of itself, that is actually an exponential transformation known on the forums, and even in PixInsight, as PIP — power of inverted pixels.
B: The effect of this transformation is essentially to boost your image: where there is good signal, the output of the transformation is an even higher value. Here you can see the graph of its transfer function.
B: Green will have H-alpha where both H-alpha and O3 are strong — the product of the two, raised to the power of the inverse of that product — and where that is not true, in the inverse of that, it will have O3. So: H-alpha where both H-alpha and O3 are strong, and O3 elsewhere. And blue will have straight O3; there will be no other wavelength to pollute it. In my opinion, this gives images that are very pleasant to my eyes.
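A sketch of expressions matching that description — my reconstruction, close to the popular Foraxx-style dynamic palette; the exact expressions on the slide may differ:

```
R/K:  S2
G:    ((Ha*O3)^~(Ha*O3))*Ha + ~((Ha*O3)^~(Ha*O3))*O3
B:    O3
```

The (Ha*O3)^~(Ha*O3) factor is the PIP boost: near 1 where both channels are strong, so green takes H-alpha there, and small elsewhere, so green falls back to O3, pixel by pixel.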
B: It's completely a matter of personal taste — that's all there is to narrowband imaging. It meets my taste; it may not meet yours. You may prefer the classic Hubble palette, but give it a try. At this point you have your RGB image, because you have combined the three tone maps somehow, and on another image you still have the detail layer as a grayscale image — your synthetic luminance.
B: Finally, you can combine the color and the detail — recovering the detail that you may have thrown away by aggressively reducing the noise in those images — using LRGBCombination. You need to start with stretched images: LRGB combination is meaningless on non-stretched data, so use it after stretching. You can just uncheck R, G, and B, check only the L channel, enter the name of your synthetic luminance there, and drag and drop the process onto your RGB image.
B: A little caveat: LRGBCombination tends to desaturate colors, so you may want to counter this by setting the saturation to 0.4 from the default 0.5. This is kind of counterintuitive — why should I reduce it if I want to increase it? — but it's the way this tool works, so I just take it as it is. If you find that your colors are still a little bit noisy, you may want to check the chrominance noise reduction box.
B: You can also do that if you work in the Lab color space.
A: [inaudible question]
B: Yes, certainly — so you're saying you'd throw away the L signal and replace it with the synthetic L, is that what you're saying?
B: I think LRGBCombination does not entirely throw away the luminance information derived from the three channels, but you can definitely do that too: you tell the ChannelCombination tool to work in the Lab space, and you create another image using a and b from your RGB image and L from the synthetic luminance. You can — thanks.
B: LRGBCombination is convenient because it's one tool that also gives you the ability to correct the saturation and to smooth the color layers if needed, but it's the same thing. Once you have that, the following steps are essentially the same steps I would take for an LRGB image taken with broadband filters. These days, what I mostly do is an exponential transformation — again, power of inverted pixels — to try to recover some of the low-signal areas, and I typically saturate the colors, because I like saturated color.
B: If there are areas of very high signal, I may do an HDRMultiscaleTransform. Note that there is a script called Color-Corrected HDR Multiscale Transform that actually does a slightly better job than the HDRMultiscaleTransform process, which changes the colors a little bit when you operate on RGB images. And I do a lot of LocalHistogramEqualization.
B: It's another way to tame a very strong contrast. And then I do a HistogramTransformation to correct color casts, maybe, by processing the three channels slightly differently, until I'm happy. At that point I have a starless processed image that is ready for the final step: the addition of stars.
B: So we have a starless image, and if we did the star removal with one of the ML tools, we can set them to produce star-only images as well — they're called star masks in some cases.
B: So you can have three auxiliary images, in addition to your H-alpha, O3, and S2: the stars of H-alpha, the stars of O3, and the stars of S2. You can process them into a star-only color image if you want: you can do a channel combination, or use more complicated expressions like a dynamic palette. And, like I would do with broadband stars, you can do color repairs for the cores.
B: If the cores are saturated, use the Repaired HSV Separation script. You're going to stretch that, and I recommend GHS because it does a good job of not over-saturating the cores of the stars. Of course, this is a color image of the narrowband stars, and whatever starlight gets past your narrowband filters usually has weird colors.
B: There are two traditional approaches to making narrowband stars look better — and by better I mean not having strange, weird magenta casts or magenta halos, and not having green stars, which are physically impossible. The two traditional approaches are to repair the colors of the narrowband stars, or to replace them altogether with RGB stars. This goes back to the question we had from Jeffrey.
B: In addition to these two traditional approaches, I favor a third option that I'm proposing — you may want to try it. You can replace the colors of the narrowband stars with RGB colors, but keep the luminance profile of the narrowband stars, because typically a narrowband star is smaller, sharper, narrower than the equivalent RGB star. It may be more pleasant to the eye, and may give the impression of a sharper, higher-resolution image, although it's more of an impression than actual data.
B: Before we go any further: we are still dealing with false colors. I'm talking about how to make things more aesthetically pleasing, not how to produce data of any scientific value. So, if I start with the narrowband stars — I've produced my RGB combination of stars from H-alpha, S2, and O3 — I use this recipe: I invert the image, I apply SCNR (selective color noise reduction), essentially removing all of the green, then I invert it again and apply SCNR again. Why do I do that?
B: Well, when you invert an image, if you had magenta stars to begin with, after the inversion magenta goes to green.
B: By removing green with SCNR, you are actually removing magenta in the original data. Then you invert it back, and at that point you have no more magenta stars. Then I apply SCNR again — maybe not at 100% strength, only at 80% — to remove the greener stars. Neither magenta stars nor green stars exist in nature, so they are eyesores, in my opinion, and this is an easy way to remove them. In my experience this is often plenty to get colors that look good. Again, I'm not saying they are true — they just look good.
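For reference, the invert–SCNR–invert trick collapses into a single PixelMath pass on the green channel — a sketch assuming SCNR's "average neutral" protection mode, with $T as the star image:

```
R/K:  $T[0]
G:    ~min( ~$T[1], (~$T[0] + ~$T[2])/2 )
B:    $T[2]
```

Clamping the inverted green against the mean of the inverted red and blue is average-neutral SCNR applied to the inverted image, which removes magenta once inverted back; for the 80% second pass, blend 0.8 of that result with 0.2 of $T[1].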
B: But if you're after truer color, then you may want to collect, for instance, 30 minutes' worth of RGB data using short exposures, like 30 seconds, to avoid saturating the star cores. Then you process the resulting data as you normally would with an RGB image. No need to collect luminance — we already have the luminance we need; you just want the RGB data. Stretch the data aggressively and, most importantly, clip the blacks aggressively: you want the background to be pure black.
B: You don't want any non-stellar data in your background, because it would be mostly noise compared to the higher SNR of the narrowband data. Then you can overlay the RGB stars on the stretched narrowband image using PixelMath — or, if you're using Photoshop, using the overlay operator: the inverse of starless times the inverse of the RGB stars, and then invert everything.
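In PixelMath that blend is one line (starless and rgb_stars stand for your own view identifiers):

```
~( ~starless * ~rgb_stars )
```

Multiply the inverted images, then invert the product: the stars' light is added without pushing anything past white.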
And then there's my own technique: replacing the colors of the narrowband stars with RGB colors.
B: Again, you first produce a narrowband star image using a channel combination — don't worry about the colors at this point — extract the luminance while it's still linear, then stretch it using GHS or your preferred stretching method. In parallel, acquire RGB data for the stars and process it to a stretched image just like we discussed. At this point you have the stretched luminance, which is the luminance profile from the narrowband data.
B: You have an RGB image, from the RGB data, that is only stars, and you use LRGBCombination to put them together. Your resulting image will have the same tight star profile you get from narrowband data, it will have the natural color from your RGB data, and you typically have full control over how visible you want your stars to be in the final image — how much star presence do you want?
B: Because you are in control of how much you stretch the luminance extracted from the narrowband data: stretch it more, more stars; stretch it less, only the brighter stars will be visible. And this is it for the presentation. If there are questions we can have a Q&A now, or we can jump into PixInsight, do a step-by-step walkthrough, and take questions as they come. Do we have any questions?
A: [inaudible question]
B: Yes, absolutely. The problem is that when the signal is very weak, StarNet++ may not do an excellent job. It's a neural network, so it depends on your image being similar to the images it was trained with, and typically it was trained on images that have a significant background.
B: It depends on the image. Deconvolution works when I'm well sampled or oversampled. If I use my small scope, a 65-millimeter refractor, it is way undersampled, so there's no point in doing deconvolution there. But if I use my 130-millimeter refractor, and maybe I bin 1, so I have 2.3-micron pixels, then I do the deconvolution — and I do it after a pass of StarNet or StarXTerminator, so essentially I don't do the deconvolution on the stars.
B: I do it only on the non-stellar parts. It gives good results, and it's actually another technique that I learned from thecoldestnights.com.
B: Yeah, all right. So this is my processing of the Tulip Nebula. The three files that I have here are the integrated images directly from WBPP. This is H-alpha,
B: this is O3, and this is S2. There is signal in all three, but there will be some challenges in the processing. As you can see, the auto-stretch of these three images is unequal: S2 is stretched way more than the other two. You can see that in the fact that the stars are much more present than in H-alpha, for sure, but even more present than in O3. So this is one of the things we're going to have to deal with.
B: My processing for the narrowband data typically starts with a little bit of DBE — DynamicBackgroundExtraction. It is a challenge to do it properly in an image like this, which has a lot of nebulosity in the background; you try to do it by placing samples in places that are as close to the true background as possible — dark nebulae, for instance.
B: I do it separately for H-alpha, for O3, and for S2. Then I typically crop, because the image dithers and drifts during acquisition, so the borders, the edges, are not 100% covered: there's less data at the extreme edges than in the center of the image. So, a little cropping — and then, well, this is just renaming the file. Then what I do is apply, in this case...
B: These are still images in the linear stage, so each of them has an auto STF applied — you can see it by the green line under the view name. After this I stretch the images, so I'm going to remove the auto STF and move forward. In this case I just did a very simple HistogramTransformation. I could have used GHS — GHS is a convenient script, but you have to install it;
B: it doesn't come with PixInsight. Let me show you an example of how to use it: I clone this, and I invoke the script.
B: If I want to see what this is going to do — this red curve is the transfer curve. Right now it's a linear, one-to-one transformation. I want to change the transformation, especially in the part where the histogram of the image becomes interesting, so I'm going to put an inflection point here, and instead of an identity transformation I'm going to apply a certain factor for the stretch and another factor for what they call local stretch intensity.
B: There — it's done; I can close it, I don't want to save the log file. The output is no longer linear; it's been stretched. It's a gentle stretch, you see — if you compare it with what I did with HistogramTransformation, it's much less aggressive.
B: The histogram-transformation version could look like this, which is similar, for the H-alpha. The main difference I see, if I superimpose the two images and blink between them, is that in the GHS one — this one — the highlights are much more controlled; they are not as bright as in the histogram-stretched image. Again, a matter of personal preference; you may want to try both.
B: So let's say I've done this for H-alpha — I'm just going to close this one — I've done this for O3, and I've done this for S2, and I stretched them differently; they are not stretched by the same factor. Why is that? Remember what I said in the presentation: I want to try to make their
B: — where is it — their histograms roughly comparable. Now, of course, PixInsight is not cooperating.
B: This is a bug, sorry — the histogram seems to be completely compressed into this corner, which is obviously not true, because the image looks about the same as this one. There's no STF applied to either, so their histograms should be similar, and here they are not. But, bug aside, you can see that the H-alpha histogram and the S2 histogram are not identical, of course, but they have roughly similar widths and similar median values.
B: The main difference you can see is that H-alpha is nice and smooth while S2 is already noisy, so I may want to do another pass of noise reduction before getting to the final image. What I actually did in this particular case — I had done a NoiseXTerminator pass earlier in the steps. So what I did at this point was to create an RGB image — we're going to see how — well, sorry, an SHO image, really.
B: I normally do a pass of SCNR, which removes some of the green cast — I don't know if you can see it: if you look at the core of the Tulip, this is greenish teal in color, and here it becomes some yellows and some blues, which I prefer, again purely from an aesthetic perspective. Then I do another pass of NoiseXTerminator, which is an ML-based tool for noise reduction — you can use MLT if you prefer, not a problem — and then it's time to do the LRGB combination.
B: But what is the synthetic luminance that I'm going to use? While I was processing the three channels, before doing the stretch, I kept the three linear versions of H-alpha, O3, and S2, and I produced a synthetic luminance with this PixelMath expression, which happens to be the same one I showed you in the presentation: I overlay the difference between S2 and its median with the difference between H-alpha and its median and the difference between O3 and its median, and add back the median of H-alpha.
B: This produced this image here, in linear form, and then I stretched it with two manual passes of HT — I could have used GHS plus HT, just like we saw with the linear H-alpha image. This is the synthetic luminance: it has the nice detail that comes from H-alpha in areas like this one, or in these pillars.
B: It's not the same, so at this point you may want to do an additional histogram transformation to bring the overall tone balance — the midtones — closer to what you have in mind as your vision for the image. Once I've done that, well, I just rotated the image 180 degrees, because I preferred it aesthetically oriented this way; a matter of personal preference.
B: I don't know how it renders over Zoom, but I can see an enhancement in the detail of this structure here — before, after; before, after. Now, this is where some deconvolution would also have helped; I was lazy in this case and didn't do it. After this I do a pass of LocalHistogramEqualization, which helps a lot in creating more contrast — micro-contrast — in certain structures, like here and here. It doesn't need to be super aggressive in this case.
B: You see the background here is kind of purple, so I did a number of passes, including a mask to only affect the background — and here I erred on the other side, and it's too green to my eyes, so I tried again, and in the end I reached a balance, which is this one, which is kind of okay. At this point I still use a pass —
B: you can also use BackgroundNeutralization if you can find an area of good background — and you're going to get something like this. Finally, I can do an additional MultiscaleLinearTransform, again to increase some detail, and I did another pass of noise reduction. Why not? These tools are so effective. By the way, you can use NoiseXTerminator, like I did here, with no denoising at all, just trying to sharpen up the detail.
B: It is quite possible, and it works pretty well — before, after. It's a similar result to what you can get from Topaz DeNoise, if you like that kind of tool. At this point we have the final SHO-processed starless image, and it's time to add the stars back. By the way, I could have chosen a completely different strategy for this image — let me show you how it would look in a Hubble palette.
B: Then I apply SCNR to the image with an amount of one, removing all the green, and invert the image back. Boom — the purple stars are no longer there. Look at the difference: purple star here, purple star here — where did they go? They're gone, or much less evident. At this point, given that the stars may not be green anymore but are still saturated — because the source was saturated — I needed to recover the cores.
B: The result is a more interesting, more colorful image. You see these blue stars that I didn't have before? The blue has been recovered from the halos. You're not going to see the halo right now because I'm not using any auto STF — it's very hard, essentially impossible, to use an auto STF with a starless image. You can, but you have to do it manually: if I just apply the auto-stretch command, I get something horrible like this — nothing good!
B: I don't know any other ML-based tool for noise reduction, but you can do pretty good noise reduction with MMT, the MultiscaleMedianTransform, or MLT, the MultiscaleLinearTransform — I believe I gave a presentation at the beginning of the year about that. EZ Denoise? Yes, good idea, Raja! That's actually a script prepared by — I can't remember his name —
B: DarkArchon is the handle he goes by. In my opinion, that denoise script is a little bit hit or miss — at least for me it has been hit or miss. It uses TGVDenoise, which always drives me crazy because it requires a fine amount of tweaking and trial and error to get the coefficients right. If somebody is an expert in that and wants to come and present it, I would be very interested, for one. And it's free, no doubt.
B: If this parameter is one, I start to recover something; at two, I recover more. You can go even further, but in my experience you then end up with very few, very bright stars, which I personally don't like much — again, yet another matter of personal preference. If you apply this, you're going to get something like this: here are the stars, and they are colored — I mean,
B: they don't have all the colors of the spectrum — again, we are starting with three wavelengths and assigning them rather arbitrarily to R, G, and B — but it's better than what we began with. Let me do an experiment: if I take the image that I just produced with the three-color combination, and I clone it, I can apply the same GHS.
B: At this point — I have to remember — I need to saturate the colors a little bit, I have to rotate it, because I had rotated the original image, and I may do even some more color saturation, and then I can combine this with this. There is a question, let's see: PCC, photometric color calibration? Yes, you can.
B: Although you're starting with three narrowband datasets, so you're not going to have complete coverage of the whole spectrum, yeah, you can do it, and I've done it in other images. I certainly do it for the RGB stars; it always gives me pretty good results.
B: In the following step, when we try to replace the stars with RGB data, you're going to see that in action. In this case I'm just going to show you the result of applying the narrowband stars. This is my original SHO starless image; I combine in the NB stars — the narrowband stars.
B: Then I do a step of this wonderful script called Color-Corrected HDR Multiscale Transform — look at how well it recovers the color in the core of the Tulip, it's amazing — and then a little bit of histogram transformation to tone down the general balance. This could be the final image, if you like the narrowband stars. This is how the narrowband stars look; they are not too ugly to my eye, but we can probably do better.
B: So let's see how we can replace the stars with the RGB stars. I have here the R, G, and B data that I acquired in 30 minutes from home — this is not from a dark site like the narrowband data, so the data is noisy. You see this? I wouldn't want this mixed with my nice, clean narrowband data, and we're going to try to avoid that. In the RGB process I use ChannelCombination to create an RGB image.
B: This is the image after PCC. PCC has changed the black point, and there's still some non-stellar stuff here. I could at this point use a custom StarNet pass to remove it; what I did was slightly different. Again, I used the same Repaired HSV Separation script to recover the star core colors where the cores were saturated,
B: and their repaired color is a nice one, because it comes from the color of the halo — it's very effective. Next I stretch with GHS, and when I stretch I start seeing that some of the nebula seems to want to come out. If I look at the histogram of this, yeah, there's a gap between the left margin and the histogram peak, so I'm going to do a pass of noise reduction again —
B: NoiseXTerminator, which is going to make the histogram look very sharp. Then, after rotation, some saturation, and maybe a little bit of sharpening, I use HT to clip the blacks. I know that we shouldn't clip the blacks, but here I'm not interested in the data hiding just above the background; I don't want the background, I only want the stars. When I do this, I produce this final image,
B: which is ripe to be integrated with the narrowband data, which I'm doing here. This is the same image we started from in the case of the narrowband stars; I add the RGB stars and then do the same two steps, HDRMT and a little bit of histogram transformation. Now let me blink the two images. Yeah, the RGB stars are more colorful, right? Wouldn't you agree? The problem is that they are not as nice and tiny as the narrowband ones.
B: That may or may not be a problem to you, but it's a fact — I think it's undeniable. So it would be nice to combine the best of the two worlds, and actually you can: you can produce a combination of narrowband and broadband color stars by using this. I start with the same RGB data that I had produced before, I clone it, and I very simply apply an LRGB combination using the luminance extracted from the narrowband star image we produced at the very beginning of these star-addition steps.
B: When I do that, you see: these are the RGB stars, and these are the RGB stars after LRGB combination. Look at how these big stars in particular become tighter and, to my eye, more pleasant, especially after I combine them with the rest of the data. Again, let me show you how it looks: I start with the starless data,
B: I add the stars back, then the usual HDRMT and HT. Now let's compare it with the version of this image with the RGB stars. I want to take the same section of the image — let's open this blink: RGB versus narrowband-plus-RGB. The color information is the same; the profiles of the stars are tighter. Whereas if we compare it with the image with the narrowband stars only — again I do the same trick, I want the two images to show the same area — and then I blink them.
B: Now we have two images that have the same tightness of the stars, but the color information is different: this is the narrowband color information, and this is the RGB color combined with the narrowband star profiles. So we have photometrically calibrated colors — whatever that means — and the tight star profile, which in my opinion could be considered the best of both worlds. This is the way I would leave it. You could do the same.
B: I can share this data if you want to play along at home, and you can try to do the same with the Hubble palette, if you are so inclined, or create another palette to your liking — you don't need to use mine, for sure.
A: There's nothing else in the chat, but of course, please feel free to speak up if you have questions. And I wouldn't say an hour and twenty minutes is too long — but okay, thank you. So, are there other questions, folks?
A: [inaudible question]
B: Yeah, you could do that. You can do it with Adam Block's star de-emphasis technique, or with — what's the name of the tool — MorphologicalTransformation, if you want. I combine the two because this is information I already have: the narrowband star masks are a byproduct of the starless processing of the narrowband data, which I would need to do anyway. So the data is already there.
A: Well then, why don't we finish up. So please unmute and thank Francesco for another fine presentation — we really do appreciate these. Great presentation, thank you.
B: Thank you very much.
A: All right. Well, thank you, Francesco. We'll see everybody next month, when Richard will tell us about planetary imaging. Until then, see you later.