Description
Processing 'clean' data in PixInsight: the Helping Hand nebula.
Francesco Meschia
We'll go step by step from the master lights to the final image of the Helping Hand nebula, one of AstroBin's "Top Picks" of December 2021. The data was acquired at Pinnacles National Park, with very little light pollution. The masters were calibrated with the Weighted Batch Pre-Processing script, and they are available for download:
https://drive.google.com/drive/folders/1Q7z3JXknmHOUTtnq2wqs9_wqStxwv1Bl
You are encouraged to try and process these ahead of time and bring any questions you have to the presentation.
A
Well, welcome to the January 2022 SJAA Imaging Special Interest Group meeting. We've got a special guest: our friend Francesco Meschia is going to tell us all about processing. Without further ado: Francesco, take it away.
B
Thank you very much for giving me this opportunity to give not just one but two talks. As we were joking before the start of this meeting, tonight is going to be about processing clean data, or relatively clean data, and the next one will be about processing garbage and trying to get something out of garbage. So tonight, if you want, is kind of an introduction, but not too much of one, because of the processing of the Helping Hand nebula, which is the target that I chose.
B
This is not challenging because of the quality of the data: they were taken from a nice dark place, Pinnacles National Park, so they don't have gradients to fight, not many problems. They were all taken on the same night, so there was nothing to equalize. But the target is exceedingly faint, and to give you an idea of how faint it is, I'm going to start sharing and showing you some images; first of all, where it is. Where is this object?
B
What you need to know to find this object is that it's next to a variable star in Cassiopeia, SU Cassiopeiae. SU Cas is actually listed in SkySafari, but it's just listed as a star. So how do we know that there's something there? Well, I know because I found this object imaged by somebody else on AstroBin, so let me switch to AstroBin and show you.
B
It wasn't immediately clear to me that it was that faint, so I tried to image it last year and, of course, I could not find it. I wasn't really equipped for that; I was just using my DSLR, and there was essentially nothing, just a very, very faint change in the color of the background.
B
So this was the image that got me hooked. But where is it? I said that it's at SU Cassiopeiae, between Cassiopeia and the constellation of Camelopardalis, the Giraffe. So I tried to figure out where it is by looking: I found this very nice wide-field image of Cassiopeia to see where it is. I know where it is, so I'm going to show you, but I don't see the nebula in this image.
B
Even in this image, which is relatively deep (I mean, you can see the Pacman nebula here, you can see the Soul nebula, the Heart nebula, the Double Cluster, and a number of dark nebulae in the Milky Way), but let me zoom in: this one is SU Cassiopeiae, and the nebula is exactly where the hand is, so to speak, and I see absolutely nothing in terms of nebulae around here.
B
If, instead of using amateur images, I go to a professional catalog, I can try to show you something more. Let me get this one, which is the...
B
...Aladin atlas. If I go to the area where this image is, and I show you the standard DSS (the Digitized Sky Survey from Mount Palomar, using the color edition), I see almost nothing. I can actually see a very faint reflection nebula, but if you want to see something, you have to either invert the palette, like this, and in this way you can start making out that this area here is whiter, and being a negative...
B
...it means that it's darker. Or you can use a color palette like this one, which makes it somewhat easier to read. You see here: this is the shoulder, this is the arm, and this is the hand, but it's rotated 90 degrees compared to my image, because I had rotated my camera. And still it's very, very hard to see.
B
So, to produce my image, I went out, and this was from Pinnacles National Park, Pinnacles West. It's eight hours and 30 minutes of total integration, of course combining the R, G and B channels.
B
I apologize; right, so this is R. Something is visible in blue; the dark nebulae are more visible. Let's bring in the green as well, and this is green, which is almost halfway between the two. So if I zoom in, there's lots of noise. And there's the indicator, which means that all three images have an auto STF applied: the screen transfer function that stretches them so that they are visible. But it means that these nebulae are very, very close to the brightness of the background sky.
B
There's essentially no contrast, so the motif of this processing will be how to make the image readable while keeping the noise at bay, without amplifying the noise too much.
B
So there is one thing that I learned to do when I switched to a mono camera. There is a very useful script in PixInsight that can only be used with mono images, and it needs to be used as basically the first process that you ever apply to the master data: the MureDenoise script. And it's under... sorry, hi?
B
Yeah, okay. So MureDenoise is a very powerful script, and it does a very good job at reducing noise, but it can only be used with mono images, and you need to have characterized your sensor, because you have to feed the script with this information: the gain of your sensor in electrons per DN, and the noise content of your image, strangely enough measured in DN rather than in electrons; but it's just a conversion.
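The conversion he mentions is just a division by the gain. A minimal sketch, with illustrative numbers rather than the values of any specific camera:

```python
# Converting a noise figure between electrons and data numbers (DN).
# The gain is in electrons per DN, as MureDenoise expects; the example
# values below are assumptions, not any particular sensor's specs.

def noise_dn(noise_electrons: float, gain_e_per_dn: float) -> float:
    """Express a noise figure measured in electrons as DN."""
    return noise_electrons / gain_e_per_dn

# e.g. a sensor with 2.2 e- read noise at a gain of 0.8 e-/DN
print(noise_dn(2.2, 0.8))  # ~2.75 DN
```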
B
There is a companion script that you can use, MureDenoise... sorry, MureDenoiseDetectorSettings, that helps you determine those values if you don't know them, or if you don't trust the ones that the manufacturer has given you. And it's easy to use: you just need to have some calibration files, like two biases, two darks and two flats. Now I'm going to run the script. I have to run it on all three masters; it's going to take a little bit of time, so I apologize. I'm going to start.
B
This is not a very powerful machine, so it takes a little bit. In the meantime: I mentioned earlier that I was going to give some reference material. So if you are interested, these are a couple of books that I recommend about PixInsight. This one is... oops, there's some glare.
B
This one is the first edition of Warren Keller's Inside PixInsight.
A
Francesco, maybe you can make it bigger, so we can see it? Yeah.
B
It's a good reference for both OSC and mono cameras, and it has some good examples of workflows, for RGB images and for narrowband imaging too. Then, more recently, I have become acquainted with this one, which is Rogelio's Mastering PixInsight.
B
That is, well, let's say it's different, because Warren and Rogelio have different approaches to processing the data. But it's just as useful as the previous one, as Inside PixInsight. Actually, as a bonus, Rogelio has also produced...
B
...a second volume; not a second edition, a companion volume, which is the reference guide for all the processes and for the most important scripts in PixInsight. And I think Rogelio still has the electronic version available. So I don't want to...
F
G
F
B
Yes, and thank you for mentioning that. I have a 14-bit camera, and other people, people who have the 1600, have a 12-bit camera. But PixInsight always rescales the range of your camera to a 16-bit space, and so yes, there is always this multiplication that needs to be done if you want to transform the data that the manufacturer gives you, which is typically in the native bit depth of the camera, into the PixInsight range.
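The rescaling just described is a simple multiplication; a sketch, assuming integer ADU values and the usual 2^bits - 1 full-scale convention:

```python
# PixInsight maps a camera's native ADC range onto a 16-bit range.
# Sketch only; PixInsight performs this internally when loading data.

def to_16bit(value: int, native_bits: int) -> int:
    """Rescale a raw ADU value from its native bit depth to 16 bits."""
    return value * 65535 // (2 ** native_bits - 1)

print(to_16bit(16383, 14))  # full scale of a 14-bit camera -> 65535
print(to_16bit(4095, 12))   # full scale of a 12-bit camera -> 65535
```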
B
And please also note: he has a website, deepskycolors.com. Okay, all right. So let me show you the effect of this script. Now the script has executed; right now I'm going to undo and redo. As usual, I never know how good the rendering of fine details is with the Zoom video protocol, so please, Glenn, let me know if what I'm showing is visible at all. So I'm undoing, it's undone, and redoing.
B
So you can see that the noise is significantly reduced, at least the small-scale noise. We can also take an even more quantitative approach: if I take the HistogramTransformation window and I enlarge the axis scale, look what happens to the distribution of the histogram when I undo and redo. You see, it gets that much narrower, like half as narrow, when I run this script.
B
It's a script that I like to use not only because it's effective, but because it doesn't guess which data are important and which are not. It tries to figure out what the noise distribution is, based on the characteristics of the sensor, and it makes multiple attempts, finding the magnitude of denoising to introduce that minimizes the output sigma without sacrificing the detail.
B
It's a bummer that it doesn't work with OSC cameras. Or rather, there is a way to make it work, but you have to debayer your image using what's called the superpixel debayering algorithm, which basically makes one pixel out of four. The advantage is that it doesn't have any interpolation, and interpolation is exactly what makes MureDenoise unsuitable for OSC data.
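A sketch of the superpixel idea, assuming an RGGB Bayer pattern (the actual pattern depends on the camera): each 2x2 cell collapses into one RGB pixel, averaging the two greens, with no interpolation involved.

```python
import numpy as np

# Superpixel debayering: one RGB pixel per 2x2 Bayer cell (RGGB assumed).
# No interpolation, which is what keeps the per-pixel noise statistics
# valid for a script like MureDenoise.

def superpixel_debayer(cfa: np.ndarray) -> np.ndarray:
    """Collapse an RGGB mosaic of shape (2h, 2w) into an RGB image (h, w, 3)."""
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    return np.dstack([r, g, b])

mosaic = np.array([[10, 20],
                   [30, 40]], dtype=float)   # one RGGB cell
print(superpixel_debayer(mosaic))            # one pixel: R=10, G=25, B=40
```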
B
So we're waiting for the second MureDenoise run to finish. Let's see. Like many scripts and noise-reduction techniques in PixInsight, you can decide how many iterations are subsequently applied to the image. In this case I used eight, which seems to work for the type of noise that I get from my camera, but of course it's worth experimenting with your own cameras. Let's see, almost there. The iterations are called "cycles" in the script.
B
I doubt that you can, because that is an interpolation, and I don't think you can predict how much it's going to interpolate each channel. But maybe in that case I would do it before, before doing the drizzle integration. So I would use MureDenoise on the calibrated subs, and that is going to take a long time.
B
And I apologize: I have several windows hiding the chat. So if there's a question, could I ask you to speak it out loud? Yeah.
A
Nick suggested I do that, so I apologize to Bruce, but anyway, yes, I'll do it. But if I miss it, please, anybody else who notices, please speak up. Thanks.
B
Oh yeah, it's as straightforward as it gets. I used WBPP, which is a thing that people tell you not to do, but in this case I would say there were extenuating circumstances. The images were collected under good skies, with almost no variation in gradients. It's not like I had to do a big selection or some strange normalization. I didn't even use the NormalizeScaleGradient script, which is very useful, but...
B
...it was kind of overkill in this case. So yeah, the three masters that you get, as you may have guessed from the file names, are the ones that WBPP outputs directly after calibrating, registering and integrating the images. And I think that as the normalizing factor I used the signal-to-noise estimation. All right, we're done.
B
It takes away almost half of the width of the histogram; not bad. By the way, we were mentioning the reference star for this object: SU Cassiopeiae is this star here.
B
So if you recall what I was showing you in that AstroBin image, you should have seen this dark section here, and maybe this one; and I could see absolutely nothing in my image, which was not a bad image at all. So it is faint. All right: now we have the three masters, and we have done a first run of denoising.
B
What I normally do at this point is to apply a little bit of cropping using the DynamicCrop process, for the simple reason that the edges, given that I dither during capture, have only partial coverage: there's only partial overlap of the images in the stack, in each of the masters. So I normally crop a little bit. This camera natively has 4144 by 2822 pixels, and I like to crop it to 4000 by 2700.
B
The other thing that I like to do at this point, and again this is just my way of doing things, there are other ways, is to try an initial modeling of the background. In this case, what I do is simply use the AutomaticBackgroundExtractor. I don't want to closely model the actual target, the Helping Hand nebula; I just want to model the overall brightness. So what I do is use a low polynomial degree for the interpolation.
B
Now, I set this process to produce, as a secondary output, the map of the background that was applied. So if I do an auto STF here, I can see that this is the background that was subtracted from this image. It doesn't show any sign of having the object itself in it, so I'm okay; I can just throw it away. And I'm going to also drag and drop this process onto the other two masters.
H
Francesco, a question; this is Rich. Does this script magically detect and subtract the nebula from the background?
B
If you increase the degree of the polynomial enough, if you put it at, like, the 16th degree, or maybe even the 10th, it's going to model closer and closer to the actual brightness. So it is going to start modeling the nebula itself, which is what you don't want, right? What you want is to use a low degree for the polynomial, so that it just compensates for the general, not an average, but the low, very large scale...
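The low-degree fit he describes can be sketched as an ordinary least-squares fit of a degree-2 polynomial in the pixel coordinates. This is a toy stand-in for ABE, which also does sample generation and rejection; the synthetic frame below is an assumption for illustration.

```python
import numpy as np

# A low-degree background model in the spirit of ABE: fit a 2nd-degree
# 2D polynomial to the image. A low degree follows only the large-scale
# ramp; a high degree would start chasing the nebula itself.

def fit_background(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    # design matrix for a degree-2 polynomial in x and y
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

# a synthetic frame: flat sky level plus a smooth parabolic gradient
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
frame = 100 + 0.5 * xx + 0.02 * (yy - 16) ** 2
residual = frame - fit_background(frame)
print(np.abs(residual).max() < 1e-6)  # True: degree 2 models this ramp
```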
B
...parabolic ramp. And we did it for all three images, so these are the three results. Of course, they look like they have similar brightness values here, but that's just because we have applied ABE... sorry, the STF, to all three. It doesn't actually mean that the three images have comparable brightness values. Right now I just reapplied the auto STF, and you can see the brightness level changed; we don't know if they are comparable.
B
And in G... sorry, in G and B, you see the histogram almost does not change. It means that the median values of the three images have been brought to coincide, more or less, which means that when we combine them together to produce an RGB image, which is my next step, I'm not going to have a completely red image, or a completely green image, or a completely blue one.
B
There was some green here, some green here, so I thought that maybe it could be worth another run of ABE, the automatic background extraction, with the same parameters as before, because maybe there was something left over from the previous run.
B
Now, the following step in my workflow, when I'm at this point, is to do an initial calibration of colors while we're still in the linear phase. Why am I doing this? Because most of the color calibration tools in PixInsight operate best when the data are in the linear space, so that when you add or subtract values, like a background, you can actually do it meaningfully, because you are still in the linear space.
B
When I use the ColorCalibration tool, I have to instruct the tool, with some parameters, to not consider the background, to not consider the nebulae themselves, and to not consider the saturated stars, which are by definition white; but to consider the union, so to speak, of all the stars in this image, and rescale their values under the assumption that the combination of all these colors, all the colors of the stars, is a pure white.
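The core assumption can be sketched in a few lines: average the star colors, call that average white, and rescale. Everything ColorCalibration does to build the star sample (structure detection, saturation limits, background rejection) is omitted here, and the mask is an assumption for the illustration.

```python
import numpy as np

# The white-reference idea behind ColorCalibration, reduced to its core:
# take the mean color of the sampled stars, assume that mean should be
# neutral, and rescale each channel accordingly.

def white_balance(rgb: np.ndarray, star_mask: np.ndarray) -> np.ndarray:
    """rgb: (h, w, 3) linear image; star_mask: boolean (h, w) star sample."""
    means = rgb[star_mask].mean(axis=0)   # mean R, G, B over the stars
    factors = means.mean() / means        # push the mean star color to gray
    return rgb * factors

img = np.ones((2, 2, 3))
img[..., 0] *= 2.0                        # reddish stars
mask = np.ones((2, 2), dtype=bool)
balanced = white_balance(img, mask)
print(balanced[0, 0])                     # equal channels after balancing
```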
B
And with this I'm also answering one of the questions in the chat: this is the way that you can do color calibration in an RGB image. As for narrowband: most of the narrowband processing that I do uses some form of tone mapping, or the dynamic palette combination that was also mentioned, so there is no... not really.
B
It doesn't really make a lot of sense to talk about color calibration in narrowband, because the palette is essentially chosen by you when you do narrowband imaging. But in this case, what I'd like to have is a balanced color of the stars. This image has nice bright beacons: you see here SU Cassiopeiae; this is, I think, RZ Cassiopeiae; and then there are a number of other stars.
B
I wanted them to show me a nice spectrum of color: I want to have some blue stars, some orange stars and yellow stars. So I have tried using both PCC, photometric color calibration, and the standard ColorCalibration on this image, and found that I prefer ColorCalibration. And this is not science; this is purely my aesthetic sense. It has nothing to do with what the right way to do it is, provided that there is a right way.
B
PCC absolutely has its own merits. What it does is compare the chromatic content of this image with the chromatic content of the stars taken from a catalog, and after that it tries to white-balance your image against either a spectral class of stars or an average type of galaxy.
B
So you can tweak the way PCC works considerably by choosing a different reference. Of course, in this case, as I said, my aesthetic sense, and what I meant to communicate with this image, was more suited to the standard ColorCalibration workflow. But before doing the color calibration in this way, I want to tackle another challenge, which is to neutralize the background. As you can see, the background has some color: there's dust here, and the dust has colors, because it reflects the light of stars, and the stars have different colors.
B
In this case, I consider that this area here... I see that there's a wisp of dust here, there's some dust here, there's obviously dust here, but this area here is plausibly part of the background, at least part of the background for the depth that I went to. So I create a preview in this area, and I'm going to use this preview as my reference black, so to speak; reference background, actually, not black.
B
You have to indicate what the reference image is, which in this case is image 07, so the target one, and I want to indicate a region of interest, which I'm going to inherit from the preview. Why do I use a region of interest instead of just choosing a preview here? Well, it's a matter of habit. When you do this...
B
...if you have multiple images that you want to apply the same process to, maybe they don't have a preview created in each of them. You may want to use as the target the image where you want to apply the process, and copy the coordinates of the region of interest, instead of creating a preview for each of the images.
B
The other thing that you need to do is tell the process what you consider the upper limit for the background search, and this is one of those cases in which the default is not right, because this is still a linear image. So we have to go and measure what the average background here is, and the way I do it (there are many ways, but the way I like to do it) is to take the HistogramTransformation tool.
B
The lower limit can be left at zero, of course, because there are no negative values in PixInsight, and the histogram of this image was pretty much against the left edge. So I'm going to drag and drop this, and the process has actually neutralized the entire image, using this preview as a reference. What does that actually mean? Well, if I take the histogram again, and let me magnify this curve as much as I can: this curve is actually the combination of a red curve, a green curve and a blue curve. Before I applied this neutralization...
B
...this was the result for red, green and blue. So if you look at this curve, you see that there's more red towards the low values and more blue towards the high values. After I apply this background neutralization, the situation is essentially neutralized, or even a little bit reversed, and you can see it in the...
B
...shape. I don't know if it's visible through Zoom, but now the blue curve is to the left and the red curve is to the right. What does that mean? It means that most of the background of this image has become slightly red, which kind of makes sense, given that it is faint dust illuminated by stars, and probably emitting what they call ERE, extended red emission.
B
Okay, all right. So now we have applied the background neutralization; we haven't done the color calibration yet. For the calibration, as I said, I'm going to use this tool. The reference image will be the target image, and I'm fine with that. I don't want to use any particular region of interest; I want to use all the stars in this image. But I want this tool to avoid considering the nebulae, and so, to use the jargon of this tool...
B
...I have to enable structure detection, so that the structures will be detected and removed from the sample of the stars. So the brightness level of the stars will receive a subtraction, and the subtraction will consider the first five layers of structure; and maybe I should bring it up to six or seven, even, since these are large-scale nebulae. For the background, I can again indicate that I want to use the region of interest that I had already selected.
B
This means two things. First of all, that I'm using the range of my camera effectively: because I have some saturated stars, and I don't want all of that saturation to influence my calibration. So instead of going to 0.95, I'm going to stop at 0.9, or even 0.85. You can decide: you know that 0.999 is already saturated, and anything less than that is probably still in the linear range. I prefer to keep myself a little lower.
B
Let me put back the auto STF function and apply this tool. Now we're ready: it's calculating and calibrating, and this is the calibrated image according to the ColorCalibration tool. So it's essentially considering the average color of all the stars in this image and assuming that that should be white. And again, if your aesthetic taste is different, and you prefer to have, like, more red stars or more blue stars...
B
...my original attempt. All right. So now we have an image which is color-calibrated and has received an initial round of denoising, if you remember; but there's still a ton of noise, both chromatic noise and large-scale noise. So the next step for me is to do a round of linear noise reduction. But it's by no means the only way to do it; it's just the way I do it, and there are many tools to do it with.
B
One very effective one, that many people use and have used recently, is EZ Denoise, the script that is part of the EZ Processing Suite, and it's very good; it's valid, and I'm not discounting it.
B
I hope you don't mind if I show you the way I do it, because I'd like to show one tool which is very useful for linear noise reduction: the MMT tool, the MultiscaleMedianTransform. I'm going to try to show how I use it, meaning how to make sense of all the parameters that you can set in this tool, and there are quite a few. Now, there are different ways to apply and to use these denoising tools.
B
For instance, some people prefer to do denoising after stretching; I prefer to do it before stretching. That's a matter of taste, essentially, and of what you find most effective for your image and your imaging style.
B
Neither of the two is "the" right way to do it. Most of the denoising processes require some form of mask to be applied, because noise disproportionately affects the low-signal areas: it's in the low-signal areas, in the blacks, in the darks, in the shadows, that you have lots of noise, not in the bright-signal areas. So a typical mask that you can use is an inverted luminance mask. An inverted luminance mask is almost transparent where there is little signal, so the noise reduction acts there.
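The inverted luminance mask can be sketched as one subtraction, here using equal channel weights (the weighting question comes up next in the talk):

```python
import numpy as np

# An inverted luminance mask: bright where the image is dark, so a denoising
# process acting through it hits the shadows hard and barely touches stars
# and bright nebulosity.

def inverted_luminance_mask(rgb: np.ndarray) -> np.ndarray:
    lum = rgb.mean(axis=2)   # equal-weight luminance, values in [0, 1]
    return 1.0 - lum         # invert: protect the bright areas

img = np.zeros((2, 2, 3))
img[0, 0] = 1.0              # a saturated star on black background
mask = inverted_luminance_mask(img)
print(mask)                  # 0 on the star, 1 on the background
```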
B
It is tempting to just go to this button, which is the extraction of the CIE L* component, the luminance, and just use it. But there's a problem here. Before we do that, we have to make sure that the three channels of this image, red, green and blue, are equally weighted by this tool, because building the luminance is essentially an operation that depends on how much you weight the red component, the green component and the blue component. By default...
B
...the color space, the RGB working space actually (it's in the name of the tool, the RGBWorkingSpace tool), holds the three weights, the coefficients that are applied by default to the three channels. So we have green, which takes the lion's share: 71% of the final result is from green; red accounts for 22% of the final result; and blue accounts for six percent of the final result.
B
Now, for astronomy these weights don't make a lot of sense because, as we all know, there's very little green in the sky; there are no green stars, because there's no green black-body spectrum. So the way that I recommend (that other authors recommended, and that I learned from them) to obtain a good representation of the luminance is to set these three coefficients to one, which actually means that each of them will be about 0.33, because they are all normalized to one, of course.
B
I have to apply an STF to see something, but instead of applying an STF, I am going to permanently stretch this image so that I can use it from that point onwards as a mask. The way I do this is with a tool, just because the tool is very convenient: this delinearization tool, which does the job in one click. But the way to do it manually is to open the HistogramTransformation and, side by side, the auto STF, the ScreenTransferFunction tool; use the nuclear option to obtain the right STF function, drag it and drop it over the HistogramTransformation, and finally apply the HistogramTransformation to your image. Your image will become all white, because you have to deactivate the auto STF, and what you have here is, finally, a permanently stretched luminance image that you can use as a mask.
B
That should be RGBL instead. And then I apply it as a mask. The way you do it is just by tearing off this tab and dragging it next to the other preview tabs in the target image. The image has become red, because red is the color that, in my particular installation of PixInsight, I have assigned to render the mask visualization. Now, anywhere you see red, it means that the mask protects the background; anywhere you don't see red...
B
...it means that the mask is affecting what's behind it. And so you can see that it's doing exactly the opposite of what we would like: it's protecting the background and not protecting the stars, the opposite of what we want. So I go back into the Mask menu and I invert the mask; now it's doing what I want.
B
So let's... yeah, these two should be fine. Then I select one of the two, and finally I'm going to open my chosen tool for denoising, which is the MultiscaleMedianTransform tool.
B
Now, MultiscaleMedianTransform, or MMT, is a wavelet-based tool; or rather, it's not really a wavelet, it is multiscale, so it operates in different ways at different scales. What is the meaning of "scale" in this context? Well, it means essentially the size: it separates your image into components of different scales, different sizes. So scale one means pixel-to-pixel variations, very minor variations; scale two means variations from a pair of pixels to the next pair of pixels; scale...
B
...four means from one block of four pixels to the next; and then eight and sixteen. They are powers of two, as you can see. And you can decide how many you want to consider, up to eight, so you can have objects up to the scale of 256 pixels, sorry, 128 pixels, and then everything else will be part of what's called the residual. For this image, I think I'm going to be happy with six layers; not much more.
B
You can apply this tool to the lightness, the luminance, or to the RGB components. Now, in this case, given that I have noise in all three channels, I prefer to apply it to the RGB components.
B
The real trick in using this tool is to know how much noise reduction is needed at each of the levels. How do I decide that? Well, first I'm going to tell you what these parameters mean. This parameter means: start applying noise reduction to anything that exceeds... sorry, that is below a certain threshold, and the threshold is measured in units of the standard deviation of your image, as observed at that scale.
B
It means that anything that does not protrude by at least 10 standard deviations from the average will be cut down, will be reduced, will be subject to noise reduction, in a sense. And then there are two parameters for by how much: 1.0 is the maximum noise reduction, and you can go down to zero if you want. And "adaptive" is a way to handle, essentially, situations in which you have relatively well-behaved noise that maybe doesn't require a super-high threshold, but you have some outliers and you don't want to increase too much...
B
...the threshold, because otherwise your image will look artificially flat and smooth; but you still want to take care of the outliers. You can start increasing this "adaptive" parameter, to maybe one (I normally don't use more than one), and it's very effective in catching the outliers.
B
If I choose to show the changes, and then I apply this tool (I'm going to drop this process onto my preview), it's going to show me, in the preview, the way that this tool sees my image at the scale of the layer that I currently have selected. So in this case, it's going to show me what my image looks like if I only look at the very minor one-pixel-scale variations. Let's do it. The first time I execute it, it takes a little bit, so bear with me for a moment.
B
This is what my image looks like if I only see the very small-scale variations. Well, I can see that there are stars, and I can see that this is essentially noise, because this image does not have very fine details.
B
The nebulae pictured in this image have large-scale, not small-scale, components. So I can start trying to reduce this noise. Let's say that, for instance, I want to throw away anything which is less than one sigma from the average. So, let's...
B
I set the noise reduction here at one sigma, I drag and drop the icon... nice. There's still some, I can still see some salt and pepper, so maybe I can increase it a little bit, to 1.5, if I want.
B
All right, I'm happy with layer two; let's move on to layer three. So you can see it's an iterative process. Unfortunately every image is going to have some differences; it's not one-size-fits-all. But if you want to tailor the amount of noise reduction to your particular image, this is the way to do it, rather than applying a very heavy process that is the same for all images and then reducing its strength with masks.
B
I am using a mask, but a very basic one, nothing like those finely tweaked masks that require some ability to predict what the tool will do. I'm not saying that approach is wrong; it's the right thing for a workflow that is geared toward it. Mine is slightly different. So, let's see scale three. Okay, scale three still has a good amount, mostly noise.
B
Maybe I would like to see what this part here looks like at scale 3. Let me do it; this is another preview on the same image. Now you see that at scale 3 something interesting happens: there are those lines in the reflection nebula that start showing up. I want to preserve those; I don't want to throw them away. So let's see how much noise reduction I need to apply.
B
This is the next layer. You can see here, I don't know if the zoom renders it, but if you follow my mouse, this is the outline of the hand, the dark part. You don't see it as dark, because what you're seeing here are the variations, not the absolute values; it's almost a filigree that you have to interpret in this image. So let's see how much noise reduction is suitable in this case, at the scale of 8 pixels; the previous one was 3, and... 3.5.
B
Okay, I like this. Scale six... oh, before we move to scale six, let me go back to my previous preview and see how it behaves here. It is important, and of course a little slow and tedious, to test these tools against multiple previews, because different parts of the image have different levels of contrast, different details that you may want to preserve, or are affected by different levels of noise.
B
What does the image look like at a scale of 16 pixels? Like this. Okay, so there are these details of the reflection nebula, and you can start seeing the dark nebula pretty well; it starts to have a shadow. All right, let's see how much we need here; we used three for the previous one.
B
Now it's starting to get... you have to strike a compromise between removing the noise and removing the detail. You want to keep as much of the detail as possible, so I keep using the adaptive slider.
B
Let's see how much is needed. I don't know, let's try again with 2.5, as we were doing before.
B
Before, after. And at least to my eye I didn't throw away any detail; the reflection nebula here, all these linear structures, are preserved.
B
Four, five, six. And here: before, after; before, after. I think I might have been a little too conservative at some intermediate scales. For instance, I see there's still some residual noise in this area. Maybe I could have bumped this up to four, and this one to three and a half, because those are the scales of four and eight pixels, which is still relatively small.
B
Okay. When I'm happy, I can drag and drop the process onto the image as a whole. The mask is still in force, as you can see by the color, the shading of the tab, and this is going to take a little while, because now it's no longer operating on the preview but on the entire 4000 by 2700 pixel image.
B
While we're waiting, are there any questions? I know that there are much easier ways to do noise reduction, like EZ Denoise. The reason I wanted to go through this is that these coefficients are mostly considered black art, and I hope I'm showing that there's nothing black here, there's no black art: there is a way to properly gauge the effect of each of those parameters. It just takes a bit of patience.
B
Note that it gets slower as it progresses through the layers.
B
This
is
actually
it's
because
I
use
the
I
use
the
liberally,
the
adaptive
slider
if
by
using
that
slider
you're,
basically
forcing
a
pixel
inside
to
calculate
a
linear
combination
between
the
the
version
of
the
image,
with
the
with
no
adaptation
and
and
one
in
which
there
is
a
local
local
equalization
applied
before
this.
B
But it's chugging through. My fear was that PixInsight had for some reason parked the CPU and wasn't progressing, but no, it's still working. So, a little patience.
B
And, to give an overview of the following steps: essentially, this is the last process that I like to apply in the linear stage.
B
From this point onwards, given that we want to extract very faint nebulosity from an image that also has some very bright stars: I don't want to saturate the stars; I want to preserve the star colors as much as possible. So I'm going to use a technique to separate this image into a starless version and a stars-only version and process them separately, and this will lead us, finally, after some processing and combination, to the final image.
A
Hi Glenn. Yeah, I was waiting for other responses. I'll give you my opinion, but I'm happy to be swayed. I'd say we go till 9:15 and then stop, which is an hour forty-five. We usually stop at an hour and a half, but I know we had a lot of attendees, a lot of interest. My recommendation is: don't rush, don't skip. We'll just pick up next time and get the full benefit of this.
B
So, this script: I started using it when I was doing mostly OSC with my DSLR.
B
Now, this doesn't mean that the stars are blue. What it means is that some stars have saturated. Saturation means that the well in the pixel was full, and so the digital noise... sorry, the digital number that was readable downstream of the ADC was essentially 1.0, or 65,536, which is the maximum value in a 16-bit space.
B
After calibrating the colors, what happened is that PixInsight multiplied one channel, or divided another channel, by some coefficient, and so what is completely saturated is no longer 1.0 in red, 1.0 in green, 1.0 in blue. It has become a different combination of these three values. In this particular case, I see that saturation means that R is 0.7.
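A toy example of that arithmetic (the white-balance coefficients here are made up for illustration):

```python
import numpy as np

# A fully saturated star pixel straight off the ADC (16-bit, normalized):
saturated = np.array([1.0, 1.0, 1.0])        # R, G, B all at full well

# Hypothetical white-balance coefficients applied during color calibration:
wb = np.array([0.7, 1.0, 0.85])

calibrated = saturated * wb                  # no longer (1, 1, 1): a false color
print(calibrated)
```

So the blown core ends up with a color that is purely an artifact of the calibration coefficients, which is what the repair step has to work around.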
B
Repair level is basically the saturation level. I'm going to tell the script: for anything above 0.5, just don't consider the color there; consider the color only of the parts that are below this threshold. And consider stars that have a maximum radius of 35 pixels, and don't clip the shadows. I never want to clip the shadows.
B
Now, these are the three results. This is the Sv component of this space, this is the H component of the space, and this is the V component of the space. H, S and V correspond to hue (so, the color), saturation, and value.
B
Now
what
I'm
gonna
do
I
want
to
to
use
the
tool
called
the
channel
combination,
which
is
the
the
same
one
that
we
have
used
to
make
an
rgb
image
out
of
three
monomasters,
but
I'm
gonna
set
it
to
operate
on
the
hsv
space
and
I'm
going
to
use
the
h
version
of
this
image
as
produced
by
the
script
as
h,
the
sv
image
version
of
this
image
is
produced
by
the
script
and
I
am
not
going
to
use
the
v
value.
In
other
words,
I'm
going
to
use
whatever
is
the
v
value,
the
value?
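The idea of that recombination can be sketched per pixel with Python's standard colorsys module: hue and saturation come from the repaired image, while the value channel of the original is kept. Names and pixel values here are illustrative, not the script's actual output:

```python
import colorsys

def recombine(original_rgb, repaired_rgb):
    """Keep V from the original pixel; take H and S from the repaired one."""
    h_rep, s_rep, _ = colorsys.rgb_to_hsv(*repaired_rgb)
    _, _, v_orig = colorsys.rgb_to_hsv(*original_rgb)
    return colorsys.hsv_to_rgb(h_rep, s_rep, v_orig)

# A blown star core: brightness from the original, color from the repair.
original = (1.0, 0.95, 0.9)    # nearly saturated, with an artifact hue
repaired = (0.6, 0.7, 1.0)     # bluish hue recovered from the star's halo
out = recombine(original, repaired)
print(out)
```

The result keeps the star exactly as bright as before but wears the repaired, plausible color.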
B
Once this is done, nothing has changed in the... sorry, not in the actual image, but in the way that I see the image after applying the STF. I can close these three versions, and what I'm going to do at this point is create the starless and the stars-only versions of this image.
B
To my knowledge, there are different ways to do that. Doing it in the linear space is very convenient, in my opinion, because that's what allows you to stretch the two images separately. But the most common way to do it, StarNet++, does not work on linear images, so it would require some tweaking: you would need to do a pre-stretch which does not saturate anything, then apply StarNet, and then un-stretch. Or there are scripts that do that automatically.
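The pre-stretch/un-stretch trick works because the midtones transfer function used by HistogramTransformation is exactly invertible when no clipping is applied: the inverse of an MTF with midtones balance m is the MTF with midtones balance 1 − m. A sketch:

```python
def mtf(m, x):
    """Midtones transfer function (no shadows/highlights clipping, so it
    is exactly invertible)."""
    if x == 0.0 or x == 1.0:
        return x
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

x = 0.02                                 # a faint linear pixel value
stretched = mtf(0.05, x)                 # aggressive pre-stretch, saturates nothing
# ... star removal would run on the stretched image here ...
restored = mtf(1.0 - 0.05, stretched)    # un-stretch back to linear
print(x, stretched, restored)
```

Running the same round trip on every pixel returns the starless result to linear space, which is what the automatic scripts do under the hood.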
B
There are still some artifacts, but they are easy to work with. So let me use StarXTerminator here, and again, I'm not giving them free advertising, but I like what it does. As you can see, it has a checkbox where you can tell it: this image is linear, so do what you need to do, but keep in mind that it's linear. It's also going to ask you whether to generate a star image; this is optional.
B
And on my Mac, StarXTerminator is very slow, so I apologize: it's going to take a couple of minutes, just because my Mac doesn't have a discrete GPU; it's using the GPU inside the Intel i7 processor to do it. Modern Macs, needless to say, are much faster, and if you have a Windows machine with a nice GPU, it's going to fly through. But it's not too bad; as you can see, we are 25% in.
B
All I'm going to do is essentially going to be based on the vision that I have in my head of how I want the final image to look, because essentially everything is dictated by my taste, by my aesthetic perception.
B
55%, so it's chugging along. But I hope you're going to be as impressed as I was the first time I used StarNet... sorry, StarXTerminator. If you're familiar with StarNet++, it leaves behind a strange texture underneath the larger stars, like this big SU Cassiopeiae here. StarXTerminator does an excellent job of creating a texture that matches the overall noise of your image, so it's almost invisible. On the flip side...
B
And now we have the stars here, and we have the nebula here; we can start stretching. For the stretching at this point, I'll keep the stars for later, so I'm going to iconize them here. But here I can be as aggressive as I want with the stretching. I can actually stretch with HistogramTransformation instead of using sophisticated tools like ArcsinhStretch or MaskedStretch. I can just do it here, and...
B
I have enabled the real-time preview, so you can see the result of what I'm doing. Yeah, I can stretch as much as I'm comfortable with, as you can see... maybe not.
B
And now this is no longer a linear image; I just stretched it. So let me remove all these previews, which are just confusing at this point. Now I want to do something to make this image more readable, to make the Helping Hand more obvious. So the first thing I'm going to do (these are two moves that I learned along the way): I use the ExponentialTransformation, using the power of inverted pixels.
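The "power of inverted pixels" curve is commonly written as y = x^(1−x), each normalized pixel raised to the power of its own inverse; I'm taking that form as an assumption here, and ExponentialTransformation also exposes parameters not modeled in this sketch. The effect is a strong lift of faint pixels and almost none for bright ones:

```python
def pip(x):
    # Assumed PIP curve: raise each normalized pixel to the power of its
    # own inverse; faint values get the biggest relative boost.
    return x ** (1.0 - x)

for v in (0.05, 0.2, 0.5, 0.9):
    print(v, "->", round(pip(v), 3))
```

Because the exponent is below 1 everywhere except the endpoints, every pixel is lifted, but the relative boost is far larger in the shadows, which is why it pulls faint nebulosity out of the background.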
B
I
use
this
tool,
has
an
internal
lightness
mask
which
you
can
enable
or
disable
it
basically
saves
you
the
hassle
of
going
and
creating
a
mask.
Let
me
show
you
in
the
preview
what
it
does
it
does,
a
very
gentle
or
it
can
be
gentle
or
hard,
depending
on
how
how
you
tweak
this
parameter
in
this
way,
it's
relatively
gentle.
This
is
the
before,
and
this
is
after.
It
brings
it
out
of
the
background.
B
Good, so now it's more visible. I would like to do something to make the contrasts even more apparent, and my tool of choice for this is usually the LocalHistogramEqualization tool.
B
You can apply it with a mask, or you can apply it without the mask, and you need to be... I mean, it takes a little judgment. If I apply it at amount 1.0, let me show you what it looks like using the preview... yeah, it creates what I would call an unpleasant texture in the image; it's too much. But if you're not that aggressive and you scale it back to maybe 0.40, it's very nice. It creates... oops, for some reason it disappeared.
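The amount parameter behaves as a simple blend between the original image and the fully equalized result. A minimal sketch, with a global rank-based equalization standing in for the local, contrast-limited equalization the real tool performs:

```python
import numpy as np

def equalize(x):
    """Global histogram equalization via normalized ranks (a stand-in for
    the tile-based, contrast-limited equalization LHE actually does)."""
    flat = x.ravel()
    ranks = flat.argsort().argsort().astype(float)
    return (ranks / (flat.size - 1)).reshape(x.shape)

def lhe_amount(x, amount):
    # amount = 1.0 reproduces full equalization; 0.4 is a gentler mix.
    return (1.0 - amount) * x + amount * equalize(x)

rng = np.random.default_rng(1)
img = rng.random((8, 8)) ** 3          # values skewed toward the shadows
gentle = lhe_amount(img, 0.4)
harsh = lhe_amount(img, 1.0)
```

At amount 1.0 the histogram is flattened outright, which is the "unpleasant texture" case; 0.4 keeps most of the original tonality while still opening up the contrast.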
B
Okay, this is the result: before, after; before, after. To my eye, this makes the dark section of this nebula more prominent, not because it changes anything in their DNs themselves, unlike what the script called DarkStructureEnhance does. What it does is change the surrounding parts, so it creates more "micro" contrast, with "micro" in quotes, because here we're talking about large-scale structures anyway.
B
Maybe I don't do it at all. I see a question: how do I get StarXTerminator? I don't see it in my version. Yes, it's an add-on that you have to buy. It's not part of PixInsight itself. It's a product of... I can't remember the name of the guy, but it's an independent developer. It's not part of the PixInsight collaboration.
B
So-
and
I
don't
know
if
you
are
familiar
with
this
tool-
the
color
saturation
tool-
it
allows
you
to
do
some
very
interesting
things
like
let's
say
that
I
wanted
to
give
more
saturation
to
this
bluish
reflection
nebula
well,
and
I
don't
want
to
saturate
anything
else.
Well,
you
see
that
here
there's
a
spectrum.
B
...and almost nothing to the other colors. Let's see how it looks once I apply it. Yeah: before, after; before, after. This section here is a nicer color, to my eye at least. And we could continue doing this for a long time; we can continue working on this image until our aesthetic sense is pleased. But at some point we're going to have to bring back the stars.
B
So what do we do with the stars? Well, the stars are the ones that require some attention here, because these are unstretched stars. We still have to stretch them, and I want to stretch them in a way that does not destroy them, that preserves their colors. The way I do this is with the ArcsinhStretch operator. ArcsinhStretch is great for stars; it preserves the color in an excellent way.
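The reason ArcsinhStretch is kind to star colors is that it derives one stretch factor from each pixel's luminance and multiplies all three channels by it, so the R:G:B ratios survive. A sketch, using a plain channel mean as the luminance (the real process uses a configurable luminance and a black point):

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=50.0):
    """Scale R, G and B by the same luminance-based factor, preserving
    the channel ratios (i.e. the star's color)."""
    lum = rgb.mean(axis=-1, keepdims=True)
    factor = np.arcsinh(stretch * lum) / (lum * np.arcsinh(stretch))
    return rgb * factor

star = np.array([[0.008, 0.005, 0.012]])     # a faint, bluish linear star
out = arcsinh_stretch(star, stretch=150.0)
print(out)
```

A per-channel curve such as a plain histogram stretch would compress the bright channel more than the faint ones and wash the star toward white; scaling all three channels together avoids that.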
B
I don't want to go too far, because, you see, the stars start to be really very much in your face: the stars become very saturated and the halos start coming out. I prefer to keep the stretch factor somewhere around 100 to 150; let's say 150 for the moment.
B
I take PixelMath, and I am going to use an expression that I found on other websites for combining two images: you take the inverses of the two images and multiply them by one another. The tilde character means "invert" in PixInsight, and $T is the target image.
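In PixelMath that recipe reads `~(~$T * ~stars)` applied to the starless image, with `stars` standing for whatever your star-only image is called. Since `~x` means 1 − x, this is the classic "screen" blend:

```python
def screen(a, b):
    # ~(~a * ~b): invert both images, multiply, invert the product back.
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.0, 0.6))   # a star on an empty background -> 0.6
print(screen(0.3, 0.6))   # brightens, but can never exceed 1.0
```

Unlike simple addition, the screen blend cannot clip: the result approaches 1.0 asymptotically, so star cores land on the nebula without burning out.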
B
This gets rid of this ugly green cast and gets us to an image which is very similar to the one that I actually posted on AstroBin.
B
If you want, you can stretch more, and you can tweak things more, like the ExponentialTransformation that I used, or the LocalHistogramEqualization. And, as I said, this is pretty pictures, it's not science: if you are happy with the result, that's what counts. Are there any questions?
D
Not that this image needs it, but if you wanted to reduce the stars, or basically eliminate the smallest stars, when would you do that, and how?
B
I would do it just before this combination. Just before doing this combination, I would go to the stars-only image; this only contains the stars, so it doesn't even need a star mask to operate. And here I would do some nice MorphologicalTransformation, like a morphological selection, using a structuring element maybe five pixels wide.
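Morphological selection is a rank-order filter: selection 0 is an erosion (local minimum), 1 is a dilation (local maximum), and intermediate values pick an order statistic in between, which shrinks stars less brutally than a pure erosion. A one-dimensional sketch of the idea, not PixInsight's implementation:

```python
import numpy as np

def morphological_selection(signal, size=5, selection=0.25):
    """Rank-order filter: at each position take the given quantile of the
    values under a size-wide structuring element."""
    pad = size // 2
    padded = np.pad(signal, pad, mode="edge")
    out = np.empty_like(signal)
    for i in range(signal.size):
        window = np.sort(padded[i:i + size])
        out[i] = window[int(round(selection * (size - 1)))]
    return out

# A tiny star profile sitting on a flat background:
star = np.array([0.1, 0.1, 0.3, 0.8, 1.0, 0.8, 0.3, 0.1, 0.1])
shrunk = morphological_selection(star, size=5, selection=0.25)
print(shrunk)   # the peak comes down, the flat background is untouched
```

Applied to the stars-only image, this pulls down the profiles of the small stars while leaving the empty background alone, so the recombined image shows them more faintly.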
B
This
has
essentially
reduced
the
smaller
stars
you
see
before
after
before
after
and
then
when
I
apply
this
when
I
recombine
the
stars
with
the
starless
version,
I
have
fewer
stars
or
less
visible
stars.
The
stars
are
still
there,
but
they
are
less
apparent.
B
I
B
I
That's
also
an
option
that
was
a
great
answer.
That
should
be
very
handy.
Are
you
tempted
here
to
try
dark
structure,
enhance.
B
Sure, we can certainly do that. Let's see what it looks like. I didn't use it on my final image on AstroBin, but these are my default settings: an amount of about 40, so 0.4, and one iteration only. Let's see what it looks like.
B
Almost
done
it's
done
all
right
so
before
after
before,
after
it's
not
very
evident
and
in
my
opinion,
it
makes
this
hand
here
it
seems
to
be
making
it
a
little
bit
too
dark
to
my
taste.
E
Francesco, this is, you know, a beautiful image, and it doesn't really need anything more, but I am wondering: was it a time constraint that caused you not to use luminance, or is it a particular type of target for which you find you don't need luminance, rather than RGB only?
B
Essentially, there are two reasons. One is related to my equipment.
B
I noticed that although my refractor is supposed to be an apochromatic refractor, it is so only to some extent, like all real-world refractors. And so when I do R, G and B, I can optimize the focus for each of the three colors, whereas when I shoot luminance I have to accept a compromise, and as a result I have larger stars than in each of the R, G and B images. This is one.
B
The second one is that, in general, an LRGB combination tends to enhance the detail at the expense of color saturation.
B
So in an image that does not have much fine detail, like this one, it's probably kind of pointless. I'm also following some of the recommendations that Juan Conejero has given over and over in his forums: if you want to do an LRGB combination, you...
B
So, what you're referring to: that holds if you consider it from the noise perspective. If you consider it from the signal perspective, well, if you've binned your mono CMOS camera, you're still collecting photons from a larger area for each of the pixels that you're going to see on the computer screen.
A
Francesco, and of course we look forward to hearing from you again, at least next month. So everybody, unmute and thank Francesco, and then we'll be off.