Description
PixInsight and friends: processing garbage… err, data with issues
Francesco Meschia
We’ll re-create the image of NGC 7331 and Stephan’s Quintet that received the Astrobin “top pick” silver star last September, starting with 17.5 hours of challenging RGB lights taken from a Bortle 7 backyard. Gradients, noise, details… PixInsight will get by with a little help from his friends: Adobe Photoshop, Topaz Denoise AI, and introducing StarNet++ V2!
A
Hi everyone, welcome to the February SJAA Imaging SIG. Once again we have Francesco Meschia going to teach us more about PixInsight image processing. It was a great presentation last month; take it away, Francesco.
B
Thank you very much. Hi. So, we call this session, jokingly, "processing garbage data." I think somebody pointed out to me that maybe it's not garbage enough; we will try to do worse next time, but for now we're going to work with this data.
B
So, without further ado, I'm going to open the subs… sorry, not the subs, the master lights that I had shared with this group: R, G and B. Very good, all right. So the worst offender is the red master. You see that there's a really very bad gradient with a circular shape in the center, like a half moon of darker pixels around it, and of course a horrible left-to-right gradient as well.
B
We were discussing earlier what the reason could be. A light leak is what I suspect; it was also suggested that it could be due to dew forming on the lens and reflecting or diffusing the light in different ways, or scattering in the optical tube. It's quite possible. But the point is that after you collect the 17 hours of light, and you find that this has really given a poor result…
B
You have two options. You can throw everything away and start over (and of course, before doing that, go down a rabbit hole finding the root cause and trying to fix it), or you can try to make do with what you have in post-processing, which is exactly what we're going to talk about here.
B
When you image from the backyard, these things are relatively common. The other thing that happens all the time, imaging from under light pollution (and my backyard is Bortle 7), is the poor signal-to-noise ratio. You can, to some extent, mitigate the poor signal-to-noise ratio by increasing the integration time. In this case I did 17 hours distributed among the three channels, so I could have done better for sure; but just increasing the integration time would still incur this gradient problem.
B
What we need to do is to model the background, to use the PixInsight terminology, so that you can figure out what correction would be necessary to basically flat-field this image after the fact. There are several techniques that I can recommend for that. One that works, in my opinion, exceptionally well, but has some requirements, is Vicent Peris's multiscale gradient correction technique.
B
Vicent is one of the developers, one of the people behind PixInsight, and he published a very nice article in which he corrects some very weird gradients in a very dim object: something that you want to bring out of the background, but the background is not much darker than the object itself, so you need to be careful in how you correct it. Otherwise you're going to essentially eat the target you want to bring out together with the gradients themselves.
B
The assumption is that some of the problems we are trying to correct are due to something in the optical train, additive effects like light leaks. If you image the same region with a much wider field, chances are that the portion of that wider field that overlaps with the image you want to correct has a much smoother, simpler gradient, probably a linear one, that is easier to correct.
B
So what you do is normalize your gradients to that wide-field imagery, and then you can easily correct it, subtract it. In his article essentially everything was done via PixelMath, but nowadays there is a tool in PixInsight which is also very powerful for that: NormalizeScaleGradient, a script found here under Batch Processing > NormalizeScaleGradient.
B
You need to have some reference imagery that has either no gradient or a simple gradient. If you have that, PixInsight will use photometry to determine how much each of the corresponding regions in your subs needs to be brightened or dimmed in order to bring the overall gradient back into the same shape as your reference imagery.
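The idea behind that photometric normalization can be sketched with a toy example. Everything below is an assumption made for illustration: tile medians stand in for the real photometry of matched stars, and nearest-neighbour upsampling stands in for the smooth interpolation the actual script performs.

```python
import numpy as np

def normalize_to_reference(target, ref, tiles=4):
    """Estimate a per-region additive correction that brings the target's
    large-scale background into the same shape as the reference's."""
    h, w = target.shape
    th, tw = h // tiles, w // tiles
    offsets = np.zeros((tiles, tiles))
    for i in range(tiles):
        for j in range(tiles):
            t = target[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            r = ref[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            offsets[i, j] = np.median(t) - np.median(r)
    # nearest-neighbour upsample of the offset grid back to full size
    correction = np.kron(offsets, np.ones((th, tw)))
    return target - correction

# synthetic demo: the reference is flat, the target has a horrible
# left-to-right gradient like the red master
ref = np.full((64, 64), 0.1)
gradient = np.linspace(0.0, 0.2, 64)[None, :].repeat(64, axis=0)
target = ref + gradient
fixed = normalize_to_reference(target, ref, tiles=8)
```

With 8×8 tiles the residual ripple is just the sub-tile part of the gradient; a real implementation would interpolate the offsets smoothly instead of using blocks.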
B
These are the ideal solutions; they work for basically all targets. But in this particular case I didn't have that, and I used a different technique: I used some assumptions and logic to form a model of what the background of this image should be. Now, this is a field in Pegasus: NGC 7331 and Stephan's Quintet.
B
So we are relatively far from the Milky Way. I looked at the reference imagery online, and I didn't find anything that suggests very strong IFN, integrated flux nebula, in this region; and even if there were, I know better than trying to image IFN from Bortle 7, so I don't care. So I think I can assume that this field, at a large scale (not at the scale of the galaxies, not at the scale of stars), should be flat.
B
Now, if we make that assumption, we can use some mathematical tricks in PixInsight to say: okay, if everything should be flat, but it's not, maybe I can come up with a synthetic image that I can subtract from this one to produce my flat field. Let's take a look at how this could be done. But before I do that…
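As a minimal sketch of that logic, here is a toy version in numpy: a wide median filter plays the role of "remove the small-scale layers" (MMT in PixInsight does this far better), the smooth leftover is the synthetic background model, and subtracting it flattens the frame. All names and numbers here are illustrative assumptions, not the actual workflow.

```python
import numpy as np

def large_scale_model(img, box=15):
    """Crude stand-in for removing the first N wavelet layers: a wide
    median filter kills stars and small structure and keeps only the
    large-scale background."""
    h, w = img.shape
    pad = box // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + box, x:x + box])
    return out

# synthetic frame: flat sky + smooth circular "light leak" + a few stars
yy, xx = np.mgrid[0:48, 0:48]
leak = 0.05 * np.exp(-((yy - 24) ** 2 + (xx - 24) ** 2) / 400.0)
img = 0.1 + leak
rng = np.random.default_rng(0)
for _ in range(10):                          # small bright stars
    y, x = rng.integers(2, 46, size=2)
    img[y, x] += 0.5

model = large_scale_model(img)               # stars vanish, leak survives
flattened = img - model + np.median(model)   # subtract, keep the pedestal
```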
B
Okay, so the smaller stars are gone (I'm going to cycle here): the smaller stars are gone, the detail in the galaxy is gone, but the galaxies themselves are still here. Both NGC 7331 and the Quintet are still visible. What if I remove more layers? Now, just to make it faster, let me use a preview.
B
Oh, we are starting to get there. Let's remove eight layers; the galaxy is still there, so we can go even further. And I can decide, for instance, to do this reduction a different way: the MultiscaleMedianTransform can only remove up to eight layers, then you have the residual layer, and you cannot do more than eight. But you can use a trick.
B
Look: if I blink, you'll see that this looks like it could be a plausible model of the background of the sky.
B
No; fortunately, PixInsight automatically scales things. As long as they have the same aspect ratio you're not going to have problems, and of course these do have the same aspect ratio.
B
Before, after: most of the gradients are gone. However, there is a problem. Let me zoom in to show it a little more clearly: there is a darker ghost image around NGC 7331 and also around Stephan's Quintet. And if you think about it, the reason for this is visible in the model. You see that, in addition to the central circular light artifact, there's also a bulge here, which corresponds to NGC 7331, and a fainter area of brighter background here, which corresponds to the Quintet.
B
Now, what do we do if we want to get rid of that too? Well, let's see: we could downsample further, we could remove another layer, and the galaxy would become less visible. Let's see if this is a good idea. Let me do it on an alternative image, so I'm going to copy this, and I'm going to reset and downsample again by two.
B
Okay, here we are, and let's remove again the first eight layers.
B
So this is unpleasant, because this one is not sufficient and this one is too much, and there isn't really an in-between situation; it's not easy to get. What I did was to take my assumption about what the background should look like a step further. I took this background model and recognized that it has some things that I wouldn't want to see here, and it's relatively easy to see what they are; what I did was to paint over those parts that I don't want, using Photoshop. I'm not painting data, I'm not creating data; this is only the background.
B
I'm painting over the background model, because I want the background model to be more similar to the one that I think it should be, and let me show you as an example what I did. I'm just going to save this… oh sorry, I cannot save it right now. I need to export it into Photoshop, because Photoshop has much better painting instruments than PixInsight has; but this is a linear image, and I cannot export it directly into Photoshop.
B
I need to make it into a stretched image, otherwise it would be black. The way I chose to do it is by using the HistogramTransformation tool to make, essentially, a permanent stretch from the auto-STF. I'm going to take whatever the STF function is that is displayed here, import it into the HistogramTransformation tool by checking the track-view button, and then drag and drop it here. I also use the track button here, because I want the HistogramTransformation to tell me how many pixels I am clipping to black or to white at this moment. Fortunately there are no clipped pixels; you see that both for shadows and highlights the clipping is zero, so I'm not throwing away any data here, which is good. I can just remove the auto-STF, apply the permanent stretch, and now it's stretched. Before closing HistogramTransformation, though, I'm going to drag an instance…
B
…a copy of this instance, onto the desktop, the workspace actually, sorry, to use the PixInsight terminology, because we're going to have to go back to a linear image before proceeding, and if I don't have this I'm not going to be able to do that. I mean, I could get it from the history, but the important thing is that you need to make a note of these two coefficients, the shadows coefficient and the midpoint, because we're going to need them to go back.
B
I want to use something slightly more sophisticated: a tool called the dodge tool, which is the equivalent of the tool we used in the darkroom in the old days, placing something that casts a shadow onto the photographic paper. Although this can easily be done with this tool directly, I prefer not to do it on this image directly; I like to try to do it in a non-destructive way, without changing this background.
B
A black mark here, which I don't like, and I probably want to go back. So I take the eraser, and the eraser can work in exactly the same way as the rest of the tools: I can work it in airbrush mode, saying that the opacity is only 10 percent, and use the eraser here. You see that my mistake has become less and less evident, and now it's gone. I can go back to the paintbrush, and maybe I want to refine this area.
B
We can try to do it, and we will, actually. The problem that I noticed with DBE is that I was never able to obtain this, in short, but it's quite possible that there are some techniques I don't know that would allow you to do it. ABE is likely not suitable, because what you're doing here does not respond to any particular model: ABE just interpolates with a polynomial of variable degree, you just specify the degree, and this is more complicated than that, so I don't know if it would work.
B
Okay, now I go back to PixInsight. Can you still see PixInsight, or are you still seeing Photoshop? You still see Photoshop? Okay, let me switch to PixInsight. Here we are in PixInsight; let me open the image that I just saved.
B
Let's see what happens now. I cannot apply it directly; we have to go back and transform it into a linear one. To transform it into the linear one, I'm going to look at the instance that I had saved, I'm going to put it here, and I'm taking PixelMath. In PixelMath I use this expression: I want to use the inverse midtone transfer function, so the MTF will be one minus this number here…
B
…0.00064979, and this reverses the midtone transformation. Then I need to add the shadows, the pedestal that was here, and add it here. To show you what the result would be, I'm going to do it on this image here. Oops, there must be a mistake; you know what I did.
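The round trip he is describing (permanent stretch via the midtone transfer function, then the inverse via PixelMath with 1 − m plus the shadows pedestal) can be written out explicitly. This is a sketch of the math, not PixInsight's exact implementation; the highlights coefficient is assumed to be 1 and is omitted.

```python
import numpy as np

def mtf(m, x):
    """PixInsight's midtone transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(x, shadows, midtone):
    """Permanent stretch like the one taken from the auto-STF."""
    return mtf(midtone, np.clip((x - shadows) / (1.0 - shadows), 0.0, 1.0))

def unstretch(y, shadows, midtone):
    """Inverse: MTF with (1 - midtone), then add back the shadows
    pedestal; these are the two coefficients noted from the saved
    HistogramTransformation instance."""
    return mtf(1.0 - midtone, y) * (1.0 - shadows) + shadows

# the midtone value read off the saved instance in the talk
m, s = 0.00064979, 0.0
x = np.linspace(0.0, 1.0, 11)
roundtrip = unstretch(stretch(x, s, m), s, m)   # recovers x
```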
B
Now you could apply ABE, maybe, to remove some linear gradient if there's anything left, but there shouldn't be anything left, actually. Now, given the question, let's try to do this with ABE and see what happens. I'm going to move it here.
B
Sorry, not ABE: DBE. Now, the problem here is to place the right samples. So let's try to place a number of samples here, where I have this problematic area.
B
So this is the model that DBE would apply. Let's see what the correction looks like: subtraction, normalization… and discard. Okay.
B
I'm not saying that you cannot do it in DBE; probably, with patience, you can put the right number of samples in the right places and find the right smoothing factor to do that. What I'm saying is that if the assumptions I started with hold true (there's no Milky Way, there's no IFN that I'm worried about, I know that the background is dark, there's no nebulosity except for the galaxies), then you can take this…
B
…this shortcut, "shortcut" in quotes, because you've seen that we had to do a number of steps to get here, and we have to repeat those steps for green and repeat those steps for blue. But if you want to do that, in the end you obtain a really well-flattened image. If you cannot make those assumptions, because of the nature of your target, then the other two techniques that I mentioned are still viable: the multiscale…
B
…the multiscale gradient correction technique from the Peris article, or the use of NormalizeScaleGradient, if you have at least one frame taken from a place where you have simple gradients, so you can normalize everything to it.
E
Just a crazy idea: what about using star extraction in this case? Okay, there is the problem of the galaxies, yeah.
B
After the first maybe five or six layers, the stars are gone.
F
I mean, we have a little thing on the side in the chat, right, and John showed it: like a grid DBE. I think maybe, if you use StarNet to remove the stars, then you don't have to worry about samples falling on stars, and then you can make the grid. Then you just go in with the delete key and delete over the galaxies, and then you can crank the tolerance, because there's nothing left but background.
B
Why don't we try? Absolutely. So I go back to the clone that I had created. Now I need to make it into a stretched image; StarNet does not work on a linear image, but there is a script that allows this.
B
I'm going to do it for this one, all good. It creates the mask, a star mask, I don't care; so let's execute this. Clipping? I don't care.
B
I have StarXTerminator as well. Yes, we can try that.
F
I mean, I think your Photoshop method's a lot more straightforward, you know, than messing with this. In the past I've had to do exactly what you're doing here with complex gradients, and I spent a lot of time moving samples off of stars, like 20, 30 minutes, trying to get it right.
F
Tedious, exactly. I was going to ask, though: you're using MultiscaleMedianTransform, and since you're just deleting layers, could you just use the ATrousWavelet tool? Because it goes up to, I don't know how many layers.
B
…want to preserve. All right, we are done, so this is the starless version. Let's do DBE now, with a grid. Does that work? That was your idea, Rob, right?
B
Well, but I can use this smoothing factor. So let's say 0.5 is the smoothing factor, and I just want to remove the samples that are overlapping this galaxy.
B
That's
another
problem:
let's
do
two
and
a
half
and
put
some
samples
in
the
middle
manually.
Oh
let
me
do
it
regenerate
yeah,
it's
probably
easier.
I
can
always
redo
this.
A
You needed to increase the tolerance. No, I know, but it seemed like you were able to do it even with a lower tolerance. But no.
B
But it's not bad at all. This could probably go through an ABE with a function degree of two, or maybe four, and get something out of it. I start seeing, unfortunately, a little bit around NGC 7331, but yeah.
B
I mean, there are multiple ways, but the one that I was showing with PixInsight, although it's painting, and so it's changing data… as I said, if I thought I had imaged any IFN here, I would not be doing this; but I know that from the backyard I have not. All right, so let me just import the files that I already have corrected.
B
After I stretch, I enlarge the histogram: R and B are fitted to each other; you see that the peak, the histogram shape, is basically the same, but green is not. I need to do a linear fit of green to one of the other two channels, so I'm going to take ColorCalibration > LinearFit, and I'm going to fit green to R.
B
Okay, it did a pretty good job; here's before, after. Yeah, I can't complain at this point. What we need to do now is some color calibration. There are the usual two methods: you can do PCC, or you can use an average of all the stars. Even though this is a rich star field, I think I'm going to use the average-of-the-stars method.
B
I want the tool to detect the structures, because there are galaxies; I could actually consider the galaxies too, after all they are made of stars. Yeah, I can execute this, and I have a decently calibrated image with decent colors. Now, the next step in the workflow that I use is linear noise reduction, and I'm going to use the same tool that we used last time: the MMT tool, MultiscaleMedianTransform.
B
The reason is that the original files resulted from the integration of subs with different lengths, and I was experimenting with different gains, so you cannot use…
B
We were already using it, actually, right? We're going to analyze the way it works by looking at this image the same way that MMT sees it: we're going to see not the actual image, but the changes in the image produced by this tool. If I select layer one, and I just drag the icon of the process onto the image, the image will be processed and only layer one, as seen by MMT, will be visible.
B
All right: this is the image at scale one. There's lots of pixel-to-pixel variation, and some of it is due to the stars, because the stars start coming up from the background.
B
Some of it is due to the galaxy here, you see NGC 7331; but this part here, this is background noise, so I can start applying noise reduction here. How much noise reduction? Well, we have to try, and I'm going to try setting the threshold to one sigma. So what does that mean? It means that everything that goes beyond one sigma, one standard deviation from the median in this image seen this way, so only at the layer-one scale, will be preserved.
B
Everything
that
is
below
one
sigma
will
be
cut,
especially
it
would
be.
There
would
be
a
coefficient
to
zero
apply
to
those,
and
if
I,
if
I
drag
this
icon,
I'm
gonna
see
what
the
image
look
like
looks
like
after
applying
it.
So
there's
some
level
of
reduction,
but
most
of
the
noise
is
still
here.
So
let's
go
at
threshold
2,
sigma.
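The layer-threshold logic he describes can be sketched directly: estimate the layer's noise sigma robustly, keep coefficients that deviate from the layer median by more than k·sigma, and zero out the rest. This is an illustrative toy, not MMT's actual estimator:

```python
import numpy as np

def threshold_layer(detail, k):
    """Keep only coefficients more than k sigma away from the layer
    median; everything else gets a zero coefficient (here: set to the
    median, which for a detail layer is ~0). Sigma is estimated
    robustly from the median absolute deviation."""
    med = np.median(detail)
    sigma = 1.4826 * np.median(np.abs(detail - med))
    keep = np.abs(detail - med) > k * sigma
    return np.where(keep, detail, med)

rng = np.random.default_rng(2)
layer = rng.normal(0.0, 0.01, (64, 64))   # layer-1 style background noise
layer[32, 32] = 0.2                        # one real structure (a star)
cleaned = threshold_layer(layer, k=3.0)    # star survives, noise is cut
```

Raising k from 1 to 2 or 3 sigma is what moves the tool from "barely touched" to "aggressively flattened," exactly as in the demo.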
B
You're going to see, as a result, that the noise is there but subdued, which could be a good option, and I typically use that at the larger scales; but I like to go aggressive at the finest scale, especially if I haven't had a chance to do any noise reduction before that.
B
There's also this other parameter, the adaptive parameter. Adaptivity tells the tool how to deal with outliers. As I said, this is a statistical method: everything that goes beyond a certain threshold, expressed in standard deviations, gets through, and everything below gets cut; but the adaptive parameter locally adapts the threshold that you set to the statistical properties of each part of the image.
B
Yeah, this one I like, so I'm going to keep this. The second layer: what does it look like after applying it? Every time you do this with a preview, you have to do an auto-stretch, because each scale has a different median. So now we start seeing, oh sorry, we start seeing more of a certain structure in the galaxies that we wouldn't see before, and the stars look slightly bigger, as expected. Let's apply noise reduction.
B
Okay, very aggressive. Maybe we don't need the adaptive anymore; let's see, larger scales tend to be more regular.
B
At layer three I start to see some mottling coming out of the background, so maybe… well, let's try to leave it like this. I could continue with the scales corresponding to 64 pixels and 128 pixels; I don't think it's needed for this image. What I'm going to do, on the other hand, is make the effect of the noise reduction less strong: I like to use a lower strength for higher layers. So let's start with 1.0.
B
I have to say that it's pretty good; the starting point is pretty good, at least if you use the same settings under the same conditions. If I image from home, I use a template that I just tweak as needed. When I go image from Pinnacles, though, the level of noise is much less, and I recalculate everything from scratch; but yeah, from home, I reuse the settings.
B
Therefore,
the
the
use
of
the
adaptive
parameter
makes
the
execution
of
mmt
much
slower
and
slower
and
slower
as
the
lab
is
for
higher
layers.
So
you
don't
you
don't
want
it
if
possible,
to
use
the
adaptive
parameter
to
be
other
than
zero
for
layers
higher
than
five
all
right.
The
result
is
not
it's
not
too
shabby.
This
is
ngc
7331,
with
the
four
fleas
as
they
are
called,
and
this
is
the
the
quintet.
B
So
I'm
going
to
remove
this
mask
and
before
stretching
the
image,
I
would
like
to
do
the
same
thing
that
I
did
last
last
time,
which
is
to
try
to
do
something
to
restore
the
colors
of
the
of
the
star
cores
now.
What
is
the
problem
now?
Look
at
these
stars
here.
The
halo
of
the
star
is
orange.
It's
definitely
orange,
but
the
core
is
almost
pure
white,
so
we
need
to
do
something
if
to
to
fix
this,
and
let
me
let
me
show
again
what
I'm
saying
it's
pure
white.
B
There is a scaling operation in progress, and so the value is no longer 1.0 in the floating-point representation that PixInsight uses; for some channels it is lower.
B
So what do I want to do? If I look at these stars with the STF removed, you see there's no color; maybe they are even slightly teal in color, really the opposite of what the halo looks like. There is a wonderful script called Repaired HSV Separation that repairs the color of the saturated star cores using the color information from the halo.
B
You just need to instruct the script: how much you want to clip in the shadows (I prefer to clip nothing), and how wide you want the largest repair, the star radius, to be: 35…
B
…35 pixels is appropriate for my image scale and my gear. And the level, from zero to one, where you want the repair process to start: basically, the script is going to start looking suspiciously at any star that has a peak exceeding 0.5 in floating-point units. The output of this will be three files in the HSV space. Now, in the HSV color space, H stands for hue…
B
…S stands for saturation, and V is value, which is basically the brightness. Now, we don't want to reuse the brightness here, because we were already happy with the brightness that we had in the original image; we just want to use the hue and saturation, because those are the ones where the repair happened.
B
So I want to take these two representations of this image in this color space and replace the corresponding channels in the original image. To do so, I use the ChannelCombination tool, I set it to operate in HSV, and then I tell it: don't touch anything for V, for value, but use these images for the other channels.
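The channel mix can be sketched per pixel with Python's standard colorsys module: hue and saturation come from the repaired image, the value stays from the original. The pixel values here are made up for illustration:

```python
import colorsys

def repair_core(core_rgb, halo_rgb):
    """Take hue and saturation from the repaired (halo-derived) pixel
    and keep the value/brightness of the original pixel: the same
    channel mix as ChannelCombination in HSV mode with V untouched."""
    h, s, _ = colorsys.rgb_to_hsv(*halo_rgb)
    _, _, v = colorsys.rgb_to_hsv(*core_rgb)
    return colorsys.hsv_to_rgb(h, s, v)

white_core = (0.98, 0.98, 0.98)   # saturated star core, no colour left
orange_halo = (0.9, 0.5, 0.2)     # the colour information we trust
r, g, b = repair_core(white_core, orange_halo)
# the core keeps its brightness but takes on the halo's orange hue
```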
B
And they become orange, just like their halo. Note that you cannot really see it when you have an auto-STF applied, because everything goes to white anyway; but when you remove the auto-STF, you definitely see that the colors were repaired. Great, now we have good colors; we are ready to stretch. Now, I apologize, I'm not familiar with the script that Steve was mentioning. What I do is use one of its, I think, ancestors, which is the ArcsinhStretch.
B
This was created, I think, by Mark Shelley a few years back. I'm going to do a real-time preview, and I'm going to start trying. My goal here is to stretch the image to the point where the galaxies start to become visible, but the stars are not yet becoming saturated, because the whole point of using the ArcsinhStretch is that you want to preserve the star colors.
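Why an arcsinh-style stretch preserves star colors is easy to see in a sketch: one multiplicative factor per pixel, computed from the luminance, scales all three channels, so the R:G:B ratios are untouched. This shows the general idea only; it is not Mark Shelley's exact implementation (the black-point handling, for one, is omitted):

```python
import numpy as np

def arcsinh_stretch(rgb, strength=100.0):
    """Colour-preserving stretch: a single per-pixel factor derived
    from the luminance scales all three channels together."""
    lum = rgb.mean(axis=-1, keepdims=True)
    factor = np.where(lum > 0,
                      np.arcsinh(strength * lum)
                      / (np.maximum(lum, 1e-12) * np.arcsinh(strength)),
                      1.0)
    return np.clip(rgb * factor, 0.0, 1.0)

star = np.array([[[0.009, 0.005, 0.002]]])   # faint orange star, linear
out = arcsinh_stretch(star)                   # brighter, same R:G:B ratio
```

A plain per-channel histogram stretch would push the brightest channel toward saturation first, which is exactly how star cores go white.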
B
Also, I don't want the stars to become overwhelming, because I would rather stretch the stars less in this step than have to do star reduction later. So I set this tool in order to get the stars that I would like, and maybe adjust, I'm just going to adjust, the background just a tad; I don't want to clip anything to black. And then I apply the tool.
And
I
obtain
this
behind
the
previews
because
they
take
away
a
little
bit
now.
These
images
are
still
very
dark.
If
I
look
at
the
histogram,
it
picks
at
around
0.7.
Actually
it's
a
little
bit
too
dark
for
my
liking.
Let
me
undo
and
apply
arcs
and
h
again
without
any
left
pointer.
Actually,
so
the
flag
point
stays
at
zero.
B
Okay, it's slightly better; it peaks now at 0.0861, not too bad. Now, I'm okay with this brightness of the stars, I don't want the stars to become overwhelming, but I'm not okay with this level of brightness of the non-stellar objects, the galaxies; I want them to be brighter.
B
…a tool that Rob is very familiar with: I'm going to run StarNet 2, asking it to create a star mask, with the default setting of stride 256, which is what the author, Nikita, recommends. Note that StarNet 2 can operate on linear data, if you want, although the current version has some bug and it doesn't really work well for me on linear data; but this is no longer linear, so we're fine.
B
If you are interested in trying out this tool, there's a long thread on Cloudy Nights, in the vendor software forum I believe, where the author is discussing what the tool does, and there are also the links to download it. This tool used to be downloadable on SourceForge, but since about a week ago, SourceForge realized that this is not really an open-source tool, because it's closed source, the source is not available, and they don't want you to use their platform to distribute closed-source software, so they closed down the distribution page.
B
Yeah, I think he's going to have to come up with some alternative hosting strategy, because SourceForge is no longer suitable for this.
B
Now, my personal opinion of StarNet 2: it does an amazing job. It's so much better than the old one; it doesn't have all those artifacts left over on the stars.
B
It's actually even better, and I'm sorry to say so because I paid for StarXTerminator, but it's better than StarXTerminator: it leaves fewer artifacts behind. Sorry, not artifacts: StarXTerminator removes some parts of the galaxies, and StarNet leaves in place everything, or almost everything at least. Right, so you can see: this is the starless image, this was before, and this is after. It's amazing.
B
Amazing, though, but not exactly perfect, because let me show you some interesting things.
B
So this galaxy here, near my cursor, completely disappears after StarNet.
B
So what I do is take the result with a grain of salt, and instead of just taking it and using it, I export it to Photoshop again and I start applying corrections. The corrections that I'm applying are of two types.
B
I use the content-aware fill tool of Photoshop to fix the artifacts left behind by StarNet, and then I restore the galaxies that were eaten by StarNet++ by taking them from the original starry image. And I'm going to show you again how I do this in Photoshop.
B
Let me show you some ways. I can open another version of this image, the version before applying StarNet; I have exported it as well.
B
And this one is the non-StarNet version. I can select it, copy it, and paste it as an additional layer, and I can start looking at the things that are not present in both. If I do the difference, the difference, of course, is the stars; right, it is the difference between the starry version and the starless version.
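The layer arithmetic behind this is simple enough to sketch in numpy: the Difference blend of the starry and starless layers isolates the star signal, and adding it back onto a (possibly repaired) starless layer reconstructs the starry image. Synthetic values, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
starless = 0.1 + 0.02 * rng.random((16, 16))   # background + galaxies
stars = np.zeros_like(starless)
stars[4, 4], stars[10, 12] = 0.6, 0.3          # star signal only

starry = np.clip(starless + stars, 0.0, 1.0)   # the original image

# Photoshop's Difference blend of the two layers isolates the stars:
difference = np.abs(starry - starless)

# and adding them back after the starless layer is fixed restores them:
recombined = np.clip(starless + difference, 0.0, 1.0)
```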
B
It's only the stars. Then I can zoom in and start seeing what the bigger stars are, and by changing either the blending mode, or maybe by blinking this, I can start seeing what is left behind.
B
Okay, can you see my PixInsight? Yes? Perfect. Right, if I go back to PixInsight, I see that StarNet also created the star mask version. Now, this star mask: I could use it as a guide to determine what the big stars are, the stars that are most likely to create artifacts.
B
Let me do it by extracting. First, I'm going to extract the luminance component, so I'm going to convert it to black and white, and I am going to binarize it with an appropriate threshold, like 0.1.
B
And then I am going to export it to Photoshop. Now, this is basically a one-bit image; I don't need to go through the pains of using TIFF to preserve the dynamic range. I can just copy the view to the clipboard, then go to Photoshop, create a new document with the same size as the clipboard, and paste it.
B
The magic wand, with non-contiguous selection, has selected everything that is white in this image. Right, that's what we want; but I don't want to transport this black-and-white image into my starless image, I want to transport the selection. Now, Photoshop has a tool to do that, and it's in the Select menu: you can save the selection. I'm going to save it into the file that I want to process, and I'm going to call it "star selection."
B
And
this
is
it:
these
are
not
the
stars,
it's
just
the
selection,
it's
the
ghost
left
by
the
stars,
but
they
are
all
the
stars:
small
small
and
large
okay,
I
am
gonna
use
the
selection,
modify
tools
to
crop
away,
the
small
stars
and
only
select
the
big
one.
So
first
thing
I
do
is
to
contract
the
selection.
I
shrink
it
by
four
pixels.
Any
star,
which
is
less
than
four
pixels
in
in
the
radius
will
be
wiped
away.
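Select > Modify > Contract is morphological erosion, and the "any star smaller than the contraction radius disappears" behaviour falls out of it. A toy version with a cross-shaped structuring element (the exact kernel Photoshop uses is an assumption here):

```python
import numpy as np

def contract(mask, px):
    """Shrink a boolean selection by px pixels: in each pass a pixel
    survives only if it and its four neighbours are selected, and
    repeating the pass px times contracts the edge by px pixels."""
    out = mask.copy()
    for _ in range(px):
        m = np.pad(out, 1, mode='constant', constant_values=False)
        out = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
               & m[1:-1, :-2] & m[1:-1, 2:])
    return out

mask = np.zeros((32, 32), dtype=bool)
mask[2:5, 2:5] = True        # small star: radius < 4 px, will vanish
mask[10:25, 10:25] = True    # big star: survives, just smaller
small_gone = contract(mask, 4)
```

To operate on the full footprint of the surviving big stars, the contracted selection would then be expanded back out by the same amount.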
B
Now we start to see something interesting: there is a selection circle placed over every star that I want to operate on. This is great. However, I want to operate on the background here, and again, I could do it directly on the background, which would mean operating destructively, changing the layer, or I can operate on a correction layer. Now, given that I'm not really processing the image in Photoshop, I'm just creating a temporary file that I'm going to re-import into PixInsight…
B
At some point I can just use the destructive method, but if you are a Photoshop aficionado, you know that you should not do this for astro image processing; you should use adjustment layers and smart objects. It's quite possible, just a little bit more cumbersome, and if you allow me, I'm gonna skip that, but if you're interested we can go through everything.
B
Regardless of the method, the first thing I'm going to do is to use this selection to apply the tool called Content-Aware Fill. So I'm using the Fill tool in content-aware mode, with color adaptation on and blending mode Normal, with opacity 85, not 100. It basically blends 15 percent of what's on the background with 85 percent of what I have from the result of the tool. So if I click OK, it's gonna think a little bit.
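The 85-percent-opacity Normal blend described above is just a per-pixel weighted average; a one-line sketch makes the arithmetic explicit:

```python
def blend_normal(background: float, fill_result: float,
                 opacity: float = 0.85) -> float:
    """Normal blending at partial opacity, as described in the talk:
    85% of the content-aware fill result mixed with 15% of the
    original background pixel. Inputs normalized to [0, 1].
    """
    return opacity * fill_result + (1.0 - opacity) * background
```

Keeping 15 percent of the original background lets a hint of real texture survive under the fill, which helps the patched star holes blend in.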
B
At first it's going to be hard to see the result, because the marching-ants selection will still be there, and you're not going to see it. But let me hide this for a moment and undo... redo, undo, redo, undo, redo. And a lot of it is already removed. There is still something left over.
B
Nothing prevents you, at this point, from going manually to the very largest stars that you want to do a second pass on, and selecting them: this one, and then also this one. And I see also here there's a ghost. If you're not sure whether it's a ghost or a star or a galaxy, you can just temporarily show all the stars and you're going to immediately identify what's there. So this one was a star, for instance.
B
This time it's faster because my selection was simpler. Yeah, the ghost is gone, and yeah, I'm pretty happy with this. What do I need to do now? Maybe there's still a ghost here, I don't know; you can go overboard and decide to pixel-peep this image to your heart's content. But the thing I want to focus on now is that I need to restore the galaxies that were eaten by StarNet.
B
"Eaten" is a bit unfair; they were not removed, they were just considered as if they were stars. And so, if I don't put them back into this starless version, they are not going to be subject to the same stretching that I plan to do for the rest of the image.
B
Oh, because I had a selection active. Okay, let's do it the horrible way that the real Photoshop experts will hate me for: I flatten the image, and now I can save it.
B
I can at this point do a couple of Exponential Transformations, which is what I like to use, in the power-of-inverted-pixels mode: once, twice... maybe not. The galaxy is becoming clearer. I want to maybe fix the background.
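A "power of inverted pixels" style stretch brightens faint signal by raising the inverted image to a power and inverting back. This is my reading of the idea only, not PixInsight's exact ExponentialTransformation parameterization, which may differ:

```python
def pip_stretch(x: float, order: float = 2.0) -> float:
    """Illustrative power-of-inverted-pixels stretch:
    y = 1 - (1 - x)**order.
    Faint pixels (small x) are lifted, while 0 and 1 stay fixed,
    so the black point and saturated stars are preserved.
    """
    return 1.0 - (1.0 - x) ** order
```

With order 2 this is equivalent to screen-blending the image with itself (2x - x^2), which is why repeated applications progressively brighten the faint galaxy halo.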
B
Right, open the image, and now it's processing; it's doing its job, essentially calculating the first attempt. I'm gonna resize this. Note that these are the settings that I used last time: on a scale from 0 to 100, I have a 2 out of 100 for noise reduction and 3 out of 100 for sharpness enhancement.
B
I don't know how much it's going to be visible on Zoom, but I guess my point here is that I think this is a very powerful tool. You need to spend some time tweaking the parameters, but it can do really nice things, and if your image is still a little bit noisy and you want to have a little bit more sharpness, it does a really good job. And I know that there are different opinions, but I never found that it actually invents the detail.
B
The detail is there; it's basically what you would bring out by a very careful and time-consuming application of deconvolution, but it's doable.
B
So I'm not gonna proceed with this; I'm just gonna show you what the end result is. I have it in PixInsight.
B
If I zoom in, you're gonna see the differences are pretty dramatic: before, after; before, after. And that's basically the sum of the two images. I've done it using the same operator that I showed last time. It's a very interesting expression that I think it was Rob that showed to me.
B
If you want to combine two images and you have stretched them differently, like one of them (the starless version) is now very much brighter than it was before, but the stars have not been processed to the same extent, then if you combine them together, the stars that are superimposed on a galaxy or a nebula end up being overly bright.
B
So what you want to do is to have the equivalent of the overlay operator in Photoshop. You can do it in Photoshop if you want, but you can do it in PixInsight with a very clever mathematical expression, which is this one. The tilde means the inverse: in PixelMath, it means one minus the operand.
B
So this means that, as incredible as it seems, if you do the math you're going to obtain this, which means that the end result will be the sum of the two images minus the product of the two images. And why is there this minus of the product? Well, the assumption is that each of these images is normalized to one, so you don't want to go beyond one. By subtracting the product, you're subtracting a value that becomes higher where the images are both brighter.
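The tilde-based PixelMath expression expands exactly as described: inverting both images, multiplying, and inverting back gives A + B - A*B, which cannot exceed 1 for normalized inputs. A quick numerical check:

```python
import numpy as np

def screen_blend(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """PixelMath-style ~((~a) * (~b)): invert, multiply, invert back.
    Algebraically identical to a + b - a*b for inputs in [0, 1].
    """
    return 1.0 - (1.0 - a) * (1.0 - b)
```

Where either image is dark the result is close to a plain sum, but where both are bright (a star on a nebula) the subtracted product tones the result down, keeping it at or below 1.0.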
B
But if a star is superimposed on a nebula, the star will be toned down, and this will make it possible not to exceed the 1.0, which is the theoretical maximum, without rescaling the results. And I think this was the last trick up my sleeve for this presentation. I didn't look at questions, so if there are questions, please bring them on.
A
Let's see, I'm just looking at the chat, yeah; I hadn't been looking.
A
Thank you, yeah. It doesn't look like there are any more questions in the chat, and we have been going almost two hours.
A
Everything, yeah. But we can entertain a couple questions and then we probably should cut it off at that point. So, does anybody... any questions, guys?
B
Basically, that process creates two of the three components of an HSV color space, by reconstructing the color of the saturated cores from the colors of the halos.
B
And so in the end, you take the output of that script and you trash the V component of the color space, because you don't want to fix the value, you want to fix the color. So what you want to do is to incorporate the H and S components of the color space into the original image, and you do it with ChannelCombination, by setting it to work in the HSV space.
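The idea of taking hue and saturation from the repaired image while keeping the original value can be sketched per pixel with the standard library's colorsys module. The PixInsight route is ChannelCombination in HSV mode, so this is only an analogy:

```python
import colorsys

def repair_color(original_rgb, repaired_rgb):
    """Replace hue and saturation of `original_rgb` with those of
    `repaired_rgb`, keeping the original value (brightness).
    Both inputs are (r, g, b) tuples normalized to [0, 1].
    """
    h, s, _ = colorsys.rgb_to_hsv(*repaired_rgb)  # take color from repair
    _, _, v = colorsys.rgb_to_hsv(*original_rgb)  # keep original brightness
    return colorsys.hsv_to_rgb(h, s, v)
```

A fully blown-out white star core, repaired with the red hue of its halo, comes back as a saturated red at full brightness.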
A
All right. Anything else before we close? All right. Well, everyone, you know, unmute and thank Francesco for once again an amazing session. Yeah, thank...
C
A
Yeah, yeah. Glenn will post that, and so we will have a YouTube link once again.
B
Yeah, they are, they are long. I apologize.