Description
Artificial Intelligence for Astro-Image Processing
Alex Woronow
For the single task of image sharpening, we have multiple filters designed to reach our goal, and we will probably need to use several of them together. Now a single, quick, easily implemented artificial-intelligence tool does the job faster and more effectively than the battery of old-school alternatives. The same is true for image noise removal, image enhancement, image resizing, and more. My talk will describe the general approach taken by current artificial-intelligence image-processing technology and describe and illustrate some of the results one can obtain on astro-images.
A: Hello everybody, and welcome to the June SJAA Imaging Special Interest Group meeting. Let's see, today we have Alex Woronow, who is going to talk to us about AI and image processing for astro images. But before we do, I wanted to mention that next month Paulo Baratoni, I guess, right, is going to be our leader for the monthly meeting, and we don't have a speaker yet. So if somebody has some topic they want to talk about, there are lots of possibilities. Please send it either to the whole list or to me directly, or follow up with a proposal, and we'll see what we can do. There's a bunch of topics: AIC just happened, we have the Golden State Star Party coming up, so maybe a report on that, or whatever is on your mind. That would be great.

But moving on to tonight: tonight Alex Woronow is going to talk to us. Alex is a new club member, and he's a member of a couple of other astro clubs, let's see, the Las Cruces Astronomical Association and the Tucson Amateur Astronomical Association, or something, so it's great to have you with us in the San Jose club. Remember to unmute yourself, and take it away, Alex.
B: Okay, well, thank you for allowing me to do this. I should be sharing my screen now; if you don't see it, say something. I'm going to talk about artificial intelligence in astro-image processing today, but before I do, I'll tell you a couple of other things. One, I'm glad to be a member of your club. I looked around at many clubs to join, looking for one where the imagers appeared to be of high quality, and your club won out with its high-quality images. So as your reward, you got me. Let's see, what else: I'm in Tucson, by the way, so I'm strictly an internet member. So, on with the talk.
B: Okay, here's roughly what I'm going to talk about. I'm going to say some words about what artificial intelligence is and some of the basic principles behind it. A lot of people don't like to use algorithms and programs if they don't understand something about how they work, so we'll cover the basic principles behind artificial intelligence in image processing. Then we'll look at the process that has really kicked AI to the forefront of research in many, many fields: something called a GAN, which stands for generative adversarial network. All of these things are neural networks.

A general statement I read about AI in medical imaging is that it outperforms the doctors who usually read the images. It's also used in surveillance to enhance images, perhaps for identification, but my guess is that a big pusher of AI technology is in fact our military, with satellite surveillance.
B: Some of the things that can be done with AI on astronomical images: one of the most important is denoising, and then sharpening; resampling, meaning enlarging or decreasing the size of your images in a rational and realistic way; structural emphasis, bringing out shock fronts and loops and all those wonderful things that are out there; recognizing that some "stars" in galaxies are not stars but star clusters or clouds; and, of course, brightness and color toning, where AI also plays a part.
B: Okay. Neural networks started way back in the 1950s, and there are probably a few of us who remember that time of life. The researchers' idea was to mimic the brain's decision making. Computers were fairly new and fairly unpowerful by today's standards, but the work started out with what they called artificial neural networks, ANNs, and basically here's what a simple one looks like. The first ones had three layers: an input layer, a hidden layer, and an output layer.
B: These little circles all represent neurons, as in the brain, and these arrows represent weights. So the value of a pixel, perhaps, goes into this one, the value of another pixel into this one, another pixel into this one, and so on, at least for astro images. The value of the pixel is multiplied by some weight and put into this neuron; the value of this pixel is multiplied by some other weight and added to the neuron; and the value here is also multiplied by a different weight and added to the neuron. The network is fully connected. That's not necessarily true in modern networks, but back in the old days all the neurons were connected to every neuron in the next layer. And then the output layer sits over here.
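That weighted, fully connected forward pass can be sketched in a few lines. This is a minimal illustration with made-up random weights, not a trained network; the layer sizes and the cat/dog reading of the output are purely for the example:

```python
import numpy as np

# Forward pass of a tiny fully connected network: every input pixel
# feeds every hidden neuron through its own weight, as in the diagram.
def forward(pixels, W_hidden, w_out):
    hidden = np.maximum(0.0, W_hidden @ pixels)   # weighted sums + ReLU
    logit = w_out @ hidden                        # hidden -> output neuron
    return 1.0 / (1.0 + np.exp(-logit))           # squash to a probability

rng = np.random.default_rng(0)
pixels = rng.random(9)              # a 3x3 patch of pixel values, flattened
W_hidden = rng.normal(size=(4, 9))  # 4 hidden neurons, fully connected
w_out = rng.normal(size=4)          # one output neuron: P(cat), say
probability = float(forward(pixels, W_hidden, w_out))
```

With random weights the output is meaningless, of course; training is what turns it into a useful prediction.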
B: So perhaps we start out with an input layer fed some picked image of a dog or a cat, and the neurons try to say whether it's a dog or a cat; they give the probability of it. If it gets it right, you go on and show it another image. If its guess is wrong, if it's a cat and it says dog, then you go back and you modify all the weights in the layers to come closer to making the prediction that was correct.
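The "go back and modify the weights" step is gradient descent. A toy version with a single weight and a single training example (the learning rate, step count, and squared-error loss here are arbitrary choices for illustration):

```python
import math

# One sigmoid "neuron" with one weight, trained toward label 1 (cat).
# Each pass nudges the weight so the next prediction is closer.
def predict(w, x):
    return 1.0 / (1.0 + math.exp(-w * x))

w, x, label, rate = 0.0, 1.0, 1.0, 0.5
before = predict(w, x)                      # 0.5: a pure guess
for _ in range(500):
    p = predict(w, x)
    grad = (p - label) * p * (1.0 - p) * x  # d(squared error)/dw
    w -= rate * grad                        # the weight update
after = predict(w, x)                       # much closer to the label
```

A real network does the same thing for millions of weights at once, via backpropagation.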
B
That
is
a
dawn,
and
this
continues
on
and
on
and
on
putting
in
pictures
of
pieces
of
pictures
generating
an
output
going
back,
changing
the
weights
and
it's
not
uncommon
in
modern
cases,
to
use
a
million
pictures
for
input,
or
at
least
a
million
fragments
of
pictures
for
input
to
run
for
hundreds
of
thousands
of
loops
through
this
to
get
a
trained
network
back
in
those
days.
Computing
was
a
precious
commodity
and
they
didn't
train
any
of
the
networks
to
that
level.
Nor
were
they
very
big
networks.
B: The hidden layers here are the yellow ones. Neurons or nodes, both terms are used, and then there are the input layer and the output layer. Nowadays it's not unusual to have 50 or 100 hidden layers, or maybe more, and each hidden layer can have hundreds of neurons.
B: And we find that each neuron develops a specialized activity. For instance, if we're taking a deep-sky image, one of them might be looking for edges in the image; another one might be looking for stars and making sure stars are round, or whatever. So they all can end up having purposes, and we're just now beginning to be able to go back into a network and see what we think it's actually doing; and, of course, there's some feedback there.
B: What are the inputs for a simple neural network in image analysis? They would probably be the individual pixels from the image, or from some subregion of the image. Also, for astro images, if you cut out a little piece of an image and send it to the network to look at and learn from, it's important to rotate that piece into a random position.
B: As I said, the weights capture relationships among the pixels. It might be, though not really quite like this, something like: a long arc of dark pixels near a circular arrangement of pixels with a vertical ellipse in them. We would say, gee, that's whiskers near an eye with a slit pupil; that's what we would infer. Meanwhile, the neural network is saying: I see that, so it's probably a cat, not a dog, for instance. And the relationships that neural networks find, when researchers go back and look at what exactly the neurons are detecting, a lot of times they're finding things that escape our kind of consciousness.
B
When
we
look
at
pictures,
they're
picking
out
details
and
relationships
that
we
don't
see
so
easily-
and
you
can
think
of
these-
these
neural
networks
as
some
sort
of
bizarre
and
complex
weighted
regression
line
and,
for
instance,
multivariate
okay.
So
what
is
a
trained
network?
A
trained
network
is
one
that's
good
at
making
the
predictions
you
wanted
to
make
and,
as
I
said
after
thousands
of
cases
being
fed
to
it,
perhaps
millions.
B
It's
actually
possible
to
over
train
a
network,
keep
feeding
at
the
same
thousand
or
hundred
thousand
images
over
and
over
and
eventually
it
will
memorize
those
images
and
if
you
present
it
with
something
else
that
isn't
in
its
set,
it
doesn't
do
so
well
because
it
expects
something
from
this
set.
That's
over
trained
on.
So
this
validation
with
new
data
is
very
important,
oh
by
the
way,
interrupt
any
time
with
questions.
B
That's
fine,
so
the
network
is
trained
when
the
reach
weights
reach
a
steady
state
and
simultaneously,
the
predictions
are
as
good
as
they
get,
and
this
can
take
a
long
long
time.
It's
not
unusual
using
a
fast
pc
to
frame
a
network
for
a
week
or
more.
B
Okay,
generate
generative
adversarial
networks.
These
were
invented
by
a
gentleman
named
goodfellow
at
a
canadian
university
in
2014.
B
Basically,
neural
network
research
and
activity
had
come
to
a
stall
using
the
old
techniques
and
what
kind
of
networks
I
just
showed
you
they've
done,
did
about
everything
they
could
do.
They
were
out
of
great
ideas
and
when
goodfellow
came
up
with
this
idea,
the
field
blossomed
just
everywhere,
you
look
now
in
medicine,
astronomy,
physics,
whatever
somebody
is
doing
neural
networks
and
they're,
using
generative
adversarial
networks
or
ones
that
that
sort
of
network
inspired
another
generation
beyond
those
okay,
the
g
in
the
gand
is
the
key
word.
B
The
g
stands
for
generative
and
from
one
point
of
view,
that
means
that
the
network
can
generate
an
image.
So
it
learns
about
images,
some
aspect
of
the
images
and
then
it
generates
an
image
that
has
that
aspect
in
it
and
the
a
is
at
the
serial,
which
means
that
it's
actually
the
network
doesn't
consist
of
a
neural
network,
but
two
of
them
and
they
compete
with
each
other,
and
it's
this
competition
that
advances
them
to
a
to
a
state.
That's
remarkably
good
at
doing
their
work.
B: So let me run through it real quickly now. This example is in the literature; they talk about it as a counterfeiter network. A counterfeiter makes $100 bills, say, so this network makes images of $100 bills. There are also real images of $100 bills available, at least to people who have $100 bills. This one generates a picture at random and passes it to a random selector.
B
The
detective
network
looks
at
it
makes
all
those
measurements
all
its
little
neurons
fire
and
it
makes
a
prediction:
neither
says
it's
a
fake,
it's
a
counterfeit
100
bill,
or
it
says
it's
a
real
one.
If
it
says
it's
a
fake
100
bill
and
it's
correct
that
it's
a
fake,
then
the
counterfeiter
was
not
doing
his
job
very
well,
so
that
feedback
goes
back
to
the
counterfeiter.
Who
then
adjusts
whose
waist
is
are
then
adjusted
to
make
a
better
100
bill,
or
at
least
a
different
one?
B: If it comes down here and calls it a real one, and it's correct that it is a real one, that's a success, and you don't have to do anything else, since those are what you want to see. Obviously, if it says it's real and it's wrong, then the detective needs to learn how to do a better job, so its weights are updated. So this is what it looks like.
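The who-learns-from-what logic above can be written down as a skeleton. This is only the bookkeeping of the game, not a real GAN: the "networks" here are stand-ins that merely count their updates, and the detective's guess is replaced by a coin flip, so the point is which party gets updated after each outcome:

```python
import random

# Counterfeiter (generator) vs. detective (discriminator) bookkeeping.
class Net:
    def __init__(self):
        self.updates = 0
    def update(self):
        self.updates += 1

def gan_round(generator, detective, rng):
    is_real = rng.random() < 0.5        # random selector: real or fake bill
    called_real = rng.random() < 0.5    # stand-in for the detective's guess
    if is_real and called_real:
        pass                            # success: nothing to change
    elif is_real and not called_real:
        detective.update()              # rejected a real bill: detective learns
    elif not is_real and called_real:
        detective.update()              # fooled by a counterfeit: detective learns
    else:
        generator.update()              # counterfeit was caught: counterfeiter learns

gen, det = Net(), Net()
rng = random.Random(0)
for _ in range(1000):
    gan_round(gen, det, rng)
```

In a real GAN, "update" means a gradient step through the losing network, and the detective's guess comes from its own forward pass rather than a coin flip.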
B: Okay, here's an image-denoiser GAN, and I can't tell you whether this is what people actually use or not. There are many different ways you can put this together, and none of them have published the interior, surprise, surprise, of what it's actually doing. But if I were doing it, I would do it something like this. So there's a deep denoising neural network, and an archive that holds true images with low noise.
B
This
one
here
sends
a
random
image
to
a
algorithm
which
adds
random
noise
to
this
beautiful
image.
It
then
passes
it
to
the
denoiser
and
the
denoiser
removes
that
noise,
or
at
least
it
tries
to
now
that
image
is
sent
to
the
random
selector.
Along
with
the
image.
That's
perfect,
the
detector
receives
that
random
image
and
it
tries
to
say
whether
that
is
a
true
low
noise
image
or
an
artificially
generated
low
noise
image
and
again,
if
it's,
if
it
says
that
it's
a
denoised,
it's
an
artificial
image.
B: Another way of doing it would be to pass these as an image pair to the selector, and the detective has to pick out from that pair which one is the real image and which one is the artificially denoised image, and then you update in some fashion according to whether it's right or wrong. But they're pretty simple ideas, and the logic is very simple.
B: So take note: if you're one of the people who is driven to do science with their astronomy, there are ways to use artificial intelligence to do things that are on the cutting edge of real science. Okay: exploring galaxy evolution, photometric redshift estimation. There's an entire book, written in 2005, on applications of AI in astronomy with a view toward the future; well, I think we're in that future now, but the book is still kind of relevant. And there was an article called "AI beats astronomers in predicting survivability of exoplanets." I'm not quite sure what survivability means; I think it's stability of their orbits. It even does recognition of craters on the Moon and other places, and you think that was simple? Actually, I spent my career, or at least a portion of it, at the University of Arizona Lunar and Planetary Lab analyzing craters on various planets, and when craters land on top of other craters they beat them up pretty badly; it's hard to recognize craters, in fact. I wish I had had a neural network back in the 70s, but I didn't. Okay, here are some AI tools, the ones on the left here...
A: Alex, can I ask a question? What I remember from the years I spent in that field was that it was always difficult collecting the data to train the neural nets, right? Like, if you could find 10 billion examples of craters, it would probably be pretty easy to make a neural net that did a good job recognizing craters, but the problem is: how do you know it's a crater? I mean, you want to give the neural net a picture of a crater and say "this is an example of a crater," and another picture, "this is not a crater." So generating the training data was always the trick in those days. So what is the story with that? How would you do it? Do you just have a bunch of people labeling pictures, saying crater, no crater, or what's the story there?
B: There's a couple of ways you can go about it. Yes, it's absolutely true for all artificial-intelligence programs that require training, which I guess is all of them: getting the training data is the hardest thing. If you're doing noise, it's pretty easy: you take an image and you add noise; we can all do that. But yeah, on craters it would be very difficult.
B
I
think
one
of
the
possibilities
is,
we
know
a
lot
more
about
about
impact
mechanics
now
than
we
used
to
in
the
good
old
days,
and
our
computing
power
is
is
much
much
more
powerful.
I
think
making
synthetic
images
might
be
part
of
the
way
to
do
it.
I
can't
actually
tell
you
but,
like
you
said,
maybe
an
alternative
is
handed
to
and
the
same
set
of
images
to
15
people
and
have
them
label
it
and
then
train
it
on
maybe
three
categories:
definite
crater,
maybe
creator,
meaning
that
some
people
saw
another
student.
A: ...didn't, yeah. But, you know, going further: one of your headlines was finding gravitational lenses from deep-sky surveys. Again, there's just a handful of those. I mean, how many gravitational lenses are known, what, a thousand? I don't know; it's not millions that are known, right? So that would be the trick: training them.
A: I think that's the strength of the GANs, as they sort of generate their own training data. But anyway.
B: It's true, yeah. Again, I haven't read most of those articles, in fact, but I think that they are generally using artificial data as well as real data to do the training. Certainly we know the physics of the lensing systems, so we could make up random 3D distributions of objects around them.
A: So maybe that's it; maybe it is just synthesizing lenses and then discovering other ones. Okay, thank you.
C: I could add some small comments. Actually, GANs are not the only neural networks used for that. Recently there is a new set of neural networks called autocompletion networks: basically, you can give the neural network part of the image, and then you ask it to complete the image; a section of the image has to be predicted. So that's also another technique that is coming.
B: Yeah, the field is very, very rapidly growing, and there's no way anybody can keep up with all the different applications that are going on now. I downloaded all the requirements to make my own network, and then I sat there trying to make my own data sets, and that is so laborious it drives you crazy. I decided not to do that and instead to purchase some programs. Some of the programs you can purchase are listed in this slide.
B: Okay, over here first: we have tools now available to us, and most of them are not targeted at astro images; they're targeted at images in general. But they work just as well on astro images as they do on other images, and they have the same flaws: if you apply them too strongly, you can distort things and run into problems, and that's pretty much true of any kind of image-processing program.
B: There are simulated HDR programs out there now. For masking, there are ones that, with a little guidance, will draw a mask for you; they don't do very well on astro images.
B: I'll show an example later where it actually works, kind of. Healing: you know, if you want to remove stars from an image, maybe to make a training set, you can use a healing tool, go through each star, highlight it, and have artificial intelligence fill in the area. For star removal in general, we have a few tools that are actually available. Then there's lighting and contrast, image resizing, enhancing, and even histogram stretching, for which I don't know of any commercial programs; there are some on GitHub, but I haven't tried to use those.
Photoshop is often touted as the best artificial-intelligence program for image processing. Here are the tools I know of in Photoshop; I rarely use it, as I have an aversion to paying money for something I can get almost for free. But anyway, if you look at this list, much of it has to do with professional photographers, and most professional photographers are overwhelmingly wedding and portrait photographers, followed by landscape photographers.
B
So
most
of
these
do
things
like
smooth,
smooth
eye
face
skin
transfer
makeup
from
a
model
to
your
image
and
things
like
that.
There's
one
there's
one.
I
think
that
redoes
the
hair
and
replaces
the
sky
and
all
those
kinds
of
things
that
we
don't
need
to
use.
There
are,
of
course,
object
recognition.
Classification
tools
out
there
too,
and
they'd
be
good
in
scientific
research,
but
they're
they're
not
relevant
to
image
processing.
B: Croman's denoiser is good; it's not as good as Topaz DeNoise, but for Topaz you have to export the image. For star removal there are StarNet2 and StarXTerminator, and we're now at version 8. These two are competitive with one another: sometimes one works better, and other times the other works better, so I usually use both of them and see which one gives me the best results. For image enlarging and downsizing I use Topaz Gigapixel; for single-image HDR I use Aurora HDR.
B: I don't really use single-image HDR very much; there are several programs that will do it, and none of them give results that I really like very much. For general image enhancement you have Topaz, Luminar, Affinity Photo, and a strange one called Photolemur.
B: With Photolemur, you take a photo, you drop it on top of Photolemur, and it takes care of everything all at once. I usually use it right near the end to see if I missed something. I've very seldom actually published an image that's been through it, but it makes a real nice check to see if you've come out with the image you really want, because it can do some remarkable things in a few seconds. Okay, AI sharpening and structural enhancement, and some of the advantages of it. Unlike, oops...
B: Let's see. But like I said, it can be overdone: if you try to sharpen too much, you do distort the shapes of stars. That's really a point that I'd like to make, because it would be wonderful if somebody would train a sharpening tool on astro images so that we don't beat up the stars as we sharpen very aggressively.
B: Here's Luminar 4's structural-enhancement AI. This is part of a...
B: AI denoising: again, as I said, this is an easy one to build a training data set for. You just take some very good images, not necessarily astro images, and you add noise, so you have the good image and the noisy image. Then you train the network to remove the noise from the noisy version.
B: NoiseXTerminator says it can operate on linear images; I'm not sure that's very important. Furthermore, I think what it does, it isn't documented, but the companion star-removal program also operates on linear images by stretching the image, removing the stars, and then reversing the stretch, and the denoiser probably does the same thing: stretches, removes the noise, and then unstretches.
B: That's fine, but I don't ever use that. So here's a denoising example, again a GIF, so before and after. This one was denoised with Russ's program, and the image was very noisy to begin with. I never actually finished processing it, because it just isn't that good, but the denoising is quite spectacular; I hope that comes through on your screens.
B: Here's Topaz DeNoise, and I wanted to show that it works on pattern noise too. On the left we have an example with big vertical patterns in it, probably pixel columns that for some reason show up at this resolution, and here it is after it's been hit with Topaz DeNoise AI. All of the structures are, you know, sharpened a bit, or maybe they just look sharper because the noise is gone, but I think that's a remarkable piece of technology. Next, star removal with StarNet.
B: Well, I'll get to that a little bit later, but I'll mention that that is an issue not only with Topaz AI; it's an issue with anything that manipulates an image. In any case, I'll talk a little bit about that.
B: Here's an example of StarNet activity. You start with an image like this, you pass it to StarNet, and, if you check a box, it makes two images: one is a starless image and the other is the stars. All it does is subtract the starless image from the original to make the stars-only image.
B
Now,
if
you
stretch
the
crap
out
of
this
image,
we'd
find
that
it
has
a
background
in
it
all
the
stars
have
large
circles
around
them
and
every
piece
of
the
image,
even
if
it
looks
black
here,
is
in
fact
not
down
to
zero
and
that
causes
problems,
sometimes
when
you're
processing
it's
something
to
be
aware
of,
but
it
also
means
that
that
information
has
moved
from
the
stylus
image.
So
you
are
not.
You
do
not
have
an
image
here.
That
is
a
true
stylus
image.
It's
one!
That's
starless!
B: Most of each star is out of it, anyway. It also doesn't do very well with diffraction spikes: this star clearly has a remnant here, and this star clearly has a remnant, with even a cloudy halo.
B
The
bright
halo
around
it
and
spikes
and,
of
course,
if
you
take
this
image
and
start
manipulating
the
card,
you
also
bring
out,
especially
if
you
start
bringing
out
faint
things,
you
bring
out
these
stars
that
are
not
removed
and
you
bring
out
stars.
I've
only
partially
removed
other
stars
back
here.
There's
one,
that's
even
less
removed
than
there
are
others
in
here.
So
in
this
process,
isn't
perfect
that
only
more
training
would
help
or
not.
B
But
at
this
point
that's
what
we
have
okay,
getting
stars
back
into
the
image
once
you've
removed
them
is
a
very
difficult
process
and
people
use
screening
to
do
it
is
screening
does
not
do
it.
Screening
is
a
blend
mode,
so
you're
blending,
the
colors
of
the
stars
with
the
color
of
the
image
and
people
don't
seem
to
notice
that,
because
most
effects
the
fan
of
stars,
they
lose
their
color
much
more
than
bright
stars
just
by
the
algebra
within
the
screen.
B: There are other ways to do it. I do it using a transfer of a calibrated RGB image through a mask.
B: There are a few tricks in there, but I can talk about that some time if you need a speaker; it's a very, very difficult thing. Russ has settled on the idea that he can use something he calls unscreening to get the correct colors as the stars are extracted, and then screening to put them back in. I had a bunch of communications with him, and I showed him that if you take a color-calibrated image, remove the stars, get the stars by unscreening, then modify the image, I don't know, just add a bunch of red to it or something, anything, and then put the stars back in and run image calibration again, you find your image is no longer calibrated; the star colors have shifted. So it doesn't work. Whether it works well enough for you is something you'll have to decide for yourself.
B: HDR on a completed image can actually tone things down. This is the Tarantula Nebula. If you look at the area in here, at the arrows, it's almost saturated, not quite, and so the HDR pulls it back and gives you details in some of these otherwise bright and saturated areas. So it's worth doing.
B: Okay, up- and down-sampling: when I post things on AstroBin, I downsample; if I'm going to print an image, I upsample. My favorite tool for doing that is Gigapixel.
B: I don't know, it's hard to say what to make of this. Here's a linear structure: if you upscale it with PixInsight, it comes out as a dashed line; if you upscale it with Gigapixel, it comes out as a continuous line again. Whether that's right or wrong, I don't know, but the same caveat applies to any upscaling method: it's interpolated!
B
If
you
have
a
red
pixel
sitting
next
to
a
green
pixel,
the
one
in
in
between
is
not
necessarily
yellow.
Even
though
that's
what
might
interpolate
it
to
be,
it
could
be
blue.
You
don't
know
so
anytime,
you
screw
with
the
resolution
and
image
enough
skin.
You
certainly
are
introducing
artifacts
into
your
image.
B
It
matters
to
you,
that's
yet
another
thing.
I
generally
do
my
images
for
one
purpose
and
that's
art.
B: Here's the image at the top of the screen. You pass it to this Topaz mask-making program, and you say: these areas are definitely things I want to keep, I don't want the mask to cover them; out here, I want to cover this for sure; and in this blue area I don't know what I want, I can't find the edge of it. And here's the mask it made from that. Is it a good mask?
B: I've hardly ever used an AI mask, and I haven't found a good use for it yet; maybe somebody else has. Okay, AI image stretching is not yet commercially available, but here's one you can go and download from GitHub.
B: Here's what an image looked like when run through normal stretching pipelines, and here's what it looked like using AI.
B: This is an old picture from a Pixel 4 camera with Night Sight. That one can actually take up to 16 images at 15 seconds each, so you have to set it on a tripod, or I guess a rock or something. It takes those pictures, stacks them internally, stretches them using AI, and outputs a picture like this to the screen. I think that's pretty remarkable.
B: Okay, as I say here, only StarNet, StarXTerminator, and NoiseXTerminator were trained on astro images. So there are lots of opportunities out there for anybody who wants to get into this field, to take the kinds of networks that already exist for image processing and train them to specialize in astrophotography. Maybe there's even money to be made at that.
B: As for whether astro images need specific training: you know, even without any astro-specific training, the images I produce never have a flamingo in them anywhere. I have an acquaintance in one of the other clubs who, every time I show an image that's been through AI, ...
B
He
tells
me
that
it's
the
shot
that
makes
me
scientifically
useless,
and
you
never
know
when
you're
going
to
when
it's
going
to
accident,
just
throw
a
flamingo
or
a
pigeon
into
your
image,
and
you
know
I've
done
hundreds
of
images
and
I've
yet
to
see
a
flamingo
word.
B
It
can't
do
that
because
it's
taught
to
recognize
a
sharp
image.
It's
not
taught
just
to
change
your
image
and
some
other
image
that's
seen,
but
the
point
is:
is
there
are
in
fact
artifacts
hey
here's
something
that,
as
I
said,
I'm
interested
in
the
art
side.
So
I
took
this
image
and
there's
a
thing
called
a
eye
transfer.
I
took
this
image
and
this
image-
and
I
said,
render
this
image
in
the
style
of
these
clock
gears
and
here's
what
again
they've
gone,
and
I
really
enjoyed
that
doing
that.
B
One
guys:
okay,
okay,
here's
photo
lemur-
and
this
is,
you
know,
drop
it
on
top
of
drop
your
image
on
top
of
it
and
see
what
photo
lemur
does
so
there's
before
and
then
after
and
it
is
kind
of
like
a
hdr
high,
dynamic
range
enhancement.
B: Conclusions: the uses of artificial intelligence are abundant in astronomy. AI is taking hold for image-data interpretation, and if you can write your own programs, you can perhaps use Hubble telescope data to make scientific discoveries. And it's useful for improving our astrophotos in a lot of ways.
B
Because
I
did
not
address
artifacts,
as
I
thought
I
would.
I
missed
that
part
apparently
in
this
slide,
but,
as
I
said,
there's
a
danger
of
artifacts
in
everything
you
do.
If
you
sharpen
an
image
you
can
bring
in
halos
around
your
stars,
even
ever
so
slightly
when
you've
sharpened
an
image,
a
nebula
you'll
see
structures
in
it
that
may
or
may
not
be
there.
All.
The
decon,
for
instance,
is,
is
an
approximation.
It's
an
energy
process
that
there's
no
way
guided
to
required
to
match
reality.
B: It doesn't have access to the raw data, so that's just the way it is. And, as I said, since my interest is primarily the artistic side of it, I'm not too worried about it; but if it worries you, I certainly would say: don't do it. That's all I've got, so I'm going to stop sharing. No, I won't stop until we answer any questions.
A: Yeah, thanks very much, Alex. That was really good, and I'm sure we have some questions, so help me out here. Anybody got any questions for Alex?
E: What do you think the evolution is for astronomy-oriented tools?
B: Yeah, a lot is happening. Right now, for instance, Topaz has DeNoise, and you can denoise images that have motion blur in them. I don't know how good that is, but it has that motion-blur option, and it has three or four different blurring modes, and each of those has within it three or four subsets, things like extreme, moderate, little. It will automatically select one for you, but I usually go through several different ones and look at them. You'd be surprised how many times I've processed an image from a large telescope...
B: ...you know, one of these from iTelescope or somewhere, and I run it through the motion-blur denoise and it does a remarkable job on it, suggesting that down at the pixel level there are motion-blur tracking errors. So some of that is already available.
B
What
I
would
like
to
see
is:
let's
see
what
would
I
like
to
see
most?
I
think
I'd
like
to
see
the
stretching
one
most
teaching
ai
to
take
an
image
one
of
our
stacked
and
stretched
color
images
and
stretch
it
to
an
hdr
image
right
off
the
bat
without
saturating
any
of
the
stars
or
any
of
the
nebula
or
galaxy
components
in
it.
That
would
be
real
nice
and,
if
and
as
I
said,
there's
a
network
out
there
that
comes
pretty
darn
close
to
doing
that.
A: I'll tell you one, Francesco, here: we could do this as a group project; we have enough talent in computer science and AI. A grader: in other words, scrape all the AstroBin images, find out which ones were nominated and which ones were selected and so on, and train a net to grade your images. Then, as you're processing, you could see if you're improving things or making them worse.
E
I
think
why
stop
there?
Why
don't
I
create
a
tool
that
applies
the
transformation
that
statistically
give
you
a
higher
score
right.
F: This is why I don't join those. You know, I joined one competition on Telescope Live, and I saw my image and I saw the ones that won, and I didn't agree, you know.
A
I
I
agree
with
you
all,
I'm,
but
anyway
I
I
am
saying
this
half
seriously,
because
I
think
it
would
be
interesting
for
to
get
a
you
know,
a
a
computer,
to
give
you
feedback
on
your
images.
Now
you
can,
like
all
computer
advice,
you
can
take
it
or
leave
it.
B: Another good one would be to grade your subs: rate the subs, and throw out a bunch of them if necessary. If you have some really good-quality subs to train it on, it'd be pretty easy, and then you say, look at these and tell me what you think. I'm not sure that you could do one that would be universal.
C: I think the next level probably will be to put the AI on the telescope itself, on the computer that controls the telescope, so it can actually reject images that are bad and select images that are good for the stack even before you go to processing, because that saves you time, right? One of the big drawbacks of astro imaging is that it's time-consuming: you have to take flats, you have to take darks...
C
You
have
to
take
the
calibrations
all
this
stuff
and
I
ai
famous
with
the
fact
that
it
can
fit
in
small
computers
and
can
reduce
the
data.
I
mean
it
can
do
this
dimensionality
reduction,
basically
to
cut
the
time
to
get
the
result,
but
that's
probably
for
the
future
how
to
put
ai
on
raspberry
pi
that
high
uses
and
make
the
image
session
in
the
on
the
field.
Much
better
right.
B: I'll make one more comment about competitions, on AstroBin. I had at one point volunteered to be a judge and got into level two, and there's a secret page where judges can communicate with each other. After two weeks of listening to the judges babble about this and babble about that, I withdrew as a judge and told whoever owns AstroBin that I did not want any of my images subjected to the competition, to remove me from it, because the judging was, you know, it was...
A: Thank you. All right, well, we'll see everybody, or I won't, but Paulo will see everybody next month. Until then, bye.