Machine Learning and Ham Radio ID of HF/VHF Signals, by Bob McGwier, N4HY
So I learned about machine learning by applying it to a radio problem, which I will discuss later, but it led me to think that this kind of activity might be useful for ham radio. So I'm going to pursue it, and I'm going to tell you how I'm going to do it. Let's start right at the beginning. What is machine learning? I'm sure lots of you have heard of deep learning and machine learning and all sorts of things. What is any of this, and what will it mean to us?
We don't want to have to write down rules for the computer to make decisions or classifications or determinations, with a human involved at all. So that's the goal, but where are we? Let's see. Machine learning: algorithms can figure out how to perform important tasks by generalizing from examples.
That's also a form of machine learning, and how to generalize a trained algorithm is part of the issues that all of us need to face when dealing with a computer trying to apply learning. Machine learning is the ability of the machine to learn on its own. Now, that's the most general of these definitions, and it's of course a very lofty goal, but major steps have been made.
I mean, you see this every single day. You're on, say, your cell phone or your computer, and you ask Google or another search engine, or talk to your phone, or ask for directions from your GPS module. In all those things, machine learning is there to interpret what you ask and, after the request is classified, determine what you've asked. It then executes an action, and that is also machine learning. So let's just keep going and see where we can get with these kinds of really high-level descriptions of machine learning.
For now, machine learning is exactly like this: you've got a couple of machines, and a person is trying to orchestrate what they're doing. It's less like sitting down and writing a program, but a human is still involved, and for right now that is just the nature of the beast. So we're teaching these computers how to learn from data, mostly using heuristics. Now, these heuristics have an extremely firm foundation, but as with all such foundations, there are caveats that one needs to understand.
So you don't make a mistake or misapply what you get. There's the universal approximation theorem, which was a really major finding that a lot of people didn't believe. So let's have a bunch of data as inputs to a black box. The universal approximation theorem says that if you've got neural nets doing something inside this black box, then with a finite number of neurons you can get as close to the actual desired output, given that data, as you want, by increasing the complexity of the neural nets inside. So you can get there: you can actually train neural nets taking input data to get as close as you want to the actual real function you want operating on that data. Now, that is a big deal. It doesn't tell you how to do it; it just says you can do it. And this is one of the major kinds of problems or issues with these kinds of theorems: it's an existence theorem without the construction.
Learning something from the data is done by stochastic gradient descent, and we're going to go through how that's done at a high level, without actually getting down to the nitty-gritty details. But just let me tell you: until the 2000s, really 2008 or thereabouts, stochastic gradient descent was known, but it was completely impractical. And then Geoffrey Hinton, a well-known computer scientist, took a set of handwritten characters, mostly numerals in the beginning, and trained a computer to recognize these handwritten numerals on automatic pilot, without a person telling it what to do. So he figured out how to train it; it was a stochastic gradient descent algorithm, and he called it back propagation. So there is a way to tell the computer, given input data and the guess it made, how well it did. It is of critical importance that, from the beginning, when you're applying machine learning, you have a way to tell the computer how well it did. So again, for those two things, stochastic gradient descent and the universal approximation theorem, there's a hidden secret: we pretend they apply perfectly, and therefore we're talking about heuristics. There are some theorems, but in general we don't actually know for certain, when we start this process of machine learning training, that it will actually work.
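To make back propagation concrete, here is a minimal sketch (my own illustration, assuming only numpy; not Hinton's code) of a one-hidden-layer network trained by gradient descent to learn the XOR function:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a single linear layer cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialized
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: the network makes a guess
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Objective: mean squared error tells the machine how well it did
    loss = np.mean((p - y) ** 2)
    # Backward pass: propagate the error back through the layers
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp;  db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh;  db1 = dh.sum(0)
    # Gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())  # the four XOR outputs
```

The same loop, scaled up enormously and with smarter updates, is what all the training described in this talk amounts to.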
So what do machine learning techniques do? All these weird things on the left: how do we get to a place where they are trained and ready to work? Well, first, we have to have data. We have to have input data of the type we want, to present at random whenever we want to, and we have to have data from each class, so that we can train the machine to recognize that something is from that class and do something useful with it. So: lots and lots of data, and right now data is an extremely important and expensive part. From that data we want to derive a model, and a model is a machine learning algorithm that we're going to construct by doing iterative analysis on the data.
So how does the iterative analysis work? We take data, we feed it into our model, we get an output, and then we have an objective function. The objective function measures how close your predicted output is to the actual one you desire, and we will go over some examples to make this clear. This process of data, model, objective function, and feedback is the optimization algorithm, and the optimization algorithm, as we feed the error back through, is an instance of a stochastic gradient descent algorithm.
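Those four pieces (data, model, objective function, feedback) can be laid out explicitly. A minimal sketch, assuming numpy and not taken from the talk, fitting a line to noisy data by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: noisy samples of the "actual real function" y = 3x + 2
x = rng.uniform(-1, 1, 200)
y = 3 * x + 2 + rng.normal(0, 0.1, 200)

# Model: a guess at the function, with trainable parameters w, b
w, b = 0.0, 0.0

def objective(w, b):
    # Objective function: how far the predicted output is from the desired one
    return np.mean((w * x + b - y) ** 2)

lr = 0.1
for step in range(500):
    err = w * x + b - y             # feedback: the error on this pass
    w -= lr * np.mean(2 * err * x)  # gradient descent updates
    b -= lr * np.mean(2 * err)

print(round(w, 1), round(b, 1))  # → 3.0 2.0
```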
So what are the machine learning types? We'll go over these, and I don't expect you to remember them; I'm just going over them for completeness, and they are illustrative of the kinds of things we have to think about. Unsupervised learning is where you don't know anything about the data: you don't know how to label it. So you take these pieces of data and you feed them into algorithms, and the algorithms discover structure, discover meaningful ways to compress that data into classes. From this you find features in the data as presented to the machine learning algorithm, which allows you to put the data into different boxes. This unsupervised learning was how everything was done until we got very, very good at doing a different kind of learning.
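The "compress data into classes" idea can be sketched with k-means clustering; this is my own minimal illustration (the talk does not name a specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unlabeled data drawn from two hidden groups the algorithm must discover
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                  rng.normal(5.0, 0.5, (50, 2))])

# k-means: alternate between assigning each point to its nearest
# center and moving each center to the mean of its assigned points
centers = data[rng.choice(len(data), 2, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]).round())  # prints [0. 5.]
```

No labels were ever supplied; the two "boxes" fall out of the structure of the data itself.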
Supervised learning is where you still have lots and lots of data, but you feed the data to your machine knowing what the output should be while you're training. You go through looking at the output of the machine and saying: look, you're in error, and here's the error, and here's how bad the error is. Based on that input, it corrects itself.
That's where things like website classification, speech analysis, speech recognition, and customer segmentation live: the kinds of things you see on Google or your phone when it's recognizing your voice, or when you enter a search term and it's doing the best it can. This is semi-supervised, in that some supervised learning goes in, and then it generalizes from there, and the generalization is not done with much supervision.
That's an example. Reinforcement learning is really, really neat. You take your problem and you have some input data, which in this case describes the current environment and the state of your system, and you let the machine-learning-based model take the data and the current status and compute an action. Then the action is taken by the system, and you get feedback on whether or not the action was good and appropriate.
Whether you got the reward: your dog got its food from clicking on the lever, or it didn't, and then it goes on and learns from that without human intervention. We've seen some spectacular examples, which I will go over later.
And then anomaly detection, such as credit card fraud or cyber intrusion, is a major growing area in machine learning, and it's badly needed, because we've all seen the headlines about fraud and cyber intrusions lately. We really need machines to be able to help us do this, since we don't have enough manpower to do it, even if we knew how.
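A toy sketch of that idea, with numpy and synthetic purchase amounts (my own illustration, not from the talk): flag values that sit far outside the statistics of normal behavior.

```python
import numpy as np

rng = np.random.default_rng(3)

# Normal purchase amounts, plus a couple of injected frauds
normal = rng.normal(40.0, 10.0, 1000)
amounts = np.concatenate([normal, [400.0, 950.0]])

# Anomaly rule: flag anything more than five standard
# deviations away from the mean of all observed amounts
mu, sigma = amounts.mean(), amounts.std()
anomalies = amounts[np.abs(amounts - mu) > 5 * sigma]

print(anomalies)  # the two injected frauds
```

Real fraud detection uses learned models rather than a fixed threshold, but the shape of the problem, separating rare outliers from a mass of normal behavior, is the same.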
So, unsupervised learning is demonstrated by the pictures on the right: a kid playing with toys and doing things with them. That's unsupervised learning. And the robot is taking a bunch of stuff and trying to figure out how to classify the items, or name them, or cluster them into different groups.
So in both these cases, data and/or things (objects, words, pictures, whatever) are placed in front of the child or the robot or the computer, and you are influencing the things it has available to learn from. You bias and/or guide the learning by what you expose the learning system to, and then the child, computer, or robot learns to separate things into classes or groups. For example, down on the floor in front of the child are dinosaurs and a tiger, and maybe, after a little bit of playing, a kid can tell the difference between a dinosaur, an alligator, and the tiger: they're different. So unsupervised learning can be a goal in itself, discovering hidden patterns in data that you didn't know were there in the beginning (or you would have found them already), or it's a means towards an end: learning a feature.
So if you have a feature that you did not know was in the data, and your machine helped you learn the feature, then you can use that feature in deciding how to use the data.
And the reinforcement learning algorithm is where you have an agent, which is your computer, and it has a policy that says: if I see the environment, and I know what state my system is in, then based on this policy I will tell the system to take an action. Eventually you begin getting more and more reward for the actions, for the modifications you've made to your machine learning model.
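That observe-state, take-action, collect-reward loop can be sketched with the simplest possible case, a two-lever bandit learned purely by trial and error (my own illustration, not code from the talk):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two levers; lever 1 pays off more often (the agent doesn't know this)
pay_prob = [0.2, 0.8]

# The agent's running estimate of each lever's value
value = np.zeros(2)
pulls = np.zeros(2)

for step in range(2000):
    # Policy: mostly take the best-looking action, sometimes explore
    if rng.random() < 0.1:
        a = int(rng.integers(2))
    else:
        a = int(value.argmax())
    # Environment returns a reward for that action
    reward = float(rng.random() < pay_prob[a])
    # Update the estimate from the feedback, with no human involved
    pulls[a] += 1
    value[a] += (reward - value[a]) / pulls[a]

print(int(value.argmax()))  # → 1, the better lever
```

The Atari and Go results described next are this same loop with the table of values replaced by a deep neural network.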
And we have spectacular examples. In one of the first spectacular ones, basically nothing was told to the machine: it had the screens while it was playing Atari machines, and it learned how to win those games better than any human that ever played them, by trial and error, getting feedback on how well it did and a reward for doing better. And then the one I thought basically would not happen before I died, but did anyway (so much for my intuition), was the game of Go, which in many people's opinion, and by some difficulty classifications based on some pretty difficult mathematics, is really difficult. But the computer, given the Go stones, playing against humans and then later against itself, learned how to win at Go better than any human that ever played the game. That was really surprising, and it didn't take that long: it was a few days from scratch, after they learned the right algorithm for updating the game. So these are spectacular things.
Okay, so what do we want to do? Well, last but not least, since it's kind of where we're going here, is supervised learning. Let's start in the center, where you've got the teacher giving the two robots a lesson. That's kind of how you want to think about it. So again, the example in the upper left is the teacher showing the robot pictures: when it gets one right, it's told yes, and when it gets one wrong, it's told no, and it does this over and over and over again.
And there's this algorithm that updates the robot's knowledge based on what it sees, and it gets better and better over time at getting the right answer. So let's look at the upper right: once the model has converged and is doing the right thing, by you first giving it input and telling it "these are apples, these are bananas" or whatever, it eventually gets a very high probability of making the right prediction about what is being shown.
So that's the goal: over time, being shown lots of examples, it guesses the answer, then you tell it what the real answer is, and it grades itself, based on an objective function, on how well it did, and it learns how to do better from repeating this over and over. This is repetition learning at its best. A more algorithmic presentation: you've got training data that you feed into your algorithm, the machine learning model issues a guess, and then it's graded on the guess.
Now, this picture is of a trained model. So now you've got a model that's been trained, and you give it unseen and unlabeled data, and it picks the right answer. That's a good thing: your machine model is trained. And this is us as hams for now. I think this is really what we can do, and this is what we should do, on some really interesting problems. So let's go from there.
So look, this is www.sigidwiki.com, and it has some really interesting things on it which I think are very useful. It has frequency bands, and in these frequency bands it has identified signals and unidentified signals, and, to the extent possible, they have been further classified into classes (military, radar, etc.) down through time.
I want to use this data here in the beginning, because there are 424 classified and identified signals, and quite a few, about half of them, are unidentified. The thing I fear is that the unidentified signals will have very few pieces of data to learn from, but that's part of the project anyway.
K8ND is a big-time contester, and both at contests and outside of them, Jeffrey has recorded many, many hours of HF data; he has eight terabytes of data just from HF contests. That's a good case where you've got lots and lots of instances of signals, and they need to be marked and told what the signal is, so that that data, with the signal recorded and the data marked, can be used as training data for learning
one of these machine-learned models in a supervised learning program. And now you're getting an idea of where I want to go. So: Phil Karn, KA9Q, a well-known radio amateur, a member of AMSAT and TAPR, and president of ARDC, has gotten enamored of taking software defined radio and running it on low-end equipment, and he's done the following. He has four terabytes, and growing, of data from listening to the VHF and UHF FM bands, running on a Raspberry Pi talking to a homebrew software defined radio, with an Airspy as the radio interface.
So he has a filter bank (not polyphase; I had that wrong: it's a filter bank, but it's not polyphase), and he divides these bands into channels and listens to the signals with an algorithm that tells him whether or not the channel is in use. He records the channel if it's in use, and he runs constantly, 24 hours a day, monitoring the FM bands. So now Phil has given me a device that has actual signals of interest; they are marked as to what they are, and only the snippets that contain usable signal are recorded, and this is a good thing.
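A minimal sketch of that channel-in-use decision (my own illustration; Phil's actual detector isn't shown in the talk): measure power over short blocks and keep only the blocks that rise well above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(5)

fs = 48_000                      # sample rate in samples per second
noise = rng.normal(0, 0.1, fs)   # one second of channel noise
signal = noise.copy()
# A burst of carrier occupies the channel from 20 000 to 30 000 samples
signal[20_000:30_000] += np.sin(2 * np.pi * 1000 * np.arange(10_000) / fs)

# Squelch-style detector: record a block only when its power
# is well above the known noise power (0.1**2 here)
block = 1024
active = []
for start in range(0, len(signal) - block, block):
    power = np.mean(signal[start:start + block] ** 2)
    if power > 10 * 0.1 ** 2:    # threshold: 10 dB over the noise floor
        active.append(start)

print(active[0], active[-1])     # block offsets bracketing the burst
```

Only the active snippets would be written to disk, which is exactly why such recordings make compact, well-marked training data.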
Okay, so look: one of the things you've got to have, if you want to train one of these machine learning models, is a supercomputer, and so I've built one. This is it with the side off and all the things dangling out, to be able to easily take a picture of them. In the upper left, where all the colorful things are, you see in the center this circular thing with TT on it.
That's for Thermaltake; it is a water-cooled jacket sitting on top of an AMD Threadripper processor, the 3970X, which has 32 cores and 64 threads. So that's quite a computer: lots of cache, so that things are moved in and out quickly. As long as you're running tight algorithms over data that's arranged in blocks, it can do it easily and really get some speedup.
You see these vertical bars of light to the left and the right of the water jacket for the processor: that's DDR memory, 256 gigabytes of very fast memory. Above the memory and the processor with its water jacket, you see these colorful fans, and behind the colorful fans is a water reservoir. Below the Thermaltake jacket you see pipes; they go up to the reservoir, so warmed water goes into the reservoir, which has ribbed fins on it.
Below that picture there are these two horizontal bars with a bow tie on the left, and you see the word GeForce on them; it's partly covered up, but it reads GeForce RTX. These are high-end GPUs, GeForce RTX 3090s, and there are two of them.
So between the processor, memory, cooler, those two GPUs, and another little piece I'll tell you about, that's a fairly serious used car, or something like that. It's not cheap, but it's needed, and I'll tell you why later. The thing I want to point out to you is this bow tie right here on the left side of these cards.
And why do these graphics cards matter? The mathematics that is needed to do the stochastic gradient descent algorithm to train these models is best done on GPU cards that have certain technical capabilities inside them, and I'm just going to tell you what it is without going into any explanation: they have these tensor processing units in them, which is really, really helpful for doing machine learning.
To point out the supercomputer nature here: up here you can see that I've got a version of top telling me that right now, while it's sitting idle, it's using two and a half gigabytes of the 252 that is available to the user, and you can see that there's two gigs of swap, and basically I've got nothing running now.
So you only see the little wiggles for things the machine is doing, and you can really tax this thing to the max, where all the threads are running and the machine is really getting taxed to its limits. These fans spin up really loud, the water flows really fast, and it keeps the processor cool enough that it can keep running even at full tilt.
It's a 1600 watt supply, and I've run these GPUs flat out with the processor running flat out: it draws about 1350 watts steady state, and it spikes a little bit above that, but 1350 watts is kind of the steady-state limit for the things I'll be doing. And over here is just another view of the CPUs and what they're doing. So this is a true supercomputer, and 10 years ago I couldn't have dreamed of owning this.
It's just a really great thing that we now have access to these things, and it's affordable for a person who is really serious and wants to do it at home. I feel very, very lucky to have the resources to be able to put this computer together at home and do the work that's needed.
So this is the computer, but you also need some software. I'm just giving you an example of one here, which is very, very popular and is likely to be used in this work: TensorFlow. I've shown you on the left side here the opening web page when you go to TensorFlow; it tells you how to install it, how to set other things up, and gives you examples of how to do training. And I want to go over one here in this picture
that is illustrative of the kind of things we are doing. Notice the basic classification example: that's a basic training program, and literally what we're about to try to do and apply is technically equivalent to this basic learning algorithm, so I have very high confidence that it will work, and there are other reasons I'm very confident. So we're going to take a bunch of data, like it's done here, and you'll notice this down here.
What these little tiles in this picture are, that you can barely see, are instances of clothing. These are pictures of different types of clothing, and this computer has been fed lots and lots of data in the form of these pictures of clothing, where the data has been marked with the name and type of the clothing, and maybe also marked for color, or whether it's typically worn by a female, typically worn by a male, or unisex, or whatever.
So you give the computer lots and lots and lots of examples, tell it when it gets it right, tell it when it gets it wrong and how badly it got it wrong, and then let it iterate. This happens over and over and over again, to the point where you burn lots of power on your computer, and finally the model converges to a set of parameters such that, when the model is applied to a picture of unknown clothing, it gets close to the right answer almost all the time. That's when you're doing well.
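The tutorial being described trains a tf.keras network on labeled clothing images; the same supervised loop can be sketched in plain numpy with a softmax classifier on synthetic labeled points (my own miniature stand-in, not the tutorial's code):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for labeled pictures: three classes of 2-D features
means = np.array([[-4, -4], [4, 0], [0, 4]])
X = np.vstack([rng.normal(m, 0.5, (100, 2)) for m in means])
y = np.repeat(np.arange(3), 100)

# Softmax classifier: one weight column per class
W = np.zeros((2, 3)); b = np.zeros(3)

for step in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Cross-entropy gradient: predicted probabilities minus true labels
    p[np.arange(len(y)), y] -= 1
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.sum(axis=0) / len(y)

# Apply the trained model to unseen, unlabeled points
test = np.array([[-3.8, -4.2], [3.8, 0.3], [0.1, 4.2]])
pred = (test @ W + b).argmax(axis=1)
print(pred)  # → [0 1 2]
```

Swap the synthetic points for Fashion-MNIST images and the hand-written loop for tf.keras layers and you have the tutorial on the slide.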
Okay, so look, we've got some other target hardware which I think is of interest. The Airspy is on the right and the HackRF One is on the left. The Airspy also comes with a converter that allows you to listen to HF, and the HackRF listens to HF, VHF, UHF, and up, so both of these will be the software defined radios that I will be using in order to test the trained algorithms.
Where will I run the trained algorithm? In the center is a Jetson Nano, the 2 gigabyte version. This is a development board, but it's really what we need. It has two gigabytes of RAM, so you're not going to be playing major games on it or running lots of graphics; you're going to have it doing number crunching, and it has a fairly high-powered GPU on it. You take your model and you tell the program you're going to run on it: when data comes in, run the model on the data and tell me the answer. That's what this thing will do. It has a fairly high-end ARM processor with four cores, two gigabytes of RAM, and the GPU, and the cost is $59.
So I'm going to start with the least expensive one of these. It has a fairly high clock rate on both the GPU and the ARM, and we'll see how well it does. If this is not enough, the next one up is four gigabytes with six cores, and it is a hundred dollars, and right above that is another one which has many gigabytes of memory. So there are several tiers: one is $59, one is $100, one is $400, and one is $700.
I don't believe we will need the seven hundred dollar one; I might need the four hundred dollar one, but I don't believe so. I believe the fifty-nine dollar one or the hundred dollar one will allow us to take signals off the air, capture them in a way that the computer can understand, tell it to classify the thing we've given it, and it will issue an answer. I believe that we can do that, and here is the reason why.
This is the work that will be adapted for this problem. We will not be taking the frequency hopping radios of interest to MITRE and the Army; we will be taking signals that we get off the air by our own means, and train the algorithms that way. But essentially, many of you will have seen a waterfall display: the waterfall streams down the page and shows you the spectrum of signals.
So if a signal comes on the air, you can detect in the waterfall when something is there, and then you grab a block of it and feed it to the computer algorithm that has done the machine learning. It will classify the signal to the best of its ability and then pass it off to the next part, which will have a higher-grade process running on a software defined radio, doing even better work.
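That detect-then-grab step can be sketched with one FFT "waterfall line" and a per-bin threshold (my own illustration, using a synthetic complex baseband capture, not the project's code):

```python
import numpy as np

rng = np.random.default_rng(7)

fs = 1_000_000                    # 1 MS/s complex baseband capture
n = fs // 10                      # 100 ms of samples
t = np.arange(n) / fs
# Complex noise plus a hypothetical carrier 100 kHz above center
x = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
x += 0.5 * np.exp(2j * np.pi * 100_000 * t)

# One waterfall line: windowed power spectrum of a 4096-point block
block = x[:4096]
spec = np.abs(np.fft.fftshift(np.fft.fft(block * np.hanning(4096)))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(4096, 1 / fs))

# Detect bins well above the median noise floor; the samples around
# those bins are the block you would hand to the classifier
threshold = 20 * np.median(spec)
hits = freqs[spec > threshold]
print(round(hits.mean() / 1e3))   # kHz: detection lands near +100 kHz
```

Streaming these lines down the screen gives the familiar waterfall; thresholding them is the "something is there" trigger described above.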
So that's kind of the cascaded flow of things that is intended, and the reason I know it works is because we did it on a really hard problem for MITRE and the Army. So we'll gather data from K8ND and KA9Q and others and store it locally on a large array that'll be inside this machine, so all the stuff will be contained in one box.
That allows us to do lots and lots of training on lots and lots of signals at very high speed, and once we have the classification of the signal, in terms of type and whatever other parameters are needed, we can automatically build, say, a GNU Radio algorithm to process the signals ID'd by the trained deep learning algorithms produced by this project.
Now, what are the promises? The promises are: this will all be open source, open specification, GPL. The algorithms will be done in GNU Radio, the Python training algorithms will be published, how the models are derived will be published, and it will all run on commercial off-the-shelf hardware that anyone can purchase.
So I think we've finally gotten to the place where the application hardware and the training hardware are purchasable by individuals, and the individual person who just wants to test the algorithms, to see how well they do in classifying signals on the air, can hopefully give us feedback so that we can learn to do better. We will make this available to lots and lots of people, because these software defined radios and these special-purpose, ARM-based, GPU-enabled learning computers are getting very inexpensive.
My DMR ID is 3151472; I'm available through a hotspot, and that's my email in case you want to contact me. I'm looking for other people who are interested. With that, we'll stop here and take some questions, if we have time, and that's all.