From YouTube: ORI FPGA Meetup 2 August 2022
Description
encoder read/write problem *solved*
B
Finally, the read issue is sorted. What was happening was that when we were trying to read from register zero, the memory address of the encoder, the instruction was coming through properly to the interconnect, and then from the interconnect down to the encoder.
B
The address that we were using, suppose it was 0x44ab8000: it was taking the lower 12 bits as an offset, rather than taking the complete value as the address with an offset of 0. So it was trying to read some non-existent location, and that was giving us a bus error. Once I moved things around, the encoder is now at 0x44ab0000.
B
So that's sorted now. The JESD has moved to another location, 0x44ab8000, so that's the JESD location, and now all the peeks and pokes to all the addresses and registers, for all the components connected to AXI Lite and the other buses, are working properly. Now the next step is DMA.
B
I remember you had one sample DMA application. I will try to work on that, so that we get one packet across to the radio and out through the radio.
A
Yeah, I had some source code that was called "DMA test", and that's just a start. All it had was the register reads and writes. So I started to look at what you need to read and write, or what you need to write to which registers to do DMA, but it's not completed yet.
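For reference, the register-level sequence for a simple-mode (non-scatter-gather) transmit transfer on the Xilinx AXI DMA looks roughly like the sketch below. The offsets follow the AXI DMA product guide, but it is not confirmed that this is the DMA engine in the design; the dictionary-backed `wr`/`rd` helpers are placeholders standing in for a real memory-mapped I/O layer, and the buffer address is made up.

```python
# Register offsets for the Xilinx AXI DMA MM2S (memory-to-stream) channel,
# per the AXI DMA product guide:
MM2S_DMACR  = 0x00    # control register: bit 0 = run/stop
MM2S_DMASR  = 0x04    # status register:  bit 1 = idle
MM2S_SA     = 0x18    # source address (lower 32 bits)
MM2S_LENGTH = 0x28    # byte count; writing this starts the transfer

regs = {}             # stand-in for memory-mapped hardware registers
def wr(off, val): regs[off] = val
def rd(off): return regs.get(off, 0)

def mm2s_send(buf_phys_addr, nbytes):
    """Kick off one simple-mode DMA transfer from memory to the stream."""
    wr(MM2S_DMACR, rd(MM2S_DMACR) | 0x1)   # set run/stop to start the engine
    wr(MM2S_SA, buf_phys_addr)             # physical address of the sample buffer
    wr(MM2S_LENGTH, nbytes)                # this write triggers the transfer
    # On real hardware, poll rd(MM2S_DMASR) bit 1 (idle) for completion.

mm2s_send(0x10000000, 4096)                # hypothetical buffer and size
```

On scatter-gather-enabled hardware the sequence is different (descriptor chains instead of a single address/length pair), so the first thing a dma_test program needs to establish is which mode the core was built with.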
B
Yeah, it was basically to get that ILA working, and to understand and learn how it works. So once that was done, it was pretty good.
A
Good. Is there anything that's stopping the way we set up the repo from working? Is there anything that needs to change in the repository?
B
I think so. The address map has changed, so I think I need to update the Tcl script. Or, I'm not sure where the addresses are saved, so whatever it is, I will look at the diff and go from there.
B
Just the new address. And I added some debug cores, which will be helpful in debugging, so I would let them remain there. So there will be some scripts, or something related to that, as well.
B
Yeah, that's good. And now the next thing is, do you have any idea how I can get a sample to DMA out to the radio board via the JESD link and then out? Where can I get that sample?
A
Because I think what you're talking about is a baseband frame. Yes, okay. We believe that we have a number of files that we've been using for the beacon demos, and we think those might work. If they don't work for any reason, because they should just be a file full of baseband frames, then we can go back to several people in the community, and they should be able to help us.
A
You bet. Cool, okay. Hey Paul, do you have any comments?
C
Well, I have a question, actually, now that we know how to add ILA instances and use them.
B
I can write it down, but the guidance that I followed was basically the docs from AMD. They explain it step by step. So that's the only official documentation that's available.
A
That's good news, so everything's working okay. I haven't had any major upsets. I know that we had a reboot, but that was pretty ordinary; it went well.
A
Well, it did. I've been using it quite a bit, trying to get the ADI build to make. So I'll give my report. I've been using remote labs, but only in the sense of the VM on chunk. What I've been trying to do is to make our encoder and the hardware reference design from ADI, pulling both of those in as submodules, so that we can pick the exact right commit level from these two different repositories.
A
So using them as submodules is pretty neat, and I set that up. But the make has not worked ever since, so I've reached out to EngineerZone from ADI to ask for help, and I'm pretty sure this can be solved. It seems to be hung up on a particular clock signal, a signal called ref_clk, the reference clock. We don't use it in the code that we're bringing into the reference design, and when you make the reference design by itself, the reference design makes fine.
A
So I'm not really sure what to do here, but I reached out and asked for help. If we can solve that problem, then I think that, with the changes to the address map, we should have a repository where all you need to do in order to use our code is to type make at the command line. That would be nice to see happen. So I've been using the remote lab, but nothing over the air yet. That's going to change, I think, very soon.
A
Another thing that I've been doing is looking at some MATLAB code that demodulates FSK. It's a MATLAB model that uses DFTs to demodulate FSK, and this will be in the uplink receiver. What I'd like to do is take the MATLAB code and prepare it for HDL Coder in MATLAB, which will produce the HDL code, and then we can try it out on the ZC706.
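The idea of DFT-based FSK demodulation can be sketched in a few lines: for each symbol period, measure the energy in a single DFT bin at each candidate tone frequency and pick the strongest. The sample rate, symbol length, binary signaling, and tone frequencies below are invented for illustration; the actual MATLAB model's parameters were not given in the meeting.

```python
import numpy as np

# Non-coherent FSK demodulation via per-symbol DFT bin energies.
FS = 48_000           # sample rate, Hz (assumed)
SYM_LEN = 480         # samples per symbol -> 100 baud (assumed)
TONES = [1200, 2200]  # tone frequency per symbol value, Hz (assumed)

def tone_energy(sym, f):
    # Magnitude of a single DFT bin at frequency f
    # (this is what the Goertzel algorithm computes efficiently).
    n = np.arange(len(sym))
    return abs(np.sum(sym * np.exp(-2j * np.pi * f * n / FS)))

def demod(signal):
    bits = []
    for i in range(0, len(signal) - SYM_LEN + 1, SYM_LEN):
        sym = signal[i:i + SYM_LEN]
        bits.append(int(np.argmax([tone_energy(sym, f) for f in TONES])))
    return bits

# Modulate a test pattern and demodulate it back.
pattern = [0, 1, 1, 0, 1]
t = np.arange(SYM_LEN) / FS
sig = np.concatenate([np.sin(2 * np.pi * TONES[b] * t) for b in pattern])
print(demod(sig))   # [0, 1, 1, 0, 1]
```

A structure like this maps naturally to HDL Coder, since each bin is a fixed multiply-accumulate over a symbol window followed by a magnitude compare.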
A
So that's kind of the goal there. It's moving forward; I've gotten a good start there. That combines with the work from Theseus Cores that I talked about a few minutes ago. So that's coming. The really big win is that the encoder is behaving a little better in the downlink section.
A
I don't have any roadblocks except the usual. You know, I think we're all dealing with a pretty steep learning curve, and this is ambitious work. Stepping back just a little bit: we have an excellent opportunity to put this into a proposal for a highly elliptical orbit project that we're going to present to JAMSAT as soon as we're done, and anything working, or any prototypes that we can show, makes that proposal better. So that's some good news from the wider community.
A
Oops, sorry. Any last comments or questions before we close? All right.
C
Yeah, I've been working on this uplink simulation, essentially, but the C++ implementation of it, and starting to get, excuse me, a notion of all the tracking loops and things that need to operate. This has started a little background process in my brain, trying to figure out how this is going to go into hardware for a massively multi-channel receiver.
C
I would like to have a confab with you, Ken, and you, Michelle, here on the whiteboard maybe, and see if I understand where that's going to have to go to make sense architecturally.
C
So I'm working on this Opulent Voice uplink receiver and transmitter, and it's been a conversion process, changing the M17 code into Opulent Voice code. Of course that means breaking it and then fixing it, and I'm in the fixing-it stage now, mostly. As you've seen if you follow along, I've been posting progress reports on the Opulent Voice channel on Slack. But I've got to the point where I can actually go over the air: over an RF coaxial cable.
C
It could be antennas, but cable is a little more reliable. That brings up a lot of new issues, because all of a sudden you're not on the same clock; you're not in synchronization for free, and so of course there are problems. But with two different computers, one transmitting and the other one receiving, this is mostly working now.
C
I can hear good-quality voice, with some glitches, which I'm trying to blame on the USB audio output side. I'm going to run a test here shortly and see if that's a plausible theory. But this means a lot of mechanism is working: we're acquiring symbol timing, and we're acquiring frame timing successfully, with a few missed sync words which need to be investigated further.
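One common way to tolerate occasional bit errors in the sync word, rather than missing the frame entirely, is to accept any bit window within a small Hamming distance of the pattern. The sketch below illustrates the idea; the 16-bit sync word value and the error threshold are invented for the example and are not Opulent Voice's actual parameters.

```python
# Frame-sync detection with a Hamming-distance tolerance: slide over the
# bit stream and accept any window within MAX_ERRORS bits of the sync word.
SYNC = 0x5D57          # hypothetical 16-bit sync word (not the real one)
SYNC_BITS = 16
MAX_ERRORS = 2         # accept up to 2 flipped bits (assumed threshold)

def find_sync(bits):
    """Return indices where a (possibly corrupted) sync word begins."""
    hits = []
    for i in range(len(bits) - SYNC_BITS + 1):
        window = int("".join(map(str, bits[i:i + SYNC_BITS])), 2)
        if bin(window ^ SYNC).count("1") <= MAX_ERRORS:
            hits.append(i)
    return hits

# A stream containing one clean sync word and one with a single bit error.
clean = [int(b) for b in format(SYNC, "016b")]
damaged = clean.copy()
damaged[3] ^= 1
stream = [0] * 8 + clean + [1] * 8 + damaged + [0] * 8
print(find_sync(stream))   # both embedded sync words are found
```

The trade-off is standard: a larger threshold papers over more channel errors but raises the false-sync probability, which is where the state machine mentioned below earns its keep.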
C
Although there's enough state machine mechanism to paper over the missed sync words, so they don't cause any problems at this level, they shouldn't be happening, so we have to find out why. And then integrating the vocoder turned out to be pretty easy: just find the right APIs to call, and call them correctly, and perfect or near-perfect voice comes out. It's just remarkable.
C
If I play back the original file and the file that's gone through the whole system, side by side, not simultaneously but one after the other, it's really hard to tell the difference. If I play them in both ears of my headphones, I can hear which one has been further processed, and it's worse, but not very much worse. It's really good audio quality.
C
If we can make this work over the air, in a real system, with enough robustness to be tolerable, then it's going to be a giant step forward in digital voice for ham radio.
A
Good, thank you. I'm very much looking forward to it, because voice is the product for these sorts of systems. Even in a very data-centric world, you've got to treat voice as the product and make it sound as good as it possibly can. So thank you; I'm really looking forward to the demonstrations coming up very soon: DEFCON, and also QSO Today Ham Expo in September, and throughout the fall. Actually, that's our...
A
We have some of our best opportunities to do demos, and I think once people hear it and get to play with it, it will take off. If we can pull this off on a HackRF, plus a support package so that people can install it themselves and play around with it locally, then it'll get some good traction. So I'm very excited about that. Anybody?