From YouTube: ORI FPGA Meetup 15 November 2022
A
There, yeah, sorry. So I tried to move a step forward in order to debug some potential issues you could have with the other platform.
A
A few days ago, my conclusion was that it should be 8-byte aligned; eight bytes is just 64 bits, and the only 64 bits in my design is the DMA-to-DAC bus width, as Paul could explain, maybe in more detail.
A
There is some issue because we can't write to the encoder only two bytes, which is normally an IQ pair. Sorry, four bytes: two 16-bit values, I and Q. By reducing the bus width to 32 bits, all the code rates are working, and so, yeah, I'm very confident.
A
Right now I can change from one MODCOD to the other seamlessly, and the next step is to encapsulate a TS transport stream in a BB frame. That's the first step, to see if all the data are okay, and then to try to integrate DVB-GSE.
B
Well, thank you so much for your expertise and leadership and advice here. It's been really invaluable. The minimal DVB test that you've written seems to be working for the zc706, and we've gone through the source code carefully to try to see where we went wrong. We haven't found it yet, but it's in there somewhere, so I think we're very close to being able to solve problems and to increase the quality of what we're doing. We could not do it without you. Thank you so much, Evariste.

All right, I'm going to turn the floor over to Paul, who's going to talk about some of the things that Evariste brought up: a realization about the bus width and the restrictions on the DMA controller that we are working with, and a possible design that might help us increase the resiliency of our downlink encoder for our 10 GHz downlink, which is on the ham bands and is meant for amateur radio communication.
B
This sort of problem that we're dealing with has wider applicability, of course, so we're trying to do the best possible job we can. So those are the two things we're going to talk about, and I'm going to turn it over to Paul to detail it out.
C
Let me start out with some background for people who are joining us and aren't already familiar with the ins and outs from our point of view. We're working with a reference design provided by Analog Devices, which is designed to work with their radio chips, the AD9371, and the zc706 is the board that hosts the Xilinx FPGA part with its Zynq processor on board. So we're running on the hard ARM on the chip and using the FPGA to talk to the radio, okay.
C
So it's designed to take samples (we're talking about the transmit chain now) by DMA from processor memory. That goes through a DMA controller block, which then goes into a FIFO block, which is only responsible for a little bit of smoothing out of various rate variations, and then on through some other logic, routing logic mainly, to finally get to the DACs: the digital-to-analog converters that make the analog signals that make the radio. Okay, so that's the standard arrangement with the reference design.
C
It doesn't do anything with the signal except move it along the pipeline and then transmit it. Now, in order to use this for our DVB-S2 downlink, we've gone inside the reference design and split apart the DMA controller from the DAC FIFO, and in between we've placed our encoder block. Our encoder block takes BB frames, which are a variable-length amount of data that corresponds to a fixed amount of output samples, and it takes these BB frames in through the interface that would normally carry samples from DMA out of processor memory.
C
It does all the stuff in the DVB-S2 standard to make samples out of them, and then they go off to the FIFO and down the routing chain to the DACs. And on the zc706, which Michelle and I have been working on here in the San Diego lab, we've been having all sorts of trouble getting this to work reliably.
C
So we're stuck with the DMA controller's limitations for the time being anyway, and one of its limitations is that it can only transfer multiples of a byte, and the size of that multiple depends on the widest bus that's connected to it.
C
So in our case, with the reference design as it comes, we're bringing in 64 bits at a time out of processor memory, so eight bytes, and then shipping out whatever's convenient. It was originally 128 bits, or 16 bytes I guess; we've narrowed that down to 32 without too much remaining controversy, but the 64 still governs. So that means that, in order to get through that DMA controller, you have to be transmitting chunks of eight bytes. Of course, the people who designed the DVB-S2 protocol did not anticipate this.
C
They
assumed
that
bytes
were
going
to
go
through
in
whatever
number
were
needed,
and
so
they've
standardized
packet
sizes
that
are
not
multiples
of
of
16,
bytes
or
eight
bytes
or
I.
Think
in
all
cases
they
are
multiples
of
four
bytes
that
maybe
that's
only
maybe
that
even
that's
not
true.
So
the
result
is
that
yeah,
when
we
send
a
BB
frame
through,
we
need
to
do
something
about
figuring
out
where
the
beginning
and
end
of
it
and
the
packet,
the
BB
frame
packet
is,
and
we
can't
rely
on
the
dma
for
that.
C
We can try: if we set up the DMA to use a multiple of a certain width, and then ensure that the bursts we're sending are always a multiple of that width, then it'll work. But it's still sort of unreliable; it's depending on these DMA transfers to frame the packet.
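The burst-sizing workaround just described can be sketched in a few lines. This is an illustrative Python sketch, not the project's host code; the 8-byte multiple comes from the 64-bit bus discussed above, and the zero pad value and function name are my own assumptions.

```python
def pad_to_dma_multiple(frame: bytes, bus_bytes: int = 8, pad: int = 0x00) -> bytes:
    """Pad a BB frame so its length is a multiple of the DMA bus width.

    The DMA controller only moves whole bus-width beats, so a frame whose
    length is not a multiple of bus_bytes must be padded out. The receiver
    then has to find the real frame boundary itself, which is exactly the
    framing problem this workaround does not solve.
    """
    remainder = len(frame) % bus_bytes
    if remainder == 0:
        return frame
    return frame + bytes([pad] * (bus_bytes - remainder))
```

For example, a 1003-byte frame pads to 1008 bytes for a 64-bit bus, but only to 1004 bytes once the width is reduced to 32 bits.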
C
So the standard solution for this problem is to use some kind of framing protocol, and I'm proposing that we might want to switch our interface for file transmission to use a framing protocol. There are many framing protocols to choose from. The most common one is used in PPP and SLIP, and even in AX.25, where it's used in the KISS protocol. It's a byte-stuffing protocol: whenever you get your frame-indicator byte, you have to replace it with two bytes, and then there's another value.
C
You have to replace that one with two bytes as well, and there's a little dance you do. It's pretty efficient in the average case, with random data, but in the worst case it doubles the number of bytes that would be transmitted. There's a more modern, or at least newer, byte-stuffing protocol called COBS, which stands for Consistent Overhead Byte Stuffing, which has about the same average overhead and a much better worst case.
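The byte-stuffing dance Paul describes can be made concrete with the SLIP variant. This is a minimal Python sketch assuming SLIP's standard byte values (END 0xC0, ESC 0xDB, ESC_END 0xDC, ESC_ESC 0xDD); it is an illustration only, not code from the project.

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    """Frame a payload SLIP-style: escape END/ESC inside the data, then
    terminate the frame with a real END byte. Each escaped byte becomes
    two bytes, so a payload of all END bytes doubles in size: the
    worst-case expansion noted above."""
    out = bytearray()
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])   # frame marker found inside the data
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # the escape byte itself
        else:
            out.append(b)
    out.append(END)                        # the actual frame boundary
    return bytes(out)
```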
C
COBS is also self-correcting if it ever gets out of sync, which is not the case with the current design: if that ever gets out of sync, it stays out of sync. And that gives us all sorts of convenience in manipulating these pre-recorded files. We can even cut them apart at arbitrary boundaries and still send them in, and it would still work, at a pretty limited expense. The decoder is the easy part for COBS; it doesn't require a lot of lookahead or anything like that.
C
Just a couple of bytes of storage; I think it's one byte of extra latency, which disappears into the packet size anyway, and not very much logic. So it should be an easy block to write, even for novices like me, and we should be able to get that working fairly quickly, and that should completely eliminate any question of byte alignment on the input of the encoder. That's the long version. I'm going to paste some links into the chat that'll save you about 10 seconds of Googling, I guess, to find information on COBS.
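For reference, here is a straightforward Python sketch of COBS, following the Cheshire and Baker paper. It shows how little state the decoder needs (one code byte's worth of counting, no lookahead). It is an illustration only, not the proposed FPGA block.

```python
def cobs_encode(data: bytes) -> bytes:
    """COBS-encode data so the output contains no zero bytes.

    Each code byte gives the distance to the next zero in the original
    data (or marks a full 254-byte run of non-zeros), so the worst-case
    overhead is one extra byte per 254 bytes of payload."""
    out = bytearray()
    idx = 0
    while True:
        block = bytearray()
        while idx < len(data) and data[idx] != 0 and len(block) < 254:
            block.append(data[idx])
            idx += 1
        out.append(len(block) + 1)        # code byte
        out += block
        if idx >= len(data):
            break
        if len(block) == 254:
            continue                      # full run: no zero was consumed
        idx += 1                          # skip the zero this code byte encodes
    return bytes(out)

def cobs_decode(data: bytes) -> bytes:
    """Invert cobs_encode. Needs only the current code byte as state."""
    out = bytearray()
    idx = 0
    while idx < len(data):
        code = data[idx]
        if code == 0:
            raise ValueError("unexpected zero inside COBS data")
        block = data[idx + 1 : idx + code]
        if len(block) != code - 1:
            raise ValueError("truncated COBS block")
        out += block
        idx += code
        if code != 0xFF and idx < len(data):
            out.append(0)                 # implicit zero between groups
    return bytes(out)
```

A frame-delimiter byte of 0x00 can then be appended between encoded frames, since 0x00 is guaranteed never to appear inside the encoded data.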
C
Let's see, is that the right paste? There it is. If you want to read just one thing, I recommend going to the original paper. It's pretty brief and easy to read, and it explains it pretty well. So I'm interested in feedback on this.
C
If
somebody
thinks
it's
a
dumb
idea
that
it's
a
solid
problem
now
that
we've
reduced
the
the
size
down
to
32
bytes,
there's
32
bits
in
every
solution:
fine,
but
on
the
zc706,
that's
going
to
be
more
painful,
we're
going
to
have
to
modify
some
more
things,
I
think
and
it
doesn't
get
all
the
benefits
that
I
talked
about
of
being
able
to
handle
the
files
with
great
flexibility.
So
let
me
stop
talking
and
see
if
anybody
has
anything
to
say
about
that.
D
C
So come back next week, I guess, or put it in Slack. If anybody has any questions or any thoughts about whether this is worth the trouble, or whether I'm on the wrong track entirely, just say so. I don't mind hearing that I'm making things more complicated than they need to be. That does happen sometimes, yeah.
B
I'm enthusiastic about anything that helps us make this system more resilient, especially for space, but also for terrestrial deployments that are not easy to upgrade or maintain or fiddle with. And just resiliency in general: I think that the receiving artifact, the receiving circuit, should do all that it can to accept anything.
B
You
know
to
go
out
of
its
way,
in
other
words,
to
to
accept
streams
of
data
that
may
be
interrupted
from
from
operators,
or
you
know,
maybe
rudimentary
there
could
be
some
malformations
or
you
know,
there's
all
sorts
of
things
that
can
happen
to
to
transmission,
so
I
think
anything
that
helps
resiliency
is
good
and
we've
already
seen
that
the
the
encoder,
which
is
brilliant
and
well
done,
is
it
can
get
off
track
and
and
it's
it's
relying
on
things
being
delivered
very
precisely
from
from
memory.
B
So
we're
not
always
going
to
have
that
we
can't
protect
the
input
to
the
to
the
encoder
as
much
as
we
would
probably
like.
We
want
it
to
be
widely
useful
to
people
so
I
think
anything
that
helps
resiliency
is
good,
and
if
this
this,
this
proposal
needs
some
some
solid
review
from
from
as
many
different
people
as
possible.
C
B
Oh yeah, this is a big one, but I think just in general... oh, and also the restriction that we realized on the DMA, which I think we're taking care of now; we did test the restriction. And I guess this kind of goes into my very brief report for this week. I've been just trying to help get things working, and Evariste's code for the minimal DVB test and the code that we wrote are almost, pretty much, the same.
B
There's a procedure that you do from the processor side to use a design like the one we're trying to accomplish, and since we're both using IIO, Industrial I/O, both sets of code walk through the same general set of things. There are some minor differences: for example, at no point did we ever enforce one of the things that Evariste did about whether the buffer was blocking or non-blocking.
B
It just reinforces the default, so I put that in. And then Evariste uses a slightly different IIO call for starting the buffer. But we're doing the same thing, so how come Evariste's code works on the zc706 and ours does not? It times out; every time, we get the -110 error (ETIMEDOUT).
B
I don't have any hair left. It's unreal; I'm like, what the heck is going on? And any adjustment takes a long time to filter through: if you have to go back and change the HDL, we're talking like an hour's hit for some of these builds. So I have no idea what's going on with this. We were blaming the DMA, but even after fixing the problems with our 64-bit restriction, that didn't cooperate.
B
What we have been able to do is operate it from Matlab. With the exact same HDL, the same bitstream, Matlab can order it around. We can change the profile, which is something else I think we need to talk about, and something we were able to come to grips with yesterday: for complicated RFICs like the 9371, you really have to have a configuration file. There are some things that you cannot modify after it gets up and running.
B
So you'd better get them right in the configuration, and the configuration sets things like clock speed, symbol rate, bandwidth, and its filtering: all the filter taps for the receiver, for the transmitter, for the observation receiver. All of that stuff is set in the configuration file. Now, you can do this in code, and that's how people used to do it; they used to actually set all this stuff with calls.
B
So these profiles, you can do them by hand, if you're really into going through documents (and these documents are like 500 pages long), and you can poke through and figure out how to generate all this stuff. Or you can use the tools that Analog Devices and Matlab have given you. Analog Devices has TES, which is the Transceiver Evaluation Software package. It's free and pretty cool, and what it does is produce these profiles.
B
That's what you use when you're really sure, like when you're ready to ship it. And Matlab, interestingly enough, has a transceiver package for all of this that we use; you can swap in the profiles with a line of code. So Matlab's really the best tool for the job here for fooling around and experimenting and doing engineering with your transceiver, and we were able to successfully do that.
B
You know, this is the equivalent of being in your garage trying to repair your car. You have your hot rod, you've swapped out engines, and you've got some extra cool stuff that you've put in there, like our encoder, and now you're firing it up for the first time, and it just catches for a few seconds and then conks out on you, because, well, you don't have the right...
B
Whatever
you
know
you,
you
haven't
adjusted
it
you
don't
yeah,
there's
so
many
things
that
can
go
wrong
with
trying
to
bring
up
an
engine
and
modern
complicated
circuits
like
this
are
exactly
like
a
a
car.
You
know,
so
you
have
all
this
sort
of
stuff.
You've
got
the
air
fuel
mixture.
You've
got
your
valves.
You've
got
you
know,
cylinders,
you
got
all
sorts
of
crap
going
on.
B
So that's what we're doing: we've got the engine fired up and it's moving for a few seconds, and in the case of Evariste, he's actually got it up and running. But now the next step is to fine-tune all of our clocks and symbol rates and filters, and get the right filters for this one.
B
On the design over there on karapi, you can see, I think, all these artifacts working in the Pluto design. So the Pluto implementation has all the stuff working and incorporated, and there's a separate whole filter-production toolchain for the 9361, which is what's on Pluto. So it's the exact same encoder over on the 9361; the 9371, with the bigger FPGA, is what we're trying to get moving.
B
So that's how it's going; it's going pretty well. I wish it would go faster, but hey, we'll just keep hitting it until it's done, and I think as soon as we get our profile, meaning what we want the transceiver to do in terms of... oh, an interesting point: the 10 MHz bandwidth that we're talking about doing for this transponder, that's as low as you can go with the AD9371. It turns out that 10 MHz is it.
B
This
is
narrow
band,
as
you
can
go
so
I
think
we
knew
this
when
we
picked
the
chip
for
Wally
Richie
was
the
lead
on
this
and
I.
Remember
a
brief
discussion.
He
was
like.
Oh,
don't
worry,
it'll
do
10
minutes,
it'll
go
down
to
10
megahertz
and
I
was
like
oh
yeah.
These
are
these.
Are
these
are
chips
that
are
used
for
much
higher
bandwidth?
You
know
it
didn't
really
occur
to
me
that
it
might
be
what
our
use
case.
B
You
know
for
the
satellite
sub-band
and
the
terrestrial
digital
high-speed
sub
band
for
terrestrial.
They
are
literally
at
the
limit
of
the
lower
end
of
what
we
can
do.
So
this
opens
up
a
lot
of
other
questions
as
to
you
know,
okay,
once
we
get
this
design
working
a
little
bit,
what
else
can
we
mess
with
that?
That's
that's
it
24
or
47
gigahertz.
Where
else
can
we
go
with
even
wider
bandwidths
on
these
on
these
bands
and
take
our
system
and
have
some
serious
fun
with
some
with
lots
of
bits?
B
So that's in the very near future. That's pretty much, from my point of view, what I'm seeing and experiencing. So yeah, I'll stop here and open the floor up to any other comments or questions or discussion.
C
I've got one more thing I think I wanted to cover. From the Remote Labs point of view, this is an innovation in usability for people who are trying to do radio debugging remotely, which includes us, because we're, like, across the house from the remote lab.
C
So it's easier if we can operate things remotely, and it has to do with looking at the spectrum analyzer output. With the spectrum analyzer, we can make screenshots, like we can with all the other instruments, but that's slow and kind of awkward, and it's hard to tell what's going on without seeing it in motion.
C
So what I've done now (finally; this has been on the wish list for a long time) is turn this into a video streaming solution. There's a program running on a Raspberry Pi, which I've added so it can be dedicated to this purpose, that receives the HDMI from the spectrum analyzer and makes a relatively low-bit-rate video stream. And then on your side, wherever you are in the world, you just need a program that can receive a video stream.
C
And ffmpeg serves for both: ffmpeg creates the video stream here in the lab, and then you can use its companion program, called ffplay, to receive it locally. I've bundled this up into a script that uses SSH to get into the remote lab and automatically shuts down the link when you're done with it, and stuff like that. This is not quite ready to publish, but soon we'll be making it available.
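As a sketch of what the capture side and the viewer side of such a link can look like with ffmpeg and ffplay: the device path, address, port, and encoder settings below are hypothetical placeholders, since the actual script has not been published yet.

```python
def capture_cmd(device="/dev/video0", dest="udp://198.51.100.7:5000", kbps=800):
    """ffmpeg command for the Raspberry Pi side: read an HDMI-to-USB capture
    device via V4L2 and send a low-bit-rate, low-latency H.264 stream.
    All paths, addresses, and rates here are assumed values for illustration."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", device,          # HDMI capture presented as a webcam
        "-c:v", "libx264",
        "-preset", "veryfast", "-tune", "zerolatency",
        "-b:v", f"{kbps}k",                  # stays under a megabit, as noted above
        "-f", "mpegts", dest,
    ]

def viewer_cmd(src="udp://0.0.0.0:5000"):
    """ffplay command for the remote viewer: receive and display the stream."""
    return ["ffplay", "-fflags", "nobuffer", src]
```

A wrapper script would launch capture_cmd() on the lab Pi over SSH (for example with subprocess), run viewer_cmd() locally, and kill the remote process when the viewer exits.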
C
Performance-wise, it uses less than a megabit of bandwidth, consistently, because the spectrum analyzer doesn't have that much motion; usually it's just the screen part, and even that can be localized, so it compresses really well. And this is a high-resolution display. Let me show you what it looks like, actually; I do have it running here at the moment, if I can be allowed to share my screen.
C
So that's what you'd see. Hopefully it hasn't been mangled too much by Zoom, but it looks nice and crisp on my screen. There's nothing on it right now, so it's boring, but that's the idea.
B
Okay, all right, let's go ahead and close our FPGA meeting. Thank you very much for all of this amazing hard work. We are starting to see some really good results, and you can see where it will go from here: to several different projects, to open source highly elliptical orbit satellite projects for the amateur satellite service, to lots of terrestrial applications, and to just general education in terms of advanced digital communications.
B
So
those
are
all
things
that
we're
very
interested
in
making
happen
if
you're
watching
this
and
you
want
to
get
involved,
then
please
go
to
our
website.
It's
open
research,
dot,
Institute,
slash,
getting
Dash,
started
and
I'll
put
that
into
the
into
this
video,
so
that
you
can
just
click
on
it,
but
that
go
getting
started.
Link
will
take
you
to
all
the
different
ways
to
participate
and
follow,
and
we
appreciate
everybody
that
that
pitches
in
and
helps
us
do
all
these
ambitious
things.
So
thank
you
and
see
you
next
week.
A
Maybe just one last remark.
A
Yeah, I don't know if you noticed that Daniel, F4GPZ, has written an article about DVB-GSE.
A
Thanks to... well, we tried, Brian G4WVG and me, to get some BB frames out of the MiniTiouner, and it seems to work. Oh!
A
So I could try to have a receiver. Well, I have some MiniTiouners here, and the good news is it's not only the transport stream; we can also receive the BB frames. So we can debug the DVB-GSE. Awesome.