From YouTube: ORI FPGA Standup 28 June 2022
Description
Action items:
1) confirm tlast behavior out of encoder
2) model encoder timing performance - any need for superframes?
A: What happens is that policies get added to this, and we do lots of really neat things with quality of service or other system-level functions to serve the people using the communications resource. So there's a lot to that. That's often where a lot of sophisticated things happen in a communication system, and ours will be no different. Scheduler work is at the whiteboard level, and we are looking to put it into some hardware pretty soon.
A: That's coming along. We think we found most of the major issues that prevented this from just being cloned and built by anyone, so those problems have been fixed, and Onshore is going to incorporate them into his repo. He manages the repo for us, and then the Tcl script that hooks things up in Vivado is going to be completed.
A: So that's coming along, and he's doing lots of other really fun things, and he's also interested in the scheduler. I'll go ahead and talk about what we've been doing over the past week. We're trying very hard to get the encoder working over the air. We do have an over-the-air demo on the Pluto with the encoder, and this was shown this past weekend at Friedrichshafen.
A: It was a ham radio show demo by Evariste: he talked about and showed a screenshot, a demonstration of the hardware working, and that's very exciting. So that exists, and it has both the firmware side and the HDL side. This was an anchor point for us to get the ZC706 working, and the progress there is that we are learning how to use direct memory access on the ZC706. There are two approaches to this: one is the Linux DMA engine, which is relatively high level.
A: It has a two-layer API. The other method is simply writing directly to the registers and configuring the DMA controller, in this case the transmit DMA controller in the reference design, in order to fetch baseband frames from memory and then present them to the encoder, which then presents them to the DAC FIFO. And so this means several things have to happen: we have to properly size all the buses.
A: We could just kind of leave them mismatched, but I don't think that's a good idea. So the first test that we attempted this past week was to change the bus sizes and to use the DMA to transmit a tone from memory. We all really wanted to have it done for today, but we're really close to having that work.
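For a sense of what matching the bus sizes means, here is a toy software model of the width conversion an AXI upsizer performs: packing narrow words into wider beats. This is an illustration only, not the project's HDL, and the 32-to-64-bit widths are assumed for the example.

```python
def repack_words(words, in_bits=32, out_bits=64):
    """Pack in_bits-wide words into out_bits-wide beats, with the
    first word landing in the low bits, as an AXI width upsizer
    does for the data portion of a beat."""
    assert out_bits % in_bits == 0, "output width must be a multiple"
    ratio = out_bits // in_bits
    assert len(words) % ratio == 0, "pad the stream to a whole beat"
    beats = []
    for i in range(0, len(words), ratio):
        beat = 0
        for j, w in enumerate(words[i:i + ratio]):
            beat |= (w & ((1 << in_bits) - 1)) << (j * in_bits)
        beats.append(beat)
    return beats

# Example: four 32-bit words become two 64-bit beats.
print([hex(b) for b in repack_words([0x11111111, 0x22222222,
                                     0x33333333, 0x44444444])])
```

In hardware the interconnect or a width-converter IP does this for you; the point of the sketch is just that mismatched widths imply real repacking, which is why leaving the buses mismatched felt like a bad idea.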
A: What that means is that the hardware is modified, then wrapped up in PetaLinux, then put on the ZC706, and then we use IIO to run a basic test. We're hoping to get that done really soon, and then, after that works, after we prove that we've got the bus size issue done and that the encoder can just drop in, we'll drop in the encoder. I know there's probably a more sophisticated way to do this, so, anybody listening...
A: Our approach is to use IIO, via a Python script, to run the show. But we learned how to do this by writing C code that directly addressed the registers in the DMA controller, and that worked out okay. That's a little bit lower level than we probably want to end up with to do anything sophisticated; I think we need to have some higher-level functionality in the applications. So that's where we're at, at least on this particular question.
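To make the register-level approach concrete, here is a hedged sketch of the write sequence for a simple-mode (non-scatter-gather) transmit transfer on the Xilinx AXI DMA MM2S channel. The register offsets follow my reading of the AXI DMA register map and should be checked against the product guide; the base and buffer addresses are made up, and real code would perform these writes through /dev/mem or a UIO mapping rather than printing them.

```python
# Hypothetical sketch of driving the AXI DMA MM2S channel in simple
# (non-scatter-gather) mode. We build the ordered (offset, value)
# write list rather than poking /dev/mem, so the sequence is testable.
MM2S_DMACR  = 0x00  # control register: bit 0 = run/stop
MM2S_SA     = 0x18  # source address (physical) of the buffer
MM2S_LENGTH = 0x28  # transfer length in bytes; this write starts the DMA

def mm2s_transfer_writes(buf_phys_addr, num_bytes):
    """Register writes to stream num_bytes from memory out of MM2S.
    Writing LENGTH last is what actually kicks off the transfer."""
    return [
        (MM2S_DMACR, 0x1),             # set run/stop to start the engine
        (MM2S_SA, buf_phys_addr),      # point at the baseband buffer
        (MM2S_LENGTH, num_bytes),      # arm and start the transfer
    ]

for off, val in mm2s_transfer_writes(0x0E000000, 4096):
    print(f"write32(base + 0x{off:02X}, 0x{val:08X})")
```

This is the level the hand-written C test operated at; an IIO-based Python flow sits on top of exactly this kind of sequence.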
A: I also have some updates about the decoder. We do have a decoder code base from Ahmet Inan, and that has been picked up and will get lots of additional development, which will be donated back to the repository. I don't have a firm schedule for that, but Ahmet was extremely excited to have this happen, and it's how it's supposed to work in open source.
A: The code base has been determined to be of value by an R&D firm, and they are working on it, and then we will get the benefit with the open source release of the decoder. This would be on the ground station side, so I'm very excited about that and looking forward to development on that end. On the uplink side, we've done a lot of work to firm up the uplink protocol, which is Opulent Voice, and there's a tracking document that talks about it.
A: It looks like we're going to peel away a lot of the stuff that's done in M17, because it's all aimed at and designed for a very low bitrate codec, the 3200 bit per second Codec 2, which is not where we want to end up in terms of voice quality. So what we've done is replace the codec with a 16 kilobit per second Opus codec, and we're designing in the ability to go even higher.
A: Some flexibility in the Opus codec is the goal, and then some of the other functions or aspects of M17 will be set aside and some other layers will be put in. All of this is being documented in the tracking document for Opulent Voice, which is definitely derived from M17; I think it's fair to say that a high bitrate version of M17 is where we started from. The goal for that is to get uplink streams working over the air and to have an uplink simulator.
A: That simulator goes directly into generating uplink streams, which will then be something for the encoder to chew on and for the scheduler to deal with. So what we're talking about doing is working on three separate pieces for an end-to-end demo, to be done as quickly as we possibly can. The next big show is DEF CON in August, so whatever is done by then, we're going to show; whatever's not done, we will present and talk about. And that's it
A: From my end, I'm going to hand it over to Thomas Perry. You have the floor to talk about whatever you like, and then please pick whoever hasn't spoken yet.
B: Okay, hey, yeah. I'm just trying to get back into things after a short break from the project, so I don't really have a lot to say, but I think that introduction was really useful for me to get an idea of what people have been working on. So yeah, that's basically all I have to say. James?
C: Thank you, Thomas. I'm James, for people who don't know: the technician working at ORI for Remote Lab South. Currently there's not too much to report on that, and the member of the board that oversees our Remote Lab South is currently out on a business trip. He previously left California, where I believe he was with you, Michelle, and was talking over a few things there, and has moved on to the next part of his trip in Portland. But otherwise, that's about it from us.
D: Yeah, nothing much to report from my side either. Basically, I was involved in some documentation work, but I should be back in action tomorrow, and the plan is to pick up from where Michelle is at present; we're both trying to solve the same problem. So I will go through the notes that she has put in Slack and then pick it up from there.
D: Another thing: Evariste has also developed an app for testing the Pluto, so I will look at his app and the code, and we'll try to port it to the ZC706. So yeah, that's the plan. And one question for Michelle: the C code that you are talking about, is it the same one that Evariste has shared, or is it a different one? Is it from the same app?
A: Oh, for operating the... you mean the C code for doing the DMA tests? Oh no, it's just very basic C code that writes to the configuration registers for the DMA controller, so not even as fully formed as Evariste's firmware, which I've looked at and have not mastered yet. I'm still ramping up to be able to appreciate everything that he's done.
D: Okay, yeah, so I'll work on that starting tomorrow. And another thing: you are running this code to configure the DMA and everything on the PS, right?
A: Yes, actually. I wrote a C program that configures the transmit DMA controller; I actually wrote it using nano on the target, so it's very, very simple and basic, you know, just to make sure that we understood how to configure the DMA, and we're coming right along. It's getting better every day that we put our backs into it. So pretty simple, if not straightforward. I cut and pasted the code to the Slack channel, and then I'll...
A: I need to do a backup into the repo and make sure that it's there. This is very, very simple code, though.
D: Okay, okay, right. So that means I have everything ready for tomorrow: I can take that code and make progress, and I also have to look at this, yeah.
F: I guess that leaves me. I've been working on most of the same stuff that Michelle's been working on, helping her out and trying to figure some of it out. I've done a lot of work on that code that started as M17 code and is being converted over to Opulent Voice, in order to test out our high bitrate codec scheme. That's progressing slower than we'd like, as usual. I'm also looking at driving the encoder, and there's one fundamental thing that I'd like to understand that I don't really understand yet.
F: Maybe reading the code is the best way to get the answer, but I'm going to try asking instead. As we send these, we're sending BB frames to the encoder, right? And the BB frames have a definite start and a definite end; they're not a stream like samples would be. So how does the encoder know where the edges of the BB frame are when I'm feeding it? Is there a block transfer operation that it's aware of, and that the driving code needs to be aware of, or is there some other scheme for that?
D: If I remember correctly, there are no end markers, but in the metadata that we pass before the BB frame there is the size, and I think the encoder relies on that size.
A: The MODCODs and stuff like that, when you say metadata? Because we used to call the MODCOD and the frame size and so on metadata. Yeah, the block actually tears down the header: it decodes and looks at the PL header as if it were receiving it, and derives the information from that.
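In other words, the framing would be length-driven: read the size out of the header in front of each frame, then consume exactly that many bytes, with no end marker needed. A toy model of that idea follows; the two-byte length field is a stand-in I invented for illustration, not the actual PL header layout, which also carries the MODCOD.

```python
def split_bbframes(stream: bytes):
    """Split a byte stream into BB frames, assuming each frame is
    preceded by a header whose length field says how long it is.
    The 2-byte big-endian length used here is a placeholder for the
    real PL header, which carries MODCOD and frame size."""
    frames, i = [], 0
    while i < len(stream):
        size = int.from_bytes(stream[i:i + 2], "big")  # read length field
        i += 2
        frames.append(stream[i:i + size])              # consume the frame
        i += size
    return frames

payload = b"\x00\x03abc" + b"\x00\x02hi"
print(split_bbframes(payload))  # frames recovered without end markers
```

If the encoder works this way, the driving code's only framing duty is to keep the header and payload contiguous and correctly sized, which is consistent with the "no end markers, size in the metadata" answer above.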
D: I mean, presumably yes for the basic approach, but I need to go through the protocol again to understand; there could be some other way. Yeah.
F: I guess that's my suspicion too, yeah.
F: I glanced at that repo, and it's full of so much Linux stuff that I couldn't even find the source code for the C program that does the actual work, assuming that's what it is.
D: I think we need to look at the RTL code rather than the C code, because once the C code sends a stream of BB frames, it's the RTL code that starts the processing of each BB frame. It's a pipelining-based approach, so each step in the pipeline sends the frames on to the next stage to attach some more headers.
D: We need to look at the implementation and at when we are sending the tlast and the beginning, because it's going in the form of an AXI stream; that's how we designed the RTL, and that's how we designate beginning and end. We need to look at the RTL code to figure it out, but yeah, there is definitely still a tlast we send. Now I need to find out whether it's after every BB frame, or at what frequency we send it.
D: Can you come again? Processing of BB frames: what do you mean by that?
F: And they're not very long in terms of milliseconds, yep. So if we have to do a processor operation to send each BB frame, and it has to be done within a narrow window, probably during the previous BB frame, that may be a lot to ask of the processor subsystem.
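To put rough numbers on that window (all figures below are illustrative; the real frame size and rate follow from the chosen MODCOD and symbol rate):

```python
def frame_period_ms(frame_bits, bitrate_bps):
    """Time one BB frame occupies on the air: the deadline by which
    the processor must have the next frame staged for the DMA."""
    return 1000.0 * frame_bits / bitrate_bps

# e.g. an assumed 16008-bit frame payload at an assumed 1 Mbit/s:
print(f"{frame_period_ms(16008, 1_000_000):.1f} ms per frame")
```

At these assumed numbers the processor has on the order of 16 ms per frame; higher over-the-air rates shrink that window proportionally, which is what motivates the action item on modeling encoder timing and the superframe question.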
D: Yes, and for that we can implement a FIFO in the software, so that we always have frames available. But again, yeah, that's one way of handling delay or some timing mismatch.
F: You could do a FIFO with one BB frame being the element, but you could also say, well, that's just too fast: we're going to have ten BB frames be some kind of superframe that the software schedules.
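That superframe idea can be sketched as a software-side queue that releases frames only in batches of N, so the processor arms one transfer per superframe instead of one per BB frame. The structure below is a hypothetical sketch, not the project's actual driver:

```python
from collections import deque

class SuperframeQueue:
    """Buffer BB frames and hand them out N at a time, so scheduling
    granularity is one superframe rather than one frame."""
    def __init__(self, frames_per_superframe=10):
        self.n = frames_per_superframe
        self.q = deque()

    def push(self, frame):
        self.q.append(frame)

    def pop_superframe(self):
        """Return a list of n frames, or None until enough are queued."""
        if len(self.q) < self.n:
            return None
        return [self.q.popleft() for _ in range(self.n)]

sq = SuperframeQueue(frames_per_superframe=3)
for i in range(7):
    sq.push(f"bbframe{i}")
print(sq.pop_superframe())  # first three frames as one superframe
print(sq.pop_superframe())
print(sq.pop_superframe())  # only one frame left -> None
```

The trade-off is latency: a ten-frame superframe relaxes the per-frame deadline by 10x but adds up to ten frame-times of buffering delay, which matters for a voice uplink.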
A: We might be at the point where we can answer that question. My instinct is that we're good, but we all know how instincts can fail. So I have two action items. One is to confirm that the tlast signal is operating as a signal for the framing within the encoder, and that it's communicating and using all of the options that it has available to it for AXI and AXI-Stream, which I think it does.
A: But we can have a couple of people look at that and make sure that we know what we're doing, so that we can feed it correctly. And then timing and performance. Those are the two action items I'm hearing, so I've got them written down, and we'll do our best to answer the questions.
A: Things have actually been working well. Of course, I usually find that the time you have scheduled for the backup or parity check seems to be the time that I most often use the system, so I suspect that it does not matter what day of the week you pick for the parity check: I will show up and try to do work, and that's fine. And yeah.
A: Thank you, everybody, for tackling something that is ambitious and hard. I'm definitely proud of what we're doing. We still have quite a ways to go, but it's coming along, and some larger structure is emerging. We are going to get a chance to experiment with things that most commercial systems don't get to, because we're not driven as hard by trying to cram lots of subscribers into every hertz of bandwidth.
A: So we do have some other things that we can experiment and play with, and those things are going to happen in the near future, so I'm very excited about that. Yeah, if you need anything, or you have a roadblock or a question, or just want to, you know, learn something, then come to Slack or the email list and speak up. Oh, and I have a couple of reports from Leonard.
A: Leonard's day job prevents him from coming to this meeting, but Leonard Diguez is working on getting the Pluto implementation up and running as a demo; that's coming along, and he's excited about it. If you haven't looked at the RF model that he did in Python, it's pretty neat and useful, and it's in the repo. He wanted to pass along his progress and say hello to everybody. All right, thank you!