From YouTube: Space and Satellite Symposium 2021 Anshul Makkar
Description
Space and Satellite Symposium 2021 Anshul Makkar, covering LDPC on FPGA for amateur space applications.
Hi, I am Anshul Makkar, working as a software engineer. I'm an open source collaborator at ORI, the Open Research Institute, and my collaboration areas relate to the implementation of LDPC in hardware. It's not the complete implementation of all the modules of LDPC that I have done; I have contributed to the GSE encoder, BB frame formation, and a few more modules of the LDPC implementation.
Apart from that, I have also been contributing to debris research, again as part of ORI. Just a couple of days back we presented our paper to the FCC. Another collaboration of mine relates to a dynamic scheduler for spacecraft, which allows a spacecraft to switch tasks dynamically at runtime.
So if it's performing task X, and at the ground station it's realized that it needs to do task Y, suppose it's doing attitude control and it needs to immediately switch to some other task, then that's possible with my dynamic scheduler. So that's another contribution of mine.
Today, the topic of my presentation is coding theory: why we need it, what its applications in space are, and how at ORI we have implemented LDPC, which is a form of forward error correction code, on an FPGA.
I will start with, as I mentioned, why we need codes, then simple codes, then, going further deep, what LDPC codes are, the particular class of codes that we have implemented, and the approach that we have used for the implementation. Encoding a message using LDPC codes involves huge calculations, so we have to implement it efficiently, and here I'll be focusing on how we have implemented this.
You would appreciate that each of these is in itself a big topic; there are various research papers put forward for each of these topics, so in 30 minutes it won't be possible for me to go into the depth of each topic.
So my aim here is to get you interested, to give you some pointers, or to give you some basic understanding of how we can use these codes, why these codes are needed for long-distance communication, how they are beneficial, and how they can be efficiently implemented in an FPGA. So, starting with why we need them: think of the communication link between a satellite and Earth, or any long-distance communication link, whether LEO to Earth or GEO to Earth.
Initially we had simpler classes of codes. We have the error detection mechanism via CRC: we have code bits c1, c2, ..., cn, we do some calculation over them, and based on the result we append the CRC bits to the message. This is transmitted; the receiver receives the bits, calculates the CRC again, and finds out whether an error occurred.
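As a sketch of the idea (not the particular CRC the speaker's system uses), a CRC can be computed by polynomial long division over GF(2); the toy polynomial below is an assumption chosen only for illustration:

```python
def crc_remainder(bits, poly):
    """Compute the CRC remainder of a bit list by polynomial long division
    over GF(2). `poly` includes the leading 1 (e.g. [1,0,1,1] for x^3+x+1)."""
    # Append len(poly)-1 zero bits, then divide.
    buf = bits + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if buf[i] == 1:  # XOR the divisor in wherever a leading 1 remains
            for j, p in enumerate(poly):
                buf[i + j] ^= p
    return buf[len(bits):]  # the remainder becomes the CRC bits

message = [1, 1, 0, 1]
poly = [1, 0, 1, 1]               # x^3 + x + 1 (a toy polynomial)
crc = crc_remainder(message, poly)
codeword = message + crc          # transmit message followed by CRC bits

# Receiver: divide the whole codeword; a zero remainder means no error detected.
assert crc_remainder(codeword, poly) == [0, 0, 0]
```

A corrupted codeword would, with high probability, leave a nonzero remainder, which is how the receiver detects (but cannot correct) the error.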
Even before moving to parity check equations, I want to give you another example, just to make you understand the scale of the problem. Suppose we have 1 0 1; in the previous case we introduced one extra bit. Now here, let's repeat the complete message itself, so we transmit 1 0 1, 1 0 1 instead of 1 0 1. Or, to make things clear: for the message 1, 1, 0 we transmit 1, 1, 0, 1, 1, 0, 1, 1, 0. Another form of redundancy can be duplicating each individual bit.
So if this bit gets corrupted, the receiver still has its duplicate; if that bit gets corrupted, it has this one, and similarly it can find out the value here. If neither is corrupted, it can take either one. Again, this is a simplistic form where you are duplicating the message, or duplicating the bits, so that if one copy gets corrupted then at least the receiver has another bit that is correct. But again, what happens if both bits get corrupted?
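A minimal sketch of this repetition scheme; repeating each bit three times and decoding by majority vote is my choice of repetition factor for illustration:

```python
def repeat_encode(bits, n=3):
    """Repeat each message bit n times (the simple redundancy described above)."""
    return [b for b in bits for _ in range(n)]

def repeat_decode(coded, n=3):
    """Majority vote over each group of n received copies."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

msg = [1, 1, 0]
tx = repeat_encode(msg)           # [1,1,1, 1,1,1, 0,0,0]
rx = tx[:]
rx[1] ^= 1                        # corrupt one copy of the first bit
assert repeat_decode(rx) == msg   # a single flip per group is corrected

rx[0] ^= 1                        # flip a second copy of the same bit...
assert repeat_decode(rx) != msg   # ...and the majority vote now decodes wrongly
```

This makes the speaker's point concrete: simple duplication buys limited protection at a heavy cost in redundancy, and it fails exactly when multiple copies of the same bit are hit.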
A
That's
that's,
furthermore,
harder.
A
receiver
will
think
that
if
this
boat
gets
flipped
to
one
one,
the
receiver
will
think
that
the
one
was
transferred
again
it's
next
process
of
evaluation.
So
now
we
have
for
more
complex
equations
or
parity
check
equations.
We can say c1 + c2 + c4 = 0, c2 + c3 + c5 = 0, and c1 + c2 + c3 + c6 = 0. So with these equations, for the message bits 1, 1, 0, the encoded message will be of the form: c4 = c1 + c2, which is 0; c5 = c2 + c3, which is 1; and c6 = c1 + c2 + c3, which is again 0.
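The arithmetic above (all sums mod 2) can be written out directly:

```python
# Parity bits for the (6,3) example in the talk:
# c4 = c1+c2, c5 = c2+c3, c6 = c1+c2+c3, all modulo 2.
def encode(msg):
    c1, c2, c3 = msg
    c4 = (c1 + c2) % 2
    c5 = (c2 + c3) % 2
    c6 = (c1 + c2 + c3) % 2
    return [c1, c2, c3, c4, c5, c6]

assert encode([1, 1, 0]) == [1, 1, 0, 0, 1, 0]
```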
So now for the information 1, 1, 0 we have the codified message, or the encoded message, 1, 1, 0, 0, 1, 0, obtained with these parity check equations.
Now suppose the first bit gets corrupted, flipped to zero; then it will violate these equations. Writing the codeword as c1, c2, c3, c4, c5, c6, the check c1 + c2 + c4 will now be 1, which indicates to the receiver that there is an error. Similarly the receiver evaluates c2 + c3 + c5 and c1 + c2 + c3 + c6.
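The receiver's check can be sketched the same way: evaluate all three parity equations on the received bits, and any nonzero result signals an error.

```python
def syndrome(c):
    """Evaluate the three parity check equations; all-zero means no error detected."""
    c1, c2, c3, c4, c5, c6 = c
    return [(c1 + c2 + c4) % 2,
            (c2 + c3 + c5) % 2,
            (c1 + c2 + c3 + c6) % 2]

codeword = [1, 1, 0, 0, 1, 0]
assert syndrome(codeword) == [0, 0, 0]   # valid codeword passes all checks

corrupted = codeword[:]
corrupted[0] ^= 1                        # flip c1
assert syndrome(corrupted) == [1, 0, 1]  # exactly the equations containing c1 fire
```

Because each bit participates in a different subset of equations, the pattern of violated checks also narrows down which bit was flipped, which is what makes correction, and not just detection, possible.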
As you would appreciate, the equations are becoming a bit complex, but the probability of error detection and correction is better compared to when we were doing simple redundancy. And Gallager, MacKay, and other researchers have shown that, with a limited amount of redundancy introduced along with the message bits, it's still possible to approach the Shannon limit for a channel.
A
Before
moving
to
ldpc,
I
want
to
show
that
these
equations
can
also
be
represented
in
a
matrix
form,
and
it's
in
this
form
that
we
basically
do.
We
basically
represent
it
in
the
hardware
and
do
all
the
calculations.
So
given
this,
so
I
will
rub
these
equations.
We take the message of k bits along with a matrix, strictly the generator matrix, which here we call H; it contains an identity block and will be of the form

1 0 0 1 0 1
0 1 0 1 1 1
0 0 1 0 1 1

This is H, and the codeword has length n. So, calling the message bits k bits, there are a total of 2 to the power k possible message combinations, which will result in 2 to the power k codewords, and these 2 to the power k codewords will be a subset of the 2 to the power n possible combinations.
A
Can
be
represented
as
u
dot
h,
with
a
multiplication
of
message,
bits
or
message
vector
along
with
h,
matrix
now.
The
next
question
is
how
to
find
that
so
we'll
come
to
that,
because
that
plays
a
very
important
role
and
this
matrix
will,
with
this
matrix
multiplication,
this
vector
matrix
multiplication
will
give
us
the
same
parity
check
equations
that
we
studied
in
the
previous.
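The multiplication can be sketched for the 3x6 matrix on the slide; the identity block on the left is what makes the first k codeword bits equal the message bits:

```python
# u · G over GF(2) for the example matrix from the slide.
G = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1]]

def encode(u, G):
    """Encode message vector u by vector-matrix multiplication modulo 2."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

assert encode([1, 1, 0], G) == [1, 1, 0, 0, 1, 0]

# All 2^k messages give 2^k distinct codewords, a subset of the 2^n possible words.
from itertools import product
codewords = {tuple(encode(list(u), G)) for u in product([0, 1], repeat=3)}
assert len(codewords) == 2 ** 3
```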
This matters for LDPC encoding, because LDPC encoding is also a type of encoding done by matrix multiplication of the message vector and the H matrix, but the condition on H is that it should be sparse. What sparse means is that the number of ones should be small, and this is to ensure that the computational complexity while encoding the message stays low.
A
And
another
thing
that
I
want
to
mention
here
is
that
the
codes
that
are
given
by
this
product
are
symmetric
code
and
what
a
symmetric
means
is
that
the
initial
initial
three
bits
or
initial
k
bits
of
the
code
words
are
same
as
the
message
bits.
A
We
need
to
have
this
identity
matrix.
Then
only
the
product
will
give
us
a
code
in
this
form
so
now
coming
to
ldpc.
So,
as
I
mentioned,
ldpc
is
again
it's
it's.
A
symmetric
code
again
represented
in
form
of
codeword,
is
equal
to,
but
there
are
some
special
constraints
placed
on
the
h
or
the
identity
matrix.
And
what
are
these
constraints?
A
Row
constraint
or
row
weight.
Similarly,
each
column
contains
j,
number
of
or
again
a
fixed
number
of
ones,
which
is
called
column,
constraint
or
column
weight,
and
the
number
of
ones
common
to
any
two
rows
is
0
and
1..
Now
why
these
factors
are
important?
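The three constraints just listed can be checked mechanically; this small checker (my own sketch, not part of the talk's implementation) expresses each one directly:

```python
def check_ldpc_constraints(H, row_weight, col_weight):
    """Check the regular-LDPC constraints described above: fixed row weight,
    fixed column weight, and at most one column where any two rows overlap."""
    rows_ok = all(sum(row) == row_weight for row in H)
    cols_ok = all(sum(col) == col_weight for col in zip(*H))
    overlap_ok = all(
        sum(a & b for a, b in zip(H[i], H[j])) <= 1
        for i in range(len(H)) for j in range(i + 1, len(H)))
    return rows_ok and cols_ok and overlap_ok

# A tiny matrix with row weight 2 and column weight 1 satisfies all three:
assert check_ldpc_constraints([[1, 1, 0, 0], [0, 0, 1, 1]], 2, 1)
```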
If you dig deep into this topic, you will see how cycles are harmful for this encoding and how we can avoid them, and you will study the Tanner graph, which clearly shows the cycles. But I'm not touching that topic here. So, how to form this parity check matrix?
A
Various
researchers
have
shown
their
different
approaches,
like
mckay
has
shown
that
you
start
with
all
zero
metrics
and
then
introduce
once
at
random
places,
but
introduce
ones
once
one
in
once
in
in
such
a
way
that
all
these
constraints
are
satisfied.
Similarly,
gallagher
showed
another
way
of
parity
check
how
you
can
form
your
parity
check,
matrix.
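A simplified sketch of that random construction, enforcing only the row weight and the row-overlap rule (a full MacKay construction would also balance column weights):

```python
import random

def random_sparse_H(m, n, row_weight, max_tries=10000):
    """Start from an all-zero m x n matrix and place `row_weight` ones per row
    at random columns, rejecting a row whenever it would share more than one
    '1' column with an earlier row."""
    for _ in range(max_tries):
        H, ok = [], True
        for _ in range(m):
            for _ in range(max_tries):
                row = [0] * n
                for j in random.sample(range(n), row_weight):
                    row[j] = 1
                if all(sum(a & b for a, b in zip(row, prev)) <= 1 for prev in H):
                    H.append(row)
                    break
            else:
                ok = False  # could not place this row; restart the matrix
                break
        if ok:
            return H
    raise RuntimeError("could not satisfy the constraints")

random.seed(0)
H = random_sparse_H(m=4, n=12, row_weight=3)
assert all(sum(row) == 3 for row in H)
```

Rejection sampling like this is fine for small matrices; for code lengths used in practice, structured constructions are preferred, which is exactly why DVB-S2 ships the matrix as a table instead.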
A
Ldpc
implementation
was
part
of
our
dvbs2
protocol.
Implementation.
Dvbs2
protocol
specifies
that
ldpc
specifies
ldpc
as
one
of
the
encoding
mechanism,
which
you
can
use
to
codify
your
message
bits
once
they
are
transmitted
for
transmitter
and
we
are
working
on
transmitters,
so
ldp's
we've
implemented
ldbc,
so
the
implementation
of
ldpc
it
has
to
be.
It
has
to
be
efficient
because
continuously
messages
are
coming
from
the
source
and
how
it's
forming.
We
have
this
source,
it
does
some
encryption,
then
we
it
goes
to
encoding.
A
Here
we
have
ldpc,
then
we
have
then
it
goes
here.
Then
it
follows
the
reverse
form
of.
A
In
this
case,
we
don't
have
to
form
parity
check
metrics,
because
parity
checks
matrix
or
its
positioning,
where
the
number
of
where
the
ones
should
be
there
in
that
matrix,
is
provided
by
dvbs
to
protocol
and
extra,
b
and
c.
So
we
store
that
table
that
parity
check,
stable
or
parity
checks
matrix
in
b
ram.
So
we
have
this
b
ram
where
we
have
parity
check.
Then come the message bits. We get the message bits, and we divide this table, or RAM, into frames, so we don't fetch the complete RAM at one time; we fetch only a frame of the RAM, let's call it a RAM frame, which is a part of the complete matrix. We fetch only the RAM frame that is needed at that moment to do the computation.
A
So
we
have
message
bits
we
get
frame
ram
from
here.
We
do
calculation
here
and
then
it's
outputted
to
the
next
step
or
next
unit.
So
here
we
we
follow
a
parallel
pipelining
architecture
where
this
is
coming
parallely.
This
is
coming
parallely,
then
do
the
calculation
forward?
It
again
next
step
here
here
and
forward
it.
So
that's
how
we
implement
our.
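A software model of that frame-wise idea; the table entries below are made up for illustration (the real addresses come from the DVB-S2 annexes), and the model only mimics the access pattern, not the hardware pipeline itself:

```python
# The parity-check table lists, for each message bit, which parity accumulators
# it touches. Instead of loading the whole table, fetch it one "RAM frame" at a
# time, as the hardware fetches one BRAM frame per step.
parity_table = [[0, 2], [1], [0, 1], [2], [1, 2], [0]]  # hypothetical entries
FRAME = 2  # table rows fetched per step

def encode_framewise(msg_bits, table, n_parity=3):
    parity = [0] * n_parity
    for start in range(0, len(table), FRAME):
        frame = table[start:start + FRAME]        # fetch one RAM frame
        for offset, accumulators in enumerate(frame):
            bit = msg_bits[start + offset]
            for a in accumulators:                # XOR the bit into each
                parity[a] ^= bit                  # parity accumulator it hits
    return parity

# Same result as processing the whole table at once:
msg = [1, 0, 1, 1, 0, 1]
whole = [0, 0, 0]
for i, accs in enumerate(parity_table):
    for a in accs:
        whole[a] ^= msg[i]
assert encode_framewise(msg, parity_table) == whole
```

The point of the split is that each step only needs a small, fixed-size slice of the table, which is what lets the BRAM fetch, the XOR accumulation, and the output hand-off overlap in a pipeline.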
A
We
do
the
implementation
fpga
and
I
have
the
numbers
for
be
a
b
ram
utilization,
lut
utilization,
which
I
can
share.
So
that's
all
for
my
presentation
feel
free
to
ask
any
questions.
Thanks
for
listening.
Thank
you.
Bye.