Description
At Parallel Systems, a team of C++ and Python developers, raised on null pointers, read/write races, and the wild west of pip packaging, set off to create a modern electric freight train with an appropriately modern systems language: Rust. We dive into our use of Rust as a general-purpose language for our entire tech stack for a safety-critical, real-time application. Join us to learn how we use Rust to write firmware, to control and simulate the freight vehicle, and even to wrap deep learning libraries to make them all available in Rust.
So, my name is Julie Wong, and I'm a senior software engineer at Parallel Systems. Parallel was founded back in January of 2020 to decarbonize freight by building an all-electric rail vehicle. Here's the initial software team; you can see we're standing in front of our first rail vehicle in the shop. We were a small team trying to have a large impact, so we went looking for a systems language that was less dangerous than C++ while still offering safe low-level control, in order to control a physical, real-time system.
We started looking into Rust and immediately found language features that we liked, so we set off to make a new mode of transport with a new systems language, for all of us. My goal today is to tell you about the victories and defeats we've encountered in using Rust across our software stack. But first, let me give you an overview of where software goes on the rail vehicle. Vehicle software runs on our vehicle computer and communicates with firmware on our microcontrollers.
Together, they control the hardware systems on the vehicle. All the software comes together to make a new way to move freight. Our vehicles are fully electric, fully autonomous, and, best of all, they run completely in Rust. In this video you can see one of our integrated vehicles moving a container around our test site. So let's get into it and talk about what we've built in vehicle software.
These vehicles are large electromechanical systems that we have designed and built from the ground up. Here's a picture of the vehicle chassis getting welded, and here's one of our mechanical engineers working with the high-voltage battery, which powers the electric motor. You can see a software engineer in the back, looking on with great interest as his code gets tested. And here's a render of what the vehicle hardware will eventually look like. Each vehicle has numerous interconnected hardware and software systems.
The computers read in inputs from cameras and GPS while accepting commands over Wi-Fi or cellular. Together, they work to physically move the vehicle around. Building an integrated software and hardware system is challenging: the mechanical and electrical engineers who have designed these systems know the intricacies of how they work and what needs to be turned on and when, but they're not the ones writing code.
We address these challenges by organizing our vehicle code into state machines. State machines answer many of these challenges: they offer a clear picture of what the software is doing to the hardware at any given time. Here's a toy example of what the state machine that stops the vehicle looks like. The vehicle initializes into the Init state, and each state has particular outputs that are asserted for the duration of the state. In Init, no commands are accepted and the brake is disengaged.
On the very next cycle, the state machine transitions to the Idle state. Here we now accept commands from other state machines to stop; however, the brake itself is not engaged. Upon receiving a command to stop, we transition into the Braking state, where the brake is now engaged. When we no longer need to stop, we go back into the Idle state, let go of the brake, and that's it. This is just one state machine: our vehicles are controlled by numerous interacting state machines, each controlling a different hardware system.
State machines like this are a common construct used to control hardware systems, and they can be written in various languages, but we found Rust features that made these work extra well. We took advantage of Rust's extensive macro system to implement our state machines. Our macros take care of all the plumbing work, things like setting outputs and transitioning between states. I want to first show you how easy it is to create a new state machine.
That's great and all, but you're probably wondering: okay, Julie, what's actually going on under the hood? Let's look at what these macros generate.
Each possible state gets a struct generated for it; in our case, Init, Idle, and Braking each get their own struct. A trait called TransitionTo gets auto-implemented for all valid transitions. Here's the transition from Braking to Idle: a function called success returns true if it's possible to go from one state to another, and here you can see what causes that transition.
If we get a command to stop, we return true because a transition should occur; otherwise we return false for no transition. Finally, a From trait gets implemented to actually transform from one state to another. The from function transforms from Idle to Braking, with nothing magic: it just returns an instance of the Braking struct.
Now let's put it all together in the state machine step function, which is run every cycle. We match against the current state that we're in. If the current state is Idle and there's a valid transition to go into Braking, then we transform the current state into the new Braking state. Finally, we set the current state to this new state. I want to emphasize again that, thanks to Rust's extensive macro system, all of this code you just saw is auto-generated for every state machine that we write.
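To make the pattern concrete, here is a minimal hand-written sketch of what was just described: one struct per state, a success function reporting when a transition should fire, a From impl to transform between states, and a step function run every cycle. The real code is macro-generated, and these names (Inputs, TransitionTo, step) are illustrative assumptions, not Parallel's actual API.

```rust
// Inputs sampled each cycle (hypothetical shape).
#[derive(Debug, Clone, Copy)]
struct Inputs { stop_commanded: bool }

// One struct per state.
#[derive(Debug, PartialEq)]
struct Idle;
#[derive(Debug, PartialEq)]
struct Braking;

// Reports whether the Self -> Next transition should occur this cycle.
trait TransitionTo<Next> {
    fn success(&self, inputs: &Inputs) -> bool;
}

impl TransitionTo<Braking> for Idle {
    fn success(&self, inputs: &Inputs) -> bool { inputs.stop_commanded }
}
impl TransitionTo<Idle> for Braking {
    fn success(&self, inputs: &Inputs) -> bool { !inputs.stop_commanded }
}

// From just returns an instance of the next state; only valid
// transitions get an impl, so invalid ones fail to compile.
impl From<Idle> for Braking { fn from(_: Idle) -> Self { Braking } }
impl From<Braking> for Idle { fn from(_: Braking) -> Self { Idle } }

#[derive(Debug, PartialEq)]
enum State { Idle(Idle), Braking(Braking) }

// The step function: match the current state and, if a valid
// transition fires, transform into the new state.
fn step(state: State, inputs: &Inputs) -> State {
    match state {
        State::Idle(s) if <Idle as TransitionTo<Braking>>::success(&s, inputs) => {
            State::Braking(s.into())
        }
        State::Braking(s) if <Braking as TransitionTo<Idle>>::success(&s, inputs) => {
            State::Idle(s.into())
        }
        other => other,
    }
}

fn main() {
    let mut state = State::Idle(Idle);
    state = step(state, &Inputs { stop_commanded: true });
    assert_eq!(state, State::Braking(Braking));
    state = step(state, &Inputs { stop_commanded: false });
    assert_eq!(state, State::Idle(Idle));
    println!("transitions ok");
}
```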
The developer can focus on what the valid states are and what the correct transitions between them are. You may also be wondering: wait, why bother with all this From trait implementation? We use this transformation from the previous state into the new state to take advantage of Rust's compile-time checks. At compile time, we can enforce that only certain state transitions are valid. To illustrate this, let's say we are starting in the Init state and try to transition to Braking, an invalid transition.
Finally, our macros take all of that state information and, for free, feed it into a graph visualization tool to automatically generate state machine diagrams like this one. This is a snapshot of the actual brake controller state machine in use on the vehicle, which goes from Init to Idle to Braking; you'll notice I've simplified the outputs quite a bit. Each time new code is pushed, these state machine diagrams get auto-generated. This is huge.
This allows our software engineers to point to a state machine diagram that is guaranteed to be accurate with respect to the code that is running on the vehicle. No more trying to match diagram versions against code versions: whatever is checked into main is what you get. This makes for much easier communication between software and hardware engineers and allows us to visualize the complicated interactions between hardware and software.
We are able to enforce the state transitions at compile time, and our code auto-generates highly detailed state machine diagrams. But that's not to say everything is perfect here. These macros can be very opaque: when they're working well, this is fine, but to add additional functionality or to debug, you really need to know what's going on under the hood. This has made it challenging for new developers to immediately start working with our state machines.
A
It
can
be
kind
of
unclear
where
the
actual
code
that
runs
the
vehicle
lives
we
have
10
or
so
State
machines
that
use
this
framework.
So
when
you
want
to
add
a
feature
for
one
in
particular,
you
have
to
carefully
reason
about
how
this
would
affect
other
state
machines
and
whether
your
special
case
is
special
enough
to
Warrant
changing
the
overall
framework.
But overall, we have really appreciated the extra compile-time checks and documentation visibility we've been able to achieve with Rust. So we have state machines that control our vehicle, but how do we test them? Rolling around a vehicle as big as ours to test every software change is totally impractical.
I've included this picture of two of our vehicles with a spanning structure, next to two of our engineers, just to give a sense of scale. These things are huge, especially when you consider the container on top. This isn't hardware you can just keep on your desk and run every time to see how your tweaked software is performing. On top of that, the software we're developing is safety-critical: it controls a ten-ton steel vehicle. If your code is incorrect or fails to behave the way you expect, serious accidents can occur. Enter simulation.
Our vehicle simulation system exercises the vehicle control software in a variety of environments. We can run our control algorithms without having a physical vehicle present by providing a physics-based simulation of the environment surrounding the software. This lets us do things like stress-test our vehicles in bad weather, or see how they do on really tight curves. We can also use these simulated environments to perform vehicle design analysis and run randomized scenarios.
We visualize and monitor the movement of these simulated vehicles using a combination of telemetry and web applications we've developed. I've included a screenshot here of our fleet management software, which we use to watch things like the velocity and location of all the vehicles in our fleet. To simulate the whole world, we had to break things up a little bit. Our simulation components can be logically divided into three groups. First, there are truth models, which simulate the state of the world and the hardware components.
Here we have things like the battery state model, as well as the thermal model. Then there's the hardware abstraction layer, which consists of simulated sensors and actuators that the software uses to interact with the hardware and beyond. And finally, there's the actual vehicle software under test. These nodes have complex interactions with each other: the vehicle software reads in information from the GPS, the battery, and the thermometers, and based on the state of charge of the battery and how hot it is, the control software adjusts the amount of power to the motor and the cooling pump.
This in turn affects the state of the world, and the state of the world affects itself and the hardware. That's way too much to keep track of. We need a means of organizing these components and calling their various methods in a structured manner. We also want this sim to run as fast as possible, as that allows us to run more tests in the same amount of time. We once again turned to Rust, which allows us to make useful abstractions to build a framework around these distinct nodes, as well as offering fast performance.
Our simulation framework is called Conductor, and it organizes this chaos into graphs of interacting nodes. Each node takes an input, performs a computation, and produces outputs. As we saw in the previous slide, we have a bunch of different types of nodes to capture the different behaviors needed. We start by making structs for each type of node, for example a struct for the GPS sensor and the cooling pump nodes. Now let's try together to make a nice interface to interact with these different nodes.
Let's kick it off by using Rust traits. We create a trait called NodeTrait that has functions common to all node types. We implement that node trait for the GPS sensor (you know, we have a "compute where am I"), and we also implement it for the cooling pump ("how much fluid is pumping"). Great: now we can do things like write a function that can take in anything that implements the node trait. How's the performance, you may ask? In Rust, there's no runtime cost for using generics.
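Here's a small sketch of that generic approach. NodeTrait, the node structs, and their fields are illustrative stand-ins (the real Conductor nodes are richer); the point is that run_node is monomorphized per node type, so there's no runtime dispatch cost.

```rust
// Functions common to all node types (hypothetical shape).
trait NodeTrait {
    fn compute(&mut self) -> f64;
}

struct GpsSensor { position_m: f64 }
struct CoolingPump { flow_lpm: f64 }

impl NodeTrait for GpsSensor {
    // "Where am I?"
    fn compute(&mut self) -> f64 { self.position_m }
}

impl NodeTrait for CoolingPump {
    // "How much fluid is pumping?"
    fn compute(&mut self) -> f64 { self.flow_lpm }
}

// Generic over any node type: the compiler emits a specialized
// copy for each concrete type, so calls are direct, not virtual.
fn run_node<N: NodeTrait>(node: &mut N) -> f64 {
    node.compute()
}

fn main() {
    let mut gps = GpsSensor { position_m: 12.5 };
    let mut pump = CoolingPump { flow_lpm: 3.0 };
    assert_eq!(run_node(&mut gps), 12.5);
    assert_eq!(run_node(&mut pump), 3.0);
    println!("generic dispatch ok");
}
```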
Okay, nice, but what if we have two different nodes again and we want to create a vector containing both of them? Can we do something like this? No, Rust won't let you do that. But we can get around this using a trait object. A trait object, created using the dyn keyword, allows you to write code where the exact method that you need to call isn't known until runtime, so you can have a vector that contains different types of nodes and call a unique function for each. This looks great.
We can hide unnecessary type information and write code to handle all the different nodes that we're going to need in our simulation model. Seems too good to be true? Well, here's the catch: since we're dynamically figuring out which function to call at runtime, this implementation is significantly slower than the one which uses generics. Is there any way we can get the nice encapsulation of trait objects, like we see here, with the performance of generics, which we saw earlier?
Yes: in Rust, you can cheat and get the best of both worlds. Let's see how you do it. You start off by making an enum that has a variant for each kind of node and contains the node struct as associated data. Then you implement the node trait for the enum type: for each function in the trait, we match on the enum's inner type and call the appropriate function for the struct.
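A hand-rolled sketch of that trick, with the same illustrative node names as before. The enum carries each node struct as associated data, and the trait impl for the enum matches on the variant and forwards; this is exactly the boilerplate that the enum_dispatch crate can generate for you.

```rust
trait NodeTrait {
    fn compute(&mut self) -> f64;
}

struct GpsSensor { position_m: f64 }
struct CoolingPump { flow_lpm: f64 }

impl NodeTrait for GpsSensor {
    fn compute(&mut self) -> f64 { self.position_m }
}
impl NodeTrait for CoolingPump {
    fn compute(&mut self) -> f64 { self.flow_lpm }
}

// One variant per node kind, holding the node struct as associated data.
enum Node {
    Gps(GpsSensor),
    Pump(CoolingPump),
}

// The node trait implemented for the enum itself: match on the
// inner type and call the appropriate struct's function.
impl NodeTrait for Node {
    fn compute(&mut self) -> f64 {
        match self {
            Node::Gps(n) => n.compute(),
            Node::Pump(n) => n.compute(),
        }
    }
}

fn main() {
    // Unlike a Vec of one concrete type, this vector holds both node
    // kinds, and unlike Vec<Box<dyn NodeTrait>>, there's no vtable lookup.
    let mut nodes = vec![
        Node::Gps(GpsSensor { position_m: 12.5 }),
        Node::Pump(CoolingPump { flow_lpm: 3.0 }),
    ];
    let total: f64 = nodes.iter_mut().map(|n| n.compute()).sum();
    assert_eq!(total, 15.5);
    println!("enum dispatch ok");
}
```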
And even better, you can auto-generate all that boilerplate code with a crate called enum_dispatch, which gives you custom attributes to slap onto the trait and the enum that you care about. This crate even comes with nice performance benchmarks: when iterating through a vector of a thousand structs using boxed trait objects and trait object references, it takes about five milliseconds per iteration; when using enum_dispatch, the time is 0.5 milliseconds per iteration. This is a 10x improvement.
So you might have thought that this section was just about how great Rust's performance is, but what I also want to emphasize is how Rust's rapidly growing crates ecosystem has given us the tools to develop a complex physics-based simulation. It seems that Rust has a little bit of everything in crates, allowing us to use a single language to accomplish all sorts of scientific computing tasks. The enum_dispatch crate that we've talked about enables us to achieve fast generic performance. Some other crates we're using in our simulation library are the nalgebra, ndarray, and LAPACK crates to do matrix math, the ode_solvers crate to solve ordinary differential equations, and petgraph to make use of graph data structures. Using Rust in our simulation framework has benefited us in a couple of ways.
Rust makes abstraction easy with generics, traits, and enums, and you get performance with those nice abstractions. And finally, while crate support isn't totally complete, there's enough in Rust crates to enable us to perform complex scientific computing. Here in Rust-land, you can have your cake and eat it too.
We've just spent some time talking about our vehicle in isolation, but how do our rail vehicles interoperate with other trains on the network? Here's a shot of our test track in California. You can kind of see the rail as the blue and black track, with yellow pins demarcating switches, mileposts, and other points of interest for us and the trains on the network.
We are first targeting operating our rail vehicles in the United States on mainline rail. Most of the U.S. rail network is controlled through a system called Positive Train Control. Positive Train Control gets its name from the fact that the train is allowed to move as long as it gets a positive signal to continue; in the absence of a positive message from the central controller, the train will come to a stop. Positive Train Control was mandated by the U.S. Congress in 2008, and it is the de facto standard across U.S. railroads.
There are close to 100 PTC message types defined in the specification, and there are many little quirks in the protocol: things like numerics which are seven bytes long; five versions of a single packet type; CRCs on the packet that sometimes don't include certain fields in the packet; fixed-length fields with variable-length fields just scattered amongst them; structs nested within structs nested within structs; and, my personal favorite, not one, not two, but three different possible string encodings. Everyone gets a string encoding!
We turned to Rust to help manage this complexity, and a couple of Rust features really helped us out. First off, macros allowed us to embed essentially a custom language within our code to handle all of these packet types. And secondly, as we saw in the simulation section, traits in Rust make abstraction dead simple: different structs can all implement the same trait and be treated the same by functions. We were again able to implement all of these unique message types with a single set of generic macros, minimizing code repetition and boilerplate.
So how do we do it? First off, we defined a Codec trait that encodes and decodes between Rust structs and raw bytes. Then we implemented it for primitive types and derived it for all subsequent types using some macros we wrote. Here's what that Codec trait looks like: it contains an encode and a decode function.
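A minimal sketch of the idea, assuming a simplified trait shape and a made-up two-field message (the real trait's signatures, error handling, and field layouts are certainly more involved). The hand-written struct impl below stands in for what the derive macro would generate field by field.

```rust
// Simplified Codec: encode appends bytes, decode returns the value
// plus the remaining bytes (hypothetical signatures).
trait Codec: Sized {
    fn encode(&self, out: &mut Vec<u8>);
    fn decode(bytes: &[u8]) -> Option<(Self, &[u8])>;
}

// Implemented by hand for a primitive type.
impl Codec for u16 {
    fn encode(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.to_be_bytes());
    }
    fn decode(bytes: &[u8]) -> Option<(Self, &[u8])> {
        if bytes.len() < 2 { return None; }
        Some((u16::from_be_bytes([bytes[0], bytes[1]]), &bytes[2..]))
    }
}

// A toy message; a derive macro would generate this impl by walking
// the fields in order and delegating to each field's Codec impl.
#[derive(Debug, PartialEq)]
struct ToyMessage { msg_type: u16, length: u16 }

impl Codec for ToyMessage {
    fn encode(&self, out: &mut Vec<u8>) {
        self.msg_type.encode(out);
        self.length.encode(out);
    }
    fn decode(bytes: &[u8]) -> Option<(Self, &[u8])> {
        let (msg_type, rest) = u16::decode(bytes)?;
        let (length, rest) = u16::decode(rest)?;
        Some((ToyMessage { msg_type, length }, rest))
    }
}

fn main() {
    let msg = ToyMessage { msg_type: 211, length: 8 };
    let mut buf = Vec::new();
    msg.encode(&mut buf);
    let (decoded, rest) = ToyMessage::decode(&buf).unwrap();
    assert_eq!(decoded, msg);
    assert!(rest.is_empty());
    println!("round trip ok");
}
```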
Next, we implemented a custom derive macro, which we use to annotate structs. We then defined the structs for every PTC message type, all 100 of them and all their glorious versions. For example, this is the RequestTrainIdList2 struct, which contains some fields for doing just that. All we now have to do is derive Codec on the PTC struct, like you would Debug or Clone, and with that, the encode and decode functions for this struct get automatically created for us by our macros.
Finally, to deal with those three unique string encodings that you all got earlier, we wrote some custom attribute macros. Let's revisit that request train ID struct. What I didn't mention earlier is that, per the PTC spec, the railroad SCAC field is a four-byte fixed-length field. We simply add our custom attribute to the field, and our macros will take care of generating the appropriate encode and decode functions to meet the spec.
So what has this meant for our PTC implementation? Let's look at a breakdown of the lines of code in the repo. The PTC structs take about 10,000 lines of code just to transcribe from the spec. The codec macro system requires about a thousand lines of code, and then 5,500 lines of code get generated for us. You can see that the majority of the code base is spent on defining the PTC structs to match the spec.
So, in summary, our fleet management system gets packets from Positive Train Control servers and then forwards them on to our vehicles. Our library for parsing these packets makes extensive use of Rust macros. The pros of this are that we can focus our engineering effort on getting the structs right, and it's trivial to support new message types from PTC, especially as they keep rolling out new versions of packet types. But the downsides, of this and really of any macro system, are the layers of indirection it brings and the overhead to initially implement it.
Finally, you may be thinking: wait, why don't you guys just use serde? We had a bunch of custom encodings that we had to support, and when we looked at the serde API, to be completely honest, we weren't sure we knew Rust well enough to make a serde-compliant parser that could handle all of our edge cases. So we figured the best way to learn was to try, and that anything we wrote would inform future implementations that could perhaps work within the serde framework.
Finally, let's talk about perception, which is the team I now work on. Our vehicles are fully autonomous on the rail, with a wide range of operating conditions. What you're seeing here is a video of our deep learning models in action on a variety of train tracks throughout California. We're utilizing various deep learning methods to train and deploy models that can reliably detect and localize hazards around the rail. You can see in this video that we're finding cars, people, bikes, and other trains, as well as the rail in front of us.
Now, because we're using Nvidia hardware as our primary compute, we must use an Nvidia framework called DeepStream to leverage the platform's hardware acceleration. DeepStream is a series of plugins built in C that move frames of image data from element to element in a pipeline; all of the components shown here would be elements in a DeepStream pipeline. Much of this code is closed source and only available to use through headers that call into C library files. At first, we thought this meant that we had to write our perception application in C.
This would expose us to all the dangers of a low-level language. Was there a way we could use the existing libraries from Nvidia while still writing the bulk of our application in Rust? Spoiler alert: yes, otherwise I wouldn't have put it in the talk. Rust makes it easy to communicate with C APIs without overhead, allowing you to leverage Rust's ownership system to provide much stronger safety guarantees around those APIs.
We call this project crabification, which some of you may know as a surprising evolutionary quirk where many types of crustaceans all converge on a crab-like form. For us, this was the process of taking the perception app from C to Rust. The C libraries we needed in our application were DeepStream, the CUDA runtime, and some GLib libraries. Rust natively supports linking against C libraries and calling their functions directly, through Rust's foreign function interface, or FFI. The FFI provides a zero-cost abstraction where function calls between Rust and C have identical performance to C function calls. In my mind, this is a killer Rust feature: it allows you to bring in external libraries with no performance overhead. Now that we could call C functions from Rust, we built high-level libraries that provide a safe interface to these functions for the user. The higher-level libraries enforce protocols around pointers, heap memory, constructors, destructors, all the good stuff.
Finally, the user now gets to write their top-level application in Rust, while still accessing the low-level C libraries provided by Nvidia. We were able to find bindings and APIs for many of these libraries, so we specifically built out the libraries I've circled in yellow here. We'll start from the bottom, with the Rust FFI layer. The first step in all of this is to make your low-level FFI bindings. We made use of the bindgen crate to auto-generate structs and functions in Rust that call into their C counterparts.
Let's look at a data structure we had to move over from C: the batch metadata struct, which contains information about the frames in a batch. This struct comes with a create function and a destroy function. Now let's see what bindgen makes for us. First, we get the Rust equivalent of that structure, annotated with repr(C), indicating that the struct has the same data layout as in C.
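As a rough illustration of what a bindgen-style binding looks like, here's a repr(C) struct. The field names are made up for this sketch, not the real DeepStream batch metadata layout; the point is that repr(C) pins the field ordering and padding to C's rules, so sizes and offsets agree with the C side on the same target.

```rust
use std::ffi::c_void;

// Illustrative stand-in for a bindgen-generated struct: repr(C)
// guarantees the same data layout as the C definition it mirrors.
#[repr(C)]
pub struct FfiBatchMeta {
    pub num_frames_in_batch: u32,
    pub max_frames_in_batch: u32,
    pub reserved: *mut c_void,
}

fn main() {
    // Two u32s fill the pointer's alignment exactly, so the total size
    // is just the sum of the field sizes: no hidden surprises for C.
    assert_eq!(
        std::mem::size_of::<FfiBatchMeta>(),
        2 * std::mem::size_of::<u32>() + std::mem::size_of::<*mut c_void>()
    );
    println!("layout ok");
}
```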
Let's see how we hide our calls to the FFI behind an easy-to-use Rust interface for the user. The first thing we do is create a struct around the FFI struct that the user can use. We mark it with the transparent attribute, which specifies that BatchMeta should be treated exactly the same as the FFI batch meta struct.
Now let's use those C functions. With this struct, we define a user-friendly new function that only makes use of Rust types. Next, as we mentioned earlier, we have to wrap our FFI call in an unsafe block; then we can call that create batch meta function. Some pointer and data manipulation has to happen, which I've left out for clarity. The point of this is that the writer of the crate has to perform all the checks to guarantee correctness and safety, but this is completely transparent to the eventual user of this API.
They don't need to know about all the terrible, unsafe things you're doing under the hood. Finally, if all goes well, we return it. Okay, now you might have already forgotten about that destroy function. Well, guess what: you probably would have forgotten to call it if you had been writing your code in C. I would have. Let's make that impossible to forget in Rust and put it in the Drop trait of the object. We implement the drop function, which, in an unsafe block, calls the destroy function.
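Here's a self-contained sketch of that wrapper pattern. Since we can't link DeepStream here, the C-style create/destroy pair is simulated in pure Rust (the names and the live-object counter are fabrications for this example); the shape of the safe wrapper, the unsafe block hidden inside new, and the Drop impl are the point.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts live objects so we can observe that destroy really runs.
static LIVE: AtomicUsize = AtomicUsize::new(0);

#[repr(C)]
struct FfiBatchMeta { num_frames: u32 }

// Stand-ins for the C API's create/destroy functions.
unsafe fn ffi_create_batch_meta(num_frames: u32) -> *mut FfiBatchMeta {
    LIVE.fetch_add(1, Ordering::SeqCst);
    Box::into_raw(Box::new(FfiBatchMeta { num_frames }))
}
unsafe fn ffi_destroy_batch_meta(ptr: *mut FfiBatchMeta) {
    LIVE.fetch_sub(1, Ordering::SeqCst);
    unsafe { drop(Box::from_raw(ptr)) }
}

// The safe wrapper the user actually touches.
struct BatchMeta { raw: *mut FfiBatchMeta }

impl BatchMeta {
    // User-friendly constructor: the crate author does the unsafe
    // work once, and callers never see it.
    fn new(num_frames: u32) -> Self {
        let raw = unsafe { ffi_create_batch_meta(num_frames) };
        BatchMeta { raw }
    }
    fn num_frames(&self) -> u32 {
        unsafe { (*self.raw).num_frames }
    }
}

// Cleanup is automatic: destroy is called when the wrapper goes out
// of scope, so forgetting it is impossible.
impl Drop for BatchMeta {
    fn drop(&mut self) {
        unsafe { ffi_destroy_batch_meta(self.raw) }
    }
}

fn main() {
    {
        let meta = BatchMeta::new(4);
        assert_eq!(meta.num_frames(), 4);
        assert_eq!(LIVE.load(Ordering::SeqCst), 1);
    }
    // Drop ran at the end of the scope: nothing leaked.
    assert_eq!(LIVE.load(Ordering::SeqCst), 0);
    println!("drop ran, no leaks");
}
```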
Okay, phew. Now that we've written our API, let's bring it all together and see how it helps our final user application. We are writing a function that transforms coordinates from the image frame into the world frame. We pass variables by value or by reference; we never have to use raw pointers, which can be null, and references are always valid.
We do the same for every object in the frame, and for every piece of user data in the object. This is a standard for-loop and iterator combo that should look pretty familiar to everyone. When it comes time to check whether you have the right type of user data, we can use the if-let syntax to see if we have the right type, and then we do our math and return it. Okay, that seems pretty nice, pretty straightforward to me.
Now let's see what happens in C. First off, there are no reference types in C, so in the function definition you have to take in raw pointers to data structures. This is a common source of bugs: you must remember to check for null pointers before you can get to the data. We're using a bunch of pointers, so you've got to initialize them all to null. And remember those nice for loops you had in Rust? Yeah, you don't get those in C; we go through the linked list with a C-style for loop.
Well, the Rust version makes me glad. So, using Rust's FFI to wrap Nvidia's libraries has had several benefits and a few downsides. The perception and vehicle applications need to communicate, and we've been able to reduce the interface to a single shared Rust module. As for our software development processes, since both applications are in Rust, we've been able to share resources and infrastructure with the vehicle software. And, as you saw, I believe that overall the code base has become safer and easier to read, and that this has led to faster development.
So what are the downsides? Callbacks have been challenging: in our system, you have a Rust wrapper around a C struct that is calling a Rust function through a C callback, and it was a bit of a mind-bender to get this to work. There's a large DeepStream API to support; I've been approaching this by just adding functionality piecemeal as I need it. And of course, this has been a large project and has required a large overhead to make this crabification effort a reality.
It's been a learning journey for all of us as we've built out our software stack using Rust. We initially chose Rust because it's a systems language with safe, high-level abstractions. In particular, various language features like generics, traits, and enums make abstraction easy while still offering great performance.
We found that Rust macros are powerful abstractions that let you eliminate boilerplate code and create easy-to-use interfaces and documentation, but with great power comes great responsibility: they can be difficult to understand and work within. Rust has numerous crates available to do everything from solving ODEs to serializing bytes on the wire. Rust is a relatively young language, though, so support isn't nearly as complete as, say, numpy in Python or Simulink in MATLAB, but you can generally get by with what's available.
Fortunately, Rust's foreign function interface partially addresses these deficiencies by letting you bring in external C code for no additional runtime cost. So come join us: we're building sweet stuff with rad people, and we want to work with you. Check out our website with open positions at moveparallel.com, and come find me later at the Q&A, or, if there's no Q&A, at the happy hour; I'll probably be getting a drink. My email address is here too, so feel free to shoot me a message. And with that, the Rust train has reached its final destination.