Description
Ferrocene: qualifying the Rust compiler using a fully open source tool stack, presented by Florian Gilcher at the Q2 SDV Community Day at ZF Forum in Friedrichshafen, Germany, on July 6, 2023.
Learn more at https://sdv.eclipse.org/
B: Well, it's a super cool topic, because it's the Rust compiler. There was a suggestion to do karaoke; if you ever go to a Rust conference, ask for the traditional karaoke session, there's one after every conference. So hello, I'm Florian, I'm managing director of a company called Ferrous Systems, based in Berlin. I've previously been with the Rust Foundation and the Rust project, both as a core team and community team member, and I've been training Rust since 2015; I'm doing this for 10 years now, that's quite a bit.

B: And, as I said, I'm from Berlin. If there's one wish I have right now for the German industry, it's that I wish we stopped talking about where to find Rust developers, because Europe is actually a Rust country, and Germany in particular. The talent is actually here, and I think we should leverage that. So, just very quickly about Ferrous Systems.
B: It was formed in 2018 by a couple of Rust project members, because they saw the need for a company that helps companies adopt Rust. We're mainly a training and maintenance company. We are co-maintaining not only the Rust compiler, but also the Rust IDE toolkit, rust-analyzer, an embedded flashing toolkit called knurling, bindgen, and parts of the infrastructure.
B: So I want to talk quickly about what Ferrocene even is, what the tools are that we're using to validate it, and this thing called "not rocket science".
B: So, first of all, what is Ferrocene? At the simplest level, it's a qualification of the Rust compiler, rustc. For that, Ferrocene is a complete downstream, in the open source sense, of the main Rust toolchain that you can find at github.com/rust-lang/rust. It imports the whole toolchain regularly, further validates it and, in the end, qualifies it, currently for ISO 26262 and IEC 61508.
B
It
is
also-
and
that's
the
long
detour,
a
language
specification
effort,
because
one
of
the
long
complaints
about
rust
was
rust
does
not
have
a
specification.
Now
it
turns
out
that
here
in
this
industry
we
need
requirements
and
for
programming
language.
It
just
happens
to
be
that
requirements
document
is
a
language
specification,
so
the
first
half
year
of
that
project
was
actually
writing
a
complete
specification
for
worst
168.
Currently,
that's
the
version
that
we're
qualifying
that's
from
January.
B: It is also a product that provides long-term supported Rust. Currently the Rust project only provides support for the last six weeks, for the currently released compiler; Ferrocene provides support for longer. And the last thing: it's fully open source based, so we're using open source tools, and particularly methodologies derived from the Rust project, and these are the things that I want to introduce you to a little. So I need to introduce you to a few basics.
B: The Rust project has something that it calls a train model, and this is actually the train model for Ferrocene; the Rust project itself runs a simpler one. It has multiple levels of releases, and regularly some of those releases are promoted into higher trains. So, for example, we release every night a version, the current version of the compiler, if it is releasable, which it should be. We have something called pre-rolling, which is a version that is intended to be released later.
B: And for the qualification we take a step up. We're shipping all of these to customers, and for qualification we take one step up: regularly, we take one of those releases, qualify it, and also release that as a qualified version, with safety manuals, signed safety manuals, and everything.
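To make the promotion idea concrete, here is a minimal sketch of such a release-train pipeline. The channel names follow the talk; the types and the `promote` function are invented for illustration and are not Ferrocene's actual tooling.

```rust
/// Release trains as described in the talk: nightly builds are cut every
/// night, promoted to "pre-rolling" when intended for release, shipped as
/// rolling releases, and selected releases are qualified with signed
/// safety manuals. All names here are illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Channel {
    Nightly,
    PreRolling,
    Rolling,
    Qualified,
}

struct Release {
    version: String,
    channel: Channel,
}

/// Promote a release one train up, if there is a higher train.
fn promote(release: Release) -> Option<Release> {
    let next = match release.channel {
        Channel::Nightly => Channel::PreRolling,
        Channel::PreRolling => Channel::Rolling,
        Channel::Rolling => Channel::Qualified,
        Channel::Qualified => return None, // already the top train
    };
    Some(Release { channel: next, ..release })
}

fn main() {
    let nightly = Release { version: "2023-07-06".into(), channel: Channel::Nightly };
    let pre_rolling = promote(nightly).expect("nightly can be promoted");
    println!("{} is now {:?}", pre_rolling.version, pre_rolling.channel);
}
```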
B: A big misconception about the target tiers that the Rust project uses is that they describe quality; they do not, or only by proxy. Tier one, and this is what we'll mostly be talking about here, means fully supported targets, which means any bug found will block a release, or even the merging of any patches into the Rust compiler, and they're fully automated: they need fully automated testing, and they also need a support team in the upstream project.
B: The Rust compiler team itself does not test things on Android, for example, which also plays into what I was saying: that does not mean that it's a low-quality target, it's just that the upstream project does not do any kind of quality assurance on it. Tier three is: the code is in the repository, but there are no guarantees beyond that. You can build the compiler on your own, it may work, it may not; the quality is still arbitrary, there just isn't any guarantee. And why is that important? For example, aarch64-unknown-none, the bare-metal aarch64 target, is something that upstream doesn't test.
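As a rough summary of the tier policy just described, here is my paraphrase as a data model, not the official policy text; the re-tiering function anticipates the point made next, that a downstream can promote a target by testing it itself.

```rust
/// Target tiers as described in the talk: they describe the upstream
/// quality-assurance process, not the target's intrinsic quality.
enum Tier {
    One,   // bugs block releases and merges; automated tests; support team
    Two,   // built upstream, but no full upstream QA (e.g. Android)
    Three, // code in the repository only; may or may not build
}

/// A downstream like Ferrocene can promote a target by adding its own
/// testing: upstream tier 2 can be downstream tier 1 (illustrative).
fn ferrocene_tier(target: &str, upstream: Tier) -> Tier {
    match target {
        "aarch64-unknown-none" => Tier::One, // tested downstream
        _ => upstream,
    }
}

fn main() {
    let t = ferrocene_tier("aarch64-unknown-none", Tier::Two);
    assert!(matches!(t, Tier::One));
}
```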
B: It's tier two upstream, but we test it, so for us it's tier one. That's also a difference between the two projects, between the upstream and the downstream. And the other thing Rust practices (since, as I said, I've been doing Rust for a decade, and this rule predates me) is the "not rocket science" rule of software engineering, which is: automatically maintain a repository of code that always passes all of its tests.
B
And
the
other
eSource
that
we
combined
with
that
I
go
into
how
we
achieve
that
in
a
second
and
the
last
ethos
that
comes
out
of
rust
comes
out
of
Mosaic
research
and
Mozilla
research.
Even
as
a
research
Department
said
what
can't
be
released
is
in
software,
so
the
rust
compiler,
even
the
early
early
Alpha
versions,
was
something
that
always
could
build
a
release
binary
and
could
be
released
on
Mac
OS,
Windows
Linux,
and
they
had
to
regularly
exercise
that
every
two
months
doing
that
regularly
for
three
to
four
years
before.
B: So here's a couple of the tools used; I'll focus on just two of them. The main tools that we're using: we're using Sphinx, the Python documentation framework, for all kinds of documentation, together with some plugins for it. We're using a tool called bors-ng for automation; there's a star beside that, because recently GitHub has released a feature called merge queues, which behaves quite similarly to bors-ng, but not quite, and that means that bors-ng is nowadays deprecated.
B
It's
still
useful
can
still
be
used,
but
that's
something
I
think
I
should
disclose
here
for
anyone
who's
investigating
that
and
we're
using
six
door
for
document
signing.
So
we
even
have
the
document
signing
built
into
the
CI
system,
and
it
will
built
by
the
fsfe
here
in
Berlin
called
reuse
for
licensing
and
s-bomb
concerns
I'm,
going
to
focus
on
the
first
two,
the
second
two.
B: If anyone's interested in those, please meet us around the hall. And we're using GitHub as the code platform and CircleCI as the build platform; GitHub particularly for the reason that the upstream project already uses GitHub. Since we're reusing all of the automation that it has already built, it's just natural to also use it.
B: So what does bors do? Bors implements a pretty interesting process. It maintains a queue of changes, pull requests to the compiler, that are ready to be merged, which means they've been reviewed, and then it serializes them for testing. You'll see a small picture of why that makes sense in a second. It serializes them for testing, so only one of those test runs happens at a time.
B
It
is
also
an
automation
of
all
operations
on
the
repository.
So
no
one
clicks
the
merge
button
we
instruct
boards.
This
is
ready
for
merging.
Do
everything
that
should
be
done
for
merging,
so
all
of
these
small
processors
actually
get
implemented,
they
made
simple
and
then
implemented.
It
does
all
the
important
bookkeeping,
for
example,
tracking
who
reviewed
and
who
authored
a
change,
and
then
it
signals
back,
for
example,
on
phase
as
well
and
success
towards
the
platform.
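As a rough sketch of the core loop this describes, here is the "not rocket science" merge queue in miniature. All types and functions are hypothetical, written for illustration; this is not the actual bors-ng codebase.

```rust
use std::collections::VecDeque;

struct PullRequest {
    number: u64,
    author: String,
    reviewer: String,
}

/// Pretend test run: merge the PR onto a copy of mainline and run all
/// tests. In reality this builds the compiler (about an hour) and runs
/// the full test suite before reporting back.
fn test_merge_candidate(mainline: &str, pr: &PullRequest) -> bool {
    println!("testing #{} by {} on top of {}", pr.number, pr.author, mainline);
    true
}

fn run_queue(mut mainline: String, mut queue: VecDeque<PullRequest>) -> String {
    // Serialization is the key property: only one candidate is tested at
    // a time, so two individually-green changes can't break mainline in
    // combination.
    while let Some(pr) = queue.pop_front() {
        if test_merge_candidate(&mainline, &pr) {
            // Advance mainline to the tested merge commit and record who
            // authored and who reviewed the change.
            mainline = format!("{}+#{} (r={})", mainline, pr.number, pr.reviewer);
        }
        // On failure the PR stays open and goes back to its author.
    }
    mainline
}

fn main() {
    let queue = VecDeque::from([PullRequest {
        number: 101,
        author: "pietro".into(),
        reviewer: "florian".into(),
    }]);
    println!("mainline is now: {}", run_queue("main@abc123".into(), queue));
}
```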
B
So
here's
an
example,
this
is
Pietro
Pietro
is
over
there,
our
infrastructure
engineer,
so
once
Pedro
says:
okay,
I'll
approve
this
pull
request.
It's
the
first
step
actually
approving
towards
GitHub.
They
now
have
review
features
they're,
proving
towards
GitHub,
hey,
that's
fine
and
then
instructing
boards
to
merge,
then
or
bot
will
actually
comment.
Hey
yep
I'm
doing
my
work
now
and
come
back
later.
Okay,
the
bill
succeeded.
Thank
you
will
then
merge
into
the
main
line
or
into
the
branch
that
was
merged
towards
and
also
do
the
cleanup
delete
that
branch.
B
So
if
we're
testing
those
under
the
assumption
that
they
don't
contradict
at
the
same
time,
in
parallel,
we
could
end
up
with
a
main
line
that
is
broken
by
combination,
so
what
it
instead
does.
It
only
runs
one
of
those
at
once,
so
we
can
put
multiple
changes
in
the
queue
six.
Seven.
Eight
nine
ten
and
both
will
just
work
overnight
and
see
if
they're
all
mergable,
and
we
can
come
back
next
morning
and
figure
out
well.
B: okay, half of that worked, the other half didn't. The Rust compiler builds for an hour or something like that, or a little bit less on some targets, so that's also very useful. And what it also does: it uses the commit message to document what was done. For those who are git nerds: we actually do merges, and those merge commits document what was forked, when it was merged, and particularly who reviewed and who authored something.
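For illustration, a bors-ng style merge commit message has roughly this shape; the PR number, title and names here are made up:

```
Merge #1234

1234: Update the LLVM submodule r=pietro a=florian

(pull request description follows)
```

The `r=` and `a=` markers are what carries the reviewer and author bookkeeping into the permanent git history.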
B
What
that
gives
us
is
a
straightforward
pass
to
improving
this
after
we've
got
all
of
this
process
down
a
straightforward
path
to
improving
the
quality
of
that
tool
is
more
testing,
and
so
what
we
do
instead
of
the
Upstream
project
is.
We
also
consider
all
the
qualification
material
and
the
tracing
of
the
qualification
material
part
of
the
software
test,
so
for
every
change,
we
also
build
it
and
we
release
it,
and
that
is
also
never
allowed
to
break
so.
B
No
unaccounted
tests,
no
tests
without
requirements,
and
for
us
they
said
that
meant
writing
a
language
specification,
because
language
specifications
are
quite
dense
documents
where
almost
every
sentence
is
a
requirement.
We've
gone
as
far
as
taking
inspiration
from
the
Ada
specification
data
specification
has
a
thing
called
legality
rules
and
actually
numbers
every
sentence
in
the
specification,
and
so
that
it's
clearly
addressable,
so
we
can
trace
down
to
this
test,
addresses
0.7.1
colon
1
in
the
specification.
B: Usually we work on sections, and out of that we build a plain, normal traceability matrix, the most important part being the unique link over there that says: the tests to the right are actually the compiler tests that all refer to section 7.1 in the spec, where constants are described.
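A minimal sketch of what such a link could look like in data form; the section IDs and test paths are invented for illustration and are not Ferrocene's actual annotation format.

```rust
/// Illustrative traceability record: each compiler test declares which
/// specification paragraph it covers, and the matrix is derived from
/// that. IDs and paths are made up; the real tooling differs.
struct TraceEntry {
    spec_section: &'static str, // e.g. "7.1:1", constants
    test_path: &'static str,    // compiler test exercising it
}

fn traceability_matrix() -> Vec<TraceEntry> {
    vec![
        TraceEntry {
            spec_section: "7.1:1",
            test_path: "tests/ui/consts/const-eval-basic.rs",
        },
        TraceEntry {
            spec_section: "7.1:2",
            test_path: "tests/ui/consts/const-type-mismatch.rs",
        },
    ]
}

fn main() {
    // A requirement without a test, or a test without a requirement,
    // would show up as a gap here and fail the qualification build.
    for entry in traceability_matrix() {
        println!("{} <- {}", entry.spec_section, entry.test_path);
    }
}
```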
B: And we're also using the platform a lot for automation. Don't be confused that a lot of the tests of these pull requests have been failing; otherwise they would automatically land in the closed part. They're still open and have a red X because they need to be cared for; the other ones are already closed.
B
The
one
thing
that's
interesting
about
that
picture
is
GitHub
gives
you
a
lot
of
ability
to
annotate
a
lot
of
the
things
that
you
have
in
there.
So,
for
example,
every
pull
request
here
gets
tags,
whether
they
are,
for
example,
caused
by
automation
or
caused
by
humans.
Things
like
that
and
also,
for
example,
all
of
these
have
a
tag
whether
this
change
also
needs
to
be
backboarded.
B: This one applies to the current mainline but, for example, some of those would need to be backported to 1.68, 1.70 or something like that, and we actually encode that in this tagging, for tooling to pick it up later; and that's been run through the change management that we employ there, a lot of it basically being an extension of what the Rust project already does. So, to come to the experience.
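As a sketch of that label-driven flow: the label and branch names below are hypothetical, not the exact ones used in the Ferrocene repository.

```rust
/// Derive backport targets from pull request labels, e.g.
/// "backport:1.68" -> branch "release/1.68". Naming is invented
/// for illustration.
fn backport_branches(labels: &[&str]) -> Vec<String> {
    labels
        .iter()
        .filter_map(|label| label.strip_prefix("backport:"))
        .map(|version| format!("release/{version}"))
        .collect()
}

fn main() {
    let labels = ["automation", "backport:1.68", "backport:1.70"];
    // Later tooling would open one backport PR per returned branch.
    assert_eq!(
        backport_branches(&labels),
        vec!["release/1.68".to_string(), "release/1.70".to_string()]
    );
}
```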
B: What we've basically been doing is that we have formalized the process that the Rust project has been informally following anyway, and have strengthened it. When you're downstreaming a project out of an open source community, whatever that may be, it's been very useful for us to play along with the rules that it already has. Any kind of divergence, any "hey, this is not the practice that we would like to use", actually costs a ton. So we play along with those rules, or try to change them upstream; we have a couple of patches
B: upstream where it's like "this makes work easier for us", and that has been quite useful. And the other thing that we found is that there is a project for basically everything. For signing you can use Sigstore; extending Sphinx is okay, easy I would say; and tuning it to the way that we need it, on our own terms, in our own house, has been part of giving us our velocity.
B: We do it in a short shell script instead, to make sure that we never fail at applying it, and automating things using our platform's API, in that case GitHub's, has been totally worth it, like any minute spent on it. And that brings me to the last thing: as an old CI engineer, there's no such thing as setting up automation too early. As I said, these principles have been applied for a decade now, and the Rust project also took them over from previous projects.
B: Getting them well sorted, rolling easily, running smoothly, takes time; in the beginning all of this feels like a behemoth, and getting people used to it takes months on a project of that scale. That has been something that I also cared about when I was part of Rust core: the experience that an organization builds with the tooling is also part of that tooling, like the humans that use the tooling, and that's why the tooling should be there early and be refined over quite some time. Yeah.
C: Okay, now it should be on. So thank you very much for your presentation; I think the approach that you're following for certification is very interesting, really about traceability, to really build up evidence that everything was done according to best practices. Now, more specific to Rust: we're building another Eclipse project, called Zenoh, completely in Rust, and you know that in Rust you've got your code and tons of external dependencies. So when it comes to certification, how can we deal with such complexity?
B: Good question. So, first of all, thanks for building Zenoh and for using async-std, which I've also been a maintainer of. The whole dependency discussion, I think, is one I would really like to have a conversation about.
B: This is one that I can have opinions on, but I cannot answer, because this is kind of encroaching on what an ecosystem should do, and it's really, really bad for one entity, even if it's Ferrous Systems or even the Rust project, to try to encroach on an ecosystem. There are multiple perspectives on that.
B: Almost all dependencies of async-std are actually under the maintenance of async-std, and so the question about a dependency is always: it's less how many libraries are in there, and more how many parties are maintaining those libraries, and that can differ from classic ecosystems.
B: If you look at npm, where this is even more extreme, in the JavaScript ecosystem you have something like React, where it's basically impossible to figure out who to even call for a subsection of the framework, while in Rust you do have, for example, larger projects that end up shipping as multiple libraries, for the reason that a library boundary in Rust is also a strong module boundary; it acts as both.
B
So
it
is
quite
common
that
projects
would
actually
separate
themselves
into
multiple
libraries
from
a
specification
perspective
that
can
also
have
advantages,
because
you,
the
scope
of
the
review,
becomes
smaller
in
the
end.
I
think
we
end
up
in
a
situation
where
we
should
think
about.
How
do
we
declare
better?
Who
maintains
a
library?
What
are
the
guarantees
around
it
and
build
actually
a
language
around?
B
How
does
this
Library
evolve?
Will
the
stay
at
one?
Oh
always?
Will
it
be,
and
the
rest
ecosystem
has
all
of
these
kind
of
things.
So,
for
example,
there's
a
parser
library
in
Rust
that
has
the
policy
of
every
major
version
exchange
version
change.
We
change
our
API
completely.
It's
called
Nom
nom1,
num2,
norm3
num4
are
different
libraries
and
even
the
importance
of
understand,
like
even
understanding
what
their
policy
is,
what
the
guarantee
is
they
want
to
give
us
is
already
a
hard
thing,
so
I
don't
have
a
strong
answer
to
that.
B
Except
this
conversation
needs
to
happen
and
the
other
thing
is
coming
from
ecosystems,
where
we
don't
have
structured
Library
management,
that's
essentially
being
had
implicitly
by,
for
example,
having
libraries
that
ship
with
100
functions
that
you
only
use.
Two
of
for
the
reason
that
you
have
you
have
no
better
ability
to
ship
them
in
a
different
way.
D: In certification there is something called proven in use, demonstrated use. So it's also about the community proving whether there are safety artifacts in the library: if that's well documented, at least, the people certifying would say, look, you have a database, it's okay, I make a tick in the box. So that's where I think the community can help, I mean, building safety into the ecosystem. It takes an effort to document, probably, your safety gaps somehow, in order that they can be ticked off.