Description
A year ago we started an OS from scratch in Rust. Reflections on the journey and the lessons learned.
Rust Linz: https://fosstodon.social/@rustlinz - https://twitter.com/rustlinz
Thanks for having me here and letting me talk a bit about our experience with our OS project — so, what we did, and on a somewhat different stage: instead of doing embedded stuff, we're doing operating system stuff. Why do we do it, and what do we do? Pretty much, we looked at the operating systems we have today and figured out that what we have was designed more than 30 years ago.
The PC I'm showing here is a 386, and that was the beginning — pretty much the end of the 80s, when these devices came out. What you had at the time was a desktop machine, not a lot of memory, and probably less performance than some microcontrollers you have now, if you compare them. Today this environment has changed completely: we have mobile phones, we have batteries, we have multi-cores, we are always online on the internet.
We have high-speed network connections and so on, so the environment where these operating systems were started looked completely different. The second thing we figured out is that our operating systems are really complex: if you look at the lines of code, if you look at how long it takes to build them, and so on, there's a lot to improve. And finally, they are much slower than what's needed.
So we believe that it's time to rethink the OS, time to think about how it should look, and this gives us the opportunity to fix the architecture. If we start building a new OS, we can pretty much do everything we want. We can modernize the APIs: we don't have to go back to what was right in the 70s when Unix was defined, and we don't have to keep the very same APIs people did in the nineties — we can do APIs for the 21st century.
We can get rid of a lot of legacy functionality: we don't need to support DOS mode anymore, and we don't have to support all the system calls which are pretty much obsolete but still in there because of old software. And finally, we can refresh the user interfaces, starting from the APIs, to the shell, to the graphical user interface, up to how we build applications and so on. There's a lot you can do if you rethink the OS.
Therefore, a year ago we started our OS project, and we started from scratch: we simply said we don't want to improve on existing systems, we start with a blank sheet. The main goal of this project is to reduce complexity, because if we can do that — if we can build a system much simpler than what we have today — then our secondary goals, like scaling to modern hardware, having less overhead, improving the security and the real-time properties, and so on, are much easier to achieve. Instead of having millions of lines of code into which you have to add new functionality, we can make it much simpler and less complex.
But a year ago we didn't know Rust. My prior experience was that I had written OS code in C and C++, with parts in assembler, for a few years — actually for nearly two decades — and I had coded in other languages as well. For my PhD I even invented my own minimal systems programming language. But I didn't have any prior Rust experience, and it was a similar thing with my team. So to decide on a language, we did a simple language contest: we figured out what the strengths of the different languages are. For us, the result was quite obvious: Rust has many modern features.
You have traits, you have closures, you have enums, and you have one killer feature, and that's memory safety — one which you pretty much don't have in any other language that is, on the other hand, also suitable for building an operating system, because an OS needs support for low-level code, which Rust has as well. And finally, if I had to sum it up in a single sentence, at that time I said that Rust is a safer C and much nicer than C++.
On the other hand, it's not only the language which counts, it's also the environment, and for us there were many tools — cargo, rustfmt and of course clippy, and so on — so the ecosystem was also a deciding point. What we were surprised by were two things: testing and cross compilation. That was really easy.
I still remember the days when you had to cross-compile GCC 2.9-something and it took you a week; nowadays you just do a `rustup target add`. And finally, there is a nice community and we get nightly updates, so it's nothing like having to wait years to get a new compiler. What I want to do in the remaining part of this talk is pretty much reflect a bit on the lessons learned.
So what have we learned from this experience of starting to build an operating system without any prior Rust experience — and, perhaps, what are the missing features? Rust still isn't perfect; it's getting better, but there are still a few features missing.
Let's start with the modern features. First of all, there's a learning curve. When you start programming Rust, your Rust code looks like C: you do the very same things you would do in C, the very same APIs, the very same static variables, those kinds of things. Then you incrementally adopt more and more Rust features, and as you learn more, your code gets much better — which means you have to really revise existing code. You can't just program it and forget it.
Let me explain this with trait abstractions. When you start, you use traits just for the standard APIs, so you implement something like `core::fmt` and get those kinds of features. In the next step, you try to use traits for unifying code across drivers: we have a trait for MMIO accesses, which allows our drivers to share the very same boilerplate code for accessing registers. In a third step, you make your traits compatible, so that drivers of the same class become drop-in replacements.
So you define one interface for a set of different drivers, and if you can use one of them, you can pretty much use all of them in the same place. And finally, what we figured out is that we can use traits for unifying code across architectures: we have a paging implementation where the definition of the page table format is abstracted away, and which gets a unified interface for all architectures.
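As a rough sketch of that last step: one trait describes what the generic paging code needs, while each architecture supplies its own page-table format. All names here are invented for illustration; they are not the project's actual types.

```rust
// Sketch: generic paging code is written once against a trait,
// each architecture implements the trait for its table format.
trait PageTableFormat {
    const LEVELS: usize;    // depth of the page-table tree
    const PAGE_SIZE: usize; // smallest mappable unit in bytes

    /// Split a virtual address into one index per table level.
    fn indices(vaddr: usize) -> [usize; 4];
}

struct X86_64;
impl PageTableFormat for X86_64 {
    const LEVELS: usize = 4;
    const PAGE_SIZE: usize = 4096;
    fn indices(vaddr: usize) -> [usize; 4] {
        // 9 bits per level, starting above the 12-bit page offset.
        [39, 30, 21, 12].map(|shift| (vaddr >> shift) & 0x1ff)
    }
}

// Architecture-independent code only ever sees the trait:
fn page_offset<F: PageTableFormat>(vaddr: usize) -> usize {
    vaddr % F::PAGE_SIZE
}
```

Adding another architecture then means adding one `impl` block, not touching the generic walker.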
So pretty much you start simple, and you incrementally increase your knowledge of the system and make more complex use of the features. There are a lot of features in the language which give you simpler code, most notably iterators, which make the code really clean and allow you to use operations like `map` on them.
You can take the position inside them, fold over them, and so on. And they're actually mandatory for us, because in the beginning we don't have any heap, so we can't simply allocate dynamic memory. If you want to process entries in a list or something like that, converted on the fly, then an iterator is the right tool.
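A tiny host-runnable sketch of that: iterator adapters work on borrowed slices, so nothing here needs an allocator (the function names are illustrative, not from our code).

```rust
// Find the first free slot without allocating: `position` walks
// the borrowed slice lazily.
fn first_free(slots: &[u32]) -> Option<usize> {
    slots.iter().position(|&s| s == 0)
}

// Fold-style processing, also allocation-free.
fn sum_used(slots: &[u32]) -> u32 {
    slots.iter().filter(|&&s| s != 0).sum()
}
```

The same chains work unchanged in `no_std` code, which is exactly why they matter before the heap exists.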
Similar with closures, which are really nice because they can really shorten your code: if there's something you pretty much want to reuse, or you want a callback or something like that, then a closure is the right thing to use. Similar with error handling and the `Option` and `Result` types.
It's really nice that you have the question mark operator as a shortcut for error checking. It removes that nested kind of code where you call a function, have to look at the error code, pass it to the next layer of calls, and then check for the error code again.
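A minimal sketch of the difference (the error type and functions are invented for illustration):

```rust
#[derive(Debug, PartialEq)]
enum Error {
    NotFound,
}

fn lookup(id: u32) -> Result<u32, Error> {
    if id == 0 { Err(Error::NotFound) } else { Ok(id * 2) }
}

// Instead of checking and forwarding each error by hand, `?`
// returns early with the error and unwraps the Ok value otherwise.
fn handle(id: u32) -> Result<u32, Error> {
    let a = lookup(id)?;
    let b = lookup(a)?;
    Ok(a + b)
}
```

Every layer of calls stays one line instead of an if/else ladder.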
It makes the code really nice and clean, provided the error types match or can at least be converted into a similar type. And finally, there's the `Drop` trait, which is a real asset in an operating system, because you can be sure that if something goes wrong, your resources are freed.
If you look at the Linux code, for instance, you sometimes have these goto statements at the end of a function: if something goes wrong, you have to track which of the initializations already happened, and if one has happened, you have to jump to the right place and clean up from there. That's really ugly code, because it's hard to maintain, and if you forget something, you leak resources. With the `Drop` trait, this kind of code is pretty much gone now.
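A host-runnable sketch of the idiom — in the kernel the same pattern uses `core` instead of `std`, and `DmaBuffer` is a made-up stand-in for a real resource:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Observable side effect so we can see the cleanup happen.
static FREED: AtomicBool = AtomicBool::new(false);

struct DmaBuffer; // stand-in for a real kernel resource

impl Drop for DmaBuffer {
    fn drop(&mut self) {
        // Release the resource; runs on every exit path.
        FREED.store(true, Ordering::SeqCst);
    }
}

fn init() -> Result<(), ()> {
    let _buf = DmaBuffer;
    // ... later initialization steps fail here ...
    Err(()) // early exit: `_buf` is still dropped, no goto chain needed
}
```

The compiler inserts the cleanup at every return, which is exactly what the goto ladders in C are written by hand to achieve.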
What about memory safety? The first step is slices, which are really helpful, because you don't have to check the lengths — it's done automatically — and on the other hand, they reduce the number of parameters to functions, which is a small extra benefit.
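For example (a made-up helper, just to show the shape): a slice carries pointer and length together, so the C-style `(ptr, len)` pair collapses into one parameter, and every index access is bounds-checked.

```rust
// One parameter instead of a pointer plus a separate length;
// the iterator never runs past the end of the buffer.
fn checksum(data: &[u8]) -> u8 {
    data.iter().fold(0u8, |acc, &b| acc.wrapping_add(b))
}
```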
The borrow checker is a little bit more difficult: learning the rules, and learning how to cope with them, takes time. It's nothing you can do in a few hours. But how does it feel after a year?
Pretty much like this: in the beginning, you design around it — you make sure that the borrow checker doesn't kill your code and the compiler doesn't complain; you try to work around it. But later you start to design with it, saying: okay, given the borrow checker, how can I make sure that these things have a unique owner, and this kind of thing.
So what are the limits? Well, we're building an operating system; it's really low-level, so we can pretty much do everything we want with the CPU. This means we can actually break the guarantees inside the kernel, inside the hypervisor, in this low-level code. It's actually breakable through unsafe code, undefined behavior is easily possible too, and we clearly had these kinds of bugs.
For instance, we once forgot to save a register in the kernel entry code. It turned out to work quite nicely in release mode, but the compiler was using a different register allocation in debug mode, and then it crashed. A similar thing with inline assembler: you forget one register clobber and everything is fine — or not, depending on how the compiler behaves — so you can easily trash your whole program. Similarly, if you do low-level coding for an operating system, you have to be careful with the cache maintenance instructions.
If you leave something stale in your cache, things can go really wrong on the older ARM CPUs. And finally, we had some bugs in the paging code: everything was right, except that there was an outdated entry in the TLB. So there are also bugs that break the assumption that everything is memory safe. Our solution is to minimize unsafe code, and to rigorously test the code which we know is security critical.
So what about assembly? Assembly is pretty much the difference, at least from a coding point of view, between the kernel and an application. You need assembly to access special CPU state, you need assembly to access the CPU registers and to control low-level effects, and you need it in the kernel entry path: if you have a system call, you need to save the registers, you need to context-switch the CPU, and so on. We also use assembly to control the execution environment.
For example, if you want to catch exceptions that happen while copying memory between kernel and user space, then we have to make sure that the memcpy operation touches only the registers we want, so that it can recover from any page fault. For this, Rust has the inline assembly `asm!` macro, which for a C programmer is similar to `asm volatile`.
What we figured out is that naked functions are very important for us, because they allow us to structure our assembly code — not only the small snippets you have for accessing a few registers, but the kernel entry code, the exception code and this kind of stuff. It gives you a much better structure, and it allows you to actually document large chunks of this code, which is otherwise really hard to maintain if you have hundreds of lines of assembly instructions just next to each other. And there is a final step forward.
Here is a simple function — not complete, but complete enough to explain it — showing how you would enable paging on x86. It is a naked function, as you can see. This gives you the guarantee that there's no prologue in the function, but it also requires that there's a single `asm!` statement inside it, and this statement has to be `noreturn`: you can't just fall through a return statement, you have to do the return on your own, as in this case.
A
What
we
Define
is
we
Define
the
parameter
called
set
co0
and
we
assign
it
a
constant
function
called
it
set
0
0
and
we
simply
use
it
by
putting
the
phrases
inside
the
the
name
of
the
the
concept
and
the
constant
function
we
can.
We
can
document,
we
can
calculate
something
if
you
say
you
you
want
to
put
in
the
you.
Do
you
want
to
put
in
the
last
digits
of
P.
A
You
can
simply
calculate
something
like
that
if
you
can
express
it
in
constant
function
and
you
don't
have
these
magic
constants
somewhere
in
your
assembly
code,
which,
after
a
while,
nobody
knows
anymore,
why
this
bit
number
60
to
set
right.
So
it
makes
the
code
really
readable
and
maintainable,
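A hedged sketch of the pattern: the talk's version is a naked function whose single `asm!` block does its own return; here a plain function only shows how a documented `const fn` replaces a magic constant in the template. The bit names are real CR0 bits, everything else is simplified, and the privileged instruction compiles but must never be executed outside ring 0.

```rust
/// Compute the CR0 value with named, documented bits instead of a
/// magic constant buried in the assembly template.
const fn cr0_enable_paging(cr0: u64) -> u64 {
    const PE: u64 = 1 << 0;  // CR0.PE: protected mode enable
    const PG: u64 = 1 << 31; // CR0.PG: enable paging
    cr0 | PE | PG
}

// Simplified, non-naked sketch; real entry code cannot rely on the
// compiler-generated prologue, which is what naked functions avoid.
#[cfg(target_arch = "x86_64")]
unsafe fn enable_paging(cr0: u64) {
    use core::arch::asm;
    unsafe {
        // Privileged: valid only in ring 0, shown for structure only.
        asm!("mov cr0, {}", in(reg) cr0_enable_paging(cr0));
    }
}
```

The const function can be documented and unit-tested on the host even though the assembly itself can only run in the kernel.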
But there are also some limits, which I can show with an interrupt table you could have on an x86 processor.
If you just write a `push ax` there, you can end up with a different instruction than the one you intended, and it wouldn't work. What you can do is always fall back to raw machine code — that's one thing we learned here. And there's a genuinely missing feature: you can't unify the register names, in this case between x86 and ARM.
You can't unify the instruction names at the inline assembly level either, so sharing code between otherwise very similar encodings is not possible today. Another thing you see here — and this is really old-style assembly programming — are the `.rept`, `.set` and so on, and `.endr`: these are macros inside the assembly. This is something you would be familiar with.
If you look at how the operating systems at the end of the 80s were written — often in pure assembly — you have these macros inside, pretty much loops in the macro, and that's what was problematic to express today: repeated insertion of instructions. So there are two things to learn here. The first is that we didn't find a better way of repeating assembly instructions, so that's a basic feature that's missing.
The second is that there is a lot we can do with Rust much more easily than we did before. And finally, we test. We supported multiple architectures quite early in the project, because we decided that if we build not only on ARM, not only on x86, but on different architectures at the same time, we can get the interfaces right: we are forced to develop interfaces that unify all these architectures. And that's normally not possible.
If you have a small team, people usually tend to develop on only a single architecture, and the other architectures become outdated really fast. But if you have tests in place — if you can automatically test things — then it actually is possible.
So we make extensive use of that, and it allows us to run tests on every system level — not only for the kernel, not only for the applications, not only as unit tests, but on all levels we can have different kinds of tests. And finally, it produces the documentation for us, and some code coverage, so we can see how far along we are in our testing efforts.
So what are the limits of this testing? Well, one limit is that normal unit tests — what you do with `cargo test` — only work on the host: they are compiled for the host system and run on your host system. Normally that's fine, but if you target multiple architectures, then testing the data structures only on x86_64 is probably not sufficient, and on top of that it misses all the architecture-specific code. Usually, to distinguish between x86 code and, say, ARM64 code, you have some config options, a `#[cfg]` guard, and if you just run the unit tests, all of that is skipped. And finally, it misses the differences between 32-bit and 64-bit. So what we did is pretty much this:
We figured out that we can actually run our unit tests cross-compiled inside QEMU's user-mode emulation. You can specify the target for `cargo test`, and with a cross-compile-capable Rust toolchain installed, it compiles the code for, say, a Linux environment on another architecture.
On an x86 machine, QEMU then executes those binaries via translation, and the system calls done by the binaries are translated to the host system as well. It's really nice, because you don't have to provide any additional scripts or set up a virtual machine or anything like that — the qemu-user binfmt integration does it for you. So now we have cross-compiled unit tests, which is really nice, and it also found a few bugs.
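As a sketch, the invocation can be as small as this — the target triple is just an example, and the qemu-user and binfmt_misc packages have to be installed separately (names vary by distribution):

```shell
# Install the cross target, then run the unit tests for it; with
# qemu-user registered via binfmt_misc, the foreign-architecture
# test binaries run transparently on the build host.
rustup target add aarch64-unknown-linux-gnu
cargo test --target aarch64-unknown-linux-gnu
```

Depending on the setup, a cross linker may also need to be configured for the target in `.cargo/config.toml`.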
Another feature we have are integration tests, which are really easy if you use custom test frameworks, a feature of Rust. You annotate your test functions with `#[test_case]`, and you can run them on any layer: you can use it at the user level, at the application level, and we can use it in the kernel. What we do there is execute a lot of test cases, report an exit code at the end, and run the whole thing inside QEMU.
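In stable, host-runnable form, the machinery boils down to this shape — the real kernel uses the nightly `custom_test_frameworks` feature, where `#[test_case]` collects the functions and an attribute names the runner; the test functions and table here are built by hand and invented for illustration:

```rust
// Two example checks, standing in for real kernel test cases.
fn check_page_alignment() { assert_eq!(0x2000 & 0xfff, 0); }
fn check_table_size()     { assert_eq!(512 * 8, 4096); }

// The table the framework would generate from #[test_case].
const TESTS: &[(&str, fn())] = &[
    ("check_page_alignment", check_page_alignment),
    ("check_table_size", check_table_size),
];

/// Runs every registered test; in the kernel, the runner finally
/// reports a status to the host by exiting QEMU instead of returning.
fn test_runner(tests: &[(&str, fn())]) -> usize {
    for (_name, test) in tests {
        test(); // a failing assertion aborts the run here
    }
    tests.len()
}
```

The same runner shape works in `no_std`, which is why the feature fits kernel-level testing so well.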
QEMU forwards the exit code to the host — on x86 you have something called the debug port, a feature of QEMU which allows you exactly that — and the same works for the integration tests. These custom test frameworks caught a lot of bugs. For instance, we found out that we had initialized the CPU incorrectly, so that the kernel was able to overwrite read-only user memory. Now we have a test in place that calls into the kernel and tries to access user memory, and it should get a page fault.
Similarly, we had forgotten the initialization of some global data structures, and now we can test whether they are initialized. We figured out that some system call bindings were out of sync: we had changed a system call in the kernel, but forgot to do it at the user level. Those kinds of bugs you can find. And finally, we found some performance bugs.
We had some assumptions about how long instructions would take, which had led us to a certain implementation; we wrote a test, ran it inside the kernel, and could see: oops, that was the wrong decision here. So you can find a lot of bugs with this kind of integration test. The final thing I want to talk about is coverage. If you write tests, you never know how good your tests are, and that's why we figured out, okay, we want to measure the coverage. The problem is that the existing tools were not able to work on all our targets — they were mostly for x86.
There were a few for ARM, but what we wanted was one coverage report for everything. So we figured out that LLVM's source-based code coverage is the right approach for us, because the compiler directly inserts the profiling counters and it just works.
The problem is that LLVM's profiling runtime is really bad for these kinds of non-standard environments. We don't have an allocator, but the interface says something like: if you want to write out these profiling files, then you need to allocate a big chunk of memory for the data. And finally, the question was: how can we get the data out of QEMU? How do we communicate from inside the kernel, inside QEMU, to get this information down to the host system?
To solve that, we pretty much wrote our own profiling library. The hardest part was to reverse engineer the raw profile file format, because there's no really good description of it — there's some C code, I would say, and you have to figure out what everything means. The good thing about it is that all the metadata is already available in the ELF binary you're running, and you can actually write the output incrementally.
It's a drawback of LLVM's profiling library interface that it doesn't allow that; with our own library, you don't need allocation — you can write out all the profiling counters and the function definitions step by step. What was really strange: it turned out they had pretty much assumed a certain padding in the C code, so that it would work. They never really thought about padding; it's just a raw dump of the data structures they had in C.
For communicating to the host, our solution turned out to be relatively simple.
On ARM, semi-hosting allows you to write files, which is a little bit dangerous — because it can overwrite files of the test environment on the host — but it gets you the data. On x86 it's much easier to push the data out through a serial port, which in turn is a little more complicated on ARM, but would also be an option. So in the end, we can measure coverage on all our architectures, merge the results together, and get a nice report saying how much of our code is covered.
So what are my conclusions after choosing Rust a year ago? On the one hand, it was a risky decision: we didn't have any experience and didn't know how it would turn out. But in the end, we figured out it was the right approach, because for our goal of reducing the complexity of the code, the language is exactly what we need.
We couldn't have achieved the same with the same effort if we had stayed with C. On the other hand, we figured out that good tooling is really important, and that was one point where I decided: if we had stayed with a different language — even the one I developed myself — we would never have had this kind of tooling.
The second-to-last point is that we're doing all of this on the nightly releases of Rust, which is really nice, because it allows us to keep up with the Rust changes: pretty much every one or two weeks there's a clippy update, and we can just fix our code — it's automatically reported by our tests — and it's easy to make sure that the code always compiles.
There's also the option to go back to previous releases: you can always check out or switch to an older version of the nightly compiler, so that we can still compile. But finally, we're still early in learning Rust, so there are still moments where we say: oops, that's interesting, we could use that. There are features we're still exploring, and we'll see what this conclusion looks like, say, a year from now.
If I want to attempt an outlook, then I would say: your next OS will be written in Rust. There is a path — if you are a C developer, there's a path from C to Rust — but I found out that there's no way back. Once you have programmed in Rust for a time, you don't want to go back; it's not a good idea to drop the nice features we have now and go for something at a much lower level.