From YouTube: Rust Cologne: The Cost of Zero Cost
Description
https://media.ccc.de/v/rustcologne.2019.02.cost-of-zero-cost
Rust promises Zero-Cost Abstractions.
In this talk we'll look at ways to analyze generated code to determine its actual overhead.
Florob
So, what's the idea? I want to analyze code generated by the compiler, and analyze it at the different stages of compilation; we'll see in a few slides what that exactly means. Compilers generally have intermediate representations: the code goes through a lot of intermediate representations and in the end generates assembly. And since the promise is zero-cost abstractions, we would hope that the assembly is somewhat optimal, considering what we're trying to do, right? So that's the idea.
Otherwise, maybe you want to understand details of an algorithm. Abstractions in Rust in particular can at times be very abstract, like with macros, macro-rules macros or proc macros: you can write one macro invocation and it expands to the universe, sorted, so you don't necessarily see what's going on in terms of algorithmic complexity or whatever. So maybe we want to look into what it actually does. And then maybe looking at assembly could help us find bottlenecks in what we're doing. But that's actually work.
You would only do that after proper benchmarking, if you're sure that what you're looking at is actually something that matters for your performance, and only after you have looked at algorithmic complexity in terms of your real Rust code; or if none of that applies and you're just curious, right. Okay, so what does the compilation flow actually look like in Rust? Sort of abstractly; I've omitted parts that are, well, I'd say not observable to us. So we start with an input file.
That's the .rs Rust file that is passed to rustc, and rustc has two intermediate representations. One is the HIR, the high-level intermediate representation, which is still very close to actual Rust, sort of an abstract syntax tree. There's a slide for each of those. Then we go down through the mid-level intermediate representation, the MIR, which is still fairly new. It's a bit lower level: it actually expands a lot of the constructs in Rust into simpler things, and it can do some things that the surface syntax of Rust cannot actually do. Then that is in some way translated into another IR, the LLVM IR, the intermediate representation of LLVM, which is the compiler backend that rustc uses to actually generate assembly code. Then we go into the LLVM library, which magically turns that into assembly. Fun fact: LLVM also has an MIR, which is the machine-level intermediate representation, on the way to assembly. So that's not at all confusing, right?
So all of those are technically mostly data representations within the compiler, but as I said, those are the ones that we can observe, because they also have a textual representation that can be output to the user. So let's look at each of those stages and how to get there.
Now, the sole reason for this is to make it easier to see what we're actually doing, because the println! macro actually expands to a whole lot of code, and for this running example I sort of just wanted to look at the for loop. Because one of the first things you actually hear about Rust is: hey, for loops use iterators, and they're just as good as C-style for loops. So that's a pretty simple loop; it iterates through a few numbers. I thought maybe the easy first thing would be to compare it to what C does, but then we don't want the whole garbage of a println! expansion in there. So we'll just assume a function called puts that's never inlined, and then we can deal with that. Okay, so the first thing we can do actually involves none of those intermediate representations: we can just let the compiler expand the macros, which only works on nightly, and it actually requires an option.
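The running example itself isn't shown in the transcript, so here is a hedged reconstruction of what it plausibly looks like, assuming the loop bounds (0 up to and including 255, matching the exit condition discussed later) and the never-inlined puts stand-in described above:

```rust
// Hypothetical reconstruction of the talk's running example: a plain
// for loop over a range, calling a function that is never inlined so
// the generated code stays easy to follow.
#[inline(never)]
fn puts(x: i32) {
    // stand-in for printing; kept trivial on purpose
    let _ = x;
}

fn run() {
    // 0 up to and including 255, i.e. 256 iterations
    for e in 0..256 {
        puts(e);
    }
}

fn main() {
    run();
}
```

Everything the talk inspects from here on (HIR, MIR, LLVM IR, assembly) is the compiler's view of a loop of this shape.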
A
Mat
is
that
and
stable
options,
because
even
the
option
to
do
that
is
unstable,
but
what's
the
option
uses
pretty
expanded,
which
will
just
do
what
it
says?
It
expands
all
markers
and
dumps
Lexus
or
as
a
parsed
rust
back
at
you,
I've
put
two
ways
here:
to
do
that.
One
is
just
if
you
have
a
single
rust
file,
what
to
do
it?
You
can
call
Rossi
directly.
A
The
other
way
is,
if
you
actually
have
a
cargo
project
there
is
the
car
garage
c7
command
pettifog,
which
allows
you
to
run
the
whole
cargo
infrastructure
building
dependencies
and
everything
you
can
hazard
the
parameters
to
build
in
release
mode.
You
can
build
test,
build
examples
of
that,
but
also
pass
arguments
to
the
extras.
C
compiler
through
it
and
those
arguments
are
then
separated
with
the
after
you
got
Ricky
Loggins
right.
So
the
first
one,
if
you
have
just
a
file
the
second
one,
if
you
have
a
whole
cargo
crate
okay.
A
So
what
does
that
are
running?
Something
like
that?
You
expand
you,
and
I
said
print
'l
is
like
a
whole
lot
of
stuff.
It's
basically
everything
from
line
9
to
20
and
that's
like
after
I've
reformatted
that
a
bit
for
readability,
and
I
think
I'm
not
even
going
to
go
into
it
and
in
part
also
because
I'm
not
overly
familiar
with
our
the
formatting
infrastructure
in
rust,
actually
works.
But
as
we
can
see
it
expanded
into
a
lot
of
function,
calls
it
cut
out
parts
of
the
original
format.
A
String
that
you
had
like
the
braces
are
gone.
You
have
the
part
before
the
braces,
which
is
an
empty
string.
We
have
the
part
after
the
brace
in
a
print
alone,
which
is
the
newline
and
then
sort
of
just
passing
the
arguments
of
loading
it
like
fancy,
where
he's
doing
matches
to
create
page
bindings
and
all
that,
but
the
more
interesting
part
is
down.
There
are
probably
still
irregular,
faulting.
Okay,
that's
a
bit
more
but
yeah.
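The expansion itself is unstable internals, but the stable format_args! macro exposes the same shape the talk describes, the literal pieces around the braces plus the formatted argument. A hedged sketch (the write_fmt plumbing here is illustrative, not the exact expansion):

```rust
use std::fmt::Write;

// println!("{}", x) boils down to: build a fmt::Arguments value out of
// the string pieces around the braces ("" before, "\n" after) and the
// argument, then hand it to a writer. format_args! is the stable
// surface for building that value.
fn render(x: i32) -> String {
    let mut out = String::new();
    out.write_fmt(format_args!("{}\n", x))
        .expect("writing to a String cannot fail");
    out
}

fn main() {
    assert_eq!(render(5), "5\n");
}
```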
A
So
next
step,
I,
love,
intermediate
representation.
Is
that
it's
roughly
in
abstract,
syntax
tree
if
you're
familiar
with
that
concept
from
compiler
design,
it
sort
of
describes
operations
and
their
arguments
in
a
tree
form
and
in
hir
how
many
high-level
intermediate
representation
pretty
much
all
constructs
in
the
surface
language
I
expanded
and
particularly
what
we'll
see
in
a
moment
is
for
loops
actually
expanded
to
a
like
lower-level
form.
Otherwise,
it
still
looks
a
lot
like
the
surface
language
and
the
way
to
get
that
is
actually
very
similar.
A
You
can
also
do
that
only
in
likely
compilers
you
can
use
that
unpretty
equals
hir
and
then
an
input
file
there's
again
the
Ryan
doing
the
zoo,
Kagura
C
and
there
is
cargo
inspector,
but
he
is
endre,
which
is
also
sometimes
in
the
tense
of
this
meetup,
which
he
introduced
a
thing
at
FOSDEM
this
weekend
and
there's
linked
to
that
in
particular,
and
there's
recording
from
Fossum.
A
If
you
want
to
look
at
that
which
is
sort
of
interesting,
because
it
deals
in
ways
with
a
very
similar
topic,
even
though
we
didn't
talk
about
this
before,
but
he
has
a
very
different
spin
to
doing
toxin.
I
do
like
he
adds
a
lot
more
of
the
motivation
for
wanting
to
look
at
low-level
stuff
and
I'm
like
sure
the
mother
person
to
go
into
Z
nitty-gritty,
so
to
say
right.
A
So
what
that
does
is
basically
just
the
same
thing
as
Z
above,
but
it
tries
to
format
the
code
which,
more
often
than
not,
and
what
she
does
work,
but
oh
well
and
then
actually
prints
it
with
color,
with
a
color
scheme
on
through
the
console
and
less
like
few,
which
is
kind
of
nice.
If
you
just
want
to
have
a
quick
look
at
something.
A
Okay,
that
is
okay,
so
I
did
not
put
the
HR
on
the
slide
and
you'll
see
the
reason
in
the
moment,
I'll
make
that
a
bit
bigger.
And the reason is pretty much this: it's a lot. As I said, we're not going to care about the innards of println!, so everything up here is pretty much the println! expansion, this time not with the nice reformatting that I did, but exactly the output the compiler gave me. So let's not care about that and jump down to the main function. Is it actually large enough for everyone to read? I'm looking...
Okay. So what we see is that the for loop is gone, right? And actually quite a bit has been done. The only loop construct we still have is actually loop, which, if you haven't seen it (because even though it exists, it's quite rare, even in Rust), is an endless loop, unless you call break inside it or whatever. And there are other things that are just superficially notable.
But other than that, we can see a few interesting things about for loops. The first thing we see is that the argument of the for loop is actually passed to into_iter, so we've produced an iterator, in a way that might also seem a bit foreign to most of you. The exact binding name used in the expansion escapes me; I think a few different ones were floated.
A
Then
we
match
over
that
address
nearly
enough,
because
it's
it's
not
actually
something
you
can
match
over
that
over
and
then
find
29.
It's
bound
to
the
name.
It
ER
you
tably,
which
is
just
a
irrefutable
pattern
that
will
always
match,
and
it
just
gives
a
sing
a
name,
so
that
is
so
to
say
the
same,
very
same
thing
as
writing:
that
would
it
er
equals
the
intuitive
call,
but
in
the
expansion
it's
that
further
I
think
actually
lifetime
reasons
and
different
things.
A
And
after
that
is
pretty
straightforward,
like
in
each
iteration
of
the
loop,
we
call
the
iterator
next
function
on
your
iterator.
If
you
have
ever
actually
implement
an
iterator
you'll
know
that
it's
it's
the
method
of
an
iterator.
It's
gives
you
to
the
next
element,
sort
of
obviously,
and
it
does
it
in
the
form
of
an
option.
Either it returns the Some variant of the option, which holds the next element, or it returns None, in which case the iteration has ended. And we can see here that this is exactly what the expansion does for the for loop: if it's Some, it sets the next value to the value that was returned, and if it's None, it breaks out of the loop. And unless the loop was broken, we then assign our next value to the variable e and pass it to puts.
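Put together, the desugaring the HIR shows can be written by hand in surface Rust. This is a simplified sketch (real HIR uses internal language items and slightly different paths):

```rust
fn puts(x: i32) {
    let _ = x; // stand-in for printing
}

// What `for e in 0..256 { puts(e); }` roughly desugars to in HIR:
// into_iter via a match with an irrefutable binding, an endless loop,
// and a match on next() that breaks on None.
fn desugared() {
    match IntoIterator::into_iter(0..256) {
        mut iter => loop {
            match Iterator::next(&mut iter) {
                Some(e) => puts(e),
                None => break,
            }
        },
    }
}

fn main() {
    desugared();
}
```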
By the way, because I forgot to say this in the beginning: you can shout questions at me at any time. If they're longer than, say, two sentences, try to get the microphone, just for the recording, but other than that, shoot. So yeah, that's for loops: an endless loop, then we get each element separately, and we break once the iteration ends.
So once the first part is over, we can go from that high-level intermediate representation to the mid-level intermediate representation. That is the sort of more simple core language, and it's actually so simple that it's, I think, a bit horrendous to read. [Audience interaction]
Okay, that's even more people that actually put their hands up than I expected. Basic blocks are a concept from, let's say, mostly compiler engineering. What a basic block is, is a block of code that will always run sequentially, so there are no branches within it.
A
So
the
start
of
the
block
is
usually
reached
reached
by
a
some
jump,
expression
or
a
go-to
expression,
and
at
the
end
of
the
block
itself,
is
some
sort
of
terminator,
which
can
either
be
a
return
from
function
or
itself
a
jump
to
them
as
a
basic
block.
So
basically
basic
blocks
are
what
make
up
control
photographs
if
you
draw
them
right.
A
So
if
you
have
an
if
statement,
if
you
have
C
then
and
the
else
block
you'd
have
like
some
basic
block
the
decision
which
goes
to
either
other
basic
blocks,
which
then
just
run
sequentially
and
that's
what
Mir
I
showed
up
a
lot
of
like
you.
Have
those
blocks
that
link
to
each
other
oftentimes
those
go
to
statements,
but
they
also
have
these
syntactically
in
the
text
form
ways
to
go
to
different
basic
blocks
based
on
return,
values
of
functions
or
return,
situations
of
functions.
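As an illustration (not compiler output), the if/else control-flow graph just described can be mimicked with an explicit block state machine, where each block runs straight-line code and then names its successor, the way a MIR terminator does:

```rust
// Each variant stands for one basic block; every match arm is
// straight-line code that ends by naming the next block (a goto-style
// terminator) or by returning (a return terminator).
#[derive(Clone, Copy)]
enum Block {
    Decide, // the `if` decision block
    Then,   // the then-branch block
    Else,   // the else-branch block
    Join,   // the join block after the if
}

fn run_cfg(cond: bool) -> &'static str {
    let mut result = "";
    let mut block = Block::Decide;
    loop {
        block = match block {
            Block::Decide => if cond { Block::Then } else { Block::Else },
            Block::Then => { result = "then"; Block::Join }
            Block::Else => { result = "else"; Block::Join }
            Block::Join => return result,
        };
    }
}
```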
We'll see that in a moment. The way to get MIR is pretty similar to getting HIR: you literally just type mir instead of hir, same if you're using cargo-inspect, which has an argument for it. But as I warned you, it's so simplistic that it's actually complex to read, which means it looks something like this.
A
So
whatever
I
wanted
to
show
at
this,
so
I
think
one
thing
we
can
see
is
that
stuff
moved
around
a
bit
like
our
main
function
is
suddenly
the
very
first
thing
we
see
in
the
output
by
the
way
nice
warning
at
the
top
right.
It
says
this
is
subject
to
change
it's
for
human
consumption,
but,
like
don't
don't
trust
this
sort
of
right.
A
Call
so
that's
something
you
can
see
in
this
syntax
and,
as
I
said,
it
has
like
ways
to
tell
you
where
you
return
from
a
function
like
this
arrow,
this
arrow
BB
one
tells
you
that
returning
from
that
function
takes
you
to
basic
block
one
which
is
the
one
just
below
actually
indicated
by
labels.
So
it's
very
like
go
to
like
certainly,
and
it
actually
has
coated,
as
I
said,
it
can
just
immediately
go
to
another
basic
block.
A
So
since
I've
not
done
too
much
compiler
internal
work,
I
can't
tell
you
exactly
but
like
what's
what's
been
floated
is
that
mid-level
intermediate
representation
made
it
Lots
easier
to
implement
certain
features
like
if
you
just
have
a
very
basic
language
you
need
to
operate
on.
In
theory,
it
becomes
much
easier
to
do.
Optimizations
and
other
things
and,
in
particular
middle
of
intermediate
representation.
In particular, people said the mid-level intermediate representation was one of the requirements for actually implementing non-lexical lifetimes, because that would just have been incredibly hard on the high-level intermediate representation: it's too abstract, too far away from just having actual small scopes and all that. So that's one reason. It also meant they could write at least some small optimizations on MIR, and as far as I've seen, most of those small-optimization kinds of things are actually happening on MIR right now rather than on HIR.
[Audience question] Hmm, I couldn't tell; it's very likely not guaranteed to be stable. I think the thing it output in this case, there might have been some missing pretty-printing or something with the general layout. But as I said, these are textual representations of data structures, and they're mostly meant for human consumption.
Oh yeah, that's actually a good question: there must be a loop here, right? So let me see real quick. We're calling next here, which is pretty much the top of our loop, and then at some point we must be comparing something, right? So that might also be interesting. What this does is, if I look up at the top, we can at least go by the type.
So what we do is: we get the discriminant of the enum, so of the Option. Discriminant meaning the value that tells you which variant the enum currently contains; is it the Some or the None, right? That's stored in a local, and then we do a switchInt operation over that value, so over the discriminant, and then it has this fancy syntax, whatever,
which in the actual data structure says: okay, if it contains zero, go to basic block four; if it contains one, go to basic block six; and otherwise go to basic block five. Basic block five, as we remember, is kind of interesting, because basic block five is unreachable. Okay, cool. So for the variants that actually exist: if it's a zero, which is the None variant, it goes to basic block four, which then, in the end, returns from the function.
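The discriminant-then-switchInt sequence corresponds to something observable from safe Rust with std::mem::discriminant: a match on an Option is, underneath, a read of the tag value followed by a branch on it. A sketch:

```rust
use std::mem::discriminant;

// A match like this compiles down to: read the Option's discriminant,
// then switch on it (conceptually 0 for None, 1 for Some; the concrete
// encoding is up to the compiler).
fn tag(x: &Option<i32>) -> &'static str {
    match x {
        None => "none",
        Some(_) => "some",
    }
}

fn main() {
    // discriminant() exposes the tag abstractly: it compares equal for
    // the same variant regardless of the payload.
    assert_eq!(discriminant(&Some(1)), discriminant(&Some(99)));
    assert_ne!(discriminant(&Some(1)), discriminant(&None::<i32>));
    assert_eq!(tag(&None), "none");
    assert_eq!(tag(&Some(7)), "some");
}
```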
Okay, and if it is one, which is the Some variant, it will go to basic block six, which is down here. And basic block six does something that you also can't do in the surface language: it takes our option and, because it now knows that it has to be the Some variant (it already checked the discriminant before, so that's valid), it directly takes the value out of that variant.
Then, with that value, we move. That's what I was talking about before; it does moves that I cannot, without knowing the internals, entirely comprehend: it moves local nine to eleven, eleven to five, five to twelve, twelve to fourteen, and then calls puts with fourteen. I have no idea why it has to do five moves to call the function, but it calls the function in the end, right? So that's our iteration value, and after calling the function it goes to basic block seven, and then the next basic block. [Audience question]
The benefit of that is that it makes a lot of analyses much easier. If you have only one assignment, you know how long a value is live; live meaning up to the last point in time where it's actually used, because it can become live again when something else is reassigned to it. And you can do various data-flow analyses: you know the last point it's used, where that name is last accessed, and those kinds of things.
The important thing about it, and since this is a loop I think we'll definitely see it, is that it has single assignment, statically, meaning you see, in the syntax, only one point where a given variable is assigned. But the value might actually differ, depending on from which other basic block you entered the basic block that defines the variable. Which sounds super abstract, but we'll see an example of that in a moment. Other than just plain variables,
A
It
gives
you
a
pointer
that
pointer
is
starting
variable,
and
then
you
have
to
do
explicit,
read
and
write
calls
on
that
pointer
to
actually
access
memory
in
any
way,
which
is
also
the
thing
that
makes
it
like
not
strictly
a
single
static
assignment
form,
because
you
can,
of
course,
with
those
operations
at
hand.
You
can
put
variables
in
many
of
you
and
redefine
them
as
you
want.
They
don't
have
to
be
like
a
new
variable,
and
then
you
have
optimizations
that
go
from
memory
to
registers
and
the
other
way
around.
A
So
the
way
to
get
that
and
that
actually
also
works
in
stable,
rust
nice
enough,
as
you
can
pass,
emit,
LVM,
IR
and
I'll,
give
you
an
input,
dot,
ll
file,
same
thing,
also
sort
of
works
for
CarGurus
C,
in
which
case
by
the
way
it
outputs,
I,
think
targets,
release
depths
or
something
like
that
which
it
was
a
bit
hard
to
find,
but
certainly
workable.
So
let's
look
at
that.
A
The
big
blobs
like
here
are
a
lot
of
definitions
of
actually
like
unwinding
information
for
the
compiler,
some
constants
for
various
things,
but
like
what
we
care
about
is
a
for
loop
in
the
main
function
right.
So,
let's
jump
to
something
called
main:
here's
our
main
function,
or
at
least
the
mangled
name
for
it.
The
main
function
in
the
fourth
grade
and
then
there's
some
code
for
it
and.
Yeah, actually, let me try something real quick. Right, I did that without compiling in release mode. If you're trying to do this for optimization, or actually for many other reasons, because it's much more readable, it of course makes more sense to compile the code with optimizations enabled. For rustc that is -C opt-level=2 or 3, 3 being, I think, the default for release.
Other functions have been added to the code. There is a symbol called lang_start, which is the first thing that's called in a program when it runs, and which then in turn calls the main function. There is an exception-handling personality function, which you might know from C++. And there are helpers to start lifetimes and end lifetimes. But what we're interested in is pretty much just this snippet, which is the main function.
So there's a start label here, there's a basic block 4 and a basic block 6, and the reason for the numbers not being contiguous is that this is after optimization. The original MIR had contiguously numbered basic blocks, but then it went through various optimization passes; some basic blocks might have gotten merged, some might have gotten removed because they did nothing, and things like that. This is what we end up with, and we can actually follow through to our loop here.
Since there are only a few functions, it's just a few lines of code. We start in the start basic block and immediately branch to basic block 6, and basic block 6 does something I talked about a moment ago: the register might have different values depending on which basic block you came from, which is done by a phi node. The way to understand that is: if you have a loop, and you have your induction variable right here, there are basically two values it can have. Either it's the value that it got initialized to, at the start of the for loop, or it has the value from the previous iteration of the for loop, the one that you just incremented, in a way, before going back to the top. And a phi node is a way to express that: the phi tells you, okay, there are multiple values this variable can be assigned to now, depending on which basic block you came from. And that's easy here.
Oops, that was entirely the wrong key to press. If you came from the start basic block, which is the one we came from just now, the value of this register is always zero. If you came from basic block six, which is the basic block we're in right now, so the loop itself, the value is that of register %0. And going through it, you might see what that means: register %0 is defined on the next line.
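In surface Rust terms, the phi just merges the two assignments to the counter: the initialization on entry and the increment on the back edge. A hedged model of the loop the IR describes (the 256 bound is taken from the running example, not from the slide):

```rust
// The phi node picks the counter's value by predecessor block:
// 0 when entering from `start`, the incremented value when arriving
// over the loop's own back edge. As ordinary code, those are simply
// the two places `i` receives a value:
fn last_counter_value() -> i32 {
    let mut i = 0; // incoming value from the entry block
    loop {
        let next = i + 1; // the `add nuw nsw` from the IR
        if next == 256 {
            return i; // exit edge: the loop is done
        }
        i = next; // incoming value on the back edge
    }
}
```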
The LLVM IR has a lot of modifiers and metadata information attached to everything. In this case, nuw and nsw mean "no unsigned wrap" and "no signed wrap", so it guarantees to the code generation that this addition will never wrap past the maximum value of the integer. And if you are interested in that, you can see those sorts of things easily in the LLVM IR: what the compiler actually assumes about your code, and what you could try to tell it in some ways.
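Rust's own arithmetic APIs make the distinction visible: a plain addition on a bounded counter lets the compiler attach nuw/nsw, while the explicitly wrapping operations forbid exactly that assumption. For instance:

```rust
fn main() {
    // wrapping_add promises nothing about staying in range:
    // 255 + 1 wraps around to 0 for a u8.
    assert_eq!(255u8.wrapping_add(1), 0);
    // checked_add reports the would-be wrap instead of performing it.
    assert_eq!(255u8.checked_add(1), None);
    assert_eq!(254u8.checked_add(1), Some(255));
}
```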
Okay, so what it does is: it takes our phi value up here and adds one to it. As I said, that's clearly a loop counter, and on each iteration, sensibly, we add one to our loop counter. Then we call "tail call fastcc" on puts; it's a bit hard to see with those numeric names, but that's our puts function, right. And it calls it with the original value of our loop counter, because in the first iteration we want to call it with zero and not with the incremented value of one, right?
Then it does a comparison, an icmp eq, an integer comparison for equality, and it compares our incremented loop counter with 255. We went from 0 up to 255, including 255, so that's basically our exit condition for the loop. And then, depending on whether that's 0 or 1, we branch to two different basic blocks. If it is equal, we have to exit the loop, because this was the last iteration: it branches to basic block 4, which returns from the function. Simple enough.
Otherwise, it branches to basic block 6, at which point the loop continues. And we see that we got here, to the loop counter, from the same basic block we're just in, basic block 6. So it assigns to the loop counter our incremented value of the loop counter, which is how there can truly be just a single assignment, statically, in that basic block to that register.
Yeah, and that's, I think, the takeaway from this already: it's pretty much a C loop. You have one basic block; you check at the bottom whether that was the last value; if it is, you return; otherwise you execute that same basic block again. There's no call to next there anymore, there is no fancy matching of an option anymore, and pretty much nothing else, right?
So the question was whether I know where that phi comes from. That phi is a concept specifically of static single assignment form; it really just exists for this kind of compiler intermediate representation, and it's then expanded properly by the compiler during code generation. Which usually just means, I mean, a valid way to do it is: that variable is a register, and on the backwards edge to the same basic block
you make sure that that register gets the next value it's supposed to get before actually branching back. So you can then do a lot of fancy stuff with register allocation in this case, CPU register allocation and that kind of thing. Oh yeah, I don't actually know what the original paper for this is, but if you read up on static single assignment form, phi nodes will come up.
That's absolutely sensible to do; I've actually got another example where I did pretty much that. I'd like to skip it during the talk, but if we have time after the talk I'd be happy to do just that and then compare. It's totally reasonable to just see what the LLVM IR actually looks like for the equivalent C code.
A
That
is
a
very
good
question.
Certainly
there
would
be
something
for
the
unwinding
I'm,
not
admittedly,
very
familiar
with
how
panic
actually
works
in,
like
unwinding
prison,
allottee
functions
and
all
that
what
you
usually
see-
and
that's
also
something
that
we
easily
can
try
after
the
talk
of
Ubuntu,
is
that
there
is
code.
That's
a
bit
separate
from
the
actual
function,
like
usually
at
the
very
bottom
of
it.
D
A
We all have some very, very basic understanding of CPUs, at least: you have a set of registers, a fixed set, not the infinite set the intermediate representations got to use, and you can access them and do operations on them; you can add one register to another, write one to another, those kinds of things. And I guess we'll see a quick example of that.
The way to output assembly is very much the same as for LLVM IR: you just write --emit=asm instead of --emit=llvm-ir. You can in theory do that for various CPUs and dialects just by applying cross-compilation options to rustc, which is also very interesting, and which I failed to do for some reason yesterday; I would have liked to show that. So yeah, let's dive into a bit of assembly, I guess.
A
So
not
only
because
I
didn't
manage
to
cross,
compile
it,
but
also
because
I
think
it's
most
useful
to
most
people.
This
is
x86
64
assembly,
so
64
bits,
Intel,
CPUs
and
again,
there's
sort
of
obviously
a
lot
of
code
attitudes
from
what
we
had
originally
like
everything
that
implements
parental
and
had
to
be
added
to
this
assembly
ancestors
binary.
Obviously
so
printing
can
actually
happen.
That's
also
our
language
start
symbol
again
and
somewhere
in
G
should
be
our
main
function.
So, let's see; looks like I managed. There's technically another way to get at something like this: you could also call objdump -d, as you usually would if you're familiar with it, on the finished binary. But that's much less readable than actually having LLVM give you the assembly, because the assembly LLVM outputs still has a lot of annotations in it. Not that many for rustc, actually; the variant that clang outputs usually even has some comments about which variable names certain things came from.
The next instruction is the so-called lea, which is short for "load effective address". You'll see a lot of suffixes on these instructions, which, specifically for x86, tell you the length of the value they operate on. Okay, I should actually say: what's particular in the x86 world is that you not only have registers to work on, but each register has a certain set of sub-registers which are smaller and contained within it. Classically, you have the AL and AH registers, which are both 8 bits and are part of the AX register, which is 16 bits; that in turn is the lower part of the EAX register, the "extended AX" register, which is 32 bits. And now that we have 64-bit CPUs, we have RAX, which EAX is the lower part of. And you can replace the letter A with B, C and D, and so on for 64-bit.
So lea is an interesting instruction, because what it's actually supposed to do is load an effective address, meaning one of the operands is an address calculation and the other operand is where you store the calculated address. The full syntax of it, I think, is something like this.
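The transcript elides the syntax shown on the slide; the usual form is "lea dst, [base + index*scale + disp]". Arithmetically, the address calculation is nothing more than this (a sketch, with scale restricted to the values x86 can encode):

```rust
// lea computes base + index * scale + disp without touching memory;
// scale may only be 1, 2, 4 or 8 on x86.
fn lea(base: u64, index: u64, scale: u64, disp: u64) -> u64 {
    assert!(matches!(scale, 1 | 2 | 4 | 8), "x86 only encodes these scales");
    base + index * scale + disp
}

fn main() {
    // e.g. addressing the 3rd element of an array of 8-byte values at 0x1000:
    assert_eq!(lea(0x1000, 3, 8, 0), 0x1018);
    // the "add one" trick from the talk, lea ebx, [rdi + 1]:
    assert_eq!(lea(41, 0, 1, 1), 42);
}
```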
So that's sort of fancy, right? With a single instruction you can do two additions, one with a register and one with a constant, and you can multiply one register with another constant. Which is exactly the reason why people, more often than not, don't actually use it to load addresses, but to calculate new values. In this case we calculate the value of RDI incremented by one, which is an incredibly verbose way to say "add one", but sure, that's
x86 for you, okay. Right, then we call the puts function. The important thing to know here is how parameters are actually passed to functions, and the way that's done is that there's a defined order of registers for each parameter. In x86-64, the first parameter is always in the RDI register, the second is in the RSI register, and so on. And RDI in our case is still set to 0, right, because we cleared it up here; then we added one to it, but didn't store that back into RDI; it went into EBX instead.
So RDI is still our initial loop counter value, and that's what's passed to the puts function. After that we move the incremented value, so RDI plus 1, from EBX to EDI. The direction here is from left to right, if you hadn't caught on to that. Then we do a compare. Compares, at least on x86, are interesting:
they don't return a result in a register or anything; they set certain so-called flags in the CPU, global mutable state, sort of, in the CPU, which tell you things about the result of the last operation. In particular, there's a flag that tells you whether the result of that operation was 0, which is the way this compare is implemented: what it actually does is subtract the immediate from the register, but it doesn't store the result anywhere; it just sets those flags.
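Modeled directly: cmp performs the subtraction, throws away the numeric result, and keeps only the flags; the conditional jump then reads the zero flag. A sketch:

```rust
// cmp reg, imm: subtract, discard the result, record the flags.
// ZF (the zero, or "equal", flag) is set iff the subtraction gave
// zero, i.e. iff the two operands were equal.
fn cmp_zero_flag(reg: u32, imm: u32) -> bool {
    reg.wrapping_sub(imm) == 0
}

fn main() {
    assert!(cmp_zero_flag(255, 255));  // jne not taken: the loop exits
    assert!(!cmp_zero_flag(1, 255));   // jne taken: another iteration
}
```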
So fundamentally we subtract the 255 immediate from EBX, our loop counter, and then, if the result of that subtraction is zero, the zero flag is globally set, and the next instruction makes use of that: it checks the zero flag, sometimes also called the "equal" flag exactly for that reason, and if the whole thing is not equal, slash not zero, we branch back to the same basic block up here and do another iteration.
So this is very similar to what we saw in the LLVM IR: you have the loop counter, and then the value after the loop counter was incremented. EBX is, well, for some part of this loop at least, holding the already-incremented value, while RDI is the not-yet-incremented value; RDI slash EDI, as said, the one being the sub-register of the other.
Let me think a moment. I think that's not technically necessary, but if you're looking at assembler, one thing that's always a good excuse to throw at people, whether it's the truth in this case I don't know, is instruction scheduling. And instruction scheduling is pretty much a fancy way to say "it's compiler magic slash CPU magic", because what it means is that depending on the order in which you execute instructions, it might be faster or not, because certain computational units in the CPU are used a lot.
I think one thing that plays into that is that the instructions were emitted with the thought in mind that this loop will be executed at least once, so it inevitably has to do that increment. I think it would look different, or may look different, if the compiler couldn't be sure of that, dropping the increment for the first check of the loop condition.
If you don't know it: Project Euler is like a database of problems that you can try to solve, to sort of jog your mind, I guess. Technically you're also not necessarily supposed to publicly show solutions for them, but I think the very first one is easy enough that we can deal with that. So this is sort of a very "rusty" implementation, using a lot of, like, sugar; in this case, iterator adapters.
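The solution itself isn't in the transcript. Assuming it's the standard Project Euler problem 1 (the sum of the multiples of 3 or 5 below 1000, which also matches the magic constants that appear later), an iterator-adapter version looks like this:

```rust
// "Very rusty": no for loop, just chained iterator adapters.
fn euler1(limit: u32) -> u32 {
    (1..limit)
        .filter(|n| n % 3 == 0 || n % 5 == 0)
        .sum()
}

fn main() {
    assert_eq!(euler1(10), 23); // 3 + 5 + 6 + 9
    println!("{}", euler1(1000));
}
```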
A
A
So: no for loops, nothing, just chained iterator adapters, which in Rust are pretty nice. The obvious question is: that calls a lot of functions, that creates a lot of iterators and all that stuff. Is that actually as nice as a C version of it, or is it just a bunch of overhead that you wouldn't really want to use if this were performance critical?
Okay, so cargo rustc, which I think I didn't mention before, gives us a .s file for the assembly, and again there are lots of things in there that we don't necessarily care about, and then somewhere our main function. You'll notice that I jump straight down to the assembly here, for one in the interest of time, but also because in the other stages there's not much interesting happening. Down at the MIR level we definitely know that it will still call the functions for the iterator adapters, because no optimization of that sort happens on MIR.
That's our actual main, and I'd say this one is quite worthwhile today, because there are some very interesting things in it. So let me point out some stuff. First interesting thing: it zeroes ECX, and ECX is, very classically, our loop counter. In this case too, I think, yes. And then it has those two magical-looking constants up here, and I can also show them in hex, which makes them easier to recognize: 0xCCCCCCCD.
And this one is 0xAAAAAAAB. It all looks kind of magical, right? The next thing it does is zero ESI; I think I probably was wrong before about which of these is the loop counter. So what happens then is sort of interesting: there is no modulo instruction in there, and there's also no division in there, but after actually staring at this for a while and testing things out yesterday, what we came up with is that those two instructions, multiplying by one of the magic constants,
implement the division. In this case they replace the division by a multiplication followed by a shift. There are multiple ways to look at this; I think one that's sort of helpful is to think about what the shift does: a right shift is the same thing as dividing by that power of two, right? You've probably heard of that.
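In concrete terms, and hedging that the exact shift amounts are my own reconstruction rather than something shown on the slides, the two constants above are the standard magic numbers for unsigned division by 5 and by 3:

```rust
// Division by a constant via multiplication with a magic constant.
// 0xCCCCCCCD is ceil(2^34 / 5), so (n * 0xCCCCCCCD) >> 34 == n / 5
// for every u32 n; likewise 0xAAAAAAAB is ceil(2^33 / 3), and a
// shift by 33 divides by 3.
fn div5(n: u32) -> u32 {
    ((n as u64 * 0xCCCC_CCCD) >> 34) as u32
}

fn div3(n: u32) -> u32 {
    ((n as u64 * 0xAAAA_AAAB) >> 33) as u32
}

fn main() {
    for n in [0u32, 1, 14, 15, 999, 1000, u32::MAX] {
        assert_eq!(div5(n), n / 5);
        assert_eq!(div3(n), n / 3);
    }
    println!("magic-constant division matches / 5 and / 3");
}
```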
The check that then follows is that we compare our original loop value, so the value of that iteration, with the value that came out of this integer division followed by a multiplication, and if those are the same, we know the number was evenly divisible, because our integer division didn't throw anything away. And then there are some interesting things here: if they were not equal, we actually zero our loop counter, so what we're going to add is 0 instead of the current number, and then there is some more of this conditional stuff.
It moves the loop counter into the value we're going to add, but only if the modulo was equal to 0, or rather, if the division followed by the multiplication came out equal, right? And yeah, it basically ends up doing an add here with that value, which is going to be our sum, adds one to the loop counter, compares the loop counter (the incremented one, to be exact) to a thousand, and then branches back to the head of the loop.
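Sketching that per-iteration logic back in Rust (my own reconstruction of what the assembly is doing, not code from the talk):

```rust
// Branchless shape of the loop body: the addend is zeroed unless the
// divisibility check passed (a conditional move in the assembly),
// then added unconditionally.
fn main() {
    let mut sum = 0u32;
    let mut i = 0u32;
    while i < 1000 {
        let divisible = i % 3 == 0 || i % 5 == 0;
        let addend = if divisible { i } else { 0 }; // cmov, not a branch
        sum += addend;
        i += 1;
    }
    println!("{}", sum); // prints 233168
}
```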
Here is the C version, which looks very different, because it's actually an explicit loop and I have a sum variable. I've made sure that the types match up, so I know that on this architecture unsigned int is a 32-bit unsigned integer, which is what I was using in Rust. I'm iterating over the same numbers, doing the same check, whether the modulo is equal to zero, and then summing it all up and printing it at the end. So let's look at what clang says about that.
So for clang, -S gives us assembly, -O is basically the opt level, and then we'll look at the result, and that gives us the same kind of file, but now compiled from our .c instead of our Rust code. We'll look at our main function again, and interestingly enough it is very similar, but it also realized something that apparently LLVM couldn't realize on the Rust code. It still does this multiplication followed by a right shift, but instead of doing that with 32-bit values,
it's now doing that with 16-bit values, or rather with 16-bit values where the result is a 32-bit value, whereas the Rust version did it on 32-bit values with a 64-bit result. What that tells me is that clang somehow realized that the values would never be more than a thousand, so that smaller precision was sufficient, and that in some way, for Rust, that information must have gotten lost, so the optimizer couldn't use it to choose smaller constants. That would actually be an interesting place to look if you wanted to investigate further: where that information got lost.
With GCC, interestingly enough, we get 32-bit constants, just like Rust. So GCC, for some reason, also either doesn't think it's worthwhile to go for smaller constants, or it doesn't realize it can. Also, they are printed as negative here, and there are some other idiosyncrasies, but yeah, this is pretty much just the same code that rustc gave us. Interestingly enough.
So the very last one I want to show goes in a very different direction, one that I'm not going to dig into deeply; I might quickly show something, but not really look into it. It's just a single line of Rust, actually, that was interesting to me. This is sort of what actually sparked this talk, because I was sitting at work and wondering about a register.
It's on a chip I was trying to program for, and it has three fields. And yes, those are not really telling names, but let's go with it, because I think that's what the datasheet actually calls them. One of them is bits six and five, another is bits four and three, and the last one is bit two.
And what I wanted to do is use the bitfield crate, because for setting specific values in such registers that's sort of useful, instead of doing all the bit manipulation yourself. But I wanted to use it both for the initialization of the chip and for things that change a lot at runtime, for example volume changes, where someone might turn a rotary encoder quickly and the values change quickly. There I didn't want it to actually call functions and always build up the constants using bit expressions at runtime, because fundamentally, if I wanted to do that by hand,
I could tell you that that constant is just 0x53 or something like that, right? And if it's going to do the bit manipulation at runtime, I'd probably rather not use that macro. So that was what interested me here: what does this compile down to? You initialize it to zero, followed by three sets; in the end, it's just a constant. Does it actually call three functions, or does it inline the functions but still do the bit manipulation on the value at runtime?
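For illustration, a hand-rolled version of the setters such a macro generates might look like this. The field names and values are made up; the talk uses the bitfield crate's generated methods instead:

```rust
// Three masked writes into one u8 register, mirroring the layout
// described above: one field in bits 6..=5, one in bits 4..=3,
// one in bit 2.
#[derive(Debug, Clone, Copy)]
struct Reg(u8);

impl Reg {
    fn set_a(&mut self, v: u8) {
        self.0 = (self.0 & !0b0110_0000) | ((v & 0b11) << 5);
    }
    fn set_b(&mut self, v: u8) {
        self.0 = (self.0 & !0b0001_1000) | ((v & 0b11) << 3);
    }
    fn set_c(&mut self, v: u8) {
        self.0 = (self.0 & !0b0000_0100) | ((v & 0b1) << 2);
    }
}

fn main() {
    let mut r = Reg(0);
    r.set_a(0b10);
    r.set_b(0b01);
    r.set_c(1);
    // The question in the talk: does the optimizer fold all of this
    // into a single constant store?
    println!("{:#04x}", r.0); // prints 0x4c
}
```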
Let me just dump the expanded code into an editor. So, interestingly enough, it created a struct over that specified integer type, and it created some functions, and these functions use other methods on the struct, which it apparently also created, called bit_range, to set values into it. That's a little further up; here's the Debug implementation that it also generated, quite nicely, as you would write it yourself, and here are the bit_range and set_bit_range functions.
So from that point on I'm still interested to see whether this compiles down to the constant or not. The next step, which I hadn't actually tried before, would probably be: let's skip the LLVM IR for now, because the MIR is the first thing that goes through optimization, so conceivably we could already see in the MIR, okay, this assigns a constant to a register, and if that's the case, cool, done.
We see that it creates the value, which is the variable it was called on, on the stack with a new allocation. Then it stores the number zero to that, and then it does three function calls and then calls send_spi on the result. So at this point, interestingly enough, this doesn't seem to be very well optimized, and since I know that there should actually be a constant, and I'm not sure which of the outputs I want, let me actually just delete the ones that are there.
So I'm not going to stare at this for too long, because that's boring, but okay, what we see in this output, after all, is that it does three function calls and then finally calls send_spi. So let's guesstimate that it actually doesn't optimize at this point, and go down one more layer, because, well, maybe we get lucky, right?
But fair enough, I think I did actually also do a compiler update in between, so that's an interesting regression, if it actually is a regression, or maybe I'm just too stupid to type --release or whatever. But yeah, that's sort of the way you would approach these things: there should be a constant here, there isn't, there are those function calls instead, so I know that maybe I shouldn't actually use this in my production code. Or maybe I should, because I did check the ARM assembly, which is what I'm actually running on, and that had the constant. Yeah.
Interestingly enough, that's also something that's different from yesterday, because I was very pleased yesterday that it actually only gave me a single file, and now, as you may have seen, it's outputting multiple .ll files. I am not entirely sure what they are; from what I can tell,
they don't really overlap: there are different functions and different symbols in each of them, and most of them don't actually have a main function. So it's certainly not an optimization step or anything. My assumption is that those might actually be artifacts from partial compilation and caching and things like that, or it might be things that get compiled separately and then later linked into the final binary in some way.
Do I remember? Yeah, I do remember that. I didn't actually look; this talk was, admittedly, put together very much with a hot needle, and I really only looked at the assembly for this one yesterday. But I would expect it to be optimized away in the LLVM IR: all the optimizations that do the inlining and constant folding and all that should happen on the LLVM IR, not on the MIR.
As far as I know, it's certainly planned to do those kinds of optimizations on the MIR as well, in particular because the hope is that that would get compilation times down: currently we're dumping a lot of fairly non-optimal IR out of the MIR into LLVM, and if we could first reduce things in the MIR world, which we can potentially do much faster, we could hand LLVM a lot better and a lot smaller IR. But that's still not really implemented, at least not a lot of it, and I think constant folding is something that certainly doesn't really happen there yet. It might in theory, with Miri and a lot of the interpretation things coming out of that.
The question, for the recording, was: up to what number does it actually apply this modulo optimization? I don't know. My suspicion is that it will do it as long as it can, meaning that since it's doing a multiplication, it's limited by the representable values in the 64-bit register, and I would assume it stops at the point where it knows that would overflow, based on the range of the values it's multiplying.
What helps, if you're not writing the compiler yourself: you can check, at least in the LLVM IR but also at higher levels, what invariants your compiler knows about, things like the "no unsigned wrap" flag and all that. And if you were assuming some things and weren't seeing optimizations based on them, for example that the compiler knows your multiplication cannot wrap, you might be able to feed that into the surface language by writing an assert or something like that.
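As a sketch of that last idea (the function and the bound are made up for illustration), an assert gives the optimizer a range fact it would otherwise not have:

```rust
// Telling the optimizer about an invariant via an assert. Without the
// assert, the compiler must assume n can be anything up to u32::MAX;
// with it, it can prove that n * 4 never wraps and optimize
// accordingly.
fn scaled(n: u32) -> u32 {
    assert!(n < 1000);
    n * 4
}

fn main() {
    println!("{}", scaled(250)); // prints 1000
}
```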