From YouTube: OMR Architecture Meeting 20200409
Description
Agenda:
* OMR IL tree interpreter (#5010) [ @dcones @skywolff ]
* Replace fomrobject_t* with fomrobjectptr_t (#5027) [ @rwy0717 ]
* Methodology for testing code generator binary encoding [ @aviansie-ben ]
* JitBuilder 2.0 repo update [ @mstoodle ]
C: Welcome, everyone, to this week's OMR Architecture Meeting. I hope everyone is doing well and that they're keeping safe this week. We have a number of topics to go through. First off is a project that some students at the University of Bern are working on for tree interpretation, and I will turn it over to Daniel and Alex.
D: I'll just restart. So the goal of the project was to build an abstract interpreter that interprets the IL tree during optimization, which could be used by an improved inliner optimization. The interpreter would return a value propagation constraint, which would help the inliner decide what to optimize. For our project we built the foundation, the basic infrastructure: an interpreter that returns a concrete value instead of the VP constraint, which can be called from JitBuilder to interpret an IL tree instead of compiling it. And the way we implemented it is...
B: The result of the interpreter is stored in the compilation object, the OMR Compilation class; we created a field, a getter, and a setter in that class. When the interpreter is finished, the result is stored in that object, the compiler is interrupted with a compilation-interrupted exception, and the result is then returned to the caller. I'll talk about the interpretation details a little bit before Daniel walks us through a basic example.
B: Each node's result is stored in a value map, which maps different nodes to their values. A newly encountered node is evaluated and stored in the map. If it is a commoned node, a duplicated node that was previously already evaluated, it is retrieved from the map to prevent evaluating the same node twice. A value is represented by a struct with a union for the data and a field for the datatype, covering the different basic types.
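The tagged value being described might look something like the following sketch; the names and the set of types here are illustrative, not the students' actual code:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of a tagged value: a union holds the data and an
// enum records which basic datatype the union currently contains.
enum class ValueKind { Int8, Int16, Int32, Int64, Float, Double };

struct Value {
    ValueKind kind;
    union {
        int8_t  i8;
        int16_t i16;
        int32_t i32;
        int64_t i64;
        float   f;
        double  d;
    } data;

    static Value fromInt32(int32_t v) {
        Value val;
        val.kind = ValueKind::Int32;
        val.data.i32 = v;
        return val;
    }
};

// Adding two Int32 values, as the interpreter might for an add node.
inline Value addInt32(const Value &a, const Value &b) {
    return Value::fromInt32(a.data.i32 + b.data.i32);
}
```

The union keeps storage compact while the kind tag records which member is live.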
D: So this is just a basic program that showcases some of the features we implemented: we're just storing the value 7 into a variable a, and 2 into variable b, and then adding these two together and returning them. This is an example of what the log looks like from running our example; that's how the treetops look. First we go to the first node, we initialize a constant seven, and we put that into the node value map.
D: Then we move on to storing constant seven into a, for node two, but we see that it's already in the node value map, so we retrieve it instead of processing that node again. This is an example of the commoning that we were talking about. Then we store that seven into the symbol table, so a will equal seven in the symbol table. Continuing on...
D: The second one will be similar, so two will be stored in node three, and then, similarly, two will be stored in b, with commoning from node 3. Then 'load a' will take the value from the symbol table and put 7 into node five, leaving that in the node value map; similarly, node six just takes the value of b from the symbol table. Then here, for the add operation, we have two child nodes, but we notice that both of them are already in the node value map.
D: We see that node seven is already in the node value map, and this is what's being returned, so we don't need to process it again; it just takes it from the node value map and returns that. That's the end of the program, and then we're done. Ideas for future work include extending the interpretation to more than one basic block of the IL tree; we don't think that should be too big of an issue.
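The walkthrough above (evaluate a node once, store its value, and serve later commoned references from the map) can be sketched roughly like this; `Node`, its opcodes, and the `Interp` class are stand-ins for illustration, not the real TR::Node or the students' interpreter:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Illustrative sketch of commoning with a node->value map: each node is
// evaluated once; a repeated (commoned) node is served from the map.
struct Node {
    enum Op { ConstInt, Add } op;
    int32_t constValue;        // used when op == ConstInt
    Node *left, *right;        // used when op == Add
};

struct Interp {
    std::unordered_map<const Node *, int32_t> nodeValueMap;
    int evaluations = 0;       // counts real evaluations, not map hits

    int32_t evaluate(const Node *n) {
        auto it = nodeValueMap.find(n);
        if (it != nodeValueMap.end())
            return it->second; // commoned node: reuse the stored value
        ++evaluations;
        int32_t result = (n->op == Node::ConstInt)
            ? n->constValue
            : evaluate(n->left) + evaluate(n->right);
        nodeValueMap[n] = result;
        return result;
    }
};
```

Evaluating the example's `7 + 2` touches each node exactly once; asking for the constant node again is answered from the map without re-evaluation.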
D: A bigger one would be returning a VP constraint as the result of the interpretation instead of the value; I think the abstract interpretation will be really useful for the goals of this project, as opposed to the concrete values we currently support. Another follow-up we have: we would like to have parameters for the methods that we analyze. Right now we have hard-coded 2 and 7 as the values of a and b, for example, in the last program.
D: So we created a tree interpreter optimization, which basically just has our tree interpreter run as an opt, and then, as part of the interpretation, it exits the compiler early and stores the result. As it goes through the compilation, it does the interpretation and then just exits early, before returning an entry point to the method.
F: So I think the reason for having an interpreter infrastructure would be, well, in my mind, one of the nice things that we could do with a concrete interpreter would be to allow languages that hook up to JitBuilder to interpret as well as to compile, because you may not want to invest in doing the full compilation. You may just want to have it and throw it away, or whatever, and it would also allow verification of the generators.
F: However, it guides inlining based on abstract interpretation, to try and figure out where optimizations might be exposed, and at present there is no [OMR equivalent], because that abstract interpreter works on Java bytecode; it's not really compatible with or runnable from OMR, it's an OpenJ9 thing. If we have this interpreter, and we extend it to be able to support abstract interpretation and produce VP constraints, then we would be able to integrate that inliner into OMR, which would provide a much better inlining optimization than what we currently have.
G: Yeah, that's exactly right. I just wanted to bring this up so that people are aware, especially if you're working on some kind of project that's actually using the GC; this might affect you because it's a user-facing change in the GC. We have a type called fomrobject_t, and it represents an object pointer. We use this all over the place for loading and storing pointers in objects and for indexing into the fields of objects.
G: But with the ongoing work to support runtime compressed references, it's no longer possible to statically map the size of an object reference to a C type. So what we're looking at doing is replacing, or redefining, this type. Originally it's a uint32 or a uint64, depending on the build, but we would redefine it to an incomplete struct, which would prevent users from indexing, doing pointer arithmetic, or dereferencing a fomrobject_t pointer.
G: [Language consumers will want to] keep their projects up to date with this change, so it depends what a language does. What a language will probably do, and this is what we do in the example, is define fomrobject_t to be a different type; in the example it's defined to the language's own slot type. So as long as you're using your own internal types and not using fomrobject_t directly, it's possible there are no code changes that you need to make, unless you're interested in supporting runtime compressed references in your project.
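A minimal sketch of the redefinition being described, assuming the names from the discussion (the exact OMR definitions may differ):

```cpp
#include <cassert>
#include <cstdint>

// Today fomrobject_t is an integer type sized to an object reference,
// roughly:
//
//   typedef uint32_t  fomrobject_t;  // compressed-references build
//   typedef uintptr_t fomrobject_t;  // full-pointer build
//
// Redefined as an incomplete struct, the size is unknown to the
// compiler, so indexing, pointer arithmetic, and dereferencing no
// longer compile:
struct fomrobject;
typedef fomrobject fomrobject_t;

// fomrobject_t *slot = ...;
// slot + 1;   // error: arithmetic on pointer to incomplete type
// *slot;      // error: cannot dereference pointer to incomplete type

// A language can still define its own slot type and use that
// internally (MyLanguageObjectRef is a hypothetical language-side
// type, standing in for whatever the client language defines):
typedef uint32_t MyLanguageObjectRef;

inline MyLanguageObjectRef readSlot(const MyLanguageObjectRef *slot) {
    return *slot;  // fine: the language's own type is complete
}
```

Pointers to the incomplete type can still be passed around opaquely; only code that tries to compute with the pointee's size breaks, which is exactly the code that can no longer be correct when reference width is decided at runtime.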
G: Sometimes we stack-allocate fomrobject_t objects, so those either have to be replaced with a runtime check, where you would say: okay, if I'm in compressed references mode, load a uint32 and store that on the stack. Alternatively, we are also talking about adding an omrobjecttoken_t. That would be a token that you could decode into a pointer, and for whichever mode you're in, compressed, full, or mixed, it would be wide enough to store a pointer that has not yet been decoded.
C: So next up we have Ben Thomas, who is going to describe some of the work that he did in the past couple of months to add some means for testing the encoding that's coming out of the compiler. Oftentimes it's not straightforward to validate all the different kinds of encodings of instructions, but there has been some work done recently in Power, and I'd just like him to talk us through the methodology that he's using, because it could be useful in other backends as well. Ben, do you want to take it away?

A: Can you hear me?
A: All right, excellent. So I don't have any slides here or anything, but first I just want to give a little bit of background on how this came up. Recently there's been a lot of work I've been putting in to refactor a lot of the binary encoding on Power, and even seasoned compiler developers sometimes take binary encoding for granted; we sort of just assume that it works if we have a list of instructions.
A: So what I decided to do while I was doing the refactors is try to unit test this particular part of the compiler, and because of how functional this particular part of the compiler is, in terms of it gets input and then it gives output and it doesn't have a ton of side effects, it's a really good candidate for unit testing. What has to be done in order to set that up is actually a lot more complicated than it probably should be.
A: Normally when we initiate the compiler, we use a function called compileMethodFromDetails, and that initializes all sorts of data structures in the compiler, and it also goes through the sequence of events that we usually want for a compilation: IL gen, optimization, tree evaluation, peepholes, register allocation, binary encoding. That's a lot of stuff running there, and to isolate just the binary encoder we need to feed it very particular instructions. Because of that, we can't use any of the existing infrastructure for starting and stopping a compilation.
A: So instead, what has to be done is that we initialize a lot of the compiler data structures manually. This is a bit unfortunate, but it does work for the time being, and because of that we have full control over the compiler when we have it in this state. So, rather than having to give it trees at IL gen time, we can feed instructions directly to the code generator, and once we've done that, we can just call the generateBinaryEncoding function on these instructions.
A: That gives us a very powerful means of unit testing just this one particular part of the compiler, and it gives us a lot of control over what's going on in the test. One big problem that you do run into when you try to do this is getting expected values. The naive way would be to manually try to figure out: okay, I have this instruction, I want to figure out what the encoding should be. That's not really time efficient.
A: [What we do instead is] take that, clean it up a little bit, and throw it into a parameterised Google Test, and by doing that we can have one test fixture per type of instruction that we want to test: for instance, one for instructions with a single target and a single source register, one for instructions with a single target register and an immediate. Then what we do is we just use a Google Test parameterised test, and we give it the opcode.
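The parameterized-test idea (one table per instruction shape, each row naming an opcode plus its expected encoding) can be illustrated with a plain table-driven analogue; the encoder and field layout below are a toy stand-in, not the real Power binary encoder or the actual Google Test fixtures:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One row per case: an instruction template with the register fields
// zeroed, the register numbers to fill in, and the expected encoding.
struct TrgSrcCase {
    uint32_t opcodeTemplate;
    uint8_t  target, source;
    uint32_t expected;
};

// Toy encoder: fill target into bits 21-25 and source into bits 16-20,
// the way a fixed-width 32-bit RISC encoding might lay out registers.
inline uint32_t encodeTrgSrc(const TrgSrcCase &c) {
    return c.opcodeTemplate
         | (uint32_t(c.target) << 21)
         | (uint32_t(c.source) << 16);
}

// Run every row and report how many encodings matched expectations; a
// real parameterized test would report each row as its own test case.
inline int countPassing(const std::vector<TrgSrcCase> &cases) {
    int passed = 0;
    for (const TrgSrcCase &c : cases)
        if (encodeTrgSrc(c) == c.expected)
            ++passed;
    return passed;
}
```

In Google Test this becomes one `TEST_P` fixture per instruction shape, with the rows supplied through the test's parameter generator, so each opcode/operand combination shows up as a separately reported test.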
A: This has actually caught a number of bugs in the existing Power binary encoder, where either our template instructions that we fill in fields of were wrong, or we just didn't fill in the fields correctly. The basic test fixture that initializes and shuts down the compiler without using our normal infrastructure is actually not specific to these tests at all; it can be used with any of the code generators, for any kind of code generator related test.
A: Generating the assembly instructions is a little bit harder. It's especially hard in the Power code generator, since we don't always agree with the ISA on how to specify things in an instruction. A lot of the time we have multiple opcodes for the same instruction, where the ISA would say we should just have one instruction with an immediate that's, like, zero or one. So what I did was write a simple Python script; it was like 20 lines of mostly just print statements.
A: Everything except one register field would be register 0, and the one I'm testing would be register 31, so that the field would be fully filled with ones rather than zeros. Unfortunately it may miss something, but I tailored it specifically to how Power encodes its instructions. For instance, if we have certain immediates where they're divided between multiple fields, I try to use values that would exercise all possible cases; it depends on the instruction.
C: How much of the infrastructure that you had to develop for this, if anything, can be shared across the different architectures?
A: Everything regarding initialization and shutdown of the compiler without going through the regular infrastructure, that can all be shared, 100%, no problems. Most of the rest of it is a little bit Power-specific. For instance, we assume that instructions are encoded as a series of words, so on other architectures you would need to implement that yourself, because many architectures don't do that; many architectures have a stream of bytes instead of a stream of words.
A: Memory references are a bit strange because they work differently between different architectures. Specifically, on Power, memory references are either a base register plus a displacement or a base register plus an index register, and that displacement can have some weird stuff going on. We had some interesting behavior before where we would try to deal with out-of-range displacements.
H: So since the last talk, what I did was a bunch of cleanup and some modification, and there were some Slack conversations that happened after the last architecture meeting. I got some good feedback from people, and for one [request] in particular I managed to provide a rudimentary implementation as part of the update that I've made. So I've now actually pushed a bunch of code to my own personal repo, in the jb2 branch of mstoodle/omr.
H: I guess I can talk a little bit about some of the things that I've added. First off, it's still limited to basically implementing the operations and code needed in order to do a matrix-multiply code sample. The reason why I'm not expanding it beyond matrix multiply right now is so that I can keep the set of operations low, and that means I can experiment with different ways of doing things and iterate reasonably quickly, just manually going through and modifying stuff, without having to go and fix, you know, another hundred operators or something like that.
H: Okay, so this is my terminal; hopefully you can all see it. Yep. So the code right now is underneath the main JitBuilder directory. There's a new directory called jbil, that's 'JitBuilder IL', which is a name that I created probably eight months ago and haven't revisited, so anyway, it's just an old name. All of the code is sitting inside that directory, and then...
H: This is kind of the base code implementation of JitBuilder, plus there's the matrix-multiply code sample right there, MatMult.cpp and MatMult.hpp, which is, largely speaking, the unmodified matrix-multiply code sample that we know and love from the original JitBuilder code samples. And then there's also a directory here, ComplexMatMult, underneath, which looks like a complete second copy of this, but a lot of these files are just symbolically linked back up into the previous directory.
H: Those operations have to be added into the Builder API, and so I did a very simple extension mechanism here, which is: most of the implementation of Builder is actually done in this file called BuilderBase. Very unimaginative name, I know; some people have problems with the 'Base', but anyway, that's what I started off with. So a lot of the implementation of Builder, from the JitBuilder perspective, is done inside this BuilderBase class, and then there is actually a Builder class that sits on top of it.
H: It's actually very, very small, right? It's designed to be extended by somebody, so it has designed integration points. If you have new public API, you can put it here; you have to reproduce it, you know, similarly to how our extensible class mechanism works with the classes that are in the TR namespace.
H: You need to redefine your constructors, just because of the level of C++ that we're currently stuck with, and then there are, you know, places to add various different kinds of facilities to the Builder class. And because the whole rest of the code base refers to Builder and not BuilderBase, anything that you add here just becomes part of the API that everyone's used to seeing. So it makes for a relatively easy way to extend things. And so, if you look at... I'll just [show it] quickly.
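The BuilderBase/Builder split being described might be sketched like this (illustrative names and members, not the actual JitBuilder 2 classes): almost all behaviour lives in the base class, while the derived class is the type the rest of the code base names, so an extender can add public API there without touching the base.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Everything substantial lives in BuilderBase.
class BuilderBase {
public:
    explicit BuilderBase(std::string name) : _name(std::move(name)) {}
    const std::string &name() const { return _name; }
    int operationCount() const { return _operationCount; }
protected:
    void recordOperation() { ++_operationCount; }
private:
    std::string _name;
    int _operationCount = 0;
};

// The class everyone refers to; constructors are redefined by hand
// because the supported C++ level lacks the newer conveniences.
class Builder : public BuilderBase {
public:
    explicit Builder(std::string name) : BuilderBase(std::move(name)) {}

    // New public API added at this level is visible everywhere Builder
    // is used, without modifying BuilderBase.
    void appendNoOp() { recordOperation(); }
};
```

Because every other file names `Builder`, anything added in the derived class immediately becomes part of the API everyone sees, which is the point of the pattern.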
H: I also added a way of getting a complex type dictionary, a TypeDictionary [extension], and then, if you're in a Builder object, you want to be able to work directly with the Complex type the same way that you can work naturally with Int8 and Int16, etc. In the original JitBuilder API there's a Complex type which basically just caches what the type is that corresponds to this complex thing that's been defined, and then you can reference it very easily. So that kind of extension is very easy.
H: It's a little bit more complicated for an Operation, but it's the same basic mechanism, and, as it turns out, the operation base is really only defining the Operation class itself. In JitBuilder 2 there are actually subclasses of Operation that are the ones you actually use, so you create a subclass for every kind of operation: for load there's a class called Load, there's a class called LoadAt, there's a class called Store, there's a class called Add, etc., and those are all subclasses of the Operation base.
H: Although my comment there is only one line, the idea is to create enough of a tag that, when you create a patch, it's very easy for any kind of merge process to automatically figure out where you've added your code to the base code that's there. And so, even though it's not in a separate file, I think it should be fairly easy to automate, even if there are changes happening in the underlying implementation.
H: So that's one thing that I wanted to talk about, because I did manage to make some progress on cleaning that up. It was a lot worse when I gave the first presentation on this, even though I didn't actually show what it looked like. So that's a little bit cleaner now, I would say. And then, what else have I done?
H: Let's see. So one of the things that came up in the Slack conversation was that it would be nice, Jan mentioned, to be able to attach a source code location to any of the code that you're working with, and although the OMR compiler tracks code locations very well using bytecode info, which has the call site and bytecode index associated with everything, there wasn't a very good connection to this other than in the original JitBuilder API.
H: There were BytecodeBuilders, and each BytecodeBuilder had a notion of a bytecode index that corresponded to it, and then any operations that you used in that BytecodeBuilder would get tagged with the bytecode index of the BytecodeBuilder. So I decided to generalize that a little bit and came up with the notion of a Location, unimaginatively named, which can be extended very easily; you just add information to it.
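A rough sketch of such a Location record, assuming the creation variants mentioned below (the actual JitBuilder 2 class will differ):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>

// Illustrative Location: a small record identifying a source position,
// extensible by adding fields. The three factory functions roughly
// mirror the variants described: a bare string, a line number, or a
// line number plus a specific bytecode index.
class Location {
public:
    static Location fromString(std::string s) {
        Location loc; loc._text = std::move(s); return loc;
    }
    static Location fromLine(int32_t line) {
        Location loc; loc._line = line; return loc;
    }
    static Location fromLineAndBytecode(int32_t line, int32_t bcIndex) {
        Location loc; loc._line = line; loc._bcIndex = bcIndex; return loc;
    }

    const std::string &text() const { return _text; }
    int32_t line() const { return _line; }
    int32_t bytecodeIndex() const { return _bcIndex; }

private:
    std::string _text;
    int32_t _line = -1;
    int32_t _bcIndex = -1;  // -1 when no bytecode index is associated
};
```

Operations created while a given Location is current would be tagged with it, generalizing the old per-BytecodeBuilder index.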
H: There are kind of three different ways of identifying a source code location; these are just the ways that come with JitBuilder 2 right now. In a FunctionBuilder you can create a Location that's based on just a string, which could be a line number, really, conveniently, and another way is a line number and a specific bytecode index. So you could imagine this would get used later on by a real bytecode [front end].
H: [You can then recover the] locations of the FunctionBuilder that's being compiled by looking up the location information, or by taking the bytecode index and getting the location information, and we could even, you know, incorporate that into some additional metadata that gets created for the code that you're building with JitBuilder. All right, so that's Location; that was a response to [Jan's] request. What else did I do?
H: The other thing that I did was to create this code generator thing, which is actually currently implemented as a Transformer, even though it's not really a transformer, it's more of a visitor. It doesn't actually transform the code at all, but it basically uses JitBuilder 1 to generate code. So, let me get down here... here we go: transformOperation is something that gets called on every operation as it visits all the operations in a [piece of] code.
H: You basically just get the Builder from the operation; you map that to a TR::IlBuilder, which is the structure that you use in JitBuilder 1 when you're working with builders. It sets the bytecode index and makes sure that the IL generator knows what the bytecode index is, so that's the location handling, and then, for every operation, it basically just maps...
H: You know, it takes the values that are coming from the operation and it maps those into TR::IlValue, TR::IlType, and TR::IlBuilder objects; that's what some of these [helpers] do. Like storeValue: it looks up the TR::IlValue that corresponds to this result. And [here] it creates a ConstInt8...
H: It calls ConstInt8, because this is a ConstInt8 operation, on the literal value of that ConstInt8, so it's a byte, and it calls that on omr_b, which is the TR::IlBuilder. So it's calling into the OMR compiler to create that TR::IlValue, and then storeValue basically just maps that TR::IlValue to the value that's here, off the operation. And then, later on, you can see... let's pick something that actually takes an operand here.
H: You'll see it map operand values; it's basically looking up in that standard map what the TR::IlValue is for the operand value that's stored in the operation, and it does that for both the left and right operands, and then it does the mapping for the result.
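The operand/result mapping being described might look roughly like this; the types here are stand-ins for illustration (plain ints in place of JitBuilder 2 values and TR::IlValue), not the actual visitor:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

using Jb2Value = int;      // stand-in for a JitBuilder 2 value id
using Jb1Value = int32_t;  // stand-in for a TR::IlValue

// Each JitBuilder 2 value is mapped to the JitBuilder 1 value created
// for it, so operands can be looked up and results recorded as each
// operation is visited.
struct CodeGenVisitor {
    std::unordered_map<Jb2Value, Jb1Value> valueMap;

    // storeValue: remember which JB1 value realizes a JB2 result.
    void storeValue(Jb2Value result, Jb1Value v) { valueMap[result] = v; }

    // mapValue: look up the JB1 value for a JB2 operand.
    Jb1Value mapValue(Jb2Value operand) const {
        auto it = valueMap.find(operand);
        assert(it != valueMap.end() && "operand not yet generated");
        return it->second;
    }

    // An Add: map both operands, combine, record the result. A real
    // visitor would call the JB1 Add service instead of + here.
    void visitAdd(Jb2Value left, Jb2Value right, Jb2Value result) {
        storeValue(result, mapValue(left) + mapValue(right));
    }
};
```

Visiting operations in order guarantees an operand's JB1 value exists in the map before any consumer asks for it.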
H: At the end it actually does invoke the MatMult compiled code and generate the result, and it actually works, which was a nice thing. I actually discovered a couple of bugs when I did this, so it was good that I went through that exercise. I also discovered some things that made it work more efficiently as I was going through the process of trying to actually generate code. So right now this is a very, I would call it brainless, way of generating code for JitBuilder 2, and I think...
H: We can actually do a much better job of this; I just wanted to get something working so that I could verify what it does. As you can see, it's tracking how much memory was allocated in the compilation of this FunctionBuilder, so all of the memory that got allocated as part of the JitBuilder 2 operations that were done on this FunctionBuilder is captured here.
H: So I've used a whopping 9K of data, because it's a very big method... pardon me. And this is kind of an example of some of the... oh, I can't remember if I showed this before in my talk last week, but I think I did, because I remember saying that square brackets are being used a lot rather than open parens, because I didn't want it to look like Lisp. But anyway, I'll just go through it again quickly. So, you know, this is a FunctionBuilder with a name.
H: It has a bunch of attributes that are described here. I use square brackets just as a way of delineating different pieces of data, so it's a little bit XML-y, but not really, and it's basically easy to walk around in this file: you can always find the matching bracket, and then it tries to do a little bit of indentation to make it easier to read. As part of this one builder, when you enter this FunctionBuilder, it will start executing this sequence of operations.
H: It shows, a bit redundantly, that this is the Builder object that owns this operation, and then it shows, in a very straightforward form (it's not using the tree form that I usually use when I'm writing JitBuilder code, excuse me, or that the OMR compiler uses), a very straightforward kind of statement style explaining what it is that it's doing, with values being defined on the left and usable anywhere on the right, and so on. And you can have all kinds of different services.
H: You can refer to Builder objects; there's a little bit of syntactic sugar there, too, to help you understand which parts are which, and then, you know, the various different builders that are owned by this method builder will be listed as part of this FunctionBuilder, and they show their code as well; they show whether or not they're bound, and which operation they're bound to. So this builder, b5...
H: If we go back and look at b5: it was an AppendBuilder object from here, so this operation did an AppendBuilder of b5, and that's why b5 is bound to that operation. It's a target of a branch, and it has a series of operations. And so you can go through the triply nested structure of matrix multiply and see all of the code that's in here, and you can read it fairly... well, you get used to reading it, I guess I'll say; maybe not everyone will find it equally useful or interesting.
H: If you want to replace that operation with some other code, you return a Builder object that has whatever it is you want to replace that operation with, which is a nice, generic kind of model. Then the Builder basically evaporates, and those operations just get jammed into the spot where the original operation was, but it provides a nice model for being able to describe at least most of what's being done inside a transformation. And there is actually a 'performed transformation' [record].
H: So in this case it's going to translate this ForLoop into a lower-level set of operations, so that the ForLoop op no longer exists in the IL, and what [happens] is it's just replaced with a builder, which, as I said, is about to evaporate as it replaces it, and these operations are going to be injected into the spot where the original op 6 was. So as you go through here, you can see it found another ForLoop here, so this is [another] transformation.
H: It's a relatively nice model. Unfortunately, this printing capability only really prints the actual b12 operations. As part of doing this, and in fact it actually happens in this transformation that's going on right now, if you create other builders, like b13, that have other builders inside them that you've created as part of doing the transformation, it doesn't actually show those, because it doesn't know how far down it needs to traverse.
H: So it really only shows the top-level builder that you've created, but it's still a fairly good mechanism, and then the fact that they're numbered allows you to build some diagnostic facilities around, you know, disabling certain transformations to see if they're responsible for problems, and so on.
H: So it's kind of building in some of the similar diagnostic abilities that are there in the OMR compiler, bringing them into JitBuilder as kind of first-class citizens. And, just to quickly make sure I'm kept honest, this is the complex MatMult, so Complex... I think I talked about this.
H: You can run this, and you can see it used more memory in this case, because it had to create a lot more structures in order to do the transformation, and more Builder objects got created, and so on. But the output's here; it printed out some nice [results]. I didn't actually manage to put any imaginary values in here, so it's not a complete proof that it worked, but it did actually generate results, and you can see, if you go up to the transformer here, that it ran the reduction, which is here somewhere.
H: Yes, but there's no way to express it very well, I guess, in the OMR compiler IL, without going through and doing all of the double translation yourself, right? If you look at what ComplexMatMult.cpp looks like, the code for the guts of this thing is all the same; I didn't have to change anything to use doubles. The only thing I had to change was this one, the initialization of the sum variable, to start with a complex type. Otherwise it's the same.
H: They could be any kinds of things: as long as you know how to translate the representations of those things down into a lower-level representation, you can write JitBuilder code that treats them just like a value, no matter what their underlying representation is, and then, as long as you can translate the operations into simpler operations, you can kind of do that translation once, and then you can write a bunch of code that can work with those things as values.
H: It seems like the most obvious, easy thing to try, anyway. So I guess that's kind of where I am now. Oh, I guess I'm in the middle of creating a binary representation for JitBuilder IL, kind of like the reader/writer code that we have in JitBuilder IL that isn't quite active right now, but it's the same basic idea, and starting to create this is leading me down a slightly different notion of how to organize the various subclasses of Operation and where things should reside.
H: So, as an example, the pretty printer that's here, this guy, which is the thing that produced [that output]: it prints out all of that wonderful text with all the square brackets. It's one piece of sort of encapsulated code that knows how to pretty print the JitBuilder IL, which means that it has... well, not all kinds of switch functions; it has one big switch in it to deal with [each kind of operation].
H: So let me just give an example so you can see what they look like. So here's what an Add operation is. There's actually a set of kind of helper classes that I built in the middle that help reduce the amount of boilerplate that you need to generate in order to define an operation, so any kind of binary operation that has two values can use this BinaryValueOperation, and then this is all the code that you have to write in order to [define it].
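The helper-class idea might be sketched like this (illustrative, not the actual JitBuilder 2 hierarchy): a BinaryValueOperation carries the two-operand boilerplate so that a concrete Add only supplies what is specific to it.

```cpp
#include <cassert>
#include <string>

// Base of all operations.
class Operation {
public:
    virtual ~Operation() = default;
    virtual std::string name() const = 0;
};

// Helper carrying the boilerplate shared by all two-value operations.
class BinaryValueOperation : public Operation {
public:
    BinaryValueOperation(int left, int right) : _left(left), _right(right) {}
    int left() const { return _left; }
    int right() const { return _right; }
private:
    int _left, _right;
};

// A concrete operation only adds its name and its specific behaviour.
class Add : public BinaryValueOperation {
public:
    using BinaryValueOperation::BinaryValueOperation;
    std::string name() const override { return "Add"; }
    int evaluate() const { return left() + right(); }
};
```

Sub, Mul, and the rest would follow the same shape, each a handful of lines on top of the shared helper.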
H: Some of the capabilities... there might be a better way to do that than collecting the sum total of everything in another class, but anyway, I'm still kind of experimenting with that and seeing what that looks like. In any case, I guess the main point that I wanted to finish with here (I've actually talked for a lot longer than I thought I was going to) is that, you know, the code I pushed to the repo is there; it's available on GitHub, and you can go and look at it. There's a link...
H: I've basically sprayed links all over the places where we've talked about JitBuilder before, the Slack channels and the JitBuilder working group proposal, and I'd love to get people's feedback. I know it only does matrix multiply right now, which is a fairly uninteresting thing, but anyway, I'd be happy if people would pick it up and play with it and give your thoughts and ideas on it. I guess I'll stop there.
H: I don't have a good story for that; that's an interesting idea, and I hadn't really imagined it. Right now it works in kind of a one-way [fashion]; that's sort of the way it's imagined to work. But if you have thoughts on how that could work, I'd be interested to talk more about it, because obviously it would be nice not to have to re-implement all of the infrastructure that we've got in the OMR compiler that can do things like use-def searching.
H: Yeah, it would be interesting to be able to reverse the collapses, I guess, or do the opposite of 'reduce'. We'd have to come up with another verb, the right verb for aggregating the information that you've learned at the lower level back up into the operations that did the reduction. That's interesting; let me think about that some more, yeah.
H: Yep, I was actually thinking about that, because we've had that discussion before. I guess, now that I think about it, we've had both discussions before, but anyway, I was actually thinking specifically about the Roslyn notion of those kinds of immutable [trees].
F: One other question, related to your source line information, or your Location notion: have you given any thought to character ranges? I mean, things like Clang and some of the LLVM infrastructure are famous for, you know, being able to highlight particular ranges of code in a line and all that kind of stuff. Do you think that would scale onto the bytecode mapping you have, or do you think there would need to be another iteration on that to support it?
H: Well, in principle you could define... you could just add character start and end indices into the Location structure that's there right now. The main question, I think, would be whether you'd run out of [indices]; it kind of depends on how big a FunctionBuilder you're going to create, because that's kind of the level at which you'd reuse bytecode indices right now. So whether you'd run out of bytecode indices... I think we're at 16 [bits].