From YouTube: OMR Architecture Meeting 20191205
Description
Agenda:
* Eclipse OMR 2020 Project Directions [ @0xdaryl & @mstoodle ]
* OWL Project [ @xiacijie ]
B: Okay, welcome everyone to this week's OMR architecture meeting. This week we have a fairly full agenda. For the first, I think, two-thirds of the time we'll be giving sort of an overview of some of the work that's ongoing right now in OMR, and work that is expected to land over the next several months, up to a year timeframe, things that align with the goals of the project; and then for the last thirty minutes we'll hear about the OWL project.

B: So before we start talking about where we're going over the next few months, I thought we could talk a little bit about some of the things that were accomplished over the last year, in terms of code that was contributed, or changes to the project itself, or other things like that. These are only highlights; there actually was a lot more that was done, and I apologize if I missed something significant that should have been called out here, but feel free to call me out on that if I missed something.

B: From the project perspective, perhaps the most significant thing that happened is that we actually did our first release, sort of a line-in-the-sand release, back on October 4, where we declared functional stability on eight of the components that we have and API stability on three others. There certainly is more work to be done there, and we'll talk about that in just a few minutes.

B: We also welcomed two new committers to the project, based on some great contributions over the past months, and this call itself actually broadened to more than just the compiler architecture: we now talk about any aspects of the OMR project architecture from a development point of view. In terms of things that landed in the code base, we did welcome a brand new back end into the fold: a language-agnostic AArch64 back end was committed earlier this year.

B: In totality, as of a few days ago, we had well over 800 PRs that were merged, and perhaps one of the most significant things there is that around forty were for beginners. Those are actually taken on by sort of the non-regulars in the project, people that are looking to get up to speed on various aspects of OMR. A number of those PRs as well were related to technical debt reduction, which is sort of one of the things that we need to continue to do over time.

B: Testing improved as well. As a result of the AArch64 back end being introduced, a number of the unit tests were actually rewritten in Tril, and they replaced the old compiler test ones; and the Tril parser itself was recently rewritten in C++ to eliminate our flex and bison dependence and to make it more portable into other projects, so, for example, into OpenJ9.

B: From an infrastructure point of view, we moved to a new project website, which we are still in the process of populating and getting more information onto, but nonetheless there is something new that we can build on. And we also did some work to move the Jenkins CI pipelines into the repos themselves, so that they are more visible and also so that more people can make changes as necessary.
B: One of the things that we did last year, as I mentioned, was our first 0.1.0 release. We just consider that to be a starting point. We want to start doing more regular releases, perhaps on a six-month cadence, but that's just arbitrary at the moment. We do need to go through the process of figuring out what the process is going to be, how to define content for those releases, and that sort of thing.

B: We need to think about how we want to introduce API stability to the other components as well. I'm not saying that in the next few months we're going to have that stability, but I think the mindset should be to start thinking about how we can increase it, because the stability of the API is something that may hinder the adoption of various components in other language environments. And an aspirational goal, as always, is to reduce the backlog of issues and pull requests.

B: I mean, as of today we have maybe 128 pull requests in the backlog, and I think there's well over six hundred issues that are open at this point. So we do want to spend some time going through and reducing that, and making sure that the contributions that people are making, or the requests that people are making, are getting some consideration, with some of those changes potentially landing in the codebase.
B: In terms of the components within OMR, the compiler component saw a fair bit of activity over the past year, and going forward there's a number of things that we're looking forward to happening in the coming months. The first is to deliver a RISC-V back end. There was a contribution several months ago that has been under review; I think it's getting very close to being contributed, which will be a great step forward for the project.

B: Merged, yes. If anybody's watching the general Slack channel, yesterday Young mentioned that they were able to get the Smalltalk VM, integrated with JitBuilder, running on a RISC-V back end. So that's a fairly significant step forward as well. So the next piece here is to get the actual back end landed into the code, and I think we're very, very close to doing that.
B: We've also spent some time over the past couple of years looking at the options framework within the JIT, the compiler. It is technical debt; if you like technical debt, well, that's where you should be spending all your time. So we've spent some time reworking a lot of that, and we have something that we've proposed, and it shows good promise working within OMR, it shows good promise being integrated into downstream projects, it's really configurable, and we want to get that landed relatively soon.

B: We have some work to do in terms of tooling, but I expect that to happen in the next several weeks or so. For all these items, by the way, I'm actually listing either the issue number or the pull request number next to them, in case you want more information about any of them; you can hopefully look that up.
B
The
other
thing
that
we're
looking
at
doing
for
the
compiler
is
to
provide
more
generic
method
metadata.
So
this
is
information,
that's
associated
with
each
method
that
gets
compiled.
It
can
be
used
for
things
like
describing.
You
know
what
methods
have
been
inlined
into
it.
It
can
be
used
for
things
for
garbage,
collector
Maps,
that
sort
of
thing-
and
there
has
been
some
interest
in
this
sort
of
thing
for
other
language
environments
that
want
to
that,
want
to
consume
Omar
and
the
need
to
actually
provide
this.
B: What I've been doing for, I guess, a year now, and it hasn't actually surfaced anywhere other than in sort of personal stuff that I have, is to come up with sort of a floating-point plan for the OMR compiler, and actually OMR in general. I do want to surface that; it'll talk about enhancements we want to do and how we can do cleanup of work that's already there.

B: Just as an example of that: there's a lot of code that we have to handle the x87 floating-point unit on x86 hardware, which has been long deprecated, but it's not as easy as one might think to just simply remove. So a floating-point strategy, I think, will go a long way to communicating what we want to do on the floating-point side.
B: Resolved methods have been an issue for a while now. If you've ever looked in the code, there are a couple of ways that you can look at the resolved method hierarchy: you can look at it from the top down, or you can look at it from the bottom up. Depending on the language environment that's consuming it, your perspective is different. We want to provide a unified way of talking about resolved methods, and we need to make progress on that.
B: A lot of these divergent properties became very apparent as new back ends started to land, like AArch64 and RISC-V. You can definitely see duplication in the code between the various back ends, and it's often very subtle things that are preventing more code sharing from occurring. The more sharing that we have, the easier it'll be to port this to new architectures.
B: So there's work underway right now to unify those. There's also some work happening on dynamic breadth-first scan ordering. This is essentially some work where, when you move an object in the heap, the compiler can give hints to the garbage collector that there are certain fields in that object that are particularly hot, particularly important, and that should stay with the object.
B
The
garbage
collector
will
try
to
move
those
fields
along
with
the
object
as
well
to
keep
that
locality,
and
this
requires
in
for
input
from
the
compiler
itself
and
I
think
that
there's
some
discussion
happening
right
now
on
the
best
way
of
communication.
So
it's
an
important
thing
for
some
performance
going
for
them.
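The locality idea described above can be sketched in a few lines. This is a hypothetical illustration, not OMR's actual GC interface; the function and field names are invented for the example.

```python
# Hypothetical sketch of dynamic breadth-first scan ordering: during a copy,
# the collector visits compiler-flagged "hot" fields first so their referents
# land next to the parent object in to-space.

def copy_order(obj_fields, hot_hints):
    """Return the order in which a copying GC would scan/copy the fields:
    compiler-hinted hot fields first (preserving locality), then the rest."""
    hot = [f for f in obj_fields if f in hot_hints]
    cold = [f for f in obj_fields if f not in hot_hints]
    return hot + cold

# A compiler profile might flag 'next' as the hot field of a list node:
order = copy_order(["payload", "next", "lock"], hot_hints={"next"})
print(order)  # hot field first → ['next', 'payload', 'lock']
```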
B: There's also some work happening right now in the port and thread components. There's work right now to provide an optional new locking scheme that will scale a lot better as the number of threads and the number of cores increase in a particular implementation. There was actually a talk given not that long ago that described the motivation behind this and the architecture behind it in the context of OpenJ9, and I posted a link to that video there. If you'd really like to watch it, it's actually very good.
B: We want to make that a little bit more generic, in the sense that we don't want it to be J9-specific; we want to move it up into OMR so that others can use it as well. So that work is ongoing. Similarly, there are a lot of processor detection features that exist in the OpenJ9 port library that really should be living in the OMR layer as well.

B: The advantage of having it there is that certainly the compiler component in OMR, and even OpenJ9 I think, comes along and does a lot of the same kind of work. So it'd be nice to clean a lot of that up and just have everybody use one single implementation in the port library for processor detection, and I think things will make a lot more sense then.
B: That's been contributed, and I'm hoping to merge it soon.

C: There's also been some other work going on. For the next little while my focus, anyway, around the JitBuilder API is actually looking at turning JitBuilder from an API into more of an intermediate representation that should be language-independent. The idea here is kind of a next-generation JitBuilder, which hopefully doesn't break existing clients but provides some exciting...
C: ...hopefully new features for people who might want to build compilers using JitBuilder. So one of the goals is to make it more extensible and allow clients to be able to add their own kinds of operations and their own kinds of types. I want it to be basically building a representation of the program that you're describing using the API, as opposed to what happens right now.
C
Digit
builder
API
is
more
of
a
pass
through
to
the
Omar
compiler
il,
so
you
make
a
call
into
just
builder
and
that
by
the
time
that
call
is
ended,
you've
generated
some
Omar
compiler
il
that's
sitting
somewhere
in
a
in
an
aisle
generator.
But
that
means
that
you
you're
basically
building
everything
on
the
fly
by
making
the
calls
into
more
of
a
representation.
It
gives
you
the
ability
to
walk
that
representation
introspect
it
transform
it
and
optimize
it.
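The pass-through versus representation distinction can be sketched as follows. The tiny Node class and the fold pass are illustrative stand-ins, not the real JitBuilder or OMR compiler APIs: the point is simply that once calls build nodes instead of emitting IL immediately, the tree can be walked and transformed.

```python
# Minimal sketch: builder calls construct a retained tree, which a client
# can then introspect and transform (here, a toy constant-folding pass).

class Node:
    def __init__(self, op, *kids, value=None):
        self.op, self.kids, self.value = op, list(kids), value

def const(v): return Node("const", value=v)
def add(a, b): return Node("add", a, b)

def fold(n):
    """One example transform enabled by keeping a representation:
    recursively fold additions whose operands are both constants."""
    n.kids = [fold(k) for k in n.kids]
    if n.op == "add" and all(k.op == "const" for k in n.kids):
        return const(n.kids[0].value + n.kids[1].value)
    return n

tree = add(const(2), add(const(3), const(4)))
folded = fold(tree)
print(folded.op, folded.value)  # const 9
```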
C: So it allows you to build sort of a DSL-like optimization framework on top of JitBuilder. I'm doing some early prototyping work on this right now, and I'm hoping to have something to show by early 2020, for some definition of "early". You know, it's kind of a side project for me, so it's not making huge progress in leaps and bounds, but it's a very interesting one and I think it will enable some very nice usability improvements for JitBuilder clients.
C: So that's the primary thing that I've been working on. Obviously there's also the lljb project; if you haven't heard of lljb, it's a project that one of our students created that basically translates LLVM IR into calls to JitBuilder. So it basically turns the OMR compiler into a back end for LLVM IR. The goals in this area are basically to round out the IR support: it currently supports quite a good cross-section of the IR, but not all of it.
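The lljb idea, dispatching each LLVM IR instruction to a corresponding builder call, can be sketched roughly like this. The MethodBuilder below is a stand-in with invented methods; it is not the real JitBuilder API or real LLVM bindings.

```python
# Hypothetical sketch of an LLVM-IR-to-JitBuilder translator: walk the
# instructions and dispatch each opcode to a builder-style call.

class MethodBuilder:
    def __init__(self): self.calls = []
    def Add(self, a, b):
        self.calls.append(("Add", a, b)); return f"({a}+{b})"
    def Return(self, v):
        self.calls.append(("Return", v))

def translate(ir_instructions, b):
    env = {}  # maps LLVM SSA names to builder values
    for inst in ir_instructions:            # e.g. ("add", dst, lhs, rhs)
        if inst[0] == "add":
            _, dst, lhs, rhs = inst
            env[dst] = b.Add(env.get(lhs, lhs), env.get(rhs, rhs))
        elif inst[0] == "ret":
            b.Return(env.get(inst[1], inst[1]))

b = MethodBuilder()
translate([("add", "%1", 2, 3), ("ret", "%1")], b)
print([c[0] for c in b.calls])  # ['Add', 'Return']
```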
C: So we're going to try to round that out some more and finalize its contribution into the OMR project, so it basically becomes another facility available to consumers who want to be able to use the OMR compiler as their code generator. There are still some platform enablement issues with JitBuilder; it doesn't run consistently well across all of our platforms, and there are some issues with method trampolines and other things that could be improved, so we're hoping to round that out.
C: Finally, there are a bunch of areas associated with the JitBuilder project where I don't know that there's any sort of active work going on, but they're areas that we're always interested in, and if there are people interested in working on these, I think we could find people to help with mentoring, coaching, etc. There was some interesting interpreter-builder work that people like Robert and Charlie have worked on.

C: That was in combination with the ability to also generate a JIT automatically, or an interpreter, from essentially a virtual machine specification. So that's kind of a direction that I'm hoping to move things forward on, but I don't think anyone's actively working on it right now.
C: JitBuilder currently lacks GC interfacing ability right now, and I think that's actually true of the OMR compiler, so that would also be an area that would be very useful for us to move forward on. Java language bindings are something that we've been interested in rounding out there. Actually, I shouldn't say there's no work going on in language bindings: there is a project going on that's looking at Python language bindings for JitBuilder.
B
Okay,
all
right
so
testing,
so
there's
a
fair
bit
of
activity
planned
on
the
on
the
tilma
testing
front.
I!
Don't
want
to
go
into
any
detail
at
all
on
these
actually
because
next
week,
at
the
next
make
architectures,
meaning
Shelley.
Lambert
will
be
here
too,
to
basically
go
through
all
of
these
in
much
more
detail.
A
lot
of
these
are
sort
of
her
thoughts
and
apply
some
of
the
experience
that
she's
that
she
has
from
the
open
j9
project
to
to
OMR.
B
But
but,
as
briefly,
you
know,
we
need
to
provide
more
documentation
for
the
tests
that
are
there
and
to
make
sure
that
the
coverage
we
haven't
looked
at
the
coverage
of
the
tests
in
quite
some
time
and
some
of
them
we
haven't
even
looked
at
at
all.
So
we
need
to
make
sure
that
the
tests
are
doing
what
we
expect
them
to
do
and
that
we
have
confidence
and
with
or
with
our
testing
we'd
like
to
get
some
sort
of
performance
testing
framework
in
place
for
for
alomar.
B
There
are
some
ideas
around
testing
omr
in
the
context
of
other
downstream
projects,
so
this
isn't
just
an
open,
j9
thing,
but
it
could
also
be
other
language
environments
that
are
consuming
Omar.
How
do
we
ensure
that
we
are
that
we
don't
break
anything
that
Omar
doesn't
break
anything
in
those
downstream
projects?
The
changes
that
are
made-
and
there
are
some
examples
and
other
projects
where
you
know
down,
screen
projects.
Testing
can
be
accomplished
within
within
the
the
upstream
project.
So
I'll
leave
some
ideas
around
that
that
she'll
be
talking
about
next
week.
B: There is some work that we wanted to do on the fvtest directory itself, just changing the shape of it. I think it kind of grew up over time in interesting ways, and different things have been added to it, but I think there's an opportunity to sort of straighten some of that out and make it a little bit more intuitive. And for Tril itself, the framework for testing the compiler, there are a number of improvements that we want to make.

B: Philip recently asked about doing some work on performance: making sure that we don't continually compile the same methods over and over and over again, and have more of an opportunity of sharing some of those methods to improve compile time, which will eventually improve your build time; that's the angle. And then there's continuing along the Tril roadmap that we established more than a year ago.

B: We talked about this at one of the architecture meetings: there's a large number of things that we want to do there, and we want to basically carry forward on some of those as we have people that are willing to do those sorts of things. So, more on testing next week.
B
Our
there's
a
team
at
IBM
here
that
is
into
that-
is
in
contact
for
the
number
of
these
number
of
these
research
groups
and
are
actually
helping
them,
along
with
with
some
about
research
and
I've
just
and
for
those
particular
interactions.
I've
just
listed
a
few
of
the
ones
here
that
are
that
are
likely
to
show
code.
That'll
arrives
in
the
code
base
in
the
next
few
months,
or
so.
The
first
are
a
couple
of
projects
from
the
University
of
Alberta.
B
The
first
is
a
benefit
driven
lining,
so
era
controller
talked
about
this
at
a
earlier
architecture.
Meeting
this
year,
slides
and
recording
are
there
if
you're
interested
in
that,
but
it's
a
it's
a
promising
new
way
of
doing
inlining
that
we'd
like
to
get
integrated
into
omr,
so
some
PRS
have
landed
already
expect
more
to
come
with
the
next.
B
For
the
next
few
weeks,
there's
also
a
group,
that's
looking
at
the
way
that
some
of
the
classes
have
been
architected
in
the
Omar
compiler
and
trying
to
decide
what
the
best
way
is
of
arc
of
providing.
The
extension
points
that
that
we've,
that
we
want
those
that
are
familiar
with
the
code
know
that
we
use
something
that
we've
been
calling
it
extensible
classes,
there's
a
lot
of
good
reasons
for
those,
but
there's
a
lot
of
perhaps
negative
aspects
of
using
accessible
classes.
B
So
this
is
actually
a
research
study
to
look
at
all
the
requirements
that
people
more
project
has
all
the
constraints
of
the
Omar
project
has
and
sort
of
coming
up
with
a
recommendation
on
so
this
kind
of
a
project.
What
is
the
best
thing
to
do
so?
There
was
some
earlier
results
that
were
presented
a
few
months
ago,
I've
linked
in
here,
but
this
is
certainly
an
ongoing
research
project
and
we're
looking
forward
to
getting
some
some
input
from
from
that
study
and
helping
to
influence
the
architecture
here.
B
The
first
is
from
a
generalized
äôt
infrastructure
in
Omar,
so
ahead
of
time,
compilation
they're
working
on
surfacing
a
lot
of
the
work
that
was
done
in
open
j9
for
ahead
of
time,
technology
and
general,
making
it
language
agnostic
and
then
surfacing
at
new
Omar.
So
there's
a
couple
of
designs
and
issues
that
are
that
I've
linked
to
here.
So
some
of
that
code
is
starting
to
appear
and
it's
under
review
right
now
same
thing,
with
a
shared
class
cache
infrastructure
in
Omar,
so
moving
that
up
from
from
open
j9,
so
that
work
is
underway.
B: From an infrastructure point of view, there are a few things that are going on. The first is to kind of round out some of our CI builds and the testing that we do before merging pull requests. So right now on AArch64 we are only doing cross-compiled builds; we don't actually have any testing that's running there. We want to do something about that, either getting native hardware or running some sort of an emulator to at least run some of the tests.

B: Not all of the builds that we have right now are using CMake; there are still some that are based on autotools, and we want to deprecate autotools in favor of CMake. So there are about three or four builds that still need some work done there; I think it's those builds and the linter jobs.
B: Those need to be changed, so there's work that needs to get done. And then we've been looking at doing source code formatting and providing some infrastructure as part of the check-in process for doing that. We had the discussion on that last week in the architecture meeting, and there's a good set of issues that Philip has created to sort of track the work that's going on there. So if you're interested in that, by all means...

B: ...please read those issues; there's a lot of good stuff there, and the discussion is very active. Another thing we could possibly do as well is to enable automatic copyright verification on commits. That's a fairly common problem, especially now as we're going to be rolling over the year: there are going to be a lot of missed copyright dates on some of the PRs that people are making. So enabling the automatic copyright verification is something we want to look at.
B
And
then
the
final
thing
that,
when
I
mentioned
here
is,
does
some
work
on
the
website
and
documentation.
So
we
do
have
a
new
website
as
I
as
I
mentioned
before
it
was
primarily
there
to
host
blogs
that
were
written,
it's
a
wordpress
site,
so
it
did
host
a
lot
of
blogs
on
various
technical
topics.
We
want
to
continue
that
pattern
and
get
more
blogs
written
there.
We
also
want
to
summarize
a
lot
of
the
academic
collaborations
that
we're
doing
on
that
site
talk
about
the
projects
themselves,
just
so
that
people
are
aware
of
that.
B
There
have
been
some.
There
has
been
some
interest
in
finding
out
what
projects
are
we're
doing
with
with
Olimar
and
what
what
academics
are
doing
with
it
and
I
think
that
it's
important
to
get
that
information
out
there
so
and
do
that
and
similarly
downstream
projects
that
are
consuming
OMR.
So
so
not
just
open
j9,
but
you
know,
we've
got
works
and
working.
We
have.
We
have
projects
with
webassembly
we've
got
projects
with
luo
we've
got.
You
know
the
like
the
risk
5
work
with
with
Saum
right.
B: Jack said Julian wants to join, so maybe we'll give another minute or two for him to join; he's showing offline, trying to hold on one spot. So we may just have to make a start, and Julian can join in at the end. Otherwise, I have discussed this with Julian, and I can certainly touch on all the points he wanted to raise.
E: Thank you. Okay, cool. So today I'm going to talk about the OMR and WALA link, OWL for short. Before I go on to explain what it is, let me give some background information on this project. My name is Jack; I am currently a computer science student at the University of Alberta. I had the chance to work on this OMR project, and it was done as part of a course project.
E: lljb and OpenJ9 both generate OMR intermediate language. In order to do all this work, I got mentorship and advice from Andrew and Daryl through the semester, so big thanks for their time. Okay. So what is WALA? Briefly speaking, WALA is an analysis tool for Java bytecode and other languages such as JavaScript and Python.

E: For now, WALA can take Java bytecode as input to perform the analysis. Also, WALA provides a Java bytecode class abstraction called Shrike BT. For example, the Shrike BT Constant(0) is equivalent to the iconst_0 opcode in Java bytecode; they have very similar ideas. Notice also that the Shrike BT store to slot 0 is equivalent to istore_0, and so on and so forth.
E: Here's a very simple example. Basically, those three lines of code are doing an add operation on two variables, a and b. The picture on the left is an OMR intermediate language tree, which you should be very familiar with, and the one on the right is the Shrike BT instructions; they are doing the same thing, adding two values.

E: The Shrike BT instruction constructor takes the sequence of translation objects and makes them into BT instructions, and then we hand those instructions to the WALA verifier to check whether or not the instructions pass the verification. Finally, we can let those instructions enter WALA to do the analysis.
E: OMR is written in C and C++, so we have to use the Java Native Interface to invoke the services provided by WALA. Such an approach will cause trouble if OWL is integrated into OpenJ9: think about creating a Java virtual machine inside a Java virtual machine. It simply does not work.
E
So
we
improve
the
flow
of
the
previous
steps
should
be
the
same
to
the
translation,
produce
translation
objects,
but
the
next
step
we
need
to
check
if
JVM
created
by
the
Java
native
interface
can
be
started
so
running
open
tonight
it
cannot
be
started.
So
we
passed
the
translation
object
to
us
Eliezer,
the
serialize
er
will
serialize
the
translation
object
to
a
file
on
disk
after
Alban
Jane
I,
finish
running.
We
can
take
this
serialization
photo
and
tray
to
it
into
the
TC
réaliser.
E: The images on the right show the exact same example as in the previous slide; basically, it performs the add operation. So we start from the first tree on the top, and its deepest child: it is the constant one in this case. Evaluating it maps it into the Shrike BT Constant(1). Then we return back to its parent, which is a store; it will map to the LocalStore 0, where 0 is the slot index of the local variable table. And then we go to the next tree, same thing, so on and so forth.
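The children-first walk described above can be sketched as a small post-order traversal. The opcode and instruction names mimic OMR IL and Shrike BT, but they are simplified for illustration and are not the real WALA classes.

```python
# Sketch of the OWL translation walk: OMR IL trees are visited children
# first, so an istore tree over an iconst emits Constant(1), LocalStore(0).

def emit(node, out):
    for child in node.get("kids", []):
        emit(child, out)                 # evaluate children first
    if node["op"] == "iconst":
        out.append(("Constant", node["value"]))
    elif node["op"] == "istore":
        out.append(("LocalStore", node["slot"]))
    elif node["op"] == "iadd":
        out.append(("BinaryOp", "ADD"))

tree = {"op": "istore", "slot": 0,
        "kids": [{"op": "iconst", "value": 1}]}
out = []
emit(tree, out)
print(out)  # [('Constant', 1), ('LocalStore', 0)]
```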
E: So, how the translation is done: the first step is to store the parameters into the local variable table. Those parameters can be obtained from the compilation object. Then we walk the trees to do the mapping, as I mentioned in the last slide. After we finish the mapping, we adjust the offsets and the targets of the branch instructions, such as goto and conditional branches, because in OMR a branch points to a basic block, but in Shrike BT the target points to an instruction offset.
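That branch-target adjustment can be sketched as a two-pass layout; the data layout below is invented for illustration.

```python
# Sketch of branch fix-up: OMR branches name a basic block, Shrike BT
# branch targets are instruction offsets, so after laying the blocks out
# we rewrite each symbolic goto into the offset where its block landed.

def fix_branches(blocks):
    # First pass: lay blocks out sequentially, recording each block's offset.
    offset_of, flat = {}, []
    for name, instrs in blocks:
        offset_of[name] = len(flat)
        flat.extend(instrs)
    # Second pass: rewrite symbolic goto targets into instruction offsets.
    return [("goto", offset_of[t]) if op == "goto" else (op, t)
            for op, t in flat]

prog = fix_branches([("entry", [("const", 1), ("goto", "exit")]),
                     ("exit",  [("return", None)])])
print(prog[1])  # ('goto', 2): 'exit' starts at instruction offset 2
```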
E: If an object is constructed without assigning it to any variable, the reference to this object will be discarded from the operand stack. So the translated instructions should be: push the new reference onto the operand stack, invoke the constructor, and then pop the reference from the operand stack. OMR does not have a pop instruction, because it is a tree-based IL, but we can decide whether the pop will be generated by the node's reference count. The reference count indicates whether this node will be referenced later or won't be referred to anymore.
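The reference-count-driven pop decision might look roughly like this; the opcode names and the refcount convention are simplified for illustration.

```python
# Sketch: OMR's tree IL has no pop opcode, so the translator consults the
# node's reference count. If a call's value is never referenced again
# (e.g. 'new Foo()' used as a statement), it appends an explicit Pop.

def translate_call(node):
    instrs = [("Invoke", node["method"])]
    if node["refcount"] == 0:   # value never referenced again → discard it
        instrs.append(("Pop",))
    return instrs

print(translate_call({"method": "Foo.<init>", "refcount": 0}))
# [('Invoke', 'Foo.<init>'), ('Pop',)]
```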
E: Next I am going to introduce implicit load and store. So why are they called implicit? Because they aren't mapped from the OMR IL; the translator creates them at the point where they are needed. Let's look at this example. In this case the method foo.get1, with a return value of type integer, is invoked, and this value is stored in a variable. When traversing the IL tree, at the time the icall is evaluated...

E: ...we don't know which nodes will use the value of the call later. If we only traverse the IL tree in a single pass, it's possible that the icall will be referred to immediately by the following node; it is also possible that it will be referred to very far away. All we know at the time when we evaluate the icall is...
E: ...whether the node will be referred to later, because its reference count is greater than one. So for such nodes we create an implicit store for it: store it as if it is a local variable, and load it back whenever we visit this node again. So the first time we evaluate the icall, we know this node will be referred to again, so we create an implicit store, writing it to the local variable table, and the next time we refer to the icall...

E: ...we know it is inside the local variable table, so we load the value from the local variable table. And this istore is actually an explicit store, because it is in the OMR IL; so we store the value into the local variable table at slot number 4. Although in this example the implicit load and store are redundant, this guarantees the overall correctness.
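The implicit store/load scheme just described can be sketched like this; slot numbering and instruction names are illustrative, not OWL's actual implementation.

```python
# Sketch: the first time a node with refcount > 1 is evaluated, the
# translator spills its value to a fresh local slot (implicit store);
# every later visit loads from that slot instead of re-evaluating.

def visit(node, seen, out):
    """seen maps already-evaluated node ids to the temp slot holding them."""
    nid = node["id"]
    if nid in seen:                       # commoned reference → implicit load
        out.append(("LocalLoad", seen[nid]))
        return
    out.append(("Invoke", node["method"]))
    if node["refcount"] > 1:              # value will be referenced again
        slot = 4 + len(seen)              # pick a fresh temp slot (illustrative)
        seen[nid] = slot
        out.append(("Dup",))              # keep a copy for the current use
        out.append(("LocalStore", slot))  # implicit store

call = {"id": 1, "method": "foo.get1", "refcount": 2}
out, seen = [], {}
visit(call, seen, out)   # first evaluation: invoke, dup, store
visit(call, seen, out)   # second reference: load from the temp slot
print(out)
```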
B: WALA as a static analysis tool has been quite popular for program analysis research from a number of different sources. There are sort of two very large frameworks that are used for static program analysis of object-oriented programs, primarily focused on Java. WALA was written initially as an alternative analysis framework for Java; it subsequently grew front ends for Python, for JavaScript, and for Swift, and it has been used as a vehicle for static program analysis research. So that was one of the motivations initially in looking at this hookup.
B: So, as you saw, there are research collaborations where people at the University of Alberta are looking at OMR. There is a group of people at the University of Alberta who have a background in static program analysis. The assistant professor there who worked on the inliner project also has a significant amount of static program analysis work, and he was interested in trying to be able to use OMR as part of a vehicle for some of his research.

B: It's a tool that would allow people who are interested in language research to build upon the OMR ecosystem to be able to further their research goals. Further, the results of these kinds of static analysis frameworks can also be used to help inform compilation: with the notion of AOT, and the notion of a shared class cache eventually being contributed to OMR, you could analyze a program ahead of time, statically, and use those results later.
B
It's
certainly
another
aspect
of
what
you
could
do
with
this
hookup,
and
this
is
this
was
a
first
attempt
at
trying
to
connect
a
static
analysis
framework
to
the
OM
ril.
It's
not
something
that
we've
tried
doing
before
and
especially
with
wala
being
a
java
hosted
analysis
framework
which
is
famous
both
of
them
running
the
GBM
there's.
B
Working
in
fact,
they've
got
some
analysis,
results
and
things
that
can
be
shown
so
I
think
that
it's
an
interesting
project
from
the
point
of
view
of
having
a
broader
set
of
tools
also
means
JIT
builder,
and
things
like
that.
If
you
choose
to
use
all
of
our
components,
if
you
choose
to
consume
the
Omar
compiler
infrastructure,
you
also
get
access
to
static
analysis
tools
with
additional
overhead.
B
The
tool
will
just
work.
I
would
point
out
that,
while
Jack
did
describe
the
strike,
beat
he
instructions
that,
as
being
as
being
a
Java
like
bytecode
instructions
that-
and
it
certainly
is
the
there-
are
other
front-end
in
Wolof
that
target
the
shrank
BT
instruction
set.
So
their
translator
for
the
dotnet
common
language,
runtime
pop
codes
go
to
shrink
BT.
There
are
others,
I
think
the
Python
goes
through
that
pathway
as
well
so
Julian.
B: ...just motivating why the OMR project is interested in this: I think one of the main motivations I had for asking that Jack be given the opportunity to present here is that there's a significant amount of work that's been done, and there's obviously more that needs to be done, but it's going to outlive the amount of time that he has to work on this particular project.

B: He's gotten it off the ground, and I would like to ask the community to consider where we might be able to provide a home for this in the longer term, so that people who are interested in working on it, as part of research or tools or things like that, would be able to benefit from what he's done. Yeah.
B
E
So on the last slide, I just mentioned the GitHub repos that it links to. The first one is called owl; it just contains some pre-generated serialized files of some test cases, so you can run analysis with those test cases. But if you want to generate some yourself, it's the next repo; it contains all the source code of OWL. So I think that's it. So I think the next thing is for Julian to show the demo of the work.
B
Sure,
okay!
Well,
thank
you
for
the
talk,
Jack
and
I!
Think
Julian.
If
you
did
want
to
just
showcase
what
you've
gotten
working
over
the
over
the
course
of
the
term,
that
would
be
helpful
just
for
five
minutes
for
people
to
see
what
what's
working
and
we
can
have
a
discussion
about
the
future
of
it.
After
that,.
B
So, yeah, when it comes to the mapping, the mapping is not one-to-one. There are some OMR opcodes that do not have an equivalent in WALA and the smaller Shrike BT instruction set. The check opcodes are certainly a primary example of that. The Shrike BT instructions are Java-like, in that array accesses are bounds-checked.
B
Dereferences
are
no,
so
the
check
off
codes
by
and
large
do
not
result
in
instructions
that
are
generated
in
the
strike.
Beatty
instruction
set
now
some
of
the
more
complex
op
codes
in
all
Marr,
such
as
an
array,
coffee
or
an
array
translate,
would
have
to
explode
into
a
number
of
byte
codes
or
Shrike
bTW.
A
B
From the perspective of program analysis, a write barrier is actually no different than a store indirect, so those ones actually map to the same translation in the Shrike BT instruction set, because WALA does not care about the particular details of what gets modified. It's concerned with overall program semantics, not the low-level execution semantics within the runtime.
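The mapping described above can be sketched as a simple lookup that drops the check opcodes and collapses write barriers into plain stores. All opcode and instruction names below are illustrative placeholders, not the real OMR IL or WALA Shrike APIs:

```java
import java.util.Optional;

// Rough sketch of the OMR-IL-to-Shrike-BT mapping described in the talk.
// Opcode and instruction names are invented for illustration only.
public class OpcodeMapSketch {
    static Optional<String> mapOpcode(String omrOpcode) {
        switch (omrOpcode) {
            case "iadd":
                return Optional.of("BinaryOp(ADD, int)");
            // A write barrier is semantically just an indirect store,
            // so it maps to the same translation as a plain store.
            case "awrtbari":
            case "astorei":
                return Optional.of("Put(ref)");
            // Check opcodes have no Shrike BT equivalent: in Java-like
            // bytecode, bounds and null checks are implicit in the
            // array and field access instructions themselves.
            case "BNDCHK":
            case "NULLCHK":
                return Optional.empty();
            default:
                return Optional.of("Unmapped(" + omrOpcode + ")");
        }
    }

    public static void main(String[] args) {
        System.out.println(mapOpcode("awrtbari").orElse("<implicit>")); // Put(ref)
        System.out.println(mapOpcode("BNDCHK").orElse("<implicit>"));   // <implicit>
    }
}
```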
G
Can you guys see the screen? So what Jack told you about was how the OMR compiler can extract IR and then generate these arrays of Shrike instructions, and it can either pass them directly to WALA to do analysis, kind of online, or you can serialize them, which, at least from the point of view of developing stuff, is easier. So Jack created a bunch of serialized pieces of IR, and we can just take a look at one of them.
G
This is WALA IR. So the Shrike IR that Jack generates is a more or less completely literal interpretation of Java bytecode, and what we do with that is turn it into an IR that's a little more amenable to analysis. So you can see, if you look at the instructions, there are these.
G
That would have had two adds which, in bytecode, would have local loads and stores and things like that, and what we've done here is we've turned it into a bunch of things with SSA value numbers, so they're those: 2, 3, 4, 5, 6 and so on. And what that's showing is that the Shrike BT IR that Jack builds is, you know, really indistinguishable from the kinds of IR, the bytecode, that we read from class files.
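As a toy illustration of the transformation being described here, the sketch below turns a literal stack-machine instruction stream into instructions over SSA value numbers. This is a deliberately simplified model, not WALA's actual SSA construction; the instruction mnemonics are invented:

```java
import java.util.*;

// Minimal sketch: convert stack-machine instructions (loads, stores,
// stack operations) into a list of instructions over SSA value numbers,
// in the spirit of the Shrike-to-SSA conversion discussed above.
public class SsaSketch {
    public static List<String> toSsa(List<String> bytecode) {
        Deque<Integer> stack = new ArrayDeque<>();   // operand stack -> value numbers
        Map<String, Integer> locals = new HashMap<>(); // local slot -> current value number
        List<String> out = new ArrayList<>();
        int next = 2; // start numbering above the (hypothetical) parameters
        for (String insn : bytecode) {
            String[] p = insn.split(" ");
            switch (p[0]) {
                case "load":  stack.push(locals.get(p[1])); break; // no instruction emitted
                case "store": locals.put(p[1], stack.pop()); break; // no instruction emitted
                case "const": {
                    int v = next++;
                    out.add("v" + v + " = const " + p[1]);
                    stack.push(v);
                    break;
                }
                case "add": {
                    int right = stack.pop(), left = stack.pop(), v = next++;
                    out.add("v" + v + " = add v" + left + ", v" + right);
                    stack.push(v);
                    break;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // x = 1; y = 2 + x  -- local loads/stores disappear into value numbers
        toSsa(List.of("const 1", "store x", "const 2", "load x", "add", "store y"))
            .forEach(System.out::println);
    }
}
```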
G
Or that we generate from source code. And so WALA can turn that into the SSA-based IR that we use for program analysis. And just as a simple example of what you could do with that, the next thing you see below here is a call graph. So we start with this fake root method, which is synthetic, which is just the root of the call graph.
G
Then it calls, it has a fake clinit node, a special node for all of the class initializers, and they call various object and class initializers, and then there are some native things in the Java library. I think what we're building here is a call graph in which this method that Jack created is treated as the main method. So you see a call from main to the operation right there: an invokestatic that calls the various initializers, then it calls the operation.
G
The operation just adds numbers and doesn't really call anybody, so the call graph doesn't have much in it. But the basic point there is to illustrate that the IR that Jack built is already real enough that it works, indistinguishable from other IR, in a WALA call graph, so it can be mixed and matched with other IR that the library reads from bytecode.
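The shape of the call graph being demoed can be modeled with a tiny adjacency structure: a synthetic fake root calls a class-initializer node and the main entry point, which calls the translated operation. The node names here are illustrative (modeled loosely on WALA's synthetic-root convention), not an actual WALA API:

```java
import java.util.*;

// Toy model of the call graph walked through in the demo: a synthetic
// root node calls the class-initializer node and the entry method, which
// calls the translated operation. Node names are illustrative only.
public class CallGraphSketch {
    final Map<String, List<String>> edges = new LinkedHashMap<>();

    void addCall(String caller, String callee) {
        edges.computeIfAbsent(caller, k -> new ArrayList<>()).add(callee);
    }

    public static void main(String[] args) {
        CallGraphSketch cg = new CallGraphSketch();
        cg.addCall("fakeRootMethod", "fakeClinit"); // all <clinit> methods hang off this node
        cg.addCall("fakeRootMethod", "main");       // the translated method treated as main
        cg.addCall("main", "operation");
        // "operation" just adds numbers, so it has no outgoing edges
        System.out.println(cg.edges);
    }
}
```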
G
A
B
D
B
Is there a pull request or something where we can look at the code? Well, I think the reason for having the talk was to discuss whether OMR would be interested in hosting the code, so that a pull request could be opened. Obviously, building the owl code, the code that actually does the inflation to Shrike BT instructions, has dependencies on a JVM build. I'm not an expert on the OMR build system, so I wasn't quite sure how that would fit in.
B
And how the community would feel about having code that had a dependency like that. As for why it shouldn't just go with the WALA project: building it there is even harder, I think, because the owl mapper is actually an optimization pass that basically just runs to walk the trees during a compilation to generate the Shrike BT instruction set. So, to my mind, the more logical place to put it would probably be something related to OMR.
B
A
G
C
Yeah, but so right now, OMR is a repository in an organization called Eclipse, under which all the Eclipse Foundation projects reside. So right now we don't have any other way; there are only kind of the two levels of things you can do: you can either be an org or a repo, and right now we're a repo. So it's possible to petition for the project to become our own organization, which has some other implications associated with it, but I believe we could look into it.
C
B
Okay, so there is a piece that is the optimization pass and the associated stuff. There is, then, the serializer: he has a serializer class that's able to write the binary file, which is a representation of the translation objects that he produces in the mapper. There is the deserializer in WALA. And there is a library of JNI utility functions that simplify the writing of the hookup to the Shrike API, which is a set of Java methods.
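The serializer/deserializer pair described here can be sketched as a simple binary round trip: the in-compiler side writes the translation objects out as a length-prefixed record stream, and the Java side reads them back before handing them to Shrike. The record layout and class names below are invented for illustration, not OWL's actual format:

```java
import java.io.*;
import java.util.*;

// Hedged sketch of the serializer/deserializer split described above:
// write translation records to a binary file, read them back later,
// possibly out of process. Format and names are hypothetical.
public class TranslationUnitIo {
    public static void serialize(List<String> instructions, File out) throws IOException {
        try (DataOutputStream d = new DataOutputStream(new FileOutputStream(out))) {
            d.writeInt(instructions.size());       // record count header
            for (String insn : instructions) {
                d.writeUTF(insn);                  // one length-prefixed record each
            }
        }
    }

    public static List<String> deserialize(File in) throws IOException {
        try (DataInputStream d = new DataInputStream(new FileInputStream(in))) {
            int n = d.readInt();
            List<String> insns = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                insns.add(d.readUTF());
            }
            return insns;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("owl", ".bin");
        serialize(List.of("Get(x)", "Get(y)", "BinaryOp(ADD)"), f);
        System.out.println(deserialize(f));
    }
}
```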
B
C
G
B
So he can, yeah, so he can create the translation objects, and whether you're going to serialize them or call directly into WALA with them is the question, right? Are you going to do the Shrike inflation in compilation, or are you going to just serialize what needs to be inflated and inflate it out of process later? That is the choice.
E
C
Although there are pieces that obviously sound like they could fit reasonably in OMR, like the optimization pass, for example. Yeah, that's probably something that you would want to live in OMR, so that everyone would have access to it. Yep. The other bit, maybe, is the serialization/deserialization bit.
C
Maybe another repo is the right place for that stuff, as a kind of design point: if you're going to build a static analysis pass that builds on this stuff, here's the stuff you need to do it with. Yeah, at that point, having that repo be in an OMR org makes a lot of sense. I think there's an association, but it's not a tight one.
B
Would it be properly tested? Yeah, I don't know that it necessarily warrants testing on every single PR; I think it needs to be tested regularly, yeah, as part of the family of things that we should test regularly. When we're doing architectural changes that might break downstream consumers, this certainly is another one that comes into that category.
B
Early. So generally it happens basically after the first sort of pass of simplification and dead trees removal, just to simplify the IL a little bit, but before anything else. In OpenJ9 it's being scheduled basically in the same place: after ILGen opts and, like, the first pass of simplification, but before the inlining.
B
C
G
G
C
I wouldn't say we're not interested; I would say it's probably not the direction that we would naturally fall in, just because we're naturally expecting to be used in a dynamic kind of scenario. But having said that, you know, there are static compilation scenarios and connected static/dynamic compilation scenarios where I think the ability to do deeper analyses makes a lot of sense.
G
D
H
B
C
B
D
B
Okay, all right. Thanks, thanks Jack and Julian. Any more discussion, or shall we close the call? No? Okay, thanks everyone for joining this week.