From YouTube: OMR Architecture Meeting 20201203
Description
Agenda:
* RISC-V CI builds [ @janvrany ]
* Clean up centralized opcode enums (#5703) [ @fjeremic ]
A
Welcome, everyone, to this week's OMR architecture meeting. Today we have two topics to discuss. The first is from Jan Vrany, who's been working on RISC-V, and he'd like to talk about introducing some RISC-V CI builds into our build process for PRs. Jan, do you want to lead the discussion on that one?
B
Yeah, hello, everyone. I will try to make it short, because this is, you know, a simple thing, and I would just like to get a better understanding of where we are and what the next steps are. As you all know, we don't have CI for RISC-V, which I'm trying to look after, to keep an eye on the code and make sure that it works.
B
But I'm failing at that, and, you know, the builds are getting broken every now and again.
B
Usually there are two reasons. Either it's a consequence of some refactoring in other parts which also touches the RISC-V part, and there is some compilation slip, I mean a missing semicolon or something, so the code won't compile at all. These are usually easy to fix, but, you know, you need another PR and then another review, and it takes not only my time but also the time of the reviewers, which I guess is also scarce.
B
Other
kind
of
regressions
are
mostly
caused
by
adding
new
tests,
which
is
a
great
thing.
On
the
other
hand,
I
am
not
as
fast
keeping
up
implementing
new
code
and
now
when,
when,
for
example,
there
is
a
test
for
a
new
op
code
which
is
not
implemented
by
the
by
the
back
end.
Then
the
whole
test
fails
instead
of
saying
saying,
like
I
mean
it,
the
test
crashes,
because
there
is
the
they
are
unimplemented,
which
which
causes
a
trap
and
the
ho.
B
And
second,
if
I'm
working
on
something
else,
I
can
just
run
the
test
and
you
know
look
whether
it's
all
green
or
red,
because
I
know
that
this
is
red.
So
I
have
to
have
to
manually
check.
You
know
which
recharge
are
failing
or
not.
B
We
we
looked
for
you
know
what
are
the
options
to
get
some
hardware
donated
actually
boris
finger
of
looked
at
it,
so
he
might,
he
might
say
more
about
it,
but
there
is
a
little
chance
that
you
know
we
might
be
able
to
get
some
hardware
for
the
eclipse,
ci
infrastructure,
if
that
helps
in
in
any
way
so
yeah,
that's
that's!
This
short
summary
of
you
know
how
the
situation
looks
from
my
side
and
now
how
to
move
forward.
A
Okay, thanks, Jan, for bringing this up. So, first off, and anybody here can certainly jump in if they have an alternate opinion, I don't think there's going to be much objection to introducing a CI build for RISC-V. Certainly the problems that you highlighted are definitely issues that any platform faces, and if RISC-V is considered to be a first-class platform in OMR, which I certainly think it should be...
A
...then, you know, it needs to be tested as well, using whatever means we have. So I don't think you're going to get any objection to that, but obviously, if somebody has a differing opinion, please speak up. The challenges that we've had with testing on RISC-V are exactly what you highlighted there: it is a lack of hardware, unfortunately, and the question is where to get that hardware from. So, at the very minimum...
A
One thing that we could introduce probably immediately is a simple cross-compile build of the RISC-V backend, just to catch any build-time assertion failures (static asserts, things like that), and whether or not there are any linking issues or actual compile problems. I know you've actually experienced some of those in the past, and I'm probably one of the ones that has been guilty of causing some of those problems on RISC-V.
A
So
so,
at
the
very
least,
that
kind
of
thing
can
be
done
on
the
existing
hardware
that
we
have
probably
on
x86
to
flush
out
compile
problems.
I
don't
think
that
going
through
the
two
options
that
you
have,
I
don't
think
we
have.
We
do
have
very
limited
hardware
available
for
testing
and
I
don't
think
we
have
a
machine,
that's
as
as
capable
of
running
qemu
in
the
manner
that
you'd
like
it
to
as
part
of
the
build
farm.
A
So
if
you
do
require
a
fairly
powerful
machine
for
that,
I
I
don't
think
that
what
we
have
is
is
going
to
be
adequate.
So
so,
therefore,
that
option
probably
isn't
really
on
the
table
the,
but
the
real
hardware
approach
is,
is,
I
think-
and
I'm
I'm
glad
you
mentioned
about
your
your
company,
possibly
donating
some
some
hardware
to
the
cause
here,
and
I
think
that
that
would
be.
A
Would it be a couple of devices that would be hosted by your company, or would that be something that we would potentially put with the other open hardware that we have at the University of New Brunswick? How would that work? Maybe, Boris, you could give us a bit of an update on what you are proposing.
C
By the RISC-V people, by SiFive. Well, they already donated the hardware that Jan is using and that I am using for development purposes, and I reached out to their CTO. Well, it's not that I asked him; he proposed it: hey, by the way, you know, we want to give you more hardware.
C
If that helps. And the thing is that there are two dev boards, right? The one that was popular a couple of years ago, there's no way to obtain any more of those, because the chips themselves, they just had the one production run at the foundry and that's gone, and it's not being produced any more. And then there is...
C
...this new machine, which is just coming out, and they have several of them, and they're kind of still scarce, because there hasn't been an actual mass-production run of those; they are pre-release...
C
...beta previews, or something like that. And the message that I have from Yunsup is that, as soon as they enter production, they will just donate us whatever we need, you know, a machine, a couple of machines or whatever. Now, where to host them?
C
I
mean
we
can.
We
can
put
that.
I
don't
know
where.
Where
is
the
other
ci?
Where
are
the
other
ci
machines.
A
So
the
ones
that
aren't
on
the
so
we
have
some
that
are
that
are
from
eclipse,
I'm
not
exactly
sure
where
they,
where
those
are
physically
located.
But
the
other.
A
lot
of
the
hardware
that
ibm
has
donated
is
hosted
at
the
university
of
new
brunswick
in
canada
and
they
run
it
and
and
make
it
open
like
they're,
maintaining
that
farm
they're
providing
you
know
the
power
and
the
cooling
and
the
maintenance
of
that
hardware,
while
it's,
while
it's
there
and
and
making
it
accessible
to
to
the
public.
A
So that would seem to be, well, that is our only option at this point in terms of having some kind of an external farm to host this. But if, for example, SiFive were providing these, and they already had some kind of an external farm that they were providing for other projects to use, then perhaps that is something that we could piggyback on top of as well.
A
Right. Can you talk a little bit about what they're willing to provide? Is it just, like, one board and that's all, or is it going to be like five or six, something that we could actually use to build out enough bandwidth for running a decent number of tests on?
C
What are they going to provide? I don't know how many they're willing to provide. At least one for CI, and at least a couple for actual development, or however many developers there are, plus at least one for CI. But I can talk to Yunsup about that. Right now the problem is not about how many; right now the problem is when it enters actual production, because there is this dichotomy between the old model and the new model, and the old model is completely gone.
B
Yes, and I think some OpenJ9 people have it. David... sorry, I forgot the surname.
B
The
fedora
images
claimed
that
he
has
bunch
already
on
the
on
the
j9
slack
in
the
risk
five
channel.
D
I was going to say: not Hong Chang.
A
Yeah
there
was
somebody
else
that
chimed
in
recently
about
about
getting
something
set
up
on
their
on
their
boards.
I
don't.
A
I remember seeing the name, but it wasn't familiar to me, so I'm not sure who they are, to be honest with you; I'll go back and check. So my question for maybe Jan would be: how many boards do you think you would need in order to do a decent amount of testing? If we're going to ask for something from SiFive, what should our ask be, like five, ten?
B
Yeah, and that's another question, because if we are ambitious, and we are thinking about OpenJ9 as well, then is the same farm used for OpenJ9? Can it be used for OpenJ9 testing?
A
Well, so right now the projects are independent, but in terms of where IBM...
A
So I think you're right that, if we're going to get some commitment from SiFive here, it should apply to both projects. And if we had to have more in one than the other, I think that OpenJ9 should probably have more, because there's a much larger number of tests there that run longer than what OMR is capable of running right now.
B
Yeah,
so
you
know
I
would
I
would
you
know
the
thing
is
I
I
haven't.
I
haven't
seen
the
seen
the
new
one.
I
haven't
get
my
hand
on
it,
so
I
don't
know
how
much
how
much
powerful
it
is.
It
should
be
much
better.
B
I mean, the CPU should be much faster, and also, you know, you can hook in an SSD, which can make it significantly faster than running from the SD card or, you know, compiling over NFS or something like this. But I would say with five it looks like we can do something useful.
B
Okay, but obviously it depends how many they are really willing to donate as part of that. If I can also make a comment on QEMU: the jobs, the cross-compiling jobs and the native compiling jobs for OMR, they are already committed and used, and they are running on my board when the board is online (because sometimes I have to take it offline and actually work on it), but the code is there.
B
The problem is how to integrate it with the Jenkins pipeline, and I don't know how to do it, because you have to, you know, run it under QEMU, not directly, and this is all going through CTest, and this is an area with which I'm not really familiar. But with a bit of work...
B
I
believe
somebody
who
knows
the
c
makes
ma
better
than
me
can
probably
use
this.
This
sort
of
emulation
to
run
the
tests,
and
this
is
reasonably
fast.
I
mean
usably
fast.
E
Yeah, I have a question: when you said it takes two to three hours, is that the full compile and test?
B
Yes, this is totally practical. This is what I am using, and only when this passes do I run it on the board, you know, for a final check that it runs on the real hardware. But the first-level testing I always do on the OMR side under process-level, user-level, QEMU. And QEMU is, in general...
E
Just
going
to
throw
this
out
there,
what
we
can
do
is,
we
can
add
the
the
emulated
job
to
the
build
pipeline,
not
the
pr
testing.
So
at
the
very
least,
you'll
get
a
run
for
each
commit
that
gets
merged,
seeing
as
we
only
commit,
maybe
like
five
things
per
day.
That
would
be
five
runs,
even
if
they
take
an
hour
each
it
should
be
doable.
B
Yeah, great: we can do it. I just don't know how to properly hook it into the CMake, but I guess somebody would know how to do that.
B
No, no, I mean, I'm running it on my seven-year-old laptop, which wasn't that great a machine, and...
A
I'm wondering, from... maybe... sorry, just one sec.
A
Excuse
me
yeah,
I
I'm
wondering
from
if
we
can
use
as
your
pipeline
to
to
run
this
or
that
be
too
slow.
Maybe
that's
a
question
for
philip,
because
he's
got
a
bit
more
experience
with
that.
B
No. So, you know, the user-level QEMU can load the libraries from there, which, you know, is just a bunch of files, but I just have them on my CI in the image of the operating system.
A
You know, a sort of quick test in the pipeline, and then the ideal solution would be to get actual hardware to test on. So, Boris, is there any input from the project that you think would be helpful in getting SiFive to provide us with, let's say, 10 boards, something like that? I mean, is that a communication that should come from, let's say, an OMR project lead or an OpenJ9 project lead, as a request?
C
I
don't
think
so
I
mean
to
them.
It
looks
like
to
them,
jan,
and
I
are
the
group
who
is
porting
omr,
slash
j9,
to
risk
five,
so
they
will
just
you
know
they
they
never
that
that
has
never
been
any
kind
of
issue
for
them.
A
Okay,
that's
that's
good,
I
mean
I'm
glad
you
have
that
that
kind
of
of
a
relationship
with
them.
So
I
guess
the
the
thing
then
would
be.
To
I
mean
I
don't
want
to
sound
overly
greedy,
but
I
mean
we
should
try
to
get
as
many
as
many
of
these
devices
as
we
as
we
can
just
to
make
the,
because
we
certainly
know
from
other
architectures
that
the
limited
number,
the
limited
amount
of
hardware
certainly
impacts
the
amount
of
testing
that
you
can
do
and
the
quality
of
that
testing.
A
So, I mean, we need to sort of strike a balance between being overly greedy and getting a little bit more than what we need, and so, who knows, maybe starting with, you know, five boards each might be...
E
Yeah, I think if they see the value we're providing, they'll be more than happy to give us more hardware.
A
Okay. I will follow up with the University of New Brunswick as well, and Joe, to understand what our capacity and capability is there, and whether they need to make any adjustments on their part in order to accommodate a number of these devices. I imagine that they're fairly low-power, so it isn't like we're pulling in 10 large servers that they need to accommodate, so it may not be too bad from a power and cooling point of view.
E
Okay, yeah. Why don't I touch base with you at some point later today, or maybe tomorrow, and get a sense of how the cross-compile works, and I can drive this forward, looking into it with Devon and Adam, to see if we can hook this up to our automated farm, to at least run the build on merge. Yeah.
A
Yeah, I mean, it is unfortunate that, you know, we've had the RISC-V backend in the code for over a year now, and yet we've been unable to provide the kind of testing that it probably needs, even the basic testing. So hopefully this will correct some of that in the short term while we wait for a full hardware solution that we can get going in both farms. So, okay, cool, great. Thank you. All right, any more discussion anybody wants to have on this topic?
A
Okay, all right. So the next topic we have is something suggested by Filip: some work that he wants to do to extend some of the work that's happened recently with the centralized opcode enums. So I'll let you take it away, Filip.
E
Sure. Maybe I'll share my screen to bring it up.
E
Okay, cool. So this issue kind of gives a summary of the Slack thread, I guess, since Slack eventually deletes these things for us. So there was some work done a couple of weeks back to centralize the opcode enum, to make it more extensible and auto-generated.
E
So
we
don't
have
to
rely
on
various
tables
having
specific
order
and
various
names,
I
guess,
is
the
best
way
to
put
it
so.
I've
gone
ahead
and
extended
that
and
implemented
the
same
work
over
an
open
g9,
and
now
I'm
trying
to
do
a
bit
more
cleanup
to
get
rid
of
some
more
definitions
between
the
two
projects.
E
So where we're at right now is, we can reorder our opcodes in OMR and OpenJ9 without breaking either project. That's all fine. However, we're left with these really ugly-looking defines that we have in a bunch of places. So here's one in the OMR tree evaluator table for x86 64-bit.
E
We define this opcode macro definition and we include the opcodes enum. So this is an extensible enum: so, for example, OpenJ9 implements its own set of opcodes, whose definitions will get included first, and then they'll include the OMR ones. And all that this file contains is just a listing of opcode macro calls, which have all the metadata for that particular opcode.
E
There we go. Right, so this one is in OMR.
E
In this case, we just map it to the opcode name followed by "Evaluator", and then, once this gets run through the C preprocessor, it'll just expand into a table of entries that look like, for example, the min/max evaluator, and these map to the functions, which we then index, via the opcode, into the table, and that's how the evaluators work.
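The table-generation trick being described here is the classic X-macro pattern. Below is a minimal self-contained sketch of the idea; the names (`FOR_EACH_OPCODE`, `iaddEvaluator`, and so on) are illustrative stand-ins rather than OMR's actual identifiers, and real OMR evaluators take `TR::Node` and `TR::CodeGenerator` arguments rather than returning strings.

```cpp
#include <cassert>
#include <cstring>

// Stand-ins for real evaluator functions (simplified for the sketch).
static const char* iaddEvaluator() { return "iadd"; }
static const char* isubEvaluator() { return "isub"; }
static const char* imulEvaluator() { return "imul"; }

// The opcode list lives in exactly one place; in OMR it is an included enum
// file, here it is a macro for brevity. Consumers supply OPCODE_MACRO.
#define FOR_EACH_OPCODE(OPCODE_MACRO) \
    OPCODE_MACRO(iadd)                \
    OPCODE_MACRO(isub)                \
    OPCODE_MACRO(imul)

// Expansion 1: the enum, whose ordering can change freely because every
// consumer re-expands the same list.
enum Opcodes {
#define OPCODE_MACRO(name) name,
    FOR_EACH_OPCODE(OPCODE_MACRO)
#undef OPCODE_MACRO
    NumOpcodes
};

// Expansion 2: the evaluator dispatch table, built by pasting
// "<name>Evaluator" for each entry, in the same order as the enum.
typedef const char* (*EvaluatorFn)();
static const EvaluatorFn evaluatorTable[] = {
#define OPCODE_MACRO(name) name##Evaluator,
    FOR_EACH_OPCODE(OPCODE_MACRO)
#undef OPCODE_MACRO
};

// Dispatch indexes the table by opcode value.
const char* evaluate(Opcodes op) { return evaluatorTable[op](); }
```

Because both the enum and the table are generated from the same list, reordering an entry in the list can never put the enum and the table out of sync, which is exactly the property the centralization work was after.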
E
So
that's
all
that's
all
simple,
however:
we're
left
with
a
super
ugly,
looking
defined
sequence
and
the
reason
we
had
to
do
this
was
because
the
names
that
we
generate
here
need
to
have
a
very
canonical
naming
scheme.
So
it's
always
the
upcode
name,
the
underscore
then
the
upcode
name
and
then
evaluator
string
and
the
piece
that
together
and
that's
what
ends
up
generating
it
ends
up
getting
generated
for
that
particular
off
code.
E
Now
the
problem
comes
when
we
have
multiple
platforms,
so,
for
example,
I
gave
an
example
here
of
the
I
think
the
bounce
check
evaluator
or
some
other
one.
E
Okay,
so
on
x86,
the
lu
net
evaluator
maps
to
integer,
neg,
evaluator
actual
function,
which
is
implemented
in
c
plus,
however,
on
power
and
z
and
ar64
they're
different
names,
and
I'm
not
sure
even
I
didn't
put
risk
five
here,
I'm
sure
something
similar
to
power,
but
you
can
see
the
kind
of
difference
in
the
evaluator
naming
that
ends
up
happening.
So
it's
not
very
clear.
For
example,
if
you
want
to
look
at
what
is
the
implementation
of
evaluator
across
the
different
platforms?
Well,
what
do
you
do?
E
You have to kind of guess what it's actually called, or look at this define and try to figure it out. So my proposal here is to try and get rid of this ugliness and kind of standardize the naming scheme across all the platforms.
E
So what I was proposing is to have a function for every single evaluator and make the name standard across all the platforms, which means we might have to introduce kind of stub evaluators. For example, things like the bound check evaluator, which currently maps to the badILOp evaluator, might have to have a stub that looks like this, just because the name has to exist at compile time, to be resolved properly when we compile the C++ object.
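A stub of the kind being proposed might look roughly like this; the types and names below are hypothetical simplifications of OMR's actual signatures, and the return value is a sentinel so the sketch is testable (the real fallback would flag the unimplemented opcode).

```cpp
#include <cassert>

// Stand-ins for TR::Node and TR::CodeGenerator.
struct Node {};
struct CodeGenerator {};

// Shared fallback every backend already has for opcodes it cannot handle.
int badILOpEvaluator(Node*, CodeGenerator*)
{
    return -1; // real code would report/assert on the unimplemented opcode
}

// Under the unified naming scheme, every platform exports the SAME canonical
// evaluator name. Where an opcode is unimplemented, a thin stub forwards to
// the common fallback so the symbol still exists when the table is linked.
int BNDCHKEvaluator(Node* node, CodeGenerator* cg)
{
    return badILOpEvaluator(node, cg);
}
```

The cost discussed in the meeting is exactly these thin forwarding bodies: one extra function per unimplemented opcode, in exchange for every platform's dispatch table referring to one predictable name.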
E
So this is going to introduce a little bit of a footprint increase, which I will take a look at to see if it's something manageable, and it'll introduce quite a few of these kinds of stub functions. The other thing we lose is the ability to map multiple evaluators to one function: for example, the f2iu and the f2lu evaluators here map to the same source function, whereas in the new scheme we would have two actual functions, and then, inside of them, we would just call the same shared evaluator.
E
So this is kind of something that we lose in that respect. So I just want to get an opinion, or hear your thoughts, about how people feel about making this change, or whether there's something better that we can do.
D
Do you anticipate the footprint increasing a crazy amount?
E
I'm
not
sure
I
haven't
protected
the
change,
but
it
would
be
well,
let's
see
the
maximum
it's
going
to
increase
is
one
function
for
per
op
code,
so
about
500,
500
or
so
functions,
but
obviously
a
lot
of
these
have
functions
already
there.
So
it's
going
to
be
quite
less
and
more.
E
The OpenJ9 tree evaluator will actually do its own set of defines first, and then it'll include the OMR ones, or vice versa. So what ends up happening is, if I want to, for example, override the dmin evaluator in a downstream project, I can't actually do it, because I have to make my own underscore-dmin-evaluator define, but OMR already creates one for me. So they end up kind of overriding each other.
C
Yeah, to me, the other incentive to do a change like this is related to what we can and what we cannot do regarding formal proofs and the formal definition of what each IL opcode does. And, I mean, it was very difficult to even think about how to somehow extract, or do whatever, to make a sensible definition usable for a formal verification of these opcodes.
C
So, I mean, I don't really have an exact idea of what shape I would like for them, but certainly I want to think about some kind of rationalization of these.
E
I did actually find... I think I noted it down at the bottom here: there is an alternative that we could do. We can blow up that define sequence into an even uglier define...
E
I
guess
triple
it,
by
adding
some,
if,
if
not
defines
there
to
allow
at
least
downstream
projects
to
override
evaluators.
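The guard-based alternative would wrap each generated define in an `#ifndef`, so whichever project defines the name first wins. A small sketch of the mechanism, with invented evaluator names (the underscore-prefixed macros mirror the naming pattern discussed above, not OMR's actual headers):

```cpp
#include <cassert>

// A downstream project (think OpenJ9) installs its override FIRST...
#define _dminEvaluator downstreamDminEvaluator

// ...and the base project's generated defines are each guarded, so they only
// fill in names that nobody has claimed yet:
#ifndef _dminEvaluator
#define _dminEvaluator baseDminEvaluator
#endif
#ifndef _dmaxEvaluator
#define _dmaxEvaluator baseDmaxEvaluator
#endif

int downstreamDminEvaluator() { return 1; } // the override that took effect
int baseDminEvaluator()       { return 2; } // shadowed base default
int baseDmaxEvaluator()       { return 3; } // base default, no override exists

// Dispatch through the macros, as the generated tables would.
int dmin() { return _dminEvaluator(); }
int dmax() { return _dmaxEvaluator(); }
```

This keeps override-ability without renaming anything, at the cost of roughly tripling the already ugly define block, which is the trade-off being weighed against the canonical-name proposal.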
B
...platforms. Well, I would certainly welcome having the names unified, because when I'm looking at how other backends do it, sometimes I'm looking for something that is not there.
B
So that's fine. What I'm not really understanding, and that's probably my ignorance of the code base, is why you can't simply, you know, when you are extending the tree evaluator, just overwrite the stuff. The same goes for the stubs you were talking about: like, you know, that you would just dispatch to the unimplemented-IL evaluator when something is not implemented.
A
So I think... well, I'm not sure if I can... I think I understand your question, and I think I understand the answer, but I'm not sure if I'm going to be able to explain it very clearly. So, typically, a lot of these evaluators at the OMR layer all live in the same file, like the implementations of these things. So, let's say, just pick the OMR tree evaluator: for any particular architecture, it's going to have dozens and dozens of evaluators.
A
If
there's
one
now,
if,
if
a
project
like
openj9,
wants
to
override
one
of
those
evaluators
and
provide
its
own,
what
if
they
had
the
same
name
and
they
lived
in
the
same
name,
space
you'd
have
to
somehow
prevent
the
omr
one
from
being
seen
and
the
open
j9
one
from
you
and
you
want
the
one
in
open
j9
to
be
seen.
But
the
problem
in
the
omr
side
is
that
that
one
evaluator,
that
you
don't
want
that
you
want
to
hide
lives
in
amongst
dozens
and
dozens
of
other
evaluators.
A
So it's not like they live in different files, where your build system can just sort of pick and choose which files to include; you want the granularity to be within that file itself: I want to pick that particular function to include. Otherwise you're going to get some kind of a linking error. So I'm not sure if that made sense, but I think that's...
E
Yeah, I guess the only way to do it today is we have to put the OpenJ9 defines first, because it's OMR that ends up including this opcode enum file, which actually has all the definitions of the opcodes.
E
It defines the opcode macro, and then the opcodes themselves are included, and because OpenJ9 is being compiled, it ends up including first the OpenJ9 opcodes and then the OMR ones. So there's no way for us today, at least, to override, for example, any of the OMR evaluators, because if I created my own, say, dmax evaluator in OpenJ9, OMR is just going to overwrite it with its own.
E
Yeah
yeah,
yes
right,
but
we
still
have
that
ugliness
between
the
different
platforms,
where
we
have
different
names
for
different
values
and
different
platforms,
and
everything
is
inconsistent.
A
Okay. Well, Filip, it sounds like you've got the green light to go ahead and prototype something, just to see what it would look like and what the impact would be of implementing it that way, and perhaps then we can decide on how that looks, and then whether that's something we want to go forward with.
A
Can you do it on just one codegen to begin with? Like, just from a... okay.
B
Well, maybe not really a question, but it looks to me, at least, that on the AArch64 backend this can be used also, you know, with register definitions and things like this. You know, the same approach: you have this table, and then you define some macro and include the file with the table to generate the code, which, for example, I use quite heavily in the RISC-V backend. So I have one huge table defining the registers.
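The same one-table-many-expansions approach applied to a register file, as described here for the RISC-V backend; again, the table and names below are a minimal invented sketch, not OMR's actual register definitions.

```cpp
#include <cassert>
#include <cstring>

// One table is the single source of truth for the register file:
// (name, encoding number). Adding a register means touching only this list.
#define FOR_EACH_GPR(REG) \
    REG(x0, 0)            \
    REG(x1, 1)            \
    REG(x2, 2)

// Expansion 1: an enum of register numbers.
enum Register {
#define REG(name, num) name = num,
    FOR_EACH_GPR(REG)
#undef REG
    NumGPRs
};

// Expansion 2: a parallel table of register names, e.g. for a debug printer,
// built with the stringizing operator so it can never drift from the enum.
static const char* const regNames[] = {
#define REG(name, num) #name,
    FOR_EACH_GPR(REG)
#undef REG
};

const char* nameOf(Register r) { return regNames[r]; }
```

The appeal for convergence is that each backend would keep only its table, while the expansions (enums, name tables, property tables) could be shared machinery across architectures.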
A
Yeah. The other goal, and I think it's maybe unspoken in some of these PRs of late, is that one of the goals that we have for this project is to converge the code generators as much as we can going forward.
A
There is a lot of overlap in the functionality between the various backends; most of them are fairly similar in terms of what they're trying to do. So, as much as possible, we'd like to come up with solutions that can be shared across the various backends, or that at least look identical.
A
You know, what you look at in AArch64 is the same thing that you see in RISC-V or Power, that kind of thing. So that is the direction that we want to go in, for sure, so exploring the use of a technique like this is interesting, for sure. I think we just have to make sure that it can be applied across all the architectures. So the convergence story is happening.
A
It's just happening very slowly, when we have time, and there's a fair bit of cleanup that has to go into that, and sometimes you have to move mountains in order to make things happen. But as we look into registers, or, you know, the machine class, things like that, there are certainly opportunities to leverage this kind of a solution there as well.
B
Yeah, that's great to hear, because this is something that has bothered me for some time: that, you know, there is, especially between RISC-V and AArch64, a lot of code duplication, which is basically caused because I used to use the AArch64 backend as a basis.
B
Really, there are not many differences in many, many cases, so this would certainly make things much, much better.
E
For the instruction-type metadata, I think we were moving towards the properties... okay, what are they called, the InstOpCode? Yeah, those two files; they have them on Power and Z.
A
Yeah, that work was redone a few years ago on x86, but I think it makes sense to continue to use that, to unify with the other two, and AArch64 and RISC-V can come along as well. I was going to say, again, that a lot... well, a not-insignificant amount of the AArch64 code generator borrowed from Power as well.
A
So I think you'll see some definite lineage between, you know, RISC-V and AArch64, which borrowed from Power, which also borrowed from, you know, 32-bit Arm. So there's lots of shared DNA between the different backends, and we just need to get that all straightened up, I think.
A
Okay, if not, I think we can adjourn for today. The proposed agenda was created for the meeting two weeks from now, so if there are any topics anybody wants to raise, please just edit that issue, and I guess we'll adjourn now and convene again in two weeks. Thanks, everyone, for attending.