From YouTube: OMR Architecture Meeting 20201105
Description
Agenda:
* Clean up generation of load and store sequences for Power (#5630) [ @aviansie-ben ]
* Discuss future OMR dependence on Travis CI [ @0xdaryl ]
B
All right, so the way we currently generate the load sequences from our nodes on Power during tree evaluation is turning out to duplicate a lot of code all over the place.
B
So in that case, if our symbol reference was volatile, we actually need to emit a memory fence. The problem with this is that the code for generating these loads is duplicated a lot within the codegen, because it's very common for us to have codegen optimizations where a load which is only used in one location, as the child of another node, may actually end up being emitted with a different load opcode than the load itself may imply, such that we can perform both operations in one step.
B
So in the example I gave, that code would need to be in the byte-swap evaluator, and what this has led to is a lot of duplication of these relatively small code sequences. But it's actually very error-prone: I've found, while looking at the codegen, over a dozen locations where we fail to correctly emit the memory fence after a volatile load, because whoever was writing that code did not think to check whether or not the symbol reference we were performing the load of was volatile. Because of this, we've ended up with a lot of volatility bugs, and these can actually be really nasty to track down, because they will lead to unpredictable data races everywhere. And there are some other minor problems this has caused, for instance because we create a memory reference that is corresponding to a load node.
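The failure mode described here, where each evaluator hand-rolls the load sequence and sometimes forgets the volatile fence, can be sketched roughly as follows. This is a deliberately simplified model for illustration only; none of these names are actual OMR APIs, and the instruction strings stand in for real Power instruction objects.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical, simplified model of the duplication being described: every
// evaluator that loads through a symbol reference must remember to check
// volatility and emit a fence itself.
struct SymbolReference { bool isVolatile; };

std::vector<std::string> instructions;

void emit(const std::string &op) { instructions.push_back(op); }

// A "correct" evaluator: emit the load, then a fence if the symref is volatile.
void loadEvaluator(const SymbolReference &symRef) {
    emit("lwz");
    if (symRef.isVolatile)
        emit("lwsync");  // easy to forget at each duplicated call site
}

// A fused-load evaluator (e.g. byte swap) that duplicates the sequence but
// forgets the volatility check: the class of bug found in over a dozen places.
void byteSwapEvaluatorBuggy(const SymbolReference &symRef) {
    emit("lwbrx");       // byte-reversed load, but no fence is ever emitted
}
```

Because the fence logic is copied into every such evaluator rather than centralized, each new fused-load optimization is another chance to reintroduce the same data-race bug.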
B
So it entangles a lot of node-evaluation-related code in TR::MemoryReference, which, on its face, should be a relatively simple class. But because of concerns like this, it has become far more complex than it needs to be: it has been coupled with things such as unresolved references in OpenJ9, and it has all sorts of complex tree evaluation logic in it, and these sorts of things just add unnecessary weight to that class.
B
You
provide
it
with
the
node.
You
provide
it
with
the
op
code,
you'd
like
to
do
to
use
to
perform
the
load
or
store
and
the
source
or
target
register,
and
it
will
handle
emitting
the
instruction
corresponding
to
the
actual
load
or
store,
as
well
as
constructing
the
memory
reference
keeping
track
of
what
nodes
need
to
be
kept
live
and
then
when
they
need
to
be
when
they
stop
being
kept,
live
as
well
as
emitting
those
memory.
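The proposed helper might look roughly like the sketch below: the evaluator supplies the node (reduced here to its symbol reference), the opcode, and the target register, and the helper centralizes memory-reference construction and fence emission in one place. All names here are illustrative assumptions, not the actual interface in the pull request.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-ins for the real IL and codegen classes.
struct SymbolReference { bool isVolatile; };
struct Node { SymbolReference symRef; };

// Hypothetical centralized load/store helper: callers can no longer forget
// the volatile fence, because it is emitted here and nowhere else.
struct LoadStoreHandler {
    std::vector<std::string> emitted;

    void generateLoad(const Node &node, const std::string &opcode,
                      const std::string &targetReg) {
        // Construct the memory reference and emit the load in one place...
        emitted.push_back(opcode + " " + targetReg);
        // ...so the fence for volatile accesses is applied uniformly, even
        // for fused opcodes chosen by a codegen optimization.
        if (node.symRef.isVolatile)
            emitted.push_back("lwsync");
    }
};
```

Note that a fused opcode like a byte-reversed load goes through exactly the same path, so it picks up the fence automatically, which is the point of the redesign.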
B
So basically, this does remove some power from the evaluators: they can't just generate a memory reference and use it multiple times. It turns out that's actually not something you should be doing for a volatile at all, but in certain cases it may be useful, especially on CISC architectures like x86, though not as useful on RISC architectures like Power, where our loads and stores are typically done in separate instructions.
B
So basically, what I'm looking for is any comments on potential problems people might foresee if we go this route, or other things people want to bring up.
B
It does have a separate mode of operation where you can ask it to compute the effective address that a load or store would operate on, and then later ask it to perform the load or store separately on that address. While that is a bit more error-prone, that flexibility is required, particularly in the write barrier, but the vast majority of locations will be using the simplified interface.
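That second mode of operation can be sketched as a two-phase API: one call materializes the effective address into a register, and later calls perform the load or store against it. Again, this is a hypothetical sketch with made-up names and instruction strings, not the real interface.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical two-phase interface: compute the effective address once,
// then perform loads and stores on it separately (as a write barrier needs).
struct EffectiveAddress { std::string reg; };

struct LoadStoreHandler {
    std::vector<std::string> emitted;

    EffectiveAddress computeAddress() {
        // Materialize the address into a register up front.
        emitted.push_back("addi r4, rBase, offset");
        return {"r4"};
    }

    void loadFromAddress(const EffectiveAddress &ea, const std::string &opcode) {
        emitted.push_back(opcode + " r3, 0(" + ea.reg + ")");
    }

    void storeToAddress(const EffectiveAddress &ea, const std::string &opcode) {
        emitted.push_back(opcode + " r5, 0(" + ea.reg + ")");
    }
};
```

The trade-off the speaker mentions is visible here: the caller is now responsible for sequencing the two phases correctly, which is why most call sites would stay on the one-shot interface.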
D
Do we allow memory references to be reused on Power and x86? Because we certainly don't on Z.
D
Okay, because from what it seems, Z already does one thing where it artificially increments reference counts to make it transparent for the user. So if x86 does something a third way, then we'll have three different implementations across the codegens.
B
Yeah, as I understand it, x86 is currently doing the same thing as Power, where the memory reference itself is keeping track of the nodes that need to be decremented later.
A
Okay, to Philip's point: if there is a point of convergence here, maybe we should aim for that.
A
I think the comments, some of the discussion that was happening in the issue, seem to suggest that that's certainly a possibility between Z and Power, and if that's the case, then the same kind of thinking could be applied to x86.
B
So there may need to be different handling for x86, but I know for sure that all of our RISC architectures could definitely do the same thing as Power. I'm also not sure about Z.
A
On x86, yeah. The logic on AArch64 borrowed quite a bit from Power, so I think that even some of the issues that Ben has found here, with the barriers not being inserted properly, would apply, may apply, to AArch64 as well.
A
So I took that as a to-do, to look myself or bring it up with some of the developers that have been looking at that code, which was inspired heavily by the Power implementation, to see if the same issues appear there, because the places where barriers are required for the memory model on AArch64 seem to be similar to what Power requires, though not necessarily as strong.
C
So for the memory reference generation in what you're proposing: on x86 we have RIP-relative addressing, I guess is the way I would describe it. How does PC-relative addressing fit with this model? Does it fit naturally, or is there something weird about it?
B
So that does fit naturally into it. The load/store handler has some internal APIs that it would use to basically construct a memory reference based on a node. So as long as you can represent the RIP-relative memory reference as a TR::MemoryReference, you're all good, and you wouldn't need any special handling for that.
B
Yes, I've got a pull request, a work in progress, that implements the API and switches all known locations in OMR to use it where appropriate.
B
All right, yeah, this is definitely a change in how we're thinking about node-based memory operations.
A
Any more questions for Ben about this proposal?
B
It's definitely invalid to have two pointers to the same memory reference on two different instructions, so I know for sure we don't do that. There is still the copying that we do on non-node-based memory references, but in the case of non-node-based memory references that's generally fine, because you can assume there are no data races going on there. Usually with those we are loading data that we have internally emitted within the compiler, so we know more about it than with an arbitrary symbol reference.
A
Okay, so I guess, if there are any further comments, or people want to study this a bit more, there's that discussion in the issue and some sample code in the pull request. Is the reason that it's a work in progress that you wanted to get feedback on it, or is it ready for review?
B
It's mostly because I want to get feedback on the proposed interface early, especially while I'm starting to go through the write and read barrier stuff in OpenJ9, because that may take me a week or two more. But I'd like to get some feedback early.
E
Oh sure. As a result of this work, are you going to remove the old interface, like that node-based memory reference?
A
Okay, so the next topic is about the OMR project's usage of Travis CI. For those that may not have been following the goings-on at Travis CI lately: they recently changed their pricing model because of some issues that they were having with usage of their farm machines, perhaps predominantly by those that were in the free tier, of which OMR is one.
A
What they wanted to do was to try and restrict the amount of usage by free projects, and to try to drive the ones that really needed the throughput onto a paid plan.
A
We would get about 10,000 credits, which translates to about a thousand minutes of build time. In practical terms, based on our current usage, this is about 90 builds, or 90 check-ins where a CI build gets launched on Travis.
A
They do have some other credits that you can apply for, but it's not clear to me how generous those credits actually are, and whether or not they would be adequate for our project. From an OMR perspective, we only really have one build that happens at Travis right now.
A
The rest of our builds are happening at Azure, via the work that Philip did not long ago to add the Azure pipelines on macOS and Windows, and I think there's a Linux one there as well. And then we have the rest of our jobs happening at a build farm at the University of New Brunswick, where we use the Eclipse Jenkins to get jobs scheduled.
A
So between those two, I think there is probably enough capacity to handle moving the job off of Travis CI into one of those environments, and then going forward deprecating our dependency on Travis CI. Even if we decided not to do anything and just continue on with the free plan, we would still have to do some work, because the URL is changing as well: travis-ci.org is going away.
A
We're
gonna
have
to
switch
things
to
the
dot
com,
probably
not
a
big
deal,
but
it
is
something
that
we
would
have
to
do
so.
A
So
I
wanted
to
turn
it
over
and
see
what
what
people?
If
people
have
any
thoughts
about,
you
know,
travis
ci,
whether
that's
something
we
you
know,
even
though
we're
only
using
it
for
one
build
going
forward,
are
there
are
sorry,
even
though
we're
using
it
for
only
one
build
now.
Are
there
you
know,
do
you
see
an
expanded
role
there
in
the
future
or
or
what
or
do
you
think
we
should
back
away
from
it
and
move
things
on
to
the
onto
the
unb
farm.
A
It's
from
what
I
read:
it's
not
clear,
yeah.
A
It
when
you
use
up
your
credits,
what
it
says
is:
if
you
use
up
your
credits,
you're
welcome
to
discuss
with
them
paid
options,
so
it
doesn't.
A
But even then, we're going to blow through that pretty quickly, you know, within two weeks I would say. That would be ten thousand per project and not per org.
A
I guess one potential complication for moving things onto the UNB farm is some of the dependencies that we have for running the linter job. So at the moment it is based on...
F
On Jenkins automatically as well, without a comment. Well, I was going to suggest this: could you put it into a Docker container, so that you can sort of control your own dependencies, and then you just run it on any machine at Jenkins that has Docker on it?
A
Philip,
I
guess
you've
had
the
most
experience
with
azure.
Do
you
think
that's
the
kinds
of
dependencies
that
are
required
to
run
that
job
on
travis?
Is
it?
Are
they
easy
to
get
set
up
on.
A
Okay, is that a requirement of the Azure environment?
A
Sticking with Travis CI: is there any possible use that we have for it going forward? It sounds like the direction that we're thinking about is to move away from it, do the automatic builds on Azure, perhaps even on the Jenkins farm, initiated...
A
...to basically track this piece of work to move us off of Travis CI; the discussion and the other pieces of work for that can be tracked there.
A
Okay, when you got things set up on Azure, Philip, did you have to create an account there?
A
Okay, no, I just want to check that it includes the project leads as well. Okay, cool.
A
Okay, so we'll get that done. Are there any other discussion points that anybody wants to bring up before we adjourn?
A
Okay,
all
right,
that's
the
last
topic
for
today,
so
I
guess
we'll
we'll
end
it
now.
So
thanks
everyone
for
participating
and
talk
to
you
in
a
couple
of
weeks.