From YouTube: OMR Compiler Architecture Meeting 20181024
Description
Compiler Architecture Meeting agenda:
* Developer guidelines for choosing IL opcodes vs symbols (#3051) [ @Leonardo2718 ]
* Remove or keep TR::ipopcnt and TR::lpopcnt IL (#3049) [ @NigelYiboYu ]
* Rename “NonHelpers” (#3050) [ @0dvictor ]
Please add any comments/questions to the GitHub agenda issue: https://github.com/eclipse/omr/issues/3115
A: Leonardo graciously agreed to moderate the discussion on the first point, developer guidelines for deciding when to use IL opcodes versus symbols. It's a fairly important topic for us to come to some kind of an agreement on, so I'll turn it over to Leonardo.
B: Sure, thank you, Darryl. I'm hoping most people here are familiar with the topic already, so I won't try to summarize everything, because there's been a lot of discussion going on, a lot of really good discussion, I think. But the gist of the issue is: what are our guidelines for whether a particular feature or functionality in the JIT compiler is represented as either an IL opcode or a non-helper function, a non-helper being some sort of function-like thing representing functionality that a code generator can simply know, as an intrinsic, how to generate code for, or just do something reasonable that will emulate the correct behavior. So I will turn the discussion over to people to share their thoughts.
F: There really is no difference between the two, except the only one that I could think of: calls always have to be anchored, and other things appear as compares' children. What that means is that things that appear as children basically have the property of implicit code motion: when the optimizer does things to the code, these things can move around, whereas calls are always anchored; they don't move. I've tried to think of examples that contradict this and I can't really come up with anything. So to me, that's the main distinction between the two.
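The anchoring distinction F describes can be sketched in miniature. This is illustrative Python, not OMR's actual C++ IL; the node shapes and the `is_call` flag are invented for the example. A commoning pass may merge structurally identical side-effect-free nodes, but must leave calls alone, because a call's evaluation point is pinned under its treetop.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    op: str
    children: tuple = ()
    is_call: bool = False  # calls stay anchored; plain expressions may move

def common(nodes):
    """Collapse structurally identical side-effect-free nodes to a single
    instance; never common calls, since where they execute matters."""
    seen = {}
    out = []
    for n in nodes:
        key = (n.op, n.children)
        if not n.is_call and key in seen:
            out.append(seen[key])   # reuse the earlier evaluation
        else:
            seen[key] = n
            out.append(n)
    return out

x = Node("iload_x")
one = Node("iconst_1")
result = common([
    Node("iadd", (x, one)),               # arithmetic: commonable
    Node("iadd", (x, one)),               # duplicate collapses to the first
    Node("icall_f", (x,), is_call=True),  # calls never collapse
    Node("icall_f", (x,), is_call=True),
])
```

The two `iadd` nodes collapse to one shared node; the two identical calls remain distinct because each marks its own evaluation point.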
C: I very much agree that these two representations are fundamentally equivalent, in that anything we wanted to do with the one representation we could do with the other if we so chose. So, as you say, calls are anchored as a peculiarity of our particular IL, but the fundamental thing is, you sort of write down the information one way or you write it down another way, but you've written down the same thing.
C: Although I do think that there are other peculiarities of calls that occur in our compiler, and quite possibly in other compilers, given that calls are often opaque. If you don't know specifically what something is, and all you know is that it's a call, you're liable to get conservative around it. I think if we represent some of these cheap, arithmetic-like operations as calls, it exempts them from a lot of extremely generic optimizations.
C: Think about it: local common subexpression elimination almost certainly does not common them, and value numbering probably does not either.
B: Unless we have a recognizer to know that the call is side-effect-free.
D: I mean, in some cases you need to know the semantics of the new opcode; opts like value propagation and the simplifier certainly fall in that category. But there is a class of opts that don't even need to know what the operation does, as long as it returns an int, for example, and is expressed like any other arithmetic operation. It can be pulled out of a loop, for example, and put into a temp.
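D's point, that some optimizations need no semantics at all, only coarse properties, can be sketched like this. It is illustrative Python; the tuple encoding and the flag are made up. A loop-invariant-code-motion pass hoists any node flagged side-effect-free whose operands are loop-invariant, without ever interpreting the opcode: exactly the "pull it out of a loop and put it in a temp" case.

```python
def hoist_invariants(loop_body, loop_defs):
    """loop_body: list of (dest, op, operands, side_effect_free) tuples.
    Moves invariant side-effect-free computations to a preheader; the
    pass never inspects what `op` actually computes."""
    preheader, body, hoisted = [], [], set()
    for dest, op, operands, pure in loop_body:
        invariant = pure and all(o not in loop_defs or o in hoisted
                                 for o in operands)
        (preheader if invariant else body).append((dest, op, operands, pure))
        if invariant:
            hoisted.add(dest)
    return preheader, body

pre, body = hoist_invariants(
    [("t1", "mystery_op", ("x",), True),   # unknown op, but side-effect-free
     ("t2", "icall_g", ("x",), False),     # opaque call: stays in the loop
     ("i",  "iadd", ("i", "one"), True)],  # depends on the loop counter
    loop_defs={"t1", "t2", "i"},
)
```

The pass hoists `mystery_op` without knowing what it does, keeps the opaque call in place, and leaves the loop-varying add alone.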
D: It's very hard to evaluate a call in the middle of another tree. It may even be more of a codegen reason than an optimizer one, and there are people more well-versed in codegen evaluator capabilities than me here, but if you have a call appearing suddenly in the middle of some other non-trivial opcode that you're evaluating, I don't think the codegen can just start evaluating a call in the middle of an instanceof, or in the middle of a checkcast, or some other complex operation that requires internal control flow.
C: There are some opts that care about whether you have a symbol reference or not, because that is the notion of a potential side effect, in that there's a class of things that can happen to non-symref-holding nodes. Even without asking "are you a call", just asking the question "do you have a symref?" can lead to conservatism.
C: It means that something is not referentially transparent. So loads have a side effect, but it matters where they happen. Yeah, sorry, they don't have a side effect as such; what they do doesn't matter, but where they happen does. The only exception to this rule that I know of is loadaddr, which is effectively a constant.
C: You get to declare properties on that opcode to express to the optimizer how it should participate in a variety of common optimizations and dataflow patterns. Things like "do you have a symref or not", "is this like a use", "is this like a def", and these various other things that capture behavior, that a number of optimizations query without caring about the opcode. These are properties of the opcode that allow it to participate in these general optimizations in specific ways.
C: Now, intrinsics inherently are going to have a family of different kinds of behavior. Some of them will behave like adds. Some of them will behave like loads. Some of them will behave like volatile loads. Some of them will have cockamamie semantics that we can't even sit here and think about. So trying to say that we should have sort of a meta opcode...
C: ...that's going to allow us to encode those various things is basically saying we want to be able to write intrinsics that behave like opcodes. So I think the trade-off, if you look at the design choice of it, is in the participation of the opcode. If you're choosing an opcode, you're saying: this operation is sufficiently common that I care about it participating in optimizations in certain ways, that it is a generic concept that applies broadly in a number of places, and because of that I'm going to force many optimizations to explicitly handle it, and you're opting into all that work. If you're going to go the intrinsic route, what you're encoding is an operation that has very precise, or at least in my mind very precise, semantics.
C: You don't care to have to teach every optimization how to deal with it, because you aren't opting into having an opcode with all of the opcode properties and all the work that entails. But by making that cheaper choice, you are inherently giving up some performance for the simplicity of the implementation. You are opting out of saying "this is sufficiently generic that I need to do all that work", and so the optimizer has to have a point where it assumes a certain level of conservatism.
C: So the merits of the choice that somebody made in implementing an opcode versus a call: well, it comes back to this debate of how common is it? How much do we care about wringing the maximum amount of performance and utility out of it? So, for example, we recently had a debate about popcount, right, population count.
C: Now, that opcode was introduced, I believe, for a very specific corner case, and it's not generally participating in what people are doing, and arguably it probably is a corner case where we don't particularly care to have it fully participate in everything, because it's a kind of niche operation. Well then, that might be an example of something that wasn't the right thing to make an opcode. But if you're going to have, say, an atomic compare-and-swap, or an atomic exchange, which is kind of a fundamental language feature for a lot of different languages that handle concurrency...
C: Optimizations have to deal with a whole bunch of stuff that they don't recognize all the time, and they have generic ways of doing so for sort of regular, unobjectionable code. And for things that count as unobjectionable code, like popcount, it seems to me perfectly sensible to allow the generic stuff that doesn't care to apply to it.
C: Similarly, though, there was a concern about the support burden in the codegen: these things become opcodes and the codegen has to support them. And there was a countervailing concern that we shouldn't have these queries like "does it support square root" or whatever, and that holds regardless of which representation we choose.
C: I don't think you can say that they're necessarily orthogonal. Consider, for example... I don't mean that everything should be this kind of unobjectionable code. Yes, no, I understand: our compare-and-swap is side-effecting, for example, and absolutely should not be one of these regular, arithmetic-looking things, yeah.
D: So, Leonardo, it depends on the intrinsic in question. I think the likelihood of it being optimized, the likelihood of it paying off, those things should play into it. The popcount example maybe isn't so, and if the gains of special-casing it as a call and generating some specialized sequence for it don't make it worth it on its own, maybe you should make it an opcode. Maybe those gains can only be realized if it was allowed to be hoisted and commoned and so on, and then inherently it's not enough to just carry an optimized sequence for it.
C: I'm sympathetic to Phil's concern, too, about the combinatorial explosion of our opcodes, because we combine them with the particular types, and if you want to have 50 intrinsic-type sorts of operations...
C: There are other drawbacks to a call, right. If you are going to take the call and you're going to start trying to recognize that call, that is more computationally intensive than checking the enum value for a given node. Matching it is not a certain thing: it requires inspecting the symref on the call, the symbol on the call; you have to query properties about it and figure out if it really is the thing that you're looking for. And that ends up being computationally expensive if you're going to do it everywhere in the code. Even if we introduce better APIs to do it, it is still going to end up looking like a spaghetti factory if you end up having those sorts of checks wired through the optimizer, which is not really a situation I want to see.
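The cost asymmetry C describes, one enum compare versus chasing node to symref to symbol, can be sketched as follows. This is illustrative Python; the enum values, class shapes, and the recognized-method name are invented for the example.

```python
POPCNT, CALL = 41, 7   # hypothetical opcode enum values

class Symbol:
    def __init__(self, recognized_id=None):
        self.recognized_id = recognized_id

class SymRef:
    def __init__(self, symbol):
        self.symbol = symbol

class Node:
    def __init__(self, opcode, symref=None):
        self.opcode = opcode
        self.symref = symref

def is_popcount_opcode(node):
    return node.opcode == POPCNT          # one integer compare

def is_popcount_call(node):
    # every matching site repeats this chain of loads and checks
    return (node.opcode == CALL
            and node.symref is not None
            and node.symref.symbol.recognized_id == "Integer.bitCount")

opcode_form = Node(POPCNT)
call_form = Node(CALL, SymRef(Symbol("Integer.bitCount")))
```

Both predicates answer the same question, but the call form dereferences two objects and compares a string at every site that asks.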
C: A sort of arithmetic-like intrinsic opcode with variants: what that does is it could be allowed to look like normal stuff, in that it floats around, it's referentially transparent, you can common it, you can hoist it out of loops, those sorts of things, but it would keep the combinatorial explosion out of the opcode list.
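The "arithmetic-like intrinsic opcode with variants" could look something like the sketch below. It is illustrative Python; the sub-op names and property flags are invented. There is one opcode, a sub-operation field, and declared properties that generic passes consult instead of ever switching on the sub-op identity.

```python
from enum import Enum, auto

class SubOp(Enum):          # variants live under one `intrinsic` opcode
    POPCNT = auto()
    SQRT = auto()
    ATOMIC_SWAP = auto()

# per-variant properties; generic optimizations read only these flags
PROPS = {
    SubOp.POPCNT:      {"side_effect_free": True},
    SubOp.SQRT:        {"side_effect_free": True},
    SubOp.ATOMIC_SWAP: {"side_effect_free": False},
}

def commonable(sub_op):
    """A generic pass never asks *which* intrinsic this is, only whether
    its declared properties permit commoning and hoisting."""
    return PROPS[sub_op]["side_effect_free"]
```

Adding a new variant means adding one table row, not a new opcode that every evaluator, optimizer, and debug table must learn about.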
B: So we would define something somewhere, a list of known operations that could be represented by this kind of node, and...
D: A blessed set of operations, or something; it's its own opcode family, some other type of opcode which will have types associated with it. Let's call it "intrinsic call" or something; it will have typed versions of an intrinsic call, and such a call will not have all the baggage that a normal call has, yeah.
C: ...whether an instruction has to be generated, right, or not. What you want for popcount, for example, is that on architecture A there is a popcount instruction; you just generate the instruction and you're done, right, that's what it's there for. So in the good cases it's like one instruction or something; in the bad cases it is hopefully not very many instructions.
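For the "bad case" of popcount, a target without the instruction can still emit a short, loop-free sequence; the classic SWAR reduction is sketched below in Python for illustration. A codegen would emit the equivalent handful of shifts, masks, and a multiply.

```python
def popcount32(x):
    """Branch-free 32-bit population count: the kind of short fallback
    sequence a code generator might emit when no popcount instruction
    exists (about a dozen simple instructions, no loop)."""
    x &= 0xFFFFFFFF
    x = x - ((x >> 1) & 0x55555555)                  # 2-bit sums
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit sums
    x = (x + (x >> 4)) & 0x0F0F0F0F                  # 8-bit sums
    return (x * 0x01010101 >> 24) & 0xFF             # add the four bytes
```

This is the "hopefully not very many instructions" case: straight-line code, no internal control flow, so it slots into an evaluator without the anchoring problems a generated call would raise.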
C: So there's the second idea. The second idea is that there are some operations that we don't want to require codegen to support.
C: You know what generally happens in the worst case: you could generate a call, right. But if it's not anchored, then you have the danger that this is a child that's evaluated as part of the internal control-flow sequence of a checkcast, or an instanceof, or something like that, as soon as you hit the evaluator. So what the lowering would do is, if it had to generate a call, it would be like a regular kind of call getting generated, which would get anchored beforehand, right.
C: You'd lose, because of those very specific semantics of how it's being used right now, lots of performance that you wouldn't otherwise need to lose, and that's not the state that that opcode should end up in. So, regardless of the comparison to the library helpers, I think there are benefits to this idea, which are that codegen doesn't have to be required to implement the operation directly, although it can, and it would generally be better if it did.
G: We could do it so you can turn it on or off per codegen as well, whether it's opt-in or opt-out.
A: For methods that you recognized, and presumably you're recognizing a library method and reducing it to popcount: why couldn't you have just inlined the method on the platforms where we don't support popcount, not doing the reduction in the first place, of course? Then you've probably optimized more. Or are you recognizing the library call there?
C: The size of the code that you're going to generate can end up looking extremely obscene. Now, yeah, I wouldn't say the lowering should do things like split blocks and insert loops, but for some of these things, unless you're going to call a helper library, the implementation is necessarily going to require some kind of control flow. Well, you can do popcount without looping, but your performance suffers. So for the platforms that don't want to support it...
C: I guess it depends on whether that popcount, for example, got hoisted out of a loop and sort of magically now is allowed to be, whereas you can't hoist it out of a loop if the implementation of that method, when we inline it, would have been a loop, right, unless it sort of unrolls. It really depends on what happens today.
F: Right. I will say that I'm generally a big fan of lowerings, because we're actually doing the work either way; the only question is where we are doing it. In every code generator: popcount makes it all the way to the code generator, and if a target has no instruction to do it, everybody gets to write a popcount loop, or something better if it exists, in each code generator. Versus, if you lower once an IL representation exists, it's only done once and not by everybody.
C: If the code generator can do better, then great, there's opportunity. But we're exploring a design point around this already for the profiling infrastructure, right, JProfiling. The new profiling infrastructure has value profiling as one of its features. To do that, you have to observe a value, and that value gets stored into a very carefully crafted in-memory hash table using various rules. Now, currently, for profiled compilations, we generate one of those guys for every single value...
C: ...you want to profile, and we generate those late in the optimization, but ahead of code generation, so that the optimization passes that take place earlier can happen and you get maximum performance. But you get a compile-time penalty because of that: it has to participate like anything else. Now, there are cases where we want to do the value profiling but we don't want to pay that cost.
C: It's a long-looking sequence that needs to be anchored at a defined point in the evaluation, but other than that it's sort of a straight-line sequence. ("Which popcount can be done as, by the way.") Thank you, point taken, but regardless of whether it's popcount or sine or cos or hyperbolic tangent or hyperbolic arctangent or whatever, it doesn't really matter.
C: What we're talking about is whether we're going to have this facility, and whether it's generally useful enough and will provide enough utility for the complexity, the mental complexity, that it introduces for those having to understand and manipulate the IL, versus going the call route, where, yeah, there's a bunch of things that aren't going to happen. Do we actually care?
G: We're doing similar things in the simplifier. For example, in the simplifier we will translate and transform trees, like lowering a divide into a multiply, and then we generate a big IL sequence which can be optimized tidily and well. If we have such a facility to lower an opcode, we can move the similar things out of the simplifier design, so the entire...
C: I don't disagree. What I'm questioning is whether the engineering effort, the mental complexity, and the amount of smacking heads into walls trying to make sure that this all works and stays working is worth the benefit that is likely to be reaped for the set of operations that are likely to be representable by it. I agree that it is an interesting halfway design point, but I'm not a hundred percent clear on whether the benefits that you're going to reap are going to be sufficient to warrant the complexity.
G: Why do I want to minimize the number of opcodes? Take, for example, the popcount case, ipopcnt and lpopcnt: codegen, or the simplifier, or even VP, have to know the type for each one, each time they need to do something with it. So, for example, in the evaluators we have to differentiate whether it's ipopcnt or lpopcnt, and having the two entries in the IL for popcount doesn't make sense.
A: I guess my argument in the issue was based on my presumption that IL should be supported on every platform, that it is a universal form, and if you want an intrinsic which a platform doesn't support via a single instruction, or one that comes from a particular call reduction at the library level, then there should be a codegen query, whether we make it opt-in or opt-out. So popcount is a perfect example of something today that's typically implemented in a language library and that you would reduce to either an intrinsic or an IL opcode. On Z in particular, we don't have a popcount instruction yet, so that means for us it's much better to inline that library call and let the optimizer do its job on it, rather than transforming it into an IL opcode that we don't support and doing something weird.
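A's suggestion amounts to gating the call-to-opcode reduction on a codegen query. A minimal sketch follows, in illustrative Python; the query name, the mapping, and the capability sets are invented, and the Z entry reflects only what is said in this meeting.

```python
class CodeGen:
    """Each target advertises which reduced opcodes it can do well."""
    def __init__(self, supported):
        self.supported = set(supported)

    def supports(self, op):
        return op in self.supported

RECOGNIZED = {"Integer.bitCount": "ipopcnt"}  # library call -> opcode

def reduce_recognized_call(call_name, codegen):
    """Only reduce to the opcode form when the target says it can handle
    it; otherwise leave the call so the library body can be inlined."""
    op = RECOGNIZED.get(call_name)
    if op is not None and codegen.supports(op):
        return op
    return call_name

x86 = CodeGen({"ipopcnt"})   # has a popcount instruction
z = CodeGen(set())           # per the meeting: no popcount instruction yet
```

With this shape, the recognizer makes the reduction decision per target instead of unconditionally, which is exactly the "don't transform into an IL we don't support" behavior A wants.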
A: I'm of that opinion, yes, because it makes implementing the IL on another architecture, for example Arm, quite a lot simpler. We have a defined set of IL which you have to support: implement that IL and everything just works, rather than Arm, say, not supporting, I don't know, a 64-bit load, and me adding a 64-bit-load query, "do I support it".
C: Or no, but I think it makes sense that everybody should support lload, right. But regardless of the representation, if we have some explicit representation, we just have to know, for every operation, whether it's required to be implemented or not. So, I mean, I guess if you make those things not opcodes, then when something is an opcode it makes it obvious, but the set of operations that you are forced to implement is exactly the same.
F: You want to be able to generate IL without having to check if a particular code generator supports it; you just want to be able to emit it without any constraint whatsoever. You can always generate them and they will always work, yes, whereas intrinsics have to be something that has a query along those lines.
C: So one area where an intrinsic can be significantly simpler to implement is if a particular language implementation wants to have its own intrinsic of some kind. Adding an opcode requires a change in OMR which impacts every downstream consumer, because the opcode enum table changes, and there are a bunch of things in OMR that need default implementations, yada yada yada. If you have a call, the call is a call is a call; the particular semantics you attach to the symbol and symrefs that you attach to it, you can deal with in your language codegen. The symbol that you generate can be isolated into your derivation of the symref table and the symbols underneath it, and you can then get the customized behavior without requiring an OMR-level discussion on the utility of the particular feature you wish to implement and the imposition of that feature on the entire universe of OMR languages.
C: What I'm saying is that the current pattern that we have for non-helper intrinsics is a pattern that more easily allows language-specific extensions for particular corner-case opcode requirements, whereas if you had to add them to the opcode table and extend everything, you have a huge amount of stuff that suddenly has to care about it: the evaluators, the optimizer, the IL tables, the debug infrastructure, the blah blah blah. The list is a page long; anybody who's added an opcode has run into that sequence. Right, so.
B: And because it came up in the discussion, this is basically what the LLVM guidelines boil down to in their documentation: intrinsics are a much simpler extension mechanism for them than introducing a new instruction. So if a project has the need to describe to the compiler some behavior for which there's no existing intrinsic or instruction, it can start by defining its own intrinsics, which is fairly straightforward in LLVM, and then later on...
C: If we had a well-behaved extended-arithmetic or intrinsic opcode, supposing that the flags can be assigned in a sufficient way for all the sorts of uses under there that you might want, the sub-operations inside it could be made extensible so that a downstream project could add its own, yeah. I think there certainly is value in the notion of the intrinsic opcode.
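Making the sub-operations extensible downstream might look like the sketch below. It is illustrative Python; the table layout and all names are invented. The core table stays fixed while a consuming project registers its own entry without editing the shared opcode enum.

```python
# core table shipped by the base compiler project
CORE_INTRINSICS = {
    "popcnt": {"side_effect_free": True},
    "sqrt":   {"side_effect_free": True},
}

def with_intrinsic(table, name, props):
    """Return an extended copy: downstream projects add language-specific
    intrinsics here instead of growing the shared opcode table."""
    extended = dict(table)
    extended[name] = props
    return extended

# a hypothetical downstream language adds its own entry
lang_table = with_intrinsic(CORE_INTRINSICS, "myLang.hashCombine",
                            {"side_effect_free": True})
```

The core project never sees the downstream entry, which is the property C is after: no OMR-level negotiation for a corner-case feature.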
C: What I would like to add is that, for the ones where we can't automatically do nice things, because they're opaque, you're not going to get much benefit out of making them anything but a call, so you might as well just make it a call. It's fine, yeah. So I guess that idea is really restricted to those types of things.
C: Yeah, and I guess the follow-on to that I would say as well: if we're considering arithmetic-type extensions, things that come to my mind are things like square root, sine, cos, some of these things that we don't represent explicitly in the IL at the moment. A lot of those are floating-point operations of one kind or another, and the semantics around floating-point operations is already a thorny topic in our house.
C: So if we were to allow an intrinsic like a floating-point-returning typed intrinsic, that inherently feels like something that's going to be very difficult to give the right behavior in terms of precision and those other things, sort of in general. And so, is it that those things would have to be calls, because of needing to preserve the precision and evaluation order and rounding and whatever else? And if so, does that only leave us with, like, a subset of integer things that would be useful?
C: It's just one thing that I would toss out there for consideration. You can't do any reassociation or anything like that without having some knowledge of what the operation is. So it's not as though, if we say, oh yeah, now sine gets to be a node that floats around, somebody's going to come in and just change its arguments because it knows... whatever.
C: Yeah, I was throwing it out there just as one of those points for consideration. If the feeling is that, well, the things that are going to happen to the floating points will happen to the floating points, you opted into it, bonne chance, then I'm fine with that. It's just a question of the general utility of that mechanism. If we think enough stuff falls into the bucket, the answer may well be yeah.
B: Off the top of my head, I think there are some. From the perspective of OMR, in terms of language support, I don't know how much of a performance hit or benefit there would be for the common case in either allowing or disallowing these kinds of floating-point operations. I do know that, of course, different languages are going to have different requirements. I know Java has very strict requirements when it comes to rounding mode and floating-point-related operations. At the same time, a language like Lua has practically no restrictions, and things can kind of get rounded.
B: I might be able to think of examples, but I don't have one at the moment, yeah. Off the top of my head, though, I think those should be calls. Yeah, I mean, the way that Arm had implemented floating point very early on was soft float, and the way to make that work is by turning all of those operations into calls: to add A and B, I call fadd, and so on for everything. So it is able to get away with inserting calls for all the floating point.
C: I think, as Victor says, we should just sort of leave it in a special box. There are vectors, and there are vector registers, there's special vector hardware, and there are vector opcodes of themselves, yeah. So that, I think, is kind of outside the scope of this discussion. I think we should put that in a box we have in the corner for another day; it's a whole different problem.
D: Where we have cases like that, or where we didn't think there was a burning need to go the route of having an intrinsic opcode today, we could have been wrong in some cases. We should probably look at all the cases where we do intrinsics, intrinsic calls in whatever form, today; look at that set; and see if some of those would be candidates for this middle-of-the-road proposal. If we can find some, then perhaps go down the route of drafting up a proposal along those lines.
B: I'm for it. I will add to that, and I do agree that we should look into how this would be useful given our history. I will point out that, because historically this compiler technology was developed in the context of Java, there may have been less of a need then than there is now, because now part of what we have to consider is how other languages might want to extend opcodes.
C: Before going through all the machination and hand-wringing and head-slapping that's involved in actually doing the full design for this thing, perhaps someone from one of the teams that has more exposure to things that are closer to Java may be able to sketch a set of use cases for the intrinsic: either opcodes that we currently think should turn into intrinsics because they're never used, or "here's the set of things we currently made calls, and gosh..."
B: Since one of the features that LLVM supports is this ability to teach certain optimizations to handle intrinsics, what would that mean in our case? Would it be possible to also have ways in our ecosystem for languages to extend certain optimizations, to teach them how to handle particular intrinsics?
B: ...making it a first-class citizen, right. Well, I think that's in the LLVM guidelines. That is one of the things they specify: if someone has found something that they think warrants creating a new instruction, they start out by implementing it as an intrinsic. That way, the community has time to look at it, think about it, figure out how it's going to play with the other optimizations, and after some time it can get promoted to a full instruction. I mean, the...
C: Exactly, yeah, yeah, I agree. Or we can remove it. As something to go on: the "intrinsic opcode" could be called something else. Yeah, I think we need to decide if we're doing the middle-of-the-road thing, and then, if we are, that dictates one set of design choices around names and other things; and if we aren't, then it provides a certain other freedom for changing the names, I guess.
C: I think we're still going to want those kinds of things, because certainly not everything is going to fit into the mold where you can float it around and do all of the transparent stuff, right. It would just be a question of what they are called. We could, if we wanted to, hand the name "intrinsic" to those right now and force the other one...
C: ...if we choose to do it, to have a different name. But yeah, I think I would rather arrive at where we wish to go and then try to create a set of names that doesn't break anyone's brain and is understandable for all concerned. So I agree it's not intuitive; however, it's been that way since OMR was created, and a few more weeks of it being that way won't really hurt.