From YouTube: OMR Architecture Meeting 20210204
Description
Agenda:
* RISC-V CI pipeline update [ @fjeremic ]
* Deprecate PrefetchInsertion optimization [ @fjeremic ]
* Switch to using Node::isSingleRefUnevaluated() in code generators (#5648)
* Avoid using TR::comp in Z InstOpCode utility functions (#5663)
* Standardize compiler getter/setter syntax (#4575)
* Size-optimized builds
A
Welcome, everyone, to the February 4th edition of the OMR Architecture Meeting. Today we have a number of topics to talk through. I don't think any one of them is particularly deep, but there are a number of them, so hopefully we can make it through them all. To begin with this week, I'd like to have Philip share an update on some of the work that he and Jan Vraný have been doing to enable a RISC-V CI pipeline as part of our builds. As you know, they've been working on that for about a couple of months now, so I'll maybe just turn it over to Philip to give us a quick update on where that's at.
B
Thanks, Daryl. So this has not been rehearsed; I'm just spilling my brain. We got a Debian 10 machine from... I think we reimaged one of our existing x86 machines, so the folks over at OpenJ9 helped us with that on the infrastructure side. We had the machine as pretty much a clean slate.

B
I documented a Dockerfile, which is pretty much a guide to how to set up the cross-compilation with what Jan has done. He pretty much did all the work there; I just kind of consolidated all the information in one place. It requires us to build a RISC-V cross-compiled toolchain and root filesystem on Debian 10, and you can use that in QEMU to actually do the emulation of RISC-V compiled code. So that's all been done.
B
Jan then got to work on actually enabling the CI pipeline. We had a bunch of CMake work to sort out, but we merged that, actually yesterday, and I've enabled it by default since. So you should see RISC-V cross-compilation builds running as part of every build.
B
So I guess, for reference, our second slowest build takes about 20 to 25 minutes or so, so this kind of doubles our time to get testing done. But I think it might be a worthwhile trade, because it doesn't seem like we merge too many PRs per day anyway. So I think the trade-off is fine to get RISC-V tested. That's pretty much where we're at. I guess the goal has been achieved: we're going to stop regressing RISC-V moving forward, because the test will fail.

B
If we have a change that comes in that regresses it, people will need to address that.
A
All right, cool, thanks, Philip. So I guess one question is: I noticed that there is a RISC-V pipeline failure in some of the PRs that are running today. Is that a known infrastructure issue that's being worked on? Is that correct?
B
We had an issue flagged yesterday: we had to define some environment variable, which had to be done on the Eclipse side. So Adam put in a change, and it was working, and then something made it reset, so he's tracking that down; it's something on the Eclipse side.
A
Okay, yep! Oh, I saw one just a couple of hours ago; I'll look into that after the call. Okay.
B
Do your due diligence: if you're a committer, and the change is obviously not RISC-V related and it's not introducing a new test, chances are nothing's going to fail until we fix these kinds of infrastructure-type issues, so you can go ahead and merge even if the RISC-V build fails. Just use your common sense, I guess. Yep.
A
Okay, sounds good. And thank you for all the work that you've put into getting this important piece of infrastructure in; I know it's something that we've needed for some time. So thanks, guys. Any questions from anyone on that?
A
Okay! So let's move on to the next item in the agenda. This is sort of a last-minute addition that Philip asked to insert here, to talk about deprecating the PrefetchInsertion optimization, so I'll turn it back over to Philip.
B
Okay, so I tried to dig this one back to, I guess, 2009, when it was first introduced internally in our history, and I can't find where exactly we disabled it. But to summarize this optimization: what it does is introduce additional loads from arrays, and it attempts to prefetch their values.
B
So it is currently disabled by default for forward array traversals, and it is disabled by default under concurrent scavenge (this is a recent bug that we found), and it is only enabled for backwards array traversals. The actual algorithm to analyze all the trees and figure out the right insertion points is an n-squared algorithm.
B
So I believe we can save quite a bit of compile time by just not doing this. I tried to dig back to see what drove this optimization in the first place, and I couldn't really find it. At least on Z, in the last several iterations of the hardware, software prefetching has been discouraged, because the hardware has improved greatly in that respect.
B
It's also quite problematic for the GC, because we used to have tail heap padding, a couple of extra words at the end of the heap which were allocated by the GC to make this optimization work. But since they've gotten rid of that, you can no longer prefetch anything towards the edge of the heap, which means you could get a SIGBUS failure, for example, by inserting prefetches in the forward direction, which is probably why it was disabled. And in the backwards direction, we have now modified it, for example, for OpenJ9.
A
I only have a fuzzy interaction with PrefetchInsertion over the years. I have a suspicion it was introduced for Power, to catch some opportunity that it was looking for; I don't remember the history behind it at all, not that I've gone looking for it either. But is there anybody that's got a Power background? Gita, perhaps: do you recall if this is a historical optimization that is still needed on Power?
E
I don't have objections to removing it, but we should do some perf study before, maybe on Power.
A
Yeah, and maybe we can get others' opinions, or Gita's and Julian's as well, to see if they remember anything about this. But yeah, doing the perf work, I think, is the next step to getting rid of it.
A
In the background, maybe you could open an issue, Philip, just for discussion on this, so we can track some of the facts and the discussion on it. Yeah, propose PrefetchInsertion removal, I guess, since that's where it's possibly heading. Okay, so let's move on.

A
So the next three topics that I have in the agenda here really came about this week because of some work that happened earlier this week by someone who's not normally affiliated with the project. They went and implemented some feature, and it turned out that there wasn't necessarily consensus yet from people in the community on whether or not it was such a good idea to implement.
A
So what I thought we could do is this: I went through some of the older issues that we have had open for some time that are looking for help, and I just want to make sure that the community has had an opportunity to at least consider them and put forth a consensus that they are something we want to do. If not, I guess the issues should be closed, or refined in some way if we need to.
A
We can change the issue itself. So I've picked three from the history. If you're really sharp, you would have seen that I had four initially; for that fourth one, I wanted to do a little bit of study on it myself, just to understand what it is really all about and whether it even applies anymore. It had to do with the VP handlers, but we can always revisit that at another time. So the first one here is issue #5648.
A
I think Ben actually opened this one last fall. There are two APIs that exist on the Node class in the compiler that are there, apparently, to provide more readability to the code. So there's one function called isSingleRef(), and all it does is check whether or not the node's reference count is one. And then there is a second API, isSingleRefUnevaluated(), which checks if the reference count on a node is one and a register has not been set on that node.

A
When a register is not set on a node, that is usually an indication that the node has never been evaluated by the code generator. We haven't actually produced any instructions for that node before, so we have pretty much come upon it for the first time; I guess you can think of it that way.
A
They've been around for many years, actually, and in fact they even predate the open-sourcing of the code base, so they have had some use in the past. But first of all, I'm not convinced personally that the use should be broadened, and in fact, maybe even for the places that are using it right now...
A
We could potentially consider undoing it, if that's what the community wants to do. So maybe I'll stop there, before adding more thoughts to this, to see if others understand the issue and have any opinions on which way to go with this.
A
I think the motivation for using them is readability, but personally I'm not really convinced that readability is the concern, and I would hate to be adding an API, or broadening the use of an API, that is potentially ambiguous depending on the context in which it's used, and making that widespread throughout the code base.
A
It may not be answering the question that you are actually asking, right? Because there are situations where the code generator will decrement the reference count on a node without updating the register. For example, if it's evaluating a constant, and that constant fits into the instruction itself and doesn't need a register, then it'll evaluate, decrement the reference count, and the register field will still stay empty. So it's not reliable if you're using this API to discover the first and only time you see a node: "it's a single ref, the reference count is one, and it's unevaluated", using that logic to conclude that this is the one and only reference to this node and that I've never seen it before.
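The ambiguity being described can be sketched in a few lines. This is an illustrative simplification, not the real TR::Node API; everything beyond the two query names is hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Opaque stand-in for a code generator register.
struct Register;

struct Node {
    uint32_t  refCount = 1;
    Register *reg      = nullptr;   // assigned when an evaluator produces a register

    bool isSingleRef() const            { return refCount == 1; }
    bool isSingleRefUnevaluated() const { return refCount == 1 && reg == nullptr; }
};

// The problem case: an evaluator consumes a reference to a constant node by
// folding it into an instruction's immediate field. The reference count drops,
// but no register is ever set, so the node still "looks" unevaluated to
// isSingleRefUnevaluated() even though it has already been handled.
void foldConstantIntoImmediate(Node &constNode) {
    // ...emit an instruction carrying the constant as an immediate operand...
    --constNode.refCount;
}
```

After one such fold, a node that started with two references reports both "single ref" and "unevaluated", which is exactly the proxy-versus-real-question mismatch discussed below.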
D
All the existing uses are in the code generator right now, but I think I agree with you, Daryl, based on the reasoning you just mentioned: there is that kind of ambiguity, and what does it mean for a node not to have a register on it?
F
I mean, in those cases you're talking about, where we're looking at constants and things like that, you could technically argue that the node was never actually evaluated, because we never actually call cg->evaluate() on it. I agree it's a bit ambiguous.
A
That sounds very ambiguous in itself. I can't think of another example off the top of my head, but I know there are other situations you can find in the code generators that avoid decrementing reference counts to serve some other purpose.
A
Well, I think at some level, though, you'd still need to do the evaluation. Yes, yeah.
A
What do you really want to know when you're asking isSingleRefUnevaluated()? Are you asking, "is this the one and only reference to this node?" If so, perhaps that's what the name of the API should be. And I guess, depending on whether you're in the code generator or not, you could answer that question in different ways, depending on what you have knowledge of, as long as you're able to answer the question exactly the same way.
D
It's possible there's some other question that could be asked that would be the right question and could be unambiguously answered. But isSingleRefUnevaluated() feels to me almost like an identity-based kind of query, right? The question you're asking is a proxy for what you're actually trying to figure out. So we should really just ask the right question, if we want to do that.
D
That's true, yeah. So if we were to keep this, because we find it useful to have those two things asked at the same time, I'd prefer that we come up with a better name for the query, so that it actually is the question we're asking in all of those cases. And if we need to implement two queries, two things, and answer them the same way, then either we haven't come up with the right name for it, or, you know...
A
What I'm going to propose is just dropping a comment in that issue explaining our discussion here and closing the issue off. Actually, maybe we could either create a new issue to remove or replace the uses in the Z code gen, or we just repurpose that issue to do that work.
F
I have no objections. Okay, all right. I think this was something that was pointed out by Philip to me on a PR, like a year ago or something, so I don't recall what the reasoning was.
A
Okay, all right, so we'll do that. If I had a gavel, I'd be smacking it on the table right now. So all right, let's go on to the next one: issue #5663.
A
So this is about avoiding the use of the TR::comp() thread-local storage query in the Z InstOpCode utility functions. There are a number of utility functions for the Z instruction opcodes that need to do a query on the compilation object, but rather than having the compilation object passed in, each one of them does a query through TR::comp(), which is a thread-local storage lookup. A TLS lookup tends to be...
A
I think it is considerably more expensive than simply passing in the compilation object to each of these utility functions and then using that instead. So this work proposes getting rid of TR::comp() usage in all those utility functions and simply passing in the compilation object. You have to go around all the code and change the call sites.
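The before-and-after shape of that change can be sketched as follows; the types and names here are illustrative stand-ins for TR::comp() and the compilation object, not the real OMR API.

```cpp
#include <cassert>

// Minimal stand-in for the compilation object.
struct Compilation {
    bool is64Bit = true;
};

// Stand-in for the thread-local slot that TR::comp() resolves through.
thread_local Compilation *tlsComp = nullptr;

Compilation *comp() { return tlsComp; }   // a TLS lookup on every call

// Before: the utility re-queries thread-local storage internally every time
// it is called, even though every caller already holds the compilation object.
bool target64BitViaTls() { return comp()->is64Bit; }

// After: the caller passes in the compilation object it already has, so the
// utility performs no TLS lookup at all.
bool target64Bit(Compilation *c) { return c->is64Bit; }
```

Both forms answer the same question; the second simply hoists the lookup out to call sites that already have the object in hand.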
A
Okay, well, if there are no opinions, I can mark in the issue that it's been discussed and agreed to. That's good with me.
A
Okay, the next one is standardizing the compiler getter/setter syntax, issue #4575. This actually came out of a discussion in some issue or PR; I can't remember when it was, a long while ago. When the compiler code was first written, you know, 20 years ago, and for a good part of its life, the convention being used for getters and setters was to use "get" followed by the field name, and "set" followed by the field name.
A
However, over time another format has started to creep into the code, and it is actually prevalent in a lot of places now, which is, I think, an attempt to make the code more readable. For the getters, it is to omit the "get" and just use the field name as the function name, so the first letter would be lowercase or something like that, but you're just using the field name as the function name rather than getField().
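Side by side, the two coexisting conventions look like this; the class and field names are illustrative, not taken from the code base.

```cpp
#include <cassert>

// Convention 1: "get"/"set" prefixes on everything.
class WithGetPrefix {
    int _refCount = 0;
public:
    int  getRefCount() const { return _refCount; }
    void setRefCount(int rc) { _refCount = rc; }
};

// Convention 2: the getter is just the field name; only the setter keeps
// its "set" prefix.
class FieldNameAsGetter {
    int _refCount = 0;
public:
    int  refCount() const    { return _refCount; }   // "get" omitted
    void setRefCount(int rc) { _refCount = rc; }
};
```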
A
So from a code consistency point of view, it would be nice if we could decide which way we want to go, one way or the other, and standardize that throughout the compiler, just so that we don't have that level of inconsistency. It does seem a bit...
A
I mean, it may seem a bit like a waste of time to come up with this, but I think just from a consistency point of view, and for making the code base homogeneous, it could be useful to do. And it's easy for us to check in code reviews and things like that that someone is doing the right thing. So I guess the question is: which format?
B
I'll bite on this one. Personally, I prefer the "get" prefix because of the fuzzy searches in various IDEs, where you can just type in whatever object, arrow, "get", and then whatever thing you're looking for, and find the right function.
A
Okay, if there are no other opinions on this, then we'll have to sit here thinking about it.
B
One question there is important: what does a setter return?
A
Yeah, I know the setter returning a value was one of the ways it was done from day one, but I don't think I've ever seen it used in any context where the return value of the setter was actually used.
D
So I use that idiom in JitBuilder for some kinds of initialization, just because it makes for a nice pattern: take the object, then arrow, set this, arrow, set this, arrow, set this, and it just kind of lays it out nicely for initializing the object, or updating the object if you need to do something like that.
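The chaining idiom being described can be sketched like this; the class and its fields are illustrative, not JitBuilder's actual API.

```cpp
#include <cassert>

// Setters return the object itself, so a sequence of updates can be chained
// into one expression when initializing or updating the object.
class CompileConfig {
    int  _optLevel = 0;
    bool _verbose  = false;
public:
    CompileConfig *setOptLevel(int level) { _optLevel = level; return this; }
    CompileConfig *setVerbose(bool v)     { _verbose  = v;     return this; }
    int  optLevel() const { return _optLevel; }
    bool verbose()  const { return _verbose; }
};
```

Usage then reads as one chained initialization, e.g. `config->setOptLevel(3)->setVerbose(true);`, which is the "arrow, set this, arrow, set this" layout mentioned above.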
A
Nope? Okay, so I'll go on to size-optimized builds. I just need to check: is Nazim on the call?
A
Okay, I'll try and talk through this one, then.

A
The OMR project has certainly received some feedback in the past about the sizes of the shared objects that are produced when things are built. In some environments where you want a compiler used, the expectation is that it's as tiny as possible, using only the minimal set of features necessary to produce code, and produce code well. And in some instances there may be things that get built into the compiler, or other parts of OMR, that aren't really necessary in all configurations.
A
One example that I will give, and this is not the only example, but it's one thing that we found when we did poke into this a bit, was the GPU support in the JIT and in the compiler, and some of the things that get pulled in with that. Now, this isn't so much an OMR consideration.
A
It's maybe more of a concern in downstream projects like OpenJ9, which will pull in a lot more from that. But the idea here is, for something like that, we could have prevented it from happening if we were able to do a build that excluded certain things, a size-optimized build. So if you're really concerned about producing the smallest possible size, here's a build option that you can use to guard code that you would potentially consider to be optional.
A
Now, that's one way of thinking of it, but perhaps the other way of thinking of it is: if we know that we're implementing some feature that may not be useful in all sizes of builds...
A
Perhaps all of these sorts of features should actually have their own kind of build option where you could enable or disable them, so you could build each one in or out if you want. That way you have more refinement over how you control what feature gets included in your build and what doesn't.
A
Now, we already do that at sort of the component level, like the GC or the port library or the thread library, but we don't do it, let's say, within the compiler. If you look within the compiler, you don't see features that you can disable, like "I don't want to build a certain class of optimizations": I don't want loop optimizations, for example, or I don't want you to do any data flow or anything like that, or I don't want CUDA support or GPU support or SIMD support, or something like that.
A
So we don't have those kinds of options available at this point. I'm thinking of either having these sorts of categories of build flags that could disable features, or having a more general size-optimized build flag that can be used to turn off sets of these things. I'll just pause there for a sec in case there are any thoughts on that.
A
It would be for fairly significant features, that's for sure.
D
So the original version of this is like the small JIT we used to have, right? The way that was done was there was a query called isSmall that was propagated with just if-statements throughout the whole code base in various places, and isSmall was an inlined, #ifdef'd return value. So the #ifdef didn't propagate everywhere, although it was still necessary in some places.
A
Well, there were two aspects of that: there was the compile-time query isSmall, and then there was the actual preprocessor macro, and both of those were used throughout the code.
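The isSmall-style pattern being described can be sketched as follows: a query whose body is a compile-time constant chosen by a build flag, so guarded paths fold away in a size-optimized build without spreading #ifdefs to every call site. OMR_SMALL_BUILD and both function names are illustrative, not actual OMR options.

```cpp
#include <cassert>

// The #ifdef lives in exactly one place: the inlined query. Call sites use
// a plain if-statement, and when the flag is set the compiler can fold the
// guarded code away entirely.
inline bool isSmallBuild() {
#if defined(OMR_SMALL_BUILD)
    return true;
#else
    return false;
#endif
}

int selectOptLevel() {
    if (isSmallBuild())
        return 0;   // minimal optimization set for a size-constrained build
    return 3;       // full optimization set
}
```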
A
So I think for this one, since there's no issue for it yet, what needs to happen is that an issue gets created that would propose a couple of different approaches for doing this, so we can come up with, well...
A
First of all, whether or not it's a good idea, if it's something that's needed, and also what the best strategy is going forward. I think there actually already is an issue that somebody created years ago in OMR to add capability-style flags to the build; I'd have to go look for that. I just remembered it while I was speaking here. We'll see if that's still around and if something there can get resurrected.
C
I'm a little bit worried about taking pieces out of the optimizer, because behavior can become completely unexpected. You never know, actually: if you remove something, something else will not happen, right? All these optimizations are not sort of independent; they pretty much depend on each other.
A
Yeah, I mean, I think that's true to some extent, but maybe, like I was saying before, I don't know if we can do this; it depends on the granularity at which we are trying to permit this.
A
I mean, the biggest things that we found so far were all the GPU work and the SIMD instruction support. You know, potentially, if you wanted a really stripped-down kind of optimizer, say you're only interested in opt level cold or something like that, potentially you could build out all the optimizations that you don't want to enable.
G
And for what it's worth, the way you do that without littering the code with #ifdefs is you make the code modular, right? Because then, when you need a particular module, you pull it in and you build it, and if you don't need it, you don't pull it in and it's not part of your build.
C
What I'm thinking of is what devices would need that, because we have the MicroJIT, and I don't know how the MicroJIT fits into this. If the devices that need this are like one of the small devices, then maybe the MicroJIT will be the component for it.
A
Well, if the MicroJIT can be... well, I mean, there are those that have taken the OMR compiler and tried to integrate it into a completely different front end than what we've traditionally been used in, and in those contexts they discovered that it's simply too large for their needs. But I'm not sure; is the MicroJIT being designed to be used for more than just Java?
G
And even if it wasn't, I don't think that would really address the issue here, because that basically only gives you two options: you either have the MicroJIT or you have full OMR.
D
Well, to be fair, there are some facilities available already in the compiler that help strip things down, right? If you're willing to go to the extreme of creating your own extension of the OMR compiler, you can configure the optimizer to not include optimizations that you're not interested in. JitBuilder does that.
G
Yeah, I remember Luke having to go through and make it so that we weren't initializing optimizations that we weren't using, right. Yep.
G
We can definitely create our own extensions and make it so that we don't pull in the pieces we don't need. It's just that you still end up pulling in quite a bit, at least from what I remember of the experiments we did for that (or not we, but people).
D
Right, well, I mean, that was work at the level of the optimizations. So if the optimizations pulled in things, then those would just kind of get pulled in; there wasn't much you could do about that. But if you only wanted to initialize certain optimizations, you wouldn't have to pull in everything. So I know JitBuilder saved about a megabyte of size in the JitBuilder library.
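That approach, a client registering only the optimizations it actually wants so the unused passes are never referenced, can be sketched roughly like this. The registry and pass names are illustrative, not OMR's actual optimizer interface.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// A toy registry: only passes a client explicitly registers are ever built
// into the pipeline, so code for unregistered passes need not be linked in.
struct OptimizationRegistry {
    std::map<std::string, std::function<void()>> passes;

    void add(const std::string &name, std::function<void()> pass) {
        passes[name] = std::move(pass);
    }

    std::size_t count() const { return passes.size(); }
};

OptimizationRegistry makeStrippedDownOptimizer() {
    OptimizationRegistry r;
    // A size-conscious client registers a minimal set and simply never
    // references the heavier passes (loop optimizations, GPU, SIMD, ...).
    r.add("localCSE",      [] { /* run local common subexpression elimination */ });
    r.add("deadTreesElim", [] { /* run dead tree elimination */ });
    return r;
}
```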
D
At the time when Luke did that, it was actually Shelley who had started that refactoring work, and then Luke went through and completed the initialization changes so that you wouldn't have to have all of the optimizer.
D
Yep, right. So that would be more about isolating the client from the rest of the actual guts of compiling. That's almost going back to the codert-versus-JIT split that we used to have; that's the other way of cutting it.
A
Okay. So there's no issue for this yet, but I think probably one of the next steps is at least to create an issue to track some kind of investigation into the right way of approaching this, and actually doing it.
A
Okay, so that's all we have on the agenda for today. I guess: are there any other topics that anybody wants to bring up before we close?
A
Nope? Okay! Well, thanks for attending, everyone, and we will convene again in two weeks' time. Thanks, bye.