From YouTube: OMR Compiler Architecture Meeting 20190606
Description
Agenda:
* Switching inliners from the command line (#3956) [ @efferifick ]
A
B
Since we're talking about the open-sourcing project, one of the very first issues we are encountering is: how exactly are we going to contribute this inlining strategy such that we preserve the default inlining strategies as they are at the moment and, yeah, add a command-line argument? We can then switch to the new inlining strategies whenever we want.
B
I did not prepare a presentation, but on the issue that is linked on the agenda there are three small proposals on how we think this could be used; or, sorry, how we can achieve changing the inlining strategy on the command line. The first one is just placing a conditional statement on the opts array in the optimizer file, on the Optimizer class, and just replacing the default inliner pass with our inliner.
B
The second option is to have a wrapper class around a specific instance of the default inliner, and what that roughly translates to is that we would like to have this class, which is named TR::SelectInliner, select either the default inlining class or the new experimental inlining class based on a command-line argument. Or we delegate the multiplexing of the inlining classes to the optimization itself; that would be option three.
B
The optimization kind of decides which inlining pass to perform, depending on, say, hotness or command-line flags, etc. This would enable further experimentation with the inliner, because then we would be able to just switch the inlining strategy at will, and that could be beneficial; that is beneficial for us.
A
Well, I was just going to note that, for those not familiar with the optimization enums that we have in OMR, there are three entries for inlining at the moment: there's an inlining, there's a targeted inlining, and there's a trivial inlining. Those three are currently used in the static optimization strategies, specifically to denote a specific kind of inlining that we want to have happen in the optimization strategy at that point. So in OMR at the moment, there's one that's initialized, you know, which is the trivial inlining.
A
So it represents a kind of clean-up-calls pass, as opposed to trying to pick the best things to inline from a method body. And there's this follow-on set of things where the call overhead is really high, or maybe it's not even representable, and you want to force the inlining for whatever reason; it's kind of a mop-up pass that runs at the end.
A
B
Well, I just kind of wanted to ask for input on how to implement this, since it's going to go into OMR. I'm assuming, like, of the designs, the one that I think preserves some of the properties that you are interested in, and by properties I mean making sure that the enums still serve a meaningful purpose on their own, is to keep the default inliner for that specific enum, but on the condition that if there is a command-line argument we switch to our experimental inliner, for example. So I think that is probably the best option available of the three that I presented on the issue. But if you guys have any more ideas, I am all ears.
A
An
optimisation
slot,
the
end
Erick's
research,
and
certainly
the
development
of
in
liners
in
general
for
Omar
and
derived
languages
that
derive
from
that
may
necessitate
that
you
have
a
non-default
in
liner
that
you
are
in
the
process
of
developing
right.
Getting
an
in
liner
tuned
to
work
for
a
language
in
general
is
a
large
undertaking.
You
generally
would
like
that
in
liner
to
live
in
your
language,
implementations
code
base
or
in
om
RF,
it
is
an
Overland.
B
C
B
C
B
I don't have any opinion on the selector inliner, but I have one for the third option, which is: we just create an inlining pass that replaces all inlining passes. Sorry, let me rephrase that: that replaces the trivial inlining and the inlining passes, but does not replace the targeted inlining passes. The code, or at least a smaller version of the code, is available in the issue. Based on a command-line flag or the method hotness, we can multiplex between the inliners that we are deciding on, okay.
A
So the reason I asked about the second option was because I'm wondering, if you had written some code, how general that code is, because you could maybe think that this kind of a selector would be useful for other optimizations potentially as well. You could provide alternate implementations for some of these other things, for some other language front ends.
C
A
B
I think it looks possible. I haven't tried it, but yeah, I think instead of just having a SelectInliner class, we can name it a little bit more generically, like, I don't know, "default or experiment" or something. I don't know, maybe that's not a good name, but yeah, I think certainly something generic is possible.
C
A
Templating the create function, I was not sure if that actually needed to move out to the optimization manager, because I was not sure if the shared mutable state that's persisted across opts may need to be the thing. Like, if you have a custom opt manager that is associated with your opt pass, or you are doing specific things with the opt manager, or, say... let's take the case of a general opt selector, right?
A
So that was something I wanted to... the optimization manager is something the optimization can use, and groups, yeah. Exactly, because the shared mutable state would be set up by one, and it wouldn't be what the other one is expecting; I could see that you could have problems, right? Like, escape analysis in OpenJ9: it's maybe not one that you are all familiar with, but one of the things that it does is that it's sort of a self-enabling pass, based on whether it's finding an opportunity and exploiting it.
A
Now, currently, that optimization does not clean up that shared mutable state; it resets certain parts of it, but certainly not everything. And if you were to have a different escape analysis that was possibly sharing that optimization manager, I could see that potentially being a problem. So I wasn't sure, from an architecture point of view, what your perspective might be, I believe.