From YouTube: OMR Architecture 20210121
Description
Agenda:
* 2020 retrospective and 2021 (and beyond) goals [ 0xdaryl ]
Welcome, everyone, to the first OMR architecture meeting for 2021. To kick off the year, I thought we could talk a bit about what we've accomplished since the last time we did one of these roadmaps in early 2020, look forward to what we hope to accomplish this year, touch on some of the items we're looking at a bit further out than that, and talk about where this project is going.

So, to begin with: at the beginning of last year (actually, I think it was very late in 2019) we gave a roadmap presentation for what we were thinking about accomplishing in 2020. I thought I'd go through the slides I talked through last year and highlight what we accomplished, including the new things we accomplished that we didn't think we were going to do.
What you're seeing here is mostly the 2020 slides; the resemblance of the 2020 star in the top right to a COVID virus is completely unintentional. To begin with, for the OMR project itself there were some great accomplishments: in total we had well over 700 pull requests merged.

The cool thing there is that we've had some new contributors to the project, some brought in through working on beginner issues; we had nine of those completed last year. There was also a fair bit of work happening throughout the various components of the project to reduce technical debt: refactoring the code, providing better documentation and clearer APIs, that sort of thing, either as work specifically devoted to that or as part of other pull requests that were coming in. We also had one new committer join the ranks: Ben Thomas became a committer for his great work on the project last year.

The other thing we did at the project level last year was to work on cleaning up old issues and pull requests, basically closing them off or reaching some kind of resolution for them.
I should have mentioned the color scheme I'm using here, since these are the slides I had used before. If we completed something last year, I'll use green. If we started work on something but maybe didn't quite finish it, I'll use an orange color. New features that weren't on the plan but that we actually ended up doing are in blue, and everything else, where we didn't make any progress, is in black. I'll also say that when I'm speaking about a plan, I'm really just summarizing all the work that we know is happening on this project in the community.

That doesn't mean this is the only thing going on with the project. There are certainly other forks and efforts that may not be reflected here; this is just a summary of the things we know we're going to do and where we're going to be taking this project. Part of the issue and pull request cleanup, I should say, was helped by the fact that we added some automation through GitHub Actions, which marks issues and pull requests that are older than a certain number of days.
A
I
think
it
was
six
months
actually
and
if,
if
there
wasn't
any
response
in
those
times,
it
would
automatically
get
cleaned
up,
so
we
did
close
off
some
of
those
some
some
older
issues
that
way.
But
if
you
recall,
if
you've
been
participating
with
this
group,
we
did
go
through
some
of
the
very
old
issues
and
prs
to
to
try
to
come
up
with
some
kind
of
a
resolution
for
them.
So.
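Automation like the stale-issue job described above is commonly built on the `actions/stale` GitHub Action; a minimal configuration in that spirit might look like the following (the thresholds and messages here are illustrative, not necessarily what the OMR repository actually uses):

```yaml
# .github/workflows/stale.yml (illustrative example)
name: Mark stale issues and pull requests
on:
  schedule:
    - cron: '0 0 * * *'   # run once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 180   # roughly six months of inactivity
          days-before-close: 14    # then close if still no response
          stale-issue-message: >
            This issue has been inactive for 180 days and will be closed
            soon unless there is further activity.
```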
On the compiler component, we were fairly ambitious with the work we wanted to do last year. Because this deck, as I mentioned before, was presented in late 2019, we were still hoping to deliver the RISC-V back end, and the RISC-V back end actually landed in December of 2019.

So that's really more of a 2019 statement than a 2020 statement, but nonetheless we have a RISC-V back end, and we have seen some incremental changes to it over the past several months, which is great to see. Also in 2019 we did some work on the options processing framework, and we were hoping to land that in this project sometime last year.

We made some minimal progress on that, but I think there's still some work that needs to get done on the infrastructure side for it to actually happen. We also made a bit of progress on some conceptual integrity issues, in particular on the resolved-method side.
A
I
know
I
have
that
one
in
particular
on
my
on
my
list
of
things
to
do,
and
I
was
able
to
make
a
bit
of
progress
with
that,
but
but
never
quite
got
it
to
the
point
where
it's
finished
and,
and
certainly
this
year
is
going
to
be
one
where
I'm
going
to
want
to
be
pushing
that
forward
for,
for
other
reasons,
but
perhaps
by
far
a
lot
of
the
work,
that's
happened
on
the
compiler
component
specifically
last
year
has
been
around
the
the
goal
of
the
project
to
really
converge
the
back
end
technologies
and
encourage
as
much
reuse
as
possible
between
them.
A
This
will,
of
course,
lead
to
a
overall
a
simpler
design
and
a
smaller
code
and
more
maintainable
code
because
it
is
being
shared
between
you
know.
The
five
back
ends
that
we
have
and
plus
risk.
Five
now
will
be.
You
know
a
a
sixth
back,
a
six
back
end,
so
there's
a
lot
of
opportunities
for
for
sharing
between
them
and
a
lot
more
opportunities
for
for
for
sharing.
A
Probably
the
the
best
example
from
those
would
be
would
be
things
that
are
specific
to
the
to
the
open
j9
project,
moved
up
into
into
openg9,
to
simplify
the
code
in
omr
and
can
and
and
make
it
more
language
agnostic.
Also, last year there was a research project from the University of Alberta, started more than a year ago, whose goal was to study the means we're using to extend classes in the compiler and to determine whether that was actually the best approach, or whether there were alternatives that would yield something better.

As part of that project they studied a number of the requirements this project has: the reasons we had to design things a certain way, and the constraints the project is under, like the different platforms it has to run on, the different compilers that all need to be supported, the different language levels, things like that. They took a lot of those into consideration to try to come up with the right answer for what we should be doing. Unfortunately, even though the study ended last year, there wasn't any firm conclusion as to whether there was anything better or not. There were too many constraints; some of the challenges we have made it difficult to recommend one approach over the other.
However, there probably are ways we can continue to simplify the code and try to eliminate some of the constraints and limitations we have, in order to make a better recommendation in the future. Suffice it to say, at least in the short term, extensible classes are really going to remain the means we use to extend classes, until we have a reason to change from that path.
In the garbage collector component: last January, Robert Young gave an overview of the garbage collector and some of the cool ideas being explored with it. Some of those haven't made it onto this list just yet, but for 2020 the things we did accomplish with the GC included continued work on producing a unified build for compressed and native pointers.
Compressed pointers is really a representation of an address in 32 bits, when that's possible. At the moment, in order to support that, you actually have to build two different versions of the GC you're using. We want to unify that into just a single build, and there's been some good progress made on that in the past year. Some work for dynamic breadth-first scan ordering was contributed as well; I think there's still some cleanup work happening there to finish that off, but a good bulk of it was contributed last year.
In the port and thread libraries, there was some work contributed to OMR as a result of the JITServer project in OpenJ9: a network socket API for OMR. It was made in a very language-agnostic way, so it was a good fit, and there's also a potential need for a socket API in the port library. A lot of good work went into that last year, and it was contributed in the summertime.

We also contributed some work for processor detection features from OpenJ9. The means for determining what kind of processor you're running on, and the features available on that processor, was a bit scattered throughout OMR, and a bit scattered throughout OpenJ9 as well, so there was certainly an attempt to centralize where that information is determined, and the port library is a very natural place for that.
On the JitBuilder side of things, there was some talk last year about a JitBuilder 2.0 and some discussion around a new, higher-level IL representation. Mark talked at at least a couple of the architecture meetings last year to describe his progress in evolving the JitBuilder IL.

We were able to make progress with that last year. We didn't quite get to a code contribution yet (I think that's going to be coming sometime this year), but nonetheless the IL has been evolving.
We also contributed a project called LLJB, which was worked on by a student to provide support for consuming LLVM IR via JitBuilder, converting it into Testarossa IL to be consumed by the compiler.

We really did that work as a means of connecting OMR technology with language front ends that produce LLVM IR, so that we can do some experimentation and possibly get some inroads in those sorts of environments. That project was a pretty good success, and all of that code was contributed to OMR last year. Some other work we did last year was around Java language bindings.
That was more a matter of a lot of thought going into it. There was a project that started at the U of A last year around Java language bindings. It didn't quite get to the point where it was finished and code could be contributed, but a fair bit of legwork and thinking went into it, so I imagine that's something that will carry on into the future, as we're making some good progress there.

On the testing side, we unfortunately didn't get to many of the things we wanted to after we gave this roadmap talk for 2020 (which, again, was in December of 2019). Shelley Lambert did talk at the architecture meeting about some of the ways she thought OMR testing could be evolved and some of the things we'd like to do there.
A
Unfortunately,
this
was
a
bit
of
a
casualty
of
of
not
having
enough
bodies
to
to
help
with
some
of
this
work,
but
one
thing
that
did
come
out
last
year
that
that
was
actually
not
part
of
any
of
the
plans
was
a
better
means
of
testing
the
code
that
was
being
generated
by
the
by
the
power
backend
and
it's
it's
basically
a
means
of
of
comparing
the
the
encodings
that
the
the
testerosa
compiler
technology
is
using
against
another
source.
For example, you can compare against GCC's output to see that there's some consistency there. That turned out to be a success on Power, and there are certainly opportunities for replicating that kind of testing on the other back ends that we support in OMR.
On the infrastructure side, as I mentioned before, one of the main contributions was some work that Philip did to enable GitHub Actions as part of the development pipeline.

So that got turned on, and a few things were switched on as a result. I mentioned the stale issues, but we also have a job right now that will welcome a new contributor to the project and give them a bit of information about places to find documentation, that sort of thing. Now that we have that on and working, it's really something we can consider using for other purposes as well.
We also had our thoughts on getting a RISC-V CI pipeline enabled as a result of the RISC-V back end getting contributed: we wanted to have a means of testing it. Unfortunately, there wasn't really any hardware available in the project, but getting an emulated environment set up was certainly a possibility.

Unfortunately, we didn't really make a lot of progress on that until, I would say, six weeks to a couple of months ago, and I think we're getting pretty close to being able to enable a RISC-V CI pipeline using an emulated environment. Things are looking good there, so I'm hoping that one's going to get resolved pretty soon.
Some offline progress was made on the Wikipedia entry, just to make sure we get the right text and the right citations and things like that, so it doesn't get rejected. Somebody did actually go and create an OMR Wikipedia entry last year, but it was shunned by the community.

Okay, so that was 2020. Now, looking forward to this year:
A
Just
reflecting
a
bit
on
the
on
the
reality
of
the
of
what
we
know
to
be
of
what
we
know
on
on
the
resources
that
we
have
that
are
working
on
omar.
Specifically,
I
think
what
you'll
find
here
is
that
what
I
have
is
a
little
bit
less
aggressive
than
what
we
had
hoped
for
last
year.
It's
a
little
bit
more
on
the
realistic
side
and-
and
I
think
that
a
lot
of
these
ones
are
going
to
immediately
benefit
some
of
the
work
that's
already
happening
in
in
the
project.
From the compiler point of view, one of the things that has come up of late for other projects, but is really owned by OMR, is the vector IL opcodes. We had a discussion on this as part of the OMR architecture meeting almost two years ago now, on ways that we can adapt, simplify, and just make more understandable the way we represent vector operations in our high-level IL. So really the issue here is to dust some of that off and make it a reality, and the fact that there's another downstream project wanting and needing this may give it some additional focus this year for sure.
The other tasks we have for the compiler specifically are really around technical debt reduction, things like continuing the code generator convergence. A lot of that is happening as part of other PRs, not necessarily ones specific to converging the code generators, but as more work happens, the community is looking a lot more these days for opportunities where code can be shared across the different back ends, which is great and something we should definitely continue.
For JitBuilder, we're hoping to get an initial contribution of some of the things Mark talked about for JitBuilder 2.0 into the code base sometime this year; by sometime, I mean probably at some point in the first half. That will be a good stake in the ground for JitBuilder 2.0, something concrete for others to look at and for us to continue to build on once it's there.

I do expect there will be some incremental progress made on top of that, in the event that there are projects consuming JitBuilder that want to build upon the API that's there, plus some of the other ideas that Mark has. And then there's going to be some work to improve the integration of JitBuilder into other language environments.
A
We've
taken
a
couple
of
runs
at
this
in
the
past
and
there
are
some
known
limitations
that
there
are
there
are.
There
are
some
limitations
with
with
jit
builder
and
the
omar
compiler
technology
that
prevent
it
from
integrating
nicely
into
some
language
environments.
We
tried
to
solve
some
of
those
last
year.
Actually
we
talked
about
you
know
the
the
memory
allocation
story.
You
know
the
the
resolved
method
stuff
is
is
is
related
to
this
as
well.
A
There
are
other
problems
as
well,
but
it's
important
to
to
solve
these
sorts
of
things
so
that
jit
builder
can
be
used
in
in
other
environments
such
as
such
as
open
g9,
for
example.
So
we
hope
to
you,
know,
identify
and
solve
as
many
of
those
problems
as
we
can
this
year
to
make
to
make
this
technology
as
portable
as
possible.
On the port library and garbage collector side, I think the main contributions we'd like to see this year (we talked about this at the architecture meeting a few months back) start with having some sort of initialization story for the port library, so that it can be consumed in the compiler and JitBuilder.

The result of the last discussion we had was around basically having a simplified way of initializing the port library. It doesn't solve every single problem that has come up, but a simplified initialization would let the port library be used within the compiler and JitBuilder. Just having the port library there will simplify some things and eliminate some of the duplicate code we have in the compiler that's already implemented in the port library.
Work on the infrastructure side looks a lot like what we had hoped for in 2020.

We're still fairly deficient on AArch64 hardware, and I guess RISC-V hardware for that matter: the newer platforms. We're still on the lookout for new AArch64 hardware to contribute to the project so that we can do testing on it. Right now we're only doing builds; we don't actually execute any of the tests on it, and this is certainly problematic. Same thing for RISC-V; actually, for RISC-V we don't even have a build yet, but we're getting very close to enabling something like that.
If we can't actually get AArch64 hardware, we could potentially build on the work that's happened for RISC-V, to build and emulate on AArch64 as well. That's a positive step forward, but having access to real devices would be ideal, and we're still on the lookout for AArch64, as I said, and possibly for some contributions from the RISC-V community for hardware there.

There are still a couple of builds that haven't been ported to CMake. We need to complete those, and then we can finally deprecate our autotools dependence. And then one thing that comes up a lot, especially at this time of year, is copyright verification; we don't actually have a story there.
So, not an exhaustive list for 2021, and it's certainly possible that some of the items further out could get done sooner as well. Looking a little bit further out: I've duplicated a lot of our project goals from 2020 here. Because there are other projects potentially consuming OMR, there may be a greater need for doing releases, increasing the cadence of those releases, and improving their stability.

I'm not sure if 2021 is the year for that, but coming soon we may need to start to put into place a more rigorous release process for this project. Part of that is the plan for API stability on the components that don't have a stable API; of course, my finger is pointing at the compiler and JitBuilder as the main ones that need to choose and stabilize on a particular API.
The other thing we can do as part of the project: we have a lot of really good ideas and directions that we want the project to go in.

There are also a lot of people who have come to the project wanting to help in some way, shape, or form, but it's been difficult for them to get a toehold, because some of the tasks we want them, or the project, to take on are far too large or far too nebulous. They need to be broken down into much smaller pieces for someone to consume and take on, because people coming to the project aren't necessarily coming with the ability to devote all their time to it; they're coming in more of a part-time context.

So if we can take some of the bigger ideas we have and break them into smaller ones, to encourage others to participate, that should be one of the goals we strive for.
We also have what I'll call a large issue backlog and a pull request backlog; issues, for example, number in the hundreds.

It's not an overwhelmingly large issue backlog, but nonetheless I think there's work we can do to ensure that some attention is paid to the older issues and PRs that were opened, and that we continue to move things along. The other thing is to make sure that if somebody does open an issue, we don't forget about them.

In a handful of cases, people may be opening problems that don't get immediate attention, and as a community we should be working to address those problems, or at least give them some attention, as soon as we possibly can.
A
From
the
compiler
point
of
view,
looking
forward,
there's
a
fair
bit
of
things
that
we
want
to
do
in
the
that
we
know
that
we
can
do
looking
at
our
new
back
ends
ar
64
and
risk
five.
There
are
a
few
op
codes
still
that
that
haven't
been
implemented.
A
A
There
has
been
some
as
part
of
the
work
to
common
data
structures
between
the
different
backends.
A
We
can
certainly
look
at
more
than
just
things
like
the
instruction
op
codes
or
the
instructions
and
think
about
other
table
data
structures
within
the
compiler
that
could
possibly
benefit
from
a
more
automatic
generation
and
and
to
improve
maintenance
of
this.
So
we've
talked
about
this
in
the
past
at
other
architecture
meetings,
but
haven't
really
made
a
lot
of
progress
on
this.
This
is
really
more
just
a
simplification
item
to
make
things
easier
to
generate
and
to
make
things
easier
to
share
across
different
projects
like
the
options.
The options work, for example, is a good one, in that it's able to produce a single table that takes requirements from OMR, takes requirements from a downstream project like OpenJ9, and synthesizes everything together automatically into a single table. That's as opposed to a confusing arrangement where a little bit comes from here, and then I've got to go over there and grab a little bit from OpenJ9; it just pulls everything together into a nice representation. There are other table structures in the compiler that could potentially benefit from this kind of unification: things like the value propagation tables, simplifier tables, other opcode tables, things like that.
In order to use the OMR garbage collector technology, we need a representation of what's live at certain points in a method as it's being compiled. We use a GC map for that, and it's typically stored in something we call method metadata, but that doesn't really exist in any real form in OMR just yet, so we'd like to generalize and surface some of that.
A
Metadata
is
also
useful
for,
if
you're
doing
more
inlining,
it's
useful
for
in
capturing
information,
about
exception
ranges
plus
other
things
for
other
languages,
that
language
front
ends
that
we
haven't
really
considered
just
yet.
So
we
need
to
provide
a
more
generic
framework
for
that
the
memory
allocation
story,
like
I
said
we
talked
about
that
at
least
a
couple
of
times
last
year
and
and
a
few
of
us
did
make
a
bit
of
progress
on
on
on
trying
to
come
up
with
a
simplified
design
for
that.
A
But
there
are
a
number
of
there's,
certainly
a
lot
of
complexities
that
we
have
to
to
work
through
there.
The
unfortunate
thing
about
the
memory
allocation
story
in
in
the
compiler
right
now
is
that
it
is.
It
isn't
very
consistent,
it's
very
difficult
for
anybody
to
understand
if
you're
not
already
deeply
involved
with
with
it,
and
there
are
at
least
there
are
at
least
a
couple
of
different
implementations
of
of
of
of
parts
of
it
that
need
to
be
unified.
I
would
say
so.
A
I
know
it's
a
it's
a
very
difficult
problem
to
to
to
work
on,
and
it's
also
one
that
is
so
difficult
that
it
often
gets
pushed
aside
because
of
the
effort
that
are
required
in
order
for
us
to
do
something
there.
But
I
do
think
that,
in
order
to
meet
some
of
our
goals
of,
for
example,
porting
jit
builder
into
other
environments,
these
are
the
kinds
of
things
that
we're
going
to
have
to
to
make
some
progress
on
and
solve
in
order
to
make
that
a
reality.
A
Floating
point
in
omr
is
is
another
area
that
that
needs
some
some
definition,
I
would
say,
I
think,
just
coming
up
with
the
the
right
strategy
for
omr
to
handle
floating
point,
the
way
that
things
are
represented.
What
things
mean?
I
think
that's
important
to
do.
A
There
is
going
to
be
some
work
very
likely
happening
in
omr
this
year
around
vector
and
and
cindy
support
and
various
various
architectures.
Some
of
this
is
being
driven
by
open,
j9
and
their
vector
api.
That's
coming
up,
but
but
art
64,
for
example,
doesn't
have
any
vector
support
implemented,
but
I
know
that
there
are
those
that
are
working
on
an
implementation
for
vector
for
ar64
that
will
hopefully
land
sometime
this
year,
and
part
of
this
will
be
tied
in
with
an
overall
floating
point
strategy.
So.
A
A
This
isn't
necessarily
just
the
amount
of
memory
that
it
consumes,
but
the
actual
on
disk
footprint
of
the
of
the
of
the
of
the
shared
object
that
gets
produced
when
you,
when
you
build
the
compiler-
and
there
has
been
some
preferences
on
mentioned
about
you-
know
the
the
compiler
needs
to
fit
under
a
certain
in
a
certain
window
for
it
to
be
useful
in
a
certain
environment.
We have started doing some investigation into reducing the overall compiler footprint, and we're still working on landing some of that. Actually, I should have had this as a bullet point on the 2020 accomplishments, because we did make some progress there in 2020. But there's still more study that needs to happen on what parts of the JIT are consuming a lot of space, and on whether there are things we can do to eliminate some of those parts of the compiler, or make them optional if somebody doesn't want to use them. Just to give you an example:

If you're running in an environment and you don't care at all about the vector instructions, for example, the SIMD instructions, then perhaps you could build them out entirely, so the compiler does not support generating them at all, or any of the evaluators for them, and you could potentially save some footprint that way. We need to continue to study that going forward.

And then (I seem to be on a vector theme here) we don't actually have support for Intel AVX-512 just yet. There's some rework that has to happen in the x86 back end to make that a reality, so it isn't just a matter of going through the Intel docs and implementing it; some rework does need to happen, and it's not trivial to do.
For JitBuilder, this is really building more on, moving out from, JitBuilder 2.0. One of the topics that has come up in several conversations about JitBuilder has been its integration with the GC, and the kinds of things JitBuilder needs to be able to provide in order to work with garbage collectors.

For example: the ability to produce GC maps, means of allocating objects, and any memory access barriers that are needed for a particular memory model, that sort of thing. That's one of the things we think JitBuilder needs to integrate better with going forward. Also, making sure that JitBuilder runs on as many platforms as possible: in 2019 we did a fair bit of work to get JitBuilder running on the z/OS platform. I do know there's some interest in continuing to work with JitBuilder on z/OS, so there's something to build on there. Making sure it runs in all environments that are of interest is important, as is making changes as necessary to the way the project is structured, or the code is structured or built, for the toolchain.
Also: continuing to build on JitBuilder 2.0, and working on taking that and building it into existing language front ends, to demonstrate that it can be done and to provide a concrete example for others to study and move forward with. Java language bindings are another thing: the ability to invoke JitBuilder from within a Java method, to basically provide customized control over snippets of code that you'd like to see built.
That is a feature we think has some promise in some contexts, and we'd like to continue working on it. Like I said before, it was started as a research project last year, but more work needs to happen there to finish it up. And then there's just ongoing refactoring and documentation improvements, to make it easier to understand and to get others to help work on it.

The Metronome contribution came up, you know, more than a year ago, and I guess we need to decide whether that's something we want to continue to contribute into OMR, and where it's going to go.
Metronome, for those who don't know, is a real-time garbage collector that makes guarantees about pause times as it collects. There's certainly some cool technology there coming from OpenJ9, but finding a place for it in OMR, and figuring out how we can integrate and test it, is one of the things we still need to figure out.

On the testing side, there are still lots of things that need to happen. We talked about this at the end of 2019 with Shelley, and some of these items certainly came from her list. Looking at what we're currently testing and how we're testing it is certainly important: doing that kind of a review, and restructuring the test directory that we have.
It would be nice to dust off some of those ideas and start making them a reality: taking the work that happened last year for testing binary encoding on Power and making it work across the other architectures.
A
That's certainly something that we'd like to roll out as well; it seems to be catching some problems much sooner, and in a much more deterministic way, than encountering them out in the field. And then Tril, the means of injecting IL directly into the compiler for doing very targeted testing. Unfortunately, a lot of the work that's happened on Tril has stalled of late, but looking forward, there certainly is a long list of things that we want to do, and continuing to evolve
A
the compiler testing story. Because as the OMR compiler is starting to be used in more and more environments, it's going to become harder and harder to test in some of these different environments, but having the ability to inject trees into the common backend, into the common compiler, might simplify some of the testing process.
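To make the Tril idea concrete: a test case declares a small method's IL directly as an S-expression, which the Tril frontend hands straight to the compiler with no language front end involved. The snippet below is an illustrative sketch based on the Tril documentation in the OMR repo, not authoritative; the exact attribute grammar may differ.

```lisp
; Sketch of a Tril test case: a method taking no arguments that
; returns the 32-bit integer constant 3. The node names (block,
; ireturn, iconst) mirror OMR IL opcodes; see doc/compiler/tril
; in the eclipse/omr repository for the exact syntax.
(method return=Int32
  (block
    (ireturn
      (iconst 3))))
```

Because the trees are stated explicitly, a failure in, say, binary encoding reproduces deterministically from the same input, which is the "much more deterministic" benefit mentioned above.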
A
There is a roadmap that we had come up with maybe a couple of years back, and it's really just finding the time to continue on that path and to make some of those things a reality. On the infrastructure side,
A
one of the things that's still open: at the end of 2019, beginning of last year, there was some activity around a consistent source code formatting effort throughout the OMR code base, just to give it a consistent look and feel. There was a fair bit of discussion and opinions around that, and in the end it didn't end up happening.
A
But I think that there is something to potentially salvage from the discussions that did happen, and that there is an opportunity for trying something again, perhaps in a different form that might be more amenable to the problems that were raised in the previous discussion.
A
So at some point in the future we may see something coming again around that, because if you look outside just the sort of regular users of OMR, and OMR consumed in OpenJ9, there is interest in consistent source code formatting from those that don't normally participate in this. So it's not forgotten, but we have to make sure that we solve as many of the problems that were raised before as we can. And finally, website and documentation.
A
So, always something that's good to do, but unfortunately something we just don't normally have a lot of time to do: continuing to improve the documentation that's available with OMR.
A
Well, I guess what's presented on this slide here is really more about things that you'd find on the website, as opposed to documentation that you'd find in the docs directory of the repo. We've actually been pretty good of late about creating new documentation in the repo itself.
A
Often fairly small bits of documentation, but useful nonetheless, as opposed to writing a blog or some deeper technical article on something. So it's important to do the blogging and things like that, in order to increase the visibility of the project, to have it appear fairly prominent, and so that others can see some of the cool things that are happening on the project itself.
A
And I think that's my last slide here, so I'm going to stop there and open this up for any discussion that anybody wants to have on any of these points in particular. You know, if there's any area that you think we should be focusing on that isn't getting enough attention here, or if there's some idea here that's really wild and crazy, that you think has no hope of ever seeing the light of day,
A
I can hear that too. So, any thoughts on that?
B
A
We'll end the meeting then. So yeah, I'll have this posted, and we'll do this again in a year's time to checkpoint how we've done over the last year or so. It's certainly a very asynchronous project, so a lot of things come in during the year, new ideas come up, that kind of thing; the project gets pushed in different directions.
A
That will certainly influence the plan that we have. So, all right: thanks for your participation today, and we will talk in two weeks' time at the next meeting. Thanks, everyone.