From YouTube: JVMLS 2015 - Multi-Language Runtime
JVMLS 2015 - Multi-Language Runtime - Mark Stoodley
The JVM Language Summit is an open technical collaboration among language designers, compiler writers, tool builders, runtime engineers, and VM architects. In 2015, it convened at Oracle's Santa Clara, CA campus on August 10-12.
Materials provided by IBM are owned by IBM and use of such materials is limited to non-commercial use only.
Alright, so we've seen a lot of great talks at the JVM Language Summit, talking about JVMs and hearing about what people are doing with different languages on the JVM. This talk is going to look at multiple languages and JVMs from a slightly different perspective that I still hope you'll find interesting, and I will try to end a little bit early, since we went long in the last session. Hopefully that'll give you some time for questions, and I look forward to hearing what you think about it.
This is my first time at the JVM Language Summit, so I thought I'd tell you a little bit about me. I work at the IBM Toronto lab on JIT compilers; I've been doing that for about 13 years. I'm the current architect for the Testarossa JIT compilers in the IBM JDK, and I'm leading a currently secret project, which is the topic that I will be talking to you about today.
So, first of all, I'm going to give you the story behind this talk. I'm going to start off purposefully telling you about a problem that IBM has been facing for a while now, and then talk about the secret solution that we have been working on to address that challenge. Now, I know you all traveled here from all parts of the globe to hear about IBM's problems, but bear with me: some of those problems aren't just ours.
We still have a lot of people who are interested in Java, but there are other languages that are now showing up, from JavaScript to Python to Ruby to you name it: lots of different dynamic languages, and yes, even COBOL. We do keep hearing about things that are happening in COBOL, and there are advancements going on in COBOL, but I won't talk about that very much today, because I know it's not really a dear topic for this room.
This is actually a great thing; this is not a problem. But from IBM's perspective, we have our own JVM implementation, and we find maintaining even one language is a pretty costly affair, right? You have to keep up with advances in hardware platforms, making sure that you're exploiting all the new instructions, all the new capabilities that keep coming.
We have to keep up with competitors, although I think that's actually a very good thing about the Java ecosystem: having lots of competing JVM implementations has actually been very good for the Java ecosystem, and it's driven us to do a lot of performance improvements that might not otherwise have happened quite as rapidly. We have to do lots of hours of testing, writing new tests, fixing all the problems that customers find, as well as the ones that we find ourselves before we inflict them on the rest of the world. And then there are all the activities that go on around the development team: writing documentation, managing the project and releases, engaging with stakeholders, and, oh yeah, this thing called innovation, which actually pushes forward the state of the art in the runtime that we've got. So let me just look at the top item there. Now, we've heard a couple of times today that x86 is the only important platform, maybe ARM, but...
Okay, well, good, that's funny; I'll have to say it again. We've heard today that x86 is the most important platform. That's not actually the point I'm trying to make, but I'm making it very well. IBM, on the other hand, has customers that run across a number of different platforms. So we have our own POWER architecture, we have our mainframe architectures, and these are actually very important.
We have very big customers running on all of these different platforms, and we have different operating system variants on all three of these major hardware platforms, which add another multiplicative factor. And I haven't even taken into account 32-bit and 64-bit, compressed references, and all those other variations, little-endian versus big-endian. All of those variations make it extremely painful to implement and maintain even one language runtime.
So as soon as there's a hardware advance on one of these platforms, we really have to surface that in the runtime. And if you start adding more language runtimes, if we were to take the same approach that we took with Java and build a parallel implementation of all these language runtimes, we would have to do all that work over again; we would have to pick up all that work and repeat it in another runtime. So for two, that looks bad, but maybe we could pull it off.
Clearly, the approach that we're using for Java is not going to be the thing that we're going to be able to use to address the needs of our customers, and new customers, going forward. And the thing that really makes this challenging is that all the existing language runtimes are completely different implementations. They have just-in-time compilers (some of them do, some of them don't), they have GC technology, they have threading models, they have mechanisms to cooperate among different threads, but they're all implemented completely differently; they're completely different implementations.
Different implementation costs, different efforts to own. So it's looking very costly for us to support multiple language runtimes in the way that we've traditionally done this. Now, that's not an unfamiliar problem to anyone in this room, right? We're here to talk about JVMs and languages running on top of the JVM, because that's a great way to leverage all of the investment
that's happened in the JVM, and that provides great interoperability with Java, which means we get to leverage that large ecosystem of programmers, developers, and code, as well as all of the JVM implementation details under the covers. So we get great JITs, we get great GC performance. There's a little bit of an impedance mismatch between what the JVM is implementing, in terms of the JVM specification, and the thing that the other language runtimes want to implement on top of it. But that's a problem that has obviously been recognized and is being worked towards; we've seen lots of talks about how the details of the JVM specification can be massaged to help other language runtimes operate better. And there's been a very successful community of JVM implementations: looking at a Wikipedia page, there are 56 different languages that run on top of the JVM, and that's great.
But IBM had a little bit of a concern with leveraging the JVM in this particular way, because in a sense it divides the language community by imposing trade-offs. Right, there are trade-offs implied by using the JVM to run these different languages. For some languages, the division in the community doesn't really matter, because the language was created on the JVM: it runs on the JVM, and there's a single community on the JVM.
So that's not a big issue. But many of these existing languages, like Ruby and Python, have a vibrant non-JVM-based community, and if you're forced to choose between two different runtime implementations to run your code, that can mean some quite stark trade-off choices. You may want high performance, but you may not be able to run with all the extension modules that make that community vibrant and great. And I know there are some talks later, maybe even later today, that are going to go into detail on some of those issues,
those issues with accessing extension modules via the JVM, which has traditionally been a pretty bad problem. And even bridging this divide would mean really migrating an entire community either one way or the other. IBM has some experience with being another JVM implementation and seeing some of these migration and compatibility issues. Java as a language was really almost designed from the start to have multiple implementations: it has a really well-defined JVM specification, it has a really well-defined language specification, and these are great.
They do help with building multiple implementations, but they don't cure all the problems. There are always compatibility issues, sometimes bug-for-bug issues, which can be particularly frustrating, and behavioral issues, because the specifications aren't fully complete. There are areas of the specification that are implementation-defined, or that aren't actually specified, and so you end up getting "it doesn't work that way on the other one" kinds of statements. And these migration issues, as we've heard earlier, really kind of turn customers off.
So I think from the last talk that Cliff gave, you can see that there's a lot of technology and a lot of stuff that's been built inside the JVM that doesn't necessarily need to have a Java flavor to it. We may have implemented it inside a JVM for the purpose of executing Java, but those technologies don't need to be written in that way.
That would be a toolkit of language-agnostic components that could be used to build language runtimes and augment existing language runtimes. So you could pull out that VM technology and integrate it into, say, the Ruby MRI runtime and use it to bring JIT compilation, GC technology, and other tooling capabilities, or you could do the same thing with CPython and leverage these components in the same ways.
What we're really doing is establishing a different API into the components that we've built, rather than using the JVM specification, which, despite all of the efforts that we're going through, is still a JVM specification. It's a specification for a machine that runs Java, even though it's getting a lot better at running other languages as well.
So, the secret experiments that we've been performing with our production JVM: we've been working on refactoring the components within the J9 JVM, which is the core of the IBM JDK, to create this language-agnostic toolkit designed to be integrated into other language runtimes. Things like memory allocators, thread libraries, the platform port library, hook frameworks, tracing engines, garbage collectors, and JIT compilers.
A lot of the code that implements the things you just saw in Cliff Click's presentation, that are present in the IBM JDK, is now being refactored so that we can actually pull those pieces out and apply them in other language runtimes. We then performed a bunch of experiments to try to integrate some of those components into other language runtimes, and Ruby MRI and CPython are the two that we chose as basic proof points to see how well this experiment works. Now, this is a lot of work, so it's been a long effort.
The idea is that you can take these components out, treat them as a toolkit, augment the toolkit, and reintroduce them periodically into those other language runtimes. It's a model of consumption, rather than just "here's a starting point, go wherever you want to go from there." We want to be able to build something in which you could actually continue to innovate and push innovations into multiple different runtimes.
What are the pain points that you need to worry about? We also ported our garbage collection framework (well, not the entire thing) into both of these runtimes, to provide some interesting capabilities on top of that. We also got verbose GC output, which is something that's not always available; it helps you understand what your garbage collector is doing. And we got some of the visualization tools and insight tools that we have around the garbage collection technology, and finally, a just-in-time compiler.
We incorporated some simple just-in-time compilers into these languages, which don't exist right now, and I want to report on how successful we were. So, method profiling. Just as background, IBM has a tool called Health Center which can connect, if you ask it to, to a JVM, and with very lightweight overhead it will tell you where methods are consuming time within the JVM.
So you can see here there's a histogram in the top pane, showing where the most time is being spent in this little application that this is demonstrating, and the lower pane is actually showing call stacks. So if something took a lot of time, you can break down where it was being called from most of the time. The way this gets implemented is actually by a trace point, which feeds call stacks periodically to the Health Center tool.
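The sampling mechanism described here, a trace point that periodically hands the current call stack to an agent which aggregates it, can be sketched roughly like this (the names and structures below are hypothetical, invented for illustration, not Health Center's actual interface):

```c
#include <assert.h>
#include <string.h>

#define MAX_SAMPLES 64
#define MAX_DEPTH   8

/* One sampled call stack: innermost frame first. */
typedef struct {
    const char *frames[MAX_DEPTH];
    int depth;
} StackSample;

static StackSample samples[MAX_SAMPLES];
static int sample_count = 0;

/* Trace point: called periodically (e.g. from a timer tick) with the
 * current thread's stack; an agent drains `samples` on its side. */
static void tracepoint_stack_sample(const char **frames, int depth) {
    if (sample_count >= MAX_SAMPLES || depth > MAX_DEPTH || depth < 1)
        return;
    StackSample *s = &samples[sample_count++];
    s->depth = depth;
    memcpy(s->frames, frames, (size_t)depth * sizeof *frames);
}

/* Agent-side aggregation: how often was `method` on top of the stack? */
static int self_time_ticks(const char *method) {
    int ticks = 0;
    for (int i = 0; i < sample_count; i++)
        if (strcmp(samples[i].frames[0], method) == 0)
            ticks++;
    return ticks;
}
```

Self time for a method is then just the fraction of samples in which it was the top frame, and walking the deeper frames gives the caller breakdown shown in the lower pane.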
And so we took that same trace point and incorporated it into the Ruby runtime, leveraging all of the infrastructure that we had already ported into the runtime to handle the trace points and to connect to the Health Center agent, and for free we got Ruby method profiles. Again, the top pane shows time being spent in Ruby code, and the lower pane shows the call stacks, from the Ruby stack representation within MRI, of where those calls were being made from.
The second experiment we did was taking our scalable garbage collection framework and incorporating it into CPython and MRI. Our GC is type-accurate. It's actually a very scalable, high-performance, multi-threaded, parallel, concurrent, you name it, collector: all the buzzwords that Cliff mentioned in his talk about what you need to do in order to have a high-performance GC.
It is a type-accurate framework, of course, because it's used in Java, but in this case we're using it in a conservative sense, because the Ruby MRI garbage collector is conservative. The reason for that is primarily so that extension modules can continue to operate the way that they operate: they're not tightly aware of what the garbage collector is doing, so it's not careful enough and doesn't maintain type accuracy. And not breaking extensions is an important goal for us.
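As a rough illustration of what conservative collection means (a toy sketch, not the J9 collector): the collector treats any word on a stack or in extension memory that happens to fall inside the heap range as a potential pointer, and must keep the object it points at alive, because it cannot prove the word is not a reference:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy heap: conservative scanning pins any slot that a heap-range-looking
 * word points at, since the word might be a real reference. */
#define HEAP_SLOTS 8
static uintptr_t heap[HEAP_SLOTS];
static int pinned[HEAP_SLOTS];

/* Return 1 if `word` looks like an aligned pointer into the heap. */
static int scan_word(uintptr_t word) {
    uintptr_t base = (uintptr_t)heap;
    uintptr_t end  = (uintptr_t)(heap + HEAP_SLOTS);
    if (word >= base && word < end &&
        (word - base) % sizeof(uintptr_t) == 0) {
        pinned[(word - base) / sizeof(uintptr_t)] = 1;
        return 1;
    }
    return 0;
}

/* Conservatively scan a range of words (e.g. a thread's stack). */
static int scan_range(const uintptr_t *lo, const uintptr_t *hi) {
    int found = 0;
    for (const uintptr_t *p = lo; p < hi; p++)
        found += scan_word(*p);
    return found;
}
```

The cost is that a plain integer can accidentally "look like" a pointer and keep garbage alive, and pinned objects cannot be moved; the benefit is that extension code holding raw pointers keeps working unchanged.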
So we decided to just step down and do conservative GC. Another advantage that we got from introducing this collector technology: in Ruby MRI, the heap is actually a set of slots that are constant size, and most of the memory is actually hung off of the heap in native memory. What that means is it's really hard to control what the size of the Ruby heap is, because most of the data is actually somewhere else, and it's not in a managed environment.
We were able to make some changes in MRI (now, this wasn't as low-touch as some of the other changes that we've made), but we were able to instrument and find all those allocation points and move that data from native allocation onto the heap as variable-sized objects, and now the GC can manage all that memory. So if you want a four-gig heap, you can get a four-gig heap.
One of the interesting capabilities that we got out of that was verbose GC output, which I don't believe Ruby MRI has. You can just say dash verbose GC, and bam, you get the XML output that's shown here. Sorry, it's a little bit small, you probably can't read it, but it's basically giving a lot of detailed information about what the garbage collector activities are: what GC phase is happening, how many objects is it finding?
How much memory is it collecting? How much memory is left on the heap at the end of the cycle? How long did the cycle take? And so on. Similar to the method profiling view, Health Center can also provide a GC visualization. It can take the data that I just showed in text and XML form on the previous slide and, using trace points, get the same data to the Health Center tool, which can then provide a visualization of what's going on during GC: pauses, when they're happening, how long they take, how much data is being collected; again, the same type of data, and it can visualize that for you. We actually didn't have to do anything here, because the GC technology that we picked up out of J9 already has the trace points in it. We put it down inside of Ruby MRI, did some stuff around the edges to make it work inside the Ruby runtime, and the same tool, with zero changes, could provide a visualization of the GC activity that's going on. Oh, and we could also do that for CPython. Sorry, I didn't have a graph of that for the method profiling stuff before, but we can also do this for CPython; same basic premise, we just picked it up.
Another tool that we have is called the Garbage Collection and Memory Visualizer. It's a tool that can provide deeper insights into the activities of the GC, and it can provide tuning recommendations: options and settings that might help tune the performance of the garbage collector. It's based on the verbose GC output that we just saw, so because we get the verbose GC output, we get this too.
Just-in-time compilers. Okay, so Ruby MRI and CPython: neither of them has a JIT compiler. There are other Ruby implementations, like JRuby and Rubinius, and other Python implementations, like PyPy, that have JIT compilers, but the reference implementations, which are written in C, do not have JIT compilers. There are lots of reasons why JIT compilers are tough to build for these languages.
Sorry: unmanaged, direct access to VM data structures. Because it's C, you can write extensions in C, and they can reach back and twiddle things and change things, all unbeknownst to anybody. That makes it a real challenge for the JIT compiler to actually optimize anything, because it has to maintain the state that anybody can reach in and look at and see; and if you do optimize it away, you'll break something, which kind of goes against the compatibility concerns that we had. Finally, there are some design choices in the runtimes themselves that make it difficult to write JIT compilers for them.
One of the examples here is that Ruby MRI uses setjmp and longjmp for exceptional control flow, things like iterating and calling blocks, and this is actually quite an interesting use of setjmp and longjmp that makes it rather difficult to incorporate natively compiled code into the same environment.
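To make the control-flow issue concrete (a minimal standalone sketch, not MRI's code): longjmp transfers control straight back to the matching setjmp, skipping every frame in between, so a JIT-compiled native frame sitting in the middle of that unwind never regains control to run its epilogue or keep its state consistent:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf on_break;      /* set where the "block iteration" starts */
static int cleanup_ran = 0;   /* stands in for a compiled frame's epilogue */

/* Stands in for a natively compiled frame between setjmp and longjmp. */
static void compiled_frame(void) {
    longjmp(on_break, 1);     /* e.g. a non-local exit out of a block */
    cleanup_ran = 1;          /* never reached: the frame is skipped  */
}

static int iterate_with_block(void) {
    if (setjmp(on_break) != 0)
        return -1;            /* the non-local exit lands here */
    compiled_frame();
    return 0;
}
```

A JIT has to either avoid keeping any state in such frames that needs unwinding, or teach the runtime's non-local exits about compiled frames, which is exactly the kind of invasive change this project is trying to avoid.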
So our efforts to date here have really focused on trying to get to the point where we have compiled native instructions. We want to avoid making big changes to the runtime, so that if this becomes of interest to these communities, it wouldn't be too hard to adopt these compilers into those runtimes.
We want to provide consistent behavior between compiled code and interpreted code, so we don't have those compatibility and migration issues that we were talking about. We don't want to place any restrictions on the use of native code and extension modules for either Python or Ruby. And the last point, I guess, is just that this isn't a super mature effort at this point: we haven't done any benchmark tuning, we haven't done any benchmark specials, and we're really not exploiting very much profile information about what types are flowing through the program.
At this point, we're really just trying to do a proof point, an experiment, to demonstrate that it's possible to pick up the compiler from J9, put it down inside one or both of these other runtimes, and get compiled-code performance for relatively little effort. Remember that a recurring theme here is that we want to maintain compatibility.
We want the community to be able to stay together while they're picking up all the code that we're producing. And actually, the proof here is that we can run Rails. That's true for all of the stuff that I've talked about today, for the method profiling support, the GC technology, and the just-in-time compiler support: we can run Rails, and that's a pretty strong statement, because Rails is not an easy thing to get working, all right?
Just a little bit of background on the just-in-time compiler component that is inside J9. It's called Testarossa, and it was originally built as a Java just-in-time compiler. It has a fairly standard flow: you build the intermediate language from the bytecodes; that flows through a sequence of optimizations, which improve the quality of the intermediate language; and that then flows down through a code generator, which can generate code for POWER, x86, or mainframes. And there are a lot of queries that the compiler has to make.
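That standard flow, bytecode lowered to an intermediate language, optimization passes over the IL, then a code generator, can be caricatured in a few lines (toy structures invented purely for illustration; Testarossa's real IL is far richer):

```c
#include <assert.h>
#include <stddef.h>

/* Toy IL: a tiny stack machine lowered from "bytecode". */
typedef enum { IL_CONST, IL_ADD, IL_RET } Op;
typedef struct { Op op; int value; } Insn;

/* Optimization pass: fold CONST a, CONST b, ADD into CONST (a+b). */
static size_t fold_constants(Insn *il, size_t n) {
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (out >= 2 && il[i].op == IL_ADD &&
            il[out-1].op == IL_CONST && il[out-2].op == IL_CONST) {
            il[out-2].value += il[out-1].value;  /* fold in place */
            out -= 1;
        } else {
            il[out++] = il[i];
        }
    }
    return out;  /* new, possibly shorter, IL length */
}

/* "Code generator": an interpreter stands in for native emission here. */
static int run(const Insn *il, size_t n) {
    int stack[16]; size_t sp = 0;
    for (size_t i = 0; i < n; i++) {
        switch (il[i].op) {
        case IL_CONST: stack[sp++] = il[i].value; break;
        case IL_ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case IL_RET:   return stack[sp-1];
        }
    }
    return 0;
}
```

The shape of the pipeline is the point: the optimizer and the back end only ever see IL, which is what lets the same compiler serve many source languages and many targets.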
The compiler manages the code caches and the metadata associated with the code, so that things like stack frames can be walked, and so on. It was built as a Java just-in-time compiler; however, over recent years it's actually been expanded to target eight different languages, including COBOL, and an esoteric language that only exists inside IBM. None of you will have heard of it, but our economy probably depends upon it fairly substantially. All right, so: results. This is bench9000 performance, a subset of the bench9000 benchmarks.
It's not a particularly awesome set of benchmarks to use to measure just-in-time compiler performance. There are lots of compilers that do a lot better than this, and I'm not going to say that we have fantastic, stunning numbers here. But I think if you put them in the context of the requirements and the constraints that we're operating inside, trying to maintain compatibility and ease of adoption within the runtime, these are actually pretty good results. So, on x86, which I know is the platform everyone here claims to care about,
we have three benchmarks that are above two-times performance, and everything is above one, some of them not very much above one. Like I said, we haven't done any benchmark tuning here at all; this is just basically the results that we get from a straightforward application of our technology into this runtime. There's lots of low-hanging fruit here that we know of that would make this a lot better: optimizing things like method dispatch for jitted calls, and so on.
These are not as great results as we saw on x86, and obviously we will be very motivated to go and look into why those differences are there, because in principle we should be getting relatively similar performance on POWER as we would get on any other platform; we haven't done any target-specific stuff right now. We've basically just gone through the process of converting the bytecode into our IL, excuse me, running a small set of optimizations on that IL, and then just letting it flow through the code generator. And so, for free, once we had done that, we got x86, we have POWER, and we also have it running on our mainframes. All of these are on Linux, for example, and again a similar kind of performance profile for our mainframe systems; that was basically for free.
We didn't have to do anything special to get that. The final proof point here is the fact that we actually used this code base to release our IBM JDK 8, which shipped at the beginning of the year. You could argue that if we were doing this work and it slowed down our ability to deliver our other releases, that would not really be a very successful thing, right?
That release coincided with the new z13 mainframes, and we did some pretty amazing stuff to exploit all the new features that were available in those machines, as well as a number of similar kinds of things on the Power architecture. So we're doing a lot of work across platforms (we do have a big team working on this), but we managed to do all of this aggressive refactoring and take that code and reuse it inside of other runtimes at the same time. I think that's actually a pretty impressive thing.
We also managed to release some COBOL compilers at the same time. So we think our results are pretty promising, but it's all secret, so, like, who cares, right? Well, you know, I am here talking about it. We realized very early on that these are not just problems that IBM is facing. IBM is very large and has all of these concerns all together, but that doesn't mean these concerns aren't also true for everybody else
who's using different language runtimes. I think most of the people in this room can probably read through this list of challenges that IBM faces: we own our own platforms, we own our own operating systems, we're building cloud and PaaS infrastructures, we own lots of different tools that have to access lots of different runtimes, we have large data centers that benefit from optimizing and improving resource usage for efficiency, and we have lots of developers and customers who are working with multiple runtimes.
So we think this is really a common set of problems across our industry, and we would all benefit from having these language-agnostic components that we could use in a number of different ways. And even IBM has figured out that these languages really exist in the open. That means, if we really want to have the broadest possible access to the components that we're talking about, we have to create an open community around these language-agnostic components, and so that's what IBM is going to do.
Our vision here is to create a community of contribution around this toolkit of components that can be used to build VMs for any language. The fundamental toolkit is language-agnostic, and that means it's a good place for all of us, individuals, communities, and corporations, to work together and safely collaborate on building new and better VM infrastructure. You don't have to just use the infrastructure that's in one runtime; we can pull it all together and we can work on it together.
It will also have the benefit of providing a more robust core technology for all these language runtimes, because we're testing it in a number of different scenarios. We'll fix bugs once, not five times in five different runtimes, and it would be a great place to collect best practices, share learning, and do things like this conference to share what's going on in this area. It also would mean a lower barrier to entry for new languages.
So if there's this core toolkit of components available, and you want to create a new language runtime, it's relatively easy to pick up something that's language-agnostic and specialize it for your new language idea. And so we can test new ideas, get much more effective results, and test them more reliably.
No one says, "I'm going to write the file system from scratch." No one says, "I'm going to write the display drivers," unless you work for a video company. We'd like to make these statements just as unlikely: I don't want to have to write a cross-platform port library from scratch every time I want to work on a new language runtime. I don't want to write a new garbage collector from scratch every time I want to write a new language runtime. And I don't want to write a new JIT compiler from scratch either.
Now, there have been some projects in the open that are being very successfully used on some of these fronts, but we think a community built around trying to solve all of these problems together is really a very effective approach. So I'm keen to hear your feedback; we think our results are encouraging. What do you think?
On the idea? Yeah, the idea, the proposal. Thank you. I like it, two thumbs up. Awesome, thank you. So, some of the challenges that we're currently experiencing here: we would like to create an open-source project around these language-agnostic components that come from the J9 JVM technology. Our code's not quite ready to open up.
IBM, well, okay, I wouldn't say we've gone through all the hurdles there, but what we're working towards right now is an Eclipse Foundation project, which would mean it would be under the Eclipse license, a very friendly license for any type of use. Right, as I said on an earlier slide, we want these to be broadly accessible to as many people as possible, and that doesn't mean putting restrictions on how you can use it.
So, the issue of cooperative suspend came up earlier in the context of HotSpot. IBM's JVM is a fully cooperative suspend model, and it does have rather efficient primitives for figuring out how to lock and how to keep everybody in the right frame of mind. So, taking a parallel technique to that, not necessarily taking the code that we're using in J9, which is actually very complicated code to understand and track...
So we did have some issues incorporating the GC into CPython, and you might have noticed that the GC results that I presented were from Ruby. We've had more success with integrating the GC into MRI than with CPython. We do have some lingering issues, so I can't claim for sure that we've solved that problem, but we have done the work to integrate the GC into CPython, and it does work.
We didn't change reference counting, although we did do a few experiments on the side to try what would happen if we took away the reference counting aspect and just relied on a mark-sweep approach for CPython, with the obvious consequence that you lose timely finalization. And that's a migration issue, because there are lots of extensions that actually rely on timely finalization. So we kind of backed off from that as not really an interesting experiment to do by ourselves in secret.
So there are a number of them; it would not be a small effort. And so, in our view, we thought it would be better to stay compatible and see what we could do, see where we can get to. Hopefully that's a compelling argument to at least get involved and start working with the community. I think there are a number of things that we've learned, along the same lines as Cliff's talk, things that we've learned from building J9 over the past 20 years, that could be brought to a runtime like Ruby, that we could use to try to help change some of those fundamental problems. But as it is, I don't know; like I say, we can still make improvements.
It does have to be fixed; it does have to be fixed. It will definitely limit the viability, the effectiveness, of a JIT compiler. We'll never see numbers like JRuby shows for these types of benchmarks, I don't think. Maybe it's possible; you know, like I say, there's still lots of low-hanging fruit that we could push on.
I guess I'm just dubious that, within the constraints of the existing system, we could actually make that happen without making large-scale changes in some very complicated parts of the runtime. For example, exception handling, which is always an extremely tricky part of any runtime design, and in MRI it's not pretty either.
So, you know, the way Java gets around this is by the definition of JNI and the indirections that you perform in order to get access to anything that's actually within the managed runtime. So providing that same facility in another language like Ruby, along the lines of
maybe what Rubinius has done, is one way to get there. And the fact that it makes performance slower: I'm not sure that would be a compelling reason not to do it, because the advantage that you get by doing that is more freedom for the JIT compiler to optimize things. Right, Cliff mentioned this earlier.
Yes, so that's an issue. It wouldn't necessarily interoperate at the Java level, but we could at some point drive interoperation at a lower level. The API there, the specification of these lower-level components, is at a language-agnostic level. If you can surface the same API through different things, and you have the same connective tissue going on between the different runtimes, the opportunity for interoperability is there.
Nope, the GC is language-agnostic. Part of incorporating the GC technology into a language runtime is that the language runtime specifies what the object model is, and then the GC works within that. It queries the language to say: how do I find out what this thing is? Here's a pointer that you told me is live; tell me what other pointers it points at, what other blocks of memory it points at, and then I'll walk those ones too.
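The division of labor described here, a language-agnostic collector querying the runtime's object model through callbacks, looks roughly like this (a hypothetical interface for illustration, not the actual J9 glue):

```c
#include <assert.h>
#include <stddef.h>

/* The language runtime describes its object model to the collector
 * through callbacks; the marking logic itself stays language-agnostic. */
typedef struct Object Object;
typedef void (*RefVisitor)(Object *ref, void *ctx);

typedef struct {
    /* Enumerate the heap references held by `obj`. */
    void (*for_each_reference)(Object *obj, RefVisitor visit, void *ctx);
} ObjectModel;

/* Toy language object: up to two outgoing references. */
struct Object {
    Object *refs[2];
    int marked;
};

static void toy_for_each_reference(Object *obj, RefVisitor visit, void *ctx) {
    for (int i = 0; i < 2; i++)
        if (obj->refs[i])
            visit(obj->refs[i], ctx);
}

/* Language-agnostic mark phase: walks whatever the model reports. */
static void gc_mark(Object *obj, const ObjectModel *model);

static void mark_visitor(Object *ref, void *ctx) {
    gc_mark(ref, (const ObjectModel *)ctx);
}

static void gc_mark(Object *obj, const ObjectModel *model) {
    if (obj->marked) return;
    obj->marked = 1;
    model->for_each_reference(obj, mark_visitor, (void *)model);
}
```

Only `toy_for_each_reference` and the `Object` layout belong to the language; `gc_mark` never inspects an object directly, which is what lets the same collector serve Java, Ruby, or Python object models.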
That's a good question; we haven't measured that. It would have an impact. The JIT that we've currently incorporated is synchronous, so you block to compile code. The underlying technology that we have in J9 is multi-threaded and asynchronous; we just haven't taken that code and brought it across into these runtimes. So in principle we could get that same startup performance.
Startup performance is something that J9 is well known for doing very well at; we have things like shared classes and dynamic ahead-of-time compiled code. So bringing those capabilities over into these other language runtimes should help with those types of issues as well.