Description
This online meetup is about Eclipse OpenJ9 and Jenkins. The discussion will be led by Steve Poole (Eclipse OpenJ9) and Tracy Miranda (https://github.com/tracymiranda), where they aim to shed light on some questions:
Eclipse OpenJ9 is a fully open sourced virtual machine designed to run Java applications cost-effectively in the cloud. Jenkins already features significant JVM tuning work to enhance performance. Can Jenkins stand to gain any performance boosts by taking advantage of Eclipse OpenJ9 and its optimizations? How can the two open source communities collaborate to drive improvements for Jenkins running in the cloud?
Q&A | chat on #jenkinsdev Gitter channel.
B
Hi everybody, welcome to our Jenkins online meetup today on what Eclipse OpenJ9 can do for Jenkins. I'm Tracy Miranda, and one thing about me: I really love open source, and the best thing is when you get two different open source projects together, see what happens, and see what magic can come out of that. So just before we get into the presentation and our guest speaker for today, I've got a short agenda here, so I'm going to go through a couple of announcements.
B
So just a few announcements first, in particular the events we have coming up where you can see more of the Jenkins community in person and learn more about Jenkins. First up, we've got the DevOps World | Jenkins World conferences: one in San Francisco in September, and one in October in Nice, France, which will be the first one in Europe. Especially for the open source community, we've got a discount code, or if you want to try winning a free pass, there's a contest that closes today, Thursday.
B
So those are two ways you can join us at a conference. Before those specific conferences we also have contributor summits, and just to be clear, these summits are free to attend and are a big gathering of various Jenkins contributors, new and old. We'd love to welcome anyone who's thinking about becoming a contributor, and by contributors we don't just mean code contributors: we'd also love to welcome folks who get involved in running meetups, or in documentation, or any way you feel part of the Jenkins community.
B
If you come along to this, you'll see a link to the meetup at the bottom, which shows you where you can go register. The events are free, but you do have to register so we know you're coming. One other event worth mentioning is the Day of Jenkins as Code, which is in October in Copenhagen, just before Jenkins World in Nice, so I know some folks in the community are going to go along to both of these back-to-back. If you can join us at one of these events, that would be really great.
B
Now, the next set of announcements I just wanted to highlight for folks joining us: in the Jenkins community we now have special interest groups, and these are areas where folks can get together around specific topics. So far we've got about four groups running: the Chinese Localization SIG, the Cloud Native SIG, a Google Summer of Code SIG, and the Platform SIG.
B
You can check these out on the jenkins.io SIG page. In particular I wanted to highlight the Platform SIG, because this is a special interest group we're going to use as a venue for all kinds of platform support discussions. That could involve versions of Java, operating system support, packaging, web containers, just all elements of that. And actually the meeting we're having today falls under that category.
B
So this is, well, it's a Jenkins online meetup, but we're going to run half of it in the style of a Platform SIG meeting. What this means is that you can join us in the Gitter chat if you want to ask questions or get involved in some of the discussions, and we also have a live document where we'll be capturing questions and anything that gets discussed.
A
Yeah, good, okay. Well, thank you for the invite. I'm going to take you through, quickly, 20 minutes or so if I can keep to time, about OpenJ9. I'll explain a bit about where it comes from and the big benefits that we see, and then obviously we can see how that fits in with Jenkins, Jenkins being something that I've had a soft spot for for a very long time.
A
So one of the things that we do a lot when we're building this stuff is we use things like Jenkins and all the build tools to create our own builds, and the AdoptOpenJDK team, where we're making the OpenJ9 and HotSpot binaries that can be downloaded, uses Jenkins as well. So it's quite a cool relationship here.
A
So the first thing is: if you want to try it out, you can go get it from this website. Sorry, I'll talk about the website in a second. OpenJ9 was contributed by IBM late last year, as you can see, under dual licenses, which is important: you get choice. And if you want to participate and contribute to it, there are various links there that you can go off and join in.
A
This particular talk is focused around cloud, because that's a big driving factor in all the things I talk about. The fact that it's about cloud performance isn't meant to suggest in any way that the OpenJ9 JVM is not fit for other activities; this just happens to be the thing that's most relevant to us nowadays. So as a Java developer you know the background of Java performance prior to cloud: we ended up driving Java toward a particular performance
A
shape. You've probably seen lots of benchmarks that look very similar to these sorts of shapes, and you end up with this traditional performance profile: as you can see, it takes time to reach maximum throughput, and throughput and memory use are sort of interchangeable because of the tight relationship between them. We know that in many of these applications you end up with this sort of lead time to get going, and some big lag, the hump at the top.
A
That disappears eventually, and that's the traditional profile. But with cloud we've now got new things coming along: cloud comes with new profiles. Compute on tap, which is the thing we were all looking for, has finally arrived, and we're now making full use of it. It has a significant impact on the shape of the applications that we want to run, and one of the reasons we have a big challenge is that compute on tap has given us a better understanding of the relationship between compute and money.
A
Whereas before that relationship was not completely obvious, now with cloud, where you buy things by the amount of RAM or the amount of bandwidth or whatever, it becomes really obvious: when you want to buy some more capacity, you have to spend money. So the relationship between compute and money is now staring everyone in the face, and obviously our accountants are interested in making sure that we get value for money. So from a developer's point of view:
A
The equation looks like this. The big figure that we talk about on Amazon or IBM Cloud or wherever is gigabytes, because what you tend to buy is memory. So from a developer's point of view, that actually translates to this: if you're a Java developer and you've set some heap size, it costs some amount of money. From now on, if you increase the heap size for whatever reason, then someone's going to say, hey, that's going to cost me another dollar, another ten dollars, or whatever, and so people are scrutinizing that.
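The heap-size knob being described here is the standard `-Xmx` option. As a hedged illustration (the jar name and sizes below are made up, not from the talk), the cost difference between two deployments might look like:

```shell
# Hypothetical example: the same service with two different max-heap settings.
# On a cloud plan billed per GB of RAM, the second invocation roughly doubles
# the memory bill for this instance.
java -Xmx512m -jar service.jar   # fits in a 1 GB instance
java -Xmx1g   -jar service.jar   # needs a 2 GB instance
```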
A
If you move that type of application to the cloud and you buy the equivalent cloud resource for the machine that you had in your data center, then suddenly you realize that for lots of the time, when the service isn't being used, isn't at peak, you're still paying for compute power. Your accountants and so on will be looking and asking what you can do to reduce your cost, because you're wasting money. So obviously what we've done is we've looked at it and said: how do you fit the line better?
A
Well, you have smaller units, smaller compute units that live shorter lives, to map the curve, and nowadays it looks quite like that. To be honest, you can see just from this picture the economic pressure that drives why we're doing things like microservices: to give us the scaling to fit that demand curve. So that's cool.
A
It still doesn't quite come down to what you want the JVM to do. The demands on the JVM from these sorts of environments are: smaller memory footprint, because it costs money; smaller deployment sizes, because I've got to ship things up to the cloud on a repeating basis; and, beyond those, I need my application, my microservice, to start really quickly, because if it doesn't start really quickly then I'm not dealing with the workload, and so on. And then, by the way, when I'm not doing anything,
A
when my application is idle, really idle, I don't want to be spending any money, and I want to be doing what I can to reduce my costs. So let me put that differently, with a different picture. Take this picture: let's go back to that original throughput curve. If you've done any sort of profiling, you know that you get these sorts of profiles. Even if we break the application into little microservices, you end up with the same shape, even for the microservice.
A
You might end up being pushed into a bigger tier just to get the thing started, and after that you don't use that headroom. So that's a challenge. What we're really looking for is a throughput line that looks like that: very quick to start, without peaks and troughs while the VM or the application is getting started. The workload itself is a different thing. So this means, from a cloud point of view, you end up with all of these obvious changes.
A
You care very much more about where memory is being used, and of course you also know, from a microservice point of view, that it multiplies very quickly, because we're not talking about one JVM, we're talking about loads of them. And if you're running in the cloud or in a Kubernetes cluster, you have the same sort of basic economics: whatever money you spend, somebody will be looking at you and asking, can you make it cheaper?
A
So all of those were drivers that we were aware of; it's the sort of thing that, as users looking at trying to adopt cloud, we know we need to do. But there are more drivers, and those come from what the cloud provider needs and what the cluster provider needs, and that might be more relevant to those of us who run Jenkins servers and so on.
A
The physical change is that, though when we talk about cloud we imagine lots of small bits of hardware, it's actually going the other way, because hardware quality and price points are pushing us into situations where people buy large machines and divide them up into small units. So it's not lots of small machines.
A
It's big machines with lots of memory and lots of CPUs, and that gives us a challenge. When you take a traditional application, like the blue boxes on the left, there were lots of opportunities for sharing between those applications, whether it's just traditional Java sharing, whether it's the fact that you're running two apps on the same JVM, or two applications on the same app server. There were just natural places for sharing.
A
So the providers are trying to find ways to solve this problem as well. How do I reduce my memory usage, because that saves me on physical hardware? How do I arrange the machine so that, when an application is not busy, whatever it's being used for, whether it's CPU or memory, I can reduce its usage? Not so that the application will notice, but
A
so that the provider will notice: we can free up those physical resources and use them for something else, and obviously that means, if you can do that, you can get more workload out of the same box. And, as it says at number three, we need to figure out how we get these things to start really quickly, because there's orchestration now. So that's where we get to OpenJ9, and, as I said, OpenJ9 is not just a cloud-based VM, it's a long-running VM, and I'll talk about that.
A
As I said, it was open sourced in September last year, and we contributed it to Eclipse. We became a contributor to Eclipse because we think there are lots of things that need to change in the Java space, and there's no good us trying to beat people up; we need to be contributing and sharing what we've got. We had some cool tech, we want to share it, we want people to make use of it, and we want to start building communities around it.
A
Where do we need Java in the cloud to go, and what do we need to have happen in the VM space? The best way to drive those things is to offer up what we have already and demonstrate that we can actually provide some of these capabilities. So, OpenJ9 actually started on a small phone. You may not even remember these; that's partly what it was designed to start from, and that was really cool. It really makes a difference, because we've come from something that's small, which has a particular set of operational requirements.
A
If you've ever had a phone like this, you will know that you don't have much memory and you want your games and things to start up really fast, because you're not going to wait for things to start; you're just going to go use a different game or whatever. And you don't want your game to get better as it runs; you don't want to be in the situation where the graphics improve over time.
A
You need immediate throughput, immediate maximum throughput, and those characteristics are quite similar to what we just said about Java in the cloud: the footprint size, the startup time, the ramp-up rate, these are all very similar. So when we look at OpenJ9, we can say it actually has that built into it.
A
It has had those characteristics from the beginning, and though it may have started on one of these small things, IBM has taken it and run it on the largest machines that you can get: large mainframes, 32 terabytes of RAM, loads of CPUs, and so on. So this architecture and this code run from small to big, from the smallest to the largest.
A
So let me talk about how it works. The big thing in terms of sharing is OpenJ9's shared classes technology. In that diagram where I've got these yellow boxes, the arrows show where we used to have sharing capabilities. Of course, when you move into clustering and into Docker containers, those sharing opportunities disappear, but what we can do with shared classes is add that capability back and allow you to share state, share classes.
A
The second option on the screen is the -Xscmx option to set the size of the cache, but the first one is all you have to do: you just turn it on. The reason it works is because of the way that we deal with class files, and obviously sharing is about what you have in common. What happens is we take the class file when we're loading up classes; from a VM designer's point of view, the class file format is not optimal for what we're trying to do.
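The two flags mentioned here are real OpenJ9 options; a minimal sketch of turning sharing on (the cache name and size below are illustrative, not from the talk):

```shell
# Turn on OpenJ9 shared classes; the first run populates the cache, and
# later JVMs on the same host start faster by reusing the shared ROM classes.
java -Xshareclasses -jar app.jar

# Optionally name the cache and cap its size with -Xscmx.
java -Xshareclasses:name=myapp -Xscmx64m -jar app.jar

# Inspect what's in the cache with the printStats sub-option.
java -Xshareclasses:name=myapp,printStats
```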
A
The class file gets broken down, as part of loading, into two parts. One is the ROM class, where we work out all the stuff in your class file that can be shared, the static stuff and other things that we can figure out you're not going to mutate, and we stick that in the ROM class; then we put everything else, the stateful things, in a separate class. So now we have built in a real separation between stateful and non-stateful.
A
You can now share all the ROM stuff, because it's non-stateful and it's accessible by all the VMs, and that immediately reduces your footprint. It also gives you faster startup time, because the class that you want to load in JVM 1 or JVM 2 is already loaded and has been cached. And, as I said, you can share the ROM classes across boundaries; it's not tied to a container or a VM space.
A
If you can get access to the filesystem in a common way, then you can share this data, and just doing this, just doing the ROM classes, gives us a 20% footprint reduction. Startup is also a movable feast depending on your application, but it definitely improves, because simply loading these pre-processed classes makes a big difference, and we can share the JIT code as well.
A
So we can provide this dynamic AOT capability. Once the JIT has kicked in, compiled the code, and produced the optimized JIT code, that can be cached, so the next time you start up a VM the code can be immediately loaded; you don't even have to recompile it. That gives you immediate startup performance. Now, there's some wordage here about the differences in how you run AOT, but effectively,
A
if you do it right, then you get significant improvements in load times, because we're loading up code that's already been optimized and has been structured to fit straight into a JVM, so there's none of the usual heavy lifting that takes place when you're compiling at runtime. There are a couple more pages here on options; there's a whole bunch of things you can do to tune OpenJ9 to get the throughput you want.
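With -Xshareclasses enabled, OpenJ9 also stores dynamically AOT-compiled code in the same cache. As a hedged sketch of the sort of setup being alluded to (the cache name and directory are assumptions, not from the talk):

```shell
# Persist the shared cache (ROM classes + AOT code) to a known directory so a
# freshly started JVM, or a container mounting the same volume, can reuse it.
# -Xtune:virtualized favors AOT and lighter JIT activity, suited to
# short-lived cloud JVMs.
java -Xshareclasses:name=jenkins,cacheDir=/opt/jvm-cache \
     -Xtune:virtualized \
     -jar jenkins.war
```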
A
I won't go through them in detail. Then there's something else, which again is useful if you're a provider, so sorry for all the words on the page. When your application comes to the point where it has gone idle, if the JVM knows that's happening, it tends to just stop, because there's nothing to be done. In general that means there's no GC running; your application just stops, and so nothing happens, and that means that if you've got garbage sitting in memory, it doesn't get collected.
A
And that's a shame, because if you collected that memory, then you could in theory give some of it back to the OS. So there are capabilities in both HotSpot and OpenJ9 to do things like this. With HotSpot, the developer has to figure out when the application is idle and has to do the heavy lifting; with OpenJ9, you turn on the right option and we figure it out for you. As part of that there's a whole bunch of heuristics to work out whether your application is idle.
A
So, just finishing up on that: if we can figure out when you're idle, then we can do a GC, clear up garbage, and give the memory back; and then at the point where your application becomes non-idle, you've already been GC'd, so you can kick straight on and not have to worry about running a GC as soon as the workload picks up again.
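The idle-detection behavior described above is controlled by OpenJ9's idle-tuning flags; a sketch, assuming the default heuristics are acceptable:

```shell
# Ask OpenJ9 to run a GC when it detects the JVM is idle, compact the heap,
# and release freed pages back to the OS. The wait time is in seconds of
# quiet before the idle actions kick in.
java -XX:+IdleTuningGcOnIdle \
     -XX:+IdleTuningCompactOnIdle \
     -XX:IdleTuningMinIdleWaitTime=180 \
     -jar app.jar
```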
A
So here are the results. The first one is startup time. We ran a whole bunch of different sorts of benchmarks and tools just to see what we get: roughly 30% faster startup if you press the right buttons. Footprint is dramatically reduced, and you just have to turn on shared classes and away you go; it really is fantastic. You get a really good improvement just by using OpenJ9, but if you turn on shared classes and quickstart,
A
it's just great. And this is complicated: there are two charts here which are supposed to explain what happens, in terms of performance benefits, from an idle point of view. The top one is with no idle detection turned on: the top line is the blue process memory, below it the Java heap, and on the bottom of the chart there are some active/idle markers. Every time you're idle, you have an opportunity to give back some of the process memory, and in the chart below,
A
with it turned on, you can actually see that happening. You can see the little blue cutouts, and that's where OpenJ9 has figured out that things are idle and can actually do a GC and give back some of the memory. As I said, the benefit from that, of course, is that you've had a GC happen, ready for when you actually go non-idle. So, coming back to this diagram, here's some real data.
A
This is OpenJDK 9 with HotSpot, from a particular benchmark (you'll get a similar shape depending on which benchmark you run), and you can see it took OpenJDK with HotSpot some time to reach its maximum throughput. Here's OpenJDK 9 with OpenJ9: no bumps, and significantly faster startup.
A
The point I want to convey about Eclipse is that we want to use this as the base for discussions about the future of Java, and if we don't put out our stuff and have those conversations around our technologies, then it looks peculiar. IBM has been contributing code to open source for a very long time, so this is the right thing for us to do: get the technology out for people to try.
A
So if you go to AdoptOpenJDK, it's ready for you. You can get OpenJDK 8 or 9 or 10, maybe even 11 now, I'd have to check, and you can get it with HotSpot or with OpenJ9 and try it. These builds are all fully compliant and available for download, as zips and tars, JREs and JDKs, or even as Docker images, so you can go to Docker and pull down the same thing.
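The Docker images mentioned are published under the AdoptOpenJDK organization; for example (tag names may have changed since the talk):

```shell
# Pull an OpenJDK 8 build with the OpenJ9 VM and check which VM it reports.
docker pull adoptopenjdk/openjdk8-openj9
docker run --rm adoptopenjdk/openjdk8-openj9 java -version

# The HotSpot equivalent, for comparison.
docker pull adoptopenjdk/openjdk8
```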
A
That's really it, I think. Oh, there's one more page. Yeah, so I'm touting it because I think it is a really good technology. We've always been very proud of it, and now that we've open sourced it, we're still proud, and other people are trying it. You can see some comments there from people, and if you go Google, you're going to see other people saying that they just did the switch and got some really good improvements.
B
Right, thank you very much, Steve. Just a reminder to everybody: if you do want to ask questions on the live hangout, please go to our Gitter channel, which is jenkinsci/platform-sig, and you can ask questions there. I'll open it up to discussion, but I wanted to ask a few things first. First of all, in terms of operating systems, which operating systems does OpenJ9 support?
A
B
Excellent, okay. We've got a question here from Mark Waite. It says: are the license restrictions on using the OpenJDK + OpenJ9 project any less restrictive than the Oracle licenses? For example, am I allowed to distribute a local copy of the OpenJDK + OpenJ9 binaries to internal machines without downloading from the project?
A
In the sense that they're on Adopt: we're working with the AdoptOpenJDK team to provide the hosting, and, as I said really quickly earlier, the Adopt folks are hosting the downloads. They're building and making available OpenJDK with HotSpot and OpenJDK with OpenJ9 as binary downloads, traditional sort of tarballs, and Docker images, I think.
B
C
Yeah, I have a question, maybe about the history of OpenJ9. In the Jenkins project we have known issues with IBM Java; there are maybe 10 or 20 issues open for this platform specifically. Most of them are pretty old, and I wanted to ask whether you know about them, and about real compatibility issues with the recent versions.
A
Let's be clear that the code you get from Eclipse, the Eclipse OpenJ9 code, is exactly the same code that we use for any of the IBM binary builds. The objective is that there's only one code base. I believe there's a little tiny bit around the edges, for licensing reasons, that they haven't quite got out the door yet, but the code is exactly the same. So if you have compatibility issues and they're still true with the latest IBM Java, then we're going to have the same issues with OpenJ9.
C
A
This is the first time I've seen this list, and obviously we would like to make sure that, if there are challenges, we're aware of them and fix them. So I will go through them and see whether there's anything we can do to address them, or, if we know they're already addressed, then we can update the issues, if that's all right, yeah.
B
So I think, for what I was looking at, which was just startup, I couldn't really measure anything, because it was all within four or five seconds between HotSpot and OpenJ9. But I did see what you were talking about in terms of memory differences, just running OpenJ9 with Jenkins, just off the WAR at startup.
B
I could see that it was using 131 megabytes, and if I did the same thing with HotSpot, just default, no options passed in, it would come in at 495 megabytes. I did try setting an -Xmx value to have it try to get the garbage collection to run more frequently, but the best I could get HotSpot down to was about 250 megabytes, so yeah, still definitely a clear win. That was good to see, yeah.
C
A
We see the same performance-wise: HotSpot versus OpenJ9 on Java 8 is pretty good, but in general Java 9 is even better in performance across the board, so OpenJ9 is better on Java 9 than on Java 8. The memory footprint at startup is just better, so it's really worthwhile moving to 9, and this new structure, as soon as you can,
C
to benefit from it. But of course, for me, regarding the migration, one of the issues I want to address is Docker packaging, because once Oracle stops shipping official distributions for OpenJDK 8, we may have problems like Mark referenced there. So yeah, if we have options, I'm really happy to consider any options on the table.
B
One other thing I was going to say that might have been an issue when I tried it with Jenkins: when I went to do a Ctrl-C, I'm not sure it was responding. Now, this is sort of native signals, possibly considered unsupported features, but are you seeing much, with people adopting it, where they're using features that perhaps they get for free but aren't really part of the Java spec?
B
A
We've been trying to create a list of the edge cases. Obviously, in general, because of the history of J9, it's a very solid JVM, so we don't have any problem with it running 99.9 percent of the world's Java, but there are always going to be edges, and so if you come across any little edge, then obviously let me know, because we'll do what we can to fix it. Yeah.
B
A
That's a good question; it depends what you mean by support. If you're talking about paid-for support, then, well, the OpenJDK + OpenJ9 folks won't do that; IBM will. If you're talking about technical support, then, yeah, I don't know. That's not really an open source conversation; that's more a conversation for Adopt about what they want to do with Java and J9.
B
Okay, one other question I was going to ask. Someone had brought up that Jenkins does a lot of dynamic class loading, so perhaps things like AOT optimizations might not really be able to take advantage of that. Do you have any feeling on things with lots of dynamic loading?
A
As I said, from a VM performance point of view, J9 is just as performant as HotSpot in every way that we can make it, so there's nothing particular where I'd say OpenJ9 doesn't work as well as HotSpot in some specific circumstance. I would be surprised to hear if there was that much of a difference. So basically, as far as I can tell, and I've used both, HotSpot and OpenJ9 are pretty much compatible.
B
Okay, that sounds good. So yeah, now that we have these experimental images, I think that gives us a lot of scope to try some of the settings you mentioned, the -Xshareclasses and -Xquickstart, and really start to see how much we can tune it and, as you say, take advantage of just smaller images running in the cloud, yeah.
A
I mean, obviously because it's got an IBM legacy, we have more documentation than you could shake a stick at, covering all of the command-line options, of which there are many; there are lots and lots of options for tuning. So what I'd say is: get going, give it a go, and at any point where it's not working the way you expected, or not doing what you told it, just let me know and we'll help, because I don't really have any expectation that you should hit any roadblocks.
B
Okay, I'm going to take that as a no. So yeah, thank you very much, Steve, for coming in and giving us a rundown on OpenJ9. It looks really good. I think we'll be keen to try it, and in particular to get users in the community, if they want to, to go ahead and try it. It sounds like it's good to go: give it a try and see how it might improve your performance going forward.