From YouTube: OMR Compiler Initialization 20180427
Description
This Compiler Architecture meeting was a broader discussion on component initialization in OMR.
Meeting agenda: https://github.com/eclipse/omr/issues/2422
A: Okay, welcome everyone to this week's Compiler Architecture meeting. Today I actually don't have anything to present; I was hoping that we could have a discussion around a topic that has come up recently with regards to some initialization work that's happening in the compiler technology. I would think that a lot of the discussion we'll have today isn't necessarily around the compiler in particular, but more about OMR, and how the different components of OMR are initialized.
A
So
just
for
a
bit
of
background,
we,
the
compiler
technology,
recently
created
a
dependence
on
the
port
library
in
OMR
and
there's
a
number
of
advantages
for
doing
that.
There's,
you
know,
there's
a
fairly
good
discussion
on
the
issues
that
we
created,
that
we
followed
to
do
that
work
and
we
were
able
to
actually
make
that
dependence.
The
next
sort
of
step
in
that
in
that
for
that
work,
is
to
actually
initialize
those
initialize,
the
port
library
initialize,
the
JIT
technology
in
sort
of
the
right
places.
A: While we could hack something together, that isn't necessarily the cleanest way of doing it. In fact, there already are, in the OMR startup files, methods OMR_Initialize_VM and OMR_Shutdown_VM that take care of doing the initialization for the components that are actually enabled, and the initial thinking was that the compiler technology should just be part of that, right? So we started looking into what would be required for us to do that, and it turns out...
A
It
was
a
little
bit
more
difficult
than
than
what
we
had
expected
and
I
think
we
wanted
to
talk
through
some
of
those
issues
today
and
maybe
hope,
maybe
to
come
to
some
kind
of
a
an
agreement
or
a
direction
forward
on
how
we
want
to
approach
the
the
initialization
problem
in
NMR.
So
so
in
terms
of
one
of
the
issues
that
we
that
we
had
seen
so
there's
at
least
a
couple
of
a
couple
of
things
that
we
should
probably
start
to
talk
about.
A: One is that there seems to be a requirement in OMR_Initialize_VM that a number of other components are required as part of the initialization process. If we're looking at the philosophy of OMR, where you can pick and choose the components that you want and only build in those components...
A
It
doesn't
seem
right
that
there's
a
dependence,
for
example,
on
the
Omar
he'll
on
on
Health
Centre
during
the
initialization
process,
for
something
that
doesn't
need
it
or
for
a
component
that
doesn't
want
it
or
not.
So
the
thinking
there
is
that
perhaps
the
the
steps
that
are
in
that
initialization
need
to
be
more
guarded
for
the
different
components
that
are
that
are
that
are
enabled
in
the
bill
that
you
that
you
have
so
not
just
being
enabled
but
actually
present
as
well.
A: So perhaps we need more build flags around the different components to allow that, and I guess the question that then comes up is whether or not the way that we have architected OMR at the moment allows us to do that easily. I'll just pause there and see if anyone has any particular thoughts on that, especially those that have more detailed knowledge of the initialization strategy that we have currently.
B: Yeah, absolutely, it's just missing some wrapping. When that code was added, it was added prematurely; there are three-odd things that are started up and shut down that aren't under flags, and those should definitely just be wrapped with the right flags. It's on the list of to-dos.

A: Okay.
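The wrapping B describes, compiling each component's startup step in only when its build flag is defined, can be sketched as follows. This is an illustrative sketch only: the flag names (OMR_EXAMPLE_GC, OMR_EXAMPLE_JIT) and functions are hypothetical stand-ins, not the real OMR build flags.

```c
#include <assert.h>

#define OMR_EXAMPLE_GC 1        /* pretend the GC is enabled in this build */
/* OMR_EXAMPLE_JIT intentionally left undefined for this build */

static int gcStarted = 0;
static int jitStarted = 0;

/* Each component's startup step is guarded, so a build that excludes a
 * component never executes (or even compiles) its initialization. */
static int exampleInitializeComponents(void)
{
#if defined(OMR_EXAMPLE_GC)
    gcStarted = 1;              /* stand-in for the real GC startup call */
#endif
#if defined(OMR_EXAMPLE_JIT)
    jitStarted = 1;             /* stand-in for the real JIT startup call */
#endif
    return 0;                   /* 0 == success */
}
```

A build with only the GC flag defined would start the GC and skip the JIT entirely.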
A: So with that, I think that we could actually insert at least the initialization. We'll have to work on standardizing the interface for initialization that the compiler technology will have to create. I mean, there already is an initializeJit function, an extern function that gets called, but we'll probably have to document that a lot better than it currently is, so we'll do that.
A: The other thing that's going to come up here as well is what the requirements are for the initialization function, OMR_Initialize_VM. Right now it takes four parameters: an OMR_VM struct, I think an OMR_VMThread struct, and then the language VM and the language VM thread.
B: I'll grant that the language thread is probably something that a lot of languages may not have, because they may be single-threaded at runtime. A language VM is almost certainly something everything has; you're talking about full VMs, right? The OMR_VM and OMR_VMThread slots are the two things we have to pass in, but I believe we already handle the other two being NULL.
B: If they need to be NULL, they can be. The background of the OMR_VM struct is that it's kind of where everything is hung together for the components to use. Right now I know the GC and JitBuilder aren't tied into the startup APIs and such, but as they plug in, I think it's a great opportunity for some of the global structures that JitBuilder, and the compiler technology in general, use to actually move into the OMR_VM struct, or to be held off of it.
B: Again, if you're in a runtime that doesn't have anything, then maybe there's not much per-thread data there. But if there are more threads in a given runtime, I assume the compiler technology for sure, and JitBuilder hosts likely, will also have some per-thread data, which is kind of the idea of how that's all hung together in OMR.
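The calling pattern being discussed, passing NULL for the language VM and language VM thread when a runtime has nothing to hang there, might look roughly like the sketch below. The struct layouts and function body are stand-ins invented for this sketch; only the four-parameter shape comes from the discussion above.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in structs: the real OMR_VM / OMR_VMThread live in the OMR headers. */
typedef struct ExampleOMRVM {
    void *languageVM;               /* language-supplied VM, may be NULL */
} ExampleOMRVM;

typedef struct ExampleOMRVMThread {
    struct ExampleOMRVM *vm;
    void *languageVMThread;         /* may be NULL for single-threaded runtimes */
} ExampleOMRVMThread;

static ExampleOMRVM theVM;
static ExampleOMRVMThread theVMThread;

/* Sketch of a four-parameter initializer in the shape described above:
 * two OMR slots that must be filled, two language slots that may be NULL. */
static int exampleInitializeVM(ExampleOMRVM **vmSlot,
                               ExampleOMRVMThread **vmThreadSlot,
                               void *languageVM,
                               void *languageVMThread)
{
    theVM.languageVM = languageVM;
    theVMThread.vm = &theVM;
    theVMThread.languageVMThread = languageVMThread;
    *vmSlot = &theVM;
    *vmThreadSlot = &theVMThread;
    return 0;                       /* 0 == success */
}
```

A runtime with no per-thread structure of its own would simply pass NULL for the last two arguments.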
A: So yeah, I think that's kind of where this discussion was going to go: what do we want the consumers of OMR to actually provide? At the moment I don't think JitBuilder uses an OMR_VM, but perhaps it should, and if it will, what components of that structure do we want?
A: What is the minimum set of things we need to have set in the OMR_VM for a consuming language runtime? I think right now that structure is somewhat minimal, but we need to have an understanding of what all the different fields in there are, and what each actually means for a particular runtime.
C: I don't know; from my perspective, it almost seems like the VM startup and shutdown API is there for bringing up an entire language VM runtime, but JitBuilder isn't really part of that story, because it's just a general compiler framework. So maybe the VM startup and shutdown APIs should be bringing up a compiler component, but maybe that component isn't JitBuilder.
B: I sort of agree with that, except that in a lot of runtimes, I think JitBuilder will be the compiler technology that is used. JitBuilder is kind of different right now from a lot of parts of OMR; sort of, but not really, like the port and thread libraries, it could easily be used by a runtime just by itself. But if the runtime wants to consume more than one component from OMR, then these startup and shutdown APIs are my belief of how the runtime should be consuming OMR.
B: If you want a single component, and that component supports being brought up and shut down on its own, like JitBuilder the way it's currently packaged, then I feel any runtime is free to consume it that way. But if you want the opportunity to use port and thread (and JitBuilder now sort of has a dependency on port and thread), and you say you want to use the GC and RAS and these other features, these are the APIs to get in there and bring up the entirety of OMR for a runtime.
D: Yep, I'm hearing two use cases here that I think are valid for JitBuilder. One, as we've talked about before, is the native acceleration library case, where JitBuilder isn't inside a runtime per se; it's just being used as a library to generate native code. In that case it may not make as much sense for JitBuilder to slot into the initialization process of the VM.
D: What is the value of that? You can hide whatever you want behind initializeJit. initializeJit takes no parameters; it's just a call. So, in principle, initializeJit could create an OMR_VM structure, or a language VM structure of its own; it could create a language VM thread; it could do all kinds of stuff under the covers in order to match the API that the OMR_Initialize_VM machinery is supporting.
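The "hide it all behind initializeJit" point can be sketched as a zero-argument facade that creates, under the covers, whatever structures the richer startup path needs. Everything below except the zero-argument, nonzero-on-success shape of initializeJit is invented for illustration; these are not real JitBuilder internals.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical internal state that a richer startup API would want. */
static void *exampleOMRVM = NULL;
static int exampleVMCreated = 0;

/* Stand-in for the richer, parameterized startup path. */
static int exampleStartupWithVM(void **vmSlot)
{
    static int dummyVM;             /* stand-in for a real OMR_VM */
    *vmSlot = &dummyVM;
    return 0;
}

/* Zero-argument facade in the spirit of JitBuilder's initializeJit():
 * callers pass nothing; structure creation happens under the covers. */
static int exampleInitializeJit(void)
{
    if (exampleVMCreated)
        return 1;                   /* idempotent: already initialized */
    if (0 != exampleStartupWithVM(&exampleOMRVM))
        return 0;
    exampleVMCreated = 1;
    return 1;                       /* nonzero == success */
}
```

The consumer's view stays a single parameterless call, regardless of how much setup grows inside it.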
D: So we can make it work in such a way that I don't have to create anything: I can create an empty structure, I can pass a NULL or whatever, and that's fine. But the other mild concern I have is that, right now, our model for building everything outside of the compiler is: you specify a bunch of configure flags, you build it, you get a static library, and then you link against that.
D
So,
if
we're
going
to
have
a
bunch
of
different
tests,
we're
also
gonna
have
haven't
done
a
bunch
of
different
builds
to
rebuild.
You
know:
here's
here's,
the
build
of
Walmart
that
we
did
in
order
to
test
JIT
builder,
here's
the
build
of
our
that
we
did
in
order
to
test
the
test.
Compiler,
here's
the
build
of
lomar
that
we
did
to
test
everything
else
in
all
unless
we
can
figure
out
some
other
way
of
solving
those
different.
C: With regards to having one build that can build multiple compiler components, versus building different components in different builds: I think the direction that we took in the rest of OMR is that one particular build builds one particular thing, and I think that makes sense for the compiler as well, just having multiple builds, one for the test compiler, one for the check build, and one for JitBuilder.
D: That bridges to my other point: when you compile the file that has OMR_Initialize_VM in it for a build that's going to have a GC, RAS profiling, tracing, and all the other support, you're going to want all that stuff compiled in, right? But when you build the JitBuilder library, you kind of don't want all that stuff tucked in, because it's not really useful there.
B: In that case, you would probably just call initializeJit and have only the compiler built, because you just want JitBuilder. So I think you'd have turned off, basically, almost all the other compiler flags; sorry, wrong word, not compiler flags, build flags. I think you would have turned off the GC, RAS, whatever our core library is called, to have this startup and shutdown API that Rob is actually just working on renaming and cleaning up right now.
C: I'm thinking that maybe the thing that makes the most sense is to have JitBuilder have its own complete set of initialization APIs, and keep those maintained, and then layer the VM or runtime initialization on top of that as a separate library. So when you do a full build of OMR, you can still consume JitBuilder standalone, but you can also use it via the mega-startup behind it.
D: Right now JitBuilder is very simple to use: it's all-in; you know, don't worry about anything else. That's part of its attraction. You don't want to say, okay, now you have to do explicit this-and-that, and every time we want something new we have to update what the initialization does. You might rather hide that behind initializeJit. If JitBuilder needs the port library, JitBuilder just initializes those parts itself.
E
Okay,
well,
they
look
for
find
one,
that's
already
the
short-lived
area.
That's
what
you're
trying
to
read
you're
trying
to
not
have
five,
and
then
you
want
to
configure
the
same
way.
The
first
first
person
who
hit
be
to
initialize.
Did
it
right,
you
read,
and
anyone
else
has
I
have
small
thinking
of.
D
Us
so
a
consumer
of
shipbuilder
would
call
initialized
kid,
which
would
then
dive
down
into
whatever
the
Omar
initialization
process
is
configured
for
brightleaf
or
how
Jeb
builder
wants
to.
And
then,
if
someone
were
doing
the
full-on
one-time,
initialization
thing,
they
would
just
call
and
lesbian
themselves
appropriately
and
maybe.
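E's "first one in initializes it, everyone else reuses it" idea is essentially reference-counted acquisition of a shared structure such as the port library. Below is a sketch with hypothetical names; a real version would also need locking, as comes up later in the discussion.

```c
#include <assert.h>
#include <stddef.h>

static void *sharedPortLib = NULL;  /* one per address space in this sketch */
static int portLibRefCount = 0;

/* First caller creates the shared library; later callers reuse it. */
static void *exampleAcquirePortLib(void)
{
    static int portLibStorage;      /* stand-in for the real port library */
    if (NULL == sharedPortLib)
        sharedPortLib = &portLibStorage;    /* "first person in" initializes */
    portLibRefCount += 1;
    return sharedPortLib;
}

/* Last caller out tears it down. */
static void exampleReleasePortLib(void)
{
    portLibRefCount -= 1;
    if (0 == portLibRefCount)
        sharedPortLib = NULL;
}
```

Two consumers, say a JIT and a collector, would thus end up sharing a single port library without either needing to know about the other.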
A: Yeah, I liked that idea, Charlie, actually, because I like the fact that you'd have one central place to go to do the initialization, right? It's just behind an API, and you can make the implementation of that as small as you can, so that JitBuilder, if it wants to use this, can actually link this thing in. Maybe it's only a few kilobytes, so maybe it isn't a huge footprint impact.
B
C
I'm,
so
my
concern
with
having
initialized
it
wrapped
VM
startup,
is
that
if
JIT
builder
ever
ever
became
a
library
that
was
installed
on
a
system
and
GC
builder
was
also
a
library
that
may
or
may
not
be
installed
on
the
system,
then
what
does
like
you
no
longer
can
rely
on
build
flags
to
tell
what
you're
bringing
up
it
just
sort
of
depends
on
the
application.
So
everything
has
to
be
brought
up
separately.
Anyways.
C: Yeah, but I mean the port library, as long as we have proper locking around it, could be shared: you would have one port library per address space, but multiple JITs, multiple collectors. Then they would all be sharing it, though; if you were to change a function pointer in the table, that would affect everything in your process. But do we actually do that?
B
Some
places-
yes,
you
might
want
a
one
time
to
be
using
them
check,
which
then
goes
and
changes
a
bunch
of
the
port
library
functions
for
memory,
so
that
we
can
do
extra,
mem
Tech
work
and
you
might
not
want
that.
Runtime
overhead
on
some
of
the
other
runs
core
libraries
that
are
you
being
used.
Okay,.
E: You'd need a bolt-on style: they add this, it initializes everything you need, and here are the new APIs that are now going to be callable, because you have the right parameters to start passing in, the VMs and the JitBuilders, I guess. No environment to have to pass in; it just works, it's hidden underneath, and it's going to get massaged into the different threads and the different things, right? Okay.
C: The approach that I took was to split startup into three tiers: process-wide startup, runtime or VM startup, and then VM thread startup. And it happens in those three orders: a VM attaches to the process, and then the thread attaches to the VM. In my experience, that's the kind of API that real languages need in order to be integrated, because that's the order in which it happens. So maybe we can split it that way.
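The three tiers C describes (process-wide startup, then VM startup attaching a VM to the process, then thread startup attaching a thread to the VM) can be sketched as three ordered calls, each requiring the previous tier to have completed. All names here are illustrative, not real OMR APIs.

```c
#include <assert.h>
#include <stddef.h>

typedef struct ExampleProcess { int started; } ExampleProcess;
typedef struct ExampleVM { ExampleProcess *process; } ExampleVM;
typedef struct ExampleThread { ExampleVM *vm; } ExampleThread;

/* Tier 1: process-wide startup (port library, thread library, ...). */
static int exampleProcessStartup(ExampleProcess *p)
{
    p->started = 1;
    return 0;
}

/* Tier 2: a VM attaches to an already-started process. */
static int exampleVMStartup(ExampleVM *vm, ExampleProcess *p)
{
    if (!p->started)
        return -1;              /* out of order: process not up yet */
    vm->process = p;
    return 0;
}

/* Tier 3: a thread attaches to an already-started VM. */
static int exampleThreadStartup(ExampleThread *t, ExampleVM *vm)
{
    if (NULL == vm->process)
        return -1;              /* out of order: VM not attached yet */
    t->vm = vm;
    return 0;
}
```

Calling the tiers out of order fails, which encodes the "VM attaches to process, thread attaches to VM" ordering directly in the API.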
E: So you're saying the common VM pattern would be to have threads do things: you're going to fork, and the first thing a thread does is go attach to the thing it wants to run against, getting enough stuff to be able to do the things it wants to do. And you have to make that work across life cycles, because threads come and go, and the process is going to hang around.
C: So in the case where, say, the number of GC threads is related to the number of cores, and the GC keeps its own threads around for its own thread-dispatching, those are totally different from this model; those are part of the runtime, part of the VM. So I guess, for the existing language that we're talking about, the runtime would be the process-wide initialization, and then the VM would be per-VM.
B: That makes the most sense; that's kind of how we envisioned the APIs. If you look at some of the other OMR APIs, there's an OMR_Runtime, an OMR_VM, and an OMR_VMThread. We've always sort of envisioned that the OMR_Runtime would be your process-wide one (one port library, one thread library to start with), and then each VM would be its own VM.
A: Okay, so is there a proposal, or a sort of agreement, here? Is it that we want to take the current initialization steps that are there and perhaps break them into multiple simplified steps with good conceptual integrity: perhaps a process-wide one, perhaps a VM-wide one, perhaps a thread-wide one?
B: I think for some runtimes, though, if you're going to plug into some really small runtime that doesn't have threads and such, you probably still want one sort of global thing that can do everything, just to make it a bit simpler, so they don't have to worry about starting up a runtime, then starting up their VM, then starting up their first thread for the main thread, especially if they're not going to use any of those things in their runtime themselves.
D: Okay, so I'd ask a question about OpenJ9, because I know its initialization process is more complicated than that, and I'm wondering if it slots into that model. I know it goes through a sequence of phases, and there are some dependencies between the phases: you know, you initialize all the libraries, and then you can do another step on all the libraries, and so on. I'm wondering if that, like...
D: That's not what I'm asking, though. I guess what I'm trying to ask is: could we accomplish the same thing that OpenJ9 accomplishes with this same structure? Is there something that OpenJ9 fundamentally needed to do that was different, that wouldn't fit into this model, since it's kind of our best example of a real runtime right now?
B: If we actually just had this API that called low-level component one's startup, then low-level component two's startup, in the right order, and then did the runtime part, and then the VM part, I think it all works. I looked at moving a bunch of it, but like you said, it's probably a large work item that I just don't have time to go do. But I believe it all fits into this model of runtime, VM, and thread startup.
B: I sort of think we are saying that, though: in that VM startup API, we're going to specifically lay out API calls to the particular components, pretty much in the same order as that wacky pattern that happens in J9, precisely because those components have to be brought up in that way. Otherwise they won't work, because that's just how they have to be brought up.
D: I understand it from the low end: I want to initialize things, so I initialize the runtime, then I initialize the VM and attach it, then I initialize the threads and attach them; that's very simple and easy. But if I'm writing an implementation, and I need to make modifications in the GC, and I need to make modifications in the compiler, and I need to make modifications in the RAS components...
C: I think startup is hard. I think it's realistic that we're going to approach a runtime that does do startup in a complicated, circular way, and then there's a possibility that this API is simply not going to fit into their model. I might argue that their model is bad and they are bad programmers, but that's not really going to get us integrated into their runtime. So I think maybe the...
C: The best thing that we can do is have a clear statement about the dependencies between our different modules, possibly like my diagram, but also have that clearly expressed in our startup API. So if JitBuilder needs a port library, then that should be one of the parameters in JitBuilder's startup, so people can see that it needs to come up first. And then we can layer helpers on top of this, but at the core we just need a dead-simple, obvious model for how the dependencies all work together.
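Making each dependency an explicit parameter of the component's startup, as C suggests, can be sketched like this. The types and function below are hypothetical stand-ins, not real JitBuilder APIs; the point is that the signature itself documents what must come up first.

```c
#include <assert.h>
#include <stddef.h>

typedef struct ExamplePortLibrary { int initialized; } ExamplePortLibrary;

/* The port-library dependency appears right in the signature, so callers
 * can see it must be brought up before this component. */
static int exampleJitBuilderStartup(ExamplePortLibrary *portLib)
{
    if (NULL == portLib || !portLib->initialized)
        return -1;              /* dependency not satisfied */
    /* ... component-specific startup would go here ... */
    return 0;
}
```

A caller cannot forget the ordering: without a live port library in hand, the call fails.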
C: My opinion on this: I think the simplest thing to do would be to have some sort of configuration that's programmatically accessible, so the language would say things like "these are my heap bounds," or "this is my maximum heap size," or whatever, and arrange it like...
B: ...and the language is fully capable of initializing that structure itself. Then, when we go to start up, we go through a validation pass where we make sure that the options make sense; if they don't, we complain, or fail to start up, and if it works, it works. And then we just have a bunch of different fine-grained configuration structs for each component.
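The programmatically accessible configuration plus validation pass might look like the sketch below. The heap-bound field names come from the examples just mentioned; the struct and function themselves are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Fine-grained, per-component configuration struct the language fills in. */
typedef struct ExampleGCConfig {
    size_t initialHeapSize;
    size_t maximumHeapSize;
} ExampleGCConfig;

/* Validation pass run at startup: reject (-1) configurations that don't
 * make sense; if it works, it works. */
static int exampleValidateGCConfig(const ExampleGCConfig *config)
{
    if (0 == config->maximumHeapSize)
        return -1;
    if (config->initialHeapSize > config->maximumHeapSize)
        return -1;
    return 0;
}
```

The language initializes the struct itself, and startup only has to check consistency rather than parse anything.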
D: I think that's a good question; I'm still stuck on the earlier point, though.
A
I
think
that
what
we
would
want
to
do
is
just
if
you're
going
to
is
to
actually
define
the
order,
like
the
minimal,
Big,
Bang
kind
of
thing
that
needs
to
happen
for,
for
the,
for
all
this
technology
be
like,
like
maybe
the
portal
ever
has
to
be
the
first
thing,
the
thread
library
report
or
have
to
be
the
first
thing
to
get
initialized.
Some
very
basic
daily
questions
like
a
complete
library,
have
any.
B: Definitely, as articulated there. I don't actually think we should have them there, but that's a completely different architectural discussion than this one.
A: And actually, I think there might even be an opportunity to provide some kind of elementary processing that a language runtime might want to use for parsing command lines and things like that. If you don't already have something like that in your runtime, perhaps you need a tool in the box that'll get that for you, so that could be an opportunity for us to provide it. I agree.
B: ...a library: here's how you parse things from the command line, here's how you pull values out of environment variables, here's an options file that you could pull things out of; the different ways that we could have things. Define those things generically, but something like -XX:foo is your runtime's business, in my opinion, though I could be convinced otherwise if we want some generic ones.
D: That's why we'd put them behind our own prefix, right. But we do still have stuff that is internal to the GC: the option processing that's currently in OMR is kind of a subset of, or most of, the J9 option processing, which we need to preserve for J9. But that doesn't mean we can't also try to create a different way of specifying them.
B: There's an environment variable where you can configure a few small things, but we didn't move our options over; those stayed on the Java side, where they belonged. Interestingly, regarding the type of options that were there: I'm not saying there aren't GC options to come along to OMR, but a large portion of the existing ones are very Java-specific, in my opinion.
B: If we start to do this, then I think the option parsing starts to involve a very large runtime set of app code (and I'm going to start saying "app code" instead of "glue," because I think Rob made a valid point that "application extensions" or something is a better name). I think that's where we'll see a huge amount of growth: your runtime having some ability to go in and set those options. But if we start to do this, then I think the location where those options get set...
A: Another thing that's unique, and actually a really good story for the compiler technology: the command-line options can actually be per-method as well. It's not just that these are global option settings; they actually apply to methods, or patterns of method signatures, and that's a really good differentiator for the Java technology versus, say, HotSpot, which doesn't do that. So we want to make sure that, you know...
C
Other
thing
they
don't
want
us
to
forget
is
that
there
are
multiple
VMs
for
process
into
only
one
set
of
command
line
options
into
only
one
environment.
So
how
does
that
work?
It's
these
are
the
defaults
that
a
language
would
then
be
able
to
override
on
a
per
VM
basis,
so
I
think
that
the
language
has.
This
should
be
the
thing
that
sort
of
glues
together,
vm
initialization,
in
command
line,
option
parsing,
but
as
well.
You
know
we
can
still
provide
our
own
default
if
we
want
to,
because.
D: So to me that means some of this config should probably vary with the runtime. Yes, some of it could be at the runtime level, because it's a command-line issue: there's only one command line per process. Some of it probably needs to be at the VM level, because you may want one VM to be able to override a setting without affecting the other VMs.
B: Yeah, I think so. That was sort of why I was breaking it down as: you've got this structure that you can fill stuff into, and then you pass it to the runtime on startup (sorry, to the VM on startup of the VM), and then the VM does what it needs to with those options. Because two VMs, if we're talking really crazy, could actually differ: one could reject a set of options and the other accept them, based on how each was configured or compiled.
C
Least,
in
the
case
of
testing,
there's
no
way
that
options
are
going
to
be
coming
from
the
command
line
at
all,
not
like
not
a
single,
optimal
comfort
in
there.
So
and
then
again.
That
also
means
that,
like
it's
impossible
to
have
processed
bite
options
because
you're
going
to
want
to
bring
up
and
tear
down
things
under
different
configurations
in
the
same
process,
so
I
know,
yeah
I
think
it
makes
sense
that,
like
configuration,
validation,
happens
and
like
the
smallest
granularity,
I.
B
Think
we
could,
we
should
probably
work
inside
out
on
this,
define
some
way
for
the
components
to
have
a
horse
or
even
components.
B
The
VM
to
sort
of
have
the
m
initialization
do
what
we
need
to
to
get
that
set
up
and
have
some
sort
of
common
infrastructure
that
works
for
all
components,
and
then
once
we
have
that
if
we
want
to
go
and
start
doing
things
like
command
command
line,
processing
and
things
for
the
process,
then
we
can
build
upon
it
at
that
point
that
make
sense
once
I
think
we
will
do
but
I
think
we
just
need
to
sort
of
build
it
inside
out.
Here.
B: Maybe even the ability for things to change at runtime, by having a "set option, name, value" call that also carries a reason why you're setting it, because that actually comes in quite useful: the reason being command-line option, environment variable, default, or runtime decision; different information along those lines. I want to make sure I don't complicate things too much or slow down startup in any way, but I think it's a good way for us to start defining these things.
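The "set option with a reason" idea can be sketched as a setter that records where each value came from, letting higher-precedence sources win. The precedence ordering below (runtime decision over command line over environment variable over default) is one plausible choice for this sketch, not something fixed in the discussion, and all names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Why an option was set: the sources B lists, ordered by precedence. */
typedef enum ExampleOptionSource {
    EXAMPLE_SOURCE_DEFAULT = 0,
    EXAMPLE_SOURCE_ENV_VAR,
    EXAMPLE_SOURCE_CMDLINE,
    EXAMPLE_SOURCE_RUNTIME      /* runtime decision overrides everything */
} ExampleOptionSource;

typedef struct ExampleOption {
    int value;
    ExampleOptionSource source; /* who last set this value */
} ExampleOption;

/* Apply a new value only if its source outranks (or equals) the current one. */
static int exampleSetOption(ExampleOption *opt, int value,
                            ExampleOptionSource source)
{
    if (source < opt->source)
        return 0;               /* lower-precedence setter is ignored */
    opt->value = value;
    opt->source = source;
    return 1;
}
```

Recording the source also gives diagnostics something to report: you can always answer "why is this option set to that?"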
A: Something we can keep discussing. Going back to maybe the very original question you were asking about options processing, where it fits into the overall initialization scheme, I think the consensus seems to be that we could do it very early on. Okay, so I guess we're a few minutes past the top of the hour; are there any other thoughts on the initialization problem that we talked about that anybody wants to bring up?
A: What I'm going to do is summarize our discussion here and put that into an issue that we can continue the discussion on if necessary, but I think there is some follow-up work here, to break the API up and to start fleshing things out that way. So, any other thoughts anybody wants to bring up right now? If not, thank you everyone for joining, and we'll talk again in a couple of weeks. Thanks, everyone.