From YouTube: OMR Compiler Architecture 20190117
Description
Agenda
Simplify lowest level compiler allocation classes (#3396) [ @mstoodle ]
This slide lists the three types of memory allocation that are happening in OMR and OpenJ9. The first is the persistent allocator, which exists all the way through the entire compiler's lifetime, so it crosses different compilations, and it tends to be an allocate-and-free style: if you don't deallocate something, it's a leak, so everything is explicitly deallocated.
The second main category is called region. It has been called heap or stack allocation in the past. The basic lifetime here is inside of a compilation: it's a bunch of memory that you allocate during a compilation. We tend to allocate this using larger segments, and then smaller objects are carved out of those segments. But then the segments are typically bulk freed. That's not always the case.
There are cases where objects are deallocated individually from different memory regions, but for the most part, compilation memory in this region section is bulk freed. The third area is the typed allocator stuff, which basically hooks all these low-level allocators into the STL containers and allows you to do some useful lifetime management as part of the STL containers. Today I'm primarily looking at the raw memory and segment allocation memory that are used in the first two categories here.
So I will be talking about the persistent allocator a little bit, mostly at the end. Primarily I'll be talking about the lowest-level memory allocation routines that are done as part of region, and we'll be talking about region itself. I'll also be talking about the allocators that provide segment memory to the region allocator fabric.
There are three different allocation scenarios that you need to worry about when you're thinking about memory allocation in the JIT. The first one I just call normal: that's business as usual for the JIT; it needs memory, it allocates it, and it frees it to do compilation-related work. A second scenario is a debugging and diagnostic scenario that we support, which not everyone knows about, but it's there.
There is some compiler code in OpenJ9 which can run as part of an extension inside, say, a debugger, and in that scenario the compiler code that's running may need to allocate memory in order to perform some work, usually to diagnose, to print stuff out, or to operate properly. And when you're running in the debugger, you need to allocate that memory in the debugger, not in the memory of the debuggee, because, well, that would just be wrong.
So what follows here is a proposal to refactor the existing code. I'm not trying to add anything here, really, although I am bringing some of the code that's in OpenJ9 down into, or up into, depending on how you view the move, OMR. I'm trying to simplify the concepts a little bit, at least in my head, and trying to avoid having classes that are really doing multiple jobs, which is one of the things that I find confusing about the current arrangement.
I am obviously trying to unify those interfaces across the two projects so that we can facilitate code movement better. Like I said, there are a couple of things that are only in OpenJ9, of which OMR has extremely rudimentary implementations, that I'm trying to move into OMR so that OMR can take advantage of the improved memory efficiency that OpenJ9 exhibits.
I didn't create this complexity. I'm going to be presenting a bunch of complex stuff, which I'm trying to simplify, at least in how it is understood; I'll do my best, and you'll see what I mean in a few slides. But I didn't create this complexity; it's evolved over the last 20 years. All I'm doing is trying to find a way to explain it and structure it to make it relatively easier to understand.
What I've tried to do here is basically do the same things that the current implementation does and make them easier to understand. I'm not trying to change behavior; I'm hoping not to affect performance, and obviously that will need to be tested when the ultimate change comes. Mostly I'm just moving code around. All right, so having said that, let's move forward. There are a couple of concepts that will help structure the rest of the presentation.
The first is raw memory: just a blob of memory that you reference by a pointer. This is really what the raw allocator part of the hierarchy deals with; if you are already familiar with the memory allocation stuff in the compiler, or if you listen to the rest of this presentation, you'll see how raw allocator fits together. The second concept, or type of memory that gets used by the compiler, is represented by a thing called TR::MemorySegment. A memory segment is really just a small object.
It represents another large, separate chunk of memory, which is the actual segment. This small object has a few little fields that help you track which parts of that larger chunk are already being used, or assigned to be used, to allocate smaller pieces of memory, so you can have free lists of them.
So a memory segment is a way of taking a larger chunk of memory and basically parceling it up into smaller chunks. You can do that by just handing back a pointer computed from the segment's base plus the amount already allocated, as long as the requested size doesn't blow past the size of the full segment. These TR memory segments are typically bulk freed, because the segment doesn't have any notion of where the internal objects are, so it doesn't have a way of individually freeing what you've allocated.
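The carving scheme just described is essentially bump-pointer allocation. Here is a minimal sketch of the idea; the class and field names are illustrative, not the actual TR::MemorySegment layout:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Illustrative sketch of a segment that parcels a large chunk into
// smaller allocations; names do not match the real TR::MemorySegment.
class Segment {
public:
    Segment(void *base, size_t size)
        : _base(static_cast<uint8_t*>(base)), _size(size), _allocated(0) {}

    // Bump-pointer allocate: hand back base + allocated, as long as the
    // request does not blow past the end of the segment.
    void *allocate(size_t bytes) {
        if (_allocated + bytes > _size)
            return nullptr; // caller must get a new segment
        void *p = _base + _allocated;
        _allocated += bytes;
        return p;
    }

    // No per-object free: the segment has no notion of where the internal
    // objects are, so the whole segment is released (bulk freed) at once.
    size_t bytesRemaining() const { return _size - _allocated; }

private:
    uint8_t *_base;
    size_t   _size;
    size_t   _allocated;
};

// Small demonstrations of the two behaviors described above.
size_t demoRemaining() {
    void *chunk = std::malloc(1024);
    Segment seg(chunk, 1024);
    seg.allocate(100);
    seg.allocate(200);
    size_t left = seg.bytesRemaining();
    std::free(chunk);
    return left;
}

bool demoOverflow() {
    void *chunk = std::malloc(64);
    Segment seg(chunk, 64);
    bool declined = (seg.allocate(128) == nullptr); // request too big
    std::free(chunk);
    return declined;
}
```

Note that freeing happens only at segment granularity, which is exactly why the talk emphasizes bulk freeing of compilation memory.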
So, having said that, there are a bunch of classes that deal with raw memory and a bunch of classes that deal with memory segments. We'll start with the raw memory ones. The raw allocator is a class that already exists, but I'm changing it around a little bit in this new scheme. In particular, I'm making it an abstract class that basically represents the interface between the rest of the compiler and raw memory.
Now, for the most part, that means the segment allocators, which I'll talk about in a little bit, are the ones that primarily use raw allocator, but there are some other parts of the compiler that directly access raw allocators, if you happen to see them. This interface for dealing with raw memory is actually quite simple.
The current world has allocate and deallocate. I've added a third thing called protect, which is to support that second debug scenario of memory allocation, where you want to protect the memory so that future accesses will fault; but not all raw allocators will provide a protect. One of the key things about raw allocator is that it has to be thread safe, because it is used as the basis of the persistent allocator, which could be accessed by different threads.
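The interface just described can be sketched as follows; the signatures and class names here are assumptions for illustration, not the exact OMR declarations:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Illustrative sketch of the abstract raw-allocator interface the talk
// describes: allocate, deallocate, and an optional protect.
class RawAllocator {
public:
    virtual ~RawAllocator() {}
    virtual void *allocate(size_t bytes) = 0;
    virtual void  deallocate(void *p) = 0;
    // Not every raw allocator can protect memory; default is a no-op.
    virtual void  protect(void *p, size_t bytes) { (void)p; (void)bytes; }
};

// The simplest subclass is literally a wrapper around malloc and free.
// Because it keeps no state, every instance is equivalent: memory
// allocated through one instance can be freed through another.
class MallocAllocator : public RawAllocator {
public:
    void *allocate(size_t bytes) override { return std::malloc(bytes); }
    void  deallocate(void *p) override { std::free(p); }
};

// Demonstration: allocate with one instance, free with another.
bool demoEquivalentInstances() {
    MallocAllocator a, b;
    void *p = a.allocate(32);
    bool ok = (p != nullptr);
    b.deallocate(p); // valid: instances of a subclass are interchangeable
    return ok;
}
```

The no-list, stateless design is what makes the "instances are equivalent" property, discussed next, fall out naturally.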
Another interesting thing about raw allocator is that the instances of each subclass are expected to be equivalent, which basically means you can copy raw allocator objects willy-nilly, and you should be able to allocate memory from one raw allocator and free it via another raw allocator. So that means it's not keeping a list of which things it's allocated. In the simplest case, it's just a wrapper around malloc and free.
If you think about that, it shouldn't matter how you get to malloc or how you get to free; it should just work. That's consistent across the subclasses. Now, it doesn't mean that you should be able to allocate with an object of one subclass and free with an object of another subclass. It means that within a subclass, you should be able to use different instances of the object. In the next few slides I'll go through the various subclasses that are going to be defined.
So the first one is MallocAllocator. This one actually corresponds to the current raw allocator in the OMR compiler code base, and it literally is a wrapper around malloc and free: the allocate functions call malloc, the deallocate functions call free, and protect does nothing. So it's a very simple class.
DebugAllocator, which I can't remember the original name of, supports that second scenario of memory allocation, where you're trying to allocate memory that you can protect and then deallocate. This doesn't end up using malloc and free under the covers; it uses a variety of different functions based on the platform.
It uses VirtualAlloc or whichever of the other functions the different platforms use to allocate memory, and this class does implement protect on most platforms. There are a couple of platforms where it doesn't know how to do it right now, but for the most part it will try to protect the memory if protect gets called, and allocate and deallocate work as you'd expect.
CustomAllocator is a thing that's designed to support the debugger side of that third scenario of memory allocation. Here you get to construct a raw allocator and just tell it what function you want to use for allocate, what function you want to use for deallocate, and what function you want to use for protect. So it's a way for OpenJ9 to provide the debugger versions of malloc and free, basically, inside an allocator.
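The CustomAllocator idea can be sketched like this; the function-pointer signatures and names are assumptions for illustration, not the actual OMR API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Illustrative sketch: a raw allocator constructed from caller-supplied
// functions, the way a debugger host could pass in its own malloc/free.
typedef void *(*AllocateFn)(size_t);
typedef void  (*DeallocateFn)(void *);
typedef void  (*ProtectFn)(void *, size_t);

class CustomAllocator {
public:
    CustomAllocator(AllocateFn a, DeallocateFn d, ProtectFn p = nullptr)
        : _allocate(a), _deallocate(d), _protect(p) {}

    void *allocate(size_t bytes) { return _allocate(bytes); }
    void  deallocate(void *p)    { _deallocate(p); }
    void  protect(void *p, size_t bytes) {
        if (_protect) _protect(p, bytes); // protect is optional
    }

private:
    AllocateFn   _allocate;
    DeallocateFn _deallocate;
    ProtectFn    _protect;
};

// Demonstration: wire in a counting shim over plain malloc as the
// "host" allocate function, with free as the deallocate function.
static int g_allocCalls = 0;
static void *countingAlloc(size_t n) { ++g_allocCalls; return std::malloc(n); }

int demoCustomAllocator() {
    CustomAllocator raw(countingAlloc, std::free);
    void *p = raw.allocate(16);
    raw.protect(p, 16); // no protect function supplied, so this is a no-op
    raw.deallocate(p);
    return g_allocCalls;
}
```

In the real scenario the host would supply its debugger-side allocation routines rather than malloc and free.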
There's a function called j9mem_allocate_memory which is used to allocate raw memory rather than malloc and free, because OpenJ9 likes to track all of the memory that's being allocated by all of its components, not just the compiler. So rather than using malloc and free, it uses j9mem_allocate_memory and the corresponding free-memory routine, and that gets represented as a J9 memory allocator. But that's in the OpenJ9 project; that's just for information purposes.
There isn't, actually. Well, let me check, since you've asked: the DebugAllocator is a thing that's provided in OMR, which means it has no idea about allocating and putting protections on J9 memory segments. I think that's because it's really just a diagnostic mechanism to find use-after-frees, and in that scenario you don't really care whether or not the memory is being tracked by OpenJ9's memory allocation routines.
All right. Built on top of the raw allocators are the segment allocators. These are the classes responsible for allocating a TR::MemorySegment from whatever memory source they're supposed to use. A segment allocator tends to round requests up to a minimum allocation size: OMR tends to use a minimum allocation size of something like 64K, while OpenJ9 will use minimum allocation sizes of up to 16 megabytes, I believe, for certain segments.
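The round-up just described is simple arithmetic; here is a sketch assuming the 64K minimum mentioned for OMR (the constant and function name are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Round a requested size up to a multiple of the allocator's minimum
// segment size (64K here, as OMR is said to use; OpenJ9 can use much
// larger minimums for certain segments).
const size_t kMinSegmentSize = 64 * 1024;

size_t roundToSegmentSize(size_t requested) {
    // Integer ceiling division, then scale back up to bytes.
    size_t segments = (requested + kMinSegmentSize - 1) / kMinSegmentSize;
    if (segments == 0)
        segments = 1; // even a zero-byte request gets one minimum segment
    return segments * kMinSegmentSize;
}
```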
So basically the segment allocator is responsible for getting a chunk of memory of the right size. In OMR, a segment allocator just uses a raw allocator to allocate a chunk of memory and then builds a TR::MemorySegment object around it. In OpenJ9, J9 has a thing called a J9MemorySegment, and the segment allocator uses that: it allocates J9 memory segments directly, without using a raw allocator.
However, the API that it presents back up into the compiler is defined by segment allocator, so it's basically an abstraction over the difference between J9 memory segments and raw memory. A bare segment allocator just does allocate and deallocate: if you tell it to allocate something, it will allocate memory; if you tell it to deallocate an earlier-allocated segment, it will deallocate it. It has no memory of its own; it just does whatever you tell it to do.
In OpenJ9, there are a couple of caching mechanisms in play to try to reuse segments. Once the compiler is done with a segment, it doesn't necessarily just hand it back to the operating system; it keeps it in a list and will try to reuse it over time. Now, there is a thing called a segment cache in the current OMR compiler, or the OpenJ9 compiler, I can't remember which, but this is not that segment cache. This is a different type of segment cache. This segment cache is actually part of the functionality of the existing scheme.
If you're familiar with the current scheme, it's the system segment provider that's doing this. Essentially, what the segment cache is doing is providing a way to remember segments that have already been allocated, so it can do some carving up of things; but it only handles segments of one size, to keep things simple and also to reduce fragmentation. The model here is basically a delegator model.
The segment cache itself allocates segments, which is a bit of a strange thing, I guess: it tries to get segments from somewhere, it allocates them, and it provides backup if it needs to. Sorry, I know this is complicated; it's hard to explain simply. But the cache itself allocates large segments that it can carve up into smaller segments.
I won't go into detail here; I'll just leave you with the sense of the complexity of all this stuff, but it's all hidden inside this segment cache thing. Right now, this functionality is kind of merged together: what does segment allocation and what does caching is all mixed together, which makes it a little bit hard to pick apart.
So I mentioned that the segment cache has to allocate segments. How does it allocate segments? Well, it has its own segment allocator dedicated to allocating its own segments. It basically uses the new segment allocators, shown in red on the slide; those are different objects than the other segment allocator that's in blue. In fact, segment caches can be chained together, so a segment cache can have another segment cache behind it. Why would you want to do that? Well, this is, again, the complexity of the current scenario.
Within a compilation, that middle segment cache is trying to cache segments that are being used within the compilation; and then in OpenJ9 there's actually another segment cache which lives across all of the compilations, and which really only holds on to one segment. It's just a degenerate example of a segment cache, and its performance does actually matter: when the original rewrite of this memory allocation scheme was done, the segment cache on the right was not put in place, and performance was affected, so we had to retrofit that back in. So this captures, in the new scheme, that you'll have a per-compilation segment cache as well as a longer-lived cross-compilation segment cache, and again it will use the same delegator model: you delegate first to see if your downstream cache has segments that you can use, before you try to allocate anything yourself.
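The delegator model described above, where you ask a downstream cache first and only allocate yourself if it can't help, can be sketched like this (all class names here are illustrative, not the real OMR/OpenJ9 classes):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Illustrative sketch of the delegator model: a cache is asked for a
// segment first; it may in turn delegate to a chained downstream cache.
struct Seg { void *base; size_t size; };

class SegmentCache {
public:
    explicit SegmentCache(size_t cachedSize, SegmentCache *downstream = nullptr)
        : _cachedSize(cachedSize), _downstream(downstream) {}

    // Try to satisfy the request from the free list, or from a chained
    // downstream cache; return {nullptr, 0} if we can't.
    Seg acquire(size_t bytes) {
        if (bytes != _cachedSize)
            return Seg{nullptr, 0};        // caches handle one size only
        if (!_freeList.empty()) {
            Seg s = _freeList.back();
            _freeList.pop_back();
            return s;
        }
        if (_downstream)
            return _downstream->acquire(bytes);
        return Seg{nullptr, 0};
    }

    void release(Seg s) { _freeList.push_back(s); } // keep it for reuse

private:
    size_t _cachedSize;
    SegmentCache *_downstream;
    std::vector<Seg> _freeList;
};

// Demonstration: a released segment is handed back on the next request,
// while a wrong-sized request is declined (the caller would then fall
// back to its own segment allocator).
bool demoReuse() {
    SegmentCache cache(64);
    void *mem = std::malloc(64);
    cache.release(Seg{mem, 64});
    Seg s = cache.acquire(64);     // served from the free list
    Seg none = cache.acquire(128); // wrong size: cache declines
    bool ok = (s.base == mem) && (none.base == nullptr);
    std::free(mem);
    return ok;
}
```

The decline path is what motivates the standalone segment allocator discussed next: something has to handle requests the cache won't.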
So you might still be asking: if the segment caches can allocate segments, why do we need this first segment allocator on the left-hand side? The reason comes down to how segments get managed by the caches. The segment caches, as I mentioned briefly, only allocate segments of a particular size, and so if you end up having an allocation that's beyond that size, you still need some way to be able to get that segment, and that's what this model provides.
If your segment cache is allocating, say, 64-kilobyte segments, and somebody comes along and asks for 256K of memory, you won't be able to fill that with a segment from the segment cache, and so you need a way to allocate it, and that's what this segment allocator is for. Really it's only allocating large segments, and in the current scheme, if you allocate a large segment, you just hand it back when you're done; you don't try to carve it up and reuse it in the free list.
So that's the allocation scheme that sits underneath region. Then there's the persistent allocator. The persistent allocator is its own little beast; however, it kind of looks the same: the persistent allocator uses a segment allocator, and that has its own raw allocator. The persistent allocator allocates space for individual objects, and it maintains a free list of those objects for when they're deallocated. So we're not talking about segments now; we're talking about smaller chunks of memory.
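A persistent allocator that carves objects out of a chunk and keeps a free list for deallocated objects might look roughly like this. This is a deliberately simplified single-block-size sketch under assumed names; the real persistent allocator handles arbitrary sizes and multiple segments:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Simplified sketch: a persistent allocator that hands out fixed-size
// blocks carved from a big chunk, and threads freed blocks onto a free
// list so they can be reused instead of leaking.
class PersistentAllocator {
public:
    PersistentAllocator(void *chunk, size_t chunkSize, size_t blockSize)
        : _cursor(static_cast<char*>(chunk)),
          _end(static_cast<char*>(chunk) + chunkSize),
          _blockSize(blockSize), _freeList(nullptr) {}

    void *allocate() {
        if (_freeList) {                 // reuse a freed block first
            FreeBlock *b = _freeList;
            _freeList = b->next;
            return b;
        }
        if (_cursor + _blockSize > _end)
            return nullptr;              // would need a new segment
        void *p = _cursor;
        _cursor += _blockSize;
        return p;
    }

    void deallocate(void *p) {           // push onto the free list
        FreeBlock *b = static_cast<FreeBlock*>(p);
        b->next = _freeList;
        _freeList = b;
    }

private:
    struct FreeBlock { FreeBlock *next; }; // links live inside freed blocks
    char *_cursor;
    char *_end;
    size_t _blockSize;
    FreeBlock *_freeList;
};

// Demonstration: a freed block is handed back on the next allocation.
bool demoFreeListReuse() {
    void *chunk = std::malloc(256);
    PersistentAllocator pa(chunk, 256, 32);
    void *a = pa.allocate();
    pa.deallocate(a);
    void *b = pa.allocate();   // reuses the freed block
    bool ok = (a == b);
    std::free(chunk);
    return ok;
}
```

Storing the free-list link inside the freed block itself is a common trick that keeps the allocator's bookkeeping overhead at zero bytes per live object.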
So this is a case where you're using TR memory segments but you're not using bulk free; you're only using bulk free at the very end of the compiler's life, when you're giving away all the memory that you've used, because this memory is persistent. There's a mildly complicated bit here in trying to initialize the persistent allocator object at the very beginning, since it resides inside the TR::Compiler object, which everyone references as TR::Compiler. But beyond that, this is relatively simple.
Okay, that's lots of stuff, and I know it wasn't a particularly exciting presentation, or even a particularly interesting one, but I promise you: the current scheme is more complicated than that.
I have a couple more slides here, which I'll just steamroll through, and then we can take questions if people have them. What I'm doing right now is reorganizing the code into the classes that I've presented on the previous slides, and I'm going to create pull requests. There are pull requests open at OMR and OpenJ9 right now that have some of the changes, but an earlier version of the changes. So if you're trying to match what I've said in this presentation to what's in the current PRs at OMR or OpenJ9, forget it; you won't be able to match them up, because my thinking has evolved in the weeks since I opened them or last changed them. So right now what I'm doing is putting together PRs, basically one PR per major concept.
So there will be one for all the raw allocator stuff, there will be one for the segment allocator, one for the segment cache, and so on. What I'm going to do is contribute the code without integrating it into the rest of the project. The code will be there, but it won't be used by anything; everything will still be using all the old stuff. Then there will be a plan to bring the integration along, integrating those classes into the rest of the project.
I'm using OpenJ9 here as an example of a downstream component that depends on this implementation, and so it needs to participate in this moderately complicated scheme to get things integrated. I'm intending to keep both the old and the new memory code active at the same time, at least for a period, to enable us to track down performance regressions or functional bugs, and I'm going to use a relatively uninteresting macro name, OLD_MEMORY, as the thing that you will have to define in your downstream project.
That comes into play once the new functionality is turned on in OMR. So, for the OpenJ9 example, I'll do a pull request; there's actually already a pull request at OpenJ9 that will define a macro called OLD_MEMORY when it's building OMR. The next step would be that OMR would switch, after a functional and performance evaluation, to use the new scheme by default; but if OLD_MEMORY is defined, it will build and use the old memory allocation scheme, so that OpenJ9 will continue to work.
So at that point we'll have both the old and the new memory allocators in OMR and in OpenJ9, but the old allocation code won't be used. We'll have a settling period, hopefully not too long, and obviously other consumers are going to be encouraged to switch to the new scheme at the same time. And then, once the dust has settled and we're happy with things, we'll remove the old code.
So, rough timing: I'm leaving in a week, for three weeks, so I'll do everything before then. No, well, I'm hoping to create the PRs with the various classes before I leave, so I'm hoping those will at least be created at the project and available for people to review and see what they look like. If things go really well in the next week, I'll also try to create the integration PRs.
That way people can see what it's going to look like once the integration is done. Merging can happen while I'm away if it's warranted, that is, if the reviews go well and no additional changes are needed from me. I find that highly unlikely, but you never know; call me an optimist. What I'm really expecting is for those PRs to remain open while I'm away and collect review comments; then, when I come back, I'll address the comments and we'll get them merged. Finally, the integration PRs will obviously remain works in progress until I return; there's no way those are going to get merged until after the vacation. At that point we would proceed with the rest of the integration plan. It's late February when I come back, and I'm hoping to have everything flipped over by the end of March, if the timing works out, which would maybe even mean removing the old implementations, unless a downstream consumer raises their hand and yells loudly.
So it's partially the size, because some of those segment allocators will use different sizes; but it's also that the lifetime of those allocator objects is different. On the slide where the segment caches can be chained, the long-lived one on the right is going to live longer than a compilation; it lives across compilations.
For example, it would have to go and deallocate each allocation individually, walking a bunch of lists in the segment allocator to say: okay, free this one, now look for this one, free that, now look for this one, free it. Whereas if I arrange it this way, I can just do a bulk free: walk the list and deallocate all the segments.
This simplifies the ordering, and it also makes it more efficient: you don't have to search for things, you can just get rid of them all. And then there are also fragmentation effects that you could encounter if you're always freeing and deallocating in different sizes; it becomes more difficult to manage that memory efficiently.
What it means is that the TR memory segment object is just the little object with the four fields in it, and then there's the actual segment memory, which gets allocated differently; it may not get allocated by a raw allocator. In OMR, the segment and the TR::MemorySegment object are both allocated using the raw allocator. In OpenJ9, the TR memory segment object is allocated with the raw allocator, and the segment memory comes via j9mem_allocate_memory.
Audience: I have a question. You mentioned that the segment allocator delegates to the cache if it needs to. If the segment allocator uses the cache, is it always going to use the cache as a backing allocator, or can it decide to start allocating with whatever it does normally? In which case, how do you know which segments came from the cache and which it allocated itself?
So it could, but the way that I'm thinking to arrange this is that it will always delegate to the cache first; the cache gets the first shot at providing a segment. In the current implementation (I guess that's a better way to explain it), the functionality that I'm calling segment allocator and the segment cache are bundled together into the system segment provider class. The way that guy works currently is: if you're allocating a quote-unquote small segment, a regular-sized segment, it will try to use its free list; essentially the functionality that I'm calling the segment cache will try to provide a segment using the free list of segments that it's already got. But if you end up needing to allocate a large object, it will not be able to do that, and so it will go and allocate a dedicated segment for that allocation.
And when that segment is returned to the system segment provider, it will just be deallocated; it won't try to cache any parts of it. So I decided to take those notions and separate them out, so that the segment allocator will handle the big allocations and the segment cache will deal with the smaller segment sizes, and I believe this will simplify the implementation.
Audience: I guess the reason I was asking is that the thing that's per-compilation can defer to the segment cache that's outside the compilation. The concern I was having with the current scheme, with the whole backing-provider arrangement, was knowing that when the compilation ends, all memory is released, except for the one 16-megabyte segment that stays alive across compilations, right?
So, yeah, I understand the question. The way to think about this is that the long-lived segment cache is only going to hold that one segment; its purpose is to hold on to that one segment. So it's a special segment cache, in a way. I might actually end up having to create a different class name for that segment cache, but the only thing that segment cache is going to do is hold on to that one segment.
It will hold on to it, and its only purpose, all it will use its segment allocator for, is to deal with that 16-megabyte segment. And then the free segment list that's in the current system segment provider, all of the code that runs that stuff, goes in the segment cache that's per-compilation.
That's what the current implementation does. In principle, there's no reason why it couldn't cache additional segments, but the way that the current system works in OpenJ9 is that at the very beginning it allocates 16 megabytes for memory allocations, and it holds on to that segment until the end of the JVM. It gives it back at the end of each compilation, but it doesn't hand it back to the operating system; it goes back to J9.
Audience: To give you the motivation as to why that fixed 16-megabyte segment was even there in the first place: apparently, if you don't keep a 16-meg segment between compilations, there'd be a regression because of the overhead of calling the OS to allocate memory. But if you held on to too much, then you would see footprint regressions, because it would show up in your footprint measurements.
Just to elaborate on that a little bit: the segment allocator, I believe (I haven't actually decided for sure), will be extensible, and the segment cache will also be extensible, so that provides mechanisms for downstream projects to do what they'd like.
The only difference between those two segment allocators is really the constructor arguments that are passed to them, as well as whether or not they're passed a cache. The segment allocator decides whether or not to delegate to a cache based on whether it was constructed and told to have a cache. So arbitrary complexity is probably possible, but hopefully not needed.
Basically, allocations come in on the segment allocator. The segment allocator will first say: hey, I have a request for this many bytes, and I have a segment cache; can the segment cache give me a segment? If it can't, then the segment allocator will allocate it itself; if it can, the segment allocator will return the segment that was returned by the segment cache. The segment cache can also have a downstream segment cache, which it should ask first to see if it can get that segment.
And then, as Leonardo says, you could in principle chain these in arbitrarily complex and scary ways, but this represents basically what OpenJ9 does today, if not in great detail. The thing that's not described here are the rules whereby the code decides to go to this cache or that cache, or this allocator; but in principle, at a very high level, it's: delegate to the cache, and the cache can serve the request.
Okay, if there's no other discussion: thank you, Mark, for taking us through that one.