From YouTube: OMR Architecture Meeting 20220804
Agenda:
* JIT scratch memory profiling [ @jdmpapin ]
A: Welcome, everyone, to the August 4th OMR Architecture Meeting. Today we have one topic, from Devin Papineau. He'll be talking about some profiling tools for profiling JIT scratch memory. So, I'll turn it over to Devin — take it over.
B: Okay, so I'm just going to share my screen.
B: Okay, can everybody see that? Yes? Okay, great. Now, I don't have any slides, so I'm just going to sort of talk about it and then show what I have. So, the context here is: I've been trying to look into some very expensive compilations — expensive both in compile time and in scratch memory usage — and some of the scratch memory usage is just off the charts, through the roof. So, we do have some existing stuff that will try to tell you about scratch memory usage. In particular, we have a type called region profiler, and there's also one called lexical mem profiler.
B: And the problem that I have with them is that they don't really give me the information that I would hope to see. And the reason for that is — well, I guess what it is, is that they accumulate all of the usage and look at the total, right? But we're not interested in total memory usage; we're interested in maximum memory usage. So these existing things, they accumulate across the run, but that's just not telling you that.
B: So, in order to try to find that out, I've been working on this tool. Basically, there's, you know, different instrumentation to measure the memory. So here there's something that says "profile this scope." It's going to declare a variable with this name, and we will call the scope "opt" in the output, and this is to interact with all of the stuff that I've been working on. Basically, it's going to record a detailed stream of events during compilation, and it generates this output file with "this happened, this happened, this happened, this happened." In particular, the interesting events are: the start and end of compilations; then, within a compilation, the creation of a region and the destruction of a region — and here I mean TR::Region, the memory region; the start and end of a scope like this one, right — so the start is here where the line is, and the end is when this variable goes out of scope, at the end of the method in this case; and allocations within the regions.
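
[Illustrative sketch, not the talk's actual code: one way a profile-scope macro like the TR_PROFILE_SCOPE discussed below could be built, as an RAII helper whose constructor and destructor emit scope-start/end events. The class, the event format, and printing to stderr are all assumptions; the real tool writes a compact per-thread event stream.]

    #include <cstdio>
    #include <chrono>

    // Current time in nanoseconds, standing in for the port-library timestamp.
    static long long nowNanos()
       {
       using namespace std::chrono;
       return (long long)duration_cast<nanoseconds>(
          steady_clock::now().time_since_epoch()).count();
       }

    // RAII helper: entering the scope emits a start event, and the end event
    // is emitted when the declared variable goes out of scope.
    class ProfileScope
       {
       public:
       ProfileScope(const char *name, const char *file, int line) : _name(name)
          {
          std::fprintf(stderr, "scopeStart %s %s:%d t=%lld\n",
                       name, file, line, nowNanos());
          }
       ~ProfileScope()
          {
          std::fprintf(stderr, "scopeEnd %s t=%lld\n", _name, nowNanos());
          }
       private:
       const char *_name;
       };

    // Hypothetical macro: declares a variable so the scope ends exactly where
    // that variable's lifetime ends, as described above.
    #define TR_PROFILE_SCOPE(var, name) ProfileScope var((name), __FILE__, __LINE__)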
B: So, you know, you allocate an object, and that's this many bytes, and whatever — and that gets counted. Now, instead of recording every single allocation, I treat all of the other events as sort of significant points in time, and I just total the allocations in between them; there'll be a sort of linear approximation between the significant events later on. So then this event stream — it's got a lot of data that can answer questions, right? We can do post-processing on that data to find, you know, what it is that we really want to know.
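
[Illustrative sketch of the post-processing idea: replaying net allocation deltas between significant events to recover the maximum outstanding memory, as opposed to the accumulated total the older profilers report. The event format and numbers are invented.]

    #include <algorithm>
    #include <vector>
    #include <cstdio>

    // Replay signed memory deltas (bytes allocated between significant events
    // are positive; region destruction releases bytes). The running sum is the
    // memory in use; its maximum is the peak -- which is the quantity that
    // matters here, not the total of all allocations.
    int main()
       {
       std::vector<long long> deltas = { 4096, 12288, -4096, 65536, -12288 };
       long long inUse = 0, peak = 0, totalAllocated = 0;
       for (long long d : deltas)
          {
          inUse += d;
          if (d > 0) totalAllocated += d;
          peak = std::max(peak, inUse);
          }
       std::printf("total allocated: %lld bytes, peak in use: %lld bytes\n",
                   totalAllocated, peak);
       return 0;
       }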
B: Okay, so — and I guess I'm hoping to supersede and displace these existing things, the region profiler and the lexical mem profiler — and I guess there's also a lexical timer. So, right, that brings me to—
A: Can I just ask a clarifying question, maybe, just to be sure: so TR_PROFILE_SCOPE is something new that you are introducing. So you have to go throughout the code and drop those into all the places that you want to be able to profile?
B: Yes, they do need to be sprinkled into the code by a human, okay. So, where exactly these things should exist all of the time, versus should I have some that are behind an option — and, you know, maybe you tack a boolean on here or something — versus which things should be left to a process like what I just described: that's sort of an open question.
B: Right, okay. So I'm going to be using Java here, but basically none of this is Java-specific; the implementation is almost entirely in OMR. I think I've got something about getting the timestamp, where we want to use the OMR port library — which, at the moment, we can't rely on being able to call directly from the JIT in OMR code, but we can in a downstream project — so I'm sort of doing an end run around that. But this is basically all OMR-level stuff. Okay, so, all right: we're going to run -version, and in there we can specify — first of all, I'm going to make it, you know, spend some time actually compiling — async compilation.
B: Let's tell it to do hot, with a low count, so that's going to actually spend time. So now, in order to collect a profile: the option is called profileCompileScopes.
B: Here we've created an output file, like this. Now, before I go on, I'm just going to delete that and generate a new one. I want to sort of highlight this: each compilation generates this stream of events, so each one naturally has its own output — suffixLogs now, too, so that we don't have to deal with all the junk on the end of that.
B: What we see here is a thread-specific output file, and this is what has basically the data in it, right. And if we had had more compilation threads running, there would be more of these, plus the file that we specified.
B: If we look at it, it's just listing the name of this, and then on JIT shutdown it combines them all together.
B: So, sometimes you might have a case where, you know, the process didn't shut down in the normal way, right? Something like this happens, where I just canceled it — or, I know we've had at least one benchmark that we cared to look at before which ended with the process just being sent a SIGKILL for some reason.
B: So if that happens, there's an option to combine an existing one. So it's — is it scopeProfileCombine? I'm forgetting my own option. ...Now it's been put into the main file. So this is because—
B: Transporting multiple files that go together is not fun, and interpreting file names across different operating systems — you know, is it an opaque byte string?
B: I guess I can talk a little bit about what we can see inside of that while it's running — oh, well, it's done. All right. So, I guess the main thing is the memory, but let's start with the time for the moment. So, there's a tool that I have here in—
B: Well — and I guess, first of all, if you look in here, it's a Python script, but it relies on this C program next to it, because of tearing through the data in Python: I originally tried to do that, and it took, you know, minutes to open even a modestly sized profile.
B: "20 max" — when I run that, we get a list of everything that we have profiling data for. So this is how you pick it from the list, right; it asks you to choose which one.
B: This is the peak backing memory usage for scratch memory, the total amount of time that it took, the opt level, and the signature; and these are sorted by scratch memory usage. So if you look at the memory usage here, it's increasing; the time is not increasing — you see, it goes back and forth. But you can just type in "time" and you get them sorted the other way. So now the time is increasing here, and the memory usage — see, this one used less memory than that one.
B: Sorry — brief interruption. Okay. Now, okay: so, first of all, here's the timeline of everything that we did during the compilation. So, we spent most of our time in the opt — that's this one right here.
B: And you can sort of zoom in, so you can see, you know, that's when we started codegen, right there; and if you click — you know, here's the things that we did inside of codegen.
A: So that just shows the nesting?
B: So, instruction selection happened nested within the codegen scope — dynamically nested, anyway; it's not a lexical thing, if that makes sense. It's sort of like an activation frame for a function, but it's for this scope instead. So you can sort of see the stack, right: we were compiling, we did codegen, we did instruction selection. And you can see how much time we spent — you know, instruction selection was 16 and a half milliseconds. These percentages here: that's the percentage of the parent, and then the percentage of the overall total amount for the compilation.
B: And, you know, there's a lot of stuff here that's too small to see, but you can zoom in on that stuff if you want. You know, so this was what local CSE did; loop strider did use-defs and then it did its perform method; and so on.
B: Time-wise, it's showing you absolutely every time that it entered and exited a scope in here.
A: So, the two percentages that you were showing when you're hovering over one of those boxes: one is the relative percentage to, like, the row beneath it — the box that it's in — and then the other is relative to the overall?
B: Right. You can also, you know, say: here's inlining, and this is ilgen, and you say, okay, well, what were we really measuring when — oh, you know what, that's not going to show up; I'd have to switch windows again. Anyway, it tells you where the thing is, and there's a hook, so you can right-click on this and it'll open it in your editor, if you have a script to do that from the file and line number.
B: Right. So, you know, there was a lot of stuff going on here, right — there are tons of things that we did, and you can see all of it. Okay. So then, from this, if you merge together the stacks that are the same, you get the flame graph.
B: So this is just the same stuff, but it's been sort of aggregated up, right. So if we see here: here's GVP, GVP, GVP, GVP, GVP — and then this is the total of all the GVP. Right, so you can see five percent of all the time here was spent doing value numbering for GVP.
B: Oh, here, yeah — so here you can see there's also a total versus self; it only shows that when they actually look different, and the self is just the time excluding any of the children nested inside. So this is basically just a flame graph — like, you know, the Brendan Gregg script online — but it's derived from this event stream.
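
[Illustrative sketch of the folding step: identical scope stacks are merged and their times accumulated, which is the same collapsed-stacks form that Brendan Gregg's flamegraph.pl consumes. The stack strings and times are invented.]

    #include <map>
    #include <string>
    #include <utility>
    #include <cstdio>

    // Fold samples: identical scope stacks are merged and their times are
    // accumulated -- all a flame graph needs as input.
    int main()
       {
       // (stack path, milliseconds) pairs reconstructed from scope events
       std::pair<std::string, double> samples[] = {
          { "compile;optimizer;GVP;valueNumbering", 5.0 },
          { "compile;optimizer;GVP;valueNumbering", 3.5 },
          { "compile;codegen;instructionSelection", 16.5 },
          };
       std::map<std::string, double> folded;
       for (const auto &s : samples)
          folded[s.first] += s.second;
       for (const auto &f : folded)
          std::printf("%s %.1f\n", f.first.c_str(), f.second);
       return 0;
       }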
B: Okay. Then, you know, the star of the show here is the memory. So, this is scratch memory usage over time. You can see these linear approximations here, right — you can end up with this straight line, and it's just that there wasn't necessarily any interesting event along there. Anyway, this is, you know, how much memory we're using over time, and it measures — well, it shows a whole bunch of stuff here. So, first of all, this is a stack plot, right. The blue at the bottom here is the heap memory usage, for the heap region.
B: The orange then is the node pool region, and then CFG regions, structure regions, alias regions, and so on. And then at the top, this yellow is the stack regions — which is mostly stack memory regions, but it also includes some that are not registered as a stack memory region but are declared on the stack for a relatively short time, which happens in a few places. And so in here we can see — I mean, this right here is clearly where our peak memory usage is, right.
B: So, when you say, you know, region.allocate — 12 bytes — the requested memory is 12 bytes; that's just what was asked for. Then there's allocated memory, which is higher. So, let's say you have 100 bytes left in the segment, and somebody asks for 108 bytes or something. Well, you're going to get a new segment, and you're going to hand them the 108 bytes from the new segment — but then nobody ever uses that 100 bytes at the end of the original segment.
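
[Illustrative sketch of the three measures with a toy bump allocator over fixed-size segments: requested is what callers asked for, allocated additionally counts the stranded tail of a segment that could not satisfy a request, and backing counts whole segments. The segment size and request sizes are invented.]

    #include <cstdio>

    // A 108-byte request that does not fit in the 100 bytes remaining strands
    // that tail in the old segment, so allocated ends up above requested, and
    // backing counts whole segments.
    int main()
       {
       const long segmentSize = 65536;  // invented backing-segment size
       long requested = 0;              // bytes callers asked for
       long allocated = 0;              // requested + stranded segment tails
       long backing = 0;                // whole segments obtained
       long remaining = 0;              // space left in the current segment

       long requests[] = { segmentSize - 100, 108, 12 };
       for (long r : requests)
          {
          if (r > remaining)
             {
             allocated += remaining;    // tail of the old segment is wasted
             backing += segmentSize;    // grab a fresh segment
             remaining = segmentSize;
             }
          requested += r;
          allocated += r;
          remaining -= r;
          }
       std::printf("requested=%ld allocated=%ld backing=%ld\n",
                   requested, allocated, backing);
       return 0;
       }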
A: Devin, can I ask a question?

B: Sure.

A: Maybe I missed it when you first started showing this graph — is this just one method, or is this all compilations?
B: Okay, right — like, now, there's something about which things were happening simultaneously with which other things, and that's the kind of information that I haven't tried to capture here.
B: But, you know, at a coarse grain: we have some number of compilation threads, and each of those can, at any moment, be using up to the maximum that we used for any compilation.
B: Right — so this is exactly the same reason why I'm not adding up all of the allocations even within a compilation, right. Because, well, after this point — you know, maybe all of these... I think these correspond to the GVP — and maybe, if you add these up, they might look like they're just as much as, or nearly as much as, this thing; but really they're far less of a problem than this thing is, if that makes sense.
B: Okay, so those are the three kinds of memory that we've measured, and so I'll go back to requested. So, when you look at requested, it shows the gray as sort of the top of where the allocated memory would be, and the pink — or the sort of transparent red — is where the backing memory reaches up to. All right. So, what can we do with this now?
B: You can zoom in, you know, and look very closely if you want. And this x-axis is tied to the timeline, so you can see what you were doing at the part of the graph that you're looking at. But, probably more interestingly, we can select a range of time, like that, and it will identify the peak memory usage within the selected range — or you can sort of double-click to select the whole thing. And so here's our peak memory usage, right.
B: It shows how much memory is allocated in each. And so these are the same categories that were shown here in the stack plot, but now they're a bar chart — showing, again, requested memory, but you can look at allocated or backing.
B: It would change the aspect ratio, which changes the meaning of the area, and then you wouldn't necessarily be looking at what you thought you were zooming in on — but you can sort of do that, to sort of fill the screen with one category.
B: And then — oh, I didn't want to go there yet; spoilers. If I go back to, yeah, here: the other thing that we can do is select these regions.
B: You know, maybe we're interested in, let's say, everything except the heap region, and that tells us that all this stuff is 99 megs, and how many percent that is of the total, and so forth.
B: Now, from here, we can go into the memory flame graph. So this is just like the time flame graph, but now, instead of the width of these frames representing an amount of time, they represent an amount of memory — and it's who allocated the memory that is still outstanding at this exact moment, which is exactly what you need to know if you want to be using less memory at this peak.
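
[Illustrative sketch of the attribution rule: replay events up to the selected instant, crediting live bytes to the scope stack that allocated them; whatever is still outstanding at that moment is what the memory flame graph draws. The events here are invented.]

    #include <map>
    #include <string>
    #include <cstdio>

    struct Event { double t; std::string stack; long bytes; }; // bytes < 0 = release

    int main()
       {
       Event events[] = {
          { 1.0, "compile;optimizer;PRE",  64 * 1024 },
          { 2.0, "compile;optimizer;GVP",  32 * 1024 },
          { 3.0, "compile;optimizer;GVP", -32 * 1024 }, // GVP memory released
          { 4.0, "compile;optimizer;PRE", 128 * 1024 },
          };
       double selectedInstant = 4.5; // e.g., the detected peak within a range
       std::map<std::string, long> outstanding;
       for (const Event &e : events)
          if (e.t <= selectedInstant)
             outstanding[e.stack] += e.bytes;
       for (const auto &o : outstanding)
          if (o.second > 0)
             std::printf("%s: %ld bytes outstanding\n", o.first.c_str(), o.second);
       return 0;
       }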
B: And yeah, so we can see, you know, for example — out here we can see that it's the stack memory that suddenly increased the most, right; there's a bit of alias or whatever. So you can look at, say, just stack — that's that category — and we see that it's almost entirely partial redundancy elimination doing that. And now, if you were to, you know, put some of those—
B: Oh — I can't point outside of this window. But if you were to drop some of those profile-scope macros into some strategic locations, then you would see a further breakdown of who allocated it.
B: And of course, that flame graph is using the requested-memory measure here; but if you were in, you know, allocated, say, then it would show you allocated. So this is 85.8 megs, which is the total selected here, which is different from the 83 requested — so that's not too much overhead over here.
B: From here also, if you look at the count and then you go to the flame graph — so now the measure is the number of regions, and these are attributed to the stacks that created them.
B: —is here, and we see this one took, you know, a full four gigs. I ran with an increased scratch space limit. And if we select that one—
B: That's just sort of to give you an idea of the resolution of the time data that this thing is working with. So if it gets too precise — you know, like, if you see things that look like they're one nanosecond — that's a fiction, where it's sort of spread things out in between successive timestamps. And then all of this stuff down here is just sizes of parts of the temporary file.
B: Anyway, if we go to — I have to keep switching.
B: Okay, so, yep — so here, for example, we can look at this. This is alias memory, and maybe I'll look at the backing — wow, the backing is way down there. That's interesting.
B: I was seeing a larger problem with alias memory before; I'm not sure what's going on. But here we can see, like, this just absurd number of regions — again, this is 122,000 regions. And now, I'm not going to go into the flame graph for this, because it's slow — and it's slow because this number of regions is just crazy. They're basically all CFG and structure regions, so there's definitely something going on there.
B: I think it's good to see this count, because, one, it interferes with what this tool is trying to do, and two, it could represent a space usage problem of its own that the backing memory doesn't count: there's a 4K buffer that's allocated inline as part of every region object, and it starts with that, and then it only allocates segments if it runs out of its 4K buffer.
B: And I don't want to count that as part of the backing memory, because then you're double counting, right — that 4K buffer, as part of the region object, will already be counted. Well, except for possibly the main heap region.
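
[Illustrative sketch of the inline-buffer arrangement being described — a region that serves small allocations from a buffer embedded in the object itself and only obtains backing segments on overflow. This is not the OMR TR::Region definition; names and mechanics are assumptions.]

    #include <cstddef>
    #include <new>

    // The region object embeds a 4K buffer, so small regions never obtain a
    // backing segment. Counting that buffer as backing memory would double
    // count it, because the region object itself usually lives in some other
    // region's memory already.
    class ToyRegion
       {
       public:
       ToyRegion()
          : _cursor(_inlineBuffer),
            _end(_inlineBuffer + sizeof(_inlineBuffer)),
            _backingBytes(0) {}

       void *allocate(size_t n)
          {
          if (_cursor + n <= _end)
             {
             void *p = _cursor;   // served from the inline 4K buffer:
             _cursor += n;        // backing memory does not grow
             return p;
             }
          // only on overflow does the region obtain real backing memory
          _backingBytes += (long)(((n + 4095) / 4096) * 4096);
          return ::operator new(n); // stand-in for carving from a segment
          }

       long backingBytes() const { return _backingBytes; }

       private:
       char _inlineBuffer[4096]; // the 4K buffer allocated inline
       char *_cursor;
       char *_end;
       long _backingBytes;
       };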
B: And I have some — let me see; I probably should have run this in the background while I was talking about that. But let's see.
B: Basically, I'm just re-running the benchmark that I got this from. ...That was from global copy propagation, right. So it's actually still not too slow, if I don't select all of this stuff.
B: That one is, I think, actually a good change on its own — which is to get rid of all of those ridiculous regions that are misleading. The other one is sort of provisional — I don't know, it's sort of a workaround for this thing here:
B: the copy prop using a lot of alias memory, which I tracked down to, basically, the alias set interface using far more memory than it ought to. Okay, it's finishing.
A: Sorry — how did you go from global copy propagation to the alias set interface? You just inspected the code?
B: Yeah — and then I found exactly where we're allocating most of that alias memory, or possibly all of it; but those scopes are not in the build that generated this.
B: We did not create a gigantic alias spike in global copy prop, and if we look over here now, we only have 24 regions, so this is much easier to look at.
B: And, by the way, this detailed information exists for this method and many others — I guess you saw the scrolling list before — and this output file is 244 megs, which, you know, could just be—
B: —you know, one hot compilation log, if it's a big enough method. So I tried really hard to keep the output compact. But anyway, there you have it. Here, there's more detail on what's happening during inlining in this one; you can collapse this recursion when you look at the flame graph.
B: But anyway, that's what I have going here, and hopefully people are interested in having this as part of OMR. It's a considerable chunk of code to make this all happen — I think the C and the Python, each in the tools directory, are about 3,000 lines.
A: Okay, great. I mean, I think this is a great improvement over what we have — and you've already used it, in terms of finding where some of these memory allocation problems are, and you've identified a few places. You know, you didn't mention them all on this call, but I think it's really proven its value there. So, you know, getting this into the code and getting it into the hands of other developers would, I think, be a very useful tool to have, for sure.
A: Very early on, you were talking about lexical memory profiling and lexical timers — like, we already have functions for that. Are you saying that we could deprecate those, and perhaps unify all this under — I don't know what you want to call them — these tags that you would add in the code to delimit where this profiling should occur?
B: So, a lot of the places where I have placed, you know, my profile-scope macro are places where it's right next to one or more of those other things, and it seems as though those sort of should be unnecessary afterwards.
A: Okay, yeah — I would support that direction. The existing TR_Memory class has got a large enum in there, where you can specify, when you allocate, where the memory is, or what phase you're in, or who's going to be using the memory — you probably know what I'm talking about.
A: Do you use that information, or is this just—

B: No, I'm not using that information. I think a lot of things don't even necessarily specify it; probably mostly things that are allocated in the heap region specify it.
A: Yeah, I know we used to be very careful about specifying — I can't remember what that term is — where the memory is going, like who's asking for the memory; you know, at least 15 years or so ago. But I think that has definitely not been enforced as much as it used to be, so I think a lot of it is stale. But the reason I'm asking is because perhaps all of that infrastructure could be deprecated as well, with the proper placement of these profiling hooks or tags. Though we'd still keep track of persistent memory, because—
B: Yeah, that definitely makes sense for persistent memory, I think — because all this stuff that I've just shown is completely based on Region, in scratch memory. And, by the way, it relies on the compilation not leaking any memory at a lower level than Region — which we currently have a memory leak in, in OpenJ9.
B: I have a provisional fix in my branch, but I need to do a proper fix for that and, you know, measure startup and whatever for that.
B: But basically, Region goes out to the system segment provider, and then, out in the system segment provider, you can leak memory — this won't show it to you.
B: But I should probably look into how you can even see those statistics, based on the types that you were mentioning, Daryl. I think it's—
A: I don't think I've used them before. Yeah, there's a — it may actually be debug-only; you have to enable it under debug, like, they're guarded with the debug macros. So it's been a long time since I've used them, and I wouldn't even rely on them anymore, because I think they're very stale; I don't think people have been maintaining them.
B: I guess one other thing to mention — wait, I did have another thing to mention; what was it? Sorry, just a moment. Oh, yes: the time measurements.
B: And also, you'll generate a lot of data — although I did try to make sure that you can have a lot of scopes started and ended. I think, at one point, I put a loop into global VP that just says, you know, okay:
B: when we're doing global VP's perform, loop a million times, and inside the body of this loop, open and close the scope. And I made sure that that, you know, didn't produce a ridiculously sized file, and that it didn't slow the entire visualization to a crawl.
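
[Illustrative sketch of that stress test, reusing the hypothetical TR_PROFILE_SCOPE macro from the earlier sketch; the function and scope names are invented.]

    // Open and close a profiling scope a million times inside global VP's
    // perform, to check that the output stays a reasonable size and that the
    // visualization stays responsive.
    void stressProfileScopes()
       {
       for (int i = 0; i < 1000000; i++)
          {
          TR_PROFILE_SCOPE(stress, "globalVPStress"); // scope opens here...
          }                                           // ...and closes each iteration
       }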
A: Looks good — I think it's a good, positive step. Other questions for Devin?
A: No — yeah, I'm very glad to see that we have something like this developing. Memory consumption is something we really haven't kept an eye on for a long, long time in the JIT. I mean, there have been cases where, yes, we've had to, but tracking it at this level I don't think we've done in quite some time. So it's a great step forward here.
A: So, okay, if there are no other questions — thank you, Devin, for taking the time to take us through this and show us what you have. And with that, I guess we can end the call. Thanks, everyone, for attending this week; we'll talk to you in a couple of weeks.