From YouTube: 2023-02-16 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A: All right, I added this, but Jack is out today, so we will chat about the bucket hint API another day.
D: So I added the question here. Basically, last year we consumed OpenTelemetry tracing in MicroProfile, and then we tried to expand to look at the metrics as well. I know the metrics API is stable, but the semantic conventions, such as metric names, are still experimental. I'm wondering: is there any plan or projection for when this will be stable?
A: There's been renewed interest at this point in the community, and a kind of proposal for how to proceed, because we as a community have struggled to get semantic conventions to stability.
A: Right now we're just trying to get our first one stable, and then I think once we see that we can succeed at that, we will tackle others.
A: The current timeline for HTTP is that we're targeting end of April for stability. There's one thing that has come up which could have a significant impact on the attributes, which is a proposal to align with Elastic Common Schema.
F: We're not imposing anything — I think we offered our schema, and there are ongoing discussions about which bits to take and which bits not to take. But it's a discussion; we're not trying to impose anything.
A: Oh yeah, so I've looked through it in quite a lot of detail and put together spec issues on all the mappings from ECS to what our current semantic conventions are, and on what we would need to change — essentially lots of renames to align. I think there's a lot of benefit to aligning for the broader industry, but I think we're kind of waiting right now.
A: Either way, we're still hoping to stabilize by the end of April, with the caveat that we have never stabilized a semantic convention yet, so there's a lot of risk inherent in that date. And depending on which way the ECS discussion goes, there could be a lot of breaking changes or very few breaking changes before that stability is declared.
D: So for HTTP, stability is going to include metrics?
A: I suspect there will be additional metrics later — just your basic duration, I would expect. We have http.server.duration and http.client.duration, which are HTTP; I would only expect these two, http.server.duration and http.client.duration, to be marked stable in this initial stability.
D: This is server — yeah, server.duration. Is there anywhere in here a different attribute for the metric name?
G: Thank you, Trask. Because you mentioned initial stability — would that make everything HTTP-related stable, so server and client conventions? What would that mean for HTTP: would the whole thing be stable, or just parts of it?
A: For metrics, yeah, only partially — because we would at least want to mark the critical ones stable: http.server.duration and http.client.duration, which allow you to get all your basic rate, errors, duration.
A: That's correct — that's my gut feeling at this point. Okay.
G: Okay, because to me that's kind of confusing. When you're talking about a final push to make the HTTP semantic convention stable — if it has unstable parts, to me at least that means it is not stable.
G: That makes sense, but I believe that means the convention as a whole is not stable.
D: Sorry, yeah — it's sort of split up into two portions, but the most important thing I'm interested in is the essential part of the HTTP side. At the moment there's more than just http.server.duration and http.client.duration — there's something else, like active requests.
A: There's request size — there's a bunch of request metrics, I mean the size metrics, basically.
A: And then this is client duration; this will also be marked stable.
A: And these are more the content-length metrics. My pushback on marking these stable is that I don't think they're that critical, so I'd like to keep the scope narrowed down for marking stable, to reduce the amount of friction.
A: Consider that — I think we had a discussion going about that already.
A: But I'm not sure this is quite so useful either, because it's just a snapshot in time. If you're exporting once a minute, you're just getting a snapshot in time once a minute, and you can get that same info basically from the server duration: your count — the number of requests — divided by the time, to get your rate anyway.
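The rate computation described here — the duration histogram's request count divided by elapsed time — can be sketched as follows. This is a minimal illustration of the arithmetic, not OpenTelemetry API code; the class and method names are hypothetical:

```java
// Minimal sketch: deriving a request rate from cumulative histogram counts,
// as described above (count of http.server.duration divided by elapsed time).
// Class and method names are hypothetical illustrations, not OpenTelemetry API.
public class RequestRate {

    /**
     * Rate between two cumulative snapshots of a duration histogram's count.
     *
     * @param previousCount  cumulative request count at the earlier export
     * @param currentCount   cumulative request count at the later export
     * @param elapsedSeconds seconds between the two exports
     * @return requests per second over the interval
     */
    public static double perSecond(long previousCount, long currentCount, double elapsedSeconds) {
        if (elapsedSeconds <= 0) {
            throw new IllegalArgumentException("elapsedSeconds must be positive");
        }
        return (currentCount - previousCount) / elapsedSeconds;
    }

    public static void main(String[] args) {
        // 1200 requests counted over a 60-second export interval -> 20 req/s
        System.out.println(perSecond(3000, 4200, 60.0));
    }
}
```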
A: If you would like — definitely. I mean, we're taking feedback on what the community would like to be part of that stable release.
A: Add that to this tracking issue, or come to the HTTP semantic convention stability working group — we've been meeting three times a week to try to accelerate. We're really trying, because we've tried twice in the past to get HTTP semantic conventions to stability and failed, so we're trying a new approach here, and so far so good. I don't think end of April is out of the window.
D: Right, yeah — the thing is, I don't know, April is maybe too ambitious for stable. There's still a lot that's not stable, and we are talking about a June release, so I don't think this is — yeah.
D: So the metrics are shown on your web page — basically we would like to make them stable, yeah.
A: Yeah, please do, and please let me know — post here, or in the Slack channel we have for the semantic convention working group.
A: But if you just ping me on Slack, I can route you and route what you need, because we definitely want to — you know, MicroProfile is important to us.
D: Okay, thank you. So the semantic convention working group, you said, meets three times a week — which day, Monday? The same time as today's meeting?
A: Yeah, so we meet Monday — I don't know what time zone you're in; my calendar here is Pacific.
D: You mean the same time as this? Yeah.
A: Yeah, but also, if you just funnel the feedback to me on Slack, I can make sure that it's addressed. Public feedback is also great — it's great to get feedback publicly, too.
D: Yes, thank you. Oh, can you also post a link to the meeting minutes? Yeah, this one? Thank you.
A: Our thought on not marking some of those stable was just that they didn't feel that important, so we figured better to let them sit and digest a little longer. But if we get community feedback that other ones are important, that definitely helps us prioritize.
G: Do you happen to know what will happen with the published jars? Will there be a new jar which will be stable, and you move things there gradually? Or will there just be the alpha jars until everything in the semantic conventions is stabilized?
A: Yeah, we have a plan — instrumentation-api and semconv. Okay, so our plan is currently: all the HTTP semantic conventions are in this one, and this one is alpha. This one is not alpha anymore; this one is stable. Our plan is, once the HTTP semantic conventions are stable, we will move the HTTP stuff from here over into there, keeping the package names the same but just moving it into the stable artifact.
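As a rough illustration of the alpha/stable artifact split being described (the exact coordinates and versions here are assumptions, not taken from the meeting), the experimental semantic-conventions artifact carries an `-alpha` version suffix while the stable artifact does not:

```groovy
// Hypothetical Gradle dependency sketch of the split described above.
// Coordinates and versions are illustrative assumptions only.
dependencies {
    // Stable artifact: no -alpha suffix
    implementation("io.opentelemetry:opentelemetry-api:1.23.0")
    // Alpha artifact holding the experimental semantic conventions: -alpha suffix
    implementation("io.opentelemetry.instrumentation:opentelemetry-instrumentation-api-semconv:1.23.0-alpha")
}
```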
B: That's exactly the plan, but in hindsight I wish we had named the other artifact instrumentation-api-incubator or something like that. But yeah, we do not want to introduce any other breaking changes, so we're keeping the name like this, mm-hmm.
A: The API — okay, just 50 pixels down.
A: Yes, we did think about that, and they don't share — I think we fixed that — they don't share any packages, and we will move the HTTP package entirely from here over to there. Which of course gives us a problem if there are some parts that are stable and some parts that are not stable, in which case maybe we add the incubator package, yeah.
A: Speaking of SDK — was there anything... I know John has a hard stop in four minutes, so any of these things related to the SDK side? Maybe this one, Raphael.
H: Yeah, that's just more of a question, to understand the context. If I look at resources, it seems that resource attributes are only honored in the autoconfigure SDK, and that's a bit different from other languages; also the spec doesn't say anything about where they should be implemented in the SDK. So I would just like to understand the motivation for implementing it in the autoconfigure SDK.
J: The motivation here is that we're trying to keep the SDK separate from the configuration of the SDK. The SDK provides all of the functionality that you need to configure it, but if you want to do all of the auto-configuration with environment variables and system properties, etc., that lives in the autoconfigure module. It's just a decision we made to keep things simplified, so that you can opt in to autoconfigure or not use it at all if you don't want to.
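For context on what "auto-configuration with environment variables and system properties" covers, the autoconfigure module reads standard settings like these — a representative sample shown in system-property form, not an exhaustive list:

```properties
# Representative autoconfigure settings (system-property form; each also has an
# UPPER_SNAKE_CASE environment-variable equivalent, e.g. OTEL_SERVICE_NAME).
otel.service.name=my-service
otel.resource.attributes=deployment.environment=staging,team=obs
otel.traces.exporter=otlp
otel.metrics.exporter=otlp
otel.exporter.otlp.endpoint=http://localhost:4317
```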
J: You don't have to, though — but yes, it is something you can use without the autoconfigure module; there are other ways to use it. But yes, if we need non-SPI support for something that we don't have today, create a ticket for it — and not only create a ticket for it, please contribute the code, because it's basically just me and Jack, and I'm not getting paid for this anymore, and we don't have a lot of resources on the maintainer side right now.
K: Hi everyone, yeah — just some updates with regard to the JVM semantic conventions and the JFR streaming implementation of that, and some questions too. Where we left off with this was that I was trying to implement the metrics semantic conventions with JFR.
K: But there are some gaps due to data that is not available in JFR, so I was going to propose adding new JFR events to the HotSpot JFR team. I spoke with them over the last month, and it seems like they are quite resistant to adding these new events to fill in the gaps. So I think the only option now is just to get the data from JMX.
K: So that's the main update. I think either I try to get the data from JMX within the JFR streaming module for completeness, just so that it fully implements the semantic conventions even though the data source isn't exclusively JFR — or is there another option, to use the JMX implementation in combination with JFR together?
K: Is that also a possibility, or am I maybe missing something? I mean the JMX implementation in the instrumentation repo.
A: Okay — so what will happen with duplicates?
K: You can, right — so I think Jack made a commit to the JFR streaming stuff a little while ago which basically breaks up the handlers into, I guess, features. So one feature is re-implementing what JMX already does, and another feature is new stuff — locks, etc. — that is not already included in the JMX implementation. So you can just choose the features that you want. So there is an overlap; maybe that would be a solution, so yeah, I guess.
K: The question now is: should I bother trying to fill in those gaps with JMX just for the sake of completion, or is it not really worthwhile? What I would be proposing — I made a list somewhere.
K: Most of them are with regard to garbage collection — serial GC heap memory is one; I know CPU load on Linux is another one; and some things to do with mapped buffers as well. And a lot of the memory attributes, like initial memory and committed memory, are not really obtainable from JFR, while they are obtainable from JMX. So basically I think it's just that stuff.
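The initial and committed figures mentioned here are exposed through the standard JMX platform MXBeans. A minimal stdlib sketch — plain `java.lang.management` usage, not the OpenTelemetry JMX instrumentation itself:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal sketch of reading the memory data discussed above via JMX.
// This is plain java.lang.management usage, not the OpenTelemetry JMX
// instrumentation.
public class JmxMemorySample {

    public static MemoryUsage heapUsage() {
        // init and committed are available here, which is the gap relative to JFR
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapUsage();
        System.out.println("init=" + heap.getInit()
                + " committed=" + heap.getCommitted()
                + " used=" + heap.getUsed());
        // Mapped/direct buffer pools, also mentioned above
        for (BufferPoolMXBean pool
                : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + " used=" + pool.getMemoryUsed());
        }
    }
}
```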
C: Yeah, it would be cool to get those added for completeness. I mean, I'm not holding my breath, because those changes take a long time, especially if you're considering backports down to Java 8. I don't know — I'm not trying to discourage it; I just — I think it would be awesome to see that happen, this approach.
C: So are you also suggesting bringing JMX into the JFR module, so that the module itself is complete?
K: About the possibility of going the other way — would there in the future be any possibility of the JFR implementation, if we choose to expand the spec with data only available in JFR, replacing what's currently in the instrumentation repo, which is currently just the JMX foundation? Or was that never in the cards?
K: In that case, maybe it is a good idea, just for completeness, to have JFR streaming fully implement the spec, taking bits from JMX in case they aren't put together and can't be used in combination — does that make sense?
A: I'm not clear why we would just kind of copy in the JMX stuff. I guess I'm not sure what the — like, do we think that some of the things that JFR does collect, it collects better than JMX? And so maybe in the JMX instrumentation — or in our normal runtime metrics — we would have an option to either use JFR or JMX for those sources of data.
K: I think the main thing was that there's more data available in JFR, so in the future, if we introduce JFR, then we could expand the spec. And it's just a matter of neatness — having just either JFR or JMX, I think.
A: To give you a chance — what do we think of moving the JFR metrics into the runtime metrics module, just consolidating that? These are all runtime metrics; we just have two different data sources, potentially, to gather them.
K: You're talking about runtime metrics in the instrumentation repo — yeah, I think that could be good. Would that give people the option to take from both in combination, at the same time? Let's say they only enable the JFR streaming feature that collects data not currently collected by the JMX implementation, and then use both in combination — would we provide the option for that? Yeah, okay.
A: So yeah, that's another option. I mean, we were kind of thinking initially this was kind of experimental, right? And it sounds like we're feeling like the experiment is mostly done, and we're looking at how to get it into the hands of more users by pulling it into the Java agent.
A: Maybe even if it's just a separate module in the—
K: Okay, so what would the process be for moving the JFR streaming stuff over?
A: First, maybe open an issue — start with the issue, because I would want to see if Mateus or Lauri have thoughts. Right now, or otherwise, we can discuss on the issue and see if we get consensus, and if we get consensus then yeah — open a PR, copy it over, and test it. We would want tests. You can look at how the runtime metrics module works; I think there's both library instrumentation and Java agent instrumentation, and tests that work with both.
K: Good — I'll start with an issue and see where we go from there.
K: Okay, I think that's all I had in terms of updates and questions.
E: Yeah, so this is a PR I opened a couple weeks ago, and I just wanted to follow up on it.
E: This is similar to a change that we found beneficial at Datadog. The main issue here is that weak maps tend to have a bit of a lag in how they actually get cleaned, and they tend to create a fair bit of additional load on the garbage collector. So putting a hard limit on the size is helpful for resource-constrained systems — be it a small VM, or Docker with memory limits, or whatnot.
E: That's part of it — and this is particularly problematic when you've got dynamic libraries that generate class loaders on the fly. In the particular case that we saw here, it was a scripting language where the library creates a new class loader for each script that it's trying to run, so it's a very short-lived class loader, and it gets thrown into this cache.
E: Right, but I guess in this example that weak reference adds up to a substantial amount of garbage. I don't know — I didn't get a copy of the memory dump to review myself; I'm working off of what came from a customer call.
C: Yeah, I mean, there was a similar motivation for my PR removing the Groovy class loader, because it was the exact same thing: they were looping through a bunch of Groovy scripts in a directory — like tens of thousands of them — which, okay, that's a wacky use case, but they were doing it, and they were like, "yeah, this thing's leaking memory all over the place." And I think in some of the discussion here it was like, "well, it's a weak—"
L: Yeah, but I also read the same ticket, and I couldn't reach that conclusion. I think the customer was just saying that he got bitten by this once and he would prefer for those entries not to exist in the map. But there—
E: But okay, that said — I don't feel like the fix here introduces a lot of risk for systems. It's putting a fairly generous limit on the size of the caches.
E: So scroll down — yeah, for this particular one I went with 25, but keep in mind that this is per instrumentation, so that magnifies it quite a bit. One class loader isn't creating just one weak reference here — it's creating a weak reference for each of the cache instances, and each cache instance is per instrumentation.
E: So that's why, for this case, the matcher caches can explode quite a bit.
E: And then the other ones — there are two that are matching basically the class loader to a boolean, and then the third one — sorry, maybe this next one — is basically matching a class loader to a weak reference. It's an optimization to allow reusing the same weak reference instance multiple times.
E: So in each of these cases the cache is just being used as an optimization, and by not having a limit on the size of the cache, it's creating other problems rather than just being a quick optimization.
E: I think I mentioned it in the — I can't remember, did I post that in Slack? Maybe, yeah.
E: And this is basically just going through creating a temporary class loader, running the GC, and showing that the total memory being used is quite a bit more than what we started with.
E: Yeah, so what happens — especially if you're dealing with a poorly tuned JVM instance, where normally it's running at a certain threshold and they have their garbage collector configured way high — is that if something changes in the system, now it's pushing up against that limit a lot more. And so, viewing it from the outside perspective, you're using a much higher amount of RSS memory. Once Java takes memory...
E
It
isn't
always
very
quick
to
give
it
back
if,
at
all,
from
the
the
perspective
of
the
operating
system,.
A: Do you think you could turn this into a repro that we could run with the actual Java agent — something that creates lots of class loaders — so that we could see how it behaves with the real Java agent?
L: Did you examine where the memory goes? Is it just because the maps need to have a large size?
L: Yeah — but the dump that he had in the issue was just a random dump; the numbers there were small and didn't indicate any issue.
E: Oh — like I said, the weak keys...
A: My feeling, Tyler, is that the best way to move forward on this would be a repro that we can run with the Java agent, and then people can use that to see for themselves and understand the details — because it's tricky to just explain or understand, I find, without seeing it in person, you know, looking at a heap dump yourself, yeah.
E: One thing I do want to point out: in the PR that the other customer submitted, we added ignores for specific class loaders, so that helps for the instrumentation matchers and reduces the impact of those. However, the agent pool strategy cache happens before that — that pool is actually used by the ignores matcher — and so that one is still impacted.
L: That is — well, if we really want, though, we could get rid of it.
L: I think ByteBuddy allows us to inject our own class file transformer, so that it wraps the ByteBuddy transformer, and we could do the class loader magic there also. The downside would be that it would end up occasionally having to do the matcher magic twice, but—
L: We have a matcher that says, okay, if a class loader has some specific name then just skip it — but it runs a bit too late; we already do some stuff with the class loader and place it into one of the maps. If we really want to, we could try doing it a bit earlier, but I'm not really sure whether it's worth the hassle.
L: Yeah, if you do the matching in the class file transformer, we could cut it off way before the type description is built — you could ignore the class loader before it even gets passed to ByteBuddy.
L: But I think it's probably not worth the effort doing it. I played a bit with Tyler's changes.
L: I think that most of the caches that he limits aren't really — those caches aren't too useful; even if you completely removed them, nothing too bad would happen, besides the—
L: That one is kind of important, I think. I tried to measure what goes on during startup — with Tyler's changes, it basically did like three times the resource lookups and took two to three times more time.
L: Or maybe it would perform better if you just increased the size to 64 or something like that. And another thing: it seems that the concurrent hash map is a larger data structure than the weak cache map, so the memory savings don't seem to be that large. If I interpreted it correctly, it seems that the unbounded weak cache map that has like a thousand entries is twice the—
E: Generally I would expect very few class loaders, but lately, you know, some of the newer frameworks are doing crazy stuff with class loaders to make things very dynamic.
A: So, Tyler, I'd say my hesitation with this — because I totally understand your point that this is not harmful — is that we have had a lot of problems in the past with our caches working across all the different JVM versions. We were using — I can't remember, Mateus or Lauri, help me with the name of the caches we were using — Caffeine, yeah.
E: That one — I'm just reusing the existing one. So if you click up on that — yeah, up here — it's reusing the same one; if you expand it, it's the same instance as the bounded one, not that one down below, so like line 50 there.
E: Yeah, so if you scroll up, you can see that the unbounded one is using just the default one, and that's why I had to make a change to the WeakConcurrentMap to allow for a map instance to be passed in.
E: Cool — yeah, good, I'll work on Trask's ask of trying to create a repro that works with the Java agent.