From YouTube: 2022-12-01 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
C: All right. Oh yes, Peter, you're here, great. I threw this on; I thought this would be interesting to chat about with Jack. So, Peter is using the logging exporter.
E: So one of the features of these new garbage collection metrics, which are histogram-based, is that they record the maximum garbage collection time per reported interval, and these times are of interest to customers, because one of the long-standing issues with Java was stop-the-world garbage collection pauses that introduced a lot of instability into applications. This is still a concern, especially for users of Java 8, and we do not have valid numbers for the stop-the-world garbage collections, because they are not exported by the JMX MBeans or by the garbage collection listeners.
E: However, individual garbage collection times could serve as an approximation for these values. So if the customer sees that the longest garbage collection in a given interval was not longer than x milliseconds, then he can be assured that definitely no stop-the-world garbage collection was taking longer than that.
E: We are unable to get meaningful maximums except for deltas, so at least in this case I believe it would be nice to have the default aggregation temporality be delta instead of cumulative. Maybe that's true for other histograms as well; I don't know. But it's just a simple use case where we would like to have the delta aggregation temporality.
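The point about maximums can be illustrated with a small sketch (plain Java, with made-up pause durations): under cumulative temporality the recorded max never resets, so one long early pause pins it forever, while under delta temporality the max is recomputed for each export interval.

```java
import java.util.List;

public class MaxTemporality {
    // Max pause (ms) within a single export interval; delta temporality
    // effectively resets this value every interval.
    static long deltaMax(List<Long> pausesMs) {
        return pausesMs.stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    public static void main(String[] args) {
        // Three export intervals of GC pause durations (ms); the first
        // interval contains one long stop-the-world-like pause.
        List<List<Long>> intervals =
                List.of(List.of(12L, 950L, 30L), List.of(8L, 15L), List.of(22L, 40L));

        long cumulativeMax = Long.MIN_VALUE;
        for (List<Long> interval : intervals) {
            long deltaMax = deltaMax(interval);
            cumulativeMax = Math.max(cumulativeMax, deltaMax);
            // The cumulative max stays pinned at 950 after the first
            // interval, even though the later intervals were quiet.
            System.out.println("delta max=" + deltaMax + ", cumulative max=" + cumulativeMax);
        }
    }
}
```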
E: So how to do this? Well, I see two ways. One is that we would have a default view configuration that would export these histograms as deltas; and, perhaps a little more effort, but maybe useful, is to have some kind of hints at the SDK level.
F: So, I totally agree that min and max are useless for cumulative histograms, and that the max garbage collection duration is a very useful value, for the reasons you stated. A couple of thoughts, because there's a lot to unpack there. I've been considering removing min and max from being recorded when the histogram is cumulative, because they're not useful, as you point out, or they're minimally useful. The spec recently had a change that makes the recording of min and max an optional configuration parameter, and so I was considering implementing that in Java and making it default to false when the aggregation temporality is cumulative: record min and max by default for delta histograms, do not record min and max by default for cumulative histograms. That's kind of where my head was at.
F: You mentioned a couple of other things. You mentioned possibly using a default view to configure this specific histogram to use delta aggregation. Aggregation temporality is actually not a view-level concern; it's not something that you can configure via views. It's a metric exporter and metric reader level concern. The idea there is that the temporality of your metrics is really something that is important to whatever backend you're sending to. Backends are kind of built around that concept at a low, conceptual level, and so it's not something you want to change on an instrument-by-instrument basis. It's something where you want to say: hey, for all instruments of this type, all histograms or all counters, make the temporality be cumulative or delta. And so the place that you want to configure the temporality is really at the metric reader and, by extension of the metric reader, the metric exporter level.
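As a concrete illustration of configuring temporality at the reader/exporter level, a sketch against the opentelemetry-java SDK and OTLP exporter (artifact and version details omitted; treat the exact builder methods as the 1.x API):

```java
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.AggregationTemporalitySelector;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

// Temporality is chosen on the exporter, not per instrument and not via views.
OtlpGrpcMetricExporter exporter =
    OtlpGrpcMetricExporter.builder()
        // "delta preferred": delta for counters and histograms,
        // cumulative for up-down counters.
        .setAggregationTemporalitySelector(AggregationTemporalitySelector.deltaPreferred())
        .build();

SdkMeterProvider meterProvider =
    SdkMeterProvider.builder()
        .registerMetricReader(PeriodicMetricReader.builder(exporter).build())
        .build();
```

With autoconfiguration, the equivalent knob is the `otel.exporter.otlp.metrics.temporality.preference` system property (or the `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` environment variable), which appears to be the property Peter refers to.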
E: Yes. Well, with the OTLP exporter there is this property that I can use, and yes, it changes all histograms, and I thought it's maybe too coarse-grained, because I'm not sure I want all histograms to be deltas. Maybe some of them should be cumulative, and there is no way to specify that.
F: Yeah, so that's an interesting idea. I'm not sure what the use case would be to have some histograms be cumulative and some delta. Like I was mentioning, that's something that normally a backend has a strong preference about, one way or another. And I guess what you get out of cumulative metrics in general is some protection against data loss.
F: So if you're sending deltas and some of those messages are lost in transit, then your metrics on the backend can be a little bit inaccurate, because they can be missing some of the data. With cumulative, if some of those messages are lost, you always know what the latest was, and so you can tolerate some loss in transit, some intermittent network outages. I'm not sure why exactly you would want that for some histograms and not for others.
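The loss-tolerance difference can be sketched in a few lines (plain Java, hypothetical values): with deltas, a dropped export is data gone for good; with cumulative, the latest point alone recovers the total.

```java
import java.util.List;

public class LossTolerance {
    // Cumulative series: the latest received point carries the full total.
    static long totalFromCumulative(List<Long> received) {
        return received.get(received.size() - 1);
    }

    // Delta series: the total is the sum, so every dropped point is lost.
    static long totalFromDeltas(List<Long> received) {
        return received.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        // True per-interval increments are 5, 7, 3 (true total 15), but the
        // second export is lost in transit.
        System.out.println(totalFromDeltas(List.of(5L, 3L)));      // undercounts: 8
        System.out.println(totalFromCumulative(List.of(5L, 15L))); // recovers: 15
    }
}
```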
C: To the original question about the max GC, how to capture the max, identifying the stop-the-world kind of garbage collection events: I'm wondering if it's solvable by having more histogram buckets, right? Like, if you had large histogram buckets, like five seconds, ten seconds, 20 seconds, 60 seconds, you would know in any given interval if there was a GC that exceeded 10 seconds or 20 seconds or 60 seconds.
F: You can't get the actual value, but it's not as important.
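Trask's bucket idea can be sketched as follows (plain Java, illustrative bounds): with explicit bucket boundaries at 5, 10, 20, and 60 seconds, any count landing in a bucket whose lower bound is at or above a threshold proves some pause exceeded that threshold, even though the exact duration stays unknown.

```java
public class BucketDetect {
    // Illustrative upper bounds (seconds): buckets are
    // (0,5], (5,10], (10,20], (20,60], (60,inf).
    static final double[] BOUNDS = {5, 10, 20, 60};

    // Given one interval's (delta) bucket counts, of length BOUNDS.length + 1:
    // did any recorded pause definitely exceed thresholdSeconds?
    static boolean anyPauseExceeded(long[] bucketCounts, double thresholdSeconds) {
        for (int i = 0; i < bucketCounts.length; i++) {
            // Lower bound of bucket i; a count here means a pause longer
            // than this bound occurred.
            double lowerBound = (i == 0) ? 0 : BOUNDS[i - 1];
            if (lowerBound >= thresholdSeconds && bucketCounts[i] > 0) {
                return true;
            }
        }
        return false;
    }
}
```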
E: Yes, it is an approximation, but yeah, not very reliable. We would have to keep a very large number of buckets to be even close.
C: That's where views could come in handy, because there is probably going to be... or not views, but hints. The hint API is probably going to allow us to specify, in the instrumentation layer, what we think are good buckets for a given metric.
F: Yeah, so Peter, what Trask is alluding to is this hint API that has been proposed and talked about at the spec level. The idea would be that instrumentation can inform some aspects of how the SDK should treat those measurements. A popular idea is that instrumentation probably has a better idea of the default buckets than some kind of global default buckets that are shared across all histograms, and so an instrumentation author could hint to the SDK which buckets to use. That doesn't exist yet today, and it's unclear exactly which things you would be able to hint to the SDK and how those hints would interact with views and other kinds of defaults. Those are still open questions.
F: So the buckets have a base that changes; it's dynamic. Sometimes it might be base two, but other times it's base two to some power.
C: So yeah, Peter, I think having a more fine-grained configuration of delta versus cumulative is beyond this group. That would definitely need to be discussed through the spec.
F: Yeah, so what this group can control is... well, actually, there are no tools that we could add to the SDK that don't exist today and are specified. The full toolkit that the specification allows us to give you is already implemented. And so the options are either to use delta across the board, for all histograms, or to do something like Trask was suggesting: use views to add more granular histogram buckets and get an approximation, if cumulative is important.
C: Okay, what is the AppDynamics metrics backend? Because, as Jack mentioned earlier, the pattern we've seen generally is that metrics backends are either delta or cumulative, and I think that's what drove this in the spec to just be an exporter concern.
E: So for cumulative sums we are internally converting those metrics to deltas, and we store them as deltas. I think many other vendors are doing the same thing. With histograms, well, we are on the fence about that, because we don't have much experience with histograms so far, and I was hoping that these garbage collection metrics would be one of the first things that we would try to implement and show to the customers, and we hit this particular issue.
E: And so here is another thought: how about changing the default export for OTLP so that it would provide deltas for histograms? What do you guys think about that?
C: I mean, so...
E: Currently... well, for cumulative sums... sorry, for monotonic sums, reporting them as cumulative makes a lot of sense; it has multiple benefits. With histograms I'm not so sure, and this particular case shows that exporting them as deltas, at least in this particular case, has quite visible benefits.
F: Well, I think the defaults, which use cumulative temporality... I think the goal there is to align with Prometheus.
E: Yes, yes. So I understand why we are by default exporting histograms as cumulative, but we miss this particular use case for garbage collection metrics.
C: Peter, does this... I mean, if you have your customers set it to delta, then you get delta for some things, but you still get cumulative for up-down counters. Does this setting work for you?
F: So I work for New Relic, and New Relic is predominantly a delta backend; this is just kind of what we have to live with. We tell our customers to use delta temporality, so to configure this parameter to use delta temporality. The other option, if they're really insistent on using cumulative, is that there's a cumulative-to-delta processor in the Collector contrib. It is not able to have useful min and max values...
F
For
the
reasons
you
said
so
you
know,
Min
and
Max
are
basically
lost,
but
you
know
you
can
convert
cumulative
to
Delta
for
the
buckets
the
sum
and
the
count
of
the
histograms
okay.
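The conversion described here is simple arithmetic over consecutive cumulative snapshots; a plain-Java sketch for the bucket counts (the same subtraction works for the histogram's sum and count, but the interval's min and max cannot be reconstructed):

```java
public class CumulativeToDelta {
    // Delta between two consecutive cumulative bucket-count snapshots.
    // Applies equally to the histogram's sum and count; min and max of the
    // interval are not recoverable from cumulative snapshots.
    static long[] delta(long[] previous, long[] current) {
        long[] out = new long[current.length];
        for (int i = 0; i < current.length; i++) {
            out[i] = current[i] - previous[i];
        }
        return out;
    }
}
```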
C: ...work going on, because, to Peter's point, ideally we all just want to tell people: hey, just use this and point it at the OTLP endpoint. And the fewer things they have to configure, the fewer things they're going to get wrong. Yeah.
C: It might be worth, Peter, asking the OpAMP folks if that's something they've considered.
F: They're configuring... full configuration of the SDK is one of their use cases. I don't think that there are any implementations of that yet, but that's something they want to get to eventually.
F: What's interesting about OpAMP, like one thing to be aware of, is that the protocol is agnostic of the thing that you're configuring, so it doesn't care. There's a client and a server aspect of the protocol, and they communicate and pass down configuration, but the configuration is just bytes; it's just a blob of bytes. And so it's up to the client of the protocol to interpret those bytes, know which thing it's configuring, and parse those bytes and apply them.
G: ...the percentiles, and could use that, but I know that summaries are kind of deprecated in OpenTelemetry, so I'm just wondering: does anyone know why?
F: Well, why are summaries deprecated in OpenTelemetry? I think the main reason is that histograms are meant to supersede them. But, as you mentioned, there are some distinctions between a summary and a histogram, especially around those quantiles. If I remember, there were some conversations back when there was a metrics SIG.
F: One of the concerns about Prometheus summaries was that there's not a clear definition of the window over which those quantiles are captured. So the quantiles, you know, 0, 25, 50, 75, 100, incorporate some sort of algorithm to decide how far back-looking they are, which measurements are considered to be included in those quantiles. And I don't think it was cumulative, and it wasn't delta; it was somewhere in between.
H: We leave them deprecated, basically.
F: But, Fabian, I think there have been some people that have requested that summaries actually have better support in OpenTelemetry. I think there are some spec issues floating around about that, and I can see if I can find a link.
C: Do you want to summarize... do we have John here? Yes, great. Do you want to kind of summarize?
F: You know, the SPIs, they're wired... we load them and then we wire them into the SDK, and then they run as a part of the SDK.
F: And what happens if you want to instrument one of those components? Like, you want to be able to instrument one of your exporters and say: hey, how many spans or logs did we receive, how many were successfully exported, how many failed? How do you obtain an instance of OpenTelemetry such that you can instrument yourself, so that one of these SPI components can instrument itself?
F: That goes away when you switch to an SPI-based implementation, and so we want to continue to allow these exporters to instrument themselves. And, you know, how do we do so?
F: GlobalOpenTelemetry has been the answer for situations where there are kind of ordering or access issues with getting an instance of OpenTelemetry, and I think it's a good candidate to use here. But GlobalOpenTelemetry has this nasty side effect where, if you call GlobalOpenTelemetry.get and it hasn't previously been set, and it detects autoconfigure on the class path...
F
This
side
effect
where
it
it
you
know
automatically
initialized
the
initializes,
the
auto
configured
open
Telemetry
and
it's
kind
of
a
weird
side
effect
and
it
kind
of
throws
a
wrench
into
this
whole
situation.
Here,
and
so
you
know
this
comment
kind
of
expresses
a
couple
of
ideas
of
what
we
can
do
about
this
Mateus
has
expressed
a
preference
I,
think
it's
the
one
that
I
prefer
as
well.
I've
opened
a
draft
PR
that
proposes
this
or
I
guess
makes
concrete
this
proposal.
But
you
know
the
idea
is
hey.
H: Honestly, I remembered, maybe just before the meeting, like just after we started, that I think we've had several issues from users who called GlobalOpenTelemetry.get and it suddenly, totally unexpectedly, initialized the SDK for them and tried to set it as the global, or something like that.
F: Yeah, I'm wondering... okay, so in the draft that I proposed for this, I log a message that says: hey, you've called GlobalOpenTelemetry.get, you have autoconfigure on your class path, and you haven't initialized it yet. So we're going to log a message at the severe level that says: hey, you need to call AutoConfiguredOpenTelemetrySdk.initialize earlier in your application lifecycle. And so, whatever, that's fine.
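The initialize-earlier path might look like this (a sketch assuming the opentelemetry-sdk-extension-autoconfigure artifact is on the classpath; treat the exact method names as the 1.x API):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

public class AppMain {
    public static void main(String[] args) {
        // Build the SDK from system properties / environment variables early
        // in the application lifecycle; by default it is also registered as
        // the global instance.
        AutoConfiguredOpenTelemetrySdk.initialize();

        // Later lookups now return the configured instance without
        // triggering any initialize-on-get side effect.
        OpenTelemetry openTelemetry = GlobalOpenTelemetry.get();
    }
}
```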
F
We
could
take
it
a
step
further
and
provide
some
sort
of
environment
variable
that
if
they
maybe
it
defaults
to
false.
But
if
you
flip
this
environment
variable
or
system
property
to
true
it
will,
potentially
you
know
we
could
continue
doing
the
old
behavior
for
some
period
of
time.
A: Yes, everything you say is true. I'm just thinking through... I mean, I think it would be most friendly to existing users relying on the side effect to keep the current behavior without any change, and to allow changing the behavior to what we want via an experimental environment variable, perhaps, or system property, or both, or whatever, either way with a very loud, noisy log message saying that this behavior will change in the future.
A: Yeah, this has probably been one of the most aggravating and annoying behaviors that we built into the API.
A: Sure, 100 percent, sure. It was just to facilitate true autoconfigure, right? Like, if you really want to think about autoconfigure, it's auto: you don't have to do anything except just have your stuff on the class path and your environment variables, right? This would be changing that, so that you actually have to call some code to make it autoconfigure, whereas previously it would essentially, truly autoconfigure just based on configuration and class path.
A: As I said, I would very much approve of getting rid of this behavior, but I'm also sensitive to the fact that there are almost certainly users who are relying on it at the moment, for good or for ill.
F: I think you were imagining... you're suggesting that maybe by default we continue the current behavior, and we have an experimental environment variable or system property where you opt out of the current behavior.
F: Yeah, I think it has to be the opposite. I think we'd have to turn off this behavior by default and opt into it, in order for this to be useful. Because these SPI components don't have the ability to change the environment or system properties, and so they don't have the ability to turn off this side effect.
A: Oh yeah, we would be asking... I mean, for us, implementing these things, it's not too difficult to understand how that has to work. It's going to be a lot more complicated to explain to somebody how they have to build one of these things if they want to get it instrumented, I think.
H: Yeah, but the GlobalOpenTelemetry solution is also kind of difficult to implement, because you cannot use it right away in the constructor. You have to actually wait for the global to get populated.
A: I know the way that the Go SDK solves this problem, which is that all of their tracers and instruments and everything are basically lazy. They'll have a reference to an SDK that can get filled in, and then they'll replace themselves with a real implementation when the SDK gets created. That would be a pretty big change to the Java world.
A
If
we
were
to
do
that,
so
they
basically
produced
they
were
all
the
API
always
returns
a
shim
that
can
be
then
filled
in
with
a
real
instance
of
a
tracer
or
a
tracer
provider,
or
whatever
needs
to
be
there
and.
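The shim idea could be sketched in Java with a generic lazy delegate (illustrative names, not the OpenTelemetry API): callers get a stable handle immediately, and it forwards to a no-op until the real implementation is installed.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Generic lazy handle: starts as a no-op, swapped for the real
// implementation once the SDK is created. Thread-safe via AtomicReference.
public class LazyHandle<T> implements Supplier<T> {
    private final AtomicReference<T> delegate;

    public LazyHandle(T noop) {
        this.delegate = new AtomicReference<>(noop);
    }

    // Called once, when the real SDK instance becomes available.
    public void install(T real) {
        delegate.set(real);
    }

    @Override
    public T get() {
        return delegate.get();
    }
}
```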
F: I think I'm inclined... you know, I'm going to think about this more, but I think I'm initially inclined to change the behavior of GlobalOpenTelemetry to not have the side effect. And the reason is that I don't think it's a huge ask to ask users that depend on this behavior to add a new environment variable that opts in to the side effect.
F: And, you know, the reason is: if they're using autoconfigured OpenTelemetry, they're already used to passing system properties or environment variables to configure their OpenTelemetry. So what's one more?
C: That's fair, Jack. I would consider also maybe both, because I'm wondering... the autoconfigure: you don't have to register it as global when you initialize it, right? Can't you initialize it without setting it into the global? Yeah, okay, in which case the global is going to be a no-op in the exporters.
C: I think I'm also in favor of still doing the global, but removing the side effect.
F: They get a no-op version and a log message that says: hey, if you were relying on this behavior, you need to initialize earlier if you can, or maybe flip this environment variable to opt in.
F: So, you know, is this inject-OpenTelemetry interface going to pass you an OpenTelemetry instance and the exporter that was configured, so you have access to both of those things and you can wire them together? Or do you have to save your exporter instance in some static place, so that later, when this inject-OpenTelemetry interface is called, you can call a setter?
B: I want to talk about HTTP instrumentation tests at some point, but not today. I gotta drop. Okay.
H: Thanks, bye. There's an issue that I could talk about; I'll just drop the link.
H: Yeah, so this is a very interesting bug that one of the users submitted. They're using the Micrometer shim, which actually doesn't really matter here. What matters is that they're using an async metric instrument, and within the async metric instrument callback they are calling some Spring magic which tries to load a class. All the callbacks are executed in the context of the agent class loader and not the application class loader.
H: So obviously it doesn't load the class, because it doesn't see it; it sees the agent classes and not the application classes, and it fails.
H: So there are a couple of ways we could fix that, and one of the ways is hacking around it in the Micrometer shim. But I think it's a deeper problem, and we should fix it somewhere at the level of the OpenTelemetry API, either the OpenTelemetry API bridge or even the SDK.
H: Yeah, so we could possibly have the SDK just remember and associate the context class loader with the callback.
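One way to sketch that association (plain Java, not the actual SDK code): capture the context class loader when the callback is registered, and restore it around each invocation.

```java
// Wraps a callback so it runs with the context class loader that was
// current at registration time, then restores the previous loader.
public final class CapturedCallback implements Runnable {
    private final Runnable delegate;
    private final ClassLoader captured;

    public CapturedCallback(Runnable delegate) {
        this.delegate = delegate;
        // Captured on the registering (application) thread.
        this.captured = Thread.currentThread().getContextClassLoader();
    }

    @Override
    public void run() {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(captured);
        try {
            delegate.run();
        } finally {
            // Always restore, so the invoking thread is not left pointing
            // at the captured loader (which could pin it in memory).
            current.setContextClassLoader(previous);
        }
    }
}
```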
G: The issue would be, if you have this old-school thing where you deploy and undeploy applications in an application server: you would assume that when you undeploy an application, it's gone, the class loader is gone, and all the classes are removed. But if you keep a pointer to the application class loader from the agent, then the garbage collector will never be able to collect the classes when you undeploy, because there's still a reference left.
F: ...by holding the reference.
H: I don't think it does anything, actually. I mean, it just uses the current thread context class loader there, but maybe that's not a problem. If you're not using the Java agent, you don't have any of this; that's all the agent magic.
I: The first thing that I thought about it was that maybe it's pushing the limits too much, like, should this really be supported? But later I thought that when something works without the agent and then doesn't work with the agent, it must be a bug of some sort.
C: I'd like your thoughts on which... what would you recommend, initially, as a fix here?
I: I guess fixing it isn't that important, in the sense that there are already easy workarounds; that was suggested to the user. I'd probably at least set the context class loader to null for the periodic metric reader thread. As for the other one, I think it's probably not too hard to implement.
I: If something like this happens when the user is using the APIs manually, then he can fix it himself; but with the agent, they have no control over how our instrumentations work.
C: I wish Jonathan was here today, because I'm suspicious... I mean, given that Micrometer hasn't run into this, whether they're not doing this: capturing the context class loader in their callbacks and reusing it. It sounds...
H: Yeah, he was probably unlucky, or he didn't think that calling the application context would ever do something like that.
E: I believe setting it to null for the callbacks should be sufficient. I know there are a lot of mysteries related to this context class loader concept in general; it should not affect the class loading at all. This was added as a way to ensure that some classes loaded from the boot class loader would have access to the application class loader for a very select set of operations.
C: Maybe we should be more selective, because of this inheritance, the inheritance when we're initializing; like, localize where we really need to set the agent class loader into the context class loader.
I: I think we can't easily do this, because we want the old SPIs to find stuff from the agent class loader, and some of the calls might be inside the SDK.
C: ...consensus on whether we want to fix it. I think it's a good thing to fix, because I worry about leaking the agent context class loader into the user code. And yeah, so it sounds like we could at least initially try just nulling it out, and then, if it's an issue later, come back and try to do something fancier.
C: Cool. This is not important, and I need to do some more research on it to get it fresh in my mind before I'm chatting with, probably in particular, Jack and Mateusz.
F: One thing to think about with this post-release API diff process is that, in OpenTelemetry Java, we don't do it right away. We have to wait a couple of hours, because the release has to get propagated to Maven Central.
C: Yeah, so we could have the GitHub Action poll and wait for the Maven Central download. Basically, we know the URL, so we can just keep a curl loop; I've done that before, waiting for it to get published. Yeah.
C: Initially we had very few, and then we got bumped to 50 concurrent across all of OTel, and then that still wasn't enough, so we got bumped to 150 across all of OTel. And then, after that, at some point we joined the CNCF GitHub Enterprise org, and I don't know how things work after that. I hope that it means that we have even more than that.