From YouTube: 2022-07-28 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
All right, so yeah, we've got one item on the agenda so far — feel free to add stuff. The topic I would like to introduce: Sue, an intern on our team this summer, has been working on the notoriously challenging JavaScript snippet injection problem — automatically injecting JavaScript, like RUM snippets, from the Java agent into the outgoing HTML response — and she has sent a PR.
D
So, although the normal case is the HTML being encoded as UTF-8, there may be other encodings that could represent the head tag as well, so we use the charset to track where to inject. If a head-tag character is split across two bytes in some multi-byte encoding, that could cause a problem, and special characters may only match after decoding.
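The split-character problem described above can be sketched as follows — a minimal, hypothetical matcher (class and method names are illustrative, not from the PR) that decodes with the response charset so a multi-byte character split across two `write()` calls cannot break the match:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

// Hypothetical sketch: scan for "<head>" across chunked writes by decoding
// with the response charset instead of matching raw bytes.
class HeadTagScanner {
    private final CharsetDecoder decoder;
    private final byte[] pending = new byte[8]; // bytes of an incomplete character
    private int pendingLen = 0;
    private static final String NEEDLE = "<head>";
    private int matched = 0; // chars of the needle matched so far

    HeadTagScanner(Charset charset) {
        this.decoder = charset.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
    }

    /** Feed one chunk of response bytes; returns true once "<head>" completes. */
    boolean feed(byte[] chunk) {
        ByteBuffer in = ByteBuffer.allocate(pendingLen + chunk.length);
        in.put(pending, 0, pendingLen).put(chunk).flip();
        CharBuffer out = CharBuffer.allocate(in.remaining());
        decoder.decode(in, out, false); // incomplete trailing bytes stay in `in`
        pendingLen = in.remaining();
        in.get(pending, 0, pendingLen);  // carry them into the next chunk
        out.flip();
        while (out.hasRemaining()) {
            char c = Character.toLowerCase(out.get());
            if (c == NEEDLE.charAt(matched)) {
                matched++;
            } else {
                matched = (c == '<') ? 1 : 0;
            }
            if (matched == NEEDLE.length()) {
                return true; // injection point found
            }
        }
        return false;
    }
}
```

Because `decode(..., false)` leaves an incomplete trailing sequence unconsumed, the leftover bytes are simply prepended to the next chunk.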
D
So both methods cause potential problems. If we change the Content-Length before the injection and then don't find the head tag later — or the injection fails — then the HTML Content-Length will be incorrect. And if we update the Content-Length after the injection, the header has already been sent, which means that although we call the setHeader family of functions and try to update it, we cannot update data that has already been sent. So that also causes problems; neither method is a perfect solution.
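One way around the Content-Length dilemma — buffer the body, decide, then set the header — can be sketched as follows. This is a hypothetical alternative for illustration (the names are invented here), not the approach the PR takes:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: hold the whole body until the injection decision is
// made, so Content-Length can be set correctly before headers are committed.
class BufferingInjector {
    private final String snippet;

    BufferingInjector(String snippet) {
        this.snippet = snippet;
    }

    /** Returns the final body; its length is the Content-Length to send. */
    byte[] process(byte[] originalBody) {
        String html = new String(originalBody, StandardCharsets.UTF_8);
        int i = html.indexOf("<head>");
        if (i < 0) {
            return originalBody; // no head tag: original length stays correct
        }
        int end = i + "<head>".length();
        String injected = html.substring(0, end) + snippet + html.substring(end);
        return injected.getBytes(StandardCharsets.UTF_8);
    }
}
```

The caller would invoke `process` first, set the Content-Length from the result's size, and only then write — trading memory (the fully buffered response) for a header that is always right.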
D
So that is also a question for us, and another open question is whether the ServletOutputStream is delegated to itself. Normally we would override the functions, but because we target Servlet 3.0 here, and Servlet 3.1 added two additional functions, we cannot use overrides. We have to use ByteBuddy instrumentation, which directly changes the ServletOutputStream class of the library itself — which means that if a server delegates the ServletOutputStream to itself, the changed library class could make the injection happen twice.
A
And maybe we can walk through, just at a very high level, the test coverage.
E
Okay, can I ask a very quick question, just to put myself into context here? So what is happening, basically, is that if you have a servlet application, or servlet-based application, and you are using the Java agent, this will modify the HTML output and inject some JavaScript inside of it?
D
Yes. Normally, when people inject that JavaScript into the HTML themselves, they do it to collect data from the page about their clients' behaviors.
A
And OpenTelemetry doesn't currently have a browser story, but that is being worked on, and so in the future we would inject this JavaScript snippet — the OpenTelemetry JavaScript snippet — for OpenTelemetry real user monitoring.
E
Are you also planning to add support for non-servlet applications — like, for example, Netty?
F
I'm just wondering, because there is this OpenTelemetry RUM SIG that is fairly new, and they are discussing, you know, probably what's in the JavaScript — I guess that's their focus. But first of all, I find it really cool to have this automatically injected, so that you don't need to modify all your HTML pages. That's a really cool feature, and looking at the questions here, I actually feel that most of them are independent of the programming language.
A
Yeah, Nev Wylie on our team is working in the RUM SIG, so let's follow up with Nev on that and see if we can bring those language-agnostic questions to that group.
G
Will this allow very direct control over which endpoints to hook into, so that you don't just turn it on and it enables it for all outputs? You might want it only for the HTML output, but have API endpoints and other things on the same server.
A
So it does look — it only injects if the content type is text/html and it finds a head tag.
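The gate just described can be sketched as a simple predicate — hypothetical code, since the PR's actual check isn't shown here:

```java
import java.util.Locale;

// Hypothetical sketch of the injection gate: only text/html responses are
// candidates (the head-tag scan then decides whether injection happens).
class InjectionGate {
    static boolean isInjectable(String contentType) {
        return contentType != null
                && contentType.toLowerCase(Locale.ROOT).startsWith("text/html");
    }
}
```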
A
But we kind of know that this is a notoriously challenging problem, because in this space I and others have known vendors who have done this in the past and run into edge cases, and you can break an application. So, for example, it's probably not something we would turn on by default.
G
Yeah, per page, or just making sure: here's my servlet engine — are they all individual pages, or is there something else flowing through it? Someone might want to enable this, but they might have pages where they don't want it, or things where, as you say, it just breaks, or things where it's HTML being output but it's not really what's going to the browser.
A
So yeah, on the test coverage — Sue, do you want to just briefly cover the tests?
D
Yes. The tests cover PrintWriter and ServletOutputStream, which are the only two methods for the servlet to write HTML to the browser, so both of them are tested. The tests also cover cases like where the head tag sits relative to the writes, and whether the user writes byte by byte or writes strings — different formats.
A
And then there are unit tests to cover more cases. But even within Tomcat and Jetty some interesting things came up — like Tomcat reuses response objects for future requests — and there are different edge cases that make it very beneficial to run these tests in a real servlet container.
D
We check whether we want to inject, and if we do, we wrap the response to be prepared for the injection — so the response wrapper handles the injection.
D
One part is the instrumentation of the ServletOutputStream, so we'll check there as well whether we want to inject.
D
Okay, so we check the user's input byte by byte, and then we check if we have matched the head tag; if we have, we trigger the injection, which is in line 29.
D
So the overridden write will check whether we want the injection and, if we do, perform it. It returns true if it handled the write, in which case the whole function returns early, which means we skip the original write — so we do not write the original body a second time.
A
I think there's one last thing I would like to mention for the reviewers, because it will probably pop out: why, for the ServletOutputStream, we're not wrapping it. For the PrintWriter we're wrapping the writer; for the ServletOutputStream we're not wrapping it — we're instrumenting ServletOutputStream.
D
Yes — for this servlet-3 module we are targeting Servlet 3.0 instead of Servlet 3.1, and in Servlet 3.0 the ServletOutputStream does not have the setWriteListener function. Because we don't have those two functions, we cannot override it directly — it would cause a compile error — so we have to instrument it instead.
A
Yeah, unfortunately this method added in Servlet 3.1 gave us kind of a headache. And Lauri here — Lauri often has good ideas for how to work around these things — but we couldn't subclass this and compile against 3.0, because this argument type was also just added in 3.1, so we couldn't provide both.
A
So anyway, if you have any good ideas on that, please leave them on the PR. And also, if you know anybody in your orgs who is interested in this topic, please point them to the PR — we would love feedback. And I really like the idea of taking this to the RUM SIG, because folks there probably have a lot of interest.
C
So the ServletOutputStream wrapper — I think I encountered it almost three years ago, when I actually started working here. It was the first thing with Muzzle that started the entire refactoring of it, and at that time we completely dropped the wrapper. But I had an idea that we could actually generate the class at runtime: we could use ByteBuddy to, depending on whether WriteListener is present on the application class path, generate these two methods that simply delegate to the original ServletOutputStream.
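The runtime-generation idea hinges on detecting whether the Servlet 3.1 types are on the application class path. A minimal sketch of that probe (the ByteBuddy generation itself is omitted; the class checked is the standard `javax.servlet.WriteListener`):

```java
// Hypothetical sketch of the class-path probe behind the runtime-generation
// idea: only generate the Servlet 3.1 override methods when WriteListener
// is actually loadable from the application's class loader.
class ServletVersionProbe {
    static boolean isPresent(String className, ClassLoader loader) {
        try {
            Class.forName(className, false, loader); // don't initialize the class
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    static boolean hasServlet31(ClassLoader loader) {
        return isPresent("javax.servlet.WriteListener", loader);
    }
}
```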
H
Hi, this is Emily. So this is what the JAX-RS spec writers did. The other thing is, because Jakarta JAX-RS is under active development, you could even file an issue in the Jakarta REST area and ask for opinions. I can put a link in the minutes, if you'd like.
G
So if you have this technology, one thing you could potentially use it to solve is the catch-22 of not only sending back the JavaScript for RUM tracing, but also sending back, say, "here's the trace id for the HTML we just generated for you." The load events and whatnot in the browser could then use that trace id to collect everything together — right now that happens a little bit after the fact.
A
Yeah, that would be cool — that would be a great thing to maybe add to the future-work section.
C
We actually have something like that in the Splunk distro that sends back the server trace id as a header, but since there was no standardized way of sending the trace id back to the browser, we decided not to upstream it for the time being.
C
So it's also a good thing to go to the RUM SIG and talk to them, because if we decide on some way — for example, if we decide to use the Server-Timing header as a transport for that — then we could very easily upstream our logic.
C
It's
a
bit
hacky,
I
mean
it's,
it's
not,
let's
say
the
official
way
of
using
that
kind
of
header.
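That "unofficial" use of Server-Timing can be sketched like this — a hypothetical helper (not the Splunk distro's actual code) that packs a W3C traceparent into the header's `desc` field:

```java
// Hypothetical sketch: carry the trace context back to the browser inside a
// Server-Timing header, which browsers expose via PerformanceServerTiming.
class ServerTimingPropagation {
    static String serverTimingValue(String traceId, String spanId, boolean sampled) {
        String flags = sampled ? "01" : "00";
        // metric name "traceparent", with the W3C traceparent as its description
        return "traceparent;desc=\"00-" + traceId + "-" + spanId + "-" + flags + "\"";
    }
}
```

The server would add this as a `Server-Timing` response header; browser-side JavaScript can then read the trace id from the performance timeline.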
A
I think I had seen there's also a W3C proposal for reverse distributed-trace propagation back in the response, too.
C
Yeah, I remember there being one, but last time I was interested it was still very much in progress, so we ended up using our own thing. But, well, Server-Timing is a standard header, so we are using it in our own way, let's say.
A
That is a great question. So we did — we cancelled the Monday JVM metrics SIG meeting. Or we didn't cancel it so much as we decided to merge it into this meeting, since we haven't had an overflowing amount of topics. But I didn't remove the other, Wednesday, meeting.
A
But it may have — here it is. We've had problems with the calendar where things mysteriously disappear sometimes, but as far as I know I left this meeting on the calendar in case it was helpful for people, especially like Tommy in Japan. But maybe, Fabian, if you can go to this one, maybe next week, see if people want to keep it.
A
Okay, great. I can also ask — we can ask in Slack. I had mentioned it in Slack when I removed the Monday one, but the Slack channel has been fairly busy lately, so things get lost.
E
Trask, when you ask this one, could you please mention Tommy, so that he will get a notification and will be able to answer?
B
Hello. So yeah, this is actually kind of related to the JVM meetings. We've been struggling to get attendance at them, with the summer vacation and with a lot of the work having been completed.
B
One of the last remaining items is to come up with conventions for garbage collectors, and I'm hoping to make some progress on that asynchronously. So this is a draft PR for a proposal, in contrast to the current garbage collection metric collected in the Java agent, which just collects a counter of the time spent collecting and the number of times collection events took place. I'm proposing that we hook into the notifications you can get about garbage collection events, and that we record garbage collection times to a histogram, so that people can analyze the distribution of garbage collection times and have that faceted with attributes representing the type of action that took place. So that's kind of a summary of what's going on here.
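The proposal's mechanics can be sketched with the JMX notification API. This is a hedged illustration: the recorder below is a stand-in for a real OpenTelemetry histogram, and the facet keys are invented, not the proposed semantic conventions:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

// Hypothetical sketch of the proposal: record each GC event's duration,
// faceted by collector name and action, into a histogram-like recorder.
class GcDurationRecorder {
    // stand-in for a real histogram: total millis per (collector, action) facet
    final ConcurrentMap<String, LongAdder> totalMillis = new ConcurrentHashMap<>();

    void record(String gcName, String action, long durationMillis) {
        totalMillis.computeIfAbsent(gcName + "/" + action, k -> new LongAdder())
                   .add(durationMillis);
    }

    void install() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            if (!(gc instanceof NotificationEmitter)) {
                continue; // some JVMs may not emit GC notifications
            }
            ((NotificationEmitter) gc).addNotificationListener((notification, handback) -> {
                if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    return;
                }
                GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                        .from((CompositeData) notification.getUserData());
                record(info.getGcName(), info.getGcAction(), info.getGcInfo().getDuration());
            }, null, null);
        }
    }
}
```

`install()` wires the listener; `record()` is separated out so the aggregation can be exercised without forcing a GC.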
A
I really like these dimensions.
B
Yeah, I think it's really powerful to collect garbage collection events in a histogram versus just counters of time and count. For garbage collectors like the Z garbage collector and Shenandoah, which have concurrent phases, the current way we collect just the time and the count doesn't allow you to understand, for example, how much time was spent concurrently versus in stop-the-world mode.
B
You can't see that type of thing. And then, by recording it to a histogram, you'd be able to understand the maximum amount of time — so you could say, "hey, what's the max amount of time that the garbage collector spent in a stop-the-world garbage collection event?" The histogram provides a lot of interesting analysis options that aren't available currently.
B
So that's kind of the only thing I'd say. I did go and test this exact proposal with all the different garbage collectors and a number of different versions of JVMs, and that was kind of driving this proposal. So I've done some manual testing with it, but I haven't built the automated unit tests yet.
G
This would support older versions where you wouldn't have JFR, but those are going to disappear eventually, so that might be something to consider. But some of these data points, as you say, are going to be super interesting — especially the memory used after GC, which is sort of one of the key ones for people to keep an eye on, and which has been very hard with metrics so far.
B
Yeah, and on the JFR front, that was kind of one of our guiding principles when we were talking about JVM instrumentation: whatever metrics we propose, can we produce them both through JMX and through JFR? And so with this proposal my goal was just to draw a line in the sand.
B
I haven't worked extensively with JFR, and so I don't have good intuition about what's possible or what's not. So I'm really hoping that folks who are more experienced with JFR — Ben Evans, I think, has previously volunteered to do some of this analysis — can chime in and say, "hey, this could work, this would be possible in JFR," or say, "no, we don't have access to this type of data; would you consider this other alternative?"
B
And I think you mentioned, Stefan, something about memory usage after garbage collection. As proposed, currently I'm only recording to a histogram the amount of time taken for each garbage collection event, but it's possible to access the memory used before and after garbage collection in each one of the memory pools, and so you can analyze those pools and figure that out.
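A sketch of that per-pool analysis, assuming the before/after maps that `com.sun.management.GcInfo` exposes via `getMemoryUsageBeforeGc()` and `getMemoryUsageAfterGc()` (the helper name here is invented):

```java
import java.lang.management.MemoryUsage;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: compute how many bytes each memory pool freed in one
// GC event, from the before/after usage maps a GcInfo provides.
class GcPoolDelta {
    static Map<String, Long> freedBytesPerPool(Map<String, MemoryUsage> before,
                                               Map<String, MemoryUsage> after) {
        Map<String, Long> freed = new HashMap<>();
        for (Map.Entry<String, MemoryUsage> e : before.entrySet()) {
            MemoryUsage post = after.get(e.getKey());
            if (post != null) {
                // positive = bytes reclaimed; can be negative for pools that grew
                freed.put(e.getKey(), e.getValue().getUsed() - post.getUsed());
            }
        }
        return freed;
    }
}
```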
G
Yeah, I think it would be fine to just have it as a gauge, basically, because the way you'd look at that a lot of the time is: here's my usage graph, and here's the limit, and this helps you see what your actual live data is. Because the gripe people have is: they look at the used memory, and you have this seesaw pattern, and then someone goes, "oh, it looks like the heap is full."
A
How would that tie in? Right now we do have memory pool metrics, which are going to do the seesaw thing. Do you think we should — I mean, yeah, we do want to capture those metrics on a continuous basis, but there's overlap there.
B
Yeah, one thought I had — and there are a lot of different ways you could do this — was that you could record the change in memory as a result of the garbage collection event. So maybe you don't record the absolute current value of the memory pool, but you say, "hey, memory reduced by this many bytes," on a per-pool basis. My idea there was that that would be kind of complementary to those other metrics that already exist.
G
So it would be sort of — you can think of it a little bit as a low-water mark of your heap usage, in a sense.
G
The way to think of it is: if at any point in time you were to do a GC, you would expect your current heap usage to drop down to there — but you can only get that data point after a GC. The GC needs to do a collection to know what's actually live; it doesn't really know until the GC happens. That's why the number keeps going up: it doesn't know what's been let go until it does a GC.
A
And then there's the question of that, as opposed to the existing memory pool metrics, which are async — you know, checking what they are at the collection time.
B
Well, you could do it as a gauge as well. Suppose we had synchronous gauges and not just asynchronous gauges. With a synchronous gauge you could say, "hey, we saw this value," and then it's okay to aggregate that and just take the last one over the course of a collection cycle — because if you're collecting every 30 seconds, you're only seeing the last low-water (or high-water) mark anyway.
B
A synchronous gauge, which doesn't yet exist, is just saying, "hey, set the gauge value to be this," and internally, in the aggregation, we're just holding on to whatever the last recorded value was — so we only hold one value in memory instead of potentially many, or some other type of aggregation. Not sure if that answered your question.
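The synchronous last-value gauge being described could look like this — a hypothetical sketch, since no such instrument exists in the metrics API yet:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a synchronous "last value" gauge aggregation:
// record() overwrites; collect() just reads whatever was recorded last,
// so only one value is ever held in memory.
class LastValueGauge {
    private static final long UNSET = Long.MIN_VALUE; // sentinel for "nothing yet"
    private final AtomicLong last = new AtomicLong(UNSET);

    void record(long value) {
        last.set(value);
    }

    /** Returns the last recorded value, or -1 if nothing was ever recorded
     *  (-1 is an arbitrary marker for this sketch). */
    long collect() {
        long v = last.get();
        return v == UNSET ? -1 : v;
    }
}
```

With cumulative temporality, `collect()` keeps returning the stale value between GC events, which matches the behavior discussed above.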
A
If it's cumulative — if you're reporting cumulatives — it makes sense to me: you'd always keep reporting that old value, yeah. But if you're reporting delta, I'm guessing maybe you wouldn't, which is okay. Well, I —
G
I don't know the full answer, unfortunately, but for me it's just sort of: you measure a value, and that's the measured value until you remeasure. So your granularity will never be better than the last GC, and that might be 30 seconds, a minute, five minutes — all depending on how fast you're allocating and how frequently you GC.
B
Yeah, and just to answer Trask — we're in hypothetical territory, because these don't exist yet, but I imagine what would happen with a synchronous gauge is: if you're reporting cumulative temporality, then even if you don't have a garbage collection in this window, you would still report the last recorded value.
B
If you're recording with delta temporality, the idea is that you're reporting changes that happened in this reporting interval, so I suspect that if there was no garbage collection event in that reporting interval, you wouldn't report a value at all. But that hasn't been discussed yet, so that might not be where folks go.
G
There's something to think about: how do you handle these global effects? It's like when you have a contended lock, or there's this profiling going on — for some of these you want the traces, but some of these are global events. If you have a complete pause, could you set what they call it — like, long events — on the span? You might not want a full span for it, but you'd know, "oh yeah, this normally takes 30 milliseconds."
C
I think it could probably be done with JFR. I mean, we've kind of done something similar in the Splunk distro, in the profiler, where we managed to correlate TLAB allocations that happened in a given thread with the current context at that time.
C
We have a custom JFR event that occurs whenever the context switches, and when we replay the JFR recording we basically keep track of which thread has which active trace, and we use that to correlate the TLABs that happened. For example, if at some point in time we know that the thread was in such-and-such trace id, then we know that the TLAB that occurred is tied to this particular trace id.
C
We are actually tying these two together in the agent: we are exporting profiler events as OpenTelemetry logs. So, for example, that TLAB allocation is exported as a log, basically, and we are setting the trace id and span id on that log.
A
Right, I guess my question was: when you're exporting a span, does that span have an attribute on it that tells you about the TLAB event? Or, if you're looking at a span and you want to find the TLAB events, is that something that your backend — yeah, the backend does that.
C
The backend does it. We are exporting the connection the other way, so the TLAB events know which span happened — which span is tied to the event — but not the other way around.
G
It's similar, a little bit, to the JFR span and trace events that are part of OpenTelemetry Java. If you use those, you can pick up which thread was executing in which context and what other events happened at the same time. So this is sort of almost the reverse: how do I expose it out? You can almost think of it like —
G
How do you do an MDC-style capability? The logging integration uses the MDC: every time you update the context, you update the logging framework with what the thread is currently at. You could almost do the same thing here with our JFR event tracking, and then that would allow you to know, "oh, this event came from this thread," and from this integration you'd know what it's currently doing.
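That MDC-style idea can be sketched with the JDK Flight Recorder event API — a hypothetical event whose name and fields are invented for illustration:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical sketch of the MDC-style idea: emit a JFR event on every
// context switch, so replaying a recording can attribute later per-thread
// events (TLABs, locks, pauses) to the trace that was active at the time.
@Name("otel.demo.ContextSwitch") // illustrative name, not a real agent event
@Label("Context Switch")
class ContextSwitchEvent extends Event {
    @Label("Trace Id") String traceId;
    @Label("Span Id") String spanId;
}

class ContextTracking {
    // would be called whenever the current context changes on this thread
    static void onContextAttached(String traceId, String spanId) {
        ContextSwitchEvent e = new ContextSwitchEvent();
        e.traceId = traceId;
        e.spanId = spanId;
        e.commit(); // recorded only if a JFR recording is running; no-op otherwise
    }
}
```

JFR stamps each committed event with its thread and timestamp, which is exactly the correlation key the replay needs.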
A
Yeah, so with any kind of global event like GC, I think you maybe have both of those options. One is the backend: you're looking at a span or a trace, and the backend could overlay GC times — global events — on top of that view. Or you could try to tag that on the client side onto the span, so that the backend —
A
— doesn't have to. You know, especially if you're reporting to a Zipkin or a Jaeger that maybe doesn't have as much analysis capability — more just a raw span viewer — it could be nice to have that info directly in those spans.
A
All right — Emily?
H
So, Bruno — I think it was discussed last time — Bruno made a proposed change in the OTel spec. If you click on the PR, at the moment it's waiting for a code owner to approve it so that it can be merged. Do you know who the owners are, Trask?
B
I think, in general — this has two approvals so far; generally PRs to the specification have more approvals before they're merged. So I wouldn't think that this is something that is going to be merged imminently. I think it needs to get more support first, unfortunately.
A
Yeah, it can be hard to get spec PRs through. One good way is to attend the weekly specification SIG meeting on Tuesdays at 8 a.m. Pacific time.
A
Yeah, so Tigran is suggesting Monday at 9:00 a.m. Pacific time — that's the maintainers meeting — and for this particular PR that probably sounds like a good idea, since it would affect —
B
The maintainers meeting on Mondays is typically pretty brief; people mostly just do status updates amongst the maintainers. It's not entirely clear to me what's appropriate to bring up in the maintainers meeting versus the spec meeting, but it seems like more of the decision-making and consensus-gathering happens at the specification meeting on Tuesdays.
A
I would say: why not both? Especially because Tigran is one of the people who can green-light this PR — the owners of the spec repo, the people who have the power to hit the merge button.
H
Oh, okay — that's great, thank you, Jack! So my other question is: if the PR is merged, do you know how soon it will get into a release?
B
So we do monthly releases of the OpenTelemetry Java project, and we typically target the end of the first week of the month, so something around there. We obviously can't make the change in the OpenTelemetry Java project until the specification PR is merged, but this is a simple change to make once it's available — we already have the feature built into the project; we would just need to add an additional —
B
— you know, read from an additional environment variable to toggle it. So I think that if the spec PR were to be merged, it would just take a couple of days for us to get that merged into the OpenTelemetry Java project, and if the timing aligns it would get released with the next monthly release.
A
Then they don't have a July release.
A
The thing that I would ask — I would focus on Tigran's specific ask here in the maintainers meeting.
A
The spec meeting is where you could ask about, you know, specification releases and that sort of thing.
A
Making that ask in person tends to help get more eyeballs on something.
A
That will help make sure that you get in the queue there.
C
All right, I have one minor thing. Recently the spec PR about the HTTP attribute sets was merged, so it's just a question to you, Trask: you have an open draft PR with the related semantic-convention changes — are you going to continue with that and merge it, or should I? I've been looking at the HTTP spec today and wondering how these changes will play out in the instrumentation API, but we'll need this PR first anyway.