From YouTube: 2022-09-22 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
C: No, I don't think so. I'm just... I've been on a couple of the meetings. I'm at Datadog, recently, so I'm looking at seeing how we can integrate some of this stuff with OTel and things. Welcome, thanks.
A: Let's get to our big agenda, and then maybe, if there will probably be time, we can take a peek if there are any PRs that need a little help.
A: The first one I added. I'm proposing a slight behavioral change in the way that the context customizers in the Instrumenter API work, just ordering: having them run before span start instead of after. This allows them to access the parent span context, and also, then, when they add things to the context, that content would be visible in the span processors, which could be useful for something.
A: Oh, I know, I have sort of a use case. General span processors are nice: they apply to all spans, whether or not they were created via the Instrumenter API. And so, if you have some logic over there in the span processor that checks, say, for example, context attributes or something in the context and stamps it on the span, then you can just do that once there.
A: You can still accomplish that use case by doing it in both places, in the context customizer, or in the way the context customizer is right now, where it runs after span start: you can get the span and add attributes.
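The ordering question can be sketched self-contained (these are simplified stand-in types invented for illustration, not the real OpenTelemetry SDK interfaces): if the customizer runs before span start, whatever it puts into the context is visible to the processor's onStart.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in types to illustrate the proposed ordering: customizer first,
// then span start, so processors see the customized context.
public class CustomizerOrdering {
    static class Context extends HashMap<String, Object> {
        Context copyWith(String key, Object value) {
            Context c = new Context();
            c.putAll(this);
            c.put(key, value);
            return c;
        }
    }

    interface ContextCustomizer { Context onStart(Context parent); }
    interface SpanProcessor { void onStart(Context parent, Map<String, Object> spanAttributes); }

    static Map<String, Object> startSpan(Context parent,
                                         ContextCustomizer customizer,
                                         SpanProcessor processor) {
        // Proposed ordering: customize the context first...
        Context customized = customizer.onStart(parent);
        Map<String, Object> spanAttributes = new HashMap<>();
        // ...then start the span, handing the customized context to the processor.
        processor.onStart(customized, spanAttributes);
        return spanAttributes;
    }

    public static void main(String[] args) {
        Map<String, Object> span = startSpan(
                new Context(),
                parent -> parent.copyWith("tenant", "acme"),
                // Processor stamps a context value onto every span it sees.
                (ctx, attrs) -> attrs.put("tenant", ctx.get("tenant")));
        System.out.println(span.get("tenant")); // prints "acme"
    }
}
```

With the current ordering (customizer after span start), the processor's onStart would run before the customizer and the `tenant` value would be missing from the span.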
A: So it didn't break any of our usages of context customizer, but that is not a promise that, you know, somebody's not relying on that behavior in the Instrumenter API. So I just wanted to get thoughts on making that behavior change, you know, post us declaring it stable. It doesn't change the APIs. I looked in the Javadocs, and this change doesn't contradict anything that is in the Javadocs. But again, it could break somebody who was relying on that ordering.
C: I don't think anything really akin to this. I think the things that we've had are minor interpretation changes of environment variables and things like that, but nothing really of this sort. No.
C: When we've ended up... well, it's a little bit different: we've deprecated things, and we've kind of given people a big heads-up the release before. Like, when we deprecate, we tell people in the deprecation message, like, contact us if you need this thing. But this is a little different, where this is actually just changing behavior, so I can't remember anything like this. No.
A: All right, well, we will probably sit on it and think some more. We have plenty of time before the next release. If anybody has concerns, please let us know.
B: Yeah, so recently we've added the sort of automatic service name detector that is able to extract the service name attribute for Spring Boot applications to the Java agent. And we've added it, like, straight to the upstream repository, because it seemed useful and accurate enough that the correct place seemed to be right there. But we actually have two more of them in our distro.
B: So there is one that attempts to extract the service name from the servlet web.xml file. It uses the top-level display-name tag, which will not necessarily contain a service name, but there's nothing better that we could use. And there's, like, the last-resort fallback one that just takes it from the application jar file name. These are not as accurate and not as precise as the Spring Boot one.
B: These are mostly heuristics, but maybe there is some use for them in the Java agent or in the contrib repo. We've been thinking of upstreaming this, because even if, for example, we use the jar file name, I mean, it's still probably better than having just no service name.
B: So while it's not perfectly accurate, it still might be a little bit better than not having any service name at all. So yeah, what do you think? Is there any interest in us adding this to the instrumentation or contrib repository, either of them?
A: How do you get the display name?
B: We're looking for the war files, unpacking them, and finding the web.xml file, and if there is a display-name tag set, then we'll be using it. If not, then yeah, then we're not using it.
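The display-name heuristic described here might be sketched roughly as follows, once the web.xml has been pulled out of the war (this is an illustrative sketch, not the distro's actual code; the class and method names are invented):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Optional;

// Hypothetical sketch: use the first <display-name> in WEB-INF/web.xml
// as a service name candidate, falling through when it is absent.
public class WebXmlServiceName {
    static Optional<String> displayName(String webXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(webXml.getBytes(StandardCharsets.UTF_8)));
            NodeList names = doc.getDocumentElement().getElementsByTagName("display-name");
            if (names.getLength() > 0) {
                String name = names.item(0).getTextContent().trim();
                if (!name.isEmpty()) {
                    return Optional.of(name);
                }
            }
        } catch (Exception e) {
            // Malformed descriptor: fall through to the next detector.
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        String webXml = "<web-app><display-name>my-service</display-name></web-app>";
        System.out.println(displayName(webXml).orElse("unknown_service:java")); // prints "my-service"
    }
}
```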
B: It is extremely hacky voodoo magic that heavily depends on the application server that you're using, because all of them have, like, different deployment patterns, and for some of them it won't even work at all. Because, I don't remember which one, whether it was WebSphere or Liberty, but one of them actually loads that file, like, after it starts, so they're not accessible right at the start. So yeah, it's...
A: Just speaking for our distro, I would probably pull something like this in for users using our distro, but I'm not sure I would use this, which would tell me, like, unless there are other people who are particularly interested in this... maybe, you know, contrib. Certainly contrib: I don't see any problem with it being in contrib.
A: Yeah, I feel like this could be. Is there any reason we wouldn't enable this by default in the Java agent jar, right?
B: I don't think that we're losing anything by enabling it, because, you know, if it's disabled, you're getting the default thing, which is unknown_service:java, which carries precisely zero information about the application that you're running. I think this is some attempt to guess what it is, so it's probably better than nothing.
D: So I can tell that, at least for Quarkus, the jar name is always the same for everything, so it wouldn't add much. We usually set the service name through a property.
B: Yeah, so, I mean, if you set the service name through the property, it will always override this kind of automatic detection. So this is just a bonus for people who are, I don't know, deploying, like, singular applications, and they actually named them correctly or properly.
A: I think .NET may do something similar, having a fallback of, like, assembly name or something like that. Let me take a look, because that might help us to feel more confident about doing something like that.
B: So we are actually using ProcessHandle if you're building for Java 9+, so we're just taking the zeroth argument that is passed to the Java process, let's say. So, as long as people building native applications pass the proper executable name, it should be correct. We're just stripping here all the directories that lead to it; we're just taking the file name and stripping the extension. So whether the extension is jar or zip, it doesn't matter, and it doesn't really matter what directory the app is in.
A: In that case, sorry, if the executable is Java, you're taking the first arg?
B: If the full command line is, like, java -Xmx something, -D something, blah blah blah, then we're stripping everything, like, the Java-related stuff, and we're taking the first argument that's not... no, sorry, we are taking the argument that appears after the -jar.
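The fallback just described might be sketched like this (an illustrative sketch with invented names, not the actual detector; on Java 9+ the command line could come from `ProcessHandle.current().info()` as mentioned above):

```java
import java.nio.file.Paths;
import java.util.Optional;

// Hypothetical sketch of the jar-name fallback: take the argument after "-jar",
// drop the directories that lead to it, and strip the file extension.
public class JarServiceName {
    static Optional<String> fromCommandLine(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if (args[i].equals("-jar")) {
                return Optional.of(stripPathAndExtension(args[i + 1]));
            }
        }
        return Optional.empty();
    }

    static String stripPathAndExtension(String path) {
        String file = Paths.get(path).getFileName().toString();
        int dot = file.lastIndexOf('.');
        // Whether the extension is .jar or .zip doesn't matter; it is dropped either way.
        return dot > 0 ? file.substring(0, dot) : file;
    }

    public static void main(String[] args) {
        String[] cmd = {"java", "-Xmx512m", "-jar", "/opt/apps/my-service.jar"};
        System.out.println(fromCommandLine(cmd).orElse("unknown_service:java")); // prints "my-service"
    }
}
```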
A: Cool. I see Peter here. Hey Peter, welcome.
E: Yes, I created a pull request for that. The number is 6573. Yes, apologies for the size; unfortunately, it was really hard to deliver this functionality in smaller chunks. But at least portions of this PR are non-executable code: it's just the documentation and the configuration files, so that may be a little bit easier to review.
A: So yeah, maybe you could talk a little bit about, like, what are these metrics. It looks like you have kind of hand-selected certain JMX metrics for certain libraries, and how that works.
E: Yes, so, well, the original request was to provide capabilities which are similar to the JMX Metric Gatherer. In case you are not familiar with this: it is a standalone process that pulls JMX metrics, accessing the JMX exports on the target system. The inconvenience of this is, of course, that you have to run an additional process, and you need to manually open the ports and deal with security issues that way, and so on. So we took some of the definitions that this Metric Gatherer already provides.
E: We vetted them; we looked at what they proposed, and in most cases we followed them, but we made some changes whenever we felt there were omissions or some errors, so they are not identical.
E: And I think this is really the most important part of this effort, because this is a user-visible interface, and I understand that configuration files so far were pretty rare in the Java agent. Most of the configuration happened through properties, and, as you probably know best, these things were getting out of hand a little bit. So we took this extra step to actually create a YAML configuration, which is much more precise and much more powerful than any system properties or environment variables.
A: So it looks like there are two parts: one where you can select these built-in JMX metrics, and one where you can provide your own custom JMX metrics.
E: That's right. Again, this is following what the Metric Gatherer was already providing. They also had these predefined rules, and they allowed people to write rules on their own. However, they were not YAML-based: you had to write some kind of glue code that was connecting JMX metrics with the Gatherer internals.
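As a purely hypothetical illustration of what such a YAML rule could look like (the keys below are invented for this sketch; see PR 6573 for the actual schema):

```yaml
# Hypothetical shape of a custom JMX metric rule; field names are illustrative only.
rules:
  - bean: java.lang:type=Threading
    mapping:
      ThreadCount:
        metric: jvm.threads.count
        type: updowncounter
        desc: Number of live threads
```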
A: ...an enhancement, because it is a lot of extra work to set up that JMX Gatherer, especially if you're not using the OTel collector. If you're using the OTel collector, it's more convenient, but you still have to open ports and things, so that can be painful. And so, yeah.
G: What kind of... well, let me just back up. So there's a JMX collector in contrib, I think, and then there's some sort of part of the collector that does something similar. I think the collector probably just wires up the JMX Metric Gatherer from contrib as a separate process; it just manages that process. So those things are, like, coupled together, and then there's going to be this code that does a similar function.
G: Is there an opportunity to DRY this up, to have this published as some sort of library instrumentation that can be wired into the agent, such that, like, the Metric Gatherer can just act as a dedicated process that spins up this library, essentially?
B: Or we could remove or refactor the existing Metrics Gatherer, you know, the one that serves the collector, and just use the library that is being built here, right? So, instead of coding the Groovy scripts, you would just import the YAML files over there.
A: So yeah, so you're saying having this library, because it is already a library here, running it and connecting to a remote JMX... still being able to point it at a remote JMX service.
G: Yeah, it seems like there's an opportunity to reuse the same code here. So, you know, instead of looking at local beans, connecting to a remote process, but still using the same YAML-based configuration scheme to transform those metrics into the OpenTelemetry data model.
B: Yeah, I think that, probably, as long as the library will accept an MBeanServer, and it doesn't matter whether it is a local server or a remote server, then you should be able to query it with essentially no difference.
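That point can be illustrated with the standard JMX API: code written against `MBeanServerConnection` is indifferent to whether the connection is the in-process platform MBeanServer or a remote one obtained via `JMXConnectorFactory.connect(...).getMBeanServerConnection()` (the helper below is a sketch, not code from the PR):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

// Querying through MBeanServerConnection works the same for local and remote servers.
public class JmxQuery {
    static String vmName(MBeanServerConnection connection) {
        try {
            // java.lang:type=Runtime is a standard platform MXBean present in every JVM.
            return (String) connection.getAttribute(
                    new ObjectName("java.lang:type=Runtime"), "VmName");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // In-process case: the platform MBeanServer implements MBeanServerConnection.
        MBeanServerConnection local = ManagementFactory.getPlatformMBeanServer();
        System.out.println(vmName(local));
    }
}
```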
A: Yeah, that's actually a great thought. I'll ping over in the contrib repo for the maintainers of the JMX Metric Gatherer to take a look at this also, either before or after, you know, we bring it into the instrumentation repo. I know we would all love to get rid of the Groovy code that is the current JMX Metric Gatherer.
A: Let's see what our Groovy code percentage is over here... only 7.8 percent Groovy in the contrib repo; if only we were so lucky in the instrumentation repo. Mateusz, can you find my Slack comment where I was trying to track this historical trend? We're on a Groovy replacement... I would call it a spree, but we're not spreeing very fast; we're slowly trying to remove and replace Groovy with Java.
A: Cool. Any other PRs that are worth highlighting?
G: Yeah, that one. That's for the next release, and once that is merged, then I can, you know, open up a branch in instrumentation and point it at the snapshot.
G: You know, start to replace the, I don't know what we're calling it, but the log appender API and SDK in instrumentation.
A: Nice. And would you like another review on this?
G: It's pretty self-explanatory; there are not a lot of decisions being made in it. It just kind of falls out; it's the natural thing to do. If you want to give it a peek to make sure that it would be compatible with what's going on over in instrumentation, you know, please do. It follows the pattern of what we did with the metrics API before that was stable, where we had a global meter provider.
G: There's now, like, a global logger provider with a static getter and setter, you know, that lives separately from the normal global OpenTelemetry. So there are no new patterns; that is essentially what I'm trying to say.
A: Yeah, and it is still alpha, so not high risk if there's anything we want to tweak later.
G: That sounds good. Yeah, I don't feel good, you know, merging a PR that creates a new artifact without John, so...
B: One thing about this PR: we'll have to implement a logging bridge API in the agent, and we...
B: ...that, because it was hacking internal stuff that we just simply ignore, but we'll have to make another bridge for that.
A: So that if users also bring the log API, and they use the log API, we bridge that to the Java agent's internal log API or SDK. So...
D: Not the SDK, but... so...
G: Well said. The reason the Instrumenter API does that, though, is to do collection of traces and metrics in a unified way. So, you know, you implement one interface to collect all the attributes for your HTTP server calls, and then you can invoke the trace API and, you know, increment the right instruments in the metrics space, and, you know, you use the same set of attributes.
G: It's both... I don't think there's any... it's where logging fits into that picture yet, where you'd want to have, like, logging unified with tracing and metrics. So I don't know where the log API fits within the Instrumenter API at this moment. And logs, like, you know, maybe in the future there's some sort of thing that says there are semantic conventions for collecting events on HTTP server requests.
G: This hasn't been implemented yet, but part of the log API is going to be an option, when you're obtaining a logger, on whether trace context should be automatically included or not, and it's going to default to true. So, by default, like, when you obtain a logger and you log messages to it, trace context will be automatically propagated there. And so I think it'll be up to the SDK to implement that: the SDK, you know, in its implementation of logging statements, should look for the trace context that is active and set the appropriate fields on the log record.
G: So yeah, it depends on what your use case is. If you want to emit events, you would use the API directly. If you want to do kind of traditional application-style logs with a severity and a message, then, you know, you would want to use an appender, and you'd want to...
G: You'd want to use Log4j or Logback, or one of the, you know, the main logging frameworks, and configure an appender to bridge those over to the log API, and the appenders should automatically ensure that trace context is included on those log records.
A: Yeah, so you can do that today, Bruno. We do have a Logback appender and a Log4j appender, which can be configured and will send... well, I guess, yeah, if you want to, you know, send out your Logback logs to OTLP.
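For example, wiring up the Logback appender from the instrumentation repo looks roughly like the following logback.xml fragment (the appender class name is taken from the logback-appender instrumentation module as I recall it; verify it against that module's current README before using):

```xml
<configuration>
  <!-- Bridges Logback log records, including active trace context, to the OpenTelemetry log API. -->
  <appender name="OpenTelemetry"
            class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"/>
  <root level="INFO">
    <appender-ref ref="OpenTelemetry"/>
  </root>
</configuration>
```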
A: And I wanted to show this picture, just for folks who haven't seen it, of how the agent... this is a really important story for the agent, to interop with manual instrumentation, so that users can use the OpenTelemetry API in their apps. They can use libraries that have been instrumented with the OpenTelemetry API, and, normally, if you're not running the Java agent, these will all flow down to the OpenTelemetry SDK.
A: But when the agent is there, it hijacks the OpenTelemetry API in the user's class loader and funnels those calls... bridges those calls over to the shaded OpenTelemetry API that is sitting in... we have a shaded OpenTelemetry API in the bootstrap class loader, and then the OpenTelemetry SDK is actually not shaded, but it is in the agent class loader.
A: And that's part of... and we actually kind of learned this through Andrew, through Datadog's experience with the OpenTracing API. Originally, the Datadog Java agent was putting the OpenTracing API unshaded in the bootstrap class loader, so that they could, you know, communicate between... basically, it made it a one-stop shop there.
A: It would go straight from here to there, which is nice and convenient, but it then caused a lot of issues with version conflicts. And so we intentionally have this versioned bridge, which allows us to adapt to different versions of the OpenTelemetry API that the user brings, and, of course, the backward-compatibility guarantees of the OpenTelemetry API have helped us a lot, made it less complicated.
A: If there were more breaking changes, this would be way more complicated, but even in the future, if there is an OpenTelemetry API 2.0, we would be able to support backward compatibility with 1.0 in the agent.
A: Yeah, it's good stuff. That's why Mateusz is looking forward to writing a new logging bridge for the new logging API.
A: So if you look over in the... so this is our actual OpenTelemetry instrumentation of the OpenTelemetry API, and because there have been new features added, like in 1.4 and 1.10, we have new additional bridging for those.
A: Cool. Anything else anybody wanted to bring?
F: Yeah, I'm sorry for joining so late; I had so many calls before. Anyway, yeah, I just wanted to stop by. I was talking to Jack briefly, very briefly, last week about this: I'm trying to put cycles in so I can wrap up the OpenTracing shim.
F: So I don't know whether... like, I just wanted to first let all of you know, Jack already knows about this, that I will be putting cycles in. The second thing is: how do you want to do the review process? Like, I mean, I am not an approver there anymore, so I can only post PRs. I don't know, and I don't think there's probably enough interest; this is, like, very specific.
F: You know, relevant to OpenTracing users. So I don't know whether you would like to have other people who were involved in OpenTracing come check the PRs, or how we move on. And just for context: the OpenTracing shim specification is already stable, so mainly it is just a matter of simply bringing those things in the spec to the actual shim.
G: Yeah, I can provide sanity checks on the PRs and obviously spend some time reviewing them, but, to your point, it would be good if there were at least one or two users that are using OpenTracing and are using the shim, to be able to do reviews as well.
F: Yeah, totally. There is a developer, Gregor, from Germany, from Salam, I think; he mentioned that they were using the shim. I was trying to poke him, but I think he's busy. So, of course, the question is, you know, as I said before, basically this is all in the spec, but of course I would like to have a second pair of eyes on that one.
A: Andrew, do you happen to know, given that Datadog probably has a lot of OpenTracing users, if you've had any of those OpenTracing users starting to move to OpenTelemetry?
C: I don't really know what fraction of the users are on OpenTracing, or whether they're pushing towards OpenTelemetry. That's something I'll take a look at, because that'd be good to know.
C: Yeah, I think most people are going straight with OpenTelemetry, but let me do a little research and find out some more.
A: Carlos, do you... I know Lightstep was big in OpenTracing also. Do you all have customers using the OpenTracing shim today, or are most of them just re-instrumenting with OpenTelemetry?
F: We have one big customer that I cannot name, but yeah, so we have one big customer, and they have, you know, been using it; that's why we're trying to push for that. But they are not, sadly, very involved in OpenTelemetry. I was trying to bring them in, you know, but not much success on this front.
D: So, as part of, sorry, as part of the process of migrating MicroProfile Telemetry, which uses OpenTracing too, right, to OpenTelemetry... the process is still ongoing, but the most likely outcome is to provide the shim before the old library or specification is abandoned. So I imagine that we will be using this.
A: Yeah, if you can, maybe advertise... well, once Carlos puts up a PR, advertise that, to see if anybody's interested in looking at it also.
F: Oh, for sure, yeah. I mean, it could be nice. So, basically, just for a little bit more context: Yuri, also of Jaeger and OpenTracing fame, and me were working on the spec. But that's only from the spec perspective; code-wise, we need to know that, you know, the actual code runs fine, and try to find corner cases. You know, I think we were trying to cover as much as we could, but there are still maybe corner cases.
F: Yeah, totally. As I said before, we have this big customer that uses OpenTracing Java specifically, and they say it's working fine, but it's only one customer, and, even if it's a big one, it's only one. You know, so we need many more eyes.
C: Yes, so I will have to admit that, at the startup I'm at, we are using the OpenTracing shim, at least a little bit, until I can get everything converted over. I haven't been running into any issues, so I don't know that I really can review the PRs. We basically just had a bunch of OpenTracing stuff, and I swapped out the tracer with the shim, and everything just sort of worked, so I haven't been messing with it.
F: Yeah, yeah. I mean, I think that, in any case, any review, even if it's just, you know, as Jack said, a sanity check, that would be more than enough, I think.
A: And, Carlos, what's the plan once that is merged? I assume you're hoping to get that shim marked stable, yeah?
F: Correct, yeah. So, basically, my plan is to start sending small PRs for small changes here and there, cleanups. After that, we'll do a final review, try to get more users to test that... before we can actually, once everything from the spec is complete, then, yeah, mark that as stable, because right now it has an alpha suffix, if I remember correctly.
G: It might be useful, when you open those PRs, Carlos, if you could list out a couple of the corner cases that you might be thinking are prone to issues.