From YouTube: 2022-06-30 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
Well, I don't think that there is, you know, a good choice. We can probably choose the least terrible one.
A
Still number two, I think, probably. I mean, the best we can do about these interfaces, like the AutoConfigurationCustomizer, is probably adding the super bound (the ? super ConfigProperties wildcard), and maybe that will make it better.
C
Yeah, I look forward to your proposal because, yes, you said "generic super bounds" and I stopped listening, I mean.
C
They want to add, like, additional attributes to our auto-collected metrics, and so, right, that's easy to do for spans most of the time, except client spans. And we had originally thought that we would pass all span attributes to the metrics and have, like, a default metrics view filter or something in place that would limit those to predefined, known low-cardinality ones, yeah.
D
Yeah, and if such a thing existed, that could influence the default aggregation. So, you know, by default use this, but, you know, you can also override it with your own views.
D
So, sort of... so right now, as they exist today, views allow you to remove things. So you can get rid of attributes; you can't add them back. So, from an instrumentation standpoint, you'd have to do what you were suggesting, which is include everything, which is probably not what most people want, and then selectively pare it down to the low-cardinality set that you want.
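That pare-down step might look like this with the view API's attribute filter; again, the instrument name and attribute keys are illustrative:

```java
import java.util.Set;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

public final class AttributePareDown {
  // Keep only a predefined low-cardinality attribute set; every other
  // attribute recorded on this instrument is dropped.
  static SdkMeterProvider create() {
    return SdkMeterProvider.builder()
        .registerView(
            InstrumentSelector.builder()
                .setName("http.server.duration") // illustrative name
                .build(),
            View.builder()
                .setAttributeFilter(Set.of("http.method", "http.status_code"))
                .build())
        .build();
  }
}
```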
C
Could we have a default view in the agent that removes all the others, and then users could provide their own view if they wanted? Even if...
D
Yeah, I'm not sure there's a good way to do what you're describing, Trask. You know, you could do something like this: you could provide a YAML configuration file that says, hey, most people want to use views like this, where the views are defined in that YAML configuration, and you could provide an easy way for them to opt into that.
C
But yeah, I think Matthias's point is good about the library instrumentation, and ideally we would want to wait. Well, I don't know. I don't think it's a super urgent problem, but I foresee this being a common question.
C
So do you know if there's anyone working on the hints stuff at this point?
D
I don't think that anybody is. I think, you know, on Tuesday Reiley mentioned that, you know, metrics still aren't done; there's still a couple of things that we've talked about, and are important, that we punted on, and those are things like exponential histograms, finishing the Prometheus exporter spec, and the hint API. And so he was suggesting...
D
Maybe we get the metrics working group back together to continue to push on those threads, but, you know, we'll see where that goes. Really quickly, before we end this topic: have we talked about potentially using baggage as, like, a stopgap for this?
A
It came up in this issue, but actually using it, especially for server instrumentation, is just horribly difficult, because you need to either add your own instrumentation that kind of, you know, injects itself before the agent's server instrumentation, or modify the agent's server instrumentation call, which is kind of complex, even.
C
Cool, well, that helps, to at least be clear on... it sounds like the long-term approach is probably hints, and hopefully the hints... hopefully the metrics working group is starting to... well, I'm glad it came up this week in the spec call.
C
Oh yeah, your reply. So we're planning, as we discussed in previous weeks, moving the @WithSpan annotation from the core repo to the instrumentation repo, so that it can live alongside the actual implementation of that, so that we can start building out more, like, metrics annotations. And it just felt weird to be writing the annotation itself in a vacuum without writing the implementation of the instrumentation for it.
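For context, a minimal sketch of the annotation under discussion, using the package it lived in within the core repo at the time; the class and method names are made up:

```java
import io.opentelemetry.extension.annotations.WithSpan;

public class OrderService {
  // When the Java agent instruments this class, each call to this method
  // is wrapped in a span (named after the class and method by default).
  @WithSpan
  public void processOrder(String orderId) {
    // ... business logic ...
  }
}
```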
C
So that's the plan, to move that over. And so I just threw out a couple of options for artifact names where that could land, and this seemed reasonable. And then this is sort of part of some of the annotation support we have that instrumentations can use to influence @WithSpan when it returns a promise type.
C
So if you have any thoughts, ping me or post here; otherwise I'm going with one vote to zero and one-and-a-half votes to zero. I put this one first because that was for a reason also.
C
Oh yeah, so last week we talked about some sort of a JFR client library that we were interested in, that's currently...
C
...over here, and we're using it in our distro, and it feels like other people are probably doing very similar things also, as far as managing JFR recordings across different Java versions and how to configure the events that you want from those recordings.
C
So we were interested to get other people's thoughts, and Splunk folks in particular, since you all have an OTel distro that is also doing profiling.
C
If that's something that seemed reasonable to put in contrib and sort of start building on. I think that, at this point, you know, until we have an OTel profiling spec and wire format...
C
...it would just be about managing the recording and producing the files, and, you know, you can read the files and then do whatever you want as far as sending that to your backend system. But this would also give us sort of a foundation, so that when we do have the profiling wire format, we can then start building on top of that.
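A rough sketch of that recording-and-file management with the standard jdk.jfr API; the event name, period, and paths are illustrative:

```java
import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.Recording;

public final class RecordingManager {
  public static void main(String[] args) throws Exception {
    try (Recording recording = new Recording()) {
      // Enable periodic CPU samples; other events are configured the same way.
      recording.enable("jdk.ExecutionSample").withPeriod(Duration.ofMillis(20));
      recording.setMaxAge(Duration.ofMinutes(5)); // keep a bounded window
      recording.start();

      Thread.sleep(60_000); // stand-in for the application doing work

      // Produce a file; a reader can then parse it and ship the data
      // to whatever backend system is in use.
      recording.dump(Path.of("profile.jfr"));
    }
  }
}
```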
F
Okay, the name jfr-streaming is a little bit interesting, right, because of the streaming APIs that exist after 17 or whatever, and this is all about files, right?
F
Yeah, cool. I don't have a better name off the top of my mind, but I just want to, like, avoid that confusion. I think having that in contrib would be a great start. I also wanted to ask: I think the interest is specifically about controlling, meaning starting and stopping, and also event choice, via JMX, is that right? So from a separate process into a running app, or also potentially programmatically from within the same app itself?
C
That makes sense. Hey, I think today, at least, this library and our usage are all in-proc, but certainly, if there's an interest in, you know, managing out-of-proc things as well... but, well.
F
Yeah, our thing currently does it in-process too, but maybe... I don't know if there's a need for both; like, maybe we could keep it flexible.
C
Oh no, go ahead. I was just trying to remember what... and it seemed like a really good resource, something that we should look at as well, as far as trying to converge on something that's useful for multiple people.
C
Even if that's not something that they would necessarily snap to, a new library, until we have the full profiling story, probably.
F
Yeah, in hindsight, I mean... maybe Jack, I don't know if you've messed with that stuff, you might have an opinion too, but I think it might have been overcomplicated, to be honest. I think just having one continuously long-running recording that you snapshot has worked out better.
F
So you start the recording, right, and it's just capturing events for some amount of time, and then there are APIs to invoke... I think it's called snapshot, maybe I'm using the wrong terminology, but yeah. So you take a snapshot, and that's like the last n messages since you last snapshotted, and then those get written to a file. I can see if I can find where we do that.
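That matches the snapshot support on the platform flight recorder; a minimal sketch, with the dump path made up:

```java
import java.nio.file.Path;
import jdk.jfr.FlightRecorder;
import jdk.jfr.Recording;

public final class Snapshotter {
  // With a continuous recording already running, capture a point-in-time
  // snapshot of the buffered data and write it to a file.
  static void snapshot() throws Exception {
    try (Recording snapshot = FlightRecorder.getFlightRecorder().takeSnapshot()) {
      if (snapshot.getSize() > 0) {
        snapshot.dump(Path.of("snapshot.jfr"));
      }
    }
  }
}
```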
D
Yeah, that's... so I didn't write this kind of overlapping-windows-and-deduplication code, but I know it exists, and that's why I was talking about it last week: whether there's a better way to do it that achieves the same effect of, you know, basically using the file-based approach but still kind of being able to operate on a stream of events. That's, you know... let's see, yes.
F
Totally. So I think the act of starting a recording is non-trivial, I think, and so what we were doing at New Relic was starting a recording, and... let's just make up a number: let's say we wanted to run it for one minute. At, like, the n-seconds mark, where n is, like, I don't know, 30 or 40 seconds, we would start a second recording that would be overlapping in time with the prior recording.
F
So there were two going on at the same time, but they would have an overlap, and then we had to go back through where they were overlapping to ensure that we didn't lose anything. So it was, like, this orchestration of starting and stopping these overlapping time windows, and I think it was more complicated than it needed to be.
C
Yeah, I'm definitely interested in the snapshot reference, because I don't think I've seen that.
C
And that's part of why I'm excited to... like, why I'm interested in bringing this to the contrib repo: there's so many smart people here doing this stuff, and I think there's, like, little nuggets of wisdom here and there that we can all benefit from.
F
And Laura's done some interesting work around also detecting chunks within a given recording.
E
Basically, I think the problem was that some of the data in the JFR file is in random order, so we had to sort it, and we didn't want to, like, read all the records from the JFR file and sort them, but do it in smaller chunks, because the jfr utility itself uses the same approach, because the data is organized as chunks. But you had to do some kind of hack to figure out when the next chunk started.
C
And now, is this all focused on, like, the continuous profiling use case, or is anyone doing, like, trigger-on-demand or trigger-based profiling?
F
...of this, like, two-way command structure, which I think Tigran has put a little work into. I just... I don't know that, without that, it's super practical.
C
Right, that would be for user-triggered, on-demand, yeah. What about, like, regression... like, threshold... like, when you know performance starts tanking, trigger it?
C
Yeah, I... I mean, you could look at response times, memory, CPU.
E
Basically, you don't get, like, the nice things about the JFR profiling, that it wouldn't stop all the threads. If you're using that event, you basically stop all the threads anyway, and then you have to parse the text; might as well do it in the Java code.
E
We will get... like, at least you don't have to parse the stack traces from the text dump.
F
Yeah, in case people aren't aware, there are also the execution sample and native method sample JFR events that you can subscribe to, and they're not periodic, though, right? So that's the problem: if you're trying to do wall times, you need a periodic sampling of call stacks, and those events do not give you that.
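For reference, a sketch of subscribing to one of those events with the JDK streaming consumer API; the sampling period and the printout are arbitrary:

```java
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public final class SampleSubscriber {
  public static void main(String[] args) {
    try (RecordingStream rs = new RecordingStream()) {
      rs.enable("jdk.ExecutionSample").withPeriod(Duration.ofMillis(20));
      // Invoked for each sampled stack trace as it arrives.
      rs.onEvent("jdk.ExecutionSample",
          event -> System.out.println(event.getStackTrace()));
      rs.start(); // blocks this thread while streaming
    }
  }
}
```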
E
I'm not sure. Like, I remember reading from this Markus guy, he was, like, the guy who worked on JFR for, like, 20 years, mentioning that it doesn't support wall-time profiling and they're planning to add something.
C
And then, so, continuous profiling triggers... so we are doing sort of trigger-based profiling, sort of; it's a little bit more of, when we think performance is... so instead of continuous profiling, we're currently only trying to diagnose a specific performance regression.
C
I think that both are potentially super useful. Continuous profiling is great; it has a lot of other advantages, too, being able to just generally look at bottlenecks in your code, whether there's a regression or not.
C
As far as triggering, I mean, it allows you to gather less and be a little bit more targeted, but I think it'll be interesting. I'm looking forward to getting more user feedback from... so that was sort of the Illuminate folks, the Microsoft jClarity acquisition; that's sort of their piece, trying to do root-cause diagnostics on a performance regression.
C
So they kick off a bunch of things when they see performance start to go down, so that in general it's not consuming a lot of resources. But we have not... I know they've had user feedback from the previous product, but we're just starting to get that feedback inside of our product umbrella.
C
It was initially limited to just CPU and memory, but really the one that they're just adding in right now is response time, because really that was what they wanted: from a user perspective, slowness seemed to be the primary driver of when to do an analysis. So yeah, they're currently hooking into just the span stream, but really, I need to... and I guess the metric stream is a little questionable.
C
Jack, is there any way to hook into the metric stream, the metric event stream, or do you just have to change your emitting to every, you know, five seconds if you want to view that more frequently?
D
Like, the stream of measurements themselves? Yeah, there is not currently a way to do that, not even, like, a low-level way that would use internal APIs.
D
You might be able to do something where you, you know, inspected the internal APIs and implemented the actual interface that you need to have your own aggregation, and maybe you could, like, wrap an existing aggregation and do something beforehand. I bet it's possible.
C
I thought that... so you can't write your own aggregation at this point? Okay.
D
Yeah, that would be kind of a place that you could do it. There's a number of reasons why... well, that may never be a thing, like, implementing your own aggregations. It might become a thing, but I don't know; there are some complications that make that not trivial.
F
So, as far as continuous versus... not, I guess... continuous versus triggered profiling, or recording: I think someone on the profiling side also had a use case, or described the use case, of just wanting to sample across your fleet to get, like, typical behavior patterns, right? Like, so, instead of necessarily needing to record every minute on every instance, you might say: okay, my fleet's currently 100 VMs; I'm going to take 20 of those and do a random sampling, like a continuous random sampling, of those. So...
F
I mean, you'd want to keep it... I think you'd want... I don't know, I think you'd probably want to keep it consistent. But, oh...
C
Yeah, I know that the Application Insights, the Microsoft .NET profiling service... I don't know if they do the fleet thing, but the default configuration is, like, profile two minutes out of every 30 minutes or every 60 minutes, something like that, to not do continuous all the time, but to get some...
F
Samples, yep. And I mean, there's a non-trivial amount of, like, data egress cost and storage when you're doing continuous as well.
G
There are various reasons to continuously profile the JVM. It's when you have sporadic performance regressions, things that happen from time to time but that can't be easily reproduced; having the ability to continuously profile the JVM could help to investigate these difficult cases.
C
Especially if you can limit the amount of data in some ways, which is hard.
C
Cool. Well, we're still discussing a bit with the... so this was written by the jClarity folks, and so it seemed like, I mean, as far as anything for an initial start, that we could propose it into contrib and then definitely evolve it and pull in other ideas.
C
Oh, the other thing I wanted to ask: what's the story for correlating traces and profiles? The traditional one that I've seen is, on scope activation, emitting JFR events that say: this thread id is now running inside of this trace id and span id.
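A sketch of that traditional approach: a user-defined JFR event committed on scope activation. The event name, labels, and the emitting hook are made up for illustration:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical correlation event. JFR records the committing thread
// automatically, so the payload only needs the span identifiers.
@Name("otel.ScopeAttach")
@Label("Scope Attach")
public class ScopeAttachEvent extends Event {
  @Label("Trace Id") public String traceId;
  @Label("Span Id") public String spanId;
}

// Committed wherever a scope is activated, e.g.:
//   ScopeAttachEvent e = new ScopeAttachEvent();
//   e.traceId = spanContext.getTraceId();
//   e.spanId = spanContext.getSpanId();
//   e.commit();
```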
F
It is, yeah, exactly. And depending on the runtime, it could be potentially a large number of events, so that kind of stinks. And then you're also not guaranteed that in the recording file they're ordered the same way; the timestamps, you know, should be ordered, but physically, when you're reading or iterating through the events in a file, you're not guaranteed that they are in timestamp order.
F
Yeah, yeah, sorry, I'm going fast. So if your recording stream looks like, you know: scope change, scope change, scope change, and then you get a thread dump for a thread, you have to be able to go back and keep state of what spans are currently executing on which threads, so there's a little stateful bit. And then, when you encounter a thread event, to be able to tell what span was in context on that thread at that time, you have to have time order.
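In other words, something like this hypothetical replay over events already sorted by timestamp; the scope event name and its "spanId" field come from the sketch above, and the sampled-thread handling assumes jdk.ExecutionSample's "sampledThread" field:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import jdk.jfr.consumer.RecordedEvent;

public final class SpanCorrelator {
  // Track the active span per thread while replaying time-ordered events,
  // so each stack sample is attributed to the span in context at that time.
  static void correlate(List<RecordedEvent> eventsSortedByTimestamp) {
    Map<Long, String> activeSpanByThread = new HashMap<>();
    for (RecordedEvent event : eventsSortedByTimestamp) {
      String type = event.getEventType().getName();
      if (type.equals("otel.ScopeAttach")) {
        activeSpanByThread.put(
            event.getThread().getJavaThreadId(), event.getString("spanId"));
      } else if (type.equals("jdk.ExecutionSample")) {
        long sampled = event.getThread("sampledThread").getJavaThreadId();
        String spanId = activeSpanByThread.get(sampled);
        // ... attach spanId (may be null) to this stack sample ...
      }
    }
  }
}
```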
C
And do you have your own ingestion format currently, or did you send the whole JFR file to the backend?
F
Yeah, so our first pass was sending a custom payload inside of logs as text, just, like, plain text, and we've recently switched over to pprof. But it's still, like, MIME-encoded within logs, right, because logs currently do not provide a binary format.
C
Interesting. And so do you have just a generic mapping from JFR events to pprof, or do you only pull out, like, execution profiles from JFR?
F
Yeah, I think that's the idea. Laura, I don't remember the specifics, but there's some other stuff that we need in addition... pprof is primarily concerned with, like, stack traces, but we also need to ingest things like thread id, thread name, and potentially span context. Do you remember how those get in, or get tagged along with the pprofs? It's still part of the log; like, they're attributes on the log.
F
Yeah, I missed last week because I was out, but I've been trying to go to those, and yeah, it seems like... I mean, there's a lot of ideas kicking around right now. As with any new SIG, you would expect a lot of varied and diverse opinions and interests to be present, and they are. But yeah, I mean, the first thing people are thinking about is, like, periodic sampling of stack traces, which I also think makes sense as, like, the first approach.
F
But to your point, Trask: there are a ton of other really useful events in the JFR stream, and some of them could be used in the context of a profiler, and some are maybe more metric-y, but to a JVM specialist, like, totally invaluable.
E
Well, let's say you have a minute-long recording, so it describes data that happened, like, a minute ago, like the event, potentially. So you would need some kind of metrics API that allows you to, like, backdate the metric to get it in the right place on the timeline. Otherwise, what you get from the JFR file won't correlate with any other metrics that you have.
C
And so that would be, though, like... in Java 17, the stuff that Ben did in contrib of streaming, the real JFR streaming, that would solve that problem, right? Because you're getting instantaneous...
D
Well, it's closer. It's closer to instantaneous; there's still going to be some delay between when you receive the measurements and when you actually, like, record them. But I think that was part of his motivation for only supporting Java 17-plus and not trying to, you know, go back to the versions where you have to, you know, extract the events from files, because I think the assumption is that the delay is too long and the data becomes not as useful.
F
However JFR buffers internally, it's going to be much better than buffering in a file. Oh, I...
G
The events are in memory, but what I have seen from the JDK code is that they are flushed to disk. Even if you don't configure a disk, at the end the event will be flushed to the file. It is possible to have an input stream of events, but in the end the data will be on disk, in the file generated in the...
C
Are you... so this here, this part of this extension in the core repo, is also doing the correlation of scopes, emitting those events.
C
Are you all using this in the Splunk distro, or something different? Because I was thinking of potentially moving this, or at least this part of it, over to contrib as well, because the correlation seems like an important piece of the profiling story.
C
And then, yeah, I'm not... I think we'll put a call out, because I know this is super...
C
This is super important, but I'm curious if anybody is using this piece of it. So we may, Jack, we may put a call out, like... or we could discuss with John and Anuraag whether to keep this in this repo, or see if anybody is interested in maintaining this part inside of the contrib repo.
D
We'd have to ask the original contributor of this, or pick the brains of John and Anuraag.
C
Yeah, I think they... and they had similar... it's been here forever. It was contributed a long time ago, but I think when it's been brought up before, it's that we haven't known sort of what it was, why we would use it. But certainly we can always ping the author and see if they want to maintain it in contrib.
C
Did anybody drop anything? Nope. Any last topics? Anything else anybody wants to chat about in our remainder?
C
But let's... we're... oh, I think I was okay. I mean, my span processing, so...
C
I totally agree. This is an important problem, an important use case. I was a little concerned about introducing a new kind of... a new concept, when we already have, like, span processors.
C
On this one, but I'm good with it, now that it's not in the stable instrumentation... or not stable, but near-stable, instrumentation API. I think it's fine to unblock people and put out some solution until we...
E
I think currently, like, the reasonable solution would be that, like, if you can't add an attribute to this span, then just add a span around the operation and put the attribute, or the magic name that you want to use, on that span instead.
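A minimal sketch of that wrap-around-span workaround; the tracer name, span name, and attribute key are made up:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public final class WrapWithSpan {
  static void run(Runnable operation) {
    Tracer tracer = GlobalOpenTelemetry.getTracer("my-instrumentation");
    // Create an enclosing span and put the custom attribute on it,
    // since the auto-instrumented span underneath can't be modified.
    Span span = tracer.spanBuilder("checkout").startSpan();
    span.setAttribute("custom.attribute", "value");
    try (Scope ignored = span.makeCurrent()) {
      operation.run();
    } finally {
      span.end();
    }
  }
}
```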
C
I think, just... to me, the parallel is to span processors, where, at least on start, you get the context, and that kind of tends to be the most useful place to do things.
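The parallel being drawn, as a sketch: a SpanProcessor gets the parent context in onStart, so it can, for example, copy a value out of baggage onto the span (the baggage key here is made up):

```java
import io.opentelemetry.api.baggage.Baggage;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.trace.ReadWriteSpan;
import io.opentelemetry.sdk.trace.ReadableSpan;
import io.opentelemetry.sdk.trace.SpanProcessor;

public final class BaggageCopyingProcessor implements SpanProcessor {
  @Override
  public void onStart(Context parentContext, ReadWriteSpan span) {
    // onStart is the one callback that receives the parent context,
    // so values (e.g. baggage entries) can be stamped onto the span here.
    String tenant = Baggage.fromContext(parentContext).getEntryValue("tenant");
    if (tenant != null) {
      span.setAttribute("tenant", tenant);
    }
  }

  @Override public boolean isStartRequired() { return true; }
  @Override public void onEnd(ReadableSpan span) {}
  @Override public boolean isEndRequired() { return false; }
}
```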
E
We have, like, some of those instrumentations that fill in some attributes on end, or change the span name.
C
I agree with that, yeah. Since it's something we expect to be solved by a spec at some point, I think just having the most narrow sort of solution to unblock people makes sense to me.
C
Jack, do you use, in the core repo, an experimental annotation, or beta, or anything, anywhere?
D
Nope, just the internal package.
C
Yeah, so I mean, even though it's going in the experimental... the semconv module now, which is going to stay alpha, it does make sense to kind of have a sort of warning, since we expect this to be a temporary solution.
C
And I think that the semconv module may never be stable, because there will always be unstable semantic conventions, and as, like, the HTTP semantic conventions become stable, we would move the HTTP attributes extractor from semconv into the stable instrumentation API.
D
Trask, do you remember if that's been... well, we're at time; we can talk about that later. I'm wondering if there are any other strategies for that, because, I don't know, it doesn't sound appealing to have to rely on an alpha artifact forever if you're using a stable semantic convention. But that's for later conversations.
C
What I was saying is we would move the stable ones out of semconv. Oh...
C
At least a decade until... yeah. Cool. Hey, great to see everyone. Thank you for the good discussion, and until next time, take it easy.