From YouTube: 2022-10-27 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
C: Yes, I'm in California.

B: So the mapping... this makes sense to me now, and I kind of like that flow of your mapping from the bean.

C: Yes, yes, I absolutely agree; I missed that part. As you remember, it was originally named "label" in my first version of the pull request, which would eliminate this confusion. I've thought about a number of solutions, but I don't see a good one for this.
C: We can rename it "metric attribute" in both places, which could be clearer, but it's a little bit verbose. We could introduce "metric attribute" only outside of the mapping section and keep "attribute" inside, but that would lead to another confusion: that we are talking about two different things in the two places.
B: If I think it's really close, I'll go ahead and start. If you want to... I remember you had held off on making some changes to the code, right?

C: For example, I think this would benefit not only this little project but would also be clearer for readers of the specification, because when you read the metrics specification, "attribute" sounds like anything you can say, even a description of some sort, and only later do you realize that the real meaning of this attribute is a dimension or a label. "Dimension" is probably better.
C: So just a thought. I know it's a big ask, but perhaps it's feasible.

B: My guess is it would be a pretty uphill battle, just since the metrics spec has already all been declared stable. It was "label" initially, and they aligned on "attributes" to have consistency across the...
A: Yeah. Jack, do you know, I wonder if there's anywhere in the specification that sort of addresses the different terminologies?

D: There's a small blurb in the metrics data model section (not the SDK, the data model section) where it says "attributes" and then in parentheses says "dimensions." So that's a subtle acknowledgment of it.
C: Yeah, so that leads to the next question: what do you guys think if I introduce "dimension" in this place, knowing that it is not fully compliant with the official terminology, but at least it would eliminate the confusion that we have right now?

E: Let me just ask: when we say it's not fully compliant, does it actually match the definition of a dimension? My take on this is that if we say "dimension," it must have a sense of aggregation; if it doesn't have a sense of aggregation, we shouldn't say "dimension."
C: So in the text that explains the schema, I would say that by "dimension" we mean what is also called a metric attribute, so these terms would be identical. The reason to introduce "dimension" is just to differentiate attributes of MBeans from attributes of metrics, which is what is leading to the confusion. So if we use "dimension" for metric attributes, we eliminate this confusion here, because it's unrealistic to change the attribute name for MBeans, right?
D: Well, one thing that's interesting is, we're talking about whether "attribute" is the right name for this or not, but attributes in the OpenTelemetry metrics data model do seem to be a pretty much exact match for MBean attributes. So does that suggest that the name "attribute" is actually correct, and it's just used in two different contexts?
B: And then to your question about sensible aggregation: I think that's somehow implicit in OpenTelemetry metric attributes, as all metric attributes are supposed to be low cardinality and aggregatable.

E: So we have here things like request count, failed time, total, 50p and 99p. Are we genuinely sure that those are both low cardinality and sensibly aggregatable? Oh, that's an appallingly dramatic expression, I'm sorry.
B: And was there a particular label... so this is the older schema, so "label". What is the metric attribute?

E: Well, in particular: count, failed time, total time, 50p, 99p will have an attribute of "type".

C: For this rule, yes. You have other metrics just above that, ones with type "fetch".

C: Yeah, I like this idea with a const function.
D: You know, it's verbose, but you're not going to be writing these constantly. You just need to write them once, and then hopefully they're good to go.

C: Yeah, mapping attributes will not be as frequent here, because metric attributes will be quite limited in number.
C: Right, so we discovered this pattern while producing the rules for the different platforms: usually you have one attribute defined for the whole rule, above "mapping", which is common for a number of attributes, and then there is a specific attribute for each metric attribute, that is, for each MBean attribute.

C: I would say 80 percent of rules are constructed in this way, though of course there are exceptions. But no matter how we look at it, the number of metric attributes, or dimensions, will be small. We have no guarantees that the values that end up there will be limited in number, but that's a different story.
B: Cool. Well, at least from my reviews, I think that set of changes solves all of my concerns.

B: I don't know if anybody else wanted a chance to look through it more closely before Peter goes ahead and updates the code to match the new schema.

C: Yeah, okay. Thank you, Trask, thank you, Jack and Ben. I will make the changes and await your opinions.

B: Awesome.
B: Yeah, that's great. I really want to land this for the November release, which isn't until mid-November, because of our calendar: Jack, you release the Friday after the first Monday of the month, so that's going to be late this month.

B: All right, yeah, let's talk metrics.

B: Jack, maybe I'll let you... do you want to drive the metrics discussion? ("Sure, sure.") I'll stop sharing and let you take over.
D: Hopefully that's big enough. Okay, so the last major section of JVM metric semantic convention discussions that has not been addressed is garbage collection metrics. I think the last time the JVM semantic convention working group met was sometime in the middle of the summer, but we lost a bit of steam. So a while back I proposed a PR that I was hoping to make some progress on asynchronously, and I did get some feedback, a trickle, but some. Basically, I proposed that we record individual garbage collection events into a histogram.
D: So we currently have runtime.jvm.gc.time and runtime.jvm.gc.count: we're tracking the total amount of time that was spent garbage collecting, and the number of times garbage collection events occurred. I think this is a pretty weak representation, because you're not able to categorize which events were stop-the-world versus which happened concurrently, and you don't have any insight into how long different events took. So I was looking into what you have access to via the garbage collection MBeans.
D: I found that you can actually add a hook to get notified of individual garbage collection events, and from those you can get the duration of individual events, so you can record those to a histogram, and that's what I proposed here. From a histogram you can do all sorts of interesting things; it's like a superset of what you can do today. You can still analyze the total time spent garbage collecting and the number of events.
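The notification hook D describes is the standard JMX one (`GarbageCollectorMXBean` as a `NotificationEmitter`). A minimal sketch follows; the histogram call is only indicated in a comment, since the actual OpenTelemetry instrument wiring is not part of the transcript:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

public class GcListener {
    public static void register() {
        for (GarbageCollectorMXBean gcBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            if (!(gcBean instanceof NotificationEmitter)) {
                continue; // not every bean emits notifications
            }
            ((NotificationEmitter) gcBean).addNotificationListener((notification, ref) -> {
                if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    return;
                }
                GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                        .from((CompositeData) notification.getUserData());
                long durationMs = info.getGcInfo().getDuration(); // duration of this one GC event
                String gcName = info.getGcName();   // e.g. "G1 Young Generation"
                String action = info.getGcAction(); // e.g. "end of minor GC"
                // histogram.record(durationMs, Attributes.of(GC_KEY, gcName, ACTION_KEY, action));
                System.out.printf("%s / %s took %d ms%n", gcName, action, durationMs);
            }, null, null);
        }
    }
}
```

The `gcName` and `action` values printed here are exactly the two dimensions proposed for the histogram later in the discussion.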
D: But in addition, you can see the distribution of events, and you can filter to which ones were stop-the-world versus concurrent, depending on which garbage collector you're using, and so on and so forth. There's currently a handful of PRs out there. This was a draft; I've kind of formalized it and incorporated a little bit of the feedback that I've gotten from folks along the way. So there are two PRs open to the spec and two PRs open to instrumentation that reflect the changes I propose.

D: Just to summarize those: there are two bits, and I think the easiest way to look at them is to look at the specification PRs.
D: The first change to the spec is proposing the addition of the histogram that I talked about. I propose recording the time for each garbage collection event and having two dimensions to start: the name of the garbage collector that performed the event, and the action that took place. I give some examples down here for a couple of popular garbage collectors.

D: So, for example, the G1 garbage collector produces two different types of events; to the best of my knowledge, G1 Young and G1 Old are the two different collectors, so those are the two names for gc, and there are two different actions. If you have some prior knowledge about these garbage collectors, you can infer which of these were stop-the-world versus which happened in parallel.
D: So for each... this is kind of poor formatting here.

B: The best way I've found to render that, Jack, is in non-markdown mode with "ignore whitespace" turned on: the gear icon at the top of the screen.
D: Okay, I'm going to have to file that away, add that to my bag of tricks. That's good. Okay! So we already have a variety of metrics that we track that are related to memory.
D: We track the initial memory by pool, the committed memory by pool, the limit by pool, and the usage by pool, and so this suggests that we add one more, which is usage after GC by pool: for each memory pool that we have access to, what was the memory used after the most recent garbage collection event that interacted with that pool. Just to summarize the value of this, you can do a couple of things with it.
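For reference, the JVM already exposes this per-pool "usage after the last collection" value through `java.lang.management`. A minimal sketch, with the up-down counter call only indicated in a comment since the instrument wiring is not from the transcript:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class UsageAfterGc {
    public static void report() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // getCollectionUsage() is the usage after the most recent GC that
            // collected this pool; it returns null for pools without GC support.
            MemoryUsage afterGc = pool.getCollectionUsage();
            if (afterGc != null) {
                // upDownCounter.record(afterGc.getUsed(), Attributes.of(POOL_KEY, pool.getName()));
                System.out.printf("%s: %d bytes used after last GC%n",
                        pool.getName(), afterGc.getUsed());
            }
        }
    }
}
```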
D: You can see if you might have a memory leak: if your usage after garbage collection is continuously going up for a particular pool, then you might have a memory leak. The other type of analysis you can do is look at your usage after garbage collection as a percentage of the limit for that pool. If that percentage is high, then you might be doing garbage collection more frequently than you need to, wasting potential CPU cycles, and so you might want to increase the size of your heap. So those are the two types of things you can unlock by adding this metric.
D: So we don't have access to this information for...

D: This comes from a different part of a different MBean, so we can have a callback that asks for this value, but we can't get access to it after each event.

D: Well, it's actually a bit more tricky than that. When you subscribe to these garbage collection events, they have some sort of accessor that says memory before and after garbage collection, but you can't tie those back to a particular pool. And if you can't tie them back and say "this was the memory after GC for this memory pool," then you can't do the comparison that you need to do.
B: You can get the... well, sorry, can you explain that again? What can we get from the GC event? Let me explain where my concern about the up-down counter comes from; I'm also trying to think whether it should be a gauge instead. Say there was a GC event five minutes ago: every minute you're going to keep reporting that same value, and you're not going to have any context that there hasn't been any new GC event since then.
F: I think I had the same question you're asking, and then I was looking at the specification, and Jack is using an asynchronous up-down counter. In that case we register the value; we don't add to or remove from the previous value, we always register the value that was observed. At least that's what I saw in the specification, and I also confirmed it in the code.
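F's point, that an observable (asynchronous) up-down counter reports the absolute value seen by its callback on every collection rather than accumulating deltas, can be illustrated with a toy model. This class is illustrative only, not an OpenTelemetry API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongSupplier;

// Toy model of an asynchronous (observable) up-down counter: on every
// collection cycle the SDK invokes the callback and exports the absolute
// value it observed; it never adds to or subtracts from a running total.
class ObservableUpDownCounter {
    private final LongSupplier callback;
    private final List<Long> exported = new ArrayList<>();

    ObservableUpDownCounter(LongSupplier callback) {
        this.callback = callback;
    }

    void collect() {
        exported.add(callback.getAsLong()); // absolute value, not a delta
    }

    List<Long> exported() {
        return exported;
    }
}
```

For example, if usage-after-GC sits at 100 for two collection cycles and a new GC event then drops it to 40, the exported series is [100, 100, 40]: the same value simply repeats until a new event changes it, which is exactly B's observation above.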
B: So on the back end, then, if I'm looking at my data for the last five minutes, it's going to tell me that the memory usage after GC was, say...

D: Well, I think you're trying to do different things with that. With memory usage after GC, I want to detect whether there's any possible memory leak, and I want to detect whether my heap is sized properly. You don't necessarily need frequent garbage collector events, or to have that value change frequently, in order to detect those situations.
C: Yeah, these two use cases, whether we have a memory leak and whether our heap is sized correctly, are both handled by this metric, as opposed to heap usage, which of course goes up and down between some minimum and maximum values, so reading its value at any particular point doesn't give you much information. You need a long-term observation period in order to come to any conclusion.

B: Yeah, so I was thinking of a time series of those, but as a whole: if you aggregate over that whole time, say aggregate over 15 minutes...
D: So I'm not sure you have that; you need to be able to tie it back to the memory for the pool. I see what you're saying. And then there's also the power-to-weight ratio: histograms are kind of heavyweight instruments, they have more data that's exported on egress and things like that, so I personally think they should be reserved for situations where there's a lot of value in analyzing the distribution of measurements. Maybe you could find some value in seeing the distribution after GC events, but really the most important thing is the latest value: what was the memory used after the latest GC event?
D: Yeah, so up-down counters are meant to be aggregatable across their dimensions, and so the idea is: okay, we're reporting the memory usage after GC for a number of different memory pools; is there any useful situation where you want to sum those up? Well, I'm not sure there actually is.

D: The problem I have with that is: yeah, maybe it is useful to look at all the memory pools and see if they're going up and to the right together, but different garbage collection events can impact different pools, so it doesn't feel quite right to sum up the usage after garbage collection when those figures could refer to different events; the values they represent are not all from the same point in time.
D: ...only my JVM memory metrics, and I want to read those every five seconds, and the rest of my metrics I want to read every one minute. You could configure views to disable all but the JVM metrics for the reader that reads every five seconds, and disable the JVM metrics for the reader that reads every minute, and in that case you could have different intervals for different slices of your metrics. But for now, all your metrics are read at the same interval. ("Got it.")
D: So going back to the up-down counter versus the gauge: I think it's less useful to aggregate these across their dimension than the other memory metrics, but maybe you could; there still is at least some argument that it's useful to aggregate across the dimensions, and this also provides consistency with the other JVM memory metrics. So I'm inclined to keep it as an up-down counter rather than a gauge.

B: As a gauge, does it make sense... for all the limit stuff to even be gauges? Yeah, those are up-down counters, so when you're dividing... yeah, I'm sold on the up-down counter. I wonder if it's worth putting a comment somewhere about the reasoning, for the future, or do we think it's obvious?
D: It's this section here: "I want to report the absolute value of something." If the measurement values are additive, that is, if you can add them up and that is actually a meaningful value, then you should use either an asynchronous counter or an asynchronous up-down counter, depending on whether your value goes just up, or up and down. And if you add across the dimensions here, a useful number does come out: total memory.

D: So yeah, I'd appreciate it if folks could go take a look at those.

D: I would love to get these garbage collector metrics in, in some form, so we can tie a nice bow around JVM metrics. Not saying that we stop future additions, but we'll be in a good starting place.
F: Regarding those JVM metrics, something that I find, let's say it's general, related to OpenTelemetry, is the description of those memory pools and whatnot. Are they specified somewhere? I tried to Google briefly and I couldn't find a formal definition of the memory pools and what the names of the memory pools mean, because it seems that the names vary drastically per garbage collector, right?

D: Yes. In practice the documentation is not just the code: you have to run the code with a particular garbage collector setup, because only then can you see the data that's emitted from it and see the set of names. We get the names directly from the runtime.
D: It would be useful to go and, you know, here I tried to catalog the names of the different garbage collectors and their actions. Maybe it would be useful to do a similar exercise for the memory pools: enumerate, for the different garbage collectors, which memory pool names they use and whether they're heap or non-heap.
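The enumeration exercise D describes can be done with `java.lang.management` as well; a small sketch, meant to be run once under each garbage collector setup:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolCatalog {
    public static void print() {
        // Pool names and the HEAP / NON_HEAP classification vary by collector,
        // so this needs to be run under each GC (-XX:+UseG1GC, -XX:+UseZGC, ...).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-35s %s%n", pool.getName(), pool.getType());
        }
    }
}
```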
B: It could be, yeah, or just in one of the Java repos, as a doc saying "here are the garbage collectors we've observed, and the pools and the actions that they produce." And someday maybe we could even categorize actions as "may stop the world" or "does not stop the world" and add an extra attribute to the metrics, to sort of alleviate... a lot of back-end systems are already parsing these things.

B: Actually, the JVM folks at Microsoft have released Censum open source, which is a pretty nice garbage collection analyzer.

B: That might be something interesting to look at at some point, integrating that somehow.
D: Yeah, in the meantime I can take an action item: I have a local setup where I can run an app with each of the garbage collectors that I know of, see which pools are produced, and then document those in the readme of the runtime metrics artifact, or the module.
B: CVE scanning: yeah, we've been getting a lot of reports in our distro about third-party library dependencies that have CVEs in them, like recently with that SnakeYAML one, and in our distro we pull in some other libraries that have had CVEs lately. It seems like there's a lot of security research going on these days, which is great, but then even if we're not vulnerable, users sometimes can't even deploy.

B: Sometimes, and this is what happened in our distro recently, there's a transitive dependency which has the vulnerability, so while we were up to date on the primary dependency, we had an older version of... what was it... it was probably Jackson, it's almost always Jackson. So this analyzes the full tree and checks it against the CVE database. This would run once a day, and I don't think I have it open an issue at this point. But anyway, this is also kind of why I've been enabling dependabot more and more on our repo: to try to stay on top of dependency upgrades and reduce this issue.
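The transcript doesn't name the scanner that walks the full dependency tree against the CVE database. One widely used option for Gradle builds is the OWASP dependency-check plugin; the fragment below is a sketch of that kind of setup, an assumption rather than what this repo actually runs:

```groovy
// build.gradle -- hypothetical setup, not taken from the repo under discussion
plugins {
    id 'org.owasp.dependencycheck' version '8.4.0' // any recent version
}

dependencyCheck {
    // Fail the scheduled (e.g. daily) build when any dependency, direct or
    // transitive, has a CVE at or above this CVSS score.
    failBuildOnCVSS = 7.0f
    // Known false positives can be suppressed here.
    suppressionFile = 'dependency-check-suppressions.xml'
}
```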
B: What dependencies do you have? I guess you still have a Prometheus dependency?

D: We do. We have OkHttp, and we have SnakeYAML as well, but that's just in an experimental module, so it's not that important.

B: I mean, if it works out nicely, I think it probably doesn't hurt. I was kind of surprised about dependabot, because GitHub has these security checks. If you go back, or scroll up to the top of the page here, and go to Security...
B: This is the CodeQL that we have running daily, but there's nothing really analyzing... I was kind of surprised there's nothing built in yet analyzing dependencies for CVEs. I'm sure it's something coming, because they're doing a lot of their own internal CVE work: if you link to CVEs now, a lot of them are analyzed, and they have a nice aggregation and analysis site on GitHub itself.
B: Just wanted to mention that John had brought up library documentation last week, and there's been some great progress on the library instrumentation documentation in the last week, thanks to several folks. Thanks, Jack; I like the new PR, also the reorganizing. I think that's going to be really nice.
D: Yeah, my plan was... it seems like a big chunk of work to go through and figure out which libraries emit data according to which semantic conventions, so I'm kind of planning on doing that piecemeal, as I find time, or the will.

B: Yeah, maybe just for your current PR: I would totally merge that, but maybe take out all the TODOs. Do you think the TODOs look... I mean, just aesthetically, if it's blank I think we also know that it's to-do, unless we think that there's none.
B: And the last topic is the OpenTelemetry instrumentation annotations. When we moved them over from the core repo into the instrumentation repo, they're not stable in the instrumentation repo, and the MicroProfile folks would like a stable release. But I think we probably need Mateusz and Emily for this discussion.

D: Yeah, my question about the metrics annotations would be: even if we get them in, which I think is reasonable (there is a draft put together by Fabian, was it?), we wouldn't want to mark those as stable right away.
D: We would want to let those exist in the wild for a bit while we gain confidence with them, so I don't think it's a good idea to let those hold up the stabilization of the trace annotations. I can think of two strategies to deal with that: one, put them in an internal module or an internal package and just acknowledge that, hey, this is our best guess of what the annotation is going to look like, and when it comes time to stabilize it, the package name will change; or, you know, introduce a second artifact.
B: Would we want to generalize that to just any kind of annotation, whether it's span or metric? If we have a span annotation, we might want to both capture a span and metrics. Would we want to merge anything, like maybe rename the annotation to Telemetry and have an attribute on it for what kind of telemetry to capture, span or metric?
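A unified annotation like the one B floats here might look something like the sketch below. The names `@Telemetry` and `TelemetryKind` are purely hypothetical; they are not anything OpenTelemetry actually ships:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical: what kinds of telemetry the agent should capture for a method.
enum TelemetryKind { SPAN, METRIC }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Telemetry {
    // Default mirrors today's span-only annotation behavior.
    TelemetryKind[] value() default { TelemetryKind.SPAN };
}

class Checkout {
    // Ask for both a span and a duration metric from one annotation.
    @Telemetry({ TelemetryKind.SPAN, TelemetryKind.METRIC })
    void processOrder() {
        // business logic; instrumentation would be added by the agent
    }
}
```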
D: Span or metric, or maybe someday in the future, events. I think one difference between span attributes and metric attributes that we would have to acknowledge is that for spans you want to include as many attributes as you can, all the information you can, while for metrics you want to make sure they are low cardinality. So if you wanted metrics and spans produced from the same method, maybe the set of attributes would not be the same.
B: So maybe the attribute annotation also has an attribute on it to say whether to stamp it on spans or metrics. I don't know, I haven't really thought it through; that's kind of why I'm hesitant. Otherwise, yeah, I kind of had that same initial thought, Jack: it was stable already, so we should probably just make it stable in the instrumentation repo, and that's fine, also because we can always introduce additive stuff later if we want to.
D: Yeah, and ultimately we just moved the artifact for our own maintainer purposes; we wanted the code of the annotations to live side by side with its implementation. It feels like a refactor, right? To move it without changing its behavior. And right now we've moved it and changed, not its behavior, but we've changed it from stable to alpha, so it's not exactly a strict refactor at this point.

B: Cool, we are right on time. Any last thing anybody wanted to chat about?