From YouTube: 2023-03-07 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
C
And yeah, these two items: essentially they're CJ's update. We want to release the alpha 1 version, and we'll do it probably sometime later in the day today. This will have the changes for exemplars. Exponential histograms still need a bit of work, so considering that we want to quickly push out a release which has exemplars, exponential histograms would be part of alpha 2, or whatever comes next.
B
Regarding exponential histograms, I'd be willing to show off some end-to-end stuff today and do a little demo, if folks are interested.
B
Yeah, we don't do cameras on this call.
F
Yeah, this is something I just wanted to get your opinion on. I feel like you're allowed to do some things with the API, and then, silently under the hood, the data maybe doesn't work the way you expect. For instance, if you create a metric with a space in the name, the exporter will just log a warning and continue, and you won't ever know unless you have that diagnostic information enabled. I think it's the same for using decimals for a metric, I don't know.
F
Maybe there are other sorts of common pitfalls that are maybe obvious; I'm just curious. If you read through the spec, you'll find a line that says you can't have a space in the name of a metric, but it's really not obvious for people who are implementing their code. I don't know, I'm just wondering if you think a common-pitfalls doc, or some other way of surfacing these types of things, would help. Any feedback, I guess.
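For context on the rule being discussed: the OpenTelemetry spec constrains instrument names to start with a letter, followed by a limited set of characters up to a maximum length. A rough sketch of that check in Python (the pattern is written from memory of the spec at the time, which allowed letters, digits, `_`, `.`, `-`, with a 63-character limit; later spec revisions relaxed it, so verify against the current spec text):

```python
import re

# Approximation of the OpenTelemetry instrument-name rule discussed here:
# must start with a letter, then letters/digits/_/./- up to 63 chars total.
# (The exact pattern has been relaxed in later spec versions.)
INSTRUMENT_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_.\-]{0,62}$")

def is_valid_instrument_name(name: str) -> bool:
    return INSTRUMENT_NAME.match(name) is not None
```

So a name like `request count` (with a space) is rejected by a spec-compliant SDK, which is exactly the silent-drop pitfall described above.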
E
I want to corroborate that, because I have one data point. I think perhaps we have to go back to the spec for this. But I have one data point where the users simply gave up on using OpenTelemetry metrics because of this. The situation is, they have had an internal library to log metrics for a long time.
E
So the group that owns this library just wanted to switch to OpenTelemetry, and because of the name issue that Dan mentioned, the metrics just disappeared. Then they had to track down which metrics were disappearing and go to each team to ask them to change the names. So in the end they opted: we are not going to take OpenTelemetry to supply the metrics.
F
Yeah, we actually had something kind of similar, where we have our own receiver that translates a legacy proprietary format of data, and we actually create, say, a counter in the OpenTelemetry format, the JSON format, that includes a space in the name, and that works fine; it goes to the backend. So it's definitely not obvious for people who are on a migration path, maybe not starting with OpenTelemetry from the get-go, yeah.
C
Yeah. I mean, I know that we didn't create our own API for metrics; we use the .NET one, and the .NET team does allow a lot of things and doesn't validate as much as we do. We have more constraints on naming the metric, and even on the types that we can use, but...
C
I mean, I don't know; I would like to hear what you think. Maybe there is, I think there is, value in just some kind of disclaimer that, even though the .NET API would allow you to create a metric name with something invalid without throwing an error... Yeah, go ahead.
E
Yeah, I think... I didn't review this in the spec, and I've been a little bit far from the spec, but I would say the .NET SDK is very spec-compliant. What I feel is that for this we need some kind of escape mechanism. Perhaps we have to go back to the spec, so that in this kind of situation we can have something that says: hey, I know that this metric has a non-compliant name, but I want this metric to be published anyway.
E
You know, it's not ideal, but I think the user that I encountered is going to be a relatively common case, and people will simply not use OpenTelemetry; they'll go straight to the .NET APIs. Fortunately for them, the instrument types are in the BCL, but for other cases, I think you'd be following the letter of the spec, but it's not practical at all.
E
You know, so perhaps this is something where I could try to go back to the spec and get some kind of get-out-of-jail card; I think that would be the best. We need something to allow people to kind of do their own validation, or...
H
Yeah, I agree with that. So we've had this internally at Microsoft: there are some teams migrating to OpenTelemetry that are using, you know, slashes in their names, dots, whatever. The OpenTelemetry SDK is spec-compliant, and the spec is very strict about what can be in a name, so we're just following the spec. I assume Alan can probably confirm it's influenced by Prometheus.
H
It's probably Prometheus's very strict requirements. So I think it would be great to go back to the spec and allow SDKs to expose a way to control or turn off this validation. I'm going to paste in the chat what we did for the Microsoft users that are blocked: basically, in our internal exporter, we just use some reflection to replace the regex, to turn it off and just allow whatever, with the risk that there is.
H
You know, if users have an exporter that does really care about the naming conventions, and then you give it some invalid stuff, it's kind of undefined. So that's where we need the spec to say what the SDK's responsibility is. Is it to just throw it at the exporter, and it blows up? I don't know. But I think going through the spec is the right direction on this, and then we can make the SDK more, you know, accommodating for the users where it makes sense.
B
Yeah, so I mean, if folks on this call are interested in this, I think that's totally legit, and a lot of backends would probably support that just fine. But yeah, I think you're right. I'm not super clear on the details of how or why this would break Prometheus, but yes, I do think that Prometheus is the reason behind OpenTelemetry's name restrictions, but yeah.
E
No, I don't think that it is... I confess that I kind of try to avoid doing stuff on the spec, but yeah, I can try to push this, and I'll probably use the link that Mike sent as one of the data points. You know, I will describe the case that I have about the user that tried to migrate.
E
I'll also use this one as a data point, you know. So I can put that on my list and do something that I don't like, but it's needed: I can be the one trying to drive this in the spec.
B
The other component of this that folks have mentioned is that it's extremely difficult to troubleshoot these things. I think, you know, there's a number of things that we've talked about for troubleshooting the SDK, or improving that experience. You know, like, instead of using EventSource or whatever, maybe still using EventSource but also allowing logs to go via ILogger or something is an idea that's been floated; it might make it easier for people to consume diagnostic logs.
B
Also, I think there's an issue open about capturing metrics about SDK health and behavior. None of that, not that I'm aware of at least, has been defined, but basically telemetry about the SDK itself, and making it more discoverable.
B
So, ideas: I don't have the issues handy, but I could try to find them and share them, and if folks have other ideas about how it would be easier for you to troubleshoot the SDK, or things that you'd like to see, you know, you could probably comment on those issues or open issues of your own.
G
People should use the existing troubleshooting, because we don't have anything else right now which works. So I would still expect that, if someone is facing this issue, they should follow the troubleshooting doc. The SDK should be emitting a log whenever it says "I'm not processing this metric because it's not following some spec rule" or "I don't allow spaces"; all such issues should be captured by self-diagnostics. How to improve that? Yes, we don't have any active work going on to improve that.
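For reference, the self-diagnostics mechanism being referred to is enabled in opentelemetry-dotnet by placing a configuration file named `OTEL_DIAGNOSTICS.json` next to the running process. A minimal example, with keys as I recall them from the repo's troubleshooting doc (verify against the repo before relying on this):

```json
{
  "LogDirectory": ".",
  "FileSize": 32768,
  "LogLevel": "Warning"
}
```

Note that with `LogLevel` set to `Error`, anything the SDK logs at Warning, including dropped-metric messages, will not appear in the file.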
G
So, probably by the time of 1.6, when the logging stuff is brought back to main, we could do the ILogger thing on top of what we are currently doing. So until then, this is our only bet. Even the heartbeat thing... the other option is the SDK's own telemetry being exposed as a metric.
G
That would also take some time, and again, it's not going to be as intuitive, because someone has to look at their metric dashboard, know that there is a metric called, say, "SDK telemetry internals", and then understand: okay, that's the reason. So there are still going to be some steps involved. I don't know whether we can do any better than that. Or, like, another idea; I think Blanch brought one up...
G
It's like: you start the SDK with a special flag, and then we throw. If you use an instrument name that we don't like, instead of catching and logging, we'll just throw. It's like an exception mode or something, where the SDK can start throwing, but that's an opt-in thing. But as of today, nothing exists other than the self-diagnostics, so we just have to accept that and use that as the current way to troubleshoot.
F
I'm ultimately losing data. So, by having the SDK, or sorry, by having the troubleshooting self-diagnostics configured with "Error", I wasn't able to see these issues. So I'm wondering: is there a general classification of what I would expect to see at error versus what I would expect to see at warning?
F
Now I'm just going to internally tell everyone: hey, log at warning, because, you know, you could uncover these other types of issues. But it wasn't maybe obvious that I should have had warning on, when I would consider this more of a fundamental problem with what I was doing.
G
Yeah, I think, like...
G
So we could consider increasing the severity to error for anything which would result in data being lost. Even if it's contained to a specific instrument, let's call it an error. Would that be a better one?
F
I don't... I mean, I think we can argue about the semantics of that, but maybe in the self-diagnostics doc we could just say, like: you know, if you set this to warning, you will see things that are missing due to being non-compliant with the spec. I think something explicitly stating that would go a long way, because I kind of...
G
So, can you open the doc we have on self-diagnostics? Do we have...
G
It doesn't have to be that specific; I think we should be very generic. Anything related to data loss, anything related to complete functionality being disabled... we should be able to write something like that right here, or we should just apply it in practice: anything related to data loss, let's convert that to error. I mean, unless someone objects, I can send a short PR just to elevate them all to error.
G
Yeah, so I think there are two action items we can think of. One is: instead of asking people to go to troubleshooting with the self-diagnostic logs, the troubleshooting doc should contain a list of well-known or likely issues, like the meter name being non-compliant. Those things can be added as well-known issues, and then, if you don't fall into those categories, go and get the logs. So maybe something like that, combined with elevating the severity to error for anything related to data loss.
G
The first one is having, in the troubleshooting doc, a list of known issues... or no, not really "issues", more like frequently made mistakes, or frequently encountered misconfigurations, or something like that; we'll figure out some name. But the idea is that "meter name or instrument name should be compliant" should be listed in there, because we've been seeing it very frequently.
G
It's not that much of an issue these days, but it used to be a big issue in the early days, where people just disposed things much too early. So we could start with one or two and see if that makes any difference, or if we are still in the same situation, and based on that we can decide whether we should continue investing in that path, or completely take a step back and think about something more drastic, like throwing based on a flag, like letting the app crash.
G
I mean, again, this would only work if you land in the troubleshooting section, because otherwise you wouldn't know: the compiler doesn't complain, and there is no exception thrown. So I still assume that people are coming to the troubleshooting doc; otherwise none of the things we discussed here would help.
G
It's not going to solve all problems, because we don't really expect people to know to diagnose, say, their cardinality issues by looking at logs. We definitely need the SDK's own built-in metrics, something like how many metrics we lost, how many items we dropped because the buffer is full, etc. But that's still not happening, at least in the next month. So these are really short-term things, and we can see if they help, yeah.
C
And also, I think Martin asked this on the Slack channel: someone is trying to use the counter with type decimal, and what's the specific reason, again, for only supporting things like long and double? Is it the...
G
Yeah, I mean, adding support would be very straightforward. We can add one more type, decimal, and cast it back to double; it should be okay to do that. I mean, I don't know whether that's the expectation, or whether we should just keep the current ones but make it more...
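A note on the cast being proposed: decimal carries more significant digits than a double can hold, so "cast it back to double" is potentially lossy. A quick illustration in Python, using `Decimal` as a stand-in for .NET's `decimal` (this is just to show the precision behavior, not the SDK's actual conversion):

```python
from decimal import Decimal

# decimal (like .NET's System.Decimal) keeps 28+ significant digits;
# a double keeps roughly 15-17, so the cast can silently drop precision.
exact = Decimal("1234567890.1234567890123456789")
as_double = float(exact)            # what a decimal -> double cast would do
round_tripped = Decimal(as_double)  # shows what survived the cast

assert round_tripped != exact       # precision was lost
```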
A
However, I think, for the API, since we are only relying on the abstractions one, we should be able to make it stable before June, if we don't need to make any changes to the API part of it. So the question is: do we still want to not merge this into the main branch, or...
G
I want to hear others' opinions; maybe, Alan, can you share what audience you're thinking of? Because, yes, sure, we said we can make it stable, but still the question is: should the OTLP exporter, the spec-mandated exporter, be okay with taking dependencies on things which are not validated by the spec? Or should we consider moving all of that into a separate package and not touch the official OTLP exporter?
G
If you want to write an exporter with persistence, you should do it as a separate deliverable, so you don't need any change in the actual OTLP exporter itself. You can take over the exporting mechanism, override it with your own thing, so that the core exporter is strictly following the spec and not doing anything more.
B
Yeah
I
think
you're
right
I,
don't
think
that
we
should
take
dependency
on
a
beta
thing.
I
think
that
that
would
be
frowned
upon
by
a
number
of
end
users
who
you
know
115
goes
stable,
they're,
going
to
expect
that
all
dependencies
are
also
stable.
Some
people
don't
care
about
that,
but
no.
G
Because it is more like: even if it's stable... let's say we propose to make the dependency stable, like the persistence abstractions. Once you do that, are we okay with taking extra dependencies? It's not bringing anything new, just that single use case, but what would be the general thinking on...
B
I see what you're saying. Yeah, sorry, I haven't thought deeply about this, but yeah, I think you're right; that is a concern. It'd be ideal to ship this somehow such that it's not a dependency.
G
Yeah, there's no spec work happening in this direction, and I checked: there is probably some old issue, but no spec work is happening, so it's unlikely that we'll get any support from the spec. It's mostly on our own, which is why I was kind of preferring that we do it without affecting the core exporter. But when you do the "exporter plus plus", or the exporter with, oh sorry, the persistent storage, it can override the export behavior and it can decide to store things.
B
At this point... basically, in order for persistent storage to work right, you need that low-level hook to understand what kind of status code you got back from the underlying HTTP or gRPC communication. And if we were to introduce something that was not part of the spec to our exporter, I'd probably want to consider implementing it in a more generic way, like as a hook: hey, you know, I got this status code back from something I just sent.
G
Yeah, that sounds like one approach. I would also say we should consider the other option, where the persistent storage can override the export behavior.
G
Somehow... because we might need some tweaks in the exporter itself to allow a subclass to override it, but that way the exporter itself isn't doing anything extra. It's like: you provide your own exporter, the persistence exporter, which overrides the default behavior, and at that point it can do the extra things, like persistence and retries later.
G
I don't think it's been implemented, so we have to actually see whether such a thing is feasible. If it's not feasible, and it requires some extra things from the OTLP exporter, then we should make that happen, maybe by making something internal and giving special rights to that special package. That would also work. I mean, those are all much better options than messing with the official OTLP exporter.
A
I see, okay, yeah. So for that I think we definitely would need to do some things like InternalsVisibleTo and stuff like this.
A
All right, okay, yep. We can explore that and see.
G
What's the general proposal? I mean, I haven't thought it through, but the idea is some hooks which allow you to take over, or take control of, the transport, or... okay.
G
So we'll leave it there, but yeah, I mean, since we haven't implemented it, I don't know how it would look.
A
Okay, just one other question. So, leaving the OTLP exporter: I think Alan had an issue open to rename the packages in the contrib repo, and I think it was mostly for the contrib packages, if I remember it correctly. So there's a link that I added in that comment that I have... sorry.
A
So, are we confirmed on this renaming, or do we still need some more discussion on this one?
B
It's a good name. I mean, others can feel free to chime in, but I think removing the word "Extensions" from this package is a good thing, just in the sense that, I guess, the main point of this issue is that "Extensions" has kind of become an overloaded term in our packages and shouldn't be used where it doesn't make sense.
G
Yeah, I like the new name. So maybe you can start with a PR renaming that particular package, unless you want to attempt renaming all the other things in one go. So maybe start with that particular package; rename it, yep.
A
I'll start with that particular package first. And also, I wanted to give a brief overview; I don't think I have, like... after we shipped the beta version of this package...
A
...that, so I'll be able to give a short demo on that as well. I'll open the PR before next week, and next week I can do that demo as well, if that sounds okay.
C
Okay, then there's another item on the agenda: the ResourceBuilder API.
E
Yeah, originally I wanted to talk about the API itself, but I just noticed that, two hours ago, Robert posted that the issue on the spec was resolved. It's basically about the service name. Perhaps let me share the screen.
E
Okay, there was a very old issue that was opened by Robert about the service name, and I think people are aware that the defaults are not ideal; anyway, especially for auto-instrumentation it's pretty bad. And earlier today there was, finally, a final decision about the spec in this regard.
E
So it's fine to just use the unknown service name as a fallback, so we can change the default in the SDK, which I think is good news, because I saw some users of .NET complain about that anyway.
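The fallback chain being described matches how `service.name` resolution works in the spec: explicit configuration wins, then the `OTEL_SERVICE_NAME` environment variable, then the `unknown_service` default (the spec actually suffixes the process name when it is known, e.g. `unknown_service:dotnet`). A small sketch of that chain, written from the spec rather than from any SDK:

```python
import os

def resolve_service_name(configured=None):
    """service.name resolution per the spec's fallback chain:
    explicit config, then OTEL_SERVICE_NAME, then the
    'unknown_service' default discussed in the meeting."""
    if configured:
        return configured
    env = os.environ.get("OTEL_SERVICE_NAME")
    if env:
        return env
    return "unknown_service"
```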
E
So this is always... So eventually... In the short run, we are probably going to do this in the automatic instrumentation, but eventually us, or somebody else, is going to do the same in the SDK. I think the more related question that I have is now kind of a moot point because of this change in...
E
...the spec, this resolution in the spec. It's regarding the API to do the resource builder, because, basically, let me see, are you guys seeing the code for ResourceBuilder? Yep, all right. So basically, the case that I think makes the most sense here would be to pass the current set of attributes to the resource detector in the Detect method. But once more, I guess this is coming straight out of the spec, and I think resource detectors in a lot of cases could be kind of fallbacks, and the way that the API, not the .NET one specifically, but the way the spec puts it, seems to be kind of not very helpful.
E
In that sense, you know, you end up having to code something that builds the resource and inspects the attributes. I'm just trying to see... because, as I said, for the service name itself there was a final decision on the spec side, so we are fine; but if people also see this for other resources, perhaps this is something that we want to bring to the spec. You know, we could have, when detecting the resources, just what the attributes are so far.
E
It would make much more sense in the API, you know; we could build these kinds of fallback detectors much more easily.
E
So basically, I'm just sharing this point. If people perhaps have a similar case, it could be me, or perhaps one of you takes that for us to the spec, just to cover this case, and also to share the good news about the service name.
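The fallback-detector pattern being asked for can be sketched very simply: a detector that can see the attributes gathered so far and only contributes keys that are still missing. This is an illustration of the idea, not an actual OpenTelemetry API (the function and parameter names here are made up):

```python
def merge_with_fallback(current, detected):
    """Fallback-style resource detection as described in the meeting:
    a detector sees the attributes gathered so far and only fills in
    keys that are still missing; existing attributes always win."""
    merged = dict(current)
    for key, value in detected.items():
        merged.setdefault(key, value)
    return merged
```

For example, a detector providing `service.name` would not overwrite one the user already configured, but would still contribute attributes like the host name.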
C
All right, yeah, thanks for sharing that.
H
They have sort of a fallback built in: if they don't find a service name, then they fall back to this thing called a default resource, and they use that as the service name. I don't know if it makes more sense to put it there, or fold this all into one pipeline, or do something else; I just wanted to share that there is this logic in, I think, Zipkin, Jaeger, and OTLP, but I'm not 100% sure. Just something to keep in mind.
C
Okay, I think there's no other agenda item. Alan, do you think you have enough time?
B
Yeah, sure: exponential histograms. Yeah, I'll share my screen.
B
So, to actually start with, just a heads-up on the...
B
As previously stated in the agenda, this is coming in alpha 2. So alpha 1 will land some exemplar support that CJ's been working on; in alpha 2, hopefully, we will have end-to-end support for exponential histograms. I'll be kind of iterating on that; this is my most recent PR, so if folks have some spare cycles to review, I have follow-up PRs to continue this work.
B
That said, a couple of months ago I did get it working end to end; I have another branch with just a rough cut of exponential histograms fully working in the SDK. Reiley actually did most of the low-level support for exponential histograms, I want to say back in November or so last year, so the work that I'm doing at this point is basically just wiring it up.
B
...so that, you know, people can actually use it. But in case exponential histograms are a thing that you haven't been paying super close attention to, or you're completely unfamiliar with them...
B
I can just share some details about them. So I think maybe a useful place to start is the data model. This is the OTLP proto, which is a pretty good, concise description of the data model. So, you know, there's this new histogram data point, an exponential histogram data point, that has a different structure than the explicit-bounds histogram data point.
B
Actually, the properties that are not unique to it, that are shared with the explicit-bounds histograms, are things like, you know: histograms have a count of measurements, and the sum of the measurements; these are all the same as explicit bounds.
B
Also, you know, min and max are shared between exponential and explicit bounds. It's really just the buckets: the way that the buckets are modeled and implemented differs between explicit and exponential histograms.
B
So, as you know, with explicit bounds, you, as the application author, have to define your bucket boundaries, either by hand or, you know, just be subject to the defaults of the SDK. But for exponential histograms there is no such configuration: you don't configure your bucket bounds; they are dynamically computed for you and adapt to the range of values that are recorded. So, in terms of exponential histograms...
B
There are some things that describe the set of buckets, notably what we call a scale, which I'll get into, and then also two sets of buckets, for capturing both positive and negative values.
B
Just a note about negative values: this is technically not really supported by the metric API today, recording negative values, that is, but it is supported by the data model, as we see here. In .NET, however, given that the histogram API is part of the actual base .NET libraries, you actually can.
B
You can record negative values. But, that said, I don't know that there are backends that necessarily support negative values, or I'm not aware of any today. So, diving into just a little bit of the math... the math is interesting in all of this, but I have this spreadsheet with a lot of noisy numbers.
B
So, in that data model, we saw that an exponential histogram data point is described by a scale and a set of buckets. A scale is basically a signed integer; that's what I have here in row one, just demonstrating the possible scales.
B
The scale of an exponential histogram data point is used to compute the boundaries of the buckets. So a set of buckets for an exponential histogram is defined as an array of bucket counts, and the array starts at an offset index. The offset index can be negative, so it's unlike, you know, standard array indices, but this index is used in conjunction with the scale to compute the boundaries.
B
So all the values that you see in this spreadsheet define the boundaries of an individual bucket within an exponential histogram. So, if we look at this: for a given bucket within an exponential histogram data point with scale two, the bucket boundary is what we call the scale factor raised to the power of the index, and that is what equals the lower bound of an exponential histogram bucket.
B
So, for a value recorded by this histogram: values captured in bucket index negative five will be between approximately 0.42 and the next bucket boundary, 0.5. I hope that makes sense; this is kind of a lot of information I'm throwing out real fast. But basically we'll see that there's a relatively dense distribution of bucket boundaries.
B
They're capable of representing a pretty dense data set. There are other interesting properties of this, in that, depending on the scale, the number of buckets between integral powers of two (so one, two, four) is two to the power of the scale. So 2 to the 3 is 8, and so there are eight buckets between integral powers of two. And so we see that, as the scale goes up, so does the density, or the precision, of the values that we can record.
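The numbers quoted here check out if the growth factor ("base") is taken to be 2^(2^-scale), which is how the OpenTelemetry data model defines it: the lower bound of bucket `index` is base^index, and there are 2^scale buckets between consecutive powers of two. A quick sketch to verify the arithmetic (illustrative only, not the SDK's code):

```python
def base_for(scale):
    # the data model's bucket growth factor: 2 ** (2 ** -scale)
    return 2.0 ** (2.0 ** -scale)

def lower_bound(scale, index):
    # bucket `index` covers the interval (base**index, base**(index+1)]
    return base_for(scale) ** index

# the example from the meeting: at scale 2, bucket -5 spans ~0.42 .. 0.5
lo, hi = lower_bound(2, -5), lower_bound(2, -4)

# buckets between integral powers of two: 2 ** scale (e.g. 8 at scale 3)
buckets_per_octave = 2 ** 3
```

At scale 3 that gives eight buckets between 1 and 2, matching the "2 to the 3 is 8" figure above.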
B
In an exponential histogram, as the scale goes up, the density also goes up: there are more buckets to describe the values. So that was kind of the minutiae underneath all of this. But really, from the standpoint of implementing exponential histograms in the SDK...
B
When a value is recorded, we basically need to be able to take that value and map it to one of these indices within an exponential histogram. And then, furthermore, as values are recorded, the scale of the exponential histogram may change, to adapt to a new range of values as they are recorded.
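The mapping step described here follows from the same base = 2^(2^-scale) formula: the bucket index for a value is floor(log2(value) * 2^scale). A sketch of the textbook form (illustrative; real SDK implementations use exact bit manipulation, since this float-based version can land in the wrong bucket for values exactly on a boundary):

```python
import math

def map_to_index(value, scale):
    # index of the bucket whose range (base**i, base**(i+1)] contains value;
    # floor(log2(value) * 2**scale) is the textbook form of the mapping.
    # Exact powers of the base need special-casing that this sketch omits.
    return math.floor(math.log2(value) * (2 ** scale))

# e.g. at scale 2, a value of 0.44 lands in bucket -5 (the ~0.42..0.5 bucket)
```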
B
So, with that said, I can pop over to some code, just to kind of show you the difference.
G
One question. (Yeah, go ahead.) So, can you go back to that spreadsheet? So, at any given time, we only have a single scale, right? It could be either one or two or three or four; it won't be more than one scale at any given point, right? (Correct, yeah.) So we start with, say, one, and then adjust it as needed.
B
Yeah, so typically we'll start with a max scale. Currently, in our SDK, the scale is not configurable, because at one point it wasn't in the spec, but it actually is a configuration now. When we instantiate an exponential bucket histogram, we arbitrarily set the max scale to 20.
B
So again, this is not configurable today in our SDK, but it will be in the future. So it starts at 20, but as we begin recording values it may become necessary to scale down, as we call it, to adapt to a greater range of values than what scale 20 can represent.
B
But at any given point in time, you are correct: the scale is just one value. And we'll see in the code here that, as we record values, when we increment a bucket it may become necessary to perform that scale-down operation, which will effectively lower the resolution of the exponential histogram.
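The scale-down operation has a neat property worth noting: lowering the scale by one merges each adjacent pair of buckets, so an index at scale s maps to index >> 1 at scale s - 1. A sketch of that relationship (illustrative only; the SDK performs this as an in-place bucket merge rather than recomputing indices):

```python
import math

def map_to_index(value, scale):
    # same float-based index mapping as before (illustrative only)
    return math.floor(math.log2(value) * (2 ** scale))

def scale_down(index):
    # halving the resolution: two neighboring buckets collapse into one;
    # Python's >> floors toward -inf, matching the bucket math
    return index >> 1

value = 0.44
assert scale_down(map_to_index(value, 2)) == map_to_index(value, 1)
```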
B
Feel free... thank you, Dan. I actually told myself not to do that, and then, of course, there it was on the screen.
G
So, since we only have a few minutes left, instead of going into the implementation, which we can save for next time, can you explain what the benefit is? If I'm an end user, why would I go for this one? Yeah, and...
B
...can see. Yeah, so let's look at the code as it stands, from an application developer's perspective. Basically, I've configured a meter provider; in this sample code I'm recording a number of different histograms, some of them exponential, some of them using the explicit bounds, just to kind of highlight the differences in configuration.
B
With an explicit-bounds histogram, as you know, I can set the boundaries; for an exponential histogram that's not necessary. And so with this code I'll just kind of show you what kind of benefit you'd get.
B
Basically, the code is very simple: I'm recording the same values with both the exponential and the explicit configuration, to be able to highlight what you'd get. So, if we pop over: I have reported this up to New Relic, so I have the OTLP exporter basically wired up end to end on my experimental branch. Here I'm showing the results of using the explicit-bounds histogram.
B
You see a distribution of points like this. We're short on time, so basically I'll just cut to the chase and show you what the exponential version looks like.
B
You see a much more fine-grained distribution of those values, given that the buckets are dynamically calculated at runtime to more accurately reflect the data set that I've reported in this sample app.
B
I haven't looked at Prometheus yet, no. Prometheus does support an exponential format, though I haven't looked at where it's at stability-wise, so I haven't even looked at what it would take to do that yet in our Prometheus exporter.
G
Yeah, I mean, you could use the OTLP exporter to go to the collector, and the collector to Prometheus; that's what I did for exemplars, leveraging the collector's knowledge. So that might be an option: there is already a collector exporter for Prometheus; maybe it has a setting to support exponential as well.
B
Also, if there's a... I'll work on the Prometheus exporter too, assuming exponential histograms are natively supported by Prometheus. If that's true, then, you know, the end-to-end example might just be able to involve the SDK exporting straight to Prometheus.
G
Yeah, so I think we should continue this next week, because we haven't had a chance to look at the implementation, which, from what I looked at, is very scary to look at. So maybe if you can help us break down some of the things, that would really help.