From YouTube: 2021-09-14 meeting
A
So, just a quick update for the SDK experimental release: all the PRs are merged now, so we've done our job and we're waiting for Carlos to cut a release. So thanks, everyone, for your contributions and help; on to the next one. I want to quickly go through the feature-freeze and stable items. So far we don't have any items in the stable milestone.
A
I put all the existing metrics items under feature freeze, and there are probably some new items created in the past few days. I haven't done the triage yet, but it looks like the numbers are pretty small. I know in the previous meeting, in the meeting notes, I also promised some issues I'm going to create. I haven't done that yet, but if I remember correctly the number of issues I need to create is less than three.
B
My thinking here is: we can lean on the collector having renames available via the metrics transform processor, or whatever the hell it's called now. I forget which is which: there's one that's deprecated and one that is in the process of being implemented. But we could basically say the SDK doesn't support it directly and won't support it in GA, but you can use the collector as a workaround. I think that's realistic.
B
I agree with you that the view thing alleviates that concern, but if you look specifically at what they were talking about, I still think that use case may need to get added at some point. But it should be okay to go through a collector.
C
This was filed in response to many people asking: when are we going to have more options for histogram resolution, or different sketch algorithms, and so on? As of today it's almost a year old. I would say that in the version-0 or 1.0 spec we're just going to use the explicit-bucket histogram, and I think by the time we get there we'll have an option in the protocol and an option to implement a higher-resolution histogram, and we already have the view mechanism to take effect there.
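(For reference: a minimal sketch of the view mechanism, assuming a recent opentelemetry-java SDK. The API names here postdate this meeting, and the instrument names are illustrative, so treat it as a sketch rather than the definitive mechanism being discussed.)

    import io.opentelemetry.sdk.metrics.Aggregation;
    import io.opentelemetry.sdk.metrics.InstrumentSelector;
    import io.opentelemetry.sdk.metrics.InstrumentType;
    import io.opentelemetry.sdk.metrics.SdkMeterProvider;
    import io.opentelemetry.sdk.metrics.View;

    import java.util.Arrays;

    public final class ViewExample {
      public static void main(String[] args) {
        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
            // Rename a stream without touching the instrumentation code.
            .registerView(
                InstrumentSelector.builder().setName("http.server.duration").build(),
                View.builder().setName("http.server.latency").build())
            // Choose higher-resolution explicit buckets for all histograms.
            .registerView(
                InstrumentSelector.builder().setType(InstrumentType.HISTOGRAM).build(),
                View.builder()
                    .setAggregation(Aggregation.explicitBucketHistogram(
                        Arrays.asList(0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0)))
                    .build())
            .build();
      }
    }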
B
If I recall correctly, I thought we specifically... yeah, I think we might have fixed it. But if somebody wants to confirm that with me; I think that's what we ended up doing. I'll do that.
D
Okay, a question a little bit connected to this: the OTLP wire protocol for metrics, it is marked as stable, right?
A
It seems to be saying: okay, if you have a sum, like a cumulative sum, and you don't have any update, can you report some kind of reset and then stop sending the data over and over again? And for delta, until you see the next data point, can you just say "I'm not seeing any value" and then, moving forward, not report the dimension at all until you see any updates?
B
This actually relates to a memory leak in Java where, if you have high-cardinality metrics, where you're attaching attributes to the metric and they change frequently, and you don't ever clear out the old metric data streams in cumulative mode, you can actually run out of memory, because you keep allocating new buckets for metrics that never show up again.
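(A minimal sketch of the leak pattern being described, assuming the opentelemetry-java API; the request.id attribute is an illustrative worst case, not something from the meeting.)

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.common.AttributeKey;
    import io.opentelemetry.api.common.Attributes;
    import io.opentelemetry.api.metrics.LongCounter;
    import io.opentelemetry.api.metrics.Meter;

    import java.util.UUID;

    public final class CardinalityLeak {
      public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("example");
        LongCounter requests = meter.counterBuilder("http.server.requests").build();

        // Each call creates a brand-new attribute set. With cumulative
        // aggregation the SDK must keep one stream per attribute set
        // forever, so memory grows without bound.
        while (true) {
          requests.add(1, Attributes.of(
              AttributeKey.stringKey("request.id"), UUID.randomUUID().toString()));
        }
      }
    }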
B
You could consider this a bug with your usage of metrics and your usage of labels, or attributes, on metrics, but I think this is likely to affect all implementations, and it's a possible memory leak if you don't clear out the metrics. That's why I wouldn't necessarily say it's a bug in the Java implementation.
B
It's one of the workarounds that Java uses to limit memory usage for high cardinality, and I think this is something we do need to address. It's a good thing to do during feature freeze, because we should be able to address it without actually changing the features that we've exposed.
C
In my opinion, I don't believe this should be an option anymore. There's sort of a choice of cumulative behavior, and really it comes down to gauge values: if I set a gauge four hours ago, do you want me to report that every interval, or should I just not report it because it hasn't changed? I don't think users actually want that cumulative export but without updating those old gauge values.
C
Nice. Well, this is why, every time it comes up, it's "I want a way to reset my cumulative metrics", and even in Prometheus that's not an offered API, so you get replacement libraries that let you do it. It's well defined in the protocol: we can reset the start time in order to achieve this effect, but we haven't actually exposed an interface to allow users to flush out that state, and I don't think we should. I think we should encourage people to use delta exports if that's what they want to use for high cardinality.
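(One way to opt into delta exports, assuming a recent opentelemetry-java SDK; these API names postdate this meeting, and the endpoint is illustrative.)

    import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
    import io.opentelemetry.sdk.metrics.SdkMeterProvider;
    import io.opentelemetry.sdk.metrics.export.AggregationTemporalitySelector;
    import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

    public final class DeltaExportExample {
      public static void main(String[] args) {
        OtlpGrpcMetricExporter exporter = OtlpGrpcMetricExporter.builder()
            .setEndpoint("http://localhost:4317") // illustrative endpoint
            // Prefer delta temporality so per-stream state can be dropped
            // after each export instead of accumulating forever.
            .setAggregationTemporalitySelector(
                AggregationTemporalitySelector.deltaPreferred())
            .build();

        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
            .registerMetricReader(PeriodicMetricReader.builder(exporter).build())
            .build();
      }
    }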
A
I have a question very related to this: the dimension cap, to avoid a cardinality explosion where one particular attribute ruins the other attributes. For example, say for this metric we have three dimensions. The first dimension is the HTTP verb, the second one is the HTTP status code, and the third one is this:
A
We retrieved something from the user; say the user provided a tag or something. In the real case the user might give us a lot of distinct tags, and we know that the HTTP status code and the verb are super important. So we're saying: I want to keep all the combinations of the verb and the status code, and in case I've reached a certain limit, let's say I have a million combinations, then I'm going to sacrifice the user-provided tag.
A
So when we provide such a thing, at a certain moment, when we reach the cap, we start to drop the data. And if we have the cumulative behavior, and we have to remember everything since the beginning of the process, then there's no such solution. So we probably need to drop some old data based on certain criteria.
B
Right, but we need to allow it to happen. Like Josh MacDonald was saying, Prometheus basically holds on to everything, and I think we should explicitly say in the spec what happens under memory pressure, or around memory. I think we need to address this specific scenario in the spec in some fashion, and I think it'd be okay if we went with Josh's suggestion of: we never drop data, and if you have concerns, use delta. I think that's an okay way to go. I'm a little bit nervous about it.
B
Personally, I think it's not the best from a user standpoint, because we've already seen issues with memory pressure from metrics in the Java instrumentation we're producing. So I'd prefer to actually have some sort of memory-pressure scenario that we've specced out, that's okay for SDKs to implement, that we can leverage, because, like I said, we already saw the issue in the 1.5 release of the Java instrumentation, yeah.
C
So, I mean, there is this topic of a reset, and we have it in the data model. So it's really a question about... I just don't want to see it in the API. I don't want there to be a new instrument method saying "reset me": it's dangerous, it's hard to say what it means, and I don't think users will use it the way we want them to. But an SDK could just flush its state and essentially restart that metric stream, which is kind of accepted; Prometheus still doesn't like it.
A
I think the original question brought by Carlos is more about giving some flexibility, so people can select what they do. I guess we probably won't have a very good selection story, because the scope seems to be big. We can probably start with some suggestions, and once we form a good idea, we can think about whether it's time to expose some option for the user.
C
I'd like to ask a related question. In earlier drafts of the metric specs that I was involved with there was, in order to make that delta export strategy work that I've described, a requirement that you must be able to flush memory after cardinality is no longer used. So it was essentially a requirement saying: this has to work in the delta export strategy, in order that we could tell the user "just switch to delta, put the delta-to-cumulative conversion downstream somewhere, and this will work; you won't run out of memory".
C
That's what I mean when I say the data model specifies how to handle a reset: we have a start field, a start time, on anything that can be cumulative. And in order for the delta-to-cumulative converter to flush its memory, it will flush out anything it knows about, and then, when it begins again, it will use a new start time, and then consumers can see that the stream was reset.
C
Okay, and the way the Go SDK prototype does this is: every time it runs through collect, if it spots a combination of instrument and label set that wasn't used in the entire recent interval, it will schedule it for removal. So if you go an entire interval without touching something, it'll eventually disappear, but that way you don't churn your hash table every interval with stuff that most likely stays in use.
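(An illustrative Java sketch, with hypothetical names, of the eviction scheme described for the Go prototype: streams untouched for a whole interval are dropped at collection time, so streams that stay in use are never churned.)

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    // Hypothetical per-stream state with lazy eviction on collect.
    final class EvictingStorage {
      private static final class StreamState {
        long sum;        // accumulated value for this attribute set
        boolean touched; // updated during the current interval?
      }

      private final Map<String, StreamState> streams = new HashMap<>();

      void record(String attributeSetKey, long value) {
        StreamState s = streams.computeIfAbsent(attributeSetKey, k -> new StreamState());
        s.sum += value;
        s.touched = true;
      }

      // Called once per collection interval.
      void collect() {
        Iterator<Map.Entry<String, StreamState>> it = streams.entrySet().iterator();
        while (it.hasNext()) {
          StreamState s = it.next().getValue();
          if (!s.touched) {
            // Unused for an entire interval: remove it, freeing memory.
            it.remove();
          } else {
            s.touched = false; // arm for the next interval
          }
        }
      }
    }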
A
This would allow you to handle new combinations, which basically means that for delta you only have to remember what happened in the previous reporting cycle. And if you keep refreshing the combinations of dimensions, you don't suffer from that problem of having to remember everything.
B
Yeah, that's the original point I was trying to get at: I think, post feature freeze, we should do a little focus on memory usage and allocations and things. There's a topic that I'll add later, to discuss when we're done triaging, that I forgot to mention from the Java SIG. So let me add that to the bottom of the agenda.
A
For cumulative, I think by default you can follow what Prometheus is doing: so, literally, you have to remember everything from the beginning of the process. But if there's high memory pressure, I think what Carlos mentioned here is to give people an option. They can say: if I have cumulative, but I'm running under such high memory pressure, I can drop the cumulative state, and next time, if I see this as a new combination, I'll set the start time to the current time.
C
I think there's a slightly separate option, which just tells the SDK to do Prometheus the way Prometheus expects, and I don't know if we need to talk about it, but it's essentially saying: when you have not touched an instrument, even though you're being cumulative, should I re-report a value that I already reported and that hasn't changed? That's really what I've been thinking of when I think of memory, and it's separate from whether you're resetting; it's just whether I'm repeating myself with a value I've already reported.
C
And there are variations, because you might not report for several periods and then re-report just because the value changed. In the first conversation we were going to forget it forever, and in a gauge there's just no difference there. But in a cumulative you're saying that I'm going to stop reporting a value and I'm going to forget it, meaning the next value will start from 0.
C
Well, I don't know that we've actually said that, though. Generally, you could reset without having staleness, and it's sort of implied that there was a period of restart, I guess. But, you know, the last report could have been, say, a million, and then the new report just starts with a new start time immediately.
C
In a traditional Prometheus system you don't actually know the start time, which is why there's this heuristic to detect resets. And as long as you never shorten your memory of a cumulative to the point where its next value is going to be larger than the previously reported value despite the reset, which is when you lose information, you're able to reset Prometheus metrics. So essentially this says that as long as you do it on a relatively slow time scale compared with the rate of increments, you can reset them safely.
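(The reset heuristic described here, as a small illustrative Java sketch: a cumulative value smaller than the previous one is treated as a reset, and an offset keeps the observed total monotonic. If the counter climbs past the previous value between observations, the reset is invisible, which is the information loss mentioned above.)

    // Illustrative cumulative reset detection when no start time is
    // available, in the style of Prometheus rate computation.
    final class ResetDetector {
      private double previous = 0;
      private double offset = 0;

      double observe(double current) {
        if (current < previous) {
          // The counter went backwards: assume the process restarted,
          // and fold everything accumulated so far into the offset.
          offset += previous;
        }
        previous = current;
        return offset + current; // monotonic total across resets
      }
    }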
C
I'm just going to share my conspiracy theory, then, which is that Google planted the idea of a start time in OpenMetrics on purpose, and Prometheus never understood it, and so here we are with a start time. Yes, they support OpenMetrics, but they never wanted to support start time.
A
And I actually have a question. So the suggestion is basically to avoid people writing extra code if they can do something simpler. Currently, if you look at the observable, like the asynchronous instruments we have, we want people to write a callback function; in the callback we give them some observer object, and then they use the observer to report the data. So Jonathan suggested: for simple cases, can people just give a simple callback function?
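(For context: the observer-style callback in opentelemetry-java looks roughly like the first call below; the suggested single-value convenience is sketched only as a hypothetical overload in the comment, since no such API exists in the spec.)

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.common.Attributes;
    import io.opentelemetry.api.metrics.Meter;

    public final class AsyncGaugeExample {
      public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("example");

        // Today: the callback receives an observer object and reports
        // through it, optionally with attributes.
        meter.gaugeBuilder("jvm.memory.used")
            .buildWithCallback(measurement ->
                measurement.record(
                    Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory(),
                    Attributes.empty()));

        // The suggestion (hypothetical API): for the simple single-value
        // case, just hand the SDK a supplier and let it do the wrapping.
        // meter.gaugeBuilder("jvm.memory.used").buildWithSupplier(
        //     () -> Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory());
      }
    }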
A
My challenge here is: I want to understand how, if we have two different things, providing the value and the attributes, we can still support a list of things. So, Jonathan, is your suggestion here going to cover the scenario where we can report a multi-dimensional thing, or is it just covering one single value?
A
Yeah, so I guess this is coming from Micrometer, because when you compare the asynchronous instruments with, like, the counter with a function callback in Micrometer, you find that in OpenTelemetry we're making the callback a little bit hard if people only have a very simple use case. So you suggest that we can, optionally, give people an overload of the function or something, to make it easier.
D
So that is basically passed, like, once, when you create the instrument, and also there are not several callbacks for that.
D
It is not even like a callback; it is that, when you create an instrument, you are specifying the set of tags.
D
No, no, no. So when you create a meter, you need to specify all of the tags, and you need to specify the callback that will, like, belong...
F
...to those tags, yes, exactly. That was my point. I think this is where the difference is: Micrometer's concept of a meter is closer to what we call the bound instrument, which includes the metric name plus the dimensions.
B
There's a related issue today in the Java SDK where, if you call create-observable-counter, or the equivalent, twice with different callbacks, and you're trying to report different attributes in those different calls, should that be allowed or not? And the interface right there from Jonathan is, I think, nice for that simple use case of: I have a simple callback, I know all my attributes, and I just want it in and done. This is the same kind of feature request.
B
I would say it's the same as providing that timer instrument, where we know it will simplify people's use cases and their lives, but we might not be able to implement it in time for when we want to actually freeze the spec. So the spec we have now is the bare minimum, and we know that there are convenience things that can make it better. This is one of those things that I think could make it better.
B
For the same instrument, and we already see users expecting this to happen, and this is an example of a feature request that I think comes up because that's how other APIs work. So I think we might need to actually answer this, and kind of that underlying issue of callback instruments and multiple registrations of callbacks, where some of the callbacks are tied to specific attributes and some are tied to other attributes.
B
So anyway, I guess I don't want to expand this, but I think the particular API that's demonstrated here is nice for the simple use case. I'd love to be able to support it, but if we do support it, it means that we're going to get to where we're going to even promote having multiple registrations of callbacks per metric, or per instrument name, because of those tied attributes, possibly, right? So I think...
F
Definitely, but the previous question that you had, which was about multiple registrations, with the same callback, with different callbacks, or multiple registrations for the same instrument: I think that's a very important one that we should tackle right now, because it's going to happen. And probably we can return an error, or we can accept that and combine the results.
B
I'll attach the Java-specific issue so everyone here can see it, but that is one of the... I consider this almost a bug in the spec, as opposed to a feature, but we could also consider it a feature. I don't know how we're all feeling, but let me link the issue and then you can take a look. But I think it's highly related.
B
For synchronous instruments we allow calling the same constructor, and you'll get the same instrument back. So if I call it with the same constructor, I get the same instrument back and I can send metrics to it; I can get access through, you know, "get this instrument". So I think we should allow the same for async.
B
Okay, so basically what that looks like is an incompatible metric description for it. And, again, it depends on what your view configuration is, because you can actually fix it with the view API; but as long as there's an overlap with an existing metric that's already been registered, whoever registers second logs an error, and that second thing basically gets a no-op metric instrument to report metrics against, and there's an error in your logs.
F
Perfect, I see, okay, yeah. I think... I think this...
C
As long as they're reporting separate attributes, it works. The reason I care about this, though, is that in the data model I think there's trouble ahead if you allow multiple callbacks to report a value from asynchronous instruments with the same attributes, because you end up trying to explain something that shouldn't really happen in an asynchronous context: you can't have two observations at the same time.
B
My suggestion is that we allow multiple registrations of callbacks, and we explicitly handle, in the async callback functionality, that case of overriding a particular attribute value, and that turns into a logged error and a dropped measurement, as opposed to today, where it's just considered a bug if that happens. You can do it even from the same callback and actually cause issues with yourself; it's a known bug in Java.
C
No, but we don't take a strong stance on it, and I advocated that we did when this was discussed originally. In the Go SDK I actually have a map built to ensure that you can't set a value more than once in an asynchronous callback, and it's a little extra overhead to do it that way. But I don't consider it a bug; I consider it an accident. And I was just going to specify last-value-wins, just to make sure that you can only set one value per attribute set.
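(An illustrative sketch, with hypothetical names, of the guard being described: at most one observation per attribute set per callback round, last value wins with a logged warning. The logged-error-and-dropped-measurement variant suggested above would differ only in which value is kept.)

    import java.util.HashMap;
    import java.util.Map;
    import java.util.logging.Logger;

    // One asynchronous collection round: duplicate observations for the
    // same attribute set are collapsed, last value wins with a warning.
    final class ObservationRound {
      private static final Logger log = Logger.getLogger("ObservationRound");
      private final Map<String, Double> observed = new HashMap<>();

      void observe(String attributeSetKey, double value) {
        Double prior = observed.put(attributeSetKey, value);
        if (prior != null) {
          log.warning("duplicate observation for " + attributeSetKey
              + "; keeping last value " + value);
        }
      }

      Map<String, Double> finish() {
        return observed; // exactly one value per attribute set
      }
    }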
F
But I think, Josh, I'm worried that this may cause deeper performance issues. One of the reasons why we want people to change their mindset of registering a callback for each combination of attributes is the fact that we want that callback to be called only once. So in the memory case, we want you to grab the memory and get all the values for all the attributes.
B
I think it makes sense to encourage people to do that. The question is: what should we do if they don't?
F
I can be the bad guy and say we should throw an exception in this case, or we should not allow this, so that they would think about doing this. Or we can be nice and say we're going to accept this, and maybe log a warning for you that you shouldn't do this, because it's probably a problem; it's probably going to be extra overhead that you're going to have to eat because of this. So...
B
If this decision is fragmented across instrumentation code, so if somebody writes instrumentation for one library, someone else writes instrumentation for another, and they happen to use the same meter, which shouldn't happen but does, then you end up with this conflict, and you end up with actually two different pieces of code owned by two different people. In those cases, yeah, you can throw an exception, but that's kind of... it's not necessarily user-friendly when you're in that context of "I'm pulling in instrumentation from two different people".
F
You just mentioned one example: let's assume two people report the same metric, and they're going to have a conflict in the attributes; half of the measurements, or half of the result, will come from one of the metrics and the other half will come from the other one, but the overlaps will come from only one of them. So it will be a mix, and people will not know how to interpret that. It may be worse than not reporting at all and logging an error.
C
I have a data-model reason for wanting to say something about this case, and it's that I think, when you have these asynchronous instruments, one of the use cases is this thing that we've called a gauge histogram, or gauge distribution. It's that I have a number of attribute sets which I'm going to report values for, and they form a set, and the sum of the items in the set tells me the total, and therefore I can divide each contribution by the total into a ratio.
C
I want the ratio to be well defined, and as soon as you allow multiple values for each attribute set, the ratio becomes ill-defined, and you end up adding to the sum something you don't add a distinct count for. So I want to make sure that the contribution to the sum and the contribution to the distinct count are the same, if that makes some sense.
F
It makes a lot of sense. Any kind of aggregation that you do, even last-value, in the unexpected situation that was just pointed out will, I think, lead you to wrong results.
C
In other words, I'm going to report some attribute sets, attribute values, in my callback. They all logically are being reported at the same moment in time, again to support that idea of a ratio. But different callback executions don't need to logically happen at the same time, to me, because they are separate callback executions. So I don't, perhaps, feel that I need to call two executions of two separate callbacks the same logical moment in time; therefore the two observations can co-exist.
E
So, a couple of thoughts. One: isn't the idea of registering multiple callbacks just syntactic sugar for what's possible today? Can't you just register a single callback that invokes multiple callbacks? So that's one thought. And then the other piece is the idea of multiple pieces of instrumentation registering callbacks for the same...
E
Like
meter,
I
don't
think
there's
any
good
scenario
that
that
can
come
out
of
that,
like
I
don't
think,
there's
a
good
way
to
handle
that
so
aren't
you
best
off
just
alerting
like
failing
fast,
like
alerting
the
the
user
as
soon
as
possible,
so
that
they
can
configure
their
view
api
or
their
sdk
in
a
way
to
avoid
that
in
the
first
place,.
B
What I'm suggesting is: if we detect a conflict in what's being reported between those two pieces of instrumentation, then we fail, but if there are no conflicts, we can let it go through. But I'm not suggesting that that's the approach we take. We should also take the approach that each instrumentation library should have its own independent meter.
B
That's kind of in the design. So we're kind of talking about a failure case of a failure case of a failure case, and what is best for the user, right? Me, as someone who's pulling in a library: what should I be able to do, and what should I be told? I agree with you: you should fail fast, you should log quickly; I'm all on board for that. It's more...
B
If somebody registers, specifically in this Java example, someone registered an async instrument themselves twice, and they expected it to work, right? That tells me that, on that meter, we have a user-expectation problem we have to solve in some fashion. And the Java implementation today doesn't have provenance and error messages done well; that's one of the TODOs on the implementation, so the bug is made more difficult by the fact that...
B
The
error
message
is
junk,
so
that
will
get
fixed,
but
even
so
what
I'm
suggesting
is
there's
an
expectation
that
this
should
work,
and
I
think
we
need
to
address
that
expectation.
E
That, you know, a composite callback, where one callback actually invokes other callbacks... Maybe there's a solution in there to solve the problem that, Bogdan, you were discussing, I think it was you, where the observations can be made at different points in time. So, you know, if you have a single callback, you can find a way to have them all observed at the same time, rather than separately.
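(The syntactic-sugar idea as an illustrative Java sketch, assuming the opentelemetry-java callback shape: one SDK-registered callback fans out to several user callbacks, so all observations share one logical moment. The class and instrument names are hypothetical.)

    import io.opentelemetry.api.metrics.Meter;
    import io.opentelemetry.api.metrics.ObservableDoubleMeasurement;

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // One SDK-registered callback that invokes many user callbacks, so
    // every observation happens in the same collection pass.
    final class CompositeCallback implements Consumer<ObservableDoubleMeasurement> {
      private final List<Consumer<ObservableDoubleMeasurement>> callbacks =
          new CopyOnWriteArrayList<>();

      void add(Consumer<ObservableDoubleMeasurement> callback) {
        callbacks.add(callback);
      }

      @Override
      public void accept(ObservableDoubleMeasurement measurement) {
        for (Consumer<ObservableDoubleMeasurement> callback : callbacks) {
          callback.accept(measurement);
        }
      }

      static void register(Meter meter, CompositeCallback composite) {
        // Only this one callback is ever handed to the SDK.
        meter.gaugeBuilder("example.gauge").buildWithCallback(composite);
      }
    }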
E
In terms of, you know, what to do in the situation where something's done by mistake, my inclination is just to, you know, log that as soon as possible and count on the user to fix it. And whether that be last-registration-wins or first-registration-wins, plus a log message, I don't think it's super important, just as long as you inform the user as soon as possible, because I don't think there's any good scenario that can come out of that.
F
That was my point as well, but Josh has a good point that there is a user expectation of doing this. So how do we resolve that? Better documentation, Josh? Another option that came to my mind: most likely we expect people to run unit tests, or to write unit tests.
F
Can we make this an exception in the testing SDK that we provide, or that we encourage people to use for unit tests?
F
In the official SDK we log an error when something bad happens, okay. So now, in unit tests, most likely people will have to use a testing meter provider that, let's say...
F
...is based on the SDK, or whatever we want. But that meter provider's behavior should be to crash, throw an exception, signal, panic, whatever we want, if any of these conditions that we just log in production occur. We actually do something similar with the panic, or the error, or... you know what I'm talking about.
B
Yes, yeah. No, I totally get what you're saying now, and yes, I do think that would be worth calling out as a feature that we provide for SDKs. So this would be: while the SDK itself wants to be rather flexible and continue to work even in the presence of, like, bad config or this kind of an error, right, we want to do our best to get telemetry out the door, in a unit-testing or development scenario...
B
...we should have a strict SDK test implementation that tries to detect common failures and common issues, and gives you warnings if your code is not structured appropriately. This would be things like, you know: if you are reporting metrics and they don't align with the semantic conventions, we could issue an error saying "hey, you're missing these attributes" or "you're missing this thing", right? I totally agree that that would be super useful, and that's what you're suggesting, right?
B
I feel like that can be added on as a follow-on, and I would personally want to tie that to instrumentation efforts, or tie it to semantic conventions, as I suggested, like making sure that the telemetry we produce is high quality.
B
So
I
think
we
should
put
it
on
the
table
as
something
we
do.
It's
more
a
matter
of
timing,
and
when
we
do
it,
I
still
think
we
need
to
answer
the
underlying
question
here.
F
It also may help because, when users instrument, they're going to unit test their code, and if you throw an exception telling them "hey, you cannot register multiple callbacks for the same instrument, because this is our decision" and stuff like that, then people may not necessarily immediately understand why we do this, but they're going to solve the issue and not have an unexpected result in production.
F
So, my point being: even though we keep a decision that is not common for current users, if it crashes in tests, that may help them get clarification, or to catch it faster.
E
Two quick thoughts. One: some of these scenarios may only manifest at run time, so not at test time; you might only observe the same attribute set, you know, when you're running in a certain scenario in production. Another thought, as an alternative to that, or maybe in conjunction with that: could we have some sort of semantic convention around, you know, reporting these types of errors as metrics? So that we're emitting metrics, or some sort of telemetry data, about when a particular application's runtime has these error conditions.
B
So, unfortunately, I have to drop because we're almost out of time, so I can't follow up with the next discussion point here; I'm just going to mention it briefly before I have to drop, apologies. In Java we're using bound instruments; there's this notion of binding to attributes. What that does is it allocates memory for a particular metric stream and prevents that memory from being deallocated until the instrument is unbound.
B
The current specification has something around optimal attribute passing; however, in Java, bound instruments are more about trying to avoid allocations during the hot path of synchronous metric collection. I just want to raise this as a "do we want to officially say anything in the specification around that?". There are concerns that the current specification isn't strong enough for Java to keep this API, but it is something that people are interested in, that our users are interested in preserving. In any case, I just want to get people thinking about that.
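(A rough re-creation of the bound-instrument idea with hypothetical names; the experimental Java API exposed a similar bind/unbind pair, but this is a sketch of the pattern, not the actual API. Binding resolves the attribute set once, outside the hot path, and pins the stream's storage until it is unbound.)

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    // Hypothetical counter demonstrating the bind/unbind pattern.
    final class SimpleCounter {
      private final ConcurrentHashMap<String, LongAdder> streams = new ConcurrentHashMap<>();

      // Resolve the attribute set once, outside the hot path. The
      // returned handle refers directly to the stream's storage.
      Bound bind(String attributeSetKey) {
        return new Bound(streams.computeIfAbsent(attributeSetKey, k -> new LongAdder()));
      }

      static final class Bound {
        private final LongAdder adder;
        Bound(LongAdder adder) { this.adder = adder; }

        // Hot path: no attribute hashing or allocation per call.
        void add(long value) { adder.add(value); }

        // In a real SDK, unbind() would release the pin so the stream's
        // memory could eventually be reclaimed.
        void unbind() {}
      }
    }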
F
I think, if you think about that, think about how you're going to manage that, combining it with the, with the...
H
Extracting the attributes from baggage; I had issues with that.
A
The outcome might be that eventually we'll have bound instruments implemented in all languages, but they all look different, which is probably fine, because it's for extreme optimization. And I think even if we try to spec that out, we might eventually realize, oh, different languages have their own way. Or we're saying the spec wouldn't allow that, and Java has to release the bound one as an experimental thing instead of as stable, which might be an issue, because if people need it they probably would expect everything to be stable; otherwise, why would they push for it?
A
So I'm more inclined towards giving that flexibility, something in the API that, in this case, says: if people have a hot path, and they know the combination of the dimensions ahead of time, whether it's a full combination or a partial combination, they can do some optimization to avoid providing that every time.
D
So basically the point there is that if you have something that can provide you the value of the gauge, you don't need to wrap it; you can just pass it to the gauge, and that's all. So it is like a simple version of it, because you don't need the boilerplate of wrapping this into a measurement and providing the attributes as well.