From YouTube: 2021-10-07 meeting
A
Okay, we can start. So for the SDK feature freeze, we don't have a lot of items remaining, and I'm trying to see if we can finish this by mid-October. The PRs are listed out, and it seems like some of them got stuck, so the first one I want to talk about is the min/max one. I see Bogdan blocked the PR; it seems he's not concerned with the direction, he's concerned with how it is phrased.

I wonder if Josh or other folks have time to look at this PR. So Jack, would you help me summarize what the disagreement is here?
B
Yeah, so he hasn't replied to my responses to his comments yet. Essentially, one of the bits of feedback was that when you're configuring what I'm calling the degenerate histogram, I say there's a field you include called monotonic, and that's either true or false.
C
Yeah, that's a good point. We do have an issue right now where OpenTelemetry allows histograms to be recorded with negative measurements. Prometheus does not, and we have no way to know whether or not we allowed negative measurements. So what we tried to do in the SDK was prevent negative measurements in histograms initially, so that every histogram we export is Prometheus-compatible. I don't really like that solution at all, and we're trying to fix it. So there's a tangle of issues here.

There's Bogdan's issue with the actual wording, and then there's the concern around the min/max proposal of using optional things that also got bogged down with this. So we need to make progress here. When you use the terminology "monotonic" and it doesn't involve a sum, you're always going to get feedback, so the phrasing we chose in the past was "allows negative measurements", which is much more verbose.
C
So I don't know if that helps with this, but let me take a look at what Bogdan's complaining about here. Is this a different min/max pull request than the one we had before?
B
It is a different one. The first one was to the data model; this one is to the SDK. I'm just adding an additional aggregation to the SDK. There's a fixed-bucket histogram aggregation, which is the default, and this one makes it possible to choose what I propose we call a degenerate histogram, which is a histogram with zero buckets. In specking out that new histogram aggregation, I just cherry-picked language from the existing explicit bucket histogram aggregation.
C
Yeah, I gotcha. And if we need to update the language to make it consistent between the two, that's fine. What I would do to resolve this initially is just explicitly state that if you wire up this aggregation to an up-down counter, that's considered an error, and then we're fine, because everything's monotonic.
C
So effectively, this aggregator can only aggregate against counter or histogram instruments; it can't be wired against the up-down counter instrument. Then we don't have to deal with monotonic at all, because all the values that we record have to be positive.
B
But I just don't get why, if you're allowing negative values in the explicit bucket histogram, because that's what the language currently states, you'd want to deviate from that for other histogram aggregations.
C
So basically, for the initial version of metrics, we were not going to allow negative measurements to hit explicit bucket histograms out of an SDK, just to get around all the stupid issues and resolve that optional problem we have in the data model. It's just a workaround; it's not a long-term plan.
C
Yeah, I think we need to be consistent here, and I think we have to make a decision. I want OpenTelemetry to allow negative values, so I just want to put that out there to begin with. But to get this initial version out, because we have all that complexity of getting the optional sum crap into the histogram in the data model, I don't want that to block us from making progress on getting something that works out the door.

So the effect of disabling negative values is literally just not allowing up-down counters to have histogram aggregation. That's it.
B
Okay, so then I agree; I don't have any problem with that. Riley, will you be okay, then, if I expand the scope of this PR to remove this monotonic flag from explicit bucket histogram and include the qualification that Josh mentioned, which is that histograms cannot be used as an aggregation on up-down counters? For up-down sums? Yeah. Okay.
C
In fact, you can't. The monotonic flag is inherent in the instrument; it shouldn't actually be on the aggregator either. I think we had that discussion on the initial PR, and I thought we'd removed it from explicit bucket histogram.

That got through because the notion of whether or not the sum is monotonic is inherent in the measurements that it gets, right? So if we have a histogram and it's on a histogram instrument, it is monotonic. If it's against a counter, it is monotonic. If it's on an up-down counter, then it is non-monotonic, and it's implicit in that. We don't even expose it in Java as a thing; there's a bug open in Java about having that discussion. I think we talked about it.
C
I thought there was a PR to remove it, but maybe it fell through there. I know we had that discussion before, but it should not be a configurable parameter on histogram or the explicit bucket. I think we want to remove it, just because it's inherent in the data, not the aggregator.
C
Okay, yeah. And feel free, if I'm not responsive, to ping me on Slack and I'll comment once you pull that stuff up, if you need more voices.
B
There was one more comment that Bogdan made, about the count value on a histogram. For the explicit bucket histogram, count only includes measurements that are within the buckets, and that's because the sum of the counts in all the buckets has to equal the overall count. So any measurement that's outside the buckets doesn't contribute to the count.

So for this degenerate case, where there are zero buckets at all, I have slightly different language: I say that the count is the entire population, not just values that are within the buckets, because there are no buckets, so that's nonsensical.
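As a rough illustration of the zero-bucket case being discussed (the class and method names are illustrative, not any SDK's actual API), the count simply covers every recorded measurement:

```python
class ZeroBucketHistogram:
    """Sketch of the 'degenerate' (zero-bucket) histogram under
    discussion: no bucket counts at all, just a running count and
    sum over the entire population of measurements."""

    def __init__(self) -> None:
        self.count = 0
        self.total = 0.0

    def record(self, value: float) -> None:
        # With zero buckets there is no "outside the buckets":
        # every measurement contributes to count.
        self.count += 1
        self.total += value


h = ZeroBucketHistogram()
for v in (1.0, 2.5, 6.5):
    h.record(v)
print(h.count, h.total)  # 3 10.0
```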
B
Honestly, an alternative is to have a single-bucket histogram instead of zero buckets, whose range is negative infinity to positive infinity.
C
Yeah, if you create an explicit bucket histogram that has zero boundaries defined, that is what you get: a single-bucket histogram from negative infinity to positive infinity, although it's weirdly inclusive and exclusive; I forget what the rules are there. So you do get that, and then the counts would be the same. So I think what I would phrase there under count is basically: if there are no explicit buckets defined, it is okay to omit count and have it be the same.
C
Yeah, that would actually be good for exemplar sampling too, because exemplar sampling on explicit bucket histogram actually will...
B
So the primary idea of this is to be a lightweight alternative: you don't have buckets, you don't have the heavyweight-ness of that. So to the extent that they can work with exemplars, I think that they should. Anything that I can do to this spec to give it better compatibility with exemplars, I think, is a good thing.
C
Okay, I guess what I'm suggesting, though, is that buckets and exemplars are tied for explicit histogram. For sums they're not tied; you could have more than one exemplar for a sum aggregator.
C
If you wanted to. From a practical standpoint, the exemplars you want out of histograms are like: I want an exemplar trace that correlates to a 10-second latency, right? So for your degenerate histogram, you might want to think about an exemplar sampler that tries to pull interesting exemplars, instead of just a single exemplar for the whole thing. But maybe you just want it to be lightweight, and exemplars don't matter in that case.
C
By default, yeah. One of the reasons we specified the exemplar reservoir interface is that you can write your own and do your own thing if you needed to, but by default, yeah, there's one exemplar per bucket.
B
Mm-hmm, yeah. I think the exemplar behavior then is closer to what you would want for a sum aggregation, where, like you suggested, you pick a number of interesting exemplars and don't limit it to one.
C
Yeah, I can make comments on your pull request to that effect, about one reason why this should be a different aggregation rather than just saying "configure explicit bucket with zero buckets", because that might be a good reason to do that.
A
So for the next one, feature metrics: I think the intention of the PR is very good. There are just too many changes, there's some comment about the wording, and also the PR is marked as a draft, I believe. So I wonder if the owner of the PR, Diego, can make it a real PR instead of a draft. We know there are some suggestions, but I think we should merge it and use it as a foundation.

We can polish the wording and remove some of the things that we don't eventually want to make a requirement at this stage, in order to make progress, because there are a lot of changes; we could sit here and debate it for another month, and I suggest that we don't do that.
D
Yeah, I agree. I think that fell through the cracks. I had advised him to remove the exemplars section, because it seemed to create some trouble, and just merge what he has. I think we somehow let that slip. I can ask him to do that.
A
This is an interesting topic. Josh, the issue is assigned to you; I want to know if you have energy to work on it. I do see some questions around overflow, whether it's integer overflow or double overflow, and people ask if the spec is going to do something about that.
A
I think those are two questions; they're not the same, but they're very related. So I'll explain what I heard. If you have a double, imagine people reporting a double value and they keep adding one. At some point the value is so big that when you add one, you actually round-trip to the same number. So no matter what you do, you keep getting the same number, and then you're screwed. I think my answer to this is that it's not something specific to OpenTelemetry.
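The saturation effect described here is easy to reproduce; it is a property of IEEE-754 doubles, not anything OpenTelemetry-specific:

```python
# Above 2**53, consecutive integers are no longer representable as
# float64, so incrementing a large double "round-trips" to itself.
big = 2.0 ** 53
print(big + 1.0 == big)  # True: adding one no longer changes the sum
print(big + 2.0 == big)  # False: a step of 2 is still representable
```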
A
Yeah, so integer is very precise: it will never have the round-trip problem, but it does overflow, because the dynamic range is very small compared to double. Then, Josh, you mentioned the next topic. So for integer, if people report something and we keep adding it up, doing the sum, and we have overflow, do we expect the spec to require something that we should check, or are we saying...
A
...if overflow happens, the language runtime or the SDK owner can do whatever: they can wrap around to a negative value, or they can do whatever they want. I don't know. My bar is: if we required a check, all the SDKs would have to reserve something, like the maximum value allowed for the type being reserved for anything at that exact value or anything greater.
D
Yeah, I think I agree. I just had to write something very similar about the exponential histogram (at least it's still pending), trying to understand what to do with values that are basically unrepresentable. I'm talking about boundaries at this point, in the floating-point sense, but the same issues come up if you're computing sums: eventually you're going to overflow. Should the producer have to check that? I think my standard policy is that producers shouldn't go to trouble for telemetry consumers.
D
If it's a monotonic counter, you're going to see it. That's something you can test for in an expensive path when you're consuming your metrics, but you shouldn't have to test for it on the write path. It's the same with the exponential histogram in this draft and the current PRs: you don't have to test for valid bucket indexes on the producer side, and it's expected that you will run into a case where you have a partially representable bucket and just let the consumer deal with it.
A
I agree with you. So that brings the question to NaN and infinity. For example, if people use the asynchronous gauge, they report a number, and I believe they can report infinity, but should we support that if they turn it into something that can be aggregated, like an observable counter, so they want to turn it into a sum?

And with NaN. So, two scenarios. One is they use an observable gauge; they report one, two, and then report infinity, and they're saying in the view: I want to treat this as a delta sum.
A
I know this is a delta sum; the instrumentation owner did the wrong thing, and I want to see the increments, so I report to statsd. The first one, from one to two, should be one; and then from two to infinity, I don't know. In this way we can probably define it as infinite as well, but do we want to handle that in the spec at all?
C
So I want to ask: are we limiting this to what the SDK produces in the spec, or are you trying to talk about the data model itself as well?
C
Yes, I think we should, to the extent that we can get consensus from everybody on how to handle these values. We should have it specified, and I can write something up along those lines; that's fine. I totally agree with what Josh was saying as well. It sounds like we have agreement there.
D
I ran into this question also in my prototype exponential histogram aggregator. When you're mapping values from floating point into histogram buckets, you end up having to handle the NaN case and the infinity case, and the zero case and negative values. I mean, I'm handling negative explicitly, I'm handling zero explicitly.
D
That leaves the two values, inf and NaN, that I basically have to handle, and in my prototype I basically wrote comments saying I expect the API is not going to pass these as far as an aggregator, because there's no way for me to aggregate that number. I don't know how to draw the line, though, because, as you point out, Riley, a gauge being set to inf or NaN is reasonable in enough ways that I'm supportive of it.
A
You should still follow the error handling and not crash the application or cause side effects. But whether you still report a reasonable value, or, because the customers calling the API gave you invalid values, you give them something so the consumer would suffer, I think the SDK has the freedom to decide.
A
If I got an infinity, I don't know how to add that, so I'll just drop it on the floor. Or there might be other scenarios saying: if you add infinity, then the outcome, the sum, should be infinite, so whatever you do in this reporting period, it will be infinite anyway. Or you can say: I don't know what to do, so I'll generate some side effect and log some error. I think either way is fine.
C
Yeah, I also think, for data model purposes (I think we already call this out, but I could be wrong), that NaNs are propagated, specifically for Prometheus compatibility and Prometheus usages. But I don't think that affects the SDK at all, because we can define the SDK completely separately. So I think this was assigned to me because of the Prometheus NaNs, and that was what I was going to comment on; sorry, I dropped that.
C
But I agree with you: let's leave this out of the SDK, then, and explicitly specify that we should not crash. My fear, though, is: how expensive is NaN checking on every single measurement?
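The per-measurement check in question is a single finiteness test; a minimal sketch (the helper name is illustrative, not from any SDK):

```python
import math


def is_recordable(value: float) -> bool:
    """Drop NaN and +/-inf before they reach an aggregator."""
    return math.isfinite(value)


print(is_recordable(1.5))           # True
print(is_recordable(float("nan")))  # False
print(is_recordable(float("inf")))  # False
```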
C
If you have time, that would be ideal. There are a few other things I'm working on that you'll see in the rest of the agenda.
A
And this is actually another issue I have. It seems it hasn't gotten a lot of attention, but I see some problems; that's why I want to raise it. Yuri replied, and I really like his reply. So the problem here is when we use counter.
A
Even though we're saying it is monotonically increasing, I think, to be precise, there's another property we have to mention: it is always positive, because with counter you can only add a non-negative value. Which means, if you have something that starts from negative and you want to add to it, and you're using synchronous, we don't have such an API.
A
So I want to get some opinions here and see if my understanding is aligned with yours. For the synchronous version of the counter, we allow monotonically increasing, non-negative values, and we give a delta value; the result will always be non-negative. But the asynchronous version is a little bit different. It is almost the same as counter, but the difference is we require people to report the absolute value, and that absolute value can be negative.
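A minimal sketch of the distinction being drawn (the class and function names are illustrative, not any SDK's API):

```python
class SyncCounter:
    """Synchronous counter: callers pass deltas, and the API only
    accepts non-negative ones, so the sum can never go negative."""

    def __init__(self) -> None:
        self.total = 0.0

    def add(self, delta: float) -> None:
        if delta < 0:
            raise ValueError("counter deltas must be non-negative")
        self.total += delta


def async_counter_delta(reported_absolute: float, previous: float) -> float:
    """Asynchronous counter: the callback reports an absolute value,
    and the SDK derives the delta by subtraction, so a reported value
    below the previous one would yield a negative delta."""
    return reported_absolute - previous


c = SyncCounter()
c.add(2.0)
c.add(3.0)
print(c.total)                         # 5.0
print(async_counter_delta(10.0, 4.0))  # 6.0
```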
D
I totally support this. I'm certain that this has been discussed between me and Bogdan in ancient history. I'm also certain that at some point I tried to write down exactly something about this as well. I think it's sort of implied by the data model when you talk about the behavior of sums, essentially. So I support you writing something saying that the logical value of a metric that's never been set is zero, and therefore monotonic asynchronous measurements must be non-negative.
A
I see. So I have two asks. Number one: there's an issue trying to clarify that counter is monotonic. When I tried to do that job, I figured we should clarify that it's not only monotonic, it is also non-negative.
A
Make sure... you already commented; I want to receive more comments and see whether we should move some of these things to the spec. So here I try to explain the counter part, which is always non-negative, and I do have a question about the asynchronous counter.
I couldn't find a good example where an asynchronous counter could start from a negative value. I remember, Josh MacDonald, you mentioned the solar power example or something. So it might be helpful if you could comment there and give some concrete example.
A
Yeah, okay. So this is not urgent, because we don't need this PR to be merged for the feature freeze of the SDK; we only need it before we ship stable.
A
Okay, so that's the update, and we have six remaining issues. I'm going to take one from Josh: moving forward, can we get a quick update on where each language SIG is, starting from Java?
C
I can take this unless John wants to; I think John's on the call. "It's all yours, I'm just lurking." Okay. It's release week.
C
It might be missing some specific details and components, which need to get fed back, and my plan is to go through the feature matrix and update it, probably tomorrow, depending on how my talk prep goes for next week. The important thing here is you're going to get metric readers: you can have multiple exporters, which is new for Java.
C
It
has
exemplars
wired
all
the
way
through,
including
the
difference
between
sums
and
explicit
bucket
exemplars.
It
has
all
the
aggregators
defined
that
are
in
the
spec
wired
through
what
else
did
we
have
to
add?
I
think
metric
readers
is
the
big
one.
C
C
There might be a few missing things that have gotten added over the past week, but for the most part, you should be able to try it out and see what the stuff we specified feels like in practice, as of this release.
A
Okay, great. I wonder if Sigil is here? No? I can cover that, okay. So for dotnet, we've added the view, and in the view we allow people to drop something or rename something, and we also support wildcards. Earlier during the conversation, the dotnet runtime team had some concern about the up-down counter, so they didn't add the synchronous and asynchronous versions of up-down counter; we're trying to collect feedback and see if there is a sufficient use case.
A
We
should
plan
that
for
the
next
donut
release,
so
that's
update
and
dotnet
will
release
a
beta
version.
Once
the
the
view
feature
is
complete
and
for
the
exemplar
we
haven't
started,
but
right
after
beethoven
will
do
the
example
work
there's
a
one.
One
thing
that
we
didn't
do
today
in
donet,
which
I
I
hope
we
can
add
very
soon.
So
we
don't
allow
individual
view
to
specify
the
aggregation
temporarily.
A
A
We definitely will do it, but I want to get some ideas here. Do you see a real-world scenario where people want to mix them up? So that when you send to a dedicated exporter, you have some streams cumulative and some streams delta for a real reason, instead of just for amusement.
B
So, one example that comes to mind is: you want to export metrics over OTLP to a collector, but then you also want to expose a Prometheus server so you can inspect the current state of your system at a glance. Is what I'm talking about in line with what you're saying, or are you saying something different? Okay.
A
We allow that. For example, you can have three readers, and the readers can be configured with some exporters: you send to OTLP, send to Prometheus, and you dump things to a console. For the console you can say everything should be delta; for Prometheus everything is cumulative; and for OTLP you have to pick whether it's cumulative or delta.
A
What we don't have (and I heard that Java has it today) is that for OTLP you can specify: hey, I have instruments A, B, and C, and A and B will use cumulative while C will use delta. I think we can do that; it's not hard, and we already have the infrastructure. But I want to know: do we need that flexibility, or is this something we'd probably put there and no one would use in production? So I want to see if I'm missing something, in order to have a better understanding of the usage.
A
I see. Yeah, okay, cool, thank you. Because in dotnet we kind of did some tricks, so the exporter can explain what its preference is and what's supported. The exporter can say: hey, I support both, but I would prefer cumulative; or, I support both and I don't have any preference, so whatever you have, I'll just export it. And then the reader will understand the preference and send that back to the provider.
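The preference negotiation described here can be sketched roughly like this (the enum and function names are illustrative, not the actual .NET API):

```python
from enum import Enum


class Temporality(Enum):
    DELTA = "delta"
    CUMULATIVE = "cumulative"


class Exporter:
    """Illustrative exporter that advertises the temporalities it
    supports and, optionally, the one it prefers."""

    def __init__(self, supported, preferred=None):
        self.supported = set(supported)
        self.preferred = preferred


def choose_temporality(exporter, requested):
    """Reader-side choice: honor the exporter's preference if it has
    one; otherwise keep the requested temporality when supported."""
    if exporter.preferred is not None:
        return exporter.preferred
    if requested in exporter.supported:
        return requested
    return next(iter(exporter.supported))


prometheus_like = Exporter({Temporality.CUMULATIVE},
                           preferred=Temporality.CUMULATIVE)
flexible = Exporter({Temporality.DELTA, Temporality.CUMULATIVE})
print(choose_temporality(prometheus_like, Temporality.DELTA))  # Temporality.CUMULATIVE
print(choose_temporality(flexible, Temporality.DELTA))         # Temporality.DELTA
```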
C
Yeah, one thing I want to comment on, if we're going to be magical for users (I don't want to beat a dead horse): async instruments and delta metrics. The only case where I can see us having both delta and cumulative would be if somebody really wants delta metrics, but they also have async instruments.
D
I have this configuration exactly running in the OTel Go SDK, using it with Lightstep, which supports delta. So I have it configured, and there's a PR I have out to change some names and so on, but it's called "export kind" in the current code, which probably should be called "aggregation temporality". There's an aggregation temporality selector; you can provide a selector when you configure the SDK, and I have two options that are kind of meaningful.
C
All right, I think we need to open an issue and get this fixed before we finish feature freeze, because if you go re-read the specification around temporality, we haven't explicitly defined the preferred way for users to configure temporality other than the view API, which is why Java focused on that. If you look at the specification for how users configure temporality, the only thing users can touch right now is the view API. So, to your point: yes.
C
I'm happy to update the Java prototype to get that in there, because it's not there now, and it'll be a little bit awkward to fix right now. But I think it's worth it, because I think it's a significant user improvement and a significant simplification for users of the SDK.
C
Yeah, and so, as I understand it, Go has this notion of default temporality on exporters, and .NET has the notion of default temporality on exporters. Does Python also have this in their prototype, or is Java the only one that doesn't have it? Because if everybody's already implemented it, I think it just makes sense to specify it.
F
Sorry, what's the question for Python, Josh?
C
Do you choose the aggregation temporality for your aggregators based on the exporter that's attached to the SDK?
F
Yeah, well, we could follow that path. It has not yet been defined that we're going to do so, so maybe we should not consider Python if you want to make a decision about this.
C
Well, I say let's put the PR out for review and see if we get a lot of pushback on it. I'll take a try at that. You'll take a try? Okay, I'm happy to spend some time helping there. I think that's higher priority than some of the other things I'm doing.
A
Okay, Python: Diego?
F
Sure, yes, hello, everybody. So for Python, it's been a bit of a short week. I have been working on this PR that should hopefully get us closer to the feature we need for handling errors on the metrics API.

Aaron has also submitted a PR that will fix the issues we were having with our proxy meter providers, by extending that up to the instruments. That's pretty much what I have to report. For me especially, this is going to be a very short week of only two days, and then I'll be back on full metrics work.
F
Yeah, these comments you had on the error handling, yeah.
A
So if you have some questions about Python specifics, create some issues or start a discussion; I'm happy to join and contribute some ideas. I don't think the error handling items are a general spec issue, so I suggest we leave that as a Python-specific issue, and note it has been collecting some of the error handling mechanisms from the different language teams.
C
Yeah, sorry. So, summary and data model: two open questions to walk through. Basically, Josh MacDonald opened a really great issue on this with the collector, and we specifically did not talk about summary in our data model stability. We said summary is for Prometheus compatibility, that it will abide by what Prometheus does, and that it will transport Prometheus metrics, but we didn't dive further into the specification. So it's actually lacking a lot of details and information that might actually be important. Specifically, Josh is trying to do statsd-based...
C
I forget; they call them timing metrics, but they produce summaries off of them; they call them histograms too, but anyway. So the issue here comes in: Prometheus is a little bit loose with its specification around what summaries are. In OpenTelemetry, for the most part, if we record a start and stop timestamp (and you heard all our debates about min/max and histograms), that start and stop timestamp should account for all the measurements that are included in that metric data point.
C
If you look at Prometheus, and if you look at statsd and the statsd collector that we have in OpenTelemetry, it is actually reasonable for us to create a summary, have it have a start/stop timestamp for the metrics that we observed, and do something there.
So there's an open question of... actually, I should put a question before that one. The first open question is: do we want to support summary for statsd users in OpenTelemetry? Is that something we feel we need for compatibility? That's kind of an open question for Josh, or, if you're on the phone and someone else can answer, that'd be great.
D
In fact, if you do that, at Lightstep it causes horrible problems, because we ingest way too much data. So, the summary option.
The reason I uncovered this as a bug is that we recommended a customer try it, and it just blew up on us again, for all the reasons in the bug. We were taking this route as a workaround for not having a histogram, and I don't really care for it. It's not a good option, and maybe I should...
C
I would like to... that's a totally different discussion, but anyway: I'd like to have a statsd compatibility section in the data model, and I will explicitly specify there that we would convert timing metrics into histograms. Then, additionally, I will go fill out the summary to basically exactly match the Prometheus specification in the data model specification. It'll either exactly match what we have in the protocol buffer, or it'll exactly match what OpenMetrics says about summaries, and that's the go-forward path for summary in OpenTelemetry. Does that sound reasonable to everybody?
D
You could imagine a statsd receiver converting to cumulative, but then again you can imagine a processor in the collector taking a histogram and turning it into a summary, a cumulative one. But I think if you look back at the proto and the detail of OpenMetrics, you will find that the data model already says what we need to say, which is that start time is supposed to reflect the start of the process, because you still have these cumulative metrics that need to have that semantic correct.
D
That's right. Well, I think it lines up with how we treat start time for sums, because of the sum and count fields.
D
One could argue that we could bless this type, add a temporality field, do the same thing we recently discussed with min/max, and allow that quantiles are exact if you're using delta temporality and indeterminate if you're using cumulative temporality. But I don't really think the world cares; if we would do that, I...
C
This means that I will probably be sending the start of a Prometheus compatibility section and a statsd compatibility section that will need some review. Those are two of the things I was starting to work on. For the Prometheus compatibility section, I want to meet with the Prometheus working group a little bit, just from a data model standpoint.
C
It's going to be reverse-engineering what they do in the collector, but I'd like to get that started and mostly finished before we shore up metrics. So anyway, those are two things that should be coming, and that's summary and data model. We have seven minutes, so I'm going to rush a little bit. Next topic: semantic conventions. Our semantic conventions around metrics are direct markdown.
C
They are not in the YAML files, and they are not code-generated in the specification, and so therefore language SIGs cannot code-gen them. I'm looking to see if anyone is willing to make the migration of our existing semantic conventions into YAML and make sure that they code-gen into the specification correctly. That's it, just that limited scope. If anyone has time to do that, that'd be great.
I
Josh, quick question on that. I think the current YAML tooling, the semantic convention gen tool, can't handle the metrics semantic conventions; it doesn't know about instrument names, for instance.
C
Yes, I think there's going to be work on that tool to get this to work. It doesn't know instrument names, but it does know whether a convention is for metrics or not. So yeah, I think there's going to be work on that tool, but I really think someone from this SIG should be working with the authors to try to get it up to date. It really needs someone who knows Python, because the tool is written in Python.
C
That'd be awesome; I'll follow up with you on Slack. Sound good? Yes. By the way, though, this is lower priority than the spec matrix, so whatever we can do to get that spec matrix done comes first, and then we can work on this.
A
Can I just quickly jump in here, about the very first issue that we talked about, with the histogram? I don't know if it's something that just occurred to me as a non-native English speaker, but "degenerate", as in a "degenerate histogram", sounds to me like a word with a little bit of a negative connotation, for me at least. So I'm not sure...
A
...if that's only because I didn't grow up with English as my first language, and it's not something I learned in school as a perfectly normal mathematical term that you use all the time. But I don't know if it's something that people might take offense at; I just thought I'd call it out, because it occurred to me. I'm curious what you think about "trivial histogram"; I like "trivial".
D
I appreciate your remark. I do use the word "degenerate" in one other case, where I do mean it in the way you describe: it's not a nice condition that you want to endorse. It's the case where you have a cumulative time series that resets on every point, and that is the worst kind of correct for a cumulative series. So I call it degenerate because it is correct but incorrect at the same time.
A
Okay, yeah. I really don't want to block the PR, as I see that we're looking to get this in, I think, so...
D
I think the word "trivial" makes a lot of sense, because it avoids that connotation that you're very correct about. Thank you.
A
Yeah, so it seems like most of the folks would prefer "trivial histogram". So maybe on the PR I can reply saying we're debating the name, and then we let people vote. I'm guessing we won't have a lot of participants.
B
Yeah, "trivial" is probably more intuitive. But see, when I heard "degenerate"... I was just going through the internet trying to figure out a good naming convention for this, and I came across "degenerate", and it spurred memories of days of math: a degenerate line is a point, and a degenerate sphere, a sphere with zero radius, is a point.
B
There's
degenerate
triangles
are
ones
where
you
know
two
of
the
sides
equal
the
sum
of
the
third
side,
and
so
you
know
these
are
all
special
cases,
simple
cases
in
math.
So
you
know
I
that
that's
what
caused
me
to
go
with
that.
You
know.
I
didn't
think
that
trivial
really
came
up
as
often
or
you
know.
It
didn't
really
click
with
me
as
well
as
degenerate,
but
I
see
the
points
of
people
here.