From YouTube: 2021-05-20 meeting
A: Okay, so we can start. First, I want to ask, because I'm a little bit blocked here: please review, comment, and either block or approve these two PRs, especially the SDK skeleton PR. I want to set up the skeleton so we can display the work and people can work on individual sections.
A: If you haven't, please help out. The next topic is the schedule change we discussed last week. I reported back to the Tuesday spec meeting, and we haven't seen anyone disagree with this, so people are happy to see if we can move faster, and especially if we can decouple the API and SDK to give extra benefit: people can have high confidence using the API from their stuff without having to worry about breaking changes. Thanks, John, for bringing this up.
A: So here is the current timeline. You can see we tried to bundle the API and SDK together, and I'm thinking, based on the current speed of progress we have, here's my proposal. I want to see if people are okay with the timeline, or if you think we're too aggressive here, or if you think we should be more aggressive here.
A: I mean, if you look at the timeline, we're pushing the API two months ahead of the previous date. So, if, like...
A: Yeah, I think we might be able to push the API, like, three months ahead of the previous date, but I want to know what pros and cons you see. Do you see a high demand for people wanting the API to get stability? Because if we push the date earlier, that gives us less time to get feedback. For example, if you look at the API experimental and feature freeze, we can see we give people a month.
B: Well, at least the way it would work in Java is: the instrumentation group isn't going to pick up the API until it's published. I guess during that time frame, is the idea that you could publish it as experimental/unstable, and instrumentation can start messing around with it?
A: I think before stable we cannot ship anything as a stable release. We should have a package that we tell people is close to stable, like a release candidate. So, unless we made some obvious mistake, you can rely on the fact that we won't intentionally change the API and break you. So at the...
B: End of May, the spec... I'm just trying to understand, from a maintainer perspective, what this is going to mean. So by the end of May, or at the end of May, we can tell maintainers: you should implement the API, with essentially a no-op API that matches this spec, and release an experimental package for it as soon as you can, so people can experiment with it?
A: Yeah, okay. And here I think we expect the languages to release something telling people how safe it is, setting the expectation that this will be fairly stable: unless we notice some bug, we'll do the fix, and that might cause some breaking change, but we expect that to be very rare. Otherwise people should be able to rely on that signature without having to worry about changing their code.
B: By 6/30, at the same time the experimental SDK is done in the spec, at that same date, or, you know, approximately that same date, we would want languages to release a candidate API package.
C: Yeah, I think, looked at from a slightly different perspective: if we were to share this timeline with end users, they're probably going to look at this as the client packages they're going to use. So making clear which are spec dates and which are expected client dates would be helpful to the users. It's also helpful to us as maintainers, so we know what targets to hit, but mainly for communication purposes. Okay.
B: I'm just trying to... I mean, I don't think we should separate this timeline out by language. We should say all the main languages, whatever those are, should have a single date in mind; if some of them are early, that's totally fine, obviously. So I thought it was: by 6/30 we would want the languages to have released a release candidate API. We give one month after that, right?
A: If you look at the tracing part, my previous observation is: it takes time for people to start to fill in, like, a feature or parameter that's missing, and at that time you've got to have people really going through the spec and implementing things in their language SDKs. They will file issues and ask for clarification, and I'm worried if we give less time.
A: And what I think is: after feature freeze, you can have a reasonable level of confidence to tell people, "Hey, instrumentation libraries, you can use this preview version." I won't see a lot of chances of us introducing breaking changes, so most likely you'll be just fine; just wait two months until we release the stable version to you. But, but, John...
A: If you see a specific scenario where people do want to take a 1.0 or 2.0, like a stable release, I want to understand what the main motivation there is, I guess. As long as we have a feature-freeze version, that will help people: you can use it with high confidence that we're not going to ask you to re-instrument.
B: I think that's not good enough, because I think there are libraries (I'm thinking specifically about Spring Sleuth, and Jonathan's on the call right now) that are not going to ship Sleuth with metrics until it is no longer marked as experimental or unstable. It's just a hard line in the sand: they will not ship, which I think is reasonable.
A: I hear you. So my understanding is the scenario mentioned is some libraries outside OpenTelemetry want to take it, and my question is: if they take a dependency on this while they have the SDK as experimental, and then they take the stable version of the API, do they have a migration story or not?
B: Yeah, I think July is too aggressive, but I would hope that we would be able to get enough feedback in the three months between the end of May and the end of August to have gotten things stable by then. And, I mean, I guess, if we want to be honest with ourselves: if we don't think we're going to be able to get that feedback in three months, I think we have a fundamental problem with being able to ever release metrics.
E: Yeah, I mean, I feel very confident in what we have. I'm supporting you, John, but I want to let Riley, you know, take the lead here.
E: I know everyone in the project is a little overloaded with work, so, I think, being realistic, John, this is the right pace. I'm not disappointed, but only because I've been doing it long enough.
B: Oh, this is... I was just hoping that there was some way I could convince the... I mean, I think the technical committee needs to be more involved. I think what this is boiling down to is that we're relying a lot on you, Riley. And Josh, you're on the TC: can we get the TC more involved here? I think the TC has been almost silent on metrics API PRs in the past two months.
E: I mean, I've tried to approve Riley's PRs as fast as I can, but I hear your point, yeah.
E: Right, I think we should be adding approvers and TC members to try and move this project faster, frankly. So I'm open to that suggestion that I just made to myself: if getting more approvers could solve the problem, I think that's the solution.
A: Ninety percent, with some additional polish work there. So how about we do this: if you think what's currently showing here looks good, I'll write a document based on this. I understand it's a little bit aggressive, and, like, with more help...
B: Outside the API: so your date at the end of June, June 30th, "SDK experimental": is that when we would expect languages to start implementing an SDK?
A: I think the languages who have committed to work closely with us can start as early as possible. For the majority of the languages, like Python and Ruby, those that are not heavily involved at this stage, we expect this to be the official signal that they should start to work on this, and that signal is at the end of June. Yeah, okay, which means, as of today, the answer would be Go, Java, and .NET.
A: If you have energy, please work on the experimental version of the SDK and the API. I want you to be able to start today, if not yesterday. But for the other languages we're saying: no, we don't have enough energy, and we think we might change something dramatically, so we don't want all of them to jump in until the end of June.
B: I would say, having been bitten several times by jumping in and working on the SDKs before the spec is done, I'm very, very hesitant to do it again. We have a fully implemented thing in Java that doesn't look like the SDK specification as it's written right now. I mean, you can maybe squint and figure out pieces and parts, but it's certainly not there. So until that's stable, I don't want to go in and start messing with it.
E: I thought the goal that Riley had set was for us to sort of take any SDK, in any shape, and implement the SDK, and as long as the OTLP is correct, it's good enough. From that perspective, for Go, I just need to rename some stuff and remove some stuff and it's done. The hard part is going to be removing the stuff that's not part of the current API.
A: Yeah, I think for things like the accumulator, we should be careful and not introduce that to the SDK spec at this moment. Languages should have freedom, but the ask is: you don't expose this flexibility to the user. You only expose the minimum necessary components to the user, and later, if we figure out, oh, there's a common pattern, we can add it to the SDK even after it's reached stable. An example I can give you: if you think about the tracing spec, we have the processor.
A: Technically, you can see we don't even need to have the exporter concept: as long as you have the processor, you can do whatever you want. If you want to take the data and push it out somewhere, you can do that, right? So for metrics, I think we should start from that point, and if we figure out, oh, exporter is a very common pattern, we should go and add it. So this is why, in the skeleton of the SDK,
A: you see the basic processor and exporter concepts, but I'm not putting in things like "this is a controller," "this is some coordinator," "this is the aggregator." I'm trying to avoid introducing too many concepts before we have a very solid understanding of whether each is a common thing or not.
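The "processor is the only required extension point, exporter is just one implementation of it" idea can be sketched as follows. This is a hypothetical illustration in plain Python; the class names are invented for this sketch and are not the OpenTelemetry SDK API.

```python
class MetricProcessor:
    """Receives each finished batch of metric points; free to do anything with it."""
    def process(self, batch):
        raise NotImplementedError

class InMemoryExporter(MetricProcessor):
    """One possible processor: buffer batches so a caller can export them later."""
    def __init__(self):
        self.batches = []
    def process(self, batch):
        self.batches.append(batch)

# An "exporter" is then just one processor among many; the SDK skeleton
# only needs to know about the processor interface.
exporter = InMemoryExporter()
exporter.process([("http.server.requests", 3)])
```

The point of the sketch: nothing above forces an exporter concept into the spec; if exporting turns out to be the common pattern, it can be layered on top of the processor later without breaking anything.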
A: Does that make sense? So, for example, we're saying there's an engine bay, and you can put in an electric engine, or gasoline, or diesel; we don't care, but as long as you have an engine, that should work for the car. Later you might figure out, oh, 90% of the cars are using spark plugs, so let's invent the spark plug and make that a spec. That wouldn't ruin the existing thing: you wouldn't say that by introducing a spark plug we're going to refactor the engine bay or wipe it out.
E: Have we specced out the minimum functionality for the measurement processor? Do we need to demonstrate how to pull context labels, or attributes, out of the context, for example? Yeah, there's some stuff that we're going to have to do, John, that I didn't do in Go yet. So there will be things in the spec, but I don't think it specifies what your SDK needs to look like.
E: One example I have here is the use of deltas in transport: Prometheus really isn't ready for it. We've specced out how to convert delta into cumulative, but we don't really have a processor to do that in the collector yet, so I don't expect any SDK to produce deltas. But we've experimented with it: I've seen it work, it can be lower memory, and we've got data imported using that functionality. So we had to spec it for the data model.
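The delta-to-cumulative conversion mentioned above is, at its core, a running sum over the delta points of a series. A minimal sketch in plain Python (illustrative only; this is not the Collector processor being discussed, which does not exist yet):

```python
def deltas_to_cumulative(deltas):
    """Turn a series of delta sums into the equivalent cumulative series.

    Each delta point reports the change since the previous point; the
    cumulative series reports the total since the start of the stream.
    """
    total = 0
    cumulative = []
    for d in deltas:
        total += d
        cumulative.append(total)
    return cumulative
```

For example, delta points [5, 3, 0, 7] become the cumulative series [5, 8, 8, 15]. The real conversion also has to track stream identity and handle restarts, which this sketch leaves out.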
E: We believe, and we are very confident, that we can add a delta requirement later, so it's fine to just implement cumulative for now. I think this is fine, but I think everyone should understand that we're implementing a very narrow set of requirements for now, with, you know, any kind of engine or, you know, level of specificity.
A: Oh yeah, yes. Well, so, Jonathan, I'll create another document, because this document is already reviewed and posted on Medium. So I'll create another one and discuss with the spec folks next week to see whether we should have another Medium post or we should just update our timeline on the project tracking on GitHub.
A: Okay, so the heads-up for next week: right after the SDK skeleton PR gets merged, I already have my local code changes for the View, so I'm going to send the PR very soon. Depending on the feedback (I think we've already discussed the View a lot last week, so I don't expect a big surprise, but if there is one, I'll work on that), I'll be able to take the learning from the View and work on the Hint API, which will help us get closer to API stability. So this will be my focus.
A: That means I'll probably spend less time on the exporter and processor part, at least for now, until we have a relatively complete API spec. And once we have this timeline reviewed and hammered out, I'll update the comment and the project tracking. Okay. So, John, it's your turn again.
B: Cool, I got it into the agenda just in time. So I don't know, Josh, if you were reading what I was typing, but I've updated the Java repository proto to the latest version; however, I have not implemented what appears to be this semantic requirement, coming from the data model, about not recording sums in OTLP.
E: I flagged this on the review as well. Riley, do you remember this question coming up, or was it... oh, it's Josh, right, who's just not here. Okay: I raised this question. I don't think there's a good answer to this in the SDK, and I don't think we should try to solve the question that you're asking. I'm actually comfortable saying you should use only non-negative numbers for OpenTelemetry histogram instruments, which is to say that the data model admits these histograms with negative numbers for now, but we're struggling to... I don't know. I don't think you want to implement the state that you're talking about. Maybe the Hint API that Riley's talking about can dictate whether one of these things uses non-negative numbers or not. I don't know; this is a hard one. I'll let someone else talk.
A: What I remember from the other Josh is that we might want to have a hint or something that can tell people whether the histogram should expect all positive numbers or will allow negatives.
E: Right, his suggestion was in the data model, though: you could have a boolean on the histogram data point, and that would tell the consumer maybe something useful. But I think John's question is about how I output that boolean if I don't know, when I create the instrument, what it's going to do. And we don't want to change the output: like, you've been recording a sum and then all of a sudden you saw a negative number, so the sum disappears. That doesn't seem like a good experience.
B: So maybe this is something that does end up impacting the API, and we need to say that when you create a histogram... oh, I mean, the problem is it could be any instrument, right? Because we have a View API where you can configure any instrument to end up with a histogram coming out of it. Maybe not any instrument, but a lot of instruments could have histograms or summaries coming out of them.
A: If I remember correctly from what Josh mentioned, I think by default we allow a histogram to be both positive and negative, and if people want to enforce that, they can add that additional information in a hint, or the View, or somewhere else, so we can derive that information in the exporter.
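A minimal sketch of what such a creation-time hint could look like, assuming a hypothetical `non_negative` flag (invented for illustration; the real Hint API was still being designed at this point, so none of these names are real):

```python
def create_histogram(name, non_negative=False):
    """Create a histogram record; `non_negative` is the hypothetical hint."""
    return {"name": name, "non_negative": non_negative, "values": []}

def record(hist, value):
    """Record a value; reject negatives only when the hint was given."""
    if hist["non_negative"] and value < 0:
        return False  # what to do here is exactly the open question
    hist["values"].append(value)
    return True

h = create_histogram("queue.wait_time", non_negative=True)
record(h, 3.0)
ok = record(h, -1.0)  # rejected because of the hint
```

With the hint present, the exporter can safely mark the histogram's sum as monotonic; without it, the default remains permissive, matching what A describes.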
A: So, to make the conversation easier: I wonder if we can go back to OTEP 146 to describe a scenario, and we can use that scenario to validate whatever solutions or options we have.
E: Yes, this is almost more of a data model question. The sum is well defined in this case, but people are saying: oh, I want to turn my sum into a cumulative in Prometheus, so it has to be monotonic, and so my very well-defined sum might not be monotonic. So either we require the instrument inputs to be labeled as non-negative, or we hint that the output is non-negative, so that the histogram can be labeled that way.
E: And then, John, your concern can be addressed: the configuration of a view will just say what kind of histogram it's going to produce, and you can say, "I want a monotonic histogram," and you're just declaring it that way. Perhaps a monotonic-sum histogram.
B: So let me just repeat back what I think I heard from all of you: the idea is that we would have an API surface, via the Hint API, for the instrumentation author to suggest whether this thing is going to record only positive values or not, and then the SDK can use that to create the appropriately tagged histogram, so that when it gets to export, the OTLP exporter can do the right thing.
B: Yeah. Up until today, the monotonicity has actually been enforced. Well, it's been enforced by the SDK literally throwing an exception if you try to record a negative value on a monotonic sum. I'm not saying that's correct; I'm just saying that's what it is today, currently, in the Java SDK, so it's actually enforced at the implementation level if you try to record the wrong kind of value. I like the idea better of giving the SDK freedom: I don't think we should break people's apps, and that will break someone's app, right?
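The two behaviors being contrasted, throwing an exception versus quietly dropping the bad measurement, can be sketched with a toy class (hypothetical illustration; this is not the actual Java SDK code):

```python
class MonotonicSum:
    """Toy monotonic sum showing the two behaviors discussed for negative input."""
    def __init__(self, strict=True):
        self.strict = strict
        self.value = 0.0
        self.dropped = 0

    def add(self, amount):
        if amount < 0:
            if self.strict:
                # today's Java SDK behavior: the caller's app blows up
                raise ValueError("negative value recorded on a monotonic sum")
            self.dropped += 1  # lenient alternative: drop it, never break the app
            return
        self.value += amount

lenient = MonotonicSum(strict=False)
lenient.add(10.0)
lenient.add(-2.0)  # dropped instead of raising
```

The lenient variant matches B's preference: instrumentation bugs degrade the metric rather than crashing the instrumented application.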
B: I'm not a Prometheus user, so I agree, but that's a purely selfish answer, obviously. Anyway, I just wanted to bring this up because I read that, and I looked at the code, and I'm like: I have no idea how to implement this. So probably some clarification in the SDK spec about how to deal with this sort of thing would be very useful.
E: You know, we talked a lot about counters versus up-down counters, and the difference being monotonicity. So I feel like this is the same level of difference: is the sum a count or an up-down count in your histogram? So I'd rather have just a hint that says this is a histogram with an up-down-count sum, rather than a histogram with a count sum.
A: Yeah. So would you mind updating OTEP 146 to give one concrete scenario and explain why this is important and what you expect to see? It's not important until you have the Prometheus case; maybe then it's very important. That will help people understand why we are doing this, and also help us prioritize. Because if this is a corner issue, and only John cares about it and nobody else, then I'd probably make the call that it's low priority, right? I'm trying to understand how important this is.
E: In the protobuf, you can't even tell the difference: was the thing missing, or was it zero? And that makes it an extra hurdle for the collector to deal with this question mark. Interesting, which is just Go being hard to use with protocol buffers.
E: As part of our data model research last week, I went and looked at the latest and greatest draft from the Prometheus team about the compressed high-resolution histogram that they have been planning, and, I noticed, it actually doesn't support negative values. So I'm starting to question whether anyone cares enough about negative values entering their histograms. At the same time, does anyone else want to comment on that?
E: So that's why, when I saw this flying through: the data model requirement came from OpenMetrics. Josh was just trying to check off a box for OpenMetrics compatibility when he wrote the PR. I flagged the issue because I couldn't see, like John, how we would implement it in a stateful way; it doesn't make any sense. So either it's a hint, or the other solution is to just declare: histograms shall not have negative values. As an API constraint, that might not be a problem.
B: So do we have any real-world examples in observability/telemetry, not in scientific use cases? Because there are lots of scientific use cases where you want to record negative values, temperature being a good one if you're not using Kelvin.
B: Sure, so I guess the question is: does anyone have a concrete example from the observability world where negative values are something that would be recorded with this API?
E: Okay, that's a legit case, darn it. I can see that. I'm not saying... well.
A: The example: Josh has a solar panel, and it might have positive energy consumption, or it might contribute back to the power grid, and he wants to see the histogram of the power consumption at, like, every-minute granularity, as an actual histogram. I guess in theory it might work, but why would I do that?
E: And you can imagine transforming data of one sort or another into a representation where it matters. So I think the data model can talk about it: you know, computing histograms from deltas, where it makes sense to get negatives. But do you ever measure and then directly put that into a histogram?
A: Okay, so coming back: if we want to add some hint in the SDK, I think we can do that, and even though people might not use it for this purpose, I think it's still valuable. For example, if a certain system wants to do compression of the data, then by knowing that the data is always positive it might get better performance; it can prepare the memory ahead of time. So I think we should add it, but whether this will solve the problem here, I don't know. There's a question mark.
E: The simplest thing we can do is say that histogram instruments in the API don't take negative numbers, and if there's a great call for it in the future, perhaps we add a hint that says this histogram... well, it's not a hint, it's a new struct; I mean, I don't know if it's a hint or a new type. I don't think it should throw an exception, like John was hinting at.
F: I will add a comment here: there might be complications with bucket allocations if you're allowing negative values when you're implementing histograms. The way Prometheus is doing it, or seems to be doing it, they want to allocate a contiguous region of buckets where the values lie, and for positive histograms this is a sensible thing. But if you are crossing zero with this, you are essentially allocating an infinite number of buckets, because those are dense near zero. So you will have some subtleties if you want to cross zero and are using that kind of allocation scheme. For sparse allocated histograms it's easier, since it's just another bit that you need to put somewhere, but you might avoid some implementation issues by just restricting yourself to positive values.
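The "infinite buckets near zero" problem follows directly from how exponential bucket indexes work. A small sketch, using base-2 buckets and an invented `bucket_index` helper: because the index goes to minus infinity as positive values approach zero, a contiguous index range that tries to reach down toward zero needs unboundedly many buckets.

```python
import math

def bucket_index(value):
    """Exponential bucket index i such that 2**i <= value < 2**(i+1), value > 0."""
    return math.floor(math.log2(value))

# A value of 10 sits a few buckets above 1...
print(bucket_index(10.0))
# ...but a tiny value near zero is already ~30 buckets below 1, and the
# index keeps falling without bound as the value shrinks further.
print(bucket_index(1e-9))
```

A sign bit plus a separate index range for negatives (as sparse schemes do) sidesteps this; a single contiguous positive range, as described above, cannot cross zero.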
A: Yeah, so to move this forward, how about this: I'll try to cover this in the spec, and I believe even if it's not used for that specific purpose, people can still use a hint to do performance optimization. Meanwhile, we need some concrete scenario to help people understand, so that whenever we come back to this topic again, having that simple story will make it much easier for us to discuss. And John will take the lead on this.
A: I would probably just put down the hypothesis that we have. For example: you try to enqueue and dequeue, and you want to see the distribution of the rate. Then we'll see if anyone can come up with a better scenario. If not, we'll just put the scenario in with a disclaimer: we don't see any high demand or a real-world scenario, but this is the scenario we imagine, and when we design, we try to cover that potential scenario as well.
E: That's a very excellent example, and it's a scale that crosses zero at arbitrary numbers. Another one is signal strength, in decibels.
G: Yeah, correct. So, a comment, Riley, when you're doing the View API; I'll be curious to clarify. I think, for some range of instruments, with the View API you could take the same measurement and apply an additional aggregation. How would that apply if you have, let's say, some kind of a gauge, and then you want to add a histogram and so forth? So, you know...
G: You know: different instrument, different aggregation, right? So would we allow a histogram, especially if you know the instrument might collect negative numbers? Yes...
E: And that's why I was saying: thank you, Victor, you provided an excellent example. A negative number for temperature crosses zero; makes total sense. And, I don't know, there's not much time left in this hour. There has been a conversation happening, like, in the background between a few of the experts who know what we're talking about: there's this thing called a gauge histogram in Prometheus, and we've been talking about how to model that a little bit, and I think what you just described is it.

So you have a bunch of gauges, you know, one per thermometer, and my attribute is which thermometer number I'm on. And I decide I want to erase the thermometer-number attribute, and what I'm left with is a bunch of gauges where now either I'm going to take last value, which is not very meaningful if they're all at the same timestamp, or I'm going to take a gauge histogram, essentially. And so that's how I like to model the production of a gauge histogram, and if your gauges are negative, you're going to get negative values in your histogram.
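The "erase the thermometer attribute, get a gauge histogram" idea can be sketched like this. The helper and the explicit bucket boundaries are invented for illustration; note how the negative readings naturally land in the resulting histogram.

```python
from collections import Counter

# Per-thermometer gauge readings; the attribute being erased is the thermometer id.
readings = {"t1": -5.0, "t2": 2.0, "t3": 2.5, "t4": -1.0}

def erase_attribute_to_histogram(points, boundaries):
    """Drop the attribute key and bucket the remaining gauge values.

    Bucket i holds values <= boundaries[i]; the final bucket is overflow.
    """
    counts = Counter()
    for value in points.values():
        for i, bound in enumerate(boundaries):
            if value <= bound:
                counts[i] += 1
                break
        else:
            counts[len(boundaries)] += 1
    return dict(counts)

hist = erase_attribute_to_histogram(readings, boundaries=[0.0, 2.0])
```

With these readings, the two sub-zero thermometers fall into the first bucket, which is exactly why a gauge histogram over temperatures can contain negative values even though sum-style histograms might not.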
E: And that is a different kind of histogram than the one we talk about most of the time, which is the delta or the cumulative, where there's an aggregation temporality. So this is another issue that's in the proto issue repository right now, about how to represent gauge histograms. I don't want to talk about it any more in this hour.
B: To your point, Victor: I mean, there are real-world negative temperatures that data centers might want to record, especially if you're dealing with supercooled computers, quantum computers, etc. Recording negative Celsius temperatures of your quantum supercomputer is going to be super important.
F: What people did in this case was to offset their values so that the measurements you expect are all positive. Then you have a precision which is proportional to the offset you're applying. That was how we dealt with it in practice, and it just doesn't really work if it crosses zero; that's the point.
F: And for the gauge histogram, I see that more as a visualization thing: you don't want to deal with thousands of gauge values which are around there; you want to aggregate them in the view. And yes, those can be negative and you want to aggregate them. We have not done that in the data path; we have done that at the graphing layer, and there, of course, other bucketing strategies are interesting.
F: Linear buckets, for example, for the view range are typically a good option, but we have not done that in the data path. So if that is a requirement, then okay, yes, we should maybe open up a wider discussion. But, as I said, I think this use case is a little bit different and maybe deserves a different instrument to cover it.
E: So we're not going to talk about gauge histograms; everyone agrees not to talk about gauge histograms.
E: I think the heart of your suggestion, perhaps, Heinrich, was that if you have a number that truly is on both sides of zero and you want to use exponential bucketing, the view might be to specify a good offset, so that the value you're actually recording for the metric is "my actual value minus five," or whatever, because we know you can get good precision there.
E: And so, because we can talk about a gauge histogram as "I'm converting gauges into a histogram," and those are going to have negative numbers, I think we could say at the API level that histograms have to have non-negative numbers, and there are all kinds of options for figuring out how to encode negative-value histograms.
E: I've even considered making a proposal for an attribute which is named "anonymous": essentially, you have to erase this attribute, and it gives you a way to create histograms using an asynchronous observer pattern. Just observe an observable with an erasable attribute and you end up with a histogram, or something like that. I haven't said that before, because it's complicated.