From YouTube: 2022-06-08 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
C
I'd like to propose we start; I have a short schedule today. There's a sick family member and it's a little crazy. I came because I know we intended to discuss the question about bounds for the exponential histogram, and what I'd like to ask is that someone else leads this discussion. We know where I stand, and I'm happy to explain why we chose what we did; I'm also happy to concede and change our mind.
A
Yep. Also, I've collated everything that I know in the doc, but I have more access to the Prometheus folks, which makes the doc a little bit Prometheus-biased. So if you have any suggestions of things that you want me to change, feel free to comment on them or DM me, and I'll add that to the doc.
C
Well, the topic of exactness is the one that I'm really concerned about. The way we had written the specification in OpenTelemetry, there's no requirement for the implementation to be exact, and my reasoning there is that it's actually going to require a table-lookup implementation to be exact.
C
That's a lot to require of an instrumentation library. In the conversation last week in the specification SIG meeting, Richie came and talked about sort of a concern for the confusion of the user, and particularly talked about a situation where, if OpenTelemetry and Prometheus have chosen different boundary conditions, users might be confused; I think we're referring to end users.
C
But I think that the decision to use upper-inclusive boundaries in the case of an exponential histogram requires a lot more justification, particularly because you can't use a simple logarithm implementation: it requires a special case, which is, you know, five percent slower if you don't have a table-lookup operation. So that's sort of a complexity cost.
C
So I'd like to ask Prometheus: what are the compatibility stories? What really breaks? If consistency is so important, we should be doing it without mirroring the boundaries; the negative boundaries should have upper-inclusive bounds too. But I believe your current proposal is to include mirroring, which breaks that compatibility story. So there are two reasons the compatibility story doesn't quite work: you can't express the boundaries from the old queries exactly as powers of two, and the other one I just said.
A
Yep, so the compatibility story is, again, two things. The first thing is: what if an older Prometheus is scraping our application, which exposes exponential histograms?
A
Yes, give me a second; yeah, I'm going to write that down. The idea is: if you have a histogram_quantile function, it still works across both histograms. So people can say: okay, here's an older Prometheus with a recording rule using histogram_quantile, and if you upgrade the library to use exponential histograms, the histogram_quantile function still works. So you don't need to change a lot of your queries. That's the idea, and again, that's on the Prometheus end.
C
Well, I guess you could imagine synthesizing the le label, like emulating it somehow. You're going to do a query with an le label, and we know that, I believe, you do not want to continue having those labels; you're going to have one data point with all the histograms. So you now query the histogram for how many are less than or equal. But the problem is that that query, unless it's an exact power of two, can't be answered exactly. No?
A
No, you're right; I do believe that those queries will fail, but the histogram queries will still continue to work.
C
So when you're querying a metric and you have an le expression, you're asking an exact test: how many were less than or equal to this? That is an inverse quantile function right there, and that's the one where we won't be able to do the same thing. And really, I think we're debating a situation where, like, you have a bunch of sevens and a bunch of nines, and the average is eight. Now, is the p50 less than or equal to eight?
C
And the problem with these data structures is you're not going to exactly know, because exactness is really hard with an exponential histogram. I was going to propose to you that the users would be better served here with a bit or a flag that says that the inputs were discrete. If we knew the inputs were discrete, this debate would be a lot different, and I think that might be interesting to look at for the user's sake.
C
My biggest concern here is that I want to use a logarithm function. I like the fact that we have a one-liner, basically, to implement this histogram: the mapping function is the logarithm times the scaling factor, and I want that to be correct. The way we've written the specification, you're allowed to be off by one and still be correct, because we know the logarithm is not exact. Now, the logarithm is right about half the time, because it's just a flip-of-a-coin situation.
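[Editor's note: a minimal sketch of the one-liner mapping described here, under my own assumptions (lower-inclusive base-2 buckets as OpenTelemetry originally specified them); this is a reconstruction for illustration, not the text of the specification.]

```python
import math

def map_to_index(value: float, scale: int) -> int:
    """Bucket index for a positive measurement, lower-inclusive variant.

    Bucket i covers [base**i, base**(i+1)) with base = 2**(2**-scale),
    so the index is floor(log(value) * scale_factor) with
    scale_factor = 2**scale / ln(2).  Because the floating-point
    logarithm is inexact, results at bucket boundaries may be off by
    one, which is the tolerance the speaker describes.
    """
    scale_factor = math.ldexp(math.log2(math.e), scale)  # 2**scale / ln 2
    return math.floor(math.log(value) * scale_factor)
```

For example, at scale 0 the buckets are [1, 2), [2, 4), [4, 8), so a value of 3 maps to index 1.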
C
I did a test yesterday; it's not actually perfectly even. About 55 percent of the time, when you look up one of these boundaries, it's going to be on the high side or the low side. The one case where we know exactly that the answer is wrong is the one special case, and that's the reason why I think it's kind of silly: if I give you the value one, the exponential bucket is negative one; other than that, the logarithm function might be right.
C
So I could have a, you know, a simple... sorry. If we decided to change our minds about these boundaries for the positive range, what I could do is keep my existing function but add one special case: if the number is one, return negative one; otherwise, return the scaled logarithm. The scaled logarithm might be off by one, but there's no correct adjustment I can make to the result of the logarithm except on those powers of two, and that's where I'm going to put an "if power of two": potentially, if it's a power of two, then respect the boundary.
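[Editor's note: a sketch of that special case, again my own illustration rather than the specified algorithm: detect exact powers of two with `math.frexp` and respect the upper-inclusive boundary there, falling back to the scaled logarithm everywhere else.]

```python
import math

def map_to_index(value: float, scale: int) -> int:
    """Bucket index with upper-inclusive boundaries: bucket i covers
    (base**i, base**(i+1)] with base = 2**(2**-scale).

    Exact powers of two are the one case where the scaled logarithm is
    known to land on the wrong side, so they are special-cased; for
    value == 1.0 this returns -1, as discussed above.
    """
    frac, exp = math.frexp(value)   # value = frac * 2**exp, frac in [0.5, 1)
    if frac == 0.5:                 # value is exactly 2**(exp - 1)
        return ((exp - 1) << scale) - 1
    scale_factor = math.ldexp(math.log2(math.e), scale)  # 2**scale / ln 2
    return math.ceil(math.log(value) * scale_factor) - 1
```

With this variant, at scale 0 the value 2 falls in bucket 0, i.e. (1, 2], rather than bucket 1.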
A
Yep. I am going to, like, poke Björn about the logarithmic function as well, so that we can compare those two.
C
I did look at your document; of course, you have described Björn's function. It uses a table lookup, but it is using a logarithmic search, and it doesn't need to; we have demonstrated a constant-time lookup. And we can make a table lookup go either way; that's the benefit of a table-lookup function: you can have exactly less-than-or-equal, or greater-than-or-equal, or whatever you want.
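[Editor's note: an illustration of that point. This is not the constant-time implementation mentioned, just a binary-search sketch under my own assumptions, showing that a precomputed boundary table lets you pick either comparison direction exactly.]

```python
import bisect

def build_boundaries(scale: int, n: int) -> list[float]:
    # Hypothetical helper: the first n positive boundaries base**i with
    # base = 2**(2**-scale).  A production table would encode exact
    # thresholds rather than recompute them in floating point.
    base = 2.0 ** (2.0 ** -scale)
    return [base ** i for i in range(n)]

def index_upper_inclusive(value: float, bounds: list[float]) -> int:
    # Bucket i covers (bounds[i], bounds[i+1]]: a boundary value falls
    # into the lower bucket.
    return bisect.bisect_left(bounds, value) - 1

def index_lower_inclusive(value: float, bounds: list[float]) -> int:
    # Bucket i covers [bounds[i], bounds[i+1]): a boundary value falls
    # into the upper bucket.
    return bisect.bisect_right(bounds, value) - 1
```

The same boundary value, say 2.0 at scale 0, lands in bucket 0 or bucket 1 depending only on which comparison the table encodes.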
C
It requires, like... the code for that, I just wasn't wanting to get reviewed. You know, I was going to have to have somebody read code that has math/big functions, raising, you know, two to the power of 1700 over the square root of eight, times... like, that's the answer; it's about 100, but it's a little bit less than 100, and you can't express that number exactly.
C
You
have
to
encode
the
exact
threshold
above
which
or
below
which
or
less
than
or
greater
have
equal
test
will
fail
or
succeed,
and
in
that
code
I
had
generator
function
to
print
out
a
table
of
constants
and
it
was
200
lines
of
code
and
nobody
wants
to
review
that.
That's
my
problem,
like
the
the
logarithm
function
that
we've
specified
one
liner,
it's
correct
and
that's
why
I
like
it.
A
No, no, I completely agree. I still don't understand the table-lookup function fully, but yeah, I am going to see if we can come up with a logarithmic function that's simple. If not, I'm going to document that as well. So yeah, that's what I'm going to do.
C
And I think, you know, I'm the only one who's spoken now from OpenTelemetry. If I'm the only one who disagrees, we should absolutely do whatever you guys want. I felt like, when you came into the meeting, there was sort of a wedge created, which was: if we go this way and have a disagreement, Kubernetes will have to make up their mind.
C
I think it would be fair to ask Kubernetes what they think. If they don't want to use an OTel library and they don't want to use a Prometheus library, they'll have to implement whatever we tell them to, and it'll either be that logarithm function or a table lookup. I think it would be fair to ask some users what they prefer.
A
Yep. So there is a SIG Instrumentation meeting tomorrow that also discusses this. I will try to get the logarithmic function in there before that meeting. Yeah.
D
I also want to quickly chime in and just say that anything that requires a large table lookup will require the large table to be encoded somewhere in the code, and the JavaScript community is likely to frown on that. In JS, we've already had issues in the past where code size has been a problem, so anything that artificially inflates that is, you know, not great for us.
D
I mean, Josh just said it was several hundred lines of code. I obviously haven't measured anything, but, you know, we have some of the JS contributors that frown on tens of bytes, so, I mean, we're talking about a minimum of hundreds or thousands of bytes here. Anything greater than zero is not ideal, but the larger it is, the bigger a problem it is for us.
D
Okay, it looks like Josh dropped off. I mean, he's the only one that's really been talking the whole time.
A
Yeah, I mean, the other side is the Prometheus side, and the main reason we are doing this is to kind of keep compatibility, as I explained. So if you have an application exposing exponential histograms being scraped by an older Prometheus, things will still work; that's the hope, and that will make upgrades seamless.
A
We don't need to change queries or have a special function for the newer histograms; we can kind of reuse the older functions as well. So for us, the main reasoning is the backwards compatibility. And Josh is right that, you know, for the negative bucket ranges...
A
...we break compatibility, but negative bucket ranges are so rare in practice that we consider it okay to do that. And in the negative ranges, the semantics of the histogram are also slightly broken, because we expect the sum to be a counter, but it goes down for negative observations, and things like that; so detecting counter resets there is also super complicated. Yeah, so that would be the reasoning on the Prometheus end.
B
I do have a question about that. Does Prometheus generally consider it backwards compatible to change bucket boundaries? Because I'd imagine that when you switch from, like, a regular histogram with my own bucket boundaries defined to an exponential one, queries would start to potentially work differently anyway.
A
The other argument is that the performance impact of, like, a couple more comparisons doesn't really matter in practice, because there are so many other things that require a lot more instructions; with, like, a small couple of instruction changes, the performance impact is not the main issue here.
B
No, I think it sounded like Josh's main concern was about complexity and the requirements on people implementing.
A
Yep, yeah. I mean, again, I will be very honest and say that the table-lookup method is kind of magic for me right now.
A
So if that is the only way, then yeah, that's, like, a valid argument, but I'm going to document it, and I'm going to see if Björn can come up with a logarithmic function.
D
Is the exactness requirement that Josh referenced also a backwards-compatibility issue? Because that's, as far as I know, what was requiring the lookup table, and without it, it sounded like you can just apply a logarithm function and then apply a correction only on certain boundaries. Maybe I misunderstood; I'm coming late to this conversation, honestly.
A
So that's a good question. I don't know; I actually planned on going back, reviewing the recording, and doing some research to kind of figure that out myself, because I also didn't fully understand the exactness requirements.
A
My assumption is: if you have a number that's exactly a power of two, which bucket do you put it in? So that is my read on it, but I'll have to go back and review a few things to kind of understand that.
E
One thing to maybe keep in mind is that, for bytes, powers of two are actually pretty plausible: pretty much cache sizes or something like that, you know.
A
So the idea was: I'll fix the TODOs in the doc, and I've shared it with the OpenTelemetry community. I created an issue and shared it on the OpenTelemetry metrics channel, and we're going to meet again in two weeks. The next meeting will basically be to kind of hash things out, or try to come to a consensus regarding this. So until then, add comments, use cases, and suggestions to the doc, and, yeah, I think that's the idea.
B
I think... I'm going to ask Josh if there's anything he wants, but I'm going to try and answer the question of whether we would change existing histograms to be exponential histograms, or whether we would only opt in new histograms to be exponential; basically, whether we consider it breaking for our users to switch.
D
Yeah, that makes sense. Are there any other stakeholders that we should try to get in contact with in the next two weeks?
B
I can say that, from Google's perspective (I was talking with some folks yesterday), from a consumption standpoint, we don't think it actually matters.
B
It doesn't matter from a query perspective unless you're doing, basically, what we're talking about: unless you have very exact powers-of-two bucket boundaries. But even then, you still get the same error properties; I'm trying to remember the conversation, but I think you still get the same error bounds in terms of percentiles, if I remember correctly.
B
Okay, is there anything anyone else would like to discuss today, or any other points on that topic?
B
Okay, then I'll see everyone in two weeks, and hopefully we'll all have more information then. Thanks, everyone, for coming.