From YouTube: 2022-06-22 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A
It should be a larger call. Today we are going to talk about the bucket boundaries and high-resolution histograms, which are currently incompatible between Prometheus and OpenTelemetry, and unless I'm mistaken, both the Prometheus team and more people from OpenTelemetry should join, and also quite likely people from Kubernetes instrumentation, to try and realign, of course.
C
Well, yeah, I was saying: from my standpoint, that's the strategy Jordan's been saying, so hopefully he'll be here.
A
He said he would be. I'm currently in Austin, so I can't poke him, and I'm short on sleep.
A
Okay, George says he's going to be here in a few minutes. Do we have anything else on the agenda? I honestly didn't even check.
F
I see someone entering into the agenda to talk about, well, about histogram boundaries. Perhaps, if that is the topic that I was asked to come here for, I'll just say I promised a write-up of the change that you wanted, and I will try to leave my opinions out of it. For the group, I will write it up and propose the change as much as I can. I think there's no more debate that's needed; it's just —
F
I hadn't scheduled any time to make that change write-up until two weeks ago, so it's in the sprint this time, if that's what you wanted to hear.
F
Oh, okay. So I thought we had a call two weeks ago, and I guess you were not there. After that, Goutham set me up with some direct messaging with Björn and we talked about it. Now I have my own opinion, personally.
F
My opinion is that the cost-benefit is a tough sell, but I don't win by causing this problem here, so I'm better off making the proposal that you want and leaving my opinion out. So I don't think we need to debate this anymore. The place that we ended up, me and Björn, is: I'm not really convinced that the benefit for the user equals the cost of forcing a table lookup, for example, as implementations go. And as long as we're willing to accept an inexact solution, then OpenTelemetry — I believe; well, I will propose this, and it's the group's decision — will accept the ceiling-minus-one formula. In other words, it is slightly more complicated to compute a lower inclusive boundary than an upper inclusive boundary.
F
Because of the way our hardware works. We can do that on paper, but the exactness is a lot to ask for, and we think that is going too far. So, if this sounds like a good compromise: the compromise is, we will document it exactly the way you want, but not require a table lookup, in which case the exactness that you're asking for is not guaranteed without the table lookup. You just don't think that a table lookup implementation is, complexity-cost-wise, worth the trouble.
G
Yeah, sorry for joining late. I mean, I spoke with Björn, and exactness is not a requirement as long as, I mean, we can do the ceiling minus one, and that's completely fine. If it's a negative schema, or negative scale, and a power of two — that's where exactness comes into the picture — but for a positive scale, ceiling minus one would work completely fine.
F
So, practically speaking, there's no complexity cost for the user: you still call ceiling instead of floor. I mean, if you look at the implementation of ceiling, it may call floor internally, which is one reason why it doesn't feel quite natural, but that's okay. And then, for the negative scale, we put in a correction, which is just a test for a power of two and subtract one, which can be implemented without a branch — you know, we can do some bit shifting and subtraction and stuff like that.
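The mapping being discussed can be sketched in a few lines of Python. This is a hypothetical illustration, not any SIG's actual implementation: it assumes upper-inclusive buckets where bucket i covers (base^i, base^(i+1)] with base = 2^(2^-scale), the ceiling-minus-one formula for positive scales, and the power-of-two correction for zero and negative scales.

```python
import math

def map_to_index(value: float, scale: int) -> int:
    """Bucket index for an upper-inclusive exponential histogram:
    base**index < value <= base**(index + 1), base = 2**(2**-scale).
    Exactness near boundaries is NOT guaranteed for positive scales,
    since log2 is computed in floating point -- the inexactness being
    accepted in this discussion."""
    if scale > 0:
        # Positive scales: the "ceiling minus one" formula.
        return math.ceil(math.log2(value) * (1 << scale)) - 1
    # Zero and negative scales: work from the float's exponent directly.
    frac, exp = math.frexp(value)   # value = frac * 2**exp, 0.5 <= frac < 1
    exp -= 1                        # normalize: value = m * 2**exp, 1 <= m < 2
    if frac == 0.5:
        # Exact powers of two sit on a boundary; with upper-inclusive
        # buckets they belong to the bucket below -- the correction that
        # can also be done branch-free with bit tricks.
        exp -= 1
    return exp >> -scale            # arithmetic shift groups 2**-scale ranges
```

For example, at scale 0, a value of 4.0 lands in bucket 1, i.e. the range (2, 4], while 2.0 lands in bucket 0, the range (1, 2].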
F
It's not actually that important to me. So, as I mentioned, I have opened the documents to write this change I called a compromise. It means calling ceiling minus one for the positive scales, and it means calling the same logic — except with a special case that can be done without a branch if we need to — for the negative scales, and we're not going to require exactness.
F
Thanks for this. And the table lookup implementation was about twice as fast as the logarithmic implementation, so that's a good outcome, and I think that's the reason why half of the original research that the OTel researchers did was done by a person with a table lookup implementation. When you do a table lookup implementation, you just don't care about the inclusivity; it just doesn't matter. But as long as we don't care about exactness, we can write the formula the way you like.
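To make the comparison concrete, here is a hypothetical sketch of a table-based mapper for positive scales — not the benchmarked implementation mentioned above. The sub-bucket boundaries within one power-of-two range are precomputed once per scale; the boundary values themselves come out of floating-point exponentiation, which is exactly why inclusivity at the edges "just doesn't matter" for this style of implementation.

```python
import bisect
import math

def make_table_mapper(scale: int):
    """For positive scales: within each range [2**e, 2**(e + 1)) there
    are 2**scale sub-buckets with boundaries 2**(e + i / 2**scale).
    Hypothetical sketch, not a tuned table-lookup implementation."""
    n = 1 << scale
    # Precomputed once: sub-bucket boundaries for significands in [1, 2].
    bounds = [2.0 ** (i / n) for i in range(n + 1)]

    def map_to_index(value: float) -> int:
        frac, exp = math.frexp(value)    # value = frac * 2**exp
        m, e = frac * 2.0, exp - 1       # value = m * 2**e, 1 <= m < 2
        # Upper-inclusive buckets: a significand landing exactly on a
        # boundary belongs to the bucket below it.
        sub = bisect.bisect_left(bounds, m) - 1
        return e * n + sub

    return map_to_index
```

At scale 1 (base = √2), this agrees with the logarithmic formula: 2.0 maps to bucket 1, the range (√2, 2].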
F
I'm going to make that recommendation and fully back it, and I'll get that out by the end of the week. It'll be presented in the spec meeting next week for OpenTelemetry.
F
The complexity that the implementer faces is slight and real, but not worth an argument here. So that's what I'm saying.
D
I have one quick question related to the histograms, but not to the bucket boundaries, because there is this other tiny thing: in Prometheus, you have a field where you can explicitly specify what counts as zero, while in OpenTelemetry it's up to the implementer, and there was some proposal to add this as an optional field in OpenTelemetry as well — because then, when we scrape Prometheus to OpenTelemetry and push it into a Prometheus database, it wouldn't get lost. Could you also make this optional field part of the proposal?
F
That discussion definitely happened a little bit and then kind of got lost in the noise. Thank you for reminding us of it. I'll tell you what I think I would like — well, I'd like to have a wider discussion, perhaps, about zero tolerance being more of a cross-instrument question, a cross-instrument issue.
F
I've looked at metrics protocols outside of, you know, our world, and there's this concept of a deadband for a gauge instrument, being the amount such that, if this gauge changes less than a threshold, I'm not going to re-report it — and it seems like the same concept. And so I've been wondering if there might be room to put a concept like zero tolerance higher up in the protocol, meaning maybe at the instrument level. This would be —
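The deadband idea can be sketched as a tiny reporting filter. The class and method names here are purely illustrative — this is not an OpenTelemetry or Sparkplug API, just the concept of suppressing updates that stay within a configured threshold of the last reported value.

```python
class DeadbandGauge:
    """Suppress gauge updates that stay within `deadband` of the last
    reported value -- a hypothetical illustration of the deadband
    concept, not a real metrics-SDK API."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last = None        # last value actually reported

    def observe(self, value: float):
        # Report only when the change exceeds the configured threshold.
        if self.last is None or abs(value - self.last) > self.deadband:
            self.last = value
            return value        # re-reported
        return None             # suppressed as sensor noise
```

With a deadband of 0.001, a reading that moves by 0.0005 is dropped, while a move of 0.01 is reported — the same shape of decision as a histogram's "counts as zero" threshold.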
F
I was looking at this industrial metrics protocol called Sparkplug — it's kind of concurrent with us, having been started three, four years ago — and they have a concept of instrument metadata, meaning ways to attach key-value information about your instrument, to carry with the instrument, and those would be things like calibration details as well as deadband. So I've been wondering if we could do zero tolerance as an instrument characteristic rather than as a data point property. That's my question.
F
It does bring to mind, like, what does it mean when you've got two instruments with different zero tolerances? I know you've defined the merge rules for them, but I think of this as a property of the instrument more so than of the data, and I wanted that conversation to be, you know, separate from the boundary question. I don't think we're opposed to it so much as just kind of not really understanding it.
H
I mean, that's definitely a separate question. It's also different in nature, as in — from our side, this is quite an open thing, because there's nothing similar in the existing histograms.
H
So far, I see it mostly as: if you have very small, kind of noisy numbers, you want a convenient way of bounding them. I was more thinking of making this a fully fledged feature, like what Datadog is using in their DDSketch, because they are using this not just for "this is approximately zero" — they're using it, as far as I understand, aggressively, in case you're only interested in, like, the upper end, and then they have a very large — what we'd call in Prometheus lingo — zero bucket.
H
So I was more concerned that if, then, Datadog's DDSketches should be ingested into Prometheus, or vice versa, you want to have this way of being compatible, and that is where I actually kind of expected OpenTelemetry to be interested in it as well. But yeah, I don't know — I mean, so far this is a fairly niche feature.
H
I mean, they do it because there is a benefit in playing that trick, essentially. And the Prometheus idea was also — even if you don't implement everything in the beginning — to be open for other bucket layouts and those approaches. But yeah, mergeability, because that was also a point, right? Mergeability is — I mean, that's well defined, I think. Fabian, we discussed this, right?
H
So maybe if you explain it in your words, if necessary — perhaps that's better, because you had the outsider perspective and then understood it. So that's not like breaking mergeability if you just have a larger zero bucket — or zero buckets of different sizes — during merging.
F
During the debates over the OTel formula, six or seven months ago — man, well, ten months ago now, I remember — it was recently three people: me and the researchers from Dynatrace and New Relic. And we were talking about inclusivity, exclusivity, without any of you present, and that was clearly a mistake. But at the time, Otmar pointed out that, you know, to implement zero tolerance —
F
So that's my personal opinion, but I'm trying to help this along. I know one option would be to have a deadband or a zero tolerance on the instrument; another is to have an upper inclusive, like, zero boundary in the data point. I think we could, you know, in this —
F
Maybe we should look at Datadog then, when we haven't. That's all I have — I don't know.
H
Yeah, I mean, as I said, I think this is most in demand in cases where you actually go far away from zero, so that it matters, and I see uses for that. I don't claim this is super important, but it seems promising enough to not just drop it right away.
F
So I guess, do you guys see any pros and cons between the two? Like, just hypothetically: one is, imagine we put key-values into the instrument, and you could have anything, any property you want — and I've seen, in the Sparkplug world, people use these for calibration parameters, serial numbers; you have an instrument serial number, like what revision of the hardware that sensor was.
F
You know, there's all kinds of stuff that you can put onto the instrument that we don't really have a place for, other than a description — it's a string right now. So you could have: histograms here, my zero tolerance is one millisecond; histograms here, my tolerance is 10 seconds, because I'm counting, I don't know, epochs or whatever. And, you know, that could be an instrument property — versus having, in the exponential histogram data point, just an unsigned integer, which is the boundary according to the scale parameter. I guess. Or not.
H
So we have an implementation which just specifies the boundary as a float value. It's possible that it's in the middle of a bucket, and this can be treated in a defined way if you want to merge. You could make an argument that the boundary should always coincide with a bucket boundary. But then if you, for example — that's the thing that happens in Prometheus — drop resolution because you want to shed buckets —
H
Then, if your zero threshold was on one of the higher-resolution boundaries, you're in the middle of a bucket anyway. So yeah, I don't know. I mean, that was just the most straightforward way for us: to put just the floating point number, the upper inclusive boundary of that zero bucket, and, yeah, be done with it. It takes eight bytes. So that might be —
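One defined treatment of merging zero buckets of different sizes can be sketched as follows — a hypothetical helper, not the Prometheus implementation: before merging, widen each histogram's zero bucket to the larger threshold, folding in any bucket the widened region overlaps, and rounding the threshold up to a bucket boundary when it would land mid-bucket.

```python
def widen_zero_bucket(scale, zero_count, buckets, new_threshold):
    """Fold buckets overlapped by a widened zero threshold into the
    zero count. Hypothetical sketch: bucket i is assumed to cover
    (base**i, base**(i + 1)] with base = 2**(2**-scale), as in the
    exponential histogram data point discussed here."""
    base = 2.0 ** (2.0 ** -scale)
    threshold = new_threshold
    out = {}
    for i, count in sorted(buckets.items()):
        if base ** i < new_threshold:
            # The bucket overlaps the widened zero region: fold it in,
            # and if it straddles the threshold, round the threshold up
            # to the bucket's upper boundary so no range is ambiguous.
            zero_count += count
            threshold = max(threshold, base ** (i + 1))
        else:
            out[i] = count
    return threshold, zero_count, out
```

For example, at scale 0 a threshold of 3 falls inside bucket (2, 4], so the bucket's counts move into the zero count and the effective threshold rounds up to 4 — the "middle of a bucket" case handled in a defined way.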
C
Yeah, I will say, all the other use cases you mentioned there, Josh, are kind of info metrics, and I'd be wary, you know, of moving those too closely into "here's another bag of key-value pairs." Something more structured, like Björn's talking about, is probably better for this particular case, especially because you want to do structured merging.
F
If we had a way to talk about the schema of the attributes, or the labels themselves, I would probably be more inclined to agree. Maybe that's the right answer — like, if I could see: this metric has five labels; three of them are static properties that don't change; they're the serial number of the instrument, the deadband configured by the user, or whatever — those types of property.
F
Then they become metric attributes, but I know that I'm not aggregating by meaningful characteristics of the data when I see that data, so I wouldn't want to treat them as ordinary labels. That's the kind of distinction I'm looking at here. But I don't want to see us add new key-values and confuse the user.
F
I'll just say, if you look at something like Sparkplug, you see them right there, and they don't have that concept of labeled metrics, because they don't have the concept of virtualized instruments like we do — like, this is real hardware we're talking about having key-values on.
H
You just drop the specification, right? And the other way, if you get the unspecified thing, then you could just say: okay, this is a zero bucket with zero, with the knowledge that it might actually be more than just zero, because it's coming from that other world. So I'm not super worried, even if this ends up with a different detail, that this will be, like, super annoying all the time.
F
I think it's okay. So I think the reason we never added this field is that it wasn't quite understood — I can sort of see why from this conversation. And remember, the job is not to convince me, but at least a handful of other reviewers here. So I think what would help would be a focused, short write-up that just talks about zero tolerance and the questions of, like — it comes from Datadog.
F
I actually didn't know that until today, right now, so I have to think a little bit. It's not really about the semantic question that I originally proposed; it's more about the mechanics of data being in the upper ranges — and when you have data in the upper ranges, data in the lower ranges becomes effectively meaningless, and that is a semantic thing, I think, is what we're trying to say. It sounds like we could just say: there's another field, it's in the exponential histogram data point, and it's defined to be the zero tolerance.
F
It may be eight bytes of IEEE floating point, and the rules for merging are specified. I think what people are not going to want to see is a lot of complexity there. But that's — I can't write that sentence right now in my head. That's the property that we're trying to add, and, you know, we could just document it, make it look not very hard, and I'm sure people will agree to it — I think is what I'm trying to say.
F
I guess it's sort of like the motivation got lost somewhere. Maybe if the motivation is to respect something that Datadog has done —
H
Okay, I'm not 100% sure Datadog is doing it exactly that way — that was just my memory from reading their paper, like, two years ago or something — but it is in other exponential histograms that exist as well. And then I think it's mostly motivated by — I guess it's coming from — you might actually get physical measurements that are noisy, and even if it should be zero, it's like 0.0001 and then it's 0. Well. That —
F
That's why I kind of want that for my gauge too. So, I mean, I've mentioned physical measurements since looking at Sparkplug, because I have this, like, water system that I'm kind of responsible for. So I have a physical measurement, which is pressure, and when I say deadband, what I mean is: I want to configure the amount for which you should not send me a new data point — if it changes by less than .001 — because my sensor has that much noise in it, I think.
H
The idea is just — this is kind of in the spec. If you don't have a dynamic — like, we talked about this — we can double the resolution and have it as we see fit, right? So we have this dynamic bucket layout, where it becomes more interesting to specify the boundary. If you're, like: my histograms have this bucket layout, but everything below 0.001 is just one bucket — then it's just a static thing in the bucket layout; it's not in the protocol.
H
It's just in the way you specify your bucket layout, if it never changes. So we only got into this discussion at all, I think, because we want to make it dynamic — which DDSketch is also doing, and then they go the extra mile of utilizing very broad zero buckets, if I remember that correctly. But so this is coming from different directions, and, yeah — I mean, mostly motivation.
F
Yeah, I feel like two weeks ago — three weeks ago — Rich came to OpenTelemetry and said we're confusing the user, we should do something here, and I'm trying to be as accommodating or as cooperative as possible. But what I feel like — actually asking users — when we come to this type of question, it's another one; it's not the same as the boundary question. Like, in the example: Kubernetes, you're a big user — what do you think about zero tolerance?
H
Hard to sell from actual implementations, right — the paper just came out after the implementation was there and got sold. But also, as I said, a lot of exponential histograms have this idea of a deadband, right? And I think, since we do have dynamic buckets, we should have it as part of what we store on the data point — how broad this is — because we can also make use of it by changing it dynamically. I don't think it's, like, a paradigm shift.
F
I think I see the idea, the conceptual justification, now — that was well said, Björn. For me, now the question is: what's my implementation change going to look like? And I can't tell. Okay, I guess I'm only a little bit hesitant to offer to do the work to convince OpenTelemetry this is a good idea, since I'm merely the middleman at this point. I guess what would probably — I know you have a reference implementation.
F
I know you have a paper. If there's some, like, section of that paper that we could lift out and put into a PR description, along with the field — a change that adds — well, there's like one change in our data model. It's the paragraph, perhaps, out of your document that says what the zero boundary means and why it's there, and it has that symmetric —
H
There are histogram implementations that have a fixed bucket layout, period. They just say: this is our bucket layout, it's good enough — it ought to be enough for everyone. And in that scenario, you have a finite zero bucket, which is part of the static bucket layout, which is the reason why they don't have to store the threshold in the data point — but they have a counter for zeros, right? And so that's also the thing in histograms that are not dynamic, and we want to have the dynamic histograms, I guess.
F
I don't know if any of those examples would also help motivate that PR. No?
F
That makes me happy — yes, nice. I will handle the upper inclusive boundary issue, but if someone else could handle that zero one — great, or an agreement — then I am committing to deliver something for next week, hopefully before the spec meeting Monday. Hopefully.
F
We keep changing — trying to refine — the process for complicated issues. I think this is not big enough to warrant an OTEP, which we talked about, but, like, a spec-level issue at the specification repository saying: add zero tolerance to the exponential histogram. And then, you know, like a paragraph or two, and then it's got a change in our data model that lays out what the exponential histogram is.
F
I'm currently making changes in the same file to make the boundaries exact, for example. And then, because the protocol is in a separate repository, we need a separate PR. But, like, probably the first thing that everyone's going to see is the issue in the spec repo, and then everything can be linked from there.
F
Well, Anthony usually runs the meeting, but I don't think so. I think that resolving that issue is more than enough of an accomplishment for one meeting. Two issues — two issues! Okay.
F
Yeah, we should end on a high note, then. I'll share what I have as soon as I get it, in the Slack channel — where, you know, to find me.