From YouTube: 2022-05-31 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
E
But we have a couple of topics for today, and besides the agenda items I don't think we have anything else. So let's start with the first item: Santosh?
D
Hey, hi. So, yeah: there is an Events API OTEP, and we want more folks to look at it and give comments.
D
So this is a call to get more comments. Specifically, there is one question we need feedback on: should there be one single API (call it the Logs API) that is used to create both events and logs, or should there be separate APIs, an Events API and a Logs API, to create events and logs separately, with both of them using the LogRecord data model?
D
Having one API is consistent with that definition, whereas separate APIs are more clear: for a lot of people, logs and events are still separate concepts, so it's clearer if we call them by different names. So there's a decision that needs to be taken.
E
Yeah, that's a good point. But for me, I would start from the... As you mentioned, some people see those as different and some people see them as the same. Do we start by defining what a log is versus what an event is in our community, or what we call a log versus what we call an event inside our community? Then at least we refer to the same things and understand each other when we are talking about them.
D
Yeah, absolutely. I think in the OTEP I have clearly defined what is different between the two: basically, logs have a mandatory severity field, and events have a mandatory name field and type. Other than that, I think they're pretty much the same as logs; typically, unless it's a structured log, you know...
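[Editor's sketch: to make the two options concrete, here is illustrative Python only; the names are hypothetical, not the OTEP's actual API. Both options share the LogRecord data model just described, with severity mandatory for logs and name mandatory for events.]

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class LogRecord:
    # Shared data model: logs carry a mandatory severity,
    # events carry a mandatory name, as described above.
    severity: Optional[str] = None
    name: Optional[str] = None
    body: Any = None
    attributes: dict = field(default_factory=dict)

# Option A: one Logs API creates both logs and events.
def emit(record: LogRecord) -> None:
    ...  # hand the record to the SDK

# Option B: two facades (a "Logs API" and an "Events API") over the same model.
def emit_log(severity: str, body: Any) -> None:
    emit(LogRecord(severity=severity, body=body))

def emit_event(name: str, attributes: Optional[dict] = None) -> None:
    emit(LogRecord(name=name, attributes=attributes or {}))
```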
E
No, that makes total sense. Okay. Let me ask a follow-up question, and then I'm proposing something: do we need a separate effort, or can we put this into the Log SIG as the owners of this area?
E
So we don't need a separate thing. Then I would ask: if you were discussing this in the Log SIG, can you ask the Log SIG members to come and chime in and approve the OTEP, or make sure that you have agreement before bringing it to the whole specification? Because to me it doesn't seem that we have agreement on this OTEP yet.
D
So, are there any more folks that need to approve it? I think they both have different views on this, not strong opinions, but Tigran feels that for consistency we should continue to call it the Logs API and use it to create both events and logs, and Jack thinks they should be separate. That's why we need more people to comment on it.
F
Yeah, I would also say that Tigran, who is part of the Log SIG, specifically asked in the OTEP that we circulate this more widely. Okay.
A
Yeah, a little bit more context: there's a lot of agreement conceptually. I think everyone from the Log SIG agrees that there should be an Events API, and that we have this other use case of accommodating, minimally, an API to append log messages from existing log frameworks into OpenTelemetry.
A
So, something like what we might call a Log Appender API. But some languages might be interested in a general-purpose Logs API if they don't have a history of log APIs and frameworks already available, and I think most of us agree on that. What we're trying to circulate more widely is: should those two use cases be joined from an API standpoint, or should they be separate?
E
You mean the log appender API and the Events API? Correct? Okay, then I would also agree with you, Jack, that those should be separated, because personally I feel the log appender is more like our SDK APIs than what we call the API in other signals. So, yeah.
E
Let me also follow up with Tigran to understand better what he wants there. I'll take an action item to at least talk to Tigran and see why he sees these as the same thing.
E
And the other folks... I think that's the most I can do for the moment, since I'm not very involved in logs, but I will follow up with Tigran at least to help with that. Okay, anything else we have to discuss here? Do we have any other opinions, or anything?
E
I think we are good. Okay, Ludmila, you are next.
G
Yeah, cool, thank you. I want to understand how we can merge the tooling changes for requirement levels. It seems somebody from the Technical Committee can merge it; there is consensus, and I think somebody just needs to press the button.
G
Yeah, thank you, I appreciate it. What is the process for actually publishing it to Docker Hub, so we can leverage it in...
E
On our side, somebody needs to create a tag. I think I can do that; I can create a release. Okay, I will do that afterwards, while we discuss, maybe by the next meeting, or...
E
I personally would go and release as fast as we can. I mean, for build tools we should not be worried about having a strict release cycle, right? If we release today, and tomorrow we agree that another PR is important, we can do another release tomorrow. I'm not worried about that. That's my personal opinion, but I'm happy to hear others.
H
By the way, sorry for being late, but one more concern about postponing, or doing another short release, is the fact that we have been desiring to go with OTLP JSON stable and we are missing some final steps. But... that's not in build tools; what's the relation with build tools? Oh, never mind, I thought it was... I saw a comment from Ludmila. Never mind, so discard my comments. Sorry.
C
It's a little bit reminiscent of 2020, when we had the same thing with the normal histograms: the bucket boundaries were misaligned between the two. As a super short summary for everyone who's not familiar with the history of this, it means that (a) it is mathematically and theoretically impossible to have correctness between the two implementations; you will lose state and you will lose precision while sending this data, which (b) means significant user impact, or confusion, or inconvenience when using it.
C
As far as we saw within WG Prometheus, there is already an accepted design within OpenTelemetry from last May, I believe, which honestly flew under the radar from our perspective. And unless I'm mistaken, there's now an addendum within OpenTelemetry to allow both bucket boundaries, which seems to be a little bit of a worst-of-all-worlds, because you wouldn't be certain which direction anything is going in any specific implementation.
C
Yeah: exponential, or high-resolution; both, functionally.
C
That's fine, but we basically saw the same thing that we had with fixed histograms until, I think, early 2021 (or late 2020, sorry), where we also had that bucket-boundary difference, and it has re-emerged with the exponential slash high-resolution instruments. So...
I
Yeah, hi. When the reviews were being done for this topic a year ago, Björn was part of the conversation, and I think we agreed to disagree there. There were two topics of disagreement. The first was about zero tolerance, and I don't think that's the one you're concerned about; the second was about boundary alignment. The reason we agreed to disagree on that comes from the straightforward implementation of the bucket calculation for that histogram; we believed the authors of that, conceptually.
I
It helps us when the value 1 maps to index 0 in the histogram, and that's what we get when you have less-than-or-equal (sorry, less-than upper bounds and greater-or-equal lower bounds) in your histogram. The floating-point hardware of our computers already has these numbers in exponential representation, and we're doing very simple operations to turn those existing base-2 exponential numbers into a histogram bucket. Having the opposite boundary conditions actually makes that code more complicated, harder to verify, harder to test, and slower. So that was the mechanical reasoning behind it.
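[Editor's sketch: a minimal Python illustration of the two boundary conventions being contrasted here. This is not the spec's reference code; the function names are ours, and the rounding care that production code applies at exact powers of the base is omitted.]

```python
import math

def index_lower_inclusive(value: float, scale: int) -> int:
    # Buckets are [base**i, base**(i+1)) with base = 2**(2**-scale).
    # The value 1.0 maps exactly to index 0: the property described above.
    return math.floor(math.log2(value) * 2**scale)

def index_upper_inclusive(value: float, scale: int) -> int:
    # Buckets are (base**i, base**(i+1)], matching the `le` semantics of
    # classic Prometheus histograms; exact powers of the base now need
    # the off-by-one handling debated in this call.
    return math.ceil(math.log2(value) * 2**scale) - 1

def index_scale0_lower(value: float) -> int:
    # At scale 0, the lower-inclusive index is just the exponent the
    # floating-point hardware already stores: the "very simple operations".
    mantissa, exponent = math.frexp(value)  # value == mantissa * 2**exponent
    return exponent - 1                     # 1.0 -> 0, 2.0 -> 1, ...
```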
I
The justification that was used has always been the same: histogram boundaries have zero area in the histogram, so when a measurement falls exactly onto a boundary, it is an example of the worst-case error; that's the definition of it. We have a situation where OTLP's histogram would be exactly off by one, which is the worst-case error, and if you've chosen your histogram with high resolution, an off-by-one case should not be a problem.
I
First of all. And second of all, you're talking about measurement-precision issues: how close are you to exactly 1.0, and do you think that a plus or minus of one ten-millionth should put you into one bucket or another? If that really matters to you, that's when we started talking about OpenHistogram being potentially better for us.
I
When we talk about base 10, it's really believable that the precision of an exact measurement matters, because we think in terms of orders of magnitude: 100 milliseconds is a real number in OpenHistogram, but it's not in the base-2 histogram. So we just decided that the precision issue is going to be worst-case, and that left us talking about zero tolerance, which is another precision issue and a different topic. So I hope I've helped.
I
We don't think that this difference should be real. In other words, I'm...
I
So, the histogram: if you're measuring in milliseconds and you want a value like a hundred, there's just no base-two representation that is exactly that. So whether you're talking about the boundary of (I don't know exactly what binary fraction is right around 100), you're talking about finding a boundary issue at a power of 2, not at a number that makes sense to humans. That's another reason why I don't particularly feel it's a strong argument in favor of those exactness numbers.
C
We are partially replaying past discussions to onboard everyone else on this call. The counterarguments, as I see them, or as I understand them, are: (a) there is an installed base, both within the cloud-native ecosystem and within the previous histograms in OpenTelemetry, so for backwards-compatibility reasons the other convention would make more sense.
C
Notwithstanding that everything Josh said is right on the mechanical level, the other argument is that you have a lot of discrete measurements in computers, and a lot of those tend to be powers of two as well, because that's just how computers work. All those points are honestly technical. The main reason we found we needed to raise this again is that there is going to be an incompatibility.
C
It might not be huge, but it will bite some users, and it will be largely invisible to those users until it's too late, and they won't have a way to fix it even if they detect it. That's what we are concerned about: introducing long-term incompatibility within the cloud-native ecosystem.
C
That's the risk which I'm seeing. And, just a second: agreeing to disagree on the technical level is absolutely fine, but on the end-user level that still doesn't solve any problems; it just creates one for them.
E
I have a question for you. I'm not an expert at all, but what would be the downside if we do what Richard proposes? Is there any real downside, or is it just the...
I
I mentioned the downside: the conceptual mathematical equations underlying this histogram center on the value one, which is very special in an exponential histogram. The value one mapping to index zero makes it so that I can do this math in my head, for one thing; it helped me understand what we're doing with it. It felt like a mistake to put in a special case just to get this boundary condition because of this history.
I
With a histogram, that's not going to be forward-compatible: you won't be able to take every explicit-bucket histogram and import it forward; there's just no exactness there either. I don't see why we should be held back in the future because of that, for a new data type. Like, we agreed on the old data type being exact, but this is a new data type.
E
Yeah. So the Prometheus issue (correct me if I'm wrong, because I want to understand this, and probably others do too) happens when we convert this new data type into the old data type, correct?
C
The issue we're having is, if you look at, for example, kube-state-metrics, which, for those who don't know, is how Kubernetes emits anything about itself in metrics land: it's neither using the Prometheus client libraries nor OpenTelemetry, and unless I'm super mistaken, they don't plan on changing this in the future. So they, for example, have to make a choice: do they go with their own backwards compatibility and their installed base, or do they flip that bucket boundary?
C
And you have similar issues across a ton of other projects within the CNCF that have aligned on this. I'm not saying that it is mathematically perfect, but it is what it is: there is an installed base, and there are realistic scenarios where you want seamlessness between the old-style and the new-style histograms, and all of those things will hit a mathematical wall, because you can't convert this.
C
Currently Kubernetes is doing the current Prometheus format, because that's what is specified. For the future, Kubernetes wouldn't need to make a decision, and the same is true for all the other things within the cloud-native ecosystem, I think. And...
I
Yes, it was concurrent with the work we were doing. We had this disagreement a year ago; Björn was in the thread, and we all discussed it. The point that I'm making about value one mapping to zero comes from the New Relic engineer who was leading that design. It was going to be hard to overturn him on that, you see, because he had been leading up the whole thing.
I
You know, the representation of the value one in our computers is very close to the representation of zero, and that's sort of how we get to this. Again, it's just a technical point.
I
I felt like Prometheus had an opportunity to match us on this, given that we had merged all of our work.
I
In fact, Lightstep has a server today accepting this format, as of last week, so it'll be hard for us to change, particularly because the implementation that generates this code is quite detailed and quite difficult. And actually, I think that having that off-by-one-like logic in there is going to really throw it off; I'd rather not do it. It just seems to make the implementation a lot more difficult.
C
Again: agreed on the mathematical properties; maybe not on the performance, or at least not on how much the performance matters in this specific case, but absolutely agreed on the mathematics. The issue is the installed base and backwards compatibility, and there are realistic scenarios... I'm repeating myself. The reason we're having this discussion is that, of course, agreeing to disagree does not solve the underlying technical problem: there needs to be a decision one way or the other, and either we have full compatibility or we don't.
C
The sniff test I'm getting from this is that we need to solve this, because if we need to ask other projects how they are going to implement it and which side they are going to choose (like the side used in Lightstep)... On the conceptual level that makes sense, but you can already see the forebodings of this being carried forward as a point of contention across the cloud-native ecosystem. Unless we...
E
Forgive my ignorance, Josh or Richard: is it possible, with some precision loss, to convert from one format to the other, or are they incompatible at this point? That's...
I
That's what I mean by worst-case error. We agreed on the base-two representation, we agreed on the underlying exponential formula, we agreed on the parameterization, all that stuff; and the off-by-one, or boundary-condition, issue and the zero-tolerance issue were on the table, and we just moved forward.
I
So in this case, if you receive Prometheus data and it has a lot of sparseness to it, you might end up with rather a lot of zeros, or you might want to downscale the histogram, which is very easy as well by the design that we used, because there's too much sparseness; and in the other direction the same will happen. You can have a...
I
We can have a heuristic for how to produce sparseness out of denseness; that can be done, and it's non-lossy. But if you have exact measurements like powers of two, if you're counting things in units of 1024 or whatever it might be, and then you're histogramming your counters or whatever, then you might see this off-by-one, and I don't know.
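[Editor's sketch: the downscaling mentioned here is cheap by construction. Lowering the scale merges runs of adjacent buckets, so the index mapping is a single shift; the function name is ours.]

```python
def downscale_index(index: int, by: int) -> int:
    # Lowering the scale by `by` merges runs of 2**by adjacent buckets.
    # Python's >> is an arithmetic shift, so this floor-divides correctly
    # for negative bucket indexes as well.
    return index >> by

# At one scale lower, old buckets 6 and 7 both land in new bucket 3:
assert downscale_index(6, 1) == downscale_index(7, 1) == 3
```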
C
How do you get the off-by-one? Because the main risk which I'm seeing is this: we are measuring something discrete, and we measure a lot of it on certain boundaries, because again it is a binary system, computers are binary, and you're measuring almost all values at, let's say, 1024, and just by happenstance that is where the boundary sits. Now I don't know whether those 10,000 measurements go in this direction or that direction.
C
I cannot mathematically say whether this sits on this precise level or not. And as Jerome said earlier, there are absolutely situations where a precise statement, whether this is on this side or that side of the bucket, is highly relevant. Anyone designing a system will not care about transport; they will care about the mathematical properties and about being certain where all their measurements lie. That's the issue.
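[Editor's sketch: the 1024 example made concrete. Under the two conventions sketched earlier, a measurement of exactly 1024.0 is counted in different buckets, which is the ambiguity being described.]

```python
import math

value, scale = 1024.0, 0  # an exact power of two, as in the example above

lower = math.floor(math.log2(value) * 2**scale)     # 10: counted in [1024, 2048)
upper = math.ceil(math.log2(value) * 2**scale) - 1  # 9:  counted in (512, 1024]
```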
I
When you're counting something discrete, in other words. I'm not sure that that's very common, and I do believe that measuring timing values for things like SLO calculations is really common, and those are numbers where it's going to be powers of 10 that humans understand, and so the argument falls apart in that case. Like, 100 milliseconds is not an exact binary number; one second is, but you're not going to be measuring discrete seconds. So you're...
I
If all your numbers were around 1024, the scale of the histogram would be such that... I don't know without looking at real data, but if your spread is only a few discrete integer values, you should have a scale such that you're at least going to have some resolution right around that exact power of 2. But I think you're right: it is hard to represent exact powers of two through discrete numbers and then histogram them. I'm just not sure how common that is.
I
Boundaries: that argument was certainly made when we were debating whether to go with OpenHistogram and its log-linear decimal structure, and I think that was a very compelling argument. In fact, none of our measurements are precisely, exactly 100 milliseconds or whatever; but then again, are our clocks even that accurate? I don't buy that histogram measurements are commonly used with discrete things that are exactly integers right around powers of two, especially across many orders of magnitude.
C
But no: the boundary is not something which depends on the order of magnitude, and if you have a bunch of realistic scenarios in which you are bitten by this, it doesn't matter that there are other scenarios, like, for example, 100 milliseconds, where you're not bitten. Because that's not what anyone on this call is arguing.
E
Until... and in one hour we'll see who is winning; I'm joking. But seriously, let's schedule some follow-up where we discuss this, because I think it's important to solve it. And maybe Josh is right; maybe we can also look into changing some of the Prometheus decisions and so on and so forth, because I don't think Prometheus has implemented this yet, correct? It's just early days. Is that true, Richard?
E
Let's... can we have a meeting between the experts there and some of the folks? Josh, I think you may want to discuss the use...
E
I apologize; this is a question about the exponential histogram, about the use cases, you're saying?
J
Yes. I'm just trying to see how we can be efficient in what Bogdan is recommending, like discussing during one hour.
I
Yeah, I don't know if it's worth anyone going back to read the very lengthy PR from putting the first protocol together, or before that the OTEP; and then I myself wrote two PRs in the OTEP, or the OTel specs, just to sort of write down exactly the code you can use to implement this histogram. And again, it comes out really nicely because of this boundary-condition choice; and of course the lookup is a simple math-function call.
I
It's a lot to read, and I think we've just distilled this conversation right here. I don't know that the OpenTelemetry APIs talk about how you use histograms; this is just one of the implementation choices. So I don't know about going back as far as use cases, in this case.
E
I mean, the use case I think Jerome may be mentioning is identifying whether the discrete values are a real use case or not. Probably that's what I'm hearing, or maybe I'm wrong. But in terms of use cases of high-resolution histograms, I think everyone agrees that we need a high-resolution histogram.
E
I don't think we are debating that here. We are just debating a technical, slash backwards-compatibility, slash political decision between the two projects, and I don't think it's anywhere close to discussing use cases, unless we are looking to prove Richard's point that the existence of the discrete values is not going to happen in reality, which I don't believe is true. But unless that's the argument, I don't see why we are going back to use cases.
E
But we still need to have a follow-up. I believe this is very important. I don't want to just agree to disagree between the two projects, because, as Richard pointed out, if Kubernetes has to make a decision, it will make one or the other, and one of the projects will suffer or will have to do things that we don't want to do. Let's try to work together early on to make sure we don't get into a disagreement.
E
Richard, I know you are also part of the TAG Observability...
C
Procedurally speaking, neither the TOC nor the TAG cares about the specific decisions that any specific member projects are making, for good reason. So I'm not convinced that that is the right forum, but we can absolutely use this forum if everyone wants to, because it's there. You asked what the right process for this is.
C
I don't know; that's why, within WG Prometheus, we came to the conclusion that we need to raise it in this call, to kick something off. Also, just to make it explicit: I'm worried about use cases and everything, yes, but the point which is actually being raised is that there is a substantial risk of long-term incompatibilities, and that's the core of the issue.
E
Which we all agree about. But, as I said, we would also like to discuss with the Prometheus folks. Maybe... I don't know: what I'm understanding is that Prometheus values backwards compatibility with fixed buckets more than the simplicity of the new format. Is that what the decision is about?
E
So I think the argument on the Prometheus side is that backwards compatibility is more important than the simplicity of the current algorithm, and on the OpenTelemetry side it was that we don't necessarily care about that backwards-compatibility issue, but we do care about being simple to calculate from the current representation.
E
Okay. Personally, as I said, I'm fine; I would prefer to have compatibility. Josh, I don't know, you are the expert here: maybe we can convince the Prometheus folks that this backwards compatibility is not that important, or maybe we convince ourselves that it's not that bad, that the algorithm is not that bad with the other option. I would like to see compatibility personally, but I'm not proposing one or the other, because I'm not an expert in any of these; I would just like to see compatibility here.
I
Maybe we should qualify that: you're saying one hundred percent boundary-equivalent compatibility, whereas the argument has been made that the boundaries themselves have no area under the histogram. The reason why this matters is that in Prometheus you can ask a less-than-or-equals question, and that's not a percentile question; that is a "how many values fell exactly under this boundary" question, which is maybe not what histograms are meant to do.
I
You know, maybe according to this document that we wrote a long time ago: what we are preserving, if we go this way, is an operational model for querying histograms. I would say, yeah, I understand. And the argument was made that you choose your histogram with the resolution that you want, and so the worst-case error should be within your tolerance, in which case these are compatible histograms.
I
We should tailor this conversation. Someone asked for an issue to be written up; you've asked for a meeting. I volunteered to check all of those things, to go to all those things and to read the issue, but I don't think I want to file that issue myself, since I wrote the specification for the way it is today. Someone else could do that for me.
E
No, I'm not asking you to write anything. I'm asking you to participate in that meeting, and maybe ask the other two folks to participate, to make sure we have a quorum from our side, the side that designed it, yeah. I understand your position and your concerns. I still believe that if somebody has to implement this without using Prometheus or OpenTelemetry as a library, they will have to choose one or the other, and it will be very confusing if they implement one or the other and...
I
My point would be that the implementation is going to be confusing if you allow this boundary condition to persist; the implementation becomes quite a lot simpler and easier to read, and that's written in the specs now. You know, it's one call to ldexp to compute your boundary, multiplied by a constant; that's it. There's no checking whether it was equal to zero and subtracting one, or anything like that; there's none of that. And I'd love to see the alternate implementation that takes care of this boundary.
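[Editor's sketch: the boundary computation alluded to here, in Python. A sketch under the conventions above, not the spec's exact reference code; for non-positive scales the lower boundary really is a single ldexp call.]

```python
import math

def lower_boundary(index: int, scale: int) -> float:
    # The bucket's lower boundary is 2 ** (index * 2**-scale).
    if scale <= 0:
        # One ldexp call: the simplicity argument made above.
        return math.ldexp(1.0, index << -scale)
    # Positive scales need a fractional power of two instead.
    return 2.0 ** (index / 2**scale)
```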
E
Yeah. So, for example... I think that's why it's important to have communication with them, and also to ask, because they are in the development phase; or maybe look at their document, read it, and see if there is a proposal there for an algorithm that is so simple that we haven't thought about it. Maybe it changes our mind. Let's try to be open, and see all the things. Richard, are you happy to moderate this discussion and such?
E
Yeah, I can be the moderator, since I have no clue about what you're talking about; I'm just asking questions here. I mean, technically I have no clue because I don't know the representation very well, but yeah, I'm happy to help with that. But we need to find a slot, and maybe find a meeting this week or next week to discuss this topic while Prometheus is still in development, and see where we get from there.
C
Sounds good. In theory we have the recurring call for WG Prometheus, which we can reuse; or, if you prefer, we can also use the TAG Observability one, but I think it's better to do this within OpenTelemetry. Both work. I'm on PTO tomorrow morning.
C
The question is how urgent this is. It's not super urgent from my perspective; I think it's super important, but not super urgent. So, should we do this more or less immediately and try to find time around PTO and everything (also, there's LinuxCon in two weeks), or should we...
C
The twenty-second. The twentieth is OpenTelemetry Day in Austin; I don't know who else of you is making it there, but I'll be there, so we can reuse the time slot on, let me check, 15:00 UTC on Wednesday the 22nd of June, and I'll try to make certain to get the Prometheus folks in. We already have the WG Prometheus folks in, because that's the recurring meeting. And Josh, would you also be okay joining this one, or should we do a different one?
E
Good, so we have a meeting on the 22nd of June where we'll discuss this. I think it's fine, since the Prometheus work is still in development; I don't expect that in three weeks things will change dramatically and be in production. So I still think it's good timing, Richard.
C
Yeah, I think it's defensible: there is working code, there is something which is already working and which was presented at KubeCon and everything, but it's not imminently launching.
E
Good. I think we have discussed this, and I think we have action items for these things, everyone. Next question: let's try to go over the rest of the items in the agenda. So, who is currently maintaining the AWS SDK? I would expect it to be...
H
Anthony, I think. Anuraag was driving this first, but Anuraag is not with us full-time, at least; he is no longer with AWS.
C
Yeah, I think that Bill Armiros, who's still on the X-Ray team at AWS, also contributed to that and should be knowledgeable.
B
Yeah, sounds good. This is the AWS SDK for JavaScript, so William from the X-Ray team has some comments in the PRs, and you mentioned Anthony, right, from AWS OpenTelemetry. Thanks.
I
Sorry, I put that there because we had João here; I think he's still on the call. ("I'm here.") If I said his name right: yes, please, would you like to talk about your PR? I think it's very important and I want to emphasize it.
K
Yeah, so I opened this a while ago; Tigran actually opened the issue and I volunteered to work on it. So the PR is open now, and it's about adding partial-success responses to OTLP exports, because today there's nothing. So, for example, when ingesting metrics, some metrics may be ingested...
K
...and some metrics may be dropped, for whatever reasons; or logs as well, for example. And right now there's no way to indicate that certain things weren't ingested.
K
So the idea that Tigran and I were discussing in the issue was that we could start easy, just having a string message with whatever error happened, or whatever the backends, or whoever handles that, can put there, and also the number of accepted spans, or logs, or data points.
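[Editor's sketch: the "easy start" described here, as a Python dataclass. The field names are illustrative; the PR was still under review at the time, so the final OTLP shape may differ.]

```python
from dataclasses import dataclass

@dataclass
class ExportMetricsPartialSuccess:
    # Free-form reason supplied by the backend (quota, bad data, ...).
    error_message: str
    # How many data points the server actually accepted; spans and log
    # records would carry analogous counts in their own responses.
    accepted_data_points: int
```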
K
We would probably need more structure to be able to do more interesting things on the receiver side, and that's pretty much it. It was also pointed out that we could use the LogRecord as a source for the structure, and I'm not sure, or maybe I didn't get it correctly, because I think your points were about, for example, the self-monitoring metrics, or the self-telemetry on the receivers.
K
But I was wondering: if we use the LogRecord that we have today, which has a body that is a string, and attributes into which we can put anything, I don't know if that will make the receiver side more complicated. Because then we don't really know what is inside, and we would need to define semantic conventions for, for example, the attribute keys.
K
I don't know, like "reason", or status codes, or stuff like that, and the receivers will have to check that, and so on. So I'm not sure about the LogRecord. I get the idea: it's flexible, it lets us add pretty much anything and decide afterwards what we want to do. But I feel like maybe we should step back a bit and see what we want to achieve. For example, do we want to have these self-monitoring metrics?
K
Do we want to do something else? Because maybe having more specific fields on the types is better, because then we simply don't have to check, and we don't have to come up with semantic conventions and things like that. I mean, both ways would work; but, as Tigran also mentioned, we kind of don't know much today. We have some ideas, but it's not entirely clear what we want to achieve. Definitely for metrics...
K
...that's a given, I guess. But for our case, for OTLP receivers, there should be a better way to indicate this information. As I said, for metrics I think it's clear, because we want, for example, to know how many data points were actually ingested; and for those that were rejected...
K
...do we know why they were rejected? Was it because of some quota, or because the data is bad, or because it doesn't conform to the format of the backend, etc.? Then the collector, for example, or any receiver, could use this to derive metrics, so the user could see that they're losing a lot of data points because there are some special characters in the metric name, or some stuff like that.
I
Watching for that recent discussion: please go see the issue. I've already said a lot there, and it's definitely something we should...