From YouTube: 2022-09-20 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
So, just for your information: that means that we will be merging more items that we had in the back corner. We just knew we didn't want to break 1.13, but it's done.
Okay, so the first item: pederev. He's not present, but he's asking us to take a look. Basically, it's for explicitly allowing fail-fast for invalid minimum values.
A
There we are, so this is it. Basically, this is specifically for SDK env values? Instead of generating a warning, we could actually be failing, you know.
A
That's an interesting one because, at least from the Lightstep experience, I can tell that some of these values, some of these environment variables, are more important than others. Some of them are important enough that you want to fail, but some of them are not that important, so maybe you want to just generate a warning and that's it.
D
I think in the past, at least, this was true for OpenCensus: the approach was that telemetry is never important enough to crash the application, and so any failure from your telemetry system shouldn't prevent the underlying system from working. I don't know if I continue to like that. I agree with you, there are things the user probably wants to crash on, but we do have to walk a fine line here between users who consider us an add-on and users who consider us critical, right?
C
Is this also a good time to discuss possibly adding a strict mode or something like that? I know it's come up in the past a handful of times and never really gone anywhere, but it seems like never failing might be something that some users want, and failing early might be something other users want, so having a way to configure it might be a good idea.
C
Rather than a strict mode, what about a debug mode? So effectively, when it's in debug mode it will fail. Or developer mode; maybe that's a better word. Then you can also maybe enable different log levels and such.
D
My fear is it might become like Perl, where everyone learns to just always run in strict mode and never, ever not run in strict mode. But that's probably a sign to us that the direction we took in the spec was wrong in the long run, so I'd be a fan of not blurring the line drastically and making it really confusing whether or not we consider ourselves an add-on or core.
D
I do like that idea of having a strict-mode environment variable: if you set it, then any other environment variable problems become failures instead of warnings, right? So I do like that. Can we take action on that? Are we going to comment on the PR to say, hey, we'd rather do this instead?
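A minimal sketch of what such a switch could look like in an SDK, assuming a hypothetical `OTEL_STRICT_MODE` variable (the name is illustrative, not anything the spec defines):

```python
import os
import warnings


def read_positive_int(name: str, default: int) -> int:
    """Read an integer env var; warn and fall back normally, fail fast in strict mode."""
    # OTEL_STRICT_MODE is a hypothetical switch for this sketch.
    strict = os.environ.get("OTEL_STRICT_MODE", "").lower() in ("1", "true")
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        value = int(raw)
        if value <= 0:
            raise ValueError(f"{name} must be positive, got {value}")
        return value
    except ValueError:
        if strict:
            raise  # strict mode: invalid env vars become hard failures
        warnings.warn(f"ignoring invalid {name}={raw!r}, using default {default}")
        return default
```

With the strict variable unset, a bad value only produces a warning and the default is used; with it set, the same value raises.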
A
In the pull request list, we were holding some of them because of the release of 1.13, but now it's time to go and review them. The first one is about boolean values; it's also environment variables, by the way. There are enough reviews, and this is actually blocking one more from Bruno. There's general agreement, so yeah: if nobody opposes this, we will actually be merging that later today, hopefully. So this is just for your information, in case you have some opinion different from the general agreement.
A
There has been some discussion about the exact, the very exact, name, and I feel like we are kind of micromanaging there, so we can find some middle point. I personally don't have a hard opinion on this one, but I think we have been discussing just a minor detail. So please comment on that one.
A
I don't know if we have Riley, Alex, Bogdan, Tigran or Jack here to comment on that, but if we could find an agreement on whatever works here that is decent enough and just go with that, that would be fantastic.
E
Yeah, I don't have any strong preference on this. I agree, we seem to be bikeshedding on this. Just some indication would be nice, to indicate that it's a language SDK version of an OTLP exporter versus just some other client that is not associated with an OpenTelemetry language SDK but is using the OTLP protocol. I think that's useful, but I don't really care how it's accomplished.
A
Yeah, Riley, are you around?
A
Oh, never mind. Yeah, because he's complaining about something there: that it's too repetitive having both "OTel" and "OTLP". But the reason for having that is that there could be a different vendor or company or organization creating their own OTLP exporter, you know. So that's the reason. If there's agreement with that, I will just comment that that's the reason, and let's go with it. Other than that, let's go and merge it; I think the general agreement is down to just the very final detail.
A
Okay, perfect, so I will put a comment there and then we can go with it. Sweet, thank you. The next one is about changing the default buckets for explicit histograms. I don't know if the author is here, but there's general agreement. Jack, you were asking whether this can be considered a breaking change or not.
E
Well, it's interesting, because I'm interpreting it as a breaking change to the specification, but this breaking change actually brings the specification into alignment with what Java does today. We made a mistake: our default histogram bucket boundaries were not aligned with what was defined in the specification. So it's breaking, but it helps us. It's also interesting that the other stable, well, maybe I'm getting this wrong.
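For concreteness, recording into explicit buckets is a cumulative-boundary lookup. The two boundary lists below are my recollection of the old spec default and the proposed replacement; treat them as illustrative and check the actual PR:

```python
import bisect

# Illustrative only: the old spec default boundaries, and the proposed new set
# (the old list plus 750, 2500, 5000, 7500, 10000), as I recall the change.
OLD_DEFAULTS = [0, 5, 10, 25, 50, 75, 100, 250, 500, 1000]
NEW_DEFAULTS = [0, 5, 10, 25, 50, 75, 100, 250, 500, 750,
                1000, 2500, 5000, 7500, 10000]


def bucket_index(value: float, boundaries: list) -> int:
    """OTel explicit buckets are upper-inclusive: bucket i covers (b[i-1], b[i]]."""
    return bisect.bisect_left(boundaries, value)
```

A value of 600 lands in (500, 1000] under the old defaults but in (500, 750] under the new ones, which is the resolution change being discussed.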
E
.NET is one of the other stable metric SDKs, and they're clearly proponents of this; they're in favor of making this change. So that would be Java and .NET in agreement on this. Is Python the only other stable metrics SDK?
A
Yes, it is. Okay, that's a good call! So let me poke Diego here; he's a maintainer, so I can probably ask him to take a look and see if he confirms that this is fine.
E
Yeah, and so I guess I'll defer to the TC folks on how to make the call: there's a breaking change to the specification, but it's a breaking change that doesn't materially impact the SDKs. Is that allowed? I think that's a question for you all to discuss.
A
Yeah, perfect. In the meantime, I will try to poke Diego, so he, or anybody from the Python SIG, can review this one, and we will make a TC call on this one very soon.
F
So I've got a question on that, though: if it's a breaking change, then it wouldn't comply with our policies. I think that's the key, because I know this definitely has developer issues associated with it. I know there are people that don't have stable SDKs out, so technically they could still change it, but I think this points at a bit more of a meta problem.

F
If we're releasing things where it's like, well, this is going to be part of our stable guarantees, but we don't actually plan to adhere to our stable guarantees, then what is the value of our stable guarantees for the specification? Because I think this is a really key thing: what do we mean when we actually release a stable specification? I know that Go just released an alpha SDK that's compliant with 1.12 of the specification, and now it's not going to be compliant.
F
Does this actually break our guarantees? Because I know in our guarantees we had always kind of talked about how the telemetry that's delivered may change over time, and so I wonder if this falls under that purview. I want us to be careful here, because setting a precedent like this is not a good one if it is going to be a breaking change; but if it doesn't actually fall under what our versioning policies cover, then I don't think it's as critical.
D
This one is really subtle. In practice, users who relied on the default will be broken, and from that standpoint, fundamentally, we should probably consider this a spec-2.0 kind of change. But what's interesting is that you can also avoid breaking users as an instrumentation author, because you can specify your own buckets if you're worried about that, right? The other thing is, we don't have any guarantees on the instrumentation itself yet.
D
What happens if you have the previous default buckets and the new default buckets? As long as those metric time series have unique identities, you probably won't even notice, because you're doing histogram-style queries, so you'll just see a different resolution and you'll have more accuracy on some rate queries that you do on that histogram. So from some standpoint I would argue that the break that occurs from this change is rather minimal from a user standpoint, but I think you're right that we're on a rickety walkway.
D
It is very dangerous to just justify changes like this in that fashion. However, if you think of Prometheus users, of histogram users in general, the likely scenario you'll see is that previous versions were sending histograms that are missing these buckets, and new versions have the buckets. Prometheus is designed to handle that in some fashion, because they're going to be unique time series (the identity of the time series will be different), and your queries should not break unless you're querying for a specific bucket, which is a more rare thing. So in terms of the number of users broken here, it should be almost zero.
F
Yeah, I'm also really interested, Josh, in how an instrumentation author sets the buckets of histograms. But one of the other things I wanted to point out is that this is a specification that's intended for the community, and there's nothing that says vendors themselves couldn't implement their own SDK. There's a really good chance we have 100% of the SDK implementations here in the call, but the specification is released as a standalone document to guide the community, whether they want to follow it or not. So, to that point, we just want to make sure that if this is going to be a breaking change, it's something that we identified in our versioning and stability guarantees as something we said we will be doing.
E
I just want to add a comment to what Josh said. Josh, the summary of what you were saying was that in all likelihood a small number of users would be broken by this. When I commented on this PR, I said something to that effect: it's impossible to tell what vendors are doing with these histograms that they receive. We're talking about Prometheus, which is probably not affected.
E
My vendor's (New Relic's) users will not be affected. But to say that something is not an impactful breaking change: even if only a small number of users are impacted by it, it's still a breaking change.
G
I sympathize with all the things that have been said. First of all, when I read this, I assumed it would be "should" advice, saying that SDKs should use these defaults. I realize it doesn't, so maybe I need to change my opinion. But if this were advice for all future stable SDKs to follow, and would ignore the two that have already chosen their buckets, that would be okay with me.
G
The other point I have here is: I looked at the OpenMetrics specification just now, and it has a statement that says bucket values may have exemplars. The next sentence says buckets are cumulative, to allow monitoring systems to drop any non-infinite buckets for performance and denial-of-service reasons, in a way that loses granularity but is still a valid histogram. To me, that makes it okay to add buckets, because they can be removed safely.
G
So adding buckets would be okay here, but not removing them, and I believe that's because every existing query is going to keep working. If you had a query for all the values less than 10 as a counter value, you're going to see that, and even inserting a 7.5 boundary is not going to change the value of the less-than-10 count. So I think this is actually not breaking from the Prometheus perspective. It's certainly not breaking for us as a vendor. That's all I have to say.
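G's point about cumulative buckets can be checked directly: inserting a new boundary never changes the count observed at any pre-existing boundary.

```python
def cumulative_counts(values, boundaries):
    """Prometheus-style cumulative histogram: for each boundary b, count of values <= b."""
    return {b: sum(1 for v in values if v <= b) for b in boundaries}


observations = [1.2, 3.0, 6.8, 7.7, 9.5, 12.0]

before = cumulative_counts(observations, [5, 10])
after = cumulative_counts(observations, [5, 7.5, 10])  # insert a 7.5 boundary

# Queries against the pre-existing boundaries are unchanged:
assert before[5] == after[5]
assert before[10] == after[10]
```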
D
Yeah, I guess to follow up on that: we know a few vendors of metric systems here, and I think for most of us this probably won't break us.
D
There might be an existing vendor that this does break, and I would be surprised if that's the case, but here's the fundamental problem with OpenTelemetry: we don't own the storage, and a breaking change is basically defined by query-time usage by users, right? So we have to rely on a data model for which we define breaking changes, and for metrics we decided to align on the Prometheus time-series model; that was done early in the data model. So if we want a litmus test: does it break general Prometheus usage, yes or no?
D
That's why that litmus test works for this case. Even though this looks breaking, there are things in place to make it non-breaking, kind of like in Java, where you can add a method to an interface and for some reason that doesn't break, even though theoretically it can in various situations. From the standpoint of what we consider compatibility, that's non-breaking because of how that backend works. For us, it's how our data model works, since someone's going to interact with our metric data model.
E
Yeah, so one potential recommendation. I remember back with the OTLP exporter, we changed the default: the default endpoint was https://localhost, and we changed it to http. We wanted to take away the https, because people aren't sending encrypted communication to their local collector when it's running locally.

E
The language used to make that correction was something to the effect of: languages should use http, the non-TLS version, unless they have some historical backwards-compatibility reason to do otherwise. Could we do something here to a similar effect? Say: languages should use these new buckets unless they've previously established a different set of bucket boundaries and have to stick to those for compatibility reasons.
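The precedent E describes shows up in the exporter's endpoint handling. A sketch, where the default shown matches my reading of the OTLP/gRPC exporter spec:

```python
import os

# Plain HTTP to a local collector: encrypting traffic to localhost buys little.
DEFAULT_OTLP_ENDPOINT = "http://localhost:4317"


def otlp_endpoint() -> str:
    """Resolve the exporter endpoint, preferring the standard env var."""
    return os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", DEFAULT_OTLP_ENDPOINT)
```

Users who need TLS simply set the env var to an https endpoint; only the default changed.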
F
I think that's a good point and it may be a good solution. The only thing I worry about is consistency across implementations, but like what you already said, Jack, at the beginning of this conversation: that may not actually apply to any OpenTelemetry-specific implementations of the spec, so it might not be an issue. But it would cover our liability for anybody who's implementing this outside of the OpenTelemetry group. So I think that might be a good solution there.
A
Perfect. If this is it for the change to the default buckets for explicit histograms, let's move to the next one. This is mostly also for your information: there's this PR for adding process metrics. It had general approval, but we were holding it back because of the split changes, you know, the metrics split. But you can see it has enough reviews, and it's specifically adding a pair of, well...
A
Okay, the next one is about a clarification that Bogdan put up regarding changing a span asynchronously after you receive it in the OnStart operation. It's a small clarification, but at the same time it's kind of changing some of the semantics, even though it's only a clarification; it has been there forever. So it's probably a good one to read. Christian had some reservations against this, because he thinks it changes some slight, not very obvious technicalities.
A
As I said before, this already has enough approvals. Just please take a look; if nobody opposes, we can probably merge this in the next few days.
A
Okay, next one, the last one from my side: add a mobile language semconv for browser resources. Similar situation: now enough approvals.
A
Okay, if this is good, we can probably merge that by the end of the day today. Perfect, that's all from my side. So now let's jump to the rest of the items. And, sorry, you created a couple of new metric name proposals; I wonder if there's anything controversial. Okay, do you want to talk about that?
B
Yeah, there's a paging one, actually. This one: there's a comment down there below, and I just noticed after reporting the issue that in the system namespace there's a system.paging.faults, so probably process should follow the same convention of process.paging.faults; it's just the child of it. I'm not sure if it should be included at all. And the first one, I think, was already commented on; I was just about to post my comment too on this one.
G
Yes, that's an ancient one. I have to say I wasn't ready to discuss this. If you ask me what my recommendation is today, I think you could define process.uptime; that would be a good metric name: the non-monotonic sum of uptime seconds. That would be the most traditional way to do this. You can define it so that its rate gives you an "up" variable, which is usually a constant one.
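G's suggestion in miniature: report uptime as a non-monotonic sum, and its rate is the constant "up" signal.

```python
def uptime_samples(start: float, report_times: list) -> list:
    """process.uptime as a non-monotonic sum: seconds since process start."""
    return [(t, t - start) for t in report_times]


samples = uptime_samples(100.0, [110.0, 120.0, 130.0])

# The rate between consecutive reports is the constant "up" value of 1.
rates = [(v2 - v1) / (t2 - t1)
         for (t1, v1), (t2, v2) in zip(samples, samples[1:])]
assert rates == [1.0, 1.0]
```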
G
However, one could point out that a resource attribute saying when you started would, potentially (and this depends on some data model questions), let you define an uptime without having a metric at all. I think that is perhaps making the perfect the enemy of the good here. I think a process.uptime metric is probably the right solution for the current state of the world.
A
Okay, perfect, thank you so much for that. Next one: dashboard update for Prometheus namespacing, please.
H
So this is now, I think, a simpler proposal than it was previously, and we don't need to change naming conventions. As we discussed last week, we can add instrumentation scope name and version to Prometheus metrics.
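A sketch of that mapping. The label names `otel_scope_name` and `otel_scope_version` are my recollection of the proposal, so verify them against the actual PR:

```python
def with_scope_labels(labels: dict, scope_name: str, scope_version: str) -> dict:
    """Attach the instrumentation scope to a Prometheus metric's label set."""
    return {
        **labels,
        "otel_scope_name": scope_name,       # assumed label name
        "otel_scope_version": scope_version,  # assumed label name
    }
```

Because labels are part of a Prometheus series identity, two scopes emitting the same metric name stay distinguishable without renaming the metric itself.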
E
Yeah, sure, I'll just give a little bit of context. So in 1.13.0 of the spec, we made a change to redefine the bucket boundaries to align with what's anticipated to be the bucket boundaries of Prometheus's exponential histogram design.

E
So we made that accommodation. That's been implemented in Go, in Java (in PR form), and in .NET, and I've linked to a Python implementation here. So I'm just wondering: with the release of 1.13.0, have we kind of opened ourselves up to make some more changes to the specification?
D
One of the reasons I've been personally hesitant (though this is also a time issue for me, because I'm focused on other things right now) is the Java implementation and the performance comparison of exponential histograms versus explicit buckets.
D
I don't know if we've pulled the performance down to where it needs to be. One of the goals around exponential histograms was eventually, I think, to make them the default kind of histogram and to really promote them, and I think we basically need some consistent performance measurements to see where they land versus explicit buckets, because nominally these should be a wholesale replacement, right? I think we really wanted to go after them hard, but I don't think we can do that yet.
D
So I think we need to understand the implications today of our implementations, their performance, and when to recommend exponential histograms. We could stabilize, say, the actual aggregator, but I think that should come with some guidance on when to choose explicit versus exponential. That's the only bit on my mind that's missing.
E
Okay, so performance reasons. Is there anything in the definition of an exponential histogram that makes the performance fundamentally worse than explicit buckets? Or is this just an artifact of Java's implementation of explicit versus exponential? Because if there's nothing inherent in the design, then I think these are two separate problems: stabilize the exponential histograms, and then go back and...
D
I hear what you're saying. To me, there are design decisions that were made that maybe we have to reevaluate if we're going to stabilize them. What are we trying to accomplish by stabilizing exponential histograms? I think widespread usage is the thing we're trying to gain.
G
Is there an actual performance concern somewhere? I mean, to me the data structures were very fast, and for our vendor I want all exponential histograms, no explicit-bucket histograms, at this point; we're already making that recommendation. If we found a bug, I think we would call it a bug in one of the implementations.
G
I think the question Jack's asking is whether to stabilize the specification. I would vote yes: we've got four implementations at this point that have not identified bugs in the spec. If there are bugs in implementations, fine; that's a bug of the implementation.
C
Also, as long as there's nothing fundamental to the data structures in the spec that causes the performance problem, as mentioned by someone else: just because the spec is stable doesn't mean that Java has to say that theirs is stable, if they want to wait until they've worked out some performance bugs.
G
The meaning of the field is clear, but there's not much that's been said about how a configuration or an API should change. Like, how do you configure an exponential histogram with a zero threshold of one, which means anything less than one should fall into the zero bucket? That's what they're talking about, and it would be a new option and some new code. I think of it as a refinement, though.
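The zero-threshold option G describes, sketched on top of the data model's index mapping (base = 2**(2**-scale)). This is a simplification that folds negative values into the same bucket map by absolute value:

```python
import math


def record(value: float, scale: int, zero_threshold: float,
           zero_count: int, buckets: dict):
    """Record one measurement; values within the threshold go to the zero bucket."""
    if abs(value) <= zero_threshold:
        return zero_count + 1, buckets
    base = 2 ** (2 ** -scale)  # e.g. scale 0 gives base 2
    # Bucket index such that base**index < |value| <= base**(index + 1).
    index = math.ceil(math.log(abs(value), base)) - 1
    buckets[index] = buckets.get(index, 0) + 1
    return zero_count, buckets
```

With a zero threshold of 1, a measurement of 0.5 increments only the zero count, while 4 at scale 0 lands in bucket index 1, the (2, 4] bucket.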
E
Great. Josh, real quick while you're on the phone, so we can just communicate this synchronously: you mentioned earlier that one of the goals was to make exponential histograms the default histogram. I think that's been taken off the table because of a previous discussion. We tried to have this definition of a "best available" histogram that had the ability to change from explicit buckets to exponential, but we rejected that idea, and so we've instead added an option to easily configure exponential histograms to be the default.
D
Right. What my fear is, though: there's friction now because there's an option to change it, but if there are also performance issues, right, then if we mark this as stable and people start trying it out and hit performance issues, my fear is that we will never outlive that performance concern, ever. So when these come out the door, they need to be...
G
You just answered my question, which is: is there a real performance concern here, and where is this coming from? I guess you're saying that if you look at the number of nanoseconds to update an explicit-bucket histogram compared with an exponential-bucket histogram, you see a two-to-four-x difference. Yeah, yeah.
D
There were some just dramatic inefficiencies. It was designed so it took the Datadog design, not the New Relic design, in terms of how the implementation was actually made. As you know, Datadog can actually compress buckets, because they use doubles to record numbers and they're recording something like an average in a bucket. So, fundamentally, the data structure was wrong, and we still need to clean that up.
G
So the Go data structure uses a circular array. There's an O(n) operation: if you need to rotate or expand the array, you're going to copy all the buckets. But otherwise, I'm not sure it's a fair comparison, and I would accept the exponential histogram cost for the improvement in resolution.
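A simplified sketch of the structure G describes, flattened to a dense array rather than a true ring (so it shows the O(n) copy on expansion without the rotation bookkeeping):

```python
class DenseBuckets:
    """Bucket counts over a contiguous index range.

    Incrementing an index already in range is O(1); an index outside the
    range forces an O(n) copy into a larger array, the rotate/expand cost
    mentioned for the Go implementation.
    """

    def __init__(self):
        self.counts = []
        self.base = 0  # bucket index represented by counts[0]

    def increment(self, index: int) -> None:
        if not self.counts:
            self.counts, self.base = [0], index
        elif not (self.base <= index < self.base + len(self.counts)):
            lo = min(index, self.base)
            hi = max(index, self.base + len(self.counts) - 1)
            grown = [0] * (hi - lo + 1)  # O(n) copy on expansion
            grown[self.base - lo:self.base - lo + len(self.counts)] = self.counts
            self.counts, self.base = grown, lo
        self.counts[index - self.base] += 1
```

A real implementation bounds the array size and rescales (halving the scale merges adjacent buckets) instead of growing without limit.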
D
Yeah, I think there are still some things to look through there that can get corrected. But fundamentally, if you look at what our benchmarks were, what we were doing our prototype based on, and where the implementation is, I still think there's headroom to fix in Java. Effectively, I don't know if we can get down to where explicit buckets are, and again, my fear is: if there's overhead going from an explicit bucket to an exponential one, should we be recommending a wholesale replacement? Should we be making that path easy?
G
I mean, I looked at benchmarks in Go, and we're talking about a sub-100-nanosecond update operation here. So it came to the point of: am I willing to optimize the mapping function to use a table lookup, because that'll save me 10 nanoseconds? And I'm not willing to do that; it's way too complicated. So at some point I made a decision about the mapping functions.
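The mapping function under discussion, in its straightforward logarithm form (my reading of the data-model definition; the table-lookup optimization G declined to write would replace the log call):

```python
import math


def map_to_index(value: float, scale: int) -> int:
    """Bucket index such that base**index < value <= base**(index + 1)."""
    base = 2 ** (2 ** -scale)
    return math.ceil(math.log(value, base)) - 1
```

At scale 0 the base is 2, so 4 maps to index 1 (the (2, 4] bucket) and 5 maps to index 2 (the (4, 8] bucket); raising the scale by one halves the bucket width.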
G
I'd rather have them be simple than fast, and the overhead involved in updating one histogram was, in my opinion, insignificant compared with the cost of handling all the data. You're going to export ten times as much data, so whether you pay a few nanoseconds to update it hardly matters.
E
If we can make the trade-off clear, then I think it's up to users to decide whether it's worth it. But Josh, you've clearly looked deep into this code. I have looked into it pretty deeply too, just not in some time. I'm curious if you could enumerate the issues or concerns you have, or just do a brain dump in an issue, and I can follow up on that, because I have cycles that I can dedicate to making this happen.
D
Sure, yeah, I'll have to refresh myself and then write down what I was doing. I have about five abandoned branches of different performance fixes for that code that I never got to the point where they would be peer-reviewable, so we'll work on that. Thanks.
E
We'll follow up, Josh, and I'll open a PR to propose stabilizing the specification, and we can comment there on the relative advantages or disadvantages of proceeding with stabilizing it or waiting further. I think there's enough interest that we can at least have a conversation now.