From YouTube: 2021-07-01 meeting
A
Okay, we're going to start. First thing, just a heads up: I updated the project board. Originally we were trying to see if we could target a Metrics SDK experimental release by end of June, and we couldn't meet it. So I set the new date as end of July, which, given the current progress, if the PR is blocked for a month, I think is very aggressive. So, just a heads up, and it looks like we'll need some help here.
A
I think I probably don't have enough energy; I need some help. So I wonder if folks here can help take some part, so we can work in parallel. I understand everyone is busy, so if you're busy with something else, don't feel obliged; I can reach out to the other spec folks. I remember there are folks from the JavaScript side, like Dynatrace; they might be there, by the way. So, Josh, thanks, it seems you have a very deep understanding of histograms and exemplars, so I wonder if I can put your name on it.
C
Yeah, and I think there's a high correlation there. Anyway, I've been looking at how Prometheus does exemplars, and we have lots to talk about, but I'm more than happy to write down all my findings. I was already working on an exemplar PR for the SDK that has been sitting on my to-do list for a long time, so I'm more than happy to take that and push it.
A
Yes, I think this part is challenging; we can probably talk about that later, I have some topics here. So, talking about the aggregators, I wonder if anyone else can give some help here.
A
So, Victor, let me just put your name here; you can send the PR and see where that goes, and if you need help, we can come back and see if someone else can work together. By the way, Victor, you probably changed your microphone or something; your voice changed a lot, so it's very hard for me to connect your new voice with you. Do you want to confirm it?

F
It's Victor! Is that better?
A
That's strange. Okay, so for the exporter: currently, let's say my concern is... that part seems to me quite straightforward. I think the exporter data model will follow the metrics data model, so basically the OTLP model, and for the built-in exporters we should just have four of them, so that part should be straightforward. Unless someone sees something and I'm being over-optimistic.
C
Where are we accounting for the pipelining, and multiple exporters versus one exporter? If that's part of the exporter work, then I expect people to argue. Okay, good, but I expect there to be a good, awesome debate there, and I think that needs to be called out as an item for someone to push on.
E
Well, I hope that it doesn't take me that long, and I don't know that I have enough to say for that long, but there's definitely a debate happening and I'll try to run people through it. Hopefully that can help us decide how much priority to give it. Let me see, I'll share for a second just to show you a little bit, if you can unshare.
E
All right, you can see now. I posted an issue a week ago, after merging an OTEP which had been open for a long time, since March, but that was building on a PR that had been open since, I'm going to say, September or October. So this PR had been through a lot of debate, and then we decided there was so much debate we couldn't just have a PR.
E
We had to have an OTEP, so the author, uk, went back to the drawing board and came back with the OTEP, which I merged last week. If you read through this document and the debate that was there, this is primarily being contributed to by the industry: Datadog, Dynatrace, and New Relic are the primary contributors to this OTEP. And there are some, sort of, I call them...
E
Experts: people who are known in the world for publishing papers about histograms and telemetry, industry people who have been added to this thread. At the end of the thread there was a lot of consensus on this particular proposal. It's an exponential histogram; sometimes people call that a logarithmic histogram. It's basically the simplest of this kind of parameterized histogram. These are compressible, because all the buckets are explained by one growth rate, so you don't have to list every bucket.
E
You just list your growth rate, and then for every bucket index you can just do some math to figure out where its boundaries are. So that's the basic idea. It's so simple.
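As a rough illustration of the "some math" being described here (a minimal sketch, not any SDK's actual API; the function names are invented), the whole bucket layout follows from the growth rate alone:

```python
import math

def bucket_index(value: float, base: float) -> int:
    """Index i such that base**i <= value < base**(i+1)."""
    return math.floor(math.log(value) / math.log(base))

def bucket_bounds(index: int, base: float) -> tuple:
    """Lower/upper boundary of a bucket, derived purely from the base."""
    return base ** index, base ** (index + 1)

# With base 2, the value 10 falls in bucket 3, i.e. the range [8, 16):
i = bucket_index(10.0, 2.0)
lo, hi = bucket_bounds(i, 2.0)
```

Nothing per-bucket needs to be transmitted: sender and receiver only have to agree on the base.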
It kind of doesn't require much explanation. So, where we ran into trouble (and this is a long thread, I don't expect anyone to read it): the author of OpenHistogram came into the room; that's Theo Schlossnagle. The other author of the Circonus histogram, or the OpenHistogram (it's been renamed from the Circonus histogram), is Heinrich Hartmann.
E
He's been involved in our meetings a bit; he's someone who is interested in OpenTelemetry. He's working at a customer of Lightstep, just as a disclaimer, so there's a connection between me and Heinrich for that reason. But Heinrich was coming in kind of as a neutral party, saying: I don't care what we do, but we want histograms for our users. So Heinrich is surprisingly neutral, given he's one of the authors of OpenHistogram.
E
The biggest claim that the author of OpenHistogram came into this room with is that if you have parameterization that allows arbitrary boundaries, we are going to end up in a world where histograms get converted from one format to another as they transfer from one protocol to another, and the boundaries belong to different exponential families, essentially, or different parameterization schemes.
E
So the first kind of rule that was put out is that we should be able to have a histogram that everyone agrees to use, and this may actually be too much to ask for, because if we decide to go with a base-10 scheme, then all of our buckets are going to be aligned with powers of 10, essentially, and this is good as long as everyone agrees to it.
E
But as soon as you have someone else who says, I want a base-2 histogram, it means all my histogram buckets are on power-of-two boundaries, and now, if I have to change from one to the other, they just don't line up. There's always going to be a little bit of error introduced in order to convert base-10 histogram boundaries into base 2, and so on. So this debate has essentially come down to: let's have a choice here.
E
We all kind of agree that we don't want two histograms; we all kind of agree that we only want to implement one data format. But choosing a histogram means choosing a winner, unfortunately, and OpenHistogram is trying to say: I would like to donate this, it's good for the community. But not everyone likes it, and then, I would say, it turns out that the research and development that went into this OTEP 149 is actually really compelling.
E
So in the course of discovery here, what we learned about is essentially a new system called UDDSketch. It's one of the variations on DDSketch that was published from academic sources in the last few years, and it basically modified DDSketch with a very simple idea that is very powerful.
E
So this idea of perfect subsetting was brought into the conversation, and you can do perfect subsetting for a binary scheme, and you can do perfect subsetting for a base-10 scheme. Now, OpenHistogram doesn't actually do this, but it's a fairly trivial extension, so I wrote it up. This is in my ticket here; this is the referendum on histogram formats.
E
So this is sort of a quick summary of OTEP 149. It says: let's assume we're using powers of two as the growth ratio, and they're going to mean fractional powers of 2. So I'm going to say 2 to the one-half, or 2 to the one-quarter, or 2 to the one-eighth.
E
Let's see, I didn't write down the equation here, but the format of this proposal in OTEP 149 is that your boundaries are located at 2 to the power (i times 2 to the minus k), where k is a scale factor, and we've inverted it so that larger scales give you a higher resolution. So this is an example of what you would get in this binary scheme: you choose a scale of, let's say, two, and it gives you 27 buckets across the range of one to a hundred.
E
If you chose a scale of three you get 54, and at four you get 107; you're doubling every time, and we can always collapse these down, and they have perfect boundaries, so they perfectly subset.
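A sketch of the scale arithmetic being described, assuming OTEP 149-style boundaries at 2**(i / 2**scale); the helper names are invented for illustration:

```python
import math

def buckets_for_range(lo: float, hi: float, scale: int) -> int:
    # Boundaries sit at 2**(i / 2**scale); counting the indices needed
    # to cover [lo, hi) gives the bucket count for that scale.
    factor = 2 ** scale
    i_lo = math.floor(math.log2(lo) * factor)
    i_hi = math.ceil(math.log2(hi) * factor)
    return i_hi - i_lo

# Doubling resolution with each scale over the range 1..100:
counts = [buckets_for_range(1.0, 100.0, s) for s in (2, 3, 4)]  # 27, 54, 107

def downscale(index: int) -> int:
    # "Perfect subsetting": a bucket at scale s collapses exactly into
    # bucket index // 2 at scale s - 1, so two histograms at different
    # scales can always be merged by halving the finer one.
    return index // 2
```

This is why the 27/54/107 progression above falls out of the scheme, and why collapsing is lossless.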
So this is actually very close to what we understand Google is using, according to some internal discussions, or so Josh Montana has said. But this is basically the UDDSketch scheme; it is base two.
E
It means that if you have floating-point numbers in IEEE 754 format, you can just do bit manipulations, a shift and a mask, to get the boundary positions for any given floating-point number. So, assuming you have floating-point hardware, this is a really compelling answer.
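The shift-and-mask trick can be sketched like this (illustrative only; a real implementation would also handle zero, subnormals, and fractional scales):

```python
import struct

def base2_index(value: float) -> int:
    # Reinterpret the float's bits and mask out the IEEE 754 binary64
    # exponent field; no log() call is needed for the scale-0 case.
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    biased_exponent = (bits >> 52) & 0x7FF
    return biased_exponent - 1023  # remove the IEEE 754 exponent bias

# 10.0 is 1.25 * 2**3, so its scale-0 bucket index is 3;
# 0.5 is 1.0 * 2**-1, so its index is -1.
```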
E
Okay, but why are people talking about OpenHistogram? It's sort of the same idea. I actually pulled up their paper; I think they have a great diagram. The idea in OpenHistogram is essentially called log-linear: you have two parameters, one is a logarithmic scale and one is a linear scale. So you do logarithmic for your exponents, basically, and then you do linear within each exponent. So in a decimal scheme, I call them decades.
E
It's four per decade, say; so you can imagine having 90 divisions between 1 and 10, 90 divisions between 10 and 100, 90 divisions between 100 and 1,000. The power of this technique is that all of your divisions land on integers, and that helps if you don't have floating-point hardware. It also helps because it maps onto human round numbers: every OpenHistogram boundary is a power of 10, or an integer fraction between two powers of ten. And the reason why 90 is so important is that you have to have a factor of nine in order to end up with integer divisions between your decades. So for OpenHistogram, the number...
E
The number 90 is very important, and that's this line here: you get boundaries at 1, 1.1, 1.2; you get 90 of them up to 10. And then from 10 to 100 you get 10, 11, 12, 13; you get all the integers between 10 and 100, and that's 90.
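The 90-per-decade grid can be generated mechanically (a hypothetical helper for illustration, not circllhist's actual API):

```python
def loglinear_boundaries(decades=(0, 1)):
    # Log-linear grid as described above: within each power of ten,
    # 90 equally spaced steps, so every boundary is an integer multiple
    # of a power of ten (1.0, 1.1, ... 9.9, then 10, 11, ... 99).
    bounds = []
    for d in decades:
        step = 10 ** d / 10  # 0.1 in the first decade, 1 in the second
        for k in range(10, 100):  # k = 10..99 gives 90 boundaries
            bounds.append(k * step)
    return bounds

b = loglinear_boundaries()  # 90 boundaries in [1, 10), 90 more in [10, 100)
```

Each decade contributes exactly 90 boundaries, which is the point being made about the factor of nine.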
E
To do perfect subsetting with OpenHistogram, I basically made this up, because I think it's a good idea. I think the thing people don't like about OpenHistogram is that it's too high resolution, but if we just take the same perfect-subsetting idea from OTEP 149 and apply it to OpenHistogram, this is what you get, and it makes some good sense. So if I have only 18 divisions per decade, I end up with these fairly nice round numbers as well: 1, 1.5, 2, up to 10.
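The 18-per-decade grid is literally every fifth boundary of the 90-per-decade grid, which is what makes it a perfect subset; a small sketch (names invented):

```python
def coarse_decade(fine_per_decade=90, factor=5):
    # The fine grid in one decade: 1.0, 1.1, ..., 9.9 (90 boundaries).
    # Keeping every `factor`-th boundary leaves 18 per decade:
    # 1.0, 1.5, 2.0, 2.5, ..., 9.5, all aligned with the fine grid.
    fine = [1 + k / (fine_per_decade / 9) for k in range(fine_per_decade)]
    return fine[::factor]

coarse = coarse_decade()  # 18 boundaries per decade
```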
E
Those are nice because they actually align with Prometheus boundaries: the default boundaries in Prometheus happen to fall into this category, so you have 36 divisions between 1 and 100, 18 per decade. This is a proposal that, to me, fits with Prometheus really well and fits with human intuition really well, but it requires this decimal approach, which is going to be pervasive, and any time you convert back into another format you're going to have precision issues; you're going to introduce errors.
E
So that's the end of my dialogue here. These are both good options: one is friendly towards computers, one is friendly towards humans. I don't believe there's a great acceleration we get by choosing either, because we have to build new code from scratch to make this go through.
E
In the collector there are going to be a lot of conversions from legacy formats that have to be built, and I think that, no matter what we do, it's not going to be a fast move. There's something appealing about choosing OpenHistogram, but it's also pretty contentious: every single vendor sent somebody into that room and voted against it. So it's not clear why we would actually choose OpenHistogram, given that the vendors seem to strongly believe in the base-2 approach. Now, as soon as I tweeted about this...
E
So there are two or three options of just styles of protocol, and that doesn't talk about what libraries we're going to use, and it doesn't actually talk about exactly what encoding we're going to use; there are still sparse versus dense encodings and so on. But the biggest choice seems to be choosing base 10 versus base 2 as the kind of foundation, and then we can actually have multiple protocols that convert perfectly between each other, if desired. I don't really know what's best; I was hoping to get lots of input.
E
Starting about here, three days ago. I have not read this whole thread, and I apologize for that, but here we are; I hope I've given you enough background to jump into this thread. Everyone commenting here is pretty much an important person: giltene is the author of HdrHistogram. I haven't figured out who guard taylor is yet, but I'm sure Charles Masson is DDSketch, and uk is New Relic. We've got HdrHistogram, and uk wrote an essay here.
E
And you have a lossy translation between base 10 and base 2, and there's that. So I'd be interested in having essentially any library, the most popular library, but it has to kind of fit with the scheme in the protocol. That's why people caution us against having more than one protocol, or more than one sort of scheme.
A
If you convert from base-2 input to base-10 output, then we'll do some crazy interpolation, and you should expect there's data loss; this is the best thing we can do in theory. I've seen that in other metric systems at Microsoft: we tell people, if you use this, the input and output will give you the very precise thing; otherwise, we'll give you a wild guess by interpolating data, and you should expect there's error.
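That interpolation step can be sketched as a uniform redistribution of counts between two boundary grids (a toy illustration, not any shipping converter): the total count survives, but per-bucket counts become estimates, which is exactly the data loss being described.

```python
def convert_buckets(src_bounds, src_counts, dst_bounds):
    # Redistribute counts from one set of bucket boundaries to another,
    # assuming values are spread uniformly inside each source bucket.
    # This preserves the total count, but the per-bucket split is lossy.
    dst_counts = [0.0] * (len(dst_bounds) - 1)
    for i, count in enumerate(src_counts):
        lo, hi = src_bounds[i], src_bounds[i + 1]
        for j in range(len(dst_counts)):
            dlo, dhi = dst_bounds[j], dst_bounds[j + 1]
            overlap = max(0.0, min(hi, dhi) - max(lo, dlo))
            if overlap > 0:
                dst_counts[j] += count * overlap / (hi - lo)
    return dst_counts

# Base-2 buckets [1,2,4,8] into coarser buckets [1,5,10]:
out = convert_buckets([1, 2, 4, 8], [10, 10, 10], [1, 5, 10])
# The total of 30 is preserved, but the 22.5 / 7.5 split is an estimate.
```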
E
Yeah, Lightstep has a piece of code that I wrote to convert from the explicit boundaries of OTLP v0.7 into the Circonus histogram, because we use it internally; it's a good representation. But there is loss introduced, and approximation introduced, and when you're dealing with those explicit boundaries from Prometheus, I'm not sure it matters. But if those are the default boundaries for Prometheus, they do map exactly into OpenHistogram; it's just that the default 12 buckets of a Prometheus histogram become 320 OpenHistogram buckets, which is where I get the idea that it's a little bit too high resolution.
E
The only concern of mine is that, as soon as we have two, we have to implement twice as much machinery inside the collector, and every vendor is going to be faced with: what if I receive the wrong one and I have to do conversion? So we can definitely choose to have two protocols. You could say: let's follow OTEP 149 and introduce a simple base-2 exponential, people like that, and then let's also just slap in a placeholder for OpenHistogram and a placeholder for HdrHistogram. Those are three formats now, and we can always convert between them, but understand it's just a lot of code to write and maintain, and I'm worried about it. Yeah.
A
It's more about how OpenTelemetry positions itself. Similarly, when people worked on the W3C Trace Context protocol, initially they were thinking: we want to support all the existing protocols. Later they went to the other extreme: we want to support only one. In the end they landed on: we have the one, the trace id / span id thing, and we also have the tracestate, which gives people room if they want to interoperate with some legacy existing tracing protocol like B3.
E
So it sounds like you would favor accepting all the protocols and publishing, essentially, a translation routine that would be standard, and suggesting that the collector could have these conversions built in. So there'd be a base-2-to-base-10 histogram processor, and a base-10-to-base-2 histogram processor, and an HdrHistogram-to-base-2 histogram processor; HdrHistogram is already base 2, but it's got different resolutions across its range, so you need to convert it into a sort of simple histogram.
E
You might actually lose resolution, but you can do that; and then to convert from OpenHistogram into base 2 you've got some approximations going on, and the same in the opposite direction. It sounds like a huge commitment of resources, which we don't really have, to maintain and develop this pipeline for the collector.
E
On the other hand, it sort of is the most accommodating for client libraries. It says the SDKs have the choice to pick the most developed, best-choice histogram library, whatever that may be, because we support them all. The problem is that then you end up with a vendor or a customer running a big deployment that's got some Python, some Java, and some Go, and you've got three different histograms, because three SDKs chose three different libraries, and now some of them are base 10...
E
Some of them base 2, and the question is how good your numbers are at that point. And maybe, actually, I've been so close to this for a while that I start to question some things. A lot of the histogram experts in the room care so much about relative error.
E
So three percent, four percent: what is your bound on your relative error? I don't actually have a feeling that people care about relative error. Coarsely speaking, I want to see my histogram data, and I've never asked what the relative error in this data is. So if you introduce a one-percent error by converting between OpenHistogram and binary, that seems okay. The problem is that the experts will tell you: yeah, but theoretically, that's just one percent every time you convert it.
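The compounding the experts worry about is just multiplicative growth of relative error across hops; a tiny numeric sketch:

```python
def compounded_error(per_step: float, steps: int) -> float:
    # A relative error applied at every conversion hop compounds
    # multiplicatively rather than staying fixed at per_step.
    return (1 + per_step) ** steps - 1

# After 10 lossy hops, a 1% per-hop error has grown to roughly 10.5%.
drift = compounded_error(0.01, 10)
```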
E
So any time you merge it later again, they compound, I guess; I'm not sure. That's the type of fear and uncertainty that comes from those experts: you do things that are not perfect, and then it could be arbitrarily bad, and it gets panicky, and I'm not sure I feel that way.
A
Yeah, and I'm very biased, because I work for Microsoft and have been in this big company for 10 years. I always see multiple folks debating; their big orgs, like Windows and Office, are very successful products, but they have very different business priorities. So normally Microsoft will end up with very complex solutions that can support anything, but we're not super good at one particular problem.
B
Just from the user perspective, I believe it would be very, very desirable to have just one format, because as a user I would hate having 100 microservices where 70 of them are using one bucketing strategy and 30 of them are using another one. You will not be able to aggregate those.
I believe that is basically it: for example, with histograms, what makes your histograms aggregatable, which is otherwise an impossible thing, is having buckets that are aligning.
E
So, adding to this, there's one more non-technical consideration, and we're being recorded, so I'll be very careful. Prometheus has opinions, and I think we have to match opinions with Prometheus, and the opinions from Prometheus are also not technical, it turns out; there are lawyers involved at companies, and I shouldn't say more.
E
There's something going on besides the technical debate here, and I'm trying not to say exactly what I know, but there are some choices that Prometheus will not make, and it is being guided by some vendors, as far as I can tell. This, to me, suggests that the work in OTEP 149 is our best choice, because Björn Rabenstein from Prometheus liked it and started to implement it.
E
So there's a preference coming out of Prometheus that might just steer us towards a single base-2 exponential family of histogram boundaries, and I think we can make that decision. It's going to mean disappointing OpenHistogram, and I think probably what matters most is that we just get this done and get some kind of histogram out. It also feels like a tough decision.
C
I want to call out one important thing: perfect should not be the enemy of good here. Users don't have such a histogram today, and I think there is no such thing as a perfect histogram; we could spend all our time debating what that is. I think understanding what we think is good, and pushing an industry standard from our standpoint, makes sense. Everyone wants to be the industry standard, and the real question is, you know, adoption; adoption will be the key to any histogram picking up.
C
In my mind, the question is: is OpenHistogram adopted widely and used by vendors and open-source solutions today? I think the answer is no. I think we are still in that space where we can make a decision here; we can make the best decision possible, and if we're wrong, what's going to happen is users are going to demand the other format, and then we'll be a two-format system. But today we don't know what adoption is going to look like, and so, us picking, as a community...
C
What we think is going to be the best thing for users, that's absolutely the right thing to do. It's not going to be perfect; it's not going to solve everyone's needs and make everyone super happy. But as long as we pick something good, get it implemented quickly, drive this thing, and get vendors to adopt it, I think it's going to take off. That's my opinion. So if we think it's... everyone froze, so I think my internet's dead.
C
Okay, cool, so you can still hear that. As long as we pick a good thing, I think, and drive it quickly, and do the right thing for users, I think we're likely to have a really compelling use case that pushes the industry, and I think we're at that stage: everybody wants to get their thing adopted.
A
Yeah, and it seems to me like base 2 sounds like the winner for now, and I think, as long as we have some room in the protocols, if we later discover that we need to add another one, it's not going to blow things away. As long as we have the room, I think we should start from base 2 and implement it across all the SDKs and the collector, with one additional thing: we want to leave the room for later.
E
Yeah, we talked about this a couple of months ago: the idea that we would introduce new histogram types, rather than trying to overload and introduce one-ofs within the existing types. So currently we have explicit-boundary histograms; it's called Histogram. Maybe we think about renaming it ExplicitHistogram, and then have an ExponentialHistogram that's base 2, and then you could have a DecimalHistogram that's base 10 later on, I don't know, or you could have an HdrHistogram.
E
The idea that we rejected was that you'd have a sort of histogram type that was general-purpose, and then you'd have a subtype, like a buckets type, that would be a one-of; there was a little bit of objection to that a few months ago. It said that you could have an explicit one-of bucket, and an exponential one-of bucket, and a histogram one-of bucket, and an OpenHistogram bucket.
E
That seems to be just an excessive one-of, and the preference we were talking through then was having a particular data type per histogram class, and that does leave us room to add more types, as Riley's asking. It sounds like, if it were for this group to choose, we would choose OTEP 149, and we would encourage going forward now.
E
What the OTEP 149 proposal means is that we are going to lean on the vendors, particularly New Relic, Dynatrace, and Datadog, who all proposed and promoted this, to supply us with implementations that meet our guidelines, whatever they are. And I think there's still quite a lot of flexibility: do you want to have automatically configured histograms that just sort of vary their size based on the data range? Do you want to have fixed precision? Do you want to have fixed size?
E
Those are two choices, for example, and I think we should try to say as little as possible. Earlier I found an issue saying any histogram implementation is acceptable, but I don't know how to say what we exactly want, because I think people don't want to set the boundaries.
E
I'm going to go update that thread after I read everybody's comments, and I'm going to say more or less what I just said: that we're still leaning towards OTEP 149, we're still leaning towards base 2, and we are expecting the New Relics, Dynatraces, and Datadogs of the world to provide implementations that produce this protocol relatively quickly, and then post the links. I'm also willing to post a converter routine between histogram formats; that's a general-purpose thing that we've been using, and it deals with zeros, and it's a little tricky to get right.
A
Yeah, so for folks who want to follow up more, these are the links I just captured from Josh's presentation. Okay, so for the next one, it's a very simple thing. Just to mention, this PR has been a little bit stuck; I believe I resolved many blocking comments, so please take a look. While you're reviewing the PR, I want you to focus on which part you think is crazy or not going to work, and I want to take a step back: we can remove a section and just mark it as TBD for now, so we can make progress. Instead of trying to make this PR a perfect answer for everything, I want to scope it down. As long as we can make progress, we have some skeleton, and if we solve one problem and, by noticing it, introduce two other open questions, I think that's totally fine; we can leave the open question as a TBD and hammer it out in a subsequent PR.
A
If you can see my screen: on the right side I'm trying to illustrate the extreme case, where you have all types of combinations of exporters. So this is one single process, this big box, and you have multiple meter providers; they have multiple meters, and they have instruments. The one thing you want to remember here, on the left side, is that we have synchronous instruments and asynchronous instruments; the asynchronous ones are not sending data unless someone asks for it. And on the right side, I have four combinations.
A
So you have the push exporter, and you have the push exporter where you want to push the data at a different frequency from the sampling frequency. So here's the sampling; I'll give you one example. You want to measure the soil temperature, but you're running on a big farm, so you don't want to export the data every minute.
A
Otherwise
you
you
would
like,
like
run
out
of
cash.
So
what
you
want
to
do
is
you
want
to
measure
the
the
soil,
temperature
and
humidity
every
15
seconds,
but
you
only
export
the
data
every
one
hour
or
a
day,
so
I'm
drawing
the
circle
here
indicating
that
this
can
run
on
its
own
schedule,
and
you
can
already
see
the
problem.
You
might
have
situation
where
you
want
to
measure
the
blood
pressure
from
a
patient
every
0.1.
A
Second,
because
blood
pressure
drops
to
zero,
the
patient
will
die
right,
but
the
body
temperature
is
not
likely
to
change
every
second,
so
you
probably
measure
the
body
temperature
every
15
seconds.
So
in
this
way
you
can
already
see
different
instruments
might
run
on
its
own
cycle
and
they
could
be
totally
different
from
the
exporting
cycle,
and
this
has
been
an
ask
from
folks
in
dinatras.
A
Here are the other scenarios: you have the pull exporter, and when you pull, we're going to collect all the existing data, and meanwhile we'll call all the callback functions to get the asynchronous instruments. And another scenario is that you want to pull, but your pull frequency might be different from the collection frequency: you collect the soil temperature, but you only pull the data when a drone is flying by the sensor, and then you report the data. So these are the cases, and I think ultimately we probably will have to support all of them.
A
But for now, I want us to focus on only being able to support push and pull, with multiple exporters in the same process on the same meter provider. Whether an instrument can sample at a different rate can be added later, and here is why: because today, if you're saying "I want to push", then I'm going to call the callback functions from the async instruments; or, if I'm receiving a Prometheus scraper call saying "I want to pull the data", then I'm going to call the callbacks.
A
So I'm going to update the issue explaining why that's the case and how I believe we can solve it. I just want to see if people think this is a good decision, or whether you think we should solve this problem right now, or whether you think this is something we shouldn't even try to solve until after the stable release; it's just something people can do on their own, so we don't have to cover it here.
C
Yeah, so here are my... hold on, let me ask a few implementation questions here. Who decides on the start and stop timestamps for collection? Just to make sure I understand: in a pull-based exporter, when you get a pull, that's when you decide on your start and stop time for the cumulative or delta distribution that you're looking at, right? Yes, yeah. And for push, there's something else that decides when to do it?
A
The way I think about it, push and pull are actually the same. The only difference is that the push exporter is triggered by a timer, or something else like memory pressure, and the pull exporter is triggered by some external call, like the scraper call. But they're both reacting to some signal, so they sit there doing nothing until someone, either a timer, or memory pressure, or an application-exit signal, or a scraper, asks them: hey, go and fetch the data.
C
I'm more worried about synchronous instruments here, and my next question is also related, about in-memory state: do you see in-memory state getting duplicated for every exporter, then?
A
Currently, no. I discussed this with Bogdan; I can't give an update, that's my next topic, but I think there's a way that we can avoid having multiple copies of in-memory state.
C
Yeah, so effectively, just from an implementation standpoint: with an aggregator, you're keeping track of your internal aggregation, like your actual measurements, and you're putting them into something, and then at some point you get a call that says, okay, produce a metric data point, right?
C
And, more importantly, when does that aggregator reset its state if you're doing a delta? If you're doing cumulative, it kind of doesn't matter, because you should be able to just always pull and everything's gravy, but it still does, because there are implementation details that are dirty; and for delta, it's absolutely impossible otherwise.
C
We need to know when to clear state for delta, and do that consistently, and it's still not clear to me, from what you're saying here and what you're explaining, how I do that effectively. Okay.
E
Here's how I did this in the OTel Go prototype, just as an explanation. As you were saying, there's always a signal that causes something to happen: in pull it's coming from externally, and in push it's a timer. But I had three components in the pipeline, essentially, called accumulator, processor, and exporter. The accumulator is the one that's responsible for that synchronous switchover: there was a delta between before and now, I've snapshotted it, and I've...
E
...given you that delta, and I started again at that moment. So the accumulator is responsible for telling you a delta, with its start time and end time. Then some signal caused it to do the computation, whether that was from a controller, essentially, whether it was push or pull; some signal came in, and then one set of deltas for a time slice gets sent to the processor, which can then decide whether it wants to do delta-to-cumulative. The processor has the timestamps for that delta, so it can...
E
It can do the delta-to-cumulative and compute the correct timestamps. The only thing my prototype didn't have was a variable fan-out for multiple exporters, but you could accomplish that.
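The accumulator/processor split described above might look roughly like this in miniature. This is a sketch of the idea, not the actual Go prototype; the class and field names are hypothetical.

```python
# Illustrative sketch of the accumulator/processor pipeline described above.
class Accumulator:
    """Holds the live sum; on collect it snapshots the delta and resets."""

    def __init__(self):
        self._current = 0
        self._start = 0  # logical start time of the open interval

    def record(self, value):
        self._current += value

    def collect(self, now):
        # the synchronous switchover: hand out the delta, start a new interval
        delta, start = self._current, self._start
        self._current, self._start = 0, now
        return {"start": start, "end": now, "value": delta}

class Processor:
    """Optionally folds deltas into a running cumulative with correct timestamps."""

    def __init__(self, cumulative):
        self._cumulative = cumulative
        self._total = 0
        self._origin = None

    def process(self, point):
        if not self._cumulative:
            return point  # pass deltas straight through
        self._total += point["value"]
        if self._origin is None:
            self._origin = point["start"]
        # cumulative points keep the original start time
        return {"start": self._origin, "end": point["end"], "value": self._total}

acc, proc = Accumulator(), Processor(cumulative=True)
acc.record(5)
p1 = proc.process(acc.collect(now=1))
acc.record(2)
p2 = proc.process(acc.collect(now=2))
print(p2)  # start stays at the origin; the running total is 7
```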
E
So you have to be careful: if you aren't copying that state, you might block yourself with a scraper.
C
Yeah, I agree: there's a copy stage here. What's interesting, though: how are you handling the case where you want to remember a measurement that happened in a previous collection interval, that's cumulative but doesn't show up in your current delta?
E
The current state is held in the processor. If you need to do delta-to-cumulative, or if you just need memory for any reason (which Prometheus needs anyway, for old gauge values: there's no delta being computed, I just need to remember my old gauge value), you keep a current state in the processor if you must, and...
C
E
Well, it is closer to the metrics processor, and I think of my accumulator as being closer to the measurement processor. But this component that I'm calling processor is really just responsible for any state that happens to be in the pipeline, because it's receiving updates and it's being read by exporters.
E
So there's this extra hook in my prototype that says: before I know how to set up my processor, I have to ask whether the output is going to be delta or cumulative. The exporter gets to say what it wants, and then that sets it up. Yes, it unfortunately means that the processor depends on the exporter to know what type of processing to do.
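That hook, where pipeline setup queries the exporter for its preferred temporality, might be sketched like this. The names (`preferred_temporality`, the exporter classes) are hypothetical, not the real SDK API.

```python
# Illustrative sketch: the pipeline asks the exporter which temporality it
# wants before deciding how the processor will behave.
CUMULATIVE, DELTA = "cumulative", "delta"

class PrometheusLikeExporter:
    def preferred_temporality(self):
        return CUMULATIVE  # pull backends typically want cumulative

class StatsdLikeExporter:
    def preferred_temporality(self):
        return DELTA  # shove deltas out of process as fast as possible

def build_pipeline(exporter):
    temporality = exporter.preferred_temporality()
    # the processor's behavior depends on the exporter's answer
    processor = "delta-to-cumulative" if temporality == CUMULATIVE else "pass-through"
    return {"processor": processor, "temporality": temporality}

print(build_pipeline(PrometheusLikeExporter()))
print(build_pipeline(StatsdLikeExporter()))
```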
A
I'll explain the story from when I talked with Bogdan. So imagine you have an exporter that sends the data every one minute. You reach the one-minute mark, and then you reset the delta: you start from zero, or you keep the cumulative, whatever you do. And then at some point, say, you've just exported the data and you're waiting for the next minute to come, and all of a sudden a pull exporter reports: hey, I've got to respond, because there's a Prometheus scraper.
A
So what you have to do is ask the aggregator, whatever we call it, to give the data, and that aggregator will give the data to all the exporters. Meanwhile the push exporter, on its one-minute schedule, is going to scream and say: I'm not ready, I'm only expecting the data to come after one minute.
A
So what it's going to do is take the current delta, put it somewhere on hold, and wait for the next data to be reported. If all the data reported meets the one-minute bar, then the exporter is going to export the data, and this part doesn't have to do its own aggregation: the exporter can simply call the aggregator, saying give me the data; I'm not ready yet, so just keep it somewhere, and later...
A
...when you have additional data, merge that, and now I'm ready, so give the final result to me. That part, I think, definitely needs some more discussion. I'll probably talk with you, Josh, tomorrow, and then we can figure out how we can make a PR, but that should be a separate PR from the view PR.
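The hold-and-merge idea described above can be sketched as follows. This is a hypothetical illustration of the behavior being proposed, not an agreed design; the `PushExporter.offer` shape is invented for the example.

```python
# Illustrative sketch: when a scrape forces an early collection, the push
# exporter isn't ready, so it parks the delta and merges it with later
# deltas until its one-minute bar is met.
class PushExporter:
    def __init__(self, interval):
        self._interval = interval
        self._held = 0          # merged deltas parked until the interval elapses
        self._held_since = None
        self.exported = []

    def offer(self, delta, start, end):
        if self._held_since is None:
            self._held_since = start
        self._held += delta     # merge with whatever is already on hold
        if end - self._held_since >= self._interval:
            self.exported.append((self._held_since, end, self._held))
            self._held, self._held_since = 0, None

push = PushExporter(interval=60)
push.offer(3, start=0, end=20)   # a scrape forced an early collection: held
push.offer(4, start=20, end=60)  # interval met: the merged delta 7 is exported
print(push.exported)
```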
F
A
Wait for the next PR, okay. The pipeline problem is currently a to-do item, so we won't be able to solve that here; just a heads up. This is the thinking; if you have a different thinking, you should bring the topic here, but if you're waiting for the answer, then you should wait on the PR for the answer.
E
The point I'm leaning on there is that there was this stage where, in my words, the accumulator dropped some data into a processor, and it has to be able to merge right there in order to compute cumulative state. So the same machinery should be usable to compute a sum of deltas; that's like an intermediate result.
C
But I like the idea you're proposing, Josh. The thing I'm still trying to noodle around is this notion of the collection interval and the exporter. If we detangle the two, it becomes a little more clear, because basically you can have a slot for all your metrics: you just have something that samples your metrics and shoves them into that slot. When you get a call from Prometheus, you don't sample the metrics at the time of the call; you just sample the current value in that slot.
C
That's how you do it today? Okay, it wasn't clear to me that that's how you did the other half of it; so that makes sense. That's what I was thinking of doing in the Java prototype as well. But I do think we should probably have some more focused discussion, maybe in chat, just design ideas, to flesh this out, because getting this right across languages is going to be interesting, and I don't want to limit language implementations based on one language's strengths and another language's weaknesses.
F
So if that is the case, and maybe it's not, but if it is: do we have a sense of how much memory we're really talking about? Because, at least in my implementation, for most of the aggregators the memory usage really isn't the actual accumulation of the value; it really is just the expansion of the label set combinations.
F
Right now I haven't looked at histogram, which probably has higher memory needs, but for the simple ones, just a delta or a cumulative or whatever, the memory doesn't seem to be an issue. And if that is the case, then wouldn't it be simpler if we just had an in-memory block for each exporter frequency anyway? So that's a thought.
A
This is a question I received when I sent the view PR, because people asked: hey, when I define something, whether it's a view or an exporter, do I have the flexibility to specify that I want to export the data as cumulative or delta, for sums or histograms? So here's my thinking; I'll explain it to you.
A
I think if you export data to some backend that only supports cumulative, then not asking the user to do any configuration is a no-brainer: if we send it to Prometheus, and Prometheus only supports cumulative, we should convert everything to cumulative if we can, without asking the user to specify anything. If we send data to a delta-only backend, we should do the same thing without asking the user. The interesting problem happens when we export to some backend that can support both.
A
Originally, I was thinking I'd just be lazy: if we got an absolute value from an async instrument, I'd export that as cumulative; if I got it from a synchronous counter or something, I'd report it as delta. But Bogdan changed my mind. That was my original proposal; Bogdan mentioned we probably need to give the user some flexibility, and it seems that cannot be part of the processor or the view configuration.
A
It should be exporter configuration, and here's why. Bogdan believes that if you're running behind a load balancer, like the collector running behind a load balancer with multiple things sending data, the collector will have no good way to do the conversion, because of the start times and all the other state. So the earlier you can do it, in the SDK, the more information you will have, and you can give a more accurate value.
A
So the assertion here is: if you have to do the delta-to-cumulative conversion, for whatever reason, then you should do it in the SDK, or as early as possible. That part got me convinced. Then I started to think: in the SDK, in the OTLP case, or for whatever exporter can support both, you should give the user a way to choose cumulative or delta. What I'm struggling with, and what I want to get feedback on here, is this:
A
Do you want one single option, say, I want to export everything here as cumulative or as delta? Or do you want finer-grained functionality: I want to export A as cumulative but B as delta? By default I would avoid the complexity, but I want to see if there's a valid business consideration. So, Josh, go ahead.
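The two configuration shapes being debated might look like this side by side. These option names are hypothetical, purely to make the trade-off concrete; they are not a real SDK API.

```python
# Illustrative sketch of the two configuration shapes under discussion.

# Shape 1: one single option for the whole exporter.
single_option = {"temporality": "cumulative"}

# Shape 2: per-instrument flexibility with a default.
per_instrument = {
    "default": "delta",
    "overrides": {"http.server.duration": "cumulative"},
}

def temporality_for(config, instrument):
    # resolve which temporality a given instrument's data is exported with
    if "overrides" in config:
        return config["overrides"].get(instrument, config["default"])
    return config["temporality"]

print(temporality_for(single_option, "http.server.duration"))
print(temporality_for(per_instrument, "http.server.duration"))
print(temporality_for(per_instrument, "queue.size"))
```

The single option is simpler to implement and explain; the per-instrument shape is the "valid business consideration" being asked about.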
E
Yeah, this has been discussed in the past, and I have a feeling the reason why this is so hard to discuss in the open community is that Prometheus only accepts cumulative, and nobody really cares to talk about deltas. I'm really looking forward to a time in the future when OTLP is established enough, and our SDKs are established enough, that we can advertise to database vendors, you know, TimescaleDB or VictoriaMetrics...
E
I don't think that people want to always convert cumulatives to deltas; what we see instead is that people are converting cumulatives to gauges in these other systems. So there's probably never a reason to do cumulative-to-delta inside of an SDK.
E
There is the reason that you just gave to compute deltas to cumulatives in the SDK, but it means putting this memory burden inside the SDK, so doing it elsewhere is definitely an appealing option in the future, when the vendors or the databases actually support it. In the open-source world, Lightstep supports both right now, but it doesn't matter to anybody, because so many people care about Prometheus that they're not going to think about deltas and saving memory.
E
If,
but
I
did
create
an
option
for
the
hotel
go
exporter.
It's
an
exporter
configuration
as
you
said,
to
choose
whether
you
want
humana
delta
output
and
the
one
that
makes
sense
for
me
is
a
stateless
option.
So
you
say
I
don't
want
to
do
delta
to
cumulative
because
it
just
causes
a
buildup
of
memory,
and
so
you
can
imagine,
then,
for
all
the
synchronous
instruments?
Well,
cars
and
histograms
is
do
you?
Do
you
want
to
actually
pay
the
cost
of
memory,
or
do
you
want
to
just
flush
out
those
deltas?
E
What
I
found
is
that
people
get
really
hung
up
on
up
down
counter
here,
because
they
traditionally
want
to
see
them
as
gauges.
If
you
start
pushing
your
delta
to
cumulative
out
of
process
for
up
down
counters,
you
run
into
this
like
reset
problem.
That
rates
can
be
that
you
can't
compute
the
total
anymore
and
it
becomes
a
potentially
useless
metric
to
a
lot
of
people.
E
So
so
people
are
interested
in
having
cumulative
up
down
counters.
It
seems
quite
quite
strongly,
but
then
I
think
it
does
make
sense
for
performance
reasons
just
to
give
the
deltas
for
counters
and
histograms.
So
I've
imagined
a
sort
of
a
pure
stateless
option,
which
is
where
there's
no
memory
and
everything
is
either
cumulative
or
delta
according
to
the
instrument.
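The "pure stateless" option described above could be sketched as a fixed mapping from instrument kind to temporality. This is a hypothetical illustration of the idea, not the Go exporter's actual configuration; the kind names are invented, and the UpDownCounter row reflects the tension just discussed (a cumulative total would require keeping memory).

```python
# Illustrative sketch: with no delta-to-cumulative memory anywhere, each
# instrument kind exports whatever temporality falls out of how it records.
STATELESS_TEMPORALITY = {
    "counter": "delta",          # synchronous: flush deltas, no memory buildup
    "histogram": "delta",        # synchronous: flush deltas, no memory buildup
    "up_down_counter": "delta",  # contentious: users often want the cumulative total
    "async_counter": "cumulative",  # observed values are already absolute
}

def export_temporality(kind):
    return STATELESS_TEMPORALITY[kind]

print(export_temporality("counter"))
print(export_temporality("async_counter"))
```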
C
The only use case I'm talking about is where you run a local collector and you try to get the metrics out of process as quickly as possible, to be resilient to failure in your main process, so that your metrics pipeline and your main process have different failure domains. That's what I'm talking about specifically as the use case, the statsd model: shove this thing out as quickly as I can. And I would call that thing an UpDownCounter.
C
If
that's
not
a
valid
use
case,
we
care
about
it
all.
Then
I
totally
agree
with
you
around
up
down
counter
and
I
would
say
that
people
who
are
used
to
prometheus
and
that
like
see
up
down
counters
this
weird
thing
sure
I
buy
it,
but
I
don't
I
don't
want
to
throw.
I
don't
know.
I
thought
that
was
the
primary
reason
you
use
statsd,
but
I
could
be
wrong.
E
It's this case where you drop some deltas for an UpDownCounter: the total value is now completely unknown. As long as the user only cares to see the rate of the UpDownCounter, that's fine, but I've seen a very strong signal that people don't want to see those values broken down into deltas.
B
Okay, so eventually, because of Prometheus, OTel needs to support both, right?
E
Yeah, there are just so few people that want to accept a mixture of delta and cumulative right now that it almost doesn't seem too important to me. But I could see it; it seems to me a decision about where you put your memory. So until there's a widespread reason to do deltas, I don't think this conversation is very important, and we'd always said the default will be cumulative, just to avoid having this conversation.