From YouTube: 2020-08-11 Spec SIG
B: Do we have an agenda or notes?
A: A document — I think we have a couple of issues that we want to discuss. Actually, our PRs, the most important one being the exemplars.
A: So, yes, that's the only blocking issue that I see before we can do a protocol.
B: Yeah, I don't know how likely — I mean, we should be confident that we're not going to expect to make changes right away; we can add things that are compatible, like exemplars. I think so.
B: I mean, I'd like to think that we can make a candidate that's stable. We're going to have to experiment and learn some things, and if we learn that there's a problem — well, then we screwed up, and we have to break it again, but...
A: So the whole idea is: we can do a release today, for example, with the mindset that — I haven't had time to evaluate that performance thing. So what I'm trying to say is, we can do a release today, not mark it as stable, but it gives us the opportunity to start playing with the protocol and learn during this time — but we know there is this caveat, that I'm a bit behind with that test.
A: For me, the results, I think, will stay this way, but I need to prove myself wrong. The preliminary results that I saw are not worrying, and I would prefer to stay this way, because I think it's much clearer and much easier to consume the data: you just do a switch and say which type it is — this is the type, these are the properties — and you don't have to bother about other things.
B: Let's see — I'm aware of a number of people pushing me at Lightstep to get this done, so I fear that if we release something now with the knowledge that it's likely to break again, it's going to be unhelpful to me. Okay.
B: I guess I'm worried that Tigran's going to be hard to convince, just because he did a lot of study, and even when I ran his benchmarks, I convinced myself that I couldn't do something that was going to cost seven percent, or ten, or something like that, in terms of throughput.
A: Yeah, my hope is that it will be less than one percent, but let me double-check again — it's on me. But I'm also getting a bit of pushing from spec — you know, the maintainers meeting asking for more reviews there — so I'm trying very hard to not fail on this one.
A: So far, that's the only critical issue that I've seen, and — yeah, by the way, I also convinced myself somehow there is the other issue that is there, which may be breaking the monotonic sums versus up-down counters, if we support deltas for up-down counters.
A: I tried to do it in a backwards-compatible way in a change; I will show everyone the PR and I would like to make a decision. So essentially it was not as trivial as "there is a boolean missing", because the problem is: if people interpreted that without reading that there is an extra boolean there...
A: So right now, for example, say I send you non-monotonic during the same message: I send you monotonic sums at the beginning — okay, you have deltas, you have cumulative, you have everything you want — and then later I'm starting to send you up-down.
A: During the same thing, you may have made an assumption about what the message is and what you're going to do; you don't check for the extra boolean that was added in the message. Does it make sense? Even though the protocol is backwards compatible, people may make an assumption and may not check for the extra boolean.
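
A rough sketch of the hazard being described (hypothetical names, not the actual OTLP schema): a consumer that assumes every sum in a stream is monotonic will mishandle later up-down points, while one that checks the per-point flag stays correct.

```python
# Hypothetical sketch, not the real OTLP schema: a sum data point that grew
# an is_monotonic flag after the fact.
from dataclasses import dataclass

@dataclass
class SumPoint:
    value: float
    is_monotonic: bool = True  # new field; an old decoder never looks at it

def naive_rate(points):
    # Assumes the whole stream is monotonic sums: clamps "impossible" decreases.
    return [max(0.0, b.value - a.value) for a, b in zip(points, points[1:])]

def aware_rate(points):
    # Checks the per-point flag: up-down sums may legitimately decrease.
    return [b.value - a.value if not b.is_monotonic
            else max(0.0, b.value - a.value)
            for a, b in zip(points, points[1:])]

# Monotonic sums at first, then an up-down point in the same stream:
stream = [SumPoint(10.0), SumPoint(7.0, is_monotonic=False)]
```

The naive reader silently clamps the decrease to zero; the flag-aware reader preserves it, even though the wire format itself stayed backwards compatible.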
B: Yeah, I see. I don't know if you saw — I made a comment on the same issue. I still sort of think that converting delta to cumulative for these measurements is probably not always something we should be doing. I mean, you've pointed out the problems — like, if you're doing this from an SDK...
B: ...it's reliable — you know whether you've lost any information. If you're doing it inside the collector, over OTLP, it becomes less reliable. And then, I think, if you're looking at a system that does traditionally use deltas, then there's potential for lost data, and you shouldn't try to recompute sums or totals if you're doing that.
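
A minimal sketch of why this conversion is stateful, and why it breaks when the stream is split across a horizontally scaled pool (illustrative names; not the collector's actual code):

```python
# Illustrative only: a per-(metric, labels) running total. Correct when one
# agent sees every delta for a series; incorrect when a load balancer splits
# the deltas across replicas of a scaled pool.
class DeltaToCumulative:
    def __init__(self):
        self.totals = {}  # (metric, labels) -> running cumulative value

    def push(self, metric, labels, delta):
        key = (metric, labels)
        self.totals[key] = self.totals.get(key, 0) + delta
        return self.totals[key]

# Single agent: sees both deltas, reports the true cumulative total.
agent = DeltaToCumulative()
agent.push("requests", ("host=a",), 5)
agent.push("requests", ("host=a",), 3)

# Scaled pool: each replica sees only part of the stream, so each reports a
# partial total and neither matches the true cumulative.
replica1, replica2 = DeltaToCumulative(), DeltaToCumulative()
replica1.push("requests", ("host=a",), 5)
replica2.push("requests", ("host=a",), 3)
```
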
B: And that's why we are changing the default to be cumulative. And I agree that it's possible that we can't allow a horizontally scaled cluster to serve this delta-to-cumulative function, and I kind of want to figure out how we can just leave this complexity in for now — because many users of delta-oriented systems are going to have an agent anyway. So if you're running your collector in single-agent mode, then that can do reliable conversion.
B: So we shouldn't worry about the scalable-pool problem — it's just a configuration that should never happen — but that doesn't mean the protocol shouldn't support it, I guess. I see.
B: That's — yeah. And I kind of wish, for the future, that we have a sort of negotiation phase where, if I'm a client connected to a server, I could try to figure out: am I a single point, or is my destination a single point? So if you're talking to a scalable pool, your destination is not a single point, and so you know that certain combinations won't work. But if you have an agent in line, then it's great.
A: One thing we can do, maybe, is put it in the response. One of the things we can do is tune the response of our protocol to include this information — "I'm not supporting deltas", or having a different config. Okay, you convinced me: I'm not going to remove them — we're not going to — but for the moment we try to not use them too much until we have all these stories defined. So let's keep them here, for being generic and for future protocol work.
B: Right, well, yeah. I'm thinking of the statsd case, where it's common that they're positive-only, but we've seen examples out in the world where changes have been made to support negatives. And so if you've got up-down counters, they look like statsd counters in some sense, and so it's okay to just sum them all up — but it's perhaps not okay to print the total, essentially, because that could be lossy or inaccurate.
B: But still, if you're just measuring change over a rate of time, it gives you the right thing, and when there's missing data — okay, you're missing data, but it's still meaningful and useful in that sense. So I would say it's okay: as long as you keep things as deltas and don't ever change them to cumulative, it's fine.
A: Yeah, as long as the backend needs that. But for the moment we do not have this part of negotiating with the backend, so we do not have to hold this part. That's why, probably, we should start with the mindset of supporting only cumulatives — sorry, using only cumulatives — right now, until we define all this story about deltas. Makes sense?
B: Yeah. I think there will be valid vendor configurations that want deltas. But I think we've seen quite a lot of evidence that says the standard request is for Prometheus, and it's harder to do the right thing if you haven't put cumulatives out of the process itself for Prometheus, whereas...
B: ...with deltas you can — as long as you keep sums as deltas, it's actually okay. But if you start by sending deltas, you'll never get Prometheus right; and if you start by sending cumulatives — well, we have the same problem for, like, a statsd exporter downstream. In that case we can configure deltas, which is, I think, the right thing to do.
A: Yeah. So in the client, for the moment, what I think we should recommend to everyone is: you have to configure all your backends, or your SDKs, based on your final destination. If your final destination is statsd, configure everything to work with deltas — we will carry them happily, and you will get what you want. If you want to use Prometheus-like backends, configure everything cumulative. We will not yet get into the business of transforming deltas to cumulative — for the moment, until we have a better story for all this configuration.
B: Well, I think we should get out of that business in the collector right now, because there are too many questions — but the SDKs can still do it. So there is still, for me at least, a question about the preferences that a user wants, for — we've just designed these sum observer instruments that are cumulative inputs.
B: Yeah. I think in the collector we need to first grapple with this question of "are you scalable or not". But what I've got right now in the SDK is three options: cumulative, delta, or pass-through. And pass-through just says: whatever we get in, we're going to put out — and that way there's no memory requirement. So if we configure pass-through, your counters will be deltas and your observers will be sums.
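
The three modes just described might be sketched like this (a hypothetical function, not the actual Go SDK API), assuming synchronous counters arrive as deltas and observers arrive as cumulative sums:

```python
# Illustrative export-mode sketch. kind is 'counter' (delta inputs) or
# 'observer' (cumulative sum inputs); values is one series over an interval.
def export(mode, kind, values):
    if mode == "passthrough":
        # No memory: counters stay deltas, observers stay sums.
        return values
    if mode == "cumulative":
        if kind == "counter":
            # Accumulate deltas into a running total.
            out, total = [], 0
            for v in values:
                total += v
                out.append(total)
            return out
        return values  # observer sums are already cumulative
    if mode == "delta":
        if kind == "observer":
            # Subtract successive sums to recover per-interval changes.
            return [b - a for a, b in zip([0] + values, values)]
        return values
    raise ValueError(mode)
```
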
B: So, unfortunately, the pass-through default can work for statsd, as long as you output the observer values as gauges and the sums as counters. So setting delta or pass-through can work for statsd, but never works for Prometheus — that's the problem. It's a sort of "never works for Prometheus" problem.
C: Out of curiosity — and I apologize — how do you interact with this? Do you not increment, or do you send a value?
B: That's — sorry, that's what we're referring to when I say "the count". There's a very long, lengthy discussion about how we ended up here, but we have both. What we've called observer instruments are those that are asynchronously called by the SDK — it's like a callback: "give me a number". We decided that it was more sensible to have the callback return the total, or the sum value, so we called it sum observer and up-down sum observer.
B: I think in some cases, when people have a total like that, they turn it into a gauge right away. But I know you can ask a certain person and say: okay, I've got a sum — like total memory usage — that changed from last minute to this minute; would you like to see the new sum, or would you like to see the change from the last time I reported? And I think both answers are kind of true, some of the time. Yeah.
C: Yeah — I apologize — I was going to say my opinion is extremely colored by the way we store and account for these things. But certainly our customers would expect a gauge in that case, right? They would expect the whole thing, and if they wanted a diff, they would use a function — a post-aggregation function — to compute the diff.
B: So that suggests to me — and there is an OTEP about statsd conversions, so maybe this is relevant, and your input is totally useful here — that basically says we have these two categories. We've got counters, which are the so-called synchronous ones: these are the ones where the user calls the SDK.
B: We want those to be deltas, because that's the norm — that's just normal. And then there are these asynchronous ones, where the SDK calls the user; we've decided to make those be totals, or sums, and we are capable of reporting the difference, but probably we shouldn't. So this suggests that for statsd, pass-through is the right way to be, and that means that sums will be exposed as sums — and they'll either be monotonic or not — and that means that counter inputs will be exposed as deltas, and they'll either...
B: That means pass-through is a good configuration for statsd, but it's just not good for Prometheus, yeah. And it also tells me that delta is never a good configuration — we don't actually want to force sum observers to be deltas — so maybe that is support that we could think about removing.
B: But this is not in OTLP — because it's not in OTLP, I put it in the Go exporter, or the Go SDK, for example. There's the ability to compute deltas from these cumulatives, but it required some extra apparatus that I'd be happy to remove: it basically forced me to add an optional subtraction interface, to say "well, if I'm going to compute deltas from cumulatives, I need to know how to subtract" — and not all aggregators support subtract, so it created a special case.
B: Yeah — so maybe it's okay to leave this as a sort of conceptual option. We keep running into stuff in the metric space where it's like: is it useful, question mark? Is it meaningful, question mark? And I want it to be one of those; I think it's meaningful, I don't know whether it's useful, and I could be convinced to remove that delta export mode — meaning we would only support cumulative and pass-through. Cumulative means we will convert deltas into cumulatives inside the SDK, and pass-through means...
A: Quick question: I heard you talking about sum and up-down sum. In OTLP we use sum and monotonic sum. Should we think about the way the observers are named, because observers are embedding an aggregation — should we change that? We have a field called monotonic, or is_monotonic — I don't remember the right... I think it's called...
B: Anyway, my attitude is colored by a sort of convenience. I think that monotonic is actually the normal case, but it's also, like, seven or eight more letters that people end up typing, so I think the default should be the shorter name. That's why we ended up leaving counter and sum observer as the sort of standard monotonic instruments, and that's why up-down was the six-characters-longer version that's not so common.
B: I am inclined to suggest that we don't need to keep the same conventions, or approach, for naming in the protocol, and I think it's okay to keep the word "gauge" in the protocol. We've taken it out of the instruments, but I think having "gauge" inside the protocol, where it describes what I'm saying, is okay — whereas we've removed the word "gauge" from the API surface.
B: Yeah — no, I like that, I like that. I apologize — that's preferable to me. And the way I think about that, though, is also this notion of structure, where things that are gauges are individuals, and things that are sums are going to be added together. And I think about: if you're sampling, a zero value was useless in a counter, but it's meaningful in a gauge.
A: Awesome, okay. So then the goal is to release the protocol with as much stability as possible. Then my priority becomes P0 on that issue for performance, and if I'm able, in Go, to convince Tigran that we have zero impact, then we'll stick with this — because I like more the way how verbose it is, and it can be consumed more easily. If not, we will go back to one of your PRs, Josh, and try to grab an encoding and...
B: Yeah — we can take the generated code, and maybe that'll be a way to go. Yeah, okay. But I hear you saying that you've been pulled in many directions, with spec and TC and GC work.
A: Let's focus on that. So my goal, as I said: I'll find time until Thursday to do this — nothing else in terms of metrics, but this one is on my list. For you, Josh: I would like you, if you can help me, to just review the exemplars PR — to be a bit more clear, that PR — if you have anything. Also, I would like to challenge this thing of having a similar message between exemplars and raw measurements.
A: I feel it's going to cause us problems, starting with how the labels are interpreted: are these just dropped labels, or are we including the labels from the other points? Yeah, yeah.
A: ...them inside every point. Or — the other question, by the way, the other thing is: right now the approach is we put exemplars in every point, kind of, with the label-combination stuff. If we put them as separate, as you initially thought, maybe that's the way to go. I don't know; we need to think a bit more about these exemplars, I feel. What is the correlation, if we put them inside the data points, with the labels that the data point has?
B: Yeah, I have mixed feelings, because there are things that I kind of feel like I want — but I don't think there are many uses for them right now — and then there's kind of a standing desire to do things the way they have been, which I think is very limited in use. So I'm caught between "let's just get something minimal that meets the old requirements" — which is, like, histograms have span IDs, or something like that — versus something newer...
B: ...that would open up other — well, give us exemplars for other data points. But also I have a personal interest, and that's the problem: it's a personal interest that I need to get rid of to make progress here. The personal interest is in sampling metric events to reduce dimensionality, so that I could have exemplars that have dimensions in them that are not exactly being aggregated. That's, to me, personally interesting, but I've got to stop having my personal interests get in the way of this progress here.
A: What I would suggest, by the way, as feedback — and how I was able to move forward with this — is: for this thing, make sure we put the exemplars where they should be — inside the data point, or outside the data points, or things like this — such that it's natural, in two months, when we are more ready to discuss this, that it's just going to be one line to add that sampling thing. So make sure we don't shortcut that option in the future. What I'm suggesting is: maybe don't push to have the entire thing right now, but push to make sure we're not going to...
B: Connor Adams, who did this work — his internship is over; he doesn't actually care. He's offered to help merge his work, and I think it was a successful internship from my perspective: he proved that it could be done, and did it, and that was great. But it's not clear that anyone needs this right away. So I think the two fields that I'm aware of, that I added sort of out of my own self-interest, can be taken out. So that was sample...
B: ...count — like, forget about probabilities, forget about it; we can add it later, it's one field. And it was just extra labels, which I think is the one you're saying was quite confusing. And so maybe the question, putting it back to you, is: if we removed sample count and extra labels, what we're left with is a double or an integer, and a trace ID and a span ID — and is that enough?
A: We did have extra labels, by the way, previously — in the previous exemplars. Oh, we did — we called them "dropped labels". Okay. The idea was: because we had the exemplars inside the data point, and the data point came with the labels, these are the labels that we dropped.
B: Well, he was very rigorous, so I'm afraid of being less rigorous than he was. But yeah — let me try to remember why else I would want it to be separate, like...
B: So, the thing about raw measurements: I think this is a little bit different than exemplars, and I'm totally fine to separate these questions — it doesn't matter to me, other than the size implications. I guess the thing there is: I think it's probably more likely that users are going to have a bunch of data points with the same labels, and that they would like to have a repeated field so that they don't have to repeat their labels all the time. But then the question is: do they want...
A: We just send every point that you gave to us in this. And we want an all-points aggregation — like sum, but call it "all points" or whatever you want — that will not keep the individual timestamps for everything; it will most likely do the aggregation based on the labels that you want. And you may also have exemplars for this, with job labels or stuff. And in this scenario you will have two timestamps, because these are all the points — all the measurements — inside this interval, so that will become a delta or cumulative.
A: It will be a proper aggregation. So when you do these kinds of things, that becomes an aggregation — it will be another message, another type that we have there, another aggregation type that we support in the type oneof — but it will be different than the "clear every point" one.
B: Yeah — you almost could call that "clear every point" thing that you described a logging protocol. I asked for this in the logs group — which is maybe a little bit too philosophical at this point — but, you know, if you're going to have an encoding for structured data, we ought to be able to put a structure together to represent one metric event. And then maybe you'd prefer to see your raw stream of events as actual log events, rather than as, like, not-quite-aggregated stuff in a metrics program.
A: No — anyway, the only benefit for that is PHP. PHP cannot do any aggregation — no state can be kept across requests — so they need something where they are actually streaming us every point, yeah. So we may have to do it on the metrics side. But I think what you were looking for before is this — what I call "all points", or whatever — another type of aggregation that we support. And I think we should not keep timestamps, and we should treat it as any other aggregation.
A: Okay, so that's fine — we can add that later without breaking things. I need to test the performance. Oh, last thing, which I forgot to mention to everyone: summary min/max. This is another thing that we haven't chatted about — min/max, I think, summary. You had a PR; I don't know, you closed it for whatever reason.
B: Okay. So we've talked about the value recorder instrument, which is the synchronous one, and the value observer instrument, which is the asynchronous callback version. And we know that when you've got the asynchronous callback version, we really expect — we actually have defined in the spec — that there's one value per interval: you can only get one value per observation, and duplicates are considered duplicates. And so for that particular instrument it's quite natural to figure out how to get a gauge out, because that's what users are kind of expecting: one value in, one value out — gauge, good, okay. And it's not one of these sum observers, where it's counting some type of number.
B: So the observer case is settled. We've had a debate — which is almost purely philosophical — about whether we should specify a different default aggregator. But when you talk about value recorder, you can have these synchronous measurements, so there could be an interval with a hundred or a thousand of them, and we've got two or three different possible defaults that I'm kind of trying to rank by "most likely to be useful".
B: One default is: make a histogram. But usually, in the world of metrics, users need to explicitly call out when they want a histogram, so that's not a good default — it's expensive. The other extreme is: let's just show the last value, because some of the time that is actually what users are kind of thinking about when they wrote "gauge" in their old code — now they say "record value" — and maybe that's the one that we want to display, or export after an interval: the last value.
B: There's this other opinion, which I think came from a few of us — I know I always promoted it as well, but I think the New Relic people like it because it matches their system — which is that, over that interval of time, you've got a thousand events or whatever, and you're going to compute min, max, sum, and count. So that's giving you the average, or the mean; it's giving you the range of values. And I discovered, after kind of publishing this spec and having implementers start working on it...
B: Well, Bogdan is right that most examples that we know of for this value recorder — where there are multiple points per period — are a case where you want a distribution. So a histogram is a viable output; a summary is a viable output, but most people sort of think of that as deprecated, or discouraged from use. And then this one that we kind of made up was the min/max/sum/count: a sort of fixed-size summary that doesn't have the ambiguity over aggregation.
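
The fixed-size summary being discussed could be sketched as follows — constant memory per series and, unlike quantile summaries, trivially mergeable (illustrative, not the SDK's actual aggregator):

```python
# Illustrative min/max/sum/count aggregator: four fixed fields per series.
from dataclasses import dataclass

@dataclass
class MinMaxSumCount:
    min: float = float("inf")
    max: float = float("-inf")
    sum: float = 0.0
    count: int = 0

    def update(self, v):
        # Record one synchronous measurement.
        self.min = min(self.min, v)
        self.max = max(self.max, v)
        self.sum += v
        self.count += 1

    def merge(self, other):
        # Exact merge — no quantile ambiguity, unlike a classic summary.
        return MinMaxSumCount(min(self.min, other.min),
                              max(self.max, other.max),
                              self.sum + other.sum,
                              self.count + other.count)

    def mean(self):
        return self.sum / self.count if self.count else 0.0
```
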
B: So you can merge these — because you can merge an average, and you can merge a maximum; it's the quantiles that are hard to merge — so it sort of finds a nice, lowest-cost option. And I think there are a lot of people who understand what I'm after when I do that: I'm trying to reduce the cost of histograms and still expose some information about how frequently it happened, and what the min and max were, et cetera. But the users still kept coming and saying...
B: ..."I really want a way to report a gauge, and I don't want min/max/sum/count." It particularly happens when they have a very long collection interval — like, "every hour I'm going to output the current value of some statistic" — and it's really actually one of these observers, but their code is structured to call it synchronously, so it's hard to make an observer, etc. And so there's no out-of-the-box instrument that gives you a gauge, the way we've kind of designed it. Yeah.
C: I mean, usually the way you combine gauges is by averaging them — for certain definitions of "averaging them", right? So a customer in that case could take your — we call them summaries, when we return sum/min/max, I don't know, yeah. So if you return those four values, you could derive the average, which is probably the behavior you're getting out of your...
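
A tiny illustration of why carrying sum and count (rather than a precomputed mean) matters when combining: the exact merged mean is total sum over total count, while averaging the per-series averages is wrong whenever the counts differ.

```python
# Two series exporting (sum, count) over the same interval.
a = {"sum": 10.0, "count": 10}   # mean 1.0
b = {"sum": 30.0, "count": 2}    # mean 15.0

# Exact combined mean, recoverable because sum and count were kept:
exact = (a["sum"] + b["sum"]) / (a["count"] + b["count"])    # 40/12

# "Average of averages" — biased toward the low-count series:
naive = (a["sum"] / a["count"] + b["sum"] / b["count"]) / 2  # 8.0
```
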
B: Right. So now it occurs to me that what I was basically trying to do when I wrote a note on this — it's 118 — it basically says how to get back to statsd from the input instruments, and to Prometheus. And basically, in both cases, what I was seeing was users saying: if I'm going to put it back on the wire as statsd, or back on the wire as Prometheus...
B: ...but what's missing is the last value — and I think there's...
B: Sorry, that word is overloaded. There's no time-distribution information at all when you get min, max, sum, and count, and so there's some sort of back-of-my-head thinking: yeah, there's nothing about time here. So maybe that's why people kind of want gauge some portion of the time — but it's small, so...
B: Michael, your point about "just histograms" is actually good, because the aversion to histograms, I think, came from statsd, where every histogram update became a network message, right? And so it's actually less expensive to aggregate a histogram in process and then export it. But you still don't know the last value, right? No.
C: It comes up infrequently, but it does come up, yeah.
B: And so maybe I'm wrong about how frequently this problem will arise, but I had been proposing that one way to address this was to just throw the last value in every time — so min/max/last/sum/count would be the min, max, sum, count and the last value, and that can be converted into anything you want. But it ran into a lot of confusion. So we do need to resolve something: the user comes in and says, "I had a Prometheus code base..."
B: "...I knew what it did; now I just want a gauge — what do I do?" And we don't have a good answer. So my answer was going to be...
B: ...but when you have a long interval, it's really hard to argue that you should drop the last value and average it out. If there's some trend there, you know, the value at the end of the interval could be very different than the value at the middle of the interval, or the average. Nobody...
B: Yes, yes — right, averaging averages is exactly the case that we don't want to get into, so yeah.
C: So I really do like the histograms idea, because you can derive a max, and then a sum and count, all from the histogram — and then you can drop the histogram in the collector if you want, no?
C: Here, Michael: we did develop the DDSketch algorithm, which is based on an HDR implementation with shrinking buckets, so that you don't have to define a maximum or any bucket sizes — the algorithm will do that for you — and they're mergeable and will keep one-percent rank accuracy. And if we want to use that, you sidestep the bucket-sizing issue and you have metrics histograms that can merge in time.
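
A rough sketch of the DDSketch idea as described — logarithmically sized buckets, so any value maps to a bucket whose representative value is within a fixed relative error, with no preconfigured bounds (simplified; not the actual DDSketch library):

```python
# Simplified DDSketch-style bucket mapping for positive values.
# gamma = (1 + alpha) / (1 - alpha) makes each bucket span a fixed ratio, so
# the bucket's representative value is within alpha (relative) of any member.
import math

def bucket_index(value, alpha=0.01):
    gamma = (1 + alpha) / (1 - alpha)
    return math.ceil(math.log(value, gamma))

def bucket_value(index, alpha=0.01):
    # Representative value for bucket (gamma^(i-1), gamma^i].
    gamma = (1 + alpha) / (1 - alpha)
    return 2 * gamma ** index / (1 + gamma)

# Any value's estimate is within 1% relative error, no bounds configured:
v = 1234.5
est = bucket_value(bucket_index(v))
assert abs(est - v) / v <= 0.01
```

Because a value only contributes a count to its bucket, two sketches with the same `alpha` merge by adding bucket counts, which is the mergeability property mentioned here.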
A: Okay — I think, gosh, I think we need to double-check on the last value and make sure that we really have not misunderstood the users — maybe they really need the last value.
B: Yeah, I can go dig up this issue where someone literally said "that would not be what I want", asking sort of exactly that question. But yeah — so that was my first conversational topic of this nature: I still think people are going to come up and ask for last value, but let's suppose that is a separate, independent matter. I do think that this is a pretty reasonable proposal, with the DDSketch algorithm — I like it.
B: I have my own personal preference for t-digest-based implementations, but it's personal, and it's purely about sort of simplicity or complexity. And so I did link in the Go implementation of DDSketch, and call it the sketch aggregator, in the OTel prototypes, and what we've run into is problematic a little bit — and maybe you can help us. So it's great: it does the interface of...
B: ...sorry, DDSketch lets me compute quantiles, so I can use it inside my SDK to compute a summary in the classic sense, but I can't use it on the wire yet, because I don't know how to do that. And I think what's intended is that you would output the buckets that are non-empty in any given interval, and they can be merged — but there's not really a standard protocol for encoding, or a text representation for encoding, these values.
B: So what we would need in OTLP to do something like this is either to decide that we can use the standard histogram encoding for DDSketch values — which is a question mark to me — or to decide to add a dedicated DDSketch protocol in OTLP, like a data point type which is the DDSketch type as opposed to the histogram type; or we could go with some sort of generic approach, which sounds harder.
A: Adding a new type — I mean, if you know how to decode it, you know what you encode; it's the same thing.
B: Yeah — sorry, I'm just adding on: it provides a lookup function, but it doesn't provide a "what are your bucket boundaries" or "what's your internal state that we might need to encode", as far as I know. So it always seemed like that was an intention, and I've always had this question — so now, it's great if you can get an answer; that might be awesome.
B: And then, on this topic, if you will entertain me for just another minute: the thing I like about t-digest is that I've worked with it quite a bit, and I know how you can generate it from sample data, which I also like. But I think you can literally use a raw encoding to encapsulate the t-digest. So if you just give a data point — well, is it — well...
B: It can be raw — there are a couple of ways you can do this — but all it's really encoding are these centroids, which are like the middle of a bucket, and a weight factor. So all you need to do is list a pair — a list of pairs, which is value and weight, value and weight, value and weight — and so you can list value and weight using raw values. So it's possible that we could have an output from a...
B: ...t-digest algorithm that could just output raw values and say "these are t-digest values", meaning you can construct a distribution from them. And all I need is that sample-count information, which is the probability associated with each point. That was a digression. I like DDSketch because it has a much better formalization behind it; t-digest is sloppy — it works really well, but it's, like, academically sloppy, or it's just hard to prove anything about, so...
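
The raw encoding floated here for t-digest might look like this: the state is just a list of (centroid value, weight) pairs, from which counts and means are exact while quantiles are approximate (illustrative only; not a real t-digest implementation):

```python
# Illustrative t-digest-like state: (centroid value, weight) pairs, which is
# exactly the "value and weight, value and weight" raw encoding described.
centroids = [(0.5, 10), (2.0, 5), (9.0, 1)]

total_count = sum(w for _, w in centroids)              # exact: 16
mean = sum(v * w for v, w in centroids) / total_count   # exact: 1.5

def quantile(cs, q):
    # Crude estimate: walk cumulative weight until it covers q of the total.
    target, seen = q * sum(w for _, w in cs), 0
    for v, w in sorted(cs):
        seen += w
        if seen >= target:
            return v
    return sorted(cs)[-1][0]
```
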
C: For me — and I am a product manager who works with metrics, not the mathematician behind the DDSketch, or even an engineer, to be honest — for many years the thing that I liked about DDSketch is that it has been faster to work with. But it isn't even that: it's the guarantee of relative accuracy versus rank accuracy, right? Whereas a t-digest will give you the p98 or p100 for a 1% rank-accuracy difference — but it will give you a real value — the DDSketch will give you whatever...
C: ...number is within 1% of the actual p99 value, which is far less volatile. Yeah — I support...
B: ...DDSketch as the default. And maybe we could make a good case that, if we had that as the default, it would become a good default for value recorder, and then it would at least give us the ability to export min, max, sum, and count — but it still doesn't have a last value, so I'm left wondering. I will dig up this issue and I will post it in our Gitter channel, and we can talk about it on Thursday, if it feels like it's relevant to the...
C: Where I'm super curious on that is not just why people want the last value — which is clear to me — but why they want it in the instrument that is specifically bounded to a period of time, instead of the async one, which they can... yeah.
A: Just to put one more thing in your mind: should we remove temporality from summary? Because temporality...
B: ...does not — yeah, yeah. I think that raises, to me, another — if we have a few minutes: the summary data that Prometheus has is neither cumulative nor delta; it's smoothed over some arbitrary number of windows that I don't really know. And I think conceptually it makes sense to think about summary as being either cumulative or delta, but it's less useful as a cumulative, and it's not exactly what Prometheus does as a delta either. So I support what you just said — perfect.
A
I
would
do
that.
I
have
to
really
run.
I
have
to
prepare
for
another
with
a
client.
Sorry,
I
mean
these
stupid
meetings
as
well.