From YouTube: 2020-07-23 Spec SIG
C
Hey everyone, thanks for joining the call. Be sure to open up the doc and add your name to the attendees list. If you have any other issues you want to address in the agenda, please make sure to add them. We're probably starting a little late while waiting on Josh, I'm guessing.
E
Yes, I see you. My hope was that we could lead off this meeting by getting some background on the project and the proposal, and then, as we will see, there are several issues here that are related and problematic, and that's the real topic I'd like us to discuss. So why don't we let Yang, or one of the folks from Alolita's group, lead this.
F
Oh, sure, I can talk about it. This is Yang. We're trying to build an exporter to Cortex, which is an open-source, multi-tenant storage for Prometheus.
F
It originally takes data from the Prometheus remote write API, so we're trying to support the direct export of metrics from OpenTelemetry to Cortex. While we're doing this for the OpenTelemetry exporter, an issue we encountered is that the OTLP exporters are pass-through, so the aggregation has to be done by the collector, whether in a processor or in our exporter component. And currently, with the OTLP receiver, the Prometheus path, which also accepts cumulative data, is not working properly.
F
Counter values are exported to Prometheus as gauges, and our exporter would have the same problem. So we're thinking about how we can address this issue, and where this aggregation should happen; we could put it in the collector.
F
But then the problem is with multiple collectors behind a load balancer: delta events from the same metric might not be aggregated at the same collector, and thus we would be exporting the wrong value to our back end.
F
So our suggestion is to have OTLP export cumulative by default, so that the collector doesn't have to deal with this issue.
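The load-balancer problem Yang describes can be shown with a small sketch (series names and values are invented for illustration): when delta events for one series are split across collectors, each collector's aggregate is only a partial sum, so neither exports the true cumulative value.

```python
from collections import defaultdict

def aggregate(deltas):
    """Sum delta events per series, as one collector would."""
    totals = defaultdict(int)
    for series, value in deltas:
        totals[series] += value
    return dict(totals)

# Four delta events for one counter series.
events = [("http_requests", 1), ("http_requests", 2),
          ("http_requests", 3), ("http_requests", 4)]

# One collector sees everything: the cumulative value is correct.
assert aggregate(events) == {"http_requests": 10}

# A round-robin load balancer splits the same events across two
# collectors; each exports a partial (wrong) cumulative value.
collector_a = aggregate(events[0::2])   # {"http_requests": 4}
collector_b = aggregate(events[1::2])   # {"http_requests": 6}
assert collector_a != aggregate(events)
```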
A
Quick question about this: Prometheus is switching to use OpenMetrics, which has a concept of delta counters. Are you going to have that support? Because that will solve half of the issues.
E
It would not be able to apply deltas as the protocol stands, from my read of it, because there's no notion of a start time; you are just passing timestamped cumulative values.
E
I'd love to know how that's going to be solved, and I don't have enough information at this time. It seems like if the remote write API were to take a timestamp, then deltas could be done in a way that Cortex could hide a lot of complexity for us. I think it would then be possible to support multiple collectors in a sort of naive way.
H
Yeah, we can. This is Alolita. Hi, Josh. We have reached out to the Cortex team; I don't think there are any plans of adding that immediately, and that's something that, again, may make more sense to be on the OTel side anyway.
E
I wanted to interject a hopefully not-long statement here, but I guess I want to explain how we got here, and I realize it had to do with me; there may be a bit of ideology here that I'm ready to discard. When we started trying to integrate the world of Prometheus with StatsD, there was more of a practice in StatsD of just using any label.
E
So when you export deltas, you're allowed to forget things, which enables high-cardinality metrics. When you have to report cumulatives, you're not allowed to forget anything, and that makes a memory problem that basically prevents high-cardinality metrics from existing. I think it was probably some part of me just hoping to see the world move towards high-cardinality metrics when I promoted some of these ideas, because I've seen that it's possible to support high cardinality, but that doesn't mean we should break everything in the world just to do that.
E
So at this point, the simple solutions that we've found for this problem are these. One is just to change OTLP to specify that you'll always get a cumulative value. That essentially keeps the status quo with the Prometheus exporter model, and it basically says you can't, out of the box, get high-cardinality metrics without a memory leak. Essentially, I would be okay with a change of stance that changes the default, as long as there's a possibility that I can configure something.
E
The way I'd like: to have both no memory problem and high-cardinality support that's good enough. So one idea was to change OTLP's default behavior to always go cumulative, which has a memory cost. The other was to require an agent in such a configuration. If you have a single agent in the reporting path between your process and the back end, the agent is that single point that can easily convert deltas into cumulatives. But that's a pretty onerous requirement for a user that doesn't want to have an agent.
E
The users of StatsD are basically used to having agents, if you look at the Datadog sort of customers, but that doesn't mean we want to force people to do that. So those are the only two easy solutions that don't involve some sort of synchronization between the pool of collectors behind a load balancer, which I don't want to require.
A
Okay, Josh, let me tell you all the things. First of all, I would like everyone to review my issue about what we call gauges; is this the related issue right here? Yeah, these are all related: deltas for non-cumulative things, sorry, deltas for non-counters, as opposed to what Prometheus calls counters. So one of the problems, no matter what protocol we design, is that we have to make sure that our protocol can handle dropping of events or dropping of messages.
A
You will have big troubles: you may not get the alert fired when it should be, or you may get the alert fired when it shouldn't be. I tried to put down a lot of examples here. And I think StatsD was developed with this in mind.
A
StatsD has this support even for non-counters, for what Prometheus calls gauges, but I think StatsD did that with the mindset that they always have an agent: they're not going to lose messages between the library and the agent. But if this protocol is used across different services, across long network connections and so on, I think it's very error-prone to have deltas for things where you are interested in the current value, rather than things where you are interested in the rate of change.
A
Because for rates of change, missing a delta is not that important. Sure, you have a gap for a small period of time, but usually you will set up an alert over five minutes, not over every collection interval, so you can handle that. But for deltas on these up-down counters that we have, it's super error-prone.
A
If you lose any of these messages, that is; and I hope everyone agrees with me on this one. I know even OpenMetrics and even Prometheus are willing to allow you to send deltas for counters, but they are not willing to allow you to send deltas, or changes, for gauges, because the whole idea is that you are interested in the current value of a thing, and that current value must not suffer from the problem of missing messages.
D
For scraping, as in the Prometheus exposition format and OpenMetrics scraping, we only have the total value for counters, for precisely this reason: if you miss a delta, you have a loss of information. So we don't actually support deltas. That's one of the core things about the pull model.
D
If you have, like, a myriad of things you want to count, and long-lived processes, that's the trade-off we made: by default, nothing goes away, and we actually discourage anything from going away, always keeping the full counter, because this also allows us to see counter resets happening, and so on, and to do the appropriate math on it. Correct. But also, just a point of order:
D
Callum is now here, who's part of Cortex, and if you have any remote read/write questions, we can jump back into that topic a little bit. Okay, I just asked him to join; I think Callum should be here. Yes, he joined, yes.
E
I just want to quickly add that I agree with the issue in general. I think, if you're going to monitor a sum, the most reliable way to report that is as a sum; if you're going to monitor a rate or changes, then it's okay to export rates or changes. So there seems to be a distinction being drawn between up-down counter and counter.
E
I think it's pretty artificial: if you want to monitor the rate of change of either instrument, you should collect deltas, and if you want to monitor the sum, you should collect sums. And, as I said earlier, the issue does boil down to a concern about high cardinality, and I'm willing to drop it, particularly because I think we can do something here with the exemplars and the sampling. So this does end up
E
looking almost exactly like the issue that was mentioned at the beginning, which has to do with Prometheus and the Cortex remote write API. I don't know, Callum, if you'd like to introduce yourself; the question had to do with whether Cortex might ever support some sort of...
E
Well, what I'm trying to suggest is that today the remote write API is a list of timestamped cumulative values. The way we've been looking at the OTLP protocol here, there are these deltas and these cumulatives; in both cases you have a start time and a change in that sum
E
since the start time. If it's cumulative, that means the start time is sort of constant, and if it's delta, it means that the start time is the previous window's end time. For Cortex to help us, which I recognize is a very hard problem, the reports would start to look like deltas: they would say, this is the last point that I definitely reported, and here's the change since then, or here are all the points with relative values since that point in time. In other words, the reset timestamp is part of the request.
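Josh's description of the two temporalities can be sketched as follows (illustrative (start, end, value) tuples, not the actual OTLP messages): in a cumulative stream the start time stays constant, while in the derived delta stream each point's start time is the previous window's end time.

```python
def cumulative_to_deltas(points):
    """Convert (start, end, cumulative_value) points to delta points.

    Each emitted delta's start time is the previous window's end
    time; the first delta keeps the original (reset) start time.
    """
    deltas, prev_end, prev_value = [], None, 0
    for start, end, value in points:
        delta_start = prev_end if prev_end is not None else start
        deltas.append((delta_start, end, value - prev_value))
        prev_end, prev_value = end, value
    return deltas

# Cumulative stream: constant start time t=0.
cumulative = [(0, 10, 4), (0, 20, 9), (0, 30, 9)]
assert cumulative_to_deltas(cumulative) == [(0, 10, 4), (10, 20, 5), (20, 30, 0)]
```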
A
I see. So, Callum, I don't know if you know: OpenMetrics and the new Prometheus format are going to support this start time, or reset time, or whatever it's called. Does that mean you're going to support it, or does it mean you need extra effort to support that?
J
Well, as far as remote write and Cortex are concerned, there is no notion of the exposition format. It's basically re-reading Prometheus's write-ahead log and using that to generate data for remote write.
J
So, unless that format changes significantly, which I don't think it really will: we have some plans around changing the ordering of things as they're written to the write-ahead log, and maybe introducing some new data types like exemplars, for example. But in terms of changing how the data is actually written, like doing this delta bit instead of the entire value for a counter, I don't think we really have any plans to do something like that.
E
Can I ask a related Prometheus question? It seems to me that if the Prometheus client is ever going to reset and send a new start time on its cumulative series, and the Prometheus server is writing to a remote write endpoint, there's not enough information there. Therefore, what could be done, I'm imagining, is: when the Prometheus server sees a reset for some series,
E
look up the old series by reading Cortex, figure out where that same series left off, and then begin appending. This requires Prometheus to have state, so I don't know that that actually works; maybe I've just imagined something that can't be done.
D
A counter reset is just detected by the counter not going up by zero or more: if it goes down by even one, that's detected and handled as a counter reset. Having the literal same metric or time series name with a new start time is an anti-pattern.
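The reset rule being described can be sketched like this (a simplification of, not a copy of, the math behind Prometheus's rate and increase functions): any decrease is treated as a reset, and the post-reset value is assumed to count from zero.

```python
def total_increase(samples):
    """Total increase of a counter, treating any decrease as a reset.

    After a reset the counter is assumed to restart from zero, so
    the new sample's full value counts as increase (whatever grew
    between the last pre-reset sample and the reset is unknowable).
    """
    increase, prev = 0, None
    for value in samples:
        if prev is None:
            pass                      # first sample: nothing to attribute
        elif value >= prev:
            increase += value - prev  # normal monotonic growth
        else:
            increase += value         # reset detected: count from 0
        prev = value
    return increase

# Counter climbs to 7, the process restarts, then climbs to 3.
assert total_increase([2, 5, 7, 1, 3]) == 8
```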
D
Whatever it is, it's always the same thing: it's literally the same identity. Even something going away is heavily discouraged, but something going away and something new reappearing with the same identity is an absolute anti-pattern, at least in Prometheus and, by extension, Cortex and OpenMetrics.
A
I think Samir tripped you up when he added the start time. In OpenMetrics, when you have a start time, you have to respect it.
D
And yeah, I'm not saying you can't do this in OpenMetrics, but I'm very much saying that there will definitely be no specific provisions for this within anything coming from the Prometheus ecosystem. Obviously, as you're receiving the data, you're free to do whatever on your end; yes, absolutely agreed. But that's orthogonal to anything happening, or being directly supported, within Prometheus, with my Prometheus hat on.
A
I think what I'm hearing from everyone for the moment is: deltas may be problematic in a bunch of systems. But, to be honest, there are backends like New Relic which expect deltas for some of these things.
A
John or Tyler can jump in and tell me, but I know for sure that for some of the things they explicitly expect you to reset everything: whenever you export, you reset, start from the beginning, and do a new export every time. Okay, good.
D
In my opinion, having absolute values has better mathematical properties, but both basically work, and whichever you have, if your wire format requires the other thing, you need to recast it into the other thing anyway. So it's just a matter of deciding what the library should be doing, and what is better done within the library.
K
I'd like to remind people of John Watson's OTEP 126, which I think is addressing exactly the scenario we're talking about.
A
That may be so. It may be a library problem, Richard, but it may also be an intermediate-step problem. So imagine you have the library talking to another binary, which talks to the backend. Now, even though the library may do whatever we believe is the right thing,
A
there is this intermediate step, and this is where we don't know, between the library and this intermediate step, which is outside the process: if we go with absolute values, then in this intermediate step Tyler, for New Relic, will have to calculate the deltas; if we go with deltas, in this intermediate step Prometheus will have to calculate absolute values. So why not.
D
Prometheus cannot do this. There will be an exporter, or there will be an integration component to OpenTelemetry or whatever, but Prometheus itself can't do this: it already needs to be on the wire in the way which Prometheus understands. That is the problem: you will basically have a system somewhere which will be rebuilding the state anyway.
E
Correct. So I'd like to interject here; I think we're circling around the same point. We understand now that deltas are good sometimes and bad sometimes, and vice versa for cumulatives, and the proposal that we started with is that OTLP can support both. All we need to do is talk about what the default is, and if there's no good default, things get more complicated.
E
I don't want to have to ask the server what it's doing with this data before I know what my configuration is, and that leaves us, I think, in not a great situation. I mean, I can imagine having an environment variable or a configuration flag, like in John's proposal: when you're going to send the data remotely, you need to know whether you'd prefer deltas or cumulatives. And that, I think, is a reason why some exporters, if the exposition format supports both, may need configurations for this.
E
It's still not great, since I think we have to choose a default, and it's going to be bad for half the users.
C
But on that point, though: in, like, the Go SIG, the solution was to build into the SDK some sort of tooling that handled that; if you ask for it to be cumulative, it's cumulative, and if you ask for it to be delta, it's delta. Sending it over the wire is one thing, and I think some great points have been made here about the correctness associated with that, but eventually, in the collector, Bogdan, it feels like similar functionality should be implemented.
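Tyler's point about SDK-side selection could look roughly like this (a toy instrument, not the actual Go SDK API): the configured temporality decides whether the instrument forgets its state after each export, which is exactly the memory trade-off discussed earlier.

```python
class Counter:
    """Toy instrument with a configurable export temporality."""

    def __init__(self, temporality):
        self.temporality = temporality
        self._total = 0        # kept forever when cumulative
        self._window = 0       # cleared after each delta collection

    def add(self, value):
        self._total += value
        self._window += value

    def collect(self):
        if self.temporality == "delta":
            out, self._window = self._window, 0   # forget after export
            return out
        return self._total                        # never forgotten

c = Counter("cumulative")
c.add(2); c.add(3)
assert c.collect() == 5

d = Counter("delta")
d.add(2); assert d.collect() == 2
d.add(3); assert d.collect() == 3   # only the change since last export
```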
A
We'll have that; there is no question about that. So we started with the idea of dropping deltas entirely; Josh was kind of throwing that idea around. And what I'm trying to say is: we cannot completely ignore them, because if we want to be friendly with Prometheus and we want to be friendly with New Relic, we cannot ignore them. What we need to make sure of is: whenever we get to Prometheus, we go with absolute values; whenever we go to New Relic, we go with deltas.
E
And as Tyler's saying, the Go SDK already has this configurability, and OTLP could trivially be changed in the Go code right now to decide how to export, by building that memory state up in the process and never forgetting things, as we've talked about. The question that Alolita's group is bringing into this conversation, which started us off, was: we'd like to have the same type of transformation or processing done in the collector, because there are valid configurations.
D
Yep, read your chat or something, yeah. Just to walk back, of course: it feels a little bit like Groundhog Day, because we literally had those discussions both within the Prometheus team and within OpenMetrics, and that's why we arrived at exactly this problem, or this solution. The point is: if you have a generic system, a generic library, and you don't know, at the latest at start time of your process, what system will be ingesting your data, then by definition you must keep the cumulative values somewhere.
D
You don't know when that thing goes away, so something can basically DoS you by requesting cumulatives once, if you have a system which is built on just allowing deltas. So either you need to keep cumulatives by default, or you need a flag to deliberately disable them, because otherwise you will be having a bad time at some point. There is no way around it: if you throw away the data, it's gone, and you need it if you do absolute values.
A
So what do you do in the source? You can say you go with cumulative, but actually, if the exporter that you need requires deltas for whatever reason, like New Relic, and this is configured in the intermediate step: if we go to always producing cumulative in the library, you will produce cumulatives and then convert them to deltas in the intermediate step.
A
And people will not be happy, and maybe we are trying to solve a very complicated problem. But I think we have to keep this in mind, and I don't know whatever the best solution is. But I do understand all the points about absolute values being better; that's the point I was trying to make there, especially for gauges and stuff like that, even for an up-down sum, like a gauge where you track queue size and you do plus one, minus one.
E
I'd like to call time on this conversation; we're going in circles, and I don't know that we have any great answers other than: require some configuration, and choose a default that will upset people. I think we should all agree that this is one of those issues we should just think about for a week and talk about next time.
E
If anyone doesn't mind, then, there's this: I sort of want to leave some time at the end, so maybe we can try to make this quick. Bogdan and I both have proposals. Bogdan's raising his hand; go ahead.
A
One thing: for the issue that is filed in the collector, and for all these things, I would prefer to wait for OTLP to be finished before we start implementing that; please, please don't go write PRs. There is an issue in the collector to do this accumulation and aggregation, to transform deltas into cumulative and so on. Until we have the final protocol, it will be hard for me to accept something that tries to apply to the new protocol. So please.
E
Yeah, I've been encouraging that; basically, I've been talking with that group, and there is a thought experiment that needs to happen in the context of the next conversation about these two proposals that we have. The thought experiment is: you're a collector; you're going to get data in, and you're going to put data out if you're processing. So we need to understand the valid transformations before we can do this. So I think they're totally coupled, as you say. I just want to...
E
I think the link might be wrong here, but there are two proposals, and if you've been following along, you'll note that I had mine out last week and Bogdan has come up with another one. I want to say very clearly that Bogdan taught me a lot: by reading through his proposal I realized I had made some errors, and so I have updated my proposal. I still like my proposal.
E
I just want to say that clearly, but it was missing some things, so I basically rewrote all the comments in the last day and a half. I don't think we have time to read through them here, but the summary I would give you all, very briefly, is that I still have 18 different real combinations of these: temporality, structure, and what I'm now calling continuity, which is the property of being a snapshot or not. They're all valid, and I've got ways of thinking about them that I'm starting to get comfortable with. I've documented all the most important distinctions here, and then I've updated comments on these bits themselves.
E
A little bit based on my understanding: one of the things I was missing in my prior proposal was this notion of instantaneous points, especially: when do we compute histograms of instantaneous points? In my understanding of this now, the snapshot property is the one that enables that, and you can see I've documented it. That's my proposal, and I guess you could...
E
Now, Bogdan's proposal has less of the sort of complicated "kind" variable, but changes the data points, and there are ways in which both of these accomplish our goals. I haven't conclusively finished evaluating either at the moment. Bogdan, I'd like to give you a chance to state the same in your words.
A
I'm not changing the data points; I'm changing... so, right now, in your proposal, you have a value type and a kind, and I was struggling to understand exactly all these combinations. I think that's one of the things I asked you in the chat, and I don't know if you addressed it: are these six value types valid for all the 18 kinds?
E
Yeah, and I've tried to add comments to address that. If we think in terms of the OpenTelemetry instruments, there are six of them and there are three temporalities, and those are the 18 combinations. But we've replaced the six instruments with these two variables, one with three values and one with two.
A
But can a histogram be all of those 18 kinds, or just some of them?
E
Yeah. So, the new distinction that I came up with to help me get to this, and by the way, I just want to say that I'm not going to fight to the end on this, because there are many ways we could come up with a proposal here. I came up with a new descriptor, if you will, for the value types themselves, and I'm now starting to think of a distinction between what I'm calling single-value data points and multi-value data points.
E
Histogram is a multi-value data point, whereas scalars are single-value data points, and when we add raw data points, those are also single-value. So the question I came up with is: when is it okay to have a single-value data point, and when is it not okay? And the only time I can think it's not okay to have a multi-value data point is when you're instantaneous and continuous, meaning these are not snapshots.
E
These are not from a callback; these are individual moments in time. If you happen to have the same instant timestamp, it's a coincidence; it doesn't mean anything. So you shouldn't be allowed to have a histogram if you're giving an instantaneous, continuous variable. That was the only new kind of insight that I got.
A
Before, it was 15, and then I got like 60 combinations, and I looked at that: man, I'm not going to be able to process all these 60 combinations, all these 60 things. It's going to be very hard for me to know what to do with these values. And I gave you examples, and aggregation: I call it aggregation of aggregation, which I put in the document, which was essentially: I'm starting from a raw measurement,
A
I'm doing some delta sum or something like that, and then I'm applying a histogram on top of these things, and I failed to understand which of the 60 things I should choose for each of these. So, with that in mind, and maybe I'm not that smart and I don't know how to do it correctly,
A
but with that in mind, I started from a different angle: from what this data represents, as points. And I found, if you open my PR here, yeah, if you go to the oneof definition, that's where it matters the most, after the sum, histogram, summary types. So I found that we have, and again the comments are not updated, I was just trying to put something together, I found that I have the ability of exporting raw measurements.
A
Raw measurements meaning: somebody does a plus one or minus one on an up-down counter, or somebody records a latency, and this is the latency. Why do we support this? It's a good question; I think it's a thing we should consider supporting. The second thing was what I call gauges, or you can call them scalars or anything; the idea of this is that somebody gives me, let's say, a pre-calculated percent of something, or they calculate a value, and I don't know whatever formula they used to calculate it.
A
So I have to be able to support these things as what I called gauges. And it may not happen that the user calls me synchronously; it may happen that the user gives me, let's say, CPU usage for every core, where they calculate the percent per core from the total.
A
It is no longer the raw value, because there is an aggregation happening, but it produces a scalar, and I don't know whatever aggregation or transformation they used. So I call them gauges; maybe that's a wrong term, maybe just "scalar" is good enough. Then the next one is when the user does a sum for me. A sum is, and I have my own concerns about an up-down sum being represented as a sum or not, because of the issue that I filed, but this is mostly what counters are in Prometheus and such.
A
So I know that this is an accumulation of things over time, so I know I can build on it in the back end: I can use temporality to determine deltas, to calculate rates, to do things like that. Then I have histograms, I'm exposing histograms, and then I'm exposing what we call summaries. And now, if you go back to all of these things, each one has a list of properties about these points, which tries to map what Josh does in the 60 values.
A
But more focused: this is a histogram, and here are the properties for that histogram; or this is a sum, and here are the properties for this sum: is it monotonic or not monotonic, is it delta or cumulative, and all these properties based on that. So, anyway, my approach was: first describe the data, and then the properties of the data.
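A rough sketch of the shape Bogdan describes, with field names invented for illustration: the point type (gauge, sum, histogram) is stated first, and properties such as monotonicity and temporality hang off the types where they make sense.

```python
from dataclasses import dataclass, field

@dataclass
class Gauge:                 # a scalar from some unknown aggregation
    value: float

@dataclass
class Sum:                   # an accumulation of things over time
    value: float
    is_monotonic: bool       # counter vs. up-down counter
    temporality: str         # "delta" or "cumulative"

@dataclass
class Histogram:
    bucket_counts: list = field(default_factory=list)
    temporality: str = "delta"   # a histogram is never a plain gauge

points = [Gauge(0.42),
          Sum(10, is_monotonic=True, temporality="cumulative"),
          Histogram([1, 5, 2], temporality="delta")]

# A backend dispatches on the concrete type first, then its properties.
assert all(type(p).__name__ in {"Gauge", "Sum", "Histogram"} for p in points)
```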
E
There's something a little bit more difficult to grasp about my proposal, but in the exact example of the distinction here: I want to have, in my proposal, one scalar value. It can be a sum if it's an adding kind of structure, and it can be a gauge if it's a grouping kind of structure, and I should always be able to figure out what I got based on the values of the kind.
E
And the data points should always sort of stand on their own as data points: you should be able to strip the kind away and all you've got are numbers, if that's all you want; but the kind tells you something about how you got here. So I think if we keep talking about this, we'll run out of time for the other issues, and I think the two of you and I talking is not going to help anybody. We should get others to read through this and think critically about it.
E
I think, though, this is really urgent; we've got to move forward. This can't sit for that long. So I think we need more people to familiarize themselves with the issues.
C
Yeah, well, I haven't caught up on the latest, but my plan is to hopefully read into this more. I agree that maybe you two talking about it is not going to progress the issue, but I'd encourage everyone else to also weigh in on this, because, as Josh has pointed out, this is really critical.
E
And not being able to help there: the team working with Cortex can't move forward on a processor and so on, so it's really tricky. Okay. I didn't expect the conversation about cumulative to last so long, but we didn't find a great solution there, and these are not entirely unrelated either.
E
I think we need to give time to Justin, who's been taking over a couple of issues that are moving a lot slower than we'd like, and there's some confusion. Justin, do you want to give us an update?
K
Yeah, okay. So we had my proposal from about a month ago about semantic conventions for timed operations, which is about how to create golden metrics out of spans. Some of it kind of hinges on how we describe error conditions, and while we were really close to a resolution on it, things were kind of blown up, because the errors SIG has started rethinking how they're going to describe errors.
K
I didn't attend the errors meeting this morning, but I got a recap from our representative, who did.
K
It's actually moving in a direction that I really like, where the instrumentation will try to only apply objective information to spans, about what occurs during a span, and it won't try to make any judgment calls about whether that situation is an error. So I'm really liking that. There was also, I think it was actually Bogdan's suggestion, the idea that we strip out things we are unsure of before GA, so that we can easily add them back in later, because removing things will be much harder down the road.
K
Given all of this, where I'm currently at on our proposal for making metrics out of spans is: I think that maybe, in our generic semantic conventions, we shouldn't include something like a status, and we should defer that, for now, to the category-specific semantic conventions. So for HTTP, we'll put the HTTP status code in the semantic convention; for gRPC, we'll put that status code in; and we can solve those on a case-by-case basis without coming up with an abstraction for status.
K
I believe that's interpretive, and I don't think that instrumentation should be responsible for setting an error equals true/false. I actually think this is a good candidate for something like a views API: with a metric view you could describe your own error conditions and apply your own booleans if you wanted to.
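The views idea described here, letting users map objective status attributes to their own error booleans, might be sketched roughly as below. The function names and attribute keys are illustrative stand-ins, not the actual OpenTelemetry Views API:

```python
# Hypothetical sketch of "views" that derive an error label from objective
# attributes recorded by instrumentation. Instrumentation records only facts
# (the status code); the view applies the user's own definition of "error".

def http_error_view(attributes):
    """Classify a request as an error based on its HTTP status code."""
    status = attributes.get("http.status_code", 0)
    # Treat 5xx as errors; another user could choose >= 400 instead.
    return {**attributes, "error": status >= 500}

def grpc_error_view(attributes):
    """Classify a request as an error based on its gRPC status code."""
    # gRPC status 0 is OK; anything else is treated as an error here.
    return {**attributes, "error": attributes.get("rpc.grpc.status_code", 0) != 0}
```

For example, `http_error_view({"http.status_code": 503})` yields an attribute set with `error` set to true, without the instrumentation itself ever making that judgment call.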
A
I support that. By the way, I was in the meeting, and I pushed a lot for them to think about the GA goal. What I told everyone was that error equals true is very subjective, and we should not have it for the moment. We should have only facts, a fact being: this was an HTTP code 200. Leave it there, and we can define a mapping later to errors, non-errors, and so on.
E
I like the idea that the views API answers this question for us: just put in whichever statuses you have and define a view for your error conditions. But to do that in the client is going to be a tricky bit of code, and I'd rather not, so I think it's going to end up that the collector will be tasked with implementing these more complicated views.
E
Eventually, that's probably what will happen, Justin. I think that part is actually uncontroversial, but it's really a question for that group. Now, the question was really about what was intended by 657, and I think there are reasonable disagreements about interpretation there. Do you want to tell us what you think?
K
Yeah, okay. So my intention, which I recognize is only one view here, was that this PR would describe how an instrumentation provider would create metrics: what semantics to use for metrics that are based off of spans.
K
I really like the direction that John took it. John Watson put together an abstraction that timed an operation and created both a span and a metric, or incremented a value recorder, at the same time. That was my intention. I certainly want something upstream of the collector, because I think it's very important that we get this information before the effects of sampling have taken place.
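The abstraction described here, a single helper that times an operation and emits both a span and a duration measurement, could look something like the following sketch. These are illustrative stand-ins, not John Watson's actual code or the OpenTelemetry SDK types:

```python
import time
from contextlib import contextmanager

class ValueRecorder:
    """Stand-in for a metrics value recorder that aggregates durations
    client-side, upstream of the collector."""
    def __init__(self):
        self.values = []

    def record(self, value, attributes=None):
        self.values.append((value, attributes or {}))

@contextmanager
def timed_operation(name, recorder, spans):
    """Time an operation, producing both a span and a metric measurement."""
    start = time.monotonic()
    try:
        yield
    finally:
        duration = time.monotonic() - start
        spans.append({"name": name, "duration": duration})
        # Record the measurement before any sampling decision, so every
        # operation contributes to the metric even if its span is dropped.
        recorder.record(duration, {"operation": name})
```

Because the measurement is recorded inside the helper rather than derived from exported spans, the metric reflects all operations, which is the "before the effects of sampling" property mentioned above.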
A
So, to answer your point: having this happen before the collector, in the library, is a library change, and it's independent of the semantic conventions. One of the things we pointed out there is that you cannot add subtle things to the semantic conventions that imply an API change; you have to make a proposal to change the API to support this, if that's what you want. From a semantic conventions perspective, I think that reference should not exist. It should just be: if you have the span, this is how you produce the metric.
A
This is what I'm hearing: if you make this an OTEP, which allows you to propose the related changes in other places, I'm fine with it. But as just a semantic convention, I'm not happy to see that it points to other changes that, again, should be addressed independently. For me, when I read a semantic convention, it's either: am I the person who adds this information to the span, and here is what I should follow; or am I a consumer of this span?
E
When I originally read this, I kind of thought: what we want is a semantic convention for naming any metric that measures a duration, which is a very simple statement and says nothing about spans. But it does carry implications: if you're going to generate a metric from a span, it should follow the same pattern, and that was all it needed to say, in my opinion. I think Justin's ambition was a little greater, and I don't disagree with having a stronger recommendation, or more specific recommendations, for naming metrics derived from spans.
E
Those would still follow that first pattern, which is ending in .duration, but the greater pattern might, you know, share some category naming with the metric. I guess I had hoped that the broad language of OTEP 108, which says how you should name metrics, would cover that category and so on, so that we don't need to say anything more than: you have a metric measuring duration, call it something dot duration.
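The naming pattern being discussed, a duration metric named after its category with a .duration suffix, amounts to something as small as this sketch (the category strings are illustrative, not fixed by the conventions):

```python
def duration_metric_name(category):
    """Name a duration metric after its operation category, following the
    ".duration" suffix pattern discussed for OTEP 108-style naming."""
    return f"{category}.duration"
```

So a metric timing HTTP server requests would be named, for example, `duration_metric_name("http.server")`, sharing its prefix with the span's category.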
E
Okay, but that's just my feeling; it doesn't really matter.
A
And the other thing is: should these metrics be calculated offline or online, for example? Should we calculate these metrics while the spans are produced? There are a bunch of things we can discuss about this, but from the semantic convention's perspective it shouldn't matter where you do it; it's just: this is the data you should produce. Now, where do we do it and how do we do it, that's a different thing from the semantic convention.
A
For me, it shouldn't say how; it should say what we produce. Anyway, okay, that was my understanding, and again it can simply be two separate issues: one that explains the data that we want to produce, and one that explains how we produce this data. As two different things, I'm happy. Go ahead, George, I see you disagree with me.
E
Oh, I don't know. I want to call time on this conversation as well. I think Justin has enough information to go back and revise it, I guess, or, you know, I'm not sure. But there are two items left here and I want to get to a little bit of both of them. Can I skip ahead of Chris's item and just make sure to answer the last one?
E
OTLP is our biggest problem right now. We have some debate, and I feel that we're close, and I want to make sure that we don't introduce artificial delays. We've got to focus on those proposals; nothing else matters right now, because we can't get to GA if we don't get a protocol that we think is going to work.
E
So I don't know when it's going to stabilize. We don't have command and control here; I can't just do it. We have to arrive at agreement, and it's hard. All I know is that I wish it was done a month ago, and I've been working as hard, or as fast, as I can, but it's just stuck on this confusion.
D
Thanks. Ideally, just drop me an email, and that's probably best.
E
Okay, well, there's a minute left. I don't know, Samuel, did you want to talk about views?
B
I'm going to say there's a quick FYI: we've got a couple of internships that are ending, and I know I've been out of this for long enough now that I'm missing a bunch of important context, but I basically just wanted to check that we could continue with the exemplars work that Conor put up in the OTEP.
E
I don't want any intern to be blocked; I wish I could say that they shouldn't be at all. I love the exemplars proposal, the OTEP one, PR 159, the proto. I would change nothing about it. It's just that it feels like it should be blocked behind this other, bigger question about OTLP kinds or whatever it is; that's the only reason it's blocked. I think it's going to land as is, or I hope it is, and I've said this every time it comes up.
E
This is, to me, the most exciting thing that we can do here, because exemplars with statistical sampling give us a way to expose high cardinality without the memory requirements. That's a promise, and I know it can be done. There's a question of how we're going to get this into the open source world; I've actually been asking my employer if I can open source something, and I'm hopeful. It's very exciting, and I don't want to block the interns.
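The property claimed here, surfacing high-cardinality detail under a fixed memory bound, is typically achieved with reservoir sampling of exemplars. A minimal sketch of that idea (illustrative only, not the design in the exemplars OTEP):

```python
import random

class ExemplarReservoir:
    """Keep at most k exemplars per metric stream, sampled uniformly from
    all observations, so memory stays bounded regardless of volume or of
    how many distinct attribute sets (cardinality) flow through."""

    def __init__(self, k, seed=None):
        self.k = k
        self.seen = 0
        self.exemplars = []
        self._rng = random.Random(seed)

    def offer(self, value, attributes):
        self.seen += 1
        if len(self.exemplars) < self.k:
            self.exemplars.append((value, attributes))
        else:
            # Classic reservoir sampling: keep the new observation with
            # probability k / seen, replacing a random existing exemplar.
            j = self._rng.randrange(self.seen)
            if j < self.k:
                self.exemplars[j] = (value, attributes)
```

However many observations arrive, only k exemplars are retained, each carrying its full high-cardinality attributes (for example a trace ID), while the aggregate metric itself stays low-cardinality.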
B
Okay, great, yeah. So what we can try and do, then, is just go ahead with exemplars in Conor's Python prototype. It's possible this will include some stuff from the views prototype that gets changed in the future, but I think it's not the end of the world if we have slightly different views implementations in Go and Python right now.
A
As long as you make sure that it's experimental, mark the code experimental, not stable or whatever. I would strongly encourage you to do that, because it improves us; it's working better than you can imagine. So please do that, thanks.
H
So, just one closing question: even our interns are blocked. Would you therefore be able to give us a clearer answer on the OTLP proposals by next week?
A
Are you blocked by the aggregation? Yes? By the way, talking about aggregation: can you take a look? There is another Google intern who is doing a processor for aggregation inside the collector, right?
E
With aggregation, we've talked about how, if the client goes to cumulative reporting, the collector processor is not a blocker for you. I still think there's interest in having a processor that can do delta-to-cumulative of every kind, but we just discussed it for half an hour; it's problematic.
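The delta-to-cumulative conversion under discussion is straightforward for a single collector instance but breaks when deltas for one stream are spread across several collectors behind a load balancer, which is the problem raised earlier in the meeting. A minimal single-instance sketch (hypothetical, not the actual collector processor):

```python
from collections import defaultdict

class DeltaToCumulative:
    """Accumulate delta counter points into cumulative totals, keyed by
    metric name plus label set. This is only correct if ALL deltas for a
    given stream reach this one instance; with multiple load-balanced
    collectors, each holds a partial (wrong) cumulative total."""

    def __init__(self):
        self._totals = defaultdict(float)

    def process(self, name, labels, delta):
        # Labels are sorted so the same label set always maps to one key.
        key = (name, tuple(sorted(labels.items())))
        self._totals[key] += delta
        return self._totals[key]
```

If the client instead reports cumulatives directly, as suggested below, no such stateful processor is needed and a Prometheus-style backend like Cortex receives usable counters as-is.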
E
I would recommend figuring that there's going to be some way to tell the client, if not by default, and I think it probably will be the default, that you should get cumulative counters, and so Cortex will just work. So I don't think we should be blocking that project on OTLP.
H
So you think that we should just add that functionality to the collector?
E
I would say we should not worry about converting deltas to cumulatives in the collector, and proceed with as much of the project as we can, because if we go forward with changing the default to report cumulatives from the client libraries, it'll just work, and you won't have to worry about this processing question.
E
Okay, so yeah. In general, I think all these protocol variations that we're talking about had better be compatible, or at least transformable into one another. So if you were to code against one of the drafts that we have, and then we make changes later, I doubt it would be a big, disruptive change.
E
That's good, okay. But everyone, please give your opinions on these OTEP proposals. I'm going to keep studying it, because I haven't finished writing the most convincing case for it either, and, as I mentioned to you all, there's a thought experiment of how I would write the processor given these 18 kinds, so that's sort of one of the ways I will be analyzing this. Okay, we're out of time. Thank you all. I'm sorry we're moving so fast; we'll keep trying. Have a great day. Thanks.