From YouTube: 2022-07-29 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Should we... can we get started without? I know Reggie can't.
A: That's true. I'm going on vacation for two weeks, so I'll still be semi-active on open source, because it's fun, but I won't have any work, so it'll be good.
A: Okay, so we only have one new bug. Do you... I actually can't share my screen, because I joined with the wrong account. That's okay.
A: But yeah, we have one new bug to triage, and then we can go through the backlog, but it is the one that is assigned to you.
A: "Allow exporter to determine when to output spans and traces." Exporting spans and traces instantaneously as they occur, such as when a new event is added to a span, is what they're after: treat exporters as outputting spans and traces as they see fit, not dictated by when a span or trace has ended.
A: "I'd like to capture crashes up to the last moment possible. Currently, if a desktop application crashes with hundreds of spans with events in flight, all the data is lost. By having the exporter control when to output events, spans and traces to a persistent file, we could capture the events that led to the crash. However, since the export handler is only called when a span is ended, this is currently not possible. An alternative approach is to have a global exception handler unravel each span and close them, but that's not possible for all applications."
B: Yeah, a very old one, just going by the number. I kind of agree. I like to use these issues to ask for more from OpenTelemetry sometimes. In this case, the one I'm thinking about is indefinite operations and spanning them, and I have a memory of what Google did with its requests: these servlet and indefinite operations are really well supported there, and I'd love to see OTel do that.
B: Even if they're not complete. And then have flush and shutdown do the same thing you would expect them to do, which is to shut down or flush them when you're done, the same way. I think that would give this user what they're asking for as well.
A: Yeah, so architecturally the most important thing, and this is something that I'm a little worried about with some of our SDKs and exporters: you need to pre-allocate the memory that you're going to use to export, because when you crash you're not guaranteed you're going to be able to access it. Because our exporters are doing translation logic, it's unlikely they're able to pre-allocate that memory, unless we let them know about things ahead of time.
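A toy sketch of the pre-allocation idea being discussed, not any real OTel SDK API; the `CrashSafeExporter` name, its buffer size, and its methods are invented for illustration. The point is that serialization and allocation happen eagerly, so a crash handler only has to write bytes it already owns.

```python
import io

class CrashSafeExporter:
    """Illustrative only: keeps pre-serialized span bytes in a
    fixed-size, pre-allocated buffer so a crash handler can dump
    them without allocating or translating anything at crash time."""

    def __init__(self, capacity_bytes=64 * 1024):
        # Pre-allocate the entire export buffer up front.
        self._buf = bytearray(capacity_bytes)
        self._used = 0

    def record(self, span_bytes):
        # Serialize/translate eagerly, while allocation is still safe.
        n = len(span_bytes)
        if self._used + n > len(self._buf):
            return False  # buffer full; drop rather than allocate
        self._buf[self._used:self._used + n] = span_bytes
        self._used += n
        return True

    def flush_on_crash(self, sink):
        # Only a write of already-owned memory happens here.
        sink.write(self._buf[:self._used])

exporter = CrashSafeExporter(capacity_bytes=16)
exporter.record(b"span-1;")
exporter.record(b"span-2;")
sink = io.BytesIO()
exporter.flush_on_crash(sink)
```

The design choice mirrors the concern above: an exporter that translates at export time cannot make this guarantee, which is why translation would have to happen ahead of the crash.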
A: Hold on, I'm just getting the idea down; I'm going to write out what we have to do. Okay, that's all.
B: Okay, it seems like a duplicate, but I want to ask you a wild-idea kind of question on this topic. The reason why it's hard to give this user what they want is that we have this existing, pretty detailed specification that doesn't really give anyone what they want. The processor pattern and the exporter pattern are low-level abstractions that are hard to customize.
B: I guess what I'm trying to say is, moreover, we don't have a very rich, what I'll call "view" specification for spans. And so samplers are kind of this wing of the spec that you don't know how to use, and nobody knows how to configure their processor/exporter pair to do something smart, which is what you need to do. You need to wire them both together somehow, and samplers are not connected to that story.
B: It basically just requires you to completely start over from scratch with your tracing SDK, is what I'm afraid of. I'm afraid that the tracing SDK today has not been required to keep track of every active span.
A: Yeah, can I add my current thinking to that? This was actually, I believe, Ted who mentioned this to me, but it could have been an independent thought; I just don't think it was. When we do traces today, we are trying to keep an in-memory state of when a trace is complete and firing that out. An alternative implementation could be a simple API model that actually fires every trace API call as an event, and then something collects these events and aggregates them back into memory.
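The event-based alternative just described can be sketched in a few lines. This is a hypothetical model, not the OTel tracing API: each span API call emits an immutable record immediately, and a separate consumer folds the records back into span state, so nothing waits for span end.

```python
# Illustrative event-stream span model; all names are invented here.
events = []

def emit(kind, span_id, **payload):
    # Every API call becomes an event the moment it happens.
    events.append({"kind": kind, "span_id": span_id, **payload})

# Instrumented code emits as it goes; nothing is buffered per span.
emit("start", 1, name="checkout")
emit("event", 1, message="cache miss")
emit("end", 1)

def aggregate(stream):
    """Fold the event stream back into per-span state."""
    spans = {}
    for e in stream:
        if e["kind"] == "start":
            spans[e["span_id"]] = {"name": e["name"], "events": [], "ended": False}
        elif e["kind"] == "event":
            spans[e["span_id"]]["events"].append(e["message"])
        elif e["kind"] == "end":
            spans[e["span_id"]]["ended"] = True
    return spans

spans = aggregate(events)
```

Under this model, a crash simply truncates the event stream; everything emitted before the crash is already out of the process's hands.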
B: ...for OpenTelemetry three years ago, because I didn't want to do another ordinary tracing library. So I started from that, and I called it the "streaming SDK", and it had a basic prototype for span events and metric events passing through the same pipeline. It wasn't doing it over a process pipe to a separate process, because it was just much easier for me to demo an in-process consumer of the same thing.
B: In fact, that would be a competing approach to the one I was thinking of, which is to try to use the metric SDK to do spans, which sounds weird at first. But I've been thinking about it for a while. A lot of the hard problems that we've solved in the metrics SDK have to do with memory management and dealing with things like high-cardinality explosions and indefinite lifetimes.
B: Having put so much rigor into that, I'm inclined to try and use it for spans. It just means creating a totally new instrument, called the span instrument, and requiring you to create your span instruments much like you create a metric instrument. It might have some static attributes, and kind information would be set up at construction time this way, which is totally new. That's an API change that I've been thinking about.
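The "span instrument" idea is only a thought experiment at this point in the conversation, so the following is a purely hypothetical sketch: `Meter`, `create_span_instrument`, and the attribute handling are all invented names showing what "declared up front like a metric instrument" could mean.

```python
# Hypothetical "span instrument" API, invented for illustration.
class SpanInstrument:
    def __init__(self, name, kind, static_attrs):
        # Name, kind, and static attributes are fixed at construction,
        # much like a metric instrument's descriptor.
        self.name = name
        self.kind = kind
        self.static_attrs = static_attrs
        self.active = 0

    def start(self):
        # Per-span work can be minimal because the descriptor exists.
        self.active += 1
        return {"instrument": self.name, **self.static_attrs}

class Meter:
    def __init__(self):
        self.instruments = {}

    def create_span_instrument(self, name, kind="internal", **static_attrs):
        inst = SpanInstrument(name, kind, static_attrs)
        self.instruments[name] = inst
        return inst

meter = Meter()
http_span = meter.create_span_instrument(
    "http.request", kind="server", component="frontend")
span = http_span.start()
```

The declared-up-front shape is what would let an SDK pre-allocate per-instrument storage, which connects this idea back to the crash-export discussion.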
B: I'm going to use the machinery that I had to build for delta temporality to avoid high-cardinality explosion problems, to avoid leaky-span problems, or whatever you call it when I forgot to close my span. And in a language unlike Go, one without these contexts, where it's manual, like in C++: do you remember leaky spans ever causing a problem? It was a huge problem back in my day at Google, and that was why Census did something with reference counting, right?
B: So the idea of a metric-SDK-based implementation of a span is to do the OpenCensus thing, the Google Census thing, of not having span objects in memory, but having an aggregator in memory that's lock-free or, you know, managed, just like another metric time series would be. That's about the level of detail I'm thinking at.
B: I actually don't mean lock-free in that sense. What I actually mean is that there's a separation of locking: the SDK has some state, at least in my implementation, about the lifetime of handles for metric things, and it's very short-lived, held while you're looking one up and operating on it. Then there's an export-cycle lifetime, and there are multiple readers, so there are different, separate states for different export lifetimes.
B: What I want to figure out, and I haven't completely solved this question, I haven't tried to implement it yet, is whether the metrics can be aggregated by span. It's almost a different idea than the exemplar idea, where you have span IDs attached to metrics. I want to put metrics in spans, basically, and I have a feeling that if I use the metric SDK to implement the span, that'll be a little bit easier.
B: All the shutdown and flush is done for me. The problem being discussed in this issue in front of us, which is why I was talking about this, would basically be solved, you know, from a spec standpoint, except for the problem that you mentioned, which is totally separate: the memory-allocation-at-crash-time problem. But yeah, anyway.
A: I hear what you're saying, I think. Basically, what we're saying specific to this bug is that there's enough information that we could accept or reject it as in scope or out of scope. The scope that we're talking about is really large.
A: I think there's a way that we could do this with our existing API, without changing it. The SDK is where things get interesting, because to some extent the things we're talking about can... Can we define a span exporter / span processor pairing that pre-allocates memory for spans when they enter the queue and can flush them in a partial state?
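One possible shape of that pairing, sketched with an invented API rather than the spec's SpanProcessor interface: the processor reserves an export record when the span starts, keeps it updated, and `force_flush` can emit spans that have not ended yet, marked partial.

```python
# Illustrative partial-flush span processor; names are invented here.
class PartialFlushProcessor:
    def __init__(self):
        self._live = {}  # span_id -> export record, reserved at on_start

    def on_start(self, span_id, name):
        # Reserve the export record the moment the span enters the queue.
        self._live[span_id] = {"name": name, "events": [], "partial": True}

    def on_event(self, span_id, message):
        self._live[span_id]["events"].append(message)

    def on_end(self, span_id):
        self._live[span_id]["partial"] = False

    def force_flush(self):
        # Export everything we know about, ended or not; spans still in
        # progress remain tracked so they can be exported again later.
        out = list(self._live.values())
        self._live = {sid: s for sid, s in self._live.items() if s["partial"]}
        return out

p = PartialFlushProcessor()
p.on_start(1, "checkout")
p.on_event(1, "cache miss")
p.on_start(2, "db.query")
p.on_end(2)
flushed = p.force_flush()
```

A crash handler calling `force_flush` here would see both the completed span and the in-flight one, which is exactly what the issue under triage asks for.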
B: So let me follow your proposal there. I've seen it done already: the Java sampling group has a sampler that is a configurable, rate-limited, reservoir, consistent-probability sampler. That's a lot of words, but what it means is that it does pre-allocate space for a fixed number of span pointers, which isn't the same as memory, of course, but the same concept, and there's a limit there.
B: It speculatively starts spans to meet its reservoir quota, essentially, and then during the export-and-process pipeline drops them. So something can be dropped from export, somewhere between process and export, depending on other span activity, and it requires you to bind those two together; of course, you're implementing a sampler. So the point is that sampler, exporter and processor can be crafted, very carefully and not without a lot of difficulty, into something configurable.
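The reservoir idea can be sketched with classic reservoir sampling (Algorithm R). This is not the Java contrib sampler itself, just an illustration of "speculatively admit every span, decide at export time which survive, with equal probability":

```python
import random

class ReservoirSampler:
    """Illustrative reservoir: every started span begins recording, a
    fixed quota of 'span pointers' is pre-allocated, and previously
    admitted spans may be dropped before export."""

    def __init__(self, quota, rng=None):
        self.quota = quota
        self.seen = 0
        self.reservoir = []            # fixed-quota set of span pointers
        self.rng = rng or random.Random(0)

    def on_start(self, span):
        self.seen += 1
        if len(self.reservoir) < self.quota:
            self.reservoir.append(span)
        else:
            # Replace a prior span with probability quota/seen, so all
            # spans end up retained with equal probability.
            j = self.rng.randrange(self.seen)
            if j < self.quota:
                self.reservoir[j] = span

    def export(self):
        out, self.reservoir, self.seen = self.reservoir, [], 0
        return out

s = ReservoirSampler(quota=3)
for i in range(100):
    s.on_start(f"span-{i}")
batch = s.export()
```

Note how the drop decision lives between "process" and "export", which is why the real thing requires binding sampler, processor, and exporter together.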
B: I don't know how users are going to get a view out of this at the end of the day, which is kind of what they want. I guess at the end of the line you have something that looks like a Jaeger remote-sampling config, but I guess the point is that OTel could come out tomorrow and say: OTel SDKs will include an implementation of Jaeger remote-sampling config that satisfies Josh's sampling proposals and such.
A: Yeah, well, partly for metrics I think we just never exposed the hooks for it to change anything, right? There's no changeable aspect of metrics; you can't make your own aggregator right now. If we did expose that, it would have a similar problem to what we've done with sampler and processor and exporter, where that pipeline can be defined by the user and is open. Okay, let...
B: ...me finish. Well, I'd say metrics here is pretty close to allowing a user-defined aggregator, but the problem is you need to have a protocol expression for it. So even when I have user-defined aggregators, it has to fit an existing pattern of data; otherwise it's not going to work, and a span won't exactly fit that pattern right now.
A: Responding to this real quick. First off, as discussed during triage, we see two main problems listed here. One is long-running span identity.
A: Number three: it's hard to find and know if these exist, and they can get lost consuming memory. Two is crash behavior of the SDK; for example, I want to know what spans were in flight when crashing.
A: Okay, here's what I'm going to say. I propose, or sorry, we think number 373 is a proposed solution for both one and two.
A: However, this bug details mostly two, which is crash behavior of the SDK, right? If we read this bug and talk about what it's trying to do, it's mostly about "I want to be able to make sure that spans during the crash are exported, so I can figure out what happened." So what I'm going to say, specifically for this bug: we think there's room in the current OTel specification to solve this problem with modifications, and we look forward to proposals here.
A: It's very close, so what I was going to do is actually re-title it to be called "export partial spans during critical failure", or something like that. Yeah.
A: Yeah, okay, so I'll rename the bug, we'll mark it as triage-accepted, that's not unreasonable here, I'll put the comment in, and you can take a look at it.
A: Now we're basically repurposing this, and I would argue all of the discussion we had around whether OpenTelemetry should write a span-start event, a span-end event, a span-attribute event, you know, decomposing the span signal, I think, is either an implementation detail or out of scope.
B: That was the issue I was hot after, except I think this "adding support for partial success" will merge soon. Actually, I do want to talk about something that's been in the back of my mind; I haven't found the issue. If you don't mind, it's about synchronous gauge. I've got a prototype, and I've been thinking about it a lot. I also have enough forward thought to connect it with the gauge histogram.
B: If I was forced to, from looking at some stats stuff that I didn't think about in the past... yeah, I don't think anyone else cares about gauge histogram, but getting synchronous gauge settled would be awesome. I just wrote a big transition document for everyone inside Lightstep, and it's like: everything is fine here, except we don't have a synchronous gauge. I want to fix that.
B: So can I add to...
A: ...this? Yes. When we look at adapting existing metric APIs, there are two things that call out to me: one is synchronous gauge, and the other one is saying "here's the exact sum right now" to a counter; like, hey, your sum is now this value.
B: Sometimes I have a problem with that too, and I was hoping not to go there, but since you've gone there, it's unavoidable now. So, okay, I found an issue. One of them says: support synchronous gauge and synchronous up-down counter cumulatively, which, like a cumulative counter, can be set, and that's actually what it sounds like.
B: Now I want to have a version of the counter that I can set. I want a version of the up-down counter I can set cumulatively, and then I want a version of the gauge I can set in a synchronous context. I could actually just reuse my asynchronous instrument from a synchronous context and make that work. In other words, the callback is set up for an asynchronous instrument, and I want to give a flag that says this is a manual asynchronous instrument. Maybe that's the first time I've used those two words together, but a settable asynchronous instrument is one you use synchronously.
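A minimal sketch of what a "settable asynchronous instrument" could look like, assuming invented `SettableGauge` and `Reader` types rather than any real SDK API: the collection side still drives an observe-style callback, but that callback just reads a value the user set synchronously.

```python
# Illustrative settable asynchronous instrument; names invented here.
class SettableGauge:
    def __init__(self):
        self._value = None

    def set(self, value):
        # The synchronous side: user code calls set() whenever it wants.
        self._value = value

    def observe(self):
        # The asynchronous side: collection invokes this exactly as it
        # would invoke a user-registered callback.
        return self._value

class Reader:
    def __init__(self):
        self.instruments = []

    def register(self, instrument):
        self.instruments.append(instrument)

    def collect(self):
        return [inst.observe() for inst in self.instruments]

g = SettableGauge()
r = Reader()
r.register(g)
g.set(10)
g.set(42)            # last value wins, as expected for a gauge
snapshot = r.collect()
```

The flag discussed above would mark the instrument as deliberately callback-less, so the SDK does not treat a never-registered callback as a configuration error.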
B: For now, I've got separate packages to do synchronous instruments and asynchronous instruments, and right now, if you tried to do what I just described... I'm not sure exactly how I'll do it, but it's going to cross-link those two a little bit. What I'll end up doing is saying, and it could be a callback... at least in Go, we've made it so that you create the instrument, you create the callback, and then you bind them together. And if you just create the instrument and never give a callback for it...
B: I just have to make sure that there's a flag that says that's okay, and I want to make sure the user doesn't get confused, which is a very distinct possibility. But once I have the flag saying that's okay, I'm just going to go through my ordinary synchronous code path. And what's cool, as I've been discovering: as I just mentioned, the synchronous gauge is happening.
B: I end up using exactly the same logic that I needed for the synchronous instruments and exactly the same aggregator that I needed for the gauge, and it all works out exactly the way I want it to. Moreover, the choice of temporality that we ordinarily apply to synchronous instruments applies to the gauge as well.
B: So when I'm a delta synchronous instrument, I'm going to forget the instrument if I stop using it from interval to interval, and that is exactly what a statsd user expects for their synchronous gauge. That's important, because I can't give Lightstep a replacement for synchronous gauge that remembers things the way Prometheus did. But now, and I'm applying just two more problems here, I'm applying the word "temporality" to a data point that doesn't have temporality. So it's now not just aggregation temporality, but it's runtime...
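The "forget between intervals" behavior just described can be shown in a small sketch. This is an invented storage class, not the SDK's aggregator API: series that were not set during the interval simply disappear from the next collection, which is the statsd-style expectation, versus Prometheus-style gauges that keep reporting the last value.

```python
# Illustrative delta-style synchronous gauge storage; invented API.
class DeltaGaugeStorage:
    def __init__(self):
        self._current = {}   # attribute set -> last value this interval

    def set(self, attrs, value):
        self._current[attrs] = value

    def collect(self):
        # Emit only the series touched this interval, then forget all
        # state, so stale series stop being reported.
        out = dict(self._current)
        self._current = {}
        return out

g = DeltaGaugeStorage()
g.set(("host", "a"), 1.5)
g.set(("host", "b"), 2.0)
first = g.collect()
g.set(("host", "a"), 3.0)   # only "a" is touched in the next interval
second = g.collect()
```

Forgetting on collection is also what bounds memory against the leaky-span / high-cardinality concerns raised earlier.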
A: So, first of all, it feels natural. So the question is: we should run that by other people and see if it feels natural to them, and if we consistently get the same answer, then it's just a matter of how to describe it so that it sounds natural. Because, again, that feels right to me; it just feels...
B: Still, it gets more complicated, because now I have a new aggregation kind. I can't just say it's a synchronous up-down counter instrument: it's a synchronous cumulative up-down counter instrument, or a synchronous cumulative counter instrument, which are two of the most obscure cases in the world, but I know they're out there; people want to use that too.
B: Cool, all right, yeah, I'm interested in this; I would file that issue. Perhaps this should all be part of the synchronous gauge proposal. I was on the fence about whether to even start with a synchronous settable up-down counter, but if we only give people a synchronous gauge, our up-down-counter-versus-gauge distinction will be totally lost, I think. So I think we need to have all the instruments be represented in this manual-asynchronous, or I don't know what to call it, form.
B: The worst... okay, I can tell that, but it's not the priority, and since I brought it up verbally and you don't care, I'll just put... Thank you for your input. Yeah, you're...
B: ...asked for a settable counter, you know, the cumulative form. But I actually was just writing that transition document, and the Datadog user with a cumulative counter is going to do exactly that. They're going to say "I need to set it, so I'm going to use a gauge", and now it's a gauge that's a cumulative counter instead. So there it is.
B: Right, yeah, yes, okay. So, and thank you, that's the other reason I was telling Reggie I wanted to finish the synchronous gauge: in the view mechanism I'm putting in artificial limits, because we haven't demonstrated or defined the data points or the aggregations; they're just ill-defined. So I'm preventing you from doing stuff, and it's harder to explain what I want you to not do than to just explain what I want you to do, at some level.
B: The timestamp of the measurement will be consistent across the callback, and I want to say something about multi-column, column-wise representation and multivariate something-something there, but there's no time. The idea is that if you have a callback with some observations, they're coherent, they're consistent; you can do math on them and make sure that they're in the same row of time, logically speaking. But now, if I have a synchronous gauge, and whether I'm Prometheus or Datadog, I set that gauge...
B: ...minutes ago, a minute ago, ten seconds ago, whatever, some time ago, and now I'm observing it. Those are two timestamps, and I'd like to use two timestamps. So for the simplest gauge I want to use start time and observation time: sorry, start time is the set time, and the now-timestamp is the observation time, and I think that's consistent. But I want to do that before we declare the protocol done at some level, because "done" has a gauge in it, and if we're not careful...
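The two-timestamp idea can be made concrete with a small sketch. This is not the OTLP schema; the `GaugePoint` fields and the gauge class are invented to show "start time carries when the value was set, the point time carries when it was observed":

```python
import time
from dataclasses import dataclass

@dataclass
class GaugePoint:
    value: float
    start_time: float   # when set() last happened (the "set time")
    time: float         # when collection observed it

class TwoTimestampGauge:
    """Illustrative gauge recording both set time and observe time."""

    def __init__(self):
        self._value = None
        self._set_time = None

    def set(self, value, now=None):
        self._value = value
        self._set_time = now if now is not None else time.time()

    def observe(self, now=None):
        t = now if now is not None else time.time()
        return GaugePoint(self._value, self._set_time, t)

g = TwoTimestampGauge()
g.set(7.0, now=100.0)          # set some time ago
point = g.observe(now=160.0)   # collected later
```

Whether reusing the start-time field this way is acceptable is exactly the open question in the conversation; the sketch only shows that both times are available to report.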
A: I'll be honest, I'm not a huge fan. Okay, so, two reasons. One is we kind of need to understand where accuracy matters here, with that...
A: Yeah, I would rather either use an observed timestamp or just use the collected timestamp, one of the two, pick one, but let's not overcomplicate it. So, if we chose the observed timestamp, I think it makes sense, and folks who want to treat it as multivariate have to understand that they are moving that timestamp towards the collection; like, they'll know the collection timestamp to some... Oh, that's another thing: let's pretend like...
B: As a user, everyone is very invested in time. That's true: we haven't explicitly mentioned time for the gauge itself, and that's a related point. In my gauge aggregator I implemented a sequence number associated with each measurement, and when I'm doing last-value aggregation I just compare sequence numbers.
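The sequence-number trick mentioned here is simple to illustrate. This is a sketch of the concept, not the actual aggregator code: last-value aggregation only needs an ordering, not wall-clock time, so a monotonic counter can replace a clock read per measurement.

```python
import itertools

# A process-wide monotonic counter; cheaper than reading the clock
# on every gauge measurement.
_seq = itertools.count()

class LastValueAggregator:
    """Illustrative last-value aggregator ordered by sequence number."""

    def __init__(self):
        self.value = None
        self.seq = -1

    def update(self, value):
        s = next(_seq)
        if s > self.seq:
            self.seq, self.value = s, value

    def merge(self, other):
        # When combining parallel aggregators, the higher sequence
        # number identifies the more recent measurement.
        if other.seq > self.seq:
            self.seq, self.value = other.seq, other.value

a, b = LastValueAggregator(), LastValueAggregator()
a.update(1)
b.update(2)     # happens after a.update(1), so its sequence is higher
a.merge(b)
```

A timestamp can still be attached at collection time, which is the "one timestamp of the caller" point made next.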
B: That was because there was literally no requirement to use the time, and I'd rather not call time() on every gauge measurement. But the community SDK started with a timestamp, so it's logical you're going to put a time in eventually, whether that's the time of the observation or what. But when it's a callback scenario, I'd rather use the one timestamp of the caller.
A: ...are overridden by the collection timestamp, so they're all unique. But there's a possibility we could, in the protocol, say "here's the collection time of this metric bundle" as an option. But I'd like to be as consistent as possible for gauge. Specifically, start time tends to mean when I could have started sampling this metric, right when that system became active.
A: I don't want it to have multiple meanings, because I think that's just going to overload it in a very confusing way. Having the timestamp be as close to the sample timestamp as possible is useful, I think, but if we were to specify synchronous gauges to use the collection timestamp, I still think that's okay in practice, because when I look at a gauge, I'm not expecting incredible accuracy.
B: This is how delta and cumulative actually mean something for synchronous gauge: whether the start timestamp could logically be filled in as the start time of the process, which is what all the cumulatives are set to, or whether it would be set to the window where observations started, as you just said, which would mean the timestamp of the last collection. Yeah, yeah.
B: That is actually connected with another open issue. Have you seen the one about... which one is it? I'm opening so many issues right now. It's about how to handle overflow of cumulative state in a long-lived process, where you strongly suspect there are stale entries. We have all the machinery defined in the data model: I have a flag to say when data's not present, and I have a start time to say when a series began, or when the instrument was created, or when the process started; it's kind of ambiguous.
B: As long as we just say it's there to define a rate, that's okay; you actually have a lot of freedom there. But there's still some unspecified stuff when you're doing pull-based collection, especially when you're Prometheus, and I'm actually wondering if they would give us guidance, because no one has given me an answer to the question, and there is not really a good answer.
B: As far as I can tell: what do you use as the start time if you've already forgotten that series? Do I say the first time it reappears? Do I use now minus one nanosecond, or now minus a minute, or now minus an hour? Whatever you choose there, the initial point is going to be smeared across that...
B: ...little bit of time, whatever that amount of time is, and that amount of time is arbitrary; they're just smearing something. But actually, if you're Prometheus, that's not the logic: the query model is much more based on extracting from storage and interpolating, you know what I mean. Whereas we were trying to make it so that start time meant "a rate can be calculated here".
A: You know, right. So you end up with graphs that significantly shift their shape when you time-shift or down-sample, always. That's why the accuracy thing doesn't bother me for gauge. If we were talking about sums, I would totally be more worried about very good times and very, very accurate times.
B: Gosh, yeah, I forget exactly what the wording now is; there are two options for handling a missing start time already. You can probably make a third, or rephrase what you just said in terms of one of them. I think what the rule should be is something like this: if you're going to forget a series, you should publish a "no data present" for five minutes before you do. I don't know, that's...
B: Maybe. I mean, that's how I think about it. And then, when you restart, we already have a way to send multiple data points per metric per interval, so you can put two data points into the first output, which is to say...
A: Right, yes. So wherever we detected staleness, my suggestion is that's where we... Oh, so let's pretend we're deleting the data stream, and we've decided to delete the data stream right at that point in time. We keep track of that timestamp if we need to, or at that point in time we write "hey, this stream is dead, we're going to remove it", so you would write a "no data available" time.
A: We need some way to know, for that particular stream, that the next time we see a data point we should use the last collection interval as the start time, because that was the first collection interval it re-showed up in: restart from the previous collection interval. That's what I'm suggesting, and then just keep it as close to that as possible. We should be recording collection intervals generally, because we do that for delta metrics. Maybe there's an optimized cumulative metric that doesn't do this, but I think, to the extent...
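The staleness handling just proposed can be sketched with an invented series type (none of these names come from the data model): record the deletion point when a cumulative series goes stale, and if it reappears, reset its start time to the previous collection interval rather than the process start.

```python
# Illustrative stale-series start-time handling; invented API.
class CumulativeSeries:
    def __init__(self):
        self.start_time = None
        self.last_collection = 0
        self.deleted_at = None

    def collect(self, now):
        # Remember each collection interval's timestamp.
        self.last_collection = now

    def mark_stale(self, now):
        # "This stream is dead, we're going to remove it."
        self.deleted_at = now
        self.start_time = None

    def on_point(self, now):
        if self.start_time is None:
            # Reappearing series: restart from the previous collection
            # interval, so a rate is still computable from this point.
            self.start_time = self.last_collection
        return self.start_time

s = CumulativeSeries()
s.collect(now=100)
s.mark_stale(now=105)
s.collect(now=110)
start = s.on_point(now=115)   # series reappears after staleness
```

This keeps the semantics described above: start time continues to mean "a rate can be calculated here", without smearing the first point across an arbitrary window.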
A: But we should continue at some point; there are a few other things to talk about, man. Okay, anyway, good, yeah.
B: Maybe that was a little bit long-winded, but you're right, we should do a little bit more spec triage. I think that we're close with the metric spec to calling it 1.0, even, and I am aware of probably five to ten small issues that just need a sentence here and a sentence there. I'm kind of questioning whether to do the work, but let's do it.
B: Okay, my only opinion is: I'm so tired of not being done that I want to be done. And okay, first, my vendor, me being totally vendor-non-neutral here, doesn't have a feature quite yet.
B: I see, and I say "quite yet" because it totally has that feature for spans; it just doesn't have a metrics user flow for it. But that's a chicken-and-egg thing: if OpenTelemetry had done this, they would be pushing forward a little faster.
B: Metrics vendors have been pushing that dimension; Chronosphere especially is trying to sell that. I get it. Okay, that's as much of an argument as I need to agree. My vendor doesn't have that particular battle going on right now, perhaps, so I'm less worried about it. But here's the other thing: the Go SDK right now is the community metrics SDK.
B: I'm not helping any more than I have, and unfortunately I gave a prototype, and it's complete and I've now published it. So now Lightstep has its metrics SDK, and it's working and kind of done. And yesterday Tyler predicted two and a half more months for the community SDK. That's why I don't want to do exemplars: it just means another month of community work before we call it done.
B: Yeah, but here's another way you could change my mind. Much like with the SDK, I wasn't comfortable calling it done until I had implemented it, and now, even though the Go SDK is not done, I have implemented a complete SDK to my satisfaction. I didn't implement a Prometheus exporter, and I haven't thought about per-exporter timestamps, or per-export-path timestamps, but that's okay. Otherwise, I'm happy with it. I haven't implemented exemplars, and that's the reason why I'm feeling, you know, hesitant.
B: Really, what's wrong with answering this question? I think I'm convinceable, easily; I just need to do a week of work. I've guided sampling projects, and I've guided interns through this exemplar selection process already, so I think I know what to do. I don't really need to do it, but I guess to test out all the... you know, there's just so much more to do.
B: I mean, I blame Yuri for that, and Jaeger did this to us. OpenTracing had to have this thing called baggage, and then it was declared as a thing that is used for application state transfer, not as a thing used for telemetry, and ever since then users have had to scratch their heads. One out of a hundred figures out how to do it, and then, you know, proposes a new way to propagate telemetry information after that.
B: Oh, thank you, yeah. Yes, I totally agree. The problem that I just mentioned is more of a semantic roadblock that has been raised before and still exists, which is to say: don't call it baggage, call it something else. And I want you to look at the OTEP number, it's around 205 or so, from Christian Mueller, about scope attribute propagation.
A: It would be embedded in baggage; that's fine. So, specifically, have you seen CUIs, and our public discussions of CUIs? No? What does that mean? It's "customer user interaction". The basic gist is: my front end tags "hey, this was my web search", or "this is my front-end Play Store", or whatever. I'm using Google terms; for example, we'd say "this is the YouTube video feed", and then that gets attached as baggage.
A: That goes all the way downstream, and now I can do analytics where I can actually look at latency caused by YouTube in my payment server, that kind of thing. It's very, very cool, and there are actually some really practical implications in the small that are useful, that we actually wanted to highlight and showcase. But I can't get baggage to propagate in OTel without rewriting instrumentation by hand in almost all cases, so that is complete BS, in my opinion. But anyway.
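The mechanism under discussion is W3C Baggage, which rides an HTTP `baggage` header of comma-separated `key=value` entries. Here is a minimal sketch of inject/extract (ignoring the spec's properties and percent-encoding details) showing how a front-end tag like the hypothetical `cui` entry would reach a downstream service:

```python
# Minimal W3C-Baggage-style inject/extract; simplified for illustration
# (real implementations must handle percent-encoding and properties).
def inject(baggage: dict) -> str:
    # Serialize entries into a `baggage` header value.
    return ",".join(f"{k}={v}" for k, v in baggage.items())

def extract(header: str) -> dict:
    baggage = {}
    for entry in header.split(","):
        if "=" in entry:
            k, _, v = entry.partition("=")
            baggage[k.strip()] = v.strip()
    return baggage

# Front end tags the interaction, e.g. "this is the video feed".
outgoing = inject({"cui": "video-feed"})
# A payment server several hops later can attribute its latency.
incoming = extract(outgoing)
```

The complaint above is not about this serialization, which is easy, but about getting instrumentation to carry the header across every hop without hand-editing.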
B: So, okay, I see your point, and I agree. If you ask me what's missing in OTel metrics after a synchronous gauge, it's exemplars, and it's context-and-baggage-to-metric-attributes, and each one of those is something I want to implement. But synchronous gauge is the only one I've actually implemented end to end, and that whole topic earlier, about using the manual asynchronous instrument for the synchronous observation of a cumulative quantity, that's just too many words, and I'm so tired of writing.
A: No, I totally agree. So when it comes to all of those, all I want to make sure of is that we have room for them, and then I'm comfortable with 1.0. So, for example: I don't want the Java ones turned off, just because I see Prometheus users using them now, got it, and that's going to be a big kind of friction thing. So if we could mark them kind of optional-but-stable, like we already feature-froze the spec part, okay.
B: The thing is, I would be so much happier to spend my week nerding out and implementing a prototype, or completing my proof to myself that exemplars work, while someone else fiddles with the spec wording for all the ten issues that I'm aware of, because I'm totally tired of fiddling with the OTel metrics spec. I'm just trying to say, for me, the priority would be...
A: Right. If you have a list of those issues, send them to me; I'll see what I can do.
B: I'm trying to figure out if they're real or not. There's this one... we've already talked about a couple of them, so yeah. What do you do about start time in a cumulative overflow situation? Is that the biggest one? I feel like, to some extent, those are not blockers. Then there was Prometheus stability, where we're still marked experimental; like, dots are still disallowed in Prometheus, which I think we ought to get them to change.
B: As you know, I'm starting to talk to them about it. It's just another time-consuming issue, and one which I'm willing to say doesn't matter; we're pretty close to stable on that, I don't think...
B
Ashley's going to do awesome work, and I think we could just push on that, like the scope naming stuff; that may be resolvable. The only thing we can't resolve ourselves is this dot issue. So push forward David's thing, maybe, and then we're done with Prometheus, except for agreeing to disagree on names. Or, I don't know, we could just declare it. This is how Rob Pike would do it: we declare that in OpenTelemetry the dot equals the underscore, and then it will be ambiguous and correct, always.
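The rule being floated ("the dot equals the underscore") fits in one line; the function name here is hypothetical. The mapping is lossy, so the reverse direction is ambiguous: two distinct OpenTelemetry names collapse to the same Prometheus name, which is exactly the "ambiguous and correct" trade-off.

```python
def to_prometheus(otel_name: str) -> str:
    """Map an OTel metric name to a Prometheus-legal one (dots disallowed)."""
    return otel_name.replace(".", "_")


print(to_prometheus("http.server.duration"))  # http_server_duration

# Ambiguity on the round trip: distinct OTel names, one Prometheus name.
print(to_prometheus("http.server.duration")
      == to_prometheus("http_server_duration"))  # True
```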
A
Yeah, regarding that, what we could do is go 1.0, but we might end up with two Prometheus exporter specs, and I actually think that's okay. We have a Prometheus exporter spec that we mark 1.0 now, and then, if we get Prometheus to be amenable to us, it's going to be a breaking change for Prometheus anyway at some point. There will be a Prometheus version that's compatible, so we'll probably need a v2 Prometheus exporter for that change.
B
I think Riley's not happy with having Prometheus marked experimental and doesn't know what to do, so yeah, you could help with that. I'll flag you on any of the other issues I know about, like start time, that I think you could help with, but I think we're super close.
B
No, I'm not sure the collector has caught up with all of David's recent work, but David does have one final PR out to handle namespacing. I think one failure we've made with Prometheus is building stuff into the Prometheus receivers and exporters, rather than building it into the OpenTelemetry data model and metrics.
B
Namespacing your metrics: why don't we just do that for ourselves instead of doing it for Prometheus? That's kind of my thinking, and it's connected with scope attributes, with Christian Roy Mueller's proposal, and also with one of mine from long ago.
A
Okay, but that is a data model change.
B
David's already proposed it, essentially; it's just that it's proposed as a fix for Prometheus. I haven't even read it yet, that's the thing. It's probably just fine.
A
Right, but when we widen its scope to hit OTLP, think of how long that's going to take to get through. So does it make sense to keep what it does now the way it is, and then do the rest of that work as a follow-on? Because we need it, but it's going to take longer. We can kick off the discussion for the OTLP bits, but we don't need to do it for metrics 1.0.
B
Here's my request then. I would like you and David, and Riley of course, to help drive 1.0 to being done, because I'll do as much as I can to flag these issues; I'm just not sure I can do it all. Then we just need to talk to more people and push hard, and I'll get any help I can from the Lightstep team on
B
the SDKs that they have influence over. But I think JavaScript has its own stature and we need to please them, essentially, and there's a bunch of issues they've got going again about cumulative overflow, so we just have to settle those. And then with synchronous gauge, it's the question of what you do when a view asks for a gauge on a counter. Do you say that's impossible, or do you just go ahead and do it even if it's ill-defined?
B
So here's the test question that comes up: I've configured a counter to a gauge, and now I'm using it with a negative number. Does the counter stop reporting validation errors? In other words, is the aggregator responsible for range checks, or is the instrument responsible for range checks? The answer I've gotten so far from the community is that the instruments are still responsible for range checks, and that means you can't get what you want, which is that you can't fix every problem.
A
I have another take. Again, when I say fixing a problem, I mean somebody else wrote instrumentation, I think they were wrong, and I'm fixing it. Which means there's a principle: the instrument should work the same regardless of what the aggregator is. So if the instrument throws an exception with one aggregator, it should throw the same exception with the other, and that shouldn't change. I would say that's a principle we should have in place.
A
So making a counter into a gauge would always work, because the enforcement of the range is done on the counter side. Whoever wrote the instrumentation would be unable to send negative numbers to begin with, but I might disagree on whether what they have is actually a count, and I can still turn it into a gauge.
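That principle can be sketched with toy classes (hypothetical names, not the OTel SDK): range enforcement lives on the instrument, so a counter rejects negative values identically regardless of which aggregation a view installs behind it.

```python
# Toy sketch of instrument-side range checks; not the OTel SDK.

class SumAggregator:
    """Default aggregation for a counter: a running sum."""
    def __init__(self):
        self.value = 0

    def update(self, v):
        self.value += v


class LastValueAggregator:
    """What a view asking for a gauge on a counter would install."""
    def __init__(self):
        self.value = None

    def update(self, v):
        self.value = v


class Counter:
    """The instrument validates; it behaves the same under any aggregator."""
    def __init__(self, aggregator):
        self._aggregator = aggregator

    def add(self, value):
        if value < 0:
            raise ValueError("counter increments must be non-negative")
        self._aggregator.update(value)


as_sum = Counter(SumAggregator())
as_gauge = Counter(LastValueAggregator())  # a view reinterpreted it as a gauge

as_sum.add(5)
as_gauge.add(5)

# The range check does not depend on the aggregator: both reject -1.
for inst in (as_sum, as_gauge):
    try:
        inst.add(-1)
        print("accepted")
    except ValueError:
        print("rejected")  # printed for both instruments
```

Under this arrangement the gauge reinterpretation works, but a negative measurement still fails at the instrument, which is the "you can't fix every problem" limit discussed above.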
A
Anything that breaks everything is an enhancement, so let's stabilize the crap out of what we have and get it out the door. Even with Prometheus, if we get Prometheus to change, that's a v2, because Prometheus already exists and we have to work with how it works today, even if they change in the future.
B
So I think, just to make Riley happy, we're going to say something like: dots become underscores, period. There's some ambiguity if you round-trip twice, but cool, let's do it. I'd appreciate all the help from your side on just pushing this through, because I think Lightstep is ready for that.