From YouTube: 2020-07-31 Spec SIG
B: Hey everyone. It looks like we'll probably wait a little bit longer for Josh to show up, but be sure to open up the agenda and the items that you have for the meeting, and be sure to add your name to the attendees list.

C: Hi everybody. I am really hoping to let Bogdan lead off with the topic of OTLP right now.

A: Yes, give me a second. I did not have a lot of time to put all the issues that I created in the doc; I'm adding them right now.
A: Okay, so let's start with the first item of the agenda: OTLP 182. I merged that 30 minutes ago, after I had six approvals plus my personal approval, which was not counted as one of them because I was the author. So that made seven approvals, with nobody complaining about anything so far, which was a decent amount, so I went ahead and merged it. I have started right now to create a lot of issues based on the to-dos that I left in the code, as I promised in one of the comments, and I'm looking forward to making decisions on these issues. It's very important, first of all, to identify whether they are required for GA or whether things can be added later, and then make the decision.
A: Yes, so the first one is the performance of the current approach. The current approach, as explained last Tuesday when we had an ad hoc meeting for OTLP, was to have a more clear definition of the data, which implied one or two extra allocations when it comes to serialization and deserialization.

A: This is non-controversial from a semantic perspective; it is controversial only for performance, and I'm willing to take this as required for GA. We need either to close this and say "okay, we don't care about this performance hit," or we do care about this performance hit and we come up with an encoding into bytes, something similar to what Josh tried to do in that small Go program, and then just use that.

A: So I would be happy to mark this as required for GA, and I will take it as assigned to me. If anyone has any comment or anything, let me know, but this is my current thought about this.
C: When I was studying one of those earlier variations on this proposal, I did run some of Tigran's benchmarks, and it was a little tricky to get working. So I left basically a mess behind, and I don't think you could actually use it. But it seemed like either we are going to basically say this doesn't matter and we're not going to benchmark it, or, in order to follow some level of discipline, we should benchmark it in order to know.

A: I will definitely benchmark, but one thing that I would like everyone to agree on is: is Go the only thing that matters, because all the data will go through the collector, or do we care about other languages as well? A lot of Tigran's benchmarks, by the way, are based on Go performance, and that's one thing that I would like us to make an agreement on.
A: In the collector, for example, we started to use gogo proto, which allows me to put some annotations to not create a new object, or to not use pointers sometimes, and so on. So I can make things faster. If my only goal is to make Go fast, I can go full gas on that and use all the tricks for Go. Or should I consider Java, Python, Ruby, and the implications in those SDKs? That's the question that I would like us to answer before making a final decision on this.

C: I think it's less of a concern in the clients than it is in the server. That's not a quantified statement, though. And I think it's also easier for the client to hand-code an encoder, or to cache an object that has been pre-serialized, so possibly there's a lot more potential for optimization in the clients than in the collector.
C: If you go doing something like a hand decoder, now you've got a new problem, which is that you're trying to consume data that wasn't parsed out of a protocol buffer in a standard way. So it makes trouble, a far greater amount of trouble, in the collector to start talking about hand-coding within that. Okay.

A: Also a good point: the metric descriptor in the languages can be cached. So yeah, exactly, a simple thing: even if you don't cache the real encoding, you just cache the object. You don't have to create this type in a one-off every time; you just cache the entire proto object in the exporter. It's still pretty reasonable, and you get decent performance there.
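A minimal sketch of the caching idea just described: build the descriptor proto once per instrument and reuse it on every export, instead of allocating it per batch. The MetricDescriptor type here is an illustrative stand-in, not the real generated OTLP message.

```go
package exporter

import "sync"

// MetricDescriptor stands in for the generated OTLP proto type.
type MetricDescriptor struct {
	Name, Description, Unit string
}

var (
	mu    sync.Mutex
	cache = map[string]*MetricDescriptor{}
)

// descriptorFor returns the cached proto object for an instrument, so the
// exporter pays the allocation cost once per instrument, not once per export.
func descriptorFor(name, desc, unit string) *MetricDescriptor {
	mu.Lock()
	defer mu.Unlock()
	if d, ok := cache[name]; ok {
		return d
	}
	d := &MetricDescriptor{Name: name, Description: desc, Unit: unit}
	cache[name] = d
	return d
}
```

The same trick extends to caching the pre-serialized bytes rather than the object, which is the stronger version Josh mentions.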
B: No, I was gonna say exactly what Josh was saying. Perfect, perfect.

A: Okay, so then I will take it as an action item to tune Tigran's benchmarks a bit, use gogo proto, do some annotations here and there to avoid some of the allocations, and see how bad we are, versus whether we want to pay that cost for readability.
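A sketch of the kind of benchmark this action item implies, using Go's testing harness to report allocations per marshal. The request-building functions named in the trailing comment are placeholders for the two encodings under comparison; only the gogo proto Marshal call is real API.

```go
package otlp_test

import (
	"testing"

	"github.com/gogo/protobuf/proto"
)

// benchmarkMarshal reports allocations per serialization, which is the
// number under debate (one or two extra allocations per point).
func benchmarkMarshal(b *testing.B, req proto.Message) {
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := proto.Marshal(req); err != nil {
			b.Fatal(err)
		}
	}
}

// Two benchmarks would then compare the current message layout against a
// gogoproto-annotated one (nullable=false and similar annotations):
//
//	func BenchmarkCurrent(b *testing.B) { benchmarkMarshal(b, buildCurrentRequest()) }
//	func BenchmarkGogo(b *testing.B)    { benchmarkMarshal(b, buildGogoRequest()) }
```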
A: Also, to be honest, my worry is not only readability; it is also the flexibility of adding new things. If we do a very, very tight encoding, then when adding new things we may have to deal with backwards compatibility on that encoding ourselves, whereas right now protobuf gives us the full backwards-compatibility story by itself. So anyway, that's another thing that I would like to point out to everyone.

C: Yeah, that's another good one. I'm starting to think that the value of getting a finished spec is far more important than this performance question, especially if the gogo tricks can actually work. I don't know that they work for one of them, but I may be wrong about that.

A: They work at least for one of the allocations. There are two allocations implied, and it's very likely that there is another one. Okay. Anyway, I think we have an action item assigned to me; I know what I have to do. Next one, let's see.
A: Yeah, yeah, I will prove him wrong; I know how to do that. Anyway, we know what we have to do, so that's very good: we have an action plan there to get to GA. Now, the next one.

A: There was this idea, coming from one of your proposals, Josh, of knowing whether the input for the current aggregation, even though we aggregated the data somehow (building a histogram, building a summary, building a sum or something), was raw measurements, or a snapshot of another aggregation.

A: Second, if we add it later, one thing we need to clarify is what the default behavior is. For data coming without this property, how do we handle it once we add the property? Do we have a full story here or not? From the proto perspective it's easy to add a new field, and it's backwards compatible, but is it semantically backwards compatible? Can we make it semantically backwards compatible or not?
C: I think I interpreted this question slightly differently, and I realized it's because my whole frame of mind was different from the start, about whether we're describing inputs or outputs. But I was really just trying to get at this: it's the case for a last-value aggregation or a sum aggregation that you take some number of points down to one.

C: So if I get one point, do I know whether it was produced by aggregation or whether it's the exact point? If there's only one point, how do I know whether it's the aggregation of several, or whether there was only one point? I think it's definitely something we can add later, but it's a consideration, because...

C: ...there are different properties when you know the data is raw. The number of points becomes meaningful if it's raw, and the number of points has been lost if it's a sum or a last value; there was just one. There was also one issue that's not linked here, but we can find it. Basically it was a request from Amazon, where there's a particular form of metrics where they just basically produce a structured log.

C: Like, one counter event turns into one log event, and this would potentially be a way to do that, although you could also imagine replacing the metrics SDK with, basically, a logging output as well. So I asked that question, and maybe we can wait and see.
A: Okay, so I think this is then a separate thing, a second concern that you have. Please, if you can, file that issue about the count. It also still applies only to metrics that include a count, if I understand correctly, because...

C: Well, another way of thinking about it is that we've got, at least in this draft spec, the notion of an exact aggregation, which is: just keep all the points. So if I'm keeping all the points, the question is how do I encode that. Whether I know it's raw is sort of secondary information, and we can, you know, have one data point for every value; the worry I have is that you're going to repeat your label set for every point.

A: Okay, okay, sorry for interrupting, but one thing for me here: there is another issue in the specs, which you pointed me to, which is about us being able to export raw values, and whether we do this optimization on the raw values of keeping a list of values, versus every value becoming a point. That's for me a separate thing. This particular issue was mostly about the following.

A: I have a histogram. Is the histogram coming from a value recorder, or is the histogram from a sum observer, meaning I'm building a histogram of some other previous aggregation, which was a sum? Completely different things. It seems that this is not something people have thought about, or felt there's a need for. I'm more than happy to just say we don't have a good case for this right now, so let's just defer it: let's close the issue and say we're not gonna have it right now.
C: To me, the sort of vague uncertainty I have around this type of issue is this. I think the way you phrased it, it sounded like you just need to know whether the inputs to your metric were raw or not, and that makes me think: if there's some formula that has been applied to some number of inputs, how would you even know what that formula is? The formula, essentially, is what tells you.

C: The question has to do with, as you said, a histogram. You could imagine a histogram of counter events, which would be the increments: how many of the different increment sizes were there? And you could imagine as well computing the cumulative values out of those same instruments, and then giving a histogram of cumulative values for a set of label sets that have been reduced in dimensionality. You wouldn't know the difference between those. In my proposals I had... it didn't come out nicely.

C: But when you have a raw value, the structure and the continuity together were what told me whether I was looking at a delta or a cumulative; that was totally arbitrary, though, just because the API instruments do that. So that was too complicated. And maybe, you know, if you've been following Gitter, Tristan has been raising the same type of confusion. So maybe we need another way to encode this type of thing, and I did leave a comment on 182, I think, basically.
A: If I build the rate of that count from these histograms and summaries, does that represent the number of times the app called into the API, or does it not represent that? Essentially, we cannot confirm the other cases; it's just: is it the number of times the app called into the synchronous instrument, or is it some other way we calculated this? I'm not going to try to encode all the possible ways; I was focusing only on this one thing: is the count the count of API calls?

A: If, at least, this was something that I understood, then I'm more than happy to say that for the moment we don't see a clear value in this, or that we do not understand the implications very well, and defer it for later. Does anyone have a different opinion? Tyler, you seem very concentrated there.

B: Yeah, I think I'm in the same boat as Josh. There are probably a lot of corner cases that I haven't really thought through yet, so I like the default idea, and then we add this afterwards if we need to. And we're not talking about raw values, just to be clear; we're talking...

A: Okay, yeah, good. The next one: monotonic sum for summary and histogram.
A: Yeah, I would put a "deferred" on that one, because we still want to think about this. I will put "release after GA" as the requirement, and we can debate after GA whether this is still important or not.

B: Cool. So for this histogram one: are there any downstream endpoints that actually use a monotonic histogram? I know that Prometheus has a cumulative histogram. Well, that's a little bit loaded, because "cumulative" is kind of a two-part thing there. Yeah, so...

A: Okay. Whether a sum is monotonic or not: that's the same thing as we have for a counter, correct? We want to distinguish, if we have just a simple sum, whether we have the property of monotonicity or not on that sum. Do we want to know the same information for the sum that is embedded into the histogram, or not?

A: Nobody has made a statement about this sum, which is still a sum and can still use the same thing. Maybe I'm just misunderstanding something, but for me the question was: is this important or not?
A: It's the total; that's separate from the count part. The count is the number of samples, and the sum is the sum of the values for those samples. So in a distribution, a histogram or a summary, you have the number of samples, which is what we call the count, and the sum of the observations, the sum of the samples, as the sum. Okay.

C: Okay. For me, the reason why this might be good, and I think I support it, is that we know of a pretty common example that people bring up fairly often: I'm reading network packets, or I'm reading requests, and they each have an individual size. Sometimes when I'm debugging, I'm actually looking for an explanation that might be "a very large request came through," so knowing the distribution of input sizes is very valuable. But then I also want to track total request bytes.
C: So do I have to create a separate metric for that? The answer in most systems today is: yes, you do. But if you had a histogram that was marked as having a monotonic property, then you're basically able to get two metrics for one. I think that's the idea: now you're looking at a histogram, and you can also generate a rate graph from it. So this does require a change in the way we use metrics.
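A small sketch of the "two metrics for one" idea: a single histogram point whose sum is known to be monotonic can be rendered both as a distribution and as a rate. The HistogramPoint shape below is an illustrative assumption, not the opentelemetry-proto definition.

```go
package main

import "fmt"

// HistogramPoint is a hypothetical stand-in for an OTLP-style histogram
// data point; the field names are illustrative only.
type HistogramPoint struct {
	StartNanos, EndNanos uint64  // collection interval
	Count                uint64  // number of recorded values
	Sum                  float64 // sum of recorded values
	MonotonicSum         bool    // the property under discussion
}

// rate derives a per-second rate from the point's sum; this is only
// meaningful when the sum is monotonic (e.g. total request bytes).
func rate(p HistogramPoint) (float64, bool) {
	if !p.MonotonicSum || p.EndNanos <= p.StartNanos {
		return 0, false
	}
	return p.Sum / (float64(p.EndNanos-p.StartNanos) / 1e9), true
}

func main() {
	p := HistogramPoint{EndNanos: 10e9, Count: 42, Sum: 1 << 20, MonotonicSum: true}
	if r, ok := rate(p); ok {
		// One instrument yields a size distribution (via count/buckets)
		// and a throughput rate (via the monotonic sum).
		fmt.Printf("%.0f bytes/sec from a single histogram\n", r)
	}
}
```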
B: Yeah, I think that's a really good point. If we're gonna include the sum, which is somewhat of a duplicate metric of... "metric" is not the right word; it's somewhat of a duplicate of the message for just a sum in itself being sent, we might as well have the full-featured functionality of that sum, including the monotonicity.

A: Yes. So the first question would be: is this required for GA?

A: What I'm trying to say is that "false" currently stands for "not monotonic," or maybe we should make the monotonicity a tri-state: unknown, no, and yes. Because for me, it's this "unknown" that we overload: we put it under the "false" mechanism because we actually don't know that information, and that's where I have trouble translating from systems that don't have this knowledge.
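For concreteness, the tri-state being floated could look something like the sketch below; the type and constant names are hypothetical, not from the OTLP proto.

```go
package sketch

// Monotonicity distinguishes "known to be false" from "we never knew",
// instead of overloading false to cover both cases.
type Monotonicity int32

const (
	MonotonicityUnknown Monotonicity = iota // e.g. data imported from a foreign system
	MonotonicityFalse                       // known to be non-monotonic
	MonotonicityTrue                        // guaranteed never to decrease
)
```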
C: Yeah. My original proposals, which I understand are confusing and not going to go through (and I'm happy that way), were really just about wanting to know what the instruments were. I only wanted to know what the OpenTelemetry instruments were, because there's some meaning there that I think we can benefit from. But it's true that if you're importing data from another system, you won't ever have that information exactly correct. So maybe there...

C: ...the sort of potential for some future benefit is nowhere near the complexity cost of this information. For me it felt like the questions were easier when all I was trying to do was include information about what the instrument was; when you're deriving information from instruments, all bets are off. And then maybe you have "I have an unknown instrument here" from OpenTelemetry, and that would also address the other issue that we were...

B: ...talking about. I mean, I don't think it's that big of a deal to say unknown equals false, because I think about monotonic as a guarantee. If you're saying true, you're guaranteeing that this data is going to come through sequentially, with increasing values; that's the guarantee that you're giving. If you say no, you're saying that it's not, and if it does come through with sequentially increasing values...
C: And my motivation here was really trying to figure out: when you are a metric system and you see some metric for the first time, one you've never ever seen before, how should you present it to the user? I think that when a sum is monotonic, you almost certainly want to show the rate; when the sum is not monotonic...

C: ...very often people want to see a total instead. So knowing monotonic-or-not gives me the ability to provide a better default out of the box. It's the same with histogram and sum, because if I see a histogram that's monotonic, I can make a dual plot, rate as well as quantiles; I can show you both pieces of information just out of the box, and I like that.

B: So, Josh, can I ask you a question, though? In that situation, when you're trying to classify the data that's coming in, if there were a trinary state there, if you had another category that was "unknown," how would you treat that data differently from just "false"?
A: The only thing that I would maybe do differently... I got an answer about this from Brian, from Brian Brazil, when I asked him why they have an "unknown" scalar metric type, and why they do not treat those as gauges.

A: So that's what he told me, and the same approach here would be fine with me. I don't know if it's in Prometheus, or otherwise it's in that metrics protocol that you pointed to, George.

C: I'm not going to search for it now, but I have seen the "unknown" message. It could always give me a little question mark, but that's useful.
B: So then I think the question comes back to: do we wait till after GA to put this in? I think this is an extension that we could add to the histogram later on, so I don't think it's the end of the world. But the problem that I have is that Josh over there lights up, saying "let's do some new cool stuff," and I really don't want to block the new cool stuff and the progress of this industry, let alone the field of, you know, metrics and telemetry.

A: Okay. Personally, I'm on the fence. I can have it; I'm happy with the information being there. I would just... yeah.

A: Let's leave it there for the moment; we'll assign an owner and I'm gonna deal with this. I'm talking too much right now; probably I should shut up and go comment on that. But for the moment, let's focus on this. There is another one. Yes, there is one more: sorry, I did not include issue 184.
C: Oh, that comment, yeah. For me, this relates to that same question: is the histogram of the increments that came into it, or is it the histogram of cumulative values? And yeah, I don't know; I wrote what I said, and I haven't really figured this out yet. But there's something about the way a metric object, or let's say an OTLP packet, has some data.

C: I can tell whether the points are cumulatives or deltas, but there's some information that maybe I'm missing, and I haven't convinced myself whether that's okay or not. It's: what range of data did the query cover? The distinction that made this a problem is that I keep using the words "cumulative" and "delta" in two different ways, and I just need more words. So if the query covers time 100 to 110, but the data is cumulative, that might mean something different than if the query covers from time...
A: Yeah, so, Josh, my two cents here. Right now I'm trying to make temporality relate only to the aggregator, or the aggregation, which means: did the aggregator reset the data after the last export, starting a fresh new aggregation with new measurements coming into a fresh aggregator, or did the aggregator not reset, so it keeps accumulating things and I'm just reporting that?

A: This is what I'm calling temporality right now, and I want to make that very clear; I will have a follow-up PR. I was writing another issue about this and related things, and I wanted to make it clear that temporality refers to this aggregator and the behavior of the aggregator, not the meaning of the data.

A: Now, there is a meaning to the data, and the data may also carry some temporality inside it, as you explained. For example, suppose from somewhere else I'm getting deltas of some things; let's say I'm building a summary or a histogram of CPU usage, but the measurements are deltas: the amount of CPU I used in the past 10 minutes. So the data that went into this aggregation also had a temporality, or a time, associated with it.

A: Right now, I think everyone agrees that it's very important to know the aggregator behavior, the aggregator temporality. And most likely what we need, if I'm hearing you correctly, is another temporality: the data temporality, not the aggregation temporality. Kind of.
C: I don't know that we need it, but it's a concept that's lurking here, and it relates to raw values again, I think. But do you understand me? Yeah, definitely: "data temporality" is exactly the right phrase. I don't think we want to introduce another concept, but leaving raw values out for now really helps, because it's the raw values that have that difference of data temporality that we talked about. Yeah.

A: Tyler, you're laughing at this "temporality"? No?

B: Just... yeah, as long as we call the other temporality "ecclesiality" or something like that. You were thinking about that; otherwise...

A: Yeah, okay. But I think now I understand what Josh is pointing at: the fact that the data we put into this aggregator also came with some time association.

A: As in the example I gave: I'm calculating, from somewhere, the CPU usage over the past 10 minutes, how much total CPU I consumed in the past 10 minutes, and now I'm putting this into an aggregation.
A: Now, whether this aggregation that I'm building gets reset or not is the delta/cumulative property, but that "10 minutes" part disappears from the data. We lost that; we just drop it on the floor, and that may be something that we should consider. I'm not gonna start that discussion right now. Josh, are we okay with clarifying what we just talked about, that there are these two temporalities, one that applies to the aggregation and one that applies to the data, and saying that we are focusing right now on the aggregation temporality, while the data temporality, or whatever we call it, we're gonna defer until after GA? Is that good?

C: Yes. I also feel like the same question of terminology keeps coming up when talking about things, including the spec and the instrument definitions and so on, because I've tried my best to avoid saying "the input temporality" when I talk about a counter versus a sum observer. So I try to find a different word, and I say increments versus sums; it's sort of like aggregated or not aggregated, in some ways.

A: Thanks. I will clarify this, and I will mark it "required after GA"... okay, "release after GA."

C: Great, okay. I think we should move on then. All right, let's see who on the call will represent this issue. I think I know... there you are.
D: Hi, can you guys hear me? Yes? Hello. Hi, I'm William; I'm an intern at Google, and my team and I have been putting together a dynamic configuration system, specifically for metrics right now, but we're hoping it might be able to be expanded in the future.

D: The general idea is that if you're running an instrumented application, maybe sometimes you want to collect a metric once every few hours, and maybe at certain time periods where it's especially sensitive you want to collect it every minute or every second. We want to build a system where you can adjust the collection period dynamically at runtime, without having to, say, stop or restart the application. There are three sorts of components involved in this. One: we want to be able to change the collection period at runtime.

D: Second: we've also decided to try associating collection periods with a metric, rather than with all metrics. The way it's set up currently, the SDK seems to collect every single instrumented metric and export them all in one go. One of the things we've been implementing is a way to tie a collection period to a specific metric, or to a specific set of metrics, so only those metrics get exported.
D: So in the event that you have certain metrics that should be exported every day or so, things you don't really care as much about, versus more sensitive metrics that you want collected every minute, you can do that without having to export everything every minute. And then the last part of it is a configuration service. The way that we've set up metric configuration is by having a service monitor the current state of the configuration data, and this service communicates with the instrumented applications, the source of the metrics. The service can both send configuration data to and from the instrumented applications, and communicate with potential other configuration backends: either a third-party backend that decides to implement this protocol or, maybe sometime in the future, some way of taking configuration data in a different format or a different protocol and converting it into something that we can use. So the project so far is still very early.
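A rough sketch of the per-metric schedule idea William describes; the type and field names below are hypothetical illustrations, not the actual experimental protocol.

```go
package dynamicconfig

import "time"

// Schedule ties a collection period to the metrics whose names match one
// of the patterns, instead of one global period for everything.
type Schedule struct {
	InclusionPatterns []string      // e.g. "http.server.*"
	Period            time.Duration // e.g. time.Minute, or 24 * time.Hour
}

// Config is what a configuration service could push to (or have polled by)
// an instrumented application at runtime, with no restart required.
type Config struct {
	Fingerprint []byte     // lets the SDK skip configs it has already applied
	Schedules   []Schedule // per-metric collection periods
}
```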
D: We have a prototype that we're almost done with, which we're gonna send to the contrib repos for the Go SDK and for the collector, just to give an example implementation of how it might work. But yeah, we'd love to hear your thoughts on it. We also know the GA is coming up, and you guys are probably super busy with that.

D: So there's no immediate urgency; it's just a cool kind of feature that we're hoping to build for you guys, and we'd love to hear what you think, if you get a chance.

A: Yeah, to give some initial feedback: first of all, for any experiment, it should not be that hard to get the experiment up and running, and I think we should be more relaxed when it comes to experiments; we should look at the overall picture. You gave us a very nice description of what you are trying to achieve, and I'm very happy with the goals of what you are trying to achieve.
A: A couple of things that I would recommend: in the experimental directory, try to follow the same structure that we follow in the main directory. I saw that you have, directly in the experimental directory, a file called "metric config service"; just put it under "metrics," or follow the same structure that we have in the directory called "specification," because the code owners are associated that way, and even the code owners follow the same pattern.

A: So in order to get the attention of the metrics people, you need to follow that. You're the first one in there, so probably you did not know, but it's a good thing. Personally, everything sounds good. I'll give this a short read, but for me it's a thumbs up; I think we need to experiment with this. It would be good if you could work with maybe somebody from Google and document some of the... not downsides, but...
G: I have a question about this, not on the spec specifically, and I don't know if this was talked about before, but is there a configuration story for GA, before this is even released? How are we gonna handle that? I know it was talked about before, maybe having configuration files or some sort of environment variables, but I'm kind of out of the loop on that. Are there any updates?

C: Hi, Leighton. I think we'd all love there to be a views API, and I think, for my part, the confusion over OTLP has been far more important. I'm also on the hook for an SDK spec, which I feel I can't write until we figure out some of these last details, and I intend to finish my PR on that, which is very old at this point. So I hope that answers your question, Leighton.

G: Right, but I'm not even referring to views. I mean the features that this spec outlines, like the ability to have different collection periods, or maybe setting up pipelines within actual config files, you know, attaching...

C: ...meter providers, with stuff like that: a "configurable SDK," I've been calling it. There's even a spec issue; it's probably in the 300s. I think we all want it, and I know John has been experimenting in Java with that, and I'm actually excited to see how you've done this work in the Go code.
A: Yeah. If there's going to be a document about configuration, I would like to separate the capabilities, meaning what we allow people to configure in the SDK, from how we inject this configuration. So I think there are two topics here. For me it's: what is the configuration capability, what do we allow people to change? The "how" is a different story. As long as we have an API on the SDK that says "change the collection period for this metric," then everything can be built on top of that. So I don't know if, for GA, we need to get into details about supporting environment variables or these kinds of things, remote controlling or anything like that. I'm more concerned about the capabilities that we want to allow people to configure. And, Josh...

C: Yeah, I don't want to search for it right now, but it's like 381 or something like that, basically saying "I'd like to see a proposal here." And I think John's recent views proposal, which was a very simplified version, was much more focused on getting a practical solution to this problem of "I have a bunch of different exporters."

A: Yeah, I'm happy to take that as an action item, but after OTLP; for me, OTLP right now is the first priority. But that doesn't mean, William, that we are stopping you. As I said, go ahead. We may modify what you have there with other things, but having this as an experiment, to prove what's possible and what's not possible, is amazingly useful for us, and it's very, very good.
D: Great, yeah, definitely, absolutely. So far we have just the specification and the protocol up, but we're gonna go ahead and make PRs for the actual implementation.

D: If you're curious about how we actually did it: yeah, it was a tricky problem that we talked about a lot, but I hope you guys will like our solution. I guess we'll see.

C: Thanks; yes, thank you very much. Sorry if we don't quite give you enough feedback right away; there are too many things.

C: I didn't put this one in, and I feel it's connected with the next one. So let's see who did put this in.

C: Hi... well, why don't you... sorry; please, Graham, yeah.
H: Sure. So this is just a spec to describe the semantic conventions for HTTP metric events: if you have calls from client to server, or server to client, and you want to capture that information in metrics, this describes how to do that. Just to go over it, the goal of bringing this topic up in this meeting is to bring awareness to it, so that maybe you'll read it.

H: The spec is mainly based off of the attributes that are on HTTP spans, and there are two metric instruments to capture this data: a duration, which is a value recorder, and then a request count, to capture the number of requests. As for the labels, most of the span labels, or attributes, were copied over as labels on these metrics. Do you need all of them on your metrics? No; we have a "recommended" column, so some are recommended and some are optional.
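To make the shape of the proposal concrete, here is a minimal sketch of recording the duration instrument from Go. The instrument name "http.server.duration" and the label keys follow the general direction of the draft under discussion, and the import paths reflect the Go API roughly as it looked in mid-2020; treat all exact names as assumptions.

```go
package httpmetrics

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/api/global"
	"go.opentelemetry.io/otel/api/kv"
	"go.opentelemetry.io/otel/api/metric"
)

var (
	meter = global.Meter("http-example")
	// The value recorder's aggregation carries a count as well, which is
	// why the separate request counter gets debated just below.
	duration = metric.Must(meter).NewFloat64ValueRecorder("http.server.duration")
)

// recordRequest captures one server request, copying span-style attributes
// over as metric labels.
func recordRequest(ctx context.Context, method string, status int, elapsed time.Duration) {
	duration.Record(ctx, elapsed.Seconds()*1000, // milliseconds
		kv.String("http.method", method),
		kv.Int("http.status_code", status))
}
```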
H: Some of them we couldn't include due to cardinality issues, such as, you know, request size in number of bytes, but I'm worried about that.

H: Again, the disclaimer about this is that it's just the beginning of the metric instruments and the labels. If we want to change those in the future, we can; I just kind of wanted to get a structure of what this would look like and get it in front of people. So, yeah.

C: This relates to the other issue, and in fact this particular line here was already debated in the last one. The point that was made there is that the value recorder includes a count, so we should just need one metric per span. And I know others have commented already on this scheme, on this list here; it just sort of looks like you're...
B: ...so, Josh, just to back up to your first point: a value recorder goes through an aggregation pipeline, and that aggregation pipeline outputs a min, max, and count, and that count is the thing that actually is the count. That pipeline guarantee is not always going to be the case going forward.

A: I see. Is this only one metric, or is this only one instrument?

G: Okay... oh, sorry, I thought... sorry; getting out of my corner.

F: I'd like to plant a little bit of a seed here that I've been thinking about quite a lot since Graham's PR. If we end up with a bunch of separate semantic conventions for HTTP and for database and for gRPC, etc., is this first one, the one that I proposed as a general semantic convention, even valuable? Or should we just roll it up and make only the semantic conventions for the category-specific areas? And I recognize we're out of time, and we don't need to dive into that conversation now.
C: A very good point. Yeah, part of me is thinking now: maybe we should just have an attribute on the labels which says "I'm high cardinality" or "I'm not," and then a spec like the one you've written, Justin, could just be that the transformation from a span to a metric is "drop all the high-cardinality tags and slap the rest on there." Yeah.

A: The other thing that I would like to consider is: do we extract all these metrics only from the spans APIs, or do we do other things? Because it seems to me that all these proposals I've seen in the past two weeks focus only on that. Thanks, Josh, yeah.

A: Yeah, so, everyone, in case you don't know: we had another meeting Tuesday around 3 p.m., mostly to talk only about OTLP, because it's the hard topic. My idea was to keep that meeting around for another week or two, until we finalize all the OTLP hot issues, and then we drop that meeting. It was mostly to force us to make progress on OTLP.
A: I cannot promise next week; the week after next, I promise a hundred percent. But let me get this OTLP work moving faster and get it done. I feel like the ball is rolling right now; let's not stop it. I know we may sacrifice a week for other things, but I'm willing to do that. If others are not, maybe we should not do this.

B: No, no, that sounds good. I think that OTLP has been... yeah, what you just said. We're doing great. Okay.
A: One thing that we had in OpenCensus, and we have here as well, and it can be the answer, is the correlation context, the baggage. If we put information inside the correlation context, inside the context, then every time you create a span, we can grab this information and use it as attributes; we can also consume these entries as labels for the metrics. Is it readable right now? You don't need to read it; we, as the SDK, have access to it.

A: We just say that every time you do a metric record, a measurement record, if you pass the context (in Go it's manual passing; in Java we grab it from the context automatically), we allow you to use entries from the correlation context as labels, using the views. Users don't have to do anything. That's the beauty, the very beauty, of this concept of what we called tags before. We now focus only on the observability part, but that was the beauty of it:

A: You don't have to do anything. You just annotate these requests once, saying the HTTP method is foo, the HTTP thing is bar, whatever, and then down the road you just do a metric record. By default we don't grab anything, but with the views, people are allowed to tell us where to find different entries in the correlation context, grab them, and use them as labels for the metric.
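A minimal sketch of that flow in Go: the labels come from correlation context carried with the request, not from the instrumentation call site. The import paths reflect the mid-2020 Go API (correlation context was later renamed baggage), and the view configuration that would copy entries onto recordings is assumed, since the views API does not exist yet at this point.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/api/correlation"
	"go.opentelemetry.io/otel/api/global"
	"go.opentelemetry.io/otel/api/kv"
	"go.opentelemetry.io/otel/api/metric"
)

var (
	meter    = global.Meter("baggage-example")
	requests = metric.Must(meter).NewInt64Counter("requests")
)

func handle(ctx context.Context) {
	// Upstream middleware annotates the request once...
	ctx = correlation.NewContext(ctx, kv.String("http.method", "GET"))

	// ...and a hypothetical view configuration would tell the SDK to copy
	// selected correlation entries onto every recording made with this
	// context, so the call site itself stays label-free:
	requests.Add(ctx, 1)
}
```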
A: To be honest, in order to call OpenCensus deprecated and end-of-life, we need to do that. That was the most important feature of OpenCensus, and that's how I designed that system: with the mindset that there are things flowing with the request, and every time you do this, you create spans, you create metrics, and so on.

E: So I think my recommendation would be that we need to step back and figure out that part first, because right now I think we're talking a little bit into a vacuum, from OpenTelemetry's perspective. Some of us are used to writing instrumentation where you're specifically writing metrics and specifically writing spans, and some of us aren't; some of us come from the OpenCensus background, where the SDK, or whatever the equivalent is, would do it automatically for you. And I don't think we're all talking the same language at this point.
E: So getting us all on the same page with respect to the language and the expectations of what we're attempting to accomplish feels like it would mean less time going around and around and around, which I feel like we're doing right now, especially on Justin's PR, which has been open for a long time. I feel like there's a lot of talking across each other. Do you think...?

E: Yeah, I mean, I don't think I have time for it right now. I'm sure I don't have time for it right now.

A: I mean, somebody has to start, maybe with some schemas, documentation and stuff, but really think through the entire system and all the possibilities. As I said, and as John pointed out, people are not necessarily used to what I tried to do in OpenCensus and how I solved that issue there. There are things... I don't know if a meeting is the right format, but instead of focusing on solving, let's say, the semantic convention for one specific thing...
A: ...we probably have to step back a bit and think about the overall picture, and the questions that John asked us. Like: okay, we call both APIs; we call this API to create a span, we call this API to record the latency metric, for example, and another API to record, let's say, the bytes in and bytes out. Okay, we called these APIs a couple of times. Now there is the problem that for spans we need to set attributes, and for metrics we need to set labels.

A: Do I call it "http.method," or do I call it just "method" for the label? That's one of the things. The other thing is: how can we avoid duplicate code? That's the thing I mentioned to you: if we have these correlation context tags, we may actually not have to do all of these things, and we're just talking about which semantic conventions we add.

E: It sounds like what we probably need to do before any meeting is create an issue. If we don't already have an issue to track this, we should create one in the specs repo, tag it with metrics, make it high priority, and kind of run with it. I'm willing to do this; I can probably do it tomorrow. Just outline the questions that we feel we need to figure out answers to, to kick off the process.

A: And again, a meeting is not necessarily required. I've found it useful to get people's attention with meetings: if I really want to solve something, I've found it very useful for getting people's attention and making sure that we are on the same page. But yeah, documents and stuff can work as well, and probably, as a community, we should learn to make documents work better than they are working right now.