From YouTube: 2020-07-09 Spec SIG
A
Everyone, thanks for joining the metrics SIG. I imagine we'll get started in a little bit. Wait, no: Josh, I think, is filling in most of the items, found it to work? I got you, cool. Well, Josh is figuring that out. Make sure that you add your name to the attendees list, and if you have any other items, please make sure you add them to the agenda.
C
Okay, now I have the right microphone, cool. Hi, everybody. There's a lot happening and I made some notes. Agenda: I didn't put time estimates on these topics, so let's try and keep on track anyway.
C
Here we are, cool. Hi everybody, so lots of people here, good. I feel like I've been overwhelmed with lots and lots of issues.
C
I put the things that were up first for me, particularly the stuff about Prometheus exporter work, but I was really realizing that we've sort of not given much attention to views, and I want to put that first. I just revisited the Python one, you know, 596, and looked through it, offered some feedback, and I like the direction. Would Laysan or Chris, or anyone who's more directly involved in those proposals, like to speak?
C
I don't see enough people... but it came up because there were some topics, at least for me. You know, we've got a number of internship projects right now, which are all adding to the kind of philosophy here, and this one appears to be connected as far as I'm concerned: this idea of remote configuration for the SDK, which, as a starting point, includes some protocol for setting the interval used for collecting metrics.
C
OpenTelemetry made some announcements and started to gather attention, and then people came in and started to try it out, and we found a number of ways that metrics export was broken, particularly in the Go repository, where I've been doing a lot of the work, but also related to OTLP and to the Prometheus exporter in the collector. We found several configurations broken: a bug in the current release of Go and in the Prometheus exporter in the 0.4 collector. All of these things are very problematic, and I...
C
I think we discussed last week that we're going to disable the OTLP receiver in the collector while we get this sorted out, so I expect the next collector release will disable the OTLP receiver so that we can get it working again. The root cause here is that the Prometheus exporter in the collector is expecting the OpenCensus data format, and the transformation from OpenCensus...
C
...to OTLP is not working correctly at the moment, and so you effectively get counter values passed through which are deltas, and they're being exported by Prometheus as deltas instead of as cumulative values. So it looks to me at this point that the simplest fix would be to literally start using the Prometheus client library in the collector and transform those OTLP data structures that we're getting back into Prometheus client calls, similar to what's being done in the Go SDK. So I'm not really looking for...
C
We don't actually have any proposals here that need input; it's just to say that it's all very broken, and the reason it's basically broken is that OTLP is unfinished and we've been sort of making releases without focusing on that detail. So as far as this goes, all the issues point at OTLP, and I was going to move on to that.
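Since the breakage described above is counters arriving as deltas where Prometheus expects cumulative totals, the fix amounts to stateful accumulation per label set. A minimal sketch in Go; the names (`cumulator`, `addDelta`, `labelSet`) are illustrative, not the collector's actual API:

```go
package main

import "fmt"

// labelSet is a stand-in for a metric's label set, flattened to a
// comparable string key.
type labelSet string

// cumulator folds per-interval counter deltas into running
// cumulative totals, which is roughly the transformation needed
// before exposing counters to Prometheus.
type cumulator struct {
	totals map[labelSet]float64
}

func newCumulator() *cumulator {
	return &cumulator{totals: map[labelSet]float64{}}
}

// addDelta records one delta observation and returns the cumulative
// value that should be exported for this series.
func (c *cumulator) addDelta(ls labelSet, delta float64) float64 {
	c.totals[ls] += delta
	return c.totals[ls]
}

func main() {
	c := newCumulator()
	c.addDelta(`{method="GET"}`, 5)
	c.addDelta(`{method="GET"}`, 3)
	fmt.Println(c.addDelta(`{method="GET"}`, 2)) // cumulative total: 10
}
```

The point of the sketch is only that cumulative export is stateful: the exporter must remember prior totals across collection intervals, which pass-through of deltas does not do.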
C
Although there is a note here that OTEP 118 refers to Prometheus, since that's the default. This is basically trying to set the story for gauge: uses of gauge in Prometheus and statsd need to come out as gauge in Prometheus and statsd, and the way we had characterized the instruments, especially the grouping instruments, those which receive individual values, they were being exported as summaries, like a min-max-sum-count style, and users of Prometheus and statsd are getting a little bit confused by that. So proposal 118 changes the default.
C
This is the big deal right now, and I want more people to keep thinking about this. This PR combines several people's work, and I don't want to just take credit for that. It includes Connor's work on exemplars, and it takes some of the pieces of Tyler's PRs that were closed in recent weeks. This is mostly... there's two.
C
There were two big parts of this change, and Bogdan last week asked that we split it up. I'm still considering whether that makes a lot of sense or not: they have to be released together, or else we're in a bad situation, so I'm not sure it helps us to make them into two PRs. But the first part seems relatively accepted.
C
This is that we're going to add extra bits to explain all the things that we know about the instruments which are not the lowest common denominator. So if you're importing data from other systems, you might not actually know some of this information that we have detailed in OpenTelemetry, and that should be okay. But getting these extra bits in is going to, I believe, offer more opinionated products, products that work out of the box, because we have more metadata about the instrument.
C
So that part is an enum called kind, which says whether you're cumulative or delta or instantaneous, but also tells you whether you're adding or grouping, whether you're synchronous or not, and whether you're monotonic or not. So that's kind. There's a type called value type, which says which field of a data point you might find your data in and which form it takes.
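To make the shape of that concrete, here is a minimal sketch of what such a kind enum could look like as a bitmask. This is not the actual definition from the PR; the names and bit layout are assumptions for illustration:

```go
package main

import "fmt"

// Kind is a hypothetical bitmask carrying the information described
// in the proposal: a required temporality plus three advisory bits.
type Kind uint8

const (
	// Temporality: exactly one of these is required.
	Instantaneous Kind = iota // 0
	Delta                     // 1
	Cumulative                // 2

	temporalityMask Kind = 0x03

	// Advisory bits: safe defaults exist when they are unknown.
	Adding      Kind = 1 << 2 // sums are meaningful (vs. grouping)
	Monotonic   Kind = 1 << 3 // never decreases; often shown as a rate
	Synchronous Kind = 1 << 4 // recorded by the app, not an SDK callback
)

// Temporality extracts the required part of the kind.
func (k Kind) Temporality() Kind { return k & temporalityMask }

func main() {
	counter := Cumulative | Adding | Monotonic | Synchronous
	fmt.Println(counter.Temporality() == Cumulative) // true
	fmt.Println(counter&Monotonic != 0)              // true
}
```

The design point is that a consumer can always read the required temporality, while the advisory bits can be left unset by importers that don't know them.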
C
So I think there's agreement on kind and value type. The rest of that PR is about the data point, the actual struct, and what we discovered last week is that the actual structure has been influenced by performance considerations, particularly performance considerations that come from Go, and that's because the collector is written in Go.
C
So presently, if you look at the protocol before 162, you'll notice that there's a type for an integer point, a type for a floating point, a type for a summary, and a type for a histogram, and they all have some common fields, but there's no common type that contains those common fields, because that would just hurt performance. And yet that doesn't feel great to me.
C
So Bogdan basically left us with a performance question here, which says that if you use the naive protocol transformation and factor your structures so that all the common fields are in one place, you end up with a lot of memory or slow performance, and it's the memory thing that seems to be the biggest deal. So let's look at this. I'm the only one speaking; I want to stop for a minute and let anyone...
D
C
...up to me. Anybody want to talk? Okay, well, if not, I'm going to show you what I'm talking about. So this is the proposal that I have put together, combining those various pieces of work. There's a diagram, which I adjusted a little bit, and I mentioned that there's two halves of this work. One is that there's a new value type definition: it includes integer scalar, double scalar, histogram, summary, and raw values. And then there's a kind field which includes all this information: instantaneous, delta, cumulative, grouping, adding, monotonic, and so on.
C
That stuff is not what I wanted to talk about here; it's this data point organization. So whereas before we had an int point that contained labels and timestamps, I introduced a data point type which contains the labels and timestamps, and then it has essentially a oneof here, where it can have a value that's an integer, a value that's a double, a value that's a histogram or a summary, and then we've talked about raw values and exemplars.
B
C
Gogo, and then there's Google version 1 and Google version 2. There's a chance that gogo could do something useful for us here, and possibly that could make it possible to not need to hand-code everything. Basically, what Bogdan would like is that if you're going to read in an array of data points, and the data point contains just an int64, we shouldn't have to have one word of unused space for the double and one word of unused space for the histogram.
C
Recording the value type, I only need one word of storage for this data point. So I'm going to put them into an internal representation that just has, say, a slice of integers or a slice of float64s, and then I'm going to carry that through my pipeline in the collector, and when I get to export, I'm going to regenerate OTLP. You will actually pay a CPU cost and a memory cost regenerating OTLP, but at least it isn't standing memory that's passing through the pipeline.
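The idea of carrying a compact internal form and regenerating OTLP only at export time might look like the following sketch. The `column` type and the conversion are hypothetical, not the collector's actual internal data design:

```go
package main

import "fmt"

// column is a hypothetical columnar representation: all points of
// one series share parallel slices, so an int64 series costs one
// word of value storage per point, with no per-point allocations.
type column struct {
	TimesUnixNano []uint64
	Int64s        []int64 // used when the series' value type is int64
}

// otlpInt64Point stands in for the wire-format point we would
// regenerate at export time, paying CPU and transient memory there
// instead of carrying the heavier form through the whole pipeline.
type otlpInt64Point struct {
	TimeUnixNano uint64
	Value        int64
}

// toOTLP materializes wire-format points from the columnar form.
func (c *column) toOTLP() []otlpInt64Point {
	out := make([]otlpInt64Point, len(c.Int64s))
	for i, v := range c.Int64s {
		out[i] = otlpInt64Point{TimeUnixNano: c.TimesUnixNano[i], Value: v}
	}
	return out
}

func main() {
	c := &column{TimesUnixNano: []uint64{1, 2, 3}, Int64s: []int64{10, 20, 30}}
	pts := c.toOTLP()
	fmt.Println(len(pts), pts[1].Value) // 3 20
}
```

The output of `toOTLP` is transient, matching the trade-off described above: cheap standing memory in the pipeline, a one-time re-encoding cost at the exporter.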
C
It's transient at the very end, but introducing a custom parser does mean that we have to pay more to encode it at the end, and I think that no one has ever found that great balance, in Go especially, of protocol buffers that are easy to handle in memory and then cheap to re-encode. So if you have to decode, you have to re-encode them again, and that's what Cap'n Proto buffers are trying to solve.
C
So I guess, Tristan, your question was: can you put in a custom implementation so that when the protobuf library reads these values, it somehow reads them into a value that's going to save space? I think that is possible. It's going to use a lot of type unsafety, though, which is not impossible to do.
C
It's dangerous, but if we don't do something like that, I think we're going to have to move back to the organization from before. So that looks like this, where there's a double data point and an integer data point, and they have the same fields, and we know that we're only going to have one of them. And then, just to complete this story, this is what it looked like before: you had four repeated values in the metric. Now you only have one repeated value in the metric, and I like that. But again, it's adding more memory, yeah.
B
C
Well, you'd have the pointer. So all you have... I'm sorry, I'm not going to do this in real time. You have an interface in the data point, and then you have another object inside of there that implements that interface, so you had to allocate another object. So oneof is often avoided because it just creates lots of small objects, and that's not exactly the problem that was being solved by this organization.
C
This organization avoids having a bunch of unused pointers, which is the opposite of the oneof problem, I guess. Okay, so that's what needs to be addressed. I'm pretty motivated to just keep moving forward, so maybe the thing to do is to avoid this debate by going back to what was the status quo, which means no new memory. Although I had added two new fields, two new value types: one is raw values and one is exemplars. So we're going to make the metric bigger even if the data point can be kept small.
C
My fear is that the pipelines that we have, or that we imagine, are going to try to get in there and modify the data stream, and if you've done that, it's just going to make things harder, because you can't give a caller a protocol object anymore. All you have is an interface that generates a protocol object, so if you've got this metrics transform processor, it's going to see this heavyweight interface that you can iterate to visit the metrics.
C
It should be cheap to visit the metrics, but you're going to have to build a new protobuf object at that point to modify it, I guess. So that is probably the least attractive part of it, and I don't know how much intention there is to do mutation of data streams. Possibly... I mean, there's pros and cons.
C
You still have to turn those internal representations back into OTLP, so there's a lot of unknowns here. I agree, though, Tyler, on the scope of work. I wouldn't try to get us to commit to something like this without exploring it, or showing people what it looks like to parse some of this protocol buffer code by hand.
B
C
Not an annoying question. There was a brief moment months ago where we were talking about OpenMetrics and the text representation, and somebody said, oh, text is always going to be slower than protobufs, and I said...
C
So the other thing, Tristan, is that the OpenMetrics team has felt very detached and not public. We have asked, we being myself and Bogdan and others. Rich has been to the meeting here a few times, I'd say regularly over the last few months, and so he's acting liaison. But I haven't seen the RFC or the finished product of their proposal ever, and that's one thing that is holding it back.
C
I like the OpenMetrics histogram definitions a lot, and I think we should just copy them, and I'd love to get to a point of actually talking about whether the things that are new about OTLP here might just be compatible somehow with OpenMetrics. And yet OpenMetrics is heavily influenced by Prometheus, and there's a few things that we've done here that are significantly different from Prometheus, with performance considerations in mind. So we need to have a bigger conversation if we're going to go that direction, I guess.
A
...a point, and then a time series, and then that time series has specific points, so it's more structured as to where you're going, if I'm reading this correctly. I thought they have a oneof... I think it's similar in the point that you were just looking at. Oh, you're right: it isn't a oneof, that's a value. Sorry, yeah, absolutely, so they don't even care, yeah.
C
Right, we were waiting for something before we had a meeting. Okay, well, I'm definitely making a note that OpenMetrics keeps coming up, and it's also because of the protocol details, because I believe that OTLP is structurally very close to what they have here, and maybe... it would be nice if we could get to a point where there weren't two protocols.
C
Okay, so the resolution there, I think, is: we should study the OpenMetrics proposal, and a meeting with them to iron out some details and maybe consolidate would be useful. And to satisfy Tigran's concern, we should think about a performance study to make it look compelling for the collector, because I think that what we just saw with OpenMetrics would have the same problem; the collector is what we're talking about for that piece.
A
C
Next, yeah. I just... I mean, I have in my career had a number of experiences where writing protocol code by hand was the right thing to do, that number being small, but not one. So I'm a little more comfortable, perhaps, with that than many, but I do think it should be included with this PR, and/or reverted; I'm just thinking about the time that's going to involve.
C
Let's see. So, what I could do: I don't think it's weeks, I think it's a few days. Of course, there's lots of other things on my plate, so a few days means prioritizing something over something else, I guess, and I was going to ask for some volunteers on a few other things that are out there right now. But this, Tigran, is the biggest deal right now, I think.
C
I think writing up a few pages and a little bit of sample code, and maybe a sample benchmark, to show that yes, you can decode this faster, and yes, you can save memory, but yes, this makes it harder for the pipelines to manipulate, and there's some trade-offs there. What do you guys think? This would be the kind of output I was looking for, and I think getting these details right is super important right now, so we shouldn't rush through this. Yeah, I firmly support it; we use the script.
C
Okay, well, I'm going to make notes, and then by the end of this meeting you'll see that there's a lot of other things that need prioritization. So let's continue. I guess the next thing I wanted to say is there's a whole bag full of issues about naming, and I've been trying to delegate a lot of it, as you could see. So there's...
C
Justin, yes, what's the suggestion? You are... well, there's also Aaron. Aaron, maybe on the call, Aaron who's working on 119. The two of you have been the leading contributors on naming at this point, and that's a great suggestion, Tyler. So between 119 and 655 there's a lot of open questions, and I'm... actually, I think 119 is very close, if we could just look at that. There was...
C
Aaron said he would change something, and this is the only unresolved conversation, so I think if we resolve that, we can merge this. And then these questions in 108 are pretty closely linked to the questions in 657, which was Justin's, and this particular one is deceptively complicated, I have to say. If you look at this particular... which one of these was it? It's the one that CJ started about success and failure, which is down at the bottom. Here we go, so...
C
It's hard not to bring up other ongoing conversations in OpenTelemetry when I look at this. There's this OTEP, number 123, from Microsoft, asking for more detail than the canonical status codes, and then there's this issue here saying we would just like to know success and failure. And the reason why this seems so important to me is that I think we want to start having an option to automatically generate metrics from spans. This allows the collector to fire metrics on behalf of a process...
C
...that's just been reporting spans, and so we'd like to know exactly how to take a span and turn it into a metric, and we have most of that figured out. The question here has to do with status code, though, and I can see kind of why people are asking for a success-and-failure boolean, and I can understand the use of the status canonical code, and I can understand the question from Noah Falk of Microsoft asking for more.
C
The span will always have a status canonical code, and therefore the metric can always be mapped into success and failure, and this use of status canonical code gives us some flexibility in changing the definition of success and failure. So, potentially, the reason we need the status canonical code is that we want to be able to monitor success and failure through metrics. I don't know if that argument made any sense; I'm not sure if I should try to say it again, though. So I just wanted to call out that there are... so.
C
This is the one I just referred to: 123 is "a span's status is not enough, we need more", and I'll let you read that, it's pretty long. And then there's me asking if we shouldn't start thinking about naming spans to have names that are compatible with metrics, and that's a really controversial one; I don't think there's any way I'm ever going to convince anyone of that. But there are some questions about how we would generate metrics from spans that do connect with 657. So, Justin, are you interested in any of this?
E
So I think at the moment I would like us to get 108 to a state that is kind of resolved. You know, I don't know, since it's just kind of suggestions or guidelines, I feel like it should be easy to get to a state where we can all agree, at least on the guidelines. And then this PR that you have up in front of you is a little bit more prescriptive, but still just guidelines, and from there maybe we start working on the views API at the moment.
C
What I meant to say there is that I was imagining that we would have an automatic facility to say: there's a span event, hey, metrics SDK, do you want to look at that? And then the metrics SDK is going to get a span which has attributes and a status code, and we want to turn that into a metric. It's just nice to have that go to a metric, so the status code can become sort of like a semantic convention, and in fact this whole debate is about...
C
...why is that code so special, and not just another semantic convention, I think. And I think what we're trying to say here is that it's effectively a semantic convention, whether you have a status code or a success/failure label. It's not about how duration is getting encoded as metrics, so it should be a separate issue. But then I think...
C
The reason why it comes up as a views API question is that you're going to get raw data from that span that says, assuming the status quo, you know, deadline exceeded or invalid argument or whatever, and that's more information than you care about. As a user, you care about success and failure, and as a user you define deadline exceeded to be a failure and you define cancelled to be a success, or whatever...
C
...works where you are. So then I think what we're imagining is that there is a view that can say: I want to export a label called success/failure, or whatever you call it, success equals true, which is computed from some other label in a fairly simple way. And the proposal here is that maybe the views API could support an element of configuration primitives, so that you can say: I would like to map this metric into an export where the label is computed from another label.
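As a sketch of what such a view primitive might look like: the canonical code values below follow the gRPC numbering the spec borrows from, but `successView` and its method are hypothetical names, not a real API:

```go
package main

import "fmt"

// canonicalCode stands in for the span status canonical codes.
type canonicalCode int

const (
	codeOK               canonicalCode = 0
	codeCancelled        canonicalCode = 1
	codeDeadlineExceeded canonicalCode = 4
)

// successView is a hypothetical view-style configuration primitive:
// it computes a coarse success label from the fine-grained status
// code label, with a user-overridable failure classification.
type successView struct {
	failures map[canonicalCode]bool
}

// label maps a status code to the exported success/failure label.
func (v successView) label(code canonicalCode) string {
	if v.failures[code] {
		return "success=false"
	}
	return "success=true"
}

func main() {
	// This user counts deadline-exceeded as failure but treats
	// cancellation as success, matching the flexibility argued above.
	v := successView{failures: map[canonicalCode]bool{codeDeadlineExceeded: true}}
	fmt.Println(v.label(codeDeadlineExceeded)) // success=false
	fmt.Println(v.label(codeCancelled))        // success=true
}
```

Because the fine-grained code is always present on the span, this mapping can be reconfigured later without re-instrumenting anything, which is the point being made in the discussion.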
C
D
The SDK does not expose attributes to anything except export; they're not readable by anything. They're obviously not readable by the API, but they're also really not readable by span processors; they're only really readable at export time, and I wonder how we get this. And I actually ran into this ahead of this discussion, with Justin a little bit, and I put a comment on his PR about it. But on this one, when they tried to do manual instrumentation with this, I had the same problem. I was like, wait: I don't have access to the span attributes unless it's exactly the same instrumentation that is creating the spans that is also creating the metrics, in this field app. And that's going to be a big problem if we really want to start mapping attributes from spans onto metrics automatically, or even manually; there isn't really a facility for doing this, yeah.
C
So I totally agree with and support the notion that spans are write-only. I thought that there was a proposal, an active one, to make them readable in the processor, but nevertheless, to know the duration of a span, you need to wait until it's done anyway. So I think my assumption was that the SDKs would have to be intertwined, so the SDKs will know about each other.
C
So, in fact, I think we're lacking a concept of the SDK as a single unified concept: the thing you assemble out of a metrics part and a tracing part. Once you start talking about the two together, it becomes one, and there's no type or interface that we have in the spec...
C
...that says "the SDK". But you can imagine one piece of code that implements both the tracer and meter interfaces and then does something clever, or you can just imagine an event from a span export being translated into a metric event. I think you can also imagine a more sophisticated thing that integrates at the sampler API level for the tracers, and I don't want to go any further, but I think there's something there, yeah.
D
There's this interesting idea about kind of letting the metrics SDK have intimate details of the tracer SDK implementation, because then we do start kind of making a few prescriptions on what someone who wanted to implement their own SDK would have to provide. Like, if I only wanted to implement a custom tracing SDK, or if somebody else wanted to do that for metrics: is there a protocol, a common language they can speak, in order to make these facilities? Yeah.
C
Let's see, okay, there's some time left, and we've gone through all the items in the agenda, but I feel like there are some sort of unresolved things that we stepped past on this, and I really hope we can just not talk about OTEPs here; it's going to happen in the spec meeting, it's going to happen in those chats.
B
I have a question that goes back to earlier, about kind, because I think "kind" may be overloaded a lot in the metrics spec. Looking at the API docs, in the Go API there's Kind, which I think is used to mean like a counter or up/down counter, and then there's NumberKind, which I guess... and there's an aggregation Kind in the exporter. There's four kinds in...
C
I wonder if I could dwell on this kind thing. There were some questions that came up, and right around here, this block of comments, it's worth really focusing on. So if you look backwards at OpenCensus, or at the current OTLP proposal (it's not merged), there were just three values: instantaneous, delta, and cumulative, and I think there are sort of two levels of requirement on this field here. There's an absolute requirement that you know whether you're dealing with instantaneous, delta, or cumulative. Notice...
C
If you don't know that much information, you are very likely to misinterpret this data and use it incorrectly. So that's sort of a required attribute: you must know whether it's instantaneous, delta, or cumulative. Now, these other bits that I've added in are just taking advantage of extra bits, and they are more of an advisory. You don't absolutely need to know them; you can probably make an interpretation of the numbers without knowing this information.
C
Sorry, I'm looking up here; this comment is complete. So I've talked about the individual seven bits. We know about cumulative, delta, and instantaneous; those are required. So, adding and grouping: the two levels of interpretation I'm trying to establish are "these are just numbers" versus what you're going to do...
C
...that's more correct when you do know more information. So, adding versus grouping: I try to explain some of the finer points, which is, if you're a histogram you might be adding or grouping; if you're adding, the sum is still meaningful, and if you're grouping, the sum is a synthetic thing that probably doesn't mean anything to anybody. That's one example of where adding and grouping is useful.
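A tiny sketch of the consumer-side consequence described here; the function is hypothetical and just encodes the advisory rule that a grouping instrument's sum is synthetic:

```go
package main

import "fmt"

// histogram is a minimal aggregate: a count of values plus a sum.
type histogram struct {
	Count uint64
	Sum   float64
}

// meaningfulSum returns the sum only when the instrument is
// "adding", where the sum is a real total worth displaying. For a
// "grouping" instrument the sum is synthetic, so a consumer that
// knows the bit can suppress it by default.
func meaningfulSum(h histogram, adding bool) (float64, bool) {
	if adding {
		return h.Sum, true
	}
	return 0, false // synthetic sum: don't show it by default
}

func main() {
	h := histogram{Count: 3, Sum: 42}
	if s, ok := meaningfulSum(h, true); ok {
		fmt.Println(s) // 42
	}
	_, ok := meaningfulSum(h, false)
	fmt.Println(ok) // false
}
```

If the bit is unknown (imported data), either branch still yields interpretable numbers, which is what makes this bit advisory rather than required.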
C
So if you're a system and you get a new metric you've never seen before, you're going to pick a different default based on whether it's adding or grouping. But I want to just emphasize: if you don't know, you can make a guess, and it's not going to be the end of the world here, whereas with cumulative and delta you really did need to know. The same goes for the other two bits. Monotonic...
C
...if you don't tell me it's monotonic, I might have a different default behavior in the user interface. When you know something is monotonic, very often you display that as a rate by default; if you know something is not monotonic, you probably will not. And then this question came up about synchronicity, and this is probably the hardest to justify, but I still want it. So synchronicity says whether this point came from the application or whether it came from the SDK.
C
So if you're an SDK callback, the rate of those callbacks is totally telling you about the monitoring system, not about the application; that's what that bit is for. So I'm welcoming you all to scrutinize this a little bit more. Synchronous, monotonic, and adding versus grouping are all nice to have, not necessary. So if you're importing data from another system, you probably won't know, and we need to choose defaults, but there is a default that makes sense in each case. Any comments?
C
So the big action item for me is 162 here, which is, I think, the most important thing to unblock us on Prometheus: do a little bit of a performance study to make sure that we're not going to harm ourselves by adopting this factored approach that has a new data point type, and I am going to take that on. I'm really eager for Justin to take over 108; I hope that I'm not pressing too hard on that. And, let's see, what else... anything else you want to save for next week?
C
For next week, we are going to all study these APIs. I want to set a stretch goal of getting this 162 merged in a week, so I'll put in some time to do the performance study; maybe by Monday morning you'll have a performance study of this. And otherwise we just keep on keeping on: we'll try to avoid debate over span status and success/failure for these PRs, which I think will help us get them merged, and we'll just meet next week. How's that sound? Good, cool. Thanks, everybody, and stopping the share, and...