From YouTube: 2020-09-11 meeting
A: Here we are. Okay, I started making the agenda a little while ago to try and keep on top of some issues here, and we're having a hack week this week at work, so I feel like I should try and have fun as well.
A: Add your name to the attendees list if you're here, please, so we know who's following us, and it's time to get started. I know that we're getting close to this pressure on getting to GA, so I've marked the issues below that are about today, but there is one item here at the top of the agenda that was added, I believe, by Bing Fang from Amazon, who has taken on the work of the statsd receiver.
A: So this question I tried to answer myself and couldn't. It turns out there's not been a collector release in the last two weeks; I think the last time we spoke, we thought one was imminent.
B: Yeah, thanks, Josh, I appreciate the input. I think the part about waiting for OTLP is okay; I think the other one is how we ensure the interfaces are right. There's testing we can run to make sure that what we actually send truly meets the requirements for OTLP; those are test cases we can do. I want to make sure that we are following it so that it benefits customers and...
A
The
users
right
well,
I
looked
at
the
code
earlier
to
see
where
it
was
standing,
so
it
looks
like
we've
added
support
for
labels.
That's
good
and
my
understanding
of
how
cesc
works
suggests
that
we
probably
have
all
the
otf
pieces
work
that
we
need,
so
that
is
assuming
that.
Well,
I
just
said
the
wrong
thing.
A
So
the
problem
that
we
have
with
the
dog
staff
c
is
that
points
arrive
raw.
So
if
you're,
a
histogram
measurement
or
a
timing
measurement,
these
are
effectively
raw
measurements
and
we
could
turn
them
into
gauges
in
otlp,
and
some
downstream
system
might
look
at
them
and
be
able
to
use
them.
But
there's
going
to
be
an
aggregation
that
gets
applied
as
far
as
I
know,
currently,
there's
neither
a
pipeline
stage
in
the
collector
that
can
do
this
sort
of
aggregation
and
there's
also
not
a
raw
data.
A
Point
field
that
we
have
in
otsp
to
just
simply
represent
these
raw
points.
So
so
the
idea
of
representing
them
is
engage
is
a
pretty
bad
work
around
because
for
the
most
part,
when
somebody
sees
a
gauge
they're
going
to
think
it's
the
last
value
of
some
something,
not
a
distribution
member.
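A minimal sketch of the mismatch just described, using toy stand-ins for the data points (the dict shapes here are illustrative, not the actual OTLP protos): rendering raw statsd timings as a gauge keeps only the last value, while even a crude histogram keeps the count, sum, and shape of the distribution.

```python
# Illustrative only: toy stand-ins for metric data points, not real OTLP protos.
def as_gauge(raw_points):
    # A gauge reads as "the last value of something": the distribution is lost.
    return {"type": "gauge", "value": raw_points[-1]}

def as_histogram(raw_points, bounds=(5, 10, 25, 50, 100)):
    # Even a crude explicit-bounds histogram preserves count/sum and shape.
    buckets = [0] * (len(bounds) + 1)
    for p in raw_points:
        i = 0
        while i < len(bounds) and p > bounds[i]:
            i += 1
        buckets[i] += 1
    return {"type": "histogram", "count": len(raw_points),
            "sum": sum(raw_points), "bounds": list(bounds), "buckets": buckets}

timings_ms = [3, 7, 7, 42, 98]      # raw statsd timing measurements
print(as_gauge(timings_ms))         # only the 98 survives
print(as_histogram(timings_ms))     # count=5, sum=157, bucketed shape
```

The downstream consumer of the gauge has no way to recover the other four measurements; the histogram form is what the OTLP-facing receiver would want to emit.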
A: I don't know if that landed anywhere; I don't think it has, so that is not going to help us. I know Bing is saying that you're under some time pressure and you'd like to get something that works. The idea that we had discussed, the last time I discussed this with Nick, was that perhaps the statsd receiver should simply do its own sort of aggregation over a short period of time.
A: One of the next topics that we'll talk about after this one is what to do about ValueRecorder, those being the synchronous instruments that record measurements. When you see a statsd histogram or a statsd timing measurement, they really are equivalent to those ValueRecorder measurements.
A: So we are going to talk about the default aggregation for those ValueRecorder measurements. Probably what we should be doing is having the statsd receiver do the same aggregation. It could be over one second or ten seconds, but it means adding a slight delay: when the data arrives, buffering it, building an aggregation, and then sending it as OTLP.
A: So, if we can... I mean, there's a degenerate case you could pick that would probably be pretty bad: say I have a histogram with one point in it. You'd send it down, and somebody downstream is going to have to aggregate those points together. Yeah, maybe that's the right thing to do as a sort of first cut.
B: Yeah, that's the part where we're caught in the middle. At a structural level, maybe a processor should handle it, but right now there's nobody working on the processor part. So we may need to think about it, or just leave the aggregation in the statsd receiver for now. That's the part we're trying to figure out, whether we should leave it, because we want to make sure that it is working.
A: Right, yeah. I haven't thought about it much until just now, but I do think that it wouldn't be terrible to just put out a histogram with one point in it for those timing measurements and those histogram measurements. Obviously it's not going to perform too well, but at least it gets you to the point where you have the data in the form we want, and it should be correct in some sense. I think that would probably be the right first step.
A: It has all the buffering that you need, it has all the support that you need for aggregators, and I have looked at whether we could just take the OpenTelemetry Go SDK and begin using it inside the collector. There are going to be some hurdles there, just having to deal with dependencies and release schedules, as well as a few other sort of technicalities, so that option doesn't sound so great if expediency is what you're after.
A: Do you think you'd be willing to move forward on this? First of all, we need a collector release to get OTLP in there at 0.5, and then I think just writing one histogram per point is technically correct and perhaps performs badly. Maybe that could be the first step, and then the second step would be to introduce a one-second delay, buffer points, and then put them in a histogram, so that you've got a second's worth of points in a histogram rather than a single point.
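The second step described above can be sketched as a small windowed aggregator (all names here are hypothetical, not the actual receiver code): buffer raw points per metric-plus-labels key for a short window, then flush one aggregate per key instead of one histogram per point.

```python
import time
from collections import defaultdict

# Illustrative sketch: buffer raw statsd points per (metric, labels) key for a
# short window, then flush one aggregate per key. Names are hypothetical.
class WindowedAggregator:
    def __init__(self, window_seconds=1.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self.start = clock()
        self.points = defaultdict(list)

    def record(self, key, value):
        self.points[key].append(value)

    def maybe_flush(self):
        # Only flush once the window has elapsed; otherwise keep buffering.
        if self.clock() - self.start < self.window:
            return []
        out = [{"key": k, "count": len(v), "sum": sum(v),
                "min": min(v), "max": max(v)}
               for k, v in self.points.items()]
        self.points.clear()
        self.start = self.clock()
        return out

agg = WindowedAggregator(window_seconds=0.0)  # zero window: flush immediately
agg.record(("latency", ("env", "prod")), 12)
agg.record(("latency", ("env", "prod")), 30)
print(agg.maybe_flush())
```

A one-second window is the slight delay mentioned above: the trade is a little latency for far fewer (and more useful) points downstream.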
A: Yeah, feel free to file an issue directly about this one in collector-contrib, and we can discuss it there, or somewhere, in greater detail.

B: Okay, sounds good. Thank you, Josh.

A: Cool, yeah. Thank you. I'm excited about statsd; I think you all know that. Okay, so let me project again. Let's see... okay, we just talked through this first one on the call at this point.
A: Okay, I've just written a summary of the current concerns about OTLP, and I've tried to divide them into things that we can probably do post-GA and things that we don't have to do. I think the goal is to get us to GA with basically the minimum that we can that supports Prometheus and statsd, I'd say. So we have pending a question about how to represent summary values, and we have a question about how to represent min, max, sum, and count.
A: Last week and the week before, we sort of have been entertaining this idea of using DDSketch, and there was some investigation into that; there's a ticket linked below. The DDSketch structure does contain min, max, sum, and count. That's one nice thing about it, but there are some technical issues that are starting to make it look not quite ready for us. The summary from Prometheus is just a listing of quantiles, so, you know, p50, p90, p95, as in earlier drafts of OTLP.
A: Several people have proposed that we could probably combine the Prometheus summary, which is quantiles, with the min/max/sum/count, which is sort of the extreme quantiles and things that are already part of summary. I like that proposal, because it gives us an alternative to histogram and an alternative to DDSketch. Part of that value is not mergeable: the quantiles are not mergeable, but the min, max, sum, and count are mergeable. So there's some hesitation towards that struct, because it contains a mixture of mergeable and non-mergeable values.
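The mergeability split just described can be made concrete with a toy summary value (the field names are illustrative): min/max/sum/count combine exactly, while the quantiles of two separate populations generally cannot be combined into the quantiles of their union.

```python
# Toy summary value: min/max/sum/count merge exactly; quantiles do not.
def merge_summaries(a, b):
    return {
        "min": min(a["min"], b["min"]),
        "max": max(a["max"], b["max"]),
        "sum": a["sum"] + b["sum"],
        "count": a["count"] + b["count"],
        # There is no correct formula for merged quantiles: the p90 of the
        # union is not derivable from the two p90s, so they must be dropped.
        "quantiles": None,
    }

a = {"min": 1, "max": 9, "sum": 20, "count": 4, "quantiles": {0.9: 8.5}}
b = {"min": 0, "max": 50, "sum": 60, "count": 2, "quantiles": {0.9: 49.0}}
print(merge_summaries(a, b))
```

This is the hesitation voiced above: a struct whose exact half survives merging while its quantile half silently becomes meaningless.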
A: There are no current PRs on this topic in the proto repo, and I think there ought to be. I've been talking a lot; does anybody have any positions that they'd like to share on the topic of summaries or min/max/sum/count?
C: Well, just to kind of recap on the summaries: the original removal of them was because of this indecision on whether that's actually the best format. I mean, I think I'm in the camp that I'm fine with a summary including a min/max/sum/count.
C: That seems to make sense to me, just given the fact that summary is kind of a non-optimized data format in itself. But I do think that before we have a PR open, we should probably have a unified decision as to the direction we want to go. I don't know; the DDSketch sounded enticing just based on the simple fact that it included the min/max/sum/count, and you could back out approximate quantile values if you wanted to. That sounds enticing, to say the least.
A: Yeah, I'm gonna start sharing this. This is the issue where that was discussed most recently, and although at a sort of distance it looks great, there are some technicalities being discussed in this issue that make it look less great. I'm not saying that makes it less great in concept.
A: It's less great in the current state of the world, meaning that the several implementations Datadog has are in different states, the protocol is not documented, and there's been some skew, even among their own implementations, about what it does.
A: So it looks like a good post-GA option to me. We need to keep refining it and get it into the spec, and the implementations that Datadog has given us need to be more mature and equal to each other, I think. Sorry, I'm mostly responding to this long comment here by Chris, Michael, the... Charles.
D: You mean... right, yes, yeah, very good. So for this, I've been working with Charles a little bit, but we do have broad support to get you what you need on the Datadog side here, to make you feel comfortable with it, and to document whatever you need, as well as some client library work. Obviously these things take planning, but if we continue in this thread and you ask any questions, I'm sure you will get answers.
A: I read this and took away from it, basically, that yeah, we can make this work. Charles has been responsive; he seems to understand the points I've raised, and that's great. I do think that getting these questions sorted out and into the protocol is going to take longer than we'd like for GA. At this point it feels that way. It also looks like the best option, and I'm so excited about it.
A: First, GA. Well, the tracing side of the committee has been trying hard to get that done this month, and once the tracing stuff reaches GA, there's going to start to be a lot more attention on metrics.
A: So I think that within a month or so we're going to start to have high pressure, and it doesn't feel like these issues are going to get sorted. One of the comments below that's making me feel this way is Armin here saying, look, we don't even have implementations except for a few languages, so it's not going to be an across-the-board default for a long time. So maybe we could just...
D: Okay, my concern with the summary one is that, like you say, it's not mergeable, but realistically, if it's implemented, it's something that we'll have to maintain forever, right? It's not like you can just tear it out.
A: Yeah, well...
C: I mean, I feel like that's kind of the critical component here, right? We could also just try to spend some time talking about a deprecation strategy, or an upgrade strategy even, to see if we could iterate on this protocol in the future. That's always been an issue as well; it's kind of external to just this particular data point, but how are we going to have upgrade strategies at the protocol level in the future?
A: What I wanted to say is that the problem we have is that Prometheus is still such a large part of the metrics landscape. We think there will still be a Prometheus summary data type, and people are going to say, can't you write a scraper that takes this data? What do we do when someone just gives you that data? But I think we are trying to deprecate it; even Prometheus has been trying to deprecate it.
D: It is also a lot easier to degrade into a summary than it is to go the other direction, right? If you want an exporter that exports a summary, one can do that. But if we write a summary at the protocol level here, it's not like we can turn it into a sketch later; we'd just have to remove it when we don't want it anymore.
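The "easy direction" mentioned here can be sketched under the assumption of explicit-bounds buckets (the function and its behavior are illustrative, not any shipping converter): approximate quantiles can be read off a histogram, degrading it into a summary, whereas nothing in a quantile list lets you rebuild buckets.

```python
def quantile_from_buckets(bounds, counts, q):
    """Approximate the q-quantile from explicit-bounds bucket counts by
    returning the upper bound of the bucket containing the target rank.
    Illustrative degrade-to-summary conversion; coarse by construction."""
    total = sum(counts)
    target = q * total
    running = 0.0
    for i, c in enumerate(counts):
        running += c
        if running >= target:
            # The last bucket is unbounded above; fall back to the top bound.
            return bounds[i] if i < len(bounds) else bounds[-1]
    return bounds[-1]

bounds = [10, 25, 50, 100]   # upper bounds; final bucket is everything > 100
counts = [50, 30, 15, 4, 1]  # 100 observations total
print(quantile_from_buckets(bounds, counts, 0.5))   # → 10
print(quantile_from_buckets(bounds, counts, 0.95))  # → 50
```

The reverse is impossible: given only p50/p90/p95 values, there is no way to reconstruct bucket counts or a sketch, which is exactly the one-way street being described.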
A: Right, so it's a format that we can export, not import. I apologize, I'm still getting this; please talk amongst yourselves. I apologize for that.
C: I think that having this exporter is a good idea. I don't know what the timeline is for how long you think it would take to get this included in the specification. Michael, I think that's probably for you.
D: With this timeline, I will definitely go and find out. I know we have support to write the code; I just don't have a feeling for the scale of the work. So, with the timeline in mind, I will answer that as soon as possible.
A: And so, to gather what you've been saying, Michael: we can always convert to a Prometheus summary if we're trying to export. That's easy even if you have a histogram or a DDSketch; that's possible. If you have min/max/sum/count, you can do it; you'd have those quantiles and nothing else. And I think there has been discussion about just removing the Prometheus scraper from the collector, at which point we don't have to worry about receiving summaries so much. There's an OpenCensus summary type!
A: Now, Michael, you've mentioned in the past that there are some standard conversions to and from OpenMetrics for DDSketch. So, on a histogram, is it sort of possible to use the same logic? Oh, sorry: we know you can generate a summary from a DDSketch. Can you generate a DDSketch from a summary in any way?
D: For legacy summaries, it is, yes. It'll be limited to the fidelity of the summary, of course, but we do that in our Prometheus/OpenMetrics integration for people who want us to scrape those endpoints.
D: Yeah, I don't know; it's no problem, it's just that the details of the code are a little bit afield from this product manager, but it's something that our customers are using every day.
D: That conversion is open source, yes. The conversion in the opposite direction is something that Joel on the team is working to get you as soon as possible.
A: Cool. So, Tyler, you asked: let's at least ask the question, what would it take to get DDSketch into the spec pre-GA? Because it does solve a lot of our questions. I'm looking at this technical stuff about bucket indexes and continuous buckets and such, so part of it is: we need the protocol to be documented, and then everybody has to look at it and agree on it. I need to study this with more attention than I am at the moment.
A: Then the question is the details. According to this, we have a Java implementation, the Python and Go implementations have fallen behind a little bit, and then, of course, there's the agent implementation, which is different from the Go implementation but is also in Go. So I mean, I think we all like the idea of using Datadog's code, but it's just three languages, right?
D: What languages do you consider GA-ready? That would be useful.
A: That's a good question. I should...
E: Yeah, we care most deeply for GA about Python, Java, JavaScript, Go, and .NET.
D: Oh, that's fewer than I thought, actually. Okay, I'm pretty sure that I can get this for you; I just need to figure out the timing and everything. Awesome.
A: Okay, I will be happy to take an action item on this topic. I would like to get a lot more detail, like a list of steps that we would need to get this in for GA. It sounds like we need an implementation for each of the languages that Morgan just gave; we'd need to pass it through a proto, get it documented, and make sure we're all comfortable with it.
A: And then there's going to be some work in the collector and such, in various places, to do conversions where we want them. I think we can all believe that those conversions are doable; of course, there's potentially some lost resolution, but that's the best we have, and it's okay. That will have to follow.
A: Okay. What I'm hearing from the group, well, at least from Tyler and Michael, is that maybe we should, you know, not back off, so I'll stay on this issue. Is that good enough? Let's see.
C: I think spending another few days, or the next week, not a dedicated whole week, but just getting a decent understanding of the timeline and making a decision based on an objective understanding of it, is probably a more proactive way to address this.
C: I think that if you're able to dig in and identify all the things that you think are really needed, and we can get some engineers from Datadog to commit to making this happen in that timeline, then it solves a lot of our problems, so it seems like the desired approach. But at the same time, if you go in and say, this is going to take all this stuff, and they come back saying that's going to take six months, then we should...
A: All right, I'm feeling encouraged by this. I was feeling a little bit... well, I'm just worried that we have these sort of intractable questions in front of us. Sometimes there's a perfect answer. Okay, we've discussed DDSketch, and the reason we're discussing DDSketch is that it actually addresses these questions about summaries and min/max/sum/count very nicely. It also addresses this next question about the default aggregator for ValueRecorder.
A: If we didn't have DDSketch, we're left with these two options, as far as I can see, and none of them are great. So this is, again, why we've been talking about DDSketch. I'm going to take this back, and at the least we're going to study this in more depth.
A: Okay, we've just talked around one big issue that sort of has several parts to it, and I feel like that is the really big topic that's left for us; it's partly API spec and probably SDK spec. The whole thing for me, what I see in the remaining work, like what do we actually need to get to GA for the OpenTelemetry project as a whole, is that there's a lot of spec writing that hasn't been done.
A: You can break this down into two parts, and one is the semantic conventions. There have been a number of threads involving semantic conventions for HTTP, and semantic conventions for system, host, and runtime metric names. There are the general guidelines on naming, and the restrictions on character sets and sizes of labels, and things like that. All of that stuff is only loosely, if at all, written into the spec.
A: At this point, there's been so much progress on tracing that some of those questions are now answered, and it's just a factoring question: there's already a spec in tracing that says labels should be no more than this, or attributes should be no more than this many bytes, and now we've got... we call them labels in metrics, but they're the same.
A: So maybe we can just copy that wholesale, or maybe we can factor that spec so that there's a section on labels and attributes and we can just refer to it from the metrics spec, but that's not really done yet. There's this new PR, again one I haven't quite looked at yet; it was filed yesterday, and it's time that we review it. I'm not going to review it in front of you all, but this is a step forward.
A: This is getting the system metrics names, and the conventions for those system and process metrics, into the spec. Please, everybody who's here and cares and wants to see this move forward, give that a review; I will be doing that myself tonight. Aaron, are you on the call? Would you like to give us a little introduction?
F: Yeah, sure. It's pretty much copy-pasted from the OTEP; the only changes are that the utilization metrics are now ValueObserver instead of UpDownCounter.
A: I see. Okay, cool. I think this is already...
A: Actually, okay, good. Mark this one down.
A: Here, and that's a current value, so that makes it a ValueObserver. Oh yeah. For the rest of you following along here: there was an issue, I'll find the link for it, that led to the PR that Aaron just shared, where there was some confusion and questions about what the API spec says for when you want to choose a ValueObserver versus an UpDownSumObserver and so on. There is some need to put some of this...
A: This confusion has some answers, or at least can be improved; there's some need to put that back in the API spec as well. I don't think we have a ticket on that. I should remind us to file one.
A: Okay, this is great, Aaron. Thank you; I'll be reviewing your PR tonight. Aside from that work, you said there'd be a follow-on for the host metrics, outside your scope there, involving what came out of OTEP 119. I feel that there are probably a few other semantic-convention-related issues, and I'm actually looking for someone to help own this. It seems like a pretty large space, and it's almost separate from the SDK spec and from finishing the spec on, you know, ValueRecorder and the sketches and stuff.
A: If anyone here wants to be the owner of metrics semantic conventions and related topics, please contact me. This is an area where I'm actually not sure what has and hasn't been written into the spec completely; there are a number of issues that need to be checked through. Those are the issues linked down here from last week.
A: Cool, we are making progress. Thank you, Aaron. The other item that is ongoing is one of my PRs; I've got that open here. There are some questions that have not been answered, and I think probably the biggest problem we're having here is that it takes a lot of finesse to write a good spec, and we're finding a place where I haven't put enough time in on this one.
A: I think that once I understand the problem, and John has been pointing out the confusion to me here, I'll figure out how to rewrite it. I think we are going to have to get a lot more detailed; it's going to become a longer spec. That's the only answer I have: to make it longer. So, other than this particular confusion over what we say is responsible for what in this particular thread, we're close on this, I think.
A: There's more work to do, but we're going to need more approvals as well. So if you are following this and you're a metrics approver, take a look and maybe help with some advice if you can, if you see anything obvious that would help eliminate confusion. It's hard to write specifications, is all I know, and it's hard because these SDKs are very complicated pieces of software, and they're very much influenced by the language they're written in.
A: John, you've been very helpful.

C: I guess, maybe in Go, I'm not sure, I haven't looked at your implementation that carefully, the implementation of the meter API kind of also implements the instrument APIs, whereas in Java, at least, those two APIs are separate: the meter, all it does is give you instruments. It does nothing else except give out instruments.
C
It
literally
has
no
other
function
except
being
a
factory
essentially
being
a
factory
for
instruments,
and
the
instruments
then,
are
really
what
we're
talking
about
from
in
the
java
sdk
api
sdk.
What
this
spec
is
talking
about
is
the
behavior
of
the
instruments,
whereas
in
go
it's
really
the
behavior
of
the
meter
implementation
because
they're
kind
of
intertwined,
I
think
that's
where
the
confusion
is
lying
here,
does
that
does
that
align
with
what
your
thoughts
are
as
well.
A: Right, it does. We've got the concept of a meter, which is a sort of API-level thing that, as you say, just creates these instruments. But within the instrument, there's the concept of an API-level instrument, and then, when you get to implementing this, there's a concept of an SDK instrument as well; I've actually introduced the term SDK instrument here. The way the Go code works, the accumulator gives out these SDK instruments on behalf of the meter.
A: What I'm hoping is that those differences are not material to the user, and the differences as far as the SDK is concerned might be minor and also not material. Like, right now the Go implementation doesn't do anything atomic to make sure RecordBatch is atomic.
A: It just calls through to each instrument and makes a measurement. So you really could say that the fact that there's an accumulator with state for multiple instruments is not relevant, I think. I still want to talk about there being an accumulator whose job is to, you know, quickly locate an instrument for a label set, find that aggregator, and let you apply an update to it. So I fully understand, John, and I just think there's... we need...
C: So I think there are a few things here to kind of unwind. There's a piece of functionality, which you called out, which is: for a given type and label set, and I think everything else in there, the name, like all of the metric identity or the instrument identity, so for a given instrument identity plus labels.
C: Something has to give the API end user an implementation of something at that point to use. So that's one hunk of functionality, and it needs to keep a registration of that, so you don't create a whole bunch of extra ones that you don't need. Probably; I mean, that could depend on the language. Maybe it would be super lightweight to just always create new ones.
C
That's
you
know,
that's
probably
shouldn't
be
something
we
need
to
necessarily
specify,
but
there's
this
thing
that
is
responsible
kind
of
for
being
the
instrument
factory,
an
instrument
registry
in
the
place
that,
where
the
all
those
instruments
like,
if
someone
asks
one,
here's,
here's
an
instrument
and
then
there's
the
the
recordings
of
data
and
then
the
accumulation
of
those
recordings
for
a
given
a
given
time
period.
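The factory-plus-registry role just described can be sketched in a few lines (all names here are hypothetical, not any SDK's actual API): the meter hands out instruments keyed by identity, so asking twice for the same identity returns the same object rather than creating duplicates.

```python
# Illustrative sketch of a meter as instrument factory + registry.
class Instrument:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind

class Meter:
    """Toy meter: nothing but a factory and a registry of instruments."""
    def __init__(self):
        self._registry = {}

    def get_instrument(self, name, kind):
        key = (name, kind)  # the instrument identity
        if key not in self._registry:
            self._registry[key] = Instrument(name, kind)
        return self._registry[key]

m = Meter()
a = m.get_instrument("http.requests", "counter")
b = m.get_instrument("http.requests", "counter")
print(a is b)  # True: same identity, same instrument
```

Whether registration lives in one shared object or one object per instrument is the Go-versus-Java difference discussed next; the registry behavior itself is the same either way.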
A: Yeah, you know, I think that's where it will end up; that's how I see it now. Okay. And the fact that the Go SDK uses one object to do that for all of the instruments, and the Java implementation uses one per instrument, is, I think, an implementation detail. There's a level at which that implementation detail actually translates into real differences.
A: But I don't think the spec gets that far, basically. So, like, if we ever had to talk about an SDK with multiple collection intervals, which I was trying to avoid having happen all summer while this was being worked on, that's where this would become a distinction that matters. So, if there's some spec in the SDK that says you must support multiple collection intervals...
C: I wouldn't... I would put that out of your brain; assume it's not a possibility. Maybe in the future, and we'll cross that bridge when we have burnt it down with forest fires. So I think that distinction, the meter, as you wrote down here, as an instrument factory or as an instrument registry, because the instrument registry is really about instrument identity: you ask for something with an identity...
C: You get back that instrument, and then there's the accumulator functionality, which is the thing that accumulates recordings. And then there'll be the next thing, which is the aggregation of those recordings into whatever aggregate you are interested in; that will be the next piece of that pipeline, and then there are exporters.
A: Oh, you had meter as an instrument registry. Yeah, I think of instrument as an API-level thing, but there's also a kind of SDK side of that, so that's where we end up maybe using the same term twice. And then, did you say the instrument contains an accumulator? What's the relationship between instrument and accumulator?
C: I would think that the next thing I would say is accumulator. I mean, instruments are obviously very important, but you're absolutely right, they're an API construct, so I would actually talk about the accumulator. I think the accumulator is the next step; it's the thing that accumulates measurements. So now you keep a map of instrument plus label set, and that's the part where I disagree, because that's, I think, what the meter does.
C: Okay, okay, so there's an instrument-to-aggregator map. Okay, I see the difference. There's the map of identity to instrument implementation, or identity to accumulator, or something like that, and then there's the label-set-to... okay, I understand where your difference is here: it goes instrument identity to accumulator, maybe, yeah, something like that.
C: There's a piece here, though, that is missing, and I don't know what it is: the part that actually keeps track of time and tells everything that it's time to gather up all those... it tells the accumulators that it's time to push that data into the aggregator, or tells the aggregators to pull that data out of the accumulators, whatever; I don't really care which direction it happens. But there's that piece, which I think you called the controller in Go land. Is that the controller?
A: But basically, this way the accumulator just has to know how to collect itself, and the controller will tell it when to do so. I'm using this term "accumulation" to mean a snapshot of an aggregator, and an accumulation leaves the accumulator going towards the processor. The processor gets these accumulations, which don't have timestamps yet, and the reason is that the processor might decide to do cumulative for the exporter, in which case it knows the timestamp is way back at the beginning. So the processor is where timestamps enter the data pipeline, and it's the controller who's responsible for saying we're going to start now, then telling all the accumulators to collect, and then saying: okay, processor, now we're finished collecting. So the processor knows the beginning and the end time, and then it takes all those accumulations and turns them into aggregations, which have those timestamps and know whether they're cumulative or delta. That's currently the way this works in the Go pipeline.
A: So the only definition I'm trying to draw is that the accumulator outputs, I'm going to say, a snapshot of an aggregator, plus it has a resource, a label set, and an instrument; there's no time at this point. The processor is going to end up knowing the times. So what I've called an accumulation leaves the accumulator but doesn't have timestamps, and then the processor is responsible for taking the accumulation, plus whatever internal state it has, and outputting an aggregation, which is the instrument, the resource, the label set, and the aggregation result, which has a different API depending on what type of aggregator it was, plus the two timestamps, beginning and end. So I'm starting to realize that I've used "accumulation" and "aggregation" to mean that accumulation is the first stage and aggregation is the second stage, but maybe it would be best if we stopped using the word accumulation and created a new word, which is the snapshot of an aggregator that has no time yet, because the processor's job is to attach time.
A
Yeah, it's one aggregator. They called update a number of times, and then the snapshot was taken; in Go it's called SynchronizedMove, because it's got to be atomic. It comes in there, takes the current state, copies it, and then resets the current state at the same time, right? And what I'm calling an accumulation is that snapshot copy of an aggregator, plus the stuff that's associated: resource, instrument, and label set. And that's perhaps the name that's bad here.
C
I'm still, while you're talking, I'm trying... you know, maybe it would be really helpful for you and me, offline, to sit and walk through the Java implementation. And since you know the Go implementation, maybe the two of us should sit down and try to reconcile, see where, like, I know there's some differences here, but maybe we can distill the common ground. And we can do it without taking up everybody's time in this meeting.
A
Okay, before next Thursday. I like this, and then maybe after that I can finish this. I know it feels like the SDK spec is not moving very fast, but I think this is a logjam that has to get broken first, and I think the follow-ons are going to be easier.
A
Based on what I know. Cool, that's great. Okay, apologies to everyone who had to listen to that rat hole there, but I think it's probably really important to get this done.
A
So we've talked through all the things in the agenda, except Tyler added something here, and I'm glad you did, Tyler: the Unified Code for Units of Measure issue. Clicking through, it's being neglected.
C
Yeah, so just kind of a recap: in the API specification we say that metrics need to be associated with a unit, and this is just to kind of clarify what that unit is. Currently, in the OTLP it's specified.
C
The unit should be of a form conforming with the UCUM, which is a standard for representing units. But that's the only place where there's actually some sort of specification around this, and so this issue is open to kind of resolve that, by asking a lot of questions that needed to get answered. And Tigran asked the question, as a pragmatic person: you know, can we do this after the GA release?
C
And I think that we've distilled it down to the fact that, like, it's possible to talk about the specifics of how SDKs, or maybe even the API, should enforce or delegate enforcement of this standard. But it needs to be decided what standard we want to actually set, because if we decide on a standard that's not compatible with the UCUM, then the proto needs to actually change its support model, and that wouldn't work. So we need to do that.
C
I think, before the GA specification. Reading through the UCUM (I'm not that familiar with it), it seems like a great option, and it was a good choice, I think, initially by the proto library to actually make that decision. It defines its transport in 7-bit US-ASCII, which is a pretty simple encoding.
C
It encapsulates a lot of other standards bodies for units, mostly IEEE and some ISO standards. With SI it is compatible; it actually has different base units, which the scientists would be really disappointed about, that we are not supporting SI directly. But I think that the compatibility is good enough. And so it seems realizable, and I think that we should be able to achieve all of the goals laid out at the top of this issue.
C
And so I was kind of recommending that we should just make a decision that OpenTelemetry is going to support this as the main way to standardize on the units that we're going to be transporting with OTLP, and maybe not put a lot of detail into how languages are going to make this enforcement, or the format that they should provide for usability; leave that up to the languages and maybe address those after GA.
C
If they become issues. That was my interpretation, given the fact that the standard was the thing that actually needed to get decided at this point. So I was kind of just surfacing it at this meeting, because it seems relevant and I'd love to get some more opinions. John has already kind of asked the question. I've put my name in the hat.
C
If we do make a decision on the standards body, I would be happy to add a few sentences to the existing API specification saying that this is the standards body that the units eventually need to be in the form of when they're transported. So at some point, at the SDK or the API level, it needs to have some sort of conversion, or it needs to just require this from the start.
A
C
None that I can think of. The totals and the total count stuff, yeah.
C
That's a little bit arbitrary, and that's kind of an interesting point, because the totals are going to be on some sort of unitless basis. And because of that, the UCUM has a unitless basis that is similar to what Go does: it's just a default unit of one, the numeric number one. But you can always annotate those, and for annotations you can put in parentheses... I'm sorry, curly braces. And those curly braces could become more of a standardization across OpenTelemetry, saying that, like, in curly braces this is a total, and we're just going to call that more of a unified thing across OpenTelemetry.
C
But I don't know how rigorous that needs to be at this stage in the game, I guess, is kind of the question, conversion-wise. Otherwise, I don't think there's any problem; there are not going to be too many units coming in from other standard spaces that you aren't going to be able to convert.
A
Sorry, go ahead. My first, like, gut reaction: with "count", "total", and "sum", it sounds like they're including aggregation temporality in a unit.
C
No, I need to look at what they mean; that's all me. These are things... sorry, yeah, this is not something that is coming from the UCUM. The UCUM gives you the functionality to add these sorts of things, if you wanted to, as annotations. It does talk about a total, as if you did want to talk about an aggregated total, in a parenthetical example, essentially. But it's not a part of the standard that you would always include it as, like, you know, a curly-brace total.
C
That is something I was bringing up as extensibility of the standard, and I think it's a benefit of it, but it's not a part of the standard that you would include things like this and other temporality elements in the unit. So I'll reiterate the concern that I asked about, because I think it's still relevant.
C
If we're saying this is the standard unit, and one thing in the pipeline decides it's going to enforce that and another part of the pipeline doesn't, then what will happen? Like, we're not controlling manual instrumentation; if a user creates an instrument and attaches arbitrary units onto it, because the Java APIs don't restrict anything, and then the collector has decided that they're going to enforce it, what does the collector do if it gets something that isn't part of the standard? That behavior seems like it needs to be defined.
C
So the point of opening this issue was to address that exact concern. So yes, it is important, and the fact of the matter is that somebody's already made a decision, and that's the proto.
C
The proto's already made a decision that units are supposed to be transported conforming to the UCUM, and if the Java library sends it things that are not in that format, then yes, it will barf, and it will not work, and it will not convert these appropriately based on the proto. Or maybe it will, but at the very least there's no guarantee. So that's what this issue is open to resolve.
C
Yeah, and so yes, I think that eventually it needs to get further up the pipeline. The problem is, if you make this into a semantic convention... it is a semantic convention only insofar as it's a requirement, because it's a compatibility requirement. It's something that, normatively, the specification says we need to require people to send us valid data on, because otherwise parts of the telemetry pipeline are going to break if it's not there. It's the same thing as if, you know, somebody sends you some...
C
...you know, a unit in the form of "degree C", with the degree sign being a Unicode code point, and other parts of the telemetry pipeline have no way to understand how that interpretation is supposed to be resolved.
C
Then it becomes: is it unitless, or is it just an error at that point? And that could be really problematic, especially if a user is expecting it to show up in their vendor, or their open source visualization toolset, under a particular unit form, and that doesn't happen. So yeah, this issue was to try to address that holistically. The question was then asked: can we slice that apart to solve this in time for GA? And so that was where I was coming at it from, just saying:
C
Well, if we define the standards body, then that's something that we can build from in the future, and then it's left up to the SIGs themselves to make that assumption and to understand that they need to provide a compatibility layer on top of the OTLP. And that's where this whole conversation is coming from.
C
It just says "follows the format", yeah. Again, that's kind of why this is an issue in the specification: to help clarify what that actually is going to mean.
C
Because, yeah, I mean, a comment in a format... I don't know. There are people on this call that are way more versed in how the collector actually handles OTLP, so I would imagine they would know whether, like, nothing happens or something happens.
C
I don't know the answer to that, but I do know that having a comment in there, and having different opinions of it, that's, like, the whole point of a specification. So the specification is going to normatively say what is going to happen. There's a compatibility issue here where there isn't one.
A
What I would expect is that at some point you're going to use some metric data, and you're going to try to, like, add points together, and if you have points that are in different units, they should be convertible. That's the point where you're going to find an error, if they're not. So I don't think that the collector actually cares what units you apply, and I don't think that the OTLP exporter cares.
A
What units you apply is, like, a string that's going to pass through the whole pipeline. At some point you're going to try to join or, like, compute with those metrics, and if one process exported seconds and one process exported microseconds, then the benefit of having the standard is that you can convert them. If you find that the units are not known and can't be converted, then you have an error, which is just, like, going to happen. You've got errors all the time, so I don't see that that's too unusual.
C
Well, or you can have invalid data being presented to the user, right? Like, you could just have the collector saying: well, I had it in a comment, and this is supposed to be in, you know, this format, so I'm just going to assume it is. And then they start displaying it as, you know, microseconds are now seconds, and all of a sudden you're going, like: why is this 10 to the 6 higher than what I expected? Or something like that.
A
I also feel like it's going to be fairly natural, for a particular metric (let's say you have a timing metric, and you intend to generate that metric across processes in different runtimes), it's going to be natural to output, you know, seconds in one place and microseconds in another place. And so probably, eventually, someday, the collector is going to want to be able to merge that data together.
A
I'm guessing. Or, if not the collector, some downstream system. And that's when this will matter.
C
The prefix issue is also really important. Like, there's the whole base unit, right? Like, if you're going to, you know, represent barometric pressure in millimeters of mercury or pascals, right, like, those are different...
C
...units, right. But then there's the whole prefix issue, of, like, whether you're going to convey time in a base unit of seconds or with a prefix, right? And I think that's extremely important, because I know that, like, New Relic is very particular about time showing up in a particular prefixed value, but other places may not be. And I know that maybe there are different language implementations, or different instrumentation, that may send the units of time, exactly like you're saying, Josh, in terms of seconds or nanoseconds or epochs.
C
You know, like, I don't know; there are definitely different units there, and I think there are prefixed units there that are really important. Same with a bitrate or throughput. Yeah, I mean, I think there's definitely a lot of computational things. And the other side is, like, in your prefixes of units:
C
It's also really important to be really clear whether your prefixes are SI, or whether they're information-tech, that is, base-two units, or some other way; to, like, really make sure that clarification is done. Because, like, you have that same problem: if I'm sending up megabytes to you, and you're just like, well, okay, cool, I know megabytes is going to be this, and it's actually mebibytes.
A
I'm finding this an area that I haven't studied enough to have a strong opinion. Tyler, do you feel that you could make the proposal that you think is best?
C
I definitely think in the Go SIG, like, we're well positioned to try to implement a way that would help users implement correct units and give them valid data after the fact. But I don't want to, like, spend a lot of time in the SIG (I'm sorry, in the specification) writing something up that is going to be restrictive, or is not going to be worthwhile, or is going to get mired in weeks of internal communication.
C
On a PR comment. I just, yeah, I wanted to try to resolve this, because the question was, like: can we do this after the GA release? And the only big issue is, if we do this after the GA release and everyone's just like, "what, the UCUM? That's stupid, why would we want to use that standard? Let's use something completely different," then the proto would need to get changed, and that was where the conflict is coming from.
A
I'm uncomfortable with being loose on this, and I don't see anyone objecting to using UCUM. I guess, I don't know.
C
Yeah, that's actually a good point. Like, if I'm hearing John correctly, you're just worried about, like, that enforcement, or the implementation of that standard, not necessarily the standard itself? Well, I mean, the standard itself seems way more far-reaching than computer telemetry needs to implement; that's my main concern. Like, there's crazy crap in here, like homeopathy, yeah, homeopathic densities and stuff that's completely irrelevant for software telemetry. Well, I don't know about you, but I have really big-picture plans for OpenTelemetry; I think it's just going to be worldwide.
C
So I agree, but see, this is why I wanted to answer that question. Because, like, if we're like, no, we don't need that, like, let's do a subset, or let's make our own, or let's use this other standard, that's fine. But if all of a sudden that's incompatible with the UCUM, and then the proto needs to change, that's not fine after the GA, like, or even before the GA. We need to do that sooner rather than later.
C
I guess, okay. So, John, am I understanding you correctly (because we're well over time at this point) that maybe you'll take a look at this and make a recommendation? Or is this just a "no, no sir, I am not going to do that"? I just, look, I tried to read this spec, the actual UCUM spec, and it's, like, pages and pages and pages. And so, as an SDK implementer, I'm like, what do I need to do with this? Like, what, like, can we just see...
E
I guess my question is: are we expecting SDKs to automatically have, like, conversion rules, or the collector to have conversion rules? If it has received some things in seconds and some in microseconds, does it have logic to go rationalize those and put out a new combined metric from those? Because if we're not, then I wonder why we don't just make it some string, and, you know, developers enter their own string names and your mileage may vary. Morgan, I think your point...
A
...is that we only ever need to worry about converting prefixes, in some sense. It's only when you start to join multiple metrics that have different units that you have to really care about what the units really mean. Like, I'm just multiplying mass times velocity, and I get some different units there. Yes, like that.
A
Yeah, and I don't think that matters very much for OpenTelemetry. And so, in some sense, the unit is a string and I don't care, but the prefix string, that matters to me, because I might have to merge data where I assume...
C
Well, I think, yeah, these are all kind of ideas I've had, and I think that it seems totally fine; that seems realizable. Because if you do that, and you make sure that the base units that you're choosing here, like seconds, bytes, and others (I'm sorry, I'm on the spot), if you make sure those are compatible with the UCUM, then that means that the proto is going to be supported, and that's fine. Like, that seems like a totally valid solution, and then it becomes uniform.
C
So you have consistency across what to expect, and you have correctness, and users are getting the data they actually wanted to see. But the problem arises that, like, that needs to get defined. Because if we don't make a decision, and then you come up with this idea that, like, well, actually we're going to use, you know, base-two prefixes, and we're going to send it, you know, to the proto without actually making that distinction, or in a format that isn't compatible with the UCUM...
C
...then, on the other side, they're going to have a compatibility issue, I guess. So I'm fine with doing that and defining, like: here are the five units that, you know, OpenTelemetry metrics should support; here are the prefix values that they should support; and they should, you know, eventually be able to convert those down to a format that the proto is also able to support. That's just a larger scope than I was trying to bite off for the pre-GA.
E
Yeah, I was almost thinking pre-GA we would just have it as a string and tell people: you'd better make these match, and if they don't match, something's going to throw an error and you're going to have to fix it. Like, to be honest, as a developer, that would basically be my expectation: even if you threw in nanoseconds and seconds in different instances of the same metric, we would just barf on it and say, I don't know, they're not the same.
C
Oh, at the moment, the unit is part of the metric identity, so they wouldn't get aggregated together; the instrument... so they wouldn't get aggregated. (I understand that, okay.) Yeah, okay, they would end up as separate recordings, separate aggregations, that end up in the back end.
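That identity behavior can be sketched as keying the aggregation map on the (name, unit) pair, so a seconds stream and a nanoseconds stream never merge. This is a hypothetical structure for illustration, not the SDK's actual map:

```go
package main

import "fmt"

// identity is the key under which recordings aggregate: the unit is
// part of it, so differing units yield separate time series.
type identity struct {
	Name string
	Unit string
}

func main() {
	sums := map[identity]float64{}
	record := func(name, unit string, v float64) {
		sums[identity{name, unit}] += v
	}

	record("http.duration", "s", 1)
	record("http.duration", "ns", 5e8) // same name, different unit
	record("http.duration", "s", 2)

	// Two separate aggregations rather than one merged series.
	fmt.Println(len(sums))
}
```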
A
Yeah, I don't know, John. This is a sort of interesting tangent, and we're over time, so we should... oh gosh, yeah, I'm thinking the same thing that Josh is thinking. Yeah, never mind; I think we're over time and we should end this. Okay, at least I understand the issue. Maybe I'll think about it or something like that. I'm not going to write a new action item for myself, but I'd like it if you could probably take the lead on this, Tyler.
C
Whatever you put together, Tyler, make sure that it's clear to both API and SDK implementers what their responsibilities are around this. Because if it just says, "here's the standard," but there's nothing to do, then why are we putting it in the spec, right? So, John, do you understand? Like, that's the whole point; like, I was trying to break this apart. What you're saying is, like, solve the whole PR? No.
A
I feel like SDKs can do nothing, but I would like it if we could parse this unit and know the prefix, and separate the prefix from the unit, because there are some exporters that are going to want the prefix, and they're going to know what you're expecting. Time is the only one that we really care about right now, I think. I don't care about whether one person is writing binary prefixes and one's doing decimal, like mega, whatever; I don't care about powers of 2 versus powers of 10.
A
Okay, at least we understand this issue. We're over time. Yeah, Dennis, it's been lovely. Thank you all. Let's keep working. Thanks.