From YouTube: 2021-03-02 meeting
D
I do want to take just a quick second to introduce Georg Perklebower, who's on the call here. He's a new employee at Dynatrace. I don't know if anyone has seen him in any of the calls — I don't know what he has and has not joined — but he was hired specifically for his data science background and will be representing Dynatrace's interests in metrics topics, primarily. So, great — hi.
B
I've been really happy to have your colleague — Bertolt... Max Urtle... somebody Urtil, sorry, I remember people by their handles — helping us with histograms. That's been extremely useful, so glad to have you here.
D
Yeah, I think Otmar is who you're thinking about.
B
Yeah, I mean, to me he's famous for inventing, or being co-author of, the t-digest paper, which I still think was a very influential piece of work in this space, and I've found his feedback to be extremely constructive. So thank you for coming as well to talk about this stuff we have.
B
Yeah, so we've reached 9:05 and I thought I'd at least kick us off here, and I'll share my screen so you can see. Where I see this group right now is that we're trying to pin down our protocol and get some details finished as fast as possible, because it feels like we're blocking a bunch of other work if we debate this part of the protocol for much longer.
B
In addition to that, we're sort of trying to write a document that explains how to use the data, what your obligations are, and what correct and incorrect look like. My goal in writing that document is that someone could then begin working on a collector implementation that will consolidate metrics streams and let us reduce cardinality on the write path. The typical example I see here is that currently customers are using the Datadog agent.
B
So that's the purpose of the document, and we can discuss the document — I think we should not discuss it in real time here. Josh Suereth gave me a suggestion last week that we should begin chunking this up into smaller pieces that we can review as GitHub issues and PRs, and I agree with that. I think we're at the point now where this draft document does contain all the biggest ideas that we need to get in there, and I'll just show it to you before we move on to those issues.
B
It's just how we're going to define the data model in terms of events, and one of the things that I think is most key here, before we move on, is that in some of the traditional systems — I'm thinking of Prometheus — there's been a rule essentially saying that a metric must have the same dimensions, and if you have different dimensions on a metric time series...
B
...the question has been: what does it mean when I start adding optional labels in the middle of an ongoing stream of metrics? In a Prometheus system the answer is that things go pretty badly. What this document is trying to specify, at the end here, is that we have a model for events. If you say something like "count", that event becomes a change in some counter somewhere, and we can describe what mixed label sets — mixed attribute sets — mean in relation to the event model.
B
So when I say I want a metric for a particular name, restricted to a particular attribute, what I'm trying to say is: select all the events that match the attribute, and then compute me the result. I'm not trying to say select all the time series, or all the OTLP points — I'm trying to say select all the events, and from there we can get our semantics pretty clear all the way down, and it tells you what you mean when you mix label dimensions.
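A rough sketch of that selection rule, with made-up Go types rather than anything from the OpenTelemetry API — the point is only that a query for a given attribute set is defined over events, so events carrying extra attributes still match:

```go
package main

import "fmt"

// Event is a hypothetical stand-in for a single metric event: a value plus
// the full attribute set it was recorded with.
type Event struct {
	Name       string
	Value      float64
	Attributes map[string]string
}

// SumWhere aggregates every event for the given metric name whose attributes
// contain all key/value pairs in the selector, ignoring any extra attributes
// an individual event may carry — which is what makes mixed label sets legal.
func SumWhere(events []Event, name string, selector map[string]string) (sum float64, count int) {
	for _, e := range events {
		if e.Name != name {
			continue
		}
		matched := true
		for k, v := range selector {
			if e.Attributes[k] != v {
				matched = false
				break
			}
		}
		if matched {
			sum += e.Value
			count++
		}
	}
	return sum, count
}

func main() {
	events := []Event{
		{"http.duration", 10, map[string]string{"method": "GET"}},
		{"http.duration", 20, map[string]string{"method": "GET", "status": "500"}}, // extra label
		{"http.duration", 5, map[string]string{"method": "POST"}},
	}
	sum, n := SumWhere(events, "http.duration", map[string]string{"method": "GET"})
	fmt.Printf("sum=%v count=%d\n", sum, n) // sum=30 count=2: both GET events match
}
```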
B
So I'd like people to look at this, and maybe we can discuss it later, but just to keep this organized: I think our biggest problem today is wanting to get the histogram type settled, because all of our debates and lingering topics for the protocol are about the histogram, and I think we should start on min and max because it's been discussed the least. I don't know that I should keep talking, if someone else here wants to lead off on this topic. Giannis?
F
I filed it because we needed a tracking bug, so — you have a lot of context, so feel free to go. Yeah.
B
Okay, I don't need to pick on you, then. I think this is an issue where everyone agrees we want this information, and it's just hard to fit into the protocol because of the specifics of the protocol as we have it — in particular, this concept of temporality.
B
When you start to talk about min and max, temporality falls apart. So you can see this discussion, and then Otmar, our new hero from Dynatrace, filling in: look, we've got a list of pros and cons here. It's truly a list of pros and cons, and I think we've all, by the end of this, come to agree that the pros outweigh the cons.
B
I came up with three ways we could put min and max into a histogram point, and they all have pros and cons as well. I think the most obvious idea is that you put fields into the histogram point called min and max, and then you have to define what that means, and the problem is that most of the histograms we record are cumulative, meaning they count from the beginning of time.
B
So when you see a min or a max in a histogram that calls itself cumulative, how would you know that the min and the max are not truly cumulative — that they are just sort of recent data? There's actually nowhere in the cumulative histogram report to say "my report here reflects everything after a point in time"; there's nowhere that you say when you began collecting the current report. So there's no way to know when you actually saw the min and the max.
B
That's the reason why it's not obviously a good thing to put these into fields of the histogram: min and max don't seem to have temporality. One of the alternative suggestions, which I discussed with somebody here at Lightstep, was that we could actually create empty buckets — in the current model we have explicit boundaries, so you could create a bucket that has zero width and put it anywhere.
B
I don't really like that one. The last one — I think the one that Jana suggested when she originally filed this issue — is that we already have an exemplar field inside of the histogram, and I think that's probably the most correct way to model the min/max data, which is not truly part of the probability distribution function. So this is the idea that we could just tack on additional exemplars that contain the min and the max in any given histogram report, and by—
F
By the way, we were thinking that it's a hack, you know.
F
It is a hard problem, but one of the things that you mentioned — isn't that true for the histogram buckets as well? You don't know, for a cumulative histogram, when it actually started or what population is represented there. I think what we need to do is to provide the min and max from the same population, regardless of... The other problem is a larger problem, I don't know.
B
Agreed, because then I lose the ability to detect local maximums and local minimums during that recent period of time — if the maximum at any point in the last hour was greater or less than something, I'm just losing that information.
B
I was suggesting that if we represent min and max as exemplars, we're basically preserving the event that happened to generate the min or the max — there's an individual point which represents the min and the max. So you could just include exemplars to represent min and max, and then the back end is able to pull out all those mins and maxes, put them on a timeline, and plot a min line and a max line, and there's only a slight amount of missing information, which is to say:
B
If I had a minute, and I reported the min and the max during my minute, I don't truly know what the min and the max were at the very beginning of my minute, if I were to subdivide my minute any further. And that's the same problem as having that hour and not being able to tell the min or the max earlier in the hour. So I guess what I'm trying to say is that if we leave these as exemplars, we preserve everything we want. I'm not sure that it doesn't open up new problems.
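A minimal sketch of the exemplar-carrying idea, using simplified stand-in types rather than the actual OTLP messages — the histogram point keeps the exemplar observations for its window, and a back end recovers the window's min and max from them:

```go
package main

import (
	"fmt"
	"time"
)

// Exemplar is a simplified stand-in: one recorded observation with its time.
type Exemplar struct {
	Value float64
	Time  time.Time
}

// HistogramPoint carries, by convention, exemplars for the window's smallest
// and largest observations in addition to the usual aggregate fields.
type HistogramPoint struct {
	Count     uint64
	Sum       float64
	Exemplars []Exemplar
}

// minMaxFromExemplars recovers the window's min and max from the attached exemplars.
func minMaxFromExemplars(p HistogramPoint) (min, max float64, ok bool) {
	if len(p.Exemplars) == 0 {
		return 0, 0, false
	}
	min, max = p.Exemplars[0].Value, p.Exemplars[0].Value
	for _, e := range p.Exemplars[1:] {
		if e.Value < min {
			min = e.Value
		}
		if e.Value > max {
			max = e.Value
		}
	}
	return min, max, true
}

func main() {
	now := time.Now()
	p := HistogramPoint{
		Count: 3,
		Sum:   34.5,
		Exemplars: []Exemplar{
			{Value: 1.5, Time: now}, // the window's minimum observation
			{Value: 30, Time: now},  // the window's maximum observation
		},
	}
	min, max, _ := minMaxFromExemplars(p)
	fmt.Println(min, max) // 1.5 30
}
```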
B
Well, yeah, I'm trying to pick apart the fact that if we report a histogram covering the last hour, I have no way to get anything about the last minute of min and max, which I think is what people want. And you're right, though — I think what you're calling for is to keep the data type very, very targeted to the aggregation and the temporality that we have, and so the min and max just do not belong to histograms. I think that's what you're saying.
B
So then I stepped back, and I considered what you said as well. Prometheus systems have this notion of a metric family, and up until now OpenTelemetry does not have that notion, because in every case right now, so far, we have a single data point type that can project into all the time series that you might get if you were to, say, map them back into one time series per effective point. So you get a time series for min and you get a time series for max.
B
You get one for sum, one for count, one for... You can also do this for Prometheus summaries — you can get one line for every quantile, or something like that — but we have no way to represent that family in OpenTelemetry, and for the most part we don't need it until you get to this min/max topic.
B
So it's like min and max are the first example where we want to have a family, and I'm suggesting that we can stuff them in exemplars, so we don't need a family.
G
You mentioned the use case, right? The defined use case here in the bug is: I want to know if I've created bad quantiles, or if my quantiles aren't capturing the actual dynamicness of the meter that I'm sampling. That's the idea behind having min/max — that's what the user really wants. So I want to challenge the assumption that rolling up min and max is bad for the user in that sense, because you can actually still solve that use case with an hourly roll-up of data.
G
If that's the actual target use case here, I think what you're proposing with min and max might actually end up being a better feature for users, with getting this min/max at a nuanced level — but you're actually creating two new time series from those exemplars, effectively, right? And is that really what we need to do? The reason I'm using a histogram to begin with is that I want a notion of the spread —
G
— you know, the distribution of points — and the only reason we're asking for min/max here is that I want to know whether I've created an appropriate representation of that distribution to get the visibility I need, or whether my distribution is somehow skewed one way or another. So I don't think we need all of the setup around min/max to solve that use case, but that's what I wanted to throw out.
B
Yeah, I question this as well — I don't know the answer. I feel like whenever I've worked with histograms and there's some existing system that wants to see the min/max, you end up having to contort your thinking around: well, I know the top bucket — is it the beginning of that bucket or is it the end of that bucket, and what if there's an infinity on that bucket?
B
I am now just kind of applying heuristics, and it feels like maybe not such a high-value piece of information — but it's also not clearly of any value if you're applying all those heuristics to kind of guess at the min and max just by looking at the extreme buckets of the histogram.
B
I think what I'm trying to say is: I don't know, Josh, whether the only use case is to validate the range of a histogram, since there are other movements here to get sort of automatic-resolution histograms.
F
This can sit for a while — maybe let's think about this problem. I mean, I'm actually not sure how we can think about the problem; it's more about figuring out what use cases we want to address with this, right?
B
The reason why I asked about families is that the question then is: what do you do when you're mapping back into a time series representation — how do you project that into Prometheus? And I think you'd want to follow the conventions of Prometheus summaries.
B
So you end up appending _max and _min and _sum and so on to the individual time series that come out of the histogram, and I'm wondering if there's room here to just sort of spec out what a family looks like, so that we can do that kind of thing — or whether min and max are kind of special cases; they're the only ones we've been able to identify that don't fit with our current model. So I don't think I want to go...
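A sketch of that family projection, following the Prometheus summary suffix convention — the types and helper names below are invented for illustration, not taken from any exporter:

```go
package main

import "fmt"

// HistogramPoint is a hypothetical flattened point that already knows its
// window's min and max (however they were carried on the wire).
type HistogramPoint struct {
	Name          string
	Sum, Min, Max float64
	Count         uint64
}

// toTimeSeries fans one histogram point out into Prometheus-summary-style
// scalar series by appending suffixes to the metric name.
func toTimeSeries(p HistogramPoint) map[string]float64 {
	return map[string]float64{
		p.Name + "_sum":   p.Sum,
		p.Name + "_count": float64(p.Count),
		p.Name + "_min":   p.Min,
		p.Name + "_max":   p.Max,
	}
}

func main() {
	p := HistogramPoint{Name: "http_request_seconds", Sum: 12.5, Count: 40, Min: 0.01, Max: 3.2}
	fmt.Println(toTimeSeries(p))
}
```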
F
That sounds great. I think we also need this in OTLP, I mean just on the wire, and I think using exemplars is also a good solution for now, in the meantime.
B
Okay. So I think all of us are going to go back to our thought experiment of why we care about min/max, and see whether not having it, or putting it in exemplars, or somehow putting it into a separate series and creating a family idea — all of those are sort of reasonable approaches — maybe think about it.
B
Now we have two of these issues open. I want us to move to another one — this is also histograms — and I think this one has not gotten much attention in the last week, although I wrote something up. I wanted to share this idea briefly.
B
This was some work to try and incorporate the concept of a gauge histogram from Prometheus, plus the concept of sampled metric events, as in StatsD. I've also been talking with a team at Amazon who's working on the EMF format — the Embedded Metric Format — which is another sort of StatsD equivalent.
B
So we have several examples now where having a histogram with floating point counts would actually work out for us, and I want people to consider this. I don't think we should try and discuss this now unless enough people have read this post here, but I would like to direct your attention to it. It's making two proposals. One is that we can make room for new histogram types in the future via a oneof, and we've discussed several ways that that oneof might come out.
B
This is the one that we have today, and I think this one's the one that might work for us: we would take what we have today for the histogram and call that the explicit-bounds count buckets — where your boundaries are explicit floating points and your bucket counts are integer counts — and I'm proposing that we could create a new histogram bucketing strategy that is almost the same, but has floating point counts.
B
That can definitely work with StatsD and with this EMF format — that is, essentially, sampled metric events — but it is also a way that could conceptually be used to store the Prometheus summary, and then we could remove the double summary type that we have, which is kind of a verbatim copy of a Prometheus struct — just trying to remove a data point type here while adding support for StatsD. But I think when Bogdan filed this issue originally, it was not about which particular bucket strategies we need.
B
It was also about how we pave the way to have multiple bucket strategies. And so, if we can get to an agreement on that — the current strategy we have looks like this, and we're going to put that into a oneof — then we can move forward and continue to design new histogram types one at a time.
B
But if you all agree to this, what we're going to end up with is two explicit variations, and we'll probably end up with the same for the exponential histogram that we come up with, so that it is reasonable to talk about floating point histograms and integer histograms — although I'd like some others to agree with me on that.
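A sketch of the proposed oneof shape, modelling the protobuf oneof with a small Go interface — none of these type names are the real OTLP definitions, and the floating-point variant is the hypothetical addition being discussed:

```go
package main

import "fmt"

// Buckets stands in for the proto oneof: each bucketing strategy is one variant.
type Buckets interface{ isBuckets() }

// ExplicitIntBuckets is roughly the strategy that exists today: explicit
// float boundaries with integer counts.
type ExplicitIntBuckets struct {
	Bounds []float64
	Counts []uint64
}

// ExplicitFloatBuckets is the proposed variant with floating point counts,
// e.g. for StatsD-style sampled events where one event stands for 1/rate events.
type ExplicitFloatBuckets struct {
	Bounds []float64
	Counts []float64
}

func (ExplicitIntBuckets) isBuckets()   {}
func (ExplicitFloatBuckets) isBuckets() {}

type HistogramPoint struct {
	Sum     float64
	Buckets Buckets // exactly one variant is set
}

// totalCount works over whichever variant the point carries.
func totalCount(p HistogramPoint) float64 {
	switch b := p.Buckets.(type) {
	case ExplicitIntBuckets:
		var n uint64
		for _, c := range b.Counts {
			n += c
		}
		return float64(n)
	case ExplicitFloatBuckets:
		var n float64
		for _, c := range b.Counts {
			n += c
		}
		return n
	default:
		return 0
	}
}

func main() {
	p := HistogramPoint{
		Sum:     9.5,
		Buckets: ExplicitFloatBuckets{Bounds: []float64{1, 10}, Counts: []float64{0.5, 2.5, 0}},
	}
	fmt.Println(totalCount(p)) // 3
}
```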
B
We can allow them to always be disjoint — that's my opinion; I would like to hear others' thoughts. I agree, because conversion is lossy.
B
We can give a lossy conversion approach, but I don't think that we have to mandate that it be implemented, and I think we can say that if you're receiving this data, you may get it in this format or that format, and it's sort of up to you whether you want to coerce things into a common representation that can store every histogram, or whether you want to just deal with the variation, or whether you want to do the lossy conversion yourself.
F
Yeah — right now we are dropping, like, all the explicit count buckets, the explicit buckets. So is that something great, or — I mean, I don't know what it should look like. What is it going to be, in the end, that the average user goes through? Are they going to say, "hey, I need exponential buckets for this particular metric"?
B
Well, I think our goal is that the user doesn't say anything — "I just want a histogram" — and then it works. One reason why we might have four different bucketing strategies in the protocol is that we have these legacy import paths where you're just going to say: I have these boundaries, they are explicit, they are given to me, and I can't do anything about it — so, like, the Prometheus histogram has 10 or 12 buckets or whatever, and they're just going to arrive the way they do. The user, hopefully, will not have to say what their range is and what bucketing strategy they want.
B
I mean, today's Prometheus histogram comes out of the box with 12 or so boundary buckets, and it's very fixed — you can change it if you want — but over the last 10 years or so, many people have been working on getting to the point where a histogram can just automatically find the right buckets, have high resolution but be fairly compact and fairly accurate, and then you can just say "go" and it will find you a pretty good histogram.
B
That's what we're trying to unlock by adding these exponential bucket strategies, even though we know they aren't perfect. You can ask for more and more of these strategies — there are other ways to compact them, other ways to collapse buckets so you end up with fewer of them, and so on — and we'd like all of that to be possible in the future. But for now, what we're really trying to get is that it's possible to mix these, so that we can begin to add these new implementations.
F
I think one thing that I'm not truly understanding is who's normally setting up the aggregation. At the end of the day it's in my main function somewhere: I set up the aggregation — here, I want a histogram of this. Do I, as a user, either set the boundaries and get a histogram with user-specified boundaries, or will I be choosing an exponential histogram because my back end supports that? Is it at that level of a decision, or is it that once you say that, the user will never have to care about this? I don't think that's true, right — at the end of the day they have to set up an aggregation, right?
H
That's the goal, at least. A question, Josh — so I'm trying to follow; I'm not an expert in any of this stuff, but it seems to me that the only difference, looking at a very high level, is that the count is a float versus an int, right? So I would say that really, at least from my perspective, there are two things that we need to separate out. Regardless of what that type is, there are semantic differences that you're trying to convey, right?
H
So one is for exponential and one is for fixed, regardless of what the data type is — but your proposal, or whoever's proposal it is, seems to want to collide the two into one data structure. Is that still the true semantic of what we're trying to do? Because there are other ways to solve that, right?
H
If there is truly only one semantic, and the only difference is the data type, then you could model the data type as a oneof — fixed versus float — versus doing it at the higher level of merging the two into one semantic. So I don't understand: is this a data type issue, or is this a semantic issue?
B
I hope this is not a semantic issue — we know the API usage: you use these APIs when you are trying to characterize a distribution. The histogram is saying "I have some values, I want to know what they are." So hopefully not. What I hear you asking, though, is whether there is a semantic distinction to be made about int versus floating point — probably, and that one, unfortunately, is tricky and may not be so easy to settle.
B
I know that some people are quite upset when they put in a metric as an integer and then it goes through some sort of temporal alignment and comes out as a floating point, because there's interpolation happening. They look at their metric and they say: this is a floating point, but I gave you integers — how is that possible? That's pretty frustrating sometimes, and yet, in some sense, it's semantically allowed —
B
— if you agree that you can replace some counts with combined sums or subdivided sums, which is what interpolation is doing. So I think OpenTelemetry ought to take a position and say that when you choose an instrument that takes an integer or a floating point, you are giving the system some information about the range of precision that you have for your data.
B
But if you then request the system to do some interpolation and you end up with floating points out, you haven't changed the semantics — you've just processed the data in a way that you requested, and you end up with floating points; no big deal. I don't think it's a problem, though I know people disagree.
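A toy illustration of the temporal-alignment point above — aligning an integer counter to a fixed grid by linear interpolation produces fractional values even though every input was an integer; this is plain arithmetic, not any particular library's behavior:

```go
package main

import "fmt"

// interpolate estimates a cumulative counter's value at time t, given the two
// surrounding observations (t0, v0) and (t1, v1).
func interpolate(t0, v0, t1, v1, t float64) float64 {
	return v0 + (v1-v0)*(t-t0)/(t1-t0)
}

func main() {
	// Integer observations at t=0s (value 100) and t=60s (value 250),
	// aligned to a report boundary at t=25s.
	fmt.Println(interpolate(0, 100, 60, 250, 25)) // 162.5 — fractional, even though inputs were integers
}
```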
G
Well, can I rephrase this a little? The way I think about it is: metrics are mostly an exercise in compression. What we really want is every single freaking data point to be stored individually, and that gives us the best granularity of everything. That's just totally impractical and dumb, so we don't do that.
G
So how do we get the minimum we need for observability without destroying the actual use case of the system, right? Anyway, that's how I think of it, and that's why, when it comes to these histogram things, I would just go all doubles — let's just go straight to assuming everything is super compressed, right? But I think having flexibility here is really useful, so I like the proposal for the oneofs.
B
So yeah — actually, if I'm interpreting you — thank you, Josh — one idea you just said is: let's just have everything be doubles, which is how this histogram looks in Prometheus. We could just change everything to be floating point, not ints. Although we've gone back and forth on this inside of OpenTelemetry — I know that there was a protocol change six months ago or so, and I had to fix the Go implementation — I just feel like it's hard to get agreement on this floating point versus integers thing.
B
I think we've hit the end of the list on the agenda here and — oh well, we haven't actually; Victor, you're up. I feel like I knew that you were about to talk about issue 1366, and it is connected here, so why don't we just keep moving.
H
Sure — which one is... 13—? Actually —
H
— the item is actually more specific to what we just talked about, and it might be easier to resolve. I know Bogdan and Jana have not commented on it, but the issue there is that when I'm looking at the current OTLP protocol for metrics, the very first thing that I see is that it's a oneof: an int gauge, a double gauge, a sum —
H
— you know, an int sum, a double sum — and I'm not necessarily against putting in the particular data type, but why is it at the very top-most level? Why isn't it, at the very top level, a oneof of gauge, sum, and so on, and then one level below that — or some level below that — you could have a oneof of an int time series or a float time series, and so forth?
H
So I don't understand why we don't do that, and I just want to put that out there and see if we could hopefully close it, because that seems like a relatively easy thing to close, one way or the other.
B
Yeah, yeah, now I remember this from a week ago. I actually had made a similar proposal at some point, as I recall — and then other people could also answer this — but there was extreme care and attention given to the memory cost of the in-flight data for the collector, and when you start having oneofs and putting multiple optional fields into the protocol, you end up just sort of holding on to unused memory.
B
So if you had a scalar type which was both integer and floating point, you end up having two fields and only using one of them, I think, and then that ends up just sitting there — a lot of memory tied up in unused fields. That was the motivation that I recall; that's not a very satisfying answer, I realize.
H
So I agree: if we put the oneof at the level of the given data point, as I originally proposed, then yes, I totally agree with that — for each data point we would then have to have extra memory. But we don't have to do it at the data point level; we could do it just immediately above that, right?
H
Well, maybe — I don't know about that particular item, but I'm just saying that, looking at the protocol today, I think there's actually a level above the data point, and so we could put it there, as a oneof. If you look at, say, the int gauge — I think there's one level... yeah, let's take a look. If you go to the int gauge, that takes a data point. So why wouldn't that just be "gauge"?
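A sketch of the restructuring being asked about here — keep one top-level choice for the semantic kind (gauge, sum, histogram) and push the int-versus-double choice down to the value itself. These are Go types standing in for proto messages, not the OTLP definitions under discussion:

```go
package main

import "fmt"

// Number plays the role of a per-value oneof: int64 or double.
type Number interface{ isNumber() }
type Int64Value struct{ V int64 }
type DoubleValue struct{ V float64 }

func (Int64Value) isNumber()  {}
func (DoubleValue) isNumber() {}

type NumberDataPoint struct {
	TimeUnixNano uint64
	Value        Number // int or double decided at (or below) this level
}

// Gauge and Sum are the semantic kinds; the data type no longer appears in their names.
type Gauge struct{ Points []NumberDataPoint }
type Sum struct {
	Points      []NumberDataPoint
	IsMonotonic bool
}

// Metric's Data field plays the role of the top-level oneof: gauge vs sum vs histogram.
type Metric struct {
	Name string
	Data interface{}
}

func main() {
	m := Metric{
		Name: "queue.length",
		Data: Gauge{Points: []NumberDataPoint{{TimeUnixNano: 1, Value: Int64Value{V: 7}}}},
	}
	fmt.Printf("%+v\n", m)
}
```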
G
I would recommend that you prototype this and do some actual performance evaluation of actually serializing and sending those two different formats, and then also take a look at what it looks like in languages that have types and need to do boxing and unboxing, because with what you're implying, you might actually end up robbing Peter to pay Paul. The idea is that early on I can make my type decisions, and then in my hot loop I'm not doing a "pull out this particular metric type" — everything's much more efficient.
G
You need to figure out which one of those to prefer, and this could actually be one of those macro optimizations that matter in practice — that's actually what I think is going on here — but I would want to sit down and write a whole bunch of performance benchmarks with various tidbits, just to make sure I understand what's going on, with any kind of design proposal.
G
That's all I'm going to say: I wouldn't push this "if check" of the metric data type down that far, for performance reasons, without some kind of a benchmark to prove that it's actually okay and not going to cause regressions. Because (a) you're going to fight people who say it's a performance regression even if it isn't — so prove it — and (b) I think it actually might be one, in this case.
B
— if you use the standard Go protobuf implementation. And I think there's a lot of hand-waving about how you could just hand-roll yourself a really fast implementation of this protocol or that protocol and forget about the protobuf library, because the collector's performance story is obviously very important — and I think people kind of shrugged and moved on. But actually, the last time Bogdan spoke on this topic,
B
he was ready to write hand-rolled protocol buffers and optimize away the oneof cost, because it's pretty easy to do and it's a pretty high-value win for the collector. So I think you're right, Josh: you end up making this huge change and it's not clear that you're going to win on performance.
B
What I do know is that we rejected the idea of having just floating point values for the scalar types, because there are known use cases where you're counting, say, bits on the wire, and it's going up so fast that you're going to overflow a floating point within days — and you want that to happen within decades instead. So we want to have a 64-bit integer type that we can use to avoid loss.
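A quick illustration of that precision argument: float64 represents integers exactly only up to 2^53 (about 9.0e15), and a bits-on-the-wire counter on a 100 Gbit/s link advances roughly 8.6e15 per day, so increments can silently stop registering within days, while a 64-bit integer keeps exact counts:

```go
package main

import "fmt"

func main() {
	var f float64 = 1 << 53 // 9007199254740992, the last exactly-representable integer step
	var u uint64 = 1 << 53

	f++ // lost: 2^53 + 1 rounds back to 2^53 in float64
	u++ // exact

	fmt.Println(uint64(f) == u) // false
	fmt.Printf("float64: %.0f, uint64: %d\n", f, u)
}
```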
H
So I guess, from a very high level: when I first looked at the OTLP protocol, it just seemed like we're mixing data types and semantics together — the same question I asked about the histogram. I mean, shouldn't we, at the spec level...
H
I don't have an opinion on that necessarily either, but I'm just saying that there are both cases. If you do split out by data type, then different people have to take the burden of, quote, dealing with that. For example, in that issue there was some comment about certain vendors' back ends requiring it to be a distinct type — well, the same can be said that certain vendors do not, in which case those vendors would then have to merge
H
the int and, you know, the different data types. There are also languages that don't pay as much consideration to the data type, so then the API, or SDK, or exporter — someone — will have to split it out into those particular groups. And then, regardless of whatever answer we choose, I would think that the OTLP protocol is going to play a big part in informing how the API is going to look, pushing that up into the API. This may be philosophical.
F
The argument was that some back ends, like Stackdriver, require it at the metric level, and you cannot change that data type at a later time, and you cannot provide data points of various different data types. So it's more of a discussion about that — performance is the second thing; I think we've talked a lot about the performance but not about the other side of it. I don't know who's the best person to answer this question, but do you want to restrict a time series to a single metric data type? Sorry.
B
I feel like there are a couple of questions happening here. One, to Victor's: I don't actually care about int versus floating point at the API level. I think a few people do; most of the time a count is a count — it doesn't matter whether it's floating point or integer — so I think it's kind of a corner case that we've put into the protocol, and the API could totally ignore it.
B
I think one of the benefits of having separated our data model from our API question is that you could imagine an API for floating point or an API for integer, and there's only one percent of users who are going to care about that integer API, because most of the time it doesn't matter — but it's a one-percent API for the users that do. The second issue here seems to be about — sorry, I don't actually know what the second issue is.
F
The second issue is this: what if I give you an integer data value and then the next data point is some floating point or something? This is not supported by all the back ends. That's why we said, hey, we shouldn't do it, because not all the back ends support this. Whether that is a satisfactory answer or not is a different question.
H
So to answer — well, I don't know about "answer", but one perspective on that is that some systems do handle that well and some systems don't, but whatever choice we make, the other system will have to deal with it, right? So that may lead to the question: is this a survey of what back-end platforms we want to support, or are we going to make an opinionated choice one way or the other?
G
Fundamentally, that's just a much harder protocol to deal with, if every single point can have a different data value type within a time series. That's a lot harder to deal with in code than saying: if this is int, let me just grab ints, and I know they're all ints. So I just want to throw out that usability concern.
G
I mean, I pointed at it as performance, because it is, but it's also a usability concern to me: if I have to be looking at every point, doing conversion, as I read through OTLP when I write code against it — is that what we want? I don't know; it's more flexible, but is it good? Do we need the flexibility?
B
I think we can have both, and this is what we should do. Inside of the API, when you're in an SDK, we can put in controls to prevent duplicate registration — you can't have the same instrument with different types or different whatevers — so we can control you inside of an SDK. Once the data leaves the SDK, and you start having to merge or batch-combine data and write it into a back end,
B
it's just so likely that two different processes are going to come up with different metric types, different metric names, different metric units, different metric number types — so many differences can happen — and the question becomes: how shall we treat the data that comes through when you have the same metric definition but all kinds of differences other than that? And, to why I said we could have both: I would like to define those as distinct time series, or distinct metric series, so that they literally pass through the collector as if they're distinct.
B
You know, you've got a time series with seconds and you've got one with microseconds — they're separate, and they'll arrive at the back end separate. You have a time series that has integer types and one with floating point types — they're separate, and they'll arrive at the back end separate. Now let the back end decide whether it wants to convert integers into floating points or not, and I think that lets the collector do the correct behavior.
B
Now, if we want to have something fancy — let's say you want to do units conversion: I'm getting data from a bunch of places, and some of my units are in milliseconds, some are in seconds, and I want to do a consolidation and combine those streams into a single one — well, then I'm going to do a unit conversion on one, and then they're the same. So I have to install a unit-converter stage before I do my batching or something like that, and now they're the same. Does that help?
H
Yeah, yeah, I think that makes sense to me, and at the same time, I'm not necessarily proposing that every data point can be a variant of different types. I'm just saying that at the very top level you set the semantic, and then underneath that, one level down, you could have a time series of ints versus a time series of floats — versus just every individual data point being a mix of ints and floats. So, to Josh's point, I think we could compromise: you know, you have both, right? So...
B
Yeah — just to reiterate something that has maybe been said already: we believe that the semantics are prescribed whether it's integer or floating point. If you're using a histogram instrument, we are trying to say exactly what it means when you call that API — you're counting something for a value, or counting a certain number of a value, or applying a weight to a value — whether it's integer or floating point, and it shouldn't matter how the value is expressed in terms of the bits.
G
Can I call a time check here? We have 10 minutes left in the meeting. I don't know how long you want to spend on each topic, but maybe we shore this one up quick.
H
So maybe people could just comment on that, to see what we should do next with it, or whatever, and I'll move on to my second topic, which is somewhat related. The second topic is that I still don't have a clear answer in terms of naming, and Josh kind of mentioned a little bit earlier that if the metric name is the same then, at registration, we won't allow it, and there are a bunch of comments regarding that. But more specifically, what do we have?
H
If you scroll to the very bottom — just all the way at the bottom — I tried to summarize a bunch of the conversation that's been happening. From the current spec, it seems like we have basically these available fields from the API, and these are the only data that we have: we have the metric name, the version, the instrument name, the data type, and so on — and then it seems to me that the fully qualified metric name would be those.
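One possible reading of that "fully qualified" identity as a comparable key — the exact field set is the open question here, so the struct below is an illustration, not a settled spec:

```go
package main

import "fmt"

// MetricIdentity is a hypothetical uniqueness key built from the fields
// listed above; whether NumberKind belongs in it is exactly what is being debated.
type MetricIdentity struct {
	InstrumentationLibrary string
	LibraryVersion         string
	Name                   string
	Unit                   string
	PointKind              string // gauge, sum, histogram, ...
	NumberKind             string // int64 vs double, if it is part of identity
}

func main() {
	seen := map[MetricIdentity]bool{}
	a := MetricIdentity{"http", "1.0", "request.duration", "ms", "histogram", "double"}
	b := a
	b.NumberKind = "int64" // differs only in number kind

	seen[a] = true
	fmt.Println(seen[b]) // false: treated as a distinct series under this reading
}
```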
B
I agree with you. This is what I was mentioning earlier, when I was saying that we just treat things as distinct time series — or metric series — when they differ in details of this nature, and you're right that we need to choose the list of details that create uniqueness, and I'm glad to see that you've basically proposed something here. Now, there are some sort of technical nits here,
B
I think, because we've explicitly avoided putting the instrument kind into the OTLP protocol: by the time you get data points, you have data point kinds, not instrument kinds, and I think it's a little out of scope, for the time we have left in this meeting, to talk about why we think that can be done.
B
Was that clear? I think Bogdan has said this in the past, in this meeting, as well: instead of recording which type of instrument generated the data, we literally just encode the aggregation that was applied, via the type of point that was chosen.
E
I prefer that, because — sorry, this is Aaron, I've been lurking — as we do the next part of the discussion about whatever changes we might make at the API level, that certainly gives more naming flexibility, when what you care about later is what's on the wire. So I like that, yeah.
G
This list here is very API-centric — like, this is the fully qualified metric name within an API instance, right? But I think when you hit the wire, like in the collector format, you're going to have the resource on there too. So this view right here is very focused on the API, and let's just focus on that API discussion for this, because I think it simplifies things when you—
E
...combination. So — but tags aren't reflected in that little list there, and is what we're trying to say that we want to uniquely... Like, is the goal here to have a unique identifier for a metric, such that you could flag, or throw, or barf or something, when somebody creates a tag that doesn't exist everywhere? Because I have seen some notion that says you can't have disjoint sets of tags for the same metric. Thanks.
B
So forgive me — there's a data model draft document that I have linked in the notes here, which actually tried to make a stab at what you said, and I personally — and I think this is OpenTelemetry's position — believe we should allow mixing of label dimensions, because it's always been something you could do for delta measurements in a StatsD-like system, or for spans in a tracing system, and I formulated a way to at least specify it
B
that I think makes me happy, and I hope it makes others happy. Let me show this — I'm going to present my document, because I have a little diagram.
B
Okay. So this is the document — I shared it last week, but it's been updated a little bit. The idea is that we're going to define what events mean, and we're going to give meaning to those events — to the use of attributes — at the event model, and the meaning of the events is very clear at that level.
B
When you apply a label or an attribute to an event, later on you're going to select events that match those labels, and what we're saying is that when we talk about label sets at the lower levels of the data model, and at the time series model, what we want to do is translate that back up to the event model when we think about labels. This allows us to mix labels in a meaningful way, and I wrote it in English down here a little bit — metric data model attribute semantics.
B
I tried to tackle this question, and I'm just going to reiterate what I said earlier: the semantic interpretation of metric attributes is tied to the event model. So, what does it mean to have a time series with some attributes? It means I selected all the events that match those attributes, and I computed a time series. To say that again one more time: the resulting data are semantically defined to include the subset of metric events with matching attributes.
B
So this is the way I would like to define how we can support mixed attribute sets. And if there is a back end that requires you to have consistent dimensions, then we have to do what I'm calling dimensional alignment, which is to remove the dimensions that aren't present or aren't wanted.
B
Yeah — and I'm defining that as a term; we're calling this re-aggregation, and the idea here is that the OTLP points should contain enough information that they can be automatically re-aggregated with themselves. So if you have a target set of dimensions, you should just remove dimensions until you get what you want, and then you have the correct data. At least, that's what we think.
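A sketch of that dimensional-alignment / re-aggregation step for delta-style points — keep only a target set of attribute keys and merge everything that collides by adding it up; the helper names are invented for illustration:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Point is a simplified delta data point: attributes plus summable fields.
type Point struct {
	Attrs map[string]string
	Sum   float64
	Count uint64
}

// keyFor builds a grouping key from only the kept attribute keys;
// attributes not present on a point collapse to the empty string.
func keyFor(attrs map[string]string, keep []string) string {
	parts := make([]string, 0, len(keep))
	for _, k := range keep {
		parts = append(parts, k+"="+attrs[k])
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

// realign drops every attribute not in keep and merges colliding points by summation.
func realign(points []Point, keep []string) map[string]Point {
	out := map[string]Point{}
	for _, p := range points {
		k := keyFor(p.Attrs, keep)
		agg := out[k]
		agg.Sum += p.Sum
		agg.Count += p.Count
		out[k] = agg
	}
	return out
}

func main() {
	points := []Point{
		{Attrs: map[string]string{"method": "GET", "host": "a"}, Sum: 10, Count: 2},
		{Attrs: map[string]string{"method": "GET", "host": "b"}, Sum: 5, Count: 1},
		{Attrs: map[string]string{"method": "POST"}, Sum: 3, Count: 1},
	}
	fmt.Println(realign(points, []string{"method"}))
	// method=GET -> Sum 15, Count 3; method=POST -> Sum 3, Count 1
}
```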
F
Can I ask a fundamental thing? This uniqueness thing can only be implemented in the scope of a process, if we do it in the API or SDK. In OpenCensus we did this, and then you'd have two different services populating data into the same time series accidentally, because they're two different code bases and so on.
F
I don't think there's anything we can actually do at the API/SDK level, right? So I'm not sure we should actually invest time in this, because of that fundamental difficulty. What's your opinion?
C
Okay, so from an API perspective, I just want to make sure that if we have an int API and a double API, and someone is trying to register int and double at the same time with the same fully qualified name, the second one should fail — we should throw an exception and tell them something is wrong, so they know ahead of time. But if they have two different SDKs running in different processes, and they have both int and double, this is not the SDK's problem; the back end should tolerate that, because this is the norm.
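A sketch of that registration rule — within one SDK instance, an identical duplicate registration returns the same instrument, while a conflicting number kind is an error; the registry below is an illustration, not an actual OpenTelemetry SDK API:

```go
package main

import (
	"errors"
	"fmt"
)

type Instrument struct {
	Name       string
	NumberKind string // "int64" or "double"
}

type Registry struct {
	instruments map[string]*Instrument
}

func NewRegistry() *Registry { return &Registry{instruments: map[string]*Instrument{}} }

// Register returns the existing instrument on an identical duplicate, and an
// error when the same name is re-registered with a different number kind.
func (r *Registry) Register(name, numberKind string) (*Instrument, error) {
	if existing, ok := r.instruments[name]; ok {
		if existing.NumberKind != numberKind {
			return nil, errors.New("duplicate registration with conflicting number kind: " + name)
		}
		return existing, nil
	}
	inst := &Instrument{Name: name, NumberKind: numberKind}
	r.instruments[name] = inst
	return inst, nil
}

func main() {
	r := NewRegistry()
	a, _ := r.Register("victor", "int64")
	b, _ := r.Register("victor", "int64")
	_, err := r.Register("victor", "double")
	fmt.Println(a == b, err) // true, plus an error about the conflicting kind
}
```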
H
So, to that point, there are a couple of use cases that I've identified previously, and the simplest use case is that you have a library that has two instances, so the library is going to register exactly the same name, exactly the same instrument type — the signature is exactly the same, right? So if you refuse registration of, you know, a second integer type, then that's a problem.
C
No — if you're in separate SDK instances, we don't care, because this is not something we can control. But within the same SDK, if someone tries to register an int gauge with a fully qualified name like "victor", and later someone tries to grab the same thing, we should give them the same instance; but if someone tries to register that "victor" as a double, we should fail them and tell them there's already one.
H
So, in your case, then, you're saying that per the spec — for what I propose — the int type is part of the name, so in that case it wouldn't be a concern. It would be a concern if, for some reason, we decided that the data type — int versus double — is not part of the unique name.
B
I was mostly saying this because I want the collector to treat them as independent, so that we don't have to call it an error if that exists — but I'm also sort of saying we can push this problem all the way to the back end, and then you talk to your back-end developer about what that means to you. It always feels unsatisfying to get to the end of this pipeline.
E
Yeah — I think Micrometer, because of the Stackdriver thing, just does everything in longs or doubles. It just does everything in doubles: there's only one type, it does it all in doubles, and that way the double handles the double and the double handles the int, and who cares?
H
So there is a different proposal, if we don't care about the naming: on the API and SDK side we don't care about any kind of naming — we just treat each instance of a counter, of an instrument, as unique, and if the user actually wants to share the same instrument, then it's up to them to do so. I mean, we could treat it that way, but we're—
G
I think identity has been the most important thing in this conversation. We need to basically say: metrics have an identity. We can decide whether or not the API throws on two different metrics with the same identity, but the other thing to call out is that one identity can have multiple time series, and that is done via tags or labels or whatever the hell
G
you call them, right? And that's fine, but they're still the same identity. And then whether or not the API wants to throw and only allow one identity out is, I think, kind of important. What identifies whether or not the collector will fragment a metric and consider it a separate thing with multiple time series, or join it together — that's the thing that needs to get resolved, and your document kind of gives an idea of what identity means from an instance... is that what we're calling it? I don't—
B
One question: I think we would all agree that the description text does not qualify as an identifying factor, and that means that if you're going to get version skew — you're going to have slightly different descriptions — nobody cares, as long as all those other fields are okay. So we want to say something like: it's okay to merge different descriptions and toss one; we don't really care. Yes.
B
There may be a reason that we should just remove these int types. I know — as someone who worked at Google, I remember people on our team for whom that was one of their major things: they had to keep it, because the network team insisted on counting bits, and if you're consistently counting bits with your metric system, integers are needed. That's the problem — it's a corner case, but it was needed.
B
Yeah, and actually one of the best pieces of feedback on the Go SDK, which is where we prototyped a lot of the early API, was from Jana, saying: why are there, like, 12 different instrument types? I don't really care about int versus floating point. I totally agree at the API level — almost no one cares, and it creates a huge amount of surface area for people to not care about.
B
...in its own way — so, great, cool; let's be like Micrometer.
E
I will read — I will catch up and read everything. I know I've missed a few, so I'll try to catch up with where everybody is.
B
Thank you all — we'll see you at the next one. I know there's an SDK/API meeting coming up; see you there, hopefully some of you. There's a—