From YouTube: 2021-03-09 meeting
B: Okay, should we start? It's 9:04 on my clock. I see Victor, are you on the call? I feel like you're the star of the show here for this first conversation, for having made the proposal, so I'd like to invite you to talk, and I can present. I was saying in the last hour how we sort of have three options for this. One of them is where the current oneof is, which is what we're looking at right now.
B: Now, the other option that you sort of produced was that inside of the data point we could have per-number-type variation. You could have an int and a float, for example. And then in my own issue I had pointed out that you could do the same with histogram.
B: You could put one type of point for explicit boundaries and one type of point for exponential boundaries. And then I had made my own PR where I did the same, but I put that oneof for the histogram inside of the actual data point, somewhere around this point here, where we currently have a fixed field. So actually, having now considered all three of them, I prefer the option that you created, Victor, not my own proposal. So I'd like you to now speak.
D: Thank you, Josh. So this whole thing actually stems from the effort I've been pushing to try to clarify in the spec what the unique identity of a particular metric is. And it just kind of dawned on me that we want to separate the semantic of what the instrument is away from the details of the data and how we want to, quote, represent it, given that we have so many different potential representations and vendors in the future, right? So really, just at the very top, when I looked at the protocol, the very first thing that jumped out at me is that the oneof at the top included all the data types, and I can imagine that potentially exploding in the future.
D: You know, the types that OTLP wants to support from the semantic perspective. And then, once we get down one level deeper, we can have all the details. And then I had to repeat it there for two reasons, one of which is that I can't easily have a oneof with a repeated field without creating another layer. And then, thinking about that, having the repeated fields actually serves two purposes: it allows freedom for vendors to either merge data types or keep them separate, and it also allows future growth of potentially different ways to store the data appropriately.
D: And so what we're saying here, then, is that for the OTel semantic of sum there may be potentially different ways to represent the details, and it's up to the vendors to either merge or keep separate what they see fit, appropriately. And at the same time, I think where I put it has minimal impact on the protocol itself.
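For reference, the shape being described here, one semantic message with parallel repeated point lists instead of a top-level oneof, might look roughly like this sketch. All message and field names below are illustrative, not the actual OTLP definitions:

```protobuf
// Illustrative only: hypothetical names, not the real OTLP schema.
// The metric keeps a single semantic (here, Sum), and each point
// representation lives in its own repeated field rather than in a
// oneof at the top of Metric.
message Sum {
  // A single producer normally fills only one of these lists; a
  // consumer is free to merge them or keep them separate.
  repeated IntDataPoint int_data_points = 1;
  repeated DoubleDataPoint double_data_points = 2;
  // New representations can be added later as new repeated fields,
  // leaving the existing ones untouched.
}
```

Empty repeated fields cost nothing on the wire, which is consistent with the size testing mentioned later in the discussion.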
E: So I would like to say the following. You started the discussion with the question of identity, and what it means to identify a point inside a time series, or a metric, or whatever we call it. And there are backends that I know of for which the type of the value is part of the identity, and with this proposal I think we don't make that clear and we don't help them. The reason why we don't help them is because of our proto, I think. So, to answer your question...
B: So can I just pitch in there? You're suggesting that some vendors, and you didn't name them, but I know who they are, so Stackdriver is definitely one of them, Lightstep maybe, and there's a connection between those two systems, of course. I think that you're right that we are sticking our nose into a question here about whether this is or is not valid. Do we just agree on that?
B: We are trying to make a rule that we think others should probably follow, and this is now a statement that we think it's okay to mix integers and double floating points, that there's no semantic loss, and that it's up to you whether you merge them or not. But you're going to have to know that there are two different repeated fields for the rest of the protocol's lifetime. And I think I have to compare that with the alternative of having another oneof and then having to program around this duplication at a higher level.
B: So if I map my proposal into Victor's, which I said I liked: Victor's drawn up the int/double distinction here, but I don't think we want that anymore.
B: We've talked about erasing that in a parallel conversation. But the variation that we do, I believe, still want for histogram is this variation of bucket type, and we could have a oneof, like the open PR I have, or we could do this type of parallel repetition. And I think it's interesting and I like it, because it explicitly says it doesn't matter semantically.
B: If you have this bucket or that bucket, it's a histogram. You shouldn't merge them unless it's a special case, and there's a PR about that as well. Some types of bucketing strategies allow merging, some don't; it's kind of a question mark. We don't need to specify.
E: And without a oneof, I would point out one downside: the in-memory representation, which I don't think we can avoid. So let's assume we end up with five types of boundaries, for good or for bad; having five repeated fields here is pretty significant, I would say. Okay, indeed, it's one per metric, so it may not matter, and we're discussing it too much, but I'm just raising the issue that comes without a oneof there.
B: The way Victor did it, we'd be pushing that oneof up, instead of having two repeated fields in parallel, and that's the memory concern you're asking about. We would push it down into the actual data point. We'd have, I called it a scalar data point at one point in the past, a oneof saying you could either be a float or you could be an int. And I think what we're seeing is that it's more extreme with histogram, because you can imagine five different bucketing strategies, and you can't imagine five different scalar number types.
E: Correct, though I can still imagine two or three, especially for gauges; I can see one coming based on strings.
D: Yeah, so, quick comment on the oneof. I don't think the oneof really serves, at least, my intention. If you had the oneof, then you're again forcing the semantic to be repeated over and over again. So it's not only more expensive on the wire, but you're also forcing a semantic difference.
D: You know, when you're defining the instrument and so forth. And also, unfortunately, we could work through it, but I first started with a oneof, and the repeated field and the oneof don't mix well together; you'd introduce another level, which is also part of your allocation of memory and so forth. I did do some testing in terms of the size with the repeated fields. It seems pretty good, because it's really optional, so it's not transmitted if it's not needed anyway. So there's some confidence.
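The point that the repeated field and the oneof don't mix can be made concrete: proto3 forbids repeated fields as oneof members, so expressing "a oneof of point lists" forces an extra wrapper message, which is the extra level (and extra allocation) being referred to. A hypothetical sketch, with invented names:

```protobuf
// proto3 does not allow `repeated` directly inside a oneof, so a
// "oneof of point lists" needs wrapper messages, i.e. one more
// level of nesting (and one more allocation) per metric.
message DataPoints {
  oneof points {
    IntDataPointList int_points = 1;
    DoubleDataPointList double_points = 2;
  }
}
message IntDataPointList    { repeated IntDataPoint points = 1; }
message DoubleDataPointList { repeated DoubleDataPoint points = 1; }
```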
D: Yep, yeah. So I'm concerned that, okay, assuming that is true, if you take the larger picture: if I were a user that had both int and double, now I have to repeat the whole message. Which message? Well, I would need another gauge message with a oneof int, and then I have yet another one with a oneof...
E: So you are concerned not about the size; you are concerned about the distinction, about the data type being part of the identity or not. This is what I'm hearing from you. You are suggesting not to have it, and what I was trying to tell you is: there are backends, like Stackdriver, that George mentioned, I did not mention it, for which this is a concern, because for them it is an identity. And I think, if we do this, Stackdriver will not be happy and we will not be able to export to them.
B: I'm also airing a little bit of dirty laundry here, because Lightstep is doing the same thing, and I think they've done it wrong, and I work for Lightstep. So I think OTel has a chance here to just set a standard, and we can say that we shouldn't consider it a semantic break if you put an int and a float into the same stream. And I think Victor has proposed a way to represent it, and maybe Stackdriver won't like that, and maybe I'm gonna have to argue with my back-end engineers.
B: But I'm up for that, and I don't think any other protocol cares. And in fact, in sort of productionizing this right now, my biggest problem is that real users out there are mixing counters and gauges, and it works for them today in Prometheus, and it doesn't work for them in Stackdriver, and it doesn't work for them in Lightstep. And can we talk about that as well? I think it is actually an error if you mix counter and gauge, but not if you mix int and float. Okay.
E: Here is to drop the value type from the identity and say that, if you reported a sum where initially you reported a double point, then for the same label combination, okay, all the identifiers, like the source, are the same.
D: Yeah, so I'll give you a data point on that, because we've been doing some research in terms of the API calls for collecting data, and many of the API calls always just take a double. So whether you're passing an int or you're passing something else, it gets translated to a double, right, and then back into the SDK.
D: Now we have a problem: how do we represent that? How do we pick between an int data point and a double data point if the API side only reports and translates internally to all doubles? So that's a separate conversation; maybe we need to keep the type. But my point is that I think a lot of the client users just don't know about the semantic, and use them interchangeably.
E: So I completely agree with you, and if the transition happens to double anyway, you just report doubles, it's fine. But there is this thing: if the source is the same, or the resource is the same, which is part of the identity, you will not have, for the same time series, one point reported as an int and a second point being a double. But that's a vendor-specific...
G: Issue. So, no, it's not. Here's where it kind of matters: if the user has any expectation of equality, right? That's the fundamental difference between floating point and integer: with integers you can assume equality, and with floating point you cannot. So if I'm using a gauge, and my gauge has specific integer values that are effectively like an enum to me, and I need to query with that, then I need it to be an integer and not lose that precision. That's like the only time.
D: This is the precision, and the accuracy, and whatever, that I need when I transmit it over to OTLP. And that, I think, is where we have the opportunity to promote, or upscale or downscale, the number to match the user's expectation.
E: I think you are thinking a lot about the SDK and what the SDK does, which is great, but I'm thinking more about the consuming side of this. So if I'm a consumer, I don't know whatever the heck the SDK does, whether they send them all as doubles or not. But what I'm trying to say is, with this...
B: I have a position. So I think what Victor has proposed essentially says that we are going to preserve the type that you give us, so an int stays an int and a double stays a double, and that's meaningful information. But it's not semantically meaningful. It's just a compression; there's a lossy conversion from one to the other, and the user has to say, yes, I really want that lossy conversion. Or maybe the exporter forces it, like Prometheus.
B: ...only supports doubles, so that's what you get. But I want to just echo Josh's point there, that sometimes it matters, and that's why we do have all these alternate types in the protocol. And I think there's another case that hasn't been mentioned here that has to do with interpolation, or when you're doing this temporal alignment, which has been mentioned in the data model document.
B: Sometimes you want to sort of shift counts, and that's totally legitimate to do; the semantic of a counter says you can do that. And so it might be that you end up with integer counts going in, and temporal alignment gives you a question: should I do floating-point conversion in order to do that interpolation correctly, or should I do some sort of accumulator that keeps some state in time and kind of shuffles around that extra rounding error so that I don't? And that's a different way to do temporal alignment.
B: Maybe there's a choice, and there are two implementations around later at some point in the future. Or a vendor might just say, yeah, I accept both, and I'm gonna literally store both and do it right, or whatever; or the vendor might just quietly convert them one way or the other.
E: Conversions, how do we ensure... So right now we have this statement that, across all the repeated points, the labels are different, so labels are part of the point. Right now we have this statement, or we can put this statement in easily, that labels are different across all the points in this repeated field, or it can be treated as a map from labels to the rest of the things. Correct?
D: Yeah. So I think I alluded, in this particular item, to the idea that we should move the label set up a level, rather than the level it is at, which is currently the data points. But I haven't tackled that issue immediately yet, given that I think we need to resolve this issue first. But at the protocol layer, I think it speaks to the identity, and I think the identity includes the label set.
E: No, now I'm confused. Then you are going to make exactly what Josh proposes.
B: You could just say, you know, I'm seeing this metric with a different type, or a different bucket style, or a different number type, and my standard configuration is to just consider those to be separate encodings of the same semantic, and then I'm going to pass them through. That means that different SDKs may end up reporting the same label set and instrument with different number types, and that's why the collector may end up having to merge together two parallel arrays, and that's where I begin to wonder. There's this notion that I put into the data model draft about single-writer: this idea that ultimately there's some identity of a metric, coming from an SDK somewhere, that should have a single type, and if you're mixing, your single-writerness has a conflict and something bad is happening. So I want to say that if there's no aggregation happening, you should never end up mixing these, because there's a resource that keeps these things separate, and an SDK should never output a conflict.
E: So, but to your point, Victor: if you are moving labels up, you are moving not just the label keys, but the label values and keys, which are part of the identification.
D: No, I mean, it may wind up being the same, but I don't see that necessarily. Given, say, a bound label, and the user decides to report sometimes an int, sometimes a double, or simply we just always take doubles, then we have to decide, when we output, whether or not we want to output an int or a double. So again, back to the whole point: the identity plays a key role; the data type of the actual time series is less important.
E: So, but you mentioned that you want to move labels up. Can we do that exercise? Because I'm pretty confident you get to the same solution as Josh has, which is a oneof inside the data point for the value.
E: I'm just worried that we kind of get into that, and maybe the right solution is to approach what Josh wants, which is: you have a oneof inside the data point for the possible values, and that's it. Because that will satisfy your thing, and will... so.
E: Essentially, the gauge will become a gauge which has a repeated point, and the point will have start time and time, whatever it has, and it has a oneof for the value, int or double. So we move the type down to the last part.
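The structure being described, a single gauge whose points carry the number type as a oneof, might be sketched like this. Field names and numbers here are illustrative, not the final OTLP schema:

```protobuf
// Illustrative sketch: one Gauge message; the int/double choice is
// pushed down into each point, so one (label set, timestamp)
// coordinate can carry at most one value.
message Gauge {
  repeated NumberDataPoint data_points = 1;
}
message NumberDataPoint {
  repeated StringKeyValue labels = 1;
  fixed64 start_time_unix_nano = 2;
  fixed64 time_unix_nano = 3;
  oneof value {
    sfixed64 as_int = 4;
    double as_double = 5;
  }
}
```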
D: Yeah, so when I first tried that, that's what I did, and I think part of the problem is that we don't want to necessarily specify the type for every data point and have it mix back and forth. Because the goal of this isn't to mix a time series of different types; that's really not the main goal. And if I understand correctly from Josh's perspective, I think his perspective is: given a histogram...
D: There are potentially different ways to explicitly provide the buckets. So that's really still a semantic question associated with the histogram. Now, in the case of the gauge, that semantic is not there; we go straight into the time series data, and that's why the repeated field was at that level. But I could imagine that the, quote, data points could be... maybe there are some other algorithms associated with a gauge, and that's where it would be; in that case, that could be a oneof.
B: Yeah, we gotta timebox a little bit here. The discussion about moving labels: I saw that in Victor's comment, and I've now listened to several minutes of it. It definitely deserves more thought offline. I feel like there's a choice that ends up just shuffling something around, and it's not like you end up with a win overall. But that's having only thought about it heavily last summer, when we had this debate the first time. Yeah, so.
B: So, to eventually clarify that: inside of the histogram we do the oneof, just like my PR, which is here, where there's a oneof of buckets belonging to the double histogram point.
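The histogram variant in that PR, a oneof over bucketing strategies inside the data point, might look roughly like this sketch with hypothetical names:

```protobuf
// Illustrative sketch: explicit and exponential buckets are two
// encodings of the same histogram semantic, selected per point.
message HistogramDataPoint {
  fixed64 start_time_unix_nano = 1;
  fixed64 time_unix_nano = 2;
  fixed64 count = 3;
  double sum = 4;
  oneof buckets {
    ExplicitBuckets explicit = 5;       // user-supplied boundary list
    ExponentialBuckets exponential = 6; // boundaries derived from a base/scale
  }
}
```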
B: There's a... a data point, sorry. And then what you're saying is that it's sort of different in this case of scalars, because they're just so small and there's only two choices that we know of. And maybe then what we do is have a number data point which has both an integer and a floating point, and technically it's a oneof. But if you do that, you're going to end up with three more allocations without fixing the protocol buffer library, honestly.
E: I would not put another number type in; I would just put the values there, because adding a oneof later, if you have the IDs there, is backwards compatible in proto. So we do not have to put a oneof in right now; we can always add it later, as long as the types that we want to encapsulate can be part of a oneof, and primitives like doubles and ints can be part of a oneof.
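The backwards-compatibility claim rests on how oneof is encoded: a oneof member is serialized exactly like a plain singular field with the same field number, so fields can be grouped into a oneof later without changing the wire format. A before/after sketch with hypothetical message names (note that the protobuf documentation cautions that moving several existing fields into a new oneof is only safe if no old writer ever set more than one of them at once):

```protobuf
// Before: plain singular fields.
message PointV1 {
  sfixed64 int_value = 4;
  double double_value = 5;
}

// After: the same field numbers grouped into a oneof. The bytes on
// the wire are identical; only the generated API changes.
message PointV2 {
  oneof value {
    sfixed64 int_value = 4;
    double double_value = 5;
  }
}
```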
F: Zero or a zero; that's what we discussed, like, does it matter?
E: It's just one allocation in the majority of the languages that I know, for primitives; not for messages, but for primitives. If it's a oneof of string and int, it's only one allocation, which is exactly what you want, because you want to determine which of them it is from the...
E: So if we have the oneof there, I think that's the right thing to do, a oneof on the value, because that will preserve the fact that, hey, for the same label set inside the same metric, you can have only one value. Correct? Right now, with the current proposal from Victor, I can have two values for the same label set, which is something that I don't want.
E: So we have here, what I'm suggesting, and let's make sure we understand each other: so here, blah blah, we have an int. So if I edit this, we will have a sum. Let's look at the sum, the summary; we'll have a sum here, okay, we'll have a...
B: Although it feels kind of heavyweight and overbearing to have a oneof like this, it does satisfy, I think, the semantic desire to make it impossible to specify more than one point value at a coordinate, whatever that means, the coordinate being all the timestamps we have, and the resource, the label set, and the instrument.
E: But then you have to have a map. So, correct: this is your label set, okay, all these labels; so this is your label set, and for every label set you have to have one of these. So instead of having a map, we decided to go with this representation of a map.
E: ...want to aggregate them, or if I... wait, wait, you are talking about a different thing, which is non-aggregation, or raw measurements, which we haven't defined. So if that's the case that you are looking for, we'll have here a raw measurement, another type where you don't do aggregation. Because this one means you do a sum: if you have 20 points, using sum means you apply a sum across your 20 points and you report the sum.
D: It does make sense. However, I think, practically speaking, and I know we don't have the SDK specified per se, you have potentially different batching, where in the SDK you collect two or three or four times and you generate, quote, a sum, but the exporter may batch at a different rate. So the exporter winds up potentially exporting multiple sum collections instead of...
E: I think they will have different timestamps and different intervals, so, anyway. Right, but the label sets are the same. So I think this is something that me and Josh are probably the culprits for, because we talked about optimizing and doing the mini delta inside the SDK, and then, when we report, we report the entire interval. But that was an optimization; both of us believed that, okay, on the critical path we scrape things every second, but then on the reporting side...
B: Earlier I said that it feels like we're just shuffling something around, because the way we have it now is sort of optimized for this scenario that Bogdan describes, which is: each interval, you output one point with one label set per instrument. And I also agree with Bogdan that it seems like we're talking about adding raw value support, which kind of changes the equation. And then I agree with you that it'd be nice if we could move the label set.
E: So, by the way, we did have in OpenCensus the possibility, for every label set, to report a repeated number of points, so essentially exactly what Victor is asking. But I used that protocol for two years and never, ever allocated an array of more than size one. So maybe you used it, Victor; but for me, based on my experience: never, ever allocated.
E: We don't merge at that level. And also, if we want to merge to compute deltas or stuff, we'll merge the intervals; we will not have two different points. And usually we are talking here about intervals that are 30 seconds, and you will not wait 30 seconds for batching.
E
So
if
you,
if
you
are
referring
to
batching
things,
because
that
may
be
in
a
batch
scenario
where,
where
you
don't
do
a
real
merge,
you
just
bash
them,
we
I
never
and
no
nobody
could
implement
because,
usually
you
have
a
30
10
seconds
30
seconds,
but
you
will
not
wait
that
much
for
for
batching
things.
You
will
wait,
probably
a
second
for
for
batching
for
multiple
sources
to
send
to
the
back
and
not
not
30
seconds,
so
never,
never
seen
that
in
practice.
E: That's my point, and it was just a problem to have that, Victor. I do understand your desire and I do understand where you are coming from, but I think we intentionally dropped that capability from OpenCensus for the reason that I just described. Now, when it comes to your ask of scraping more often, I need to understand better, because this may imply that we made some bad assumptions in the data model.
D: So, for example, I may have multiple pipelines, although in this case we're going to have different collectors in each of the different pipelines. But my collection: I want to collect every 30 minutes, for example... well, not 30 minutes, right; let's say every minute. I want to collect every minute, and so I generate a data point.
D: Okay, so I want to generate a data point every minute, but I don't want to export it every minute, because my backend service may be delayed, or I might have some contention, whatever. So, either way, I may wind up in such a way that, when I do export, I have more than one collection period.
E: Yeah, I do see this, but is it important for the backend to see the two points that you have? Does it matter at the backend level that they see two different things? Can we just merge them, not by having two different points, but by just applying the aggregation, the sum, right?
G: Can I call time here? Because we have about 12 minutes left and I feel like we're starting to go in circles. So I'm gonna throw this out: Victor, if you could put together a proposal based on your understanding... actually, I think I want to ask Josh or Bogdan to put together a proposal of what you have for histograms, with the additional other data types, and then we'll evaluate the two, because I still think there's some... wait.
G: ...deal with this label thing now, right? Like, can we defer that? Is that okay? Let's put that on hold; let's just focus on histograms and getting those out the door, with whatever changes we want to make to other data types to make them align with histograms. That's what I'd like to see.
G: So, okay, yeah. I think for the things that you were bringing up, I would call back to Bogdan's point: we might want a raw measurement type for that purpose, and I think there might be a better discussion around that. But for the top-level data type thing and Josh's proposal: if I remember correctly, the consensus that we hit, before we went down the label sets path, was some kind of a combination of those two. So who's the best person to write that proposal down?
E: I think for the histogram we will take what my PR and Josh's PR do, and for the gauges we will no longer have an int gauge and a double gauge. We'll have this one gauge, and down inside it we'll have a oneof: int value or double value. And it's the...
G: Okay. So I could even move that into when we talk over general temporality too, but the main thing I want to do is talk about... so we have next steps: let's get this histogram stuff out the door, and kind of shore up histograms right after that.
B: ...that to be pretty low on the list, because it's only needed for this thing called gauge histogram in Prometheus. That's all it's needed for, essentially.
G: Time means clarifying what concurrent-sending requirements are allowed or not allowed from the exporter. Like, are you allowed to have out-of-order points? That kind of stuff. There's a bunch of discussions that we need to dive into, and what I want to know is: are we going to be ready to talk about those next week? Should I prepare the bugs and discussion points, and kind of documentation on what we're going to be discussing, and the key decisions to make from the bugs?
B: I feel like labels versus attributes, string-only labels and attributes, that question is the one that has never gotten all the way resolved. Yeah, all right.
G: Okay, cool. So next week we'll talk about labels versus attributes and focus on that. I updated the project ordering to have histograms, then that. If there's anything, maybe Joshua and I can get together to kind of line up the issues to prepare for that discussion too.
G: If you open a bug about it, I'll make sure it's on the list. I think yes, so.
G: ...at the written specification, all right. So there are pieces of the specification to write that we need to also be doing in parallel, and that identity piece I actually wanted to tie into the single-writer section that Josh was talking about.
G
So
there's
a
there's,
a
section
in
this
in
the
data
model
spec
around
single
writer-
that
I
think
we're
also
going
to
talk
about
metric
identity
within
that,
because
I
think
those
two
are
kind
of
tied
that
fundamental
assumption
of
there's
a
single
source
of
metric
truth,
and
there
might
be
like
two
versions
of
the
same
software
running
with
different
ideas
of
what
the
metric
is.
But
we
have
this
notion
that
only
you
know
one
resource
is
writing
a
metric
at
a
time
and
what
identity
means
there.
G
So,
if
you
want
to
work
on,
I
there's
not
a
bug
open,
there's
a
note
on
the
project
and
that's
because
I
suck
at
github.
If
you
want
to
open
a
bug
and
like
write
that
section,
please
do
because
I
think
I
think
it
deserves
some
good
treatment
and
discussion.
But
that's
that's
where
I
wanted
to
kind
of
walk
through
that:
okay.
A: Is this a valid scenario? Do we care about this? Because otherwise, like, I've seen it in a bunch of different meetings where people bring up some use case, and then, yeah, that's a use case, but do we care about that use case? I guess that is the question that I think doesn't get answered. And so stating known non-goals would be a good thing, and known goals of what we're trying to go for is, I think, what might solidify things in people's minds.
G
An
awesome
point:
the
pull
request
that
pulls
in
the
initial
bit
of
josh's
document
has
those
use
cases
I
actually
added
one
to
it.
Okay,
that
wasn't
called
out,
which
was
like
the
dead,
simple
use
case.
I
please
comment
on
that
and
then
it
also
needs
a
not
supported
use
cases
or
things
that
we
don't.
You
know
we're
we're
not
optimizing
for
possibly
yeah.
I
yeah
I.
I
agree.
G: I'm just trying... we only have two minutes; I'm trying to wrap up. So I think we have next steps: Josh and I will meet to prepare for the next meeting; be prepared to discuss the label-and-attribute string fun next meeting. We've only tried to talk about it for, what, nine months or something, as a community.
B
I
feel
like
it
would
help
us
to
have
a
prepared
list
of
discussion
points
as
well.
I
know
we've
because
I've
been
there
for
those
conversations
I
could
probably
enumerate
them,
but
bogan's
always
the
one
that
has
the
best
grasp
of
the
objections.
I
will
say.
E: There is an aggregation layer where things are aggregated, and maybe the metric name is not the instrument, maybe a different thing. So what I'm trying to say here is, I think we need to make sure we have this clear distinction between what we call instruments, and what is on the user side, versus what we produce.
B
Yeah,
so
sometimes
the
question
is
about
what
happens
inside
the
sdk,
because
we
can
make
more
restrictions
there
and
I
think
what
as
a
data
model,
we
have
to
remember
that
the
collector
is
going
to
be
passing
a
bunch
of
stuff
through
it
and
the
identity
question
is
a
little
bit
different
and
I
actually
think
we've
solved
it
already
in
this
discussion
about
the
scalar
number
type
like
by
by
choosing
that
one
of
we
make
it
impossible
to
have
more
than
one
point
defined
and
and
and
and
that's
identity
right.
There.
G: Let's take this to next week. I will work with Josh and Bogdan to get the list of agenda items, to have that better sorted. Sorry, I tried to make lunch between meetings, and that was a mistake, so I was really late. So next week I'll try to prepare better. In any case, thanks, everybody; I'll try to set a better agenda for next week, and hopefully...
G: Thank you for running this agenda.
E: Thank you, everybody. Oh, I think we are good on the first issue, to file PRs, and next week we're gonna talk about attributes and maybe start a discussion about the identity, if we have time.