From YouTube: 2021-08-20 meeting
A
Yeah, it seems I have some simulation, and for the new PR, I think with Victor's change, the view should be complete for the prototype. I do see some questions from Josh earlier this week, and I believe I answered them, but I missed that meeting. So, do people here still remember: is there any open question that I could help answer? I think Josh brought some questions about the view on Tuesday.
B
Yeah, most of the questions about the view were about how to do the ordering; I think his comments were in there. I'm speaking for Joshua, so maybe not accurately, but there were two conflicting things: how to handle multiple views applying to the same instrument, and also applying things in top-to-bottom order.
B
I haven't seen clarity, but I haven't been watching everything that's been going on with the view.
A
And I do have one thing I want to clarify with the two Joshes; I think we only have gmacd here. Regarding this comment: I think the tracing spec says a resource can be associated, and I'm not sure if that's a requirement or if it's just giving some flexibility. That's why in the original version I tried to give flexibility, but it seems like Josh and Josh Suarez both have the understanding that this is a requirement.
A
So it's more like: each language must provide a way for a resource to be associated with the tracer provider and meter provider. Based on that, I changed the wording a little bit, but I want to make sure you have the same understanding.
C
I find this language to be very confusing either way, I think.
A
I'm using the same wording, but it makes me uncomfortable because it's very vague. Do you think we can change it to something like: the meter provider must provide a way to specify a resource, and once the resource is specified, it should be associated with all the metrics produced by any meter associated with that provider?
C
Yeah, I think that would be wonderful; that seems like the best outcome we could get. The reason I asked is that I feel like I've personally been confused about this and assumed things other than what was actually written, and Bogdan has stuck to the position that there should be one resource per OTel SDK, and that we're trying to resist the idea that you can have more than one resource coming out of a normal OTel SDK.
C
So the only place in an OTLP pipeline where you get multi-resource data packets is in the collector or downstream. If that understanding sounds right to you, then I think we're just talking about the way it's written.
A
Yeah, and is that the only thing you're concerned about? If I fix that, do I have your approval, or are there other things? Because I think the export part is pretty clear if we say this PR is currently only focusing on the push model. Or you can take a stab and see if there's any other blocker, because I updated the PR right before the meeting.
C
Yeah, I don't think I have any objections to this PR at all, actually, especially because we addressed the, I guess, bigger question about push and pull. I'm sorry it's moving slowly for you; we should get it merged. I think you have my approval, though I shouldn't just give it without reading it again.
A
Okay, so I'll do this, follow up, and mention it in next Tuesday's meeting. That would unblock me, so I can work on the pull exporter and the metric listener PR; I already have something just waiting for this to be merged. And with Victor's PR merged, that unblocked Josh Suarez on this one. So you already have the PR, but the problem is the discussion we don't have right now on Victor's PR.
C
Okay, I see your point. Let's move on in the meeting, and I will get this approved.
C
Okay, you're gonna need more than me, though, so you might as well have me read it one more time.
D
I'm not sure whether everyone had a chance to look at it; I opened it as a result of some conversation I had with Victor. After reading both the aggregator spec, which we just got merged, and the view PR in conjunction, now that the two pieces are in together, I have this scenario and I'm looking for the right answer. I have a specific example in that issue.
D
So if you can just open that, I'll give a few seconds for everyone to go through it. I'll make it bigger.
D
If the issue is not clear there, I can try to explain, but I hope it's clear. I faced it in the .NET implementation, but I tried to write it in plain English without any actual code.
D
The PR for the aggregator, which got merged, does not mention anything about spatial aggregation, or merging of two aggregations. So it's mostly asking: do we anticipate such a spec to come out in the future, or do we just expect each SDK to deal with it themselves? Because this is a problem I couldn't find answers to.
A
Yeah,
I
I
I
think
first,
my
answer
would
be
105.
number
two
is
I
I
think
for
anything.
That
is
a
sum
we
imply
that
both
aggregation
on
the
time
and
aggregation
of
the
spatial
dimension
should
be
applied,
as
sum,
so
you
can
sum
everything
you
can
see.
This
is
my
total,
like
total
exceptions
on
cpu
core
one
and
total
exceptions
on
cpu
core
two,
and
then
you
can
see
for
all
the
cpu
cores.
I
just
add
them
together.
A
That is the spatial dimension. Or you can say this is the number of exceptions in the first five seconds, and these are the number of exceptions in the next five seconds, and for the full ten seconds you add them together. For me, whether it's time or another dimension, it doesn't matter: you just add everything together; that should be the behavior. In other scenarios you might have different behavior, or you might even want to make it configurable.
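[A minimal sketch of the "sum everywhere" rule A describes for sum-typed instruments: aggregating across a spatial dimension (e.g. CPU core) and across time windows are both just addition. The function names and data shapes are illustrative, not from the OpenTelemetry SDK.]

```python
# Sketch: for a sum, both spatial and temporal aggregation are addition.

def spatial_sum(points, drop_key):
    """Merge data points by summing after removing one attribute key."""
    merged = {}
    for attrs, value in points:
        reduced = tuple(sorted((k, v) for k, v in attrs if k != drop_key))
        merged[reduced] = merged.get(reduced, 0) + value
    return merged

def temporal_sum(window_deltas):
    """Add delta values from consecutive time windows."""
    return sum(window_deltas)

# total exceptions counted per CPU core (spatial dimension)
points = [
    ((("cpu", "core1"),), 3),
    ((("cpu", "core2"),), 4),
]
total_exceptions = spatial_sum(points, "cpu")   # {(): 7}

# exceptions in two consecutive 5-second windows (temporal dimension)
ten_second_total = temporal_sum([3, 4])          # 7
```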
D
Okay, so let me let Victor comment, because he gave the...
D
My first question is: do we expect that the aggregator spec would explicitly call out that there can be time-based aggregation and aggregation on the other dimension? I don't know the right term, so I'm using "spatial aggregation," because I probably heard the term from George some time back. Would we be adding any sort of details to the aggregator spec to make it clear that you're not just supposed to do time-based aggregation, you're also expected to do spatial aggregation?
D
So that's the number one thing, and second, I'll let Victor clarify why he thinks otherwise.
A
Okay, so to answer your first question: if by default folks will just fall into the pit of success, then we can be lazy and not put it in the spec; if everyone just assumes this is the only way, we don't need to state it there. But there is confusion. For example, it seems it's already confusing to you, and I would imagine it will confuse other people as well. Then it seems to me this is something we should address, and it's good that you have the issue created.
E
We can go ahead and... yes, okay. So I believe Cijo's scenario is potentially a user reporting error, because I think that in some cases he could potentially double-report.
E
A particular request, that is. I'll give an example: Cijo here took the number of requests and was able to break it down into successful and unsuccessful.
E
If that becomes a problem, the case where you're double-reporting this particular single request, we currently don't have a way to disambiguate that with any form of spatial aggregation. So I think my answer is that it really depends on the user to understand the cumulative: because this particular instrument is a cumulative instrument, the expectation is that the user already does the proper aggregation, and thus the instrument does not question it. It should not add to it, because it's cumulative. I'll stop here.
C
Yeah, so one thing is that the location variable you mentioned is a gauge, whereas the others we're discussing are sums, and there is sort of a difference there. Riley, you may remember we got to a very fine point somewhere in the SDK spec talking about asynchronous instruments, and I raised some minor point about when a user presents duplicate measurements for the same label set. The intent, I believe, is to discard one of them, because you should only have the ability to make one...
C
...one observation per label set per asynchronous instrument per time period. And I really think this gets at the exact same question we're having. I feel like with a synchronous instrument, for all the measurements, the assumption is you could just drop a label and it would be no big deal, because the counter changes are deltas, or they're histogram observations. Then, when it comes to asynchronous instruments, there seems to be some sort of need to track the original label set of the observation that was input.
C
First of all, you should probably deduplicate, because otherwise the user could make a mistake, so this may be a different case than Victor was explaining. But if you have a situation where the user accidentally reports the success=false case for verb=get twice, so they report measurement equals five and then they report it again as measurement equals five, do you detect that as a duplicate measurement, or do you add those together and just get ten?
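[A sketch of the two policies in Josh's question: within one asynchronous callback, the same label set is observed twice with value 5; the SDK can treat that as a duplicate and keep one, or sum them and get ten. The function and policy names are illustrative, not from the spec.]

```python
# Sketch: reduce one callback's observations to one value per label set,
# under two possible duplicate-handling policies.

def collect(observations, on_duplicate="keep_first"):
    out = {}
    for labels, value in observations:
        if labels in out:
            if on_duplicate == "sum":
                out[labels] += value
            # "keep_first": silently drop the repeated observation
        else:
            out[labels] = value
    return out

obs = [
    (("verb=get", "success=false"), 5),
    (("verb=get", "success=false"), 5),  # accidental duplicate
]
kept = collect(obs)                     # duplicate discarded: 5
summed = collect(obs, on_duplicate="sum")  # added together: 10
```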
C
I think it's a good question. I was hoping to say...
D
The spec does call one thing out for that particular case, where the user accidentally reports the same value for the same attributes: the SDK has the freedom to either take the first one, drop everything, or pass everything through. But in this case the user is not doing anything wrong; they are genuinely reporting two numbers. It's the view which comes and drops one attribute, which makes it look to us as if the user sent a duplicate, but the user has not made any duplicates; they are just sending...
C
It just has success, so you have these instruments with different dimensions. If you're going to compare them, or export a single dimension, meaning erase one of those dimensions, the data model does talk about applying the default aggregation, so that would be telling you to add 10 and 5 in this case, because you're erasing an attribute.
E
You'd need to know how to partition that singular aggregation across the different attributes; I think that's where the problem comes in.
E
It represents what the instrument returns, right? So in Cijo's particular case, we're using an asynchronous counter, and thus the asynchronous counter is returning, quote, what the user aggregated: five and five. So based on that being cumulative, because it's what the user aggregated, we don't add up five and five; it's what the user told us, and the total is already five.
D
What I was thinking is that we should clarify that if, in cumulative mode, the user is giving us the cumulative, we just take it as cumulative; it's already aggregated. However, if we play with the dimensions by dropping one, then the user is not to be blamed; they already gave the right value. Now it's the SDK which dropped the labels, so we have to go and do the summing anyway, even though the user already gave the cumulative, to calculate the sum of those accumulations.
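[A sketch of D's point: the user reports correct cumulatives per label set, and when a view drops a label, the SDK itself must sum the colliding cumulatives, because the SDK, not the user, created the collision. Names and data shapes here are illustrative.]

```python
# Sketch: a view drops one attribute, so the SDK sums the cumulatives
# that now share the same remaining label set.

def apply_view(observations, dropped_label):
    aggregated = {}
    for labels, cumulative in observations:
        kept = tuple(kv for kv in labels if kv[0] != dropped_label)
        aggregated[kept] = aggregated.get(kept, 0) + cumulative
    return aggregated

# user genuinely reports two correct cumulatives, one per label set
callback_output = [
    ((("verb", "get"), ("success", "true")), 5),
    ((("verb", "get"), ("success", "false")), 5),
]
# dropping "success" collapses both points onto verb=get: 5 + 5 = 10
by_verb = apply_view(callback_output, "success")
```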
E
So
I
don't
know
how
the
sdk,
given
that
you
have
multiple
of
these
measurements,
be
able
to
attribute
a
reporting
that
says
cumulative
with
these
set
of
labels
to
be
able
to
then
split
out
attribution
of
five
goals
with
this
first
attribute.
Five
goes
to
the
other
attribute,
because
we
have
no
information
to
know
that,
so
thus
it
makes
it
at
least
you
know
I
don't
understand
how
the
sdk
could
quote
drop
labels
without
having
the
understanding
that
this
is.
You
know
how
the
values
that
split
up
across
the
attributes.
C
The
point
of
the
counter
and
the
up
down
counter
instrument
is
to
say
exactly
what
you're
asking
is
to
say
that
when
division
happens,
it's
a
subdivision
and
you
add
up
the
results
and
and
the
gauge
and
the
histogram
in
the
data
model
are
different.
So
I
think
that
we
were
definitely
trying
to
answer
exactly
this
question
and
it
was
meant
and
some
of
the
examples.
C
The
very
first
examples
that
I
got
to
help
me
think
about
this,
where
the
idea
that
you
might
have
a
piece
of
instrumentation,
let's
say
that's,
going
to
count
cpu
seconds,
which
is
cumulative,
usually
when
you're
running
so
now
there
are
two
options
for
this
library
to
get
configured
with.
You
can
either
have
the
course
option,
which
will
which
will
output
the
total
cpu
seconds
on
your
machine,
or
you
can
have
the
fine
grain
option,
which
will
output
cpu
seconds
per
core
on
your
cpu,
and
maybe
I
have
a
16
cpu
machine.
C
So I'm suggesting that you could load this up and decide at runtime whether you want the runtime instrumentation to report per-CPU or just per-machine statistics. Now, those are asynchronous counters, because they're outputting total CPU seconds. One library runs with the fine grain, and one library runs with the coarse grain. The coarse grain will output one number: the programmer of that instrumentation is going to encode one number, which is the total.
C
Let's say the operating system is maintaining that total for them, so they take the cheap route, which is one measurement, one total, on one dimension: the CPU total. The other user gets the fine grain, and they have an extra attribute dimension, which is CPU core ID; instead of outputting the total for the entire machine, they output 16 values, one per core, with the attribute set to the core number.
C
You get one coarse measurement from one machine's SDK and 16 fine-grained measurements from the other machine, and you know how to add them up: they add up to the correct thing, and you can compute the system-wide CPU usage rate correctly, even though some of those measurements were calculated as a single dimension and some of them were calculated by core ID. And that's the intuition that I developed for this problem that we're...
E
...looking at right now, right. And that makes total sense, Josh. The issue here, I think, is that the spec doesn't specify, for example, if you're using an observable counter, that when you report multiple measurements across different attributes, there's no sense that these are all from, quote, one final cumulative, and you're just giving us the breakdown of that cumulative, right?
E
So
then,
the
problem,
then,
aside
from
that,
the
problem,
then,
as
you
have
multiple
measurements
coming
in
the
the
the
higher
level
default
for
the
sdk-
is
that
if
you
have
multiple
reports
of
a
you
know
asynchronous
counter
your
again.
That
higher
level
assumption
is
that
they
already
have
acute
they
already
accumul
they've
already
summed
up
accumulate,
they
already
accumulated
all
the
values
for
you,
so
they're
just
reporting
the
final
cumulative
and
because
of
that
across
time,
p1
and
t0.
E
If you have a report at t1 and you have a report at t0, you're supposed to ultimately end up with the cumulative of t1 and the cumulative of t0; you're not supposed to add them together for a cumulative, right? So then that goes down to: when we report a particular time slice, like t0, in that one observe call there may be multiple metrics, or the same metric with multiple dimensions.
E
We
need
to
know
that
those
are
should
be
summed
up
for
the
cumulative
of
gs.1
t0,
but
we
also
need
to
know
hey
by
the
way
we
also
have
split
up
or
or
slices
of
this
cumulative
in
this
time-
zero
right,
but
across
t0
and
p1.
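[A sketch of E's temporal rule for asynchronous counters: each collection cycle reports the absolute (cumulative) total, so across time the SDK keeps the latest value rather than adding t0 and t1. The class name is illustrative.]

```python
# Sketch: across collection cycles, a cumulative total replaces the
# previous one; it is never added to it.

class CumulativeSeries:
    def __init__(self):
        self.current = 0

    def record_collection(self, cumulative_total):
        # replace, never add, across collection cycles
        self.current = cumulative_total

series = CumulativeSeries()
series.record_collection(5)   # total observed at t0
series.record_collection(8)   # total observed at t1; result is 8, not 13
```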
C
I
accept
that
I
understand
this
a
little
more
detail
on
what
you're
suggesting
is
a
problem
in
spec
and
I've
had
this
same
type
of
discussion
with
others
who
this
this
same
problem
came
up
when
I
wrote
specs
a
while
back
tried
to
do
this
too
so
and-
and
I
have
a
very
good
memory
of
the
conversation,
so
I
I
sympathize
you're
saying
that
you
don't
want
to
add
last
the
last
iterations
cumulative
values
into
this
iteration
of
cumulative
values.
C
You
I
mean
I,
I
would
prefer
to
just
explain
to
you
how
I
implemented
it
the
way
I
think
the
correct
behavior
happens.
So
when
you
call
the
asynchronous
callback
for
your
counter,
it's
you
you.
In
going
back
to
that
cpu
usage
case
you're,
either
going
to
output,
one
value
or
you're
going
to
have
16
values
with
an
extra
dimension,
the
sdk,
in
its
first
phase
of
processing
those
metrics
keeps
a
unique
map
of
each
distinct
label.
Set
it's
been
done.
C
At the end of the callback, it now has a single measurement per unique label set that was output. Now you have a correct result, and at this point you can erase labels, and the result will be to sum them again, because, in my SDK prototype, that's what I was calling the first phase.
C
It's
called
the
accumulator,
so
the
accumulator
is
going
to
get
one
measurement
per
label
set
for
all
the
instruments
and
then
it's
going
to
put
it
through
a
pipeline
and
if
you
remove
attributes
during
the
pipeline,
it
just
reapplies
the
aggregator.
So
that's
spatial
aggregation
at
that
point,
and
so
victor,
I
think
your
point
is
well
taken
you.
You
can't
just
blindly
aggregate
temporal
things
for
asynchronous
instruments.
The
same
way
you
would
spatial
things
for
asynchronous
instruments.
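[A sketch of the two-phase design C describes for his SDK prototype. Phase one, the "accumulator," keeps one measurement per distinct label set from the callback; phase two, the pipeline, re-applies the sum aggregation if attributes are erased. Function names and data shapes are illustrative, not the prototype's actual API.]

```python
# Sketch: accumulator phase, then spatial re-aggregation on label erasure.

def accumulate(callback_measurements):
    """Phase 1: one measurement per unique label set (a later duplicate wins)."""
    per_label_set = {}
    for labels, value in callback_measurements:
        per_label_set[frozenset(labels)] = value
    return per_label_set

def erase_attribute(per_label_set, attribute):
    """Phase 2: removing an attribute means summing (spatial aggregation)."""
    out = {}
    for labels, value in per_label_set.items():
        kept = frozenset(kv for kv in labels if kv[0] != attribute)
        out[kept] = out.get(kept, 0) + value
    return out

# fine-grained cumulative cpu-seconds callback, abbreviated to two cores
measurements = [
    ((("core", "0"),), 30.0),
    ((("core", "1"),), 12.0),
]
machine_total = erase_attribute(accumulate(measurements), "core")  # 42.0
```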
E
Right, yes, that's correct. So one approach to this problem, well, it's not necessarily a complete solution, and I haven't thought through addressing the individual attributes, but it gets you the global overall cumulative per observation, for asynchronous instrumentation, within that call.
E
If we were to take all of the reportings of the same metric, that gives us all of the, quote, 16 slices of that CPU; we could at that point add those up and treat the total as the cumulative, as if it were that singular report. That's an easy thing to do. With the view, we then know that for the view we're only interested in these particular attributes, so I think we could do spatial aggregation per observation, but not overall between multiple collect calls of the asynchronous callback.
A
So the answer to Cijo's issue here is your option two, and the answer to Victor is: for a cumulative thing, if you report it as a cumulative sum, you never try to mix up the times. Every individual reporting cycle should have the absolute value, and you never add across different times. But for the spatial dimension, you add whatever you want, and that's the default option: if you don't care about the distinction, you just combine the first line with the second line and add them together.
C
And this raises other questions off in the corners, if you will, something like: what if I decide to erase the value and I stop reporting it? There's a problem with that in this asynchronous counter formulation, and I think we can just say it shouldn't do that. There are other data types people talk about, like gauge histogram, which might support that type of relationship from a set, but not counter.
C
So
if
the
the
user
has
to
do
the
right
thing
in
these
asynchronous
callbacks
for
sure
yeah.
C
If
you
were,
if
you
were
forced
to
manipulate
the
data
as
in
otlp
format,
it
does
say
that
so
perhaps
what
we're
trying
to
say
is
the
sdk
should
manipulate
data
safely
according
to
the
data
model,
meaning
if,
if
you
are,
if
a
view
tells
you
to
remove
an
attribute,
do
so
respecting
the
rules
of
aggregation
in
the
data
model,
which
says
that
you
should
know
what
type
of
data
point
you
have,
whether
it's
sum
or
gauge
or
histogram,
and
then
removing
attributes
means
summing
or
merging
distributions,
and
in
the
case
of
gauge,
which
we
only
have
an
asynchronous
form
of
there,
shouldn't
really
be
a
merge
and
and
then,
if,
if,
if
there
is
you're
talking
about
tie
breaker,
because
these
two
values
effectively
were
were
produced
in
the
same
callback
with
different
label
sets,
and
then
you
erased
a
label
set.
D
I was referring to the fact that, in the case of asynchronous gauges, there should not be any sum, because by definition it's not supposed to be summed, and this...
E
Well, so, Josh, just thinking out loud: for an asynchronous gauge, if you were to provide different attributes, you technically, in the backend or in the report, could say, hey, what is my last value given these spatial attributions? So I think, if we think deeper, you can actually do spatial aggregation and attribute dropping even for something like a gauge, which is last-value: it's really the last known value for the given attribute set that you said you want.
C
Yes, I'm actually not saying that's incorrect or anything like that. There's sort of an option, though. I think one is to say: okay, I had 16. Let's just suppose it's CPU temperature, because that corresponds to my last example. So you have either 16 measurements of CPU temperature or one measurement of CPU temperature, and maybe the instrumentation with one is just taking the average and reporting it, and with the 16 you're actually getting 16 measurements.
C
Now, if the application says, let's erase that CPU core ID from the instrumentation library that gave you 16 temperature measurements, you can take the last one, but it's not really ordered, so that doesn't make sense. So either you can convert it to a distribution, at which point we're talking about a gauge histogram, or you can take an arbitrary one. I think a random choice is fine; that's a good probabilistic answer, and last value is equivalent if the order is random. Average value is probably the next best, but we're getting opinionated about the data at that point.
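[A sketch of the options C lists for erasing an attribute on a gauge (16 per-core temperatures collapsing to one machine value): take an arbitrary reading, or take the average; a gauge histogram would instead keep the whole distribution. The policy names are illustrative.]

```python
# Sketch: possible policies for collapsing gauge readings when an
# attribute is erased; neither is authoritative per the discussion.

def erase_gauge_attribute(values, policy="arbitrary"):
    if policy == "arbitrary":
        # random/last-value choice: any single reading stands in
        return values[0]
    if policy == "average":
        return sum(values) / len(values)
    raise ValueError("unknown policy")

temps = [41.0, 43.0]                           # per-core CPU temperatures
one = erase_gauge_attribute(temps)             # an arbitrary pick: 41.0
avg = erase_gauge_attribute(temps, "average")  # 42.0
```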
A
Yeah, and the definition varies depending on the scenario; remember the car battery voltage example that I gave in some PR. So currently it's not well defined, and I think it's impossible for us to define a single model for all the scenarios. It's just up to the user, and probably in the first version we can say it's not even a supported scenario.
E
Sorry, Riley, but by the way, I don't fully agree with your comment that the answer is two. The answer is two in this particular example, but I think the expectation is that the user wants the number to be representative of the actual physical requests, while their ability to give us that data, outside of each individual one broken out to all the possible attributions, isn't necessarily there.
D
Can you paste an example in the issue where this answer does not seem correct? Then, based on that, we can decide whether we need a spec clarification or we can just close this. Yes.
D
For now, fine, but if someone else faces the same issue, I would expect that we clarify the notion of spatial aggregation in the spec itself, because something is missing; it's kind of implicit. If more people are confused, then we should definitely add it. So my suggestion is: if Victor can give another example where this does not work, then we need to make it an actual spec change, but otherwise I'm fine with the answer.