From YouTube: 2021-08-24 meeting
A
Hi everyone. I got a message from Riley saying he would be late to this meeting, so I can try to start talking.

A
Okay, Riley can't make this entire meeting, and Josh can't make it for the first 20 minutes. So here we are, talking about things. I will be glad to project what Josh wanted us to talk about.
B
Right, so on the C# side I think Cijo has been trying to provide a PoC for it. Cijo's on the call, so he may have more comments on it.

B
And I see Cijo's here... oh sorry, he's joining the same meeting. So this particular question is just about the previous C# implementation, and I'm just giving a description of it.
B
I
think
what
this
comment
means
is
that
previously
we
talked
about
the
exporter,
somehow
informing
the
aggregation
or
the
aggregator
about
what
temporality
type
that
it
is
intended
for,
and
so
in
our
prototype,
when
we
call
the
collect
function
or
the
export
function,
we
actually,
you
know
that
function
actually
has
a
parameter.
That
says
we
want
this
in
as
a
you
know,
cumulative
or
we
want
this
as
a
delta.
We
we
do
not
see
that
in
the
in
the
spec,
nor
you
know
so
we
don't.
B
We
didn't
know
exactly
how
to
deal
with.
That
is
the
exporter
who
is
responsible
for
the
final
temporality
of
the
instruments
and
the
aggregation
that
is
collected
so
that
I
think,
is
generally
the
question.
A
So I can offer my own opinion. One of the hopes that we had for the OTLP exporter is that you might be able to export both temporalities for a sum, for example. If your instrument is an asynchronous counter, the input is cumulative, and if your instrument is a synchronous counter, the input is delta. If you'd like to have a stateless export path with no memory requirement, you're going to end up exporting both, and the exporter needs to know which it's getting.
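The mapping A describes, where each instrument kind has a native temporality that a stateless export path passes straight through, can be sketched roughly like this. This is a minimal illustration in Python; the type and function names are invented for the sketch and are not any SDK's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Temporality(Enum):
    DELTA = "delta"
    CUMULATIVE = "cumulative"

class InstrumentKind(Enum):
    SYNC_COUNTER = "sync_counter"
    ASYNC_COUNTER = "async_counter"

def native_temporality(kind: InstrumentKind) -> Temporality:
    # In a stateless export path, each instrument emits whatever
    # temporality its inputs naturally have: synchronous counters
    # accumulate deltas between collections, while asynchronous
    # counters observe cumulative totals.
    if kind is InstrumentKind.SYNC_COUNTER:
        return Temporality.DELTA
    return Temporality.CUMULATIVE

@dataclass
class DataPoint:
    value: float
    temporality: Temporality  # the exporter must be told which it is getting
```

With no conversion state anywhere, an exporter receiving a mixed batch has to read the temporality off each point, which is exactly why the exporter "needs to know which it's getting".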
A
I feel like there probably are several implementation strategies that can give you that, and I'm wary of trying to overspecify, but in my case the OTel Go prototype basically requires that the exporter request what it wants in any particular case.
A
Yeah, there's a metric descriptor object which says which type of instrument this was, and you can say: I would now like to get a piece of data for this instrument, and here is which temporality I'm interested in. It's a selector, so if the data is cumulative you can just say keep it cumulative, if it's delta you can say keep it delta, or you can ask for a conversion. And the way this works in the exporter for Go...
A
...is you present this information twice. Once is when you're choosing your aggregator: the first time you recognize this aggregator, you're going to put it into the pipeline. There's some state management that's implied, and that state management happens before the first export. In other words, before you begin your first export, you're going to make a decision about whether to compute, say, a delta-to-cumulative conversion.
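A temporality selector of the kind described, consulted once per instrument before the first export so the pipeline knows whether to allocate conversion state, might look roughly like this. The names are invented for illustration and are not the otel-go prototype's actual API.

```python
from enum import Enum

class Temporality(Enum):
    DELTA = "delta"
    CUMULATIVE = "cumulative"

class InstrumentKind(Enum):
    SYNC_COUNTER = "sync_counter"    # native input: delta
    ASYNC_COUNTER = "async_counter"  # native input: cumulative

def prometheus_style_selector(kind: InstrumentKind) -> Temporality:
    # An exporter like Prometheus always wants cumulative output,
    # regardless of the instrument's native temporality.
    return Temporality.CUMULATIVE

def needs_conversion(kind: InstrumentKind, selector) -> bool:
    # Decided once, before the first export: if the requested
    # temporality differs from the instrument's native one, the
    # pipeline must set up conversion state for this instrument.
    native = (Temporality.DELTA if kind is InstrumentKind.SYNC_COUNTER
              else Temporality.CUMULATIVE)
    return selector(kind) is not native
```

The point of deciding this up front is the state management A mentions: a sync counter feeding a cumulative-requesting exporter implies a delta-to-cumulative conversion, and that memory has to exist before the first collection.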
A
This may sound like a complication, but the reason I did it was that I wanted to see at least a proof of concept for a single export pipeline that handles both cumulatives and deltas in the same place. The way it's done is that you only ever need two pieces of state per metric: either the stateless thing you got in, or the conversion that you're asking for. So if you just keep those two words of memory, you can do both. But that is probably more complicated than needed, and I definitely wouldn't expect everyone to do it that way.
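The conversion half of that state, turning delta inputs into cumulative outputs by remembering one running sum per stream, can be sketched like this. The class and method names are invented for illustration.

```python
class DeltaToCumulative:
    """Converts delta inputs to cumulative outputs by keeping a running
    sum per stream. The incoming point plus this sum are, roughly, the
    'two pieces of state per metric' described above."""

    def __init__(self):
        self._running = {}  # stream key -> cumulative sum so far

    def push(self, key, delta):
        # Fold the new delta into the running total and emit the
        # cumulative value for this stream.
        self._running[key] = self._running.get(key, 0) + delta
        return self._running[key]
```

A stream that never needs conversion simply bypasses this object, which is how the same pipeline stays stateless for exporters that accept the native temporality.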
B
Yeah, so our current PoC is very similar in that respect. The question we had was: if we were going to go down this route, we actually made the role of cumulative-to-delta and delta-to-cumulative conversion part of what we call our aggregator, which is the thing that supports aggregation. So the aggregation, or aggregator, in our case is just this black box.

B
Obviously this isn't spec'd out anywhere, because it's implementation detail. But that's why Cijo's specific question here is: is the spec expected to solve this problem? We said earlier that the exporter may have the ability to inform the aggregation of the temporality, and I didn't see that in the spec today.
A
Got it. Then my answer to the question is: the exporter needs to inform the pipeline, long before the first export, of what it expects, and then it should repeat its expectation. I'm saying that because I know you can easily implement an aggregator that does both cumulative and delta, and the reason is you're going to get the last input.

D
One exporter which is requesting cumulative, and the other one is requesting delta.
A
Yeah, that's how I had it for Prometheus. You could say: I'm going to give Prometheus cumulative numbers, and I'm going to have an export be stateless for OTLP, meaning I'm trying to migrate to an OTLP path where I don't have to keep memory, but for now I'm also monitoring with Prometheus and I've got cumulative going out.

D
Are those still there? Because as far as I remember, you were able to build a pipeline using them, basically having multiple processors and multiple exporters tied to them.
A
I'm always wary of these conversations, because we get into corner cases that are kind of irrelevant for the default configuration. So I don't like to have lengthy conversations about this stuff, but I know that we've had discussions in the past where you can certainly describe a rationale for having multiple of anything in this pipeline.
A
If,
if
you,
if
you
want
to
have
multiple
exporters
and
support
combined
temporality,
that's
a
option,
you
maybe
could
choose,
and
I,
but
I
wouldn't
try
to
expect
that
for
people
and
I
likewise,
I
don't
feel
like
we
need
to
expect
multi-export,
except
in
as
much
as
it
relates
to
combin,
combining
different
temporalities,
because
multi-export
is
just
another
flavor
of
export.
That,
like
calls
two
exporters
and
unless
you
need
to
have
independent
failure
modes,
which
is
when
things
get
complicated
and
if
you're
having
independent
failure
modes,
you
shouldn't
do
it.
A
If
that's
what
you
want,
you
should
have
another
phase
earlier
in
the
pipeline,
doing
the
teeing
or
whatever.
So
I
just
don't
want
to
spec
this
out
victor
how's.
That.
B
Yeah, because internally, between me and Cijo, we debate about whether or not our code should support multiple exporters in a pipeline. So do we follow the spec? Do we go our own way?
A
I
guess
the
reason
why,
when
I
first
got
to
this
point
I
had
a
pull
exporter
and
a
push
exporter
and
the
push
explorer
is
the
one
true
exporter
and
the
pull
exporter
is
like
a
read
access
to
intermediate
state
and
it's
not
really
an
exporter,
and
so
I
still
have
only
one
exporter
so
you're
gonna
need
I
mean
I
need
to
have
like
I'm
sending
otlp
to
two
places,
because
I
want
redundancy
for
push
based,
metrics
or
something
like
that
and
and
then
yeah
do
you
expect
that
we
have
to
then
you
get
into
talking
about
failures
and
like
what
happens
when
one
of
them
stalls-
and
I
I
just
feel
like
we're
outside
of
scope,
somehow.
B
Yeah-
it's
a,
I
don't
know.
I'll
put
it,
I
mean
the
the
use,
our
user.
I
don't
think
will
suffer
either
way
right,
so
the
user
will
maybe
convenient
or
inconveniently
will
just
have
to
set
up.
You
know
either
two
pipelines
or
one
pipeline
with
the
proper
configuration.
So
that
really
isn't
I
I
don't
know
that
it's
a
convenience
issue
for
the
user
or
confusion
for
the
user.
B
However,
having
said
that,
then
from
a
just
purely
efficiency
perspective,
how
do
you
specify
you
know
if,
if
I'm
a
user-
and
I
just
have-
I
have
already
my
instrument-
my
instrumented
item-
which
I
can't
change
right,
so
I
want
to
just
send
my
telemetry
to
otlp
and
some
internal.
You
know
which
is
very
common
here
microsoft.
We
want
to
send
it
to
our
internal
system
as
well
as
an
external,
but
there's
some
you
know
configuration
for
each
of
those
so
that
what
is
the
best
way
to
do?
A
Yeah,
so
is
this
I
I
don't
know
I
don't
I
think
you're
asking.
Should
the
specs
say
you
must
support
multiple
exporters
and
I'm
thinking.
A
Yeah, so I think in recent weeks we've realized that we can write the view spec to say what you must do and not say how to do it. So is there a view-spec answer here? In other words: just write multiple views and multiple exporters, name the exporters from the views, and then your implementation should support that somehow, and we don't have to say how you do it.
B
I
I
don't.
I
don't
necessarily
think
this
is
related
to
the
view
per
se.
I
would
think
that
this
is
the
view
stays
as
is,
and
then
for
when
you
create
an
exporter,
the
sdk
may
allow
the
exporter
to
be
configurable
by
its
own.
Whoever
implements
the
exporter
may
may
may
have
its
own
level
of
you
know
what
metrics
it
wants
to
export.
B
I mean, the spec as it stands right now allows for one exporter, which is, I guess, the simplest form, and probably enough for the first version. But Jonathan kind of asked what happens when you have multiple exporters, so I think maybe there's some expectation of that.
D
I
can,
I
can't
think
of
like
two
use
cases.
One
is
like
I
guess
it's
valid,
the
other
one
is
like
debatable,
and
one
of
them
is
if
you
like,
migrate
from
one
metrics
backend
to
another
one,
and
while
you
are
doing
this
migration,
you
want
to
keep
the
data
in
both
places
and
the
other
one
which
is
kind
of
debatable.
A
So your second use case is one where you're just debugging yourself, for diagnostics. That's definitely one that Josh named last week in this discussion, and I've done that myself. Although that's the case, again, where it looks like a pull export, not a push export, and it's hard to classify a pull export as a true export. So I want to scope this discussion down to just push export.
A
Should
we
support
two
export
two
push
exporters
in
the
spec
in
a
way,
that's
like
firm,
and
I
think
I'm
hearing
that
as
a
request.
I,
I
wonder
how
to
write
it,
of
course,
and
it
sounds
like
we're
saying
that
there
should
be
configuration
support
for
multiple
exporters
that
have
different
configurations
that
are
wired
together
in
some
way,
and
I
I
worry
it's
just
getting
out
of
hand
complexity-wise
like
especially
don't.
A
Like
to
talk
about
failures
independently,
like
like
you
can
chain
them
together,
they
will
call
one
after
another
and
if
one
fails,
they
all
fail
or
something
like
that
or
if
one
fails,
the
better
fail
fast
and-
and
you
can
continue
the
next
one
but
the
but
the
export
like
the
sdk
upstream
of
the
exporter,
sees
one
exporter.
That's
been
changed.
B
Yes, I was going to add that I think Riley talked about Bogdan's idea of a reader, and maybe that's the way we move forward with multiple exporters and such. But I haven't seen the spec on that yet, so I don't know if or when that's coming.
A
So
I
so
let
me
just
re
rephrase
that
so
I'm
I
think
that
to
wrap
up
the
current
conversation
we
can.
I
it's
easy
to
say.
Yes,
you
can
have
more
than
one
exporter
as
long
as
they're
configured
to
look
like
one
exporter.
A
Is
needs
to
be
sort
of
added,
and
I'm
I
my
fear
here
is
everything-
becomes
a
configuration
problem
and
we're
not
talking
about
metrics
or
telemetry
anymore.
We're
just
talking
about
crazy
configuration
problems.
I
don't
want
to
do
that,
but
the
second
point
there
was
that
reading
your
own
metrics
is
a
seems
to
be
a
use
case.
It
looks
like
polling,
metrics
and,
and
that
seems
like
it
will
be
addressed
by
a
pulse
back,
that
that
riley
is
working
on.
So
maybe
we
don't
really
need
very
much
here
at
all.
A
I
just
don't
see
what
what
needs
to
be
written,
but
I
want
to
leave
this
comment
so
that
we
can
hopefully
resolve
it
and
merge
this
will
cjo
or
victor.
Will
you
take
an
action
to
discuss
this.
A
There's another issue that Cijo brought in last week, about how we deal with the dropping of labels in a view, which we discussed last week. I'm trying to find the PR on it, and I don't know if there's anything additional. Oh, yes: it's issue 1874.
A
We
concluded
with
an
answer
and
we
decided
that
we
wanted
to
refine
the
spec
to
to
address
the
to
put
the
answer
in
in
spec
language.
Is
that
correct.
A
It
this
is
a
question
about
the
data
model
where,
in
my
memory
at
least,
we
did
add
words
in
the
data
model,
spec
saying
what
the
intention
of
these
data
points
was.
Otlp
data
points
is
that
they
represent
an
aggregation
anytime.
You
end
up
merging
two
of
them.
You
repeat
the
aggregation.
So
if
you're
combining
some
points,
you
add
them
together.
If
you're
combining
histogram
points,
you
combine
those
distributions
and
if
you're,
if
you've
got,
engage
points.
A
That's
where
we
ran
into
this
like
extended
discussion
about
something
that
almost
nobody
cares
about,
which
is
where
you
begin
to
talk
about
this
thing
called
a
gage
histogram
or
you
begin
to
say
it
doesn't
matter,
just
take
the
last
value
and
and
then
you
people,
people
get
a
little
concerned
about
that.
You're
dropping
an
attribute
completely,
but
the
data
model
does
talk
about
this
very
stuff.
B
Yeah,
so
I
think
I
answered
the
question,
but
I
think
there's
a
slightly
different
additional
problem.
Put
you
know
on
top
of
probably
what
you
just
mentioned
as
well,
but
additionally,
it's
I
think
we
just
clarify
in
when
you
specify
the
view
you
specify
the
the
the
set
of
attributes
that
or
labels
sorry
labels
that
you
want
when
siegel
well,
okay.
So
so
the
question
then,
is
if
you
wanted,
we
talked
about.
B
Riley
talked
about
the
multiple
you
know
instrument,
you
have
the
temperature
and
you
want
the
you
know
the
the
each
of
the
different
machine
versus
you
want
the
total,
so
my
general
assessment
I'll,
try
and
simplify
it
is
that
I
think
during
the
view,
you
need
to
specify
all
of
the
labels
that
constitute
the
whole
of
what
that
you
know
aggregation
or
what
those
instruments
will
include
and
if
you
included
all
the
labels
that
makes
up
the
whole
of
that,
then
you
may
or
may
not
be
able
to
then
say
of
these
labels
that
make
up
the
whole,
which
one
of
these
do.
B
I
want
to
group
by
or
summarize
by
per
se,
and
if
we
were
doing
something
like
that,
then
what
that
gives
us
is
the
ability
to
say
to
be
able
to
safely
pick
out
or
drop
sub
labels,
because
we
know
what
constitute
the
whole
and
in
the
example
that
sigil
gave,
that
I
kind
of
augmented
is
that
when
you
give
an
instrument,
you
give
a
whole
bunch
of
labels.
We
do
not
know
the
intention
that
the
instrumenter
you
know
when
they
were
instrumenting.
B
We
don't
know
the
the
instrumenter's
intention
of
what
makes
up
a
whole
right.
So
I
don't
know
if
that
makes
sense.
I
I
could
give
a
better
example.
A
I
think
we
should
keep
talking
about
this,
so
others
understand,
I'm
not
sure.
If
I
agree
I
do
understand,
at
least
when
I
was
trying
to
say
the
data
model
talks
about
this
topic
a
lot.
The
idea
was
that
the
whole
means
something
I
like
that
you've
used
that
phrase
the
whole
does
mean
something
and
in
the
when
you're
talking
about
some
points,
that
the
whole
is
the
thing
you're
summing
and
we're
talking.
A
Points
the
hole
is
a
measurement
yes
and
you
can
always
have
more
than
one
measurement,
but
you,
but
so
for
gauges.
When
you
have
more
than
one
measurement.
You
are
literally
talking
about
more
than
one
whole
measurement
and
for
sums.
Whenever
you
have
more
than
one
measurement
sum
them
up,
you've
got
a
hole.
A
So I like to think about this, at least, for synchronous instruments. We're talking mostly about sums, but we can also do the same exercise for gauges, for a synchronous instrument.
A
If
you
have
a
sum
and
you're
and
you're
dropping
labels,
really
no
big
deal
you're
going
to
end
up
adding
those
inputs
to
the
same
aggregator.
So
if
I
had
three
consecutive
synchronous
updates
for
three
labels
and
I
dropped
those
three
labels,
they
end
up
part
of
the
same
sum
on
the
in
like
on
the
synchronous
path.
They
all
hit
the
same
aggregator
right.
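The synchronous case A describes, where updates whose labels differ only in dropped keys all land in one aggregator, can be sketched as a toy illustration. The function and label names here are invented.

```python
def aggregate(updates, keep):
    """Sum synchronous updates after erasing every label not in `keep`.
    Updates that differ only in erased labels share one key, so their
    values fold into the same sum, as described above."""
    sums = {}
    for labels, value in updates:
        key = tuple(sorted((k, v) for k, v in labels.items() if k in keep))
        sums[key] = sums.get(key, 0) + value
    return sums
```

For example, three updates carrying distinct "host" labels collapse to a single total when "host" is dropped, and stay separate when it is kept; the result is the same whether the label is erased before or after the updates are recorded, which is the intuition A appeals to.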
B
The problem comes when you add a view and you only select a portion of those labels. So maybe my thinking about the implementation of the view is wrong in my mind, but the instrumenter does an add, and it knows the whole, so it gives you all the labels associated with the whole.
B
So that's why I'm saying I think the problem is that when you specify the view, the view generally just implements a different aggregator, in this case the add aggregator. So when you provide a view, you need to provide the filtering for the labels that constitute your, quote, new whole, because if you don't tell the view what your total whole is, the one the instrumentation originally intended, then you only see a subset of the whole.
B
Someone is instrumenting, in this case, I don't know, a request, a GET request, okay? And if you ignore the first one, which is the total sum... actually, if you scroll up, sorry Josh, a bit, up, up, there, that one, yeah, sorry, the setup. So the instrumenter provides this set of information in the t1 example, which is: hey, I have a request, and the instrument is classifying it into two groups.
B
It's
a
success,
true
or
false,
and
then
separately
a
location
but
not
doesn't
include
true
or
false.
It's
just
you
know,
so
the
instrumenter
wants
to
know
at
the
end.
Is
that
how
many
success?
How
many
gifts
are
successful?
How
many
are
are
not
successful
and
separately
how
many
in
location
a
regardless
of
whether
successful
or
not
or
how
many
is
in
location
b,
regardless
of
whether
it's
successful
or
not
right.
So
the
problem
comes
in
this
instrumentation
here.
Is
that
what
verbs
or
what
labels
actually
constitute
the
whole?
B
Because
if
we
didn't
know
that-
and
we
got
this
this
report
in
one
callback
and
if
you're
saying
the
aggregator
only
sums
all
of
the
values
up,
then
it's
double
counting,
because
it's
counting
the
success
equals
true.
Success
equals
false
and
location
equals
a
location
equals
b
right.
So
that's
problem.
One
problem
number
two:
if
the
view,
if
you
specify
the
view
to
say
I
only
interested
in
the
get
verb,
then
that
means
we
filter
out
the
success
and
the
location.
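The double counting B describes can be shown with invented numbers: suppose 100 GET requests are reported once broken down by success and once broken down by location. Summing all four points after erasing every label counts each request twice. The labels and values below are made up purely for illustration.

```python
# Four observations for the same 100 GET requests, reported with two
# different label breakdowns of the same whole (values are invented):
points = [
    ({"verb": "get", "success": "true"}, 70),
    ({"verb": "get", "success": "false"}, 30),
    ({"verb": "get", "location": "a"}, 55),
    ({"verb": "get", "location": "b"}, 45),
]

# Erasing all labels and summing treats the two breakdowns as disjoint
# streams, so every request is counted twice:
naive_total = sum(value for _, value in points)
assert naive_total == 200  # double the real whole of 100
```

The aggregator cannot detect this on its own, because nothing in the data says that {success} and {location} are two decompositions of the same whole rather than four independent series.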
A
I
grant
you
that
it's
more
complicated,
but
I
was
I
started
with
a
synchronous
example
because
I
think
it
it
matches
our
intuition,
which
is
like
you,
can
either
drop
the
label
or
not
before
you
put
the
measurement
in
and
you
can
still
get
the
same
result.
You
raise
a
good
point
about
double
counting.
I
I
want
to
call
that
out.
I
also
want
to
remind
us
of
this
rule
that
we
wrote
in
the
data
model,
I
think,
to
address
the
same
type
of
thing,
which
is
about
the
single
writer
principle.
A
So
if
there
is
a
double
counting,
it's
almost
certainly
violating
the
that
rule
that
we
created
for
this
reason,
and
so,
if
you're
going
to
filter
and
make
the
appearance
of
somehow
two
counts.
For
the
same
thing,
I
think
you
should
rename
your
metric,
that's
my
attitude,
but
I,
but
I
think
I've
completely
gotten
away
from
the
question
you
were
asking,
which
is
about
asynchronous
instruments,
and
in
this
case
I
also
think
I
can
answer
it,
but
I
want
to
make
sure
that
you
have
a
chance
to
to
speak.
A
So,
okay,
so
now
we're
talking
about
asynchronous
instruments
and
I
think
the
the
the
implementation
challenge
that
you're
facing
looks
something
like
this.
You
have
been
given
two
views
for
this
metric
and
you
they're
being
reported
asynchronously
the
programmer.
A
Yeah, and so the difference between a synchronous and an asynchronous instrument in this context is that you have to recognize the full label set when these measurements are input; otherwise you see duplicates or single-writer violations. Yes, yes, that's correct, and I think that's okay and I think we can spec it. Is that going to solve your question? Because I think what we want to see happen is that the asynchronous callback recognizes four distinct streams, because they have distinct label sets, even though you're going to change the label sets before you export the input.
A
The
asynchronous
callback
produced,
four
measurements
with
four
label
sets,
and
this
is
now
now
I
want
to
talk
about
implementation
strategy
from
my
otel
go
prototype.
This
is
exactly
what
happens.
Is
that
you
go
through
the
callback
you
you
see,
those
four
measurements
they
have.
Four
distinct
label
sets,
and
then
you
feed
them
into
your
metrics
processing
pipeline.
Your
metrics
processing
pipeline
could
erase
a
label
which
will
cause
re-aggregation
to
happen.
G
Sorry, I want to call out that you should not make metrics in this fashion. That's actually just a bad setup of your instruments, because you're literally reporting double counts there, right? With verb get, location a, you're definitely double counting your metrics. You should definitely have different metrics here. If you look at how you configured it, everything seems to be working appropriately, but I would argue we shouldn't set up instruments in this fashion, generally.
B
Well, no, no, but sorry, backing up here. I get your question, Josh, and we'll get to that. So, to Joshua: why do you think that this particular callback would be incorrect?
G
So
so
what
I'm
suggesting
is
the
definition
of
your
your
view
or
the
instrument
itself
are
kind
of
not
working
together,
so
so
it's
doing
exactly
what
you've
told
it
to,
but
the
way
you
set
up
this
metric,
it's
it's
easy
to
abuse
effectively.
So
if
you
think
of
the
default
aggregation
of
removing
labels
on
a
sum
is
to
add
things
together,
we've
defined
a
metric
here
where,
if
I
remove
labels,
I
actually
get
a
sum
that
has
no
valuable
meaning.
A
There's a statement somewhere, a high-level idea, saying that every attribute should be meaningful in isolation, or something like that: if you can aggregate by this attribute, it ought to mean something. I think Josh is calling out that when you mix your label sets, and I didn't notice it at first in your example while you were just talking through it earlier, the fact that these four measurements have different labels on them is leading to the type of confusion that Josh is mentioning, I think.
G
Effectively, you should have four measurements, but they should be: success true, verb get, location a; success true, verb get, location b; success false, verb get, location a; and success false, verb get, location b. Then you can aggregate those appropriately and pull out just your successes or just your locations.
G
But
the
fact
that
you
have
this
divorce
label
set
for
the
same
metric
is
leading
to
the
the
inherent
problem
there
right
so
like
if
you
don't
have
a
consistent
set
of
labels
per
metric
per
instrument
per
instrumentation
library,
we're
going
to
run
into
all
sorts
of
oddities,
and
that
that
I
think,
is
the
fundamental
problem
here.
That's
leading
to
this
weird
question
of
yeah.
The
view
looks
weird,
and
the
answer
looks
weird,
but
fundamentally
it's
it's.
G
The
label
set
choice
that
led
to
that
kind
of
problem,
and
we
do
have
like
josh
was
saying.
I
wish
we
had
different
names
by
the
way,
because
it.
G
Myself,
like
josh,
was
saying
we
have
these
these
specifications
around
how
label
sets
should
work,
especially
across
sums,
and
how
removing
a
label
should
lead
to
another
meaningful
sum.
B
So
I
totally
agree
with
you
josh
on
on
what
you
just
said
here
and
does
my
my
conclusion
to
this.
Is
that
the
reason
for
that
is
that,
given
a
set
of
measurements,
we
really
don't
know
what
constitutes
what
labels
that
constitute
the
hole?
What
constitute
the
whole
could
be,
how
you
do
your
instrumentation?
B
In
other
words,
I
instrumented
this
incorrectly
and
I
have
to
break
out
all
the
labels
does
that
every
measurement
constitute
the
whole
in
terms
of
all
the
label
set
or
if
I
do
it
at
a
late,
binding
perspective.
I
need
to
provide
the
view.
The
label
sets
that
constitute
the
whole
so
either
way
and
how
we
configure
it
you're
right
in
that
the
problem
set
is
that
we
need
to
know
what
the
label
sets
that
makes
up
the
hole,
and
so
we
could
then
do
one
can.
B
Well, the scenario I give in t1 is very typical of what I see. We could educate the instrumenter to do it differently, but what I see today is that at the callback, or even with a synchronous instrument, they have a measurement and they're just throwing in a whole bunch of labels to describe all of the data points they have there. I don't know that they would specifically group their, quote, whole instrument and call different instrumentation or different...
know
records
measurements
for
these
different.
You
know
groups
of
data
and
then,
additionally,
the
grouping
of
what
makes
the
whole
can
change
downstream
after
they've
done
the
instrumentation
so
so
to
one
extent
josh,
I
you
know
both
josh's.
I
agree
with
you
that,
ideally
from
an
instrumentation
perspective,
they
should
just
do
the
right
instrumentation
and
only
provide
recordings
that
constitute
the
whole.
B
But
I
think
in
some
practices-
and
you
know
caveat
that
with
you
know
only
based
on
my
own
experience-
is
that
we
probably
could
delay
that
definition,
because
the
instrumenter
just
wants
to
give
you
a
measurement
and
give
you
all
the
data
set
that
they
have
and
that
it
is
possible
downstream.
To
then
be
able
to
say,
I
only
want
to
consider
you
know:
label
a
and
label
b
to
be
my
whole,
but
then,
in
the
same
exact
you
know
separate
query.
G
Okay, so you're arguing that, basically, if the instrumenters did not instrument in a way that is flexible for views, we should provide a way to support this use case where there are mismatched labels. If that's what you're arguing, that's fair. I think we can take that offline and put together a proposal on what that would look like. There would be a change to the view API to somehow filter metrics.
G
To
get
this
right
if
you're
arguing
that
like
we
should
encourage
instrumenters
to
do
this
and
that
we
should
just
accept
that
people
will,
I
think
we
should
do
our
best
to
make
sure
a
documentation
recommends
against
this
kind
of
a
setup
and
recommends.
You
know,
recommend
something.
G
That's
more
easily
consumable,
like
one
of
the
things
that
I'm
seeing
in
the
java
instrumentation,
for
example,
is
we're
trying
to
be
very
verbose
with
the
set
of
labels
that
we
provide
out
of
the
box,
and
we
want
to
have
a
consistent
set
of
labels
and
a
rich
set
of
labels
to
support
all
possible
use
cases,
and
the
reason
we
need
views
is
because,
once
you
take
that
giant
set
of
labels,
you
run
into
cardinality
problems
in
almost
every
metric
back
end.
G
And
so
we
start
from
a
giant
set
of
rich
labels
that
are
consistent
on
every
single
point.
And
then
we
pair
down
with
views
down
to
things
that
our
metric
backend
support
right.
And
so
that
is
why
you
know
like
personally,
I
was
pushing
for
this
sdk
to
have
views
out
of
the
box,
and
that
is
the
primary
use
case.
This
use
case-
I
I
don't
know
if
we
want
to
optimize
for,
but
I
would
be
happy
to
see
a
pull
request.
Expanding
view
configuration
to
solve
this
with
the
fundamental
opinion.
B
So
this
question,
josh
suresh,
the
instrumentation,
may
potentially
come
from
separate
libraries,
even
though
they
may
constitute
one.
How
do
you
guarantee?
How
do
you
force
the
guarantee
that
every
measurement
taken
will
have
the
full
complete
label
set,
especially
if
you
have
updated
code
where
they
add
new
labels
to
the
labels?
So.
G
You
can
already
filter
on
instrumentation
library
in
your
view
setup,
so
you
should
be
able
to
filter
on
the
different
instrumentation
libraries
and
the
different
meters
that
were
registered
today
in
the
way
views
work.
So
that
would
be
the
step
one
is
I
could
like
if
these
are
coming
from
different
instrumentation
libraries,
they're,
actually,
two
different
instruments.
Yes,.
G
And
so
I
should
be
able
to
filter
out
that
I
should
be
able
to
make
views
against
them
individually
and
I
would
actually
get
the
verb
get
count
separately.
So
I
would
get
one
metric
from
the
one
instrumentation
library
70
and
I
would
get
another
instrument
from
a
different
instrumentation
library
saying
105
right,
because
I
would
actually
have
separate
metrics
there.
G
I
actually
think
that
we
might
need
to
expand
the
metric
data
model
to
include
instrumentation
library
and
identity.
I
don't
want
to
do
it,
but
if
you
look
at
the
specification
of
the
api
today
and
you
look
at
the
specification
of
views
today
when
I
went
and
did
my
implementation,
it's
apparent
that
the
way
things
are
specified.
G
Instrumentation
library
is
part
of
metric
identity
right
now.
Yes,
if
you
read,
if
you
read
between
the
lines
in
the
spec
and
so
to,
you
can
already
fix
this
with
instrumentation
library.
If
you
use
that
as
your
filter
to
your
view,
if
these
are
coming
from
different
instrumentation
libraries,
so
I
don't
think
we
need
to
dive
into
that.
There's
still
the
open
question
of,
should
instrumentation
library
be
part
of
your
raw
metric
identity
and
that's
something
that
I
wanted
to
talk
about,
but
not
I
don't
think
it's
worth
talking
about
right
now.
A
Okay, I still feel like challenging this notion. If the instrumenter wrote what we see here for t1, just take them at their word: they may have left out location on two of their data points, and they may have left out success on two of their data points, but it is one metric, in one library, and the meaning prescribed is that you can sum them to make a whole, and any erasure of label sets will give you the same sum.
A
That's
what
we're
after
and
if
you
are
using
attributes
in
a
way
that
there
is
a
meaningless
aggregation
when
you
remove
one
or
the
other
attribute
you've
done
it
wrong,
and
we
can
find
words
about
that.
I'm
sure
there's
some
written,
I'm
sure
we
could
write
more.
You
know
prometheus,
has
a
very
clear
statement
about
this
when
you're
using
attributes,
the
decision
to
use
an
attribute
as
opposed
to
a
new
metric
must
be
based
on
the
same
criteria.
They
should
be
the
same
hole,
I
think,
is
what
we're
after
anyway.
G
I'm hoping to get some of our internal metric guidance exported into the spec over time, with pull requests that people can accept and review. Not all of our internal guidance applies to OpenTelemetry, but we do have a lot of guidance around good label design, stable label design, how to make labels that you can migrate over time, that sort of thing, and we'll try to get it out.
B
Okay,
yeah,
so
so
just
to
complete
this
particular
statement
and
by
the
way
I
I'm
you
know
not
challenging
any
of
the
statements,
we're
saying
here
so
in
this
particular
just
for
finishing
this
particular
conversation,
the
if
you
go
scroll
down
the
expect.
The
my
analysis
of
this
particular
problem
is
that
we
need
to
when
you
specify
the
view
we
just
need
to
know
the
whole
if
you're
getting
the
get
verb.
The
whole
spec
is
based
on.
You
know
the
verb
get
and
the
label
success,
which
I
think
are
saying
exactly.
B
Right, so the example I'm giving you is something we do in one of our backends. We allow the instrumenters to basically throw in any form of labels they want. Because you have all of these other people doing instrumentation, at the ops level you don't really know what you want to consider as a whole, so we allow the backend, or the downstream, to configure which label sets...
B
we want to consider the whole for reporting. So for this same measurement, we would then set up two, quote, equivalents of our views, if you will. One would just say that the first whole is the verb and the success: those two labels are one whole. The second one would be the location and the verb: that would be the second whole. So that's equivalent to setting up, quote, two views, if you will, or two instruments, if you will.
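The two-wholes setup just described can be sketched as follows. This is a toy model, not the backend in question: `view` is a made-up helper that aggregates one measurement stream under a chosen label-key set, so configuring two "views" over the same stream yields two independent wholes that nonetheless agree on the total.

```python
from collections import defaultdict

# One measurement stream; downstream configuration (not the instrumenter)
# decides which label sets form a reportable whole.
measurements = [
    ({"verb": "get", "success": "true", "location": "us"}, 1),
    ({"verb": "get", "success": "false", "location": "us"}, 2),
    ({"verb": "get", "success": "true", "location": "eu"}, 3),
]

def view(measurements, label_keys):
    """A minimal 'view': sum measurements over the chosen label keys only."""
    agg = defaultdict(int)
    for attrs, value in measurements:
        key = tuple(sorted((k, attrs[k]) for k in label_keys if k in attrs))
        agg[key] += value
    return dict(agg)

# Whole #1: (verb, success).  Whole #2: (verb, location).
by_verb_success = view(measurements, {"verb", "success"})
by_verb_location = view(measurements, {"verb", "location"})

# Both views cover the same measurements, so both sum to the same total.
assert sum(by_verb_success.values()) == sum(by_verb_location.values())
```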
G
It sounds like if you want to accomplish that in OpenTelemetry, and you can't use the instrumentation library info because you're using the same meter for some reason, then there's a feature request against views: you would like to have a filter that, instead of just specifying the keys to keep, says here is the label to remove for this metric, and if it doesn't exist, don't count the point for aggregation.
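The requested filter semantics can be sketched like this. The helper below is hypothetical (it models the feature request, not any existing view API): it removes one label key from each point, and points that never carried that key are excluded from aggregation entirely rather than merged in.

```python
from collections import defaultdict

measurements = [
    ({"verb": "get", "success": "true"}, 5),
    ({"verb": "get"}, 7),                     # lacks "success"
    ({"verb": "put", "success": "false"}, 2),
]

def remove_label_view(measurements, remove_key):
    """Drop `remove_key` from each point; points that never had it
    are not counted for aggregation at all."""
    agg = defaultdict(int)
    for attrs, value in measurements:
        if remove_key not in attrs:
            continue  # "if it doesn't exist, don't count it"
        key = tuple(sorted((k, v) for k, v in attrs.items() if k != remove_key))
        agg[key] += value
    return dict(agg)

result = remove_label_view(measurements, "success")
# Only the two points that carried "success" are aggregated; the
# 7-valued point is excluded rather than folded into verb=get.
assert result == {(("verb", "get"),): 5, (("verb", "put"),): 2}
```

Note the contrast with an ordinary key-selection view, which would merge the point lacking `success` into the remaining series instead of dropping it.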
G
What we have today doesn't support that in the SDK. You would do that via some kind of query on your backend.
G
So if you're already doing that in your backend and you don't need this, that's great. If you want to do it with the view API, though, you would need to use either the instrumentation library info, or the instrumentation library meter registration and the filter syntax we have in views today, if that's available. If not, there isn't a way with the API we've specified to do this, and you'll have to either do it somewhere else or open a feature request. One of the two; that'd be my recommendation.
B
Yeah, by the way, I wasn't asking for any change; I was just answering cjo's question specifically. So I think in this particular example, if you scroll down, our existing view already supports this. The view just says: I want to select the verb and the success as my label set, in which case the location will be dropped; the location attribute on those measurements would be dropped.
A
Right, I want to go back to the single-writer principle, though, because there's still this concern that in this example here, t1 using verb equals get, that's a filter, and you're saying: I want to take only the measurements with verb equals get, and then I'm going to do some sums of those. Implied by that is that you're going to apply some other filter, like success or location equals A or whatever, and do some summations. And now you have the potential to double count, because that's.
A
And
so
what
I
hear
is
a
feature
request
is
to
allow
filtering
and
renaming
to
happen
because,
as
long
as
you
rename
your
your
output
after
filter,
it's
okay,
because
you're
not
breaking
that
rule
about
a
single
writer
but
josh-
is
correct.
You're
writing
a
query
engine
in
views
at
this
point,
and
my
my
thinking
was
that
we
were
gonna,
stop
short
of
that
and
there's
a
number
of
complex
queries.
You
could
imagine
wanting
to
do
and
at
some
point
they
become
outside
of
the
sdk
scope.
I
think.
A
Too, where this is a problem that is real, but just not very real; not real enough to me. So, say I'm a server and I've got shards: I've got 500 shards loaded right now, and every shard has some statistics about it, and I'm going to output a sum for all my shards. So I've got 500 measurements, and every shard has a sum that gets further subdivided by some other attributes. Now I've got a bunch of measurements.
G
Tell you if that's okay. All right, do we know, so I know Riley's out, and I know there's two PRs that I'm expecting: one is around multi-exporter, because that was pulled out of one of the view specs, and then the other one is around the pull-based exporter, because that was pulled out of the exporter spec.
G
Do we know who the owners are for those two PRs? Or should I follow up with Riley and see? What I'm trying to see is if we can parallelize that work across multiple people, as opposed to having one person do both.