From YouTube: 2021-06-29 meeting
C: I still think that the view part of the SDK discussion is far more pressing. If we can save 20 minutes at the end to let Josh give us the TL;DR on the histogram manifesto, that would give us a heads-up. I don't think we can make any decisions on it this week; I think we just need a heads-up on what to look into and get everybody involved in the discussion.
A: Okay, we can start. So, for the PR, I've made the update based on our discussion last Thursday. I think there are still some open questions, so we'll probably go through them one by one and see if folks are still confused. This part I already updated the wording on, because the comment no longer makes sense here. I wonder if we should just resolve that; it seems John Watson is not here.
C: He's probably getting toasted right now, because it's really hot over there. But I wanted to say, on the default bucket thing: Tyler had mentioned that Prometheus has a default bucketing strategy, and what they do is effectively default their buckets to likely HTTP traffic latency measurements, in the unit that they record, and they record in seconds.
C: So what I did for the Java prototype was the same thing, but we record in milliseconds or nanoseconds, I forget which one. I basically used the exact same default buckets as Prometheus chooses, for the same reason, and I think that is reasonable, because I'm guessing 99 percent of our histograms will be... sorry.
C: A lot of our histograms will be for latency measurements of communication between servers, and for the ones that aren't, if somebody has to configure a different bucketing strategy, that's fine. It'd still be nice to have a hint API to do that, I think. But given how Prometheus does it, I'm more comfortable that we're not walking into terrible territory where everyone will have to configure this.
C: In Prometheus, when you make a histogram, it's always assumed to be this HTTP-duration bucketing strategy; that's the default for every possible histogram. And so the question in my mind is: if they're doing that, it's probably because most histograms are durations, and when they're not, users would configure a different bucketing strategy.
A: Okay, so one possible way is what we're saying here: if you have a histogram instrument, by default you don't have any buckets, and the user didn't specify anything, then we'll treat them as if they were durations, and we don't care about the units.
A: We'll just give some default, based on the understanding that maybe 90 percent of the folks would probably use it for some server/client duration. That might work. Comparing the pros and cons, I think having something by default can make things easier, and if it's wrong, then people would have to do the same thing as before: if we don't have the default, the same set of people would have to do the same thing anyway.
C: Looks good, yeah. I was uncomfortable with it at first, because I didn't realize how pervasive duration would actually be in practice, but given Prometheus, it looks like it actually is. So I think it'll work out okay to just focus on duration for the default and let people configure the rest.
A: Is there any link you could share with me? I want to see what the default one is, like how they pick the buckets. Is the default always exponential, or is it something like equal-width?
E: They map onto human intuition for a log-base-10-like scale. It starts at 0.005, or 5 milliseconds, and goes up to 10 seconds, covering the buckets in that span, and then there's a sort of negative-infinity-up-to-5-milliseconds bucket and a 10-seconds-to-positive-infinity bucket. That makes 12.
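For reference, the boundaries being described match Prometheus's `DefBuckets` in client_golang. A minimal sketch of how a measurement lands in one of those buckets (the bucket-selection helper is our own illustration, not an OpenTelemetry or Prometheus API):

```python
import bisect

# Prometheus's default boundaries, in seconds (client_golang's DefBuckets):
# 11 finite upper bounds from 5 ms up to 10 s, plus an implicit +Inf bucket.
DEF_BUCKETS = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]

def bucket_index(value, boundaries=DEF_BUCKETS):
    """Return the index of the bucket a measurement falls into.

    Prometheus buckets use `le` (<=) semantics, so a value equal to a
    boundary belongs to that boundary's bucket; anything greater than the
    last boundary goes to the +Inf bucket (index len(boundaries)).
    """
    return bisect.bisect_left(boundaries, value)
```

For example, a 3 ms request falls in the first bucket, while a 30 s request overflows to the +Inf bucket.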
A: And if the unit is not... for example, if the user specifies something like a duration, they have the freedom to specify it using either nanoseconds, milliseconds, or seconds. If we're going to provide a default by adopting this approach, will we have to understand the unit, or do we just treat every value as seconds?
C: I mean, I would say optimistically we'd understand the unit, but I think we could just treat everything as the OpenTelemetry default timer value, which is actually different from Prometheus. I'm trying to find a link for you to what I did for the Java one so you can see, but effectively the default histogram boundaries are not in seconds, they're in milliseconds, because if you look at our semantic conventions for duration, that's what we actually said.
C: That's what the unit will be in OpenTelemetry. Here, I put it in the Zoom chat. I think that looks like a terrible link from GitHub; hopefully you can see what it is.
C: Do you want me to open that now or later? No... well, it'll at least link you to the default histogram boundaries. These are, again, just the Prometheus histogram ones, adapted. Ah, that didn't work.
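One way the "adapted them" step could look, as a sketch rather than the actual Java SDK code: scale Prometheus's second-based defaults into milliseconds to match the semantic-convention unit.

```python
# Prometheus default boundaries, in seconds.
PROMETHEUS_DEFAULTS_SECONDS = [0.005, 0.01, 0.025, 0.05, 0.1,
                               0.25, 0.5, 1, 2.5, 5, 10]

def to_millis(boundaries_seconds):
    """Scale second-based boundaries to milliseconds (multiply by 1000)."""
    return [b * 1000 for b in boundaries_seconds]

# Millisecond defaults: 5 ms, 10 ms, ..., up to 10 000 ms (10 s).
MILLIS_DEFAULTS = to_millis(PROMETHEUS_DEFAULTS_SECONDS)
```

The shape of the buckets is unchanged; only the unit they are expressed in moves from seconds to milliseconds.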
A: So it seems to me we would not specify the default bucket unit in the spec, because every language might have a different preference. What we can do is specify the default buckets.
A: Yeah, so I'm struggling with how we word this. We can say something like: all the histograms should follow something like these 12 buckets, and put in a link. And if there are instruments provided by an instrumentation library outside our control, and they decided not to use the unit that we recommend in the semantic conventions, then it's fine for us to give the wrong results.
E: I just want to add: I still think there's a pretty strong appetite for histogram implementations that just don't require this type of configuration. But I think specifying defaults for explicit buckets does make sense.
E: What I'm worried about is that the question about unit conversion that Riley sort of proposed is not going to go away. If you use an instrument that has integer inputs, those default buckets don't make any sense, as most of them are less than one, between zero and one. So if you have integer input, there are only two or three buckets for you.
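That concern can be checked directly: against second-based default boundaries, positive integer inputs can only ever land in the few buckets whose upper bound is at least 1. A small illustration using `le` bucket semantics (the helper is our own, not an SDK API):

```python
import bisect

# Second-based default boundaries (Prometheus-style).
BOUNDARIES = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]

def bucket_index(value):
    # `le` semantics: value <= boundary stays in that boundary's bucket;
    # index len(BOUNDARIES) is the +Inf overflow bucket.
    return bisect.bisect_left(BOUNDARIES, value)

# Buckets actually reachable by positive integer measurements:
used = {bucket_index(v) for v in range(1, 1000)}
# Only the buckets bounded by 1, 2.5, 5, 10 and +Inf are ever hit, so an
# integer-valued instrument wastes most of this default layout.
```

So 7 of the 12 buckets (everything below the boundary 1) are dead weight for integer inputs, which is the crux of the objection.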
D: Should we have defaults per unit, per type? I mean, adopt the same thing for each unit. We can simply adapt: you have the milliseconds that Josh is talking about, but if the unit is seconds, we just, sorry, multiply or divide the buckets by 1000, or something like that.
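A sketch of that per-unit idea (the unit strings and the helper are hypothetical, nothing here is spec'd): keep one canonical millisecond table and scale it by the declared unit.

```python
# Canonical default boundaries, in milliseconds (hypothetical table).
DEFAULT_MS = [5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000]

# Milliseconds per one of each unit (UCUM-style strings, our assumption).
UNIT_TO_MS = {"ms": 1, "s": 1000, "us": 1e-3, "ns": 1e-6}

def default_buckets(unit="ms"):
    """Return default boundaries expressed in the instrument's own unit.

    For seconds we divide the millisecond table by 1000, so the 5 ms
    boundary becomes 0.005 s; for nanoseconds it becomes 5 000 000 ns.
    Unknown units fall back to the millisecond table unchanged.
    """
    factor = UNIT_TO_MS.get(unit)
    if factor is None:
        return list(DEFAULT_MS)  # unrecognized unit: just use the table
    return [b / factor for b in DEFAULT_MS]
```

The point is that the bucket shape stays fixed and only the numerals move with the unit, which is exactly the "multiply or divide by 1000" adaptation.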
C: I mean, yes, I'd argue that's reasonable. Here's my current assumption: if folks are running instrumentation and they're abiding by the OpenTelemetry semantic conventions, they're going to be using milliseconds.
C: So if the default just supports them and no one else, it's probably useful enough to be a decent default, and we still have to provide a way for people to override buckets. If we get clever over time and make the default even better, so that fewer and fewer people have to provide their own bucket strategy, that's better. And if we eventually have OpenHistogram or HDR Histogram or whatever we pick that doesn't even require buckets at all, then we've absolutely won. But I think we can...
C: We can slowly deliver more and more good default behavior.
A: And if we start by treating every number as milliseconds, that can probably make 80 percent of folks happy and make the other 20 percent of folks crazy; but the other 20 percent would be crazy even without a default, because they'd have to do something anyway, so that's not going to make their life worse. My question is: what happens eventually, when we figure out, oh, we have a better way? For example, we can either understand the unit or try to be smart.
F: I don't know about that. I think that, in the spirit of the versioning policy that was put forward by the OpenTelemetry community, the contents of what the signal was actually producing were specifically called out as out of scope; it's the API, and its connection with, you know, the SDK and the data model, that are all the things that need to remain compatible.
A: So do people feel like this seems to be a better answer than having no default and forcing everyone to specify one? I personally think it is.
F: Can I ask, maybe just to play a little devil's advocate: unless I'm mistaken and the units have changed drastically, we have a dimensionless unit. Would it make sense to use that as the default here, or do we really want to say that we're reporting milliseconds?
F: I don't know if we've adopted this, but we've defined millisecond, dimensionless, and one other; those are the only three actually defined units, and otherwise UCUM was referred to as the de facto way that you would specify units. So having a dimensionless unit would show that the unit hadn't been set and that you're using the default histogram, and then backends can interpret that, understanding that those bucketing strategies are essentially unset defaults, and they can make intelligent decisions off of that.
F: But, I don't know, it also seems like, if you're going to provide a default, it may be more useful to provide this millisecond one, because, in context, that is what we're trying to say is going to be the case. I see.
A: Currently we don't have the concept of a default unit, but I think we're saying unit is an optional thing in the spec, so I'm pretty...
C: Oh, that thing, yeah. The Java code was littered with "set it to one if nobody sets it" stuff.
C: Yeah. So, can I ask a question? My statement is: I think if we just assume people are doing durations in milliseconds, because of our semantic conventions, this default is probably going to cover most cases. However, if we're uncomfortable with that and we want to provide defaults based on the unit string that's provided, in some fashion, in a reasonable way, for a set of units, I'd also be comfortable with that.
C: If we have a limited set, and not just infinitely any possible unit. So it'd be like: if we see second, millisecond, nanosecond, or nothing, here's what it is; and then if we see anything else, like meters or whatever, we pick something. You know, I'm okay with that. I think it's actually really hard for us to pick anything not duration-based as a good default.
C: Just fundamentally, because what you measure, and the shape of the measurements, actually really matter. That's why I'm comfortable with the notion that duration is easy for us to assume: you're measuring communication between computers, and we have a really good handle on what the shape of that looks like. But for other units I'm less positive, and I'd rather be absolutely wrong and have users configure what they need...
C: ...than, you know, spend tons and tons and tons of time here arguing about, hey, for meters, what are we going to do? For Fahrenheit, what are we going to do?
E: About a dedicated timing instrument: I feel like that's why this issue keeps coming back. When we talk about units, time is very special and very crazy, and then anything else is such a rare case that who cares. But having an instrument that just takes care of the units for you and knows how to do this type of conversion, I think that would make me feel better than having to spec out how to do unit translation to figure out default buckets. I don't like that.
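The dedicated timing instrument being described could look something like the following. This is purely a hypothetical sketch (no such instrument exists in the OpenTelemetry API): callers report in whatever resolution their clock gives, and the instrument normalizes to one canonical unit before recording, so a single default bucket table always applies.

```python
import time

class Timer:
    """Hypothetical timing instrument: records elapsed time, always in ms.

    The caller never chooses a unit; the instrument owns the conversion,
    so one millisecond-based default bucket table fits every recording.
    """

    def __init__(self, record_ms):
        # record_ms: callable that accepts a value in milliseconds,
        # standing in for a real histogram's record() method.
        self._record = record_ms

    def record_seconds(self, seconds):
        self._record(seconds * 1000.0)

    def record_nanos(self, nanos):
        self._record(nanos / 1e6)

    def time(self):
        """Context manager that measures a block and records it in ms."""
        return _Timing(self)

class _Timing:
    def __init__(self, timer):
        self._timer = timer

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self._timer.record_seconds(time.perf_counter() - self._start)
        return False

# Values arrive in one unit no matter how they were measured.
recorded = []
t = Timer(recorded.append)
t.record_seconds(0.25)      # arrives as 250 ms
t.record_nanos(3_000_000)   # arrives as 3 ms
```

The design point is that unit translation happens once, inside the instrument, instead of being spec'd into the default-bucket selection logic.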
B: Understood. I'm just saying that, from the SDK's perspective, regardless of whether they provide a unit string or not, and regardless of whether we define, you know, the default unit or whatever, from an SDK perspective we just treat it as unitless. We take the number, we put it in the histogram regardless of what the unit is, and we pass it on. Yeah.
D: I'm not sure I understand that; can you explain? For example, if the default is 5 milliseconds, 10 milliseconds, and so forth, and we see that the user reports in seconds, we may make the default 0.005. Who's "we"? The SDK. But, again, we don't change the values; we change the default buckets based on the...
A: Josh, is there a chance that we might change our mind and say this is not the default histogram, let's use another one? Because I know with histograms we're trying to evaluate two different approaches. Would that change this, or do you think that's a separate question?
C: Josh, or the other Josh, jmacd.
E: Sorry, I was reading the screen. OpenHistogram doesn't define default buckets; the Prometheus default buckets exactly map onto OpenHistogram boundaries, is what I meant to say earlier.
E: I'm not sure how to answer your question exactly, but my goal, or my belief, is that we should essentially not try to specify which implementation produces these histograms in the data model, or the data in the protocol. We're trying to get agreement on, essentially, a histogram strategy, which means what the math looks like, so that you can figure out where the boundaries are; because otherwise a histogram is just a very simple bunch of counts.
E: It's a very high-resolution histogram, and I think people are objecting to some of that. So I've written up this issue saying, you know, we kind of have to choose between base 10 and base 2, and if we say both, we end up with bad results for everybody. I'm not sure if I've answered the question you hoped I would answer.
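The base-10 versus base-2 trade-off can be made concrete. An exponential bucketing scheme picks boundaries of the form base^i, and the two bases produce boundary sets that only line up where both can express the same power (a small illustration, not tied to any particular proposal):

```python
def exponential_boundaries(base, start_exp, count):
    """Boundaries base**i for i in [start_exp, start_exp + count)."""
    return [base ** i for i in range(start_exp, start_exp + count)]

# Base-10 boundaries are human-friendly round numbers...
base10 = exponential_boundaries(10, -3, 6)   # 0.001 up to 100
# ...while base-2 boundaries give finer resolution, roughly
# log2(10) ~ 3.32 buckets per decade.
base2 = exponential_boundaries(2, -10, 18)   # ~0.001 up to 128

# The two boundary sets almost never coincide, so a consumer that
# receives a mix of both cannot merge them without losing precision.
shared = set(base10) & set(base2)
```

That mismatch is why "say both" yields bad results for everybody: merging a base-10 stream with a base-2 stream forces re-bucketing into whichever coarser boundaries happen to overlap.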
C: What I would suggest is that that would be a breaking SDK change, one I would personally be more than happy to bump the SDK major version number for, because I think it's going to be super valuable for users. But it is changing the default behavior of an instrument, which is significant enough that you should warn users. So I think we should plan to eventually move to a better histogram structure.
D: Can we add a mechanism to say: if you don't specify anything, the default value is this; but in the future, instead of bumping versions, can we say that we have a flag, or some configuration, that says "new default", or "default v1", or...
A: People would go crazy, so I would probably prefer to stay simple: we'll treat everything as milliseconds, and for things that are not milliseconds, like nanoseconds, we'll admit that we're doing a dumb thing here, we don't have enough knowledge, and people who don't like that should just go and specify the configuration explicitly.
E: And we don't have integer histograms, correct? Well, we don't in the data model, but it's often been... like, it was part of OpenCensus, the idea that if you're going to create a histogram interface, users might want to histogram some integers.
C: Again, if someone says "I'm using double instead of long", inherently, what's the difference in the expected value range of that measurement? You're positing that there is a difference; I'm positing that I don't know if there's a difference, because we kind of control that as a community, in how we expect our stuff to behave right now and how we write our instrumentation. And from the code I've seen so far, I don't think that's true. I think we're okay.
E: I think Riley was supposing a case where you write a view that's kind of like a template, that says: if the name is this, then here's how you do a view of it. And you might have three different instruments with the same name but different instrumentation libraries, maybe, and then that's three views, maybe. I think that was what the line of text was.
C: ...come out of that view, right? So the notion is: I get in a set of measurements with a set of attributes, and I create a set of metric streams with a set of attributes, right? Is that one-to-one? Is it n-to-n? Is it one-to-n? I was thinking one-to-one, but, yeah. I want to suggest, for simplicity's sake in the specification, that we can fix this later.
C: I think the selection criteria should not, by default, do anything to the output, and you should have control, when you generate your view, over which attributes you will output as their own streams and which ones you aggregate away. I think that's a separate config, just for simplicity's sake in this definition, because if we try to do something implicit and kind of magical, it gets really hard to write down what that means. I feel like we're on the borders of defining a pattern-matching specification, and those are fun...
C: I've seen people get PhDs for that, so I don't know if we want to go that deep right now. Maybe we have to, right? But my suggestion would be: if we keep it somewhat stupid, because this is the SDK, that might be a better option in the near term.
A: So here you select... basically you say, this is my view identity, and here is how I'm going to find the corresponding instrument. If there's one match, congratulations. If there's no match, it may match later. If there are multiple matches, then we're screwed and we don't know what the behavior is, so you go and define the behavior. If you want to say "I only want one attribute; I want to aggregate away all the other attributes", I think those are the details.
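A rough sketch of that matching rule (names and shapes are illustrative, not the spec'd SDK API): a view carries selection criteria, and the SDK counts matches. One match wires up, zero is fine since it may match later, and multiple matches are reported as an error rather than guessed at.

```python
def find_matches(view_criteria, instruments):
    """Return instruments whose fields equal every criterion in the view."""
    return [inst for inst in instruments
            if all(inst.get(k) == v for k, v in view_criteria.items())]

def apply_view(view_criteria, instruments, errors):
    """One match: wire it up. Zero: wait. Multiple: report, don't guess."""
    matches = find_matches(view_criteria, instruments)
    if len(matches) > 1:
        errors.append("view criteria %r matched %d instruments"
                      % (view_criteria, len(matches)))
        return None
    return matches[0] if matches else None

instruments = [
    {"name": "http.duration", "library": "lib-a"},
    {"name": "http.duration", "library": "lib-b"},
    {"name": "queue.size", "library": "lib-a"},
]
errors = []
# Unambiguous: the criteria pin down exactly one instrument.
one = apply_view({"name": "queue.size"}, instruments, errors)
# Ambiguous: the same name exists in two instrumentation libraries.
none = apply_view({"name": "http.duration"}, instruments, errors)
```

Adding the instrumentation library to the criteria is what disambiguates the second case, which is why the later discussion circles back to what identity a view inherits.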
A: Why do you want to give the power to the SDKs? Because there's an exception-handling part in the stable version of the spec. It says the default behavior can be the SDK writing some internal log, and if the SDK decided to give a callback, then it can change the behavior and tell the customer at the exact moment. But you...
A: I'm now seeing that, and my argument would be: there are specific languages that are very sensitive to type, they might have strong types, and in that case they might say, hey, if you specify an instrument selector that covers two different types, I'll do some internal thing just to merge them. They have the freedom.
A: Why? Like, the spec can be simpler if the spec says "I don't care about anything; individual languages should go and figure it out". That's very simple for the spec, but then, yeah, we should not have a spec; we should just close it.
B: So there is a concern here, which, you know, maybe comes somewhat from some conversations I read: given that we don't lock down a configuration (or we do, or we don't), there's some conversation around what happens if people were to, you know, create a new instrument that then all of a sudden matches your criteria.
C: You should already have all this information by the time the user is constructing instruments, if we've done it properly. There's also a part of the specification where, if the user creates an instrument before the SDK is actually provided, you're supposed to hold a temporary piece of state, right, so that when your implementation is then provided, you wire everything together at that point in time, which is kind of confusing and fun.
C: But that's also a language choice, to allow that level of behavior, because you can require everyone to have the SDK instantiated, you know, statically, in some spot, before all your instrumentation initializes. I think that's allowed by the spec as well; someone can correct me if I'm wrong.
A
Yes,
my
answer
would
be
you
have
the
control.
So
if
you
update
the
instrumentation
library
and
they
introduced
some
new
instruments
and
that
breaks
the
previous
view,
because
now
you
have
multiple
things
selected,
then
what
you
should
do
is
when
you
update
the
instrumentation
library,
you
should
also
change
your
view
definition
because
you
own
the
application.
You
have
the
control.
B: So I'm missing something, then. I configure my SDK, I configure my view, and initially I have some set of instruments and everything's fine: only one instrument matched my view. Sometime later, while I'm running, I create a new instrument which now happens to match one of my views, which is now doubled. Is that a possible scenario, and if so, what do we do?
C: I would disagree with the first part of that statement: when you configure your SDK, you have no instruments; they're always constructed later. So, the way this is phrased, like I said, the detection of a conflict will happen when the instrument is constructed, not when the view is configured. All right?
C: That said, you can do that; you can choose to implement your API that way. You actually don't have to, by the spec: you could also say that the SDK is fully instantiated and then the API kind of makes use of it. Both of those things are possible.
C
So
if
you
are
allowing
that
level
of
flexibility,
you're
going
to
have
to
handle
that
appropriately,
but
in
either
case
the
the
failure
scenario
here
will
be
detected
when
the
instrument's
created
in
in
likely
in
both
cases
right,
there's,
there's
a
potential
case
in
some
languages
where
you
could
already
have
instruments
constructed
by
the
time
you
configure
review,
but
you're
still
gonna
have
to
handle.
If
you
wanna
cover
all
the
open,
telemetry
languages,
you
have
to
handle
the
case
where
instruments
are
constructed.
After
the
view
is
configured.
B
So
so
in
so
so
from
the
net,
you
know
implementation.
I've
been
doing
that.
The
only
problem
that
I
run
into
with
this
particular
thing
actually
has
nothing
to
do
with
this,
but
actually
has
to
do
with
if
multiple
views
create
and
override
the
name.
I
now
have
potential
collision
of
the
identity
issue,
so
so
yeah.
D
So,
for
if
the
view
does
not
inherit
the
instrument
name
or
does
not
create
unique
money,
you
might
have
exactly
problems
right.
We
have
problems,
but
I
think
that's
when
we
should
have
the
same
mechanism
that
we
have
in
trace,
because
I
think
we
have
defined
an
asynchronous
or
whatever
error,
handling
mechanism
in
all
the
sdks.
So
that's
when
we
should
tell
the
user
in
the
error
handling
that
something
bad
happened.
C
Not
being
you
know,
in
the
selector,
and
so
you
can
have
one
meter
where
someone
creates
an
instrument
with
the
same
name
as
another
meter
where
the
current
instrument
and,
if
they're
incompatible,
we
should
probably
be
issuing
an
error
message
anyway.
I
think
there's
like
a
there's
some
sort
of
phrase
about
this
somewhere,
but
that's
that's
that's
where
we
want
to
provide
an
error
message,
not
a
crash,
and
that's,
I
think,
the
the
scenario
where
it
matters
here.
D
But
but
wait
the
view
will
inherit
the
instrumentation
library
correct.
So
the
view
the
view
will
produce
an
aggregation
that
will
belong
to
that
to
the
same
instrumentation
library
as
the
instrument.
D: Now, there is still a problem, Josh and Josh and Riley, which is: if the view is allowed to change the name of the instrument, then changing the name, even though we have the resource and the instrumentation library, may still cause duplicates, correct? If, from the same meter, I created the instrument foo and the instrument bar, and via the view I say that I produce an aggregation, the metric, that will have the name baz for both of them.
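That foo/bar-to-baz collision can be checked mechanically: if a view renames its output, two different instruments can collapse onto one stream identity, and an SDK can detect it at registration time. A hypothetical helper, not an actual SDK function, where stream identity is assumed to be the (instrumentation library, output name) pair:

```python
def register_stream(streams, library, name):
    """Register an output metric stream; return True, or False on a clash.

    Two instruments renamed by views to the same output name within one
    instrumentation library collide on the same stream identity.
    """
    identity = (library, name)
    if identity in streams:
        return False  # duplicate: report an error, don't install the view
    streams.add(identity)
    return True

streams = set()
# A view renames both instruments "foo" and "bar" to output name "baz".
ok_foo = register_stream(streams, "my-library", "baz")  # from instrument foo
ok_bar = register_stream(streams, "my-library", "baz")  # from instrument bar
# The second registration collides: only the first view instance is
# installed, and the conflict is surfaced via the error handler.
```

This is the registration-time version of the "report an error and install only the first one" behavior discussed next.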
A
Yeah,
I'm
I'm
trying
to
illustrate
the
problem
here.
So
if
you
have
a
instrument
like
meter
a
meter
b
and
you
have
xyz
three
instruments-
people
here
they
might
say
hey,
I
have.
I
have
two
pipelines
one.
I
want
to
take
every
default
value
and
send
it
to
the
console
exporter,
and
here
they
want
to
say
I
want
to
take
x
and
export
that,
as
is-
and
I
also
want
to
take
y
export
that
has
full
and
also
take
y
exported
bar.
D: Okay. So, as I said, I think the only thing that we can do at that moment is to report an error and then not install that view for that instrument. So we install it only for the first one, and then, if we detect a problem, for the second one we just report an error, and that's it. I think that's a reasonable default.
B
But
but
you
don't
know
during
the
creation
of
the
view
per
the,
if
you're
allowed
to
create
instrument,
you
know
dynamically
right.
What
do
you
mean
so
so?
I
think
that
when
you
create
the
view-
and
and
you
know
this
is
that
you
know
joshua's
mentioning
but
depending
on
the
implementation,
the
view
the
name
matching
of
the
instrument
name
y,
that
instrument
y
may
not
yet
exist.
D: Per the API spec. But based on the view, because the view can also change the name of the aggregation, the problem may happen again, and if that happens at runtime, whenever we discover it, we don't crash: we report an error via the error handling, the way we have it for trace and for the others. Then the behavior could be: don't install that view for this one, because we don't want to corrupt the data.
D: We just report one metric, one instrument, with that view, because if we report both, you will have problems.
A: So, Josh, it seems you have some ideas here. I wonder if I could catch up with you offline to go through some of the outstanding questions.