From YouTube: 2022-04-05 meeting
A
Good morning, everyone. I think Carlos usually speaks through this meeting, and I don't see him yet.
A
I feel like, three minutes into the hour, it would be appropriate to start. Here's Carlos, and the first item up on today's agenda, I believe.
A
B
Yeah, we've had this conversation before, but I think it's gotten tabled due to bad timing, so I just wanted to bring this back up again: the fact that all JSON messages are currently listed as alpha/unstable. I just wanted to see what it would take to stabilize OTLP JSON, and see what level of interest there is in starting to do this, mainly for traces and metrics. At this point I think those are the two signals that make the most sense, but does anybody have any thoughts here?
C
One of the things is that we wanted to have prototypes in other languages, which we do; we have one in Java somewhere, we can look for that. So it means that everything is working for other languages just fine, but the versioning part is a hard one, I think. Or not hard, but at least we have to juggle with that.
B
Yeah, I think we can talk about that. I would be curious to know if anybody has use cases for OTLP JSON. I work a little bit in the JS SIG, and I think it would be super helpful there; I think JSON is the only reasonable format to use in a browser.
B
I realize that our client-side integrations are kind of behind, but I think for those to really become useful, OTLP JSON is going to have to be a stepping stone there. And I did see a thumbs up from Daniel Dyla; I know you've been doing some refactoring just to be able to release our exporters, and it's been a bit of a challenge, I think, in addition to other JavaScript-related things. Anything to add?
D
Okay, I think, like you mentioned, the client-side instrumentation in JavaScript is behind the server-side instrumentation, but I think a lot of that is to do with the fact that the exporters are not considered stable.
D
The exporter will not be able to be stable until the protocol is stable, and it's been something that I get asked about as the maintainer twice a month for the last two years, and I always have to give them the same answer: that the JSON protocol itself is not stable, so the exporter just can't be. The instrumentations being behind means that nobody is particularly motivated to work on the exporter, but the exporter not being stable means nobody's motivated to work on the instrumentations, so it's just been left behind now for a year or more.
F
We also have the client scenario where things can run on mobile or on PCs, and one challenge I've seen is that although you can include a gRPC client, number one, that increases the size; number two, it might introduce DLL version hell. Although there's some way that you can basically fuse the assembly code into your application and make different versions of the gRPC client work side by side, normally people try to dodge the problem. So if JSON is natively supported in my language runtime, I can definitely see the benefit.
G
What I would like to do is make sure that the last change is settled: it's propagated everywhere, all the code bases are implementing it, and we're sure we didn't make any mistakes, that everything is good there. I guess there is a tiny, tiny chance that there may be a need to revert something there; a decision like that may happen. I don't think that will be the case, but who knows, right? Once we're certain that everything is good there.
G
C
Well, I know that we had problems, at least in our case, with the version handling, so we would really like to have that, and the issue is marked as required, you know, needed for release. I think it could be better. I mean, we will have enough cycles, I'm thinking, to verify that the latest changes in the proto part are done, so we have enough time to specify the version.
C
A
G
Anyway, I'm not aware of any other things that prevent us from moving forward with declaring things stable. Maybe it would be good to do some sort of auditing to make sure that the names look right, all the fields and messages, so that we are not going to require any other renamings in the future; but functionally, I think things work as expected.
A
Can I ask, Carlos, you've mentioned versions, like recording the version in the data, I think. Can you... I mean, is there an explicit proposal?
A
I see the issue doesn't actually make a proposal. So I think what you're asking for is: right now Lightstep is using paths, and the caller has to know the version that they're using and encode that in the path, and then we can figure out what version you're using and do all the correct parsing. We'd like to put the version of OTLP directly into the data, because we've seen this problem with JSON so much. I mean, binary protobuf doesn't have a version in it. So can you say more?
C
H
So one thing that came up in the client instrumentation that Josh actually mentioned was the schema URL. We could actually start using the schema URL to version the content for the client instrumentation, see; it's also going to be useful for defining the actual type of message, so we can actually start defining the individual client messages. This is mainly for using logs as the transport mechanism.
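The schema-URL idea above can be sketched in a few lines. This is a minimal illustration only: it assumes the version is the last path segment of the URL, as in OpenTelemetry schema URLs such as `https://opentelemetry.io/schemas/1.9.0`; the helper name is invented.

```python
# Hypothetical sketch: deriving a content version from a schema URL.
from urllib.parse import urlparse

def schema_version(schema_url: str) -> str:
    """Return the trailing version segment of a schema URL."""
    path = urlparse(schema_url).path        # e.g. "/schemas/1.9.0"
    return path.rstrip("/").rsplit("/", 1)[-1]
```

A receiver could then branch on the parsed version to decide how to interpret the payload, without the sender having to add a new field to the data itself.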
G
D
At least in JSON you can always parse it and get the version out of the body, but I think with protobuf you can't have the version in the body, because you can't parse it until you know which version you have, right? So, I mean, I guess if headers aren't an option, then the URL path seems like the only other obvious solution to me.
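The asymmetry being described can be sketched as follows. This is an illustration, not a spec proposal: the JSON field name and the path shape are invented, loosely modeled on the Lightstep path scheme mentioned above.

```python
# JSON is self-describing, so a version can sit in the body; a binary
# protobuf payload cannot be parsed until the version is already known,
# so the version has to travel out-of-band (e.g. in the URL path).
import json

def version_from_json(payload: bytes) -> str:
    # Parses without prior knowledge; "otlpVersion" is a hypothetical field.
    return json.loads(payload)["otlpVersion"]

def version_from_path(path: str) -> str:
    # For binary protobuf, the caller encodes the version in the request
    # path, e.g. "/v0.16.0/traces" (illustrative shape only).
    return path.lstrip("/").split("/", 1)[0]
```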
G
How are we going to use the version number? We're not anticipating breaking changes, right? And, I mean, is this going to be that we're not going to be recording additive changes, which are backward compatible, as a new version number, right? That's going to be the same version number.
G
F
G
And to do that, you do not necessarily need a version number, if the feature is implemented in a way that it is represented by some data that can be present or missing. The fact of the presence may be sufficient for you to know what you need: this is the new feature, that thing is present in the payload, you need to work on it, right?
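The presence-based alternative can be shown in a tiny sketch. The field name `flags` is purely illustrative; the point is that the receiver keys off whether the field exists, not off its value.

```python
# Sketch: detect a capability by field presence rather than a version number.
def uses_new_feature(point: dict) -> bool:
    # Presence (not the value) signals the sender implements the feature;
    # absence means the sender predates it, so a default of 0 must not be
    # trusted as real data.
    return "flags" in point
```

This is exactly the distinction behind the zero-versus-no-data problem described later in the discussion.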
A
An example where a feature that was allegedly non-breaking recently caused instability for us at Lightstep had to do with a flag, and, yes, we were ignoring it. So a new version of the protocol had something in it that we were ignoring, and because we were ignoring that, we ended up interpreting a zero somewhere where we should have seen no data.
A
So I think we've made more than one mistake of that nature, where we say this is non-breaking, but the nature of the protocol makes it very easy to ignore stuff. So I think having extra versioning information in the payload somehow would help us. I would propose to put it at the top level of the protobuf structure passed to the collector, because it's a top-level field at that point and it's pretty easy to make compatibility happen; it doesn't enter the data itself.
A
It's
just
the
payload
structure
and
I
think
we've
debated
whether
to
put
headers
passing
through
like
for
user
agent
and
stuff,
like
that.
That
would
be
a
good
place
to
put
any
kind
of
sort
of
like
information
about
how
the
data
is
transiting
could
go
at
that
top
level.
In
my
opinion,.
G
There's another way: to have the notion of a capabilities bitmask of some sort, right? That's what we do in OpAMP, so the peers can declare what features they understand and support, what they can accept. There's also that. I mean, what I'm saying is that there are many ways to try to tackle these compatibility issues; a version number is one of those ways, and maybe we need it. It probably is the right thing to open an issue and start the discussion, because we're probably not going to solve it on this call.
C
Yeah, that's very good. Jimmy, would you mind adding a comment to the issue yourself? I can do that for you if you are busy. Other than that, I guess we need somebody to actually drive this. Probably we can [unintelligible].
G
A
I'm just speaking for Lightstep right now: the churn over the JSON has caused us tremendous pain already, and we just want it to end. So we agree that, as long as we can stabilize what we have moving forward, I think you're probably right that this version can be considered stable without having a version number, but it leaves me feeling uneasy. I'll put the comment in the...
C
...the thread, yeah. We can probably spin off an issue just focusing on the versioning part, and, Tigran, maybe you have cycles for that.
C
Yeah, I think that it was a little bit different, because you were putting the output somewhere; I remember that by the time we did this in the collector we had the prototype, this was sent to the collector, and you had a different approach, right? I don't remember the details, but I will ask you offline; probably we don't have to discuss it here. But yeah, thank you for that.
C
B
C
I
So, actually, for this capturing of the HTTP headers with the environment variables, I have one doubt about the header names: should they contain the hyphens, or should we replace them with underscores?
F
I'm
I'm
guessing.
Maybe
we
we
should
have
some
good
mating
how
to
escape
the
headers
in
a
generic
way.
Instead
of
doing
this
one-off
case,
because
if
you
escape
this
minus
to
underscore,
then
you
will
have
issue
what,
if
you
have
underscore
consider
something
like
the
http
url
encoding
or
something
else
that
would
work
perfectly
and
yet
it
would
fix
like
it
would
fit
well
into
environment
variables.
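The ambiguity being raised, and the generic-escaping idea, can be sketched concretely. This is not a proposed convention, just an illustration of why a reversible encoding matters: a hex-style escape (in the spirit of the URL-encoding suggestion, but restricted to characters legal in environment-variable names) can round-trip both `-` and `_`, whereas a blind hyphen-to-underscore swap cannot. All names here are invented.

```python
# Sketch: reversibly map a header name onto [A-Z0-9_] for an env-var name.
def encode_header_name(header: str) -> str:
    out = []
    for ch in header.upper():
        if ch.isalnum():
            out.append(ch)
        else:
            out.append("_%02X" % ord(ch))   # '-' -> "_2D", '_' -> "_5F"
    return "".join(out)

def decode_header_name(name: str) -> str:
    out, i = [], 0
    while i < len(name):
        if name[i] == "_":
            out.append(chr(int(name[i + 1:i + 3], 16)))  # undo the escape
            i += 3
        else:
            out.append(name[i])
            i += 1
    # Header field names are case-insensitive (RFC 7230), so lowercasing
    # on decode loses nothing.
    return "".join(out).lower()
```

With this scheme `Content-Type` and a hypothetical `X_Custom` header map to distinct, recoverable names, which the simple hyphen/underscore substitution cannot guarantee.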
F
G
F
J
So this exists for Java and Python as well. Suncat and Ashitos implemented it for Python, and we saw that Java uses the same environment variables, and so we thought, hey, those environment variables, let's see if we can standardize on them. So yes, there's some code in Java and Python using this already.
G
It's
is,
is
it
mentioned
in
the
sdk
specification
in
any
way
that
this
is
a
feature
that
can
be
supported
by
implementations.
G
I
So the reason for basically replacing that hyphen, or minus, with the underscore is this: if we open the specification, which actually mentions how the captured value, how the captured header...
I
D
To me, this seems like a very Python-specific thing. I commented that on this issue in the conversation, but the issue is that a header comes in separated by dashes, which is common in headers, and then in Python web frameworks it gets transformed.
D
It's
this
seems
to
me
like
a
language,
specific
concern
or
even
a
framework
specific
concern,
because
it's
not
done
in
every
python
framework,
even
without
knowing
the
details
of
the
framework
in
question.
I
don't
know
like
that.
There's
questions
like
are
the
raw
header
field
names
accessible
and
can
those
just
be
used
by
the
instrumentation
without
using
that
type
of
transformation
done
by
the
framework?
D
But
honestly
I
would
I
would
recommend,
against
this
type
of
thing,
because
any
web
framework
can
apply
any
transformation
they
want.
They
could
you
know
some
framework,
could
pluralize
the
header
name
or
something
like
that,
and
do
we
don't
want
to
support
that?
I
I
don't
think
we
want
to
get
into
the
game
of
supporting
all
of
the
transformations
that
these
web
frameworks
want
to
do.
D
I
think
we
should
follow
what
is
specified
by
the
rfc
they're
case
insensitive
and
you
use
the
name
directly
specified
by
the
rfc
and
if
some
framework
or
some
language
commonly
transforms
them,
then
that's
up
to
the
instrumentation
author
or
the
sdk
author
of
that
particular
language
or
instrumentation.
J
F
C
F
L
F
Yeah, and for this instrumentation-specific thing: because the entire instrumentation semantic convention is kind of evolving, I would suggest that we avoid changing the overall spec. If there is an instrumentation semantic convention, whatever spec, maybe we can put it there, if the semantic convention folks are okay.
M
Yes,
it's
in
the
semantic
conventions,
it's
one
of
the
few
kind
of
optional
things
we
are
in
the
process
of
trying
to
to
clean
some
of
that
up
that
that
feature
is
not
going
away,
but
I
kind
of
agree
we
need,
like
we've
been
needing
to
have
like
a
configuration
file.
I
think
for
a
while,
I
would
say
this
is
probably
one
of
the
next
big
top
items
for
me
once
we're
past,
like
blogs
and
metrics,
and
things
like
that-
and
I
know
that's
something
bogdan
was
interested
in.
L
So
I
think
the
thing
that
stands
out
to
me
is
that
the
pr
proposes
to
change
the
sdk
spec,
but
the
sdk
will
have
no
way
of
implementing
getting
these
values
into
spans,
at
least
no
good
way
right.
The
instrumentation
is
what
knows
where
this
needs
to
go
and
how
the
values
need
to
be
treated.
So
I
think
that
if,
if
we
are
to
specify
a
particular
configuration
mechanism
beyond
what
the
the
semantic
convention
spec
does,
which
is
simply
a
note
that
says
instrumentation
should
require
explicit
configuration
of
which
values
to
put
in
here.
M
Yeah
and
personally
I
don't
want
configuration
library
like
instrumentation
libraries
starting
to
grow
their
own
little
configuration
doohickeys.
M
So the place I'd love to see this all go into is a configuration file, like some standard configuration file that the SDK can read and instrumentation can be handed; everything can get handed the same file, and then the end user doesn't have to think about where, specifically, they need to put all this stuff.
M
That
would
be
that'd
be
my
suggestion.
I
I
wouldn't
want
to
start
adding
configuration
options
and
stuff
to
the
individual
instrumentation
libraries,
though
I'm
sure
some
of
them
already
have
some
of
that
going
on.
D
I
think
if
we
had
a
standardized
system,
we
would
have
no
problem
getting
all
of
the
instrumentation
authors
in
js
to
switch
over
to
it,
because
configuration
is
painful
for
us,
but
we
do
definitely
have
instrumentations
with
configurations
and
they're,
not
particularly
consistent.
M
Yeah
yeah
we've
seen
it
going
that
way.
I
don't
know
if
we
have
the
bandwidth
to
to
actually
just
make
this
configuration
file
happen
at
this
point,
but
I
would
be
very
interested
in
that
people
feel
like
because
there's
bandwidth
for
that,
but
I
don't
I
don't
see
bogdan
on
the
call.
I
know
he's
had
opinions
and
tigran.
I
don't
know
if
this
works
overlaps
at
all
with
the
remote
configuration
work
you
all
are
doing.
G
Okay, maybe, I guess, in the way that if we want the instrumentations to have their own way, or I guess somehow for the configurations of the instrumentations to be separately specified, then we want them to be carved out in a way, but also be part of the whole. Then, I guess, yes, this is a concern in that case, right? I don't know what the solution in this case is, but it's maybe something that we need to think about.
M
Okay,
I
mean
it
sounds
like
like
a
next
step
would
just
be
for
someone
to
to
just
propose
what
the
layout
for
this
yaml
file
would
look
like
right,
how
the
sdk
would
load
it,
that
kind
of
stuff,
basically
taking
all
of
our
existing
environment
variables
and
proposing,
like
a
structured
layout
for
all
of
that,
and
that
would
then
include
yeah.
How
would
you
put
instrumentation
configuration
into
that
file?
Like
you
know,
what
would
the
targeting
rules
be.
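One way to picture the file being discussed is a single document with an SDK section plus a carved-out sub-tree per instrumentation, so each library is handed only its own part. The sketch below models that shape as a plain Python dictionary (the discussion leans toward YAML, but the structure is the same either way); every key name here is invented, not a proposal from the meeting.

```python
# Hypothetical layout: one config document shared by the SDK and all
# instrumentations, with per-instrumentation sub-trees.
config = {
    "sdk": {
        "resource": {"service.name": "my-service"},
        "exporter": {"endpoint": "http://localhost:4317"},
    },
    "instrumentation": {
        "http": {"capture_headers": ["content-type", "x-request-id"]},
    },
}

def instrumentation_config(name: str) -> dict:
    # Each instrumentation library is handed only its own sub-tree,
    # so libraries never grow their own configuration mechanisms.
    return config.get("instrumentation", {}).get(name, {})
```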
G
Yeah
there's
an
open
issue
and
I
think
there
is
also
I
proposed
one
possible
layout
there,
which,
which
is
just
a
flat
list
of
like
configuration
options.
So
if
we
need
to
have
a
more
structured
approach
where
they,
there
is
some
sort
of
common
things
that
are
or
the
sdk
things
and
the
these
instrumentations
are
separated,
and
then
we
need
different
proposals
here.
N
Okay,
one
of
the
things
also
ted
that
I
was
looking
at
because
I've
been
kind
of
toying
with
this.
I
wanted
to
get
some
more
time
on
this,
but
I
think
like
you're
saying
like
it's
just
an
overloaded
thing
is
there's.
N
I
think
there
needs
to
also
be
like
a
meta
language
to
the
configuration,
because
you
need
validation
of
some
some
form,
and
this
already
exists
in
a
few
places
like
there's,
json
schema
and
there's
also
on
the
super
project
called
q
lang
that
I
was
looking
at,
but
I
think
that
that's
going
to
be
a
really
key
thing
in
how
we
define
our
configuration
space
is
like
not
only
the
structure
of
the
file,
but
what
those
like
field
values
can
actually
take.
I
think,
is
really
key.
N
M
Yeah, okay. Well, I kind of get the impression that that's sort of where a lot of us are at: we want to do this soon, but no one has quite got the bandwidth to push it over the finish line right now.
N
You
could
always
add
it.
You
know
they
could
always
modified
later
on.
So
I
think
that
that's
probably
a
good
way
to
scope.
The
initial
push
yeah,
I
I
don't
know
we
always
have
these
conversations
and
then
it
ends
up
with
like
things
getting
dropped.
I
don't
know
if
we
wanted
to
schedule
a
meeting
or
I
know
there's
already
issues,
but
you
know.
Maybe
we
should
try
to
progress
this.
C
By
the
way,
sorry
to
interrupt
you
about
dimension
that
he
really
would
like
to
have
the
metric
stem
box
today
to
discuss
other
items.
So
let's
go
quickly
over
the
other
items
and
please,
let's
keep
discussing
those
things
offline,
there's
only
so
many
minutes
in
an
hour.
Okay,
the
next
one
is
the
mila
http
respect,
yeah.
E
Yeah,
so
the
quick
intro,
the
motivation
for
the
solving
this
issue
is
because
we
have
way
too
much
required
attribute
sets
on
http
and
we
would
like
to
have
some
consistency
there.
So
backhands
would
know
what
to
expect
and
we
kind
of
create
the
consistent
telemetry
for
from
our
instrumentations.
E
So
the
issue
and
discussion
we
had
suggest
that
we
introduced
three
categories
for
the
attributes:
they're
required
are
those
that
are
used
very
useful
and
all
instrumentation
that
can
obtain
them
should
obtain
them
and
put
on
the
spams.
Then
there
are
optional
attributes
and
they
can
be
opt-in
and
opt-in,
so
some
should
be
provided
by
default
and
back
to
the
previous
conversation,
the
configuration
may
allow
to
disable
or
enable
some
attributes,
and
then
we
have
I've
done
some
revision
of
what
current
instrumentations
do
and
I
try
to
understand
what
can
be
consistently
done.
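The three categories described above can be modeled in a few lines. This is an illustration of the filtering logic only: the attribute names and their level assignments are examples, not the draft PR's actual lists.

```python
# Sketch of requirement levels: required attributes are always emitted,
# opt-out attributes are on unless disabled, opt-in are off unless enabled.
REQUIRED = {"http.method", "http.status_code"}
OPT_OUT = {"http.target"}               # emitted unless the user disables it
OPT_IN = {"http.request.body.size"}     # emitted only if the user enables it

def emitted(available: dict, disabled=(), enabled=()) -> dict:
    keep = REQUIRED | (OPT_OUT - set(disabled)) | (OPT_IN & set(enabled))
    return {k: v for k, v in available.items() if k in keep}
```

The point of the scheme is that a backend sees the same required set from every instrumentation, while the optional tiers give users control without changing the baseline.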
E
So
I
put
it
in
the
draft
pr
and
like
creating
the
actual
pr
would
require
a
lot
of
tooling
changes.
So
I'm
actually
looking
for
some
tc
members
interested
in
the
semantic
conventions
to
take
a
look
and
then
we
I
want
us
to
agree
on
the
direction.
First,
any
details
obviously
are
discussable
and
then,
if
we
have
an
agreement
on
the
shape
of
this,
I'm
happy
to
do
all
the
tooling
work
and
go
into
every
detail.
C
Yeah,
I
am
interested
in
semantic
conventions.
I
will
be
needing
that
in
my
daily
routine,
so
I
can
take
a
look.
I
think
that
once
we
discuss
and
come
up
with
an
agreement
on
the
draft
after
everybody
has
reviewed
that
I
can
help
that
front.
But
in
the
meantime
please,
let's
review
that.
Okay.
G
E
C
O
Basically, that's the only required item, and it would be great to have some sponsorship from the TC to basically finalize all the work and make it done.
E
And
actually,
if
it
looks
massive,
but
it's
not
introducing
too
much
making
changes
if
it
would
help,
I
would
add
what
we're
actually
breaking
in
five
languages.
I
had
a
chance
to
go
through
and
hopefully
it
will
be
a
very
small
part
of
what
what's
actually
breaking.
C
Of
course
perfect,
thank
you.
Let's
follow
up
on
that
then,
and
finally,
clarify
tls
secure
default,
behavior.
D
Yeah,
this
is
specifically
for
grpc
I'll,
be
quick.
I'm
really
just
looking
for
somebody
who
is
familiar
with
the
the
configuration
specification
for
the
grpc
security
options
to
take
a
look
at
this
issue
and
comment
when
they
have
time
doesn't
necessarily
have
to
be
right
now.
D
But
the
short
version
is
that
the
if
the
endpoint
is
not
specified,
the
default
value
is
given
as
http
localhost,
port
4317
and
the
insecure
variable
is
defined
to
be
false
by
default,
and
I'm
not
sure
whether
we
should
be
securing
these,
but
with
tls
by
default.
Does
insecure
just
mean
that
the
certificate's
not
validated
or
does
it
mean
that
there
is
no
tls
at
all?
It's
not
incredibly
clear,
so
we're
just
looking
for
for
some
clarity
here,
particularly
on
the
default
behavior.
D
G
I think the intent was that the localhost connections should be insecure and uncompressed; if I remember correctly, we agreed on that. If the spec is not reading like that, maybe we need to fix that.
G
D
The spec says that if you use the https scheme, then it is secure, and that takes precedence over the insecure configuration setting, but it does not specify the other way around; and the default is given as http, but the default for insecure is also given as false. So these are all sort of conflicting, yeah.
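The conflict, and one possible resolution, can be sketched as a tiny decision function. This reflects one reading of the discussion (explicit URL scheme wins, so the plain-http localhost default is not TLS), not what the specification actually says today; the function name is invented.

```python
# Sketch: resolve the effective TLS behavior from an endpoint URL plus an
# insecure flag, giving the URL scheme precedence when it is explicit.
from urllib.parse import urlparse

def connection_is_secure(endpoint: str, insecure_flag: bool) -> bool:
    scheme = urlparse(endpoint).scheme
    if scheme == "https":
        return True           # https takes precedence, as the spec states
    if scheme == "http":
        return False          # proposed reading: explicit http means no TLS
    # Scheme-less endpoints are ambiguous to parse reliably; fall back to
    # the flag only as a last resort in this sketch.
    return not insecure_flag
```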
D
I
think
it
just
needs
a
clarification
update.
I
would
suggest
yeah
localhost
should
be
probably
not
not
encrypted
by
default,
but
that's
not
the
way.
The
specification
reads
to
me
at
the
moment:
yeah
yeah,
so
we
can
move
on
it's
just.
We
opened
this
almost
three
weeks
ago
now
and
hadn't
hadn't
gotten
a
response,
so
we
just
wanted
to
bring
it
up.
C
A
Hi everyone. I wanted to make sure that we speak about one issue in particular, in person, right now. I listed four PRs with open debate, and I think the first one's really important that we settle; the rest of them I think we could take offline, and, if you're interested, please pick up those threads. I'll give you a brief summary of where I see this. There seem to be two positions: one position saying that view conflicts are impossible to handle consistently...
A
We
should
we
should
let
all
the
data
through
and
the
other
position
is
some
some
feel
that
we
should
find
a
way
to
fail
fast
when
a
view
configuration
is
supplied
that
creates
conflicts.
A
I
believe
the
ambiguity
is
that
instruments
are
not
necessarily
defined
before
views
and
therefore
you
end
up
in
a
situation
where
instruments
may
be
defined
after
views,
conflicts
may
be
created
when
an
instrument
is
registered,
which
is,
after
the
view,
is
registered,
and
it's
not
clear
what
fail
fast
actually
means
to
me.
I'd
like
someone
who
has
a
strong
opinion,
perhaps
to
clarify
what
they
think
on
this
issue.
P
I'd just like to first clarify the two positions. The first one was accurate, as I understand it; for the second one that you mentioned, my actual understanding is that when there is a conflict at runtime, the position is that the data is not flowed through, that that view is ignored, so not failing fast. In that case, that's the position that some folks on the .NET SIG prefer.
A
My
understanding
was
that
they
started
using
the
phrase
fail
fast
to
explain
that
the
the
view
is
failing
fast,
meaning
ignored.
Maybe
I
I
don't
quite
understand
the
meaning
of
failed
fast.
Another
way
I
have
interpreted
it
is
that
you
know
I'm
not
allowed
to
fail
when
I
create
an
instrument,
that's
that's
given
as
a
guideline
and
if
I
wanted
to
essentially
crash
my
application
whenever
a
duplicate
instrument,
registration
conflict
happens
after
views
are
processed.
Well,
here's
what
I
would
do.
A
I
would
have
a
separate
call
into
the
sdk
saying
I
want
to
crash
whenever
an
instrument
duplicate
registration
conflict
happens
and
then
the
instrument
is
created
successfully.
That's
contract
with
open
telemetry.
The
application
code
is
not
going
to
crash,
but
the
call
into
the
sdk
that
says
I
want
to
crash
when
there's
conflict
then
takes
over
and
lets
me
crash.
I
don't
think
that's
particularly
good
behavior,
but
it
does
seem
to
meet
the
specification.
D
My
understanding
of
the
of
the
fail
fast
guidance
was
that
we
should
fail
at
configuration
time
only
if,
like
it
will
fail
every
single
time
this
configuration
is
applied
and
then,
in
all
other
cases,
fail
silently
as
in
not
crash
the
end
users
application,
I'm
I'm
open
to
some
sort
of
a
strict
mode
like
what
you
said,
but
in
the
default
behavior.
I
think
you
can
either
fail
fast
or
fail
silently.
You
really
can't
do
both.
A
How do I consistently handle that situation if they were reversed? I'm going to ignore that instrument; that second view gets ignored if the instrument is registered first, so there's a race condition, and I don't know how to handle that.
K
That
yeah,
that's
a
good
point
and,
as
you
talked
about
before,
instruments
aren't
allowed
to
fail
instruments
like
during
instrument
and
registration.
They
always
have
to
succeed,
but
in
this
in
this
race
condition
you
outline,
then
they
can
have
the
behavior
of
view
conflicts
where
the
data
is
dropped.
A
So
this
was
the
position
that
leads
me.
This
is
the
fact
that
leads
me
to
support
a
position
saying
we
should
just
process
all
the
views
pass
through
the
conflicts
and
issue
warnings,
and
then
the
consumer
is
able
to
see
all
the
data
and
they're
able
to
see
the
conflicts
and
the
users
somehow
going
to
find
out
that
they're
having
conflicts.
D
Q
D
A
So this notion of a strict mode has been requested more than once; it was Joshua who really pinned it early in this discussion, and I support the idea. I kind of just described something like that.
A
A way to implement a strict mode is to have a hanging call saying "I want to crash whenever something bad happens", and you could just consider this normal error handling: there's an error handler in your SDK, you get an error that describes what's happening, you can see the error that says duplicate instrument conflict.
A
So
it
does
seem
like
sdks
have
options
if
they
want
to,
or
a
user
has
an
option
if
by
handling
errors,
essentially,
if
they
want
to
see
these
things
and
be
strict
and
then
the
only
consistent
solution
I
see
is
to
just
process
all
the
views.
Any
last
comments
on
that.
A
Riley,
how
do
you
feel
about
this?
I
think
you
were
in
the
other
camp.
F
A
Well, that sounds very good; I didn't mean to click. That sounds good to me. I mean, I think we're all kind of recognizing that there's not a great solution when there are view conflicts. When this originally came up, a month and a half ago, the point was: people are going to use OTel and they're going to have conflicts, because of existing libraries.
A
If
that
is
resolved,
which
I
didn't
expect
it
would
be
so
easily.
There
are
three
more
issues
here.
A
I
consider
these
in
order
of
importance.
The
option
for
multi-instrument
callbacks
is
one
that
I
wrote
it's
because
the
goat
gosig
really
wants
this,
and
there
is
some
debate
and
I'll
summarize
it
this
way.
At
least
this
might
be
biased
summary.
But
to
me
the
differences
between
there's
there's
a
proposal.
A
Here,
that's
going
to
create
a
different
type
of
callback
and
to
me
it's
all,
syntactic
sugar,
a
callback
is
a
piece
of
code
that
executes
that
makes
observations
and
in
the
current
spec
the
callback
is
a
thing
that
returns
a
list
of
values
and
I'm
trying
to
make
it
be
a
call
back
as
a
thing
that
runs
when
you
need
to
define
instrument,
values
and
the
the
idea
that
this
is
syntactic.
A
Sugar
is
how
I
get
over
the
fact
that
well
so,
there's
going
to
be
two
instrument:
apis,
yeah
they're,
just
they're
all
syntactic
sugar.
For
the
same
fundamental
thing,
that's
my
position.
Other
people
seem
to
to
not
agree.
The
reason
we're
here
is
that
the
go
team
decided
to
go
this
way
and
that's
because
we
think
the
performance
implications
are
very
severe.
A
If
many
of
the
reasons
that
people
want
these
asynchronous
callbacks
is
because
they're,
you
know,
reading
a
file
full
of
expensive
measurements
and
once
you've
read
that
file
once
you'd
like
to
make
all
your
measurements
so
and-
and
my
opinion-
is
that,
like
the
the
number
of
of
these
asynchronous
instruments,
that
you're
going
to
create,
is
very
small
and
so
going
to
the
length
going
to
some
length
is
okay
to
make
them
usable.
A
So
I
would
rather
go
to
this
more
verbose
interface
in
order
to
get
the
performance
that
I'm
after
with
multi-instrument
callbacks,
and
I
actually
don't
care
if,
if
people
support
both.
So
the
debate
is
about
whether
we
should
have
this
allow
this
and
whether
we
should
support
more
than
one
interface
in
the
same
sdk,
which
doesn't
bother
me.
But
everyone
else
seems
to
have
some
hang-ups
on
that.
K
A
Thank you, Jack. In the interest of time, I'd like to summarize the other ones that are open. This one: default to the histogram aggregation. Basically here (this one is yours, Jack) we're trying to say that it's going to be dangerous if we allow this sort of variable histogram into our default, because you could make it very easy for a user to get a change of data type when they just upgraded their OpenTelemetry library; we want to make sure that never happens.
A
I think that we came around, with everyone supporting this, but it just seemed contentious enough that we should make sure nobody disagrees before we go forward; it has a lengthy discussion in it. Okay, I'm going to...
A
We
can't
catch
up
with
this
one
in
the
moment,
it's
too
big,
but
but
after
this
meeting
I'm
going
to
go
read
through
that
again
and
make
sure
that
we're
on
the
same
page.
Lastly,
I
was
hoping
bogdan
would
be
here
he's
this.
This
is
widely
agreed
to
here,
but
bogdan
kind
of
bombed
it
at
the.
H
A
...what we're doing, and the remark is, okay, so, I mean, why do we have views at all, you know, if we're going to specify the type of an instrument and then you expect that that will produce exactly a particular type of data? Why do we have views, is my question. I think there are reasonable ways to reconfigure an SDK, and it's okay to have the specification for semantic conventions tell you what the default behavior is, and the user can change it if they want.
A
I
don't
feel
like
that
makes
submit
conventions
useless
for
back
ends.
It
just
means
that
the
users
have
the
opportunity
to
change
their
defaults.
A
K
One quick comment on dropping the histogram aggregation, and, you know, the discussion about whether we should keep the "best possible histogram". The best possible histogram is all about retaining the ability to use exponential histograms when they become available, because they're not in the initial stable metrics specification, and so we're effectively getting rid of that by getting rid of this best possible histogram. And I want to...
K
I think it's important that, if we do get rid of that, we retain some sort of simple configuration parameter to enable exponential histograms, because they're going to be so useful at modeling a wide range of distributions; they're significantly better than explicit bucket histograms by default. So yeah, if we get rid of this, just replace it with some other available configuration parameter when those become available.
A
Q
They were mainly to make sure that we don't drop the support for actually making it possible to switch over, because from what I understood in the first draft, it was just: when we drop it, then there is no way of going back to exponential histograms. But I've also come around to the fact that we will probably want, part of me will probably want, to specify a default aggregation and then have users opt into whatever it is. And the hard point here is that the explicit bucket histogram that we have defined, as it is currently, which we defined as the default, is probably not going to be useful for the whole range of things that we want to use.
Q
K
Yeah, maybe, I think, sorry, go ahead. Maybe we can unblock this PR by opening a follow-up issue to say that, you know, when explicit bucket or exponential histograms become available, we add a configuration parameter that allows them to be enabled by default for the OTLP exporter.