From YouTube: 2022-10-04 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Okay, let's start, if that makes sense. Thank you so much, everybody, for joining. We have a few items today. The first one is that a new release is going out today. Please review it; it's going well so far. If there's anything we should mention or clarify, let us know as soon as possible. We don't have many changes; we released 1.13 only a few weeks ago, so this is a small set of changes. We're probably good.
A: Okay, next item: OTLP metrics exporter configuration extends beyond the scope of its ability. Who put this one up?
B: I put that on there because we intended to discuss it last week. It got pushed off the end of the agenda last week, and I wanted to bring attention to it. Tyler was the original filer, I believe, and he could also speak to it. But I can summarize this as some apparent confusion about the role of exporters and readers in configuring the export pipeline, and, in particular, how we interpret the environment variables that are meant to provide defaults for those exporters. I guess we have a difference of interpretation.
B: The way I had interpreted it, there are these environment variables: there's one today, and a second one was introduced. The first is the temporality preference; the second is a histogram aggregation preference. These two variables would let you configure an OTLP exporter, and the point of contention here is about who is responsible for what. The way I see it, readers are a configuration mechanism for exporters, getting them the right type of data and the right aggregations.
B: My interpretation was that those environment variables apply just once, to the default exporter that gets configured through the environment, which is not the same as any other OTLP exporter. My reasoning there is that we have variables like the endpoint: you wouldn't assume that the endpoint is meant to be defaulted across every OTLP exporter.
B: It's just one exporter, so that's my position. I'm not sure I've summarized this very well, and I'd like to hear from Tyler or Jack, if they're here; they were the first two people to comment on this issue.
C: Yeah, I'm here, and I agree with that. So, in Java, we have our OTLP metric exporters, and they don't care about the environment at all.
D: I also want to call out real quick the notion that the configuration extends beyond the ability of an exporter. We have a sentence in here, which is maybe a bad sentence, but it says: metric exporters always have an associated metric reader, and the aggregation and temporality properties used by the OpenTelemetry metrics SDK are determined when registering a metric exporter through their metric reader. This was intended to convey that relationship. It's probably poorly worded, but the idea behind it was that a metric exporter can participate in the temporality and such of a reader, as makes sense, because overall, when we were defining the specification, the exporter was the important concept for what aggregation temporality you would want to select by default. That was the whole idea behind it.

So one way I interpret this is that maybe the specification wording is poor and we need to clean it up a bit, so that it's clear that having that configuration on an exporter actually is the right place to put it, because it's possible there's an implementation issue here. But the second thing is: I just want to reinforce what you're saying, that there's a default, environment-based configured exporter, and you do not have to listen to environment variables every time you instantiate OTLP.
E: No, I'm just going to say I'm thoroughly confused at this point. There's just so much conflict of concepts in the specification, I think, that it's really hard to understand what's going on here. I see Josh's point, and what Jack's saying as well: there's a single default exporter that's configured via environment variables, and that seems to make sense. I don't understand it in the context of this configuration variable, so I think we could do a better job of linking those two, or having some sort of understanding in the specification that says they are related. But, Josh Suereth, to your point that the exporter is able to configure a reader with its histogram preference, or, sorry, its aggregation preference or its temporality preference: that sounds great, but I never interpreted that anywhere before in the specification. There are other parts of the specification that talk specifically about a reader having a default aggregation, and that being configurable, and I'm worried that we have a lot of competing ideas of what the specification is defined by, but not a unified one, and that's manifesting in this disjoint language that we're finding in the specification right now.
F: Well, it's just like the batch processor: it can exist without an exporter; it just doesn't really do anything. Actually, it does do something: it batches, and then it'll send that batch when the exporter is added. That's the same thing I did with the readers. Maybe that's wrong, maybe it didn't respect the spec, and that's fine, I can change it. But the way I read the spec is that a reader could exist, and then you add an exporter, and then it sends out on the next collection.
C: So that might be a difference in implementation. In Java, both our batch span processor and our periodic metric reader can't exist without an associated exporter; there's no creating them and then later associating an exporter. I need to check other languages to see whether that's actually specified or whether it's just an implementation detail. But we also have to be specific.

We can't just talk about readers and the periodic metric reader as if they're the same thing. The periodic metric reader is a very particular reader that reads on an interval; it's specified, and it has an associated exporter. But there can be other readers: you can implement the Prometheus exporter as a reader as well.
D: We tried to unify exporter and reader, and we also tried to make it so you don't have to unify them, because we had two kinds of implementations: one where there was a single notion of an exporter that could handle both pull and push, and others where they wanted pull to be readers and push to be this exporter thing. So the spec tries to allow both of these concepts to exist in tandem, where an SDK can choose to implement a single exporter interface that does push and pull, or can implement push as the metric exporter and pull as a metric reader.

And so that's why you have this dichotomy: if you have a reader-exporter combination pair, the exporter needs to influence the reader, and that's specified in the exporter specification, not in the reader specification. So it could be that we need to go update the reader specification to account for that, so it's more clear. But you're calling out a little bit of an inconsistent spec, and you're absolutely right: it's there, and that's why it exists. Do we want to remove that? Do we want to unify here? That's something we could try to push for over time across these implementations.

We didn't do so during this effort, but that dichotomy is a thing we specifically tried to keep alive. It's just that now you see the language is super complicated, and I think people are running into problems. So my suggestion is: maybe we could be prescriptive on an implementation but flexible in the specification, for folks coming in and instrumenting metrics. Maybe that's the best path forward. But, legitimately, this is confusing, because it's allowing two things that are subtly different to both coexist.
E: Yeah, so I think we share the same goals here in trying to make things better, for sure, but I'm just really worried at this point. This kind of came up because I looked at this configuration option for this exporter, and I'm going: well, there's no way to do this in Go, because the exporter does not communicate an aggregation or a temporality to a reader. So how is this supposed to even take effect? And what I'm hearing is: well, okay, that's also not the case, I guess, in Jack's situation, because there's something that sits outside of it. But I just want to make sure that we have common implementations, because it'd be a real problem if in Go we're doing one thing while in Java we're doing another, and then a year goes down the line and all of a sudden there's an incompatibility that people just cannot resolve. That, I guess, is the key thing that we need to be addressing here.
B: I'm confused about the actual concrete confusion here. I'm familiar with the new Go SDK, I'd say, so I don't see how you couldn't do it, and I'm worried that we haven't clarified the actual concrete point of difficulty. So: is there an automatic OTLP exporter that you can configure through the environment? I haven't seen that yet; maybe it's not there yet. But if so, it's neither reader nor exporter. It's code that the SDK uses to initialize the default exporter, and I don't think it needs to be part of the exporter or the reader; it's somewhere else. And I want to get that really clear right now.
E: See, and that's the thing that isn't clear to me: that this code relies on something external to what's provided by the SDK. Because, like what you're saying with the endpoint for OTLP: the OTLP exporter in Go for traces looks at that environment variable, and if it exists, it uses it. When you set that up and you don't provide an endpoint, it'll use it. It doesn't matter that it's a singular one-off thing set from the environment variable; it's set that way.
C: I suppose that's true of all the environment configuration properties: it's not clear whether they have to apply to all instances or just a particular instance.
B: Okay, this is concretely what we're getting out of this conversation, then: I don't think we've made clear whether environment variables apply as defaults for implementation objects, or whether there's some sort of grand interpretation of the environment at the beginning that sets up a bunch of stuff. I was interpreting it the second way, but, Tyler, your point is well taken.
E: Yeah, well, that's definitely the case in Go, and I don't know about others, but I think I've seen it in a few others, where you do interpret the environment variable as if it must have been set via configuration. But I also want to point out that Josh MacDonald, and Josh Suereth also, kind of points out another issue that is maybe separate.
E: Maybe it's derailing the conversation a little bit, but I do want to understand this better, because I'm confused by how the reader is configured with a default aggregation, but then also, somehow, the exporter is able to configure it as well; that's an inconsistency that I didn't understand either. Josh, if you don't mind, I'd love a link to the language, because I'm trying to find it in the spec, on the exporter configuration side of things.
D: Okay, I will include the link here in the chat. But, fundamentally, the thing to know here is that you have two types of readers. You have an independent reader, which can just do things on its own, like the Prometheus exporter, sorry, the Prometheus reader, in Java, for example. Or you can have a reader that attaches an exporter. The idea is that any reader implementation that attaches an exporter needs to basically defer the aggregation and temporality decisions to that exporter. We moved that over into the exporter, and it's confusing. The second type is these independent ones that basically read on their own and do their own thing without even having an exporter, in which case they are able to make the decision themselves. But the decision has to go all the way out to whoever owns the, you know, protocol-export thing, if you will.
E: Okay, so I'm just going to try to restate what you're saying, to make sure I understand it. If you have an OTLP exporter and you register it with a periodic metric reader, that exporter should be able to tell the reader: hey, I want this kind of aggregation and I want this kind of temporality. Okay, and that's where I was getting confused.
C: I don't think metric reader has those properties on its own. Metric reader is just an interface, and it's the implementations of that interface that have those properties. So the implementation is the periodic metric reader; it has to implement that method of "what is my default aggregation" and "what is my default temporality", and it can choose how to implement those however it sees fit. With the periodic metric reader, it just delegates to its associated metric exporter.
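The delegation being described can be sketched with a pair of toy types. All names here are illustrative, not the real Java or Go SDK API: the point is only that the periodic reader answers the temporality question by asking its paired exporter, while an independent (pull-style) reader answers it itself.

```go
package main

import "fmt"

// Temporality selection as discussed: a reader that wraps an exporter
// defers the choice to it; an independent reader decides on its own.
type Temporality string

type MetricExporter interface {
	Temporality() Temporality
}

type MetricReader interface {
	Temporality() Temporality
}

// otlpExporter is a push exporter carrying its own temporality preference.
type otlpExporter struct{ pref Temporality }

func (e otlpExporter) Temporality() Temporality { return e.pref }

// periodicReader pairs with an exporter and delegates the decision to it.
type periodicReader struct{ exporter MetricExporter }

func (r periodicReader) Temporality() Temporality { return r.exporter.Temporality() }

// independentReader (e.g. a Prometheus-style pull reader) owns the decision.
type independentReader struct{}

func (independentReader) Temporality() Temporality { return "cumulative" }

func main() {
	var r MetricReader = periodicReader{exporter: otlpExporter{pref: "delta"}}
	fmt.Println(r.Temporality()) // delta: the paired exporter decided
	fmt.Println((independentReader{}).Temporality())
}
```

The interface stays the single entry point, which matches the point made above: the reader interface has no temporality of its own; each implementation decides where the answer comes from.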
B: It doesn't say "an implementation"; these are just requirements. So you're going to set up an exporter, and you know the temporality you need to use, and then there are no more questions, really. I don't see what the problem is: the implementations can do whatever they want if they meet the specification.
D: I'll call out two things here, because I think this is important. One is that the influence of the metric exporter on the metric reader, and all that aggregation temporality, was a late-breaking change in the spec. It was one of the last things we did, so I think it is fair to say that we could do some language cleanup here to make it more apparent what's been done. I suspect, if we were to go through the history, this notion of the metric reader owning these two ingestion attributes may have predated some of those changes, or needs a little bit of tweaking. For example, I agree with Tyler: it's confusing as hell if I read that specifically, because those options really only matter when the metric reader is the owner of those values. We could change this language to say: if a metric reader takes in a metric exporter, then you don't provide these, but if it doesn't take a metric exporter, then you do provide these.
E: So I'm saying: if you're a new user and you don't know what temporality is associated with this, you just want to pass the exporter, and, according to the specification, it should set the temporality. And what I'm saying is that in the Go SDK that's not the case: you also have to know the temporality. And I'm with you: it makes sense to me, because I understand what this temporality thing is. But I'm also worried that if that's not the case elsewhere, like in Java, where you just pass the exporter and it handles the whole pipeline setup and you don't have to know the temporality, then maybe we're doing something wrong in Go, is what I'm trying to say.
B: The only thing that configures temporality, when you're setting up an exporter, is that you're going to configure a reader. Whoever is doing that is the one responsible for setting the temporality, and if you know your exporter's temporality, then you take that exporter's temporality and you give it to the reader. I don't understand automatically inferring something from the exporter; when you set this up, you know what you're doing.
E: So, Josh, this is my concern: your frustration is my concern, because I think you have a very different view from what other people on the call have, and I think that's manifested in the specification, where there were conflicting ideas. And now a reader who comes along afterwards doesn't understand what's going on, because there are conflicting ideas as to how things get configured.
D: I want to call out that, Josh, what you're saying is how I think the spec was supposed to be written. But then, when Tyler raised this bug and I went and reread the spec with fresh eyes, I thought: oh crap, this is written in a very confusing way that doesn't really outline the intention of what we were trying to do. So I totally agree with Tyler that this is a confusing spec.
B: Okay, we can improve the language, but I just don't see the confusion as well as everyone else may. That may be because I wrote part of that language. And there is the line; it's quoted on the screen. If you go down half a page, it says: if the SDK provides a mechanism to auto-configure an exporter, then... Carlos, if you could scroll down, the key paragraph is right there: if a language provides a mechanism to automatically configure a metric reader to pair with an associated exporter.
I: Yeah, I can feel the frustration that was mentioned. Probably, for people who have been writing those bugs, you have the context, so it seems clear to you. For people who are implementing the specs, they don't have that full context; just by reading the English, they got confused. So I figure there's something we can do.
D: Another way to phrase it, and I think this is likely what we need to target: if you read the entire spec and put all the pieces together, because you have to read everything to kind of understand it, that's great. But actually, when you're locally reading "how do I implement metric reader" and you just read that little portion, there aren't enough call-outs to the other parts of the spec that relate to it, and I think fixing that could help a good bit. So, literally: for metric reader, there's an interaction with metric exporter that we should just call out and say, here's a link to that interaction that you need to support as part of the spec, as opposed to forcing you to read metric reader, then metric exporter, to understand that interaction. Because, again, if you read through just the metric reader part, and that's it, you're like: okay, this is metric reader, this is all I need to know. It's not necessarily complete unless you read the rest of the interactions, because they don't show up in the place where you're expecting them. Is that fair? So if I were to suggest some fixes to the spec, besides possibly rewording things, one thing I would definitely do is make sure, for metric reader, that if there are interactions with other things, they're called out in the metric reader portion or linked in some fashion.
B: The other point coming out of this, then, is that we could specify that environment variables apply to all the objects to which they're meant to apply, meaning every OTLP exporter will take a default of whatever endpoint, for example. But we're still specifying that there is one automatically configured OTLP exporter, so that you can run the SDK without any configuration and get an OTLP exporter. So there's still a spec that says one exporter must be created using all the environment variables.
E: And I think that makes sense, in the sense that somebody provides the environment variable for the OTLP metrics exporter; but I could also see it being extremely confusing if they didn't. Again, I think we have a lot of context, so if we go and set it up, we know that this is only going to apply if this is there. But a user looks at all the environment variables that OpenTelemetry actually supports, and they say, like...
B: I just offered that we could change it the other way around; it sounds like you've already made that decision, or we've already made that decision, for the Go tracer. So if the environment variables affect every instance of the OTLP exporter, that's okay, but we should be consistent and specify it. Okay.
D: Yeah, so if the rewording of the spec, like the calling out to other components, makes sense, I think maybe let's rename the bug a little bit around that aspect. I saw Riley commented; I don't know, does someone have time to actually make that happen?
E: So I would also point out that I think there are two issues we've been talking about, and maybe that's a little bit of the confusion. There's the SDK wording around the metric reader, which should probably be captured in its own issue; and then there's the issue that we have here, which is a separate issue around the applicability of environment variable configuration to this exporter. I think this one is well scoped, but I think we need another issue to track the confusion around the exporter.

I can create the issue. I don't know if I'm the best to resolve it, though, so I'll ping Riley on that. I can have another look at the reader part, because I probably wrote most of that. On this bug for the environment variables: I remember we had some discussion about needing some systematic thinking about configuration and priority, how different layers of configuration override each other, and which one takes the ultimate call. It seems like a bigger issue, and because it is huge, people decided: okay, we're just closing our eyes, and, I keep saying "we", we keep adding environment variables.
D: That's a good point, but I still say we open the issue and track it, and hopefully someone eventually steps up to do it. That needs to get solved, absolutely, but let's at least track it. Let's at least have a thousand issues here, so someone finally feels the weight of it and tackles it. The other thing I want to call out here: Tyler, you and Tristan were the two voices of confusion, and I know you're actively working on metrics implementations. I just want to make sure that implementers of metrics understand the result of this conversation and can make progress. That's my most important concern with that second bug: do we know how to implement the metrics spec or not?
E: I am aware that I have implemented it differently than Java, and I will go and look at that; that's what I'm taking away from this.
F: So, Tyler and Jason, I can work with you offline to improve the wording of the reader section, give you a preview, and you think about it and let me know whether you feel it's clear when you implement it in your language.
A: Okay, that's it for this one; let's move to the next one, then. Okay, the next one is looking for additional reviews and comments on: clarify attribute support based on the wire protocol's definition of attributes. Yeah.
J: So, this is me. This has been open since June, and effectively what I'm trying to clarify here is that today the spec states that attributes are effectively a limited set of primitives. Simplistically, the protobuf supports attributes that can be nested, and the logs implementation of attributes requires nesting.

I think Christian blocked this this morning. It's gone through several iterations, if you want to look back at the history, but I'm really just trying to clarify: if someone goes and writes something and creates a protobuf with spans that have nested attributes, how should they be dealt with? Should we do what I know from the OpenTelemetry specification, which says no? But it's possible, so I'm just trying to get that clarified so that everyone functions in the same way.
K: I think Christian's objection is that elsewhere in the specification we link to this page to explain what the limitations are, particularly, I think, in the tracing API.
J: But on the line that he's called out here, it explicitly states, in the natural language, that it has to be defined. It's currently only defined to be supported in logs, and if it's not supported, then it should be converted, which I think is what most languages do now anyway, if you try to pass them something that's not a primitive.
L: Right, for clarity: our data model states that nested attributes are possible, at least at the OTLP level. It's just that our API definition in the spec says you can't do that for tracing, currently.
J: Yep, and then for logs it says you can, and then we're defining all the event APIs and everything else that still take attributes. But, again, then you get a disparity over whether an attribute can or can't be nested: it can for logs, but it can't for anything else.
K: So if you know that the data is produced by an OpenTelemetry SDK through an OpenTelemetry API, you know that it's impossible. But because that's not the only source of OTLP data, as a recipient there's no guarantee that you won't receive it from somewhere else, from the OpenTelemetry Collector, for example. Yep, I think that's fair, and, again, I think the objection that Christian has here is that it somehow also impacts the API specification, which we should try to avoid. I think that's my understanding.
L: Basically, another way of looking at it: after we created the tracing APIs, we expanded the definition of an attribute to include nesting, and we still have one concept of an attribute at the data level, but now there's a discrepancy at the API level about how nested objects should be represented. In other words, we didn't go back and readdress how attributes are recorded in our existing APIs when we added logging to OpenTelemetry. And, for one, I mean, it's work; it would be good to understand whether this is API-breaking, or whether it's just changing how things passed to the APIs would be recorded as data, and whether that would be breaking for people. But if it's possible to smooth that out, it seems like it would be better in the long run to just have one definition of attribute that works the same everywhere. That seems like, in the long run, a less confusing situation.

One practical place where that comes up is span events. You have this existing span events API that has attributes, and in the future you're going to have span events being represented as logs, and vice versa, and that translation becomes more confusing if how that data is represented as attributes changes between those different models.
K: It's been open a long time; I opened it two years ago, and from time to time we get additional hits on it: people comment and ask for it with some use cases, but we're still debating it. So I would not make this change dependent on that, because this tries, at the very least, to make it clear that for logs this is really the case: we do allow both in the API and in the data model and on the wire.
J: Yeah, my initial iteration of this linked to 324, which is what you're talking about, and tried to resolve that, but it's been whittled down to try to get it checked in.
K: The reality today is that we have two different types of attributes in the spec. The API is written like that; I don't want it, but that's the reality. I think we need to change the spec to reflect the reality. As much as we want uniformity here, and I do want it, not until the other issue is resolved and we come to an agreement there that we really do want that.

That's an implementation choice. Maybe you do your runtime checks; maybe you don't allow it. You have one compile-time definition of what an attribute is, but in the tracing API, at runtime, you fail if it doesn't match, let's say, the shape that you're reporting. Maybe that's a choice.
J: Yeah, and that's what I was alluding to with this line, saying: okay, we can have attributes, but at the implementation level, or when it goes down the pipeline, that would be when the check happens, and it would have to be converted. Because, really, it's at the exporter level that it should be converted, not necessarily the span level.
L: I'm actually not clear on what the spec currently says as far as passing a nested object to a span attribute. Do we actually have something in the spec that says...
K: The value of an attribute can never be a map; that's what it says. So if your implementation does that, according to the spec, there's just no way to do the wrong thing, right. The problem with that is that for logs we do need it: we need the maps for the logs. And now people who need to implement the logging API and SDK have a choice.
K: Either they introduce a new type called log attribute, which at compile time can be different, essentially, from the tracing or metric attributes and can allow maps as values of the attributes; or they extend the existing attribute data type, the one that is also used in the tracing and metrics APIs, so that it now allows maps as a data type. Once you do that, you're essentially changing your tracing API: you allow data that previously was not allowed.

I think you're saying: introduce a new data type, and instead of calling it log attribute, call it extended attribute, which now allows nesting, for example. Then the logging API accepts this new data type, the extended attribute, while the tracing API does not; it continues using the old definition of an attribute. And if we need to extend it in the future, if we want to allow the tracing API to also accept this extended attribute, we can do that by introducing new methods on the span, for example, to record it.
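A minimal sketch of this "extended attribute" option, with made-up type names rather than any real OpenTelemetry API: the existing attribute type stays primitive-only, and a wider type that admits nested maps is accepted only by the logs side, so the tracing API is unchanged at compile time.

```go
package main

import "fmt"

// Value holds only the primitive kinds the tracing API allows today
// (simplified to one string kind, enough for the sketch).
type Value struct {
	Str string
}

// ExtendedValue additionally allows a nested map of extended values.
type ExtendedValue struct {
	Str string
	Map map[string]ExtendedValue // nil when the value is a primitive
}

// A logging API would accept the wider type...
func logAttr(key string, v ExtendedValue) string {
	return fmt.Sprintf("%s=%v", key, v)
}

// ...while the tracing API keeps the narrower one, so passing a nested
// value to a span is a compile-time error rather than a runtime check.
func spanAttr(key string, v Value) string {
	return fmt.Sprintf("%s=%v", key, v)
}

func main() {
	nested := ExtendedValue{Map: map[string]ExtendedValue{
		"code.line": {Str: "42"},
	}}
	fmt.Println(logAttr("exception.structured", nested))
	fmt.Println(spanAttr("http.method", Value{Str: "GET"}))
	// spanAttr("bad", nested) would not compile: the tracing type is narrower.
}
```

This mirrors the two-type choice above: the separation is enforced by the type system, and the tracing API can later opt in by adding new span methods that take the wider type.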
L: You have an attribute type in Java, right, and that doesn't include this. Yes, something like that would probably be what you would need to do in that case. It seems better to leave the door open for everything to be able to converge on a new extended attribute type, if it's possible to do.

This is also potentially problematic for some backends that don't allow this kind of nesting. But, to the point that was raised earlier: once it's in the proto model, those backends need to address it no matter what, because there's now a way for somebody to send that kind of structured data to them. So it just seems like it would be better to converge on the new extended attribute type. But you're right, Tigran, I wasn't thinking about typed stuff at the API level.
H: The new ones don't support homogeneous slices or arrays either, so we still need a way of dealing with those, which are allowable in the attributes today.
K: There are multiple use cases where nested attributes are at least desirable, maybe not absolutely required, but they make the shape of the data people want to record in semantic conventions more natural. And my personal opinion is that, yes, we should allow them in the API, and then have a well-defined way of flattening them, and then the exporters specific to the vendors that only support non-nested attributes should do that well-defined flattening in the exporter.
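The well-defined flattening mentioned here could look something like this. It is a sketch under assumptions: the dot separator and the function name are inventions for illustration, since the discussion has not settled on a scheme.

```go
package main

import "fmt"

// flatten turns nested maps into dotted keys, so vendors that only
// support flat attributes receive a deterministic shape.
func flatten(prefix string, in map[string]any, out map[string]any) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if m, ok := v.(map[string]any); ok {
			flatten(key, m, out) // recurse into nested maps
		} else {
			out[key] = v // primitives pass through unchanged
		}
	}
}

func main() {
	nested := map[string]any{
		"exception": map[string]any{
			"type":  "ValueError",
			"frame": map[string]any{"file": "app.go", "line": 42},
		},
	}
	flat := map[string]any{}
	flatten("", nested, flat)
	fmt.Println(flat["exception.type"])       // ValueError
	fmt.Println(flat["exception.frame.line"]) // 42
}
```

Putting this in the vendor-specific exporter, as suggested above, keeps the API and data model nested while giving flat-only backends a stable key scheme.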
L: Yeah, I should add: I think one of the reasons Nev is bringing this up is that in JavaScript, in the browser, you have a very resource-limited environment, and you have a lot of diagnostic objects that you're pulling from the browser, and the double serialization...
A: We have almost no time, six minutes, so let's see what we can do. Okay, well, the next one: Tristan, stack traces.
F
Yeah, this actually flows from the nested attributes, and I don't expect it to get figured out right now, especially with limited time, but I wanted to point people to the pull request and an issue that is wanting structured stack traces. From a user perspective, not every vendor is going to care to parse, say, an Erlang or Elixir stack trace, and so even if we provide what that format is, there's a good chance they won't do it.
F
So we won't get the features of links to source code of where this stack trace comes from, where this error or exception comes from. So, having something, even if it's not the full stack trace, because there are good arguments in the issue about why that is overly complicated. I think it would be nice to have something that allowed for structured fields of, you know, "this is the file that this event is talking about." So, just wanted to have that.
B
From what I remember, there was an OTEP, number 69 or something like that, by Kevin Brock, which we discussed extensively two years ago, and I tried to bring back some of the same feedback. It's in the issue that Tristan filed.
F
Yeah, so my understanding, because I think I was still around at that point, was that it was too complicated to get it to encompass everything. So my suggestion is maybe there's some minimal thing we can optionally add there. I don't know if that's possible, but yeah, I wanted to call it out in case anybody had ideas.
A
Okay, next one then, in that case: Diego, semantic convention strategic question.
G
Can you hear me? Yep, all right, hello. So I have a couple of questions regarding semantic conventions and stability. I was looking at the spec, and it says that they're marked as experimental, so I just wanted to confirm that that means a re-release of the spec may include breaking changes on that part of the spec. Is that what's going to happen?
G
Okay, all right. So do we expect that part of the spec to become stable? Okay.
D
We have an effort around stabilizing semantic conventions. There's a working group that's getting started, there's a Slack channel, and our first meeting is likely to be next week. Tigran and I are working on a design doc we're going to send out as part of the number one issue to deal with there. But effectively, right now, we can't define semantic conventions as stable, because there are a few underlying issues we have to resolve before we can go back, revisit them, and have them be stable.
L
I
I
can
say
from
working
on
this
before
I
went
on
sabbatical.
One
issue
is
we,
we
haven't
necessarily
had
subject
matter.
Experts
come
through
and
review
all
of
our
different
semantic
conventions.
So
we
did
do
this
with
a
working
group
for
HTTP,
for
example,
and
they
came
up
with
changes
they
wanted
to
make.
L
So
there's
this
question
of
like
do
we
really
want
to
stabilize
SQL
and
all
of
these
other
conventions
before
we
have
say,
SQL
experts
come
or
DB,
admins
come
and
look
at
it
and
say
like
no.
No,
you
need
to
do
it
this
other
way
or
it's
annoying
for
me.
So
I
think
that's
actually
one.
G
Okay, to give you some quick context, the reason why I'm asking is because in Python we are trying to figure out how to implement this, how to support this for instrumentations, and it may be important if, in the future, they will be considered stable. I mean, that fact can affect the design decisions that we make now.
D
That
is
going
to
be
public
relatively
soon,
but
we
can't
Mark
anything
as
stable
until
we
know
that
it's
good
and
that
it
can
be
stable.
So
we
expect
I
I.
Just
frankly,
we
expect
some
breaking
changes
around
resources
and
how
they
interact
with
metrics.
That's
that's.
Coming
from
the
way
existing
semantic
conventions
are,
it
could
be
with
the
way
the
exporters
are
defined.
You
won't
even
notice,
because
you're
say
using
Prometheus
and
so
resource
attributes.
Don't
even
show
up
to
you
that
that's
possible,
but
that's
that's
what
we're
looking
at
correcting.
D
So, I put some links to the actual project proposal of what we need to address and why, some pending tasks, and a project tracker. The number one issue that we're working on right now is resource attributes and sharing attributes between metrics and traces and logs. That's the number one thing we have to resolve before we can say, "hey, these things are stable."
D
So
if
you
want
to
participate
in
that
discussion
like
feel
free,
but
unfortunately
I
don't
think
we
can
Mark
things
as
stable
until
we
resolve
some
of
these
fundamental
issues.
A
We're
over
time,
but
if
somebody
wants
to
stay
Furniture
like
this
last
comment,.