From YouTube: 2022-07-27 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
B
My general comment would be to try to keep the language as consistent as possible with the tracing API, but otherwise I think it looks good, and we definitely need more reviews there, right, so especially approvers. Please take a look, and anyone else who is interested, reviews are very useful as well.
C
Yeah, no problem, but we could discuss briefly if it's okay, sure.
C
So the domain, for one thing: it does not eliminate the need to have a namespace for the event name itself. Is that correct? It's separate.
C
The reason we introduced the domain was so that the possibility of duplicates is lower when things evolve. So one domain, you know, could use some name for an event, and a year later somebody comes and claims that, hey, this event is better suited for their purpose, and to avoid that situation we said, you know, let's use the domain to avoid that conflict. And some examples of domains could be...
B
Yeah, no, I think it's no longer necessary, right. That's the whole point of the domain: you no longer need to prefix the event names by anything that ensures the uniqueness of the event names. That's the whole point, right. So if all browser events correctly include the event.domain attribute equal to browser, then it's no longer necessary; it's superfluous to include "browser." as a prefix for the event name. Yeah, it's unnecessary. I think it's not just unnecessary.
B
Events that carry the same event.name are still completely different events, right. They will have different structure, different meaning, different semantics, and that's totally fine. That's what the event domain provides us, right: the luxury that we're free to use event names within the domain without worrying about other domains.
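The idea being described can be sketched with plain Python dicts standing in for log records; the `event.domain` and `event.name` attribute keys are the ones under discussion, and everything else here is illustrative:

```python
# Two events share the same event.name but live in different domains,
# so they stay unambiguous without a "browser." prefix in the name.
browser_event = {
    "attributes": {"event.domain": "browser", "event.name": "page_view"},
    "body": {"url": "https://example.com"},
}
mobile_event = {
    "attributes": {"event.domain": "mobile", "event.name": "page_view"},
    "body": {"screen": "Home"},
}

def event_identity(record):
    """The (domain, name) pair, not the name alone, is the unique identity."""
    attrs = record["attributes"]
    return (attrs["event.domain"], attrs["event.name"])

# Same name, different identities: no conflict between domains.
assert event_identity(browser_event) != event_identity(mobile_event)
```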
A
Now, sorry, if I might just put in a naive question there: this namespacing thing is a problem all over the place, right, everywhere in semantic conventions, for resources, for attributes, and it seems like we typically do it with, you know, dot notation, right. Is there any particular reason you guys went this way, splitting it into two, domain and event name, rather than basically just having one field where you can say, for example, you know, browser.block?
C
So all the browser events, you know, could be sent to a separate pipeline, all the mobile events, all the Kubernetes ones, based on the, you know, domain; you could send them to different pipelines. Whereas if it was not a separate field, then you would have to, you know, compare against the prefix of the event name, which could be done as well, but having a separate attribute, I think, makes it easier. Yeah.
B
Yeah, that was, I guess, one of the reasons why we introduced the scope attributes. So event.domain is supposed to be recorded as a scope attribute. You provide it when you obtain a logger: you provide the domain name, and then all the log records that are emitted through that logger automatically carry the event domain, and it is no longer included in the log record itself. Okay, the benefit of that approach is then the log records on the wire.
B
They are batched per scope, and you can make a decision for the entire scope, a routing decision or a filtering decision, when you receive this telemetry. Let's say, for example, in your back end you want to send all profiling events to a different place, right; you can make this decision once per the entire batch of log records that belongs to that scope.
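A sketch of why this layout helps (hypothetical record shapes, not the actual OTLP protos): because the domain lives on the scope, the routing decision is made once per batch rather than once per record.

```python
# Hypothetical wire shape: log records arrive pre-batched by scope,
# and event.domain lives in the scope attributes.
scope_batches = [
    {"scope_attributes": {"event.domain": "profiling"}, "records": ["p1", "p2"]},
    {"scope_attributes": {"event.domain": "browser"}, "records": ["b1"]},
]

def route(batches):
    """One routing decision per scope batch, not per individual log record."""
    destinations = {}
    for batch in batches:
        domain = batch["scope_attributes"].get("event.domain", "default")
        destinations.setdefault(domain, []).extend(batch["records"])
    return destinations
```

With `route(scope_batches)`, the profiling records go to one pipeline and the browser record to another, without inspecting each record's attributes.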
B
If we do the opposite, like what you were describing, semantically that's valid as well, but you will have to look at each individual log record on the other side and make the routing decisions individually in that case, which is also going to be more computationally, I guess, less efficient, right. So you will have to rearrange, regroup the log records into different sets of batches if you want to send them to different destinations, yeah.
A
And semantically, what you're saying, and I'm realizing this now: the domain here, as it's called, is a scope. Basically, you know, it's essentially a batch scope, right, or literally the entire log gets stamped like that, and that also helps with, yeah.
A
There must have been other places, within the sort of semantic convention conversations previously, where there was a need to figure out how to also find a way to enumerate attribute values, right, because now what you call domain here becomes an attribute value. So how has this been previously solved? I'm pretty sure this is probably not the first time that semantic conventions have had a desire to define the enumeration itself of the value.
B
Yeah, so I guess we saw two cases for the values. It's either a predefined fixed set, like an enumeration, in which case the list of possible values is hard-coded in the semantic convention itself, or it's completely open, in which case you just describe what the expected value is, but it doesn't list the values themselves, right.
B
I don't think we had a similar case like this, where we know kind of what the possible list could be, but it's not closed, right; it's probably open, right, because you probably want to have profiling or browser as possible values for the domain, but you also don't want to limit the possibility to include any other domain there. So it's kind of a unique situation. I can't remember any other semantic convention having a requirement like this.
B
Yeah, it's probably an open set, but with OpenTelemetry semantic conventions some of the possible values will still be defined. Right, we will say: here are some known values that you can use; everything else is probably valid as well. That's one of the options. Or we say that if you want to use any other value, for example, you always prefix it by something, like custom domain dot whatever. That's what I was referring to, and I said I would think about it, but I didn't, so yeah.
A
So the YAML here has a specific field, examples, right. Reading this obviously makes sense, but my question then becomes: I think examples basically follows that, well, you know, use this or use something else, right. But is this normative, in the sense that, like, you know, if it's a browser, please use "browser"? Is that the takeaway for the user?
B
Yeah, I think it is just an example. So if you look elsewhere, where it's not an example, it actually says these are the possible values that should be used; it uses a different, much stronger language, and in this case I think we need the stronger language. So, I don't know, maybe it needs to refer to somewhere else where there is a list of well-known domains.
B
Plus there would be some language which tells, okay, how do you add domains, or how do you express domains which are not well known, some custom domain? That is the part that I'm not sure about myself: how do we do that? But I think it needs to be there somewhere in the spec. So you're hitting the problem right on the head here.
A
Yeah, and I'm just curious, frankly; I'm not trying to kind of bring the big procedural hammer here, you know, if we think that we can stand behind this and just communicate it. Like, how many ways can you say browser, you know? Yeah, like, in the back ends, you know, we would like to sometimes just string-match, right, but sometimes you end up having to match against a whole set, yeah.
C
So maybe we should look at other places where the possible values are defined; let's say the span kind: client, server, and internal. I think the difference there is that it's a closed set, yeah, right, and in our case, for the domain, it's not.
B
So if you want something else to be there, then you need to modify the specification, right, and we explicitly say that expanding the list is allowed; renaming or deleting is disallowed, but expanding is allowed. That's one of the possible approaches. It's more restrictive: it then makes it impossible for someone to independently make a decision that they want to record something that belongs to a domain that we don't know about at OpenTelemetry. But maybe that's the right approach, I don't know, it's kind of, yeah.
B
Yeah, so, yeah, I don't know. We should either go with a closed list, in which case we should definitely make it expandable by modifying the specification itself, or somehow we should make it open, but still with some list coming as part of the OpenTelemetry specification. Somehow, right.
D
Okay, if it's a closed list, how would we accommodate custom events, like from somebody's custom business domain?
B
That's the thing: I guess we can't, in that case, right. Or maybe, with the closed list, one of the possible values is custom, right, event.domain equals custom, and in that case the recommendation is that the event name should have some sort of prefix which ensures that there is no conflict, or something like that.
D
Yes, it is; so the rules around event naming change in the case of this kind of catch-all bucket, yeah, which is a bit...
D
Yeah, from that standpoint I'd be in favor of an open list, with a set of named values that can be added to, defined in the semantic conventions.
B
So let's move to the next one, then. It's your PR; check.
D
Yeah, so this came up in the JavaScript SIG a while back, and there's an open issue for this in the specification.
D
Basically, the language around the log processor today limits the kind of functionality of what you can do in a log processor, because there's no capability to enrich the log records with additional data, potentially from context or baggage. So all the log processor can do today is send the immutable data to an exporter to be exported out of process. So it's very limited.
D
I think that was kind of an accidental omission when we were originally drafting the SDK specification, and this PR tries to bring the language of the log SDK in line with that of the trace SDK as much as possible. So I could just use an extra set of eyes on this; let me know what you think.
A
So fundamentally, what we're trying to do is to make sure that processors can actually process stuff, right, so that seems to make a ton of sense.
A
So I don't know, I think I'm going to take another look as well, but philosophically this one's easy to approve as far as I'm concerned. You know, there are some mechanics around it, in terms of mapping it back to previous expressions on the tracing side, that I'm not super familiar with, so I'll kind of spend a minute to see whether that all makes sense, but fundamentally, yeah, of course.
D
But then there's a ReadWriteLogRecord, which is what a log processor should receive, and one of the key questions there is which fields within a log record should be mutable within a log processor. I think for simplicity, early on, it should potentially be a narrow scope, so potentially just adding attributes, and then, on a case-by-case basis, whatever fields people want to edit, we can think about them as they're requested. But that's kind of the change in a nutshell.
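The narrow-scope option can be pictured like this (a hypothetical wrapper in Python, not the SDK's actual ReadWriteLogRecord interface): everything is readable, but the only mutation exposed is adding attributes.

```python
class ReadWriteLogRecord:
    """Hypothetical sketch: read anything, but only attribute adds are allowed."""

    def __init__(self, body, attributes):
        self.body = body                      # readable
        self._attributes = dict(attributes)

    @property
    def attributes(self):
        return dict(self._attributes)         # a copy: no in-place edits

    def set_attribute(self, key, value):
        # The one mutation the narrow-scope proposal would permit.
        self._attributes[key] = value

def enriching_processor(record):
    """A processor enriching from context, e.g. stamping a tenant id."""
    # "tenant.id" / "acme" are illustrative; a real processor would pull
    # this from the active context or baggage.
    record.set_attribute("tenant.id", "acme")

record = ReadWriteLogRecord("user logged in", {"level": "INFO"})
enriching_processor(record)
```

Other fields (body, timestamp, severity) stay read-only here; widening that surface is exactly the case-by-case question raised above.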
D
But, yeah, the trace SDK specification says that span processors should have access to mutate any of the fields that are changeable on the span interface, and so, you know, spans are separated into two stages.
D
There's a building phase, like an initialization phase, where certain things are required, like span links, and then, once the span has been started, a subset of the pieces of data on a span can be changed; notably, links cannot be added anymore. And so a span processor can do all the things that you can do to a span once the span has been started, but can't do things that are, you know, part of the initialization phase. And so there's a question in logs about, like, you know: logs don't really have this initialization phase.
A
Like, for example, and I'm cooking this up on the fly now, right, but let's say I don't even have all the resource information and I want to look it up as part of a processor, right. I might just have some container ID, and I might want to consult some master somewhere to resolve it back to, you know, a nice resource definition.
A
Then I would have to access that, right, or be able to sort of at least add fields there, I guess. And then, I mean, you guys all know that logs are just, like, oftentimes just borderline gibberish, and you know.
B
Yeah, so are you saying, Christian, that we should just make the entire log record mutable in the processors, just all the fields?
A
Yeah, that's pretty much, basically, what I just said, yes. In fact, I'm trying to think of the downsides, right. I know that we are generally trying to evolve, as a software engineering practice overall, into a field where we care very much about mutability and, like, ideally, make things immutable and all of those types of things. But in this particular case I feel the very point of a processor is that you can flip the thing, right.
C
Yeah, what if it conflicts with the API? Like, let's say an example would be: the end user created an event, giving an event name. You wouldn't want the span processor, the log processor, to change the event name the user set.
A
Let's say you have deployed somewhere, you know, an older version of OTel, and you get an old-style expression of something like an event name; you ship a new collector, and maybe your distro has a thing, you know, that has some sort of more futuristic interpretation of what things should look like, and yeah, yeah.
D
And just, you know, to throw out another example from spans: one field that you can't update in a span processor is the span kind, and I'm not really sure why that is. But maybe, just as you're reviewing this, kind of noodle on the different fields and think whether there's any exceptional reason why a particular field might need immutability versus the others. I think that's kind of the key question, then.
E
I had one thing to share on there. I have a good concrete use case for why we want to mutate log records in the exporter. So we have customers doing scanning of their logs and events for credentials, keys, connection strings, and they have kind of perverse rules for how they do that: some of them will sample the logs, some of them, you know, selectively...
E
Some will get rejected, some get spilled off into a different pipeline, so they can kind of alert people that an issue was detected. If you think about doing that in a processor, it's very expensive; it's going to block as all the logs are written, on whatever thread. So we pushed back initially, saying you should do this in a processor, that's what the spec says, but the case is pretty compelling.
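A sketch of the kind of exporter-side scanning being described; the patterns and names are illustrative, since real rule sets are customer-specific and much richer:

```python
import re

# Illustrative secret-matching patterns; real deployments use far more rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)password=\S+"),
    re.compile(r"(?i)connection_string=\S+"),
]

def scrub(body):
    """Redact anything that looks like a credential before export."""
    for pattern in SECRET_PATTERNS:
        body = pattern.sub("[REDACTED]", body)
    return body

def export(records):
    # Runs on the exporter's dedicated thread, so the expensive scanning
    # does not block the application threads that write the logs.
    return [scrub(record) for record in records]
```

Doing this in the exporter rather than the processor is exactly the trade-off discussed: the scan stays off the hot path, but it requires the records to be mutable (or re-creatable) at export time.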
E
You know, on the dedicated thread that we have for the exporter. So we kicked this around a lot and ended up just saying: okay, we'll just make everything mutable to unblock people. So that's sort of the state in .NET right now: we just made everything settable on LogRecord. So, depending on what we kick out of this discussion, I know those customers, users, will be very interested, if that makes sense. Hopefully that was helpful.
D
It does make sense, and so, in Java, all of our exporters, our log exporter, metric exporter and span exporters, accept immutable, like read-only, interfaces representing span data, metric data, or log data. But we still advocate to our users that, when you want to filter out attributes or anything like that, you can do so in an exporter.
D
But the pattern is to create a delegating version, or a wrapped version, of those interfaces, like a wrapped metric data or wrapped span data or wrapped log data, where, when you're reading out attributes, you apply your mask or your filter, or you transform the data however you're trying to do it. And so the interface is still immutable and read-only, but you can still kind of filter out data before you send it on to your ultimate exporter, by providing your own implementation of the interface that selectively masks data.
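The delegating pattern can be sketched like this (in Python rather than the Java SDK, with illustrative names; it is not the actual SDK interface): the wrapper keeps the read-only shape but masks sensitive attributes as they are read out.

```python
# Illustrative set of attribute keys to mask.
SENSITIVE = {"password", "api_key"}

class LogData:
    """Stand-in for the SDK's read-only log data interface."""

    def __init__(self, body, attributes):
        self._body = body
        self._attributes = attributes

    def get_body(self):
        return self._body

    def get_attributes(self):
        return dict(self._attributes)

class MaskingLogData(LogData):
    """Delegating wrapper: same read-only interface, masked view of the data."""

    def __init__(self, delegate):
        self._delegate = delegate

    def get_body(self):
        return self._delegate.get_body()

    def get_attributes(self):
        # Masking happens lazily, as attributes are read out.
        return {k: ("***" if k in SENSITIVE else v)
                for k, v in self._delegate.get_attributes().items()}

def exporter(log_data):
    # The ultimate exporter only ever sees the masked view.
    return log_data.get_attributes()

raw = LogData("login", {"user": "bob", "password": "hunter2"})
masked = exporter(MaskingLogData(raw))
```

Nothing here mutates the original record; the transformation lives entirely in the wrapper, which is the design point being made.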
D
So the log processor emit method is called synchronously; that's by definition.
D
And, you know, the different log processors have asynchronous components to them. For example, the batch log processor queues them up into a queue and then typically has a separate thread that, on some sort of interval, when the batch is full or when some interval has elapsed, will read off that queue and then send to a downstream exporter. So, you know, it's synchronously adding to the queue and then asynchronously reading from the queue and sending to an exporter.
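That synchronous-enqueue, asynchronous-drain split can be modeled roughly like this (a simplified sketch, not the real SDK batch processor):

```python
import queue
import threading

class BatchLogProcessor:
    """Simplified model: synchronous enqueue, asynchronous drain-and-export."""

    def __init__(self, exporter, max_batch=512, interval_s=0.05):
        self._queue = queue.Queue()
        self._exporter = exporter
        self._max_batch = max_batch
        self._interval_s = interval_s
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def emit(self, record):
        # Called synchronously on the caller's thread; returns immediately.
        self._queue.put(record)

    def _drain(self):
        # Separate thread: flush when the batch fills or the interval elapses.
        while True:
            batch = [self._queue.get()]          # block until something arrives
            while len(batch) < self._max_batch:
                try:
                    batch.append(self._queue.get(timeout=self._interval_s))
                except queue.Empty:
                    break                        # interval elapsed: flush early
            self._exporter(batch)
```

A processor that mutates records could do the same thing, mutating on the worker thread before handing the batch to the next stage, which is the variant discussed next.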
D
But I suppose, like, you know, you could implement your own processor that did a similar thing. So, you know, it synchronously adds them to a queue, and then a separate thread reads them off the queue, mutates them, still within the processor, and then sends them to some other processor. That's...
D
No, emit returns void; it has no response. So emit is like the end of a... you know, if you have a chain of processors, or a series of processors that are registered, the onEmit method is just called synchronously and sequentially for each of the registered processors. And so, you know, if you do everything right... I think a typical example would be...
D
Logger provider, LoggerProvider, yes, and then, you know, instead of a LogEmitter we would have just a Logger, yeah.
B
Myself, Java, and one more, .NET, I think.
C
Okay, so I have a question. There are events using log records today, like Kubernetes events, which won't be conforming to the API spec that we have now; like, you know, they are not defining the event name and the event domain, which is new with this API.
D
The collector, yeah; it, you know, receives Kubernetes events.
B
Well, it depends, I don't know. You're talking about the Kubernetes cluster receiver, I think... right now it's the Kubernetes events receiver, yeah, yeah, and that one, it's log records today. So you're saying, what happens with that? Should it be modified to start including the event domain and event name there?
D
So I think, you know, we introduced a new concept here that differentiates an event from a regular log record, and so, because that convention didn't exist before, the Kubernetes events receiver in the collector just emits regular log records. They may be event-like in shape, but they're not what we're calling OpenTelemetry events, because they don't meet the criteria. And so, you know, they can continue to produce those, but they're just going to be log records.
B
Yeah, that's one possible approach. Or... so I'm looking at the implementation: it uses k8s.event dot name, an attribute name like that, and puts the event name there, I believe. So I don't know if there is a semantic convention for that. Let me try to see if we have anything, because we have Kubernetes semantic conventions, but I don't know if it tells how the events should be emitted.
B
Yeah, exactly, so nothing about the events there. So this is kind of... the specification does not define the behavior of this implementation; the collector is not defined by the specification today anyway. So it's an open question. I don't know whether it needs to stay this way, just log records, as you said, or it needs to be modified to conform to the concept of the events as we describe them here, with the domain and the name.
D
I think it would be good if, long term, they do end up publishing OpenTelemetry events as we're describing; I think that's a great use case for events, these Kubernetes events. So, yeah, I'm thinking if they could do that long term, that'd be good. Another thought is that, in the short term, if somebody is interested in having them treated as OpenTelemetry events, conforming to the conventions that we've described, they can use a processor to do that.
D
So they could use the transform processor to change the attribute that contains the event name from whatever convention they have to the event name attribute, and then they could add a static event domain attribute with a fixed value.
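For illustration, such a collector configuration might look roughly like the following. This is a hedged sketch: the transform processor's statement syntax has been evolving, and both the `k8s.event.name` source attribute and the `k8s` domain value are assumptions here, not settled conventions.

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Copy the receiver's event-name attribute to the new convention...
          - set(attributes["event.name"], attributes["k8s.event.name"])
          # ...and stamp a fixed domain value on every record.
          - set(attributes["event.domain"], "k8s")
```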
C
I don't know who the owners are for that piece of code; I can just let them know.
C
Yeah, I added that. So we said in the data model for logs that, for attribute values in log records, you know, they can take maps as well, right, or any value, yeah. And I want to... there is an issue where there's some discussion.
C
You know, on this topic, even for spans and resources and other signals. I don't know if it's needed for metrics, but I think I want to understand, in general, what is the concern in generalizing, having a map data type for attribute values not just for logs, but for spans and resources as well.
C
So if that is the only concern, then, you know, we could come up with some convention, right, on how to flatten the nested structure.
C
I think it conflicts with the current dot notation, but in addition to dots, you know, I was thinking of, you know, another reserved keyword to indicate that, hey, from here on it represents a map. So let's say http.headers: http.headers.request, ...
B
Yeah, I think I understand what you're saying: introduce a way to allow essentially nested maps everywhere, and also introduce a canonical way to flatten them in the exporters where the format dictates a flat map of values, right. Yeah, I think it makes sense. The concern here may be that... so it still maybe is not the ideal outcome for those destinations, and by allowing this in the specification we're going to encourage semantic conventions which kind of abuse the nested values, and that is probably not an ideal outcome.
B
So I don't know. From one side the argument is: let's make this consistent, let's allow it not just for logs but for other signals as well. On the other hand, we only need this right now really just for logs, and allowing it for other signals kind of has that downside. So I don't know; it's hard to say what the right approach is here.
C
Okay, and if we were to implement this only for logs: do any of those few language SDK implementations for logs allow that today, or is that something to be added?
D
Java does not, because we have a single attribute representation that's shared across all the signals. It might be tricky to allow nested attributes for logs and not for the other signals; I'm not sure how that would work.
C
Is the concern about exporting, you know, in a flattened way not applicable to logs, Tigran?
B
We actually have a document in the spec which describes how to... so, we have two documents. One describes how to convert non-OTLP data to OTLP, how to map data to the OTLP AnyValue, and we have another document which says how to do the opposite, how to convert OpenTelemetry data to non-OTLP formats, but that second one is very limited.
B
It is silent on the topic of map attributes, and even about the arrays it doesn't say anything special there. So technically this could properly list the full mapping, right: how do you convert AnyValue, map values, arrays, and all the stuff, to non-OTLP formats?
B
And we probably do need that, because for logs this is a valid case. But still, the initial objection there remains, right: if we really don't have a use case to do this for other signals, why allow it, and why then actually encourage people to introduce semantic conventions like that and then make this kind of non-ideal situation for those destinations which don't handle it well? It creates problems for certain destinations.
B
The fact that you define a canonical flattening logic is fine, but they may still need to do something about it, right; they may need to do the unflattening if they want to show it properly in the UI. It creates work, whereas if you don't allow it, this work doesn't need to happen. Yes, so I understand the argument.
D
Wasn't the primary reason for having these nested attributes, or one of the primary motivations, to limit the size of the exported data on the wire?
B
Possibly, I don't know; maybe that's another argument. I don't know if that's the primary one, but yeah, maybe. I mean, size-wise, we have attribute size limits defined, right. How do you apply that to nested complex objects? It's hard to tell, right? It's also not easy to compute; you may have to serialize and then look at the size of the thing. Whereas for primitive data types it's easier, right: if it's just a string, then you can just maybe truncate it, or whatever.
D
I
was
referring
to,
I
was
referring
to
you
know.
If
you
can
use
nested
attributes,
the
serialized
representation
could
be
smaller
than
if
you
have
to
enumerate
the
full
dot
delimited.
Prefix
yeah
yeah,
with
with
with
each
attribute.
D
Yeah
and-
and
so
I
think,
like
that,
conversation
is
kind
of
there's-
some
really
active
debate
about
that
right
about
whether
compression
and
whether
proto
buff
the
protobuf
binary
representation
will
be
acceptable
in
client
environments
and
in
browser
environments
in
particular,
and
so
like.
D
I
think
the
answer
to
that
question
informs
this,
because
if,
if
the
consensus
is
that
yes,
in
most
cases,
we
can
use
gzip
compression
or
some
sort
of
compression
when
we're
exporting
this
data,
then
having
nested
attributes
for
to
to
limit
the
the
the
size
of
the
serialized
export
payload
is
that's
not
as
a
strong
argument
anymore.
If
compression
is
allowed.
B
I
think
it's
a
weak
argument
anyway,
because
for
for
short
attributes
names,
it's
it's
even
arguable
whether
you
win
anything
there
by
having
nasty
structured
data,
because
it
it
has
a
cost
to
report
that
that
structure
anyway
in
protobuf.
It's
not
like
it's
zero
cost
right.
So,
if
you're,
if
you,
if
the
names,
if
the
names
that
are
separated
by
dots
are
short,
it
may
still
be
more
efficient
to
to
actually
have
the
flood
structure
actually
yeah.
C
To me, the reason for, you know, this requirement is not just a smaller payload, a smaller data size, but it's logical too. You know, on the back end, we need to put them together into a map; you know, you could avoid that.
C
On the client, on the agent side, the data is generated, you know, in a nested style, so only for export is it flattened, and again, in the receiver, it needs to be, you know, converted back to a map.