From YouTube: 2023-04-03 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
D
Yeah, that's good to hear. Nice. What's... well, hold on, I'm going to lose audio here. But what's a highlight?
C
It was still very sunny and warm in Australia, and New Zealand was a little further south, but still a lot better than cold, dark, and rainy Portland at this time of year.
D
I hear you. Wow, that sounds fun. Yeah, we've had nothing but rain while you were on vacation, so hopefully that makes you feel better. Now, I'm in Pittsburgh, not Portland, but you know, we used to make jokes about how Pittsburgh is only, like, the number three rainiest in the United States. It's not like we're number four or five, but it's also not enough that we can complain we're number one.
D
Nice. All right, I don't know how many people are showing up today. I know that there are holidays coming up, so we actually have a lot of people out at work for me, and a lot of people taking off this weekend and next week. So anyway, thanks to all of you who could make it. There's a packed agenda here; feel free to add your topics. If it fits within the time box on telemetry definition stability, please add it there.
D
If it fits in the time box for semantic convention process, please add it there; otherwise add it here. Otherwise, I filled out a bunch of things for us to go through. We'll give it another minute.
D
While people are looking at the agenda, I was going to share this as well. For our project tab, I went and did a little bit of cleanup, just a little bit. One thing that I added, so that we know when we do triaging, was a group for "needs working group."
D
So this would be for when people say, "hey, we need a semantic convention in area X." We can basically throw it in there, wait until we have enough requests for area X, and then spin up a semantic convention group around it, since we're mostly focused on, like, core infrastructure and process here.
D
A
few
examples
you
know
sustained
sustainable
metrics.
That
was
one
that
got
added,
we'll
talk
about
that
in
a
bit.
But
if
you
see
anything
where
you're
like,
oh
that's,
you
know
folks
want
to
actually
spin
up
a
semantic
convention
group
we'll
put
them
there
and
then,
as
we
prioritize
spinning
up
semantic
convention
groups
to
work
through
different
areas,
we
can.
We
can
use
that
to
justify.
You
know
the
ordering
and
the
priority.
D
I
also
moved
things
from
no
status
to
to
do
where
I
felt,
like
people
were
already
working
on
them
or
sorry
in
progress,
and
then
there's
a
few
things
in
to
do
that
are
basically
I.
Think
this
group's
next
priority
and
we
can
go
through
those
in
just
a
little
bit.
C
Which one? Oh, okay.
C
I think this one, at least, is pretty isolated to this metric. We messed up two things on this metric; it probably happened when we were doing some rearranging of that page. We basically need to add two attributes and remove one attribute.
B
That's about emitting an attribute that might not be there at first but is afterwards. And I don't know, we might test that, Elizabeth, as well, but I can quickly skim over our current conventions, and if I can't find anything similar, then I'll just remove it from the board again.
D
So I don't think this works, by the way: we just end up with one giant negative time series and another one that's incredibly positive, and you have no idea how to join the two together. We could talk about this. Should I add this to the agenda today? We'll add it real quick.
D
Okay, so let's jump into the agenda quick. So yeah, I was just going over the project tracker there and some of the statuses. One thing to call out when it comes to HTTP blockers: in that project tracker, the HTTP blockers should really be general semantic convention things that are not specific to HTTP. Like, you know: what does stability mean? What do we enforce? Do we have the tools we need to actually make semantic conventions work at all? Can we cut a release?
D
Those sorts of things should be in the blockers. So if it's something that we think is, like, a fundamental capability of OpenTelemetry, that's where we should start handling it here. As such, I went through all of the new things in the past two weeks, and a few of these were easy, easy triage, but we have... let's cut this off at 15 after, okay. So first off, we had this notion of, and this one I opened: disallow allow_custom_values: false on enums.
D
If you were to set allow_custom_values: false on a semantic convention, it would create a closed enum. Unfortunately, in a few places that was used for areas where we feel like the enum could change over time, and the stability of that was kind of suspect. So we decided to not enforce that.
D
Sorry, let me rephrase this: whether or not the instrumentation uses an enum, and whether or not downstream can assume that the thing is closed, are different questions, and we decided to make sure that downstream doesn't rely on a fixed enum, right, where those string values can be open over time. For example, programming languages: someone could invent a new programming language between now and tomorrow, and then suddenly we'll have to deal with it, right?
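For context, this distinction lives in the semantic convention registry YAML. A rough sketch of an open enum (the attribute name and member values here are illustrative, not actual registry contents):

```yaml
groups:
  - id: example.group
    type: attribute_group
    brief: Illustrative group with an open enum.
    attributes:
      - id: example.language
        brief: Hypothetical programming-language attribute.
        type:
          # Open enum: codegen can offer these as well-known values,
          # but consumers must not assume the set is closed.
          allow_custom_values: true
          members:
            - id: python
              value: "python"
            - id: go
              value: "go"
```

Setting `allow_custom_values: false` is the closed-enum case being discussed here, where downstream consumers could assume the value set never grows.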
D
So
what
this
is,
there's
a
set
of
tasks
where
we
need
to
figure
out
what
to
do
about
use
custom
value,
so
the
spec
it
needs
an
effort,
an
owner
if
anyone's
interested.
Please
sign
up
and
take
ownership
of
that
PR.
D
Oud,
okay,
the
next
one
I
thought
I
just
wanted
to
highlight
this
was
kind
of
interesting
sustainability
metrics.
This
would
be
like
how
much
power
are
you
using
and
like?
Are
we
being
efficient
with
our
resources
that
actually
I
think
deserves
its
own
working
group?
So
I
threw
it
in
the
new
needs
working
group
Channel,
but
just
wanted
to
call
out
that
if
you're
interested
in
sustainability
or
defining
semantic
conventions
for
sustainability
there's
interest
there,
we
could
probably
try
to
kick
off
a
working
group,
that's
similar
to
this
next
semantic
convention.
D
This
is
about
system
memory.
We
actually
don't
have
a
semantic
convention
kind
of
owner
for
host
metrics
or
like
system
metrics.
If
you
will
so
like
understanding,
Ram
CPU,
that
sort
of
thing
I
think
we're
gonna
have
to
put
something
together
to
get
those
stabilized,
so
I
threw
that
in
there
the
next
one
is
also
similar.
This
is
we're
missing
a
few
resource
types
for
Kate's
and
I'm
sure
this
will
be
an
ongoing
thing,
so
we
have
some
semantic
conventions
for
Kate's
resource
types.
D
If
you
are
this
particular
entity
in
case
you're
supposed
to
create
these
sets
of
attributes,
apparently
we're
missing
a
few
I
think
we
need
a
working
group
kind
of
around
resources
over
time.
So
that's
another
just
call
out
heads
up.
Okay,
the
last
one
I
think
is
interesting.
D
Well,
we
have
these
two.
This
one
I
wanted
to
spend
a
little
bit
of
time
on.
So
this
is
about
clarify
event
type
now.
The
specification
here
is
is
or
the
this
description
is
a
little
bad,
because
it's
just
hey
refer
to
the
Otep
of
what
event
type
is
but
I'm
going
to
go
into
this.
D
So the reason I'm elevating this is, and this is something I think we might tackle as part of some other efforts: we need to be very, very clear about specification and semantic convention interactions, and I think we need to collect, somewhere, these sets of attributes that are special, right? "exception" is one of them; apparently event.type and event.domain are also some. We need the ability to have those two interact. Okay, anyway, that's what this bug is. I think it needs someone to address it, if no one else signs up.
D
Is
anyone
interested
in
trying
to
actually
basically
clarify
event.type
semantic
convention
or
Market
stable.
D
Okay, I will take that action item. Thanks, that's a good idea.
D
We're at our time box, but I just wanted to... let's add another three minutes for this, quick, because I want to get through this issue. So, does someone want to describe what this one's about?
B
Yeah, I can do that. So there is a concern about the active requests metric and it having a status code attribute. It counts the in-flight requests, and at the time the request is in flight you won't have any status code, so you will report it without a status code and then plus-one the counter for that.
B
And then once you receive a response, or you render a response, you have a status code, and then you would minus-one (decrement) the counter with the status code set. But those are two different attribute combinations, and hence two different time series that you're keeping track of. So the one without a status code would be only ever increasing, and the one with the status code set to 204 or whatever would just go negative over time.
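The two-series problem described here can be sketched with a toy aggregator (a hypothetical stand-in for a metrics SDK, not OpenTelemetry API code):

```python
from collections import Counter

# Each distinct attribute set becomes its own time series.
series = Counter()

def add(value, attributes):
    series[frozenset(attributes.items())] += value

# Request starts: the status code is not known yet, so it is omitted.
add(+1, {"http.method": "GET"})
# Request ends: the decrement is recorded WITH the status code attribute.
add(-1, {"http.method": "GET", "http.status_code": 200})

# Two separate series now exist: one only ever goes up, the other only down.
print(series[frozenset({"http.method": "GET"}.items())])        # 1
print(series[frozenset({"http.method": "GET",
                        "http.status_code": 200}.items())])     # -1
```

Recording the decrement with the exact same attribute set as the increment (i.e. without the status code) collapses these into one series that correctly tracks in-flight requests.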
C
No
may
need
to
be
careful
with
some
of
the
I
was
realizing
some
of
the
others
like
say:
net
host,
port
and
I.
Think
Luke,
Mill
I
think
we
have
something
saying
about
the
if
conditionally
required
like
or
it
must
be
set
at
on
start.
If
it's
set
at
all.
D
Well, so we would remove status code from the table, yes. But what I'm saying is, there's still an issue with these "conditionally required" attributes here. So, like, if this isn't available when you send the request, then when the request is received and done, even if the port is available at that time, you still should not provide it, right? Right, because the same set of attributes used to increment, you would use to decrement. Yeah, okay, so...
D
Where,
in
our
general
metric
convention
descriptions
we
can
basically
say
for
Uptown
counters.
You
know
this
keep
the
same
attributes
for
up
and
down,
and
then
you
don't
have
to
list
that
every
single
time
to
use
an
uptown
counter.
B
Yeah
we
have
the
the
same
problem,
for
example
with
the
hardware
semantic
conventions,
so
you
have
one
metric
that
counts
your
devices
that
are
in
an
okay
State
and
then
the
ones
in
a
failed
State,
and
you
would
need
to
decrement
the
K1
and
increment
the
field
on
to
indicate
that
that
one
changed
its
thing.
B
So
that's
very
advice
generally
for
the
other
semantic
conventions
that
are
look
that
they
are
usually
mutually
exclusive,
so
you
have
like
Heap
versus
non-heat
memory,
for
example,
and
such
there
it's
not
an
issue
but
for
those
very
transition
transitions
to
State
like
status
quo
or
or
Hardware
status.
Here
it's
a
problem.
D
I cannot spell today. Okay, all right, cool. Well, we're a little over the time box; let's move on to telemetry definition stability and evolution. So I think the most important thing to call out today is that this PR, the guidance on what is covered by semantic conventions, is merged.
D
So,
as
of
now
that
guidance
holds
so
this
would
be.
You
know,
attribute
names
cannot
change,
but
the
values
are
able
to
change.
So
that's
good,
there's
another
pull
request
which
I
think
might
get
merged
soon,
which
is
actually
marking
service
in
telemundry.
Sdk
attributes
are
stable.
Those
are
the
ones
that
are
hard-coded
in
the
specification
and
are
required
for
SDK
authors.
That's
like
a
few
of
them.
This
gets
into
that
discussion
around.
D
Do
we
want
a
more
General
convention?
These
are
both
FYI
topics.
I
actually
didn't
have
any
specific
topics
around
like
stability
and
evolution
for
defining
semantic
conventions.
Today.
Did
anyone
have
anything
they
wanted
to
talk
through.
E
I've
been
checking
more
about
the
transformation
and
my
findings
part
that
there
is
nothing
no
tooling
today
that
can
do
transformation,
yes,
yeah,
so
I
think
that
the
obvious
action
item
is
to
split
it
and
put
it
into
experimental
document,
remove
it
from
the
stability
definition.
D
The
the
skinny
URL
yeah
I,
here's
here's
my
concern.
I
was
going
to
implement
it.
Were
you
able
to
implement
anything
that
didn't
involve
requiring
the
internet.
E
It
is
possible
to
implement
I,
mean
it's
possible
to
implement
with
I,
don't
know
transform
processor,
but
it's
it's
it's
fairly
difficult
and
it's
more
like
a
hard
coding
of
pretty
known
versions
of
okay,
I
I
know
this
attribute
was
renamed
for
version
X
to
version
White.
E
How
does
it
change
anything
like?
Do
you
expect
someone
to
have
implemented
this.
D
I
I've
been
implementing
it
in
Java,
okay,
specifically,
so
that
the
SDK
can
do
Transformations
inside
of
java,
because,
like
we,
we
need,
if
we're
going
to
rely
on
schema
yo.
We
need
an
implementation,
that's
inside
of
an
SDK
and
we
need
an
implementation.
That's
inside
of
the
collector.
D
My
fear,
though,
is
like
there's
a
lot
of
rigmarole.
You
have
to
build
or
like
structure
around
it.
So,
for
example,
the
fundamental
component
that
I
found
I
needed
was
at
any
point
in
time.
I
need
to
look
at
schema.
Url
from
one
signal.
Excuse
me,
URL
from
another
signal
determine
if
I've
downloaded
this
appropriate
schema
URL
for
that
version
right
and
if
not
download
the
right
one
and
then
come
up
with
a
plan
of
here
are
the
set
of
version
numbers
between
me.
D
However, in the way that it applies to telemetry: there are some conversions that we can do without being, like, super memory-heavy, but there are some where you literally have to translate the data, and then, after that data is translated, come back in and translate the data a second time, and it's very, very, very memory-inefficient. Some of the translations that we've allowed in schema URLs are like that.
D
So what I was doing was basically saying: okay, let's pretend that in the schema file, the only transformations we've ever defined are renames.
D
All
other
Transformations
will
be
removed
for
a
variety
of
good
reasons.
So
let's
say,
if
schema
URL
only
defines
attribute
renames.
What
does
that
look
like,
and
so
that's
what
I've
been
implementing-
and
it's
still
like
I
have
a
lot
of
concerns
here
as
well
is,
is
what
I'll
say
frankly,
like
I
think
we
can
Implement
something
I
don't
know
if
we
can
Implement
something
that
is
low
overhead
and
like
easy
to
understand
what
the
hell
you
did
like
easy
to
add
more
transformations
to
it.
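A sketch of that renames-only model (hypothetical; the version numbers and rename entries are illustrative, not taken from a real schema file):

```python
# Schema file reduced to: version -> attribute renames introduced at that version.
RENAMES = {
    "1.20.0": {"net.host.port": "server.port"},
    "1.21.0": {"http.method": "http.request.method"},
}

def _v(version):
    # Parse "1.20.0" into (1, 20, 0) so versions compare numerically,
    # not lexicographically ("1.9.0" < "1.20.0" must hold).
    return tuple(int(part) for part in version.split("."))

def upgrade(attributes, from_version, to_version):
    """Apply every rename step strictly after from_version, up to to_version."""
    upgraded = dict(attributes)
    for version in sorted(RENAMES, key=_v):
        if _v(from_version) < _v(version) <= _v(to_version):
            for old, new in RENAMES[version].items():
                if old in upgraded:
                    upgraded[new] = upgraded.pop(old)
    return upgraded
```

For example, upgrading `{"net.host.port": 8080}` from 1.19.0 to 1.21.0 would yield `{"server.port": 8080}` under these illustrative entries. Renames stay cheap because each step only touches keys in place; the transformations being set aside here are the ones that require materializing translated copies of the data.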
E
There
was
another
problem
that,
even
though
we
can
influence
and
transform
things
that
that
but
were
recorded,
but
we
cannot
do
this
for
sampling
relevant
attributes.
So
if
some
attributes
are
used
in
simpler,
Samplers
have
to
be
version
specific
conversion
of
air
or
we
need
the
Hope
before
sampling
and
it
makes
it
even
more
performance,
sensitive.
E
It will be easy if we either put this whole section about the transformation into a different document, or outline that this part is still experimental, and whatever data it is, it's experimental, we don't care.
E
No, I think not too many. There are some below this.
D
"After which they're allowed to change under the following conditions: the change is published as part of the specification, it's published in the schema file, and the produced telemetry correctly specifies the respective schema URL." What?
E
Yes, and the document under the link to telemetry stability, so all the links can stay whatever they are.
D
Yeah,
okay.
So
what?
Basically?
What
we
want
to
do
is
Purge
out
portions
of
the
document
and
get
anything
that
talks
about
what
is
allowed
to
change
somewhere
else.
That
can
be
experimental
and
we
can
figure
out
the
details
later.
D
Okay, cool. All right.
B
If every single collector out there would need to connect to opentelemetry.io to download schema files, then we would be the single source of, well, failure, but also the single source able to tell where OTel is running and connecting from: like meta-observability kind of data being gathered there. And that would probably be discouraged, or, well, opposed, by organizations that don't want their collector to talk to this strange opentelemetry.io out there. This is a larger problem that we have discovered there.
C
I'm,
just
assuming
that,
at
least
for
the
open,
Telemetry
schema
files,
we
would
include
them
in
the
the
whatever
was
doing
the
transformation
so
that
they
wouldn't
have
to
be
downloaded.
B
Okay,
so
we
would
say
if
you
have,
the
I
mean
SDK
versions
and
and
schema
or
semantic
convention
versions
are
decoupled.
But
if
you
have
today's
Java
SDK,
then
this
one
already
or
today
is
collected,
and
this
one
would
come
with
the
the
latest
schema
files
included
and
the
the
URL
will
just
serve
as
a
unique
identifier
to
tell
which
file
you're
referring
to.
B
But
the
file
is
locally
and
it
would
only
go
fetch
it
if,
if
it
wasn't
included
because
you
can
have
been
an
SDK
from
last
year
and
still
you
you
want
to
do
today's,
you
want
to
follow
today's
semantic
conventions
when
you're
writing
your
instrumentation.
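A minimal sketch of that local-first lookup, assuming the schema version is the last path segment of the URL (the bundled contents and URL shape here are placeholders):

```python
# Schema files shipped inside the SDK artifact: version -> file contents.
BUNDLED_SCHEMAS = {
    "1.20.0": "file_format: 1.1.0 ...",
    "1.21.0": "file_format: 1.1.0 ...",
}

def resolve_schema(schema_url):
    # The URL acts purely as an identifier, e.g.
    # "https://opentelemetry.io/schemas/1.21.0" -> version "1.21.0".
    version = schema_url.rstrip("/").rsplit("/", 1)[-1]
    if version in BUNDLED_SCHEMAS:
        return ("bundled", BUNDLED_SCHEMAS[version])
    # Fallback: fetch schema_url over the network (omitted; this is exactly
    # the step a locked-down environment may forbid entirely).
    return ("remote", None)

print(resolve_schema("https://opentelemetry.io/schemas/1.21.0")[0])  # bundled
```

Only schemas newer than the SDK build would ever trigger the network fallback, which is what keeps the common case working behind a firewall.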
B
By shipping all schema files already present as of a certain date, even if an organization blocked access to the public internet in their access rules, which probably makes sense for many of them, it would still work in, say, 95% of the cases. Or we make it a rule that you can only access schema files that have been around by the time your SDK package was built.
D
Right, but I think you're starting to see some of the complexity here of how this is defined, right? We're missing a whole bunch of failure scenarios in the specification; I just added, like, "what should happen if it can't be downloaded," right? We hard-code certain pieces of the specification to interact with the schema URL, like resource merge.
D
You
know,
should
like
you're
saying
Armin.
Should
we
prevent
you
from
being
able
to
say
this
instrumentation
of
bytes
by
a
schema
URL?
If
that
schema
URL
doesn't
exist
in
some
registry
somewhere
right?
Should
we
actually
block
the
instrumentation
from
existing,
because
we
can't
download
the
schema
URL?
Is
that
something
do
we
want
to
go
that
far
I?
D
I think there are a lot of things we have to figure out, but I think the TL;DR here, in terms of, like, your overall concern: we did make a recommendation to the specification to basically proceed as if the schema URL doesn't exist. We will still try to make the schema URL work, but we're actually not planning to use it to prevent renames from breaking major version bumps of the semantic conventions, right? Like, we had already decided this as a working group.
D
Okay,
so
so
might
be
a
useful
thing.
However,
for
the
purposes
of
stability,
we
are
not
going
to
rely
on
it
at
all
in
this
group
for
now,
so,
even
if
we
make
a
change
that
schema,
URL
could
have
it
be
safe,
we
would
still
bump
a
major
version
number.
We
would
still
consider
it
breaking
in
this
group.
D
Yeah, I think I absolutely agree with that. I think the idea of what we want semantic conventions to do now is: I'm a user, I have instrumentation from five different libraries, and those libraries are not necessarily owned by OTel. So, like, I'm using gRPC, I'm using an HTTP client library from, you know, somewhere; they're using different versions of semantic conventions. The schema URL at least gives me, as the user, the ability to say "here's the one that I want" and make the two look somewhat consistent, right, because I'm getting instrumentation from two different sources.
D
So
it
wouldn't
change
like
what
major
version
number
it
is.
It
wouldn't
change
like
our
idea
of
breaking
of
what
a
breaking
change
is
all
it
does.
Is
it
gives
the
user
a
new
utility
that
we're
providing
so
that
you
can
at
least
make
your
Telemetry
look
consistent
if
you
have
a
version
mismatch,
so
we're
actually
looking
at
it
more
kind
of
helping
with
that
scenario,
go
ahead.
Yeah
yeah.
D
Yep, okay, that's kind of what we're looking at. I will... I might need to sit down with Tigran and talk to him about a bunch of this stuff, but we're over our kind of 20-minute time box around definition stability. You had another thing: is this related to the definition of stability, around this topic, or is this related more to process?
D
Yeah,
that's
fine
I'm,
putting
it
under
process,
because
the
anything
related
to
onboarding
them
I
think
is
a
process
topic
of
how
we
do
what
we
do,
Okay
cool.
So,
let's
move
on
to
our
process
topics.
Now
one
I
want
to
call
out
the
proposal
to
move
semantic
conventions
into
their
own
GitHub
repository
is
live
that
Otep
I
think
everyone
here
has
reviewed
it
or
commented
on
it.
Maybe
if
you
haven't,
but
most
people
have
major
concerns.
D
I need to go do some evaluation myself to understand how the code generation works in each language, but there are a few languages where I'm very concerned they might have taken some shortcuts here; they weren't known shortcuts, but they basically hurt our ability to change without breaking them.
D
For
example,
I
made
a
PR
where
I
moved
around
some
yaml
files.
If
the
yaml
file
shows
up
in
the
generated
code,
the
name
of
the
ammo
file
now
we're
in
trouble.
We
can't
Shuffle
things
around
right,
so
there's
also
a
I
think
in
go.
The
entire
name
of
the
URL
shows
up
as
the
package
name.
E
So
I,
actually
from
what
I
saw
like
a
bunch
of
followers
and
changes
broke,
everything
like
broke
the
hellos
from
foreign
and
all
the
scripts
needed
to
be
changed
for
the
generation
for
the
version
1190.
If
that
offer
any
consolidance
that
it's
already
breaking
anytime
attribute
is
removed,
somebody
needs
to
go
and
add
one
back
into
the
template,
like
preserve
its
existence,
that
was
deprecated
annotation,
so
this
process
is
already
breaking
and
any
breaking
change
in
the
yaml
files
results
in
the
breaking
change
for
all
the
chord
generation.
D
We're getting rid of your ability to use the enums as enums; instead you'd just have a set of known values. But you're right, that's just part of how the semantic convention codegen has been working so far.
D
I think that the current semantic conventions will have to live in the specification for a time as people migrate to the new location. I think the cost of getting the build tools updated to the new repo, the cost of moving every single individual language to the new repo, and the collector to the new repo, means it won't happen instantaneously. And so what I don't want to have happen is us maintaining semantic conventions in two locations over time while that migration happens, and then trying to flip over in one big bang.
D
They can migrate over time, as they have time. We can take time with each language SIG to basically be like, "hey, is code generation working the way you need it to? Let's make sure it works on this repo; let's make sure it does the right thing," so that, you know, we don't have the big-bang change of the world. You know, Go has their own language generation that's different from all the other languages; they have a different tool.
D
I,
don't
want
to
have
to
deal
with
both
of
them
at
the
same
time,
all
at
once,
right,
I,
think.
There's
a
lot
of
process
reason
why
I'm
moving
slowly
is
better
and
then,
if
we
look
at
like
does
generated
code
break
I.
Think
that
the
answer
here
because
of
this
migration
should
be
yes,
because
we're
actually
giving
you
a
new
code
generation
tool,
no
matter
what
we're
not
going
to
be
using
the
same
code,
gen
tool
that
the
specification
has
exactly
as
it
was
right.
D
If
you
look
at
the
proposal,
we're
planning
to
actually
change
the
structure
of
what
the
specification
looks
like
so,
instead
of
having
like
traces,
metrics
and
logs
as
top
level
directories,
we
would
have
HTTP
as
a
top
level
directory
and
underneath
that
would
be
specification
for
traces,
metrics
and
logs
right.
E
And
I
think
it's
also
important
that
the
moment
we
are
introducing
stable
HTTP.
We
need
to
generate
code
in
a
different
way
that
there
are
two
artifacts
generated,
stable
and
unstable
one
for
just
for
semantic
conventions,
and
this
change
will
definitely
be
breaking
in
a
sense.
It
changes
all
the
processes
around
core
generation.
D
How do I want to phrase this? I think, for one particular version of semantic conventions, we can have attributes in two states, right?
D
We can have attributes that are stable, and we can have attributes that are up-and-coming. But one of the things we need to start doing is being a little bit careful with attributes that are up-and-coming, like reserving namespaces and that sort of thing. So, like, I think it's okay for us to say, "here is an attribute name; this is still not considered stable, but we'll generate code for it," and have that be part of, like, the stable version of semantic conventions.
D
Yeah,
let
me
show
you
kind
of
what
I
am
intending
oh
crap.
Sorry
I
should
have
put
this
up
before
okay,
so
what
we
did
in
this
PR,
because
there
are
portions
of
the
semantic
interventions
that
are
stable
and
unstable
and
I
think
this
is
what
you're
getting
at.
But
what
I'm
suggesting
is.
We
would
have
an
artifact
called.
You
know
semantic
conventions,
but
within
that
artifact
we
would
have
a
service
experimental
yaml.
D
That
would
somehow
denote
everything
in
here
is
experimental,
and
so,
when
I
do
code,
gen
I
could
make
a
separate
Library
called
semantic
convention.
Experimental
right
with
the
same
version
as
semantic
convention
and
the
semantic
convention
artifact
would
be.
You
know
they
have
the
same
version,
it's
the
same
repo
but
like
there's
pieces
of
it
that
are
experimental
and
pieces
of
it.
D
That
aren't
and
the
way
we
do
that
is,
you
know
we
can
actually
use
the
same
prefix
use,
but
a
different
group
that
denotes
here's
the
experimental
components
and
then
here's
the
group
that
is
the
you
know
stable
components,
so
Telemetry
experimental
versus
Telemetry,
for
example,.
A
I can write that down better; go ahead. Wouldn't that end up creating breaking changes when we actually moved it from experimental to stable, because the name would change based on the group?
D
That's
so
so,
actually
would
it
create
breaking
changes?
It
would
create
a
breaking
change
on
the
experimental,
artifact,
probably
correct,
but
the
name,
the
actual
name
of
the
Telemetry
doesn't
change.
So,
for
example,
here
the
prefix
is
telemetry
and
the
name
is
what
Auto
dot
version.
So
this
would
be.
Telemetry.Auto.Version
is
the
name
of
the
prefix.
If
I
move
this
into
this
particular
yaml
file,
the
attribute
name
doesn't
change,
but
the
generated
code
would.
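A rough registry-style sketch of that split, two groups sharing the same prefix so the emitted attribute name stays identical wherever the definition lives (group ids and file layout are illustrative):

```yaml
# telemetry.yaml: the stable artifact
groups:
  - id: telemetry
    prefix: telemetry
    type: resource
    attributes:
      - id: sdk.version        # emitted name: telemetry.sdk.version
        type: string

# telemetry_experimental.yaml: the experimental artifact, same version number
groups:
  - id: telemetry.experimental
    prefix: telemetry
    type: resource
    attributes:
      - id: auto.version       # emitted name: telemetry.auto.version
        type: string
```

Moving `auto.version` from the experimental group to the stable one leaves the wire-level attribute name untouched; only the generated constants change artifacts, which is the breaking change being discussed.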
D
That's a good call-out, I think. Okay, so that, I think, we can fix with our tooling, but that actually needs... I will, based on this discussion, if there are any other thoughts, take some of these suggestions and put them into the OTEP, basically what I was just proposing. But it sounds like what we need is: if I denote a particular YAML as being experimental, I also need the ability to say, like, "this has been promoted to stable," and so the code...
D
Yes, yeah. And that's actually one of my main concerns with our group: we need to start investing in some of these tools, and a lot of us are highly limited in terms of amount of time. So this particular OTEP, I think, is going to cause a lot of churn and tool requirements that we're going to have to jump on. That's one of them. So, schema URL and version: if we step back to that a little bit, we have about four minutes. What do we feel is important here? Like, I gave you my reasoning behind why I think the schema URL and version should be different. Does anyone see it, like, a different way? Where, say, we keep the same schema URL location and we migrate everything out of the repo in some fashion that isn't breaking, that is easy to do, that, you know, we could piecemeal? Is there a proposal from anyone here where they feel strongly we should preserve the schema URL version?
D
I'll take that up with the people who commented on the OTEP. The last thing that we need to figure out, and I think that will be part of this group, is that we need a clear boundary between the specification and the semantic conventions, and I think we can do this in place. So I think we can actually start sending some PRs in place, like what Liudmila was suggesting with the schema URL manipulations. We can start carving the specification out, where we have...
D
These
set
of
attributes
are
owned
by
the
specification
and
any
semantic
convention.
Implementation
has
to
abide
by
them,
like
service.name
exception
or
error
type
or
whatever.
We
end
up
doing
with
exceptions
right.
The
Telemetry
SDK
version
in
the
things
that
environment
variables
minute
you
know,
work
with
the
otel
dot
attributes,
for
example,
I
think
we
need
to
start
being
very,
very
clear
in
the
specification
ripping
those
apart.
D
So
if
anyone
has
time
to
to
start
making
those
PRS
or
kind
of
identifying
those
locations
or
semantic
inventions
and
spec
kind
of
overlap,
I
think
we
can
start
preemptively
well,
I
should
say
I.
Think
it's
a
good
idea
for
us
to
do
this
anyway,
even
if
semantic
conventions
remain
in
the
spec,
but
we
can
start
defining
clear
boundaries
between
the
two
and
like
ripping
poles.
If
you
will
and
making
sure
like
semantic
inventions
are
clearly
on
one
side
and
the
specifications
clearly,
on
the
other.
E
I
cannot
commit
to
do
all
the
work,
but
I
can
start
the
problem.
I
want
to
solve
this
a
bit
wider,
though,
that
we
should
probably
designate
and
have
references
to
all
the
attributes
we
use
in
the
spec
language,
because
we
frequently
forget
to
update
them.
They
get
lost,
we
don't
know
if
they
apply
to
Zipkin
or
to
open
Telemetry
and
like
replace,
all,
doesn't
really
work.
E
So essentially, what I think we need is a way to mark all of the attribute names and have a link to where they are defined.
D
Yep
yep
I
I
agree
with
that.
So
actually
it
sounds
like.
Maybe
the
first
thing
we
do
is
make
a
some
kind
of
document
which
are
here
are
attributes
that
the
specification
has
control
of
uh-huh
right,
and
so
basically
we
can
give
the
specification
anything
in
the
hotel,
Dot
namespace
for.
D
And
only
the
specification
can
provide
meaning
for
those
things.
That's
what's
used
in
like
Zipkin
and
that
sort
of
thing
and
we
can.
We
can
then
have
a
list
of
like
here's
ones
defined
by
the
specification
so
service.name
that
that
sort
of
thing
sound
good.
Yes,.
D
All right, we'll defer to next time to talk about your UCS prototype findings. Thank you for posting all those.