From YouTube: 2023-02-28 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
B
Okay, yeah, so in that case let's wait... well, actually, it's three minutes past start time. I think that we can start, if that's okay with everybody.
B
Okay, let's go over the agenda then. The first item: Trask, HTTP semantic convention stability. Do you want to cover it?
D
Well, actually, the first one is: is Josh here? No? Okay. Oh wait, maybe Josh will show up; I wanted to give him some time to talk about this one.
C
There's no additional semantic convention to record, let's say, database call duration in a span without creating a child span just for that call. We don't do anything like that.
E
I mean, not yet. But this is where things get interesting with semantic conventions, because nominally all of those latency things we're providing as metrics you could compute from spans. The difference is the metrics include all possible data and spans are expected to be sampled. But yeah, I actually expect we will have latency metrics for databases at some point.
E
I expect... you know, we have them for RPC, we have them for HTTP, we can take a look... yeah, we should also have them for messaging. Depending, right: the difference is our span data model actually includes timestamps. So we're not, we don't change that. This is more about what the metric is that we would expose.
C
Yeah, yeah, let me clarify. The reason I'm asking is that I'm worried about breaking existing stuff. What will be added in the future, I'm fine with that; that's not a problem. I just want to understand what it is that we're breaking by making this change. It seems like not much; that's what I'm assuming, yeah.
F
Right, so it does have some implications, and I don't know which route we have decided on yet. But in order to be able to make this change, we need to either change the default bucket boundaries for the explicit bucket histogram in order to support second durations (otherwise they would be close to useless), or go with the approach in the...
F
I think second-to-last comment, saying that if the unit equals the string "s", then the default bucket boundaries should be different. Yeah, that's this one. Or the third option, which I think is what Jack proposed, is implementing a new API for metrics, the hints API, so that the reporter could change which boundaries are to be used when recording the metrics. But I'm not sure if we have decided on which of the three options we will take in order to make this change happen.
D
I was thinking this just because I think relying on users having to add that boilerplate ("here are my buckets") for every instrument is not a great user experience.
F
I agree with that, but personally I also don't love the proposed approach of having some special handling based on the unit string. In the past, or rather currently, that one is specified to be an opaque string that the SDK wouldn't need to look into and care about, and if we would really change behavior based on equality with "s", then this could be tricky to explain to users, since it suddenly changes the bucket boundaries, and I don't know if that's a breaking change.
E
I think changing bucket boundaries is not breaking, but it is surprising. But more importantly, with views we allow people to select based on the unit, so we're not actually adding any new interaction with the unit to the SDK, because I can already define a view that looks for instruments, like histograms, that are in seconds. That's already a thing that's been added to the spec.
F
Explicitly, they are the same: something with this unit should behave in a different manner. But if the SDK on its own would use a different layout based on the unit string, then that's something new, right?
E
Okay, I can kind of buy that argument. Yeah, but I guess the two most important points that I'll make again are: one, bucket boundaries are things users don't want to think about, so the better we can do with our first guess, the better it will be for users; and secondly, we should be encouraging moving to exponential histograms over time, because, again, the whole point of those is that dynamic range.
E
If you look at a lot of instrumentation, I fully expect what we'll see for the default bucket histogram boundaries is that users never touch them, or they explicitly specify them with a hint API, which is why I think we should prioritize Jack's PR for the hint API. But, you know, to the extent that we never force them to specify a bucket...
F
Do they not reflect our best guess on where we would think it would range, like in some singular or double digit millisecond place? Maybe three digits.
E
We're highly targeted at microservices at this point.
E
But that's honestly fine. Like, again, remember what these bucket boundaries are giving you, right? You have a sum, and you have a count, in addition to bucket boundaries; the bucket boundaries only give you higher resolution on your percentile calculation.
E
It's more about controlling the error rate of your percentile calculation. Users should never be interacting directly with the buckets themselves, and when you use these histograms in practice, you see things like: oh, why did my latency suddenly look like it bumped? Oh, I have a horrible bucket boundary, I need to go fix that. That's just a common source of pain with these things, and what you're really controlling with these bucket boundaries is the error rate, right? It's about how close am I...
E
What's my error bar on how well I actually understand my latency to be at a particular percentile? But I should be able to get an average no matter what, without any buckets whatsoever, right? I can get some information about latency. The buckets are more for when I want to dive into my 99th percentile: do I have good enough resolution to trust the number I have, or am I getting alerted all the time? In practice, what happens if your bucket boundaries are bad is...
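A minimal sketch of this error-bar point (illustrative code, not any SDK's aggregation): a percentile read out of explicit buckets can only be interpolated within a bucket, so the width of the bucket the percentile lands in bounds the error.

```python
def estimate_percentile(boundaries, counts, q):
    """Estimate the q-th percentile by linear interpolation inside a bucket.

    counts[i] holds the measurements in (boundaries[i-1], boundaries[i]];
    the final count is the overflow bucket.
    """
    total = sum(counts)
    target = q * total
    seen = 0
    for i, count in enumerate(counts):
        seen += count
        if seen >= target:
            low = boundaries[i - 1] if i > 0 else 0.0
            if i >= len(boundaries):
                return boundaries[-1]  # overflow bucket: no upper bound to interpolate to
            high = boundaries[i]
            fraction = (target - (seen - count)) / count
            # (high - low) is the error bar on this estimate.
            return low + fraction * (high - low)
    return boundaries[-1]

boundaries = [50, 100, 250, 500]  # ms
counts = [700, 200, 80, 15, 5]    # 1000 measurements, 5 in the overflow bucket
print(estimate_percentile(boundaries, counts, 0.99))  # roughly 417 ms, within the 250 ms bucket width
```

With bad boundaries, say everything landing in the overflow bucket, the estimate degenerates to the last boundary, which is exactly the alerting pain being described.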
E
You start getting alerts, because you have huge error and you're hitting your max bucket, your overflow bucket, all the time for every single point, right? Then you go in and say: crap, the dynamic range of this bucket is bad. Let me go create a view and fix my bucket boundaries so my dynamic range is a little bit higher, so I can get better accuracy at my 99th percentile. In practice, that's kind of how you interact with histograms. Again, when we think about bucket boundaries...
E
I think we want to provide a really good default, but we also have to be aware of how these are used in practice, and of that expectation of user engagement, especially for those scenarios where, you know, we can't guess well. And I'm not saying we should just throw up our hands and put in a default, but that's kind of what everyone has done: we threw up our hands and said we'll use the Prometheus default, right?
E
We did, because it's decent and it works well for RPC communication, which also works decently for database communication. But the difference between those two buckets... even if a whole ton of things are in the zero bucket and a few things are in the five, right, that's fine, because you're only using the distribution to figure out your percentile; you're not using it to calculate something like an average.
D
In the interest of time: does anybody have concerns about next steps here? It sounds like there's a desire to see a list of impacted metrics, though it should be relatively small, I guess. There's a question of confirming which approach this is, which we don't have yet.
G
It could be a hint, as Jack mentioned he would explore, or it could be treating seconds specifically, and there would be other consequences if we made that decision, because people might come and say: hey, now I want kilograms to be treated specially, and we should be prepared for that. But how we solve that, whether we choose the hint solution or a different solution, is something that can be done after the first decision is made. And do we have to bundle all this together? My suggestion is no.
F
I do agree that those are two separate questions, but I also think that for the first question we should consider the impact of it, which flows into question number two. But for the first one, the only motivation for changing the buckets is just backwards compatibility with Prometheus, right? There is no other reason for it. And if Prometheus were to make the same decision today that we are making, whether to have default second or millisecond durations, are we sure they would still choose seconds, or do we think that that's just a choice they made back then?
G
So, based on what I have observed, there seem to be people who prefer nanoseconds, milliseconds, seconds, and all the units exist for a reason. If we look at physics, all these units are pretty popular, and I find that whatever decision we're going to make here, if we're going to say there's one recommended or default unit, we will always have the situation where some folks in a specific domain would be unhappy, and the same goes for the alternatives.
D
I'm looking to the technical committee for this, because I think it's kind of a big deal, a change that we need alignment on, and I want to make sure it's a community decision. Do you think that you all can discuss this in tomorrow's technical committee meeting and see if you can get alignment among yourselves? Yeah.
E
It is... sorry, I was a little late. Yeah, so I think there's a PR here to provide clear guidance on what semantic conventions actually guarantee in terms of stability.
E
Beautiful. Okay, so first of all, the primary thing was: what's the point of this? And the point of this is that when we talked to a bunch of people, what was considered a stable change and what was considered a breaking change was different depending on who you talked to. And I think there's this notion that you need to understand stability from a metric standpoint, from a trace standpoint, from a log standpoint, to understand how shared attributes work and how events work and that sort of thing.
E
So we specifically call out some notion of breaking changes for metrics. For example, you know, adding new time series actually can break metrics users; so if you have common attributes, those attributes cannot add new values over time, that sort of thing. We also call out what is in scope to be handled by telemetry schemas and what is out of scope to be handled by telemetry schemas.
E
That's the whole idea behind this: what is that API that I can keep stable? All right, so I want to clarify that if you read my wording and it doesn't sound like what I said out loud, that's common for me, and it's a problem, and I need to go fix it. So please comment appropriately, but that's the intention of this. I just wanted to outline that. I tried to address it with words, and hopefully you can find that when you reread it, but that's the goal here.
E
The second thing that I think is worth discussing now is: should we consider enum value changes breaking or not? I am proposing that we do not, for a variety of reasons. Even for enums where we lock down and say users can only provide these five values: I went through and looked at how often that's used, and in some cases I feel like the enum that we locked down could change in the future. And so from a dashboard vendor perspective, from a tooling vendor perspective, right...
E
My thinking is we should not have people assume that an enum's values are all-inclusive coming from semantic conventions. It gives us room to add new values as we need. We can still ask that, you know, instrumentation use the values we've provided, and we can still special-case those values, but I don't think we should consider changing the set of enum values a breaking change.
E
Yeah, so I'll give you two examples. One is the programming language of the SDK that you use: there's an enum in there of different programming languages and what you should use per language, but we expect programming languages to be invented, so if you were to write your own thing, right, they could be added. Now let's look at the enums that are kind of locked down. The one that was interesting to me was, I think, an AWS technology, and it was whether it's launched via, you know, technology A or via Fargate...
E
So the HTTP method enum... oh yeah, GET, POST, PUT, DELETE, that's another example. That's a good example! So again, when I looked at the examples, there are about eight that try to lock down the enum. Of those eight, about half have a specification that protects them from changing, and the other half don't. Yeah, HEAD, CONNECT, good call, Anthony. Anyway, when you look at the details, right, I feel like, even with the HTTP spec...
E
Is it possible that HTTP adds a new verb in a later spec? I think it is, and so I don't think us locking these down provides much. You have to look at the value and the consequence, right? If we lock down and say these don't allow changes, it means dashboard vendors can rely on that being the only set of values, but it means they break if a new value is added. Whereas if they have to adapt to the possible values when they create dashboards and tooling, I don't think that is a bad thing.
F
I don't feel strongly about it, but the problem you just described, adding some things in the future, could still be done with a schema operation. It could just be an add-to-enum operation that adds it to the list, and then consumers would know that they would need to be aware of it. But I don't know if that would be helpful in practice, so I don't feel strongly about it.
F
I mean, you would always need to expect something that's not there. Also, not everyone knows the semantic conventions as such, so you would need some fallback. But if it was explicitly added, then you would at least know that you need to expect it now and can act on it. But most likely, I don't think it would come up in practice like that.
E
Well, that's what I'm suggesting: the tooling vendor has to assume the values are open. They have to presume that, no matter what. And so we just don't enforce it; we provide guidance on what those values should be, but we don't tell tooling vendors it will be stable or locked down. Adding a value is not considered a breaking change; they have to consider these values open.
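A short sketch of what treating enum values as open means for a tooling vendor (hypothetical helper; the method set below comes from the HTTP RFCs, not from a closed semantic-convention list):

```python
# Documented HTTP request methods; per the discussion, tooling must treat
# this set as open, since later specs may add values.
KNOWN_HTTP_METHODS = {"GET", "POST", "PUT", "DELETE", "HEAD",
                      "OPTIONS", "CONNECT", "TRACE", "PATCH"}

def dashboard_label(method: str) -> str:
    """Group telemetry by HTTP method without breaking on new values."""
    if method in KNOWN_HTTP_METHODS:
        return method
    # Unknown value: bucket it rather than dropping the data or crashing.
    return "OTHER"

print(dashboard_label("GET"))    # GET
print(dashboard_label("QUERY"))  # OTHER: e.g. a verb a future spec might add
```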
C
I agree with that, unless specifically called out for the particular attribute in the semantic conventions. For example, I'm guessing the direction attribute for network measurements: I think we have in and out; probably it's only in and out, I'm guessing, and maybe not even that, right? I don't know, maybe there is a way to go up and down for network as well. So I agree with what you're saying, Josh: the consumers of telemetry should not assume that there won't be additions to the enum, unless explicitly called out in that particular convention.
F
I also think, if we explicitly state in the document you're currently heading, there or in the paragraph, that it's not stable, we could also consider just removing the entire property of it being closed or open, because there's not much use to it otherwise, right?
E
Yeah, agreed. I think the use of it would be from an instrumentation standpoint: I could generate better code. But again, there are two ways to look at the conventions: there's what instrumentation authors need to write their code, and there's what tooling vendors need to write tooling that leverages it, and I'm focused on the tooling vendor use case right now. So I still think the open/closed property might be a useful thing for instrumentation authors; you could generate better code.
B
Yeah, by the way, for the sake of time, I suggest... well, it seems that in general there's initial agreement on what you propose. George, Armin, or anybody who has concerns, please, let's all follow up offline.
B
We have more items on the agenda that showed up late, so let's...
E
Yeah, yeah. Just the last point: we added logs. So Jack was mentioning that we should include all signals here, so I added an initial proposal for logs. Please take a look and comment on the issue. Sounds good.
D
No, just on the... well, yeah, the separate PR per signal. Josh, would you prefer not to separate it per signal?
B
A summary at least, I think, would be pretty interesting, and I would rather keep the discussion offline on that point.
B
Yeah, I don't know, probably... what do you prefer? We can try. I mean, when I said let's move on to the next topic, I didn't mean to skip the rest of your section, just the one regarding locking down enum values.
D
The thing is that the way they're used over here, like net.peer.name, that has a very HTTP-specific semantic convention definition, kind of overriding the base definition, and so by not marking it stable in the general conventions, that gives some flexibility, you know, to refine...
D
Maybe the definitions. We wouldn't really be able to change much; I don't think we would want to change the name after that point, because then we couldn't change it in the HTTP semantic conventions. But there was also, you know, a good point by Christian that it could be confusing, that it's kind of a subtle, a more subtle distinction there. I'm also trying to limit the amount of work that we have to do to get HTTP stable.
D
But if anyone feels, if you feel, that we need to, then we will pull this out. But then we'll also need help actually getting these stable; we need more, you know, networking experts, I feel. I'm afraid to go through and mark these stable.
C
So I agree with Christian here. I think, at least the way that it is listed right now, it's going to be confusing: in one document it's marked as stable; in another, the exact same attribute with the exact same name is marked as experimental, and I think it can be a source of confusion. At the very least, this probably needs to be somehow pulled out. Somehow; I don't know, right?
C
Split it and say that this is borrowed from, or related to, that other attribute somehow, and if you change that, don't change the particular aspects which we declare stable here. It's going to be complicated and messy. I don't know if we can do that, but in the current form, I find that it's...
C
Yeah, at the very least, I'm saying, at the very least let's make sure that we very clearly specify, on the page where the net conventions are defined, that they are also part of the HTTP semantic conventions, which are declared stable, so you are limited in what you can change there. At the very least, right? Let's make it explicit there.
C
Yeah, yeah, yeah. So the reason I posted it there, the reason I blocked the OTEP, is that I noticed a worrying trend to cite, to reference, this OTEP and use it as one of the possible justifications, maybe, to make changes to the spec, which I think is a very wrong thing to do. The OTEP is not yet approved and accepted. It may not be accepted; it may or may not be, right?
C
We don't know yet, and we shouldn't use it as a justification for specification changes yet. Only after it becomes part of the approved and merged OTEPs, only after that, can we do that, right? So I want to make sure that we're signaling the fact that this is not yet accepted, so that, in those references, people are clear on where we are on this OTEP. And the reason I also...
C
Also, I'm not sure where we are on this OTEP, because there are open questions there. They have been open for months, starting from the previous incarnation of the OTEP, and they are not addressed. I don't see them being addressed, and I don't think the OTEP can be merged without addressing those questions. In particular, I want to understand: what's the future after we adopt some of the ECS conventions into OpenTelemetry semantic conventions?
G
I agree with you; these are good questions. So, just for folks in this forum to be aware: we're having active discussions with the Elastic folks, and currently we've given them an offer, basically saying we think the Elastic Common Schema and OpenTelemetry semantic conventions can evolve into a single thing.
G
Then what OpenTelemetry would request is that Elastic make an announcement that they're going to donate the Elastic Common Schema as an OpenTelemetry schema, or something, basically renaming it and following the CNCF OpenTelemetry governance going forward. And this is a big change, so we cannot force Elastic to make the change; they're having serious internal discussions and then will come back to the OpenTelemetry community, and based on that decision we'll either see that we can make progress here or we'll have to abandon this PR.
C
Yeah, the point being: if that doesn't happen, if we're only going to end up borrowing a few semantic conventions from ECS, and then later ECS is going to evolve, right, they will evolve and change again and diverge again from OpenTelemetry, then I don't see the point of doing this.
G
I agree with you, Tigran. So, based on the initial discussion, I think the consensus is: if ECS is not willing to make a donation and merge with OpenTelemetry, then we should abandon this PR. And of course, if the OpenTelemetry semantic conventions keep moving, someone will have to do some mapping, and it's not specific to ECS; there are plenty of other schemas. People can always do the mapping, but this is not something that this OTEP is trying to cover.
J
Yeah, maybe I should give an update from the Elastic side. So I think this is a valid point, and I think this gives clarity on having just these two options.
J
Yeah, I was clear with the comment you referenced. I think since then we have been trying to reach a decision, and the discussions are happening; we are working towards a full donation, as in the proposal. We expect a decision really by the end of this week, maybe early next week, and then we can continue from there. But I appreciate your point; it is a valid one, so I hope I can give an update next week.
B
I guess the one question that I have is that we are supposed to have a release in March, and that's very soon. So what do we do with ongoing effort that overlaps with this? You know, I think that some of the semantic conventions were changed to accommodate some of these changes, so I don't know what to do. Yeah.
B
Right, that's probably another thing that we can always revisit, but I think there's value in it even if we were to say, worst case, we are not going to align with ECS: probably we can learn some of the things or, you know, adopt some things there, and this is one example of that. So does it make sense, and are we confident that this change can go ahead independently of what happens with ECS?
D
That's our position: the HTTP stability working group's position on the PRs that we've submitted is that we've only submitted PRs that we support, regardless of whether we align with ECS. But I do agree it has been confusing, because I've opened a ton of issues in the spec repo, sort of to work through the ECS alignment, when the OTEP hasn't been accepted. And maybe, I don't know, Tigran, do you think there would have been a better place to have had these discussions, then?
C
Well, I think this is great. I think it's great that you summarized and listed all these things that will be impacted if the OTEP is accepted. But I also noticed that in some cases it is referred to as a justification to make the change, which I think is premature, right? We shouldn't be doing that just yet. But this list you have here, I think it's great, it's excellent; good to have it here.
D
Makes sense. All right, sorry, Carlos, for using up so much time; there's so much to do to get our first semconv to stable. I appreciate everybody's time and help.
B
Yeah, that's pretty good. It was really interesting and, you know, it required this questioning just to clarify things, so thank you so much for that. Okay, let's move fast to the next item. Manji... I don't know how to pronounce that, sorry.
I
Yeah, so this is Manji from AWS; I'm reaching out to ask about the status of proposal 207. Just to provide some background: our customer is using auto-instrumentation for most of their span generation, but they're asking for a way to pass some additional attributes from their code to the client spans, so that that information can be further processed. However, we know that those client spans are mostly generated by the auto-instrumentation libraries, which users don't have control of.
I
So that's how I came across this proposal, because it can help inject attributes and propagate them to local client spans. I saw some updates from yesterday, which is nice, but I just want to double-confirm with you: how promising do you think it is to get this officially accepted and merged as a new specification? Because from the updates, it seems like the proposal is kind of stalled at this moment.
I
Yeah, but, like, I also talked to Christian, and it seems like there are still a lot of open questions to address, right?
E
Yeah, I think... there are two things I mentioned, but the main themes on it are basically, first of all: one is, how does this interact with baggage? I think Christian doesn't really care about baggage and doesn't want to make it work with baggage, but I think if we don't make it work with baggage, it's just yet another wart, that is, baggage versus OTel and how baggage works in OTel. So I think we need to address that.
E
We could actually blow up cardinality with the way the thing is proposed now, so we actually might need a notion of, say, a per-signal context, and that, I think, needs to get added to the OTEP. So I think, overall, the need for the thing the OTEP proposes is absolutely there, and it is something we need to address. I think what we're lacking is a proposal that kind of fits the shape of OTel as it is today, right, something that kind of addresses the overall design considerations.
E
So if we had something that addressed baggage, I think it would have less contention, and if we have something that allows us to figure out how context-scoped attributes fit with specific signals, I think we could make progress on that. So, like, I'm a fan of trying to make progress on that spec. I am currently completely overloaded with semantic convention stability, in all frankness, so at this point we need someone with some thought leadership, who deeply understands more aspects of the spec, who wants to go in and resolve those issues.
K
Yeah, hey everyone. I just wanted to call out that the working group that's been focusing on configuration has put together an OTEP, and it's now open and ready for discussion.
L
Yeah, sure. Hi everyone. So, just a little bit of background on this OTEP: this is for file-based configuration of the SDK and instrumentation, and this addresses a variety of things. In particular, there's been general discontent, and there's been a variety of deficiencies, with the environment variable based configuration scheme and its continued expansion over time. So, in particular, the environment variable scheme doesn't provide good options to configure complex things like views, which would require some sort of structured encoding of the data.
L
It doesn't support good ways to configure multiple span processors, or samplers that need to be composed with each other, and it's not very extensible. So if you want to reference custom components and configure those, there are not great options for that. So the file-based configuration scheme is supposed to address these and other issues. So what do we propose?
L
So, in summary, we propose that file configurations use YAML as their format, with an optional ability to add support for JSON, which languages can choose to do if it suits them. You know, YAML supports comments and anchors, and we think that it's more user-friendly than JSON. So that's kind of some of the reasoning behind the choice of YAML. Another design choice:
L
We decided to describe the schema for this configuration file using JSON Schema. Protobuf, CUE, and a couple of other options were considered, but, you know, we chose to use JSON Schema, for reasons that are currently being discussed in the comments of the OTEP. So if you feel strongly about that, go share your opinion. Other design choices I want to share: you know, we want to have complete coverage for all the existing environment variables that exist today, and we want to support configuration of custom extension components.
L
So if you have your own exporter, your own processors or samplers, etc., we want to be able to support referencing those and configuring those. We want to support schema evolution, so we want to have the ability to add new options as time goes on. We want to support environment variable substitution, so if you need to include some sort of secret in your configuration, like an API key, we want to be able to allow you to inject that securely without having to include it as plain text.
L
And then the final thing that I want to say is that we want to be able to specify your file config via an environment variable, so maybe OTEL_CONFIG_FILE would specify the path of where this configuration file lives, and if the config file is specified via environment variable, we're proposing that any other environment variables that currently exist and configure the SDK would be ignored. So it's either you configure via the file or via the environment variable configuration scheme, but not both. So, yeah.
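A hypothetical sketch of what such a file might look like; the key names below are illustrative only, since the schema was still under discussion in the OTEP at this time:

```yaml
# Illustrative only: not the OTEP's final schema.
sdk:
  resource:
    attributes:
      service.name: my-service
  tracer_provider:
    processors:
      - batch:
          exporter:
            otlp:
              endpoint: https://collector.example.com:4317
              headers:
                # Environment variable substitution keeps secrets
                # like API keys out of the file itself:
                api-key: ${API_KEY}
```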
L
Please go take a look at that OTEP and comment, and if anybody has any other comments that they want to make right now, in the last couple of minutes, we can try to discuss them. I'll...
K
Just a quick note that there are some ongoing prototypes: I started on the Python prototype, Martin Kuba started on the JavaScript prototype, and I think Jack had been working on the Java prototype (yep), and I'll be starting on the Go prototype this week. So...
K
I will call out to Josh that XML was considered for about five minutes at the end of one of our calls, and then, after the call ended, someone came back into the channel and said: no, we're not doing XSD.
E
Okay, I mean, if anything lets me write XSLT again, I'm in, but not as a serious project; I think that should just be an optional thing for people who want to goof around. Here's my serious question, because I think Yuri raised this: are you planning to address dynamic configuration components versus static configuration components in this OTEP?
E
I think right now, if you look at the scope and what it lists as excluded, you probably want to directly exclude dynamic configuration if you're not planning to address it yet. And I would also say "yet", because I think we already have dynamic components, like the Jaeger remote sampler.
E
So I think we will eventually need to get there, but you should be very explicit about it in that OTEP.
L
Yeah, so I replied to Yuri's comment, and I guess I'll just describe my thoughts on that here very briefly. So the SDK specifications for TracerProvider, MeterProvider, and LoggerProvider all have language that allows SDKs to have optional support for reconfiguration. So, you know, presumably you can update your processor or update your resource. It's optional; it's a, you know, "implementations may support updating configuration". And so how do I imagine dynamic configuration would work?
L
You know, OpAMP would be extended or implemented to, you know, have a component where a client or supervisor can interact with an OpAMP server and receive configuration for a particular SDK, and then that client or supervisor would presumably go and receive the configuration in this file format, and it would call the APIs that allow you to update the configuration in TracerProvider, MeterProvider, and LoggerProvider. And so, you know, I think a variety of things would need to happen.
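The flow just described can be sketched as follows (all names here are hypothetical; the SDK spec only says providers may support reconfiguration, and the OpAMP wiring is an assumption of this sketch):

```python
import json

class TracerProvider:
    """Stand-in for an SDK tracer provider with the optional update hook."""
    def __init__(self, config):
        self.config = config

    def update_configuration(self, new_config):
        # Optional per the SDK spec: "implementations may support
        # updating configuration".
        self.config = new_config

def on_remote_config(provider, config_bytes):
    """Hypothetical OpAMP client callback: the server pushed new file config."""
    new_config = json.loads(config_bytes)  # the file-configuration document
    provider.update_configuration(new_config)

provider = TracerProvider({"sampler": "parentbased_always_on"})
on_remote_config(provider, b'{"sampler": "traceidratio", "ratio": 0.1}')
print(provider.config["sampler"])  # traceidratio
```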
L
If we really are serious about dynamic configuration, we should probably strengthen the language in the provider specifications, you know, from a "may support updating configuration" to a "should", and I think that, along with some additional work in the OpAMP space, would, you know, make everything just kind of come together naturally, between OpAMP, file config, and allowing the existing SDK TracerProvider, MeterProvider, and LoggerProvider to update configuration. Those are the pieces that need to align.
H
A quick thought on that: for dynamic configuration, if something is coming in over a protocol to, like, reconfigure a TracerProvider, and TracerProviders have no unique identifier, that seems hard to match: which TracerProvider do you update? So maybe it does say something about what the static configuration should include, such as a name for TracerProviders, if you have multiple. I don't know.
L
Yeah, perhaps in the future the file config can be extended to give an ID or name of some kind.
H
I think it might need to be known not to have it... I guess not having it wouldn't necessarily be a breaking change, but I would be curious about the future; like, current configs would still work in the future, but I guess it could just be added.