From YouTube: 2022-11-14 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
A
All right, so I think we have a couple people in the document that aren't here, so we'll give it another, like, two...

B
...minutes for folks to kind of update the agenda, what we want to discuss, that sort of thing, and then we'll get started. Sound good?
A
B
A
Oh, the other note: I think our next meeting is the week of U.S. Thanksgiving.
B
C
B
It's right after... wait, yeah, but people take off that Monday, I guess, where I am. And that's what, Cyber Monday or whatever? You're either living in hell or you're out hunting; that's, like, where I live: either you work for Target and you're on call that whole day and it sucks, or, you know, you go out hunting, apparently. Okay, yeah. So this is the... the Monday after Thanksgiving is what I meant. Cool. So here we're good. I know that that mostly affects the US folks.
E
These reviews... that's me, just... so I think my mic's working today... me shamelessly advertising some pull requests. So I figured maybe you guys are interested, given that it's to the build tools. Don't worry about it... just, yeah, figured, while we're waiting for people to start the meeting.
B
Do you want to throw those in the notes? Because I think actually making progress on the build tool is important as well. I haven't had...
E
Sounds good, I'll paste it now. Yeah.
B
Specifically, like, as we define what stability means, I think we're going to be changing the tool for what it accounts for. Like, you know... the topic today on metrics is: do we need to account for buckets in histograms in the tool, and actually specify them in the specification, or not? So...
A
B
All right, here we go, let's get started, because we have a good, packed agenda today. So first off we have a few issues new to semantic conventions again.
B
The goal here in this meeting is to help semantic conventions PRs make progress, and to understand where to block the ones that we need to block because there's something else that needs to be figured out. But for the most part it's just to give them the ability to make progress and tell the community it's okay to say yes to this. Okay, so the first one here is about a new span attribute around expected response time.
B
This was an OpenAPI convention where you had, like, x-expected-response-time or something. I think... what I can't tell from this is if it's related, or at all similar, to, like, a gRPC deadline thing.
B
And what that kind of looks like from an HTTP thing... For the purpose of this meeting and this semantic convention: what do we want to propose to these folks to make progress? Does this seem like something that's going to conflict with other things we do? Should we ask them to approach the HTTP semantic convention team with us?
A
D
B
Yep, okay. So I think, if that's the case, we should basically say: hey, this is a good PR in the sense of what you want to do, but we're going to delay this... delay paying attention to it... until the existing HTTP semantic conventions are marked stable.
A
B
D
B
I think that's fair. That's also what I'm looking for with all these bugs: basically to say "no" or "progress"... you know, feel free to keep proposing, or talk to these people. Right, okay, the next one. This is an interesting one; I'm kind of torn on this. This is about denoting whether a span is used in an exemplar. Actually, the detail of this one is: exemplars work well with head sampling and do not work well with tail sampling, given what you just suggested.
B
A
This, into figuring out the... it's...
B
The actual problem here is that exemplars are reservoir-sampled, and exemplars get ousted by new exemplars that occur through the lifetime of the metric report cycle. So the only way you know whether or not an exemplar was collected is if you write metrics and traces at the same time, via the same queue. That is a huge liability for traces, and I just don't think it's viable. So I think we're gonna shut that one down, in the sense that this probably doesn't deserve to be a proposal... not...
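The reservoir behavior described here can be sketched roughly as follows. This is a minimal illustration of classic reservoir sampling, not the actual OpenTelemetry SDK implementation; the class and method names are invented for the example:

```python
import random

class ExemplarReservoir:
    """Fixed-size pool of exemplars held for one metric report cycle."""

    def __init__(self, size):
        self.size = size
        self.exemplars = []   # (value, trace_id) pairs currently held
        self.count = 0        # total measurements offered so far

    def offer(self, value, trace_id):
        """Offer a measurement; it may oust an earlier exemplar."""
        self.count += 1
        if len(self.exemplars) < self.size:
            self.exemplars.append((value, trace_id))
        else:
            # Classic reservoir sampling: keep the newcomer with
            # probability size/count, replacing a random earlier entry.
            j = random.randrange(self.count)
            if j < self.size:
                self.exemplars[j] = (value, trace_id)
```

Because any `offer` can evict an earlier entry, a tail sampler that drops a trace after the fact cannot know whether that trace's exemplar survived the cycle, which is the liability being described.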
B
D
Right, yeah, I was going to say it's not... I mean, it should probably block exemplars being stable. Not necessarily solving the problem, but deciding whether to solve the problem.
D
B
D
B
D
B
I actually didn't see who wrote it, but the reality is, even in Prometheus, if you tail sample, you could end up with an exemplar that doesn't line up with a trace that was sampled.
B
Basically,
he
has
a
metric,
HTTP
server,
duration
right
and
he
has
exemplars
that
show
up
where
the
trace
wasn't
sampled
and
the
problem
is
he's
doing
tail-based
sampling
right
if
you're
doing
head-based
sampling,
everything's
fine,
when
you
do
tail
based
sampling,
it's
the
same
general
problem
of
tail-based
sampling
of
if
I
drop,
a
a
trace
should
I
drop
all
of
the
data
associated
with
that
Trace
when
I
drop
it
and
the
exemplars
aren't
accounted
for
in
many
tail-based
sampling
Solutions,
you
know,
like
Prometheus,
can't
account
for
it
at
all
either.
D
Okay, I thought it was slightly different... but I'll re-read through it. But I agree it's not for this group.
A
B
Re-read through it; it's an interesting problem and an interesting discussion. I think, again, the fundamental thing is: when you do tail-based sampling, you're trying to collect all of the spans together and drop them all at once, or none of them. That needs to account for exemplars if you want exemplars to participate, right? So that, I think, is where the issue lies... not necessarily...
E
Where should we route that discussion? Because this definitely seems like an important problem, worthy of, like, the OpenTelemetry community's, you know, investigation... but yeah, just not for us. So, like, what's the process in general for when we want to reroute something, like, they took it to the wrong group? You know, like, how do we do this?
B
Reroute communities... we can figure out that process. Ironically, I am still in that discussion, because I owned the exemplar spec.
A
E
B
Josh would probably take it over from you, yeah. So I think what happens is... actually, I'm going to take it to the sampling SIG.
B
If you look at the OpenTelemetry specification right now, tail sampling's allowed but ill-suited to the things we've defined so far. It's kind of like it's there, but we didn't do a great job at it. They're working on a specification v2 for sampling, FYI, that, like, makes tail sampling better, and I think this needs to be accounted for in that group.
E
Like, an immediate triage... I think you're suggesting it there, but just to call it out explicitly: if I were to come across this elsewhere, could I just move to head-based sampling instead of tail-based sampling, until we figure out a better story for the tail-based?
E
B
Yeah, cool. So the next one is actually about database resource semantic conventions, and trying to pull together domain experts for this. We had talked last week about how the database semconv group kind of, like, isn't the strongest right now. I guess the question is: do we feel comfortable currently? There was a group... is there still a group? I don't know; I haven't attended that meeting.
B
There was an attempt to create a group. Okay... so when Ted Young gets here... I think what I'd like to do, though, is... I think this proposal is legit. I think there are some concerns we need to address, which we'll address implicitly in this group, but I still would like to see this be able to make progress. What I'm worried about is...
B
E
B
It'd be nice if we had, like, you know, Oracle, Postgres, and then some of the hipster ones, like, you know, Redis, that kind of thing... if we could consider those databases.
D
Let's come back to this topic once Ted is here, because I believe he has thoughts on this.
B
...it with Ted, all right. And then the last one... I think I'm taking too long on this; we're actually at 10 minutes instead of seven. This is the notion that they want a semantic convention to include the name of a link in the span, and I...
D
The name of... I haven't read this issue yet, so... okay.
B
D
B
Yeah, actually, that makes a lot of sense. That's a good call. All right, I think that's what we do with that one. Cool. All right, so basically, if the messaging SIG goes along with that, that would make progress, and we can get some folks on it. Cool. Let's move into our topics. I'm gonna cut... yeah, we have 20 minutes exactly for both of these. So, first off, around metric stability: I did a quick write-up on histogram bucket boundaries.
B
The idea behind a histogram, versus what we used to do with summaries and rate calculations, was that you can actually calculate the percentile in the server, as opposed to having to do it in the client at instrumentation time. That's why we moved to histograms. Picking your bucket boundaries is about picking the best, most accurate way to represent the distribution of measurements that you see, and it's something you want to change when it's wrong, and it's something that we probably, like... it's...
B
I have zero visibility into anything above five seconds right now from my percentiles; it's like everything falls apart, so I want to shift my buckets, you know, higher. So that's why we have these query functions around histograms: to basically always pull out percentiles.
B
Why
we
recommend
pulling
out
percentiles
using
these
query
things
when
you
define
alerts,
but
we
have
it.
We
have.
We
have
an
issue
where
it
is
actually
allowed
in
Prometheus
to
interact
with
a
histogram
as
its
raw
Time
series.
B
So I could say, like, you know... I could look at this "le" label, which means less-than-or-equal, and I can say: I want to know if the, you know, less-than-or-equal-to-five-seconds bucket exceeds this value. That's something you can do in Prometheus. What you probably want to do is say: I want to know when my, you know, 50th percentile latency is above this, or my 90th percentile latency is above this. But it lets you do both. What I'm suggesting for OTel is...
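The percentile query being discussed works roughly like the sketch below: given explicit bucket upper bounds and per-bucket counts, it linearly interpolates inside the bucket where the target rank falls, in the spirit of PromQL's `histogram_quantile()`. This is an illustrative simplification (finite buckets only), not Prometheus's actual implementation:

```python
def histogram_quantile(q, boundaries, counts):
    """Estimate the q-quantile from explicit-bucket histogram data.

    boundaries: ascending bucket upper bounds (the "le" values).
    counts: observations that fell in each bucket (non-cumulative).
    """
    total = sum(counts)
    rank = q * total
    cumulative = 0.0
    lower = 0.0
    for upper, count in zip(boundaries, counts):
        if cumulative + count >= rank:
            if count == 0:
                return upper
            # Linear interpolation within the target bucket.
            return lower + (upper - lower) * (rank - cumulative) / count
        cumulative += count
        lower = upper
    return boundaries[-1]  # rank fell past the last finite bucket: clamp
```

Note the estimate depends entirely on where the boundaries sit, which is why boundary choice (and changing it) matters so much for the percentiles you read back.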
B
We could take one of two approaches here, right? We could say that we prevent histogram bucket boundary changes during the lifetime of a single time series. This is the same thing we did for the metric type. So if I start reporting bucket boundaries of this, then for the lifecycle of that process... for the lifecycle of that service or that application, right, until it reboots... I'm guaranteed that bucket boundary is the same.
B
However... but this means that in semantic conventions we wouldn't enforce it, because at the next semantic convention release we could change those bucket boundaries, if we feel like there's a better default, or if it's going to be better for the user, or to allow the user to change it. So we don't enforce stability across restarts, effectively, right? Which means we don't support a bunch of Prometheus use cases for histograms... ones that we consider degenerate use cases.
B
That's a bit aggressive, and I think that, if we decide that this is the way we'd like to go, my plan is to reach out to the Prometheus community and make sure they're on board with this, as well as reaching out to the OTel metrics community to make sure they're on board with it, right? The second option is we just go: you know what, that is an instability; that is something that breaks. So we just don't allow histogram buckets to change once we define them in semconv.
B
D
Okay, so when we define metrics, we could define specific bucket boundaries that we know are better for that particular metric type. For example, if we know the value is always between zero and one, we could set the boundaries accordingly. Yeah, yeah.
B
This is also about whether we could ever change the default boundaries in metrics. Honestly, I think it's likely we will have to... there will be a one-time breaking change to it, and we can talk about why, but that's...
B
No, no... Prometheus wants latency to be in seconds; we currently do milliseconds.
B
That said, two things that are important. I think... what we just said... so, Trask, you're right: we have a default bucket boundary for histograms, and we don't currently specify bucket boundaries in semconv. What I'm suggesting is... so, option one is: we would never specify bucket boundaries, and we would allow those defaults to change version to version, because we don't consider it breaking, and we'll outline why in the spec. The second point I want to make is: exponential...
B
...histograms are actually designed to change bucket boundaries in-process, like, all the time, and we're building new histogram features that should make that work correctly in different TSDBs. You know, like, we have one in Google that already does that; we already have exponential histograms; Prometheus is now building one that will allow that. And we've kind of re-emphasized with them: our bucket boundaries are guaranteed to change over the lifecycle of the process... will your query still work for calculating percentiles?
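A rough sketch of why exponential histograms tolerate in-process boundary changes: bucket boundaries are powers of a base derived from a scale parameter, and lowering the scale by one merges each pair of adjacent buckets, so re-bucketing is a lossless merge rather than a break. This mirrors the OTLP exponential histogram's indexing idea in simplified form; it is not the actual SDK code:

```python
import math

def bucket_index(value, scale):
    """Index i such that base**i < value <= base**(i + 1),
    where base = 2**(2**-scale).  Higher scale = finer buckets."""
    base = 2 ** (2 ** -scale)
    return math.ceil(math.log(value, base)) - 1

# Halving the resolution (scale - 1) maps index i to i >> 1:
# two adjacent buckets merge into one wider bucket, losing no counts.
```

Because downscaling only merges buckets, a percentile query over the merged buckets still works; it just interpolates over coarser ranges.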
B
Yes. So all of those use cases with histograms that allowed you to look at individual buckets, with that "le", you know, label... they're going away with the move to exponential histograms. So I feel like option number one is us, as OTel, basically saying: you know, explicit buckets are what we support today, and we're going to make them feel a lot like exponential, and that alleviates kind of the move to exponential, but they will have the same kinds of instability, a little bit. The difference would be...
D
So my main concern with changing bucket boundaries is around, like... say you have a bucket boundary at one second... one of your bucket boundaries... and you have an SLA that says, you know, I need to have 90% of my requests under one second. If we shift the bucket boundaries, now your SLA is not precise anymore, whereas previously it was.
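The precision concern can be illustrated with a small sketch: the fraction of requests at or under a threshold is exact when the threshold coincides with a bucket boundary, but becomes an interpolated estimate once the boundaries shift past it. The bucket layout here is simplified and assumed for the example, not any particular SDK's API:

```python
def fraction_under(threshold, boundaries, counts):
    """Estimate the fraction of observations <= threshold from
    explicit-bucket histogram data (boundaries are upper bounds)."""
    total = sum(counts)
    cumulative = 0.0
    lower = 0.0
    for upper, count in zip(boundaries, counts):
        if threshold >= upper:
            # The whole bucket is at or under the threshold: exact.
            cumulative += count
            lower = upper
        else:
            # Threshold lands inside this bucket: we can only
            # interpolate, so the SLA number becomes an estimate.
            cumulative += count * (threshold - lower) / (upper - lower)
            break
    return cumulative / total

# With a boundary exactly at 1s, "90% under 1s" is read off exactly:
exact = fraction_under(1.0, [1.0, 2.0], [90, 10])
# After shifting boundaries so 1s falls mid-bucket, it is interpolated:
estimate = fraction_under(1.0, [0.5, 2.0], [60, 40])
```

The second call has to assume observations are spread evenly inside the 0.5s to 2s bucket, which is exactly the imprecision being raised.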
C
B
D
B
Yeah, yeah... because of where your buckets fall. It's ironic, because the reason I want to keep it open is to be able to solve the opposite problem: it looks like I'm within SLA, but I'm actually way over, because I picked the wrong upper bounds for my buckets. That's the one I fall into more often, basically.
D
F
E
...enough information to answer this question, as far as a good design decision one way or the other, in this meeting. I mean, this is a very interesting topic, but, like, I feel like we should reach out to some customers and see, like, what they actually care about. Like, you know, I think we've mentioned Prometheus a couple times in this, and then... I don't know, I don't know, I don't know. Do you think, like, a design...
E
...doc would be a good artifact out of this? Or do we want to do, like, just a quick one-pager, or, like... that's...
A
Yeah, so... yeah, you should reach out to customers and others, because they'll give more feedback.
E
Okay. It seems like a very important question. It's just, like... I feel like, yeah, without actually having some more experts in the domain, I'm not sure we'd be able to make the right decisions through dictation or just, you know, first principles.
B
I'm gonna raise this in the specification meeting tomorrow, and then we'll also take it to the Prometheus working group. I've asked some Prometheus experts to take a look at it already, to see where they sit and what they think. An interesting thing for us to also decide... and I think we kind of already implicitly decided... is: is it a breaking change to move from an explicit bucket histogram to an exponential histogram, right? And this is around usage and query-time semantics.
B
Do
they
change
I,
actually
think
the
way?
So,
if
we
make
the
assumption
that
people
are
using
that
histogram
quantile
function
or
a
rate
function,
then
it
actually
does
not
break
going
from
one
to
the
other
and
that's
an
interesting
thing.
B
I
just
want
to
call
out
like
what
what
we're
looking
at
here
for
what
determines
breakage
is
what
common
query
semantics
we
see
in
tsdbs
and
whether
or
not
like
changes
to
our
model
breaks,
those
common
query,
semantics
and
we
have
to
make
assumptions
about
how
tsdb
Works
overall
I'm
using
Prometheus
as
a
Guiding
Light
here,
where
I'm,
taking
into
account
things
I
understand
from
influx
DB
from
personal
usage
and
from
my
work
system,
which
is
like
the
basis
for
say
other
systems
that
people
have
written
anyway.
B
D
Would this allow us to change the default in OTel from fixed buckets to exponential? Is that sort of one of the outcomes of this decision?
B
A
B
We need to get, you know... I think New Relic and Lightstep would be the two that I'm not sure of at all... but... and Microsoft. You guys have, what, two metric systems?
B
Sorry... two major ones, yeah. Do you have exponential histograms right now, or do you have histograms that have to interact in this way?
B
They're coming? Okay. Do you know if, when they come, they will be queried in the same way, like how PromQL works? You don't know... okay. If you can follow up on that for us, it'd be nice to know... again, getting data points around general usage. Okay, cool. Anyone else you think we need to reach out to to make a decision here? And any preference we have?
D
Okay, yeah... I do not have enough to weigh in... don't know enough to weigh in one way or another, other than: given that I don't know enough, I would lean towards the conservative approach.
D
B
Yeah... I'm gonna be stupid and lean towards the aggressive approach, because I think the flexibility is worth attempting. However, I do fully expect to get shut down and go conservative here... just as an FYI to this group. So we'll push for it, but... yeah, anyway. At least exponential histograms won't have that last thing. Good. We have another five minutes: attribute types. So this is a fun one.
B
Okay, so we have this one that we're going to be talking about: the attribute value types. Okay... attribute values in OTel can be lists of strings, integers, doubles, floats, structures... it's very, very, very flexible, right? Are we going to allow attribute type changes? I think this is a much simpler question... I think the answer is probably "no, not an option"... but I wanted to at least have it as a discussion here.
B
E
B
Not enough... yeah, anyway, like I said, I don't think there's strong enough evidence at all to alleviate that one, but I just want to throw it out there for consideration. Well...
E
Attribute values are supposed to be attributes, right? They're not, like... they're not metrics; they're not something we're going to aggregate over or do math on. Is that fair? I mean, unless you're using, like, an attribute processor or something, in general an attribute is not supposed to be mathified, right? So I think, semantically, even if they do have, like, a non-string value, it might as well be a string, because it's just... it's just an attribute.
B
Yeah... and yet we actually have, like, process start time as an attribute, where it's a timestamp. Now we have process restart count as an attribute... yeah, but the process restart count is the one that actually really upsets me personally, because it feels like a metric that became an attribute.
E
Do we want to split the concept of attributes from, like... I mean, like, maybe dimensions and, like, tags or something? Like, I don't know... and I know that's very... I'm not exactly proposing that we're going to split our entire semantic convention here in this call, but I think that's an interesting, like, you know, question we bring up by asking that. Like: are we doing math on it or not? If we're going to do some math on it, then maybe the type matters.
B
And I think this goes to an overall guidance question we need to provide: when is something an attribute? When is it a metric? When is it an event? When is it a log? When is it a trace? Right? So it's... it's definitely a question for this group. It might be bigger than the next four minutes. I'd...
D
What's the advantage... what's the benefit we would get out of saying that this is okay?
B
D
...would make a mistake on, like, an int versus a string.
E
B
I think in almost all the semantic conventions I've seen, the attribute... or the definition of the attribute... is a string.
B
Let's take a quick gander, because I don't remember off the top of my head. If we look at, say, like, the resource attribute conventions, right... do...
A
E
...do we have a blocker in the, like, client API that says you must be a string? You know... because, like, we add attributes when we actually, like, call... and this is more of a detail-level thing, but if we don't even have, like, a blocker in place there, I'm gonna say, like, you know, code's kind of the source of truth, and right now we're saying that if it doesn't care, then it doesn't care. If it is type-aware, maybe that's a breaking change... but right now I don't see any reason why this would be a breaking change, if we're not, like, getting any guarantees around that in the first place.
E
True, but if we're, like... if we're allowing anything there... I mean, maybe... yeah, I guess we should just audit to see what you can actually put in there. Like... if someone can always abuse our API, right, and a back end breaks because someone abuses it... I mean, I'm not saying it's abuse; that's probably a strong word. But if someone misuses the intention of the API, or if it's, like, a beta API, then, like, yeah, the back end breaks... but is that the worst thing, necessarily? Like, if it's not, like...
B
So I think... we have defined ways of reverting from attributes back to strings, because we have to, for systems that only support strings. But I believe it's expected in the community that, once you have a type, you're going to keep that type forever... right now, today. So I brought this up because I think it's worth entertaining the question, yeah... and I think, like, Trask is getting at that.
B
Is
there
compelling
reason
for
us
to
keep
this
open
or
not
and
I
I,
just
personally
I,
don't
think,
there's
a
compelling
reason
outside
of
I
Define,
something
as
a
string
and
I
want
it
to
be
an
INT
and
I
still
don't
find
that
super
compelling
right
as
as
a
as
a
way
of
causing
churn
to
users
right
if
we
Define
the
strings
for
like
Response
Code,
let's
not
change
it
to
input
later,
like
that's
just
leave
it
a
string.
You
know
that's
what
we
decided.
That's
what
it
is
for.
E
Now, I agree... and I agree with Trask in general: we don't want to break customers; it's a very violent way to enact change and ask questions. But I don't want this to hide what, for me, is the deeper question of, like: why are you using... why are types meaningful to any potential customers, in attribute types?
E
We can't even see how this stuff's being used, and so, yeah... I mean, like, of course we're not gonna get, like, 100% of the spec out or something... like, a start. But, like, you know, as we move to solidifying the spec, I'd want to know: okay, are you actually using this for something besides string? Will this break someone? Is that a real, like, threat?
E
If it's not, then maybe, like, we want to kind of enforce it now, so it won't be a concern in the future. If it is a threat... oh, cool: why... like, you know, like, why are you using types now in the attributes, blah blah... maybe we can have a different solution for you that's more semantically meaningful.
D
I
have
one
use
case
that
I
just
thought
of
where,
like
if
you,
since,
if
you
wrote
a
span
processor
and
in
the
span
processor,
you
know
you
read
the
attribute
and
in
your
code
say
it's
Java
type
language.
You
know
you
get
the
response
code
and
you
know
you
cast
it
to
an
INT.
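The failure mode described here can be sketched in a few lines. The attribute key is the familiar HTTP status-code attribute, but the processor itself is a hypothetical illustration, not a real SDK component:

```python
def alert_on_server_errors(attributes):
    """A span-processor-style check that assumes the attribute is an int."""
    return attributes["http.status_code"] >= 500

# Works while the convention types the attribute as an integer:
assert alert_on_server_errors({"http.status_code": 503}) is True

# If a later convention release re-typed it as a string, the same
# processor would crash rather than degrade gracefully:
try:
    alert_on_server_errors({"http.status_code": "503"})
    type_change_broke_us = False
except TypeError:
    type_change_broke_us = True
```

This is why a type change is effectively a breaking change for any typed consumer of the data, even though the producing API may not enforce types at all.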
D
B
I... that's... that's a great point. That also happens in the Collector, when you do span processing, where you either have to have a bunch of gymnastics to abstract over all the types... which I think they do; they actually have, like, a built-in to-string on everything, so you can stringify it if you want to ignore the type... or you have to, like, understand what that type is. But right now, I would argue... we've... I think this is the underlying question...
B
I
kind
of
wanted
to
tease
out
and
I
think
this
means
I
will
open
a
task
for
us
to
provide
guidance
around
when
something's
an
attribute
when
it's
a
metric-
and
this
is
where
like
say
restart
count-
will
be
like
a
big
open
thing
of.
Like
should
I
treat
this
as
a
metric,
even
though
it's
a
resource
attribute,
you
know
like
what.
Why
is
it
an
attribute
versus
a
metric?
How
are
we
expected
to
interact
with
these
things?
B
What can we expect going forward, and the stability of each? I think there's an underlying question there that deserves to be answered. But fundamentally, I just don't see enough value in us breaking attribute types, outside of answering this question. Like, I think that question's the important bit of guidance... when it's an attribute, when it's not, how to interact with both... but I don't see value in changing a type on users at all. I think I'd just see breakages and horribleness.
B
Okay... so I think I'm gonna open a bug specifically around...
B
Fun story: our internal query API allows you to say, like, "when error code is greater than 200" if it's an integer, but if it's a string it doesn't allow you to do that, and it annoys the crap out of me, because people use error code in both ways in our API.
B
Anyway... okay, back to... all right, we're out of this time box. So: semantic convention process topics. There was a really interesting issue raised around an "up" metric, and this is about computing uptime.
B
This is a thing that has been raised several times. I think Josh from Lightstep wrote up a whole bunch of stuff around measuring process uptime and all that kind of junk. What I wanted to ask here is: do we think we should form an expert group around "up" metrics? Grab some Prometheus folks, grab some of our metrics folks, and throw them at that problem.
D
B
All right, so the next question is... I think we'll see if we can get enough interest for folks to do that. If we don't have enough time to spend on it, I think I'll comment on the original issue that, basically, like, you know, we don't have the resources to spend time thinking about this... if you'd like to grab some experts and do this on your own, feel free, but, you know, we don't think we're going to be able to make progress on that.
D
Yeah
and
I
think
we
should
prioritize
this
against
other
semantic
conventions,
also,
even
if
it's
separate,
even
if
it's
a
separate
group
but
still
taking
Community
attention,.
B
Actually, to phrase it another way: I totally agree. I think HTTP and messaging should be our number one and number two to get out the door quickly.
B
I would argue... I would love it if Prometheus could outline that entire semantic convention, because they're the ones that are actually going to be hit by it the hardest, as the cause of, and solution to, the problem. I don't know if you saw: Prometheus is going to add OTLP support. When they do, they will have to figure this out.
B
So
we
could
also
like
defer
answering
the
problem
until
we
see
what
they
do
around
up.
Metrics,
but
I
think
that
the
otel
community
would
like
to
be
a
part
of
that
discussion.
So
I'll
still
I
will
escalate
to
I've
already
escalated
to
Riley
and
Josh.
But,
let's
just
say
from
our
standpoint.
B
Lower
priority
backlog,
We're
Not,
Gonna
We're
Not
Gonna
prioritize
getting
that
through.
Okay,
awesome,
Ted
welcome.
F
Yeah, sorry... too many OTel meetings; the comms meeting was overlapping with this one, but luckily we fixed that going forwards. Yeah... so, trying to tackle future work on reviewing and stabilizing these semantic conventions: it would be great if we don't repeat what happened with HTTP, and kind of have it get, like, expanded out, like a body going into a black hole... and do it within some kind of, like, set time frame.
F
I
think
this
will
also
really
help
get
get
people
involved.
If
we
can
actually
plan
out
a
schedule
for
when
we
are
going
to
at
least
try
to
kick
off
reviewing
each
semantic
convention,
so
I'm
willing
to
put
a
schedule
together
and
try
to
organize
people
into
taking
part
in
it,
but
just
sort
of
like
basic
brass
tacks.
Questions
like
what
do
we
think
is
a
reasonable
timeline
per
domain.
F
B
E
I'm curious if we have any project managers in the OpenTelemetry community in general... that was where my thought went. I don't have an actual constructive answer... just, that's a good question. Yeah... well...
D
F
B
Here's... here's what I'd suggest, right. So I think... I think, instead of focusing on how long it takes to define the semantic conventions, let's focus on the process we can control. So I think we should have about a month to form the expert committee, and agree to it and bless it, and say: these are the experts; these are the people who will be defining it; they have authority to make decisions within this.
B
Then,
however
long
it
takes
them
to
argue
that
out,
because
there's
always
that
process
that's
fun
in
any
specifications
group,
let
him
talk
it
out
because
they
need
to
get
to
a
point
of
consensus
like
we
do
here
right
and
then
the
last
part
is
the
part
that
I'm
more
worried
about,
which
is
from
the
point
where
they
say.
We
know
what
we
want
to
do
with
semantic
inventions.
B
"Here's our proposal," right? And walking that through the specifications to release... I'd like for that to take at most a month, because we already pre-approved: these are the experts; this is what we're going to bless. They come with a big proposal of "here's all the things we want to do", and then we walk that through one month of, you know, vetoes and back-and-forth... possibly, like, through maybe the technical committee and the governance committee or whatever... and then that comes out.
B
F
I'm tempted to put a suggested timeline on that part as well, because I think... I think it might help groups to feel like they have a month to come to consensus. You know, I think we may want to just approach these a little bit differently. Like, we have this philosophy of... we have these, like, super long-term projects, so we get a working group together, they meet once a week, and, like, slowly hack through whatever. I kind of feel like, with these...
F
It
should
be
like
maybe
a
little
bit
faster
and
more
concentrated,
because
we're
trying
to
do
like
a
a
one
pass
and
and
get
done
with
it.
We
definitely
found
that
with
like
rum
or
whatever
we're
like
wow.
It's
really
just
slowing
us
down
to
me
once
a
week.
Let's
meet
like
three
times
a
week
and
just
like
like
chew
through
this.
B
I mean, that sounds reasonable. Then the question is: how many groups meeting three times a week can you find?
F
I think, for one, we kind of need outside experts, and having this stuff on a timeline is really gonna help there. I know, you know, Microsoft and some other people who were interested in helping kind of, like, backed away, mainly because we didn't have any timeline... like, it just seemed like it was going to stretch out into infinity.
B
The other question... and I'm gonna throw this out there; this is probably too political... when we pulled in the Microsoft experts specifically for HTTP, they got nitpicked to death by random people in the community. Maybe that also caused some friction there. That's why... that's why, legitimately, I want this whole, like... we do this month of collecting an expert committee, so that people... so that, you know, they're an expert.
B
You
know
the
credentials
you
understand
who
this
is
and
why
they're
proposing
it,
and-
and
we
agree
that
this
is
like
a
good
set
of
people-
it's
okay
to
say
like.
Why
did
you
do
this
and
stuff,
but
man
nitpicking
to
death
and
like
blocking
everything
for
one
comment
is
not
what
I
want
going
forward.
Yeah.
F
I think, relevant to that... so, we haven't fully, like, implemented any of these, but I did get some project-management stuff put into the spec around, when we form these groups, having at least two TC members beyond them... which I think also partially answers our parallelization question. Because, honestly, like, a lot of the nitpicking came from having to sort of relitigate everything, you know, the second time around.
F
That's somewhat unavoidable, right? If you're going to have the pattern where a working group figures it out and then presents it to everyone else, you're going to relitigate it. But doing that with all of the spec maintainers and approvers being totally in the dark about the thing created kind of a weird dynamic.
B
We need to be open and make sure everyone's included, but we also want to move quickly, and sometimes those are at odds. Right, that's why I'm suggesting we at least give a month for the initial review of who's on the expert committee, and then the initial review of the proposal itself, so it gives enough time for people in the open community to give their feedback and feel like it's been addressed. Yeah.
F
A TC member or someone similar would make a comment and then wander off, so that also stopped things. But, you know, having two TC members involved.
D
Yeah, I think that's really important to make that one-month transition at the end go smoothly, because those two TC members can really help to, you know, assuage the concerns of the broader community, and they can help bridge the gap for outsiders who maybe don't know about the goings-on and who's who and all that.
E
I mean, the worst-case scenario is we push something that apparently wasn't perfect, and that's going to happen anyway, and we have this major-versioning system, right? So, I don't know, I don't think it's that big of a deal; we can always just change it later if it turns out we're wrong. That's the whole point of having versioning.
F
Once
we
declare
people
I,
don't
want
to
break
yeah
I
would
I
would
I
I
think
I
have
faith
that
we
will
not
the
only
way
we
end
up
doing.
That
is
if
we
just
literally,
have
no
expertise
in
the
subject
when
we
review
it,
I
really
think
that's
the
only
way.
We
end
up
with
something
that's
like
stupid
enough
that
we
would
break
it
as
opposed
to
like
insufficient.
We
just
want
to
like
add
things
to
it.
You
know,
but
we'll
find
out
on.
F
Just
to
kind
of
help,
maybe
with
this
this
process,
the
ECS
community,
I
kind
of
woke
that
conversation
back
up
with
them,
there's
been
a
bit
of
a
Changing
of
the
Guard
over
at
elastic,
but
the
the
new
people
are
are
interested
in,
like
kind
of
like
participating
in
like
actively
donating
BCS
I.
Don't
think
they
want
to
merge
governance
at
this
time,
which
is
okay,
but
it's
another
source
of
expertise
when
it
comes
to
to
doing
each
one
of
these.
B
The idea we had initially was to have an OTel person in their approval process and ECS people in our approval process, around logging at a minimum, to start aligning the two processes.
B
There's a discussion we could have about whether or not ECS just gets blanket pulled into OpenTelemetry, because if you look at a lot of what's in there, a lot of it is SecOps and things we haven't even started thinking about, and just blanket pulling it in might be the right way.
F
I am a hundred percent on board with: if they have already defined it, we can have some faith that they did a good job, because they seem to have done a good job, especially around the SecOps stuff, and just say, if they've defined something there,
F
We
pull
it
in
where
there's
a
little
bit
of
like
massaging
that
has
to
happen
is
like
honestly,
some
of
it
just
really
has
to
do
with
they
weren't
thinking
about
tracing,
so
it's
all
like
logging
stuff
and
we
need
to
at
minimum
at
least
be
like
well.
Some
of
these
things
need
to
be
like
span
attributes
or
whatever.
B
Let's defer this. We'll add the ECS and OTel semconv discussion to the agenda as the first thing next week, because I think we could kick off what that looks like; that'd be really interesting. I'm just going to mention this real quick, because we're running out of time.
B
Yes: if you know what's blocking the HTTP semantic conventions being marked stable, I'll create a bucket in our project that we can put bugs into, and we can actually start prioritizing finishing them going forward. At this point I think it's metric stability, but anyway, let's take that offline.
B
Let's go declare who our experts are, let's make sure we agree that these are experts and that sort of thing, and the community will take their entire proposal, get that list of things into a big queue that people can review, and then we'll put a one-month review on it and see how that works. So basically I'm tying your comment together with Ted's process thing; let's just make that happen for HTTP right now. Are you okay putting that document together, Ted? Yeah, 100%. Awesome, let's go, let's do it! Yeah!
B
Okay,
sounds
good
all
right!
Thank
you!
Everyone!
Sorry.
We
ran
out
of
time
again
not
surprising,
but
yeah,
looking
forward
to
seeing
y'all
bye.