From YouTube: 2022-12-13 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
B: Good morning, or afternoon for some of you. Let's start in one minute; let's hope that more people will show up.
B: Okay, let's start; let's see what happens with more people coming in, maybe they're not coming. It's almost holiday season. Okay, the first item, I put it there.
B: I think we discussed that in the past, and there was general agreement that something like this could be desired, but then nothing was done, so I just wanted to resurrect this one. One of the options there is that, for example, this is specifically for environment variables, and one of the things that Yuri mentioned there, along with Riley, is that we could provide a callback system.
B: Okay, so let's hope that more people will take a look at that. By the way, I think this item and the next one are items that the auto-instrumentation group is very interested in, so that's why I'm raising them here. The second one is kind of mixed. Basically, it's about environment variables as well, and it was created to explicitly allow "unlimited" as a value, you know, for a numerical environment variable. Andre was asking: why would the user desire to have something unlimited, like no limit?
B: You know, a value for a numeric environment variable. But beyond the fact that this is a nice-to-know thing, I think it could be good to have a value that signals, for a numerical value, that there's no limit; you know, instead of zero or minus one, I don't know what, but something. So I would like to get some opinions on this one. I don't know whether any SIGs have experienced this need or not.
C: I haven't seen this come up in Java before, so I'm kind of aligned with Riley: needing to understand why this no-limit option is needed.
E: I was gonna say, well, you know, my only issue with this is…
E: Someone said it was very Python-ish, right? Like, setting a numerical value to a string is something that's going to be annoying in at least some languages; if any numerical value can come in as either a number or a string, that seems a little annoying. And, well, I can understand why this person wants span attributes to be unlimited, because it feels weird to pick a number there.
E
I
agree
that
in
general,
like
we
shouldn't
be
leaving
things
that
consume
memory
to
be
open-ended,
I
I
would
almost
want
it
to
be
like
a
second
attribute
you
would
set
instead
for
if
they
were
like
for
specific
specific
attributes
that
we
wanted
to
be
unbounded.
D: So this talks specifically about the OTel span attribute limit currently, right? So, the attributes in a single span. Let's say I set this to 1 billion or whatever, right? I mean, realistically, is it possible to create more than that? I'm guessing no, right? So it is serving the same purpose in that case; the only difference is how you express it.
E: Right, and so, what this person is saying is that a null value should represent no limit. I think that's wrong, because there are plenty of reasons why a null value could come in due to misconfiguration, and that would be a bad result. And, correct me if I'm wrong, for people who work in typed languages like Java: wouldn't it be kind of annoying to have to check whether things are a string value or a number before processing them?
E: It would be. I also don't want us to have an endless number of environment variables, but if it's a special case where we're saying, rather than use the default, I want to pass "unlimited span attributes equals true" as a second parameter, instead of specifying the numerical value for the number of attributes, then I'd rather special-case it. Yeah, or negative one, as Aaron put in: that's a number that's explicit and that would be impossible as a correct answer otherwise.
B: Yeah, I mean, I'd be curious about the specific corner case where we could use minus one. But going to Jack's point, if this is something that no other SIG has seen, probably we can postpone that discussion; whenever we actually need something like that, we can come back and discuss it. In the meantime, we can just reply and say, you know, we don't want to support this for now, so you have to specify a large number yourself, or something like that.
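As an aside, the special-value idea being debated here can be sketched in a few lines. This is a hypothetical illustration, not the actual SDK behavior: it parses a numeric limit from an environment variable, treats an explicit -1 sentinel as "no limit", and falls back to the default on misconfiguration (rather than letting a null or garbage value mean unlimited, the outcome E argues against). The variable name and sentinel choice are illustrative assumptions.

```python
import os

# Hypothetical sketch: a -1 sentinel means "no limit"; anything
# unparseable or otherwise negative falls back to the default.
_UNLIMITED_SENTINEL = -1

def parse_attribute_count_limit(env_var="OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT",
                                default=128):
    """Return the configured limit, or None to signal 'no limit'."""
    raw = os.environ.get(env_var)
    if raw is None:
        return default  # unset: fall back to the default
    try:
        value = int(raw)
    except ValueError:
        return default  # misconfiguration: fall back rather than go unlimited
    if value == _UNLIMITED_SENTINEL:
        return None  # explicit sentinel: caller applies no limit
    if value < 0:
        return default  # any other negative value is invalid
    return value
```

The point of the sentinel is exactly what is said above: a number that is explicit and could never be a correct limit on its own.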
B: Okay, thank you so much for the feedback. I will add these comments there, and hopefully that helps them make progress for now. Thank you so much. Okay, let's go to the next one: sampling and span links.
G: I just want to quickly announce that we've been talking, in two different SIGs, the sampling SIG and the client instrumentation, sorry, the messaging instrumentation SIG, about some emerging problems that people now see with spans and span links and sampling.
G: It's really not obvious how those two are meant to work together, and there's been a discussion going on. I think we have some proposed solutions, or at least I think there are some solutions here. The teaser I want to give you is: what if, when a span links to another span and that other span is sampled, you could record the first span, just because of the linking, in order to complete that other trace? The problem is, now we have to record a span that wasn't sampled; that is another sampling question.
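The teaser above can be sketched as a sampling decision that consults the span's links: if any linked span context was sampled, record this span too, so the linked trace stays complete. The types and the function here are simplified stand-ins for illustration, not the OpenTelemetry sampler API.

```python
from dataclasses import dataclass

# Simplified stand-ins for a span context and a span link.
@dataclass
class SpanContext:
    trace_id: int
    sampled: bool

@dataclass
class Link:
    context: SpanContext

def should_sample(parent_sampled: bool, links: list) -> bool:
    """Record if the parent was sampled, or if any linked span was sampled."""
    if parent_sampled:
        return True
    # The "what if" from the discussion: linking to a sampled span
    # pulls this span into the recorded set as well.
    return any(link.context.sampled for link in links)
```

This also makes the stated problem concrete: the decision now depends on data (the links) that a head-based sampler would not otherwise consult.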
G: So please come on Thursday morning; it's going to be in the messaging instrumentation SIG, and we will talk about the problems of sampling spans with links. That's my only bullet there. Thanks, Carlos.
B: Perfect, thanks so much for that. Okay, next one, Tristan: more questions around the untrace proposal, API or SDK. Please.
H: Yeah, so, a little quick backstory for anybody who wasn't here when we discussed it before: untrace is a method or function that says, from this point on, do not create any, at least any sampled or recorded, spans within this, I'd say, scope. But "scope" is used for instrumentation scope, so that's probably a bad term to use; but it might be within a block, or within a function, an anonymous function, or something.
H: So, I've been struggling with trying to put a proposal together for this, and we've discussed it before, and one of the things that I've struggled with is whether it belongs in the API or the SDK. I started leaning towards the SDK, because it's more of a concern for end users: it's not something that an instrumentation library should be doing, but something the end user should be doing. And, as far as I can tell, it would be another…
H: It would need to be a function on the Tracer, which would be the only other function now on the Tracer in the API, this little untrace thing, and I just felt that looked ugly. So I wondered if anybody had any thoughts that might help me, of course, along with making this proposal.
D: I think I maybe disagree with that. If I'm an instrumentation author and I'm instrumenting, say, an HTTP library that uses a different HTTP library under the hood, I may want to suppress those under-the-hood calls in my instrumentation, so that it appears as one outgoing HTTP call. I would also push back on the idea that this should be specifically on the Tracer; that would almost argue for this to be included in the context API, because you may potentially want to suppress other signals. Yeah.
H: That would be the question: should it be something on the context that says, don't do anything for signals? I guess you'd be able to also select signals. Say, by default it would turn off all signals, and optionally you could say "un" or "suppress" and then give an option of metrics, logs, or traces.
E: I'm wondering whether this could be configuration rather than code; would it be sufficiently granular to target the scope, right?
H: I think not, necessarily, because one of the use cases that people have come to me with is the database queries that some library ends up generating: it makes a whole bunch of queries and they don't want those, and those would be within the scope of the database library, which is also used for, you know, application logic, so they do want those ones. So they can't just turn it off.
H
Guess
you
could
say
no
yeah,
it's
not
as
simple
as
saying
you
don't
want
the
children
of
the
library.
That's
calling
the
database
library
because
there's
other
children
that
would
want
so
yeah
I
think
it
needs
to
be
within
the
code.
H: And Ruby, I think, is switching to doing it as a context entry, just like JavaScript.
E: That seems like the right way to do it: this is just more information you add to the context. You can put it in there and then you can pull it out later, and you add two API methods, not on the Tracer or on any object, but just two functions, you know: stop tracing, "untrace" and "un-untrace", or whatever you want to call them.
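A minimal sketch of that context-entry approach, assuming Python's `contextvars` as the ambient context; the names `untrace` and `is_tracing_suppressed` are placeholders, not the actual OpenTelemetry API:

```python
import contextvars
from contextlib import contextmanager

# A suppression flag stored as a context entry, not on the Tracer.
_suppress_tracing = contextvars.ContextVar("suppress_tracing", default=False)

@contextmanager
def untrace():
    """Suppress sampled/recorded span creation within this block."""
    token = _suppress_tracing.set(True)
    try:
        yield
    finally:
        _suppress_tracing.reset(token)  # restore the previous value

def is_tracing_suppressed() -> bool:
    """Hook an SDK could check before recording a new span."""
    return _suppress_tracing.get()
```

Because the flag lives in the context rather than on any object, the suppression follows the execution flow and could, in principle, be extended to select which signals to suppress, as discussed above.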
E: In general, I know what you mean about third-party instrumentation, that it shouldn't touch this, but I think, in general, we don't want any instrumentation, including application code, pulling in the SDK, because then they're taking a hard dependency on, like…
E: …SDK stuff at that point, and I think we want to avoid requiring that in general. I had a previous spec proposal that is now closed; I'm adding it into the chat here. I guess we could probably also add it to the agenda; it describes essentially exactly what we just talked about, where there's a context key for suppressing tracing.
E: I mean, I think it would be straightforward enough to create an OTEP and say: my prototypes are the thing we've already done in these two languages. And by having it just be a standalone function that touches the context, that makes it very easy for all implementations to add it in a backwards-compatible manner, because, like, in Go you could put it in a different package or something if you wanted to, even.
H: Yeah, all right, I think that's good for me, to go back to the proposal and continue with it. Do people think it should be an OTEP, or should I just do a PR on the spec?
G: Like, on attention: I can think of actually four or five really promising OTEPs right now that I refer to all the time. There's one on OTLP Arrow, there's one on messaging structures, there's the one that Tristan linked to, there's one that I don't recall at the moment; probably more than that. They're all really good and they're sitting there. I think we're not going to solve it in this meeting, but it's something that I think the TC should likely address.
E: Yeah, I would love that. So, I do have several PMs who have reached out, who are interested in helping us with project management, maybe starting in January. We have a whole bunch of other things around these semantic conventions, which is my kind of talking point, which we're on anyways.
E: Right now, so, my item on the agenda is the semantic conventions and stabilizing them. I've attached the second draft here, so if you could just have a look at that real quick; I'll share it there. I'm hoping, based on feedback, that this doc is more readable.
E: The idea here is: we have all these different semantic conventions that we need to stabilize. They all need some amount of review before we stabilize them, and if we look at our current ones, some of these might be a doozy; messaging and RUM are actually pretty big, compared to, maybe, reviewing, you know, MySQL. But even HTTP seemed to drag out forever, and my concern is that we're on the cusp of stabilizing metrics, tracing, and logs, like we're almost done, but our semantic conventions are still experimental, and we haven't done a pass through instrumentation and lifted them all up to having a schema version and everything.
E: So, this proposal is a process for helping to set up these working groups; this process is kind of based on my feedback there.
E: Specifically, there's some required staffing that we want to address: who's going to lead the project; which TC members and spec approvers are going to be on board with this project, so they're not off in the woods on their own; who's signing up to write prototypes for this thing; and also, for the prototypes, at least having maintainers or approvers in those languages agree, like, yeah, I will review this prototype when it comes up. And then having a timeline.
E
So
the
timeline
I'm
proposing
is
three
quarters
long.
The
first
quarter
is
just
us
announcing
that
we're
going
to
start
this
working
group
and
going
out
there
and
looking
for
member
organizations
end
user
communities
where
we
might
find
subject
matter,
experts
looking
at
expanding
our
list
of
specification
approvers-
and
this
is
because
often
a
blocker
for
stabilizing
these
things
is.
We
feel
that
our
current
spec
Community
doesn't
actually
have
the
expertise
in
this
area,
so
we
don't
feel
comfortable
like
doing
the
reviews
of
this
stuff.
E: So, generally speaking, for these semantic conventions, it would be helpful to get some subject-matter experts to join. My experience trying to get people to join, especially if they're going to do a lot of work on it, is that they need a heads-up; they can't start tomorrow. Usually they say: I can start, but I have to go to our quarterly planning meeting at work and get approval for this.
E: So that's why I'm saying we want to have one quarter where we just try to organize the group and get people in. Then we try to spend one quarter actually doing the work, which is way faster than we tend to go. We tend to have, like, a…
E: …meeting. Jerome, do you mind? Thanks. We tend to have a cadence of meeting once a week for these working groups, and taking, you know, a slow-but-steady approach.
E: It would be great if we could review these things much faster, so I'm proposing that the working group form and try to get its review of proposals out to the community in six weeks, and then have an announced one-month review period where we're asking for public comment on these things. Hopefully, at the end of that month, we have something we're satisfied with internally, and we can approve OTEPs or PRs or whatever form this has taken, and then that gives us two weeks to internally do whatever cleanup we need, like, if we've accepted the OTEPs, actually getting the PRs into the spec. So that would be a three-month process, beginning to end, of getting the spec work done for a semantic convention. And then the third quarter would…
E: …have people who have signed up in the different languages do a sweep through the instrumentation we're currently offering in contrib, and update it to whatever changes we've made to the semantic conventions. This is one of those little death-by-a-thousand-paper-cuts things; I don't think updating any one piece of instrumentation really takes much time, but, you know, contrib management is a little all over the map.
E: Not all of these things really have maintainers, so we would just need to be a little organized about how we do this. If we do it that way, and we kick these things off: this is a rough idea of when we'd kick these things off, which I think is aggressive. So, first quarter, we dive into our existing ones that are already working and try to get them done in that quarter.
E: So, try to get the HTTP, the messaging stuff, and the browser stuff done in the first quarter, while trying to find people to work on the networking stuff that we have, gRPC and mobile clients; and then the next quarter, trying to find people for database while doing that work. The third quarter, hopefully, shouldn't be too hard, but if we do it this way, we'll be done in 2024.
E: This seems like both a fast process, but also like it's still going to take us over a year to chew through this stuff. So, that's my proposal. I want to talk to the TC board about this. I have some PMs who are willing to come in and help us, and I would like to see this maybe be part of a more general approach to having some management around OTEPs and what spec projects we're working on, because, like Josh was saying, we tend to…
E: We have actually a number of good OTEPs and, like, super useful things kind of out there in the backlog that we want to do, but we only have a limited amount of attention at any given time that we can give to things. And so, if we could be a little more organized about deciding which of these projects we're going to try to work on right now, so that we're able to pay attention while the person who's proposing the project is around and able to respond quickly…
E: …maybe we can start processing our backlog a bit faster. And, for the ones we're not working on, we could tell those people explicitly: hey, we're not going to work on this right now; we want to work on this next quarter, or something like that. So they're not wondering why no one's paying attention to their OTEPs.
D: I generally welcome accelerating our schedule of stabilizing the semantic conventions; I think it's okay to make some mistakes there, even, right? And the way to fix those mistakes would be through the schemas, right? We will publish new schemas; we'll have new ways to describe the changes. And otherwise, I think, if we're not brave and don't say that…
D
Okay,
we're
happy
what
we
have
so
far
and
we
acknowledge
the
fact
that,
maybe
it's
not
perfect-
maybe
there
may
there
are
some
mistakes
that
they
don't
don't
see,
we'll
just
never
never
do
that,
never
release
it,
but
there
is
always
a
possibility
that
there
is
some
sort
of
a
mistake
there.
That
needs
to
be
fixed.
I
think
that
we
should
acknowledge
that
and
say
that's
okay.
This
is
what
we
have.
This
is
the
best
way
that
we
came
up
so
far
with
and
let's
make
it
a
1.0
and
then
we'll
we'll.
D: …have, like, a process. I'm not saying let's not do anything; let's time-bound it, let's limit how long we wait. And I agree with you, I'm not opposing; I'm just saying: let's just do it, and let's do it sooner than we maybe think is necessary to be completely confident. There's no need to be absolutely sure here. Okay.
E: Yeah, I think we're in alignment. I agree there's no such thing as perfect, and in fact, I want to help encourage these groups not to try to reinvent what we're doing and come back and be like: well, the way we're doing databases is wrong, we should take this totally different paradigm and break everything. I don't want us to do that.
E
I
just
want
us
to
like
if,
if
we
can
be
organized
and
if
we
can
evangelize
this
stuff
to
the
public
before
it
happens,
so
anyone
who
like
does
care
who's
floating
around
open
Telemetry
can
participate.
If
we
can
try
to
time
box
like
six
weeks
for
the
working
group
to
propose
changes
a
month
of
like
public
review
and
then
like
two
weeks,
let's
like
get
it
in
I
think
we
could
do
it
in
that
amount
of
time.
E: Yeah, hopefully we could do these quickly, and hopefully they'd need less, be more minimal. I think part of why we're maybe a little scared, at least I am, is that the ones we picked seemed to just be doozies, right? Like, messaging actually is a whole other freaking asynchronous domain; the browser stuff is a very broad, deep domain; and those are domains that we actually didn't really think very much about.
E: That's why I'm saying I think we need to be a little more organized; it's more a matter of being organized, I think, than it is a matter of these things being really hard. So I can try to help organize this, and bring in some other people just to maybe help us organize it. But, you know, I don't want to impose this on the spec community, so hopefully we can have some more discussions about this and try to kick it off in January.
E: I've said my piece; please give feedback on the doc if you have any questions or concerns about this.
E: So, these are three months… these are three-quarter projects, right? So Q3 2023 is, I think, the last kickoff date, and then it's three quarters from there before we finish that last one, right? So these will be kind of overlapping each other.
E
Okay,
I'll
I'll
try
to
clarify
that
I
will
I'll
put
I
thought
about
like
creating
a
Gantt
chart
by
making
a
trace
of
this
whole
thing
and
then
taking
a
snapshot
of
that
and
loading
it
in,
but
I'll
I'll
make
it
more
clear
how
long
I
expect
these
things
to
go.
You're.
F: All right. And since you mentioned the schema transformations earlier: do you think we already have the necessary transformations that might come up defined in the schema specification as well? Or do you think we might need more operations to be defined, the capabilities of what schemas can do extended? Because that will also have an impact on backends implementing schemas.
E
I
think
tigran
I
have
some
comments.
There
I
mean
I.
Think
it
comes
down
to
like
a
schema.
Translation
is,
is
a
function
that
you
apply
in
The
Collector,
and
so
it
comes
down
to
anything
that
you
can
any
it's
it's
less
about
like
which
operations
are
available
and
more
about
like
you,
can't
you
can't
create
data
out
of
nothing
right
so,
like
whatever
you're
transforming
the
output
into
has
to
be
based
on
the
input.
F
That
does
that
make
sense.
Partially
I
was
just
thinking
if
I
I
know
we
have
renaming
of
attributes
in
there.
We
have
splitting
metrics
I'm
just
wondering
if
there
is
any
anything
that
might
come
up,
that
we
don't
have
in
there.
It's
not
yet
then
yeah
yeah.
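For illustration, the attribute-rename transformation mentioned here can be sketched as a pure function over an attribute map, which is roughly what a schema translation applied in the Collector amounts to conceptually. The example mapping and helper name are assumptions, not taken from any published schema file.

```python
# Hypothetical rename table: old attribute key -> new attribute key.
RENAMES = {"http.status_code": "http.response.status_code"}

def rename_attributes(attributes: dict, renames: dict = RENAMES) -> dict:
    """Return a copy of `attributes` with keys renamed per `renames`.

    Note the property E describes: every output value is carried over
    from the input; the transformation cannot create data out of nothing.
    """
    return {renames.get(key, key): value for key, value in attributes.items()}
```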
D: Only after that would we start using that type of transformation in the semantics, meaning the changes to the semantic conventions. I realize this means that it takes longer to release the particular changes, but that is what we have to live with, I believe, and hopefully that is going to be rare, right? Introducing a completely new kind of change that is not possible to express with the current means; I think that's okay.
H
Yeah,
so
I
ran
into
this
yesterday
with
a
user
who,
it
seems
it
would
be
useful.
The
use
case
is
I
think
covered
in
that
Otep
already,
but
just
to
give
some
background,
I
guess
is
an
HTTP
request
that
has
they
read
in
some
information,
maybe
from
the
database,
or
attach
something
and
about
the
user
or
the
request
in
general.
H
That
would
be
useful
to
them
to
be
on
the
child's
bands
as
well
and
right
now,
the
only
way
to
effectively
do
that
would
be
I
guess
a
span
processor
that
does
this
for
every
single
span
that's
created
in
the
system,
even
if
it's
completely
unrelated
to
http
or
that
endpoint,
so
it
they'll
all
be
checking
if
this
value
exists
in
the
context
that
the
user
has
to
set.
So
it's
felt
like
it
was
cleaner.
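The workaround H describes might look like the following sketch: a processor whose on-start hook runs for every span and copies a user-set context value onto it as an attribute. The class, hook, and key names here are hypothetical simplifications, not the SDK's actual interfaces.

```python
import contextvars

# Value the user must set upstream (e.g. in the HTTP handler).
request_info = contextvars.ContextVar("request_info", default=None)

class Span:
    """Toy span: just a name and an attribute dict."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}

class CopyContextSpanProcessor:
    """Runs for every span in the system, related to HTTP or not."""

    def on_start(self, span):
        info = request_info.get()
        if info is not None:
            # Copy the context value onto the child span as an attribute.
            span.attributes["app.request_info"] = info
```

This shows why it feels heavyweight: every span in the process pays the context lookup, even spans unrelated to the endpoint.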
E: I think this is kind of a perfect example of what we've been talking about, because I totally agree: this is super useful. I think at Datadog they have a concept of trace-level attributes.
H: It needs a lot of discussion, though; I mean, what signals does it cover? You know, lots of questions. Yeah.
E: This quarter, or this month: which of these are we going to try to tackle as a group, or a subgroup, or something, where we're saying, okay, these TC members are interested, and this group of people, which represents enough people to approve this thing and get it over the hump, is agreeing to work it out with the community to get it done; something like that.
H
To
get
this
necessarily
moving
along
but
get
to
get
some
form
of
Otep
process
moving
along
so
that
this
can
because
I
could
have
just
started,
commenting
on
the
spec
on
the
Otep,
but
not
going
to
go
anywhere.
So
there
needs
to
be
some
process
that,
because,
if
there's.
G: The 207 issue; I think we need champions. We're going to discuss this in the TC meeting tomorrow. In this case, I'm not sure; we found there was a disagreement in 207, and it felt like it stalled everything right there. Yeah.
H: Yeah, I suppose something like: we can loop back next week, after the… or, you know, yeah, we're thinking about Christmas; after whatever is discussed in the TC meeting too, so.
E: Maybe, at minimum, in January when we come back, we can kick it off as a TC or spec community: just take a list of all the outstanding OTEP stuff that's currently there and, one way or the other, pick some subset of that stuff and say, okay: these things are valuable, they've been languishing, but everyone involved is still around; let's just tackle these first, and then let the community know that we're going to go after these.
H
And
I
think
if
Jack
is
referring
to
this
discussion
about
saying
no,
not
right
now,
I
think
that's
also.
A
good
thing
to
have
is
no
not
right
now
on
this,
so
that
can
be
what
that
process
can,
as
we
go
through,
the
oteps
can
say
not.
C: I think I have the next item in the list: exponential histogram max scale. Okay, I'll make this quick; we've only got a couple of minutes left and I don't want to take all that time. So, in the interest of limiting work in progress, I've been trying to put exponential histograms to bed: mark them as stable and get them out of this experimental state. There were some discussions a couple of months ago about potential performance issues with exponential histograms, so I've been looking into…
C
You
know
all
sorts
of
performance
things,
but
the
Java
implementation,
nothing
jumps
off
the
page.
They
seem
reasonable
from
a
performance
perspective
from
both
memory
allocation
and
CPU.
C
So
I'm
not
I'm,
not
concerned
about
that
in
any.
But
you
know
if
there
is
one
thing
to
talk
about
it's
this:
it's
that
when
the
scale
factor
is,
is
positive.
C
It's
it's
harder
to
compute,
which
bucket
a
particular
measurement
goes
in.
That's
where
this
PR
that
I've
linked
tier
comes
into
play.
This
is
this
would
impose
a
an
optional
Max
scale,
parameter
that
you
could
configure
your
exponential
histograms
with,
and
this
would
limit
the
number
of
rescalings
and-
and
you
know,
avoid
having
to
compute
the
bucket
index
with
the
more
expensive
algorithm.
If
you
are,
if
you
care
I,
think
most
users
won't
care
about
configuring.
Their
Max
scale
at
all,
so
I
think
it
makes
sense
to
have
this
optional,
but
yeah.
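For reference, the mapping cost under discussion can be sketched as follows. In a base-2 exponential histogram, bucket index i satisfies base^i < value <= base^(i+1) with base = 2^(2^-scale), so the index is ceil(log2(value) * 2^scale) - 1. Positive scales need this logarithm path; zero and negative scales can instead use cheap floating-point exponent extraction, which is one motivation for capping the scale. The helper names below are illustrative, not the SDK API.

```python
import math

def map_to_index(value: float, scale: int) -> int:
    """Bucket index for `value` at the given scale, via the log method.

    Satisfies base**i < value <= base**(i + 1), base = 2**(2**-scale).
    """
    return math.ceil(math.log2(value) * 2 ** scale) - 1

def effective_scale(desired_scale: int, max_scale: int) -> int:
    """Apply an optional max-scale cap like the one proposed in the PR."""
    return min(desired_scale, max_scale)
```

For example, at scale 0 the base is 2, so a value of 5 lands in bucket 2, covering the range (4, 8].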
C: Please take a look. I think we should mark exponential histograms as stable with or without this; I think this is an add-on, not a necessity. So, yeah, I wanted to take this opportunity to talk about this PR a bit, and to say that I would like to propose opening a PR to actually mark them as stable and seeing what folks think about that.
G: I have a little technical comment on the cost and the scale being positive, but I don't think we should discuss it here; I think the bigger question was whether we're ready to stabilize it. I know there were a couple of questions about naming; like, it is a base-2 exponential histogram, not just an exponential histogram.
G: In some people's minds there were questions about whether it needs mechanisms built into it to expire, essentially to control resolution loss, which Prometheus has a plan for and we maybe have no plan for; cumulative, like Prometheus, or we have delta temporality, and I…
G
Think
the
trouble
here
is
that
Delta
temporality
is
better
for
exponential
histograms
right
now,
because
it
solves
that
that
reset
problem
and
I
think
there
are
concerns
that
it's
not
ready
for
cumulative
use,
basically
I'm
playing
the
devil's
house,
because
I
would
love
it
stabilized
we
have
Matt,
is
working
on
a
JavaScript
implementation.
Diego
has
almost
finished
his
python
implementation,
there's
a
reference
implementation
in
go
and
we
have
Java.
So
that's
like
we
have
it
added
to
the
go
SDK,
although
I've
done
that
a
few
times
so
I
I
would
call
it
stable.
G: The spec has a lot of detail on how to compute the mapping functions. What Jack noticed, and there's an issue open on it, is that when we did the research there was another approach, called the table-lookup approach: it's super fast, it's super complicated, and it's like an O(1) lookup, and it's exact, which is not something you can say about the logarithm approach. That's probably a sort of esoteric conversation.
C: Hey Josh, so, you listed a couple of potential issues; there's an issue outstanding that questions whether we should stabilize exponential histograms. Would you mind commenting on that issue with those outstanding questions, just so we can decide whether…
G
Okay
and
I
had
a
couple
more
items
on
the
agenda.
I
know
we're
almost
out
of
time
and
actually,
after
listening
to
us
talk
about
oteps
I
think
I
want
to
summarize
quickly
is
that
there's
two
bullets
left
one
of
them
is
an
Otep.
That's
been
sitting
for
a
long
time
and
there's
been
recent
interest.
I
I
said
I
would
be
willing
to
help
organize
an
effort
to
finish
this
uptime
question
or
up
monitoring
and
I.
Advise
anyone
who's
interested
in
that
to
come
to
a
Prometheus
working
group
and
to
look
at
that
Otep.
G
The
first
bullet
I
put
up
is
essentially
an
area
that
I
think
needs
an
Otep
and
I'm,
not
sure
who
will
write
it,
but
it
could
be.
One
of
us
and
I
wanted
to
sort
of
gather
interest.
Basically,
this
there
are
a
few
issues
out
there
about.
You
know
when
you're
dropping
spans
or
you're
dropping
metrics
or
logs.
What
can
you
do
to
tell
the
user
and
to
observe
that
situation
and
monitor
it
in
a
production
system
and
I'm?
Not
sure
any
of
us
have
great
answers
to
that.
G: It's because of validation errors, and it's because of this cardinality limit that we're proposing in the open PR that I linked. So, in any case, I think that I'm looking for interest in counting dropped spans and counting dropped metrics, and if you're interested in that, maybe reach out to me in one of the ways that you have to reach out to me. Thank you.