From YouTube: 2022-05-03 meeting
C: One of the use cases is about what I call multiplexing. We have the base signal types, traces, metrics and logs, and these signals can be used to model a specific domain or business vertical, if you will. So, for example, you can use log records to carry plain old logs, or you can use log records to carry profiling information, for example, or client-side events.
C: So the way that you can do that today is by looking at each individual element of telemetry, like a log record or a span, and having a marker on each individual record which tells you what it is, and then making a decision.
C: This is a possibility, but it is more efficient, and semantically also a good fit, to record this information on the scope. When you emit telemetry, it has a purpose, right? Typically you're implementing, say, profiling for a particular library. You want the telemetry that is emitted by your instrumentation, all of it, to be associated with profiling. When you look at the proposal, it shows in the code that you obtain a log emitter.
C: It also is consistent with what we do with other concepts that are part of our data model: the resource, the span, the log record and the metric data point.
C: The scope today is missing the notion of attributes, so adding them makes it more consistent from a modeling perspective, and also consistent in the protocol as well. In the protocol we have the three-level hierarchy of resource, scope and the record, the element of telemetry itself; the mid-level is the scope, but it is missing the attributes today.
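The three-level hierarchy described here, with the proposed attributes added at the scope level, can be sketched roughly as follows. This is a simplified Python illustration, not the actual protobuf definitions; the field and attribute names are approximations chosen for the example:

```python
from dataclasses import dataclass, field


@dataclass
class LogRecord:
    # Innermost level: the individual telemetry element.
    body: str
    attributes: dict = field(default_factory=dict)


@dataclass
class Scope:
    # Mid-level: identifies the instrumentation that produced the data.
    name: str
    version: str = ""
    # The proposed addition: attributes on the scope itself, so a marker
    # such as "this scope carries profiling data" is stored once per scope
    # rather than repeated on every record.
    attributes: dict = field(default_factory=dict)
    records: list = field(default_factory=list)


@dataclass
class Resource:
    # Outermost level: describes the entity producing telemetry.
    attributes: dict
    scopes: list = field(default_factory=list)


# One scope-level marker covers every record beneath it.
scope = Scope(
    name="my.profiling.instrumentation",
    attributes={"otel.scope.purpose": "profiling"},
    records=[LogRecord(body="sample 1"), LogRecord(body="sample 2")],
)
resource = Resource(attributes={"service.name": "checkout"}, scopes=[scope])
```

The point of the middle level is exactly what the speaker describes: the purpose marker lives once on the scope, and every record under it inherits that meaning.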
C
So
the
proposal
is
to
add
the
attributes
to
the
scope
to
do
it
in
a
way
that
modifies
the
existing
apis
in
backwards
compatible
way
to
modify
the
api
where
you
obtain
a
tracer
or
planometer
or
log
emitter,
and
extend
it
to
allow
an
optional
another
optional
parameter
where
you
supply
the
attributes.
If
you
don't
supply
any,
then
it's
as
if
you
did
it
in
the
in
the
old
way
right.
You
called
the
previously
existing
aps.
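A minimal sketch of the kind of backwards-compatible API extension being proposed. The names here are hypothetical stand-ins, not the real OpenTelemetry APIs, which differ per language; the sketch only shows that old call sites keep working while new ones can attach scope attributes:

```python
class Tracer:
    def __init__(self, name, version=None, attributes=None):
        self.name = name
        self.version = version
        # Scope attributes default to empty, so existing callers see no change.
        self.attributes = dict(attributes or {})


class TracerProvider:
    def __init__(self):
        self._tracers = {}

    def get_tracer(self, name, version=None, attributes=None):
        # Existing calls keep working unchanged; the new optional
        # `attributes` parameter attaches attributes to the scope.
        key = (name, version)
        if key not in self._tracers:
            self._tracers[key] = Tracer(name, version, attributes)
        return self._tracers[key]


provider = TracerProvider()
# Old call style: unchanged behavior.
plain = provider.get_tracer("my.library", "1.0.0")
# New call style: scope attributes supplied where the tracer is obtained.
tagged = provider.get_tracer("my.profiler", "1.0.0",
                             attributes={"otel.scope.purpose": "profiling"})
```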
C: So yeah, that's the proposal. I'm looking for thoughts, feedback, opinions and, I guess most importantly, whether the language maintainers think that this actually can be done without breaking APIs.
C
I
think
it
should
be,
but
I
need
confirmation
from
the
sigs
and
also
whether
doing
so
wouldn't
be
prohibitively
expensive
from
the
performance
perspective.
When
you
need
to
do
the
the
exporting
inside
the
the
sdk
in
the
implementation,
yeah
I'll
stop
here.
Anyone
any
thoughts,
any
opinions.
C: Okay, yes. So the way that I tried to approach this was to avoid proposing, I guess, a completely new way of handling anything. This is kind of a small, minor, surgical modification to the way things already work today, as it doesn't require any major rework, neither in the API nor, hopefully, in the implementation.
C: I did look at the Go implementation. Obviously, changes are necessary, but hopefully it's not going to be a hugely major change there.
A: For example, do I need to restart the machine or something? But when we send that to a remote agent or a faraway ingestion point, because that ingestion is running outside the box, it needs to have additional information about the host. This is where we think we should associate the resource. So, depending on the scenario, if the receiving part already has the context, we try not to repeat that.
C: So yeah, if I understand correctly, you are looking for some sort of ability to do routing inside the SDK, right? Sort of, yeah. I think that probably could be valuable. Today, if I understand correctly, the exporting pipeline in the SDKs is just linear: you can put processors there, and it ends with one exporter.
C: There is no way, as far as I know, to do routing. The collector does do some of that; you can do routing in the collector, although also not in a very nice way. And I would say that you probably want to make routing decisions based on any of the telemetry, not just on the scope, although the scope may be, I guess, a very convenient piece of data on which to make the routing decisions.
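The routing idea discussed here, choosing an exporter per group of records based on the scope, could look roughly like this. This is a hypothetical sketch only; as noted in the discussion, today's SDK pipelines are linear and end in a single exporter, so a processor like this is not part of the spec:

```python
from collections import defaultdict


class RoutingProcessor:
    """Dispatch telemetry to different exporters keyed on scope name.

    Hypothetical: `routes` maps a scope-name prefix to an exporter
    (modeled here as a plain list that collects records).
    """

    def __init__(self, routes, default_exporter):
        self.routes = routes
        self.default = default_exporter

    def export(self, batch):
        # Group records by scope first (exporters already group this way),
        # then pick a destination per scope.
        by_scope = defaultdict(list)
        for scope_name, record in batch:
            by_scope[scope_name].append(record)
        for scope_name, records in by_scope.items():
            exporter = next(
                (exp for prefix, exp in self.routes.items()
                 if scope_name.startswith(prefix)),
                self.default,
            )
            exporter.extend(records)


profiling_backend, general_backend = [], []
processor = RoutingProcessor({"my.profiler": profiling_backend}, general_backend)
processor.export([
    ("my.profiler.cpu", "sample-1"),
    ("my.http.client", "span-1"),
])
```

Note the design point raised in the meeting: the processor only groups and redirects; it never mutates individual records.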
C: I think that's the open question, and my initial thinking, I guess, was that the answer is no. You only need to make, I guess, batching decisions, which is not really mutating the individual elements; it's more like grouping, if necessary, by the scope. And the exporters already do that: they do the grouping by the name of the scope, and that's part of the processing. They don't mutate the data; they reorder it, essentially group it. But I think that's a very good question.
C: It may be highly desirable to allow setting the session ID at the processor level, rather than when the scope is initialized and defined or obtained, because of the nature of where the session ID is available and when it changes.
C
If
we
to
justify
allowing
it
at
the
moment,
the
way
that
I
wrote
tap
it
it
is
neutral.
From
this
perspective,
it
doesn't
say
whether
the
scope
is
mutable
or
it's
not
so
it's
kind
of
an
undefined
thing
right.
F: The session ID is the example that I had in mind as well, but I was thinking about that, and about the fact that the metrics pipelines don't have any sort of utility for doing that at the moment. So, you know, we don't have symmetry between spans, logs and metrics: if we decided that scope attributes were mutable, we don't have a way to do that in metrics at the moment.
E: Any other thoughts? Yeah, and by the way, I thought your prototype looks pretty nice. I think it'd be nice to have at least one prototype in another language. As I said before, or yesterday, we don't have prototypes in other languages; one more in some other language could be really helpful. I don't know.
C: I very quickly put that together; I wouldn't call it a prototype, it's not a full prototype. It just demonstrates, illustrates, the point, like what the API change would look like. And I would definitely want the Go maintainers to confirm that this is a viable approach, that it actually is possible to go all the way and implement it.
C: I hope it is. And if we believe that we need more elaborate prototypes, actually complete with all the functionality, it's probably possible to do, but it's going to be, I guess, a non-trivial amount of work, because we need to add it to the proto and regenerate the proto. I guess I will need to fork the proto, because we can't modify it unless the OTEP is actually accepted.
C: It's a PR that has been open for a while, and I don't think we have any approvals. We don't have any rejections there either.
C
So
I
thought
it
would
be
good
to
kind
of
discuss
this,
and
my,
I
guess
personal
opinion
is
that
this
this
is
valuable
graph
fuel
is
notable
enough,
so
that
I
think
we
should
have
it
in
our
semantic
conventions.
C
It
adds
it's
a
small
addition.
It's
just
three
new
attributes
in
the
in
the
conventions.
It
seems
like
it's
the
right
thing
to
define
there,
but
I'm
not
a
graphql
expert,
and
it
would
be
great
if
someone
who
is
actually
reviews
this
from
the
perspective
of
the
domain
and
and
if
you're,
fine
with
it.
Please
approve.
C: I don't know if the authors are here in this call; if you want to, maybe add something, advocate for it. I don't know.
E: I see that there's a GraphQL group in GitHub or something; maybe we can just poke them, like, hey, can you verify this? It's only three items, so you can review it easily. Other than that, yeah, it looks fine to me. I just checked it now, and yeah, it's only three items and they look clean, simple enough.
C: Yeah, there is a related issue that was opened last year, and there were a few people who commented on the issue itself but who didn't comment on the PR, so I pinged these people on the PR. Hopefully they will review and provide some thoughts there, but we still need reviews from spec approvers anyway.
E: Okay, that's all. Okay, sweet, thank you! Okay. I think we can probably review the last two, and then we can go to the metrics time box. Probably that's cleaner.
E: So, if that's okay: required and optional HTTP attributes. Ludmilla, are you here? No? Okay, that's probably... yeah.
E: No worries. Anyway, yeah, go ahead.
G: Yeah, so I have the pull request for required and optional attributes. It's ready for review, and the tooling is updated. Last time we talked about, sorry, the way to verify it doesn't break anyone, and the history of these changes.
G: Armin helped me dig out the history. Initially it was added by Christian Miller a long time ago; there was no conversation about attribute sets there, and we have no way of verifying who depends on this specifically. Based on Armin's words, probably...
G: There is a possibility nothing depends on it, but I tried to get a comment from Christian and he didn't come back to me; I'll try to get more on this. So what he suggested, actually, is to split this into two different things.
C: The link goes to the PR, which is specifically about HTTP semantic conventions, but I believe your proposal is also about having a more, I guess, elaborate definition of what is an optional and a required attribute, which you submitted as a separate PR to the spec repository.
G: Right, but part of it is now in the specification. I added a new file with the attribute requirement levels.
C: I think, yeah, I wouldn't mind reviewing it in the context of the HTTP attributes, to demonstrate why you want to have these different levels. That's not bad, that's good. But I think we would still want to accept and merge it as a separate PR, so that it is visible that this is a different proposal, and then, on top of that, do the HTTP attributes. It's okay if you have it in one PR now, but it would be nice to split it in two.
D: Rather than having to wait a month, you know, for the next spec release. Basically, if it's possible, when Ludmilla makes this split and produces the new PRs, to get those in before we release the next version of the spec, it would help us get started on updating the HTTP instrumentation.
D
You
want
to
have
before
so
so
this
pr
we've
been
talking
about
around
required
an
optional
http
attributes,
the
the
http
changes
to
the
http
part
of
this
or
spec
around.
What
attributes
you
should
be.
You
should
be
putting
on
instrumentation
we'd
like
to
get
that
in
there,
so
that
we
can
start
updating
instrumentation.
D
We
don't
want
to
do
two
passes
through
instrumentation,
where
we
update
them
and
change
them
kind
of
like
twice
in
a
row,
and
we
also
don't
want
to
update
instrumentation,
where
we
can't
put
a
schema
version
on
it,
because
you
know
it
would
be
referencing
stuff,
that's
not
actually
in
a
release
version
of
the
spec.
D
But
when
we
get
these
changes
in,
we
can
start
updating,
instrumentation
and
include
a
schema
version
number
for
that
instrumentation
and
even
though
it
wouldn't
be
declared
stable.
Yet
that
would
at
least
be
an
improvement
in
our
instrumentation,
because
back-ends
could
at
least
start
understanding
what
version?
What
schema
version
that
is
present
in
the
data
coming
in.
B: I do want to call out real quick that I think this particular PR is a huge quality improvement for HTTP instrumentation, just the way it's phrased. So I don't know if it makes sense to get this in quicker; I think it does, just from the standpoint that this is one of the most significant pieces of instrumentation we have in OTel. It's the most cohesive bit we have so far, and the quality improvement is really good.
B
So
I'm
I'm
a
fan
of
trying
to
figure
out
how
to
get
this
through
and
and
optimizing
its
particular
path
through
the
spec
workflow.
C: So the question I have is: this proposal that introduces the requirement levels now becomes a prerequisite for merging this PR. Is that necessary? Does it need to happen at the same time? That's what I'm asking, because it makes it more complicated, more difficult, to get the PR accepted. Is it possible to not modify the levels that we have right now, but still merge the HTTP changes?
B: I read through the PRs, and I have looked at the instrumentation and how this works. I think those levels are necessary for the instrumentation to get implemented; that's how it has to be specified. I think the fact that we didn't have them before was kind of an oversight. The existing levels aren't quite sufficient, and so I do think it's necessary to have those levels.
C: Okay, okay, fair, I guess. Then I would want the spec approvers to review the requirement levels PR first, and then, once that is merged, we can actually move forward. Because if that is not merged, if that is rejected, then there's no way we can merge the semantic conventions for HTTP, right? So that's a prerequisite.
D: If we could get these reviewed this week, when Ludmilla produces them, or at any rate... I just want to FYI the TC: if it's possible to make sure these are in before we cut the next release of the spec, it would help us.
D: Yeah, so I have a draft document, just a Word document right now, about an improvement to project management around the spec. We have a lot of good process written down about how to triage and manage the spec backlog on the level of PRs and issues.
D: So that's part of the process that I think we should just pick up again, and we've got Reggie, an engineering manager from Lightstep, who's willing to help us do that triage; that's one thing I'd like to see us just start doing. But something that actually needs some additional process around it is how we manage the backlog of spec work at, kind of, the project level.
D: We tend to spin up working groups and, in general, attack the spec work as larger projects. Some spec work can come in as just individual PRs or a solo OTEP that somebody wrote up, but for the most part it requires people to form a working group, get subject matter experts, and otherwise coordinate, create prototypes and all of that stuff.
D
But
we
don't
really
have
a
process
for
describing
how
to
do
that
work
and
we
don't
have
a
process
for
kind
of
taking
that
work
and
turning
it
into
a
prioritized
backlog
of
projects
where
we
can
get
a
sense
of
what
the
projects
are
that
we're
currently
working
on.
You
know
what
order
would
we
like
to
tackle
projects
that
we
haven't
started
yet,
and
I
think
it
would
be
very
helpful
to
to
get
organized
on
that
level.
D: I think it will also help the TC understand where the community is at and what their interests are, and make sure that we've got TC involvement in all these projects without overwhelming the TC, by potentially spinning up too many projects at the same time and causing the TC to have to context switch too much, which is, I think, a thing we have done periodically. So, one: I would love feedback on this proposal.
D
If
people
haven't
looked
at
it
yet
there's
a
link
in
the
agenda,
I
was
wondering
because
it
seems
like
it's
not
particularly
controversial,
just
what
the
next
step
would
be.
Should
I
turn
this
into
an
otep?
It's
not
really
it's
about.
Like
our
spec
process,
it's
not
really
like
a
proposal
for
the
specification
itself.
C: Ted, I did have a quick look. I think this is great; we do need more peace and order and good government, right? So yeah, I like this. I guess I had a small concern where it talks about approvals of the projects.
C
It's
not
clear
to
me
who
who
would
be
the
approver?
Are
we
looking
for
some
sort
of
centralized
authority?
Who
does
the
approvals?
I
I
I
would
be
a
bit
careful
there
to
avoid
making
it
difficult
for
groups
to
self-organize
and
make
progress
on
their
own.
C
Maybe
it's
more
of
a
matter
of
how
you
frame
it,
how
you
phrase
it
so
I
just
just
want
to
be
careful
with
this
right.
I
could.
I
would
want
us
to
to
maintain
the
ability
for
teams
to
be
distributed
and
and
work
independently
from
one
another,
so
it
I
look
more
at
this
more
like
as
more
like
an
enabler.
How
do
we
enable
people
to
do
something
rather
than
how
do
we
create
obstacles
and
and
make
them
ask
for
permission
to
do
something?
Yes,.
D: Yeah, no, that's great feedback. I would say that it would be the TC deciding this; how the TC does it is maybe an assignment left to the reader, or we could put it in here, but I wouldn't want to over-specify it. The main thing I'm looking for here... I agree with you that it's an open world and it's open source.
D: So if groups of people want to self-organize to work on spec proposals, that's awesome. But something we've definitely seen is that it's very helpful for groups to have TC involvement.
D: Basically from the beginning: when we go through a process of kind of officially spinning up a working group, like the instrumentation groups, we do something at the beginning to give people the impression that, you know, they're working as part of the specification SIG and their work is going to get reviewed and have attention put on it.
D: The TC and the core community explain to everyone where their attention is and what their expectations are, rather than kind of blessing groups, having them spin up, do a whole bunch of work, and then basically surprise the TC out of the blue with a bunch of proposals that they now have to review. And also shortening the spec proposal timeline: right now it feels a bit like a two-step process, where those groups do a lot of work, build up a lot of state, make a proposal, and get feedback later than they needed to get it, because they were working a little too independently of what direction the TC wants them to go in.
D: The TC usually has a lot of big thoughts about what these groups do, because it doesn't seem like we've hit a point yet, or ever will, where we'll be doing big changes to the spec that don't involve thinking hard about some decisions we've already made in the spec, or otherwise integrating with them.
D: Sometimes it's a clean shot, but we've seen, even with these instrumentation groups, things like links and resources and other questions show up, and we can't escape having the TC involved. So I guess that's what I see this as: just a way for the TC to work a little more cleanly with the rest of the community, around where they plan to be putting their attention and in what order they would like to attack the backlog of projects.
C: There is one area where I don't have a lot of clarity yet; it's unclear to me. If the topic, the project, is something that I, as a TC member, or another TC member, want to happen, something that we clearly see is valuable and want to happen, that's easy. Another easy case, probably, is when we see that it's absolutely not a good fit for OpenTelemetry: we don't want it, and we have a good set of arguments against it. That's also easy.
C
The
one
that
is
difficult
is
where
it's
unclear
right.
The
tc
does
not
have
a
particular
interest
right,
no
does
not
have
an
expertise
in
it,
but
it's
some
something
in
an
area
where
no
one,
not
none
of
the
tc
members
or
even
the
spec
approvers,
as
much
of
an
experience
or
an
interest.
It's
something
that
okay,
maybe,
but
I
don't
know-
and
I
don't
have
good
arguments
about
why
not
so
I
would
like
to
understand
what
do
we
do
in
this
case?
It
seems
like
it's
quite
common.
C
What
do
we
do?
I
I
mean:
do
we
force
the
tc
members
to
okay,
whether
you
want
it,
whether
you
don't
want
it,
whether
you
like
it
or
no,
you
have
to
go
and-
and
do
that?
Maybe
that's
the
answer
I
don't
know,
but
I
would
want
to
see
whether
there
is
a
possibility
to
say
hey.
You
know
what,
if
you're,
not
against
it
and
you're,
not
for
it,
then
you
shouldn't
be
creating
obstacles
but
find
a
way.
C
Maybe
a
generic
process
which
enables
these
folks
to
do
that
job
themselves.
Right,
I'm
not
sure
how
it
can
be
done,
but
I
would
prefer
to
see
that
happening
rather
than
tca
members
being
forced
to
go
and
make
a
decision
on
something.
They
are
probably
not
knowledgeable
enough
to
make
the
decision
to
make
the
call.
D: Yeah, so I totally agree with that sentiment, and it's definitely true, and, I would guess, probably more true going forward as the specification work broadens. The core competency of this group is sort of back-end services and distributed tracing, and so when we get into stuff like RUM and client telemetry, there's a natural reluctance, certainly, to want to write those specifications, but also a reluctance to want to be involved too much, because, hey, it's not anyone's area of expertise.
D: It's more that they need to be there to shepherd the group from the OpenTelemetry side, helping them navigate the process of getting their proposals cleaned up and into a shape where they will have a good chance of success. Having been part of a number of these working groups, I've found there are two sides to it.
D: There's the subject matter expertise about that particular subject, and then there's the sort of OpenTelemetry expertise about how that would fit into all the stuff we already have; the TC members would be there for the second part. But I do think subject matter expertise is a requirement, and part of this proposal is to say that every project needs to have kind of a tracking issue.
D
That
explains
where
that
expertise
is
going
to
come
from
and
also
to
have
a
column
which
is
scheduled
projects
a
thing
I
have
found
trying
to
help
organize
these
is
you
can
often
get
subject
matter
expertise,
but
there's
usually
a
time
lag.
D
For
example,
if
people
are
going
to
work
on
it
somewhat
full-time
or
you
know
as
like
in
some
like
official
capacity
as
part
of
their
job,
they
often
need
to
get
approval
somewhere.
D
So
I
think
that's,
the
other
part
is
just
kind
of
ensuring
these
projects
when
they
spin
up
to
whoever's,
organizing
I'm
saying
like
hey,
you
need
to
you
need
to
get
the
requisite
subject
matter
expertise
available,
and
if
that
requires
some
planning,
you
know
let
us
know
and
then
we'll
we'll
work
with
that
group
to
just
get
it
scheduled.
E: Thank you so much for that. If that's okay, given the time, can we continue this offline? Because, yeah, we have 18 minutes until the hour, and we still have the metrics topics to cover. So take it from there; thank you for the feedback, and let's jump into metrics now.
H: Hi everyone. I just wanted to put up two items that have been sitting for a bit of time and could use some attention, if you're interested. The first is a PR which adds some specification on the exponential histogram to the SDK specification. It has a lot of approvals, and it does have perhaps a few kind of minor details at the end of the review thread; if you're inclined, you could go visit that and make some comments.
H
It's
it's
a
question
of
how
to
handle
man
in
infinite
values
which
are
different
for
histogram,
because
there's
nowhere
in
the
buckets
that
those
values
correctly
map.
So,
unlike
where
integer
and
floating
point
values
well,
integers,
there's
no
such
thing
as
a
nano
and
then
for
floating
point.
The
hardware
defines
what
to
do
when
you
add
it
subtract
it
compare
it
and
so
on.
H
So
we
can
just
let
the
hardware
do
its
thing
for
gauges
and
counters,
but
for
histograms
we
have
to
do
something,
and
the
proposal
is
to
is
to
not
change
what
spec
says,
but
make
sure
that
the
histogram
behaviors
all
are
known.
So
if
you're
gonna
do
something
unspecified,
do
it
everywhere
and
that's
that
would
anyone
like
to
raise
questions
or
topics
about
the
exponential
histogram.
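As an illustration of why NaN and ±Inf need an explicit decision for histograms: neither value maps to any finite bucket. The policy below is just one possible choice for the sake of the example, not what the spec mandates; the point under discussion is only that whatever is done should be done consistently everywhere:

```python
import math


def record(histogram, value):
    """Record one measurement, with an explicit policy for non-finite values.

    Policy shown here (an assumption for illustration): drop NaN entirely,
    and count infinities in `count` without assigning them a bucket or
    adding them to the sum.
    """
    if math.isnan(value):
        return  # NaN has no meaningful bucket; one option is to ignore it.
    histogram["count"] += 1
    if math.isinf(value):
        return  # +/-Inf also maps to no finite bucket.
    histogram["sum"] += value
    # Toy linear buckets [0,10), [10,20), [20,+...) just for the example
    # (non-negative values only); real SDKs use exponential boundaries.
    index = min(int(value // 10), len(histogram["buckets"]) - 1)
    histogram["buckets"][index] += 1


h = {"count": 0, "sum": 0.0, "buckets": [0, 0, 0]}
for v in [5.0, 15.0, float("nan"), float("inf")]:
    record(h, v)
```

With an explicit policy like this, `count`, `sum`, and the bucket counts stay mutually consistent even when non-finite measurements arrive.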
H: That's pretty minor, so I'd like to let anyone who's interested go after that PR. The second PR that I have up for us to consider is a little bit more of a conceptual question. We are close to having a stable SDK specification for metrics, which means that we're starting to want to have semantic conventions with metrics named, and a particular area, like Kafka, is under scrutiny.
H
Right
now,
you'll
have
specified
names
for
metrics
that
are
going
to
be
standardized,
so
it
might
be
request,
request,
latencies
or
something
like
that.
These
are
things
that
traditionally
do
use
histograms
and
the
traditional
integrations
and
instrumentation
packages
that
these
pieces
of
software
are
using
have
been
written
for
the
past,
so
they
may
have
been
written
for
a
statsd
world
where
it
was
very
common
to
take
histogram
aggregate
and
aggregate
histogram
measurements
aggregate
them
into
pre-calculated
quantiles.
H
H
H: However, it's only specified in a way that works for Prometheus data. If you come from a statsd world, or from some of the instrumentation that we're seeing for JMX (Kafka's is written in a JMX world), it's really not quite the same as Prometheus data. And because the summary data type that we have refers very explicitly to Prometheus, it can only be used when those semantics match exactly. If you're using delta temporality, especially for histograms, and you want to pre-calculate quantiles...
H
We
would
have
no
way
to
send
that
data
using
otlp
other
than
by
having
them
literally
be
pre-calculated
metric
streams.
So
here's
a
gauge
with
a
quantile
value.
Here's
this
here's
a
counter
with
a
sum
value:
here's
a
gauge
with
an
average
or
a
mean
or
a
median
or
a
p50
or
a
p90
or
any
of
those
pre-calculated
numbers
we'd
like
to
have
a
way
of
specifying
them,
and
the
motivation
here
is
that
when
we
have
kafka,
instrumentation
that's
already
out
there
and
we
want
to
begin
bringing
that
instrumentation
into
an
open,
telemetry
world.
H
We're
going
to
want
to
try
and
change
that
instrumentation
to
begin
using
histogram
instruments,
because
we
believe
that
you'll
get
higher
quality
instrumentation
from
those
histogram
instruments
in
the
future,
but
for
today
we're
going
to
try
and
integrate
those
packages
and
begin
using
that
instrumentation
that's
been
written
for
a
legacy
and
it's
not
able
to
use
histograms
right
away,
but
it
does
have
these
pre-calculated
quantiles.
We
can
make
these
pre-calculated
quantiles
match
the
semantic
conventions
that
we're
writing
for
the
future
by
saying
how
to
name
a
pre-calculated
metric
series.
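The naming idea, deriving a family of pre-calculated streams from one conventional metric name in the style of what OpenMetrics does for summaries, might look like this. The suffixes, attribute key, and the example metric name are hypothetical, chosen only to illustrate the shape of such a convention:

```python
def precalculated_series(base_name, quantiles):
    """Expand one conventional histogram name into pre-calculated streams.

    Hypothetical convention: a counter for the count, a counter for the
    sum, and one gauge per quantile, distinguished by a `quantile`
    attribute, so all streams stay tied to the base metric name.
    """
    series = [
        (f"{base_name}.count", "counter", {}),
        (f"{base_name}.sum", "counter", {}),
    ]
    for q in quantiles:
        series.append((base_name, "gauge", {"quantile": str(q)}))
    return series


# e.g. legacy Kafka instrumentation exposing p50/p90 request latencies.
streams = precalculated_series("kafka.request.duration", [0.5, 0.9])
```

Because every stream derives from the same base name, a back end could later recognize the family as semantically describing one distribution, which is the migration path being proposed.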
H: That's what this PR is all about. I've said about everything I can. I don't think it's urgent, but I want to make sure that it's sort of one of the topics we're discussing as we begin writing out semantic conventions for all the metrics, for these packages of metric instrumentation that are going to have standardized metric names.
C: To clarify: are you saying that we would initially emit the pre-calculated metrics, like a summary, instead of the histogram, and give it a different name, so that it doesn't clash in the future with the new metric, the histogram, that we want to introduce later?
H: Sort of, almost, not quite, though. I actually want to say something stronger: if you have a semantic convention that was written for a histogram, you can also use a summary, because you have a world of existing instrumentation that produces that summary, and semantically they describe the same type of data. The observations that go into those instruments are the same type of observations, so you should be able to semantically swap a histogram for a summary.
H
In
a
summary,
of
course,
we
know
that
breaks
that
breaks
the
user
experience
for
all
the
products
out
there,
but
by
specifying
the
semantic
connection,
we
can
at
least
begin
to
migrate,
so
the
products
can
begin
to
support.
Oh,
you
might
have
a
summary
where
we
think
you
want
a
histogram
in
the
future.
Maybe
we
can
start
building,
query,
workflows
and
dashboard
building
processes
that,
like
are
able
to
accommodate
the
like
the
sameness
between
histogram
and
summary.
H
Now,
if
you
believe
all
that
the
next
step
is
to
have
just
a
convention
for
saying
well,
today's
instrumentation
is
producing
five
quantiles
and
they're
delta
temporality
and
there's
they've
got
some
in
account.
We
can't
turn
those
into
prometheus
summary
data.
It's
not
possible
because
of
the
way,
those
those
summary
data
points
are
specified,
but
we
can
begin
recording
sums
and
counts
and
quantiles
in
a
standard
naming
way
just
like
openmetrics
does
so
that
you
can.
You
can
see
that
you're
getting
data
that
is
semantically
describing
a
distribution.
C: So you're saying: if you implement this particular metric, which is defined as a histogram, but you can't do the histogram for some reason, you can instead do the summary as a replacement for that histogram, which is kind of close to what you would get with a histogram, in some limited way, but it's better than nothing. But I guess the question is: can the instrumentation make that choice?
C: The semantic convention says that you can do either the histogram or the summary, and here's how you do the summary from the histogram information. Is it something that the instrumentations are free to choose, and if they do, are they allowed to change this decision in the future? Is it a breaking change to switch to the other?
H: I think these are good questions to ask; these are questions that probably belong in the discussion of the PR. In addition to those questions, it seems that we could decide to say that this is either encouraged or not. The idea that you can change between a histogram and a summary: many people are going to find that problematic, because it might break existing workflows. But what we have today is a world of instrumentation that's been written for summaries.
H
We
have
lots
of
existing
metrics
that
are
flowing
through
existing
metric
systems
that
are
in
that
form,
and
I
don't
see
a
I
don't
see
exactly
how
other
than
what's
in
this
pr
to
migrate
the
world
from
using
precalculated
metrics,
which
are
a
lot
like
summaries,
whether
they're
recorded
as
individual
series
or
as
summaries,
and
to
replace
those
with
histograms.
The
idea
of
this
is
that
we
can
specify
that
the
semantic
convention
for
a
request,
timing
metric,
is
histogram.
You
may
see
a
summary.
B: Can I ask a question here, Josh? So, are you planning to... because I think one thing that kind of makes sense: if a user is already using summaries, and they move to OpenTelemetry and they get a histogram, that could break them, right? Especially if their back end doesn't support that histogram. So is part of this going to be a way to convert histograms to summaries?
B
I
can
see
that
example,
but
I
I
think
our
goal,
like
one
of
our
proposals,
should
be
to
get
people
onto
histograms,
because
we
know
that
they
behave
better
in
the
presence
of
alerts
right
that
so
so
I
I
don't
want
to
lose
that,
but
I
I
kind
of
understand
this
whole.
Let's
make
the
migration
path
easier,
which
one
of
those
two
does
this
address.
H: I think I want us to address both; I'm not sure if I've addressed either in my PR, but you're asking the right questions, Josh. So, just taking Kafka, for example: there's a draft PR, it's linked from mine, and it's got a list of metrics that were basically just copied out of existing instrumentation. It's trying to say these are existing semantic conventions for Kafka, but what you see are these pre-calculated metrics in there.
H
So, yes, I do think that eventually there will be a Collector pipeline processor that can take histograms, pre-calculate these things, and output them as pre-calculated streams, because there are a lot of users out there, especially Kubernetes users and probably Kafka users, with existing dashboards full of p90s and p50s and averages and so on. Those users will be able to migrate by setting up an OpenTelemetry Collector processor that takes in exponential histograms and outputs pre-calculated quantiles.
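The processor described here doesn't exist yet; as a rough illustration of the idea (a sketch only, not the actual Collector code, which would be written in Go), a quantile can be estimated from an explicit-bucket histogram by linear interpolation inside the bucket that contains the target rank:

```python
# Hypothetical sketch of histogram-to-quantile conversion, not the
# actual Collector processor: estimate a quantile by interpolating
# within the bucket that contains the target rank.

def estimate_quantile(bounds, counts, q):
    """bounds: inclusive upper bounds of the buckets; counts: per-bucket
    counts, with len(counts) == len(bounds) + 1 (last is overflow).
    Returns an approximate q-quantile, or None for an empty histogram."""
    total = sum(counts)
    if total == 0:
        return None
    rank = q * total
    seen = 0
    for i, c in enumerate(counts):
        if seen + c >= rank:
            lo = bounds[i - 1] if i > 0 else 0.0
            # clamp the overflow bucket to the last known bound
            hi = bounds[i] if i < len(bounds) else bounds[-1]
            frac = (rank - seen) / c if c else 0.0
            return lo + (hi - lo) * frac
        seen += c
    return bounds[-1]

# e.g. request latencies bucketed at (0,10], (10,50], (50,100], (100,inf)
bounds = [10.0, 50.0, 100.0]
counts = [40, 40, 15, 5]
p50 = estimate_quantile(bounds, counts, 0.5)   # 20.0
p90 = estimate_quantile(bounds, counts, 0.9)   # ~83.3
```

The estimate is only as good as the bucket resolution, which is one reason exponential histograms are attractive as the input format.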
H
I'd particularly like to see it for the statsd receiver. There are known use cases from customers I've seen recently where they've got large statsd installations, they're using histograms, and they've got, you know, 25 different things they're processing from a histogram that they use to monitor their system. They would like to see OpenTelemetry be useful for them, and so the idea is that you could then build that from this PR.
F
Hey Josh, just to make sure I understand what you're proposing correctly. This is meant to address the lack of APIs that let you pass through pre-calculated, summary-style metrics, and as a stand-in we're going to advocate that folks instrumenting systems that have these pre-calculated summaries use instruments like asynchronous gauge, asynchronous counter, and asynchronous up-down counter to pass through the details of these summaries in this expanded format, and then they would get recombined into a summary by our OTLP exporters?
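As a hedged sketch of the pass-through being described here: the `Summary` shape and the `.count`/`.sum`/`.pNN` series names below are invented for this example, not the OTLP schema or a defined semantic convention.

```python
# Illustrative decomposition of a pre-calculated summary into
# individual streams: count and sum as monotonic counters, each
# quantile as a gauge. The naming scheme is an assumption made for
# this sketch only.

from dataclasses import dataclass, field

@dataclass
class Summary:
    count: int
    total: float                                   # the "sum" component
    quantiles: dict = field(default_factory=dict)  # e.g. {0.5: 20.0}

def decompose(name, s):
    """Flatten a summary into (series_name, instrument_kind, value)."""
    streams = [
        (f"{name}.count", "counter", s.count),
        (f"{name}.sum", "counter", s.total),
    ]
    for q, v in sorted(s.quantiles.items()):
        streams.append((f"{name}.p{round(q * 100)}", "gauge", v))
    return streams

streams = decompose(
    "kafka.request.duration",
    Summary(count=100, total=2500.0, quantiles={0.5: 20.0, 0.9: 83.0}),
)
```

In a real SDK each of these streams would be reported via the corresponding asynchronous instrument callback rather than returned as tuples.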
H
You're taking it one step further than I maybe wanted to take it. I agree with what you're saying; I would be happy to say, well, this legacy instrumentation has these pre-calculated numbers. Yes, what you said is true: they're going to have to use asynchronous gauges, asynchronous counters, and asynchronous up-down counters to convey those quantities as individual metric time series. But those may start out in delta temporality.
H
You know, we know how to convert them to cumulative temporality, but actually the only thing that can produce a Prometheus summary is Prometheus code. There's no specification that says exactly what time window you're going to use and exactly what logic you're going to use. The way the summary point is written, it has no temporality; it has Prometheus logic in it. So, right, there's...
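The temporality conversion mentioned here can be sketched as a running sum over one counter series. This is a minimal illustration only; real processors also track start timestamps and detect counter resets, which is omitted.

```python
# Minimal sketch of delta-to-cumulative conversion for a single
# counter series: keep a running total and emit it at every point.
# Start-timestamp and reset handling are deliberately left out.

def delta_to_cumulative(deltas):
    totals, running = [], 0
    for d in deltas:
        running += d
        totals.append(running)
    return totals

delta_to_cumulative([5, 3, 0, 7])   # → [5, 8, 8, 15]
```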
F
H
I think you could technically do that, right. However, I definitely contemplated what you're suggesting. It's a reasonable outcome to say that the instrumentation will create, let's call it a family (that's what OpenMetrics calls it), of four time series: it's got a count, it's got a sum, it's got p50 and p90. So if you see four time series, you could say: I see the pattern here, I know the semantic convention pattern says these are all part of a summary/histogram semantic convention, I could try to convert them into a summary.
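The pattern matching being described could look something like the following sketch. The `<base>.count` / `<base>.sum` / `<base>.pNN` naming is assumed for illustration only; it is not a defined semantic convention.

```python
# Hypothetical recombination of a family of flat time series back into
# a summary-like record, keyed on a count/sum naming pattern.

def recombine(series):
    """series: full series name -> latest value. Groups series by base
    name and rebuilds a summary wherever count and sum are present."""
    families = {}
    for name, value in series.items():
        base, _, suffix = name.rpartition(".")
        families.setdefault(base, {})[suffix] = value
    summaries = {}
    for base, parts in families.items():
        if "count" in parts and "sum" in parts:
            quantiles = {
                int(k[1:]) / 100: v
                for k, v in parts.items()
                if k.startswith("p") and k[1:].isdigit()
            }
            summaries[base] = {
                "count": parts["count"],
                "sum": parts["sum"],
                "quantiles": quantiles,
            }
    return summaries

rebuilt = recombine({
    "rpc.latency.count": 100,
    "rpc.latency.sum": 2500.0,
    "rpc.latency.p50": 20.0,
    "rpc.latency.p90": 83.0,
})
```

A real implementation would key on the semantic convention rather than on string suffixes, but the grouping step is the same.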
F
That's the best proposal, yeah. I just wanted to make sure I understood, so yeah, that part was ambiguous: whether you actually envisioned the exporters recombining them into summaries or not. So thanks for clarifying.
H
It's just that the format is what it is. Now, we can probably bend or redefine it to make it work here; that probably involves giving it a temporality, for one thing. That would be another thing to change here. But I do feel like the advantage to doing this in an OTLP exporter is minimal. Maybe it's useful in Java, because this is something that you expect to happen a lot, but it would require more work technically on the protocol to do that.
H
...than doing the transformation in the OTLP exporter. And as far as a vendor goes, you know, I was considering just asking the teams that manage this data on our backend to consider building query workflows that recognize the two patterns: like, oh, you're building a query over something that's a distribution; either it came in as a histogram, which means good, we have good data, or it came in as pre-calculated, which means some of the query workflows don't work the same; they work a little bit differently.
E
Okay, thanks. We're out of time, sadly. Thanks so much; let's continue offline. Stay safe, talk to you soon. Thank you. Bye, everyone.