From YouTube: 2022-05-24 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
A: So I was working on a PR which I always wanted to work on, after talking with jmacd. One thing I was trying to summarize is that for people using metrics, when they pick the instrument type in a certain language runtime, they don't just pick counter or histogram; in addition, they specify the underlying type, and some languages have fine granularity of integer types and double-precision types.

Usually it's not a problem, but for languages like JavaScript we noticed there's a tricky thing. Take this example: in JavaScript there's a concept called Number.MAX_SAFE_INTEGER. Imagine you have a counter, and every time you receive a packet or something you add one to the counter, and you keep adding. The problem in JavaScript is that eventually the counter silently saturates when you reach this very large number. The answer could be either to tell the user to be aware and work around the problem, or we could have an instrument that allows people to report these small additions, and when we export the data or do the handling in the SDK, we can wrap the value around by starting again from a negative number. That way the backend would be able to say: oh, there's a counter overflow.
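The saturation and the proposed wrap-around can be sketched in a few lines. This is a minimal illustration, assuming the simple wrap-to-negative scheme described above; `addWithWrap` is a hypothetical helper, not an actual SDK function:

```typescript
// Beyond Number.MAX_SAFE_INTEGER (2^53 - 1), adding 1 to an IEEE-754
// double eventually no longer changes the value, so a counter
// silently saturates.
const max = Number.MAX_SAFE_INTEGER; // 9007199254740991
const saturates = max + 1 === max + 2; // increments are being lost

// Hypothetical wrap-around scheme from the discussion: once the counter
// would exceed the safe range, continue from the most negative safe
// value so a backend can recognize the overflow instead of seeing a
// frozen counter.
function addWithWrap(current: number, delta: number): number {
  const next = current + delta;
  if (next > Number.MAX_SAFE_INTEGER) {
    // Wrap: restart from the negative end, preserving the remainder.
    return Number.MIN_SAFE_INTEGER + (next - Number.MAX_SAFE_INTEGER - 1);
  }
  return next;
}
```

A backend receiving a sudden jump to a large negative value from this scheme can then infer "counter overflow" rather than "broken counter".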
A: I'm not sure if this is something that other SIGs have considered, because I've been primarily focusing on C++ and C#, and we don't have that problem because we have fixed-size integers.
D: I think it's great to have this documented. Even in the languages where you have the full 64 bits of integer available, technically you can still overflow, right? JavaScript stands out because it can only do 53 bits, but the issue is generally applicable: if you have large numbers, you can easily overflow. So that's not a problem unique to JavaScript. It's unique in that it has fewer bits available, but the overflow itself is not unique to JavaScript.
A: Right, yeah. So float is a little bit of a different story for other languages, because if you want precision and your language allows you to use integers, you should always use integers, knowing that integers have the precision but not the wide dynamic range. And overflow is actually a very welcome feature, because you can use integers to detect whether there's integer overflow, or underflow for UpDownCounters.
F: Yeah, the typical way to solve this in JavaScript would be to use BigInt, which, depending on what platform you're on, may or may not be available; but there are also polyfill libraries and things like that. Usually I've seen libraries take it as an optional dependency: if it exists they use it, and if it doesn't exist they don't. A good example of this is the JavaScript implementation of gRPC.
F: If you don't have BigInt installed and active, then you just have to know that you're going to have some overflow. But if you do, then gRPC will use BigInts under the hood and you can use arbitrarily large integers.
F: I don't know what the cutoff point is where it starts using BigInt instead of native numbers, but it's all sort of hidden. At least for JavaScript I would suggest something like that. Okay, but that's a very language-specific solution to a fairly general problem.
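The optional-dependency pattern described for gRPC might be sketched roughly like this; the feature detection is standard JavaScript, but the cutoff and the `increment` helper are illustrative assumptions, not gRPC's actual logic:

```typescript
// Feature-detect BigInt; on older runtimes a polyfill may stand in,
// and if neither exists the code falls back to native numbers and
// accepts the possibility of overflow.
const hasBigInt = typeof BigInt !== "undefined";

// Illustrative cutoff: switch to BigInt once a native number can no
// longer be incremented exactly.
function increment(value: number | bigint, delta: number): number | bigint {
  if (typeof value === "bigint") {
    return value + BigInt(delta);
  }
  if (hasBigInt && value >= Number.MAX_SAFE_INTEGER) {
    return BigInt(value) + BigInt(delta); // switch representations
  }
  return value + delta; // may overflow silently without BigInt
}
```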
D: I think what you have in the float section works fine with me. I was a bit more concerned with the previous section, which talks about resets. I don't know if you need the description of resets here for this PR at all, but I found it a bit confusing, because it hints at the ability to detect resets by seeing smaller numbers than the previous observation. I find that a bit problematic, because it doesn't tell you exactly that.
A: If you use a signed integer and the number dropped, you know there's a reset, but you won't be able to know the accurate number. If you see, say, the numbers one, three, five, and all of a sudden it comes back to two, you don't know whether the actual number is seven, because you might be losing some data in the middle; but at least it gives a clear indication that you had a reset, so you can adjust your expectations. To give an example: if you got an alert saying something is wrong, you can at least correlate that with "oh, there's a potential reset", which might give you an additional hint.
A: You got my point, so it's up to you whether to keep it. And even for systems that report deltas, it's possible that some data points get lost, so in the end, if you try to aggregate all of them, it's not rocket science. You should never use it in a nuclear power plant, but it gives you a reasonable idea.
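The reset heuristic under discussion (a cumulative value smaller than the previous observation signals a reset, even though the true total cannot be recovered) could be sketched like this; the `observe` helper and its return shape are hypothetical:

```typescript
// If a cumulative counter reports a value lower than the previous
// observation, the process has likely restarted. We can flag the reset,
// but as discussed, any increments lost between the last report and the
// restart are unrecoverable.
function observe(previous: number | undefined, current: number) {
  const reset = previous !== undefined && current < previous;
  // Best-effort delta: after a reset, the post-restart value is all we know.
  const delta = reset || previous === undefined ? current : current - previous;
  return { reset, delta };
}
```

In the example from the discussion (1, 3, 5, then 2), the drop to 2 yields `reset: true`, but whether the true total is seven is unknowable.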
E: I guess I would like to call something out which has been kind of a problem, so to speak; maybe I can get your feedback. It's not very urgent, but we have been seeing a few PRs regarding semantic conventions, mostly around collector receivers, and we really need attention on those.
E: I don't know how to move forward. There was some initial discussion and people like Talarian provided feedback, but we need more feedback, more reviews, and I don't know who I should be contacting. I even sometimes tried to bring in the original author of the receiver, and, you know, I guess everybody's too busy. I don't know how to proceed on that point.
D: Yeah, I tried to ping people who were the authors or listed as code owners for the collector components, and some people did respond, but you're right; I think we need more eyes here. The difficult area is, I guess, that you need someone who knows the particular piece of technology more or less well, so that they can tell whether the semantic conventions make sense for that thing. So it may be a bit difficult, but on the other hand, we probably don't need 20 approvals for this sort of semantic convention to be merged.
E
Okay,
yeah,
that's
a
good
call
yeah.
I
want
to
just
clarify
that
in
case
somebody
says
you
know
like
hey:
there
was
only
a
pair
of
reviews.
Well,
the
other
thing
is
that
I
honestly
want
to
say
that
I
know
that
most
people
find
this
kind
of
work.
Semi-Conventions
were
boring,
but
somebody
has
to
do
that
so
anyway,
we
will
be
sending
more
prs.
H: So can I ask a question about this idea of defining semantic conventions for a particular technology? I apologize, I haven't had the time to review these, but it just seems a little odd to me. If I were the author of a particular technology, I would have the domain knowledge for understanding the best way to measure it, or hopefully I would; so why not let them define it? I'm confused: if we have, say, Postgres semantic conventions, why aren't they defined by the Postgres team? And how's...
D: We already have receivers in the collector which can pull this data from a running Postgres instance or a MySQL instance; we pull this data and turn it into OpenTelemetry metrics, and that already exists in the collector. This essentially documents, de facto, what we have already; it is not something that we are newly introducing. Does that need to be in a semantic convention? So that's...
G
Go
ahead.
Okay,
so
I
have
an
opinion
on
that
and
this
opinion
is
based
on
several
reviews
on
database
receivers
for
open
telemetry
collector,
so
I
found
them
that
they
are
inconsistent
like
they
use
different
names
for
similar
items,
and
I
strongly
believe
that
we
should
work
through
the
specification
and
the
semantic
conventions
for
those,
because
I
think
that
for
many
users
this
would
be
confusing.
G
I: Database semantic conventions are different from Postgres semantic conventions. We shouldn't be defining conventions for a particular technology or a particular implementation; if we're going to define them for a category of implementations or technologies, that makes a lot more sense. We have HTTP semantic conventions, not, you know, Netty semantic conventions.
E: Okay, well, let's say, for example, that Kafka is very specific and it offers some message queue features that don't appear in the semantic conventions of other, similar message queue components. So what do we do? Do we not report those? Do we make them independent?
H: That's the exact point, right? If they're unique to the database, or to whatever the technology is, then they're unique. But the complaint is: what happens when there's crossover, right? That's the complaint right now. What happens when MySQL has metrics that are similar to, or the same as, Postgres?
H: If there are specifics to it, I think it's important that they be defined, but do they need to be defined at the specification level? Because it's going to be a definition for a specific technology. You can do a good job there, and you can do a good job around specifying naming and maybe some other generalizations, but do we need to include that in the specification?
G: Yes. Okay, so maybe let me bring up a very quick example. A very common thing for databases is the database name, right? For the SQL Server receiver, we have an attribute that is called sqlserver.database_name; for the PostgreSQL receiver, we have, not a resource-level, but a record-level attribute that is named database; and for the MySQL receiver...
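The inconsistency can be made concrete with a small sketch. The per-receiver keys below follow the pattern described in the discussion, and the unified `db.name` key stands in for the kind of shared convention being argued for; all names here are illustrative, not the receivers' exact identifiers:

```typescript
// Hypothetical per-receiver attribute keys for "which database", as
// described in the discussion: each receiver invented its own name.
const receiverSpecificKeys: Record<string, string> = {
  sqlserver: "sqlserver.database_name", // SQL Server receiver
  postgresql: "database",               // record-level attribute
};

// A shared semantic convention would map them all onto one key.
function normalizeDbName(
  receiver: string,
  attributes: Record<string, string>
): Record<string, string> {
  const key = receiverSpecificKeys[receiver];
  const name = key !== undefined ? attributes[key] : undefined;
  // Emit the unified attribute alongside the originals.
  return name !== undefined ? { ...attributes, "db.name": name } : attributes;
}
```

With a single agreed-upon key, backends would not need this kind of per-receiver translation at all, which is the point being made.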
J: And another layer of detail: take the Kafka metrics receiver for the collector. A lot of those metrics overlap with metrics that are accessible through the Kafka client libraries as well. So I guess the additional layer of complexity is ensuring that whatever conventions we come up with at the receiver level can, to the extent that they overlap with metrics accessible to the Kafka clients, actually be represented there too.
H
Good
point
like,
but
can
we
also
then
define
semantic
conventions,
for
you
know,
sql
type
database
conventions
like
they
need
to
include
an
attribute
that
is
of
the
database
name
and
it
needs
to.
You
know,
be
in
this
format
I
think
that's
great,
and
then
it
would
hit
all
sql
server.
Mysql
postgres
sql
like
it
just
it
hits
more
things
than
even
just
what
we're
discussing
currently
right.
D
So
we
have
a
section
about
database
metrics
in
the
semantic
conventions
which
is
very
generic.
It
talks
about
primarily
the
database
clients
and
what
we're
discussing
here
is
essentially
how
generic
a
semantic
convention
should
be
to
warrant
its
inclusion
in
the
open
climate
specification,
and
you
can
go
it's
a
whole
spectrum
right.
It
can
be
very
generic.
It
can
be
somewhat
generic,
like
a
subclass
of
things
like
a
database
or
a
cluster
of
things,
with
subclasses
like
relational
databases
and
more
specific
going
from
there
like
specific
piece
of
technology
like
the
postgres.
D
I
think
it's
a
very
open
question,
so
maybe
we
should
file
an
issue
there
and
have
a
discussion
around
that
before
we
measure
this
pr,
but
we
probably
shouldn't
block
the
prs.
Maybe
the
discussion
can
continue
there,
but
these
are
very
good
questions,
so
maybe
maybe
maybe
yeah.
Let's,
let's
involve
more
people
and
have
a
okay.
E: Yeah, I will file an issue for that, and get it in before we, you know, forget about this topic from this call. One of the reasons also is that even for stuff that is very specific to Kafka, for example, stuff that is not common to other technologies...
A: I feel like if, from OpenTelemetry, whether the SDKs or the collector, we gave that particular piece of data to the user, then the user should have a place to go to interpret what the data means. If we tell them, "because this is specific, we give you the data, but there's nowhere it's documented," then that's a very bad thing for the user. Regarding the balance, I think it's like natural language: it's always changing, it's evolving.
A: Maybe today you have Kafka with a specific concept, and tomorrow people realize, oh, this is a great idea and it should be applied to all the asynchronous queue systems, and the system will always evolve. What you might say is that if you have the initial conversation and the documentation here, and somewhere else, then it starts to become chaos by having things here. But I think seeing the conflict actually helps motivate people to think about how we can evolve the semantic conventions over time.
D: Schemas, right; we evolve them using schemas. But I do agree with you. So anyway, the concern, I think, is very valid: if we go too deep into specific technologies, then that's additional maintenance burden for the specification, and maybe that's the right thing to do, but we need to be clear that we're taking on significant responsibility to maintain these things. And if we accept one database, then we have to accept them all.
A: Yeah, so I'm curious: when the collector decided to take on support for PostgreSQL and Kafka, was there a similar decision? I kind of feel like we could make it easier. For example, we could say that if a certain component is taken on by the language maintainers or the collector maintainers, then by default it should be documented.
A: This is a great idea. Would it be possible for components that produce the metrics to put the information in the description or something? In metrics, when you create instruments, you already have a certain way of describing the unit and the description, so is it possible to make the information self-contained?
D
It
depends
on
what
what
information
you
want
to
include
right.
We
don't
have
a
mechanism
for
describing
the
semantics
in,
let's
say,
machine
readable
way,
but
we
could,
in
theory,
yeah
come
up
with
something
like
that.
D
Another
way
we
could
do
that
is
by
adding
this
information
to
the
schema
files.
We
already
referenced
the
emitted
schemas
by
including
the
schema
url
in
some
of
the
receivers
already
in
the
collector,
not
all,
but
if
we
we
have
the
full
picture
described
in
the
schema
file
of
what
is
emitted
there,
then
it's
also
more
efficient.
Instead
of
repeating
this
information
on
all
of
the
immediate
telemetry,
you
just
include
a
reference
to
the
particular
schema
in
the
form
of
a
url
yeah.
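A rough sketch of the schema-reference idea: the emitted batch carries one schema URL instead of repeating semantic descriptions on every data point. The payload shape and the metric name here are simplifications for illustration, not the actual OTLP structure:

```typescript
// A batch of metrics referencing its schema by URL. Consumers resolve
// the semantics of the emitted telemetry from the schema, rather than
// from per-metric descriptions repeated in every export.
interface MetricBatch {
  schemaUrl: string; // points at a published schema version
  metrics: { name: string; value: number }[];
}

const batch: MetricBatch = {
  schemaUrl: "https://opentelemetry.io/schemas/1.9.0",
  metrics: [{ name: "postgresql.connections", value: 42 }], // illustrative name
};
```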
A: By the way, I put a PR link in the chat, so please take a look at that. Folks are trying to figure out how to get the metrics semantic conventions more formalized, so we can automatically generate the Markdown files from YAML.
D: So this is a PR on OTLP. I don't know if we have the author here in the call; anyway, somebody probably put it on the agenda. Please do review this: it's an addition to the OTLP protocol to provide more detailed responses, describing whether something worked or didn't work, so that some of the data was partially accepted and some was not accepted. This just adds that; we didn't have this information in the responses previously.
B: So there's some discussion about whether we might want structured values in that response, because that would allow us to grow into the future. So think about, as a user and author of the system, whether you'd be interested in receiving an OTLP-style structured log message back from your service when you get partial errors. We could then begin specifying semantic conventions for errors that we could use to actually convey useful information in these responses without overloading our clients.
D
If
we
say
that
we
return
a
log
record,
then
we
probably
need
to
also
define
some
of
the
fields
in
that
lock
record
like
where
does
the
human
readable
error
message?
Go?
I
guess
that's
the
body
probably
unsurprisingly,
but
also
is
there
any
expectation
of
the
severity
to
be
there
like
it
should
be
set
to
the
error,
probably
any
any
attributes
that
we
expect
to
be
there.
B
That's
sort
of
my
point
is
that
there's
lots
of
potential
failures
that
we
might
one
day
want
to
convey
from
a
server
to
a
client.
And,
yes,
I
would
say
the
body
is
the
message
and
you
know
probably
open
telemetry
is
going
to
demand
a
kind
of
convention
for
conveying
uncombined
printf
messages.
For
example,
here's
a
message,
format,
string
and
here's.
Some
arguments
like
conventions
I
think,
are
going
to
have
to
come
for
that
type
of.
B: ...you know, recognizing an error message in the body of a log. But I don't think that would need to hold us back now; any log message will do. You should know how to format a log message, and then we can begin developing conventions on the fly.
B
Light
step,
for
example,
has
validation
errors
and
we
already
have
a
structured
response.
We
just
have
no
way
to
give
it
back
to
the
user.
It
will
tell
you
an
example
of
the
metric
name.
That's
failing
just
one
though,
because
this
is
happening
over
and
over
and
over
again.
I
don't
need
to
tell
you
every
metric
name
that
fails
every
time,
for
example,
and
we
have
conventions,
I'd
love
to
publish
those.
D: On opentelemetry.io we have a list of vendors, I think, but I don't remember if they specifically list whether they support OTLP or not. We had that somewhere.
A: We could reach out to folks and seek more help from them. I think many vendors who have been developing their own ingestion services have more sophisticated solutions, and we probably need to find a balance. Currently the proposal seems to be addressing the very basic problems, but I think in the future we might need to evolve it.
B: Yeah, in this particular case it is, because at some point, you know, you're turning around and saying, "I need help observing this thing that's going wrong," and I'm an observability agent, so how does the agent get observed as well? In this case the sidecar I'm talking about was an actual collector, but you could see the same thing happening in an SDK. So Lightstep will give a validation error...
B
If
you
try
to
write
a
counter
for
a
gauge
or
a
gauge
for
account
or
something
like
that,
and
when
we
first
connected
this
prometheus
sidecar
to
lightstack,
we
got
a
lot
of
them,
so
I
was
because
prometheus
doesn't
actually
tell
you
if
you
mix
those
data
together
and
the
outcome
here
is
that
I
I
didn't
want
to
splat
a
giant
log
message
on
the
console
every
minute
or
every
second
or
every
10
seconds,
but
on
a
like
slow
time
period
like
let's
say
five
minutes.
B
I
think
that
there
should
be
some
information
in
the
console
log
to
help
an
operator.
So
what
I
did
was
I
get
one
example
message
for
every
error
response
within
a
metric
name.
That's
failing
and
then
over
five
minutes
I
calculate
a
set
of
them
and
then
I
print
that
to
the
console
every
five
minutes
so
that
I'm
only
printing
the
unique
error
messages
that
I've
seen
over
every
five
minutes
or
the
unique
metric
names
for
every
validation
error
over
five
minutes.
B
And
then
I
also
turn
around
and
count
those
so
that
I'm
I'm,
then
actually
aggregating
a
count
of
field
metrics
by
metric
name
and
you
probabilistically
get
all
the
data
there.
But
the
cost
is
pretty
low.
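The windowed summary described above can be sketched as follows; the class name and the window handling are illustrative, not the sidecar's actual code:

```typescript
// Collect one example message per failing metric name, plus a failure
// count, and flush a single summary line per window instead of logging
// every error as it happens.
class ErrorWindow {
  private examples = new Map<string, string>(); // metric name -> one sample
  private counts = new Map<string, number>();   // metric name -> failures

  record(metricName: string, message: string): void {
    if (!this.examples.has(metricName)) {
      this.examples.set(metricName, message); // keep only the first example
    }
    this.counts.set(metricName, (this.counts.get(metricName) ?? 0) + 1);
  }

  // Called periodically by a timer (not shown), e.g. every five minutes:
  // emit one summary line per unique failing metric, then reset.
  flush(): string[] {
    const lines = Array.from(this.examples.entries()).map(
      ([name, msg]) => `${name}: ${this.counts.get(name)} failures, e.g. ${msg}`
    );
    this.examples.clear();
    this.counts.clear();
    return lines;
  }
}
```

The trade-off is exactly as described: repeated errors cost one map entry and an increment, and the console sees only the unique failures per window.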
J: So that could be useful. I guess you could then adjust the spec for the OTLP exporters to, you know, have some sort of language that says: aggregate these messages and periodically print details to the console, summarizing what has happened over time.
D
My
use
case
for
this
was
also
about
troubleshooting
but
troubleshooting
the
log
records.
So
let's
say
we:
we
intend
to
send
profiling
information
and
client-side
events
using
otlp
log
records
right.
So
that's
the
current
thinking,
but
let's
say
on
your
back
end:
maybe
those
are
features
that
you
want
to
enable
or
disable.
D
I
mean
the
ability
to
accept
them
independently.
So
you
want
to
disable
profiling,
you
don't
care,
you
don't
want
this
data
to
be
ingested,
so
you
want
to
essentially
accept
partially
the
batch
of
log
records.
If
those
log
records
are
about,
let's
say
client-side,
instrumentation
events,
but
you
want
to
drop
the
profiling
related
log
records.
J: One thought I've had on both of those example use cases is that, you know, the server can return some sort of indication to the client that something has gone wrong, or the server can increment some sort of metric and expose that to the user through some sort of user interface showing what has failed.