From YouTube: 2022-07-20 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
C: Hey, I don't see Anthony on the call, so maybe I'll run it today.
A: On the histogram boundary, I mean the histogram incompatibilities issue: I just want to give you an update. I was in the OpenTelemetry spec call yesterday, and there was a nice discussion there. In general, it looks like we're going to move forward with PR 2633, which Josh created, which basically changes the bounds but does not change the indexing.

A: Changing the indexing is a breaking change, so we are not going to change the index, and that is completely fine from a Prometheus compatibility perspective. If anybody has concerns around anything, just comment on the PR or on the issue.
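To see why the bound inclusivity can change while the indexing stays fixed, here is a minimal sketch of the exponential-histogram bucket mapping, assuming the OTLP base of 2^(2^-scale) and upper-inclusive bounds; the function name is illustrative, not an actual SDK API.

```python
import math

def bucket_index(value: float, scale: int) -> int:
    """Map a positive measurement to an exponential-histogram bucket.

    With upper-inclusive bounds, bucket i covers (base**i, base**(i+1)],
    where base = 2**(2**-scale). Changing which side of the boundary is
    inclusive moves only the exact boundary values; the index formula,
    and hence the wire format, is unchanged.
    """
    base = 2.0 ** (2.0 ** -scale)
    # ceil(log_base(value)) - 1 places exact boundary values in the
    # lower bucket, i.e. upper-inclusive bounds.
    return math.ceil(math.log(value) / math.log(base)) - 1
```

For example, at scale 0 the base is 2, so a measurement of exactly 4 lands in the bucket covering (2, 4] rather than (4, 8].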
A: The other incompatibility is the zero threshold field in the OTel spec. I also opened another PR for that, and there's general acceptance from the community to add it to the protocol, but likely not use it anywhere. It's going to be part of the protocol, but OpenTelemetry will not leverage it for any of the things that Prometheus uses it for. However, if you round-trip from Prometheus to OTel to Prometheus, the zero threshold will still be preserved. I just wanted to give an update and see if anybody had any questions.
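As a rough illustration of what the zero threshold field means, assuming Prometheus native-histogram semantics where observations at or below the threshold in magnitude land in the zero bucket; the helper below is hypothetical, not a real SDK function.

```python
def classify(value: float, zero_threshold: float) -> str:
    """Decide which part of an exponential histogram a sample lands in.

    Values whose magnitude is at or below zero_threshold are counted in
    the zero bucket; this is the setting being preserved on round trips
    Prometheus -> OTLP -> Prometheus even though OTel itself does not
    act on it.
    """
    if abs(value) <= zero_threshold:
        return "zero_bucket"
    return "positive" if value > 0 else "negative"
```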
B: Maybe I have a quick question. First, I think it sounds very good; it's nice once everyone is in agreement. Do you happen to know, since they are attempting to make OpenTelemetry metrics final, or stable, whether there is any timeline for that? I don't know how this even works.
C: So this piece of the spec that Josh is changing with the PR, the one changing bucket boundary inclusivity, is already marked stable.
C: But I think that's why there was a concern about changing it.
A
Zero,
the
total
is
stable,
but
the
data
model
is
not.
If
you
look
at
the
data
model's
status,
it
still
says
experimental
in
exponential
histograms.
C: Okay, yeah, that's why it's marked stable. Yeah.
A: Cool, if there are no questions, we can move to the next one. I added this, but I had a conflicting call last time, so, I'm sorry, I couldn't be there. I was basically wondering what to do with the resource-to-telemetry conversion.
C
I'm
in
favor
of
we
have
we
have
target
info.
That's
always
sent
right
right
now.
That's
the
case.
Yep!
C: Yeah, I wonder if the case for removing resource-to-telemetry changes if people can still turn off target_info.
C
Point
yeah
resolve
that
discussion
and
then
yeah.
A
Yep
cool
yeah,
any
other
questions
with
the
resource
to
telemetry.
C
To
feature
gate
it
and
leave
it
in
alpha
for
a
while
just
to
give
people
time
to
raise
concerns
if
they
are
using
it
for
something
we
haven't
thought
of
yeah.
We
can
raise
this
at
the
collector
sig
as
well.
A
Well,
I
also
added
the
next
one,
which
is
the
sdk
requirements
around
target
info.
So
the
idea
is,
if
you're
instrumenting
an
application
with
the
for
me
with
total,
and
you
expose
prometheus
metrics
as
the
output
we
expose.
We
are
supposed
to
expose
target
info
with
all
the
resource
resource
attributes,
but
these
resource
attributes
can
change
over
the
lifetime
of
that
process,
so
it
can
restart
and
it
can
get
a
new
id
and
I,
the
other
things
like
ips,
can
change
and
stuff
like
that.
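What the SDK exposes here can be sketched as a tiny exposition-format renderer. This is illustrative only: real exporters also sanitize label names and handle escaping, and the attribute names below are just examples.

```python
def render_target_info(resource_attrs: dict) -> str:
    """Render a Prometheus-style target_info line from resource attributes.

    If an attribute such as service.instance.id changes (for instance on
    restart), a new series with different labels appears; that label
    change is the churn discussed in the call.
    """
    labels = ",".join(
        f'{k.replace(".", "_")}="{v}"' for k, v in sorted(resource_attrs.items())
    )
    return f"target_info{{{labels}}} 1"
```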
A
So
there
can
be
some
amount
of
churn
to
this,
and
the
question
is:
is
that
a
problem
also
beyond
churn
whenever
one
of
the
labels
change
changes?
There's
a
discontinuity
in
the
in
the
metric?
Is
that
a
problem
I
like
are
these
problems?
I
mean
again,
it
seemed
relevant
to
this
call.
So
I
just
put
this
in.
A
In
my
opinion,
it
shouldn't
be
a
problem
like
both
of
those
shouldn't
be
a
problem
unless
you're
restarting
multiple
times
a
minute
or,
like
you
know,
multiple
times
in
10
minutes.
The
kind
of
overhead
caused
by
one
metric
churning
is
super
low.
Unless
you
have
a
million
targets
which
is
not
likely
to
be
the
case,
it's
just
one
metric
per
target
and
if
that
is
churning
even
once,
every
few
minutes,
that's
fine
and
as
for
metric
discontinuity
from
a
prometheus
hat
on.
A
If
this
data
is
in
prometheus,
I,
if
I
try
to
select
this
data
I'll,
probably
use
attributes
as
table
something
like
hostname
or
pod
name,
or
things
like
that,
and
when
I
do
select
that
and
draw
a
graph
on
it.
I
will
see
a
single
line.
I
mean
I
will
see
a
single
line
but
like
basically
there
will
be
multiple
time
series
in
that
view,
but
it
will
still
be
a
single
line,
and
I
think
that
is
also
okay,
so
I
would
be
in
favor
of
basically
even
if
that's
churn
keep
doing
this
yeah.
C
I
actually
think
the
situation
in
prometheus
is
slightly
better
than
in
open
telemetry,
given
that,
if
users
want
to
join
resource
or
sorry
target
info
with
their
metrics,
they
do
so
in
sort
of
like
they
get
to
choose
what
attributes
they
they
join
with,
whereas
in
open
telemetry
you
just
get
all
of
them,
so
they
can
choose
to
join
with
stable
ones.
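The join being described, where the user picks which target_info labels to pull onto their series, typically looks something like this in PromQL (metric and label names here are illustrative):

```promql
# Attach only the chosen, stable resource attribute (k8s_cluster_name)
# to the request rate, joining on the identifying labels.
rate(http_requests_total[5m])
  * on (job, instance) group_left (k8s_cluster_name)
  target_info
```

Since target_info has the value 1, the multiplication leaves the left-hand values unchanged and only copies over the requested label.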
B
So
I
also
I
mean
yeah
we
commented
on
the
ticket
as
well,
so
it's,
I
think,
I'm
also
in
favor
of
just
keeping
target
info,
and
maybe
if
there
are
some
rare,
exceptional
cases
where
it
isn't
a
problem,
you
can
still
have
an
opt-out
feature
or
maybe
even
drop
them
on
in
prometheus
itself
or
whatever.
I
just
think,
there's
there's
a
lot
of
things
like
mentioned
in
the
in
the
thread,
which
makes
it
pretty
interesting,
and
I
think
this
this
screenshot
that
tigran
posted
there
like
where
you
navigate.
B
You
know
from
a
process
a
container
to
a
part
to
a
linux
node
to
kubernetes,
cluster
and
so
forth.
I
think
this
idea
is
pretty
pretty
nice.
I
mean
to
have
like
if
you
would
ever
get
http
process
metrics
and
see
it's
slow
and
you
have
a
way
to
look
at
the
underlying
node
and
see.
Is
there
a
disk,
I
o
issue,
and
I
think
this
is
something
worth
thinking
through,
because
currently
there
wouldn't
be
any
way
how
to
know
which
host
metrics
belong
to
which
process
metrics
right
unless
the
eyepiece
match.
B: Yeah, I mean, if the scrape goes through an interface exposed in Kubernetes, and the Linux host, you know, sees a different IP and has a different understanding of ports and so forth, it's hard to link these things. That's always been a huge problem. And if this discussion about the general purpose of resource attributes continues, beyond how to map it to target_info...
B
I
think
that's
not
not
as
much
of
a
problem
as
it
looks,
but
if
this
discussion
continues
like
what's
the
goal
in
general,
I
think
that
would
be
interesting
to
figure
out
like.
Can
we
actually
find
identifiers
that
link
you
know
stuff
running
in
a
container
in
kubernetes
to
host
metrics
or
to
whatever
availability
zone
metrics
or
stuff
like
that?
But
I
guess
that's
something:
that's
not
even
in
the
scope
of
any
current
discussion,
yet.
A
Yep
in
general,
I
mean
even
I'll
comment
with
the
short
summary
of
what
we
discussed,
but
I
think
you
know
we
are
in
favor
of
this,
even
though
there
is
soon
yeah
cool.
I
will
update
the
issue
with
the
discussion
from
this
call.
A: Yeah, I will update the issue.
A
Yeah
nice,
if
there's
nothing
for
that,
I
I
added
one
more
agenda
item,
but
it's
also
me,
but
before
I
go
into
that,
I
want
to
see
if
others
have
anything
on
the
agenda.
A: We take a batch of metrics, and from that batch we look at unique job and instance labels and then generate target_info for those. This could mean that a single scrape might be split across, you know, 10 different batches inside OpenTelemetry, and that means we're sending the same target_info metric again and again, likely with different timestamps, because we are generating target_info per batch.
A: Now, I would say we just ignore the out-of-order issues here, because it means the metric has been sent at least once, and that is good enough. It might not match the scrape interval: if you scrape every 15 seconds, you might end up with target_info samples that are more or less frequent than every 15 seconds. But again, I cannot come up with a solution that sends it only one time per scrape.
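The duplication being described can be sketched as follows; a hypothetical splitter that, like the behavior discussed, attaches target_info to every batch it produces, so one scrape split into N batches emits target_info N times.

```python
def batches_with_target_info(scrape, batch_size):
    """Split one scrape's samples into batches, attaching target_info to each.

    Illustrates the problem from the call: target_info is generated per
    batch from the job/instance labels, so splitting a single scrape
    into several batches re-sends target_info once per batch.
    """
    batches = []
    for i in range(0, len(scrape), batch_size):
        batch = scrape[i : i + batch_size]
        batches.append(["target_info"] + batch)
    return batches
```

A scrape of five samples split into batches of two yields three batches, and therefore three copies of target_info.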
C: On batching: why are you doing batching in front of the remote write? Does it make it more efficient to send large amounts, like multiple scrapes, at once?
A
That
is
a
good
question.
Honestly,
I
don't
know,
but
if
you
look
at
any
example
configuration
the
batching
processor,
is
there
yeah
and
there's
also
another
issue
that
says
like
today
in
prometheus?
We
have
a
similar
issue.
A
Even
if
the
prometheus
prometheus
has
a
single
scrape
of
like
ten
thousand
metrics,
we
still
sell
it
in
batches
of
thousand
thousand
thousand
thousand.
This
means,
if
there's
a
histogram
with
buckets,
some
buckets
are
updated
before
other
buckets,
and
this
can
cause
issues
in
the
remote.
A
So
this
is
an
existing
problem
in
prometheus
and
when
we
update
from
hs
remote
right,
we
want
to
make
sure
the
entire
scrape
is
sent
at
once,
instead
of
like
splitting
it
up-
and
I
think
something
similar
is
happening
happening
here,
where
this
scrape
is
being
split
into
multiple
batches.
A: Yeah, if there's no good solution, I'll also update the issues saying we're still thinking about this.
A
No
worries
I
just
wanted
to
bring
this
up.
C
There's
one
more
thing
I
wanted
to
raise
that
came
to
mind.
I
just
wanted
to
put
together
linked
together,
two
different
things
that
have
been
going
on.
One
is:
there's
been
this
great
issue
around
trying
to
better
understand
memory
usage
in
the
prometheus
receiver
and
it
turns
out
we
spend
a
lot
of
cpu
cycles
and
memory
trying
to
track
the
start
times
of
counters.
C
I
think,
as
a
group,
it
would
be
a
big
win
if
we
could
start
to
support
that,
because
then
consumers
it
would
enable
us
to
really
improve
the
performance
of
the
receiver.
If
we
could
support
the
openmetrics
creation
timestamp
for
counters.
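For reference, the OpenMetrics creation timestamp is an extra `_created` sample exposed next to each counter (metric name and values below are illustrative):

```
# TYPE http_requests counter
http_requests_total 1027
http_requests_created 1.6583e+09
```

Because it is exposed as its own sample per counter, an ingester that does not treat it specially sees it as one additional time series per counter.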
A
Just
to
give
a
little
context,
the
underscore
created
switching
on
by
default
is
slightly
controversial
in
the
prometheus
world,
because
today,
if
you
are
using
prometheus,
prometheus
doesn't
use
the
underscore
created.
It
just
adds
it
as
an
additional
time
series.
So
if
you
have
an
application,
that
is
exposing
a
lot
of
counters,
if
underscore
created,
is
there
by
default.
A
But
the
consequence
of
this
is
today
that,
even
though
client
libraries
support
underscore
created,
it
is
likely
to
be
turned
off
for
a
lot
of
the
applications.
So,
even
though
underscore
created
is
a
concept
in
practice,
it
might
not
exist
for
a
lot
of
the
applications.
That's
something
to
keep
in
mind,
fabian,
correct
me.
If,
if
I'm
wrong,
yeah.