From YouTube: 2022-10-12 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A: Yeah, sounds good. So I've just been debugging this weird issue we found, starting with 0.59 and still in the latest, where, depending on what external labels are added and what scrape targets are in the Prometheus config, the second scrape and the ones after that start to fail because of duplicate labels. So I added some debug logs and such in the Append function where it's happening, and the Append function is...
A: Given the label set from Prometheus — my understanding, and what I see in the debug logs, is that in the label set that's given, the external labels are added starting from the second scrape. I was wondering if anybody else has seen this, because I can't quite figure out why it's happening. I only see it for kube-state-metrics, and within that only a couple of metrics — and not the same ones every time.
B: Also, I think from the code that you posted, we... external labels — so if the scrape already had that label, I can see how there are going to be duplicates. We need to do some...
C
Generally,
there's
an
open,
collector
issue
about
duplicate
labels
being
inconsistent,
someone
that
opened
a
PR
to
address
it.
I
opened
the
issue
myself
from
noticing
it,
but
I
haven't
followed
it
it's
because
the
range
operator
on
a
map
goes
forward
and
we
get
operation
goes
forward,
but
the
semantics
are
to
take
the
last
value
and
it's
it
can
lead
to
this
sort
of
issue.
I
could
imagine.
A: Okay, that makes sense, yeah. So the labels don't exist in the metric — it's only the external labels — but then the next time we get the time series, the external labels are included in the metric's label set, even though they shouldn't be there. And then it's only sometimes; other time series succeed and such. It's only a couple.
A: That's all I have. I was just wondering if anybody else uses external labels, or came across this.
D: Yeah, that's where I'm at too. These are part of the Prometheus config, yeah — I would expect the discovery manager to hand the scrape manager all of the information it needs to add these to everything it scrapes. We shouldn't be tracking it; I wonder why we're adding these at all. I wonder if we can simply remove that and whether it'll still appear.
A: Yeah, from what I saw, Prometheus was adding them later on. They're only added if Prometheus is actually exporting them, or using federation or something; otherwise, in a regular local Prometheus server, they're not added — in the graph, if you pull up the UI or anything.
D: Okay, yeah. I've seen external labels used on the remote write side before, for things like adding an identifier for the source of the metrics, or something like that, but I hadn't seen them used on the scrape side.
A: Yeah, there's the hash for the group key, which I looked at, but it didn't seem like that was it — that's just computed after we get the scrape; it's required to store the time series. But yeah, it's just very weird.
B: The other approach we could take would be to basically remove the functionality of setting it in the receiver, by validating that people aren't setting it — because both the Prometheus exporter and the Prometheus remote write exporter already have a notion of constant or external labels, so it might not be necessary.
C: An OTLP consumer could just... is there a — if you have an OTLP exporter, can you add resources? You'd use a resource processor to do this, right?
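A minimal collector config sketch of what C is suggesting — attaching the would-be external labels with the resource processor on the OTLP path instead of in the receiver. The key/value here are made-up examples:

```yaml
processors:
  resource:
    attributes:
      - key: cluster          # a hypothetical "external label"
        value: prod-us-east
        action: upsert

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resource]
      exporters: [otlp]
```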
C
Yeah
I
would
have
assumed
that
the
external
labels
were
done
on
the
output
path,
not
the
receiver.
That
I
found
a
lot
about
it.
B
Possible
yeah:
we
can
do
that,
I
mean
we
have
feature
Gates
and
stuff,
so
we
can
do
it
slowly
and
with
and
give
users
time
to
migrate
and
to
raise
problems,
but
just
an
option.
Something
to
consider.
D
I
think
I'm
on
board,
with
that
I
can
put
together
a
PR
that
adds
a
feature.
Gate
adds
a
warning
and
then
we
can
disable
it,
and
sometimes
the
feeder
make
it
disabled
by
default.
Sometime
in
the
future
foreign.
A
Just
like
coming
this
is
there
a
way
we
can
like
pass
it
from
the
config
to
like,
if
we're
using
the
otlp
exporter
or
like
just
kind
of
manually
through
the
Hotel
config.
If
that
makes
sense,.
C: I think they're treated like that. That's how I've always thought of them.
C: Because of that — because they use that replica label, I guess. On the other hand, if you had...
C
I
guess
you
could
imagine
a
an
exporter
with
a
configuration
of
additional
labels
to
not
put
in
the
Target
info
that
should
be
added,
AS
application,
metric
labels
and
then
you'd
put
your
external
labels
in
that
special
list.
You
put
them
in
the
resource
for
the
receiver,
put
them
in
the
resource
in
the
receiver,
and
you
put
them
in
the
application
metrics
on
the
exporter
because
of
a
special
case
that
you've
configured
foreign.
C: That seems like the most portable solution. People who are using OTLP don't break that way. External labels keep on being special, but the keys are configured in the exporter and the values are configured in the receiver.
C: ...such that if the user were to configure their external labels in the receiver but not in the exporter, they would get them in the target_info — and that, actually, I don't think would be so bad. A Prometheus user can join them back to get them where they wanted, probably, and logically I believe it's correct. It's just, you know, breaking.
D: So the proposal, then, would be to have a feature gate that would initially do nothing but emit a warning — well, if it's not enabled, emit a warning if people are using external labels; if it is enabled, then rather than adding those external labels to metrics, add them to the resource. And then the receivers would have — would they need additional configuration to specify which keys to move out of the resource into metric labels?
C: I was thinking it would be an exporter configuration, but you could also imagine — I don't know how to imagine it, but — if there were a way to set properties on attributes, or resource attributes, then you'd say: this resource attribute is the one that I intend to become an external label proper. There's nothing in OTLP for that, so, yeah, I would say...
B: I think I actually found the underlying bug, if we just felt like fixing that too. It looks like the labels slice we get passed at each append is reused across different appends for the same label set, so each time we're appending the external labels again and again and again to that list. So it should be possible for us to fix that, if we wanted to.
C: Right, for the users. And you could also — I could imagine an option in the future to let the receiver turn them into resources directly, in which case they become target_info, and you would then avoid needing a resource processor. That's all! Oh, my God.
C: The original draft of that receiver was to just have a status-okay metric, basically, and we can always — I think one of my longer-running concerns here is that we keep debating up-down counter versus gauge for these zero-or-one variables, and I'd like that to get resolved somehow. But associated with that is this issue of: in the Prometheus exposition format, we have target_info.
C: It's a zero-or-one number — use it how you like. And then in the feedback for the PR, someone else pointed out that, well, there are other conventions out there for doing HTTP code monitoring where you have one metric per code, and that's exactly one of these state-set metrics. In the Prometheus docs, the canonical example for a state set is HTTP codes. And therefore it's almost time to start talking about how, as an OTel package of instrumentation, I would like to do this — and I don't entirely have a good answer.
C
I,
the
the
when
this,
when
the
state
set
comes
up.
There's
this
background
concern
about
the
cost
of
encoding,
someone
called
it
one
hot
encoding
like
that's,
that's
a
term
that
exists
for
this
type
of
sparse
Matrix
or
you
can
call
them
indicator
matrices
and
they're
expensive
and
that's
sort
of
the
normal
case.
C
That's
how
Prometheus
specifies
it
so
then,
if
an
Hotel
piece
of
instrumentation
want
to
do
the
same
thing
for
asynchronous
instruments,
they
could
do
like
they
would
have
to
do
it
right,
meaning
you
would
have
to
every
HTTP
code
that
you've
ever
counted
has
to
continue
being
counted
and
I.
Don't
think
we
have
any
guidance
on
how
a
callback
what
happens
when
a
callback
does
or
doesn't
reuse
the
same
attributes
based
on
whether
it's
a
cumulative
or
a
Delta
output,
I
think
there's
some
open
areas
in
the
spec
for
that.
C
But
if
you're
synchronous,
here's
the
thing,
is
it
it's
very
appealing
to
people
to
use
synchronous
instrumentation
for
this
every
time
I
make
a
status
check,
I,
count
the
number
of
response
codes
and
there's
some
sort
of
relationship
that
I
find
interesting,
but
maybe
not
worth
discussing
very
long
between
the
use
of
Delta
temporality
and
the
asynchronous
Prometheus
gauge.
Essentially,
if
you
were
to
normalize
the
Deltas
of
a
HTTP
check
receiver,
it
would
end
up
looking
a
lot
like
a
state
set.
C
Is
what
I'm
trying
to
to
get
at
and
I
mean
that
in
the
sense
that
every
time
you
make
a
check,
you
count
the
the
one
for
the
code
that
you
received
over
an
interval.
You
will
have
done
at
least
one
check.
Zero
checks
said
nothing
to
report.
So
if
you've
done
one
check,
your
one
count
is
exactly
where
your
your
code
landed.
C
If
you
happen
to
do
two
checks
between
an
export
now
you
might
have
two
codes,
and
this
is
what
I'd
get
to
is
I
think
we
can
probably
draw
a
connection
between
normalizing,
the
Delta
temporality
count,
so
I'll
call
it
a
normalized
counter
and
the
states
that
that
was
my
esoteric
idea
for
how
an
instrument
could
be
specified
here,
because
people
want
to
know
how
to
do
this.
Synchronously
and
I
think
we
probably
could
at
the
same
time,
Prometheus
would
tell
you
to
use
a
synchronous
gauge.
C: Yeah, but there's something about delta temporality and state sets that's interesting to me. I don't want to belabor this meeting with it, but nevertheless, data representation is a separate question, and I just want everyone — at least David, you, perhaps — to think about the idea that I put in that comment: that we could represent state sets on the wire as non-monotonic counters of cumulative temporality, with a unit string like "stateset", something like that. And this thing that Kubernetes did is interesting in its own right.