From YouTube: 2022-03-02 meeting
A
Okay, so one of the issues that I have, and it's been a fairly long-standing issue, is the difficulty that we have trying to deal with relabeling of the job and instance label values. This applies whether it happens through metadata relabeling or through honor_labels. In particular, honor_labels is the problem that we're trying to deal with now, so we can support receiving federated Prometheus data from other servers.

So it looks like the issue is that the scrape metadata, meaning the metric metadata that tells what type of metric it is (counter, gauge), has a description, etc., is associated with the target metadata, including the original job and instance values. But when the appender is given a metric, it's given the metric labels after honor_labels is respected, and thus it gets the job and instance from the metric and has no access to the original scrape job and instance values. So it can't look up the metadata for that metric.
A
I had considered whether it might be possible to inject relabeling rules that would add the original job and instance values, which we could then extract later and not pass down the rest of the pipeline after the receiver has done its work, but I'm concerned that that may have some issues.
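As a concrete illustration of that idea (purely hypothetical, since this is exactly the patch being weighed): rules of roughly this shape would be prepended to a job's metric_relabel_configs, copying the scrape-time values into temporary labels that the receiver would strip again after its metadata lookup. The `__original_job__` and `__original_instance__` names are made up here, and whether such rules would run early enough to see the pre-honor_labels values is part of what would need checking.

```go
import (
	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/model/relabel"
)

// Hypothetical injected rules: copy the scrape-time job/instance into
// temporary labels so the appender could recover them later. The
// __original_job__ / __original_instance__ names are illustrative only.
var injected = []*relabel.Config{
	{
		SourceLabels: model.LabelNames{"job"},
		Regex:        relabel.MustNewRegexp("(.*)"),
		TargetLabel:  "__original_job__",
		Replacement:  "$1",
		Action:       relabel.Replace,
	},
	{
		SourceLabels: model.LabelNames{"instance"},
		Regex:        relabel.MustNewRegexp("(.*)"),
		TargetLabel:  "__original_instance__",
		Replacement:  "$1",
		Action:       relabel.Replace,
	},
}
```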
A
If users have label limits configured, I wouldn't want to push them over the label limit, or push off labels that should be there, that are theirs. And David has mentioned that there's another Google project that has tried to deal with this by having a mechanism for retrieving the metadata associated with the target for a current metric. David, can you talk a little bit more about how that works? You mentioned that this may require some upstream changes if we wanted to use it.
C
Yeah, I think it's just a hack, so we would have to figure out how to make it palatable to them, but I knew that they had looked at this problem and this is how they decided to deal with it. There's one function, I think, where, when the Prometheus server creates a new appender that we implement somewhere, we're given a context, and so the way they implemented it was to embed something in the context that was able to retrieve metadata, given the... what is it, the final or the original... either the...
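Sketching what that could look like in Go (all names hypothetical; this is a reconstruction of the hack as described, not the actual project's code): the server embeds a metadata-lookup function in the context it hands to the appender, and the appender implementation retrieves it.

```go
import (
	"context"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/scrape"
)

// ctxKey is an unexported key type, the usual Go pattern for context values.
type ctxKey struct{}

// MetadataFunc is the hypothetical lookup: given a series' original labels,
// return the scraped metric metadata (type, help, unit) if known.
type MetadataFunc func(l labels.Labels) (scrape.MetricMetadata, bool)

// ContextWithMetadata would be called where the server creates an appender.
func ContextWithMetadata(ctx context.Context, f MetadataFunc) context.Context {
	return context.WithValue(ctx, ctxKey{}, f)
}

// MetadataFromContext is what the receiving appender implementation calls.
func MetadataFromContext(ctx context.Context) (MetadataFunc, bool) {
	f, ok := ctx.Value(ctxKey{}).(MetadataFunc)
	return f, ok
}
```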
A
The problem we're trying to solve is getting access to the metric metadata from the scrape target when we don't have any identifying information for the target that we can use to look it up in the jobs map.
A
Yeah, we have what's given to a call to Append on the appender, so we've got a value and a set of labels, and I think that's it, right: the timestamp, value, and labels.
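For reference, the interface in question looks roughly like this in Prometheus (simplified here; exact signatures have shifted between versions), which is why the appender has nothing to key a metadata lookup on once honor_labels has rewritten job and instance:

```go
import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

// Simplified view of Prometheus's storage.Appender. By the time Append is
// called, relabeling and honor_labels have already been applied to l, so an
// implementation has no handle back to the scrape target's original
// job/instance.
type Appender interface {
	Append(ref storage.SeriesRef, l labels.Labels, t int64, v float64) (storage.SeriesRef, error)
	Commit() error
	Rollback() error
}
```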
C
That function doesn't include a context; they had to work around that elsewhere. So it's an even worse hack.
C
Yeah, I think the issue for us is that we change a lot of behavior based on metadata. In Prometheus it's not that bad if your metric becomes unknown-typed, but for us, we can't deal with it: we convert it to a gauge, and then, you know, if it's cumulative or something, then it's much less usable for users.
D
It kind of needs to go to that interface anyway, so it is aligned with Prometheus's goals to pass that information down a bit better, and at the point where it's passed to the TSDB, that's exactly the API you wanted, and Prometheus can use it for its own stuff later on. So that is the route I would look at, because it aligns with their goals.
A
Okay, so I will take a look at what it will take to make a patch to propose that, and see where we can go from there. Thank you.
B
Hi, I was reviewing the update for resource attribute handling when converting from OTLP to Prometheus, and I was just looking at the spec, and I could understand most things. But I couldn't understand why we are dropping delta plus non-monotonic sums instead of converting them to a gauge, like: convert it to a cumulative and then convert it to a gauge.
F
I think there might be an issue about this already. I feel like someone else has been asking me for help reviewing.
F
The summary was that we can convert the deltas to cumulatives; that's sort of the point of the temporality concept. It can be done in a bunch of places, technically speaking. Practically speaking, it is the least-cost, least-complexity option, I think, to have this be performed on the path where there's already a map of current values, which actually happens to be in the Prometheus remote write exporter. I think, I may be wrong about this, but it's sort of path-dependent if it's coming in through Prometheus.
F
If it's coming in through OTLP and going out through Prometheus, then probably the easiest, cheapest place to do this conversion is in the Prometheus remote write exporter. But, for example, I've been focused on the statsd receiver, and in that case it's a lot easier just to maintain that cumulative state in the receiver, just because you have to have something there as well. So I have to track down where else I've seen this discussion; I thought it was making progress, but I was just on vacation last week, so I'm not sure. Okay.
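A minimal sketch of that delta-to-cumulative state, in plain Go with hypothetical types, just to make the "map of current values" concrete; wherever the conversion lives, receiver or exporter, it needs roughly this:

```go
// seriesKey identifies a series; in practice this would be derived from the
// metric name plus the sorted attribute set.
type seriesKey string

// cumulativeState folds delta sums into running cumulative totals.
type cumulativeState struct {
	sums   map[seriesKey]float64
	starts map[seriesKey]int64 // start timestamp of each cumulative series
}

func newCumulativeState() *cumulativeState {
	return &cumulativeState{
		sums:   make(map[seriesKey]float64),
		starts: make(map[seriesKey]int64),
	}
}

// record adds one delta point and returns the cumulative value to export.
func (s *cumulativeState) record(k seriesKey, startNanos int64, delta float64) float64 {
	if _, ok := s.starts[k]; !ok {
		// First time we see this series: its cumulative start time is the
		// start of the first delta interval.
		s.starts[k] = startNanos
	}
	s.sums[k] += delta
	return s.sums[k]
}
```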
B
I just want to add, other than that, the whole target_info to resource attribute mapping, all of that looks pretty good to me.
D
Yes, yeah, I was just trying to figure out how the hell do we, you know, get this square object into this round hole. Because otherwise what would have happened was that people would have pushed the target labels onto everything when they pushed and pulled, which is, you know, not great. So how do we solve it? So target_info is a bit of... it is new work, like it's not a thing Prometheus does today; it's just like, here's an idea. No one had a better idea and we're kind of running with it.
F
And on the maintainer side, I can't think of anything exactly in the Prometheus sort of corner there. We are, let's see... so many little debates, but nothing major happening. There's, you know, the Go issue about the API; that's a question that doesn't quite relate to this group. We really want to stabilize the SDK spec, and we were having debates over the sort of many-to-one relationships of meter provider to exporter.
F
Prometheus was one of the test cases for that discussion, just because there's a hypothetical here of: what do you want to happen if you're sharing, say, one Prometheus port for more than one kind of conceptual SDK entity? We call those meter providers. So the question is: if multiple meter providers wanted to share one Prometheus port, how on earth might that work? The sort of theoretical discussion there is a question of what happens if they produce the same metric, and I've put a line in the sand there.
F
I think we should just let it happen. Conflicts will occur; don't do that. But there is more to this conversation, and maybe, since you asked, Anthony, I'll give you a two-minute version of it. Prometheus has a concept of delete; at least it's in the API of the client libraries.
F
As I understand it, this is sort of a corner case that's maybe not very well understood by most users. I believe it's meant to be used in careful situations where you are literally forgetting a metric that, under the sort of standard model, should have existed forever. So what does delete actually mean? I think it means: allow this metric to be written somewhere else by a different process. I think I'm really kind of stretching at this point, but to kind of fill in a definition where maybe none exists.
F
So the reason why this last sort of last-minute spec debate came up is that we have to make a sort of statement about what the semantics of sharing an exporter between SDKs are. If you are firm about it and say the semantics are that there's no aggregation that's going to happen, better make sure you don't make the same series for more than one of those providers; at that point, you have a nice definition that allows us to solve this deletion question, but maybe not exactly the same way Prometheus has.
F
So deletion could be sort of achieved by having a meter provider come and go. If I want to dynamically forget a whole bunch of metrics, the way we can do that potentially in OTel is to create a meter provider, create your metrics, and then, to delete those metrics in the same sense that Prometheus supports, you would shut down that meter provider. And then potentially, if you want to have some metrics survive, you would keep different meter providers alive.
D
Yeah, so a pile of users use it incorrectly, where they actually want to write an exporter and should be using, you know, a custom collector: rather than maintain their data structures themselves, they try to do direct instrumentation and basically make it work for themselves. Which is how it's commonly used, but it's mostly users ending up making things much more complicated than they need to, because they just aren't aware of the custom collector API.
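For anyone unfamiliar with it, a custom collector in client_golang looks roughly like this: you implement Describe and Collect and report values straight out of your own data structures at scrape time, so there is nothing to delete when a series stops existing. The queue example here is invented for illustration.

```go
import "github.com/prometheus/client_golang/prometheus"

// queueCollector reports queue depths straight from application state at
// scrape time; there are no per-series instrument objects to create or delete.
type queueCollector struct {
	depths func() map[string]float64 // hypothetical accessor into app state
	desc   *prometheus.Desc
}

func newQueueCollector(depths func() map[string]float64) *queueCollector {
	return &queueCollector{
		depths: depths,
		desc: prometheus.NewDesc(
			"app_queue_depth",
			"Current depth of each work queue.",
			[]string{"queue"}, nil,
		),
	}
}

func (c *queueCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

// Collect is called on every scrape; a queue that has vanished from the map
// simply stops producing a series.
func (c *queueCollector) Collect(ch chan<- prometheus.Metric) {
	for name, depth := range c.depths() {
		ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, depth, name)
	}
}

// Registration: prometheus.MustRegister(newQueueCollector(app.QueueDepths))
```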
D
So the cases where it comes up: okay, maybe you do have something that's no longer relevant, like a file system goes away. It's not in any way meant for sharing things between systems, because, remember, a metric name is meant to tie to a line of code somewhere, and how do you share a line of code between two systems? That doesn't kind of work. It's just rare cases where it comes up, for performance.
D
So if you had a very high-cardinality situation and some particular label set hadn't seen any updates in an hour, you might delete it like that. I've only come across this case, I think, three times ever in the last 15 years, where someone had actually done the math and it actually just made sense. There are all sorts of problems with that, with counters and so on.
D
So the kind of question I have is: why are there multiple meter providers inside a single process?
F
So, this was... I created this as a kind of a hypothetical that came out of a Java SDK discussion involving a bridge from the Micrometer API. So Micrometer has something along these lines, basically, where you can create what Micrometer calls a meter and have some metrics on that meter.
F
Someone will ask you: what happens when I share this exporter between multiple meter providers? So I get that there's a custom collection API, and that that custom collection API allows you to vanish things as well, so that over time, if you just simply stop observing through the custom collection API, those series are effectively gone. But my understanding was more like: the disappearing of a series is kind of an ill-defined semantic event, right? So, in any case, I have questions about delete; you've sort of answered them.
F
It sounds like there are maybe two reasons. One is unit testing, and that kind of fits with my same sort of hypothetical desire: I'm just doing some tests, now I want to throw away those metrics and forget them. But that doesn't seem like a great proof or test case. And the other is, you said, cardinality, so...
F
I've got some object that's just disappeared or stopped being used, and I want to release that memory. I think that's the sense in which the Micrometer API exists. I'm thinking of, like, a very kind of classical organization from my Google days: you have a sharded server, right? Let's say it's a Bigtable server. Now I've got shards that I can pick up, and they can come and they can go.
F
So I pick up shard number 17, and for now I'm issuing metrics on shard 17. Well, I'm getting overloaded; I need to shed some work. Shard 17 is leaving me and it's going to be picked up by somebody else. Now I want to stop recording shard 17 metrics, because someone else is going to start reporting them. So this is a case where you'd assign one meter object to your shard.
F
When the shard comes and goes, you create a new meter; its metrics will be exported until that shard destroys its meter and goes away, at which point someone else can pick up those same metrics. I don't think this is an extremely in-demand feature, but it's a question that... it's going to happen no matter what we do; someone will come up against this, and yeah, that's why we're talking about it, right?
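A sketch of that shard lifecycle against the general shape of the otel-go SDK (the metrics SDK was still in flux at the time of this meeting, so treat names as approximate; how the exporter would be shared across providers is exactly the open question, so a reader factory is assumed here):

```go
import (
	"context"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// shardMetrics owns one MeterProvider per shard; shutting the provider down
// is the "delete" analogue: the shard's series stop being exported, and
// another process can pick them up.
type shardMetrics struct {
	mp *sdkmetric.MeterProvider
}

// newReader is a hypothetical factory, since whether one Prometheus port can
// be shared across providers is the unresolved part of this discussion.
func acquireShard(newReader func() sdkmetric.Reader, shardID string) (*shardMetrics, error) {
	mp := sdkmetric.NewMeterProvider(sdkmetric.WithReader(newReader()))
	meter := mp.Meter("bigtable.shard")
	// Register an observable gauge for this shard; the callback that reports
	// per-shard values is elided.
	if _, err := meter.Int64ObservableGauge("shard_load"); err != nil {
		return nil, err
	}
	return &shardMetrics{mp: mp}, nil
}

// releaseShard stops reporting this shard's metrics when it moves away.
func (s *shardMetrics) releaseShard(ctx context.Context) error {
	return s.mp.Shutdown(ctx)
}
```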
D
Of course, that's more clustery in this case, and it's totally possible that, you know, because of races and whatnot, the two processes will know about the same shard at the same time or something. Just for context on Bigtable, for other people: a given tablet should only be on one server at a time; there isn't any quorum-y stuff there, that's not at that level. But yeah, like, a disappearing series is defined.
F
Since we're on this topic, I'd like to ask a question, Brian. So, you know, OTel has this asynchronous instrument API that we're kind of really scrutinizing right now, and it's a lot like the custom collector API in Prometheus. So the custom collection happens and you, you know, get to emit some values, right, over one period to the next, one scrape to the next. If my custom collection forgets a value, the question is how to model that in the OTLP.
F
Should the point just, like, vanish from the OTLP, or should we, having remembered that there was a series there, mark it somehow? If we did remember, we can have the SDK do that: when we see the series disappear in, you know, the subsequent scrape, we call the custom collector and the custom collector fails to produce a series that it was producing previously.
F
It feels like an option. It's costly to have to maintain that map, but in order to detect missing series, you have to have that map. Now, if you have that map, you can do a lot of correctness checking and you can also do staleness, and I'm not sure that the cost is worth the correctness checking, but it might be worth the staleness checking; I'm not sure.
D
Yeah, I guess it's kind of, from a high-level architectural standpoint, a question of, okay, are you doing push versus pull, and where does the map live? In a pull-based system it has to live in Prometheus; in a push-based system it could kind of live in either, and often actually lives in the monitoring system. Like, I guess you could also have it live inside the client.
F
Yeah, I think we just haven't really... we have not put a requirement on what should happen if you fail to produce a point, because if you have a Prometheus scraping you, you probably don't need to pay that extra cost. The whole point of the custom collector was not to pay that cost, I think. And if you have a push-based system, it's true that the receiver could perform some sort of target analysis to, like, figure out what's stale as well. So why do that in the client?
F
Whatever... it just seems like potentially a cost that would be better borne somewhere else.
D
Yeah, there was also the question of what else you are tracking. So, like, Prometheus didn't start off with the staleness; it was already tracking cache IDs for the TSDB for efficiency, so adding the staleness stuff onto that, it's like, oh, that cache ID was there last time and not this time. That wasn't that expensive to add in, yeah.
F
It will be reading a /proc file system: it's going to see a bunch of metrics in that /proc file, and then it's going to observe them through the custom collection API. Now, there's a kernel bug, okay? So you read through the proc file and it had a bunch of duplicates when it should not have. So instead of seeing cpu 0, cpu 1, cpu 2, cpu 3, you see cpu 0, cpu 0, cpu 0, cpu 0. That's a bug!
F
I don't really want the SDK to check for that bug, because it's expensive. However, if you have to, for other reasons, emit staleness data, then that cost is free: if you're going to pay for that map, you might as well do staleness. But I don't know that we should be paying for that map; this is my point. Because if you have that map, you can also tell the user: oh, you just accidentally put four values for cpu zero, and that's a conflict, like, that's duplicates.
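A sketch of the map being debated, in plain Go: remembering the series seen in the previous collection is the same bookkeeping that catches both the duplicate cpu-0 case and stale series, which is why the two costs come bundled.

```go
import "fmt"

// collectionTracker remembers which series appeared in the previous
// collection so it can flag duplicates and staleness. Keys would be derived
// from metric name plus attributes; plain strings keep the sketch simple.
type collectionTracker struct {
	prev map[string]struct{}
}

// observe processes one collection pass. It returns an error on duplicates
// (the "cpu 0 four times" kernel-bug case) and the set of series that were
// present last time but missing now (candidates for staleness markers).
func (t *collectionTracker) observe(series []string) (stale []string, err error) {
	cur := make(map[string]struct{}, len(series))
	for _, s := range series {
		if _, dup := cur[s]; dup {
			return nil, fmt.Errorf("duplicate series in one collection: %s", s)
		}
		cur[s] = struct{}{}
	}
	for s := range t.prev {
		if _, ok := cur[s]; !ok {
			stale = append(stale, s)
		}
	}
	t.prev = cur
	return stale, nil
}
```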
D
Actually, interestingly, presuming you're talking about the Go client anyway: that would still be caught in three different ways. Firstly, the Go client won't let you do that if you're using anything resembling the standard stuff, even with custom collectors. The other clients don't do that checking; they just basically are a pretty dumb pipe.

Prometheus, well, staleness handling could catch it, and it doesn't (it might have some reason, for performance reasons), but then the TSDB will go: hey, wait a second, you already put in something at that timestamp. That will actually deal with the bug.
F
Because it would apparently be spotted, yeah. So, and we have said in our spec that, for asynchronous instruments, any callback execution will have identical timestamps. Even though you're going to make these sequentially, so they can't literally have the same timestamp, we want to say these have the same timestamp, so that that conflict is detected. But if you, say, had two callbacks, or just for whatever reason you produced those two values... I just feel like it's expensive but, you know, it's hard to have this conversation.
F
This conversation we're having here is really kind of far from the general sort of group awareness here, so I don't know that I want to have this discussion. I've been trying to say: this is underspecified; we don't want to do it yet.
D
Yeah, it's normally driven by, okay, what's cheap to do, or against the goal. Like when staleness went from being a straight five minutes to, you know, the current system in Prometheus, that fixed a whole pile of problems, and performance actually was reasonable for it, all things considered. But like, when you look at it: okay, we have to store, I don't know, 20 or 30 bytes per series or something for all the extra bookkeeping, but we also have, like, a full TSDB inside Prometheus, so that's pretty cheap, relatively speaking.
A
All right, thank you, Josh. David, it looks like you've got a couple more spec follow-ups to discuss.
C
Yeah, I didn't actually want to discuss them, but if people here are interested, I wanted to point them at the follow-ups. They're fairly straightforward, except for the one that we already discussed, which is resource round-tripping using target_info. So feel free to take a look and approve if you like them; I'd appreciate it.
A
Sounds good. That takes us to the end of our agenda, so thank you, everyone, for coming, and I will see you next week, if I don't see you sooner.