From YouTube: 2021-11-03 meeting
B
Hello everyone, sorry I'm running a little late. If you could, please add your names to the meeting notes and add any agenda items you want to discuss. We'll get started in just a moment.
B
Okay, I don't see anybody else joining, so let's go ahead and get started. Could you give us an update on where you are with the testing?
C
Yeah, okay. So the progress on testing is: we have identified the issues that we need to file PRs for, and these issues have been filed, and tracking issue number 57 of the Prometheus working group has been updated to track all of them. There are some questions regarding the expected behavior of some of the tests, so I can present those issues. I'll just share my screen for that.
C
So my screen is visible? Yes.
C
So the first issue that we have is with NaNs in the numerical values of metrics. If we have a normal NaN or infinity, then when we pass these values to the Prometheus receiver, the counts, because they are int64, don't come out as NaNs in the pdata output from the receiver.
C
So in this area of the code, when it tries to adjust the count — when it tries to convert the NaN into an int64 — it creates the minimum negative number, and when that number is passed through the exporter, the exporter converts this minimum negative number into the maximum positive number, as shown here. And when this data is then visualized on the Prometheus web UI — so between data set one and data set two, data set two is ten times data set one.
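(A minimal Go sketch, not the receiver's actual code, of the conversion behavior described above: a NaN count converted to int64 typically becomes the minimum negative value, and reinterpreting that count as unsigned flips it to the maximum positive number.)

package main

import (
	"fmt"
	"math"
)

func main() {
	nan := math.NaN()

	// Converting a NaN float64 to int64 is not defined to a single value in
	// Go; on common platforms it comes out as the minimum int64.
	asInt := int64(nan)
	fmt.Println(asInt) // typically -9223372036854775808

	// Reinterpreting that count as unsigned (as an exporter might when it
	// writes the value out) turns it into the maximum positive number.
	fmt.Println(uint64(asInt)) // typically 9223372036854775808
}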
C
But
it's
not
observable
from
this
graph
because,
like
the
the
nand
value
is
taken
as
the
maximum
integer
value,
so
because
this
this
thing
is
plotted
in
the
graph
so
like
the
other
values
are
non
observable.
But
if,
if
we
pass
this
data
directly
to
the
prometheus
server,
so
this
directory,
we
pass
it
directly
to
the
permission
server.
The
the
data
can
be
seen
coming
out
in
this
way,
which
makes
like
more
readable
because
the
nand
values
are
here
are
not
taken
as
maximum
integer
values.
C
So yeah, this is one open issue. As discussed with Anthony — you mentioned that with staleness markers this issue is also present, but with the staleness PR landing now there will be a change so that these values are checked using a flag, NoRecordedValue. But for normal NaN and infinity values I wanted to understand: is the expected output still going to be represented as the maximum integer value, or will it be replaced with something else?
B
So, David or Vishwa, I don't know if you guys have thoughts on this, but my thinking is that for things like a bucket count, or even a histogram's overall count, NaN or infinity are nonsensical values and should be dropped. It looks like that's what Prometheus is doing in the example here as well — it simply drops them — but I don't know if either of you have thoughts on how that should be handled.
C
That's why it comes out as empty space over here, because in this scrape — yeah, like Anthony was saying.
D
I think Prometheus is dropping them, but we need confirmation from Brian or somebody from the Prometheus side before we can say for sure that they're being dropped.
B
Yeah, I was hoping that Brian would be here and could confirm that today, but I would say we should proceed with a change to drop those values — to drop any non-stale NaN or infinite values that we receive for non-floating-point numbers — and then we can confirm that with Brian once we have the PR ready; we'll just ask him to review it.
B
Yeah, so one of the complications we have is that Prometheus represents the bucket counts, and even the histogram counts, as floats, not as integers, so they can store a NaN in there if they want to. OTLP does not — pdata does not — so we can't represent the value of a bucket count as a NaN.
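(A hedged sketch of the drop-on-NaN/Inf approach proposed here for integer-valued fields such as bucket and histogram counts; convertCount is an illustrative helper, not the receiver's actual function.)

package main

import (
	"fmt"
	"math"
)

// convertCount turns a Prometheus float sample that is semantically a count
// (e.g. a bucket count) into a uint64 for OTLP. It reports ok=false so the
// caller can drop the point when the value is NaN or ±Inf, instead of letting
// the conversion produce a nonsensical huge number.
func convertCount(v float64) (count uint64, ok bool) {
	if math.IsNaN(v) || math.IsInf(v, 0) {
		return 0, false
	}
	return uint64(v), true
}

func main() {
	for _, v := range []float64{42, math.NaN(), math.Inf(1)} {
		if c, ok := convertCount(v); ok {
			fmt.Println("keep:", c)
		} else {
			fmt.Println("drop:", v)
		}
	}
}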
D
Yeah, I think for now we can probably drop them and then confirm with Brian whether that is the expected behavior.
C
Okay, so the next issue is regarding honor_timestamps. So currently — this is the test data that was used to test the Prometheus receiver.
C
So in this test data a timestamp was given with the gauge metric type, and — by default honor_timestamps is true — when this data is passed through the Prometheus receiver, in the pdata that comes out all the metrics, basically gauge, counter and histogram, honor the timestamp. We can see that the start timestamp associated with these metrics is the timestamp that was provided with the gauge data.
C
But with summary it's a different case: it doesn't honor the timestamp like the other metrics are doing. It is generating its own timestamp without honoring honor_timestamps. So, as we can see in this picture, the summary has the timestamp 163588338.
C
But all the others, like histogram, have the start timestamp that was given by the user, so they are honoring it. So I just wanted to understand: is this expected behavior, that summary should not honor honor_timestamps?
E
First, just a quick question: from your data it doesn't look like the summary metric has a timestamp with it, based on the screenshot. Just wanted to confirm.
C
It's here in the —
B
Line number 118 — no, in the exposition, somewhere below that. So, yeah, page one doesn't have a timestamp; does page two? Is the timestamp on the summary from where that sample was taken?
C
No — even if it doesn't have the timestamp, it should take the timestamp from the gauge.
C
Because even for the counter, I have not provided an explicit timestamp over here, but it's taking it from the gauge.
C
Okay, I will just — just one slide.
B
I would say add a distinct timestamp for every metric and validate that each metric gets its own distinct timestamp, and then also have one page where only one of them gets a timestamp, and see if any of the other metrics get that timestamp when they really shouldn't. But either way —
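(A rough sketch of the two test pages being suggested, written as Go string constants a receiver test could serve; the metric names and timestamps here are made up for illustration and are not the actual test data.)

package receivertest

// Page where every metric carries its own distinct explicit timestamp, so a
// test can assert that each data point keeps exactly the timestamp it was
// given.
const pageDistinctTimestamps = `
# TYPE http_requests_total counter
http_requests_total 100 1635880001000
# TYPE temperature_celsius gauge
temperature_celsius 21.5 1635880002000
# TYPE rpc_duration_seconds summary
rpc_duration_seconds_sum 5 1635880003000
rpc_duration_seconds_count 10 1635880003000
`

// Page where only the gauge has an explicit timestamp, so a test can check
// that the other metrics do not silently inherit it.
const pageOnlyGaugeTimestamp = `
# TYPE temperature_celsius gauge
temperature_celsius 21.5 1635880002000
# TYPE http_requests_total counter
http_requests_total 100
`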
B
I think one of the things we wanted to check with the Prometheus folks was whether this was a bug that they had observed in the Prometheus scraper before, because I don't think that we are creating the scrape timestamp anywhere. We're potentially adjusting the start timestamp in some places, but I don't think we ever change the data point timestamp, so that should be coming out of the Prometheus scraper.
C
Okay, okay, sure. So I'll move on to the next issue, which is regarding the untyped metric. In this one, even if we have one untyped metric on a page where we have three other metrics, it doesn't scrape anything, because of the presence of that one unknown. So it doesn't just omit the one metric which is unknown — it completely fails the scrape. So I just wanted to understand: is this the expected behavior, that if there is one unknown metric —
C
— it doesn't scrape the whole page. So this was fixed, actually — there was a PR to make unknown —
D
— as double gauges.
B
I thought we landed that a while back. Let me check that, though.
D
Maybe it's not in the — are you guys using the latest receiver, or —
B
Yeah — and it looks like this was merged right around the same time that 37-1 would have been put out, I think.
C
So, regarding the previous issue — I have now run the code with this test data. Page one only has the timestamp on the gauge, which is the value — I think it should end with 931, because the 195 will go into the nanos — and on page two the timestamp is associated with each metric. So with summary it should be 967. But the data that comes out — so this is the histogram, and this is the summary.
C
The
point
timestamp
comes
out
as
967,
which
is
the
data
that
was
feed
to
the
metric.
I
just
keep
the
other
window
here,
so
this
967
was
is
is
taken
as
a
point
timestamp
and
the
start
timestamp
that
comes
out
to
it
is
the
it's
generated
here.
Yeah.
B
So that's correct — the start timestamp doesn't ever come from Prometheus; Prometheus doesn't have a concept of intervals. It only tells you what the current scrape timestamp was; we calculate the start timestamp in the receiver. Now, this is slightly odd behavior, because Prometheus is telling us that this scrape was from before the prior scrape that we had seen, and so perhaps we should handle that situation somewhat differently. I don't know what the appropriate thing to do there is — like with Prometheus remote write —
D
Yeah, and also, the point timestamp is not the 897, right? Here it's 967 — it seems different.
C
But for the histogram it's the 923 — that's the timestamp that's fed to the metric, and the start timestamp is — well, the start time is the time it was taken.
B
The timestamp on the gauge was 931, but not on the histogram. The histogram has its own timestamp from the first scrape, which was the time that it was scraped, because no timestamp was provided. I think this is functioning correctly, ish — other than the fact that we're accepting out-of-order samples.
B
I don't know — I think for some metric systems it's not a problem, and I think in the usual case we won't be receiving them, but I don't know what we should do with this behavior, or in the situation where we've got, you know, some exposition explicitly telling us, no —
B
This
is
this
is
an
older
timestamp
you,
you
just
take
this
and
treat
it
as
if
it's
older
and
that
puts
samples
out
of
order,
like
I
think
I
think
like
when
we
get
those
samples
down
to
the
remote
right
exporter,
the
the
exporter
will
sort
them
by
time
stamp
before
it
tries
to
ship
them,
but
if
that
gets
broken
up
into
two
batches,
that
may
cause
problems.
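(A minimal, self-contained sketch of the sort-before-shipping behavior being described; sample is a stand-in struct, not the exporter's real type.)

package main

import (
	"fmt"
	"sort"
)

// sample is a stand-in for a time series sample; the real exporter works on
// its own data structures.
type sample struct {
	timestampMs int64
	value       float64
}

// sortByTimestamp orders samples oldest-first, which is the order a
// remote-write backend generally expects within a single request. This only
// helps within one batch: if out-of-order points are split across two
// batches, sorting each batch separately cannot fix the overall order.
func sortByTimestamp(samples []sample) {
	sort.Slice(samples, func(i, j int) bool {
		return samples[i].timestampMs < samples[j].timestampMs
	})
}

func main() {
	batch := []sample{{timestampMs: 967, value: 1}, {timestampMs: 931, value: 2}, {timestampMs: 923, value: 3}}
	sortByTimestamp(batch)
	fmt.Println(batch)
}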
B
Yeah, yeah — we should make sure that we've got an issue noting that we may be accepting out-of-order samples, and we need to figure out what to do with that.
B
Okay, so for the renaming-the-metrics one, I was really hoping that someone from Prometheus would be here to help us with this. Hopefully this is just something they've changed in recent versions. I think the person doing the initial testing was using a more recent version of Prometheus than the receiver is using, but this behavior should be entirely within the control of the Prometheus scraper, and we're not doing anything to handle relabeling explicitly, yet we're getting different behaviors.
H
But I think this is also related to the job and the instance —
B
— relabeling, right? Not this particular issue. There is an issue with job and instance relabeling: when we are honoring the labels that come in a scrape, and job and instance are set there, we're not able to properly handle that, because we build up a metadata cache that tries to tie metrics to the configuration through the job and instance labels. I need to spend some time digging into how we can avoid doing that, because Prometheus doesn't do it.
B
It's
able
to
work
without
having
to
have
the
original
job,
in
instance,
labels
or
the
javanese
labels
that
the
scraper
would
have
put
on
if
the
scrape
didn't
tell
it
to
replace
them.
But
this
is
instead
that
the
behavior
of
the
replace
relabel
rule
is
different
between
prometheus
and
our
receiver.
B
So
looks
like
when
there's
a
replacement,
that's
explicitly
provided
the
the
whole
name.
You
know
when
the
target
label
is
named,
the
whole
name
is
replaced
as
would
be
expected,
but
when
that
replacement
is
dropped
that
that
replace
value
at
59
there
just
is
isn't
provided
what
prometheus
does
is.
It
replaces
the
part
of
the
regex
that
matches
and
sorry
that
rejects
that
will
match
all
the
way
to
the
end
of
the
string,
so
that
should
match
everything
bruce.
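(A loose illustration using only the standard library, not the Prometheus relabel package itself, of why the replace rule's anchored regex and its default replacement of "$1" should rewrite the metric name rather than drop it; the redis_connected_clients example follows the one discussed here.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Prometheus anchors relabel regexes, so "redis_(.*)" behaves like
	// "^redis_(.*)$" and must match the whole source value.
	re := regexp.MustCompile(`^redis_(.*)$`)
	name := "redis_connected_clients"

	// With an explicit replacement, the whole name is rewritten.
	fmt.Println(re.ReplaceAllString(name, "clients_connected"))

	// When the replacement field is omitted, Prometheus falls back to the
	// default "$1", so the rule still fires and the name becomes the
	// captured group instead of the metric being dropped.
	fmt.Println(re.ReplaceAllString(name, "$1")) // connected_clients
}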
C
Yeah, that's correct — so Prometheus changed it: it removed the redis_ and only keeps connected_clients.
A
It went to look up metadata for it and, similar to the job-and-instance problem, didn't find any, marked it as unknown, and then it got dropped somewhere later.
C
Oh yeah, with this one — so Prometheus — sorry, the collector dropped it. It dropped the metric when the replacement was not provided.
B
Yeah, so that sounds more like the name relabeling issue that David was talking about, so that still exists. Okay.
A
So the correct behavior, according to the Prometheus maintainers, is for us to pass the metric through with the relabeled name, but without any metadata. So, for example, this counter with that help text would get passed as a gauge without help text.
A
If you did this — and that's the behavior that Prometheus has today — we are expected to match that. So you shouldn't be renaming your metrics, because it won't do everything you probably hope it does, but what we currently do by dropping them is different from what the Prometheus server does, and so, yeah, we should match their behavior.
B
Yeah, I'm pretty sure I remember there being an issue open for that. There's a TODO in the code around the config, where it drops any relabel rules that have the metric name as the target label, with a TODO on it.
B
Okay, so these were all of the issues you had to discuss, right, Bruce? Yeah.
B
Okay, so do you have a clear next step on each of these and what should be done?
C
Yeah — for the first issue, for regular NaNs and infinity, the expected behavior is to drop the metric for now. And for honor_timestamps we probably need to raise an issue, because it's not the expected behavior that the — no, for honor_timestamps —
C
— I need to look into it, but it seems like I ran the test wrong. The only issue there is that the timestamps are breaking the sequence, so I need to look into that particular part. For renaming metrics, yeah — this issue has already been raised, and the current behavior is that it should drop it, and that's what it is doing. And for the untyped metric —
C
It
is
dropping
the
whole
page,
but
need
to
check
it
again
with
the
with
the
current
version
of
the
character.
Contrib.
B
Yeah, and I wonder if those two are related — the renaming and the dropping of the untyped metrics — because if renaming is supposed to lead to an untyped metric, and then the untyped metric gets dropped, I can see one following from the other. So maybe we can kill a couple of birds with one stone there, but I'm not overly optimistic.
Okay,
is
there
anything
else
that
anyone
had
to
discuss
today.
E
I have just a couple of quick questions — questions to add on to the previous ones. I'm just going to share my screen.
E
So yeah, just to provide an update on the status of the tests: we have filed issues on collector-contrib for those test implementations, and they are linked to the Prometheus workgroup issue, so that is acting as a tracker for them. Other than that, I just wanted to bring this up again.
E
I think Anthony mentioned this when we were talking about metric renaming, but with the test for honor_labels right now — if you have an exposition with existing job and instance labels and honor_labels set to true, the scrape fails; the OTel collector Prometheus receiver scrape fails and an up metric value of zero is generated for it. I found in the collector logs that this is related to the metadata cache that Anthony was talking about.
E
There is already an issue on collector-contrib regarding this — it's about federation, but I think it's the same problem, where the exposition already has existing job and instance labels. So yeah, I just wanted to confirm: do we need to file another issue, or is that one good enough to track this particular problem?
B
I think that issue is good enough to track it. You know, that had been brought to us a week or two ago as well; we discussed it, but I think it's all related to the same issues, so we can go ahead and use it to track anything more we can do on this.
E
Okay, yeah. Other than that, we had these OpenMetrics tests. The plan was to use the negative test data from the OpenMetrics repository, so we ran those negative tests with the OTel collector, and for these 22 cases the Prometheus receiver ingests some sort of metrics and the up metric value is one. So basically it doesn't identify these as negative or invalid data — for example, bad or missing or extra commas.
E
This is a test case with the following exposition, and from the receiver output we can see that it doesn't identify it as invalid.
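(A minimal sketch of the negative-test loop being described; scrapeUp and the single case shown are placeholders standing in for the real harness and the OpenMetrics repository's negative test data.)

package receivertest

import "testing"

// negativeCases stands in for expositions taken from the OpenMetrics
// repository's negative test data; only one made-up example is shown here.
var negativeCases = map[string]string{
	"bad_extra_commas": "# TYPE a counter\na_total{label=\"value\",,} 1\n# EOF\n",
}

// scrapeUp is a placeholder for the real harness: serve the exposition,
// point the Prometheus receiver at it, and return the resulting up value.
func scrapeUp(t *testing.T, exposition string) float64 {
	t.Helper()
	_ = exposition
	// ... wire up a test HTTP server and the receiver here ...
	return 1 // placeholder result
}

func TestNegativeOpenMetrics(t *testing.T) {
	for name, exposition := range negativeCases {
		if up := scrapeUp(t, exposition); up != 0 {
			// Malformed input is expected to fail the scrape (up == 0),
			// but these cases currently come back with up == 1.
			t.Errorf("%s: expected scrape to fail, got up=%v", name, up)
		}
	}
}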
E
So, since we are relying on the Prometheus scrape package, we also ran this against the Prometheus server, and it is likewise able to successfully scrape it — this is what I see on the Prometheus side. So the question is: are these OpenMetrics test cases fully compatible with Prometheus itself? Because it doesn't look like that's the case.
B
Yeah, so I don't think the expectation is that Prometheus is fully OpenMetrics compatible, but I thought the ways they were incompatible were in not implementing optional features. Actually correctly parsing malformed — or supposedly malformed — data is a weird one for me, but it looks like it does correctly parse that, even though it's not well formed.
E
Yeah — like, for example, with negative counts, it still identifies them as valid. So you're saying Prometheus —
B
Yeah, our expectation is that the scraper doesn't produce anything for these, but it does — and it looks like it does even in the Prometheus package, and it ends up storing something in the TSDB. So yeah, I think we should probably take these 22 tests and open an issue upstream saying: hey, the Prometheus scraper is actually ingesting data from these malformed OpenMetrics expositions.
E
I can open an issue if that's what we want. Yeah, okay, cool — thank you.
B
Yeah, yeah — this is all really great work, and it's moving us much closer towards being able to have confidence that our receiver is doing the right thing. Even just knowing the places where it isn't doing the right thing is a big step in the right direction.