From YouTube: 2021-11-10 meeting
A
And Porus, can you add the link to the Prometheus issues that we have so far?
A
All right, so let's get started. I think Richard left a comment that he cannot join, but he's—
A
Yeah, he just left a comment on one of the issues that we had filed based on our testing results. So, Josh, you're first on the list — looking forward to that.
D
Just to say that I was at a meeting yesterday, the spec SIG, and we discussed an issue, and the action item was for me to take it to this group and relate what was discussed. So I'm carrying an issue for, sort of, someone else — or just a general group issue — and I don't have a super strong opinion, so I'll just describe it. Maybe I'll open it and share my screen, since no one else is.
D
All right, here we are — hopefully you can see that. Yes, this is an issue about how to handle the OpenTelemetry concept known as the instrumentation library, also known as meter attributes. This is the name of the instrumentation package, and I think it's probably worth a little background. As I recall the way this was introduced to OpenTelemetry, originally the concept was that you would have an instrumentation package.
D
I think originally it was because we were worried about spans that happen too often, and we knew that to shut those off you've got to somehow know what you're shutting down, and having a name attached to the meter became this thing we call the instrumentation library. You set that up when you're configuring your meter as well as your tracer, so it's called a named tracer or named meter.
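For reference, here is a minimal sketch of what a named meter looks like in code. It assumes a recent version of the OpenTelemetry Go API (option names have shifted across releases), and the package name, version, and schema URL are purely illustrative:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	// The meter is obtained under an instrumentation-library (scope) name,
	// optionally with a version and schema URL.
	meter := otel.Meter(
		"github.com/example/kafka-instrumentation", // hypothetical package name
		metric.WithInstrumentationVersion("0.1.0"),
		metric.WithSchemaURL("https://opentelemetry.io/schemas/1.9.0"),
	)

	// Instruments created from this meter all carry that scope name.
	requests, err := meter.Int64Counter("request.count")
	if err != nil {
		panic(err)
	}
	requests.Add(context.Background(), 1)
}
```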
D
It
has
a
name
as
well
as
a
version
and
a
schema
url,
but
for
the
most
part,
schema
and
version
are
not
what
we're
here
to
talk
about
we're
here.
To
talk
about
this
attribute,
which
is
called
a
meter
attribute,
and
it's
not
a
metric
attribute,
and
it
is
in
the
openometry
protocol
and
and
it
sort
of
appears,
as
a
form
of
consistency
across
all
the
protocols.
The
tracer
has
it
metrics,
have
it
library,
has
it
the
logs,
have
it
as
we
got
into
the
metric
specification.
D
You
know
a
couple
years
ago
and
it's
been
iterated
on
many
times.
This
has
surfaced
in
metrics
a
couple
of
ways,
and
I
think
this
is
kind
of
the
end
of
that
road.
D
What
we're
talking
about
is
whether
the
metric,
whether
the
instrumentation
package
name,
is
considered
identifying
when
we
talked
about
the
instrument
so
and
I
think
we're
trying
to
pin
it
down
because
as
we
export
to
prometheus,
we
need
to
know
what
what
we
think
and
my
my
belief
is
that
you
know
that
if
you
look
at
this
issue,
I
think
there's
some
examples
down
as
we
get
to
the
bottom.
D
But
if
oh
shoot
I
wish
I
was,
I
wish
we
had
the
original
poster
here,
because
the
examples
shown
yesterday
were
better
than
these
and
I
don't
know
exactly
where
they
are,
but
but
you
can
imagine
a
simple
metric
like
request
count,
and
that
is
a
count
that
has
a
meaning,
which
is
how
many
requests
were
counted.
And
so
the
question
is:
if
we
have
two
libraries
and
they
both
have
a
metric
named
request
count.
D
Do you think of those as the same metric, or do you think of them as different metrics? It matters a lot, because when we begin to pass that data through OTLP and then into Prometheus, we definitely don't want to end up with the same metric names having multiple definitions. It breaks our own rules — in OpenTelemetry we would call that the single-writer rule. Prometheus has the same rule, it just doesn't give it a name, and this definitely breaks when you get to Prometheus.
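To make the collision concrete, here is a small illustrative sketch — hypothetical package names, recent OpenTelemetry Go API — of two instrumentation libraries that each register a counter with the same name under their own named meter:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
)

func main() {
	ctx := context.Background()

	// Two independent instrumentation packages, each with its own named meter.
	redisMeter := otel.Meter("github.com/example/redis-instrumentation") // hypothetical
	kafkaMeter := otel.Meter("github.com/example/kafka-instrumentation") // hypothetical

	// Both define an instrument with the same name. In OTLP these are kept
	// apart by their instrumentation scope, but once the scope is dropped on
	// the way to Prometheus they collapse into a single metric name.
	redisRequests, _ := redisMeter.Int64Counter("request.count")
	kafkaRequests, _ := kafkaMeter.Int64Counter("request.count")

	redisRequests.Add(ctx, 1)
	kafkaRequests.Add(ctx, 1)
}
```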
D
So then we've talked about what kind of outcomes we wish for, and I do believe personally that we should keep this semantic idea that they're the same metric.

D
But we also should keep them distinct, because they can't be written together in the same stream. They are the same metric in the sense that one of the original goals I understood for this named meter is to allow you to substitute a library — meaning I'm going to take out this instrumentation library and plug in a different instrumentation library that achieves the same goals, which is to say the same thing.

D
On the other hand, you could definitely imagine a case where the two different metrics are coming from different libraries that are intended to run side by side, but they are both just request counts, and the question is: can I count them together or not? Are they compatible?
D
So we've talked about at least having a restriction to say that when you use the same metric name across libraries, you will make sure they have compatible types, because they belong together when you aggregate them. So if you are going to aggregate away the instrumentation library, then you definitely want to put a recording rule in there and do the right thing to compute the sum of a counter, for example, from two sources. That would be an outcome that I think makes sense.
D
Okay, I'm going to stop talking now. The question is: when we get to Prometheus and we have two metrics with the same name and different properties, coming from different meter names, should we prefix the metric name — which maybe breaks the original contract, the semantic goal that I had, but might be the right outcome? Should we add an instrumentation label? That's the one that's been proposed, but that of course creates migration difficulties.

D
You had four labels yesterday, you try to use OTel, and now you've got five labels, and that's its own sort of problem. So what we've come to is: none of these are good options, unfortunately. And that is what I came to say — to relate that issue. We just discussed it, and then we said this would be good to ask the Prometheus working group, and I said I would say everything I just said.
F
So the key question here is: do these two things sharing the metric name have identical semantics — like, 100% identical? And it's my experience that things sharing the same metric name do not, with extremely few exceptions. process_cpu_seconds_total has identical semantics because we were really, really careful, but something like request count does not — or at least it's extremely unlikely that it would — because, for example, one library might be measuring before authentication and the other might be after authentication, so they're not comparable. So what Prometheus does is basically crash your binary and say: go fix your metric names. Which is, you know, possibly not what you want. So maybe some form of auto-prefixing, but then that's going to break dashboards on one side or the other. Adding a label is not correct, because you're then breaking everyone.
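For anyone who hasn't hit it, this is roughly what that failure looks like with the Go Prometheus client — a small sketch assuming prometheus/client_golang, with hypothetical help strings; registering a second collector under the same name makes MustRegister panic:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	before := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "request_count", // same metric name from two places...
		Help: "Requests counted before authentication.",
	})
	after := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "request_count",
		Help: "Requests counted after authentication.", // ...with different semantics
	})

	prometheus.MustRegister(before) // fine
	prometheus.MustRegister(after)  // panics: the registry rejects a second collector with this name
}
```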
F
So yeah, it's really a question of: okay, how do we avoid the situation in the first place? Another common thing we see is that someone is trying to share a metric between two entirely different pieces of code. Say there are two request handlers in different parts of the code base, and it's like, oh yeah, obviously we're going to share this metric — and the actual answer is no, do it up in the routing engine before it ever hits either. It's just that kind of thing: you're layering this wrong and making your life harder.
D
What we were talking about was instrumentation packages. So you've got some heavyweight library, say Kafka, and it's not natively instrumented, and you have wrapper-type instrumentation packages that are going to present the Kafka interface and call through to the real thing — but all of that is instrumentation. Okay, then you decided that you didn't like that one package of instrumentation, so you swapped it for another, but there were, let's say, maybe some semantic-convention names — key metric names — that you decided to keep the same.

D
So you use the same name, but you never run them at the same time. In that case, which is — yeah — compatible with Prometheus.
F
Yeah, yeah. So I guess in that sort of situation, what you try to do is make sure that within one binary you're only using one of them — which may not be possible if some other dependency is transitively using the other one, and it's just like, okay, in that case, hope someone prefixes it. But it's the same situation, just, you know, a more—

D
Complicated way of getting there, yeah. I feel like we've reached roughly the same place here.
D
What was said, in just a little more depth, is that if we wanted to do the validation I described — which is to say, if you have the same metric name in different libraries and their types are compatible, well, I can tolerate that, and there are reasons why, and I can get back to that if it matters — but if there are different types, it's definitely a problem and we can warn the user right away. So I'm able to detect an error inside of a process.
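To make that concrete, one way such a check could look — purely an illustrative sketch, not the actual OpenTelemetry SDK behavior or API — is a small registry keyed by metric name that records each instrument's kind and scope and warns when two scopes register the same name with incompatible kinds:

```go
package main

import "log"

// Kind is the instrument type recorded for a metric name (illustrative only).
type Kind string

const (
	KindCounter   Kind = "counter"
	KindHistogram Kind = "histogram"
)

type registration struct {
	scope string // instrumentation library name
	kind  Kind
}

type registry struct {
	byName map[string][]registration
}

func newRegistry() *registry {
	return &registry{byName: make(map[string][]registration)}
}

// Register records an instrument and warns if the same metric name was already
// registered by another instrumentation scope with a different kind.
func (r *registry) Register(scope, name string, kind Kind) {
	for _, prev := range r.byName[name] {
		if prev.scope != scope && prev.kind != kind {
			log.Printf("warning: metric %q registered as %s by %q and as %s by %q",
				name, prev.kind, prev.scope, kind, scope)
		}
	}
	r.byName[name] = append(r.byName[name], registration{scope: scope, kind: kind})
}

func main() {
	r := newRegistry()
	r.Register("redis-instrumentation", "request.count", KindCounter)
	r.Register("kafka-instrumentation", "request.count", KindHistogram) // triggers the warning
}
```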
D
And then you have a situation to really deal with, and that's probably what we're going to end up having to specify. It doesn't sound like a great solution, but at least we know what we can do inside a single binary. As for what we can do inside of a collector, I think that's probably the open question. I mentioned recording rules — there has been talk and questioning about whether OTel's collector would ever have something similar, and I think if you're trying to, you can imagine it; certainly the logic is tractable and you could do that. So that's one solution here: require the user to say when they're going to have that sort of situation and explicitly erase it by properly aggregating it. Yeah.
F
So the situation doesn't quite work out that way in practice, because if you have two completely unrelated binaries that happen to clash, like the request count example — even if they have the same type, that doesn't mean they're the same metric, because, again, one is counting before authentication and one is after, to take the simplest example.

F
So in that situation, in your recording rules you're distinguishing: okay, everything from my Redis, aggregate over here; everything from my Cassandra, aggregate over here. You sort it out that way, and then you're tracking metadata per time series: okay, these are the ones associated with the Redis metadata, these are the ones associated with the Cassandra metadata. But yeah, within a single binary and the thing you're scraping, that's about the most consistency you can possibly get, and everything else is a harder problem down the line, unfortunately.
D
Yeah. The other just minor side note, which I think I'll say for completeness, is that we did describe overlap resolution at some point in the spec, because if you don't apply this fix inside of OpenTelemetry, you can see that the instrumentation names are the same. But suppose you erase the instrumentation names — now you don't know which instrumentation names produced the metrics. There's still a way to see that you have overlapping metrics, because you've got the single-writer rule.
D
Well, we're still talking about the situation where somebody has decided to use request count, and there's a Redis library and a Kafka library both doing it, and now you're at the collector and you're trying to export those metrics. If they happen to use the same attribute sets, you could end up in a bad situation, but that seems very unlikely. So we could also just say: don't use the same attribute sets and you're fine.
F
More than okay — let's say, like, the Redis and Kafka one. That's relatively easy to answer: okay, you've badly named your metrics, they should have been called redis request count and kafka request count, right? That's the "please don't do that." The problem is if you have kafka 1 and kafka 2 request count.

G
Well, I'm not sure it's fair to say they need to be renamed redis request count and kafka request count, because it's running with an OpenTelemetry SDK and API: you've got the instrumentation library name, it's an attribute of the meter that attaches to all of its metrics, and it may make perfect sense to only call it request count, so that you're not repeating "redis" through every metric name you create in that instrumentation.
D
I'll just give an example. At Lightstep we have, I don't know, 20 binaries running in our SaaS — actually that's probably not true anymore, but we're around there — and we have a common package of instrumentation, because almost all the binaries are Go. So you use our standard instrumentation internally.

D
It's not OTel, it's still our own packaged-up stuff from five years ago, and you get a standard metric and a standard trace for your request, and the standard metric is named requests. In the old days we used to prefix the binary name, so it was metricdb requests and satellite requests and stuff like that. But we stopped doing that, and we actually just have them all be requests with a label, or an attribute, which is binary name or something like that.
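As an illustration of that pattern, a shared instrumentation package might record the binary as an attribute instead of baking it into the metric name. This is a sketch using a recent OpenTelemetry Go API; the package name, attribute key, and value are hypothetical:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	// A single shared instrumentation package used by every binary.
	meter := otel.Meter("internal/instrumentation") // hypothetical

	// One metric name shared across binaries...
	requests, err := meter.Int64Counter("requests")
	if err != nil {
		panic(err)
	}

	// ...distinguished by an attribute rather than a name prefix.
	requests.Add(context.Background(), 1,
		metric.WithAttributes(attribute.String("binary.name", "metricdb"))) // hypothetical key/value
}
```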
D
That's equivalent to what we're doing, and when Anthony gave that example just a second ago, it's like: okay, I'm defining the request semantic to be "a request to a Lightstep binary," and in order to make that work I need to make sure that I have unique attributes — which would have been true anyway, even if it was a single instrumentation package.

D
I have to make sure my attributes are unique. So I think we could end up closing this issue by saying: don't do that; if you're a single binary, check that the types are compatible and tell the user not to do that; but otherwise, user beware — have the system detect overlaps, or just beware and don't use the same attributes if you're going to use the same metric names.
D
For a given scraper, anyway. I think you're saying the same thing, yeah — that you can't have two binaries with the same attributes. So if you had two instrumentation packages with the same metric names, you'd better not have the same attributes.

D
Understood — thank you, Brian. I think we've beaten this one to death. We agree that there aren't very many good solutions, and I think what we can conclude here is: do not prefix, do not add an attribute — maybe make it optional; I can see a new installation saying, "oh, that's a good idea, let's add that attribute" — but do nothing by default, warn the user, and then require the SDK to warn the user in a more explicit way in the situation that we know about.
D
So here's what I'll do: I'm going to write up everything I remember from this discussion right now in the issue, and I will hand the meeting back to you, Alolita.

A
Okay, all right, Josh — this is super helpful, because I do think that the collector especially has this issue, where you have different sources and, depending on the use case, you will run into this confusion. So having a rule there is useful, especially for configuration and initial setup. Yeah, cool, thanks.
A
Thanks, Josh; thanks, Brian. All right, moving on. I think we also wanted to give an update on the—

A
As you know, we've been writing Prometheus receiver tests for the collector, looking at what the OpenMetrics suite and the OpenMetrics rules are, and then also verifying against what the Prometheus server emits. So items number two and three are part of that same effort, and maybe we can just look at the issue. Let me just bring it up: 9699, the one we filed.
A
This was to verify — I can share my screen — and Porus and Mustfind, maybe you two can go through what you have filed so far on the Prometheus receiver, for the tests that are passing. And there we are. So this is just for—

A
Valid — yeah, yeah, that's correct. So out of the 161 tests we have run, 94 tests pass — in those the bad input is dropped — and then in 67 tests it is not dropped, and out of those, 22 tests produce incorrect metrics. So we listed out the types which are failing — or are bad, I should say — where the scrape is successful and the metrics are just ingested.

A
So it's a bad result, and again I just wanted to share that. So, Mustfind, do you want to go into more detail?
F
It doesn't test everything, because it works line by line. There is probably some stuff — like, I'm not sure it's doing as much validation as it could on a per-line basis, for exemplars and so on — and if we can prove that, fine. But Prometheus basically just states that it will ingest valid input, and obviously the more stuff we can reject in Prometheus the better, because otherwise people will be confused, thinking that Prometheus is the reference parser when it isn't.

A
I see, I see. But I mean, that's a tough choice, right? Because you shouldn't sacrifice compliance for performance.
F
Yeah, but there are ways — and also, Prometheus wouldn't do it because of historical stuff as well, which OTel wouldn't have to deal with. Like, I have ways to make most of this efficient if I didn't have to support the original Prometheus text format, and particularly the fact that lines can come in any order at all — you can randomize the lines and it's still meant to parse — which ruins the ideas I have for doing it efficiently. But that's not a problem for OTel, because you don't have to support that, or at least you could just rely on Prometheus and cheat for that one, I guess.
A
You could, yes, but the other issue is: how do you actually provide a stable user experience? Because if somebody is using Prometheus under the hood and it really doesn't pass all the exposition format rules, then you're kind of passing the buck to the user, right? Yeah, yeah.
A
Okay, so maybe that should be communicated. Again, I'd like to better understand some of the use cases that — Vishwa, I think — you were trying to address, based on which we've looked at what the compatibility is. Because it's very difficult to say overall; we want to say OTel is completely OpenMetrics compatible, which is the baseline — that's the standard — and then every other tool is also compliant with it, right? Yeah.
F
Well, the thing is that OpenMetrics exists, and it is a standard, and there's a reference parser. However, the parser used internally in Prometheus does not fully implement it: there are certain bad inputs it does not reject right now. Some of those would probably be easy to fix up — I think some of the exemplar stuff could be a little tighter — but anything that depends on information across lines it doesn't currently spot, because it can't do that efficiently.
E
Okay, so then you think we have to go with the Prometheus suite of tests rather than going after the OpenMetrics suite? Because these will fail, right — these just won't be compatible; basically, Prometheus will be doing something different from what—
G
Some of them are ordering issues, but some of them definitely are single-line — exactly — single-line exposition problems that could be caught, if Prometheus can catch single-line issues, yeah. But I think what we need to do in the collector SIG is decide: is Prometheus compatibility our goal, or is it OpenMetrics compatibility, given that Prometheus is not OpenMetrics compatible and will not soon be? And that's obviously going to have some implications for us in the collector, because with Prometheus we can reuse the scraping infrastructure if we're using that, but without the ability to swap in a different parser or something like that for the scraper.
A
I think we'll definitely have to document them and say that — you know, we could either exclude them from our test suite, or we could say that we run them but please ignore the results, because this is a known issue and this is what we recommend: here is the solution. For example, our testing, or our implementation, that will handle some of these issues, and also work on the Prometheus side that will make it compatible.
A
Is it possible to designate the legacy use cases which make these tests fail as some kind of exception, because they're legacy? Are these all legacy in—
F
Because then you would be breaking existing, technically valid input for Prometheus. Because, as I said, for the text format in Prometheus a decision was made that the lines can come in random order. I can do something efficiently if I know I'm getting one group at a time: I can just remember the last group, and all I have to do is maintain a list of all the metric groups I've seen. But if they can come in a random order — when we're processing the other format, that kind of discovery is maybe the way around it, but yeah, that's at least my first thought on how to solve some of this efficiently.
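A rough sketch of the grouping idea — assuming input where all samples for one metric family arrive contiguously, as OpenMetrics requires — is to remember only the current family plus the set of families already seen, and flag any family that reappears after a different one has started. This is illustrative only, not the actual Prometheus or collector parser:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// checkGrouping scans exposition-style sample lines and reports metric families
// that show up again after a different family has started, which grouped input
// forbids. Comment/metadata lines are skipped for simplicity.
func checkGrouping(input string) []string {
	var violations []string
	seen := make(map[string]bool)
	current := ""

	scanner := bufio.NewScanner(strings.NewReader(input))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		// For this sketch the family is just the sample name up to '{' or the
		// first space (suffixes such as _bucket are ignored).
		name := line
		if i := strings.IndexAny(line, "{ "); i >= 0 {
			name = line[:i]
		}
		if name != current {
			if seen[name] {
				violations = append(violations, name)
			}
			seen[name] = true
			current = name
		}
	}
	return violations
}

func main() {
	input := `http_requests_total{code="200"} 10
http_requests_total{code="500"} 2
process_cpu_seconds_total 4.2
http_requests_total{code="404"} 1
`
	fmt.Println(checkGrouping(input)) // prints [http_requests_total]
}
```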
A
Yeah. So, Brian, can you at least flag what can be fixed? Then we can also see what tests to add accordingly, because we'd like to continue to stick to OpenMetrics — it's the standard, and everybody should be aligned on the standard. So as long as there's a path forward in that direction, that would be good; otherwise reusability becomes very hard, right, because underneath— yeah.
F
Well, the thing is, once the small pieces here get fixed — as long as all the client libraries are largely producing correct output, and we can check them against the Python client — it'll mostly work out, like it has worked for years. Because for a long time now Prometheus has not been the canonical, reference parser for the text format — the Prometheus text format — since Prometheus 2, in fact, and it has not been rejecting certain invalid inputs.
G
There's a chance that, in reality, much of this is a non-issue and these scenarios are never or very rarely encountered, but we still want to be able to state compliance and prove compliance, so we should be tracking work towards that.
C
I just wanted to add one thing. Initially we were just using the negative tests — the invalid data — from the OpenMetrics tests, but the OpenMetrics tests also have some valid inputs that, according to the OpenMetrics repository, should be supported by Prometheus, such as untyped metrics and state sets and gauge histograms, which currently aren't scraped successfully by Prometheus. So that—
F
Yeah, yeah — if Prometheus is rejecting that and you're setting the correct content-type header, then that is definitely a bug that needs to be fixed in Prometheus, okay.
A
Yeah, Brian, we'll definitely go through and report all the bugs — or at least all the issues we are seeing. Again, they may not necessarily be bugs, but we can definitely report them, and we will.
B
Yeah, in this one, this is the log that Prometheus generates: it says "invalid metric type: unknown."

B
In the negative tests there is one more thing: if there is only metadata, like HELP and TYPE, but no data point exists, then the server always scrapes it and it counts as a successful scrape, but there is no data point in it. So—
B
This one in particular is not scraped — but is it a good case, really? Because it has these timestamps, like triple zero, zero point zero, so it is expected that it will not scrape it, and it's just skipping it as well.
F
Well, you shouldn't use int64 for your timestamps — at least not in seconds, because you'll lose... well, you can. This gets into timestamp precision issues, which is also what this is kind of testing, because it's permitted for something scraping to only care about second precision, but we also say that you might have nanosecond precision — and if someone has nanosecond precision, then a float64 isn't enough to hold it.
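To illustrate the precision point: a nanosecond-resolution Unix timestamp does not survive a round trip through float64, since float64 only has a 53-bit mantissa. A small standalone check, not tied to any particular parser:

```go
package main

import "fmt"

func main() {
	// A Unix timestamp with nanosecond precision, stored as int64 nanoseconds.
	const ts int64 = 1636560000123456789 // around 2021-11-10, with nanoseconds

	asFloat := float64(ts) // float64 carries only ~15-16 significant decimal digits
	back := int64(asFloat) // the round trip loses the low-order digits

	fmt.Println(ts)         // 1636560000123456789
	fmt.Println(back)       // 1636560000123456768 (precision lost)
	fmt.Println(ts == back) // false
}
```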
B
I need to check this, because last I checked it's using integer 64, so—

C
Yeah, the problem here is that, according to the OpenMetrics test suite, this should be a valid metric. Oh yeah.
F
Yeah — Prometheus is doing what it's designed to do: the parser is parsing it, and then Prometheus is just dropping it on the floor, saying no, we can't, because Prometheus is append-only and this is too old, or too new, or something like that.

B
So it's not necessary for all the positive cases over here to pass, because it may have—
G
Yeah, so the problem here is that these are all tests of parsing the OpenMetrics exposition format, but the way we have to apply these tests is through Prometheus, or through the Prometheus receiver in OpenTelemetry, which goes beyond parsing, right? So there may be logic that happens between the parsing and the point at which we're able to evaluate "did this do the thing we expected," and that would prevent us from using some of these tests.

G
So I think this is a test that we can't use as a test of the scraping system. If we were testing the parser independently, we could use it.
E
And then have only the MUSTs actually run for the time being — I think that should be—

B
So, for example — yeah, this is the example — Prometheus scrapes it as a valid scrape. This is the example: it just has a HELP line, but it comes out as a valid scrape in the Prometheus server.
B
That's a bad scrape, yeah — that's invalid input — but Prometheus creates a valid scrape for all of these kinds. There are many of these kinds, these ones, I think. There are many, like the bad-unit one: it only has a UNIT line, nothing else, and no data point, but Prometheus scrapes it with an up value of one, a successful scrape — and it's like that in most of these.

B
Yeah, because everywhere where it is noted "no data point," Prometheus scrapes it with an up value of one.
F
It just means that, hey, your existing Prometheus parser — similar to what I was just saying for the OpenMetrics stuff, we should be as strict as we can on a per-line basis — maybe there are some things we can restrict here as well in the Prometheus text format, because that's wrong no matter which format you're using, right?
A
Okay, all right. We'll definitely double-check what parser we are going through and rerun these tests, and we'll try to figure out, at least right now, which are the mandatory tests from an OpenMetrics standpoint — maybe with Richard and Richie and Brian, you know, distributing that work.

A
I know we're in a tough spot here, because I really want the OpenMetrics — we should be able to support OpenMetrics cleanly and say, hey, that's the standard, that's what we're fully compliant with. But then, if we are using Prometheus internally, how do we reconcile those two, right? Yeah. And again, let's figure out a path where we can help with that compliance.

A
You know, again, we'll file all the issues that we're seeing which are valid issues, and go from there.
A
All right, cool. Mustfind, did you want to go over any other questions you had on the tests?
A
Okay. So, Brian, what do you recommend as the next step? We'll file all the issues, we'll double-check the parser on the OpenMetrics side — is there anything else that we can do, or what do you reckon?
F
We'll probably discuss it whenever we have the next meeting, but yeah, you've already listed the issues. I guess, if you can break them down into "hey, Prometheus is doing the wrong thing" versus the multi-line, grouping ones — and even things like detecting bad counter values, like negative counter values, depend on not going line by line — but anything we can catch, like, hey, exemplars or commas or something like that, we should be able to catch in Prometheus, yep. At least.
H
Nothing for me — I've been hunting down a batch processor bug the last few days.

A
Okay, all right, sounds good. I think that's all we had on our list of items to discuss.