From YouTube: 2021-05-12 meeting
A: And please add your questions or comments on the meeting notes; that'd be a way. And Iris, did you have some questions that you want to add to the agenda?

C: Yeah, I'll add those right now.
A: While we get started again: Grace, I think I had tagged you on a couple of issues for status. So, actually, here: a couple of PRs that you had filed.
A: All right, that's it! Let's get started, because I know you're at least five minutes fast. So again, I just wanted to bring up the issue of the significant backlog of PRs that are waiting to be approved or merged. This has been an issue on the collector components for a while; it's one of the reasons we created this workgroup repo in the first place, so that we could actually work on components in parallel and continue to develop.
A: One of the things I've done is work with Bogdan to triage the collector issues and the pending PRs and tag them. The process is that as soon as they get reviews they can be tagged as ready to be merged, and then Bogdan can merge them. But that said, there are a couple of issues we are still facing.
A: First of all, many of us who are on this call are not approvers; in fact, probably most of us are not approvers, so we get blocked on the final approval step, and then Bogdan is out of pocket this week. So it's just been a waiting process. That said, I'm working with him, and with Josh Suereth (hi, Josh), to get this resolved.
A: We're in the process of writing up an issue that we can file and then take up with the TC to resolve this: either we get a separate repo for the Prometheus components, or a couple of us become approvers. Maybe David, Ashbal, and anybody else who's interested, Anthony included. But that said, this is the update I have as of last night. I've been working closely back and forth, getting all our backlogs triaged so that we can give an accurate picture of how many PRs are blocked waiting to be merged, and otherwise.
A: Bogdan is aware of this, and so is Tigran; it's just that I think he's had a family emergency. And this is exactly what the project needs to address: that people can be out for different reasons.
A: Yep, yep. All right, so I just wanted to give everybody an update, so that folks are aware of where things stand. Jana, did you want to discuss…?
E: I have a couple of things to merge, and given that people are out, I'm not sure how we're continuing with this. These issues are affecting the stability milestones, yeah.
A: Yep, absolutely, and I'm hoping that they will get merged sooner rather than later. I would still appreciate it, but I don't think any of us on this call are actually on the approvers list for the collector, so I am working with Bogdan to get that fixed.
E: There is one more item, apart from the PRs to be merged, which is the next thing I want to talk about. We want to deprecate or remove the external labels from the remote write exporter, and I think ideally we should do this before the stability milestone, which is May 31st. I'm not sure if we should remove this capability completely or just un-document it, so as not to encourage people to use it. There's been some feedback on the PR that…
F: I read the issue. You're proposing to move the external labels out of the remote write exporter and put them into some other location in the configuration. Is that correct?
E: It's going to be in the receiver. There's already a PR for it; we're waiting for it to be merged.
E: No, we decided that this is the way to go: we should handle this in the receiver. The question is what's going to happen to the exporter feature. Should we remove it? Should we keep it? Should we deprecate it and say that we're eventually going to remove it? What's the next step, that is the question.
F: I think there's probably going to be a moment in the near-ish future where OTel starts to talk about what an external label really is, and if we did that, it might be something that we address in the future. I'm aware that there are essentially two uses of these external labels. One is like a true resource.
F: …the way we do, and for those I think we should just use a resource processor: you can attach any resource in your pipeline, and any consumer of the data might want those. But there's a second use of external labels that's really about how Cortex replication and deduplication happen, and those labels are not truly meant to apply to the series or the streams, or else you're actually creating streams that don't exist.
E: There's another use of external labels: if you want to add the same label to every sample. Prometheus has that as a configuration, external labels. That's…
F: …that use is more about how to configure replication and high availability, and for that I think it makes sense to keep it in the exporter. Even if OTel comes along and gets a formal concept here, configuring your remote write exporter's external labels makes sense, because that's tied to your Cortex configuration. Yeah.
F: I agree; I think that's a good question. There's a backlog of issues that I've been asked to file. One is about external labels, and the other is, well, it's about this question. So it's fine. I'd like to say that I'm going to file an issue for OTel to consider this problem of external labels: that second use case, not the first use case. Okay.
E: Okay, so the question now is: we can keep the external labels, but I just don't think we're at a stage where we've completely finalized what we want to do. If there's going to be a processor, then it's a duplication of the same thing. So should we… that's why there is a PR that just un-documents external labels…
E: …and lets them be there, but until we resolve what we want to do, maybe it's better to discourage people from using this, or to just hide the fact that it's available. I don't know; I don't have any strong opinions. I can also close that PR.
F: This touches on a larger discussion, which I've only touched the periphery of, about the architecture of the processors that we have in general. It's sort of easy to create a new processor that does a one-off thing, and it might work for some of the use cases but not all of them, and then you can imagine creating another simple processor to do the other use cases.
F: There's been this discussion in Slack, and I feel like it's left the scope of OTel at some point. We're designing a service, not an instrumentation library or semantic conventions, so I'm not sure how to scope this work. I agree that it's a good idea to have a more intentional plug-in or processor design, and then maybe you wouldn't be talking about putting external labels into an exporter, if you had a very flexible and sort of standardized processor for that stuff.
E: …about, like: hey, are we okay with breaking configuration in the long term or not, that type of question. If we don't want to break anyone, maybe we should start small, with something we can take back, you know, the functionality. So we shouldn't document it, maybe.
A: Jana, is there a short term… I mean, you have advised that in the short term we add this to the receiver, but it's…
E: Right, yeah. I mean, if anybody wants to, I have no bandwidth on this.
E: …as a part of the resource attributes, and then we transform them into instance and job. We just merged something recently, so they come in as attributes and then they're converted into labels.
F: We just take all the resource labels and promote them, and that's when the second use case becomes the issue: to all other OTLP consumers, these odd external labels that just tell you which replica you are are sort of abusing the concept, because they describe the path the data took, not a property of the instrumented thing.
E: Yeah, that's correct, and we kept the scope intentionally small, so as not to care about resource labels. It's the user's responsibility to set whatever they want, either as an external label or in their app. That's just the scope: we didn't want to be in the business of populating them, at least not for now.
B: I was going to say, one way that we might be able to address this would be to allow configuration not of the external labels themselves, but of the resources that we want to promote. Someone could then add resources and then promote them, as a two-step way to add labels to something that was coming through.
F: It does make sense. The question is whether that's the feature you want five years from now, and I think the problem we're having is that five years from now we'll want a very different design, and we're trying to say now, a month from now, that we're never going to change it, which is hard to do. Your proposal makes a lot of sense; I can imagine keeping that forever, David. But the one that I'm thinking about, that I have to file an issue about, gets back to…
F: So I think there may be a schema that you can put on top of the data, and then you wouldn't have to consult your hard-coded list of things to promote; you'd consult your schema to say which of these belong in my attributes and which ones I drop. So there are other solutions that we might find, and I don't know how to manage the problem of short-term versus long-term design planning.
F: Yeah, I keep not filing this issue that I promised; it is going to happen. Now I'm going to make a proposal that says: we've now got a schema URL concept that's been introduced. In your instrumentation library you might put a schema together that says: these are the attributes that truly matter to me, you shouldn't erase these; and for any other attribute, I don't care about it, you might just get rid of it.
F: That's two different categories, and then there's a third category of attribute: an attribute that says something, but that you shouldn't take for meaning. You should just take it as a duplicate label, like an external label in Prometheus. So there are three classes of attribute, and if we knew which class an attribute was, we could treat it automatically. That's what I think is probably a better solution for us, but I don't want us to block on that.
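The three classes of attribute being proposed could be sketched like this. It is a purely hypothetical illustration: the class names and the schema shape are invented here, not part of any OTel spec.

```python
# Hypothetical attribute classes from the proposal above:
#   "identifying" - truly matters, must never be erased
#   "droppable"   - says nothing essential, may be removed
#   "transport"   - describes the path the data took (e.g. a replica
#                   label), not the instrumented thing itself
SCHEMA = {
    "service.name": "identifying",
    "host.name": "identifying",
    "__replica__": "transport",  # external-label-style duplicate label
}

def classify(attribute):
    """Unknown attributes default to droppable in this sketch."""
    return SCHEMA.get(attribute, "droppable")

def promote(attributes):
    """Keep only the attributes that truly belong on the series."""
    return {k: v for k, v in attributes.items()
            if classify(k) == "identifying"}

print(classify("__replica__"))                            # transport
print(promote({"service.name": "api", "debug.id": "x"}))  # {'service.name': 'api'}
```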
G: Can I go back to Jana's original question of just: what do I do with this one config field in this one exporter? I'm a Googler, which means my answer should be: just deprecate it and don't provide any migration path, right?
G: That's what I should answer, but what I'm going to say is: given the discussion I heard, and given some of the long-term things here, there's a use case, which might not be supported today, where that little bit of config needs to remain. So you keep it to support just that use case that people might have.
G: But I think what I'm hearing is that we don't think that use case is solved today. People may want it; I don't know how much they'll want it. But it sounds like you want to just leave that config in place for now, either in a deprecated form or in some fashion that lets people still solve that problem. Is that correct?
F: I was going to say, there's a performance argument for having this flexibility in every single component. If you want to do this in the receiver, do it; if you want to do it in the processor, do it; if you want to do it in the exporter, do it, because it's going to be more efficient to do it wherever you want to do it.
G: Can you rename it, to get rid of the confusion around why it's used in that exporter?
E: What we're trying to do in the remote write exporter is something that exists in the Prometheus server as a whole. The Prometheus server isn't split into two pieces, right? There is no receiver and exporter; it's just one piece. So we're giving people the same functionality, with the same meaning, in both places, because we had to split the functionality into two pieces.
E: So I think renaming it is going to make it more confusing; I'd rather just keep it as external labels. But I think it's fair to keep it around, and there's no performance penalty or anything if you disable them, if you don't have any external labels. So it's not a big deal.
F: Those configurations both seem valid. It's a question of whether you have other producers or consumers of the data that want those same labels or not. You put them in external labels in the exporter if the data is only bound for Prometheus, but you might put them in the receiver if there's another path for the data.
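A sketch of the exporter-side placement being discussed, where the labels stay tied to the remote-write path only. The key names follow the contrib prometheusremotewrite exporter; the endpoint and label values are hypothetical:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: https://cortex.example.com/api/v1/push
    # Seen only on this Prometheus/Cortex path, e.g. for HA
    # deduplication; other consumers of the pipeline's data
    # never get these labels.
    external_labels:
      cluster: us-east-1
      __replica__: replica-a
```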
A: Yeah, I mean, unless there is a conflict in having both on, which I don't think there is. That is, both the receiver as well as…
E: It's just that I'm worried about bad configuration. People will copy-paste things and not realize it's in the receiver, and they'll have it in the exporter as well, and then they'll get confused. We're working on this project, and we have a very distinct understanding of what the receiver is and…
E: I don't see an average developer paying a lot of attention to these things. That's the problem, and it's a minor problem, I think.
A: I mean, we can definitely augment the documentation and make it clear, in the code as well as externally, but that's probably the best we can do unless we have a more comprehensive design.
C: Basically, the question I have while we're working through this (we're at kind of the design stage) is whether there are any suggestions for how the server should be set up. Right now we're thinking of designing it using just a normal HTTP server that serves an endpoint, to be able to make, for example, POST requests to update the list of scrape targets within a file. But I'm open to any suggestions.
A: Josh, did you… do you have an opinion, exactly?
H: …about how collector use should be structured, which would be applicable regardless of whether it was Prometheus or some other type of receiver.
D: So, do you know when this would be useful? Why would someone call into this endpoint to update it? Is it just changing the scrape config through services?
H: Yeah, so this is part of a larger project that we're trying to sequence. It will enable the OpenTelemetry Operator to construct a set of collectors and have a separate load balancer that handles the Prometheus service discovery activities, then divides the scrape targets it has discovered up amongst the set of collectors that are operating, and informs the collectors of the targets that they should be scraping.
H: The operator, or the load balancer that the operator creates, would be communicating to the collectors via this endpoint. Yeah, this wouldn't be a thing that I would expect end users normally to talk to, which is part of why there's the question around authentication and authorization and how that should be handled.
H: So this needs to rewrite a file on the file system, and one way it could be accomplished would be to have the file_sd_configs looking at files that are mapped from ConfigMaps. But I think that becomes slightly limiting, in that I'm not sure you're able to add new configs in that manner, and it also limits us to Kubernetes. Without that, I think this could still be applied beyond Kubernetes, even though the operator is the initial target of it.
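For reference, the file-based side of this is standard Prometheus `file_sd_configs`; the file paths here are chosen purely as an example:

```yaml
scrape_configs:
  - job_name: dynamic-targets
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json   # rewritten by the endpoint above
        refresh_interval: 30s
```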
F: I agree this is a question for the collector SIG. It seems to be more about operations and authentication.
C: Okay, yeah, that's fine. I'll ask the collector SIG then. But thank you so much, everyone, for your input.
A: And I think the authentication question, also: let's bring it up in the collector SIG, because it…
G: All right. So I think both Josh and I need to talk about some of these. The first one here: I was kind of curious. I know in OpenMetrics there's this thing called a gauge histogram, which is different from a histogram, and I was curious if anyone knows how we're handling gauge histograms today, since, as far as I know, we don't actually have a correct way to model them in OTLP.
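For reference, a gauge histogram in the OpenMetrics text format looks roughly like this. The series is illustrative; note the `_gcount`/`_gsum` suffixes in place of a regular histogram's `_count`/`_sum`:

```text
# TYPE queue_age_seconds gaugehistogram
# HELP queue_age_seconds How long items have currently been waiting in the queue.
queue_age_seconds_bucket{le="1.0"} 5
queue_age_seconds_bucket{le="10.0"} 8
queue_age_seconds_bucket{le="+Inf"} 9
queue_age_seconds_gcount 9
queue_age_seconds_gsum 42.5
```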
F: My understanding is that we don't produce those points. Those points are actually never produced by a Prometheus SDK; they're only produced by recording rules that are aggregating data inside a Prometheus server, and so we haven't had to address the question. There is this standing issue about whether an instantaneous temporality would address it, or what Bogdan proposed at one point: a duplicate histogram point that's the same thing but called something different. I prefer the first option.
I: In relation to Prometheus, these do exist and come from exporters; they're just pretty rare. The vast majority of them are in PromQL, but I think the first case was, like, Postfix: someone wanted to see, hey, for the current mail queue, how long have things been in there? So they can come from exporters.
G: I expected these to be kind of rare. What I'm curious about is: do they show up today in OpenTelemetry's receiver, and if so, how do they show up? That's all I'm trying to find out. So if OpenMetrics and Prometheus have them in receivers, does that mean our implementation of the receiver gets these things? That's how we can get a test case going.
I: Yeah, you basically have to generate one by hand, because even the API for these hasn't been quite figured out, at the developer-API level, because they're that rare. But if you look at the OpenMetrics spec, there's an example there.
G: Yeah, I saw it in the OpenMetrics spec. I'm trying to figure out… all I want is a unit test, if you will, of the Prometheus receiver in OpenTelemetry, where I can make a test that fails when it gets one of these gauge histograms, so I understand what OpenTelemetry is actually doing with these things when it sees them, if it sees them. Okay, so it sounds like maybe the Prometheus scraper is not going to be getting one of these things.
H: …I just found it in my local copy; let me find a link to it. But there's a comment in there that says: dropping support for gauge histogram for now, until we have an official spec implementation. And so that case is just commented out.
G: Okay, is there a bug open specifically around this? That's not the… okay. So, as Josh knows, I actually disagree with the instantaneous temporality thing as well; Bogdan convinced me, but we'll talk about that later. What I want is a specific bug around gauge histograms, where we can make sure that we support them, and then we can argue about how to support them in different bugs and all that kind of junk. I just want to make sure we have a bug open around this.
A: I don't think so, Josh. I think you should open one, because I've looked at the backlogs and I didn't see one.
G: We should have a test to make sure that we can handle this, but it's probably not a P0 in any way; there are other, more important fish to fry. But we should make sure we resolve this. Okay.
G: Okay, that sounds completely reasonable. I'll open a bug and take the AI for that.
F: You understand it better than I do. Thank you, Josh. So, I didn't get to writing an issue for this backlog item, but I'll briefly explain it. Two weeks ago in this meeting, I think, Brian explained that the use of staleness markers is totally independent from the use of start times for beginning and resetting time series, and that all made sense. We're trying to make sure that the OTLP protocol can be used both as a push for end-user metrics, but also as a push from collector to SaaS and so on. So we want OTLP to be capable of conveying staleness, and we have two potential ways of doing that.
F: One is using start times and end times as a sort of implicit means of conveying gaps: we tried to scrape something and it wasn't there, so we're going to stop recording time ranges that cover this moment in time, and then, when that target comes back, we're going to resume covering the time range with valid data points.
F: Then, when you see that stream of data, you're going to see there was a gap, because there was a time gap, essentially. But that's an implicit form of detecting staleness that we might choose, and it would work in a push-based system. In a pull-based system you have an actual explicit event that says "I wasn't found", with an exact timestamp for when the target wasn't found. So I'm comparing and contrasting the implicit versus the explicit mechanism.
F: The explicit mechanism requires a NaN value or something like that, or a different protocol field, and we don't quite know how to do that: whether we want to only have the implicit mechanism, or whether we want to have the explicit mechanism, and if so, how. NaN values seem to work, but a lot of people are yelling at me about them, and NaN values are also problematic when you think about histograms and summaries. How do I say the histogram wasn't there, and so on?
F
So
the
questions
are:
do
we
need
an
explicit
staleness
form
in
otlp
and
if
so,
how
should
we
do
it?
And,
if
not,
is
it
okay
to
just
rely
on
these
implicit
boundaries,
get
time
gaps
which
tells
us
when
something
is
absent,
but
not
exactly
when
it
was
absent?
It
just
says
during
this
range
the
thing
was
gone:
it
might
be
resetting.
We
don't
know
that
was
my
issue.
I
don't
have
a
solution
and
I
haven't
written
it
up
any
more
than
that.
I'm
looking
for
feedback
and
interest.
I: Yeah, so there are a few cases. One is that there was a scrape or push and that metric was missing, in which case, hey, you have the timestamp, because it's the timestamp of that scrape or push. The other case is the one where that scrape or push doesn't happen at all, in which case you basically use, for a scrape, when you tried to scrape (at least in Prometheus), and for a push, when the push probably would have been, I guess.
F: That's the question. I think there's no more discussion here; we might want to just write it up in an issue and let others comment.
F: Yeah, I'll file a bug today or tomorrow that says: staleness markers, do we need them to be explicit? Question mark. Because we could imagine just using the implicit values; we're just going to lose some timestamp granularity, something like that, and I don't know that that's going to be acceptable.
A: I mean, Brian's very good about commenting on the issues, especially in the Prometheus workgroup, so…
E: Do you see this as in the scope of the collector, like, if there were a staleness-marker type of thing that you could put on the wire? I mean, where else would this be a relevant thing?
F: I'm not sure, but that's what I was kind of getting at when I said this is at a super meta level. Really, we get into the question of how you should monitor your service, and the use of staleness markers is pretty key if you take the Prometheus worldview, which many have, and which says that I need to know explicitly when a series goes missing, and you can't make that observation about yourself.
F: So it has to be a third party that puts that in, and that's why it's not legal to put a NaN value in OpenMetrics, but it's semantically meaningful, and required, in PRW. I hope I've answered your question.
I: Or, more accurately, because it's all text, the text parser. There is one NaN, the Go default NaN, in Prometheus internally, and then I chose a different NaN, a signaling NaN, basically the one used when there's an error in floating-point math. It'll use a signaling NaN, so it's less likely to cause breakage later on. But it's basically two different bit patterns internally.
F: I like NaNs; I think they're great. They type as numbers, but they semantically break everything, and that's what they're there for. So I think it's okay to use NaN, and I would defend that position, but I don't know that everyone likes them, and I don't know that everyone will agree. So I think you're doing the right thing with NaN, and they should be considered valid values; they just mean exactly what they mean: that there's an error here. That's…