From YouTube: 2021-07-28 meeting
B: Hi everyone. It looks like Alolita is having some issues with Zoom and is going to restart her computer, so I guess I will kick us off. Please add your names to the agenda if you haven't already, and add any items you wish to discuss so that we can get to them. Josh, it looks like you have the first item on the agenda.
C: Yeah, Alolita asked me to come here to talk about some of the things in the metrics data model. I'm not sure how many of these we can actually make progress on as discussions versus which are just a heads-up. Please take a look and tell us what you think, but some of these we should definitely talk about.
C: First off, we have some metrics protocol changes coming. We had an issue where we attempted to define histograms such that the sum would only be present when it is monotonic, so that we know we can export it to Prometheus. However, we're using protocol buffers version 3, which means the field reads as zero when it's not present; we can't determine whether it was set or not. So there's an open pull request to add a new field higher up that will tell us whether or not the histogram has monotonic sums.
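The proto3 ambiguity can be illustrated in plain Go. The type and field names below are hypothetical stand-ins, not the actual OTLP message definitions; the point is only that a proto3 scalar's zero value is indistinguishable from "absent", while an enum whose zero value means "unspecified" is not:

```go
package main

import "fmt"

// proto3 scalars have no presence: decoding a message where the field was
// never set yields the zero value, identical to an explicit false.
type oldPoint struct {
	isMonotonic bool // zero value "false" cannot be trusted
}

// The discussed fix, sketched with hypothetical names: a separate
// three-state flag whose zero value explicitly means "unspecified".
type monotonicity int

const (
	monotonicityUnspecified monotonicity = iota // the proto3 zero value
	monotonicityTrue
	monotonicityFalse
)

func describe(m monotonicity) string {
	switch m {
	case monotonicityTrue:
		return "monotonic: safe to export to Prometheus"
	case monotonicityFalse:
		return "not monotonic"
	default:
		return "unspecified: assume monotonic (pre-change data)"
	}
}

func main() {
	var p oldPoint // a decoded message where the field was absent
	fmt.Println(p.isMonotonic)             // false, but absent or false?
	fmt.Println(describe(monotonicity(0))) // the zero value is now explicit
}
```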
C
So
we
know
whether
or
not
it's
safe
to
export
to
prometheus,
and
the
assumption
is
that
everything
coming
out
of
the
protocol
prior
to
this
change
will
have
monotonic
sums
and
that
zero
is
not
something
you
can
trust
just
as
a
heads
up.
If
you
want
to
take
a
look
that
is
this,
this
first
link
here
or
no.
It's
actually
the
second
link
there,
but
that's
like
the
easier
thing
to
describe
so
yeah.
C: That's one of the changes coming to the data model, and there was a question of where we should be tracking whether or not our histograms have monotonic sums: whether it should be at an outer point, before labels are applied, so basically at the instrument level, or at the actual metric data point level.

C: That was one of the open discussions. We decided to do it at the instrument level. So when OpenTelemetry reports a histogram, the instrument itself will determine whether or not the sum is monotonic, and then all the time series for that instrument, across all labels, will be either with monotonic sums or without monotonic sums.
C: Okay, the first link here is around trying to understand staleness markers. This is from Josh MacDonald, and it's a proposal where we'll have an explicit flag in the OpenTelemetry protocol to know whether or not a point is stale. Josh, do you want to speak to this at all, anything specific to call out?
C: Okay, right. So this is again more of a heads-up; I haven't been spending enough time doing cross-team discussions, so I just want to give a heads-up on some changes. So that's there for discussion. The last one, I think, is the most significant and worth a lot of discussion. Again, Josh, please jump in here, but there's a proposal for a new histogram format.
C: In OTLP, it's a base-2 exponential histogram protocol. There's been a lot of discussion, although I think I'm being disingenuous when I just say it's base two, so I'll let Josh take it from here.
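As a rough illustration of why "base two" is an oversimplification: in the exponential scheme a scale parameter subdivides each power of two into 2^scale buckets, so the effective base is 2^(2^-scale). The helpers below are a sketch of that general idea with hypothetical names, not the exact boundary rules of the proposal under review:

```go
package main

import (
	"fmt"
	"math"
)

// bucketIndex maps a positive value to a bucket, where scale controls the
// resolution: base = 2^(2^-scale), index = floor(log_base(value)).
func bucketIndex(value float64, scale int) int {
	// math.Ldexp(1, scale) computes 2^scale
	return int(math.Floor(math.Log2(value) * math.Ldexp(1, scale)))
}

// lowerBound is the inverse mapping: boundary = 2^(index / 2^scale).
func lowerBound(index, scale int) float64 {
	return math.Exp2(float64(index) / math.Ldexp(1, scale))
}

func main() {
	// scale 0: plain powers of two; scale 3: 8 buckets per power of two
	fmt.Println(bucketIndex(5, 0)) // 5 lies in [4, 8), index 2
	fmt.Println(bucketIndex(5, 3))
	fmt.Printf("%.3f\n", lowerBound(bucketIndex(5, 3), 3))
}
```

Raising the scale trades index range for resolution, which is why the transport discussion (how compactly to ship these buckets) matters.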
E: So I put up this issue I called a referendum. I was looking for more feedback; I did get more feedback, and I did close that issue. I have sent out a PR that took Björn's proposal almost verbatim, dropped it into an OpenTelemetry proto file, and sent that out. The first feedback came in yesterday night from uk at New Relic, who raised a point of contention, basically saying: this is a transport protocol, not a storage protocol; we should not be trying to compress as much. I've read that and started to think about it, but I really want this to be about the wider community. Björn's recommendation was for Prometheus and OpenMetrics, and it probably makes sense to take uk's advice and simplify the transport here. But it's open for review right now.
C: ...this format to OpenTelemetry, and we want to make sure it's Prometheus-compatible going forward, and that we're in alignment on the right direction. If you need links to all the discussion, I think a lot of it's in the PR, and there's a lot of good reading there, so please take a look.
C: I don't know if it's worth discussing any specifics here, or if you want to ask questions, but mostly I just wanted to give a heads-up on OpenTelemetry protocol changes around Prometheus compatibility, specifically issues that we saw and want to adapt and fix.
E: Yeah, I think it's most important that we don't decide to adopt something that's compatible with the draft and then Prometheus moves radically in a different direction from there. But I don't think that's going to happen, and you're right, we can convert between a compressed, delta-encoded input and an uncompressed output, no big deal. There will be a question, when we go to convert back the other way, which is going to have to happen, of what algorithm to use.
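The conversion being described, between a delta-encoded bucket-count array (as in the compressed proposal) and plain absolute counts, is lossless in either direction for a single snapshot; the subtlety raised next is about behavior over time, not about one conversion. A minimal sketch with hypothetical helper names:

```go
package main

import "fmt"

// deltaDecode turns differences back into absolute bucket counts by
// keeping a running sum.
func deltaDecode(deltas []int64) []int64 {
	counts := make([]int64, len(deltas))
	var running int64
	for i, d := range deltas {
		running += d
		counts[i] = running
	}
	return counts
}

// deltaEncode stores each count as the difference from its predecessor.
func deltaEncode(counts []int64) []int64 {
	deltas := make([]int64, len(counts))
	var prev int64
	for i, c := range counts {
		deltas[i] = c - prev
		prev = c
	}
	return deltas
}

func main() {
	counts := []int64{3, 5, 5, 9}
	fmt.Println(deltaEncode(counts))              // [3 2 0 4]
	fmt.Println(deltaDecode(deltaEncode(counts))) // round-trips to the input
}
```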
F: Yeah, the main trick there is as you're doing that over time, because on any one snapshot it doesn't matter; it's when it varies over time that it might be an issue for something downstream. But yeah, as you said, that's something I've dealt with when it has come to us.
C: Cool. So please take a look and comment on the issues if you have any concerns. But I guess the TL;DR is: the new histograms will be freaking awesome when they're finally through, and I'm really excited for all of us to adopt a new style of histogram. Second thing, the staleness markers: I think those will have implications in how we do transformations in the collector, but they will hopefully lead to a much more robust model for going from Prometheus into OTLP and back to Prometheus.
B: All right, thank you, Josh, and I'll remember to blame you when I deal with sums. Next up, it looks like we've got a couple of PRs. These are for the target allocation implementation in the OpenTelemetry Operator.
G: No, there's nothing much to add to that. We have submitted the PRs. Just one thing: there's one more small enhancement, an addition that actually enables the collector to scrape the targets, so that will be another PR which will be submitted shortly. That's the only addition; other than that, everything else is complete, and I'm just waiting for the reviews.
B: Yeah, great. So the last step of this is the configuration rewriting to tell the collector to use HTTP service discovery to go hit the target allocator to get its targets. Hopefully we'll have that PR up shortly as well, and then we'll be able to string all of these capabilities together.
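The rewritten scrape configuration would plausibly look something like the fragment below, using Prometheus's `http_sd_configs` mechanism; the service URL and job name are illustrative assumptions, not the operator's actual output:

```yaml
scrape_configs:
  - job_name: otel-collector
    # HTTP service discovery: the collector's Prometheus receiver asks the
    # target allocator which targets this instance should scrape.
    http_sd_configs:
      - url: http://target-allocator-service:80/jobs/otel-collector/targets
```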
B: We have recently added an implementation that attempts to keep track of the label sets it has seen in the most recent scrape, and it will emit staleness markers for label sets that it did not see in a subsequent scrape. What he's seeing is a single job that has two targets that end up being scraped independently.
B
If
this
is
incorrect,
behavior,
because
we
are
currently
passing
the
stillness
tests
in
the
compliance
test,
suite
and
two,
what
should
the
behavior
be
here?
Is
this
actually
the
correct,
behavior
or
should
stay
on
this,
be
only
tracked
for
those
things
that
were
attempted
to
be
scraped
during
a
scrape,
and
if
so,
do
we
have
some
way
of
identifying
distinguishing
between
things
we've
seen
before,
but
we
haven't
attempted
to
scrape
again
and
things
that
we
have
subsequently
attempted
to
describe.
F: Yes. From a semantics standpoint, in Prometheus staleness is per target, so if you have two completely independent targets affecting each other, that's a bug on your end. Now, there's a question of whether compliance should detect that; I guess the compliance tests need to look for spurious stale markers, because that's what this is, or would be, from that standpoint. But yeah, it's just like a weird bug. It's because...
A: It currently doesn't, yeah, it currently doesn't.
B: Okay, yeah. So it sounds like that aligns with my thinking about what is going wrong here, but perhaps the compliance test suite could be updated to catch this. This sounds like the sort of thing that's easy to get wrong in an implementation; if we've done it thinking we've done it right, I imagine the next person who comes along and tries it may also do something similar, so having the test suite catch that would be good.
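The per-target semantics described above, and the bug of two targets in one job marking each other's series stale, can be sketched as follows. The names are hypothetical, not the collector's actual implementation; the point is that the seen-series cache must be keyed by target, not kept job-wide:

```go
package main

import "fmt"

// tracker remembers, per target, the series label sets from that target's
// most recent scrape. Keying by target keeps independent targets from
// marking each other's series stale.
type tracker struct {
	seen map[string]map[string]bool // target -> set of series
}

func newTracker() *tracker {
	return &tracker{seen: map[string]map[string]bool{}}
}

// scrape records one target's series and returns the series that should
// now get staleness markers for that target.
func (t *tracker) scrape(target string, series []string) (stale []string) {
	cur := map[string]bool{}
	for _, s := range series {
		cur[s] = true
	}
	for s := range t.seen[target] {
		if !cur[s] {
			stale = append(stale, s)
		}
	}
	t.seen[target] = cur
	return stale
}

func main() {
	tr := newTracker()
	tr.scrape("target-a", []string{`up{instance="a"}`})
	tr.scrape("target-b", []string{`up{instance="b"}`})
	// target-a scraped again with the same series: nothing goes stale,
	// even though target-b's series are absent from this scrape.
	fmt.Println(len(tr.scrape("target-a", []string{`up{instance="a"}`}))) // 0
}
```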
B: They are things with a Prometheus endpoint. It sounds like he's got multiple collectors up and running, and he's trying to scrape their Prometheus endpoints. Okay.
B: Okay, thanks for that feedback. I think I know where to direct the inquiry, then. And I think that's all we have on the agenda today, unless anybody else has anything they want to bring up. Alolita, you've joined us now; is there anything you want to discuss?
D: No, I'm sorry, but I had already listed some of the items that we had planned. I don't know if Josh was able to join us or not, but he was going to give us an update on some of the histogram discussions that are ongoing.
D: Right, cool, okay. I'll catch up on the recordings, but again, it was just to make sure that we are aligned with the larger histogram discussions that are ongoing, and also to make sure that the Prometheus exponential histogram support is covered in that. So that's all I had. The only other question I had is for Richard: are there any other updates? I think Anthony was already covering the compliance tests and the staleness marker implementation.
H: So there are no updates to the compliance suite at the moment, but once again, anyone who wants to contribute is more than welcome; for example, a staleness test per target would be a great addition to just put upstream in the compliance suite, so that everyone benefits from this kind of thing. As to formalizing it: in 40 minutes the CNCF governing board will meet, and we expect them to formally accept the compliance program during this meeting. So we should have an update today, by your time.
D: Okay, and what is the program, in addition to the tests that exist? Is there a formal document?
H: There is a write-up of the thing; let me see what it looks like as of right now, just a moment.
H: There it is, in chat, and I can also put it into the meeting notes. So that's the draft text which we submitted. I know Chris A had some thoughts around making changes, but I didn't see any of them, so maybe there will be changes before it's officially stamped. As to how this works on the legal side, and what forms you have to fill out and everything, that's something we'll be working on with Linux Foundation lawyers as soon as the CNCF governing board officially signs the thing off.
D: So I have a question here. I'm looking through the link that you shared, Richard, and there is this idea of a certification, right, and displaying the so-called Prometheus Compatible logo.
D: So who is this targeted at? Other open source projects, or vendors, or both? What's the target here?
H: It's projects and vendors. Again, this is like the Kubernetes thing. So, for example, I fully expect Cortex to be one of those, or Thanos and such. I also fully expect that the Grafana Cloud offering, or AWS AMP, or all those other offerings will be certifying themselves, for obvious reasons.

H: So both the projects and the commercial offerings would be carrying this stamp of approval.
D: Okay, and what are the assets that need to be submitted?
H: That is precisely what we will be working on with the Linux Foundation lawyers as soon as the CNCF governing board actually signs off on the text. What I do know is that as soon as this is officially stamped, we also move forward with the creation of the actual logo and the trademark registration of the logo. That's not written in here, but it will be part of the legal work; it's also already on the Linux Foundation trademark website.
H: You get this asset, which you can put on your website, in your repository, whatever. Obviously the Linux Foundation has the trademark on that thing as well, and the Linux Foundation can then enforce trademark rules as to who is allowed to use this and under what circumstances. That is the legal vehicle to get everyone to actually follow the thing and be required to follow it: run the test suite and not lie about it.
H: Because if anyone plays not nice, then trademark law is the vehicle where you can actually do something about it. You also need to sign certain boilerplate that you won't cheat in the tests, and so on. But again, none of this is finalized yet, because we will only create it once the governing board signs off.
D: Okay, so then you have this recertification clause, which says certification of conformance is valid for 12 weeks, or two minor Prometheus releases. So you're requiring everybody to rerun conformance and resubmit every 12 weeks?
H: Yes. The precise details again are not yet hammered out on how this works, but we were asked to deliberately have a short and aggressive cycle, in particular as this is new; of course, one of the worst things which could happen is what we might find out.
H
Also,
how
many
people
submit
this
like,
for
example,
I
know
that
grafana
labs
obviously
has
interest
in
attaining
this
stamp.
I
am
fully
expecting
that
aws
and
everyone
else
on
this
call
here
will
will
have
a
commercial
interest
in
attaining
this.
But
what
about
beyond
this?
Will
it
be
500
companies,
or
will
it
be
five
like
those
kinds
of
things
you
just
need
to
figure
out
also,
for
example,
for
the
prometheus
remote
right?
H: ...it's relatively easy, because you can just run the test suite, and you can just tell us where to run it again, so we verify your results. For PromQL, you need someone to actually interpret the results, and you need someone who actually puts the data into your test system, and so on. This is not fully automated, and again, it's a great thing to ask your managers to put resources against, because ideally, if someone streamlines the testing process for, let's say, PromQL for themselves, they would be doing it in a way that streamlines it for everyone else.
D
Okay,
I
mean
that
makes
sense,
but
I
think
that
12
weeks
is
very
short
for
commercial
services.
I
mean
for
projects
it
can
be
automated,
but
for
services,
which
you
know
again,
typically
large
services
don't
upgrade
within
within
12
weeks.
D
Okay,
I
mean
that's
very
helpful.
Thank
you.
I
mean
that's
something
that
I
think
is
useful
for
everyone
who
has
any
kind
of
you
know
remote
right,
implementations
that
they're
either
using
in
services
or
you
know,
even
hotel,
where
we
have
a
whole
pipeline
for
it
vishwa
do
you
have
any
questions
or.
J: So I don't know where to start. Is there anything that is officially published in the Prometheus documentation that we can use as a start?

J: I'm not sure what would be the list of things that we need to check on the receiver to say that it's Prometheus compliant.
H: There currently is no test, but this is also something where all of those things will need to be built out over time. If you test this yourself, by all means open source it, and ideally upstream it; we have a perfect place in the organization where this thing can live, if you wanted to. We also have things like alerting, and certain properties of the TSDB, not all the properties of the TSDB, but asserting that if you store a value, you don't lose precision or anything.
D: The thing, though, Richard, and I saw your response on the issue as well, is that we have an open issue in our backlog, and again, it's super useful to have a receiver conformance test: what's valid and what's not, what's the expected behavior, so we can itemize.
D
You
know
a
few,
but
you
know
at
the
end
of
the
day
the
prometheus
project
you
know,
maintainers
are
probably
you
know,
should
be
the
experts
on
what
the
behavior
of
the
receiver
should
be.
For
you
know
all
the
use
cases
that
prometheus
supports
and
I'm
wondering
if,
at
least
at
a
high
level,
those
use
cases
could
be
itemized
so
that
we
could
add
tests
for
them
right
because
again,
we
can
absolutely
add-
and
you
know,
contribute
those
tests.
D
We
have
kind
of
you
know
guessed
through
some
of
them
based
on
the
documentation
that
exists,
but
I
think
that
understanding
the
the
valid
use
cases
for
prometheus
behavior
is
something
that
would
be
useful
to
have.
Even
if
there
are
links
to
you
know
existing
docs
or
or
anything
else
that
you
can
point
us
to.
H: The primary one is this one; I think we saw that in the past, and I'm also putting it into the chat. When it comes to remote write, that is the reference, of course.
H
The
way
we
work
in
prometheus
team
is
as
soon
as
we
know
that
something
is
imminently
coming
we
create
a
design
document
which
oftentimes
is
simply
empty
until
someone
starts
filling
it
out,
one
that
design
document.
It
is
at
a
point
of
of
review.
It's
shared
within
within
the
developer
community
within
premier's
team,
blah
blah
blah.
H
We
give
feedback
because
we
iterate
until
until
there
is
consensus
that
yes,
this
is
the
way
forward,
and
then
you
have
this
this
itemized
list
to
to
implement
against,
so
to
make
it
specific,
because
I
know
this
is
super
fluffy
to
make
it
specific.
If
you
want
to
have
that
receiver
test
suite
the
best
way
I
can
literally
during
this
call,
create
an
empty
design.
D: And I know it's really building this from the ground up, but...
J
That's
okay,
but
I
think
that
would
be
a
great
way
to
because
I
just
don't
want
to
write
tests
that
will
pass.
I
want
to
write.
H
This
right,
actually
exactly
yes,
absolutely
and
again.
The
thing
is
we
we
create
this.
This
design
document
get
consensus
on
what's
in
the
design
document
and
by
this
process.
You
already
know
that
those
are
the
items
which
people
care
about,
and
the
beauty
of
this
is
like
that's
part
of
of
what
we
anticipate
happening
with
the
with
the
formalization
of
this
program.
H
Let's
say
there
is
one
implementation
which
doesn't
implement
the
remote
write
receiver
correctly.
Obviously,
your
management
will
have
a
strong
business
interest
in
making
that
fact
public.
Of
course,
your
sales
people
will
care
about
this.
So
there
is
an
intrinsic
multiple,
intrinsic
extrinsic
motivation
within
your
company
and
within
all
the
other
companies
trying
to
attain
compatibility
to
keep
each
other,
honest
and
and
to
just
uplift
everyone
else.
By
having
proper
tests
I
mean
oftentimes,
it
will
just
be
overlooked.
H
We
we
know
a
few
cases
where
certain
bits
and
pieces
of
of
prometheus
ecosystem
are
deliberately
not
followed,
but
just
making
this
transparent
by
this
you
create
this
incentive
to
to
also
let
people
actually
assign
time
time
to
this.
J
Yeah
make
sense
and
rich
so
where,
where,
where
will
be
the
discussion
about
this
happening,
I.
D: I think, Vishwa, what Richard said is that he'll create a doc on the Prometheus compliance repo, and then we can all add to it.
H
Precisely
I'm
going
to
share
the
link
in
within
seconds,
so
there
we
go
so
this
here
will
be
the
economical
plates
I'll
copy
in
a
little
bit
of
boilerplate
and
such
in
a
minute
or
two
okay,
and
then
the
discussion
about
this
happens
on
prometheus
developer
mailing
list
and
the
best
way
to
to
get
started
is
to
just
subscribe
to
the
mailing
list
and
just
send
email
with
hey
here.
I
am,
I
want
to
implement
this
in
that.
I
will
be
writing
this
in
this
document.
D
Okay
sounds
good.
I
mean
again
thank
you
for
doing
this,
because
richard
I've
added
the
link
in
the
in
the
issue
that
we
have
so
cooks
can
folks
can
also
start,
you
know
adding
to
it,
and
we
will
definitely
do
that,
and
this
will
be
a
great
you
know
very
useful
for
us.
Also
as
we
you
know,
we
really
want
to
make
sure
that
every
component
that
is
in
the
prometeous
pipeline
for
open
telemetry,
especially
in
the
collector
with
the
receiver
and
the
and
the
prometheus
remote
right
exporter,
are
both.
D: Okay. And I think, if we're done with this: you had another question, do you want to bring that up? Yeah.
L: So I'm currently exploring the Prometheus receiver, and I just want to understand: can I use it as a drop-in replacement for my Prometheus instance if I'm collecting both traces and metrics? I don't see too much detailed information on the README of the Prometheus receiver, so if anyone can share resources or give...
B: Okay, so I think, at least with respect to Alertmanager, that's going to operate on data that's stored in the TSDB, and so that wouldn't be something the receiver would handle. Is that correct, Brian? You'd probably correct me if I'm wrong there. And I'm not familiar enough with recording rules to comment on whether those would be a good fit for the receiver.
B: Anything downstream of that would be the responsibility of some other component, so an analog to Alertmanager would have to come in as either a processor, or something that happens in data storage after an exporter has exported the data.
L: But if my understanding is right, the Prometheus receiver within the collector is a StatefulSet, or there is a deployment available for it to be a StatefulSet, so it has all the data within it, right? So does it, by default, do any recording rules and alerting rules?
B
It
does
not,
and
it
does
not
currently
store
any
data.
The
receiver
does
not
the
the
collector
can
be
deployed
as
a
stateful
set.
This
is
primarily
to
enable
the
use
of
a
wall
in
exporters
or
the
like.
We
don't
currently
have
the
prometheus
receiver
store
any
of
the
data
that
it
receives.
It
passes
it
down
a
processing
pipeline.
I
I
just
had
one
quick
question,
so
I
was
trying
to
find
the
prometheus
version
that
the
receiver
is
using
just
to
see
what
exact
config
settings
are
available.
Yeah.
D: I think that's a very good question, because this discussion also came up in the spec SIG earlier this week. Today we use the User-Agent header to convey the version of the collector in the Prometheus remote write exporter, so if you look at the code, that is something we convey as part of it.
D: Now, what we don't convey is a receiver version number; usually that is in sync with the collector release version number, and that's how it's correlated. But we are working on a set of builder tools that will actually help in having a manifest to correlate the different components that are bundled in a release, including the Prometheus receiver, so that there will be a clear one-to-one mapping. Is that useful, or are you looking for something different?
J: No, that's super useful; that's what we're looking for.

D: If you look at the User-Agent header and how it's constructed, you can also look at the Prometheus remote write exporter, which is in core, and then the AWS Prometheus remote write exporter, which actually adds some more information in the User-Agent header. Take a look at that, and again, let's discuss; I think we should definitely standardize what we are passing as a basic set of information. Today we are picking up the OS and the collector version number, whatever is available, and passing that in the User-Agent header. If there's anything else that we should add, then let's discuss that for sure.

J: Okay, sounds good, yeah, I'll take a look.
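A sketch of the kind of User-Agent composition being discussed. The exact strings the collector core exporter and the AWS variant emit differ, so the layout and field choices below are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"runtime"
)

// userAgent builds a header of the form "product/version (os/arch)",
// a hypothetical layout combining the pieces mentioned in the discussion:
// a product name, its version, and build-environment details.
func userAgent(product, version string) string {
	return fmt.Sprintf("%s/%s (%s/%s)", product, version, runtime.GOOS, runtime.GOARCH)
}

func main() {
	fmt.Println(userAgent("otelcol", "0.31.0"))
}
```

A receiver on the other end can parse this header to learn which sender version produced the data, which is the correlation problem being raised.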
B: The import paths have not changed, and Go modules would require a changed import path for v2 as opposed to v1, so we're importing them through the use of version pinning, effectively saying: give me the latest 1.x version, but actually I want the commit that was made on 2021-06-21 at 15:05:01 with this commit hash. Which makes it kind of hard to say: okay, what actual Prometheus version am I running?
B: If I want to go to the Prometheus releases and say, you know, is this 2.26 or is it 2.27, comparing that can be kind of hard, and I don't know if there's a really good answer to that, other than: go look up that commit and see if it ties to a version.
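Concretely, what that pinning looks like: Prometheus's last v1 tag was v1.8.2, so pinning to a commit yields a Go pseudo-version built from that tag plus the commit timestamp and hash. The hash below is a made-up placeholder, not a real commit:

```
// go.mod fragment (illustrative; the hash is a placeholder)
require github.com/prometheus/prometheus v1.8.2-0.20210621150501-0123456789ab
```

Mapping such a pseudo-version back to a Prometheus release (2.26 vs. 2.27) means resolving the commit hash against the repository's tags, which is exactly the manual step being complained about.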
J: Yeah. The reason why we hit this was that people were authoring scrape configuration, and there were config settings that Prometheus actually understands and the collector did not understand.

J: So it would be great, I think, if as part of the Prometheus receiver, or even the collector, the version were available in some way, for example as part of a header, so that we can consume it anywhere downstream.
D: Yeah, I agree; that's a good problem to solve, because I think we do need to understand at least which Prometheus version each collector version is pinned to. Usually we kind of reverse engineer that from the release notes and the changelog, but yes, it's not obvious, other than to the folks who are working on this day-to-day.
D
What
what
version
you
know
the
collector
releases,
our
fin
too,
so
we
can
do
it
both
ways
right.
We
can
actually
make
sure
that
the
collector
change
log
is
clearly
identifying
also
the
version
for
you
know
dependencies
such
as
prometheus
or
jaeger
or
zipkin,
which
are
you
know
core
to
the
project.
Interoperability
support
to
be
clearly
tagged
to
a
particular
release,
exactly.
B: The commit date is authoritative; we know we can trust it. It's just not terribly useful or human-friendly, because of the use of commit pinning. Given the way this works, Richard, do you know if the Prometheus project has any plans to update the import path to a v2, to allow using tagged versions as opposed to commit pinning?
F: So this has been a long discussion, and the fundamental problem is that Go modules are just completely unsuited to the sort of thing Prometheus is doing, which is releasing software rather than a library. There have been various discussions, and I think if you look at some of the previous dev summits, you'll find discussions there. The last consensus, I believe, and this is from a few years back, was: hey, would there be snapshots? You can go and follow those.
D: Anthony, I guess we'll have to think about, based on the tooling that we're building for OTel, how at least some of this could be addressed on the collector's side.
F: Yeah, I think I've banged my head against that and ended up just using a tag name. I can still use 2.6-whatever; I think that'll work, yeah.
D: Yeah, exactly. As long as it's tagged somewhere which we can point to, I think that's at least an immediate solution.
J: And I also think we have the same problem for the OTLP versioning in the collector project as well. For example, we have an OTLP server, and we actually use the Prometheus receiver and the OTLP exporter to export the metrics to the OTLP server, and we found, a few weeks back, that the collector had actually moved to OTLP 0.8.0.

J: I can go and find that in the release notes, but I had to go through, like, five releases to figure out which one changed where, to know what version was currently being used.

J: And then the version for whatever the key...
D: That's something we are actually working on for the collector: the tooling and the dependency tracking for the different components. Having all of that available will take some time, but we do track OTLP versions for the collector, and we do understand that, as you are moving from one version to the other, as you can see even in the collector's backlogs, you cannot just jump from, for example, 0.5 to 0.9.0, which is the current OTLP version. You have to actually go through and upgrade at least the collector...
D
You
know
components
version
by
version
and
that's
just
because
of
exactly
the
there
are
breaking
changes.
You
know
that
are
being
introduced
in
the
in
the
spec
and
and
the
proto
changing
and
otlp
itself.
So
I
mean
you
have
to
track
it
right
now
and,
and
it
is
manual
very
much
so
I
mean
it's
it's
kind
of
again
tribal
knowledge
that
you
know
folks
who
are
directly
working
on
this
are
understand
and
everybody
else
has
kind
of
got
to
figure
it
out.
D: But this is a good point: can you create an issue for this on our backlog? Because then it helps us with the tooling requirements.
B: Yeah. So that'll be aiming towards making it easier to manage releasing the components in the collector and collector-contrib at varying stability levels. I think the work that we're going to be doing with the collector builder will help as well.
B
If
we
can
embed
the
manifest
that
was
used
to
build
a
collector
into
the
collector
so
that
it
can
be
programmatically
interrogated
or
exposed
in
a
z,
page
or
something
like
that
could
be
super
useful
so
that
someone
could
come
along
to
a
collector
and
say
hey
what
components
do
you
actually
have
in
you?
I
think
that
would
be
useful
in
terms
of
otlp,
I
think,
hopefully
soon
we
will
get
to
otlp
0.9,
which
is
the
point
at
which
backwards
compatibility
should
stop
breaking.
B
I
think
that
was
where
the
the
metrics
data
model
was
was
declared
stable
and
going
forward.
There
would
be
only
backwards,
compatible
changes
introduced
since
then,.
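The embedded-manifest idea can be sketched as follows. The component names and versions are hypothetical placeholder data; in practice a builder tool would generate this table at build time, and the collector could serve it on a zPage or similar endpoint:

```go
package main

import "fmt"

// component records one bundled component and the version it was built at.
type component struct {
	Name, Version string
}

// buildManifest would be generated by the collector builder at build time;
// the entries here are illustrative placeholders.
var buildManifest = []component{
	{"prometheusreceiver", "v0.31.0"},
	{"prometheusremotewriteexporter", "v0.31.0"},
}

// manifestString renders the manifest in a form suitable for a status
// page, so a running binary can report exactly what it contains.
func manifestString() string {
	s := ""
	for _, c := range buildManifest {
		s += fmt.Sprintf("%s %s\n", c.Name, c.Version)
	}
	return s
}

func main() {
	fmt.Print(manifestString())
}
```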
D: Yeah, that would be great. And if it's Prometheus-specific, it would be nice if you could file it on the workgroup backlog too, because I think they're two distinct issues; they're two different pipelines, but nonetheless addressing a common problem.