From YouTube: 2021-04-14 meeting
C: I think we can probably wait for a couple more minutes for folks to join in and then get started.
C: And if any of you have any items to discuss, let me just share the…
E: Tom can't make it, but he has volunteered me to talk about it. All right.
C: I created an issue on our backlog in terms of, you know, using the tests for any future testing as we integrate it into the collector, as well as the others.
E: We're currently trying to come up with naming and such, so I will most likely hammer out a design doc on how those tests will actually be perceived and run, and so on, from the Prometheus perspective.
E: So we can just have some names. CNCF has asked us to get that into a form like the, what's the name, Kubernetes TCK, TKC, something, I forgot the name. There is some compatibility and/or compliance thing which Kubernetes does, and they basically asked us to have a look at this and model stuff upon it, I guess. So there will…
D: So, Richard, though, I see that that compatibility tool has moved to the Prometheus organization, is that… yes, okay, okay.
E: If you go on the Prometheus devel mailing list, we had the discussion, I think yesterday, where basically we want to move the compatibility testing into the main org, to just have everything in the same place. Julius's stuff is currently not implemented as unit tests or anything.
E: So you cannot fully automate this, but this is also a function of… Julius's PromQL tests, just to be precise, need more hands-on operation by humans than the remote-write ones.
E: The same is likely true for the OpenMetrics ones as well, but we will probably come up with checklists or something, or maybe have a step there where some subject-matter expert needs to look at the result sets to do some human testing, obviously with the intention to automate it more and more over time. And anyone who is actually running against the test suite will hopefully also have a high incentive to help automate it more.
E: I think I saw Jana already in Tom's repository.
C: I think we've been looking at it, Richard. And I think Jana cannot join right now, but let's get started. We want to talk about, obviously, the compatibility tests and kind of walk through that, but there is also a concurrency bug that we are working on right now, and I was hoping Emmanuel would join in. So let's get started with our first topic.
E: Okay, that's the first topic. Okay, then I am going to…
E: Okay then, let's just err on the side of including everything, that is the full… yeah.
E: The issue… okay, perfect! So let's walk through it in a second, once I'm done reformatting this, as… everyone who's in the doc.
E: You can see that we are currently, or that Tom is currently, testing five implementations: Prometheus agent, or Prometheus itself more to the point (Prometheus agent not yet); Grafana agent; Victoria Metrics agent; Telegraf; and OpenTelemetry, and giving, not yet a score, but just plain pass and fail results. We actually changed a bit in the test suite to break it more into orthogonal tests.
E: If you ran all the tests in full aggressiveness, OpenTelemetry in particular would fail a lot more, so we split those out, for example for upness and such, so it is a distinct test and it's deliberately ignored in the other tests, to basically have a more fine-grained view of what's happening. That will also obviously make it easier to work on one thing and not basically block all debugging, or a whole implementation, before that one thing is done. Yeah, so the things we're currently… I mean, you can read it yourself.
E: We are currently testing for timestamp correctness coming over remote write, and that names and labels work as expected. We actually found one bug in the Grafana agent with the test suite.
E: It didn't reject duplicate labels, as we found out. Oh…
E: Yeah, it wasn't… it was a one-line change, but yeah, you see it's working, and it's uncovering stuff which we didn't know existed, and it's already obviously… Richie.
E: Staleness handling is probably self-evident, but we can also try and link this to stuff in the prometheus-wg repository. Staleness is basically: what do you need to do when a certain time series goes away? After what time do you need to remove it from the result set when querying and such, and also when sending it on? Upness is obvious, but that is being fixed, or worked on, by Josh.
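For context on the mechanism being described: a minimal Go sketch, assuming the Prometheus packages roughly as they stood around this meeting, of how a staleness marker differs from an ordinary NaN sample.

```go
package main

import (
	"fmt"
	"math"

	"github.com/prometheus/prometheus/pkg/value" // import path as of 2021; later moved under model/
)

func main() {
	// A staleness marker is one exact NaN bit pattern, not just any NaN.
	// Prometheus writes it when a series disappears, so queries drop the
	// series immediately instead of after the lookback window.
	stale := math.Float64frombits(value.StaleNaN)
	fmt.Println(value.IsStaleNaN(stale))      // true: series should vanish from result sets
	fmt.Println(value.IsStaleNaN(math.NaN())) // false: an ordinary NaN sample
}
```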
E: …if I saw that correctly. Instance label is basically seeing if there is an instance label on the data which is being emitted; counter is testing for monotonicity. I think that's more or less it. Repeated labels is, I think, the thing which we walked into with the Grafana agent: if you have one label multiple times in one time series, that's not allowed, because that's a duplication, and we don't know how to deal with this on our side.
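A hedged sketch of that repeated-labels check, using the Prometheus remote-write protobuf types; validateLabels is an illustrative helper, not the test suite's actual code.

```go
package main

import (
	"fmt"
	"sort"

	"github.com/prometheus/prometheus/prompb"
)

// validateLabels enforces the rule discussed above: within one time series,
// every label name may appear at most once.
func validateLabels(ls []prompb.Label) error {
	sort.Slice(ls, func(i, j int) bool { return ls[i].Name < ls[j].Name })
	for i := 1; i < len(ls); i++ {
		if ls[i].Name == ls[i-1].Name {
			return fmt.Errorf("duplicate label name %q", ls[i].Name)
		}
	}
	return nil
}

func main() {
	series := prompb.TimeSeries{Labels: []prompb.Label{
		{Name: "__name__", Value: "http_requests_total"},
		{Name: "job", Value: "api"},
		{Name: "job", Value: "web"}, // repeated label name: must be rejected
	}}
	fmt.Println(validateLabels(series.Labels)) // duplicate label name "job"
}
```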
E: Gauges are working nicely; sorting of labels as per the specification and such is also working. There are some invalid expositions which are being passed on, there is no job label, and I didn't look into the histogram test, but that is also currently not working, or rather not passing. The other thing…
E: We will most likely also put all the OpenMetrics tests into this, from the side of an agent scraping them, and then, just as a first low-hanging-fruit round, see if anything comes out on the remote-read/write side, even though it only gets bad expositions on the ingestion side. That is basically a super trivial test to see whether all those invalid cases of OpenMetrics formatting are rejected or not. There is a… I don't know if you recently looked into, or if you ever looked into, the test suite.
E: So these will also be integrated, in a branch which is not yet merged, because I did it by hand, which gives you the full results, the full test set.
C: That's excellent. So, Richard, when do you plan to… when does this stand to get integrated, the OpenMetrics, please?
E: We don't know. I forgot the name, but someone on the Grafana side volunteered to do this as basically a bit of self-onboarding into the Prometheus ecosystem.
E: No, on the… on the…
E: This one, I'll also link it here; that is new as of today. And the plan, by both Julius and by Thomas, is to move everything which is currently in the PromLabs or Prometheus… no, PromLabs PromQL testing repo, and everything which is in Tom's remote-read/write testing repo, into this one repository.
C: So the question I had is: are there any gaps right now in this set of tests? I mean, these are obviously, you know, the basic functionality that is implemented for remote-write support, right, which the agent, the Grafana agent, is also supporting.
E: Precisely, that's helpful. It will basically be: when we find something, we will extend it. We already started thinking about potentially having a few test cases for TSDB, and where there is a need and where it adds value, we'll just try and come up with tests.
E: The rough thinking is, from the test point of view, to basically version them with a date. Then you know that you run the 2021-03 version, and you know that, as of version 2021-03 of this and that test, you pass it, and as new stuff is added, yeah, it's basically a rolling-forward thing, same as with energy labels and such.
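A small sketch of what a date-versioned result record could look like under that rolling-forward scheme; all names here are illustrative, since the suite's real format was not shown in the meeting.

```go
// ComplianceResult is a hypothetical record under the dated-versioning idea:
// an implementation claims the newest suite version it passes, and the
// suite itself rolls forward as new tests are added.
type ComplianceResult struct {
	Implementation string // e.g. "grafana-agent"
	Test           string // e.g. "staleness"
	SuiteVersion   string // e.g. "2021-03", the dated suite version that was run
	Passed         bool
}
```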
E: Yeah, no, absolutely. I mean, it's open, and if anyone wants to add more tests, absolutely, great; so we find more stuff.
C: Great. Any questions from folks on the call?
C: Richard, Anton, have you taken a look at this already, or…
C: Have you had a chance to look at this?
D: Yesterday. I haven't looked at the code yet. So, Richard, one question: do you mind if we had more sources, like more destinations? For example, we plan to have one…
H: For remote…
D: Right, one for Azure, from Microsoft. So when that…
D: …to add support, you know, for that, and we will create a PR, absolutely.
E: Okay, cool, thank you. Basically, add it. What he and I were able to find was with probably five minutes of googling, so it's just checking what we found offhand. And yeah, anything which is missing, just file an issue, and ideally say you plan on implementing this, so we can synchronize the work. But yeah, absolutely, everything.
E: Just added, thank you. The same goes for the other test suites: also, if there is something in Azure which speaks PromQL or something, just PR it and add it, okay?
C: Yeah, and you've called this repo prometheus, but it's remote write, right? So is…
E: Let me share my screen for a second again. Here: yeah, prometheus/compliance. The org is called prometheus and the repo is called compliance, of course. We are not good at naming, but this is basically where things will live, and if you look, then you can just go there.
E: To your question: it will have PromQL and what have you as well. And, I mean, if someone wants to start a remote-read one or something, absolutely, absolutely.
G: Sure, so let me go ahead and share this.
G: So the first option is what was implemented in the initial prototype that was developed, wherein the operator itself manages, in process, a set of goroutines that are responsible for doing the scrape-target discovery and pushing changes out to collector instances. I think this is somewhat simpler; it keeps everything self-contained within the operator.
G: There's no worry about version drift or incompatibility between the operator and the component responsible for handling load balancing, and it doesn't require any additional communications channels; it's all in process. But it does also make the operator do a bit more than what I think a typical Kubernetes operator controller would do, right?
G: It's got responsibilities other than simply reconciling Kubernetes resources. And, I think it's probably not a scenario that's hugely likely to happen, but if someone were to try to run a bunch of collectors in the same cluster with this operator, this would potentially limit their scalability by having all of them running in the same pod, with the controller limiting their resource pool.
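A rough sketch of option one, assuming the operator embeds Prometheus's discovery manager (package paths as in the Prometheus codebase around this time); sdConfig and pushToCollectors are hypothetical stand-ins.

```go
import (
	"context"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/discovery"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

// runDiscovery runs service discovery in-process inside the operator and
// fans resulting target groups out to collector instances.
func runDiscovery(ctx context.Context, logger log.Logger, sdConfig discovery.Config,
	pushToCollectors func(map[string][]*targetgroup.Group)) error {
	mgr := discovery.NewManager(ctx, logger)
	if err := mgr.ApplyConfig(map[string]discovery.Configs{
		"collectors": {sdConfig}, // e.g. a kubernetes_sd_configs entry
	}); err != nil {
		return err
	}
	go mgr.Run() // runs until ctx is cancelled

	for groups := range mgr.SyncCh() {
		// pushToCollectors is hypothetical: shard the discovered
		// target groups across the known collector endpoints.
		pushToCollectors(groups)
	}
	return nil
}
```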
G: So the other idea I've considered is having the operator manage a deployment of a load balancer: run a pod that runs some separate, standalone program that's given a configuration through a config map, and that would then be responsible for doing the scrape-target discovery and pushing changes out to the collector endpoints.
G: I think this simplifies the operator compared to the other one; it gets it back to just the mode of being a reconciliation loop for Kubernetes resources. And perhaps the most important advantage from my perspective, I think, is that it gives the end users flexibility in determining which load-balancing system they're going to use. We've talked about, you know, plenty of different ways to distribute and shard out these targets. This way, I think, gives room for flexibility and experimentation there, by simply replacing the container image.
G: In terms of disadvantages, though, I think it does add a few more moving parts overall, slightly more complex, but they're all normal Kubernetes resources, right? We're just creating another deployment with a config map and a single pod in it. And we would require some new communication channel for the controller to communicate changes to that new pod, whether that is watching the config map for changes, or an HTTP service or gRPC service that runs within that pod that the controller can communicate with when something changes, just to poke it and say: hey, there may be a new set of pods.
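One possible shape for that channel, sketched with client-go as the config-map-watching variant of the options just listed; the namespace and config-map name are invented.

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchTargetsConfigMap blocks, reacting whenever the operator rewrites the
// config map that carries the discovered scrape targets.
func watchTargetsConfigMap(ctx context.Context, client kubernetes.Interface) error {
	w, err := client.CoreV1().ConfigMaps("otel-system").Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=scrape-targets", // hypothetical name
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for range w.ResultChan() {
		// Re-read the target list and re-shard it across collectors here.
	}
	return nil
}
```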
G: …that you need to distribute out, or something like that. Whereas with the other approach, that's all handled in process with goroutines, and so it's, not simpler, but less of an issue, I think. So those are the two initial thoughts that I have. I was wondering if anybody has any immediate reaction to that; we don't need to go into a whole lot of detail on this here.
G: But if you want to leave comments on this doc, or reach out to me with any thoughts you may have, I'm certainly open to hearing that. I'm going to start prototyping within our current operator; I think the second approach will be my next step, start proving that out and making sure that it actually will work, unless somebody comes along and says: oh no, that's never going to work, here's why. Yeah, that's that.
C: David, do you have… have you taken a look…
K: When you say there's a deployment of load balancers, is there going to be multiple of those that are managing which targets go to which stateful sets, or is there just one?
G: Yeah, as I say, I don't know that that's going to be a common scenario. I would imagine that most users are going to run a single collector in a single stateful set, that is, a single load balancer. But I could see, you know, some customers saying: hey, I want to have all of my development groups push to this central location that's managed by one team, but they all get their own collectors, so that they don't have their data crossing, or something like that.

K: I see. Okay.
D: Okay, so it's basically tenancy rather than scalability that is the reason for this splitting between the first model and the second one?
G: So the advantage of the other one in the multi-tenant scenario is the potential scalability of that tenancy, right? I think you could support more distinct stateful-set tenants in that approach, because as you add stateful tenants, you also add load-balancer pods that get their own separate pool of resources. Whereas if it's all within the operator process, you need to scale up that operator process in order to ensure you've got the resources to deal with all of that load balancing. Whether that's actually a problem yet, I don't know; it's just a potential problem I foresee in that approach, trying to scale the number of tenants you're dealing with. Again, I don't think that's a hugely common scenario, so I wouldn't make or break the decision based on that; it's just a point of difference between the two.
B: I have a comment. I was on the phone, so I couldn't watch your presentation as you spoke, but I heard the options. One of the things that we've just recently done at Lightstep was to experiment: there's an issue talking about how we can push OTLP data and then try to transform it into PRW in a push-only system, and the thing that we experimented with for service discovery was to push a metric that explains what services are present. This is a way that we can combine push and pull systems together. So I just want to make a comment: I see the diagram, and it looks like there's some single entity, or perhaps replicated entity, that's going to pull from Kubernetes and publish information about which targets there are. If we make that publish step be OpenTelemetry metrics-format data, then we can use this data in more than one way. It's a win for everybody, I think. Just wanted to throw that idea out.
G: Okay, yeah, that's interesting. And that's one of the other components of the system that needs to be solved: how does this component, whether it's running within the operator or separately, inform the collector of new targets that it needs to deal with? The initial proof of concept that was implemented had a gRPC service in an extension for the collector that received Prometheus scrape-target information, static SD configs, and wrote it out to a file. But that could be another option to explore.
B: Thank you, yeah. My proposal wouldn't handle moving scrape configs around at all; it's just a way of sharing state about the service discovery output, in a format that metrics systems can consume directly, rather than requiring it to then go into a scrape-like pull and output metrics. We just get this… I've called it… there's an issue, 1078, in the spec repo, about this, about how we can specify a semantic convention for up, and it's all connected to that. The proposal was to name it present, but, like, that's not necessarily the greatest name.
B: It's like one metric to explain which services are discovered right now. Anyway, that's just a proposal; we'll keep working on that. It's going to stay connected to this one here.
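A hedged sketch of what such a present metric could look like if the discovery component exposed it with the Go client library; the metric name follows the proposal above and is not settled.

```go
import "github.com/prometheus/client_golang/prometheus"

// present is a hypothetical gauge: one series per discovered target,
// value 1 for as long as service discovery still reports that target.
var present = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "present",
	Help: "1 while service discovery reports this target.",
}, []string{"job", "instance"})

func init() { prometheus.MustRegister(present) }

func targetDiscovered(job, instance string) {
	present.WithLabelValues(job, instance).Set(1)
}

func targetGone(job, instance string) {
	// Delete the series rather than setting it to 0, mirroring SD
	// semantics: a target that is gone simply stops being reported.
	present.DeleteLabelValues(job, instance)
}
```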
J: We've also discussed… so there's a lot of overlap with observers in the collector, which basically discover targets. It's great. We've actually discussed having that information be emitted as, maybe, like, using the logs pipeline.
J: You know, instead of… right now it's like this out-of-band thing: it goes from an extension to receivers, just like this kind of in-memory representation. So we've also, yeah, like, Tigran was looking at representing those scrape… the observer targets, you know, using logs and semantic conventions.
B: So yeah, I've seen that conversation too. I'm also trying to steer that same conversation in the direction of having this single metric for service discovery, just because there are many ways to use that data independently of the way Prometheus is doing it. Like, for example, count how many services are out there: I don't need to pull from them all to count how many service-discovery results there are, you know? If you start erasing labels from the service-discovery results…
B: …you can see information about your fleet of machines. There's interesting data there, even without scraping those targets, in the form of metrics data, as we're trying to say. I see the connection to logs as well, but I think we can skip the logs transformation and go straight to metrics in this case.
B: I may be referring to sort of private threads. I know Tigran, just yesterday or this week, has published something about schemas, and I think that's actually step one in the direction here: you need to give a schema to talk about what your log events mean before you can use them to convey target information, for example. So we should ask Tigran for his up-to-date information, and maybe, Jay, you've seen that already; I'm just referring to these documents I've seen in the OTEP yesterday about schemas.
B: I'm not sure you have… I haven't seen the public versions, at least, yeah. There was one document that he shared, that wasn't posted on GitHub, that talked about translating from logs into resources, and I think there's a very meaningful discussion to be had there. And once you're done with that, I'd translate it into a metric for the metric systems, essentially.
B: Tigran opened an OTEP, I think it's 152, about schemas for telemetry data, and there's not a direct connection to this topic that we're discussing right now. But I know that, in the back of his mind, he's working on what Jay just referred to: you see a sequence of log events which might say, starting a process, now you've got a process that's up, or, sorry, that's not… that's up.
E: Yep. I'm not certain if this is actually going in a direction which is useful, my suggestion, I mean, so just shoot it down if it's not in a good direction. But instead of doing this as pretend metrics, it sounds to me as if just transporting the literal service discovery along the pipeline is what you are actually looking to do, to just basically gain the benefits of service discovery at every stage of the push pipeline.
E: You can… at least in Prometheus service discovery, you can also attach other custom labels and such, so you would need to… which is basically… you can serialize this into an info metric, or an underscore-service-discovery info metric or something. So you can obviously serialize this, but…
B: Yeah, I think it'd be great to have a discussion about the OTel representation for an info metric, because to me it's just a single gauge with a set of labels: one instance label, one job label, and all the other labels that you happen to have. So I think that's equivalent to what you said, although I don't fully appreciate the info-set distinctions that might be there. Yeah.
J: I'm not sure, I mean, maybe we should do this as a separate discussion, but, like, with the… I mean, so service discovery is usually event-based, right? You have, like, a start event, a stop event; it's kind of like the most basic thing, right? Whereas with a metric, like, do you assume, if you haven't seen the metric within so many seconds or so many minutes, that it's not reporting anymore? It's not quite as… it's a little bit of a different model, right?
B: The scenario that I'm trying to propose: imagine that every target had a synthetic, virtual metrics endpoint that always returned exactly nothing, zero metrics, but it's alive, you can see it, and it answered your question. So imagine that every target just returned empty results: you'd get an up metric for every target. I'm proposing that, instead of that, you just create something like a present metric for every target and skip that step of scraping them and getting an empty result.
J: So you're saying, like, when you get that metric, that indicates… like, let's say you're sending that virtual metric every 10 seconds; that means you're scraping every 10 seconds? Like, is that the signal to scrape, or is it not?
E: There's one problem with that particular thing, sorry. When you have the service discovery, and you have it on the side of whatever is creating the up metric and such, you know that at the point in time where that service is supposed to be alive, or in place, or existing, you also were able to reach it, which is basically what up does. If you detach the information that, at this precise moment in time, that thing should be existing, from the information that at this precise moment in time…
B: Yeah, it's definitely harder to do a correct transformation into an up metric in a push system, is what we're just saying, I think, and it remains to be shown that it can be done.
I: It's a completely different channel, because, fundamentally, how you're marshaling it to start with doesn't matter so much as that the data is going from A to B. And the two proposals here are whether there is some form of controller collecting all the state and sending it over, or the scrapers themselves are generating the data. I think that's basically the question being asked.
G: Oh no, I don't think in either of these scenarios the scraper would be doing the service discovery, right? So even in this scenario, the thing that's doing the service discovery and generating the configs that get pushed to the actual scrapers is separate. We want there to be a single system: you know, there would be one pod that does the service discovery, and there could be many of these pods that are doing the scraping.
G: We want to be able to delegate out and distribute the targets amongst all of the pods that are doing the scraping, while only having one pod that has watches on Kubernetes objects to discover these targets.
G: In this model, what would go along this channel, the discovery system informing the collectors of new targets, is either a Prometheus static SD configuration, notice the target and label sets of those, or potentially, as Josh mentions, the idea of an OTLP metric that has the same information encoded as attributes on the metric.
G: I think if there are advantages to having the ability to distribute that information as OTLP metrics outside of the system, that's great, and maybe we can gain some benefit. But it's almost certainly going to be the simplest thing we can do to just push static SD JSON that has a list of target label sets.
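A sketch of that simplest path, assuming Prometheus's target-group type so the payload serializes into the static/file SD JSON shape; the file path and labels are invented.

```go
package main

import (
	"encoding/json"
	"os"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

func main() {
	// One group of discovered targets, as the discovery pod might push it.
	groups := []*targetgroup.Group{{
		Targets: []model.LabelSet{
			{model.AddressLabel: "10.0.0.12:8080"},
			{model.AddressLabel: "10.0.0.13:8080"},
		},
		Labels: model.LabelSet{"job": "api", "namespace": "prod"},
	}}

	// targetgroup.Group marshals to the static/file SD JSON format, which a
	// collector's Prometheus receiver can read via file_sd_configs.
	b, err := json.MarshalIndent(groups, "", "  ")
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("/etc/otel/targets/api.json", b, 0o644)
}
```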
G: Okay, I think the other thing I wanted to mention fairly quickly was, as we discussed last week, we talked about exposing the Prometheus operator's config-generator capability. I've got a PR up for that; it's got one approval. I don't know if there's anybody from the Prometheus operator maintainer set here, but if anybody else can take a look at that, it's very simple: it changes some letters from lowercase to uppercase in a few places. I'll…
G: If there's nobody here that can help move that along, I'll ask in the Slack channel later. Let's see.
C: Any other recommendations, Richie, for our reviewers?
G: Yeah, I will ask in the Prometheus operator Slack channel on the Kubernetes Slack, and I'm sure I will find the right people there.
C: All right, just seeking reviews, okay, got it, thanks. Who had the resource-label PR?
D: Hey, that was me. So there are a few obvious bugs, a few PRs, so I just want to bring it up here so that someone can take a look at it and approve it. And I see there are several approaches people are taking, you know: one PR, for example, adds the resource labels in the exporter, and the other one actually adds it in the Prometheus server. We would probably… we would get from the Prometheus server similar to, I mean, whatever we get from…
J: Is one of the… so I've seen this, and we have this problem with other receivers, in that, you know, the concept of resource labels, or resource attributes, it's an OpenTelemetry thing, right? You have, like, resource attributes, then you have, like, log attributes… data point, sorry, data point labels, metric labels. So there are, like, two different hierarchies where the metadata can be attached, and so, like, when you're receiving, assume, Prometheus metrics, you don't know whether it should be a resource label, or a resource attribute, or a data point label.
D: Yeah, but there are a few exceptions. Like, for example, job and instance are actually missing right now from the receiver, and without those, I think it's hard to consume these metrics.
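A hedged sketch of the job/instance handling being asked for, lifting the two identifying target labels onto the OTel resource attributes conventionally paired with them; the helper is illustrative, not the receiver's actual code.

```go
// promTargetToResource is a hypothetical helper: carry the two identifying
// Prometheus target labels over as OTel resource attributes, so consumers
// can still tell series from different targets apart.
func promTargetToResource(targetLabels map[string]string) map[string]string {
	return map[string]string{
		"service.name":        targetLabels["job"],      // from the job label
		"service.instance.id": targetLabels["instance"], // from the instance label
	}
}
```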
D: You want to special-case a few things. There is a PR here as well, actually, that picks up, you know, every resource label and adds that as a…
D: …label. So I think it's a mix and match of both; a few things have to be special-cased, you know, just because, yes, we are saying that this is a replacement for Prometheus.
D: Yes, that's what I meant to say, which is not happening now; the job and the instance labels are…
B: There's a name configuration in the SD config, and then, I guess, your __instance__ gets relabeled to instance. Is that the special case we need? Yeah.
B: I think, generally speaking, we would like to say that all OTel resources should be promoted to Prometheus labels on export, but I think there will be trouble if we actually make that statement a sort of blanket rule, because people are going to add types of attributes that just don't make sense in that model. So I think there are probably special cases, or, if we have schemas, we can sort of explain which ones ought to be excluded by default.
B: An example is service.instance.id: that's just never going to be a good label for a Prometheus system, and that's going to be a semantic-conventional resource in OTel. And I think what we want to say is: certain resources are unique, therefore we should exclude them from metric systems, something along those lines.
B: Deployment, yeah, I agree. So I think what we're trying to say is there may be some default rules for relabeling that you might want to adopt, based on some schema information or some semantic conventions, and then, you know, of course, for the special cases you can always say…
B: …I just need to add my own labeling rules. I'm trying to say that, by default, OTel libraries are probably going to put in a service.instance.id, and by default you probably don't want that in Prometheus, but we don't want to ask every user to set up a labeling configuration to say, erase service.instance.id, because that's sort of a default recommendation, I think.
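A minimal sketch of such a default rule, expressed with Prometheus's own relabeling package; that service.instance.id arrives sanitized as service_instance_id is an assumption, and dropInstanceID is not an existing built-in.

```go
import (
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/pkg/relabel"
)

// dropInstanceID is a hypothetical default: drop the OTel
// service.instance.id attribute (assumed sanitized to
// "service_instance_id") before handing series to Prometheus.
var dropInstanceID = &relabel.Config{
	Action: relabel.LabelDrop,
	Regex:  relabel.MustNewRegexp("service_instance_id"),
}

func applyDefaults(in labels.Labels) labels.Labels {
	// Users could still layer their own relabel configs on top.
	return relabel.Process(in, dropInstanceID)
}
```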
B: The way I would say that in OTel's world is: we want to semantically specify these resources and then dictate whether they should be on by default or not, essentially. And I think that might be some sort of attribute about the resource label, or resource key: so service.instance.id off by default, job on by default, something like that.
I: Yeah, just be aware that some of those practices are historical and relate to, well, SoundCloud-specific implementation back in the day, and some people took an example, which was meant to be just a by-the-way, this-is-a-thing-you-can-theoretically-do, as the canonical way of doing things. So some judgment may have to be applied in light of modern best practices.
I: Well, it depends, but one of the practices was blindly adding all service labels from Kubernetes onto targets, which doesn't necessarily make sense, because those can change over the lifetime of a pod, yeah. So it gets very much back to per-user things. Like adding the Kubernetes namespace: that is a thing that would not be unreasonable.
B: Yeah, I agree. We're getting into a place of wanting to have these best practices that are… it's not a question about Prometheus versus OTel; it's really a question about, like, how should we set up monitoring for Kubernetes? It's really interesting here. I've been looking at the community helm chart and the Kubernetes operator to get some ideas, but I'm definitely looking to learn from the experts.
E: There are also some considerations in the OpenMetrics spec about when to apply stuff to all labels, but this is from the point of view of a single target, which is obviously a thing in pull, where you can say: should everything from a target have this and that information, yes or no. I can also link…
C: I mean, okay. So, Vishwa, does that… actually, I mean, I don't feel that we answered the specific question you had on these implementations.
D: Yeah, I think it probably… I…
D: Yeah, I think a few things we can just do right away, like, you know, include the job and instance; certainly we can have other things up for discussion. Yeah, there is also a PR, you know, where there is an option to pick up, you know, every resource attribute and add them as a data point attribute.
C: So, Josh, how do you propose we… actually, I mean, this discussion has come up multiple times in the Prometheus, you know, context, as well as obviously the larger metrics labeling, you know, configuration versus what is handled as rules for relabeling. Where do you suggest we actually continue this discussion? Should that continue to happen here in our working group, or is it a larger discussion for metrics?
B: If you're asking me, I think we should adopt Prometheus 100% as far as the relabel configuration goes; it's a known language that I don't want to redesign, yeah, and I think nobody does. So, I mean, I just mentioned how I think that some defaults would be nice, so you don't need to have these blocks of YAML, but I would like the OTel collector to literally support those transformations, not have a new format.
C: I think we don't know that yet, right? Because technically, today, we are assuming that that's not part of the receiver logic, but, as Josh mentioned, you know, there are special cases, so we need to figure out where that would sit.
B: I mentioned the idea of publishing service discovery as a metric, but Anthony was saying we could just publish service-discovery configs. I think that you would probably want to apply your target relabeling once, whenever you finish your service discovery; I think it's static. I would like someone to confirm that for me, because then we should publish the relabeled target information, so that not every collector has to do it. In other words, I think that there's some service-discovery process that can do target relabeling.
G: And that is how the prototype that had been implemented for distributing service discovery worked: the process that did the service discovery applied relabels and then distributed out those relabeled targets. Great, yeah.
B: I'm going to… Gustavo, I was going to publish it this week.
C: All right. I think… did we cover the external-labels bug? That was the last issue; we're at time. Whose issue is that?
D: Yeah, that's mine as well. So currently all the external labels are being ignored by the collector. It's just clearly a bug, basically.
B: I think OTLP needs a place, or a way, to add external labels; it's not there right now. And there are two ways I've thought about it. One would be to add another field, parallel to resource, and say these are like resource, but they're not identifying in some sense. That's not actually my preference. I've mentioned earlier this idea of schemas coming, and I think that this gives us a way to just label these semantic-conventional attributes with properties outside of the actual data stream.
B: So I can say: I know that this attribute is of a particular type, so I want to basically have a way to specify, in my attribute, that this is an external label. And I think there are probably three categories of label right now in our system, as we look at it. There's things that are identifying, like, you use them to join with other streams.
B: Job and instance are identifying in that sense. There's non-identifying but descriptive, about the process, so all the secondary attributes qualify in there. And then there's external labels, which is in the third category by itself, which is, like: these are attributes that literally may not be used to identify the data, and if you strip them, it's the same data. And so I see three categories, and somehow we've got to get that state into OTLP, in my opinion.
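A sketch of those three categories as a schema might tag them; the type and names are purely illustrative, not an existing OTLP field.

```go
// LabelRole is a hypothetical classification for the three categories
// described above.
type LabelRole int

const (
	Identifying LabelRole = iota // e.g. job, instance: used to join with other streams
	Descriptive                  // non-identifying metadata about the process
	External                     // strippable: removing it leaves the same data
)
```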
M: Hey, hey, I just heard you say something about Prometheus, and I just joined in. Is there a Prometheus SIG meeting right now? There is a Prometheus…
L: …working group at OpenTelemetry, which is right before this one. They have the SIG meeting right before the collector meeting, and we use the same Zoom room, and yeah, sometimes, when they are late, we overlap.
L: Okay, let's see… anything at all in them today? I don't remember this happening ever; like, we always had something to discuss, which, I don't know if it's good or bad, but it's unusual.
F: Yeah, I wasn't exactly sure as well, so I'll just wait for Bogdan's response, for him to review it.