From YouTube: 2020-09-16 meeting
Description
No description was provided for this meeting.
B
C
D
A
A
E
Right — most of the exporters in the contrib repo are implemented in a way that they consume some sort of dependency, their vendor-specific dependencies. So there's as much vendor-specific code as the contributor wanted.
E
F
So this is something that I have been discussing with a bunch of other people at AWS who work on OpenTelemetry. At least for our back end, we have use cases where the collector, from various different receivers, might receive a metric that has a bunch of different labels — let's say it has five different labels — and then, when you publish that to the back end, you actually don't want to just take that metric, convert it to the backend format, and publish it directly.
F
What we actually want to do is publish multiple different instances of that metric, but with different sets of labels — that's why I call it a "set of sets of labels." If you scroll down to the next page, I think this is very clear with the example that I show. This is a very arbitrary example that I made up — a Kubernetes example; the label names are totally made up and probably wouldn't exist with any receivers we have.
F
But you see here, you have an incoming metric with five labels, and the thing below is what I want to publish it as. I want to do some renaming, but I also want to publish two instances of it with two different sets of labels. Basically, of those five labels I only care about a few of them, but I also want to publish different groupings of them.
F
Basically, we were thinking about how we want to do this. The first, obvious thought was: okay, let's just build this feature into our exporter, which is, I think, what we might do — it's pretty straightforward. But then we thought: wait, this might be a use case that you would want with more than one exporter.
F
Like this use case of saying: I want to publish multiple instances of this metric with different groupings of labels — or, as I say, a set of sets of labels.
F
So maybe it should be a processor that everyone could use, and so — if you scroll up a bit — I have a kind of proposal that I came up with.
F
It obviously needs to be worked on a lot, but the idea is that maybe it could live in the metrics transform processor, where you would have a new action that lets you publish with label sets. There you would basically put in a list of lists, where each inner list is the set of labels for an instance of the metric that you want to publish.
F
I was curious what you guys think about this — whether you think it's a use case that's common to many other exporters or back ends, or whether it's something that other people run into as well.
E
So when you eliminate some labels, does that mean that you have to aggregate across the eliminated labels?
E
E
Yeah, you have to do some sort of aggregation if labels are removed, I guess. Well — these are superficial labels, so in reality there's no need to aggregate here. But generally, yes — if you eliminated, for example, all of the pod-specific things and, let's say, only left cluster —
E
E
F
That's a good point — okay, I didn't think of that. I guess that makes it difficult in the processor, because the metrics transform processor can only aggregate across a chunk of labels that it gets, or a chunk of metrics that it gets at one time, I guess. I didn't think of that because, basically, our back end — CloudWatch metrics — can do that aggregation: you can give it, like, 10 values with the same timestamp and it will do the aggregation.
E
But if it's a general feature that allows you to specify any labels to be eliminated and some to be preserved, then there needs to be an aggregation, I think. So it depends, right, what the goal is. If this is very specific to your back end, the right place for this functionality to live is probably the exporter.
E
If you want it to be something more generic, then the right place is in a processor. But if you make it a processor, then you will have to solve this problem — the aggregation — because it's very possible that what you're eliminating requires an aggregation in that case. So you have these three options to consider; I don't think it makes sense to have it in the receiver.
E
E
E
That's — I don't know. Does anybody else have any ideas about this?
E
E
F
I think that's the metrics transform processor — it looks like it's implemented. It doesn't quite do what I need right now, though: it can change the label names and the metric names, and it can delete labels. The key thing that it's missing is the ability to basically duplicate the same metric with different groupings of labels, kind of.
E
F
Yeah, it's in the contrib — for some reason it's not in the core repo. I found it in the contrib.
E
F
Yeah, I guess the hard thing — what I didn't think of — is that the aggregation bit is, well, unsolvable, right? Basically, it's almost like —
E
E
A cumulative metric, right — and if not all data points are passing through this particular instance of the collector, you don't have a way to actually aggregate it fully; you only see a portion of the data. Subsequently — I don't know what the current metrics transform processor does; I am not familiar with the implementation — but if you're saying it already supports eliminating labels, then maybe it's worth looking into how it does that. Maybe it only works for a single instance; I don't know. Honestly, I don't have an answer.
F
Yeah, so it does — if you look at its readme, it says that it does aggregation when you delete labels, but I think it is restricted to within a batch. I can throw a link in the chat for people who want to look at it on their own. Based on what you're saying — I have a meeting in about two hours with a bunch of other people at AWS to discuss what we're going to do.
G
F
G
E
E
I don't have a good solution, sorry. Maybe, if you know exactly what you need to do for your back end, it may be easier to do that specifically in just your exporter. We needed to have some specific transformations which only made sense for our back end, so we did that — we have some work done in our SignalFx exporter, for example, which didn't make sense to try to generalize, so we ended up doing it there. That's not unheard of, so it's a possibility.
E
You're welcome. Okay, next: the collector builder demo, if we have time. Yes, let's do that. Okay.
B
I think we're going to do that next week — I just tried to share my screen and it shows me a warning from Zoom saying it cannot start screen sharing. They only support Wayland with some other conditions, which in theory includes Fedora 25 and above — so I'm included, but it doesn't work.
B
So you can keep going with the items from Kevin and Oswar, and I'll try to restart my session here and see if I can.
H
So that's my issue right there. The questions that I had were about the tests: there are three tests that are flaky, and the issue was pretty much "we want to investigate why they're so flaky, and if they are, let's enable them or remove them." One of the three already got resolved, so that one is outdated. Another one said it was flaky on Windows and to enable it when the following issue is fixed — but the issue is kind of ambiguous.
H
It just got closed, but I did run it on Windows a couple of times and it worked fine, and I linked the logs there. So I was wondering: would it be safe, if I make a PR, to just enable that test again? And there was another —
C
H
The very last one of the three that I mentioned said that the reason it's flaky is that we don't have a way to flush all the written traces for the OpenCensus receiver. Since it was skipped two years ago, I was wondering if support for flushing written traces has been added since then. If not, I was thinking about removing that test entirely, because it's not been running for two years.
E
So, for the first one: if there is any evidence that the problem is fixed — if you see that in the issue — then I'd say yes, let's enable the test. If it turns out that it's still not fixed and failing, then we will have to look into it again. For the second one, I don't know; I am not sure if there is a way to flush the traces. Does anybody know? I'm not familiar with this particular test — what exactly does it test?
H
It's this one — the "accept all gRPC" test.
E
And this requires flushing traces sent using, I guess, OpenCensus exporters? Yep. Okay, so I don't know if there is a way to do that flushing. If I had to guess, I would say probably nothing has changed there — I don't think there's a lot of activity happening in the OpenCensus project, so it's likely that whatever the situation was back then, when the test was written, is still the situation now. Maybe worth checking, but that's my guess.
E
So yeah, as I mentioned: if you don't see a good way to fix it — a reasonably easy way to fix it — then yes, the test should be removed. There's no point in keeping it if we're not going to be able to fix it, and there's no point in spending days and days trying to fix a single test, right?
E
It means that we would have to somehow add that flushing functionality to the OpenCensus exporter or the agent — because it uses the OpenCensus agent, which is in a different repository that we don't necessarily control. Is that it? It's probably just not visible.
A
So I've been working on some conference presentations on OpenTelemetry, and I've got some diagrams, example configurations, and other material that need to go into the collector documentation, which right now we've just kind of got scattered throughout the project. Has there been any plan talked about for where we're ultimately going to publish this documentation — what the format would be, and —
C
A
E
Yeah, I think for the licensing, OpenTelemetry already made that decision for us: it uses the Apache license for everything, as far as I know. Now, obviously, nothing prevents you from publishing your own material — if you come up with documentation about the collector, you can publish it as an author with whatever license you prefer, but —
E
You can choose your own license, if you will. As for the location: I think so far we have used the repository itself for the documentation. There are some minimal references to the documentation from the opentelemetry.io website, but I don't think we have any plans to have extensive documentation of the collector there. I would —
C
E
— users who just run the collector, but we also have developers who may need to customize it, and I don't think that we would want to split this documentation into two places depending on who the target audience is, because there is likely an overlap, right — you would want developers —
E
C
E
E
A
E
A
Okay, yeah, I kind of do. I mean, I looked around to see what the other OpenTelemetry projects are doing: Python's out on Read the Docs, which is kind of typical for Python, and —
A
— the new godoc, whatever they call it. But some of the vendors are also creating their own stuff — I know LightStep's got a big, huge thing on OpenTelemetry, which is pretty interesting. But yeah, I think it would be very helpful to have comprehensive documentation on how —
C
A
B
For what it's worth, we have an external repository in the Jaeger tracing organization.
B
It's called "documentation", I think, and on the website we have jaegertracing.io/docs, so it's published along with the website. It's really helpful there, because whenever we have a new feature, we can just create a new issue on the documentation repository, and we know that we have to document it. We also have automatically generated documentation — for the flags, for instance: whenever we add a new flag, a build would just parse those flags, or parse the output, and then generate that kind of documentation.
B
The one thing that we cannot generate is good use-case-based documentation, because when you document your own component, you are only documenting that small piece of work; your user might not have an idea of how it fits into the big picture. So if all you have is smaller pieces of information, users might not even know how to get started, and I think —
C
B
B
A
A
All right, I will look into that — or look at how Jaeger's doing it — and I'll come up with some kind of proposal. We might even need to make it bigger, you know: cover all the SDKs, or have some link to all the SDKs, because Go is very structured — there's one place to look — but with JavaScript you go out to npm.
I
I
A
B
If we need any help with the design, I'm sure the CNCF can provide some help there, because our documentation — I mean the infra, the bootstrapping code and so on — was done by someone at the CNCF, so all we developers had to do was write the actual markdown files. All right.
B
So should I try to share my screen again? Yes? Okay, it seems to work now — desktop, desktop 2 — so you should now be seeing GitHub.
B
All right — so this is actually the wrong tab. The main context for this builder is this issue here, and it is something that we at Red Hat were facing.
B
We needed to build a couple of OpenTelemetry Collector distributions — perhaps the most famous one right now is Jaeger, the new generation of Jaeger, which is based on OpenTelemetry components — and we had a need for another distribution. And we thought that people out there might want to build their own distributions, but they are not necessarily developers. So perhaps —
B
— or they are not Go developers. While for us it might sound simple to do, it requires quite a lot of previous knowledge for people to build their own OpenTelemetry Collector solution. So that's the purpose of this tool — it's called the OpenTelemetry Collector builder — and I have here a project called observatorium-otelcol. Observatorium is a project at Red Hat, and otelcol —
B
— is then the OpenTelemetry Collector distribution for Observatorium, right? So: running the binary and sending it a configuration. Let me probably show you the configuration first. This is the manifest file, and in there I specify how the distribution should look. It mainly has two sections: the distribution —
C
B
— and the modules. Under the distribution I then have: the module name (the Go module), the name of my distribution, a description, versioning, the output folder, and which OpenTelemetry Collector version to use as the base.
B
None of this is required, so you can just get a solution without any of that — it would just get sensible defaults — and then you have a list of modules to load into your distribution. So we have —
B
— here. We could have extensions, receivers, or exporters as well; they all follow the same pattern. The idea is that you specify a Go module — in this case I have a module that exists in this repository itself, under the authentication processor source — and it follows the same —
B
— the same template, the same format as a go.mod declaration: location and version. And there are some options for the modules. One of them is a path: if it's a local checkout of the code, I can just specify where to find the source.
B
I can also specify the import to use in the generated source code — like this one — or the name of the processor, in this case the authentication processor. Those two pieces of information are optional, because we can derive them from the Go module. And that's it. I have two other modules here: another processor and the resource detection processor.
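As a rough sketch of such a manifest — the field names below are approximated from this walkthrough, so treat them as assumptions and check the builder's README for the real schema:

```yaml
dist:
  module: github.com/example/observatorium-otelcol   # hypothetical Go module
  name: otelcol-custom
  description: Custom OpenTelemetry Collector distribution
  otelcol_version: "0.10.0"        # base collector version
  output_path: /tmp/otelcol-custom # where sources and the binary land
processors:
  # remote module, fetched like a go.mod requirement (location + version)
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourcedetectionprocessor v0.10.0
  # local checkout: path, import, and name can override what is
  # derived from the Go module
  - gomod: github.com/example/authenticationprocessor v0.1.0 # hypothetical
    path: ./authenticationprocessor
```

Receivers, exporters, and extensions would follow the same per-module pattern.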
B
So if I just run it here, with the OpenTelemetry Collector builder, passing the manifest, it should then generate the sources and compile a binary for me. So I have a binary.
C
B
— then, under build: I have all the generated sources here — the main, the module files, and the components — and I can then just start it up, running with this collector config. And I have an OpenTelemetry Collector here, started with all the processors and exporters that I want. So I don't actually have to —
C
B
C
B
— a selected few processors, and perhaps some that don't exist in contrib — perhaps some that are under my own domain. And that's it — very simple, but hopefully it's useful for other people as well.
E
Nice, I think it's very useful. So this starts with just the core of the collector and, on top of that, what you specify in the manifest? Exactly.
B
So if I go here — if I don't specify a manifest, it will just generate the sources in a temporary directory.
B
And see, in the go.mod we have only the collector components — we have pretty much nothing here.
E
B
B
Anyway — it compiles the Go binary, but I don't actually have to know Go, right?
A
C
B
Good — so try it out, and if you have feedback or bug reports, I'm more than happy to receive them. Thank you. And I do have one more topic for today, and that's the authentication processor — the PR for it is up today.
B
I don't see him on the call — okay, he just left a comment right before this meeting, so we are going back and forth on the authentication module.
B
There were a couple of comments on the PR from you and from others, but Pawel — and I think Pawel is here, or was here, in this call — actually suggested something very interesting, which is to add the authentication options, or authentication settings, to the receiver — to the gRPC server settings, basically, and the HTTP server settings — so that we can install gRPC interceptors or HTTP wrapper handlers on the actual servers, the receivers, so that authentication happens much earlier than it would as a processor.
B
The advantage of that is that we block requests before they are unmarshaled or processed in any way — we intercept them before anything is done — whereas an authentication processor only rejects requests after they have been processed. That means the interceptor saves some CPU cycles. One of the arguments for that is: if we do more work than necessary and people have enough firepower, then it's easy to cause a DDoS attack, right? So that's one of the thoughts behind moving that up the stack.
E
Yeah, that makes a lot of sense. I think it's best that you discuss this with him directly — he was interested in this particular functionality.
B
— knows how this works. Okay, yeah: this is more intrusive than a processor, because the processor, as I just showed, can live even outside of the OpenTelemetry project. But this approach means that we are changing the configuration for every receiver — basically, every receiver that uses gRPC or HTTP is going to be affected, because we are adding a new configuration node there.
B
E
B
B
Yes, I think this last point is part of a bigger discussion, because a couple of meetings ago we had a discussion about plugins, and I remember people talking about the same kind of problems — I don't know, months ago — being able to intercept, or to plug some custom logic into other pieces.
B
I think this fits — I could very much see people wanting to do their own custom logic and building it as part of, like, the builder. But right now, you're right: everything should be in the core, the OpenTelemetry Collector core, yeah.
E
I think it makes sense. And for authentication — can you really say that it's completely orthogonal and shouldn't touch the receivers? I don't think so; you can argue that it's probably an essential part of a receiver. If we can implement it as generic functionality that can be easily applied to all receivers, that's great, right? But saying that receivers should not at all be aware of how authentication works — that's probably not the goal.
E
B
Yeah — I have a branch here, almost working, and the only changes that I had to make to the existing code were to add an authentication object to the gRPC server settings and the HTTP server settings. That configuration itself is in a new package, configauth, and everything really related to the authentication — like the actual mechanism — is inside an internal auth package. So it's —
E
E
B
Yeah — because behind the scenes we call the function that builds the server options for gRPC and the one that builds the server for HTTP, and we can just install the interceptors or the handlers there. So the receivers won't even be aware that they are being intercepted, yeah.
B
Cool, okay — so I'll move forward with this approach and update the PR, then. Yeah, best to also —
E
A
Oh, I just thought of one more thing: Prometheus. We're on the Prometheus 1 —
C
A
— .x libraries, and we want to upgrade some of the upstream Prometheus libraries, which we can't do until we go to Prometheus 2.x, which is a pretty big job. What's our feeling on going from Prometheus 1.x to 2.x?
E
A
A
E
I would say there's probably no rush with that upgrade, especially given that we want to GA, and it may be a big chunk of work — unless someone is willing to undertake it. Yeah, I think it's fine too.
A
E
Yeah — maybe, if someone is interested in doing that, and if it's a big chunk of work which cannot be delivered in one PR, maybe start a parallel component: just mark it as experimental, "don't use it", and maybe don't even enable it — just use the new dependency there. Hopefully it's possible to have both as dependencies at the same time in the collector, and to start working on the new Prometheus receiver, which uses the new data format and the new dependency.
A
I
Ideally — I looked at the code as well some time ago, and most of it is somewhat recent, if not very recent, so I don't think there should be a huge delta in moving to the most recent version.
I
I
Yeah, I had a couple of questions about this Prometheus receiver, if that's okay.
A
I
Is there any reason why there's a piece of code that drops all the stale markers that the Prometheus scrape manager generates? We drop them at the receiver itself, but things like the Cortex Prometheus remote writer still rely on the stale markers. So should that be a decision made at the exporter rather than the receiver?
A
I
Maybe — okay, okay, that sounds good. And one other question: is there a possibility of not using the scrape manager at all? Because it's a relatively lightweight function that could be done with simpler code than the scrape manager, right? It brings a lot of bloat into the code.
I
It's something that Prometheus uses to orchestrate all the scraping, since it's centralized; but given that OpenTelemetry already has the extension to do the Kubernetes discovery and everything else outside, is it really adding a lot of value to inherit the entire Prometheus scraping infrastructure?