From YouTube: 2021-03-31 meeting
A
Yeah, I think we have lots to discuss. I think folks are just adding things in. All right. Okay, I think there are other folks who have not joined yet, so let's wait a bit.
B
Hey, so me and Grace actually thought we would just start creating a small PR for resource labels, please take a look. I don't know if this is the approach that we want to take, or whether there is a generic approach across all the receivers that we need for resource labeling. Please take a look and give us feedback, and we can make the changes as appropriate. We want to just get the ball rolling on this one.
A
That's good. And did you have a design that you were proposing here for this? This is a PR in the receiver itself, right?
B
Yeah, it's a very simple change, it's just configurable. I'm not sure if this is the right approach, so I just want everybody else to take a look at it and suggest if you want to do it in a different way.
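As a rough illustration of the kind of configurable resource-labels option being discussed here, a minimal Go sketch, assuming a hypothetical receiver config with a static resource_attributes map (the field and function names are illustrative, not the actual PR):

```go
// Hypothetical sketch of a configurable resource-labels option on a receiver.
// Field and function names are illustrative assumptions, not the actual PR.
package receiverconfig

// Config exposes a map of static resource attributes that the receiver
// would attach to every metric it produces.
type Config struct {
	Endpoint string `mapstructure:"endpoint"`
	// ResourceAttributes are key/value pairs added to the emitted resource.
	ResourceAttributes map[string]string `mapstructure:"resource_attributes"`
}

// applyResourceAttributes merges the configured attributes into whatever
// resource attributes the receiver already discovered; values discovered
// from the scraped target win on conflict.
func applyResourceAttributes(discovered map[string]string, cfg Config) map[string]string {
	out := make(map[string]string, len(discovered)+len(cfg.ResourceAttributes))
	for k, v := range cfg.ResourceAttributes {
		out[k] = v
	}
	for k, v := range discovered {
		out[k] = v
	}
	return out
}
```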
E
So me and my team have been considering contributing a stateful delta-to-cumulative processor to the collector, and we were wondering if the Prometheus working group had discussed that before, or if it has been brought up and rejected, or anything like that. Basically, whether there's any context or interest around that we should know about before embarking on that type of thing.
A
I think, so if you look at the collector code and the receiver code, there is some logic to handle deltas as well as cumulatives, but I think there is no converter.
F
And we discussed this, but there are some difficulties in the data model itself. There is no way to, for example, say whether a delta is a duplicate data point; we were looking for something like identifiers or sequence numbers in deltas. There's an issue on this; it seems to be a difficult thing.
F
I was talking to a couple of people from your company yesterday, and they mentioned that you have a prototype or an easier solution. That's what I was trying to understand: do you not necessarily care about those types of cases, and are you just rebuilding from whatever is coming in? Can you explain your approach a bit? Maybe that would be the easiest way.
E
So we're really just scratching the surface of this right now. Our interest is actually in the transformation the other way around, cumulative to delta; our company's backend works best with deltas, so that's our preference. But we figured if we were going to embark on some sort of generic processor, it should be able to go both ways. I should say that we haven't dug into this as much as some other people have.
E
But our basic approach was to examine the set of attributes on all the metrics and use that to identify uniqueness, then track them in memory, and as new metrics come in, evaluate the difference between the old value and the new one to switch from cumulative to delta.
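A minimal sketch of the cumulative-to-delta idea described above: track the last cumulative value per unique metric identity (name plus sorted attributes) and emit the difference on each new point. The names and structure here are illustrative assumptions, not the actual processor.

```go
package deltasketch

import (
	"fmt"
	"sort"
)

// point is a simplified stand-in for a cumulative metric data point.
type point struct {
	name  string
	attrs map[string]string
	value float64
}

// converter remembers the last cumulative value seen per identity.
type converter struct {
	last map[string]float64
}

func newConverter() *converter { return &converter{last: map[string]float64{}} }

// identity builds a stable key from the metric name and its attribute set.
func identity(p point) string {
	keys := make([]string, 0, len(p.attrs))
	for k := range p.attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	id := p.name
	for _, k := range keys {
		id += fmt.Sprintf("|%s=%s", k, p.attrs[k])
	}
	return id
}

// toDelta returns the delta for a cumulative point. The first time an
// identity is seen (or when the counter resets) there is no usable previous
// value, so ok is false and the caller decides what to do with the point.
func (c *converter) toDelta(p point) (delta float64, ok bool) {
	id := identity(p)
	prev, seen := c.last[id]
	c.last[id] = p.value
	if !seen || p.value < prev {
		return 0, false
	}
	return p.value - prev, true
}
```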
F
Yeah, cumulative to delta sounds much easier; you have absolute values and you can basically just diff them. The other way around is a bit more difficult, and we haven't looked into that. We care about this for Prometheus, because Prometheus doesn't accept deltas; we have to send cumulative data at the end of the day, and right now, if you use OTLP to send deltas, we can't export them to Prometheus.
F
So it's kind of an important thing for us to do. We have a phased approach in terms of Prometheus support: the first phase is more about making sure that we can scrape Prometheus and export to Prometheus, and phase two will be more of this type of stuff.
E
Well, okay, that's good to know. Would your approach to that be to try to add a processor to the collector?
F
Yeah, that could be one of the possible approaches, but then there are other questions, such as: are we going to be doing this for all the incoming deltas, and who is going to be deciding that? I think we were initially thinking that this would live in the Prometheus exporter.
E
Yeah, so I guess as we make progress, we'll keep this group in the loop.
E
All right, that sounds good. Thanks for that context.
D
You may want to bring it up at the collector SIG; it's in an hour as well.
A
Yeah, because I think there is a larger context. I don't see Josh here, but there have been several discussions in the collector SIG earlier as well, looking at exactly this problem. So it looks like your use case is more going from cumulative to delta; it's not so much the other way around.
F
Yeah, we've only cared about this in the scope of Prometheus so far.
F
If you already do this, maybe it would make sense to make it reusable.
E
That's what we were thinking. Our New Relic exporter in the collector was doing this transformation as well; we figured we'd extract that out and find a way to make it reusable.
A
So Jay, I think you bring up a good point. It might be worth looking at how SignalFx is implementing that in the exporter and seeing whether it can be proposed as a more generic solution, and whether it fits into your use cases or not.
E
Yeah, and I'll definitely bring it up at the collector working group as well.
B
So Jay, is this only for Prometheus that you are targeting now, or do you want this to be generic across all the receivers?
D
Sorry, so ours is done specifically when we export to our backend. We basically have this whole transformation thing that we do; it's kind of a combination of the metrics transform processor and this, all unified into one thing. So I think we should also consider whether... maybe let's talk about this more in depth at the collector SIG.
F
Good to know. So you are also in the same spot with some of the challenges I was explaining, like what's a duplicate and what is not; you also don't have any solutions to those types of things, and that seems to be fine.
F
Yeah, I mean the protocol itself doesn't give you a good way to identify whether something is an actual delta point or a duplicate of an existing one. I'll ping an issue you can take a look at, and I'll CC you on that thread. It could be a minor thing.
A
Okay, are there any other topics? I think we don't have other specific topics on the agenda. Jana, did you want to bring up any other areas that we were starting to work on?
H
Yeah, I'm not prepared to share that with this group just yet; perhaps next week.
B
So, anything on the scale front? We were also looking at stressing the collector and seeing how far it can go. I know we discussed this in last week's meeting; I just want to confirm whether anybody is doing anything here, and we can also contribute.
A
I think so. Specifically in terms of performance testing, we have done some work on the ADOT side, that is, the AWS Distro for OpenTelemetry, and we have actually tested some of the pipelines of the existing components.
A
If you're interested in taking a look at those tests, we have an open source test framework, so I'll ping you the links.
F
There is also some work that needs to be done in terms of benchmarking. Would you be interested in contributing any of those bits? We want to see how this compares to the Prometheus server, from scrape to remote write.
F
Data-points-wise, I mean throughput-wise: how does it compare in terms of resource consumption and throughput? We discussed this briefly in the previous meeting, but we didn't take a lot of action in terms of actually building an environment to benchmark this, so that would be a useful contribution.
B
Yes, definitely. Let me bring you guys in on Slack so that we can divide up a few things and get going.
A
Yeah, that's a good idea. There's also an open issue that captures this: we are interested in being a bit more formal about the performance benchmarking criteria, and as Jana has listed in that issue, there are several benchmarks that we ran and some testing that we've done before. So please take a look.
F
I have an update, if we can continue on some of the other scaling things that we decided on.
F
So we decided to scope down the operator to initially work like: let's scale up to this specific number of replicas, and then we will eventually improve it with other autoscaling policies. The initial thing will be that you go and set the number of replicas you want, and if you realize that your cluster is really big, you go and manually increase it.
F
The improvement on top of this will be having a policy where you will be able to autoscale based on maximum targets per replica, but I think we were focused on doing the basics first, because there's a lot of complexity in terms of how we're going to merge this into the existing operator. So we have to get all those things done.
F
Plus get the basic functionality in, and then we can work on the autoscaling piece. Just as an FYI.
B
Yeah, but Jana, we cannot actually do anything on autoscaling unless we do sharding first, right?
F
Yeah, so sharding: what I mean is that the operator will be doing the sharding based on the number of replicas. If you say five replicas, for example, it will take all the targets, split them into five, and then communicate which targets each replica should scrape. But it's not going to autoscale to, say, 10 if you have more targets; you will have to set that yourself.
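A minimal sketch of the target splitting described here, assuming a simple hash-based assignment of discovered targets to a fixed number of replicas (the operator's actual allocation strategy may differ):

```go
package shardsketch

import "hash/fnv"

// assignTargets maps each discovered scrape target to one of `replicas`
// collectors and returns, per replica, the list of targets it should scrape.
// replicas must be greater than zero.
func assignTargets(targets []string, replicas int) [][]string {
	shards := make([][]string, replicas)
	for _, t := range targets {
		h := fnv.New32a()
		h.Write([]byte(t))
		idx := int(h.Sum32() % uint32(replicas))
		shards[idx] = append(shards[idx], t)
	}
	return shards
}
```

With this kind of assignment, changing the replica count simply means re-running the allocation and communicating the new per-replica target lists, which matches the manual-scaling-first approach described above.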
D
Sorry, I'm just trying to catch up on chat. I didn't notice, I wasn't in the channel. So did we definitively confirm that a single instance can't scale to the number we need?
D
Yeah, I guess I just want to make sure that we do a reasonable amount of optimization first, before we go for sharding.
F
I think our use case is a bit different: we want to give more of an experience where things are handled automatically regardless of your cluster size, and that's why we cared about this. But initially we'll start without autoscaling, because we are trying to sort out what the CRDs will be like, what we're going to be merging into the operator, and so on.
F
Okay. And Jay, if you are interested in improving the performance: the big bulk of our work is figuring out how we're going to merge this into the operator.
F
That's why we're so focused on that, but we need to do optimizations on the receiver and the exporter, and there could be some inherent performance issues in the collector as well. We haven't benchmarked any of those things, and I think starting from comparing the performance with the Prometheus server would be a good initial point. Most of our customers would say, hey, I just want to use a lightweight agent to be able to get rid of Prometheus servers.
F
So even if we are performing at the level of the Prometheus server, I'm not sure that should be our target; we should be much, much more efficient. But we haven't done a lot of work to be able to evaluate the collector and whether there are any inherent issues there.
F
We need time to be able to get to that. So if you care about it and you have bandwidth, that would be super nice: we take care of the operator side, and you can take care of the single-replica experience.
F
Are you still interested in using this as a sidecar as well, like the simple Prometheus receiver that you had in contrib? Bogdan, you pointed out a couple of times that that's the case you care more about, rather than...
D
Yeah, well, it's not so much for the sidecar case as it is for working with the observers and the service discovery we have native to the collector. It's built on a one-to-one model: one instance of a receiver scrapes one target, whereas the Prometheus receiver handles a whole set of targets.
D
So it's just a little bit of a different model. One thing I'll definitely be looking into is that, for that case, we don't need any of the Prometheus machinery; we just want to parse the Prometheus format. We don't need any of the Prometheus relabeling or anything, because there's kind of a native way to do that in the collector, I think.
D
It sounds like y'all are more focused on the drop-in use cases, like a drop-in replacement of the Prometheus server, whereas I think we're more interested in how you use the native OTel.
F
Yeah, we're interested in being able to give something to the customer that will scrape their Prometheus and export to wherever they want. That's a very long-term goal; we're not sure if OpenTelemetry would be the solution for that, but that's why we're trying to do these types of things, to see.
F
It could be a good place, but our initial target is really being able to scrape Prometheus and export to our managed Prometheus without asking people to deploy another thing, because we want to make the OpenTelemetry Collector available on every platform.
F
One of the difficulties is that at AWS everybody uses four to five different agents, and that's a burden in terms of operating agents; it costs more resources, it's just a pain. That's the reason we care about this, at least making the Prometheus piece work, but the overall goal would be broader.
F
If we can export Prometheus data to other places too, that would be the next step. So you, as an operator of a Kubernetes cluster or anything else that has Prometheus metrics around, shouldn't have to think too much about it; we should be able to scrape all those things automatically for you.
H
Yeah, so I just put a link into chat to a Google doc where I've got the goals and non-goals we were discussing yesterday, cleaned up a bit.
H
First time I'm trying to present with Zoom, so it's asking me for access to... oh.
A
Oh, I think you have to drop off and join again. Yeah, I'll be back in one moment. All right, cool. In the meantime, I think Richard had a question here in chat: are there any numbers per core or RAM for the current implementation?
I
My question was whether there are already some numbers for how much ingestion the collector and such can do. I'm also trying to find current numbers for how much per-core ingestion Prometheus does; that involves more than just collecting, but it is probably a good baseline to aim for, just to try and give some...
F
Dimensions: what are the dimensions that you are looking at, in terms of how many targets you're scraping and how many metrics are published from those targets? If we can share the same setup and benchmark against the same thing, that would be super useful to compare.
I
I'll try and see if we have any somewhat current numbers, and then we can just replicate the same with OpenTelemetry. For Prometheus itself this covers more than just collecting, and it doesn't truly matter exactly how many targets there are; many targets with few metrics, or vice versa, or one huge target, is roughly equivalent.
I
Also, a lot of Prometheus's cost is spent in storage and such, which would not be the case for the collector, so I'm also trying to find numbers on the Grafana agent. Of course, that is a lot more stripped down, and maybe even the work-in-progress Prometheus agent, because that is even more stripped down and, as such, a lot more like the collector.
F
I can't share numbers, because I didn't... I mean, I sounded a bit stupid in the previous meeting, but this is what actually happened. I was trying to reproduce some concurrency bugs that we have in the remote write exporter, and that's why I had to run a bunch of load tests. At some point I realized that the issue is not as reproducible as it used to be, and that particular load got me thinking:
F
the receiver is becoming more of a bottleneck at this point, and that number looked very small to me compared to what the Prometheus server does. So I don't have really conclusive numbers, and we should work on that as soon as possible, but I can tell you that I think it's going to underperform what we are expecting, according to my initial load tests. But I might be wrong.
A
Okay, cool. I think Richard probably has a hard stop shortly, but Anthony is back, so...
H
Yeah, so I think basically this just lays out, at a high level, the goals and non-goals, what we're going for. But I think Jana earlier got to the core of it, which I think is this part here: you want to be able to deploy the collector as a StatefulSet with multiple replicas, one or more replicas, and then be able to shard or distribute the targets.
H
That is, distribute the scrape targets from a Prometheus service discovery configuration to those replicas in some manner, and then reallocate those targets if the number of replicas changes. But we are not, at this point, really concerned with automatic scaling.
H
We expect that if we're able to get to a point where we have a fixed number of targets and a fixed number of collectors, and we can allocate the targets among those collectors, then when the scaling comes in, we can react to that scaling, and the two problems can be handled independently.
H
And I think we would want this to replace the entire Prometheus scrape config, so we don't want to worry about trying to figure out whether there are some targets that should be handled in this manner and some that shouldn't. If a user has a more complex configuration where they want to control which targets are scraped by which collectors or which nodes, they would not use this system; they would have to create their own configuration. And ideally, multiple collector deployments with multiple different configurations could live side by side.
C
So basically, I just want to restate this point to make sure I understood you: you don't intend for the automatic division of which instance scrapes which targets to coexist with manually tuning knobs for what scrapes which targets; it's one or the other.
J
I think this looks awesome. I think this is a very good direction, and it's well scoped.
H
Cool, okay. So my next steps, then, are to try to lay out a set of tasks, to create a work breakdown for this: what are the things we actually need to do in order to make this a reality. That's what I'll be starting on today, and hopefully we'll be able to discuss it next week or whenever our next discussions are.
A
Any other comments, folks? Again, great time to ask questions or just add them to the doc.
F
We may add a few configuration settings, but they're going to be additions, not removals. So I think that's the only thing.
H
Yeah, and I think any changes to the collector would be localized to the Prometheus receiver. The proof of concepts that had been created for this had an extension that was used to deliver scrape targets to the collector. I think we want to look at moving that into the receiver and adding some configuration on the receiver for that, to better localize it, but I don't expect that there will be any significant changes to the collector itself in order to accomplish this.
J
One more question: are the Prometheus CRDs the only way to configure a Prometheus receiver? Can I also specify other ways of doing things as well, in addition to the config that's going to be generated from those?
H
Yes, any Prometheus scrape config. The expectation would be that the collector would be configured with a Prometheus scrape config; the controller would pull that out before it creates the collector, use that as its config, and merge it with whatever comes from the Prometheus CRDs as well. Perfect.
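A rough sketch of that merge step, assuming the controller reads the prometheus receiver's scrape_configs out of the collector configuration and appends jobs generated from the Prometheus CRDs (the types and field paths are simplified assumptions, not the operator's actual code):

```go
package mergesketch

import "gopkg.in/yaml.v3"

// collectorConfig models only the fragment of the collector config we need:
// receivers.prometheus.config.scrape_configs.
type collectorConfig struct {
	Receivers struct {
		Prometheus struct {
			Config struct {
				ScrapeConfigs []map[string]interface{} `yaml:"scrape_configs"`
			} `yaml:"config"`
		} `yaml:"prometheus"`
	} `yaml:"receivers"`
}

// mergeScrapeConfigs unmarshals the user-supplied collector config, appends
// the scrape jobs derived from ServiceMonitor/PodMonitor CRDs, and returns
// the combined scrape configuration the controller would hand out.
func mergeScrapeConfigs(raw []byte, crdJobs []map[string]interface{}) ([]map[string]interface{}, error) {
	var cfg collectorConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	return append(cfg.Receivers.Prometheus.Config.ScrapeConfigs, crdJobs...), nil
}
```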
K
Can I ask for one thing for the Prometheus CRD: can we make sure that it's not called exactly "prometheus", so we can avoid a naming conflict with the existing Prometheus operator?
H
My intent here was that we would watch the CRDs created by Prometheus, like the ServiceMonitor and PodMonitor, for service discovery, not that we would be creating new CRDs for those. Okay, I just wanted to clarify, just wanted to make sure of that.
A
Cool, any other topics folks have? Otherwise, I think we've got a bunch of areas to work on.