From YouTube: Grafana Agent Community Call 2021-11-19
A: All right, I will go ahead and get us started. My name is Matthew Durham; I'm a developer here at Grafana working on the agent, and this is our first community meeting, so yay. Most people out here are Grafana Labs people, that's awesome! We want everybody to be able to join, and we're trying to open it up. I want to take just a minute to talk about why we're doing this. One of the things is that we want to be a more open project, especially as it grows. It was originally just a project by our esteemed colleague Robert there, and he was the lone toiler on it, but now the team has grown quite a bit and the agent is getting more uptake in usage. We want to develop things in the open, and part of that is opening up our meetings and, to some degree, opening up our roadmap (we're going to talk about some other things we're opening up), and just trying to drive community involvement and get input from the greater community.
A: So this is a chance for people to jump in. And I clicked admit for somebody; oh, they're there. Okay, hey Andre, we were just getting started. So yeah, kind of the flow: I would say people are free to add to this agenda. I'm not exactly sure what the permissions are for non-Grafana people, but I will take a look at that, and this is for anything you all want to talk about. Instead of just Grafana Labs people talking, it'd be great if we had conversation from the community. So that's kind of my little spiel on the intro. I think it would be kind of neat just to introduce the Grafana Labs people, so people know who they are. We won't do this every meeting, I don't think, but probably for the first one, so I'll go around the room and call out people. So again, Matt Durham from Grafana Labs. Robert?
B: Sure, so hi, I'm Robert. I was like the second programmer to work on the agent; Tom Wilkie wrote the first commit and then I took it from there, and that was back in 2019 or early 2020. And now I am the tech lead; "by default" I think is the official term for that.
C: True, I can go next. Yeah, I'm also a software engineer here at Grafana Labs. I work mostly on OpenTelemetry, but I do have a task about the agent on my queue that I promised to finish someday.
D: Hey, I'm Goutham. I'm a Prometheus and Cortex maintainer, and I recently moved to the agent squad to work on OpenTelemetry and help a little with the agent. Yeah, Robbie?
E: Hey, how's it going, everybody? I'm Robbie. Excuse me; I'm new to Grafana. I'm a software engineer here, only four or five weeks in, and I am going to be working on the agent, so excited to be here. Thanks, Matt.
F: Hi everyone, my name is Mario. I'm actually on the Tempo squad, but a lot of the work that I do is writing bugs for the tracing bits of the agent, so that I can then solve them.
A: Cool. Is there anybody I left out, or anybody else who wants to give an intro? You're not required to by any means. All right, cool. Yeah, I guess we can move on and talk about Prometheus Agent.
B: So I wrote this down and I didn't plan exactly what I was going to say, but as of last week, or the week before maybe, we have officially taken the Grafana Agent code and moved it to Prometheus. First, some of the details there: the code still exists in the agent.
B: We do want to get rid of it and use Prometheus's code once that's released in a stable form, but that hasn't happened yet, so it still exists in two places for now. But generally I'd like to talk a little bit about why we did that and whether we'll keep doing similar things. The Grafana Agent actually started as a project because the Prometheus team at the time weren't interested in supporting an agent-like mode, and we wanted to build it for our users of Grafana Cloud.
B: But since then, well, Grafana Labs always tries to contribute as much code as it can. The goal was never to have a second Prometheus or, you know, fork Prometheus or anything like that. It was always: let's do this downstream, make it really good, and if Prometheus decides that they want the code, we'll definitely move it. So that has happened, and we've moved the code now.
B: The code we've moved is specifically the storage bits. Essentially, since the Grafana Agent launched as "a Prometheus without a TSDB", the relevant code, instead of storing metrics to a TSDB, writes them directly to a write-ahead log, and then remote write reads from that write-ahead log. So that's the code we moved.
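To make that concrete, here is a hedged sketch of what an agent-style metrics config looks like; the field names follow the Grafana Agent's config shape as I understand it, so treat them as illustrative rather than authoritative. The point is the data path: scraped samples go to a write-ahead log on disk and remote_write ships them upstream, with no local TSDB.

```yaml
# Illustrative sketch only: scrape -> write-ahead log -> remote_write,
# with no local TSDB to store or query metrics.
metrics:
  wal_directory: /tmp/grafana-agent/wal   # samples land in the WAL here
  configs:
    - name: default
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']
      remote_write:
        - url: https://prometheus.example.com/api/v1/write
```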
B: There's a lot of other agent code, like the embedded exporters, the scraping service, and host filtering; those things are still in the agent, and we're starting to work with upstream on some of those, about how we can take the next iteration of them and also make those public. I think, you know, generally the strategy is: if we have ideas for how we can make things better...
B: I think we should absolutely use the agent as a test bed for new use cases, like Prometheus agent mode, or host filtering, or the feature that Robbie proposed that we'll talk about later, and then, once they're pretty good, move them around. We don't have to own things.
B: We can share them with others, and while we already shared some code with Prometheus, like the SigV4 support sometime last year, I think Prometheus Agent is the first time we did a huge contribution, and I'm excited to do more of these, given the opportunity.
B: Does that make sense to everyone? Does anyone have any questions about whether that makes sense as a strategy? I think I had a blog post we published this week, maybe, about this, where I do kind of mention the goals of the agent. Actually, maybe I can just pull it up; let me find it.
B: That's the one there. Now we have a blog, right?
B: Nice, thank you. So yeah, the three goals I talk about are: we want the agent to be the best companion piece we can be to the open source stack, so that's Prometheus, Loki, Tempo and, well, I guess Grafana, even though we don't really interact with Grafana much. Then, after that, I think enabling new use cases that make the first goal more possible is the second big goal, and that's where Prometheus agent mode comes in; that's where host filtering and all these other things we're trying to do come in.
But then, if you take those two, I think we have a pretty decent project. We don't want to keep things to ourselves, though, so the third goal is to share those proven use cases with other people. So mainly, the Grafana Agent as a project tends to be more around helping others rather than just building for ourselves.
B: Someone's going to watch the YouTube recording later and think, "I don't know what he's talking about." So if anyone has questions they want me to clarify for the YouTubers, please... just, really: like, comment, subscribe. Yeah, smash that like button.
A: Andre looks like he has a question, and I'll repeat it, since I'm not entirely sure whether messages go on the recording. Andre says it's not clear whether the Grafana Agent and the Prometheus Agent are different projects; is the Grafana Agent going to use the Prometheus Agent?
B: Thank you, that's a good question. They are different projects. So we moved the agent code we just talked about; it is now part of Prometheus's standard code base, and as of this week you can download the beta release of Prometheus and launch it in an agent mode, which is using the same agent code we were talking about. Once that is out of beta and is stable, we are going to remove the code from the agent, import Prometheus's agent code as a dependency, and then use that directly.
A: All right, so the next topic we have is: should there be a public bug scrub? A little background: internally, for probably three or four months it feels like, we've been doing an internal bug scrub. We just look through the backlog and try to clean it up and find any issues that might have been floating around that we either resolve or want to add more actively to the backlog, and this is something we just do internally.
A: I believe it's about every six weeks, to just try to resolve issues, and we had talked about, as part of our opening up, making that a public meeting for people to comment in. You know, if you have a comment, you can always add it to the GitHub issues, but this allows for maybe a more free-form discussion and allows people to kind of come in and jump in. So what do people think about that?
A: And this could be, you know, something different, from Labs or, you know, community members.
B: It is pretty boring; it is like an hour of looking through issues, so it wouldn't be the most exciting call. But I do agree with Robbie: asynchronous communication is the backbone of Grafana Labs, but I still think sometimes you do need that synchronous communication to resolve things, and it might be helpful. On the other hand, if there were, like, 200 open issues, we wouldn't want every single one of those 200 people to be on the call to defend why their things shouldn't be closed.
B
So
I
think
it's
a
give
and
take
like.
Hopefully
it
would
you
know
if
it
was
public
we
could.
It
would
be
obvious
to
people
when
it's
time
to
have
a
on-call
discussion
and
and
or
live
discussion,
and
not
just
like
keep
it
to
comments,
because
it
you
know,
like,
I
think
the
argument
I'm
making
is.
It
will
take
time
to
to
discuss
the
issues
with
with
someone
who
opened
it.
A: Yeah, and I think planting the seed of it being open, you know, can grow and evolve better than if it's closed. So I guess: is there anyone who thinks we shouldn't do it?
A: Okay, so we'll probably get that scheduled, and we will probably add it to the community calendar, and we'll also add links here. Anybody, any questions about that? All right: designing in the open, and the RFC.
B: All right. So, in the spirit of doing more things in the open and trying to help build a wider community around the agent, I had opened an RFC, which I will link in the Google chat, to talk about the process of how we design new features going forward.
B
I
think
you
know
I
I
might
have
made
a
mistake
with
this
one
and
I
think
there's
still
work
to
do.
The
the
spirit
of
what
I
was
trying
to
accomplish
was
to
a
get
consensus
that
we
want
to
do
things
publicly.
First,
whenever
it's
possible,
and
we
want
to
give
people
the
the
ability
to
to
engage
in
ideas
before
like
we
have
consensus
internally.
I
think
you
know
we
tend
to
accidentally.
Do
things
privately
a
lot?
You
know
we
have.
B: We have a Grafana internal Slack, and it's really easy to just only use that, so it would take active effort, and I think having a proposal for how we make that active effort to do things more publicly was a good idea.
B: I think where I failed was trying to be strict about what it meant to make a public proposal. I do like markdown for code reviews and for history, but at this point I'm not sure... I don't know if it's a good idea to say big designs must be done as markdown.
B: Maybe, maybe not, I don't know; I guess it's open for discussion. We want to have more things designed in the open; we want to allow people to, you know, contribute designs; we want people to get engaged. Are there any concerns about how that happens? Do you think we should just support whatever people want to use for proposals?
B: Well, if there are no comments, I think the direction I'll probably go in is that we'll keep an archive of public proposals.
B: Now, those may be in the form of markdown documents in a folder somewhere, or they may be markdown documents that link to a Google sheet or Google doc or whatever, I don't know. But I think we need, I need, a second iteration of this RFC, in a way that is less prescriptive and more ambiguous, so that there's less effort to get involved.
B: I agree. I think one of the interesting things is that this isn't the first time I've tried to do this. I made a similar proposal to Loki, which we implemented very briefly, and the problem was: markdown is really nice to read, but it's so hard to review, and I think the tooling around code-reviewing markdown just doesn't exist. I don't know if there's a solution to that; maybe we just try it, and, you know, we can see how we can improve the workflow around reviewing it.
B: What people, or the Loki maintainers, really liked was the ability in Google Docs to have the comments on the side, so they don't break up the thing you're reading. When you review markdown (like, I reviewed the Prometheus blog post from Bartek), it was just so hard to keep track of what was going on, because every few lines there was a break with someone's comment. It was pretty distracting.
B: I think that's an interesting idea. It does bring back the "are we being too prescriptive" concern. I could imagine a flow where it's: open an issue, link to your Google doc, and then open a PR to turn that into markdown. I think that's good, but should we leave it open so that, if people didn't want to do the two different steps, they could just go straight to the markdown RFC?
A
I
I
think
I
think,
that's
fine,
because
I
think
at
this
point
just
getting
community
proposals
in
any
pretty
much
any
form
that
we
can
work
with
is
a
net
win.
B: All right. Well, I think that's good feedback, and I have enough to take a second shot at this, either as markdown, or turning it into a Google doc and then turning that into markdown at some point.
A: Okay: the proposal for remote configuration. Yeah, Robbie.
E: Sure. So, basically, this is trying to give us the ability to pull the configuration from some remote source.
E: We propose to support several remote sources, like S3, Azure Blob Storage, and GCP storage, among others, plus HTTP and HTTPS. The thinking behind this feature is that, for users who may be running multiple instances of the agent across their infrastructure, it would be much easier to be able to pull a single remote config (or they could be different remote configs, for example), but it definitely makes managing the configuration much simpler.
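A minimal sketch of what pulling configuration from a remote source could mean, dispatching on the reference's scheme. The function name, the scheme list, and the stubbed cloud branches are all illustrative assumptions for this discussion, not the agent's actual API; only the local-file and HTTP(S) branches are wired up here.

```python
# Hypothetical sketch: resolve a config reference that may be a local path
# or a remote URL, dispatching on the scheme. load_config and the scheme
# handling are illustrative, not the agent's real interface.
from urllib.parse import urlparse
from urllib.request import urlopen


def load_config(ref: str) -> str:
    """Return raw config text for a local path or remote URL reference."""
    parsed = urlparse(ref)
    if parsed.scheme in ("", "file"):
        # Plain paths and file:// URLs read from the local filesystem.
        path = parsed.path if parsed.scheme == "file" else ref
        with open(path) as f:
            return f.read()
    if parsed.scheme in ("http", "https"):
        # Fetch the config over HTTP(S) at startup (or on reload).
        with urlopen(ref) as resp:
            return resp.read().decode()
    if parsed.scheme in ("s3", "gs", "azblob"):
        # Cloud object stores would need their SDKs (e.g. boto3 for s3://).
        raise NotImplementedError(f"scheme {parsed.scheme!r} not wired up here")
    raise ValueError(f"unsupported config scheme: {parsed.scheme!r}")
```

The appeal is that the agent itself decides when to re-fetch, rather than relying on an external sync tool to keep a local file fresh.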
E: So there were a few issues that cropped up. As you can see on the issue that I filed, some people have brought up some valid points that I think we need to address, and I'll also be updating that issue today and taking some of those things into consideration. I know Robert probably has a few things to say about this one as well.
B: Yeah. I mean, I've been doing a lot of thinking about the agent over the last two years, and I had decided at some point that there's a lot we could gain, in terms of the project, if we supported loading things remotely. I am, like, months past the idea that we should do this, and I've already dove really deep into how we will do it, and I think that kind of caused a few problems.
B: I think it meant that this proposal was framed in terms of technical details and not why we should do it, and I think that's completely my fault. I asked Robbie to take charge of this, and it's unfortunate that the context got lost, so I am helping him kind of regain that context, and I'm going to let him drive it, so there might even be new ideas I haven't thought of yet for how people might use this.
B: But for the background context: I've been thinking a lot about how to improve the scraping service of the agent and make it more generally useful. One of the parts of the scraping service that I don't like is the configuration API for uploading individual files.
B: That's aside from the performance problems, I mean, that make me not like it. But I think, if you could just say, "here's where these things exist", like a folder, a remote folder, we could just pull them dynamically. And so that was kind of the seed of: here's how we can start using remote things to configure the agent, instead of building in features and solutions that we would want to support. Or, you know, whatever; the second idea was kind of brought in from Matt.
B: In past jobs, he's worked in places where you can say, "here's an S3 bucket with a file; just load that for your config", and I think that's actually kind of generally useful. And, my Roomba is running really loudly... anyway, ignoring the S3 aspect of it:
B: If you could also say, "here's an HTTP server that has a file", that HTTP server could dynamically generate a config. Most people won't write servers like that, but I do think it opens the door for command and control of agents. Command and control being: some server, you know, that you configure to have a fleet of agents, or tags, or whatever, and then agents will connect to that server to ask what their config is, and that server will build it at request time. I'm really rambling here.
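The command-and-control idea can be sketched as a tiny HTTP config server that renders a per-agent config at request time. Everything here is illustrative (the `tenant` query parameter, the rendered YAML fields, the port); it is a sketch of the idea under discussion, not an interface the agent defines.

```python
# Hypothetical sketch of a command-and-control config server: each agent
# fetches its config over HTTP, and the server renders it at request time.
# The tenant parameter and the YAML fields below are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


def render_config(tenant: str) -> str:
    """Build a per-tenant agent config on the fly (fields are illustrative)."""
    return (
        "metrics:\n"
        "  configs:\n"
        f"    - name: {tenant}\n"
        "      remote_write:\n"
        "        - url: https://prometheus.example.com/api/v1/write\n"
        f"          headers: {{X-Scope-OrgID: {tenant}}}\n"
    )


class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /config?tenant=team-a
        query = parse_qs(urlparse(self.path).query)
        tenant = query.get("tenant", ["default"])[0]
        body = render_config(tenant).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/yaml")
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8080) -> None:
    """Run the config server (illustrative entry point)."""
    HTTPServer(("localhost", port), ConfigHandler).serve_forever()
```

An agent pointed at `http://config-server/config?tenant=team-a` would then pick up whatever the server decides that agent's config should be.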
B: That's why I started thinking about this. But, right, okay: so, off those two ideas, I think we were able to reduce it down to its simplest form by saying config files could be URLs. That is, like, the simplest way to enable both of those use cases, and I kind of got excited about that, because I figured it's really generic and a lot of other people could benefit from it, which feeds way back into what I was saying earlier about sharing use cases with other people.
B: So, rambling aside, I want to help recapture that context and try to justify it more, rather than focusing on just "here's exactly how we're implementing this thing that Robert Fratto already decided he wanted to do."
G: I just had a comment. I think it's a pretty good idea, especially when you think about, you know, the plain agent deployed on VMs and stuff. It's always a hassle to get the config deployed, and then you have a change and you need to, you know, rebuild your machine. So yeah, having some, you know, startup scripts or whatever for that problem... it would be nice just to, you know, have it supported out of the box.
A: Yeah; without, you know, Kubernetes or even config management, rolling out new configs was extremely problematic, pushing those files around, so making the switch so that we could just pull from one central location reduced the state that we had to push and also really helped with configuration drift.
A: It was a big win when you don't have all that infrastructure, you know, Kubernetes or Consul or something, or even something like Ansible or Chef, or containers; we didn't have any of that, so it made life much easier.
B: I think there's a pretty strong counter-argument of, like, "why can't I just run a tool on my machine that syncs, you know, S3 to a file?" Sure, that's fair, and that would work, but I think, once we have this in place, it opens the door to more interesting use cases. So, for example (and Goutham might not like this):
B: For example, what if the password file fields for remote write could also be URLs? Then you could say, "here's my password in Vault", and then, on every request, it would pull from Vault and, you know, use that password for remote write. And then other, you know, file-based things where it makes sense could also be URLs and also be dynamically loaded, and that's where I think most of the power comes from; that wouldn't be equivalent to just running some syncer.
A: Yeah, and Andre would like to have the ability to have remote config with some kind of overrides, so that they can, you know, centralize scraping config.
A: I definitely feel like that kind of ties in with Robert's idea: if it's an HTTP endpoint, you can kind of, you know, do what you want. You know, if you add a tenant ID to that HTTP request, you generate that config on the fly, and there are probably some more mature solutions than rolling your own HTTP endpoint that could be utilized there, but it opens the avenue. Andre, did I get your point there? Are there any clarifications?
B: Please, like, feel free, if you're interested in this, to get involved in the proposal. I might ask Robbie to make a second proposal that addresses the concerns that we got in comments; otherwise, the comment list will just keep growing and growing and growing. But if there's a second one, we'll link to it from the first and close it. And I'm not sure yet if I'll actually ask you to do that, Robbie.
B: I think generally, for now, I want integrations to be useful in a way such that they interact with the other subsystems of the agent in some way, so an integration might generate Prometheus metrics, Loki logs, or Tempo traces. As long as it meets one of those three, I think it's fine; but if it was, for example, embedding Filebeat and writing to Elasticsearch, then that would probably be out of scope.
B: I'd say so: definitely, like, open an issue and talk about it as a proposal, and then we'll have a discussion, but it doesn't sound like a no, or like an obvious no. Yeah, cool.
B: Yeah, so we're currently trying to get the operator to be at feature parity with the agent. Currently, it's lacking integrations and it's lacking traces; now, I'm not working on traces yet (someone might at some point), but we are currently working on the integration support. One big issue is: what happens when someone deploys two CRDs, or custom resources, that are both the redis exporter integration?
B: Today, because the agent only lets you define one redis exporter, that would mean you would need to deploy two agent pods to do that. Now, I started implementing it, but then I realized the code was a nightmare, and we already want to enable more than one instance of a single integration.
B: So my work has shifted to doing that first. It's currently in progress and should be out within the next release or two: you could configure the agent to have an array of redis exporters, not just one, and that will make writing the integration support for the operator so much easier.
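As a hedged sketch of the change being described (the field names here are assumptions, not the final syntax), the integration config would move from a single block to a list, so two redis exporters can coexist in one agent:

```yaml
# Illustrative only; field names are assumptions, not the shipped syntax.
integrations:
  redis_configs:            # planned: a list instead of one singleton block
    - redis_addr: redis-a:6379
    - redis_addr: redis-b:6379
```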
G: I also added something to jsonnet-libs to generate the jsonnet bindings for your CRDs, so we can, yeah, create those straightforwardly.
B: That's awesome. Where is this?
G: It's on GitHub; it's a library of different auto-generated APIs from CRDs.
B: We have a lot of projects that receive contributions in respect of another thing, so, like, if someone opens an issue in grafana/helm-charts for the agent Helm chart, we're not going to see it, and I don't know how to solve this yet. It's a weird issue; yeah, it's just a random thought.
A: Okay, cool. Yeah, does anybody have feedback on how the actual meeting went, or things you'd like to see done differently? It's only our first one, so I don't know if we've really got a rhythm yet. We tried to emulate the other community calls, but, as always, we can veer from that based on, you know, community feedback.
B: If you were to attend again, would you want anything to be different?
A: All right, with that we will end. I should have looked up when the next meeting is, but I will put it in the document.
A: Yeah, so it looks like December the 15th will be the next one, back on our normal date. All right, well, I appreciate everybody's input and everybody coming here. And feel free: we have the agent channel in the Grafana community Slack, and you're more than welcome to bring anything up there; or, we added access, so you can add topics you'd like to see to the document. Appreciate y'all coming. Thank you.