From YouTube: Grafana Agent Community Call 2023-03-22
Description
All about Agent Flow! For more details about our community calls https://docs.google.com/document/d/1TqaZD1JPfNadZ4V81OCBPCG_TksDYGlNlGdMnTWUSpo/edit
A
Let me actually bring it up, because I don't remember. We're going to talk about: one, what we're doing to bring Grafana Agent Operator functionality directly into the agent; two, what we're doing to make Grafana Agent Flow configs reusable, with something called modules; and three, what we're doing to make Grafana Agent Flow horizontally scalable. So without further ado, I want to hand it off to Craig, who will be talking about the operator stuff.
B
Hey, my name is Craig. I'm a developer on the agent, and I've been working for a while on the operator, specifically on handling PodMonitors, ServiceMonitors, and Probes in the agent. For anyone not familiar with that: a PodMonitor is a Kubernetes custom resource that you can drop into your cluster that says, essentially, "I want to scrape pods that match this spec."
B
Currently we can consume those with the agent only through the Grafana Agent Operator. We have an operator which runs in your cluster, and you create custom resources for PodMonitors, but also for configuring your agent. The problem with this model is that it forces people who want to use PodMonitors to use the operator to deploy their Grafana Agents, and there are some rough edges there.
B
Okay, so here I'm saying I want to find all the PodMonitors in my cluster — you could filter it down to specific PodMonitors only — and I'm saying I want to forward them to my Grafana Cloud account.
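A pipeline like the one being demoed can be sketched in River roughly as follows. This is a hedged sketch, not the config from the demo: the component name `prometheus.operator.podmonitors` follows what the feature eventually shipped as, and the endpoint and credentials are placeholders.

```river
// Illustrative only: discover and scrape every PodMonitor in the cluster,
// then forward the samples to a remote_write endpoint (here, Grafana Cloud).
prometheus.operator.podmonitors "default" {
  forward_to = [prometheus.remote_write.cloud.receiver]
}

prometheus.remote_write "cloud" {
  endpoint {
    // Placeholder URL and credentials.
    url = "https://prometheus-prod.example.grafana.net/api/prom/push"

    basic_auth {
      username = env("PROM_USER")
      password = env("PROM_PASS")
    }
  }
}
```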
B
It could be any Prometheus anywhere — that's all it is. So when I run the agent, it will find all of my PodMonitors, it will discover the pods that match them, it will scrape them, and it will forward them, all in one go. Here I'm just running this against my local k3s cluster. I have exactly one PodMonitor, for CoreDNS. It's trying to match pods that have the label k8s-app: kube-dns.
B
Where is this thing? There's one pod in my cluster that matches k8s-app: kube-dns, so with this PodMonitor running in my cluster, I would expect it to find that pod. And here it found the PodMonitor, and, as expected, it found that pod and scraped it. If we look at the Grafana Agent UI — this is a new feature in Flow, too; mad props to the team that made this — yeah.
B
It decouples the deployment of the agent from the dynamic configuration that PodMonitors give you. So if someone would rather deploy the agent with our new Helm chart — which is really cool — but still use PodMonitors, they'll be able to, in the future.
B
So hopefully people who are frustrated with the operator, and only want it for PodMonitors, ServiceMonitors, and such, can use this in the future. Any questions about that?
A
Thanks, Craig — I think that's super cool. One of the things I find really interesting about bringing PodMonitors into Flow is that you could hypothetically use a PodMonitor component, convert that to OTLP data, and then send it to some other backend, which is really cool. You can't do that with Prometheus Operator, and you can't do that with the agent today. So I'm really excited to see all these new capabilities from putting operator support directly into Flow.
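The Prometheus-to-OTLP conversion mentioned here can be sketched with Flow components. This is a hedged illustration rather than a tested config; the exporter endpoint is a placeholder.

```river
// Illustrative only: feed PodMonitor-discovered scrapes through the
// Prometheus-to-OTLP converter, then export them over OTLP gRPC.
prometheus.operator.podmonitors "k8s" {
  forward_to = [otelcol.receiver.prometheus.convert.receiver]
}

otelcol.receiver.prometheus "convert" {
  output {
    metrics = [otelcol.exporter.otlp.backend.input]
  }
}

otelcol.exporter.otlp "backend" {
  client {
    endpoint = "otlp.example.com:4317" // placeholder
  }
}
```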
B
Yeah, I really like that we're handling the metrics directly. With the operators — Prometheus Operator, or Grafana Agent Operator — they find the PodMonitors, then statically generate a config with all that data in it and hand it to the running agent. So there's this intermediate step of "okay, let's read the big old blob of config that it generated," whereas with this model we just scrape directly, and you can see the state of, you know, which PodMonitors you have found.
C
What others of these are we going to be implementing in Flow?
B
So
the
big
three
are
pod
monitors,
service,
monitors
and
probes.
Those
are
the
the
kind
of
dynamic
monitoring
ones
from
Prometheus
operator.
There's
also
some
things
like
pod
logs,
there's
Prometheus
rules,
there's
a
variety
of
of
kubernetes
custom
resources.
The
agent
can
handle
I.
Think
kubernetes
events
are
one.
B
That's one of the nice things about Flow: we're really not limited in what we can do. It's more able to be a multi-purpose tool.
B
No, I prefer to deploy directly with the Helm chart and manage the config myself — I feel like that's just a simpler model. So hopefully people will have the choice, and we can help them make better-informed decisions about when you would want to use the operator and when you wouldn't. The operator managing your Grafana Agent installations is just a little bit of a foreign concept for most people — it's taken directly from what Prometheus Operator does — but it does cause confusion, I think.
A
And Flow supports traces, so, you know, there's that too.
D
Perfect, okay, all right. So we're going to talk about modules. Before we get into modules, I just want to show what the current state looks like, and then we'll eventually get into an example with modules, showing what it would look like in that format. So today, with Flow, you kind of need to stack all of your configuration into a single file.
D
So we've got this single .river file here, and inside of it I've got some metrics, some logs, and some traces. This is a fairly simple example we're going to go through, of getting some basic metrics, logs, and traces out there. However, you can imagine that for more complex configurations it'll be a little difficult to manage as it gets bigger, as well as when you have reusable configurations that you want to be able to mix and match.
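A single-file config of the kind being described might look roughly like the following. This is a hedged sketch — the endpoints, paths, and labels are all placeholders, not the config from the demo.

```river
// Illustrative only: metrics and logs pipelines stacked in one .river file.
prometheus.scrape "app" {
  targets    = [{"__address__" = "localhost:12345"}]
  forward_to = [prometheus.remote_write.cloud.receiver]
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push" // placeholder
  }
}

loki.source.file "syslog" {
  targets    = [{"__path__" = "/var/log/syslog"}]
  forward_to = [loki.write.cloud.receiver]
}

loki.write "cloud" {
  endpoint {
    url = "https://loki.example.net/loki/api/v1/push" // placeholder
  }
}
```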
D
So, how did we get to where we are now? We started out with this RFC. I want to call it out as something the team has gone through: it's been publicly available for comment and input, and it's quite detailed about what we're trying to do and why. And to follow that up —
D
Here's the pull request for it, and you can see we've got 67 comments, so there's been quite a bit of healthy discussion on it, which has been fantastic. I think that by the time we release this, the RFC will be merged and finalized, so it'll be part of the agent repo going forward and people can look back on the history of it, which is fantastic.
D
Once we had the RFC laid out to where we were comfortable, we did a prototyping phase — I think there were at least three prototypes built, to approach this from different angles within the code base. I'll show you: this is a very simplified diagram of the "before". Like the single River file I showed, we've got this flow controller, which manages a set of components, and that would all go —
D
You
know
one
configuration
file,
think
of
it
like
that,
and
it's
got
some
number
of
components
and
that
scales
out
that
way,
where
are
we
headed
with
modules,
looks
more
like
this,
so
we've
still
got
that
top
level
flow
controller
with
some
number
of
components,
but
in
here
we've
got
this.
D
What
we're
calling
a
module,
loader
and
and
that's
a
component
that
starts
another
flow
controller,
and
then
inside
of
that
we
can
have
that
managing
different
components
and
I
wanted
to
show
here
we've
also,
you
know
you
can
have
a
module
flow
controller
inside
of
a
module
flow
controller
and
that
that'll
be
an
example.
So
you
can,
you
can
keep
going.
We
we've
talked
about.
You
know
putting
a
limit
on
how
deep
you
can
go,
but
I
don't
think
we've
finalized
what
that
would
be
or
what
that
would
look
like.
D
Yet,
okay
sure
I
got
all
okay.
So
what
do
we?
What
do
we
actually
have
so
far?
What
have
we
built?
What?
Where
are
we
at
this
moment
in
time,
so
concept
docs?
So
here
we've
added-
and
this
will
be
in
the
next
release-
a
concept
doc
that
talks
about
modules-
oh
and
I,
gotta
blow
this
up.
I'm
sure
maybe
like
that-
and
this
also
includes
down
at
the
bottom
and
I'm
not
going
to
go
through
this
in
detail,
but
I
just
want
to
bring
it
up
for
awareness.
D
We've
got
examples
of
what
a
module
would
look
like
and
how
it
would
be
referenced
from
another
River
config,
so
right
here,
We've
also
got
documentation
on
the
module
string
component
itself,
so
it's
similar
to
other
grafana
agent
components.
So
it's
right
in
the
list,
but
it
starts
with
a
module
Dot.
D
So
that's
what
we've
got
over
here
and
that's
at
least
well
documented.
For
now,
so
that's
the
documentation
side.
What
is
this
stuff
actually
look
like
in
practice
and
we're
going
to
do
a
a
little
demo,
so
we
started
with
this
concept
of
module.string
as
the
first
module
loader
just
make
sure
any
questions,
okay,
good.
D
So
this
is
an
example
of
what
it
would
look
like.
So
here
we've
got
this
module
string
component
and
we
can
see
that
it's
loading
content
from
another
local
file
component
so
for
module.string,
it
kind
of
requires
two
components
to
make
it
work
there
and
you'll
see
Works
we're
prototyping
and
I'm
going
to
demo
what
a
module.file
would
look
like.
So
we
don't
need
multiple
components,
but
here's
an
example
of
that
first
config
that
I
showed
you,
but
we've
broken
it
up
with
modules
into
three
distinct
areas:
metrics
logs
and
traces.
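The module.string plus local.file pairing being described can be sketched like this. It's a hedged example — the file path and labels are made up, and module.string was still pre-release at the time of this call.

```river
// Illustrative only: local.file watches the module's source on disk,
// and module.string loads that content as a nested flow controller.
local.file "metrics_module" {
  filename = "/etc/agent/modules/metrics.river" // placeholder path
}

module.string "metrics" {
  content = local.file.metrics_module.content
}
```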
D
So, instead of having all the config in one place, we've got additional config files that we're pointing to, one for each. That's how module.string started, and then we got to thinking: well, it's kind of a bummer to have to do both of these. So what if — I'll bring it up here —
D
— what if we just do it as module.file, and bring the file handling inside? So this module loader will work with files directly. We're also exploring things like module.git — conceptually that's been considered — and other module ideas. Let us know; we're definitely trying to brainstorm which ones would be useful.
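The module.file idea folds the file watching into the loader itself, so the module.string-plus-local.file pairing described above collapses into one block. Again a hedged sketch — the attribute names could change before release.

```river
// Illustrative only: one component that both watches and loads the module.
module.file "metrics" {
  filename = "/etc/agent/modules/metrics.river" // placeholder path
}
```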
A
Okay, I see — so basically it sounds like we're allowing you to split up your configs and kind of reuse them. Oh, we've got a question — if you want to ask it out loud or type it in chat, either one's fine.
D
So, are you thinking in terms of creating new components entirely, or using modules with the flow components that exist?
F
A little bit of both. I'm trying to understand how we can use modules with the flow components that exist, and, if we were to create new components, whether that's as easy as just adding another file — writing a .river file that adds it in. Trying to understand what's going on, yeah.
D
Yeah — so, new components are kind of built into the agent itself. These are the currently supported components (although I'm looking at the next release, so there might be one or two additional ones). Creating a new component would involve writing software into the agent.
D
However, all the existing components can be used with a module. I can be a little more precise here, in terms of this example of how this is working — so —
A
I think the general way to think about it is that a module is the idea that you can take a set of components and encapsulate them as one thing. So you can have a pipeline where you discover things on Kubernetes, scrape metrics from the things you discovered, and then filter some things out — and you can combine that set of components into one module, and have other people import that module as a single unit. Kind of like a Helm chart, almost.
D
Yeah. And to look at a precise, specific example: in this case we're using a module export to access a component from within the module. The fact that this is the otelcol.processor.batch component is not really important, in the sense that this could have been any flow component that we're exporting.
D
So if I go here, we can see that this module is exporting this otelcol.processor.batch default input, which is defined down here — that's why I say any flow component would be compatible with doing this. You can also do the same thing with arguments; that's how you go the other direction, to push data down into a module. For the metrics, we've got this prometheus.exporter.unix, and we're passing its targets into a module.
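The two directions described — arguments pushing data into a module, exports surfacing components out of it — can be sketched like this. A hedged example: the block names follow the modules concept docs, but this exact config is illustrative, and details like whether prometheus.exporter.unix takes a label have varied across agent versions.

```river
// main.river (illustrative): pass node-exporter targets down into a module.
prometheus.exporter.unix "node" { }

module.file "metrics" {
  filename = "/etc/agent/modules/metrics.river" // placeholder path

  arguments {
    targets = prometheus.exporter.unix.node.targets
  }
}

// metrics.river (illustrative): the module receives targets as an argument
// and exports a receiver the parent config could forward data to.
argument "targets" { }

prometheus.scrape "inner" {
  targets    = argument.targets.value
  forward_to = [prometheus.remote_write.cloud.receiver]
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push" // placeholder
  }
}

export "receiver" {
  value = prometheus.remote_write.cloud.receiver
}
```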
D
Okay, the last thing I've got here is the running example. So this has been running, although — I think, I think I've lost my logs.
D
Oh yeah — you got it. So this is just showing that the modules are pushing data up to Grafana Cloud. Let me see if we can get the logs to show up, but either way, we've got the metrics, definitely the traces, and hopefully some logs will pop up here. So this is running, and I'll show you the graph as well. The graph looks a little different — we're still doing some work on it. Let's refresh now that it's running.
D
It should be running... there we go. So now we can see this module.file — let's look at the traces one. Here are the components within the module. Instead of having all the components at the top level of our graph, they're pushed down to within the module, and then you can click through them.
D
There's some work we need to do for nested modules to make that work nicely, but this is the gist of what it looks like, compared to when it's not a module. So I think that covers it. We've got some finishing touches to do, and building more module loaders — that's kind of where we're at in the development cycle for this feature.
A
Thanks, Eric, I think it's really cool. What types of module loaders — you mentioned module.file and module.git — what other types of module loaders do you think we might support?
D
You know, maybe other ones out there — it kind of feels like the possibilities are endless, so I think it'll depend a little bit on what people want and request.
A
Cool. So then my other question is: what subset of the overall pipeline do you imagine people are going to use modules for? Is it going to be an end-to-end pipeline, or what?
D
Yeah, it's hard to say. Like you saw, I like breaking up my metrics, logs, and traces into different pieces for reusability. I could also see having modules for each of the different components, to kind of have a working example for each of them — but yeah, I'm not totally sure on that. What do you think, Robert? I guess you have some thoughts.
A
I'm pretending I don't know anything about modules... okay, fine, I'll drop the act. I think end-to-end pipelines are probably not the best use case for modules, unless you're just using them locally to split things up. Because the intent is that modules are reusable, I think it's going to be more common for them to handle some subset of the pipeline — probably at least retrieval and processing, but not delivery.
A
I think delivery is probably always going to be left in the hands of the user, if you want to build something that's truly reusable across the board.
D
It is running independent flow controllers, but right now a failed module load will stop the agent from starting up — similar, I guess, to how a bad portion of config in your River file today would stop it from starting up. So I think it mirrors that for the moment, but there have definitely been some conversations about it.
D
So, I'm wondering if this is — if you're familiar with what we're referring to as static mode, with the YAMLs — this is specific to the River configs for the Flow-mode agent, and it's, I guess, fully compatible with Flow mode.
A
Yeah, this is a Flow-mode-exclusive feature. We're putting most of our effort into Flow mode right now; we think it's the future of the agent, where we can unlock all these new use cases. I want to go back to Warrior's question real quick — so, on the initial load of the agent in Flow —
A
— all components must be healthy — or rather, they must succeed in loading — but after that initial load, things can fail. So if you start out with a healthy agent, but then your module has a bug in it, the module loader will continue, like any other component, in its last valid state. You don't get a propagating failure in that specific case; you just have to make sure that the initial load is error-free.
C
That sounded interesting. Let's see... First, I wanted to quickly call out our "design in the open" philosophy that's shown here: we want the community to see what we're working on, and to take part as well. So — yeah, clustering. Can everybody see my screen?
C
Okay, great. So, clustering: the whole idea is that we're looking at building a new solution for horizontally scaling metric collection, by making use of the dynamic nature of Grafana Agent Flow.
C
And, of course, much of this is based on previous work — there were proposals for implementing this behavior in static mode early on, but we never got around to it. The context is that we routinely run agents with tens of millions of active metric series without hiccups, and our usual recommendation is to start thinking about horizontal scaling around the two-million-active-series mark, to avoid any surprises if cardinality explodes. But the current implementations are not really applicable to Flow; they have various drawbacks. Hashmod —
C
— sharding is not dynamic and is mostly for static clusters, where it's not very often that new members join or leave; the scraping service has an external set of dependencies that it forces on you; and host filtering ties you to a DaemonSet-like deployment. So none of them are super applicable to what we're doing right now.
C
So our goal is to build a Flow-native clustering mode that can elastically scale the agent while using a single configuration file. To do that, we'll make use of a gossip protocol — it's what Mimir and Loki also use for their distributed deployments, so we have some operational experience around it, and we think it can be a good fit. For anybody not familiar with what the gossip protocol is and how it works, you can see it in action here.
C
Imagine you have this 20-node cluster with a fan-out factor of two — meaning each node can send a message to two of its neighbors. If we start with the top node receiving one new piece of information, we can see how it might send that information to its neighbors, and by sending the message the information is propagated. In the next cycle, each of the nodes that has the information propagates it to more of its neighbors, and so on, and so on.
C
But the magic sauce here — the crucial part — is that in the eventually consistent state, each node can perform the sharding and independently arrive at the same result. So for a specific hash value x, all nodes will agree which peer is responsible for it, and in case of cluster changes, only 1/n of these tokens will have to be redistributed.
C
So for starters, we have identified two use cases that we would like to work towards, in sequence. First off: target distribution. If all the agents in the cluster have the same configuration file and can discover the same set of targets, then we can use this distributed sharding to make each node scrape only the targets that fall under its own ownership.
C
The second use case, which we would like to enable later on, is fine-grained scheduling, for components that don't make sense to run on all the nodes. For example, if you have a 10-node cluster, you don't want to run 10 instances of a MySQL exporter — you might only want one, or two for high availability.
C
Then we can have this distributed system decide which node should take ownership of that component, so it's good for scheduling. So here's how it works right now: for example, we can have a prometheus.scrape component that has to scrape these four targets, and it also has an argument that enables clustering. Every time the cluster state changes — either a new member joins, or a member leaves because it was unhealthy or whatever — it will call an update method on this —
C
— the update method of this component, and it will redistribute the targets. I'm sorry I can't make the font size any larger than that right now — I hope it's readable.
C
So initially we can see that the node is responsible for five targets here, because it's a one-node cluster. But when a new peer joins, we can see that it's only responsible for four of them, since the new node takes ownership of one. The same happens when another new node joins —
C
— the node we're tracking here takes responsibility for even fewer of the targets; when one leaves, it takes responsibility for one more; and when another joins, it may end up responsible for more, as it's not an exact match — it's based on the hash values, which balance out in the long term, just not for small numbers. Go ahead, Robert.
C
— approach, and depending on the sharding implementation you choose, you can have better distribution with a more CPU-intensive sharding mechanism — so you spend more cycles deciding who does what — or you can have a more efficient one that is less accurate.
C
With big numbers, if the hashes are evenly distributed, then each agent should end up with about the same amount of data assigned to it.
A
Thank you. So, second question: today we use hashmod sharding — that's typically how we recommend people do things. Why would you want to stop using hashmod sharding and use this instead? Sorry if you already covered this.
C
Don't worry — I actually should have explained it better. With hashmod sharding, what we do is take the address label of any Prometheus target and hash it to a value from zero to n minus one for an n-node cluster, and each cluster node is then assigned a set of addresses.
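Expressed as Flow relabel rules, classic hashmod sharding looks roughly like this — a hedged sketch in which the modulus (cluster size) and shard number are hard-coded into every agent's config, which is exactly the static quality under discussion.

```river
// Illustrative only: shard 0 of a 3-agent fleet keeps just the targets
// whose hashed __address__ lands on 0; the other agents keep 1 and 2.
discovery.relabel "shard" {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = ["__address__"]
    modulus       = 3
    action        = "hashmod"
    target_label  = "__tmp_hash"
  }

  rule {
    source_labels = ["__tmp_hash"]
    regex         = "0"
    action        = "keep"
  }
}
```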
C
The thing is, when a member leaves or joins the cluster, first we have to update the configuration of one or more of the cluster members, and we also have to rearrange the targets of every participant.
C
So this means we lose efficiency: you may have cached things that will no longer be present on that machine, your write-ahead log may become slower, that kind of thing. Whereas in this approach, you typically only need to redistribute 1/n of your targets, and you don't have to recalculate everything for everyone — so for a 10-node cluster, you would only have to redistribute roughly 10% of the targets. I'm not sure that was the best explanation; feel free to chime in.
C
I hope it makes sense even without knowledge of the prior art. If it doesn't, let me know and I'll work on making it clearer. But yeah — there's an RFC as a draft PR in our agent repo.
C
So there are some failure modes, of course — like networking failures, where a node may lose connection to other peers but can still discover and scrape things, and think it's in its own cluster; the cluster overloading itself during cluster changes; and, as long as the cluster is not in the eventually consistent state, some targets may be scraped twice or not at all.
C
But if the cluster state converges on a timeframe similar to the scrape interval of this pull-based model, then it should be okay, and we won't have lost metrics in our eventually consistent state. And the design we established only works for agents running the same configuration, so when rolling out new configuration —
C
— we still have to make sure that the loading of all the flow components also happens in a timeframe smaller than the metric collection interval. The roadmap is to make this work with a horizontal pod autoscaler in a local Kubernetes environment, to see it in action. These kinds of systems are typically hard to debug, so we want to build some first-class debugging, so that people can actually understand —
C
— what's going on in the agents and debug issues. We hope the Flow UI can be of help here, and we also want to offer some opinionated ways to monitor and alert on these clusters. There are already some metrics — like the config hashes, to see which config files were loaded and how much time that took — but there's still work to do. The plan is to have something usable this quarter, so stay tuned for news, and hopefully we'll have something good for you to use.
C
Yeah — by the next agent release, if I'm being optimistic, which is around the end of April. But I'm not putting a promise on that right now.
A
All right, thanks, Paschalis. Any questions for Paschalis about clustering, or anything along those lines?
A
All right, so last up — actually, sorry: on our agenda we had a plan to talk about proposals, but before we do that, are there any questions in general that anyone wants us to talk about before we move on? I'll give about 30 seconds for someone to say yes or no... less than 30 seconds.
A
All right, we're moving on. So, for context: we've been doing a lot of work on Flow, as you can maybe tell from this call, and there are a lot of ideas coming in, which means we have a huge backlog of issues along the lines of "oh, wouldn't it be cool if we did this." So what we thought would be —
A
— a good idea is to try a new thing where, at the end of the community call, we all look through some of the proposals we have for Flow together and figure out: do they still make sense? Do we want to do this? Do we want to close it? I hand-picked some that will probably be interesting to talk about, and we'll see how this goes.
A
We might not do this again if it's just boring, but let me share my screen and show the first one I've selected — I'm going from oldest to newest here. So back in September — right after Flow was initially released, probably — I suggested: hey, what about a component that can run a binary on your system and return the output to other flow components?
A
There's the obvious "oh no, that sounds insecure" — there's that concern — but I kind of just want to start with that context, open up the floor, and see what people think: whether this is useful, or a horrible idea. Right now it's in the unplanned milestone, so I think it's kind of at risk of just being closed as a "yeah, we're not feeling cool with this one."
A
Matt?

G
Yeah — I mean, there's obviously the scary part, but I think long-term with modules, if we have the plan to introduce — for lack of a better term — security levels, like network access or file-system access, then if we plug those into the main module, with maybe the default being off, I think this becomes a lot more reasonable.
A
So I will say, the initial use case I had identified for this was that I wanted to run some components to talk to Kubernetes, but I needed a Kubernetes authentication helper to even do that. So here I'm showing something like doctl, the DigitalOcean CLI, getting the credentials for a Kubernetes cluster and exposing them as a secret to components. But I'm wondering, if anyone's looking at this, whether they can think to themselves —
A
— any other ideas, any other concerns or questions? Do we want to vote on what we do with this? I'm just figuring this out as I go. Okay, here's the thing to vote on: should we do anything about this right now, or should we leave it and revisit later? And by "do anything" I mean close it, or, you know, something.
D
I don't know where that leaves us on closing it now, but I wonder, if the specific use cases are like the ones laid out, whether solving them with something this generic may not be the answer — you know what I mean? It's a great suggestion, but would you solve it a different way, so you weren't opening yourself up too much? That's all.
A
I mean — we're not doing an official vote right now, but it does sound like no one's terrified, yet we also really haven't identified a strong need for this.
A
So for now I'll say we can let it ride, but it's seeming like this doesn't have a very positive future in sight. It probably won't make it.
E
I'm thinking this makes possible many things that aren't possible today — or at least lets you hack around them, right? Say you have a very custom way of decrypting a secret on your file system, or — I mean, you can do plenty of things with this. For me it's mostly about thinking through the security model. I like the idea of having these different levels of security, maybe off by default — but I like that they're there, you know.
A
I think one of the things I'm kind of curious about is the security model here, right? Because technically there's already a component to read files from the local file system, and you could use Prometheus remote write with a malicious server — not to give people attack-vector ideas on our YouTube recording, but you could technically use local.file and Prometheus remote write to send any file you want to any server. So you have to be careful when you're running Flow.
A
Paschalis?

C
I'd say yes — I'd say yes, because the attack vector is not just sending sensitive information to the outside world, but maybe actually doing something on the same system. But yeah.
A
I mean, I hadn't thought about those two use cases, but yeah — I'm on the fence about whether this is a huge security risk. If we did introduce it, you would probably want to be very cautious: you'd want to make sure that the modules you load don't have this capability — that only the main config file is doing this — and that you have tight control over the config file, making sure the command and arguments here aren't coming from other components. You'd have to be super, super sure about what it's doing, and maybe for that reason alone —
A
— unless we had a bunch of pressing use cases, we probably don't want to do this. I kind of want to close it now, but maybe let's just let it ride and revisit later. It does sound like it would probably introduce more problems than it solves.
A
Moving on to the next one — this is actually your proposal. Do you want to talk about this? Do you remember what it is? Yeah? Yeah.
C
So somebody had asked whether we could set the tenant ID on metrics, like we have the ability to do with the loki.process component — the tenant stage, or how Promtail does it — but for metrics this is not possible.
C
We actually went through different ideas on how this could be achieved, from less invasive to more invasive, and it's currently fallen into limbo — there were some ideas, but we weren't sure which one to proceed with. I still think this can be useful for people, but I'm not sure we have a consensus on the way we want to achieve it, and whether the performance trade-off is mild enough for people to actually use it and get something out of it.
A
So, just to make sure I understand — yeah, I mean, I would like this, because it's something that comes up a lot. Promtail has something for this — sorry, the Loki components do too — where you can use a label to inject the tenant header to write with, but Prometheus can't do that. So I think this really comes down to a "contribute to Prometheus upstream" kind of thing.
C
So you think we should work with upstream and make this an upstream feature?
A
I
mean
I
would
like
routing,
but
I
would
also
I.
Also
think
of
I
also
think
for
the
unbounded
case.
Yeah.
C
Okay, yeah, I see. So for this specific proposal — I'll just post a comment later on — I would like to see us have a new component with a disclaimer that having n different tenants will make the write-ahead log n times as large, and people can decide whether they want that overhead on their systems or not. People do use external tools for this, even though they have the same pitfalls, because it works for them in terms of their use case.
A
All right — well, with four minutes left on the call, I don't think it makes sense to do another one of these. Thanks, everyone, for joining. This is where I would normally pander to the YouTube audience — like, comment, subscribe, all those things; it probably helps out, I don't know, I don't run the Grafana channel — but we do these monthly, they're posted to YouTube, and we do them live, too.
A
So if you're interested in joining these live next time, join the Grafana Slack and go to the #agent channel, where we'll let you know when we're doing another one. But yeah — thanks, I'll see everyone next time.
H
I have a question I already asked in the agent channel. My situation is that I want to replace our kube-prometheus stack with the Grafana Agent, but as far as I can see there's no option for it: we want to keep our huge amount of PrometheusRule CRDs, and we also want to use ServiceMonitors.
H
So today I tried to install the Grafana Agent via the Helm chart and via the operator, and I see that when I use the operator, it allows using ServiceMonitors but can't use PrometheusRules — am I correct? Do you have any plans to implement PrometheusRules in the operator, or to add the possibility for the Grafana Agent in Flow mode to use ServiceMonitors?
A
I can't speak to all of it, but I can answer this question, at least, I think. Right now the strategy is this: we've recognized that a lot of people are struggling with the operator, and we think it generally makes more sense — and is easier for people to understand, and for us to document and help people be successful with — if we take what the operator does today and move it all into Flow.
A
So I don't know what that means for the future of the operator, but it does mean that within the next — soon; I can't give you a timeline, but within some amount of time — we want the agent in Flow mode to be able to support ServiceMonitors, PodMonitors, and Probes. Today it already supports alerting rules and PodLogs, and we'll probably continue adding support for more and more CRDs over time in Flow mode.
G
Cool, yeah.