From YouTube: Grafana Agent Community Call 2023-06-28
Description
Discussed automatic conversion from Agent Static mode to Agent Flow mode. Join us https://docs.google.com/document/d/1TqaZD1JPfNadZ4V81OCBPCG_TksDYGlNlGdMnTWUSpo/edit#heading=h.jkpb8gk2r6u8
A: Good evening, hey everyone, and welcome to the June edition of the Grafana Agent Community Call. Today we have two things on the agenda. The first is something Eric from the development team has been working on: automatic conversion of a Prometheus configuration to a Flow configuration. Flow is the new way of configuring the Grafana Agent, and it is going to replace the old static mode of configuring the agent, which is essentially a combination of Prometheus configuration, OpenTelemetry configuration for traces, and Promtail configuration for logs. So we're going to have an automatic way to convert from Promtail to Flow, from OpenTelemetry to Flow, and from Prometheus to Flow. But so far we have only been working on the Prometheus-to-Flow part, and that is what Eric is going to present. Then, for the second part of the call, we can review open proposals.
B: There we go. Yeah, let me know, as usual, if I forget to blow something up large enough, and I will do my best to do so. All right, so: automated config conversion. What we want to talk about today (Paulin teed it up nicely) is how to get up and running with Flow quicker: how to take existing configurations and turn them into something that works without a lot of effort, really. So we'll start here. Maybe you've looked at Flow and you think it's great; we think it's great, of course. And you want to figure out how to get started with it: I've got this configuration file that I know already works, but this configuration language is totally different, and I just want to get off the ground. This is where this comes into play.
B: So what's the design for this? It's pretty straightforward, which is nice. At a very high level, we take the source config file and load it into memory, then we translate that into the Grafana Agent Flow argument types, and then from those types we're able to generate the Flow config file. It's nice because we've got the validation of the original source file already built in for free.
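The three steps Eric describes (load the source config into memory, translate it into the Flow argument types, render the Flow config) can be sketched roughly like this. This is a toy illustration, not the agent's actual converter; the type and field names are made up:

```python
from dataclasses import dataclass

# Stage 1: the source config, parsed into memory (a plain dict here,
# standing in for the unmarshalled Prometheus YAML).
source = {"job_name": "prometheus-1", "scrape_interval": "30s"}

# Stage 2: a stand-in for a Flow component's Arguments struct. Going
# through a real typed struct is what gives the conversion validation
# of both ends "for free".
@dataclass
class ScrapeArguments:
    job_name: str
    scrape_interval: str

def translate(src: dict) -> ScrapeArguments:
    # Raises KeyError/TypeError on malformed input, i.e. validation.
    return ScrapeArguments(src["job_name"], src["scrape_interval"])

# Stage 3: render the typed arguments as a River-style block.
def render(args: ScrapeArguments) -> str:
    return (
        f'prometheus.scrape "{args.job_name}" {{\n'
        f'  scrape_interval = "{args.scrape_interval}"\n'
        f'}}'
    )

print(render(translate(source)))
```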
B: We also make sure that we're generating good Flow config on the other end, because we're going through the actual Flow argument types in code. As for scope, as Paulin mentioned, we're starting with the Prometheus configuration, which is just about done; we're dotting some i's and crossing some t's, that type of stuff, but that one is pretty much done and should be part of the next release.
B: Then we're doing Promtail, OpenTelemetry, and the big one that combines the three plus some other stuff: the static mode of the agent itself. That's the main thing we're progressing towards, because we want it to be easy to go from the older setup of the agent, the static mode, into Flow mode going forward. That'll make the process a lot easier.
B: So what were some of the technical challenges and things we had to decide on? Defaults is a big one. We have defaults in the source configuration and we have defaults in Flow. What we did was: if we have a default in Flow, and the source configuration file's default matches it, or somebody provides a value that matches the Flow default, we leave it off of the Flow output, because the defaults will take care of it.
B
What
that
does
is
it
prevents
us
from
creating
a
lot
of
configuration,
that's
unnecessary
and
then,
in
the
case,
where
there's
a
default
for
the
source,
configuration
file
and
flow
defaults,
don't
match.
We
put
the
value
on
there
with
the
sources
default
so
that
we
don't
lose
that
so
we're
trying
to
preserve
functionality.
By
doing
so,
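The defaults rule described here can be sketched as a small decision function. This is a simplified model for a single flat field (real configs are nested), not the converter's actual code:

```python
def emit_field(source_value, source_default, flow_default):
    """Decide whether a field needs to appear in the generated Flow config.

    Returns None when the field can be omitted (Flow's default already
    produces the same behaviour), otherwise the value to write out.
    """
    # The value the source config effectively uses.
    effective = source_value if source_value is not None else source_default
    if effective == flow_default:
        # Matches the Flow default: leave it off the output entirely,
        # avoiding a lot of unnecessary generated configuration.
        return None
    # Defaults differ, or the user set something else: write it out
    # explicitly so behaviour is preserved.
    return effective

# Unset field, defaults agree: nothing is emitted.
assert emit_field(None, "1m", "1m") is None
# Unset field, defaults disagree: the source's default is written out.
assert emit_field(None, "1m", "2m") == "1m"
# Explicit non-default value: always written out.
assert emit_field("5m", "1m", "1m") == "5m"
```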
B: Secrets was one thing we had to tackle. Natively, when you try to tokenize a secret, we hide it by putting the exact string "(secret)" in its place. But in our case, when you're converting a configuration file that has a secret value in it, you want to preserve it. That'll be important when we talk about running the conversion on the fly: if we lost information, that wouldn't be possible. We also needed to figure out how to handle component exports.
B: When we have one component referencing an export from another component, there were some things that had to be tackled there. And then: how do we manage unsupported features? An unsupported feature is something that exists in the source configuration but may not exist in Flow yet, or may not ever exist in Flow, if it's not something that's planned to be supported. So we want to make sure we have warnings for that, and we also want a way to return errors if something is totally wrong.
B: There's an example there with a Prometheus configuration not parsing correctly. Okay, all right, I've got to hit this. What did my video... oh, there it goes; it's loading in big. So this is just taking a look at a little before-and-after of a configuration: on the left we have, once again, the Prometheus configuration, and on the right we have the automated conversion results.
B: Looking at this, we've got a couple of scrape configs here: we've got this job name of prometheus-1, and then we've got one of prometheus-2, and we'll see those reflected on the right side. We also have this remote_write section with a couple of targets, and if we look at the remote_write section, those do match up on the right side here.
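As a rough illustration of the kind of before-and-after being shown on screen (the job name, URL, and component labels here are invented, and the converter's exact River output may differ):

```yaml
# Source: Prometheus configuration
scrape_configs:
  - job_name: prometheus-1
    static_configs:
      - targets: ["localhost:9090"]
remote_write:
  - url: https://example.com/api/prom/push
```

```river
// Generated: Grafana Agent Flow (River) configuration
prometheus.scrape "prometheus_1" {
  targets    = [{"__address__" = "localhost:9090"}]
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://example.com/api/prom/push"
  }
}
```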
B: You'll also notice the stuff I was talking about with defaults: there are things that are not listed in the Prometheus config, because they're using the default values, but since our Flow defaults don't precisely match them, we've included them automatically so that the functionality is preserved. Okay, let's bump this one forward.
B: Okay, so here we're focusing in on prometheus-1, the scraping section. We can see that there are multiple targets concatenated together, and on line 48 there's an export used from a different component, so we've got that all working as well. If there are multiple targets, that's handled; if it needs to reference another component, that's handled as well.
B: The Azure service discovery is defined over here on the Prometheus config side, on line 17, and we can see that that's reflected on the right.
B: Excellent, okay. Oh yeah, the other thing I want to mention: here's the example of the secrets not being obscured. In here we have the basic_auth section with the user and password, and then on the output we maintain that as well.
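The secret behaviour Eric mentions can be pictured with a tiny sketch (not the agent's actual implementation): display paths mask the value, while the converter deliberately reads the real value so it survives the translation:

```python
class Secret:
    """A value that masks itself in normal output but can still be
    read deliberately, e.g. by the config converter."""

    def __init__(self, value: str):
        self._value = value

    def __str__(self) -> str:
        # What the UI or a normal marshalled output shows.
        return "(secret)"

    def reveal(self) -> str:
        # What the converter must use so the generated config still works.
        return self._value

password = Secret("hunter2")
assert str(password) == "(secret)"      # masked in normal display
assert password.reveal() == "hunter2"   # preserved for conversion
```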
B: And looking at the last section: we've got a relabel in the mix as well. That's the metric_relabel_configs section for prometheus-2 being included.
B: Coming soon: CLI options on the agent. There's actually an open pull request for the first of these right now that should be merged soon. There are two things we want to be able to do from the agent CLI commands. First, we want to be able to convert a configuration file (I'll show a quick demo of that), where you provide whatever the source-format file is. We're talking generally here, beyond just Prometheus, but Prometheus is the first one.
B
We
want
to
be
able
to
take
that
configuration
and
convert
it
and
I'll
put
it
into
a
river
file.
The
second
thing
we
want
to
be
able
to
do
is:
convert
a
Prometheus
or
whatever
other
source
config.
We
support
into
River
and
start
the
agent
on
the
Fly,
so
you
wouldn't
have
to
save
that
River
file
to
use
it.
Basically,
we
also
want
to
have
the
option
where
you
can
bypass
the
warnings
for
the
configuration.
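The two modes being described might look roughly like this on the command line. This is illustrative only; the pull request adding these commands was still open at the time, so the exact command and flag names here are a sketch, not confirmed syntax:

```shell
# Convert a Prometheus config into a River file on disk.
grafana-agent convert --source-format=prometheus --output=config.river prometheus.yml

# Or convert on the fly: run Flow mode directly against the old config,
# without saving a River file first.
grafana-agent run --config.format=prometheus prometheus.yml
```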
B
If
it's
something
unsupported,
you
know
maybe
you're.
Okay
with
that
you're
aware
of
that-
and
you
want
to
want
to
be
able
to
move
forward-
or
maybe
you
do
want
to
be
blocked
by
that,
so
we'll
have
the
option
for
both
there
we're
also
looking
at
documentation
for
best
ways
in
general
to
migrate
to
the
Griffon
agent
flow
I.
Think
the
automated
config
conversions
here
are
a
piece
of
that,
but
there
are
other
things
that
are
not
entirely
part
of
just
a
configuration
file,
translation.
D: Are these permanent things we're not going to support, or are they going to kind of get resolved and handled as time goes on?
B: I think it'll depend on what it is. As we add more features to Flow, some may go away, but there may be some things for certain configuration sources that we don't plan to support. There was one I was just starting to work on for static mode: an example is the WAL directory, where that same concept doesn't exactly apply in Flow. So if you have that in your source config file, we would pretty much always have a warning saying: hey, for Flow...
B
It's
not
an
unsupported
warning
necessarily,
but
it's
like
hey
for
flow.
This
is
done
in
a
different
way,
so
I
would
not
expect
the
warning
like
that
to
go
away
so
it
it's
kind
of
a
mixed
bag,
depending
on
the
exactly
what
it
is.
Yeah.
A: Will we support conversion from something other than the three things we mentioned? We'll have the OpenTelemetry configs, the Prometheus configs, the Promtail configs, and the agent's static mode. Will we ever convert from another type of config?
B: I think that's a good question. I'll give you my personal take: I would like to make it as easy as possible for people to start using Flow.
B
You
know
we've
kind
of
scoped
out
where
we
want
to
hit
especially
focused
again
on
getting
from
the
static
mode
of
the
agent
to
flow
and
then
those
other
pieces
kind
of
are
part
of
the
puzzle,
so
they
kind
of
they
kind
of
come
along
for
the
ride
a
little
bit
and
then
we
want
to
you
know,
be
closer
to
Opel
so
that
that's
a
motivator
there,
but
yeah
I
would
like
to
see
others
in
the
future.
B: I'm not sure where that'll land on the priority to-do list, if you will, but I think we've kind of opened Pandora's box in that area, where we'll be able to do so using this groundwork. Yeah.
D: Matt here; when running in this kind of auto mode (I actually just thought of this), are we handling command-line arguments?
B: Currently this is just focused on the configuration file, so you would use the Flow command-line arguments; you'd just be using them with a non-Flow file. So this is the fuzzy, migration-guide area of things that are not covered by converting config file A to config file B of a different type and having them be the same.
A: Thank you, Eric, that was a great presentation. So the next part of the call is all about reviewing open proposals. But before we proceed, I just want to ask: is there anything that anyone would like to discuss, or any particular proposals you would like to discuss? Please let me know in the channel. I'm going to post the link to the proposals we're going to review.
A: Okay, so: how we're going to distribute Flow. Here's this proposal.
F: I think it's fair to say that we approved this ages ago, because it was less about the timeline of how Flow gets released and more about the way Flow gets exposed, and we did multiples of these options: we did option one (we have a Flow-specific binary) and we did option two also. We did both of them. So I think this is approved.
F: The only thing that's kind of up in the air still is the release timeline, but that was never part of the proposal; that was just a suggestion for what we might do.
A: Okay, that makes sense. Now, do you think we should close it, or just leave it open until the closing date?
F: I'll mark it up. We don't have a "proposal approved" label, do we? No, it's "proposal accepted", sorry. So: marked accepted, and I'm going to close it as completed, because we really did do this.
A: Are we going from the earliest, or from some of the most recent first?
A: Yeah, sure, we can just go like this. So what is this one? Who wrote it again? "Require all capsule values to implement a marker interface."
A: Okay, I think this one is for me, and this one is quite user-facing. If I remember correctly, the issue was that if you hit the reload endpoint on the agent, it will say that the...
A: Yeah, okay. So I think the problem was that if you try to reload the configuration of the agent, and your new configuration has a certain problem with certain components, then the agent would apply as much as possible from that configuration.
A: It's more of a warning, or a partial error, that a lot of the changes were applied (and maybe we should even clarify which changes to the graph were applied). Or we should simply reject the whole graph reload if even part of the graph fails. I personally think it would be more intuitive if we reject the whole graph reload, rather than try to accept as much as possible and then potentially leave parts of the graph running with the old configuration.
D: Yeah, I like the idea of it being more atomic: changes generally get applied, or they generally don't. I know there's a lot of nuance there, especially if things are loading from the network; there might be some issues there. But in general, I like it being more atomic.
C: Yeah, my question is in line with what Matt said. When we're loading, say, a module from the network, if there's an error there, should that make the entire reload fail for the entire config, or should we try to, you know, mark the module as unhealthy but keep going?
F: I'm okay with this to a certain level, but then to a deeper level, I think it depends on how deep this goes. If you give it an invalid config file, we reject that whole thing and don't make any changes; I think that's okay. What I don't want to do, though, is, while we are rolling out a config file...
F
If
one
component
rejects
its
new
config,
then
we
have
to
walk
back
and
revert
all
the
changes
we
made
to
all
the
other
components
because
well
I,
don't
know
why
yeah
that's
just
my
gut
feeling.
F: My gut feeling is: I'm okay if the config file itself is atomic, but changes to the graph, as components get updated, should stay the way they are, which is a gradual rollout. But I think we already do this, like...
A: Yeah, that's correct, but it doesn't necessarily have to be a rollback. It really depends on how we apply the configuration. If we, for example, construct some kind of graph and then swap the two graphs, then it's not really a rollback: you just never swapped the graph, if you couldn't construct the new graph in the first place, no?
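The build-then-swap idea can be sketched like this. It's a simplified model (the names are invented), and it deliberately ignores the gradual per-component rollout discussed next:

```python
def build_graph(config):
    """Construct a new graph from config; raise if the config is invalid."""
    if "bad" in config:
        raise ValueError("invalid component config")
    return {"components": config}

class Controller:
    """Toy model of 'construct the new graph, then swap' reloads."""

    def __init__(self, graph):
        self.graph = graph  # the currently running graph

    def reload(self, new_config) -> bool:
        try:
            candidate = build_graph(new_config)
        except ValueError:
            # Construction failed: nothing was swapped, so the old graph
            # keeps running untouched and no rollback is needed.
            return False
        self.graph = candidate  # swap only on success
        return True

ctrl = Controller(build_graph("old"))
assert ctrl.reload("bad config") is False
assert ctrl.graph == {"components": "old"}   # old graph still running
assert ctrl.reload("new") is True
assert ctrl.graph == {"components": "new"}
```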
F: But you do have to do a rollback, because you have to apply the graph gradually. Let's say it's a local file being fed into a Prometheus scrape: you have to apply the local file first, and only if that succeeds can you apply the Prometheus scrape. But if there's a complex chain of that, you have to walk it back to the first thing that failed, and then remember everything. Actually, you have to walk back everything: anything that changed up to that point needs to be reverted back to its previous state.
A: Yeah, okay. I guess it depends on how we're creating this graph, so to speak.
D: I'd said we can't overcome it, but maybe the more nuanced approach is: you roll back if a module fails; you roll back to a given safe state. Maybe it's not the whole config that's atomic, but a module, a Flow instance, is atomic. I don't know if that solves anything, but it may narrow down the scope.
B: So that could fail at poll time, at any point, if somebody updates that file to something bad or invalid. It's a little different from the typical case of hitting a reload endpoint to reload the whole graph. So it's just a behavioral difference there.
F: I want to make sure we're all still talking about what this proposal is actually talking about, because the change it's proposing is something we already do. So I'm trying to roll back to what the release even was on January 10th. What was it? Not 31; maybe 30. Okay, 30 was the relevant release on January 10th.
F: And that was how it worked before: we construct the graph, then we walk and apply everything in the graph, but before, it didn't matter if there was a problem in that graph. Now we can start the graph, and if there's an error, we return early; we don't walk it in that case. Okay, yeah, Eric did do this one. Let me see when he did this. We're not blaming you, Eric, but...
B: Yeah, maybe it would be worth Paulin retesting his case there and seeing if it meets his expectations, as a quick sanity check, and then it's good to go.
A: Yep, thank you. From what Robert said, that was indeed the case. So yes, that's pretty good; good to know. Marking it as implemented.
A: So yeah, let's move on. What do we have here? I don't know if this is user-facing, but it seems like... yeah, it's more of an implementation thing. Okay.
G: Thanks for providing the link. Generally, it's hard to understand what's going on in, say, the loki.process component when you have multiple stages, especially ones that are branching or conditionally applied, so this inspect stage could help with that, without altering the lines either.
D
Yeah
I
mean
I
think
if
it
exposes
something
to
the
debug
info.
I
think
there's
a
lot
of
asterisks
around
something
I
think
that's
pretty
helpful
and
I
think
oh
originally,
with
flow
being
able
to
kind
of
hook
in
hooks
into
Data
flowing
between
two
components,
as
always
a
cool
idea-
and
this
is
sort
of
like
that.
G: By the way, currently the way to work around this is with the loki.echo component: you split your loki.process pipeline in the middle and have it forward to a loki.echo component and then to the rest of the pipeline. But by no means is that as handy as having it built into the pipeline itself.
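The workaround described, splitting the pipeline and teeing it through loki.echo, looks roughly like this. The component labels and stages here are invented for illustration:

```river
loki.process "first_half" {
  stage.json {
    expressions = {app = ""}
  }
  // Tee into loki.echo (which prints entries to stdout) as well as
  // into the rest of the pipeline.
  forward_to = [loki.echo.debug.receiver, loki.process.second_half.receiver]
}

loki.echo "debug" {}

loki.process "second_half" {
  stage.labels {
    values = {app = ""}
  }
  forward_to = [loki.write.default.receiver]
}
```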
F: We have a match stage, I think, right? Where you can do custom processing if it matches a certain selector. But I almost wonder if it would be more powerful in general, for debugging and for other use cases, if we had a match stage which, instead of executing other stages, could forward logs to other components, because that would also cover this use case here.
G: That's an interesting idea. I don't know if it needs to be a separate stage only, because generally the idea of routing logs is an interesting one to explore, but yeah, it's certainly a way to do this.
D: I didn't raise my hand, sorry. If we do go with this and it's mostly for debugging, maybe rename it to "debug" instead of "inspect"; it makes it clearer where you'll find it.
A: Yeah, sure.
A: Yep, sounds good to me as well. The only thing is that people would have to reload the configuration for this to take effect. I kind of wish there were a way to just use the UI to inspect most things without having to reload the config, but I guess in this case...
F: So this came out of the other issue. Maybe this isn't necessary anymore, with modules and kind of all that stuff. But the idea was to have a component called debug variables, where it could have a collection of different variables with different default values, like "info" for log level, and you could reference it like any other component. Except that debug variables also supports an HTTP API where you can change the value of a variable at runtime, and that would cause a graph reload because it'd be a new export.
F
So
that
would
hypothetically,
allow
you
to
say
like
well.
I'd
start
with
info,
but
I
could
go
to
the
UI
or
I
could
hit
up
an
endpoint
to
change
that
whenever
I
want
at
the
bottom
of
this
page,
I
also
describe
integration
with
the
UI,
where
the
UI
can
show
a
special
page
for
debug
variables
which
allow
you
to
just
type
in
the
value
instead
of
like
having
to
hit
up
an
endpoint
like
with
curl
or
something,
but
given
modules.
F
I,
don't
I,
don't
know
how
much
value
this
provides
anymore,
because
modules
are
effectively
very,
very
similar.
Although
this
is,
this
is
more
like
value
level,
rather
than
like
entire
module
level.
F
And
also
yeah,
like
it
integrates
with
like
conditionals
like
like
I
show
here,
where
you
could,
you
know,
switch
between
a
local
and
the
cloud
Prometheus,
based
on
what
you,
what
your
debugger
will
set
to.
D
Yeah
I
mean
I
think
you
could
definitely
do
this
with
modules.
This
is
a
little
cleaner,
I
think
it
depends
on
how
often
this
would
get
used
baby.
D
You
know
this
is
a.
This
is
a
few
either
local
remote
Getters
in
a
coalesce
statement.
Basically,
would
be
the
way
to
do
it
in
today's
world.
F
I'm
gonna
poo
poo,
my
own
proposal
here
I
think
this
would
not.
This
is
something
you
would
you
would
shove
in
your
conf
I.
Don't
think
you
would
have
a
permanent
config
file
where
you
have
everything
set
to
debug
like
a
debug
variable,
because
otherwise
it's
just
a
mess
right.
So
I
don't
know
if
this
encourages
a
clean
config
like
if
you
wanted
to
test
something
your
config
file,
just
change
the
config
and
reload
it
right.
So
I
think.
F
Based
on
that
and
like
the
conditional
thing
like,
can
you
imagine
our
production
configs
just
having
like
a
hook
in
it
forever?
I,
don't
know
like
I
think,
maybe
with
modules
and
all
this
stuff
I
would
actually
vote
against
my
own
proposal
now
I
would
say
no
to
this
I.
Don't
think
this
is
very
I.
Don't
think
this
is
useful.
A: Yeah, okay. Well, I don't think anyone has to live with this objection; we can maybe just move on to the next one, if you'd like.
A: Function closures. I've been hearing about function closures for as long as I've been on the team. I never read this proposal.
F
Oh
God,
I,
don't
know
if
you
have
enough
time
to
discuss
this
one,
what
I'll
say
this
is
mapreduce
filter.
This
is
I
I,
said
a
long
time
ago
about
River,
allowing
you
to
do
interesting
things
by
given
access
to
the
values
of
components
like
discovery,
but
you
can't
actually
take
advantage
of
those
values
today.
F
You
kind
of
just
have
to
wire
them
around,
but
this
would
actually
enable
the
user
to
have
custom
transformations
in
formats
that
we
didn't
necessarily
expect
Like
relabeling
rules
to
change
things
with
exactly
what
they
want
to
do.
It
would
be
a
power
user
type
of
a
thing,
but
with
modules,
power,
user
type,
things
are,
you
know,
kind
of
hidden
away.
So
there's
some
discussion
here
about
about.
We
have
like
with
the
Syntax
for
for
these
should
be,
but
really
it
is
not
produce.
Filter
is
what
I'm
talking
about.
D
Bit
so
that's
the
only
reason
I
wouldn't
give
this
thumbs
up
now
is
because
we
don't
have
conditionals
if
we
did
them
both
same
time
and
maybe
thumbs
up
to
both
and
I
also
believe
anything
we
do
with.
This
should
probably
be
labeled.
Maybe
it's
experimental
for
a
bit,
because
I
think
syntax
is
going
to
be
a
discussion
point.
D
Yeah,
so
if
you
go
to
the
next
block,
it's
literally
like
you
know,
if
else,
if
FL
you
know
right
there,
oh
my
gosh
I
should
move
it
around
a
lot
to
go
up
a
little
bit,
but
basically
yeah
yeah
right
there
see
if
Target.
D
So
yeah
I
I
think
if
we
approve
both
this
and
the
conditionals
at
the
same
time,
then
yeah
I'm,
totally
down
I,
mean
I'm
down
with
this
long
term.
So
maybe
it's
a
thumbs
up
for
me,
regardless
just
that
we
gotta
do
conditional
first.
E: I wonder why we wouldn't use the other relabel components; why do we have both things, two ways of doing this? And then I would also want to understand how we're going to implement this in more detail.
F
Well,
going
back
to
like
why
re-labeling
is
not
always
the
approach
here.
Let's
say
you,
and
this
is
a
real
use
case-
you're
running
an
exporter
component,
but
you
want
to
decorate
that
exported
component
with
labels
from
ec2
service
discovery,
which
map
to
the
ec2
instance
that
the
agent
is
running
on.
F
There's
no
way
to
write
that
today
you
can't
you
just
can't
write
that
today,
but
with
mapreduce
Filter,
you
could
filter
down
to
the
ec2
instances
which
correspond
to
the
one
that
agent's
running
on
and
then
combine
the
two
then
map
over
the
like
the
targets
from
the
exporter
component
and
then
merger
labels
together.
So
that
would
be
a
use
case
enabled
by
mapreduce
filter
which
otherwise
we
would
need
to
set
apart.
That
use
case
set
aside
that
use
case
and
then
create
components
just
for
that
one
use
case.
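With the still-undecided syntax being discussed, that EC2 use case might read something like this. This is purely a sketch of the idea, not an accepted design; the function names, the lambda form, the meta-label, and the way the result is bound are all hypothetical:

```river
// Hypothetical map/reduce/filter syntax; none of this is settled.
// Keep only the EC2 target for the instance this agent runs on.
local_ec2 = filter(discovery.ec2.default.targets,
  func(t) => t["__meta_ec2_instance_id"] == env("INSTANCE_ID"))

// Decorate the exporter's targets with that instance's labels.
decorated = map(prometheus.exporter.unix.default.targets,
  func(t) => merge(t, local_ec2[0]))
```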
F
So
that's
kind
of
the
idea
of
the
mapreduce
filter
thing
is
it's
like
allowing
users
to
decide
what
they
want
to
do
with
the
values
we
give
to
them
in
ways
that
we
probably
didn't
really
anticipate
and
probably
didn't
create
components
for
in
advance.
E
So,
just
to
quickly
respond
while
we're
on
this
question,
it's
also
possible
to
let's
say
this
example
you
gave
when
you
join
metrics.
You
could
also
just
feed
that
in
as
like
this
additional
context
to
the
reliable
component
and
then
have
them
available
as
values,
but.
F
But
how
did
how
do
we
tell
the
component
to
filter
down
the
map
to
only
one
target,
like
you
discovered
all
your
HD2
instances,
but
only
one
of
them
has
the
one
you
want
right.
So
I
guess
you
have
to
I.
Guess
you
have
to
fill
the
okay
I?
Guess
you?
What
you
would
have
to
do
then
is
you'd
have
to
feed
that
into
another
relabel
which
filters
down
the
targets.
So
just
one,
and
then
you
could
feed
that
single
Target
that
single
Target
as
as
the
context
and
then
you
then
you
could
use
that.
E
Not
sure
I
follow
that
example
that
you
have
in
mind
right
now,
but
yeah.
E
A
D
D
D: This could work on almost any data type. So if you wanted to filter and reduce based on, say, reading a JSON file, getting that as an array of maps and then filtering that on an arbitrary stream key, that's something you can't do with relabel configs, because those only work on one specific kind of data type.
E: Yeah, I think that's fair. However, there's the cost of ownership: depending on how expensive these things are to maintain, they can drain a lot of our resources as a team and as a community. So I'd also be interested in understanding how it's implemented; maybe we're leveraging some existing library or something like that here. But it's kind of a trade-off between the complexity and how many features it unlocks.
D
So
when
an
example,
I
might
have
here
that
I
think
would
be
interesting
is
like
on
the
forward
two
up
there.
If
you
want
to
have
a
conditional
debug
forward
to
that,
like
hey,
if
this
variable
set,
then
I
want
to
forward
to
a
different
thing,
you
could
do
that
with
the
map
right
or
do
some
other
Shenanigans,
and
that's
something
you
just
very
hard
to
do
without
something
like
this.
F
This
is
all
in
one
commit,
so
if
you
click
on
just
that
one
commit
there
at
the
top
or
that
maybe
well,
that's
not
gonna
work.
It's
a
way
I'm
way
behind
now,
but
like
the
the
message
of
the
command
share
that
works
too,
so
it's
really
like
250
lines.
I
implemented
this
with
with
some
kind
of
to-do's
for,
for
you
know
some
things,
but
the
bulk
of
the
work
is
actually
in
the
center
Library,
where
I
implemented
the
oh
God.
What
do
we
even
call?
F
It
I
mean
I,
think
I've
been
an
object,
merge
or
something
I
implemented.
One
yeah
object
merge,
which
is
now
we're
saying
gonna,
be
done
a
little
bit
differently
in
The
Proposal.
But
this
is
not
a
lot
of
code
like
adding
functions
in
because
River
already
supports
function.
Calls
we
just
never
really
use
it.
Oh
no,
we
know
sorry,
we
do
all
the
time
like
it's
really
taking
advantage
of
the
existing
support
for
functions
in
the
river.
E
Yeah,
that's
that's,
definitely
making
it
easier.
E
Yeah,
you
know
we
don't
need
a
full
consensus,
but
yeah
my
mind
is
like
I
would
like
to
spend
more
time
on
understanding
this,
and
maybe
I'll
just
keep
a
tab
open
for
this
and
take
a
look
this
week.
F
Yeah
sorry
I
was
muted
when
I
was
talking
earlier.
I
like
what
go
does
they
have
a
label
called
likely
accept
just
to
indicate
that,
like
this
is
trending
towards
a
yes,
but
we
need
more
time.
Do
we
want
to
do
that
here
or
we
want
to
just
leave
it.
A: Thank you. So we're almost out of time. I just have one more; it's a new one, and it's about how we discuss proposals in the community call. I had been thinking it might be a good idea to make a list of the proposals you want to talk about before the call. I think maybe we can get more participation that way, and it might be a good way to make sure the audience is more up to speed on the actual proposals. Maybe we can just review them before the call and then talk about them.
A: Okay, thank you for tuning in; we're basically out of time. Is there anything else?
A: Okay, so thank you for joining this edition of the Grafana Agent Community Call, and we'll speak again next month. Thank you.