From YouTube: Grafana Agent Community Call 2022-04-20
B
Welcome everybody to the Grafana Agent community call. Today the focus is going to be the new RFC for agent flow, and we'll just go ahead and jump into that. So I will share my screen.
B
And I'll say: if you have any questions, since we're nowhere near out of time, raise your hand if you want to ask while we're going through. So, yeah, let's talk about agent flow. This is kind of a big RFC we're looking at.
B
There's a lot of times where, if you're making a change, the easiest way to verify that a log or some metrics came through is to look at Grafana and see if those metrics arrived. That's generally easier than trying to interrogate the logs of the agent itself or do something else. From a development point of view, we've got a lot of tight coupling that makes it hard to add additional capabilities.
B
So, as we get a lot more requests for more complex workflows, it's becoming more and more difficult to implement and test those. And then, kind of overarching, is this idea that we want the mental model of the telemetry pipeline to map to the agent's model of how it's configured and how it's actually written. It's our belief that the closer that mental model gets to how the agent is actually processing things, the easier it will be to understand and make changes.
B
So this all started in a conversation with Robert Fratto, and we were both working on configuration: Robert was working on HCL and dependency graphs; I was looking at bringing more of our configuration in-house, owning it, and trying to simplify it. We were talking about what users wanted, some of the features we were unable to do, and some questions that kept getting asked, and we were both trying to circle around this issue of how to make the agent easier and more intuitive to use. And Robert had this graph.
B
We reviewed the OTel pipeline; a lot of people liked it, but we wanted to take it to the next level, to allow more branching, multiple paths, and looser connections between things. Furthermore, we want to allow users to visualize their data, like that graph I just showed. I think there's so much power in just showing the user: hey, this is what your data looks like, and this is how it flows; and making it so you can interrogate what the components are doing and get feedback.
B
So we discussed it a lot, and through this process we came to two technical approaches: messages and expressions. We're going to talk a little bit about each one. We wanted to really explore these two, because we think this is going to be...
B
...you know, a possible future of the agent, and we wanted to make the right choice. So, instead of just doing a simple hello world, or doing an RFC where we just talked about it, we both wanted to implement proofs of concept, and we wanted those proofs of concept to be meaty: we wanted them to write to Grafana, we wanted them to work off an exporter.
B
We wanted them to feel real. So I'm going to talk a little bit about messages, and then Robert's going to talk a little bit about expressions.
B
So when I say messages, I'm referring to an actor framework, proto.actor in this case, if anyone's familiar with actor frameworks. They're fairly popular libraries that have seen some use. The concept here is that each component is an actor that communicates via well-defined messages; components don't really know much about each other.
B
They only know the addresses of other actors, and they're wired together via configuration. Part of an actor framework is that an actor, or component in this case, is connected to another actor or component via a mailbox. When, say, the metric generator down here queues up an array of metrics, that's actually queued into a mailbox, and our control plane will pull a message from that queue, hand it to the metric filter, and then the metric filter will write that message on.
B
And this is kind of how the configurations have looked. You have a list of nodes here: at the top we have a generator that spawns some metrics every 10 seconds, there are two filters that are chained to each other, and then a remote write endpoint. If you'll notice, the highlighted parts are the outputs.
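A minimal sketch of the kind of node-list YAML being described; the node names, fields, and wiring here are illustrative guesses, not the prototype's actual schema:

```yaml
# Hypothetical node list: a generator feeding two chained filters,
# ending in a remote write endpoint. "outputs" is the highlighted part.
nodes:
  - name: generator
    kind: metric_generator
    interval: 10s
    outputs: [filter_1]
  - name: filter_1
    kind: metric_filter
    outputs: [filter_2]
  - name: filter_2
    kind: metric_filter
    outputs: [remote_write]
  - name: remote_write
    kind: remote_write
    endpoint: http://localhost:9009/api/prom/push
```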
B
You trust that the developer has written the component to handle what you're inputting. Obviously, the control plane will not allow an invalid configuration, but it's up to the developer to handle that. Concurrency is handled by default: actor frameworks generally enforce that only one action occurs at a time within a single component, and we get a lot of built-in rate limiting, telemetry, and pipelining middleware.
B
There's a lot built in that actor systems handle. Configurations could be YAML, which is what we're familiar with, and creating GUIs and tooling is simplified because we're just connecting components. We also get to use existing libraries, proto.actor in this case, and there aren't many changes that are actually needed.
B
Some downsides: it does not allow fine-grained access to fields. A great example of this is credentials. As you'll see in expressions, you can access a single field within, let's say, a Consul instance or an HTTP endpoint to get a password and pass that directly into a component. In messages, we have to wrap that in essentially a credentials object and pass that along, and a component has to look at the credentials object and grab whatever credential it needs out of it.
B
If existing components need to understand a new message, you have to go back and add that comprehension to them. The relationships between components are rigid, so they have to be defined up front, and it's more convention over configuration. By convention, that means the developer of a component has to understand how to behave when a...
B
...message comes in, instead of the control plane setting it up like expressions would. So I'm going to do a little demo. I have flow running, or...
B
...the messages version of agent flow running in the background. This is the configuration I'm running, very similar to that simple YAML configuration, and I have it running on localhost, which is generating a mermaid definition. If we take a look at the mermaid, and if I drop that into... oh.
B
There. So it's just a mermaid definition of how it looks. If I copy this and go to the editor and then... oh wow, that actually worked really well. This is the visualization that it will generate, so you get this really nice viewpoint of how the data flows. In this one I have my agent logs instance writing to a file writer sink.
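The pasted mermaid definition would look something like this sketch (node names are illustrative, not the demo's exact output):

```mermaid
%% Hypothetical flow graph of the demo pipeline
graph LR
  generator --> filter_1
  filter_1 --> filter_2
  filter_2 --> remote_write
  agent_logs --> file_writer
```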
B
We also get some interrogation on the nodes. This is the list of nodes that I'm running: the two filters, the remote write, the file writer, agent logs, the GitHub generator. I can access those and get a status update, the config, the status. I also get some built-in stats.
B
If I spell that right... and in this you can see I'm getting some mailbox information: the types of messages received and posted. Posted is when a message is added to the mailbox; received is when it is pulled from the mailbox and processed. So if those numbers were off, especially off by a lot, that would indicate there's a lag.
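The posted/received bookkeeping described here might look roughly like this; a hypothetical shape, not the prototype's actual output:

```json
{
  "mailbox": {
    "posted": 1423,
    "received": 1423,
    "message_types": ["[]metric"]
  }
}
```

If posted ran well ahead of received, the component would be falling behind its mailbox.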
B
So one of them is: I'm defining an object here, and essentially the outs is an address to whatever the configuration determined. Yep, go ahead, Robert.
B
This is the loop where I'm constantly processing messages. A message comes in and I switch on the type: on init I get the list of children that I need to send to; at start I get myself; and then, if I get an array... this is a metric filter, by the way, I didn't specify that, so it has an input of an array of metrics and an output of an array of metrics.
B
So in this I get an array of metrics and I loop through them. I call this match function, which basically adds or drops or mutates each one in some way. If it's nil, then I assume it's been dropped; otherwise I append it. Then finally, for each of my outs, for each item that is connected to it in the graph, I send it out.
B
I just move that message, that array of metrics, on down the line. There's some more boilerplate here that responds to the actor framework, but that's the heart of an actor: this receive function. With the receive function, the concurrency is handled by the caller, so the actual component doesn't have to worry about it too much. That's messages at a high level, and now I will turn it over to Robert. Oh...
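A minimal sketch of such a receive function, assuming the protoactor-go API; the metricFilter component, the metric type, and the match function are hypothetical stand-ins for the prototype's code:

```go
package main

import (
	"time"

	"github.com/asynkron/protoactor-go/actor" // import path may differ by version
)

type metric struct {
	Name  string
	Value float64
}

// metricFilter is an actor whose input and output are both []metric.
type metricFilter struct {
	outs []*actor.PID // addresses of downstream mailboxes, wired by the control plane
}

func (f *metricFilter) Receive(ctx actor.Context) {
	switch msg := ctx.Message().(type) {
	case *actor.Started:
		// Lifecycle message: the control plane spawned this component.
	case []metric:
		var kept []metric
		for _, m := range msg {
			if out := match(m); out != nil { // nil means the metric was dropped
				kept = append(kept, *out)
			}
		}
		// Forward the filtered array to every connected component's mailbox.
		for _, pid := range f.outs {
			ctx.Send(pid, kept)
		}
	}
}

// match keeps, drops, or mutates a metric; nil means drop.
func match(m metric) *metric {
	if m.Name == "" {
		return nil
	}
	return &m
}

func main() {
	system := actor.NewActorSystem()
	pid := system.Root.Spawn(actor.PropsFromProducer(func() actor.Actor {
		return &metricFilter{}
	}))
	system.Root.Send(pid, []metric{{Name: "up", Value: 1}})
	time.Sleep(time.Second) // give the mailbox time to drain before exiting
}
```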
B
This concept was one of our initial concepts, but it mostly holds true. You could have a service discovery feeding some sort of filter that filters out anything that is non-Redis, and then passes it to your Redis integration, and that would create an array of Redis instances. Now, this is highly conceptual; the actual implementation in messages versus expressions may change this a little bit. But yes, does that answer your question?
C
So the first thing was just kind of recognizing that if you configure the agent today, it's really repetitive. You have to define a service discovery a bunch of times: once for metrics, once for traces, once for logs, and they're more or less going to be the same service discovery, minus logs, where you need the extra file label added. I found it annoying; it's kind of unfortunate that you can't reuse service discovery, especially when you do end up launching, like, three actual things watching Kubernetes.
C
The second use case also relates to service discovery, in that integrations in the agent aren't super useful within Kubernetes. You have to have a hard-coded list of things that the integrations are collecting from, whereas on Kubernetes you might have, say, a pod autoscaler which is auto-scaling Redis pods, and you can't really use the Redis integration with that. I think the only thing you could do today to collect metrics on those is to have a sidecar, which you really don't want to tell people they have to do.
C
It would have been nice if you could feed Prometheus service discovery into integrations and use that to specify the config for an integration. But the last use case actually relates to the operator.
C
We had talked originally about some idea of being able to chain service discoveries together, so that you might, you know, have Kubernetes service discovery feeding into HTTP service discovery, but it didn't really feel natural with how Prometheus works. So I was thinking about all that stuff, and we had looked at OTel, we had looked at Vector; those are pretty popular pipeline-based agents, but I thought they were a little fixed-function.
C
The realization I came to was that Terraform does pretty much that idea, where you have this bunch of different resources and you can combine them in ways that HashiCorp probably could not predict. The only difference between Terraform and what I wanted to do is that I wanted something to run as a daemon long-term and to re-evaluate things over time. So that is what became the core of expressions: we have declarative components, where you can use expressions to reference the inputs and outputs of other components.
C
So on one hand, this means components become really flexible. In messages, the relationship is defined between two components; in expressions, we are saying the relationship is field-based, where some output field of one component is bound to any number of input fields of other components. The relationship is more flexible because all we care about is the type, not the message itself or the data being passed around.
C
I also think, because it makes a declarative graph, it makes it a little bit easier to examine the system state as a whole: you can just pull up the entire state and see what the input and output for everything currently is. And this is an idea I probably should take out, because I'm not sure about it, but using HCL also means we could use HCL for other parts, like telemetry mutation. So we might be able to use HCL expressions for mutating logs or for changing labels.
C
Next slide, please. On the other hand, though, HCL is not YAML; there is a learning curve, and anything with expressions will require some learning. But also, this is a little novel: HashiCorp's Terraform is not using HCL the same way I am, so I had to modify HCL for my needs. I wanted to pass around Go interfaces, and you can't do that with HCL.
C
It has to be types that can be represented in the language itself. But I really wanted the interfaces, so I made a change to go-hcl in a fork, which added support for what are called capsule types within the type system that HCL uses, and that allows me to pass around channels or interfaces or whatever else you might need for pipeline-type components.
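A small sketch of what a capsule type looks like in go-cty, the type system underneath HCL; the MetricsReceiver interface is a hypothetical pipeline type, not the fork's actual code:

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/zclconf/go-cty/cty"
)

// MetricsReceiver stands in for an interface passed between components.
type MetricsReceiver interface {
	Receive(name string, value float64)
}

// A capsule type lets a cty value wrap an arbitrary Go type that the
// config language itself cannot represent, such as an interface.
var metricsReceiverTy = cty.Capsule("metrics_receiver",
	reflect.TypeOf((*MetricsReceiver)(nil)).Elem())

func main() {
	var r MetricsReceiver // a real component would expose a live receiver here
	val := cty.CapsuleVal(metricsReceiverTy, &r)
	fmt.Println(val.Type().FriendlyName()) // metrics_receiver
}
```

Capsule types exist in go-cty itself; the fork, as described here, was about letting the HCL side decode and pass such values through.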
C
My biggest concern is that this is suspiciously unprecedented. I don't think anyone else is doing pipelines in a way quite like this, and I don't like being clever. If no one else is doing something like this, my first thought is: this might be too clever. Why has no one built something like this before? That is, for me, one of the biggest concerns, and why I'm nervous about expressions, even though I quite like it. Next slide.
C
So there are kind of some gray areas here, right? Even though HCL is not YAML, a lot of people don't really like YAML: it's really easy to have YAML files that are huge and hard to manage, because of the indentation thing. At first I was kind of hesitant about HCL, but I think I actually quite like it, especially the expressions; I find it really nice. And there's a built-in formatter, stuff like that. It's really interesting, I think, for a config language.
C
The other part, I think, is both a pro and a con: even though this is way more complex than the agent is today, and way more complex than the relationships you would build with messages, it is open to abstractions. We could build a simpler representation of the config on top of the HCL, like with a GUI or something, for people who really don't need to know about relationships between components. They don't need to know about expressions; they just want to get some config working. That would make the HCL format kind of like the assembly language of the Grafana Agent, where higher-level languages would exist on top for people who don't need to write assembly. Not that I think anyone really needs to write assembly anymore, but yeah, that's another story.
C
So I have five components defined. The first two are similar to what I showed in the example config, where I have the remote HTTP component feeding into the metrics forwarder. In this case, I'm saying what the available state fields would be: this component exposes a content field, which is the current response body, and this one exposes a receiver where you can send metrics. This is where the capsule types come into play; I'm using interfaces here.
C
I also have Prometheus service discovery. This is using static service discovery, not very exciting, but this would work with any of them. Its current state is the full array of discovered targets, so you can imagine, if this were Kubernetes, this would be every pod that it found; maybe post-relabeling rules, or pre-relabeling, I don't know how we're going to do that yet.
C
As another example, I'm using a GitHub integration, which embeds the GitHub exporter. Its state is also an array of targets, and it'll be just one, where you can collect the metrics for this embedded integration. At the bottom, I have the component actually doing the scraping, where you give it a list of targets to scrape from. The targets it scrapes are the combination of everything that was discovered from static discovery and the GitHub integration, and any metric that gets collected will get forwarded to the receiver of that forwarder component. It scrapes every 60 seconds.
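Pieced together, the five components described here might look roughly like this in HCL; block names, attributes, and the concat wiring are illustrative guesses, not the prototype's exact schema:

```hcl
remote_http "example_password" {
  url = "http://localhost:8080/password.txt"
  # exposes: content, the current response body
}

metrics_forwarder "default" {
  url      = "http://localhost:9009/api/prom/push"
  password = remote_http.example_password.content
  # exposes: receiver, a capsule value metrics can be sent to
}

discovery_static "local" {
  targets = [{ "__address__" = "localhost:12345" }]
  # exposes: targets, the current array of discovered targets
}

integration_github "default" {
  repositories = ["grafana/agent"]
  # exposes: targets, one self-scrape target for the embedded exporter
}

metrics_scraper "default" {
  targets         = concat(discovery_static.local.targets, integration_github.default.targets)
  forward_to      = [metrics_forwarder.default.receiver]
  scrape_interval = "60s"
}
```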
C
Our service discovery found this one target. You can't really say it "found" anything; it converted the input into Prometheus targets. The GitHub integration did something similar, where we also have this label for the metrics path, because we're embedding the metrics endpoint and it can't just be /metrics.
C
At the end here, we have the metric scraper, which is taking those two targets as input and sending to an internal metrics receiver value. This is where things get a little weird, because not everything we're doing can be represented in HCL. I have comments that say: hey, this is an internal value of this type; sorry, you've just got to trust us. We can't really show you what it is, other than maybe its inner value, which would be very useful. But alone...
C
...this might not be super useful, so I can also say I just want this one component. So I'll request... that's actually not the name of it, sorry... I just want the config for this one component. I was just showing this, but maybe you also want debug information beyond the current input and output, so you can request debug information, which adds health for the components. Health can be: not running, running, unhealthy, whatever.
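The health portion of that debug view might have a shape along these lines; hypothetical, not the prototype's actual response:

```json
{
  "component": "metrics_scraper.default",
  "health": {
    "state": "running",
    "message": "",
    "last_evaluated": "2022-04-20T17:05:00Z"
  }
}
```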
C
If there's an error, there'll be a message here, but there's no error right now, plus the last time that component got updated or re-evaluated. Each component can also optionally expose extra debug information. So here we have both Prometheus targets, just like Prometheus would show you: what's the health of the target, what are the labels of the target, when was it last scraped, what was the scrape error, how long did it take to scrape; the normal stuff. So we have this health for every component.
C
If I remove my filter here... there we go, that worked. Okay. Here we have every component showing its health, but I only implemented the info/status stuff for the metrics forwarder. Like Matt, I also have a graph to render. Unlike Matt's representation, this is not a flow graph; this is a dependency graph, where the metrics forwarder is directly referencing, in some way, the remote HTTP example-password component.
C
It's probably possible to turn this into a representation of flow, but we would need to have some kind of internalized understanding of directionality for each component, which we just don't have right now. Like, discovery would never be an in-component; it would always be an out-component, because it emits targets, so to speak, even though we're talking about a declarative system. I'll dive into the code just a little bit.
C
I don't want to get bogged down in detail, but I do want to show the interface that these components have to implement. At a base level, all of them implement a component interface with a Run method, which runs the component until a context gets canceled; this is similar to how we do integrations. But we also require them to implement an Update, where they take in their new state that was determined by the control plane that I wrote. Then there's a set of extension interfaces.
C
So if components have state, they can expose their current state whenever it's requested; they can have HTTP handlers, like the GitHub integration does; and if they have metrics to be registered, they can also implement a Prometheus collector.
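A sketch of the base and extension interfaces as described; names are illustrative and the prototype branch may differ:

```go
package flow

import (
	"context"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
)

// Component is the base interface every component implements.
type Component interface {
	// Run runs the component until ctx is canceled, similar to how
	// integrations work today.
	Run(ctx context.Context) error

	// Update passes in the new input state computed by the control plane.
	Update(newConfig interface{}) error
}

// Optional extension interfaces:

// StatefulComponent exposes the component's current output state.
type StatefulComponent interface {
	CurrentState() interface{}
}

// HTTPComponent exposes HTTP handlers, like the github integration does.
type HTTPComponent interface {
	Handler() http.Handler
}

// MetricsComponent has metrics to register with the process registry.
type MetricsComponent interface {
	prometheus.Collector
}
```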
C
So when you construct a component, you get back this global set of options, which includes the ID of your component, something to log to, and, if you need it, the HTTP address of the process that's running. But the important part here is this on-state-change function. If a component calls this function at any point, it tells the control plane: my current state has changed; whenever you're ready to re-evaluate the graph, please request my current state again.
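The construction-time options could look something like this sketch (field names are illustrative):

```go
import "github.com/go-kit/log" // assuming go-kit logging, which the agent uses

// Options is handed to a component when it is constructed.
type Options struct {
	// ID of this component within the graph.
	ID string

	// Logger scoped to this component.
	Logger log.Logger

	// HTTPAddr is the HTTP address of the running process, if needed.
	HTTPAddr string

	// OnStateChange tells the control plane that this component's
	// current state has changed; when the control plane is ready to
	// re-evaluate the graph, it will request the state again.
	OnStateChange func()
}
```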
C
I don't think I like the way this is happening, but here are the metrics from the metrics forwarder, where we inject a label which matches the component ID. That lets you have per-component metrics, with a label saying what that component actually is, which might be useful; but there's probably a better way of doing this than the hacky way I threw it all together. Anyway, that's expressions. Thank you, and I'll hand it back over to Matt.
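One less hacky way to get that per-component label would be client_golang's registerer wrapping; a sketch of the approach, not what the branch actually does:

```go
package flow

import "github.com/prometheus/client_golang/prometheus"

// registerComponentMetrics wraps the given registerer so that every
// metric registered through it carries a component_id label.
func registerComponentMetrics(id string, base prometheus.Registerer, cs ...prometheus.Collector) error {
	reg := prometheus.WrapRegistererWith(
		prometheus.Labels{"component_id": id},
		base,
	)
	for _, c := range cs {
		if err := reg.Register(c); err != nil {
			return err
		}
	}
	return nil
}
```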
B
All right, any questions about expressions?
B
Go ahead, Florian.
D
Yeah! Sorry! Just so I understand: you did implement your own continuous graph-traversal engine thing, and you don't use any of the Terraform HCL bits for this, right?
C
So we do use HCL. Terraform internalized their packages for their DAG, so I did re-implement my own DAG; not a lot of code. But the control plane that constantly re-evaluates things really doesn't exist in Terraform.
D
Okay, okay. And how many language features or functions... I'm not entirely sure if they're part of HCL or part of Terraform, but stuff like jsondecode, to read from JSON files and somehow get them into the graph by reading from a file or something, would be terribly useful, as well as some of the inline loops that they have. Do you plan to implement some...
C
...of this? I do plan to implement for_each, so you can have a dynamic component being expanded based on an array. Right now, I only actually exposed the concat function. There are many of them in the go-cty library, which is the type system HCL uses; we'll expose those by default, but I think we'll have to take the other functions that Terraform has on a case-by-case basis. I definitely do want the ability to get a file from a function and then load that into the graph.
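Exposing those functions is roughly a matter of populating HCL's evaluation context from go-cty's stdlib. A sketch: only concat is actually exposed today, and jsondecode here is just a candidate from this discussion:

```go
package flow

import (
	"github.com/hashicorp/hcl/v2"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

// newEvalContext builds the context expressions are evaluated in.
func newEvalContext() *hcl.EvalContext {
	return &hcl.EvalContext{
		Functions: map[string]function.Function{
			"concat":     stdlib.ConcatFunc,     // exposed in the prototype
			"jsondecode": stdlib.JSONDecodeFunc, // candidate, per this discussion
		},
	}
}
```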
D
Okay. And in general, the whole continuous evaluation stuff gets really nice if, let's say, you can watch a Kubernetes secret resource: you just declare this thing as an input of some sort, it has its own expression, and then it automatically watches for the secret being updated in the cluster, and whenever it changes it can recalculate the rest of the graph. So...
C
I should say, sorry, this is still to be decided by the team, which approach we want to go for, and I don't want to drag people in; but yeah, that is the thought I was having for how to reuse those types of components.
B
All right, so yeah, we think this is going to be kind of the future of the agent. We're still investigating, obviously; we are prototyping both approaches this week. Both of the branches that have working examples are in the agent repository. I've added links to the RFC and both of those branches in the community call notes, and we appreciate any feedback. Go ahead, Robert.
C
So we're probably going to talk about deprecating the operator in a second, but I do want to say that this does feed into deprecating the operator. We imagined there could be some component which emits the service monitors and pod monitors and does something with them; it might, you know, define a scrape job.
C
We don't know what that something is yet, but we think operator-specific functionality is a much more natural fit as a component than it would be anywhere in today's version of the agent. And also, at this point, we're not saying any of the agent goes away, or the current config. If we ship this, it'll be opt-in, and then way down the line we'll decide: should this be the default, should we remove the old way? So no concerns about us making any massively breaking changes just yet.
B
All right, yeah, let's talk about the operator then. This is RFC 1565, deprecating the agent operator.
A
This really stems from a few things. We're not entirely clear how well-loved the agent operator is in the community, but we've noticed it does have a pretty high maintenance cost, and some of the feedback we have is that it is hard to understand the concepts and hard to use the operator. With it being kind of a separate way of configuring the agent, it can be confusing: if you're using the operator, do the config snippets from the docs still apply to you, or do you have to find another way to do it? And all of this is combined with the fact that features we implement in the agent have to be essentially re-implemented in the operator to be available to operator users.
A
So one question we've asked is: is the operator worth it? I kind of want to say no, but people may have other opinions. I feel like we may get as much bang for our buck just focusing on making easier ways to deploy the agent to Kubernetes, while still keeping the configuration the same as if you were deploying it locally.
A
The problem is, we don't have an official Helm chart. There is a community one, that's okay, that we've heard has been used. But the other thing the operator does is the Prometheus CRDs for pod monitors and service monitors, and Robert mentioned flow might be a good way to bring that into the agent. If that's something people are really clamoring for, we can push that and find ways to integrate it even sooner. Or, if people just love the operator, then we're completely open to keeping it around.
A
D
Yeah, I already wrote a bit in the RFC about it. The pod monitors and stuff: definitely, that is something that a lot of people use to mark stuff for metrics consumption, at least.
D
With the stuff presented earlier, this could be something that could be made to work. I'm not entirely sure about stuff like trace collection and log collection, and how we're going to do this with that kind of stuff. Maybe we can do the kubectl-logs kind of thing and have another expression for that, and then we don't even need a daemonset. But yeah, I don't particularly care about whether it's going to be an operator or whether it's going to be something like this.
A
I think that's a really good point. I think part of my confusion when I first started using the operator was this kind of conflation: you're using this operator to deploy the agent as well as to configure the agent, and you have to understand it pretty well to realize that's what's going on.
A
So I think you're definitely onto something: configuring the agent is kind of a pain, and that's kind of what motivated this whole flow discussion as well. So we do want to make it as easy as possible.
D
I'll chime in: I think if we have something like this, where you can inspect the current graph of things and how metrics are flowing, and maybe have some example dumps of what kind of metrics are passing by, that could really make this easier.
C
Now, I'm going to be careful here, because I used the word abstractions; I said abstractions would be good for expressions before. I'm generally not a fan of abstractions where the abstraction is just a translation of the other thing, and that's what I think Helm charts end up being, unless you are explicitly saying: this Helm chart is for a subset of functionality.
C
I think the best abstraction is either a subset of functionality, which is what I was talking about with flow (where you might not expose expressions in a GUI, for example), or not having an abstraction at all. So with a Helm chart, I would almost prefer the user to say "here's the agent config," and for the Helm chart's only responsibility to be creating the Kubernetes resources. You provide the agent's config; we do the rest. That way we're not trying to re-represent how the agent gets configured.
C
This is kind of just me on a soapbox about abstractions, but it's one of the things I've run into the most as a problem when maintaining the operator: this idea that it's not a subset, it's a one-to-one mapping, and that's hard.
B
It's recorded anyway. All right, all right. Florian asks: "there's nothing preventing the agent from also discovering this config from inside the cluster, right?" Question mark.
D
But yeah, and then, like, with flow: let's say I want to define the remote write source, and that I want to collect the service monitors from inside the cluster and make them their own flow resources. If I can throw that config in the cluster, and the Grafana Agent has a mode to discover this kind of thing and include it in the graph of things it does, then we don't really need an operator to describe some CRDs; we can just visually describe that thing in the cluster.
C
I would say, regardless... sorry, Matt. You know what, I'll get in line.
B
All right, yeah. So when I was working on dynamic configuration, which kind of does some of this effort, essentially at compile time of the configuration, one of the things it can do is query, say, EC2 clusters and, based on tags, gather the resources and automatically pass those in.
B
So I kind of imagine flow having that same concept, just better, because it's not at compile time of the config; it's dynamic and refreshing itself. So I think that will drive a lot of value. Robert?
C
My hand just lowered itself automatically. I can't speak for all the maintainers, but I would argue very strongly that, no matter what approach we take for flow, whether it's expressions or messages, one of the core requirements eventually should be the ability to get a partial config from some remote source, whether that's HTTP or files or CRDs, like config maps or secrets, whatever. I think that is fundamentally important for making agent flow feel good. That's a lot... I sounded so impressive until the end there, but yeah, I would.
A
Just have the agent doing it, if it's important. It's kind of a cool concept; I'm excited.
B
All right, I raised my own hand. Andre says remote config would be awesome. I will say, currently, in today's world, we do support remote configuration; it's under experimental features. There's a remote configuration option that just allows you to load a normal agent configuration.
B
Robert?
C
The big difference between dynamic configuration and what we're talking about for flow is that flow can respond in real time to changes in your environment. Dynamic configuration really is just a template: it's resolved at load time, and you have to manually reload it to cause it to recompute its state. But with flow, we were saying, if the config map ends up changing, that will change the partial that got loaded in.
C
So, if you're looking for that dynamic response, we don't have that yet; but if just getting it at load time is good enough, then that is supported today.
C
One other thing I want to ask, since we gave this presentation: does anyone have any opinions about messages versus expressions, and the two approaches there? They're very similar, except for the implementation, and I think the only thing that matters for the user is the difference between binding components versus binding fields; that's really where the difference is.
B
Yeah, I'll click and raise my hand. Yeah, I think that if you're already familiar with HCL, then expressions is the much more intuitive and better option.
B
All right, I guess we have a few minutes left. I'll open it up to anything that Florian or Andre want to bring up, if y'all have any topics.
B
All right, does anyone else have anything they want to bring up?
B
All right, well, I appreciate everybody joining in. We have all the links in the notes; feel free to comment on any RFCs or any of the branches. We appreciate everybody coming, and hope to see you next time. See ya.