From YouTube: Grafana Agent Community Call 2023-05-17
Description
GitHub module repository: https://github.com/grafana/agent-modules, clustering, and Flow remote_write functionality. https://docs.google.com/document/d/1TqaZD1JPfNadZ4V81OCBPCG_TksDYGlNlGdMnTWUSpo/edit
A: Okay, welcome everybody to the May Grafana Agent Community Call, pushed back a little bit because of some other things going on. Welcome everybody here. As always, feel free to type in chat or raise a hand to get our attention some way if there's something you want to talk about, or you have a question about anything we're covering. We have a few things on the agenda, but yeah, we're kind of here for the community to talk about whatever you want to.

We just have a few things lined up so that we're not sitting here in silence, so I'll go ahead and get us started unless anybody has anything.
A: Today we're going to talk a little bit about modules, which is our new feature that essentially lets you load River configuration from different locations, load it up with a set of arguments, and have it expose a set of exports. You can see all sorts of details about that in our documentation, and it went out in the last release, I believe, so you can use it today.
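As a rough illustration (not a specific module from the call; the argument and export names here are made up), a River module is just a config file that declares argument and export blocks alongside ordinary components:

```river
// Hypothetical module file for illustration only.

// Callers must supply a value for this argument when loading the module.
argument "prometheus_url" { }

// Optional arguments may be omitted by the caller.
argument "username" {
  optional = true
}

// An ordinary component; arguments are referenced as argument.NAME.value.
prometheus.remote_write "default" {
  endpoint {
    url = argument.prometheus_url.value
  }
}

// Exports expose values back to whoever loaded the module.
export "metrics_receiver" {
  value = prometheus.remote_write.default.receiver
}
```

A loader component then supplies the arguments and can wire other components to the module's exported receiver.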
A: But what we're going to talk about today is the Grafana modules repository. This is a separate repository from our standard Grafana Agent one; it's at github.com/grafana/agent-modules. There's a link to it in the agent repo, and I will probably be adding a link somewhere in the agent documentation so it's got some cross-links there. Essentially we want this to be a blessed repository of functionality that we think is great to share.

We have a few things there already; for example, a contributor submitted an OTel collector and some LGTM collectors, and we're hoping to build up a library of common use cases that people can basically just import, let's say through our module.git loader, and pull in. We would 100 percent love any community suggestions, or if you've got a certain thing that you think you would like to share.

This is the place for it: put a PR out there, we'll give it a once-over, and assuming everything checks out, we'll pull it in. It doesn't have to be big, it doesn't have to be fancy; if it's got some good value, we'll pull it in. To talk a little bit more about that, we'll turn to Eric, but first I'll say: does anybody have any questions about what the repository is before Eric gives us a little tour?
B: Okay, I should have unmuted before I was going to share a screen; it was a little dead silence there. Okay, so yeah, we'll take a little peek here at this new repository and then do a quick demo showing off how one of the modules works and what it takes to set it up and run it. So here's the repository; it's just getting started. Oh, yes, zoom!

Got it, thanks for the reminder. So there's not a ton here, but we've got the basic structure of what we want. The two main things I'll point you towards are this example folder as well as the modules folder. The example folder, at a high level, is something you can take a peek at that demonstrates how you can create modules.
B: It's not necessarily the most practical way of doing it for the particular use case it covers, which is getting the agent's telemetry data out to the cloud. But nonetheless it shows you how you can set things up, how you can do arguments, and how you can do exports, so some working examples of that, as well as: okay, what if I want to do this without a module?

Well, here's what it would have looked like. What if I want to use each of our different module loaders? We have examples for each of those, so we've got the module.file, module.git, and module.string implementations, leveraging the modules that exist in these subfolders. So it gives you a little bit you can poke around in and see ways you can set things up.
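As a sketch of what loading one of these looks like (the path and argument name below are placeholders, not a real module in the repository), the module.git loader pulls a module straight out of a Git repository:

```river
// Hypothetical loader config; "path/to/module.river" and the
// argument name are placeholders for whatever the module documents.
module.git "example" {
  repository = "https://github.com/grafana/agent-modules.git"
  revision   = "main"
  path       = "path/to/module.river"

  arguments {
    prometheus_url = "https://prometheus.example.net/api/prom/push"
  }
}
```

module.file and module.string work the same way, differing only in where the module text comes from; a loaded module's exports are then reachable as module.git.example.exports.NAME.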
B: Okay, so that's the example folder. Then the modules folder: here we've got a couple right now. This is where we really want to expand things.

We want to put real-world use cases in here, and the one I want to show off today is the same idea: the telemetry data for the Grafana Agent itself going up to Grafana Cloud. For each module we have a readme which explains what the module arguments are.

If it has any exports, it will explain what those are, as well as give an example of how to leverage the module. This particular module is sending data to Prometheus, Loki, and Tempo, so the example here, you can see, shows you passing all the arguments it needs in order to run the module. So let's take a look at that. This is using the module.git module loader, but it doesn't have to be specifically that one.
B: I was just thinking about that right before, and I was like, I should look up how to do that. But anyway, it's this one, right? It's this one; I'd blow it up even bigger, one size larger. So that's what I'm running, and I just want to show that in practice that's it.

So this is going to run, right? It points at this module, so here's the actual contents of the module; you can see all the arguments are defined.
B: The exports are defined here, and then here are all the flow components listed out. The specifics for now are not super important; I don't want to cover that here. So here we go: this is me running that config file, and I'm forwarding my logs into a file path.

In the future, we've talked about, and we're pretty close to, not having to do that, so that the agent could forward its logs without having to write them to a file. That will just make it slightly easier to do this, which would be cool. Okay, so that should be running, so we'll give it a minute.
B: But here's what's up in Grafana Cloud; it looks like we've already got the logs coming in, and I expect we'll hopefully get a heartbeat with the metrics from the agent, and then some trace data.
B: Yeah, good question. I think we're just looking for what use case it solves, and a little bit of what I covered: what are the arguments, what are the exports, so that somebody looking to use the module can tell at a glance, hey, is this for me? I think we're still learning what people are going to need, and that will come from people using it, the community, so things are going to evolve, I would say, and we'll go from there.
A: All right, if no one else has any more questions, we will move on to the next topic of conversation, which is new agent flow components allowing agents to act as a proxy for logs and metrics. And I just blanked on who added this.
D: So first we'll look at a high-level overview of how the agent-as-a-proxy setup works, walk through an example configuration, and then have a quick demo and Q&A. This is an overview of an example setup: here we have agent one, which is collecting logs and metrics; let's say logs from a file, while metrics could be scraped from a Kubernetes cluster.

Agent one sends the logs and metrics over the network to agent two, and agent two acts as a proxy and sends them on further, let's say, for example, to Loki or Prometheus.
D: …the Loki push API for receiving logs. The API expected by these new components is compatible with our existing logs- and metrics-sending components, and additionally any other remote_write clients in the ecosystem, such as Promtail, should be compatible with these new components.
D: Let's now take a look at the configuration in a bit more detail. In this example, for the metrics pipeline, agent one has a prometheus.scrape component, which scrapes the agent's own metrics and forwards them to another component, a prometheus.remote_write labeled "proxy", which will send the metrics over HTTP to localhost port 1990, where we expect our proxy agent to receive them.
D: And here it is: the proxy agent uses the new component, prometheus.receive_http, which is listening for metrics over HTTP and forwarding them to yet another prometheus.remote_write component, labeled "cloud", which in turn sends those metrics to Prometheus. Note that only the proxy agent needs to know the credentials for the managed Prometheus.
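A minimal sketch of that metrics pipeline might look like the following; the port, URLs, and credentials are placeholders rather than the exact values from the demo:

```river
// Agent one (collector): scrape the agent's own metrics and
// remote_write them to the proxy agent.
prometheus.scrape "agent" {
  targets    = [{"__address__" = "127.0.0.1:12345"}]
  forward_to = [prometheus.remote_write.proxy.receiver]
}

prometheus.remote_write "proxy" {
  endpoint {
    // prometheus.receive_http accepts pushes on /api/v1/metrics/write.
    url = "http://localhost:9999/api/v1/metrics/write"
  }
}

// Agent two (proxy): receive metrics over HTTP, forward to the backend.
prometheus.receive_http "api" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9999
  }
  forward_to = [prometheus.remote_write.cloud.receiver]
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push"

    // Only the proxy agent holds the backend credentials.
    basic_auth {
      username = "user"
      password = "secret"
    }
  }
}
```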
D: The logs pipeline looks very similar, just using Loki-specific components. So we have one agent collecting logs from a file and sending them to the proxy agent over HTTP, and the proxy agent is using the new component that receives logs over HTTP and forwards them, using loki.write components, to a managed Loki instance.
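The logs side is analogous; again a hedged sketch with placeholder paths and ports:

```river
// Agent one (collector): tail a file and push the logs to the proxy.
local.file_match "app" {
  path_targets = [{"__path__" = "/var/log/app.log"}]
}

loki.source.file "app" {
  targets    = local.file_match.app.targets
  forward_to = [loki.write.proxy.receiver]
}

loki.write "proxy" {
  endpoint {
    // loki.source.api exposes the standard Loki push path.
    url = "http://localhost:9998/loki/api/v1/push"
  }
}

// Agent two (proxy): receive logs over HTTP, forward to managed Loki.
loki.source.api "receiver" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9998
  }
  forward_to = [loki.write.cloud.receiver]
}

loki.write "cloud" {
  endpoint {
    url = "https://loki.example.net/loki/api/v1/push"
  }
}
```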
D: So now I have a short demo. It's a recorded one, but hopefully it will be useful as well. We have here a config, and we can see that for agent one we're using basically the same configuration we just walked through, and similarly for the proxy agent, the config we just saw for logs and metrics. I also have this short script that will generate some logs.
D: And we run the proxy agent using the proxy config file we've seen, and agent one using the collector config; we override the default port here to avoid a conflict between the two agents. This is sped up ten times, but after waiting a little bit you can see that the dashboard populates with logs received from the producer and metrics from the agent. This went from agent one via the proxy to Grafana Cloud in this instance, and the data sources of these dashboards are reading from that Grafana account.
D: Yeah, that's a great question; it's something that we have been discussing internally. So you can scale vertically, but if you wanted to scale horizontally, which is probably what most people would ask about, we have not come up with a recommendation yet. This is something we are discussing, and hopefully soon we will be able to recommend a setup that will allow you to scale proxies and also provide some level of redundancy.
C: I should find a way to ask this as a question, but I don't know how someone's going to say it. I think one of the cool things about how this is being done, versus using the agent as a proxy in static mode, is that you can have these chained pipelines, right? You can have an agent receiving logs over the network from another agent, but still have it do filtering or post-processing or whatever else you want to do, and that was not possible with static mode.
A: We transformed it from a Prometheus-style metric to an OTel-style metric.
D: Okay, one more call for any extra questions. Yeah, don't worry, there's still a chance to reach out: you can find us on community.grafana.com or reach out on our Grafana public Slack channel. Thank you.
E: Yeah, it's topical; scaling Prometheus is taking up 30 percent of my screen. Can you see my screen? Yep? Good. Yeah, so scaling Prometheus is hard, and I think the agent makes it a bit easier, but still there's not a lot of room in how you can do things. In most of the cases we either recommended people use something like the scraping service…
E: …which brings a whole lot more operational overhead that you have to worry about, or manually shard agents for their needs; and that all goes for static mode. But as we've started to mature Flow as the way to run the agent, we thought that there's a better way to go about things, and this is how the whole idea of clustering came about.
E: So clustering should allow you to run the agent in a highly available mode, being able to dynamically scale up and down to meet your workload. We will soon publish some documentation around how to run the agent this way, and part of this effort is also providing some opinionated dashboards for monitoring your cluster of agents, so you can be alerted on any issues and try to understand what's going right or wrong with your cluster.
E: So I want to show a couple of dashboards that we've created; they're available in the grafana/agent repo in our agent flow mixin. The first one is a dashboard that provides a general overview of the whole cluster. It shows how many nodes we have, and there's a node table listing the agents and their state: whether an agent is starting up and still joining as a viewer, whether it is shutting down and in a terminating state, or whether it has signaled that it's going to participate in distributing the load.
E: You can also see this information as a state timeline, to see how your cluster has been evolving through time. Here we can see that when we deployed two new versions, with these annotations, we had a brief moment, for something like 15 seconds, where the cluster was not converged as new pods came along. But since these time frames are quite small in comparison to the scrape interval, you shouldn't be losing any data.
E: So they can all individually work together to distribute the load and not miss any metrics. From here we can drill down to a specific node, where we can see some more detailed information: how many peers this node knows about, what the state of those peers is, and what data bandwidth is being used for the streaming and the packet-based communication.

And the success rate of this communication, as well as whether there are any pending packets that cannot be sent.
E: From this dashboard we can see that the overhead of clustering is pretty low. For this four-node cluster, we see that it's about 200 bytes per second for the packet-based communication, plus another 30 to 50 bytes for the streaming, always-open HTTP/2 connections. I think we've tested with up to around 150 nodes, and if I'm not mistaken, we never got to more than one kilobyte per second of traffic required to synchronize those nodes.
E: You can get notified when the Lamport clocks of the nodes are not propagating correctly, which I'll explain in a short bit; alert on the health score of the cluster's nodes; alert on whether there's a name conflict or there are nodes that cannot be terminated; or alert if there's cluster configuration drift.
E: For the two things that I haven't really explained: each node keeps a Lamport clock time, which is a way for these nodes to roughly order the messages they are exchanging with one another. Each time a node receives a message from another peer, it increments its own clock to the time being reported by the other peer, plus one. So, ideally, this should always increase in value as the nodes gossip messages with one another.
E: Lastly, one of the assumptions that we're making with clustering is that all nodes will be working with the same configuration file, and we can structure the River configs in a way that you don't have to, like with hashmod sharding, provide different configurations per shard, which is a brittle process that can easily lead to errors.
E: So we can also alert on the hash of these configuration files and, for example, pick up cases where a ConfigMap has not been propagated correctly, or cases where there are clusters running in the same namespace that might have conflicting jobs to distribute among themselves. Yeah.
E: As I said, these are available in the mixin in the grafana/agent repo. If you're not familiar with how mixins work and how you can make use of them in your own clusters, feel free to reach out on our GitHub or our community Slack, and we can provide some help with that. We hope that once this is out with the next release, you can continue playing with clustering and provide your feedback on it.
C: That answered the question. I have a few questions. One: could you show the remote_write dashboard, just to demonstrate that they're sharing active series amongst themselves?

You can click on the dashboards button on... okay, no, no. None of these is the right one. The dashboards dropdown on the... oh, I see.
C: So the very bottom set of panels will show kind of... oh, it's only showing one agent. I think you have to change the...
E: In the previous setup we were sending something like a million samples per second, while here we've dropped by half, because we don't have to run two replicas of each shard for availability; it's handled by the agents themselves. This also means that you can cut your rates and cost in half, so if you're using horizontal autoscaling to scale your agents with demand, this can not only make your setup easier to handle but also much cheaper.
C: So if we were to add a new agent, the active series would increase slightly for a little while, but then it would eventually even out as the load redistributes amongst, you know, the new set of agents. And you can kind of see that here: whenever we do a rollout, it jumps up a little bit because they're all kind of shuffling, but eventually it redistributes. So if you're going to use clustering...

We'll eventually have this written down as a recommendation, but you'll want to make sure that no single agent is critical; if you lose one or two of your shards, the whole thing doesn't get brought down. My second question, which Pascal also asks: how do you configure this? What do you have to do to turn this on? Okay.
E: We're using a headless service in Kubernetes, because if you pass a DNS name instead of an IP address, the agent will perform an SRV lookup and try to connect to all the addresses it finds through that name. After you see that your agents are able to connect to one another, you can go to prometheus.scrape, one of the two components that currently support clustering, and set the enabled attribute in the clustering block to true.
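As a sketch (flag and block names as of roughly this era; check the release docs for your version), you start each agent with --cluster.enabled=true and --cluster.join-addresses pointing at the headless service, and then opt scrape components in:

```river
// Discover pods in the cluster; every clustered agent sees the same
// target set from service discovery.
discovery.kubernetes "pods" {
  role = "pod"
}

// With the clustering block enabled, the discovered targets are
// distributed across all participating cluster nodes instead of
// every agent scraping everything.
prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]

  clustering {
    enabled = true
  }
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push"
  }
}
```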
E: Thank you. If all the agents in the cluster are using the same configuration file and are receiving the same targets from any service discovery mechanisms, this means that the load will automatically be distributed between them.

Also, Robert corrected me that phlare.scrape is not yet released, but it will be in v0.34, which is about three weeks out from now.
E: All right, thank you for your time. We'll continue polishing this experience and providing more instructions around how to make you successful using clustering, and we hope to get your feedback.
A: All right, that was great. Now, the last topic of the agenda is the review of open proposals, but before we jump into that, because that could take literally the rest of the call: does anyone have any topic they would like to bring up?

All right, I'm going to share my screen; just to give you a heads up, I'll be sharing the issues list in its own window.
A: All right, hopefully you see a big long list of proposals. Awesome. So we have filtered specifically for Flow, and we have them sorted by date. I think we can skip this Flow release plan one, Robert?

The next one is local.exec, one of the oldest conversations around Flow. This is essentially being able to run an arbitrary executable on a machine and get the output.
C: Can you zoom in, Matt, just a little bit? It's unfortunate that someone said it would be appreciated, because I want to close this; I want to reject my own proposal, I think. Last time we talked about this, there were a lot of concerns about security, right? With modules in mind...

You could hypothetically get a module which does a lot of malicious things, right? You could have a module which reads any file on your file system and sends that file somewhere else. So you should trust your module sources, but I think the blast radius of a malicious local.exec is a lot more dramatic; someone could, of course, remove your root file system, right?

And I think, based on that and based on the use case I originally proposed, I'm just not seeing it yet. I'm not sure it's worth it, even though I think it'd be neat.
A: I'll call on myself: I agree with Robert that we should close this. I think it's possible that maybe once we get the capabilities proposal, which is probably way, way, way down the line, this gets a little more legs, but even then it's pretty scary.
C: I want to take a step back real quick and say we haven't really done a lot of these, right? We've only talked about proposals live once, and I think we're still figuring out what it means to say a proposal is rejected. I think rejection never means no forever, but it means that right now this is not really aligned with our goals; and for the reason of security, we're probably saying that this one is rejected, although I think we have to take a vote on that.

Right, just me saying this is rejected doesn't mean it's rejected, but we do have the governance team here, I think most of them, so maybe there's enough for a supermajority if we want to do a governance vote now on whether or not this is rejected. Or are we going to say rough consensus, like, yeah, we don't see this being in the direct plans right now?
E: I think it's kind of a tracking issue for some components that we haven't had the chance to add yet, but yeah, I think we can skip this and go ahead with the rest of it. Okay.
E: Yeah, so the initial idea was that we could do something simple, like cortex-tenant, where we recognize the drawbacks and the performance penalty but have something that people might find useful. I don't know how you feel; I think that going for a proper solution for this would be a much more involved effort. Paulin did some work on it, so he may have more context around it.

But the question for me is: do we do something immediately usable, knowing the drawbacks that it has, or do we just drop it completely for now and figure it out later?
G: So I published an RFC for this; there's a pull request that should have been open for a while. I think the main drawback with the cortex-tenant approach was that it doesn't have much resiliency: if there's any issue, it doesn't really fail over very nicely, and you might lose your data. So I don't know exactly how much time it would take to make a more resilient approach; I'm not quite sure.
C: I see the value of this, and this kind of leads me to the next point: accepting a proposal doesn't mean we're going to plan it for work right away either, right? But unlike local.exec, I think this has a lot more clear use cases and there's little risk to it. It's just: will we have the time to implement it? I don't know, but I would accept it in the sense that this functionality would be accepted into the agent if there was a PR.
E: How about, in my mind, adding the accepted label, but explicitly saying that we scope it down to just a cortex-tenant-style wrapper, with all the good and bad that that includes?

I see, so yeah, that's a good question; we shouldn't decide on the implementation right now. Okay, let's go with the idea.
A: I'll call on myself: I like the idea. I'm not necessarily sold on the specific implementation, but...
B: Not particularly, but this keeps all the history and what was originally proposed here. And yeah, something like that, yeah.
A: But yeah, I guess: does anybody have a problem with, or anything against, accepting this?

Accepted, right? Yeah, all right, Robert.

All right, cool. Should have added... all right. Okay, is this it? It's done.
A: Okay, so I think this one has had quite a bit of history. Maybe, yeah. But basically, instead of declaring components this way, you could do it a variety of ways. Paulin?
G: There's something that I found quite confusing when I first started using River: when you use quotation marks, it sort of makes the label look like a string rather than a component name, and I thought, to be more consistent with other programming languages, maybe we shouldn't use the quotation marks.
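For concreteness, the inconsistency being described is between the quoted label in a declaration and the unquoted dot notation in a reference; a small sketch:

```river
// Declaration: the user-chosen label is a quoted string...
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push"
  }
}

// ...but a reference to the same component uses bare dot notation:
prometheus.scrape "self" {
  targets    = [{"__address__" = "127.0.0.1:12345"}]
  forward_to = [prometheus.remote_write.default.receiver]
}
```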
A: Yeah, I'm in general favor of this. I also would be in favor of purely dot notation, which is probably a more contentious thing; by dot notation I mean writing the ID as you would when referencing it. Go ahead, Robert.
C: I mean, it would be less of a problem once we remove singletons, because the only time where the last part is not a label is with exporter.unix, and that'll go away eventually. Sorry, the component will stay, but the fact that it doesn't support labels will go away. So maybe I'll be more in favor of this then, but for right now it makes things ambiguous, and I think that is a reason.
A: For myself, part of me also feels that the ship has sailed to some degree; going back and changing it, or making it handle both, just seems painful.
C: I don't think breaking changes are a concern; we can find ways to support both in a way that's, you know, transparent to the user, maybe not super annoying. But I'm still in favor of saying no to this one.
D: So, Robert, to your point about not distinguishing the name when you reference the receiver: as you can see in the second block, there are also just dots, and you cannot differentiate the label that easily from the component name.
C: No, that's fair; I'm talking more about the declaration than the reference. Okay, sorry, the context here is: when we used HCL instead of River, it was hard to tell where the component name stopped and where the label started, and we had to have a bit of logic around it, trying to do a lookup of "prometheus remote write default", and then that would fail.

So then we would look up "prometheus remote write", and it would kind of iteratively search backwards until we found a component name, just because of how HCL works. So that is why, when I was essentially forking HCL, I made the choice to clearly differentiate between the two. But that was only a problem for the declaration, not for the referencing; for the referencing it was never an issue.
A: Do either of you use Flow, and if so, is it confusing to shift between the label syntax and the dot syntax? I ask because I think most of the developers here have been using it for a long time, and for us there's no, or there's less, confusion. If you don't want to chat, that's fine; I'll just give it a second here if anybody wants to jump in on either of these two.
F: Okay, yeah, it's a little hard if you haven't had any experience with it, but you just get in and do it, and then...
A: All right, Paulin.
G: Just to clarify something about this label syntax: what the proposal is suggesting is that if the component name is declared with quotation marks, it makes sense to reference it with quotation marks, and if it's referenced without quotation marks, it makes sense to also declare it without quotation marks. That's kind of what it is. So we don't have to use this label, you know, map-like notation.

We could simply get rid of the quotation marks in the declaration of the component; but yeah, if that's too difficult, then that's not a problem. I just thought it's a bit confusing for first-time users. So, just to clarify, there's no proposal here to switch from...

Yeah, we don't necessarily have to switch to a label notation; we could just not use quotation marks in the component definition.
C: All right, I think this one's about... let's not vote now, but let's put this on some type of voting queue, and then we'll make the decision there. I'll...
A: Yeah, all right. Well, thank you, everybody. Cool, and maybe we have time for one more. Oh Lord, the next one: add conditional expressions to River.

Right, let's see, let's just see. So yeah, do you want to give us a very quick overview of what this does?
C: I mean, okay, so tools like Terraform support HCL, but they also support being configured with YAML or JSON. So this is kind of the equivalent of that, where River... well, I like writing it in an editor, but it doesn't technically play nice with a lot of tools like Helm charts or whatever.

So the proposal here is to accept some type of JSON or YAML representation of River, with the drawback that if we add, and we will add, new language features like variables and stuff like that, those might not have a direct representation in the more machine-readable formats.

I think, if you scroll down a little bit, I do propose what that might look like. Yeah, so kind of side by side there. I mean, it's just... it's YAML-ified, I don't know how to say it.
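Purely to illustrate the shape of the idea (this is not the proposal's actual mapping, and no such YAML form was implemented), a River block and a hypothetical YAML equivalent might look like:

```river
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push"
  }
}
```

```yaml
# Hypothetical YAML rendering. Expressions and cross-component
# references would need some string encoding, which is exactly the
# drawback being discussed.
prometheus.remote_write:
  default:
    endpoint:
      url: https://prometheus.example.net/api/prom/push
```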
A: I'll call on myself: I don't particularly like this proposal. Well, yeah, this proposal of having JSON feels like it really opens up the surface area, and I don't know if we can support all the features of River, like you said, in these languages, in YAML or JSON, without it being real ugly.

And this comes to some degree from static mode: if you really wanted to write YAML, you could do that and get away with it for a very long time.
C: We used to have a pressing use case, which was allowing Helm charts from across Grafana to adopt Flow and still have templating support with Helm, but we've since identified a separate path to enable the same use case which didn't require this. So now this is more of a: does it help adoption of Flow when integrating with tooling like Helm or Terraform or whatever else is deploying agents? Does this help make Flow be used more?
F: If you're open to just a newbie beginner perspective: once I got the hang of the River file, using that is so much simpler to configure than my YAML file. And I understand there are totally different use cases across the environment, but yeah, no, I love it. Awesome.
C: Yeah, I mean, because we don't have a pressing use case right now and because this is a little contentious, I'd be okay with closing it and saying we're open to rediscussing this later if it comes up again; but right now it hasn't come up. To be honest, I was really jumping the gun with this one; I'm just assuming problems might happen, and maybe they will, maybe they won't.
E: If we close it, let's just add a note that this might help when people are trying to provision a Flow configuration with some other tool, so that somebody who is searching for it in the future will find it and can weigh in on this. But yeah, I'd say let's just leave it at that for now.
B: Yeah, I was just going to respond to Nicholas's comment there and say that's one of our visions for the module repo: that each of those modules, or the River configs in there, double as live working examples as well.
C: All right, it feels like we have rough consensus for closing this as rejected for now; we can revisit it in the future. Maybe that's true for all rejections.
A: Oh, let me add that; I'll add that here in a second, because I think we're pretty much coming up on time. Okay.

So yeah, we would love any feedback, especially about the proposals; that's something that's pretty new. So if you all have any feedback on, hey, it's cool, or no, this is a big time waste, or somewhere in between, or ways to improve it, feel free to put it in the channel or the document. And as always, if there are any topics you'd like to see next month, please put those in the document. This will be going up on YouTube here in a few days, roughly, and I appreciate everybody showing up.