From YouTube: Grafana Agent Community Call 2023-01-18
Description
In this community call, we discussed the new Loki flow components. Due to technical issues we had to record the demo later and splice it in. You can find future meetings at https://docs.google.com/document/d/1TqaZD1JPfNadZ4V81OCBPCG_TksDYGlNlGdMnTWUSpo/edit
A
Welcome to the Grafana Agent January community call. It's been a little bit of time since we've been here. So thank you, everybody, for coming, and we will go ahead and jump in. As always, if anyone has any questions, feel free to put them in chat, raise a hand, whatever you need to do to get a hold of somebody and get some attention. But we'll start with the first topic, and that is logging components, flow specifically, and I'll turn it over to Paschalis.
B
Hello, everyone. I'll share my screen.
B
Still nothing, right? Correct. Should I send you the River config that you need to run? Sure.
A
Are you sending it?
B
Yeah, do you want us to go through the configuration and explain what's going on, or do you want to build the agent, run it, and see it in action?
A
Explain it first and then we can share it. Okay.
B
So, first we can see the global logging block, which is what we currently have in the agent, along with a hint of what's coming in the future. It's a pretty cool feature: we will be able to connect the agent's own logging to the logging pipelines that we've been working towards, to provide some feedback. This past quarter, we've been trying to reach feature parity with the static agent mode and allow our flow pipelines to have the full logging experience.
B
So you can discover log sources, scrape log entries from them, perform some relabeling or processing stages, and then send them over to a Loki instance. So, in the first component that's defined here, we can see discovery.file, which uses glob patterns to discover files on disk; then, similar to other discovery components, this can be used by subsequent ones to get log entries from. And this is exactly what we do with the second component, loki.source.file, using a River expression.
B
We tie the output, the exports of the first component, to the targets field here, and we also tell it where to forward the received log entries to. In this case, we'd like to pass log entries through some processing stages defined by the next component, which is loki.process.
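[Editor's note: a minimal River sketch of the wiring described here. The glob path is a placeholder, and the argument schema shown matches later Flow documentation; in later agent releases the file-discovery component was renamed to local.file_match.]

    // Discover files on disk using glob patterns; the path is a placeholder.
    discovery.file "tmpfiles" {
      path_targets = [{"__path__" = "/tmp/flow-logs/*.log"}]
    }

    // Tail the discovered files. The targets field consumes the exports of
    // discovery.file above, and entries are forwarded on to loki.process.
    loki.source.file "local" {
      targets    = discovery.file.tmpfiles.targets
      forward_to = [loki.process.basic.receiver]
    }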
B
The way that the loki.process component works is similar to Promtail's pipeline stages. It can run one or more processing stages in sequence, and these stages have access to a shared extracted values map, where one stage can use the output of a previous one and combine them in powerful ways. So, for example, here we define a new stage that simply parses incoming log lines as the CRI log format.
B
In the comments below, we can see how that works. So the first line is a CRI-formatted log line that contains a timestamp, the stream where the log came from, a one-letter flag, plus the rest of the log line's contents.
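[Editor's note: a sketch of a loki.process block with a CRI parsing stage, assuming the stage.* block syntax of later Flow versions; the earliest releases nested stages as stage { cri {} }. The sample line in the comment is illustrative, not from the demo.]

    loki.process "basic" {
      forward_to = [loki.write.default.receiver]

      // Parse each incoming line as the CRI log format, e.g.:
      //   2023-01-18T12:34:56.789Z stdout F content="level=warn msg=\"something happened\""
      // This extracts the timestamp, the stream (stdout/stderr), the
      // partial/full flag (P/F), and the remaining content of the line.
      stage.cri {}
    }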
B
So what the next stage does is parse some content as logfmt. If the source is missing or empty, then the log line itself is parsed, but here we instruct it to parse the value stored in the content field, and to map whatever value it finds under level into the lvl variable, as well as whatever it finds under message into msg.
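[Editor's note: a sketch of the logfmt stage described here. In the mapping, keys are the names written into the extracted values map and values are the logfmt fields to read; the "content" source assumes the CRI stage above placed the line body under that key.]

    // Parse the extracted "content" value as logfmt, mapping the logfmt
    // field "level" into "lvl" and "message" into "msg".
    stage.logfmt {
      source  = "content"
      mapping = {"lvl" = "level", "msg" = "message"}
    }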
B
The next stage, which is a labels one, is able to take values from this extracted map and assign them as labels to our log lines. So our log entry will now have a new label called level, which will be "warn", the value that was extracted before, as well as a new stage that adds some static labels, in this case hostname and source.
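[Editor's note: a sketch of the two labelling stages described; the static label values are placeholders, not the demo's values.]

    // Promote the extracted "lvl" value to a "level" label, so a line with
    // level=warn in its content gets the label level="warn".
    stage.labels {
      values = {"level" = "lvl"}
    }

    // Attach fixed labels to every entry; both values here are placeholders.
    stage.static_labels {
      values = {"hostname" = "demo-host", "source" = "demo"}
    }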
B
Next up, we have two more logging pipelines for you. One works by receiving syslog messages over TCP. It just defines a listener: where we want to listen for those messages, what labels to attach to these log entries, and where to forward them to. In this case, we're going to forward them to a loki.relabel component that will add the hostname label. The other one defines a receiver, which can receive OTLP signals.
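[Editor's note: a sketch of a syslog listener along the lines described; the address and labels are placeholders, and TCP is the listener's default protocol.]

    // Listen for syslog messages over TCP and forward the resulting entries
    // to the loki.relabel component that adds the hostname label.
    loki.source.syslog "local" {
      listener {
        address = "127.0.0.1:51893"
        labels  = {job = "syslog"}
      }
      forward_to = [loki.relabel.hostname.receiver]
    }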
B
Matrix
lock
synthesis
in
this
case
we're
only
interested
about
logs
and
forward
them
to
another
call
exporter
logic
component,
which
is
able
to
allow
you
to
transverse
between
the
two
ecosystems
similar
to
metrics.
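[Editor's note: a sketch of the OTLP-to-Loki handoff described here; the gRPC and HTTP servers listen on the OTLP defaults when left empty.]

    // Receive OTLP signals over gRPC and HTTP; only the logs output is wired up.
    otelcol.receiver.otlp "default" {
      grpc {}
      http {}

      output {
        logs = [otelcol.exporter.loki.default.input]
      }
    }

    // Convert OpenTelemetry log records into Loki entries and hand them to
    // the same relabeling pipeline as the other sources.
    otelcol.exporter.loki "default" {
      forward_to = [loki.relabel.hostname.receiver]
    }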
B
We've been investing in providing a good, first-class experience for OpenTelemetry, so that users can seamlessly switch between receiving from an OpenTelemetry endpoint and writing to Loki, or doing the opposite: scraping logs in the Loki format, transforming them to OpenTelemetry log entries, then using some OpenTelemetry exporter to export them to a different environment.
B
The log entries here will also be forwarded to the same loki.relabel component, which just has one relabeling rule. It's a replace rule which will populate the hostname label again; the replacement here is the same River expression that we saw before, taking the contents of the HOSTNAME environment variable and putting them there. And finally, these log entries are also sent to the loki.write component, which is connected to Grafana Cloud.
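[Editor's note: a sketch of the relabel rule and Grafana Cloud write described here. The URL and credentials are placeholders, not the demo's values; env() is the River stdlib function for reading an environment variable.]

    // One replace rule that sets the "hostname" label on every entry to the
    // contents of the HOSTNAME environment variable.
    loki.relabel "hostname" {
      forward_to = [loki.write.default.receiver]

      rule {
        action       = "replace"
        target_label = "hostname"
        replacement  = env("HOSTNAME")
      }
    }

    // Ship everything to a Loki endpoint; URL and credentials are placeholders.
    loki.write "default" {
      endpoint {
        url = "https://logs-example.grafana.net/loki/api/v1/push"

        basic_auth {
          username = "123456"
          password = env("GRAFANA_CLOUD_API_KEY")
        }
      }
    }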
B
Due to some technical difficulties that we encountered before, we probably won't be able to see the log entries arriving in Grafana Cloud. We will be sharing a snippet like this in the community call notes, so you can get started and have a starting point. But if you just send a log file entry, a syslog message, and an OpenTelemetry-compatible log entry to the correct endpoint, you would be able to connect to your Grafana instance and see all three log entries parsed with the correct set of labels.
A
I think that after this, we'll get together and record a little demo showing it in action, and I'll splice it into the recording.
B
Hello, everyone. This is a companion recording to the Grafana Agent January community call, where we showcased some of our new Loki components and how they can be used and combined to build logging pipelines. Due to some technical difficulties, we could not run the demo live, so I'll share my screen and show it to you right now.
B
Very quickly, this is the config that we went through: a logging block that, in the future, will be able to redirect the agent's own logging into a Loki pipeline. Here, we're discovering log files using glob patterns, reading them, and then running some processing stages on them.
B
We
also
have
some
listeners
for
assist
log
messages
and
open
Telemetry
log
entries
where
we
add
some
relabeling
rules
to
them
to
add
the
hostname
environment
variable
and
we
send
everything
both
to
a
lucky.
The
density
out
component,
that
is
for
demo
purposes,
so
that
we
can
see
everything
working
in
the
command
line,
as
well
as
a
aggrapher
cloud
instance,
but
go
ahead
and
run
it.
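[Editor's note: the standard-output component is not named clearly in the recording; a plausible fit is loki.echo, which prints received entries to stdout. A sketch of the fan-out under that assumption:]

    // Print every received log entry to the terminal, for demo purposes.
    loki.echo "debug" {}

    // Fan out: upstream components can forward to both the echo component and
    // the Grafana Cloud writer at once, e.g.:
    //   forward_to = [loki.echo.debug.receiver, loki.write.default.receiver]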
B
The agent is running, so let me send a log message to one of the files that we're tailing. I can see the log line here in my standard output, and we can do the same with syslog messages using netcat.
B
Similarly,
I
can
see
the
entry
over
here
same
goes
for.
B
That
I
have
received
log
entries
from
three
sources
with
the
label:
log
file
or
Talent
syslog
with
an
application
event.
Log
entry,
the
method
something
happened
after
the
look
at
the
process
Pipeline
and
the
outer
logins
as
well.
B
It's
interesting
to
see
is
the
flow
UI,
which
is
a
really
nice
way
of
debugging
Telemetry
pipelines
and
seeing
how
they
match
the
mental
model
of
the
user
and
what
actually
goes
on
when
the
data
flows
through
the
defined
pipelines.
So
here
we
have
the
locate
the
source,
the
syslog
component
and
the
auto
collector
receiver.tlp
export
the
data
in
logic,
format
and
both
of
these
direct
data
through
a
relabel
stage
and
send
them
over
to
grafana
cloud,
and
we
started
output.
B
We're
trying
to
make
if
we
open
Telemetry
ecosystem,
a
first-class
citizen
in
girlfriend
island,
so
that
users
can
seamlessly
switch
between
the
two.
So
here
you
can
see
an
example
of
receiving
log
entry
from
an
open,
Telemetry,
instrumental
duplication
and
then
handing
them
as
located,
but
you
can
actually
do
the
opposite
as
well
and
on
the
right
side,
the
right
hand,
side,
you
can
see
the
opposite.
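[Editor's note: a sketch of that opposite direction, using otelcol.receiver.loki to turn Loki entries into OpenTelemetry log records and an OTLP exporter to ship them; the endpoint is a placeholder, and Loki sources would forward to otelcol.receiver.loki.default.receiver.]

    // Accept Loki-formatted entries and convert them to OTel log records.
    otelcol.receiver.loki "default" {
      output {
        logs = [otelcol.exporter.otlp.default.input]
      }
    }

    // Export the converted records to some OTLP-compatible backend.
    otelcol.exporter.otlp "default" {
      client {
        endpoint = "otel-collector.example.com:4317"
      }
    }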
B
You
can
see
the
discovery.file
component,
picking
up
the
target
which
the
login
source
file
reads
and
starts
telling
those
files
it
sends
those
files
over
to
lucky.process
so
that
they
get
processed,
and
then
they
get
the
written
to
grafana
cloud
and
log
to
standard
output
as
well.
B
Each
of
these
components
has
its
arguments
listed,
so
here
we
can
see
how
the
globe
patterns
were
expanded
and
what
files
flow
is
currently
tailing.
Along
with
whether
the
tailor
is
running
what
the
labels
it
is
appending
to
the
log
entries.
What
is
its
read
offset
and
we
can
see
what
other
components
is
depends
on.
B
This makes logging pipelines not only easier to write, understand, and modify in a configuration file, but also something that is easier to make sense of in a production incident or when you're trying to debug an issue. Feel free to reach us in the agent repo, in our public community Slack, or in a GitHub discussion about how this works for you; lessons learned or any feature requests, we'd be happy to hear from you. Have a great rest of the day. Thank you.
B
Again, feel free to use this and let us know how it works for you. One of Flow's biggest advantages is the UI, which allows a better overview of how all the components are tied together and how data flows through. So it would be easy to see this pipeline working and see data flowing through it, but we might record this later and share it with you.
A
Robert, do you want to take that? I know you have possible noise happening.
C
We have Helm charts now for the agent, hooray, use them. They're really new and they're going through a lot of changes, so I can't promise you won't have to rewrite your values.yaml a bunch of times, but we're excited, because we think Helm charts are the way people configure and deploy things in Kubernetes. So yeah, the Helm charts are meant for flow mode by default, but you can change it to static mode.
C
Good question. Static mode is the... I mean, how do we want to describe this? Okay, so static mode is not flow mode. Does that help? Is that useful? Static mode is what the agent originally launched with; we kind of gave it a name in retrospect after flow was introduced.
C
We really just needed to give it a name: when we said flow mode, well, what is the opposite of flow mode? It's static mode. That's all it is.
B
Basically, it's the YAML config, yeah.
C
It's certainly the default mode; there are only two modes: it's either static or it's flow. The second question was where you get the Helm charts. The Helm chart is deployed to the normal Grafana Helm repository, but the code is in the grafana agent repo.
A
I think, actually, this will be up on the YouTube channel sometime today, probably actually tomorrow; it takes a bit to compile. So thank you everybody for coming, and we'll see you later. Thank you.