From YouTube: Grafana Agent Community Call 2022-03-16
A
Yeah, so we've got a few things on the agenda today, and we will just jump right into them as soon as I pull it up. I think the first thing is: internally, we had talked a little bit about how we do a bug scrub, where we kind of go through the backlog. Some of the other teams, I believe, have opened that up, and we were trying to see if anyone had any interest in either attending, or whether it's the whole...
B
Strong opinions? I mean, it'd be a good opportunity for people to find out why we might not be working on something, and to discuss, maybe encourage people to contribute if we don't have the time to put our own time into something. I'm saying "time" too many times. I don't know; I think I'm not opposed to it. It kind of feeds into our whole open-by-default thing, but it also might be a pretty boring call for a lot of people.
C
So, my two cents is that if we have some issue that has a very long back-and-forth thread of communication, and that communication takes time, we could invite the specific person that wants to discuss it to the bug scrub, so that they can provide their feedback right there, right away.
B
Well, I think we have to be careful to not do that too often, because it might make things take longer, right? But I think if something has that much discussion, it's probably already taking quite a long time anyway. Yeah, so saying "let's discuss it on a call during the next bug scrub" probably wouldn't change anything. So yeah, I like that. I do. Cool.
A
Yeah, it will literally be six weeks from today, right, if we did it right before this call. So I will add a to-do for that: I'll schedule it on the community channel and set up a new agenda.
A
All right, cool. You're up, covering relabeling rules a little bit.
C
Yeah, can I share my screen? Absolutely? Okay, just to set some ground rules here: feel free to interrupt. It's not like a teaching class or anything; I want to start a discussion, and if anybody spots a mistake, or has a better example or a more interesting use case to discuss, then feel free to bring it up right away.
C
Can you see my screen now? Great, okay. So I guess everybody's heard about relabeling rules, right? Is there anybody who's not aware of them at all?
C
Now it should be good, right? Yes, yeah, okay. So Prometheus generally ingests metrics in a line-oriented format. That means that each line is a new metric, or a new metadata line, that should be parsed and ingested, stored, etc. There's a metric name, there's a bunch of key-value pairs that are the labels, the value of the metric, and an optional timestamp, and what Prometheus does is treat each unique set of labels as a separate metric.
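For reference, a sample in the Prometheus text exposition format being described looks roughly like this (metric name, label key-value pairs, sample value, optional timestamp):

```text
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="GET",status="200"} 1027 1395066363000
```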
C
So these key-value pairs can be used to classify things. For example, if I'm measuring HTTP latency, then the labels can be used to classify which endpoint has been called, which HTTP method was used, what status code was returned, or which server served the request, for example. Each unique combination of these values is going to be a separate metric stored in the Prometheus database, and that also feeds into why labels are important for a metric's cardinality.
C
There are some internal labels that are set by Prometheus itself. They have to do with things like the metric name, where the scrape came from, which path was used to scrape, any URL parameters that have been passed, or the special labels set by the service discovery mechanism.
C
Before this call, I didn't prepare at all, by the way; it's all improv.
C
So a relabeling config consists of these seven fields, which define a relabeling step, and a Prometheus configuration file may have an array of relabeling steps that are applied in order of appearance. They can also be applied at different parts of a metric's lifecycle, which I will discuss later on. But just to see and understand what we're talking about, the actual fields that you can configure are the source labels, the separator, the target label, an optional regular expression, which defaults to just matching everything...
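For reference, a single relabeling step exposes these seven configurable fields; the label names in this sketch are made up for illustration, and every field except the ones you need can be left at its default:

```yaml
relabel_configs:
  - source_labels: [subsystem, server]  # labels whose values are concatenated
    separator: ";"                      # default separator
    regex: "(.*)"                       # default: match everything
    modulus: 8                          # only meaningful for the hashmod action
    target_label: example_label         # label the result is written into
    replacement: "$1"                   # default: the first capture group
    action: replace                     # default action
```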
C
Okay, and then we can use a regular expression to actually extract specific fields from these concatenated values, and either continue with the next operations if the regular expression matches, or just abort execution if the regular expression doesn't match.
C
So here we could match everything that comes after "cortex" in the concatenated values and perform some operations only on those metrics, or only on those targets; or we could just filter out all the subsystems that operate on a given cluster, for example. Keep in mind that these parenthesized values can be referred to later on as a capture group, so this allows us to get more creative with what we can populate in a target label.
C
And, of course, the target label is the actual label which the result of the replacement will end up populating.
C
So, if we continue from the previous example and put it all together in one big block: if we had those two source labels, captured the different parts with the regex, and placed them in the "my_new_label" target label, then we would see one extra key-value pair added to that specific metric, which would look like that.
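A sketch of the step just described, assuming hypothetical subsystem and server labels: the two source label values are concatenated, pulled apart again by the regex capture groups, and recombined into my_new_label:

```yaml
relabel_configs:
  - source_labels: [subsystem, server]
    separator: ";"
    regex: "(.*);(.*)"          # capture each original value separately
    target_label: my_new_label
    replacement: "$1@$2"        # e.g. subsystem="api", server="web-1" gives "api@web-1"
    action: replace
```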
C
And finally, the modulus field expects a positive integer, which was kind of weird when I first read about it. What it does is take an MD5 hash of the extracted value, that means the label values concatenated with the separator, and then perform a modulus operation based on this positive integer. We'll see in a bit what we can do with that.
C
So what can we do with these building blocks? How can they help us in our day-to-day work?
C
The first thing that we can do is instruct Prometheus to actually keep or drop specific targets and metrics, based on whether the label values match the regex or not. And if we go back to the previous example...
C
Or we could, on the other hand, have the same source labels, run the regex, keep everything that comes from the cortex subsystem, and then drop by default everything that comes from other servers, for example. In cases where we've had to support community requests, this can be useful for keeping only certain metric names if you have metrics with high cardinality, or for using it in accordance with whatever labels your service discovery mechanism exposes.
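A minimal sketch of both filtering directions; the subsystem value and metric-name pattern are illustrative:

```yaml
# Keep only targets whose subsystem label matches; everything else
# is dropped before it is ever scraped.
relabel_configs:
  - source_labels: [subsystem]
    regex: cortex
    action: keep

# Drop individual high-cardinality series by metric name after the scrape.
metric_relabel_configs:
  - source_labels: [__name__]
    regex: "my_expensive_metric_.*"
    action: drop
```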
B
No, no, I was just going to mention your last sentence there, but I noticed you wrote it down now that you've scrolled: the "make sure that the labels are unique after doing that". Exactly.
C
So if we dropped, for example, the server label here, and we had both a cortex web server and a cortex SQL server, and they had different values for their metrics, we wouldn't know which one to ingest. I didn't know what would happen in this case. Robert, do you remember? Would it just ingest either of those, or would it error out?
C
Sorry, can you repeat the question? Yeah, of course. So in the previous example... sorry, here. We had this label set, okay, and if we actually dropped the server label and ended up with two metrics that had the subsystem="cortex" value, what would happen to those two values? Would Prometheus select one at random, or would it error out?
B
Well, okay, so it depends on whether they're from the same target or from different targets. If they're from different targets, then it's effectively whatever gets scraped last. As long as they have the same timestamp, then the most recent sample will be accepted. If they have different timestamps, then you'll get out-of-order errors.
B
Okay, thank you.
C
Okay, and then moving on: replace is the default action for a relabeling rule if we haven't defined one, and what it does is just replace the value of a given label with whatever is in the replacement field.
C
So, in the simplest of cases, you can use this to just replace the env label with the value "production", to hard-code things, which is the most common use case, I think. Or you could do fancier things, like having the source labels be the address and the port number, concatenating them, performing a replacement using the regex capture groups, and then passing them over to another label.
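Both replace-action variants mentioned above might look like the following sketch; the env value and the host-extraction regex are assumptions, not verbatim from the call:

```yaml
relabel_configs:
  # Simplest case: hard-code a label value on every target.
  - target_label: env
    replacement: production

  # Fancier: strip the port off the scrape address into its own label.
  - source_labels: [__address__]
    regex: '(.+):\d+'
    target_label: host
    replacement: "$1"
```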
C
And finally... well, not finally: the hashmod action is what makes use of the modulus that we discussed previously.
C
It actually performs the modulus operation, and then it can be used to keep or drop targets based on the value of this modulus operation.
C
I think an example here could make things clearer. If we have this custom metric with its labels, and we perform the hashmod with a modulus value of 8, then what it would do is populate this target label, __tmp_hash_mod, with the result of the calculation: it would just take an MD5 of the values and apply the modulus.
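A common sharding sketch built on hashmod; the shard count and temporary label name here are assumptions for illustration:

```yaml
relabel_configs:
  # Write (md5(address) mod 8) into a temporary label...
  - source_labels: [__address__]
    modulus: 8
    target_label: __tmp_hash_mod
    action: hashmod
  # ...then keep only the targets that land in shard 2 of 8.
  - source_labels: [__tmp_hash_mod]
    regex: "2"
    action: keep
```

Because each scraper keeps a disjoint slice of the targets, running several instances with different `regex` values splits the scrape load horizontally.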
C
So, for example, the Kubernetes service discovery, as we mentioned before, exposes a number of internal labels that start with a double underscore, and that means that they will not be present in the final label set of the metric unless we explicitly configure them to be. What we could use a labelmap here for is to perform a regex capture on these kinds of names.
C
So here it will capture, as a first group, the pod, and as a second capture group, the container name, and we could just replace that with something like "k8s_pod_container_name", thus actually keeping those label values and not discarding them at the end of the scrape cycle.
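A sketch of a labelmap step for the Kubernetes case; the exact `__meta_*` pattern and target prefix are illustrative, not necessarily the ones shown on screen:

```yaml
relabel_configs:
  # Copy every discovered __meta_kubernetes_pod_label_<name> label into a
  # k8s_pod_label_<name> label, so the value survives the cleanup of
  # double-underscore labels at the end of relabeling.
  - regex: '__meta_kubernetes_pod_label_(.+)'
    replacement: 'k8s_pod_label_$1'
    action: labelmap
```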
C
Okay, and one last thing: a confusing part of relabeling rules is that they can be found in different parts of the Prometheus config file, and that's why it took a long time for me to actually understand how they work and why my metrics weren't showing up. They can be defined both under the relabel_configs key and under the metric_relabel_configs key.
C
The difference between those is that these rules are applied at different parts of a metric's lifecycle. So the first one here is applied directly when the metrics are first scraped. Dropping something here means dropping the entire scrape target, the entire machine that you're scraping, so that you won't get any other metric series from that machine, while the second block here is applied after the scrape happens.
B
Yeah, could you go back up real quick? So relabel_configs, those are applied before scraping. After... I think it's...
B
To
say
after
discovery,
but
before
scraping
then
metric
relabel
configs
happens
during
scrape
for
every
sample
that
you
see
and
that's
before
it
gets
added
to
the
rate
ahead
log
and
then
rate
label
configs
happens
at
like
when
the
right
ahead
log
is
being
read
for
sending
stuff
out
to
the
remote
endpoint.
B
So the first two happen before you store things on disk, and the last one happens after it's already on disk. The remote-write relabel rules are useful for splitting up what you're writing to different endpoints. So maybe you want to send team A's metrics to endpoint number one, and team B's metrics to endpoint number two; you could use the write relabel rules to do that there.
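Putting the three stages together in one illustrative config; the namespace value, metric-name pattern, team label, and URLs below are made up:

```yaml
scrape_configs:
  - job_name: example
    # 1. After service discovery, before the scrape:
    #    dropping here drops the whole target.
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        regex: production
        action: keep
    # 2. During the scrape, per sample, before the write-ahead log.
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'expensive_metric_.*'
        action: drop

remote_write:
  # 3. When samples are read back from the WAL for remote write:
  #    send team A's series to one endpoint...
  - url: https://metrics.example.com/team-a/push
    write_relabel_configs:
      - source_labels: [team]
        regex: team-a
        action: keep
  # ...and team B's series to another.
  - url: https://metrics.example.com/team-b/push
    write_relabel_configs:
      - source_labels: [team]
        regex: team-b
        action: keep
```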
C
And those are available until when? After the scrape?
B
Yeah, any label starting with two underscores gets removed after the relabel_configs.
C
And finally, the last place where this can be used is in an Alertmanager configuration. Correct me if I'm wrong, but I think that this also follows the same pattern.
C
I think this one can make sure that the alerts that are sent to a remote Alertmanager are standardized, mainly, and this is used to choose which Alertmanager we should post the alerts to, right?
B
I actually don't know about this one; this isn't in the agent. I have no knowledge about how the alerting...
B
You did a good job. Relabel rules are a big source of confusion for a lot of people, especially because there are a lot of fields that only apply if you're using a certain action, and having a doc like this is really, really useful.
A
Yeah, I mean, early on, whenever I needed to reference anything, there was like one blog post that I would consistently go to that talked about the differences. Actually, that blog post was not on grafana.com; it was just a random blog post. So having a document like this handy will be excellent.
C
Do you think that we should add more examples to these blog posts, or more technical explanation of why things happen a specific way? How would you make this more approachable and more useful to end users if we were to publish it, like, tomorrow?
B
I think maybe coming up with a list of the most common use cases that people want to use relabel rules for, and then saying something like: if you want to drop metrics because you have too many, you want to use metric relabel rules; if you want to change metrics before they are written, you want to use metric relabel configs. Stuff like that.
B
So that way, I think what you have is good; then having just a bullet-point list somewhere at the bottom, like "example use cases and what to use or what to mix", would tie it all together and help people understand it a little bit better.
A
All right, we'll do open topics from the agenda. If anybody has anything they want to talk about, here would be your time to do it.
A
All right, if no one has anything else, then I believe that is the full agenda. I have it open right now; it's in a different tab. So, all right, if no one has anything else, then we will hop off and give everybody some time back, and this will get posted, generally, later today. All right.