From YouTube: Grafana Agent Community Call September
Description
Aaron presents a deep dive into using modules and best practices. Find out more https://docs.google.com/document/d/1TqaZD1JPfNadZ4V81OCBPCG_TksDYGlNlGdMnTWUSpo/edit
A
If we have time at the end, we will take a look at questions. If you have any questions throughout, feel free to post them in the chat; this will be up on YouTube, so wherever you're watching, welcome. With that, we'll give a quick chance in case anybody has come here with questions they want to hit on; we'll give it a brief second here. If you want to, post them in the chat, or just unmute and fire away.
A
Okay, nobody jumps in, so no problem, we'll jump right into it with Aaron. If you have any questions as he's talking, or about anything in general that you think of, again, feel free to post them. So I would like to welcome Aaron today, who's a Solutions Architect at Grafana and who's going to talk a bit about what he does with Kubernetes, the agent, and modules. Go ahead, Aaron.
B
Thanks, Eric. My name is Aaron Benton. I am a Senior Solutions Architect on the Professional Services team here at Grafana, and today I just want to take a little bit of time and cover some of the different approaches that we've taken on the Professional Services team in terms of how to get the most out of the agent, and how to get the most out of your observability stack, as efficiently as possible, with as little code as possible.
B
In the metrics world you are probably familiar with the annotation namespace, if you will, of prometheus.io: scrape, port, path, etc. When I first started writing a config for that, I did it with the agent's static mode, and started to expand on it as the use cases I was coming across needed more extensibility, or more control over what a user wanted to scrape or ingest. A perfect example is a tenant.
B
Many of the customers that I would work with are deploying Mimir and GEM with multiple tenants, and wanted to simply be able to control which tenant a metric or a stream went to as a result of just setting an annotation, so I started exploring that. I want to share my screen here; let me make sure it's the right one. You should see Visual Studio here, hopefully. Yeah.
B
Cool. I'm going to just start out and go through a little bit of this config in static mode before we switch over to Flow, just so you kind of get an idea of how awesome it is that I was able to port this over to Flow, and that it's all out there and able to be used. So with metrics, you want to get metrics about your applications, or other open source technologies that you're deploying; it's not just about Kubernetes metrics. And this is my opinion only: I'm not a fan of using things like a ServiceMonitor or a PodMonitor or a Probe, simply because the more ServiceMonitors, PodMonitors, Probes, whatever, that are deployed, the more service discovery requests are happening against the Kubernetes API; it's unbounded. I would rather have a single set that says discover all of the endpoints, or all of the services, or whatever, as a single job, and you go from there. We can have a debate about that.
B
I'm happy to discuss that further, but anyway, in this particular config we have added support for these various different annotations. Along with the prometheus.io ones, we've also extended a metrics.agent.grafana.com/<operation>-type annotation namespace, so in here I have one for endpoints, and we're doing all of these different things as part of discovery relabelings.
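To make that concrete, the pod annotations being discussed look roughly like this; the prometheus.io keys are the conventional ones, while the metrics.agent.grafana.com key is a sketch of the extended namespace, and its exact name should be checked against the modules repository:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # conventional Prometheus-style scrape hints
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
    # extended, agent-specific hint (key name assumed for illustration)
    metrics.agent.grafana.com/tenant: "team-a"
```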
B
You
know
setting
things
like
a
you
know:
deployment
label
Etc,
which
is,
is
you
know,
very
useful,
because
we
can
take
the
same
approach
if
I
look
at
my
looks
so
this
is
the
next
kind
of
evolution
of
this
is
I,
said
well,
if
I'm
using
annotations
for
metrics.
Why
can't
I
use
them
for
my
logs
and
this
really
opened
up
a
lot
of
possibilities
in
terms
of
controlling.
B
Pipeline
stages
to
for
each
different
use
case,
I
could
simply
expose
annotations
that
allowed
a
developer
SRE
whomever
to
control
the
behavior
of
their
lungs
as
no
more
sending
the
entirety
of
the
locks.
As
an
example,
the
default
product
configuration
literally
had
like
one
stage
in
there,
and
it
said
CRI,
I,
think
or
Docker
and.
B
It
just
parse
the
log
line
as
it
is
and
send
everything.
Well
what
if
a
developer
leaves
debug
mode
on
right.
Do
you
need
to
ingest
all
of
your
debug
logs
or
Trace
level
logs,
and
my
opinion
is
no:
if
debug
mode
is
on,
then
you
most
likely
troubleshooting
some
type
of
issue
you're
viewing
the
Pod
blogs
directly
on
the
Pod,
whatever
the
case
may
be,
you
don't
need
to
send
those
logs
to
Loki
for
storage,
whether
that's
you
know,
on-prem
or
or
in
the
cloud
right.
B
Save,
though
save
that
money
so
to
speak,
or
things
like.
In
my
opinion,
a
label
of
level
should
be
associated
with
every
single
log.
B
I
see
it
all
the
time
where
someone
will
be
searching
for
they'll
do
a
specific
set
of
labels
right
and
namespace
whatever
and
I'll
look
for
a
specific
error
message:
wow
that
error
log
level
probably
represents
less
than
half
a
percent
of
the
overall
log
volume.
Why
would
you
want
to
search
through
all
the
info
logs
for
a
specific
error
like
you
can
just
disregard
all
the
info
level
logs
by
specifying
the
level
of
error
in
this
case?
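As a hedged illustration of that point in LogQL (the namespace and message here are made up), a level label lets the query skip the info-level streams entirely:

```logql
# only error-level streams are even considered before the line filter runs
{namespace="checkout", level="error"} |= "connection refused"
```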
B
So
there's
a
lot
of
different
annotations
on
here,
I'm
going
to
go
over
some
of
those
more
in
in
detail,
but
what
I
wanted
to
start
with
was
this
is
a
very
you
know,
lengthy
configuration,
so
there's
gobs
for
Auto,
scraping
jobs
for
pods
or
for
endpoints
for
probes
even
are
also
all
supported,
would
do
probes
for
services
and
ingresses.
When
you
start
to
look
through
this
config,
it
does
not
get
any
smaller
and
it
could
actually
be
even
more
three
four
times
larger
than
this.
B
If I was to look at this logs config in static mode, again with all of these different annotations that we're supporting, that ends up being a thousand-line configuration, and that's just for one tenant; the config has to be repeated per tenant. Although, when you're adding another tenant, this is what it would look like with the YAML anchors, so it's not a lot of extra code to add more tenants. But we have all of the different stages: if this is your log format, klog, we're doing this.
B
All
of
these
things,
I
can't
pick
and
choose.
If
I'm
you
know
sharing
this
config,
you
know
with
a
customer
working
on
a
a
deployment
of
Griffon
agent.
B
This
is
where
flow
comes
in
so
for
metrics.
In
this
case,
this
is
my
config.
B
I
mean
I
I'm.
Actually,
this
agent
is
sending
metrics
for
two
tenants
and
an
on-prem
or
self-managed
Griffon,
Enterprise
Metrics
deployment
and
then
I'm
also
sending
the
metrics
to
Griffon
file
and
I
was
able
to
do
that
through
what
67
lines
versus
1200
lines
or
whatever
that
was
before
and
don't
get
me
wrong.
All
of
those
all
that
code
still
exists.
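For a sense of the shape, here is a trimmed sketch of that kind of Flow config; the URLs, tenant IDs, and credentials are placeholders, not the ones from the demo:

```river
prometheus.remote_write "gem" {
  endpoint {
    url = "http://gem.example.local/api/v1/push"
    headers = {
      "X-Scope-OrgID" = "tenant-a", // tenant selected per remote write
    }
  }
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus-us-central1.grafana.net/api/prom/push"
    basic_auth {
      username = "123456"
      password = "<token>"
    }
  }
}
```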
B
You
know
with
flow.
We
were
able
to
Port
that
to
modules,
and
there
is
a
publicly
available
modules
repository
so
grafana
Agent
Dash
modules.
B
I'll probably switch back and forth between these as we talk about them, just to try to trigger thoughts, approaches, or use cases that you might have for something like this. On the first one I mentioned, about controlling behavior, I want to talk about these two, which are scrub-nulls and scrub-empties. Prior to joining Grafana I spent about five years at Couchbase as a Principal Solutions Architect, in the NoSQL world, dealing with JSON, and I used to say this all the time.
B
Json
is
great
because
it's
a
flexible
schema
right.
The
schema
is
literally
stored.
Next
to
the
value.
The
Json
is
not
an
efficient
storage
mechanism
because
the
schema
is
stored
next
to
the
value
right.
Is
there
extra
bytes
and
I'll
use
istio
as
an
example,
many
of
you
are
probably
using
istio.
If
you're
using
sgo
Json
logging,
it
will
log
a
consistent
Json
model,
whether
or
not
it
has
a
value,
it
will
still
write
the
schema,
and
you
know
whatever
prop
colon.
No,
it
doesn't.
Do
you
any
good
at
all
to
store
that?
B
There are a couple of other scrubbing ones on here that I think are worth mentioning as well, like scrubbing of the timestamp. If you look at your logs in Explore or whatever, have you ever noticed that there are two timestamps there?
B
One
of
them
is
the
the
metadata
time
stamp
of
when
the
timestamp
or
the
log
was,
you
know,
captured
and
that's
what
what
is
displayed.
However,
everyone
with
logs
obviously
like
you're
writing
a
timestamp
to
the
log
line
itself
that
can
be
redundant
essentially,
so
this
annotation
of
scrubbing,
the
timestamp,
actually
removes
the
timestamp
from
the
log
line
itself
and
depending
on
what
time
stamp
format
you
were
using.
That
can
you
know,
save
another
20
30
bytes.
B
Does
it
seem
like
a
lot,
but
you
do
a
couple
hundred
billion
log
messages
a
month
that
can
be
some
significant
savings
and
taking
it
even
further
is
the
log
level
foreign
level
we
want
to
make
it
a
label.
B
These
modules
do
that
either
by
you
specifying
a
log
format,
and
we
know
where
to
extract
it
at
or
a
best
effort
that
we
can
kind
of
dive
into.
In
terms
of
those
regular
expressions,
if
you
wanted
to,
you
could
remove
the
log
level.
The
default
behavior
is
to
leave
everything
as
it
is,
and
that's
what
I
really
like
about
this
annotation
based
approach
to
logging
and
metrics
is
you're
opting
in
to
the
behavior.
So
we
want
to
ingest
all
the
logs
exactly
as
they
are.
B
If
you
want
to
enhance
them
more,
we
do
that
including
things
like
dropping
the
log
levels
like
if
you
want
to
remove
debug
the
default
for
this
is
to
drop
debug.
If
you
had
a
use
case,
you
needed
to
keep
debug
set
The
annotation
to
false,
and
we
would
retain
that
message
type
of
thing.
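For illustration, the opt-in/opt-out shape on a pod looks roughly like this; the key names are reconstructed from the talk and should be verified against the agent-modules repository:

```yaml
metadata:
  annotations:
    # declare the format so the level can be extracted reliably
    logs.agent.grafana.com/log-format: "generic-json"
    # dropping debug defaults to true in these modules; opt out per workload
    logs.agent.grafana.com/drop-debug: "false"
    # opt in to scrubbing the redundant timestamp from the log line
    logs.agent.grafana.com/scrub-timestamp: "true"
```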
B
Then
there
might
be
use
cases
when
you
want
to
do
masking
right.
Where
you
said
annotation,
we
look
for
an
SSN
or
a
credit
card
or
email
address.
Whatever
the
case
may
be,
we
can
mask
that
value
directly
within
the
logs.
If
you
don't
want
that
behavior
the
beauty
of
the
module-based
approaches,
don't
include
that
module
right.
B
So,
let's
take
a
look
at
what
I
mean
by
there,
so
in
my
particular
config
for
logs
I'm,
simply
referencing
the
publicly
available
agent
modules
repository
and
giving
it
a
path
I,
just
called
it.
All.River
naming
things
is
hard,
I'm
sure
we
go.
That's
one
thing
we
all
can
agree
on
so
for
all.river.
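That reference is a module.git component, roughly along these lines; the path and arguments are illustrative rather than copied from the demo:

```river
module.git "logs_all" {
  repository     = "https://github.com/grafana/agent-modules.git"
  revision       = "main" // pin a tag or SHA for reproducible deployments
  path           = "modules/kubernetes/logs/all.river"
  pull_frequency = "15m"  // how often the agent re-fetches the module

  arguments {
    forward_to = [loki.write.default.receiver] // assumed downstream component
  }
}
```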
B
This
is
what
that
config
does
has
different
arguments.
Then
I
mean
actually
just
including
another
module
to
say
get
the
targets
right.
This
one
is
logs
from
worker,
so
under
targets,
there's
is
actually
two.
This
one
here,
logs
from
API,
uses
the
kubernetes
API
directly.
So
when
you
are
deploying
an
agent,
the
logs
agent
should
be
a
Daemon
set.
If
you
are
getting
the
Pod
logs
from
the
worker,
if
you're
getting
the
Pod
blocks
from
the
API,
that
should
be
a
stateful
set.
A
I've got to check the docs on which one it was.
B
The
main
reason
that
you
would
consider
using
the
API
one-
you
know-
obviously
it's
less
pogs,
but
you
don't
have
to
have
a
privileged
container
or
privilege
access
in
that
case,
which
some
of
you
may
have
a
need
for.
It
is
obviously
going
to
use
more.
You
know
CPU
network
type
of
things,
to
process
all
those,
because
it's
making
HTTP
calls,
but
usually
you
know,
I'm
going
to
always
default
to
getting
log
directed
from
the
worker,
deploying
it
as
a
Daemon
set.
B
So
coming
back
to
the
all
that
I,
that's
how
I
get
my
targets
and
if
we
actually
treat
this
out
so
we
go
from
all
to
worker.
There's.
B
Right
so
and
here,
and
that
actually
includes
another
module,
so
every
bit
of
that
code
from
that
static,
config
was
actually
broken
up
into
multiple,
separate
modules
right
and
even
including
here
you're,
seeing
the
path
modules
kubernetes
related
Lanes
common.
Well,
if
we
look
at
this
tree,
I
have
logs
and
I
have
metrics
and
then
there's
re-labelings.
B
The
cool
thing
is
these
re-labelings
are
actually
shared
across
my
metrics
and
my
logs,
because
service
Discovery
is
the
same.
Doesn't
matter
so
now,
I'm
able
to
follow
or
create
more
reusable.
You
know
components
I'm
not
going
to
sit
here
and
walk
through
the
tree
for
each
one
of
those
things
if
you've
used
the
grafana
agent,
UI
or
I.
Guess
if
you
weren't
aware
there
is
a
UI
for
the
agents
and
just
put
mine
over
here,
so
pods
I'm,
just
going
to
jump
in
to
one
of
these.
B
I'm going to put the agent team on the spot, sorry. There's a super useful graph here that you can zoom in and out on and see the whole tree of everything, and currently, with modules, it doesn't draw the whole tree. I'm sure that's on the roadmap of things, but that would be awesome.
B
Or
modules
needs.
B
Yet,
but
no,
this
is
this
is
incredibly
useful,
even
if
your
testing
modules
just
to
see
how
everything
is
connected,
viewing
that
graph
getting
a
visualization
of
you
know.
What's
there
Etc
I
can
see,
even
though
there's
only
these
modules
on
here
I
know,
what's
going
on,
I
can
see
the
health
of
them.
B
Obviously
like
we
can
come
back
here
and
I
could
drill
in
to
you
know
each
one
of
these
things
see
what's
going
on
where
what
arguments
are
being
passed
around
or
what's
exported
all
that
extremely
useful,
but
the
coming
back
to
the
config,
you
know
doing
a
bunch
of
relabelings
applying
that
you're
determining
like
the
container
runtime.
So
this
allows
me
to
choose
how
I
want
to
parse
the
books
by
simply
determining
the
container
runtime
dynamically.
B
You
no
longer
have
to
say
it
has
to
be
Docker.
It
has
to
be
CRI
I
handle
that
for
you
in
a
module
simply
because
I'm
adding
a
relabeling
of
the
container
runtime
that
gets
set,
and
then
right
here
we
choose
if
it's
container
D
I'm
going
to
use
CRI.
If
it
is
Docker
I'm
going
to
use
docker,
you
don't
have
to
declare
that
anymore.
We
just
determine
it
using
a
match
stage
so
kind
of
useful.
There
saves
a
little
bit
of
time
determining
how
we
want
to
do
that.
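A minimal sketch of that match-stage pattern inside a loki.process component, assuming a tmp_container_runtime label was set during discovery relabeling and an assumed downstream writer:

```river
loki.process "parse" {
  forward_to = [loki.write.default.receiver] // assumed downstream

  // pick the parser based on the runtime label set during relabeling
  stage.match {
    selector = "{tmp_container_runtime=\"containerd\"}"
    stage.cri {}
  }

  stage.match {
    selector = "{tmp_container_runtime=\"docker\"}"
    stage.docker {}
  }

  // the temporary label has served its purpose; drop it
  stage.label_drop {
    values = ["tmp_container_runtime"]
  }
}
```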
B
So
after
we
got
the
get
the
targets,
then
we
have
all
these
other
modules
right,
there's
a
log
formats.
All
these
are
all
the
log
formats
that
you
know
we've
initially
added
support
for.
Are
they
going
to
be
100
accurate?
No,
because
every
language
you
could
change
whatever
format
you
want.
Etc
I
think
this
is
more
outside
of
you
know,
well-structured
ones
like
log
fmt
or
Json
as
an
example
or
problem
Kayla
could
be
thrown
in
there.
B
If
you
have
some
custom
format,
you
would
need
to
update
that
accordingly
to
to
handle
it,
but
I'm
gonna
just
pick
this
one
here,
it's
a
generic
Json,
all
we're
doing
is
saying:
did
you
set
the
annotation
of
vlogsaging
profano.com
log
format,
generic
Json
and
we're
matching
a
pattern
to
make
sure
that
it
is
actually
Json
object?
If
it
is,
then
we
there's
a
label
for
log
type
that
gets
set
and
we'll
talk
about
that.
You
can
drop
those.
B
If
you
want,
we
tried
to
dynamically
determine
the
label
or
the
level
I'm
Sorry
by
using
the
jme's
path
expression,
basically
just
putting
a
bunch
of
ores
in
here
off
of
things
that
I've
seen
from
different
use
cases
that
may
be
there.
So
we
set
the
level
checks
to
see
if
you're
setting
a
time
stamp,
and
if
you
are,
you
know
we're
going
to
essentially
drop
that
property
from
the
Json
object.
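Sketched out, the JMESPath OR-chain looks something like this; the real chain in the modules is longer, and the property names here are just common examples:

```river
stage.json {
  expressions = {
    // first level-ish property that exists wins
    level = "level || severity || loglevel || log_level",
  }
}

stage.labels {
  values = {
    level = "", // promote the extracted field to a label of the same name
  }
}
```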
B
And
then
this
is
around
you
know:
do
we
want
to
screw
up
the
level
or
not
and
for
the
other
scrubbing
I
would
say
or
not
other
scrubbings,
but
when
you
go
through
here,
you
know
this
defaults,
the
log
level.
So
it
ensures
that
it's
there's
always
one
set.
If
it
can't
determine
the
default
log
level,
then.
B
So
if
the
log
level
is
still
unknown,
we
look
to
see
if
it's
in
a
k,
log
format-
if
it
is
we
set
this.
Otherwise
this
is
the
regular
expression
for
a
best
effort
to
determine
the
log
level,
but
regardless
a
log
level
of
unknown
is
used.
The
reason
we
do
that,
then
you
can
write
a
log,
ql
selector
to
say
show
me
all
the
ones
that
are.
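That fallback makes the unclassified streams easy to find with a selector like this (namespace made up):

```logql
{namespace="checkout", level="unknown"}
```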
B
Then you can address those: if you need to update the formatting or the module, or add a new module to detect that particular type and the log level, you can do that. That covers all of our scrubbing, and then we run into this one here, which I'm going to come back to in just a second. At the very end, one of the last modules is label-keep, okay.
B
Pod is a very high-cardinality label, and I'm not a fan of using pod as a label. I take a different approach to determining what that is. Let me see if I can find this real quick.
B
So
in
my
relabelings
comment,
I'm
actually
creating
a
label
called
deployment
again
naming
things
is
hard
whatever,
but
the
deployment
label.
In
this
case,
if
my
pod
was
named,
you
know
grafana
agent.
This
is
obviously
from
a
Daemon
set.
It
would
be.
The
deployment
label
would
be
set
as
Daemon
set
forward,
slash
grafana
agent,
friendly
name.
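A rough sketch of building that label from the standard Kubernetes service-discovery metadata; the ReplicaSet-hash handling is illustrative, not the repository's exact rule:

```river
discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "pods" {
  targets = discovery.kubernetes.pods.targets

  // deployment = <controller kind>/<controller name>
  rule {
    source_labels = ["__meta_kubernetes_pod_controller_kind", "__meta_kubernetes_pod_controller_name"]
    separator     = "/"
    target_label  = "deployment"
  }

  // strip the pod-template hash that ReplicaSet controller names carry
  rule {
    source_labels = ["deployment"]
    regex         = "(ReplicaSet/.+)-[a-z0-9]+"
    target_label  = "deployment"
  }
}
```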
B
So why do we do that? This reduces the cardinality of a given label. If I show you here, from Grafana Cloud, I'm going to pick deployment: these are the different values that are set, or available, I should say, as part of the deployment label. It reduces cardinality, right? If you have a DaemonSet and you're adding worker nodes, whatever, you're constantly adding pods; or if you have a ReplicaSet, you're doing multiple deployments throughout a day.
B
That ends up being more efficient, right, because that's ultimately what people would be searching for to begin with: I want to find this message, or look at the aggregate of all of my pods to begin with. So, what is my error rate, or what is my bytes per second, whatever; you don't care about the pod, you're already eliminating it when you're performing that aggregation to begin with. So don't use it, right? Very useful. However, again, I understand there's the edge case of needing to find the pods, and...
A
There's a question in the chat that I want to direct you to. So, Jonathan said: given the recent updates to Loki for structured metadata, they recommend not using that many labels, due to the overhead of querying that many log streams, right?
B
Yes,
so
the
the
structured
metadata,
a
know
that
that's
there's
enhancements
being
made
and
work
being
done
around
there.
I,
don't
know
what
I'm
allowed
to
talk
about
or
not
on
there.
So
I
would
probably
defer
that
to
the
public
grafana
slack
and
the
Loki
team
in
there
to
get
more
details
on
what
they're
currently
working
on
what
may
be
coming
and
then
the
road
map,
but
and
and
hesitant
to
say,
because
I'm
I
don't
know
what
I
would
am
allowed
to
say
or
not
in
terms
of
what's
coming
there.
B
You'll hear the term "high cardinality" a lot, and in a relational database, if you're creating an index on a username field, high cardinality is great: it means you can then use selectivity, and selectivity is a way to...
B
In Loki, you don't want high cardinality, but you don't want low cardinality either; you want a balance. I think the easiest way to describe it would be: if you're considering adding a label, and all queries targeting those logs would use that label ninety percent of the time, then yes, add that label; if it would be ten percent of the time, do not add that label. That's kind of the easiest way to think about it. It's not that high cardinality in Loki is a bad thing.
B
Yeah, and Jonathan, I'm not necessarily familiar with what they've called out with the level, or what you should add in structured metadata, those types of things. Everybody has different use cases, and I would say, in my experience at least with customers, the LogQL queries that they're writing are almost always targeting a specific level. I've always recommended that, and they've gotten great performance by filtering on it. And so, again, what we're covering here is just an annotation-based approach by leveraging modules, and the cool thing about modules is that you can fork this repository, you can make whatever changes you want to it, you can create your own modules, or copy all of these and then edit them in your own private git repo.
B
The
main
thing
here
is
just
showing
the
power
of
modules
and
I
air
can
or
Matt
can
correct
me
because
they
looked
at
the
PRS,
but
I
think
there's
like
80
some
modules
that
were
added
in
here,
that
you
can
kind
of
pick
and
choose
from
in
terms
of
an
approach,
and
there
is
a
couple
examples
up
here
like
for
logs,
if
you
want
to
specify
your
own
log
formats
like
this,
is
just
showing,
but
the
config
now
flows,
so
to
speak
very
nice
by
saying
I'm
going
from
this
module
to
this
module,
this
module,
you
don't
necessarily
care
about
all
of
the
bits
or
the
details
that
are
happening.
B
You
know,
inside
of
there
right
and
as
part
of
what
we
do
with
this
particular
config
is
we
actually
take
all
of
the
Pod
labels
and
all
of
the
Pod
annotations
and
we
make
them
labels
in
the?
How
do
I
say
it?
We
make
them
eligible
to
be
labels
and
Loki
by
exposing
them
as
labels
to
the
promptail
pipeline
stages.
That's
why
the
very
last
stage
that
I
will
always
have
is
a
label
keep
that,
regardless
of
whatever
came
through
the
pipeline.
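That final gate looks something like this; the allow-list shown is illustrative, not the repository's exact list:

```river
stage.label_keep {
  values = [
    "app",
    "component",
    "namespace",
    "deployment",
    "level",
    "cluster",
    "job",
  ]
}
```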
B
These
are
the
only
labels
that
I'm
going
to
allow
to
ultimately
be
created,
no
matter
what
so
the
I
guess,
jumping
back
over
the
very
one
of
the
very
last
modules
that
I
would
or
that
that's
in
here
in
this
example,
I'm
just
doing
two
tenants,
but
there's
also
a
module
for
the
cubelet
journal,
so
the
system
Deluxe
in
this
case
you
can
basically
retrieve
I-
think
the
default
is
all
whatever
system
unit
files
are
there
return
those
logs,
but
you
can
filter
them.
B
So,
in
my
case
again,
these
would
fall
under.
B
Oh,
what
did
I
call
it?
Yeah?
Oh
yes,
Loki
Source,
Journal
cubelet,
so
this
is
obviously
just
like
a
play
cluster.
Whatever
these
are
the
different
system
unit
files
that
I
have
log
messages
for
kind
of
funny.
You
know
I
could
look
at
sshd
and
obviously
see
that
foreign
people
are
regularly
trying
to
connect
in
here.
For
that,
but
primarily
you
would
use
this
for,
like
you
know,
it's
beneficial
tying
it
with
your
kubernetes
logs
with
the
cubelet
service
or
container
d.
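A minimal sketch of a journal source scoped to the kubelet unit; the module wraps this with its own filtering arguments, and the downstream component is assumed:

```river
loki.source.journal "kubelet" {
  // only read entries from the kubelet systemd unit
  matches    = "_SYSTEMD_UNIT=kubelet.service"
  labels     = { job = "systemd-journal" }
  forward_to = [loki.write.default.receiver] // assumed downstream
}
```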
B
I don't know, I'm just going to pull up dashboards, whatever; I haven't done a lot lately, so I might have to go back a little bit. Yeah, a very powerful feature: you might be aware, but on this dashboard, and several of the others, there are annotations to show the events that are coming in, which means we need to collect the Kubernetes events as logs.
B
What
I
typically
do
for
me,
I
deploy
I
would
deploy,
at
least
in
this
example.
Three
types
of
Agents
I
would
deploy
a
metrics
agent,
I
would
deploy
a
logs
agent
and
then
I
would
deploy
a
separate
events
agent.
The
main
reason
that
I
do
events
separately
is
metrics
is
a
staple
set
logs
is
a
Daemon
set
and
I
would
do.
Events
as
a
stateful
set
I.
Think
in
a
lot
of
examples,
you
would
see
events
coming
or
coupled
with
metrics
as
the
stateful
sect
reason
I
like
to
do.
B
The
metrics
you
can
cluster
metrics
right
in
static
mode.
That
would
be,
you
know,
hash
mod,
sharding
things
like
that
in
in
flow.
There
is
a
clustering
component.
I
believe
that
feature
is
currently
marked
as
as
beta
the
modules
that
are
defined
in
that
repository
actually
do
support
clustering.
The
default
behavior
is
false,
but
that
that
does
work
but
again,
I
know
that
is
Mark
disk
beta
and
you
know,
could
be
subject
to
change.
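Opting in looks roughly like this on a scrape component; the referenced component names are assumptions, the agent itself also has to be started with clustering turned on, and the feature was beta at the time of this talk:

```river
prometheus.scrape "pods" {
  targets    = discovery.relabel.pods.output
  forward_to = [prometheus.remote_write.gem.receiver]

  clustering {
    enabled = true // distribute targets across the cluster of agents
  }
}
```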
B
So
that's
the
other
reason
why
I
would
deploy
events
separately
from
metrics
is.
If
you
started
clustering,
your
metrics
agents
and
you
have
three
agents
as
an
example.
If
events
was
on
there,
events
is
not
clusterable,
so
you're
going
to
end
up
with
3x.
The
number
of
events
in
that
case
so
I
like
to
keep
those
separately
and
flow,
does
allow
you
to
perform
rewrites
now
on
the
events
which
is
extremely
useful.
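For reference, a bare-bones events agent can be as small as this sketch (downstream component name assumed):

```river
loki.source.kubernetes_events "events" {
  forward_to = [loki.write.default.receiver] // assumed downstream
}
```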
B
So
you
know
from
an
events
standpoint-
there's
probably
not
too
many
in
here
now,
but
you
can
take
a
look,
so
we
can
see
I've
added
or
I
should
say
like
extracted
a
component
component.
If
we
looked
in
here
is
actually
coming
from
Source
components,
I
could
have
I
could
make
that
Source
component
I
was
just
already
using
a
label
of
components
by
looking
for
that
label
or
annotation
being
set
on
pods.
B
So
like
what
app.kubernetes.io
forward
slash
component
or
if
it
was
just
a
component,
there's
a
lot
of
those
type
of
things
in
the
the
modules
where
it
looks
for
multiples,
kubernetes,
IO,.
B
B
We
would
look
at
these
these
different
places,
so
either
app
kubernetes,
Iowa,
component
or
just
components,
and
because
of
how
labels
work
etc.
This
is
it
doesn't
matter
which
one
this
came
from,
because
it's
consumed
from
a
separate
module,
a
separate
scrape
job,
so
the
last
annotation
I'd
say
to
mention
real,
quick
as
well.
B
That
is
supported
so,
along
with
all
the
logs
in
the
metrics,
is
probe
based
annotations,
so
do
support
the
Prometheus
I
o
probe,
but
there
is
some
additional
ones
as
well,
so
we're
supporting
probes
agent
grafana.com
forward,
slash
Pro,
there's
annotations
as
well
I've
kind
of
found
that
different
dashboards
or
you
know,
rule
mix-ins
or
even
some
of
the
integration
of
our
product
Cloud
rely
on
a
very
specific
job
name
when
you're
doing
Auto
scraping
that
can
be
difficult
to
match.
B
So there's an annotation that allows the job name to be set; that's on there as well.
B
The really nice thing with Git is you can set the pull frequency of how often you want it to go do that, so you don't have to worry about deploying anything; the agents can just go pull it from your git repository directly. But anyway, I wanted to leave time for questions, feedback, or thoughts.
C
Is there, like, a prescribed easy path? Like, are all the defaults pretty sane? If I just wanted to run this on a k8s cluster and not really have to think too much, is it kind of set up for that, or what's the easy way to get started here?
B
Yeah,
so
basically
for
each
one,
my
metrics
and
logs
in
this
case
they're
just
an
all
file,
and
that
includes
everything
with
all
the
defaults
set.
You
know
there
is
a
few
arguments
that
I've
seen
that
are
common,
like
you
know,
like
label
for
cluster
environment
region,
that
you
might
want
to
add.
B
Essentially, there's not a way, at least that I could determine, or get to work, to make that like a dictionary, but I just add those types of things as external labels through the write component. But mainly, yeah, there are sane defaults for all of those. Even like the file name: Promtail always creates a label for the file name that was tailed, and that can be very, very unique. And again, the goal here is to reduce the cardinality.
B
So
if
this
is
your,
you
know
file
name,
you
know
this
is
namespace
and
then
a
pod
name
container
ID
container
a
number
of
rotations.
We
go
ahead
and
just
rename
that
file.
So
if
you
did
decide
to
keep
the
file
name
label,
we're
removing
the
uniqueness
just
setting
it
up
for
better
performance
out
of
the
the
beginning,
but
yeah
everything
is
kind
of
Same
by
default.
B
The
only
thing
I
guess
you
could
maybe
consider
not
saying
is
the
default
for
dropping
debug
and
Trace
level.
Messages
is
true,
but
you
can
easily
add
The
annotation
to
keep
those
if
you
wanted,
but
yeah
all
the
other
behavior
that
we've
talked
about
are
like
scrubbing
things,
removing
them
from
there
is.
B
You
have
to
opt
in
to
that
behavior,
so
yeah.
B
And for metrics, all of the jobs for metrics do line up with the Grafana Cloud integrations, so you can see, on a lot of these, whether it's this kube-apiserver one or whatever, we're setting that exact name.
B
However,
if
you
didn't
want
to
use
the
Groupon
Cloud
Integrations
for
that,
then
you,
you
know
you're
using
a
different.
You
know,
mix
in
or
project
dashboard,
whatever
you
can
change
the
name
of
that
job
label
again
for
those.
So
all
these
try
to
have
match
exactly
what
is
coming
from
grafana
cloud
and
going
from
there
metrics
wise.
The
only
thing
I
would
say
to
call
out
is
there:
is
these
scrape
jobs
for
Auto,
scraping
of
endpoints
and
auto
scraping
of
pods,
but
you'll
notice
in
here?
B
You can debate this if you want, but my opinion, or the kind of default that I've always gone with, is to use endpoints for scraping of the metrics, and this is due to, I mean, if you just read the documentation.
B
This
last
two
blocks
here:
if
the
endpoint
belongs
to
a
service,
the
role
of
cert,
all
of
the
labels
for
service
are
attached.
If
it's
backed
by
a
pod,
all
the
labels
from
the
Pod
are
also
attached,
so
endpoint
for
the
most
part,
gets
you
the
best
of
both
worlds.
This
way,
I
can
also
add
the
name
of
the
service
automatically
as
a
label
for
the
metrics.
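In Flow terms, that choice is just the role on the discovery component:

```river
discovery.kubernetes "endpoints" {
  // the endpoints role inherits labels from both the service and the backing pod
  role = "endpoints"
}
```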
B
And if you have both of those set, and you were auto-scraping both endpoints and pods, you would end up with redundant metrics, and they would truly be two different series as well, because endpoints would have an additional label for the service that it came from and pod would not. So, good question.
D
My question for you is mainly: are there features you wish Flow had to make it easier to lay out these modules, to make them more configurable? Or, like, are there any pain points you particularly had?
B
A
couple
I
guess
you
could
say
most
of
it
was
just
simple
things:
I
would
say
like
with
the
river
language,
if
I'm
defining
a
multi-line
array
and
I
well,
you
should
yes
always
have
a
trailing
comma
on
the
last
one.
You
know,
I
have
a
JavaScript
background,
makes
sense.
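For anyone who hasn't hit it, the rule looks like this in a contrived snippet:

```river
// multi-line arrays require a comma after the final element
forward_to = [
  loki.process.parse.receiver,
  loki.write.default.receiver, // <- mandatory trailing comma
]
```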
B
River
fails
if
there's
not
a
trailing
comma
on
last
entity
in
Array,
if
it's
multi-line,
that
was
kind
of
silly
and
I
would
miss
that
regularly
the
other
one
probably
I,
don't
know
what
most
people
would
think
about
it.
For
me,
early
on
in
my
Development
Career
I
did
a
ton
of
cold
fusion
development,
and
when
you
have
your
functions,
whatever
you
would
reference
arguments
that
came
in
arguments
is
plural
in
the
confusion
language,
because
it's
a
dictionary
of
multiple
values.
B
I
can't
tell
you
the
amount
of
times
that
I
typed
arguments
dot,
something
instead
of
argument,
dot,
something
because
I
again
argument
arguments
is
a
dictionary,
and
so
that
was
just
a
simple
is
a
simple
thing,
but
that
was
one
just
a
annoyance,
I
guess
I
would
say
of
constant,
like
mistyping,
those
being
able
to
visualize
the
changes
or
the
entire
modules.
When
you
have
this
many
modules
that
are
coupled
together
and
that
graph
would
would
be
super
super,
useful
and
and
I
think
it
kind
of
leads
to
the
next
thing.
B
I
guess
that
we
want
to
talk
about.
If
there
was
time
Eric,
which
was
proposals
seeing
more
in
the
the
UI
is
if
you're
developing,
like
even
your
config
or
developing
modules,
is
incredibly
useful
to
see
what
is
coming
from,
where
or
even
like
what
labels
exist,
that
there
was
one
time
I
want
to
say
that.
B
With
end
points,
there's
here
it's
plural
end
points,
but
when
you
look
for
some
of
these
other
ones,
it's
singular
and
I
had
I
can't
remember
which
one
of
these
I
was
using
or
whatever,
but
I
had
typed
it
as
a
singular
and
I.
It
was
driving
me
crazy.
Why
I
couldn't
figure
out
why
this
you
know
bit
wasn't
working,
but
then
from
the
the
UI
I
was
able
to
start
going
into
like
the
discovery
stages
and
actually
see
what
was
discovered.
B
That's
incredibly
useful,
but
I
just
have
a
you
know,
a
tiny
little
testing
cluster
that
I'm
playing
with
and
being
able
to
filter
like
not
render
this
whole
list
and
being
able
to
filter
on
what
it
exports.
When
it's
this
gigantic
list
of
things,
you
know
I
mean
if
you're
in
a
large
kubernetes
cluster
I'm
sure
you
you
all
you've
seen
it.
If
you've
used
this,
it
can
take
forever
for,
like
this
page.
B
Just
just
hangs,
you
know
from
mainly
from
browser
rendering,
so
if
there's
a
way
that
I
can
filter
this
or
like
search
this
without
having
that
whole
payload
come
back
would
be
great.
The
Prometheus
like
scraping
components,
are
awesome
because
you,
you
actually
get
a
little
bit
more
from
there.
That
I
found
very
useful.
B
I,
don't
know
whatever
for
one
of
these
being
able
to
go
in
to
like
a
scrape
and
actually
see
it's
on
here
on
this
Prometheus.
You
know
cubelet
one
when
we
view
this
to
see.
B
Yeah,
oh
that's
why
it's
because
the
tenant
that
I'm
on
in
that
case,
because
it's
doing
multiple
tenants.
So
where
is
that
cubelet
here
and
then
the
script
so
being
able
to
see
not
only
like
yes,
this
is
the
arguments
that
came
in
from
relabelings
right.
You
can
see
the
other,
whatever
the
labels
were
added,
and
maybe
it's
better
to
look
at
this
for
like
pods,
but
ultimately
seeing
the
targets
again
is
very
useful
to
know.
B
Okay,
these
are
the
labels
that
were
added
as
part
of
like
service
discovery,
but
it
might
be
better.
You
know
to
see
somehow
you
know
if
there's
like
the
random
sample
or
whatever
metrics
or
to
be
able
to
put
sometimes
I
want
to
be
able
to
put
like
a
trap
in
of
what
metrics
are
being.
You
know,
collected
somehow
or
just
getting
more
insights
in
there
from
the
metrics,
but
for
Loki
there
there
wasn't
those
insights.
B
Like
when
you
you
start
getting
into
those
particular
components
there,
there's
not
a
lot
of
insights
there
for
Loki
Loki
does
have
an
echo
that
just
writes
to
standard
out,
but
if
I
wanted
to
hook
that
into
like
an
existing
flow,
there
would
be
an
existing.
B
You
know:
inner
production
application,
whatever
Loki
Echo,
is
not
going
to
do
me
much
good,
because
it's
literally
going
to
Echo,
like
everything
and
so
I,
put
a
like
a
proposed
lot
there
to
add
a
selector
argument
to
Loki
that
I
could
selectively
filter
what
was
getting
sent
out,
but
also
I,
think
from
the
UI.
If
you
were
able
to
do
something,
you
know
within
like
a
a
loki.echo
as
an
example,
or
even
the
Loki
dot
process
that
was
using
like
websockets
and
Publishing
those
messages
to
the
websocket
as
well.
B
I
I
think
that
would
be
very
beneficial
because
when
you
start
doing
a
bunch
of
the
low-key
dot
processes
or
enhancing
the
log
line,
removing
things
from
the
log
line
whatever
it,
it
can
just
be
very
useful
to
get
those
insights
and
I.
Don't
think.
I
showed
this
earlier,
I
started
going
out
and
we
got
distracted
so
I'm
just
going
to
show
this.
B
Where
I
have
my
deployment
label
here,
but
I
am
embedding
the
name
of
the
Pod
to
the
end
of
the
log
line
so
yeah
that
that
could
ultimately
be
structured,
metadata,
I,
don't
know
from
the
the
structured
metadata
where
all
that
stands
or
like
if
it's
retained
or
search
or
whatever
that
may
be,
but
right
now,
I
just
throw
those
to
the
end
of
the
log
line
and
if
I
truly
do
need
to
filter
on
a
pod.
B
...I can just throw this in there, and I am going to get those exact logs from a line filter. If it was a JSON object, I would add a property of __pod, and that's the behavior of those modules. Obviously, as newer versions of Loki are released, or new features are released...
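So the pod edge case becomes a line filter instead of a label; illustratively, with a made-up pod name:

```logql
# keep the low-cardinality deployment label, line-filter for the one pod
{deployment="daemonset/grafana-agent"} |= "grafana-agent-7tkvq"
```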
B
Those
approaches
may
change,
and
the
great
thing
is
we'll
just
change
the
modules
you
know
and
I
would
encourage
you
if
you're
using
the
public
module
repository
to
include
a
revision
and
what
you're
referencing
that
way,
you
do
have
a
consistent
type
of
deployment
or
release,
at
least
that
you
could
reference
in
case
the
module
gets
changed,
but
yeah.
A
Well, we are a couple of minutes over now, so I'm going to give a last call if there's anything else you want to say. But this has been really amazing, I think, even for the agent team to see, and for everybody to see how you're using these things in the agent-modules repo in practice. There are some other modules in there too. Oh, a short pitch for that: we like to think of that repo a little bit as examples of how to use Flow modules. So if you're just looking for some example River stuff, that's a pretty good place to poke around, just to get a feel for it. But yeah, Aaron, anything else you want to hit on before we call it for today?
B
Nope,
if
anything
comes
up
a
question
about
those,
obviously
you
know
reach
out
on
the
grafona
community.
Slack
and
you'll
be
happy
to
to
jump
in
and
or
if
there's
an
issue
you
know
feel
free
to
open
an
issue
there
or
even
better
contribute
your
own
modules.
A
Yeah, send us a PR. All right. Okay, I'm going to pause the recording here. Thanks, everybody, thanks.