From YouTube: Grafana Community Call 2022-07-21
Description
Join our next Grafana community call: https://docs.google.com/document/d/1GpgvanMeNqf-CDegv6E0yAV1s2f7H2knUFuNouqwX3A/
Learn more at https://grafana.com and if all of this looks like fun, feel invited to see if there’s a role that fits you at https://grafana.com/about/careers/
A

So today we are talking about the Observability Logs and Traces squad and what we worked on. Worth noting: these are the people involved in the squad. We generally work on anything logs- and traces-related, like data sources such as Tempo or Loki, and also on visualizations, like the logs visualization, the traces visualization, or anything similar.
B

So now we are in builder mode, and when we talked to you, to Loki users and our community, we heard that usually, when you start to write a query, you start with selecting your log stream, which means specifying labels and values. So that's what we put at the top, so you can start from the top and then move to the operations section to add operations. You can either click on this Operations button or, my favorite part, use query patterns, which are kind of like kickstarters for your query. So you can browse through templates and, if there is something that works for you, you click on it and multiple operations are added for you, based on the purpose. So in this case I wanted to do a log query with parsing and filtering, and it added these four operations for me.
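A pattern like that might expand to something along these lines; a hypothetical sketch, since the exact expression isn't shown here (the `job="app"` selector, the `logfmt` parser, and the filter values are assumptions):

```logql
# Stream selector, a parser stage, a label filter, and a pipeline-error filter.
{job="app"} | logfmt | level=`error` | __error__=``
```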
B

I can then go through the operations and modify them if needed, or change the values if that's not what I'm looking for. So that's what I'm doing here: I decided to filter for the level label that has the error value, and then you can run the query. If you decide to add more operations, you can add as many as you want; you just continue by clicking on the Operations button.
B

You can choose from operations that are for metrics queries or continue with operations for log queries. So here I'm adding rate and sum and again, if I wanted to, I could go and modify the queries right in these operations. So this one is the builder section, but we have also added the explain section.
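Adding rate and sum on top of a log pipeline turns it into a metrics query roughly like this (the selector, parser, and range here are assumptions, not taken from the demo):

```logql
sum(rate({job="app"} | logfmt | level=`error` [5m]))
```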
The explain section is very useful for the cases when you would like to learn more about your query, or maybe your teammate wrote a query and you are not sure what exactly it does. So here, operation by operation, we add documentation of what each specific operation does. And we kept the code mode, so if you are a pro Loki user and are fluent in LogQL, you can write your queries here.
Another very nice thing is that, as part of this project, we added (or rather created) a parser, which means that you can switch between these three modes without losing the query. We basically just store the query as a string and then visualize it in the different forms.
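The round-trip idea, keeping the string as the source of truth and re-deriving the builder state from it on demand, can be sketched very roughly as follows. This is an illustrative toy, not Grafana's actual parser, and it ignores pipes inside regex matchers and quoted strings:

```typescript
// Illustrative toy of the "store the query as a string" round trip;
// NOT Grafana's real parser implementation.
interface VisualQuery {
  labels: string;       // the stream selector, e.g. {job="app"}
  operations: string[]; // the pipeline stages shown in the builder
}

// Derive the builder state from the stored query string.
function parseToBuilder(expr: string): VisualQuery {
  const [selector, ...ops] = expr.split('|').map((s) => s.trim());
  return { labels: selector, operations: ops };
}

// Render the builder state back to the canonical string form.
function renderToString(q: VisualQuery): string {
  return [q.labels, ...q.operations].join(' | ');
}
```

Because both the builder and the code mode read from and write back to the same string, switching modes cannot lose edits.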
B

It happened to me just last time. So one of my favorite things is that we finally have a unified query editor for Explore and for dashboards: one editor that is used for both of these use cases. We have added query patterns, and that's also something we would like to improve on, along with the explain section, and we have also published the lezer-logql package.
B

So if you are a Loki user and you are maybe building your own query builder, you can use it there as well; it's available for our community. So that's the query builder, which is part of Grafana 9, and here is a little preview of what we are currently working on right now.
B

So query hints, which we are going to see, work in a way where we run very small, very fast queries while you are writing your query, analyze the data (in this case logs) and, based on that, create suggestions.
B

So, for example, in this case we learned that the logs are in JSON format, so we can offer the JSON parser. We also got information from the logs that there are some pipeline errors, so we were able to offer this hint, which is basically applicable to the log lines that you would get. And the last slide that I have is related to which query hints we support and, maybe if some plugin developers are here or are watching this, a kind of intro to what we use and how we created these hints.
B

So we have hints for parsers and for pipeline errors, as we could see, and also for renaming of level-like labels. If your level-like label has the name lvl or error_level, this is problematic for log volumes, because your graph suddenly doesn't have these colors and doesn't show the differences; it cannot see what the levels are. So in this case we offer a hint that renames your level-like label to level, and that way you have nice colors in your log volumes.
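The rename that such a hint applies boils down to a `label_format` stage, roughly like this (`lvl` is just one example of a level-like label name):

```logql
{job="app"} | label_format level=lvl
```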
B

So how did we create it? We rely on two data source methods that are part of the data source API. The first one is getQueryHints, where we supply the query and samples (it can be the results or sample data) and produce query hints. Then, when a user clicks on a query hint, we use the modifyQuery method, which takes this query.
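The two methods named above are part of Grafana's data source API (`getQueryHints` and `modifyQuery`). The sketch below uses simplified local types instead of the real `@grafana/data` interfaces, and the hint and action shapes are illustrative assumptions:

```typescript
// Simplified stand-ins for the data source API types (shapes are assumptions).
interface QueryHint {
  type: string;
  label: string;
  fix?: { action: { type: string; query: string } };
}

interface LokiQuery {
  expr: string;
}

// getQueryHints: inspect sample log lines and suggest operations.
function getQueryHints(query: LokiQuery, sampleLines: string[]): QueryHint[] {
  const hints: QueryHint[] = [];
  const looksLikeJson = sampleLines.some((l) => {
    try { JSON.parse(l); return true; } catch { return false; }
  });
  if (looksLikeJson && !query.expr.includes('| json')) {
    hints.push({
      type: 'ADD_JSON_PARSER',
      label: 'Selected log stream has JSON formatted logs.',
      fix: { action: { type: 'ADD_JSON_PARSER', query: query.expr } },
    });
  }
  return hints;
}

// modifyQuery: apply the clicked hint's fix action to the query.
function modifyQuery(query: LokiQuery, action: { type: string }): LokiQuery {
  switch (action.type) {
    case 'ADD_JSON_PARSER':
      return { expr: `${query.expr} | json` };
    default:
      return query;
  }
}
```

The real implementation supports more hint types (parsers, pipeline errors, level-label renames), but they all follow this detect-then-rewrite pattern.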
C

So, first of all, the traces panel, available from Grafana 9.0, allows you to view your traces in a dashboard. The reason we wanted to do this is that Explore is very handy for getting an idea of what your data is doing, but of course it's not saved; you have to keep the tab open. If you want easy access to your queries over time, and indeed to use template variables, you'll want to use them in a dashboard.
C

Something else that I've been working on is the APM table. The whole idea behind this application performance management table is to let you get APM data out of the box, without having to do extra setup or, you know, take time out of your busy day to get it working. Essentially, this is provided more or less for free through the Tempo metrics generator, which scans your incoming traces and generates metrics from them, which it then stores in Prometheus through remote write. What does this allow you to do?
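The metrics-generator setup described above is configured on the Tempo side; a minimal sketch, assuming a Prometheus remote-write endpoint at `prometheus:9090` (the path and URL are placeholders):

```yaml
# Sketch of a Tempo config enabling the metrics generator (values are placeholders).
metrics_generator:
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
overrides:
  # Enable the span-metrics and service-graphs processors.
  metrics_generator_processors: [span-metrics, service-graphs]
```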
C

So the table is a top five, of course; we wanted the table to be as easy to use and as small as possible, and that's why we went with only five results. But in order to make it easy to use, we added links as well.
C

So your rate column, and I'll show you in a moment in the video, your rate, error rate and duration columns all have links that will take you directly to Prometheus, with the query filled in for you, and with exemplars turned on in the case of the duration metric. As well, we've added a link to Tempo to take you straight from the table, so that you can search for the particular service name, and of course you can filter results as well.
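For a sense of what those links open, the rate and error-rate columns point at queries over the generator's span metrics, something like the following (metric and label names assumed from Tempo's span-metrics defaults, not shown in the video):

```promql
sum(rate(traces_spanmetrics_calls_total{service="checkout"}[5m]))
sum(rate(traces_spanmetrics_calls_total{service="checkout", status_code="STATUS_CODE_ERROR"}[5m]))
```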
C

So if you want to search for your server service, and other metrics as well, you can do that, and it will filter the table and the service graph for you.
C

You can hit the Tempo link, which will open up Tempo for you, and you can also add filters. Once the query is run after the filter is entered, you'll see that it updates the service graph for you. Remove the filter and run the query, and the table is updated again and shows everything, because there's no filter.
D

Hey, another feature that we've been working on, on the tracing side of things, is trace to metrics. We currently support linking between various signals (metrics, logs and traces) through features like trace to logs, logs to traces, and exemplars, and kind of the missing link was to be able to jump from traces to metrics.
D

So whereas an exemplar, for example, would take you from an aggregated metric to a trace, which is a very specific instance that contributed to that metric, trace to metrics does the opposite. You're looking at a trace, a very specific example, and you see that a span is 100 milliseconds, which seems like maybe a lot longer than normal, but you don't have a clear picture because you're looking at a single data point. With trace to metrics, you could link to a p90 latency based on the span name.
D

For example, you could click on the link and then see that aggregated data, to see whether that span duration is anomalous or normal. So it just gives you a different view into your data. This will live inside the trace view, alongside some of the other links that we have, like trace to logs, as well as span references. If we go to the next slide, we'll see what the configuration looks like for this.
D

Similarly to trace to logs, you're going to select a data source that you're linking to, and then you're going to configure tags. These tags are a mapping between the span attribute names and the metric label names that you want to filter your metrics based on, because there's some incompatibility between how Tempo accepts attributes versus, for example, Prometheus label naming conventions.
D

You have the option to map between them. For example, if you look at the first tag, the span attribute name is k8s.pod. That has a period in it, which would be an invalid Prometheus label, so you map it to pod, which would be the label on your metric. It gives you a little bit of flexibility there. After you've configured your tags.
D

You then create the metrics that you want to have included in the links. So you have a descriptive label that tells you what metric you're linking to, and then you write out the query with a special keyword in there, `$__tags`, which will interpolate the values from the tags that you selected into the query at runtime. So it will dynamically look up the values on the span and insert them into the metrics query, so that you are looking at the data that is correlated to that span.
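Put together, a linked metric entry could look like the following; a hypothetical example where `$__tags` expands at query time into label matchers built from the configured mappings (for instance `pod="checkout-abc"`), and the metric name is an assumption:

```promql
histogram_quantile(0.9, sum(rate(traces_spanmetrics_latency_bucket{$__tags}[5m])) by (le))
```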
D

So, for example, you click on one of the spans and it will look up k8s.pod and, you know, put the pod name into your metrics label, so that you're seeing the latency or the error rate for exactly that pod, or cluster, or whatever tags you have configured. This will be coming out in 9.1. As opposed to trace to logs, there are a lot of edge cases in how you're going to be writing your queries, so we're excited to get feedback and see the different scenarios that our users are going to, you know, want to use this in. We'll likely be making changes in coming releases as we get that feedback, so that it can be a bit more flexible as to the way you write your queries, select tags, and maybe map between different data sources.
A

There are two important changes that happened: one is the Loki data format change, and also, with the release, some older versions of Elasticsearch are not supported anymore in the Elasticsearch data source. Regarding the Loki data format change, the problem was efficiency. Generally, as data flows through Grafana, we package it into containers called data frames, and usually you expect, when you run a query, to get back one or two data frames. But with Loki it sometimes happened that every log line became its own data frame, which was very inefficient.
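To make the before/after concrete, here is a toy sketch of the shape change using a simplified frame type (not the real `@grafana/data` DataFrame interface): previously each log line could arrive as its own single-row frame, while the new format packs all lines into one frame.

```typescript
// Toy model of a data frame (the real interface has more structure).
interface Frame {
  fields: { name: string; values: unknown[] }[];
}

// Old behavior: every log line could become its own single-row frame.
const before: Frame[] = [
  { fields: [{ name: 'Time', values: [1658390000000] }, { name: 'Line', values: ['msg=a'] }] },
  { fields: [{ name: 'Time', values: [1658390001000] }, { name: 'Line', values: ['msg=b'] }] },
];

// New behavior: one frame whose columns hold all log lines.
const after: Frame[] = [
  {
    fields: [
      { name: 'Time', values: [1658390000000, 1658390001000] },
      { name: 'Line', values: ['msg=a', 'msg=b'] },
    ],
  },
];
```

Consumers that iterated over frames (such as a table panel) see one wide frame instead of many tiny ones, which is where the dashboard-side adjustments come from.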
A

So we changed this to a better format in general. As long as you look at your logs using the logs visualization, for example in Explore, nothing really changes; everything works as before. But if you load this data into, for example, a table panel in a dashboard, there will be changes, and adjustments have to be made. The changelog contains more information about this.
A
The
other
change
is
the
elasticsearch
support
for
old
versions,
which
generally
we
supported
very
old
elasticsearch
database
versions,
and
this
was
not
manageable
anymore,
so
we
decided
to
only
support
the
newer
versions.
We
looked
around
and
turned
sound
that
elastic
the
company
already
maintains
a
list
of
which
database
versions
they
support
and
which
they
do
not,
and
we
simply
use
that
list
at
this
point
it
should
be
the
oldest
version.
We
support.
E

Yeah, thank you. Now I want to talk a bit about what's next. So we have two major topics on the next slide. Basically, we have those two spaces, logs and traces, and in the logs space we will do some more improvements to logs in Explore; basically all those improvements come from UX research.
E

So, for example, we will move the download button, to download your logs from Explore, one layer up, so you will find this button more easily. Then, second thing: the hints in the Loki query builder were already shown earlier, but we are obviously looking into implementing more hints, and if you have ideas, the next slide will be interesting for that; but we'll come to that in a bit. Then, in our logs space, we have two external data sources, for Splunk and for OpenSearch, and yeah.
E

Regarding the traces space, we will focus mainly on Tempo, and especially there on the trace view, but also on service graph improvements. Also in traces we have an external data source, Sentry; Sentry recently made some changes and additions to their own APIs, and we will follow up with those API changes in our external data source. Yeah, then, the next slide goes to actions.
E

Basically, if you have any ideas or feature requests, feel free to click the link on the left, which will bring you to the GitHub page of the Grafana repository, specifically the Discussions section there. Feel free to open up any issue or any discussion item, so we can follow up on feature requests there.