From YouTube: Grafana Tempo Community Call 2023-06-08
Description
Join our next Tempo community call: https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY/edit#heading=h.3x2mcvpczj56
What was discussed:
- TraceQL features
- Streaming results API
- Upcoming dynamic metrics API
A: Okay, cool, all right. Welcome, everyone, to the June Tempo community call. There's definitely a lot of cool stuff going on. We've shipped a ton of new features since the last community call, so we'll walk through some of those; the TraceQL features especially, there are a lot of them and they're really cool. I think we'll also recap some of the other work we had in progress this quarter, talk about some updates there, and then cover the Grafana side too.
A: I'll go ahead and share this agenda doc here. We have a couple of those things to talk about that I mentioned, but we can always talk about anything, so if you have any questions or feedback about Tempo, we're more than happy to get into it. I'll go ahead and share my screen.
A: I think that's a good way to talk about these TraceQL features, since they're text. These are all new since the last community call. One of them is select, and I can actually click the example. Select is something we've been toying around with in the language for a while. It lets you bring back attributes to display in the results table that you don't necessarily want to filter on; you just want to bring back whatever is there. There was a workaround for this before, where you could put a condition on the filter that didn't really do anything, like an integer greater than zero, but select turned out really well, and I'm happy with it.
A: We all like the language; I think it's easy to use. But select should actually be a little more performant than the original workaround, because the way we implemented it, it's only evaluated on the matches, whereas the workaround would have been pushed all the way down and evaluated across everything.
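A sketch of the difference, with placeholder attribute names (not from the call):

```traceql
{ resource.service.name = "api" && span.http.status_code > 0 }

{ resource.service.name = "api" } | select(span.http.status_code, span.http.url)
```

The first query is the old workaround: a condition that always matches, added only to force the attribute into the results. The second filters on one attribute and uses select to bring the others back for display only.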
So that's in there. Next up: by and coalesce.
A: I don't think there's a good screenshot in here, but by and coalesce are a way to group your spans while you're doing a query. You could do by namespace, by region, or by URL, and then evaluate conditions on just those subsets of groups. That's useful for things like the number of calls per database, or the average duration per API call, within a trace.
A: There's also some upcoming front-end work to support this. If you do by namespace and you're calculating something like the average duration per namespace, we'll actually show that value on the trace result when you're looking at it. I don't have a good screenshot, but you'll see your trace, and then each group, one per distinct value that was in that by, will come back with the value that was computed.
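The grouping just described can be sketched roughly like this; the attribute name is a placeholder and the exact pipeline syntax may differ from what ships:

```traceql
{ span.http.url != "" } | by(span.http.url) | avg(duration) > 500ms
```

This would group matching spans by URL and keep only groups whose average duration exceeds 500ms.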
A: This one was really cool, and it was a community contribution. We finally have the ability to invert a regex match, which was super cool, and seeing that come from the community was great. I'm really happy to see the involvement and the usage of TraceQL like that. And then trace-level intrinsics, yeah, definitely.
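For reference, a regex match and its inversion look like this (the attribute and pattern are placeholders):

```traceql
{ span.http.url =~ "/api/.*" }
{ span.http.url !~ "/api/.*" }
```

The first keeps spans whose URL matches the pattern; the new !~ operator keeps spans whose URL does not.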
A: These are also really cool. In the parquet schema we actually had a lot of columns that we weren't exposing to TraceQL yet. They were there, and we had them in the old search, so this was an area where we took a bit of a step back with TraceQL, but we always wanted to get back to it.
A: Things like trace duration, root service name, and root span name. These are trace-level properties. Trace duration is the earliest timestamp to the latest timestamp, and it doesn't require a single span that crosses the whole range, so that's actually really useful. Root service name and root span name come from the top-level root span, if it exists.
A: These are really useful because they're also really fast. They're really small columns, just one value per trace, so if they apply to the query you want to run, using them can really speed things up. They just look like these new intrinsics here.
B: Yeah, I actually have a question. I didn't look into how that behaves when there's no root span, or there are multiple. What does it return?
A: You know, that's a good question. Trace duration doesn't have that issue, because it's calculated from the earliest start to the latest stop, so that one always has a value. Root service name is probably either an empty string or nil if there's no root span, but we could look into that. I don't know, that's a good question, actually. I wish there was an example query here, but you would do something like:
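Something along these lines, using the new trace-level intrinsics; the values are placeholders:

```traceql
{ traceDuration > 5s && rootServiceName = "frontend" }
{ rootName = "HTTP GET /api" }
```

Because these intrinsics are single-value-per-trace columns, queries like this can be evaluated without scanning span-level data.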
A: Yeah, I think this is cool because this was all done in the past month or so, which is a pretty fast pace. Another change in the upcoming Tempo release: vParquet2 has been out there for a while, so we're going to go ahead and make it the default. I don't think there are any concerns; there's nothing really to note. It does have some small schema changes that could improve, or maybe negatively affect, compatibility with other parquet tools. We're trying to improve compatibility there.
A: This streaming one I think we actually mentioned last call, but I wanted to bring it up again just to recap. Where is that PR, actually? Yeah, that PR has a really cool video in it. That's what we want.
B: All right, so this is the prerequisite for the new streaming API being integrated into Grafana, and here is a quick demo of what it would look like.
B
So,
as
per
usual,
if
you
just
click
on
bigquery-
and
there
is
this
new
param
stream
response
and
as
you'll
see
you
you'll
get
results
right
away
and
as
it
starts
getting
more,
it
will
just
get
streams
over
and
over
until
you
reach
that
whatever
configured
limits
you
have
for
that
query.
A: Yeah, this feature is really cool, because sometimes there are plenty of results already, but they don't show up since we keep going until we hit the limit. It's especially nice for those needle-in-a-haystack queries that don't find a lot of results. There are even some more UI improvements coming for that.
A: We won't go through the metrics API again here; if you want to see a video of that, look at last month's recording. But it's a way to calculate metrics over your traces. You can use any TraceQL query and any attribute, resource-level or span-level, and so it's for cases where the cardinality of that attribute is too high for a traditional metric. I put a couple of examples in here, like IP address or user ID.
A: It would be great for that. The way this will work is that it's part of the metrics generator. You know, Zach, we actually need to put a design doc or something up in here; that would really help explain this. It's going to be a processor in the metrics generator, plus a new API that the generator exposes.
A
The
those
that
processor
will
have
its
own
copy
of
the
data,
and
so
part
of
that
is
we're
also
planning
to
switch
generators
over
from
stateless
to
stateful,
because
they'll
have
a
copy
of
the
blocks,
and
if
you
were
to
lose
that
pod,
it
would
lose
that
data.
That
data
is
in
a
different
format
than
the
normal
block
data.
And
so,
if
we
lose
that
you
would
that
data
would
be
gone.
So
if
they're
stateful,
then
it
would
be
just
like
an
adjuster
so
we'll
we
could.
A
You
know
kind
of
expect
those
changes
to
be
like
coming
along
with
this
feature.
Fausto,
which
which
feature
would
be
your
favorite.
D: Yeah, calculating the high-cardinality data on the fly. I have so many use cases for that, all the time. I built a small in-house tool to do this already, but it's not that nice, because it's not that well integrated. I would love to have this within Tempo.
A: Yeah, that's great. The kinds of metrics it would generate are real simple to start; we're just doing the standard ones, so I think it's p50, p90, p95, p99, errors, and total span counts. Does that cover what you look at? I'd be interested to hear what other kinds of things your in-house solution does right now.
D
The
most
important
is
the
count
of
the
counts
like,
for
example,
sometimes
I
see
in
like
I,
see
like
let's
say:
I
have
a
meta
generator
that
have
a
diametrics
for
all
HTTP
requests
that
we
get
and
then
I
see.
Today.
I
saw
a
huge
Spike
I
said:
oh,
what
is
going
on
I
work
for
RTL
news,
so
it
means
RTL
along
or
published
news.
D
So
it
looks
like
we
have
some
breaking
news
right
and
I
wanted
to
see,
which
URL
is
this
one
specifically
we're
discussing
all
of
this
right
or
if
it's
maybe
an
attack
or
something
like
what?
What
is
this
right?
D
We
have
other
tools
to
have
this,
but
I
would
like
to
use
our
tracing
tool
because
we
have
all
data
there,
also
yeah,
so
with
the
account
then
I
I
saw,
which
article
that
were
having
this
spies
and
I
think
that
was
yeah.
I
would
like
to
have
this
little
yeah.
A
Yeah
definitely
so
this
would
just
be
an
API
on
the
tempo
back
end.
The
grafana
front,
end
I,
think
is
still
a
ways
away.
I
mean
it's
still
in
the
design
phase,
but
the
API
would
be
there
yeah
yeah.
It's
definitely
an
exciting
feature.
D: Also, say you have an error and you see, okay, I have this span with an error. But what is common among all the spans that have this error? I want to find patterns. Sometimes I do have the metrics generator creating metrics for some of those attributes, but not for all of them. With this I could go attribute by attribute and check: is it tied to one user, or one article ID, or something else with high cardinality, anything like that.
A
Yeah
cool
now,
I
think
next
call
we
should
have
documentation
and
some
examples.
Things
like
that
yeah,
maybe
a
mock-up
or
something
yeah
that'd
be
cool.
A
Oh
yeah,
okay,
cool
thanks
for
whoever
added
this
link
here
for
the
grafana
front-end
query.
Builder
I
wanted
to
show
this
off
because
I
think
this
is
new
too
this
thing
here.
So
that's
really
cool,
so
we're
kind
of
like
replacing
the
old
search
tab
with
the
new
thing
that
works
similarly,
but
it
builds
the
trace,
URL
query
for
you,
so
this
brings
it
much
more
in
line
with
Loki
and
the
Prometheus
data
sources.
A
Things
like
that,
so
I
think
this
would
be
really
cool
and
also
it's
a
really
good
way
to
like
learn.
Trace
ql
I
think
so
it
kind
of
helps
out
for
people
that
are
just
just
just
picking
it
up
and
I
have
no
idea
what
version
of
grafana
this
is
in,
but
I,
but
it's
cool.
A: This next one is for when you're looking at a trace. I wonder if I can zoom in on this. If you have a very large trace, you know, 100,000 spans, a million spans, it can be really hard to find what you're looking for. So this is the ability to filter down once you've already loaded a trace.
B: Yeah, an improvement to that which doesn't show in that demo is that now you can also remove the rest of the spans. For very, very large traces, even just highlighting the matches isn't enough. Now you can remove thousands of spans and show only the ones for a service or whatever the filter is.
A
Cool
yeah
and
those
are
the
front
end
changes
that
came
to
mind.
Are
there
any
more
that
I
missed
that?
That
would
be
useful
to
talk
about
here.
So
sometimes
you
know,
this
is
a
Tempo
Community
call,
but
I
think
it's
good
to
also
look
at
the
front
end
improvements
because
there's
been
a
lot
of
them
lately,
and
so
that's
like
we're
all
really
excited
about
these
front.
End
improvements
too.
A
Cool
well,
if
there's
any
more,
we
can
add
a
link
in
here.
Okay,
so
last
time
we
talked
about
V,
Park,
A3
I
know
we
just
made
to
the
default
and
we're
already
working
on
the
next
one.
But
now
there
we
put
up
the
Mario
and
Adrian
have
put
up
a
design
proposal.
So
if
you
want
to
know
more
about
more
specifics
about
the
per
K3
format,
you
can
click
this
and
it'll
go
into
more
depth
about
it,
and
so
I
think
I'm
really
excited
about
this
too.
B: Yeah, I can quickly describe it. We already talked about vParquet3, and there's a rendered version you can get to from the description. The TL;DR of this new version: in parquet we have a static schema, and we defined a fixed set of attributes to be moved to dedicated columns. Now we want to do that dynamically, because the attributes we chose don't necessarily work for everyone, so we want to make them configurable at runtime.
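As a rough illustration of what per-tenant dedicated columns could look like in configuration; the key names and layout here are assumptions based on the discussion, not the final design:

```yaml
overrides:
  "tenant-a":
    # Promote heavily used attributes out of the generic
    # key-value list into their own parquet columns.
    parquet_dedicated_columns:
      - name: http.url        # placeholder attribute
        type: string
        scope: span           # one of the span-level slots
      - name: k8s.namespace   # placeholder attribute
        type: string
        scope: resource       # one of the resource-level slots
```

The idea is that each tenant names the attributes worth their own columns, rather than Tempo hard-coding one set for everyone.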
B
Another
thing
that
we
found
is
that
you
don't
even
need
to
do
a
in-depth
analysis
of
query
patterns
just
taking
the
top
10
attributes
or
like
top
10,
that
take
more
space
in
the
blog
just
results
in
massives
increases
in
the
speed,
because
you're
reducing
the
amount
of
data
on
this
array
of
key
values
that
you're
searching
for
and
you're
making
the
block
smaller.
B
So
you,
basically
you
just
have
to
read
less
data
and
then
yeah
just
to
wrap
up
I,
encourage
everyone
to
take
a
look,
even
if
you
don't
feel
like
you
gonna
comment
on
anything.
This
is
kind
of
the
first
step
on
a
bigger
change.
We
want
to
do
in
parquet,
ideally
in
the
future,
we
would
like
to
have
everything
dynamic.
B
So
it's
the
the
schema
is
more
flexible,
so
this
is
kind
of
like
the
first
iteration
on
on
making
RK
more
Dynamic,
so
I
think
it's
like
a
good
starting
point.
If
you
want
to
follow
this
development
for
months
to
come
and
yeah,
so
just
encourage
everyone
to
take
a
look
and
if
you
have
any
feedback,
it's
welcome.
A
Oh
I
mean
you
all
are
already
moving
really
quickly.
There's
a
first
PR
draft
PR
out,
which
was
awesome,
total
surprise
to
I.
Think
the
rest
of
us
said
it
was
moving
that
quickly,
yeah
there
so
there's
10
columns
at
each
level
resource
10
resource
columns,
10
span
columns
and
they
can
be
mapped
to
anything.
And
then,
even
if
it's
an
attribute
you
don't
query
on,
it
can
still
be
helpful
because
it
pulls
it
away
from
the
attributes.
You
do
use
yeah.
B
That's
right,
yeah,
that
was
one
of
the
most
interesting
findings
is
Adrian.
Did
a
analysis
over
some
blocks
that
we
had
and
yeah
just
like
the
simplest
solution,
which
is
just
taking
the
attributes
that
take
the
most
of
the
space
results
in
massive
increases
in
speed,
absolutely
like
80
percent,
even
if
you're
not
queried
by
those
just
by
having
to
search
over
the
last
data.
A
Okay,
cool
I
think
that
I
think
that
covers
everything
that
we
we
thought
of
anyone
have
anything
else
that
they
are
want
to
talk
about
or
any
questions
faster.
Anything
else
you
want
to
share
ask
us
about.
D: It's about the metrics generator. It was some time ago, but after we migrated from 1.5, we saw a big increase in memory. Is this something you also see in your own deployments?
A
Yeah
I
mean
I,
think
that's
an
area
that
we
still
want
to
profile
and
optimize
in
I
mean
I.
I
I
will
say
that
I,
don't
think.
We've
done
a
lot
of
work,
optimizing
the
metrics
generator
and
so
there's
probably
a
lot
of
room
for
improvement,
but
yeah,
it's
hard
to
say
like
if
the
how
the
memory
would
have
changed
from
1.5
to
2
on
at
least
I'm,
not
close
enough
to
that.
A
Maybe
Mario.
If
you
have
any
ideas
but.
B
Yeah
I
was
trying
to
recall
on
what
changes
may
have
caused,
that
increasing
memory
and
like
from
the
top
of
my
head,
I
can
I,
don't
think
We've
made
any
significant
changes
that
resulted
in
more
memory
usage,
yeah,
I,
don't
know.
I
will
have
to
review
the
list.
We've
been
making
a
lot
of
changes
to
generate
to
for
the
metrics
we're
making
changes
to
some
metrics,
but
most
of
them,
if
not
all,
are
hidden
behind
feature
facts.
B
We
will
now
generate
a
numerical
Target
info,
but
that's
behind
the
feature
flag
and
all
of
these
new
changes
will
incur
a
more
memory
usage,
okay,
yeah,
if
you,
if
you
have
them
disabled
anything,
we've
we've
made
any
any
changes,
maybe
genio
or
sack
you
know
of
some
changes
but
yeah
I.
Don't.
E: Yeah, we did. Well, I did change one thing about how we store the hash of the labels.
E
Sorry,
the
so
we
used
to
only
hash
base
on
I
think
value
of
labels,
but
now
because
all
the
labels
were
the
same,
but
now
that
the
labels
can
be
dynamic,
we're
hashing
on
both
the
labels
and
the
values,
so
it
might
cause
when
I
checked
it
in
Ops
we
saw
very
minimal
increase
in
memory,
so
I
didn't
bring
it
up,
but
there
was
some
but
very
minimal,
not
even
one
percent.
On
our
side.
A
So
I
I
will
say
that
we
have
had
good
successes
in
go
men
limit
on
the
generator
pods
because
it
seems
like
they
will
kind
of
like
grow
in
memory,
maybe
more
than
expected,
but
Gilman
does
a
good
job
of
keeping
them
alive
and
they
they're
really
stable
after
that.
So
if
your
problem
is
zooms,
I
might
try
that.
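For example, a minimal Kubernetes snippet for a metrics-generator container; the values are illustrative, and the only real knob here is the GOMEMLIMIT environment variable read by the Go runtime:

```yaml
containers:
  - name: metrics-generator
    env:
      # Soft memory limit for the Go GC, set below the hard container limit.
      - name: GOMEMLIMIT
        value: "4GiB"
    resources:
      limits:
        memory: 5Gi
```

With GOMEMLIMIT set a little under the container limit, the garbage collector works harder as the process approaches it, instead of letting the pod get OOM-killed.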
A: Okay, cool. If there's nothing else, I guess we can wrap up. This was definitely a jam-packed month of features, so the upcoming Tempo release is going to be huge. I think it's a little too early to start talking about 2.2 specifically, but it's already shaping up to be a great release. Okay, cool, all right, everyone. Thanks for joining, and we'll see you next month.