From YouTube: Grafana Tempo Community Call 2023-07-13
Description
Join our next Tempo community call: https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY/edit#heading=h.3x2mcvpczj56
What was discussed:
- vParquet3 PR is up! Details on how this will increase query performance
- Structural operators demo
- User Configurable Overrides
- Tempo 2.2 is coming soon
A
All right, okay, cool. Yeah, so we've got some really exciting stuff to talk about. This is really fun, and we have a new release of Tempo that we've been readying. There are tons of cool features in here, a lot of really exciting stuff, but first I think we'll hand it over to Mario. Mario, you said you need to cut out a little early, so we'll let you cover this, but this is really cool.
B
That's right. Okay, yeah, so vParquet3, the new storage format in Tempo. We've talked about this format a couple of times, and we did put out a design proposal. I'll outline the concepts that this new format is built on and summarize it for those who don't know what it is. It introduces the new concept of dedicated columns. In Parquet we have a static schema in which we define different attributes that map to different columns, but that doesn't happen for every single attribute.
B
So what vParquet3 allows is to select any attribute and bump it up to a dedicated column. It's kind of like creating an index in a SQL database: if you believe an attribute is important for searches and you want to speed up searches by that attribute, then you bump it to this new special treatment.
B
The news is that the pull request for this new format is up. We've been developing in a parallel branch, and now we're merging this development branch into main, which is what's linked in the document, I believe. I think it's still marked as draft because there are a few conflicts to resolve. We just introduced a new operator, which I think we're going to mention later, so I won't spoil it.
B
Yeah, I have prepared a slide to show you how this is going to look from an operational perspective: how the config is going to look, and then a utility to select the columns, to kind of help users decide which columns are the best to bump to dedicated.
B
All right, yeah, so the config. Can everyone see it? Okay. So it's part of the overrides. The main idea is that we want this configuration to be mutable at runtime, so you don't have to restart Tempo to introduce a new configuration, a new dedicated column. I don't think the intention is to be changing this configuration every 10 minutes, though.
B
Changing it that often is going to cause issues with compaction when merging two blocks with different configurations. But the idea is that, as query patterns and the data change over time, you can tune these configurations.
B
So it's a new field in the overrides called dedicated columns, and it's a list of the attributes that you want to bump to a dedicated column. In the slides, the first entry is one attribute that you're bumping to a dedicated column, and the second one is a different attribute. The first thing in this configuration is the scope: you can define whether the attribute is in the resource scope or in the span scope.
B
For now you have 10 of each, so you have 10 columns for resource attributes and 10 for span attributes. I don't think we have plans yet on expanding or reducing this, or making other combinations; we'll figure that out as we get more operational knowledge. Then there's the type of the attribute: you have to define whether it's a string, an integer, and so on. And finally, the name of the attribute.
B
So, like the examples here: you have a resource attribute that's a string and it's named cluster IP, and then the second one is a span attribute of type integer, and it's gRPC status code.
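For readers following along without the slides, an overrides entry along those lines might look roughly like this. The field names here are illustrative sketches based on the description in the call, not the shipped schema, so check the Tempo configuration docs for the exact keys:

```yaml
# Hypothetical per-tenant overrides sketch: each dedicated column is
# declared with a scope, a type, and the attribute name.
overrides:
  "my-tenant":
    parquet_dedicated_columns:
      - scope: resource        # resource-level attribute
        type: string
        name: cluster.ip
      - scope: span            # span-level attribute
        type: int
        name: grpc.status_code
```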
B
I forgot to mention: for now the only supported type is string, which is the most common type of attribute, so it should cover most of the surface, but we definitely intend on supporting all the possible types there are. From this config, it's possible that more options will be available in the future, like encodings or compression.
B
We don't support that as of now, but it's very likely that in future iterations we'll be introducing those options, the same options that we are already using for the dedicated columns in the static schema, not these ones, but the ones from the hard-coded schema. And then, well, which attributes should you configure?
B
We have added two new commands to the Tempo CLI: analyse block and the plural analyse blocks. Essentially, you can run this command selecting a tenant and a block ID, or just a tenant, and it will parse the Parquet block and produce a summary of what the top attributes by size are in that block, so which attributes are contributing the most to the size of that generic slice.
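The exact invocation isn't shown in the transcript; as a sketch it might look like the following, where the backend flags and the block ID placeholder are assumptions on my part (check `tempo-cli --help` for the current syntax):

```shell
# Summarize the largest attributes in a single block of a tenant
tempo-cli analyse block --backend=local --bucket=/var/tempo/traces my-tenant <block-id>

# Or scan several blocks for a tenant at once
tempo-cli analyse blocks --backend=local --bucket=/var/tempo/traces my-tenant
```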
B
This is what it looks like, from some test data that we have. Here you see, out of all the attributes at the span level, these are the ones taking the most uncompressed size. That's a technical detail, it doesn't matter much.
B
Essentially, this should guide you toward which attributes you should move to dedicated columns. I guess we'll gain more operational knowledge with time, but for now all the benchmarks we've run indicate that just taking the ones that take the most space is already pretty useful, even if you're not searching by those, because you're shrinking that generic slice, and that is a huge gain in performance. So I think for now this is going to be the guideline.
A
Yeah, that's fantastic. Mario and Adrian have been working on this for months, so there's tons of work here. We're really excited about this internally; I mean, who doesn't like faster Tempo searches? And I think the tool is really cool, so it should be very easy to turn on. Fingers crossed, it'll be experimental in the next release; that's our goal, I think. We'll see. Cool, cool. Okay, I guess next I will do a little demo of the structural operators.
A
So that's what structural operators are doing: they're letting you look for a span with a descendant or a child or a sibling, different things like that. Not sure how best to describe it, so we'll just kind of jump in. Is this small enough, can everybody read this? Okay, cool. So I just have a couple of traces here, like this one, where there is stuff going on: you have a back-end service that calls another service that calls the Postgres database.
A
You maybe could have done similar queries before: you could find a trace with both of these services, but you couldn't tell that they were necessarily linked. So this operator here means any descendant.
A
I can run this, and if I click the links, it's showing me the Postgres span down here, and somewhere up above it there's always going to be the ancestor that matches these conditions. And you can actually combine this with any other normal stuff that you would query, so maybe I only want to find the slower ones: that's 120 milliseconds, so maybe we do 150 milliseconds.
A
And so this required a new Parquet format internally, so we're doing a lot of cool stuff. Another operator is the child operator: this would be a direct parent-child match, whereas the descendant operator looks for a match anywhere in the hierarchy.
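The demo queries themselves aren't captured in the transcript, but the operators described would look roughly like this in TraceQL; the service and attribute names here are invented for illustration:

```
{ resource.service.name = "backend" } >> { .db.system = "postgres" }

{ resource.service.name = "backend" } >> { .db.system = "postgres" && duration > 150ms }

{ resource.service.name = "backend" } > { .db.system = "postgres" }
```

The first query uses the descendant operator (`>>`) to match a Postgres span anywhere below a backend span; the second adds the duration condition from the demo; the third uses the child operator (`>`) for a direct parent/child match only.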
A
There's maybe not a lot to look at, but you could look for things like where it generated multiple database queries, or maybe where this API did these other activities on the back end. So yeah, I think that's the demo. Does anybody have any questions, or any ideas for other cool queries that we could run?
C
Easy access to the structure of the trace is just so neat. This is the one I want to run; I'm going to put this in chat, but I should put this in another doc too. Like: pick some route, some endpoint that's currently erroring, and then you can look for a descendant where status equals error. Tempo is really starting to get into the realm of root cause analysis: I build a dashboard that shows error rates per endpoint, and then I just click on a link.
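As a sketch, that root-cause query might look like the following in TraceQL, where the route value is a placeholder:

```
{ span.http.route = "/api/checkout" } >> { status = error }
```

That is: a span for the erroring endpoint, with an error-status span anywhere among its descendants.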
A
Yeah, right, so we can quickly get to the erroring spans just in this table if we click them. In this case, with the toy data, it's all the same, but I think there's really cool stuff here, because you could even keep going, combining this with grouping by service or IP or cluster, and then looking at things like that.
D
A
Sure, yeah, this was in the original language design; we just didn't get to it until now. So it's been out there, but we always do a blog post for each release, so I think we'll touch on it in there. For this one, though, we were thinking it might be worth its own blog post, so we might do that too.
D
A
In the agenda doc there is a link to the PR, which maybe talks about it a little bit, if you wanted to look a little more. Thanks.
A
Okay, all right. Well, thanks. Hey, the demo worked, all right, exciting. Let's see... I think, Koenraad, you want to talk about some stuff?
E
This is a bit more of an advanced feature, for people running Tempo in a hosted scenario, in which you have an ops team who's responsible for running Tempo and a user who is just using Tempo. If you're running Tempo and also using Tempo yourself, this might be less relevant, but I hope it could still enable some interesting use cases.
E
Maybe the first thing I wanted to highlight is: what are overrides? Because this is kind of a convoluted term; we have a lot of different overrides in Tempo. Basically, overrides are just used as a way to change variables within Tempo without restarting the whole process. You can change a config map, Tempo polls this config map regularly, and then it picks up the changes. Inside these overrides we have multiple types.
E
I could categorize them as operational limits, which would be stuff like: you can set an ingestion limit for a tenant, you can configure the queue size or the maximum search duration for a tenant. So that's stuff to protect your cluster. We also have configuration that we want to be able to update at runtime.
E
The overrides allow us to have settings that only apply to a single tenant. For the metrics generator, for example, we can enable the processors for one tenant only, or we can configure dimensions or different histogram buckets for a specific tenant.
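A per-tenant runtime-overrides snippet along those lines might look like this. The field names follow Tempo's overrides conventions as I recall them, but treat them as a sketch and consult the configuration docs for the exact keys:

```yaml
# Runtime overrides sketch: an operational limit plus metrics-generator
# settings, each scoped to a single tenant.
overrides:
  "tenant-a":
    ingestion_rate_limit_bytes: 20000000
    metrics_generator_processors: [service-graphs, span-metrics]
  "tenant-b":
    metrics_generator_processors: [span-metrics]
```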
E
If you're running a multi-tenant system, you don't want to use the same settings for all of them, because each tenant might have slightly different needs. User-configurable overrides will mostly target these tenant-specific configurations; these settings are really configuring features for the tenant. We don't want users to change operational limits, because that could destabilize the cluster itself.
E
Only the operator should be allowed to change ingestion limits. As a user, it would be kind of funny if you could change your block retention and double it for free. So, focusing on these tenant-specific configurations: how would this look? This is a simple architecture of how Tempo is usually deployed: you have Tempo in the middle; on the left you have Grafana querying Tempo; and Tempo will usually be storing its data in object storage.
E
That's all the trace data. Tempo has two sources of configuration. You have what I call the config, which is just the big config YAML configuring all the components, saying things like "hey, you can find the backend at this address". Usually this is stored in a Kubernetes config map. Besides the config, we also have the runtime configuration, which is also a config map, but this one is regularly polled by Tempo, I think probably every minute, or maybe faster, every 10 seconds.
E
So if you change the runtime configuration, Tempo will see this and update its internal state. In the current situation, as a Tempo operator, if you want to change something in Tempo, you edit the runtime configuration and Tempo will automatically detect it and apply the changes. This works fine, but it means that, as an end user, you can't make these changes yourself.
E
So what we want to do is allow the Tempo user to send API requests directly to Tempo to change these overrides. And as Tempo, we don't want to write to the config map, to the runtime configuration, so instead we'll write these user-configured overrides to a separate bucket, also in object storage.
E
So if you want to use user-configurable overrides, we recommend deploying two buckets: one for your regular trace data and one for the overrides. From then on, Tempo will use both the runtime configuration and the overrides in this new bucket to calculate the final overrides for a tenant.
E
Yeah, that's kind of how it works. We have an API at /api/overrides; you can do a GET, a POST, and a DELETE request on it. Later on we also want to do a PATCH, so you can send only your diff, like "hey, I want to change this specific variable, but I don't want to send the whole JSON again, because I don't know what the current overrides are". It will also just be a JSON request, so we're trying to keep this as simple as possible.
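A hypothetical exchange with that endpoint might look like the following. The path and the GET/POST/DELETE verbs are as described in the call; the tenant header, port, and JSON field are assumptions based on common Tempo defaults:

```shell
# Read the current user-configurable overrides for a tenant
curl -H "X-Scope-OrgID: tenant-a" http://tempo:3200/api/overrides

# Replace them (POST), e.g. enabling a forwarder
curl -X POST -H "X-Scope-OrgID: tenant-a" \
     -d '{"forwarders": ["my-forwarder"]}' \
     http://tempo:3200/api/overrides

# Remove them again
curl -X DELETE -H "X-Scope-OrgID: tenant-a" http://tempo:3200/api/overrides
```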
E
Once we introduce user-configurable overrides, there will be different sources of overrides. You'll have the overrides set in the runtime configuration, which I've put here on the right; that's usually a YAML file and it will contain a list of overrides. Maybe the block retention is set to, I think this is 30 days, and the forwarders field is kept empty. When you configure the user-configurable overrides, we also store this JSON data in the bucket, and it will just be a file like the one on the left, which says "hey, forwarders equals..." such and such.
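One plausible merge rule for computing the final per-tenant overrides from those two sources is that a field set through the user-configurable overrides shadows the same field in the runtime configuration, and everything else falls through. The call doesn't spell out the precedence explicitly, so this Python sketch illustrates that assumed rule, not Tempo's actual implementation:

```python
def merge_overrides(runtime: dict, user: dict) -> dict:
    """Compute final per-tenant overrides: user-configurable values
    shadow the operator-managed runtime configuration, field by field."""
    final = dict(runtime)   # start from the runtime-configuration overrides
    final.update(user)      # user-set fields take precedence
    return final

runtime_cfg = {"block_retention": "720h", "forwarders": []}
user_cfg = {"forwarders": ["my-forwarder"]}

print(merge_overrides(runtime_cfg, user_cfg))
# {'block_retention': '720h', 'forwarders': ['my-forwarder']}
```

Note that any field the user never sets (like `block_retention` here) keeps its runtime-configuration value.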
E
Right now we only support changing forwarders, because that's just the first MVP we wanted to work on, but we want to target most of the metrics generator overrides: the ones configuring processors, custom dimensions, histogram buckets, all those kinds of things, so that users can configure them using the API. And then we also want to support a PATCH API.
A
Cool, thanks again, Koenraad. Is anyone here running a multi-tenant Tempo install?
D
E
This will also work in a single-tenant scenario, but it's most useful in the hosted case; if you're running Tempo yourself, you can just change the config map, of course, so you don't really need it.
C
2.2 should come out soon, in the next couple of weeks. Expect an RC0, and then we'll start down that path. By the end of the month or early next month we'll have an actual, you know, cut 2.2. It's going to be an enormous release, absolutely packed with features. Not as big as 2.0, which was ridiculous, but still bigger than 2.1 and bigger than I would have expected. A lot of TraceQL features.
C
We've talked about the by function and coalesce; there are some new intrinsics, there's a new select operator, and there are the structural operators we just looked at.
C
So, big improvements to TraceQL, a lot of new features there. The streaming query endpoint will be in there. There's a new way to control the number of spans per spanset: right now we always return three, but you can change that if you want. I'm just picking some highlights. And vParquet3, probably, unless something weird happens: the PR is up, I'm going to give it some serious time next week, and hopefully we can get it in, and that will be in there.
C
It will be experimental, and vParquet2 will be the default in 2.2. The new metrics API we've been talking about will be in there too, where you can start experimenting with getting dynamic metrics across your spans with the metrics generator. So 2.2 is just absolutely chock-full, and I don't think we've even sacrificed stability or TCO or anything to add these features. It's just a huge feature release, frankly. Pretty excited about it; I hope you are too. I think that's it.
C
We just had a massive month, or two months, or three months, or whatever, and I was even gone for two weeks. In the weeks before I went on vacation I slammed out some of those features in TraceQL; the structural operators were you and Adrian, which is huge. We've just really been killing it the last couple of months. I mean, sorry, not to pat myself on the back, but I do think that we have just done a ton in the past couple of months, honestly.
A
Yeah, so the new default block format will be vParquet2. You don't have to do anything; Tempo will just handle it.
A
Cool, so yeah, we have plenty of time. We kind of wanted to sit here and have everyone ask us anything; we could just chat, let's just talk about Tempo. If you have some problems or some ideas or some questions, happy to hear it.
A
Yeah, I think there are a lot of cool queries, and I think with TraceQL there's a lot of depth to the language that you don't see right away. But once you're in there, there's a lot of flexibility. I didn't even show it in the demo, but you can chain the structural operators and look for very specific paths through a trace: you can use multiple child and sibling and descendant operators, and things like that. So I think there's a lot of really cool stuff.
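A chained query of the kind described might look like this in TraceQL, with all service and attribute names invented for illustration:

```
{ resource.service.name = "frontend" } > { resource.service.name = "api" } >> { .db.system = "postgres" }
```

This asks for a frontend span whose direct child is in the api service, with a Postgres span anywhere below that child.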
C
A
Yeah, right, right. So I think there's a lot of really cool stuff in there, and I'm looking forward to seeing the kinds of queries that people run, and the use cases and the workflows that it unlocks.
A

C
So, the metrics API was a way for us to quickly add some powerful dynamic metric features into Tempo. I think this quarter we're going to take a step back and decide if we want to continue down this path and invest in this way of doing things, or if we're more interested in an integration between TraceQL and PromQL, kind of like Loki. We're going to make some decisions this upcoming quarter about that and start moving the whole of Tempo in that direction.
C
The API will probably continue to work; we can always just turn that API into a wrapper for PromQL. We can rewrite what you put into there; it's a subset of what you could do with PromQL and TraceQL. So I'd say stay tuned to these community calls; we're going to be talking a lot about our vision for metrics, what we want out of it and how we're going to achieve it. This is kind of a first experimental step in that direction.
D
A
Almost. There is a document that we have; hey Kim, maybe you can find that link and paste it in the doc or here in the chat. We do have a page that talks about how you can actually use this with your Grafana Cloud account, so you can create an API key and call it directly from outside, but it also goes through the schema and the parameters and stuff too. So that would be a good link here.
A
Yeah, I don't think it'll work, but if it does happen in a few minutes we can come back to it. Cool, yeah, so here's the link for that.

A
Cool, anything else?
A
Okay, all right. Well, I guess we can wrap up; we'll wrap up a little bit early. Everyone, thanks for joining. This has been a great call.