From YouTube: Tempo Community Call 2022-04-14
Description
In this very special edition of the Tempo Community Call you can find such gems as:
- Tempo 1.4 details
- Roadmap + official Parquet announcement
- Metrics Generator updates
- TraceQL core concepts overview
A
Okay, I was wondering who wants to drive with, like, the slides. I only have a couple, but I could... am I doing it?
B
We've got a pretty cool agenda coming up, and I will go ahead and present this. If I can present... now, window... oh man, I almost presented not in slideshow mode and ruined all of the good announcements, that would be embarrassing. All right, so: 2022 April community call. We've got kind of a good agenda today, I think, pretty action-packed. 1.4 is coming out soon.
B
I'd say, like, in a week or two we'll cut 1.4. We're just going to let it be stable for a little bit in our environments; we haven't seen any issues, but we're getting, like, a final bug fix or, you know, feature in while we're in the next week or so, I'd say. Marty's going to tell us all about Parquet: we've been investigating this format for months, we have made incredible progress, and he's going to tell us what we're going to do with it next. He's also going to talk through a roadmap,
B
so, like, the next three months and six months, kind of a vision for Tempo and where we're going with the project. Then Koenraad's going to keep us updated on the metrics generator. It's particularly cool that that's in 1.4: 1.4 is the first open source release of Tempo that has this new component. It's totally optional. If you do run it, you can generate metrics from your traces and send those to a Prometheus remote write
B
endpoint. He'll walk us through some of the updates there. And then finally, we submitted a PR to our repo which includes some, like, first looks at TraceQL, and we'll talk through some of our ideas there: you know, what we're kind of going for, what our ideas are, I suppose, in the world of query languages for traces. And we'll finish up with the ol' Q&A,
B
so you'd be welcome to ask any questions you want there. And, like I said, don't feel like you're restricted to only talking at that time; feel free to throw something in chat, you know, whenever we're talking about it, and we can kind of answer questions as we're flowing through it, if that works better for you. Cool, so: 1.4, exclamation point! Tempo 1.4 is, like I said, maybe, like, a week or two out; we're kind of talking internally about it.
B
It's real close. We're definitely not going to put anything major in in the next couple weeks. We might have a... I think we want to get some metrics documentation in there, so it slots nicely into our docs website, and when you have 1.4 you can see all the metrics stuff. And maybe, like, one more small thing or two, but we're basically there.
B
So I'd expect that in the next week or two. In particular, we have some improvements for search: we've added some hedging, or request hedging, which is our favorite hammer to smash nails with in Tempo. It's a trick where we basically repeat requests to our serverless endpoints when they tend to be a little slow. And there was some search config movement in the querier, so if you've done some configuration regarding search, then there might be some changes.
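For reference, hedged requests are configured on the querier. A minimal sketch, assuming the option names from Tempo's querier search config; check the 1.4 release notes and docs for the exact shape:

```yaml
querier:
  search:
    external_endpoints:
      - https://<your-serverless-endpoint>  # hypothetical serverless search endpoint
    # re-issue a search request that is still pending after this long
    external_hedge_requests_at: 8s
    # send at most this many total copies of a single request
    external_hedge_requests_up_to: 2
```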
B
There might be some changes in the config; they'll all be in the release notes, of course. And we have a new object encoding. It's going to require a special rollout, and we'll detail all of this in the release notes as well, but basically you roll all your ingesters first and then your distributors. It's a more efficient encoding that actually fixes a bug with our min/max time ranges on our blocks, and it will also just help speed up search a little bit.
B
The current version of search... The metrics generator is making its first appearance in 1.4; I think that's the big major feature, the headline piece of 1.4. So if you're running Tempo, I'd encourage you to experiment with this. Like I said, it's totally optional, but it does generate some really cool metrics, and the Grafana UI also has support for the service graph metrics.
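If you want to try it, here is a minimal sketch of what enabling the metrics generator might look like. The key names below follow the Tempo docs, but treat them as assumptions and check the 1.4 documentation for the exact shape:

```yaml
# assumption: top-level toggle plus per-tenant processor override, per the Tempo docs
metrics_generator_enabled: true
metrics_generator:
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write  # your Prometheus remote write endpoint
overrides:
  metrics_generator_processors: [service-graphs, span-metrics]
```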
B
So you can make some nice service graphs, and I think we have some ideas to improve that in the future as well. And then, one of the things Tempo kind of struggles with is when it has enormous traces, like close to 100 megs or more than that. It can OOM components when they attempt to, like, search: you know, they're working their way through all the traces, they hit a hundred-meg trace,
B
and it can OOM a component; compaction suffered sometimes. So we've just kind of improved our support for this already-existing max trace size configuration option and enforced it in more places than just ingestion, basically.
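That limit is the per-tenant override below; a sketch, assuming the option name from Tempo's overrides config:

```yaml
overrides:
  max_bytes_per_trace: 5000000  # cap a single trace at ~5 MB; 0 disables the limit
```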
And the complete changelog will of course be available with the release notes; there's tons of small improvements and bug fixes and all kinds of normal things. So look forward to that, I'd say, yeah, maybe a week or two from today. With that I'll give it to... I think Marty's next, is that right?
A
I actually wanted to add one more thing for 1.4: there are some compaction improvements. So if you have really large traces, like 100,000 or more spans, that was in here. And I think, like, Gabriel... I think we worked with, like, Tanner on some of that, so maybe that would be interesting to you, like, useful too. Cool, yeah, next slide, I guess. What is the next slide... boom! All right, we're really excited about Parquet.
A
You're right, for the next community call we'll get that for sure. So we actually dug really deep on this in the last community call. I didn't put all of this in here again, but we kind of went into the schema, the approach, like what are the reasons, some of the benchmarks that we've seen, and things like that. And since then we've just been digging into it and really crossing off any blockers that are out there, and we've been writing up the code, so yeah.
A
So this is something we've kind of shown on and off for the past couple months about search: like, what does Tempo search look like, what is the roadmap? Oh, we're missing an arrow or a line here that shows where we are, where Tempo 1.4 is, yeah.
A
The slides are too complicated for us. So in the past couple releases we've steadily progressed along this roadmap. Tempo 1.4, which is coming out really soon, is kind of here at the end of phase two, so it includes the backend search and the serverless features and things like that, and then also we'll hear more about the trace query language that has been in progress.
A
There's the pull request out there now. And what it looks like for the next phases: an implementation of TraceQL, more UI that we want to add along with it, and things like that. And so, kind of what happened here is we added in the backend search and we realized it just wasn't performant enough for what we really wanted, and so that's why we started kind of pausing and investigating real heavily into the block format. We really felt like that was the bottleneck: having this highly compressed, kind of, like, basic proto format.
A
So this is the roadmap for the next three and six months. Metrics generator: we'll continue iterating on that; there's a lot of really good ideas and cool things we want to do with it, and along with that, the features that are coming out of it and the data, so it drives more UI enhancements.
A
So
what
we
want
to
see
from
the
tempo
data
source
incremental
ui
is
just
a
lot
more
usability
and
features
and
I'm
sure
we'll
go
into
that
and
you'll
see
some
of
that
later
and
then
trace2l
the
language
definition
that's
been
happening
and
it'll
keep
happening,
and
so
that's
out
there.
You
can
look
at
that.
We'll
hear
more
about
that
and
so
arcade
blocks,
and
this
is
where
we
see
that
going
so
in
order
to
really
get
searched
where
we
want
it
to
be.
In order to really get search where we want it to be, we want to try out this new block format, and so we'll start iterating on that over the next couple of Tempo releases. We'll probably do one release where it's marked experimental and then one where it's marked stable. And then, yeah, so there's a line here: maybe we'll cut Tempo 2.0 when that's ready. We're not sure; that's something we're talking about. It's a very big change to Tempo, and so, yeah, that might be something really cool to look forward to. And there's a...
B
I'd like to underline something Marty said. You know, we built all this parallelization; we were trying to get search working on our existing blocks, and it was just too slow, and it's not what we want for Tempo. It just wasn't at the level we wanted it at. So Marty has spent an enormous amount of time, Marty and Annanay both, digging into various new backend formats, and this Parquet format has just become more and more promising the more we've invested in it.
B
So the big announcement here in particular is just: we are a hundred percent, for sure, moving to Parquet, and we're looking at, like, a three-to-six-month time frame for an announcement there and for a release that includes support for it. The early benchmarks are extremely promising. Like any big tech switch, there's going to be, you know, there'll be bumps, it's going to be...
B
you know, there'll be some difficulties, but we're excited to move forward with this, because we just think it is the best format to store this, you know, multivariate, extremely high cardinality data with no restrictions, no cardinality issues, or anything like that.
A
Well, I think... so what I would do is, if you want to learn about Parquet just in general, I'm sure you could just google that. I don't know of anything offhand, but there's a ton of it out there; Parquet has been around for a while. If it's about our specific format, we kind of touched on some of that in the previous community call, but we'll also be pushing out a design proposal here soon, like maybe next week or the week after, about the specific Parquet file format that we're going to go with.
B
Okay, so that might be a good resource too for whoever's interested, if you want to kind of dig into what Parquet does and why we may have chosen it. It's a schema-less columnar format, which really matches up nicely with trace data, where you have some spans that have tags a and b and other spans that have tags c and d and every combination in between.
B
So we think it's a really good matchup. And Dremel and, I think, Parquet are two great resources to be googling, or DuckDuckGo-ing, or whatever you're into... it doesn't make a good verb, does it? Cool. Any other Parquet thoughts? We're going to move on, I think, to some metrics generator updates from Koenraad, if he's feeling up to it, if there are no other kind of ideas there. Cool, all right, here we go. Do you want to present, Koenraad?
C
Just... see you next slide. So yeah, I just wanted to share some updates about the metrics generator. Since the last community call we didn't do any, like, major architectural changes, but we added some really nice features and a lot of improvements under the hood, as we've kind of been running this in our biggest cluster and, you know, just kind of figuring out what stuff works well. So, if you're interested in the architecture, you can look at the previous community call, which has a lot more detail.
C
One thing we added (like, we had this originally, then we removed it, and now we added it again) is exemplar support. So all the metrics we generate using the metrics generator will have exemplars pointing to traces that are stored in Tempo. Since we generate metrics from traces, we, you know, have the trace ID right there from the trace.
C
So, for example, this is a screenshot of service graphs. You can see you can click on the request histogram for one of the metrics, and then the little, like, squares or dots are exemplars. You can click on them, and the next slide shows the kind of information you can see. So an exemplar has, like, a very specific value, so you can see the exact value of this metric, of, like, this data point in the metric, and there's a link to the trace ID. And on the next slide,
C
you can also see the same for span metrics. So this is an example of using span metrics to show the latency of all the spans starting with HTTP GET of the Tempo query frontend. So these are the endpoints of the query frontends; you can see them in time, and there are just, like, a bunch of exemplars, and you can click on them. So maybe you're looking for, like, a slow request, so you can look at the top of the graph and click on one of those, or you're looking for a fast request.
C
Another nice feature we added is you can now add additional dimensions to your metrics. So service graphs and span metrics have a number of default labels they come with: for service graphs we always capture the client and the server; for span metrics we always capture the service name, span name, span status, and span kind. And that's, like, a good basic set, but sometimes you might want to add tags specific to your environment. For instance, you might have a namespace tag on your traces, or maybe you want to add additional information like the HTTP status.
C
So we added a config option, dimensions, and what the processor will do is look at every span and try to find this tag, either in the resource attributes or in the span attributes (it will just look, you know, everywhere for the span), and then, if there is a value, it adds a label to the metric. And we also do the conversion: if there's a tag with a dot in it or a dash or some other character that's illegal for Prometheus labels, we will convert it into an underscore.
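A rough sketch of where that option lives; this follows the shape of Tempo's metrics generator config, but check the docs for your version:

```yaml
metrics_generator:
  processor:
    span_metrics:
      # extra span/resource attributes to add as metric labels;
      # characters that are illegal in Prometheus labels (like '.') become '_'
      dimensions:
        - namespace
        - http.status_code  # emitted as the label http_status_code
```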
C
Here I plotted the amount of span metrics, the amount of spans that were ingested, and I summed them by namespace. So this allows us to see how many spans Tempo ingested by namespace, which can be interesting if you have, for instance, multiple teams sending spans, sending traces: you might want to see, like, you know, how much every team is sending us, and then you can kind of make a breakdown.
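In PromQL terms, that breakdown looks something like the query below. The metric name assumes the generator's default span-metrics naming, and the namespace label assumes the dimension configured above; treat both as assumptions:

```promql
# spans ingested per namespace, via the generated span metrics
sum by (namespace) (rate(traces_spanmetrics_calls_total[5m]))
```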
C
On the next slide I showed the same, but with a histogram. I think, yeah, there's an example for a specific span: I'm looking for the p90 latency of the HTTP GET loki query_range span, and I summed them by cluster, so you can compare performance of the same span across multiple clusters. And you can see, like, you know, this blue cluster is much faster for some reason, so maybe that's something you want to investigate.
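Again as a hedged PromQL sketch, with the same caveats: the histogram name and the span_name label are assumptions based on the generator's defaults:

```promql
# p90 latency per cluster for one specific span
histogram_quantile(0.90,
  sum by (cluster, le) (
    rate(traces_spanmetrics_latency_bucket{span_name=~"HTTP GET.*"}[5m])
  )
)
```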
C
And then the next slide... I think, yeah, that's all for dimensions. So it can be very useful when you tune it to your environment. We're working on the docs as well, so you can already find the configuration for the metrics generator if you go to the latest version of the docs, which is kind of like the rolling release of the docs; it gets updated all the time.
C
So we already have a description of the configuration, and we will also be adding a page about running the metrics generator, to share kind of our operational experience: to share which metrics to monitor and how to configure the metrics generator. So, just to share our knowledge and help everyone out that tries to run this. And I think on the final slide,
C
there's another cool announcement, I think: we're currently working on rolling this out in Grafana Cloud Traces. So the goal there is that we generate metrics automatically for all the traces that are ingested by Grafana Cloud Traces. So if you enable this and you send us your traces, we will generate metrics and write them to the Grafana Cloud Metrics tenant of the same stack.
C
So that means you don't have to run the agent anymore in your own environment to generate these metrics, because that can be very resource-expensive, and it will just be easier: like, the integration will just be done for you, right? The metrics we send will be billed as regular active series, so they're not special in any way; they're the same metrics as if you would generate them yourself, but, you know, you save the hassle of having to generate them and spending the resources on that.
C
That said, we're currently looking for beta testers. So, if you're interested in this, if you're using Grafana Cloud Traces, you can reach out to us, like, just send a message on Slack or whatever, and we can enable this for your tenants. We're currently, you know, especially looking for feedback to iterate on as we add, like, you know, more configuration options, more, like, features and stuff like that. And that's kind of it for the metrics generator.
B
Cool. I think the metrics generator is kind of the big feature of 1.4; I'm excited to see it. I think 1.3 was search, 1.4 was metrics generator; not sure what next will be, but we'll figure that out over the next couple months. But yeah, all very neat stuff. The exemplars are really exciting: me and Marty were just debugging some issue yesterday in which, you know, we were
B
looking at some metrics, and we were able to use exemplars to jump to a trace, and we very quickly found that there was some kind of slowdown in an auth component in our cloud that was causing us to have slower queries than we wanted. And the ability to see the exemplar that was, you know, outlying, outstanding, super high up on the graph, click on it, and immediately jump to the trace and see why it was slow...
C
Yep, yeah. We're also working... so, for instance, span metrics have exemplars which allow you to go to a trace, but we're also working on integrating these metrics into the trace view. So when you look at a trace, there will, in the future, be a button pointing to the metrics, so you can go from trace to metric and then back to the trace, and kind of, like, keep going until you find what you're looking for. Cool, there's a question: is it possible to generate metrics just for specific spans?
C
So at this point it's not possible: if you enable the processors for a specific tenant, we will process all the spans, so we don't have any filtering there. But we are thinking about adding filtering capabilities to, for instance, say, you know, only generate metrics for spans that have a specific service name or maybe a specific tag, to kind of allow for... you know, if you know that some service just sends too many spans and the metrics are not interesting.
B
Cool, yeah, exciting stuff. I'd like to thank Koenraad and Mario, who's not on this call; they're kind of the smaller team, the sub-team, that's been putting tons of time into that. Like Koenraad said, if you're interested in participating in the beta, just @ one of us in Slack, and then any questions, throw them in our Slack channel, and I think, you know,
B
Koenraad and Mario are kind of the experts, but anyone would be welcome to help you out in terms of, like, configuring and getting this running once we've cut 1.4. Cool, so this seems like a pretty action-packed community call: TraceQL, Parquet is real, 1.4 is happening. But anyways, TraceQL. Also another announcement, a big announcement: just, I guess, two days ago we submitted this PR to Tempo, which is a design proposal for TraceQL, which is the query language we'll be using to query traces, to select traces or find traces.
B
Currently we have a UI that's, you know, very simplistic. It does a good job: you can do duration searches, you can say the service name, you know, you can pick service names, you can pick other tags. It has a pretty basic way of retrieving searches, but we believe that there are much more powerful ways to select and find traces, and that's kind of why we've put some time into this idea, TraceQL.
B
So the design doc in particular is meant to be, like, exposing some of the core concepts of TraceQL. It's not really a complete language spec; we are working on that internally, but it's, I think, 13 to 14 pages right now and still in development, and I didn't think it was appropriate to just, like, brain-dump that entire thing; it didn't feel like it would be very valuable to the community. So this is the core ideas and the core concepts.
B
Please jump in and comment if you'd like, or add ideas or thoughts, criticisms. We'd like to spend some time with the community working out TraceQL, see if people like it, see what they dislike and like about it, so that we can take that and work it into the language spec before we really solidify the whole thing.
B
So anyways, I'll talk about some of the principles, some of the things we cared about in TraceQL while we were building it, and look at some examples here. The first thing is simplicity. You know, I think every language, every query language, at some level attempts to have some kind of simplicity, and I think we definitely want that in TraceQL.
B
We just want it to be very easy to ask the most basic thing, and the most basic thing is just, you know: please show me a trace that has this tag, or show me a trace that has this tag and is over one second long, or something. And so we wanted to include this extremely basic way of just sitting down and typing your question: HTTP status is 500, just show me any trace with that,
B
I don't care, and then it will immediately start giving you results. So that top line there with the curly braces is kind of the official syntax: the curly braces specify that you're selecting a set of spans, and the set of spans you're selecting is any span where there is a tag http.status equal to the value 500.
B
Directly below that, we kind of have this idea that maybe we want a shorthand. I think it's still up for discussion, and maybe we don't even want that, but maybe we want it so that there's a shorthand where, if you just type in the most basic query with nothing, we'll just throw some curly braces around it for you and evaluate it. So again, the idea is we want simplicity here. It's important: we really just want you to be able to type the most basic idea.
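A rough sketch of the two forms being described; the exact syntax lives in the design proposal PR and may well change:

```
{ http.status = 500 }   # select any span carrying this tag

http.status = 500       # possible shorthand: the braces are added for you
```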
B
You want a user to just sit down and ask a basic question and get a trace or two back, or find some matching options, right? But this span set idea is very important; it's really what the language is based around. We're going to be selecting sets of spans, and then, as we'll see in a second, we'll be passing those through a pipeline. But down below you can see some more complicated examples.
B
So in the first one I'm ANDing two conditions, so only the spans that match both of these conditions will be, like, part of the span set, which then controls the results, the traces that are passed back. So for the first query there, that first line where it says http.status equals 500 and service.name equals app,
B
Both
of
those
conditions
would
have
to
be
true
on
us.
The
same
span
for
that
trace
to
kind
of
be
included
in
my
result
set
and
on
the
second
one
I'm
looking
for
both
of
those
conditions
to
be
true
anywhere
in
a
anywhere
in
a
trace.
So
I'm
looking
for
I'm
going
to
collect
all
of
the
spans
that
were
http
status
is
500
and
then
all
of
the
spans
where
service
name
is
app
and
then,
when
I
like
and
quote
unquote,
these
two
sets
together.
B
So the second one will find any trace where both of those two things are true somewhere in the trace, and the first of those two examples will find only traces where both of those are true on the same span. And this curly brace thing you'll see throughout, and this idea of span sets you'll see throughout.
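Sketched out, the two queries being contrasted look roughly like this (again, syntax per the design proposal and subject to change):

```
{ http.status = 500 && service.name = "app" }     # both conditions on the same span

{ http.status = 500 } && { service.name = "app" } # each condition somewhere in the trace
```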
B
I
was
kind
of
right
thinking
about
what
you
know
what
to
make
these
slides
about,
and
I
wrote
simple
and
powerful
and
I
was
laughing
at
myself
because
I
think
any
language
you
ever
see
a
presentation
on
they'll,
probably
say
it's
simple
and
powerful,
but
ours
is
too
so
we
want
to
be
powerful
as
well,
we'll
be
able
to
say
complicated
things.
Not
just.
We
want
the
basic
thing
to
be
basic,
but
we
also
want
to
be
able
to
express
more
complex
combinations
of
conditions
aggregations.
B
We
have
this
idea
of
grouping
and
we've
built
it
around.
Like
a
pipeline
like
I
was
saying
so
on
this
very
first
line.
We're
going
to
be
selecting
span
sets
where
http
status
is
200,
then
we're
piping,
those
into
account
function
and
then,
if
the
span
set,
has
a
count
greater
than
3.
It's
passed
through
that
filter,
basically,
so
that
first
line
will
find
any
trace
where
this
tag
hb
status
is
200
is
seen
more
than
three
times
inside
of
that
choice.
B
So
this
would
hopefully
give
power
users
more
capability
to
express
queries
that
are
taking
an
aggregate
information
about
their
trace
or
asking
more
difficult
questions
about
their
trace.
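That pipeline, sketched (same caveat about the proposal syntax):

```
{ http.status = 200 } | count() > 3   # traces containing more than three such spans
```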
B
Then, with grouping, we're going to break that span set down into multiple span sets grouped by namespace, and then we're going to count each of those. So that particular query will find any traces where http.status is 200 more than three times in one namespace. And of course, these tags are just arbitrary attributes on spans: I'm saying namespace, but for you, maybe that is, you know, ns or cluster or region, or maybe some other tag. So these are not, you know, specific, special names.
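A sketch of that grouping query, with by() as described here (assumed from the proposal, not final syntax):

```
{ http.status = 200 } | by(namespace) | count() > 3   # more than three such spans within one namespace
```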
B
We
also
felt
it
was
very
important
to
ask
structural
questions
about
our
traces,
so
this
we
feel
in
particular,
is
lacking
for
any
existing
tracing
solution.
That
I'm
aware
of
where
you
can
ask
something
about
the
way
your
trace
is
structured,
a
power
user
who
knows
their
knows
the
way
their
applications
are.
Architected
knows
the
structure
of
their
traces
roughly
in
the
general
sense,
and
they
can
craft
very
comp,
very
let's
say
very
detailed
questions
about
their
trace.
B
So,
there's
first
of
all
this
idea
of
like
a
parent,
so
I
can
ask
a
question
about
the
parent
of
the
current
span
with
what
we
with
what
is
called
like
an
intrinsic
attribute
on
the
span,
which
is
in
this
case
it's
parent.
So
I'm
saying
parent
dot.
Duration,
minus
duration
is
greater
than
500
milliseconds.
B
So
if
there's
a
difference
in
the
parent
of
the
span
and
this
spans
duration,
it's
going
to
be
caught
by
this
filter
if
it's
greater
than
500
milliseconds,
I'm
going
to
pass
that
along.
Maybe
I
can
use
this
kind
of
query
to
look
for
spanned
differences
where
maybe
there's
like
a
network
latency
issue
or
something
like
that,
because
two
spans
have
very
widely
different,
wildly
different
durations
and
then
we
also
have
the
idea
of
a
descendant
operator
where
you're
going
to
say.
B
I
want
to
find
traces
where
this
one
region
is
or
where
region
eu
west
one
is
a
descendant
of
region
e:
u
s,
zero
somewhere
in
the
trace,
so
somewhere
in
my
trace.
Maybe,
like
my
root
span,
I
have
region
e,
u
s,
0
and
then
somewhere
down
in
a
leaf
or
in
one
of
the
branches.
I
have
region
e
west,
one
that
would
match
this
set
this
particular
condition,
and
you
would
find
traces
that
have
you
that
have
this
pattern,
where
you're
moving
from
one
region
to
the
other.
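Both structural examples, sketched. The region values follow the transcript, and the descendant operator is written here as >>, assumed from the design proposal; both are subject to change:

```
{ parent.duration - duration > 500ms }            # large gap between a span and its parent

{ region = "eu-s-0" } >> { region = "eu-west-1" } # the second span set descends from the first
```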
B
So
we're
asking
like
structural
questions
about
our
choices.
How
is
our
trace
structured?
It's
going
to
give
you
the
power
to
find
ask
very
detailed
questions
about
your
choices
and
find
really
cool
outliers
that
help
you
understand,
not
what
the
core
of
your
choices
are
doing,
but
really
dig
into
the
the
niche
cases.
The
cases
that
tracing
is
powerful,
for
which
is
this
unique
trace?
You
know
why
is
this
trace
unique
and
how
can
I?
How
can
I
correct
an
issue
with
it
or
why
was
you
know
what's
going
on
with
it?
B
Why
is
it
different,
so
the
primary
ideas
here
again
are
this:
like
span
set
idea
the
pipelining
idea,
and
then
we
also
want
to
provide
structural
queries,
so
the
pr
that
was
submitted
again
has
it.
Has
the
it
has
the
core
concepts?
It
has
a
lot
of
what
we
just
talked
about
here
and
as
well
as
some
additional
details.
B
I
really
didn't
get
into
all
the
details,
even
in
that
pr
we're
looking
for
comments,
so
please
drop
some
in
there
and
please
give
us
some
guidance
and
help
us
build
something
that
we
think
we
would
all
want
to
use
in
this
open
source
project
tempo.
B
Also,
this
is
the
first
step
for
traceql.
We
do
intend
to
start
building
things
like
metrics
out
of
traces
with
this
language-
that's
not
included
in
the
pr,
but
it
is
on
our
internal
road
map,
our
internal
discussions
and
our
bigger
design,
dock
we're
moving
into
metrics
from
traces,
and
that's
definitely
going
to
be
a
part
of
this
eventually
cool.
B
I
think
that's
roughly
it
for
our
part
of
things
appreciate
everyone
coming
along
at
this
point,
you're
welcome
to
ask
questions
about
anything
that
we've
covered.
B
I
meant
I
made
it
up
about
an
hour
before
this
thanks,
but
yeah
go
check
out
that
pr,
I
think,
there's
some
exciting
conversation
in
there
already
there's
some
members
of
the
community
have
already
commented,
and
I
would
love
to
hear
what
anyone's
thoughts
on
that
are.
A
Cool. How's everybody... is everybody here using Tempo? Are anybody's clusters having issues? Everything is going great?
D
Yeah, pretty much. I think the combination of extreme volume and single tenant is what we run into the most, but we have a good, like, feedback loop for tweaking compaction; that's the main thing, basically. I think the other pieces are working pretty, pretty well.
D
Well, there's one thing that we... I don't know if I mentioned it last time: we used to run with search enabled but not actually use it, just to, like, see how it would perform, and the CPU hit on the ingesters and distributors was significant, and we had to turn it off.
A
I mean, this is real vague, but I mean, our thoughts are that that hit kind of goes away, like, with the new block formats. Maybe we don't have to build that extra data structure on the side; the reason we did that is because, again, the proto was just too costly to go through. So maybe, yeah, promising things there, hopefully.
B
Yeah, I'm not going to lie to you: search as it exists would not have worked for your volume. It barely works for our volume, and I think we're roughly in line; you might be a little higher than even our largest clusters are. And so you needed Parquet anyway, so don't worry about turning search on; it wouldn't have bought you anything. Great.
B
I think another cool thing that somebody requested, that might be of interest to Gabe and that crew also: somebody asked about, and I've thought about this before, providing time ranges for trace-by-ID search. Because at some point, the big bottleneck for query is you really search every block. It's a true key-value store: if you've pushed this key, this trace ID, at any point in the last,
B
You
know
how
many
days
your
attention
is,
it
finds
that
trace
id,
but
a
lot
of
times
when
you
ask
for
a
trace
id.
You
have
some
context
like
when
we
do.
Our
search
you'll
have
some
context
about
when
the
trace
was,
if
you're,
using
a
log
format
like
if
you're,
using
loki
or,
if
you're,
using
elastic
to
do
your
searches.
You
have
roughly
the
idea
of
when
you
know
that
trace
hit.
B
So
if
you
just
provide
a
time
range,
it's
like
24
hours
or
something
you
will
drastically
reduce
the
number
of
blocks
you
search
and
still
get
the
trace.
So
that's
some
proposed
by
somebody,
and
I
think
we
might
try
to
include
that.
Maybe
in
one
five
on
our
side,
the
trick
would
be,
of
course,
moving
grafana
over
to
that
and
choosing
when
to
do
that.
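A hypothetical sketch of what a time-bounded lookup might look like against Tempo's trace-by-ID endpoint; the start/end parameters are the proposal being discussed here, not a shipped API at the time of this call:

```
# hypothetical: bound the trace-by-ID search to a 24-hour window (unix epoch seconds)
curl "http://tempo:3200/api/traces/<trace-id>?start=1649894400&end=1649980800"
```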
D
That sounds good, I mean, just because for our workflow we have, like, a custom trace search thing anyways, and that's done through a custom plugin in Grafana. So we have the time window where the trace was found, you know, already, so it would be easy for us to, like, pass this on to the trace lookup afterwards. Cool.
A
We
had
a
question
here
in
the
chat,
is
search
performance
also
optimized
on
1.4,
when
not
using
serverless
right
so
for
larger
volumes,
and
I'm
not
sure
what
the
actual
cutoff
would
be.
Serverless
would
is
kind
of
required
in
order
to
just
search
that
entire
block
space.
I
mean
even
an
hour
like
at
our
volume,
there's
a
lot
of
data
way
too
much
much
for
just
a
few
pods
and
kubernetes
to
do.
A
The
optimization
here
in
1.4
is
actually
to
basically
store
the
trace
timestamps
like
outside
of
the
proto,
so
we
can
not
deserialize
the
proto
so,
and
that
makes
it
all
the
way
from
the
ingester
to
the
back
end.
So
I
mean
there
is
an
optimization,
even
if
you're,
not
using
serverless,
but
if
you're
not
using
serverless,
the
amount
of
back-end
searching
you
can
do
is
already
pretty
limited,
where
this
optimization
is
not
going
to
be
like
a
make
or
break
kind
of
thing.
So
really,
I
guess
it
depends
on
your
volume.
B
"Or scale your queriers up to thousands and have the same impact?" Well, I'm not sure what would happen to your query frontends if you scale to the thousands, but still, the idea is there, right? Serverless is not doing anything special, anything different from the queriers; it's just an opportunity to have massive scaling on demand, basically.
B
And
I
mentioned
the
parallelization
work
and
how
it
kind
of
came
into
one
four,
and
I
would
like
to
say
that
all
of
that
will
be
part
of
traceql
and
all
of
that
will
be
part
of
the
new
park,
a
block
format.
None
of
that
was
throwaway.
We
were
just
building
all
of
this
parallelization
around
our
current
block
formats
and
even
with
massive
parallelization.
B
We
still
are
not
getting
the
speeds
we
wanted,
so
all
of
that
will
move
forward
into
parquet
and
we
will
have
these
massive
parallelization
options,
along
with,
along
with
the
new
block
format
for
just
for,
we
think
we'll
be
able
to
hit
much
higher,
much
higher
ingest
rates
and
much
better
speeds
on
surge.
B
We
should
add
a
way
to
choose
the
fields
we
want
to
display
in
the
results,
beyond
the
metadata
sure
now
kind
of
like
line
format
and
loki
anyway.
Gotta
drop,
see
you
all
dropping
that
bomb
and
taking
off
and
on
you
sure,
bud
so
to
talk
about
the
result
set.
That
is
something
that's
important
to
us
in
traceql,
so
we
talked
about
you're,
selecting
a
span
set,
which
I
think
is
kind
of
cool.
B
So
let's
say
you
do
http
status
200,
what's
going
to
come
out
of
the
end
of
that
pipeline
is
sets
of
spans
that
match
the
query
you
typed
in
and
what
my
idea
is,
and
we
haven't
sketched
this
out
with
graffana
yet.
But
the
idea
I
have
is,
let's
display
that
in
the
results
so
in
the
results,
you'll
see
like
a
trace.
Okay,
this
trace
matched.
It
has
some
http
status
200s
and
here's
two
or
three
of
the
spans
that
match.
So
I
could
deep
link
to
the
spans.
B
So
I
can
jump
straight
to
the
matching
things
or
I
can
just
click.
You
know
the
trace
id
itself
and
kind
of
see
the
and
just
go
to
the
whole
trace
as
one
you
go
to
the
root.
B
Basically,
so
I
think
passing
this
or
kind
of
the
result
set
being
sets
of
spans
and
using
that,
like
the
span
sets
being
or
the
pipeline
being
centered
around
span
sets,
I
think,
is
kind
of
cool,
because
it's
gonna,
I
think,
allow
us
to
do
cool
deep
leaking
into
the
traces
from
the
results,
pain
and
grafana,
and
I
kind
of
excited
about
seeing
it
honestly.
I
think,
there's
gonna
be
some
really
neat
features
there.
B
B
But if you type a query in and you very specifically pick a span, and you can jump to that span, maybe that will help with some of these larger-trace issues. And then, as far as custom fields, yeah, I don't know. Maybe... I'm not sure where you'd specify that; maybe in the language. We could talk about that.
B
I'm
not
sure
how
I
feel
about
that.
I
don't
know
if
I
want
the,
I
don't
know
if
I
want
the
language
to
say
what
to
display,
but
I
don't
know
we'll
think
about
it.
I
don't
know
if
anybody
else
has
strong
opinions.
There.
B
Cool
well,
I
suppose,
if
there's
no
other
questions
like
thank
you
all
for
your
time.
It
was
another.
Good
community
call
expect
some
exciting
announcements
at
observabilitycon
coming
up.
I'm
sorry
grafanacon
is
coming
up
in
a
couple
months
and
I
think
in
june
and
then
observabilitycon
this
fall.
We
expect
to
have
some
really
big
announcements
at
those
conferences
and,
of
course,
all
the
community
call
where
you
guys
can
keep
updated.
B
One
four
is
coming
out:
we
have
a
slack
channel
ping,
us
in
the
slack
channel
of
course,
file
issues
on
the
github,
the
github
issue
tracker
or
reach
out
to
us.
We
have
a
community
forum
as
well.
Any
of
these
any
of
these
avenues
is
great
to
get
a
hold
of
us.
B
If
you
have
any
issues
run
in
tempo
or
you
have
questions
about
the
open
source
offering
or
whatever,
please
please
get
a
hold
of
us
and
let
us
know
how
you're
doing
and
then
jump
into
maybe
that
traceql
pr,
if
you
get
a
chance
and
give
us
some
ideas,
thoughts,
feedback
on
that
thanks
to
everybody
thanks
to
community
for
using
tempo.
It's
awesome
to
see
so
many
people
here
and
we
are
excited
to
keep
working
on
this
project
thanks
sheldon
all
right
take
care,
everybody.