From YouTube: Tempo Community Call 2022-05-19
Description
A casual fireside chat in which we discuss Tempo 1.4.1, new TraceQL progress and announce the first internal cluster running Parquet blocks!
A
Cool, so this is the Tempo community call, May 2022, and we're a week late. We normally do it the second week of the month, but we're a week late this time because we had an offsite for the company, everybody was away from their desks, and it was just going to be a little bit difficult to do the normal thing. So instead we pushed it a week, and I'm glad to see people still showed up; we'll have some good discussion.
A
I'm going to start by putting a new member of the team in the spotlight here. Kim, you're welcome to say nothing, since I have not talked to you about this before now, but Kim is our newest member of the team. She's a docs writer; she's going to help us improve all of our documentation, both for open source Tempo and for enterprise Tempo. Kim, if you want to say something and introduce yourself, feel free, but don't feel obligated if you just want to hang out.
A
And we need it. Thanks, Kim. She's been super impactful. I see you jumping into PRs, giving her ideas, giving thumbs up for some of these docs we've been doing, getting some good suggestions, so we're very excited to have Kim around. I think she's going to do some good things for the team and for Tempo. Cool. Like I said, welcome, Kim, glad to have you. Other than this, it's pretty laid back. We're going to move through some of these topics, and I'm going to have Koenraad
A
do this next thing. He's going to talk about 1.4 and 1.4.1, which were cut since our last community call. So, Koenraad, if you want to wax poetic about that.
C
Yeah, sure. So in the last community call we already announced that we were cutting, or planning to cut, Tempo 1.4. In the meantime this has happened: a couple of weeks ago we released 1.4, and then we also released 1.4.1
C
with a couple of bug fixes. I've linked the release notes; you can check out all the changes in the different releases, but we recommend using 1.4.1, of course. There's also a blog post, but to give a quick summary: the major new feature in this release is the metrics generator. It's a new component we added to Tempo which can generate metrics from the traces being ingested by Tempo, so these metrics are based upon your traces. We have two types of them: span metrics and service graphs.
C
We can visualize this in Grafana, so you can see which services are talking with each other, how many calls they're sending, and what the typical latencies are. The metrics generator is the new component that makes this possible. If you're running Tempo in microservices mode, you'll have to add a new deployment, a new service, which is the metrics generator, and you have to hook it into the ring and so on. If you're running the single binary, it's just the same process as usual.
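In case it helps, the setup described above boils down to a small config block. This is a hedged sketch based on the Tempo 1.4-era docs; the storage path and remote-write URL are placeholders, so check the official configuration reference for your version:

```
# Sketch of enabling the metrics generator (Tempo 1.4-era config).
# The path and URL below are illustrative placeholders.
metrics_generator:
  storage:
    path: /var/tempo/generator/wal              # local WAL for generated series
    remote_write:
      - url: http://prometheus:9090/api/v1/write  # where generated metrics are pushed

overrides:
  # enable both processors discussed above
  metrics_generator_processors: [service-graphs, span-metrics]
```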
C
Do check it out, and if you have any questions, post something in Slack; we're happy to help out. Some other highlights from this release: we also made a change to the traces endpoint, the query endpoint. We have an endpoint where you can supply a trace ID, and then we look for that trace in the backend. The change here is that you can now also provide a start and an end timestamp.
C
So you can ask Tempo: hey, give me this trace, but only look in this time range, because I know this is the interesting time range. It's kind of an optimization: if you know the time range of your trace, don't search the full backend, just look in this smaller time range. Besides that, we also had a lot of changes in search with serverless. I'm not sure if I'm missing any other highlights for 1.4.
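As an illustration of the new parameters, a trace-by-ID request with a bounded time window can be built like this. This is a sketch: the host, port, and trace ID are placeholders, and the endpoint shape follows the Tempo HTTP API as I understand it, so verify against your version's API docs:

```python
# Sketch: build a Tempo trace-by-ID query URL with the optional
# start/end unix timestamps added in 1.4 (host and trace ID are placeholders).
from datetime import datetime, timezone
from urllib.parse import urlencode

def trace_by_id_url(base, trace_id, start_dt, end_dt):
    # start/end are unix epoch seconds bounding the backend search window
    params = urlencode({
        "start": int(start_dt.replace(tzinfo=timezone.utc).timestamp()),
        "end": int(end_dt.replace(tzinfo=timezone.utc).timestamp()),
    })
    return f"{base}/api/traces/{trace_id}?{params}"

url = trace_by_id_url(
    "http://tempo:3200",
    "2f3e0cee77ae5dc9c17ade3689eb2e54",
    datetime(2022, 5, 19, 12, 0),
    datetime(2022, 5, 19, 14, 0),
)
```

With the window supplied, Tempo only has to consider blocks overlapping those two hours instead of the full backend.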
C
Yeah, check out the blog post; it has more details about the metrics generator. The release notes also have a list of the changes that you can go through. I think there were a couple of breaking changes.
C
And there's also a performance consideration there, so check out the documentation to see what we recommend during the rollout, because I'm not sure about it anymore.
A
We've done this a few times, so if you've been using Tempo for a while you might be used to this procedure, but sometimes we change the way the distributors communicate with the ingesters. The standard procedure is to roll out all your ingesters first, so they have whatever new endpoint it is that the distributors expect, and then do the distributors second.
A
So it's the same kind of process. If you don't, it'll just throw a bunch of errors during the rollout and then be fine, and just refuse a bunch of spans, which may or may not be fine depending on your environment, your expectations of the backend, and what your users expect. But yeah, I was also excited about that query range.
A
But I knew there would be a time when we would want to search just a subset of the blocks, and in a lot of cases, like when you're doing Tempo search or other ways to find traces, you have a decent idea of the time range. So why not just chuck a couple of hours on either side of your expectation, and you'll probably get the whole trace. I think that was a cool addition, and it was a community member who threw that one in, which is also really neat.
C
Yeah, for sure. I think that was all for the new release.
C
And provide any feedback in Slack.
E
Yeah, there was one really cool thing: the metrics generator, the metrics it emits have exemplars, I think, right? Yep, yeah. Because we're receiving traces and then generating metrics from them, that trace will actually show up as an exemplar in the metrics that were generated. So, for instance, if you're looking at the metric that's the latency for a certain service, it should have exemplars. That's really cool: you just click it and go straight to some of those original traces.
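For reference, this is roughly what an exemplar looks like on the wire in OpenMetrics exposition format: the `# {...}` suffix ties a trace ID to the histogram bucket sample. The metric and label names here are illustrative, not necessarily the exact names the generator emits:

```
# Histogram bucket with an exemplar pointing at one of the source traces:
# bucket value 42, exemplar labels {trace_id}, exemplar value 0.43, timestamp
traces_spanmetrics_latency_bucket{service="checkout",le="0.5"} 42 # {trace_id="2f3e0cee77ae5dc9"} 0.43 1652900000.0
```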
A
Our SE put this amazing demo together with span metrics and service graph metrics, and I'm going to find that and show it off a little bit at the end of the call. We'll look at this in the discussion on the exemplar stuff; it's really neat. Cool. Yeah, exemplars are super powerful, but it's hard: you have to re-instrument. In some cases you have to find a trace ID and add it to the instrumentation.
A
It's just a pain, and as a result it's not been very widely adopted, despite being an extremely powerful tool. I really like that the metrics generator just gives exemplars to your users automatically off the span metrics. You just get this feature: you don't have to make any changes to your code, to your instrumentation, to the way you're collecting data, or to your Prometheus server. Everything's just going to work, and then you get cool exemplars and you're happy.
A
That's all it takes. Cool. TraceQL: TraceQL has also been a subject of discussion for the past months, frankly, but we continue to make progress here. This is the query language for traces that Grafana is working on for Tempo. There are two PRs there. I really should have changed the order; in fact, I'm going to change the order right now, because they kind of have an order to them.
A
The first is the PR we put up a month or so ago about the core concepts, the main ideas we want in the language. If you've not seen that, definitely check it out. It's about a one-to-two pager, I think, and the idea behind it is just to talk about the major ideas. Please give feedback there; we've had a lot of good feedback already.
A
I don't know if the licenses that we're used to, like Apache 2 and AGPL, make sense with regard to the spec, but we definitely want to make sure that the parser, or the implementation, has very clear licensing in case you're interested in using some of that code. So I'm trying to get some answers internally. It was a good question from the community, and as soon as I can get that, I'll put it in the thread.
A
The second link is exciting because I just put it up today. That doesn't make it exciting, I guess, but it is exciting because it is a parser. The parser is about 80-90% done and it covers the full spec. The core concepts doc is not the full spec, but it gives you the main idea, and this does cover the full spec. In fact, it might even go overboard.
A
It builds what's called an AST, an abstract syntax tree, out of that string. Then we can use that structure, which is the structure of the query, to go execute it against, hopefully, some Parquet blocks in the near future. Two members of our team are headed this way from two different directions: I've been working on the TraceQL side, while Marty and Annanay have been pushing hard on the Parquet side, and we have some cool announcements about that as well.
A
If you are looking at the parser, I recommend two files in particular. The first is expr.y: this is all the rules, and if you look at it at first maybe it's confusing; maybe page down a bit. I think if you start reading it, it'll start to make a little bit of sense, since the yacc syntax is pretty straightforward.
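To give a feel for what "parse a query string into an AST" means here, below is a deliberately tiny sketch in Python. It is not the real TraceQL grammar or the expr.y rules, just a toy illustrating the tokenizer-to-tree idea:

```python
import re

# Toy tokenizer/parser sketch (NOT the real TraceQL grammar): turns
# 'field op value' conditions joined by && or || into nested tuples.
TOKEN = re.compile(r'\s*(&&|\|\||[<>=]=?|[\w.]+|"[^"]*")')

def tokenize(query):
    # flat token list, e.g. ['duration', '>', '100', '&&', ...]
    return TOKEN.findall(query)

def parse(tokens):
    def condition(i):
        # condition := field op value
        field, op, value = tokens[i], tokens[i + 1], tokens[i + 2]
        return ("cond", field, op, value), i + 3

    node, i = condition(0)
    while i < len(tokens):
        op = tokens[i]            # '&&' or '||'
        rhs, i = condition(i + 1)
        node = (op, node, rhs)    # combine left-associatively
    return node

ast = parse(tokenize('duration > 100 && span.http.status_code = 500'))
```

The real parser is generated from the yacc rules rather than hand-written like this, but the output is the same kind of thing: a tree the engine can walk to execute the query against storage.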
A
It makes sense once you start digging in and thinking about it. So even if you've not seen this kind of thing, it might be fun to read a bit. It's a little bit different than your normal code, and I have had fun writing it, if I'm being totally honest, so maybe you could enjoy reading it as well. Also, another cool file to look at is the parser test, because this is going to show you
A
it's just got tons and tons of examples of queries being tested, of course, against the expected structure they should generate. Both of these files might be worth a quick skim if you're interested in the language, or in developing languages generally; I think those are both cool touch points to check out. But yeah, this is hopefully coming soon.
A
Like I said, we're rapidly approaching Parquet in the backend. Having a parser is good, but there's still a distance between marrying up a parser and actually executing it against code. I think we're going to have to come to a decision internally about what we support first. We're likely to support a subset of this first, even if we can parse everything, and then slowly move forward and add more and more features over time.
A
So there will maybe be errors at first, like "unsupported" for the unsupported parts of it, and we'll be clear about that in the documentation, which Kim will help us with. That's right. Cool. So TraceQL is on the way. I'm really excited to finally get this parser together and get it out.
A
I expect to merge that hopefully in the next week or so, and like I said, I'm really trying to get some answers internally for this licensing stuff, and then we'll get the design doc in there as well. And for the entire community, for anyone: both PRs are public PRs. Of course we invite anyone to comment with any thoughts or improvements, or if you just want to nit my PR a bunch, I'll make some changes. Cool. Other than that, I think...
A
GitHub needs to get it together. Okay, cool. Please, please jump into that.
I'm going to be begging Marty for a real review, probably in the next week or so, as well as the rest of the team, to help me get it into shape. He's going to tell me why it's going to be impossible to run against Parquet, and we'll probably make some adjustments and then go from there. Cool. And then... oh, sorry.
D
Quick question: is there a plan to leave an interface kind of like the current Tempo search interface, where, instead of writing TraceQL, a user could just select fields from dropdowns?
A
Yep. The other languages, like PromQL and LogQL, all kind of started with just this text box, and then they've tried to add builders and some of these other features, and we're kind of doing the opposite. So we have the existing UI. I really wish Connor were on, because he could talk about this more officially; he's our Grafana dev. But I want it to work the opposite of that. So we already have kind of a builder.
D
Yeah, no, like, I'm excited for TraceQL, but obviously we've got devs who, even though we use PromQL and LogQL pretty extensively, still only write queries maybe once a week, maybe every couple of weeks, and so they're definitely not as familiar with it. So it's nice for them to know, like, hey:
D
I just want HTTP codes that are this, or durations that are over this, and so it's really easy for them to be able to go and search that. We've had some search performance issues with Tempo search (I think we're still on 1.3, though), but it's still been nice there. In fact, we had an issue that we had to debug where the app wasn't emitting a trace ID into the logs.
D
So we couldn't correlate; we knew there was a trace somewhere, but we couldn't find it, and before 1.3 we wouldn't have been able to do that. So that little dropdown menu thing, or whatever, is very handy for them. But as far as the total percent of workload, I'm not sure I really know that. I mean, I probably wouldn't use it as much, but I know it would definitely get used by our team.
A
Cool. I'm interested to hear you're using search with 1.3, because it's not very fast. That's kind of what Marty in particular has been very...
D
It's not fast, and in particular we also basically kind of put out a "don't use the span name field": don't click on the span name box, because it will just freeze for minutes.
D
We do a lot of span names named by API path, and also in some cases we have spans that are attached to a parent trace, and so the span name is, like, the Redis data function. There's some cleanup that we obviously have to do, because there's a span that's named like "and and and and", and it gets a little bizarre, but yeah.
D
But no, we knew going into it that it was very much in beta, but it's the only way that we have to get those live traces that, for whatever reason, we didn't log a trace ID for. So yeah, we like it, but it definitely needs some improvement, I agree.
A
Completely agree with that. I'm going to transition to Marty, who's going to tell you...
E
Yeah, so the span name dropdown is just really slow to populate; there are probably a million entries or something. You can actually type "name=" in the tags field; it's the same tag underneath. I feel like autocomplete might still get you there, but that would be another workaround to search that field, if you really wanted.
D
A lot of times we know what we're looking for; it's just getting the UI to respond. Like, if somebody goes and clicks that dropdown box because they know, "oh, this thing is the span name, hey, I'll go search that first, because I know what I'm looking for," and then it just basically freezes on them.
A
We need a config option to disable that box. Yeah, that's been tough for us too; I feel like we still struggle with that thing a lot. I think it's on Grafana's side, though. I think there are just so many options that it bogs everything down.
A
I don't know; we should talk to Connor about that and see if he has thoughts, or we should dig a little bit into it. But yeah, we've seen slowness in that box, and I don't think 1.5 is going to fix it, because I do think it's just due to the huge number of span names that it tries to populate there.
E
Yep, yeah. Cool. Hey, so we have been working on converting our block format over to Parquet. We actually published this just after the last community call, but I think it was close. So we have a design proposal out there, the first link here in the document, for a Parquet schema that we think works well for tracing. In creating a schema there's not really one size fits all; there are a lot of different directions
E
you could go with Parquet. It's very flexible: there are different ways to project attributes into columns, different ways to lay out the data or even sort it, and different data types you can use. So we went with something that we feel is a balancing act between block size, search speed, and some other factors. There's a lot of rationale in there.
E
You can dig through what we're going for, and maybe some other thoughts we had or directions we'd like to go with it in the future, because we don't see this as a set schema; there are definitely more things we want to try here. So yeah, that design proposal is there; read through it.
E
What that means is, if your data contains these tags, then when you search for one of them the search is isolated to a single column: for instance, service name, span name, or even things like http.url. In the normal data model every other tag just goes into this generic key/value list, so it's mixed in, but if you're using these common tags it'll be a lot more effective. So we think that will work well. I'd love to hear other use cases or other feedback, just in general, that kind of stuff.
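A rough picture of the layout being described: a few well-known fields promoted to dedicated columns, with everything else in a generic key/value list. This is illustrative only; the actual column names and nesting are in the design proposal:

```
Trace
├── TraceID                       dedicated column
├── Resource
│   └── ServiceName               dedicated column (searchable in isolation)
└── Spans
    ├── Name                      dedicated column
    ├── HttpUrl                   dedicated column
    └── Attrs [{Key, Value}, …]   generic key/value list for all other tags
```

Searching a dedicated column only reads that column's pages, while a tag in the generic list requires scanning the mixed key/value data, which is why the common tags are so much cheaper to query.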
E
I know, it's awful. So we are actually already running this internally in a development cluster. It's pretty small, at least compared to where we want to go with it: about ten thousand spans per second. It's not melting down catastrophically, so I'd say that's success. So yeah, that's pretty cool.
E
We definitely have a few more things to work out, so I don't have any numbers or metrics to show here comparison-wise. It's kind of a newer cluster with a new traffic load, so I don't have a before and after, but we do expect it to be more resource-intensive on the ingest path: creating the Parquet is more intensive than our existing block format, which is just a lot of very simple protobuf bytes.
E
But the flip side is that now it's more optimized for the read path. So I guess I'm not sure what else to say about that, other than: yeah, we're definitely making some progress here.
E
Yeah, so we are verifying that it does work with off-the-shelf tooling: there's parquet-tools, and we're verifying that the file is compatible and readable, so that's looking good. I haven't used any sort of big data tooling like that; it's been a while since I've worked with those, and I'm more focused on this other stuff. Athena is definitely a really cool one that we want to support. I think to really support that,
E
well, we'll need a couple of other changes in Tempo, and I'm not really sure how those will play out. One of them, for instance, is the block file path in an S3 bucket: I know Athena works well with the auto-discoverable partitioning in the folder paths, and Tempo doesn't have that right now. And then the other thing is, as we change the schema, I'm wondering how well those tools would be able to adapt to different schemas. So that's another consideration.
E
So that's kind of where we're starting. Other than Athena, do you use anything else?
D
Not at the moment. My interest in the use case for this is: part of the reason why I run Tempo is because we basically couldn't pay for all the egress for our trace and APM data. We used to use one of the vendor APM solutions, and when we went away from that to basically bring it in-house, one of the things that we lost was web transaction logs, and that's something that Tempo didn't do early on. I mean, even though we captured all the spans, we have the data in Tempo.
D
There wasn't a way to do it by trace ID before the 1.3 release anyway. So what I ended up having to do is get another APM solution, where I feed all of the OTel data as well, and then it goes through and strips out, or finds, all the web transactions, and then I basically just drop the rest of the trace data that I don't need, that's not the web transaction. And that system actually costs me more than the rest of Tempo.
D
Tempo does everything, and the thing that only keeps the web transactions basically costs more than what Tempo does. So I'm really looking forward to this. One of the things I was looking at was either doing it in-house, like putting something in there that dumps it into a column store like ClickHouse, or, when you all announced that you're going to do the Parquet thing, then considering, like, oh, it'd be great
D
if I could use something like Athena, or something else where I could use that same data set and then go find the traces that would relate to a web transaction, and then just have a graph or a table or whatever that gives those results the way that they kind of want to look at them. So what is interesting to me is just the idea of using the data that Tempo's already processed to present that same sort of information.
E
Yeah, that's really cool. That's definitely something we'll have to keep in mind as we do this: Tempo is not the only consumer of this Parquet data, and we want it to work well with other tooling, for sure. Could you repeat, though: let's say you have the bucket of Parquet and Athena. What would you be looking up, the trace ID, or was there other correlating data?
D
Basically, it would be finding traces that were HTTP transactions and then finding your kind of standard information from the attributes: what path it went to, your response code, those sorts of things. It's basically just, if your HTTP transaction was the parent trace, dropping all those spans that are below it, effectively, and producing kind of a log list of: this request came in on this path with this code, and so on.
D
That's basically all we're doing. Ideally, I would like to expand it to errors as well, right? Like, you could attach the error to the trace, and then you could have transaction logs and error logs that get generated out of the trace information, and then we don't have to instrument that in two different places, or multiple places. Right now we've got a completely different solution for error handling.
A
Definitely, yeah. I think one of the reasons we were pushing so hard on Parquet is exactly what you're describing: we saw so much value in the tools that already exist. And I will add, it's a goal for Tempo to work with as much as possible, but we can't dogfood all of this; we can't test all these things ourselves. So if you do find it not working with a specific tool that you want, we want to hear that feedback.
A
Unless something weird happens, we'd love to hear feedback about that. We'd love to see how it works in Athena, and I think we're going to try to do a little bit of that ourselves and do some blog posts and show it off, because it's definitely something we want to highlight.
A
This might actually meet some of the needs you're talking about, Lucas. This is all built off span metrics, which is in 1.4. So none of these are metrics generated by an application; none of these are custom metrics.
A
These are all generated from the traces that are sent through Tempo; we generate the metrics and push those to a remote write endpoint. You can see we have request rates by endpoint, latencies, error rates. And this is all fancy: I don't personally like gauges and such, but it looks cool for a demo, doesn't it? Our SE set this up, by the way; I really wish he were on here to give you a better demo than I can.
A
It does amazing things. Not only that, but we have the top 10 slowest endpoints, so we can really quickly dig in and see if something is worse than it should be. He even set up these awesome searches, so I can immediately jump over to a search based on the endpoint that was there. This is a Grafana table; he's got the links set up, so I can jump here, and of course now I can add more parameters.
A
Maybe I only want to see things longer than 100 milliseconds, or a second, or whatever. So, really cool features here. Exemplars: we were talking about how remote write, or the metrics generator, creates exemplars. This is kind of a bunch of stuff slammed into one, a lot of different endpoints, but you could break this out; you could of course use Grafana templating to select individual endpoints. But I have exemplars now, so I can see this one here; apparently they're everywhere.
A
I can pick this one up here; it's a particularly latent endpoint, or a particularly latent query. Apparently this is set up for GET, the enterprise version, but I can jump immediately over to the trace, right? I had an exemplar; I'm now on the trace. The log links are here; I have all of my observability signals set up in one place. I really love what Koenraad and Mario have done over the past few months.
A
Building the metrics generator like this adds so much value to the traces that are pushed through Tempo, if you take some time to set it up. This shows kind of the high end of what can be done if you spend some time with it, but it's so valuable and it's so cool. So congratulations to them as well.
C
Yeah, so this demo uses GET, but the metrics generator, you know, these are all open source features.
A
If anybody has questions about anything we've talked about, or just about Tempo, or, you know, life, I suppose, feel free to put them in chat, or unmute and say something; it'd be fun to talk about whatever. And if not, then we can go our separate ways. Oh, thanks, Fausto, I appreciate that. Yeah, that dashboard is all the work of Koenraad and Mario on the backend, and our SE; he sometimes joins these calls.
A
I wish he were here to hear the praise, but he's an extremely good SE and he does a lot of great work. He put that demo together and it's super cool.
A
Cool. Well, it's time to be on our way. I appreciate you all showing up, and we'll be doing this again in a month. I think ObservabilityCON is coming up as well; we're going to do a demo there too.
A
It's going to be a lot of these same things. We'll show off some new UI for the metrics generator; we're going to have more data on Parquet, hopefully more high-level performance data at larger scales, both on search and on how much more CPU and memory it takes; and maybe TraceQL will make a few steps as well. But we're excited to talk about that at Observability... oh, sorry, it's GrafanaCON!
A
We have two conferences; it's hard to remember. It's GrafanaCON, and it's in about a month or so, I think. So look out for that, and if you want, jump into our session and you'll see a lot of the same things. It'll be a little bit flashier, though, I think, and there'll be a Q&A. Cool. All right, everybody have a great month, and we will see you when we see you.