From YouTube: Tempo Community Call 2021-03-11
Description
Points of discussion:
- Future Tempo development
- Exemplar Demo
- Instrumentation Guides
- 0.6.0 and upcoming 0.7.0
Rolling meeting notes & announcements: https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY
A
Starting this community call. We have our agenda doc over here; I'll link it in the chat. Feel free to add anything you'd like to it, and if there are no community concerns or questions, we have plenty to talk about on our end, so we'll just fill it out as we go along.
D
My previous experience has been mostly building distributed systems, and I believe observability, and therefore tracing, is fundamental to building and operating those systems.
D
I'm really excited to join this project. I think tracing is a really fun field right now, and there are a lot of challenges in making things easier for the user.
A
We added force-path-style addressing for S3 and some better S3 configuration options. We added per-tenant block retention, which is good for us: if you're running a multi-tenant Tempo cluster, that's very helpful. If you're not, then it doesn't do much for you.
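As a rough illustration of what a per-tenant retention override can look like: this is only a sketch, and the override key (`block_retention`), tenant IDs, and file layout are assumptions, so check the 0.6.0 docs for the exact names.

```yaml
# per-tenant overrides file (sketch; names and values are illustrative)
overrides:
  "tenant-a":
    block_retention: 720h   # keep tenant-a's blocks for 30 days
  "tenant-b":
    block_retention: 168h   # 7 days
```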
A
I thought we added something pretty major there, but I guess not. It must have been 0.5.0 where we added stuff. Oh, we added exhaustive search, which is pretty cool because it'll handle longer traces better.
A
And yeah, a handful of things there. In 0.7.0 we have a number of improvements to speed up search, including the recently merged change to add paged access to the index file.
A
Previously we were loading the entire index at once, and for our larger blocks, in the tens of millions of traces, we're seeing indexes upwards of a meg, which just takes a while to pull off of GCS. So to reduce that, and also to reduce the total throughput needed for the queries, we made it so the index is accessed by page; I think we have it configured with a default of 250 kilobytes. So instead of pulling multiple megs, it'll be pulling 250k at a time.
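For context, these knobs live under the block settings in Tempo's storage config. This is only a sketch: `index_downsample_bytes` is the option named later in the call, while `index_page_size_bytes` and the values shown are assumptions to illustrate the shape, so check the docs for your version.

```yaml
storage:
  trace:
    block:
      index_downsample_bytes: 1048576  # bytes of trace data covered per index record
      index_page_size_bytes: 250000    # assumed name; read the index in ~250 KB pages
```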
A
I think it's hard to tell the latency impact immediately when you make these changes, because only the new blocks, the v2 blocks which we just added, support it. So as it rolls out over the next two weeks internally, we'll get a better feel for how much it improves query speed. And I think we have maybe one or two other major improvements there. Caching is one of those; I think Daniel wants to look at that pretty soon.
E
Yep, the idea is to use the background cache that's in Cortex right now, which it uses a lot. So yeah, it's going to improve a lot, along with all the other things around caching.
H
Yeah, the caching. Right now it's on the hot path of querying: every time we fetch a key from our backend and it's not in memcached, we actually write it to memcached as part of that query, on the hot path, and sometimes write latencies to memcached just shoot up; we see like 200 milliseconds, and it affects our query latencies.
H
The background cache is supposed to handle all of this as a background process: buffer writes in memory and write in the background. It's supposed to be magic, according to the Cortex team, and we believe so. Great.
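If you want to see the shape of that, the Cortex-style background cache looks roughly like this once wired into Tempo's storage settings. A sketch only: the field names are taken from the Cortex cache config and the values are illustrative assumptions.

```yaml
storage:
  trace:
    cache: memcached
    memcached:
      host: memcached           # illustrative
    background_cache:
      writeback_goroutines: 10  # async writers keep the query hot path unblocked
      writeback_buffer: 10000   # writes queued in memory before being dropped
```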
H
Yeah, Joe, do you want to show us the latency improvements from the page-based indexes? I think that graph is really cool to look at.
A
Yeah, okay, let me do that, good idea. So, let's see, this is presenting now. This is our reads dashboard. This dashboard is available in the Tempo repo; it's under the tempo mixin, which is all jsonnet, and then there's a generated version also sitting there, so anyone's welcome to pull this and load it into Grafana.
A
This dashboard has a really nice overview of the read path in Tempo, and this is the query frontend, I believe. Yeah, it should be, right, the query frontend. So this kind of spiky area is where I was actually doing the rollout and reintroducing Vulture. We made some changes to Vulture (Daniel did, actually), and so it was kind of down during this period, but once it came back we're seeing, you know, significantly improved read times.
A
I think there are two changes that impacted that: one was the page indexes, and the other one was that we reduced the size of what's called index_downsample_bytes, the range in the main data file that each index record covers.
A
The idea here is generally to reduce the total megs per second into the queriers, because that is the bottleneck on querying right now: they have to pull so much data to answer your queries. These changes are designed to reduce that significantly. And then the final change I have in mind, which is an adjustment to our bloom filters, should maybe go in in a couple of weeks here.
A
I'd say that will, I think, do a really good job of allowing us to filter out more. Right now our bloom filter false positive rate is five percent, which is way more than I'd like, but it keeps the filters small enough to load, basically. So we're going to make some changes that allow us to significantly reduce the number of blocks we have to load. At that point I'm going to feel pretty good about the KV store, and we'll be ready to be really strong on what's next in Tempo.
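For reference, that false positive rate is a block-level setting in Tempo's storage config. A minimal sketch; the value simply mirrors the five percent mentioned above:

```yaml
storage:
  trace:
    block:
      bloom_filter_false_positive: 0.05  # lower = bigger blooms, fewer wasted block reads
```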
A
Compression: I think we're seeing something like a six- or seven-to-one compression ratio on our blocks, which is an amazing cost savings. It actually improved query speed as well, again related to that whole idea: even though it's extra CPU cost, because we have to decompress once we receive it, just reducing the number of megs you have to pull from the backend is huge for reducing query time and reducing load on your queriers.
A
So both those changes have been helpful in that area as well. The compression PR supports a whole bunch of different options, and we use zstandard. I wouldn't say it's the only supported option, but it's certainly the only one that's been proven operationally at scale. So there are other options there; I would stick to zstandard, it's the default, but if you do explore other options, feel free to share the results with us.
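A minimal sketch of where that choice lives in the config; `zstd` is the default, and the other codec names vary by version, so check the docs before switching:

```yaml
storage:
  trace:
    block:
      encoding: zstd  # the default, and the codec proven at scale; alternatives exist
```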
A
Right. Previously Vulture queried Loki, did all this stuff, and the only thing you could really verify was whether the trace was there and whether all of the parent-child relationships matched up. Now, because we're actually creating the trace, it's nice: you don't have a Loki dependency, and we can also verify that the trace is actually correct in all of its features, because we know what it should look like, versus, you know...
A
...just "is the parent-child stuff set up right?" Cool. And yes, somebody was looking to PR that to the Helm charts, so I've got to go deal with that. Speaking of Helm charts: we have new Helm charts.
A
We have new Helm charts available; I'll put the repo in the doc. It's a shared repo that all Grafana Helm charts are in: grafana/helm-charts.
A
We've recently added both a distributed and a single-binary version, and, you know, if you're into the Helm scene it reduces the burden of deploying. I also think it does a good job of highlighting how we would deploy it. So if you are going to deploy it a totally different way, your way, this kind of work is helpful just to see what the relationships of Tempo's services are, how those services work together, and how we set up the services in Kubernetes, to help you set it up your way, I suppose. And this is huge... there we go, that's a little bit better.
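For anyone who wants to try them, installation follows the usual Helm flow. A sketch: the chart names below (`grafana/tempo` and `grafana/tempo-distributed`) match the two charts described here, but check the repo README for current values.

```sh
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# single-binary deployment
helm install tempo grafana/tempo

# microservices ("distributed") deployment
helm install tempo grafana/tempo-distributed
```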
A
We've had some good action on the Helm repo, a lot of help improving those. I basically run them and make sure they work in the most basic sense, but we have a lot of good community members keeping those up to date, adding new features, and filing issues, and we've seen some good traction on the Helm charts. People like Helm.
A
Two other things I wanted to highlight; I think these are my two favorite changes recently. Joe's corner! Joe's favorite changes.
A
We have the setup for multi-arch images now; I think we have arm64 and amd64, and if you have other needs we can add those. It just makes it a lot easier to do a docker pull, because it will detect your architecture automatically and pull the right image. So you don't have to figure out which image to pull, it'll do it for you, and it also makes it possible to run on arm64 without recompiling.
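In practice that looks like a completely ordinary pull; the multi-arch manifest does the selection. A quick illustration (the tag is just an example):

```sh
# One tag, multiple architectures: Docker resolves the manifest list
# to the image matching the local platform (amd64 or arm64).
docker pull grafana/tempo:latest

# Inspect the manifest list to see the per-architecture images.
docker manifest inspect grafana/tempo:latest
```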
A
So previously you would have had to do your own work to build it yourself, but now we have the Docker images for it. arm64 and amd64 are, I think, by far and away the most common architectures.
A
If any others are needed, just let us know and we'll see if we can't support those as well. And then, I owe Marty for this one, for sure. This change was something that's been haunting me for, like, six months, nine months, I don't even know. We just had the most awful vendor situation ever. The OpenTelemetry team made their generated proto internal, and it just destroyed our vendor situation; it made it very difficult to upgrade, with a lot of conflicting issues there. Marty ironed it all out, and our vendor process actually works now.
A
go mod vendor actually works in our repo, where it didn't before; go mod tidy works in our repo, where it didn't before. It's just a fantastic improvement.
I
Testing and benchmarking it, there wasn't really a downside; the impact wasn't very bad at all. I mean, it was probably within the noise of any benchmarking in general. So I think it's good. Cool.
I
Oh sure, actually we should... although, so this is not actually in Tempo, but it is related to it in general. This was something that we recently did; if you're using the Grafana Agent, this would be important.
A
So, what's that?
I
Right, yeah. So I'm not sure if it's been released yet, but in a recent commit the Grafana Agent was updated. Internally it uses the OpenTelemetry Collector to provide the multiple receivers and the single output stream of OTLP up to anywhere, whether you're using it on-premise or with Grafana Cloud. But that output stream was uncompressed, even though internally it supported compression. So it was a small change to the Grafana Agent to expose that setting, and it is actually enabled by default.
I
So it will do gzip compression for all of the egress now by default, but it can still be turned off if needed. Yeah, the savings there are great; I think we saw about a 75 percent reduction.
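For readers following along, this is roughly where that setting surfaces in the Agent's tempo config. A hedged sketch: the endpoint is a placeholder and the exact field names (`push_config`, `compression`) may differ between Agent versions, so treat them as assumptions and check your version's docs.

```yaml
tempo:
  configs:
    - name: default
      receivers:
        jaeger:
          protocols:
            thrift_http:
      push_config:
        endpoint: tempo.example.net:443  # placeholder
        compression: gzip                # now the default; set to none to disable
```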
A
Cool. And also, Marty's been on compression lately, I guess. We also enabled internal snappy compression between the Tempo components, which improved performance as well, I believe.
H
Yeah, another one; please add this to your favorite changes. We're finally, finally dropping support for tempo-query: we're deprecating tempo-query, yes, and now Grafana connects to Tempo directly, and we can finally call it a real single-binary deployment.
A
That is... I know, I think, Zoltan from the Grafana team added that support.
B
That is incorrect; that was pushed out another week. The tentative date for the 7.5 stable release is March 23rd, but beta 2 will be around March 16th-ish, at the current time anyway.
B
Well, speaking of Grafana releases, I heard in an earlier meeting that you might be looking at adding support for the node graph to Tempo, or an API change to support node graph and make Tempo work happily with it. Is that true, can you confirm?
A
There are two developments in that world. One is... I saw this, and I guess I'm not super familiar with the details, but somebody from the Tempo team, I'm sorry, the Grafana team, added support for a single request to draw a graph. So to draw a service graph, you do a request to Tempo.
A
You get your normal trace view, which shows all the spans, and it just uses those spans to build a service graph and show you the relationships between the services in that one query. I'm pretty sure I saw some work on that from Andre. And then, secondly, we are talking about how to do a metrics-based service graph, and we'll maybe get to that in a second. But we have this idea for some next steps for Tempo.
A
We're going to, not in this call, but very soon, share a roadmap which will set out some of the next features we want to add. Service graphs is on the internal list right now, and I expect it to make it onto this roadmap. It would basically build a set of metrics over time, of course, and then we can build service graphs on top of those metrics.
A
The advantage here is you can look at a service graph at different points in time and do comparisons, week over week or whatever, by storing the metrics in Cortex, basically.
A
Yeah, why don't we just talk about some of these next ideas for Tempo. So service graphs, like we said, is on this list. We also have this idea for what's kind of like an auto-indexing idea. Currently with Tempo, you know, it's a key-value store, if you've been using it you're aware of this, and you log your trace IDs. That allows you to build your custom index, your own way of finding traces.
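To make the "log your trace IDs" pattern concrete, here is a minimal hedged sketch in Go. The logfmt field names are just conventions, and how you obtain the trace ID depends on your tracing SDK; nothing here is a Tempo API.

```go
package main

import (
	"log"
	"time"
)

// logTraceID emits a logfmt-style line tying request metadata to a trace ID,
// so the trace can later be found by querying the logs (e.g. with LogQL).
// Obtaining traceID from the active span is left to your tracing SDK.
func logTraceID(traceID, path string, status int, elapsed time.Duration) {
	log.Printf("msg=\"request complete\" path=%s status=%d duration=%s traceID=%s",
		path, status, elapsed, traceID)
}

func main() {
	logTraceID("2f4e8c36ab1e4d17", "/api/posts", 200, 87*time.Millisecond)
}
```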
A
So these are the two ways we envision people finding traces in Tempo for now. But we recognize that this can be difficult, particularly in an environment where you have a huge number of microservices and you don't want to add this kind of logging or instrumentation everywhere. And we all know that people are in situations where they have binaries they can't even recompile, or other situations where it's just too large of a cost.
A
The barrier to entry is too large, I would say. So we're building this idea of doing this for you, probably in the Agent: the Agent would basically generate it for you by analyzing the spans that are coming through, build log lines which it would push to Loki, and then let you automatically index your traces and build some search around that, adding some common fields, of course: service names, namespaces, cluster names, latency.
A
You know, the common things people are used to searching for. So we are looking at a way to basically build that for you by watching the traces as they come through the Agent, and we think it will help a lot of people adopt Tempo and get on board without having to make changes to applications they already have and have already instrumented.
A
On the same note, we are looking at ways to add metrics based on your spans: streaming metrics derived from the trace data. There's an OpenTelemetry Collector processor that we want to make use of and improve, which watches spans, reads the span names and other information out of each span, and records latency histograms. We'd like to add those features to Tempo by vendoring that and bringing it in. There's also support in the OpenTelemetry Collector for tail-based sampling.
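For orientation, wiring a span-to-metrics processor into a Collector config looks roughly like this. This is a hedged sketch based on the contrib `spanmetrics` processor of that era; the processor name, fields, and endpoints are assumptions, not something stated in the call.

```yaml
processors:
  spanmetrics:                    # assumed contrib processor name
    metrics_exporter: prometheus  # derived latency histograms are written here
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"      # placeholder
  otlp:
    endpoint: "tempo:4317"        # placeholder; spans continue on to Tempo
```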
A
Adding these features to the Agent will allow more flexibility in what it can do, what it can push into Grafana Cloud, and how you can use Tempo. Those are, I would say, our short-term goals for Tempo: its feature set, its relationship with Grafana Cloud, and using it along with Grafana Cloud.
A
We're very excited; Marty's going to do it all for us. There we go. Right, right, cool. Marty, do you want to do this exemplars demo?
I
Sure, yeah, all right. If it's okay, I will share a browser window and probably click around a bunch of different tabs.
I
Is it working? Okay, cool. Okay, so to preface this: this talk is kind of about exemplars, but it's also about the news that seeing them in action is pretty easy now with this TNS demo, so I'll walk through that.
I
But to preface it a little bit more: exemplars are still very close, but they are not quite there yet; we're still waiting on a couple of pull requests to be merged. But it is very easy to see everything in action, and that is what I will walk through. I'm not sure it will look much different later, other than this specific repo here, which makes it easy. Cool.
I
So Prometheus would... yeah, okay, cool. So this demo here is called the TNS demo, and TNS stands for The New Stack. What I have on this tab is what the demo actually runs: just a toy application which kind of looks like Hacker News, where you can click on things and so on. It's really a test ground to show all the different demoable features between the different tools, because the applications are so simple.
I
This setup should run out of the box. It does currently use the Loki plugin for logs, but to see exemplars that's actually not necessary, so with some small changes you could drop it, or if you already have it, it's very easy. docker-compose up will start the different applications, and also Grafana with the built-in dashboard. There's a built-in load generator, so it will start generating its own traffic, which will start populating these graphs.
I
After some time you should start seeing things like this, and what's different here is that we have exemplars. These three graphs show the different latencies for the application, and as part of the load generation and the application, they do some synthetic actions here, so there are deliberate failures and higher latency. Without doing anything there will be variations in traffic, which is helpful for seeing the different features. So, just as a walkthrough:
I
This is a normal graph, but with exemplars turned on, each of these dots points to a trace which matches the series on this graph. Maybe down here on this bottom one it's easiest to see how the exemplars follow along the histogram graph itself: when the graph is high, we can see the exemplars being plotted high. And so, for example, this one here.
A
Go back... I don't know if this was in earlier versions of Grafana or was added recently, but if you mouse over an exemplar, it gives you the labels, which is kind of neat. So it'll show you the status code for that exemplar, yeah, the route (I guess it's root here), the method, other kinds of information, which I don't know if the old one did or not. When I first noticed that, it was really cool, because you can mouse over a few, find the one that has the status code you want, or the path, or something, and then grab that one and use it to look up a trace.
I
Yeah. So let's walk through a couple of the other features that are going on here. This demo is good because it has exemplars, but also because of a couple of other things going on. So this button here, I guess we would call this "metrics to traces": this is one workflow where you have metrics and can get into the corresponding traces. Cool.
I
Another feature here is traces to logs. We can click the log button for a given trace and it will find the logs based on the span, the time range, and also some other attributes that it adds here. So in this case, for that trace, there are certain attributes that it will pre-populate into the logs query. Here's the job; in this case this is the container that was doing the load generation.
A
I was trying to come up with a clever way to think about metrics, logs, and traces, and that flow you just showed, and this is kind of what I came up with. The metrics tell us what is happening, right; they tell us how bad it's happening and what is happening on our endpoints.
A
The trace tells us where it's happening, because you can zoom in on a specific process, see errors in the process, and see how that request propagated through your system. And then the logs tell you why it's happening. So you start with a metric and you see an error; then you look at the trace and you see which part of the trace is having issues, which process; and then you click on the logs and you go see, you know, memcached is down, or redis didn't return in time, or whatever. Right, that's kind of where you start.
I
Yeah, definitely. So there is a link here that I'd like to show about how the latency metric and the exemplars are actually recorded under the hood. This demo is using the Weaveworks middleware, so the endpoints are already instrumented to record the latency. That's where the metric comes from, but also the exemplar. So there are a couple of lines here... I guess everybody can see this okay? Yeah.
I
This is part of the Go SDK for Prometheus, and there is a new method introduced to record an exemplar: it's called ObserveWithExemplar.
I
So in this case the middleware checks whether the current trace is being sampled (and in this case we're 100% sampled), and if it is, it calls ObserveWithExemplar with the trace ID that was extracted from the context. Otherwise it records the normal metric with no exemplar. Cool.
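For reference, the client_golang pattern being described looks roughly like this. A sketch assuming prometheus/client_golang v1.9+ (where the ExemplarObserver interface appeared); extracting the trace ID and the sampling decision is left to your tracing SDK.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "request_duration_seconds",
	Help: "Time (in seconds) spent serving HTTP requests.",
})

// observe records a latency sample, attaching the trace ID as an exemplar
// only when the current trace is sampled.
func observe(elapsed time.Duration, traceID string, sampled bool) {
	if eo, ok := requestDuration.(prometheus.ExemplarObserver); ok && sampled {
		eo.ObserveWithExemplar(elapsed.Seconds(), prometheus.Labels{"traceID": traceID})
		return
	}
	requestDuration.Observe(elapsed.Seconds())
}

func main() {
	observe(135*time.Millisecond, "0af7651916cd43dd8448eb211c80319c", true)
}
```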
A
If you have interest in exemplars, if it's something you think your company would use or you would use, then find your OpenMetrics client, your Prometheus client, look for an issue related to exemplars, and thumbs it up. If there's no issue, file a new one. It'd be great to get some community support for exemplars in these clients, to let the maintainers know this is a feature that's good for you or interesting to you.
H
So they're now on dashboards?
I
Good, thanks. Another part of this is that they are on dashboards, but, to point out, they do require the new time series panel type as well. These panels on the left are actually the previous graph panel.
I
These ones on the right here have been updated to the latest time series panel. It's maybe a little harder to see here, but the crosshair is a little different, so that's one way to tell. And if we were to edit one... so there's a second step to enabling exemplars: once you have the time series panel, there's a switch, now included, which turns on exemplars.
I
So we'll walk through looking at one. This is the current graph panel, the one that's most popular, and there is "time series", so that's the switch to this one, yeah. And I'm not sure about the compatibility; I mean, in my experience it seems to work well. Yeah, you're right, it is labeled beta, but yeah.
B
The graph panel is the last one that is still in Angular; it's been around for a long time. I think it was the only actual visualization that Grafana had, so people just kept packing functionality onto it, and so it's really like three or four different types of graphs all in one. As we rewrite the graph panel in React, we are actually breaking those out into separate types of graphs, and you'll see the rest of them coming soon in Grafana 8 in a few months.
B
But time series is the first one. Eventually this one, a scatterplot graph, a bar chart, and maybe one other that I'm blanking on right now will completely replace the graph panel. Basically all of the line time-series charting has already been replaced and is in time series; you can actually convert your graphs to time series right now if you want to, there's a button in there. So basically, time series is graph's bigger and better little brother.
B
And node graph and time series will both be coming out of beta in 8.0, but you know how it is, things are always subject to change. I know that they will be updated in 8.0, though, because we've got some demos internally of some cool updates that are in the works for the node graph. Very cool.
B
Actually, the time series panel does have that; you can take a look at how that looks in the 7.4 Grafana Play demo dashboard. Or, I believe it is already enabled in the TestData DB datasource. I don't know how to do it for a live one yet, but I think you can hook it up there somehow. It's all in alpha right now, so it's kind of in motion.
I
Sure, cool, yeah. So there are a couple of other things here that are part of this demo. This page is in the Tempo repo, so you can walk through it, but, like, this LogQL v2 query...
I
Yeah, so this is not really related to tracing, I guess, but for log lines that include a trace ID and other parameters, it makes things easy. I guess this is another way to discover traces. Yes, let's go ahead and look at it.
I
In this case the application is logging requests along with some other things, including the status code and duration, and what we can do is run a query to discover them and then click through to the matching trace. So what we're looking for here is any failed request, so a status code anywhere in the 5xx range, and a duration over 100 milliseconds. And once we find a line that includes the trace ID, we can click the button and view the trace.
I
So this demo is good for seeing another way to take your logs and discover a trace from them, especially if you have meaningful values being logged as well. And I guess the magic here is piping the stream to the logfmt operator, which will parse anything in this key=value kind of format. So that's logfmt.
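The query being described looks roughly like this in LogQL v2. A sketch: the stream selector is a placeholder for whatever labels the demo containers carry, while the filter stages match what was just shown.

```logql
{job="tns/app"} | logfmt | status >= 500 | duration > 100ms
```

The `logfmt` stage extracts `status` and `duration` as labels from the key=value pairs, and the later stages filter on those extracted labels.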
I
Yeah, cool. Let's see, I guess one more thing I wanted to talk about is just what's going on in here. This is pre-release software, and just to point out, there are a lot of moving parts. Oh, and there was a good blog post posted in the community Slack that I think is also a good write-up of all the different moving parts of exemplars. But just to walk through a couple of things: within this demo...
I
There is a pre-release branch of Grafana being used. I'm not sure what images are available for the 7.5 branch, but it's funny: this pre-release actually says "7.5-pre", so that's kind of funny.
I
Okay, cool. And the other thing is that it's running a custom Prometheus image. This is built off of... yeah.
I
This is built off of the pull request for the first phase of exemplar support. But even after it's merged, it will require a command-line flag to be passed to enable it: it will be an experimental feature called exemplar storage, and it will have to be enabled on the command line. Cool, yeah. I think that's a good review of exemplars.
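For the record, the feature flag as it landed upstream is spelled like this (assuming the flag name Prometheus shipped; check your version):

```sh
# Exemplar storage is an experimental feature and must be enabled explicitly.
prometheus --enable-feature=exemplar-storage
```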
A
Cool, yeah. So there's a link in the chat to a lot of what Marty was talking about, both to the original demo, which is grafana/tns, as well as the references to it from the Tempo repo, and with docker-compose up you can play with all the stuff he was showing there, which is cool. Daniel, do you want to take this next section? We have the Python example you put together, as well as a bunch of guides.
E
There are some blog posts for Java Spring Boot instrumentation, for Go, OpenMetrics examples for .NET, and also some community resources for Node.js or JavaScript. So if you build something around this, feel free to open a pull request to add your example, because it will be very useful for other people.
A
That's a good point: we do have a number of external links there to different people's GitHub repos, who have put some of these nicer examples together. So if you have something like this, a repo that just shows off how to use OpenTelemetry instrumentation with Tempo or something like that, we'd be glad to include that link in our docs. Feel free to add it in a PR or contact us in Slack and let us know. Cool, yeah.
A
All right, well, let's go ahead and wrap up here. I'd like to thank people for showing up; I think we had a good talk today. We mainly covered stuff that happened in 0.6.0, talked a little about 0.7.0 coming out soon and some neat things there; Helm charts are out, and we overviewed some of those features.
A
Next steps for Tempo are exciting, and we talked about that some, and then we got a good demo from Marty and some discussion from Daniel about the examples. Hopefully the examples hit some of the technologies you're using, and if not, please let us know; we'd be glad to whip something else up. Cool, all right. Everybody take care. Please try to get hold of us: there's a public Slack.
A
There's a community forum for Tempo at community.grafana.com. There are a lot of different ways to chat with us, so if you're using Tempo and you have questions, or you just want to ask generally about the project, please jump in and hit us with whatever you have, and we'd be glad to help.