From YouTube: Tempo Community Call 2021-09-09
Description
Discussion of Grafana Tempo news.
A
Okay, I'll check this in a bit. Let's get this going, it is ten after the start. Let's get this rolling a little bit, we will go. We'll come back to the poll, the poll's important. I will announce the results, but first, let's get into Tempo. So this is the September Tempo community call. Thank you all for coming, we'll kind of review.
A
Before we get into that, though, we have an ObservabilityCON coming up. I guess I should know the dates; I don't know if we've announced the dates, but it's in the next few months. In the doc there is a link to the CFP.
A
So if you click that, it's a call for presentations, and you are welcome to submit anything. It does not have to be Tempo related, of course: observability, the Grafana ecosystem. Feel free to submit talks about any kind of success you had, or setups, or whatever; we'd love to hear some of what you've done. So you can present at ObservabilityCON.
A
It will be a virtual conference, so I think everything is pre-recorded, so you won't really need to sweat the details about doing some kind of live presentation; you can put that together ahead of time, if that's a concern for you. And in particular, if you have any Tempo success stories or any kind of Tempo news, or a way you've used Tempo in your company: we have, I guess, a set of lightning talks that are about external people using our software.
A
So if you just want to do like a five minute thing, that also works. If you want just a small segment to walk through how you've used Tempo or another Grafana product successfully, then I think all of that would be good. Please click the CFP link, check it out, think about something you might want to present, and let us know. The other thing I want to announce...
A
Before I hand the mic over to my friends here: we have started using milestones in the Tempo project. We kind of talked about this internally, and we've been trying to find ways to organize work, know what everyone's doing, and have an idea of what we're working towards at any time, and we felt like GitHub milestones were a good match for what we wanted. So right now there's a v1.2 milestone up that you'll see, and we will continue to do this going forward.
A
If you want to know what's going on with Tempo, as you're looking forward to the next releases, and maybe you don't have the time to go check notes or look for a community call or something, it's just a good place to get a live update of what we consider a priority for the next release, and we intend to do this for 1.3, 1.4 and all future versions.
A
So far I think we've had roughly a two month, every-other-month cadence on releases, and I expect that to roughly continue. There are no hard dates; we're not releasing every other month on purpose, but we do kind of just get a set of features that we think are worth releasing and then cut it, and that's been about once every two months, so kind of expect that cadence to continue, I'd say. Cool.
A
Those are my major points. I think somebody added a 1.1 release here; if they want to talk about it, then we can move to the presentation.
C
Yeah, I added that. During the last call we were kind of talking about it, it was coming up, and I just wanted to say it is out there, and there's a blog post that goes through the main changes. I don't think there's any big functionality; it's mostly a stability release, and there's a lot of good improvements in there for stability and performance and things. So, cool, yeah. Okay, let's see.
C
Let's see, this is what I was talking about: 1.2 is upcoming, so that's the current release, our next release, and everything that we're talking about now will be kind of related to that. We wanted to mention memberlist fixes, and are these in 1.2? So memberlist: if you're using memberlist, we've had some issues ourselves, and maybe some other people have too, where the ring propagation can sometimes lead to unhealthy ingesters or other components, especially during rollouts, if there's a lot of them.
C
The size of the rings in Tempo are sizes that have had a higher chance of leading to unhealthy components, and there were some really, really important fixes that were merged. I think these actually might have been in 1.1. You know what, instead of rambling, let's go check that out.
A
I was actually looking it up myself, although I do approve of rambling; I do it a fair amount myself. And yeah... oh no, wait, maybe not... yeah, this has been fixed, so in 1.1 this is fixed. Not only did it fix the issue with forgetting unhealthy instances, but we also found some issues where it was propagating bad messages repeatedly for no reason, so the volume of traffic has been reduced, and we've also tightened up our defaults for memberlist and everything else, so you should see better performance there.
C
Yeah, a really important fix was the propagation of the tombstones, the handling. So if you did have an unhealthy entry and you forgot it, those work very well now, so it's easy to forget the unhealthy components. So yeah, I'd say, okay, cool. So we're going to talk about search for a little bit here. I guess I should preface and say we don't have a live demo, but we do have some screenshots and we'll kind of walk through and talk about it.
C
I'm sure we'll have a demo for the next one; we just weren't ready for that, is that cool? So if you're expecting a demo, I'm sorry, but maybe next time. Cool. So, to talk about the timeline: this is kind of where we're at right now, and this was in the previous call, I don't know if it looked exactly the same, but I'm just putting it here because it's good to talk about. So phase one: Tempo 1.2 will have an API.
C
It will search the ingesters, and there's a matching Grafana UI for Tempo that we'll talk through. And the Tempo query language, the trace query language that we're looking toward, is still out there; it's coming up, but it's not in this UI. So I just wanted to make sure that was kind of explained here.
C
So the UI is... let me see if I can make this bigger. I don't know how well this is going to show up. Is that making it worse or the same?
C
I think I know what the fields are, so that helps. Yeah, the screenshot's a little wide, but sure. So when it's enabled there will be a new tab inside the Tempo data source called Search, and the previous one that was called Search was a Loki-based search, and that one is renamed; it's called Loki search now. So there's a Search tab now; it looks a little bit different. Cool, thanks. And what we did is, this search is, you know, kind of basic.
C
It's looking at tags and attributes of the spans; there's not the full power of the query language. So what we've done is go through things like service name: that's a dropdown, so it will populate the dropdown with all the different service name attributes that have come through, that are in the ingesters, and then the next dropdown is a span name. So span name is going to be like the operation.
C
I think that's typically like a URL or something else, right, so it depends on the instrumentation. And, oh, this screenshot is missing the... there's the Run button up at the top, so yeah, you would click that to execute the query. So instead of a query language we kind of have this UI, which will build the correct API request behind the scenes, and you can run the query.
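As a sketch of the kind of request that UI builds behind the scenes, a direct call against the search API might look like the following; the endpoint path, port, and parameter names here are assumptions based on Tempo's HTTP API at the time, not something shown on the call, so check the API docs for your version:

```shell
# Search recent traces by tags, minimum duration, and result limit.
# "tempo:3200" is a hypothetical host; 3200 is Tempo's default HTTP port.
curl -G http://tempo:3200/api/search \
  --data-urlencode 'tags=service.name=tempo-query-frontend cluster=ops-usd0' \
  --data-urlencode 'minDuration=1s' \
  --data-urlencode 'limit=20'
```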
C
This tags field is an open-ended, kind of dynamic field here, in logfmt format, and that is for finding matching tags. So this query, what it's doing, is it will find any trace where the service name is tempo-query-frontend, right, and this is actually the service.name attribute that comes through in the batch.
C
So if you're looking at the trace details, that's what that comes from, and this is looking for any cluster that is ops-usd0, that's one of our internal clusters, and so it will combine all of these parameters together and find those traces. This is a minimum duration field, this third one down here, whoops, yeah, and so that will only find traces that are more than one second, and that duration field is in the Go duration format.
C
So you could do 5ms for five milliseconds, or seconds or hours or minutes, things like that. And the search results are just displayed in a table down below, and these are just some basic columns. I know we've talked about this before, like what other data would we like to extract. Yeah, so trace ID: these are clickable links, so when you click the trace ID it will show up in the secondary window, just like what you would expect.
C
The other columns are trace name, so this is the service name and operation, and it's kind of just something to show to help differentiate; then the start time of the trace and the duration, and these columns are sortable. So yeah, if you click a trace, this is what it looks like: it pops it up on the side, and it's just like looking at another trace side by side.
A
To mention something else: sometimes this split view is a little small for me, I think my monitor might be smaller than others. If you control-click that, which is what I normally do, it'll just pop a new window. So for small traces I think the split is really cool, but when your trace gets huge, I think you want its own window, and the control-click works nicely for that.
C
Yeah, so, well, let's talk about this a little bit more. I want to talk about these tags, and so this is a tag-based approach; it will find traces with matching tags. You can combine multiples, so in this field here you could type additional values in. So this is looking for cluster, and the root dot prefix is kind of something that means it only looks for a match on the root span, so that cluster attribute has to be on the root span.
A
This builds on the consistent metadata thing, right: let's have the same tags on our spans, our metrics and our logs, and it kind of lets you build all these cool correlations. For cluster, I think you can either configure it on the client.
A
So you can say always add these tags, and you can just tell the client to put the cluster on every single span emitted by a process or namespace or whatever, and also the Grafana agent has the ability to query metadata out of Kubernetes and other sources and then attach it to spans as they flow through the agent.
C
Right, so other common tags would be hostname, pod, IP, right; really it can be any tag that you have in your data. So, depending on how your spans are instrumented or what kind of values you have: customer ID, file name, things like that, any of those would work. This condition here is actually a substring match, so we don't have the full language, so we've kind of just taken some shortcuts. I guess I would say it's a case-insensitive substring match, so it doesn't have to be exact, and I think until we have the language where you can be more specific about that, this general approach is probably the best. So it looks like an equals sign, but it's actually a substring match. Cool.
C
There is a max duration limit, and so I guess you could find traces under that limit, and then there's a limit field. I think these are maybe not as useful or as powerful as the other ones, but just to mention that they're there: limit limits how many results are returned, so if you just wanted the first couple instead of a bunch of them.
A
So this is a good time, as we kind of go through this, to maybe get questions or suggestions together; it'd be a good place to discuss that. So if you think of something, any ideas, please propose them. Thanks.
C
Yeah, actually, without the live demo this is a little bit awkward; it isn't shown in here, but all of the tags fields have autofill. For the tag names, if you start typing, it has two autofills: one, it'll autofill the tag name, so if you have things like pod, hostname, http.url, http.status_code, it will start suggesting those names, and then, after you do the equals sign, it will auto-populate the values, right. So it has autofill on those two things.
D
No, I mean more like in the UI, not some logic based on text or something, just to easily find a trace ID you already know.
A
It certainly would be impossible over the full backend, just because we have billions in the backend, and if you type the number one then you're going to get a billion entries back, right. So maybe recent ones; I'm not sure how to constrain that to make it work, because there are so many trace IDs; it would have to be maybe ones from the last few seconds or the last few minutes. It would be very difficult to do that across the full data set, for sure.
C
Yeah, yeah, that's funny; it won't autocomplete anything in these screenshots, but it looks really cool. Definitely, we'll get to that next time. Cool, let's see. So let's talk about some numbers from our internal testing. I think we had some of these last month, but here's some later ones: we're searching over 190 traces, and it's about 140 gigabytes of data, 400 blocks.
C
Yeah, yes, so yeah, it can come back a lot quicker if you're filtering for something that has a lot of data. Cool.
C
So the ingesters have a setting, and I meant to get that setting name right before this. They have a setting that determines how long they keep a block after flushing it to the backend, so they always keep blocks for a little bit after they flush. Yeah, it's right here: complete_block_timeout. After ingesters flush a block to the backend...
C
They don't delete it off their disk right away, because they need to let other components have a chance to query and poll the block list and get that block in, and so normally it's like five minutes, but we've extended that here, just for other reasons and also just to have more data in the ingesters.
C
So if you increase that, that will control how long data is retained on the ingesters, and if you increase it then you'll be able to search more; the downside is it's using more disk on the ingesters.
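As a sketch, the setting discussed here lives in the ingester section of the Tempo config; the 30m value is an arbitrary example for illustration, not the value used on the call:

```yaml
ingester:
  # How long an ingester keeps a completed block on local disk after it has
  # been flushed to the backend. A longer timeout means more recent data is
  # searchable from the ingesters, at the cost of more disk usage.
  complete_block_timeout: 30m
```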
A
Right, so what we're talking about here is just recent-traces search; that is what we want to release in 1.2, at least behind a flag, and then we're also working on full backend search as well. This is just kind of a first step.
A
Yeah, right; if you're running a single binary at 10,000 spans a second or something, 20,000 spans a second, you could throw a big disk on there and probably get full search over hours and hours, or a full day. We do over a million spans a second, like 150 or 200 megabytes a second, and you can only keep so much of that on disk before you run out of disk.
A
Our less interesting endpoints, like the write path, which is a lot easier, a lot simpler, and doesn't have issues as much: that one is, I think, at 20 now, or 30, or something like that. We brought it down a bit for search, because while we were adding this feature we wanted to kind of stabilize around it before scaling back up.
C
Yeah, I actually have some slides and things that kind of go over that. I think maybe let's do that on the next call, but it will talk more about the internals, about what kind of data; it's actually a lot, and it's still changing, so we're kind of still adding to it, but yeah, we'll go through that on the next call if that sounds good. Yeah, let's see.
C
Yeah, is there going to be a setting to cap the search lookback, sort of like Loki does? I think for the full search, what we really want to do is: Grafana has the time range that's part of the query already, and we'll just follow the time range. So I think that's controlling... no, wait, the lookback is something a little bit trickier than that. I think what we'll do is...
C
I think what we would do in Tempo is just follow the time range once we go to the next release. For the ingesters, we don't look at the time at all; it's just all data in the ingesters, we just search everything in the ingesters. In the next release we would follow the time range, and so I think what we do is we look in the ingesters and the backend, just like how we would do for a trace ID lookup.
A
Because Cortex, I know, has a setting where it will only look in the ingesters for certain ranges of time and look at the backend for different ranges of time, and I figured we'd replicate that in Tempo, so I'm not sure if that's it. You basically say: for everything within 15 minutes, don't bother checking the backend, check the ingesters; for everything outside of 15 minutes, go look in the backend, or something like that. Table manager, whoa, yeah, we don't have a table manager.
A
I mean, yeah, I guess I was thinking, like, if you searched for six weeks ago and you only had a week of data, I figured it would just return nothing, although I guess what this is doing is maybe short-circuiting that and just quickly being able to reply: there's no data here, adjust your retention period, perhaps. Maybe that's the point of this setting?
C
Yeah, okay, so here's how to get search now, if you want to mess around with it; and let us know, that would be awesome. For Grafana it's the latest 8.2 pre-image, so I don't think 8.2 has been released, but this experimental UI will be in there, and it's still behind a feature toggle.
C
And what I have here is the environment variable that you can set to turn that on, and I'm not sure if you've seen feature toggles before; there's two ways to do it, and this environment variable one is just pretty easy. So if you set that environment variable to the value tempoSearch, that will show up, and then for Tempo, just grab the latest main branch build, and it will be in 1.2.
C
But we don't have any release candidates or anything like that yet, and it is also behind a flag. So there's this search_enabled; set that to true, and it has to be set that way for both the distributor and the querier configs. So those are kind of like the inputs and outputs where it has to be enabled; the ingester doesn't require any changes. Cool, let's see.
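Putting the two toggles together, a minimal sketch might look like this; the toggle name tempoSearch and the search_enabled flag are as discussed on the call, but both were experimental at the time, so double-check them against your exact Grafana and Tempo versions:

```yaml
# tempo.yaml: enable the experimental search API (Tempo 1.2 era, pre-release)
search_enabled: true
```

```shell
# Grafana: turn on the experimental Tempo search UI via a feature toggle
export GF_FEATURE_TOGGLES_ENABLE=tempoSearch
```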
C
Okay, cool, yeah, let's just keep going. So this is other stuff that we have in 1.2. Memory usage for the compactors seems to be really improved; we just had some internal improvements for buffering that we did as part of benchmarking while we were doing search, and we came across them. So I think that's good; I think compactor OOMs are probably pretty common, and so I think everybody will be happy about this one.
A
Right, I think this was a memory leak that you just accidentally fixed in the middle of the search PR. You were looking to do the performance improvement, but it actually also happened to catch this compactor memory leak that was never quite bad enough for us to really dig into, but did result in some OOMs, and then you just magically fixed it, which was awesome.
C
Okay, cool: there is a new CLI command to query blocks, to pull a trace straight out of the backend. So that's pretty cool; you can do that without a Tempo running. You can just point the CLI at the bucket or the file store, at the backend directly, and it will do the same work: pull the bloom filters down, go through the index, things like that. So that can be really good for debugging, or a way to access the raw data.
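A sketch of what that might look like with tempo-cli; the subcommand shape and flag names here are assumptions for illustration, not shown on the call, so check tempo-cli --help on your build:

```shell
# Pull a single trace straight out of the backend, no running Tempo needed.
# Backend, bucket, trace ID, and tenant ID below are placeholder values.
tempo-cli query blocks \
  --backend=s3 \
  --bucket=my-tempo-blocks \
  1a2b3c4d5e6f7890 single-tenant
```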
C
Yeah, so if you look at the 1.2 milestone you'll kind of see what we're targeting for that release, in this case service graphs. I will stop there, and I think Mario...
E
Oh well, if you can, it's just one, that's it.
E
Cool, thanks. Yay, service graphs, yeah. So I think we spoke about service graphs in a past community call, but in summary: service graphs is a feature we're working on as a visual representation of the different relationships that various services have. The idea is to build a diagram where nodes are services and edges are how the services communicate, so we understand how our system is built and the different relationships between the services. So, it's in development.
E
We didn't want to show a live demo yet, the UI is a bit rough, but hopefully next time we can have one. But still, we have news and some developments. We have settled on an architecture, a strategy, and how it works is: as we process spans in the agent, we can build these relationships, so we inspect the spans and we look for two spans which match, our client and server services.
E
So we can build this: the two services communicating. That gets translated to a Prometheus metric that we write to, well, in this diagram it's Cortex, but any Prometheus-compatible backend works. And yeah, so we build a couple of metrics regarding the latency, how many hits, like how many requests happen between services and such, and then Grafana can pull all those metrics and will build all the relationships between services and basically draw the map.
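As an illustration of how a frontend could use such metrics, a request-rate query between two services might look like the following; the metric and label names here (traces_service_graph_request_total, client, server) are assumptions about the agent's output, not something confirmed on the call:

```promql
# Requests per second from service "frontend" to service "cart",
# over the edges the agent observed in the last 5 minutes.
sum(rate(traces_service_graph_request_total{client="frontend", server="cart"}[5m]))
```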
E
So, just a couple of last notes: the backend development, the development in the agent, is happening in a dev branch, which is linked here in the presentation. So it's not on main, in case you go looking for it and don't find it. And then, obviously, you can run this; we're running this internally, and you can run it too. The UI is hidden under a feature flag.
E
That's the tempoServiceGraph feature flag, which you can enable similar to how you have to do for search, and then obviously you will have to configure the data source; that is similar to linking two data sources in Grafana, just the same as for traces to logs, for instance. Yeah, and that's pretty much it. If anyone has any questions or thoughts... yeah, if not, we can continue, I guess.
C
Yeah, so I guess one thing to just reiterate there: the service graph metrics are computed by the agent, so if you're sending directly to Tempo, or you're using a different agent or collector, these won't be available. That's the architecture that we've gone with for now.
C
Going forward a little bit more: we are now running exemplars internally, and I think we have like a million exemplars enabled in our Prometheus data sources, and they look cool. I'm not sure what else there is to say here; this note's kind of for us mainly, or, I guess, for you if you're running a cluster. But configuring limits per tenant was, I think, probably the latest development in there.
A
Yeah, our scale is down right now, which is kind of sad. We did reduce it some while we worked on search, because there's additional memory and CPU, and we want to kind of tighten these things up before we bring it back, but I think we're at about...
A
Daniel probably knows; right, our internal volume is a little down, and that's just to kind of stabilize search and feel better about search, and then we're gonna ratchet things back up.
A
Cool, so that's it, I believe. Do we have anything else, Tempo team?
A
If not, we do have this poll here. If anyone has questions, you're welcome to type them in chat, you're welcome to unmute and ask questions, comments, whatever; you can also put them in the agenda, we'll pick them up there as well, and we can address anything you all need at the moment. Before we totally sign off, though, we do need to revisit the poll results, and hair has won. Actually, hats did not win, Daniel; hair won, six to four. I'm kind of wondering, was Richie logging in and off to vote repeatedly?
A
I think we might need to go talk with Richie. I don't think this is a fair poll at all; throw these results out, start over, for sure.
A
And then a third way is through the Kubernetes service discovery processor; there's a processor that will go reach out and find metadata and try to attach that metadata to spans based on the IP; it's going to try to match up the IP from the metadata with the span that passed through. Of the three, the third is the most robust, but also the most finicky, in that you have to kind of work with it and play with it. The other two are rock solid and are gonna work.
A
They'll work all the time and you'll have no issues with them, so I'd experiment with those three ways of getting what you want.
A
For pod names: I think most clients attach a hostname. Does anybody know that offhand? I think our pod name comes through the Kubernetes service discovery thing.
A
I would say each client is probably going to, by default, add slightly different tags, so some are probably going to add something like hostname, which might be the pod name; I don't know off the top of my head. Let's see... yeah, so we actually have it twice: we have host.name, which the Jaeger client attaches and which matches the pod name, and then we also have the pod name attached through this Kubernetes service discovery thing, which is the same.
G
I just have a question, so I'm just thinking out loud here: is it possible to use Jaeger with Tempo? So, for xyz reasons, if I do not want to use the tracing visualization in Grafana and I want to use the Jaeger UI component, how does the query layer work between Tempo and Grafana? If I remember correctly, in the past Grafana used the Jaeger UI internally, but I'm not sure on that. But yeah, I would love to know more.
A
Right, so the Grafana team forked the Jaeger UI when they made the Grafana trace UI. We do actually have a way to do what you're wanting, which is point a Jaeger UI at Tempo; that actually works. That's called tempo-query, is that right?
C
The performance will not be... you know, it could be bad, because the way it works is it actually has to fetch each full trace from Tempo; it doesn't just get those kind of basic properties like what the Grafana UI that's specialized for Tempo is showing. So that actually will work; I mean, you could try it, I guess. Depending on how many results there are, or how fast your Tempo is, where the data is stored, it may be pretty good or it may be pretty bad.
C
If you're running that tempo-query container that we've linked, you can do two things with it. If you browse to it, it has the Jaeger UI in there, so you can actually use that as is; or, yeah, you could use the Jaeger data source in Grafana pointed at it, so it also works like the Jaeger data source, and you could use it that way.
A
All right, thanks for the questions, thanks for the thoughts, everyone. I think this might be it. It's been good to chat with you; feel free to reach out on the Tempo GitHub repo, of course, make issues. There's also... I think we enabled Discussions recently, so you can use GitHub Discussions.