From YouTube: Tempo Community Call 2021-07-08
Description
Discussion of Tempo news over the past two months, including adding search, service graphs, Tempo 1.0, and other news!
A: All right, let's go ahead and get started; I guess we're about five after. Oh, thank you for hitting admit. We're going to talk a little bit about 1.0, which we released and announced about a month ago now at GrafanaCONline, and we'll briefly touch on some of the features we discussed there. We have some really cool news and a roadmap to share for search, and Marty and Conrad are going to be talking about that some.
A: They both spent some time recently kick-starting that project, and I think we have some exciting news to share on that. Then there's some more news about Mario, which is not entirely spoiled by this line right here (which is 100% spoiled by that line right there), and then a couple of issues in Tempo and new features that we want to highlight, because we think they're worth discussing for those of you who are operating Tempo. And, like I said, feel free to add anything to the agenda.
A: If you have any questions or things you want us to address, I'll just go ahead and put Q&A at the bottom; you can put things before that, but I'll open it up for general discussion if anybody has questions or just wants to talk about Tempo generally. So, to quickly talk about 1.0: we released it at GrafanaCONline, and we felt like it's a very stable place for Tempo to be.
A: We really spent maybe the last month or so focused on performance and stability, particularly in the ingesters, and really stayed focused on the thing Tempo was initially pitched as: a high-volume, inexpensive-to-operate trace store with only an object storage dependency on the back end. That was really the focus for 1.0, and we think we did a really good job of hitting that. I think our costs were...
A: I can't remember exactly; we were measuring spans per second per CPU, and we were in the eight or nine thousands, I think.
A: The last time I looked it was about 9,000 spans per second per CPU, and that includes all elements of Tempo and a 14-day retention, which is what we operate internally. So we're really happy with where 1.0 is, but we're going to talk about some exciting features, and honestly, once we get some of these in, it might be 2.0: once search is complete and cut, and we feel confident it's stable, that might be Tempo 2.0. I'll also put a quick note in here.
A: There was one bug that was fixed and patched there: if you shut down an ingester too quickly and didn't let it exit gracefully, it would sometimes corrupt its WAL, and the fix was basically just to prevent a panic on restart. What Tempo, or the ingesters specifically, should do is replay the WAL until it finds some kind of corruption or some kind of issue, then stop and log a message if there is one, but still have replayed everything it was able to. That's what we're seeing internally.
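A minimal sketch of the replay-until-corruption behavior described above. The WAL reader here is a stub invented for illustration, not Tempo's actual code; the point is the shape of the fix: stop and log on a bad record instead of panicking, keeping everything replayed so far.

```go
package main

import "fmt"

type record struct {
	corrupt bool
	spans   int
}

// replay consumes records until the WAL ends or a record is corrupt.
// On corruption it logs and stops rather than panicking, so everything
// replayed up to that point is kept.
func replay(wal []record) int {
	replayed := 0
	for _, rec := range wal {
		if rec.corrupt {
			fmt.Printf("wal replay stopped: corrupt record after %d good records\n", replayed)
			break
		}
		// a real ingester would re-ingest rec's span data here
		replayed++
	}
	return replayed
}

func main() {
	wal := []record{{spans: 10}, {spans: 7}, {corrupt: true}, {spans: 3}}
	fmt.Println("records replayed:", replay(wal))
}
```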
A: So 1.0 is boring and nobody wants to hear about that. Marty, I don't know how you all organized this, so why don't you jump in here and talk to us about where search is at.
B: Sure, so I'll start. I guess about a month ago, during GrafanaCONline, we talked about search being a really big thing for Tempo. It's something we saw on the horizon, and it got a jump start in the past couple weeks: we did an internal hackathon at Grafana Labs and we chose to hack on Tempo search, and it came along a lot faster than we thought. So I have some timelines and things, and I can walk through them. Let me go ahead and share something here.
B: Okay, cool, I guess everybody can see that. So there's a lot going on here, but I'll walk through the timeline, and then we're going to do a demo at the end, so it'll be really cool. But I guess it's good to talk about the whole picture first: where we are and where we're going. There are three phases we're going to do this in, and the first one is very basic; that's what you'll see sooner.
B: It's a basic search API searching data in the ingesters, so that includes live traces in memory and traces that have been flushed to disk on the ingester. So this is maybe the last five or ten minutes, an hour, of data. And then a new, kind of experimental UI in Grafana: the current one only does lookup by trace ID, so there will be some UI changes that come along with the new capabilities.
B: We hinted at that at GrafanaCONline before. I don't have the slides, but I think what we want is something that's going to be easy to pick up, make sense, and fit in with, you know, Loki and Prometheus and things like that. Then the last phase is implementing that language and finishing off the UI. So those are the three phases we see Tempo search going through. Cool, let's see: phase one.
B: This is where we're at, and this is what we'll see hopefully pretty soon, I think probably within a month: basic search capability. Service name; operation (operation is also the span name), so very basic things that should already exist in most tracing data; duration, for slow and fast traces; and tags, so the URL, the status code, and custom tags like database queries or customer IDs, things like that.
B: Plus the UI that I mentioned. And then, just to talk about the way we're approaching this: the Tempo TCO is important, so we're trying to do this in a very specific way where we can maintain a low TCO while adding search, without increasing the cost very much. The approach we're going with so far is to use an efficient data format called FlatBuffers.
B: It's a format where the on-disk format and the in-memory format are the same, so there's no parsing or decoding required to process the data; it should be very efficient and very fast. The other thing is that we're going to make search optional. There are a lot of other solutions out there, or at least, you know, Tempo has existed this long without search capabilities, so there are going to be a lot of installations, or maybe people, using it that way.
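To make the "no parsing or decoding" property concrete, here is a hand-rolled Go illustration of the underlying idea. This is not the actual FlatBuffers wire format or Tempo's schema, just the zero-copy access pattern that makes it cheap: fields live at known offsets in the byte slice, so reading one is an index operation, not a decode pass.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Toy record layout: [8-byte start time][8-byte duration ms][name bytes...]
func durationMs(rec []byte) uint64 { return binary.LittleEndian.Uint64(rec[8:16]) }

func main() {
	rec := make([]byte, 16, 32)
	binary.LittleEndian.PutUint64(rec[0:8], 1625760000)
	binary.LittleEndian.PutUint64(rec[8:16], 250)
	rec = append(rec, "GET /cart"...)

	// The same bytes could come straight off disk; access cost is identical,
	// which is the property FlatBuffers provides for real, evolving schemas.
	fmt.Println("duration ms:", durationMs(rec))
}
```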
B: The data flow goes around, starting at the distributor: we'll be extracting metadata from the trace, the top-level attributes, the basic tags and things like that that we want to search on, the duration, the start time. Those are extracted in flight and then sent to the ingester just like the normal trace data. The data doesn't go anywhere off the ingester to start; moving beyond that is the phase two I mentioned.
B: I think that's probably okay for talking about the timeline and the immediate goals, so now I guess we could show off a little working demo. Does that sound good? Yeah.
D: All right, yeah, it's a live demo. I just had some trouble setting it up, but it should work now. So, let's see, you should see Grafana right now. Okay, so I'll show off a bit how it works. We have also adapted Grafana: the Tempo data source here, beside the trace ID box, also has a search box, and in this search box you can fill in the query, which consists of a series of tags that you're searching for within your trace.
D: How it works right now is you could, for instance, search for all traces that have a root span with a specific name. You start the query with "root", because you want to find something on the root span, and then you can search for the tag "name", for instance. We've also implemented autocomplete, so it will suggest a couple of names that are present within the trace data.
D: So in this case I can, for instance, take /cart, which is some kind of endpoint in this trace generator, and then run the query, and it will return the results with a little summary of each trace: you have the name of the trace, the start time, and the duration. Then, as usual, you can click on the trace ID and you get the full trace, and from here you can continue searching.
D: Yeah, that seems to cause a little problem right now; it's all very new, so there are still some bugs out there. We can try that again. Let's see, if I take 20 milliseconds... yeah, so all the traces should be longer than 200 milliseconds right now. So this can be a way to filter down and search for the slowest traces, the slowest requests, and from there on you can also add more tags.
D: For instance, I know in this trace data you have a region tag, I think, some tag that's also set. I can continue searching on that: say I want all the traces that have the region tag, and it should have the region "east", for instance. It will filter the traces again and only show the traces that have a span containing that tag somewhere.
D: I'm not sure if I can show this right now, but the tag search is actually not an exact match; it matches if the string contains the value. So, for instance, if I searched for "us", it would match all tags that contain "us", so that would be us-east and us-west, or whatever your trace data is like.
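Putting the demo's pieces together, a query in the experimental search box looked roughly like the line below. This is reconstructed from the walkthrough, and the exact tokens were still in flux at the time, so treat it as illustrative only:

```
root.name=/cart minDuration=200ms region=east   # tag values match on substring, so "east" also hits us-east
```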
D: And then, besides this, we've also implemented the Jaeger API, so the Jaeger data source also works in Grafana; you can also search, in a sense, with a Jaeger query.
B: Yeah, so we have talked about what makes sense to display here; I think that's something we'll continue to define. Trace name, start time, and duration were just very basic ones that we went with to get started quickly, but we have talked about other attributes that we could maybe pull out. I don't know; it depends on where we end up with the query language itself.
A: This is obviously something we're developing right now, so if you have input, this would be a great time to give it, because we can pass it back to the Grafana team as they're actively developing it. If anyone has any thoughts about additional things they'd want in the UI or on this table, please let us know.
E: I had a question: the query language that you're using, how close would that be to Jaeger's query forms? They have some standard parameters, like start, end, limit, lookback, service, and so on and so forth, that they use to pass parameters to the Jaeger back end to do the query against, for example, Elasticsearch or whatnot. Would there be any correlation between these, or are you having your own?
B: Sure, yeah, so they are very close, and actually Jaeger API compatibility was our first initial goal. So that will work if you're using tempo-query, which is not really required anymore; that is something that we will have, and that's where we started out, Joe.
B: The tags and the duration filtering will work; it will be compatible. As for the query language: there isn't really a language, it's just a set of tags, so I think that actually matches up with the entry on the Jaeger UI as well.
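For reference, the standard Jaeger HTTP query that tempo-query fronts passes those parameters like this (the values here are made up; tags are URL-encoded JSON in practice):

```
GET /api/traces?service=cart&operation=HTTP%20GET&tags={"region":"east"}&minDuration=200ms&limit=20&lookback=1h
```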
B: Also, for the service name and operation, we may not go with that terminology; it may just be tags that you pick. Operation is really just the span name. I think in the OpenTelemetry protocol it's just called the span name, so the "operation" terminology might not make sense there, but yeah, it should be compatible.
A: Something we're still talking about: that autocomplete field they typed into is really cool. One option would be to go to something much more Jaeger-like, where you just have a service dropdown, an operation name dropdown, and a free-form field for the tags. So that's another path we could take, and nothing is really set in stone.
A: I can't say... I was actually in favor of going to the Jaeger style, where it's all dropdowns, but I really liked how clean that autofill, the auto-suggest, was, so it might make sense to keep that. I don't know.
E: And one other question, just for me to see: how much analytics can we run on the data that is going to be stored in Tempo, for attributes like time and duration and so on and so forth, along with other data sources? Can we use this in conjunction to run graphs and heat maps, for example, for the duration of operations, and plot that over, let's say, Prometheus's response time, or take a look at Loki's error rate and plot all of this together? Or is it not being looked at that way?
B: Sure, so extracting metrics from a query is definitely something that we would like to do in the future. I'm not sure how quick or how feasible that is, because that could be a very large time window of traces to process. So I think that's something we just have to figure out, but no, there are no immediate plans.
B: Part of it is also impacted by just the query language: how we end up wrapping it with aggregates and bringing those things in so you could plot something. We had actually talked about something that is easy in Grafana: plotting the duration of the search results that are actually returned. Stuff like that is pretty easy, but going any further into analytics, I think it's really hard to say.
E: I don't know the name, but it's very useful to have a quick glance at what's going on over a span of time with the operations that you filter through that sidebar of the Jaeger UI. I'm thinking that having that, then plotting it together with something that you read from Prometheus about the service's internal operation, and putting that against how it is being viewed from external sources, would be very informative. But I guess you said that's going to take a longer time to execute as a query, so probably not for version two.
A: Right. I'd say our immediate plans are focused around search, but definitely in the back of our minds while we design this query language are metrics and a lot of these other things that people want. So it is something we're thinking about, but right now we're just focused on search. I would say after we start wrapping up search we might start talking about that next, and if you stay connected to the community you'll start getting more news about it. Super, thank you.
G: So does this remove the need for the logging of the trace ID? Because instrumenting... I don't know, I've got like eight applications instrumented now and another 10 to go, and that part is painful.
A: Sure. With regard to that, I guess in a sense, right? Maybe there are things you could put on a log line that are not part of your spans, in which case the log line is technically more flexible.
A: But yes, this should kind of remove that need. And on what you just said about the instrumentation: I'd encourage you to look at the agent's automatic logging option, which would not require you to add these additional log lines per application if you don't want to.
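As a rough sketch, the agent's automatic logging is switched on in its Tempo config block along these lines. The key names changed across agent versions, so treat this as illustrative and check the docs for the release you run:

```yaml
tempo:
  configs:
    - name: default
      automatic_logging:
        backend: logs_instance    # or stdout
        logs_instance_name: default
        roots: true               # one log line, including the trace ID, per root span
```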
A: In that case, the agent just writes a log line for every single trace that comes through and sets up that index for you automatically. It's obviously not as flexible as doing it yourself, but it's much easier to get started with. Also, real quick, on the metrics: we also have features in the agent where, right now, it will publish histograms of span durations and counts of span names. So you can right now see the p99 of a span, the p50,
A: those kinds of things, or just the total rate at which a particular span is created. So the agent is obviously not as flexible as, say, LogQL in Loki, but it is a start in terms of getting metrics out of your traces.
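For example, once the agent publishes those histograms, a per-span p99 can be charted with a PromQL query along these lines. The metric and label names here are assumptions for illustration; check what your agent version actually emits:

```
histogram_quantile(0.99,
  sum by (le, span_name) (rate(tempo_spanmetrics_latency_bucket[5m])))
```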
A: Let's see... oh, I've got some more. "Being able to display arbitrary tags in the table output is something we desire at Shopify as well." Okay, I will note that and take it back to the team. I was kind of wondering that myself when somebody asked what you can do with this table, or how to customize it.
A: The first thing I thought of was that it might be nice to be able to say: if you see this tag in the trace, please put it on the table. That might help people choose which of the traces are relevant. So I'll take that back to the team; that's a good note! Oh, I should put this in the doc, shouldn't I? Let's see.
D: Now they actually work; it was just this previous issue we hit at first. Okay.
A: I really do like the idea of the arbitrary tags; I think that would be very valuable. Being able to search across span boundaries, something like "root span name equals cart and the trace contains a span with some other attribute", yeah. So that's something. We have this kind of working group now where we're discussing with some other members of the tracing community who are very advanced here.
A: You know: does a trace contain a tag, right, or is the root span name a certain thing? We're going to start with that, but we do see a future for this that includes structural questions such as what you're saying. Like: do you have a trace with this span name that is a descendant...
A: Sorry: do you have a span with a certain name that is a descendant of another span with a given name? That might be an instance of a structural question. Or: create a set of conditions on one span and ask whether it is a child of this other set of conditions anywhere in my trace. Those require cracking open a trace, marshalling it to some kind of internal format, and then digging through it. That's much more difficult, but it is also on our list.
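No such language existed at the time of this call, so purely as a hypothetical illustration, the structural questions being described might read something like:

```
# hypothetical syntax, for illustration only
{ name = "db.query" } descendantOf { name = "/cart" }
{ http.status_code = 500 } childOf { service = "checkout" }
```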
A: I would think of that on a similar timeline to metrics: we're going to tackle this first core piece of search, the things that everybody wants (duration, span names, operation names, tag names, and so on), and then we're going to start looking to the community on what to prioritize next. I would say metrics and these kinds of structural questions are both, in my mind, high on that list as the obvious next things to move to.
A: Very cool. Regarding the language: we intend to spend some time with a very small group working through the details and hashing out something that makes sense to us, at which point we will post it publicly for comment and take a round of comments from the community. The idea, of course, being to make sure that whatever we come up with meets the needs of the community and does what people expect from a trace language. But for the first pass
A: we want a more focused group, so we can move faster and hopefully cover ground quickly. Cool, fantastic demo, and I am super excited about this. This Grafana hackathon thing really gave Martin and Conrad a chance to get focused and provide enormous value in a short time, and really bootstrap this project; it's really neat to see. Any other questions? "Would watch again." That's right, 5 out of 5.
G: I had a comment on the UI side in Grafana. There was a mention of, hey, we could have some dropdowns, or the autofill text. But if you go look at the Loki interface, that has both, right? You can type in, you can start your LogQL selector, and it'll...
G: it will help autofill it in for you with suggestions. But if you're like, "I don't know where to start", you hit the browse thing and you get that big panel which (once Ed helped me out and I realized I was blocking an API) filters out the others as you select certain ones, right, so that you only see what's included in the path you've selected. So: I've hit this API; oh, this doesn't have all those other tags,
G: it only has these ones, right. That way your two interfaces would be consistent between them.
A: Yeah, I think there's a lot of opportunity here, and the team has already put together some basic things, like: show me the span names, show me the service names, show me the values that a tag may have. Right? Is that true? I think that's true. Okay, that's true, good, good. And I think we'll definitely take a lot of learning from the existing Loki UI and try to build both kinds of things:
A: this live-query thing where, if you want to type, you have ultimate flexibility, and also some of these more click-driven elements. Now, the Loki UI is very mature, of course, because Loki's been around quite a bit longer: at first it was just a freeform text field, and they've added the autocomplete and a lot of these nice features.
A: So I would expect Tempo search to follow a similar path, where at first we have a text field and we add a lot of these things as we go along. Maybe, with a little bit of luck, we can start with some of that, since we've seen the success in Loki (people love that in Loki), and we already have all of these calls: we know how they work, and we know we can get what we want, span names and service names and such.
A: Cool. Yeah, I did not prep Mario for this. Mario, do you want to talk about service graphs? We've talked as a team, and I think the two things that we really want to be pushing forward on are search, which Marty and Conrad are leading, and then Mario is going to be leading service graph creation, along with some Grafana team members.
A: I know you don't have anything prepared, because I didn't say anything until right now, but if you wanted to say some brief words about our thoughts on that, I think that might be valuable.
F: The idea is to create the topology of the system based on tracing data, so it just brings more value to instrumenting your system with tracing, basically. We have already done some work in this area and we have some ideas on how to implement it. We're looking into doing this work during the ingestion of the traces, or rather, especially, during collection: as traces are being collected in the Grafana Agent, we can generate the necessary data to build these service graphs.
F: Also, most likely the data will be in the form of Prometheus metrics. This is all still up for discussion, but one thought is that we could see those service graphs and how they evolve over time. So you would not only have the instant picture of your system; you could see how latencies, or how services are interacting, changed between early this morning and the afternoon, if your access pattern changed, and so on.
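Since the design was still under discussion, the following is only an assumed shape of the Prometheus metrics such a processor might emit, one series per client-to-server edge; the names are illustrative, not a committed interface:

```
traces_service_graph_request_total{client="frontend", server="cart"}  42
traces_service_graph_request_server_seconds_bucket{client="frontend", server="cart", le="0.1"}  17
```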
A: Cool. One thing we're talking about with the service graph, and the reason Mario is setting this up, is that these tend to be generated in the agent. We have features in the agent to drop spans and drop traces, to downsample before you push to Grafana Cloud or to a Tempo backend, in which case you would want to calculate metrics or calculate service graphs before that moment, to give you the best possible view of your systems.
A: So that's not quite as far along; we still need some work there, and a lot of that's going to be UI work to do it well. "Is it possible to trace a call that spans across multiple Tempo back ends?" If you mean query a trace that ended up in multiple Tempo back ends: yes, technically, but that's actually a Grafana Enterprise feature, and so it is not in the open source project.
A: You could write a front end to do that yourself, but the actual feature as released by Grafana will be an enterprise product, part of... we have, I think it's called GEM, right, Grafana Enterprise Metrics; we have Grafana Enterprise Logs, and Traces as well, and it will be... oh yeah, "plans for federation", wow, everybody has the same question. It is actually under development now and should be wrapped up soon, but it will be part of our enterprise offering.
A: Cool, cool. Next up on the list: this was totally not spoiled, because I didn't just type it right in here, but Mario was made a maintainer since our last community call. It's actually been a while, I think maybe almost a month and a half, since we made him a maintainer, but we haven't had a community call, and I really wanted to highlight his work.
A: He's done an enormous amount of work in the agent to add almost all of the features we announced at GrafanaCONline; he's been working on the agent and integrating with the OpenTelemetry Collector, he's done some upstream work there as well, and he's done some work in Tempo.
A: We are very excited to have him on the team, I think he'll continue to be an awesome member of this team, and we look forward to service graphs. Mario?
F: Yeah, well, thanks Joe, and all the team. I'm very grateful to be a maintainer and I'm very excited to keep working on Tempo. I mean, just look at search: it's a very rich field, and there are so many things to come. Yeah, I can't be anything other than excited to work on Tempo. Cool.
A: Yeah, very, very excited to have you, man; you've been a great member of the Grafana team, the Tempo team. Cool. So there are a couple more things, more technical, related to the operation of Tempo, that have happened recently that I wanted to talk about. We're going to finish up our part with those, and then after that we'll have this Q&A period; you're welcome to hit us with any questions that anyone may have.
A: The first is this idea (there's a link in the doc) of hedging requests. We are, of course, querying a back end like object storage, right, S3 or GCS, and we were seeing wide ranges of latency on this backend. Our p50 was nice; it was a couple hundred milliseconds.
A: It was very low when we were requesting data from the back end, but sometimes it would go to three or four seconds; that was roughly our p999. But the way we did our search, we were bound by this long tail: if one request to the back end took three seconds, it would hold up the entire trace search, and the entire trace search would then cost three seconds no matter what we did. It was heavily impacting our p99.
A: So we found this idea of hedged requests. The PR, I think, links to the library we used. If you use Go and you have a similar kind of problem, I'd recommend this library; it's a very cool library, and they made a lot of improvements for us, which we appreciate. You can also see in that linked PR our p99 tanking.
A: It was like 10 to 20 seconds, and it dropped to basically right above our p90, at, you know, 250 milliseconds... no, sorry, 2.5 seconds or so. And the trick here is that after a certain amount of time (it's basically a timeout, and it's configurable in Tempo) if the request doesn't come back, you just issue it again. It's kind of like what we all do
A: naturally: when you go to a website and it's spinning and spinning and you just hit F5 and it comes back instantly, it's the exact same idea. It's what you do in your normal browsing when something is not quite coming back and you're not sure why. It's just an attempt to cut off the long tail by issuing a second request, and we've set our internal threshold a bit above that.
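Here is a minimal sketch of the pattern in Go using github.com/cristalhq/hedgedhttp, which, as far as I can tell, is the library the PR links to. The timeout, attempt count, and URL are illustrative, not Tempo's production settings:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/cristalhq/hedgedhttp"
)

func main() {
	// If a response hasn't arrived within 500ms, fire a duplicate request;
	// allow up to 3 requests in total and take whichever answers first.
	client, err := hedgedhttp.NewClient(500*time.Millisecond, 3, &http.Client{})
	if err != nil {
		panic(err)
	}

	// Hypothetical object fetch standing in for an S3/GCS backend read.
	resp, err := client.Get("https://storage.example.com/tempo/block-meta.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```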
A: I'd recommend, if you deal with anything similar, if you code in any kind of similar space, checking this idea out. I think it's called "The Tail at Scale"; there's this whole nice paper about it, written by Google, of course, but it's just a nice paper about how to deal with these kinds of long-tail issues at scale, and this idea of hedged requests came from that paper.
A: Cool. Memberlist: I have to bring memberlist up again. I feel like every other community call we say, "hey, memberlist is finally fine, it's stable, yay", and then the next community call I say we found another problem and we're having issues with it. So this is the call where we say we're having problems with it and we're continuing to research them. If you have a large production Tempo install, I would recommend maybe using Consul or etcd.
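A hedged sketch of what that switch looks like in Tempo's configuration; the field names follow the Cortex-derived ring config of this era, so double-check them against the docs for your version:

```yaml
ingester:
  lifecycler:
    ring:
      kvstore:
        store: consul                    # or: etcd (instead of memberlist)
        consul:
          host: consul.example.svc:8500  # hypothetical address
```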
A: We continue to use memberlist because we really want to get it production ready; we want to see these problems and expose ourselves to them so we can get them fixed, and we have a member of the Cortex team actively working on figuring out what's going on here and getting a fix together, because all this code is in the Cortex code base.
A: But if you're seeing similar issues, I would highly recommend maybe moving to Consul or etcd to store your ring. I know it's an extra dependency, but if you need a rock-solid production system, then I would recommend those approaches. The issue that we continue to see is the same:
A: we have an ingester or a compactor or some member of the ring that we cannot forget; you click "forget" and it disappears for a while, and then it comes back, and it just messes everything up and makes you sad. So, like I said, we have active development on this. We really want to get this production ready for all of our back ends, Cortex, Tempo, and Loki, and unfortunately Tempo is somehow the guinea pig.
A: I don't know why it worked out that way, but we are the only internal team that uses memberlist right now, until we can get it stable and make sure it provides what everyone needs, which is a stable way to propagate the ring. So, just a heads up on that continuing saga of memberlist. Does anybody have any other questions or comments? "The forget issue." That's right! Sorry, Zack.
A: I recommend Consul or whatever. "Any plans to support an auth proxy?" Yeah, real quick, to what Zach said: it has gotten to the point where I think I can cycle our entire ops cluster in under a minute by bringing the whole thing down, which basically erases memberlist. Memberlist is an in-memory ring that's gossiped between all of the different elements of the cluster, and one way to just destroy it
A: is to literally bring down the whole cluster and bring it back, and we can do that in under a minute. Which is not good; well, it's good, I guess, that we can do that, but it's not good that we've had to. I think we need this fixed; we need it fixed, and I know you all need it fixed too, and I'd recommend looking maybe at a Consul or etcd installation. To the question of an auth proxy:
A: I'd like to maybe hear a little bit more about what you mean by that. We have an internal proxy, but there's no reason to open source it, because it integrates with our internal grafana.com API: it exchanges a token, makes sure the token's valid, and then attaches the tenant ID, the X-Scope-OrgID. So there's zero reason to open source it, because it only works with our private authentication API.
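For anyone who wants to roll their own, the core of such a proxy is small. This sketch validates a bearer token (stubbed out here) and forwards to Tempo with the tenant attached as the X-Scope-OrgID header; the addresses and the validation logic are assumptions, not the internal proxy described above:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// validate is a stand-in for real token verification against an auth API.
func validate(authHeader string) (tenant string, ok bool) {
	if authHeader == "" {
		return "", false
	}
	return "tenant-1", true
}

func main() {
	target, err := url.Parse("http://tempo:3200") // hypothetical Tempo endpoint
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tenant, ok := validate(r.Header.Get("Authorization"))
		if !ok {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		r.Header.Set("X-Scope-OrgID", tenant) // multi-tenancy header Tempo reads
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```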
A: So if you could provide maybe some details about what you would want out of an auth proxy, maybe we could talk about what opportunities we might have to work on that.
A: I think Cortex has one, is that right? And doesn't Loki use an NGINX thing? I think there have been some other random attempts; I might be making this up. Isn't there an open source cortex-gateway, not maintained by Grafana?
A: I don't know. I guess my short answer is that this is not on our roadmap, basically, to have token-based authentication. We do use Grafana's auth proxy to authenticate users. For token-based authentication, maybe something like this cortex-gateway would work; I'm basically just scanning the readme right now, so I don't have deep operational experience here, but it looks like it'll exchange a JWT for a tenant ID, and I think that's perhaps what you're asking.
A: Certainly, if people put effort into this, we would help support it; I wouldn't actively work against somebody putting this kind of feature together, it would be nice. But we do not have any work slated towards this right now. Oh look, Dave Henderson has actually worked on this cortex-gateway. I don't know; he's one of our engineers on the hosted Grafana team.
A: And then I think another thing that has come up in this vein is our Helm charts. I think Loki has an NGINX ingress proxy thing, is that right? And people have asked us about putting that in our Helm charts as well. Thanks, Daniel. And I don't know; the Helm charts are very heavily community maintained.
A: I will approve PRs and merge them, and every once in a while I'll pull the whole thing and test it and make sure things are still looking good, but we don't really do a whole lot of work internally maintaining that Helm chart. We use jsonnet, and so it's kind of difficult for us to iterate on the chart, because we don't use it and we don't see the output of it.
A: I wonder if, at some point, we should take a list of, you know, high-priority features in the Helm chart to increase adoption, and just spend a week and knock out as many of them as possible to help the community. Because it's difficult for us to know what people need, since we don't use it, and I think, as a result, it's not as good as it could be.
A: "Cortex has one as well; the Cortex Helm chart has an NGINX thing." Okay, yeah, let me talk about that internally with the team. Maybe my hackathon project should be making the Helm chart way better. Having someone spend some time, a week or two, and just dig through the Helm chart would really help the team become familiar with it. I think I may be the only one who's installed it; maybe one or two other members have installed it.
A: I'm not sure. Having some more experience with it, as well as trying to add some of these features, could be very valuable. In fact, there's a Helm chart PR up right now and I have not had a chance to read it. It's difficult for us to support it,
A: unfortunately. But let me bring that up with the team and see if we can get somebody focused on providing that same NGINX ingress thing, and then see if there are any kind of proxy capabilities there, or auth or anything like that; I don't really know what they have in those NGINX configs.
A: The biggest problem with the Tempo Helm config is that it's a big string instead of an object. Actually, somebody just submitted a proposal and a new config structure, and I have not looked at it. I think the idea of that giant string is that you can override the whole file at once by just providing your own config file, and then I think it also lets them template in variables.
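In other words, the pattern being described is roughly the following, as a simplified illustration rather than the actual chart's contents:

```yaml
# values.yaml: the whole Tempo config is one templated string that users can
# override wholesale, with individual values interpolated into it.
retention: 336h
config: |
  compactor:
    compaction:
      block_retention: {{ .Values.retention }}
```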
A: So you can add other config values at the values.yaml level and it will template them into the broader config. But I have not had a chance to look at this, and I'm just not a Helm chart guru by any stretch of the imagination,
A: so it's just not something I can comment on. Well, instead of a big string, I guess we want well-formatted YAML; we want the YAML to be part of the values.yaml. Is that kind of the goal here, that we can override and build a config file directly in values.yaml? Is that the goal?
H: Well, I'm not sure; I'm not sure if that's the way to go. I'm not sure if Tempo has this problem, but Cortex has it: multiple components share configs from other components, so some configs for, say, the distributor are defined in the ingester part. So that's also a problem. I don't know if we can fix the general config before splitting the parameters.
A: Yeah, I know what you mean, where every component shares the same config; it's not good. In fact, one of the things it does is make the ingesters in the Helm chart poll the back end, which they don't need to, and other pieces too. So when we define our config, we turn off polling from the ingesters, because it doesn't need to happen, but the Helm chart will do that. So there are details that aren't broken but are sub-optimal because of this shared config, and I agree with that.
A: If you do have any feedback, there is this issue that was just opened where somebody is trying to help us with the config, so please comment on that. If you have ideas, if you have experience in this area and you comment on this issue, you can help me evaluate this proposal and the PR that was submitted along with it, and I'd really appreciate it. Oh yeah, my bad, thanks Koenraad; apparently I can't copy-paste.
A: Thank you. Cool, thanks. Thanks, Andre. "An OpenTelemetry Collector exporter that acts as a distributor?" Strangely, yes, we have discussed that. Basically, we would need the ingesters to support the OpenTelemetry protocol. Right now we support receiving the OpenTelemetry gRPC service at the distributor level, but not at the ingester level.
A: The main reason we don't do that is that in the distributor we do a lot of rate limiting, we have some custom features there, and, additionally, we support a replication factor. Every span that comes into Tempo in our cluster is replicated three times and sent to three ingesters, and this improves availability as well as durability of the data. So it would just require the OpenTelemetry Collector to be a little bit of something
A: that it's not: additional features, additional processors. That's kind of why we haven't tackled it, although we have discussed it. Something else that concerns me is the collector itself: I think the collector will return a 200 before it's passed the data on; it buffers everything in a big queue, right, so it will send success to the sending service before it passes the data down the line to the ingester. Something we value in Tempo is that when we send a 200,
A: we have passed that data on to at least three ingesters, and we feel very confident that the durability of the data is high; we have said, "we have this data, we've stored it". Whereas I don't think the collector supports that: I think the collector 200s immediately, and if the distributor fell apart before it was able to pass the data on to the ingester, then we wouldn't actually have stored it. So I think those are the main reasons we haven't done it, or the ones I can talk about, anyway.
B: Yeah, and I think the functionality is probably going to diverge even further. I mean, just what we talked about today, search, is specialized functionality that wouldn't make any sense in the OpenTelemetry Collector. Yep.
A: Yeah, that's a good call. We're purposefully trying to do as little marshalling and processing in the ingester as possible, which actually puts a little bit more work, and a few more features, on the distributor. I think some of the search design that Marty showed was focused around that: how we're doing work at the distributor level and passing it on to the ingester.
A: Cool, all right: this was a good meeting, a good community call. Thank you all for showing up, discussing, asking questions, and contributing ideas. We'll take everything that was mentioned back; there were a lot of those Grafana comments about some of the UI, and we'll take that back to the team and continue to develop it. Please show up to the next one if you can; we'll continue to provide roadmaps. Hopefully at that point we'll have, you know, additional ideas, or maybe a feature flag
A: you can actually turn on in Tempo and Grafana to see some of this and play with it yourself. And yeah, please stay in touch with the community and the team, and you will see a lot of this develop fast, we hope. All right, take care everybody, and we will see you when we see you.