From YouTube: Tempo Community Call 2021-01-14
A
Sure, to start the meeting we have people in here from Tempo and from Walmart: Ananya from us and Diana from us.
C
Yeah, so, my first question is, okay, what is the scale, the scalability, of Tempo? We are looking for, like, millions per second, like three million spans per second.
A
Three million per second: we currently do 300,000.
A
And we have a 14-day retention. So you'd have two kinds of considerations here: the write path, how much you can ingest, and then, once you start ingesting all this and writing it out as blocks to your backend, how many blocks you have and querying those blocks. The more you ingest, of course, the more blocks you have and the slower your queries can be, so those are kind of the two things you're balancing.
C
Yeah, two weeks at least. Okay.
A
Recently Ananya has done some work on what we call the query front-end, which has allowed for parallelization of our queries, and it's improving that side quite a bit. And then there's another kind of improvement we're working on in the ingesters right now; they reply too slowly. But I think three million spans per second is within the range of Tempo right now, with no architectural changes. There might be some speed bumps as you scale up to that size, you know, some small problems or things that we need to address, some performance improvements. I could see those kinds of things cropping up, but I don't...
C
Yeah, I heard that, that's the goal.
A
Yeah, yeah. I was really waiting on the work that Ananya has put together, which is the front-end, because that was what would really slow down; the query path is what would really slow down for us. I have no concerns about the write path making it to two million spans a second, but I did want to make sure we could query it effectively once we...
A
You mean like having a shorter blocklist with larger blocks?
A
So we cap our blocks at six million traces, and it takes us about an hour to compact, to create, that block. And this is limited by... I guess I don't know what backend we're talking about, but we use GCS, and we see about 40 to 50 MB per second to GCS through our compactors. So there comes a point where you're spending too much more time compacting blocks than it's worth. But it's certainly...
A
It's certainly something that we could experiment with more; there are a lot of tunables in Tempo that would let you create larger blocks if you wanted, and you could kind of get a feel for how big of a block you could create, what that would look like, and how that would work with the compactors. Yeah, but like I said, right now we do about six million; I think that is our cap. Is that right, Ananya?
A
And last I checked, we had three billion traces over a two-week retention. An interesting aspect to all of this is kind of your spans-to-trace ratio.
A
Yeah, so... we average about 100 spans per trace.
A
You know, if you had fewer spans per trace, you could push those blocks to be larger; if you had more, maybe you'd have to reduce that size. I don't know. So there are a lot of tunables in Tempo that would let you kind of adjust it to handle your workload.
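As a rough back-of-the-envelope on how the figures mentioned in this call relate (300,000 spans/s, roughly 100 spans per trace, a 6-million-trace block cap), here is an illustrative calculation; this is plain arithmetic, not Tempo code:

```go
package main

import "fmt"

func main() {
	// Figures from the call; everything below is illustrative.
	const (
		spansPerSec   = 300_000.0   // current ingest rate
		spansPerTrace = 100.0       // average spans-to-trace ratio
		blockCap      = 6_000_000.0 // traces per block
	)
	tracesPerSec := spansPerSec / spansPerTrace
	minutesToFill := blockCap / tracesPerSec / 60
	fmt.Printf("%.0f traces/s, ~%.0f minutes of traffic per block\n",
		tracesPerSec, minutesToFill)
	// Fewer spans per trace means more traces per second at the same
	// span rate, so the same block cap fills faster (and vice versa).
}
```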
C
Right now we have our own tracing as a solution. Okay...
C
The traces are stored in Splunk, so basically we are just searching Splunk for all the traces, and we are looking forward to the next generation of the system, yeah. Okay, now, one piece of information would be very helpful for us: for your one-million performance test goal, besides the one million, I would like to know what kind of scale, what kind of resources, right? How many CPUs, how much block storage, and the overall cost for that? That would give a very good indicator. Okay, now?
C
How can we scale it, and also how costly is it to do that?
A
So we're at about 50 CPUs at 300,000 spans a second. Memory, we're at about 90 gig, is what it says. Is that right? I'm looking at our little spreadsheet now that we used.
C
That's it per second, right? Yeah. Actually, that is the requests per second.
A
Okay, yeah, I do think Tempo is in range of three million per second without...
A
Who do I think... who's on Amazon? I must have made that up. Okay, are you on internal resources? Are you all self-hosted?
A
Okay, so you have like your own hosting.
A
Yeah, no doubt. Welcome, Nathan. So we're...
E
Any other questions? So we've got... it's a pretty intimate community call this time. So you know, you've got the two guys building Tempo; feel free to ask more questions or just geek out on tracing, since... well, that's what we all do, I guess.
C
Yeah, I would say, currently you are mainly searching trace by trace ID, right? Any plan for additional features to support more advanced searching?
A
Yeah, so nothing concrete right now, but we do recognize that people are wanting more advanced search, and so we intend to support that eventually. Right now we're focusing on kind of shoring up the existing feature set, which is minimal. But the idea is, again, mass trace storage, you know, and we think we can hit that with what we have now. That's the primary goal and what we're working towards aggressively. After that, then? Yes, I do have...
A
I do strongly believe we're gonna start looking at some kind of query language, some way to ask for, you know... I'm not gonna promise anything here, but generic kinds of normal questions, like show me traces over a certain duration, or show me traces that have these kinds of tags, or whatever.
A
So the answer is yes, there is that on the horizon. We've gotten a lot of response from larger companies who have this problem, where they want to store extremely large trace volume, because everything is so costly, and Tempo has kind of hit a nerve there, we feel like. But we do recognize we need it to grow Tempo more and to continue building the product and the open source community.
F
Wanted to make you aware: so my team and I, so Jeremy, who is your technical account manager, and then myself, who is the account executive over Walmart, we've actually been working with Shri, who's one of the SRE team leads out of Arkansas. So we're working with her on that side of things; she's more on, like, handling the metrics and some of the log stuff. But as far as the tracing side of the conversation goes, we actually have an upcoming meeting.
F
Awesome, I love it. Well, it's great to meet you. I wanted to let you know what we're all thinking about. I know that they had some questions, things they're looking to learn more about, and then, as a follow-up to that, Joe has already volunteered himself for later in January; we will kind of have a more in-depth conversation after we've collected some, you know, more information on what you're looking to do. So, okay.
A
Cool, so yeah, I do have some... I think Ananya has met with a certain group at Walmart a few times, and I have one coming up as well. So yeah, we're excited to talk with you all about it, and like I said, I think your scale is what Tempo is really meant for, and I'm excited to see you all give it a shot, especially at that kind of range. Three million a second would be amazing. It'd be a lot.
A
To work through, to work on it with you all, to, you know, overcome whatever's necessary to get there. I don't think it'd be major; I think it can hit that easily, but yeah, there'll be something, right? Some crack will appear once we go to 10x what we have now.
E
Yeah, okay. What's really interesting to me as a layperson, and you know, if I wasn't working at an open source company I wouldn't know this, is how supportive Walmart has been of these, of our, open source projects.
E
So it's interesting when kind of, you know, the stereotypical big company says, okay, open source is the way to go, and...
C
Walmart has been using open source products and has also open-sourced projects for a long time, yeah, at least 10 years back. About that time, Walmart Labs... Walmart Labs is more on the e-commerce side and tries to use all the open source technology.
G
Cool, okay, well...
A
No doubt, yeah, we'll be glad to meet you all in a couple weeks.
A
Yep, okay. So we have recently put together the query front-end; Ananya worked on this. Ananya, do you want to give a quick overview of the state of that work and kind of what we're doing there?
D
Sure. So, first off, like, thanks, that was super helpful, and I think this is exactly what we're trying to do right now with Tempo, right: throw as much data at it as you can, and it's all gonna be in object storage, so super cheap, 100% sampling, and that's exactly where we want to be. Something that we've been working on over the past quarter is the query front-end, which helps to kind of scale the query path, and it helps us do an exhaustive search over all of the blocks that are present in the backend, in parallel.
D
So previous to this addition, a single node would actually scan through all of the blocks iteratively, and that is expensive in terms of time and compute. Well, we haven't solved compute yet, but for time, what we've done is gone ahead and sharded the queries at the query front-end, and I think it'd also be interesting to show a bunch of metrics and traces about the query front-end. Here we go.
D
So yeah, now, for every query that we get, we shard it. We shard the block space, so a single querier, a single node, doesn't have to query all of the blocks. We can shard the query and split the block space, and each of these block spaces can then be searched in parallel, and that's helped dramatically reduce our fetch times. So let's take a look at that.
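A minimal sketch of the sharding idea described here, with hypothetical types and function names; this is not Tempo's actual query front-end code, just the shape of splitting a block list into shards and searching them concurrently:

```go
package main

import (
	"fmt"
	"sync"
)

// Block stands in for a backend block; in Tempo the shard key is
// derived from the block and trace ID space. All names are hypothetical.
type Block struct{ ID int }

// searchShard scans one slice of the block space for a trace ID.
func searchShard(blocks []Block, traceID string) []string {
	var results []string
	for _, b := range blocks {
		// ... fetch bloom filter / index / data for b, check traceID ...
		_ = b
	}
	return results
}

// searchParallel splits the block list into shardCount shards and
// searches them concurrently, mirroring the front-end handing
// sub-queries to queriers instead of one node scanning iteratively.
func searchParallel(blocks []Block, traceID string, shardCount int) []string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results []string
	)
	size := (len(blocks) + shardCount - 1) / shardCount
	for i := 0; i < len(blocks); i += size {
		end := i + size
		if end > len(blocks) {
			end = len(blocks)
		}
		wg.Add(1)
		go func(shard []Block) {
			defer wg.Done()
			r := searchShard(shard, traceID)
			mu.Lock()
			results = append(results, r...)
			mu.Unlock()
		}(blocks[i:end])
	}
	wg.Wait()
	return results
}

func main() {
	blocks := make([]Block, 20)
	fmt.Println(searchParallel(blocks, "abc123", 4))
}
```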
D
And let's collapse all of these and focus on the read dashboard. So basically, what we see here is really concise information about the query path, and here we see the number of queries that we're getting at the query front-end. Right now we have a really small, artificial workload, right: we have this tool called vulture, which queries Tempo regularly, because there aren't so many humans poking around Tempo at the moment. So this is, yeah, an artificially generated load, and we can see it at the query front-end.
D
So before we introduced the query front-end, right, this is what it looked like: I think the roughly 98th percentile for all of our requests used to be two seconds, and then something happened here and it went down to one second. And the point is that it's not just this improvement: we can now actually scale up the number of queriers, which are splitting out the query and processing it in parallel.
A
Okay, that two-second p99 is actually driven by some locking in the ingesters, which we identified just a couple of days back, so we'll fix that soon as well. And I actually expect to have a p99 in the five to 600 millisecond range, with p90, of course, and p50 significantly below that.
D
Yeah, this is a trace of Tempo itself now. So, hey, I'm still going to talk about it in this case. This is the trace of retrieving a trace from GCS, and it's starting to look like a real distributed system now, because we have three services that this trace actually passes through: we have the query front-end, we have the querier, and one of these should have the ingester, right there we go. So the query path in Tempo right now passes through these three components.
D
It first hits the query front-end, where it's sharded out and split into a bunch of smaller queries, which are all queued, and then the querier will feed off of this queue, pick one of the shards that it needs to work on, and then perform the search on it, which includes searching the ingester as well as the object storage backend. And so, yeah, that's cool. That was what...
A
Let's see, so, other things I kind of wanted to review. I wish Marty were here; he's currently out. But Marty recently contributed some really nice gains on the CPU.
A
We were previously marshalling our proto twice and sending it twice, and his changes allow us to basically only marshal it once. We saw almost a halving of our total CPU usage as a result of this. On the distributor that makes sense, due to the reduction in the number of times we're marshalling, and because we knew we were doing less work on the ingester side.
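A minimal sketch of the marshal-once pattern being described, with hypothetical names, and with JSON standing in for proto so the example stays self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PushRequest stands in for the message a distributor sends to each
// ingester replica; names here are hypothetical, and json.Marshal is
// a stand-in for proto marshalling.
type PushRequest struct {
	TraceID string
	Spans   []string
}

// sendNaive marshals the same request once per replica: the
// redundant CPU cost described on the call.
func sendNaive(req PushRequest, replicas int) [][]byte {
	out := make([][]byte, 0, replicas)
	for i := 0; i < replicas; i++ {
		b, _ := json.Marshal(req) // marshalled N times
		out = append(out, b)
	}
	return out
}

// sendOnce marshals a single time and reuses the bytes for every
// replica: the shape of the fix discussed here.
func sendOnce(req PushRequest, replicas int) [][]byte {
	b, _ := json.Marshal(req) // marshalled exactly once
	out := make([][]byte, 0, replicas)
	for i := 0; i < replicas; i++ {
		out = append(out, b) // safe as long as callers don't mutate b
	}
	return out
}

func main() {
	req := PushRequest{TraceID: "abc", Spans: []string{"a", "b"}}
	fmt.Println(len(sendNaive(req, 3)), len(sendOnce(req, 3)))
}
```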
A
We also saw some really impressive gains, though, that make less sense to us, and it's something we're still investigating internally. Marty put together a really nice PR, which is currently linked in the doc.
A
It links a lot of different metrics, and he starts a good discussion about what we understand and what we don't understand, so I'd recommend reading through that if you're interested. There's probably some other low-hanging fruit for CPU cost here and there, but this reduction in CPU usage was a huge win for Tempo TCO for sure. At the time that he did this, CPU was actually our largest cost.
A
Yeah, if I could... let me look, let me look, it's a good idea. I was not prepared to do that, but I can figure it out.
A
Well, maybe I can... oh, there we go, yeah. Let me show this, yeah, present now, a window, probably this window, yeah, yeah.
A
I know, right? Let me double-check that I was not about to broadcast whatever is in my Slack at the moment. So this is our... let me do just the last five minutes or so, so we know for sure what it is. This is our CPU usage right now; this is after Marty's fix, or performance improvement.
A
Oh, this is when we reverted briefly: so this is his initial test, this is moving back to our normal version, and this is when we actually deployed this version. So there you can see it's roughly cut in two, which is frankly crazy. Like I said, part of it makes sense and part of it doesn't; it requires us to dig more into gRPC to understand.
A
Winter break... there you go, back up here. There's also been a lot of fluctuation in our ops environment in terms of how much we're ingesting, so it's kind of hard sometimes to tell, you know, what this CPU relates to, or how many spans per second, I suppose. Cool.
A
There's a couple things: we have identified some locking in the ingester, and we just submitted, or merged, a PR today, and I think the community should be aware of it. Let's see, this PR...
A
So there is heavy work being done where we append new traces coming in, and that work is to maintain a slice of index entries, essentially, and the slice is ordered. So we are inserting into the slice, which is painful; of course, there are no surprises there. In fact, I wrote it, and I was thinking this will one day make me sad, and it was last week, about when I started digging into this problem, that I realized it was that code that was making me sad. Inserting into a slice is expensive.
A
I think it's order n, and we need something significantly cheaper than that. So what we did to fix this was just to adjust the locking so that push was no longer impacted by it, and we should see... let's see, we should see...
A
Yeah, I know, I was preparing it on this much safer screen before I brought it over to the public screen. Let's see, let's look at just the last three hours. I mean, I deployed it this morning, a little bit longer than three hours ago, apparently.
A
So we went from a p99 of two seconds, which really closely matches Ananya's p99 on this query, and so there's this roughly two-second lock, basically, that's killing us. So on the query path we have this... we were here; it looks like p99 is 1.8 or something. In fact, let me just hide this.
A
Since it's killing our graph there... p50 dropped quite a bit, as well as, whatever this is, p90, and then p50, right, right, but p99 of course came down as well. Let me... oh, that looks good, p99.
A
Let me get rid of that huge spike, and so we can see we were at about two seconds, and now we're at about 400 milliseconds, I believe, 400 to 500 milliseconds. So it was a huge reduction in latency there. And the same problem exists on the query path, but it'll be harder to fix because we can't just adjust the locks this time; we actually have to fix the performance issue.
A
I'm probably going to use a heap, basically, for log(n) insertion and search. It'll just require a little bit of code refactoring to adjust how this works, but it shouldn't be that hard, and I expect to get to that soon, at which point Ananya's work on the query front-end will be far more obvious, and the ability to scale will be far more powerful.
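A minimal illustration of the difference in Go, assuming a simple integer-keyed entry: container/heap gives O(log n) insertion, versus the O(n) sorted-slice insert being replaced. This is a sketch, not the actual ingester code:

```go
package main

import (
	"container/heap"
	"fmt"
	"sort"
)

// entryHeap is a min-heap of index keys; heap.Push is O(log n),
// versus O(n) for keeping a plain slice sorted on every insert.
type entryHeap []int

func (h entryHeap) Len() int            { return len(h) }
func (h entryHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h entryHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *entryHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *entryHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// insertSorted is the O(n) pattern being replaced: find the spot,
// then shift everything after it over by one.
func insertSorted(s []int, v int) []int {
	i := sort.SearchInts(s, v)
	s = append(s, 0)
	copy(s[i+1:], s[i:])
	s[i] = v
	return s
}

func main() {
	h := &entryHeap{}
	for _, v := range []int{5, 1, 3} {
		heap.Push(h, v) // O(log n) per insert
	}
	s := []int{}
	for _, v := range []int{5, 1, 3} {
		s = insertSorted(s, v) // O(n) per insert
	}
	fmt.Println(*h, s)
}
```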
D
Yeah, it's just the 99s, I think.
A
And I was unsure if I was configuring it wrong or doing something incorrectly, and I realized it was actually that p99 which was driving up our query latency so much. It was making it just hard to see the impact of the scaling, so that will improve soon.
D
Actually, even the memcached (is that how you say it?) connections: just increasing the number of idle connections reduced the connection churn so much, and we're no longer seeing those connect timeouts, and cache latency is stable. It's so much better, yeah.
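For the underlying client behavior, here is a hedged sketch using the gomemcache Go client; the value of 100 is made up, and in Tempo this would be set through its cache configuration rather than directly in code:

```go
package main

import (
	"fmt"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	// Raising the idle connection pool lets the client reuse
	// connections instead of constantly opening new ones, which is
	// the connection churn described above.
	mc := memcache.New("memcached:11211")
	mc.MaxIdleConns = 100 // illustrative value; the library default is 2

	if err := mc.Set(&memcache.Item{Key: "k", Value: []byte("v")}); err != nil {
		fmt.Println("set failed:", err)
	}
}
```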
D
While this loads: what updates do we have from our last release? This is going to be a huge, huge release. It's like a big, chunky release.
A
Big chungus release, yeah. It's gonna have a lot of stuff in it, and not just by line items; the line items are not gonna look that impressive compared to some previous releases, but there are some huge improvements: the query front-end, the CPU gains we talked about. I don't know what else; there's probably some other things in there.
D
This is the number of connections we have to memcached in our tracing ops environment, and this is over a seven-day period, right, and this is when I made the change of increasing the number of idle connections. And if you look here, we have a connection rate of 56; it varies, but it's about 16 to 70.
A
I think you'd probably see more query impact, again, if we were not plagued by our ingester issue, yeah, which I'm excited to fix.
A
So yeah, well, we should have a new PR up soon, I'd hope sometime next week at least. It won't be in 0.5.0, but honestly, most people are not impacted by this unless you're doing pretty high volume. So, 0.5.0 probably in the next release, and shortly after we cut 0.5.0 we'll have a fix for the query path.
A
Let's see... a fix for the query path coming soon; what else? We are in the middle of a heavy backend refactor. I'm gonna put a couple PRs up here; it's something I'm personally working on. So, one of our goals for v1 is to have compression. Like I said earlier, CPU used to be our largest cost, and now storage is our largest cost; we're currently storing 80 terabytes in GCS, so compression will of course help with this.
A
We actually also expect it to help with query speed. I think it's going to cost us a little in CPU, of course, and in memory, because we're going to be doing more work to compress, but I do expect our query speed to go up, because we'll be requesting less from the backends when we pull the data, and I of course expect our storage cost to go down. We'll see.
A
If that's true, I think you're kind of balancing, basically, how much compression ratio you're getting, and how long it takes to pull that across the wire, versus how long it takes to uncompress. So presumably we'll do this in a way that gives us some time back, because otherwise we'd have spent more time transmitting the data than we now spend decompressing it. But I could be wrong; we'll figure that out, and of course we intend to do some experimentation and make sure that we're getting some good value out of this before we push it.
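A small illustrative calculation of that balance; every number below is a made-up placeholder, not a measurement from this call:

```go
package main

import "fmt"

func main() {
	// Compression wins when the transfer time saved exceeds the
	// decompression time. All figures are illustrative assumptions.
	const (
		blockMB        = 1024.0 // uncompressed bytes fetched per query
		ratio          = 3.0    // assumed compression ratio
		wireMBps       = 50.0   // backend read throughput (MB/s)
		decompressMBps = 400.0  // decompression throughput (MB/s)
	)
	raw := blockMB / wireMBps                                      // fetch raw bytes
	comp := blockMB/ratio/wireMBps + blockMB/decompressMBps        // fetch + inflate
	fmt.Printf("raw: %.1fs  compressed: %.1fs\n", raw, comp)
}
```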
A
But the refactoring is basically to isolate the way we write our blocks to the backend into a set of code that we can then... essentially, we could basically have a shim between the code that, you know, queries and reads and deals with the blocks, and the code that actually writes the blocks. So there'll be a kind of logical separation there, which will allow us to have versioned blocks, which is the goal, the real goal, of this refactor.
A
So we can make some changes, and Tempo, when you query it, will pull different blocks and will know how to deal with an uncompressed block and a compressed block at the same time, and can kind of deal with these different versioning issues. So I'm hoping to get that in, I would say, in the next couple weeks. I've been trying to do small changes; even so, some of these PRs are large, and Ananya's been kind of dealing with them.
A
Well, I've been trying to do it a little bit at a time, because the whole change would have been extremely large. Tempo was kind of written as a proof of concept, and it was perhaps not really designed in the best way; it could have been done better, at least knowing what I know now. But I think that's normal for software: you always get to a point where you wish you'd done something slightly different, and you do a little bit of rework to make that happen.
E
Or at least it's common for agile and open source. So, you know, just growing pains; a bit of normal stuff.
A
Okay, no, I think what we're going to do here is, right, so we have that index downsample configuration, right? For every record in our index we reference a bunch of traces; right now we're set at 100, and I think that's the default for Tempo.
A
So every record in the index refers to 100 traces, and we're going to compress those as a group, essentially. So everything will still look the same, except when you, let's say, do your find. So right now, when we do a find, we pull the bloom filter; we check the bloom filter; if that passes, we pull the index; we use the index to pull a section of the full block, of all of the data; and right now, that block section isn't compressed, so we immediately start iterating through it.
A
So instead, that will be one compressed block; we'll decompress it and then start iterating through it. Cool. So right now I'm wondering, kind of, if it will let us push our 100 traces per record higher, which would reduce our index sizes.
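A schematic sketch of that find flow (bloom filter, then index, then one group of traces pulled and decompressed), with hypothetical types; Tempo's actual storage code differs:

```go
package main

import "fmt"

// Hypothetical stand-ins for the flow described: check the bloom
// filter, use the index to locate the group of (here, 100) traces
// containing the ID, pull that region, decompress it if the block
// version is compressed, then iterate through it.

type block struct {
	bloom      map[string]bool // stand-in for a real bloom filter
	index      []record        // one record per group of traces
	compressed bool
}

type record struct {
	minID string // first trace ID covered by this group
	data  []byte // the group's bytes in the backend
}

func (b *block) find(traceID string) []byte {
	if !b.bloom[traceID] { // bloom filter says "definitely not here"
		return nil
	}
	// Simplified linear scan; real code binary-searches a sorted index.
	for _, rec := range b.index {
		if rec.minID <= traceID {
			buf := rec.data
			if b.compressed {
				buf = decompress(buf) // one decompress per group of traces
			}
			// ... iterate through buf looking for traceID ...
			return buf
		}
	}
	return nil
}

func decompress(b []byte) []byte { return b } // placeholder

func main() {
	b := &block{bloom: map[string]bool{"abc": true}}
	fmt.Println(b.find("abc"))
}
```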
A
Always good. I don't know; there'll be some more leeway there for our tunables. I think there'll be some more options on the table once we do that.
A
Nice, yeah. Why don't we wrap this up? Ananya, do you have anything you want to end with?
D
Nothing much. I think one interesting thing is, oh yeah, we'll be cutting the 0.5.0 release soon, this week or next. It comes with a bunch of performance improvements, scalability improvements, so yeah, do check it out. We also added... which backend did we add? Azure Blob Storage? Oh yeah.
E
No doubt. Speaking of releases, I just came from a Grafana 7.4 release planning meeting, and one of the big features that we were talking about was exemplar support. So, I assume Tempo has exemplars, or is that something to come in the future?
E
Yeah, like apparently sending logs related to traces to Loki. Of course, I might have misunderstood it.
A
Generally, we do our lookup through logs, so maybe you're thinking about something like that. We'll do, like, a log line that includes a bunch of information along with the trace ID, and we use that for our search, through Loki. So perhaps you're hearing something like that.
A
Let's see, other finishing-up stuff: we think Tempo's coming along well. I'm excited to push it higher, more volume. Like I said earlier, I really think we have struck a chord with the larger groups.
A
The people who were struggling to push the size of their tracing larger, due to cost, are really where we are trying to land with Tempo, and we think this first pass at Tempo really fills that need nicely: people who already kind of have the structure to, maybe, enforce their logging and have their logging all nicely formatted and, you know, consistent with their tracing. Tempo provides this really powerful backend.
A
When you kind of have this set up. However, we are seeing a need for search, and we are hearing people say this, so search is on the list.
A
I'd expect, in the next few months, to start talking more seriously about it and to start building timelines and roadmaps around that, doing design docs and figuring out, you know, what it's gonna take to get search into Tempo and how we're gonna do it.
E
Another question regarding that: so Grafana has been trying to make some of our design docs public, and I know Prometheus does. Does Tempo do that too?
G
No, just food...
A
...for thought. Prometheus, of course, does, because, yeah, Prometheus is not, right, like, no single company kind of owns it.
A
Right, Tempo is primarily driven by Grafana. We recognize that; certainly we're trying to build a community, but perhaps things are a little bit different. Cortex as well: Cortex is a multi-company kind of effort, for sure, whereas Grafana and Loki are more driven by... I'm sorry, Loki and Tempo are more driven by Grafana, as well as, of course, Grafana itself. I don't know, that's a good question. Maybe we should make more of an effort. I did write a design doc to do this refactor a couple weeks ago. Maybe we should make more of an effort.
D
I remembered one other thing, one last thing, while we were discussing Grafana: so we're gonna drop the tempo-query container altogether in a coming release, like not 0.5.0, but the one after that, we will. So...
D
I have initiated talks with the Grafana team to sort of support the OpenTelemetry format directly in Grafana. So we've started talks there, but yeah, we'll keep you posted on how it goes, and hopefully one day we'll get rid of that annoying...
A
...dependency. I know, we were talking about... I kind of brought that up with, what's his name, I can't remember his name, anyways, one of the Grafana engineers at one point.
A
I think, for the short term, if it's faster, we should just serve Jaeger, which is what Grafana expects, but certainly, if they will support OTel, let's just do that, you know.
A
To get to drop tempo-query, I'm 100% fine with serving the Jaeger format directly out of Tempo for Grafana, but certainly, if they're willing to go the extra mile, yeah, okay, yeah. So, Tempo is doing great; excited to keep working on this, and we will have another community call in a month. Certainly, anyone is welcome to join.